Sample records for architectural model transformations

  1. A Concept Transformation Learning Model for Architectural Design Learning Process

    ERIC Educational Resources Information Center

    Wu, Yun-Wu; Weng, Kuo-Hua; Young, Li-Ming

    2016-01-01

    Generally, in the foundation course of architectural design, much emphasis is placed on teaching of the basic design skills without focusing on teaching students to apply the basic design concepts in their architectural designs or promoting students' own creativity. Therefore, this study aims to propose a concept transformation learning model to…

  2. Interoperability format translation and transformation between IFC architectural design file and simulation file formats

    DOEpatents

    Chao, Tian-Jy; Kim, Younghun

    2015-02-03

    Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.
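
    To make the record's translation pipeline concrete, the sketch below shows the general shape of an MVD-driven extract-and-transform step in Python. Every name in it (the entity types, the template, the unit conversions) is an illustrative assumption, not taken from the patent.

      # Minimal sketch of the MVD-driven translation idea described above.
      # All names (IfcWall, MVD_TEMPLATE, etc.) are illustrative, not from the patent.

      def extract(building_model):
          """Pull the entities a target simulator needs from a parsed IFC model."""
          return [e for e in building_model if e["type"] in ("IfcWall", "IfcWindow")]

      # An input Model View Definition template: maps source attributes to
      # target-simulation fields via unit/geometry conversion functions.
      MVD_TEMPLATE = {
          "IfcWall":   {"area_m2": lambda e: e["length_mm"] * e["height_mm"] * 1e-6},
          "IfcWindow": {"area_m2": lambda e: e["width_mm"] * e["height_mm"] * 1e-6},
      }

      def transform(entities, template):
          """Apply the mapping to produce records for the simulation file."""
          out = []
          for e in entities:
              rules = template[e["type"]]
              out.append({field: fn(e) for field, fn in rules.items()})
          return out

      model = [{"type": "IfcWall", "length_mm": 4000.0, "height_mm": 2700.0}]
      print(transform(extract(model), MVD_TEMPLATE))  # [{'area_m2': 10.8}]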

  3. Formalism Challenges of the Cougaar Model Driven Architecture

    NASA Technical Reports Server (NTRS)

    Bohner, Shawn A.; George, Boby; Gracanin, Denis; Hinchey, Michael G.

    2004-01-01

    The Cognitive Agent Architecture (Cougaar) is one of the most sophisticated distributed agent architectures developed today. As part of its research and evolution, Cougaar is being studied for application to large, logistics-based applications for the Department of Defense (DoD). Anticipating future complex applications of Cougaar, we are investigating the Model Driven Architecture (MDA) approach to understand how effective it would be for increasing productivity in Cougaar-based development efforts. Recognizing the sophistication of the Cougaar development environment and the limitations of transformation technologies for agents, we have systematically developed an approach that combines component assembly in the large and transformation in the small. This paper describes some of the key elements that went into the Cougaar Model Driven Architecture approach and the characteristics that drove the approach.

  4. Interoperability format translation and transformation between IFC architectural design file and simulation file formats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, Tian-Jy; Kim, Younghun

    Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.

  5. Rethinking the architectural design concept in the digital culture (in architecture's practice perspective)

    NASA Astrophysics Data System (ADS)

    Prawata, Albertus Galih

    2017-11-01

    The architectural design stages in architectural practices or in the architectural design studio consist of many aspects. One of them is the early phase of the design process, where architects or designers interpret the project brief into a design concept. This paper reports on the use of digital tools in the early design process at an architectural practice in Jakarta. It focuses principally on the use of BIM and digital modeling to generate information and transform it into conceptual forms, which is not yet common in Indonesian architectural practice. Traditionally, the project brief is transformed into conceptual forms by using sketches, drawings, and physical models. The new method using digital tools shows that it is possible to do the same thing during the initial stage of the design process to create early architectural design forms. Architects' traditional tools and methods are beginning to be replaced effectively by digital tools, which could open greater opportunities for innovation.

  6. Paraxial diffractive elements for space-variant linear transforms

    NASA Astrophysics Data System (ADS)

    Teiwes, Stephan; Schwarzer, Heiko; Gu, Ben-Yuan

    1998-06-01

    Optical linear transform architectures bear good potential for future developments of very powerful hybrid vision systems and neural network classifiers. The optical modules of such systems could be used as pre-processors to solve complex linear operations at very high speed in order to simplify electronic data post-processing. However, the applicability of linear optical architectures is strongly connected with the fundamental question of how to implement a specific linear transform by optical means within physical limitations. The large majority of publications on this topic focuses on the optical implementation of space-invariant transforms by the well-known 4f-setup. Only few papers deal with approaches to implement selected space-variant transforms. In this paper, we propose a simple algebraic method to design diffractive elements for an optical architecture in order to realize arbitrary space-variant transforms. The design procedure is based on a digital model of scalar, paraxial wave theory and leads to optimal element transmission functions within the model. Its computational and physical limitations are discussed in terms of complexity measures. Finally, the design procedure is demonstrated by some examples. Firstly, diffractive elements for the realization of different rotation operations are computed and, secondly, a Hough transform element is presented. The correct optical functions of the elements are verified in computer simulation experiments.
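
    The distinction the abstract draws can be stated compactly in linear algebra: the 4f setup realizes convolutions (circulant matrices), while the proposed elements aim at arbitrary matrices acting on the flattened field. A minimal numerical illustration, with made-up sizes and kernels:

      import numpy as np

      n = 8
      x = np.random.rand(n)

      # Space-invariant: a circular convolution with kernel h (a circulant
      # matrix), which is what a 4f correlator realizes optically.
      h = np.array([0.5, 0.25, 0, 0, 0, 0, 0, 0.25])
      y_invariant = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

      # Space-variant: each output sample gets its own weighting row; here a
      # coordinate remapping that no single convolution kernel can reproduce.
      T = np.zeros((n, n))
      T[np.arange(n), (3 * np.arange(n)) % n] = 1.0
      y_variant = T @ x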

  7. Adaptive Neuron Model: An architecture for the rapid learning of nonlinear topological transformations

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully-recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance evaluation is benchmarked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware and the parameters capturing the functional transformation downloaded onto the system. This neuroprocessor, capable of 10^9 ops/sec, was interfaced directly to a three-degree-of-freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microseconds.

  8. A Model-Driven Architecture Approach for Modeling, Specifying and Deploying Policies in Autonomous and Autonomic Systems

    NASA Technical Reports Server (NTRS)

    Pena, Joaquin; Hinchey, Michael G.; Sterritt, Roy; Ruiz-Cortes, Antonio; Resinas, Manuel

    2006-01-01

    Autonomic Computing (AC), self-management based on high level guidance from humans, is increasingly gaining momentum as the way forward in designing reliable systems that hide complexity and conquer IT management costs. Effectively, AC may be viewed as Policy-Based Self-Management. The Model Driven Architecture (MDA) approach focuses on building models that can be transformed into code in an automatic manner. In this paper, we look at ways to implement Policy-Based Self-Management by means of models that can be converted to code using transformations that follow the MDA philosophy. We propose a set of UML-based models to specify autonomic and autonomous features along with the necessary procedures, based on modification and composition of models, to deploy a policy as an executing system.

  9. Transitioning ISR architecture into the cloud

    NASA Astrophysics Data System (ADS)

    Lash, Thomas D.

    2012-06-01

    Emerging cloud computing platforms offer an ideal opportunity for Intelligence, Surveillance, and Reconnaissance (ISR) intelligence analysis. Cloud computing platforms help overcome challenges and limitations of traditional ISR architectures. Modern ISR architectures can benefit from examining commercial cloud applications, especially as they relate to user experience, usage profiling, and transformational business models. This paper outlines legacy ISR architectures and their limitations, presents an overview of cloud technologies and their applications to the ISR intelligence mission, and presents an idealized ISR architecture implemented with cloud computing.

  10. Transforming medical imaging applications into collaborative PACS-based telemedical systems

    NASA Astrophysics Data System (ADS)

    Maani, Rouzbeh; Camorlinga, Sergio; Arnason, Neil

    2011-03-01

    Telemedical systems are not practical for use in a clinical workflow unless they are able to communicate with the Picture Archiving and Communications System (PACS). On the other hand, there are many medical imaging applications that are not developed as telemedical systems. Some medical imaging applications do not support collaboration and some do not communicate with the PACS and therefore limit their usability in clinical workflows. This paper presents a general architecture based on a three-tier architecture model. The architecture and the components developed within it, transform medical imaging applications into collaborative PACS-based telemedical systems. As a result, current medical imaging applications that are not telemedical, not supporting collaboration, and not communicating with PACS, can be enhanced to support collaboration among a group of physicians, be accessed remotely, and be clinically useful. The main advantage of the proposed architecture is that it does not impose any modification to the current medical imaging applications and does not make any assumptions about the underlying architecture or operating system.

  11. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony

    1990-01-01

    The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
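
    For readers unfamiliar with marked graphs, the sketch below simulates the ATAMM-style firing rule (a node fires when every input edge holds a token) on a toy three-node loop. The graph, latencies, and marking are invented for illustration; the steady-state period equals the loop latency divided by the tokens in the loop.

      import heapq

      edges = {("A", "B"): 0, ("B", "C"): 0, ("C", "A"): 1}   # initial token marking
      latency = {"A": 2.0, "B": 3.0, "C": 1.0}                # node compute times
      events, fires = [], []

      def try_fire(node, now):
          ins = [e for e in edges if e[1] == node]
          if ins and all(edges[e] > 0 for e in ins):          # all inputs have tokens
              for e in ins:
                  edges[e] -= 1                               # consume input tokens
              heapq.heappush(events, (now + latency[node], node))

      for n in latency:                                       # bootstrap at t = 0
          try_fire(n, 0.0)
      while events and len(fires) < 12:
          t, node = heapq.heappop(events)                     # node finishes at t
          fires.append((t, node))
          for e in edges:
              if e[0] == node:
                  edges[e] += 1                               # deposit output tokens
          for n in latency:
              try_fire(n, t)
      print(fires)  # completions repeat with period 6.0 = (2 + 3 + 1) / 1 token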

  12. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.

    1990-01-01

    Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  13. Data Warehouse Design from HL7 Clinical Document Architecture Schema.

    PubMed

    Pecoraro, Fabrizio; Luzi, Daniela; Ricci, Fabrizio L

    2015-01-01

    This paper proposes a semi-automatic approach to extract clinical information structured in an HL7 Clinical Document Architecture (CDA) and transform it into a data warehouse dimensional model schema. It is based on a conceptual framework published in a previous work that maps the dimensional model primitives to CDA elements. Its feasibility is demonstrated by providing a case study based on the analysis of vital signs gathered during laboratory tests.
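
    As a toy illustration of the mapping idea (not the paper's actual framework), the snippet below lifts one coded observation from a simplified CDA fragment into a fact row with dimension keys; the XML shape and table layout are assumptions.

      import xml.etree.ElementTree as ET

      cda = """<observation><code code="8480-6" displayName="Systolic BP"/>
      <value value="120" unit="mm[Hg]"/><effectiveTime value="20150101"/>
      </observation>"""

      obs = ET.fromstring(cda)
      fact_vital_signs = {
          "code_dim": obs.find("code").get("code"),            # dimension: LOINC code
          "time_dim": obs.find("effectiveTime").get("value"),  # dimension: date
          "value":    float(obs.find("value").get("value")),   # measure
          "unit":     obs.find("value").get("unit"),
      }
      print(fact_vital_signs)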

  14. Architectural and Urban Identity Transformation of Eskisehir - An Anatolian City

    NASA Astrophysics Data System (ADS)

    Kandemir, Ozlem

    2017-10-01

    The city is the arena where we identify ourselves and interact with others and our environment; cities are epicentres of interaction, transition, and fusion of different communities and their cultures. Thus, it is important to discuss the elements of change and their consequences for architectural and urban spaces in the context of identity. Urban identity can be defined as the impression invoked in inhabitants by environmental, historical, sociocultural and spatial values. Both architectural and urban identity have a dynamic structure, susceptible to every change in social and administrative structures. Global and national economic fluctuations in recent decades, together with industrialisation throughout the 20th century, caused dramatic and diverse changes in the conditions of life, forms of consumption, and the perception of time and space, consequently transforming architecture and the city. Changes in the different aspects of city life and structure over time transform architectural and urban identity. This dynamism, caused by changes and new formations in cultural life and environmental conditions, also transforms customs and the ways we occupy, use, and live in a place; the resulting new social norms reshape both how we occupy a space and what we demand from it. The new requirements arising from these conditions of urban life transform the existing architecture and spaces. In this presentation, the transformation of the architectural and urban identity of Eskisehir is discussed through dynamics such as architectural and urban transformation, industry and politics.

  15. Role of System Architecture in Architecture in Developing New Drafting Tools

    NASA Astrophysics Data System (ADS)

    Sorguç, Arzu Gönenç

    In this study, the impact of information technologies on the architectural design process is discussed. First, the differences and nuances between the concepts of software engineering and system architecture are clarified. Then, the design process in engineering and the design process in architecture are compared, considering 3-D models as the center of the design process over which the other disciplines engage with the design. It is pointed out that in many high-end engineering applications, 3-D solid models, and consequently the digital mock-up concept, have become common practice. Architecture, however, as one of the important customers of the CAD systems employing these tools, has not started to use these 3-D models. It is shown that the reason for this time lag between architecture and engineering lies in the tradition of design attitude. Therefore, a new design scheme, a meta-model, is proposed to develop an integrated design model centered on the 3-D model. A system architecture is also proposed to achieve the transformation of the architectural design process by replacing 2-D thinking with 3-D thinking. In the proposed system architecture, the CAD systems are included and adapted for 3-D architectural design in order to provide interfaces for the integration of all possible disciplines into the design process. It is also shown that such a change will allow the intelligent or smart building concept to be elaborated in the future.

  16. C2 Product-Centric Approach to Transforming Current C4ISR Information Architectures

    DTIC Science & Technology

    2004-06-01

    each type of environment. For “Cultural Feature” Entity Kind only the “Land” Domain is defined, and for “Environmental” Entity Kind only the ... take advantage of both worlds. In particular, the unifying concept of a Model-Driven Architecture (MDA) under development by the Object Management ... Exchange Requirements (IER) to an XML environment. FCS [4] developers have embraced both UML and XML for their architectures, and MIP [5] too is migrating

  17. Transforming Aggregate Object-Oriented Formal Specifications to Code

    DTIC Science & Technology

    1999-03-01

    integration issues associated with a formal-based software transformation system, such as the source specification, the problem space architecture, design architecture ... design transforms, and target software transforms. Software is critical in today's Air Force, yet its specification, design, and development

  18. Model Driven Engineering

    NASA Astrophysics Data System (ADS)

    Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan

    A relevant initiative from the software engineering community called Model Driven Engineering (MDE) is being developed in parallel with the Semantic Web (Mellor et al. 2003a). The MDE approach to software development suggests that one should first develop a model of the system under study, which is then transformed into the real thing (i.e., an executable software entity). The most important research initiative in this area is the Model Driven Architecture (MDA), which is being developed under the umbrella of the Object Management Group (OMG). This chapter describes the basic concepts of this software engineering effort.

  19. Integration of a CAS/DGS as a CAD System in the Mathematics Curriculum for Architecture Students

    ERIC Educational Resources Information Center

    Falcon, R. M.

    2011-01-01

    Students of Architecture and Building Engineering Degrees work with Computer Aided Design systems daily in order to design and model architectonic constructions. Since this kind of software is based on the creation and transformation of geometrical objects, it seems to be a useful tool in Maths classes in order to capture the attention of the…

  20. Improving TOGAF ADM 9.1 Migration Planning Phase by ITIL V3 Service Transition

    NASA Astrophysics Data System (ADS)

    Hanum Harani, Nisa; Akhmad Arman, Arry; Maulana Awangga, Rolly

    2018-04-01

    Planning business transformation that involves technology requires a transition and migration planning process, and planning the system migration activity is the most important part. The migration process includes complex elements such as business re-engineering, transition scheme mapping, data transformation, application development, individual involvement with computers, and trial interaction. TOGAF ADM is a framework and method for enterprise architecture implementation, and it provides a manual for architecture and migration planning. The planning includes an implementation solution, in this case an IT solution, but when the solution becomes IT operational planning, TOGAF cannot handle it. This paper presents a new framework that models the detailed transition process by integrating TOGAF and ITIL. We evaluated our model in a field study at a private university.

  1. Leveraging Terminology Services for Extract-Transform-Load Processes: A User-Centered Approach

    PubMed Central

    Peterson, Kevin J.; Jiang, Guoqian; Brue, Scott M.; Liu, Hongfang

    2016-01-01

    Terminology services serve an important role supporting clinical and research applications, and underpin a diverse set of processes and use cases. Through standardization efforts, terminology service-to-system interactions can leverage well-defined interfaces and predictable integration patterns. Often, however, users interact more directly with terminologies, and no such blueprints are available for describing terminology service-to-user interactions. In this work, we explore the main architecture principles necessary to build a user-centered terminology system, using an Extract-Transform-Load process as our primary usage scenario. To analyze our architecture, we present a prototype implementation based on the Common Terminology Services 2 (CTS2) standard using the Patient-Centered Network of Learning Health Systems (LHSNet) project as a concrete use case. We perform a preliminary evaluation of our prototype architecture using three architectural quality attributes: interoperability, adaptability and usability. We find that a design-time focus on user needs, cognitive models, and existing patterns is essential to maximize system utility. PMID:28269898

  2. Architectural frameworks: defining the structures for implementing learning health systems.

    PubMed

    Lessard, Lysanne; Michalowski, Wojtek; Fung-Kee-Fung, Michael; Jones, Lori; Grudniewicz, Agnes

    2017-06-23

    The vision of transforming health systems into learning health systems (LHSs) that rapidly and continuously transform knowledge into improved health outcomes at lower cost is generating increased interest in government agencies, health organizations, and health research communities. While existing initiatives demonstrate that different approaches can succeed in making the LHS vision a reality, they are too varied in their goals, focus, and scale to be reproduced without undue effort. Indeed, the structures necessary to effectively design and implement LHSs on a larger scale are lacking. In this paper, we propose the use of architectural frameworks to develop LHSs that adhere to a recognized vision while being adapted to their specific organizational context. Architectural frameworks are high-level descriptions of an organization as a system; they capture the structure of its main components at varied levels, the interrelationships among these components, and the principles that guide their evolution. Because these frameworks support the analysis of LHSs and allow their outcomes to be simulated, they act as pre-implementation decision-support tools that identify potential barriers and enablers of system development. They thus increase the chances of successful LHS deployment. We present an architectural framework for LHSs that incorporates five dimensions (goals, scientific, social, technical, and ethical) commonly found in the LHS literature. The proposed architectural framework comprises six decision layers that model these dimensions. The performance layer models goals, the scientific layer models the scientific dimension, the organizational layer models the social dimension, the data layer and information technology layer model the technical dimension, and the ethics and security layer models the ethical dimension. We describe the types of decisions that must be made within each layer and identify methods to support decision-making. In this paper, we outline a high-level architectural framework grounded in conceptual and empirical LHS literature. Applying this architectural framework can guide the development and implementation of new LHSs and the evolution of existing ones, as it allows for clear and critical understanding of the types of decisions that underlie LHS operations. Further research is required to assess and refine its generalizability and methods.

  3. Bit-parallel arithmetic in a massively-parallel associative processor

    NASA Technical Reports Server (NTRS)

    Scherson, Isaac D.; Kramer, David A.; Alleyne, Brian D.

    1992-01-01

    A simple but powerful new architecture based on a classical associative processor model is presented. Algorithms for performing the four basic arithmetic operations both for integer and floating point operands are described. For m-bit operands, the proposed architecture makes it possible to execute complex operations in O(m) cycles as opposed to O(m^2) for bit-serial machines. A word-parallel, bit-parallel, massively-parallel computing system can be constructed using this architecture with VLSI technology. The operation of this system is demonstrated for the fast Fourier transform and matrix multiplication.
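
    The O(m) claim comes from carry propagation: with word-parallel bitwise operations, an m-bit add needs at most m carry-resolution steps across every word at once. A numpy sketch of the idea (the array stands in for the associative memory; sizes are illustrative):

      import numpy as np

      a = np.array([3, 7, 250], dtype=np.uint8)   # every word processed at once
      b = np.array([5, 9,  10], dtype=np.uint8)
      s, carry = a ^ b, a & b                     # half-add across all words
      for _ in range(8):                          # m = 8 carry-resolution steps
          s, carry = s ^ (carry << 1), s & (carry << 1)
      print(s)  # [ 8 16  4 ]  (250 + 10 wraps mod 256)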

  4. Developing a modular architecture for creation of rule-based clinical diagnostic criteria.

    PubMed

    Hong, Na; Pathak, Jyotishman; Chute, Christopher G; Jiang, Guoqian

    2016-01-01

    With recent advances in computerized patient record systems, there is an urgent need for producing computable and standards-based clinical diagnostic criteria. Notably, constructing rule-based clinical diagnosis criteria has become one of the goals in the International Classification of Diseases (ICD)-11 revision. However, few studies have been done in building a unified architecture to support the need for diagnostic criteria computerization. In this study, we present a modular architecture for enabling the creation of rule-based clinical diagnostic criteria leveraging Semantic Web technologies. The architecture consists of two modules: an authoring module that utilizes a standards-based information model and a translation module that leverages Semantic Web Rule Language (SWRL). In a prototype implementation, we created a diagnostic criteria upper ontology (DCUO) that integrates the ICD-11 content model with the Quality Data Model (QDM). Using the DCUO, we developed a transformation tool that converts QDM-based diagnostic criteria into SWRL representation. We evaluated the domain coverage of the upper ontology model using randomly selected diagnostic criteria from broad domains (n = 20). We also tested the transformation algorithms using 6 QDM templates for ontology population and 15 QDM-based criteria data for rule generation. As a result, the first draft of DCUO contains 14 root classes, 21 subclasses, 6 object properties and 1 data property. Investigation Findings, and Signs and Symptoms are the two most commonly used element types. All 6 HQMF templates are successfully parsed and populated into their corresponding domain specific ontologies, and 14 rules (93.3%) passed the rule validation. Our efforts in developing and prototyping a modular architecture provide useful insight into how to build a scalable solution to support diagnostic criteria representation and computerization.
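
    A minimal sketch of what template-driven rule generation can look like, with an invented criterion and invented SWRL atoms (the paper's actual QDM templates and DCUO properties differ):

      # Hypothetical template-to-rule expansion in the spirit of the paper.
      criterion = {"condition": "Hyperglycemia", "test": "GlucoseLevel",
                   "operator": "greaterThan", "threshold": 200}

      SWRL_TEMPLATE = ("Patient(?p) ^ hasLabResult(?p, ?r) ^ {test}(?r) ^ "
                       "hasValue(?r, ?v) ^ swrlb:{operator}(?v, {threshold}) "
                       "-> hasDiagnosis(?p, {condition})")

      print(SWRL_TEMPLATE.format(**criterion))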

  5. Boussinesq Modeling for Inlets, Harbors, and Structures (Bouss-2D)

    DTIC Science & Technology

    2015-10-30

    a wide variety of coastal and ocean engineering and naval architecture problems, including: transformation of waves over small to medium spatial ... and outputs, and GIS data used in modeling. Recent applications include: Pillar Point Harbor, Oyster Point Marina, CA; Mouth of Columbia River

  6. Patient care transformation: the plan and the reality.

    PubMed

    Drexler, Diane; Malloch, Kathy

    2006-01-01

    An explosion of new hospital building has created the opportunity for nurse leaders to transform the patient care experience with evidence-based architecture, technology innovations, and new patient care delivery models. The authors share the first-year results of the creation of a hospital of the future in which staff actively participated and addressed the challenges of transforming the patient care experience. Positive results include patient satisfaction at the 99th percentile, successful integration of 63 software applications, and energized nursing staff.

  7. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    NASA Astrophysics Data System (ADS)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
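
    For reference, the software baseline such a datapath replaces looks roughly like this: RANSAC repeatedly fits a projective model (homography) to minimal four-point samples and keeps the hypothesis with the most inliers. A compact numpy sketch, illustrative rather than the paper's fixed-point design:

      import numpy as np

      def homography(src, dst):
          """Direct Linear Transform estimate from >= 4 point correspondences."""
          rows = []
          for (x, y), (u, v) in zip(src, dst):
              rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
              rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
          return np.linalg.svd(np.array(rows))[2][-1].reshape(3, 3)

      def ransac(src, dst, iters=300, tol=2.0, rng=np.random.default_rng(0)):
          best, best_in = None, 0
          for _ in range(iters):
              idx = rng.choice(len(src), 4, replace=False)  # minimal sample
              H = homography(src[idx], dst[idx])
              p = np.c_[src, np.ones(len(src))] @ H.T
              p = p[:, :2] / p[:, 2:3]                      # project, normalize
              n_in = int(np.sum(np.linalg.norm(p - dst, axis=1) < tol))
              if n_in > best_in:
                  best, best_in = H, n_in                   # keep best consensus
          return best, best_in

      pts = np.random.default_rng(1).uniform(0, 100, (40, 2))
      H_true = np.array([[1.0, 0.1, 5.0], [0.05, 1.0, -3.0], [1e-3, 0.0, 1.0]])
      q = np.c_[pts, np.ones(40)] @ H_true.T
      _, n_in = ransac(pts, q[:, :2] / q[:, 2:3])
      print(n_in)  # 40: every correspondence fits the recovered projective model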

  8. Unified transform architecture for AVC, AVS, VC-1 and HEVC high-performance codecs

    NASA Astrophysics Data System (ADS)

    Dias, Tiago; Roma, Nuno; Sousa, Leonel

    2014-12-01

    A unified architecture for fast and efficient computation of the set of two-dimensional (2-D) transforms adopted by the most recent state-of-the-art digital video standards is presented in this paper. In contrast to other designs with similar functionality, the presented architecture is supported on a scalable, modular and completely configurable processing structure. This flexible structure not only allows the architecture to be easily reconfigured to support different transform kernels, but also permits its resizing to efficiently support transforms of different orders (e.g. order-4, order-8, order-16 and order-32). Consequently, not only is it highly suitable for realizing high-performance multi-standard transform cores, but it also offers highly efficient implementations of specialized processing structures addressing only the reduced subset of transforms used by a specific video standard. The experimental results that were obtained by prototyping several configurations of this processing structure in a Xilinx Virtex-7 FPGA show the superior performance and hardware efficiency levels provided by the proposed unified architecture for the implementation of transform cores for the Advanced Video Coding (AVC), Audio Video coding Standard (AVS), VC-1 and High Efficiency Video Coding (HEVC) standards. In addition, such results also demonstrate the ability of this processing structure to realize multi-standard transform cores supporting all the standards mentioned above that are capable of processing the 8k Ultra High Definition Television (UHDTV) video format (7,680 × 4,320 at 30 fps) in real time.
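
    The configurability argument reduces to a simple identity: any separable 2-D transform is Y = C X C^T, so a multi-standard core mostly means multiplexing coefficient matrices C of various orders. A sketch follows; the two order-4 kernels are the widely published AVC core transform and HEVC 4x4 DST matrices, everything else is illustrative.

      import numpy as np

      KERNELS = {
          "avc4":  np.array([[ 1,  1,  1,  1],
                             [ 2,  1, -1, -2],
                             [ 1, -1, -1,  1],
                             [ 1, -2,  2, -1]]),
          "hevc4": np.array([[29,  55,  74,  84],
                             [74,  74,   0, -74],
                             [84, -29, -74,  55],
                             [55, -84,  74, -29]]),
      }

      def transform2d(block, kernel):
          """Separable 2-D transform: Y = C @ X @ C.T with a swappable kernel C."""
          C = KERNELS[kernel]
          return C @ block @ C.T

      block = np.arange(16).reshape(4, 4)
      print(transform2d(block, "avc4"))
      print(transform2d(block, "hevc4"))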

  9. Exploring a model-driven architecture (MDA) approach to health care information systems development.

    PubMed

    Raghupathi, Wullianallur; Umar, Amjad

    2008-05-01

    To explore the potential of the model-driven architecture (MDA) in health care information systems development, an MDA is conceptualized and developed for a health clinic system to track patient information, and a prototype of the MDA is implemented using an advanced MDA tool. The UML provides the underlying modeling support in the form of the class diagram. The PIM-to-PSM transformation rules are applied to generate the prototype application from the model. The result of the research is a complete MDA methodology for developing health care information systems. Additional insights gained include the development of transformation rules and documentation of the challenges in the application of MDA to health care. Design guidelines for future MDA applications are described. The model has the potential for generalizability. The overall approach supports limited interoperability and portability. The research demonstrates the applicability of the MDA approach to health care information systems development. When properly implemented, it has the potential to overcome the challenges of platform (vendor) dependency, lack of open standards, interoperability, portability, scalability, and the high cost of implementation.
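
    As a toy illustration of the MDA idea (not the authors' tool chain), the sketch below expands a platform-independent class description into platform-specific code; all names are invented.

      # Hypothetical PIM-to-code step: a platform-independent class description
      # is expanded into platform-specific Python source.
      pim = {"class": "Patient", "attrs": [("name", "str"), ("dob", "str")]}

      def to_code(model):
          lines = [f"class {model['class']}:"]
          args = ", ".join(f"{a}: {t}" for a, t in model["attrs"])
          lines.append(f"    def __init__(self, {args}):")
          lines += [f"        self.{a} = {a}" for a, _ in model["attrs"]]
          return "\n".join(lines)

      print(to_code(pim))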

  10. Universal discrete Fourier optics RF photonic integrated circuit architecture.

    PubMed

    Hall, Trevor J; Hasan, Mehedi

    2016-04-04

    This paper describes a coherent electro-optic circuit architecture that generates a frequency comb consisting of N spatially separated orders using a generalised Mach-Zehnder interferometer (MZI) with its N × 1 combiner replaced by an optical N × N Discrete Fourier Transform (DFT). Advantage may be taken of the tight optical path-length control, component and circuit symmetries and emerging trimming algorithms offered by photonic integration in any platform that offers linear electro-optic phase modulation such as LiNbO3, silicon, III-V or hybrid technology. The circuit architecture subsumes all MZI-based RF photonic circuit architectures in the prior art given an appropriate choice of output port(s) and dimension N, although the principal application envisaged is phase-correlated subcarrier generation for all-optical orthogonal frequency division multiplexing. A transfer matrix approach is used to model the operation of the architecture. The predictions of the model are validated by simulations performed using an industry standard software tool. Implementation is found to be practical.
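
    A transfer-matrix toy model of the circuit (consistent in spirit with, but much simpler than, the paper's): N phase-modulated arms feed an N×N DFT combiner, and each harmonic order of the drive lands on a different output port. All waveform parameters below are invented.

      import numpy as np

      N = 4
      k = np.arange(N)
      W = np.exp(-2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)  # N x N optical DFT

      t = np.linspace(0, 1, 1024, endpoint=False)
      f_rf, beta = 8.0, 1.5                      # RF drive frequency and depth
      arm_phase = 2 * np.pi * k / N              # progressive RF phase per arm
      arms = np.exp(1j * beta * np.sin(2 * np.pi * f_rf * t[None, :]
                                       + arm_phase[:, None])) / np.sqrt(N)
      ports = W @ arms                           # fields at the N output ports
      spectra = np.abs(np.fft.fft(ports, axis=1))
      # Inspecting `spectra` shows each port dominated by the harmonic orders
      # congruent to its port index modulo N.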

  11. Designing an architectural style for Pervasive Healthcare systems.

    PubMed

    Rafe, Vahid; Hajvali, Masoumeh

    2013-04-01

    Nowadays, Pervasive Healthcare (PH) systems are considered an important research area. These systems have a dynamic structure and configuration; therefore, an appropriate method for designing such systems is necessary. The Publish/Subscribe Architecture (pub/sub) is one of the convenient architectures to support such systems. PH systems are safety critical, hence errors can bring disastrous results. To prevent such problems, a powerful analytical tool is required, so using a proper formal language like graph transformation systems for developing these systems seems necessary. But even if software engineers use such high-level methodologies, errors may occur in the system under design. Hence, it should be investigated automatically and formally whether this model of the system satisfies all requirements or not. In this paper, a dynamic architectural style for developing PH systems is presented. Then, the behavior of these systems is modeled and evaluated using the GROOVE toolset. The results of the analysis show its high reliability.

  12. Transformations in Air Transportation Systems For the 21st Century

    NASA Technical Reports Server (NTRS)

    Holmes, Bruce J.

    2004-01-01

    Globally, our transportation systems face increasingly discomforting realities: certain of the legacy air and ground infrastructures of the 20th century will not satisfy our 21st century mobility needs. The consequence of inaction is diminished quality of life and economic opportunity for those nations unable to transform from 20th to 21st century systems. Clearly, new thinking is required regarding business models that cater to consumers' value of time, airspace architectures that enable those new business models, and technology strategies for innovating at the system-of-networks level. This lecture proposes a structured way of thinking about transformation from the legacy systems of the 20th century toward new systems for the 21st century. The comparison and contrast between the legacy systems of the 20th century and the transformed systems of the 21st century provides insights into the structure of transformation of air transportation. Where the legacy systems tend to be analog (versus digital), centralized (versus distributed), and scheduled (versus on-demand), for example, transformed 21st century systems become capable of scalability through technological, business, and policy innovations. Where air mobility in our legacy systems of the 20th century brought economic opportunity and quality of life to large service markets, transformed air mobility of the 21st century becomes more equitably available to ever-thinner and more widely distributed populations. Several technological developments in the traditional aircraft disciplines as well as in communication, navigation, surveillance and information systems create new foundations for 21st century thinking about air transportation. One of the technological developments of importance arises from complexity science and modern network theory. Scale-free (i.e., scalable) networks represent a promising concept space for modeling airspace system architectures, and for assessing network performance in terms of robustness, resilience, and other metrics. The lecture offers an air transportation system topology and a scale-free network linkage graphic as a framework for transportation system innovation. Successful outcomes of innovation in air transportation could lay the foundations for new paradigms for aircraft and their operating capabilities, air transportation system topologies, and airspace architectures and procedural concepts. These new paradigms could support scalable alternatives for the expansion of future air mobility to more consumers in more parts of the world.
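
    The scale-free network concept invoked here has a standard generative model, preferential attachment; the toy sketch below grows such a network and shows the hub-dominated degree tail that motivates its use for robustness analysis.

      import random

      random.seed(1)
      targets = [0, 1]            # multiset: each node appears once per link end
      degree = {0: 1, 1: 1}
      for new in range(2, 2000):
          old = random.choice(targets)   # attach proportionally to degree
          degree[new] = 1
          degree[old] += 1
          targets += [new, old]
      hubs = sorted(degree.values(), reverse=True)[:5]
      print(hubs)                 # a few hubs dominate, unlike a random graph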

  13. A Quantitative Approach to Scar Analysis

    PubMed Central

    Khorasani, Hooman; Zheng, Zhong; Nguyen, Calvin; Zara, Janette; Zhang, Xinli; Wang, Joyce; Ting, Kang; Soo, Chia

    2011-01-01

    Analysis of collagen architecture is essential to wound healing research. However, to date no consistent methodologies exist for quantitatively assessing dermal collagen architecture in scars. In this study, we developed a standardized approach for quantitative analysis of scar collagen morphology by confocal microscopy using fractal dimension and lacunarity analysis. Full-thickness wounds were created on adult mice, closed by primary intention, and harvested at 14 days after wounding for morphometrics and standard Fourier transform-based scar analysis as well as fractal dimension and lacunarity analysis. In addition, transmission electron microscopy was used to evaluate collagen ultrastructure. We demonstrated that fractal dimension and lacunarity analysis were superior to Fourier transform analysis in discriminating scar versus unwounded tissue in a wild-type mouse model. To fully test the robustness of this scar analysis approach, a fibromodulin-null mouse model that heals with increased scar was also used. Fractal dimension and lacunarity analysis effectively discriminated unwounded fibromodulin-null versus wild-type skin as well as healing fibromodulin-null versus wild-type wounds, whereas Fourier transform analysis failed to do so. Furthermore, fractal dimension and lacunarity data also correlated well with transmission electron microscopy collagen ultrastructure analysis, adding to their validity. These results demonstrate that fractal dimension and lacunarity are more sensitive than Fourier transform analysis for quantification of scar morphology. PMID:21281794
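
    Fractal dimension and lacunarity both come from box counting. A self-contained numpy sketch on a synthetic binary image (not the paper's confocal data) shows the two computations:

      import numpy as np

      img = np.zeros((256, 256), dtype=bool)
      rng = np.random.default_rng(0)
      img[rng.integers(0, 256, 5000), rng.integers(0, 256, 5000)] = True

      sizes, counts, lac = [2, 4, 8, 16, 32], [], []
      for s in sizes:
          boxes = img.reshape(256 // s, s, 256 // s, s).sum(axis=(1, 3))
          counts.append((boxes > 0).sum())            # occupied boxes at scale s
          m = boxes.astype(float)
          lac.append(m.var() / m.mean() ** 2 + 1)     # lacunarity at scale s
      # Fractal dimension: slope of log(count) versus log(1 / box size).
      slope = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]
      print("fractal dimension ~", slope, "lacunarity:", lac)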

  14. Transformation of standardized clinical models based on OWL technologies: from CEM to OpenEHR archetypes

    PubMed Central

    Legaz-García, María del Carmen; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás; Chute, Christopher G; Tao, Cui

    2015-01-01

    Introduction: The semantic interoperability of electronic healthcare records (EHRs) systems is a major challenge in the medical informatics area. International initiatives pursue the use of semantically interoperable clinical models, and ontologies have frequently been used in semantic interoperability efforts. The objective of this paper is to propose a generic, ontology-based, flexible approach for supporting the automatic transformation of clinical models, which is illustrated for the transformation of Clinical Element Models (CEMs) into openEHR archetypes. Methods: Our transformation method exploits the fact that the information models of the most relevant EHR specifications are available in the Web Ontology Language (OWL). The transformation approach is based on defining mappings between those ontological structures. We propose a way in which CEM entities can be transformed into openEHR by using transformation templates and OWL as common representation formalism. The transformation architecture exploits the reasoning and inferencing capabilities of OWL technologies. Results: We have devised a generic, flexible approach for the transformation of clinical models, implemented for the unidirectional transformation from CEM to openEHR, a series of reusable transformation templates, a proof-of-concept implementation, and a set of openEHR archetypes that validate the methodological approach. Conclusions: We have been able to transform CEM into archetypes in an automatic, flexible, reusable transformation approach that could be extended to other clinical model specifications. We exploit the potential of OWL technologies for supporting the transformation process. We believe that our approach could be useful for international efforts in the area of semantic interoperability of EHR systems. PMID:25670753

  15. Separating essentials from incidentals: an execution architecture for real-time control systems

    NASA Technical Reports Server (NTRS)

    Dvorak, Daniel; Reinholtz, Kirk

    2004-01-01

    This paper describes an execution architecture that makes such systems far more analyzable and verifiable by aggressive separation of concerns. The architecture separates two key software concerns: transformations of global state, as defined in pure functions; and sequencing/timing of transformations, as performed by an engine that enforces four prime invariants. The important advantage of this architecture, besides facilitating verification, is that it encourages formal specification of systems in a vocabulary that brings systems engineering closer to software engineering.

  16. Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali

    Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel’s second-generation Xeon Phi architecture code-named Knights Landing (KNL) for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.
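
    In the same spirit as the paper's analytical approach (though far simpler than the SKOPE model), a roofline-style projection bounds runtime by compute or bandwidth; the peak figures below are rough, illustrative KNL-class numbers, not the paper's calibrated parameters.

      # Hedged sketch: predict a kernel's runtime from hardware peaks and the
      # kernel's flop/byte signature (a simple roofline bound).
      PEAK_FLOPS = 3.0e12     # ~3 TF/s double precision (illustrative KNL figure)
      PEAK_BW    = 450e9      # ~450 GB/s MCDRAM bandwidth (illustrative)

      def projected_time(flops, bytes_moved):
          """Runtime is bounded by compute or memory, whichever is slower."""
          return max(flops / PEAK_FLOPS, bytes_moved / PEAK_BW)

      # STREAM-triad-like kernel: 2 flops and 24 bytes per element.
      n = 10**8
      print(projected_time(2 * n, 24 * n))   # bandwidth-bound: ~5.3 ms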

  17. End-to-end network models encompassing terrestrial, wireless, and satellite components

    NASA Astrophysics Data System (ADS)

    Boyarko, Chandler L.; Britton, John S.; Flores, Phil E.; Lambert, Charles B.; Pendzick, John M.; Ryan, Christopher M.; Shankman, Gordon L.; Williams, Ramon P.

    2004-08-01

    Development of network models that reflect true end-to-end architectures such as the Transformational Communications Architecture needs to encompass terrestrial, wireless and satellite components to truly represent all of the complexities in a worldwide communications network. Use of best-in-class tools, including OPNET, Satellite Tool Kit (STK), Popkin System Architect and their well-known XML-friendly definitions, such as OPNET Modeler's Data Type Description (DTD), or socket-based data transfer modules, such as STK/Connect, enables the sharing of data between applications for more rapid development of end-to-end system architectures and a more complete system design. By sharing the results of and integrating best-in-class tools we are able to (1) promote sharing of data, (2) enhance the fidelity of our results and (3) allow network and application performance to be viewed in the context of the entire enterprise and its processes.

  18. A hybrid silicon membrane spatial light modulator for optical information processing

    NASA Technical Reports Server (NTRS)

    Pape, D. R.; Hornbeck, L. J.

    1984-01-01

    A new two-dimensional, fast, analog, electrically addressable, silicon-based membrane spatial light modulator (SLM) was developed for optical information processing applications. Coherent light reflected from the mirror elements is phase modulated, producing an optical Fourier transform of an analog signal input to the device. The deformable mirror device (DMD) architecture and operating parameters related to this application are presented. A model is developed that describes the optical Fourier transform properties of the DMD.

  19. Collaboration pathway(s) using new tools for optimizing operational climate monitoring from space

    NASA Astrophysics Data System (ADS)

    Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.

    2014-10-01

    Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a solution requires transforming scientific missions into an optimized, robust 'operational' constellation that addresses the needs of decision makers, scientific investigators and global users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent (2014) rule-based decision engine modeling runs that targeted optimizing the intended NPOESS architecture become a surrogate for global operational climate monitoring architecture(s). These rule-based systems tools provide valuable insight for global climate architectures, through the comparison and evaluation of alternatives considered and the exhaustive range of trade space explored. A representative optimization of global ECV (essential climate variable) climate monitoring architecture(s) is explored and described in some detail, with thoughts on appropriate rule-based valuations. The optimization tool(s) suggest and support global collaboration pathways and will hopefully elicit responses from the audience and climate science stakeholders.

  20. An improved architecture for video rate image transformations

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.; Juday, Richard D.

    1989-01-01

    Geometric image transformations are of interest to pattern recognition algorithms for their use in simplifying some aspects of the pattern recognition process. Examples include reducing sensitivity to rotation, scale, and perspective of the object being recognized. The NASA Programmable Remapper can perform a wide variety of geometric transforms at full video rate. An architecture is proposed that extends its abilities and alleviates many of the first version's shortcomings. The need for the improvements is discussed in the context of the initial Programmable Remapper and the benefits and limitations it has delivered. The implementation and capabilities of the proposed architecture are discussed.

  1. Optical chirp z-transform processor with a simplified architecture.

    PubMed

    Ngo, Nam Quoc

    2014-12-29

    Using a simplified chirp z-transform (CZT) algorithm based on the discrete-time convolution method, this paper presents the synthesis of a simplified architecture of a reconfigurable optical chirp z-transform (OCZT) processor based on the silica-based planar lightwave circuit (PLC) technology. In the simplified architecture of the reconfigurable OCZT, the required number of optical components is small and there are no waveguide crossings which make fabrication easy. The design of a novel type of optical discrete Fourier transform (ODFT) processor as a special case of the synthesized OCZT is then presented to demonstrate its effectiveness. The designed ODFT can be potentially used as an optical demultiplexer at the receiver of an optical fiber orthogonal frequency division multiplexing (OFDM) transmission system.
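
    The discrete-time convolution method referenced here is the Bluestein factorization: the CZT becomes a pre-chirp multiply, a chirp convolution, and a post-chirp multiply, which is what makes a fixed convolution stage feasible. A numpy sketch of that factorization (parameter handling simplified; not the paper's optical design):

      import numpy as np

      def czt(x, M=None, w=None, a=1.0):
          N = len(x); M = M or N
          w = w if w is not None else np.exp(-2j * np.pi / M)
          k = np.arange(max(M, N))
          chirp = w ** (k ** 2 / 2.0)                     # pre/post chirp factors
          L = 1 << int(np.ceil(np.log2(N + M - 1)))       # FFT convolution length
          xp = np.zeros(L, complex)
          xp[:N] = x * a ** (-np.arange(N)) * chirp[:N]   # pre-chirp multiply
          cp = np.zeros(L, complex)                       # chirp convolution kernel
          cp[:M] = 1 / chirp[:M]
          cp[L - N + 1:] = 1 / chirp[N - 1:0:-1]
          y = np.fft.ifft(np.fft.fft(xp) * np.fft.fft(cp))
          return y[:M] * chirp[:M]                        # post-chirp multiply

      x = np.random.rand(8)
      print(np.allclose(czt(x), np.fft.fft(x)))           # True: CZT == DFT here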

  2. Architecture and data processing alternatives for Tse computer. Volume 1: Tse logic design concepts and the development of image processing machine architectures

    NASA Technical Reports Server (NTRS)

    Rickard, D. A.; Bodenheimer, R. E.

    1976-01-01

    Digital computer components which perform two dimensional array logic operations (Tse logic) on binary data arrays are described. The properties of Golay transforms which make them useful in image processing are reviewed, and several architectures for Golay transform processors are presented with emphasis on the skeletonizing algorithm. Conventional logic control units developed for the Golay transform processors are described. One is a unique microprogrammable control unit that uses a microprocessor to control the Tse computer. The remaining control units are based on programmable logic arrays. Performance criteria are established and utilized to compare the various Golay transform machines developed. A critique of Tse logic is presented, and recommendations for additional research are included.

  3. XMI2USE: A Tool for Transforming XMI to USE Specifications

    NASA Astrophysics Data System (ADS)

    Sun, Wuliang; Song, Eunjee; Grabow, Paul C.; Simmonds, Devon M.

    The UML-based Specification Environment (USE) tool supports syntactic analysis, type checking, consistency checking, and dynamic validation of invariants and pre-/post conditions specified in the Object Constraint Language (OCL). Due to its animation and analysis power, it is useful when checking critical non-functional properties such as security policies. However, the USE tool requires one to specify (i.e., "write") a model using its own textual language and does not allow one to import any model specification files created by other UML modeling tools. Hence, to make the best use of existing UML tools, we often create a model with OCL constraints using a modeling tool such as the IBM Rational Software Architect (RSA) and then use the USE tool for model validation. This approach, however, requires a manual transformation between the specifications of two different tool formats, which is error-prone and diminishes the benefit of automated model-level validations. In this paper, we describe our own implementation of a specification transformation engine that is based on the Model Driven Architecture (MDA) framework and currently supports automatic tool-level transformations from RSA to USE.

  4. Transformation of standardized clinical models based on OWL technologies: from CEM to OpenEHR archetypes.

    PubMed

    Legaz-García, María del Carmen; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás; Chute, Christopher G; Tao, Cui

    2015-05-01

    The semantic interoperability of electronic healthcare records (EHRs) systems is a major challenge in the medical informatics area. International initiatives pursue the use of semantically interoperable clinical models, and ontologies have frequently been used in semantic interoperability efforts. The objective of this paper is to propose a generic, ontology-based, flexible approach for supporting the automatic transformation of clinical models, which is illustrated for the transformation of Clinical Element Models (CEMs) into openEHR archetypes. Our transformation method exploits the fact that the information models of the most relevant EHR specifications are available in the Web Ontology Language (OWL). The transformation approach is based on defining mappings between those ontological structures. We propose a way in which CEM entities can be transformed into openEHR by using transformation templates and OWL as common representation formalism. The transformation architecture exploits the reasoning and inferencing capabilities of OWL technologies. We have devised a generic, flexible approach for the transformation of clinical models, implemented for the unidirectional transformation from CEM to openEHR, a series of reusable transformation templates, a proof-of-concept implementation, and a set of openEHR archetypes that validate the methodological approach. We have been able to transform CEM into archetypes in an automatic, flexible, reusable transformation approach that could be extended to other clinical model specifications. We exploit the potential of OWL technologies for supporting the transformation process. We believe that our approach could be useful for international efforts in the area of semantic interoperability of EHR systems.

  5. Real-time implementations of image segmentation algorithms on shared memory multicore architecture: a survey (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Akil, Mohamed

    2017-05-01

    Real-time processing is getting more and more important in many image processing applications, and image segmentation is one of the most fundamental tasks in image analysis. As a consequence, many different approaches for image segmentation have been proposed. The watershed transform is a well-known image segmentation tool and a very data-intensive task. To achieve acceleration and obtain real-time processing of watershed algorithms, parallel architectures and programming models for multicore computing have been developed. This paper focuses on a survey of approaches for parallel implementation of sequential watershed algorithms on multicore general-purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we give a comparison of various parallelizations of sequential watershed algorithms on shared memory multicore architecture. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on the performance of the parallel implementations. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models. Thus, we compare OpenMP (an application programming interface for multi-processing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.

  6. Organoids as Models for Neoplastic Transformation | Office of Cancer Genomics

    Cancer.gov

    Cancer models strive to recapitulate the incredible diversity inherent in human tumors. A key challenge in accurate tumor modeling lies in capturing the panoply of homo- and heterotypic cellular interactions within the context of a three-dimensional tissue microenvironment. To address this challenge, researchers have developed organotypic cancer models (organoids) that combine the 3D architecture of in vivo tissues with the experimental facility of 2D cell lines.

  7. Fourier transform spectrometer controller for partitioned architectures

    NASA Astrophysics Data System (ADS)

    Tamas-Selicean, D.; Keymeulen, D.; Berisford, D.; Carlson, R.; Hand, K.; Pop, P.; Wadsworth, W.; Levy, R.

    The current trend in spacecraft computing is to integrate applications of different criticality levels on the same platform using no separation. This approach increases the complexity of the development, verification and integration processes, with an impact on the whole system life cycle. Researchers at ESA and NASA advocated for the use of partitioned architecture to reduce this complexity. Partitioned architectures rely on platform mechanisms to provide robust temporal and spatial separation between applications. Such architectures have been successfully implemented in several industries, such as avionics and automotive. In this paper we investigate the challenges of developing and the benefits of integrating a scientific instrument, namely a Fourier Transform Spectrometer, in such a partitioned architecture.

  8. Model-Driven Theme/UML

    NASA Astrophysics Data System (ADS)

    Carton, Andrew; Driver, Cormac; Jackson, Andrew; Clarke, Siobhán

    Theme/UML is an existing approach to aspect-oriented modelling that supports the modularisation and composition of concerns, including crosscutting ones, in design. To date, its lack of integration with model-driven engineering (MDE) techniques has limited its benefits across the development lifecycle. Here, we describe our work on facilitating the use of Theme/UML as part of an MDE process. We have developed a transformation tool that adopts model-driven architecture (MDA) standards. It defines a concern composition mechanism, implemented as a model transformation, to support the enhanced modularisation features of Theme/UML. We evaluate our approach by applying it to the development of mobile, context-aware applications, an application area characterised by many non-functional requirements that manifest themselves as crosscutting concerns.

  9. Studio Mathematics: The Epistemology and Practice of Design Pedagogy as a Model for Mathematics Learning. WCER Working Paper No. 2005-3

    ERIC Educational Resources Information Center

    Shaffer, David Williamson

    2005-01-01

    This paper examines how middle school students developed understanding of transformational geometry through design activities in Escher's World, a computationally rich design experiment explicitly modeled on an architectural design studio. Escher's World was based on the theory of pedagogical praxis (Shaffer, 2004a), which suggests that preserving…

  10. MODEL-Based Methodology for System of Systems Architecture Development with Application to the Recapitalization of the Future Towing and Salvage Platform

    DTIC Science & Technology

    2008-09-01

    SEP) is a comprehensive, iterative and recursive problem solving process, applied sequentially top-down by integrated teams. It transforms needs... central integrated design repository. It includes a comprehensive behavior modeling notation to understand the dynamics of a design. CORE is a MBSE...

  11. 77 FR 31581 - U.S. Architecture Services Trade Mission to India; Chennai, Kolkata and Bangalore, India; October...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-29

    ... complete transformation in recent years. The booming economy and growing middle class have prompted... a transformation in the way projects are designed and built in India. Many foreign architecture... need for all building types, but corporate campuses, education, housing, infrastructure, and master...

  12. Reference Architecture Model Enabling Standards Interoperability.

    PubMed

    Blobel, Bernd

    2017-01-01

    Advanced health and social services paradigms are supported by a comprehensive set of domains managed by different scientific disciplines. Interoperability has to evolve beyond information and communication technology (ICT) concerns to include the real-world business domains and their processes, as well as the individual context of all actors involved. The system must therefore properly reflect the environment in front of and around the computer as an essential and even defining part of the health system. This paper introduces an ICT-independent, system-theoretical, ontology-driven reference architecture model that allows the representation and harmonization of all domains involved, including the transformation into an appropriate ICT design and implementation. The entire process is completely formalized and can therefore be fully automated.

  13. Task scheduling in dataflow computer architectures

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1994-01-01

    Dataflow computers provide a platform for the solution of a large class of computational problems, which includes digital signal processing and image processing. Many typical applications are represented by a set of tasks which can be repetitively executed in parallel as specified by an associated dataflow graph. Research in this area aims to model these architectures, develop scheduling procedures, and predict the transient and steady state performance. Researchers at NASA have created a model and developed associated software tools which are capable of analyzing a dataflow graph and predicting its runtime performance under various resource and timing constraints. These models and tools were extended and used in this work. Experiments using these tools revealed certain properties of such graphs that require further study. Specifically, the transient behavior at the beginning of the execution of a graph can have a significant effect on the steady state performance. Transformation and retiming of the application algorithm and its initial conditions can produce a different transient behavior and consequently different steady state performance. The effect of such transformations on the resource requirements or under resource constraints requires extensive study. Task scheduling to obtain maximum performance (based on user-defined criteria), or to satisfy a set of resource constraints, can also be significantly affected by a transformation of the application algorithm. Since task scheduling is performed by heuristic algorithms, further research is needed to determine if new scheduling heuristics can be developed that can exploit such transformations. This work has provided the initial development for further long-term research efforts. A simulation tool was completed to provide insight into the transient and steady state execution of a dataflow graph. A set of scheduling algorithms was completed which can operate in conjunction with the modeling and performance tools previously developed. Initial studies on the performance of these algorithms were done to examine the effects of application algorithm transformations as measured by such quantities as number of processors, time between outputs, time between input and output, communication time, and memory size.
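
    To make the scheduling discussion concrete, here is a minimal list-scheduling heuristic for a dataflow graph on a fixed number of processors; the graph, task costs, and the longest-task-first priority rule below are illustrative stand-ins, not the heuristics developed in the work above.

```python
# List scheduling of a dataflow graph on n_procs processors.
import heapq

def list_schedule(tasks, deps, cost, n_procs=2):
    """tasks: task ids; deps: {task: set(predecessors)}; cost: {task: time}."""
    ready = [t for t in tasks if not deps.get(t)]
    procs = [(0.0, p) for p in range(n_procs)]   # (free-at time, processor id)
    heapq.heapify(procs)
    finish, done, schedule = {}, set(), []
    while ready:
        t = max(ready, key=lambda u: cost[u])    # heuristic: longest task first
        ready.remove(t)
        free_at, p = heapq.heappop(procs)
        start = max([free_at] + [finish[d] for d in deps.get(t, ())])
        finish[t] = start + cost[t]
        heapq.heappush(procs, (finish[t], p))
        schedule.append((t, p, start))
        done.add(t)
        ready += [u for u in tasks if u not in done and u not in ready
                  and deps.get(u, set()) <= done]
    return schedule

print(list_schedule("abcd", {"c": {"a"}, "d": {"a", "b"}},
                    {"a": 2, "b": 1, "c": 3, "d": 1}))
```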

  14. Managing Complex Interoperability Solutions using Model-Driven Architecture

    DTIC Science & Technology

    2011-06-01

    such as Oracle or MySQL . Each data model for a specific RDBMS is a distinct PSM. Or the system may want to exchange information with other C2...reduced number of transformations, e.g., from an RDBMS physical schema to the corresponding SQL script needed to instantiate the tables in a relational...tance of models. In engineering, a model serves several purposes: 1. It presents an abstract view of a complex system or of a complex information

  15. A parallel 3-D discrete wavelet transform architecture using pipelined lifting scheme approach for video coding

    NASA Astrophysics Data System (ADS)

    Hegde, Ganapathi; Vaya, Pukhraj

    2013-10-01

    This article presents a parallel architecture for the 3-D discrete wavelet transform (3-DDWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present coherent Daubechies filter bank (9, 7). This 3-DDWT architecture has advantages such as no group-of-pictures restriction and reduced memory referencing. It offers low power consumption, low latency and high throughput. The computing technique is based on the concept that the lifting scheme minimises the storage requirement. The application-specific integrated circuit implementation of the proposed architecture is done by synthesising it using a 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.
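
    The lifting idea at the core of such designs can be sketched in a few lines of NumPy: one 1-D lifting step of the reversible 5/3 filter used in JPEG2000. Periodic extension is used here for brevity (JPEG2000 specifies symmetric extension), and a separable 3-D transform applies this step along each of the three axes in turn.

```python
# One 1-D lifting step of the reversible 5/3 wavelet (predict + update).
import numpy as np

def lift_53(x):
    even = x[0::2].astype(np.int64)
    odd = x[1::2].astype(np.int64)
    detail = odd - ((even + np.roll(even, -1)) >> 1)          # predict step
    approx = even + ((detail + np.roll(detail, 1) + 2) >> 2)  # update step
    return approx, detail

approx, detail = lift_53(np.arange(16))
print(approx, detail)
```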

  16. The diabolo classifier

    PubMed

    Schwenk

    1998-11-15

    We present a new classification architecture based on autoassociative neural networks that are used to learn discriminant models of each class. The proposed architecture has several interesting properties with respect to other model-based classifiers like nearest-neighbors or radial basis functions: it has a low computational complexity and uses a compact distributed representation of the models. The classifier is also well suited for the incorporation of a priori knowledge by means of a problem-specific distance measure. In particular, we will show that tangent distance (Simard, Le Cun, & Denker, 1993) can be used to achieve transformation invariance during learning and recognition. We demonstrate the application of this classifier to optical character recognition, where it has achieved state-of-the-art results on several reference databases. Relations to other models, in particular those based on principal component analysis, are also discussed.
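
    A hedged software analogue of the diabolo idea, with a per-class PCA standing in for the neural autoassociators (the paper itself notes the close relation to principal component analysis): each class learns a compact model, and a sample is assigned to the class whose model reconstructs it with the smallest error.

```python
# Per-class PCA as a stand-in for per-class neural autoassociators:
# classify a sample by the class whose model reconstructs it best.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, y = load_digits(return_X_y=True)
models = {c: PCA(n_components=10).fit(X[y == c]) for c in np.unique(y)}

def classify(x):
    def reconstruction_error(m):
        r = m.inverse_transform(m.transform(x.reshape(1, -1)))
        return np.linalg.norm(x - r.ravel())
    return min(models, key=lambda c: reconstruction_error(models[c]))

print(classify(X[0]), "true:", y[0])
```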

  17. iArchi[tech]ture: Developing a Mobile Social Media Framework for Pedagogical Transformation

    ERIC Educational Resources Information Center

    Cochrane, Thomas; Rhodes, David

    2013-01-01

    This paper critiques the journey of pedagogical change over three mobile learning (mlearning) project iterations (2009 to 2011) within the context of a Bachelor of Architecture degree. The three projects were supported by an intentional community of practice model involving a partnership of an educational researcher/technologist, course lecturers,…

  18. CHRONOS architecture: Experiences with an open-source services-oriented architecture for geoinformatics

    USGS Publications Warehouse

    Fils, D.; Cervato, C.; Reed, J.; Diver, P.; Tang, X.; Bohling, G.; Greer, D.

    2009-01-01

    CHRONOS's purpose is to transform Earth history research by seamlessly integrating stratigraphic databases and tools into a virtual on-line stratigraphic record. In this paper, we describe the various components of CHRONOS's distributed data system, including the encoding of semantic and descriptive data into a service-based architecture. We give examples of how we have integrated well-tested resources available from the open-source and geoinformatics communities, like the GeoSciML schema and the simple knowledge organization system (SKOS), into the services-oriented architecture to encode timescale and phylogenetic synonymy data. We also describe ongoing efforts to use geospatially enhanced data syndication and to informally include semantic information by embedding it directly into the XHTML Document Object Model (DOM). XHTML DOM allows machine-discoverable descriptive data, such as licensing and citation information, to be incorporated directly into data sets retrieved by users. © 2008 Elsevier Ltd. All rights reserved.

  19. Application of the medical data warehousing architecture EPIDWARE to epidemiological follow-up: data extraction and transformation.

    PubMed

    Kerkri, E; Quantin, C; Yetongnon, K; Allaert, F A; Dusserre, L

    1999-01-01

    In this paper, we present an application of EPIDWARE, a medical data warehousing architecture, to our epidemiological follow-up project. The aim of this project is to extract and regroup information from various information systems for epidemiological studies. We describe the requirements of the epidemiological follow-up project, such as the anonymity of medical data and the data file linkage procedure. We introduce the concept of a data warehousing architecture. The particularities of data extraction and transformation are presented and discussed.
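
    The anonymity-plus-linkage step can be sketched as keyed hashing of identifying fields, so that records from different systems can be joined without exchanging the identifiers themselves; the field names and key-management scheme below are hypothetical, not EPIDWARE's actual procedure.

```python
# Toy anonymous record linkage: link on a keyed one-way hash of identifiers.
import hashlib
import hmac

SECRET = b"site-specific-key"  # shared by the extraction sites, never exported

def pseudonym(last_name, birth_date):
    msg = f"{last_name.upper()}|{birth_date}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

hospital_a = {pseudonym("Dupont", "1950-03-02"): {"stay_days": 12}}
registry_b = {pseudonym("DUPONT", "1950-03-02"): {"followup": "alive"}}

# Join the two files on the pseudonym, never on the raw identifiers.
linked = {k: {**hospital_a[k], **registry_b[k]}
          for k in hospital_a.keys() & registry_b.keys()}
print(linked)
```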

  20. A unified structural/terminological interoperability framework based on LexEVS: application to TRANSFoRm.

    PubMed

    Ethier, Jean-François; Dameron, Olivier; Curcin, Vasa; McGilchrist, Mark M; Verheij, Robert A; Arvanitis, Theodoros N; Taweel, Adel; Delaney, Brendan C; Burgun, Anita

    2013-01-01

    Biomedical research increasingly relies on the integration of information from multiple heterogeneous data sources. Despite the fact that structural and terminological aspects of interoperability are interdependent and rely on a common set of requirements, current efforts typically address them in isolation. We propose a unified ontology-based knowledge framework to facilitate interoperability between heterogeneous sources, and investigate if using the LexEVS terminology server is a viable implementation method. We developed a framework based on an ontology, the general information model (GIM), to unify structural models and terminologies, together with relevant mapping sets. This allowed a uniform access to these resources within LexEVS to facilitate interoperability by various components and data sources from implementing architectures. Our unified framework has been tested in the context of the EU Framework Program 7 TRANSFoRm project, where it was used to achieve data integration in a retrospective diabetes cohort study. The GIM was successfully instantiated in TRANSFoRm as the clinical data integration model, and necessary mappings were created to support effective information retrieval for software tools in the project. We present a novel, unifying approach to address interoperability challenges in heterogeneous data sources, by representing structural and semantic models in one framework. Systems using this architecture can rely solely on the GIM that abstracts over both the structure and coding. Information models, terminologies and mappings are all stored in LexEVS and can be accessed in a uniform manner (implementing the HL7 CTS2 service functional model). The system is flexible and should reduce the effort needed from data sources personnel for implementing and managing the integration.

  2. Model-Driven Useware Engineering

    NASA Astrophysics Data System (ADS)

    Meixner, Gerrit; Seissler, Marc; Breiner, Kai

    User-oriented hardware and software development relies on a systematic development process based on a comprehensive analysis focusing on the users' requirements and preferences. Such a development process calls for the integration of numerous disciplines, from psychology and ergonomics to computer sciences and mechanical engineering. Hence, a correspondingly interdisciplinary team must be equipped with suitable software tools to allow it to handle the complexity of a multimodal and multi-device user interface development approach. An abstract, model-based development approach seems to be adequate for handling this complexity. This approach comprises different levels of abstraction requiring adequate tool support. Thus, in this chapter, we present the current state of our model-based software tool chain. We introduce the use model as the core model of our model-based process, transformation processes, and a model-based architecture, and we present different software tools that provide support for creating and maintaining the models or performing the necessary model transformations.

  3. Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach

    NASA Astrophysics Data System (ADS)

    Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.

    Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.
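
    The core projection can be illustrated in a few lines: the interface process of one enterprise keeps, in order, only the interactions that enterprise participates in, seen from its own side. The model structure below is a deliberately simplified stand-in for the collaborative process models used in the paper.

```python
# Project a collaborative process onto one enterprise's interface process.
collaborative = [
    ("send", "Buyer", "Seller", "PurchaseOrder"),
    ("send", "Seller", "Carrier", "ShippingRequest"),
    ("send", "Seller", "Buyer", "OrderConfirmation"),
]

def interface_process(model, enterprise):
    # The enterprise "sends" where it is the source, "receives" where target.
    role = lambda step: "send" if step[1] == enterprise else "receive"
    return [(role(s), s[3]) for s in model if enterprise in (s[1], s[2])]

print(interface_process(collaborative, "Buyer"))
# [('send', 'PurchaseOrder'), ('receive', 'OrderConfirmation')]
```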

  4. Building Paradigms: Major Transformations in School Architecture (1798-2009)

    ERIC Educational Resources Information Center

    Gislason, Neil

    2009-01-01

    This article provides an historical overview of significant trends in school architecture from 1798 to the present. I divide the history of school architecture into two major phases. The first period falls between 1798 and 1921: the modern graded classroom emerged as a standard architectural feature during this period. The second period, which…

  5. New approaches to digital transformation of petrochemical production

    NASA Astrophysics Data System (ADS)

    Andieva, E. Y.; Kapelyuhovskaya, A. A.

    2017-08-01

    The paper considers the newest reference-architecture concepts for digital industrial transformation and identifies the problems of applying them to enterprises whose life cycle includes the processing and marketing of oil products. A reference-architecture concept is proposed that provides a systematic representation of the fundamental changes in approaches to production management based on the automation of production process control.

  6. L1 Adaptive Control Law in Support of Large Flight Envelope Modeling Work

    NASA Technical Reports Server (NTRS)

    Gregory, Irene M.; Xargay, Enric; Cao, Chengyu; Hovakimyan, Naira

    2011-01-01

    This paper presents results of a flight test of the L1 adaptive control architecture designed to directly compensate for significant uncertain cross-coupling in nonlinear systems. The flight test was conducted on the subscale turbine powered Generic Transport Model that is an integral part of the Airborne Subscale Transport Aircraft Research system at the NASA Langley Research Center. The results presented are in support of nonlinear aerodynamic modeling and instrumentation calibration.

  7. Control and Communication for a Secure and Reconfigurable Power Distribution System

    NASA Astrophysics Data System (ADS)

    Giacomoni, Anthony Michael

    A major transformation is taking place throughout the electric power industry to overlay existing electric infrastructure with advanced sensing, communications, and control system technologies. This transformation to a smart grid promises to enhance system efficiency, increase system reliability, support the electrification of transportation, and provide customers with greater control over their electricity consumption. Upgrading control and communication systems for the end-to-end electric power grid, however, will present many new security challenges that must be dealt with before extensive deployment and implementation of these technologies can begin. In this dissertation, a comprehensive systems approach is taken to minimize and prevent cyber-physical disturbances to electric power distribution systems using sensing, communications, and control system technologies. To accomplish this task, an intelligent distributed secure control (IDSC) architecture is presented and validated in silico for distribution systems to provide greater adaptive protection, with the ability to proactively reconfigure, and rapidly respond to disturbances. Detailed descriptions of functionalities at each layer of the architecture as well as the whole system are provided. To compare the performance of the IDSC architecture with that of other control architectures, an original simulation methodology is developed. The simulation model integrates aspects of cyber-physical security, dynamic price and demand response, sensing, communications, intermittent distributed energy resources (DERs), and dynamic optimization and reconfiguration. Applying this comprehensive systems approach, performance results for the IEEE 123 node test feeder are simulated and analyzed. The results show the trade-offs between system reliability, operational constraints, and costs for several control architectures and optimization algorithms. Additional simulation results are also provided. In particular, the advantages of an IDSC architecture are highlighted when an intermittent DER is present on the system.

  8. Performance of the Wavelet Decomposition on Massively Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.; LeMoigne, Jacqueline; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Traditionally, Fourier Transforms have been utilized for performing signal analysis and representation. But although it is straightforward to reconstruct a signal from its Fourier transform, no local description of the signal is included in its Fourier representation. To alleviate this problem, Windowed Fourier transforms and then wavelet transforms have been introduced, and it has been proven that wavelets give a better localization than traditional Fourier transforms, as well as a better division of the time- or space-frequency plane than Windowed Fourier transforms. Because of these properties and after the development of several fast algorithms for computing the wavelet representation of any signal, in particular the Multi-Resolution Analysis (MRA) developed by Mallat, wavelet transforms have increasingly been applied to signal analysis problems, especially real-life problems, in which speed is critical. In this paper we present and compare efficient wavelet decomposition algorithms on different parallel architectures. We report and analyze experimental measurements, using NASA remotely sensed images. Results show that our algorithms achieve significant performance gains on current high performance parallel systems, and meet scientific applications and multimedia requirements. The extensive performance measurements collected over a number of high-performance computer systems have revealed important architectural characteristics of these systems, in relation to the processing demands of the wavelet decomposition of digital images.
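
    For reference, the serial Mallat decomposition being parallelized can be written with PyWavelets in a few lines; a data-parallel version would distribute image tiles or subbands across processors. The image and wavelet choice below are illustrative.

```python
# Serial Mallat multi-resolution analysis (MRA) with PyWavelets.
import numpy as np
import pywt

image = np.random.rand(512, 512)           # stand-in for a remotely sensed scene
coeffs = pywt.wavedec2(image, "db4", level=3)
approx, details = coeffs[0], coeffs[1:]    # cA3, then (cH, cV, cD) per level
for level, (cH, cV, cD) in enumerate(details, start=1):
    print("level", level, "detail subband shape", cH.shape)
```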

  9. Conceptual Modeling in the Time of the Revolution: Part II

    NASA Astrophysics Data System (ADS)

    Mylopoulos, John

    Conceptual Modeling was a marginal research topic at the very fringes of Computer Science in the 60s and 70s, when the discipline was dominated by topics focusing on programs, systems and hardware architectures. Over the years, however, the field has moved to centre stage and has come to claim a central role both in Computer Science research and practice in diverse areas, such as Software Engineering, Databases, Information Systems, the Semantic Web, Business Process Management, Service-Oriented Computing, Multi-Agent Systems, Knowledge Management, and more. The transformation was greatly aided by the adoption of standards in modeling languages (e.g., UML), and model-based methodologies (e.g., Model-Driven Architectures) by the Object Management Group (OMG) and other standards organizations. We briefly review the history of the field over the past 40 years, focusing on the evolution of key ideas. We then note some open challenges and report on-going research, covering topics such as the representation of variability in conceptual models, capturing model intentions, and models of laws.

  10. Learning Methods for Efficient Adoption of Contemporary Technologies in Architectural Design

    ERIC Educational Resources Information Center

    Mahdavinejad, Mohammadjavad; Dehghani, Sohaib; Shahsavari, Fatemeh

    2013-01-01

    The interaction between technology and history is one of the most significant issues in achieving an efficient and progressive architecture in any era; this is a concept that stems from the lessons of the traditional architecture of Iran. Architecture, as a part of art, has been permanently transforming, just like a living organism. In fact, it has been…

  11. Wavelet-enhanced convolutional neural network: a new idea in a deep learning paradigm.

    PubMed

    Savareh, Behrouz Alizadeh; Emami, Hassan; Hajiabadi, Mohamadreza; Azimi, Seyed Majid; Ghafoori, Mahyar

    2018-05-29

    Brain tumor segmentation is a challenging task that calls for the use of machine learning techniques. One of the machine learning techniques that has received much attention is the convolutional neural network (CNN), whose performance can be enhanced by combining it with other data analysis tools such as the wavelet transform. In this study, a well-known CNN variant, the fully convolutional network (FCN), was used for brain tumor segmentation, and its architecture was enhanced by a wavelet transform used as a complementary and enhancing tool. Comparing the performance of the basic FCN architecture against the wavelet-enhanced form revealed a remarkable superiority of the enhanced architecture in brain tumor segmentation tasks. Using mathematical functions and enhancing tools such as the wavelet transform can improve the performance of a CNN in any image processing task such as segmentation and classification.
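
    One simple way to realize the "wavelet as a complementary input" idea is to stack one level of detail subbands with the raw slice as extra input channels; where the paper actually fuses wavelet features into the FCN may differ, so this is only a hedged pre-processing sketch.

```python
# Stack Haar detail subbands with the raw MRI slice as extra input channels.
import numpy as np
import pywt

def wavelet_channels(slice2d):
    _, (cH, cV, cD) = pywt.dwt2(slice2d, "haar")
    h, w = slice2d.shape
    upsample = lambda c: np.kron(c, np.ones((2, 2)))[:h, :w]  # back to H x W
    return np.stack([slice2d, upsample(cH), upsample(cV), upsample(cD)])

print(wavelet_channels(np.random.rand(240, 240)).shape)  # (4, 240, 240)
```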

  12. Brahms Mobile Agents: Architecture and Field Tests

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2002-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, rover/All-Terrain Vehicle (ATV), robotic assistant, other personnel in a local habitat, and a remote mission support team (with time delay). Software processes, called agents, implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system (e.g., "return here later" and "bring this back to the habitat"). This combination of agents, rover, and model-based spoken dialogue interface constitutes a personal assistant. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a run-time system.

  13. A program for handling map projections of small-scale geospatial raster data

    USGS Publications Warehouse

    Finn, Michael P.; Steinwand, Daniel R.; Trent, Jason R.; Buehler, Robert A.; Mattli, David M.; Yamamoto, Kristina H.

    2012-01-01

    Scientists routinely accomplish small-scale geospatial modeling using raster datasets of global extent. Such use often requires the projection of global raster datasets onto a map or the reprojection from a given map projection associated with a dataset. The distortion characteristics of these projection transformations can have significant effects on modeling results. Distortions associated with the reprojection of global data are generally greater than distortions associated with reprojections of larger-scale, localized areas. The accuracy of areas in projected raster datasets of global extent is dependent on spatial resolution. To address these problems of projection and the associated resampling that accompanies it, methods for framing the transformation space, direct point-to-point transformations rather than gridded transformation spaces, a solution to the wrap-around problem, and an approach to alternative resampling methods are presented. The implementations of these methods are provided in an open-source software package called MapImage (or mapIMG, for short), which is designed to function on a variety of computer architectures.
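
    The direct point-to-point reprojection idea (as opposed to a gridded transformation space) can be sketched with pyproj rather than mapIMG itself; the CRS codes below are illustrative choices, not the paper's.

```python
# Direct point-to-point reprojection: each coordinate pair is transformed
# individually, with no intermediate gridded transformation space.
from pyproj import Transformer

# Geographic WGS84 to the global equal-area Mollweide projection.
fwd = Transformer.from_crs("EPSG:4326", "ESRI:54009", always_xy=True)

lon, lat = 10.0, 50.0
x, y = fwd.transform(lon, lat)
lon2, lat2 = fwd.transform(x, y, direction="INVERSE")
print((x, y), "round-trips to", (lon2, lat2))
```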

  14. Modeling, construction and experimental validation of actuated rolling dynamics of the cylindrical Transforming Roving-Rolling Explorer (TRREx)

    NASA Astrophysics Data System (ADS)

    Edwin, L.; Mazzoleni, A.; Gemmer, T.; Ferguson, S.

    2017-03-01

    Planetary surface exploration technology over the past few years has seen significant advancements on multiple fronts. Robotic exploration platforms are becoming more sophisticated and capable of embarking on more challenging missions. More unconventional designs, particularly transforming architectures that have multiple modes of locomotion, are being studied. This work explores the capabilities of one such novel transforming rover called the Transforming Roving-Rolling Explorer (TRREx). Biologically inspired by the armadillo and the golden-wheel spider, the TRREx has two modes of locomotion: it can traverse on six wheels like a conventional rover on benign terrain, but can transform into a sphere when necessary to negotiate steep rugged slopes. The ability to self-propel in the spherical configuration, even in the absence of a negative gradient, increases the TRREx's versatility and its concept value. This paper describes construction and testing of a prototype cylindrical TRREx that demonstrates that "actuated rolling" can be achieved, and also presents a dynamic model of this prototype version of the TRREx that can be used to investigate the feasibility and value of such self-propelled locomotion. Finally, we present results that validate our dynamic model by comparing results from computer simulations made using the dynamic model to experimental results acquired from test runs using the prototype.

  15. Programmable Remapper with Single Flow Architecture

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E. (Inventor)

    1993-01-01

    An apparatus for image processing comprising a camera for receiving an original visual image and transforming the original visual image into an analog image, a first converter for transforming the analog image of the camera to a digital image, a processor having a single flow architecture for receiving the digital image and producing, with a single algorithm, an output image, a second converter for transforming the digital image of the processor to an analog image, and a viewer for receiving the analog image, transforming the analog image into a transformed visual image for observing the transformations applied to the original visual image. The processor comprises one or more subprocessors for the parallel reception of a digital image for producing an output matrix of the transformed visual image. More particularly, the processor comprises a plurality of subprocessors for receiving in parallel and transforming the digital image for producing a matrix of the transformed visual image, and an output interface means for receiving the respective portions of the transformed visual image from the respective subprocessor for producing an output matrix of the transformed visual image.

  16. Theoretical modelling of residual and transformational stresses in SMA composites

    NASA Astrophysics Data System (ADS)

    Berman, J. B.; White, S. R.

    1996-12-01

    SMA composites are a class of smart materials in which shape memory alloy (SMA) actuators are embedded in a polymer matrix composite. The difference in thermal expansion between the SMA and the host material leads to residual stresses during processing. Similarly, the SMA transformations from martensite to austenite, or the reverse, also generate stresses. These stresses acting in combination can lead to SMA/epoxy interfacial debonding or microcracking of the composite phase. In this study the residual and transformational stresses are investigated for a nitinol wire embedded in a graphite/epoxy composite. A three-phase micromechanical model is developed. The nitinol wire is assumed to behave as a thermoelastic material. Nitinol austenitic and martensitic transformations are modelled using linear piecewise interpolation of experimental data. The interphase is modelled as a thermoelastic polymer. A transversely isotropic thermoelastic composite is used for the outer phase. Stress-free conditions are assumed immediately before cool down from the cure temperature. The effect of nitinol, coating and composite properties on residual and transformational stresses are evaluated. Fiber architectures favoring the axial direction decrease the magnitude of all residual stresses. A decrease in stresses at the composite/coating interface is also predicted through the use of thick, compliant coatings. Reducing the recovery strain and moving the transformation to higher temperatures were found to be most effective in reducing residual stresses.

  17. Implementation in an FPGA circuit of Edge detection algorithm based on the Discrete Wavelet Transforms

    NASA Astrophysics Data System (ADS)

    Bouganssa, Issam; Sbihi, Mohamed; Zaim, Mounia

    2017-07-01

    The 2D Discrete Wavelet Transform (DWT) is a computationally intensive task that is usually implemented on specific architectures in many real-time imaging systems. In this paper, a high-throughput edge or contour detection algorithm based on the discrete wavelet transform is proposed. A technique of applying the filters in the three directions (horizontal, vertical and diagonal) of the image is used to capture the maximum number of existing contours. The proposed architectures were designed in VHDL and mapped to a Xilinx Spartan-6 FPGA. The synthesis results show that the proposed architecture has a low area cost and can operate at up to 100 MHz, performing 2D wavelet analysis for a sequence of images while maintaining the flexibility of the system to support an adaptive algorithm.
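
    A software analogue of the three-direction design: take one DWT level and combine the horizontal, vertical and diagonal detail subbands into a single edge-magnitude map. This is a minimal NumPy/PyWavelets sketch, not the FPGA implementation.

```python
# Edge magnitude from the three directional detail subbands of one DWT level.
import numpy as np
import pywt

def dwt_edges(image, wavelet="haar"):
    _, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    return np.sqrt(cH**2 + cV**2 + cD**2)  # horizontal + vertical + diagonal

img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0                    # a square test pattern
print(dwt_edges(img).max() > 0)            # edges found along the square
```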

  18. ASIC implementation of recursive scaled discrete cosine transform algorithm

    NASA Astrophysics Data System (ADS)

    On, Bill N.; Narasimhan, Sam; Huang, Victor K.

    1994-05-01

    A program to implement the Recursive Scaled Discrete Cosine Transform (DCT) algorithm as proposed by H. S. Hou has been undertaken at the Institute of Microelectronics. Implementation of the design was done using a top-down design methodology with VHDL (VHSIC Hardware Description Language) for chip modeling. Once the VHDL simulation was satisfactorily completed, the design was synthesized into gates using a synthesis tool. The architecture of the design consists of two processing units together with a memory module for data storage and transpose. Each processing unit is composed of four pipelined stages, which allow the internal clock to run at one-eighth (1/8) the speed of the pixel clock. Each stage operates on eight pixels in parallel. As the data flows through each stage, various adders and multipliers transform them into the desired coefficients. The Scaled IDCT was implemented in a similar fashion, with the adders and multipliers rearranged to perform the inverse DCT algorithm. The chip has been verified using Field Programmable Gate Array devices, and the design is operational. The combination of fewer multiplications and a pipelined architecture gives Hou's Recursive Scaled DCT good potential for achieving high performance at low cost in a Very Large Scale Integration implementation.

  19. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    NASA Astrophysics Data System (ADS)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
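
    The Roofline correlation mentioned above reduces to a one-line bound: attainable performance is the minimum of peak compute and peak memory bandwidth times arithmetic intensity. The numbers below are illustrative, not the paper's measurements.

```python
# Roofline bound: min(peak compute, bandwidth x arithmetic intensity).
def roofline(peak_gflops, peak_gbps, flops_per_byte):
    return min(peak_gflops, peak_gbps * flops_per_byte)

# e.g. a flux kernel at 0.4 flop/byte on a 500 Gflop/s, 100 GB/s socket:
print(roofline(500.0, 100.0, 0.4))  # -> 40.0 Gflop/s, i.e. memory-bound
```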

  20. Medicaid information technology architecture: an overview.

    PubMed

    Friedman, Richard H

    2006-01-01

    The Medicaid Information Technology Architecture (MITA) is a roadmap and tool-kit for States to transform their Medicaid Management Information System (MMIS) into an enterprise-wide, beneficiary-centric system. MITA will enable State Medicaid agencies to align their information technology (IT) opportunities with their evolving business needs. It also addresses long-standing issues of interoperability, adaptability, and data sharing, including clinical data, across organizational boundaries by creating models based on nationally accepted technical standards. Perhaps most significantly, MITA allows State Medicaid Programs to actively participate in the DHHS Secretary's vision of a transparent health care market that utilizes electronic health records (EHRs), ePrescribing and personal health records (PHRs).

  1. Implementation of Remaining Useful Lifetime Transformer Models in the Fleet-Wide Prognostic and Health Management Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Vivek; Lybeck, Nancy J.; Pham, Binh

    Research and development efforts are required to address aging and reliability concerns of the existing fleet of nuclear power plants. As most plants continue to operate beyond the license life (i.e., towards 60 or 80 years), plant components are more likely to incur age-related degradation mechanisms. To assess and manage the health of aging plant assets across the nuclear industry, the Electric Power Research Institute has developed a web-based Fleet-Wide Prognostic and Health Management (FW-PHM) Suite for diagnosis and prognosis. FW-PHM is a set of web-based diagnostic and prognostic tools and databases, comprised of the Diagnostic Advisor, the Asset Fault Signature Database, the Remaining Useful Life Advisor, and the Remaining Useful Life Database, that serves as an integrated health monitoring architecture. The main focus of this paper is the implementation of prognostic models for generator step-up transformers in the FW-PHM Suite. One prognostic model discussed is based on the functional relationship between the degree of polymerization (the most commonly used metric to assess the health of the winding insulation in a transformer) and the furfural concentration in the insulating oil. The other model is based on thermal-induced degradation of the transformer insulation. By utilizing transformer loading information, established thermal models are used to estimate the hot spot temperature inside the transformer winding. Both models are implemented in the Remaining Useful Life Database of the FW-PHM Suite. The Remaining Useful Life Advisor utilizes the implemented prognostic models to estimate the remaining useful life of the paper winding insulation in the transformer based on actual oil testing and operational data.
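
    The furfural-based model can be sketched with a widely cited Chendong-style empirical fit, log10(2FAL [ppm]) = 1.51 - 0.0035 × DP, plus a crude linear remaining-life proxy; the FW-PHM implementation and its coefficients may well differ, so treat both functions as illustrative assumptions.

```python
# Illustrative furfural -> degree-of-polymerization -> remaining-life chain.
import math

def degree_of_polymerization(furfural_ppm):
    # Chendong-style empirical fit (coefficients assumed, not from the paper).
    return (1.51 - math.log10(furfural_ppm)) / 0.0035

def insulation_life_fraction(dp, dp_new=1000.0, dp_end=200.0):
    # Crude proxy: linear in DP between new insulation and end-of-life.
    return max(0.0, min(1.0, (dp - dp_end) / (dp_new - dp_end)))

dp = degree_of_polymerization(0.5)  # 0.5 ppm 2FAL measured in the oil
print(round(dp), round(insulation_life_fraction(dp), 2))
```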

  2. Anisotropic analysis of trabecular architecture in human femur bone radiographs using quaternion wavelet transforms.

    PubMed

    Sangeetha, S; Sujatha, C M; Manamalli, D

    2014-01-01

    In this work, the anisotropy of the compressive and tensile strength regions of femur trabecular bone is analysed using quaternion wavelet transforms. Normal and abnormal femur trabecular bone radiographic images are considered for this study. The sub-anatomic regions, which include the compressive and tensile regions, are delineated using pre-processing procedures. These delineated regions are subjected to quaternion wavelet transforms, and statistical parameters are derived from the transformed images. These parameters are correlated with apparent porosity, which is derived from the strength regions. Anisotropy is also calculated from the transformed images and analyzed. Results show that the anisotropy values derived from the second and third phase components of the quaternion wavelet transform are distinct for normal and abnormal samples, with high statistical significance for both compressive and tensile regions. These investigations demonstrate that architectural anisotropy derived from QWT analysis is able to differentiate normal and abnormal samples.

  3. Dynamic Architecture. New Style Forming Aspects

    NASA Astrophysics Data System (ADS)

    Belyaeva, T. V.

    2017-11-01

    The article deals with methods of transforming buildings and structures in the light of modern solutions in dynamic architecture, and proposes a mechanism for the formation of a modern object. Such design methods are becoming increasingly relevant in view of today's trends, as the priority of dynamic architecture keeps increasing.

  4. Design and implementation in VHDL code of the two-dimensional fast Fourier transform for frequency filtering, convolution and correlation operations

    NASA Astrophysics Data System (ADS)

    Vilardy, Juan M.; Giacometto, F.; Torres, C. O.; Mattos, L.

    2011-01-01

    The two-dimensional Fast Fourier Transform (FFT 2D) is an essential tool in the analysis and processing of two-dimensional discrete signals, enabling a large number of applications. This article describes and synthesizes in VHDL code the FFT 2D with fixed-point binary representation using the Simulink HDL Coder programming tool of Matlab, showing a quick and easy way to handle overflow and underflow, to create registers, adders and multipliers for complex data in VHDL, and to generate test benches for verifying the generated code in the ModelSim tool. The main objective of developing the hardware architecture of the FFT 2D is the subsequent implementation of the following operations applied to images: frequency filtering, convolution and correlation. The description and synthesis of the hardware architecture use the XC3S1200E Spartan-3E family FPGA from Xilinx.
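
    The three target operations can be expressed compactly with floating-point NumPy FFTs (the VHDL core performs the same mathematics in fixed-point arithmetic); the kernel and mask below are illustrative.

```python
# Frequency filtering, circular convolution and circular correlation via FFT 2D.
import numpy as np

img = np.random.rand(256, 256)
kernel = np.zeros_like(img)
kernel[:3, :3] = 1.0 / 9.0                            # 3x3 mean filter

F, K = np.fft.fft2(img), np.fft.fft2(kernel)
convolution = np.fft.ifft2(F * K).real                # circular convolution
correlation = np.fft.ifft2(F * np.conj(K)).real       # circular correlation

Fs = np.fft.fftshift(F)                               # centre the spectrum
mask = np.zeros_like(img)
mask[128 - 16:128 + 16, 128 - 16:128 + 16] = 1.0      # ideal low-pass mask
filtered = np.fft.ifft2(np.fft.ifftshift(Fs * mask)).real
```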

  5. A VLSI architecture for simplified arithmetic Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.

    1992-01-01

    The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of lower complexity and improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent compared with the direct method.

  6. Model-Driven Development of Safety Architectures

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Pai, Ganesh; Whiteside, Iain

    2017-01-01

    We describe the use of model-driven development for safety assurance of a pioneering NASA flight operation involving a fleet of small unmanned aircraft systems (sUAS) flying beyond visual line of sight. The central idea is to develop a safety architecture that provides the basis for risk assessment and visualization within a safety case, the formal justification of acceptable safety required by the aviation regulatory authority. A safety architecture is composed from a collection of bow tie diagrams (BTDs), a practical approach to manage safety risk by linking the identified hazards to the appropriate mitigation measures. The safety justification for a given unmanned aircraft system (UAS) operation can have many related BTDs. In practice, however, each BTD is independently developed, which poses challenges with respect to incremental development, maintaining consistency across different safety artifacts when changes occur, and in extracting and presenting stakeholder specific information relevant for decision making. We show how a safety architecture reconciles the various BTDs of a system, and, collectively, provide an overarching picture of system safety, by considering them as views of a unified model. We also show how it enables model-driven development of BTDs, replete with validations, transformations, and a range of views. Our approach, which we have implemented in our toolset, AdvoCATE, is illustrated with a running example drawn from a real UAS safety case. The models and some of the innovations described here were instrumental in successfully obtaining regulatory flight approval.

  7. A Neurobehavioral Model of Flexible Spatial Language Behaviors

    PubMed Central

    Lipinski, John; Schneegans, Sebastian; Sandamirskaya, Yulia; Spencer, John P.; Schöner, Gregor

    2012-01-01

    We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-world visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that the system can extract spatial relations from visual scenes, select items based on relational spatial descriptions, and perform reference object selection in a single unified architecture. We further show that the performance of the system is consistent with behavioral data in humans by simulating results from 2 independent empirical studies, 1 spatial term rating task and 1 study of reference object selection behavior. The architecture we present thereby achieves a high degree of task flexibility under realistic stimulus conditions. At the same time, it also provides a detailed neural grounding for complex behavioral and cognitive processes. PMID:21517224

  8. A high-throughput two channel discrete wavelet transform architecture for the JPEG2000 standard

    NASA Astrophysics Data System (ADS)

    Badakhshannoory, Hossein; Hashemi, Mahmoud R.; Aminlou, Alireza; Fatemi, Omid

    2005-07-01

    The Discrete Wavelet Transform (DWT) is increasingly recognized in image and video compression standards, as indicated by its use in JPEG2000. The lifting scheme algorithm is an alternative DWT implementation that has lower computational complexity and reduced resource requirements. In the JPEG2000 standard two lifting-scheme-based filter banks are introduced: the 5/3 and 9/7. In this paper a high-throughput, two-channel DWT architecture for both of the JPEG2000 DWT filters is presented. The proposed pipelined architecture has two separate input channels that process the incoming samples simultaneously with minimum memory requirements for each channel. The architecture has been implemented in VHDL and synthesized on a Xilinx Virtex2 XCV1000. The proposed architecture applies the DWT to a 2K by 1K image at 33 fps with a 75 MHz clock frequency. This performance is achieved with 70% fewer resources than two independent single-channel modules. The high throughput and reduced resource requirements make this architecture a suitable choice for real-time applications such as Digital Cinema.

  9. Hospital enterprise Architecture Framework (Study of Iranian University Hospital Organization).

    PubMed

    Haghighathoseini, Atefehsadat; Bobarshad, Hossein; Saghafi, Fatehmeh; Rezaei, Mohammad Sadegh; Bagherzadeh, Nader

    2018-06-01

    Nowadays, developing smart and fast services for patients and transforming hospitals into modern hospitals is considered a necessity. In a world inundated with information systems, designing services based on information technology requires a suitable architecture framework. This paper aims to present a localized enterprise architecture framework for the Iranian university hospital. Using the two dimensions of implementability and appropriate characteristics, the 17 best enterprise architecture frameworks were chosen. As part of this effort, five criteria were selected according to experts' inputs, and the five frameworks ranking highest against these criteria were shortlisted. Then, 44 general characteristics were extracted from the 17 existing frameworks after careful study, and a questionnaire was written to establish the necessity of those characteristics using experts' opinions and the Delphi method. The result was eight important criteria. In the next step, using the AHP method, TOGAF was chosen for having appropriate characteristics and being readily implementable among the reference frameworks. An enterprise architecture framework was then designed with TOGAF as a conceptual model with its layers. For determining the parts of the architecture framework, a questionnaire with 145 questions was written based on a literature review and experts' opinions. The results showed that during the localization of TOGAF for Iran, 111 of the 145 parts were chosen and certified for use in the hospital. The results showed that TOGAF could be suitable for use in the hospital. A localized hospital enterprise architecture model is thus developed by customizing TOGAF for an Iranian hospital at eight levels and 11 parts. This new model could be applied in other Iranian hospitals. Copyright © 2018 Elsevier B.V. All rights reserved.
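
    The AHP step used for the framework selection reduces to extracting the principal eigenvector of a pairwise-comparison matrix; the comparison values below are hypothetical, not the study's judgements.

```python
# AHP priority vector: principal eigenvector of the pairwise-comparison matrix.
import numpy as np

# Saaty-scale judgements for three hypothetical selection criteria.
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
print(w / w.sum())  # relative weight of each criterion
```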

  10. Evolution of System Architectures: Where Do We Need to Fail Next?

    NASA Astrophysics Data System (ADS)

    Bermudez, Luis; Alameh, Nadine; Percivall, George

    2013-04-01

    Innovation requires testing and failing. Thomas Edison was right when he said "I have not failed. I've just found 10,000 ways that won't work". For innovation and improvement of standards to happen, service architectures have to be tested again and again. Within the Open Geospatial Consortium (OGC), testing of service architectures has occurred for the last 15 years. This talk will present the evolution of these service architectures and a possible future path. OGC is a global forum for the collaboration of developers and users of spatial data products and services, and for the advancement and development of international standards for geospatial interoperability. The OGC Interoperability Program is a series of hands-on, fast-paced engineering initiatives to accelerate the development and acceptance of OGC standards. Each initiative is organized in threads that provide focus under a particular theme. The first testbed, OGC Web Services phase 1, completed in 2003, had four threads: Common Architecture, Web Mapping, Sensor Web and Web Imagery Enablement. The Common Architecture was a cross-thread theme, to ensure that the Web Mapping and Sensor Web experiments built on a base common architecture. The architecture was based on the three main SOA components: broker, requestor and provider. It proposed a general service model defining service interactions and dependencies; a categorization of service types; registries to allow discovery and access of services; data models and encodings; and common services (WMS, WFS, WCS). For the latter, there was a clear distinction between the different services: data services (e.g. WMS), application services (e.g. coordinate transformation) and server-side client applications (e.g. image exploitation). The latest testbed, OGC Web Services phase 9, completed in 2012, had five threads: Aviation, Cross-Community Interoperability (CCI), Security and Services Interoperability (SSI), OWS Innovations and Compliance & Interoperability Testing & Evaluation (CITE). Compared to the first testbed, OWS-9 did not have a separate common architecture thread. Instead the emphasis was on brokering information models, securing them and making data available efficiently on mobile devices. The outcome is an architecture based on usability and non-intrusiveness while leveraging mediation of information models from different communities. This talk will use lessons learned from the evolution from OGC Testbed phase 1 to phase 9 to better understand how global and complex infrastructures evolve to support many communities, including the Earth System Science community.

  11. A new class of sonic composites

    NASA Astrophysics Data System (ADS)

    Munteanu, Ligia; Chiroiu, Veturia; Donescu, Ştefania; Brişan, Cornel

    2014-03-01

    Transformation acoustics opens a new avenue towards the architecture, modeling, and simulation of a new class of sonic composites whose scatterers are made of various materials, have various shapes, and are embedded in an epoxy matrix. The design of acoustic scatterers is based on the property of the Helmholtz equation of being invariant under a coordinate transformation, i.e., a specific spatial compression is equivalent to a new material in a new space. In this paper, noise suppression over a wide, full band gap of frequencies is discussed for spherical-shell scatterers made of auxetic materials (materials with negative Poisson's ratio). The original domain consists of spheres made from conventional foams with positive Poisson's ratio. The spatial compression is controlled by the coordinate transformation and leads to an equivalent domain filled with an auxetic material. The coordinate transformation is strongly supported by the manufacturing of auxetics, which is based on pore-size reduction through radial compression molds.
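
    The invariance property invoked here can be stated compactly. The following is the standard transformation-acoustics identity (with A the Jacobian of the coordinate map), given as background rather than as the paper's own derivation:

        % Form-invariance of the acoustic Helmholtz equation under a
        % coordinate map x' = x'(x) with Jacobian A = dx'/dx:
        \[
        \nabla \cdot \left( \rho^{-1} \nabla p \right) + \frac{\omega^{2}}{\kappa}\, p = 0
        \;\;\longrightarrow\;\;
        \nabla' \cdot \left( \rho'^{-1} \nabla' p' \right) + \frac{\omega^{2}}{\kappa'}\, p' = 0,
        \]
        \[
        \rho'^{-1} = \frac{A\, \rho^{-1} A^{\mathsf{T}}}{\det A},
        \qquad
        \kappa' = \kappa \, \det A .
        \]

    In other words, compressing the space occupied by a conventional-foam sphere is indistinguishable, to the wave, from filling the original region with a material of transformed density and modulus, which is how the auxetic equivalent domain arises.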

  12. Management Architecture and Solutions for French Tactical Systems

    DTIC Science & Technology

    2006-10-01

    Vincent COTTIGNIES, THALES Land & Joint Systems – Battlespace Transformation Center, 160 Boulevard de Valmy - BP 82, 92704 Colombes Cedex, FRANCE. ...planning, configuration and monitoring of systems. Then, given the limitations of existing management system architecture, an innovative design based on...

  13. Towards Implementation of a Generalized Architecture for High-Level Quantum Programming Language

    NASA Astrophysics Data System (ADS)

    Ameen, El-Mahdy M.; Ali, Hesham A.; Salem, Mofreh M.; Badawy, Mahmoud

    2017-08-01

    This paper investigates a novel architecture for the problem of quantum computer programming. A generalized architecture for a high-level quantum programming language is proposed, enabling the evolution from complicated quantum-based programming to high-level, quantum-independent programming. The proposed architecture receives high-level source code and automatically transforms it into the equivalent quantum representation. The architecture involves two layers, the programmer layer and the compilation layer, implemented across three main stages: pre-classification, classification, and post-classification. The basic building block of each stage is divided into subsequent phases, each implemented to perform the required transformations from one representation to another. A verification process using a case study investigated the ability of the compiler to perform all transformation processes. Experimental results showed that the proposed compiler achieves a correspondence correlation coefficient of approximately R ≈ 1 between outputs and targets. A clear improvement was also obtained in the time consumed by the optimization process compared to other techniques: in online optimization, the consumed time increases exponentially with the amount of accuracy needed, whereas in the proposed offline optimization process it increases only gradually.

  14. Building net-centric data strategies in support of a transformational MIW capability

    NASA Astrophysics Data System (ADS)

    Cramer, M. A.; Stack, J.

    2010-04-01

    The Mine Warfare (MIW) Community of Interest (COI) was established to develop data strategies in support of a future information-based architecture for naval MIW. As these strategies are developed and deployed, the ability of these data-focused efforts to enable technology insertion is becoming increasingly evident. This paper explores and provides concrete examples of the ways in which these data strategies support the technology insertion process for software-based systems and ultimately contribute to the establishment of an Open Business Model virtual environment. It is through the creation of such a collaborative research platform that a truly transformational MIW capability can be realized.

  15. NATO Human View Architecture and Human Networks

    NASA Technical Reports Server (NTRS)

    Handley, Holly A. H.; Houston, Nancy P.

    2010-01-01

    The NATO Human View is a system architectural viewpoint that focuses on the human as part of a system. Its purpose is to capture the human requirements and to inform on how the human impacts the system design. The viewpoint contains seven static models that cover different aspects of the human element, such as roles, tasks, constraints, training and metrics. It also includes a Human Dynamics component to perform simulations of the human system under design. One of the static models, termed Human Networks, focuses on the human-to-human communication patterns that occur as a result of ad hoc or deliberate team formation, especially for teams distributed across space and time. Parameters of human teams that affect system performance can be captured in this model. Human-centered aspects of networks, such as differences in operational tempo (sense of urgency), priorities (common goal), and team history (knowledge of the other team members), can be incorporated. The information captured in the Human Networks static model can then be included in the Human Dynamics component so that the impact of distributed teams is represented in the simulation. As the NATO militaries transform to a more networked force, the Human View architecture is an important tool that can be used to make recommendations on the proper mix of technological innovations and human interactions.

  16. Quantum Computing Architectural Design

    NASA Astrophysics Data System (ADS)

    West, Jacob; Simms, Geoffrey; Gyure, Mark

    2006-03-01

    Large scale quantum computers will invariably require scalable architectures in addition to high fidelity gate operations. Quantum computing architectural design (QCAD) addresses the problems of actually implementing fault-tolerant algorithms given physical and architectural constraints beyond those of basic gate-level fidelity. Here we introduce a unified framework for QCAD that enables the scientist to study the impact of varying error correction schemes, architectural parameters including layout and scheduling, and physical operations native to a given architecture. Our software package, aptly named QCAD, provides compilation, manipulation/transformation, multi-paradigm simulation, and visualization tools. We demonstrate various features of the QCAD software package through several examples.

  17. Greek classicism in living structure? Some deductive pathways in animal morphology.

    PubMed

    Zweers, G A

    1985-01-01

    Classical temples in ancient Greece show two deterministic, illusionistic principles of architecture that govern their functional design: geometric proportionalism and a set of illusion-strengthening rules within the proportionalism's "stochastic margin". Animal morphology, in its mechanistic-deductive revival, applies just one architectural principle, which is not always satisfactory. Whether a "Greek Classical" situation occurs in the architecture of living structure is to be investigated by extreme testing with deductive methods. Three deductive methods for explaining living structure in animal morphology are proposed: parts deduction, compromise deduction, and transformation deduction. The methods are based upon the systems concept of an organism, the flow chart for a functionalistic picture, and the network chart for a structuralistic picture, while "optimal design" serves as the architectural principle for living structure. These methods clearly show the high explanatory power of deductive methods in morphology, but they also make one open end explicit: neutral issues do exist. Full explanation of living structure requires three entries: functional design within architectural and transformational constraints. The transformational constraint necessarily brings in a stochastic component: a random variation that acts as a sort of "free management space". This variation must be a variation from the deterministic principle of optimal design, since any transformation requires room for plasticity in structure and action, and flexibility in role fulfilment. Nevertheless, the question finally arises whether a situation similar to that of Greek Classical temples exists for animal structure. This would mean that the random variation found when optimal design is used to explain structure comprises, apart from a stochastic part, real deviations that form yet another deterministic part. This deterministic part could be a set of rules that governs actualization in the "free management space".

  18. Enabling Data-as- a-Service (DaaS) - Biggest Challenge of Geoscience Australia

    NASA Astrophysics Data System (ADS)

    Bastrakova, I.; Kemp, C.; Car, N. J.

    2016-12-01

    Geoscience Australia (GA) is recognised and respected as the national repository and steward of multiple data collections of national significance, providing geoscience information, services and capability to the Australian Government, industry and stakeholders. Provision of Data-as-a-Service (DaaS) is both GA's key responsibility and its core business. Through the Science First Transformation Program, GA is significantly rethinking its data architecture, curation and access to support a Digital Science capability, for which DaaS is both a dependency and an underpinning of its implementation. DaaS, being a service, means its outputs can be delivered in multiple ways, providing users with data on demand in ready-for-consumption forms. Prebuilt data constructions can then be reused to allow self-serviced integration of data underpinned by dynamic query tools. In GA's context, examples of DaaS are the Australian Geoscience Data Cube, the Foundation Spatial Data Framework and data served through several Virtual Laboratories. We have implemented a three-layered architecture for DaaS in order to store and manage the data while honouring the semantics of the Scientific Data Models defined by subject-matter experts and GA's Enterprise Data Architecture, and while retaining that delivery flexibility. The foundation layer of DaaS is the Canonical Datasets, which are optimised for long-term data stewardship and curation: data is well structured, standardised, described and audited, and all data creation and editing happen within this layer. The middle Data Transformation layer handles transformation of data from the Canonical Datasets to the Data Integration layer, providing mechanisms for multi-format and multi-technology data transformation. The top Data Integration layer is optimised for data access: data can be easily reused and repurposed, and the data formats made available are optimised for scientific computing and adjusted for access by multiple applications, tools and libraries. Moving to DaaS enables GA to increase data readiness, generate new capabilities and be prepared for emerging technological challenges.

  19. Novel transformation-based response prediction of shear building using interval neural network

    NASA Astrophysics Data System (ADS)

    Chakraverty, S.; Sahoo, Deepti Moyi

    2017-04-01

    The present paper uses the powerful technique of interval neural networks (INNs) to simulate and estimate the structural response of multi-storey shear buildings subject to earthquake motion. The INN is first trained on real earthquake data, viz., the ground acceleration as input and the numerically generated responses of different floors of multi-storey buildings as output. To date, no model has existed to handle both positive and negative data in an INN. Accordingly, the bipolar data in [-1, 1] are first converted to unipolar form, i.e., to [0, 1], by means of a novel transformation, so that the training patterns can be handled in normalized form. Once training is done, the unipolar data are converted back to bipolar form using the inverse transformation. The trained INN architecture is then used to simulate and test the structural response of different floors for earthquake data of various intensities, and the responses predicted by the INN model are found to be good for practical purposes.
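
    The abstract does not give the transformation itself, but the round trip it describes can be illustrated with the standard affine map between bipolar and unipolar ranges; treat this as an assumed stand-in, not the paper's formula.

        import numpy as np

        def to_unipolar(x):
            """Map bipolar data in [-1, 1] to unipolar form in [0, 1]."""
            return (x + 1.0) / 2.0

        def to_bipolar(t):
            """Inverse map: recover bipolar data in [-1, 1] from [0, 1]."""
            return 2.0 * t - 1.0

        acc = np.array([-0.8, 0.3, 1.0])  # e.g. normalized ground accelerations
        assert np.allclose(to_bipolar(to_unipolar(acc)), acc)  # exact round trip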

  20. Transforming System Engineering through Model-Centric Engineering

    DTIC Science & Technology

    2015-01-31

    ...story that is being applied and evolved on the Jupiter Europa Orbiter (JEO) project [75], and we summarize some aspects of it here, because it goes beyond... JEO: Jupiter Europa Orbiter project at NASA/JPL; JSF: Joint Strike Fighter; JPL: Jet Propulsion Laboratory of NASA; Linux: an operating system created by... Adaptation of Flight-Critical Systems, Digital Avionics Systems Conference, 2009. [75] Rasmussen, R., R. Shishko, Jupiter Europa Orbiter Architecture...

  1. View-tolerant face recognition and Hebbian learning imply mirror-symmetric neural tuning to head orientation

    PubMed Central

    Leibo, Joel Z.; Liao, Qianli; Freiwald, Winrich A.; Anselmi, Fabio; Poggio, Tomaso

    2017-01-01

    The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and robust against identity-preserving transformations like depth-rotations [1, 2]. Current computational models of object recognition, including recent deep learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple-complex cells operations [3, 4, 5, 6]. Here we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules generate approximate invariance to identity-preserving transformations at the top level of the processing hierarchy. However, all past models tested failed to reproduce the most salient property of an intermediate representation of a three-level face-processing hierarchy in the brain: mirror-symmetric tuning to head orientation [7]. Here we demonstrate that one specific biologically-plausible Hebb-type learning rule generates mirror-symmetric tuning to bilaterally symmetric stimuli like faces at intermediate levels of the architecture and show why it does so. Thus the tuning properties of individual cells inside the visual stream appear to result from group properties of the stimuli they encode and to reflect the learning rules that sculpted the information-processing system within which they reside. PMID:27916522

  2. Self-Folded Gripper-Like Architectures from Stimuli-Responsive Bilayers.

    PubMed

    Abdullah, Arif M; Li, Xiuling; Braun, Paul V; Rogers, John A; Hsia, K Jimmy

    2018-06-19

    Self-folding microgrippers are an emerging class of smart structures that have widespread applications in medicine and micro/nanomanipulation. To achieve their functionalities, these architectures rely on spatially patterned hinges to transform into 3D configurations in response to an external stimulus. Incorporating hinges into the devices requires the processing of multiple layers which eventually increases the fabrication costs and actuation complexities. The goal of this work is to demonstrate that it is possible to achieve gripper-like configurations in an on-demand manner from simple planar bilayers that do not require hinges for their actuation. Finite element modeling of bilayers is performed to understand the mechanics behind their stimuli-responsive shape transformation behavior. The model predictions are then experimentally validated and axisymmetric gripper-like shapes are realized using millimeter-scale poly(dimethylsiloxane) bilayers that undergo differential swelling in organic solvents. Owing to the nature of the computational scheme which is independent of length scales and material properties, the guidelines reported here would be applicable to a diverse array of gripping systems and functional devices. Thus, this work not only demonstrates a simple route to fabricate functional microgrippers but also contributes to self-assembly in general. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Clinical data interoperability based on archetype transformation.

    PubMed

    Costa, Catalina Martínez; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2011-10-01

    The semantic interoperability between health information systems is a major challenge to improve the quality of clinical practice and patient safety. In recent years many projects have faced this problem and provided solutions based on specific standards and technologies in order to satisfy the needs of a particular scenario. Most of such solutions cannot be easily adapted to new scenarios, thus more global solutions are needed. In this work, we have focused on the semantic interoperability of electronic healthcare records standards based on the dual model architecture and we have developed a solution that has been applied to ISO 13606 and openEHR. The technological infrastructure combines reference models, archetypes and ontologies, with the support of Model-driven Engineering techniques. For this purpose, the interoperability infrastructure developed in previous work by our group has been reused and extended to cover the requirements of data transformation. Copyright © 2011 Elsevier Inc. All rights reserved.

  4. Stereopsis, vertical disparity and relief transformations.

    PubMed

    Gårding, J; Porrill, J; Mayhew, J E; Frisby, J P

    1995-03-01

    The pattern of retinal binocular disparities acquired by a fixating visual system depends on both the depth structure of the scene and the viewing geometry. This paper treats the problem of interpreting the disparity pattern in terms of scene structure without relying on estimates of fixation position from eye movement control and proprioception mechanisms. We propose a sequential decomposition of this interpretation process into disparity correction, which is used to compute three-dimensional structure up to a relief transformation, and disparity normalization, which is used to resolve the relief ambiguity to obtain metric structure. We point out that the disparity normalization stage can often be omitted, since relief transformations preserve important properties such as depth ordering and coplanarity. Based on this framework we analyse three previously proposed computational models of disparity processing: the Mayhew and Longuet-Higgins model, the deformation model and the polar angle disparity model. We show how these models are related, and argue that none of them can account satisfactorily for available psychophysical data. We therefore propose an alternative model, regional disparity correction. Using this model we derive predictions for a number of experiments based on vertical disparity manipulations, and compare them to available experimental data. The paper concludes with a summary and a discussion of the possible architectures and mechanisms underlying stereopsis in the human visual system.

  5. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding.

    PubMed

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

    The recent "deep learning revolution" in artificial neural networks had strong impact and widespread deployment for engineering applications, but the use of deep learning for neurocomputational modeling has been so far limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.

  6. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding

    PubMed Central

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

    The recent “deep learning revolution” in artificial neural networks had strong impact and widespread deployment for engineering applications, but the use of deep learning for neurocomputational modeling has been so far limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems. PMID:28377709
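
    For readers unfamiliar with the contrastive-divergence training mentioned above, the following is a minimal CD-1 update loop for a binary RBM; the layer sizes, learning rate, and toy data are illustrative assumptions, not the simulation settings of the study.

        import numpy as np

        rng = np.random.default_rng(0)
        n_vis, n_hid, lr = 64, 32, 0.05
        W = 0.01 * rng.standard_normal((n_vis, n_hid))
        b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)
        sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

        data = (rng.random((100, n_vis)) < 0.5).astype(float)  # toy binary inputs

        for v0 in data:
            # Positive phase: hidden activations driven by the data.
            ph0 = sigmoid(v0 @ W + b_h)
            h0 = (rng.random(n_hid) < ph0).astype(float)
            # Negative phase: one Gibbs step reconstructs the visible layer.
            pv1 = sigmoid(h0 @ W.T + b_v)
            ph1 = sigmoid(pv1 @ W + b_h)
            # CD-1 update: toward data statistics, away from model statistics.
            W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
            b_v += lr * (v0 - pv1)
            b_h += lr * (ph0 - ph1)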

  7. Feature-extracted joint transform correlation.

    PubMed

    Alam, M S

    1995-12-10

    A new technique for real-time optical character recognition that uses a joint transform correlator is proposed. This technique employs feature-extracted patterns for the reference image to detect a wide range of characters in one step. The proposed technique significantly enhances the processing speed when compared with the presently available joint transform correlator architectures and shows feasibility for multichannel joint transform correlation.

  8. The Palazzo della Ragione in Padua: Representation and Communication of Art, Architecture, and Astrology of a Civic Monument

    NASA Astrophysics Data System (ADS)

    Borgherini, M.; Garbin, E.

    2011-06-01

    Eight centuries of the history of art and of Padua's scientific and technological culture deposited on the stones and frescoes of its Palace of Law ("Palazzo della Ragione") make this great work of urban architecture a part of the city's collective identity. This "palimpsest", legible only to a restricted circle of specialists, should be accessible to a vaster public interested in understanding this object symbol of local culture. The project planned for interactive exploration on the web is a series of digital models, employing tomographic-endoscopic visualizations and, in future, multi-resolution images. The various models devised allow the visitor to superimpose the Palace's current conditions on the various transformations undergone over the centuries. Similarly, comparisons can be made between the astrological fresco cycle with maps of the heavens, cosmological hypotheses, ancient and contemporary astrological treatises, and the related exchange of knowledge between the Orient and the Occident.

  9. A model-driven approach for representing clinical archetypes for Semantic Web environments.

    PubMed

    Martínez-Costa, Catalina; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás; Maldonado, José Alberto

    2009-02-01

    The life-long clinical information of any person supported by electronic means configures his Electronic Health Record (EHR). This information is usually distributed among several independent and heterogeneous systems that may be syntactically or semantically incompatible. There are currently different standards for representing and exchanging EHR information among different systems. In advanced EHR approaches, clinical information is represented by means of archetypes. Most of these approaches use the Archetype Definition Language (ADL) to specify archetypes. However, ADL has some drawbacks when attempting to perform semantic activities in Semantic Web environments. In this work, Semantic Web technologies are used to specify clinical archetypes for advanced EHR architectures. The advantages of using the Ontology Web Language (OWL) instead of ADL are described and discussed in this work. Moreover, a solution combining Semantic Web and Model-driven Engineering technologies is proposed to transform ADL into OWL for the CEN EN13606 EHR architecture.

  10. Parallel-hierarchical processing and classification of laser beam profile images based on the GPU-oriented architecture

    NASA Astrophysics Data System (ADS)

    Yarovyi, Andrii A.; Timchenko, Leonid I.; Kozhemiako, Volodymyr P.; Kokriatskaia, Nataliya I.; Hamdi, Rami R.; Savchuk, Tamara O.; Kulyk, Oleksandr O.; Surtel, Wojciech; Amirgaliyev, Yedilkhan; Kashaganova, Gulzhan

    2017-08-01

    The paper deals with the insufficient productivity of existing computing means for large-image processing, which do not meet the modern requirements posed by the resource-intensive computing tasks of laser beam profiling. The research concentrated on one of the profiling problems, namely, real-time processing of spot images of the laser beam profile. The development of a theory of parallel-hierarchical transformation made it possible to produce models of high-performance parallel-hierarchical processes, as well as algorithms and software for their implementation, based on a GPU-oriented architecture using GPGPU technologies. Analysis of the performance of the suggested tools for processing and classifying laser beam profile images shows that they allow real-time processing of dynamic images of various sizes.

  11. A parallel VLSI architecture for a digital filter using a number theoretic transform

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Reed, I. S.; Yeh, C. S.; Shao, H. M.

    1983-01-01

    The advantages of a very-large-scale integration (VLSI) architecture for implementing a digital filter using Fermat number transforms (FNTs) are the following: it requires no multiplication, only additions and bit rotations; it alleviates the usual dynamic-range limitation for long-sequence FNTs; it utilizes the FNT and inverse FNT circuits 100% of the time; the lengths of the input data and filter sequences can be arbitrary and different; and it is regular, simple, and expandable, and as a consequence suitable for VLSI implementation.
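
    A toy transform over the Fermat prime F_3 = 2^8 + 1 = 257 shows why no multipliers are needed: the root of unity is 2, so every twiddle factor is a power of two, i.e., a bit rotation modulo 257. The naive O(N^2) form below is for illustration only; the parameters are assumptions, not taken from the paper.

        P, N, ALPHA = 257, 16, 2          # 2 has multiplicative order 16 mod 257

        def fnt(x, root):
            """Naive number-theoretic transform modulo P."""
            return [sum(x[n] * pow(root, n * k, P) for n in range(N)) % P
                    for k in range(N)]

        def ifnt(X):
            inv_root = pow(ALPHA, P - 2, P)   # modular inverse of the root
            inv_n = pow(N, P - 2, P)          # modular inverse of N
            return [(inv_n * v) % P for v in fnt(X, inv_root)]

        x = list(range(N))
        assert ifnt(fnt(x, ALPHA)) == x       # exact, round-off-free round trip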

  12. Novel structures for Discrete Hartley Transform based on first-order moments

    NASA Astrophysics Data System (ADS)

    Xiong, Jun; Zheng, Wenjuan; Wang, Hao; Liu, Jianguo

    2018-03-01

    The Discrete Hartley Transform (DHT) is an important tool in digital signal processing. In the present paper, the DHT is first transformed into a first-order moments-based form; a new fast algorithm is then proposed to calculate the first-order moments without multiplication. Based on this algorithm, a corresponding hardware architecture for the DHT is proposed that contains only shift operations and additions, with no need for multipliers or large memory. To verify its feasibility and effectiveness, the proposed design was implemented in a hardware description language and synthesized with Synopsys Design Compiler using the 0.18-μm SMIC library. A series of experiments showed that the proposed architecture performs better in terms of the product of hardware consumption and computation time.
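
    As a reference point for the moments-based architecture, the direct definition of the DHT uses the cas kernel, cas(t) = cos(t) + sin(t). The sketch below is the plain O(N^2) definition for checking outputs, not the paper's multiplier-free scheme.

        import numpy as np

        def dht(x):
            """Direct DHT: H[k] = sum_n x[n] * cas(2*pi*n*k/N)."""
            n = len(x)
            t = 2.0 * np.pi * np.outer(np.arange(n), np.arange(n)) / n
            return (np.cos(t) + np.sin(t)) @ x

        x = np.array([1.0, 2.0, 0.0, -1.0])
        assert np.allclose(dht(dht(x)) / len(x), x)  # DHT is self-inverse up to 1/N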

  13. Transformation as a Design Process and Runtime Architecture for High Integrity Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bespalko, S.J.; Winter, V.L.

    1999-04-05

    We have discussed two aspects of creating high-integrity software that greatly benefit from the availability of transformation technology, which in this case is manifest in the requirement for a sophisticated backtracking parser. First, because of the potential for correctly manipulating programs via small changes, an automated non-procedural transformation system can be a valuable tool for constructing high-assurance software. Second, modeling the process of translating data into information as a (perhaps context-dependent) grammar leads to an efficient, compact implementation. From a practical perspective, the transformation process should begin in the domain language in which a problem is initially expressed. Thus, in order for a transformation system to be practical, it must be flexible with respect to domain-specific languages. We have argued that transformation applied to specification results in a highly reliable system. We also attempted to briefly demonstrate that transformation technology applied to the runtime environment will result in a safe and secure system. We thus believe that sophisticated multi-lookahead backtracking parsing technology is central to the task of being in a position to demonstrate the existence of HIS.

  14. Nonlinear adaptive inverse control via the unified model neural network

    NASA Astrophysics Data System (ADS)

    Jeng, Jin-Tsong; Lee, Tsu-Tian

    1999-03-01

    In this paper, we propose a new nonlinear adaptive inverse control scheme via a unified model neural network. In order to overcome the nonsystematic design and long training times of nonlinear adaptive inverse control, we propose the approximate transformable technique to obtain a Chebyshev Polynomials Based Unified Model (CPBUM) neural network for feedforward/recurrent neural networks. The proposed method requires less training time to obtain an inverse model. Finally, we apply the proposed method to control a magnetic bearing system. The experimental results show that the proposed nonlinear adaptive inverse control architecture provides greater flexibility and better performance in controlling magnetic bearing systems.

  15. Opening the Black Box: Exploring the Effect of Transformation on Online Service Delivery in Local Governments

    NASA Astrophysics Data System (ADS)

    van Veenstra, Anne Fleur; Zuurmond, Arre

    To enhance the quality of their online service delivery, many government organizations seek to transform their organization beyond merely setting up a front office. This transformation includes elements such as the formation of service delivery chains, the adoption of a management strategy supporting process orientation, and the implementation of enterprise architecture. This paper explores whether undertaking this transformation has a positive effect on the quality of online service delivery, using data gathered from seventy local governments. We found that having an externally oriented management strategy in place, adopting enterprise architecture, aligning information systems to business, and sharing activities between processes and departments are positively related to the quality of online service delivery. We recommend further research to find out whether dimensions of organizational development also have an effect on online service delivery in the long term.

  16. Note: Tesla transformer damping

    NASA Astrophysics Data System (ADS)

    Reed, J. L.

    2012-07-01

    Unexpected heavy damping in the two-winding Tesla pulse transformer is shown to be due to small primary inductances. A small primary inductance is a necessary condition of operability, but it is also a refractory inefficiency. A 30% performance loss is demonstrated using a typical "spiral strip" transformer. The loss is investigated by examining damping terms added to the transformer's governing equations. A significant alteration of the transformer's architecture is suggested to mitigate these losses. Experimental and simulated data comparing the two- and three-winding transformers are cited to support the suggestion.

  17. From grid cells and visual place cells to multimodal place cell: a new robotic architecture

    PubMed Central

    Jauffret, Adrien; Cuperlier, Nicolas; Gaussier, Philippe

    2015-01-01

    In the present study, a new architecture for the generation of grid cells (GC) was implemented on a real robot. In order to test this model, a simple place cell (PC) model merging visual PC activity and GC activity was developed. GC were first built from a simple "several to one" projection (similar to a modulo operation) performed on a neural field coding for path integration (PI). Robotics experiments raised several practical and theoretical issues. To limit the large angular drift of PI, head-direction information was introduced in addition to the robot's proprioceptive signal coming from the wheel rotation. Next, a simple associative learning between visual place cells and the neural field coding for PI was used to recalibrate the PI and to limit its drift. Finally, the parameters controlling the shape of the PC built from the GC were studied. Increasing the number of GC obviously improves the shape of the resulting place field. Yet other parameters, such as the discretization factor of PI or the lateral interactions between GC, can have an important impact on the place field quality and avoid the need for a very large number of GC. In conclusion, our results show that our GC model based on the compression of PI is congruent with neurobiological studies on rodents. GC firing patterns can be the result of a modulo transformation of PI information. We argue that such a transformation may be a general property of the connectivity from the cortex to the entorhinal cortex. Our model predicts that the effect of similar transformations on other kinds of sensory information (visual, tactile, auditory, etc.) in the entorhinal cortex should be observed. Consequently, a given EC cell should react to non-contiguous input configurations in non-spatial conditions according to the projection from its different inputs. PMID:25904862
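
    The "several to one" compression can be pictured with a few lines of code: each path-integration (PI) unit projects onto the grid-cell (GC) unit given by its index modulo a small base, so GC activity repeats periodically along a straight trajectory. The field size and modulus below are illustrative assumptions, not the model's parameters.

        import numpy as np

        n_pi, modulus = 120, 8     # 120 PI units compressed onto 8 GC units

        def grid_activity(pi_index):
            """Modulo-like projection: PI unit i drives GC unit i % modulus."""
            gc = np.zeros(modulus)
            gc[pi_index % modulus] = 1.0
            return gc

        # Sweeping a straight trajectory, the winning GC recurs every
        # `modulus` steps: the periodic, grid-like firing pattern.
        fields = np.array([grid_activity(i) for i in range(n_pi)])
        print(np.argmax(fields, axis=1))   # 0, 1, ..., 7, 0, 1, ... periodic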

  18. Feasibility study, software design, layout and simulation of a two-dimensional Fast Fourier Transform machine for use in optical array interferometry

    NASA Technical Reports Server (NTRS)

    Boriakoff, Valentin

    1994-01-01

    The goal of this project was the feasibility study of a particular architecture of a digital signal processing machine operating in real time which could do in a pipeline fashion the computation of the fast Fourier transform (FFT) of a time-domain sampled complex digital data stream. The particular architecture makes use of simple identical processors (called inner product processors) in a linear organization called a systolic array. Through computer simulation the new architecture to compute the FFT with systolic arrays was proved to be viable, and computed the FFT correctly and with the predicted particulars of operation. Integrated circuits to compute the operations expected of the vital node of the systolic architecture were proven feasible, and even with a 2 micron VLSI technology can execute the required operations in the required time. Actual construction of the integrated circuits was successful in one variant (fixed point) and unsuccessful in the other (floating point).

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.

    This report proposes a method for resolving the kinematic redundancy of a serial-link manipulator moving in a three-dimensional workspace. The underspecified problem of solving for the joint velocities based on the classical kinematic velocity model is transformed into a well-specified problem. This is accomplished by augmenting the original model with additional equations that relate a new vector variable, quantifying the redundant degrees of freedom (DOF), to the joint velocities. The resulting augmented system yields a well-specified solution for the joint velocities. Methods for selecting the variable quantifying the redundant DOF and the transformation matrix relating it to the joint velocities are presented so as to obtain a minimum-Euclidean-norm solution for the joint velocities. The approach is also applied to the problem of resolving the kinematic redundancy at the acceleration level. Upon resolving the kinematic redundancy, a rigid-body dynamical model governing the gross motion of the manipulator is derived. A control architecture is suggested which, according to the model, decouples the Cartesian-space DOF and the redundant DOF.
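
    The augmentation can be sketched numerically: stacking rows T that span the null space of the Jacobian J makes the system square, and setting the redundant rates to zero recovers the minimum-Euclidean-norm solution. J and the commanded velocity below are illustrative stand-ins, not the report's example.

        import numpy as np

        J = np.array([[1.0, 0.5, 0.2, 0.0],    # 3 task DOF ...
                      [0.0, 1.0, 0.3, 0.1],
                      [0.2, 0.0, 1.0, 0.4]])   # ... for a 4-joint arm
        xd = np.array([0.1, -0.2, 0.05])       # commanded task-space velocity

        _, _, Vt = np.linalg.svd(J)
        T = Vt[3:, :]                          # rows spanning the null space of J
        A = np.vstack([J, T])                  # square, well-specified system
        qd = np.linalg.solve(A, np.concatenate([xd, np.zeros(T.shape[0])]))

        assert np.allclose(qd, np.linalg.pinv(J) @ xd)   # minimum-norm solution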

  20. Local wavelet transform: a cost-efficient custom processor for space image compression

    NASA Astrophysics Data System (ADS)

    Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier

    2002-11-01

    Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that produces the same transformed images as the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. These features make the LWT appropriate for space image compression, where high throughput, small memory, low complexity, low power and push-broom processing are important requirements.

  1. E-Governance and Service Oriented Computing Architecture Model

    NASA Astrophysics Data System (ADS)

    Tejasvee, Sanjay; Sarangdevot, S. S.

    2010-11-01

    E-Governance is the effective application of information and communication technology (ICT) in government processes to accomplish safe and reliable information lifecycle management. The lifecycle of information involves various processes: capturing, preserving, manipulating and delivering information. E-Governance aims to transform governance so that it is more transparent, reliable, participatory and accountable from the citizen's point of view. The purpose of this paper is to propose an e-governance model focused on Service Oriented Computing Architecture (SOCA) that covers the combination of information and services provided by the government, innovation, identification of optimal service delivery to citizens, and implementation in a transparent and accountable way. The paper also focuses on the e-government service manager as a key element of the service-oriented computing model, providing a dynamically extensible structural design in which every area or branch can bring in innovative services. At the heart of the paper is a conceptual model that enables e-government communication among trade and business, citizens and government, and autonomous bodies.

  2. Building the Ferretome

    PubMed Central

    Sukhinin, Dmitrii I.; Engel, Andreas K.; Manger, Paul; Hilgetag, Claus C.

    2016-01-01

    Databases of structural connections of the mammalian brain, such as CoCoMac (cocomac.g-node.org) or BAMS (https://bams1.org), are valuable resources for the analysis of brain connectivity and the modeling of brain dynamics in species such as the non-human primate or the rodent, and have also contributed to the computational modeling of the human brain. Another animal model that is widely used in electrophysiological or developmental studies is the ferret; however, no systematic compilation of brain connectivity is currently available for this species. Thus, we have started developing a database of anatomical connections and architectonic features of the ferret brain, the Ferret(connect)ome, www.Ferretome.org. The Ferretome database has adapted essential features of the CoCoMac methodology and legacy, such as the CoCoMac data model. This data model was simplified and extended in order to accommodate new data modalities that were not represented previously, such as the cytoarchitecture of brain areas. The Ferretome uses a semantic parcellation of brain regions as well as a logical brain map transformation algorithm (objective relational transformation, ORT). The ORT algorithm was also adopted for the transformation of architecture data. The database is being developed in MySQL and has been populated with literature reports on tract-tracing observations in the ferret brain using a custom-designed web interface that allows efficient and validated simultaneous input and proofreading by multiple curators. The database is equipped with a non-specialist web interface. This interface can be extended to produce connectivity matrices in several formats, including a graphical representation superimposed on established ferret brain maps. An important feature of the Ferretome database is the possibility to trace back entries in connectivity matrices to the original studies archived in the system. Currently, the Ferretome contains 50 reports on connections comprising 20 injection reports with more than 150 labeled source and target areas, the majority reflecting connectivity of subcortical nuclei and 15 descriptions of regional brain architecture. We hope that the Ferretome database will become a useful resource for neuroinformatics and neural modeling, and will support studies of the ferret brain as well as facilitate advances in comparative studies of mesoscopic brain connectivity. PMID:27242503

  3. Building the Ferretome.

    PubMed

    Sukhinin, Dmitrii I; Engel, Andreas K; Manger, Paul; Hilgetag, Claus C

    2016-01-01

    Databases of structural connections of the mammalian brain, such as CoCoMac (cocomac.g-node.org) or BAMS (https://bams1.org), are valuable resources for the analysis of brain connectivity and the modeling of brain dynamics in species such as the non-human primate or the rodent, and have also contributed to the computational modeling of the human brain. Another animal model that is widely used in electrophysiological or developmental studies is the ferret; however, no systematic compilation of brain connectivity is currently available for this species. Thus, we have started developing a database of anatomical connections and architectonic features of the ferret brain, the Ferret(connect)ome, www.Ferretome.org. The Ferretome database has adapted essential features of the CoCoMac methodology and legacy, such as the CoCoMac data model. This data model was simplified and extended in order to accommodate new data modalities that were not represented previously, such as the cytoarchitecture of brain areas. The Ferretome uses a semantic parcellation of brain regions as well as a logical brain map transformation algorithm (objective relational transformation, ORT). The ORT algorithm was also adopted for the transformation of architecture data. The database is being developed in MySQL and has been populated with literature reports on tract-tracing observations in the ferret brain using a custom-designed web interface that allows efficient and validated simultaneous input and proofreading by multiple curators. The database is equipped with a non-specialist web interface. This interface can be extended to produce connectivity matrices in several formats, including a graphical representation superimposed on established ferret brain maps. An important feature of the Ferretome database is the possibility to trace back entries in connectivity matrices to the original studies archived in the system. Currently, the Ferretome contains 50 reports on connections comprising 20 injection reports with more than 150 labeled source and target areas, the majority reflecting connectivity of subcortical nuclei and 15 descriptions of regional brain architecture. We hope that the Ferretome database will become a useful resource for neuroinformatics and neural modeling, and will support studies of the ferret brain as well as facilitate advances in comparative studies of mesoscopic brain connectivity.

  4. A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.

    1988-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time- and transform-domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time- and transform-domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.
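
    The Euclidean step solves what is usually called the key equation; it is stated here in the standard errors-only form (t the error-correction capability, S the syndrome polynomial) as background to the errors-and-erasures variant described above.

        % Running the extended Euclidean algorithm on x^{2t} and S(x) until
        % the remainder has degree < t yields the locator Lambda and the
        % evaluator Omega simultaneously:
        \[
        \Lambda(x)\, S(x) \;\equiv\; \Omega(x) \pmod{x^{2t}},
        \qquad
        \deg \Omega(x) < t, \quad \deg \Lambda(x) \le t .
        \]

    Replacing S(x) with the Forney syndromes and seeding the recursion with the erasure locator polynomial extends the same machinery to errata, which is the substitution of initial conditions the abstract describes.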

  5. Polar exponential sensor arrays unify iconic and Hough space representation

    NASA Technical Reports Server (NTRS)

    Weiman, Carl F. R.

    1990-01-01

    The log-polar coordinate system, inherent in both polar exponential sensor arrays and log-polar remapped video imagery, is identical to the coordinate system of its corresponding Hough transform parameter space. The resulting unification of iconic and Hough domains simplifies computation for line recognition and eliminates the slope quantization problems inherent in the classical Cartesian Hough transform. The geometric organization of the algorithm is more amenable to massively parallel architectures than that of the Cartesian version. The neural architecture of the human visual cortex meets the geometric requirements to execute 'in-place' log-Hough algorithms of the kind described here.
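
    The claimed identity of the two coordinate systems can be seen directly: a line with normal parameters (rho, phi) satisfies r cos(theta - phi) = rho, so in log-polar coordinates (u, theta) = (ln r, theta) it becomes a fixed template curve that is merely translated by the Hough parameters.

        \[
        u \;=\; \ln \rho \;-\; \ln \cos(\theta - \varphi),
        \]
        % i.e. the single template u = -ln cos(theta) shifted by (phi, ln rho);
        % line detection reduces to shift-invariant template matching in the
        % same (u, theta) frame, which is why iconic and Hough domains unify.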

  6. Direct Associations or Internal Transformations? Exploring the Mechanisms Underlying Sequential Learning Behavior

    PubMed Central

    Gureckis, Todd M.; Love, Bradley C.

    2009-01-01

    We evaluate two broad classes of cognitive mechanisms that might support the learning of sequential patterns. According to the first, learning is based on the gradual accumulation of direct associations between events based on simple conditioning principles. The other view describes learning as the process of inducing the transformational structure that defines the material. Each of these learning mechanisms predict differences in the rate of acquisition for differently organized sequences. Across a set of empirical studies, we compare the predictions of each class of model with the behavior of human subjects. We find that learning mechanisms based on transformations of an internal state, such as recurrent network architectures (e.g., Elman, 1990), have difficulty accounting for the pattern of human results relative to a simpler (but more limited) learning mechanism based on learning direct associations. Our results suggest new constraints on the cognitive mechanisms supporting sequential learning behavior. PMID:20396653

  7. View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation.

    PubMed

    Leibo, Joel Z; Liao, Qianli; Anselmi, Fabio; Freiwald, Winrich A; Poggio, Tomaso

    2017-01-09

    The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and robust against identity-preserving transformations, like depth rotations [1, 2]. Current computational models of object recognition, including recent deep-learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple-complex cells operations [3-6]. Here, we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules generate approximate invariance to identity-preserving transformations at the top level of the processing hierarchy. However, all past models tested failed to reproduce the most salient property of an intermediate representation of a three-level face-processing hierarchy in the brain: mirror-symmetric tuning to head orientation [7]. Here, we demonstrate that one specific biologically plausible Hebb-type learning rule generates mirror-symmetric tuning to bilaterally symmetric stimuli, like faces, at intermediate levels of the architecture and show why it does so. Thus, the tuning properties of individual cells inside the visual stream appear to result from group properties of the stimuli they encode and to reflect the learning rules that sculpted the information-processing system within which they reside. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Design of SIP transformation server for efficient media negotiation

    NASA Astrophysics Data System (ADS)

    Pack, Sangheon; Paik, Eun Kyoung; Choi, Yanghee

    2001-07-01

    Voice over IP (VoIP) is one of the advanced services supported by next-generation mobile communication. VoIP should support the various media formats and terminals that exist together. This heterogeneous environment may prevent diverse users from establishing VoIP sessions among themselves. To solve this problem an efficient media negotiation mechanism is required. In this paper, we propose an efficient media negotiation architecture using a transformation server and an Intelligent Location Server (ILS). The transformation server is an extended Session Initiation Protocol (SIP) proxy server; it can modify an unacceptable session INVITE message into an acceptable one using the ILS. The ILS is a directory server based on the Lightweight Directory Access Protocol (LDAP) that keeps the user's location information and available media information. The proposed architecture can eliminate the unnecessary response and re-INVITE messages of the standard SIP architecture. It takes only 1.5 round-trip times to negotiate two different media types, while the standard media negotiation mechanism takes 2.5 round-trip times. The extra processing time in message handling is negligible in comparison to the reduced round-trip time. The experimental results show that the session setup time in the proposed architecture is less than the setup time in standard SIP. These results verify that the proposed media negotiation mechanism is more efficient in solving diversity problems.

  9. Interior Reconstruction Using the 3d Hough Transform

    NASA Astrophysics Data System (ADS)

    Dumitru, R.-C.; Borrmann, D.; Nüchter, A.

    2013-02-01

    Laser scanners are often used to create accurate 3D models of buildings for civil engineering purposes, but the process of manually vectorizing a 3D point cloud is time consuming and error-prone (Adan and Huber, 2011). Therefore, the need arises to characterize and quantify complex environments automatically, posing challenges for data analysis. This paper presents a system for 3D modeling that detects planes in 3D point clouds, based on which the scene is reconstructed at a high architectural level by automatically removing clutter and foreground data. The implemented software detects openings, such as windows and doors, and completes the 3D model by inpainting.

  10. Cultural heritage conservation and communication by digital modeling tools. Case studies: minor architectures of the Thirties in the Turin area

    NASA Astrophysics Data System (ADS)

    Bruno, A., Jr.; Spallone, R.

    2015-08-01

    Between the end of the 1920s and the beginning of World War II, Turin, like most Italian cities, was endowed by the fascist regime with many new buildings to guarantee its visibility and to control the territory: the fascist party's main houses and the local ones. The style adopted for these buildings was inspired by the guidelines of the Modern Movement, then being spread by a generation of architects such as Le Corbusier, Gropius and Mendelsohn. At the end of the war many buildings were converted to other functions, leading to heavy transformations that did not respect their original worth; others were demolished. Today it is possible to rebuild those lost architectures in their original form, as conceived by their architects on paper (and in their minds). This process can guarantee three-dimensional perception, the authenticity of the materials and the placement within the Turin urban tissue, using static and dynamic digital representation systems. The "three-dimensional re-drawing" of the projects, conceived as a heuristic practice devoted to revealing the original idea of each project, is inserted into a digital model of the urban and natural context as we can experience it today, to simulate the perceptive effects that the building could stir up now. The modeling work is the basis for producing videos able to explore the relationship between the environment and the "re-built architectures", describing, with synthetic movie techniques, their main formal and perceptive roots. The model represents a scientific product that can be incorporated into a virtual archive of cultural goods to preserve the collective memory of the architectural and urban past image of Turin.

  11. Predicting the practice effects on the blood oxygenation level-dependent (BOLD) function of fMRI in a symbolic manipulation task

    NASA Astrophysics Data System (ADS)

    Qin, Yulin; Sohn, Myeong-Ho; Anderson, John R.; Stenger, V. Andrew; Fissell, Kate; Goode, Adam; Carter, Cameron S.

    2003-04-01

    Based on adaptive control of thought-rational (ACT-R), a cognitive architecture for cognitive modeling, researchers have developed an information-processing model to predict the blood oxygenation level-dependent (BOLD) response of functional MRI in symbol manipulation tasks. As an extension of this research, the current event-related functional MRI study investigates the effect of relatively extensive practice on the activation patterns of the related brain regions. The task involved performing transformations on equations in an artificial algebra system. This paper shows that base-level activation learning in the ACT-R theory can predict the change of the BOLD response with practice in a left prefrontal region reflecting retrieval of information. In contrast, practice has relatively little effect on the form of the BOLD response in the parietal region reflecting imagined transformations of the equation or the motor region reflecting manual programming.
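
    The prediction scheme in this line of ACT-R/fMRI work convolves a module's engagement with a gamma-shaped hemodynamic response; the following form, with magnitude M, scale s and shape a as fitted constants, is the usual statement and is given here as background rather than as this paper's exact parameterization.

        \[
        B(t) \;=\; M \int_{0}^{t} d(\tau)\, h(t - \tau)\, d\tau,
        \qquad
        h(t) \;=\; \left( \frac{t}{s} \right)^{a} e^{-t/s},
        \]
        % where d(tau) = 1 while the module (e.g., retrieval) is engaged and
        % 0 otherwise; practice shortens the engagement d, which changes the
        % predicted BOLD curve in the prefrontal region.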

  12. In vivo multiphoton and second harmonic generation microscopy of epithelial carcinogenesis

    NASA Astrophysics Data System (ADS)

    Vargas, Gracie; Shilagard, Tuya; Sun, Ju; Motamedi, Massoud

    2006-02-01

    Multiphoton microscopy and second harmonic generation microscopy were used to image epithelial changes in a hamster model for oral malignant transformation. In vivo imaging was performed to characterize morphometric alterations in normal and precancerous regions. Morphometric measurements such as cell nucleus area and epithelial thicknesses obtained from MPM-SHGM were in excellent agreement with histology obtained after in vivo imaging. MPM-SHGM was highly sensitive to spectroscopic and architectural alterations throughout carcinogenesis, showing statistically significant changes in morphology. MPM revealed hyperkeratosis, nuclear enlargement/crowding in dysplasia, and immune cell infiltration. SHGM revealed alterations in submucosal architecture, with a decrease in SHG density evident during early stages of precancer. By combining MPM with SHGM, the basement membrane could be identified in normal, hyperplasia, and dysplasia samples and in some cases of early invasion. The combined technique of MPM-SHGM has the potential to serve as an adjunct to biopsy for assessing precancerous changes and will be investigated further for that purpose. Additionally, the method can provide spatiotemporal assessment of early neoplastic changes in order to elucidate the stages of transformation in vivo and could be used to assess therapeutic efficacy of agents being tested for the treatment of epithelial precancers/cancer.

  13. Complex network view of evolving manifolds

    NASA Astrophysics Data System (ADS)

    da Silva, Diamantino C.; Bianconi, Ginestra; da Costa, Rui A.; Dorogovtsev, Sergey N.; Mendes, José F. F.

    2018-03-01

    We study complex networks formed by triangulations and higher-dimensional simplicial complexes representing closed evolving manifolds. In particular, for triangulations, the set of possible transformations of these networks is restricted by the condition that at each step, all the faces must be triangles. Stochastic application of these operations leads to random networks with different architectures. We perform extensive numerical simulations and explore the geometries of growing and equilibrium complex networks generated by these transformations and their local structural properties. This characterization includes the Hausdorff and spectral dimensions of the resulting networks, their degree distributions, and various structural correlations. Our results reveal a rich zoo of architectures and geometries of these networks, some of which appear to be small worlds while others are finite dimensional with Hausdorff dimension equal to or higher than the original dimensionality of their simplices. The range of spectral dimensions of the evolving triangulations turns out to be from about 1.4 to infinity. Our models include simplicial complexes representing manifolds with evolving topologies, for example, an h-holed torus with a progressively growing number of holes. This evolving graph demonstrates features of a small-world network and has a particularly heavy-tailed degree distribution.

  14. Integration of the Gene Ontology into an object-oriented architecture.

    PubMed

    Shegogue, Daniel; Zheng, W Jim

    2005-05-10

    To standardize gene product descriptions, a formal vocabulary defined as the Gene Ontology (GO) has been developed. GO terms have been categorized into biological processes, molecular functions, and cellular components. However, there is no single representation that integrates all the terms into one cohesive model. Furthermore, GO definitions have little information explaining the underlying architecture that forms these terms, such as the dynamic and static events occurring in a process. In contrast, object-oriented models have been developed to show dynamic and static events. A portion of the TGF-beta signaling pathway, which is involved in numerous cellular events including cancer, differentiation and development, was used to demonstrate the feasibility of integrating the Gene Ontology into an object-oriented model. Using object-oriented models we have captured the static and dynamic events that occur during a representative GO process, "transforming growth factor-beta (TGF-beta) receptor complex assembly" (GO:0007181). We demonstrate that the utility of GO terms can be enhanced by object-oriented technology, and that the GO terms can be integrated into an object-oriented model by serving as a basis for the generation of object functions and attributes.

  15. Integration of the Gene Ontology into an object-oriented architecture

    PubMed Central

    Shegogue, Daniel; Zheng, W Jim

    2005-01-01

    Background To standardize gene product descriptions, a formal vocabulary defined as the Gene Ontology (GO) has been developed. GO terms have been categorized into biological processes, molecular functions, and cellular components. However, there is no single representation that integrates all the terms into one cohesive model. Furthermore, GO definitions have little information explaining the underlying architecture that forms these terms, such as the dynamic and static events occurring in a process. In contrast, object-oriented models have been developed to show dynamic and static events. A portion of the TGF-beta signaling pathway, which is involved in numerous cellular events including cancer, differentiation and development, was used to demonstrate the feasibility of integrating the Gene Ontology into an object-oriented model. Results Using object-oriented models we have captured the static and dynamic events that occur during a representative GO process, "transforming growth factor-beta (TGF-beta) receptor complex assembly" (GO:0007181). Conclusion We demonstrate that the utility of GO terms can be enhanced by object-oriented technology, and that the GO terms can be integrated into an object-oriented model by serving as a basis for the generation of object functions and attributes. PMID:15885145
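    To make the mapping concrete, here is a toy object-oriented rendering of GO:0007181, in which static structure becomes attributes and dynamic assembly events become methods (class and instance names are illustrative, not taken from the paper):

    class Protein:
        """Minimal object model: binding state is a static attribute,
        binding itself is a dynamic event (a method)."""
        def __init__(self, name):
            self.name = name
            self.bound_to = []

        def bind(self, other):
            self.bound_to.append(other)
            other.bound_to.append(self)

    # TGF-beta receptor complex assembly rendered as object interactions:
    ligand = Protein("TGF-beta")
    type_ii = Protein("TGFBR2")
    type_i = Protein("TGFBR1")
    ligand.bind(type_ii)   # ligand binds the type II receptor
    type_ii.bind(type_i)   # the complex recruits the type I receptor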

  16. Analyses Made to Order: Using Transformation to Rapidly Configure a Multidisciplinary Environment

    NASA Technical Reports Server (NTRS)

    Cole, Bjorn

    2013-01-01

    Aerospace problems are highly multidisciplinary. Four or more major disciplines are involved in analyzing any particular vehicle. Moreover, the choice of implementation technology for various subsystems can lead to a change of leading domain or a reformulation of the driving equations. An excellent example is the change of expertise required to consider aircraft built from composite versus metallic structures, or those propelled by chemical versus electrical thrusters. Another example is the major reconfiguration of handling and stability equations with different control surface configurations (e.g., canards, t-tail vs. four-post tail). Combinatorial problems are also commonplace whenever a major system is to be designed. If there are only 5 attributes of a design to consider, with 4 options each, this is already 4^5 = 1024 combinations. Adding just 5 more such attributes explodes the space to over one million. Even generous assumptions, such as the idea that only 10% of the combinations are physically feasible, can only contain the problem for so long. To make matters worse, the sheer number of combinations is only the beginning: combining the size of the trade space with the need to reformulate the design problem for many of the possibilities makes life exponentially more difficult. Advances in software modeling approaches have led to the development of model-driven architecture. This approach uses the transformation of models into inferred models (e.g., inferred execution traces from state machines) or into skeletons for code generation. When this emphasis on transformation is applied to aerospace, it becomes possible to consolidate redundant information specified across multiple domain models into a unified system model. Further, it becomes possible to overcome the combinatorial burden of manually combining the equations governing a given component technology when specifying integrated system behavior. Transformations from a system specification, combined with a system-analysis mapping specification, enable one-click combination of domain analyses. This is a flexibility that has been missing from many engineering codes, which often entangle design specification and physical examination much more than is required to conduct the analysis. This capability has been investigated and cultivated within the DARPA F6 program by a team of JPL and Phoenix Integration engineers building the Adaptable Systems Design and Analysis (ASDA) framework. By embracing system modeling with SysML and the Query-View-Transformation (QVT) language, the ASDA team has been able to build a flexible, easily reconfigurable framework for building up and solving large tradespaces. Examples of application and lessons learned in building the framework are described in this paper. In addition, the motivation is laid out for tool vendors to develop open model description standards while maintaining competitive advantage through proprietary algorithms and approaches. These standards are also compared to the underpinnings of model-driven architecture and the OMG standards of the Meta-Object Facility (MOF), SysML, and QVT.
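    The combinatorial growth quoted above is easy to reproduce; a small sketch enumerating a hypothetical five-attribute trade space (attribute and option names are illustrative, not from the paper):

    from itertools import product

    # Five design attributes with four options each: 4**5 = 1024 combinations.
    attributes = {
        "structure":  ["composite", "metallic", "hybrid", "sandwich"],
        "propulsion": ["chemical", "electric", "solar-thermal", "cold-gas"],
        "tail":       ["canard", "t-tail", "four-post", "v-tail"],
        "power":      ["solar", "battery", "fuel-cell", "RTG"],
        "comms":      ["s-band", "x-band", "ka-band", "optical"],
    }
    tradespace = list(product(*attributes.values()))
    print(len(tradespace))  # 1024; five more 4-option attributes -> 4**10 > 1e6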

  17. Parallel consensual neural networks.

    PubMed

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied to classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times, and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
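    The consensus step itself is simple to sketch; a minimal illustration of weighting and combining stage-network outputs (the data transforms and the training of the stage networks are not modeled here):

    import numpy as np

    def consensual_decision(stage_outputs, weights):
        """Weighted consensus over the class-probability outputs of
        several stage networks.

        stage_outputs: list of (n_samples, n_classes) arrays.
        weights: per-stage reliability weights, summing to 1."""
        combined = sum(w * p for w, p in zip(weights, stage_outputs))
        return combined.argmax(axis=1)

    # Example: three stage networks classifying 5 samples into 4 classes,
    # with the first stage network weighted as most reliable.
    p1, p2, p3 = (np.random.dirichlet(np.ones(4), size=5) for _ in range(3))
    labels = consensual_decision([p1, p2, p3], weights=[0.5, 0.3, 0.2])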

  18. Fast underdetermined BSS architecture design methodology for real time applications.

    PubMed

    Mopuri, Suresh; Reddy, P Sreenivasa; Acharyya, Amit; Naik, Ganesh R

    2015-01-01

    In this paper, we propose a high-speed architecture design methodology for the Under-determined Blind Source Separation (UBSS) algorithm using our recently proposed high-speed Discrete Hilbert Transform (DHT), targeting real-time applications. In the UBSS algorithm, unlike typical BSS, the number of sensors is less than the number of sources, which is of more interest in real-time applications. The DHT architecture has been implemented using a sub-matrix multiplication method to compute an M-point DHT, which reuses an N-point architecture recursively, where M is an integer multiple of N. The DHT architecture and the state-of-the-art architecture are coded in VHDL for a 16-bit word length, and ASIC implementation is carried out using UMC 90-nm technology at VDD = 1 V and a 1 MHz clock frequency. The implementation and experimental comparison results show that the proposed DHT design is two times faster than the state-of-the-art architecture.
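    For checking results in software, the DHT itself has a compact FFT-based reference form (assuming an even-length real input; this is a numerical reference, not the paper's sub-matrix hardware architecture):

    import numpy as np

    def discrete_hilbert(x):
        """FFT-based discrete Hilbert transform of a real sequence of
        even length: Y(k) = -j*sgn(k)*X(k), where the DC and Nyquist
        bins carry no contribution."""
        N = len(x)
        X = np.fft.fft(x)
        mult = np.zeros(N, dtype=complex)
        mult[1:N // 2] = -1j    # positive frequencies
        mult[N // 2 + 1:] = 1j  # negative frequencies
        return np.fft.ifft(X * mult).real

    Applied to cos(2*pi*n/N), this returns sin(2*pi*n/N), the expected Hilbert pair.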

  19. Exploration of Potential Future Fleet Architectures

    DTIC Science & Technology

    2005-07-01

    alternative architectures are those espoused by the OFT sponsoring office: flexibility, adaptability, agility, speed, and information dominance through...including naval forces, which we used. The OFT advocates flexibility, adaptability, agility, speed, and information dominance through networking...challenges and transnational threats. In future conflicts, the Navy has plans to expand strike power, realize information dominance , and transform methods

  20. Thinking inside the Box

    ERIC Educational Resources Information Center

    Demski, Jennifer

    2012-01-01

    When one thinks of 21st century schools, one thinks of geometric modern architecture, sustainable building materials, and high-tech modular classrooms. It's rare, though, that a district has the space or the money to build that school from the ground up. Instead, the challenge for most is the transformation of the 20th century architecture to…

  1. The Oregon Experiment.

    ERIC Educational Resources Information Center

    Alexander, Christopher; And Others

    This is the third of three works that lay the basis for a new approach to architecture, building, and planning. At the core of these books is the idea that people should design for themselves their own houses, streets, and communities. Although the idea implies a radical transformation of the architectural profession, it comes from the observation…

  2. Enabling All-Access Mobility for Planetary Exploration Vehicles via Transformative Reconfiguration

    NASA Technical Reports Server (NTRS)

    Ferguson, Scott; Mazzoleni, Andre

    2016-01-01

    Effective large-scale exploration of planetary surfaces requires robotic vehicles capable of mobility across chaotic terrain. Characterized by a combination of ridges, cracks and valleys, the demands of this environment can cause spacecraft to experience significant reductions in operating footprint or performance, or even result in total system loss. Significantly increasing the scientific return of an interplanetary mission is facilitated by architectures capable of real-time configuration changes that go beyond active suspensions while concurrently meeting system mass, power, and cost constraints. This Phase 1 report systematically explores how in-service architecture changes can expand system capabilities and mission opportunities. A foundation for concept generation is supplied by four Martian mission profiles spanning chasms, ice fields, craters and rocky terrain. A fifth mission profile centered on Near Earth Object exploration is also introduced. Concept generation is directed using four transformation principles (a taxonomy developed by the engineering design community to explain the cause of an architecture change) together with existing brainstorming techniques. This allowed early conceptual sketches of architecture changes to be organized by the principle driving the greatest increase in mission performance capability.

  3. Evolutionary Space Communications Architectures for Human/Robotic Exploration and Science Missions

    NASA Technical Reports Server (NTRS)

    Bhasin, Kul; Hayden, Jeffrey L.

    2004-01-01

    NASA enterprises have growing needs for an advanced, integrated, communications infrastructure that will satisfy the capabilities needed for multiple human, robotic and scientific missions beyond 2015. Furthermore, the reliable, multipoint infrastructure is required to provide continuous, maximum coverage of areas of concentrated activities, such as around Earth and in the vicinity of the Moon or Mars, with access made available on demand of the human or robotic user. As a first step, the definitions of NASA's future space communications and networking architectures are underway. Architectures that describe the communications and networking needed between the nodal regions consisting of Earth, Moon, Lagrange points, Mars, and the places of interest within the inner and outer solar system have been laid out. These architectures will need the modular flexibility that must be included in the communication and networking technologies to enable the infrastructure to grow in capability with time and to transform from supporting robotic missions in the solar system to supporting human ventures to Mars, Jupiter, Jupiter's moons, and beyond. The protocol-based networking capability seamlessly connects the backbone, access, inter-spacecraft and proximity network elements of the architectures employed in the infrastructure. In this paper, we present the summary of NASA's near and long term needs and capability requirements that were gathered by participative methods. We describe an integrated architecture concept and model that will enable communications for evolutionary robotic and human science missions. We then define the communication nodes, their requirements, and various options to connect them.

  4. Evolutionary Space Communications Architectures for Human/Robotic Exploration and Science Missions

    NASA Astrophysics Data System (ADS)

    Bhasin, Kul; Hayden, Jeffrey L.

    2004-02-01

    NASA enterprises have growing needs for an advanced, integrated, communications infrastructure that will satisfy the capabilities needed for multiple human, robotic and scientific missions beyond 2015. Furthermore, the reliable, multipoint infrastructure is required to provide continuous, maximum coverage of areas of concentrated activities, such as around Earth and in the vicinity of the Moon or Mars, with access made available on demand of the human or robotic user. As a first step, the definitions of NASA's future space communications and networking architectures are underway. Architectures that describe the communications and networking needed between the nodal regions consisting of Earth, Moon, Lagrange points, Mars, and the places of interest within the inner and outer solar system have been laid out. These architectures will need the modular flexibility that must be included in the communication and networking technologies to enable the infrastructure to grow in capability with time and to transform from supporting robotic missions in the solar system to supporting human ventures to Mars, Jupiter, Jupiter's moons, and beyond. The protocol-based networking capability seamlessly connects the backbone, access, inter-spacecraft and proximity network elements of the architectures employed in the infrastructure. In this paper, we present the summary of NASA's near and long term needs and capability requirements that were gathered by participative methods. We describe an integrated architecture concept and model that will enable communications for evolutionary robotic and human science missions. We then define the communication nodes, their requirements, and various options to connect them.

  5. Advantages of Brahms for Specifying and Implementing a Multiagent Human-Robotic Exploration System

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2003-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, all-terrain vehicles, a robotic assistant, crew in a local habitat, and the mission support team. Software processes ('agents'), implemented in the Brahms language, run on multiple mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions, making operations safer and more efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so that the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a runtime system. Thus, Brahms provides a language, engine, and system builder's toolkit for specifying and implementing multiagent systems.

  6. EMG-Based Estimation of Limb Movement Using Deep Learning With Recurrent Convolutional Neural Networks.

    PubMed

    Xia, Peng; Hu, Jie; Peng, Yinghong

    2017-10-25

    A novel model based on deep learning is proposed to estimate kinematic information for myoelectric control from multi-channel electromyogram (EMG) signals. The neural information of limb movement is embedded in EMG signals that are influenced by all kinds of factors. In order to overcome the negative effects of variability in signals, the proposed model employs the deep architecture combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The EMG signals are transformed to time-frequency frames as the input to the model. The limb movement is estimated by the model that is trained with the gradient descent and backpropagation procedure. We tested the model for simultaneous and proportional estimation of limb movement in eight healthy subjects and compared it with support vector regression (SVR) and CNNs on the same data set. The experimental studies show that the proposed model has higher estimation accuracy and better robustness with respect to time. The combination of CNNs and RNNs can improve the model performance compared with using CNNs alone. The model of deep architecture is promising in EMG decoding and optimization of network structures can increase the accuracy and robustness. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
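    The exact network configuration is not given in the abstract; below is a minimal PyTorch sketch of the CNN-plus-RNN pattern it describes, with illustrative layer sizes (assumptions, not the authors' settings):

    import torch
    import torch.nn as nn

    class EMGDecoder(nn.Module):
        """A CNN encodes each time-frequency frame, an RNN models
        dependencies across frames, and a linear head regresses
        the limb kinematics."""

        def __init__(self, n_channels=8, n_freq=32, n_outputs=2):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),  # pool over the frequency axis
            )
            self.rnn = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
            self.head = nn.Linear(64, n_outputs)

        def forward(self, frames):
            # frames: (batch, time, channels, freq_bins)
            b, t, c, f = frames.shape
            z = self.cnn(frames.reshape(b * t, c, f)).squeeze(-1)  # (b*t, 32)
            z, _ = self.rnn(z.reshape(b, t, -1))                   # (b, t, 64)
            return self.head(z)                                    # (b, t, n_outputs)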

  7. Object-oriented approach to the automatic segmentation of bones from pediatric hand radiographs

    NASA Astrophysics Data System (ADS)

    Shim, Hyeonjoon; Liu, Brent J.; Taira, Ricky K.; Hall, Theodore R.

    1997-04-01

    The purpose of this paper is to develop a robust and accurate method that automatically segments phalangeal and epiphyseal bones from digital pediatric hand radiographs exhibiting various stages of growth. The development of this system draws principles from object-oriented design, model-guided analysis, and feedback control. A system architecture called 'the object segmentation machine' was implemented incorporating these design philosophies. The system is aided by a knowledge base where all model contours and other information, such as age, race, and sex, are stored. These models include object structure models, shape models, 1-D wrist profiles, and gray-level histogram models. Shape analysis is performed first by using an arc-length orientation transform to break down a given contour into elementary segments and curves. Then an interpretation tree is used as an inference engine to map known model contour segments to data contour segments obtained from the transform. Spatial and anatomical relationships among contour segments act as constraints from the shape model. These constraints aid in generating a list of candidate matches, and the candidate match with the highest confidence is chosen as the current intermediate result. Verification of intermediate results is performed by a feedback control loop.

  8. Representation and presentation of requirements knowledge

    NASA Technical Reports Server (NTRS)

    Johnson, W. L.; Feather, Martin S.; Harris, David R.

    1992-01-01

    An approach to the representation and presentation of knowledge used in ARIES, an experimental requirements/specification environment, is described. The approach applies the notion of a representation architecture to the domain of software engineering and incorporates a strong coupling to a transformation system. It is characterized by a single, highly expressive underlying representation, interfaced simultaneously to multiple presentations, each with notations of differing degrees of expressivity. This enables analysts to use multiple languages for describing systems and have these descriptions yield a single consistent model of the system.

  9. The Local Ensemble Transform Kalman Filter with the Weather Research and Forecasting Model: Experiments with Real Observations

    NASA Astrophysics Data System (ADS)

    Miyoshi, Takemasa; Kunii, Masaru

    2012-03-01

    The local ensemble transform Kalman filter (LETKF) is implemented with the Weather Research and Forecasting (WRF) model, and real observations are assimilated to assess the newly developed WRF-LETKF system. The WRF model is a widely used mesoscale numerical weather prediction model, and the LETKF is an ensemble Kalman filter (EnKF) algorithm particularly efficient on parallel computer architectures. This study aims to provide the basis for future research on mesoscale data assimilation using the WRF-LETKF system, an additional testbed to the existing EnKF systems with the WRF model used in previous studies. The particular LETKF system adopted in this study is based on the system initially developed in 2004 and has been continuously improved through theoretical studies and wide application to many kinds of dynamical models, including realistic geophysical models. The most recent and important improvements include an adaptive covariance inflation scheme which considers the spatial and temporal inhomogeneity of inflation parameters. Experiments show that the LETKF successfully assimilates real observations and that adaptive inflation is advantageous. Additional experiments with various ensemble sizes show that using more ensemble members improves the analyses consistently.
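    For readers unfamiliar with the scheme, the LETKF analysis step can be summarized (following the standard formulation of Hunt et al., 2007, rather than anything given in this abstract) for an ensemble of $k$ members with background perturbation matrix $X^b$ and inflation factor $\rho$:

    \tilde{P}^a = \left[ \frac{(k-1)}{\rho} I + (H X^b)^\top R^{-1} (H X^b) \right]^{-1}

    \bar{x}^a = \bar{x}^b + X^b \tilde{P}^a (H X^b)^\top R^{-1} \left( y^o - H \bar{x}^b \right)

    X^a = X^b \left[ (k-1) \tilde{P}^a \right]^{1/2}

    These small-dimensional solves are computed independently in each local region, which is what makes the algorithm well suited to parallel architectures; the adaptive scheme mentioned above estimates $\rho$ locally in space and time.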

  10. PONS2train: tool for testing the MLP architecture and local training methods for runoff forecast

    NASA Astrophysics Data System (ADS)

    Maca, P.; Pavlasek, J.; Pech, P.

    2012-04-01

    The purpose of the presented poster is to introduce PONS2train, developed for runoff prediction via the multilayer perceptron (MLP). The software application enables the implementation of 12 different MLP transfer functions, the comparison of 9 local training algorithms and, finally, the evaluation of MLP performance via 17 selected model evaluation metrics. The PONS2train software is written in the C++ programming language. Its implementation consists of 4 classes. The NEURAL_NET and NEURON classes implement the MLP, and the CRITERIA class estimates model evaluation metrics and evaluates model performance on testing and validation datasets. The DATA_PATTERN class prepares the validation, testing and calibration datasets. The software application uses the LAPACK, BLAS and ARMADILLO C++ linear algebra libraries. PONS2train implements the first-order local optimization algorithms: standard on-line and batch back-propagation with learning rate combined with momentum and its variants with a regularization term, Rprop, and standard batch back-propagation with variable momentum and learning rate. The second-order local training algorithms are the Levenberg-Marquardt algorithm with and without regularization and four variants of scaled conjugate gradients. Other important PONS2train features are multi-run support, weight saturation control, early stopping of training, and MLP weight analysis. Weight initialization is done via two different methods: random sampling from a uniform distribution on an open interval, or the Nguyen-Widrow method. Data patterns can be transformed via linear and nonlinear transformations. The runoff forecast case study focuses on the PONS2train implementation and shows different aspects of MLP training, MLP architecture estimation, neural network weight analysis and model uncertainty estimation.
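    Of the first-order methods listed, batch back-propagation with momentum is the simplest to sketch; here is a minimal numpy version for one hidden layer (sizes, learning rate and momentum values are illustrative, not PONS2train defaults):

    import numpy as np

    def train_mlp_momentum(X, y, n_hidden=8, lr=0.05, mom=0.9, epochs=500, seed=0):
        """One-hidden-layer MLP (tanh hidden units, linear output, MSE
        loss) trained by batch backpropagation with a momentum term.
        X: (n, d) inputs; y: (n, 1) targets."""
        rng = np.random.default_rng(seed)
        W1 = rng.uniform(-0.5, 0.5, (X.shape[1], n_hidden))
        W2 = rng.uniform(-0.5, 0.5, (n_hidden, 1))
        V1 = np.zeros_like(W1); V2 = np.zeros_like(W2)
        for _ in range(epochs):
            H = np.tanh(X @ W1)                       # forward pass
            err = H @ W2 - y                          # output error
            g2 = H.T @ err / len(X)                   # gradients
            g1 = X.T @ ((err @ W2.T) * (1 - H**2)) / len(X)
            V2 = mom * V2 - lr * g2; W2 += V2         # momentum updates
            V1 = mom * V1 - lr * g1; W1 += V1
        return W1, W2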

  11. Mashup Model and Verification Using Mashup Processing Network

    NASA Astrophysics Data System (ADS)

    Zahoor, Ehtesham; Perrin, Olivier; Godart, Claude

    Mashups are defined to be lightweight Web applications aggregating data from different Web services, built using ad-hoc composition and not concerned with long-term stability and robustness. In this paper we present a pattern-based approach, called Mashup Processing Network (MPN). The idea is based on the Event Processing Network and is intended to facilitate the creation, modeling and verification of mashups. MPN provides a view of how different actors interact in mashup development, namely the producer, the consumer, the mashup processing agent and the communication channels. It also supports modeling transformations and validations of data, and offers validation of both functional and non-functional requirements, such as reliable messaging and security, which are key issues within the enterprise context. We have enriched the model with a set of processing operations and categorized them into data composition, transformation and validation categories. These processing operations can be seen as a set of patterns facilitating the mashup development process. MPN also paves the way for realizing a Mashup Oriented Architecture, where mashups along with services are used as building blocks for application development.

  12. Input relegation control for gross motion of a kinematically redundant manipulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.

    1992-10-01

    This report proposes a method for resolving the kinematic redundancy of a serial link manipulator moving in a three-dimensional workspace. The underspecified problem of solving for the joint velocities based on the classical kinematic velocity model is transformed into a well-specified problem. This is accomplished by augmenting the original model with additional equations which relate a new vector variable quantifying the redundant degrees of freedom (DOF) to the joint velocities. The resulting augmented system yields a well-specified solution for the joint velocities. Methods for selecting the redundant-DOF-quantifying variable and the transformation matrix relating it to the joint velocities are presented so as to obtain a minimum Euclidean norm solution for the joint velocities. The approach is also applied to the problem of resolving the kinematic redundancy at the acceleration level. Upon resolving the kinematic redundancy, a rigid body dynamical model governing the gross motion of the manipulator is derived. A control architecture is suggested which, according to the model, decouples the Cartesian space DOF and the redundant DOF.
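    In symbols (a minimal sketch of the augmentation idea, with notation chosen here rather than taken from the report): the kinematic velocity model $\dot{x} = J(q)\,\dot{q}$ is underspecified because $J$ has more columns than rows, so it is augmented with $z = G(q)\,\dot{q}$, where $z$ quantifies the redundant DOF:

    \begin{bmatrix} \dot{x} \\ z \end{bmatrix} =
    \begin{bmatrix} J(q) \\ G(q) \end{bmatrix} \dot{q},
    \qquad
    \dot{q} =
    \begin{bmatrix} J(q) \\ G(q) \end{bmatrix}^{-1}
    \begin{bmatrix} \dot{x} \\ z \end{bmatrix}.

    Choosing the rows of $G$ to span the null space of $J$ and setting $z = 0$ makes the stacked matrix invertible and recovers the minimum Euclidean norm solution for $\dot{q}$.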

  13. Transforming Space Missions into Service Oriented Architectures

    NASA Technical Reports Server (NTRS)

    Mandl, Dan; Frye, Stuart; Cappelaere, Pat

    2006-01-01

    This viewgraph presentation reviews the vision of sensor web enablement via a Service Oriented Architecture (SOA). A generic example is given of a user finding a service through the Web and initiating a request for the desired observation. The parts that comprise this system and how they interact are reviewed, as are the advantages of using an SOA.

  14. Dismantling the Built Drawing: Working with Mood in Architectural Design

    ERIC Educational Resources Information Center

    Teal, Randall

    2010-01-01

    From the late Middle Ages onward an emphasis on the rational and the technical aspects of design and design drawing gained hold of architectural practice. In this transformation, the phenomenon of mood has been frequently overlooked or seen as something to be added on to a design; yet the fundamental grounding of mood, as described in Martin…

  15. A Neural Circuit for Angular Velocity Computation

    PubMed Central

    Snider, Samuel B.; Yuste, Rafael; Packer, Adam M.

    2010-01-01

    In one of the most remarkable feats of motor control in the animal world, some Diptera, such as the housefly, can accurately execute corrective flight maneuvers in tens of milliseconds. These reflexive movements are achieved by the halteres, gyroscopic force sensors, in conjunction with rapidly tunable wing steering muscles. Specifically, the mechanosensory campaniform sensilla located at the base of the halteres transduce and transform rotation-induced gyroscopic forces into information about the angular velocity of the fly's body. But how exactly does the fly's neural architecture generate the angular velocity from the lateral strain forces on the left and right halteres? To explore potential algorithms, we built a neuromechanical model of the rotation detection circuit. We propose a neurobiologically plausible method by which the fly could accurately separate and measure the three-dimensional components of an imposed angular velocity. Our model assumes a single sign-inverting synapse and formally resembles some models of directional selectivity by the retina. Using multidimensional error analysis, we demonstrate the robustness of our model under a variety of input conditions. Our analysis reveals the maximum information available to the fly given its physical architecture and the mathematics governing the rotation-induced forces at the haltere's end knob. PMID:21228902

  16. A neural circuit for angular velocity computation.

    PubMed

    Snider, Samuel B; Yuste, Rafael; Packer, Adam M

    2010-01-01

    In one of the most remarkable feats of motor control in the animal world, some Diptera, such as the housefly, can accurately execute corrective flight maneuvers in tens of milliseconds. These reflexive movements are achieved by the halteres, gyroscopic force sensors, in conjunction with rapidly tunable wing steering muscles. Specifically, the mechanosensory campaniform sensilla located at the base of the halteres transduce and transform rotation-induced gyroscopic forces into information about the angular velocity of the fly's body. But how exactly does the fly's neural architecture generate the angular velocity from the lateral strain forces on the left and right halteres? To explore potential algorithms, we built a neuromechanical model of the rotation detection circuit. We propose a neurobiologically plausible method by which the fly could accurately separate and measure the three-dimensional components of an imposed angular velocity. Our model assumes a single sign-inverting synapse and formally resembles some models of directional selectivity by the retina. Using multidimensional error analysis, we demonstrate the robustness of our model under a variety of input conditions. Our analysis reveals the maximum information available to the fly given its physical architecture and the mathematics governing the rotation-induced forces at the haltere's end knob.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.

    A rigid body model for the entire system which accounts for the load distribution scheme proposed in Part 1, as well as for the dynamics of the manipulators and the kinematic constraints, is derived in the joint space. A technique is presented for expressing the object dynamics in terms of the joint variables of both manipulators, which leads to a positive definite and symmetric inertia matrix. The model is then transformed to obtain reduced order equations of motion and a separate set of equations which govern the behavior of the internal contact forces. The control architecture is applied to the model, which results in the explicit decoupling of the position and internal contact force-controlled degrees of freedom (DOF).

  18. Practical Sub-Nyquist Sampling via Array-Based Compressed Sensing Receiver Architecture

    DTIC Science & Technology

    2016-07-10

    different array elements at different sub-Nyquist sampling rates. Signal processing inspired by the sparse fast Fourier transform allows for signal...reconstruction algorithms can be computationally demanding (REF). The related sparse Fourier transform algorithms aim to reduce the processing time necessary to...compute the DFT of frequency-sparse signals [7]. In particular, the sparse fast Fourier transform (sFFT) achieves processing time better than the

  19. Efficient Phase Unwrapping Architecture for Digital Holographic Microscopy

    PubMed Central

    Hwang, Wen-Jyi; Cheng, Shih-Chang; Cheng, Chau-Jern

    2011-01-01

    This paper presents a novel phase unwrapping architecture for accelerating the computational speed of digital holographic microscopy (DHM). A fast Fourier transform (FFT) based phase unwrapping algorithm providing a minimum squared error solution is adopted for hardware implementation because of its simplicity and robustness to noise. The proposed architecture is realized in a pipeline fashion to maximize the throughput of the computation. Moreover, the numbers of hardware multipliers and dividers are minimized to reduce hardware costs. The proposed architecture is used as custom user logic in a system on programmable chip (SOPC) for physical performance measurement. Experimental results reveal that the proposed architecture is effective in accelerating computation while consuming few hardware resources, making it well suited to embedded DHM systems. PMID:22163688
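    The minimum-squared-error unwrapper has a well-known software form (Ghiglia and Romero's least-squares method); the sketch below is a DCT-based variant of it, offered as a numerical reference rather than the paper's pipelined FFT hardware:

    import numpy as np
    from scipy.fft import dctn, idctn

    def wrap(a):
        return (a + np.pi) % (2 * np.pi) - np.pi

    def unwrap_lsq(psi):
        """Unweighted least-squares phase unwrapping: solve the Poisson
        equation whose source is the divergence of the wrapped phase
        gradients, diagonalized by the DCT (Neumann boundaries)."""
        M, N = psi.shape
        dx = wrap(np.diff(psi, axis=0))   # wrapped row differences
        dy = wrap(np.diff(psi, axis=1))   # wrapped column differences
        rho = np.zeros((M, N))            # discrete divergence
        rho[:-1, :] += dx
        rho[1:, :] -= dx
        rho[:, :-1] += dy
        rho[:, 1:] -= dy
        D = dctn(rho, norm='ortho')
        i = np.arange(M)[:, None]
        j = np.arange(N)[None, :]
        denom = 2 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2)
        denom[0, 0] = 1.0  # the DC term is arbitrary (sets the mean phase)
        return idctn(D / denom, norm='ortho')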

  20. Sustainable Architecture in the Context of Regional Activities

    NASA Astrophysics Data System (ADS)

    Sołkeiewicz-Kos, Nina

    2017-10-01

    The relationship between man and the surrounding cultural environment directs attention in urban and architectural design to the realm of interdisciplinary research. Such research should lead to architectural and urban solutions which provide aesthetic satisfaction, generate social bonds and a sense of identity, and maintain the specificity of the local building environment, where tradition and the context of the surroundings are the starting point for creating a sustainable living environment. The problems presented here focus on the analysis of formal, functional and spatial solutions in which materials and technology were selected in an optimal way. The continuation of the subject concerns the relationship between the use of local urban, architectural, material and technological solutions and the quality of a cultural space that meets the principles of sustainable development. The adaptation and transformation of old techniques and traditional materials to create contemporary designs is one of the forms of experimentation encountered in contemporary architecture. Its economic, social and ecological aspects are realised in the form of: satisfying the needs of the local community, renewal and maintenance of the surrounding buildings to modern standards, and use of local materials and available space. This means striving to design and transform the space already in use while reducing the impact on the environment. The analysed buildings and urban spaces attempt to answer whether the strategies applied in architectural, technological and material solutions establish the identity of the place and meet users’ expectations.

  1. An architecture for consolidating multidimensional time-series data onto a common coordinate grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shippert, Tim; Gaustad, Krista

    Consolidating measurement data for use by data models or in inter-comparison studies frequently requires transforming the data onto a common grid. Standard methods for interpolating multidimensional data are often not appropriate for data with non-homogeneous dimensionality, and are hard to implement in a consistent manner for different datastreams. These challenges are increased when dealing with the automated procedures necessary for use with continuous, operational datastreams. In this paper we introduce a method of applying a series of one-dimensional transformations to merge data onto a common grid, examine the challenges of ensuring consistent application of data consolidation methods, present a framework for addressing those challenges, and describe the implementation of such a framework for the Atmospheric Radiation Measurement (ARM) program.
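    As an illustration of one such one-dimensional transformation (a sketch of the general idea, not the ARM implementation; multidimensional data would be handled by applying 1-D steps like this along each dimension in series):

    import numpy as np

    def to_common_grid(t_src, values, t_grid):
        """Bin-average a time series onto the cells of a common time
        grid, falling back to linear interpolation for cells with no
        samples. t_src and t_grid are assumed increasing."""
        edges = np.concatenate([[t_grid[0] - (t_grid[1] - t_grid[0]) / 2],
                                (t_grid[:-1] + t_grid[1:]) / 2,
                                [t_grid[-1] + (t_grid[-1] - t_grid[-2]) / 2]])
        idx = np.digitize(t_src, edges) - 1   # cell index for each sample
        out = np.full(len(t_grid), np.nan)
        for k in range(len(t_grid)):
            hits = values[idx == k]
            out[k] = hits.mean() if hits.size else np.interp(t_grid[k], t_src, values)
        return out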

  2. The Analytic Information Warehouse (AIW): a Platform for Analytics using Electronic Health Record Data

    PubMed Central

    Post, Andrew R.; Kurc, Tahsin; Cholleti, Sharath; Gao, Jingjing; Lin, Xia; Bornstein, William; Cantrell, Dedra; Levine, David; Hohmann, Sam; Saltz, Joel H.

    2013-01-01

    Objective To create an analytics platform for specifying and detecting clinical phenotypes and other derived variables in electronic health record (EHR) data for quality improvement investigations. Materials and Methods We have developed an architecture for an Analytic Information Warehouse (AIW). It supports transforming data represented in different physical schemas into a common data model, specifying derived variables in terms of the common model to enable their reuse, computing derived variables while enforcing invariants and ensuring correctness and consistency of data transformations, long-term curation of derived data, and export of derived data into standard analysis tools. It includes software that implements these features and a computing environment that enables secure high-performance access to and processing of large datasets extracted from EHRs. Results We have implemented and deployed the architecture in production locally. The software is available as open source. We have used it as part of hospital operations in a project to reduce rates of hospital readmission within 30 days. The project examined the association of over 100 derived variables representing disease and co-morbidity phenotypes with readmissions in five years of data from our institution’s clinical data warehouse and the UHC Clinical Database (CDB). The CDB contains administrative data from over 200 hospitals that are in academic medical centers or affiliated with such centers. Discussion and Conclusion A widely available platform for managing and detecting phenotypes in EHR data could accelerate the use of such data in quality improvement and comparative effectiveness studies. PMID:23402960

  3. DFT algorithms for bit-serial GaAs array processor architectures

    NASA Technical Reports Server (NTRS)

    Mcmillan, Gary B.

    1988-01-01

    Systems and Processes Engineering Corporation (SPEC) has developed an innovative array processor architecture for computing Fourier transforms and other commonly used signal processing algorithms. This architecture is designed to extract the highest possible array performance from state-of-the-art GaAs technology. SPEC's architectural design includes a high performance RISC processor implemented in GaAs, along with a Floating Point Coprocessor and a unique Array Communications Coprocessor, also implemented in GaAs technology. Together, these data processors represent the latest in technology, both from an architectural and implementation viewpoint. SPEC has examined numerous algorithms and parallel processing architectures to determine the optimum array processor architecture. SPEC has developed an array processor architecture with integral communications ability to provide maximum node connectivity. The Array Communications Coprocessor embeds communications operations directly in the core of the processor architecture. A Floating Point Coprocessor architecture has been defined that utilizes Bit-Serial arithmetic units, operating at very high frequency, to perform floating point operations. These Bit-Serial devices reduce the device integration level and complexity to a level compatible with state-of-the-art GaAs device technology.

  4. Digital and optical shape representation and pattern recognition; Proceedings of the Meeting, Orlando, FL, Apr. 4-6, 1988

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Editor)

    1988-01-01

    The present conference discusses topics in pattern-recognition correlator architectures, digital stereo systems, geometric image transformations and their applications, topics in pattern recognition, filter algorithms, object detection and classification, shape representation techniques, and model-based object recognition methods. Attention is given to edge-enhancement preprocessing using liquid crystal TVs, massively-parallel optical data base management, three-dimensional sensing with polar exponential sensor arrays, the optical processing of imaging spectrometer data, hybrid associative memories and metric data models, the representation of shape primitives in neural networks, and the Monte Carlo estimation of moment invariants for pattern recognition.

  5. Optical computing using optical flip-flops in Fourier processors: use in matrix multiplication and discrete linear transforms.

    PubMed

    Ando, S; Sekine, S; Mita, M; Katsuo, S

    1989-12-15

    An architecture and algorithms for matrix multiplication using optical flip-flops (OFFs) in optical processors are proposed, based on residue arithmetic. The proposed system is capable of processing all elements of the matrices in parallel, utilizing the information retrieving ability of optical Fourier processors. The use of OFFs enables bidirectional data flow, leading to a simpler architecture, and the contribution of residue-to-decimal (or residue-to-binary) conversion to operation time can be largely reduced by processing all elements in parallel. The calculated operation-time characteristics suggest a promising use of the system in real-time 2-D linear transforms.

  6. A single VLSI chip for computing syndromes in the (255, 223) Reed-Solomon decoder

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.

    1986-01-01

    A description of a single VLSI chip for computing syndromes in the (255, 223) Reed-Solomon decoder is presented. The architecture that leads to this single VLSI chip design makes use of the dual basis multiplication algorithm. The same architecture can be applied to design VLSI chips to compute various kinds of number theoretic transforms.
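    In software, the quantity the chip computes can be sketched directly; here is a minimal Python reference for syndrome evaluation over GF(2^8) by Horner's rule (the dual-basis multiplier of the VLSI design is not modeled, and the root-offset convention is an assumption, since these vary between RS code definitions):

    def gf_mul(a, b, poly=0x11D):
        """Multiply in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x100:
                a ^= poly
            b >>= 1
        return r

    def syndromes(received, n_syn=32, alpha=2):
        """S_i = r(alpha^i) for i = 1..n_syn, by Horner's rule; a
        (255, 223) code has 32 syndromes. received holds the codeword
        coefficients, highest order first."""
        out = []
        x = alpha
        for _ in range(n_syn):
            s = 0
            for c in received:
                s = gf_mul(s, x) ^ c
            out.append(s)
            x = gf_mul(x, alpha)
        return out

    All 32 syndromes are zero exactly when the received word is a valid codeword; nonzero syndromes feed the later decoding stages.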

  7. Using Container Structures in Architecture and Urban Design

    NASA Astrophysics Data System (ADS)

    Grębowski, Karol; Kałdunek, Daniel

    2017-10-01

    The paper presents the use of shipping containers in architecture and urban design. Even today, houses and apartments are still too expensive. Since 1923, architects have been improving the living conditions of citizens by building very simple, repeatable forms. With prefabrication technology it became possible to build more quickly, causing house prices to decrease, and apartments in blocks of flats became affordable to more and more people. Modernism had a great impact on the quality of living spaces, despite the detrimental effect of large-panel technology on social life: it gave people their own bathrooms and gifted them with simple solutions we now consider indispensable. The ambition to build cheaply but effectively is still here, and the future of housing lies in prefabricated apartment modules. A well-optimized creation process is the key, but taking into consideration the mistakes made by past generations should be the second most important factor. Studies show that large-panel buildings were too monumental and solid for a housing structure, and offered no public spaces between them. The lack of urban design transformed a great idea into blocks that are considered ugly and unfriendly. Diversity is something that large-panel structures were missing: as most blocks of flats were constructed from the same module (Model 770), differentiated architecture was difficult to achieve. Nowadays, increasing numbers of shipping containers are being used for housing purposes. These constructions show that it is possible to create astonishing housing with modules. Shipping containers were not designed to be a building material, but in contrast to large-panel modules, there are many more possibilities for their transformation. In this paper the authors propose a set of rules that, if followed, would result in cheaper apartments while preserving both excellent architecture and friendly urban design. What is more, the proposed solution is designed to adapt to personalized requirements. The paper includes information about design guidelines for structures made from shipping containers.

  8. Universal Design for the Digital Environment: Transforming the Institution

    ERIC Educational Resources Information Center

    Rowland, Cyndi; Mariger, Heather; Siegel, Peter M.; Whiting, Jonathan

    2010-01-01

    A revolution is about to transform higher education. To participate in this revolution, those in higher education need to explore a critical concept: "universal design." Universal design was originally aimed at innovations in architecture, community spaces, and products, but today it is about creating services and products, from the beginning, in…

  9. Transformational Spaceport and Range Capabilities Roadmap Interim Review to National Research Council External Review Panel

    NASA Technical Reports Server (NTRS)

    Poniatowski, Karen

    2005-01-01

    Contents include the following: Overview/Introduction. Roadmap Approach/Considerations. Roadmap Timeline/Spirals. Requirements Development. Spaceport/Range Capabilities. Mixed Range Architecture. User Requirements/Customer Considerations. Manifest Considerations. Emerging Launch User Requirements. Capability Breakdown Structure/Assessment. Roadmap Team Observations. Transformational Range Test Concept. Roadmap Team Conclusions. Next Steps.

  10. Exploring quantum computing application to satellite data assimilation

    NASA Astrophysics Data System (ADS)

    Cheung, S.; Zhang, S. Q.

    2015-12-01

    This is an exploratory work on the potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves large numbers of variables and data. The new quantum computer opens a very different approach, both in conceptual programming and in hardware architecture, for solving optimization problems. In order to explore whether we can utilize the quantum computing machine architecture, we formulate an experimental satellite data assimilation case in the form of a quadratic programming optimization problem. We find a transformation of the problem to map it into the Quadratic Unconstrained Binary Optimization (QUBO) framework. A Binary Wavelet Transform (BWT) is applied to the data assimilation variables for its invertible decomposition, and all calculations in the BWT are performed by Boolean operations. The transformed problem will then be tested by solving QUBO instances defined on the Chimera graphs of the quantum computer.
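    As a sketch of the kind of mapping involved (the encoding, sizes and helper names are illustrative assumptions, not the authors' formulation), a quadratic program over binary-encoded variables can be folded into a single QUBO matrix:

    import numpy as np

    def qp_to_qubo(A, b, n_bits=4):
        """Encode each continuous variable x_j as a fixed-point sum of
        bits, x = E @ z, then expand x^T A x + b^T x into z^T Q z."""
        n = len(b)
        w = 2.0 ** -np.arange(1, n_bits + 1)   # fixed-point bit weights
        E = np.zeros((n, n * n_bits))          # encoding matrix: x = E @ z
        for j in range(n):
            E[j, j * n_bits:(j + 1) * n_bits] = w
        Q = E.T @ A @ E                        # quadratic part
        Q += np.diag(E.T @ b)                  # linear part on the diagonal,
        return Q                               # since z_k**2 == z_k for bits

    A QUBO solver, such as quantum annealing hardware, then minimizes z^T Q z over binary vectors z, and the decoded x = E @ z approximates the continuous minimizer.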

  11. Feedforward Inhibition Allows Input Summation to Vary in Recurrent Cortical Networks

    PubMed Central

    2018-01-01

    Abstract Brain computations depend on how neurons transform inputs to spike outputs. Here, to understand input-output transformations in cortical networks, we recorded spiking responses from visual cortex (V1) of awake mice of either sex while pairing sensory stimuli with optogenetic perturbation of excitatory and parvalbumin-positive inhibitory neurons. We found that V1 neurons’ average responses were primarily additive (linear). We used a recurrent cortical network model to determine whether these data, as well as past observations of nonlinearity, could be described by a common circuit architecture. Simulations showed that cortical input-output transformations can be changed from linear to sublinear with moderate (∼20%) strengthening of connections between inhibitory neurons, but this change away from linear scaling depends on the presence of feedforward inhibition. Simulating a variety of recurrent connection strengths showed that, compared with when input arrives only to excitatory neurons, networks produce a wider range of output spiking responses in the presence of feedforward inhibition. PMID:29682603

  12. Two-Stream Transformer Networks for Video-based Face Alignment.

    PubMed

    Liu, Hao; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie

    2017-08-01

    In this paper, we propose a two-stream transformer networks (TSTN) approach for video-based face alignment. Unlike conventional image-based face alignment approaches, which cannot explicitly model the temporal dependency in videos, and motivated by the fact that consistent movements of facial landmarks usually occur across consecutive frames, our TSTN aims to capture the complementary information of both the spatial appearance on still frames and the temporal consistency information across frames. To achieve this, we develop a two-stream architecture which decomposes video-based face alignment into spatial and temporal streams. Specifically, the spatial stream aims to transform the facial image to the landmark positions by preserving the holistic facial shape structure, while the temporal stream encodes the video input as active appearance codes, where the temporal consistency information across frames is captured to help shape refinement. Experimental results on benchmark video-based face alignment datasets show the very competitive performance of our method in comparison to the state of the art.

  13. Architectural design, interior decoration, and three-dimensional plumbing en route to multifunctional nanoarchitectures.

    PubMed

    Long, Jeffrey W

    2007-09-01

    Ultraporous aperiodic solids, such as aerogels and ambigels, are sol-gel-derived equivalents of architectures. The walls are defined by the nanoscopic, covalently bonded solid network of the gel. The vast open, interconnected space characteristic of a building is represented by the three-dimensionally continuous nanoscopic pore network. We discuss how an architectural construct serves as a powerful metaphor that guides the chemist in the design of aerogel-like nanoarchitectures and in their physical and chemical transformation into multifunctional objects that yield high performance for rate-critical applications.

  14. Overview of Socio-economic Transformations Based on Residential Architecture in a Suburban Area - Case Study of Villages in the Polish Region of Warmia

    NASA Astrophysics Data System (ADS)

    Źróbek-Różańska, Alina; Zysk, Elżbieta; Źróbek, Sabina

    2017-10-01

    Poland has a turbulent and rich history. Partitions, wars, a centrally planned economy of the socialist era and the rapid transition to a market economy left visible marks on the Polish landscape. The changes that took place in the 20th century and the early 21st century have vastly influenced the country’s architecture. Residential buildings in rural suburbs bear witness to turbulent historical events and change processes. This study analyzed residential buildings in two villages situated in the historical district of Warmia (north-eastern Poland) which is now a part of the Region of Warmia and Mazury. The results of the observations were used to review the social, economic, legal and planning factors that influenced residential architecture between 1900 and 2017. The traditional layout of Warmian villages is well preserved in the analyzed locations where pre-war architectural design mingles with buildings erected in the socialist era when construction materials were scarce. Many buildings in the surveyed villages are reminiscent of collective farms, the prescribed architectural style of the 1970s as well as the stylistic diversity of the early transformation period when customized building plans and construction materials became available. The local landscape also features buildings erected in successive decades which brought a significant increase in the price of land and maintenance costs.

  15. Macro-economic factors influencing the architectural business model shift in the pharmaceutical industry.

    PubMed

    Dierks, Raphaela Marie Louisa; Bruyère, Olivier; Reginster, Jean-Yves; Richy, Florent-Frederic

    2016-10-01

    Technological innovations, new regulations, the increasing costs of drug production and new demands are only a few key drivers of a projected alteration in the pharmaceutical industry. The purpose of this review is to understand the macro-economic factors responsible for the business model revolution, needed to possess a competitive advantage over market players. Areas covered: Existing literature on macro-economic factors changing the pharmaceutical landscape has been reviewed to present a clear image of the current market environment. Expert commentary: The literature shows that pharmaceutical companies are facing an architectural alteration; however, the evidence on the rationale driving the transformation is still outstanding. Mergers & Acquisitions (M&A) deals and collaborations are headlining the papers. Q1 2016 did show a major slowdown in M&A deals by volume since 2013 (with the deal cancellations of Pfizer and Allergan, or the downfall of Valeant), but pharmaceutical analysts remain confident that this shortfall was a consequence of equity market volatility. It seems likely that the shift to an M&A model will become apparent during the remainder of 2016, with deal announcements by Abbott Laboratories, AbbVie and Sanofi worth USD 45 billion showing the appetite of big pharma companies to shift from the fully vertically integrated business model to more horizontal business models.

  16. Low complexity 1D IDCT for 16-bit parallel architectures

    NASA Astrophysics Data System (ADS)

    Bivolarski, Lazar

    2007-09-01

    This paper shows that, using the Loeffler, Ligtenberg, and Moschytz factorization of the 8-point one-dimensional (1-D) IDCT [2] as a fast approximation of the Discrete Cosine Transform (DCT) and using only 16-bit numbers, it is possible to create an IEEE 1180-1990 compliant, multiplierless algorithm with low computational complexity. Owing to its structure, the algorithm can be implemented efficiently on parallel high-performance architectures, and its low complexity also makes it suitable for a wide range of other architectures. An additional constraint on this work was the requirement of compliance with the existing MPEG standards. Hardware implementation complexity and low resource usage were also part of the design criteria for this algorithm. The implementation is likewise compliant with the precision requirements described in the MPEG IDCT precision specification ISO/IEC 23002-1. Complexity analysis is performed as an extension of the simple measure of shifts and adds for multiplierless algorithms; additional operations are included in the complexity measure to better describe the actual implementation complexity of the transform.
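
    The core trick in such multiplierless designs is to replace each constant multiplication in the butterfly factorization with a short sequence of shifts and adds on 16-bit integers. The following sketch is illustrative only (the dyadic constant is a textbook example, not one taken from the paper):

        def mul_inv_sqrt2(x: int) -> int:
            """Multiply x by ~1/sqrt(2) using only shifts and adds.

            181/256 = 0.70703125 approximates 1/sqrt(2) = 0.70710678 to
            within about 0.01%, the kind of dyadic-rational constant used
            in 16-bit multiplierless IDCT butterflies.
            """
            # 181 = 128 + 32 + 16 + 4 + 1, so x*181 is five shifted copies of x
            acc = (x << 7) + (x << 5) + (x << 4) + (x << 2) + x
            return acc >> 8  # divide by 256 with an arithmetic shift

        assert mul_inv_sqrt2(1000) == 707  # 1000/sqrt(2) is ~707.1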

  17. Distorted Character Recognition Via An Associative Neural Network

    NASA Astrophysics Data System (ADS)

    Messner, Richard A.; Szu, Harold H.

    1987-03-01

    The purpose of this paper is two-fold. First, it is intended to provide some preliminary results of a character recognition scheme which has foundations in on-going neural network architecture modeling, and secondly, to apply some of the neural network results in a real application area where thirty years of effort has had little effect on providing machines the ability to recognize distorted objects within the same object class. It is the authors' belief that the time is ripe to start applying in earnest the results of over twenty years of effort in neural modeling to some of the more difficult problems which seem so hard to solve by conventional means. The character recognition scheme proposed utilizes a preprocessing stage which performs a 2-dimensional Walsh transform of an input Cartesian image field, then sequency-filters this spectrum into three feature bands. Various features are then extracted and organized into three sets of feature vectors. These vector patterns are then stored and recalled associatively. Two possible associative neural memory models are proposed for further investigation. The first is an outer-product linear matrix associative memory with a threshold function controlling the strength of the output pattern (similar to Kohonen's crosscorrelation approach [1]). The second approach is based upon a modified version of Grossberg's neural architecture [2], which provides better self-organizing properties due to its adaptive nature. Preliminary results of the sequency filtering and feature extraction preprocessing stage and a discussion of the use of the proposed neural architectures are included.
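
    The 2-D Walsh transform used in the preprocessing stage is separable, so it can be computed with a fast Walsh-Hadamard butterfly applied along rows and then columns. A minimal sketch, assuming power-of-two image dimensions (generic code, not the authors' implementation):

        import numpy as np

        def fwht(v):
            """Fast Walsh-Hadamard transform of a 1-D array (length a power of 2)."""
            v = v.astype(float).copy()
            h = 1
            while h < len(v):
                for i in range(0, len(v), 2 * h):
                    for j in range(i, i + h):
                        v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
                h *= 2
            return v

        def walsh_2d(image):
            """Separable 2-D Walsh transform: rows first, then columns."""
            rows = np.apply_along_axis(fwht, 1, image)
            return np.apply_along_axis(fwht, 0, rows)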

  18. a N-D Virtual Notebook about the Basilica of S. Ambrogio in Milan: Information Modeling for the Communication of Historical Phases Subtraction Process

    NASA Astrophysics Data System (ADS)

    Stanga, C.; Spinelli, C.; Brumana, R.; Oreni, D.; Valente, R.; Banfi, F.

    2017-08-01

    This essay describes the combination of 3D solutions and software techniques with traditional studies and research in order to achieve an integrated digital documentation of performed surveys, collected data, and historical research. The approach of this study is based on the comparison of survey data with historical research, and on interpretations deduced from a data cross-check between the two mentioned sources. The case study is the Basilica of S. Ambrogio in Milan, one of the greatest monuments in the city, a pillar of Christianity and of the history of architecture. It is characterized by a complex stratification of phases of restoration and transformation. Rediscovering the great richness of the traditional architectural notebook, which collected surveys and data, this research aims to realize a virtual notebook, based on a 3D model, that supports the dissemination of the collected information and can potentially be understood and accessed by anyone through the development of a mobile app. The 3D model was used to explore the different historical phases, starting from the recent layers to the oldest ones, through a virtual subtraction process, following the methods of the Archaeology of Architecture. Its components can be imported into parametric software and recognized both in their morphological and typological aspects. It is based on the concept of LoD and ReverseLoD in order to fit the accuracy required by each step of the research.

  19. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  20. H2, fixed architecture, control design for large scale systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1990-01-01

    The H2, fixed architecture, control problem is a classic linear quadratic Gaussian (LQG) problem whose solution is constrained to be a linear time invariant compensator with a decentralized processing structure. The compensator can be made of p independent subcontrollers, each of which has a fixed order and connects selected sensors to selected actuators. The H2, fixed architecture, control problem allows the design of simplified feedback systems needed to control large scale systems. Its solution becomes more complicated, however, as more constraints are introduced. This work derives the necessary conditions for optimality for the problem and studies their properties. It is found that the filter and control problems couple when the architecture constraints are introduced, and that the different subcontrollers must be coordinated in order to achieve global system performance. The problem requires the simultaneous solution of highly coupled matrix equations. The use of homotopy is investigated as a numerical tool, and its convergence properties studied. It is found that the general constrained problem may have multiple stabilizing solutions, and that these solutions may be local minima or saddle points for the quadratic cost. The nature of the solution is not invariant when the parameters of the system are changed. Bifurcations occur, and a solution may continuously transform into a nonstabilizing compensator. Using a modified homotopy procedure, fixed architecture compensators are derived for models of large flexible structures to help understand the properties of the constrained solutions and compare them to the corresponding unconstrained ones.

  1. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations

    PubMed Central

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts. PMID:25993414

  2. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations.

    PubMed

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts.

  3. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    PubMed

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.
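
    The deployment pattern described (a web endpoint that receives patient data and responds with a prediction score) can be illustrated with a small web-service sketch. The endpoint path, payload fields and scoring function below are hypothetical placeholders, not the paper's FHIR resource definitions:

        from flask import Flask, request, jsonify

        app = Flask(__name__)

        def score(features):
            # Placeholder model; a real deployment would load a classifier
            # trained on EHR data stored in an OMOP CDM database.
            return sum(features.values()) / (len(features) or 1)

        @app.route("/predict", methods=["POST"])  # hypothetical endpoint
        def predict():
            payload = request.get_json(force=True)
            # payload is assumed to carry numeric features extracted from
            # FHIR resources, e.g. {"age": 67, "heart_rate": 91}
            return jsonify({"risk_score": score(payload)})

        if __name__ == "__main__":
            app.run(port=8080)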

  4. Clinical Predictive Modeling Development and Deployment through FHIR Web Services

    PubMed Central

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction. PMID:26958207

  5. Microlens array processor with programmable weight mask and direct optical input

    NASA Astrophysics Data System (ADS)

    Schmid, Volker R.; Lueder, Ernst H.; Bader, Gerhard; Maier, Gert; Siegordner, Jochen

    1999-03-01

    We present an optical feature extraction system with a microlens array processor. The system is suitable for online implementation of a variety of transforms such as the Walsh transform and DCT. Operating with incoherent light, our processor accepts direct optical input. Employing a sandwich-like architecture, we obtain a very compact design of the optical system. The key elements of the microlens array processor are a square array of 15 × 15 spherical microlenses on an acrylic substrate and a spatial light modulator as transmissive mask. The light distribution behind the mask is imaged onto the pixels of a customized a-Si image sensor with adjustable gain. We obtain one output sample for each microlens image and its corresponding weight mask area as the summation of the transmitted intensity within one sensor pixel. The resulting architecture is very compact and robust, like a conventional camera lens, while incorporating a high degree of parallelism. We successfully demonstrate a Walsh transform into the spatial frequency domain as well as the implementation of a discrete cosine transform with digitized gray values. We provide results showing the transformation performance for both synthetic image patterns and images of natural texture samples. The extracted frequency features are suitable for neural classification of the input image. Other transforms and correlations can be implemented in real time, allowing adaptive optical signal processing.

  6. Service-oriented infrastructure for scientific data mashups

    NASA Astrophysics Data System (ADS)

    Baru, C.; Krishnan, S.; Lin, K.; Moreland, J. L.; Nadeau, D. R.

    2009-12-01

    An important challenge in informatics is the development of concepts and corresponding architecture and tools to assist scientists with their data integration tasks. A typical Earth Science data integration request may be expressed, for example, as “For a given region (i.e. lat/long extent, plus depth), return a 3D structural model with accompanying physical parameters of density, seismic velocities, geochemistry, and geologic ages, using a cell size of 10km.” Such requests create “mashups” of scientific data. Currently, such integration is hand-crafted and depends heavily upon a scientist’s intimate knowledge of how to process, interpret, and integrate data from individual sources. In most cases, the ultimate “integration” is performed by overlaying output images from individual processing steps using image manipulation software such as, say, Adobe Photoshop—leading to “Photoshop science”, where it is neither easy to repeat the integration steps nor to share the data mashup. As a result, scientists share only the final images and not the mashup itself. A more capable information infrastructure is needed to support the authoring and sharing of scientific data mashups. The infrastructure must include services for data discovery, access, and transformation and should be able to create mashups that are interactive, allowing users to probe and manipulate the data and follow its provenance. We present an architectural framework based on a service-oriented architecture for scientific data mashups in a distributed environment. The framework includes services for Data Access, Data Modeling, and Data Interaction. The Data Access services leverage capabilities for discovery and access to distributed data resources provided by efforts such as GEON and the EarthScope Data Portal, and services for federated metadata catalogs under development by projects like the Geosciences Information Network (GIN). The Data Modeling services provide 2D, 3D, and 4D modeling services based on standards such as WFS, WMS, WCS, and GeoSciML that allow integration of disparate data in a distributed, Web-based environment. Along these lines, we introduce the notion of a Web Volume Service (WVS) for modeling and manipulating 3D data. The Data Interaction Services provide services for rich interactions with the integrated 3D data. To provide efficient interactions with large-scale data in a distributed environment the architecture must include capabilities for caching and reuse of data, use of multi-level indexing, and the ability to orchestrate and coordinate execution of data processing and transformation routines as part of the data access and integration steps. The data mashup infrastructure is based on a service-oriented architecture. A range of alternatives are available for implementing these mashup services in a scalable fashion, using the cloud computing paradigm. We will describe the tradeoffs of each approach and provide an evaluation of which options are best suited to which types of services. We will describe security, privacy, performance, and price/performance issues and considerations in implementing services on dedicated servers versus private as well as public clouds, including systems such as Amazon Web Services.

  7. An architecture for consolidating multidimensional time-series data onto a common coordinate grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shippert, Tim; Gaustad, Krista

    Consolidating measurement data for use by data models or in inter-comparison studies frequently requires transforming the data onto a common grid. Standard methods for interpolating multidimensional data are often not appropriate for data with non-homogenous dimensionality, and are hard to implement in a consistent manner for different datastreams. In addition, these challenges are increased when dealing with the automated procedures necessary for use with continuous, operational datastreams. In this paper we introduce a method of applying a series of one-dimensional transformations to merge data onto a common grid, examine the challenges of ensuring consistent application of data consolidation methods, present a framework for addressing those challenges, and describe the implementation of such a framework for the Atmospheric Radiation Measurement (ARM) program.

  8. An architecture for consolidating multidimensional time-series data onto a common coordinate grid

    DOE PAGES

    Shippert, Tim; Gaustad, Krista

    2016-12-16

    Consolidating measurement data for use by data models or in inter-comparison studies frequently requires transforming the data onto a common grid. Standard methods for interpolating multidimensional data are often not appropriate for data with non-homogenous dimensionality, and are hard to implement in a consistent manner for different datastreams. In addition, these challenges are increased when dealing with the automated procedures necessary for use with continuous, operational datastreams. In this paper we introduce a method of applying a series of one-dimensional transformations to merge data onto a common grid, examine the challenges of ensuring consistent application of data consolidation methods, present a framework for addressing those challenges, and describe the implementation of such a framework for the Atmospheric Radiation Measurement (ARM) program.
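
    The central idea, applying one-dimensional transformations per dimension instead of a single multidimensional interpolation, can be sketched as follows. This is a simplified illustration with linear interpolation only; the ARM framework supports other per-dimension transforms:

        import numpy as np

        def regrid_axis(values, src, dst, axis):
            """Interpolate `values` from coordinates `src` to `dst` along one axis."""
            moved = np.moveaxis(values, axis, -1)
            out = np.apply_along_axis(lambda v: np.interp(dst, src, v), -1, moved)
            return np.moveaxis(out, -1, axis)

        # Consolidate a (time, height) field onto a common grid one axis at a time
        field = np.random.rand(24, 10)  # 24 time steps x 10 height levels
        field = regrid_axis(field, np.arange(24), np.linspace(0, 23, 48), axis=0)
        field = regrid_axis(field, np.arange(10), np.linspace(0, 9, 30), axis=1)
        print(field.shape)  # (48, 30)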

  9. Parallel processing for digital picture comparison

    NASA Technical Reports Server (NTRS)

    Cheng, H. D.; Kou, L. T.

    1987-01-01

    In picture processing, an important problem is to identify two digital pictures of the same scene taken under different lighting conditions. This kind of problem can be found in remote sensing, satellite signal processing and related areas. The identification can be done by transforming the gray levels so that the gray level histograms of the two pictures are closely matched. The transformation problem can be solved by using the packing method. Researchers propose a VLSI architecture consisting of m x n processing elements with extensive parallel and pipelining computation capabilities to speed up the transformation, with time complexity O(max(m,n)), where m and n are the numbers of gray levels of the input picture and the reference picture respectively. If a uniprocessor and a dynamic programming algorithm are used, the time complexity will be O(m³ × n). The algorithm partition problem, an important issue in VLSI design, is discussed. Verification of the proposed architecture is also given.
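
    The gray-level transformation being accelerated is essentially histogram matching; a compact serial reference implementation is sketched below (the standard CDF-matching formulation, not the packing method or the VLSI algorithm of the paper):

        import numpy as np

        def match_histograms(src, ref):
            """Remap gray levels of `src` so its histogram approximates `ref`'s."""
            s_vals, s_counts = np.unique(src.ravel(), return_counts=True)
            r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
            s_cdf = np.cumsum(s_counts) / src.size  # source cumulative histogram
            r_cdf = np.cumsum(r_counts) / ref.size  # reference cumulative histogram
            # For each source gray level, pick the reference level at the same CDF
            mapped = np.interp(s_cdf, r_cdf, r_vals)
            idx = np.searchsorted(s_vals, src.ravel())
            return mapped[idx].reshape(src.shape)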

  10. A reconfigurable multicarrier demodulator architecture

    NASA Technical Reports Server (NTRS)

    Kwatra, S. C.; Jamali, M. M.

    1991-01-01

    An architecture based on parallel and pipeline design approaches has been developed for the Frequency Division Multiple Access/Time Domain Multiplexed (FDMA/TDM) conversion system. The architecture has two main modules, namely the transmultiplexer and the demodulator. The transmultiplexer has two pipelined modules: the shared multiplexed polyphase filter and the Fast Fourier Transform (FFT). The demodulator consists of carrier, clock, and data recovery modules which are interactive. Progress on the design of the MultiCarrier Demodulator (MCD) using commercially available chips and Application Specific Integrated Circuits (ASIC), together with simulation studies using Viewlogic software, will be presented at the conference.
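
    The transmultiplexer stage, a shared polyphase filter feeding an FFT, follows the classic channelizer signal flow sketched below. Commutator ordering and transform direction are convention-dependent details that differ between implementations, so this is illustrative rather than the system's actual design:

        import numpy as np

        def polyphase_channelizer(x, h, n_ch):
            """Split a wideband FDMA signal into n_ch channels.

            x: complex baseband samples; h: prototype lowpass filter whose
            length is a multiple of n_ch.
            """
            branches = h.reshape(-1, n_ch).T              # polyphase parts h[k::n_ch]
            n_blk = len(x) // n_ch
            xb = x[:n_blk * n_ch].reshape(n_blk, n_ch).T  # one stream per branch
            filt = np.array([np.convolve(xb[k], branches[k])[:n_blk]
                             for k in range(n_ch)])
            return np.fft.fft(filt, axis=0)               # channels x time

        # 4-channel toy example: a tone at 0.25 cycles/sample lands in channel 1
        x = np.exp(2j * np.pi * 0.25 * np.arange(64))
        out = polyphase_channelizer(x, np.ones(16) / 16, 4)
        print(np.abs(out).mean(axis=1).round(2))  # energy concentrates in channel 1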

  11. Comments on `Area and power efficient DCT architecture for image compression' by Dhandapani and Ramachandran

    NASA Astrophysics Data System (ADS)

    Cintra, Renato J.; Bayer, Fábio M.

    2017-12-01

    In [Dhandapani and Ramachandran, "Area and power efficient DCT architecture for image compression", EURASIP Journal on Advances in Signal Processing 2014, 2014:180] the authors claim to have introduced an approximation for the discrete cosine transform capable of outperforming several well-known approximations in literature in terms of additive complexity. We could not verify the above results and we offer corrections for their work.

  12. Transforming the Gray Factory: The Presidential Leadership of Charles M. Vest and the Architecture of Change at Massachusetts Institute of Technology

    ERIC Educational Resources Information Center

    Daas, Mahesh

    2013-01-01

    The single-site exemplar study presents an in-depth account of the presidential leadership of Charles M. Vest of MIT--the second longest presidency in the Institute's history--and his leadership team's journey between 1990 and 2004 into campus architectural changes that involved over a billion dollars, added a quarter of floor space to MIT's…

  13. A Learning Framework for Winner-Take-All Networks with Stochastic Synapses.

    PubMed

    Mostafa, Hesham; Cauwenberghs, Gert

    2018-06-01

    Many recent generative models make use of neural networks to transform the probability distribution of a simple low-dimensional noise process into the complex distribution of the data. This raises the question of whether biological networks operate along similar principles to implement a probabilistic model of the environment through transformations of intrinsic noise processes. The intrinsic neural and synaptic noise processes in biological networks, however, are quite different from the noise processes used in current abstract generative networks. This, together with the discrete nature of spikes and local circuit interactions among the neurons, raises several difficulties when using recent generative modeling frameworks to train biologically motivated models. In this letter, we show that a biologically motivated model based on multilayer winner-take-all circuits and stochastic synapses admits an approximate analytical description. This allows us to use the proposed networks in a variational learning setting where stochastic backpropagation is used to optimize a lower bound on the data log likelihood, thereby learning a generative model of the data. We illustrate the generality of the proposed networks and learning technique by using them in a structured output prediction task and a semisupervised learning task. Our results extend the domain of application of modern stochastic network architectures to networks where synaptic transmission failure is the principal noise mechanism.

  14. Chosen-plaintext attack on a joint transform correlator encrypting system

    NASA Astrophysics Data System (ADS)

    Barrera, John Fredy; Vargas, Carlos; Tebaldi, Myrian; Torroba, Roberto

    2010-10-01

    We demonstrate that optical encryption methods based on the joint transform correlator architecture are vulnerable to chosen-plaintext attack. An unauthorized user, who introduces three chosen plaintexts in the accessible encryption machine, can obtain the security key code mask. In this contribution, we also propose an alternative method to eliminate ambiguities that allows obtaining the right decrypting key.

  15. [R]MIT Research Centre at Delft University of Technology: A Bridge between Research, Education, Society and Profession

    ERIC Educational Resources Information Center

    Zijlstra, Hielkje

    2009-01-01

    In 2006, we launched the [R]MIT Research Centre (Modification, Intervention Transformation) at the Faculty of Architecture at Delft University of Technology. [R]MIT was founded to respond to the need for an integrated, multi-disciplinary approach to the transformation of the built environment. [R]MIT aims to bring momentum to the renewal of…

  16. Transformation of dwelling culture based on riverine community in Musi River Palembang

    NASA Astrophysics Data System (ADS)

    Wicaksono, Bambang; Siswanto, Ari; Kusdiwanggo, Susilo; Anwar, Widya Fransiska Febriati

    2017-11-01

    Palembang's development from the era of the Palembang Darussalam Sultanate to the reformation era has affected the community's dwelling culture: raft houses have become fewer, houses on stilts have been transformed into terraced houses, and the land-based house has become dominant. A dwelling culture oriented toward the river has become land-oriented. This development has left behind the identity, character, and potential of riverine architecture and of dwelling life on the river. The goals of this study are to describe a case and reveal the meaning of the transformation of dwelling culture in Musi River society arising from the process of cultural acculturation, and to investigate its architectural aspects, from the form of the houses to the modes of dwelling, through a structuralist approach. Data were collected qualitatively using techniques such as observation, interviews, and literature study, while the analysis followed Levi-Strauss's structuralism, which identifies the elements of community thought in a systematic procedure. The results reveal the structure behind the orientation, position, shape, and layout of dwellings through the meanings embedded in them: the change and development arising from the process of cultural acculturation, now oriented toward land-based dwelling, are grounded in the structure of thinking of Palembang society.

  17. Cellular automata simulation of topological effects on the dynamics of feed-forward motifs

    PubMed Central

    Apte, Advait A; Cain, John W; Bonchev, Danail G; Fong, Stephen S

    2008-01-01

    Background: Feed-forward motifs are important functional modules in biological and other complex networks. The functionality of feed-forward motifs and other network motifs is largely dictated by the connectivity of the individual network components. While studies on the dynamics of motifs and networks are usually devoted to the temporal or spatial description of processes, this study focuses on the relationship between the specific architecture and the overall rate of the processes of the feed-forward family of motifs, including double and triple feed-forward loops. The search for the most efficient network architecture could be of particular interest for regulatory or signaling pathways in biology, as well as in computational and communication systems. Results: Feed-forward motif dynamics were studied using cellular automata and compared with differential equation modeling. The number of cellular automata iterations needed for a 100% conversion of a substrate into a target product was used as an inverse measure of the transformation rate. Several basic topological patterns were identified that order the specific feed-forward constructions according to the rate of dynamics they enable. At the same number of network nodes and constant other parameters, the bi-parallel and tri-parallel motifs provide higher network efficacy than single feed-forward motifs. Additionally, a topological property of isodynamicity was identified for feed-forward motifs where different network architectures resulted in the same overall rate of the target production. Conclusion: It was shown for classes of structural motifs with feed-forward architecture that network topology affects the overall rate of a process in a quantitatively predictable manner. These fundamental results can be used as a basis for simulating larger networks as combinations of smaller network modules with implications on studying synthetic gene circuits, small regulatory systems, and eventually dynamic whole-cell models. PMID:18304325
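
    The inverse rate measure used in the study (iterations until full conversion of substrate into product) can be mimicked with a toy discrete-time model of a single feed-forward loop; the rate constants here are arbitrary and the model is a deliberate simplification of the paper's cellular automata:

        def ffl_steps_to_conversion(k_ab=0.2, k_ac=0.2, k_bc=0.2, thresh=0.99):
            """Iterations until 99% of substrate A reaches target C through a
            feed-forward loop: A -> B -> C plus the direct edge A -> C."""
            a, b, c = 1.0, 0.0, 0.0
            steps = 0
            while c < thresh:
                da_b = k_ab * a  # A converted via the intermediate B
                da_c = k_ac * a  # A converted directly to C
                db_c = k_bc * b  # B converted to C
                a -= da_b + da_c
                b += da_b - db_c
                c += da_c + db_c
                steps += 1
            return steps  # fewer steps = faster motif

        print(ffl_steps_to_conversion())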

  18. Parallel processing approach to transform-based image coding

    NASA Astrophysics Data System (ADS)

    Normile, James O.; Wright, Dan; Chu, Ken; Yeh, Chia L.

    1991-06-01

    This paper describes a flexible parallel processing architecture designed for use in real-time video processing. The system consists of floating point DSP processors connected to each other via fast serial links; each processor has access to a globally shared memory. A multiple bus architecture in combination with a dual ported memory allows communication with a host control processor. The system has been applied to prototyping of video compression and decompression algorithms. The decomposition of transform-based algorithms for decompression into a form suitable for parallel processing is described. A technique for automatic load balancing among the processors is developed and discussed, and results are presented with image statistics and data rates. Finally, techniques for accelerating the system throughput are analyzed and results from the application of one such modification are described.

  19. Design and Analysis of Compact DNA Strand Displacement Circuits for Analog Computation Using Autocatalytic Amplifiers.

    PubMed

    Song, Tianqi; Garg, Sudhanshu; Mokhtar, Reem; Bui, Hieu; Reif, John

    2018-01-19

    A main goal in DNA computing is to build DNA circuits to compute designated functions using a minimal number of DNA strands. Here, we propose a novel architecture to build compact DNA strand displacement circuits to compute a broad scope of functions in an analog fashion. A circuit by this architecture is composed of three autocatalytic amplifiers, and the amplifiers interact to perform computation. We show DNA circuits to compute functions sqrt(x), ln(x) and exp(x) for x in tunable ranges with simulation results. A key innovation in our architecture, inspired by Napier's use of logarithm transforms to compute square roots on a slide rule, is to make use of autocatalytic amplifiers to do logarithmic and exponential transforms in concentration and time. In particular, we convert from the input that is encoded by the initial concentration of the input DNA strand, to time, and then back again to the output encoded by the concentration of the output DNA strand at equilibrium. This combined use of strand-concentration and time encoding of computational values may have impact on other forms of molecular computation.
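
    The slide-rule analogy can be made concrete: a square root is a logarithmic transform, a halving, and an exponential transform. The toy computation below shows the mathematical identity being exploited; in the actual circuits the log and exp steps are realized chemically by autocatalytic amplifiers converting between strand concentration and time:

        import math

        def sqrt_via_log_exp(x: float) -> float:
            """Slide-rule style square root: exp(0.5 * ln(x))."""
            t = math.log(x)     # concentration -> "time" (log transform)
            t *= 0.5            # scaling performed in the time domain
            return math.exp(t)  # "time" -> concentration (exp transform)

        assert abs(sqrt_via_log_exp(9.0) - 3.0) < 1e-12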

  20. Wishart Deep Stacking Network for Fast POLSAR Image Classification.

    PubMed

    Jiao, Licheng; Liu, Fang

    2016-05-11

    Inspired by the popular deep learning architecture, the Deep Stacking Network (DSN), a specific deep model for polarimetric synthetic aperture radar (POLSAR) image classification is proposed in this paper, named the Wishart Deep Stacking Network (W-DSN). First, a fast implementation of the Wishart distance is achieved by a special linear transformation, which speeds up the classification of POLSAR images and makes it possible to use this polarimetric information in the following neural network (NN). A single-hidden-layer neural network based on the fast Wishart distance, named the Wishart Network (WN), is then defined for POLSAR image classification and improves the classification accuracy. Finally, a multi-layer neural network is formed by stacking WNs; this is the proposed deep learning architecture W-DSN for POLSAR image classification, and it improves the classification accuracy further. In addition, the structure of the WN can be expanded in a straightforward way by adding hidden units if necessary, as can the structure of the W-DSN. As a preliminary exploration of formulating a specific deep learning architecture for POLSAR image classification, the proposed methods may establish a simple but clever connection between POLSAR image interpretation and deep learning. Experimental results on real POLSAR images show that the fast implementation of the Wishart distance is very efficient (a POLSAR image with 768,000 pixels can be classified in 0.53 s), and that both the single-hidden-layer architecture WN and the deep learning architecture W-DSN perform well and work efficiently.
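
    The speed-up rests on the fact that the Wishart distance d(T, Σ) = ln|Σ| + Tr(Σ⁻¹T) is linear in the entries of the pixel coherency matrix T, so classifying every pixel against all classes reduces to one matrix product plus a per-class constant. A sketch under that reading of the abstract (the authors' exact normalization may differ):

        import numpy as np

        def wishart_classify(coherency, class_covs):
            """coherency: (npix, 3, 3) complex; class_covs: (K, 3, 3) complex."""
            K = len(class_covs)
            inv = np.linalg.inv(class_covs)                  # (K, 3, 3)
            logdet = np.log(np.linalg.det(class_covs).real)  # (K,)
            # Tr(S^-1 T) = sum_ij (S^-1)_ji T_ij: a dot product of flattened arrays
            W = inv.transpose(0, 2, 1).reshape(K, -1)        # (K, 9)
            X = coherency.reshape(len(coherency), -1)        # (npix, 9)
            dist = (X @ W.T).real + logdet                   # (npix, K)
            return dist.argmin(axis=1)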

  1. Understanding genetic variation - the value of systems biology.

    PubMed

    Hütt, Marc-Thorsten

    2014-04-01

    Pharmacology is currently being transformed by the vast amounts of genome-associated information available for system-level interpretation. Here I review the potential of systems biology to facilitate this interpretation, thus paving the way for the emerging field of systems pharmacology. In particular, I will show how gene regulatory and metabolic networks can serve as a framework for interpreting high-throughput data and as an interface to detailed dynamical models. In addition to the established connectivity analyses of effective networks, I suggest also analyzing higher-order architectural properties of effective networks. © 2013 The British Pharmacological Society.

  2. Feed Forward Neural Network and Optimal Control Problem with Control and State Constraints

    NASA Astrophysics Data System (ADS)

    Kmet', Tibor; Kmet'ová, Mária

    2009-09-01

    A feed forward neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The paper extends the adaptive critic neural network architecture proposed in [5] to optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem which is implemented with an adaptive critic neural network. The proposed simulation method is illustrated by the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive critic based systematic approach holds promise for obtaining optimal control with control and state constraints.

  3. Efficient transformation and artificial miRNA gene silencing in Lemna minor

    PubMed Central

    Cantó-Pastor, Alex; Mollá-Morales, Almudena; Ernst, Evan; Dahl, William; Zhai, Jixian; Yan, Yiheng; Meyers, Blake; Shanklin, John; Martienssen, Robert

    2015-01-01

    Lack of genetic tools in the Lemnaceae (duckweed) has impeded full implementation of this organism as model for biological research, despite its rapid doubling time, simple architecture and unusual metabolic characteristics. Here we present technologies to facilitate high-throughput genetic studies in duckweed. We developed a fast and efficient method for producing Lemna minor stable transgenic fronds via agrobacterium-mediated transformation and regeneration from tissue culture. Additionally, we engineered an artificial microRNA (amiRNA) gene silencing system. We identified a Lemna gibba endogenous miR166 precursor and used it as a backbone to produce amiRNAs. As a proof of concept we induced the silencing of CH42, a Magnesium Chelatase subunit, using our amiRNA platform. Expression of CH42 in transgenic Lemna minor fronds was significantly reduced, which resulted in reduction of chlorophyll pigmentation. The techniques presented here will enable tackling future challenges in the biology and biotechnology of Lemnaceae. PMID:24989135

  4. DRS: Derivational Reasoning System

    NASA Technical Reports Server (NTRS)

    Bose, Bhaskar

    1995-01-01

    The high reliability requirements for airborne systems require fault-tolerant architectures to address failures in the presence of physical faults, and the elimination of design flaws during the specification and validation phase of the design cycle. Although much progress has been made in developing methods to address physical faults, design flaws remain a serious problem. Formal methods provide a mathematical basis for removing design flaws from digital systems. DRS (Derivational Reasoning System) is a formal design tool based on advanced research in mathematical modeling and formal synthesis. The system implements a basic design algebra for synthesizing digital circuit descriptions from high-level functional specifications. DRS incorporates an executable specification language, a set of correctness-preserving transformations, a verification interface, and a logic synthesis interface, making it a powerful tool for realizing hardware from abstract specifications. DRS integrates recent advances in transformational reasoning, automated theorem proving and high-level CAD synthesis systems in order to provide enhanced reliability in designs with reduced time and cost.

  5. Multi-Beam Radio Frequency (RF) Aperture Arrays Using Multiplierless Approximate Fast Fourier Transform (FFT)

    DTIC Science & Technology

    2017-08-01

    In this report, approximate transforms that closely follow the DFT have been studied and found. FFTs are used in applications including communications, data networks, sensor networks, cognitive radio, radar and beamforming, imaging, filtering, correlation and radio-astronomy. Approved for public release; distribution is unlimited.

  6. A New Space for a New Science: The Transformation of the JAE Campus after the Spanish Civil War

    ERIC Educational Resources Information Center

    Canales, Antonio Fco.

    2012-01-01

    This article explores the way in which different concepts of education are expressed through architecture. It analyses the case of the transformation of the campus of the "Colina de los Chopos" in Madrid after the Spanish Civil War. The victor's vision for education was captured physically in a new space that expressed the opposite…

  7. High-rise architecture in Ufa, Russia, based on crystallography canons

    NASA Astrophysics Data System (ADS)

    Narimanovich Sabitov, Ildar; Radikovna Kudasheva, Dilara; Yaroslavovich Vdovin, Denis

    2018-03-01

    The article considers the fundamental steps in the formation of stylistic tendencies in high-rise architecture, based on the studies of C. Willis and M. A. Korotich. Crystallographic shaping is identified as a distinct direction on the basis of Korotich's classification. This direction is examined in detail, and the main aspects of high-rise architectural form-making based on the forming principles of natural polycrystals are set out. The article describes the transformation of crystal forms into an architectural composition, analyzes constructive systems within the framework of the CTBUH (Council on Tall Buildings and Urban Habitat) classification, and singles out one of its types as the most suitable for use in crystal-shaped buildings. The final stage of the research is the approbation of the theoretical principles in an experimental project for a high-rise building in Ufa, with a description of the aspects of its contextual placement.

  8. Standard representation and unified stability analysis for dynamic artificial neural network models.

    PubMed

    Kim, Kwang-Ki K; Patrón, Ernesto Ríos; Braatz, Richard D

    2018-02-01

    An overview is provided of dynamic artificial neural network models (DANNs) for nonlinear dynamical system identification and control problems, and convex stability conditions are proposed that are less conservative than past results. The three most popular classes of dynamic artificial neural network models are described, with their mathematical representations and architectures followed by transformations based on their block diagrams that are convenient for stability and performance analyses. Classes of nonlinear dynamical systems that are universally approximated by such models are characterized, which include rigorous upper bounds on the approximation errors. A unified framework and linear matrix inequality-based stability conditions are described for different classes of dynamic artificial neural network models that take additional information into account such as local slope restrictions and whether the nonlinearities within the DANNs are odd. A theoretical example shows reduced conservatism obtained by the conditions. Copyright © 2017. Published by Elsevier Ltd.

  9. A Hybrid of Deep Network and Hidden Markov Model for MCI Identification with Resting-State fMRI.

    PubMed

    Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang

    2015-10-01

    In this paper, we propose a novel method for modelling functional dynamics in resting-state fMRI (rs-fMRI) for Mild Cognitive Impairment (MCI) identification. Specifically, we devise a hybrid architecture by combining Deep Auto-Encoder (DAE) and Hidden Markov Model (HMM). The roles of DAE and HMM are, respectively, to discover hierarchical non-linear relations among features, by which we transform the original features into a lower dimension space, and to model dynamic characteristics inherent in rs-fMRI, i.e., internal state changes. By building a generative model with HMMs for each class individually, we estimate the data likelihood of a test subject as MCI or normal healthy control, based on which we identify the clinical label. In our experiments, we achieved the maximal accuracy of 81.08% with the proposed method, outperforming state-of-the-art methods in the literature.
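
    The per-class generative modelling step can be sketched with off-the-shelf HMMs. The snippet below uses the hmmlearn package as a stand-in for the paper's DAE+HMM pipeline and assumes the features have already been reduced by an auto-encoder:

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        def fit_class_hmms(sequences_by_class, n_states=3):
            """Fit one HMM per clinical class on lists of (time, features) arrays."""
            models = {}
            for label, seqs in sequences_by_class.items():
                X = np.vstack(seqs)
                lengths = [len(s) for s in seqs]
                models[label] = GaussianHMM(n_components=n_states).fit(X, lengths)
            return models

        def classify(models, seq):
            """Pick the label whose generative model gives the highest likelihood."""
            return max(models, key=lambda label: models[label].score(seq))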

  10. A Hybrid of Deep Network and Hidden Markov Model for MCI Identification with Resting-State fMRI

    PubMed Central

    Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang

    2015-01-01

    In this paper, we propose a novel method for modelling functional dynamics in resting-state fMRI (rs-fMRI) for Mild Cognitive Impairment (MCI) identification. Specifically, we devise a hybrid architecture by combining Deep Auto-Encoder (DAE) and Hidden Markov Model (HMM). The roles of DAE and HMM are, respectively, to discover hierarchical non-linear relations among features, by which we transform the original features into a lower dimension space, and to model dynamic characteristics inherent in rs-fMRI, i.e., internal state changes. By building a generative model with HMMs for each class individually, we estimate the data likelihood of a test subject as MCI or normal healthy control, based on which we identify the clinical label. In our experiments, we achieved the maximal accuracy of 81.08% with the proposed method, outperforming state-of-the-art methods in the literature. PMID:27054199

  11. [Utilitarian goals and artistic autonomy architectural forms and their functions].

    PubMed

    Thibault, Estelle

    2012-01-01

    In the late 19th century, authors writing on aesthetics often referred to architecture to justify establishing a new hierarchy between things beautiful and things useful, a change underwritten by the rising sociological and anthropological perspectives on art. Meanwhile, architects debated the origins and evolution of artistic styles from the earliest forms of art to the most advanced monumental art works, a debate that fundamentally transformed the relationship between artistic expression and material determinism.

  12. Add Control: plant virtualization for control solutions in WWTP.

    PubMed

    Maiza, M; Bengoechea, A; Grau, P; De Keyser, W; Nopens, I; Brockmann, D; Steyer, J P; Claeys, F; Urchegui, G; Fernández, O; Ayesa, E

    2013-01-01

    This paper summarizes part of the research work carried out in the Add Control project, which proposes an extension of the wastewater treatment plant (WWTP) models and modelling architectures used in traditional WWTP simulation tools, addressing, in addition to the classical mass transformations (transport, physico-chemical phenomena, biological reactions), all the instrumentation, actuation and automation & control components (sensors, actuators, controllers), considering their real behaviour (signal delays, noise, failures and power consumption of actuators). Its ultimate objective is to allow a rapid transition from the simulation of the control strategy to its implementation at full-scale plants. Thus, this paper presents the application of the Add Control simulation platform for the design and implementation of new control strategies at the WWTP of Mekolalde.
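
    The kind of instrumentation behaviour the project layers on top of classical mass-transformation models can be illustrated with a minimal sensor wrapper; the parameters (delay, noise level, failure probability) are invented for illustration and are not Add Control's actual component interface:

        from collections import deque
        import random

        class NoisySensor:
            """Toy sensor model: signal delay, Gaussian noise and random freezes."""

            def __init__(self, delay_steps=5, noise_sd=0.02, fail_prob=0.001):
                self.buffer = deque([0.0] * delay_steps, maxlen=delay_steps)
                self.noise_sd = noise_sd
                self.fail_prob = fail_prob
                self.last_good = 0.0

            def measure(self, true_value: float) -> float:
                self.buffer.append(true_value)  # oldest entry drops out
                delayed = self.buffer[0]        # reading lags by delay_steps
                if random.random() < self.fail_prob:
                    return self.last_good       # sensor freezes on failure
                self.last_good = delayed + random.gauss(0.0, self.noise_sd)
                return self.last_good

    A control strategy simulated against such wrapped signals, rather than the raw model state, faces the same delays, noise and dropouts it will meet at the full-scale plant.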

  13. Three-dimensional integration of nanotechnologies for computing and data storage on a single chip

    NASA Astrophysics Data System (ADS)

    Shulaker, Max M.; Hills, Gage; Park, Rebecca S.; Howe, Roger T.; Saraswat, Krishna; Wong, H.-S. Philip; Mitra, Subhasish

    2017-07-01

    The computing demands of future data-intensive applications will greatly exceed the capabilities of current electronics, and are unlikely to be met by isolated improvements in transistors, data storage technologies or integrated circuit architectures alone. Instead, transformative nanosystems, which use new nanotechnologies to simultaneously realize improved devices and new integrated circuit architectures, are required. Here we present a prototype of such a transformative nanosystem. It consists of more than one million resistive random-access memory cells and more than two million carbon-nanotube field-effect transistors—promising new nanotechnologies for use in energy-efficient digital logic circuits and for dense data storage—fabricated on vertically stacked layers in a single chip. Unlike conventional integrated circuit architectures, the layered fabrication realizes a three-dimensional integrated circuit architecture with fine-grained and dense vertical connectivity between layers of computing, data storage, and input and output (in this instance, sensing). As a result, our nanosystem can capture massive amounts of data every second, store it directly on-chip, perform in situ processing of the captured data, and produce ‘highly processed’ information. As a working prototype, our nanosystem senses and classifies ambient gases. Furthermore, because the layers are fabricated on top of silicon logic circuitry, our nanosystem is compatible with existing infrastructure for silicon-based technologies. Such complex nano-electronic systems will be essential for future high-performance and highly energy-efficient electronic systems.

  14. Three-dimensional integration of nanotechnologies for computing and data storage on a single chip.

    PubMed

    Shulaker, Max M; Hills, Gage; Park, Rebecca S; Howe, Roger T; Saraswat, Krishna; Wong, H-S Philip; Mitra, Subhasish

    2017-07-05

    The computing demands of future data-intensive applications will greatly exceed the capabilities of current electronics, and are unlikely to be met by isolated improvements in transistors, data storage technologies or integrated circuit architectures alone. Instead, transformative nanosystems, which use new nanotechnologies to simultaneously realize improved devices and new integrated circuit architectures, are required. Here we present a prototype of such a transformative nanosystem. It consists of more than one million resistive random-access memory cells and more than two million carbon-nanotube field-effect transistors-promising new nanotechnologies for use in energy-efficient digital logic circuits and for dense data storage-fabricated on vertically stacked layers in a single chip. Unlike conventional integrated circuit architectures, the layered fabrication realizes a three-dimensional integrated circuit architecture with fine-grained and dense vertical connectivity between layers of computing, data storage, and input and output (in this instance, sensing). As a result, our nanosystem can capture massive amounts of data every second, store it directly on-chip, perform in situ processing of the captured data, and produce 'highly processed' information. As a working prototype, our nanosystem senses and classifies ambient gases. Furthermore, because the layers are fabricated on top of silicon logic circuitry, our nanosystem is compatible with existing infrastructure for silicon-based technologies. Such complex nano-electronic systems will be essential for future high-performance and highly energy-efficient electronic systems.

  15. Area and power efficient DCT architecture for image compression

    NASA Astrophysics Data System (ADS)

    Dhandapani, Vaithiyanathan; Ramachandran, Seshasayanan

    2014-12-01

    The discrete cosine transform (DCT) is one of the major components in image and video compression systems. The final output of these systems is interpreted by the human visual system (HVS), which is not perfect. The limited perception of human visualization allows the algorithm to be numerically approximate rather than exact. In this paper, we propose a new matrix for the discrete cosine transform. The proposed 8 × 8 transformation matrix contains only zeros and ones, which requires only adders, thus avoiding the need for multiplication and shift operations. The new class of transform requires only 12 additions, which greatly reduces the computational complexity and achieves a performance in image compression that is comparable to that of the existing approximated DCT. Another important aspect of the proposed transform is that it provides efficient area and power optimization when implemented in hardware. To ensure the versatility of the proposal and to further evaluate the performance and correctness of the structure in terms of speed, area, and power consumption, the model is implemented on a Xilinx Virtex 7 field programmable gate array (FPGA) device and synthesized with Cadence® RTL Compiler® using a UMC 90 nm standard cell library. The analysis obtained from the implementation indicates that the proposed structure is superior to the existing approximation techniques, with a 30% reduction in power and a 12% reduction in area.
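
    The claim that a zeros-and-ones transform matrix needs no multipliers is easy to verify in software: every output coefficient is just a sum of selected inputs. The matrix below is a hypothetical stand-in (the paper's actual 8 × 8 matrix is not reproduced in the abstract), chosen only to show how such a transform reduces to additions:

        import numpy as np

        def apply_binary_transform(T, x):
            """Apply a {0, 1} transform matrix using only additions."""
            out = np.zeros(T.shape[0], dtype=x.dtype)
            for i, row in enumerate(T):
                for j, t in enumerate(row):
                    if t:  # entry 1: accumulate; entry 0: skip
                        out[i] += x[j]
            return out

        # Hypothetical stand-in matrix (NOT the matrix proposed in the paper)
        T = np.array([[1, 1, 1, 1, 1, 1, 1, 1],
                      [1, 1, 1, 1, 0, 0, 0, 0],
                      [1, 1, 0, 0, 0, 0, 1, 1],
                      [1, 0, 0, 1, 1, 0, 0, 1]])
        x = np.arange(8)
        print(apply_binary_transform(T, x))  # four coefficients, additions only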

  16. Why Do Membranes of Some Unhealthy Cells Adopt a Cubic Architecture?

    DOE PAGES

    Xiao, Qi; Wang, Zhichun; Williams, Dewight; ...

    2016-12-05

    Nonlamellar lipid arrangements, including cubosomes, appear in unhealthy cells, e.g., when they are subject to stress, starvation, or viral infection. The bioactivity of cubosomes—nanoscale particles exhibiting bicontinuous cubic structures—versus more common vesicles is an unexplored area due to lack of suitable model systems. Here, glycodendrimercubosomes (GDCs)—sugar-presenting cubosomes assembled from Janus glycodendrimers by simple injection into buffer—are proposed as mimics of biological cubic membranes. The bicontinuous cubic GDC architecture has been demonstrated by electron tomography. The stability of these GDCs in buffer enabled studies on lectin-dependent agglutination, revealing significant differences compared with the vesicular glycodendrimersome (GDS) counterpart. In particular, GDCs showed an increased activity toward concanavalin A, as well as an increased sensitivity and selectivity toward two variants of banana lectins, a wild-type and a genetically modified variant, which is not exhibited by GDSs. These results suggest that cells may adapt under unhealthy conditions by undergoing a transformation from lamellar to cubic membranes as a method of defense.

  17. Why Do Membranes of Some Unhealthy Cells Adopt a Cubic Architecture?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Qi; Wang, Zhichun; Williams, Dewight

    Nonlamellar lipid arrangements, including cubosomes, appear in unhealthy cells, e.g., when they are subject to stress, starvation, or viral infection. The bioactivity of cubosomes—nanoscale particles exhibiting bicontinuous cubic structures—versus more common vesicles is an unexplored area due to lack of suitable model systems. Here, glycodendrimercubosomes (GDCs)—sugar-presenting cubosomes assembled from Janus glycodendrimers by simple injection into buffer—are proposed as mimics of biological cubic membranes. The bicontinuous cubic GDC architecture has been demonstrated by electron tomography. The stability of these GDCs in buffer enabled studies on lectin-dependent agglutination, revealing significant differences compared with the vesicular glycodendrimersome (GDS) counterpart. In particular, GDCs showed an increased activity toward concanavalin A, as well as an increased sensitivity and selectivity toward two variants of banana lectins, a wild-type and a genetically modified variant, which is not exhibited by GDSs. These results suggest that cells may adapt under unhealthy conditions by undergoing a transformation from lamellar to cubic membranes as a method of defense.

  18. Atomic switch networks—nanoarchitectonic design of a complex system for natural computing

    NASA Astrophysics Data System (ADS)

    Demis, E. C.; Aguilera, R.; Sillin, H. O.; Scharnhorst, K.; Sandouk, E. J.; Aono, M.; Stieg, A. Z.; Gimzewski, J. K.

    2015-05-01

    Self-organized complex systems are ubiquitous in nature, and the structural complexity of these natural systems can be used as a model to design new classes of functional nanotechnology based on highly interconnected networks of interacting units. Conventional fabrication methods for electronic computing devices are subject to known scaling limits, confining the diversity of possible architectures. This work explores methods of fabricating a self-organized complex device known as an atomic switch network and discusses its potential utility in computing. Through a merger of top-down and bottom-up techniques guided by mathematical and nanoarchitectonic design principles, we have produced functional devices comprising nanoscale elements whose intrinsic nonlinear dynamics and memorization capabilities produce robust patterns of distributed activity and a capacity for nonlinear transformation of input signals when configured in the appropriate network architecture. Their operational characteristics represent a unique potential for hardware implementation of natural computation, specifically in the area of reservoir computing—a burgeoning field that investigates the computational aptitude of complex biologically inspired systems.

  19. Atomic switch networks-nanoarchitectonic design of a complex system for natural computing.

    PubMed

    Demis, E C; Aguilera, R; Sillin, H O; Scharnhorst, K; Sandouk, E J; Aono, M; Stieg, A Z; Gimzewski, J K

    2015-05-22

    Self-organized complex systems are ubiquitous in nature, and the structural complexity of these natural systems can be used as a model to design new classes of functional nanotechnology based on highly interconnected networks of interacting units. Conventional fabrication methods for electronic computing devices are subject to known scaling limits, confining the diversity of possible architectures. This work explores methods of fabricating a self-organized complex device known as an atomic switch network and discusses its potential utility in computing. Through a merger of top-down and bottom-up techniques guided by mathematical and nanoarchitectonic design principles, we have produced functional devices comprising nanoscale elements whose intrinsic nonlinear dynamics and memorization capabilities produce robust patterns of distributed activity and a capacity for nonlinear transformation of input signals when configured in the appropriate network architecture. Their operational characteristics represent a unique potential for hardware implementation of natural computation, specifically in the area of reservoir computing-a burgeoning field that investigates the computational aptitude of complex biologically inspired systems.

  20. SpF: Enabling Petascale Performance for Pseudospectral Dynamo Models

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Clune, T.; Vriesema, J.; Gutmann, G.

    2013-12-01

    Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical 'kernels' that can be performed entirely in-processor. The granularity of domain-decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe the basic architecture of SpF as well as preliminary performance data and experience with adapting legacy dynamo codes. We will conclude with a discussion of planned extensions to SpF that will provide pseudospectral applications with additional flexibility with regard to time integration, linear solvers, and discretization in the radial direction.

  1. Performance of linear and nonlinear texture measures in 2D and 3D for monitoring architectural changes in osteoporosis using computer-generated models of trabecular bone

    NASA Astrophysics Data System (ADS)

    Boehm, Holger F.; Link, Thomas M.; Monetti, Roberto A.; Mueller, Dirk; Rummeny, Ernst J.; Raeth, Christoph W.

    2005-04-01

    Osteoporosis is a metabolic bone disease leading to de-mineralization and increased risk of fracture. The two major factors that determine the biomechanical competence of bone are the degree of mineralization and the micro-architectural integrity. Today, modern imaging modalities (high-resolution MRI, micro-CT) are capable of depicting structural details of trabecular bone tissue. From the image data, structural properties obtained by quantitative measures are analysed with respect to the presence of osteoporotic fractures of the spine (in vivo) or correlated with biomechanical strength as derived from destructive testing (in vitro). Fairly well established are linear structural measures in 2D that were originally adopted from standard histo-morphometry. Recently, non-linear techniques in 2D and 3D based on the scaling index method (SIM), the standard Hough transform (SHT), and the Minkowski Functionals (MF) have been introduced, which show excellent performance in predicting bone strength and fracture risk. However, little is known about the performance of the various parameters with respect to monitoring structural changes due to progression of osteoporosis or as a result of medical treatment. In this contribution, we generate models of trabecular bone with pre-defined structural properties, which are exposed to simulated osteoclastic activity. We apply linear and non-linear texture measures to the models and analyse their performance with respect to detecting architectural changes. This study demonstrates that the texture measures are capable of monitoring structural changes of complex model data. The diagnostic potential varies for the different parameters and is found to depend on the topological composition of the model and the initial "bone density". In our models, non-linear texture measures tend to react more sensitively to small structural changes than linear measures. The best performance is observed for the 3rd and 4th Minkowski Functionals and for the scaling index method.
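
    Of the descriptors mentioned, the 2-D Minkowski functionals are the easiest to sketch in code: for a binarized structure they reduce to counts of pixels, exposed edges, and cells of the pixel complex. The phantom and counting scheme below are illustrative, not the authors' implementation:

        import numpy as np

        def minkowski_2d(img):
            """Area, perimeter and Euler characteristic of a 2-D boolean image."""
            p = np.pad(np.asarray(img, dtype=bool), 1, constant_values=False)
            area = int(p.sum())
            q = p.astype(int)
            perimeter = (np.abs(np.diff(q, axis=0)).sum()
                         + np.abs(np.diff(q, axis=1)).sum())
            # Euler characteristic = vertices - edges + faces of the pixel complex.
            verts = (p[:-1, :-1] | p[:-1, 1:] | p[1:, :-1] | p[1:, 1:]).sum()
            edges = ((p[:-1, 1:-1] | p[1:, 1:-1]).sum()
                     + (p[1:-1, :-1] | p[1:-1, 1:]).sum())
            return area, int(perimeter), int(verts - edges + area)

        phantom = np.random.default_rng(1).random((64, 64)) > 0.6  # toy "bone" phase
        print(minkowski_2d(phantom))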

  2. Contemporary Spaces of Memory - Towards Transdisciplinarity in Architecture

    NASA Astrophysics Data System (ADS)

    Kabrońska, Joanna

    2017-10-01

    The paper explores new phenomena in the contemporary practice of commemoration implemented through architecture. Architectural objects related to memory can be places where new trends and phenomena appear earlier than in other architectural objects. The text attempts to prove that these new spaces of memory are a kind of laboratory where new ideas arising in architecture and related disciplines are being tested. The research focuses on the bond between the complex and difficult problem of memory and the issue of transdisciplinarity in architecture. Over the last few decades architecture has been, in comparison to other areas, a relatively closed domain of knowledge. Contemporary places of memory, different from the traditional ones, may be evidence of change. On the basis of theoretical approaches, interdisciplinary surveys, in-field analyses and case studies, the paper gives insight into the relationships between architecture and other areas emerging in the recently created spaces of memory of different types. The text indicates that today both the study and the design of such places are difficult without going beyond the field of architecture. There is a need for further extensive research, but the paper confirms the potential of this research direction. Spaces of memory offer the opportunity to capture the transformation of the discipline at the moment when the process begins.

  3. Acousto-Optic Processing of 2-D Signals Using Temporal and Spatial Integration.

    DTIC Science & Technology

    1983-05-31

    The document includes data on: architectures; coherence properties of pulsed laser diodes; acousto-optic device data; dynamic range issues; image correlation; synthetic aperture radar; the 2-D Fourier transform; and moments.

  4. An Extensible, Modular Architecture Coupling HydroShare and Tethys Platform to Deploy Water Science Web Apps

    NASA Astrophysics Data System (ADS)

    Nelson, J.; Ames, D. P.; Jones, N.; Tarboton, D. G.; Li, Z.; Qiao, X.; Crawley, S.

    2016-12-01

    As water resources data continue to move to the web in the form of well-defined, open-access, machine-readable web services provided by government, academic, and private institutions, there is increased opportunity to move additional parts of the water science workflow to the web (e.g. analysis, modeling, decision support, and collaboration). Creating such web-based functionality can be extremely time-consuming and resource-intensive and can lead the erstwhile water scientist down a veritable cyberinfrastructure rabbit hole, through an unintended tunnel of transformation to become a Cyber-Wonderland software engineer. We posit that such transformations were never the intention of the research programs that fund earth science cyberinfrastructure, nor is it in the best interest of water researchers to spend exorbitant effort developing and deploying such technologies. This presentation will introduce a relatively simple and ready-to-use water science web app environment, funded by the National Science Foundation, that couples the new HydroShare data publishing system with the Tethys Platform web app development toolkit. The coupled system has already been shown to greatly lower the barrier to deployment of web-based visualization and analysis tools for the CUAHSI Water Data Center and for the National Weather Service's National Water Model. The design and implementation of the developed web app architecture will be presented together with key examples of existing apps created using this system. In each of the cases presented, water resources students with basic programming skills were able to develop and deploy highly functional web apps in a relatively short period of time (days to weeks), allowing the focus to remain on water science rather than on cyberinfrastructure. This presentation is accompanied by an open invitation for new collaborations that use the HydroShare-Tethys web app environment.

  5. Generalized fiber Fourier optics.

    PubMed

    Cincotti, Gabriella

    2011-06-15

    A twofold generalization of the optical schemes that perform the discrete Fourier transform (DFT) is given: new passive planar architectures are presented where the 2 × 2 3 dB couplers are replaced by M × M hybrids, reducing the number of required connections and phase shifters. Furthermore, the planar implementation of the discrete fractional Fourier transform (DFrFT) is also described, with a waveguide grating router (WGR) configuration and a properly modified slab coupler.
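
    The interconnect saving behind the M × M hybrids can be mimicked numerically with a radix-M Cooley-Tukey factorization, in which an N-point DFT decomposes into two banks of M-point DFT blocks joined by twiddle phases. A sketch with assumed sizes N = 16 and M = 4 (the optics of the couplers themselves are not modeled):

        import numpy as np

        def dft(n):
            k = np.arange(n)
            return np.exp(-2j * np.pi * np.outer(k, k) / n)

        N, M = 16, 4                        # N = M * M
        rng = np.random.default_rng(2)
        x = rng.normal(size=N) + 1j * rng.normal(size=N)

        A = x.reshape(M, M)                 # A[n1, n2] = x[M*n1 + n2]
        stage1 = dft(M) @ A                 # first bank of M-point blocks
        stage1 = stage1 * np.exp(-2j * np.pi * np.outer(np.arange(M), np.arange(M)) / N)
        stage2 = stage1 @ dft(M)            # second bank of M-point blocks
        X = stage2.T.reshape(N)             # X[k1 + M*k2] = stage2[k1, k2]

        assert np.allclose(X, np.fft.fft(x))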

  6. A deep learning framework for causal shape transformation.

    PubMed

    Lore, Kin Gwn; Stoecklein, Daniel; Davies, Michael; Ganapathysubramanian, Baskar; Sarkar, Soumik

    2018-02-01

    Recurrent neural networks (RNNs) and Long Short-Term Memory (LSTM) networks are the common go-to architectures for exploiting sequential information where the output depends on a sequence of inputs. However, in most considered problems, the dependencies typically lie in the latent domain, which may not be suitable for applications involving the prediction of a step-wise transformation sequence that is dependent on the previous states only in the visible domain, with a known terminal state. We propose a hybrid architecture of convolutional neural networks (CNNs) and stacked autoencoders (SAEs) to learn a sequence of causal actions that nonlinearly transform an input visual pattern or distribution into a target visual pattern or distribution with the same support, and we demonstrate its practicality in a real-world engineering problem involving the physics of fluids. We solved a high-dimensional one-to-many inverse mapping problem concerning microfluidic flow sculpting, where the use of deep learning methods as an inverse map is very seldom explored. This work serves as a fruitful use-case for applied scientists and engineers in how deep learning can be beneficial as a solution for high-dimensional physical problems, potentially opening doors to impactful advances in fields such as materials science and medical biology, where multistep topological transformations are a key element. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Integration of the Reconfigurable Self-Healing eDNA Architecture in an Embedded System

    NASA Technical Reports Server (NTRS)

    Boesen, Michael Reibel; Keymeulen, Didier; Madsen, Jan; Lu, Thomas; Chao, Tien-Hsin

    2011-01-01

    In this work we describe the first real world case study for the self-healing eDNA (electronic DNA) architecture by implementing the control and data processing of a Fourier Transform Spectrometer (FTS) on an eDNA prototype. For this purpose the eDNA prototype has been ported from a Xilinx Virtex 5 FPGA to an embedded system consisting of a PowerPC and a Xilinx Virtex 5 FPGA. The FTS instrument features a novel liquid crystal waveguide, which consequently eliminates all moving parts from the instrument. The addition of the eDNA architecture to do the control and data processing has resulted in a highly fault-tolerant FTS instrument. The case study has shown that the early stage prototype of the autonomous self-healing eDNA architecture is expensive in terms of execution time.

  8. Architectural Methodology Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    The establishment of conventions between two communicating entities in the end systems is essential for communications. Examples of the kinds of decisions that need to be made in establishing a protocol convention include the nature of the data representation, the format and the speed of the data representation over the communications path, and the sequence of control messages (if any) which are sent. One of the main functions of a protocol is to establish a standard path between the communicating entities. This is necessary to create a virtual communications medium with certain desirable characteristics. In essence, it is the function of the protocol to transform the characteristics of the physical communications environment into a more useful virtual communications model. The final function of a protocol is to establish standard data elements for communications over the path; that is, the protocol serves to create a virtual data element for exchange. Other systems may be constructed in which the transferred element is a program or a job. Finally, there are special-purpose applications in which the element to be transferred may be a complex structure such as all or part of a graphic display. NASA's Glenn Research Center (GRC) defines and develops advanced technology for high-priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to describe the methodologies used in developing a protocol architecture for an in-space Internet node. The node would support NASA's four mission areas: Earth Science; Space Science; Human Exploration and Development of Space (HEDS); and Aerospace Technology. This report presents the methodology for developing the protocol architecture. The methodology addresses the architecture for a computer communications environment; it does not address an analog voice architecture.

  9. FPGA wavelet processor design using language for instruction-set architectures (LISA)

    NASA Astrophysics Data System (ADS)

    Meyer-Bäse, Uwe; Vera, Alonzo; Rao, Suhasini; Lenk, Karl; Pattichis, Marios

    2007-04-01

    The design of a microprocessor is a long, tedious, and error-prone task consisting of several design phases: architecture exploration, software design (assembler, linker, loader, profiler), architecture implementation (RTL generation for FPGA or cell-based ASIC), and verification. The Language for Instruction-Set Architectures (LISA) allows a microprocessor to be modeled not only from the instruction set but also from the architecture description, including pipelining behavior, which permits design- and development-tool consistency over all levels of the design. To explore the capability of the LISA processor design platform, a.k.a. CoWare Processor Designer, we present in this paper three microprocessor designs that implement an 8/8 wavelet transform processor of the kind used in today's FBI fingerprint compression scheme. We have designed a 3-stage pipelined 16-bit RISC processor (NanoBlaze). Although RISC microprocessors are usually considered "fast" processors due to design concepts like constant instruction word size, deep pipelines, and many general-purpose registers, it turns out that DSP operations consume substantial processing time in a RISC processor. In a second step we used design principles from programmable digital signal processors (PDSPs) to improve the throughput of the DWT processor. A multiply-accumulate operation along with indirect addressing were the keys to achieving higher throughput. A further improvement is possible with today's FPGA technology: modern FPGAs offer a large number of embedded array multipliers, and it is now feasible to design a "true" vector processor (TVP). A multiplication of two vectors can be done in just one clock cycle with our TVP, and a complete scalar product in two clock cycles. Code profiling and Xilinx FPGA ISE synthesis results are provided that demonstrate the substantial improvement a TVP offers compared with traditional RISC or PDSP designs.

  10. Supporting shared data structures on distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Koelbel, Charles; Mehrotra, Piyush; Vanrosendale, John

    1990-01-01

    Programming nonshared-memory systems is more difficult than programming shared-memory systems, since there is no support for shared data structures. Current programming languages for distributed memory architectures force the user to decompose all data structures into separate pieces, with each piece owned by one of the processors in the machine, and with all communication explicitly specified by low-level message-passing primitives. A new programming environment is presented for distributed memory architectures, providing a global name space and allowing direct access to remote parts of data values. The analysis and program transformations required to implement this environment are described, and the efficiency of the resulting code on the NCUBE/7 and iPSC/2 hypercubes is discussed.

  11. Direct kinematics solution architectures for industrial robot manipulators: Bit-serial versus parallel

    NASA Astrophysics Data System (ADS)

    Lee, J.; Kim, K.

    A Very Large Scale Integration (VLSI) architecture for robot direct kinematic computation suitable for industrial robot manipulators was investigated. The Denavit-Hartenberg transformations are reviewed to exploit a proper processing element, namely an augmented CORDIC. Two distinct implementations, bit-serial and parallel, are elaborated on. The performance of each scheme is analyzed with respect to the time to compute one location of the end-effector of a 6-link manipulator, and the number of transistors required.

  12. Array architectures for iterative algorithms

    NASA Technical Reports Server (NTRS)

    Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas

    1987-01-01

    Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.

  13. Transformational Communications Architecture for the Unit Operations Center (UOC); Common Aviation Command and Control System (CAC2S); and Command and Control On-the-Move Network, Digital Over-the-Horizon Relay (CONDOR)

    DTIC Science & Technology

    2004-06-01

    [Front-matter residue omitted: list of figures including capability sets and the T3 design.] The Joint Tactical Radio System (JTRS), planned for 2008 and beyond, is being designed to provide a flexible new approach to meet diverse warfighter communications needs ... The Command and Control On-the-Move Network, Digital Over-the-Horizon Relay (CoNDOR) Capability Set is an architectural approach designed to ...

  14. Direct kinematics solution architectures for industrial robot manipulators: Bit-serial versus parallel

    NASA Technical Reports Server (NTRS)

    Lee, J.; Kim, K.

    1991-01-01

    A Very Large Scale Integration (VLSI) architecture for robot direct kinematic computation suitable for industrial robot manipulators was investigated. The Denavit-Hartenberg transformations are reviewed to exploit a proper processing element, namely an augmented CORDIC. Two distinct implementations, bit-serial and parallel, are elaborated on. The performance of each scheme is analyzed with respect to the time to compute one location of the end-effector of a 6-link manipulator, and the number of transistors required.
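
    The augmented CORDIC element mentioned in the abstract evaluates rotations using only shift-and-add iterations and a small arctangent table, which is what makes it attractive for the rotational parts of the Denavit-Hartenberg transformations. A minimal floating-point sketch of circular-mode CORDIC (iteration count and interface are illustrative assumptions):

        import math

        def cordic_rotate(x, y, theta, n_iter=24):
            """Rotate (x, y) by theta using shift-add style iterations."""
            K = 1.0
            for i in range(n_iter):                 # accumulated CORDIC gain
                K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
            z = theta
            for i in range(n_iter):
                d = 1.0 if z >= 0 else -1.0         # steer the residual angle to 0
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * math.atan(2.0 ** -i)
            return x * K, y * K

        # Rotating (1, 0) by 30 degrees approaches (cos 30, sin 30).
        print(cordic_rotate(1.0, 0.0, math.radians(30)))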

  15. Code generator for implementing dual tree complex wavelet transform on reconfigurable architectures for mobile applications

    PubMed Central

    Canbay, Ferhat; Levent, Vecdi Emre; Serbes, Gorkem; Ugurdag, H. Fatih; Goren, Sezer

    2016-01-01

    The authors aimed to develop an application for producing different architectures to implement the dual-tree complex wavelet transform (DTCWT), which has a near shift-invariance property. To obtain a low-cost and portable solution for implementing the DTCWT in multi-channel real-time applications, various embedded-system approaches are realised. For comparison, the DTCWT was implemented in C language on a personal computer and on a PIC microcontroller; however, the former approach lacks portability and the latter the desired speed performance. Hence, implementation of the DTCWT on a reconfigurable platform such as a field programmable gate array, which provides portable, low-cost, low-power, and high-performance computing, is considered the most feasible solution. At first, they used the System Generator DSP design tool of Xilinx for algorithm design; however, designs implemented with such tools are not optimised in terms of area and power. To overcome these drawbacks, they implemented the DTCWT algorithm in the Verilog Hardware Description Language, which has its own difficulties. To overcome those difficulties and to simplify the usage of the proposed algorithms and the adaptation procedures, a code generator program that can produce different architectures is proposed. PMID:27733925

  16. Code generator for implementing dual tree complex wavelet transform on reconfigurable architectures for mobile applications.

    PubMed

    Canbay, Ferhat; Levent, Vecdi Emre; Serbes, Gorkem; Ugurdag, H Fatih; Goren, Sezer; Aydin, Nizamettin

    2016-09-01

    The authors aimed to develop an application for producing different architectures to implement the dual-tree complex wavelet transform (DTCWT), which has a near shift-invariance property. To obtain a low-cost and portable solution for implementing the DTCWT in multi-channel real-time applications, various embedded-system approaches are realised. For comparison, the DTCWT was implemented in C language on a personal computer and on a PIC microcontroller; however, the former approach lacks portability and the latter the desired speed performance. Hence, implementation of the DTCWT on a reconfigurable platform such as a field programmable gate array, which provides portable, low-cost, low-power, and high-performance computing, is considered the most feasible solution. At first, they used the System Generator DSP design tool of Xilinx for algorithm design; however, designs implemented with such tools are not optimised in terms of area and power. To overcome these drawbacks, they implemented the DTCWT algorithm in the Verilog Hardware Description Language, which has its own difficulties. To overcome those difficulties and to simplify the usage of the proposed algorithms and the adaptation procedures, a code generator program that can produce different architectures is proposed.

  17. A Software Architecture for Intelligent Synthesis Environments

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Norvig, Peter (Technical Monitor)

    2001-01-01

    NASA's Intelligent Synthesis Environment (ISE) program is a grand attempt to develop a system to transform the way complex artifacts are engineered. This paper discusses a "middleware" architecture for enabling the development of ISE. Desirable elements of such an Intelligent Synthesis Architecture (ISA) include remote invocation; plug-and-play applications; scripting of applications; management of design artifacts, tools, and artifact and tool attributes; common system services; system management; and systematic enforcement of policies. This paper argues that the ISA should extend conventional distributed object technology (DOT), such as CORBA and Product Data Managers, with flexible repositories of product and tool annotations and "plug-and-play" mechanisms for inserting "ility" or orthogonal concerns into the system. I describe the Object Infrastructure Framework, an Aspect Oriented Programming (AOP) environment for developing distributed systems that provides utility insertion and enables consistent annotation maintenance. This technology can be used to enforce policies such as maintaining the annotations of artifacts, particularly the provenance and access control rules of artifacts; performing automatic datatype transformations between representations; supplying alternative servers of the same service; reporting on the status of jobs and the system; conveying privileges throughout an application; supporting long-lived transactions; maintaining version consistency; and providing software redundancy and mobility.

  18. Implementation of the 2-D Wavelet Transform into FPGA for Image

    NASA Astrophysics Data System (ADS)

    León, M.; Barba, L.; Vargas, L.; Torres, C. O.

    2011-01-01

    This paper presents a hardware implementation of the two-dimensional discrete wavelet transform algorithm for FPGA, using the Daubechies filter family of order 2 (db2). The decomposition algorithm of this transform is designed and simulated with the hardware description language VHDL and is implemented in a programmable logic device (FPGA), reference XC3S1200E of the Xilinx Spartan-3E family, taking advantage of the parallelism these devices offer and the processing speeds they can reach. The architecture is evaluated using input images of different sizes. This implementation was done with the aim of developing a future image-encryption hardware system using the wavelet transform for information security.
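
    In software, one level of the separable 2-D db2 decomposition realized here in VHDL can be sketched as decimated row filtering applied along both axes. The filter taps are the standard db2 coefficients; the periodic extension and indexing are assumptions of this sketch, not the paper's architecture:

        import numpy as np

        h = np.array([1 + 3**0.5, 3 + 3**0.5, 3 - 3**0.5, 1 - 3**0.5]) / (4 * 2**0.5)
        g = h[::-1] * np.array([1.0, -1.0, 1.0, -1.0])   # quadrature-mirror high-pass

        def analyze_rows(a, f):
            """Filter rows with f (periodic extension) and downsample by 2."""
            out = np.zeros((a.shape[0], a.shape[1] // 2))
            for k in range(len(f)):
                out += f[k] * np.roll(a, -k, axis=1)[:, ::2]
            return out

        def dwt2_db2(img):
            lo, hi = analyze_rows(img, h), analyze_rows(img, g)
            return (analyze_rows(lo.T, h).T, analyze_rows(lo.T, g).T,   # LL, LH
                    analyze_rows(hi.T, h).T, analyze_rows(hi.T, g).T)   # HL, HH

        LL, LH, HL, HH = dwt2_db2(np.random.default_rng(3).random((8, 8)))
        print(LL.shape, HH.shape)   # each subband is half size: (4, 4)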

  19. Distributed Computing Architecture for Image-Based Wavefront Sensing and 2 D FFTs

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan

    2006-01-01

    Image-based wavefront sensing (WFS) provides significant advantages over interferometric wavefront sensors, such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore specialized high-performance computing architectures are required in applications utilizing it. The development and testing of these high-performance computing architectures are essential to such missions as the James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and Spherical Primary Optical Telescope (SPOT). These specialized computing architectures require numerous two-dimensional Fourier transforms, which necessitate an all-to-all communication when applied on a distributed computational architecture. Several solutions for distributed computing are presented, with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis will be presented. The solutions offered could be applied to other all-to-all communication and scientifically computationally complex problems.
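
    The all-to-all arises because a 2-D Fourier transform factors into 1-D transforms along each axis with a global transpose in between; serially the transpose is a cheap array operation, but on a distributed machine it is exactly the communication step the abstract is concerned with. A serial stand-in:

        import numpy as np

        def fft2_row_column(a):
            a = np.fft.fft(a, axis=1)   # 1-D FFTs over locally held rows
            a = a.T.copy()              # the transpose = the all-to-all exchange
            a = np.fft.fft(a, axis=1)   # 1-D FFTs over what were columns
            return a.T                  # restore the original layout

        x = np.random.default_rng(4).normal(size=(64, 64))
        assert np.allclose(fft2_row_column(x), np.fft.fft2(x))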

  20. Collaboration pathway(s) using new tools for optimizing `operational' climate monitoring from space

    NASA Astrophysics Data System (ADS)

    Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.

    2015-09-01

    Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a long-term solution requires transforming scientific missions into an optimized, robust 'operational' constellation that addresses the collective needs of policy makers, scientific communities and global academic users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent rule-based expert system (RBES) optimization modeling of the intended NPOESS architecture becomes a surrogate for global operational climate monitoring architecture(s). These rule-based system tools provide valuable insight for global climate architectures through comparison and evaluation of alternatives and the sheer range of trade space explored. Optimization of climate monitoring architecture(s) for a partial list of ECVs (essential climate variables) is explored and described in detail, with dialogue on appropriate rule-based valuations. These optimization tool(s) suggest global collaboration advantages and elicit responses from the audience and climate science community. This paper focuses on recent research exploring the joint requirement implications of the high-profile NPOESS architecture and extends the research and tools to optimization for a climate-centric case study, reflecting work from the SPIE RS conferences of 2013 and 2014, abridged for simplification (refs. 30, 32). First, the heavily scrutinized NPOESS architecture inspired the research question: was complexity (as a cost/risk factor) overlooked when considering the benefits of aggregating different missions onto a single platform? Years later there has been a complete reversal; should agencies now consider disaggregation the answer? We discuss what some academic research suggests. Second, we use the GCOS requirements for earth climate observations via ECVs (essential climate variables), many collected from space-based sensors, and accept their definitions of global coverage, intended to ensure that the needs of major global and international organizations (UNFCCC and IPCC) are met, as a core objective. How do new optimization tools like rule-based engines (RBES) offer alternative methods of evaluating collaborative architectures and constellations, and what would the trade space of optimized operational climate monitoring architectures for ECVs look like? Third, using the RBES toolkit (2014), we demonstrate application to a climate-centric rule-based decision engine that optimizes architectural trades of earth observation satellite systems, allowing comparison to existing architectures and yielding insights for global collaborative architectures. How difficult is it to pull together an optimized climate case study, utilizing, for example, 12 climate-based instruments on multiple existing platforms and a nominal handful of orbits, for the best cost and performance benefits against the collection requirements of a representative set of ECVs? And how much effort and resources should an organization expect to invest to realize these analysis and utility benefits?

  1. Morphological feature detection for cervical cancer screening

    NASA Astrophysics Data System (ADS)

    Narayanswamy, Ramkumar; Sharpe, John P.; Duke, Heather J.; Stewart, Rosemary J.; Johnson, Kristina M.

    1995-03-01

    An optoelectronic system has been designed to pre-screen pap-smear slides and detect suspicious cells using the hit/miss transform. Computer simulation of the algorithm, tested on 184 pap-smear images, detected 95% of the suspicious regions as suspect while tagging just 5% of the normal regions as suspect. An optoelectronic implementation of the hit/miss transform using a 4f Vander-Lugt correlator architecture is proposed and demonstrated with experimental results.
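
    A digital stand-in for the optical hit/miss operation is available in scipy: a location is flagged only when a "hit" mask fits the foreground and a "miss" mask fits the background around it. The toy image and structuring elements below are assumptions, not the paper's templates:

        import numpy as np
        from scipy.ndimage import binary_hit_or_miss

        img = np.zeros((9, 9), dtype=bool)
        img[3:6, 3:6] = True                  # a 3x3 blob (a "suspicious" nucleus)

        hit = np.ones((3, 3), dtype=bool)     # pattern that must be present
        miss = np.zeros((5, 5), dtype=bool)   # ring that must be empty
        miss[0, :] = miss[-1, :] = miss[:, 0] = miss[:, -1] = True

        detected = binary_hit_or_miss(img, structure1=hit, structure2=miss)
        print(np.argwhere(detected))          # -> [[4 4]], the blob centre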

  2. Modulation of gene expression using electrospun scaffolds with templated architecture.

    PubMed

    Karchin, A; Wang, Y-N; Sanders, J E

    2012-06-01

    The fabrication of biomimetic scaffolds is a critical component to fulfill the promise of functional tissue-engineered materials. We describe herein a simple technique, based on printed circuit board manufacturing, to produce novel templates for electrospinning scaffolds for tissue-engineering applications. This technique facilitates fabrication of electrospun scaffolds with templated architecture, which we defined as a scaffold's bulk mechanical properties being driven by its fiber architecture. Electrospun scaffolds with templated architectures were characterized with regard to fiber alignment and mechanical properties. Fast Fourier transform analysis revealed a high degree of fiber alignment along the conducting traces of the templates. Mechanical testing showed that scaffolds demonstrated tunable mechanical properties as a function of templated architecture. Fibroblast-seeded scaffolds were subjected to a peak strain of 3 or 10% at 0.5 Hz for 1 h. Exposing seeded scaffolds to the low strain magnitude (3%) significantly increased collagen I gene expression compared to the high strain magnitude (10%) in a scaffold architecture-dependent manner. These experiments indicate that scaffolds with templated architectures can be produced, and modulation of gene expression is possible with templated architectures. This technology holds promise for the long-term goal of creating tissue-engineered replacements with the biomechanical and biochemical make-up of native tissues. Copyright © 2012 Wiley Periodicals, Inc.

  3. A consensual neural network

    NASA Technical Reports Server (NTRS)

    Benediktsson, J. A.; Ersoy, O. K.; Swain, P. H.

    1991-01-01

    A neural network architecture called a consensual neural network (CNN) is proposed for the classification of data from multiple sources. Its relation to hierarchical and ensemble neural networks is discussed. CNN is based on the statistical consensus theory and uses nonlinearly transformed input data. The input data are transformed several times, and the different transformed data are applied as if they were independent inputs. The independent inputs are classified using stage neural networks and outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote-sensing data and geographic data are given.
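
    The consensual scheme is easy to sketch with stand-in data: several fixed nonlinear transformations of the same inputs each feed a simple stage classifier, and the stage outputs are combined with weights derived from each stage's accuracy. Logistic-regression stages replace the paper's stage neural networks purely for brevity:

        import numpy as np

        rng = np.random.default_rng(9)
        X = rng.normal(size=(400, 4))
        y = (X @ np.array([2.0, -1.0, 0.5, 0.0]) + 0.5 * rng.normal(size=400) > 0).astype(float)

        def fit_logreg(Xi, y, lr=0.1, steps=500):
            w = np.zeros(Xi.shape[1])
            for _ in range(steps):                      # simple gradient ascent
                p = 1 / (1 + np.exp(-Xi @ w))
                w += lr * Xi.T @ (y - p) / len(y)
            return w

        transforms = [lambda a: a, np.tanh, lambda a: np.sign(a) * np.sqrt(np.abs(a))]
        stages = [(f, fit_logreg(f(X), y)) for f in transforms]   # one stage per transform
        weights = [np.mean(((1 / (1 + np.exp(-f(X) @ w))) > 0.5) == y) for f, w in stages]

        probs = np.array([1 / (1 + np.exp(-f(X) @ w)) for f, w in stages])
        consensus = (np.array(weights)[:, None] * probs).sum(0) / sum(weights)
        print("consensus accuracy:", np.mean((consensus > 0.5) == y))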

  4. Achieving the Desired Transformation: Thoughts on Next Steps for Outcomes-Based Medical Education.

    PubMed

    Holmboe, Eric S; Batalden, Paul

    2015-09-01

    Since the introduction of the outcomes-based medical education (OBME) movement, progress toward implementation has been active but challenging. Much of the angst and criticism has been directed at the approaches to assessment that are associated with outcomes-based or competency frameworks, particularly defining the outcomes. In addition, these changes to graduate medical education (GME) are concomitant with major change in health care systems--specifically, changes to increase quality and safety while reducing cost. Every sector, from medical education to health care delivery and financing, is in the midst of substantial change and disruption.The recent release of the Institute of Medicine's report on the financing and governance of GME highlights the urgent need to accelerate the transformation of medical education. One source of continued tension within the medical education community arises from the assumption that the much-needed increases in value and improvement in health care can be achieved by holding the current educational structures and architecture of learning in place while concomitantly withdrawing resources. The authors of this Perspective seek to reframe the important and necessary debate surrounding the current challenges to implementing OBME. Building on recent change and service theories (e.g., Theory U and coproduction), they propose several areas of redirection, including reexamination of curricular models and greater involvement of learners, teachers, and regulators in cocreating new training models, to help facilitate the desired transformation in medical education.

  5. Cultural Identity of the Industrial Heritage in Gdansk

    NASA Astrophysics Data System (ADS)

    Szymański, Tomasz

    2017-10-01

    Since its inception, an urbanized area undergoes a number of changes driven by the demands of its inhabitants. Industrial heritage, including the historic architecture of brownfield sites increasingly present in the centres of our cities, is one of the most important components of identity. The development of civilization brings spatial and functional transformations. Revitalization of areas recently occupied by industry provides a unique opportunity to rediscover their values. Increasingly, however, the terms "wasteland" or "brownfields" are used. Land use by industry is associated only with its "predatory" exploitation, destruction, and devastation. However, one can venture to say that the former industrial use of the land "civilized" it. Current developments have restored public access to this "new" urban space. Quite a number of valuable architectural objects have been preserved in these areas, although, unfortunately, their complexity and multithreaded value tend to be forgotten. The causes of the risks, ways to prevent adverse transformations, and methods of developing action plans to re-create industrial architecture are still under discussion. Industrial heritage, particularly architecture, is one of the important components of the material culture that defines the identity of the city of Gdansk. It leaves no doubt about the city's distinctiveness and originality in relation to other cities and regions. Revitalization projects are at the same time the most effective way to protect and preserve the cultural identity of brownfield facilities. Examples of such transformations are most relevant to Gdansk and are beginning to be more and more visible. The main areas of revitalization activity in Gdansk, the area of the former Imperial Shipyard and Ołowianka Island, are still only the beginning of the necessary changes. Old industrial plants and technical facilities should be subject to regeneration and become an activating element at the scale of whole districts and the city.

  6. Physics architecture

    NASA Astrophysics Data System (ADS)

    Konopleva, Nelly

    2017-03-01

    Fundamental physical theory axiomatics is closely connected with the methods of experimental measurement. The difference between theories using global and local symmetries is explained. It is shown that symmetry group localization leads not only to a change of the relativity principle but to a fundamental modification of the experimental programs testing a physical theory's predictions. Any fundamental physical theory must be consistent with the measurement procedures employed for its testing. These ideas are illustrated by events from my biography connected with the transformation of Yang-Mills theory from an ordinary phenomenological model into a fundamental physical theory based on local symmetry principles like Einsteinian General Relativity. Baldin's position in this situation is demonstrated.

  7. Space Generic Open Avionics Architecture (SGOAA): Overview

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1992-01-01

    A space generic open avionics architecture created for NASA is described. It will serve as the basis for entities in spacecraft core avionics, capable of being tailored by NASA for future space program avionics ranging from small vehicles such as Moon ascent/descent vehicles to large ones such as Mars transfer vehicles or orbiting stations. The standard consists of: (1) a system architecture; (2) a generic processing hardware architecture; (3) a six class architecture interface model; (4) a system services functional subsystem architectural model; and (5) an operations control functional subsystem architectural model.

  8. Large-scale chromosome folding versus genomic DNA sequences: A discrete double Fourier transform technique.

    PubMed

    Chechetkin, V R; Lobzin, V V

    2017-08-07

    Using state-of-the-art techniques that combine imaging methods and high-throughput genomic mapping tools has led to significant progress in detailing the chromosome architecture of various organisms. However, a gap still remains between the rapidly growing structural data on chromosome folding and the large-scale genome organization. Could a part of the information on chromosome folding be obtained directly from the underlying genomic DNA sequences abundantly stored in the databanks? To answer this question, we developed an original discrete double Fourier transform (DDFT). DDFT serves for the detection of large-scale genome regularities associated with domains/units at the different levels of hierarchical chromosome folding. The method is versatile and can be applied to both genomic DNA sequences and corresponding physico-chemical parameters such as base-pairing free energy. The latter characteristic is closely related to replication and transcription and can also be used for the assessment of temperature or supercoiling effects on chromosome folding. We tested the method on the genome of E. coli K-12 and found good correspondence with the annotated domains/units established experimentally. As a brief illustration of the further abilities of DDFT, a study of the large-scale genome organization of bacteriophage PHIX174 and the bacterium Caulobacter crescentus is also included. The combined experimental, modeling, and bioinformatic DDFT analysis should yield more complete knowledge of the chromosome architecture and genome organization. Copyright © 2017 Elsevier Ltd. All rights reserved.
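
    The core idea of a double transform can be sketched numerically: a large-scale repeat in a sequence produces a comb of harmonics in the first spectrum, and transforming the (log) power spectrum again collapses that comb into peaks whose positions encode the repeat spacing. The signal, sizes, and noise level below are illustrative assumptions, not the paper's DDFT procedure:

        import numpy as np

        rng = np.random.default_rng(5)
        n = 4096
        pos = np.arange(n)
        # Toy "genomic signal": alternating domains of 256 positions (period 512)
        # plus noise; a real input might be base-pairing free energy along the genome.
        signal = ((pos // 256) % 2).astype(float) + 0.5 * rng.normal(size=n)

        spec1 = np.abs(np.fft.rfft(signal - signal.mean())) ** 2   # comb at bins 8, 24, 40, ...
        spec1 = np.log(spec1 + 1e-12)                              # compress the 1/k^2 decay
        spec2 = np.abs(np.fft.rfft(spec1 - spec1.mean())) ** 2     # transform of the spectrum

        spec2[:8] = 0                          # ignore the trivial low-order bins
        # The 16-bin harmonic spacing shows up as peaks near multiples of
        # len(spec1)/16, i.e. around 128, 256, ...
        print(sorted(np.argsort(spec2)[::-1][:4]))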

  9. Iterative-Transform Phase Diversity: An Object and Wavefront Recovery Algorithm

    NASA Technical Reports Server (NTRS)

    Smith, J. Scott

    2011-01-01

    Presented is a solution for recovering the wavefront and an extended object. It builds upon the VSM architecture and deconvolution algorithms. Simulations are shown for recovering the wavefront and extended object from noisy data.

  10. System and method for integrating and accessing multiple data sources within a data warehouse architecture

    DOEpatents

    Musick, Charles R [Castro Valley, CA; Critchlow, Terence [Livermore, CA; Ganesh, Madhaven [San Jose, CA; Slezak, Tom [Livermore, CA; Fidelis, Krzysztof [Brentwood, CA

    2006-12-19

    A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.

  11. Tailoring Selective Laser Melting Process Parameters for NiTi Implants

    NASA Astrophysics Data System (ADS)

    Bormann, Therese; Schumacher, Ralf; Müller, Bert; Mertmann, Matthias; de Wild, Michael

    2012-12-01

    Complex-shaped NiTi constructions are becoming more and more essential for biomedical applications, especially for dental or cranio-maxillofacial implants. The additive manufacturing method of selective laser melting (SLM) allows complex-shaped elements with predefined porosity and three-dimensional micro-architecture to be realized directly from the design data. We demonstrate that intentional modification of the applied energy during the SLM process allows the transformation temperatures of NiTi entities to be tailored within the entire construction. Differential scanning calorimetry, x-ray diffraction, and metallographic analysis were employed for the thermal and structural characterizations. In particular, the phase transformation temperatures, the related crystallographic phases, and the formed microstructures of SLM constructions were determined for a series of SLM processing parameters. The SLM-NiTi exhibits pseudoelastic behavior. In this manner, the properties of NiTi implants can be tailored to build smart implants with pre-defined micro-architecture and advanced performance.

  12. Gregarious Data Re-structuring in a Many Core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, Sunil; Manzano Franco, Joseph B.; Marquez, Andres

    In this paper, we have developed a new methodology that takes into consideration the access patterns of a single parallel actor (e.g. a thread), as well as the access patterns of "grouped" parallel actors that share a resource (e.g. a distributed Level 3 cache). We start with a hierarchical tile code for our target machine and apply a series of transformations at the tile level to improve data residence in a given memory hierarchy level. The contributions of this paper include (a) collaborative data restructuring for group reuse and (b) a low-overhead transformation technique to improve access patterns and bring closely connected data elements together. Preliminary results on a many-core architecture, the Tilera TileGX, show promising improvements over optimized OpenMP code (up to a 31% increase in GFLOPS) and over our own previous work on fine-grained runtimes (up to 16%) for selected kernels.

  13. ESPC Common Model Architecture

    DTIC Science & Technology

    2014-09-30

    ESPC Common Model Architecture, Earth System Modeling ... The National Unified Operational Prediction Capability (NUOPC) was established between NOAA and the Navy to develop a common software architecture for easy and efficient ... development under a common model architecture and other software-related standards in this project. NUOPC proposes to accelerate ...

  14. Tough and deformable glasses with bioinspired cross-ply architectures.

    PubMed

    Yin, Zhen; Dastjerdi, Ahmad; Barthelat, Francois

    2018-05-15

    Glasses are optically transparent, hard materials that have been in sustained demand and usage in architectural windows, optical devices, electronics and solar panels. Despite their outstanding optical qualities and durability, their brittleness and low resistance to impact still limits wider applications. Here we present new laminated glass designs that contain toughening cross-ply architectures inspired from fish scales and arthropod cuticles. This seemingly minor enrichment completely transforms the way laminated glass deforms and fractures, and it turns a traditionally brittle material into a stretchy and tough material with little impact on surface hardness and optical quality. Large ply rotation propagates over large volumes, and localization is delayed in tension, even if a strain-softening interlayer is used, in a remarkable mechanism which is generated by the kinematics of the plies and geometrical hardening. Compared to traditional laminated glass, which degrades significantly in performance when damaged, our cross-ply architecture glass is damage-tolerant and 50 times tougher in energy terms. Our glass-based, transparent material is highly innovative and the first of its kind; we believe it will have impact in a broad range of applications in construction, coatings, chemical engineering, electronics, and photovoltaics. Copyright © 2018. Published by Elsevier Ltd.

  15. A Proposed Pattern of Enterprise Architecture

    DTIC Science & Technology

    2013-02-01

    ... consistent architecture descriptions. UPDM comprises extensions to both OMG's Unified Modelling Language (UML) and Systems Modelling Language (SysML) ... those who use UML and SysML. These represent significant advancements that enable architecture trade-off analyses, architecture model execution ... (SysML), and thus provides for architectural descriptions that contain a rich set of (formally) connected DoDAF/MoDAF viewpoints expressed ...

  16. The System of Systems Architecture Feasibility Assessment Model

    DTIC Science & Technology

    2016-06-01

    The System of Systems Architecture Feasibility Assessment Model, by Stephen E. Gillespie, June 2016; dissertation supervisor: Eugene Paulo. [Report-form residue omitted.] ... the SoS architecture feasibility assessment model (SoS-AFAM). Together, these extend current model-based systems engineering (MBSE) and SoS engineering ...

  17. 3-D System-on-System (SoS) Biomedical-Imaging Architecture for Health-Care Applications.

    PubMed

    Sang-Jin Lee; Kavehei, O; Yoon-Ki Hong; Tae Won Cho; Younggap You; Kyoungrok Cho; Eshraghian, K

    2010-12-01

    This paper presents the implementation of a 3-D architecture for a biomedical-imaging system based on a multilayered system-on-system structure. The architecture consists of a complementary metal-oxide-semiconductor image sensor layer, memory, a 3-D discrete wavelet transform (3D-DWT), a 3-D Advanced Encryption Standard (3D-AES), and an RF transmitter as an add-on layer. Multilayer silicon (Si) stacking permits fabrication and optimization of individual layers by different processing technologies to achieve optimal performance. Utilization of a through-silicon-via scheme can address the required low-power operation as well as high-speed performance. Potential benefits of 3-D vertical integration include an improved form factor as well as a reduction in total wiring length, multifunctionality, power efficiency, and flexible heterogeneous integration. The proposed imaging architecture was simulated using Cadence Spectre and Synopsys HSPICE, while implementation was carried out with Cadence Virtuoso and Mentor Graphics Calibre.

  18. A static data flow simulation study at Ames Research Center

    NASA Technical Reports Server (NTRS)

    Barszcz, Eric; Howard, Lauri S.

    1987-01-01

    Demands for computational power, particularly in the area of computational fluid dynamics (CFD), led NASA Ames Research Center to study advanced computer architectures. One architecture being studied is the static data flow architecture based on research done by Jack B. Dennis at MIT. To improve understanding of this architecture, a static data flow simulator, written in Pascal, has been implemented for use on a Cray X-MP/48. A matrix multiply and a two-dimensional fast Fourier transform (FFT), two algorithms used in CFD work at Ames, have been run on the simulator. Execution times can vary by a factor of more than 2 depending on the partitioning method used to assign instructions to processing elements. Service time for matching tokens has proved to be a major bottleneck. Loop control and array address calculation overhead can double the execution time. The best sustained MFLOPS rates were less than 50% of the maximum capability of the machine.

  19. A GaAs vector processor based on parallel RISC microprocessors

    NASA Astrophysics Data System (ADS)

    Misko, Tim A.; Rasset, Terry L.

    A vector processor architecture has been developed around a 32-bit microprocessor implemented in gallium arsenide (GaAs) technology. The McDonnell Douglas vector processor (MVP) will be fabricated entirely from GaAs digital integrated circuits. The MVP architecture includes a vector memory of 1 megabyte, a parallel bus architecture with eight processing elements connected in parallel, and a control processor. The processing elements consist of a reduced instruction set CPU (RISC) with four floating-point coprocessor units and the necessary memory interface functions. This architecture has been simulated for several benchmark programs, including a complex fast Fourier transform (FFT), a complex inner product, trigonometric functions, and a sort-merge routine. The results of this study indicate that the MVP can process a 1024-point complex FFT in 112 microsec (389 megaflops) while consuming approximately 618 W of power in a volume of approximately 0.1 ft-cubed.

  20. Fukunaga-Koontz feature transformation for statistical structural damage detection and hierarchical neuro-fuzzy damage localisation

    NASA Astrophysics Data System (ADS)

    Hoell, Simon; Omenzetter, Piotr

    2017-07-01

    Considering jointly damage sensitive features (DSFs) of signals recorded by multiple sensors, applying advanced transformations to these DSFs and assessing systematically their contribution to damage detectability and localisation can significantly enhance the performance of structural health monitoring systems. This philosophy is explored here for partial autocorrelation coefficients (PACCs) of acceleration responses. They are interrogated with the help of the linear discriminant analysis based on the Fukunaga-Koontz transformation using datasets of the healthy and selected reference damage states. Then, a simple but efficient fast forward selection procedure is applied to rank the DSF components with respect to statistical distance measures specialised for either damage detection or localisation. For the damage detection task, the optimal feature subsets are identified based on the statistical hypothesis testing. For damage localisation, a hierarchical neuro-fuzzy tool is developed that uses the DSF ranking to establish its own optimal architecture. The proposed approaches are evaluated experimentally on data from non-destructively simulated damage in a laboratory scale wind turbine blade. The results support our claim of being able to enhance damage detectability and localisation performance by transforming and optimally selecting DSFs. It is demonstrated that the optimally selected PACCs from multiple sensors or their Fukunaga-Koontz transformed versions can not only improve the detectability of damage via statistical hypothesis testing but also increase the accuracy of damage localisation when used as inputs into a hierarchical neuro-fuzzy network. Furthermore, the computational effort of employing these advanced soft computing models for damage localisation can be significantly reduced by using transformed DSFs.
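
    The Fukunaga-Koontz transformation itself is compact enough to sketch: whiten the summed covariance of two classes of feature vectors, then eigendecompose one class's whitened covariance; the two classes share the resulting eigenvectors, and their eigenvalues pairwise sum to one, so a component most expressive for one class is least expressive for the other, which is the basis for ranking features. The data below are random stand-ins for healthy/damaged PACC vectors:

        import numpy as np

        rng = np.random.default_rng(6)
        X1 = rng.normal(size=(500, 6)) * np.array([3, 1, 1, 1, 1, 0.3])  # "healthy"
        X2 = rng.normal(size=(500, 6)) * np.array([0.3, 1, 1, 1, 1, 3])  # "damaged"

        S1 = np.cov(X1, rowvar=False)
        S2 = np.cov(X2, rowvar=False)

        evals, evecs = np.linalg.eigh(S1 + S2)      # whitening transform for S1 + S2
        P = evecs @ np.diag(evals ** -0.5)

        lam, V = np.linalg.eigh(P.T @ S1 @ P)       # shared eigenvectors, lam in (0, 1)
        print(np.round(lam, 3))   # values near 1 favour class 1, near 0 favour class 2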

  1. Affixation in semantic space: Modeling morpheme meanings with compositional distributional semantics.

    PubMed

    Marelli, Marco; Baroni, Marco

    2015-07-01

    The present work proposes a computational model of morpheme combination at the meaning level. The model moves from the tenets of distributional semantics, and assumes that word meanings can be effectively represented by vectors recording their co-occurrence with other words in a large text corpus. Given this assumption, affixes are modeled as functions (matrices) mapping stems onto derived forms. Derived-form meanings can be thought of as the result of a combinatorial procedure that transforms the stem vector on the basis of the affix matrix (e.g., the meaning of nameless is obtained by multiplying the vector of name with the matrix of -less). We show that this architecture accounts for the remarkable human capacity of generating new words that denote novel meanings, correctly predicting semantic intuitions about novel derived forms. Moreover, the proposed compositional approach, once paired with a whole-word route, provides a new interpretative framework for semantic transparency, which is here partially explained in terms of ease of the combinatorial procedure and strength of the transformation brought about by the affix. Model-based predictions are in line with the modulation of semantic transparency on explicit intuitions about existing words, response times in lexical decision, and morphological priming. In conclusion, we introduce a computational model to account for morpheme combination at the meaning level. The model is data-driven, theoretically sound, and empirically supported, and it makes predictions that open new research avenues in the domain of semantic processing. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
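
    The combinatorial procedure is straightforward to sketch: an affix becomes a d × d matrix fitted by least squares from (stem vector, derived-form vector) training pairs and is then applied to an unseen stem. Random vectors below stand in for corpus-derived distributional vectors:

        import numpy as np

        rng = np.random.default_rng(7)
        d, n_pairs = 50, 300
        stems = rng.normal(size=(n_pairs, d))               # e.g. "name", "hope", ...
        A_true = np.eye(d) + 0.1 * rng.normal(size=(d, d))  # unknown affix behaviour
        derived = stems @ A_true.T + 0.01 * rng.normal(size=(n_pairs, d))

        # Fit the affix matrix: argmin_A || stems @ A.T - derived ||_F
        M, *_ = np.linalg.lstsq(stems, derived, rcond=None)
        A_less = M.T

        novel_stem = rng.normal(size=d)          # an unseen stem vector
        predicted = A_less @ novel_stem          # composed meaning of "<stem>less"
        print(np.corrcoef(predicted, A_true @ novel_stem)[0, 1])  # close to 1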

  2. Transforming Legacy Systems to Obtain Information Superiority

    DTIC Science & Technology

    2001-01-01

    is imperative that innovative technologies be developed to enable legacy weapon systems to exploit the information revolution, achieve information dominance, and meet the required operational tempo. This paper presents an embedded-system architecture, open system middleware services, and a software

  3. Architecture for time or transform domain decoding of reed-solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Deutsch, Leslie J. (Inventor); Shao, Howard M. (Inventor)

    1989-01-01

    Two pipeline (255,223) RS decoders, one a time domain decoder and the other a transform domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time domain decoder and transform domain decoder have a modified GCD that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm in sequence for the final decoding steps prior to adding the received RS coded message to produce a decoded output message.
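
    As a worked illustration of one of these final decoding steps, the sketch below implements a Chien search over GF(2^8): the errata locator polynomial is evaluated at every field element, and the roots mark the errata positions. It is a generic software illustration under the commonly used 0x11d primitive polynomial, not the patent's pipelined hardware.

```python
# Chien search over GF(2^8): find positions p where tau(alpha^{-p}) == 0.
# Log/antilog tables are built from the primitive polynomial
# x^8 + x^4 + x^3 + x^2 + 1 (0x11d), an assumption for illustration.
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
for i in range(255, 512):                    # doubled table avoids a modulo
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def chien_search(tau, n=255):
    """tau given lowest-degree coefficient first."""
    positions = []
    for p in range(n):
        acc, xi = 0, 1                       # xi runs over powers of alpha^{-p}
        for coeff in tau:
            acc ^= gf_mul(coeff, xi)
            xi = gf_mul(xi, EXP[(255 - p) % 255])
        if acc == 0:
            positions.append(p)
    return positions

# Locator for errors at positions 3 and 10:
# tau(x) = (1 + alpha^3 x)(1 + alpha^10 x) over GF(2^8).
tau = [1, EXP[3] ^ EXP[10], gf_mul(EXP[3], EXP[10])]
print(chien_search(tau))                     # [3, 10]
```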

  4. Signal processing applications of massively parallel charge domain computing devices

    NASA Technical Reports Server (NTRS)

    Fijany, Amir (Inventor); Barhen, Jacob (Inventor); Toomarian, Nikzad (Inventor)

    1999-01-01

    The present invention is embodied in a charge coupled device (CCD)/charge injection device (CID) architecture capable of performing a Fourier transform by simultaneous matrix vector multiplication (MVM) operations in respective plural CCD/CID arrays in parallel in O(1) steps. For example, in one embodiment, a first CCD/CID array stores charge packets representing a first matrix operator based upon permutations of a Hartley transform and computes the Fourier transform of an incoming vector. A second CCD/CID array stores charge packets representing a second matrix operator based upon different permutations of a Hartley transform and computes the Fourier transform of an incoming vector. The incoming vector is applied to the inputs of the two CCD/CID arrays simultaneously, and the real and imaginary parts of the Fourier transform are produced simultaneously in the time required to perform a single MVM operation in a CCD/CID array.
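
    The underlying identity is easy to check numerically: the real and imaginary parts of the DFT of a real vector can be assembled from a Hartley ("cas") matrix-vector product and its index-reversed counterpart. The numpy sketch below verifies the mathematics; it is not a model of the charge-domain hardware, and the patent's exact operator permutations may differ.

```python
# The DFT of a real vector assembled from Hartley-type matrix-vector
# multiplications, the mathematics behind the two-array scheme above.
import numpy as np

N = 8
n = np.arange(N)
theta = 2 * np.pi * np.outer(n, n) / N
H = np.cos(theta) + np.sin(theta)         # Hartley "cas" matrix

x = np.random.default_rng(2).standard_normal(N)
h = H @ x                                 # one MVM: discrete Hartley transform
h_rev = h[(-n) % N]                       # equivalent to an MVM with a permuted operator

re = 0.5 * (h + h_rev)                    # real part of the DFT
im = 0.5 * (h_rev - h)                    # imaginary part of the DFT

assert np.allclose(re + 1j * im, np.fft.fft(x))
```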

  5. Lunar PMAD technology assessment

    NASA Technical Reports Server (NTRS)

    Metcalf, Kenneth J.

    1992-01-01

    This report documents an initial set of power conditioning models created to generate 'ballpark' power management and distribution (PMAD) component mass and size estimates. It contains converter, rectifier, inverter, transformer, remote bus isolator (RBI), and remote power controller (RPC) models. These models allow certain studies to be performed; however, additional models are required to assess a full range of PMAD alternatives. The intent is to eventually form a library of PMAD models that will allow system designers to evaluate various power system architectures and distribution techniques quickly and consistently. The models in this report are designed primarily for space exploration initiative (SEI) missions requiring continuous power and supporting manned operations. The mass estimates were developed by identifying the stages in a component and obtaining mass breakdowns for these stages from near term electronic hardware elements. Technology advances were then incorporated to generate hardware masses consistent with the 2000 to 2010 time period. The mass of a complete component is computed by algorithms that calculate the masses of the component stages, control and monitoring, enclosure, and thermal management subsystem.

  6. In-flight control and communication architecture of the GLORIA imaging limb sounder on atmospheric research aircraft

    NASA Astrophysics Data System (ADS)

    Kretschmer, E.; Bachner, M.; Blank, J.; Dapp, R.; Ebersoldt, A.; Friedl-Vallon, F.; Guggenmoser, T.; Gulde, T.; Hartmann, V.; Lutz, R.; Maucher, G.; Neubert, T.; Oelhaf, H.; Preusse, P.; Schardt, G.; Schmitt, C.; Schönfeld, A.; Tan, V.

    2015-06-01

    The Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA), a Fourier-transform-spectrometer-based limb spectral imager, operates on high-altitude research aircraft to study the transition region between the troposphere and the stratosphere. It is one of the most sophisticated systems to be flown on research aircraft in Europe, requiring constant monitoring and human intervention in addition to an automation system. To ensure proper functionality and interoperability on multiple platforms, a flexible control and communication system was laid out. The architectures of the communication system as well as the protocols used are reviewed. The integration of this architecture in the automation process as well as the scientific campaign flight application context are discussed.

  7. In-flight control and communication architecture of the GLORIA imaging limb-sounder on atmospheric research aircraft

    NASA Astrophysics Data System (ADS)

    Kretschmer, E.; Bachner, M.; Blank, J.; Dapp, R.; Ebersoldt, A.; Friedl-Vallon, F.; Guggenmoser, T.; Gulde, T.; Hartmann, V.; Lutz, R.; Maucher, G.; Neubert, T.; Oelhaf, H.; Preusse, P.; Schardt, G.; Schmitt, C.; Schönfeld, A.; Tan, V.

    2015-02-01

    The Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA), a Fourier-transform-spectrometer-based limb spectral imager, operates on high-altitude research aircraft to study the transition region between the troposphere and the stratosphere. It is one of the most sophisticated systems to be flown on research aircraft in Europe, requiring constant monitoring and human intervention in addition to an automation system. To ensure proper functionality and interoperability on multiple platforms, a flexible control and communication system was laid out. The architectures of the communication system as well as the protocols used are reviewed. The integration of this architecture in the automation process as well as the scientific campaign flight application context are discussed.

  8. Re-Engineering JPL's Mission Planning Ground System Architecture for Cost Efficient Operations in the 21st Century

    NASA Technical Reports Server (NTRS)

    Fordyce, Jess

    1996-01-01

    Work carried out to re-engineer the mission analysis segment of JPL's mission planning ground system architecture is reported on. The aim is to transform the existing software tools, originally developed for specific missions on different support environments, into an integrated, general purpose, multi-mission tool set. The issues considered are: the development of a partnership between software developers and users; the definition of key mission analysis functions; the development of a consensus based architecture; the move towards evolutionary change instead of revolutionary replacement; software reusability, and the minimization of future maintenance costs. The current status and aims of new developments are discussed and specific examples of cost savings and improved productivity are presented.

  9. Efficient transformation and artificial miRNA gene silencing in Lemna minor.

    PubMed

    Cantó-Pastor, A; Mollá-Morales, A; Ernst, E; Dahl, W; Zhai, J; Yan, Y; Meyers, B C; Shanklin, J; Martienssen, R

    2015-01-01

    Despite their rapid doubling time, simple architecture and ease of metabolic labelling, a lack of genetic tools in the Lemnaceae (duckweed) has impeded the full implementation of this organism as a model for biological research. Here, we present technologies to facilitate high-throughput genetic studies in duckweed. We developed a fast and efficient method for producing Lemna minor stable transgenic fronds via Agrobacterium-mediated transformation and regeneration from tissue culture. Additionally, we engineered an artificial microRNA (amiRNA) gene silencing system. We identified a Lemna gibba endogenous miR166 precursor and used it as a backbone to produce amiRNAs. As a proof of concept we induced the silencing of CH42, a magnesium chelatase subunit, using our amiRNA platform. Expression of CH42 in transgenic L. minor fronds was significantly reduced, which resulted in reduction of chlorophyll pigmentation. The techniques presented here will enable tackling future challenges in the biology and biotechnology of Lemnaceae. © 2014 German Botanical Society and The Royal Botanical Society of the Netherlands.

  10. Improved cancer risk stratification and diagnosis via quantitative phase microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Uttam, Shikhar; Pham, Hoa V.; Hartman, Douglas J.

    2017-02-01

    Pathology remains the gold standard for cancer diagnosis and, in some cases, prognosis, in which trained pathologists examine abnormality in tissue architecture and cell morphology characteristic of cancer cells with a bright-field microscope. The limited resolution of the conventional microscope can result in intra-observer variation, missed early-stage cancers, and indeterminate cases that often result in unnecessary invasive procedures in the absence of cancer. Assessment of nanoscale structural characteristics via quantitative phase represents a promising strategy for identifying pre-cancerous or cancerous cells, due to its nanoscale sensitivity to optical path length, simple sample preparation (i.e., label-free) and low cost. I will present the development of quantitative phase microscopy systems in transmission and reflection configurations to detect structural changes in nuclear architecture that are not easily identifiable by conventional pathology. Specifically, we will present the use of transmission-mode quantitative phase imaging to improve the diagnostic accuracy of urine cytology, in which the nuclear dry mass progressively correlates with negative, atypical, suspicious and positive cytological diagnoses. In a second application, we will present the use of reflection-mode quantitative phase microscopy for depth-resolved nanoscale nuclear architecture mapping (nanoNAM) of clinically prepared formalin-fixed, paraffin-embedded tissue sections. We demonstrated that the quantitative phase microscopy system detects a gradual increase in the density alteration of nuclear architecture during malignant transformation in animal models of colon carcinogenesis and in human patients with ulcerative colitis, even in tissue that appears histologically normal according to pathologists. We evaluated the ability of nanoNAM to predict "future" cancer progression in patients with ulcerative colitis.

  11. Excavator Design Validation

    NASA Technical Reports Server (NTRS)

    Pholsiri, Chalongrath; English, James; Seberino, Charles; Lim, Yi-Je

    2010-01-01

    The Excavator Design Validation tool verifies excavator designs by automatically generating control systems and modeling their performance in an accurate simulation of their expected environment. Part of this software design includes interfacing with human operations that can be included in simulation-based studies and validation. This is essential for assessing productivity, versatility, and reliability. This software combines automatic control system generation from CAD (computer-aided design) models, rapid validation of complex mechanism designs, and detailed models of the environment including soil, dust, temperature, remote supervision, and communication latency to create a system of high value. Unique algorithms have been created for controlling and simulating complex robotic mechanisms automatically from just a CAD description. These algorithms are implemented as a commercial cross-platform C++ software toolkit that is configurable using the Extensible Markup Language (XML). The algorithms work with virtually any mobile robotic mechanisms using module descriptions that adhere to the XML standard. In addition, high-fidelity, real-time physics-based simulation algorithms have also been developed that include models of internal forces and the forces produced when a mechanism interacts with the outside world. This capability is combined with an innovative organization for simulation algorithms, new regolith simulation methods, and a unique control and study architecture to make powerful tools with the potential to transform the way NASA verifies and compares excavator designs. Energid's Actin software has been leveraged for this design validation. The architecture includes parametric and Monte Carlo studies tailored for validation of excavator designs and their control by remote human operators. It also includes the ability to interface with third-party software and human-input devices. Two types of simulation models have been adapted: high-fidelity discrete element models and fast analytical models. By using the first to establish parameters for the second, a system has been created that can be executed in real time, or faster than real time, on a desktop PC. This allows Monte Carlo simulations to be performed on a computer platform available to all researchers, and it allows human interaction to be included in a real-time simulation process. Metrics on excavator performance are established that work with the simulation architecture. Both static and dynamic metrics are included.

  12. Performance measurement integrated information framework in e-Manufacturing

    NASA Astrophysics Data System (ADS)

    Teran, Hilaida; Hernandez, Juan Carlos; Vizán, Antonio; Ríos, José

    2014-11-01

    The implementation of Internet technologies has led to e-Manufacturing technologies becoming more widely used and to the development of tools for compiling, transforming and synchronising manufacturing data through the Web. In this context, a potential area for development is the extension of virtual manufacturing to performance measurement (PM) processes, a critical area for decision making and implementing improvement actions in manufacturing. This paper proposes a PM information framework to integrate decision support systems in e-Manufacturing. Specifically, the proposed framework offers a homogeneous PM information exchange model that can be applied through decision support in e-Manufacturing environment. Its application improves the necessary interoperability in decision-making data processing tasks. It comprises three sub-systems: a data model, a PM information platform and PM-Web services architecture. A practical example of data exchange for measurement processes in the area of equipment maintenance is shown to demonstrate the utility of the model.

  13. Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow.

    PubMed

    Wongsuphasawat, Kanit; Smilkov, Daniel; Wexler, James; Wilson, Jimbo; Mane, Dandelion; Fritz, Doug; Krishnan, Dilip; Viegas, Fernanda B; Wattenberg, Martin

    2018-01-01

    We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.
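
    A minimal sketch of the hierarchical-clustering idea: group nodes by the leading components of their slash-delimited name scopes, which is the structure the visualizer's clustered overview is built from. The op names below are invented for illustration; TensorFlow scopes follow the same "outer/inner/op" convention.

```python
# Grouping dataflow-graph nodes by hierarchical name scope to form a
# clustered overview graph, as the visualizer described above does.
from collections import defaultdict

ops = [
    "encoder/conv1/weights", "encoder/conv1/bias",
    "encoder/conv2/weights", "decoder/deconv1/weights",
    "loss/xent", "train/Adam/update",
]

def cluster(op_names, depth=1):
    groups = defaultdict(list)
    for name in op_names:
        parts = name.split("/")
        key = "/".join(parts[:depth]) if len(parts) > depth else name
        groups[key].append(name)
    return dict(groups)

for scope, members in cluster(ops).items():
    print(f"{scope}: {len(members)} node(s)")
```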

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hyeokjin; Chen, Hua; Maksimovic, Dragan

    An experimental 30 kW boost composite converter is described in this paper. The composite converter architecture, which consists of a buck module, a boost module, and a dual active bridge module that operates as a DC transformer (DCX), leads to substantial reductions in losses at partial power points, and to significant improvements in weighted efficiency in applications that require wide variations in power and conversion ratio. A comprehensive loss model is developed, accounting for semiconductor conduction and switching losses, capacitor losses, as well as dc and ac losses in magnetic components. Based on the developed loss model, the module and system designs are optimized to maximize efficiency at a 50% power point. Experimental results for the 30 kW prototype demonstrate 98.5% peak efficiency, very high efficiency over wide ranges of power and voltage conversion ratios, as well as excellent agreement between model predictions and measured efficiency curves.
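
    For readers unfamiliar with such loss models, the fragment below sums conduction, switching, and fixed losses into an efficiency-versus-load curve. All component values are placeholders chosen for illustration, not the 30 kW prototype's parameters; the paper's comprehensive model additionally covers capacitor losses and dc/ac losses in magnetics.

```python
# Illustrative converter loss model: conduction, switching, and fixed
# losses summed to give an efficiency curve versus output power.
import numpy as np

V_BUS = 800.0    # bus voltage, V (assumed)
R_ON = 5e-3      # switch on-resistance, ohm (assumed)
T_SW = 50e-9     # effective switching transition time, s (assumed)
F_SW = 20e3      # switching frequency, Hz (assumed)
P_FIX = 30.0     # gate drive, control, core losses, W (assumed)

def efficiency(p_out: np.ndarray) -> np.ndarray:
    i = p_out / V_BUS
    p_cond = i**2 * R_ON                      # conduction loss
    p_sw = 0.5 * V_BUS * i * T_SW * F_SW      # hard-switching loss estimate
    return p_out / (p_out + p_cond + p_sw + P_FIX)

load = np.linspace(3e3, 30e3, 10)             # 10% ... 100% of 30 kW
for p, eta in zip(load, efficiency(load)):
    print(f"{p/1e3:5.1f} kW -> {100*eta:.2f} %")
```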

  15. A Reference Architecture for Space Information Management

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Crichton, Daniel J.; Hughes, J. Steven; Ramirez, Paul M.; Berrios, Daniel C.

    2006-01-01

    We describe a reference architecture for space information management systems that elegantly overcomes the rigid design of common information systems in many domains. The reference architecture consists of a set of flexible, reusable, independent models and software components that function in unison, but remain separately managed entities. The main guiding principle of the reference architecture is to separate the various models of information (e.g., data, metadata, etc.) from implemented system code, allowing each to evolve independently. System modularity, systems interoperability, and dynamic evolution of information system components are the primary benefits of the design of the architecture. The architecture requires the use of information models that are substantially more advanced than those used by the vast majority of information systems. These models are more expressive and can be more easily modularized, distributed and maintained than simpler models, e.g., configuration files and data dictionaries. Our current work focuses on formalizing the architecture within a CCSDS Green Book and evaluating the architecture within the context of the C3I initiative.

  16. Framework for the Parametric System Modeling of Space Exploration Architectures

    NASA Technical Reports Server (NTRS)

    Komar, David R.; Hoffman, Jim; Olds, Aaron D.; Seal, Mike D., II

    2008-01-01

    This paper presents a methodology for performing architecture definition and assessment prior to, or during, program formulation that utilizes a centralized, integrated architecture modeling framework operated by a small, core team of general space architects. This framework, known as the Exploration Architecture Model for IN-space and Earth-to-orbit (EXAMINE), enables: 1) a significantly larger fraction of an architecture trade space to be assessed in a given study timeframe; and 2) the complex element-to-element and element-to-system relationships to be quantitatively explored earlier in the design process. Discussion of the methodology advantages and disadvantages with respect to the distributed study team approach typically used within NASA to perform architecture studies is presented along with an overview of EXAMINE's functional components and tools. An example Mars transportation system architecture model is used to demonstrate EXAMINE's capabilities in this paper. However, the framework is generally applicable for exploration architecture modeling with destinations to any celestial body in the solar system.

  17. Important ingredients for health adaptive information systems.

    PubMed

    Senathirajah, Yalini; Bakken, Suzanne

    2011-01-01

    Healthcare information systems frequently do not truly meet clinician needs, due to the complexity, variability, and rapid change in medical contexts. Recently the internet world has been transformed by approaches commonly termed 'Web 2.0'. This paper proposes a Web 2.0 model for a healthcare adaptive architecture. The vision includes creating modular, user-composable systems which aim to make all necessary information from multiple internal and external sources available via a platform, for the user to use, arrange, recombine, author, and share at will, using rich interfaces where advisable. Clinicians can create a set of 'widgets' and 'views' which can transform data, reflect their domain knowledge and cater to their needs, using simple drag and drop interfaces without the intervention of programmers. We have built an example system, MedWISE, embodying the user-facing parts of the model. This approach to HIS is expected to have several advantages, including greater suitability to user needs (reflecting clinician rather than programmer concepts and priorities), incorporation of multiple information sources, agile reconfiguration to meet emerging situations and new treatment deployment, capture of user domain expertise and tacit knowledge, efficiencies due to workflow and human-computer interaction improvements, and greater user acceptance.

  18. Nonlinear flight control design using backstepping methodology

    NASA Astrophysics Data System (ADS)

    Tran, Thanh Trung

    The subject of nonlinear flight control design using backstepping control methodology is investigated in the dissertation research presented here. Control design methods based on nonlinear models of the dynamic system provide higher utility and versatility because the design model more closely matches the physical system behavior. Obtaining requisite model fidelity is only half of the overall design process, however. Design of the nonlinear control loops can lessen the effects of nonlinearity, or even exploit nonlinearity, to achieve higher levels of closed-loop stability, performance, and robustness. The goal of the research is to improve control quality for a general class of strict-feedback dynamic systems and provide flight control architectures to augment the aircraft motion. The research is divided into two parts: theoretical control development for the strict-feedback form of nonlinear dynamic systems and application of the proposed theory for nonlinear flight dynamics. In the first part, the research is built on two components: transforming the nonlinear dynamic model to a canonical strict-feedback form and then applying backstepping control theory to the canonical model. The research considers a process for determining when this transformation is possible and, when it is, a systematic procedure for carrying it out. When this is not the case, certain modeling assumptions are explored to facilitate the transformation. After achieving the canonical form, a systematic design procedure for formulating a backstepping control law is explored in the research. Starting with the simplest subsystem and ending with the full system, pseudo control concepts based on Lyapunov control functions are used to control each successive subsystem. Typically each pseudo control must be solved from a nonlinear algebraic equation. At the end of this process, the physical control input must be re-expressed in terms of the physical states by eliminating the pseudo control transformations. In the second part, the research focuses on nonlinear control design for flight dynamics of aircraft motion. Some assumptions on the aerodynamics of the aircraft are addressed to transform the full nonlinear flight dynamics into the canonical strict-feedback form. The assumptions are also analyzed, validated, and compared to show the advantages and disadvantages of the design models. With the achieved models, investigation focuses on formulating the backstepping control laws and provides an advanced control algorithm for the nonlinear flight dynamics of the aircraft. Experimental and simulation studies are successfully implemented to validate the proposed control method. Advancement of nonlinear backstepping control theory and its application to nonlinear flight control are achieved in the dissertation research.
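
    A minimal backstepping example for a two-state strict-feedback system is sketched below; it is a textbook-style illustration, not the dissertation's flight-dynamics application. The virtual (pseudo) control alpha stabilizes the first subsystem, and the physical input u drives the second state toward alpha, yielding a negative-definite Lyapunov derivative.

```python
# Backstepping for the strict-feedback system
#   x1' = x1**2 + x2,   x2' = u.
# With V = 0.5*x1**2 + 0.5*z2**2 the control below gives
#   V' = -k1*x1**2 - k2*z2**2, so both states converge to zero.
import numpy as np

k1, k2, dt = 2.0, 2.0, 1e-3
x1, x2 = 1.0, -0.5

for _ in range(int(5.0 / dt)):
    alpha = -x1**2 - k1 * x1            # virtual control for the x1 subsystem
    z2 = x2 - alpha                     # backstepping error coordinate
    alpha_dot = (-2.0 * x1 - k1) * (x1**2 + x2)
    u = alpha_dot - x1 - k2 * z2        # cancels the cross term, adds damping
    x1 += dt * (x1**2 + x2)             # forward-Euler integration
    x2 += dt * u

print(f"x1 = {x1:.2e}, x2 = {x2:.2e}")  # both near zero
```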

  19. Human Resource Automation Architecture Validation for a Transforming Army

    DTIC Science & Technology

    2005-03-03

  20. Low-power hardware implementation of movement decoding for brain computer interface with reduced-resolution discrete cosine transform.

    PubMed

    Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E

    2014-01-01

    This paper describes a low-power hardware implementation for movement decoding in brain-computer interfaces. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on a reduced-resolution discrete cosine transform (DCT), and (ii) a new dual look-up-table hardware architecture that performs the discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signals by using a Xilinx FPGA Zynq-7000 board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.
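
    A software sketch of the reduced-resolution DCT feature extraction: keep only the first few DCT-II coefficients of each signal window. Window length and coefficient count are illustrative assumptions; the paper's contribution is doing this multiplier-free in hardware with dual look-up tables.

```python
# Reduced-resolution DCT feature extraction: retain only the low-frequency
# DCT-II coefficients of each window as decoding features.
import numpy as np
from scipy.fft import dct

def dct_features(window: np.ndarray, n_coeffs: int = 8) -> np.ndarray:
    """Return the first n_coeffs DCT-II coefficients of a 1-D window."""
    return dct(window, type=2, norm="ortho")[:n_coeffs]

rng = np.random.default_rng(3)
ecog_window = rng.standard_normal(256)   # stand-in for one ECoG channel window
print(dct_features(ecog_window))
```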

  1. Software architecture of INO340 telescope control system

    NASA Astrophysics Data System (ADS)

    Ravanmehr, Reza; Khosroshahi, Habib

    2016-08-01

    The software architecture plays an important role in the distributed control systems of astronomical projects because many subsystems and components must work together in a consistent and reliable way. We have utilized a customized architecture design approach based on the "4+1 view model" in order to design the INOCS software architecture. In this paper, after reviewing the top-level INOCS architecture, we present the software architecture model of INOCS inspired by the "4+1 view model"; for this purpose we provide logical, process, development, physical, and scenario views of our architecture using different UML diagrams and other illustrative visual charts. Each view presents the INOCS software architecture from a different perspective. We finish the paper with the science data operation of INO340 and concluding remarks.

  2. Flame: A Flexible Data Reduction Pipeline for Near-Infrared and Optical Spectroscopy

    NASA Astrophysics Data System (ADS)

    Belli, Sirio; Contursi, Alessandra; Davies, Richard I.

    2018-05-01

    We present flame, a pipeline for reducing spectroscopic observations obtained with multi-slit near-infrared and optical instruments. Because of its flexible design, flame can be easily applied to data obtained with a wide variety of spectrographs. The flexibility is due to a modular architecture, which allows changes and customizations to the pipeline, and relegates the instrument-specific parts to a single module. At the core of the data reduction is the transformation from observed pixel coordinates (x, y) to rectified coordinates (λ, γ). This transformation consists of the polynomial functions λ(x, y) and γ(x, y) that are derived from arc or sky emission lines and slit edge tracing, respectively. The use of 2D transformations allows one to wavelength-calibrate and rectify the data using just one interpolation step. Furthermore, the γ(x, y) transformation includes also the spatial misalignment between frames, which can be measured from a reference star observed simultaneously with the science targets. The misalignment can then be fully corrected during the rectification, without having to further resample the data. Sky subtraction can be performed via nodding and/or modeling of the sky spectrum; the combination of the two methods typically yields the best results. We illustrate the pipeline by showing examples of data reduction for a near-infrared instrument (LUCI at the Large Binocular Telescope) and an optical one (LRIS at the Keck telescope).
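
    The rectification step can be sketched compactly: fit λ(x, y) as a low-order 2D polynomial by least squares from control points such as arc-line detections. The polynomial degree and synthetic data below are illustrative, not flame's actual fitting code.

```python
# Fitting a 2-D polynomial lambda(x, y) by least squares from control points,
# the kind of transformation the pipeline derives from arc or sky lines.
import numpy as np

def design_matrix(x, y, deg=2):
    # columns: x^i * y^j for all i + j <= deg
    return np.column_stack([x**i * y**j
                            for i in range(deg + 1)
                            for j in range(deg + 1 - i)])

rng = np.random.default_rng(4)
x, y = rng.uniform(0, 2048, 500), rng.uniform(0, 100, 500)
lam_true = 1.5e4 + 2.0 * x + 0.05 * y + 1e-4 * x * y   # synthetic wavelengths

A = design_matrix(x, y)
coeffs, *_ = np.linalg.lstsq(A, lam_true, rcond=None)

# Evaluating lambda(x, y) on new pixel coordinates in a single step is what
# lets the pipeline wavelength-calibrate and rectify with one interpolation.
lam_fit = design_matrix(np.array([1000.0]), np.array([50.0])) @ coeffs
print(lam_fit)
```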

  3. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    PubMed

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.

  4. Heuristic extraction of rules in pruned artificial neural networks models used for quantifying highly overlapping chromatographic peaks.

    PubMed

    Hervás, César; Silva, Manuel; Serrano, Juan Manuel; Orejuela, Eva

    2004-01-01

    The suitability of an approach for extracting heuristic rules from trained artificial neural networks (ANNs) pruned by a regularization method and with architectures designed by evolutionary computation for quantifying highly overlapping chromatographic peaks is demonstrated. The ANN input data are estimated by the Levenberg-Marquardt method in the form of a four-parameter Weibull curve associated with the profile of the chromatographic band. To test this approach, two N-methylcarbamate pesticides, carbofuran and propoxur, were quantified using a classic peroxyoxalate chemiluminescence reaction as a detection system for chromatographic analysis. Straightforward network topologies (one and two outputs models) allow the analytes to be quantified in concentration ratios ranging from 1:7 to 5:1 with an average standard error of prediction for the generalization test of 2.7 and 2.3% for carbofuran and propoxur, respectively. The reduced dimensions of the selected ANN architectures, especially those obtained after using heuristic rules, allowed simple quantification equations to be developed that transform the input variables into output variables. These equations can be easily interpreted from a chemical point of view to attain quantitative analytical information regarding the effect of both analytes on the characteristics of chromatographic bands, namely profile, dispersion, peak height, and residence time. Copyright 2004 American Chemical Society
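
    The input-parameterization step can be sketched as fitting a four-parameter Weibull curve to a chromatographic band with Levenberg-Marquardt (the default method of scipy's curve_fit for unbounded problems). This particular Weibull parameterization is an assumption for illustration; the fitted parameters would then serve as ANN inputs.

```python
# Fitting a four-parameter Weibull curve to a (synthetic) chromatographic
# band; the fitted parameters stand in for the ANN input data above.
import numpy as np
from scipy.optimize import curve_fit

def weibull_peak(t, amp, t0, shape, scale):
    s = np.clip((t - t0) / scale, 1e-9, None)        # ~zero before peak onset
    return amp * (shape / scale) * s**(shape - 1) * np.exp(-s**shape)

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 200)
true = (50.0, 1.0, 2.0, 3.0)                          # amp, t0, shape, scale
y = weibull_peak(t, *true) + 0.2 * rng.standard_normal(t.size)

popt, _ = curve_fit(weibull_peak, t, y, p0=(40.0, 0.8, 1.8, 2.8))
print(np.round(popt, 2))                              # close to the true values
```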

  5. Modeling the Personal Health Ecosystem.

    PubMed

    Blobel, Bernd; Brochhausen, Mathias; Ruotsalainen, Pekka

    2018-01-01

    Complex ecosystems like the pHealth one combine different domains represented by a huge variety of different actors (human beings, organizations, devices, applications, components) belonging to different policy domains, coming from different disciplines, deploying different methodologies, terminologies, and ontologies, offering different levels of knowledge, skills, and experiences, acting in different scenarios and accommodating different business cases to meet the intended business objectives. For correctly modeling such systems, a system-oriented, architecture-centric, ontology-based, policy-driven approach is inevitable, thereby following established Good Modeling Best Practices. However, most of the existing standards, specifications and tools for describing, representing, implementing and managing health (information) systems reflect the advancement of information and communication technology (ICT) represented by different evolutionary levels of data modeling. The paper presents a methodology for integrating, adopting and advancing models, standards, specifications as well as implemented systems and components on the way towards the aforementioned ultimate approach, so meeting the challenge we face when transforming health systems towards ubiquitous, personalized, predictive, preventive, participative, and cognitive health and social care.

  6. Equivalence of restricted Boltzmann machines and tensor network states

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Cheng, Song; Xie, Haidong; Wang, Lei; Xiang, Tao

    2018-02-01

    The restricted Boltzmann machine (RBM) is one of the fundamental building blocks of deep learning. The RBM finds wide applications in dimensionality reduction, feature extraction, and recommender systems via modeling the probability distributions of a variety of input data including natural images, speech signals, and customer ratings, etc. We build a bridge between RBM and tensor network states (TNS) widely used in quantum many-body physics research. We devise efficient algorithms to translate an RBM into the commonly used TNS. Conversely, we give sufficient and necessary conditions to determine whether a TNS can be transformed into an RBM of given architectures. Revealing these general and constructive connections can cross-fertilize both deep learning and quantum many-body physics. Notably, by exploiting the entanglement entropy bound of TNS, we can rigorously quantify the expressive power of RBM on complex data sets. Insights into TNS and its entanglement capacity can guide the design of more powerful deep learning architectures. On the other hand, RBM can represent quantum many-body states with fewer parameters compared to TNS, which may allow more efficient classical simulations.
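
    The factorization such RBM-to-TNS translations build on is easy to verify numerically: for binary hidden units, the sum over all hidden configurations collapses into a product of local factors, one per hidden unit. The sketch below checks this identity on a tiny RBM; it illustrates the structure, not the paper's full translation algorithm.

```python
# For binary hidden units, the RBM weight summed over hidden configurations
# factorizes into one local term per hidden unit -- the structure that
# tensor-network translations exploit.
import numpy as np
from itertools import product

rng = np.random.default_rng(5)
n_v, n_h = 4, 3
a, b = rng.standard_normal(n_v), rng.standard_normal(n_h)
W = rng.standard_normal((n_v, n_h))

v = rng.integers(0, 2, n_v)

# Brute force: sum exp(a.v + b.h + v W h) over all 2^n_h hidden configurations.
brute = sum(np.exp(a @ v + b @ h + v @ W @ h)
            for h in map(np.array, product([0, 1], repeat=n_h)))

# Factorized form: exp(a.v) * prod_j (1 + exp(b_j + (v W)_j)).
factored = np.exp(a @ v) * np.prod(1.0 + np.exp(b + v @ W))

print(np.isclose(brute, factored))  # True
```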

  7. A software simulation study of a (255,223) Reed-Solomon encoder-decoder

    NASA Technical Reports Server (NTRS)

    Pollara, F.

    1985-01-01

    A set of software programs which simulates a (255,223) Reed-Solomon encoder/decoder pair is described. The transform decoder algorithm uses a modified Euclid algorithm, and closely follows the pipeline architecture proposed for the hardware decoder. Uncorrectable error patterns are detected by a simple test, and the inverse transform is computed by a finite field FFT. Numerical examples of the decoder operation are given for some test codewords, with and without errors. The use of the software package is briefly described.

  8. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmalz, Mark S

    2011-07-24

    Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G̲ for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G̲, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.

  9. Embedding the shapes of regions of interest into a Clinical Document Architecture document.

    PubMed

    Minh, Nguyen Hai; Yi, Byoung-Kee; Kim, Il Kon; Song, Joon Hyun; Binh, Pham Viet

    2015-03-01

    Sharing a medical image visually annotated by a region of interest with a remotely located specialist for consultation is a good practice. It may, however, require a special-purpose (and most likely expensive) system to send and view them, which is an unfeasible solution in developing countries such as Vietnam. In this study, we design and implement interoperable methods based on the HL7 Clinical Document Architecture and the eXtensible Markup Language Stylesheet Language for Transformation standards to seamlessly exchange and visually present the shapes of regions of interest using web browsers. We also propose a new integration architecture for a Clinical Document Architecture generator that enables embedding of regions of interest and simultaneous auto-generation of corresponding style sheets. Using the Clinical Document Architecture document and style sheet, a sender can transmit clinical documents and medical images together with coordinate values of regions of interest to recipients. Recipients can easily view the documents and display embedded regions of interest by rendering them in their web browser of choice. © The Author(s) 2014.
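
    The essential transformation can be sketched in a few lines: extract region-of-interest coordinates from a clinical document and emit a browser-renderable overlay. The XML element and attribute names below are simplified stand-ins, not the actual HL7 CDA schema, and the paper performs the rendering with XSLT style sheets rather than Python.

```python
# Pulling ROI coordinates out of a (simplified, hypothetical) clinical
# document and emitting an SVG overlay that any web browser can display.
import xml.etree.ElementTree as ET

doc = """<document>
  <regionOfInterest shape="polygon" coords="10,10 120,15 115,90 12,85"/>
</document>"""

root = ET.fromstring(doc)
roi = root.find("regionOfInterest")

svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="120">'
    '<image href="scan.png" width="200" height="120"/>'
    f'<polygon points="{roi.get("coords")}" '
    'fill="none" stroke="red" stroke-width="2"/></svg>'
)
print(svg)  # embed in HTML so the recipient's browser shows the annotation
```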

  10. Performance and stability of telemanipulators using bilateral impedance control. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Moore, Christopher Lane

    1991-01-01

    A new method of control for telemanipulators called bilateral impedance control is investigated. This new method differs from previous approaches in that interaction forces are used as the communication signals between the master and slave robots. The new control architecture has several advantages: (1) It allows the master robot and the slave robot to be stabilized independently without becoming involved in the overall system dynamics; (2) It permits the system designers to arbitrarily specify desired performance characteristics such as the force and position ratios between the master and slave; (3) The impedance at both ends of the telerobotic system can be modulated to suit the requirements of the task. The main goals of the research are to characterize the performance and stability of the new control architecture. The dynamics of the telerobotic system are described by a bond graph model that illustrates how energy is transformed, stored, and dissipated. Performance can be completely described by a set of three independent parameters. These parameters are fundamentally related to the structure of the H matrix that regulates the communication of force signals within the system. Stability is analyzed with two mathematical techniques: the Small Gain Theorem and the Multivariable Nyquist Criterion. The theoretical predictions for performance and stability are experimentally verified by implementing the new control architecture on a multidegree of freedom telemanipulator.

  11. Signal Coherence Recovery Using Acousto-Optic Fourier Transform Architectures

    DTIC Science & Technology

    1990-06-14

    processing of data in ground- and space-based applications. We have implemented a prototype one-dimensional time-integrating acousto-optic (AO) Fourier...theory of optimum coherence recovery (CR) applicable in computation-limited environments. We have demonstrated direct acousto-optic implementation of CR

  12. Organizational Leadership Process for University Education

    ERIC Educational Resources Information Center

    Llamosa-Villalba, Ricardo; Delgado, Dario J.; Camacho, Heidi P.; Paéz, Ana M.; Valdivieso, Raúl F.

    2014-01-01

    This paper relates the "Agile School", an emerging archetype of the enterprise architecture: "Processes of Organizational Leadership" for leading and managing strategies, tactics and operations of forming in Higher Education Institutions. Agile School is a system for innovation and deep transformation of University Institutions…

  13. Rigorous Free-Fermion Entanglement Renormalization from Wavelet Theory

    NASA Astrophysics Data System (ADS)

    Haegeman, Jutho; Swingle, Brian; Walter, Michael; Cotler, Jordan; Evenbly, Glen; Scholz, Volkher B.

    2018-01-01

    We construct entanglement renormalization schemes that provably approximate the ground states of noninteracting-fermion nearest-neighbor hopping Hamiltonians on the one-dimensional discrete line and the two-dimensional square lattice. These schemes give hierarchical quantum circuits that build up the states from unentangled degrees of freedom. The circuits are based on pairs of discrete wavelet transforms, which are approximately related by a "half-shift": translation by half a unit cell. The presence of the Fermi surface in the two-dimensional model requires a special kind of circuit architecture to properly capture the entanglement in the ground state. We show how the error in the approximation can be controlled without ever performing a variational optimization.

  14. An image based information system - Architecture for correlating satellite and topological data bases

    NASA Technical Reports Server (NTRS)

    Bryant, N. A.; Zobrist, A. L.

    1978-01-01

    The paper describes the development of an image based information system and its use to process a Landsat thematic map showing land use or land cover in conjunction with a census tract polygon file to produce a tabulation of land use acreages per census tract. The system permits the efficient cross-tabulation of two or more geo-coded data sets, thereby setting the stage for the practical implementation of models of diffusion processes or cellular transformation. Characteristics of geographic information systems are considered, and functional requirements, such as data management, geocoding, image data management, and data analysis are discussed. The system is described, and the potentialities of its use are examined.

  15. Spatial DBMS Architecture for a Free and Open Source BIM

    NASA Astrophysics Data System (ADS)

    Logothetis, S.; Valari, E.; Karachaliou, E.; Stylianidis, E.

    2017-08-01

    Recent research in the field of Building Information Modelling (BIM) technology revealed that, except for a few accessible and free BIM viewers, there is a lack of Free & Open Source Software (FOSS) BIM software covering the complete BIM process. With this in mind, and considering BIM as the technological advancement of Computer-Aided Design (CAD) systems, the current work proposes the use of FOSS CAD software in order to extend its capabilities and transform it gradually into a FOSS BIM platform. Towards this undertaking, a first approach to developing a spatial Database Management System (DBMS) able to store, organize and manage the overall amount of information within a single application is presented.

  16. Hierarchical representation and machine learning from faulty jet engine behavioral examples to detect real time abnormal conditions

    NASA Technical Reports Server (NTRS)

    Gupta, U. K.; Ali, M.

    1988-01-01

    The theoretical basis and operation of LEBEX, a machine-learning system for jet-engine performance monitoring, are described. The behavior of the engine is modeled in terms of four parameters (the rotational speeds of the high- and low-speed sections and the exhaust and combustion temperatures), and parameter variations indicating malfunction are transformed into structural representations involving instances and events. LEBEX extracts descriptors from a set of training data on normal and faulty engines, represents them hierarchically in a knowledge base, and uses them to diagnose and predict faults on a real-time basis. Diagrams of the system architecture and printouts of typical results are shown.

  17. Dynamical principles in neuroscience

    NASA Astrophysics Data System (ADS)

    Rabinovich, Mikhail I.; Varona, Pablo; Selverston, Allen I.; Abarbanel, Henry D. I.

    2006-10-01

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience?

  18. Dynamical principles in neuroscience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rabinovich, Mikhail I.; Varona, Pablo; Selverston, Allen I.

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience?

  19. CSDMS2.0: Computational Infrastructure for Community Surface Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Syvitski, J. P.; Hutton, E.; Peckham, S. D.; Overeem, I.; Kettner, A.

    2012-12-01

    The Community Surface Dynamic Modeling System (CSDMS) is an NSF-supported, international and community-driven program that seeks to transform the science and practice of earth-surface dynamics modeling. CSDMS integrates a diverse community of more than 850 geoscientists representing 360 international institutions (academic, government, industry) from 60 countries and is supported by a CSDMS Interagency Committee (22 Federal agencies), and a CSDMS Industrial Consortia (18 companies). CSDMS presently distributes more than 200 Open Source models and modeling tools, access to high performance computing clusters in support of developing and running models, and a suite of products for education and knowledge transfer. CSDMS software architecture employs frameworks and services that convert stand-alone models into flexible "plug-and-play" components to be assembled into larger applications. CSDMS2.0 will support model applications within a web browser, on a wider variety of computational platforms, and on other high performance computing clusters to ensure robustness and sustainability of the framework. Conversion of stand-alone models into "plug-and-play" components will employ automated wrapping tools. Methods for quantifying model uncertainty are being adapted as part of the modeling framework. Benchmarking data is being incorporated into the CSDMS modeling framework to support model inter-comparison. Finally, a robust mechanism for ingesting and utilizing semantic mediation databases is being developed within the Modeling Framework. Six new community initiatives are being pursued: 1) an earth-ecosystem modeling initiative to capture ecosystem dynamics and ensuing interactions with landscapes, 2) a geodynamics initiative to investigate the interplay among climate, geomorphology, and tectonic processes, 3) an Anthropocene modeling initiative, to incorporate mechanistic models of human influences, 4) a coastal vulnerability modeling initiative, with emphasis on deltas and their multiple threats and stressors, 5) a continental margin modeling initiative, to capture extreme oceanic and atmospheric events generating turbidity currents in the Gulf of Mexico, and 6) a CZO Focus Research Group, to develop compatibility between CSDMS architecture and protocols and Critical Zone Observatory-developed models and data.

  20. Enterprise application architecture development based on DoDAF and TOGAF

    NASA Astrophysics Data System (ADS)

    Tao, Zhi-Gang; Luo, Yun-Feng; Chen, Chang-Xin; Wang, Ming-Zhe; Ni, Feng

    2017-05-01

    For the purpose of supporting the design and analysis of enterprise application architecture, we report here a tailored enterprise application architecture description framework and its corresponding design method. The presented framework can effectively support service-oriented architecting and cloud computing by creating the metadata model based on the architecture content framework (ACF), the DoDAF metamodel (DM2) and the Cloud Computing Modelling Notation (CCMN). The framework also makes an effort to extend and improve the mapping between The Open Group Architecture Framework (TOGAF) application architectural inputs/outputs, deliverables and Department of Defense Architecture Framework (DoDAF)-described models. The roadmap of 52 DoDAF-described models is constructed by creating the metamodels of these described models and analysing the constraint relationships among metamodels. By combining the tailored framework and the roadmap, this article proposes a service-oriented enterprise application architecture development process. Finally, a case study is presented to illustrate the results of implementing the tailored framework in the Southern Base Management Support and Information Platform construction project using the development process proposed by the paper.

  1. A collaborative computer auditing system under SOA-based conceptual model

    NASA Astrophysics Data System (ADS)

    Cong, Qiushi; Huang, Zuoming; Hu, Jibing

    2013-03-01

    Some of the current challenges of computer auditing are the obstacles to retrieving, converting and translating data from different database schemas. During the last few years, many data exchange standards have been under continuous development, such as the Extensible Business Reporting Language (XBRL). These XML document standards can be used for data exchange among companies, financial institutions, and audit firms. However, for many companies it is still expensive and time-consuming to translate and provide XML messages with commercial application packages, because it is complicated and laborious to search and transform data from thousands of tables in ERP databases. How to transfer transaction documents between audit firms and their client companies in support of continuous or real-time auditing is an important topic. In this paper, a collaborative computer auditing system under an SOA-based conceptual model is proposed. By utilizing the widely used XML document standards and existing data transformation applications developed by different companies and software vendors, we can wrap these applications as commercial web services that can be easily implemented under the forthcoming application environment: service-oriented architecture (SOA). Under SOA environments, the multiagency mechanism will foster the maturity and popularity of data assurance services over the Internet. By wrapping data transformation components for heterogeneous databases or platforms, it will create new component markets composed of many software vendors and assurance service companies that provide data assurance services for audit firms, regulators or third parties.

  2. Disciplines, models, and computers: the path to computational quantum chemistry.

    PubMed

    Lenhard, Johannes

    2014-12-01

    Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion and this market is much bigger than the number of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), the arguably pivotal theory in the turn to computational quantum chemistry around 1990.

  3. Model-Driven Architecture for Agent-Based Systems

    NASA Technical Reports Server (NTRS)

    Gracanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.

    2004-01-01

    The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, using some specification language. The requirements are then translated to a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar) is selected for the implementation platform. The resulting Cougaar MDA prescribes certain kinds of models to be used, how those models may be prepared and the relationships of the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain level model specifications in order to generate software artifacts. The software artifacts generation is based on a metamodel. Each component maps to a UML structured component which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.
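
    The record above describes model-to-artifact generation in general terms. As a toy illustration only (the component schema and emitted class skeleton below are hypothetical, not the actual Cougaar metamodel), a platform-independent component description can be turned into platform-specific code by a simple model-to-text transformation:

      # Toy model-to-text transformation in the spirit of MDA. The PIM schema
      # and generated skeleton are hypothetical, not the Cougaar MDA metamodel.
      def generate_class(component):
          """Emit a platform-specific class skeleton from a PIM component."""
          fields = "\n".join(f"    {name}: {typ}"
                             for name, typ in component["attributes"].items())
          methods = "\n".join(f"    def {op}(self): ...  # generated stub"
                              for op in component["operations"])
          return f"class {component['name']}:\n{fields}\n\n{methods}\n"

      pim = {"name": "LogisticsPlanner",
             "attributes": {"inventory": "int", "depot": "str"},
             "operations": ["plan", "replan"]}
      print(generate_class(pim))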

  4. Implementation of the semiclassical quantum Fourier transform in a scalable system.

    PubMed

    Chiaverini, J; Britton, J; Leibfried, D; Knill, E; Barrett, M D; Blakestad, R B; Itano, W M; Jost, J D; Langer, C; Ozeri, R; Schaetz, T; Wineland, D J

    2005-05-13

    We report the implementation of the semiclassical quantum Fourier transform in a system of three beryllium ion qubits (two-level quantum systems) confined in a segmented multizone trap. The quantum Fourier transform is the crucial final step in Shor's algorithm, and it acts on a register of qubits to determine the periodicity of the quantum state's amplitudes. Because only probability amplitudes are required for this task, a more efficient semiclassical version can be used, for which only single-qubit operations conditioned on measurement outcomes are required. We apply the transform to several input states of different periodicities; the results enable the location of peaks corresponding to the original periods. This demonstration incorporates the key elements of a scalable ion-trap architecture, suggesting the future capability of applying the quantum Fourier transform to a large number of qubits as required for a useful quantum factoring algorithm.
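
    The role the transform plays in period finding can be seen in a purely classical analogue: the discrete Fourier transform of an amplitude vector that is nonzero at every r-th entry shows peaks at multiples of N/r. A minimal NumPy sketch, with arbitrary illustrative N and r:

      # Classical analogue of the QFT's role in period finding: the DFT of a
      # length-N amplitude vector that is nonzero every r-th entry peaks at
      # multiples of N/r. N and r below are arbitrary illustrative choices.
      import numpy as np

      N, r = 64, 8
      amps = np.zeros(N)
      amps[::r] = 1.0                      # amplitudes periodic with period r
      amps /= np.linalg.norm(amps)         # normalize like a quantum state

      probs = np.abs(np.fft.fft(amps, norm="ortho")) ** 2
      peaks = np.flatnonzero(probs > 0.5 * probs.max())
      print(peaks)                         # -> [ 0  8 16 24 32 40 48 56], i.e. k*N/r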

  5. Performance Analysis of GFDL's GCM Line-By-Line Radiative Transfer Model on GPU and MIC Architectures

    NASA Astrophysics Data System (ADS)

    Menzel, R.; Paynter, D.; Jones, A. L.

    2017-12-01

    Due to their relatively low computational cost, radiative transfer models in global climate models (GCMs) run on traditional CPU architectures generally consist of shortwave and longwave parameterizations over a small number of wavelength bands. With the rise of newer GPU and MIC architectures, however, the performance of high-resolution line-by-line radiative transfer models may soon approach that of the physical parameterizations currently employed in GCMs. Here we present an analysis of the current performance of a new line-by-line radiative transfer model under development at GFDL. Although originally designed to exploit GPU architectures through the use of CUDA, the radiative transfer model has recently been extended with OpenMP in an effort to also effectively target MIC architectures such as Intel's Xeon Phi. Using input data provided by the upcoming Radiative Forcing Model Intercomparison Project (RFMIP, part of CMIP6), we compare model results and performance data for various model configurations and spectral resolutions run on both GPU and Intel Knights Landing architectures to analogous runs of the standard Oxford Reference Forward Model on traditional CPUs.

  6. A Reusable Framework for Regional Climate Model Evaluation

    NASA Astrophysics Data System (ADS)

    Hart, A. F.; Goodale, C. E.; Mattmann, C. A.; Lean, P.; Kim, J.; Zimdars, P.; Waliser, D. E.; Crichton, D. J.

    2011-12-01

    Climate observations are currently obtained through a diverse network of sensors and platforms that include space-based observatories, airborne and seaborne platforms, and distributed, networked, ground-based instruments. These global observational measurements are critical inputs to the efforts of the climate modeling community and can provide a corpus of data for use in analysis and validation of climate models. The Regional Climate Model Evaluation System (RCMES) is an effort currently being undertaken to address the challenges of integrating this vast array of observational climate data into a coherent resource suitable for performing model analysis at the regional level. Developed through a collaboration between the NASA Jet Propulsion Laboratory (JPL) and the UCLA Joint Institute for Regional Earth System Science and Engineering (JIFRESSE), the RCMES uses existing open source technologies (MySQL, Apache Hadoop, and Apache OODT) to construct a scalable, parametric, geospatial data store that incorporates decades of observational data from a variety of NASA Earth science missions, as well as other sources, into a consistently annotated, highly available scientific resource. By eliminating arbitrary partitions in the data (individual file boundaries, differing file formats, etc.), and instead treating each individual observational measurement as a unique, geospatially referenced data point, the RCMES is capable of transforming large, heterogeneous collections of disparate observational data into a unified resource suitable for comparison to climate model output. This facility is further enhanced by the availability of a model evaluation toolkit which consists of a set of Python libraries, a RESTful web service layer, and a browser-based graphical user interface that allows for orchestration of model-to-data comparisons by composing them visually through web forms. This combination of tools and interfaces dramatically simplifies the process of interacting with and utilizing large volumes of observational data for model evaluation research. We feel that the RCMES is particularly appealing in that it represents a principled, reusable architectural approach rather than a one-off technological implementation. In fact, early RCMES prototypes have already utilized a variety of implementation technologies in an effort to address different performance and scalability concerns. This has been greatly facilitated by the fact that, at the architectural level, the RCMES is fundamentally domain agnostic. Strictly separating the data model from the implementation has enabled us to create a reusable architecture that we believe can be modified and configured to suit the demands of researchers in other domains.

  7. Space Generic Open Avionics Architecture (SGOAA) reference model technical guide

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1993-01-01

    This report presents a full description of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA consists of a generic system architecture for the entities in spacecraft avionics, a generic processing architecture, and a six class model of interfaces in a hardware/software system. The purpose of the SGOAA is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of specific avionics hardware/software systems. The SGOAA defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.

  8. An avionics scenario and command model description for Space Generic Open Avionics Architecture (SGOAA)

    NASA Technical Reports Server (NTRS)

    Stovall, John R.; Wray, Richard B.

    1994-01-01

    This paper presents a description of a model for a space vehicle operational scenario and the commands for avionics. This model will be used in developing a dynamic architecture simulation model using the Statemate CASE tool for validation of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA has been proposed as an avionics architecture standard to NASA through its Strategic Avionics Technology Working Group (SATWG) and has been accepted by the Society of Automotive Engineers (SAE) for conversion into an SAE Avionics Standard. This architecture was developed for the Flight Data Systems Division (FDSD) of the NASA Johnson Space Center (JSC) by the Lockheed Engineering and Sciences Company (LESC), Houston, Texas. This SGOAA includes a generic system architecture for the entities in spacecraft avionics, a generic processing external and internal hardware architecture, and a nine class model of interfaces. The SGOAA is both scalable and recursive and can be applied to any hierarchical level of hardware/software processing systems.

  9. Designing Networks for Innovation

    ERIC Educational Resources Information Center

    Laskowski, Paul Luke

    2009-01-01

    The last decades have seen tremendous growth and transformation in the Internet's commercial landscape. Underneath this success, however, the underlying network architecture has shown a marked resistance to change; it is now described as stagnant and ossified. Numerous design proposals have been developed by researchers, implemented in code, and…

  10. The Use of Microcomputers to Improve Army Ground-Vehicle Readiness

    DTIC Science & Technology

    1980-03-01

    A demonstration of the leverage of the algorithmic parameter transformation can be inferred from automobile data. Hardware summary: 45 data input channels (1 accelerometer, 22 digital data, 22 analog data); backup battery; modular software architecture; field reprogrammable.

  11. Direct 4D printing via active composite materials.

    PubMed

    Ding, Zhen; Yuan, Chao; Peng, Xirui; Wang, Tiejun; Qi, H Jerry; Dunn, Martin L

    2017-04-01

    We describe an approach to print composite polymers in high-resolution three-dimensional (3D) architectures that can be rapidly transformed to a new permanent configuration directly by heating. The permanent shape of a component results from the programmed time evolution of the printed shape upon heating via the design of the architecture and process parameters of a composite consisting of a glassy shape memory polymer and an elastomer that is programmed with a built-in compressive strain during photopolymerization. Upon heating, the shape memory polymer softens, releases the constraint on the strained elastomer, and allows the object to transform into a new permanent shape, which can then be reprogrammed into multiple subsequent shapes. Our key advance, the markedly simplified creation of high-resolution complex 3D reprogrammable structures, promises to enable myriad applications across domains, including medical technology, aerospace, and consumer products, and even suggests a new paradigm in product design, where components are simultaneously designed to inhabit multiple configurations during service.

  12. Large CMOS imager using hadamard transform based multiplexing

    NASA Technical Reports Server (NTRS)

    Karasik, Boris S.; Wadsworth, Mark V.

    2005-01-01

    We have developed a concept design for a large (10k x 10k) CMOS imaging array whose elements are grouped in small subarrays of N pixels each. The subarrays are code-division multiplexed using Hadamard Transform (HT) based encoding. The Hadamard code improves the signal-to-noise ratio (SNR) with respect to the read-out amplifier by a factor of N^(1/2). This way of grouping pixels reduces the number of hybridization bumps by a factor of N. A single-chip layout has been designed, and the architecture of the imager has been developed to accommodate the HT-based multiplexing within existing CMOS technology. The imager architecture allows for a trade-off between speed and sensitivity. The envisioned imager would operate at a speed >100 fps with pixel noise < 20 e-. The power dissipation would be 100 pW/pixel. The combination of large format, high speed, high sensitivity and low power dissipation can be very attractive for space reconnaissance applications.
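
    A minimal NumPy sketch of the multiplexing principle (illustrative sizes and noise levels, not the paper's design parameters): encoding N pixel values with rows of a Hadamard matrix and decoding with its transpose averages down the read-out amplifier noise by the stated factor of N^(1/2).

      # Sketch of Hadamard-transform multiplexing: N pixel signals are read out
      # as +/-1-weighted sums (rows of a Hadamard matrix); decoding averages the
      # read-out amplifier noise down by sqrt(N). All values are illustrative.
      import numpy as np
      from scipy.linalg import hadamard

      rng = np.random.default_rng(0)
      N = 16
      H = hadamard(N)                       # +/-1 encoding matrix, H @ H.T = N*I
      pixels = rng.uniform(100, 200, N)     # true pixel signals

      read_noise = 5.0
      direct = pixels + rng.normal(0, read_noise, N)        # one read per pixel
      encoded = H @ pixels + rng.normal(0, read_noise, N)   # one read per pattern
      decoded = (H.T @ encoded) / N                         # inverse transform

      print(np.std(direct - pixels))    # ~ read_noise
      print(np.std(decoded - pixels))   # ~ read_noise / sqrt(N)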

  13. Direct 4D printing via active composite materials

    PubMed Central

    Ding, Zhen; Yuan, Chao; Peng, Xirui; Wang, Tiejun; Qi, H. Jerry; Dunn, Martin L.

    2017-01-01

    We describe an approach to print composite polymers in high-resolution three-dimensional (3D) architectures that can be rapidly transformed to a new permanent configuration directly by heating. The permanent shape of a component results from the programmed time evolution of the printed shape upon heating via the design of the architecture and process parameters of a composite consisting of a glassy shape memory polymer and an elastomer that is programmed with a built-in compressive strain during photopolymerization. Upon heating, the shape memory polymer softens, releases the constraint on the strained elastomer, and allows the object to transform into a new permanent shape, which can then be reprogrammed into multiple subsequent shapes. Our key advance, the markedly simplified creation of high-resolution complex 3D reprogrammable structures, promises to enable myriad applications across domains, including medical technology, aerospace, and consumer products, and even suggests a new paradigm in product design, where components are simultaneously designed to inhabit multiple configurations during service. PMID:28439560

  14. Comparison of different artificial neural network architectures in modeling of Chlorella sp. flocculation.

    PubMed

    Zenooz, Alireza Moosavi; Ashtiani, Farzin Zokaee; Ranjbar, Reza; Nikbakht, Fatemeh; Bolouri, Oberon

    2017-07-03

    Biodiesel production from microalgae feedstock should be performed after growth and harvesting of the cells, and the most feasible method for harvesting and dewatering of microalgae is flocculation. Flocculation modeling can be used for evaluation and prediction of its performance under different influencing parameters. However, modeling of microalgae flocculation is not simple and had not previously been performed under all experimental conditions, mostly because microalgae cells behave differently under different flocculation conditions. In the current study, the modeling of microalgae flocculation is investigated with different neural network architectures. The microalga Chlorella sp. was flocculated with ferric chloride under different conditions, and the experimental data were then modeled using artificial neural networks. Multilayer perceptron (MLP) and radial basis function architectures failed to predict the targets successfully, whereas modeling was effective with an ensemble architecture of MLP networks. Comparison between the performance of the ensemble and that of each individual network demonstrates the ability of the ensemble architecture in microalgae flocculation modeling.
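
    A minimal sketch of the ensemble idea using scikit-learn, with synthetic data standing in for the flocculation measurements (the dose/pH feature layout and response surface below are invented for illustration): several MLP regressors are trained with different initializations and their predictions averaged.

      # Ensemble-of-MLPs sketch: synthetic [dose, pH] -> efficiency data stands
      # in for the actual flocculation measurements.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)
      X = rng.uniform([0.0, 6.0], [2.0, 10.0], size=(200, 2))   # [dose, pH]
      y = (90 / (1 + np.exp(-4 * (X[:, 0] - 1)))                # invented surface
           - (X[:, 1] - 8) ** 2 + rng.normal(0, 2, 200))

      members = [MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                              random_state=seed).fit(X, y)
                 for seed in range(5)]

      def ensemble_predict(Xnew):
          # average the member predictions; disagreement between members
          # would also give a rough uncertainty estimate
          return np.mean([m.predict(Xnew) for m in members], axis=0)

      print(ensemble_predict(np.array([[1.0, 8.0]])))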

  15. NuSTAR: system engineering and modeling challenges in pointing reconstruction for a deployable x-ray telescope

    NASA Astrophysics Data System (ADS)

    Harp, D. Isaiah; Liebe, Carl Christian; Craig, William; Harrison, Fiona; Kruse-Madsen, Kristin; Zoglauer, Andreas

    2010-07-01

    The Nuclear Spectroscopic Telescope Array (NuSTAR) is a NASA Small Explorer mission that will make the first sensitive images of the sky in the high energy X-ray band (6 - 80 keV). The NuSTAR observatory consists of two co-aligned grazing incidence hard X-ray telescopes with a ~10 meter focal length, achieved by the on-orbit extension of a deployable mast. A principal science objective of the mission is to locate previously unknown high-energy X-ray sources to an accuracy of 10 arcseconds (3-sigma), sufficient to uniquely identify counterparts at other wavelengths. In order to achieve this, a star tracker and laser metrology system are an integral part of the instrument; in conjunction, they will determine the orientation of the optics bench in celestial coordinates and also measure the flexures in the deployable mast as it responds to the varying on-orbit thermal environment, as well as aerodynamic and control torques. The architecture of the NuSTAR system for solving the attitude and aspect problems differs from that of previous X-ray telescopes, which did not require ex post facto reconstruction of the instantaneous observatory alignment on-orbit. In this paper we describe the NuSTAR instrument metrology system architecture and implementation, focusing on the systems engineering challenges associated with validating the instantaneous transformations between focal plane and celestial coordinates to within the required accuracy. We present a mathematical solution to photon source reconstruction, along with a detailed error budget that relates component errors to science performance. We also describe the architecture of the instrument simulation software being used to validate the end-to-end performance model.

  16. Practical Application of Model-based Programming and State-based Architecture to Space Missions

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian

    2006-01-01

    A viewgraph presentation on developing models, with systems engineers, that accomplish mission objectives and manage the health of the system is shown. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 11) Background: Titan Model-based Executive; 12) Model-based Execution Architecture; 13) Compatibility Analysis of MDS and Titan Architectures; 14) Integrating Model-based Programming and Execution into the Architecture; 15) State Analysis and Modeling; 16) IMU Subsystem State Effects Diagram; 17) Titan Subsystem Model: IMU Health; 18) Integrating Model-based Programming and Execution into the Software IMU; 19) Testing Program; 20) Computationally Tractable State Estimation & Fault Diagnosis; 21) Diagnostic Algorithm Performance; 22) Integration and Test Issues; 23) Demonstrated Benefits; and 24) Next Steps

  17. Facilitating the Specification Capture and Transformation Process in the Development of Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Filho, Aluzio Haendehen; Caminada, Numo; Haeusler, Edward Hermann; vonStaa, Arndt

    2004-01-01

    To support the development of flexible and reusable multi-agent systems (MAS), we have built a framework designated MAS-CF. MAS-CF is a component framework that implements a layered architecture based on contextual composition. Interaction rules, controlled by architecture mechanisms, ensure very low coupling, making possible the sharing of distributed services in a transparent, dynamic and independent way. These properties promote large-scale reuse, since organizational abstractions can be reused and propagated to all instances created from the framework. The objective is to reduce the complexity and development time of multi-agent systems through the reuse of generic organizational abstractions.

  18. Rolled-up Functionalized Nanomembranes as Three-Dimensional Cavities for Single Cell Studies

    PubMed Central

    2014-01-01

    We use micropatterning and strain engineering to encapsulate single living mammalian cells into transparent tubular architectures consisting of three-dimensional (3D) rolled-up nanomembranes. By using optical microscopy, we demonstrate that these structures are suitable for the scrutiny of cellular dynamics within confined 3D-microenvironments. We show that spatial confinement of mitotic mammalian cells inside tubular architectures can perturb metaphase plate formation, delay mitotic progression, and cause chromosomal instability in both a transformed and nontransformed human cell line. These findings could provide important clues into how spatial constraints dictate cellular behavior and function. PMID:24598026

  19. Improving Quantum Gate Simulation using a GPU

    NASA Astrophysics Data System (ADS)

    Gutierrez, Eladio; Romero, Sergio; Trenas, Maria A.; Zapata, Emilio L.

    2008-11-01

    Due to the increasing computing power of graphics processing units (GPUs), they are becoming more and more popular for solving general-purpose algorithms. As the simulation of quantum computers results in a problem of exponential complexity, it is advisable to perform the computation in parallel, such as with the SIMD multiprocessors present in recent GPUs. In this paper, we focus on an important quantum algorithm, the quantum Fourier transform (QFT), in order to evaluate different parallelization strategies on a novel GPU architecture. Our implementation makes use of the CUDA software/hardware architecture developed recently by NVIDIA.
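
    The parallelism being exploited can be seen in a plain NumPy sketch of state-vector QFT simulation: every gate update touches disjoint groups of amplitudes, which is exactly the SIMD-style work a CUDA kernel distributes across GPU threads. The sketch below is a generic textbook QFT circuit, not the paper's CUDA implementation.

      # State-vector QFT simulation. Each single-qubit gate updates disjoint
      # pairs of amplitudes -- the data parallelism a GPU kernel exploits.
      import numpy as np

      def apply_h(state, q, n):
          """Hadamard on qubit q of an n-qubit state (qubit 0 = LSB)."""
          s = state.reshape(2 ** (n - q - 1), 2, 2 ** q)  # isolate qubit q's axis
          a, b = s[:, 0, :].copy(), s[:, 1, :].copy()
          s[:, 0, :], s[:, 1, :] = (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)
          return state

      def apply_cphase(state, c, t, angle, n):
          """Controlled phase between qubits c and t (diagonal gate)."""
          idx = np.arange(2 ** n)
          both = (((idx >> c) & 1) & ((idx >> t) & 1)).astype(bool)
          state[both] *= np.exp(1j * angle)
          return state

      def qft(state):
          n = int(np.log2(state.size))
          for t in range(n - 1, -1, -1):
              state = apply_h(state, t, n)
              for c in range(t - 1, -1, -1):
                  state = apply_cphase(state, c, t, 2 * np.pi / 2 ** (t - c + 1), n)
          rev = [int(format(i, f'0{n}b')[::-1], 2) for i in range(2 ** n)]
          return state[rev]                     # undo the qubit-order reversal

      state = np.zeros(8, dtype=complex); state[1] = 1.0
      print(np.allclose(qft(state), np.fft.ifft(np.eye(8)[1], norm="ortho")))  # True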

  20. Information Quality Evaluation of C2 Systems at Architecture Level

    DTIC Science & Technology

    2014-06-01

    Capability evaluation of C2 systems at the architecture level is necessary and important for improving system capability at the architecture design stage. This paper proposes a method for information quality evaluation of C2 systems at the architecture level, based on architecture models of C2 systems, which can help to identify key factors impacting information quality and improve system capability at the architecture design stage. First, the information quality model is …

  1. Emergence of a Common Modeling Architecture for Earth System Science (Invited)

    NASA Astrophysics Data System (ADS)

    Deluca, C.

    2010-12-01

    Common modeling architecture can be viewed as a natural outcome of common modeling infrastructure. The development of model utility and coupling packages (ESMF, MCT, OpenMI, etc.) over the last decade represents the realization of a community vision for common model infrastructure. The adoption of these packages has led to increased technical communication among modeling centers and newly coupled modeling systems. However, adoption has also exposed aspects of interoperability that must be addressed before easy exchange of model components among different groups can be achieved. These aspects include common physical architecture (how a model is divided into components) and model metadata and usage conventions. The National Unified Operational Prediction Capability (NUOPC), an operational weather prediction consortium, is collaborating with weather and climate researchers to define a common model architecture that encompasses these advanced aspects of interoperability and looks to future needs. The nature and structure of the emergent common modeling architecture will be discussed along with its implications for future model development.

  2. Error Propagation Analysis in the SAE Architecture Analysis and Design Language (AADL) and the EDICT Tool Framework

    NASA Technical Reports Server (NTRS)

    LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.

    2011-01-01

    This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.

  3. Parallel Hough Transform-Based Straight Line Detection and Its FPGA Implementation in Embedded Vision

    PubMed Central

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-01-01

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness. PMID:23867746

  4. Parallel Hough Transform-based straight line detection and its FPGA implementation in embedded vision.

    PubMed

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-07-17

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness.
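
    The accumulator structure that records 3 and 4 describe mapping onto parallel hardware can be sketched in a few lines of NumPy: each angle column of the (rho, theta) accumulator is computed independently, which is the angle-level parallelism the PHT architecture assigns to parallel FPGA units. The sizes and toy edge image below are illustrative.

      # Standard Hough transform for lines; the theta loop is embarrassingly
      # parallel, which is what the PHT architecture exploits in hardware.
      import numpy as np

      def hough_lines(edge_points, height, width, n_theta=180):
          thetas = np.deg2rad(np.arange(n_theta))
          diag = int(np.ceil(np.hypot(height, width)))
          acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
          ys, xs = edge_points
          for j, th in enumerate(thetas):          # parallelizable over angles
              rho = (xs * np.cos(th) + ys * np.sin(th)).round().astype(int) + diag
              np.add.at(acc[:, j], rho, 1)         # vote for (rho, theta) cells
          return acc, thetas, diag

      pts = (np.arange(50), np.arange(50))         # toy edge image: line y = x
      acc, thetas, diag = hough_lines(pts, 64, 64)
      r, t = np.unravel_index(acc.argmax(), acc.shape)
      print(r - diag, np.rad2deg(thetas[t]))       # -> rho 0 at theta = 135 deg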

  5. Implementation of a direct-imaging and FX correlator for the BEST-2 array

    NASA Astrophysics Data System (ADS)

    Foster, G.; Hickish, J.; Magro, A.; Price, D.; Zarb Adami, K.

    2014-04-01

    A new digital backend has been developed for the Basic Element for SKA Training II (BEST-2) array at Radiotelescopi di Medicina, INAF-IRA, Italy, which allows concurrent operation of an FX correlator, and a direct-imaging correlator and beamformer. This backend serves as a platform for testing some of the spatial Fourier transform concepts which have been proposed for use in computing correlations on regularly gridded arrays. While spatial Fourier transform-based beamformers have been implemented previously, this is, to our knowledge, the first time a direct-imaging correlator has been deployed on a radio astronomy array. Concurrent observations with the FX and direct-imaging correlator allow for direct comparison between the two architectures. Additionally, we show the potential of the direct-imaging correlator for time-domain astronomy, by passing a subset of beams though a pulsar and transient detection pipeline. These results provide a timely verification for spatial Fourier transform-based instruments that are currently in commissioning. These instruments aim to detect highly redshifted hydrogen from the epoch of reionization and/or to perform wide-field surveys for time-domain studies of the radio sky. We experimentally show the direct-imaging correlator architecture to be a viable solution for correlation and beamforming.
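
    The contrast between the two architectures can be sketched numerically: for antennas on a regular grid, a 2-D spatial FFT of the instantaneous voltages, squared and accumulated over time, forms sky images directly, avoiding the O(N^2) pairwise products an FX correlator computes. The grid size, source direction and noise level below are arbitrary illustrative choices.

      # Direct-imaging sketch for a regularly gridded array: spatial FFT of the
      # per-antenna voltages, square-law detect, and accumulate over time.
      import numpy as np

      rng = np.random.default_rng(2)
      ny, nx, n_samples = 8, 8, 1000
      y, x = np.mgrid[0:ny, 0:nx]               # antenna positions on a grid
      ky, kx = 2, 5                             # source direction -> mode (ky, kx)

      image = np.zeros((ny, nx))
      for _ in range(n_samples):
          phase = rng.uniform(0, 2 * np.pi)     # random source phase per sample
          v = np.exp(1j * (2 * np.pi * (ky * y / ny + kx * x / nx) + phase))
          v += 0.5 * (rng.normal(size=(ny, nx)) + 1j * rng.normal(size=(ny, nx)))
          beams = np.fft.fft2(v)                # spatial FFT: all beams at once
          image += np.abs(beams) ** 2           # square-law detect & accumulate
      print(np.unravel_index(image.argmax(), image.shape))   # -> (2, 5)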

  6. Deep learning architectures for multi-label classification of intelligent health risk prediction.

    PubMed

    Maxwell, Andrew; Li, Runzhi; Yang, Bei; Weng, Heng; Ou, Aihua; Hong, Huixiao; Zhou, Zhaoxian; Gong, Ping; Zhang, Chaoyang

    2017-12-28

    Multi-label classification of data remains a challenging problem. Because of the complexity of the data, it is sometimes difficult to infer information about classes that are not mutually exclusive. For medical data, patients could have symptoms of multiple different diseases at the same time, and it is important to develop tools that help to identify problems early. Intelligent health risk prediction models built with deep learning architectures offer a powerful tool for physicians to identify patterns in patient data that indicate risks associated with certain types of chronic diseases. Physical examination records of 110,300 anonymous patients were used to predict diabetes, hypertension, fatty liver, a combination of these three chronic diseases, and the absence of disease (8 classes in total). The dataset was split into training (90%) and testing (10%) sub-datasets. Ten-fold cross validation was used to evaluate prediction accuracy with metrics such as precision, recall, and F-score. Deep Learning (DL) architectures were compared with standard and state-of-the-art multi-label classification methods. Preliminary results suggest that Deep Neural Networks (DNN), a DL architecture, when applied to multi-label classification of chronic diseases, produced accuracy comparable to that of common methods such as Support Vector Machines. We have implemented DNNs to handle both problem-transformation and algorithm-adaptation multi-label methods and compared the two to see which is preferable. Deep Learning architectures have the potential to infer more information about the patterns in physical examination data than common classification methods. The advanced techniques of Deep Learning can be used to identify the significance of different features from physical examination data as well as to learn the contributions of each feature that impact a patient's risk for chronic diseases. However, accurate prediction of chronic disease risks remains a challenging problem that warrants further study.
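
    A minimal scikit-learn sketch of the multi-label setup, with synthetic features and invented label rules in place of the 110,300 examination records: a network with one independent sigmoid output per condition lets a patient be positive for several diseases at once.

      # Multi-label risk prediction sketch: synthetic exam features and
      # invented label rules; one sigmoid output per (hypothetical) condition.
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(3)
      X = rng.normal(size=(1000, 10))             # 10 synthetic exam features
      # invented rules: label 0 "diabetes", 1 "hypertension", 2 "fatty liver"
      Y = np.column_stack([X[:, 0] + X[:, 1] > 0.5,
                           X[:, 2] > 0.3,
                           X[:, 0] - X[:, 3] > 0.4]).astype(int)

      clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                          random_state=0).fit(X[:900], Y[:900])
      pred = clf.predict(X[900:])                 # binary indicator matrix
      print((pred == Y[900:]).mean())             # mean per-label accuracy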

  7. Transformations of the Cultural Space of Podhale

    NASA Astrophysics Data System (ADS)

    Hrehorowicz, Hanna; Gaber

    2017-10-01

    The landscape, land cover and climate have a major impact on the architecture of mountain regions. This architecture can be considered in multiple layers. In cultural terms, the Sub-Tatra area is ethnically coherent, with similar conditions throughout. The development of architecture in the mountain areas accelerated in the XIX century, when resorts developed massively in the Carpathians. Leisure stays in mountain resort towns became popular across Europe, mainly in connection with the presence of mineral waters. In Poland, too, spas were created on the model of Alpine, Swiss and Tyrolean resorts. The type of Podhale buildings has changed over time, especially since the 1880s, when Alpine-style villas began to appear in Zakopane. On the wave of growing interest in leisure associated with health care, the popularity of Zakopane increased. Stanislaw Witkiewicz was the creator and precursor of a style based on the folk architecture of the highlanders of Podhale. The affirmation of the Zakopane style opened the way for the search for a universal architectural form corresponding to a national style. Many variations on it were created over the years, not all of them faithful. In addition, the architectural and building regulations, providing a rigid framework for the principles of shaping building masses, did not correspond to social needs. The ongoing pressure of tourist traffic in Podhale and the “fashion for Zakopane” intensified, forcing the construction of ever more homes for holiday-makers. The shape of the present architecture of Podhale results from adaptation to urban conditions and from attempts to fit the maximum number of rooms into each facility. The whole area of Podhale (not only Zakopane) has in recent years been given over to tourist attractions (lifts, spas, aqua parks, etc.), which greatly transforms the space and structure of the buildings. The areas with the highest spatial, landscape and cultural values are most often exposed to the greatest pressure of tourism, causing the specificity of the area, its delicate settlement structures and the authenticity of its cultural space to disappear. Unfortunately, many years of mass tourism in Podhale have irreversibly devastated the landscape, both of Zakopane, the “Pearl of Podhale” in the eyes of tourists, and of the whole functional area associated with it.

  8. The apparatus composition and architecture of Cordylodus Pander - Concepts of homology in primitive conodonts

    USGS Publications Warehouse

    Smith, M.P.; Donoghue, P.C.J.; Repetski, J.E.

    2005-01-01

    A clear distinction may be drawn between the perpendicular architecture of the feeding apparatus of ozarkodinid, prioniodontid and prioniodinid conodonts, in which the P elements are situated at a high angle to the M and S elements, and the parallel architecture of panderodontid and other coniform apparatuses, where two suites of coniform elements lie parallel to each other and oppose across the midline. The quest for homologies between the two architectures has been fraught with difficulty, at least in part because of the paucity of natural assemblages of coniform taxa. A diagenetically fused apparatus of Cordylodus lindstromi elements is here described which is made up of one rounded and two compressed element morphotypes. One of the compressed elements is bowed and asymmetrical and the other is unbowed and more symmetrical. These compressed elements are considered to be homologous with those of panderodontid apparatuses and would have lain at the caudal end of the parallel arrays, with the more symmetrical morphotypes located rostrally to the asymmetrical ones. The bowed and unbowed compressed elements of Cordylodus thus correspond, respectively, to the pt and pf positions of panderodontid apparatuses. In addition, the presence of symmetry transition within the rounded elements of Cordylodus, but not the compressed morphotypes, enables correlation of these with the S and M element locations of ozarkodinid apparatuses. By extension, the compressed elements must be homologues of the P elements. Specifically, the asymmetrical pt morphotype is homologous with the P1 of ozarkodinids and the more symmetrical and rostral pf morphotype is homologous with the P2 position. However, because of uncertainties over the nature of topological transformation of the rostral element array (the "rounded" or "costate" suites), it is not possible to recognize specific homologies between these elements and the M and S elements of ozarkodinids. Morphologic differentiation of P from M and S element suites thus preceded the topological transformation from parallel to perpendicular apparatus architectures.

  9. Aortopathy in a Mouse Model of Marfan Syndrome Is Not Mediated by Altered Transforming Growth Factor β Signaling.

    PubMed

    Wei, Hao; Hu, Jie Hong; Angelov, Stoyan N; Fox, Kate; Yan, James; Enstrom, Rachel; Smith, Alexandra; Dichek, David A

    2017-01-24

    Marfan syndrome (MFS) is caused by mutations in the gene encoding fibrillin-1 (FBN1); however, the mechanisms through which fibrillin-1 deficiency causes MFS-associated aortopathy are uncertain. Recently, attention was focused on the hypothesis that MFS-associated aortopathy is caused by increased transforming growth factor-β (TGF-β) signaling in aortic medial smooth muscle cells (SMC). However, there are many reasons to doubt that TGF-β signaling drives MFS-associated aortopathy. We used a mouse model to test whether SMC TGF-β signaling is perturbed by a fibrillin-1 variant that causes MFS and whether blockade of SMC TGF-β signaling prevents MFS-associated aortopathy. MFS mice (Fbn1 C1039G/+ genotype) were genetically modified to allow postnatal SMC-specific deletion of the type II TGF-β receptor (TBRII; essential for physiologic TGF-β signaling). In young MFS mice with and without superimposed deletion of SMC-TBRII, we measured aortic dimensions, histopathology, activation of aortic SMC TGF-β signaling pathways, and changes in aortic SMC gene expression. Young Fbn1 C1039G/+ mice had ascending aortic dilation and significant disruption of aortic medial architecture. Both aortic dilation and disrupted medial architecture were exacerbated by superimposed deletion of TBRII. TGF-β signaling was unaltered in aortic SMC of young MFS mice; however, SMC-specific deletion of TBRII in Fbn1 C1039G/+ mice significantly decreased activation of SMC TGF-β signaling pathways. In young Fbn1 C1039G/+ mice, aortopathy develops in the absence of detectable alterations in SMC TGF-β signaling. Loss of physiologic SMC TGF-β signaling exacerbates MFS-associated aortopathy. Our data support a protective role for SMC TGF-β signaling during early development of MFS-associated aortopathy. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  10. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    Research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large grained algorithms in a special distributed computer environment is presented. This model is identified by the acronym ATAMM which represents Algorithms To Architecture Mapping Model. The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.
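
    The data-flow behavior that ATAMM formalizes can be illustrated with a toy token-firing simulation (the three-node graph below is invented; this is not the ATAMM notation itself): a node may fire as soon as every input edge holds a token, so all simultaneously enabled nodes expose the concurrency a multiprocessor can exploit.

      # Toy marked-graph style data-flow execution; graph and functions invented.
      from collections import deque

      edges = {("src", "A"): deque([1, 2, 3]), ("src", "B"): deque([10, 20, 30]),
               ("A", "C"): deque(), ("B", "C"): deque()}
      nodes = {"A": lambda x: x * 2, "B": lambda x: x + 1, "C": lambda a, b: a + b}
      inputs = {"A": [("src", "A")], "B": [("src", "B")], "C": [("A", "C"), ("B", "C")]}
      outputs = {"A": [("A", "C")], "B": [("B", "C")], "C": []}

      fired = True
      while fired:
          fired = False
          # every node whose input edges all hold a token is enabled; in a
          # multiprocessor, all enabled nodes could execute concurrently
          for node, fn in nodes.items():
              if all(edges[e] for e in inputs[node]):
                  args = [edges[e].popleft() for e in inputs[node]]
                  result = fn(*args)
                  for e in outputs[node]:
                      edges[e].append(result)
                  if node == "C":
                      print("C produced", result)   # -> 13, 25, 37
                  fired = True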

  11. A verification strategy for web services composition using enhanced stacked automata model.

    PubMed

    Nagamouttou, Danapaquiame; Egambaram, Ilavarasan; Krishnan, Muthumanickam; Narasingam, Poonkuzhali

    2015-01-01

    Currently, Service-Oriented Architecture (SOA) is becoming the most popular software architecture for contemporary enterprise applications, and one crucial technique for its implementation is web services. An individual service offered by a service provider may represent limited business functionality; however, by composing individual services from different service providers, a composite service describing the intact business process of an enterprise can be made. Many new standards have been defined to address the web service composition problem, notably the Business Process Execution Language (BPEL). BPEL provides initial groundwork for an Extensible Markup Language (XML) specification language for defining and implementing business process workflows for web services. The problem with most realistic approaches to service composition is the verification of the composed web services, which must rely on formal verification methods to ensure the correctness of the composed services. A few research works in the literature have addressed verification of web services for deterministic systems, but the existing models did not address verification properties such as dead transitions, deadlock, reachability and safety. In this paper, a new model to verify composed web services using an Enhanced Stacked Automata Model (ESAM) is proposed. The correctness properties of the non-deterministic system are evaluated based on properties such as dead transitions, deadlock, safety, liveness and reachability. Initially, web services are composed using the Business Process Execution Language for Web Services (BPEL4WS), converted into ESAM (a combination of Muller Automata (MA) and Push Down Automata (PDA)), and then transformed into Promela, the input language of the Simple ProMeLa Interpreter (SPIN) tool. The model is verified using the SPIN tool, and the results reveal better performance in finding dead transitions and deadlocks in contrast to the existing models.
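
    The properties being checked can be illustrated with a small hand-rolled state-space search (a toy stand-in for what SPIN does on the generated Promela; the composed-service model below is invented): breadth-first exploration flags deadlock states, and any state never visited is unreachable.

      # Toy state-space check: deadlock = state with no outgoing transitions;
      # unreachable = state never visited from the initial state.
      from collections import deque

      # invented composed-service model: state -> {action: next_state}
      transitions = {
          "init":       {"invoke": "awaiting"},
          "awaiting":   {"reply": "done", "fault": "compensate"},
          "compensate": {},                    # no outgoing edges: deadlock
          "done":       {"reset": "init"},
          "orphan":     {"noop": "orphan"},    # never reached from "init"
      }

      seen, queue = {"init"}, deque(["init"])
      while queue:
          state = queue.popleft()
          if not transitions[state]:
              print("deadlock state:", state)          # -> compensate
          for nxt in transitions[state].values():
              if nxt not in seen:
                  seen.add(nxt)
                  queue.append(nxt)
      print("unreachable states:", set(transitions) - seen)   # -> {'orphan'}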

  12. Using SpF to Achieve Petascale for Legacy Pseudospectral Applications

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.; Jiang, Weiyuan

    2014-01-01

    Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical kernels that can be performed entirely in-processor. The granularity of domain decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe our experience in porting legacy pseudospectral models, MoSST and DYNAMO, to use SpF, as well as present preliminary performance results provided by the improved scalability.

  13. The Emergence of Dominant Design(s) in Large Scale Cyber-Infrastructure Systems

    ERIC Educational Resources Information Center

    Diamanti, Eirini Ilana

    2012-01-01

    Cyber-infrastructure systems are integrated large-scale IT systems designed with the goal of transforming scientific practice by enabling multi-disciplinary, cross-institutional collaboration. Their large scale and socio-technical complexity make design decisions for their underlying architecture practically irreversible. Drawing on three…

  14. TSI-Enhanced Pedagogical Agents to Engage Learners in Virtual Worlds

    ERIC Educational Resources Information Center

    Leung, Steve; Virwaney, Sandeep; Lin, Fuhua; Armstrong, AJ; Dubbelboer, Adien

    2013-01-01

    Building pedagogical applications in virtual worlds is a multi-disciplinary endeavor that involves learning theories, application development framework, and mediated communication theories. This paper presents a project that integrates game-based learning, multi-agent system architecture (MAS), and the theory of Transformed Social Interaction…

  15. Knowledge Management for the Analysis of Complex Experimentation.

    ERIC Educational Resources Information Center

    Maule, R.; Schacher, G.; Gallup, S.

    2002-01-01

    Describes a knowledge management system that was developed to help provide structure for dynamic and static data and to aid in the analysis of complex experimentation. Topics include quantitative and qualitative data; mining operations using artificial intelligence techniques; information architecture of the system; and transforming data into…

  16. This Old House: Revitalizing Higher Education's Architecture.

    ERIC Educational Resources Information Center

    Flynn, William J.

    2000-01-01

    Asserts that in order to transform colleges into learning organizations, the infrastructure of higher education must be analyzed. States that basic relationships must be redesigned--the pedagogical interaction between teacher and student, the tension between faculty and administration, the caste system relationship that has existed between faculty…

  17. A Kinder and Gentler Transformation?

    ERIC Educational Resources Information Center

    Katz, Richard N.; Goldstein, Larry; Dobbin, Gregory

    2001-01-01

    Discusses the shifting focus regarding information technology in higher education from technology itself toward the people and business processes connected with it. Describes the University of California's efforts toward a new business architecture, and an Educause-sponsored forum on e-business discussing the same themes. Offers discussion of the…

  18. NASA Enterprise Architecture and Its Use in Transition of Research Results to Operations

    NASA Astrophysics Data System (ADS)

    Frisbie, T. E.; Hall, C. M.

    2006-12-01

    Enterprise architecture describes the design of the components of an enterprise, their relationships and how they support the objectives of that enterprise. NASA Stennis Space Center leads several projects involving enterprise architecture tools used to gather information on research assets within NASA's Earth Science Division. In the near future, enterprise architecture tools will link and display the relevant requirements, parameters, observatories, models, decision systems, and benefit/impact information relationships and map to the Federal Enterprise Architecture Reference Models. Components configured within the enterprise architecture serving the NASA Applied Sciences Program include the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool. The Earth Science Components Knowledge Base systematically catalogues NASA missions, sensors, models, data products, model products, and network partners appropriate for consideration in NASA Earth Science applications projects. The Systems Components database is a centralized information warehouse of NASA's Earth Science research assets and a critical first link in the implementation of enterprise architecture. The Earth Science Architecture Tool is used to analyze potential NASA candidate systems that may be beneficial to decision-making capabilities of other Federal agencies. Use of the current configuration of NASA enterprise architecture (the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool) has far exceeded its original intent and has tremendous potential for the transition of research results to operational entities.

  19. Invariant recognition drives neural representations of action sequences

    PubMed Central

    Poggio, Tomaso

    2017-01-01

    Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864

  20. Univariate Time Series Prediction of Solar Power Using a Hybrid Wavelet-ARMA-NARX Prediction Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazaripouya, Hamidreza; Wang, Yubo; Chu, Chi-Cheng

    This paper proposes a new hybrid method for super short-term solar power prediction. Solar output power usually has complex, nonstationary, and nonlinear characteristics due to the intermittent and time-varying behavior of solar radiance. In addition, solar power dynamics are fast and nearly inertialess. An accurate super short-term prediction is required to compensate for the fluctuations and reduce the impact of solar power penetration on the power system. The objective is to predict one-step-ahead solar power generation based only on historical solar power time series data. The proposed method incorporates the discrete wavelet transform (DWT), Auto-Regressive Moving Average (ARMA) models, and Recurrent Neural Networks (RNN), with the RNN architecture based on Nonlinear Auto-Regressive models with eXogenous inputs (NARX). The wavelet transform is utilized to decompose the solar power time series into a set of better-behaved component series for prediction. The ARMA model is employed as a linear predictor, while NARX is used as a nonlinear pattern recognition tool to estimate and compensate for the error of the wavelet-ARMA prediction. The proposed method is applied to data captured from UCLA solar PV panels, and the results are compared with some of the most common and most recent solar power prediction methods. The results validate the effectiveness of the proposed approach and show a considerable improvement in prediction precision.
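
    A minimal sketch of the wavelet-ARMA stage (the NARX error corrector is omitted, the signal is synthetic rather than the UCLA PV data, and pywt/statsmodels are assumed as libraries): the series is split into additive DWT band components, each band is forecast one step ahead with an ARMA model, and the band forecasts are summed.

      # Wavelet-ARMA sketch: decompose into additive band components (each band
      # reconstructed with the other coefficients zeroed), forecast per band,
      # and sum. A NARX-style corrector on the residual would follow.
      import numpy as np
      import pywt
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(4)
      t = np.arange(256)
      series = np.sin(2 * np.pi * t / 32) + 0.3 * rng.normal(size=t.size)

      coeffs = pywt.wavedec(series, "db4", level=3)
      bands = []
      for i in range(len(coeffs)):
          only_i = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
          bands.append(pywt.waverec(only_i, "db4")[: series.size])
      # linearity of the DWT: the band series sum back to the original signal
      assert np.allclose(np.sum(bands, axis=0), series)

      forecast = sum(
          ARIMA(band, order=(2, 0, 1)).fit().forecast(1)[0] for band in bands
      )
      print("one-step-ahead prediction:", forecast)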

  1. Implementation of an Integrated On-Board Aircraft Engine Diagnostic Architecture

    NASA Technical Reports Server (NTRS)

    Armstrong, Jeffrey B.; Simon, Donald L.

    2012-01-01

    An on-board diagnostic architecture for aircraft turbofan engine performance trending, parameter estimation, and gas-path fault detection and isolation has been developed and evaluated in a simulation environment. The architecture incorporates two independent models: a real-time self-tuning performance model providing parameter estimates, and a performance baseline model for diagnostic purposes reflecting long-term engine degradation trends. This architecture was evaluated using flight profiles generated from a nonlinear model with realistic fleet engine health degradation distributions and sensor noise. The architecture was found to produce acceptable estimates of engine health and unmeasured parameters, and the integrated diagnostic algorithms were able to perform correct fault isolation in approximately 70 percent of the tested cases.

  2. Multiple-image authentication with a cascaded multilevel architecture based on amplitude field random sampling and phase information multiplexing.

    PubMed

    Fan, Desheng; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Pan, Xuemei; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2015-04-10

    A multiple-image authentication method with a cascaded multilevel architecture in the Fresnel domain is proposed. A synthetic encoded complex amplitude is first fabricated: its real amplitude component is generated by iterative amplitude encoding, random sampling, and space multiplexing for the low-level certification images, while its phase component is constructed by iterative phase information encoding and multiplexing for the high-level certification images. The synthetic encoded complex amplitude is then iteratively encoded into two phase-type ciphertexts located in two different transform planes. During high-level authentication, when the two phase-type ciphertexts and the high-level decryption key are presented to the system and the Fresnel transform is carried out, a meaningful image with good quality and a high correlation coefficient with the original certification image is recovered in the output plane. In the case of low-level authentication with the aid of a low-level decryption key, no significant or meaningful information is retrieved, but a remarkable peak appears in the nonlinear correlation coefficient between the output image and the corresponding original certification image. The method therefore realizes different levels of access to the original certification image for different authority levels within the same cascaded multilevel architecture.

  3. The efficiency of geophysical adjoint codes generated by automatic differentiation tools

    NASA Astrophysics Data System (ADS)

    Vlasenko, A. V.; Köhl, A.; Stammer, D.

    2016-02-01

    The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the continuous use of AD tools for solving geophysical problems on modern computer architectures.
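
    The two tool families the study compares can be illustrated at toy scale. The dual-number class below shows the operator-overloading approach in miniature (a generic illustration, not the internals of any of the four tools); a source-transformation tool such as TAF would instead emit new code that computes the derivative alongside the original statements.

      # Minimal forward-mode AD via operator overloading: arithmetic on Dual
      # values propagates derivatives automatically through existing code.
      import math

      class Dual:
          """A value together with its derivative."""
          def __init__(self, val, dot=0.0):
              self.val, self.dot = val, dot
          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val + o.val, self.dot + o.dot)
          __radd__ = __add__
          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val * o.val,
                          self.dot * o.val + self.val * o.dot)  # product rule
          __rmul__ = __mul__

      def sin(x):
          return Dual(math.sin(x.val), math.cos(x.val) * x.dot)  # chain rule

      # d/dx [x*sin(x) + 2x] at x = 1.5, seeded with dx/dx = 1
      x = Dual(1.5, 1.0)
      y = x * sin(x) + 2 * x
      print(y.val, y.dot)   # f(1.5) and f'(1.5) = sin(x) + x*cos(x) + 2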

  4. Compositional Specification of Software Architecture

    NASA Technical Reports Server (NTRS)

    Penix, John; Lau, Sonie (Technical Monitor)

    1998-01-01

    This paper describes our experience using parameterized algebraic specifications to model properties of software architectures. The goal is to model the decomposition of requirements independent of the style used to implement the architecture. We begin by providing an overview of the role of architecture specification in software development. We then describe how architecture specifications are built up from component and connector specifications and give an overview of insights gained from a case study used to validate the method.

  5. Jupiter Europa Orbiter Architecture Definition Process

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert; Shishko, Robert

    2011-01-01

    The proposed Jupiter Europa Orbiter mission, planned for launch in 2020, is using a new architectural process and framework tool to drive its model-based systems engineering effort. The process focuses on getting the architecture right before writing requirements and developing a point design. A new architecture framework tool provides for the structured entry and retrieval of architecture artifacts based on an emerging architecture meta-model. This paper describes the relationships among these artifacts and how they are used in the systems engineering effort. Some early lessons learned are discussed.

  6. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1987-01-01

    The results of ongoing research directed at developing a graph-theoretical model for describing data and control flow associated with the execution of large-grained algorithms in a spatially distributed computer environment are presented. This model is identified by the acronym ATAMM (Algorithm/Architecture Mapping Model). The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM-based architecture is to optimize computational concurrency in the multiprocessor environment and to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.
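
    The core dataflow idea can be previewed with a minimal sketch (not the ATAMM formalism itself, whose marked-graph firing rules are richer): in a data-driven architecture a task may fire as soon as all of its predecessors have produced their tokens, so the graph topology alone exposes the exploitable concurrency. The example graph is invented.

    ```python
    from collections import defaultdict

    # A large-grained algorithm as a dataflow graph: each node fires as
    # soon as every predecessor has produced its token, so the graph
    # itself exposes the concurrency a data-driven architecture exploits.
    edges = [("A", "C"), ("B", "C"), ("B", "D"), ("C", "E"), ("D", "E")]
    preds = defaultdict(set)
    nodes = set()
    for u, v in edges:
        preds[v].add(u)
        nodes |= {u, v}

    done, step = set(), 0
    while done != nodes:
        ready = {n for n in nodes - done if preds[n] <= done}
        step += 1
        print(f"step {step}: fire {sorted(ready)} concurrently")
        done |= ready
    # step 1: A, B in parallel; step 2: C, D; step 3: E
    ```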

  7. The LUVOIR Large Mission Concept

    NASA Astrophysics Data System (ADS)

    O'Meara, John; LUVOIR Science and Technology Definition Team

    2018-01-01

    LUVOIR is one of four large mission concepts for which the NASA Astrophysics Division has commissioned studies by Science and Technology Definition Teams (STDTs) drawn from the astronomical community. We are currently developing two architectures: Architecture A, with a 15.1 meter segmented primary mirror, and Architecture B, with a 9.2 meter segmented primary mirror. Our focus in this presentation is the Architecture A LUVOIR. LUVOIR will operate at the Sun-Earth L2 point. It will be designed to support a broad range of astrophysics and exoplanet studies. The initial instruments developed for LUVOIR Architecture A include 1) a high-performance optical/NIR coronagraph with imaging and spectroscopic capability, 2) a UV imager and spectrograph with high spectral resolution and multi-object capability, 3) a high-definition wide-field optical/NIR camera, and 4) a high-resolution UV/optical spectropolarimeter. LUVOIR will be designed for extreme stability to support unprecedented spatial resolution and coronagraphy. It is intended to be a long-lifetime facility that is serviceable and upgradable, and that is primarily driven by guest observer science programs. In this presentation, we will describe the observatory and its instruments, and survey the transformative science LUVOIR can accomplish.

  8. Teaching Architecture - Contemporary Challenges and Threats in the Complexity of Built Environment

    NASA Astrophysics Data System (ADS)

    Borucka, Justyna; Macikowski, Bartosz

    2017-10-01

    The complexity of the modern built environment is not only a problem of architectural and urban issues; it extends to many other disciplines and covers a wide range of social engagements. This paper was prompted by a debate held in Gdańsk on 22.01.2016, which brought together representatives of four circles of interest within the architectural sphere: universities, professional architectural organisations and associations, architectural practice (professionals running their own studios, managing projects and leading construction), and local social organisations active in the city of Gdańsk. The paper compares the results of this discussion with the policy and methodology of architecture teaching at the university level. Teaching architecture and urban planning, in light of this discussion, needs to be improved and advanced to meet the increasing complexity of both disciplines. The contemporary dynamic development of cities makes it necessary to engage multiple stakeholders, participants and users of architecture and urban space, which is crucial to making them conscious of their shared responsibility for increasing the quality of living in the built environment. The discussion about architectural education is open, has the nature of an ongoing process adapting to a changing environment, and is in fact a constant challenge that brings questions rather than simple answers. The transformation of architecture and urban planning, and consequently of their education, increasingly enters related fields, especially professional practice and the social environment. The question of how to teach architecture and urban planning and how to educate users of urban space should be addressed in the context of a wide discussion. This interdisciplinary debate seems to be a crucial and challenging step towards improving the future education of architecture and urban planning, leading to a better life in the city.

  9. Instruction-level performance modeling and characterization of multimedia applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Y.; Cameron, K.W.

    1999-06-01

    One of the challenges for characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction level. This paper presents a novel technique of characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from multimedia applications such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and a speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and the architectural bottleneck for each application. This technique also provides predictive insight into future architectural enhancements and their effect on current codes. The authors also attempt to model architectural effects on processor utilization without memory influence. They derive formulas for calculating CPI_0, the CPI without memory effects, and they quantify the utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results show promise in code characterization and empirical/analytical modeling.

  10. Business Value of Information Sharing and the Role of Emerging Technologies

    ERIC Educational Resources Information Center

    Kumar, Sanjeev

    2009-01-01

    Information Technology has brought significant benefits to organizations by allowing greater information sharing within and across firm boundaries leading to performance improvements. Emerging technologies such as Service Oriented Architecture (SOA) and Web2.0 have transformed the volume and process of information sharing. However, a comprehensive…

  11. Strategies for Competition Beyond Open Architecture (OA): Acquisition at the Edge of Chaos

    DTIC Science & Technology

    2014-04-30

    Discipline of Systems Engineering. SERC-2009-TR-006: Systems Engineering Research Center. Wade, D. J., & Madni, D. A. (2010). Development of 3-Year... Roadmap to Transform the Discipline of Systems Engineering. SERC-2009-TR-006: Systems Engineering Research Center. Wikipedia. (2012, 4 10

  12. What Would a Non-Sexist City Be Like? Speculations on Housing, Urban Design, and Human Work.

    ERIC Educational Resources Information Center

    Hayden, Dolores

    1980-01-01

    Holds that in order to be equal members of society, women must transform the sexual division of domestic labor, the economics of domestic work, and the spatial separation of homes and workplaces. Proposes architectural and social reforms to meet these ends. (Author/GC)

  13. DoD Business Mission Area Service-Oriented Architecture to Support Business Transformation

    DTIC Science & Technology

    2008-10-01

    Notation (BPMN). The research also found strong support across vendors for the Business Process Execution Language standard, though there is also... emerging support for direct execution of BPMN through the use of the XML Process Definition Language, an XML serialization of BPMN. Many vendors also

  14. Using Online Education Technologies to Support Studio Instruction

    ERIC Educational Resources Information Center

    Bender, Diane M.; Vredevoogd, Jon D.

    2006-01-01

    Technology is transforming the education and practice of architecture and design. The newest form of education is blended learning, which combines personal interaction from live class sessions with online education for greater learning flexibility (Abrams & Haefner, 2002). Reluctant to join the digital era are educators teaching studio courses…

  15. Space Solar Power Management and Distribution (PMAD)

    NASA Technical Reports Server (NTRS)

    Lynch, Thomas H.

    2000-01-01

    This paper presents, in viewgraph form, SSP PMAD (Space Solar Power Management and Distribution). The topics include: 1) Architecture; 2) Backside Thermal View; 3) Solar Array Interface; 4) Transformer design and risks; 5) Twelve phase rectifier; 6) Antenna (80V) Converters; 7) Distribution Cables; 8) Weight Analysis; and 9) PMAD Summary.

  16. Modeling Techniques for High Dependability Protocols and Architecture

    NASA Technical Reports Server (NTRS)

    LaValley, Brian; Ellis, Peter; Walter, Chris J.

    2012-01-01

    This report documents an investigation into modeling high dependability protocols and some specific challenges that were identified as a result of the experiments. The need for an approach was established and foundational concepts proposed for modeling different layers of a complex protocol and capturing the compositional properties that provide high dependability services for a system architecture. The approach centers on the definition of an architecture layer, its interfaces for composability with other layers, and its bindings to a platform-specific architecture model that implements the protocols required for the layer.

  17. Assessing the effects of architectural variations on light partitioning within virtual wheat–pea mixtures

    PubMed Central

    Barillot, Romain; Escobar-Gutiérrez, Abraham J.; Fournier, Christian; Huynh, Pierre; Combes, Didier

    2014-01-01

    Background and Aims: Predicting light partitioning in crop mixtures is a critical step in improving the productivity of such complex systems, and light interception has been shown to be closely linked to plant architecture. The aim of the present work was to analyse the relationships between plant architecture and light partitioning within wheat–pea (Triticum aestivum–Pisum sativum) mixtures. An existing model for wheat was utilized and a new model for pea morphogenesis was developed. Both models were then used to assess the effects of architectural variations in light partitioning. Methods: First, a deterministic model (L-Pea) was developed in order to obtain dynamic reconstructions of pea architecture. The L-Pea model is based on L-systems formalism and consists of modules for 'vegetative development' and 'organ extension'. A tripartite simulator was then built up from pea and wheat models interfaced with a radiative transfer model. Architectural parameters from both plant models, selected on the basis of their contribution to leaf area index (LAI), height and leaf geometry, were then modified in order to generate contrasting architectures of wheat and pea. Key Results: By scaling down the analysis to the organ level, it could be shown that the number of branches/tillers and length of internodes significantly determined the partitioning of light within mixtures. Temporal relationships between light partitioning and the LAI and height of the different species showed that light capture was mainly related to the architectural traits involved in plant LAI during the early stages of development, and in plant height during the onset of interspecific competition. Conclusions: In silico experiments enabled the study of the intrinsic effects of architectural parameters on the partitioning of light in crop mixtures of wheat and pea. The findings show that plant architecture is an important criterion for the identification/breeding of plant ideotypes, particularly with respect to light partitioning. PMID:24907314
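
    The L-systems formalism on which L-Pea is built can be illustrated with a deliberately tiny rewriting system; the symbols and rules below are invented for illustration and are not the published L-Pea production rules.

    ```python
    # Parallel rewriting, the essence of an L-system: every symbol of the
    # current string is replaced simultaneously at each (thermal-)time step.
    axiom = "A"
    rules = {
        "A": "I[B]A",   # apex -> internode, lateral bud, new apex
        "B": "L",       # bud develops into a leaf
    }

    def rewrite(s):
        return "".join(rules.get(c, c) for c in s)

    s = axiom
    for step in range(4):
        s = rewrite(s)
        print(f"step {step + 1}: {s}")
    # the bracketed string encodes branched topology; a turtle interpreter
    # then turns it into 3-D organ positions for the radiative model
    ```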

  18. RACE/A: An Architectural Account of the Interactions between Learning, Task Control, and Retrieval Dynamics

    ERIC Educational Resources Information Center

    van Maanen, Leendert; van Rijn, Hedderik; Taatgen, Niels

    2012-01-01

    This article discusses how sequential sampling models can be integrated in a cognitive architecture. The new theory Retrieval by Accumulating Evidence in an Architecture (RACE/A) combines the level of detail typically provided by sequential sampling models with the level of task complexity typically provided by cognitive architectures. We will use…

  19. Declarative Consciousness for Reconstruction

    NASA Astrophysics Data System (ADS)

    Seymour, Leslie G.

    2013-12-01

    Existing information technology tools are harnessed and integrated to provide a digital specification of the consciousness of individual persons. An incremental compilation technology is proposed as a transformation of LifeLog-derived persona specifications into a canonical representation of the neocortex architecture of the human brain. The primary purpose is to gain an understanding of the semantic allocation of neocortex capacity. Novel neocortex content allocation simulators with browsers are proposed to experiment with various approaches to relieving the brain from overload conditions. An IT model of the neocortex is maintained and updated each time new stimuli are received from the LifeLog data stream, new information is gained from brain signal measurements, and new functional dependencies are discovered between signals consumed or produced by the live persona.

  20. National Institutes of Health eliminates funding for national architecture linking primary care research.

    PubMed

    Peterson, Kevin A

    2007-01-01

    With the ending of the National Electronic Clinical Trial and Research Network (NECTAR) pilot programs and the abridgement of the Clinical Research Associate initiative, the National Institutes of Health Roadmap presents a strategic shift for practice-based research networks: from direct funding of a harmonized national infrastructure of cooperating research networks to a model of local engagement of primary care clinics performing practice-based research under the aegis of regional academic health centers through Clinical and Translational Science Awards. Although this may present important opportunities for partnering between community practices and large health centers, for primary care researchers the promise of a transformational change that brings a unified national primary care community into the clinical research enterprise seems likely to remain unfulfilled.

  1. Semantic Technologies for Re-Use of Clinical Routine Data.

    PubMed

    Kreuzthaler, Markus; Martínez-Costa, Catalina; Kaiser, Peter; Schulz, Stefan

    2017-01-01

    Routine patient data in electronic patient records are only partly structured, and an even smaller segment is coded, mainly for administrative purposes. Large parts are only available as free text. Transforming this content into a structured and semantically explicit form is a prerequisite for querying and information extraction. The core of the system architecture presented in this paper is based on SAP HANA in-memory database technology using the SAP Connected Health platform for data integration as well as for clinical data warehousing. A natural language processing pipeline analyses unstructured content and maps it to a standardized vocabulary within a well-defined information model. The resulting semantically standardized patient profiles are used for a broad range of clinical and research application scenarios.

  2. Multilayer Statistical Intrusion Detection in Wireless Networks

    NASA Astrophysics Data System (ADS)

    Hamdi, Mohamed; Meddeb-Makhlouf, Amel; Boudriga, Noureddine

    2008-12-01

    The rapid proliferation of mobile applications and services has introduced new vulnerabilities that do not exist in fixed wired networks. Traditional security mechanisms, such as access control and encryption, turn out to be inefficient in modern wireless networks. Given the shortcomings of these protection mechanisms, an important line of research focuses on intrusion detection systems (IDSs). This paper proposes a multilayer statistical intrusion detection framework for wireless networks. The architecture is well suited to wireless networks because the underlying detection models rely on radio parameters and traffic models. Accurate correlation between radio and traffic anomalies enhances the efficiency of the IDS. A radio signal fingerprinting technique based on the maximal overlap discrete wavelet transform (MODWT) is developed. Moreover, a geometric clustering algorithm is presented. Depending on the characteristics of the fingerprinting technique, the clustering algorithm makes it possible to control the false positive and false negative rates. Finally, simulation experiments have been carried out to validate the proposed IDS.
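
    For readers unfamiliar with the MODWT, it is an undecimated wavelet transform whose coefficients stay time-aligned with the input, which is what makes it attractive for fingerprinting. A minimal Haar-filter sketch follows; the paper's actual filters, depths, and features are not specified here, and per-sub-band energies are just one plausible fingerprint.

    ```python
    import numpy as np

    def modwt_haar(x, levels=3):
        """1-D Haar MODWT via circular ('a trous') filtering: no
        downsampling, so coefficients stay aligned with the signal."""
        g = np.array([1.0, 1.0]) / 2.0       # MODWT scaling filter
        h = np.array([1.0, -1.0]) / 2.0      # MODWT wavelet filter
        v, coeffs = x.astype(float), []
        for j in range(levels):
            dilate = 2 ** j                  # filter dilated by 2^j
            w = sum(h[k] * np.roll(v, k * dilate) for k in range(2))
            v = sum(g[k] * np.roll(v, k * dilate) for k in range(2))
            coeffs.append(w)
        return coeffs, v

    # a crude fingerprint: per-sub-band energies, which could feed the
    # geometric clustering stage described in the abstract
    rng = np.random.default_rng(1)
    sig = np.sin(np.linspace(0, 40 * np.pi, 512)) + 0.1 * rng.standard_normal(512)
    coeffs, smooth = modwt_haar(sig)
    print([round(float(np.sum(w ** 2)), 2) for w in coeffs])
    ```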

  3. Vibrational Analysis of Engine Components Using Neural-Net Processing and Electronic Holography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.

    1997-01-01

    The use of computational-model trained artificial neural networks to acquire damage specific information from electronic holograms is discussed. A neural network is trained to transform two time-average holograms into a pattern related to the bending-induced-strain distribution of the vibrating component. The bending distribution is very sensitive to component damage unlike the characteristic fringe pattern or the displacement amplitude distribution. The neural network processor is fast for real-time visualization of damage. The two-hologram limit makes the processor more robust to speckle pattern decorrelation. Undamaged and cracked cantilever plates serve as effective objects for testing the combination of electronic holography and neural-net processing. The requirements are discussed for using finite-element-model trained neural networks for field inspections of engine components. The paper specifically discusses neural-network fringe pattern analysis in the presence of the laser speckle effect and the performances of two limiting cases of the neural-net architecture.

  4. Vibrational Analysis of Engine Components Using Neural-Net Processing and Electronic Holography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.

    1998-01-01

    The use of computational-model trained artificial neural networks to acquire damage specific information from electronic holograms is discussed. A neural network is trained to transform two time-average holograms into a pattern related to the bending-induced-strain distribution of the vibrating component. The bending distribution is very sensitive to component damage unlike the characteristic fringe pattern or the displacement amplitude distribution. The neural network processor is fast for real-time visualization of damage. The two-hologram limit makes the processor more robust to speckle pattern decorrelation. Undamaged and cracked cantilever plates serve as effective objects for testing the combination of electronic holography and neural-net processing. The requirements are discussed for using finite-element-model trained neural networks for field inspections of engine components. The paper specifically discusses neural-network fringe pattern analysis in the presence of the laser speckle effect and the performances of two limiting cases of the neural-net architecture.

  5. Backwards compatible high dynamic range video compression

    NASA Astrophysics Data System (ADS)

    Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.

    2014-02-01

    This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment; its content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the result of inverse tone mapping the base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. The perceptually uniform colorspace enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a model of the human visual system and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data for the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using AVC and HEVC codecs.
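
    The two-layer prediction step can be sketched in a few lines. The global Reinhard-style operator below is a stand-in for the paper's (locally adaptive, HVS-based) tone mapping, and the residual is computed as a plain linear difference rather than in a perceptually uniform color space:

    ```python
    import numpy as np

    def tone_map(hdr):                        # global Reinhard-style stand-in
        return hdr / (1.0 + hdr)

    def inverse_tone_map(ldr):
        return ldr / np.maximum(1.0 - ldr, 1e-6)

    # two-layer encoding: base = 8-bit tone-mapped frame (backwards
    # compatible); enhancement = original minus inverse-mapped base
    rng = np.random.default_rng(2)
    hdr = rng.gamma(2.0, 2.0, size=(4, 4))        # toy linear-light frame
    base = np.round(tone_map(hdr) * 255) / 255    # quantized base layer
    prediction = inverse_tone_map(base)
    enhancement = hdr - prediction                # residual to be coded
    print(float(np.abs(enhancement).max()))       # small vs. the HDR range
    ```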

  6. Early Prediction of Cancer Progression by Depth-Resolved Nanoscale Mapping of Nuclear Architecture from Unstained Tissue Specimens.

    PubMed

    Uttam, Shikhar; Pham, Hoa V; LaFace, Justin; Leibowitz, Brian; Yu, Jian; Brand, Randall E; Hartman, Douglas J; Liu, Yang

    2015-11-15

    Early cancer detection currently relies on screening the entire at-risk population, as with colonoscopy and mammography. Therefore, frequent, invasive surveillance of patients at risk for developing cancer carries financial, physical, and emotional burdens because clinicians lack tools to accurately predict which patients will actually progress into malignancy. Here, we present a new method to predict cancer progression risk via nanoscale nuclear architecture mapping (nanoNAM) of unstained tissue sections based on the intrinsic density alteration of nuclear structure rather than the amount of stain uptake. We demonstrate that nanoNAM detects a gradual increase in the density alteration of nuclear architecture during malignant transformation in animal models of colon carcinogenesis and in human patients with ulcerative colitis, even in tissue that appears histologically normal according to pathologists. We evaluated the ability of nanoNAM to predict "future" cancer progression in patients with ulcerative colitis who did and did not develop colon cancer up to 13 years after their initial colonoscopy. NanoNAM of the initial biopsies correctly classified 12 of 15 patients who eventually developed colon cancer and 15 of 18 who did not, with an overall accuracy of 85%. Taken together, our findings demonstrate great potential for nanoNAM in predicting cancer progression risk and suggest that further validation in a multicenter study with larger cohorts may eventually advance this method to become a routine clinical test. ©2015 American Association for Cancer Research.

  7. Network-driven design principles for neuromorphic systems.

    PubMed

    Partzsch, Johannes; Schüffny, Rene

    2015-01-01

    Synaptic connectivity is typically the most resource-demanding part of neuromorphic systems. Commonly, the architecture of these systems is chosen mainly on the basis of technical considerations. As a consequence, the potential for optimization arising from the inherent constraints of connectivity models is left unused. In this article, we develop an alternative, network-driven approach to neuromorphic architecture design. We describe methods to analyse the performance of existing neuromorphic architectures in emulating certain connectivity models. Furthermore, we show step by step how to derive a neuromorphic architecture from a given connectivity model. For this, we introduce a generalized description for architectures with a synapse matrix, which takes into account the shared use of circuit components to reduce total silicon area. Architectures designed with this approach are fitted to a connectivity model, essentially adapting to its connection density. They guarantee faithful reproduction of the model on chip while requiring less total silicon area. In total, our methods allow designers to implement more area-efficient neuromorphic systems and to verify the usability of the connectivity resources in these systems.

  8. Network-driven design principles for neuromorphic systems

    PubMed Central

    Partzsch, Johannes; Schüffny, Rene

    2015-01-01

    Synaptic connectivity is typically the most resource-demanding part of neuromorphic systems. Commonly, the architecture of these systems is chosen mainly on the basis of technical considerations. As a consequence, the potential for optimization arising from the inherent constraints of connectivity models is left unused. In this article, we develop an alternative, network-driven approach to neuromorphic architecture design. We describe methods to analyse the performance of existing neuromorphic architectures in emulating certain connectivity models. Furthermore, we show step by step how to derive a neuromorphic architecture from a given connectivity model. For this, we introduce a generalized description for architectures with a synapse matrix, which takes into account the shared use of circuit components to reduce total silicon area. Architectures designed with this approach are fitted to a connectivity model, essentially adapting to its connection density. They guarantee faithful reproduction of the model on chip while requiring less total silicon area. In total, our methods allow designers to implement more area-efficient neuromorphic systems and to verify the usability of the connectivity resources in these systems. PMID:26539079
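
    The silicon-area argument is easy to make quantitative. The sketch below compares a full synapse crossbar with a matrix sized to a model's connection density; only the idea of fitting the architecture to the density comes from the article, while all unit-area numbers are invented placeholders.

    ```python
    # Back-of-envelope synapse-matrix area: full crossbar versus a matrix
    # sized to the model's connection density. Unit costs are made up.
    n_neurons = 1024
    p_connect = 0.05          # connection probability of the model
    a_synapse = 1.0           # area of one synapse circuit (arbitrary units)
    a_address = 0.2           # per-synapse overhead for shared/addressed use

    full_crossbar = n_neurons ** 2 * a_synapse
    density_fitted = n_neurons ** 2 * p_connect * (a_synapse + a_address)
    print(f"full crossbar : {full_crossbar:.3e}")
    print(f"density-fitted: {density_fitted:.3e}")
    print(f"saving        : {1 - density_fitted / full_crossbar:.1%}")
    ```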

  9. Demand Activated Manufacturing Architecture (DAMA) model for supply chain collaboration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CHAPMAN,LEON D.; PETERSEN,MARJORIE B.

    The Demand Activated Manufacturing Architecture (DAMA) project, during the last five years of work with the U.S. Integrated Textile Complex (retail, apparel, textile, and fiber sectors), has developed an inter-enterprise architecture and collaborative model for supply chains. This model will enable improved collaborative business across any supply chain. The DAMA Model for Supply Chain Collaboration is a high-level model for collaboration to achieve Demand Activated Manufacturing. The five major elements of the architecture to support collaboration are (1) activity or process, (2) information, (3) application, (4) data, and (5) infrastructure. These five elements are tied to the application of the DAMA architecture to three phases of collaboration: prepare, pilot, and scale. Six collaborative activities may be employed in this model: (1) Develop Business Planning Agreements, (2) Define Products, (3) Forecast and Plan Capacity Commitments, (4) Schedule Product and Product Delivery, (5) Expedite Production and Delivery Exceptions, and (6) Populate Supply Chain Utility. The Supply Chain Utility is a set of applications implemented to support collaborative product definition, forecast visibility, planning, scheduling, and execution. The DAMA architecture and model are presented along with the process for implementing the DAMA model.

  10. On Using SysML, DoDAF 2.0 and UPDM to Model the Architecture for the NOAA's Joint Polar Satellite System (JPSS) Ground System (GS)

    NASA Technical Reports Server (NTRS)

    Hayden, Jeffrey L.; Jeffries, Alan

    2012-01-01

    The JPSS Ground System is a flexible system of systems responsible for telemetry, tracking and command (TT&C), data acquisition, routing, and data processing services for a varied fleet of satellites in support of weather prediction and climate modeling. To assist in this engineering effort, architecture modeling tools are being employed to translate the former NPOESS baseline to the new JPSS baseline. The paper focuses on the methodology for the systems engineering process and the use of these architecture modeling tools within that process. The Department of Defense Architecture Framework version 2.0 (DoDAF 2.0) viewpoints and views that are being used to describe the JPSS GS architecture are discussed. The Unified Profile for DoDAF and MODAF (UPDM) and the Systems Modeling Language (SysML), as provided by extensions to the MagicDraw UML modeling tool, are used to develop the diagrams and tables that make up the architecture model. The model development process and structure are discussed, examples are shown, and details of handling the complexities of a large System of Systems (SoS), such as the JPSS GS, with an equally complex modeling tool are described.

  11. Electro-optic Imaging Fourier Transform Spectrometer

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin

    2005-01-01

    JPL is developing an innovative compact, low mass, Electro-Optic Imaging Fourier Transform Spectrometer (E-O IFTS) for hyperspectral imaging applications. The spectral region of this spectrometer will be 1 - 2.5 micron (1000-4000/cm) to allow high-resolution, high-speed hyperspectral imaging applications. One application will be the remote sensing of the measurement of a large number of different atmospheric gases simultaneously in the same airmass. Due to the use of a combination of birefringent phase retarders and multiple achromatic phase switches to achieve phase delay, this spectrometer is capable of hyperspectral measurements similar to that of the conventional Fourier transform spectrometer but without any moving parts. In this paper, the principle of operations, system architecture and recent experimental progress will be presented.
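
    The reconstruction principle behind any Fourier transform spectrometer, including this electro-optic variant, is that the recorded interferogram and the spectrum form a Fourier pair. A minimal NumPy illustration with two synthetic, bin-aligned spectral lines (all values arbitrary):

    ```python
    import numpy as np

    n = 256
    delay = np.arange(n)                      # optical path delay steps
    lines = [0.125, 0.25]                     # two lines (cycles per step)
    interferogram = sum(np.cos(2 * np.pi * k * delay) for k in lines)

    # the spectrum is the Fourier transform of the interferogram
    spectrum = np.abs(np.fft.rfft(interferogram))
    freqs = np.fft.rfftfreq(n)
    print(freqs[spectrum > 0.5 * spectrum.max()])   # recovers 0.125, 0.25
    ```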

  12. Liquid/vapor-induced reversible dynamic structural transformation of a three-dimensional Cu-based MOF to a one-dimensional MOF showing gate adsorption.

    PubMed

    Kondo, Atsushi; Suzuki, Takayuki; Kotani, Ryosuke; Maeda, Kazuyuki

    2017-05-23

    A new 3D metal-organic framework (MOF), in which 2D layers are interlaced to form a 3D architecture, was synthesized by a reaction of Cu(BF4)2 and 1,3-bis(4-pyridyl)propane (bpp) in a water/1-hexanol solvent system, and the crystal structure of the MOF was successfully solved. The MOF is reversibly transformed to a 1D chain MOF, which shows gate adsorption properties. The dynamic transformation reduces the crystal size, resulting in a slight change in the CO2 adsorption isotherms. The 1D MOF shows selective adsorption/separation properties for benzene and its analogues with similar sizes and shapes (benzene, toluene, and cyclohexane).

  13. Development of a small single-ring OpenPET prototype with a novel transformable architecture.

    PubMed

    Tashima, Hideaki; Yoshida, Eiji; Inadama, Naoko; Nishikido, Fumihiko; Nakajima, Yasunori; Wakizaka, Hidekatsu; Shinaji, Tetsuya; Nitta, Munetaka; Kinouchi, Shoko; Suga, Mikio; Haneishi, Hideaki; Inaniwa, Taku; Yamaya, Taiga

    2016-02-21

    The single-ring OpenPET (SROP), for which the detector arrangement has a cylinder shape cut by two parallel planes at a slant angle to form an open space, is our original proposal for in-beam PET. In this study, we developed a small prototype of an axial-shift type SROP (AS-SROP) with a novel transformable architecture as a proof-of-concept. In the AS-SROP, detectors originally forming a cylindrical PET are axially shifted little by little. We designed the small AS-SROP prototype for 4-layer depth-of-interaction detectors arranged in a ring diameter of 250 mm. The prototype had two modes: open and closed. The open mode formed the SROP with an open space of 139 mm and the closed mode formed a conventional cylindrical PET. The detectors were simultaneously moved by a rotation handle allowing them to be transformed between the two modes. We evaluated the basic performance of the developed prototype and carried out in-beam imaging tests in the HIMAC using 11C radioactive beam irradiation. As a result, we found that the open mode enabled in-beam PET imaging at a slight cost in imaging performance; the spatial resolution and sensitivity were 2.6 mm and 5.1% for the open mode and 2.1 mm and 7.3% for the closed mode. We concluded that the AS-SROP can minimize the decrease of resolution and sensitivity, for example by transforming into the closed mode immediately after the irradiation while maintaining the open space only for the in-beam PET measurement.

  14. How architecture wins technology wars.

    PubMed

    Morris, C R; Ferguson, C H

    1993-01-01

    Signs of revolutionary transformation in the global computer industry are everywhere. A roll call of the major industry players reads like a waiting list in the emergency room. The usual explanations for the industry's turmoil are at best inadequate. Scale, friendly government policies, manufacturing capabilities, a strong position in desktop markets, excellent software, top design skills--none of these is sufficient, either by itself or in combination, to ensure competitive success in information technology. A new paradigm is required to explain patterns of success and failure. Simply stated, success flows to the company that manages to establish proprietary architectural control over a broad, fast-moving, competitive space. Architectural strategies have become crucial to information technology because of the astonishing rate of improvement in microprocessors and other semiconductor components. Since no single vendor can keep pace with the outpouring of cheap, powerful, mass-produced components, customers insist on stitching together their own local systems solutions. Architectures impose order on the system and make the interconnections possible. The architectural controller is the company that controls the standard by which the entire information package is assembled. Microsoft's Windows is an excellent example of this. Because of the popularity of Windows, companies like Lotus must conform their software to its parameters in order to compete for market share. In the 1990s, proprietary architectural control is not only possible but indispensable to competitive success. What's more, it has broader implications for organizational structure: architectural competition is giving rise to a new form of business organization.

  15. MHDL CAD tool with fault circuit handling

    NASA Astrophysics Data System (ADS)

    Espinosa Flores-Verdad, Guillermo; Altamirano Robles, Leopoldo; Osorio Roque, Leticia

    2003-04-01

    Behavioral modeling and simulation with analog hardware and mixed-signal description high-level languages (MHDLs) have driven the development of diverse simulation tools that can handle the requirements of modern designs: systems with millions of embedded transistors that are radically diverse among themselves. This tendency in simulation tools is exemplified by the development of languages for modeling and simulation whose applications include the re-use of complete systems, the construction of virtual prototypes, and the realization of test and synthesis. This paper presents the general architecture of a mixed hardware description language based on the standard 1076.1-1999 IEEE VHDL Analog and Mixed-Signal Extensions, known as VHDL-AMS. The architecture is novel in that it considers the modeling and simulation of faults. The main modules of the CAD tool are briefly described in order to establish the information flow and its transformations, starting from the description of a circuit model, going through lexical analysis, mathematical model generation, and the simulation core, and ending at the collection of the circuit behavior as simulation data. In addition, the mechanisms incorporated into the simulation core to handle faults in the circuit models are explained. Currently, the CAD tool works with algebraic and differential descriptions of the circuit models; nevertheless, the language design is open so that it can handle different model types: fuzzy models, differential equations, transfer functions, and tables. This applies to fault models too; in this sense, the CAD tool considers the inclusion of mutants and saboteurs. To exemplify the results obtained so far, the simulated behavior of a circuit is shown when it is fault free and when it has been modified by the inclusion of a fault as a mutant or a saboteur. The obtained results allow the realization of a virtual diagnosis for mixed circuits. The language runs on a UNIX system; it was developed with an object-oriented methodology and programmed in C++.

  16. A coupled-oscillator model of olfactory bulb gamma oscillations

    PubMed Central

    2017-01-01

    The olfactory bulb transforms not only the information content of the primary sensory representation, but also its underlying coding metric. High-variance, slow-timescale primary odor representations are transformed by bulbar circuitry into secondary representations based on principal neuron spike patterns that are tightly regulated in time. This emergent fast timescale for signaling is reflected in gamma-band local field potentials, presumably serving to efficiently integrate olfactory sensory information into the temporally regulated information networks of the central nervous system. To understand this transformation and its integration with interareal coordination mechanisms requires that we understand its fundamental dynamical principles. Using a biophysically explicit, multiscale model of olfactory bulb circuitry, we here demonstrate that an inhibition-coupled intrinsic oscillator framework, pyramidal resonance interneuron network gamma (PRING), best captures the diversity of physiological properties exhibited by the olfactory bulb. Most importantly, these properties include global zero-phase synchronization in the gamma band, the phase-restriction of informative spikes in principal neurons with respect to this common clock, and the robustness of this synchronous oscillatory regime to multiple challenging conditions observed in the biological system. These conditions include substantial heterogeneities in afferent activation levels and excitatory synaptic weights, high levels of uncorrelated background activity among principal neurons, and spike frequencies in both principal neurons and interneurons that are irregular in time and much lower than the gamma frequency. This coupled cellular oscillator architecture permits stable and replicable ensemble responses to diverse sensory stimuli under various external conditions as well as to changes in network parameters arising from learning-dependent synaptic plasticity. PMID:29140973
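
    The central dynamical claim, inhibition-coupled oscillators converging to zero-phase synchrony despite heterogeneity, can be previewed with a much simpler abstraction than the biophysically explicit PRING model: a Kuramoto-style mean-field phase model. All numbers below are an invented toy, not a parameterization of the paper's model.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, steps, dt, K = 50, 4000, 0.001, 25.0
    omega = 2 * np.pi * (40 + rng.standard_normal(n))  # heterogeneous ~40 Hz
    theta = 2 * np.pi * rng.random(n)

    def order_parameter(theta):
        return np.abs(np.exp(1j * theta).mean())       # 1 = perfect sync

    print(f"before: R = {order_parameter(theta):.2f}")
    for _ in range(steps):
        z = np.exp(1j * theta).mean()                  # mean field
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    print(f"after : R = {order_parameter(theta):.2f}")  # R rises toward 1
    ```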

  17. Real-time million-synapse simulation of rat barrel cortex.

    PubMed

    Sharp, Thomas; Petersen, Rasmus; Furber, Steve

    2014-01-01

    Simulations of neural circuits are bounded in scale and speed by available computing resources, and particularly by the differences in parallelism and communication patterns between the brain and high-performance computers. SpiNNaker is a computer architecture designed to address this problem by emulating the structure and function of neural tissue, using very many low-power processors and an interprocessor communication mechanism inspired by axonal arbors. Here we demonstrate that thousand-processor SpiNNaker prototypes can simulate models of the rodent barrel system comprising 50,000 neurons and 50 million synapses. We use the PyNN library to specify models, and the intrinsic features of Python to control experimental procedures and analysis. The models reproduce known thalamocortical response transformations, exhibit known, balanced dynamics of excitation and inhibition, and show a spatiotemporal spread of activity through the superficial cortical layers. These demonstrations are a significant step toward tractable simulations of entire cortical areas on the million-processor SpiNNaker machines in development.

  18. Real-time million-synapse simulation of rat barrel cortex

    PubMed Central

    Sharp, Thomas; Petersen, Rasmus; Furber, Steve

    2014-01-01

    Simulations of neural circuits are bounded in scale and speed by available computing resources, and particularly by the differences in parallelism and communication patterns between the brain and high-performance computers. SpiNNaker is a computer architecture designed to address this problem by emulating the structure and function of neural tissue, using very many low-power processors and an interprocessor communication mechanism inspired by axonal arbors. Here we demonstrate that thousand-processor SpiNNaker prototypes can simulate models of the rodent barrel system comprising 50,000 neurons and 50 million synapses. We use the PyNN library to specify models, and the intrinsic features of Python to control experimental procedures and analysis. The models reproduce known thalamocortical response transformations, exhibit known, balanced dynamics of excitation and inhibition, and show a spatiotemporal spread of activity through the superficial cortical layers. These demonstrations are a significant step toward tractable simulations of entire cortical areas on the million-processor SpiNNaker machines in development. PMID:24910593
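
    PyNN, mentioned above, is a simulator-independent API: the same script runs on SpiNNaker hardware or on a software backend. The fragment below is a deliberately tiny sketch of the idiom; population sizes, rates, and weights are arbitrary and bear no relation to the 50,000-neuron barrel model, and `pyNN.spiNNaker` is the backend module supplied by the SpiNNaker toolchain.

    ```python
    import pyNN.spiNNaker as sim    # a software backend such as
                                    # pyNN.nest accepts the same script

    sim.setup(timestep=1.0)         # ms
    thalamus = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0))
    cortex = sim.Population(400, sim.IF_curr_exp(tau_m=20.0))
    sim.Projection(thalamus, cortex,
                   sim.FixedProbabilityConnector(p_connect=0.1),
                   sim.StaticSynapse(weight=0.5, delay=1.0))
    cortex.record("spikes")
    sim.run(1000.0)                 # ms
    trains = cortex.get_data().segments[0].spiketrains
    print(sum(len(t) for t in trains), "cortical spikes")
    sim.end()
    ```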

  19. Validity of flowmeter data in heterogeneous alluvial aquifers

    NASA Astrophysics Data System (ADS)

    Bianchi, Marco

    2017-04-01

    Numerical simulations are performed to evaluate the impact of medium-scale sedimentary architecture and small-scale heterogeneity on the validity of the borehole flowmeter test, a widely used method for measuring hydraulic conductivity (K) at the scale required for detailed groundwater flow and solute transport simulations. Reference data from synthetic K fields representing the range of structures and small-scale heterogeneity typically observed in alluvial systems are compared with estimated values from numerical simulations of flowmeter tests. Systematic errors inherent in the flowmeter K estimates are significant when the reference K field structure deviates from the hypothetical perfectly stratified conceptual model underlying the interpretation method of flowmeter tests. Because of these errors, the true variability of the K field is underestimated and the distributions of the reference K data and of the log-transformed spatial increments are also misconstrued. The presented numerical analysis shows that the validity of flowmeter-based K data depends on measurable parameters defining the architecture of the hydrofacies, the conductivity contrasts between the hydrofacies, and the sub-facies-scale K variability. A preliminary geological characterization is therefore essential for evaluating the optimal approach to accurate K field characterization.
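
    For context, the stratified interpretation that the paper stress-tests attributes each layer's conductivity to the net flow gained across it (after Molz and co-workers). A toy calculation with entirely synthetic flow and thickness values:

    ```python
    import numpy as np

    # Flowmeter interpretation under the perfectly stratified assumption:
    # each layer's K is proportional to the net flow it contributes per
    # unit thickness, scaled by the depth-averaged K. Numbers synthetic.
    dz = np.array([1.0, 1.0, 2.0, 1.0])        # layer thicknesses (m)
    q_top = np.array([10.0, 7.0, 6.0, 2.0])    # flow above each layer (m3/d)
    q_bot = np.array([7.0, 6.0, 2.0, 0.0])     # flow below each layer
    Q, B = q_top[0], dz.sum()                  # pumping rate, screened length
    K_mean = 5.0                               # from a pumping test (m/d)

    K_layer = K_mean * ((q_top - q_bot) / dz) / (Q / B)
    print(K_layer)   # layer K estimates; biased when stratification fails
    ```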

  20. ALI (Autonomous Lunar Investigator): Revolutionary Approach to Exploring the Moon with Addressable Reconfigurable Technology

    NASA Technical Reports Server (NTRS)

    Clark, P. E.; Curtis, S. A.; Rilee, M. L.; Floyd, S. R.

    2005-01-01

    Addressable Reconfigurable Technology (ART)-based structures: Mission concepts based on ART, originally studied for future ANTS (Autonomous Nanotechnology Swarm) space architectures, are now being developed as rovers for nearer-term use in lunar and planetary surface exploration. The architecture is based on the reconfigurable tetrahedron as a building block. Tetrahedra are combined to form space-filling networks shaped for the required function. The basic structural components are highly modular, addressable arrays of robust nodes (tetrahedral apices) from which highly reconfigurable struts (tetrahedral edges), acting as supports or tethers, are efficiently and reversibly deployed or stowed, transforming and reshaping the structures as required.

  1. A single chip VLSI Reed-Solomon decoder

    NASA Technical Reports Server (NTRS)

    Shao, H. M.; Truong, T. K.; Hsu, I. S.; Deutsch, L. J.; Reed, I. S.

    1986-01-01

    A new VLSI design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous design is replaced by a time-domain algorithm. A new architecture that implements such an algorithm permits efficient pipeline processing with minimum circuitry. A systolic array is also developed to perform erasure corrections in the new design. A modified form of Euclid's algorithm is implemented by a new architecture that maintains the throughput rate with less circuitry. These improvements result in both enhanced capability and a significant reduction in silicon area, making it possible to build a pipeline (31,15) RS decoder on a single VLSI chip.
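
    For orientation, a (31,15) RS code works over GF(2^5) and corrects up to eight symbol errors. The sketch below builds that field and evaluates syndromes, the first step shared by transform- and time-domain decoders; the key-equation stage (the modified Euclid's algorithm realized systolically on the chip) is omitted, and the primitive polynomial is a standard choice rather than one taken from the paper.

    ```python
    # GF(2^5) arithmetic via exp/log tables, then syndrome evaluation.
    PRIM = 0b100101                       # x^5 + x^2 + 1, primitive over GF(2)
    exp, log = [0] * 31, [0] * 32
    x = 1
    for i in range(31):
        exp[i], log[x] = x, i
        x <<= 1
        if x & 0b100000:                  # reduce modulo the primitive poly
            x ^= PRIM

    def syndromes(r, t=8):
        """S_j = r(alpha^j) for j = 1..2t; all zero iff r is a codeword."""
        out = []
        for j in range(1, 2 * t + 1):
            s = 0
            for i, c in enumerate(r):
                if c:
                    s ^= exp[(log[c] + i * j) % 31]
            out.append(s)
        return out

    r = [0] * 31                          # the all-zero codeword...
    print(all(s == 0 for s in syndromes(r)))   # True
    r[5] = exp[3]                         # ...hit by a single symbol error
    print(any(s != 0 for s in syndromes(r)))   # True: decoding proceeds
    ```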

  2. Environmentally stable seed source for high power ultrafast laser

    NASA Astrophysics Data System (ADS)

    Samartsev, Igor; Bordenyuk, Andrey; Gapontsev, Valentin

    2017-02-01

    We present an environmentally stable Yb ultrafast ring oscillator utilizing a new method of passive mode-locking. The laser uses an all-fiber architecture, which makes it insensitive to environmental factors such as temperature, humidity, vibrations, and shocks. The new method of mode-locking utilizes crossed bandpass transmittance filters in a ring architecture to discriminate against CW lasing. A broadband pulse evolves from cavity noise under amplification, with passage through each filter causing strong spectral broadening. The laser is self-starting. It generates transform-limited, spectrally flat pulses of 1 - 50 nm width at a 6 - 15 MHz repetition rate and a pulse energy of 0.2 - 15 nJ at 1010 - 1080 nm CWL.

  3. Combined treatment with a transforming growth factor beta inhibitor (1D11) and bortezomib improves bone architecture in a mouse model of myeloma-induced bone disease

    PubMed Central

    Nyman, Jeffry S.; Merkel, Alyssa R.; Uppuganti, Sasidhar; Nayak, Bijaya; Rowland, Barbara; Makowski, Alexander J.; Oyajobi, Babatunde O.; Sterling, Julie A.

    2016-01-01

    Multiple myeloma (MM) patients frequently develop tumor-induced bone destruction, yet no therapy completely eliminates the tumor or fully reverses bone loss. Transforming growth factor-β (TGF-β) activity often contributes to tumor-induced bone disease, and pre-clinical studies have indicated that TGF-β inhibition improves bone volume and reduces tumor growth in bone metastatic breast cancer. We hypothesized that inhibition of TGF-β signaling also reduces tumor growth, increases bone volume, and improves vertebral body strength in MM-bearing mice. We treated myeloma tumor-bearing (immunocompetent KaLwRij and immunocompromised Rag2−/−) mice with a TGF-β inhibitory (1D11) or control (13C4) antibody, with or without the anti-myeloma drug bortezomib, for 4 weeks after inoculation of murine 5TGM1 MM cells. TGF-β inhibition increased trabecular bone volume, improved trabecular architecture, increased tissue mineral density of the trabeculae as assessed by ex vivo micro-computed tomography, and was associated with significantly greater vertebral body strength in biomechanical compression tests. Serum monoclonal paraprotein titers and spleen weights showed that 1D11 monotherapy did not reduce overall MM tumor burden. Combination therapy with 1D11 and bortezomib increased vertebral body strength, reduced tumor burden, and reduced cortical lesions in the femoral metaphysis, although it did not significantly improve cortical bone strength in three-point bending tests of the mid-shaft femur. Overall, our data provide rationale for evaluating inhibition of TGF-β signaling in combination with existing anti-myeloma agents as a potential therapeutic strategy to improve outcomes in patients with myeloma bone disease. PMID:27423464

  4. Combined treatment with a transforming growth factor beta inhibitor (1D11) and bortezomib improves bone architecture in a mouse model of myeloma-induced bone disease.

    PubMed

    Nyman, Jeffry S; Merkel, Alyssa R; Uppuganti, Sasidhar; Nayak, Bijaya; Rowland, Barbara; Makowski, Alexander J; Oyajobi, Babatunde O; Sterling, Julie A

    2016-10-01

    Multiple myeloma (MM) patients frequently develop tumor-induced bone destruction, yet no therapy completely eliminates the tumor or fully reverses bone loss. Transforming growth factor-β (TGF-β) activity often contributes to tumor-induced bone disease, and pre-clinical studies have indicated that TGF-β inhibition improves bone volume and reduces tumor growth in bone metastatic breast cancer. We hypothesized that inhibition of TGF-β signaling also reduces tumor growth, increases bone volume, and improves vertebral body strength in MM-bearing mice. We treated myeloma tumor-bearing (immunocompetent KaLwRij and immunocompromised Rag2-/-) mice with a TGF-β inhibitory (1D11) or control (13C4) antibody, with or without the anti-myeloma drug bortezomib, for 4 weeks after inoculation of murine 5TGM1 MM cells. TGF-β inhibition increased trabecular bone volume, improved trabecular architecture, increased tissue mineral density of the trabeculae as assessed by ex vivo micro-computed tomography, and was associated with significantly greater vertebral body strength in biomechanical compression tests. Serum monoclonal paraprotein titers and spleen weights showed that 1D11 monotherapy did not reduce overall MM tumor burden. Combination therapy with 1D11 and bortezomib increased vertebral body strength, reduced tumor burden, and reduced cortical lesions in the femoral metaphysis, although it did not significantly improve cortical bone strength in three-point bending tests of the mid-shaft femur. Overall, our data provide rationale for evaluating inhibition of TGF-β signaling in combination with existing anti-myeloma agents as a potential therapeutic strategy to improve outcomes in patients with myeloma bone disease. Published by Elsevier Inc.

  5. The Transformation of Enterovirus Replication Structures: a Three-Dimensional Study of Single- and Double-Membrane Compartments

    PubMed Central

    Limpens, Ronald W. A. L.; van der Schaar, Hilde M.; Kumar, Darshan; Koster, Abraham J.; Snijder, Eric J.; van Kuppeveld, Frank J. M.; Bárcena, Montserrat

    2011-01-01

    All positive-strand RNA viruses induce membrane structures in their host cells which are thought to serve as suitable microenvironments for viral RNA synthesis. The structures induced by enteroviruses, which are members of the family Picornaviridae, have so far been described as either single- or double-membrane vesicles (DMVs). Aside from the number of delimiting membranes, their exact architecture has also remained elusive due to the limitations of conventional electron microscopy. In this study, we used electron tomography (ET) to solve the three-dimensional (3-D) ultrastructure of these compartments. At different time points postinfection, coxsackievirus B3-infected cells were high-pressure frozen and freeze-substituted for ET analysis. The tomograms showed that during the exponential phase of viral RNA synthesis, closed smooth single-membrane tubules constituted the predominant virus-induced membrane structure, with a minor proportion of DMVs that were either closed or connected to the cytosol in a vase-like configuration. As infection progressed, the DMV number steadily increased, while the tubular single-membrane structures gradually disappeared. Late in infection, complex multilamellar structures, previously unreported, became apparent in the cytoplasm. Serial tomography disclosed that their basic unit is a DMV, which is enwrapped by one or multiple cisternae. ET also revealed striking intermediate structures that strongly support the conversion of single-membrane tubules into double-membrane and multilamellar structures by a process of membrane apposition, enwrapping, and fusion. Collectively, our work unravels the sequential appearance of distinct enterovirus-induced replication structures, elucidates their detailed 3-D architecture, and provides the basis for a model of their transformation during the course of infection. PMID:21972238

  6. A task-based support architecture for developing point-of-care clinical decision support systems for the emergency department.

    PubMed

    Wilk, S; Michalowski, W; O'Sullivan, D; Farion, K; Sayyad-Shirabad, J; Kuziemsky, C; Kukawka, B

    2013-01-01

    The purpose of this study was to create a task-based support architecture for developing clinical decision support systems (CDSSs) that assist physicians in making decisions at the point of care in the emergency department (ED). The backbone of the proposed architecture was established by a task-based emergency workflow model for a patient-physician encounter. The architecture was designed according to an agent-oriented paradigm. Specifically, we used the O-MaSE (Organization-based Multi-agent System Engineering) method, which allows for iterative translation of functional requirements into architectural components (e.g., agents). The agent-oriented paradigm was extended with ontology-driven design to implement ontological models representing the knowledge required by specific agents to operate. The task-based architecture allows for the creation of a CDSS that is aligned with the task-based emergency workflow model. It facilitates decoupling of executable components (agents) from embedded domain knowledge (ontological models), thus supporting their interoperability, sharing, and reuse. The generic architecture was implemented as a pilot system, MET3-AE, a CDSS to help with the management of pediatric asthma exacerbation in the ED. The system was evaluated in a hospital ED. The architecture allows for the creation of a CDSS that integrates support for all tasks from the task-based emergency workflow model and interacts with hospital information systems. The proposed architecture also allows for reusing and sharing system components and knowledge across disease-specific CDSSs.

  7. Evaluation of the Conservation of Modern Architectural Heritage through Ankara’s Public Buildings

    NASA Astrophysics Data System (ADS)

    Turgut Gültekin, Nevin

    2017-10-01

    This paper evaluates the approach to the conservation of modern architecture in Turkey through the public buildings of Ankara. Although the conservation of modern architecture as cultural heritage has been accepted, to a limited degree, within the related frameworks, disciplines, and theory, the inconsistencies in preservation legislation have been evaluated critically. The scope of conservation is limited to being old and historical, thereby rendering modern architecture not worth conserving; this is valid for many countries, just as it is for Turkey. Despite various local interpretations of a mode of modern architecture that foresees mono-typing, the connotations of "culture" and the status of 20th-century buildings as "products of the past" are denied, and the expanding and transforming character of immovable cultural heritage is disregarded. As such, modern architecture in Turkey remains inadequately analyzed and documented within the framework of cultural heritage. The conservation of buildings dating back to the 20th century remains at the discretion of the relevant Ministry. As the criteria for this preference are not determined, some public buildings that exemplify modern architecture are rapidly lost even though they are of the same style and period as other buildings designated for conservation. The threat of demolition or destruction due to functional and physical aging makes the preservation of modern architectural works within the framework of cultural heritage, as well as the updating of the legal context according to new parameters, urgent and necessary. Sustaining public buildings, which are not only products of modern architecture but also sources of the history of the city and of architecture, and therefore of the history of the Republic in Turkey and its modernization process, gains even more significance through its impact on the urban identity of the capital, Ankara. To this end, this paper focuses on the city of Ankara as a case study on the present status of sustaining modern architectural heritage.

  8. Information resources assessment of a healthcare integrated delivery system.

    PubMed Central

    Gadd, C. S.; Friedman, C. P.; Douglas, G.; Miller, D. J.

    1999-01-01

    While clinical healthcare systems may have lagged behind computer applications in other fields in the shift from mainframes to client-server architectures, the rapid deployment of newer applications is closing that gap. Organizations considering the transition to client-server must identify and position themselves to provide the resources necessary to implement and support the infrastructure requirements of client-server architectures and to manage the accelerated complexity at the desktop, including hardware and software deployment, training, and maintenance needs. This paper describes an information resources assessment of the recently aligned Pennsylvania regional Veterans Administration Stars and Stripes Health Network (VISN4), in anticipation of the shift from a predominantly mainframe to a client-server information systems architecture in its well-established VistA clinical information system. The multimethod assessment study is described here to demonstrate this approach and its value to regional healthcare networks undergoing organizational integration and/or significant information technology transformations. PMID:10566414

  9. A hybrid optical switch architecture to integrate IP into optical networks to provide flexible and intelligent bandwidth on demand for cloud computing

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Hall, Trevor J.

    2013-12-01

    The Internet is entering an era of cloud computing to provide more cost effective, eco-friendly and reliable services to consumer and business users. As a consequence, the nature of the Internet traffic has been fundamentally transformed from a pure packet-based pattern to today's predominantly flow-based pattern. Cloud computing has also brought about an unprecedented growth in the Internet traffic. In this paper, a hybrid optical switch architecture is presented to deal with the flow-based Internet traffic, aiming to offer flexible and intelligent bandwidth on demand to improve fiber capacity utilization. The hybrid optical switch is capable of integrating IP into optical networks for cloud-based traffic with predictable performance, for which the delay performance of the electronic module in the hybrid optical switch architecture is evaluated through simulation.

  10. Comparing root architectural models

    NASA Astrophysics Data System (ADS)

    Schnepf, Andrea; Javaux, Mathieu; Vanderborght, Jan

    2017-04-01

    Plant roots play an important role in several soil processes (Gregory 2006). Root architecture development determines the sites in soil where roots provide input of carbon and energy and take up water and solutes. However, root architecture is difficult to determine experimentally when grown in opaque soil. Thus, root architectural models have been widely used and have been further developed into functional-structural models that are able to simulate the fate of water and solutes in the soil-root system (Dunbabin et al. 2013). Still, a systematic comparison of the different root architectural models is missing. In this work, we focus on discrete root architecture models where roots are described by connected line segments. These models differ (a) in their model concepts, such as describing the distance between branches based on a prescribed distance (inter-nodal distance) or on a prescribed time interval, and (b) in the implementation of the same concept, such as the time step size, the spatial discretization along the root axes, or the way stochasticity of parameters such as root growth direction, growth rate, branch spacing, and branching angles is treated. Based on the example of two such different root models, the root growth module of R-SWMS and RootBox, we show the impact of these differences on simulated root architecture and on aggregated information computed from these detailed simulation results, taking into account the stochastic nature of those models. References: Dunbabin, V.M., Postma, J.A., Schnepf, A., Pagès, L., Javaux, M., Wu, L., Leitner, D., Chen, Y.L., Rengel, Z., Diggle, A.J. Modelling root-soil interactions using three-dimensional models of root growth, architecture and function (2013) Plant and Soil, 372 (1-2), pp. 93-124. Gregory (2006) Roots, rhizosphere and soil: the route to a better understanding of soil science? European Journal of Soil Science 57: 2-12.
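
    As an illustration of the model-concept families compared above, a minimal stochastic root-growth sketch (one axis, depth only; all parameter values invented) using a distance-based branching rule:

```python
import random

def grow_root(n_steps, dt=1.0, rate_mu=1.0, rate_sd=0.2, branch_spacing=3.0):
    """Grow one root axis as connected line segments (1-D depth only).

    Branching here uses a prescribed inter-nodal distance; a time-interval
    rule would instead test elapsed time since the last branch.
    """
    depth, since_branch = 0.0, 0.0
    segments, branches = [], []
    for _ in range(n_steps):
        # stochastic elongation rate, truncated at zero
        elongation = max(0.0, random.gauss(rate_mu, rate_sd)) * dt
        segments.append((depth, depth + elongation))
        depth += elongation
        since_branch += elongation
        if since_branch >= branch_spacing:  # distance-based branching rule
            branches.append(depth)
            since_branch = 0.0
    return segments, branches

segs, brs = grow_root(20)
print(len(segs), "segments, branch points at", [round(b, 2) for b in brs])
```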

  11. Methodology for object-oriented real-time systems analysis and design: Software engineering

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1991-01-01

    Successful application of software engineering methodologies requires an integrated analysis and design life-cycle in which the various phases flow smoothly ('seamlessly') from analysis through design to implementation. Furthermore, different analysis methodologies often lead to different structurings of the system, so that the transition from analysis to design may be awkward depending on the design methodology to be used. This is especially important when object-oriented programming is to be used for implementation but the original specification, and perhaps the high-level design, are non-object-oriented. Two approaches to real-time systems analysis which can lead to an object-oriented design are contrasted: (1) modeling the system using structured analysis with real-time extensions, which emphasizes data and control flows, followed by the abstraction of objects where the operations or methods of the objects correspond to processes in the data flow diagrams, and then design in terms of these objects; and (2) modeling the system from the beginning as a set of naturally occurring concurrent entities (objects), each having its own time-behavior defined by a set of states and state-transition rules, and seamlessly transforming the analysis models into high-level design models. A new concept of a 'real-time systems-analysis object' is introduced and becomes the basic building block of a series of seamlessly connected models which progress from the object-oriented real-time systems analysis logical models through the physical architectural models and the high-level design stages. The methodology is appropriate to the overall specification including hardware and software modules. In software modules, the systems analysis objects are transformed into software objects.
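
    A hedged sketch of approach (2): an analysis object whose time-behavior is a set of states plus state-transition rules. The valve and its events are invented for illustration, not taken from the report.

```python
# A "real-time systems-analysis object": state plus transition rules.
class AnalysisObject:
    def __init__(self, name, initial, transitions):
        self.name, self.state = name, initial
        self.transitions = transitions  # {(state, event): next_state}

    def handle(self, event):
        key = (self.state, event)
        if key in self.transitions:  # ignore events not valid in this state
            self.state = self.transitions[key]
        return self.state

valve = AnalysisObject("valve", "closed",
                       {("closed", "open_cmd"): "opening",
                        ("opening", "limit_switch"): "open",
                        ("open", "close_cmd"): "closed"})
print(valve.handle("open_cmd"))      # -> "opening"
print(valve.handle("limit_switch"))  # -> "open"
```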

  12. An Evolutionarily Structured Universe of Protein Architecture

    PubMed Central

    Caetano-Anollés, Gustavo; Caetano-Anollés, Derek

    2003-01-01

    Protein structural diversity encompasses a finite set of architectural designs. Embedded in these topologies are evolutionary histories that we here uncover using cladistic principles and measurements of protein-fold usage and sharing. The reconstructed phylogenies are inherently rooted and depict histories of protein and proteome diversification. Proteome phylogenies showed two monophyletic sister-groups delimiting Bacteria and Archaea, and a topology rooted in Eucarya. This suggests three dramatic evolutionary events and a common ancestor with a eukaryotic-like, gene-rich, and relatively modern organization. Conversely, a general phylogeny of protein architectures showed that structural classes of globular proteins appeared early in evolution and in defined order, the α/β class being the first. Although most ancestral folds shared a common architecture of barrels or interleaved β-sheets and α-helices, many were clearly derived, such as polyhedral folds in the all-α class and β-sandwiches, β-propellers, and β-prisms in all-β proteins. We also describe transformation pathways of architectures that are prevalently used in nature. For example, β-barrels with increased curl and stagger were favored evolutionary outcomes in the all-β class. Interestingly, we found cases where structural change followed the α-to-β tendency uncovered in the tree of architectures. Lastly, we traced the total number of enzymatic functions associated with folds in the trees and show that there is a general link between structure and enzymatic function. PMID:12840035

  13. The parallel algorithm for the 2D discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Barina, David; Najman, Pavel; Kleparnik, Petr; Kula, Michal; Zemcik, Pavel

    2018-04-01

    The discrete wavelet transform can be found at the heart of many image-processing algorithms. Until now, the transform on general-purpose processors (CPUs) was mostly computed using a separable lifting scheme. As the lifting scheme consists of a small number of operations, it is preferred for processing on single-core CPUs. However, for parallel processing on multi-core processors, this scheme is inappropriate due to its large number of steps. On such architectures, the number of steps corresponds to the number of points at which data must be exchanged, and these points often form a performance bottleneck. Our approach appropriately rearranges the calculations inside the transform and thereby reduces the number of steps. In other words, we propose a new scheme that is friendly to parallel environments. In evaluations on multi-core CPUs, our scheme consistently outperforms the original lifting scheme. The evaluation was performed on 61-core Intel Xeon Phi and 8-core Intel Xeon processors.
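
    The baseline separable lifting scheme the paper improves on can be illustrated with the CDF 5/3 wavelet. This is a hedged sketch of the conventional scheme (periodic boundaries, even-length signals), not the authors' rearranged parallel variant:

```python
import numpy as np

def cdf53_lift(x):
    """One level of the CDF 5/3 lifting scheme on a 1-D signal
    (periodic boundary, even length): a predict step then an update step."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - 0.5 * (even + np.roll(even, -1))  # predict: detail coefficients
    s = even + 0.25 * (np.roll(d, 1) + d)       # update: approximation
    return s, d

def dwt2_separable(img):
    """Separable 2-D transform: lift all rows, then all columns."""
    rows = np.array([np.concatenate(cdf53_lift(r)) for r in img])
    cols = np.array([np.concatenate(cdf53_lift(c)) for c in rows.T])
    return cols.T

out = dwt2_separable(np.arange(64.0).reshape(8, 8))
print(out.shape)  # (8, 8): approximation and detail subbands side by side
```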

  14. Force-reflective teleoperated system with shared and compliant control capabilities

    NASA Technical Reports Server (NTRS)

    Szakaly, Z.; Kim, W. S.; Bejczy, A. K.

    1989-01-01

    The force-reflecting teleoperator breadboard is described. It is the first system among available research and development systems with the following combined capabilities: (1) The master input device is not a replica of the slave arm. It is a general-purpose device which can be applied to the control of different robot arms through proper mathematical transformations. (2) Force reflection generated in the master hand controller is referenced to the forces and moments measured by a six-DOF force-moment sensor at the base of the robot hand. (3) The system permits a smooth spectrum of operations between full manual, shared manual and automatic, and full automatic (called traded) control. (4) The system can be operated with variable compliance or stiffness in force-reflecting control. Some of the key points of the system are the data handling and computing architecture, the communication method, and the handling of mathematical transformations. The architecture is a fully synchronized pipeline. The communication method achieves optimal use of a parallel communication channel between the local and remote computing nodes. A time delay box is also implemented in this communication channel, permitting experiments with up to 8 s of time delay. The mathematical transformations are computed in less than 1 ms, so that control at each node can operate at a 1 kHz servo rate without interpolation. This results in an overall force-reflecting loop rate of 200 Hz.

  15. An Agent-Based Dynamic Model for Analysis of Distributed Space Exploration Architectures

    NASA Astrophysics Data System (ADS)

    Sindiy, Oleg V.; DeLaurentis, Daniel A.; Stein, William B.

    2009-07-01

    A range of complex challenges, but also potentially unique rewards, underlie the development of exploration architectures that use a distributed, dynamic network of resources across the solar system. From a methodological perspective, the prime challenge is to systematically model the evolution (and quantify comparative performance) of such architectures, under uncertainty, to effectively direct further study of specialized trajectories, spacecraft technologies, concept of operations, and resource allocation. A process model for System-of-Systems Engineering is used to define time-varying performance measures for comparative architecture analysis and identification of distinguishing patterns among interoperating systems. Agent-based modeling serves as the means to create a discrete-time simulation that generates dynamics for the study of architecture evolution. A Solar System Mobility Network proof-of-concept problem is introduced representing a set of longer-term, distributed exploration architectures. Options within this set revolve around deployment of human and robotic exploration and infrastructure assets, their organization, interoperability, and evolution, i.e., a system-of-systems. Agent-based simulations quantify relative payoffs for a fully distributed architecture (which can be significant over the long term), the latency period before they are manifest, and the up-front investment (which can be substantial compared to alternatives). Verification and sensitivity results provide further insight on development paths and indicate that the framework and simulation modeling approach may be useful in architectural design of other space exploration mass, energy, and information exchange settings.

  16. Science and Technology Strategic Plan 2012 (Office of Naval Research C4ISR Department)

    DTIC Science & Technology

    2012-01-01

    Information Dominance domain. Information-based warfare will be fundamentally transformed from today’s paradigm. Today, data resides in mission specific networks, sensors and communication architectures. Information requirements are tied to a mission or mission phase where information for prosecution is generated and held for each mission in separate

  17. Office Information Systems and the 21st Century Business Organization: Challenges for Transition and Transformation.

    ERIC Educational Resources Information Center

    Parker, Marilyn M.

    1993-01-01

    Discusses what Office Information Systems and other Information Technology organizations, in concert with the business organizations they support, must do to exploit the opportunities and support the transition to the next generation enterprise: its business processes, its organizations and architectures, and its strategies. (Author/JOW)

  18. Service-Oriented Architecture and Curriculum Transformation at Manchester Metropolitan University

    ERIC Educational Resources Information Center

    Stubbs, Mark; Range, Phil

    2011-01-01

    Purpose: The need to establish more flexible and adaptable university curricula has been recognised as a strategic priority in recent years and has been supported by a number of initiatives including the Curriculum Design and Delivery programme funded by the Joint Information System Committee (JISC) in the UK. The challenges of addressing…

  19. Global Information Infrastructure: The Birth, Vision, and Architecture.

    ERIC Educational Resources Information Center

    Targowski, Andrew S.

    A new world has arrived in which computer and communications technologies will transform the national and global economies into information-driven economies. This is triggering the Information Revolution, which will have political and societal impacts every bit as profound as those of the Industrial Revolution. The 21st century is viewed as one…

  20. Enterprise Architecture as a Tool of Navy METOC Transformation

    DTIC Science & Technology

    2006-09-01

    [Extraction residue from an architecture diagram; recoverable labels: METOC Enterprise Service Integration Layer (MESIL), METOC Enterprise Service Bus (ESB), local ESB implementation infrastructure, Production Center and METOC Edge nodes, NCOW tenets, SOA tenets, top-down analysis.]

  1. Developing a Systematic Architecture Approach for Designing an Enhanced Electronic Medical Record (EEMR) System

    ERIC Educational Resources Information Center

    Aldukheil, Maher A.

    2013-01-01

    The Healthcare industry is characterized by its complexity in delivering care to the patients. Accordingly, healthcare organizations adopt and implement Information Technology (IT) solutions to manage complexity, improve quality of care, and transform to a fully integrated and digitized environment. Electronic Medical Records (EMR), which is…

  2. Affiliated Practices and Aesthetic Interventions: Remaking Public Spaces in Cincinnati and Los Angeles

    ERIC Educational Resources Information Center

    Dutton, Thomas A.; Mann, Lian Hurst

    2003-01-01

    This article focuses on how architecture, and aesthetic interventions generally, might be transformed from hegemonic to counterhegemonic in order to realign political forces in the production of culture and social life. In a prior REMARX commentary for "Rethinking Marxism", the authors critiqued the concept of "the political"…

  3. Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)

    1993-01-01

    Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a Hamming neural network, an edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, an automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, and the effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, an optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, the wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on the WT, wavelet analysis of global warming, use of the WT for signal detection, perfect-reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, and number-theoretic coding for iconic systems.

  4. The design plan of a VLSI single chip (255, 223) Reed-Solomon decoder

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Shao, H. M.; Deutsch, L. J.

    1987-01-01

    The very large-scale integration (VLSI) architecture of a single-chip (255, 223) Reed-Solomon decoder for decoding both errors and erasures is described. A decoding failure detection capability is also included in this system so that the decoder will recognize a failure to decode instead of introducing additional errors. This could happen whenever the received word contains too many errors and erasures for the code to correct. The number of transistors needed to implement this decoder is estimated at about 75,000 if the delay for the received message is not included. This is in contrast to the older transform decoding algorithm, which needs about 100,000 transistors. However, the transform decoder is simpler in architecture than the time decoder. It is therefore possible to implement a single-chip (255, 223) Reed-Solomon decoder with today's VLSI technology. An implementation strategy for the decoder system is presented. This represents the first step in a plan to take advantage of advanced coding techniques to realize a 2.0 dB coding gain for future space missions.
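
    Reed-Solomon decoders are built on arithmetic in GF(2^8). As a hedged illustration, here is a bit-serial field multiplication in Python; the reduction polynomial 0x11d is a common textbook choice, and the polynomial used by a particular (255, 223) code for space missions may differ.

```python
def gf256_mul(a, b, poly=0x11d):
    """Multiply two elements of GF(2^8), reducing by the given polynomial
    (0x11d encodes x^8 + x^4 + x^3 + x^2 + 1)."""
    r = 0
    while b:
        if b & 1:          # add (XOR) shifted copies of a
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:      # reduce whenever degree reaches 8
            a ^= poly
    return r

print(hex(gf256_mul(0x02, 0x80)))  # -> 0x1d, i.e. x * x^7 = x^8 mod poly
```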

  5. Based on a multi-agent system for multi-scale simulation and application of household's LUCC: a case study for Mengcha village, Mizhi county, Shaanxi province.

    PubMed

    Chen, Hai; Liang, Xiaoying; Li, Rui

    2013-01-01

    Multi-Agent Systems (MAS) offer a conceptual approach to include multi-actor decision making in models of land use change. Through MAS-based simulation, this paper shows the application of MAS to micro-scale LUCC and reveals the transformation mechanism between different scales. The paper starts with a description of the context of MAS research. It then adopts the Nested Spatial Choice (NSC) method to construct a multi-scale LUCC decision-making model, and a case study for Mengcha village, Mizhi County, Shaanxi Province is reported. Finally, the potentials and drawbacks of the approach are discussed. From our design and implementation of the MAS in a multi-scale model, a number of observations and conclusions can be drawn on the implementation and future research directions. (1) The use of the LUCC decision-making and multi-scale transformation framework provides, in our view, a more realistic modeling of the multi-scale decision-making process. (2) Using a continuous function, rather than a discrete function, to construct household decision-making reflects its effects more realistically. (3) In this paper, attempts have been made to analyze household interaction quantitatively, which provides the premise and foundation for researching communication and learning among households. (4) The scale-transformation architecture constructed in this paper helps to accumulate theory and experience for research on the interaction between micro land use decision-making and the macro land use landscape pattern. Our future research will focus on: (1) how to make rational use of the risk-aversion principle and incorporate the rule on rotation between household parcels into the model; (2) exploring methods for researching household decision-making over a long period, allowing us to bridge long-term LUCC data and short-term household decision-making; and (3) researching quantitative methods and models, especially scenario-analysis models that can reflect the interaction among different household types.

  6. Automated Design Space Exploration with Aspen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spafford, Kyle L.; Vetter, Jeffrey S.

    Architects and applications scientists often use performance models to explore a multidimensional design space of architectural characteristics, algorithm designs, and application parameters. With traditional performance modeling tools, these explorations forced users to first develop a performance model and then repeatedly evaluate and analyze the model manually. These manual investigations proved laborious and error prone. More importantly, the complexity of this traditional process often forced users to simplify their investigations. To address this challenge of design space exploration, we extend our Aspen (Abstract Scalable Performance Engineering Notation) language with three new language constructs: user-defined resources, parameter ranges, and a collection of costs in the abstract machine model. Then, we use these constructs to enable automated design space exploration via a nonlinear optimization solver. We show how four interesting classes of design space exploration scenarios can be derived from Aspen models and formulated as pure nonlinear programs. The analysis tools are demonstrated using examples based on Aspen models for a three-dimensional Fast Fourier Transform, the CoMD molecular dynamics proxy application, and the DARPA Streaming Sensor Challenge Problem. Our results show that this approach can compose and solve arbitrary performance modeling questions quickly and rigorously when compared to the traditional manual approach.

  7. Automated Design Space Exploration with Aspen

    DOE PAGES

    Spafford, Kyle L.; Vetter, Jeffrey S.

    2015-01-01

    Architects and applications scientists often use performance models to explore a multidimensional design space of architectural characteristics, algorithm designs, and application parameters. With traditional performance modeling tools, these explorations forced users to first develop a performance model and then repeatedly evaluate and analyze the model manually. These manual investigations proved laborious and error prone. More importantly, the complexity of this traditional process often forced users to simplify their investigations. To address this challenge of design space exploration, we extend our Aspen (Abstract Scalable Performance Engineering Notation) language with three new language constructs: user-defined resources, parameter ranges, and a collection of costs in the abstract machine model. Then, we use these constructs to enable automated design space exploration via a nonlinear optimization solver. We show how four interesting classes of design space exploration scenarios can be derived from Aspen models and formulated as pure nonlinear programs. The analysis tools are demonstrated using examples based on Aspen models for a three-dimensional Fast Fourier Transform, the CoMD molecular dynamics proxy application, and the DARPA Streaming Sensor Challenge Problem. Our results show that this approach can compose and solve arbitrary performance modeling questions quickly and rigorously when compared to the traditional manual approach.
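
    A hedged sketch of the general idea of casting design space exploration as a nonlinear program: a toy stand-in cost model (all parameters and constants invented, not derived from an actual Aspen model) minimized over bounded parameter ranges with SciPy:

```python
from scipy.optimize import minimize

# Toy stand-in for a model-derived cost: runtime of a problem of size n
# on p cores with per-core memory bandwidth bw. Every constant is invented.
def runtime(x, n=1e9):
    p, bw = x
    compute = 10 * n / (p * 2e9)   # 10 flops/point at 2 GFLOP/s per core
    memory = 8 * n / (p * bw)      # 8 bytes/point streamed per core
    return compute + memory + 1e-4 * p  # plus a synchronization penalty

# Parameter ranges become solver bounds; the optimizer searches the space.
result = minimize(runtime, x0=[16, 5e9],
                  bounds=[(1, 1024), (1e9, 2e10)])
print(result.x, result.fun)  # best (cores, bandwidth) design point
```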

  8. Electromagnetic Physics Models for Parallel Computing Architectures

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  9. Modelling of internal architecture of kinesin nanomotor as a machine language.

    PubMed

    Khataee, H R; Ibrahim, M Y

    2012-09-01

    Kinesin is a protein-based natural nanomotor that transports molecular cargoes within cells by walking along microtubules. The kinesin nanomotor is considered a bio-nanoagent which is able to sense the cell through its sensors (i.e., its heads and tail), make decisions internally, and perform actions on the cell through its actuator (i.e., its motor domain). The study maps the agent-based architectural model of the internal decision-making process of the kinesin nanomotor to a machine language using an automata algorithm. The applied automata algorithm receives the internal agent-based architectural model of the kinesin nanomotor as a deterministic finite automaton (DFA) model and generates a regular machine language. The generated regular machine language was accepted by the architectural DFA model of the nanomotor and was also in good agreement with its natural behaviour. The internal agent-based architectural model of the kinesin nanomotor indicates the degree of autonomy and intelligence of the nanomotor's interactions with its cell. Thus, our developed regular machine language can model the degree of autonomy and intelligence of kinesin nanomotor interactions with its cell as a language. Modelling the internal architectures of autonomous and intelligent bio-nanosystems as machine languages can lay the foundation for the concept of bio-nanoswarms and the next phases of bio-nanorobotic systems development.
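
    The DFA-to-language mapping can be illustrated with a generic acceptor. The states and symbols below form a toy walking cycle invented for illustration, not the paper's actual kinesin model:

```python
# Minimal deterministic finite automaton acceptor: a string belongs to the
# regular language iff the transitions consume it and end in an accept state.
def dfa_accepts(string, start, accept, delta):
    state = start
    for symbol in string:
        if (state, symbol) not in delta:
            return False          # undefined transition: reject
        state = delta[(state, symbol)]
    return state in accept

# Toy walking cycle: bind ATP (a), hydrolyze (h), step (s), repeat.
delta = {("bound", "a"): "loaded",
         ("loaded", "h"): "stepping",
         ("stepping", "s"): "bound"}
print(dfa_accepts("ahs" * 3, "bound", {"bound"}, delta))  # -> True
print(dfa_accepts("ah", "bound", {"bound"}, delta))       # -> False
```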

  10. Detailed Primitive-Based 3D Modeling of Architectural Elements

    NASA Astrophysics Data System (ADS)

    Remondino, F.; Lo Buglio, D.; Nony, N.; De Luca, L.

    2012-07-01

    The article describes a pipeline, based on image data, for the 3D reconstruction of building façades or architectural elements and the successive modeling using geometric primitives. The approach overcomes some existing problems in modeling architectural elements and delivers efficient-in-size, reality-based, textured 3D models useful for metric applications. For the 3D reconstruction, an open-source pipeline developed within the TAPENADE project is employed. In the successive modeling steps, the user manually selects an area containing an architectural element (capital, column, bas-relief, window tympanum, etc.) and then the procedure fits geometric primitives and computes disparity and displacement maps in order to tie visual and geometric information together in a light but detailed 3D model. Examples are reported and commented upon.
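
    Primitive fitting of the kind used in the modeling step can be illustrated with the simplest case, a least-squares plane fit via SVD. This is an illustrative sketch, not the TAPENADE pipeline's actual fitting code:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point cloud: the centroid plus the
    singular vector with the smallest singular value as the normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Synthetic test: noisy samples of the plane z = 0.2x - 0.1y.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + rng.normal(0, 0.01, 200)
c, n = fit_plane(np.column_stack([xy, z]))
print(n / n[-1])  # ~ [-0.2, 0.1, 1]: recovers the plane coefficients
```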

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yao; Balaprakash, Prasanna; Meng, Jiayuan

    We present Raexplore, a performance modeling framework for architecture exploration. Raexplore enables rapid, automated, and systematic search of architecture design space by combining hardware counter-based performance characterization and analytical performance modeling. We demonstrate Raexplore for two recent manycore processors, the IBM Blue Gene/Q compute chip and Intel Xeon Phi, targeting a set of scientific applications. Our framework is able to capture complex interactions between architectural components including instruction pipeline, cache, and memory, and to achieve a 3-22% error for same-architecture and cross-architecture performance predictions. Furthermore, we apply our framework to assess the two processors, and discover and evaluate a list of architectural scaling options for future processor designs.

  12. Multiple feature extraction by using simultaneous wavelet transforms

    NASA Astrophysics Data System (ADS)

    Mazzaferri, Javier; Ledesma, Silvia; Iemmi, Claudio

    2003-07-01

    We propose here a method to optically perform multiple feature extraction by using wavelet transforms. The method is based on obtaining the optical correlation by means of a Vander Lugt architecture, where the scene and the filter are displayed on spatial light modulators (SLMs). Multiple phase filters containing the information about the features that we are interested in extracting are designed and then displayed on an SLM working in phase mostly mode. We have designed filters to simultaneously detect edges and corners or different characteristic frequencies contained in the input scene. Simulated and experimental results are shown.

  13. Can big data transform electronic health records into learning health systems?

    PubMed

    Harper, Ellen

    2014-01-01

    In the United States and globally, healthcare delivery is in the midst of an acute transformation with the adoption and use of health information technology (health IT), generating increasing amounts of patient care data available in computable form. Secure and trusted use of these data beyond their original purpose can change the way we think about business, health, education, and innovation in the years to come. "Big Data" is data whose scale, diversity, and complexity require new architectures, techniques, algorithms, and analytics to manage it and extract value and hidden knowledge from it.

  14. Little by Little Does the Trick: Design and Construction of a Discrete Event Agent-Based Simulation Framework

    DTIC Science & Technology

    2007-12-01

    ...and a Behavioral model. Finally, we build a small agent-based model using the component architecture to demonstrate the library's functionality. ...prototypes an architectural design which is generalizable, reusable, and extensible. We have created an initial set of model elements that demonstrate...
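
    A hedged illustration of the discrete-event core such frameworks typically build on (not the library in the report): a time-ordered event queue plus self-rescheduling actions.

```python
import heapq

class Simulator:
    """Bare-bones discrete-event engine: pop the earliest event, advance
    the clock to its timestamp, run its action."""
    def __init__(self):
        self.now, self._queue, self._n = 0.0, [], 0

    def schedule(self, delay, action):
        self._n += 1  # tie-breaker keeps heap comparisons well defined
        heapq.heappush(self._queue, (self.now + delay, self._n, action))

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, action = heapq.heappop(self._queue)
            action(self)

def ping(sim):  # a trivial "agent behaviour" that reschedules itself
    print(f"ping at t={sim.now:.1f}")
    sim.schedule(2.0, ping)

sim = Simulator()
sim.schedule(0.0, ping)
sim.run(until=6.0)  # prints pings at t=0, 2, 4, 6
```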

  15. Plant architecture, growth and radiative transfer for terrestrial and space environments

    NASA Technical Reports Server (NTRS)

    Norman, John M.; Goel, Narendra S.

    1993-01-01

    The overall objective of this research was to develop a hardware implemented model that would incorporate realistic and dynamic descriptions of canopy architecture in physiologically based models of plant growth and functioning, with an emphasis on radiative transfer while accommodating other environmental constraints. The general approach has five parts: a realistic mathematical treatment of canopy architecture, a methodology for combining this general canopy architectural description with a general radiative transfer model, the inclusion of physiological and environmental aspects of plant growth, inclusion of plant phenology, and integration.

  16. Roofline model toolkit: A practical tool for architectural and program analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lo, Yu Jung; Williams, Samuel; Van Straalen, Brian

    We present preliminary results of the Roofline Toolkit for multicore, many-core, and accelerated architectures. This paper focuses on the processor architecture characterization engine, a collection of portable instrumented micro benchmarks implemented with Message Passing Interface (MPI), and OpenMP used to express thread-level parallelism. These benchmarks are specialized to quantify the behavior of different architectural features. Compared to previous work on performance characterization, these microbenchmarks focus on capturing the performance of each level of the memory hierarchy, along with thread-level parallelism, instruction-level parallelism and explicit SIMD parallelism, measured in the context of the compilers and run-time environments. We also measure sustained PCIe throughput with four GPU memory managed mechanisms. By combining results from the architecture characterization with the Roofline model based solely on architectural specifications, this work offers insights for performance prediction of current and future architectures and their software systems. To that end, we instrument three applications and plot their resultant performance on the corresponding Roofline model when run on a Blue Gene/Q architecture.
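
    The Roofline model itself reduces to one comparison: attainable performance is the minimum of peak compute and the product of arithmetic intensity and peak memory bandwidth. A sketch with illustrative numbers (the machine figures below are invented, not measurements from the toolkit):

```python
def roofline(peak_gflops, peak_gbs, arith_intensity):
    """Attainable GFLOP/s under the Roofline model:
    min(peak compute, arithmetic intensity x peak memory bandwidth)."""
    return min(peak_gflops, arith_intensity * peak_gbs)

# A machine with 200 GFLOP/s peak and 50 GB/s bandwidth; a streaming
# kernel at 0.25 flop/byte sits on the bandwidth "roof".
print(roofline(200.0, 50.0, 0.25))  # -> 12.5 GFLOP/s, memory-bound
print(roofline(200.0, 50.0, 8.0))   # -> 200.0 GFLOP/s, compute-bound
```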

  17. A Framework for Architectural Heritage HBIM Semantization and Development

    NASA Astrophysics Data System (ADS)

    Brusaporci, S.; Maiezza, P.; Tata, A.

    2018-05-01

    Despite the recognized advantages of the use of BIM in the fields of architecture and engineering, the extension of this procedure to architectural heritage is neither immediate nor straightforward. The uniqueness and irregularity of historical architecture, on the one hand, and the great quantity of information necessary for the knowledge of architectural heritage, on the other, require appropriate reflection. The aim of this paper is to define a general framework for the use of BIM procedures for architectural heritage. The proposed methodology consists of three different Levels of Development (LoD), depending on the characteristics of the building and the objectives of the study: a simplified model with low geometric accuracy and a minimum quantity of information (LoD 200); a model nearer to reality but still with a high deviation between the virtual and the real model (LoD 300); and a detailed BIM model that reproduces as much as possible the geometric irregularities of the building and is enriched with the maximum quantity of information available (LoD 400).

  18. Quality evaluation of health information system's architectures developed using the HIS-DF methodology.

    PubMed

    López, Diego M; Blobel, Bernd; Gonzalez, Carolina

    2010-01-01

    Requirements analysis, design, implementation, evaluation, use, and maintenance of semantically interoperable Health Information Systems (HIS) have to be based on eHealth standards. HIS-DF is a comprehensive approach for HIS architectural development based on standard information models and vocabulary. The empirical validity of HIS-DF has not been demonstrated so far. Through an empirical experiment, this paper demonstrates that, using HIS-DF and HL7 information models, the semantic quality of a HIS architecture can be improved compared to architectures developed using the traditional RUP process. Semantic quality of the architecture has been measured in terms of the model's completeness and validity metrics. The experimental results demonstrated an increase in completeness of 14.38% and an increase in validity of 16.63% when using HIS-DF and the HL7 information models in a sample HIS development project. Quality assurance of the system architecture in earlier stages of HIS development suggests an increased quality of the final HIS, which in turn implies an indirect impact on patient care.

  19. An e-consent-based shared EHR system architecture for integrated healthcare networks.

    PubMed

    Bergmann, Joachim; Bott, Oliver J; Pretschner, Dietrich P; Haux, Reinhold

    2007-01-01

    Virtual integration of distributed patient data promises advantages over a consolidated health record, but raises questions mainly about practicability and authorization concepts. Our work aims on specification and development of a virtual shared health record architecture using a patient-centred integration and authorization model. A literature survey summarizes considerations of current architectural approaches. Complemented by a methodical analysis in two regional settings, a formal architecture model was specified and implemented. Results presented in this paper are a survey of architectural approaches for shared health records and an architecture model for a virtual shared EHR, which combines a patient-centred integration policy with provider-oriented document management. An electronic consent system assures, that access to the shared record remains under control of the patient. A corresponding system prototype has been developed and is currently being introduced and evaluated in a regional setting. The proposed architecture is capable of partly replacing message-based communications. Operating highly available provider repositories for the virtual shared EHR requires advanced technology and probably means additional costs for care providers. Acceptance of the proposed architecture depends on transparently embedding document validation and digital signature into the work processes. The paradigm shift from paper-based messaging to a "pull model" needs further evaluation.

  20. MACCIS 2.0 - An Architecture Description Framework for Technical Infostructures and Their Enterprise Environment

    DTIC Science & Technology

    2004-06-01

    [Extraction residue from an architecture-description figure: viewpoints, views, and a business security concern.] ...security concern, when applied to the different viewpoints, addresses both stakeholders, and is described as a business security model or component...

  1. A Model for Communications Satellite System Architecture Assessment

    DTIC Science & Technology

    2011-09-01

    This is shown in Equation 4. The total system cost includes all development, acquisition, fielding, operations, maintenance and upgrades, and system...protection. A mathematical model was implemented to enable the analysis of communications satellite system architectures based on multiple system attributes. Utilization of the model in...

  2. Clinical engineering and risk management in healthcare technological process using architecture framework.

    PubMed

    Signori, Marcos R; Garcia, Renato

    2010-01-01

    This paper presents a model that helps clinical engineering deal with risk management in the healthcare technology process. The healthcare technology setting is complex and supported by three basic entities: infrastructure (IS), healthcare technology (HT), and human resources (HR). An enterprise architecture framework, MODAF (Ministry of Defence Architecture Framework), was used to model this process for risk management. A new model was thus created to contribute to risk management in the HT process from the clinical engineering viewpoint. This architecture model can support and improve the decision-making process of clinical engineering for risk management in the healthcare technology process.

  3. An information model for a virtual private optical network (OVPN) using virtual routers (VRs)

    NASA Astrophysics Data System (ADS)

    Vo, Viet Minh Nhat

    2002-05-01

    This paper describes a virtual private optical network architecture (Optical VPN, OVPN) based on virtual routers (VRs). It improves on architectures suggested for virtual private networks by using virtual routers with optical networks. What is new in this architecture are the changes necessary to adapt to the devices and protocols used in optical networks. This paper also presents information models for the OVPN at the architecture level and at the service level. These are extensions to the DEN (directory-enabled network) and CIM (Common Information Model) approaches for OVPNs using VRs. The goal is to propose a common management model using policies.

  4. Modeling the evolution of protein domain architectures using maximum parsimony.

    PubMed

    Fong, Jessica H; Geer, Lewis Y; Panchenko, Anna R; Bryant, Stephen H

    2007-02-09

    Domains are basic evolutionary units of proteins and most proteins have more than one domain. Advances in domain modeling and collection are making it possible to annotate a large fraction of known protein sequences by a linear ordering of their domains, yielding their architecture. Protein domain architectures link evolutionarily related proteins and underscore their shared functions. Here, we attempt to better understand this association by identifying the evolutionary pathways by which extant architectures may have evolved. We propose a model of evolution in which architectures arise through rearrangements of inferred precursor architectures and acquisition of new domains. These pathways are ranked using a parsimony principle, whereby scenarios requiring the fewest number of independent recombination events, namely fission and fusion operations, are assumed to be more likely. Using a data set of domain architectures present in 159 proteomes that represent all three major branches of the tree of life allows us to estimate the history of over 85% of all architectures in the sequence database. We find that the distribution of rearrangement classes is robust with respect to alternative parsimony rules for inferring the presence of precursor architectures in ancestral species. Analyzing the most parsimonious pathways, we find 87% of architectures to gain complexity over time through simple changes, among which fusion events account for 5.6 times as many architectures as fission. Our results may be used to compute domain architecture similarities, for example, based on the number of historical recombination events separating them. Domain architecture "neighbors" identified in this way may lead to new insights about the evolution of protein function.

  5. Modeling the Evolution of Protein Domain Architectures Using Maximum Parsimony

    PubMed Central

    Fong, Jessica H.; Geer, Lewis Y.; Panchenko, Anna R.; Bryant, Stephen H.

    2007-01-01

    Domains are basic evolutionary units of proteins and most proteins have more than one domain. Advances in domain modeling and collection are making it possible to annotate a large fraction of known protein sequences by a linear ordering of their domains, yielding their architecture. Protein domain architectures link evolutionarily related proteins and underscore their shared functions. Here, we attempt to better understand this association by identifying the evolutionary pathways by which extant architectures may have evolved. We propose a model of evolution in which architectures arise through rearrangements of inferred precursor architectures and acquisition of new domains. These pathways are ranked using a parsimony principle, whereby scenarios requiring the fewest number of independent recombination events, namely fission and fusion operations, are assumed to be more likely. Using a data set of domain architectures present in 159 proteomes that represent all three major branches of the tree of life allows us to estimate the history of over 85% of all architectures in the sequence database. We find that the distribution of rearrangement classes is robust with respect to alternative parsimony rules for inferring the presence of precursor architectures in ancestral species. Analyzing the most parsimonious pathways, we find 87% of architectures to gain complexity over time through simple changes, among which fusion events account for 5.6 times as many architectures as fission. Our results may be used to compute domain architecture similarities, for example, based on the number of historical recombination events separating them. Domain architecture “neighbors” identified in this way may lead to new insights about the evolution of protein function. PMID:17166515

  6. Modeling driver behavior in a cognitive architecture.

    PubMed

    Salvucci, Dario D

    2006-01-01

    This paper explores the development of a rigorous computational model of driver behavior in a cognitive architecture--a computational framework with underlying psychological theories that incorporate basic properties and limitations of the human system. Computational modeling has emerged as a powerful tool for studying the complex task of driving, allowing researchers to simulate driver behavior and explore the parameters and constraints of this behavior. An integrated driver model developed in the ACT-R (Adaptive Control of Thought-Rational) cognitive architecture is described that focuses on the component processes of control, monitoring, and decision making in a multilane highway environment. This model accounts for the steering profiles, lateral position profiles, and gaze distributions of human drivers during lane keeping, curve negotiation, and lane changing. The model demonstrates how cognitive architectures facilitate understanding of driver behavior in the context of general human abilities and constraints and how the driving domain benefits cognitive architectures by pushing model development toward more complex, realistic tasks. The model can also serve as a core computational engine for practical applications that predict and recognize driver behavior and distraction.

  7. Model-Based Systems Engineering for Capturing Mission Architecture System Processes with an Application Case Study - Orion Flight Test 1

    NASA Technical Reports Server (NTRS)

    Bonanne, Kevin H.

    2011-01-01

    Model-based Systems Engineering (MBSE) is an emerging methodology that can be leveraged to enhance many system development processes. MBSE allows for the centralization of an architecture description that would otherwise be stored in various locations and formats, thus simplifying communication among the project stakeholders, inducing commonality in representation, and expediting report generation. This paper outlines the MBSE approach taken to capture the processes of two different, but related, architectures by employing the Systems Modeling Language (SysML) as a standard for architecture description and the modeling tool MagicDraw. The overarching goal of this study was to demonstrate the effectiveness of MBSE as a means of capturing and designing a mission systems architecture. The first portion of the project focused on capturing the necessary system engineering activities that occur when designing, developing, and deploying a mission systems architecture for a space mission. The second part applies activities from the first to an application problem - the system engineering of the Orion Flight Test 1 (OFT-1) End-to-End Information System (EEIS). By modeling the activities required to create a space mission architecture and then implementing those activities in an application problem, the utility of MBSE as an approach to systems engineering can be demonstrated.

  8. Miniature, Low-Power, Waveguide Based Infrared Fourier Transform Spectrometer for Spacecraft Remote Sensing

    NASA Technical Reports Server (NTRS)

    Hewagama, TIlak; Aslam, Shahid; Talabac, Stephen; Allen, John E., Jr.; Annen, John N.; Jennings, Donald E.

    2011-01-01

    Fourier transform spectrometers have a venerable heritage as flight instruments. However, obtaining an accurate spectrum exacts a penalty in instrument mass and power requirements. Recent advances in a broad class of non-scanning Fourier transform spectrometer (FTS) devices, generally called spatial heterodyne spectrometers, offer distinct advantages as flight-optimized systems. We are developing a miniaturized system that employs photonic lightwave circuit principles and functions as an FTS operating in the 7-14 micrometer spectral region. The interferogram is constructed from an ensemble of Mach-Zehnder interferometers with path length differences calibrated to mimic the scan mirror sample positions of a classic Michelson-type FTS. One potential long-term application of this technology in low-cost planetary missions is the concept of a self-contained sensor system. We are developing a systems architecture concept for wide-area in situ and remote monitoring of characteristic properties that are of scientific interest. The system will be based on wavelength- and resolution-independent spectroscopic sensors for studying atmospheric and surface chemistry, physics, and mineralogy. The self-contained sensor network is based on our concept of an Addressable Photonics Cube (APC), which has real-time flexibility and broad science applications. It is envisaged that a spatially distributed autonomous sensor web that integrates multiple APCs will be reactive and dynamically driven. The network is designed to respond in an event- or model-driven manner or be reconfigured as needed.
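
    The principle behind recovering a spectrum from the Mach-Zehnder ensemble is the same as for a scanning FTS: Fourier-transform the intensity sampled at known path-length differences. A hedged synthetic example (all numbers invented; a 10 micrometer line falls inside the 7-14 micrometer band):

```python
import numpy as np

n, dx = 256, 0.5e-6                 # samples, path-difference step (m)
x = np.arange(n) * dx               # optical path differences
sigma = 1.0e5                       # test line at 1e5 1/m (10 um wavelength)
igram = 1 + np.cos(2 * np.pi * sigma * x)  # ideal monochromatic interferogram

# Remove the DC level, then transform path difference -> wavenumber.
spectrum = np.abs(np.fft.rfft(igram - igram.mean()))
wavenumbers = np.fft.rfftfreq(n, d=dx)     # cycles per metre
print(wavenumbers[spectrum.argmax()])      # ~1.0e5: the injected line
```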

  9. An adaptive semantic based mediation system for data interoperability among Health Information Systems.

    PubMed

    Khan, Wajahat Ali; Khattak, Asad Masood; Hussain, Maqbool; Amin, Muhammad Bilal; Afzal, Muhammad; Nugent, Christopher; Lee, Sungyoung

    2014-08-01

    Heterogeneity in the management of complex medical data obstructs the attainment of data-level interoperability among Health Information Systems (HIS). This diversity is dependent on the compliance of HISs with different healthcare standards. Its solution demands a mediation system for the accurate interpretation of data in different heterogeneous formats to achieve data interoperability. We propose an adaptive AdapteR Interoperability ENgine mediation system called ARIEN, which arbitrates between HISs compliant with different healthcare standards for accurate and seamless information exchange to achieve data interoperability. ARIEN stores the semantic mapping information between different standards in the Mediation Bridge Ontology (MBO) using ontology matching techniques. These mappings are provided by our System for Parallel Heterogeneity (SPHeRe) matching system and the Personalized-Detailed Clinical Model (P-DCM) approach to guarantee the accuracy of the mappings. The effectiveness of the mappings stored in the MBO is realized by evaluating the accuracy of the transformation process among different standard formats. We evaluated our proposed system on the transformation of medical records between the Clinical Document Architecture (CDA) and Virtual Medical Record (vMR) standards. The transformation process achieved over 90% accuracy in the conversion between the CDA and vMR standards using a pattern-oriented approach from the MBO. The proposed mediation system improves the overall communication process between HISs and provides accurate and seamless medical information exchange to ensure data interoperability and timely healthcare services to patients.
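
    As a hedged, much-simplified illustration of mapping-driven transformation (the field names and mapping table below are invented flat stand-ins; the actual MBO holds ontology-level mappings between CDA and vMR, not flat fields):

```python
# Hypothetical mapping table playing the role of a mediation bridge.
CDA_TO_VMR = {
    "patient/name/given": "demographics.givenName",
    "patient/name/family": "demographics.familyName",
    "observation/code": "observationResult.code",
}

def transform(record, mapping):
    """Rename keys of a flat record according to a mapping table;
    unmapped fields are dropped so they can be flagged for review."""
    return {mapping[src]: value
            for src, value in record.items() if src in mapping}

cda = {"patient/name/given": "Ada", "observation/code": "20150-9"}
print(transform(cda, CDA_TO_VMR))
# -> {'demographics.givenName': 'Ada', 'observationResult.code': '20150-9'}
```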

  10. Towards a Framework for Modeling Space Systems Architectures

    NASA Technical Reports Server (NTRS)

    Shames, Peter; Skipper, Joseph

    2006-01-01

    Topics covered include: 1) Statement of the problem: a) Space system architecture is complex; b) Existing terrestrial approaches must be adapted for space; c) Need a common architecture methodology and information model; d) Need appropriate set of viewpoints. 2) Requirements on a space systems model. 3) Model Based Engineering and Design (MBED) project: a) Evaluated different methods; b) Adapted and utilized RASDS & RM-ODP; c) Identified useful set of viewpoints; d) Did actual model exchanges among selected subset of tools. 4) Lessons learned & future vision.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.

    The report reviews a method for modeling and controlling two serial link manipulators which mutually lift and transport a rigid body object in a three dimensional workspace. A new vector variable is introduced which parameterizes the internal contact force controlled degrees of freedom. A technique for dynamically distributing the payload between the manipulators is suggested which yields a family of solutions for the contact forces and torques the manipulators impart to the object. A set of rigid body kinematic constraints which restricts the values of the joint velocities of both manipulators is derived. A rigid body dynamical model for the closed chain system is first developed in the joint space. The model is obtained by generalizing the previous methods for deriving the model. The joint velocity and acceleration variables in the model are expressed in terms of independent pseudovariables. The pseudospace model is transformed to obtain reduced order equations of motion and a separate set of equations governing the internal components of the contact forces and torques. A theoretic control architecture is suggested which explicitly decouples the two sets of equations comprising the model. The controller enables the designer to develop independent, non-interacting control laws for the position control and internal force control of the system.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.

    The paper reviews a method for modeling and controlling two serial link manipulators which mutually lift and transport a rigid body object in a three dimensional workspace. A new vector variable is introduced which parameterizes the internal contact force controlled degrees of freedom. A technique for dynamically distributing the payload between the manipulators is suggested which yields a family of solutions for the contact forces and torques the manipulators impart to the object. A set of rigid body kinematic constraints which restrict the values of the joint velocities of both manipulators is derived. A rigid body dynamical model for the closed chain system is first developed in the joint space. The model is obtained by generalizing the previous methods for deriving the model. The joint velocity and acceleration variables in the model are expressed in terms of independent pseudovariables. The pseudospace model is transformed to obtain reduced order equations of motion and a separate set of equations governing the internal components of the contact forces and torques. A theoretic control architecture is suggested which explicitly decouples the two sets of equations comprising the model. The controller enables the designer to develop independent, non-interacting control laws for the position control and internal force control of the system.

  13. Software architecture and design of the web services facilitating climate model diagnostic analysis

    NASA Astrophysics Data System (ADS)

    Pan, L.; Lee, S.; Zhang, J.; Tang, B.; Zhai, C.; Jiang, J. H.; Wang, W.; Bao, Q.; Qi, M.; Kubar, T. L.; Teixeira, J.

    2015-12-01

    Climate model diagnostic analysis is a computationally- and data-intensive task because it involves multiple numerical model outputs and satellite observation data that can both be high resolution. We have built an online tool that facilitates this process. The tool is called Climate Model Diagnostic Analyzer (CMDA). It employs web service technology and provides a web-based user interface. The benefits of these choices include: (1) no installation of any software other than a browser, hence platform independence; (2) co-location of computation and big data on the server side, with only small results and plots downloaded to the client side, hence high data efficiency; (3) a multi-threaded implementation to achieve parallel performance on multi-core servers; and (4) cloud deployment, so each user has a dedicated virtual machine. In this presentation, we will focus on the computer science aspects of this tool, namely the architectural design, the infrastructure of the web services, the implementation of the web-based user interface, the mechanism of provenance collection, the approach to virtualization, and the Amazon Cloud deployment. As an example, we will describe our methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). Another example is the use of Docker, a light-weight virtualization container, to distribute and deploy CMDA onto an Amazon EC2 instance. Our tool CMDA has been successfully used in the 2014 Summer School hosted by the JPL Center for Climate Science. Students generally gave positive feedback, and we will report their comments. An enhanced version of CMDA with several new features, some requested by the 2014 students, will be used in the 2015 Summer School.
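
    A minimal sketch of the wrapping methodology described above, assuming a placeholder science function; the route, parameters, and anomaly() are hypothetical illustrations, not actual CMDA services:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def anomaly(dataset, variable):
    # Stand-in for a long-running science computation on the server side.
    return {"dataset": dataset, "variable": variable, "mean_anomaly": 0.42}

@app.route("/svc/anomaly")
def anomaly_service():
    # Big data stays on the server; only a small JSON result is returned.
    dataset = request.args.get("dataset", "model_A")
    variable = request.args.get("variable", "ts")
    return jsonify(anomaly(dataset, variable))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # Gunicorn would serve this in production
```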

  14. A filter-mediated communication model for design collaboration in building construction.

    PubMed

    Lee, Jaewook; Jeong, Yongwook; Oh, Minho; Hong, Seung Wan

    2014-01-01

    Multidisciplinary collaboration is an important aspect of modern engineering activities, arising from the growing complexity of artifacts whose design and construction require knowledge and skills that exceed the capacities of any one professional. However, current collaboration in the architecture, engineering, and construction industries often fails due to a lack of shared understanding between different participants and the limitations of their supporting tools. To achieve a high level of shared understanding, this study proposes a filter-mediated communication model. In the proposed model, participants retain their own data in the form most appropriate for their needs, with domain-specific filters that transform neutral representations into semantically rich ones, as needed by the participants. Conversely, the filters can translate semantically rich, domain-specific data into a neutral representation that can be accessed by other domain-specific filters. To validate the feasibility of the proposed model, we computationally implement the filter mechanism and apply it to a hypothetical test case. The results confirm that the filter mechanism can let participants know ahead of time what the implications of their proposed actions will be, as seen from other participants' points of view.

  15. Morphological Plant Modeling: Unleashing Geometric and Topological Potential within the Plant Sciences

    PubMed Central

    Bucksch, Alexander; Atta-Boateng, Acheampong; Azihou, Akomian F.; Battogtokh, Dorjsuren; Baumgartner, Aly; Binder, Brad M.; Braybrook, Siobhan A.; Chang, Cynthia; Coneva, Viktoirya; DeWitt, Thomas J.; Fletcher, Alexander G.; Gehan, Malia A.; Diaz-Martinez, Diego Hernan; Hong, Lilan; Iyer-Pascuzzi, Anjali S.; Klein, Laura L.; Leiboff, Samuel; Li, Mao; Lynch, Jonathan P.; Maizel, Alexis; Maloof, Julin N.; Markelz, R. J. Cody; Martinez, Ciera C.; Miller, Laura A.; Mio, Washington; Palubicki, Wojtek; Poorter, Hendrik; Pradal, Christophe; Price, Charles A.; Puttonen, Eetu; Reese, John B.; Rellán-Álvarez, Rubén; Spalding, Edgar P.; Sparks, Erin E.; Topp, Christopher N.; Williams, Joseph H.; Chitwood, Daniel H.

    2017-01-01

    The geometries and topologies of leaves, flowers, roots, shoots, and their arrangements have fascinated plant biologists and mathematicians alike. As such, plant morphology is inherently mathematical in that it describes plant form and architecture with geometrical and topological techniques. Gaining an understanding of how to modify plant morphology through molecular biology and breeding, aided by a mathematical perspective, is critical to improving agriculture, and the monitoring of ecosystems is vital to modeling a future with fewer natural resources. In this white paper, we begin with an overview of methods for quantifying the form of plants and of mathematical models of patterning in plants. We then explore the fundamental challenges that remain open concerning plant morphology, from the barriers preventing the prediction of phenotype from genotype to modeling the movement of leaves in air streams. We end with a discussion concerning education in plant morphology that synthesizes biological and mathematical approaches, and ways to facilitate research advances through outreach, cross-disciplinary training, and open science. Unleashing the potential of geometric and topological approaches in the plant sciences promises to transform our understanding of both plants and mathematics. PMID:28659934

  16. A Distributed Model for Stressors Monitoring Based on Environmental Smart Sensors.

    PubMed

    de Ramón-Fernández, Alberto; Ruiz-Fernández, Daniel; Marcos-Jorquera, Diego; Gilart-Iglesias, Virgilio

    2018-06-14

    Nowadays, in many countries, stress is becoming a problem that increasingly affects the health of people. Suffering from stress continuously can lead to serious behavioral disorders such as anxiety or depression. Every person, in their daily routine, can face many factors that contribute to increasing their stress level. This paper describes a flexible and distributed model to monitor environmental variables associated with stress, which provides adaptability to any environment in an agile way. This model was designed to transform environmental stress variables into value-added information (key stress indicators) and to provide it to external systems, in both proactive and reactive modes. Thus, this value-added information will assist organizations and users in a personalized way, helping in the detection and prevention of acute stress cases. Our proposed model is supported by an architecture that achieves the features mentioned above, in addition to interoperability, robustness, scalability, autonomy, efficiency, low cost and power consumption, and information availability in real time. Finally, a prototype of the system was implemented, allowing the validation of the proposal in different environments at the University of Alicante.

  17. A self-scaling, distributed information architecture for public health, research, and clinical care.

    PubMed

    McMurry, Andrew J; Gilbert, Clint A; Reis, Ben Y; Chueh, Henry C; Kohane, Isaac S; Mandl, Kenneth D

    2007-01-01

    This study sought to define a scalable architecture to support the National Health Information Network (NHIN). This architecture must concurrently support a wide range of public health, research, and clinical care activities. The architecture fulfils five desiderata: (1) adopt a distributed approach to data storage to protect privacy, (2) enable strong institutional autonomy to engender participation, (3) provide oversight and transparency to ensure patient trust, (4) allow variable levels of access according to investigator needs and institutional policies, (5) define a self-scaling architecture that encourages voluntary regional collaborations that coalesce to form a nationwide network. Our model has been validated by a large-scale, multi-institution study involving seven medical centers for cancer research. It is the basis of one of four open architectures developed under funding from the Office of the National Coordinator of Health Information Technology, fulfilling the biosurveillance use case defined by the American Health Information Community. The model supports broad applicability for regional and national clinical information exchanges. This model shows the feasibility of an architecture wherein the requirements of care providers, investigators, and public health authorities are served by a distributed model that grants autonomy, protects privacy, and promotes participation.

  18. Electromagnetic physics models for parallel computing architectures

    DOE PAGES

    Amadio, G.; Ananya, A.; Apostolakis, J.; ...

    2016-11-21

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Finally, the results of preliminary performance evaluation and physics validation are presented as well.

  19. A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks.

    PubMed

    Su, Po-Chang; Shen, Ju; Xu, Wanxin; Cheung, Sen-Ching S; Luo, Ying

    2018-01-15

    From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely-placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-art systems using both quantitative measurements and visual alignment results of the merged point clouds.
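
    One ingredient named in the abstract, the rigid-transformation model for extrinsic calibration, can be sketched as a closed-form Kabsch/Procrustes fit. This is a generic textbook estimator, not the authors' full pipeline; the spherical-object correspondences, polynomial/manifold alternatives, and bundle adjustment are omitted.

```python
# Minimal sketch: estimate a rigid transformation between corresponding 3D
# points from two camera frames via SVD (Kabsch/Procrustes).
import numpy as np

def rigid_transform(P, Q):
    """Find R, t minimizing ||R @ P_i + t - Q_i||^2 over corresponding points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic check: recover a known rotation/translation from noiseless points.
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
R_true = np.linalg.qr(rng.normal(size=(3, 3)))[0]
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                              # force a proper rotation
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_transform(P, Q)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```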

  20. A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks †

    PubMed Central

    Shen, Ju; Xu, Wanxin; Luo, Ying

    2018-01-01

    From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely-placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-art systems using both quantitative measurements and visual alignment results of the merged point clouds. PMID:29342968

  1. 3D Survey Techniques for the Architectural Restoration: the Case of St. Agata in Pisa

    NASA Astrophysics Data System (ADS)

    Bevilacqua, M. G.; Caroti, G.; Piemonte, A.; Ruschi, P.; Tenchini, L.

    2017-05-01

    The historical architectural heritage may be considered as the product of a complex system of interaction between several factors - cultural, socio-economic, technical, aesthetic etc. The restoration and conservation of this important heritage therefore necessarily requires a multidisciplinary approach, both in the preliminary phase of knowledge and in the operative one, strictly connected to the first, regarding the development of the restoration works in all their steps, from the project to the realization. The historical-critical analysis of bibliographic, archival and iconographic sources, together with the architectural survey, aims at interpreting all the events that, from the initial project through all the eventual phases of transformation, have led the monument to its current state. This is therefore a multi-temporal and multi-spatial study in which geomatics gives an innovative contribution through its capability of gathering, storing, processing, and delivering different levels of spatially referenced information. The current techniques of architectural survey, supported by specific methodological skills, are therefore not limited to a mere mathematical-geometrical description of the historical building, but are also useful for many other purposes, such as formal-linguistic analysis, interpretation of the historical phases of transformation, description of the state of degradation/conservation etc. In this interdisciplinary perspective, photogrammetry and laser scanning represent the two main techniques, as they offer the greatest potential for performing integrated surveys. In the last decades, we have witnessed the growth and development of these 3D-survey techniques as alternative or complementary tools to the traditional ones. In particular, in the field of architectural restoration, these techniques have brought significant improvements not only in terms of measurement precision and reduced survey time, but also in the possibility of representing and visualizing the historical building in its context. These modern survey techniques, based on the creation of point clouds, are now widely used both in the study of a building and for the thorough description of architectural details and decorations. This paper describes the methodological approach and the results of the 3D survey of the Chapel of St. Agata in Pisa, aimed at its restoration. For the development of a restoration project, the survey drawings must represent not only the geometry of a building, but also the materials and the level of degradation. We therefore chose to use both the laser scanner - which guarantees uniformity of the geometric survey precision - and 3D image-based modelling. The combined use of these two techniques, supported by a total station survey, produced two point clouds in the same reference system and allowed the determination of the external orientation parameters of the photographic images. Since these parameters are known, it was possible to texture the laser scanner model with high quality images. The adopted methodology, as expected, yielded metrically correct and graphically high-quality drawings. The level of detail of the survey, and consequently of the final drawings, was defined in advance to allow the identification of all the elements required for the analysis of the current state, such as the clear identification and position of all the degradation phenomena, materials and decorative elements, including some fragmented and heavily damaged frescoes.

  2. Programming model for distributed intelligent systems

    NASA Technical Reports Server (NTRS)

    Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.

    1988-01-01

    A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.

  3. Executable Architecture Research at Old Dominion University

    NASA Technical Reports Server (NTRS)

    Tolk, Andreas; Shuman, Edwin A.; Garcia, Johnny J.

    2011-01-01

    Executable Architectures allow the evaluation of system architectures not only regarding their static, but also their dynamic behavior. However, the systems engineering community does not agree on a common formal specification of executable architectures. Closing this gap and identifying the necessary elements of an executable architecture, a modeling language, and a modeling formalism is the topic of ongoing PhD research. In addition, systems are generally defined and applied in an operational context to provide capabilities and enable missions. To maximize the benefits of executable architectures, a second PhD effort introduces the idea of creating an executable context in addition to the executable architecture. The results move the validation of architectures from the current information domain into the knowledge domain and improve the reliability of such validation efforts. The paper presents research and results of both doctoral research efforts and puts them into a common context of state-of-the-art systems engineering methods supporting more agility.

  4. Space Generic Open Avionics Architecture (SGOAA) standard specification

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1994-01-01

    This standard establishes the Space Generic Open Avionics Architecture (SGOAA). The SGOAA includes a generic functional model, processing structural model, and an architecture interface model. This standard defines the requirements for applying these models to the development of spacecraft core avionics systems. The purpose of this standard is to provide an umbrella set of requirements for applying the generic architecture models to the design of a specific avionics hardware/software processing system. This standard defines a generic set of system interface points to facilitate identification of critical services and interfaces. It establishes the requirement for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics functions and processing structural models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.

  5. Resource utilization model for the algorithm to architecture mapping model

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Patel, Rakesh R.

    1993-01-01

    The analytical model for resource utilization, and the variable node time and conditional node models for the enhanced ATAMM model for a real-time data flow architecture, are presented in this research. The Algorithm To Architecture Mapping Model, ATAMM, is a Petri net based graph theoretic model developed at Old Dominion University, and is capable of modeling the execution of large-grained algorithms on a real-time data flow architecture. Using the resource utilization model, the resource envelope may be obtained directly from a given graph and, consequently, the maximum number of required resources may be evaluated. The node timing diagram for one iteration period may be obtained using the analytical resource envelope. The variable node time model, which describes the change in resource requirements for the execution of an algorithm under node time variation, is useful for expanding the applicability of the ATAMM model to heterogeneous architectures. The model also describes a method for detecting the presence of a resource limited mode and its subsequent prevention. Graphs with conditional nodes are shown to be reducible to equivalent graphs with time varying nodes and, subsequently, may be analyzed using the variable node time model to determine resource requirements. Case studies are performed on three graphs to illustrate the applicability of the analytical theories.

  6. Requirements and Solutions for Personalized Health Systems.

    PubMed

    Blobel, Bernd; Ruotsalainen, Pekka; Lopez, Diego M; Oemig, Frank

    2017-01-01

    Organizational, methodological and technological paradigm changes enable a precise, personalized, predictive, preventive and participative approach to health and social services, supported by multiple actors from different domains at diverse levels of knowledge and skill. Interoperability has to advance beyond Information and Communication Technologies (ICT) concerns to include the real-world business domains and their processes, as well as the individual context of all actors involved. The paper introduces and compares personalized health definitions, summarizes requirements and principles for pHealth systems, and considers intelligent interoperability. It addresses knowledge representation and harmonization, decision intelligence, and usability as crucial issues in pHealth. On this basis, a system-theoretical, ontology-based, policy-driven reference architecture model for open and intelligent pHealth ecosystems and its transformation into an appropriate ICT design and implementation is proposed.

  7. Revitalization of Energy Supply Systems in the Scale of a Town, a District and an Island

    NASA Astrophysics Data System (ADS)

    Juchimiuk, Justyna

    2016-09-01

    Model actions undertaken in HafenCity and Wilhelmsburg during IBA Hamburg 2006-13, as well as the energy transformation of the Danish island of Samsø towards self-sufficiency, are examples of the use of energy as one of the key factors in the design of revitalization processes at various scales. An important issue is to determine the impact of renewable energy systems on the design process, architecture and urbanism of revitalized structures. The article examines programs and projects related to the following processes: renewal of degraded inner-industrial areas (brownfields), ecological restoration of degraded land, and the revitalization of port and underdeveloped areas in the aspects of climate protection, the use of energy from renewable sources, and improvement of the technical condition of building substance while maintaining the principles of sustainable development.

  8. Active site architecture of a sugar N-oxygenase.

    PubMed

    Thoden, James B; Branch, Megan C; Zimmer, Alex L; Bruender, Nathan A; Holden, Hazel M

    2013-05-14

    KijD3 is a flavin-dependent N-oxygenase implicated in the formation of the nitro-containing sugar d-kijanose, found attached to the antibiotic kijanimicin. For this investigation, the structure of KijD3 in complex with FMN and its dTDP-sugar substrate was solved to 2.1 Å resolution. In contrast to the apoenzyme structure, the C-terminus of the protein becomes ordered and projects into the active site cleft [Bruender, N. A., Thoden, J. B., and Holden, H. M. (2010) Biochemistry 49, 3517-3524]. The amino group of the dTDP-aminosugar that is oxidized is located 4.9 Å from C4a of the flavin ring. The model provides a molecular basis for understanding the manner in which KijD3 catalyzes its unusual chemical transformation.

  9. The renovation of Madison's Oscar Mayer Theater

    NASA Astrophysics Data System (ADS)

    Myers, Joseph W. A.; Calamia, Paul T.

    2002-05-01

    Originally opened in 1928 as the Capitol Theatre, the Oscar Mayer Theatre in Madison, WI underwent substantial renovation in 1980 to support its current performance program. The theatre is now home to the Madison Symphony and the Madison Opera, and is used for ballet, touring shows, and popular concerts as well. As part of the ongoing Overture Project which will transform the theatre's home, the Madison Civic Center, into the Overture Center, the Oscar Mayer Theatre will be further improved to support its new role. In this paper we will discuss the reasoning behind the upcoming renovation within the context of the Overture Project, we will describe the pending architectural modifications to the theatre, and we will discuss the intended changes in acoustics. Computer modeling results will also be presented for the existing and renovated conditions.

  10. Chrestenson transform FPGA embedded factorizations.

    PubMed

    Corinthios, Michael J

    2016-01-01

    Chrestenson generalized Walsh transform factorizations for parallel processing embedded implementations on field programmable gate arrays are presented. This general base transform, sometimes referred to as the Discrete Chrestenson transform, has received special attention in recent years. In fact, the Discrete Fourier transform and Walsh-Hadamard transform are but special cases of the Chrestenson generalized Walsh transform. Rotations of a base-p hypercube, where p is an arbitrary integer, are shown to produce dynamic contention-free memory allocation in processor architecture. The approach is illustrated by factorizations involving the processing of matrices of the transform which are a function of four variables. Parallel operations are implemented as matrix multiplications. Each matrix, of dimension N × N, where N = p^n, n integer, has a structure that depends on a variable parameter k that denotes the iteration number in the factorization process. The level of parallelism, in the form of M = p^m processors, can be chosen arbitrarily by varying m from zero to its maximum value of n - 1. The result is an equation describing the generalised parallelism factorization as a function of the four variables n, p, k and m. Applications of the approach are shown in relation to configuring field programmable gate arrays for digital signal processing applications.
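
    A minimal sketch of the transform family itself (not the paper's FPGA factorizations or hypercube memory-allocation scheme): the base-p Chrestenson generalized Walsh matrix of size N = p^n can be built as the n-fold Kronecker power of the p-point DFT kernel, reducing to the Walsh-Hadamard matrix for p = 2.

```python
# Build the base-p Chrestenson (generalized Walsh) matrix of size N = p**n
# as a Kronecker power of the p-point DFT kernel.
import numpy as np
from functools import reduce

def chrestenson(p, n):
    j, k = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")
    Wp = np.exp(-2j * np.pi * j * k / p)      # p-point DFT kernel
    return reduce(np.kron, [Wp] * n)          # N x N with N = p**n

# For p = 2 this is the 8x8 Walsh-Hadamard matrix (natural ordering):
H = chrestenson(2, 3)
assert np.allclose(H.imag, 0) and np.allclose(np.abs(H.real), 1)
```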

  11. The IASI detection chain

    NASA Astrophysics Data System (ADS)

    Nicol, Patrick; Fleury, Joel; Le Naour, Claire; Bernard, Frédéric

    2017-11-01

    IASI (Infrared Atmospheric Sounding Interferometer) is an infrared atmospheric sounder. It will provide meteorologists and the scientific community with atmospheric spectra. The instrument is composed of a Fourier transform spectrometer and an associated infrared imager. The presentation will describe the spectrometer detection chain architecture, composed of three different detectors cooled in a passive cryo-cooler (the so-called CBS: Cold Box Subsystem) and the associated analog electronics up to digital conversion. It will mainly focus on design choices with regard to environment constraints, implemented technologies, and associated performances. CNES is leading the IASI program in collaboration with EUMETSAT. ALCATEL SPACE is the instrument prime, responsible notably for the detection chain architecture. SAGEM SA provides the detector package (the so-called CAU: Cold Acquisition Unit).

  12. The IASI detection chain

    NASA Astrophysics Data System (ADS)

    Nicol, Patrick; Fleury, Joel; Bernard, Frédéric

    2004-06-01

    IASI (Infrared Atmospheric Sounding Interferometer) is an infrared atmospheric sounder. It will provide meteorologists and the scientific community with atmospheric spectra. The instrument is composed of a Fourier transform spectrometer and an associated infrared imager. The presentation will describe the spectrometer detection chain architecture, composed of three different detectors cooled in a passive cryo-cooler (the so-called CBS: Cold Box Subsystem) and the associated analog electronics up to digital conversion. It will mainly focus on design choices with regard to environment constraints, implemented technologies, and associated performances. CNES is leading the IASI program in collaboration with EUMETSAT. ALCATEL SPACE is the instrument prime, responsible notably for the detection chain architecture. SAGEM SA provides the detector package (the so-called CAU: Cold Acquisition Unit).

  13. Low AC Loss YBCO Coated Conductor Geometry by Direct Inkjet Printing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupich, Martin, Dr.; Duckworth, Robert, Dr.

    The second generation (2G) high temperature superconductor (HTS) wire offers potential benefits for many electric power applications, including ones requiring filamentized conductors with low ac loss, such as transformers and fault current limiters. However, the use of 2G wire in these applications requires the development of both novel multi-filamentary conductor designs with lower ac losses and the development of advanced manufacturing technologies that enable the low-cost manufacturing of these filamentized architectures. This Phase I SBIR project focused on testing inkjet printing as a potential low-cost, roll-to-roll manufacturing technique to fabricate potential low ac loss filamentized architectures directly on the 2G template strips.

  14. Pedagogy and Student Services for Institutional Transformation: Implementing Universal Design in Higher Education

    ERIC Educational Resources Information Center

    Higbee, Jeanne L., Ed.; Goff, Emily, Ed.

    2008-01-01

    PASS IT seeks to address a compelling need in higher education by developing a corps of trainers to facilitate professional development workshops in the implementation of Universal Design (UD) and Universal Instructional Design (UID) in higher education. UID, an adaptation of the architectural concept of Universal Design, is a relatively new model…

  15. Art at the Airport: An Exploration of New Art Worlds

    ERIC Educational Resources Information Center

    Szekely, Ilona

    2012-01-01

    Many airports have transformed empty waiting spaces into mini malls, children's play areas, and displays of beautiful art, making a long wait a bit more pleasant. For the modern airport, showcasing art has become an important component, with perks including a built-in global audience, as well as the vast spaces of modern architecture. For the art…

  16. Creative Process and Experiences Leading to Creative Achievement in the Case of Accomplished Architects

    ERIC Educational Resources Information Center

    Lee, Seon-Young; Lee, Ghang

    2017-01-01

    This study has identified factors stimulating creative ideas, transforming creative ideas to products, and continuing creative performance in the field of architecture based on interviews with 10 creative and successful architects. Having a penchant for liberal arts and reading books on a broad range of topics on arts, humanities, social sciences,…

  17. Does Digitized Virtual Space Allow for Effective Learning in Creating Environments for Theatrical Productions?

    ERIC Educational Resources Information Center

    Magruder, Lewis

    2016-01-01

    Learning how to transform an empty space into one alive with dramatic possibilities is one of the challenges facing students in several disciplines--for example, graphic design, filmmaking, gaming, architecture, interior design, visual arts, and designing and directing for the theatre. The author, a professor of directing for the theatre,…

  18. Application of Interactive Multimedia Tools in Teaching Mathematics--Examples of Lessons from Geometry

    ERIC Educational Resources Information Center

    Milovanovic, Marina; Obradovic, Jasmina; Milajic, Aleksandar

    2013-01-01

    This article presents the benefits and importance of using multimedia in the math classes by the selected examples of multimedia lessons from geometry (isometric transformations and regular polyhedra). The research included two groups of 50 first year students of the Faculty of the Architecture and the Faculty of Civil Construction Management.…

  19. The use of geospatial web services for exchanging utilities data

    NASA Astrophysics Data System (ADS)

    Kuczyńska, Joanna

    2013-04-01

    Geographic information technologies and related geo-information systems currently play an important role in the management of public administration in Poland. One of these tasks is to maintain and update the Geodetic Evidence of Public Utilities (GESUT), part of the National Geodetic and Cartographic Resource, which contains information on technical infrastructure that is important for many institutions. It requires an active exchange of data between the Geodesy and Cartography Documentation Centers and the institutions which administer transmission lines. The administrator of public utilities is legally obliged to provide information about utilities to GESUT. The aim of the research work was to develop a universal data exchange methodology which can be implemented on a variety of hardware and software platforms. This methodology uses the Unified Modeling Language (UML), the eXtensible Markup Language (XML), and the Geography Markup Language (GML). The proposed methodology is based on two different strategies: Model Driven Architecture (MDA) and Service Oriented Architecture (SOA). The solutions used are consistent with the INSPIRE Directive and the ISO 19100 series standards for geographic information. On the basis of an analysis of the input data structures, conceptual models were built for both databases. The models were written in the universal modeling language UML. A combined model that defines a common data structure was also built. This model was transformed into the GML standard developed for the exchange of geographic information. The structure of the document describing the data that may be exchanged is defined in an .xsd file. Network services were selected and implemented in the system designed for data exchange, based on open source tools. The methodology was implemented and tested. Data in the agreed data structure, together with metadata, were set up on the server. Data access was provided by geospatial network services: data searching via the Catalogue Service for the Web (CSW) and data collection via the Web Feature Service (WFS). WFS also provides operations for modifying data, for example so that the utility administrator can update it. The proposed solution significantly increases the efficiency of data exchange and facilitates maintenance of the National Geodetic and Cartographic Resource.
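
    As an illustration of the collection service named above, a WFS GetFeature request can be issued with the standard OGC query parameters. The endpoint URL and feature type name below are placeholders, not the actual GESUT service.

```python
# Minimal sketch: fetch utility features from a WFS endpoint using the
# standard OGC GetFeature parameters. URL and typeName are hypothetical.
import requests

WFS_URL = "https://example.org/geoserver/wfs"    # placeholder endpoint

params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "gesut:utility_lines",           # hypothetical feature type
    "outputFormat": "GML3",
    "maxFeatures": 100,
}

response = requests.get(WFS_URL, params=params, timeout=30)
response.raise_for_status()
gml_document = response.text   # GML payload, ready for .xsd schema validation
```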

  20. A Distributed Intelligent E-Learning System

    ERIC Educational Resources Information Center

    Kristensen, Terje

    2016-01-01

    An E-learning system based on a multi-agent (MAS) architecture combined with the Dynamic Content Manager (DCM) model of E-learning, is presented. We discuss the benefits of using such a multi-agent architecture. Finally, the MAS architecture is compared with a pure service-oriented architecture (SOA). This MAS architecture may also be used within…

  1. A development framework for semantically interoperable health information systems.

    PubMed

    Lopez, Diego M; Blobel, Bernd G M E

    2009-02-01

    Semantic interoperability is a basic challenge to be met for new generations of distributed, communicating and co-operating health information systems (HIS) enabling shared care and e-Health. Analysis, design, implementation and maintenance of such systems and intrinsic architectures have to follow a unified development methodology. The Generic Component Model (GCM) is used as a framework for modeling any system to evaluate and harmonize state-of-the-art architecture development approaches and standards for health information systems, as well as to derive a coherent architecture development framework for sustainable, semantically interoperable HIS and their components. The proposed methodology is based on the Rational Unified Process (RUP), taking advantage of its flexibility to be configured for integrating other architectural approaches such as Service-Oriented Architecture (SOA), Model-Driven Architecture (MDA), ISO 10746, and the HL7 Development Framework (HDF). Existing architectural approaches have been analyzed, compared and finally harmonized towards an architecture development framework for advanced health information systems. Starting with the requirements for semantic interoperability derived from paradigm changes for health information systems, and supported by formal software process engineering methods, an appropriate development framework for semantically interoperable HIS has been provided. The usability of the framework has been exemplified in a public health scenario.

  2. Modeling Operations Costs for Human Exploration Architectures

    NASA Technical Reports Server (NTRS)

    Shishko, Robert

    2013-01-01

    Operations and support (O&S) costs for human spaceflight have not received the same attention in the cost estimating community as have development costs. This is unfortunate as O&S costs typically comprise a majority of life-cycle costs (LCC) in such programs as the International Space Station (ISS) and the now-cancelled Constellation Program. Recognizing this, the Constellation Program and NASA HQs supported the development of an O&S cost model specifically for human spaceflight. This model, known as the Exploration Architectures Operations Cost Model (ExAOCM), provided the operations cost estimates for a variety of alternative human missions to the moon, Mars, and Near-Earth Objects (NEOs) in architectural studies. ExAOCM is philosophically based on the DoD Architecture Framework (DoDAF) concepts of operational nodes, systems, operational functions, and milestones. This paper presents some of the historical background surrounding the development of the model, and discusses the underlying structure, its unusual user interface, and lastly, previous examples of its use in the aforementioned architectural studies.

  3. Enhancement of the Acquisition Process for a Combat System-A Case Study to Model the Workflow Processes for an Air Defense System Acquisition

    DTIC Science & Technology

    2009-12-01

    …Supporting technologies for modeling the acquisition workflow processes include Business Process Modeling Notation (BPMN), the Unified Modeling Language (UML), Service-oriented Architecture (SoA), and model-driven architecture.

  4. Sensitivity Analysis of a Cognitive Architecture for the Cultural Geography Model

    DTIC Science & Technology

    2011-12-01

    This thesis presents a sensitivity analysis of the Situation-Based Cognitive Architecture (Alt et al., 2011) as implemented in the Cultural Geography (CG) model developed at TRAC-MTRY.

  5. Capability-Based Modeling Methodology: A Fleet-First Approach to Architecture

    DTIC Science & Technology

    2014-02-01

    …reconnaissance (ISR) aircraft, or unmanned systems. Accordingly, a mission architecture used to model SAG operations for a given Fleet unit should include all… would use an ISR aircraft to increase fidelity of a targeting solution; another mission thread to show how unmanned systems can augment targeting… Therefore, an architect can generate, from a comprehensive SAG mission architecture, individual mission threads that model how a SAG…

  6. Petri net model for analysis of concurrently processed complex algorithms

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1986-01-01

    This paper presents a Petri-net model suitable for analyzing the concurrent processing of computationally complex algorithms. The decomposed operations are to be processed in a multiple processor, data driven architecture. Of particular interest is the application of the model to both the description of the data/control flow of a particular algorithm, and to the general specification of the data driven architecture. A candidate architecture is also presented.
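
    A toy sketch of the modeling device itself, a marked Petri net with token-based firing; the places and transitions below are illustrative, not the paper's algorithm graphs.

```python
# Minimal marked Petri net: transitions fire when their input places hold
# enough tokens, consuming and producing tokens (data-flow style).
class PetriNet:
    def __init__(self, pre, post, marking):
        self.pre, self.post = pre, post        # transition -> {place: weight}
        self.marking = dict(marking)

    def enabled(self, t):
        return all(self.marking.get(p, 0) >= w for p, w in self.pre[t].items())

    def fire(self, t):
        assert self.enabled(t), f"transition {t} not enabled"
        for p, w in self.pre[t].items():
            self.marking[p] -= w
        for p, w in self.post[t].items():
            self.marking[p] = self.marking.get(p, 0) + w

# Two concurrent operations feeding a merge node, as in a data-flow graph.
net = PetriNet(
    pre={"op1": {"in": 1}, "op2": {"in": 1}, "merge": {"a": 1, "b": 1}},
    post={"op1": {"a": 1}, "op2": {"b": 1}, "merge": {"out": 1}},
    marking={"in": 2},
)
for t in ("op1", "op2", "merge"):
    net.fire(t)
print(net.marking)   # {'in': 0, 'a': 0, 'b': 0, 'out': 1}
```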

  7. Upper and Lower Limb Muscle Architecture of a 104 Year-Old Cadaver

    PubMed Central

    Infantolino, Benjamin

    2016-01-01

    Muscle architecture is an important component of typical musculoskeletal models. Previous studies of human muscle architecture have focused on a single joint, two adjacent joints, or an entire limb. To date, no study has presented muscle architecture for the upper and lower limbs of a single cadaver. Additionally, muscle architectural parameters from elderly cadavers are lacking, making it difficult to accurately model elderly populations. Therefore, the purpose of this study was to present muscle architecture of the upper and lower limbs of a 104-year-old female cadaver. The major muscles of the upper and lower limbs were removed and the musculotendon mass, tendon mass, musculotendon length, tendon length, pennation angle, optimal fascicle length, physiological cross-sectional area, and tendon cross-sectional area were determined for each muscle. Data from this complete cadaver are presented in table format. The data from this study can be used to construct a musculoskeletal model of a specific individual who was ambulatory, something which has not been possible to date. This should increase the accuracy of the model output as the model will be representing a specific individual, not a synthesis of measurements from multiple individuals. Additionally, an elderly individual can be modeled, which will provide insight into muscle function as we age. PMID:28033339

  8. Upper and Lower Limb Muscle Architecture of a 104 Year-Old Cadaver.

    PubMed

    Ruggiero, Marissa; Cless, Daniel; Infantolino, Benjamin

    2016-01-01

    Muscle architecture is an important component of typical musculoskeletal models. Previous studies of human muscle architecture have focused on a single joint, two adjacent joints, or an entire limb. To date, no study has presented muscle architecture for the upper and lower limbs of a single cadaver. Additionally, muscle architectural parameters from elderly cadavers are lacking, making it difficult to accurately model elderly populations. Therefore, the purpose of this study was to present muscle architecture of the upper and lower limbs of a 104-year-old female cadaver. The major muscles of the upper and lower limbs were removed and the musculotendon mass, tendon mass, musculotendon length, tendon length, pennation angle, optimal fascicle length, physiological cross-sectional area, and tendon cross-sectional area were determined for each muscle. Data from this complete cadaver are presented in table format. The data from this study can be used to construct a musculoskeletal model of a specific individual who was ambulatory, something which has not been possible to date. This should increase the accuracy of the model output as the model will be representing a specific individual, not a synthesis of measurements from multiple individuals. Additionally, an elderly individual can be modeled, which will provide insight into muscle function as we age.

  9. Logs Analysis of Adapted Pedagogical Scenarios Generated by a Simulation Serious Game Architecture

    ERIC Educational Resources Information Center

    Callies, Sophie; Gravel, Mathieu; Beaudry, Eric; Basque, Josianne

    2017-01-01

    This paper presents an architecture designed for simulation serious games, which automatically generates game-based scenarios adapted to learner's learning progression. We present three central modules of the architecture: (1) the learner model, (2) the adaptation module and (3) the logs module. The learner model estimates the progression of the…

  10. Architectural Large Constructed Environment. Modeling and Interaction Using Dynamic Simulations

    NASA Astrophysics Data System (ADS)

    Fiamma, P.

    2011-09-01

    How can the simulation coming from a large-size data model be used for architectural design? The topic relates to the phase that usually comes after data acquisition, during the construction of the model and especially afterwards, when designers must interact with the simulation in order to develop and verify their ideas. In the case study, the concept of interaction includes the concept of real-time "flows". The work develops contents and results that can be part of the large debate about the current connection between "architecture" and "movement". The focus of the work is to realize a collaborative and participative virtual environment in which different specialist actors, the client and the final users can share knowledge, targets and constraints to better achieve the aimed result. The goal is to use a dynamic micro-simulation digital resource that allows all the actors to explore the model in a powerful and realistic way and to have a new type of interaction in a complex architectural scenario. On the one hand, the work represents a knowledge base that can be implemented further; on the other hand, it represents an attempt to understand the large constructed architecture simulation as a way of life, a way of being in time and space. The architectural design before, and the architectural fact after, both happen in a sort of "Spatial Analysis System". The way is open to offer to this "system" knowledge and theories that can support architectural design work for every application and scale. Architecture is thus understood as a spatial configuration, one that can also be reconfigured through design.

  11. Hijazi Architectural Object Library (HAOL)

    NASA Astrophysics Data System (ADS)

    Baik, A.; Boehm, J.

    2017-02-01

    As with many historical buildings around the world, building façades are of special interest; moreover, the details of windows, stonework, and ornaments give each historic building its individual character. Each object of these buildings must be classified in an architectural object library. Recently, a number of research efforts have focused on this topic in Europe and Canada. From this standpoint, the Hijazi Architectural Objects Library (HAOL) has reproduced Hijazi elements as 3D computer models, which are modelled using a Revit Family (RFA). The HAOL is based on image survey and point cloud data. Hijazi objects such as the Roshan and Mashrabiyah have become part of the vocabulary of many Islamic cities in the Hijazi region, such as Jeddah in Saudi Arabia, and even of a number of Islamic historic cities such as Istanbul and Cairo. These architectural vocabularies are the main source of the beauty of this heritage. However, there is a big gap in both the Islamic architectural library and the Hijazi architectural library in providing these unique elements. Besides, both Islamic and Hijazi architecture contain a huge amount of information which has not yet been digitally classified according to period and style. For this reason, this paper focuses on the development of Heritage BIM (HBIM) standards and the HAOL library to reduce the cost and delivery time of heritage and new projects that involve Hijazi architectural styles. Through this paper, the fundamentals of Hijazi architecture informatics will be provided by developing a framework for HBIM models and standards. This framework will provide a schema and critical information, for example, classifying the different shapes, models, and forms of structure, construction, and ornamentation of Hijazi architecture in order to digitalize parametric building identity.

  12. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

    High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry-standard, commercially available FPGAs. The implementation targets the Xilinx Virtex II Pro architecture, which has embedded PowerPC processor cores with flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of off-chip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of the FPGAs makes it possible to effectively alter the design to some extent to satisfy different requirements without adding hardware. The implementation could be easily propagated to future FPGA generations and/or to custom application-specific integrated circuits.
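
    As a rough illustration of the first subalgorithm named above (the wavelet transform module), one lifting level of the integer LeGall 5/3 scheme common in hardware image coders is sketched below. ICER-3D's actual filters, context modeler, and entropy coder are not reproduced, and the periodic boundary handling here is a simplification.

```python
# One 1D lifting level of the integer LeGall 5/3 wavelet, shift-add only.
# Boundary handling is periodic (np.roll) for brevity; real coders use
# symmetric extension.
import numpy as np

def legall53_forward(x):
    """Split x (even length) into lowpass s and highpass d subbands."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail = odd - floor((left_even + right_even) / 2)
    d = odd - ((even + np.roll(even, -1)) >> 1)
    # Update step: smooth = even + floor((d_left + d_right + 2) / 4)
    s = even + ((d + np.roll(d, 1) + 2) >> 2)
    return s, d

s, d = legall53_forward(np.arange(16))
# Interior detail coefficients vanish on a linear ramp; the periodic wrap
# perturbs only the edges.
print(s, d)
```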

  13. Top-down solid-phase fabrication of nanoporous cadmium oxide architectures.

    PubMed

    Yu, Haidong; Wang, Deshen; Han, Ming-Yong

    2007-02-28

    In this article, we have demonstrated a one-step solid-phase transformation from high-quality cadmium carbonate microcrystals into highly nanoporous cadmium oxide. The high crystal quality of cadmium carbonate is critical for the successful fabrication of porous nanoarchitectures with predetermined morphology and well-controlled internal structure. This novel strategy shows good potential for preparing nanoporous materials at large scale using perfect monolithic carbonate crystals, and it is also useful for synthesizing different nanoporous materials on metal-oxide-coated substrates. Meanwhile, this simple thermal transformation of cadmium carbonate into porous structures has been further extended to convert calcium carbonate into such porous structures.

  14. Water Fountains in Environment Transformation Correcting

    NASA Astrophysics Data System (ADS)

    Sidorenko, M. Yu; Ponomareva, Zh V.

    2017-11-01

    The article provides information on the means and principles for adjusting the process of urban environment transformation. The interest in the topic is caused by the fact that the surrounding artificial environment is turning into a dangerous factor in the mechanism of human visual perception, which requires immediate and effective intervention to adjust the existing modern built environment. The paper considers correction with the help of new dominants and small architectural forms, in particular water fountains. Fountains are an important part of the measures to create a comfortable, environmentally friendly urban human environment. Their planning and functional links with the system of streets, squares, and traffic arteries can create the basis of the urban plan.

  15. A low-power and high-quality implementation of the discrete cosine transformation

    NASA Astrophysics Data System (ADS)

    Heyne, B.; Götze, J.

    2007-06-01

    In this paper a computationally efficient and high-quality preserving DCT architecture is presented. It is obtained by optimizing the Loeffler DCT based on the Cordic algorithm. The computational complexity is reduced from 11 multiply and 29 add operations (Loeffler DCT) to 38 add and 16 shift operations (which is similar to the complexity of the binDCT). The experimental results show that the proposed DCT algorithm not only reduces the computational complexity significantly, but also retains the good transformation quality of the Loeffler DCT. Therefore, the proposed Cordic based Loeffler DCT is especially suited for low-power and high-quality CODECs in battery-based systems.
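
    The building block behind the multiplier-free operation counts quoted above can be illustrated with a plain CORDIC rotation, where each micro-rotation uses only shift-and-add style operations (emulated here in floating point). This is a generic sketch, not the paper's Loeffler factorization.

```python
# Generic CORDIC rotation: rotate (x, y) by `angle` using micro-rotations
# that in hardware reduce to shifts and adds. The 2**(-i) factors stand in
# for arithmetic right shifts.
import math

def cordic_rotate(x, y, angle, iterations=16):
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # cumulative gain correction
    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
        z -= d * math.atan(2.0 ** (-i))
    return x * K, y * K

# Check against an exact rotation by pi/6.
xr, yr = cordic_rotate(1.0, 0.0, math.pi / 6)
assert abs(xr - math.cos(math.pi / 6)) < 1e-4
assert abs(yr - math.sin(math.pi / 6)) < 1e-4
```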

  16. Methods for genetic transformation in Dendrobium.

    PubMed

    da Silva, Jaime A Teixeira; Dobránszki, Judit; Cardoso, Jean Carlos; Chandler, Stephen F; Zeng, Songjun

    2016-03-01

    The genetic transformation of Dendrobium orchids will allow for the introduction of novel colours, altered architecture and valuable traits such as abiotic and biotic stress tolerance. The orchid genus Dendrobium contains species that have both ornamental value and medicinal importance. There is thus interest in producing cultivars that have increased resistance to pests, novel horticultural characteristics such as novel flower colours, improved productivity, longer flower spikes, or longer post-harvest shelf-life. Tissue culture is used to establish clonal plants while in vitro flowering allows for the production of flowers or floral parts within a sterile environment, expanding the selection of explants that can be used for tissue culture or genetic transformation. The latter is potentially the most effective, rapid and practical way to introduce new agronomic traits into Dendrobium. Most (69.4 %) Dendrobium genetic transformation studies have used particle bombardment (biolistics) while 64 % have employed some form of Agrobacterium-mediated transformation. A single study has explored ovary injection, but no studies exist on floral dip transformation. While most of these studies have involved the use of selector or reporter genes, there are now a handful of studies that have introduced genes for horticulturally important traits.

  17. Art and Healthcare - Healing Potential of Artistic Interventions in Medical Settings

    NASA Astrophysics Data System (ADS)

    Awtuch, Anna; Gębczyńska-Janowicz, Agnieszka

    2017-10-01

    The stereotype of a machine for healing seems to be well rooted in common thinking and the social perception of hospital buildings. The technological aspect of healthcare architecture has been influenced for several years by three major factors. The first is linked to the necessity of providing safety and security in an environment of elevated epidemiological risk. The second concerns the need for incorporating advanced technology required for medical equipment and building infrastructure. Finally, the third relates to Cartesian dualism in medical sciences. Fortunately, 21st-century healthcare architecture is in the process of dynamic transformation resulting from a change in the approach to patients. The holistic perspective is gradually entering medical sciences, and as a result a patient is perceived as a human being whose needs are discussed on three equally important dimensions: biological, social and psychological. The new trend has influenced the design process of contemporary hospitals. One can observe a turn from the primacy of medical technology over environmental conditions towards a balance between medical requirements and the psychological and social needs of hospital users. Research on the impact of the hospital environment on the therapeutic process has given rise to a new trend of incorporating arts into the space and form of medical facilities. Both architecture and interior design details are more carefully negotiated in terms of aesthetics. Designers expand the possibilities of exhibiting visual art in functional and spatial arrangements. The initiatives introducing artistic objects, installations and activities into medical spaces aim at increasing the efficiency of medical services, transforming the image of sterile hospital architecture and introducing high-quality public space. These interventions generate impact on both micro and macro scales, and they concern several fields of activity and forms of art. The paper presents the scope of possibilities for introducing art into contemporary medical spaces and discusses the influence of selected solutions incorporating artistic interventions on healthcare users.

  18. Systematic Development of Intelligent Systems for Public Road Transport.

    PubMed

    García, Carmelo R; Quesada-Arencibia, Alexis; Cristóbal, Teresa; Padrón, Gabino; Alayón, Francisco

    2016-07-16

    This paper presents an architecture model for the development of intelligent systems for public passenger transport by road. The main objective of our proposal is to provide a framework for the systematic development and deployment of telematics systems to improve various aspects of this type of transport, such as efficiency, accessibility and safety. The architecture model presented herein is based on international standards on intelligent transport system architectures, ubiquitous computing and service-oriented architecture for distributed systems. To illustrate the utility of the model, we also present a use case of a monitoring system for stops on a public passenger road transport network.

  19. Systematic Development of Intelligent Systems for Public Road Transport

    PubMed Central

    García, Carmelo R.; Quesada-Arencibia, Alexis; Cristóbal, Teresa; Padrón, Gabino; Alayón, Francisco

    2016-01-01

    This paper presents an architecture model for the development of intelligent systems for public passenger transport by road. The main objective of our proposal is to provide a framework for the systematic development and deployment of telematics systems to improve various aspects of this type of transport, such as efficiency, accessibility and safety. The architecture model presented herein is based on international standards on intelligent transport system architectures, ubiquitous computing and service-oriented architecture for distributed systems. To illustrate the utility of the model, we also present a use case of a monitoring system for stops on a public passenger road transport network. PMID:27438836

  20. A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction.

    PubMed

    Kang, Eunhee; Min, Junhong; Ye, Jong Chul

    2017-10-01

    Due to the potential risk of inducing cancer, radiation exposure from X-ray CT devices should be reduced for routine patient scanning. However, in low-dose X-ray CT, severe artifacts typically occur due to photon starvation, beam hardening, and other causes, all of which decrease the reliability of the diagnosis. Thus, high-quality reconstruction from low-dose X-ray CT data has become a major research topic in the CT community. Conventional model-based de-noising approaches are, however, computationally very expensive, and image-domain de-noising approaches cannot readily remove CT-specific noise patterns. To tackle these problems, we propose a new low-dose X-ray CT algorithm based on a deep convolutional neural network (CNN) applied to the wavelet transform coefficients of low-dose CT images. More specifically, by using a directional wavelet transform to extract the directional components of the artifacts and exploit the intra- and inter-band correlations, our deep network can effectively suppress CT-specific noise. In addition, our CNN is designed with a residual learning architecture for faster network training and better performance. Experimental results confirm that the proposed algorithm effectively removes complex noise patterns from CT images acquired at a reduced X-ray dose, and we show that the wavelet-domain CNN is efficient at removing noise from low-dose CT compared with existing approaches. Our results were rigorously evaluated by several radiologists at the Mayo Clinic and won second place in the 2016 "Low-Dose CT Grand Challenge." To the best of our knowledge, this work is the first deep-learning architecture for low-dose CT reconstruction that has been rigorously evaluated and proven to be effective. In addition, the proposed algorithm, in contrast to existing model-based iterative reconstruction (MBIR) methods, has considerable potential to benefit from large data sets. We therefore believe that the proposed algorithm opens a new direction in low-dose CT research. © 2017 American Association of Physicists in Medicine.
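
    To make the wavelet-domain residual-learning idea above concrete, here is a minimal sketch, assuming PyTorch and PyWavelets are available. The paper's directional wavelet transform is simplified to a separable 2-D DWT, the network is untrained, and all layer sizes and the 'db4' wavelet are illustrative choices rather than the authors' configuration.

      # Sketch: a small CNN predicts the artifact component of the wavelet
      # coefficients of a CT slice (residual learning), and the cleaned
      # coefficients are inverted back to the image domain.
      import numpy as np
      import pywt
      import torch
      import torch.nn as nn

      class WaveletResidualCNN(nn.Module):
          def __init__(self, channels=4, width=32, depth=5):
              super().__init__()
              layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU()]
              for _ in range(depth - 2):
                  layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
              layers += [nn.Conv2d(width, channels, 3, padding=1)]
              self.body = nn.Sequential(*layers)

          def forward(self, x):
              # Residual learning: subtract the estimated artifact component.
              return x - self.body(x)

      def denoise_slice(img, model):
          LL, (LH, HL, HH) = pywt.dwt2(img, 'db4')          # one-level 2-D DWT
          coeffs = np.stack([LL, LH, HL, HH])[None].astype(np.float32)
          with torch.no_grad():
              out = model(torch.from_numpy(coeffs))[0].numpy()
          return pywt.idwt2((out[0], (out[1], out[2], out[3])), 'db4')

      model = WaveletResidualCNN()                           # untrained stand-in
      print(denoise_slice(np.random.rand(128, 128), model).shape)  # (128, 128)

    In a real training setup the network would be fit on pairs of low-dose and routine-dose coefficient images, exactly the kind of large data set the authors argue the approach can exploit.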

  1. Nanoscale nuclear architecture for cancer diagnosis by spatial-domain low-coherence quantitative phase microscopy

    NASA Astrophysics Data System (ADS)

    Wang, Pin; Bista, Rajan K.; Khalbuss, Walid E.; Qiu, Wei; Staton, Kevin D.; Zhang, Lin; Brentnall, Teresa A.; Brand, Randall E.; Liu, Yang

    2011-03-01

    Alterations in nuclear architecture are the hallmark diagnostic characteristic of cancer cells. In this work, we show that the nuclear architectural characteristics quantified by spatial-domain low-coherence quantitative phase microscopy (SL-QPM) are more sensitive for the identification of cancer cells than conventional cytopathology. We demonstrate the importance of nuclear architectural characteristics in both an animal model of intestinal carcinogenesis (the APC/Min mouse) and human cytology specimens with colorectal cancer, by identifying cancer in cytologically noncancerous-appearing cells. The determination of nanoscale nuclear architecture using this simple and practical optical instrument is a significant advance towards cancer diagnosis.

  2. The past, present, and future of cognitive architectures.

    PubMed

    Taatgen, Niels; Anderson, John R

    2010-10-01

    Cognitive architectures are theories of cognition that try to capture the essential representations and mechanisms that underlie cognition. Research in cognitive architectures has gradually moved from a focus on the functional capabilities of architectures to the ability to model the details of human behavior and, more recently, brain activity. Although there are many different architectures, they share many identical or similar mechanisms, permitting possible future convergence. In judging the quality of a particular cognitive model, it is pertinent to judge not just its fit to the experimental data but also its simplicity and ability to make predictions. Copyright © 2009 Cognitive Science Society, Inc.

  3. Open mHealth Architecture: A Primer for Tomorrow's Orthopedic Surgeon and Introduction to Its Use in Lower Extremity Arthroplasty.

    PubMed

    Ramkumar, Prem N; Muschler, George F; Spindler, Kurt P; Harris, Joshua D; McCulloch, Patrick C; Mont, Michael A

    2017-04-01

    The recent private-public partnership to unlock and utilize all available health data has large-scale implications for public health and personalized medicine, especially within orthopedics. Today, consumer-based technologies such as smartphones and "wearables" store tremendous amounts of personal health data (known as "mHealth") that, when processed and contextualized, have the potential to open new windows of insight for orthopedic surgeons about their patients. In the present report, the landscape, role, and future technical considerations of mHealth and open architecture are defined, with particular examples in lower extremity arthroplasty. A limitation of the current mHealth landscape is the fragmentation of, and lack of interconnectivity between, the myriad available apps. The importance of the currently lacking open mHealth architecture is underscored by its promise of improved research, increased workflow efficiency, and value capture for the orthopedic surgeon. There exists an opportunity to leverage existing mobile health data for orthopedic surgeons, particularly those specializing in lower extremity arthroplasty, by transforming patient small data into insightful big data through the implementation of an "open" architecture that affords universal data standards and a global interconnected network. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. The Software Architecture of Global Climate Models

    NASA Astrophysics Data System (ADS)

    Alexander, K. A.; Easterbrook, S. M.

    2011-12-01

    It has become common to compare and contrast the output of multiple global climate models (GCMs), such as in the Climate Model Intercomparison Project Phase 5 (CMIP5). However, intercomparisons of the software architecture of GCMs are almost nonexistent. In this qualitative study of seven GCMs from Canada, the United States, and Europe, we attempt to fill this gap in research. We describe the various representations of the climate system as computer programs, and account for architectural differences between models. Most GCMs now practice component-based software engineering, where Earth system components (such as the atmosphere or land surface) are present as highly encapsulated sub-models. This architecture facilitates a mix-and-match approach to climate modelling that allows for convenient sharing of model components between institutions, but it also leads to difficulty when choosing where to draw the lines between systems that are not encapsulated in the real world, such as sea ice. We also examine different styles of couplers in GCMs, which manage interaction and data flow between components. Finally, we pay particular attention to the varying levels of complexity in GCMs, both between and within models. Many GCMs have some components that are significantly more complex than others, a phenomenon which can be explained by the respective institution's research goals as well as the origin of the model components. In conclusion, although some features of software architecture have been adopted by every GCM we examined, other features show a wide range of different design choices and strategies. These architectural differences may provide new insights into variability and spread between models.
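
    As a toy illustration of the component-and-coupler pattern described above, the following Python sketch shows two encapsulated sub-models that exchange fields only through a coupler. The class names, the relaxation "physics" and all constants are invented for illustration and are not taken from any of the GCMs studied.

      # Each sub-model hides its state behind a step() interface; the coupler
      # manages all data flow, so components never call each other directly.
      class Atmosphere:
          def __init__(self):
              self.temp = 288.0                      # air temperature (K)
          def step(self, sst):
              self.temp += 0.1 * (sst - self.temp)   # relax toward the ocean
              return self.temp

      class Ocean:
          def __init__(self):
              self.sst = 290.0                       # sea surface temp (K)
          def step(self, air_temp):
              self.sst += 0.01 * (air_temp - self.sst)
              return self.sst

      class Coupler:
          def __init__(self, atmos, ocean):
              self.atmos, self.ocean = atmos, ocean
          def run(self, n_steps):
              air, sst = self.atmos.temp, self.ocean.sst
              for _ in range(n_steps):
                  air = self.atmos.step(sst)         # hand SST to the atmosphere
                  sst = self.ocean.step(air)         # hand air temp to the ocean
              return air, sst

      print(Coupler(Atmosphere(), Ocean()).run(100)) # temperatures converging

    The mix-and-match benefit follows directly: any component honoring the step() contract can be swapped in without touching the others.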

  5. A hybrid wavelet transform based short-term wind speed forecasting approach.

    PubMed

    Wang, Jujie

    2014-01-01

    It is important to improve the accuracy of wind speed forecasting for wind park management and wind power utilization. In this paper, a novel hybrid approach known as WTT-TNN is proposed for wind speed forecasting. In the first step of the approach, a wavelet transform technique (WTT) is used to decompose the wind speed into an approximate scale and several detailed scales. In the second step, a two-hidden-layer neural network (TNN) is used to predict the approximate scale and the detailed scales, respectively. In order to find the optimal network architecture, the partial autocorrelation function is adopted to determine the number of neurons in the input layer, and an experimental simulation is made to determine the number of neurons within each hidden layer in the modeling process of the TNN. Afterwards, the final prediction is obtained as the sum of these per-scale predictions; a minimal sketch of this pipeline is given below. The WTT is thus employed to extract the different patterns of the wind speed and make them easier to forecast. To evaluate the performance of the proposed approach, it is applied to forecast wind speed in the Hexi Corridor of China. Simulation results in four different cases show that the proposed method increases wind speed forecasting accuracy.
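
    Here is that minimal sketch, assuming PyWavelets and scikit-learn. Each wavelet band is reconstructed as a same-length subseries (so the bands sum to the original signal), a two-hidden-layer network forecasts each band, and the forecasts are summed. The PACF-based choice of input lags is replaced by a fixed lag of 4, and all sizes and the synthetic series are illustrative.

      import numpy as np
      import pywt
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      wind = np.sin(np.linspace(0, 20, 512)) + 0.3 * rng.standard_normal(512)

      # Step 1: decompose into an approximate scale and detail scales, then
      # rebuild one same-length subseries per scale.
      coeffs = pywt.wavedec(wind, 'db4', level=3)
      bands = []
      for i in range(len(coeffs)):
          only_i = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
          bands.append(pywt.waverec(only_i, 'db4'))

      def lag_matrix(series, lags=4):
          X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
          return X, series[lags:]

      # Step 2: one two-hidden-layer network (the "TNN") per band; the final
      # forecast is the sum of the per-band one-step-ahead predictions.
      forecast = 0.0
      for band in bands:
          X, y = lag_matrix(band)
          net = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=2000, random_state=0)
          net.fit(X[:-1], y[:-1])
          forecast += net.predict(X[-1:])[0]

      print("one-step-ahead forecast:", round(float(forecast), 3))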

  6. How do bendy straws bend? A study of re-configurability of multi-stable corrugated shells

    NASA Astrophysics Data System (ADS)

    Bende, Nakul; Selden, Sarah; Evans, Arthur; Santangelo, Christian; Hayward, Ryan

    Shape-programmable systems have evolved to allow reconfiguration of structures through a variety of mechanisms, including swelling, stress relaxation, and thermal expansion. In particular, there has been recent interest in systems that exhibit bi-stability or multi-stability to achieve transformation between two or more pre-programmed states. Here, we study the ubiquitous architecture of corrugated shells, such as drinking straws or bellows, which has been well known for centuries. Some of these structures exhibit almost continuous stability across a wide range of reconfigurable shapes, but the underlying mechanisms are not well understood. To understand multi-stability in 'bendy-straw' structures, we study the unit bi-conical segment using experiments and finite element modeling to elucidate the key geometrical and mechanical factors responsible for its multi-stability. Simple transformations of a unit segment, such as a change in length or angle, can impart complex re-configurability to a structure containing many such units. The fundamental understanding provided of this simple multi-stable building block could yield improvements in shape re-configurability for a wide array of applications such as corrugated medical tubing, robotics, and deployable structures. NSF EFRI ODISSEI-1240441.

  7. A Hybrid Wavelet Transform Based Short-Term Wind Speed Forecasting Approach

    PubMed Central

    Wang, Jujie

    2014-01-01

    It is important to improve the accuracy of wind speed forecasting for wind park management and wind power utilization. In this paper, a novel hybrid approach known as WTT-TNN is proposed for wind speed forecasting. In the first step of the approach, a wavelet transform technique (WTT) is used to decompose the wind speed into an approximate scale and several detailed scales. In the second step, a two-hidden-layer neural network (TNN) is used to predict the approximate scale and the detailed scales, respectively. In order to find the optimal network architecture, the partial autocorrelation function is adopted to determine the number of neurons in the input layer, and an experimental simulation is made to determine the number of neurons within each hidden layer in the modeling process of the TNN. Afterwards, the final prediction is obtained as the sum of these per-scale predictions. The WTT is thus employed to extract the different patterns of the wind speed and make them easier to forecast. To evaluate the performance of the proposed approach, it is applied to forecast wind speed in the Hexi Corridor of China. Simulation results in four different cases show that the proposed method increases wind speed forecasting accuracy. PMID:25136699

  8. Modeling and Simulation of Phased Array Antennas to Support Next-Generation Satellite Design

    NASA Technical Reports Server (NTRS)

    Tchorowski, Nicole; Murawski, Robert; Manning, Robert; Fuentes, Michael

    2016-01-01

    Developing enhanced simulation capabilities has become a significant priority for the Space Communications and Navigation (SCaN) project at NASA as new space communications technologies are proposed to replace aging NASA communications assets, such as the Tracking and Data Relay Satellite System (TDRSS). When developing the architecture for these new space communications assets, it is important to develop updated modeling and simulation methodologies so that competing architectures can be weighed against one another and the optimal path forward can be determined. Many simulation tools have been developed at NASA for the simulation of single RF link budgets, or for the modeling and simulation of an entire network of spacecraft and their supporting SCaN network elements. However, the modeling capabilities are never fully complete, and as new technologies are proposed, gaps are identified. One such gap is the ability to rapidly develop high-fidelity simulation models of electronically steerable phased array systems. As future relay satellite architectures are proposed that include optical communications links, electronically steerable antennas will become more desirable due to the reduction in platform vibration compared with mechanically steerable devices. In this research, we investigate how modeling of these antennas can be introduced into our overall simulation and modeling structure. The ultimate goal of this research is two-fold: first, to enable NASA engineers to model various proposed architectures and determine which of them meets the given architectural requirements; second, given a set of communications link requirements for a proposed satellite architecture, to determine the optimal configuration for a phased array antenna. A variety of tools is available for modeling phased array antennas. To meet our stated goals, the first objective of this research is to compare the subset of tools available to us, trading off the modeling fidelity of each tool against simulation performance. When comparing several proposed architectures, higher-fidelity modeling may be desirable; however, when iterating a proposed set of communication link requirements across ranges of phased array configuration parameters, performance becomes a significant practical requirement. In either case, a minimum simulation fidelity must be met regardless of performance considerations, which is discussed in this research. Given a suitable set of phased array modeling tools, this research then focuses on integration with current SCaN modeling and simulation tools. While properly modeling the antenna elements of a system is vital, this is only a small part of the end-to-end communication path between a satellite and the supporting ground station and/or relay satellite assets. To properly model a proposed architecture, this toolset must be integrated with other commercial and government development tools so that the overall architecture can be examined in terms of communications, reliability, and cost. In this research, integration with previously developed communication tools is investigated.
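
    The starting point for any such phased array model, whatever the tool, is the array factor of an electronically steered aperture. The following numpy sketch evaluates it for a uniform linear array; the element count, half-wavelength spacing and steering angle are illustrative values, not parameters from this study.

      # Array factor of an N-element uniform linear array steered to theta0:
      # AF(theta) = sum_n exp(j * 2*pi * (d/lambda) * n * (sin(theta) - sin(theta0)))
      import numpy as np

      n_elems = 16
      d_over_lambda = 0.5                       # element spacing in wavelengths
      theta0 = np.deg2rad(20.0)                 # electronic steering angle
      theta = np.deg2rad(np.linspace(-90, 90, 721))

      n = np.arange(n_elems)
      phase = 2 * np.pi * d_over_lambda * np.outer(np.sin(theta) - np.sin(theta0), n)
      af = np.abs(np.exp(1j * phase).sum(axis=1)) / n_elems

      peak = np.rad2deg(theta[np.argmax(af)])
      print(f"main beam at {peak:.1f} deg, normalized peak {af.max():.2f}")
      # Expected: main beam near 20 deg with normalized peak 1.00

    Sweeping parameters such as element count and spacing through this expression is exactly the kind of iteration for which the abstract argues simulation performance matters.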

  9. Modeling and Verification of Dependable Electronic Power System Architecture

    NASA Astrophysics Data System (ADS)

    Yuan, Ling; Fan, Ping; Zhang, Xiao-fang

    The electronic power system can be viewed as a set of concurrently interacting subsystems that generate, transmit, and distribute electric power. The complex interaction among subsystems makes the design of an electronic power system complicated. Furthermore, in order to guarantee the safe generation and distribution of electric power, fault-tolerant mechanisms are incorporated in the system design to satisfy high reliability requirements, which makes the design of such systems more complicated still. We propose a dependable electronic power system architecture that provides a generic framework to guide the development of electronic power systems and ease their development complexity. In order to provide common idioms and patterns to system designers, we formally model the electronic power system architecture using the PVS formal language. Based on the PVS model of this system architecture, we formally verify the fault-tolerant properties of the architecture using the PVS theorem prover, which guarantees that the system architecture can satisfy high reliability requirements.

  10. Understanding Portability of a High-Level Programming Model on Contemporary Heterogeneous Architectures

    DOE PAGES

    Sabne, Amit J.; Sakdhnagool, Putt; Lee, Seyong; ...

    2015-07-13

    Accelerator-based heterogeneous computing is gaining momentum in the high-performance computing arena. However, the increased complexity of heterogeneous architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. Although the abstraction provided by OpenACC offers productivity, it raises questions concerning both functional and performance portability. In this article, the authors propose HeteroIR, a high-level, architecture-independent intermediate representation, to map high-level programming models, such as OpenACC, to heterogeneous architectures. They present a compiler approach that translates OpenACC programs into HeteroIR and accelerator kernels to obtain OpenACC functional portability. They then evaluate the performance portability obtained by OpenACC with their approach on 12 OpenACC programs on Nvidia CUDA, AMD GCN, and Intel Xeon Phi architectures. They study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.

  11. An effective detection algorithm for region duplication forgery in digital images

    NASA Astrophysics Data System (ADS)

    Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin

    2016-04-01

    Powerful image editing tools are very common and easy to use these days. This makes it easy to forge digital images by adding or removing content. In order to detect forgeries of this type, such as region duplication, we present an effective algorithm based on fixed-size block computation and the discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and a wavelet transform is applied for dimension reduction. Each block is then processed by a Fourier transform and represented by circular regions. Four features are extracted from each block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are detected according to the comparison metric results. The experimental results show that the proposed algorithm is computationally efficient due to its fixed-size circular-block architecture.
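
    A minimal sketch of this style of detector follows, assuming numpy and PyWavelets. The paper's Fourier-transform and circle-region features are simplified here to the low-band DWT coefficients of each block, and the block size, stride and threshold are illustrative.

      # Slide fixed-size blocks over the image, reduce each with a 2-D DWT,
      # sort the feature vectors lexicographically, and flag near-identical
      # neighbours in the sorted order as duplicated regions.
      import numpy as np
      import pywt

      def detect_duplicates(img, block=16, step=4, thresh=1e-6):
          feats, pos = [], []
          h, w = img.shape
          for y in range(0, h - block + 1, step):
              for x in range(0, w - block + 1, step):
                  LL, _ = pywt.dwt2(img[y:y + block, x:x + block], 'haar')
                  feats.append(LL.ravel())
                  pos.append((y, x))
          feats = np.array(feats)
          order = np.lexsort(feats.T[::-1])        # lexicographic row sort
          hits = []
          for a, b in zip(order[:-1], order[1:]):  # compare sorted neighbours
              if np.sum((feats[a] - feats[b]) ** 2) < thresh:
                  hits.append((pos[a], pos[b]))
          return hits

      img = np.random.rand(64, 64)
      img[40:56, 40:56] = img[8:24, 8:24]          # plant a duplicated region
      print(detect_duplicates(img))                # reports the copied pair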

  12. Improved decryption quality and security of a joint transform correlator-based encryption system

    NASA Astrophysics Data System (ADS)

    Vilardy, Juan M.; Millán, María S.; Pérez-Cabré, Elisabet

    2013-02-01

    Some image encryption systems based on modified double random phase encoding and a joint transform correlator architecture produce low-quality decrypted images and are vulnerable to a variety of attacks. In this work, we analyse the algorithms of some reported methods that optically implement double random phase encryption in a joint transform correlator. We show that it is possible to significantly improve the quality of the decrypted image by introducing a simple nonlinear operation in the encrypted function that contains the joint power spectrum. This nonlinearity also makes the system more resistant to chosen-plaintext attacks. We additionally explore the system's resistance against this type of attack when a variety of probability density functions are used to generate the two random phase masks of the encryption-decryption process. Numerical results are presented and discussed.
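
    For orientation, the following numpy sketch shows the double random phase encoding that a joint transform correlator implements optically, on a 1-D toy signal; the paper's nonlinear operation on the joint power spectrum is a refinement on top of this baseline and is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(2)
      N = 256
      f = rng.random(N)                            # toy 1-D "image" to encrypt
      M1 = np.exp(2j * np.pi * rng.random(N))      # input-plane phase mask
      M2 = np.exp(2j * np.pi * rng.random(N))      # Fourier-plane phase mask

      # Encrypt: mask, Fourier transform, mask again, inverse transform.
      psi = np.fft.ifft(np.fft.fft(f * M1) * M2)

      # Decrypt: undo each mask in reverse order with its conjugate.
      g = np.fft.ifft(np.fft.fft(psi) * np.conj(M2)) * np.conj(M1)

      print("max reconstruction error:", float(np.abs(g - f).max()))  # ~1e-15

    The linearity of this baseline is precisely what chosen-plaintext attacks exploit, which is why inserting a nonlinearity into the encrypted joint power spectrum hardens the system.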

  13. Shift-connected SIMD array architectures for digital optical computing systems, with algorithms for numerical transforms and partial differential equations

    NASA Astrophysics Data System (ADS)

    Drabik, Timothy J.; Lee, Sing H.

    1986-11-01

    The intrinsic parallelism of easily realizable optical SIMD arrays prompts their consideration for implementing highly structured algorithms for the numerical solution of multidimensional partial differential equations and the computation of fast numerical transforms. Attention is given to a system, comprising several spatial light modulators (SLMs), an optical read/write memory, and a functional block, which performs simple, space-invariant shifts on images with sufficient flexibility to implement the fastest known methods for partial differential equations as well as a wide variety of numerical transforms in two or more dimensions. Either fixed-point or floating-point arithmetic may be used. A performance of more than 1 billion floating-point operations/sec is projected for SLMs with 1000 x 1000 resolution operating at 1-MHz frame rates.
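
    The claim that space-invariant shifts suffice for PDE solvers can be seen in a few lines: one Jacobi sweep for Laplace's equation is just the average of four shifted copies of the whole grid. In this sketch numpy's roll() stands in for the optical shift operation; the grid size and boundary values are illustrative.

      import numpy as np

      u = np.zeros((64, 64))
      u[0, :] = 1.0                                 # hot top edge (boundary)

      for _ in range(500):                          # Jacobi relaxation sweeps
          # Whole-image, space-invariant shifts: the SIMD-array primitive.
          new = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
          new[0, :], new[-1, :] = 1.0, 0.0          # re-impose fixed edges
          new[:, 0], new[:, -1] = new[:, 1], new[:, -2]  # insulated sides
          u = new

      print("interior mean after relaxation:", round(float(u.mean()), 3))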

  14. Study on the “3F-in-1” Sustainable Reconstruction of Rural Architecture from Placeality Perspective--A Case Study of Caiyuan Village in Jingmen City, Hubei Province

    NASA Astrophysics Data System (ADS)

    Fangyu, Fu; Yu, Cao

    2017-05-01

    This paper takes Caiyuan Village in Jingmen City, Hubei Province, as its research object and analyzes the production, living and ecological functions of rural buildings, together with the inherent "3F-in-1" mechanism, from a placeality perspective. Based on a conceptual analysis of placeality and "3F-in-1", the paper clarifies the relationships among the value of the life, production and ecological functions in order to characterize the "3F-in-1" mode of rural architecture with placeality. On this basis, it puts forward a strategy for sustainable spatial transformation: (1) preserve the traditional overall spatial structure of villages; (2) improve the adaptability and function of rural architecture; (3) extend rural social culture; and (4) pay attention to local perception, with a view to exploring an organic system design method for expressing placeality and the sustainable development of the beautiful countryside.

  15. Materials science and architecture

    NASA Astrophysics Data System (ADS)

    Bechthold, Martin; Weaver, James C.

    2017-12-01

    Materiality — the use of various materials in architecture — has been fundamental to the design and construction of buildings, and materials science has traditionally responded to needs formulated by design, engineering and construction professionals. Material properties and processes are shaping buildings and influencing how they perform. The advent of technologies such as digital fabrication, robotics and 3D printing have not only accelerated the development of new construction solutions, but have also led to a renewed interest in materials as a catalyst for novel architectural design. In parallel, materials science has transformed from a field that explains materials to one that designs materials from the bottom up. The conflation of these two trends is giving rise to materials-based design research in which architects, engineers and materials scientists work as partners in the conception of new materials systems and their applications. This Review surveys this development for different material classes (wood, ceramics, metals, concrete, glass, synthetic composites and polymers), with an emphasis on recent trends and innovations.

  16. UML Profiles for Design Decisions and Non-Functional Requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Liming; Gorton, Ian

    2007-06-30

    A software architecture is composed of a collection of design decisions. Each design decision helps or hinders certain non-functional requirements (NFRs). Current software architecture views focus on expressing the components and connectors in the system. Design decisions and their relationships with non-functional requirements are often captured in separate design documentation and are not explicitly expressed in any view. This disassociation makes architecture comprehension and architecture evolution harder. In this paper, we propose a UML profile for modeling design decisions and an associated UML profile for modeling non-functional requirements in a generic way. The two UML profiles treat design decisions and non-functional requirements as first-class elements. Modeled design decisions always refer to existing architectural elements and thus maintain traceability between the two. We provide a mechanism for checking consistency over this traceability. An exemplar is given.

  17. Sirepo - Warp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagler, Robert; Moeller, Paul

    Sirepo is an open source framework for cloud computing. The graphical user interface (GUI) for Sirepo, also known as the client, executes in any HTML5-compliant web browser on any computing platform, including tablets. The client is built in JavaScript, making use of the following open source libraries: Bootstrap, which is fundamental for cross-platform web applications; AngularJS, which provides a model–view–controller (MVC) architecture and GUI components; and D3.js, which provides interactive plots and data-driven transformations. The Sirepo server is built on the following Python technologies: Flask, which is a lightweight framework for web development; Jinja, which is a secure and widely used templating language; and Werkzeug, a utility library that is compliant with the WSGI standard. We use Nginx as the HTTP server and proxy, which provides a scalable event-driven architecture. The physics codes supported by Sirepo execute inside a Docker container. One of the codes supported by Sirepo is Warp. Warp is a particle-in-cell (PIC) code designed to simulate high-intensity charged particle beams and plasmas in both the electrostatic and electromagnetic regimes, with a wide variety of integrated physics models and diagnostics. At present, Sirepo supports a small subset of Warp's capabilities. Warp is open source and is part of the Berkeley Lab Accelerator Simulation Toolkit.
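
    As a purely illustrative sketch of the server pattern described above (not Sirepo's actual API; the route name and payload fields are invented here), a minimal Flask service accepting a simulation request might look like this:

      from flask import Flask, jsonify, request

      app = Flask(__name__)

      @app.route("/run-simulation", methods=["POST"])  # hypothetical endpoint
      def run_simulation():
          params = request.get_json()                  # e.g. {"code": "warp", ...}
          # A real deployment would hand this to a containerized Warp run;
          # here we simply acknowledge the request to show the round trip.
          return jsonify({"status": "queued", "params": params})

      if __name__ == "__main__":
          app.run(port=8000)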

  18. The genetic architecture of economic and political preferences

    PubMed Central

    Benjamin, Daniel J.; Cesarini, David; van der Loos, Matthijs J. H. M.; Dawes, Christopher T.; Koellinger, Philipp D.; Magnusson, Patrik K. E.; Chabris, Christopher F.; Conley, Dalton; Laibson, David; Johannesson, Magnus; Visscher, Peter M.

    2012-01-01

    Preferences are fundamental building blocks in all models of economic and political behavior. We study a new sample of comprehensively genotyped subjects with data on economic and political preferences and educational attainment. We use dense single nucleotide polymorphism (SNP) data to estimate the proportion of variation in these traits explained by common SNPs and to conduct genome-wide association study (GWAS) and prediction analyses. The pattern of results is consistent with findings for other complex traits. First, the estimated fraction of phenotypic variation that could, in principle, be explained by dense SNP arrays is around one-half of the narrow heritability estimated using twin and family samples. The molecular-genetic–based heritability estimates, therefore, partially corroborate evidence of significant heritability from behavior genetic studies. Second, our analyses suggest that these traits have a polygenic architecture, with the heritable variation explained by many genes with small effects. Our results suggest that most published genetic association studies with economic and political traits are dramatically underpowered, which implies a high false discovery rate. These results convey a cautionary message for whether, how, and how soon molecular genetic data can contribute to, and potentially transform, research in social science. We propose some constructive responses to the inferential challenges posed by the small explanatory power of individual SNPs. PMID:22566634
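
    The SNP-based estimates referred to above are typically obtained from a variance-component model of the GREML type; the sketch below states the standard form in LaTeX, which may differ in detail from the authors' exact specification.

      % Standard GREML-type variance-component model (general form).
      % A is the genomic relationship matrix estimated from dense SNP data.
      y = X\beta + g + \varepsilon, \qquad
      g \sim \mathcal{N}(0, A\,\sigma_g^2), \qquad
      \varepsilon \sim \mathcal{N}(0, I\,\sigma_\varepsilon^2)

      % The SNP heritability is the share of variance captured by g:
      h^2_{\mathrm{SNP}} = \frac{\sigma_g^2}{\sigma_g^2 + \sigma_\varepsilon^2}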

  19. The genetic architecture of economic and political preferences.

    PubMed

    Benjamin, Daniel J; Cesarini, David; van der Loos, Matthijs J H M; Dawes, Christopher T; Koellinger, Philipp D; Magnusson, Patrik K E; Chabris, Christopher F; Conley, Dalton; Laibson, David; Johannesson, Magnus; Visscher, Peter M

    2012-05-22

    Preferences are fundamental building blocks in all models of economic and political behavior. We study a new sample of comprehensively genotyped subjects with data on economic and political preferences and educational attainment. We use dense single nucleotide polymorphism (SNP) data to estimate the proportion of variation in these traits explained by common SNPs and to conduct genome-wide association study (GWAS) and prediction analyses. The pattern of results is consistent with findings for other complex traits. First, the estimated fraction of phenotypic variation that could, in principle, be explained by dense SNP arrays is around one-half of the narrow heritability estimated using twin and family samples. The molecular-genetic-based heritability estimates, therefore, partially corroborate evidence of significant heritability from behavior genetic studies. Second, our analyses suggest that these traits have a polygenic architecture, with the heritable variation explained by many genes with small effects. Our results suggest that most published genetic association studies with economic and political traits are dramatically underpowered, which implies a high false discovery rate. These results convey a cautionary message for whether, how, and how soon molecular genetic data can contribute to, and potentially transform, research in social science. We propose some constructive responses to the inferential challenges posed by the small explanatory power of individual SNPs.

  20. Patient-Specific Deep Architectural Model for ECG Classification

    PubMed Central

    Luo, Kan; Cuschieri, Alfred

    2017-01-01

    Heartbeat classification is a crucial step for arrhythmia diagnosis during electrocardiographic (ECG) analysis. The new scenario of wireless body sensor network- (WBSN-) enabled ECG monitoring puts forward a higher-level demand for this traditional ECG analysis task. Previously reported methods mainly addressed this requirement with shallow structured classifiers and expert-designed features. In this study, the modified frequency slice wavelet transform (MFSWT) was first employed to produce a time-frequency image of the heartbeat signal, and a deep learning (DL) method was then applied for heartbeat classification. We propose a novel model incorporating automatic feature abstraction and a deep neural network (DNN) classifier. Features are automatically abstracted by a stacked denoising auto-encoder (SDA) from the time-frequency image, and the DNN classifier is constructed from an encoder layer of the SDA and a softmax layer. In addition, a deterministic patient-specific heartbeat classifier is achieved by fine-tuning on heartbeat samples that include a small subset of individual samples. The performance of the proposed model was evaluated on the MIT-BIH arrhythmia database. Results showed that an overall accuracy of 97.5% was achieved using the proposed model, confirming that the proposed DNN model is a powerful tool for heartbeat pattern recognition. PMID:29065597
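
    A minimal sketch of the two-stage training described above follows, assuming PyTorch; the data are synthetic stand-ins for the MFSWT time-frequency images, and all dimensions, learning rates and epoch counts are illustrative rather than the authors' settings.

      import torch
      import torch.nn as nn

      n, d, classes = 256, 400, 5                    # samples, input dim, classes
      x = torch.rand(n, d)                           # stand-in time-freq images
      y = torch.randint(0, classes, (n,))

      enc = nn.Sequential(nn.Linear(d, 64), nn.ReLU())   # SDA encoder layer
      dec = nn.Linear(64, d)

      # Stage 1: denoising pre-training, reconstructing x from a corrupted copy.
      opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
      for _ in range(200):
          noisy = x + 0.1 * torch.randn_like(x)
          loss = nn.functional.mse_loss(dec(enc(noisy)), x)
          opt.zero_grad(); loss.backward(); opt.step()

      # Stage 2: stack a softmax head on the encoder and fine-tune end to end
      # (patient-specific fine-tuning on individual beats would reuse this loop).
      clf = nn.Sequential(enc, nn.Linear(64, classes))
      opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
      for _ in range(200):
          loss = nn.functional.cross_entropy(clf(x), y)  # softmax is implicit
          opt.zero_grad(); loss.backward(); opt.step()

      print("train accuracy:", (clf(x).argmax(1) == y).float().mean().item())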
