Science.gov

Sample records for architecture supporting self-organisation

  1. Introducing Live ePortfolios to Support Self Organised Learning

    ERIC Educational Resources Information Center

    Kirkham, Thomas; Winfield, Sandra; Smallwood, Angela; Coolin, Kirstie; Wood, Stuart; Searchwell, Louis

    2009-01-01

    This paper presents a platform on which a new generation of applications targeted to aid the self-organised learner can be presented. The new application is enabled by innovations in trust-based security of data built upon emerging infrastructures to aid federated data access in the UK education sector. Within the proposed architecture, users and…

  2. Effects of the ISIS Recommender System for Navigation Support in Self-Organised Learning Networks

    ERIC Educational Resources Information Center

    Drachsler, Hendrik; Hummel, Hans; van den Berg, Bert; Eshuis, Jannes; Waterink, Wim; Nadolski, Rob; Berlanga, Adriana; Boers, Nanda; Koper, Rob

    2009-01-01

    The need to support users of the Internet with the selection of information is becoming more important. Learners in complex, self-organising Learning Networks have similar problems and need guidance to find and select the most suitable learning activities in order to attain their lifelong learning goals in the most efficient way. Several research…

  3. Support-vector-based emergent self-organising approach for emotional understanding

    NASA Astrophysics Data System (ADS)

    Nguwi, Yok-Yen; Cho, Siu-Yeung

    2010-12-01

    This study discusses the computational analysis of general emotion understanding from a questionnaire methodology. The questionnaire method approaches the subject by investigating the real experience that accompanied the emotions, whereas laboratory approaches are generally associated with exaggerated elements. We adopted a connectionist model called the support-vector-based emergent self-organising map (SVESOM) to analyse emotion profiling from the questionnaire method. The SVESOM first identifies the important variables by ranking discriminative features. The classifier then performs classification based on the selected features. Experimental results show that the top-ranked features are in line with the work of Scherer and Wallbott [(1994), 'Evidence for Universality and Cultural Variation of Differential Emotion Response Patterning', Journal of Personality and Social Psychology, 66, 310-328], which approached the emotions physiologically. The performance measures show that using the full feature set can degrade classification performance, whereas the selected features provide superior results in terms of accuracy and generalisation.

  4. Semi- and Fully Self-Organised Teams

    NASA Astrophysics Data System (ADS)

    Kumlander, Deniss

    Most modern companies realise that the best way to improve stability and earnings in a global, rapidly changing world is to innovate and produce software that will be fully used and appreciated by customers. The key factors on this road are personnel and processes. In this paper we review self-organised teams, proposing several new approaches and constraints that ensure such teams' stability and efficiency. The paper also introduces semi-self-organised teams, which in the short-term perspective are as reliable as fully self-organised teams and much simpler to organise and support.

  5. Self-Organisation and Capacity Building: Sustaining the Change

    ERIC Educational Resources Information Center

    Bain, Alan; Walker, Allan; Chan, Anissa

    2011-01-01

    Purpose: The paper aims to describe the application of theoretical principles derived from a study of self-organisation and complex systems theory and their application to school-based capacity building to support planned change. Design/methodology/approach: The paper employs a case example in a Hong Kong School to illustrate the application of…

  6. Self-organising structures of lecithin

    NASA Astrophysics Data System (ADS)

    Shchipunov, Yurii A.

    1997-04-01

    Modern concepts of the self-assembly of amphiphiles are considered on the example of self-organising structures of the natural lecithin. Binary, ternary and multicomponent systems are discussed. A considerable part of the review is devoted to the peculiarities of self-organisation of this phospholipid in non-aqueous media and to the role of polar inorganic solvents. Virtually all of the structures formed by lecithin are examined: micelles, swollen micelles, microemulsions, emulsions, organogels, vesicles (liposomes), and lyotropic liquid crystals. In each specific case, attention is drawn to the dependence of self-assembly at the macroscopic level on interactions at the molecular level, shape of molecules, and their solvation and packing at the interface. The self-organising lecithin structures formed in the interfacial area of immiscible liquids in the course of unrestricted adsorption from the bulk of non-aqueous solution are considered. The bibliography includes 282 references.

  7. Ising, Schelling and self-organising segregation

    NASA Astrophysics Data System (ADS)

    Stauffer, D.; Solomon, S.

    2007-06-01

    The similarities between phase separation in physics and residential segregation by preference in the Schelling model of 1971 are reviewed. Also, new computer simulations of asymmetric interactions different from the usual Ising model are presented, showing spontaneous magnetisation (=self-organising segregation) and in one case a sharp phase transition.
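    The preference dynamics being compared here can be illustrated with a minimal Schelling-style simulation (a hedged sketch: the grid size, 50% tolerance threshold, and random-relocation rule are illustrative choices, not taken from the asymmetric-interaction models of the paper):

```python
import random

def schelling(n=20, frac_empty=0.1, threshold=0.5, steps=50_000, seed=1):
    """Minimal Schelling (1971) segregation model on a torus.

    Agents of two types relocate to random empty cells whenever fewer than
    `threshold` of their occupied neighbours share their type.  Returns the
    mean like-neighbour fraction, a simple segregation index.
    """
    rng = random.Random(seed)
    cells = [0] * int(n * n * frac_empty)            # 0 marks an empty cell
    rest = n * n - len(cells)
    cells += [1] * (rest // 2) + [2] * (rest - rest // 2)
    rng.shuffle(cells)
    grid = [cells[i * n:(i + 1) * n] for i in range(n)]

    def like_frac(i, j):
        me, like, occupied = grid[i][j], 0, 0
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == dj == 0:
                    continue
                v = grid[(i + di) % n][(j + dj) % n]
                if v:
                    occupied += 1
                    like += (v == me)
        return like / occupied if occupied else 1.0

    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if grid[i][j] and like_frac(i, j) < threshold:   # unhappy agent
            ei, ej = rng.randrange(n), rng.randrange(n)
            if not grid[ei][ej]:                         # move to empty cell
                grid[ei][ej], grid[i][j] = grid[i][j], 0

    occupied = [(i, j) for i in range(n) for j in range(n) if grid[i][j]]
    return sum(like_frac(i, j) for i, j in occupied) / len(occupied)
```

    Starting from a well-mixed grid (index near 0.5), the index rises well above its mixed value, which is the analogue of the spontaneous magnetisation mentioned in the abstract.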

  8. Landscape self organisation: Modelling Sediment trains

    NASA Astrophysics Data System (ADS)

    Schoorl, J. M.; Temme, A. J. A. M.; Veldkamp, A.

    2012-04-01

    Rivers tend to develop towards an equilibrium length profile, independently of exogenous factors. In general, although still under debate, this so-called self-organisation is assumed to be caused by simple feedbacks between sedimentation and erosion. Erosion correlates positively with gradient and discharge, and sedimentation negatively. With the LAPSUS model, which was run for the catchment of the Sabinal, a small river in the south of Spain, this interplay of erosion and sedimentation results in sediment pulses (sequences of incision and sedimentation through time). These pulses are visualised in a short movie (see http://www.youtube.com/watch?v=V5LDUMvYZxU). In this case the LAPSUS model run did not take climate, base level or tectonics into account, so the pulses can be considered independent of them. Furthermore, different scenarios show that the existence of the pulses is independent of precipitation, erodibility and sedimentation rate, although these control the number and shape of the pulses. A fieldwork check showed the plausibility of the occurrence of these sediment pulses. We conclude that the pulses as modelled with LAPSUS are indeed the consequence of the feedbacks between erosion and sedimentation and do not depend on exogenous factors. Keywords: Landscape self-organisation, Erosion, Deposition, LAPSUS, Modelling

  9. Individual variation by self-organisation.

    PubMed

    Hemelrijk, C K; Wantia, J

    2005-02-01

    In this paper, we show that differences in dominance and spatial centrality of individuals in a group may arise through self-organisation. Our instrument is a model, called DomWorld, that represents two traits that are often found in animals, namely grouping and competing. In this model individual differences grow under the following conditions: (1) when the intensity of aggression increases and grouping becomes denser, (2) when the degree of sexual dimorphism in fighting power increases. In this case the differences among females compared to males grow too, (3) when, upon encountering another individual, the tendency to attack is 'obligate' and not conditional, namely 'sensitive to risks'. Results resemble phenomena described for societies of primates, mice, birds and pigs. PMID:15652260
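    DomWorld couples spatial grouping with competition; the dominance feedback alone, stripped of the spatial component, can be sketched as a winner–loser update (a hedged sketch with hypothetical parameter values, chosen only to show how individual differences grow from identical starting conditions):

```python
import random

def dominance_sketch(n=10, fights=2000, intensity=0.8, seed=2):
    """Self-organising dominance hierarchy via the winner-loser effect.

    Each fight transfers dominance in proportion to how unexpected the
    outcome was, so winning raises the chance of winning again; individual
    differences emerge even though all individuals start equal.
    """
    rng = random.Random(seed)
    dom = [1.0] * n                                  # identical starting values
    for _ in range(fights):
        a, b = rng.sample(range(n), 2)
        p_win = dom[a] / (dom[a] + dom[b])           # relative dominance
        if rng.random() < p_win:                     # a wins
            delta = intensity * (1 - p_win)
        else:                                        # a loses
            delta = -intensity * p_win
        dom[a] = max(dom[a] + delta, 0.01)
        dom[b] = max(dom[b] - delta, 0.01)
    return dom

ranks = dominance_sketch()
```

    The positive feedback produces a steep hierarchy, and raising `intensity` steepens it, mirroring condition (1) in the abstract.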

  10. Self-Organised Criticality in Astrophysical Accretion Systems

    NASA Astrophysics Data System (ADS)

    Dendy, R. O.; Helander, P.; Tagger, M.

    Self-organised criticality (SOC) has been proposed as a potentially powerful unifying paradigm for interpreting non-diffusive avalanche-type transport in laboratory, space and astrophysical plasmas. After reviewing the most promising astrophysical sites where SOC might be observable, we consider the theoretical arguments for supposing that SOC can occur in accretion discs. Perhaps the most rigorous evidence is provided by numerical modelling of energy dissipation due to magnetohydrodynamic turbulence in accretion discs by G. Geertsema & A. Achterberg (Astron. Astrophys. 255, 427 (1992)); we investigate how “sandpile”-type dynamics arise in this model. It is concluded that the potential sites for SOC in accretion systems are numerous and observationally accessible, and that theoretical support for the possible occurrence of SOC can be derived from first principles.
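    The "sandpile"-type dynamics investigated here can be illustrated with the classic Bak–Tang–Wiesenfeld cellular automaton (a generic SOC sketch; the grid size and drive count are illustrative, and this is not the Geertsema–Achterberg disc model itself):

```python
import random

def drive_and_relax(grid, i, j, zc=4):
    """Drop one grain at (i, j), topple every site that reaches the critical
    height zc, and return the avalanche size (number of topplings)."""
    n = len(grid)
    grid[i][j] += 1
    size = 0
    unstable = [(i, j)]
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < zc:
            continue
        grid[x][y] -= zc
        size += 1
        if grid[x][y] >= zc:                 # may still be unstable
            unstable.append((x, y))
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n:  # grains leave at the edges
                grid[nx][ny] += 1
                if grid[nx][ny] >= zc:
                    unstable.append((nx, ny))
    return size

rng = random.Random(0)
n = 16
grid = [[0] * n for _ in range(n)]
sizes = [drive_and_relax(grid, rng.randrange(n), rng.randrange(n))
         for _ in range(20_000)]
```

    After a transient, avalanche sizes span many scales with a heavy-tailed distribution — the non-diffusive, avalanche-type transport the abstract refers to.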

  11. Self organising maps for visualising and modelling

    PubMed Central

    2012-01-01

    The paper describes the motivation of SOMs (Self Organising Maps) and how they have become more accessible thanks to the wide availability of modern, more powerful, cost-effective computers. Their advantages compared to Principal Components Analysis and Partial Least Squares are discussed: SOMs can be applied to non-linear data, are less dependent on least-squares solutions and on normality of errors, and are less influenced by outliers. In addition, there is a wide variety of intuitive methods for visualisation that allow full use of the map space. Modern problems in analytical chemistry, including applications to cultural heritage studies and to environmental, metabolomic and biological problems, result in complex datasets. Methods for visualising maps are described, including best matching units, hit histograms, unified distance matrices and component planes. Supervised SOMs for classification, including multifactor data and variable selection, are discussed, as is their use in Quality Control. The paper is illustrated using four case studies, namely the Near Infrared spectroscopy of food, the thermal analysis of polymers, metabolomic analysis of saliva using NMR, and on-line HPLC for pharmaceutical process monitoring. PMID:22594434
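    A minimal SOM of the kind motivated here, with the best-matching-unit search and hit-histogram counting mentioned in the abstract, might look as follows (a stdlib-only sketch; the map size and the learning-rate and neighbourhood schedules are illustrative):

```python
import math
import random

def bmu(w, x):
    """Best matching unit: grid node whose codebook vector is closest to x."""
    return min(((i, j) for i in range(len(w)) for j in range(len(w[0]))),
               key=lambda p: sum((w[p[0]][p[1]][k] - x[k]) ** 2
                                 for k in range(len(x))))

def train_som(data, rows=4, cols=4, epochs=30, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small rectangular self-organising map; returns the codebook."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = [[[rng.random() for _ in range(dim)] for _ in range(cols)]
         for _ in range(rows)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5      # shrinking neighbourhood
        for x in rng.sample(data, len(data)):
            bi, bj = bmu(w, x)
            for i in range(rows):
                for j in range(cols):
                    h = math.exp(-((i - bi) ** 2 + (j - bj) ** 2)
                                 / (2 * sigma * sigma))
                    for k in range(dim):
                        w[i][j][k] += lr * h * (x[k] - w[i][j][k])
    return w
```

    A hit histogram is then simply a count of how many samples map to each grid node; well-separated clusters land on distant nodes of the map.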

  12. The Self-Organising Seismic Early Warning Information Network

    NASA Astrophysics Data System (ADS)

    Kühnlenz, F.; Eveslage, I.; Fischer, J.; Fleming, K. M.; Lichtblau, B.; Milkereit, C.; Picozzi, M.

    2009-12-01

    The Self-Organising Seismic Early Warning Information Network (SOSEWIN) represents a new approach for Earthquake Early Warning Systems (EEWS), which takes advantage of novel wireless communications technologies without the need for a planned, centralised infrastructure. It also sets out to overcome the problem of insufficient node density, which typically affects existing early warning systems, by building the SOSEWIN seismological sensing units from low-cost components (generally bought "off-the-shelf"), with each unit initially costing hundreds of Euros, in contrast to thousands to tens of thousands for standard seismological stations. The reduced sensitivity of the new sensing units arising from the use of lower-cost components will be compensated by the network's density, which in the future is expected to number hundreds to thousands of units over areas currently served by the order of tens of standard stations. Its robustness, independence of infrastructure, and spontaneous extensibility, due to its self-healing/self-organising character when sensors are removed, fail or are added, make SOSEWIN potentially useful for various use cases, e.g. monitoring of building structures or seismic microzonation. Nevertheless, its main purpose is earthquake early warning, for which the ground motion is continuously monitored by conventional three-component accelerometers and processed within a station. Based on this, the network itself decides whether an event is detected through cooperating stations. SEEDLink is used to store and provide access to the sensor data. Experiences and selected experiment results with the SOSEWIN prototype installation in the Ataköy district of Istanbul (Turkey) are presented. SOSEWIN also addresses the needs of earthquake task forces, which want to set up a temporary seismic network rapidly, with lightweight stations, to record aftershocks. The wireless and self-organising character of this sensor network is of great value for this purpose.

  13. The Self-Organising Seismic Early Warning Information Network: Scenarios

    NASA Astrophysics Data System (ADS)

    Kühnlenz, F.; Fischer, J.; Eveslage, I.

    2009-04-01

    SAFER and EDIM working groups, the Department of Computer Science, Humboldt-Universität zu Berlin, Berlin, Germany, and Section 2.1 Earthquake Risk and Early Warning, GFZ German Research Centre for Geosciences, Germany. Contact: Frank Kühnlenz, kuehnlenz@informatik.hu-berlin.de. The Self-Organising Seismic Early Warning Information Network (SOSEWIN) represents a new approach for Earthquake Early Warning Systems (EEWS), which takes advantage of novel wireless communications technologies without the need for a planned, centralised infrastructure. It also sets out to overcome the problem of insufficient node density, which typically affects existing early warning systems, by building the SOSEWIN seismological sensing units from low-cost components (generally bought "off-the-shelf"), with each unit initially costing hundreds of Euros, in contrast to thousands to tens of thousands for standard seismological stations. The reduced sensitivity of the new sensing units arising from the use of lower-cost components will be compensated by the network's density, which in the future is expected to number hundreds to thousands of units over areas currently served by the order of tens of standard stations. Its robustness, independence of infrastructure, and spontaneous extensibility, due to its self-healing/self-organising character when sensors are removed, fail or are added, make SOSEWIN potentially useful for various use cases, e.g. monitoring of building structures or seismic microzonation. Nevertheless, its main purpose is earthquake early warning, for which the ground motion is continuously monitored by conventional three-component accelerometers. SEEDLink is used to store and provide access to the sensor data. SOSEWIN also addresses the needs of earthquake task forces, which want to set up a temporary seismic network rapidly, with lightweight stations, to record aftershocks. The wireless and self-organising character of this sensor network should be of great value for this purpose.

  14. Assured Mission Support Space Architecture (AMSSA) study

    NASA Technical Reports Server (NTRS)

    Hamon, Rob

    1993-01-01

    The assured mission support space architecture (AMSSA) study was conducted with the overall goal of developing a long-term requirements-driven integrated space architecture to provide responsive and sustained space support to the combatant commands. Although derivation of an architecture was the focus of the study, there are three significant products from the effort. The first is a philosophy that defines the necessary attributes for the development and operation of space systems to ensure an integrated, interoperable architecture that, by design, provides a high degree of combat utility. The second is the architecture itself; based on an interoperable system-of-systems strategy, it reflects a long-range goal for space that will evolve as user requirements adapt to a changing world environment. The third product is the framework of a process that, when fully developed, will provide essential information to key decision makers for space systems acquisition in order to achieve the AMSSA goal. It is a categorical imperative that military space planners develop space systems that will act as true force multipliers. AMSSA provides the philosophy, process, and architecture that, when integrated with the DOD requirements and acquisition procedures, can yield an assured mission support capability from space to the combatant commanders. An important feature of the AMSSA initiative is the participation by every organization that has a role or interest in space systems development and operation. With continued community involvement, the concept of the AMSSA will become a reality. In summary, AMSSA offers a better way to think about space (philosophy) that can lead to the effective utilization of limited resources (process) with an infrastructure designed to meet the future space needs (architecture) of our combat forces.

  15. Architectural prospects for lunar mission support

    NASA Technical Reports Server (NTRS)

    Cesarone, Robert J.; Abraham, Douglas S.; Deutsch, Leslie J.; Noreen, Gary K.; Soloff, Jason A.

    2005-01-01

    A top-level architectural approach facilitates the provision of communications and navigation support services to the anticipated lunar mission set. Following the time-honored principles of systems architecting, i.e., form follows function, the first step is to define the functions or services to be provided, both in terms of character and degree. These will include communication as well as tracking and navigation services.

  16. Self-organised clustering for road extraction in classified imagery

    NASA Astrophysics Data System (ADS)

    Doucette, Peter; Agouris, Peggy; Stefanidis, Anthony; Musavi, Mohamad

    The extraction of road networks from digital imagery is a fundamental image analysis operation. Common problems encountered in automated road extraction include high sensitivity to typical scene clutter in high-resolution imagery, and inefficiency in meaningfully exploiting multispectral imagery (MSI). With a ground sample distance (GSD) of less than 2 m per pixel, roads can be broadly described as elongated regions. We propose an approach of elongated region-based analysis for 2D road extraction from high-resolution imagery, which is suitable for MSI and is insensitive to conventional edge definition. A self-organising road map (SORM) algorithm is presented, inspired by a specialised variation of Kohonen's self-organising map (SOM) neural network algorithm. A spectrally classified high-resolution image is assumed to be the input for our analysis. Our approach proceeds by performing spatial cluster analysis as a mid-level processing technique. This allows us to improve tolerance to road clutter in high-resolution images, and to minimise the effect of common classification errors on road extraction. This approach is designed in consideration of the emerging trend towards high-resolution multispectral sensors. Preliminary results demonstrate robust road extraction ability due to the non-local approach, even when presented with noisy input.

  17. How to Trigger Emergence and Self-Organisation in Learning Networks

    NASA Astrophysics Data System (ADS)

    Brouns, Francis; Fetter, Sibren; van Rosmalen, Peter

    The previous chapters of this section discuss why the social structure of Learning Networks is important and present guidelines on how to maintain and allow the emergence of communities in Learning Networks. Chapter 2 explains how Learning Networks rely on social interaction and the active participation of the participants. Chapter 3 then continues by presenting guidelines and policies that should be incorporated into Learning Network Services in order to maintain existing communities by creating conditions that promote social interaction and knowledge sharing. Chapter 4 discusses the conditions required for knowledge sharing to occur and for communities to self-organise and emerge. As pointed out in Chap. 4, ad-hoc transient communities facilitate the emergence of social interaction in Learning Networks, self-organising them into communities, taking into account personal characteristics, community characteristics and general guidelines. As explained in Chap. 4, community members would benefit from a service that brings suitable people together for a specific purpose, because it allows the participant to focus on the knowledge-sharing process by reducing the effort or costs involved. In the current chapter, we describe an example of a peer-support Learning Network Service based on the mechanism of peer tutoring in ad-hoc transient communities.

  18. Formal versus self-organised knowledge systems: A network approach

    NASA Astrophysics Data System (ADS)

    Masucci, A. P.

    2011-11-01

    In this work, we consider the topological analysis of symbolic formal systems in the framework of network theory. In particular, we analyse the network extracted by Principia Mathematica of B. Russell and A.N. Whitehead, where the vertices are the statements and two statements are connected with a directed link if one statement is used to demonstrate the other one. We compare the obtained network with other directed acyclic graphs, such as a scientific citation network and a stochastic model. We also introduce a novel topological ordering for directed acyclic graphs and we discuss its properties with respect to the classical one. The main result is the observation that formal systems of knowledge topologically behave similarly to self-organised systems.
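    The classical topological ordering of a directed acyclic graph, against which the abstract's novel ordering is discussed, can be sketched with Kahn's algorithm (the node names below are hypothetical; in the Principia Mathematica network an edge u → v would mean statement u is used to demonstrate statement v):

```python
from collections import deque

def topological_order(nodes, edges):
    """Kahn's algorithm: a classical topological ordering of a DAG."""
    successors = {n: [] for n in nodes}
    indegree = {n: 0 for n in nodes}
    for u, v in edges:                 # u -> v: u is used to demonstrate v
        successors[u].append(v)
        indegree[v] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in successors[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    if len(order) != len(nodes):
        raise ValueError("graph contains a cycle, so it is not a DAG")
    return order
```

    Every statement then appears after all of the statements used to demonstrate it, which is the property any topological ordering of such a proof network must satisfy.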

  19. A Self-Organising Model of Thermoregulatory Huddling

    PubMed Central

    Glancy, Jonathan; Groß, Roderich; Stone, James V.; Wilson, Stuart P.

    2015-01-01

    Endotherms such as rats and mice huddle together to keep warm. The huddle is considered to be an example of a self-organising system, because complex properties of the collective group behaviour are thought to emerge spontaneously through simple interactions between individuals. Groups of rodent pups display two such emergent properties. First, huddling undergoes a ‘phase transition’, such that pups start to aggregate rapidly as the temperature of the environment falls below a critical temperature. Second, the huddle maintains a constant ‘pup flow’, where cooler pups at the periphery continually displace warmer pups at the centre. We set out to test whether these complex group behaviours can emerge spontaneously from local interactions between individuals. We designed a model using a minimal set of assumptions about how individual pups interact, by simply turning towards heat sources, and show in computer simulations that the model reproduces the first emergent property—the phase transition. However, this minimal model tends to produce an unnatural behaviour where several smaller aggregates emerge rather than one large huddle. We found that an extension of the minimal model to include heat exchange between pups allows the group to maintain one large huddle but eradicates the phase transition, whereas inclusion of an additional homeostatic term recovers the phase transition for large huddles. As an unanticipated consequence, the extended model also naturally gave rise to the second observed emergent property—a continuous pup flow. The model therefore serves as a minimal description of huddling as a self-organising system, and as an existence proof that group-level huddling dynamics emerge spontaneously through simple interactions between individuals. We derive a specific testable prediction: Increasing the capacity of the individual to generate or conserve heat will increase the range of ambient temperatures over which adaptive thermoregulatory huddling

  20. Resilience of Self-Organised and Top-Down Planned Cities—A Case Study on London and Beijing Street Networks

    PubMed Central

    Wang, Jiaqiu

    2015-01-01

    The success or failure of the street network depends on its reliability. In this article, using resilience analysis, the author studies how the shape and appearance of street networks in self-organised and top-down planned cities influences urban transport. Considering London and Beijing as proxies for self-organised and top-down planned cities, the structural properties of London and Beijing networks first are investigated based on their primal and dual representations of planar graphs. The robustness of street networks then is evaluated in primal space and dual space by deactivating road links under random and intentional attack scenarios. The results show that the reliability of London street network differs from that of Beijing, which seems to rely more on its architecture and connectivity. It is found that top-down planned Beijing with its higher average degree in the dual space and assortativity in the primal space is more robust than self-organised London using the measures of maximum and second largest cluster size and network efficiency. The article offers an insight, from a network perspective, into the reliability of street patterns in self-organised and top-down planned city systems. PMID:26682551
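    The attack procedure described above (deactivating road links under random and intentional scenarios, then measuring the largest surviving cluster) can be sketched as follows; the toy hub-and-ring graph in the test is illustrative, not the London or Beijing street network:

```python
import random

def largest_cluster(nodes, edges):
    """Size of the largest connected component, via union-find."""
    parent = {n: n for n in nodes}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]    # path halving
            a = parent[a]
        return a
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    sizes = {}
    for n in nodes:
        r = find(n)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

def attack(nodes, edges, frac, targeted, seed=0):
    """Deactivate a fraction of links, randomly or highest-degree first,
    and return the size of the surviving largest cluster."""
    rng = random.Random(seed)
    if targeted:
        degree = {n: 0 for n in nodes}
        for u, v in edges:
            degree[u] += 1
            degree[v] += 1
        ranked = sorted(edges, key=lambda e: degree[e[0]] + degree[e[1]],
                        reverse=True)
    else:
        ranked = rng.sample(edges, len(edges))
    kept = ranked[int(frac * len(edges)):]   # drop the first `frac` share
    return largest_cluster(nodes, kept)
```

    Comparing the largest-cluster curves of the two scenarios as `frac` grows is exactly the kind of robustness measure the article applies to the two street networks.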

  1. A Framework and Model for Evaluating Clinical Decision Support Architectures

    PubMed Central

    Wright, Adam; Sittig, Dean F.

    2008-01-01

    In this paper, we develop a four-phase model for evaluating architectures for clinical decision support that focuses on: defining a set of desirable features for a decision support architecture; building a proof-of-concept prototype; demonstrating that the architecture is useful by showing that it can be integrated with existing decision support systems; and comparing its coverage to that of other architectures. We apply this framework to several well-known decision support architectures, including Arden Syntax, GLIF, SEBASTIAN and SAGE. PMID:18462999

  2. Functional Interface Considerations within an Exploration Life Support System Architecture

    NASA Technical Reports Server (NTRS)

    Perry, Jay L.; Sargusingh, Miriam J.; Toomarian, Nikzad

    2016-01-01

    As notional life support system (LSS) architectures are developed and evaluated, myriad options must be considered pertaining to process technologies, components, and equipment assemblies. Each option must be evaluated relative to its impact on key functional interfaces within the LSS architecture. A leading notional architecture has been developed to guide the path toward realizing future crewed space exploration goals. This architecture includes atmosphere revitalization, water recovery and management, and environmental monitoring subsystems. Guiding requirements for developing this architecture are summarized and important interfaces within the architecture are discussed. The role of environmental monitoring within the architecture is described.

  3. On the role of self-organised criticality in accretion systems

    NASA Astrophysics Data System (ADS)

    Dendy, R. O.; Helander, P.; Tagger, M.

    1998-09-01

    Self-organised criticality (SOC) has been suggested as a potentially powerful unifying paradigm for interpreting the structure of, and signals from, accretion systems. After reviewing the most promising sites where SOC might be observable, we consider the theoretical arguments for supposing that SOC can occur in accretion discs. Perhaps the most rigorous evidence is provided by numerical modelling of energy dissipation due to magnetohydrodynamic turbulence in accretion discs by G. Geertsema & A. Achterberg (A&A 255, 427 (1992)); we investigate how "sandpile"-type dynamics arise in this model. It is concluded that the potential sites for SOC in accretion systems are numerous and observationally accessible, and that theoretical support for the possible occurrence of SOC can be derived from first principles.

  4. Cytoplasmic streaming emerges naturally from hydrodynamic self-organisation of a microfilament suspension

    NASA Astrophysics Data System (ADS)

    Woodhouse, Francis; Goldstein, Raymond

    2013-03-01

    Cytoplasmic streaming is the ubiquitous phenomenon of deliberate, active circulation of the entire liquid contents of a plant or animal cell by the walking of motor proteins on polymer filament tracks. Its manifestation in the plant kingdom is particularly striking, where many cells exhibit highly organised patterns of flow. How these regimented flow templates develop is biologically unclear, but there is growing experimental evidence to support hydrodynamically-mediated self-organisation of the underlying microfilament tracks. Using the spirally-streaming giant internodal cells of the characean algae Chara and Nitella as our prototype, we model the developing sub-cortical streaming cytoplasm as a continuum microfilament suspension subject to hydrodynamic and geometric forcing. We show that our model successfully reproduces emergent streaming behaviour by evolving from a totally disordered initial state into a steady characean ``conveyor belt'' configuration as a consequence of the cell geometry, and discuss applicability to other classes of steadily streaming plant cells.

  5. Desert Stone Mantles: Quantification and Significance of Self-Organisation

    NASA Astrophysics Data System (ADS)

    Higgitt, David; Rosser, Nick

    2010-05-01

    Desert stone mantles exhibit sorting patterns which are evidence of self-organisation. Previous investigations of stone mantles developed on Late Tertiary and Quaternary basalts in arid northeastern Jordan, revealed distinct variations in the nature of stone cover both downslope and between lithologies of different age. However, manual field measurements of clast size and shape did not preserve information about the spatial configuration of the stone surface. Improved digital image capture and analysis techniques, including using a kite-based platform for vertical photography of the surface, has permitted the nature of stone mantles to be examined and modelled in greater detail. Image analysis has been assisted by the strong contrast in colour between the basalt clasts and the underlying surface enabling a binary classification of images, from which data on size, shape and position of clasts can be readily acquired. Quantification of self-organisation through a box-counting technique for measuring fractal dimension and a procedure using Thiessen polygons to determine ‘locking structures' indicates a general increase in organisation of the stone mantle downslope. Recognition of emergent behaviour requires an explanation in terms of positive feedback between controlling process and the influence of surface form. A series of rainfall simulation and infiltration experiments have been undertaken on plots to assess the variation in surface hydrology as a response to variations in ground surface and slope profile form. The relative contribution of runoff events of varying size and the degree to which the ground surface configuration accelerates or restricts modification of the surface influences the overall evolution of slope profiles via the erosion, transfer and deposition of both surface clasts and the underlying fine grained sediments. Critical to this modification is the interplay between the surface configuration, rainfall and runoff. The experiments presented
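    The box-counting measurement of fractal dimension described above can be sketched as follows (a generic estimator, not the authors' implementation; the box sizes are illustrative, and in practice the point coordinates would come from the binary-classified clast images):

```python
import math

def box_count_dimension(points, sizes=(1, 2, 4, 8, 16)):
    """Estimate fractal dimension by box counting: the slope of
    log N(s) against log(1/s), where N(s) is the number of s-by-s
    boxes containing at least one point."""
    log_inv_s, log_counts = [], []
    for s in sizes:
        occupied = {(x // s, y // s) for x, y in points}
        log_inv_s.append(math.log(1.0 / s))
        log_counts.append(math.log(len(occupied)))
    n = len(sizes)
    mx = sum(log_inv_s) / n
    my = sum(log_counts) / n
    return (sum((x - mx) * (y - my) for x, y in zip(log_inv_s, log_counts))
            / sum((x - mx) ** 2 for x in log_inv_s))
```

    A filled region recovers a dimension of 2 and a line recovers 1; values in between then quantify how clustered or organised a clast pattern is.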

  6. Self-organisation Processes in the Carbon Arc for Nanosynthesis

    SciTech Connect

    Ng, Jonathan; Raitses, Yevgeny

    2014-02-26

    The atmospheric-pressure carbon arc in inert gases such as helium is an important method for the production of nanomaterials. It has recently been shown that the formation of the carbon deposit on the cathode from gaseous carbon plays a crucial role in the operation of the arc, which reaches the high temperatures necessary for thermionic emission to take place even with low-melting-point cathodes. Based on observed ablation and deposition rates, we explore the implications of deposit formation for the energy balance at the cathode surface, and show how the operation of the arc is a self-organised process. Our results suggest that the arc can operate in two different regimes, one of which has an important contribution from latent heat to the cathode energy balance. This regime is characterised by an enhanced ablation rate, which may be favourable for high-yield synthesis of nanomaterials. The second regime has a small and approximately constant ablation rate with a negligible contribution from latent heat.

  7. Self-organisation Processes In The Carbon ARC For Nanosynthesis

    SciTech Connect

    Ng, J.; Raitses, Yevgeny

    2014-02-02

    The atmospheric pressure carbon arc in inert gases such as helium is an important method for the production of nanomaterials. It has recently been shown that the formation of the carbon deposit on the cathode from gaseous carbon plays a crucial role in the operation of the arc, which reaches the high temperatures necessary for thermionic emission to take place even with low melting point cathodes. Based on observed ablation and deposition rates, we explore the implications of deposit formation for the energy balance at the cathode surface, and show how the operation of the arc is a self-organised process. Our results suggest that the arc can operate in two different regimes, one of which has an important contribution from latent heat to the cathode energy balance. This regime is characterised by an enhanced ablation rate, which may be favourable for high-yield synthesis of nanomaterials. The second regime has a small and approximately constant ablation rate with a negligible contribution from latent heat.

  8. Knowledge representation and retrieval using conceptual graphs and free text document self-organisation techniques.

    PubMed

    Chu, S; Cesnik, B

    2001-07-01

    Hospitals generate and store a large amount of clinical data each year, a significant portion of which is in free text format. Conventional database storage and retrieval algorithms are incapable of effectively processing free text medical data. The rich information and knowledge buried in healthcare records are unavailable for clinical decision-making. We examined a number of techniques for structuring and processing free text documents to make them effective and efficient for information retrieval and knowledge discovery. One critical success criterion is that the complexity of the techniques must be polynomial in both space and time for them to be able to cope with very large databases. We used conceptual graphs (CG) to capture the structure and semantic information/knowledge contained within the free text medical documents. Ordering and self-organising techniques (lattice techniques and knowledge space) were used to improve the organisation of concepts from standard medical nomenclatures and large sets of free text medical documents. Pair-wise union of CGs was performed to identify the common generalisation structure and to form a lattice structure of these CG documents. A combination of all three techniques allowed us to organise a set of 9000 discharge summaries into a generalisation hierarchy that supported efficient and rich information/knowledge retrieval. PMID:11470615
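The pair-wise generalisation step can be illustrated with a toy sketch. The triple encoding and the relation names below are invented simplifications of full conceptual graphs, which also carry concept-type hierarchies; here the common generalisation of two graphs is approximated by their shared triples.

```python
# Hypothetical mini-representation: a conceptual graph as a set of
# (concept, relation, concept) triples.
def common_generalisation(cg1, cg2):
    """Approximate the common generalisation of two graphs by their shared triples."""
    return frozenset(cg1) & frozenset(cg2)

def build_lattice(docs):
    """Pair-wise union step: derive generalisation nodes from every document pair."""
    nodes = {frozenset(d) for d in docs}
    for i, a in enumerate(docs):
        for b in docs[i + 1:]:
            g = common_generalisation(a, b)
            if g:
                nodes.add(g)
    return nodes

doc1 = {("Patient", "agnt", "Admit"), ("Admit", "loc", "Ward")}
doc2 = {("Patient", "agnt", "Admit"), ("Admit", "loc", "ICU")}
lattice = build_lattice([doc1, doc2])
# The shared structure ("Patient", "agnt", "Admit") becomes a generalisation
# node sitting above both documents in the hierarchy.
```

The pair-wise loop is quadratic in the number of documents, consistent with the polynomial-complexity requirement stated in the abstract.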

  9. Participatory sensing as an enabler for self-organisation in future cellular networks

    NASA Astrophysics Data System (ADS)

    Imran, Muhammad Ali; Imran, Ali; Onireti, Oluwakayode

    2013-12-01

    In this short review paper we summarise the emerging challenges in the field of participatory sensing for the self-organisation of the next generation of wireless cellular networks. We identify the potential of participatory sensing in enabling the self-organisation, deployment optimisation and radio resource management of wireless cellular networks. We also highlight how this approach can meet the future goals for the next generation of cellular systems in terms of infrastructure sharing, management of multiple radio access techniques, flexible usage of spectrum and efficient management of very small data cells.

  10. Self-organising disturbance attenuation for synchronised agents with individual dynamics

    NASA Astrophysics Data System (ADS)

    Lunze, Jan

    2015-03-01

    This paper proposes a self-organising networked controller for disturbance attenuation in multi-agent systems. A disturbance affecting a single agent has some effect on all neighbouring agents through the communication network. To avoid large disturbance effects, the proposed self-organising controller switches off the communication whenever the effect of the disturbance on the corresponding agent exceeds a given bound. As a consequence, the structure of the networked controller is adjusted to the current disturbance. It is proved that the proposed controller bounds the effect of any disturbance on all undisturbed agents. The results are illustrated by their application to a robot formation problem.
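The switching idea can be illustrated with a minimal simulation. The scalar single-integrator agents, the three-agent line graph, and the bound `eps` below are illustrative assumptions; the paper treats general agent dynamics and proves the disturbance bound formally.

```python
# Minimal sketch (hypothetical scalar agents): each agent tracks its
# neighbours through diffusive coupling, but an agent whose disturbance-driven
# deviation exceeds the bound `eps` stops transmitting, so the disturbance is
# not propagated through the network.
def simulate(steps, eps, disturbance, dt=0.05):
    x = [0.0, 0.0, 0.0]                 # three agents in a line: 0 - 1 - 2
    neighbours = {0: [1], 1: [0, 2], 2: [1]}
    for _ in range(steps):
        d = [disturbance if i == 0 else 0.0 for i in range(3)]
        # agents whose deviation exceeds eps switch their outgoing links off
        silent = {i for i in range(3) if abs(x[i]) > eps}
        u = [sum(x[j] - x[i] for j in neighbours[i] if j not in silent)
             for i in range(3)]
        # explicit Euler step of x' = -x + u + d
        x = [x[i] + dt * (-x[i] + u[i] + d[i]) for i in range(3)]
    return x

x_end = simulate(400, eps=0.5, disturbance=2.0)
# Agent 0 settles at its disturbed equilibrium, while the switching rule
# keeps the undisturbed agents 1 and 2 close to the origin.
```

Without the `silent` set, the disturbance on agent 0 would pull all three agents away from the origin through the coupling terms.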

  11. The Psychology of Delivering a Psychological Service: Self-Organised Learning as a Model for Consultation

    ERIC Educational Resources Information Center

    Clarke, Steve; Jenner, Simon

    2006-01-01

    The article describes how one Educational Psychology Service in the UK developed a service delivery based on self-organised learning (SOL). This model is linked to the paradigms and discourses within which educational psychology and special educational needs work. The work described here is dedicated to the memory of Brian Roberts, academic, close…

  12. Discrete Self-Organising Migrating Algorithm for Flow Shop Scheduling with no Wait Makespan

    NASA Astrophysics Data System (ADS)

    Davendra, Donald; Zelinka, Ivan; Senkerik, Roman; Jasek, Roman

    2011-06-01

    This paper introduces a novel discrete Self Organising Migrating Algorithm for the task of flowshop scheduling with no-wait makespan. The new algorithm is tested with the small and medium Taillard benchmark problems and the obtained results are competitive with the best performing heuristics in the literature.
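The objective that such a heuristic optimises can be computed directly. The sketch below uses the standard no-wait construction (it does not reproduce the paper's SOMA algorithm): each job is shifted by the minimal delay that avoids overlap with its predecessor on every machine. The job indices and processing times are illustrative.

```python
def no_wait_makespan(seq, p):
    """Makespan of a permutation `seq` in a no-wait flowshop.

    p[j][m] is the processing time of job j on machine m.  In a no-wait shop
    each job moves through the machines without interruption, so a job's
    start time is fixed by a minimal delay relative to its predecessor.
    """
    m = len(p[0])
    start = 0.0
    for prev, job in zip(seq, seq[1:]):
        # smallest shift of `job` that avoids overlap with `prev` on every machine
        delay = max(
            sum(p[prev][:k + 1]) - sum(p[job][:k])
            for k in range(m)
        )
        start += delay
    return start + sum(p[seq[-1]])

p = [[2, 3], [1, 4]]      # 2 jobs, 2 machines (hypothetical instance)
best = min(([0, 1], [1, 0]), key=lambda s: no_wait_makespan(s, p))
```

On this tiny instance the ordering [1, 0] gives the shorter makespan; a metaheuristic such as discrete SOMA searches the permutation space using exactly this evaluation.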

  13. Reliability Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Anderson, Molly S.

    2012-01-01

    Quantitative assessments of system reliability and equivalent system mass (ESM) were made for different life support architectures based primarily on International Space Station technologies. The analysis was applied to a one-year deep-space mission. System reliability was increased by adding redundancy and spares, which added to the ESM. Results were thus obtained allowing a comparison of the ESM for each architecture at equivalent levels of reliability. Although the analysis contains numerous simplifications and uncertainties, the results suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support ESM and could influence the optimal degree of life support closure. Approaches for reducing reliability impacts were investigated and are discussed.

  14. Biologically relevant neural network architectures for support vector machines.

    PubMed

    Jändel, Magnus

    2014-01-01

    Neural network architectures that implement support vector machines (SVM) are investigated for the purpose of modeling perceptual one-shot learning in biological organisms. A family of SVM algorithms including variants of maximum margin, 1-norm, 2-norm and ν-SVM is considered. SVM training rules adapted for neural computation are derived. It is found that competitive queuing memory (CQM) is ideal for storing and retrieving support vectors. Several different CQM-based neural architectures are examined for each SVM algorithm. Although most of the sixty-four scanned architectures are unconvincing for biological modeling, four feasible candidates are found. The seemingly complex learning rule of a full ν-SVM implementation finds a particularly simple and natural implementation in bisymmetric architectures. Since CQM-like neural structures are thought to encode skilled action sequences and bisymmetry is ubiquitous in motor systems, it is speculated that trainable pattern recognition in low-level perception has evolved as an internalized motor programme. PMID:24126252

  15. Lunar Surface Architecture Utilization and Logistics Support Assessment

    NASA Astrophysics Data System (ADS)

    Bienhoff, Dallas; Findiesen, William; Bayer, Martin; Born, Andrew; McCormick, David

    2008-01-01

    Crew and equipment utilization and logistics support needs for the point of departure lunar outpost as presented by the NASA Lunar Architecture Team (LAT) and alternative surface architectures were assessed for the first ten years of operation. The lunar surface architectures were evaluated and manifests created for each mission. Distances between Lunar Surface Access Module (LSAM) landing sites and emplacement locations were estimated. Physical characteristics were assigned to each surface element and operational characteristics were assigned to each surface mobility element. Stochastic analysis was conducted to assess probable times to deploy surface elements, conduct exploration excursions, and perform defined crew activities. Crew time is divided into Outpost-related, exploration and science, overhead, and personal activities. Outpost-related time includes element deployment, EVA maintenance, IVA maintenance, and logistics resupply. Exploration and science activities include mapping, geological surveys, science experiment deployment, sample analysis and categorizing, and physiological and biological tests in the lunar environment. Personal activities include sleeping, eating, hygiene, exercising, and time off. Overhead activities include precursor or close-out tasks that must be accomplished but don't fit into the other three categories, such as suit donning and doffing, airlock cycle time, suit cleaning, suit maintenance, post-landing safing actions, and pre-departure preparations. Equipment usage time, spares, maintenance actions, and Outpost consumables are also estimated to provide input into logistics support planning. Results are normalized relative to the NASA LAT point of departure lunar surface architecture.

  16. Reliability Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Anderson, Molly S.

    2011-01-01

    Equivalent System Mass (ESM) and reliability estimates were performed for different life support architectures based primarily on International Space Station (ISS) technologies. The analysis was applied to a hypothetical 1-year deep-space mission. High-level fault trees were initially developed relating loss of life support functionality to the Loss of Crew (LOC) top event. System reliability was then expressed as the complement (nonoccurrence) of this event and was increased through the addition of redundancy and spares, which added to the ESM. The reliability analysis assumed constant failure rates and used current projected values of the Mean Time Between Failures (MTBF) from an ISS database where available. Results were obtained showing the dependence of ESM on system reliability for each architecture. Although the analysis employed numerous simplifications and many of the input parameters are considered to have high uncertainty, the results strongly suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support system mass. As a point of reference, the reliability for a single-string architecture using the most regenerative combination of ISS technologies without unscheduled replacement spares was estimated to be less than 1%. The results also demonstrate how adding technologies in a serial manner to increase system closure forces the reliability of other life support technologies to increase in order to meet the system reliability requirement. This increase in reliability results in increased mass for multiple technologies through the need for additional spares. Alternative parallel architecture approaches and approaches with the potential to do more with less are discussed. The tall poles in life support ESM are also reexamined in light of estimated reliability impacts.
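The redundancy-versus-reliability trade described above can be illustrated with a minimal sketch. The MTBF values, mission length, and three-subsystem layout below are invented for illustration and are far simpler than the paper's fault-tree analysis; the formulas are the standard constant-failure-rate and parallel-redundancy expressions.

```python
import math

def unit_reliability(mtbf_hours, mission_hours):
    """Constant-failure-rate reliability, R = exp(-t / MTBF)."""
    return math.exp(-mission_hours / mtbf_hours)

def system_reliability(units, mission_hours=8760.0):
    """Series system of subsystems, each with n parallel (spared) copies.

    units: list of (mtbf_hours, copies) pairs; the system works while at
    least one copy of every subsystem works.
    """
    r = 1.0
    for mtbf, copies in units:
        ru = unit_reliability(mtbf, mission_hours)   # 1-year mission by default
        r *= 1.0 - (1.0 - ru) ** copies
    return r

# Hypothetical 3-subsystem life support loop, single-string vs. one spare each.
single = system_reliability([(20000, 1), (15000, 1), (30000, 1)])
spared = system_reliability([(20000, 2), (15000, 2), (30000, 2)])
# Each added spare raises reliability, but every spare also adds to the ESM,
# which is the trade the paper quantifies.
```

Even this toy model shows the core finding: a single-string chain of moderately reliable units has low system reliability over a year-long mission, and sparing every subsystem recovers it only at a mass cost.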

  17. A supportive architecture for CFD-based design optimisation

    NASA Astrophysics Data System (ADS)

    Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong

    2014-03-01

    Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO that requires the analysis of fluid dynamics raises a special challenge due to its extremely intensive computation. The rapid development of the computational fluid dynamics (CFD) technique has led to a rise in its applications in various fields. Especially for the exterior designs of vehicles, CFD has become one of the three main design tools, comparable to analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under the given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward. It is a challenge to solve a CFD-based design problem, which typically involves high dimensionality and multiple objectives and constraints. It is desirable to have an integrated architecture for CFD-based design optimisation. However, our review of existing works has found that very few researchers have studied assistive tools to facilitate CFD-based design optimisation. In the paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing technique and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or data level to fully utilise the capabilities of different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation. To illustrate the effectiveness of the proposed architecture and algorithms, the case studies on aerodynamic shape design of a hypersonic cruising vehicle are provided, and the result has shown that the proposed architecture

  18. Supporting Community Emergency Management Planning Through a Geocollaboration Software Architecture

    NASA Astrophysics Data System (ADS)

    Schafer, Wendy A.; Ganoe, Craig H.; Carroll, John M.

    Emergency management is more than just events occurring within an emergency situation. It encompasses a variety of persistent activities such as planning, training, assessment, and organizational change. We are studying emergency management planning practices in which geographic communities (towns and regions) prepare to respond efficiently to significant emergency events. Community emergency management planning is an extensive collaboration involving numerous stakeholders throughout the community and both reflecting and challenging the community’s structure and resources. Geocollaboration is one aspect of the effort. Emergency managers, public works directors, first responders, and local transportation managers need to exchange information relating to possible emergency event locations and their surrounding areas. They need to examine geospatial maps together and collaboratively develop emergency plans and procedures. Issues such as emergency vehicle traffic routes and staging areas for command posts, arriving media, and first responders’ personal vehicles must be agreed upon prior to an emergency event to ensure an efficient and effective response. This work presents a software architecture that facilitates the development of geocollaboration solutions. The architecture extends prior geocollaboration research and reuses existing geospatial information models. Emergency management planning is one application domain for the architecture. Geocollaboration tools can be developed that support community-wide emergency management planning and preparedness. This chapter describes how the software architecture can be used for the geospatial, emergency management planning activities of one community.

  19. Do Performance-Based Codes Support Universal Design in Architecture?

    PubMed

    Grangaard, Sidse; Frandsen, Anne Kathrine

    2016-01-01

    The research project 'An analysis of the accessibility requirements' studies how Danish architectural firms experience the accessibility requirements of the Danish Building Regulations and examines their opinions on how future regulative models can support innovative and inclusive design - Universal Design (UD). The empirical material consists of input from six workshops to which all 700 Danish architectural firms were invited, as well as eight group interviews. The analysis shows that the current prescriptive requirements are criticized for being too homogenous, and possibilities for differentiation and zoning are required. Therefore, a majority of professionals are interested in a performance-based model because they think that such a model will support 'accessibility zoning', achieving flexibility through different levels of accessibility in a building according to its performance. The common understanding of accessibility and UD is directly related to buildings like hospitals and care centers. When the objective is both innovative and inclusive architecture, the request for a performance-based model should be followed up by a knowledge enhancement effort in the building sector. Bloom's taxonomy of educational objectives is suggested as a tool for such a boost. The research project has been financed by the Danish Transport and Construction Agency. PMID:27534292

  20. Supporting shared data structures on distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Koelbel, Charles; Mehrotra, Piyush; Vanrosendale, John

    1990-01-01

    Programming nonshared memory systems is more difficult than programming shared memory systems, since there is no support for shared data structures. Current programming languages for distributed memory architectures force the user to decompose all data structures into separate pieces, with each piece owned by one of the processors in the machine, and with all communication explicitly specified by low-level message-passing primitives. A new programming environment is presented for distributed memory architectures, providing a global name space and allowing direct access to remote parts of data values. The analysis and program transformations required to implement this environment are described, and the efficiency of the resulting code on the NCUBE/7 and IPSC/2 hypercubes is reported.

  1. Information Architecture for Quality Management Support in Hospitals.

    PubMed

    Rocha, Álvaro; Freixo, Jorge

    2015-10-01

    Quality Management occupies a strategic role in organizations, and the adoption of computer tools within an aligned information architecture facilitates the challenge of making more with less, promoting the development of a competitive edge and sustainability. A formal Information Architecture (IA) lends organizations an enhanced knowledge but, above all, favours management. This simplifies the reinvention of processes, the reformulation of procedures, bridging and the cooperation amongst the multiple actors of an organization. In the present investigation work we planned the IA for the Quality Management System (QMS) of a Hospital, which allowed us to develop and implement the QUALITUS computer application (the application developed to support Quality Management in a Hospital Unit). This solution translated into significant gains for the Hospital Unit under study, accelerating the quality management process and reducing the tasks, the number of documents, the information to be filled in and information errors, amongst others. PMID:26306878

  2. Self-organised nano-structuring of thin oxide-films under swift heavy ion bombardment

    NASA Astrophysics Data System (ADS)

    Bolse, Wolfgang

    2006-03-01

    Surface instabilities and the resulting self-organisation processes play an important role in nano-technology since they allow for large-array nano-structuring. We have recently found that the occurrence of such instabilities in thin film systems can be triggered by energetic ion bombardment and the subsequent self-assembly of the surface can be nicely controlled by fine-tuning of the irradiation conditions. The role of the ion in such processes is of a dual nature: if the instability is latently present already in the virgin sample, but self-assembly cannot take place because of kinetic barriers, the ion impact may just supply the necessary atomic mobility. On the other hand, the surface may become unstable due to the ion beam induced material modifications, and further irradiation then results in its reorganisation. In the present paper, we review recently observed nano-scale self-organisation processes in thin oxide-films induced by irradiation with swift heavy ions (SHI) at some MeV/amu energies. The first example concerns SHI induced dewetting, which is driven by capillary forces already present in the as-deposited samples. The achieved dewetting patterns show an amazing similarity to those observed for liquid polymer films on Si, although in the present case the samples were kept at 80 K and hence never reached their melting point. The second example concerns self-organised lamellae formation driven by planar stresses, which are induced by SHI bombardment under grazing incidence and result in a surface instability and anisotropic plastic deformation (hammering effect). Taking advantage of these effects and modifying the irradiation procedure, we were able to generate more complex structures like NiO 'nano-towers' of 2 μm height and 200 nm in diameter.

  3. The influence of the physical environment on the self-organised foraging patterns of ants

    NASA Astrophysics Data System (ADS)

    Detrain, C.; Natan, C.; Deneubourg, J.-L.

    2001-04-01

    Among social insects such as ants, scouts that modulate their recruiting behaviour, following simple rules based on local information, generate collective patterns of foraging. Here we demonstrate that features of the abiotic environment, specifically the foraging substrate, may also be influential in the emergence of group-level decisions such as the choice of one foraging path. Experimental data and theoretical analyses show that the collective patterns can arise independently of behavioural changes of individual scouts and can result, through self-organising processes, from the physico-chemical properties of the environment that alter the dynamics of information transfer by chemical trails.
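The kind of trail-choice rule that underlies such self-organised foraging models can be sketched as follows. The parameters k = 20 and n = 2 are typical values from the trail-recruitment literature, not taken from this paper, and the deposition model is a deliberate simplification.

```python
import random

# Classic trail-choice rule used in self-organised foraging models
# (Deneubourg-style): the probability of taking branch 1 over branch 2
# grows nonlinearly with the pheromone already deposited on it.
def p_branch1(c1, c2, k=20.0, n=2.0):
    """Probability of choosing branch 1 given pheromone amounts c1, c2."""
    return (k + c1) ** n / ((k + c1) ** n + (k + c2) ** n)

random.seed(1)
c = [0.0, 0.0]                      # pheromone on the two branches
for _ in range(1000):
    if random.random() < p_branch1(c[0], c[1]):
        c[0] += 1.0                 # each ant deposits one unit on its branch
    else:
        c[1] += 1.0
# Positive feedback typically locks the colony onto one branch; the abstract's
# point is that substrate properties can alter this trail dynamics and hence
# the collective choice, without any change in individual behaviour.
```

Changing the effective evaporation or diffusion of the trail (e.g. via the foraging substrate) amounts to changing k or the deposition rate, which shifts whether the colony converges on a single path.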

  4. Microtubule self-organisation by reaction-diffusion processes causes collective transport and organisation of cellular particles

    PubMed Central

    Glade, Nicolas; Demongeot, Jacques; Tabony, James

    2004-01-01

    Background The transport of intra-cellular particles by microtubules is a major biological function. Under appropriate in vitro conditions, microtubule preparations behave as a 'complex' system and show 'emergent' phenomena. In particular, they form dissipative structures that self-organise over macroscopic distances by a combination of reaction and diffusion. Results Here, we show that self-organisation also gives rise to a collective transport of colloidal particles along a specific direction. Particles, such as polystyrene beads, chromosomes, nuclei, and vesicles are carried at speeds of several microns per minute. The process also results in the macroscopic self-organisation of these particles. After self-organisation is completed, they show the same pattern of organisation as the microtubules. Numerical simulations of a population of growing and shrinking microtubules, incorporating experimentally realistic reaction dynamics, predict self-organisation. They forecast that during self-organisation, macroscopic parallel arrays of oriented microtubules form which cross the reaction space in successive waves. Such travelling waves are capable of transporting colloidal particles. The fact that, in the simulations, the aligned arrays move in the same direction and at the same speed as the particles suggests that this process forms the underlying mechanism for the observed transport properties. Conclusions This process constitutes a novel physical chemical mechanism by which chemical energy is converted into collective transport of colloidal particles along a given direction. Self-organisation of this type provides a new mechanism by which intra-cellular particles such as chromosomes and vesicles can be displaced and simultaneously organised by microtubules. It is plausible that processes of this type occur in vivo. PMID:15176973

  5. Female dominance over males in primates: self-organisation and sexual dimorphism.

    PubMed

    Hemelrijk, Charlotte K; Wantia, Jan; Isler, Karin

    2008-01-01

    The processes that underlie the formation of the dominance hierarchy in a group have long been under debate. Models of self-organisation suggest that dominance hierarchies develop by the self-reinforcing effects of winning and losing fights (the so-called winner-loser effect), but according to 'the prior attribute hypothesis', dominance hierarchies develop from pre-existing individual differences, such as in body mass. In the present paper, we investigate the relevance of each of these two theories for the degree of female dominance over males. We investigate this in a correlative study in which we compare female dominance between groups of 22 species throughout the primate order. In our study female dominance may range from 0 (no female dominance) to 1 (complete female dominance). As regards 'the prior attribute hypothesis', we expected a negative correlation between female dominance over males and species-specific sexual dimorphism in body mass. However, to our surprise we found none (we use the method of independent contrasts). Instead, we confirm the self-organisation hypothesis: our model based on the winner-loser effect predicts that female dominance over males increases with the percentage of males in the group. We confirm this pattern at several levels in empirical data (among groups of a single species and between species of the same genus and of different ones). Since the winner-loser effect has been shown to work in many taxa including humans, these results may have broad implications. PMID:18628830
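The winner-loser effect can be sketched with a minimal, DomWorld-inspired simulation; the update rule, step size, and clamping below are illustrative simplifications of the authors' model, which also includes spatial structure and sex differences in intensity of aggression.

```python
import random

def simulate_hierarchy(dom, rounds, step=0.1, seed=0):
    """Winner-loser (self-reinforcing) dominance dynamics.

    dom: initial dominance values, one per individual.  In each round two
    random individuals interact; i wins with probability dom[i]/(dom[i]+dom[j]),
    and the winner's value rises while the loser's falls by the same amount,
    scaled by how unexpected the outcome was.
    """
    rng = random.Random(seed)
    dom = list(dom)
    for _ in range(rounds):
        i, j = rng.sample(range(len(dom)), 2)
        p_win = dom[i] / (dom[i] + dom[j])
        if rng.random() < p_win:          # i wins
            change = step * (1.0 - p_win)
        else:                             # i loses
            change = -step * p_win
        dom[i] += change
        dom[j] -= change
        dom[i] = max(dom[i], 0.01)        # keep values positive
        dom[j] = max(dom[j], 0.01)
    return dom

# Identical individuals differentiate into a hierarchy purely through
# the winner-loser feedback, with no prior attribute differences.
final = simulate_hierarchy([1.0] * 6, rounds=5000)
```

Because all individuals start identical, any spread in the final values arises from self-organisation alone, which is the contrast with the prior attribute hypothesis drawn in the abstract.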

  6. Female Dominance over Males in Primates: Self-Organisation and Sexual Dimorphism

    PubMed Central

    Hemelrijk, Charlotte K.; Wantia, Jan; Isler, Karin

    2008-01-01

    The processes that underlie the formation of the dominance hierarchy in a group have long been under debate. Models of self-organisation suggest that dominance hierarchies develop by the self-reinforcing effects of winning and losing fights (the so-called winner-loser effect), but according to ‘the prior attribute hypothesis’, dominance hierarchies develop from pre-existing individual differences, such as in body mass. In the present paper, we investigate the relevance of each of these two theories for the degree of female dominance over males. We investigate this in a correlative study in which we compare female dominance between groups of 22 species throughout the primate order. In our study female dominance may range from 0 (no female dominance) to 1 (complete female dominance). As regards ‘the prior attribute hypothesis’, we expected a negative correlation between female dominance over males and species-specific sexual dimorphism in body mass. However, to our surprise we found none (we use the method of independent contrasts). Instead, we confirm the self-organisation hypothesis: our model based on the winner-loser effect predicts that female dominance over males increases with the percentage of males in the group. We confirm this pattern at several levels in empirical data (among groups of a single species and between species of the same genus and of different ones). Since the winner-loser effect has been shown to work in many taxa including humans, these results may have broad implications. PMID:18628830

  7. Self-organising mixture autoregressive model for non-stationary time series modelling.

    PubMed

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches. PMID:19145663
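The core idea of representing a non-stationary series by local linear models can be illustrated with a much simplified sketch (this is not the published SOMAR algorithm, which uses a self-organising network and neural gas): split the series into segments and fit a local AR(1) model y_t = a·y_{t-1} to each by least squares.

```python
# Simplified illustration of the local-model idea behind SOMAR.
def fit_ar1(segment):
    """Least-squares AR(1) coefficient for y_t = a * y_{t-1}."""
    num = sum(y0 * y1 for y0, y1 in zip(segment, segment[1:]))
    den = sum(y0 * y0 for y0 in segment[:-1])
    return num / den if den else 0.0

def local_ar_models(series, seg_len):
    """Break the series into fixed-length segments and fit one AR(1) per segment."""
    segments = [series[i:i + seg_len]
                for i in range(0, len(series) - seg_len + 1, seg_len)]
    return [fit_ar1(s) for s in segments]

# A series that switches regime halfway yields two clearly different local
# coefficients, which a single global AR model would blur together.
first = [0.9 ** t for t in range(20)]       # decaying regime, a = 0.9
second = [(-0.5) ** t for t in range(20)]   # oscillating regime, a = -0.5
coeffs = local_ar_models(first + second, seg_len=20)
```

SOMAR goes further by clustering similar segments with a self-organising network, so recurring regimes share one local model instead of each segment being fitted independently.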

  8. Rollback-recovery techniques and architectural support for multiprocessor systems

    SciTech Connect

    Chiang Chungyang.

    1991-01-01

    The author proposes efficient and robust fault diagnosis and rollback-recovery techniques to enhance system availability as well as performance in both distributed-memory and shared-bus shared-memory multiprocessor systems. Architectural support for the proposed rollback-recovery technique in a bus-based shared-memory multiprocessor system is also investigated to adaptively fine tune the proposed rollback-recovery technique in this type of system. A comparison of the performance of the proposed techniques with other existing techniques is made, a topic on which little quantitative information is available in the literature. New diagnosis concepts are introduced to show that the author's diagnosis technique yields higher diagnosis coverage and facilitates the performance evaluation of various fault-diagnosis techniques.

  9. Architecture and life support systems for a rotating space habitat

    NASA Astrophysics Data System (ADS)

    Misra, Gaurav

    Life Support Systems are critical to sustain human habitation of space over long time periods. As orbiting space habitats become operational in the future, support systems such as atmosphere, food, water etc. will play a very pivotal role in sustaining life. To design a long-duration space habitat, it's important to consider the full gamut of human experience of the environment. Long-term viability depends on much more than just the structural or life support efficiency. A space habitat isn't just a machine; it's a life experience. To be viable, it needs to keep the inhabitants satisfied with their condition. This paper provides conceptual research on several key factors that influence the growth and sustainability of humans in a space habitat. Apart from the main life support system parameters, the architecture (both interior and exterior) of the habitat will play a crucial role in influencing the liveability in the space habitat. In order to ensure the best possible liveability for the inhabitants, a truncated (half cut) torus is proposed as the shape of the habitat. This structure rotating at an optimum rpm will ensure 1g pseudo gravity to the inhabitants. The truncated torus design has several advantages over other proposed shapes such as a cylinder or a sphere. The design provides minimal gravity variation (delta g) in the living area, since its flat outer pole ensures a constant gravity. The design is superior in economy of structural and atmospheric mass. Interior architecture of the habitat addresses the total built environment, drawing from diverse disciplines including physiology, psychology, and sociology. Furthermore, factors such as line of sight, natural sunlight and overhead clearance have been discussed in the interior architecture. Substantial radiation shielding is also required in order to prevent harmful cosmic radiations and solar flares from causing damage to inhabitants. 
Regolith shielding of 10 tons per meter square is proposed for the

  10. Self-organisation in protoplanetary discs. Global, non-stratified Hall-MHD simulations

    NASA Astrophysics Data System (ADS)

    Béthune, William; Lesur, Geoffroy; Ferreira, Jonathan

    2016-05-01

    Context. Recent observations have revealed organised structures in protoplanetary discs, such as axisymmetric rings or horseshoe concentrations, evocative of large-scale vortices. These structures are often interpreted as the result of planet-disc interactions. However, these discs are also known to be unstable to the magneto-rotational instability (MRI), which is believed to be one of the dominant angular momentum transport mechanisms in these objects. It is therefore natural to ask whether the MRI itself could produce these structures without invoking planets. Aims: The nonlinear evolution of the MRI is strongly affected by the low ionisation fraction in protoplanetary discs. The Hall effect in particular, which is dominant in dense and weakly ionised parts of these objects, has been shown to spontaneously drive self-organising flows in local, shearing box simulations. Here, we investigate the behaviour of global MRI-unstable disc models dominated by the Hall effect and characterise their dynamics. Methods: We validated our implementation of the Hall effect in the PLUTO code against predictions from a spectral method in cylindrical geometry. We then performed 3D unstratified Hall-MHD simulations of Keplerian discs for a broad range of Hall, Ohmic, and ambipolar Elsasser numbers. Results: We confirm the transition from a turbulent to an organised state as the intensity of the Hall effect is increased. We observe the formation of zonal flows, their number depending on the available magnetic flux and on the intensity of the Hall effect. For intermediate Hall intensity, the flow self-organises into long-lived magnetised vortices. Neither the addition of a toroidal field nor Ohmic or ambipolar diffusion change this picture drastically in the range of parameters we have explored. Conclusions: Self-organisation by the Hall effect is a robust phenomenon in global non-stratified simulations. It is able to quench turbulent transport and spontaneously produce axisymmetric

  11. Tailoring broadband light trapping of GaAs and Si substrates by self-organised nanopatterning

    NASA Astrophysics Data System (ADS)

    Martella, C.; Chiappe, D.; Mennucci, C.; de Mongeot, F. Buatier

    2014-05-01

    We report on the formation of high aspect ratio anisotropic nanopatterns on crystalline GaAs (100) and Si (100) substrates exploiting defocused Ion Beam Sputtering assisted by a sacrificial self-organised Au stencil mask. The tailored optical properties of the substrates are characterised in terms of total reflectivity and haze by means of integrating sphere measurements as a function of the morphological modification at increasing ion fluence. Refractive index grading from sub-wavelength surface features induces polarisation dependent anti-reflection behaviour in the visible-near infrared (VIS-NIR) range, while light scattering at off-specular angles from larger structures leads to very high values of the haze functions in reflection. The results, obtained for an important class of technologically relevant materials, are appealing in view of photovoltaic and photonic applications aiming at photon harvesting in ultrathin crystalline solar cells.

  12. Tailoring broadband light trapping of GaAs and Si substrates by self-organised nanopatterning

    SciTech Connect

    Martella, C.; Chiappe, D.; Mennucci, C.; Buatier de Mongeot, F.

    2014-05-21

    We report on the formation of high aspect ratio anisotropic nanopatterns on crystalline GaAs (100) and Si (100) substrates exploiting defocused Ion Beam Sputtering assisted by a sacrificial self-organised Au stencil mask. The tailored optical properties of the substrates are characterised in terms of total reflectivity and haze by means of integrating sphere measurements as a function of the morphological modification at increasing ion fluence. Refractive index grading from sub-wavelength surface features induces polarisation dependent anti-reflection behaviour in the visible-near infrared (VIS-NIR) range, while light scattering at off-specular angles from larger structures leads to very high values of the haze functions in reflection. The results, obtained for an important class of technologically relevant materials, are appealing in view of photovoltaic and photonic applications aiming at photon harvesting in ultrathin crystalline solar cells.

  13. Descriptive Characteristics of Surface Water Quality in Hong Kong by a Self-Organising Map.

    PubMed

    An, Yan; Zou, Zhihong; Li, Ranran

    2016-01-01

    In this study, principal component analysis (PCA) and a self-organising map (SOM) were used to analyse a complex dataset obtained from the river water monitoring stations in the Tolo Harbor and Channel Water Control Zone (Hong Kong), covering the period of 2009-2011. PCA was initially applied to identify the principal components (PCs) among the nonlinear and complex surface water quality parameters. SOM followed PCA, and was implemented to analyse the complex relationships and behaviours of the parameters. The results reveal that PCA reduced the multidimensional parameters to four significant PCs which are combinations of the original ones. The positive and inverse relationships of the parameters were shown explicitly by pattern analysis in the component planes. It was found that PCA and SOM are efficient tools to capture and analyse the behaviour of multivariable, complex, and nonlinearly related surface water quality data. PMID:26761018
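The PCA-then-SOM pipeline described above can be sketched in a few lines. The data here are a synthetic stand-in for the monitoring records, and the tiny SOM (grid size, learning-rate and neighbourhood schedules) is an illustrative implementation, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for multi-site water-quality measurements
# (rows = samples, columns = monitoring parameters).
X = rng.normal(size=(200, 8))
X[:, 1] = 0.9 * X[:, 0] + 0.1 * X[:, 1]      # make two parameters correlated
X = (X - X.mean(axis=0)) / X.std(axis=0)     # standardise

# --- PCA via the covariance eigendecomposition ---
cov = np.cov(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]             # sort PCs by explained variance
eigval, eigvec = eigval[order], eigvec[:, order]
n_pc = 4                                     # keep four PCs, as in the study
scores = X @ eigvec[:, :n_pc]

# --- Minimal SOM trained on the PC scores ---
grid_h, grid_w = 5, 5
W = rng.normal(scale=0.1, size=(grid_h, grid_w, n_pc))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

for epoch in range(40):
    lr = 0.5 * (1 - epoch / 40)              # decaying learning rate
    sigma = 2.0 * (1 - epoch / 40) + 0.5     # decaying neighbourhood radius
    for x in scores:
        d = np.linalg.norm(W - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
        g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                   / (2 * sigma ** 2))
        W += lr * g[..., None] * (x - W)

# Each sample maps to one SOM node; the "component planes" referred to in
# the abstract are the slices W[..., k], one per PC.
bmus = [tuple(np.unravel_index(np.argmin(np.linalg.norm(W - x, axis=-1)),
                               (grid_h, grid_w))) for x in scores]
print(len(set(bmus)), "of", grid_h * grid_w, "nodes used")
```

Inspecting the component planes side by side is what reveals the positive and inverse relationships between parameters mentioned in the abstract.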

  14. Descriptive Characteristics of Surface Water Quality in Hong Kong by a Self-Organising Map

    PubMed Central

    An, Yan; Zou, Zhihong; Li, Ranran

    2016-01-01

    In this study, principal component analysis (PCA) and a self-organising map (SOM) were used to analyse a complex dataset obtained from the river water monitoring stations in the Tolo Harbor and Channel Water Control Zone (Hong Kong), covering the period of 2009–2011. PCA was initially applied to identify the principal components (PCs) among the nonlinear and complex surface water quality parameters. SOM followed PCA, and was implemented to analyse the complex relationships and behaviours of the parameters. The results reveal that PCA reduced the multidimensional parameters to four significant PCs which are combinations of the original ones. The positive and inverse relationships of the parameters were shown explicitly by pattern analysis in the component planes. It was found that PCA and SOM are efficient tools to capture and analyse the behaviour of multivariable, complex, and nonlinearly related surface water quality data. PMID:26761018

  15. Self-Organisation, Thermotropic and Lyotropic Properties of Glycolipids Related to their Biological Implications

    PubMed Central

    Garidel, Patrick; Kaconis, Yani; Heinbockel, Lena; Wulf, Matthias; Gerber, Sven; Munk, Ariane; Vill, Volkmar; Brandenburg, Klaus

    2015-01-01

    Glycolipids are amphiphilic molecules which bear an oligo- or polysaccharide as hydrophilic head group and hydrocarbon chains of varying number and length as hydrophobic part. They play an important role in life science as well as in material science. Their biological and physiological functions are quite diverse, ranging from mediators of cell-cell recognition processes to constituents of membrane domains or membrane-forming units. Glycolipids form an exceptional class of liquid-crystal mesophases because their self-organisation obeys more complex rules than that of classical monophilic liquid-crystals. Like other amphiphiles, the supra-molecular structures formed by glycolipids are driven by their chemical structure; however, the details of this process are still poorly understood. Based on the synthesis of specific glycolipids with a clearly defined chemical structure, e.g., type and length of the sugar head group, acyl chain linkage, substitution pattern, hydrocarbon chain lengths and saturation, combined with a profound physico-chemical characterisation of the formed mesophases, the principles of the organisation of the glycolipids into different aggregate structures can be obtained. The importance of the observed and formed phases and their properties is discussed with respect to their biological and physiological relevance. The presented data briefly describe the strategies used for the synthesis of the glycolipids. The main focus, however, lies on the thermotropic as well as lyotropic characterisation of the self-organised structures and formed phases, based on physico-chemical and biophysical methods, linked to their potential biological implications and relevance. PMID:26464591

  16. Solubilisation of multi walled carbon nanotubes by alpha-pyrene functionalised PMMA and their liquid crystalline self-organisation.

    PubMed

    Meuer, Stefan; Braun, Lydia; Zentel, Rudolf

    2008-07-21

    alpha-Pyrene functionalised poly(methyl methacrylate) (PMMA) chains were synthesised by RAFT polymerisation and found to be highly efficient at solubilising and disentangling multi-walled carbon nanotubes, which can then self-organise into liquid crystalline phases in PMMA and PEG 400 matrices. PMID:18594730

  17. SANDS: a service-oriented architecture for clinical decision support in a National Health Information Network.

    PubMed

    Wright, Adam; Sittig, Dean F

    2008-12-01

    In this paper, we describe and evaluate a new distributed architecture for clinical decision support called SANDS (Service-oriented Architecture for NHIN Decision Support), which leverages current health information exchange efforts and is based on the principles of a service-oriented architecture. The architecture allows disparate clinical information systems and clinical decision support systems to be seamlessly integrated over a network according to a set of interfaces and protocols described in this paper. The architecture described is fully defined and developed, and six use cases have been developed and tested using a prototype electronic health record which links to one of the existing prototype National Health Information Networks (NHIN): drug interaction checking, syndromic surveillance, diagnostic decision support, inappropriate prescribing in older adults, information at the point of care and a simple personal health record. Some of these use cases utilize existing decision support systems, which are either commercially or freely available at present, and developed outside of the SANDS project, while other use cases are based on decision support systems developed specifically for the project. Open source code for many of these components is available, and an open source reference parser is also available for comparison and testing of other clinical information systems and clinical decision support systems that wish to implement the SANDS architecture. The SANDS architecture for decision support has several significant advantages over other architectures for clinical decision support. The most salient of these are: PMID:18434256

  18. A Proposed Clinical Decision Support Architecture Capable of Supporting Whole Genome Sequence Information

    PubMed Central

    Welch, Brandon M.; Rodriguez Loya, Salvador; Eilbeck, Karen; Kawamoto, Kensaku

    2014-01-01

    Whole genome sequence (WGS) information may soon be widely available to help clinicians personalize the care and treatment of patients. However, considerable barriers exist, which may hinder the effective utilization of WGS information in a routine clinical care setting. Clinical decision support (CDS) offers a potential solution to overcome such barriers and to facilitate the effective use of WGS information in the clinic. However, genomic information is complex and will require significant considerations when developing CDS capabilities. As such, this manuscript lays out a conceptual framework for a CDS architecture designed to deliver WGS-guided CDS within the clinical workflow. To handle the complexity and breadth of WGS information, the proposed CDS framework leverages service-oriented capabilities and orchestrates the interaction of several independently-managed components. These independently-managed components include the genome variant knowledge base, the genome database, the CDS knowledge base, a CDS controller and the electronic health record (EHR). A key design feature is that genome data can be stored separately from the EHR. This paper describes in detail: (1) each component of the architecture; (2) the interaction of the components; and (3) how the architecture attempts to overcome the challenges associated with WGS information. We believe that service-oriented CDS capabilities will be essential to using WGS information for personalized medicine. PMID:25411644

  19. A proposed clinical decision support architecture capable of supporting whole genome sequence information.

    PubMed

    Welch, Brandon M; Loya, Salvador Rodriguez; Eilbeck, Karen; Kawamoto, Kensaku

    2014-04-01

    Whole genome sequence (WGS) information may soon be widely available to help clinicians personalize the care and treatment of patients. However, considerable barriers exist, which may hinder the effective utilization of WGS information in a routine clinical care setting. Clinical decision support (CDS) offers a potential solution to overcome such barriers and to facilitate the effective use of WGS information in the clinic. However, genomic information is complex and will require significant considerations when developing CDS capabilities. As such, this manuscript lays out a conceptual framework for a CDS architecture designed to deliver WGS-guided CDS within the clinical workflow. To handle the complexity and breadth of WGS information, the proposed CDS framework leverages service-oriented capabilities and orchestrates the interaction of several independently-managed components. These independently-managed components include the genome variant knowledge base, the genome database, the CDS knowledge base, a CDS controller and the electronic health record (EHR). A key design feature is that genome data can be stored separately from the EHR. This paper describes in detail: (1) each component of the architecture; (2) the interaction of the components; and (3) how the architecture attempts to overcome the challenges associated with WGS information. We believe that service-oriented CDS capabilities will be essential to using WGS information for personalized medicine. PMID:25411644
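The orchestration described above can be sketched as a minimal controller that pulls variants from a genome database kept separate from the EHR, interprets them against a variant knowledge base, and matches CDS rules at order entry. All component names, data shapes, and the `cds_controller` interface below are illustrative assumptions, not the published interfaces:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the independently managed components
# named in the abstract.

GENOME_DB = {                      # genome database: patient -> variants
    "patient-1": ["CYP2C19*2"],
}

VARIANT_KB = {                     # genome variant knowledge base
    "CYP2C19*2": {"gene": "CYP2C19", "phenotype": "poor metaboliser"},
}

CDS_KB = [                         # CDS knowledge base: simple trigger rules
    {"phenotype": "poor metaboliser", "drug": "clopidogrel",
     "advice": "consider alternative antiplatelet therapy"},
]

@dataclass
class Alert:
    patient_id: str
    advice: str

def cds_controller(patient_id: str, prescribed_drug: str) -> list[Alert]:
    """Orchestrate the components: fetch the patient's variants,
    interpret them, then match CDS rules against the interpretation
    and the drug order."""
    alerts = []
    for variant in GENOME_DB.get(patient_id, []):
        interp = VARIANT_KB.get(variant)
        if interp is None:
            continue                          # unannotated variant
        for rule in CDS_KB:
            if (rule["phenotype"] == interp["phenotype"]
                    and rule["drug"] == prescribed_drug):
                alerts.append(Alert(patient_id, rule["advice"]))
    return alerts

# The EHR would invoke the controller at order entry:
print(cds_controller("patient-1", "clopidogrel"))
```

The key design feature of the architecture, genome data living outside the EHR, is mirrored here by the controller being the only place the two stores meet.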

  20. Network architectures in support of digital subscriber line (DSL) deployment

    NASA Astrophysics Data System (ADS)

    Peuch, Bruno

    1998-09-01

    DSL technology enables very high bandwidth transmission in a point-to-point fashion from a customer's premises to a central office (CO), wiring center, or other logical point of traffic aggregation. Unlike many technologies that enable broadband Internet access, DSL technology does not determine a specific architecture to be deployed at either the customer's premises or in the service/access provider's network. In fact, DSL technology can be used in conjunction with a variety of network architectures. While DSL is agnostic to higher-layer protocols, there are still several critical protocol-specific issues that need to be addressed when deploying DSL as a solution for IP (Internet/intrAnet) access. This paper addresses these issues and presents a range of network architectures that incorporate DSL technology, focusing only on those architectures that enable IP access. These architectures are divided into three categories: Traditional Dialled Model (TDM), frame-based (Frame Relay/Ethernet), and cell-based (ATM).

  1. Design optimisation of powers-of-two FIR filter using self-organising random immigrants GA

    NASA Astrophysics Data System (ADS)

    Chandra, Abhijit; Chattopadhyay, Sudipta

    2015-01-01

    In this communication, we propose a novel design strategy for multiplier-less low-pass finite impulse response (FIR) filters with the aid of a recent evolutionary optimisation technique known as the self-organising random immigrants genetic algorithm. Individual impulse response coefficients of the proposed filter have been encoded as sums of signed powers-of-two. During the formulation of the cost function for the optimisation algorithm, both the frequency response characteristic and the hardware cost of the discrete-coefficient FIR filter have been considered. The role of the crossover probability of the optimisation technique has been evaluated on the overall performance of the proposed strategy. For this purpose, the convergence characteristic of the optimisation technique has been included in the simulation results. In our analysis, two design examples of different specifications have been taken into account. In order to substantiate the efficiency of our proposed structure, a number of state-of-the-art design strategies for multiplier-less FIR filters have also been included in this article for the purpose of comparison. Critical analysis of the results unambiguously establishes the usefulness of our proposed approach for the hardware-efficient design of digital filters.
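The coefficient encoding at the heart of this approach can be illustrated without the GA itself. Below, a windowed-sinc low-pass prototype is greedily quantised so that each tap is a sum of signed powers-of-two (shift-and-add hardware, no multipliers), and the frequency-response deviation, one ingredient of the paper's cost function, is measured. The filter length, cutoff, and the greedy quantiser are illustrative assumptions; the paper's GA searches over such encodings rather than quantising greedily:

```python
import numpy as np

def spt_quantise(c, terms=2, max_shift=8):
    """Greedily express c as a sum of `terms` signed powers of two
    (shifts limited to 2**-1 .. 2**-max_shift)."""
    approx, residual = 0.0, c
    for _ in range(terms):
        if residual == 0:
            break
        shift = int(np.clip(round(-np.log2(abs(residual))), 1, max_shift))
        term = np.sign(residual) * 2.0 ** -shift
        approx += term
        residual -= term
    return approx

# Windowed-sinc low-pass prototype (normalised cutoff 0.125, 21 taps).
N, fc = 21, 0.125
n = np.arange(N) - (N - 1) / 2
h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(N)

h_spt = np.array([spt_quantise(c) for c in h])

# Frequency-response part of the cost: deviation between full-precision
# and powers-of-two designs on a dense frequency grid.
w = np.linspace(0, np.pi, 512)
H  = np.abs(np.exp(-1j * np.outer(w, np.arange(N))) @ h)
Hq = np.abs(np.exp(-1j * np.outer(w, np.arange(N))) @ h_spt)
print("max response deviation:", np.max(np.abs(H - Hq)))
```

A GA fitness function would combine this deviation with a hardware-cost term (roughly, the count of nonzero powers-of-two terms across all taps).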

  2. Photon Binding Energy during Self-trapping and filaments' Self-Organisation

    NASA Astrophysics Data System (ADS)

    Dantu, Subbarao; Uma, R.; Goyal, Sanjeev

    2000-10-01

    Self-trapping profiles of laser beams in one space dimension and in cylindrical geometry are obtained computationally for saturating-type nonlinearities. The relevant nonlinear Schrödinger equations are solved by adjusting the nonlinear wavenumber shifts until self-trapping is achieved. Note that in the one-space-dimension case the self-trapping condition is the same as that for soliton formation. The self-trapped beams are modelled using an approximate Gaussian ansatz. Self-consistency then demands that the refractive index profile be approximated by a suitable parabolic profile in space, corresponding to two nearby turning points being present simultaneously. The location of the turning points is estimated using the momentum-space approximation scheme for the refractive index suggested by Subbarao et al. (Phys. Plasmas 5, 3440-3450 (1998)). This scheme also suggests a method to estimate the per-photon binding energy in the self-trapped beam, which indicates the strength of self-trapping. Plotting the photon binding energy against the laser beam intensity gives the required photon binding energy curve. Being so similar in shape to the nuclear binding energy curve, it also suggests how more stable self-trapped structures may be accomplished by the fusion or fission of self-trapped filaments, thereby giving rise to a new form of self-organisation.

  3. The metaphor-gestalt synergy underlying the self-organisation of perception as a semiotic process.

    PubMed

    Rail, David

    2013-04-01

    Recently the basis of concept and language formation has been redefined by the proposal that they both stem from perception and embodiment. The experiential revolution has led to a far more integrated and dynamic understanding of perception as a semiotic system. The emergence of meaning in the perceptual process stems from the interaction between two key mechanisms. The first is the generation of schemata through recurrent sensorimotor activity (SM) that underlies category and language formation (L). The second is the interaction between metaphor (M) and gestalt mechanisms (G) that generates invariant mappings beyond the SM domain, which both conserve and diversify our understanding and meaning potential. We propose an important advance in our understanding of perception as a semiotic system through exploring the effect of self-organising to criticality, where hierarchical behaviour becomes widely integrated through 1/f processes and isomorphisms. Our proposal leads to several important implications. First, SM and L form a functional isomorphism depicted as SM <=> L. We contend that SM <=> L is emergent, corresponding to the phenomenal self. Second, meaning structures the isomorphism SM <=> L through the synergy between M and G (M-G). M-G synergy is based on a combination of structuring and imagination. We contend that the interaction between M-G and SM <=> L functions as a macro-micro commutation that governs perception as semiosis. We discuss how our model relates to current research in fractal time and verb formation. PMID:23517606

  4. Emergence and Dissolvence in the Self-organisation of Complex Systems

    NASA Astrophysics Data System (ADS)

    Testa, Bernard; Kier, Lemont B.

    2000-03-01

    The formation of complex systems is accompanied by the emergence of properties that are non-existent in the components. But what of the properties and behaviour of such components caught up in the formation of a system of a higher level of complexity? In this essay, we use a large variety of examples, from molecules to organisms and beyond, to show that systems merging into a complex system of higher order experience constraints with a partial loss of choice, options and independence. In other words, emergence in a complex system often implies reduction in the number of probable states of its components, a phenomenon we term dissolvence. This is seen in atoms when they merge to form molecules, in biomolecules when they form macromolecules such as proteins, and in macromolecules when they form aggregates such as molecular machines or membranes. At higher biological levels, dissolvence occurs for example in components of cells (e.g. organelles), tissues (cells), organs (tissues), organisms (organs) and societies (individuals). Far from being a destruction, dissolvence is understood here as a creative process in which information is generated to fuel the process of self-organisation of complex systems, allowing them to appear and evolve to higher states of organisation and emergence. Questions are raised about the relationship of dissolvence and adaptability; the interrelation with top-down causation; the reversibility of dissolvence; and the connection between dissolvence and anticipation.

  5. Space Network IP Services (SNIS): An Architecture for Supporting Low Earth Orbiting IP Satellite Missions

    NASA Technical Reports Server (NTRS)

    Israel, David J.

    2005-01-01

    The NASA Space Network (SN) supports a variety of missions using the Tracking and Data Relay Satellite System (TDRSS), which includes ground stations in White Sands, New Mexico and Guam. A Space Network IP Services (SNIS) architecture is being developed to support future users with requirements for end-to-end Internet Protocol (IP) communications. This architecture will support all IP protocols, including Mobile IP, over TDRSS Single Access, Multiple Access, and Demand Access Radio Frequency (RF) links. This paper will describe this architecture and how it can enable Low Earth Orbiting IP satellite missions.

  6. The influence of receptor-mediated interactions on reaction-diffusion mechanisms of cellular self-organisation.

    PubMed

    Klika, Václav; Baker, Ruth E; Headon, Denis; Gaffney, Eamonn A

    2012-04-01

    Understanding the mechanisms governing and regulating self-organisation in the developing embryo is a key challenge that has puzzled and fascinated scientists for decades. Since its conception in 1952 the Turing model has been a paradigm for pattern formation, motivating numerous theoretical and experimental studies, though its verification at the molecular level in biological systems has remained elusive. In this work, we consider the influence of receptor-mediated dynamics within the framework of Turing models, showing how non-diffusing species impact the conditions for the emergence of self-organisation. We illustrate our results within the framework of hair follicle pre-patterning, showing how receptor interaction structures can be constrained by the requirement for patterning, without the need for detailed knowledge of the network dynamics. Finally, in the light of our results, we discuss the ability of such systems to pattern outside the classical limits of the Turing model, and the inherent dangers involved in model reduction. PMID:22072186
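A minimal 1-D simulation illustrates the kind of Turing self-organisation the paper builds on. Schnakenberg kinetics are used here as a standard two-species stand-in (the paper's receptor-mediated, non-diffusing species are not reproduced); parameters and grid are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Schnakenberg kinetics: a classic Turing-unstable reaction pair.
a, b = 0.1, 0.9
Du, Dv = 1.0, 40.0            # the inhibitor must diffuse much faster
nx, dx, dt = 100, 1.0, 0.01   # dt below the explicit-diffusion limit

# Start at the homogeneous steady state plus a small random perturbation.
u = (a + b) + 0.01 * rng.normal(size=nx)
v = b / (a + b) ** 2 + 0.01 * rng.normal(size=nx)

def lap(f):
    """Periodic 1-D Laplacian."""
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx ** 2

for _ in range(20000):        # explicit Euler time stepping
    ru = a - u + u ** 2 * v
    rv = b - u ** 2 * v
    u, v = (u + dt * (Du * lap(u) + ru),
            v + dt * (Dv * lap(v) + rv))

# A spatially periodic pattern has emerged if u now varies far beyond
# the initial 1% perturbation amplitude.
print("u range:", float(u.min()), float(u.max()))
```

The paper's point can be read against this sketch: adding non-diffusing receptor species changes the conditions under which such a perturbation grows at all, without necessarily changing the diffusing pair.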

  7. A heavy rainfall sounding climatology over Gauteng, South Africa, using self-organising maps

    NASA Astrophysics Data System (ADS)

    Dyson, Liesl L.

    2015-12-01

    The daily weather at a particular place is largely influenced by the synoptic circulation and thermodynamic profile of the atmosphere. Heavy rainfall occurs from a particular subset of synoptic and thermodynamic states. Baseline climatologies provide objective information on heavy rainfall-producing circulation patterns and thermodynamic variables, and this is how climatologically large or extreme values associated with heavy rainfall are identified. The aim of this research is to provide a heavy rainfall sounding climatology in austral summer over Gauteng, South Africa, using self-organising maps (SOMs). The results show that the SOM captures the intra-seasonal variability of heavy rainfall soundings by clearly distinguishing between the atmospheric conditions on early summer (October-December) and late summer (January-March) heavy rainfall days. Heavy early summer rainfall is associated with large vertical wind shear and conditional instability, with a drier and cooler atmosphere than when heavy rainfall occurs in late summer. Heavy late summer rainfall is associated with higher convective instability and small vertical wind shear. The SOM climatology shows that some heavy rainfall days occur in both early and late summer when large-scale synoptic weather systems cause strong near-surface moisture flux and large values of wind shear. On these days, both the conditional and convective instability of the atmosphere are low and heavy rainfall results from the strong synoptic forcing. In contrast, heavy rainfall also occurs on days when the synoptic circulation is not very favourable and the air is relatively dry, but the atmosphere is unstable with warm surface conditions and heavy rainfall develops from locally favourable conditions. The SOM climatology provides guidelines to critical values of sounding-derived parameters for all these scenarios.

  8. Summarising climate and air quality (ozone) data on self-organising maps: a Sydney case study.

    PubMed

    Jiang, Ningbo; Betts, Alan; Riley, Matt

    2016-02-01

    This paper explores the classification and visualisation utility of the self-organising map (SOM) method in the context of New South Wales (NSW), Australia, using gridded NCEP/NCAR geopotential height reanalysis for east Australia, together with multi-site meteorological and air quality data for Sydney from the NSW Office of Environment and Heritage Air Quality Monitoring Network. A twice-daily synoptic classification has been derived for east Australia for the period of 1958-2012. The classification has not only reproduced the typical synoptic patterns previously identified in the literature but also provided an opportunity to visualise the subtle, non-linear change in the eastward-migrating synoptic systems influencing NSW (including Sydney). The summarisation of long-term, multi-site air quality/meteorological data from the Sydney basin on the SOM plane has identified a set of typical air pollution/meteorological spatial patterns in the region. Importantly, the examination of these patterns in relation to synoptic weather types has provided important visual insights into how local and synoptic meteorological conditions interact with each other and affect the variability of air quality in tandem. The study illustrates that while synoptic circulation types are influential, the within-type variability in mesoscale flows plays a critical role in determining local ozone levels in Sydney. These results indicate that the SOM can be a useful tool for assessing the impact of weather and climatic conditions on air quality in the regional airshed. This study further promotes the use of the SOM method in environmental research. PMID:26787272

  9. Rapid self-organised initiation of ad hoc sensor networks close above the percolation threshold

    NASA Astrophysics Data System (ADS)

    Korsnes, Reinert

    2010-07-01

    This work shows potentials for rapid self-organisation of sensor networks where nodes collaborate to relay messages to a common data collecting unit (sink node). The study problem is, in the sense of graph theory, to find a shortest path tree spanning a weighted graph. This is a well-studied problem where for example Dijkstra’s algorithm provides a solution for non-negative edge weights. The present contribution shows, by simulation examples, that simple modifications of known distributed approaches can provide significant improvements in performance. Phase transition phenomena, which are known to take place in networks close to percolation thresholds, may explain these observations. An initial method, which here serves as reference, assumes the sink node starts organisation of the network (tree) by transmitting a control message advertising its availability to its neighbours. These neighbours then advertise their current cost estimate for routing a message to the sink. A node which in this way receives a message implying an improved route to the sink advertises its new finding and remembers which neighbouring node the message came from. This activity proceeds until there are no more improvements to advertise to neighbours. The result is a tree network for cost-effective transmission of messages to the sink (root). This distributed approach has potential for simple improvements, which are of interest when minimisation of storage and communication of network information is a concern. Fast organisation of the network takes place when the number k of connections for each node (degree) is close above its critical value for global network percolation and at the same time there is a threshold for the nodes to decide to advertise network route updates.
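The reference method described above amounts to a distributed Bellman-Ford-style relaxation. A compact simulation on a random geometric graph makes it concrete; the node count, radio range, and hop-count edge weights are illustrative assumptions, not the paper's set-up:

```python
import random

random.seed(2)

# Random geometric graph: nodes connect when within radio range.
n, radius = 60, 0.2
pos = [(random.random(), random.random()) for _ in range(n)]

def linked(i, j):
    dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
    return (dx * dx + dy * dy) ** 0.5 <= radius

nbrs = {i: [j for j in range(n) if j != i and linked(i, j)]
        for i in range(n)}

# Distributed tree building as described: the sink (node 0) advertises
# cost 0; any node that hears a cheaper route to the sink adopts it,
# remembers the sender as its parent, and re-advertises.
INF = float("inf")
cost = [INF] * n
parent = [None] * n
cost[0] = 0.0
queue = [0]                      # nodes with news to advertise
messages = 0
while queue:
    i = queue.pop(0)
    for j in nbrs[i]:
        messages += 1
        if cost[i] + 1 < cost[j]:          # hop-count edge weight
            cost[j] = cost[i] + 1
            parent[j] = i
            queue.append(j)                # j now has news to advertise

reached = sum(c < INF for c in cost)
print(f"{reached}/{n} nodes joined the tree after {messages} messages")
```

The message count is the quantity the paper's modifications aim to reduce; near the percolation threshold the degree is just high enough to connect the network while keeping redundant advertisements low.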

  10. Ecological hierarchies and self-organisation - Pattern analysis, modelling and process integration across scales

    USGS Publications Warehouse

    Reuter, H.; Jopp, F.; Blanco-Moreno, J. M.; Damgaard, C.; Matsinos, Y.; DeAngelis, D.L.

    2010-01-01

    A continuing discussion in applied and theoretical ecology focuses on the relationship of different organisational levels and on how ecological systems interact across scales. We address principal approaches to cope with complex across-level issues in ecology by applying elements of hierarchy theory and the theory of complex adaptive systems. A top-down approach, often characterised by the use of statistical techniques, can be applied to analyse large-scale dynamics and identify constraints exerted on lower levels. Current developments are illustrated with examples from the analysis of within-community spatial patterns and large-scale vegetation patterns. A bottom-up approach allows one to elucidate how interactions of individuals shape dynamics at higher levels in a self-organisation process; e.g., population development and community composition. This may be facilitated by various modelling tools, which provide the distinction between focal levels and resulting properties. For instance, resilience in grassland communities has been analysed with a cellular automaton approach, and the driving forces in rodent population oscillations have been identified with an agent-based model. Both modelling tools illustrate the principles of analysing higher level processes by representing the interactions of basic components. The focus of most ecological investigations on either top-down or bottom-up approaches may not be appropriate, if strong cross-scale relationships predominate. Here, we propose an 'across-scale-approach', closely interweaving the inherent potentials of both approaches. This combination of analytical and synthesising approaches will enable ecologists to establish a more coherent access to cross-level interactions in ecological systems. © 2010 Gesellschaft für Ökologie.

  11. SANDS: A Service-Oriented Architecture for Clinical Decision Support in a National Health Information Network

    PubMed Central

    Wright, Adam; Sittig, Dean F.

    2008-01-01

    In this paper we describe and evaluate a new distributed architecture for clinical decision support called SANDS (Service-oriented Architecture for NHIN Decision Support), which leverages current health information exchange efforts and is based on the principles of a service-oriented architecture. The architecture allows disparate clinical information systems and clinical decision support systems to be seamlessly integrated over a network according to a set of interfaces and protocols described in this paper. The architecture described is fully defined and developed, and six use cases have been developed and tested using a prototype electronic health record which links to one of the existing prototype National Health Information Networks (NHIN): drug interaction checking, syndromic surveillance, diagnostic decision support, inappropriate prescribing in older adults, information at the point of care and a simple personal health record. Some of these use cases utilize existing decision support systems, which are either commercially or freely available at present, and developed outside of the SANDS project, while other use cases are based on decision support systems developed specifically for the project. Open source code for many of these components is available, and an open source reference parser is also available for comparison and testing of other clinical information systems and clinical decision support systems that wish to implement the SANDS architecture. PMID:18434256

  12. Experimental Support for the Evolution of Symmetric Protein Architecture from a Simple Peptide Motif

    SciTech Connect

    J Lee; M Blaber

    2011-12-31

The majority of protein architectures exhibit elements of structural symmetry, and 'gene duplication and fusion' is the evolutionary mechanism generally hypothesized to be responsible for their emergence from simple peptide motifs. Despite the central importance of the gene duplication and fusion hypothesis, experimental support for a plausible evolutionary pathway for a specific protein architecture has yet to be effectively demonstrated. To address this question, a unique 'top-down symmetric deconstruction' strategy was utilized to successfully identify a simple peptide motif capable of recapitulating, via gene duplication and fusion processes, a symmetric protein architecture (the threefold symmetric β-trefoil fold). The folding properties of intermediary forms in this deconstruction agree precisely with a previously proposed 'conserved architecture' model for symmetric protein evolution. Furthermore, a route through foldable sequence-space between the simple peptide motif and extant protein fold is demonstrated. These results provide compelling experimental support for a plausible evolutionary pathway of symmetric protein architecture via gene duplication and fusion processes.

  13. A software architecture to support a large-scale, multi-tier clinical information system.

    PubMed

    Yungton, J A; Sittig, D F; Reilly, P; Pappas, J; Flammini, S; Chueh, H C; Teich, J M

    1998-01-01

A robust software architecture is necessary to support a large-scale, multi-tier clinical information system. This paper describes our mechanism for enterprise distribution of applications and support files, the consolidation of data-access functions and system utilities on the data-access tier, and an application framework that implements a coherent clinical computing environment. The software architecture and systems described in this paper have proven robust in pilot testing of our applications at Massachusetts General Hospital. PMID:9929212

  14. A novel EPON architecture for supporting direct communication between ONUs

    NASA Astrophysics Data System (ADS)

    Wang, Liqian; Chen, Xue; Wang, Zhen

    2008-11-01

In a traditional EPON, the optical signal from one ONU cannot reach the other ONUs, so ONUs cannot transmit packets to each other directly. Such packets must instead be relayed by the OLT, which consumes both upstream and downstream bandwidth. Bandwidth utilization is therefore low, and it drops further as inter-ONU traffic grows. When the EPON carries P2P (peer-to-peer) and VPN applications, which generate a large volume of inter-ONU traffic, the traditional EPON suffers from low bandwidth utilization; in the worst case, utilization is only 50 percent. This paper proposes a novel EPON architecture and a novel medium access control protocol to realize direct packet transmission between ONUs. The proposed EPON adopts a novel circled architecture in the splitter. Owing to the circled splitter, optical signals from one ONU can reach the other ONUs, so packets can be transmitted directly between two ONUs. Traffic between two ONUs then consumes only upstream bandwidth, reducing the bandwidth cost by 50 percent. Moreover, direct transmission reduces packet latency.
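The worst-case 50 percent figure follows from simple accounting: an OLT-relayed packet is carried twice (once upstream, once downstream), a direct packet once. A quick sketch (the function name and traffic model are ours, not the paper's):

```python
def epon_utilization(inter_onu_fraction):
    """Effective bandwidth utilization when a given fraction of offered
    traffic is ONU-to-ONU. In a conventional EPON the OLT relays such
    packets, so each relayed bit is carried twice (upstream + downstream);
    direct traffic is carried once."""
    bits_carried = 2 * inter_onu_fraction + (1 - inter_onu_fraction)
    return 1 / bits_carried

print(epon_utilization(0.0))  # no inter-ONU traffic: 1.0
print(epon_utilization(1.0))  # all traffic inter-ONU: 0.5 (worst case)
```

With the proposed circled splitter, inter-ONU traffic is carried once on the upstream only, so the factor of two (and the 50 percent penalty) disappears.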

  15. Clouds: A support architecture for fault tolerant, distributed systems

    NASA Technical Reports Server (NTRS)

Dasgupta, P.; Leblanc, R. J., Jr.

    1986-01-01

    Clouds is a distributed operating system providing support for fault tolerance, location independence, reconfiguration, and transactions. The implementation paradigm uses objects and nested actions as building blocks. Subsystems and applications that can be supported by Clouds to further enhance the performance and utility of the system are also discussed.

  16. A support architecture for reliable distributed computing systems

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.

    1988-01-01

The Clouds project is well underway toward its goal of building a unified distributed operating system supporting the object model. The operating system design uses the object concept for structuring software at all levels of the system. The basic operating system has been developed, and work is in progress to build a usable system.

  17. Scaling Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin

    2016-01-01

For long-duration space missions outside of Earth orbit, reliability considerations will drive higher levels of redundancy and/or on-board spares for life support equipment. Component scaling will be a critical element in minimizing overall launch mass while maintaining an acceptable level of system reliability. Building on an earlier reliability study (AIAA 2012-3491), this paper considers the impact of alternative scaling approaches, including the design of technology assemblies and their individual components to maximum, nominal, survival, or other fractional requirements. The optimal level of life support system closure is evaluated for deep-space missions of varying duration using equivalent system mass (ESM) as the comparative basis. Reliability impacts are included in ESM by estimating the number of component spares required to meet a target system reliability. Common cause failures are included in the analysis. ISS and ISS-derived life support technologies are considered along with selected alternatives. This study focuses on minimizing launch mass, which may be enabling for deep-space missions.
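The link between component reliability and spares count can be sketched with a standard Poisson spares model: find the smallest number of spares such that the probability of exhausting them stays below the allowed risk. The failure rate, mission length, and reliability target below are illustrative numbers, not values from the study.

```python
import math

def spares_needed(failure_rate, mission_hours, target_rel, n_units=1):
    """Smallest k such that P(Poisson(lam) <= k) >= target_rel, i.e. the
    number of spares needed so the chance of exhausting them stays below
    1 - target_rel. Assumes a constant failure rate (exponential lifetimes)
    across n_units identical components."""
    lam = failure_rate * mission_hours * n_units   # expected failures
    spares = 0
    term = math.exp(-lam)                          # P(exactly 0 failures)
    cdf = term
    while cdf < target_rel:
        spares += 1
        term *= lam / spares                       # Poisson recurrence
        cdf += term
    return spares

# Illustrative numbers (not from the study): 1e-4 failures/hour over a
# roughly 3-year deep-space mission, one unit, 99% target reliability.
print(spares_needed(1e-4, 26280, 0.99))
```

Because the spares count grows with mission duration while closure reduces consumable resupply, both effects end up folded into the same ESM comparison the abstract describes.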

  18. Innovative use of self-organising maps (SOMs) in model validation.

    NASA Astrophysics Data System (ADS)

    Jolly, Ben; McDonald, Adrian; Coggins, Jack

    2016-04-01

    We present an innovative combination of techniques for validation of numerical weather prediction (NWP) output against both observations and reanalyses using two classification schemes, demonstrated by a validation of the operational NWP 'AMPS' (the Antarctic Mesoscale Prediction System). Historically, model validation techniques have centred on case studies or statistics at various time scales (yearly/seasonal/monthly). Within the past decade the latter technique has been expanded by the addition of classification schemes in place of time scales, allowing more precise analysis. Classifications are typically generated for either the model or the observations, then used to create composites for both which are compared. Our method creates and trains a single self-organising map (SOM) on both the model output and observations, which is then used to classify both datasets using the same class definitions. In addition to the standard statistics on class composites, we compare the classifications themselves between the model and the observations. To add further context to the area studied, we use the same techniques to compare the SOM classifications with regimes developed for another study to great effect. The AMPS validation study compares model output against surface observations from SNOWWEB and existing University of Wisconsin-Madison Antarctic Automatic Weather Stations (AWS) during two months over the austral summer of 2014-15. Twelve SOM classes were defined in a '4 x 3' pattern, trained on both model output and observations of 2 m wind components, then used to classify both training datasets. Simple statistics (correlation, bias and normalised root-mean-square-difference) computed for SOM class composites showed that AMPS performed well during extreme weather events, but less well during lighter winds and poorly during the more changeable conditions between either extreme. Comparison of the classification time-series showed that, while correlations were lower
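The key methodological point, a single SOM trained jointly on model output and observations so that both datasets are classified with the same class definitions, can be sketched as follows. The 3×4 grid mirrors the paper's twelve '4 x 3' classes, but the (u, v) wind samples here are synthetic Gaussians, not SNOWWEB/AWS or AMPS data.

```python
import math
import random

def train_som(data, rows=3, cols=4, iters=2000, seed=0):
    """Train a tiny self-organising map (rectangular grid, Euclidean
    best-matching unit, decaying Gaussian neighbourhood) on 2-D samples
    such as (u, v) wind components."""
    rng = random.Random(seed)
    nodes = [[rng.uniform(-1, 1), rng.uniform(-1, 1)]
             for _ in range(rows * cols)]
    for t in range(iters):
        x = rng.choice(data)
        bmu = min(range(len(nodes)),
                  key=lambda k: (nodes[k][0] - x[0])**2 + (nodes[k][1] - x[1])**2)
        lr = 0.5 * (1 - t / iters)                # decaying learning rate
        radius = 1.0 + 2.0 * (1 - t / iters)      # decaying neighbourhood
        bi, bj = divmod(bmu, cols)
        for k, w in enumerate(nodes):
            i, j = divmod(k, cols)
            d2 = (i - bi) ** 2 + (j - bj) ** 2
            h = lr * math.exp(-d2 / (2 * radius ** 2))
            w[0] += h * (x[0] - w[0])
            w[1] += h * (x[1] - w[1])
    return nodes

def classify(nodes, x):
    """Class of sample x = index of its best-matching map node."""
    return min(range(len(nodes)),
               key=lambda k: (nodes[k][0] - x[0])**2 + (nodes[k][1] - x[1])**2)

rng = random.Random(1)
obs = [(rng.gauss(3.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(200)]
model = [(rng.gauss(3.5, 1.0), rng.gauss(0.2, 1.0)) for _ in range(200)]
nodes = train_som(obs + model)                      # ONE map, trained on both
obs_classes = [classify(nodes, x) for x in obs]     # same class definitions
model_classes = [classify(nodes, x) for x in model]
```

Once both datasets are labelled against the same map, class-composite statistics and direct comparison of the two classification time series follow naturally.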

  19. WDS Knowledge Network Architecture in Support of International Science

    NASA Astrophysics Data System (ADS)

    Mokrane, M.; Minster, J. B. H.; Hugo, W.

    2014-12-01

    ICSU (International Council for Science) created the World Data System (WDS) as an interdisciplinary body at its General Assembly in Maputo in 2008, and since then the membership of the WDS has grown to include 86 members, of whom 56 are institutions or data centers focused on providing quality-assured data and services to the scientific community, and 10 more are entire networks of such data facilities and services. In addition to its objective of providing universal and equitable access to scientific data and services, WDS is also active in promoting stewardship, standards and conventions, and improved access to products derived from data and services. Whereas WDS is in process of aggregating and harmonizing the metadata collections of its membership, it is clear that additional benefits can be obtained by supplementing such traditional metadata sources with information about members, authors, and the coverages of the data, as well as metrics such as citation indices, quality indicators, and usability. Moreover, the relationships between the actors and systems that populate this metadata landscape can be seen as a knowledge network that describes a subset of global scientific endeavor. Such a knowledge network is useful in many ways, supporting both machine-based and human requests for contextual information related to a specific data set, institution, author, topic, or other entities in the network. Specific use cases that can be realized include decision and policy support for funding agencies, identification of collaborators, ranking of data sources, availability of data for specific coverages, and many more. The paper defines the scope of and conceptual background to such a knowledge network, discusses some initial work done by WDS to establish the network, and proposes an implementation model for rapid operationalization. In this model, established interests such as DataCite, ORCID, and CrossRef have well-defined roles, and the standards, services, and

  20. Knowledge Network Architecture in Support of International Science

    NASA Astrophysics Data System (ADS)

    Hugo, Wim

    2015-04-01

ICSU (The International Council for Science) created the World Data System (WDS) as an interdisciplinary body at its General Assembly in Maputo in 2008, and since then the membership of the WDS has grown to include 86 members, of whom 56 are institutions or data centres focused on providing quality-assured data and services to the scientific community. In addition to its objective of providing universal and equitable access to such data and services, WDS is also active in promoting stewardship, standards and conventions, and improved access to products derived from data and services. Whereas WDS is in process of aggregating and harmonizing the meta-data collections of its membership, it is clear that additional benefits can be obtained by supplementing such traditional meta-data sources with information about members, authors, and the coverages of the data, as well as metrics such as citation indices, quality indicators, and usability. Moreover, the relationships between the actors and systems that populate this meta-data landscape can be seen as a knowledge network that describes a subset of global scientific endeavor. Such a knowledge network is useful in many ways, supporting both machine-based and human requests for contextual information related to a specific data set, institution, author, topic, or other entities in the network. Specific use cases that can be realised include decision and policy support for funding agencies, identification of collaborators, ranking of data sources, availability of data for specific coverages, and many more. The paper defines the scope of and conceptual background to such a knowledge network, discusses some initial work done by WDS to establish the network, and proposes an implementation model for rapid operationalisation. In this model, established interests such as DataCite, ORCID, and CrossRef have well-defined roles, and the standards, services, and registries required to build a community-maintained, scalable knowledge

  1. An integrative architecture for a sensor-supported trust management system.

    PubMed

    Trček, Denis

    2012-01-01

Trust plays a key role not only in e-worlds and emerging pervasive computing environments, but has also done so for millennia in human societies. Trust management solutions, which have been around for some fifteen years, were primarily developed for the above-mentioned cyber environments and are typically focused on artificial agents, sensors, etc. This paper, however, presents extensions of a new methodology, together with an architecture, for trust management support that is focused on humans and human-like agents. In this methodology and architecture, sensors play a crucial role. The architecture is an already deployable tool for multi- and interdisciplinary research in various areas where humans are involved. It provides new ways to gain insight into the dynamics and evolution of such structures, not only in pervasive computing environments, but also in other important areas such as management and decision-making support. PMID:23112628

  2. An Integrative Architecture for a Sensor-Supported Trust Management System

    PubMed Central

    Trček, Denis

    2012-01-01

Trust plays a key role not only in e-worlds and emerging pervasive computing environments, but has also done so for millennia in human societies. Trust management solutions, which have been around for some fifteen years, were primarily developed for the above-mentioned cyber environments and are typically focused on artificial agents, sensors, etc. This paper, however, presents extensions of a new methodology, together with an architecture, for trust management support that is focused on humans and human-like agents. In this methodology and architecture, sensors play a crucial role. The architecture is an already deployable tool for multi- and interdisciplinary research in various areas where humans are involved. It provides new ways to gain insight into the dynamics and evolution of such structures, not only in pervasive computing environments, but also in other important areas such as management and decision-making support. PMID:23112628

  3. Architecture, Design, and Development of an HTML/JavaScript Web-Based Group Support System.

    ERIC Educational Resources Information Center

    Romano, Nicholas C., Jr.; Nunamaker, Jay F., Jr.; Briggs, Robert O.; Vogel, Douglas R.

    1998-01-01

    Examines the need for virtual workspaces and describes the architecture, design, and development of GroupSystems for the World Wide Web (GSWeb), an HTML/JavaScript Web-based Group Support System (GSS). GSWeb, an application interface similar to a Graphical User Interface (GUI), is currently used by teams around the world and relies on user…

  4. FY04 Advanced Life Support Architecture and Technology Studies: Mid-Year Presentation

    NASA Technical Reports Server (NTRS)

    Lange, Kevin; Anderson, Molly; Duffield, Bruce; Hanford, Tony; Jeng, Frank

    2004-01-01

    Long-Term Objective: Identify optimal advanced life support system designs that meet existing and projected requirements for future human spaceflight missions. a) Include failure-tolerance, reliability, and safe-haven requirements. b) Compare designs based on multiple criteria including equivalent system mass (ESM), technology readiness level (TRL), simplicity, commonality, etc. c) Develop and evaluate new, more optimal, architecture concepts and technology applications.

  5. The middleware architecture supports heterogeneous network systems for module-based personal robot system

    NASA Astrophysics Data System (ADS)

    Choo, Seongho; Li, Vitaly; Choi, Dong Hee; Jung, Gi Deck; Park, Hong Seong; Ryuh, Youngsun

    2005-12-01

In the personal robot systems being developed at present, the internal architecture consists of modules, each with a separate function, connected through a heterogeneous network system. This module-based architecture supports specialization and division of labor in both design and implementation, reducing development time and cost for the modules. Furthermore, because every module is connected to the others through network systems, integration is straightforward and modules can cooperate to provide advanced composite functions. In this architecture, one of the most important technologies is the network middleware, which handles communications among the modules connected through heterogeneous network systems. The network middleware acts like the nervous system of the personal robot: it relays, transmits, and translates information appropriately between modules, much as nerves do between human organs. The network middleware supports various hardware platforms and heterogeneous network systems (Ethernet, Wireless LAN, USB, IEEE 1394, CAN, CDMA-SMS, RS-232C). This paper discusses the mechanisms our network middleware uses for intercommunication and routing among modules, and its methods for real-time data communication and fault-tolerant network service. To these ends, we have designed and implemented a layered network middleware scheme, distributed routing management, and network monitoring/notification technology for heterogeneous networks. The central question is how routing information is constructed in the network middleware; with the resulting routing table, we have added several further features. We are now designing a new version of the network middleware (which we call 'OO M/W') that supports object-oriented operation, and are updating the program sources themselves to an object-oriented architecture. It is lighter and faster, and can support more operating systems and heterogeneous network systems, but other general

  6. A Distributed Architecture for Tsunami Early Warning and Collaborative Decision-support in Crises

    NASA Astrophysics Data System (ADS)

    Moßgraber, J.; Middleton, S.; Hammitzsch, M.; Poslad, S.

    2012-04-01

The presentation will describe work on the system architecture that is being developed in the EU FP7 project TRIDEC on "Collaborative, Complex and Critical Decision-Support in Evolving Crises". The challenges for a Tsunami Early Warning System (TEWS) are manifold, and the success of a system depends crucially on the system's architecture. A modern warning system following a system-of-systems approach has to integrate various components and sub-systems such as different information sources, services and simulation systems. Furthermore, it has to take into account the distributed and collaborative nature of warning systems. In order to create an architecture that supports the whole spectrum of a modern, distributed and collaborative warning system, one must deal with multiple challenges. Obviously, one cannot expect to tackle these challenges adequately with a monolithic system or with a single technology. Therefore, a system architecture providing the blueprints to implement the system-of-systems approach has to combine multiple technologies and architectural styles. At the bottom layer it has to reliably integrate a large set of conventional sensors, such as seismic sensors and sensor networks, buoys and tide gauges, and also innovative and unconventional sensors, such as streams of messages from social media services. At the top layer it has to support collaboration on high-level decision processes and facilitate information sharing between organizations. In between, the system has to process all data and integrate information on a semantic level in a timely manner. This complex communication follows an event-driven mechanism allowing events to be published, detected and consumed by various applications within the architecture. Therefore, at the upper layer the event-driven architecture (EDA) aspects are combined with principles of service-oriented architectures (SOA) using standards for communication and data exchange. The most prominent challenges on this layer
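The event-driven mechanism described above, events published, detected, and consumed by loosely coupled applications, reduces to a publish/subscribe pattern. The sketch below is a minimal in-process version; the topic name and payload fields are invented for illustration and are not TRIDEC interfaces.

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe bus: producers publish typed events, and
    any number of consumers react to them without knowing the producers."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subs[topic]:
            handler(payload)

bus = EventBus()
alerts = []
# A decision-support component consumes sensor events:
bus.subscribe("sea-level.anomaly", lambda e: alerts.append(
    f"check gauge {e['gauge']} (residual {e['residual_m']} m)"))
# A sensor-integration component publishes one:
bus.publish("sea-level.anomaly", {"gauge": "TG-07", "residual_m": 0.42})
print(alerts[0])
```

Swapping this toy bus for an SOA message broker or standards-based transport changes the plumbing, not the pattern: the EDA/SOA combination in the abstract keeps producers and consumers decoupled in exactly this way.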

  7. The Setting is the Service: How the Architecture of Sober Living Residences Supports Community Based Recovery

    PubMed Central

    Wittman, Fried; Jee, Babette; Polcin, Douglas L.; Henderson, Diane

    2014-01-01

    The architecture of residential recovery settings is an important silent partner in the alcohol/drug recovery field. The settings significantly support or hinder recovery experiences of residents, and shape community reactions to the presence of sober living houses (SLH) in ordinary neighborhoods. Grounded in the principles of Alcoholics Anonymous, the SLH provides residents with settings designed to support peer based recovery; further, these settings operate in a community context that insists on sobriety and strongly encourages attendance at 12-step meetings. Little formal research has been conducted to show how architectural features of the recovery setting – building appearance, spatial layouts, furnishings and finishes, policies for use of the facilities, physical care and maintenance of the property, neighborhood features, aspects of location in the city – function to promote (or retard) recovery, and to build (or detract from) community support. This paper uses a case-study approach to analyze the architecture of a community-based residential recovery service that has demonstrated successful recovery outcomes for its residents, is popular in its community, and has achieved state-wide recognition. The Environmental Pattern Language (Alexander, Ishikawa, & Silverstein, 1977) is used to analyze its architecture in a format that can be tested, critiqued, and adapted for use by similar programs in many communities, providing a model for replication and further research. PMID:25328377

  8. Exploring Hardware Support For Scaling Irregular Applications on Multi-node Multi-core Architectures

    SciTech Connect

    Secchi, Simone; Ceriani, Marco; Tumeo, Antonino; Villa, Oreste; Palermo, Gianluca; Raffo, Luigi

    2013-06-05

With the recent emergence of large-scale knowledge discovery, data mining and social network analysis, irregular applications have gained renewed interest. Classic cache-based high-performance architectures do not provide optimal performance with such workloads, mainly due to the very low spatial and temporal locality of their irregular control and memory access patterns. In this paper, we present a multi-node, multi-core, fine-grained multi-threaded shared-memory system architecture specifically designed for the execution of large-scale irregular applications, built on top of three pillars that we believe are fundamental to supporting these workloads. First, we offer transparent hardware support for a Partitioned Global Address Space (PGAS) to provide a large globally shared address space with no software library overhead. Second, we employ multi-threaded multi-core processing nodes to achieve the latency tolerance required when accessing global memory, which may reside in a remote node. Finally, we devise hardware support for inter-thread synchronization across the whole global address space. We first model the performance using an analytical model that takes into account the main architecture and application characteristics. We then describe the hardware design of the proposed custom architectural building blocks that support the above-mentioned three pillars. Finally, we present a limited-scale evaluation of the system on a multi-board FPGA prototype with typical irregular kernels and benchmarks. The experimental evaluation demonstrates the architecture's performance scalability for different configurations of the whole system.

  9. Chlorinated solvents in a petrochemical wastewater treatment plant: an assessment of their removal using self-organising maps.

    PubMed

    Tobiszewski, Marek; Tsakovski, Stefan; Simeonov, Vasil; Namieśnik, Jacek

    2012-05-01

    The self-organising map approach was used to assess the efficiency of chlorinated solvent removal from petrochemical wastewater in a refinery wastewater treatment plant. Chlorinated solvents and inorganic anions (11 variables) were determined in 72 wastewater samples, collected from three different purification streams. The classification of variables identified technical solvents, brine from oil desalting and runoff sulphates as pollution sources in the refinery, affecting the quality of wastewater treatment plant influent. The classification of samples revealed the formation of five clusters: the first three clusters contained samples collected from the drainage water, process water and oiled rainwater treatment streams. The fourth cluster consisted mainly of samples collected after biological treatment, and the fifth one of samples collected after an unusual event. SOM analysis showed that the biological treatment step significantly reduced concentrations of chlorinated solvents in wastewater. PMID:22356856

  10. Service oriented architecture for clinical decision support: a systematic review and future directions.

    PubMed

    Loya, Salvador Rodriguez; Kawamoto, Kensaku; Chatwin, Chris; Huser, Vojtech

    2014-12-01

The use of a service-oriented architecture (SOA) has been identified as a promising approach for improving health care by facilitating reliable clinical decision support (CDS). A review of the literature through October 2013 identified 44 articles on this topic. The review suggests that SOA-related technologies such as Business Process Model and Notation (BPMN) and Service Component Architecture (SCA) have not been generally adopted to impact health IT systems' performance for better care solutions. Additionally, technologies such as Enterprise Service Bus (ESB) and architectural approaches like Service Choreography have not been generally exploited among researchers and developers. Based on the experience of other industries and our observation of the evolution of SOA, we found that greater use of these approaches has the potential to significantly impact SOA implementations for CDS. PMID:25325996

  11. Fuzzylot: a novel self-organising fuzzy-neural rule-based pilot system for automated vehicles.

    PubMed

    Pasquier, M; Quek, C; Toh, M

    2001-10-01

This paper presents part of our research work concerned with the realisation of an Intelligent Vehicle and the technologies required for its routing, navigation, and control. An automated driver prototype has been developed using a self-organising fuzzy rule-based system (POPFNN-CRI(S)) to model and subsequently emulate human driving expertise. The ability of fuzzy logic to represent vague information using linguistic variables makes it a powerful tool for developing rule-based control systems when an exact working model is not available, as is the case with any vehicle-driving task. Designing a fuzzy system, however, is a complex endeavour, due to the need to define the variables and their associated fuzzy sets, and to determine a suitable rule base. Many efforts have thus been devoted to automating this process, yielding the development of learning and optimisation techniques. One of them is the family of POP-FNNs, or Pseudo-Outer Product Fuzzy Neural Networks (TVR, AARS(S), AARS(NS), CRI, Yager). These generic self-organising neural networks developed at the Intelligent Systems Laboratory (ISL/NTU) are based on formal fuzzy mathematical theory and are able to objectively extract a fuzzy rule base from training data. In this application, a driving simulator has been developed that integrates a detailed model of the car dynamics, complete with engine characteristics and environmental parameters, and an OpenGL-based 3D-simulation interface coupled with a driving wheel and accelerator/brake pedals. The simulator has been used on various road scenarios to record, from a human pilot, driving data consisting of steering and speed-control actions associated with road features. Specifically, the POPFNN-CRI(S) system is used to cluster the data and extract a fuzzy rule base modelling the human driving behaviour. Finally, the effectiveness of the generated rule base has been validated using the simulator in autopilot mode. PMID:11681754
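The role of linguistic variables in such a rule base can be seen in a hand-written Mamdani-style miniature (two invented rules for braking by distance). Note the contrast with the paper's approach: POPFNN-CRI(S) learns its rule base objectively from driving data, whereas here the sets and rules are hand-coded purely to show the mechanics.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def brake_pressure(distance_m):
    """Two invented linguistic rules, Mamdani style:
         IF distance is NEAR THEN brake HARD
         IF distance is FAR  THEN brake SOFT
    Defuzzified as a weighted average of each rule's output centroid."""
    near = tri(distance_m, 0, 0, 40)      # NEAR: falls to 0 by 40 m
    far = tri(distance_m, 20, 60, 60)     # FAR: ramps up from 20 m
    if distance_m >= 60:                  # FAR shoulder beyond 60 m
        far = 1.0
    hard_centroid, soft_centroid = 0.9, 0.1   # brake-pressure outputs
    w = near + far
    return (near * hard_centroid + far * soft_centroid) / w if w else 0.0

print(brake_pressure(10))  # mostly NEAR -> hard braking
print(brake_pressure(70))  # fully FAR  -> soft braking
```

The vagueness is the point: an obstacle at 30 m is partly NEAR and partly FAR, and both rules contribute proportionally to the defuzzified action.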

  12. Distributed Sensor Architecture for Intelligent Control that Supports Quality of Control and Quality of Service

    PubMed Central

    Poza-Lujan, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, José-Enrique; Simarro, Raúl; Benet, Ginés

    2015-01-01

This paper is part of a study of intelligent architectures for distributed control and communications systems. The study focuses on optimizing control systems by evaluating the performance of middleware through quality of service (QoS) parameters and the optimization of control using Quality of Control (QoC) parameters. The main aim of this work is to study, design, develop, and evaluate a distributed control architecture based on the Data-Distribution Service for Real-Time Systems (DDS) communication standard as proposed by the Object Management Group (OMG). As a result of the study, an architecture called Frame-Sensor-Adapter to Control (FSACtrl) has been developed. FSACtrl provides a model to implement an intelligent distributed Event-Based Control (EBC) system with support to measure QoS and QoC parameters. The novelty consists of using, simultaneously, the measured QoS and QoC parameters to make decisions about the control action with a new method called Event Based Quality Integral Cycle. To validate the architecture, the first five Braitenberg vehicles have been implemented using the FSACtrl architecture. The experimental outcomes demonstrate the convenience of using QoS and QoC parameters jointly in distributed control systems. PMID:25723145

  13. Multitier Portal Architecture for Thin- and Thick-client Neutron Scattering Experiment Support

    SciTech Connect

    Green, Mark L; Miller, Stephen D

    2007-01-01

Integration of emerging technologies and design patterns into the three-tier client-server architecture is required in order to provide a scalable and flexible architecture for novice to sophisticated portal user groups. The ability to provide user-customizable portal interfaces is rapidly becoming commonplace and is driving the expectations of researchers and scientists in the scientific community. This paper describes an architectural design that maximizes information technology service reuse while providing a customizable user interface that scales with user sophistication and requirements. The Spallation Neutron Source (SNS) located at Oak Ridge National Laboratory provides a state-of-the-art facility ideal for implementation of this infrastructure. The SNS Java-based Science Portal (Tier I) and Open Grid Computing Environment (Tier II) provide thin-client support, whereas the GumTree Eclipse Rich Client Platform (Tier III) and Eclipse Integrated Development Environment (Tier IV) provide thick-client support within a multitier portal architecture. Each tier incorporates all of the features of the previous tiers while adding new capabilities based on the user requirements.

  14. Lunar Outpost Life Support Architecture Study Based on a High-Mobility Exploration Scenario

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Anderson, Molly S.

    2010-01-01

    This paper presents results of a life support architecture study based on a 2009 NASA lunar surface exploration scenario known as Scenario 12. The study focuses on the assembly complete outpost configuration and includes pressurized rovers as part of a distributed outpost architecture in both stand-alone and integrated configurations. A range of life support architectures are examined reflecting different levels of closure and distributed functionality. Monte Carlo simulations are used to assess the sensitivity of results to volatile high-impact mission variables, including the quantity of residual Lander oxygen and hydrogen propellants available for scavenging, the fraction of crew time away from the outpost on excursions, total extravehicular activity hours, and habitat leakage. Surpluses or deficits of water and oxygen are reported for each architecture, along with fixed and 10-year total equivalent system mass estimates relative to a reference case. System robustness is discussed in terms of the probability of no water or oxygen resupply as determined from the Monte Carlo simulations.
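A Monte Carlo sensitivity study of this kind can be sketched as follows. The balance model, its coefficients, and the sampling ranges below are placeholders invented for illustration, not values from the NASA study; the structure (sample uncertain mission variables, compute a resource surplus, estimate the probability of needing no resupply) is what the abstract describes.

```python
import random

def water_balance(scavenged_kg, excursion_frac, eva_hours, leakage_kg,
                  baseline_deficit_kg=100.0):
    # Illustrative linear balance: scavenged propellant-derived water adds to
    # the budget; excursions, EVA consumables, and leakage subtract from it.
    # All coefficients are placeholders, not NASA values.
    return (scavenged_kg
            - 50.0 * excursion_frac
            - 0.5 * eva_hours
            - leakage_kg
            - baseline_deficit_kg)

def monte_carlo(n=10_000, seed=42):
    """Estimate P(water surplus >= 0), i.e. no resupply needed."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        surplus = water_balance(
            scavenged_kg=rng.uniform(0.0, 300.0),   # residual lander propellant
            excursion_frac=rng.uniform(0.3, 0.7),   # crew time away from outpost
            eva_hours=rng.uniform(100.0, 400.0),    # total EVA hours
            leakage_kg=rng.uniform(0.0, 50.0))      # habitat leakage
        hits += surplus >= 0.0
    return hits / n

print(f"P(no water resupply needed) = {monte_carlo():.2f}")
```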

  15. Experimental support for the evolution of symmetric protein architecture from a simple peptide motif

    PubMed Central

    Lee, Jihun; Blaber, Michael

    2011-01-01

    The majority of protein architectures exhibit elements of structural symmetry, and “gene duplication and fusion” is the evolutionary mechanism generally hypothesized to be responsible for their emergence from simple peptide motifs. Despite the central importance of the gene duplication and fusion hypothesis, experimental support for a plausible evolutionary pathway for a specific protein architecture has yet to be effectively demonstrated. To address this question, a unique “top-down symmetric deconstruction” strategy was utilized to successfully identify a simple peptide motif capable of recapitulating, via gene duplication and fusion processes, a symmetric protein architecture (the threefold symmetric β-trefoil fold). The folding properties of intermediary forms in this deconstruction agree precisely with a previously proposed “conserved architecture” model for symmetric protein evolution. Furthermore, a route through foldable sequence-space between the simple peptide motif and extant protein fold is demonstrated. These results provide compelling experimental support for a plausible evolutionary pathway of symmetric protein architecture via gene duplication and fusion processes. PMID:21173271

  16. Earth Orbiting Support Systems for commercial low Earth orbit data relay: Assessing architectures through tradespace exploration

    NASA Astrophysics Data System (ADS)

    Palermo, Gianluca; Golkar, Alessandro; Gaudenzi, Paolo

    2015-06-01

    As small satellites and Sun Synchronous Earth Observation systems are assuming an increased role in nowadays space activities, including commercial investments, it is of interest to assess how infrastructures could be developed to support the development of such systems and other spacecraft that could benefit from having a data relay service in Low Earth Orbit (LEO), as opposed to traditional Geostationary relays. This paper presents a tradespace exploration study of the architecture of such LEO commercial satellite data relay systems, here defined as Earth Orbiting Support Systems (EOSS). The paper proposes a methodology to formulate architectural decisions for EOSS constellations, and enumerate the corresponding tradespace of feasible architectures. Evaluation metrics are proposed to measure benefits and costs of architectures; lastly, a multicriteria Pareto criterion is used to downselect optimal architectures for subsequent analysis. The methodology is applied to two case studies for a set of 30 and 100 customer-spacecraft respectively, representing potential markets for LEO services in Exploration, Earth Observation, Science, and CubeSats. Pareto analysis shows how increased performance of the constellation is always achieved by an increased node size, as measured by the gain of the communications antenna mounted on EOSS spacecraft. On the other hand, nonlinear trends in optimal orbital altitude, number of satellites per plane, and number of orbital planes, are found in both cases. An upward trend in individual node memory capacity is found, although never exceeding 256 Gbits of onboard memory for both cases that have been considered, assuming the availability of a polar ground station for EOSS data downlink. System architects can use the proposed methodology to identify optimal EOSS constellations for a given service pricing strategy and customer target, thus identifying alternatives for selection by decision makers.
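The enumerate-then-downselect workflow described above can be sketched generically: enumerate combinations of architectural decisions, score each on a benefit and a cost metric, and keep the non-dominated (Pareto-optimal) set. The decision variables, their values, and the toy benefit/cost metrics below are illustrative stand-ins, not the paper's actual formulation.

```python
from itertools import product

# Illustrative decision variables (placeholder values, not the paper's).
ALTITUDES_KM = [500, 700, 900]
SATS_PER_PLANE = [4, 8]
PLANES = [2, 3]
ANTENNA_GAIN_DBI = [10, 20, 30]

def evaluate(alt, sats, planes, gain):
    # Toy metrics: benefit grows with antenna gain and constellation size;
    # cost grows with constellation size, node complexity, and altitude.
    benefit = gain * sats * planes
    cost = sats * planes * (1.0 + gain / 10.0) + alt / 100.0
    return benefit, cost

def pareto_front(archs):
    """Keep architectures not dominated on (maximize benefit, minimize cost)."""
    return [a for a in archs
            if not any(b[0] >= a[0] and b[1] <= a[1] and b != a for b in archs)]

archs = [evaluate(*d) for d in
         product(ALTITUDES_KM, SATS_PER_PLANE, PLANES, ANTENNA_GAIN_DBI)]
front = pareto_front(archs)
print(f"{len(archs)} architectures enumerated, {len(front)} on the Pareto front")
```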

  17. Lunar Outpost Life Support Architecture Study Based on a High Mobility Exploration Scenario

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Anderson, Molly S.

    2009-01-01

    As scenarios for lunar surface exploration and habitation continue to evolve within NASA's Constellation program, so must studies of optimal life support system architectures and technologies. This paper presents results of a life support architecture study based on a 2009 NASA scenario known as Scenario 12. Scenario 12 represents a consolidation of ideas from earlier NASA scenarios and includes an outpost near the Lunar South Pole comprised of three larger fixed surface elements and four attached pressurized rovers. The scenario places a high emphasis on surface mobility, with planning assuming that all four crewmembers spend roughly 50% of the time away from the outpost on 3-14 day excursions in two of the pressurized rovers. Some of the larger elements can also be mobilized for longer duration excursions. This emphasis on mobility poses a significant challenge for a regenerative life support system in terms of cost-effective waste collection and resource recovery across multiple elements, including rovers with very constrained infrastructure resources. The current study considers pressurized rovers as part of a distributed outpost life support architecture in both stand-alone and integrated configurations. A range of architectures are examined reflecting different levels of closure and distributed functionality. Different lander propellant scavenging options are also considered involving either initial conversion of residual oxygen and hydrogen propellants to water or initial direct oxygen scavenging. Monte Carlo simulations are used to assess the sensitivity of results to volatile high-impact mission variables, including the quantity of residual lander propellants available for scavenging, the fraction of crew time away from the outpost on excursions, total extravehicular activity hours, and habitat leakage. Architectures are evaluated by estimating surpluses or deficits of water and oxygen per 180-day mission and differences in fixed and 10-year

  18. An Architecture and Supporting Environment of Service-Oriented Computing Based-On Context Awareness

    NASA Astrophysics Data System (ADS)

    Ma, Tianxiao; Wu, Gang; Huang, Jun

    Service-oriented computing (SOC) is emerging as an important computing paradigm for the near future. Based on context awareness, this paper proposes an architecture for SOC. A definition of context in open environments such as the Internet is given, based on ontology. The paper also proposes a supporting environment for context-aware SOC, which focuses on on-demand service composition and context-awareness evolution. A reference implementation of the supporting environment based on OSGi[11] is given at the end.

  19. A Scalable Architecture for Rule Engine Based Clinical Decision Support Systems.

    PubMed

    Chattopadhyay, Soumi; Banerjee, Ansuman; Banerjee, Nilanjan

    2015-01-01

    Clinical Decision Support systems (CDSS) have reached a fair level of sophistication and have emerged as the popular system of choice for their aid in clinical decision making. These decision support systems are based on rule engines that navigate through a repertoire of clinical rules and multitudes of facts to assist a clinical expert in deciding on the set of actuations in response to a medical situation. In this paper, we present the design of a scalable architecture for a rule-engine-based clinical decision support system. PMID:26262249
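A rule engine of the kind described walks a set of condition-action rules over a fact base and returns the actuations whose conditions match. A minimal sketch follows; the rule names, fields, and clinical thresholds are invented for illustration and are not from the paper.

```python
# Minimal rule-engine sketch: each rule is (name, condition over the fact
# base, action label). Names, fields, and thresholds are illustrative only.
RULES = [
    ("hyperglycemia-alert",
     lambda f: f.get("glucose_mg_dl", 0) > 180,
     "flag hyperglycemia"),
    ("fever-alert",
     lambda f: f.get("temp_c", 0) > 38.0,
     "flag fever"),
]

def evaluate(facts):
    """Return the actuations for every rule whose condition matches."""
    return [action for name, cond, action in RULES if cond(facts)]

print(evaluate({"glucose_mg_dl": 250, "temp_c": 36.6}))  # ['flag hyperglycemia']
```

Scaling such an engine, the paper's subject, then becomes a question of partitioning the rule set and fact base so that many such evaluations can run concurrently.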

  20. Big Data Architectures for Operationalized Seismic and Subsurface Monitoring and Decision Support Workflows

    NASA Astrophysics Data System (ADS)

    Irving, D. H.; Rasheed, M.; Hillman, C.; O'Doherty, N.

    2012-12-01

    Oilfield management is moving to a more operational footing with near-realtime seismic and sensor monitoring governing drilling, fluid injection and hydrocarbon extraction workflows within safety, productivity and profitability constraints. To date, the geoscientific analytical architectures employed are configured for large volumes of data, computational power or analytical latency and compromises in system design must be made to achieve all three aspects. These challenges are encapsulated by the phrase 'Big Data' which has been employed for over a decade in the IT industry to describe the challenges presented by data sets that are too large, volatile and diverse for existing computational architectures and paradigms. We present a data-centric architecture developed to support a geoscientific and geotechnical workflow whereby: ● scientific insight is continuously applied to fresh data; ● insights and derived information are incorporated into engineering and operational decisions; ● data governance and provenance are routine within a broader data management framework. Strategic decision support systems in large infrastructure projects such as oilfields are typically relational data environments; data modelling is pervasive across analytical functions. However, subsurface data and models are typically non-relational (i.e. file-based) in the form of large volumes of seismic imaging data or rapid streams of sensor feeds and are analysed and interpreted using niche applications. The key architectural challenge is to move data and insight from a non-relational to a relational, or structured, data environment for faster and more integrated analytics. We describe how a blend of MapReduce and relational database technologies can be applied in geoscientific decision support, and the strengths and weaknesses of each in such an analytical ecosystem. In addition we discuss hybrid technologies that use aspects of both and translational technologies for moving data and analytics
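The non-relational-to-structured step can be illustrated in miniature with a MapReduce-style aggregation: raw file-like readings are mapped to keyed pairs, reduced into groups, and summarised into a table-like structure suitable for relational analytics. The field names and values below are invented for the sketch.

```python
from functools import reduce
from collections import defaultdict

# Toy "non-relational" input: raw sensor readings (well, sensor, value).
readings = [
    ("well-1", "pressure", 101.25),
    ("well-1", "pressure", 99.75),
    ("well-2", "pressure", 87.0),
]

def mapper(rec):
    """Map a raw reading to a (key, value) pair."""
    well, sensor, value = rec
    return ((well, sensor), value)

def reducer(acc, kv):
    """Group values by key."""
    key, value = kv
    acc[key].append(value)
    return acc

grouped = reduce(reducer, map(mapper, readings), defaultdict(list))
# "Relational" summary table: mean value per (well, sensor) key.
summary = {k: sum(v) / len(v) for k, v in grouped.items()}
print(summary[("well-1", "pressure")])  # 100.5
```

In a production ecosystem the map and reduce stages would run on a distributed framework and the summary would land in a relational store; the shape of the translation is the same.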

  1. Design and Parametric Sizing of Deep Space Habitats Supporting NASA'S Human Space Flight Architecture Team

    NASA Technical Reports Server (NTRS)

    Toups, Larry; Simon, Matthew; Smitherman, David; Spexarth, Gary

    2012-01-01

    NASA's Human Space Flight Architecture Team (HAT) is a multi-disciplinary, cross-agency study team that conducts strategic analysis of integrated development approaches for human and robotic space exploration architectures. During each analysis cycle, HAT iterates and refines the definition of design reference missions (DRMs), which inform the definition of a set of integrated capabilities required to explore multiple destinations. An important capability identified in this capability-driven approach is habitation, which is necessary for crewmembers to live and work effectively during long duration transits to and operations at exploration destinations beyond Low Earth Orbit (LEO). This capability is captured by an element referred to as the Deep Space Habitat (DSH), which provides all equipment and resources for the functions required to support crew safety, health, and work including: life support, food preparation, waste management, sleep quarters, and housekeeping. The purpose of this paper is to describe the design of the DSH capable of supporting crew during exploration missions. First, the paper describes the functionality required in a DSH to support the HAT defined exploration missions, the parameters affecting its design, and the assumptions used in the sizing of the habitat. Then, the process used for arriving at parametric sizing estimates to support additional HAT analyses is detailed. Finally, results from the HAT Cycle C DSH sizing are presented followed by a brief description of the remaining design trades and technological advancements necessary to enable the exploration habitation capability.

  2. Open multi-agent control architecture to support virtual-reality-based man-machine interfaces

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel

    2001-10-01

    Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user-interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task-deduction component, and automatic action-planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man-machine interfaces. The architecture not only provides a well-suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations which facilitate the user's job of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate information from sensors of different levels of abstraction in real time helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization, built on an open-source real-time operating system, is presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications are explained. Furthermore, its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.

  3. Boeing Crew Exploration Vehicle Environmental Control and Life Support System Architecture Overview

    NASA Technical Reports Server (NTRS)

    Saiidi, Mo; Lewis, John F.

    2007-01-01

    The Boeing Company, under a teaming agreement with the Northrop Grumman Systems Corporation, was responsible for developing the Environmental Control and Life Support (ECLS) system architecture for the CEV under the NASA Phase 1 contract. The ECLS system comprised the various subsystems which provided a shirt-sleeve habitable environment for the crew to live and work in the crew module of the CEV. This architecture met the NASA requirements to ferry cargo and crew to the ISS and to support lunar sortie missions, with extensibility to long-duration missions to the Moon and Mars. This paper provides a summary overview of the CEV ECLS subsystems proposed in compliance with the contract activities.

  4. Exploring Life Support Architectures for Evolution of Deep Space Human Exploration

    NASA Technical Reports Server (NTRS)

    Anderson, Molly S.; Stambaugh, Imelda C.

    2015-01-01

    Life support system architectures for long duration space missions are often explored analytically in the human spaceflight community to find optimum solutions for mass, performance, and reliability. But in reality, many other constraints can guide the design when the life support system is examined within the context of an overall vehicle, as well as specific programmatic goals and needs. Between the end of the Constellation program and the development of the "Evolvable Mars Campaign", NASA explored a broad range of mission possibilities. Most of these missions will never be implemented but the lessons learned during these concept development phases may color and guide future analytical studies and eventual life support system architectures. This paper discusses several iterations of design studies from the life support system perspective to examine which requirements and assumptions, programmatic needs, or interfaces drive design. When doing early concept studies, many assumptions have to be made about technology and operations. Data can be pulled from a variety of sources depending on the study needs, including parametric models, historical data, new technologies, and even predictive analysis. In the end, assumptions must be made in the face of uncertainty. Some of these may introduce more risk as to whether the solution for the conceptual design study will still work when designs mature and data becomes available.

  5. Coaching Doctoral Students--A Means to Enhance Progress and Support Self-Organisation in Doctoral Education

    ERIC Educational Resources Information Center

    Godskesen, Mirjam; Kobayashi, Sofie

    2016-01-01

    In this paper we focus on individual coaching carried out by an external coach as a new pedagogical element that can impact doctoral students' sense of progress in doctoral education. The study used a mixed-methods approach in that we draw on quantitative and qualitative data from the evaluation of a project on coaching doctoral students. We…

  6. Changing vegetation self organisation affecting eco-hydrological and geomorphological processes under invasion of blue bush in SE South Africa

    NASA Astrophysics Data System (ADS)

    Cammeraat, L. H.; Kakembo, V.

    2012-04-01

    In southeastern South Africa, sub-humid grasslands on abandoned soils are spontaneously being invaded by the exotic shrub Pteronia incana (blue bush), originating from the semi-arid and arid Karoo region. This eventually results in soil loss, rill and gully erosion, and consequent loss of agricultural production, affecting the local rural economy. Degradation of soils occurs as grassland is replaced by unpalatable shrubs, altering the spatial organization of the vegetation. This, in turn, changes the eco-hydrological response of the hillslopes, leading to a dramatic increase in runoff and erosion. However, the reason for this spontaneous vegetation replacement is not clear. Various explanations have been proposed and discussed, such as overgrazing, vegetation cover and rainfall, drought or climatic change, or exposure. The study presented aims at quantifying the observed changes in the plant and bare-spot patterns, which may help us unravel vegetation self-organisation processes in relation to environmental disturbances. We analyzed high-resolution, low-altitude images of vegetation patterns in combination with high-resolution digital terrain model analysis. We applied this procedure to different patterns reflecting a time series covering the observed changes. These reflect changing interactions between the (re-)organization of the plant patterns during the shrub invasion and incorporate the interaction between vegetation, water redistribution, and soil properties. By doing so we may be able to identify critical processes, as indicated by changes in vegetation patterns, that might enable us to mitigate degradation of dryland ecosystems.

  7. Application of self-organising maps towards segmentation of soybean samples by determination of amino acids concentration.

    PubMed

    Silva, Lívia Ramazzoti Chanan; Angilelli, Karina Gomes; Cremasco, Hágata; Romagnoli, Érica Signori; Galão, Olívio Fernandes; Borsato, Dionisio; Moraes, Larissa Alexandra Cardoso; Mandarino, José Marcos Gontijo

    2016-09-01

    Soybeans are widely used for both human nutrition and animal feed, since they are an important source of protein and also provide components such as phytosterols, isoflavones, and amino acids. In this study, the concentrations of the amino acids lysine, histidine, arginine, asparagine, glutamic acid, glycine, alanine, valine, isoleucine, leucine, tyrosine, and phenylalanine were determined in 14 samples of conventional soybeans and 6 transgenic samples, cultivated in two cities of the state of Paraná, Londrina and Ponta Grossa. The results were tabulated and presented to a self-organising map for segmentation according to planting region and conventional or transgenic variety. A network with 7000 training epochs and a 10 × 10 topology was used, and it proved appropriate for segmenting the samples using the data analysed. The weight maps provided by the network showed that all the amino acids were important in segmenting the samples, especially isoleucine. Three clusters were formed: one with only Ponta Grossa samples (including transgenic (PGT) and common (PGC)), a second group with Londrina transgenic (LT) samples, and a third with Londrina common (LC) samples. PMID:27213953
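The segmentation technique can be sketched with a one-dimensional self-organising map (the study used a 10 × 10 grid over twelve amino-acid concentrations). The training data below are synthetic two-cluster profiles invented for the sketch; the algorithm (best-matching unit, Gaussian neighbourhood, decaying learning rate) is the standard SOM procedure.

```python
import math
import random

def train_som(samples, grid=10, epochs=200, seed=0):
    """Train a 1-D self-organising map over fixed-length feature vectors."""
    rng = random.Random(seed)
    dim = len(samples[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(grid)]
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)                  # decaying learning rate
        radius = max(1.0, grid / 2 * (1 - epoch / epochs))  # shrinking neighbourhood
        for x in samples:
            # Best-matching unit: closest weight vector to the sample.
            b = min(range(grid),
                    key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))
            for i in range(grid):
                h = math.exp(-((i - b) ** 2) / (2 * radius ** 2))  # neighbourhood
                weights[i] = [w + lr * h * (v - w) for w, v in zip(weights[i], x)]
    return weights

def bmu(weights, x):
    """Index of the best-matching unit for sample x."""
    return min(range(len(weights)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

# Two synthetic "varieties" with different concentration profiles; after
# training they should land on separate units of the map.
a = [[0.10, 0.20, 0.10], [0.12, 0.18, 0.11]]
b = [[0.90, 0.80, 0.90], [0.88, 0.82, 0.91]]
w = train_som(a + b)
print(bmu(w, a[0]) != bmu(w, b[0]))
```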

  8. Clinical decision support for whole genome sequence information leveraging a service-oriented architecture: a prototype.

    PubMed

    Welch, Brandon M; Rodriguez-Loya, Salvador; Eilbeck, Karen; Kawamoto, Kensaku

    2014-01-01

    Whole genome sequence (WGS) information could soon be routinely available to clinicians to support the personalized care of their patients. At such time, clinical decision support (CDS) integrated into the clinical workflow will likely be necessary to support genome-guided clinical care. Nevertheless, developing CDS capabilities for WGS information presents many unique challenges that need to be overcome for such approaches to be effective. In this manuscript, we describe the development of a prototype CDS system that is capable of providing genome-guided CDS at the point of care and within the clinical workflow. To demonstrate the functionality of this prototype, we implemented a clinical scenario of a hypothetical patient at high risk for Lynch Syndrome based on his genomic information. We demonstrate that this system can effectively use service-oriented architecture principles and standards-based components to deliver point of care CDS for WGS information in real-time. PMID:25954430

  9. Fully distributed monitoring architecture supporting multiple trackees and trackers in indoor mobile asset management application.

    PubMed

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2014-01-01

    A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure requiring support for high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes an architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform in real time. In order to verify the suggested platform, scalability performance according to increases in the number of concurrent lookups was evaluated in a real test bed. Tracking latency and traffic load ratio in the proposed tracking architecture was also evaluated. PMID:24662407

  10. Fully Distributed Monitoring Architecture Supporting Multiple Trackees and Trackers in Indoor Mobile Asset Management Application

    PubMed Central

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2014-01-01

    A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure requiring support for high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes an architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform in real time. In order to verify the suggested platform, scalability performance according to increases in the number of concurrent lookups was evaluated in a real test bed. Tracking latency and traffic load ratio in the proposed tracking architecture was also evaluated. PMID:24662407

  11. A digital architecture for support vector machines: theory, algorithm, and FPGA implementation.

    PubMed

    Anguita, D; Boni, A; Ridella, S

    2003-01-01

    In this paper, we propose a digital architecture for support vector machine (SVM) learning and discuss its implementation on a field programmable gate array (FPGA). We analyze briefly the quantization effects on the performance of the SVM in classification problems to show its robustness, in the feedforward phase, with respect to fixed-point math implementations; then, we address the problem of SVM learning. The architecture described here makes use of a new algorithm for SVM learning which is less sensitive to quantization errors with respect to solutions that have appeared so far in the literature. The algorithm is composed of two parts: the first one exploits a recurrent network for finding the parameters of the SVM; the second one uses a bisection process for computing the threshold. The architecture implementing the algorithm is described in detail and mapped on a real current-generation FPGA (Xilinx Virtex II). Its effectiveness is then tested on a channel equalization problem, where real-time performances are of paramount importance. PMID:18244555
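The second stage of the algorithm, computing the SVM threshold by bisection, can be illustrated generically. The kernel-expansion values below are invented, and the balancing condition used as the root function is a plausible stand-in rather than the paper's exact formulation; the point is the bisection stage itself, which is hardware-friendly because it needs only comparisons and halving.

```python
def bisect(g, lo, hi, tol=1e-6):
    """Find a root of a monotone function g on [lo, hi] by bisection."""
    glo = g(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (g(mid) > 0) == (glo > 0):
            lo, glo = mid, g(mid)   # root lies in the upper half
        else:
            hi = mid                # root lies in the lower half
    return 0.5 * (lo + hi)

# Toy use: choose the bias b so that the clipped decision outputs over the
# support vectors balance their labels (illustrative condition only).
f = [1.2, 0.8, -1.1, -0.9]   # invented kernel-expansion values
y = [1, 1, -1, -1]           # labels
g = lambda b: sum(yi - max(-1.0, min(1.0, fi + b)) for fi, yi in zip(f, y))
b = bisect(g, -5.0, 5.0)
print(round(b, 3))  # 0.05
```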

  12. Architecture-Level Dependability Analysis of a Medical Decision Support System

    SciTech Connect

    Pullum, Laura L; Symons, Christopher T; Patton, Robert M; Beckerman, Barbara G

    2010-01-01

    Recent advances in techniques such as image analysis, text analysis and machine learning have shown great potential to assist physicians in detecting and diagnosing health issues in patients. In this paper, we describe the approach and findings of an architecture-level dependability analysis for a mammography decision support system that incorporates these techniques. The goal of the research described in this paper is to provide an initial understanding of the dependability issues, particularly the potential failure modes and severity, in order to identify areas of potential high risk. The results will guide design decisions and provide the basis of a dependability and performance evaluation program.

  13. Runtime and Architecture Support for Efficient Data Exchange in Multi-Accelerator Applications

    PubMed Central

    Cabezas, Javier; Gelado, Isaac; Stone, John E.; Navarro, Nacho; Kirk, David B.; Hwu, Wen-mei

    2014-01-01

    Heterogeneous parallel computing applications often process large data sets that require multiple GPUs to jointly meet their needs for physical memory capacity and compute throughput. However, the lack of high-level abstractions in previous heterogeneous parallel programming models forces programmers to resort to multiple code versions, complex data copy steps and synchronization schemes when exchanging data between multiple GPU devices, which results in high software development cost, poor maintainability, and even poor performance. This paper describes the HPE runtime system, and the associated architecture support, which enables a simple, efficient programming interface for exchanging data between multiple GPUs through either interconnects or cross-node network interfaces. The runtime and architecture support presented in this paper can also be used to support other types of accelerators. We show that the simplified programming interface reduces programming complexity. The research presented in this paper started in 2009. It has been implemented and tested extensively in several generations of HPE runtime systems as well as adopted into the NVIDIA GPU hardware and drivers for CUDA 4.0 and beyond since 2011. The availability of real hardware that supports key HPE features gives rise to a rare opportunity for studying the effectiveness of the hardware support by running important benchmarks on real runtime and hardware. Experimental results show that in an exemplar heterogeneous system, peer DMA and double-buffering, pinned buffers, and software techniques can improve the inter-accelerator data communication bandwidth by 2×. They can also improve the execution speed by 1.6× for a 3D finite difference, 2.5× for 1D FFT, and 1.6× for merge sort, all measured on real hardware. The proposed architecture support enables the HPE runtime to transparently deploy these optimizations under simple portable user code, allowing system designers to freely employ devices of

  14. A web-services architecture designed for intermittent connectivity to support medical response to disasters.

    PubMed

    Brown, Steve; Griswold, William; Lenert, Leslie A

    2005-01-01

    To support mobile computing systems for first responders at mass casualty sites, as part of the WIISARD (Wireless Internet Information System for Medical Response in Disasters) project, we have developed a data architecture that gracefully handles an environment with frequent network failure and multiple writers, and that also supports rapid dissemination of updates that could be critical to the safety of responders. This is accomplished by allowing a subset of the overall information available at a disaster scene to be cached locally on a responder's device and modified locally with or without network access. When the network is available, the local subset of the model is automatically synchronized with a server that contains the full model, and conflicts are resolved. When changes from a device are committed, they are instantly sent to any connected devices whose local subsets are affected by the changes. PMID:16779191
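The cache-and-synchronize pattern can be sketched as follows. The version-based last-writer-wins conflict resolution is an assumption made for the sketch, as the abstract does not specify WIISARD's actual policy; the record names are invented.

```python
# Sketch of offline caching with server-side merge: each device holds a
# local subset of records and syncs when connectivity returns. Conflicts
# are resolved here by highest version number (last-writer-wins), which
# is an assumed policy, not necessarily WIISARD's.
class Server:
    def __init__(self):
        self.records = {}          # record id -> (version, value)

    def sync(self, local):
        """Merge a device's locally modified records; return the merged view."""
        for rid, (ver, val) in local.items():
            if rid not in self.records or ver > self.records[rid][0]:
                self.records[rid] = (ver, val)
        return dict(self.records)

server = Server()
medic_a = {"patient-7": (2, "triage: yellow")}   # edited while offline
medic_b = {"patient-7": (3, "triage: red")}      # later edit, also offline

server.sync(medic_a)
merged = server.sync(medic_b)
print(merged["patient-7"])   # (3, 'triage: red') -- the newer edit wins
```

In the full architecture the server would then push the merged record to every connected device whose cached subset contains `patient-7`.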

  15. Guiding Principles for Data Architecture to Support the Pathways Community HUB Model

    PubMed Central

    Zeigler, Bernard P.; Redding, Sarah; Leath, Brenda A.; Carter, Ernest L.; Russell, Cynthia

    2016-01-01

    Introduction: The Pathways Community HUB Model provides a unique strategy to effectively supplement health care services with social services needed to overcome barriers for those most at risk of poor health outcomes. Pathways are standardized measurement tools used to define and track health and social issues from identification through to a measurable completion point. The HUB uses Pathways to coordinate agencies and service providers in the community to eliminate the inefficiencies and duplication that exist among them. Pathways Community HUB Model and Formalization: Experience with the Model has brought out the need for better information technology solutions to support implementation of the Pathways themselves through decision-support tools for care coordinators and other users to track activities and outcomes, and to facilitate reporting. Here we provide a basis for discussing recommendations for such a data infrastructure by developing a conceptual model that formalizes the Pathway concept underlying current implementations. Requirements for Data Architecture to Support the Pathways Community HUB Model: The main contribution is a set of core recommendations as a framework for developing and implementing a data architecture to support implementation of the Pathways Community HUB Model. The objective is to present a tool for communities interested in adopting the Model to learn from and to adapt in their own development and implementation efforts. Problems with Quality of Data Extracted from the CHAP Database: Experience with the Community Health Access Project (CHAP) database system (the core implementation of the Model) has identified several issues and remedies that have been developed to address them. Based on analysis of these issues and remedies, we present several key features for a data architecture meeting the recommendations above.
Implementation of Features: Presentation of features is followed by a practical guide to their implementation.
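
The Pathway concept the recommendations formalize — a standardized record that tracks a health or social issue from identification through to a measurable completion point — might be sketched as a minimal data structure. All field names and labels below are hypothetical illustrations, not the Model's actual schema:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Pathway:
    """One standardized Pathway: an issue tracked from identification to completion."""
    name: str                       # e.g. "Housing" (hypothetical label)
    client_id: str                  # de-identified client reference
    opened: date                    # identification date
    completed: Optional[date] = None
    steps: list = field(default_factory=list)   # intermediate coordination steps

    def complete(self, when: date) -> None:
        """Mark the measurable completion point."""
        self.completed = when

    @property
    def is_open(self) -> bool:
        return self.completed is None

# A care coordinator opens a pathway, logs steps, and closes it at completion.
p = Pathway(name="Housing", client_id="C-001", opened=date(2016, 1, 4))
p.steps.append("referral made")
p.complete(date(2016, 3, 1))
```

A HUB-style data architecture would then aggregate such records across agencies to spot duplication and report outcomes.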

  16. Modeling development of natural multi-sensory integration using neural self-organisation and probabilistic population codes

    NASA Astrophysics Data System (ADS)

    Bauer, Johannes; Dávila-Chacón, Jorge; Wermter, Stefan

    2015-10-01

    Humans and other animals have been shown to perform near-optimally in multi-sensory integration tasks. Probabilistic population codes (PPCs) have been proposed as a mechanism by which optimal integration can be accomplished. Previous approaches have focussed on how neural networks might produce PPCs from sensory input or perform calculations using them, like combining multiple PPCs. Less attention has been given to the question of how the necessary organisation of neurons can arise and how the required knowledge about the input statistics can be learned. In this paper, we propose a model of learning multi-sensory integration based on an unsupervised learning algorithm in which an artificial neural network learns the noise characteristics of each of its sources of input. Our algorithm borrows from the self-organising map the ability to learn latent-variable models of the input and extends it to learning to produce a PPC approximating a probability density function over the latent variable behind its (noisy) input. The neurons in our network are only required to perform simple calculations and we make few assumptions about input noise properties and tuning functions. We report on a neurorobotic experiment in which we apply our algorithm to multi-sensory integration in a humanoid robot to demonstrate its effectiveness and compare it to human multi-sensory integration on the behavioural level. We also show in simulations that our algorithm performs near-optimally under certain plausible conditions, and that it reproduces important aspects of natural multi-sensory integration on the neural level.
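
The self-organising-map ingredient the model extends can be sketched in a few lines of NumPy. This is only the classic 1-D SOM update on noisy scalar input, not the authors' PPC-producing extension, and all sizes and rates are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

n_units = 20
weights = rng.uniform(0.0, 1.0, n_units)   # each unit's preferred stimulus value

def train(weights, n_steps=5000, lr=0.1, sigma=2.0):
    units = np.arange(n_units)
    for _ in range(n_steps):
        latent = rng.uniform(0.0, 1.0)           # true latent stimulus
        x = latent + rng.normal(0.0, 0.05)       # noisy observation of it
        bmu = np.argmin(np.abs(weights - x))     # best-matching unit
        # Gaussian neighbourhood around the winner
        h = np.exp(-((units - bmu) ** 2) / (2 * sigma ** 2))
        weights += lr * h * (x - weights)        # pull neighbours toward input
    return weights

weights = train(weights)
# After training, unit preferences spread to tile the stimulus range.
```

The paper's contribution layers onto this the learning of input noise statistics so the population activity approximates a density over the latent variable.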

  17. Knowledge base and sensor bus messaging service architecture for critical tsunami warning and decision-support

    NASA Astrophysics Data System (ADS)

    Sabeur, Z. A.; Wächter, J.; Middleton, S. E.; Zlatev, Z.; Häner, R.; Hammitzsch, M.; Loewe, P.

    2012-04-01

    The intelligent management of large volumes of environmental monitoring data for early tsunami warning requires the deployment of a robust and scalable service-oriented infrastructure that is supported by an agile knowledge base for critical decision-support. In the TRIDEC project (TRIDEC 2010-2013), a sensor observation service bus of the TRIDEC system is being developed for the advancement of complex tsunami event processing and management. Further, a dedicated TRIDEC system knowledge base is being implemented to enable on-demand access to semantically rich OGC SWE compliant hydrodynamic observations and operationally oriented meta-information for multiple subscribers. TRIDEC decision support requires a scalable and agile real-time processing architecture which enables fast response to evolving subscribers' requirements as the tsunami crisis develops. This is also achieved with the support of intelligent processing services which specialise in multi-level fusion methods with relevance feedback and deep learning. The TRIDEC knowledge base development work, coupled with that of the generic sensor bus platform, shall be presented to demonstrate advanced decision-support with situation awareness in the context of tsunami early warning and crisis management.

  18. An attention-gating recurrent working memory architecture for emergent speech representation

    NASA Astrophysics Data System (ADS)

    Elshaw, Mark; Moore, Roger K.; Klein, Michael

    2010-06-01

    This paper describes an attention-gating recurrent self-organising map approach for emergent speech representation. Inspired by evidence from human cognitive processing, the architecture combines two main neural components. The first component, the attention-gating mechanism, uses actor-critic learning to perform selective attention towards speech. Through this selective attention approach, the attention-gating mechanism controls access to working memory processing. The second component, the recurrent self-organising map memory, develops a temporal-distributed representation of speech using phone-like structures. Representing speech in terms of phonetic features in an emergent self-organised fashion, according to research on child cognitive development, recreates the approach found in infants. Using this representational approach, in a fashion similar to infants, should improve the performance of automatic recognition systems through aiding speech segmentation and fast word learning.

  19. A cost-effective WDM-PON architecture simultaneously supporting wired, wireless and optical VPN services

    NASA Astrophysics Data System (ADS)

    Wu, Yanzhi; Ye, Tong; Zhang, Liang; Hu, Xiaofeng; Li, Xinwan; Su, Yikai

    2011-03-01

    It is believed that next-generation passive optical networks (PONs) are required to provide flexible and varied services to users in a cost-effective way. To address this issue, for the first time, this paper proposes and demonstrates a novel wavelength-division-multiplexed PON (WDM-PON) architecture to simultaneously support three types of services: 1) wireless access traffic, 2) optical virtual private network (VPN) communications, and 3) conventional wired services. In the optical line terminal (OLT), we use two cascaded Mach-Zehnder modulators (MZMs) on each wavelength channel to generate an optical carrier, and produce the wireless and the downstream traffic using the orthogonal modulation technique. In each optical network unit (ONU), the obtained optical carrier is modulated by a single MZM to provide the VPN and upstream communications. Consequently, the light sources in the ONUs are saved and the system cost is reduced. The feasibility of our proposal is experimentally and numerically verified.

  20. Building the Knowledge Base to Support the Automatic Animation Generation of Chinese Traditional Architecture

    NASA Astrophysics Data System (ADS)

    Wei, Gongjin; Bai, Weijing; Yin, Meifang; Zhang, Songmao

    We present a practice of applying the Semantic Web technologies in the domain of Chinese traditional architecture. A knowledge base consisting of one ontology and four rule bases is built to support the automatic generation of animations that demonstrate the construction of various Chinese timber structures based on the user's input. Different Semantic Web formalisms are used, e.g., OWL DL, SWRL and Jess, to capture the domain knowledge, including the wooden components needed for a given building, construction sequence, and the 3D size and position of every piece of wood. Our experience in exploiting the current Semantic Web technologies in real-world application systems indicates their prominent advantages (such as the reasoning facilities and modeling tools) as well as the limitations (such as low efficiency).

  1. Space Station Environmental Control and Life Support System architecture - Centralized versus distributed

    NASA Technical Reports Server (NTRS)

    Boehm, A. M.; Behrend, A. F.

    1984-01-01

    Both Centralized and Distributed approaches are being evaluated for the installation of Environmental Control and Life Support (ECLS) equipment in the Space Station. In the Centralized facility concept, integrated processing equipment is located in two modules with plumbing used to circulate ECLS services throughout the Station. The Distributed approach locates the ECLS subsystems in every module of the Space Station with each subsystem designed to meet its own module needs. This paper defines the two approaches and how the advantages and disadvantages of each are tied to the choice of Space Station architecture. Other considerations and evaluations include: crew movement, Station evolution and the ducting impact needed to circulate ECLS services from centrally located processing equipment.

  2. A newborn screening system based on service-oriented architecture embedded support vector machine.

    PubMed

    Hsu, Kai-Ping; Hsieh, Sung-Huai; Hsieh, Sheau-Ling; Cheng, Po-Hsun; Weng, Yung-Ching; Wu, Jang-Hung; Lai, Feipei

    2010-10-01

    The clinical symptoms of metabolic disorders are rarely apparent during the neonatal period, and if they are not treated early, irreversible damage, such as mental retardation or even death, may occur. Therefore, the practice of newborn screening is essential to prevent permanent disabilities in newborns. In this paper, we design and implement a newborn screening system using Support Vector Machine (SVM) classifications. By evaluating metabolic substances data collected from tandem mass spectrometry (MS/MS), we can interpret and determine whether a newborn has a metabolic disorder. In addition, the National Taiwan University Hospital Information System (NTUHIS) has been developed and implemented to integrate heterogeneous platforms, protocols, databases as well as applications. To expedite adapting to these diversities, we deploy Service-Oriented Architecture (SOA) concepts to the newborn screening system based on web services. The system can be embedded seamlessly into NTUHIS. PMID:20703618
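
The SVM classification step the paper describes can be sketched with scikit-learn's API on synthetic stand-in data. The metabolite features and class shift below are invented for illustration; the real system trains on tandem MS/MS screening data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
n, d = 200, 10                                # samples x metabolite features (hypothetical)
X_normal = rng.normal(0.0, 1.0, (n, d))       # synthetic "normal" concentrations
X_disorder = rng.normal(1.5, 1.0, (n, d))     # synthetic shifted concentrations
X = np.vstack([X_normal, X_disorder])
y = np.array([0] * n + [1] * n)               # 1 = suspected metabolic disorder

# Standardize features, then fit an RBF-kernel SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))                        # training accuracy on the synthetic data
```

In an SOA deployment like the one described, this fitted classifier would sit behind a web-service endpoint that receives screening values and returns a risk flag.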

  3. Systems modeling of space medical support architecture: topological mapping of high level characteristics and constraints.

    PubMed

    Musson, David M; Doyle, Thomas E; Saary, Joan

    2012-01-01

    The challenges associated with providing medical support to astronauts on long duration lunar or planetary missions are significant. Experience to date in space has included short duration missions to the lunar surface and both short and long duration stays on board spacecraft and space stations in low Earth orbit. Live actor, terrestrial analogue setting simulation provides a means of studying multiple aspects of the medical challenges of exploration class space missions, though few if any published models exist upon which to construct systems-simulation test beds. Current proposed and projected moon mission scenarios were analyzed from a systems perspective to construct such a model. A resulting topological mapping of high-level architecture for a reference lunar mission with presumed EVA excursion and international mission partners is presented. High-level descriptions of crew operational autonomy, medical support related to crew-member status, and communication characteristics within and between multiple teams are presented. It is hoped this modeling will help guide future efforts to simulate medical support operations for research purposes, such as in the use of live actor simulations in terrestrial analogue environments. PMID:23367318

  4. Requirements for Designing Life Support System Architectures for Crewed Exploration Missions Beyond Low-Earth Orbit

    NASA Technical Reports Server (NTRS)

    Howard, David; Perry, Jay; Sargusingh, Miriam; Toomarian, Nikzad

    2016-01-01

    NASA's technology development roadmaps provide guidance to focus technological development on areas that enable crewed exploration missions beyond low-Earth orbit. Specifically, the technology area roadmap on human health, life support and habitation systems describes the need for life support system (LSS) technologies that can improve reliability and in-situ maintainability within a minimally-sized package while enabling a high degree of mission autonomy. To address the needs outlined by the guiding technology area roadmap, NASA's Advanced Exploration Systems (AES) Program has commissioned the Life Support Systems (LSS) Project to lead technology development in the areas of water recovery and management, atmosphere revitalization, and environmental monitoring. A notional exploration LSS architecture derived from the International Space Station (ISS) has been developed and serves as the developmental basis for these efforts. Functional requirements and key performance parameters that guide the exploration LSS technology development efforts are presented and discussed. Areas where LSS flight operations aboard the ISS afford lessons learned that are relevant to exploration missions are highlighted.

  5. Supporting Undergraduate Computer Architecture Students Using a Visual MIPS64 CPU Simulator

    ERIC Educational Resources Information Center

    Patti, D.; Spadaccini, A.; Palesi, M.; Fazzino, F.; Catania, V.

    2012-01-01

    The topics of computer architecture are always taught using an Assembly dialect as an example. The most commonly used textbooks in this field use the MIPS64 Instruction Set Architecture (ISA) to help students in learning the fundamentals of computer architecture because of its orthogonality and its suitability for real-world applications. This…

  6. NASA's Earth Science Gateway: A Platform for Interoperable Services in Support of the GEOSS Architecture

    NASA Astrophysics Data System (ADS)

    Alameh, N.; Bambacus, M.; Cole, M.

    2006-12-01

    NASA's Earth Science as well as interdisciplinary research and applications activities require access to earth observations, analytical models and specialized tools and services, from diverse distributed sources. Interoperability and open standards for geospatial data access and processing greatly facilitate such access among the information and processing components related to spacecraft, airborne, and in situ sensors; predictive models; and decision support tools. To support this mission, NASA's Geosciences Interoperability Office (GIO) has been developing the Earth Science Gateway (ESG; online at http://esg.gsfc.nasa.gov) by adapting and deploying a standards-based commercial product. Thanks to extensive use of open standards, ESG can tap into a wide array of online data services, serve a variety of audiences and purposes, and adapt to technology and business changes. Most importantly, the use of open standards allows ESG to function as a platform within a larger context of distributed geoscience processing, such as the Global Earth Observing System of Systems (GEOSS). ESG shares the goals of GEOSS to ensure that observations and products shared by users will be accessible, comparable, and understandable by relying on common standards and adaptation to user needs. By maximizing interoperability, modularity, extensibility and scalability, ESG's architecture fully supports the stated goals of GEOSS. As such, ESG's role extends beyond that of a gateway to NASA science data to become a shared platform that can be leveraged by GEOSS via: a modular and extensible architecture; consensus and community-based standards (e.g., ISO and OGC standards); a variety of clients and visualization techniques, including WorldWind and Google Earth; a variety of services (including catalogs) with standard interfaces; data integration and interoperability; mechanisms for user involvement and collaboration; and mechanisms for supporting interdisciplinary and domain-specific applications.

  7. Architectures and Evaluation for Adjustable Control Autonomy for Space-Based Life Support Systems

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Schreckenghost, Debra K.

    2001-01-01

    In the past five years, a number of automation applications for control of crew life support systems have been developed and evaluated in the Adjustable Autonomy Testbed at NASA's Johnson Space Center. This paper surveys progress on an adjustable autonomous control architecture for situations where software and human operators work together to manage anomalies and other system problems. When problems occur, the level of control autonomy can be adjusted, so that operators and software agents can work together on diagnosis and recovery. In 1997 adjustable autonomy software was developed to manage gas transfer and storage in a closed life support test. Four crewmembers lived and worked in a chamber for 91 days, with both air and water recycling. CO2 was converted to O2 by gas processing systems and wheat crops. With the automation software, significantly fewer hours were spent monitoring operations. System-level validation testing of the software by interactive hybrid simulation revealed problems both in software requirements and implementation. Since that time, we have been developing multi-agent approaches for automation software and human operators, to cooperatively control systems and manage problems. Each new capability has been tested and demonstrated in realistic dynamic anomaly scenarios, using the hybrid simulation tool.

  8. A Scalable, Out-of-Band Diagnostics Architecture for International Space Station Systems Support

    NASA Technical Reports Server (NTRS)

    Fletcher, Daryl P.; Alena, Rick; Clancy, Daniel (Technical Monitor)

    2002-01-01

    The computational infrastructure of the International Space Station (ISS) is a dynamic system that supports multiple vehicle subsystems such as Caution and Warning, Electrical Power Systems and Command and Data Handling (C&DH), as well as scientific payloads of varying size and complexity. The dynamic nature of the ISS configuration coupled with the increased demand for payload support places a significant burden on the inherently resource-constrained computational infrastructure of the ISS. Onboard system diagnostics applications are hosted on computers that are elements of the avionics network while ground-based diagnostic applications receive only a subset of available telemetry, down-linked via S-band communications. In this paper we propose a scalable, out-of-band diagnostics architecture for ISS systems support that uses a read-only connection for C&DH data acquisition, which provides a lower cost of deployment and maintenance (versus a higher-criticality read-write connection). The diagnostics processing burden is off-loaded from the avionics network to elements of the on-board LAN that have a lower overall cost of operation and increased computational capacity. A superset of diagnostic data, richer in content than the configured telemetry, is made available to Advanced Diagnostic System (ADS) clients running on wireless handheld devices, affording the crew greater mobility for troubleshooting and providing improved insight into vehicle state. The superset of diagnostic data is made available to the ground in near real-time via an out-of-band downlink, providing a high level of fidelity between vehicle state and test, training and operational facilities on the ground.

  9. Enhancing Architecture-Implementation Conformance with Change Management and Support for Behavioral Mapping

    ERIC Educational Resources Information Center

    Zheng, Yongjie

    2012-01-01

    Software architecture plays an increasingly important role in complex software development. Its further application, however, is challenged by the fact that software architecture, over time, is often found not conformant to its implementation. This is usually caused by frequent development changes made to both artifacts. Against this background,…

  10. A system architecture for decision-making support on ISR missions with stochastic needs and profit

    NASA Astrophysics Data System (ADS)

    Hu, Nan; Pizzocaro, Diego; La Porta, Thomas; Preece, Alun

    2013-05-01

    In this paper, we propose a system architecture for decision-making support on ISR (i.e., Intelligence, Surveillance, Reconnaissance) missions via optimizing resource allocation. We model a mission as a graph of tasks, each of which often requires exclusive access to some resources. Our system guides users in refining their needs through an interactive interface. To maximize the chances of executing new missions, the system searches for pre-existing information collected in the field that best fits the needs. If this search fails, a set of new requests representing users' requirements is considered to maximize the overall benefit constrained by limited resources. If an ISR request cannot be satisfied, feedback is generated to help the commander further refine or adjust their information requests in order to still provide support to the mission. In our work, we model both demands for resources and the importance of the information retrieved realistically in that they are not fully known at the time a mission is submitted and may change over time during execution. The amount of resources consumed by a mission may not be deterministic; e.g., a mission may last slightly longer or shorter than expected, or more of a resource may be required to complete a task. Furthermore, the benefits received from the mission, which we call profits, may also be non-deterministic; e.g., successfully localizing a vehicle might be more important than expected for accomplishing the entire operation. Therefore, when satisfying ISR requirements we take into account both constraints on the underlying resources and uncertainty of demands and profits.
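
The paper's profit-maximizing request selection is a constrained optimization under uncertain demands and profits. As a much-simplified illustration (request names and numbers are hypothetical), a greedy pass ranking requests by expected profit per expected resource unit captures the flavour of allocating under a capacity budget:

```python
from dataclasses import dataclass

@dataclass
class Request:
    name: str
    mean_demand: float   # expected resource units consumed (uncertain in reality)
    mean_profit: float   # expected benefit of satisfying the request

def allocate(requests, capacity):
    """Greedily admit requests by expected profit density until capacity runs out."""
    chosen = []
    for r in sorted(requests,
                    key=lambda r: r.mean_profit / r.mean_demand,
                    reverse=True):
        if r.mean_demand <= capacity:
            chosen.append(r.name)
            capacity -= r.mean_demand
    return chosen

reqs = [Request("localize-vehicle", 3.0, 9.0),
        Request("route-surveillance", 5.0, 10.0),
        Request("perimeter-watch", 4.0, 4.0)]
print(allocate(reqs, capacity=8.0))   # ['localize-vehicle', 'route-surveillance']
```

A real system along the paper's lines would replace the fixed expectations with distributions and re-plan as demands and profits reveal themselves during execution.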

  11. Architecture and Functionality of the Advanced Life Support On-Line Project Information System (OPIS)

    NASA Technical Reports Server (NTRS)

    Hogan, John A.; Levri, Julie A.; Morrow, Rich; Cavazzoni, Jim; Rodriquez, Luis F.; Riano, Rebecca; Whitaker, Dawn R.

    2004-01-01

    An ongoing effort is underway at NASA Ames Research Center (ARC) to develop an On-line Project Information System (OPIS) for the Advanced Life Support (ALS) Program. The objective of this three-year project is to develop, test, revise and deploy OPIS to enhance the quality of decision-making metrics and attainment of Program goals through improved knowledge sharing. OPIS will centrally locate detailed project information solicited from investigators on an annual basis and make it readily accessible by the ALS Community via a web-accessible interface. The data will be stored in an object-oriented relational database (created in MySQL™) located on a secure server at NASA ARC. OPIS will simultaneously serve several functions, including being an R&TD status information hub that can potentially serve as the primary annual reporting mechanism. Using OPIS, ALS managers and element leads will be able to carry out informed research and technology development investment decisions, and allow analysts to perform accurate systems evaluations. Additionally, the range and specificity of information solicited will serve to educate technology developers of programmatic needs. OPIS will collect comprehensive information from all ALS projects as well as highly detailed information specific to technology development in each ALS area (Waste, Water, Air, Biomass, Food, Thermal, and Control). Because the scope of needed information can vary dramatically between areas, element-specific technology information is being compiled with the aid of multiple specialized working groups. This paper presents the current development status in terms of the architecture and functionality of OPIS. Possible implementation approaches for OPIS are also discussed.

  12. Architecture and Functionality of the Advanced Life Support On-Line Project Information System

    NASA Technical Reports Server (NTRS)

    Hogan, John A.; Levri, Julie A.; Morrow, Rich; Cavazzoni, Jim; Rodriguez, Luis F.; Riano, Rebecca; Whitaker, Dawn R.

    2004-01-01

    An ongoing effort is underway at NASA Ames Research Center (ARC) to develop an On-line Project Information System (OPIS) for the Advanced Life Support (ALS) Program. The objective of this three-year project is to develop, test, revise and deploy OPIS to enhance the quality of decision-making metrics and attainment of Program goals through improved knowledge sharing. OPIS will centrally locate detailed project information solicited from investigators on an annual basis and make it readily accessible by the ALS Community via a Web-accessible interface. The data will be stored in an object-oriented relational database (created in MySQL) located on a secure server at NASA ARC. OPIS will simultaneously serve several functions, including being a research and technology development (R&TD) status information hub that can potentially serve as the primary annual reporting mechanism for ALS-funded projects. Using OPIS, ALS managers and element leads will be able to carry out informed R&TD investment decisions, and allow analysts to perform accurate systems evaluations. Additionally, the range and specificity of information solicited will serve to educate technology developers of programmatic needs. OPIS will collect comprehensive information from all ALS projects as well as highly detailed information specific to technology development in each ALS area (Waste, Water, Air, Biomass, Food, Thermal, Controls and Systems Analysis). Because the scope of needed information can vary dramatically between areas, element-specific technology information is being compiled with the aid of multiple specialized working groups. This paper presents the current development status in terms of the architecture and functionality of OPIS. Possible implementation approaches for OPIS are also discussed.

  13. Usalpharma: A Cloud-Based Architecture to Support Quality Assurance Training Processes in Health Area Using Virtual Worlds

    PubMed Central

    García-Peñalvo, Francisco J.; Pérez-Blanco, Jonás Samuel; Martín-Suárez, Ana

    2014-01-01

    This paper discusses how cloud-based architectures can extend and enhance the functionality of the training environments based on virtual worlds and how, from this cloud perspective, we can provide support to analysis of training processes in the area of health, specifically in the field of training processes in quality assurance for pharmaceutical laboratories, presenting a tool for data retrieval and analysis that allows facing the knowledge discovery in the happenings inside the virtual worlds. PMID:24778593

  14. Architecture of an E-Learning System with Embedded Authoring Support.

    ERIC Educational Resources Information Center

    Baudry, Andreas; Bungenstock, Michael; Mertsching, Bärbel

    This paper introduces an architecture for an e-learning system with an embedded authoring system. Based on the metaphor of a construction kit, this approach offers a general solution for specific content creation and publication. The learning resources are IMS "Content Packages" with a special structure to separate content and presentation. These…

  15. The Use of Supporting Documentation for Information Architecture by Australian Libraries

    ERIC Educational Resources Information Center

    Hider, Philip; Burford, Sally; Ferguson, Stuart

    2009-01-01

    This article reports the results of an online survey that examined the development of information architecture of Australian library Web sites with reference to documented methods and guidelines. A broad sample of library Web managers responded from across the academic, public, and special sectors. A majority of libraries used either in-house or…

  16. High rate information systems - Architectural trends in support of the interdisciplinary investigator

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Preheim, Larry E.

    1990-01-01

    Data systems requirements in the Earth Observing System (EOS) and Space Station Freedom (SSF) eras indicate increasing data volume, increased discipline interplay, higher complexity and broader data integration and interpretation. A response to the needs of the interdisciplinary investigator is proposed, considering the increasing complexity and rising costs of scientific investigation. The EOS Data and Information System, conceived to be a widely distributed system with reliable communication links between central processing and the science user community, is described. Details are provided on information architecture, system models, intelligent data management of large complex databases, and standards for archiving ancillary data, using a research library, a laboratory and collaboration services.

  17. Development of Groundwater Modeling Support System Based on Service-Oriented Architecture

    NASA Astrophysics Data System (ADS)

    WANG, Y.; Tsai, J. P.; Hsiao, C. T.; Chang, L. C.

    2014-12-01

    Groundwater simulation has become an essential step in groundwater resources management and assessment. Many stand-alone pre- and post-processing software packages exist to reduce the model simulation workload, but these stand-alone packages neither support centralized management of data and simulation results nor provide network-sharing functions. Model building is still carried out independently, case by case, when using these packages. Hence, it is difficult to share and reuse the data and knowledge (simulation cases) systematically within or across companies. Therefore, this study develops a centralized, network-based groundwater model development system to assist model simulation. The system is based on a service-oriented architecture and allows remote users to develop their modeling cases over the internet; the data and cases (knowledge) are thus easy to manage centrally. MODFLOW, the most popular groundwater model in the world, is the modeling engine of the system. Other functions include database management and a variety of model-development web services, including automatic digitization of geology profile maps, assistance with recovery of missing groundwater data, graphical data presentation, and automatic generation of MODFLOW input files from the database, which is the most important function of the system. Since the system architecture is service-oriented, it is scalable and flexible, and can easily be extended to include scenario analysis and knowledge management to facilitate the reuse of groundwater modeling knowledge.

  18. Human Airway Primary Epithelial Cells Show Distinct Architectures on Membrane Supports Under Different Culture Conditions.

    PubMed

    Min, Kyoung Ah; Rosania, Gus R; Shin, Meong Cheol

    2016-06-01

    To facilitate drug development for lung delivery, there is a strong demand for appropriate airway epithelial cell models that can serve as transport barriers for evaluating the pharmacokinetic profiles of drug molecules. Besides the cancer-derived cell lines, normal human bronchial epithelial (NHBE) cells have been used as the primary cell model for drug screening because of their physiological relevance to in vivo conditions. Therefore, to accurately interpret drug transport data in NHBE measured by different laboratories, it is important to know the biophysical characteristics of NHBE grown on membranes under different culture conditions. In this study, NHBE cells were grown on polyester membranes in different media, and their transport barrier properties as well as cell architectures were fully characterized by functional assays and confocal imaging throughout the days of culture. Moreover, NHBE cells on inserts in the different media were subjected to either the air-interfaced culture (AIC) or the liquid-covered culture (LCC) condition. Cells in the AIC condition were cultivated on the membrane with medium on the basolateral side only, whereas cells under the LCC condition had medium on both the apical and basolateral sides. Quantitative microscopic imaging with biophysical examination revealed distinct multilayered architectures of differentiated NHBE cells, suggesting NHBE as functional cell barriers for lung-targeting drug transport. PMID:26818810

  19. Novel architectured metal-supported solid oxide fuel cells with Mo-doped SrFeO3-δ electrocatalysts

    NASA Astrophysics Data System (ADS)

    Zhou, Yucun; Meng, Xie; Liu, Xuejiao; Pan, Xin; Li, Junliang; Ye, Xiaofeng; Nie, Huaiwen; Xia, Changrong; Wang, Shaorong; Zhan, Zhongliang

    2014-12-01

    Barriers to technological advancement of metal-supported SOFCs include nickel coarsening in the anode, metallic interdiffusion between the anode and the metal substrate, as well as poor cathode adhesion. Here we report a robust and novel architectured metal-supported SOFC that consists of a thin dense yttria-stabilized zirconia (YSZ) electrolyte layer sandwiched between a porous 430L stainless steel substrate and a porous YSZ thin layer. The key feature is simultaneous use of impregnated nano-scale SrFe0.75Mo0.25O3-δ coatings on the internal surfaces of the porous 430L and YSZ backbones respectively as the anode and cathode catalyst. Such a fuel cell exhibits power densities of 0.74 W cm-2 at 800 °C and 0.40 W cm-2 at 700 °C when operating on hydrogen fuels and air oxidants.

  20. Challenges with Deploying and Integrating Environmental Control and Life Support Functions in a Lunar Architecture with High Degrees of Mobility

    NASA Technical Reports Server (NTRS)

    Bagdigian, Robert M.

    2009-01-01

    Visions of lunar outposts often depict a collection of fixed elements such as pressurized habitats, in and around which human inhabitants spend the large majority of their surface stay time. In such an outpost, an efficient deployment of environmental control and life support equipment can be achieved by centralizing certain functions within one or a minimum number of habitable elements and relying on the exchange of gases and liquids between elements via atmosphere ventilation and plumbed interfaces. However, a rigidly fixed outpost can constrain the degree to which the total lunar landscape can be explored. The capability to enable widespread access across the landscape makes a lunar architecture with a high degree of surface mobility attractive. Such mobility presents unique challenges to the efficient deployment of environmental control and life support functions in multiple elements that may for long periods of time be operated independently. This paper describes some of those anticipated challenges.

  1. Using a service oriented architecture approach to clinical decision support: performance results from two CDS Consortium demonstrations.

    PubMed

    Paterno, Marilyn D; Goldberg, Howard S; Simonaitis, Linas; Dixon, Brian E; Wright, Adam; Rocha, Beatriz H; Ramelson, Harley Z; Middleton, Blackford

    2012-01-01

    The Clinical Decision Support Consortium has completed two demonstration trials involving a web service for the execution of clinical decision support (CDS) rules in one or more electronic health record (EHR) systems. The initial trial ran in a local EHR at Partners HealthCare. A second EHR site, associated with Wishard Memorial Hospital, Indianapolis, IN, was added in the second trial. Data were gathered during each 6-month period and analyzed to assess performance, reliability, and response time, in the form of means and standard deviations, for all technical components of the service, including the assembly and preparation of input data. The mean service call time for each period was just over 2 seconds. In this paper we report on the findings and analysis to date while describing the areas for further analysis and optimization as we continue to expand our use of a Service Oriented Architecture approach for CDS across multiple institutions. PMID:23304342

  2. PNNI routing support for ad hoc mobile networking: A flat architecture

    SciTech Connect

    Martinez, L.; Sholander, P.; Tolendino, L.

    1997-12-01

    This contribution extends the Outside Nodal Hierarchy List (ONHL) procedures described in ATM Forum Contribution 97-0766. These extensions allow multiple mobile networks to form either an ad hoc network or an extension of a fixed PNNI infrastructure. This contribution covers the simplest case where the top-most Logical Group Nodes (LGNs), in those mobile networks, all reside at the same level in a PNNI hierarchy. Future contributions will cover the general case where those top-most LGNs reside at different hierarchy levels. This contribution considers a flat ad hoc network architecture--in the sense that each mobile network always participates in the PNNI hierarchy at the preconfigured level of its top-most LGN.

  3. Clinical Document Architecture integration system to support patient referral and reply letters.

    PubMed

    Lee, Sung-Hyun; Song, Joon Hyun; Kim, Il Kon; Kim, Jeong-Whun

    2016-06-01

    Many Clinical Document Architecture (CDA) referrals and reply documents have been accumulated for patients since the deployment of the Health Information Exchange System (HIES) in Korea. Clinical data were scattered across many CDA documents, which took too much time for physicians to read. Physicians in Korea spend only limited time per patient, as insurance in Korea follows a fee-for-service model. Therefore, physicians were not allowed sufficient time for making medical decisions, and follow-up care service was hindered. To address this, we developed the CDA Integration Template (CIT) and the CDA Integration System (CIS) for the HIES. The clinical items included in CIT were defined reflecting the Korean Standard for CDA Referral and Reply Letters and requests by physicians. CIS integrates the CDA documents of a specified patient into a single CDA document following the format of CIT. Finally, physicians were surveyed after CIT/CIS adoption, and they indicated overall satisfaction. PMID:24963075

  4. Three-wave interactions of surface defect-deformation waves and their manifestations in the self-organisation of nano- and microstructures in solids exposed to laser radiation

    SciTech Connect

    Emel'yanov, Vladimir I; Seval'nev, D M

    2009-07-31

    The self-organisation of surface-relief nanostructures in solids under the action of energy and particle fluxes is interpreted as the instability of defect-deformation (DD) gratings produced by quasi-static Lamb and Rayleigh waves and defect-concentration waves. Allowance for nonlocality in the defect-lattice atom interaction, with a simultaneous account of both (normal and longitudinal) defect-induced forces bending the surface layer, leads to the appearance of two maxima in the dependence of the instability growth rate of DD waves on the wave number. Three-wave interactions of quasi-static coupled DD waves (second harmonic generation and wave vector mixing) are considered for the first time; they are similar to three-wave interactions in nonlinear optics and acoustics and lead to the enrichment of the spectrum of surface-relief harmonics. Computer processing of experimental data on laser-induced generation of micro- and nanostructures of the surface relief reveals the presence of effects responsible for second harmonic generation and wave vector mixing. (special issue devoted to the 80th birthday of S.A. Akhmanov)

  5. Supporting self-management of obesity using a novel game architecture.

    PubMed

    Giabbanelli, Philippe J; Crutzen, Rik

    2015-09-01

    Obesity has commonly been addressed using a 'one size fits all' approach centred on a combination of diet and exercise. This has not succeeded in halting the obesity epidemic, as two-thirds of American adults are now obese or overweight. Practitioners increasingly highlight that a person's weight is shaped by myriad factors, suggesting that interventions should be tailored to the specific needs of individuals. Health games have the potential to provide such a tailored approach. However, they currently tend to focus on communicating and/or reinforcing knowledge in order to stimulate learning in the participants. We argue that it would be equally, if not more, valuable for games to learn from participants using recommender systems. This would allow treatments to be comprehensive, as games could deduce from each participant's behaviour which factors seem most relevant to his or her weight and focus on them. We introduce a novel game architecture and discuss its implications for facilitating the self-management of obesity. PMID:24557604

  6. Reconfiguration of brain network architecture to support executive control in aging.

    PubMed

    Gallen, Courtney L; Turner, Gary R; Adnan, Areeba; D'Esposito, Mark

    2016-08-01

    Aging is accompanied by declines in executive control abilities and changes in underlying brain network architecture. Here, we examined brain networks in young and older adults during a task-free resting state and an N-back task and investigated age-related changes in the modular network organization of the brain. Compared with young adults, older adults showed larger changes in network organization between resting state and task. Although young adults exhibited increased connectivity between lateral frontal regions and other network modules during the most difficult task condition, older adults also exhibited this pattern of increased connectivity during less-demanding task conditions. Moreover, the increase in between-module connectivity in older adults was related to faster task performance and greater fractional anisotropy of the superior longitudinal fasciculus. These results demonstrate that older adults who exhibit more pronounced network changes between a resting state and task have better executive control performance and greater structural connectivity of a core frontal-posterior white matter pathway. PMID:27318132

  7. Thioflavin T-Silent Denaturation Intermediates Support the Main-Chain-Dominated Architecture of Amyloid Fibrils.

    PubMed

    Noda, Sayaka; So, Masatomo; Adachi, Masayuki; Kardos, József; Akazawa-Ogawa, Yoko; Hagihara, Yoshihisa; Goto, Yuji

    2016-07-19

    Ultrasonication is considered one of the most effective agitations for inducing the spontaneous formation of amyloid fibrils. When we induced the ultrasonication-dependent fibrillation of β2-microglobulin and insulin monitored by amyloid-specific thioflavin T (ThT) fluorescence, both proteins showed a significant decrease in ThT fluorescence after the burst-phase increase. The decrease in ThT fluorescence was accelerated when the ultrasonic power was stronger, suggesting that this decrease was caused by the partial denaturation of preformed fibrils. The possible intermediates of denaturation retained amyloid-like morphologies, secondary structures, and seeding potentials. Similar denaturation intermediates were also observed when fibrils were denatured by guanidine hydrochloride or sodium dodecyl sulfate. The presence of these denaturation intermediates is consistent with the main-chain-dominated architecture of amyloid fibrils. Moreover, in the three types of denaturation experiments conducted, insulin fibrils were more stable than β2-microglobulin fibrils, suggesting that the relative stability of various fibrils is independent of the method of denaturation. PMID:27345358

  8. Structural architecture supports functional organization in the human aging brain at a regionwise and network level.

    PubMed

    Zimmermann, Joelle; Ritter, Petra; Shen, Kelly; Rothmeier, Simon; Schirner, Michael; McIntosh, Anthony R

    2016-07-01

    Functional interactions in the brain are constrained by the underlying anatomical architecture, and structural and functional networks share network features such as modularity. Accordingly, age-related changes of structural connectivity (SC) may be paralleled by changes in functional connectivity (FC). We provide a detailed qualitative and quantitative characterization of the SC-FC coupling in human aging as inferred from resting-state blood oxygen-level dependent functional magnetic resonance imaging and diffusion-weighted imaging in a sample of 47 adults with an age range of 18-82. We revealed that SC and FC decrease with age across most parts of the brain and there is a distinct age-dependency of regionwise SC-FC coupling and network-level SC-FC relations. A specific pattern of SC-FC coupling predicts age more reliably than does regionwise SC or FC alone (r = 0.73, 95% CI = [0.7093, 0.8522]). Hence, our data propose that regionwise SC-FC coupling can be used to characterize brain changes in aging. Hum Brain Mapp 37:2645-2661, 2016. © 2016 Wiley Periodicals, Inc. PMID:27041212

  9. High-speed transcendental elementary-function architecture in support of the Vector Wave Equation (VWE). Master's thesis

    SciTech Connect

    Bailey, M.J.

    1987-12-01

    In support of a Very High Speed Integrated Circuit (VHSIC) class processor for computation of a set of equations known as the Vector Wave Equations (VWE), certain elementary functions, including sine, cosine, and division, are required. These elementary functions are the bottlenecks in the VWE processor; floating-point multipliers and adders comprise the remainder of the pipeline stages. To speed up the computation of the elementary functions, pipelining within the functions is considered. To compute sine, cosine, and division, the CORDIC algorithm is presented. Another method for computing sine and cosine is the expansion of the Chebyshev polynomials. The equations for the CORDIC processor are recursive, and the resulting hardware is very simple, consisting of three adders, three shifters, and a lookup table for some of the coefficients. The shifters replace the multipliers, because in binary, shifting right by i bits is the same as multiplying by 2 to the power -i. The expansion of the Chebyshev polynomials can also be used to compute other trigonometric functions as well as the exponential and logarithmic functions, and can serve as the basis of a mathematical coprocessor. From these equations, a pipelined architecture can be realized that results in very fast computation times. Transforming these equations into functions of x instead of the Chebyshev polynomials produces an architecture that requires less hardware, resulting in even faster computation times.
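
The CORDIC recursion summarized above can be sketched in software. The following is a minimal, illustrative rotation-mode implementation in floating point (the thesis itself targets pipelined fixed-point hardware built from adders, shifters, and a small arctangent table); the function and variable names here are our own:

```python
import math

def cordic_sin_cos(angle, iterations=32):
    """Rotation-mode CORDIC: compute (sin, cos) of `angle` (radians,
    |angle| < pi/2) using only adds, a small arctangent table, and
    multiplications by 2**-i, which in hardware become right shifts."""
    # Precomputed arctan(2**-i) table and the CORDIC gain correction
    # K = prod(1 / sqrt(1 + 2**-2i)), applied once at the end.
    atan_table = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

    x, y, z = 1.0, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0       # rotate toward z = 0
        # Each iteration rotates (x, y) by +/- arctan(2**-i)
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atan_table[i]
    return y * K, x * K                   # (sin, cos)
```

Each iteration rotates the vector (x, y) by a fixed micro-angle arctan(2^-i); because the only multiplications are by powers of two, the hardware described in the thesis can replace multipliers with shifters.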

  10. Architectural proteins Pita, Zw5, and ZIPIC contain homodimerization domain and support specific long-range interactions in Drosophila

    PubMed Central

    Zolotarev, Nikolay; Fedotova, Anna; Kyrchanova, Olga; Bonchuk, Artem; Penin, Aleksey A.; Lando, Andrey S.; Eliseeva, Irina A.; Kulakovskiy, Ivan V.; Maksimenko, Oksana; Georgiev, Pavel

    2016-01-01

    According to recent models, as yet poorly studied architectural proteins appear to be required for local regulation of enhancer–promoter interactions, as well as for global chromosome organization. Transcription factors ZIPIC, Pita and Zw5 belong to the class of chromatin insulator proteins and preferentially bind to promoters near the TSS and extensively colocalize with cohesin and condensin complexes. ZIPIC, Pita and Zw5 are structurally similar in containing the N-terminal zinc finger-associated domain (ZAD) and different numbers of C2H2-type zinc fingers at the C-terminus. Here we have shown that the ZAD domains of ZIPIC, Pita and Zw5 form homodimers. In Drosophila transgenic lines, these proteins are able to support long-distance interaction between GAL4 activator and the reporter gene promoter. However, no functional interaction between binding sites for different proteins has been revealed, suggesting that such interactions are highly specific. ZIPIC facilitates long-distance stimulation of the reporter gene by GAL4 activator in a yeast model system. Many of the genomic binding sites of ZIPIC, Pita and Zw5 are located at the boundaries of topologically associated domains (TADs). Thus, ZAD-containing zinc-finger proteins can be attributed to the class of architectural proteins. PMID:27137890

  11. Does Supporting Multiple Student Strategies Lead to Greater Learning and Motivation? Investigating a Source of Complexity in the Architecture of Intelligent Tutoring Systems

    ERIC Educational Resources Information Center

    Waalkens, Maaike; Aleven, Vincent; Taatgen, Niels

    2013-01-01

    Intelligent tutoring systems (ITS) support students in learning a complex problem-solving skill. One feature that makes an ITS architecturally complex, and hard to build, is support for strategy freedom, that is, the ability to let students pursue multiple solution strategies within a given problem. But does greater freedom mean that students…

  12. A Java-based enterprise system architecture for implementing a continuously supported and entirely Web-based exercise solution.

    PubMed

    Wang, Zhihui; Kiryu, Tohru

    2006-04-01

    Since machine-based exercise still uses local facilities, it is affected by time and place. We designed a web-based system architecture based on the Java 2 Enterprise Edition that can accomplish continuously supported machine-based exercise. In this system, exercise programs and machines are loosely coupled and dynamically integrated on the site of exercise via the Internet. We then extended the conventional health promotion model, which contains three types of players (users, exercise trainers, and manufacturers), by adding a new player: exercise program creators. Moreover, we developed a self-describing strategy to accommodate a variety of exercise programs and provide ease of use to users on the web. We illustrate our novel design with examples taken from our feasibility study on a web-based cycle ergometer exercise system. A biosignal-based workload control approach was introduced to ensure that users performed appropriate exercise alone. PMID:16617629

  13. Adaptive and Speculative Memory Consistency Support for Multi-core Architectures with On-Chip Local Memories

    NASA Astrophysics Data System (ADS)

    Vujic, Nikola; Alvarez, Lluc; Tallada, Marc Gonzalez; Martorell, Xavier; Ayguadé, Eduard

    Software caching has been shown to be a robust approach in multi-core systems with no hardware support for transparent data transfers between local and global memories. A software cache provides the user with a transparent view of the memory architecture and considerably improves the programmability of such systems. However, this software approach can suffer from poor performance due to the considerable overheads of the software mechanisms that maintain memory consistency. This paper presents a set of alternatives to mitigate their impact. A specific write-back mechanism is introduced, based on some degree of speculation regarding the number of threads actually modifying the same cache lines. A case study based on the Cell BE processor is described. Performance evaluation indicates that the improvements due to the optimized software-cache structures, combined with the proposed code optimizations, translate into speedups of 20% to 40% compared to a traditional software cache approach.
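
The write-back trade-off behind the speculation described above can be illustrated with a toy sketch (a hypothetical `SWCacheLine` class of our own; the paper's mechanism operates on the Cell BE's real local and global memories, not Python objects):

```python
class SWCacheLine:
    """Toy software-cache line: a conservative write-back merges only the
    bytes this thread dirtied, while a speculative write-back assumes this
    thread is the line's only writer and copies the whole line in one bulk
    transfer (faster, but unsafe under concurrent writers)."""

    def __init__(self, global_mem, base, size):
        self.global_mem = global_mem      # shared "global memory" (a bytearray)
        self.base, self.size = base, size
        self.local = bytearray(global_mem[base:base + size])  # local copy
        self.dirty = [False] * size       # per-byte dirty flags

    def write(self, offset, value):
        self.local[offset] = value
        self.dirty[offset] = True

    def write_back(self, speculate_single_writer=False):
        if speculate_single_writer:
            # Fast path: one bulk transfer; correct only if no other thread
            # modified this line since we fetched it.
            self.global_mem[self.base:self.base + self.size] = self.local
        else:
            # Safe path: merge back only the bytes we actually modified.
            for i, is_dirty in enumerate(self.dirty):
                if is_dirty:
                    self.global_mem[self.base + i] = self.local[i]
```

The safe path preserves concurrent writes by other threads at the cost of a per-byte merge; speculating that a line has a single writer removes that overhead, which is the kind of trade-off the paper's evaluation quantifies.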

  14. Insight into the Supramolecular Architecture of Intact Diatom Biosilica from DNP-Supported Solid-State NMR Spectroscopy.

    PubMed

    Jantschke, Anne; Koers, Eline; Mance, Deni; Weingarth, Markus; Brunner, Eike; Baldus, Marc

    2015-12-01

    Diatom biosilica is an inorganic/organic hybrid with interesting properties. The molecular architecture of the organic material at the atomic and nanometer scale has so far remained unknown, in particular for intact biosilica. A DNP-supported ssNMR approach assisted by microscopy, MS, and MD simulations was applied to study the structural organization of intact biosilica. For the first time, the secondary structure elements of tightly biosilica-associated native proteins in diatom biosilica were characterized in situ. Our data suggest that these proteins are rich in a limited set of amino acids and adopt a mixture of random-coil and β-strand conformations. Furthermore, biosilica-associated long-chain polyamines and carbohydrates were characterized, thereby leading to a model for the supramolecular organization of intact biosilica. PMID:26509491

  15. The kinematic architecture of the Active Headframe: A new head support for awake brain surgery.

    PubMed

    Malosio, Matteo; Negri, Simone Pio; Pedrocchi, Nicola; Vicentini, Federico; Cardinale, Francesco; Tosatti, Lorenzo Molinari

    2012-01-01

    This paper presents the novel hybrid kinematic structure of the Active Headframe, a robotic head support to be employed in brain surgery operations for an active and dynamic control of the patient's head position and orientation, particularly addressing awake surgery requirements. The topology has been conceived in order to satisfy all the installation, functional and dynamic requirements. A kinetostatic optimization has been performed to obtain the actual geometric dimensions of the prototype currently being developed. PMID:23366166

  16. New architecture for MPEG video streaming system with backward playback support.

    PubMed

    Fu, Chang-Hong; Chan, Yui-Lam; Ip, Tak-Piu; Siu, Wan-Chi

    2007-09-01

    The proposed architecture manipulates macroblocks only in the VLC domain or the quantized DCT domain, resulting in low server complexity. Experimental results show that, compared with the conventional system, the new streaming system significantly reduces the required network bandwidth and the decoder complexity. PMID:17784591

  17. Software Architecture to Support the Evolution of the ISRU RESOLVE Engineering Breadboard Unit 2 (EBU2)

    NASA Technical Reports Server (NTRS)

    Moss, Thomas; Nurge, Mark; Perusich, Stephen

    2011-01-01

    The In-Situ Resource Utilization (ISRU) Regolith & Environmental Science and Oxygen & Lunar Volatiles Extraction (RESOLVE) software provides operation of the physical plant from a remote location with a high-level interface that can access and control the data from external software applications of other subsystems. This software allows autonomous control over the entire system with manual computer control of individual system/process components. It gives non-programmer operators the capability to easily modify the high-level autonomous sequencing while the software is in operation, as well as the ability to modify the low-level, file-based sequences prior to system operation. Local automated control in a distributed system is also enabled, so that component control is maintained during the loss of network connectivity with the remote workstation. This innovation also minimizes network traffic. The software architecture commands and controls the latest generation of RESOLVE processes used to obtain, process, and quantify lunar regolith. The system is grouped into six sub-processes: Drill, Crush, Reactor, Lunar Water Resource Demonstration (LWRD), Regolith Volatiles Characterization (RVC), and Regolith Oxygen Extraction (ROE). Some processes are independent, some are dependent on other processes, and some are independent but run concurrently with other processes. The first goal is to analyze the volatiles emanating from lunar regolith, such as water, carbon monoxide, carbon dioxide, ammonia, hydrogen, and others. This is done by heating the soil and analyzing and capturing the volatilized product. The second goal is to produce water by reducing the soil at high temperatures with hydrogen. This is done by raising the reactor temperature to the range of 800 to 900 °C, causing the reaction to progress by adding hydrogen, and then capturing the water product in a desiccant bed. The software needs to run the entire unit and all sub-processes; however

  18. CranialCloud: A cloud-based architecture to support trans-institutional collaborative efforts in neuro-degenerative disorders

    PubMed Central

    D’Haese, Pierre-Francois; Konrad, Peter E.; Pallavaram, Srivatsan; Li, Rui; Prassad, Priyanka; Rodriguez, William; Dawant, Benoit M.

    2015-01-01

    Purpose Neurological diseases have a devastating impact on millions of individuals and their families. These diseases will continue to constitute a significant research focus for this century. The search for effective treatments and cures requires multiple teams of experts in clinical neurosciences, neuroradiology, engineering and industry. Hence, the need to communicate a large amount of information with accuracy and precision is more necessary than ever for this specialty. Method In this paper, we present a distributed system that supports this vision, which we call the CranialVault Cloud (CranialCloud). It consists of a network of nodes, each with the capability to store and process data, that share the same spatial normalization processes, thus guaranteeing a common reference space. We detail and justify design choices, the architecture and functionality of individual nodes, the way these nodes interact, and how the distributed system can be used to support inter-institutional research. Results We discuss the current state of the system, which gathers data for more than 1,600 patients, and how we envision it to grow. Conclusions We contend that the fastest way to find and develop promising treatments and cures is to permit teams of researchers to aggregate data, spatially normalize these data, and share them. The CranialVault system supports this vision. PMID:25861055

  19. Self-organising maps and correlation analysis as a tool to explore patterns in excitation-emission matrix data sets and to discriminate dissolved organic matter fluorescence components.

    PubMed

    Ejarque-Gonzalez, Elisabet; Butturini, Andrea

    2014-01-01

    Dissolved organic matter (DOM) is a complex mixture of organic compounds, ubiquitous in marine and freshwater systems. Fluorescence spectroscopy, by means of Excitation-Emission Matrices (EEM), has become an indispensable tool to study DOM sources, transport and fate in aquatic ecosystems. However, the statistical treatment of large and heterogeneous EEM data sets still represents an important challenge for biogeochemists. Recently, Self-Organising Maps (SOM) have been proposed as a tool to explore patterns in large EEM data sets. SOM is a pattern recognition method which clusters input EEMs and reduces their dimensionality without relying on any assumption about the data structure. In this paper, we show how SOM, coupled with a correlation analysis of the component planes, can be used both to explore patterns among samples and to identify individual fluorescence components. We analysed a large and heterogeneous EEM data set, including samples from a river catchment collected under a range of hydrological conditions, along a 60-km downstream gradient, and under the influence of different degrees of anthropogenic impact. According to our results, chemical industry effluents appeared to have unique and distinctive spectral characteristics. On the other hand, river samples collected under flash flood conditions showed homogeneous EEM shapes. The correlation analysis of the component planes suggested the presence of four fluorescence components, consistent with DOM components previously described in the literature. A remarkable strength of this methodology was that outlier samples appeared naturally integrated in the analysis. We conclude that SOM coupled with a correlation analysis procedure is a promising tool for studying large and heterogeneous EEM data sets. PMID:24906009
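
The core SOM training loop the abstract relies on can be sketched compactly. Below is a minimal, illustrative 1-D SOM in plain Python (the study itself trains on full excitation-emission matrices; the grid size, decay schedules, and function names here are our own assumptions):

```python
import math
import random

def train_som(data, grid_size=4, dim=2, epochs=200, lr0=0.5, radius0=2.0):
    """Minimal 1-D self-organising map: each grid unit holds a weight
    vector; the best matching unit (BMU) and its grid neighbours are
    pulled toward each input sample, with learning rate and neighbourhood
    radius decaying over the epochs."""
    random.seed(0)
    weights = [[random.random() for _ in range(dim)] for _ in range(grid_size)]
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                  # decaying learning rate
        radius = max(radius0 * (1.0 - epoch / epochs), 0.5)
        for x in data:
            # BMU = unit whose weight vector is closest (squared distance)
            bmu = min(range(grid_size),
                      key=lambda i: sum((weights[i][k] - x[k]) ** 2
                                        for k in range(dim)))
            for i in range(grid_size):
                # Gaussian neighbourhood function on the 1-D grid
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                for k in range(dim):
                    weights[i][k] += lr * h * (x[k] - weights[i][k])
    return weights

def bmu_index(weights, x):
    """Map a sample to its best matching unit on the trained grid."""
    return min(range(len(weights)),
               key=lambda i: sum((w - xk) ** 2 for w, xk in zip(weights[i], x)))
```

After training, samples that the map considers similar land on the same or nearby units, which is what lets the authors cluster EEMs without assuming a data model; the component-plane correlation analysis then inspects the per-variable slices of the trained weights.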


  1. Evolution of self-organisation in Dictyostelia by adaptation of a non-selective phosphodiesterase and a matrix component for regulated cAMP degradation

    PubMed Central

    Kawabe, Yoshinori; Weening, Karin E.; Marquay-Markiewicz, Jacques; Schaap, Pauline

    2012-01-01

    Dictyostelium discoideum amoebas coordinate aggregation and morphogenesis by secreting cyclic adenosine monophosphate (cAMP) pulses that propagate as waves through fields of cells and multicellular structures. To retrace how this mechanism for self-organisation evolved, we studied the origin of the cAMP phosphodiesterase PdsA and its inhibitor PdiA, which are essential for cAMP wave propagation. D. discoideum and other species that use cAMP to aggregate reside in group 4 of the four major groups of Dictyostelia. We found that groups 1-3 express a non-specific, low affinity orthologue of PdsA, which gained cAMP selectivity and increased 200-fold in affinity in group 4. A low affinity group 3 PdsA only partially restored aggregation of a D. discoideum pdsA-null mutant, but was more effective at restoring fruiting body morphogenesis. Deletion of a group 2 PdsA gene resulted in disruption of fruiting body morphogenesis, but left aggregation unaffected. Together, these results show that groups 1-3 use a low affinity PdsA for morphogenesis that is neither suited nor required for aggregation. PdiA belongs to a family of matrix proteins that are present in all Dictyostelia and consist mainly of cysteine-rich repeats. However, in its current form with several extensively modified repeats, PdiA is only present in group 4. PdiA is essential for initiating spiral cAMP waves, which, by organising large territories, generate the large fruiting structures that characterise group 4. We conclude that efficient cAMP-mediated aggregation in group 4 evolved by recruitment and adaptation of a non-selective phosphodiesterase and a matrix component into a system for regulated cAMP degradation. PMID:22357931

  2. Fortran Transformational Tools in Support of Scientific Application Development for Petascale Computer Architectures

    SciTech Connect

    Sottille, Matthew

    2013-09-12

    This document is the final report for a multi-year effort building infrastructure to support tool development for Fortran programs. We also investigated static analysis and code transformation methods relevant to scientific programmers who are writing Fortran programs for petascale-class high performance computing systems. This report details our accomplishments, technical approaches, and provides information on where the research results and code may be obtained from an open source software repository. The report for the first year of the project that was performed at the University of Oregon prior to the PI moving to Galois, Inc. is included as an appendix.

  3. Project I-COP - architecture of software tool for decision support in oncology.

    PubMed

    Blaha, Milan; Janča, Dalibor; Klika, Petr; Mužík, Jan; Dušek, Ladislav

    2013-01-01

    This article briefly describes the development of the I-COP tool, which is designed to promote education and decision making of clinical oncologists. It is based on real data from medical facilities, which are processed, stored in database, analyzed and finally displayed in an interactive software application. Used data sources are shortly described in individual sections together with the functionality of developed tools. The final goal of this project is to provide support for work and education within each involved partner center. Clinical oncologists are therefore supposed to be the authors and users at the same time. PMID:23542983

  4. Development of a Computer Architecture to Support the Optical Plume Anomaly Detection (OPAD) System

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1996-01-01

    to execute the software in a modern single-processor workstation, and therefore real-time operation is currently not possible. A different number of iterations may be required to perform spectral data fitting per spectral sample. Yet, the OPAD system must be designed to maintain real-time performance in all cases. Although faster single-processor workstations are available for execution of the fitting and SPECTRA software, this option is unattractive due to the excessive cost associated with very fast workstations and also due to the fact that such hardware is not easily expandable to accommodate future versions of the software which may require more processing power. Initial research has already demonstrated that the OPAD software can take advantage of a parallel computer architecture to achieve the necessary speedup. Current work has improved the software by converting it into a form which is easily parallelizable. Timing experiments have been performed to establish the computational complexity and execution speed of major components of the software. This work provides the foundation of future work which will create a fully parallel version of the software executing in a shared-memory multiprocessor system.
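
    Since each spectral sample is fitted independently, the per-sample fitting naturally parallelises. A minimal sketch of the idea with a process pool follows; fit_sample is a hypothetical stand-in for the real SPECTRA fitting routine, not the OPAD code:

```python
# Hypothetical sketch: parallelising per-sample spectral fitting with a
# process pool, in the spirit of the multiprocessor speedup described above.
from multiprocessing import Pool

def fit_sample(sample):
    # Placeholder "fit": iterate until a residual falls below tolerance.
    # A real fit would adjust model parameters against the spectrum, and
    # the number of iterations varies per sample, as noted in the abstract.
    value, iterations = sample, 0
    while value > 1e-3 and iterations < 100:
        value *= 0.5  # pretend each iteration halves the residual
        iterations += 1
    return iterations

def fit_all(samples, workers=4):
    # Each spectral sample is fitted independently, so the work
    # distributes naturally across processes.
    with Pool(processes=workers) as pool:
        return pool.map(fit_sample, samples)

if __name__ == "__main__":
    print(fit_all([1.0, 0.5, 0.25]))
```

    The per-sample independence is what makes a shared-memory or pool-based design attractive: no synchronisation is needed until the results are gathered.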

  5. Design of a decision-support architecture for management of remotely monitored patients.

    PubMed

    Basilakis, Jim; Lovell, Nigel H; Redmond, Stephen J; Celler, Branko G

    2010-09-01

    Telehealth is the provision of health services at a distance, typically in unsupervised or remote environments such as a patient's home. We describe one such telehealth system and the integration of extracted clinical measurement parameters with a decision-support system (DSS). An enterprise application-server framework, combined with a rules engine and statistical analysis tools, is used to analyze the acquired telehealth data, searching for trends and shifts in parameter values, as well as identifying individual measurements that exceed predetermined or adaptive thresholds. An overarching business process engine is used to manage the core DSS knowledge base and coordinate the workflow outputs of the DSS. The primary role of such a DSS is to reduce data overload and to provide a means of health risk stratification, so that scarce clinical resources can be targeted to the patients most in need. A single case study, extracted from an initial pilot trial of the system in patients with chronic obstructive pulmonary disease and chronic heart failure, is reviewed to illustrate the potential benefit of integrating telehealth and decision support in the management of both acute and chronic disease. PMID:20615815
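
    The rule layer described above (fixed thresholds plus trend-shift detection) can be illustrated with a small sketch; the threshold values, window size and readings are invented examples, not the system's actual rules:

```python
# Illustrative sketch (not the authors' implementation): a tiny rule layer
# that flags threshold breaches and sustained trend shifts in a stream of
# telehealth measurements.
from statistics import mean

def threshold_alerts(values, low, high):
    # Flag individual measurements outside predetermined limits.
    return [i for i, v in enumerate(values) if v < low or v > high]

def trend_shift(values, window=3, delta=5.0):
    # Compare the means of the first and last `window` readings; a large
    # difference suggests a shift worth escalating for clinical review.
    if len(values) < 2 * window:
        return False
    return abs(mean(values[-window:]) - mean(values[:window])) > delta

readings = [72, 74, 73, 75, 90, 92, 95]   # e.g. heart rate, bpm
print(threshold_alerts(readings, 60, 90)) # indices of out-of-range readings
print(trend_shift(readings))
```

    A production DSS would of course source thresholds from the knowledge base and adapt them per patient, but the risk-stratification logic reduces to rules of roughly this shape.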

  6. NASA's Earth Science Gateway within the GEOSS Architecture Framework and in Support of Distributed Global Systems

    NASA Astrophysics Data System (ADS)

    Alameh, N.; Cole, M.; Bambacus, M.; Thomas, R.

    2007-12-01

    Progress continues within the arena of interoperability towards greater discovery, access, and use of scientific data in support of improved societal benefit and decision making. The Global Earth Observation System of Systems (GEOSS) effort has developed multiple pilot projects in which many of these maturing and emerging technologies are being interconnected, tested, and implemented as operational systems. Within this network of components are data stores, registries, catalogs, portals, models, work-process flows, satellites, UAVs, and many more. The pilots tackle the intricate process of ensuring that these components work together properly within a framework of open-standards-based interoperability, enabling the vision of a distributed, comprehensible, global system of scientific tools and data for the lay person as well as the researcher. This paper concentrates on the NASA Earth Science Gateway (http://esg.gsfc.nasa.gov) view of interconnections to the registries, catalogs, models, and other components that support this System of Systems. It covers which standards were used, how components can be connected and tested, where difficulties emerged, where we have seen a return on investment, and how this level of interoperability is progressing.

  7. Self-Organizing Distributed Architecture Supporting Dynamic Space Expanding and Reducing in Indoor LBS Environment

    PubMed Central

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2015-01-01

    Indoor location-based services (iLBS) are extremely dynamic and changeable, and involve numerous resources and mobile devices. In particular, the network infrastructure must support high scalability in the indoor environment, where various resource lookups are requested concurrently and frequently from several locations. A traditional map-based centralized approach for iLBS has several disadvantages: it requires global knowledge to maintain a complete geographic indoor map; the central server is a single point of failure; it can cause low scalability and traffic congestion; and it is hard to adapt to changes of the service area in real time. This paper proposes a self-organizing and fully distributed platform for iLBS. The proposed platform provides dynamic reconfiguration of locality accuracy and service coverage by expanding and contracting dynamically. To verify the suggested platform, scalability as a function of the number of inserted or deleted nodes composing the dynamic infrastructure was evaluated through a simulation resembling the real environment. PMID:26016908
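
    The expanding/contracting space idea can be illustrated with a toy region hierarchy. This is an assumed design sketch, not the paper's protocol: inserting a node refines locality, and deleting it makes lookups fall back to the coarser parent region:

```python
# Toy sketch of dynamic space expansion and reduction: a hierarchy of
# region nodes in which inserting a child expands service coverage and
# refines locality, while deleting one contracts lookups to the parent.
class Region:
    def __init__(self, name):
        self.name = name
        self.children = {}

    def insert(self, path):
        # Expand: create any missing sub-regions along the path.
        node = self
        for part in path:
            node = node.children.setdefault(part, Region(part))
        return node

    def remove(self, path):
        # Contract: drop a sub-region; lookups fall back to its parent.
        node = self
        for part in path[:-1]:
            node = node.children[part]
        node.children.pop(path[-1], None)

    def resolve(self, path):
        # Return the most specific region currently covering `path`.
        node = self
        for part in path:
            if part not in node.children:
                break
            node = node.children[part]
        return node.name

root = Region("building")
root.insert(["floor2", "room201"])
print(root.resolve(["floor2", "room201"]))  # room201
root.remove(["floor2", "room201"])
print(root.resolve(["floor2", "room201"]))  # floor2
```

    In a fully distributed deployment each node would hold only its own subtree, which is what removes the single point of failure that the centralized map approach suffers from.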

  8. NASA's Earth Observing Data and Information System - Supporting Interoperability through a Scalable Architecture (Invited)

    NASA Astrophysics Data System (ADS)

    Mitchell, A. E.; Lowe, D. R.; Murphy, K. J.; Ramapriyan, H. K.

    2013-12-01

    Initiated in 1990, NASA's Earth Observing System Data and Information System (EOSDIS) is currently a petabyte-scale archive of data designed to receive, process, distribute and archive several terabytes of science data per day from NASA's Earth science missions. Comprised of 12 discipline specific data centers collocated with centers of science discipline expertise, EOSDIS manages over 6800 data products from many science disciplines and sources. NASA supports global climate change research by providing scalable open application layers to the EOSDIS distributed information framework. This allows many other value-added services to access NASA's vast Earth Science Collection and allows EOSDIS to interoperate with data archives from other domestic and international organizations. EOSDIS is committed to NASA's Data Policy of full and open sharing of Earth science data. As metadata is used in all aspects of NASA's Earth science data lifecycle, EOSDIS provides a spatial and temporal metadata registry and order broker called the EOS Clearing House (ECHO) that allows efficient search and access of cross domain data and services through the Reverb Client and Application Programmer Interfaces (APIs). Another core metadata component of EOSDIS is NASA's Global Change Master Directory (GCMD) which represents more than 25,000 Earth science data set and service descriptions from all over the world, covering subject areas within the Earth and environmental sciences. With inputs from the ECHO, GCMD and Soil Moisture Active Passive (SMAP) mission metadata models, EOSDIS is developing a NASA ISO 19115 Best Practices Convention. Adoption of an international metadata standard enables a far greater level of interoperability among national and international data products. 
NASA recently concluded a 'Metadata Harmony Study' of EOSDIS metadata capabilities/processes of ECHO and NASA's Global Change Master Directory (GCMD), to evaluate opportunities for improved data access and use, reduce

  9. Long-Term Patterns in the Population Dynamics of Daphnia longispina, Leptodora kindtii and Cyanobacteria in a Shallow Reservoir: A Self-Organising Map (SOM) Approach

    PubMed Central

    Wojtal-Frankiewicz, Adrianna; Kruk, Andrzej; Frankiewicz, Piotr; Oleksińska, Zuzanna; Izydorczyk, Katarzyna

    2015-01-01

    The recognition of long-term patterns in the seasonal dynamics of Daphnia longispina, Leptodora kindtii and cyanobacteria is dependent upon their interactions, the water temperature and the hydrological conditions, which were all investigated between 1999 and 2008 in the lowland Sulejow Reservoir. The biomass of cyanobacteria, densities of D. longispina and L. kindtii, concentration of chlorophyll a and water temperature were assessed weekly from April to October at three sampling stations along the longitudinal reservoir axis. The retention time was calculated using data on the actual water inflow and reservoir volume. A self-organising map (SOM) was used due to high interannual variability in the studied parameters and their often non-linear relationships. Classification of the SOM output neurons into three clusters that grouped the sampling terms with similar biotic states allowed identification of the crucial abiotic factors responsible for the seasonal sequence of events: cluster CL-ExSp (extreme/spring) corresponded to hydrologically unstable cold periods (mostly spring) with extreme values and highly variable abiotic factors, which made abiotic control of the biota dominant; cluster CL-StSm (stable/summer) was associated with ordinary late spring and summer and was characterised by stable non-extreme abiotic conditions, which made biotic interactions more important; and the cluster CL-ExSm (extreme/summer), was associated with late spring/summer and characterised by thermal or hydrological extremes, which weakened the role of biotic factors. The significance of the differences between the SOM sub-clusters was verified by Kruskal-Wallis and post-hoc Dunn tests. The importance of the temperature and hydrological regimes as the key plankton-regulating factors in the dam reservoir, as shown by the SOM, was confirmed by the results of canonical correlation analyses (CCA) of each cluster. The demonstrated significance of hydrology in seasonal plankton dynamics
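
    The SOM training loop underlying the analysis above can be sketched in a few lines of NumPy. This is a generic, minimal self-organising map; the grid size, learning-rate schedule and synthetic data are illustrative assumptions, not the study's configuration:

```python
# Minimal self-organising map (SOM): each input is assigned to its
# best-matching unit (BMU), and a Gaussian neighbourhood of neurons is
# pulled toward the input, with learning rate and radius decaying over time.
import numpy as np

def train_som(data, rows=4, cols=4, epochs=50, lr0=0.5, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.random((rows, cols, data.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    sigma0 = max(rows, cols) / 2.0
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-9  # shrinking neighbourhood
        for x in rng.permutation(data):
            # Best-matching unit: neuron whose weights are closest to x.
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood pulls nearby neurons toward x.
            g = np.exp(-np.sum((grid - bmu) ** 2, axis=2) / (2 * sigma**2))
            weights += lr * g[..., None] * (x - weights)
    return weights

data = np.vstack([np.random.default_rng(1).normal(m, 0.1, (20, 3))
                  for m in (0.2, 0.8)])          # two synthetic "states"
som = train_som(data)
print(som.shape)  # (4, 4, 3)
```

    After training, the output neurons can be grouped (as the authors did into three clusters) so that sampling terms landing on the same group of neurons share a similar biotic state.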

  11. A Scalable Architecture for Incremental Specification and Maintenance of Procedural and Declarative Clinical Decision-Support Knowledge

    PubMed Central

    Hatsek, Avner; Shahar, Yuval; Taieb-Maimon, Meirav; Shalom, Erez; Klimov, Denis; Lunenfeld, Eitan

    2010-01-01

    Clinical guidelines have been shown to improve the quality of medical care and to reduce its costs. However, most guidelines exist in a free-text representation and, without automation, are not sufficiently accessible to clinicians at the point of care. A prerequisite for automated guideline application is a machine-comprehensible representation of the guidelines. In this study, we designed and implemented a scalable architecture to support medical experts and knowledge engineers in specifying and maintaining the procedural and declarative aspects of clinical guideline knowledge, resulting in a machine-comprehensible representation. The new framework significantly extends our previous work on the Digital electronic Guidelines Library (DeGeL). The current study designed and implemented Gesher, a graphical framework for the specification of declarative and procedural clinical knowledge. We performed three different experiments to evaluate the functionality and usability of the major aspects of the new framework: specification of procedural clinical knowledge, specification of declarative clinical knowledge, and exploration of a given clinical guideline. The subjects included clinicians and knowledge engineers (overall, 27 participants). The evaluations indicated high levels of completeness and correctness of the guideline specification process by both the clinicians and the knowledge engineers, although the best results, in the case of declarative-knowledge specification, were achieved by teams including a clinician and a knowledge engineer. The usability scores were high as well, although the clinicians' assessment was significantly lower than that of the knowledge engineers. PMID:21611137

  12. PICNIC Architecture.

    PubMed

    Saranummi, Niilo

    2005-01-01

    The PICNIC architecture aims at supporting inter-enterprise integration and the facilitation of collaboration between healthcare organisations. The concept of a Regional Health Economy (RHE) is introduced to illustrate the varying nature of inter-enterprise collaboration between healthcare organisations collaborating in providing health services to citizens and patients in a regional setting. The PICNIC architecture comprises a number of PICNIC IT Services, the interfaces between them and presents a way to assemble these into a functioning Regional Health Care Network meeting the needs and concerns of its stakeholders. The PICNIC architecture is presented through a number of views relevant to different stakeholder groups. The stakeholders of the first view are national and regional health authorities and policy makers. The view describes how the architecture enables the implementation of national and regional health policies, strategies and organisational structures. The stakeholders of the second view, the service viewpoint, are the care providers, health professionals, patients and citizens. The view describes how the architecture supports and enables regional care delivery and process management including continuity of care (shared care) and citizen-centred health services. The stakeholders of the third view, the engineering view, are those that design, build and implement the RHCN. The view comprises four sub views: software engineering, IT services engineering, security and data. The proposed architecture is founded into the main stream of how distributed computing environments are evolving. The architecture is realised using the web services approach. A number of well established technology platforms and generic standards exist that can be used to implement the software components. The software components that are specified in PICNIC are implemented in Open Source. PMID:16160218

  13. A Sustainable, Reliable Mission-Systems Architecture that Supports a System of Systems Approach to Space Exploration

    NASA Technical Reports Server (NTRS)

    Watson, Steve; Orr, Jim; O'Neil, Graham

    2004-01-01

    A mission-systems architecture based on a highly modular "systems of systems" infrastructure utilizing open-standards hardware and software interfaces as the enabling technology is absolutely essential for an affordable and sustainable space exploration program. This architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimum sustaining engineering. This paper proposes such an architecture. Lessons learned from the space shuttle program are applied to help define and refine the model.

  14. IAIMS Architecture

    PubMed Central

    Hripcsak, George

    1997-01-01

    An information system architecture defines the components of a system and the interfaces among the components. A good architecture is essential for creating an Integrated Advanced Information Management System (IAIMS) that works as an integrated whole yet is flexible enough to accommodate many users and roles, multiple applications, changing vendors, evolving user needs, and advancing technology. Modularity and layering promote flexibility by reducing the complexity of a system and by restricting the ways in which components may interact. Enterprise-wide mediation promotes integration by providing message routing, support for standards, dictionary-based code translation, a centralized conceptual data schema, business rule implementation, and consistent access to databases. Several IAIMS sites have adopted a client-server architecture, and some have adopted a three-tiered approach, separating user interface functions, application logic, and repositories. PMID:9067884

  16. An SMS-based System Architecture (Logical Model) to Support Management of Information Exchange in Emergency Situations. POLINT-112-SMS Project

    NASA Astrophysics Data System (ADS)

    Vetulani, Zygmunt; Marciniak, Jacek; Konieczka, Pawel; Walkowska, Justyna

    In this paper we present the architecture of the POLINT-112-SMS system, which supports information management in emergency situations. The system interprets text input in the form of SMS messages, understanding the information provided by the human user, and is intended to assist a human in making decisions. The main modules of the system presented here are the SMS gate, the NLP Module (processing Polish), the Situation Analysis Module (SAM) and the Dialogue Maintenance Module (DMM).

  17. Web 2.0 systems supporting childhood chronic disease management: A pattern language representation of a general architecture

    PubMed Central

    Timpka, Toomas; Eriksson, Henrik; Ludvigsson, Johnny; Ekberg, Joakim; Nordfeldt, Sam; Hanberger, Lena

    2008-01-01

    Background: Chronic disease management is a global health concern. By the time they reach adolescence, 10–15% of all children live with a chronic disease. The role of educational interventions in facilitating adaptation to chronic disease is receiving growing recognition, and current care policies advocate greater involvement of patients in self-care. Web 2.0 is an umbrella term for new collaborative Internet services characterized by user participation in developing and managing content. Key elements include Really Simple Syndication (RSS) to rapidly disseminate awareness of new information; weblogs (blogs) to describe new trends, wikis to share knowledge, and podcasts to make information available on personal media players. This study addresses the potential to develop Web 2.0 services for young persons with a chronic disease. It is acknowledged that the management of childhood chronic disease is based on interplay between initiatives and resources on the part of patients, relatives, and health care professionals, and where the balance shifts over time to the patients and their families. Methods: Participatory action research was used to stepwise define a design specification in the form of a pattern language. Support for children diagnosed with diabetes Type 1 was used as the example area. Each individual design pattern was determined graphically using card sorting methods, and textually in the form Title, Context, Problem, Solution, Examples and References. Application references were included at the lowest level in the graphical overview in the pattern language but not specified in detail in the textual descriptions. Results: The design patterns are divided into functional and non-functional design elements, and formulated at the levels of organizational, system, and application design. The design elements specify access to materials for development of the competences needed for chronic disease management in specific community settings, endorsement of self

  18. T and D-Bench--Innovative Combined Support for Education and Research in Computer Architecture and Embedded Systems

    ERIC Educational Resources Information Center

    Soares, S. N.; Wagner, F. R.

    2011-01-01

    Teaching and Design Workbench (T&D-Bench) is a framework aimed at education and research in the areas of computer architecture and embedded systems. It includes a set of features not found in other educational environments. This set of features is the result of an original combination of design requirements for T&D-Bench: that the framework should…

  19. A Tool for Managing Software Architecture Knowledge

    SciTech Connect

    Babar, Muhammad A.; Gorton, Ian

    2007-08-01

    This paper describes a tool for managing architectural knowledge and rationale. The tool has been developed to support a framework for capturing and using architectural knowledge to improve the architecture process. This paper describes the main architectural components and features of the tool. The paper also provides examples of using the tool to support well-known architecture design and analysis methods.

  20. Implementation of a metadata architecture and knowledge collection to support semantic interoperability in an enterprise data warehouse.

    PubMed

    Dhaval, Rakesh; Borlawsky, Tara; Ostrander, Michael; Santangelo, Jennifer; Kamal, Jyoti; Payne, Philip R O

    2008-01-01

    In order to enhance interoperability between enterprise systems, and improve data validity and reliability throughout The Ohio State University Medical Center (OSUMC), we have initiated the development of an ontology-anchored metadata architecture and knowledge collection for our enterprise data warehouse. The metadata and corresponding semantic relationships stored in the OSUMC knowledge collection are intended to promote consistency and interoperability across the heterogeneous clinical, research, business and education information managed within the data warehouse. PMID:18999040

  1. Interactions between subunits of Saccharomyces cerevisiae RNase MRP support a conserved eukaryotic RNase P/MRP architecture

    PubMed Central

    Aspinall, Tanya V.; Gordon, James M.B.; Bennett, Hayley J.; Karahalios, Panagiotis; Bukowski, John-Paul; Walker, Scott C.; Engelke, David R.; Avis, Johanna M.

    2007-01-01

    Ribonuclease MRP is an endonuclease, related to RNase P, which functions in eukaryotic pre-rRNA processing. In Saccharomyces cerevisiae, RNase MRP comprises an RNA subunit and ten proteins. To improve our understanding of subunit roles and enzyme architecture, we have examined protein-protein and protein–RNA interactions in vitro, complementing existing yeast two-hybrid data. In total, 31 direct protein–protein interactions were identified, each protein interacting with at least three others. Furthermore, seven proteins self-interact, four strongly, pointing to subunit multiplicity in the holoenzyme. Six protein subunits interact directly with MRP RNA and four with pre-rRNA. A comparative analysis with existing data for the yeast and human RNase P/MRP systems enables confident identification of Pop1p, Pop4p and Rpp1p as subunits that lie at the enzyme core, with probable addition of Pop5p and Pop3p. Rmp1p is confirmed as an integral subunit, presumably associating preferentially with RNase MRP, rather than RNase P, via interactions with Snm1p and MRP RNA. Snm1p and Rmp1p may act together to assist enzyme specificity, though roles in substrate binding are also indicated for Pop4p and Pop6p. The results provide further evidence of a conserved eukaryotic RNase P/MRP architecture and provide a strong basis for studies of enzyme assembly and subunit function. PMID:17881380

  2. Nitric oxide is required for determining root architecture and lignin composition in sunflower. Supporting evidence from microarray analyses.

    PubMed

    Corti Monzón, Georgina; Pinedo, Marcela; Di Rienzo, Julio; Novo-Uzal, Esther; Pomar, Federico; Lamattina, Lorenzo; de la Canal, Laura

    2014-05-30

    Nitric oxide (NO) is a signal molecule involved in several physiological processes in plants, including root development. Despite the importance of NO as a root growth regulator, knowledge about the genes and metabolic pathways modulated by NO in this process is still limited. A constraint to unravelling these pathways has been the use of exogenous applications of NO donors, which may produce toxic effects. We have analyzed the role of NO in root architecture through the depletion of endogenous NO using the scavenger cPTIO. Sunflower seedlings growing in liquid medium supplemented with cPTIO showed unaltered primary root length while the number of lateral roots was deeply reduced, indicating that endogenous NO participates in determining root branching in sunflower. The transcriptional changes induced by NO depletion have been analyzed using a large-scale approach. A microarray analysis showed 330 genes regulated in the roots (p≤0.001) upon endogenous NO depletion. A general cPTIO-induced up-regulation of genes involved in the lignin biosynthetic pathway was observed. Although no changes in total lignin content were detected, cell wall analyses revealed that the guaiacyl/syringyl (G/S) lignin ratio increased in roots treated with cPTIO. This means that endogenous NO may control lignin composition in planta. Our results suggest that fine-tuned regulation of NO levels could be used by plants to regulate root architecture and lignin composition. The functional implications of these findings are discussed. PMID:24747108

  3. Architecture Design of Healthcare Software-as-a-Service Platform for Cloud-Based Clinical Decision Support Service

    PubMed Central

    Oh, Sungyoung; Cha, Jieun; Ji, Myungkyu; Kang, Hyekyung; Kim, Seok; Heo, Eunyoung; Han, Jong Soo; Kang, Hyunggoo; Chae, Hoseok; Hwang, Hee

    2015-01-01

    Objectives: To design a cloud computing-based Healthcare Software-as-a-Service (SaaS) Platform (HSP) for delivering healthcare information services with low cost, high clinical value, and high usability. Methods: We analyzed the architecture requirements of an HSP, including the interface, business services, cloud SaaS, quality attributes, privacy and security, and multi-lingual capacity. For cloud-based SaaS services, we focused on Clinical Decision Service (CDS) content services, basic functional services, and mobile services. Microsoft's Azure cloud computing for Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) was used. Results: The functional and software views of an HSP were designed in a layered architecture. External systems can be interfaced with the HSP using SOAP and REST/JSON. The multi-tenancy model of the HSP was designed as a shared database, with a separate schema for each tenant through a single application, although healthcare data can be physically located on a cloud or in a hospital, depending on regulations. The CDS services were categorized into rule-based services for medications, alert registration services, and knowledge services. Conclusions: We expect that cloud-based HSPs will allow small and mid-sized hospitals, in addition to large-sized hospitals, to adopt information infrastructures and health information technology with low system operation and maintenance costs. PMID:25995962
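
    As an illustration of the "rule-based services for medications" category, here is a minimal, hypothetical interaction-checking rule; the rule table is an invented example, not the HSP's actual content:

```python
# Hypothetical sketch of a rule-based medication check: a rule table maps
# drug pairs to an alert reason, and the service scans a prescription list
# for matching pairs. The single rule shown is an illustrative example.
INTERACTIONS = {frozenset({"warfarin", "aspirin"}): "bleeding risk"}

def medication_alerts(prescriptions):
    # Normalise names, then test every pair against the rule table.
    meds = [p.lower() for p in prescriptions]
    alerts = []
    for i in range(len(meds)):
        for j in range(i + 1, len(meds)):
            pair = frozenset({meds[i], meds[j]})
            if pair in INTERACTIONS:
                alerts.append((meds[i], meds[j], INTERACTIONS[pair]))
    return alerts

print(medication_alerts(["Warfarin", "Aspirin", "Metformin"]))
```

    In a SaaS deployment the rule table would live in the shared CDS content service, so every tenant hospital benefits from the same maintained knowledge base.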

  4. Evolving earth-based and in-situ satellite network architectures for Mars communications and navigation support

    NASA Technical Reports Server (NTRS)

    Hastrup, Rolf; Weinberg, Aaron; Mcomber, Robert

    1991-01-01

    Results of on-going studies to develop navigation/telecommunications network concepts to support future robotic and human missions to Mars are presented. The performance and connectivity improvements provided by the relay network will permit use of simpler, lower performance, and less costly telecom subsystems for the in-situ mission exploration elements. Orbiting relay satellites can serve as effective navigation aids by supporting earth-based tracking as well as providing Mars-centered radiometric data for mission elements approaching, in orbit, or on the surface of Mars. The relay satellite orbits may be selected to optimize navigation aid support and communication coverage for specific mission sets.

  5. Tools for describing the reference architecture for space data systems

    NASA Technical Reports Server (NTRS)

    Shames, Peter; Yamada, Takahiro

    2004-01-01

    This paper briefly presents the Reference Architecture for Space Data Systems (RASDS) being developed by the CCSDS Systems Architecture Working Group (SAWG). The SAWG has generated sample architectures (spacecraft onboard architectures, space link architectures, cross-support architectures) using the RASDS approach, and RASDS has proved to be a powerful tool for describing and relating different space data system architectures.

  6. Robotic Intelligence Kernel: Architecture

    Energy Science and Technology Software Center (ESTSC)

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  7. FTS2000 network architecture

    NASA Technical Reports Server (NTRS)

    Klenart, John

    1991-01-01

The network architecture of FTS2000 is graphically depicted. A map of network A topology is provided, with interservice nodes. Next, the four basic elements of the architecture are laid out. Then, the FTS2000 timeline is reproduced. A list of equipment supporting FTS2000 dedicated transmissions is given. Finally, access alternatives are shown.

  8. Extending multi-tenant architectures: a database model for a multi-target support in SaaS applications

    NASA Astrophysics Data System (ADS)

    Rico, Antonio; Noguera, Manuel; Garrido, José Luis; Benghazi, Kawtar; Barjis, Joseph

    2016-05-01

Multi-tenant architectures (MTAs) are considered a cornerstone in the success of Software as a Service as a new application distribution formula. Multi-tenancy allows multiple customers (i.e. tenants) to be consolidated into the same operational system. This way, tenants run and share the same application instance as well as costs, which are significantly reduced. Functional needs vary from one tenant to another: either companies from different sectors run different types of applications or, although deploying the same functionality, they differ in the extent of its complexity. In any case, multi-tenancy raises one major concern regarding companies' data, namely their privacy and security, which requires special attention at the data layer. In this article, we propose an extended data model that enhances traditional MTAs in respect of this concern. This extension - called multi-target - allows MT applications to host, manage and serve multiple functionalities within the same multi-tenant (MT) environment. The practical deployment of this approach will allow SaaS vendors to target multiple markets or address different levels of functional complexity and yet commercialise just one single MT application. The applicability of the approach is demonstrated via a case study of a real multi-tenancy multi-target (MT2) implementation, called Globalgest.
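
The core of a multi-target data model can be illustrated with a toy in-memory sketch: alongside the usual tenant discriminator, each shared row also carries a target discriminator identifying which hosted functionality it belongs to, and every query is scoped by both. The class and column names below are hypothetical illustrations, not taken from the article or from Globalgest:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Row:
    tenant_id: str   # which customer owns the row (classic multi-tenant discriminator)
    target_id: str   # which hosted functionality ("target") the row belongs to
    payload: str

class MultiTargetStore:
    """Toy shared-table store. Every query is scoped by BOTH discriminator
    columns, so tenants sharing one application instance never see each
    other's data, and one tenant's different target applications stay
    isolated from each other as well."""

    def __init__(self):
        self._rows = []

    def insert(self, row: Row) -> None:
        self._rows.append(row)

    def query(self, tenant_id: str, target_id: str) -> list:
        return [r for r in self._rows
                if r.tenant_id == tenant_id and r.target_id == target_id]

store = MultiTargetStore()
store.insert(Row("acme", "crm", "lead #1"))
store.insert(Row("acme", "billing", "invoice #7"))
store.insert(Row("globex", "crm", "lead #9"))
```

In a real relational schema the same idea would appear as a composite filter on every table access, typically enforced centrally rather than repeated in each query.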

  9. The story of DB4GeO - A service-based geo-database architecture to support multi-dimensional data analysis and visualization

    NASA Astrophysics Data System (ADS)

    Breunig, Martin; Kuper, Paul V.; Butwilowski, Edgar; Thomsen, Andreas; Jahn, Markus; Dittrich, André; Al-Doori, Mulhim; Golovko, Darya; Menninghaus, Mathias

    2016-07-01

    Multi-dimensional data analysis and visualization need efficient data handling to archive original data, to reproduce results on large data sets, and to retrieve space and time partitions just in time. This article tells the story of more than twenty years research resulting in the development of DB4GeO, a web service-based geo-database architecture for geo-objects to support the data handling of 3D/4D geo-applications. Starting from the roots and lessons learned, the concepts and implementation of DB4GeO are described in detail. Furthermore, experiences and extensions to DB4GeO are presented. Finally, conclusions and an outlook on further research also considering 3D/4D geo-applications for DB4GeO in the context of Dubai 2020 are given.

  10. MIC-SVM: Designing A Highly Efficient Support Vector Machine For Advanced Modern Multi-Core and Many-Core Architectures

    SciTech Connect

    You, Yang; Song, Shuaiwen; Fu, Haohuan; Marquez, Andres; Mehri Dehanavi, Maryam; Barker, Kevin J.; Cameron, Kirk; Randles, Amanda; Yang, Guangwen

    2014-08-16

Support Vector Machine (SVM) has been widely used in data-mining and Big Data applications as modern commercial databases start to attach increasing importance to analytic capabilities. In recent years, SVM was adapted to the field of High Performance Computing for power/performance prediction, auto-tuning, and runtime scheduling. However, even at the risk of losing prediction accuracy due to insufficient runtime information, researchers can only afford to apply offline model training to avoid significant runtime training overhead. To address the challenges above, we designed and implemented MIC-SVM, a highly efficient parallel SVM for x86-based multi-core and many-core architectures, such as the Intel Ivy Bridge CPUs and the Intel Xeon Phi coprocessor (MIC).
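
Independently of the parallelisation work described above, the classifier itself can be sketched in a few lines. Below is a minimal linear SVM trained by stochastic subgradient descent on the hinge loss; it is a generic illustration of what an SVM computes, not the MIC-SVM implementation, and the toy data and hyperparameters are made up:

```python
import random

def train_linear_svm(xs, ys, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Primal linear SVM: minimise lam/2 * ||w||^2 plus the hinge loss
    max(0, 1 - y * (w.x + b)) by stochastic subgradient descent."""
    rng = random.Random(seed)
    dim = len(xs[0])
    w = [0.0] * dim
    b = 0.0
    order = list(range(len(xs)))
    for _ in range(epochs):
        rng.shuffle(order)
        for i in order:
            margin = ys[i] * (sum(wk * xk for wk, xk in zip(w, xs[i])) + b)
            for k in range(dim):
                # regulariser subgradient, plus hinge subgradient when violated
                g = lam * w[k] - (ys[i] * xs[i][k] if margin < 1 else 0.0)
                w[k] -= lr * g
            if margin < 1:
                b += lr * ys[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wk * xk for wk, xk in zip(w, x)) + b >= 0 else -1

# Tiny linearly separable toy set
xs = [(2.0, 2.0), (3.0, 1.0), (-2.0, -2.0), (-1.0, -3.0)]
ys = [1, 1, -1, -1]
w, b = train_linear_svm(xs, ys)
```

The parallelisation challenge the paper addresses lies in distributing exactly these per-sample updates and dot products across many cores without destroying cache locality.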

  11. An Approach for Hydrogen Recycling in a Closed-loop Life Support Architecture to Increase Oxygen Recovery Beyond State-of-the-Art

    NASA Technical Reports Server (NTRS)

    Abney, Morgan B.; Miller, Lee; Greenwood, Zachary; Alvarez, Giraldo

    2014-01-01

    State-of-the-art atmosphere revitalization life support technology on the International Space Station is theoretically capable of recovering 50% of the oxygen from metabolic carbon dioxide via the Carbon Dioxide Reduction Assembly (CRA). When coupled with a Plasma Pyrolysis Assembly (PPA), oxygen recovery increases dramatically, thus drastically reducing the logistical challenges associated with oxygen resupply. The PPA decomposes methane to predominantly form hydrogen and acetylene. Because of the unstable nature of acetylene, a down-stream separation system is required to remove acetylene from the hydrogen stream before it is recycled to the CRA. A new closed-loop architecture that includes a PPA and downstream Hydrogen Purification Assembly (HyPA) is proposed and discussed. Additionally, initial results of separation material testing are reported.
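
The oxygen-recovery argument can be made concrete with an idealised hydrogen balance. Assuming the CRA runs the Sabatier reaction (CO2 + 4H2 -> CH4 + 2H2O), the product water is electrolysed (2H2O -> 2H2 + O2), and the PPA pyrolyses methane (2CH4 -> C2H2 + 3H2), each at complete conversion (a simplification; the flight assemblies operate below this):

```python
# Idealised hydrogen balance per mole of CO2 reduced, assuming complete
# conversion in every reactor (real assemblies fall short of this).
H2_CONSUMED = 4.0       # Sabatier: CO2 + 4 H2 -> CH4 + 2 H2O
H2_FROM_WATER = 2.0     # electrolysis of the 2 H2O: 2 H2O -> 2 H2 + O2
H2_FROM_PPA = 1.5       # pyrolysis: 2 CH4 -> C2H2 + 3 H2, i.e. 1.5 H2 per CH4

# Fraction of the hydrogen requirement met by recycling; this closure
# bounds how much metabolic CO2 the loop can keep processing.
closure_cra_only = H2_FROM_WATER / H2_CONSUMED
closure_with_ppa = (H2_FROM_WATER + H2_FROM_PPA) / H2_CONSUMED

print(f"H2 closure, CRA alone:   {closure_cra_only:.1%}")   # 50.0%
print(f"H2 closure, CRA + PPA:   {closure_with_ppa:.1%}")   # 87.5%
```

Under these assumptions the remaining hydrogen leaves bound in acetylene, which is why the architecture adds a separation stage (the HyPA) to keep that acetylene out of the hydrogen stream recycled to the CRA.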

  12. Preparation of a self-supporting cell architecture mimic by water channel confined photocrosslinking within a lamellar structured hydrogel.

    SciTech Connect

    Grubjesic, S.; Lee, B.; Seifert, S.; Firestone, M. A.

    2011-01-01

A self-supporting biomimetic chemical hydrogel that can be reversibly swollen in water is described. An aqueous dispersion of a diacrylate end-derivatized PEO-PPO-PEO macromer, a saturated phospholipid, and a zwitterionic co-surfactant self-assembles into a multilamellar-structured physical gel at room temperature, as determined by SAXS. The addition of a water-soluble PEGDA co-monomer and photoinitiator within the water layers does not alter the self-assembled structure. ATR/FT-IR spectroscopy reveals that photoirradiation initiates crosslinking between the acrylate end groups on the macromer and the PEGDA, forming a polymeric network within the aqueous domains. The primitive cytoskeleton mimic serves to stabilize the amphiphile bilayer, converting the physical gel into an elastic self-supporting chemical gel. Storage under ambient conditions causes dehydration of the hydrogel to 5 wt % water, which can be reversed by swelling in water. The fully water-swollen gel (85 wt % water) remains self-supporting but converts to a non-lamellar structure. As water is lost, the chemical gel regains its lamellar structure. Incubation of the hydrogel in a nonpolar organic solvent that does not dissolve the uncrosslinked lipid component (hexane) allows swelling without loss of structural integrity. Chloroform, which readily solubilizes the lipid, causes irreversible loss of the lamellar structure.

  13. Novel Architecture for supporting medical decision making of different data types based on Fuzzy Cognitive Map Framework.

    PubMed

    Papageorgiou, Elpiniki; Stylios, Chrysostomos; Groumpos, Peter

    2007-01-01

Medical problems involve different types of variables and data, which have to be processed, analyzed and synthesized in order to reach a decision and/or conclude a diagnosis. Usually, information and data sets are both symbolic and numeric, but most well-known data analysis methods deal with only one kind of data. Even when fuzzy approaches are considered, which do not depend on the scales of variables, usually only numeric data are considered. Medical decision support methods are thus usually restricted to a single type of available data, and sophisticated methods, such as integrated hybrid learning approaches, have been proposed to process symbolic and numeric data for decision support tasks. Fuzzy Cognitive Maps (FCM) is an efficient modelling method which is based on human knowledge and experience, can handle uncertainty, and is constructed from knowledge extracted in the form of fuzzy rules. The FCM model can be enhanced if a fuzzy rule base (IF-THEN rules) is available. This rule base could be derived by a number of machine learning and knowledge extraction methods. Here we introduce a hybrid approach that uses soft computing techniques to handle situations where different types of medical and/or clinical data are available and difficult to handle for decision support tasks. PMID:18002176
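
The standard FCM inference scheme underlying such a framework updates each concept's activation from the weighted activations of its causally connected concepts and squashes the result with a threshold function. A minimal sketch follows; the concept names, weights, and initial activations are hypothetical, not taken from the paper:

```python
import math

def fcm_step(activations, weights):
    """One Fuzzy Cognitive Map update: A_i(t+1) = f(A_i(t) + sum_j w_ji * A_j(t)),
    with f a sigmoid squashing values into (0, 1)."""
    n = len(activations)
    new = []
    for i in range(n):
        total = activations[i] + sum(
            weights[j][i] * activations[j] for j in range(n) if j != i
        )
        new.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid threshold
    return new

def fcm_run(activations, weights, steps=50, tol=1e-5):
    """Iterate the map until it settles or the step budget is exhausted."""
    for _ in range(steps):
        nxt = fcm_step(activations, weights)
        if max(abs(a - b) for a, b in zip(nxt, activations)) < tol:
            return nxt
        activations = nxt
    return activations

# Hypothetical 3-concept map: symptom -> disease (+0.7), test result -> disease (+0.6)
W = [[0.0, 0.0, 0.7],
     [0.0, 0.0, 0.6],
     [0.0, 0.0, 0.0]]
state = fcm_run([0.8, 0.6, 0.0], W)
```

The weight matrix is exactly where a fuzzy rule base (IF-THEN rules) would be encoded, which is why the paper's hybrid knowledge-extraction step feeds directly into the FCM model.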

  14. The Development of a Remote Sensor System and Decision Support Systems Architecture to Monitor Resistance Development in Transgenic Crops

    NASA Technical Reports Server (NTRS)

    Cacas, Joseph; Glaser, John; Copenhaver, Kenneth; May, George; Stephens, Karen

    2008-01-01

The United States Environmental Protection Agency (EPA) has declared that "significant benefits accrue to growers, the public, and the environment" from the use of transgenic pesticidal crops due to reductions in pesticide usage for crop pest management. Large increases in the global use of transgenic pesticidal crops have reduced the amounts of broad-spectrum pesticides used to manage pest populations, improved yield, and reduced the environmental impact of crop management. A significant threat to the continued use of this technology is the evolution of resistance in insect pest populations to the insecticidal Bt toxins expressed by the plants. Management of transgenic pesticidal crops with an emphasis on conservation of Bt toxicity in field populations of insect pests is important to the future of sustainable agriculture. A vital component of this transgenic pesticidal crop management is establishing the proof-of-concept basic understanding, situational awareness, and monitoring and decision support system tools for more than 133,650 square kilometers (33 million acres) of bio-engineered corn and cotton for development of insect resistance. Early and recent joint NASA, US EPA and ITD remote imagery flights and ground-based field experiments have provided very promising research results that will potentially address future requirements for crop management capabilities.

  15. Architecture & Environment

    ERIC Educational Resources Information Center

    Erickson, Mary; Delahunt, Michael

    2010-01-01

    Most art teachers would agree that architecture is an important form of visual art, but they do not always include it in their curriculums. In this article, the authors share core ideas from "Architecture and Environment," a teaching resource that they developed out of a long-term interest in teaching architecture and their fascination with the…

  16. Project Integration Architecture: Application Architecture

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    The Project Integration Architecture (PIA) implements a flexible, object-oriented, wrapping architecture which encapsulates all of the information associated with engineering applications. The architecture allows the progress of a project to be tracked and documented in its entirety. Additionally, by bringing all of the information sources and sinks of a project into a single architectural space, the ability to transport information between those applications is enabled.

  17. Selected reprints on dataflow and reduction architectures

    SciTech Connect

    Thakkar, S.S.

    1987-01-01

This reprint collection looks at alternatives to the von Neumann architecture: dataflow and reduction architectures. It is organized into eight chapters that cover: different dataflow systems; dataflow solutions to multiprocessing; dataflow languages and dataflow graphs; functional programming languages and their implementation; uniprocessor architectures that provide support for reduction; parallel graph reduction machines; and hybrid multiprocessor architectures.

  18. Generic Distributed Simulation Architecture

    SciTech Connect

    Booker, C.P.

    1999-05-14

    A Generic Distributed Simulation Architecture is described that allows a simulation to be automatically distributed over a heterogeneous network of computers and executed with very little human direction. A prototype Framework is presented that implements the elements of the Architecture and demonstrates the feasibility of the concepts. It provides a basis for a future, improved Framework that will support legacy models. Because the Framework is implemented in Java, it may be installed on almost any modern computer system.

  19. Green Architecture

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Ho

Today, the environment has become a main subject in many science disciplines and in industrial development due to global warming. This paper presents an analysis of the tendency of Green Architecture in France along three axes: Regulations and Approach for Sustainable Architecture (Certificate and Standard), Renewable Materials (Green Materials) and Strategies (Equipment) of Sustainable Technology. The definition of 'Green Architecture' will be cited in the introduction and the question of interdisciplinarity in technological development in 'Green Architecture' will be raised in the conclusion.

  20. Architecture Adaptive Computing Environment

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    2006-01-01

    Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on LINUX clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple- instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.

  1. Modular avionic architectures

    NASA Astrophysics Data System (ADS)

    Trujillo, Edward

The author presents an analysis revealing some of the salient features of modular avionics. A decomposition of the modular avionics concept is performed, highlighting some of the key features of such architectures. Several layers of architecture can be found in such concepts, including those relating to software structure, communication, and supportability. Particular emphasis is placed on the layer relating to partitioning, which gives rise to the features of integration, modularity, and commonality: integration is the sharing of common tasks or items to gain efficiency and flexibility; modularity is the partitioning of a system into reconfigurable and maintainable items; and commonality is partitioning to maximize the use of identical items across the range of applications. Two architectures, MASA (Modular Avionics System Architecture) and Pave Pillar, are considered in particular.

  2. Project Integration Architecture: Architectural Overview

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2001-01-01

The Project Integration Architecture (PIA) implements a flexible, object-oriented, wrapping architecture which encapsulates all of the information associated with engineering applications. The architecture allows the progress of a project to be tracked and documented in its entirety. By being a single, self-revealing architecture, the ability to develop single tools, for example a single graphical user interface, to span all applications is enabled. Additionally, by bringing all of the information sources and sinks of a project into a single architectural space, the ability to transport information between those applications becomes possible. Object encapsulation further allows information to become, in a sense, self-aware, knowing things such as its own dimensionality and providing functionality appropriate to its kind.

  3. The flight telerobotic servicer: From functional architecture to computer architecture

    NASA Technical Reports Server (NTRS)

    Lumia, Ronald; Fiala, John

    1989-01-01

    After a brief tutorial on the NASA/National Bureau of Standards Standard Reference Model for Telerobot Control System Architecture (NASREM) functional architecture, the approach to its implementation is shown. First, interfaces must be defined which are capable of supporting the known algorithms. This is illustrated by considering the interfaces required for the SERVO level of the NASREM functional architecture. After interface definition, the specific computer architecture for the implementation must be determined. This choice is obviously technology dependent. An example illustrating one possible mapping of the NASREM functional architecture to a particular set of computers which implements it is shown. The result of choosing the NASREM functional architecture is that it provides a technology independent paradigm which can be mapped into a technology dependent implementation capable of evolving with technology in the laboratory and in space.

  4. Lunar architecture and urbanism

    NASA Technical Reports Server (NTRS)

    Sherwood, Brent

    1992-01-01

    Human civilization and architecture have defined each other for over 5000 years on Earth. Even in the novel environment of space, persistent issues of human urbanism will eclipse, within a historically short time, the technical challenges of space settlement that dominate our current view. By adding modern topics in space engineering, planetology, life support, human factors, material invention, and conservation to their already renaissance array of expertise, urban designers can responsibly apply ancient, proven standards to the exciting new opportunities afforded by space. Inescapable facts about the Moon set real boundaries within which tenable lunar urbanism and its component architecture must eventually develop.

5. QUEST2: System architecture deliverable set

    SciTech Connect

    Braaten, F.D.

    1995-02-27

This document contains the system architecture and related documents which were developed during the Preliminary Analysis/System Architecture phase of the Quality, Environmental, Safety Tracking System redesign (QUEST2) project. Each discrete document in this deliverable set applies to an analytic effort supporting the architectural model of QUEST2. The P+ methodology cites a list of P+ documents normally included in a "typical" system architecture. Some of these were deferred to the release development phase of the project. The documents included in this deliverable set represent the system architecture itself. Related to that architecture are some decision support documents which provided needed information for management reviews that occurred during April. Consequently, the deliverables in this set were logically grouped and provided to support customer requirements. The remaining System Architecture Phase deliverables will be provided as a "Supporting Documents" deliverable set for the first release.

  6. Avionics System Architecture Tool

    NASA Technical Reports Server (NTRS)

Chau, Savio; Hall, Ronald; Traylor, Marcus; Whitfield, Adrian

    2005-01-01

Avionics System Architecture Tool (ASAT) is a computer program intended for use during the avionics-system-architecture- design phase of the process of designing a spacecraft for a specific mission. ASAT enables simulation of the dynamics of the command-and-data-handling functions of the spacecraft avionics in the scenarios in which the spacecraft is expected to operate. ASAT is built upon I-Logix Statemate MAGNUM, providing a complement of dynamic system modeling tools, including a graphical user interface (GUI), model-checking capabilities, and a simulation engine. ASAT augments this with a library of predefined avionics components and additional software to support building and analyzing avionics hardware architectures using these components.

  7. Software Architecture Design Reasoning

    NASA Astrophysics Data System (ADS)

    Tang, Antony; van Vliet, Hans

Despite recent advancements in software architecture knowledge management and design rationale modeling, industrial practice is behind in adopting these methods. The lack of empirical proof and the lack of a practical process that can be easily incorporated by practitioners are some of the hindrances to adoption. In particular, a process to support systematic design reasoning is not available. To rectify this issue, we propose a design reasoning process to help architects cope with an architectural design environment where design concerns are cross-cutting and diversified. We use an industrial case study to validate that the design reasoning process can help improve the quality of software architecture design. The results indicate that associating design concerns and identifying design options are important steps in design reasoning.

  8. System architectures for telerobotic research

    NASA Technical Reports Server (NTRS)

    Harrison, F. Wallace

    1989-01-01

Several activities are performed related to the definition and creation of telerobotic systems. The effort and investment required to create architectures for these complex systems can be enormous; however, the magnitude of the process can be reduced if structured design techniques are applied. A number of informal methodologies supporting certain aspects of the design process are available. More recently, prototypes of integrated tools supporting all phases of system design from requirements analysis to code generation and hardware layout have begun to appear. Activities related to the system architecture of telerobots are described, including current activities which are designed to provide a methodology for the comparison and quantitative analysis of alternative system architectures.

  9. Architectural Illusion.

    ERIC Educational Resources Information Center

    Doornek, Richard R.

    1990-01-01

    Presents a lesson plan developed around the work of architectural muralist Richard Haas. Discusses the significance of mural painting and gives key concepts for the lesson. Lists class activities for the elementary and secondary grades. Provides a photograph of the Haas mural on the Fountainbleau Hilton Hotel, 1986. (GG)

  10. Architectural Treasures.

    ERIC Educational Resources Information Center

    Pietropola, Anne

    1998-01-01

Presents an art lesson for eighth-grade students in which they created their own architectural structures. Stresses a strong discipline-based introduction using slide shows of famous buildings, large metropolitan cities, and 35,000 years of homes. Reports the lesson spanned two weeks. Includes a diagram, directions, and specifies materials. (CMK)

  11. Architectural Drafting.

    ERIC Educational Resources Information Center

    Davis, Ronald; Yancey, Bruce

    Designed to be used as a supplement to a two-book course in basic drafting, these instructional materials consisting of 14 units cover the process of drawing all working drawings necessary for residential buildings. The following topics are covered in the individual units: introduction to architectural drafting, lettering and tools, site…

  12. Architectural Tops

    ERIC Educational Resources Information Center

    Mahoney, Ellen

    2010-01-01

    The development of the skyscraper is an American story that combines architectural history, economic power, and technological achievement. Each city in the United States can be identified by the profile of its buildings. The design of the tops of skyscrapers was the inspiration for the students in the author's high-school ceramic class to develop…

  13. Guiding Architects in Selecting Architectural Evolution Alternatives

    SciTech Connect

    Ciraci, Selim; Sozer, Hasan; Aksit, Mehmet

    2011-09-09

Although there exist methods and tools to support architecture evolution, the derivation and evaluation of alternative evolution paths are realized manually. In this paper, we introduce an approach where the architecture specification is converted to a graph representation. Based on this representation, we automatically generate possible evolution paths, evaluate quality attributes for different architecture configurations, and optimize the selection of a particular path accordingly. We illustrate our approach by modeling the software architecture evolution of a crisis management system.
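
The idea of generating and scoring evolution paths over a graph of architecture configurations can be sketched as follows. The configurations, edges, and per-step costs below are hypothetical stand-ins for the paper's quality-attribute evaluation:

```python
def evolution_paths(graph, start, goal, path=None):
    """Enumerate all acyclic evolution paths from the current architecture
    configuration to the target configuration (depth-first search)."""
    path = (path or []) + [start]
    if start == goal:
        yield path
        return
    for nxt in graph.get(start, []):
        if nxt not in path:  # never revisit a configuration
            yield from evolution_paths(graph, nxt, goal, path)

def path_cost(path, step_cost):
    """Score a path by summing a per-step penalty standing in for the
    quality-attribute evaluation (e.g. rework effort, downtime)."""
    return sum(step_cost[(a, b)] for a, b in zip(path, path[1:]))

# Hypothetical configuration graph: v1 evolves to v3 via two candidate designs
graph = {"v1": ["v2a", "v2b"], "v2a": ["v3"], "v2b": ["v3"]}
cost = {("v1", "v2a"): 5, ("v1", "v2b"): 2, ("v2a", "v3"): 1, ("v2b", "v3"): 6}
best = min(evolution_paths(graph, "v1", "v3"), key=lambda p: path_cost(p, cost))
```

Exhaustive enumeration is only feasible for small configuration graphs; a real tool would prune or search heuristically, but the optimisation step is the same selection shown here.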

  14. Embedded instrumentation systems architecture

    NASA Astrophysics Data System (ADS)

    Visnevski, Nikita A.

    2007-04-01

    This paper describes the operational concept of the Embedded Instrumentation Systems Architecture (EISA) that is being developed for Test and Evaluation (T&E) applications. The architecture addresses such future T&E requirements as interoperability, flexibility, and non-intrusiveness. These are the ultimate requirements that support continuous T&E objectives. In this paper, we demonstrate that these objectives can be met by decoupling the Embedded Instrumentation (EI) system into an on-board and an off-board component. An on-board component is responsible for sampling, pre-processing, buffering, and transmitting data to the off-board component. The latter is responsible for aggregating, post-processing, and storing test data as well as providing access to the data via a clearly defined interface including such aspects as security, user authentication and access control. The power of the EISA architecture approach is in its inherent ability to support virtual instrumentation as well as enabling interoperability with such important T&E systems as Integrated Network-Enhanced Telemetry (iNET), Test and Training Enabling Architecture (TENA) and other relevant Department of Defense initiatives.

  15. Information systems definition architecture

    SciTech Connect

    Calapristi, A.J.

    1996-06-20

The Tank Waste Remediation System (TWRS) Information Systems Definition architecture evaluated information management (IM) processes in several key organizations. The intent of the study is to identify improvements in TWRS IM processes that will enable better support to the TWRS mission and accommodate changes in the TWRS business environment. The ultimate goals of the study are to reduce IM costs, manage the configuration of TWRS IM elements, and improve IM-related process performance.

  16. Avionics Architecture Modelling Language

    NASA Astrophysics Data System (ADS)

    Alana, Elena; Naranjo, Hector; Valencia, Raul; Medina, Alberto; Honvault, Christophe; Rugina, Ana; Panunzia, Marco; Dellandrea, Brice; Garcia, Gerald

    2014-08-01

    This paper presents the ESA AAML (Avionics Architecture Modelling Language) study, which aimed at advancing the avionics engineering practices towards a model-based approach by (i) identifying and prioritising the avionics-relevant analyses, (ii) specifying the modelling language features necessary to support the identified analyses, and (iii) recommending/prototyping software tooling to demonstrate the automation of the selected analyses based on a modelling language and compliant with the defined specification.

  17. Space Telecommunications Radio Architecture (STRS)

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    2006-01-01

A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture, and other aspects is considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass, power consumption, and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, industry, academia, and standards bodies are key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. The talk will describe NASA's current effort to investigate SDR applications to space missions and give a brief overview of a candidate architecture under consideration for space-based platforms.

  18. The NASA Space Communications Data Networking Architecture

    NASA Technical Reports Server (NTRS)

    Israel, David J.; Hooke, Adrian J.; Freeman, Kenneth; Rush, John J.

    2006-01-01

    The NASA Space Communications Architecture Working Group (SCAWG) has recently been developing an integrated agency-wide space communications architecture in order to provide the necessary communication and navigation capabilities to support NASA's new Exploration and Science Programs. A critical element of the space communications architecture is the end-to-end Data Networking Architecture, which must provide a wide range of services required for missions ranging from planetary rovers to human spaceflight, and from sub-orbital space to deep space. Requirements for a higher degree of user autonomy and interoperability between a variety of elements must be accommodated within an architecture that necessarily features minimum operational complexity. The architecture must also be scalable and evolvable to meet mission needs for the next 25 years. This paper will describe the recommended NASA Data Networking Architecture, present some of the rationale for the recommendations, and will illustrate an application of the architecture to example NASA missions.

  19. Software Architecture Review: The State of Practice

    SciTech Connect

    Babar, Muhammad A.; Gorton, Ian

    2009-07-01

    This paper presents the results of a survey we carried out to investigate the state of practice of software architecture reviews. Of the survey results we describe, two are particularly significant for the software architecture research community. First, the survey respondents evaluate architectures mostly using informal, experience-based approaches. Second, the survey respondents rarely adopt the techniques that are highly recommended in architecture review research, such as the use of project-independent reviewers. We conclude that the software engineering practitioner community has yet to become fully aware of the methods and techniques available to support disciplined architecture review processes and their potential benefits. The architecture review research community needs to concentrate on helping practitioners by providing guidelines for justifying and institutionalizing the architecture review processes, and associated tools support.

  20. Space station needs, attributes, and architectural options study. Volume 2: Program options, architecture, and technology

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Mission scenarios and space station architectures are discussed. Electrical power subsystem (EPS), environmental control and life support subsystem (ECLSS), and reaction control subsystem (RCS) architectures are addressed. Thermal control subsystem (TCS), guidance/navigation and control (GN and C), information management system (IMS), communications and tracking (C and T), and propellant transfer and storage system architectures are discussed.

  1. Design, Implementation and Evaluation of an Architecture based on the CDA R2 Document Repository to Provide Support to the Contingency Plan.

    PubMed

    Campos, Fernando; Luna, Daniel; Sittig, Dean F; Bernaldo de Quirós, Fernán González

    2015-01-01

    The pervasive use of electronic records in healthcare increases the dependency on technology due to the lack of physical backup for the records. Downtime in the Electronic Health Record system is unavoidable, due to software, infrastructure and power failures as well as natural disasters, so there is a need to develop a contingency plan ensuring patient care continuity and minimizing risks for health care delivery. To mitigate these risks, two applications were developed allowing healthcare delivery providers to retrieve clinical information using the Clinical Document Architecture Release 2 (CDA R2) document repository as the information source. In this paper we describe the strategy, implementation and results; and provide an evaluation of effectiveness. PMID:26262033

  2. Open architecture design and approach for the Integrated Sensor Architecture (ISA)

    NASA Astrophysics Data System (ADS)

    Moulton, Christine L.; Krzywicki, Alan T.; Hepp, Jared J.; Harrell, John; Kogut, Michael

    2015-05-01

    Integrated Sensor Architecture (ISA) is designed in response to stovepiped integration approaches. The design, based on the principles of Service Oriented Architectures (SOA) and Open Architectures, addresses the problem of integration, and is not designed for specific sensors or systems. The use of SOA and Open Architecture approaches has led to a flexible, extensible architecture. Using these approaches, and supported with common data formats, open protocol specifications, and Department of Defense Architecture Framework (DoDAF) system architecture documents, an integration-focused architecture has been developed. ISA can help move the Department of Defense (DoD) from costly stovepipe solutions to a more cost-effective plug-and-play design to support interoperability.

  3. The importance of architectures for interoperability.

    PubMed

    Blobel, Bernd; Oemig, Frank

    2015-01-01

    The paradigm changes that health systems face result in highly complex and distributed systems requiring flexibility, autonomy, and, above all, advanced interoperability. In that context, understanding the architecture of the system to be supported, as well as the process to meet the intended business objectives, is crucial. Unfortunately, there is a lot of confusion around the term architecture, which does not facilitate the integration of systems. Using a reference architectural model and framework, relevant existing architectural approaches are analyzed, compared, critically discussed, and harmonized. PMID:25980847

  4. Commanding Constellations (Pipeline Architecture)

    NASA Technical Reports Server (NTRS)

    Ray, Tim; Condron, Jeff

    2003-01-01

    Providing ground command software for constellations of spacecraft is a challenging problem. Reliable command delivery requires a feedback loop; for a constellation there will likely be an independent feedback loop for each constellation member. Each command must be sent via the proper Ground Station, which may change from one contact to the next (and may be different for different members). Dynamic configuration of the ground command software is usually required (e.g. directives to configure each member's feedback loop and assign the appropriate Ground Station). For testing purposes, there must be a way to insert command data at any level in the protocol stack. The Pipeline architecture described in this paper can support all these capabilities with a sequence of software modules (the pipeline), and a single self-identifying message format (for all types of command data and configuration directives). The Pipeline architecture is quite simple, yet it can solve some complex problems. The resulting solutions are conceptually simple, and therefore, reliable. They are also modular, and therefore, easy to distribute and extend. We first used the Pipeline architecture to design a CCSDS (Consultative Committee for Space Data Systems) Ground Telecommand system (to command one spacecraft at a time with a fixed Ground Station interface). This pipeline was later extended to include gateways to any of several Ground Stations. The resulting pipeline was then extended to handle a small constellation of spacecraft. The use of the Pipeline architecture allowed us to easily handle the increasing complexity. This paper will describe the Pipeline architecture, show how it was used to solve each of the above commanding situations, and how it can easily be extended to handle larger constellations.
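    The core of the abstract above is a sequence of software modules threaded together, all exchanging one self-identifying message format so that command data and configuration directives flow through the same pipe. A minimal sketch of that idea follows; all class names, the `kind` tags, and the ground-station directive are our own illustrations, not details from the paper.

    ```python
    from dataclasses import dataclass, field

    # A self-identifying message: a 'kind' tag lets every pipeline stage decide
    # whether to act on it (command data) or reconfigure itself (a directive).
    @dataclass
    class Message:
        kind: str                 # e.g. "command" or "configure" (illustrative)
        payload: dict = field(default_factory=dict)

    class Stage:
        """Base pipeline stage: passes messages through unless overridden."""
        def __init__(self):
            self.next_stage = None

        def send(self, msg):
            if self.next_stage:
                return self.next_stage.send(msg)
            return msg

    class GatewaySelector(Stage):
        """Routes commands via the ground station named in its configuration."""
        def __init__(self):
            super().__init__()
            self.station = "default"

        def send(self, msg):
            if msg.kind == "configure":
                # A directive reconfigures this stage instead of being forwarded.
                self.station = msg.payload.get("station", self.station)
                return msg
            if msg.kind == "command":
                msg.payload["via"] = self.station
            return super().send(msg)

    def build_pipeline(stages):
        # Chain the stages; the first stage is the pipeline's entry point.
        for upstream, downstream in zip(stages, stages[1:]):
            upstream.next_stage = downstream
        return stages[0]

    pipeline = build_pipeline([GatewaySelector(), Stage()])
    pipeline.send(Message("configure", {"station": "Wallops"}))
    out = pipeline.send(Message("command", {"frame": b"\x01\x02"}))
    print(out.payload["via"])  # Wallops
    ```

    Because every stage speaks the same message format, extending the pipeline (e.g. one gateway per constellation member) is just inserting more stages, which mirrors the paper's point about modularity and easy extension.
    
    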

  5. Lab architecture

    NASA Astrophysics Data System (ADS)

    Crease, Robert P.

    2008-04-01

    There are few more dramatic illustrations of the vicissitudes of laboratory architecture than the contrast between Building 20 at the Massachusetts Institute of Technology (MIT) and its replacement, the Ray and Maria Stata Center. Building 20 was built hurriedly in 1943 as temporary housing for MIT's famous Rad Lab, the site of wartime radar research, and it remained a productive laboratory space for over half a century. A decade ago it was demolished to make way for the Stata Center, an architecturally striking building designed by Frank Gehry to house MIT's computer science and artificial intelligence labs (above). But in 2004 - just two years after the Stata Center officially opened - the building was criticized for being unsuitable for research and became the subject of still ongoing lawsuits alleging design and construction failures.

  6. Executable Architecture Research at Old Dominion University

    NASA Technical Reports Server (NTRS)

    Tolk, Andreas; Shuman, Edwin A.; Garcia, Johnny J.

    2011-01-01

    Executable Architectures allow the evaluation of system architectures not only regarding their static but also their dynamic behavior. However, the systems engineering community has not yet agreed on a common formal specification of executable architectures. Closing this gap and identifying the necessary elements of an executable architecture, a modeling language, and a modeling formalism is the topic of ongoing PhD research. In addition, systems are generally defined and applied in an operational context to provide capabilities and enable missions. To maximize the benefits of executable architectures, a second PhD effort introduces the idea of creating an executable context in addition to the executable architecture. The results move the validation of architectures from the current information domain into the knowledge domain and improve the reliability of such validation efforts. The paper presents the research and results of both doctoral efforts and puts them into a common context of state-of-the-art systems engineering methods supporting greater agility.

  7. Architectures for statically scheduled dataflow

    SciTech Connect

    Lee, E.A.; Bier, J.C.

    1990-12-01

    When dataflow program graphs can be statically scheduled, little run-time overhead (software or hardware) is necessary. This paper describes a class of parallel architectures consisting of von Neumann processors and one or more shared memories, where the order of shared-memory access is determined at compile time and enforced at run time. The architecture is extremely lean in hardware, yet for a set of important applications it can perform as well as any shared-memory architecture. Dataflow graphs can be mapped onto it statically. Furthermore, it supports shared data structures without the run-time overhead of I-structures. A software environment has been constructed that automatically maps signal processing applications onto a simulation of such an architecture, where the architecture is implemented using Motorola DSP96002 microcomputers. Static (compile-time) scheduling is possible for a subclass of dataflow program graphs where the firing pattern of actors is data independent. This model is suitable for digital signal processing and some other scientific computation. It supports recurrences, manifest iteration, and conditional assignment. However, it does not support true recursion, data-dependent iteration, or conditional evaluation. An effort is under way to weaken the constraints of the model to determine the implications for hardware design.
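    The key move in the abstract above is that when actor firings are data independent, the entire firing order can be computed once at compile time, leaving run time to replay a fixed list with no scheduler. A minimal sketch, assuming homogeneous (single-rate) actors so that one topological sort yields a valid schedule; the actor graph and the dict-as-shared-memory convention are our own illustration, not the paper's Gabriel-style environment.

    ```python
    from collections import deque

    def static_schedule(actors, edges):
        """Compile-time step: topologically sort a data-independent dataflow graph.

        actors: dict mapping actor name -> firing function
        edges:  list of (producer, consumer) pairs
        """
        indegree = {a: 0 for a in actors}
        successors = {a: [] for a in actors}
        for src, dst in edges:
            successors[src].append(dst)
            indegree[dst] += 1
        ready = deque(a for a, n in indegree.items() if n == 0)
        order = []
        while ready:
            actor = ready.popleft()
            order.append(actor)
            for nxt in successors[actor]:
                indegree[nxt] -= 1
                if indegree[nxt] == 0:
                    ready.append(nxt)
        assert len(order) == len(actors), "graph must be acyclic"
        return order

    def run(order, actors):
        """Run-time step: fire actors in the precomputed order; the shared
        'memory' is a plain dict, and its access order is fixed by the schedule."""
        mem = {}
        for actor in order:
            actors[actor](mem)
        return mem

    # Toy three-actor graph: source -> gain -> sink.
    actors = {
        "src":  lambda m: m.__setitem__("x", 3),
        "gain": lambda m: m.__setitem__("y", 2 * m["x"]),
        "sink": lambda m: m.__setitem__("out", m["y"] + 1),
    }
    order = static_schedule(actors, [("src", "gain"), ("gain", "sink")])
    result = run(order, actors)["out"]
    print(order, result)  # ['src', 'gain', 'sink'] 7
    ```

    The run-time loop contains no arbitration or synchronization logic at all, which is exactly why such an architecture can be "extremely lean in hardware": the schedule, not the hardware, guarantees that each shared-memory read happens after its producing write.
    
    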

  8. REPETE2: A next generation home telemedicine architecture.

    PubMed

    Lai, Albert M; Nieh, Jason; Starren, Justin B

    2007-01-01

    As the availability of home broadband increases, there is an increasing need for a broadband-based home telemedicine architecture. A home telemedicine architecture supporting broadband and remote training is presented. PMID:18694118

  9. The influence of nano-architectured CeOx supports in RhPd/CeO₂ for the catalytic ethanol steam reforming reaction

    SciTech Connect

    Divins, N. J.; Senanayake, S. D.; Casanovas, A.; Xu, W.; Trovarelli, A.; Llorca, J.

    2015-01-19

    The ethanol steam reforming (ESR) reaction has been tested over RhPd supported on polycrystalline ceria in comparison to structured supports composed of nanoshaped CeO₂ cubes and CeO₂ rods tailored towards the production of hydrogen. At 650-700 K the hydrogen yield follows the trend RhPd/CeO₂-cubes > RhPd/CeO₂-rods > RhPd/CeO₂-polycrystalline, whereas at temperatures higher than 800 K the catalytic performance of all samples is similar and close to the thermodynamic equilibrium. The improved performance of RhPd/CeO₂-cubes and RhPd/CeO₂-rods for ESR at low temperature is mainly ascribed to higher water-gas shift activity and a strong interaction between the bimetallic particles and the oxide support. STEM analysis shows the existence of RhPd alloyed nanoparticles in all samples, with no apparent relationship between ESR performance and RhPd particle size. X-ray diffraction under operating conditions shows metal reorganization on {100} and {110} ceria crystallographic planes during catalyst activation and ESR, but not on {111} ceria crystallographic planes. This RhPd reconstruction and tuned activation over ceria nanocubes and nanorods is considered the main reason for the better catalytic activity with respect to conventional catalysts based on polycrystalline ceria.

  10. The influence of nano-architectured CeOx supports in RhPd/CeO₂ for the catalytic ethanol steam reforming reaction

    DOE PAGESBeta

    Divins, N. J.; Senanayake, S. D.; Casanovas, A.; Xu, W.; Trovarelli, A.; Llorca, J.

    2015-01-19

    The ethanol steam reforming (ESR) reaction has been tested over RhPd supported on polycrystalline ceria in comparison to structured supports composed of nanoshaped CeO₂ cubes and CeO₂ rods tailored towards the production of hydrogen. At 650-700 K the hydrogen yield follows the trend RhPd/CeO₂-cubes > RhPd/CeO₂-rods > RhPd/CeO₂-polycrystalline, whereas at temperatures higher than 800 K the catalytic performance of all samples is similar and close to the thermodynamic equilibrium. The improved performance of RhPd/CeO₂-cubes and RhPd/CeO₂-rods for ESR at low temperature is mainly ascribed to higher water-gas shift activity and a strong interaction between the bimetallic particles and the oxide support. STEM analysis shows the existence of RhPd alloyed nanoparticles in all samples, with no apparent relationship between ESR performance and RhPd particle size. X-ray diffraction under operating conditions shows metal reorganization on {100} and {110} ceria crystallographic planes during catalyst activation and ESR, but not on {111} ceria crystallographic planes. This RhPd reconstruction and tuned activation over ceria nanocubes and nanorods is considered the main reason for the better catalytic activity with respect to conventional catalysts based on polycrystalline ceria.

  11. Architectural Methodology Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    The establishment of conventions between two communicating entities in the end systems is essential for communications. Examples of the kind of decisions that need to be made in establishing a protocol convention include the nature of the data representation, the format and the speed of the data representation over the communications path, and the sequence of control messages (if any) which are sent. One of the main functions of a protocol is to establish a standard path between the communicating entities. This is necessary to create a virtual communications medium with certain desirable characteristics. In essence, it is the function of the protocol to transform the characteristics of the physical communications environment into a more useful virtual communications model. The final function of a protocol is to establish standard data elements for communications over the path; that is, the protocol serves to create a virtual data element for exchange. Other systems may be constructed in which the transferred element is a program or a job. Finally, there are special purpose applications in which the element to be transferred may be a complex structure such as all or part of a graphic display. NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to describe the methodologies used in developing a protocol architecture for an in-space Internet node. The node would support NASA's four mission areas: Earth Science; Space Science; Human Exploration and Development of Space (HEDS); Aerospace Technology. This report presents the methodology for developing the protocol architecture. The methodology addresses the architecture for a computer communications environment. It does not address an analog voice architecture.
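    The "virtual communications medium" idea above can be made concrete with the smallest possible protocol convention: a length-prefixed framing rule that turns a raw byte stream (the physical medium) into an exchange of whole data elements (the virtual one). The format below is our own toy example, not any specific NASA or CCSDS protocol.

    ```python
    import struct

    def encode(messages):
        """Serialize a list of byte strings into one length-prefixed stream.
        Each element is preceded by its length as a 4-byte big-endian integer."""
        return b"".join(struct.pack(">I", len(m)) + m for m in messages)

    def decode(stream):
        """Recover the original data elements from the framed stream."""
        out, i = [], 0
        while i < len(stream):
            (n,) = struct.unpack_from(">I", stream, i)
            out.append(stream[i + 4 : i + 4 + n])
            i += 4 + n
        return out

    # The convention makes message boundaries explicit, so the receiver sees
    # discrete data elements rather than an undifferentiated byte stream.
    msgs = [b"hello", b"", b"telemetry-frame"]
    assert decode(encode(msgs)) == msgs
    ```

    Both ends agreeing on this one convention is what creates the virtual data element: any byte stream transport underneath can carry it unchanged.
    
    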

  12. Marshall Application Realignment System (MARS) Architecture

    NASA Technical Reports Server (NTRS)

    Belshe, Andrea; Sutton, Mandy

    2010-01-01

    The Marshall Application Realignment System (MARS) Architecture project was established to meet the certification requirements of the Department of Defense Architecture Framework (DoDAF) V2.0 Federal Enterprise Architecture Certification (FEAC) Institute program and to provide added value to the Marshall Space Flight Center (MSFC) Application Portfolio Management process. The MARS Architecture aims to: (1) address the NASA MSFC Chief Information Officer (CIO) strategic initiative to improve Application Portfolio Management (APM) by optimizing investments and improving portfolio performance, and (2) develop a decision-aiding capability by which applications registered within the MSFC application portfolio can be analyzed and considered for retirement or decommission. The MARS Architecture describes a to-be target capability that supports application portfolio analysis against scoring measures (based on value) and overall portfolio performance objectives (based on enterprise needs and policies). This scoring and decision-aiding capability supports the process by which MSFC application investments are realigned or retired from the application portfolio. The MARS Architecture is a multi-phase effort to: (1) conduct strategic architecture planning and knowledge development based on the DoDAF V2.0 six-step methodology, (2) describe one architecture through multiple viewpoints, (3) conduct portfolio analyses based on a defined operational concept, and (4) enable a new capability to support the MSFC enterprise IT management mission, vision, and goals. This report documents Phase 1 (Strategy and Design), which includes discovery, planning, and development of initial architecture viewpoints. Phase 2 will move forward the process of building the architecture, widening the scope to include application realignment (in addition to application retirement), and validating the underlying architecture logic before moving into Phase 3. 
The MARS Architecture key stakeholders are most

  13. MSAT network architecture

    NASA Technical Reports Server (NTRS)

    Davies, N. G.; Skerry, B.

    1990-01-01

    The Mobile Satellite (MSAT) communications system will support mobile voice and data services using circuit switched and packet switched facilities with interconnection to the public switched telephone network and private networks. Control of the satellite network will reside in a Network Control System (NCS) which is being designed to be extremely flexible to provide for the operation of the system initially with one multi-beam satellite, but with capability to add additional satellites which may have other beam configurations. The architecture of the NCS is described. The signalling system must be capable of supporting the protocols for the assignment of circuits for mobile public telephone and private network calls as well as identifying packet data networks. The structure of a straw-man signalling system is discussed.

  14. Space Telecommunications Radio System (STRS) Architecture, Tutorial Part 2 - Detailed

    NASA Technical Reports Server (NTRS)

    Handler, Louis

    2014-01-01

    The STRS architecture detail presentation presents each requirement in the STRS Architecture Standard with some examples and supporting information. The purpose is to give a platform provider, application provider, or application integrator a better, more detailed understanding of the STRS Architecture Standard and its use.

  15. Lunar Navigation Architecture Design Considerations

    NASA Technical Reports Server (NTRS)

    D'Souza, Christopher; Getchius, Joel; Holt, Greg; Moreau, Michael

    2009-01-01

    The NASA Constellation Program is aiming to establish a long-term presence on the lunar surface. The Constellation elements (Orion, Altair, Earth Departure Stage, and Ares launch vehicles) will require a lunar navigation architecture for navigation state updates during lunar-class missions. Orion in particular has baselined earth-based ground direct tracking as the primary source for much of its absolute navigation needs. However, due to the uncertainty in the lunar navigation architecture, the Orion program has had to make certain assumptions on the capabilities of such architectures in order to adequately scale the vehicle design trade space. The following paper outlines lunar navigation requirements, the Orion program assumptions, and the impacts of these assumptions to the lunar navigation architecture design. The selection of potential sites was based upon geometric baselines, logistical feasibility, redundancy, and abort support capability. Simulated navigation covariances mapped to entry interface flight-path angle uncertainties were used to evaluate knowledge errors. A minimum ground station architecture was identified consisting of Goldstone, Madrid, Canberra, Santiago, Hartebeeshoek, Dongora, Hawaii, Guam, and Ascension Island (or the geometric equivalent).

  16. Modularity and mental architecture.

    PubMed

    Robbins, Philip

    2013-11-01

    Debates about the modularity of cognitive architecture have been ongoing for at least the past three decades, since the publication of Fodor's landmark book The Modularity of Mind. According to Fodor, modularity is essentially tied to informational encapsulation, and as such is only found in the relatively low-level cognitive systems responsible for perception and language. According to Fodor's critics in the evolutionary psychology camp, modularity simply reflects the fine-grained functional specialization dictated by natural selection, and it characterizes virtually all aspects of cognitive architecture, including high-level systems for judgment, decision making, and reasoning. Though both of these perspectives on modularity have garnered support, the current state of evidence and argument suggests that a broader skepticism about modularity may be warranted. WIREs Cogn Sci 2013, 4:641-649. doi: 10.1002/wcs.1255 CONFLICT OF INTEREST: The author has declared no conflicts of interest for this article. For further resources related to this article, please visit the WIREs website. PMID:26304269

  17. Protocol Architecture Model Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to examine protocols and architectures for an In-Space Internet Node. CNS has developed a methodology for network reference models to support NASA's four mission areas: Earth Science, Space Science, Human Exploration and Development of Space (HEDS), Aerospace Technology. This report applies the methodology to three space Internet-based communications scenarios for future missions. CNS has conceptualized, designed, and developed space Internet-based communications protocols and architectures for each of the independent scenarios. The scenarios are: Scenario 1: Unicast communications between a Low-Earth-Orbit (LEO) spacecraft in-space Internet node and a ground terminal Internet node via a Tracking and Data Relay Satellite (TDRS) transfer; Scenario 2: Unicast communications between a Low-Earth-Orbit (LEO) International Space Station and a ground terminal Internet node via a TDRS transfer; Scenario 3: Multicast Communications (or "Multicasting"), 1 Spacecraft to N Ground Receivers, N Ground Transmitters to 1 Ground Receiver via a Spacecraft.

  18. Secure Storage Architectures

    SciTech Connect

    Aderholdt, Ferrol; Caldwell, Blake A; Hicks, Susan Elaine; Koch, Scott M; Naughton, III, Thomas J; Pogge, James R; Scott, Stephen L; Shipman, Galen M; Sorrillo, Lawrence

    2015-01-01

    The purpose of this report is to clarify the challenges associated with storage for secure enclaves. The major focus areas for the report are: - review of relevant parallel filesystem technologies to identify assets and gaps; - review of filesystem isolation/protection mechanisms, to include native filesystem capabilities and auxiliary/layered techniques; - definition of storage architectures that can be used for customizable compute enclaves (i.e., clarification of use-cases that must be supported for shared storage scenarios); - investigation of vendor products related to secure storage. This study provides technical details on the storage and filesystems used for HPC, with particular attention on elements that contribute to creating secure storage. We outline the pieces for a shared storage architecture that balances protection and performance by leveraging the isolation capabilities available in filesystems and virtualization technologies to maintain the integrity of the data. Key Points: There are a few existing and in-progress protection features in Lustre related to secure storage, which are discussed in (Chapter 3.1). These include authentication capabilities like GSSAPI/Kerberos and the in-progress work for GSSAPI/Host-keys. The GPFS filesystem provides native support for encryption, which is not directly available in Lustre. Additionally, GPFS includes authentication/authorization mechanisms for inter-cluster sharing of filesystems (Chapter 3.2). The limitations of key importance for secure storage/filesystems are: (i) restricting sub-tree mounts for parallel filesystems (which is not directly supported in Lustre or GPFS), and (ii) segregation of hosts on the storage network and practical complications with dynamic additions to the storage network, e.g., LNET. A challenge for VM based use cases will be to provide efficient IO forwarding of the parallel filesystem from the host to the guest (VM). 
There are promising options like para-virtualized filesystems to

  19. The EPOS ICT Architecture

    NASA Astrophysics Data System (ADS)

    Jeffery, Keith; Harrison, Matt; Bailo, Daniele

    2016-04-01

    In parallel, the ICT team is tracking developments in ICT for relevance to EPOS-IP. In particular, the potential utilisation of e-Is (e-Infrastructures) such as GEANT (network), AARC (security), EGI (Grid computing), EUDAT (data curation), PRACE (High Performance Computing), and HELIX-Nebula / Open Science Cloud (Cloud computing) is being assessed. Similarly, relationships to other e-RIs (e-Research Infrastructures) such as ENVRI+, EXCELERATE and other ESFRI (European Strategy Forum on Research Infrastructures) projects are developed to share experience and technology and to promote interoperability. EPOS ICT team members are also involved in VRE4EIC, a project developing a reference architecture and component software services for a Virtual Research Environment to be superimposed on EPOS-ICS. The challenge now being tackled is therefore to maintain consistency and interoperability among the different modules, initiatives and actors that participate in the process of running the EPOS platform. This implies both continuously tracking the IT aspects of the initiatives mentioned and refining the e-architecture designed so far. One major aspect of EPOS-IP is the ICT support for the legalistic, financial and governance aspects of the EPOS ERIC to be initiated during EPOS-IP. This implies a sophisticated AAAI (Authentication, Authorization, Accounting Infrastructure) with consistency throughout the software, communications and data stack.

  20. Information architecture. Volume 4: Vision

    SciTech Connect

    1998-03-01

    The Vision document marks the transition from definition to implementation of the Department of Energy (DOE) Information Architecture Program. A description of the possibilities for the future, supported by actual experience with a process model and tool set, points toward implementation options. The directions for future information technology investments are discussed. Practical examples of how technology answers the business and information needs of the organization through coordinated and meshed data, applications, and technology architectures are related. This document is the fourth and final volume in the planned series for defining and exhibiting the DOE information architecture. The targeted scope of this document includes DOE Program Offices, field sites, contractor-operated facilities, and laboratories. This document paints a picture of how, over the next 7 years, technology may be implemented, dramatically improving the ways business is conducted at DOE. While technology is mentioned throughout this document, the vision is not about technology. The vision concerns the transition afforded by technology and the process steps to be completed to ensure alignment with business needs. This goal can be met if those directing the changing business and mission-support processes understand the capabilities afforded by architectural processes.

  1. Demand Activated Manufacturing Architecture

    SciTech Connect

    Bender, T.R.; Zimmerman, J.J.

    2001-02-07

    Honeywell Federal Manufacturing & Technologies (FM&T) engineers John Zimmerman and Tom Bender directed separate projects within this CRADA. This Project Accomplishments Summary contains their reports independently. Zimmerman: In 1998 Honeywell FM&T partnered with the Demand Activated Manufacturing Architecture (DAMA) Cooperative Business Management Program to pilot the Supply Chain Integration Planning Prototype (SCIP). At the time, FM&T was developing an enterprise-wide supply chain management prototype called the Integrated Programmatic Scheduling System (IPSS) to improve the DOE's Nuclear Weapons Complex (NWC) supply chain. In the CRADA partnership, FM&T provided the IPSS technical and business infrastructure as a test bed for SCIP technology, and this would provide FM&T the opportunity to evaluate SCIP as the central schedule engine and decision support tool for IPSS. FM&T agreed to do the bulk of the work for piloting SCIP. In support of that aim, DAMA needed specific DOE Defense Programs opportunities to prove the value of its supply chain architecture and tools. In this partnership, FM&T teamed with Sandia National Labs (SNL), Division 6534, the other DAMA partner and developer of SCIP. FM&T tested SCIP in 1998 and 1999. Testing ended in 1999 when DAMA CRADA funding for FM&T ceased. Before entering the partnership, FM&T discovered that the DAMA SCIP technology had an array of applications in strategic, tactical, and operational planning and scheduling. At the time, FM&T planned to improve its supply chain performance by modernizing the NWC-wide planning and scheduling business processes and tools. The modernization took the form of a distributed client-server planning and scheduling system (IPSS) for planners and schedulers to use throughout the NWC on desktops through an off-the-shelf Web browser. 
The planning and scheduling process within the NWC then, and today, is a labor-intensive paper-based method that plans and schedules more than 8,000 shipped parts

  2. Post and Lintel Architecture

    ERIC Educational Resources Information Center

    Daniel, Robert A.

    1973-01-01

    Author finds that children understand architectural concepts more readily when he refers to familiar non-architectural examples of them such as goal posts, chairs, tables, and playground equipment. (GB)

  3. New computer architectures

    SciTech Connect

    Tiberghien, J.

    1984-01-01

    This book presents papers on supercomputers. Topics considered include decentralized computer architecture, new programming languages, data flow computers, reduction computers, parallel prefix calculations, structural and behavioral descriptions of digital systems, instruction sets, software generation, personal computing, and computer architecture education.

  4. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  5. UMTS network architecture

    NASA Astrophysics Data System (ADS)

    Katoen, J. P.; Saiedi, A.; Baccaro, I.

    1994-05-01

    This paper proposes a Functional Architecture and a corresponding Network Architecture for the Universal Mobile Telecommunication System (UMTS). Procedures like call handling, location management, and handover are considered. The architecture covers the domestic, business, and public environments. Integration with existing and forthcoming networks for fixed communications is anticipated and the Intelligent Network (IN) philosophy is applied.

  6. Architecture for Adaptive Intelligent Systems

    NASA Technical Reports Server (NTRS)

    Hayes-Roth, Barbara

    1993-01-01

    We identify a class of niches to be occupied by 'adaptive intelligent systems (AISs)'. In contrast with niches occupied by typical AI agents, AIS niches present situations that vary dynamically along several key dimensions: different combinations of required tasks, different configurations of available resources, contextual conditions ranging from benign to stressful, and different performance criteria. We present a small class hierarchy of AIS niches that exhibit these dimensions of variability and describe a particular AIS niche, ICU (intensive care unit) patient monitoring, which we use for illustration throughout the paper. We have designed and implemented an agent architecture that supports all of these different kinds of adaptation by exploiting a single underlying theoretical concept: An agent dynamically constructs explicit control plans to guide its choices among situation-triggered behaviors. We illustrate the architecture and its support for adaptation with examples from Guardian, an experimental agent for ICU monitoring.
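    The central concept in the abstract above is that the agent holds an explicit, editable control plan that arbitrates among its situation-triggered behaviors, so the agent adapts by rewriting the plan rather than the behaviors. A minimal sketch of that arbitration loop follows; the class, the heart-rate triggers, and the behavior names are our own illustrations in the spirit of the ICU example, not Guardian's actual design.

    ```python
    class Agent:
        def __init__(self, behaviors, control_plan):
            self.behaviors = behaviors        # name -> (trigger predicate, action)
            self.control_plan = control_plan  # explicit priority ordering of names

        def step(self, situation):
            # Behaviors whose triggers match the situation become candidates;
            # the control plan, not a hard-wired policy, decides among them.
            triggered = [name for name, (trig, _) in self.behaviors.items()
                         if trig(situation)]
            for name in self.control_plan:
                if name in triggered:
                    return self.behaviors[name][1](situation)
            return "idle"

    behaviors = {
        "alarm":   (lambda s: s["heart_rate"] > 150, lambda s: "raise alarm"),
        "monitor": (lambda s: True,                  lambda s: "log vitals"),
    }

    # Under a stressful context the plan puts 'alarm' first; adapting to a
    # benign context could be as simple as reordering the same behaviors.
    agent = Agent(behaviors, control_plan=["alarm", "monitor"])
    print(agent.step({"heart_rate": 170}))  # raise alarm
    print(agent.step({"heart_rate": 80}))   # log vitals
    ```

    Because the plan is ordinary data, the agent (or a planner layered above it) can reconstruct it at run time as tasks, resources, or performance criteria change, which is the adaptation mechanism the abstract describes.
    
    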

  7. HRST architecture modeling and assessments

    SciTech Connect

    Comstock, D.A.

    1997-01-01

    This paper presents work supporting the assessment of advanced concept options for the Highly Reusable Space Transportation (HRST) study. It describes the development of computer models as the basis for creating an integrated capability to evaluate the economic feasibility and sustainability of a variety of system architectures. It summarizes modeling capabilities for use on the HRST study to perform sensitivity analysis of alternative architectures (consisting of different combinations of highly reusable vehicles, launch assist systems, and alternative operations and support concepts) in terms of cost, schedule, performance, and demand. In addition, the identification and preliminary assessment of alternative market segments for HRST applications, such as space manufacturing, space tourism, etc., is described. Finally, the development of an initial prototype model that can begin to be used for modeling alternative HRST concepts at the system level is presented. © 1997 American Institute of Physics.

  8. HRST architecture modeling and assessments

    NASA Astrophysics Data System (ADS)

    Comstock, Douglas A.

    1997-01-01

    This paper presents work supporting the assessment of advanced concept options for the Highly Reusable Space Transportation (HRST) study. It describes the development of computer models as the basis for creating an integrated capability to evaluate the economic feasibility and sustainability of a variety of system architectures. It summarizes modeling capabilities for use on the HRST study to perform sensitivity analysis of alternative architectures (consisting of different combinations of highly reusable vehicles, launch assist systems, and alternative operations and support concepts) in terms of cost, schedule, performance, and demand. In addition, the identification and preliminary assessment of alternative market segments for HRST applications, such as space manufacturing, space tourism, etc., is described. Finally, the development of an initial prototype model that can begin to be used for modeling alternative HRST concepts at the system level is presented.

  9. Launch Vehicle Control Center Architectures

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Epps, Amy; Woodruff, Van; Vachon, Michael Jacob; Monreal, Julio; Williams, Randall; McLaughlin, Tom

    2014-01-01

    This analysis is a survey of control center architectures of the NASA Space Launch System (SLS), United Launch Alliance (ULA) Atlas V and Delta IV, and the European Space Agency (ESA) Ariane 5. Each of these control center architectures has similarities in basic structure and differences in the functional distribution of responsibilities for the phases of operations: (a) Launch vehicles in the international community vary greatly in configuration and process; (b) Each launch site has a unique processing flow based on the specific configurations; (c) Launch and flight operations are managed through a set of control centers associated with each launch site; however, flight operations may be run from a different control center than the launch center; and (d) The engineering support centers are primarily located at the design center with a small engineering support team at the launch site.

  10. Airport Surface Network Architecture Definition

    NASA Technical Reports Server (NTRS)

    Nguyen, Thanh C.; Eddy, Wesley M.; Bretmersky, Steven C.; Lawas-Grodek, Fran; Ellis, Brenda L.

    2006-01-01

    Currently, airport surface communications are fragmented across multiple types of systems. These communication systems for airport operations at most airports today are based on dedicated and separate architectures that cannot support system-wide interoperability and information sharing. The requirements placed upon the Communications, Navigation, and Surveillance (CNS) systems in airports are rapidly growing, and integration is urgently needed if the future vision of the National Airspace System (NAS) and the Next Generation Air Transportation System (NGATS) 2025 concept are to be realized. To address this and other problems such as airport surface congestion, the Space Based Technologies Project's Surface ICNS Network Architecture team at NASA Glenn Research Center has assessed airport surface communications requirements, analyzed existing and future surface applications, and defined a set of architecture functions that will help design a scalable, reliable and flexible surface network architecture to meet the current and future needs of airport operations. This paper describes the systems approach or methodology to networking that was employed to assess airport surface communications requirements, analyze applications, and to define the surface network architecture functions as the building blocks or components of the network. The systems approach used for defining these functions is relatively new to networking. It views the surface network, along with its environment (everything that the surface network interacts with or impacts), as a system. Associated with this system are sets of services that are offered by the network to the rest of the system. Therefore, the surface network is considered as part of the larger system (such as the NAS), with interactions and dependencies between the surface network and its users, applications, and devices. The surface network architecture includes components such as addressing/routing, network management, network

  11. Information architecture. Volume 1, The foundations

    SciTech Connect

    1995-03-01

    The Information Management Planning and Architecture Coordinating Team was formed to establish an information architecture framework to meet DOE's current and future information needs. This department-wide activity was initiated in accordance with the DOE Information Management Strategic Plan; it also supports the Departmental Strategic Plan. It recognizes recent changes in emphasis as reflected in OMB Circular A-130 and the Information Resources Management Planning Process Improvement Team recommendations. Sections of this document provide the foundation for establishing DOE's Information Architecture: Background, Business Case (reduced duplication of effort, increased integration of activities, improved operational capabilities), Baseline (technology baseline currently in place within DOE), Vision (guiding principles for future DOE Information Architecture), Standards Process, Policy and Process Integration (describes relations between information architecture and business processes), and Next Steps. Following each section is a scenario. A glossary of terms is provided.

  12. Launch Vehicle Control Center Architectures

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Epps, Amy; Woodruff, Van; Vachon, Michael Jacob; Monreal, Julio; Levesque, Marl; Williams, Randall; Mclaughlin, Tom

    2014-01-01

    Launch vehicles within the international community vary greatly in their configuration and processing. Each launch site has a unique processing flow based on the specific launch vehicle configuration. Launch and flight operations are managed through a set of control centers associated with each launch site. Each launch site has a control center for launch operations; however, flight operations support varies from being co-located with the launch site to being shared with the space vehicle control center. Some vehicles also have an engineering support center, which may be co-located with either the launch or flight control center, or in a separate geographical location altogether. A survey of control center architectures is presented for various launch vehicles including the NASA Space Launch System (SLS), United Launch Alliance (ULA) Atlas V and Delta IV, and the European Space Agency (ESA) Ariane 5. Each of these control center architectures shares some similarities in basic structure, while differences in functional distribution also exist. The driving functions which lead to these factors are considered, and a model of control center architectures is proposed which supports these commonalities and variations.

  13. The UAS control segment architecture: an overview

    NASA Astrophysics Data System (ADS)

    Gregory, Douglas A.; Batavia, Parag; Coats, Mark; Allport, Chris; Jennings, Ann; Ernst, Richard

    2013-05-01

    The Under Secretary of Defense (Acquisition, Technology and Logistics) directed the Services in 2009 to jointly develop and demonstrate a common architecture for command and control of Department of Defense (DoD) Unmanned Aircraft Systems (UAS) Groups 2 through 5. The UAS Control Segment (UCS) Architecture is an architecture framework for specifying and designing the software-intensive capabilities of current and emerging UCS systems in the DoD inventory. The UCS Architecture is based on Service Oriented Architecture (SOA) principles that will be adopted by each of the Services as a common basis for acquiring, integrating, and extending the capabilities of the UAS Control Segment. The UAS Task Force established the UCS Working Group to develop and support the UCS Architecture. The Working Group currently has over three hundred members, and is open to qualified representatives from DoD-approved defense contractors, academia, and the Government. The UCS Architecture is currently at Release 2.2, with Release 3.0 planned for July 2013. This paper discusses the current and planned elements of the UCS Architecture, and related activities of the UCS Community of Interest.

  14. Space Telecommunications Radio Architecture (STRS): Technical Overview

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    2006-01-01

    A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide Standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture and other aspects are considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass and power consumption and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, the industry, academia, and standards bodies is key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. The talk will describe NASA's current effort to investigate SDR applications to space missions and a brief overview of a candidate architecture under consideration for space based platforms.

  15. A reference architecture for integrated EHR in Colombia.

    PubMed

    de la Cruz, Edgar; Lopez, Diego M; Uribe, Gustavo; Gonzalez, Carolina; Blobel, Bernd

    2011-01-01

    The implementation of national EHR infrastructures has to start with a detailed definition of the overall structure and behavior of the EHR system (system architecture). Architectures have to be open, scalable, flexible, user accepted and user friendly, trustworthy, and based on standards including terminologies and ontologies. The GCM provides an architectural framework created with the purpose of analyzing any kind of system, including EHR system architectures. The objective of this paper is to propose a reference architecture for the implementation of an integrated EHR in Colombia, based on the current state of system architectural models and EHR standards. The proposed EHR architecture defines a set of services (elements) and their interfaces to support the exchange of clinical documents, offering an open, scalable, flexible and semantically interoperable infrastructure. The architecture was tested in a pilot tele-consultation project in Colombia, where dental EHRs are exchanged. PMID:21893762

  16. Distributed visualization framework architecture

    NASA Astrophysics Data System (ADS)

    Mishchenko, Oleg; Raman, Sundaresan; Crawfis, Roger

    2010-01-01

    An architecture for distributed and collaborative visualization is presented. The design goals of the system are to create a lightweight, easy to use and extensible framework for research in scientific visualization. The system provides both a single-user and a collaborative distributed environment. The system architecture employs a client-server model. Visualization projects can be synchronously accessed and modified from different client machines. We present a set of visualization use cases that illustrate the flexibility of our system. The framework provides a rich set of reusable components for creating new applications. These components make heavy use of leading design patterns. All components are based on the functionality of a small set of interfaces. This allows new components to be integrated seamlessly with little to no effort. All user input and higher-level control functionality interface with proxy objects supporting a concrete implementation of these interfaces. These light-weight objects can be easily streamed across the web and even integrated with smart clients running on a user's cell phone. The back-end is supported by concrete implementations wherever needed (for instance for rendering). A middle-tier manages any communication and synchronization with the proxy objects. In addition to the data components, we have developed several first-class GUI components for visualization. These include a layer compositor editor, a programmable shader editor, a material editor and various drawable editors. These GUI components interact strictly with the interfaces. Access to the various entities in the system is provided by an AssetManager. The asset manager keeps track of all of the registered proxies and responds to queries on the overall system. This allows all user components to be populated automatically. Hence if a new component is added that supports the IMaterial interface, any instances of this can be used in the various GUI components that work with this interface.
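The component model the abstract describes (small interfaces, light-weight proxies, and an AssetManager that answers queries) can be illustrated with a minimal sketch. Apart from IMaterial and AssetManager, which the abstract names, the class and method names below are hypothetical, and the sketch assumes a simple runtime type check for interface matching rather than the framework's actual mechanism.

```python
from abc import ABC, abstractmethod

class IMaterial(ABC):
    """Minimal stand-in for the IMaterial interface described above."""
    @abstractmethod
    def shade(self) -> str: ...

class AssetManager:
    """Tracks registered proxies and answers queries on the overall system."""
    def __init__(self):
        self._proxies = []

    def register(self, proxy):
        self._proxies.append(proxy)

    def query(self, interface):
        # Return every registered proxy implementing the given interface,
        # so GUI components can populate themselves automatically.
        return [p for p in self._proxies if isinstance(p, interface)]

class PhongMaterial(IMaterial):  # hypothetical concrete component
    def shade(self) -> str:
        return "phong"

manager = AssetManager()
manager.register(PhongMaterial())
materials = manager.query(IMaterial)  # a GUI editor sees only IMaterial
```

Because the query is by interface, any newly registered component that satisfies IMaterial appears in the result without the GUI code changing.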

  17. Generic architectures for future flight systems

    NASA Technical Reports Server (NTRS)

    Wood, Richard J.

    1992-01-01

    Generic architectures for future flight systems must be based on open system architectures (OSA). These give the developer and integrator the flexibility to optimize the hardware and software systems to match diverse and unique application requirements. When developed properly, OSA provides interoperability, commonality, graceful upgradability, survivability and hardware/software transportability, greatly minimizing life cycle costs and improving supportability. Architecture flexibility can be achieved to take advantage of commercial developments by basing these developments on vendor-neutral, commercially accepted standards and protocols. Rome Laboratory presently has a program that addresses requirements for OSA.

  18. Parallel machine architecture and compiler design facilities

    NASA Technical Reports Server (NTRS)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

    1990-01-01

    The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility to allow rapid prototyping of parallelizing compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  19. The EPSILON-2 hybrid dataflow architecture

    SciTech Connect

    Grafe, V.G.; Hoch, J.E.

    1989-11-08

    EPSILON-2 is a general parallel computer architecture that combines the fine grain parallelism of dataflow computing with the sequential efficiency common to von Neumann computing. Instruction level synchronization, single cycle context switches, and RISC-like sequential efficiency are all supported in EPSILON-2. The general parallel computing model of EPSILON-2 is described, followed by a description of the processing element architecture. A sample code is presented in detail, and the progress of the physical implementation discussed. 11 refs., 14 figs.

  20. The Self-Organising Seismic Early Warning Information Network

    NASA Astrophysics Data System (ADS)

    Picozzi, M.

    2009-04-01

    The Self-Organizing Seismic Early Warning Information Network (SOSEWIN) represents a new approach for Earthquake Early Warning Systems (EEWS) that takes advantage of novel wireless communications technologies. It also sets out to overcome the insufficient node density that typically affects existing early warning systems: the SOSEWIN seismological sensing units are composed of low-cost components (generally bought "off-the-shelf"), with each unit initially costing 100's of Euros, in contrast to 1,000's to 10,000's for standard seismological stations. The reduced sensitivity of the new sensing units arising from the use of lower-cost components will be compensated by the network's density, which in the future is expected to number 100's to 1000's over areas currently served by the order of 10's of standard stations. The robustness, independence of infrastructure, spontaneous extensibility and self-healing/self-organizing character in the event of failing sensors during an earthquake make SOSEWIN particularly useful for urban areas. Moreover, in the post-event time frame, negligible assumptions or interpolations would be necessary for assessing the strong ground shaking and earthquake intensities. In SOSEWIN, the ground motion is continuously monitored by conventional accelerometers (3-component) and geophones and analyzed using robust signal analysis methods at each sensing node of the network. The incoming signals are pre-processed by bandpass filtering, and detection is performed using an automatic STA/LTA trigger algorithm. Signal attributes are iteratively estimated from the P-wave part of the recordings (e.g. PGA, PGV, PGD, Arias Intensity and Cumulative Absolute Velocity) to determine whether the earthquake is of sufficient magnitude to warrant issuing a system alarm. 
Unlike most existing EEWS, where the alarming system relies on estimates provided by only a few seismic stations, the SOSEWIN is specifically designed to take advantage, during the "event detection" and "appropriate issuing of alarms" stages, of a redundancy of available real-time ground motion information, thanks to the dense wireless mesh network. All of these strategies are devoted to minimizing the occurrence of false alarms while maximizing the early warning, or lead, time. The early warning performance of SOSEWIN, in terms of its combination of seismological software, hierarchical alarming protocol and routing protocol, is currently being tested by simulations. The first deployment of the SOSEWIN was carried out in June 2008, with a network of 20 stations installed in the Ataköy district of Istanbul, Turkey. We present here a report of the first months of the associated activities, together with what the field experiences have taught us in terms of wireless communication for early warning purposes.
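The detection step described in the abstract can be sketched as a plain STA/LTA (short-term average over long-term average) trigger on signal energy. The window lengths and threshold below are illustrative placeholders, not the parameters used in SOSEWIN, and energy-based averaging is just one common variant of the algorithm.

```python
from collections import deque

def sta_lta_trigger(samples, sta_len=50, lta_len=500, threshold=3.0):
    """Return sample indices where the STA/LTA ratio exceeds the threshold.

    samples: iterable of ground-motion amplitudes (e.g. filtered acceleration).
    sta_len / lta_len: short- and long-term window lengths in samples
    (illustrative values, not SOSEWIN's).
    """
    sta_win = deque(maxlen=sta_len)
    lta_win = deque(maxlen=lta_len)
    triggers = []
    for i, x in enumerate(samples):
        energy = x * x  # average signal energy, a common choice
        sta_win.append(energy)
        lta_win.append(energy)
        if len(lta_win) < lta_len:  # wait until the LTA window is full
            continue
        sta = sum(sta_win) / len(sta_win)
        lta = sum(lta_win) / len(lta_win)
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

# Quiet background followed by a sudden strong arrival: the short window
# reacts immediately while the long window still reflects the noise floor,
# so the ratio spikes at the onset.
signal = [0.01] * 600 + [1.0] * 100
onsets = sta_lta_trigger(signal)
```

In a real deployment the per-node trigger would be only the first stage; as the abstract notes, SOSEWIN then exploits the redundancy of the mesh network before issuing an alarm.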

  1. The Self-Organising Seismic Early Warning Information Network

    NASA Astrophysics Data System (ADS)

    Zschau, J.; Picozzi, M.; Milkereit, C.; Fleming, K.; Fischer, J.; Kuehnlenz, F.; Lichtblau, B.; Erdik, M.

    2008-12-01

    The Self-Organizing Seismic Early Warning Information Network (SOSEWIN) represents a new approach for Earthquake Early Warning Systems (EEWS) that takes advantage of novel wireless communications technologies. It also sets out to overcome the insufficient node density that typically affects existing early warning systems: the SOSEWIN seismological sensing units are composed of low-cost components (generally bought "off-the-shelf"), with each unit initially costing 100's of Euros, in contrast to 1,000's to 10,000's for standard seismological stations. The reduced sensitivity of the new sensing units arising from the use of lower-cost components will be compensated by the network's density, which in the future is expected to number 100's to 1000's over areas currently served by the order of 10's of standard stations. The robustness, independence of infrastructure, spontaneous extensibility and self-healing/self-organizing character in the event of failing sensors during an earthquake make SOSEWIN particularly useful for urban areas. Moreover, in the post-event time frame, negligible assumptions or interpolations would be necessary for assessing the strong ground shaking and earthquake intensities. In SOSEWIN, the ground motion is continuously monitored by conventional accelerometers (3-component) and geophones and analyzed using robust signal analysis methods at each sensing node of the network. The incoming signals are pre-processed by bandpass filtering, and detection is performed using an automatic STA/LTA trigger algorithm. Signal attributes are iteratively estimated from the P-wave part of the recordings (e.g. PGAP, PGVP, PGDP, Arias Intensity and Cumulative Absolute Velocity) to determine whether the earthquake is of sufficient magnitude to warrant issuing a system alarm. 
Unlike most existing EEWS, where the alarming system relies on estimates provided by only a few seismic stations, the SOSEWIN is specifically designed to take advantage, during the 'event detection' and 'appropriate issuing of alarms' stages, of a redundancy of available real-time ground motion information, thanks to the dense wireless mesh network. All of these strategies are devoted to minimizing the occurrence of false alarms while maximizing the early warning, or lead, time. The early warning performance of SOSEWIN, in terms of its combination of seismological software, hierarchical alarming protocol and routing protocol, is currently being tested by simulations. The first deployment of the SOSEWIN was carried out in June 2008, with a network of 20 stations installed in the Ataköy district of Istanbul, Turkey. We present here a report of the first few months of the associated activities, together with what the field experiences have taught us in terms of wireless communication for early warning purposes.

  2. Franchisees in Crisis: Using Action Learning to Self-Organise

    ERIC Educational Resources Information Center

    O'Donoghue, Carol

    2011-01-01

    The present article describes the use of action learning by a group of 30 franchisees to organise themselves and work through a period of upheaval and uncertainty when their parent company faced liquidation. Written from the perspective of one of the franchisees who found herself adopting action learning principles to facilitate the group, it…

  3. Extended Self Organised Criticality in Asynchronously Tuned Cellular Automata

    NASA Astrophysics Data System (ADS)

    Gunji, Yukio-Pegio

    2014-12-01

    Systems at a critical point in phase transitions can be regarded as being relevant to biological complex behaviour. Such a perspective can only result, in a mathematically consistent manner, from a recursive structure. We implement a recursive structure based on updating by asynchronously tuned elementary cellular automata (AT ECA), and show that a large class of elementary cellular automata (ECA) can reveal critical behavior due to the asynchronous updating and tuning. We show that the obtained criticality coincides with the criticality in phase transitions of asynchronous ECA with respect to density decay, and that multiple distributed ECAs, synchronously updated, can emulate critical behavior in AT ECA. Our approach draws on concepts and tools from category and set theory, in particular on "adjunction dualities" of pairs of adjoint functors.
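To make "asynchronous updating" concrete, the sketch below updates an elementary cellular automaton one cell at a time in random order, so each cell sees the effects of updates made earlier in the same sweep. This illustrates only the asynchronous-update idea; the tuning mechanism and the adjoint-functor construction of AT ECA in the paper are not reproduced here.

```python
import random

def eca_rule(rule_number):
    """Build an ECA lookup table: neighbourhood (left, centre, right) -> new state.

    Bit i of the Wolfram rule number gives the output for the neighbourhood
    whose binary value is i (left is the most significant bit).
    """
    table = {}
    for i in range(8):
        l, c, r = (i >> 2) & 1, (i >> 1) & 1, i & 1
        table[(l, c, r)] = (rule_number >> i) & 1
    return table

def async_step(cells, table, rng):
    """One asynchronous sweep: visit cells in random order, updating in place,
    so later updates in the sweep already see the new states (unlike a
    synchronous step, where all cells read the old configuration)."""
    n = len(cells)
    for i in rng.sample(range(n), n):
        left, right = cells[(i - 1) % n], cells[(i + 1) % n]  # periodic boundary
        cells[i] = table[(left, cells[i], right)]
    return cells

rng = random.Random(0)
table = eca_rule(110)  # rule 110, a classic ECA, chosen only for illustration
cells = [rng.randint(0, 1) for _ in range(64)]
for _ in range(100):
    async_step(cells, table, rng)
```

The same `table` driven by a synchronous step (all cells reading the previous configuration) gives the standard deterministic ECA; the random visiting order is what makes the dynamics asynchronous.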

  4. Advanced computer architecture specification for automated weld systems

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1994-01-01

    This report describes the requirements for an advanced automated weld system and the associated computer architecture, and defines the overall system specification from a broad perspective. According to the requirements of welding procedures as they relate to an integrated multiaxis motion control and sensor architecture, the computer system requirements are developed based on a proven multiple-processor design with an expandable, distributed-memory, single global bus, containing individual processors which are assigned to specific tasks that support sensor or control processes. The specified architecture is sufficiently flexible to integrate previously developed equipment, be upgradable, and allow on-site modifications.

  5. Numerical Propulsion System Simulation Architecture

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia G.

    2004-01-01

    The Numerical Propulsion System Simulation (NPSS) is a framework for performing analysis of complex systems. Because the NPSS was developed using the object-oriented paradigm, the resulting architecture is an extensible and flexible framework that is currently being used by a diverse set of participants in government, academia, and the aerospace industry. NPSS is being used by over 15 different institutions to support rockets, hypersonics, power and propulsion, fuel cells, ground based power, and aerospace. Full system-level simulations as well as subsystems may be modeled using NPSS. The NPSS architecture enables the coupling of analyses at various levels of detail, which is called numerical zooming. The middleware used to enable zooming and distributed simulations is the Common Object Request Broker Architecture (CORBA). The NPSS Developer's Kit offers tools for the developer to generate CORBA-based components and wrap codes. The Developer's Kit enables distributed multi-fidelity and multi-discipline simulations, preserves proprietary and legacy codes, and facilitates addition of customized codes. The platforms supported are PC, Linux, HP, Sun, and SGI.

  6. Savannah River Site computing architecture

    SciTech Connect

    Not Available

    1991-03-29

    A computing architecture is a framework for making decisions about the implementation of computer technology and the supporting infrastructure. Because of the size, diversity, and amount of resources dedicated to computing at the Savannah River Site (SRS), there must be an overall strategic plan that can be followed by the thousands of site personnel who make decisions daily that directly affect the SRS computing environment and impact the site's production and business systems. This plan must address the following requirements: There must be SRS-wide standards for procurement or development of computing systems (hardware and software). The site computing organizations must develop systems that end users find easy to use. Systems must be put in place to support the primary function of site information workers. The developers of computer systems must be given tools that automate and speed up the development of information systems and applications based on computer technology. This document describes a proposal for a site-wide computing architecture that addresses the above requirements. In summary, this architecture is standards-based, data-driven, and workstation-oriented, with larger systems being utilized for the delivery of needed information to users in a client-server relationship.

  8. Grid Architecture 2

    SciTech Connect

    Taft, Jeffrey D.

    2016-01-01

    The report describes work done on Grid Architecture under the auspices of the Department of Energy Office of Electricity Delivery and Energy Reliability in 2015. As described in the first Grid Architecture report, the primary purpose of this work is to provide stakeholder insight about grid issues so as to enable superior decision making on their part. Doing this requires the creation of various work products, including oft-times complex diagrams, analyses, and explanations. This report provides architectural insights into several important grid topics and also describes work done to advance the science of Grid Architecture as well.

  9. Architectures for Nanostructured Batteries

    NASA Astrophysics Data System (ADS)

    Rubloff, Gary

    2013-03-01

    Heterogeneous nanostructures offer profound opportunities for advancement in electrochemical energy storage, particularly with regard to power. However, their design and integration must balance ion transport, electron transport, and stability under charge/discharge cycling, involving fundamental physical, chemical and electrochemical mechanisms at nano length scales and across disparate time scales. In our group and in our DOE Energy Frontier Research Center (www.efrc.umd.edu) we have investigated single nanostructures and regular nanostructure arrays as batteries, electrochemical capacitors, and electrostatic capacitors to understand limiting mechanisms, using a variety of synthesis and characterization strategies. Primary lithiation pathways in heterogeneous nanostructures have been observed to include surface, interface, and both isotropic and anisotropic diffusion, depending on materials. Integrating current collection layers at the nano scale with active ion storage layers enhances power and can improve stability during cycling. For densely packed nanostructures as required for storage applications, we investigate both "regular" and "random" architectures consistent with transport requirements for spatial connectivity. Such configurations raise further important questions at the meso scale, such as dynamic ion and electron transport in narrow and tortuous channels, and the role of defect structures and their evolution during charge cycling. Supported as part of the Nanostructures for Electrical Energy Storage, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DESC0001160

  10. The Technology of Architecture

    ERIC Educational Resources Information Center

    Reese, Susan

    2006-01-01

    This article discusses how career and technical education is helping students draw up plans for success in architectural technology. According to the College of DuPage (COD) in Glen Ellyn, Illinois, one of the two-year schools offering training in architectural technology, graduates have a number of opportunities available to them. They may work…

  11. Workflow automation architecture standard

    SciTech Connect

    Moshofsky, R.P.; Rohen, W.T.

    1994-11-14

    This document presents an architectural standard for application of workflow automation technology. The standard includes a functional architecture, process for developing an automated workflow system for a work group, functional and collateral specifications for workflow automation, and results of a proof of concept prototype.

  12. Clinical document architecture.

    PubMed

    Heitmann, Kai

    2003-01-01

    The Clinical Document Architecture (CDA), a standard developed by the Health Level Seven organisation (HL7), is an ANSI approved document architecture for exchange of clinical information using XML. A CDA document is comprised of a header with associated vocabularies and a body containing the structural clinical information. PMID:15061557
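
    The header-plus-body layout described above can be sketched with a toy XML fragment. The element names below only gesture at the general CDA shape (metadata first, then a body inside a component) and are not a schema-valid HL7 instance:

```python
import xml.etree.ElementTree as ET

# Schematic sketch only: real CDA documents follow the HL7 schema and
# vocabulary bindings; these elements are simplified for illustration.
doc = ET.Element("ClinicalDocument")

# Header: document-level metadata.
ET.SubElement(doc, "title").text = "Discharge Summary"
ET.SubElement(doc, "effectiveTime", value="20030101")

# Body: structured clinical content nested under a component.
component = ET.SubElement(doc, "component")
body = ET.SubElement(component, "structuredBody")
section = ET.SubElement(body, "section")
ET.SubElement(section, "text").text = "Patient discharged in good condition."

xml_string = ET.tostring(doc, encoding="unicode")
```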

  13. Generic POCC architectures

    NASA Technical Reports Server (NTRS)

    1989-01-01

    This document describes a generic POCC (Payload Operations Control Center) architecture based upon current POCC software practice, and several refinements to the architecture based upon object-oriented design principles and expected developments in teleoperations. The current-technology generic architecture is an abstraction based upon close analysis of the ERBS, COBE, and GRO POCC's. A series of three refinements is presented: these may be viewed as an approach to a phased transition to the recommended architecture. The third refinement constitutes the recommended architecture, which, together with associated rationales, will form the basis of the rapid synthesis environment to be developed in the remainder of this task. The document is organized into two parts. The first part describes the current generic architecture using several graphical as well as tabular representations or 'views.' The second part presents an analysis of the generic architecture in terms of object-oriented principles. On the basis of this discussion, refinements to the generic architecture are presented, again using a combination of graphical and tabular representations.

  14. Emerging supercomputer architectures

    SciTech Connect

    Messina, P.C.

    1987-01-01

    This paper will examine the current and near-future trends for commercially available high-performance computers with architectures that differ from the mainstream "supercomputer" systems in use for the last few years. These emerging supercomputer architectures are just beginning to have an impact on the field of high-performance computing. 7 refs., 1 tab.

  15. Architectural Physics: Lighting.

    ERIC Educational Resources Information Center

    Hopkinson, R. G.

    The author coordinates the many diverse branches of knowledge which have dealt with the field of lighting--physiology, psychology, engineering, physics, and architectural design. Part I, "The Elements of Architectural Physics", discusses the physiological aspects of lighting, visual performance, lighting design, calculations and measurements of…

  16. Software Architecture Evolution

    ERIC Educational Resources Information Center

    Barnes, Jeffrey M.

    2013-01-01

    Many software systems eventually undergo changes to their basic architectural structure. Such changes may be prompted by new feature requests, new quality attribute requirements, changing technology, or other reasons. Whatever the causes, architecture evolution is commonplace in real-world software projects. Today's software architects, however,…

  17. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1991-01-01

    The Theory of Intelligent Machines proposes a hierarchical organization for the functions of an autonomous robot based on the Principle of Increasing Precision With Decreasing Intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed in recent years. A computer architecture that implements the lower two levels of the intelligent machine is presented. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Details of Execution Level controllers for motion and vision systems are addressed, as well as the Petri net transducer software used to implement Coordination Level functions. Extensions to UNIX and VxWorks operating systems which enable the development of a heterogeneous, distributed application are described. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.
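
    The abstract does not detail the Petri net transducers used at the Coordination Level; a minimal token-game sketch (place and transition names invented for illustration) shows the firing rule such coordination builds on:

```python
class PetriNet:
    """Tiny Petri net: a transition fires when every input place holds a
    token, consuming those tokens and producing tokens on its outputs."""

    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} not enabled")
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Hypothetical pick-and-place coordination: vision must locate the part
# before the manipulator may move.
net = PetriNet({"part_seen": 0, "arm_idle": 1})
net.add_transition("locate", [], ["part_seen"])
net.add_transition("move", ["part_seen", "arm_idle"], ["arm_busy"])
net.fire("locate")
net.fire("move")
```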

  18. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1992-01-01

    The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  19. Virtual environment architecture for rapid application development

    NASA Technical Reports Server (NTRS)

    Grinstein, Georges G.; Southard, David A.; Lee, J. P.

    1993-01-01

    We describe the MITRE Virtual Environment Architecture (VEA), a product of nearly two years of investigations and prototypes of virtual environment technology. This paper discusses the requirements for rapid prototyping, and an architecture we are developing to support virtual environment construction. VEA supports rapid application development by providing a variety of pre-built modules that can be reconfigured for each application session. The modules supply interfaces for several types of interactive I/O devices, in addition to large-screen or head-mounted displays.

  20. Middleware Architecture Evaluation for Dependable Self-managing Systems

    SciTech Connect

    Liu, Yan; Babar, Muhammad A.; Gorton, Ian

    2008-10-10

    Middleware provides infrastructure support for creating dependable software systems. A specific middleware implementation plays a critical role in determining the quality attributes that satisfy a system’s dependability requirements. Evaluating a middleware architecture at an early development stage can help to pinpoint critical architectural challenges and optimize design decisions. In this paper, we present a method and its application to evaluate middleware architectures, driven by emerging architecture patterns for developing self-managing systems. Our approach focuses on two key attributes of dependability, reliability and maintainability by means of fault tolerance and fault prevention. We identify the architectural design patterns necessary to build an adaptive self-managing architecture that is capable of preventing or recovering from failures. These architectural patterns and their impacts on quality attributes create the context for middleware evaluation. Our approach is demonstrated by an example application -- failover control of a financial application on an enterprise service bus.

  1. Architectural design for resilience

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Deters, Ralph; Zhang, W. J.

    2010-05-01

    Resilience has become a new nonfunctional requirement for information systems. Many design decisions have to be made at the architectural level in order to deliver an information system with the resilience property. This paper discusses the relationships between resilience and other architectural properties such as scalability, reliability, and consistency. A corollary is derived from the CAP theorem, and states that it is impossible for a system to have all three properties of consistency, resilience and partition-tolerance. We present seven architectural constraints for resilience. The constraints are elicited from good architectural practices for developing reliable and fault-tolerant systems and the state-of-the-art technologies in distributed computing. These constraints provide a comprehensive reference for architectural design towards resilience.

  2. The Simulation Intranet Architecture

    SciTech Connect

    Holmes, V.P.; Linebarger, J.M.; Miller, D.J.; Vandewart, R.L.

    1998-12-02

    The Simulation Intranet (SI) is a term used to describe one element of a multidisciplinary distributed and distance computing initiative known as DisCom2 at Sandia National Laboratory. The Simulation Intranet is an architecture for satisfying Sandia's long-term goal of providing an end-to-end set of services for high-fidelity full-physics simulations in a high-performance, distributed, and distance computing environment. The Intranet Architecture group was formed to apply current distributed object technologies to this problem. For the hardware architectures and software models involved with the current simulation process, a CORBA-based architecture is best suited to meet Sandia's needs. This paper presents the initial design and implementation of this Intranet based on a three-tier Network Computing Architecture (NCA). The major parts of the architecture include: the Web Client, the Business Objects, and Data Persistence.

  3. Satellite ATM Networks: Architectures and Guidelines Developed

    NASA Technical Reports Server (NTRS)

    vonDeak, Thomas C.; Yegendu, Ferit

    1999-01-01

    An important element of satellite-supported asynchronous transfer mode (ATM) networking will involve support for the routing and rerouting of active connections. Work published under the auspices of the Telecommunications Industry Association (http://www.tiaonline.org), describes basic architectures and routing protocol issues for satellite ATM (SATATM) networks. The architectures and issues identified will serve as a basis for further development of technical specifications for these SATATM networks. Three ATM network architectures for bent pipe satellites and three ATM network architectures for satellites with onboard ATM switches were developed. The architectures differ from one another in terms of required level of mobility, supported data rates, supported terrestrial interfaces, and onboard processing and switching requirements. The documentation addresses low-, middle-, and geosynchronous-Earth-orbit satellite configurations. The satellite environment may require real-time routing to support the mobility of end devices and nodes of the ATM network itself. This requires the network to be able to reroute active circuits in real time. In addition to supporting mobility, rerouting can also be used to (1) optimize network routing, (2) respond to changing quality-of-service requirements, and (3) provide a fault tolerance mechanism. Traffic management and control functions are necessary in ATM to ensure that the quality-of-service requirements associated with each connection are not violated and also to provide flow and congestion control functions. Functions related to traffic management were identified and described. Most of these traffic management functions will be supported by on-ground ATM switches, but in a hybrid terrestrial-satellite ATM network, some of the traffic management functions may have to be supported by the onboard satellite ATM switch. 
Future work is planned to examine the tradeoffs of placing traffic management functions onboard a satellite as…

  4. The Planning Execution Monitoring Architecture

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Ly, Bebe; Crocker, Alan; Schreckenghost, Debra; Mueller, Stephen; Phillips, Robert; Wadsworth, David; Sorensen, Charles

    2011-01-01

    The Planning Execution Monitoring (PEM) architecture is a design concept for developing autonomous cockpit command and control software. The PEM architecture is designed to reduce the operations costs in the space transportation system through the use of automation while improving safety and operability of the system. Specifically, the PEM autonomous framework enables automatic performance of many vehicle operations that would typically be performed by a human. Also, this framework supports varying levels of autonomous control, ranging from fully automatic to fully manual control. The PEM autonomous framework interfaces with the core flight software to perform flight procedures. It can either assist human operators in performing procedures or autonomously execute routine cockpit procedures based on the operational context. Most importantly, the PEM autonomous framework promotes and simplifies the capture, verification, and validation of the flight operations knowledge. Through a hierarchical decomposition of the domain knowledge, the vehicle command and control capabilities are divided into manageable functional "chunks" that can be captured and verified separately. These functional units, each of which has the responsibility to manage part of the vehicle command and control, are modular, re-usable, and extensible. Also, the functional units are self-contained and have the ability to plan and execute the necessary steps for accomplishing a task based upon the current mission state and available resources. The PEM architecture has potential for application outside the realm of spaceflight, including management of complex industrial processes, nuclear control, and control of complex vehicles such as submarines or unmanned air vehicles.

  5. Fractal Geometry of Architecture

    NASA Astrophysics Data System (ADS)

    Lorenz, Wolfgang E.

    In fractals, smaller parts and the whole are linked together: fractals are self-similar, as those parts are, at least approximately, scaled-down copies of the rough whole. In architecture, such a concept has also been known for a long time. Not only did architects of the twentieth century call for an overall idea that is mirrored in every single detail; Gothic cathedrals and Indian temples also offer self-similarity. This study mainly focuses upon the question of whether this concept of self-similarity makes architecture with fractal properties more diverse and interesting than Euclidean Modern architecture. The first part gives an introduction and explains fractal properties in various natural and architectural objects, presenting the underlying structure by computer-programmed renderings. In this connection, differences between the fractal architectural concept and true mathematical fractals are worked out to become aware of limits. This is the basis for dealing with the problem of whether fractal-like architecture, particularly facades, can be measured so that different designs can be compared with each other under the aspect of fractal properties. Finally, the usability of the Box-Counting Method, an easy-to-use measurement method of fractal dimension, is analyzed with regard to architecture.
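
    The Box-Counting Method mentioned above can be sketched in a few lines: cover the object with grids of shrinking box size s, count the occupied boxes N(s), and read the fractal dimension off the slope of log N(s) versus log(1/s). A minimal, stdlib-only sketch over a 2-D point set (sizes chosen arbitrarily):

```python
import math

def box_counting_dimension(points, sizes=(1, 2, 4, 8, 16)):
    """Estimate fractal dimension of a 2-D point set by box counting:
    count boxes of side s containing at least one point, then take the
    least-squares slope of log N(s) against log(1/s)."""
    logs, logn = [], []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        logs.append(math.log(1.0 / s))
        logn.append(math.log(len(boxes)))
    n = len(sizes)
    mx, my = sum(logs) / n, sum(logn) / n
    slope = (sum((lx - mx) * (ly - my) for lx, ly in zip(logs, logn))
             / sum((lx - mx) ** 2 for lx in logs))
    return slope

# A filled 64x64 grid of points should measure close to dimension 2.
filled = [(x, y) for x in range(64) for y in range(64)]
d = box_counting_dimension(filled)
```

    For a facade, the point set would come from a thresholded image of the edges; the same counting loop applies unchanged.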

  6. Architecture for Verifiable Software

    NASA Technical Reports Server (NTRS)

    Reinholtz, William; Dvorak, Daniel

    2005-01-01

    Verifiable MDS Architecture (VMA) is a software architecture that facilitates the construction of highly verifiable flight software for NASA s Mission Data System (MDS), especially for smaller missions subject to cost constraints. More specifically, the purpose served by VMA is to facilitate aggressive verification and validation of flight software while imposing a minimum of constraints on overall functionality. VMA exploits the state-based architecture of the MDS and partitions verification issues into elements susceptible to independent verification and validation, in such a manner that scaling issues are minimized, so that relatively large software systems can be aggressively verified in a cost-effective manner.

  7. Tagged token dataflow architecture

    SciTech Connect

    Arvind; Culler, D.E.

    1983-10-01

    The demand for large-scale multiprocessor systems has been substantial for many years. The technology for fabrication of such systems is available, but attempts to extend traditional architectures to this context have met with only mild success. The authors hold that fundamental aspects of the Von Neumann architecture prohibit its extension to multiprocessor systems; they pose dataflow architectures as an alternative. These two approaches are contrasted on issues of synchronization, memory latency, and the ability to share data without constraining parallelism. 12 references.
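
    As a concrete contrast with the von Neumann model, the tagged-token firing rule can be sketched in a few lines: an instruction fires only when all operands carrying the same tag have arrived, so independent tags (e.g. loop iterations) proceed without a program counter serializing them. A toy sketch (names hypothetical):

```python
from collections import defaultdict

class DataflowNode:
    """One two-input instruction in a tagged-token dataflow graph: it
    fires as soon as both operand tokens with a matching tag are
    present, regardless of arrival order."""

    def __init__(self, op, downstream=None):
        self.op = op
        self.downstream = downstream
        self.waiting = defaultdict(dict)   # tag -> {port: value}
        self.results = {}                  # tag -> result (if no downstream)

    def receive(self, tag, port, value):
        slot = self.waiting[tag]
        slot[port] = value
        if 0 in slot and 1 in slot:        # both operands present: fire
            result = self.op(slot[0], slot[1])
            del self.waiting[tag]
            if self.downstream:
                self.downstream.receive(tag, 0, result)
            else:
                self.results[tag] = result

# Tokens for two tags arrive interleaved and out of order.
add = DataflowNode(lambda a, b: a + b)
add.receive("iter1", 1, 10)
add.receive("iter2", 0, 5)
add.receive("iter1", 0, 1)   # completes tag "iter1" -> fires
add.receive("iter2", 1, 20)  # completes tag "iter2" -> fires
```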

  8. Microcomponent sheet architecture

    DOEpatents

    Wegeng, R.S.; Drost, M.K.; McDonald, C.E.

    1997-03-18

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation. 14 figs.

  9. Microcomponent sheet architecture

    DOEpatents

    Wegeng, Robert S.; Drost, M. Kevin; McDonald, Carolyn E.

    1997-01-01

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent or plurality of like microcomponents perform at least one unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents thereby combining at least two unit operations to achieve a system operation.

  10. RASSP signal processing architectures

    NASA Astrophysics Data System (ADS)

    Shirley, Fred; Bassett, Bob; Letellier, J. P.

    1995-06-01

    The rapid prototyping of application specific signal processors (RASSP) program is an ARPA/tri-service effort to dramatically improve the process by which complex digital systems, particularly embedded signal processors, are specified, designed, documented, manufactured, and supported. The domain of embedded signal processing was chosen because it is important to a variety of military and commercial applications as well as for the challenge it presents in terms of complexity and performance demands. The principal effort is being performed by two major contractors, Lockheed Sanders (Nashua, NH) and Martin Marietta (Camden, NJ). For both, improvements in methodology are to be exercised and refined through the performance of individual 'Demonstration' efforts. The Lockheed Sanders Demonstration effort is to develop an infrared search and track (IRST) processor. In addition, both contractors' results are being measured by a series of externally administered (by Lincoln Labs) six-month Benchmark programs that measure process improvement as a function of time. The first two Benchmark programs are designing and implementing a synthetic aperture radar (SAR) processor. Our demonstration team is using commercially available VME modules from Mercury Computer to assemble a multiprocessor system scalable from one to hundreds of Intel i860 microprocessors. Custom modules for the sensor interface and display driver are also being developed. This system implements either proprietary or Navy-owned algorithms to perform the compute-intensive IRST function in real time in an avionics environment. Our Benchmark team is designing custom modules using commercially available processor chip sets, communication submodules, and reconfigurable logic devices. One of the modules contains multiple vector processors optimized for fast Fourier transform processing. Another module is a fiberoptic interface that accepts high-rate input data from the sensors and provides video-rate output data to a…

  11. Flexible weapons architecture design

    NASA Astrophysics Data System (ADS)

    Pyant, William C., III

    Present-day air-delivered weapons are of a closed architecture, with little to no ability to tailor the weapon to the individual engagement. Closed architectures require weaponeers to make the target fit the weapon instead of fitting the weapon to the target. The flexible weapons concept aims to modularize weapon design using an open-architecture shell into which different modules are inserted to achieve the desired target fractional damage while reducing cost and civilian casualties. This thesis shows that the architectural design factors of damage mechanism, fusing, weapon weight, guidance, and propulsion are significant in enhancing weapon performance objectives and would benefit from modularization. Additionally, this thesis constructs an algorithm that can be used to design a weapon set for a particular target class based on these modular components.

  12. Robot Electronics Architecture

    NASA Technical Reports Server (NTRS)

    Garrett, Michael; Magnone, Lee; Aghazarian, Hrand; Baumgartner, Eric; Kennedy, Brett

    2008-01-01

    An electronics architecture has been developed to enable the rapid construction and testing of prototypes of robotic systems. This architecture is designed to be a research vehicle of great stability, reliability, and versatility. A system according to this architecture can easily be reconfigured (including expanded or contracted) to satisfy a variety of needs with respect to input, output, processing of data, sensing, actuation, and power. The architecture affords a variety of expandable input/output options that enable ready integration of instruments, actuators, sensors, and other devices as independent modular units. The separation of different electrical functions onto independent circuit boards facilitates the development of corresponding simple and modular software interfaces. As a result, both hardware and software can be made to expand or contract in modular fashion while expending a minimum of time and effort.

  13. An Open Specification for Space Project Mission Operations Control Architectures

    NASA Technical Reports Server (NTRS)

    Hooke, A.; Heuser, W. R.

    1995-01-01

    An 'open specification' for Space Project Mission Operations Control Architectures is under development in the Spacecraft Control Working Group of the American Institute of Aeronautics and Astronautics. This architecture identifies five basic elements incorporated in the design of similar operations systems: Data, System Management, Control Interface, Decision Support Engine, and Space Messaging Service.

  14. CORDIC processor architectures

    NASA Astrophysics Data System (ADS)

    Boehme, Johann F.; Timmermann, D.; Hahn, H.; Hosticka, Bedrich J.

    1991-12-01

    As CORDIC algorithms receive more and more attention in elementary function evaluation and signal processing applications, the problem of their VLSI realization has attracted considerable interest. In this work we review the CORDIC fundamentals covering algorithm, architecture, and implementation issues. Various aspects of the CORDIC algorithm are investigated such as efficient scale factor compensation, redundant and non-redundant addition schemes, and convergence domain. Several CORDIC processor architectures and implementation examples are discussed.
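
    The CORDIC iteration reviewed above reduces rotation to shifts and adds: each step rotates by ±atan(2^-i), chosen to drive the residual angle toward zero, and the cumulative scale factor is compensated by a constant gain. A floating-point sketch of rotation mode (the parameter names are ours):

```python
import math

def cordic_sin_cos(theta, iterations=32):
    """Compute (sin theta, cos theta) via CORDIC rotation mode for
    |theta| within the convergence range (about +/-1.74 rad).

    In hardware each multiply by 2**-i is a shift; here we use floats
    to keep the structure of the algorithm visible."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    # Pre-computed gain compensating the growth of each micro-rotation.
    k = 1.0
    for i in range(iterations):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = k, 0.0, theta   # start pre-scaled so the result is unit length
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0   # rotate toward zero residual angle
        x, y, z = (x - d * y * 2.0 ** -i,
                   y + d * x * 2.0 ** -i,
                   z - d * angles[i])
    return y, x   # (sin theta, cos theta)

s, c = cordic_sin_cos(0.5)
```

    Vectoring mode, scale-factor-free variants, and redundant-arithmetic schemes discussed in the paper modify this same loop rather than replace it.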

  15. A Practical Software Architecture for Virtual Universities

    ERIC Educational Resources Information Center

    Xiang, Peifeng; Shi, Yuanchun; Qin, Weijun

    2006-01-01

    This article introduces a practical software architecture called CUBES, which focuses on system integration and evolvement for online virtual universities. The key of CUBES is a supporting platform that helps to integrate and evolve heterogeneous educational applications developed by different organizations. Both standardized educational…

  16. Utilizing Rapid Prototyping for Architectural Modeling

    ERIC Educational Resources Information Center

    Kirton, E. F.; Lavoie, S. D.

    2006-01-01

    This paper will discuss our approach to, success with and future direction in rapid prototyping for architectural modeling. The premise that this emerging technology has broad and exciting applications in the building design and construction industry will be supported by visual and physical evidence. This evidence will be presented in the form of…

  17. Network architecture functional description and design

    SciTech Connect

    Stans, L.; Bencoe, M.; Brown, D.; Kelly, S.; Pierson, L.; Schaldach, C.

    1989-05-25

    This report provides a top level functional description and design for the development and implementation of the central network to support the next generation of SNL, Albuquerque supercomputer in a UNIX{reg sign} environment. It describes the network functions and provides an architecture and topology.

  18. Beethoven: architecture for media telephony

    NASA Astrophysics Data System (ADS)

    Keskinarkaus, Anja; Ohtonen, Timo; Sauvola, Jaakko J.

    1999-11-01

    This paper presents a new architecture and techniques for media-based telephony over wireless/wireline IP networks, called `Beethoven'. The platform supports complex media transport and mobile conferencing for multi-user environments with non-uniform access. New techniques are presented to provide advanced multimedia call management over different media types and their presentation. The routing and distribution of the media is rendered over a standards-based protocol. Our approach offers a generic, distributed, and object-oriented solution with interfaces where signal processing and unified messaging algorithms are embedded as instances of core classes. The platform services are divided into `basic communication', `conferencing', and `media session'. The basic communication services form the platform core and support access from a scalable user interface to network end-points. Conferencing services take care of media filter adaptation, conversion, error resiliency, multi-party connection, and event signaling, while the media session services offer resources for application-level communication between the terminals. The platform allows flexible attachment of any number of plug-in modules, and thus we use it as a test bench for multiparty/multi-point conferencing and as an evaluation bench for signal coding algorithms. In tests, our architecture showed the ability to scale easily from a simple voice terminal to a complex multi-user conference sharing virtual data.

  19. Message Bus Architectures - Simplicity in the Right Places

    NASA Technical Reports Server (NTRS)

    Smith, Dan

    2010-01-01

    There will always be a new latest and greatest architecture for satellite ground systems. This paper discusses the use of a proven message-oriented middleware (MOM) architecture using publish/subscribe functions and the strengths it brings to these mission critical systems. An even newer approach gaining popularity is Service Oriented Architectures (SOAs). SOAs are generally considered more powerful than the MOM approach and address many mission-critical system challenges. A MOM vs SOA discussion can highlight capabilities supported or enabled by the underlying architecture and can identify benefits of MOMs and SOAs when applied to differing sets of mission requirements or evaluation criteria.
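
    The publish/subscribe decoupling that a MOM provides can be sketched minimally: publishers never name their consumers, and the bus routes each message to whoever subscribed to its topic. Topic names and payloads below are invented for illustration:

```python
from collections import defaultdict

class MessageBus:
    """Minimal publish/subscribe message bus: senders and receivers are
    decoupled through named topics rather than direct references."""

    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> [handler, ...]

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Deliver to every handler registered for this topic.
        for handler in self._subscribers[topic]:
            handler(message)

bus = MessageBus()
received = []
bus.subscribe("telemetry", received.append)
bus.publish("telemetry", {"alt_km": 705})
bus.publish("commands", {"op": "noop"})   # no subscriber; silently dropped
```

    A production MOM adds queuing, persistence, and delivery guarantees on top of this routing core; an SOA layers service contracts above it.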

  20. Evaluating science return in space exploration initiative architectures

    NASA Technical Reports Server (NTRS)

    Budden, Nancy Ann; Spudis, Paul D.

    1993-01-01

    Science is an important aspect of the Space Exploration Initiative, a program to explore the Moon and Mars with people and machines. Different SEI mission architectures are evaluated on the basis of three variables: access (to the planet's surface), capability (including the number of crew, equipment, and supporting infrastructure), and time (the total number of man-hours available for scientific activities). This technique allows us to estimate the scientific return to be expected from different architectures and from different implementations of the same architecture. Our methodology allows us to maximize the scientific return from the initiative by illuminating the different emphases and returns that result from alternative architectural decisions.

  1. Specifying structural constraints of architectural patterns in the ARCHERY language

    SciTech Connect

    Sanchez, Alejandro; Barbosa, Luis S.; Riesco, Daniel

    2015-03-10

    ARCHERY is an architectural description language for modelling and reasoning about distributed, heterogeneous and dynamically reconfigurable systems in terms of architectural patterns. The language supports the specification of architectures and their reconfiguration. This paper introduces a language extension for precisely describing the structural design decisions that pattern instances must respect in their (re)configurations. The extension is a propositional modal logic with recursion and nominals referencing components, i.e., a hybrid µ-calculus. Its expressiveness allows specifying safety and liveness constraints, as well as paths and cycles over structures. Refinements of classic architectural patterns are specified.

  2. Neural Architectures for Control

    NASA Technical Reports Server (NTRS)

    Peterson, James K.

    1991-01-01

    The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms, the Macintosh, the IBM PC, and the SUN workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real time on an MS-DOS PC 386. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog valued obstacle fields. The method constructs a coarse resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.
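
    The CMAC lookup that makes this real-time training cheap is simple to sketch: each input activates one cell in each of several offset tilings, the output is the sum of the activated weights, and training spreads an LMS correction across them. A 1-D sketch (tiling counts and learning rate are arbitrary choices, not taken from the work above):

```python
import random

class CMAC1D:
    """Minimal 1-D CMAC: overlapping offset tilings, table lookup for
    prediction, and an LMS update shared by the active cells."""

    def __init__(self, n_tilings=8, n_cells=64, lo=0.0, hi=1.0):
        self.n_tilings, self.n_cells = n_tilings, n_cells
        self.lo, self.hi = lo, hi
        self.weights = [[0.0] * n_cells for _ in range(n_tilings)]

    def _active(self, x):
        span = (self.hi - self.lo) / self.n_cells
        for t in range(self.n_tilings):
            offset = t * span / self.n_tilings   # each tiling shifted slightly
            idx = int((x - self.lo + offset) / span) % self.n_cells
            yield t, idx

    def predict(self, x):
        return sum(self.weights[t][i] for t, i in self._active(x))

    def train(self, x, target, lr=0.5):
        err = target - self.predict(x)
        for t, i in self._active(x):
            self.weights[t][i] += lr * err / self.n_tilings

# Learn y = x^2 on [0, 1] online, sample by sample, as a CMAC
# controller would during operation.
random.seed(0)
net = CMAC1D()
for _ in range(5000):
    x = random.random()
    net.train(x, x * x)
```

    Because each update touches only a handful of table entries, the cost per sample is constant, which is what made on-line training on a 386-class machine feasible.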

  3. Technology architecture guidelines for a health care system.

    PubMed

    Jones, D T; Duncan, R; Langberg, M L; Shabot, M M

    2000-01-01

    Although the demand for use of information technology within the healthcare industry is intensifying, relatively little has been written about guidelines to optimize IT investments. A technology architecture is a set of guidelines for technology integration within an enterprise. The architecture is a critical tool in the effort to control information technology (IT) operating costs by constraining the number of technologies supported. A well-designed architecture is also an important aid to integrating disparate applications, data stores and networks. The authors led the development of a thorough, carefully designed technology architecture for a large and rapidly growing health care system. The purpose and design criteria are described, as well as the process for gaining consensus and disseminating the architecture. In addition, the processes for using, maintaining, and handling exceptions are described. The technology architecture is extremely valuable to health care organizations both in controlling costs and promoting integration. PMID:11079913

  4. Unifying parametrized VLSI Jacobi algorithms and architectures

    NASA Astrophysics Data System (ADS)

    Deprettere, Ed F. A.; Moonen, Marc

    1993-11-01

    Implementing Jacobi algorithms in parallel VLSI processor arrays is a non-trivial task, in particular when the algorithms are parametrized with respect to size and the architectures are parametrized with respect to space-time trade-offs. The paper is concerned with an approach to implement several time-adaptive Jacobi-type algorithms on a parallel processor array, using only Cordic arithmetic and asynchronous communications, such that any degree of parallelism, ranging from single-processor up to full-size array implementation, is supported by a `universal' processing unit. This result is attributed to a gracious interplay between algorithmic and architectural engineering.
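    The plane-rotation step at the heart of such Jacobi-type algorithms can be illustrated with a minimal cyclic Jacobi sweep for a symmetric matrix. In the arrays discussed above each rotation would be carried out by a Cordic unit; the sketch below simply uses `math.atan2`/`cos`/`sin` and is not the paper's parametrised design.

```python
# Minimal cyclic Jacobi sketch: each step zeroes one off-diagonal pair
# A[p][q] of a symmetric matrix with a plane rotation.
import math

def jacobi_sweeps(A, sweeps=10):
    """Diagonalise a small symmetric matrix by cyclic plane rotations."""
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-12:
                    continue
                # Rotation angle that annihilates A[p][q].
                theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):          # rotate rows p and q
                    A[p][k], A[q][k] = (c * A[p][k] - s * A[q][k],
                                        s * A[p][k] + c * A[q][k])
                for k in range(n):          # rotate columns p and q
                    A[k][p], A[k][q] = (c * A[k][p] - s * A[k][q],
                                        s * A[k][p] + c * A[k][q])
    return [A[i][i] for i in range(n)]      # eigenvalue estimates

eigs = sorted(jacobi_sweeps([[2.0, 1.0], [1.0, 2.0]]))
# For [[2, 1], [1, 2]] the eigenvalues are 1 and 3.
```

    Because each rotation touches only rows and columns p and q, independent (p, q) pairs can be processed by separate processing units in parallel, which is what makes the algorithm a natural fit for processor arrays.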

  5. Unconventional Architectures for High-Throughput Sciences

    SciTech Connect

    Nieplocha, Jarek; Marquez, Andres; Petrini, Fabrizio; Chavarría-Miranda, Daniel

    2007-06-15

Science laboratories and sophisticated simulations are producing data of increasing volume and complexity, posing significant challenges to current data infrastructures as terabytes to petabytes of data must be processed and analyzed. Traditional computing platforms, originally designed to support model-driven applications, are unable to meet the demands of data-intensive scientific applications. Pacific Northwest National Laboratory (PNNL) research goes beyond “traditional supercomputing” applications to address emerging problems that need scalable, real-time solutions. The outcome is new unconventional architectures for data-intensive applications specifically designed to process the deluge of scientific data, including FPGAs, multithreaded architectures, and IBM's Cell.

  6. Constellation Architecture and System Margins Strategy

    NASA Technical Reports Server (NTRS)

    Muirhead, Brian

    2008-01-01

    NASA's Constellation Program (CxP) is responsible for the definition, design, development, and operations of the flight, ground, and mission operations elements being developed by the United States for the human exploration of the Moon, Mars, and beyond. This paper provides an overview of the latest CxP technical architecture baseline, driving requirements, and reference missions for initial capability to fly to the International Space Station (ISS) and to the Moon. The results of the most recent design decisions and analyses supporting the architecture, including the Ares I, Ares V, Orion crew exploration vehicle, and the Altair lunar lander will be presented.

  7. Space station needs, attributes and architectural options: Midterm main briefing

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Space station missions, their requirements, and architectural solutions are presented. Analyses of the following five mission categories are summarized: (1) science/applications, (2) commercial, (3) national security, (4) operational support, and (5) technology development.

  8. Using an Integrated Distributed Test Architecture to Develop an Architecture for Mars

    NASA Technical Reports Server (NTRS)

    Othon, William L.

    2016-01-01

The creation of a crew-rated spacecraft architecture capable of sending humans to Mars requires the development and integration of multiple vehicle systems and subsystems. Important new technologies will be identified and matured within each technical discipline to support the mission. Architecture maturity also requires coordination with mission operations elements and ground infrastructure. During early architecture formulation, many of these assets will not be co-located and will require integrated, distributed testing to show that the technologies and systems are being developed in a coordinated way. When complete, technologies must be shown to function together to achieve mission goals. In this presentation, an architecture will be described that promotes and advances integration of disparate systems within JSC and across NASA centers.

  9. The social architecture of capitalism

    NASA Astrophysics Data System (ADS)

    Wright, Ian

    2005-02-01

    A dynamic model of the social relations between workers and capitalists is introduced. The model self-organises into a dynamic equilibrium with statistical properties that are in close qualitative and in many cases quantitative agreement with a broad range of known empirical distributions of developed capitalism, including the power-law firm size distribution, the Laplace firm and GDP growth distribution, the lognormal firm demises distribution, the exponential recession duration distribution, the lognormal-Pareto income distribution, and the gamma-like firm rate-of-profit distribution. Normally these distributions are studied in isolation, but this model unifies and connects them within a single causal framework. The model also generates business cycle phenomena, including fluctuating wage and profit shares in national income about values consistent with empirical studies. The generation of an approximately lognormal-Pareto income distribution and an exponential-Pareto wealth distribution demonstrates that the power-law regime of the income distribution can be explained by an additive process on a power-law network that models the social relation between employers and employees organised in firms, rather than a multiplicative process that models returns to investment in financial markets. A testable consequence of the model is the conjecture that the rate-of-profit distribution is consistent with a parameter-mix of a ratio of normal variates with means and variances that depend on a firm size parameter that is distributed according to a power-law.
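    The mechanism the abstract names, an additive process on a power-law network of firms rather than multiplicative returns, can be illustrated with a toy simulation (not the paper's actual model): if firm sizes follow a power law and an employer's income is an additive sum of per-employee residuals, the top of the income distribution inherits the power-law tail.

```python
# Illustrative sketch, not Wright's model: power-law firm sizes plus an
# additive per-employee residual yield a Pareto tail in employer incomes.
import math
import random

random.seed(0)

def powerlaw_size(alpha=2.0, s_min=1):
    """Sample a Pareto-tailed firm size (employees), P(S > s) ~ s^-(alpha-1)."""
    return max(s_min, int(s_min / random.random() ** (1.0 / (alpha - 1))))

wage = 1.0
employer_incomes = []
for _ in range(20000):
    size = powerlaw_size()
    # Employer keeps an additive residual per employee (revenue minus wage).
    revenue_per_worker = random.uniform(1.1, 1.5)
    employer_incomes.append(size * (revenue_per_worker - wage))

employer_incomes.sort(reverse=True)
# Crude tail-index check: Hill estimator over the top 1% of incomes.
k = len(employer_incomes) // 100
top = employer_incomes[:k]
hill = k / sum(math.log(x / top[-1]) for x in top)
# hill should sit near the firm-size tail index of 1, since the additive
# income is (up to a bounded factor) proportional to firm size.
```

    The point of the sketch is only the qualitative one made in the abstract: no multiplicative investment-returns process is needed for a power-law income tail once the employment network itself is power-law distributed.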

  10. Agent Architectures for Compliance

    NASA Astrophysics Data System (ADS)

    Burgemeestre, Brigitte; Hulstijn, Joris; Tan, Yao-Hua

    A Normative Multi-Agent System consists of autonomous agents who must comply with social norms. Different kinds of norms make different assumptions about the cognitive architecture of the agents. For example, a principle-based norm assumes that agents can reflect upon the consequences of their actions; a rule-based formulation only assumes that agents can avoid violations. In this paper we present several cognitive agent architectures for self-monitoring and compliance. We show how different assumptions about the cognitive architecture lead to different information needs when assessing compliance. The approach is validated with a case study of horizontal monitoring, an approach to corporate tax auditing recently introduced by the Dutch Customs and Tax Authority.
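    The contrast drawn above between rule-based and principle-based norms can be made concrete with a toy pair of agents (class names, actions, and the harm scores are invented for illustration; this is not the authors' formalism): the rule-based agent only needs a violation list, while the principle-based agent needs a model of consequences.

```python
# Toy sketch of two compliance architectures. A rule-based agent avoids
# listed violations; a principle-based agent reflects on consequences.

FORBIDDEN = {"ship_undeclared_goods"}          # rule-based norm

class RuleBasedAgent:
    """Complies by never selecting an action on the forbidden list."""
    def choose(self, actions):
        return [a for a in actions if a not in FORBIDDEN][0]

class PrincipleBasedAgent:
    """Complies by scoring each action against a principle
    (here: minimise projected harm) before acting."""
    def __init__(self, harm_model):
        self.harm = harm_model                 # action -> projected harm
    def choose(self, actions):
        return min(actions, key=lambda a: self.harm.get(a, 0.0))

actions = ["ship_undeclared_goods", "declare_and_ship", "delay_shipment"]
rb_choice = RuleBasedAgent().choose(actions)
pb_choice = PrincipleBasedAgent({"ship_undeclared_goods": 0.9,
                                 "declare_and_ship": 0.1,
                                 "delay_shipment": 0.3}).choose(actions)
```

    The information an auditor needs differs accordingly: for the first agent it suffices to inspect the violation list and the action log, while for the second the harm model itself must be assessed.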

  11. Advanced ground station architecture

    NASA Technical Reports Server (NTRS)

    Zillig, David; Benjamin, Ted

    1994-01-01

This paper describes a new station architecture for NASA's Ground Network (GN). The architecture makes efficient use of emerging technologies to provide dramatic reductions in size, operational complexity, and operational and maintenance costs. The architecture, which is based on recent receiver work sponsored by the Office of Space Communications Advanced Systems Program, allows integration of both GN and Space Network (SN) modes of operation in the same electronics system. It is highly configurable through software and the use of charge-coupled device (CCD) technology to provide a wide range of operating modes. Moreover, it affords modularity of features which are optional depending on the application. The resulting system incorporates advanced RF, digital, and remote control technology capable of introducing significant operational, performance, and cost benefits to a variety of NASA communications and tracking applications.

  12. Domain specific software architectures: Command and control

    NASA Technical Reports Server (NTRS)

    Braun, Christine; Hatch, William; Ruegsegger, Theodore; Balzer, Bob; Feather, Martin; Goldman, Neil; Wile, Dave

    1992-01-01

    GTE is the Command and Control contractor for the Domain Specific Software Architectures program. The objective of this program is to develop and demonstrate an architecture-driven, component-based capability for the automated generation of command and control (C2) applications. Such a capability will significantly reduce the cost of C2 applications development and will lead to improved system quality and reliability through the use of proven architectures and components. A major focus of GTE's approach is the automated generation of application components in particular subdomains. Our initial work in this area has concentrated in the message handling subdomain; we have defined and prototyped an approach that can automate one of the most software-intensive parts of C2 systems development. This paper provides an overview of the GTE team's DSSA approach and then presents our work on automated support for message processing.

  13. Exploration Architecture Options - ECLSS, EVA, TCS Implications

    NASA Technical Reports Server (NTRS)

    Chambliss, Joe; Henninger, Don; Lawrence, Carl

    2010-01-01

Many options for exploration of space have been identified and evaluated since the Vision for Space Exploration (VSE) was announced in 2004. Lunar architectures have been identified and addressed in the Lunar Surface Systems team to establish options for how to get to and then inhabit and explore the moon. The Augustine Commission evaluated human space flight for the Obama administration and identified many options for how to conduct human spaceflight in the future. This paper will evaluate the options for exploration of space for the implications of the architectures on the Environmental Control and Life Support System (ECLSS), ExtraVehicular Activity (EVA), and Thermal Control System (TCS). The advantages and disadvantages of each architecture and its options are presented.

  14. Exploration Architecture Options - ECLSS, EVA, TCS Implications

    NASA Technical Reports Server (NTRS)

    Chambliss, Joe; Henninger, Don; Lawrence, Carl

    2009-01-01

Many options for exploration of the Moon and Mars have been identified and evaluated since the Vision for Space Exploration (VSE) was announced in 2004. Lunar architectures have been identified and addressed in the Lunar Surface Systems team to establish options for how to get to and then inhabit and explore the moon. The Augustine Commission evaluated human space flight for the Obama administration and identified many options for how to conduct human spaceflight in the future. This paper will evaluate the options for exploration of the moon and Mars and those of the Augustine human spaceflight commission for the implications of each architecture on the Environmental Control and Life Support, ExtraVehicular Activity, and Thermal Control systems. The advantages and disadvantages of each architecture and its options are presented.

  15. Synergetics and architecture

    NASA Astrophysics Data System (ADS)

    Maslov, V. P.; Maslova, T. V.

    2008-03-01

A series of phenomena pertaining to economics, quantum physics, language, literary criticism, and especially architecture is studied from the standpoint of synergetics (the study of self-organizing complex systems). It turns out that a whole series of concrete formulas describing these phenomena is identical in these different situations. This is the case for formulas relating to the Bose-Einstein distribution of particles and the distribution of words from a frequency dictionary. This also makes it possible to apply a "quantized" form of the Zipf law to the problem of the authorship of Quiet Flows the Don and to the "blending in" of new architectural structures in an existing environment.

  16. Information architecture. Volume 3: Guidance

    SciTech Connect

    1997-04-01

The purpose of this document, as presented in Volume 1, The Foundations, is to assist the Department of Energy (DOE) in developing and promulgating information architecture guidance. This guidance is aimed at increasing the development of information architecture as a Departmentwide management best practice. This document describes departmental information architecture principles and minimum design characteristics for systems and infrastructures within the DOE Information Architecture Conceptual Model, and establishes a Departmentwide standards-based architecture program. The publication of this document fulfills the commitment to address guiding principles, promote standard architectural practices, and provide technical guidance. This document guides the transition from the baseline or de facto Departmental architecture through approved information management program plans and budgets to the future vision architecture. This document also represents another major step toward establishing a well-organized, logical foundation for the DOE information architecture.

  17. National Positioning, Navigation, and Timing Architecture Study

    NASA Astrophysics Data System (ADS)

    van Dyke, K.; Vicario, J.; Hothem, L.

    2007-12-01

The purpose of the National Positioning, Navigation and Timing (PNT) Architecture effort is to help guide future PNT system-of-systems investment and implementation decisions. The Assistant Secretary of Defense for Networks and Information Integration and the Under Secretary of Transportation for Policy sponsored a National PNT Architecture study to provide more effective and efficient PNT capabilities focused on the 2025 timeframe and an evolutionary path for government provided systems and services. U.S. Space-Based PNT Policy states that the U.S. must continue to improve and maintain GPS, augmentations to GPS, and back-up capabilities to meet growing national, homeland, and economic security needs. PNT touches almost every aspect of people's lives today. PNT is essential for defense and civilian applications ranging from the Department of Defense's Joint network-centric and precision operations to the transportation and telecommunications sectors, improving efficiency, increasing safety, and being more productive. Absence of an approved PNT architecture results in uncoordinated research efforts, lack of clear developmental paths, potentially wasteful procurements and inefficient deployment of PNT resources. The national PNT architecture effort evaluated alternative future mixes of global (space and non space-based) and regional PNT solutions, PNT augmentations, and autonomous PNT capabilities to address priorities identified in the DoD PNT Joint Capabilities Document (JCD) and civil equivalents. The path to achieving the Should-Be architecture is described by the National PNT Architecture's Guiding Principles, representing an overarching Vision of the U.S. role in PNT, an architectural Strategy to fulfill that Vision, and four Vectors which support the Strategy. The National PNT Architecture effort has developed nineteen recommendations. Five foundational recommendations are tied directly to the Strategy while the remaining fourteen individually support one of the four Vectors.

  18. Service connectivity architecture for mobile augmented reality

    NASA Astrophysics Data System (ADS)

    Turunen, Tuukka; Pyssysalo, Tino; Roening, Juha

    2001-06-01

    Mobile augmented reality can be utilized in a number of different services, and it provides a lot of added value compared to the interfaces used in mobile multimedia today. Intelligent service connectivity architecture is needed for the emerging commercial mobile augmented reality services, to guarantee mobility and interoperability on a global scale. Some of the key responsibilities of this architecture are to find suitable service providers, to manage the connection with and utilization of such providers, and to allow smooth switching between them whenever the user moves out of the service area of the service provider she is currently connected to. We have studied the potential support technologies for such architectures and propose a way to create an intelligent service connectivity architecture based on current and upcoming wireless networks, an Internet backbone, and mechanisms to manage service connectivity in the upper layers of the protocol stack. In this paper, we explain the key issues of service connectivity, describe the properties of our architecture, and analyze the functionality of an example system. Based on these, we consider our proposition a good solution to the quest for global interoperability in mobile augmented reality services.

  19. NASA CEV Reference GN&C Architecture

    NASA Technical Reports Server (NTRS)

    Tamblyn, Scott; Hinkel, Heather; Saley, Dave

    2007-01-01

The Orion Crew Exploration Vehicle (CEV) will be the first human spacecraft built by NASA in almost three decades and will be the first vehicle to perform both Low Earth Orbit (LEO) missions and lunar missions since Apollo. The awesome challenge of designing a Guidance, Navigation, and Control (GN&C) system for this vehicle that satisfies all of its various mission requirements is countered by the opportunity to take advantage of the improvements in algorithms, software, sensors, and other related GN&C technology over this period. This paper describes the CEV GN&C reference architecture developed to support the overall NASA reference configuration and validate the driving requirements of the Constellation (Cx) Architecture Requirements Document (CARD, Reference 1) and the CEV System Requirements Document (SRD, Reference 2). The Orion GN&C team designed the reference architecture based on the functional allocation of GN&C roles and responsibilities of CEV with respect to the other Cx vehicles, such as the Crew Launch Vehicle (CLV), Earth Departure Stage (EDS), and Lunar Surface Access Module (LSAM), across all flight phases. The specific challenges and responsibilities of the CEV GN&C system from launch pad to touchdown will be introduced along with an overview of the navigation sensor suite, its redundancy management, and flight software (FSW) architecture. Sensors will be discussed in terms of range of operation, data utility within the navigation system, and rationale for selection. The software architecture is illustrated via block diagrams, commensurate with the design aspects.

  20. Reference Avionics Architecture for Lunar Surface Systems

    NASA Technical Reports Server (NTRS)

    Somervill, Kevin M.; Lapin, Jonathan C.; Schmidt, Oron L.

    2010-01-01

Developing and delivering infrastructure capable of supporting long-term manned operations to the lunar surface has been a primary objective of the Constellation Program in the Exploration Systems Mission Directorate. Several concepts have been developed for the development and deployment of lunar exploration vehicles and assets that provide critical functionality such as transportation, habitation, and communication, to name a few. Together, these systems perform complex safety-critical functions, largely dependent on avionics for control and behavior of system functions. These functions are implemented using interchangeable, modular avionics designed for lunar transit and lunar surface deployment. Systems are optimized towards reuse and commonality of form and interface and can be configured via software or component integration for special-purpose applications. There are two core concepts in the reference avionics architecture described in this report. The first concept uses distributed, smart systems to manage complexity, simplify integration, and facilitate commonality. The second core concept is to employ extensive commonality between elements and subsystems. These two concepts are used in the context of developing reference designs for many lunar surface exploration vehicles and elements, and recur as architectural patterns in a conceptual architectural framework. This report describes the use of these architectural patterns in a reference avionics architecture for lunar surface system elements.

  1. Generic Software Architecture for Launchers

    NASA Astrophysics Data System (ADS)

    Carre, Emilien; Gast, Philippe; Hiron, Emmanuel; Leblanc, Alain; Lesens, David; Mescam, Emmanuelle; Moro, Pierre

    2015-09-01

The definition and reuse of generic software architecture for launchers is not so usual, for several reasons: the number of European launcher families is very small (Ariane 5 and Vega for these last decades); the real-time constraints (reactivity and determinism needs) are very hard; and low levels of versatility are required (often implying ad hoc development of the launcher mission). In comparison, satellites are often built on a generic platform made up of reusable hardware building blocks (processors, star-trackers, gyroscopes, etc.) and reusable software building blocks (middleware, TM/TC, On Board Control Procedure, etc.). While some of these reasons remain valid (e.g. the limited number of developments), the increase in available CPU power today makes achievable an approach based on a generic time-triggered middleware (ensuring the full determinism of the system) and a centralised mission and vehicle management (offering more flexibility in the design and facilitating long-term maintenance). This paper presents an example of generic software architecture which could be envisaged for future launchers, based on the previously described principles and supported by model-driven engineering and automatic code generation.

  2. Hadl: HUMS Architectural Description Language

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.; Adavi, V.; Agarwal, N.; Gullapalli, S.; Kumar, P.; Sundaram, P.

    2004-01-01

Specification of architectures is an important prerequisite for evaluation of architectures. With the increase in the growth of health usage and monitoring systems (HUMS) in commercial and military domains, the need for the design and evaluation of HUMS architectures has also been on the increase. In this paper, we describe HADL, HUMS Architectural Description Language, that we have designed for this purpose. In particular, we describe the features of the language, illustrate them with examples, and show how we use it in designing domain-specific HUMS architectures. A companion paper contains details on our design methodology of HUMS architectures.

  3. The Architecture of a Software System for Supporting Community-based Primary Health Care with Mobile Technology: The Mobile Technology for Community Health (MoTeCH) Initiative in Ghana

    PubMed Central

    MacLeod, Bruce; Phillips, James; Stone, Allison E.; Walji, Aliya; Awoonor-Williams, John Koku

    2012-01-01

    This paper describes the software architecture of a system designed in response to the health development potential of two concomitant trends in poor countries: i) The rapid expansion of community health worker deployment, now estimated to involve over a million workers in Africa and Asia, and ii) the global proliferation of mobile technology coverage and use. Known as the Mobile Technology for Community Health (MoTeCH) Initiative, our system adapts and integrates existing software applications for mobile data collection, electronic medical records, and interactive voice response to bridge health information gaps in rural Africa. MoTeCH calculates the upcoming schedule of care for each client and, when care is due, notifies the client and community health workers responsible for that client. MoTeCH also automates the aggregation of health status and health service delivery information for routine reports. The paper concludes with a summary of lessons learned and future system development needs. PMID:23569631
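    The schedule-of-care computation described above can be sketched simply: from an enrolment date and a fixed schedule of offsets, compute which care events are due and not yet completed, so that the client and the responsible community health worker can be notified. The schedule entries below are hypothetical placeholders, not MoTeCH's actual clinical rules.

```python
# Minimal sketch of upcoming-care calculation. Schedule is illustrative.
from datetime import date, timedelta

# Hypothetical antenatal-care schedule: visit name -> days after enrolment.
SCHEDULE = [("ANC visit 1", 0), ("ANC visit 2", 28),
            ("ANC visit 3", 56), ("ANC visit 4", 84)]

def due_items(enrolled_on, completed, today):
    """Return schedule items whose due date has arrived and are not done."""
    out = []
    for name, offset in SCHEDULE:
        due = enrolled_on + timedelta(days=offset)
        if name not in completed and due <= today:
            out.append((name, due))
    return out

pending = due_items(date(2024, 1, 1), {"ANC visit 1"}, date(2024, 3, 1))
# pending -> ANC visits 2 and 3 (due 2024-01-29 and 2024-02-26);
# visit 4 is not yet due on 2024-03-01.
```

    In a deployed system the `pending` list would drive the notification channel (SMS or interactive voice response) rather than being returned directly.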

  4. American School & University Architectural Portfolio 2000 Awards: Landscape Architecture.

    ERIC Educational Resources Information Center

    American School & University, 2000

    2000-01-01

Presents photographs and basic information on architectural design, costs, square footage, and principal designers of the award-winning school landscaping projects that competed in the American School & University Architectural Portfolio 2000. (GR)

  5. Tutorial on architectural acoustics

    NASA Astrophysics Data System (ADS)

    Shaw, Neil; Talaske, Rick; Bistafa, Sylvio

    2002-11-01

    This tutorial is intended to provide an overview of current knowledge and practice in architectural acoustics. Topics covered will include basic concepts and history, acoustics of small rooms (small rooms for speech such as classrooms and meeting rooms, music studios, small critical listening spaces such as home theatres) and the acoustics of large rooms (larger assembly halls, auditoria, and performance halls).

  6. 1989 Architectural Exhibition Winners.

    ERIC Educational Resources Information Center

    School Business Affairs, 1990

    1990-01-01

    Winners of the 1989 Architectural Exhibition sponsored annually by the ASBO International's School Facilities Research Committee include the Brevard Performing Arts Center (Melbourne, Florida), the Capital High School (Santa Fe, New Mexico), Gage Elementary School (Rochester, Minnesota), the Lakewood (Ohio) High School Natatorium, and three other…

  7. Emulating an MIMD architecture

    SciTech Connect

    Su Bogong; Grishman, R.

    1982-01-01

    As part of a research effort in parallel processor architecture and programming, the ultracomputer group at New York University has performed extensive simulation of parallel programs. To speed up these simulations, a parallel processor emulator, using the microprogrammable Puma computer system previously designed and built at NYU, has been developed. 8 references.

  8. System Building and Architecture.

    ERIC Educational Resources Information Center

    Robbie, Roderick G.

    The technical director of the Metropolitan Toronto School Boards Study of Educational Facilities (SEF) presents a description of the general theory and execution of the first SEF building system, and his views on the general principles of system building as they might affect architecture and the economy. (TC)

  9. Making Connections through Architecture.

    ERIC Educational Resources Information Center

    Hollingsworth, Patricia

    1993-01-01

    The Center for Arts and Sciences (Oklahoma) developed an interdisciplinary curriculum for disadvantaged gifted children on styles of architecture, called "Discovering Patterns in the Built Environment." This article describes the content and processes used in the curriculum, as well as other programs of the center, such as teacher workshops,…

  10. GNU debugger internal architecture

    SciTech Connect

    Miller, P.; Nessett, D.; Pizzi, R.

    1993-12-16

This document describes the internal architecture and implementation of the GNU debugger, gdb. Topics include inferior process management, command execution, symbol table management, and remote debugging. Call graphs for specific functions are supplied. This document is not a complete description but offers a developer an overview that is the place to start before modification.

  11. Test Architecture, Test Retrofit

    ERIC Educational Resources Information Center

    Fulcher, Glenn; Davidson, Fred

    2009-01-01

    Just like buildings, tests are designed and built for specific purposes, people, and uses. However, both buildings and tests grow and change over time as the needs of their users change. Sometimes, they are also both used for purposes other than those intended in the original designs. This paper explores architecture as a metaphor for language…

  12. INL Generic Robot Architecture

    Energy Science and Technology Software Center (ESTSC)

    2005-03-30

    The INL Generic Robot Architecture is a generic, extensible software framework that can be applied across a variety of different robot geometries, sensor suites and low-level proprietary control application programming interfaces (e.g. mobility, aria, aware, player, etc.).

  13. Modeling Operations Costs for Human Exploration Architectures

    NASA Technical Reports Server (NTRS)

    Shishko, Robert

    2013-01-01

Operations and support (O&S) costs for human spaceflight have not received the same attention in the cost estimating community as have development costs. This is unfortunate as O&S costs typically comprise a majority of life-cycle costs (LCC) in such programs as the International Space Station (ISS) and the now-cancelled Constellation Program. Recognizing this, the Constellation Program and NASA Headquarters supported the development of an O&S cost model specifically for human spaceflight. This model, known as the Exploration Architectures Operations Cost Model (ExAOCM), provided the operations cost estimates for a variety of alternative human missions to the moon, Mars, and Near-Earth Objects (NEOs) in architectural studies. ExAOCM is philosophically based on the DoD Architecture Framework (DoDAF) concepts of operational nodes, systems, operational functions, and milestones. This paper presents some of the historical background surrounding the development of the model, and discusses the underlying structure, its unusual user interface, and lastly, previous examples of its use in the aforementioned architectural studies.

  14. Standardizing the information architecture for spacecraft operations

    NASA Technical Reports Server (NTRS)

    Easton, C. R.

    1994-01-01

    This paper presents an information architecture developed for the Space Station Freedom as a model from which to derive an information architecture standard for advanced spacecraft. The information architecture provides a way of making information available across a program, and among programs, assuming that the information will be in a variety of local formats, structures and representations. It provides a format that can be expanded to define all of the physical and logical elements that make up a program, add definitions as required, and import definitions from prior programs to a new program. It allows a spacecraft and its control center to work in different representations and formats, with the potential for supporting existing spacecraft from new control centers. It supports a common view of data and control of all spacecraft, regardless of their own internal view of their data and control characteristics, and of their communications standards, protocols and formats. This information architecture is central to standardizing spacecraft operations, in that it provides a basis for information transfer and translation, such that diverse spacecraft can be monitored and controlled in a common way.

  15. Shaping plant architecture.

    PubMed

    Teichmann, Thomas; Muhr, Merlin

    2015-01-01

    Plants exhibit phenotypical plasticity. Their general body plan is genetically determined, but plant architecture and branching patterns are variable and can be adjusted to the prevailing environmental conditions. The modular design of the plant facilitates such morphological adaptations. The prerequisite for the formation of a branch is the initiation of an axillary meristem. Here, we review the current knowledge about this process. After its establishment, the meristem can develop into a bud which can either become dormant or grow out and form a branch. Many endogenous factors, such as photoassimilate availability, and exogenous factors like nutrient availability or shading, have to be integrated in the decision whether a branch is formed. The underlying regulatory network is complex and involves phytohormones and transcription factors. The hormone auxin is derived from the shoot apex and inhibits bud outgrowth indirectly in a process termed apical dominance. Strigolactones appear to modulate apical dominance by modification of auxin fluxes. Furthermore, the transcription factor BRANCHED1 plays a central role. The exact interplay of all these factors still remains obscure and there are alternative models. We discuss recent findings in the field along with the major models. Plant architecture is economically significant because it affects important traits of crop and ornamental plants, as well as trees cultivated in forestry or on short rotation coppices. As a consequence, plant architecture has been modified during plant domestication. Research revealed that only few key genes have been the target of selection during plant domestication and in breeding programs. Here, we discuss such findings on the basis of various examples. Architectural ideotypes that provide advantages for crop plant management and yield are described. 
    We also outline the potential of breeding and biotechnological approaches to further modify and improve plant architecture for economic needs.

  16. Shaping plant architecture

    PubMed Central

    Teichmann, Thomas; Muhr, Merlin

    2015-01-01

    Plants exhibit phenotypical plasticity. Their general body plan is genetically determined, but plant architecture and branching patterns are variable and can be adjusted to the prevailing environmental conditions. The modular design of the plant facilitates such morphological adaptations. The prerequisite for the formation of a branch is the initiation of an axillary meristem. Here, we review the current knowledge about this process. After its establishment, the meristem can develop into a bud which can either become dormant or grow out and form a branch. Many endogenous factors, such as photoassimilate availability, and exogenous factors like nutrient availability or shading, have to be integrated in the decision whether a branch is formed. The underlying regulatory network is complex and involves phytohormones and transcription factors. The hormone auxin is derived from the shoot apex and inhibits bud outgrowth indirectly in a process termed apical dominance. Strigolactones appear to modulate apical dominance by modification of auxin fluxes. Furthermore, the transcription factor BRANCHED1 plays a central role. The exact interplay of all these factors still remains obscure and there are alternative models. We discuss recent findings in the field along with the major models. Plant architecture is economically significant because it affects important traits of crop and ornamental plants, as well as trees cultivated in forestry or on short rotation coppices. As a consequence, plant architecture has been modified during plant domestication. Research revealed that only few key genes have been the target of selection during plant domestication and in breeding programs. Here, we discuss such findings on the basis of various examples. Architectural ideotypes that provide advantages for crop plant management and yield are described. 
    We also outline the potential of breeding and biotechnological approaches to further modify and improve plant architecture for economic needs.

  17. ACOUSTICS IN ARCHITECTURAL DESIGN, AN ANNOTATED BIBLIOGRAPHY ON ARCHITECTURAL ACOUSTICS.

    ERIC Educational Resources Information Center

    DOELLE, LESLIE L.

    THE PURPOSE OF THIS ANNOTATED BIBLIOGRAPHY ON ARCHITECTURAL ACOUSTICS WAS--(1) TO COMPILE A CLASSIFIED BIBLIOGRAPHY, INCLUDING MOST OF THOSE PUBLICATIONS ON ARCHITECTURAL ACOUSTICS, PUBLISHED IN ENGLISH, FRENCH, AND GERMAN WHICH CAN SUPPLY A USEFUL AND UP-TO-DATE SOURCE OF INFORMATION FOR THOSE ENCOUNTERING ANY ARCHITECTURAL-ACOUSTIC DESIGN…

  18. 11. Photocopy of architectural drawing (from National Archives Architectural and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. Photocopy of architectural drawing (from National Archives Architectural and Cartographic Branch Alexandria, Va.) 'Non-Com-Officers Qrs.' Quartermaster General's Office Standard Plan 82, sheet 1. Lithograph on linen architectural drawing. April 1893 3 ELEVATIONS, 3 PLANS AND A PARTIAL SECTION - Fort Myer, Non-Commissioned Officers Quarters, Washington Avenue between Johnson Lane & Custer Road, Arlington, Arlington County, VA

  19. 12. Photocopy of architectural drawing (from National Archives Architectural and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. Photocopy of architectural drawing (from National Archives Architectural and Cartographic Branch, Alexandria, Va.) 'Non-Com-Officers Qrs.' Quartermaster Generals Office Standard Plan 82, sheet 2, April 1893. Lithograph on linen architectural drawing. DETAILS - Fort Myer, Non-Commissioned Officers Quarters, Washington Avenue between Johnson Lane & Custer Road, Arlington, Arlington County, VA

  20. An Experiment in Architectural Instruction.

    ERIC Educational Resources Information Center

    Dvorak, Robert W.

    1978-01-01

    Discusses the application of the PLATO IV computer-based educational system to a one-semester basic drawing course for freshman architecture, landscape architecture, and interior design students and relates student reactions to the experience. (RAO)

  1. Compositional Specification of Software Architecture

    NASA Technical Reports Server (NTRS)

    Penix, John; Lau, Sonie (Technical Monitor)

    1998-01-01

    This paper describes our experience using parameterized algebraic specifications to model properties of software architectures. The goal is to model the decomposition of requirements independent of the style used to implement the architecture. We begin by providing an overview of the role of architecture specification in software development. We then describe how architecture specifications are built up from component and connector specifications and give an overview of insights gained from a case study used to validate the method.

  2. Controlling Material Reactivity Using Architecture.

    PubMed

    Sullivan, Kyle T; Zhu, Cheng; Duoss, Eric B; Gash, Alexander E; Kolesky, David B; Kuntz, Joshua D; Lewis, Jennifer A; Spadaccini, Christopher M

    2016-03-01

    3D-printing methods are used to generate reactive material architectures. Several geometric parameters are observed to influence the resultant flame propagation velocity, indicating that the architecture can be utilized to control reactivity. Two different architectures, channels and hurdles, are generated, and thin films of thermite are deposited onto the surface. The architecture offers an additional route to control, at will, the energy release rate in reactive composite materials. PMID:26669517

  3. Architecture for autonomy

    NASA Astrophysics Data System (ADS)

    Broten, Gregory S.; Monckton, Simon P.; Collier, Jack; Giesbrecht, Jared

    2006-05-01

    In 2002 Defence R&D Canada changed research direction from pure tele-operated land vehicles to general autonomy for land, air, and sea craft. The unique constraints of the military environment coupled with the complexity of autonomous systems drove DRDC to carefully plan a research and development infrastructure that would provide state-of-the-art tools without restricting research scope. DRDC's long term objectives for its autonomy program address disparate unmanned ground vehicle (UGV), unattended ground sensor (UGS), air (UAV), and subsea and surface (UUV and USV) vehicles operating together with minimal human oversight. Individually, these systems will range in complexity from simple reconnaissance mini-UAVs streaming video to sophisticated autonomous combat UGVs exploiting embedded and remote sensing. Together, these systems can provide low risk, long endurance, battlefield services assuming they can communicate and cooperate with manned and unmanned systems. A key enabling technology for this new research is a software architecture capable of meeting both DRDC's current and future requirements. DRDC built upon recent advances in the computing science field while developing its software architecture known as the Architecture for Autonomy (AFA). Although a well established practice in computing science, frameworks have only recently entered common use by unmanned vehicles. For industry and government, the complexity, cost, and time to re-implement stable systems often exceeds the perceived benefits of adopting a modern software infrastructure. Thus, most persevere with legacy software, adapting and modifying software when and wherever possible or necessary -- adopting strategic software frameworks only when no justifiable legacy exists. Conversely, academic programs with short one or two year projects frequently exploit strategic software frameworks but with little enduring impact. The open-source movement radically changes this picture. Academic frameworks

  4. Cognitive Architectures for Multimedia Learning

    ERIC Educational Resources Information Center

    Reed, Stephen K.

    2006-01-01

    This article provides a tutorial overview of cognitive architectures that can form a theoretical foundation for designing multimedia instruction. Cognitive architectures include a description of memory stores, memory codes, and cognitive operations. Architectures that are relevant to multimedia learning include Paivio's dual coding theory,…

  5. Information architecture. Volume 2, Part 1: Baseline analysis summary

    SciTech Connect

    1996-12-01

    The Department of Energy (DOE) Information Architecture, Volume 2, Baseline Analysis, is a collaborative and logical next-step effort in the processes required to produce a Departmentwide information architecture. The baseline analysis serves a diverse audience of program management and technical personnel and provides an organized way to examine the Department's existing or de facto information architecture. A companion document to Volume 1, The Foundations, it furnishes the rationale for establishing a Departmentwide information architecture. This volume, consisting of the Baseline Analysis Summary (part 1), Baseline Analysis (part 2), and Reference Data (part 3), is of interest to readers who wish to understand how the Department's current information architecture technologies are employed. The analysis identifies how and where current technologies support business areas, programs, sites, and corporate systems.

  6. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete- Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than on the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
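    The DFT-IDFT overlap-and-save method at the core of these architectures can be sketched in a few lines. The following is a generic illustration of block convolution by overlap-and-save, not the report's VLSI design; the function name and block size are assumptions.

    ```python
    import numpy as np

    def overlap_save(x, h, fft_size=64):
        """Filter signal x with FIR taps h via the DFT-IDFT overlap-and-save method."""
        M = len(h)
        L = fft_size - M + 1               # new samples consumed per block
        H = np.fft.fft(h, fft_size)        # subfilter frequency response
        xp = np.concatenate([np.zeros(M - 1), x])  # history for the first block
        out = []
        for start in range(0, len(x), L):
            block = xp[start:start + fft_size]
            if len(block) < fft_size:      # zero-pad the final partial block
                block = np.concatenate([block, np.zeros(fft_size - len(block))])
            yb = np.fft.ifft(np.fft.fft(block) * H)
            out.append(yb[M - 1:M - 1 + L].real)  # discard the M-1 aliased samples
        return np.concatenate(out)[:len(x)]
    ```

    As in the report, the DFT size is set by the desired block rate, not by the filter order: longer filters simply reduce L, the number of new samples per block.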

  7. Open architecture CNC system

    SciTech Connect

    Tal, J.; Lopez, A.; Edwards, J.M.

    1995-04-01

    In this paper, an alternative solution to the traditional CNC machine tool controller has been introduced. Software and hardware modules have been described and their incorporation in a CNC control system has been outlined. This type of CNC machine tool controller demonstrates that technology is accessible and can be readily implemented into an open architecture machine tool controller. Benefit to the user is greater controller flexibility, while being economically achievable. PC-based, motion as well as non-motion features will provide flexibility through a Windows environment. Upgrading this type of controller system through software revisions will keep the machine tool in a competitive state with minimal effort. Software and hardware modules are mass produced permitting competitive procurement and incorporation. Open architecture CNC systems provide diagnostics thus enhancing maintainability and machine tool up-time. A major concern of traditional CNC systems has been operator training time. Training time can be greatly minimized by making use of Windows environment features.

  8. Consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of the MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language. Subsequently the verification of consistency of these diagrams is needed in order to identify errors in requirements at the early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of consistency of the series of UML diagrams originating from abstract models implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. Business Context Diagram). Therefore, our method can be used to check practicability (feasibility) of software architecture models.

  9. Instrumented Architectural Simulation System

    NASA Technical Reports Server (NTRS)

    Delagi, B. A.; Saraiya, N.; Nishimura, S.; Byrd, G.

    1987-01-01

    Simulation of systems at an architectural level can offer an effective way to study critical design choices if (1) the performance of the simulator is adequate to examine designs executing significant code bodies, not just toy problems or small application fragments, (2) the details of the simulation include the critical details of the design, (3) the view of the design presented by the simulator instrumentation leads to useful insights on the problems with the design, and (4) there is enough flexibility in the simulation system so that the asking of unplanned questions is not suppressed by the weight of the mechanics involved in making changes either in the design or its measurement. A simulation system with these goals is described together with the approach to its implementation. Its application to the study of a particular class of multiprocessor hardware system architectures is illustrated.

  10. Generic robot architecture

    SciTech Connect

    Bruemmer, David J; Few, Douglas A

    2010-09-21

    The present invention provides methods, computer readable media, and apparatuses for a generic robot architecture providing a framework that is easily portable to a variety of robot platforms and is configured to provide hardware abstractions, abstractions for generic robot attributes, environment abstractions, and robot behaviors. The generic robot architecture includes a hardware abstraction level and a robot abstraction level. The hardware abstraction level is configured for developing hardware abstractions that define, monitor, and control hardware modules available on a robot platform. The robot abstraction level is configured for defining robot attributes and provides a software framework for building robot behaviors from the robot attributes. Each of the robot attributes includes hardware information from at least one hardware abstraction. In addition, each robot attribute is configured to substantially isolate the robot behaviors from the at least one hardware abstraction.
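    The two abstraction levels the patent abstract describes can be illustrated roughly as follows; all class and function names here are hypothetical, not taken from the patent, and the sketch only shows how attributes isolate behaviors from hardware abstractions.

    ```python
    class HardwareAbstraction:
        """Wraps one hardware module: defines, monitors, and controls it."""
        def __init__(self, name, read_fn):
            self.name = name
            self._read = read_fn   # device-specific access hidden behind read()

        def read(self):
            return self._read()

    class RobotAttribute:
        """Built from one or more hardware abstractions; presents hardware
        information to behaviors without exposing the hardware itself."""
        def __init__(self, *sources):
            self.sources = sources

        def value(self):
            return {s.name: s.read() for s in self.sources}

    def obstacle_behavior(range_attr, threshold=0.5):
        """A behavior consumes attributes, never hardware abstractions directly."""
        readings = range_attr.value()
        return "stop" if min(readings.values()) < threshold else "go"
    ```

    Porting to a new platform would then mean rewriting only the `HardwareAbstraction` layer, leaving attributes and behaviors unchanged, which is the portability argument the abstract makes.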

  11. Aerobot Autonomy Architecture

    NASA Technical Reports Server (NTRS)

    Elfes, Alberto; Hall, Jeffery L.; Kulczycki, Eric A.; Cameron, Jonathan M.; Morfopoulos, Arin C.; Clouse, Daniel S.; Montgomery, James F.; Ansar, Adnan I.; Machuzak, Richard J.

    2009-01-01

    An architecture for autonomous operation of an aerobot (i.e., a robotic blimp) to be used in scientific exploration of planets and moons in the Solar system with an atmosphere (such as Titan and Venus) is undergoing development. This architecture is also applicable to autonomous airships that could be flown in the terrestrial atmosphere for scientific exploration, military reconnaissance and surveillance, and as radio-communication relay stations in disaster areas. The architecture was conceived to satisfy requirements to perform the following functions: a) Vehicle safing, that is, ensuring the integrity of the aerobot during its entire mission, including during extended communication blackouts. b) Accurate and robust autonomous flight control during operation in diverse modes, including launch, deployment of scientific instruments, long traverses, hovering or station-keeping, and maneuvers for touch-and-go surface sampling. c) Mapping and self-localization in the absence of a global positioning system. d) Advanced recognition of hazards and targets in conjunction with tracking of, and visual servoing toward, targets, all to enable the aerobot to detect and avoid atmospheric and topographic hazards and to identify, home in on, and hover over predefined terrain features or other targets of scientific interest. The architecture is an integrated combination of systems for accurate and robust vehicle and flight trajectory control; estimation of the state of the aerobot; perception-based detection and avoidance of hazards; monitoring of the integrity and functionality ("health") of the aerobot; reflexive safing actions; multi-modal localization and mapping; autonomous planning and execution of scientific observations; and long-range planning and monitoring of the mission of the aerobot. The prototype JPL aerobot (see figure) has been tested extensively in various areas in the California Mojave desert.

  12. Staged Event Architecture

    Energy Science and Technology Software Center (ESTSC)

    2005-05-30

    Sea is a framework for a Staged Event Architecture, designed around non-blocking asynchronous communication facilities that are decoupled from the threading model chosen by any given application. Components for IP networking and in-memory communication are provided. The Sea Java library encapsulates these concepts. Sea is used to easily build efficient and flexible low-level network clients and servers, and in particular as a basic communication substrate for Peer-to-Peer applications.
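    The staged-event idea can be sketched minimally: each stage owns a queue and a worker thread, so producers hand off events without blocking on the consumer's processing rate. Sea itself is a Java library; this Python sketch illustrates only the general pattern and assumes nothing about Sea's actual API.

    ```python
    import queue
    import threading

    class Stage:
        """One stage of a staged event pipeline: an inbox queue plus a worker
        thread, decoupling communication from the caller's threading model."""
        def __init__(self, handler, downstream=None):
            self.inbox = queue.Queue()
            self.handler = handler
            self.downstream = downstream
            self.thread = threading.Thread(target=self._run, daemon=True)
            self.thread.start()

        def _run(self):
            while True:
                item = self.inbox.get()
                if item is None:                 # shutdown sentinel
                    if self.downstream is not None:
                        self.downstream.inbox.put(None)
                    return
                out = self.handler(item)
                if self.downstream is not None:  # forward to the next stage
                    self.downstream.inbox.put(out)
    ```

    Chaining stages this way lets each one be sized and scheduled independently, which is the usual argument for staged designs over one-thread-per-connection servers.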

  13. Modular robotic architecture

    NASA Astrophysics Data System (ADS)

    Smurlo, Richard P.; Laird, Robin T.

    1991-03-01

    The development of control architectures for mobile systems is typically a task undertaken with each new application. These architectures address different operational needs and tend to be difficult to adapt to more than the problem at hand. The development of a flexible and extendible control system with evolutionary growth potential for use on mobile robots will help alleviate these problems and, if made widely available, will promote standardization and compatibility among systems throughout the industry. The Modular Robotic Architecture (MRA) is a generic control system that meets the above needs by providing developers with a standard set of software and hardware tools that can be used to design modular robots (MODBOTs) with nearly unlimited growth potential. The MODBOT itself is a generic creature that must be customized by the developer for a particular application. The MRA facilitates customization of the MODBOT by providing sensor, actuator, and processing modules that can be configured in almost any manner as demanded by the application. The Mobile Security Robot (MOSER) is an instance of a MODBOT that is being developed using the MRA. (Figure 1 shows the remote platform module configuration of MOSER.)

  14. Complex Event Recognition Architecture

    NASA Technical Reports Server (NTRS)

    Fitzgerald, William A.; Firby, R. James

    2009-01-01

    Complex Event Recognition Architecture (CERA) is the name of a computational architecture, and software that implements the architecture, for recognizing complex event patterns that may be spread across multiple streams of input data. One of the main components of CERA is an intuitive event pattern language that simplifies what would otherwise be the complex, difficult tasks of creating logical descriptions of combinations of temporal events and defining rules for combining information from different sources over time. In this language, recognition patterns are defined in simple, declarative statements that combine point events from given input streams with those from other streams, using conjunction, disjunction, and negation. Patterns can be built on one another recursively to describe very rich, temporally extended combinations of events. Thereafter, a run-time matching algorithm in CERA efficiently matches these patterns against input data and signals when patterns are recognized. CERA can be used to monitor complex systems and to signal operators or initiate corrective actions when anomalous conditions are recognized. CERA can be run as a stand-alone monitoring system, or it can be integrated into a larger system to automatically trigger responses to changing environments or problematic situations.
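    The combination of point events by conjunction, disjunction, and negation can be mimicked with simple pattern combinators. This is a toy illustration of the idea only, not CERA's pattern language, and it ignores CERA's recursive, temporally extended patterns; all names are invented.

    ```python
    # Each pattern is a predicate over the set of events seen so far.
    def event(name):
        return lambda seen: name in seen

    def conj(*pats):
        return lambda seen: all(p(seen) for p in pats)

    def disj(*pats):
        return lambda seen: any(p(seen) for p in pats)

    def neg(pat):
        return lambda seen: not pat(seen)

    def match(pattern, stream):
        """Signal at the first point in the stream where the pattern holds,
        returning that index, or None if it never holds."""
        seen = set()
        for i, ev in enumerate(stream):
            seen.add(ev)
            if pattern(seen):
                return i
        return None
    ```

    For example, `conj(event("overtemp"), neg(event("cooling_ok")))` fires as soon as an overtemperature event arrives without a prior cooling acknowledgment, the kind of anomalous-condition rule the abstract describes.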

  15. Quantifying Loopy Network Architectures

    PubMed Central

    Katifori, Eleni; Magnasco, Marcelo O.

    2012-01-01

    Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight) and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs. PMID:22701593
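    One of the tree metrics mentioned, the Strahler order underlying the bifurcation ratios, is straightforward to compute on the trees the hierarchical loop decomposition produces. A minimal sketch (the tree representation and function name are assumptions, not the paper's code):

    ```python
    def strahler(children, node):
        """Strahler order of a rooted tree given as a dict mapping each
        node to its list of children; leaves have order 1."""
        kids = children.get(node, [])
        if not kids:
            return 1
        orders = sorted((strahler(children, k) for k in kids), reverse=True)
        # Order increases only when the two largest child orders are equal.
        if len(orders) >= 2 and orders[0] == orders[1]:
            return orders[0] + 1
        return orders[0]
    ```

    The Strahler bifurcation ratio is then the ratio of the number of branches of consecutive orders, computable by counting nodes of each order over such a tree.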

  16. Robust Software Architecture for Robots

    NASA Technical Reports Server (NTRS)

    Aghazanian, Hrand; Baumgartner, Eric; Garrett, Michael

    2009-01-01

    Robust Real-Time Reconfigurable Robotics Software Architecture (R4SA) is the name of both a software architecture and software that embodies the architecture. The architecture was conceived in the spirit of current practice in designing modular, hard real-time aerospace systems. The architecture facilitates the integration of new sensory, motor, and control software modules into the software of a given robotic system. R4SA was developed for initial application aboard exploratory mobile robots on Mars, but is adaptable to terrestrial robotic systems, real-time embedded computing systems in general, and robotic toys.

  17. Architecture of Chinese Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Cui, Chen-Zhou; Zhao, Yong-Heng

    2004-06-01

    Virtual Observatory (VO) is brought forward under the background of progresses of astronomical technologies and information technologies. VO architecture design embodies the combination of the above two technologies. As an introduction to the VO, the principles and workflow of the Virtual Observatory are given first. Then the latest progress on VO architecture is introduced. Based on Grid technology, a layered architecture model and a service-oriented architecture model are given for the Chinese Virtual Observatory. In the last part of the paper, some problems on architecture design are discussed in detail.

  18. Analyzing and Visualizing Whole Program Architectures

    SciTech Connect

    Panas, T; Quinlan, D; Vuduc, R

    2007-05-10

    This paper describes our work to develop new tool support for analyzing and visualizing the architecture of complete large-scale (millions or more lines of code) programs. Our approach consists of (i) creating a compact, accurate representation of a whole C or C++ program, (ii) analyzing the program in this representation, and (iii) visualizing the analysis results with respect to the program's architecture. We have implemented our approach by extending and combining a compiler infrastructure and a program visualization tool, and we believe our work will be of broad interest to those engaged in a variety of program understanding and transformation tasks. We have added new whole-program analysis support to ROSE [15, 14], a source-to-source C/C++ compiler infrastructure for creating customized analysis and transformation tools. Our whole-program work does not rely on procedure summaries; rather, we preserve all of the information present in the source while keeping our representation compact. In our representation, a million-line application fits in well less than 1 GB of memory. Because whole-program analyses can generate large amounts of data, we believe that abstracting and visualizing analysis results at the architecture level is critical to reducing the cognitive burden on the consumer of the analysis results. Therefore, we have extended Vizz3D [19], an interactive program visualization tool, with an appropriate metaphor and layout algorithm for representing a program's architecture. Our implementation provides developers with an intuitive, interactive way to view analysis results, such as those produced by ROSE, in the context of the program's architecture. The remainder of this paper summarizes our approach to whole-program analysis (Section 2) and provides an example of how we visualize the analysis results (Section 3).

  19. BADD phase II: DDS information management architecture

    NASA Astrophysics Data System (ADS)

    Stephenson, Thomas P.; DeCleene, Brian T.; Speckert, Glen; Voorhees, Harry L.

    1997-06-01

    The DARPA Battlefield Awareness and Data Dissemination (BADD) Phase II Program will provide the next generation multimedia information management architecture to support the warfighter. One goal of this architecture is proactive dissemination of information to the warfighter through strategies such as multicast and 'smart push and pull' designed to minimize latency and make maximum use of available communications bandwidth. Another goal is to support integration of information from widely distributed legacy repositories. This will enable the next generation of battlefield awareness applications to form a common operational view of the battlefield to aid joint service and/or multi-national peacekeeping forces. This paper discusses the approach we are taking to realize such an architecture for BADD. Our architecture and its implementation, known as the Distributed Dissemination Services (DDS), are based on two key concepts: a global database schema and an intelligent, proactive caching scheme. A global schema provides a common logical view of the information space in which the warfighter operates. This schema (or subsets of it) is shared by all warfighters through a distributed object database providing local access to all relevant metadata. This approach provides scalability to a large number of warfighters, and it supports tethered as well as autonomous operations. By utilizing DDS information integration services that provide transparent access to legacy databases, related information from multiple 'stovepipe' systems is now available to battlefield awareness applications. The second key concept embedded in our architecture is an intelligent, hierarchical caching system supported by proactive dissemination management services which push both lightweight and heavyweight data such as imagery and video to warfighters based on their information profiles. The goal of this approach is to transparently and proactively stage data which is likely to be requested by

  20. Capital Architecture: Situating symbolism parallel to architectural methods and technology

    NASA Astrophysics Data System (ADS)

    Daoud, Bassam

    Capital Architecture is a symbol of a nation's global presence and the cultural and social focal point of its inhabitants. Since the advent of High-Modernism in Western cities, and subsequently decolonised capitals, civic architecture no longer seems to be strictly grounded in the philosophy that national buildings shape the legacy of government and the way a nation is regarded through its built environment. Amidst an exceedingly globalized architectural practice and with the growing concern of key heritage foundations over the shortcomings of international modernism in representing its immediate socio-cultural context, the contextualization of public architecture within its sociological, cultural and economic framework in capital cities became the key denominator of this thesis. Civic architecture in capital cities is essential to confront the challenges of symbolizing a nation and demonstrating the legitimacy of the government. In today's dominantly secular Western societies, governmental architecture, especially where the seat of political power lies, is the ultimate form of architectural expression in conveying a sense of identity and underlining a nation's status. Departing from these convictions, this thesis investigates the embodied symbolic power, the representative capacity, and the inherent permanence in contemporary architecture, and in its modes of production. Through a vast study on Modern architectural ideals and heritage -- in parallel to methodologies -- the thesis examines the future of large-scale governmental building practices and aims to identify and index the key constituents that may respond to the lack of representation in civic architecture in capital cities.

  1. Surface Buildup Scenarios and Outpost Architectures for Lunar Exploration

    NASA Technical Reports Server (NTRS)

    Mazanek, Daniel D.; Troutman, Patrick A.; Culbert, Christopher J.; Leonard, Matthew J.; Spexarth, Gary R.

    2009-01-01

    The Constellation Program Architecture Team and the Lunar Surface Systems Project Office have developed an initial set of lunar surface buildup scenarios and associated polar outpost architectures, along with preliminary supporting element and system designs in support of NASA's Exploration Strategy. The surface scenarios are structured in such a way that outpost assembly can be suspended at any time to accommodate delivery contingencies or changes in mission emphasis. The modular nature of the architectures mitigates the impact of the loss of any one element and enhances the ability of international and commercial partners to contribute elements and systems. Additionally, the core lunar surface system technologies and outpost operations concepts are applicable to future Mars exploration. These buildup scenarios provide a point of departure for future trades and assessments of alternative architectures and surface elements.

  2. A Ground Systems Architecture Transition for a Distributed Operations System

    NASA Technical Reports Server (NTRS)

    Sellers, Donna; Pitts, Lee; Bryant, Barry

    2003-01-01

    The Marshall Space Flight Center (MSFC) Ground Systems Department (GSD) recently undertook an architecture change in the product line that serves the ISS program. As a result, the architecture tradeoffs between data system product lines that serve remote users and those that serve control center flight control teams were explored extensively. This paper describes the resulting architecture that will be used in the International Space Station (ISS) payloads program, and the resulting functional breakdown of the products that support this architecture. It also describes the lessons learned from the path that was followed, as the migration of products caused the allocation of functions across the architecture to be reevaluated. The result is a set of innovative ground system solutions scalable enough to support facilities of widely ranging sizes, from a small site up to large control centers. Effective use of system automation, custom components, design optimization for data management, data storage, data transmission, and advanced local and wide area networking architectures, plus the effective use of Commercial-Off-The-Shelf (COTS) products, provides flexible Remote Ground System options that can be tailored to the needs of each user. This paper describes the efficiency and effectiveness of the Ground Systems architectural options that have been implemented, and includes successful implementation examples and lessons learned.

  3. Exascale Hardware Architectures Working Group

    SciTech Connect

    Hemmert, S; Ang, J; Chiang, P; Carnes, B; Doerfler, D; Leininger, M; Dosanjh, S; Fields, P; Koch, K; Laros, J; Noe, J; Quinn, T; Torrellas, J; Vetter, J; Wampler, C; White, A

    2011-03-15

    The ASC Exascale Hardware Architecture working group is challenged to provide input on the following areas impacting the future use and usability of potential exascale computer systems: processor, memory, and interconnect architectures, as well as the power and resilience of these systems. Going forward, there are many challenging issues that will need to be addressed. First, power constraints in processor technologies will lead to steady increases in parallelism within a socket. Additionally, not all cores may be fully independent or fully general purpose. Second, there is a clear trend toward less balanced machines, in terms of compute capability compared to memory and interconnect performance. In order to mitigate the memory issues, memory technologies will introduce 3D stacking, eventually moving on-socket and likely on-die, providing greatly increased bandwidth but unfortunately also likely providing smaller memory capacity per core. Off-socket memory, possibly in the form of non-volatile memory, will create a complex memory hierarchy. Third, communication energy will dominate the energy required to compute, such that interconnect power and bandwidth will have a significant impact. All of the above changes are driven by the need for greatly increased energy efficiency, as current technology will prove unsuitable for exascale due to the unsustainable power requirements of such a system. These changes will have the most significant impact on programming models and algorithms, but they will be felt across all layers of the machine. There is a clear need to engage all ASC working groups in planning for how to deal with technological changes of this magnitude. The primary function of the Hardware Architecture Working Group is to facilitate codesign with hardware vendors to ensure future exascale platforms are capable of efficiently supporting the ASC applications, which in turn need to meet the mission needs of the NNSA Stockpile Stewardship Program. This issue is

  4. Systems Architecture for a Nationwide Healthcare System.

    PubMed

    Abin, Jorge; Nemeth, Horacio; Friedmann, Ignacio

    2015-01-01

    To provide information technology support at the national level, the Nationwide Integrated Healthcare System in Uruguay requires a model of Information Systems Architecture. This system has multiple healthcare providers (public and private) and a strong component of supplementary services. Thus, the data processing system should have an architecture that takes this into account, while integrating the central services provided by the Ministry of Public Health. The national electronic health record, as well as other related data processing systems, should be based on this architecture. The architecture model described here conceptualizes a federated framework of electronic health record systems, according to the IHE affinity model, HL7 standards, local standards on interoperability and security, and technical advice provided by AGESIC. It is the outcome of research done by AGESIC and the Systems Integration Laboratory (LINS) on the development and use of the e-Government Platform since 2008, as well as research done by the Salud.uy team since 2013. PMID:26262000

  5. Space Architecture: The Role, Work and Aptitude

    NASA Technical Reports Server (NTRS)

    Griffin, Brand

    2014-01-01

    Space architecture has been an emerging discipline for at least 40 years. Has it arrived? Is space architecture a legitimate vocation or an avocation? If it leads to a job, what do employers want? In 2002, NASA Headquarters created a management position for a space architect whose job was to "lead the development of strategic architectures and identify high level requirements for systems that will accomplish the Nation's space exploration vision." This is a good job description with responsibility at the right level in NASA, but unfortunately, the office was discontinued two years later. Even though there is no accredited academic program or professional licensing for space architecture, there is a community of practitioners. They are civil servants, contractors and academicians supporting International Space Station and space exploration programs. In various ways, space architects currently contribute to human spaceflight, but there is a way for the discipline to be more effective in developing solutions to large scale complex problems. This paper organizes contributions from engineers, architects and psychologists into recommendations on the role of space architects in the organization, the process of creating and selecting options, and intrinsic personality traits including why they must have a high tolerance for ambiguity.

  6. The Tera Multithreaded Architecture and Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.; Mavriplis, Dimitri J.

    1998-01-01

    The Tera Multithreaded Architecture (MTA) is a new parallel supercomputer currently being installed at the San Diego Supercomputing Center (SDSC). This machine has an architecture quite different from contemporary parallel machines. The computational processor is a custom design, and the machine uses hardware to support very fine-grained multithreading. The main memory is shared, hardware randomized, and flat. These features make the machine highly suited to the execution of unstructured mesh problems, which are difficult to parallelize on other architectures. We report the results of a study carried out during July-August 1998 to evaluate the execution of EUL3D, a code that solves the Euler equations on an unstructured mesh, on the 2-processor Tera MTA at SDSC. Our investigation shows that parallelization of an unstructured code is extremely easy on the Tera. We were able to get an existing parallel code (designed for a shared memory machine) running on the Tera by changing only the compiler directives. Furthermore, a serial version of this code was compiled to run in parallel on the Tera by judicious use of directives to invoke the "full/empty" tag bits of the machine to obtain synchronization. This version achieves 212 and 406 Mflop/s on one and two processors respectively, and requires no attention to partitioning or data placement, issues that would be of paramount importance on other parallel architectures.
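
    The "full/empty" tag-bit synchronization mentioned above can be emulated in software. The sketch below (plain Python threading, not Tera hardware) makes a read block until a write has filled the cell and a write block until a read has emptied it, mirroring the producer-consumer synchronization the MTA's tag bits provide per memory word.

```python
import threading

class FullEmptyCell:
    """Software emulation of a full/empty tagged memory word:
    reads block until the cell is 'full'; writes block until it is 'empty'."""
    def __init__(self):
        self._value = None
        self._full = False
        self._cv = threading.Condition()

    def write(self, value):
        """Block while full, then store the value and mark the cell full."""
        with self._cv:
            self._cv.wait_for(lambda: not self._full)
            self._value, self._full = value, True
            self._cv.notify_all()

    def read(self):
        """Block while empty, then return the value and mark the cell empty."""
        with self._cv:
            self._cv.wait_for(lambda: self._full)
            self._full = False
            self._cv.notify_all()
            return self._value

cell = FullEmptyCell()
results = []
consumer = threading.Thread(target=lambda: results.append(cell.read()))
consumer.start()      # consumer blocks: the cell starts empty
cell.write(42)        # producer fills the cell; consumer wakes and drains it
consumer.join()
```

    On the MTA this synchronization happens per word in hardware, which is why the compiler directives alone sufficed to parallelize the serial code.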

  7. The D3 Middleware Architecture

    NASA Technical Reports Server (NTRS)

    Walton, Joan; Filman, Robert E.; Korsmeyer, David J.; Lee, Diana D.; Mak, Ron; Patel, Tarang

    2002-01-01

    DARWIN is a NASA-developed, Internet-based system for enabling aerospace researchers to securely and remotely access and collaborate on the analysis of aerospace vehicle design data, primarily the results of wind-tunnel testing and numeric (e.g., computational fluid-dynamics) model executions. DARWIN captures, stores and indexes data; manages derived knowledge (such as visualizations across multiple datasets); and provides an environment for designers to collaborate in the analysis of test results. DARWIN is an interesting application because it supports high volumes of data, integrates multiple modalities of data display (e.g., images and data visualizations), and provides non-trivial access control mechanisms. DARWIN enables collaboration by sharing not only visualizations of data, but also commentary about and views of data. Here we provide an overview of the architecture of D3, the third generation of DARWIN. Earlier versions of DARWIN were characterized by browser-based interfaces and a hodge-podge of server technologies: CGI scripts, applets, PERL, and so forth. But browsers proved difficult to control, and a proliferation of computational mechanisms proved inefficient and difficult to maintain. D3 substitutes a pure-Java approach for that medley: a Java client communicates (through RMI over HTTPS) with a Java-based application server. Code on the server accesses information from JDBC databases, distributed LDAP security services, and a collaborative information system. D3 is a three-tier architecture, but unlike 'E-commerce' applications, the data usage pattern suggests different strategies than traditional Enterprise Java Beans - we need to move volumes of related data together, considerable processing happens on the client, and the 'business logic' on the server side is primarily data integration and collaboration. With D3, we are extending DARWIN to handle other data domains and to be a distributed system, where a single login allows a user

  8. Spacecraft Architecture and environmental psychology

    NASA Astrophysics Data System (ADS)

    Ören, Ayşe

    2016-07-01

    As we embark on a journey for new homes in the new worlds to lay solid foundations, we should consider not only the survival of frontiers but also well-being of those to live in zero gravity. As a versatile science, architecture encompasses abstract human needs as well. On our new different direction in the course of the Homo sapiens evolution, we can do this with designs addressing both our needs and senses. Well-being of humans can be achieved by creating environments supporting the cognitive and social stages in the evolution process. Space stations are going through their own evolution process. Any step taken can serve as a reference for further attempts. When studying the history of architecture, window designing is discussed in a later phase, which is the case for building a spaceship as well. We lean on the places we live both physically and metaphorically. The feeling of belonging is essential here, entailing trans-humanism, which is significant since the environment therein is like a dress comfortable enough to fit in, meeting needs without any burden. Utilizing the advent of technology, we can create moods and atmospheres to regulate night and day cycles, thus we can turn claustrophobic places into cozy or dream-like places. Senses provoke a psychological sensation going beyond cultural codes as they are rooted within consciousness, which allows designers to create a mood within a space that tells a story and evokes an emotional impact. Color, amount of light, sound and odor are not superficial. As much as intangible, they are real and powerful tools with a physical presence. Tapping into induction, we can solve a whole system based on a part thereof. Therefore, fractal designs may not yield good results unless used correctly in terms of design although they are functional, which makes geometric arrangement critical.

  10. Novel Payload Architectures for LISA

    NASA Astrophysics Data System (ADS)

    Johann, Ulrich A.; Gath, Peter F.; Holota, Wolfgang; Schulte, Hans Reiner; Weise, Dennis

    2006-11-01

    As part of the current LISA Mission Formulation Study, and based on prior internal investigations, Astrium Germany has defined and preliminarily assessed novel payload architectures that potentially reduce overall complexity and improve budgets and costs. A promising concept is characterized by a single active inertial sensor attached to a single optical bench and serving both adjacent interferometer arms via two rigidly connected off-axis telescopes. The in-plane triangular constellation "breathing angle" compensation is accomplished by common telescope in-field-of-view pointing actuation of the transmitted/received beams' line of sight. A dedicated actuation mechanism located on the optical bench is required in addition to the on-bench actuators for differential pointing of the transmit and receive directions perpendicular to the constellation plane. Both actuators operate in a sinusoidal yearly period. A technical challenge is the actuation mechanism's pointing jitter and the monitoring and calibration of the laser phase walk which occurs while changing the optical path inside the optical assembly during re-pointing. Calibration or monitoring of instrument-internal phase effects, e.g. by a laser metrology truss derived from the existing interferometry, is required. The architecture exploits in full the two-step interferometry (strap-down) concept, functionally separating inter-spacecraft and intra-spacecraft interferometry (reference mass laser metrology degrees-of-freedom sensing). The single test mass is maintained as cubic, but in free fall in the lateral degrees of freedom within the constellation plane. The option of a completely free spherical test mass with full laser interferometer readout has also been conceptually investigated. The spherical test mass would rotate slowly and would be allowed to tumble. Imperfections in roundness and density would be calibrated from differential wave front sensing in a tetrahedral arrangement, supported by added attitude

  11. A multi-agent architecture for geosimulation of moving agents

    NASA Astrophysics Data System (ADS)

    Vahidnia, Mohammad H.; Alesheikh, Ali A.; Alavipanah, Seyed Kazem

    2015-10-01

    In this paper, a novel architecture is proposed in which an axiomatic derivation system in the form of first-order logic facilitates declarative explanation and spatial reasoning. Simulation of environmental perception and interaction between autonomous agents is designed with a geographic belief-desire-intention and a request-inform-query model. The architecture has a complementary quantitative component that supports collaborative planning based on the concepts of equilibrium and game theory. This new architecture represents a departure from current best practice in geographic agent-based modelling. Implementation tasks are discussed in some detail, as are scenarios for fleet management and disaster management.

  12. Systems Architecture for Fully Autonomous Space Missions

    NASA Technical Reports Server (NTRS)

    Esper, Jamie; Schnurr, R.; VanSteenberg, M.; Brumfield, Mark (Technical Monitor)

    2002-01-01

    The NASA Goddard Space Flight Center is working to develop a revolutionary new system architecture concept in support of fully autonomous missions. As part of GSFC's contribution to the New Millennium Program (NMP) Space Technology 7 Autonomy and On-Board Processing (ST7-A) Concept Definition Study, the system incorporates the latest commercial Internet and software development ideas and extends them into NASA ground and space segment architectures. The unique challenges facing the exploration of remote and inaccessible locales, and the need to incorporate corresponding autonomy technologies within reasonable cost, necessitate the re-thinking of traditional mission architectures. A measure of the resiliency of this architecture in its application to a broad range of future autonomy missions will depend on its effectiveness in leveraging commercial tools developed for the personal computer and Internet markets. Specialized test stations and supporting software come to pass as spacecraft take advantage of the extensive tools and research investments of billion-dollar commercial ventures. The projected improvements of the Internet and supporting infrastructure go hand-in-hand with market pressures that provide continuity in research. By taking advantage of consumer-oriented methods and processes, space-flight missions will continue to leverage investments tailored to provide better services at reduced cost. The application of ground and space segment architectures each based on Local Area Networks (LAN), the use of personal computer-based operating systems, and the execution of activities and operations through a Wide Area Network (Internet) enable a revolution in spacecraft mission formulation, implementation, and flight operations. Hardware and software design, development, integration, test, and flight operations are all tied closely to a common thread that enables smooth transitioning between program phases. The application of commercial software

  13. ROADM architectures and technologies for agile optical networks

    NASA Astrophysics Data System (ADS)

    Eldada, Louay A.

    2007-02-01

    We review the different optoelectronic component and module technologies that have been developed for use in ROADM subsystems, and describe their principles of operation, designs, features, advantages, and challenges. We also describe the various needs for reconfigurable optical add/drop switching in agile optical networks. For each network need, we present the different ROADM subsystem architecture options with their pros and cons, and describe the optoelectronic technologies supporting each architecture.

  14. Avionics Architectures for Exploration: Wireless Technologies and Human Spaceflight

    NASA Technical Reports Server (NTRS)

    Goforth, Montgomery B.; Ratliff, James E.; Barton, Richard J.; Wagner, Raymond S.; Lansdowne, Chatwin

    2014-01-01

    The authors describe ongoing efforts by the Avionics Architectures for Exploration (AAE) project chartered by NASA's Advanced Exploration Systems (AES) Program to evaluate new avionics architectures and technologies, provide objective comparisons of them, and mature selected technologies for flight and for use by other AES projects. The AAE project team includes members from most NASA centers and from industry. This paper provides an overview of recent AAE efforts, with particular emphasis on the wireless technologies being evaluated under AES to support human spaceflight.

  15. Toward a Framework for Modeling Space Systems Architectures

    NASA Technical Reports Server (NTRS)

    Shames, Peter; Skipper, Joseph

    2006-01-01

    In this paper we will describe this extended RASDS/RAMSS methodology, the set of viewpoints that we have derived, and describe their relationship to RM-ODP. While this methodology may be directly used in a variety of document driven ways to describe space system architecture, the real power of it will come when there are tools available that will support full description of system architectures that can be captured electronically in a way that permits their analysis, verification, and transformation.

  16. Science Driven Supercomputing Architectures: Analyzing Architectural Bottlenecks with Applications and Benchmark Probes

    SciTech Connect

    Kamil, S.; Yelick, K.; Kramer, W.T.; Oliker, L.; Shalf, J.; Shan,H.; Strohmaier, E.

    2005-09-26

    There is a growing gap between the peak speed of parallel computing systems and the actual delivered performance for scientific applications. In general this gap is caused by inadequate architectural support for the requirements of modern scientific applications, as commercial applications, and the much larger market they represent, have driven the evolution of computer architectures. This gap has raised the importance of developing better benchmarking methodologies to characterize and understand the performance requirements of scientific applications, and to communicate them efficiently in order to influence the design of future computer architectures. This improved understanding of the performance behavior of scientific applications will allow improved performance predictions, development of adequate benchmarks for identification of hardware and application features that work well or poorly together, and more systematic performance evaluation in procurement situations. The Berkeley Institute for Performance Studies has developed a three-level approach to evaluating the design of high-end machines and the software that runs on them: (1) a suite of representative applications; (2) a set of application kernels; and (3) benchmarks to measure key system parameters. The three levels yield different types of information, all of which are useful in evaluating systems, and enable NSF and DOE centers to select computer architectures better suited for scientific applications. The analysis will further allow the centers to engage vendors in discussion of strategies to alleviate the present architectural bottlenecks using quantitative information. These may include small hardware changes or larger ones that may lie outside the interests of non-scientific workloads. Providing quantitative models to the vendors allows them to assess the benefits of technology alternatives using their own internal cost models in the broader marketplace, ideally facilitating the development of future computer
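
    Level (3) of the approach above, probes of key system parameters, can be as simple as timing a memory copy. The sketch below is a rough, illustrative stand-in for a STREAM-style bandwidth probe, not one of the Berkeley suite's actual benchmarks; the buffer size and trial count are arbitrary.

```python
import time

def probe_copy_bandwidth(nbytes=16 * 2**20, trials=5):
    """Measure sustained memory-copy bandwidth in GB/s by timing full
    bytearray copies and keeping the best (least-disturbed) trial."""
    src = bytearray(nbytes)
    best = float("inf")
    for _ in range(trials):
        t0 = time.perf_counter()
        dst = src[:]                        # nbytes read + nbytes written
        best = min(best, time.perf_counter() - t0)
    del dst
    return 2 * nbytes / best / 1e9          # bytes moved per second, in GB/s

bw = probe_copy_bandwidth()
```

    Comparing such probe numbers against an application kernel's achieved rate is what exposes the "architectural bottleneck" the abstract refers to.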

  17. 18. Photocopy of drawing (1961 architectural drawing by Kaiser Engineers) ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    18. Photocopy of drawing (1961 architectural drawing by Kaiser Engineers) FLOOR PLAN, ELEVATIONS, AND SCHEDULE FOR VEHICLE SUPPORT BUILDING, SHEET A-1 - Vandenberg Air Force Base, Space Launch Complex 3, Vehicle Support Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  18. Mind and Language Architecture

    PubMed Central

    Logan, Robert K

    2010-01-01

    A distinction is made between the brain and the mind. The architecture of the mind and language is then described within a neo-dualistic framework. A model for the origin of language based on emergence theory is presented. The complexity of hominid existence due to tool making, the control of fire and the social cooperation that fire required gave rise to a new level of order in mental activity and triggered the simultaneous emergence of language and conceptual thought. The mind is shown to have emerged as a bifurcation of the brain with the emergence of language. The role of language in the evolution of human culture is also described. PMID:20922045

  19. Architecture, constraints, and behavior

    PubMed Central

    Doyle, John C.; Csete, Marie

    2011-01-01

    This paper aims to bridge progress in neuroscience involving sophisticated quantitative analysis of behavior, including the use of robust control, with other relevant conceptual and theoretical frameworks from systems engineering, systems biology, and mathematics. Familiar and accessible case studies are used to illustrate concepts of robustness, organization, and architecture (modularity and protocols) that are central to understanding complex networks. These essential organizational features are hidden during normal function of a system but are fundamental for understanding the nature, design, and function of complex biologic and technologic systems. PMID:21788505

  20. Evolution of genome architecture.

    PubMed

    Koonin, Eugene V

    2009-02-01

    Charles Darwin believed that all traits of organisms have been honed to near perfection by natural selection. The empirical basis underlying Darwin's conclusions consisted of numerous observations made by him and other naturalists on the exquisite adaptations of animals and plants to their natural habitats and on the impressive results of artificial selection. Darwin fully appreciated the importance of heredity but was unaware of the nature and, in fact, the very existence of genomes. A century and a half after the publication of the "Origin", we have the opportunity to draw conclusions from the comparisons of hundreds of genome sequences from all walks of life. These comparisons suggest that the dominant mode of genome evolution is quite different from that of the phenotypic evolution. The genomes of vertebrates, those purported paragons of biological perfection, turned out to be veritable junkyards of selfish genetic elements where only a small fraction of the genetic material is dedicated to encoding biologically relevant information. In sharp contrast, genomes of microbes and viruses are incomparably more compact, with most of the genetic material assigned to distinct biological functions. However, even in these genomes, the specific genome organization (gene order) is poorly conserved. The results of comparative genomics lead to the conclusion that the genome architecture is not a straightforward result of continuous adaptation but rather is determined by the balance between the selection pressure, that is itself dependent on the effective population size and mutation rate, the level of recombination, and the activity of selfish elements. Although genes and, in many cases, multigene regions of genomes possess elaborate architectures that ensure regulation of expression, these arrangements are evolutionarily volatile and typically change substantially even on short evolutionary scales when gene sequences diverge minimally. Thus, the observed genome

  1. Architecture for Teraflop Visualization

    SciTech Connect

    Breckenridge, A.R.; Haynes, R.A.

    1999-04-09

    Sandia Laboratories' computational scientists are addressing a very important question: How do we get insight from the human combined with the computer-generated information? The answer inevitably leads to using scientific visualization. Going one technology leap further is teraflop visualization, where the computing model and interactive graphics are an integral whole to provide computing for insight. In order to implement our teraflop visualization architecture, all hardware installed or software coded will be based on open modules and dynamic extensibility principles. We will illustrate these concepts with examples in our three main research areas: (1) authoring content (the computer), (2) enhancing precision and resolution (the human), and (3) adding behaviors (the physics).

  2. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    The contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; RELACS - A recursive layout computing system; and Parallel linear conflict-free subtree access.

  3. Etruscan Divination and Architecture

    NASA Astrophysics Data System (ADS)

    Magli, Giulio

    The Etruscan religion was characterized by divination methods, aimed at interpreting the will of the gods. These methods were revealed by the gods themselves and written in the books of the Etrusca Disciplina. The books are lost, but parts of them are preserved in the accounts of later Latin sources. According to such traditions divination was tightly connected with the Etruscan cosmovision of a Pantheon distributed in equally spaced, specific sectors of the celestial realm. We explore here the possible reflections of such issues in the Etruscan architectural remains.

  4. TROPIX Power System Architecture

    NASA Technical Reports Server (NTRS)

    Manner, David B.; Hickman, J. Mark

    1995-01-01

    This document contains results obtained in the process of performing a power system definition study of the TROPIX power management and distribution system (PMAD). Requirements derived from the PMAD's interaction with other spacecraft systems are discussed first. Since the design is dependent on the performance of the photovoltaics, there is a comprehensive discussion of the appropriate models for cells and arrays. A trade study of the array operating voltage and its effect on array bus mass is also presented. A system architecture is developed which makes use of a combination of high efficiency switching power convertors and analog regulators. Mass and volume estimates are presented for all subsystems.

  5. Architecture for robot intelligence

    NASA Technical Reports Server (NTRS)

    Peters, II, Richard Alan (Inventor)

    2004-01-01

    An architecture for robot intelligence enables a robot to learn new behaviors and create new behavior sequences autonomously and interact with a dynamically changing environment. Sensory information is mapped onto a Sensory Ego-Sphere (SES) that rapidly identifies important changes in the environment and functions much like short term memory. Behaviors are stored in a DBAM that creates an active map from the robot's current state to a goal state and functions much like long term memory. A dream state converts recent activities stored in the SES and creates or modifies behaviors in the DBAM.

  6. Towards a Domain Specific Software Architecture for Scientific Data Distribution

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D. M.

    2011-12-01

    A reference architecture is a "design that satisfies a clearly distinguished subset of the functional capabilities identified in the reference requirements within the boundaries of certain design and implementation constraints, also identified in reference requirements." [Tracz, 1995] Recognizing the value of a reference architecture, NASA's ESDSWG's Standards Process Group (SPG) is introducing a multi-disciplinary science data systems (SDS) reference architecture in order to provide an implementation neutral, template solution for an architecture to support scientific data systems in general [Burnett, et al, 2011]. This reference architecture describes common features and patterns in scientific data systems, and can thus provide guidelines in building and improving such systems. But guidelines alone may not be sufficient to actually build a system. A domain specific software architecture (DSSA) is "an assemblage of software components, specialized for a particular type of task (domain), generalized for effective use across that domain, composed in a standardized structure (topology) effective for building successful applications." [Tracz, 1995] It can be thought of as a relatively specific reference architecture. The "DSSA Process" is a software life cycle developed at Carnegie Mellon's Software Engineering Institute that is based on the development and use of domain-specific software architectures, components, and tools. The process has four distinct activities: 1) develop a domain specific base/model, 2) populate and maintain the library, 3) build applications, 4) operate and maintain applications [Armitage, 1993]. The DSSA process may provide the missing link between guidelines and actual system construction. In this presentation we focus specifically on the realm of scientific data access and distribution. Assuming the role of domain experts in building data access systems, we report the results of creating a DSSA for scientific data distribution.
We describe

  7. A Geosynchronous Orbit Optical Communications Relay Architecture

    NASA Technical Reports Server (NTRS)

    Edwards, Bernard L.; Israel, David J.

    2014-01-01

    NASA is planning to fly a Next Generation Tracking and Data Relay Satellite (TDRS) next decade. While the requirements and architecture for that satellite are unknown at this time, NASA is investing in communications technologies that could be deployed on the satellite to provide new communications services. One of those new technologies is optical communications. The Laser Communications Relay Demonstration (LCRD) project, scheduled for launch in December 2017 as a hosted payload on a commercial communications satellite, is a critical pathfinder towards NASA providing optical communications services on the Next Generation TDRS. While it is obvious that a small to medium sized optical communications terminal could be flown on a GEO satellite to provide support to Near Earth missions, it is also possible to deploy a large terminal on the satellite to support Deep Space missions. Onboard data processing and Delay Tolerant Networking (DTN) are two additional technologies that could be used to optimize optical communications link services and enable additional mission and network operations. This paper provides a possible architecture for the optical communications augmentation of a Next Generation TDRS and touches on the critical technology work currently being done at NASA. It will also describe the impact of clouds on such an architecture and possible mitigation techniques.

  8. Options for a lunar base surface architecture

    NASA Astrophysics Data System (ADS)

    Roberts, Barney B.

    1992-02-01

    The Planet Surface Systems Office at the NASA Johnson Space Center has participated in an analysis of the Space Exploration Initiative architectures described in the Synthesis Group report. This effort involves a Systems Engineering and Integration effort to define point designs for evolving lunar and Mars bases that support substantial science, exploration, and resource production objectives. The analysis addresses systems-level designs; element requirements and conceptual designs; assessments of precursor and technology needs; and overall programmatics and schedules. This paper focuses on the results of the study of the Space Resource Utilization Architecture. This architecture develops the capability to extract useful materials from the indigenous resources of the Moon and Mars. On the Moon, a substantial infrastructure is emplaced which can support a crew of up to twelve. Two major process lines are developed: one produces oxygen, ceramics, and metals; the other produces hydrogen, helium, and other volatiles. The Moon is also used for a simulation of a Mars mission. Significant science capabilities are established in conjunction with resource development. Exploration includes remote global surveys and piloted sorties of local and regional areas. Science accommodations include planetary science, astronomy, and biomedical research. Greenhouses are established to provide a substantial portion of food needs.

  9. Architectures Toward Reusable Science Data Systems

    NASA Technical Reports Server (NTRS)

    Moses, John

    2015-01-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research and NOAA's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience we expect to find architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  10. Options for a lunar base surface architecture

    NASA Technical Reports Server (NTRS)

    Roberts, Barney B.

    1992-01-01

    The Planet Surface Systems Office at the NASA Johnson Space Center has participated in an analysis of the Space Exploration Initiative architectures described in the Synthesis Group report. This effort involves a Systems Engineering and Integration effort to define point designs for evolving lunar and Mars bases that support substantial science, exploration, and resource production objectives. The analysis addresses systems-level designs; element requirements and conceptual designs; assessments of precursor and technology needs; and overall programmatics and schedules. This paper focuses on the results of the study of the Space Resource Utilization Architecture. This architecture develops the capability to extract useful materials from the indigenous resources of the Moon and Mars. On the Moon, a substantial infrastructure is emplaced which can support a crew of up to twelve. Two major process lines are developed: one produces oxygen, ceramics, and metals; the other produces hydrogen, helium, and other volatiles. The Moon is also used for a simulation of a Mars mission. Significant science capabilities are established in conjunction with resource development. Exploration includes remote global surveys and piloted sorties of local and regional areas. Science accommodations include planetary science, astronomy, and biomedical research. Greenhouses are established to provide a substantial portion of food needs.

  11. Architectures Toward Reusable Science Data Systems

    NASA Astrophysics Data System (ADS)

    Moses, J. F.

    2014-12-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building ground systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research, NOAA's weather satellites and USGS's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience the goal is to recognize architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  12. Architectures for intelligent machines

    NASA Technical Reports Server (NTRS)

    Saridis, George N.

    1991-01-01

    The theory of intelligent machines has been recently reformulated to incorporate new architectures that use neural and Petri nets. The analytic functions of an intelligent machine are implemented by intelligent controls, using entropy as a measure. The resulting hierarchical control structure is based on the principle of increasing precision with decreasing intelligence. Each of the three levels of the intelligent control uses a different architecture in order to satisfy the requirements of the principle: the organization level is modeled after a Boltzmann machine for abstract reasoning, task planning and decision making; the coordination level is composed of a number of Petri net transducers supervised, for command exchange, by a dispatcher, which also serves as an interface to the organization level; the execution level includes the sensory, navigation planning, and control hardware, which interacts one-to-one with the appropriate coordinators, while a VME bus provides a channel for database exchange among the several devices. This system is currently implemented on a robotic transporter designed for space construction at the CIRSSE laboratories at the Rensselaer Polytechnic Institute. The progress of its development is reported.

  13. Autonomous droplet architectures.

    PubMed

    Jones, Gareth; King, Philip H; Morgan, Hywel; de Planque, Maurits R R; Zauner, Klaus-Peter

    2015-01-01

    The quintessential living element of all organisms is the cell: a fluid-filled compartment enclosed, but not isolated, by a layer of amphiphilic molecules that self-assemble at its boundary. Cells of different composition can aggregate and communicate through the exchange of molecules across their boundaries. The astounding success of this architecture is readily apparent throughout the biological world. Inspired by the versatility of nature's architecture, we investigate aggregates of membrane-enclosed droplets as a design concept for robotics. This will require droplets capable of sensing, information processing, and actuation. It will also require the integration of functionally specialized droplets into an interconnected functional unit. Based on results from the literature and from our own laboratory, we argue the viability of this approach. Sensing and information processing in droplets have been the subject of several recent studies, on which we draw. Integrating droplets into coherently acting units and the aspect of controlled actuation for locomotion have received less attention. This article describes experiments that address both of these challenges. Using lipid-coated droplets of Belousov-Zhabotinsky reaction medium in oil, we show here that such droplets can be integrated and that chemically driven mechanical motion can be achieved. PMID:25622015

  14. Rutger's CAM2000 chip architecture

    NASA Technical Reports Server (NTRS)

    Smith, Donald E.; Hall, J. Storrs; Miyake, Keith

    1993-01-01

    This report describes the architecture and instruction set of the Rutgers CAM2000 memory chip. The CAM2000 combines features of Associative Processing (AP), Content Addressable Memory (CAM), and Dynamic Random Access Memory (DRAM) in a single chip package that is not only DRAM compatible but capable of applying simple massively parallel operations to memory. This document reflects the current status of the CAM2000 architecture and is continually updated to reflect the current state of the architecture and instruction set.

  15. Software synthesis using generic architectures

    NASA Technical Reports Server (NTRS)

    Bhansali, Sanjay

    1993-01-01

    A framework for synthesizing software systems based on abstracting software system designs and the design process is described. The result of such an abstraction process is a generic architecture and the process knowledge for customizing the architecture. The customization process knowledge is used to assist a designer in customizing the architecture, as opposed to completely automating the design of systems. Our approach is illustrated with an implemented example of a generic tracking architecture that was customized in two different domains. We describe how the designs produced using KASE compare to the original designs of the two systems, as well as current work and plans for extending KASE to other application areas.

  16. Roadmap to the SRS computing architecture

    SciTech Connect

    Johnson, A.

    1994-07-05

    This document outlines the major steps that must be taken by the Savannah River Site (SRS) to migrate the SRS information technology (IT) environment to the new architecture described in the Savannah River Site Computing Architecture. This document proposes an IT environment that is "...standards-based, data-driven, and workstation-oriented, with larger systems being utilized for the delivery of needed information to users in a client-server relationship." Achieving this vision will require many substantial changes in the computing applications, systems, and supporting infrastructure at the site. This document consists of a set of roadmaps which provide explanations of the necessary changes for IT at the site and describes the milestones that must be completed to finish the migration.

  17. Exploration Architecture Options - ECLSS, TCS, EVA Implications

    NASA Technical Reports Server (NTRS)

    Chambliss, Joe; Henninger, Don

    2011-01-01

    Many options for the exploration of space have been identified and evaluated since the Vision for Space Exploration (VSE) was announced in 2004. The Augustine Commission evaluated human space flight for the Obama administration, and then the Human Exploration Framework Teams (HEFT and HEFT2) evaluated potential exploration missions and the infrastructure and technology needs for those missions. Lunar architectures have been identified and addressed by the Lunar Surface Systems team to establish options for how to get to, and then inhabit and explore, the Moon. This paper evaluates these space exploration architecture options for their implications on the Environmental Control and Life Support (ECLSS), Thermal Control (TCS), and Extravehicular Activity (EVA) systems.

  18. A computational architecture for social agents

    SciTech Connect

    Bond, A.H.

    1996-12-31

    This article describes a new class of information-processing models for social agents. They are derived from primate brain architecture, the processing in brain regions, the interactions among brain regions, and the social behavior of primates. In another paper, we have reviewed the neuroanatomical connections and functional involvements of cortical regions. We reviewed the evidence for a hierarchical architecture in the primate brain. By examining neuroanatomical evidence for connections among neural areas, we were able to establish anatomical regions and connections. We then examined evidence for specific functional involvements of the different neural areas and found some support for hierarchical functioning, not only for the perception hierarchies but also for the planning and action hierarchy in the frontal lobes.

  19. Architectural Implications for Spatial Object Association Algorithms

    SciTech Connect

    Kumar, V S; Kurc, T; Saltz, J; Abdulla, G; Kohn, S R; Matarazzo, C

    2009-01-29

    Spatial object association, also referred to as cross-match of spatial datasets, is the problem of identifying and comparing objects in two or more datasets based on their positions in a common spatial coordinate system. In this work, we evaluate two crossmatch algorithms that are used for astronomical sky surveys, on the following database system architecture configurations: (1) Netezza Performance Server®, a parallel database system with active disk style processing capabilities, (2) MySQL Cluster, a high-throughput network database system, and (3) a hybrid configuration consisting of a collection of independent database system instances with data replication support. Our evaluation provides insights about how architectural characteristics of these systems affect the performance of the spatial crossmatch algorithms. We conducted our study using real use-case scenarios borrowed from a large-scale astronomy application known as the Large Synoptic Survey Telescope (LSST).

  20. Architectural Implications for Spatial Object Association Algorithms*

    PubMed Central

    Kumar, Vijay S.; Kurc, Tahsin; Saltz, Joel; Abdulla, Ghaleb; Kohn, Scott R.; Matarazzo, Celeste

    2013-01-01

    Spatial object association, also referred to as crossmatch of spatial datasets, is the problem of identifying and comparing objects in two or more datasets based on their positions in a common spatial coordinate system. In this work, we evaluate two crossmatch algorithms that are used for astronomical sky surveys, on the following database system architecture configurations: (1) Netezza Performance Server®, a parallel database system with active disk style processing capabilities, (2) MySQL Cluster, a high-throughput network database system, and (3) a hybrid configuration consisting of a collection of independent database system instances with data replication support. Our evaluation provides insights about how architectural characteristics of these systems affect the performance of the spatial crossmatch algorithms. We conducted our study using real use-case scenarios borrowed from a large-scale astronomy application known as the Large Synoptic Survey Telescope (LSST). PMID:25692244
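    The crossmatch operation that both studies benchmark reduces, at its core, to finding counterpart objects within an angular tolerance. A naive, dependency-free sketch of that core (function names are our own; the systems evaluated above push this work into the database layer with spatial indexing rather than a brute-force scan):

```python
import math

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees via the haversine formula."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((d2 - d1) / 2) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

def crossmatch(cat_a, cat_b, tol_deg):
    """Naive O(n*m) crossmatch: (i, j) index pairs within tol_deg.

    Each catalog is a list of (ra, dec) tuples in degrees.
    """
    return [(i, j)
            for i, (ra1, dec1) in enumerate(cat_a)
            for j, (ra2, dec2) in enumerate(cat_b)
            if angular_sep_deg(ra1, dec1, ra2, dec2) <= tol_deg]
```

    The architectural question the papers study is precisely how to avoid this quadratic scan at survey scale, e.g. with partitioned, indexed, or replicated database back ends.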

  1. A resource management architecture for metacomputing systems.

    SciTech Connect

    Czajkowski, K.; Foster, I.; Karonis, N.; Kesselman, C.; Martin, S.; Smith, W.; Tuecke, S.

    1999-08-24

    Metacomputing systems are intended to support remote and/or concurrent use of geographically distributed computational resources. Resource management in such systems is complicated by five concerns that do not typically arise in other situations: site autonomy and heterogeneous substrates at the resources, and application requirements for policy extensibility, co-allocation, and online control. We describe a resource management architecture that addresses these concerns. This architecture distributes the resource management problem among distinct local manager, resource broker, and resource co-allocator components and defines an extensible resource specification language to exchange information about requirements. We describe how these techniques have been implemented in the context of the Globus metacomputing toolkit and used to implement a variety of different resource management strategies. We report on our experiences applying our techniques in a large testbed, GUSTO, incorporating 15 sites, 330 computers, and 3600 processors.
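    The extensible resource specification language described above is exchanged between brokers, co-allocators, and local managers. As a minimal sketch of that matchmaking role (a hypothetical dictionary-based analogue, not the actual Globus specification syntax; all keys and rules are illustrative):

```python
def satisfies(request, resource):
    """True if a resource description meets every constraint in a request.

    Numeric constraints are treated as minima; symbolic ones as exact
    matches. (Illustrative semantics only; keys are hypothetical.)
    """
    for key, need in request.items():
        have = resource.get(key)
        if have is None:
            return False
        if isinstance(need, (int, float)):
            if have < need:
                return False
        elif have != need:
            return False
    return True

def broker(request, resources):
    """Resource broker role: names of all resources that could satisfy."""
    return [r["name"] for r in resources if satisfies(request, r)]
```

    A co-allocator would then split a multi-site request across several such candidates; the point of an extensible language is that new constraint keys can be added without changing the broker.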

  2. Radiant exchange in partially specular architectural environments

    NASA Astrophysics Data System (ADS)

    Beamer, C. Walter; Muehleisen, Ralph T.

    2003-10-01

    The radiant exchange method, also known as radiosity, was originally developed for thermal radiative heat transfer applications. Later it was used to model architectural lighting systems, and more recently it has been extended to model acoustic systems. While there are subtle differences in these applications, the basic method is based on solving a system of energy balance equations, and it is best applied to spaces with mainly diffuse reflecting surfaces. The obvious drawback to this method is that it is based around the assumption that all surfaces in the system are diffuse reflectors. Because almost all architectural systems have at least some partially specular reflecting surfaces, it is important to extend the radiant exchange method to deal with this type of surface reflection. [Work supported by NSF.]
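    The energy-balance system underlying the radiant exchange method can be written B_i = E_i + rho_i * sum_j F_ij B_j, i.e. the linear system (I - diag(rho) F) B = E for surface radiosities B given emissions E, reflectances rho, and form factors F. A minimal dependency-free solver, offered purely as an illustration of the diffuse-only formulation the abstract seeks to extend:

```python
def solve_radiosity(emission, reflectance, form_factors):
    """Solve B = E + diag(rho) @ F @ B by Gaussian elimination."""
    n = len(emission)
    # Augmented matrix for (I - diag(rho) F) B = E.
    a = [[(1.0 if i == j else 0.0) - reflectance[i] * form_factors[i][j]
          for j in range(n)] + [emission[i]] for i in range(n)]
    for col in range(n):                       # forward elimination, pivoting
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for row in range(col + 1, n):
            f = a[row][col] / a[col][col]
            for k in range(col, n + 1):
                a[row][k] -= f * a[col][k]
    b = [0.0] * n                              # back substitution
    for row in range(n - 1, -1, -1):
        s = a[row][n] - sum(a[row][k] * b[k] for k in range(row + 1, n))
        b[row] = s / a[row][row]
    return b
```

    For two facing surfaces that see only each other (F = [[0, 1], [1, 0]]) with rho = [0.5, 0.5] and E = [1, 0], this yields B = [4/3, 2/3]: each surface's radiosity includes reflected light from the other.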

  3. Integrated Network Architecture for NASA's Orion Missions

    NASA Technical Reports Server (NTRS)

    Bhasin, Kul B.; Hayden, Jeffrey L.; Sartwell, Thomas; Miller, Ronald A.; Hudiburg, John J.

    2008-01-01

    NASA is planning a series of short and long duration human and robotic missions to explore the Moon and then Mars. The series of missions will begin with a new crew exploration vehicle (called Orion) that will initially provide crew exchange and cargo supply support to the International Space Station (ISS) and then become a human conveyance for travel to the Moon. The Orion vehicle will be mounted atop the Ares I launch vehicle for a series of pre-launch tests and then launched and inserted into low Earth orbit (LEO) for crew exchange missions to the ISS. The Orion and Ares I comprise the initial vehicles in the Constellation system of systems that later includes Ares V, Earth departure stage, lunar lander, and other lunar surface systems for the lunar exploration missions. These key systems will enable the lunar surface exploration missions to be initiated in 2018. The complexity of the Constellation system of systems and missions will require a communication and navigation infrastructure to provide low and high rate forward and return communication services, tracking services, and ground network services. The infrastructure must provide robust, reliable, safe, sustainable, and autonomous operations at minimum cost while maximizing the exploration capabilities and science return. The infrastructure will be based on a network of networks architecture that will integrate NASA legacy communication, modified elements, and navigation systems. New networks will be added to extend communication, navigation, and timing services for the Moon missions. Internet protocol (IP) and network management systems within the networks will enable interoperability throughout the Constellation system of systems. An integrated network architecture has been developed based on the emerging Constellation requirements for Orion missions. The architecture, as presented in this paper, addresses the early Orion missions to the ISS with communication, navigation, and network services over five

  4. 9. Photocopy of architectural drawing (from National Archives Architectural and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. Photocopy of architectural drawing (from National Archives Architectural and Cartographic Branch, Alexandria, Va.) Annotated lithograph on paper. Standard plan used for construction of Commissary Sergeants Quarters, 1876. PLAN, FRONT AND SIDE ELEVATIONS, SECTION - Fort Myer, Commissary Sergeant's Quarters, Washington Avenue between Johnson Lane & Custer Road, Arlington, Arlington County, VA

  5. Integrated Operations Architecture Technology Assessment Study

    NASA Technical Reports Server (NTRS)

    2001-01-01

    As part of NASA's Integrated Operations Architecture (IOA) Baseline, NASA will consolidate all communications operations, including ground-based, near-earth, and deep-space communications, into a single integrated network. This network will make maximum use of commercial equipment, services and standards. It will be an Internet Protocol (IP) based network. This study supports technology development planning for the IOA. The technical problems that may arise when LEO mission spacecraft interoperate with commercial satellite services were investigated. Commercial technology and services that could support the IOA were surveyed, and gaps in the capability of existing technology and techniques were identified. Recommendations were made on which gaps should be closed by means of NASA research and development funding. Several findings emerged from the interoperability assessment: in the NASA mission set, there is a preponderance of small, inexpensive, low data rate science missions; proposed commercial satellite communications services could potentially provide TDRSS-like data relay functions; and IP and related protocols, such as TCP, require augmentation to operate in the mobile networking environment required by the space-to-ground portion of the IOA. Five case studies were performed in the technology assessment. Each case represented a realistic implementation of the near-earth portion of the IOA. The cases included the use of frequencies at L-band, Ka-band and the optical spectrum. The cases also represented both space relay architectures and direct-to-ground architectures. 
Some of the main recommendations resulting from the case studies are: select an architecture for the LEO/MEO communications network; pursue the development of a Ka-band space-qualified transmitter (and possibly a receiver), and a low-cost Ka-band ground terminal for a direct-to-ground network; and pursue the development of an Inmarsat (L-band) space-qualified transceiver to implement a global, low

  6. Instrument calibration architecture of Radar Imaging Satellite (RISAT-1)

    NASA Astrophysics Data System (ADS)

    Misra, T.; Bhan, R.; Putrevu, D.; Mehrotra, P.; Nandy, P. S.; Shukla, S. D.; Rao, C. V. N.; Dave, D. B.; Desai, N. M.

    2016-05-01

    The Radar Imaging Satellite (RISAT-1) payload system is configured to perform self-calibration of the transmit and receive paths before and after imaging sessions through a special instrument calibration technique. The instrument calibration architecture of RISAT-1 supported ground verification and validation of the payload, including the active array antenna. During on-ground validation of the 126 beams of the active array antenna, which required precise calibration of boresight pointing, a unique method called "collimation coefficient error estimation" was utilized. This method of antenna calibration was supported by the special hardware and software calibration architecture of RISAT-1. This paper concentrates on the RISAT-1 hardware and software architecture that supports in-orbit and on-ground instrument calibration. We also highlight the use of RISAT-1's special calibration scheme to evaluate system response during ground verification and validation.

  7. The Architecture of Exoplanets

    NASA Astrophysics Data System (ADS)

    Hatzes, Artie P.

    2016-05-01

    Prior to the discovery of exoplanets our expectations of their architecture were largely driven by the properties of our solar system. We expected giant planets to lie in the outer regions and rocky planets in the inner regions. Planets should probably only occupy orbital distances 0.3-30 AU from the star. Planetary orbits should be circular, prograde and in the same plane. The reality of exoplanets has shattered these expectations. Jupiter-mass, Neptune-mass, Superearth, and even Earth-mass planets can orbit within 0.05 AU of their stars, sometimes with orbital periods of less than one day. Exoplanetary orbits can be eccentric, misaligned, and even retrograde. Radial velocity surveys gave the first hints that the occurrence rate increases with decreasing mass. This was put on a firm statistical basis with the Kepler mission, which clearly demonstrated that there are more Neptune- and Superearth-sized planets than Jupiter-sized planets. These are often in multiple, densely packed systems where the planets all orbit within 0.3 AU of the star, a result also suggested by radial velocity surveys. Exoplanets also exhibit diversity along the main sequence. Massive stars tend to have a higher frequency of planets (≈ 20-25%) that tend to be more massive (M ≈ 5-10 M_Jup). Giant planets around low mass stars are rare, but these stars show an abundance of small (Neptune and Superearth) planets in multiple systems. Planet formation is also not restricted to single stars, as the Kepler mission has discovered several circumbinary planets. Although we have learned much about the architecture of planets over the past 20 years, we know little about the census of small planets at relatively large (a > 1 AU) orbital distances. We have yet to find a planetary system that is analogous to our own solar system. The question of how unique are the properties of our own solar system remains unanswered. Advancements in the detection methods of small planets over a wide range

  8. Multiprocessor architectural study

    NASA Technical Reports Server (NTRS)

    Kosmala, A. L.; Stanten, S. F.; Vandever, W. H.

    1972-01-01

    An architectural design study was made of a multiprocessor computing system intended to meet functional and performance specifications appropriate to a manned space station application. Intermetrics' previous experience and accumulated knowledge of the multiprocessor field are used to generate a baseline philosophy for the design of a future SUMC* multiprocessor. Interrupts are defined, and the crucial questions of interrupt structure, such as processor selection and response time, are discussed. Memory hierarchy and performance are discussed extensively, with particular attention to the design approach which utilizes a cache memory associated with each processor. The ability of an individual processor to approach its theoretical maximum performance is then analyzed in terms of a hit ratio. Memory management is envisioned as a virtual memory system implemented either through segmentation or paging. Addressing is discussed in terms of various register designs adopted by current computers and those of advanced design.
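
    The hit-ratio analysis mentioned above follows the standard effective-access-time model for a per-processor cache. A minimal sketch (the timing figures are illustrative, not taken from the study):

    ```python
    def effective_access_time(hit_ratio, t_cache, t_main):
        """Average memory access time for a single-level cache.

        Illustrative textbook model only; the report's own analysis
        may use a more detailed formulation.
        """
        return hit_ratio * t_cache + (1.0 - hit_ratio) * t_main

    # Example: 100 ns cache, 1000 ns main memory, 95% hit ratio
    t = effective_access_time(0.95, 100, 1000)  # 145 ns on average
    ```

    The closer the hit ratio gets to 1, the closer each processor runs to its theoretical maximum performance, which is why the study pays particular attention to the cache design.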

  9. Functional Biomimetic Architectures

    NASA Astrophysics Data System (ADS)

    Levine, Paul M.

    N-substituted glycine oligomers, or 'peptoids,' are a class of sequence-specific foldamers composed of tertiary amide linkages, engendering proteolytic stability and enhanced cellular permeability. Peptoids are notable for their facile synthesis, sequence diversity, and ability to fold into distinct secondary structures. In an effort to establish new functional peptoid architectures, we utilize the copper-catalyzed azide-alkyne [3+2] cycloaddition (CuAAC) reaction to generate peptidomimetic assemblies bearing bioactive ligands that specifically target and modulate Androgen Receptor (AR) activity, a major therapeutic target for prostate cancer. Additionally, we explore chemical ligation protocols to generate semi-synthetic hybrid biomacromolecules capable of exhibiting novel structures and functions not accessible to fully biosynthesized proteins.

  10. CONRAD Software Architecture

    NASA Astrophysics Data System (ADS)

    Guzman, J. C.; Bennett, T.

    2008-08-01

    The Convergent Radio Astronomy Demonstrator (CONRAD) is a collaboration between the computing teams of two SKA pathfinder instruments, MeerKAT (South Africa) and ASKAP (Australia). Our goal is to produce the common software required to operate, process, and store the data from the two instruments. Both instruments are synthesis arrays composed of a large number of antennas (40-100) operating at centimeter wavelengths with wide-field capabilities. Key challenges are the processing of high volumes of data in real time as well as the remote mode of operations. Here we present the software architecture for CONRAD. Our design approach is to maximize the use of open solutions and third-party software widely deployed in commercial applications, such as SNMP and LDAP, and to utilize modern web-based technologies for the user interfaces, such as AJAX.

  11. Naval open systems architecture

    NASA Astrophysics Data System (ADS)

    Guertin, Nick; Womble, Brian; Haskell, Virginia

    2013-05-01

    For the past 8 years, the Navy has been working on transforming the acquisition practices of the Navy and Marine Corps toward Open Systems Architectures to open up our business, gain competitive advantage, improve warfighter performance, speed innovation to the fleet, and deliver superior capability to the warfighter within a shrinking budget. Why should Industry care? They should care because we in Government want the best Industry has to offer. Industry is in the business of pushing technology to greater and greater capabilities through innovation. Examples of innovations are on full display at this conference, such as exploring the impact of difficult environmental conditions on technical performance. Industry is creating the tools which will continue to give the Navy and Marine Corps important tactical advantages over our adversaries.

  12. Planning in subsumption architectures

    NASA Technical Reports Server (NTRS)

    Chalfant, Eugene C.

    1994-01-01

    A subsumption planner using a parallel distributed computational paradigm based on the subsumption architecture for control of real-world-capable robots is described. Virtual sensor state space is used as a planning tool to visualize the robot's anticipated effect on its environment. Decision sequences are generated based on the environmental situation expected at the time the robot must commit to a decision. Between decision points, the robot performs in a preprogrammed manner. A rudimentary, domain-specific partial world model contains enough information to extrapolate the end results of the rote behavior between decision points. A collective network of predictors operates in parallel with the reactive network, forming a recurrent network which generates plans as a hierarchy. Details of a plan segment are generated only when its execution is imminent. The use of the subsumption planner is demonstrated by a simple maze navigation problem.
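
    The arbitration at the heart of a subsumption-style reactive network can be sketched as a priority-ordered stack of behaviors, where a higher layer subsumes (overrides) lower ones whenever it produces an output. This is a generic illustration of the underlying architecture only; the behavior names are invented, and the paper's planner additionally runs a network of predictors in parallel with a network like this:

    ```python
    # Minimal sketch of subsumption-style arbitration (illustrative only).
    # Each layer either returns an action or None; higher-priority layers
    # suppress lower ones whenever they produce an output.

    def avoid_obstacle(sensors):
        # Highest-priority layer: react to imminent collisions.
        if sensors.get("obstacle_ahead"):
            return "turn_left"
        return None  # no output: defer to lower layers

    def follow_wall(sensors):
        if sensors.get("wall_right"):
            return "forward_along_wall"
        return None

    def wander(sensors):
        return "forward"  # default behavior, always produces an output

    LAYERS = [avoid_obstacle, follow_wall, wander]  # priority order

    def arbitrate(sensors):
        for layer in LAYERS:
            action = layer(sensors)
            if action is not None:
                return action
    ```

    Planning in this setting means predicting, at each decision point, which sensor situation the network will face when it must commit, rather than scripting actions in advance.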

  13. Power Systems Control Architecture

    SciTech Connect

    James Davidson

    2005-01-01

    A diagram provided in the report depicts the complexity of the power systems control architecture used by the national power structure. It shows the structural hierarchy and the relationship of each system to the other systems interconnected to it. Each of these levels provides a different focus for vulnerability testing and has its own weaknesses. In evaluating each level, of prime concern is what vulnerabilities exist that provide a path into the system, either to cause the system to malfunction or to take control of a field device. An additional vulnerability to consider is whether the system can be compromised in such a manner that the attacker can obtain critical information about the system and the portion of the national power structure that it controls.

  14. Space station needs, attributes and architectural options study. Volume 4: Architectural options, subsystems, technology and programmatics

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Space station architectural options, habitability considerations and subsystem analyses, technology, and programmatics are reviewed. The methodology employed for conceiving and defining space station concepts is presented. As a result of this approach, architectures were conceived and are described, along with their supporting rationale, within this portion of the report. Habitability considerations and subsystem analyses describe the human factors associated with space station operations and include subsections covering (1) data management, (2) communications and tracking, (3) environmental control and life support, (4) manipulator systems, (5) resupply, (6) pointing, (7) thermal management, and (8) interface standardization. A consolidated matrix of subsystem technology issues related to meeting the mission needs for a 1990s-era space station is presented. Within the programmatics portion, a brief description of costing and program strategies is outlined.

  15. SpaceWire Architectures: Present and Future

    NASA Technical Reports Server (NTRS)

    Rakow, Glen Parker

    2006-01-01

    A viewgraph presentation on current and future SpaceWire architectures is shown. The topics include: 1) Current SpaceWire Architectures: Swift Data Flow; 2) Current SpaceWire Architectures: LRO Data Flow; 3) Current SpaceWire Architectures: JWST Data Flow; 4) Current SpaceWire Architectures; 5) Traditional Systems; 6) Future Systems; 7) Advantages; and 8) System Engineer Toolkit.

  16. Software Architecture for Autonomous Spacecraft

    NASA Technical Reports Server (NTRS)

    Shih, Jimmy S.

    1997-01-01

    The thesis objective is to design an autonomous spacecraft architecture to perform both deliberative and reactive behaviors. The Autonomous Small Planet In-Situ Reaction to Events (ASPIRE) project uses the architecture to integrate several autonomous technologies for a comet orbiter mission.

  17. Dynamic Weather Routes Architecture Overview

    NASA Technical Reports Server (NTRS)

    Eslami, Hassan; Eshow, Michelle

    2014-01-01

    This overview presents the high-level software architecture of Dynamic Weather Routes (DWR), based on the CTAS software framework and the Direct-To automation tool. The document also covers external and internal data flows, required datasets, changes to the Direct-To software for DWR, collection of software statistics, and the code structure.

  18. Perspectives on Architecture and Children.

    ERIC Educational Resources Information Center

    Taylor, Anne

    1989-01-01

    Describes a new approach to architectural education known as Architectural Design Education. States that this system, developed by Anne Taylor and George Vlastos, introduces students to the problem-solving process, integrates creative activities with traditional disciplines, and enhances students' and teachers' ability to relate to their…

  19. Dataflow architecture for machine control

    SciTech Connect

    Lent, B.

    1989-01-01

    The author describes how to implement the latest control strategies using state-of-the-art control technology and computing principles. The book provides the basic definitions, taxonomy, and analysis of currently used architectures, including microprocessor communication schemes, and describes in detail the analysis and implementation of the selected OR dataflow-driven architecture in a grinding machine control system.

  20. Interior Design in Architectural Education

    ERIC Educational Resources Information Center

    Gurel, Meltem O.; Potthoff, Joy K.

    2006-01-01

    The domain of interiors constitutes a point of tension between practicing architects and interior designers. Design of interior spaces is a significant part of architectural profession. Yet, to what extent does architectural education keep pace with changing demands in rendering topics that are identified as pertinent to the design of interiors?…

  1. Architectural constructs of Ampex DST

    NASA Technical Reports Server (NTRS)

    Johnson, Clay

    1993-01-01

    The DST 800 automated library is a high-performance automated tape storage system, developed by Ampex, providing mass storage to host systems. The Physical Volume Manager (PVM) is a volume server which supports a DST 800, a DST 600 stand-alone tape drive, or a combination of DST 800 and DST 600 subsystems. The objective of the PVM is to provide the foundation support to allow automated and operator-assisted access to the DST cartridges with continuous operation. A second objective is to create a database about the media, their location, and their usage, so that the quality and utilization of the media on which specific data is recorded, and the performance of the storage system, may be managed. The DST tape drive architecture and media provide several unique functions that enhance the ability to achieve high media space utilization and fast access. Access times are enhanced through the implementation of multiple areas (called system zones) on the media where the media may be unloaded; this reduces positioning time in loading and unloading the cartridge. Access times are also reduced through high-speed positioning in excess of 800 megabytes per second. A DST cartridge can be partitioned into fixed-size units which can be reclaimed for rewriting without invalidating other recorded data on the tape cartridge. Most tape management systems achieve space reclamation by deleting an entire tape volume, then allowing users to request a 'scratch tape' or 'nonspecific' volume when they wish to record data to tape. Physical cartridge sizes of 25, 75, or 165 gigabytes make this existing process inefficient or unusable. The DST cartridge partitioning capability provides an efficient mechanism for addressing the tape space utilization problem.

  2. Architecture-driven reuse of code in KASE

    NASA Technical Reports Server (NTRS)

    Bhansali, Sanjay

    1993-01-01

    In order to support the synthesis of large, complex software systems, we need to focus on issues pertaining to the architectural design of a system in addition to algorithm and data structure design. An approach that is based on abstracting the architectural design of a set of problems in the form of a generic architecture, and providing tools that can be used to instantiate the generic architecture for specific problem instances is presented. Such an approach also facilitates reuse of code between different systems belonging to the same problem class. An application of our approach on a realistic problem is described; the results of the exercise are presented; and how our approach compares to other work in this area is discussed.

  3. An OSI architecture for the deep space network

    NASA Technical Reports Server (NTRS)

    Heuser, W. Randy; Cooper, Lynne P.

    1993-01-01

    The flexibility and robustness of a monitor and control system are a direct result of the underlying inter-processor communications architecture. A new architecture for monitor and control (M&C) at the Deep Space Network Communications Complexes has been developed based on the Open Systems Interconnection (OSI) standards. The suitability of OSI standards for DSN M&C has been proven in the laboratory, and this success has resulted in choosing an OSI-based architecture for DSS-13 M&C. DSS-13 is the DSN experimental station and is not part of the 'operational' DSN; its role is to provide an environment where new communications concepts can be tested and unique science experiments conducted. Therefore, DSS-13 must be robust enough to support operational activities, while also being flexible enough to enable experimentation. This paper describes the M&C architecture developed for DSS-13 and the results from system and operational testing.

  4. Quadrant architecture for fast in-place algorithms

    SciTech Connect

    Besslich, P.W.; Kurowski, J.O.

    1983-10-01

    The architecture proposed is tailored to support radix-2^k based in-place processing of pictorial data. The algorithms make use of signal-flow graphs to describe two-dimensional in-place operations suitable for image processing. They may be executed on a general-purpose computer but may also be supported by a special parallel architecture. Major advantages of the scheme are in-place processing and parallel access only to disjoint sections of memory. A quadtree-like decomposition of the picture prevents blocking and queuing of private and common buses.
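
    Radix-2 in-place processing of the kind described means rewriting an array through a signal-flow graph of butterflies, with no auxiliary buffer. As a stand-in illustration (the paper's operators are two-dimensional and hardware-supported; this is the one-dimensional fast Walsh-Hadamard transform, a classic radix-2 in-place algorithm):

    ```python
    # Illustrative radix-2 in-place butterfly network: the fast
    # Walsh-Hadamard transform. Each stage touches disjoint pairs
    # of elements, so stages parallelize over disjoint memory sections.
    def fwht_inplace(a):
        """In-place fast Walsh-Hadamard transform; len(a) must be a power of 2."""
        n = len(a)
        h = 1
        while h < n:
            for i in range(0, n, 2 * h):
                for j in range(i, i + h):
                    x, y = a[j], a[j + h]
                    a[j], a[j + h] = x + y, x - y  # butterfly: no extra storage
            h *= 2
        return a
    ```

    The pattern of disjoint index pairs per stage is exactly what lets such schemes grant parallel processors access to non-overlapping sections of memory.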

  5. Architecture Governance: The Importance of Architecture Governance for Achieving Operationally Responsive Ground Systems

    NASA Technical Reports Server (NTRS)

    Kolar, Mike; Estefan, Jeff; Giovannoni, Brian; Barkley, Erik

    2011-01-01

    Topics covered: (1) Why Governance and Why Now? (2) Characteristics of Architecture Governance; (3) Strategic Elements: (3a) Architectural Principles, (3b) Architecture Board, (3c) Architecture Compliance; (4) Architecture Governance Infusion Process. Governance is concerned with decision making (i.e., setting directions, establishing standards and principles, and prioritizing investments). Architecture governance is the practice and orientation by which enterprise architectures and other architectures are managed and controlled at an enterprise-wide level.

  6. 29 CFR 32.28 - Architectural standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accessibility prescribed by the General Services Administration under the Architectural Barriers Act at 41 CFR... FEDERAL FINANCIAL ASSISTANCE Accessibility § 32.28 Architectural standards. (a) Design and construction... usable by qualified handicapped individuals. (c) Standards for architectural accessibility....

  7. 29 CFR 32.28 - Architectural standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... accessibility prescribed by the General Services Administration under the Architectural Barriers Act at 41 CFR... RECEIVING FEDERAL FINANCIAL ASSISTANCE Accessibility § 32.28 Architectural standards. (a) Design and... usable by qualified handicapped individuals. (c) Standards for architectural accessibility....

  8. 29 CFR 32.28 - Architectural standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... accessibility prescribed by the General Services Administration under the Architectural Barriers Act at 41 CFR... RECEIVING FEDERAL FINANCIAL ASSISTANCE Accessibility § 32.28 Architectural standards. (a) Design and... usable by qualified handicapped individuals. (c) Standards for architectural accessibility....

  9. A layered architecture for critical database design

    SciTech Connect

    Chisholm, G.H.; Swietlik, C.E.

    1997-12-31

    Integrity, security, and safety are desired properties of database systems destined for use in critical applications. These properties are desirable because they determine a system's credibility. However, demonstrating that a system does, in fact, preserve these properties when implemented is a difficult task, and the difficulty depends on the complexity of the associated design. The authors explore architectural paradigms that have been demonstrated to reduce system complexity and, thus, reduce the cost associated with certifying that the above properties are present in the final implementation. The approach is based on the tenet that the design is divided into multiple layers. The critical functions and data make up the bottom layer, where the requirements for integrity, security, and safety are most rigid. Certification is dependent on the use of formal methods to specify and analyze the system. Appropriate formal methods are required to support certification that multiple properties are present in the final implementation; these methods must assure a rigid mapping from the top-level specification down through the implementation details. Application of a layered architecture reduces the scope of the design that must be formally specified and analyzed. This paper describes a generic layered architecture and a formal model for specification and analysis of complex systems that require rigid integrity, security, and safety properties.

  10. Information architecture: Profile of adopted standards

    SciTech Connect

    1997-09-01

    The Department of Energy (DOE), like other Federal agencies, is under increasing pressure to use information technology to improve efficiency in mission accomplishment as well as in delivery of services to the public. Because users and systems have become interdependent, DOE has enterprise-wide needs for common application architectures, communication networks, databases, security, and management capabilities. Users need open systems that provide interoperability of products and portability of people, data, and applications distributed throughout heterogeneous computing environments. The level of interoperability necessary requires the adoption of DOE-wide standards, protocols, and best practices. The Department has developed an information architecture and a related standards adoption and retirement process to assist users in developing strategies and plans for acquiring information technology products and services based upon open systems standards that support application software interoperability, portability, and scalability. This set of Departmental Information Architecture standards represents guidance for achieving higher degrees of interoperability within the greater DOE community, business partners, and stakeholders. While these standards are not mandatory, the Chief Information Officer's goals are that they be given due consideration in contractual matters and used in technology implementations Department-wide.

  11. A Facility and Architecture for Autonomy Research

    NASA Technical Reports Server (NTRS)

    Pisanich, Greg; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Autonomy is a key enabling factor in the advancement of remote robotic exploration. There is currently a large gap between autonomy software at the research level and software that is ready for insertion into near-term space missions. The Mission Simulation Facility (MSF) will bridge this gap by providing a simulation framework and a suite of simulation tools to support research in autonomy for remote exploration. This system will allow developers of autonomy software to test their models in a high-fidelity simulation and evaluate their system's performance against a set of integrated, standardized simulations. The Mission Simulation Toolkit (MST) uses a distributed architecture with a communication layer built on top of the standardized High Level Architecture (HLA). This architecture enables the use of existing high-fidelity models, allows mixing of simulation components from various computing platforms, and enforces the use of a standardized high-level interface among components. The components needed to achieve a realistic simulation can be grouped into four categories: environment generation (terrain, environmental features), robotic platform behavior (robot dynamics), instrument models (camera/spectrometer/etc.), and data analysis. The MST provides basic components in these areas but allows users to plug in any refined model easily by means of a communication protocol. Finally, a description file defines the robot and environment parameters for easy configuration and ensures that all the simulation models share the same information.

  12. Architecture and Workflow of Medical Knowledge Repository

    NASA Astrophysics Data System (ADS)

    Choi, Hyunsook; Kim, Jeong Ah; Cho, Insook

    Recently, the clinical field has been building various forms of computerized medical knowledge and trying to use them efficiently. In general, to build and reuse knowledge easily, a knowledge repository is needed. The credibility of knowledge is especially important in the clinical domain, and this paper proposes methods for supporting it. To perform knowledge management systematically, we propose knowledge management processes that can ensure consistent quality, usability, and credibility of knowledge. The knowledge management methods consist of two parts: the knowledge management processes and the specification of the management targets. This paper also proposes the requirements for a knowledge repository and the architecture of that repository.

  13. Mars transportation system - Architecture trade study

    NASA Astrophysics Data System (ADS)

    Walton, Lewis A.; Malloy, John D.

    1992-07-01

    An advanced Mars base resupply transportation system utilizing nuclear thermal rockets, a split/sprint architecture, and conjunction class trajectories for the manned flight segments was studied to determine the impact of engine characteristics other than specific impulse. High engine thrust-to-weight ratios were found to offer significant performance improvements and engine clustering and shielding strategies were found to interrelate to the engine thrust-to-weight ratio in a complex manner. Performance tradeoffs of alternate abort mode and engine disposal strategies were assessed. The significant benefits of the use of indigenous Martian materials to support the transportation system were quantified.

  14. NASA Laboratory telerobotic manipulator control system architecture

    NASA Technical Reports Server (NTRS)

    Rowe, J. C.; Butler, P. L.; Glassell, R. L.; Herndon, J. N.

    1991-01-01

    In support of the National Aeronautics and Space Administration (NASA) goals to increase the utilization of dexterous robotic systems in space, the Oak Ridge National Laboratory (ORNL) has developed the Laboratory Telerobotic Manipulator (LTM) system. It is a dexterous, dual-arm, force reflecting teleoperator system with robotic features for NASA ground-based research. This paper describes the overall control system architecture, including both the hardware and software. The control system is a distributed, modular, and hierarchical design with flexible expansion capabilities for future enhancements of both the hardware and software.

  15. Chromosome Architecture and Genome Organization

    PubMed Central

    Bernardi, Giorgio

    2015-01-01

    How the same DNA sequences can function in the three-dimensional architecture of the interphase nucleus, fold into the very compact structure of metaphase chromosomes, and go precisely back to the original interphase architecture in the following cell cycle remains an unresolved question to this day. The strategy used to address this issue was to analyze the correlations between chromosome architecture and the compositional patterns of DNA sequences spanning a size range from a few hundred to a few thousand kilobases. This is a critical range that encompasses isochores, interphase chromatin domains and boundaries, and chromosomal bands. The solution rests on the following key points: 1) the transition from the looped domains and sub-domains of interphase chromatin to the 30-nm fiber loops of early prophase chromosomes goes through the unfolding into an extended chromatin structure (probably a 10-nm “beads-on-a-string” structure); 2) the architectural proteins of interphase chromatin, such as CTCF and cohesin sub-units, are retained in mitosis and are part of the discontinuous protein scaffold of mitotic chromosomes; 3) the conservation of the link between architectural proteins and their binding sites on DNA through the cell cycle explains the “mitotic memory” of interphase architecture and the reversibility of the interphase-to-mitosis process. The results presented here also lead to a general conclusion concerning the existence of correlations between the isochore organization of the genome and the architecture of chromosomes from interphase to metaphase. PMID:26619076

  16. Gaia Data Processing Architecture

    NASA Astrophysics Data System (ADS)

    O'Mullane, W.; Lammers, U.; Bailer-Jones, C.; Bastian, U.; Brown, A. G. A.; Drimmel, R.; Eyer, L.; Huc, C.; Katz, D.; Lindegren, L.; Pourbaix, D.; Luri, X.; Torra, J.; Mignard, F.; van Leeuwen, F.

    2007-10-01

    Gaia is the European Space Agency's (ESA's) ambitious space astrometry mission, with the main objective of mapping astrometrically and spectro-photometrically not less than 1000 million celestial objects in our galaxy with unprecedented accuracy. The announcement of opportunity (AO) for the data processing will be issued by ESA late in 2006. The Gaia Data Processing and Analysis Consortium (DPAC) has been formed recently and is preparing an answer to this AO. The satellite will downlink around 100 TB of raw telemetry data over a mission duration of 5-6 years. To achieve its required astrometric accuracy of a few tens of microarcseconds, a highly involved processing of this data is required. In addition to the main astrometric instrument, Gaia will host a radial-velocity spectrometer and two low-resolution dispersers for multi-color photometry. All instrument modules share a common focal plane consisting of a CCD mosaic about 1 m^2 in size and featuring close to 10^9 pixels. Each of the various instruments requires relatively complex processing while at the same time being interdependent. We describe the composition and structure of the DPAC and the envisaged overall architecture of the system. We delve further into the core processing, one of the nine so-called coordination units comprising the Gaia processing system.
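
    The quoted figures imply a modest sustained downlink rate when averaged over the whole mission. A back-of-envelope check (assuming decimal units, 1 TB = 10^12 bytes, a mid-range 5.5-year duration, and continuous averaging; the real link is scheduled in daily ground-station passes at much higher burst rates):

    ```python
    # Back-of-envelope average downlink rate implied by the abstract's figures.
    TOTAL_BYTES = 100e12                    # ~100 TB of raw telemetry (assumed decimal)
    MISSION_SECONDS = 5.5 * 365.25 * 86400  # mid-range of the 5-6 year duration
    rate = TOTAL_BYTES / MISSION_SECONDS    # bytes per second
    print(f"{rate / 1e3:.0f} kB/s")         # roughly 576 kB/s on average
    ```

    The processing challenge is thus not raw bandwidth but the iterative, interdependent reduction of the accumulated dataset to microarcsecond accuracy.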

  17. Superconducting Bolometer Array Architectures

    NASA Technical Reports Server (NTRS)

    Benford, Dominic; Chervenak, Jay; Irwin, Kent; Moseley, S. Harvey; Shafer, Rick; Staguhn, Johannes; Wollack, Ed; Oegerle, William (Technical Monitor)

    2002-01-01

    The next generation of far-infrared and submillimeter instruments requires large arrays of detectors containing thousands of elements. These arrays will necessarily be multiplexed, and superconducting bolometer arrays are the most promising present prospect for these detectors. We discuss our current research into superconducting bolometer array technologies, which has recently resulted in the first multiplexed detections of submillimeter light and the first multiplexed astronomical observations. Prototype arrays containing 512 pixels are in production using the Pop-Up Detector (PUD) architecture, which can be extended easily to 1000-pixel arrays. Planar arrays of close-packed bolometers are being developed for the GBT (Green Bank Telescope) and for future space missions. For certain applications, such as a slewed far-infrared sky survey, feedhorn coupling of a large sparsely-filled array of bolometers is desirable, and is being developed using photolithographic feedhorn arrays. Individual detectors have achieved a noise equivalent power (NEP) of 10^-17 W/√Hz at 300 mK, but several orders of magnitude improvement are required and can be reached with existing technology. The testing of such ultralow-background detectors will prove difficult, as it requires optical loading of below 1 fW. Antenna-coupled bolometer designs have advantages for large-format array designs at low powers due to their mode selectivity.

  18. Lunar Exploration Architectures

    NASA Astrophysics Data System (ADS)

    Perino, Maria Antonietta

    International space exploration plans foresee, in the coming decades, multiple robotic and human missions to the Moon and robotic missions to Mars, Phobos, and other destinations. Notably, since the announcement of the US space exploration vision by President G. W. Bush in 2004, the US has made significant progress in the further definition of its exploration programme, focusing in the next decades in particular on human missions to the Moon. Given the highly demanding nature of these missions, different initiatives have recently been taken at the international level to discuss how the lunar exploration missions currently planned at the national level could fit into a coordinated roadmap and contribute to lunar exploration. Thales Alenia Space - Italia is leading three studies for the European Space Agency focused on the analysis of the transportation, in-space, and surface architectures required to meet ESA-provided stakeholder exploration objectives and requirements. The main result of this activity is the identification of European near-term priorities for exploration missions and European long-term priorities for capability and technology developments related to planetary exploration missions. This paper presents the main studies' results, drawing a European roadmap for exploration missions and for capability and technology developments related to lunar exploration infrastructure, taking into account the strategic and programmatic indications for exploration coming from ESA as well as the international exploration context.

  19. Ajax Architecture Implementation Techniques

    NASA Astrophysics Data System (ADS)

    Hussaini, Syed Asadullah; Tabassum, S. Nasira; Baig, Tabassum, M. Khader

    2012-03-01

    Today's rich Web applications use a mix of JavaScript and asynchronous communication with the application server, a mechanism known as Ajax: Asynchronous JavaScript and XML. The intent of Ajax is to exchange small pieces of data between the browser and the application server and, in doing so, use partial page refreshes instead of reloading the entire Web page. The technologies that form the Ajax model, such as XML, JavaScript, HTTP, and XHTML, are individually widely used and well known; Ajax combines them to let Web pages retrieve small amounts of data from the server without reloading the entire page. This capability makes Web pages more interactive and lets them behave like local applications. Web 2.0, enabled by the Ajax architecture, has given rise to a new level of user interactivity through web browsers, and many new and extremely popular Web applications have been introduced, such as Google Maps, Google Docs, and Flickr. Ajax toolkits such as Dojo allow web developers to build Web 2.0 applications quickly and with little effort.

  20. Array processor architecture

    NASA Technical Reports Server (NTRS)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1983-01-01

    A high-speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, and from there the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors operating normally quite independently of each other in a multiprocessing fashion. For data-dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel-processing fashion on the next instruction. Even when functioning in the parallel-processing mode, however, the processors are not lock-stepped but execute their own copies of the program individually unless or until another overall processor-array synchronization instruction is issued.
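
    The execution model in this patent, processors running independent program copies that all rendezvous before a data-dependent step, is essentially barrier synchronization. A minimal sketch of that idea (not the patented hardware) using Python threads:

```python
import threading

NUM_PROCS = 4
barrier = threading.Barrier(NUM_PROCS)
results = [0] * NUM_PROCS
totals = [0] * NUM_PROCS

def worker(pid):
    # Phase 1: each "processor" runs its own copy of the program
    # independently, in multiprocessing fashion.
    results[pid] = (pid + 1) ** 2
    # Synchronization instruction: every processor must finish phase 1
    # before any proceeds to the data-dependent phase.
    barrier.wait()
    # Phase 2: a data-dependent step may now safely read all results.
    totals[pid] = sum(results)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_PROCS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(totals)  # every entry sees 1 + 4 + 9 + 16 = 30
```

Between barriers the workers are not lock-stepped; only the barrier itself imposes ordering, mirroring the patent's synchronization instruction.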

  1. Planetary cubesats - mission architectures

    NASA Astrophysics Data System (ADS)

    Bousquet, Pierre W.; Ulamec, Stephan; Jaumann, Ralf; Vane, Gregg; Baker, John; Clark, Pamela; Komarek, Tomas; Lebreton, Jean-Pierre; Yano, Hajime

    2016-07-01

    Miniaturisation of technologies over the last decade has made cubesats a valid solution for deep space missions. For example, a spectacular set of 13 cubesats will be delivered in 2018 to a high lunar orbit within the frame of SLS' first flight, referred to as Exploration Mission-1 (EM-1). Each of them will autonomously perform valuable scientific or technological investigations. Other situations are encountered, such as the auxiliary landers/rovers and autonomous camera that will be carried in 2018 to asteroid Ryugu (1999 JU3) by JAXA's Hayabusa2 probe, and will provide complementary scientific return to their mothership. In this case, cubesats depend on a larger spacecraft for deployment and other resources, such as telecommunication relay or propulsion. For both situations, we will describe in this paper how cubesats can be used as remote observatories (such as NEO detection missions), as technology demonstrators, and how they can perform or contribute to all steps in the Deep Space exploration sequence: Measurements during Deep Space cruise, Body Fly-bys, Body Orbiters, Atmospheric probes (Jupiter probe, Venus atmospheric probes, ..), Static Landers, Mobile landers (such as balloons, wheeled rovers, small body rovers, drones, penetrators, floating devices, …), Sample Return. We will elaborate on mission architectures for the most promising concepts, where cubesat-sized devices offer an advantage in terms of affordability, feasibility, and increased scientific return.

  2. Porous scaffold architecture guides tissue formation.

    PubMed

    Cipitria, Amaia; Lange, Claudia; Schell, Hanna; Wagermaier, Wolfgang; Reichert, Johannes C; Hutmacher, Dietmar W; Fratzl, Peter; Duda, Georg N

    2012-06-01

    Critical-sized bone defect regeneration remains a clinical concern. Numerous scaffold-based strategies are currently being investigated to enable in vivo bone defect healing. However, a deeper understanding of how a scaffold influences the tissue formation process, and how this compares to endogenous bone formation or to regular fracture healing, is missing. It is hypothesized that the porous scaffold architecture can serve as a guiding substrate to enable the formation of a structured fibrous network as a prerequisite for later bone formation. An ovine, tibial, 30-mm critical-sized defect is used as a model system to better understand the effect of the scaffold architecture on cell organization, fibrous tissue, and mineralized tissue formation mechanisms in vivo. Tissue regeneration patterns within two geometrically distinct macroscopic regions of a specific scaffold design, the scaffold wall and the endosteal cavity, are compared with tissue formation in an empty defect (negative control) and with cortical bone (positive control). Histology, backscattered electron imaging, scanning small-angle X-ray scattering, and nanoindentation are used to assess the morphology of fibrous and mineralized tissue, to measure the average mineral particle thickness and the degree of alignment, and to map the local elastic indentation modulus. The scaffold proves to function as a guiding substrate to the tissue formation process. It enables the arrangement of a structured fibrous tissue across the entire defect, which acts as a secondary supporting network for cells. Mineralization can then initiate along the fibrous network, resulting in bone ingrowth into a critical-sized defect, although not in complete bridging of the defect. The fibrous network morphology, which in turn is guided by the scaffold architecture, influences the microstructure of the newly formed bone. These results allow a deeper understanding of the mode of mineral tissue formation and the way this is

  3. Systolic architecture for hierarchical clustering

    SciTech Connect

    Ku, L.C.

    1984-01-01

    Several hierarchical clustering methods (including single-linkage, complete-linkage, centroid, and absolute overlap methods) are reviewed. The absolute overlap clustering method is selected for the systolic architecture design mainly because of its simplicity. Two versions of systolic architectures for the absolute overlap hierarchical clustering algorithm are proposed: a one-dimensional version that leads to the development of a two-dimensional version, which fully takes advantage of the underlying data structure of the problem. The two-dimensional systolic architecture can achieve a time complexity of O(m + n), in comparison with a conventional computer implementation's time complexity of O(m^2 n).
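
    To see why a conventional implementation is expensive, consider single-linkage clustering, one of the methods the paper reviews (the systolic design itself targets absolute overlap). A naive agglomerative sketch on hypothetical 1-D points, rescanning every cluster pair before each merge, illustrates the quadratic-per-step cost the systolic array is meant to avoid:

```python
# Naive single-linkage agglomerative clustering on 1-D points.
def single_linkage(points, k):
    clusters = [[p] for p in points]
    while len(clusters) > k:
        # Single-linkage distance between two clusters: the smallest
        # distance between any pair of their members. Rescanning all
        # pairs per merge is what makes the conventional approach slow.
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)   # merge the closest pair
    return [sorted(c) for c in clusters]

print(single_linkage([1, 2, 10, 11, 50], 2))  # -> [[1, 2, 10, 11], [50]]
```

A systolic array pipelines these distance comparisons across processing cells, which is how the paper's design reaches O(m + n) time.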

  4. Microcomponent chemical process sheet architecture

    DOEpatents

    Wegeng, R.S.; Drost, M.K.; Call, C.J.; Birmingham, J.G.; McDonald, C.E.; Kurath, D.E.; Friedrich, M.

    1998-09-22

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections, or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent, or plurality of like microcomponents, performs at least one chemical process unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents, thereby combining at least two unit operations to achieve a system operation. 26 figs.

  5. Microcomponent chemical process sheet architecture

    DOEpatents

    Wegeng, Robert S.; Drost, M. Kevin; Call, Charles J.; Birmingham, Joseph G.; McDonald, Carolyn Evans; Kurath, Dean E.; Friedrich, Michele

    1998-01-01

    The invention is a microcomponent sheet architecture wherein macroscale unit processes are performed by microscale components. The sheet architecture may be a single laminate with a plurality of separate microcomponent sections, or the sheet architecture may be a plurality of laminates with one or more microcomponent sections on each laminate. Each microcomponent, or plurality of like microcomponents, performs at least one chemical process unit operation. A first laminate having a plurality of like first microcomponents is combined with at least a second laminate having a plurality of like second microcomponents, thereby combining at least two unit operations to achieve a system operation.

  6. Integrated Sensor Architecture (ISA) for Live Virtual Constructive (LVC) environments

    NASA Astrophysics Data System (ADS)

    Moulton, Christine L.; Harkrider, Susan; Harrell, John; Hepp, Jared

    2014-06-01

    The Integrated Sensor Architecture (ISA) is an interoperability solution that allows for the sharing of information between sensors and systems in a dynamic tactical environment. ISA defines a Service Oriented Architecture (SOA) that identifies common standards and protocols supporting net-centric system-of-systems integration. Using a common language, these systems are able to connect, publish their needs and capabilities, and interact with other systems, even on disadvantaged networks. Within the ISA project, three levels of interoperability were defined, implemented, and tested at numerous events. Extensible data models and capabilities that scale across multiple echelons are supported, as well as dynamic discovery of capabilities and sensor management. ISA has been tested and integrated with multiple sensors and platforms, and over a variety of hardware architectures, in operational environments.
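
    The publish-and-discover behavior described above can be sketched as a small capability registry. The class and capability names below are illustrative only, not ISA's actual API or data model:

```python
# Minimal capability-registry sketch: providers publish what they offer,
# consumers dynamically discover who currently provides a capability.
class Registry:
    def __init__(self):
        self.capabilities = {}  # capability name -> set of provider ids

    def publish(self, provider, capability):
        self.capabilities.setdefault(capability, set()).add(provider)

    def discover(self, capability):
        # Dynamic discovery: no consumer needs a hard-wired provider list.
        return sorted(self.capabilities.get(capability, set()))

reg = Registry()
reg.publish("eo_camera_1", "imagery")   # provider ids are hypothetical
reg.publish("radar_3", "track")
reg.publish("eo_camera_2", "imagery")
print(reg.discover("imagery"))  # -> ['eo_camera_1', 'eo_camera_2']
```

In a real SOA this registry would sit behind a network protocol; the sketch only shows the decoupling of consumers from specific sensor endpoints.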

  7. Telemedicine system interoperability architecture: concept description and architecture overview.

    SciTech Connect

    Craft, Richard Layne, II

    2004-05-01

    In order for telemedicine to realize the vision of anywhere, anytime access to care, it must address the question of how to create a fully interoperable infrastructure. This paper describes the reasons for pursuing interoperability, outlines operational requirements that any interoperability approach needs to consider, proposes an abstract architecture for meeting these needs, identifies candidate technologies that might be used for rendering this architecture, and suggests a path forward that the telemedicine community might follow.

  8. Digital Architecture – Results From a Gap Analysis

    SciTech Connect

    Oxstrand, Johanna Helene; Thomas, Kenneth David; Fitzgerald, Kirk

    2015-09-01

    The digital architecture is defined as a collection of IT capabilities needed to support and integrate a wide spectrum of real-time digital capabilities for nuclear power plant performance improvements. The digital architecture can be thought of as an integration of the separate I&C and information systems already in place in NPPs, brought together for the purpose of creating new levels of automation in NPP work activities. In some cases, it might be an extension of the current communication systems, to provide digital communications where they are currently analog only. This collection of IT capabilities must in turn be based on a set of user requirements that must be supported for the interconnected technologies to operate in an integrated manner. These requirements, simply put, are a statement of what sorts of digital work functions will be exercised in a fully implemented, seamless digital environment and how much they will be used. The goal of the digital architecture research is to develop a methodology for mapping nuclear power plant operational and support activities into the digital architecture, which includes the development of a consensus model for advanced information and control architecture. The consensus model should be developed at a level of detail that is useful to the industry; in other words, not so detailed that it specifies specific protocols and not so vague that it only provides a high-level description of technology. The next step towards the model development is to determine the current state of digital architecture at typical NPPs. To investigate the current state, the researchers conducted a gap analysis to determine to what extent NPPs can support the future digital technology environment with their existing I&C and IT structure, and where gaps exist with respect to the full deployment of technology over time. The methodology, results, and conclusions from the gap analysis are described in this report.

  9. High-performance solid oxide fuel cells based on a thin La0.8Sr0.2Ga0.8Mg0.2O3-δ electrolyte membrane supported by a nickel-based anode of unique architecture

    NASA Astrophysics Data System (ADS)

    Sun, Haibin; Chen, Yu; Chen, Fanglin; Zhang, Yujun; Liu, Meilin

    2016-01-01

    Solid oxide fuel cells (SOFCs) based on a thin La0.8Sr0.2Ga0.8Mg0.2O3-δ (LSGM) electrolyte membrane supported by a nickel-based anode often suffer from undesirable reaction/diffusion between the Ni anode and the LSGM during high-temperature co-firing. In this study, a high-performance intermediate-temperature SOFC is fabricated by depositing thin LSGM electrolyte membranes on a LSGM backbone of unique architecture coated with nano-sized Ni and Gd0.1Ce0.9O2-δ (GDC) particles via a combination of freeze-drying tape-casting, slurry drop-coating, and solution infiltration. The thickness of the dense LSGM electrolyte membranes is ∼30 μm, while the undesirable reaction/diffusion between Ni and LSGM is effectively hindered because of the relatively low firing temperature, as confirmed by XRD analysis. Single cells show peak power densities of 1.61 W cm-2 at 700 °C and 0.52 W cm-2 at 600 °C using 3 vol% humidified H2 as fuel and ambient air as oxidant. The cell performance is very stable for 115 h at a constant current density of 0.303 A cm-2 at 600 °C.

  10. The IVOA Architecture

    NASA Astrophysics Data System (ADS)

    Arviset, C.; Gaudet, S.; IVOA Technical Coordination Group

    2012-09-01

    Astronomy produces large amounts of data of many kinds, coming from various sources: science space missions, ground-based telescopes, theoretical models, compilations of results, etc. These data and associated processing services are made available via the Internet by "providers", usually large data centres or smaller teams (see Figure 1). The "consumers", be they individual researchers, research teams or computer systems, access these services to do their science. However, inter-connection amongst all these services and between providers and consumers is usually not trivial. The Virtual Observatory (VO) is the necessary "middle layer" framework enabling interoperability between all these providers and consumers in a seamless and transparent manner. Like the web, which enables end users and machines to transparently access documents and services wherever and however they are stored, the VO enables the astronomy community to access data and service resources wherever and however they are provided. Over the last decade, the International Virtual Observatory Alliance (IVOA) has been defining various standards to build the VO technical framework for the providers to share their data and services ("Sharing"), and to allow users to find ("Finding") these resources, to get them ("Getting") and to use them ("Using"). To enable these functionalities, the definition of some core astronomically-oriented standards ("VO Core") has also been necessary. This paper will present the official and current IVOA Architecture[1], describing the various building blocks of the VO framework (see Figure 2) and their relation to all existing and in-progress IVOA standards. Additionally, it will show examples of these standards in action, connecting VO "consumers" to VO "providers".

  11. Project Integration Architecture

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2008-01-01

    The Project Integration Architecture (PIA) is a distributed, object-oriented, conceptual software framework for the generation, organization, publication, integration, and consumption of all information involved in any complex technological process in a manner that is intelligible to both computers and humans. In the development of PIA, it was recognized that in order to provide a single computational environment in which all information associated with any given complex technological process could be viewed, reviewed, manipulated, and shared, it is necessary to formulate all the elements of such a process on the most fundamental level. In this formulation, any such element is regarded as being composed of any or all of three parts: input information, some transformation of that input information, and some useful output information. Another fundamental principle of PIA is the assumption that no consumer of information, whether human or computer, can be assumed to have any useful foreknowledge of an element presented to it. Consequently, a PIA-compliant computing system is required to be ready to respond to any questions, posed by the consumer, concerning the nature of the proffered element. In colloquial terms, a PIA-compliant system must be prepared to provide all the information needed to place the element in context. To satisfy this requirement, PIA extends the previously established object-oriented-programming concept of self-revelation and applies it on a grand scale. To enable pervasive use of self-revelation, PIA exploits another previously established object-oriented-programming concept - that of semantic infusion through class derivation. By means of self-revelation and semantic infusion through class derivation, a consumer of information can inquire about the contents of all information entities (e.g., databases and software) and can interact appropriately with those entities. Other key features of PIA are listed.
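
    The abstract's two key ideas, self-revelation (an element answers questions about its own inputs, transformation, and outputs) and semantic infusion through class derivation (a subclass adds domain meaning), can be sketched loosely in Python. The class names and fields below are illustrative analogies, not PIA's actual object model:

```python
# Loose analogy to PIA self-revelation: an information element can
# describe itself to a consumer that has no foreknowledge of it.
class Element:
    def __init__(self, inputs, transform, outputs):
        self.inputs = inputs
        self.transform = transform
        self.outputs = outputs

    def reveal(self):
        # The consumer interrogates the element about what it is.
        return {"inputs": self.inputs,
                "transformation": self.transform,
                "outputs": self.outputs}

# Semantic infusion through class derivation: the subclass carries
# domain meaning beyond the generic element structure.
class FlowSolution(Element):
    kind = "CFD flow solution"  # hypothetical domain label

elem = FlowSolution(["mesh", "boundary conditions"],
                    "Navier-Stokes solve",
                    ["pressure field"])
print(elem.reveal()["transformation"], "/", elem.kind)
```

Any consumer, human or program, can call reveal() on an unfamiliar element and place it in context, which is the behavior the abstract describes at scale.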

  12. Dynamic Information Architecture System

    SciTech Connect

    Christiansen, John

    1997-02-12

    The Dynamic Information Architecture System (DIAS) is a flexible, object-based software framework for concurrent, multidisciplinary modeling of arbitrary (but related) processes. These processes are modeled as interrelated actions caused by and affecting the collection of diverse real-world objects represented in a simulation. The DIAS architecture allows independent process models to work together harmoniously in the same frame of reference and provides a wide range of data ingestion and output capabilities, including Geographic Information System (GIS) type map-based displays and photorealistic visualization of simulations in progress. In the DIAS implementation of the object-based approach, software objects carry within them not only the data which describe their static characteristics, but also the methods, or functions, which describe their dynamic behaviors. There are two categories of objects: (1) Entity objects, which have real-world counterparts and are the actors in a simulation, and (2) Software infrastructure objects, which make it possible to carry out the simulations. Entity objects contain lists of Aspect objects, each of which addresses a single aspect of the Entity's behavior. For example, a DIAS Stream Entity representing a section of a river can have many aspects corresponding to its behavior in terms of hydrology (as a drainage system component), navigation (as a link in a waterborne transportation system), meteorology (in terms of moisture, heat, and momentum exchange with the atmospheric boundary layer), visualization (for photorealistic or map-type displays), etc. This makes it possible for each real-world object to exhibit any or all of its unique behaviors within the context of a single simulation.
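
    The Entity/Aspect pattern described above can be sketched in a few lines. This is an illustrative sketch of the pattern, not DIAS code; the entity and aspect names are hypothetical:

```python
# Sketch of the Entity/Aspect pattern: an Entity holds Aspect objects,
# each modeling one facet of its real-world behavior.
class Aspect:
    def __init__(self, name, behavior):
        self.name = name
        self.behavior = behavior  # callable taking the owning entity

    def act(self, entity):
        return self.behavior(entity)

class Entity:
    def __init__(self, name):
        self.name = name
        self.aspects = {}

    def add_aspect(self, aspect):
        self.aspects[aspect.name] = aspect

    def simulate(self, aspect_name):
        # Exercise one facet of this entity's behavior in a simulation.
        return self.aspects[aspect_name].act(self)

stream = Entity("Stream reach 42")
stream.add_aspect(Aspect("hydrology", lambda e: f"{e.name}: routing flow"))
stream.add_aspect(Aspect("navigation", lambda e: f"{e.name}: barge transit"))
print(stream.simulate("hydrology"))
```

Because each process model touches only its own aspect, independent models can share one entity in the same frame of reference, which is the integration the framework aims for.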

  13. Dynamic Information Architecture System

    Energy Science and Technology Software Center (ESTSC)

    1997-02-12

    The Dynamic Information Architecture System (DIAS) is a flexible, object-based software framework for concurrent, multidisciplinary modeling of arbitrary (but related) processes. These processes are modeled as interrelated actions caused by and affecting the collection of diverse real-world objects represented in a simulation. The DIAS architecture allows independent process models to work together harmoniously in the same frame of reference and provides a wide range of data ingestion and output capabilities, including Geographic Information System (GIS) type map-based displays and photorealistic visualization of simulations in progress. In the DIAS implementation of the object-based approach, software objects carry within them not only the data which describe their static characteristics, but also the methods, or functions, which describe their dynamic behaviors. There are two categories of objects: (1) Entity objects, which have real-world counterparts and are the actors in a simulation, and (2) Software infrastructure objects, which make it possible to carry out the simulations. Entity objects contain lists of Aspect objects, each of which addresses a single aspect of the Entity's behavior. For example, a DIAS Stream Entity representing a section of a river can have many aspects corresponding to its behavior in terms of hydrology (as a drainage system component), navigation (as a link in a waterborne transportation system), meteorology (in terms of moisture, heat, and momentum exchange with the atmospheric boundary layer), visualization (for photorealistic or map-type displays), etc. This makes it possible for each real-world object to exhibit any or all of its unique behaviors within the context of a single simulation.

  14. The Mothership Mission Architecture

    NASA Astrophysics Data System (ADS)

    Ernst, S. M.; DiCorcia, J. D.; Bonin, G.; Gump, D.; Lewis, J. S.; Foulds, C.; Faber, D.

    2015-12-01

    The Mothership is a dedicated deep space carrier spacecraft. It is currently being developed by Deep Space Industries (DSI) as a mission concept that enables broad participation in the scientific exploration of small bodies - the Mothership mission architecture. A Mothership shall deliver third-party nano-sats, experiments and instruments to Near Earth Asteroids (NEOs), comets or moons. The Mothership service includes delivery of nano-sats, communication to Earth, and visuals of the asteroid surface and surrounding area. The Mothership is designed to carry about 10 nano-sats, based upon a variation of the CubeSat standard, with some flexibility on the specific geometry. The Deep Space Nano-Sat reference design is a 14.5 cm cube, which accommodates the same volume as a traditional 3U CubeSat. To reduce cost, the Mothership is designed as a secondary payload aboard launches to GTO. DSI is offering slots for nano-sats to individual customers. This enables organizations with relatively low operating budgets to closely examine an asteroid with highly specialized sensors of their own choosing and to carry out experiments in the proximity of, or on the surface of, an asteroid, while the nano-sats can be built or commissioned by a variety of smaller institutions, companies, or agencies. While the overall Mothership mission will have a financial volume somewhere between a European Space Agency (ESA) S- and M-class mission, for instance, it can be funded through a number of small and individual funding sources and programs, hence avoiding the processes associated with traditional space exploration missions. DSI has been able to identify a significant interest in the planetary science and nano-satellite communities.

  15. Architecture and the Information Revolution.

    ERIC Educational Resources Information Center

    Driscoll, Porter; And Others

    1982-01-01

    Traces how technological changes, from the industrial revolution through the computer revolution, have affected the architecture of the workplace. Offers suggested designs for the computerized office of today and tomorrow. (JM)

  16. Simulator for heterogeneous dataflow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    1993-01-01

    A new simulator is developed to simulate the execution of an algorithm graph in accordance with the Algorithm to Architecture Mapping Model (ATAMM) rules. ATAMM is a Petri net model which describes the periodic execution of large-grained, data-independent dataflow graphs and which provides predictable, steady-state, time-optimized performance. This simulator extends the ATAMM simulation capability from a heterogeneous set of resources, or functional units, to a more general heterogeneous architecture. Simulation test cases show that the simulator accurately executes the ATAMM rules for both a heterogeneous architecture and a homogeneous architecture, the special case with only one processor type. The simulator forms one tool in an ATAMM Integrated Environment, which contains other tools for graph entry, graph modification for performance optimization, and playback of simulations for analysis.
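
    The core execution rule of a dataflow graph, a node fires once all of its input edges hold data, can be sketched in a few lines. This is a generic token-driven firing sketch for intuition, not the ATAMM simulator itself; the graph below is hypothetical:

```python
# Token-driven dataflow firing: a node fires once every input is ready.
graph = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"]}  # node -> inputs
tokens = {"A": True, "B": True, "C": False, "D": False}  # data available?
order = []  # firing order

fired = True
while fired:
    fired = False
    for node, inputs in graph.items():
        if not tokens[node] and all(tokens[i] for i in inputs):
            tokens[node] = True  # node fires and produces its output
            order.append(node)
            fired = True

print(order)  # -> ['C', 'D']
```

An ATAMM-style simulator layers onto this rule the Petri net timing and resource assignment that yield predictable steady-state performance.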

  17. Transverse pumped laser amplifier architecture

    DOEpatents

    Bayramian, Andrew James; Manes, Kenneth R.; Deri, Robert; Erlandson, Alvin; Caird, John; Spaeth, Mary L.

    2015-05-19

    An optical gain architecture includes a pump source and a pump aperture. The architecture also includes a gain region including a gain element operable to amplify light at a laser wavelength. The gain region is characterized by a first side intersecting an optical path, a second side opposing the first side, a third side adjacent the first and second sides, and a fourth side opposing the third side. The architecture further includes a dichroic section disposed between the pump aperture and the first side of the gain region. The dichroic section is characterized by low reflectance at a pump wavelength and high reflectance at the laser wavelength. The architecture additionally includes a first cladding section proximate to the third side of the gain region and a second cladding section proximate to the fourth side of the gain region.

  18. Transverse pumped laser amplifier architecture

    DOEpatents

    Bayramian, Andrew James; Manes, Kenneth; Deri, Robert; Erlandson, Al; Caird, John; Spaeth, Mary

    2013-07-09

    An optical gain architecture includes a pump source and a pump aperture. The architecture also includes a gain region including a gain element operable to amplify light at a laser wavelength. The gain region is characterized by a first side intersecting an optical path, a second side opposing the first side, a third side adjacent the first and second sides, and a fourth side opposing the third side. The architecture further includes a dichroic section disposed between the pump aperture and the first side of the gain region. The dichroic section is characterized by low reflectance at a pump wavelength and high reflectance at the laser wavelength. The architecture additionally includes a first cladding section proximate to the third side of the gain region and a second cladding section proximate to the fourth side of the gain region.

  19. An Object Oriented Extensible Architecture for Affordable Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.; Lytle, John K. (Technical Monitor)

    2002-01-01

    Driven by a need to explore and develop propulsion systems that exceeded current computing capabilities, NASA Glenn embarked on a novel strategy leading to the development of an architecture that enables propulsion simulations never thought possible before. Full-engine, three-dimensional computational fluid dynamics (CFD) propulsion system simulations were deemed impossible due to the impracticality of the hardware and software computing systems required. However, with a software paradigm shift and an embracing of parallel and distributed processing, an architecture was designed to meet the needs of future propulsion system modeling. The author suggests that the architecture designed at the NASA Glenn Research Center for propulsion system modeling has potential for impacting the direction of development of affordable weapons systems currently under consideration by the Applied Vehicle Technology Panel (AVT). This paper discusses the salient features of the NPSS architecture, including its interface layer, object layer, implementation for accessing legacy codes, numerical zooming infrastructure, and computing layer. The computing layer focuses on the use and deployment of these propulsion simulations on parallel and distributed computing platforms, which has been the focus of NASA Ames. Additional features of the object-oriented architecture that support multidisciplinary (MD) coupling, computer-aided design (CAD) access, and MD coupling objects will be discussed, including the successes, challenges, and benefits of implementing this architecture.

  20. An implementation of SISAL for distributed-memory architectures

    SciTech Connect

    Beard, P.C.

    1995-06-01

    This thesis describes a new implementation of the implicitly parallel functional programming language SISAL for massively parallel supercomputers. The Optimizing SISAL Compiler (OSC), developed at Lawrence Livermore National Laboratory, was originally designed for shared-memory multiprocessor machines and has been adapted to distributed-memory architectures. OSC has been relatively portable between shared-memory architectures, because they are architecturally similar and OSC generates portable C code. However, distributed-memory architectures are not standardized -- each has a different programming model. Distributed-memory SISAL depends on a layer of software that provides a portable, distributed, shared-memory abstraction. This layer is provided by Split-C, a dialect of the C programming language developed at U.C. Berkeley, which has demonstrated good performance on distributed-memory architectures. Split-C provides two capabilities important for good performance: support for program-specific distributed data structures, and split-phase memory operations. Distributed data structures help achieve good memory locality, while split-phase memory operations help tolerate the longer communication latencies inherent in distributed-memory architectures. The distributed-memory SISAL compiler and run-time system take advantage of these capabilities. The result of these efforts is a compiler that runs identically on the Thinking Machines Connection Machine (CM-5) and the Meiko Computing Surface (CS-2).
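
    A split-phase memory operation initiates a remote read, lets the processor do other work, and synchronizes only when the value is needed. As a loose analogy (not Split-C itself), the same overlap of communication and computation can be sketched with a future; the remote_get stub and its latency are hypothetical:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def remote_get(value):
    # Stand-in for a high-latency remote memory read on a
    # distributed-memory machine.
    time.sleep(0.05)
    return value

with ThreadPoolExecutor() as pool:
    # Split phase 1: initiate the get without waiting for it...
    future = pool.submit(remote_get, 99)
    # ...and overlap independent local work with the communication latency.
    local = sum(range(1000))
    # Split phase 2: synchronize only when the remote value is needed.
    remote = future.result()

print(local + remote)  # -> 499599
```

In Split-C the initiate/synchronize pair is expressed at the language level; the point here is only that useful work fills the latency gap between the two phases.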

  1. A Mobile Service Oriented Multiple Object Tracking Augmented Reality Architecture for Education and Learning Experiences

    ERIC Educational Resources Information Center

    Rattanarungrot, Sasithorn; White, Martin; Newbury, Paul

    2014-01-01

    This paper describes the design of our service-oriented architecture to support mobile multiple object tracking augmented reality applications applied to education and learning scenarios. The architecture is composed of a mobile multiple object tracking augmented reality client, a web service framework, and dynamic content providers. Tracking of…

  2. Alternatives generation and analysis report for immobilized low-level waste interim storage architecture

    SciTech Connect

    Burbank, D.A., Westinghouse Hanford

    1996-09-01

    The Immobilized Low-Level Waste Interim Storage subproject will provide storage capacity for immobilized low-level waste product sold to the U.S. Department of Energy by the privatization contractor. This report describes alternative Immobilized Low-Level Waste storage system architectures, evaluation criteria, and evaluation results to support the Immobilized Low-Level Waste storage system architecture selection decision process.

  3. Adapted Verbal Feedback, Instructor Interaction and Student Emotions in the Landscape Architecture Studio

    ERIC Educational Resources Information Center

    Smith, Carl A.; Boyer, Mark E.

    2015-01-01

    In light of concerns with architectural students' emotional jeopardy during traditional desk and final-jury critiques, the authors pursue alternative approaches intended to provide more supportive and mentoring verbal assessment in landscape architecture studios. In addition to traditional studio-based critiques throughout a semester, we provide…

  4. Collaborative Concept Mapping in a Web-Based Learning Environment: A Pedagogic Experience in Architectural Education.

    ERIC Educational Resources Information Center

    Madrazo, Leandro; Vidal, Jordi

    2002-01-01

    Describes a pedagogical work, carried out within a school of architecture, using a Web-based learning environment to support collaborative understanding of texts on architectural theory. Explains the use of concept maps, creation of a critical vocabulary, exploration of semantic spaces, and knowledge discovery through navigation. (Author/LRW)

  5. Re-engineering Nascom's network management architecture

    NASA Astrophysics Data System (ADS)

    Drake, Brian C.; Messent, David

    1994-11-01

    The development of Nascom systems for ground communications began in 1958 with Project Vanguard. The low-speed systems (rates less than 9.6 kbps) were developed following existing standards, but there were no comparable standards for high-speed systems. As a result, these systems were developed using custom protocols and custom hardware. Technology has made enormous strides since the ground support systems were implemented. Standards for computer equipment, software, and high-speed communications exist, and the performance of current workstations exceeds that of the mainframes used in the development of the ground systems. Nascom is in the process of upgrading its ground support systems and providing additional services. The Message Switching System (MSS), Communications Address Processor (CAP), and Multiplexer/Demultiplexer (MDM) Automated Control System (MACS) are all examples of Nascom systems developed using standards such as X-Windows, Motif, and Simple Network Management Protocol (SNMP). Also, the Earth Observing System (EOS) Communications (Ecom) project is stressing standards as an integral part of its network. The move towards standards has produced a reduction in development, maintenance, and interoperability costs, while providing operational quality improvement. The Facility and Resource Manager (FARM) project has been established to integrate the Nascom networks and systems into a common network management architecture. The maximization of standards and implementation of computer automation in the architecture will lead to continued cost reductions and increased operational efficiency. The first step has been to derive overall Nascom requirements and identify the functionality common to all the current management systems. The identification of these common functions will enable the reuse of processes in the management architecture and promote increased use of automation throughout the Nascom network.
The MSS, CAP, MACS, and Ecom projects have indicated

  6. Re-engineering Nascom's network management architecture

    NASA Technical Reports Server (NTRS)

    Drake, Brian C.; Messent, David

    1994-01-01

    The development of Nascom systems for ground communications began in 1958 with Project Vanguard. The low-speed systems (rates less than 9.6 kbps) were developed following existing standards, but there were no comparable standards for high-speed systems. As a result, these systems were developed using custom protocols and custom hardware. Technology has made enormous strides since the ground support systems were implemented. Standards for computer equipment, software, and high-speed communications exist, and the performance of current workstations exceeds that of the mainframes used in the development of the ground systems. Nascom is in the process of upgrading its ground support systems and providing additional services. The Message Switching System (MSS), Communications Address Processor (CAP), and Multiplexer/Demultiplexer (MDM) Automated Control System (MACS) are all examples of Nascom systems developed using standards such as X-Windows, Motif, and Simple Network Management Protocol (SNMP). Also, the Earth Observing System (EOS) Communications (Ecom) project is stressing standards as an integral part of its network. The move towards standards has produced a reduction in development, maintenance, and interoperability costs, while providing operational quality improvement. The Facility and Resource Manager (FARM) project has been established to integrate the Nascom networks and systems into a common network management architecture. The maximization of standards and implementation of computer automation in the architecture will lead to continued cost reductions and increased operational efficiency. The first step has been to derive overall Nascom requirements and identify the functionality common to all the current management systems. The identification of these common functions will enable the reuse of processes in the management architecture and promote increased use of automation throughout the Nascom network.
The MSS, CAP, MACS, and Ecom projects have indicated

  7. Space station needs, attributes and architectural options study

    NASA Technical Reports Server (NTRS)

    1983-01-01

    All the candidate Technology Development missions investigated during the space station needs, attributes, and architectural options study are described. All the mission data forms, plus additional information such as cost, drawings, and functional flows generated in support of these missions, are included with a computer-generated mission data form.

  8. 18. Photocopy of Architectural Layout drawing, dated 25 June, 1993 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    18. Photocopy of Architectural Layout drawing, dated 25 June, 1993 by US Air Force Space Command. Original drawing property of United States Air Force, 21' Space Command AL-2 PAVE PAWS SUPPORT SYSTEMS - CAPE COD AFB, MASSACHUSETTS - SITE PLAN. DRAWING NO. AL-2 - SHEET 3 OF 21. - Cape Cod Air Station, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  9. Educational JavaBeans: a Requirements Driven Architecture.

    ERIC Educational Resources Information Center

    Hall, Jon; Rapanotti, Lucia

    This paper investigates, through a case study, the development of a software architecture that is compatible with a system's high-level requirements. The case study is an example of an extended customer/supplier relationship (post-point of sale support) involved in e-universities and is representative of a class of enterprise without current…

  10. Designed 3D architectures of high-temperature superconductors.

    PubMed

    Green, David C; Lees, Martin R; Hall, Simon R

    2013-04-14

    Self-supporting superconducting replicas of pasta shapes are reported, yielding products of differing 3D architectures. Functioning high-temperature superconductor wires are developed and refined from replicas of spaghetti, demonstrating a unique sol-gel processing technique for the design and synthesis of novel macroscopic morphologies of complex functional materials. PMID:23388857

  11. Information Architecture in JASIST: Just Where Did We Come From?

    ERIC Educational Resources Information Center

    Dillon, Andrew

    2002-01-01

    Traces information architecture (IA) to a historical summit, supported by the American Society for Information Science and Technology (ASIS&T) in May 2000 in Boston, MA, where several hundred gathered to thrash out the questions of just what IA was and what this field might become. Outlines the six IA issues discussed. (JMK)

  12. Information Architecture without Internal Theory: An Inductive Design Process.

    ERIC Educational Resources Information Center

    Haverty, Marsha

    2002-01-01

    Suggests that information architecture design is primarily an inductive process, partly because it lacks internal theory and partly because it is an activity that supports emergent phenomena (user experiences) from basic design components. Suggests a resemblance to Constructive Induction, a design process that locates the best representational…

  13. 3. PHOTOCOPY OF DRAWING (1960 ARCHITECTURAL DRAWING BY THE RALPH ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. PHOTOCOPY OF DRAWING (1960 ARCHITECTURAL DRAWING BY THE RALPH M. PARSONS COMPANY) FLOOR PLAN, ELEVATIONS, AND SECTION FOR THE SAMOS TECHNICAL SUPPORT BUILDING (BLDG. 761; NOW CALLED SLC-3 AIR FORCE BUILDING), SHEET A14 - Vandenberg Air Force Base, Space Launch Complex 3, SLC-3 Air Force Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  14. How architecture wins technology wars.

    PubMed

    Morris, C R; Ferguson, C H

    1993-01-01

    Signs of revolutionary transformation in the global computer industry are everywhere. A roll call of the major industry players reads like a waiting list in the emergency room. The usual explanations for the industry's turmoil are at best inadequate. Scale, friendly government policies, manufacturing capabilities, a strong position in desktop markets, excellent software, top design skills--none of these is sufficient, either by itself or in combination, to ensure competitive success in information technology. A new paradigm is required to explain patterns of success and failure. Simply stated, success flows to the company that manages to establish proprietary architectural control over a broad, fast-moving, competitive space. Architectural strategies have become crucial to information technology because of the astonishing rate of improvement in microprocessors and other semiconductor components. Since no single vendor can keep pace with the outpouring of cheap, powerful, mass-produced components, customers insist on stitching together their own local systems solutions. Architectures impose order on the system and make the interconnections possible. The architectural controller is the company that controls the standard by which the entire information package is assembled. Microsoft's Windows is an excellent example of this. Because of the popularity of Windows, companies like Lotus must conform their software to its parameters in order to compete for market share. In the 1990s, proprietary architectural control is not only possible but indispensable to competitive success. What's more, it has broader implications for organizational structure: architectural competition is giving rise to a new form of business organization. PMID:10124636

  15. A fully programmable computing architecture for medical ultrasound machines.

    PubMed

    Schneider, Fabio Kurt; Agarwal, Anup; Yoo, Yang Mo; Fukuoka, Tetsuya; Kim, Yongmin

    2010-03-01

    Application-specific ICs have been traditionally used to support the high computational and data rate requirements in medical ultrasound systems, particularly in receive beamforming. Utilizing the previously developed efficient front-end algorithms, in this paper, we present a simple programmable computing architecture, consisting of a field-programmable gate array (FPGA) and a digital signal processor (DSP), to support core ultrasound signal processing. It was found that 97.3% and 51.8% of the FPGA and DSP resources are, respectively, needed to support all the front-end and back-end processing for B-mode imaging with 64 channels and 120 scanlines per frame at 30 frames/s. These results indicate that this programmable architecture can meet the requirements of low- and medium-level ultrasound machines while providing a flexible platform for supporting the development and deployment of new algorithms and emerging clinical applications. PMID:19546045
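
    The receive beamforming that such front ends accelerate is, at its core, delay-and-sum. The sketch below is a hedged, minimal illustration (integer sample delays and a unit echo are invented for the example; real systems use fractional delays, apodization, and dynamic focusing), not the paper's FPGA/DSP implementation.

```python
# Minimal delay-and-sum receive beamforming sketch: align each channel
# by its focusing delay, then sum coherently. Delays and data are
# synthetic; real systems use fractional delays and apodization.
NUM_CHANNELS = 4
NUM_SAMPLES = 32
delays = [0, 2, 3, 2]   # per-channel focusing delays in samples (assumed)

# Synthetic RF data: the same echo arrives at channel c delayed by delays[c].
rf = [[0.0] * NUM_SAMPLES for _ in range(NUM_CHANNELS)]
for c in range(NUM_CHANNELS):
    rf[c][10 + delays[c]] = 1.0   # unit echo

def delay_and_sum(rf, delays):
    """Undo each channel's focusing delay and sum across channels."""
    n = len(rf[0])
    out = [0.0] * n
    for c, trace in enumerate(rf):
        d = delays[c]
        for i in range(n - d):
            out[i] += trace[i + d]
    return out

beamformed = delay_and_sum(rf, delays)
print(beamformed.index(max(beamformed)))  # coherent peak at sample 10
```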

  16. Demand Activated Manufacturing Architecture (DAMA) model for supply chain collaboration

    SciTech Connect

    CHAPMAN,LEON D.; PETERSEN,MARJORIE B.

    2000-03-13

    The Demand Activated Manufacturing Architecture (DAMA) project during the last five years of work with the U.S. Integrated Textile Complex (retail, apparel, textile, and fiber sectors) has developed an inter-enterprise architecture and collaborative model for supply chains. This model will enable improved collaborative business across any supply chain. The DAMA Model for Supply Chain Collaboration is a high-level model for collaboration to achieve Demand Activated Manufacturing. The five major elements of the architecture to support collaboration are (1) activity or process, (2) information, (3) application, (4) data, and (5) infrastructure. These five elements are tied to the application of the DAMA architecture to three phases of collaboration - prepare, pilot, and scale. There are six collaborative activities that may be employed in this model: (1) Develop Business Planning Agreements, (2) Define Products, (3) Forecast and Plan Capacity Commitments, (4) Schedule Product and Product Delivery, (5) Expedite Production and Delivery Exceptions, and (6) Populate Supply Chain Utility. The Supply Chain Utility is a set of applications implemented to support collaborative product definition, forecast visibility, planning, scheduling, and execution. The DAMA architecture and model will be presented along with the process for implementing this DAMA model.

  17. An intelligent service-based network architecture for wearable robots.

    PubMed

    Lee, Ka Keung; Zhang, Ping; Xu, Yangsheng; Liang, Bin

    2004-08-01

    We are developing a novel robot concept called the wearable robot. Wearable robots are mobile information devices capable of supporting remote communication and intelligent interaction between networked entities. In this paper, we explore the possible functions of such a robotic network and will present a distributed network architecture based on service components. In order to support the interaction and communication between the components in the wearable robot system, we have developed an intelligent network architecture. This service-based architecture involves three major mechanisms. The first mechanism involves the use of a task coordinator service such that the execution of the services can be managed using a priority queue. The second mechanism enables the system to automatically push the required service proxy to the client intelligently based on certain system-related conditions. In the third mechanism, we allow the system to automatically deliver services based on contextual information. Using a fuzzy-logic-based decision making system, the matching service can determine whether the service should be automatically delivered utilizing the information provided by the service, client, lookup service, and context sensors. An application scenario has been implemented to demonstrate the feasibility of this distributed service-based robot architecture. The architecture is implemented as extensions to the Jini network model. PMID:15462452
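
    The first mechanism above, a task coordinator managing service execution with a priority queue, can be sketched as follows. The class and service names are invented for illustration and are not the paper's API.

```python
# Sketch of a priority-queue task coordinator: services execute in
# priority order, with a counter preserving FIFO order among equals.
import heapq
import itertools

class TaskCoordinator:
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO order

    def submit(self, priority, service_name):
        # Lower number = higher priority.
        heapq.heappush(self._queue, (priority, next(self._counter), service_name))

    def run_next(self):
        priority, _, service_name = heapq.heappop(self._queue)
        return service_name

coord = TaskCoordinator()
coord.submit(2, "video_stream")
coord.submit(0, "emergency_alert")
coord.submit(1, "sensor_update")
order = [coord.run_next() for _ in range(3)]
print(order)  # ['emergency_alert', 'sensor_update', 'video_stream']
```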

  18. A support architecture for reliable distributed computing systems

    NASA Technical Reports Server (NTRS)

    Mckendry, Martin S.

    1986-01-01

    The Clouds kernel design went through several design phases and is nearly complete. The object manager, the process manager, the storage manager, the communications manager, and the actions manager are examined.

  19. Architecture for Building Conversational Agents that Support Collaborative Learning

    ERIC Educational Resources Information Center

    Kumar, R.; Rose, C. P.

    2011-01-01

    Tutorial Dialog Systems that employ Conversational Agents (CAs) to deliver instructional content to learners in one-on-one tutoring settings have been shown to be effective in multiple learning domains by multiple research groups. Our work focuses on extending this successful learning technology to collaborative learning settings involving two or…

  20. Architecture-Aware Algorithms for Scalable Performance and Resilience on Heterogeneous Architectures. Final Report

    SciTech Connect

    Gropp, William D.

    2014-06-23

    With the coming end of Moore's law, it has become essential to develop new algorithms and techniques that can provide the performance needed by demanding computational science applications, especially those that are part of the DOE science mission. This work was part of a multi-institution, multi-investigator project that explored several approaches to develop algorithms that would be effective at the extreme scales and with the complex processor architectures that are expected at the end of this decade. The work by this group developed new performance models that have already helped guide the development of highly scalable versions of an algebraic multigrid solver, new programming approaches designed to support numerical algorithms on heterogeneous architectures, and a new, more scalable version of conjugate gradient, an important algorithm in the solution of very large linear systems of equations.
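
    For context, the conjugate gradient method mentioned above is shown here in its textbook dense form. This is only the baseline algorithm; the project's scalable variant restructures its communication pattern, which this sketch does not attempt to show.

```python
# Textbook conjugate gradient for a symmetric positive-definite system
# A x = b, in minimal dense form (baseline algorithm only).
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x, with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# Small SPD example: exact solution is (1/11, 7/11).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
print(x)  # close to [1/11, 7/11]
```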

  1. Simulation of Large-Scale HPC Architectures

    SciTech Connect

    Jones, Ian S; Engelmann, Christian

    2011-01-01

    The Extreme-scale Simulator (xSim) is a recently developed performance investigation toolkit that permits running high-performance computing (HPC) applications in a controlled environment with millions of concurrent execution threads. It allows observing parallel application performance properties in a simulated extreme-scale HPC system to further assist in HPC hardware and application software co-design on the road toward multi-petascale and exascale computing. This paper presents a newly implemented network model for the xSim performance investigation toolkit that is capable of providing simulation support for a variety of HPC network architectures with the appropriate trade-off between simulation scalability and accuracy. The approach taken focuses on a scalable distributed solution with latency and bandwidth restrictions for the simulated network. Different network architectures, such as star, ring, mesh, torus, twisted torus and tree, as well as hierarchical combinations, such as to simulate network-on-chip and network-on-node, are supported. Network traffic congestion modeling is omitted to gain simulation scalability by reducing simulation accuracy.
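
    A toy version of this kind of latency/bandwidth network model is sketched below: message cost is hop count times per-hop latency, plus size over bandwidth, with topology-specific hop counts for ring and torus. The constants are illustrative assumptions, not xSim's actual parameters.

```python
# Toy latency/bandwidth model: cost = hops * per-hop latency + bytes/bandwidth.
PER_HOP_LATENCY_US = 0.5          # microseconds per hop (assumed)
BANDWIDTH_BYTES_PER_US = 1000.0   # bytes per microsecond (assumed)

def ring_hops(src, dst, size):
    """Shortest hop count between two nodes on a ring."""
    d = abs(src - dst)
    return min(d, size - d)

def torus_hops(src, dst, dims):
    """Shortest hop count on a multi-dimensional torus (sum per dimension)."""
    hops = 0
    for s, d, extent in zip(src, dst, dims):
        delta = abs(s - d)
        hops += min(delta, extent - delta)
    return hops

def message_time_us(hops, nbytes):
    return hops * PER_HOP_LATENCY_US + nbytes / BANDWIDTH_BYTES_PER_US

print(ring_hops(0, 6, 8))                  # 2 hops the short way around
print(torus_hops((0, 0), (3, 3), (4, 4)))  # 1 + 1 = 2 hops with wraparound
print(message_time_us(2, 1000))            # 2*0.5 + 1.0 = 2.0 us
```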

  2. Data Intensive Architecture for Scalable Cyber Analytics

    SciTech Connect

    Olsen, Bryan K.; Johnson, John R.; Critchlow, Terence J.

    2011-12-19

    Cyber analysts are tasked with the identification and mitigation of network exploits and threats. These compromises are difficult to identify due to the characteristics of cyber communication, the volume of traffic, and the duration of possible attack. In this paper, we describe a prototype implementation designed to provide cyber analysts an environment where they can interactively explore a month’s worth of cyber security data. This prototype utilized On-Line Analytical Processing (OLAP) techniques to present a data cube to the analysts. The cube provides a summary of the data, allowing trends to be easily identified as well as the ability to easily pull up the original records comprising an event of interest. The cube was built using SQL Server Analysis Services (SSAS), with the interface to the cube provided by Tableau. This software infrastructure was supported by a novel hardware architecture comprising a Netezza TwinFin® for the underlying data warehouse and a cube server with a FusionIO drive hosting the data cube. We evaluated this environment on a month’s worth of artificial, but realistic, data using multiple queries provided by our cyber analysts. As our results indicate, OLAP technology has progressed to the point where it is in a unique position to provide novel insights to cyber analysts, as long as it is supported by an appropriate data intensive architecture.
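
    The cube workflow described above, summarize along chosen dimensions, spot a trend, then drill down to the raw records, can be sketched in a few lines. Field names and values here are invented for illustration; the actual system used SSAS, Tableau, and a Netezza warehouse.

```python
# OLAP-style sketch: pre-aggregate flow records along (day, source)
# dimensions, then drill from the summary back to the raw records.
from collections import defaultdict

records = [
    {"day": "2011-06-01", "src": "10.0.0.5", "bytes": 500},
    {"day": "2011-06-01", "src": "10.0.0.5", "bytes": 700},
    {"day": "2011-06-01", "src": "10.0.0.9", "bytes": 100},
    {"day": "2011-06-02", "src": "10.0.0.5", "bytes": 300},
]

# Build the (day, src) -> total-bytes summary, cell by cell.
cube = defaultdict(int)
for rec in records:
    cube[(rec["day"], rec["src"])] += rec["bytes"]

# A spike stands out in the summary...
hot_cell = max(cube, key=cube.get)
print(hot_cell, cube[hot_cell])   # ('2011-06-01', '10.0.0.5') 1200

# ...and drill-down pulls up the original records behind it.
detail = [r for r in records if (r["day"], r["src"]) == hot_cell]
print(len(detail))                # 2
```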

  3. AES Water Architecture Study Interim Results

    NASA Technical Reports Server (NTRS)

    Sarguisingh, Miriam J.

    2012-01-01

    The mission of the Advanced Exploration System (AES) Water Recovery Project (WRP) is to develop advanced water recovery systems in order to enable NASA human exploration missions beyond low earth orbit (LEO). The primary objective of the AES WRP is to develop water recovery technologies critical to near term missions beyond LEO. The secondary objective is to continue to advance mid-readiness level technologies to support future NASA missions. An effort is being undertaken to establish the architecture for the AES Water Recovery System (WRS) that meets both near and long term objectives. The resultant architecture will be used to guide future technical planning, establish a baseline development roadmap for technology infusion, and establish baseline assumptions for integrated ground and on-orbit environmental control and life support systems (ECLSS) definition. This study is being performed in three phases. Phase I of this study established the scope of the study through definition of the mission requirements and constraints, as well as identifying all possible WRS configurations that meet the mission requirements. Phase II of this study focused on the near term space exploration objectives by establishing an ISS-derived reference schematic for long-duration (>180 day) in-space habitation. Phase III will focus on the long term space exploration objectives, trading the viable WRS configurations identified in Phase I to identify the ideal exploration WRS. The results of Phases I and II are discussed in this paper.

  4. Space Station data management system architecture

    NASA Technical Reports Server (NTRS)

    Mallary, William E.; Whitelaw, Virginia A.

    1987-01-01

    Within the Space Station program, the Data Management System (DMS) functions in a dual role. First, it provides the hardware resources and software services which support the data processing, data communications, and data storage functions of the onboard subsystems and payloads. Second, it functions as an integrating entity which provides a common operating environment and human-machine interface for the operation and control of the orbiting Space Station systems and payloads by both the crew and the ground operators. This paper discusses the evolution and derivation of the requirements and issues which have had significant effect on the design of the Space Station DMS, describes the DMS components and services which support system and payload operations, and presents the current architectural view of the system as it exists in October 1986; one-and-a-half years into the Space Station Phase B Definition and Preliminary Design Study.

  5. GEOSS Architecture Implementation Pilot Phase 2

    NASA Astrophysics Data System (ADS)

    Percivall, G.

    2009-04-01

    The Group on Earth Observations (GEO) is conducting a second phase of the Architecture Implementation Pilot (AIP-2) to integrate services into the Global Earth Observing System of Systems (GEOSS). The first phase of AIP contributed to the initial operating capability of the GEOSS Common Infrastructure (GCI) established in early 2008. AIP-2 will augment the GCI with services contributed by GEO Members and Participating Organizations. The activities of AIP-2 are conducted in working groups. Five working groups are developing the transverse technology that supports the multiple user communities. Four Community working groups are applying the transverse technologies to support the following communities of practice: Energy, Biodiversity and Climate Change, Disasters, and Air Quality. The Air Quality Working Group is led by the ESIP AQ Cluster. AIP-2 testing and integration will combine the use cases into demonstration scenarios. Persistent exemplar services will be nominated to augment the GCI. This presentation will describe the AIP-2 process, progress, and planned deliverables.

  6. NASA's Exploration Architecture

    NASA Technical Reports Server (NTRS)

    Tyburski, Timothy

    2006-01-01

    A Bold Vision for Space Exploration includes: 1) Complete the International Space Station; 2) Safely fly the Space Shuttle until 2010; 3) Develop and fly the Crew Exploration Vehicle no later than 2012; 4) Return to the moon no later than 2020; 5) Extend human presence across the solar system and beyond; 6) Implement a sustained and affordable human and robotic program; 7) Develop supporting innovative technologies, knowledge, and infrastructures; and 8) Promote international and commercial participation in exploration.

  7. Space station needs, attributes, and architectural options: Brief analysis

    NASA Technical Reports Server (NTRS)

    Shepphird, F. H.

    1983-01-01

    A baseline set of model missions is thoroughly characterized in terms of support requirements, demands on the Space Station, operating regimes, payload properties, and statements of the mission goals and objectives. This baseline is a representative set of mission requirements covering the most likely extent of space station support requirements from which architectural options can be constructed and exercised. The baseline set of 90 missions are assessed collectively and individually in terms of the economic, performance, and social benefits.

  8. Mission Architecture Comparison for Human Lunar Exploration

    NASA Technical Reports Server (NTRS)

    Geffre, Jim; Robertson, Ed; Lenius, Jon

    2006-01-01

    The Vision for Space Exploration outlines a bold new national space exploration policy that holds as one of its primary objectives the extension of human presence outward into the Solar System, starting with a return to the Moon in preparation for the future exploration of Mars and beyond. The National Aeronautics and Space Administration is currently engaged in several preliminary analysis efforts in order to develop the requirements necessary for implementing this objective in a manner that is both sustainable and affordable. Such analyses investigate various operational concepts, or mission architectures, by which humans can best travel to the lunar surface, live and work there for increasing lengths of time, and then return to Earth. This paper reports on a trade study conducted in support of NASA's Exploration Systems Mission Directorate investigating the relative merits of three alternative lunar mission architecture strategies. The three architectures use for reference a lunar exploration campaign consisting of multiple 90-day expeditions to the Moon's polar regions, a strategy which was selected for its high perceived scientific and operational value. The first architecture discussed incorporates the lunar orbit rendezvous approach employed by the Apollo lunar exploration program. This concept has been adapted from Apollo to meet the particular demands of a long-stay polar exploration campaign while assuring the safe return of crew to Earth. Lunar orbit rendezvous is also used as the baseline against which the other alternate concepts are measured. The first such alternative, libration point rendezvous, utilizes the unique characteristics of the cislunar libration point instead of a low altitude lunar parking orbit as a rendezvous and staging node. Finally, a mission strategy which does not incorporate rendezvous after the crew ascends from the Moon is also studied.
In this mission strategy, the crew returns directly to Earth from the lunar surface, and is

  9. Architectural Analysis of Dynamically Reconfigurable Systems

    NASA Technical Reports Server (NTRS)

    Lindvall, Mikael; Godfrey, Sally; Ackermann, Chris; Ray, Arnab; Yonkwa, Lyly

    2010-01-01

    Topics include: the problem (increased flexibility of architectural styles decreases analyzability, behavior emerges and varies depending on the configuration, does the resulting system run according to the intended design, and architectural decisions can impede or facilitate testing); top down approach to architecture analysis, detection of defects and deviations, and architecture and its testability; currently targeted projects GMSEC and CFS; analyzing software architectures; analyzing runtime events; actual architecture recognition; GMPUB in Dynamic SAVE; sample output from new approach; taking message timing delays into account; CFS examples of architecture and testability; some recommendations for improved testability; CFS examples of abstract interfaces and testability; and a CFS example of opening some internal details.

  10. Bipartite memory network architectures for parallel processing

    SciTech Connect

    Smith, W.; Kale, L.V. . Dept. of Computer Science)

    1990-01-01

    Parallel architectures are broadly classified as either shared memory or distributed memory architectures. In this paper, the authors propose a third family of architectures, called bipartite memory network architectures. In this architecture, processors and memory modules constitute a bipartite graph, where each processor is allowed to access a small subset of the memory modules, and each memory module allows access from a small set of processors. The architecture is particularly suitable for computations requiring dynamic load balancing. The authors explore the properties of this architecture by examining the Perfect Difference set based topology for the graph. Extensions of this topology are also suggested.
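
    The Perfect Difference set topology can be illustrated with the classic set {0, 1, 3} mod 7 (an example chosen here for illustration; the paper's exact construction may differ). Because every nonzero residue occurs exactly once as a difference of two set elements, any two processors share exactly one memory module:

```python
# Perfect Difference Set topology sketch: with D = {0, 1, 3} mod 7,
# every nonzero difference occurs exactly once, so any two processors
# share exactly one common memory module.
from itertools import combinations

N = 7                 # number of processors and memory modules
D = {0, 1, 3}         # a perfect difference set modulo 7

def modules_of(processor):
    """Memory modules reachable from a given processor."""
    return {(processor + d) % N for d in D}

# Each processor sees only |D| = 3 of the 7 modules...
assert all(len(modules_of(p)) == 3 for p in range(N))

# ...yet every pair of processors shares exactly one common module.
shared = [len(modules_of(a) & modules_of(b))
          for a, b in combinations(range(N), 2)]
print(set(shared))    # {1}
```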

  11. STEEL TRUSS TENSION RING SUPPORTING DOME ROOF. TENSION RING COVERED ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    STEEL TRUSS TENSION RING SUPPORTING DOME ROOF. TENSION RING COVERED BY ARCHITECTURAL FINISH. TENSION RING ROLLER SUPPORT AT COLUMN OBSCURED BY COLUMN COVERINGS. - Houston Astrodome, 8400 Kirby Drive, Houston, Harris County, TX

  12. DIGITAL ARCHITECTURE PROJECT PLAN

    SciTech Connect

    Thomas, Ken

    2014-09-01

    The objective of this project is to develop an industry consensus document on how to scope and implement the underlying information technology infrastructure that is needed to support a vast array of real-time digital technologies to improve NPP work efficiency, to reduce human error, to increase production reliability and to enhance nuclear safety. A consensus approach is needed because: • There is currently a wide disparity in nuclear utility perspectives and positions on what is prudent and regulatory-compliant for introducing certain digital technologies into the plant environment. For example, there is a variety of implementation policies throughout the industry concerning electromagnetic compatibility (EMC), cyber security, wireless communication coverage, mobile devices for workers, mobile technology in the control room, and so forth. • There is a need to effectively share among the nuclear operating companies the early experience with these technologies and other forms of lessons-learned. There is also the opportunity to take advantage of international experience with these technologies. • There is a need to provide the industry with a sense of what other companies are implementing, so that each respective company can factor this into their own development plans and position themselves to take advantage of new work methods as they are validated by the initial implementing companies. In the nuclear power industry, once a better work practice has been proven, there is a general expectation that the rest of the industry will adopt it. However, the long-lead time of information technology infrastructure could prove to be a delaying factor. A secondary objective of this effort is to provide a general understanding of the incremental investment that would be required to support the targeted digital technologies, in terms of an incremental investment over current infrastructure. This will be required for business cases to support the adoption of these new

  13. Space station needs, attributes, and architectural options study

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The top level, time-phased total space program support system architecture is described including progress from the use of ground-based space shuttle, teleoperator system, extended duration orbiter, and multimission spacecraft, to an initial 4-man crew station at 29 deg inclination in 1991, to a growth station with an 8-man crew with capabilities for OTV high energy orbit payload placement and servicing, assembly, and construction of mission payloads in 1994. System Z, proposed for Earth observation missions in high inclination orbit, can be accommodated in 1993 using a space station derivative platform. Mission definition, system architecture, and benefits are discussed.

  14. Space station needs, attributes and architectural options: Study summary

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Space station needs, attributes, and architectural options that affect the future implementation and design of a space station system are examined. Requirements for candidate missions are used to define functional attributes of a space station. Station elements that perform these functions form the basic station architecture. Alternative ways to accomplish these functions are defined and configuration concepts are developed and evaluated. Configuration analyses are carried to the point that budgetary cost estimates of alternate approaches could be made. Emphasis is placed on differential costs for station support elements and benefits that accrue through use of the station.

  15. An architecture for a brain-image database

    NASA Technical Reports Server (NTRS)

    Herskovits, E. H.

    2000-01-01

    The widespread availability of methods for noninvasive assessment of brain structure has enabled researchers to investigate neuroimaging correlates of normal aging, cerebrovascular disease, and other processes; we designate such studies as image-based clinical trials (IBCTs). We propose an architecture for a brain-image database, which integrates image processing and statistical operators, and thus supports the implementation and analysis of IBCTs. The implementation of this architecture is described and results from the analysis of image and clinical data from two IBCTs are presented. We expect that systems such as this will play a central role in the management and analysis of complex research data sets.
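The record above describes a database that integrates image data with clinical data so that statistical operators can run over both. A minimal sketch of that integration idea, using an assumed relational schema (the table and column names here are hypothetical, not the authors' design):

```python
# Hedged sketch: a tiny relational schema linking subjects, images, and
# clinical attributes, so a statistical query can join across them, in the
# spirit of an image-based clinical trial (IBCT) database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE subject (id INTEGER PRIMARY KEY, age INTEGER, diagnosis TEXT);
CREATE TABLE image   (id INTEGER PRIMARY KEY,
                      subject_id INTEGER REFERENCES subject(id),
                      modality TEXT, lesion_volume_ml REAL);
""")
conn.executemany("INSERT INTO subject VALUES (?, ?, ?)",
                 [(1, 72, "CVD"), (2, 68, "control")])
conn.executemany("INSERT INTO image VALUES (?, ?, ?, ?)",
                 [(1, 1, "MRI", 4.2), (2, 2, "MRI", 0.3)])

# A statistical operator can then pull joined image + clinical data directly:
rows = conn.execute("""
    SELECT s.diagnosis, AVG(i.lesion_volume_ml)
    FROM subject s JOIN image i ON i.subject_id = s.id
    GROUP BY s.diagnosis
""").fetchall()
print(dict(rows))
```

The design point is that image-derived measures live in the same queryable store as clinical variables, so analyses need no ad hoc file wrangling.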

  16. Programmable bandwidth management in software-defined EPON architecture

    NASA Astrophysics Data System (ADS)

    Li, Chengjun; Guo, Wei; Wang, Wei; Hu, Weisheng; Xia, Ming

    2016-07-01

    This paper proposes a software-defined EPON architecture which replaces the hardware-implemented DBA module with a reprogrammable DBA module. The DBA module allows pluggable bandwidth allocation algorithms among multiple ONUs, adaptive to traffic profiles and network states. We also introduce a bandwidth management scheme executed at the controller to manage the customized DBA algorithms for all data queues of ONUs. Our performance investigation verifies the effectiveness of this new EPON architecture, and numerical results show that software-defined EPONs can achieve lower traffic delay and better support service differentiation in comparison with traditional EPONs.
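The abstract's central idea is that the dynamic bandwidth allocation (DBA) module becomes a swappable software strategy rather than fixed hardware logic. A minimal sketch of that pattern, assuming hypothetical names (`Onu`, `OltController`, `fixed_dba`, `weighted_dba`), not the paper's implementation:

```python
# Hedged sketch: DBA algorithms as pluggable strategies that a controller
# can swap at runtime, adapting allocation to traffic profiles.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Onu:
    onu_id: int
    requested_kb: int    # bandwidth requested in the current cycle
    weight: float = 1.0  # priority weight for service differentiation

# A DBA algorithm maps ONU requests to grants within the cycle capacity.
DbaAlgorithm = Callable[[List[Onu], int], Dict[int, int]]

def fixed_dba(onus: List[Onu], capacity_kb: int) -> Dict[int, int]:
    """Split capacity equally, ignoring demand (baseline behavior)."""
    share = capacity_kb // len(onus)
    return {o.onu_id: min(o.requested_kb, share) for o in onus}

def weighted_dba(onus: List[Onu], capacity_kb: int) -> Dict[int, int]:
    """Grant in proportion to weight, capped by each ONU's request."""
    total_w = sum(o.weight for o in onus)
    return {o.onu_id: min(o.requested_kb,
                          int(capacity_kb * o.weight / total_w))
            for o in onus}

class OltController:
    """Controller that can hot-swap the DBA module without new hardware."""
    def __init__(self, algorithm: DbaAlgorithm):
        self.algorithm = algorithm

    def allocate(self, onus: List[Onu], capacity_kb: int) -> Dict[int, int]:
        return self.algorithm(onus, capacity_kb)

onus = [Onu(1, 400, weight=2.0), Onu(2, 400, weight=1.0)]
ctrl = OltController(fixed_dba)
print(ctrl.allocate(onus, 600))  # equal shares: {1: 300, 2: 300}
ctrl.algorithm = weighted_dba    # "reprogram" the DBA module in software
print(ctrl.allocate(onus, 600))  # weighted:     {1: 400, 2: 200}
```

Reassigning `ctrl.algorithm` is the software analogue of replacing the hardware DBA module: allocation policy changes without touching the data path.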

  17. Tube support

    DOEpatents

    Mullinax, Jerry L.

    1988-01-01

    A tube support for supporting horizontal tubes from an inclined vertical support tube passing between the horizontal tubes. A support button is welded to the vertical support tube. Two clamping bars or plates, the lower edges of one bearing on the support button, are removably bolted to the inclined vertical tube. The clamping bars provide upper and lower surface support for the horizontal tubes.

  18. Space and Architecture's Current Line of Research? A Lunar Architecture Workshop With An Architectural Agenda.

    NASA Astrophysics Data System (ADS)

    Solomon, D.; van Dijk, A.

    The "2002 ESA Lunar Architecture Workshop" (June 3-16, ESTEC, Noordwijk, NL and V2_Lab, Rotterdam, NL) is the first-of-its-kind workshop for exploring the design of extra-terrestrial (infra)structures for human exploration of the Moon and Earth-like planets, introducing 'architecture's current line of research' and adopting architectural criteria. The workshop intends to inspire, engage and challenge 30-40 European masters students from the fields of aerospace engineering, civil engineering, architecture, and art to design, validate and build models of (infra)structures for Lunar exploration. The workshop also aims to open up new physical and conceptual terrain for an architectural agenda within the field of space exploration. A sound introduction to the issues, conditions, resources, technologies, and architectural strategies will initiate the workshop participants into the context of lunar architecture scenarios. In my paper and presentation about the development of the ideology behind this workshop, I will comment on the following questions: * Can the contemporary architectural agenda offer solutions that affect the scope of space exploration? It certainly has had an impression on urbanization and colonization of previously sparsely populated parts of Earth. * Does the current line of research in architecture offer any useful strategies for combining scientific interests, commercial opportunity, and public space? What can be learned from 'state of the art' architecture that blends commercial and public programmes within one location? * Should commercial 'colonisation' projects in space be required to provide public space in a location where all humans present are likely to be there in a commercial context? Is the wave in Koolhaas' new Prada flagship store just a gesture to public space, or does this new concept in architecture and shopping evolve the public space? * What can we learn about designing (infra-) structures on the Moon or any other

  19. An Approach for Detecting Inconsistencies between Behavioral Models of the Software Architecture and the Code

    SciTech Connect

    Ciraci, Selim; Sozer, Hasan; Tekinerdogan, Bedir

    2012-07-16

    In practice, inconsistencies between architectural documentation and the code might arise due to improper implementation of the architecture or the separate, uncontrolled evolution of the code. Several approaches have been proposed to detect inconsistencies between the architecture and the code, but these tend to be limited in capturing inconsistencies that might occur at runtime. We present a runtime verification approach for detecting inconsistencies between the dynamic behavior of the architecture and the actual code. The approach is supported by a set of tools that implement the architecture and the code patterns in Prolog, and support the automatic generation of runtime monitors for detecting inconsistencies. We illustrate the approach and the toolset for a Crisis Management System case study.
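The general idea of a runtime monitor for architectural conformance can be sketched compactly. The paper's toolset expresses patterns in Prolog and generates monitors automatically; the Python analogue below, with hypothetical component names, only illustrates the underlying check: observed calls are compared against an allowed-behavior model of the architecture.

```python
# Hedged sketch: an architectural behavior model as an allowed-call relation,
# and a runtime monitor that flags call sequences the model does not permit.
ALLOWED = {  # architectural model: component -> components it may call
    "Dispatcher": {"CrisisHandler"},
    "CrisisHandler": {"ResourceManager", "Logger"},
    "ResourceManager": {"Logger"},
}

class ArchitectureMonitor:
    def __init__(self, allowed):
        self.allowed = allowed
        self.violations = []

    def observe_call(self, caller, callee):
        """Record a runtime call and check it against the model."""
        if callee not in self.allowed.get(caller, set()):
            self.violations.append((caller, callee))

mon = ArchitectureMonitor(ALLOWED)
mon.observe_call("Dispatcher", "CrisisHandler")    # permitted by the model
mon.observe_call("Dispatcher", "ResourceManager")  # bypasses handler: flagged
print(mon.violations)  # [('Dispatcher', 'ResourceManager')]
```

In a real setting, `observe_call` would be driven by instrumentation (e.g., generated interceptors) rather than invoked by hand, which is what the paper's automatic monitor generation provides.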

  20. Generalized Information Architecture for Managing Requirements in IBM's Rational DOORS® Application.

    SciTech Connect

    Aragon, Kathryn M.; Eaton, Shelley M.; McCornack, Marjorie Turner; Shannon, Sharon A.

    2014-12-01

    When a requirements engineering effort fails to meet expectations, the requirements management tool is often blamed. Working with numerous project teams at Sandia National Laboratories over the last fifteen years has shown us that the tool is rarely the culprit; usually it is the lack of a viable information architecture with well-designed processes to support requirements engineering. This document illustrates design concepts with rationale, as well as a proven information architecture to structure and manage information in support of requirements engineering activities for any size or type of project. This generalized information architecture is specific to IBM's Rational DOORS (Dynamic Object Oriented Requirements System) software application, which is the requirements management tool in Sandia's CEE (Common Engineering Environment). This generalized information architecture can be used as presented or as a foundation for designing a tailored information architecture for project-specific needs. It may also be tailored for another software tool.