Using CORBA to integrate manufacturing cells to a virtual enterprise
NASA Astrophysics Data System (ADS)
Pancerella, Carmen M.; Whiteside, Robert A.
1997-01-01
It is critical in today's enterprises that manufacturing facilities are not isolated from design, planning, and other business activities and that information flows easily and bidirectionally between these activities. It is also important and cost-effective that COTS software, databases, and corporate legacy codes are well integrated in the information architecture. Further, much of the information generated during manufacturing must be dynamically accessible to engineering and business operations both in a restricted corporate intranet and on the internet. The software integration strategy in the Sandia Agile Manufacturing Testbed supports these enterprise requirements. We are developing a CORBA-based distributed object software system for manufacturing. Each physical machining device is a CORBA object and exports a common IDL interface to allow for rapid and dynamic insertion, deletion, and upgrading within the manufacturing cell. Cell management CORBA components access manufacturing devices without knowledge of any device-specific implementation. To support information flow from design to manufacturing, planning data is accessible to machinists on the shop floor. CORBA allows manufacturing components to be easily accessible to the enterprise. Dynamic clients can be created using web browsers and portable Java GUIs. A CORBA-OLE adapter allows integration with PC desktop applications. Other commercial software can access CORBA network objects in the information architecture through vendor APIs.
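A hedged sketch of the common-interface idea described above; the MachiningDevice operations and the CellManager class are invented for illustration and are not the Sandia IDL.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical common device contract of the kind described above, shown as
// the Java interface an IDL compiler would typically produce from IDL such as:
//
//   interface MachiningDevice {
//     void   loadPart(in string partId);
//     void   runProgram(in string ncProgram);
//     string status();
//   };
//
interface MachiningDevice {
    void loadPart(String partId);
    void runProgram(String ncProgram);
    String status();
}

// Cell-management code drives every device through the common interface, so
// devices can be inserted, removed, or upgraded without changing this code.
final class CellManager {
    private final Map<String, MachiningDevice> devices = new HashMap<>();

    void register(String name, MachiningDevice device) { devices.put(name, device); }
    void remove(String name) { devices.remove(name); }

    void run(String name, String partId, String ncProgram) {
        MachiningDevice d = devices.get(name);
        if (d == null) throw new IllegalArgumentException("unknown device: " + name);
        d.loadPart(partId);
        d.runProgram(ncProgram);
        System.out.println(name + ": " + d.status());
    }
}
```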
Bulk data transfer distributer: a high performance multicast model in ALMA ACS
NASA Astrophysics Data System (ADS)
Cirami, R.; Di Marcantonio, P.; Chiozzi, G.; Jeram, B.
2006-06-01
A high performance multicast model for the bulk data transfer mechanism in the ALMA (Atacama Large Millimeter Array) Common Software (ACS) is presented. The ALMA astronomical interferometer will consist of at least 50 12-m antennas operating at millimeter wavelength. The whole software infrastructure for ALMA is based on ACS, which is a set of application frameworks built on top of CORBA. To cope with the very strong requirements for the amount of data that needs to be transported by the software communication channels of the ALMA subsystems (a typical output data rate expected from the Correlator is of the order of 64 MB per second) and with the potential CORBA bottleneck due to parameter marshalling/de-marshalling, usage of the IIOP protocol, etc., a transfer mechanism based on the ACE/TAO CORBA Audio/Video (A/V) Streaming Service has been developed. The ACS Bulk Data Transfer architecture bypasses the CORBA protocol with an out-of-band connection for the data streams (transmitting data directly in TCP or UDP format), using at the same time CORBA for handshaking and leveraging the benefits of ACS middleware. Such a mechanism has proven to be capable of high performance, of the order of 800 Mbit per second on a 1 Gbit Ethernet network. Besides a point-to-point communication model, the ACS Bulk Data Transfer provides a multicast model. Since the TCP protocol does not support multicasting and all the data must be correctly delivered to all ALMA subsystems, a distributer mechanism has been developed. This paper focuses on the ACS Bulk Data Distributer, which mimics multicast behaviour by managing data dispatching to all receivers willing to get data from the same sender.
Unified web-based network management based on distributed object-oriented software agents
NASA Astrophysics Data System (ADS)
Djalalian, Amir; Mukhtar, Rami; Zukerman, Moshe
2002-09-01
This paper presents an architecture that provides a unified web interface to managed network devices that support CORBA, OSI or Internet-based network management protocols. A client gains access to managed devices through a web browser, which is used to issue management operations and receive event notifications. The proposed architecture is compatible with both the OSI Management Reference Model and CORBA. The steps required for designing the building blocks of such an architecture are identified.
CAD/CAE Integration Enhanced by New CAD Services Standard
NASA Technical Reports Server (NTRS)
Claus, Russell W.
2002-01-01
A Government-industry team led by the NASA Glenn Research Center has developed a computer interface standard for accessing data from computer-aided design (CAD) systems. The Object Management Group, an international computer standards organization, has adopted this CAD services standard. The new standard allows software (e.g., computer-aided engineering (CAE) and computer-aided manufacturing (CAM) software) to access multiple CAD systems through one programming interface. The interface is built on top of a distributed computing system called the Common Object Request Broker Architecture (CORBA). CORBA allows the CAD services software to operate in a distributed, heterogeneous computing environment.
ACS from development to operations
NASA Astrophysics Data System (ADS)
Caproni, Alessandro; Colomer, Pau; Jeram, Bogdan; Sommer, Heiko; Chiozzi, Gianluca; Mañas, Miguel M.
2016-08-01
The ALMA Common Software (ACS) provides the infrastructure of the distributed software system of ALMA and other projects. ACS, built on top of CORBA and Data Distribution Service (DDS) middleware, is based on a Component-Container paradigm and hides the complexity of the middleware, allowing the developer to focus on domain-specific issues. With the transition of the ALMA observatory from construction to operations, ACS effort focuses primarily on scalability, stability and robustness rather than on new features. The transition also brought a shorter release cycle and more extensive testing. For scalability, the most problematic area has been the CORBA Notification Service, used to implement the publisher-subscriber pattern because of the asynchronous nature of that paradigm; a lot of effort has been spent to improve its stability and recovery from run-time errors. The original bulk data mechanism, implemented using the CORBA Audio/Video Streaming Service, showed its limitations and has been replaced with a more performant and scalable DDS implementation. Operational needs soon showed the difference between release cycles for online software (i.e., software used during observations) and offline software, which requires much more frequent releases. This paper describes the impact the transition from construction to operations had on ACS, the solutions adopted so far, and a look into future evolution.
Performance Evaluation of Communication Software Systems for Distributed Computing
NASA Technical Reports Server (NTRS)
Fatoohi, Rod
1996-01-01
In recent years there has been an increasing interest in object-oriented distributed computing since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed, and compared. These systems are: the BSD socket programming interface; IONA's Orbix, an implementation of the CORBA specification; and the PVM message-passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.
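The comparison methodology lends itself to a simple harness. The sketch below is a hypothetical micro-benchmark, not the study's code: it times round trips through an echo operation which, in the experiments, would be an Orbix (CORBA) invocation, a BSD-socket exchange, or a PVM message; a local no-op stands in here so the code runs as-is.

```java
import java.util.function.UnaryOperator;

// Hypothetical round-trip timing harness: measures mean latency per call for
// a given payload size. Substituting a remote echo for the local lambda gives
// the kind of measurement reported in the paper.
public final class RoundTripBench {
    static double mean(UnaryOperator<byte[]> echo, byte[] payload, int iters) {
        long start = System.nanoTime();
        for (int i = 0; i < iters; i++) {
            echo.apply(payload);                             // one request/reply round trip
        }
        return (System.nanoTime() - start) / 1e3 / iters;    // microseconds per call
    }

    public static void main(String[] args) {
        for (int size : new int[] {1, 1024, 65536}) {        // vary message size
            byte[] payload = new byte[size];
            double usec = mean(b -> b, payload, 10_000);     // replace lambda with a remote echo
            System.out.printf("%6d bytes: %.2f us/round trip%n", size, usec);
        }
    }
}
```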
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas
2003-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.
2000-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
Introducing high performance distributed logging service for ACS
NASA Astrophysics Data System (ADS)
Avarias, Jorge A.; López, Joao S.; Maureira, Cristián; Sommer, Heiko; Chiozzi, Gianluca
2010-07-01
The ALMA Common Software (ACS) is a software framework that provides the infrastructure for the Atacama Large Millimeter Array and other projects. ACS, based on CORBA, offers basic services and common design patterns for distributed software. Every properly built system needs to be able to log status and error information. Logging in a single-computer scenario can be as easy as using fprintf statements. However, a distributed system must provide a way to centralize all logging data in a single place without overloading the network or complicating the applications. ACS provides a complete logging service infrastructure in which every log has an associated priority and timestamp, allowing filtering at different levels of the system (application, service and clients). Currently the ACS logging service uses an implementation of the CORBA Telecom Log Service in a customized way, using only a minimal subset of the features provided by the standard. The most relevant feature used by ACS is the ability to treat the logs as event data that gets distributed over the network in a publisher-subscriber paradigm. For this purpose the CORBA Notification Service, which is resource intensive, is used. On the other hand, the Data Distribution Service (DDS) provides an alternative standard for publisher-subscriber communication for real-time systems, offering better performance and featuring decentralized message processing. This paper describes how the new high-performance logging service of ACS has been modeled and developed using DDS, replacing the Telecom Log Service. Benefits and drawbacks are analyzed. A benchmark is presented comparing the differences between the implementations.
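A hedged illustration of the kind of log record and level-based filtering described above; the field and level names are invented and do not reproduce the ACS API.

```java
import java.time.Instant;

// Hypothetical log record carrying a priority and timestamp so that any level
// of the system (application, service, client) can filter before forwarding.
public final class LogRecord {
    public enum Priority { TRACE, DEBUG, INFO, WARNING, ERROR, EMERGENCY }

    public final Priority priority;
    public final Instant timestamp;
    public final String source;
    public final String message;

    public LogRecord(Priority priority, String source, String message) {
        this.priority = priority;
        this.timestamp = Instant.now();
        this.source = source;
        this.message = message;
    }

    /** Subscriber-side filter: keep only records at or above a threshold. */
    public static boolean passes(LogRecord r, Priority threshold) {
        return r.priority.ordinal() >= threshold.ordinal();
    }
}
```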
Guest Editors' Introduction
NASA Astrophysics Data System (ADS)
Guerraoui, Rachid; Vinoski, Steve
1997-09-01
The organization of a distributed system can have a tremendous impact on its capabilities, its performance, and its ability to evolve to meet changing requirements. For example, the client-server organization model has proven to be adequate for organizing a distributed system as a number of distributed servers that offer various functions to client processes across the network. However, it lacks peer-to-peer capabilities, and experience with the model has been predominantly in the context of local networks. To achieve peer-to-peer cooperation in a more global context, systems issues of scale, heterogeneity, configuration management, accounting and sharing are crucial, and the complexity of migrating from locally distributed to more global systems demands new tools and techniques. An emphasis on interfaces and modules leads to the modelling of a complex distributed system as a collection of interacting objects that communicate with each other only using requests sent to well-defined interfaces. Although object granularity typically varies at different levels of a system architecture, the same object abstraction can be applied to various levels of a computing architecture. Since 1989, the Object Management Group (OMG), an international software consortium, has been defining an architecture for distributed object systems called the Object Management Architecture (OMA). At the core of the OMA is a `software bus' called an Object Request Broker (ORB), which is specified by the OMG Common Object Request Broker Architecture (CORBA) specification. The OMA distributed object model fits the structure of heterogeneous distributed applications, and is applied in all layers of the OMA. For example, each of the OMG Object Services, such as the OMG Naming Service, is structured as a set of distributed objects that communicate using the ORB. Similarly, higher-level OMA components such as Common Facilities and Domain Interfaces are also organized as distributed objects that can be layered over both Object Services and the ORB. The OMG creates specifications, not code, but the interfaces it standardizes are always derived from demonstrated technology submitted by member companies. The specified interfaces are written in a neutral Interface Definition Language (IDL) that defines contractual interfaces with potential clients. Interfaces written in IDL can be translated to a number of programming languages via OMG standard language mappings so that they can be used to develop components. The resulting components can transparently communicate with other components written in different languages and running on different operating systems and machine types. The ORB is responsible for providing the illusion of `virtual homogeneity' regardless of the programming languages, tools, operating systems and networks used to realize and support these components. With the adoption of the CORBA 2.0 specification in 1995, these components are able to interoperate across multi-vendor CORBA-based products. More than 700 member companies have joined the OMG, including Hewlett-Packard, Digital, Siemens, IONA Technologies, Netscape, Sun Microsystems, Microsoft and IBM, which makes it the largest standards body in existence. These companies continue to work together within the OMG to refine and enhance the OMA and its components.
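A hedged Java sketch of the IDL workflow just described: an interface written in IDL is run through a standard language mapping and then invoked through the ORB. The Greeter interface and its idlj-generated Greeter/GreeterHelper classes are assumptions for illustration; only the org.omg.CORBA.ORB calls are standard.

```java
import org.omg.CORBA.ORB;

// Hypothetical client. Assumes an IDL definition such as
//     interface Greeter { string greet(in string name); };
// has been run through the Java language mapping (e.g. the idlj compiler),
// producing the Greeter stub and GreeterHelper classes used below.
public final class GreeterClient {
    public static void main(String[] args) {
        ORB orb = ORB.init(args, null);                  // bootstrap the ORB
        // A stringified object reference (IOR) published by the server,
        // passed here on the command line rather than hard-coded.
        org.omg.CORBA.Object obj = orb.string_to_object(args[0]);
        Greeter greeter = GreeterHelper.narrow(obj);     // type-safe downcast to the stub
        System.out.println(greeter.greet("world"));      // remote invocation via the ORB
        orb.destroy();
    }
}
```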
This special issue of Distributed Systems Engineering publishes five papers that were originally presented at the `Distributed Object-Based Platforms' track of the 30th Hawaii International Conference on System Sciences (HICSS), which was held in Wailea on Maui on 6 - 10 January 1997. The papers, which were selected based on their quality and the range of topics they cover, address different aspects of CORBA, including advanced aspects such as fault tolerance and transactions. These papers discuss the use of CORBA and evaluate CORBA-based development for different types of distributed object systems and architectures. The first paper, by S Rahkila and S Stenberg, discusses the application of CORBA to telecommunication management networks. In the second paper, P Narasimhan, L E Moser and P M Melliar-Smith present a fault-tolerant extension of an ORB. The third paper, by J Liang, S Sédillot and B Traverson, provides an overview of the CORBA Transaction Service and its integration with the ISO Distributed Transaction Processing protocol. In the fourth paper, D Sherer, T Murer and A Würtz discuss the evolution of a cooperative software engineering infrastructure to a CORBA-based framework. The fifth paper, by R Fatoohi, evaluates the communication performance of a commercially-available Object Request Broker (Orbix from IONA Technologies) on several networks, and compares the performance with that of more traditional communication primitives (e.g., BSD UNIX sockets and PVM). We wish to thank both the referees and the authors of these papers, as their cooperation was fundamental in ensuring timely publication.
Developing CORBA-Based Distributed Scientific Applications from Legacy Fortran Programs
NASA Technical Reports Server (NTRS)
Sang, Janche; Kim, Chan; Lopez, Isaac
2000-01-01
Recent progress in distributed object technology has enabled software applications to be developed and deployed easily such that objects or components can work together across the boundaries of the network, different operating systems, and different languages. A distributed object is not necessarily a complete application but rather a reusable, self-contained piece of software that co-operates with other objects in a plug-and-play fashion via a well-defined interface. The Common Object Request Broker Architecture (CORBA), a middleware standard defined by the Object Management Group (OMG), uses the Interface Definition Language (IDL) to specify such an interface for transparent communication between distributed objects. Since IDL can be mapped to any programming language, such as C++, Java, Smalltalk, etc., existing applications can be integrated into a new application and hence the tasks of code re-writing and software maintenance can be reduced. Many scientific applications in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with CORBA objects can increase the codes' reusability. For example, scientists could link their scientific applications to vintage Fortran programs such as Partial Differential Equation (PDE) solvers in a plug-and-play fashion. Unfortunately, a CORBA IDL-to-Fortran mapping has not been proposed and there seems to be no direct method of generating CORBA objects from Fortran without having to resort to manually writing C/C++ wrappers. In this paper, we present an efficient methodology to integrate Fortran legacy programs into a distributed object framework. Issues and strategies regarding the conversion and decomposition of Fortran codes into CORBA objects are discussed; a diagram illustrating the proposed conversion and decomposition mechanism is given in the paper. Our goal is to keep the Fortran codes unmodified. The conversion-aided tool takes the Fortran application program as input and helps programmers generate the C/C++ header file and IDL file for wrapping the Fortran code. Programmers need to determine by themselves how to decompose the legacy application into several reusable components based on the cohesion and coupling factors among the functions and subroutines. However, programming effort still can be greatly reduced because function headings and types have been converted to C++ and IDL styles. Most Fortran applications use the COMMON block to facilitate the transfer of a large number of variables among several functions. The COMMON block plays a role similar to that of global variables in C. In the CORBA-compliant programming environment, global variables cannot be used to pass values between objects. One approach to dealing with this problem is to put the COMMON variables into the parameter list. We do not adopt this approach because it requires modification of the Fortran source code, which violates our design consideration. Our approach is to extract the COMMON blocks and convert them into a structure-typed attribute in C++. Through attributes, each component can initialize the variables and return the computation result back to the client. We have successfully tested the proposed conversion methodology based on the f2c converter. Since f2c only translates Fortran to C, we still needed to edit the converted code to meet the C++ and IDL syntax. For example, C++/IDL requires a tag in the structure type, while C does not.
In this paper, we identify the necessary changes to the f2c converter in order to directly generate the C++ header and the IDL file. Our future work is to add a GUI to ease the decomposition task through simple dragging and dropping of icons.
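A hedged sketch of the COMMON-block strategy just described; the block, field, and interface names are invented, and the Java shown approximates what the standard IDL-to-Java mapping would produce for such an interface.

```java
// Hypothetical illustration of exposing a Fortran COMMON block as a
// structure-typed attribute rather than as global data. A block such as
//
//     COMMON /GRID/ NX, NY, DX, DY
//
// could be mirrored in IDL as
//
//     struct GridCommon { long nx; long ny; double dx; double dy; };
//     interface PdeSolver {
//       attribute GridCommon grid;   // replaces the COMMON block
//       void solve();
//     };
//
// whose Java mapping would look roughly like this:
public interface PdeSolver {
    final class GridCommon {
        public int nx, ny;
        public double dx, dy;
    }
    GridCommon grid();               // read the COMMON-block state
    void grid(GridCommon value);     // initialize it before a run
    void solve();                    // invoke the wrapped Fortran routine
}
```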
Distributed Object Technology with CORBA and Java: Key Concepts and Implications.
1997-06-01
Performance Analysis of Distributed Object-Oriented Applications
NASA Technical Reports Server (NTRS)
Schoeffler, James D.
1998-01-01
The purpose of this research was to evaluate the efficiency of a distributed simulation architecture which creates individual modules which are made self-scheduling through the use of a message-based communication system used for requesting input data from another module which is the source of that data. To make the architecture as general as possible, the message-based communication architecture was implemented using standard remote object architectures (Common Object Request Broker Architecture (CORBA) and/or Distributed Component Object Model (DCOM)). A series of experiments were run in which different systems are distributed in a variety of ways across multiple computers and the performance evaluated. The experiments were duplicated in each case so that the overhead due to message communication and data transmission can be separated from the time required to actually perform the computational update of a module each iteration. The software used to distribute the modules across multiple computers was developed in the first year of the current grant and was modified considerably to add a message-based communication scheme supported by the DCOM distributed object architecture. The resulting performance was analyzed using a model created during the first year of this grant which predicts the overhead due to CORBA and DCOM remote procedure calls and includes the effects of data passed to and from the remote objects. A report covering the distributed simulation software and the results of the performance experiments has been submitted separately. The above report also discusses possible future work to apply the methodology to dynamically distribute the simulation modules so as to minimize overall computation time.
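A hedged, in-process Java sketch of the self-scheduling module pattern just described; the SimModule interface, the Integrator example, and the fixed step size are invented for illustration, and in the reported experiments the requestOutput call would be a CORBA or DCOM remote invocation rather than a local one.

```java
import java.util.List;

// Each module asks the modules that produce its inputs for their latest
// outputs, then performs its own update for the iteration.
interface SimModule {
    double requestOutput();     // serve the latest output to downstream modules
    void update();              // compute the next state from requested inputs
}

final class Integrator implements SimModule {
    private final List<SimModule> inputs;
    private double state;

    Integrator(List<SimModule> inputs) { this.inputs = inputs; }

    @Override public double requestOutput() { return state; }

    @Override public void update() {
        double sum = 0.0;
        for (SimModule src : inputs) {
            sum += src.requestOutput();   // message-based data request
        }
        state += 0.01 * sum;              // fixed-step update, step = 0.01
    }
}
```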
Meeting the Challenge of Distributed Real-Time & Embedded (DRE) Systems
2012-05-10
Presentation slide excerpt: DRE applications are layered over middleware and middleware services, which in turn run on operating systems and protocols, and on hardware and networks; the platforms are COTS and standards-based, including Real-time CORBA (TAO) middleware and the ADAPTIVE Communication Environment (ACE); software product lines span F-15, AV-8B, F/A-18, and UCAV product variants.
Design and implementation of a CORBA-based genome mapping system prototype.
Hu, J; Mungall, C; Nicholson, D; Archibald, A L
1998-01-01
CORBA (Common Object Request Broker Architecture), as an open standard, is considered to be a good solution for the development and deployment of applications in distributed heterogeneous environments. This technology can be applied in the bioinformatics area to enhance utilization, management and interoperation between biological resources. This paper investigates issues in developing CORBA applications for genome mapping information systems in the Internet environment with emphasis on database connectivity and graphical user interfaces. The design and implementation of a CORBA prototype for an animal genome mapping database are described. The prototype demonstration is available via: http://www.ri.bbsrc.ac.uk/ark_corba/. jian.hu@bbsrc.ac.uk
NASA Technical Reports Server (NTRS)
Dhaliwal, Swarn S.
1997-01-01
An investigation was undertaken to build the software foundation for the WHERE (Web-based Hyper-text Environment for Requirements Engineering) project. The TCM (Toolkit for Conceptual Modeling) was chosen as the foundation software for the WHERE project, which aims to provide an environment for facilitating collaboration among geographically distributed people involved in the Requirements Engineering process. The TCM is a collection of diagram and table editors and has been implemented in the C++ programming language. The C++ implementation of the TCM was translated into Java in order to allow the editors to be used for building various functionality of the WHERE project; the WHERE project intends to use the Web as its communication backbone. One of the limitations of the translated software (TcmJava), which militated against its use in the WHERE project, was the persistent data management mechanism it inherited from the original TCM, which was designed to be used in standalone applications. Before TcmJava editors could be used as a part of the multi-user, geographically distributed applications of the WHERE project, a persistent storage mechanism had to be built which would allow data communication over the Internet, using the capabilities of the Web. An approach involving features of Java, CORBA (Common Object Request Broker Architecture), the Web, a middleware (Java Relational Binding (JRB)), and a database server was used to build the persistent data management infrastructure for the WHERE project. The developed infrastructure allows a TcmJava editor to be downloaded and run from a network host by using a JDK 1.1 (Java Development Kit) compatible Web browser. The editor establishes a connection with a server by using the ORB (Object Request Broker) software and stores/retrieves data in/from the server. The server consists of one or more CORBA objects, depending upon whether the data is to be made persistent on a single server or multiple servers. The CORBA object providing the persistent data server is implemented using the Java programming language. It uses the JRB to store/retrieve data in/from a relational database server. The persistent data management system provides transaction and user management facilities which allow multi-user, distributed access to the stored data in a secure manner.
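A hedged sketch of the kind of store/retrieve contract the report describes; the DiagramStore name and its operations are invented. In the actual system the editor would obtain such an object through the ORB, while the server-side implementation would persist the data in a relational database via JRB.

```java
// Hypothetical persistence interface, shown as the Java view an IDL
// definition like the following would map to:
//
//   interface DiagramStore {
//     void   save(in string diagramId, in string serializedDiagram);
//     string load(in string diagramId);
//   };
//
public interface DiagramStore {
    void save(String diagramId, String serializedDiagram);   // store an editor document
    String load(String diagramId);                            // retrieve it for editing
}
```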
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-08-01
An estimated 85% of the installed base of software is a custom application with a production quantity of one. In practice, almost 100% of military software systems are custom software. Paradoxically, the marginal costs of producing additional units are near zero. So why hasn't the software market, a market with high design costs and low production costs, evolved like other similar custom widget industries, such as automobiles and hardware chips? The military software industry seems immune to market pressures that have motivated a multilevel supply chain structure in other widget industries: design cost recovery, improving quality through specialization, and enabling rapid assembly from purchased components. The primary goal of the ComponentWare Consortium (CWC) technology plan was to overcome barriers to building and deploying mission-critical information systems by using verified, reusable software components (ComponentWare). The adoption of the ComponentWare infrastructure is predicated upon a critical mass of the leading platform vendors' inevitable adoption of emerging, object-based, distributed computing frameworks--initially CORBA and COM/OLE. The long-range goal of this work is to build and deploy military systems from verified reusable architectures. The promise of component-based applications is to enable developers to snap together new applications by mixing and matching prefabricated software components. A key result of this effort is the concept of reusable software architectures. A second important contribution is the notion that a software architecture is something that can be captured in a formal language and reused across multiple applications. The formalization and reuse of software architectures provide major cost and schedule improvements. The Unified Modeling Language (UML) is fast becoming the industry standard for object-oriented analysis and design notation for object-based systems. However, the lack of a standard real-time distributed object operating system, the lack of a standard Computer-Aided Software Environment (CASE) tool notation, and the lack of a standard CASE tool repository have limited the realization of component software. The approach to fulfilling this need is the software component factory innovation. The factory approach takes advantage of emerging standards such as UML, CORBA, Java and the Internet. The key technical innovation of the software component factory is the ability to assemble and test new system configurations as well as assemble new tools on demand from existing tools and architecture design repositories.
CORBASec Used to Secure Distributed Aerospace Propulsion Simulations
NASA Technical Reports Server (NTRS)
Blaser, Tammy M.
2003-01-01
The NASA Glenn Research Center and its industry partners are developing a Common Object Request Broker Architecture (CORBA) Security (CORBASec) test bed to secure their distributed aerospace propulsion simulations. Glenn has been working with its aerospace propulsion industry partners to deploy the Numerical Propulsion System Simulation (NPSS) object-based technology. NPSS is a program focused on reducing the cost and time in developing aerospace propulsion engines. It was developed by Glenn and is being managed by the NASA Ames Research Center as the lead center reporting directly to NASA Headquarters' Aerospace Technology Enterprise. Glenn is an active domain member of the Object Management Group, an open-membership, not-for-profit consortium that produces and manages computer industry specifications (i.e., CORBA) for interoperable enterprise applications. When NPSS is deployed, it will assemble a distributed aerospace propulsion simulation scenario from proprietary analytical CORBA servers and execute them with security afforded by the CORBASec implementation. The NPSS CORBASec test bed was initially developed with the TPBroker Security Service product (Hitachi Computer Products (America), Inc., Waltham, MA) using the Object Request Broker (ORB), which is based on the TPBroker Basic Object Adapter, and using NPSS software across different firewall products. The test bed has been migrated to the Portable Object Adapter architecture using the Hitachi Security Service product based on the VisiBroker 4.x ORB (Borland, Scotts Valley, CA) and on the Orbix 2000 ORB (Dublin, Ireland, with U.S. headquarters in Waltham, MA). Glenn, GE Aircraft Engines, and Pratt & Whitney Aircraft are the initial industry partners contributing to the NPSS CORBASec test bed. The test bed uses SecurID (RSA Security Inc., Bedford, MA) two-factor token-based authentication together with Hitachi Security Service digital-certificate-based authentication to validate the various NPSS users. The test bed is expected to demonstrate NPSS CORBASec-specific policy functionality, confirm adequate performance, and validate the required Internet configuration in a distributed collaborative aerospace propulsion environment.
Software Agents Applications Using Real-Time CORBA
NASA Astrophysics Data System (ADS)
Fowell, S.; Ward, R.; Nielsen, M.
This paper describes current projects being performed by SciSys in the area of the use of software agents, built using CORBA middleware, to improve operations within autonomous satellite/ground systems. These concepts have been developed and demonstrated in a series of experiments variously funded by ESA's Technology Flight Opportunity Initiative (TFO) and Leading Edge Technology for SMEs (LET-SME), and the British National Space Centre's (BNSC) National Technology Programme. Some of this earlier work has already been reported in [1]. This paper will address the trends, issues and solutions associated with this software agent architecture concept, together with its implementation using CORBA within an on-board environment, that is to say taking account of its real-time and resource-constrained nature.
Software To Secure Distributed Propulsion Simulations
NASA Technical Reports Server (NTRS)
Blaser, Tammy M.
2003-01-01
Distributed-object computing systems are presented with many security threats, including network eavesdropping, message tampering, and communications middleware masquerading. The NASA Glenn Research Center and its industry partners have taken an active role in mitigating the security threats associated with developing and operating their proprietary aerospace propulsion simulations. In particular, they are developing a collaborative Common Object Request Broker Architecture (CORBA) Security (CORBASec) test bed to secure their distributed aerospace propulsion simulations. Glenn has been working with its aerospace propulsion industry partners to deploy the Numerical Propulsion System Simulation (NPSS) object-based technology. NPSS is a program focused on reducing the cost and time in developing aerospace propulsion engines.
Research into a distributed fault diagnosis system and its application
NASA Astrophysics Data System (ADS)
Qian, Suxiang; Jiao, Weidong; Lou, Yongjian; Shen, Xiaomei
2005-12-01
CORBA (Common Object Request Broker Architecture) is a solution for distributed computing over heterogeneous systems that establishes a communication protocol between distributed objects, with strong emphasis on interoperation between them. However, only after suitable application approaches and practical monitoring and diagnosis technology have been developed can users share monitoring and diagnosis information and thereby achieve remote, online multi-expert cooperative diagnosis. This paper aims at building an open fault monitoring and diagnosis platform combining CORBA, the Web, and software agents. Heterogeneous diagnosis objects interoperate in independent threads through the CORBA software bus, enabling resource sharing and online multi-expert cooperative diagnosis and overcoming shortcomings such as limited diagnosis knowledge, reliance on a single diagnosis technique, and incomplete analysis functions, so that more complex and deeper diagnosis can be carried out. Taking a high-speed centrifugal air compressor set as an example, we demonstrate a distributed diagnosis based on CORBA. The example shows that combining CORBA, Web technology, and an agent framework model yields more effective approaches to problems such as real-time monitoring and diagnosis over the network and the decomposition of complex tasks. In this system, a multi-diagnosis intelligent agent helps improve diagnosis efficiency. In addition, the system offers an open environment in which diagnosis objects are easy to upgrade and new diagnosis server objects can readily join.
CTserver: A Computational Thermodynamics Server for the Geoscience Community
NASA Astrophysics Data System (ADS)
Kress, V. C.; Ghiorso, M. S.
2006-12-01
The CTserver platform is an Internet-based computational resource that provides on-demand services in Computational Thermodynamics (CT) to a diverse geoscience user base. This NSF-supported resource can be accessed at ctserver.ofm-research.org. The CTserver infrastructure leverages a high-quality and rigorously tested software library of routines for computing equilibrium phase assemblages and for evaluating internally consistent thermodynamic properties of materials, e.g. mineral solid solutions and a variety of geological fluids, including magmas. Thermodynamic models are currently available for 167 phases. Recent additions include Duan, Møller and Weare's model for supercritical C-O-H-S, extended to include SO2 and S2 species, and an entirely new associated solution model for O-S-Fe-Ni sulfide liquids. This software library is accessed via the CORBA Internet protocol for client-server communication. CORBA provides a standardized, object-oriented, language and platform independent, fast, low-bandwidth interface to phase property modules running on the server cluster. Network transport, language translation and resource allocation are handled by the CORBA interface. Users access server functionality in two principal ways. Clients written as browser-based Java applets may be downloaded which provide specific functionality such as retrieval of thermodynamic properties of phases, computation of phase equilibria for systems of specified composition, or modeling the evolution of these systems along some particular reaction path. This level of user interaction requires minimal programming effort and is ideal for classroom use. A more universal and flexible mode of CTserver access involves making remote procedure calls from user programs directly to the server public interface. The CTserver infrastructure relieves the user of the burden of implementing and testing the often complex thermodynamic models of real liquids and solids. A pilot application of this distributed architecture involves CFD computation of magma convection at Volcan Villarrica with magma properties and phase proportions calculated at each spatial node and at each time step via distributed function calls to MELTS-objects executing on the CTserver. Documentation and programming examples are provided at http://ctserver.ofm-research.org.
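A hedged illustration of the remote-procedure-call style of access described above; the interface and operation names below are invented and do not reproduce the CTserver IDL.

```java
// Hypothetical view of a phase-property service: a user program holding a
// reference to such a remote object can request properties or equilibrium
// assemblages at a given temperature and pressure, leaving the thermodynamic
// models entirely on the server side.
public interface PhaseProperties {
    /** Gibbs free energy of the named phase (J/mol) at T (K) and P (bar). */
    double gibbsEnergy(String phaseName, double[] composition, double tKelvin, double pBar);

    /** Names of phases in the equilibrium assemblage for a bulk composition at T and P. */
    String[] equilibrate(double[] bulkComposition, double tKelvin, double pBar);
}
```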
NASA Technical Reports Server (NTRS)
Lytle, John
2001-01-01
This report provides an overview presentation of the 2000 NPSS (Numerical Propulsion System Simulation) Review and Planning Meeting. Topics include: 1) a background of the program; 2) 1999 Industry Feedback; 3) FY00 Status, including resource distribution and major accomplishments; 4) FY01 Major Milestones; and 5) Future direction for the program. Specifically, simulation environment/production software and NPSS CORBA Security Development are discussed.
AnaBench: a Web/CORBA-based workbench for biomolecular sequence analysis
Badidi, Elarbi; De Sousa, Cristina; Lang, B Franz; Burger, Gertraud
2003-01-01
Background Sequence data analyses such as gene identification, structure modeling or phylogenetic tree inference involve a variety of bioinformatics software tools. Due to the heterogeneity of bioinformatics tools in usage and data requirements, scientists spend much effort on technical issues including data format, storage and management of input and output, and memorization of numerous parameters and multi-step analysis procedures. Results In this paper, we present the design and implementation of AnaBench, an interactive, Web-based bioinformatics Analysis workBench allowing streamlined data analysis. Our philosophy was to minimize the technical effort not only for the scientist who uses this environment to analyze data, but also for the administrator who manages and maintains the workbench. With new bioinformatics tools published daily, AnaBench permits easy incorporation of additional tools. This flexibility is achieved by employing a three-tier distributed architecture and recent technologies including CORBA middleware, Java, JDBC, and JSP. A CORBA server permits transparent access to a workbench management database, which stores information about the users, their data, as well as the description of all bioinformatics applications that can be launched from the workbench. Conclusion AnaBench is an efficient and intuitive interactive bioinformatics environment, which offers scientists application-driven, data-driven and protocol-driven analysis approaches. The prototype of AnaBench, managed by a team at the Université de Montréal, is accessible on-line at: . Please contact the authors for details about setting up a local-network AnaBench site elsewhere. PMID:14678565
NanoDesign: Concepts and Software for a Nanotechnology Based on Functionalized Fullerenes
NASA Technical Reports Server (NTRS)
Globus, Al; Jaffe, Richard; Chancellor, Marisa K. (Technical Monitor)
1996-01-01
Eric Drexler has proposed a hypothetical nanotechnology based on diamond and investigated the properties of such molecular systems. While attractive, diamonoid nanotechnology is not physically accessible with straightforward extensions of current laboratory techniques. We propose a nanotechnology based on functionalized fullerenes and investigate carbon nanotube based gears with teeth added via a benzyne reaction known to occur with C60. The gears are single-walled carbon nanotubes with appended coenzyme groups for teeth. Fullerenes are in widespread laboratory use and can be functionalized in many ways. Companion papers computationally demonstrate the properties of these gears (they appear to work) and the accessibility of the benzyne/nanotube reaction. This paper describes the molecular design techniques and rationale as well as the software that implements these design techniques. The software is a set of persistent C++ objects controlled by Tcl command scripts. The C++/Tcl interface is automatically generated by a software system called tcl_c++, developed by the author and described here. The objects keep track of different portions of the molecular machinery to allow different simulation techniques and boundary conditions to be applied as appropriate. This capability has been required to demonstrate (computationally) our gear's feasibility. A new distributed software architecture featuring a WWW universal client, CORBA distributed objects, and agent software is under consideration. The software architecture is intended to eventually enable a widely dispersed group to develop complex simulated molecular machines.
Accessing and distributing EMBL data using CORBA (common object request broker architecture).
Wang, L; Rodriguez-Tomé, P; Redaschi, N; McNeil, P; Robinson, A; Lijnzaad, P
2000-01-01
The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by PersistenceTM, an object/relational tool. The techniques of developing loaders and 'live object caching' with persistent objects achieve a smart live object cache where objects are created on demand. The objects are managed by an evictor pattern mechanism. The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their applications by building clients to our CORBA servers, which can be integrated into existing systems.
Accessing and distributing EMBL data using CORBA (common object request broker architecture)
Wang, Lichun; Rodriguez-Tomé, Patricia; Redaschi, Nicole; McNeil, Phil; Robinson, Alan; Lijnzaad, Philip
2000-01-01
Background: The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. Results: A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by PersistenceTM, an object/relational tool. The techniques of developing loaders and 'live object caching' with persistent objects achieve a smart live object cache where objects are created on demand. The objects are managed by an evictor pattern mechanism. Conclusions: The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their applications by building clients to our CORBA servers, which can be integrated into existing systems. PMID:11178259
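A hedged sketch of the kind of typed access the CORBA servers described above enable, in contrast to flat-file parsing; the interface and method names are invented and differ from the published EMBL object model.

```java
// Hypothetical client-side view: entries are fetched by accession number and
// exposed as objects whose contents the server materializes on demand
// (the "live object cache" idea), rather than as flat-file text to parse.
public interface EmblEntryBrowser {
    interface Entry {
        String accession();
        String description();
        String sequence();     // nucleotide sequence, created on demand by the server
    }
    Entry findByAccession(String accession);
}
```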
Developing CORBA-Based Distributed Scientific Applications From Legacy Fortran Programs
NASA Technical Reports Server (NTRS)
Sang, Janche; Kim, Chan; Lopez, Isaac
2000-01-01
An efficient methodology is presented for integrating legacy applications written in Fortran into a distributed object framework. Issues and strategies regarding the conversion and decomposition of Fortran codes into Common Object Request Broker Architecture (CORBA) objects are discussed. Fortran codes are modified as little as possible as they are decomposed into modules and wrapped as objects. A new conversion tool takes the Fortran application as input and generates the C/C++ header file and Interface Definition Language (IDL) file. In addition, the performance of the client server computing is evaluated.
A Framework for Distributed Mixed Language Scientific Applications
NASA Astrophysics Data System (ADS)
Quarrie, D. R.
The Object Management Group has defined an architecture (CORBA) for distributed object applications based on an Object Request Broker and Interface Definition Language. This project builds upon this architecture to establish a framework for the creation of mixed language scientific applications. A prototype compiler has been written that generates FORTRAN 90 or Eiffel stubs and skeletons and the required C++ glue code from an input IDL file that specifies object interfaces. This generated code can be used directly for non-distributed mixed language applications or in conjunction with the C++ code generated from a commercial IDL compiler for distributed applications. A feasibility study is presently underway to see whether a fully integrated software development environment for distributed, mixed-language applications can be created by modifying the back-end code generator of a commercial CASE tool to emit IDL.
DNA sequence chromatogram browsing using JAVA and CORBA.
Parsons, J D; Buehler, E; Hillier, L
1999-03-01
DNA sequence chromatograms (traces) are the primary data source for all large-scale genomic and expressed sequence tag (EST) sequencing projects. Access to the sequencing trace assists many later analyses, for example contig assembly and polymorphism detection, but obtaining and using traces is problematic. Traces are not collected and published centrally, they are much larger than the base calls derived from them, and viewing them requires the interactivity of a local graphical client with local data. To provide efficient global access to DNA traces, we developed a client/server system based on flexible Java components integrated into other applications including an applet for use in a WWW browser and a stand-alone trace viewer. Client/server interaction is facilitated by CORBA middleware which provides a well-defined interface, a naming service, and location independence. [The software is packaged as a Jar file available from the following URL: http://www.ebi.ac.uk/jparsons. Links to working examples of the trace viewers can be found at http://corba.ebi.ac.uk/EST. All the Washington University mouse EST traces are available for browsing at the same URL.]
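A hedged Java sketch of how a trace-viewer client could locate its server through the CORBA Naming Service, as described above. The service name, the TraceServer interface, and its idlj-generated TraceServerHelper stub are assumptions; only the org.omg.CORBA and org.omg.CosNaming calls are standard.

```java
import org.omg.CORBA.ORB;
import org.omg.CosNaming.NamingContextExt;
import org.omg.CosNaming.NamingContextExtHelper;

public final class TraceLookup {
    public static void main(String[] args) throws Exception {
        ORB orb = ORB.init(args, null);
        // Standard bootstrap: obtain the initial reference to the Naming Service.
        NamingContextExt naming =
            NamingContextExtHelper.narrow(orb.resolve_initial_references("NameService"));
        // Resolve a deployment-chosen name, then narrow to the typed stub.
        TraceServer traces = TraceServerHelper.narrow(naming.resolve_str("TraceServer"));
        byte[] trace = traces.fetchTrace(args[0]);   // e.g. an EST read name (hypothetical call)
        System.out.println("received " + trace.length + " bytes of trace data");
        orb.destroy();
    }
}
```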
Integrating the Web and continuous media through distributed objects
NASA Astrophysics Data System (ADS)
Labajo, Saul P.; Garcia, Narciso N.
1998-09-01
The Web has rapidly grown to become the standard for document interchange on the Internet. At the same time, interest in transmitting continuous media flows on the Internet, and in its associated applications like multimedia on demand, is also growing. Integrating both kinds of systems should allow building real hypermedia systems where all media objects can be linked from any other, taking into account temporal and spatial synchronization. A way to achieve this integration is using the CORBA architecture, a standard for open distributed systems; there are also recent efforts to integrate Web and CORBA systems. We use this architecture to build a service for distribution of data flows with timing restrictions. To integrate it with the Web we use, on one side, Java applets that can use the CORBA architecture and are embedded in HTML pages; on the other side, we also benefit from the existing efforts to integrate CORBA and the Web.
Martinez, R; Cole, C; Rozenblit, J; Cook, J F; Chacko, A K
2000-05-01
The US Army Great Plains Regional Medical Command (GPRMC) has a requirement to conform to Department of Defense (DoD) and Army security policies for the Virtual Radiology Environment (VRE) Project. Within the DoD, security policy is defined as the set of laws, rules, and practices that regulate how an organization manages, protects, and distributes sensitive information. Security policy in the DoD is described by the Trusted Computer System Evaluation Criteria (TCSEC), Army Regulation (AR) 380-19, Defense Information Infrastructure Common Operating Environment (DII COE), Military Health Services System Automated Information Systems Security Policy Manual, and National Computer Security Center-TG-005, "Trusted Network Interpretation." These documents were used to develop a security policy that defines information protection requirements that are made with respect to those laws, rules, and practices that are required to protect the information stored and processed in the VRE Project. The goal of the security policy is to provide for a C2-level of information protection while also satisfying the functional needs of the GPRMC's user community. This report summarizes the security policy for the VRE and defines the CORBA security services that satisfy the policy. In the VRE, the information to be protected is embedded into three major information components: (1) Patient information consists of Digital Imaging and Communications in Medicine (DICOM)-formatted fields. The patient information resides in the digital imaging network picture archiving and communication system (DIN-PACS) networks in the database archive systems and includes (a) patient demographics; (b) patient images from x-ray, computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound (US); and (c) prior patient images and related patient history. (2) Meta-Manager information to be protected consists of several data objects. This information is distributed to the Meta-Manager nodes and includes (a) radiologist schedules; (b) modality worklists; (c) routed case information; (d) DIN-PACS and Composite Health Care system (CHCS) messages, and Meta-Manager administrative and security information; and (e) patient case information. (3) Access control and communications security is required in the VRE to control who uses the VRE and Meta-Manager facilities and to secure the messages between VRE components. The CORBA Security Service Specification version 1.5 is designed to allow up to TCSEC's B2-level security for distributed objects. The CORBA Security Service Specification defines the functionality of several security features: identification and authentication, authorization and access control, security auditing, communication security, nonrepudiation, and security administration. This report describes the enhanced security features for the VRE and their implementation using commercial CORBA Security Service software products.
A Telemetry Browser Built with Java Components
NASA Astrophysics Data System (ADS)
Poupart, E.
In the context of CNES balloon scientific campaigns and the telemetry survey field, a generic telemetry processing product, called TelemetryBrowser in the following, was developed by reusing COTS software, most of it Java components. Connection between those components relies on a software architecture based on parameter producers and parameter consumers: the former transmit parameter values to the consumers that have registered with them. All of those producers and consumers can be spread over the network thanks to CORBA, and over every kind of workstation thanks to Java. This gives a very powerful means of adapting to constraints like network bandwidth or workstation processing power and memory. It is also very useful for displaying and correlating, at the same time, information coming from multiple and varied sources. An important point of this architecture is that the coupling between parameter producers and parameter consumers is reduced to the minimum and that transmission of information on the network is asynchronous. So, if a parameter consumer goes down or runs slowly, there is no consequence for the other consumers, because producers do not wait for a consumer to finish its data processing before sending data to the other consumers. Another interesting point is that parameter producers, also called TelemetryServers in the following, are generated nearly automatically from a telemetry description using the Flavor component. Keywords: Java components, CORBA, distributed application, OpenORB, software reuse, COTS, Internet, Flavor. (Flavor, the Formal Language for Audio-Visual Object Representation, is an object-oriented media representation language being developed at Columbia University; it is designed as an extension of Java and C++ and simplifies the development of applications that involve a significant media processing component (encoding, decoding, editing, manipulation, etc.) by providing bitstream representation semantics; flavor.sourceforge.net. OpenORB provides a Java implementation of the OMG CORBA 2.4.2 specification; openorb.sourceforge.net.)
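A hedged, in-process sketch of the producer/consumer decoupling described above (class names invented, not the CNES code); in the TelemetryBrowser the push would travel over CORBA rather than stay inside a single JVM.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Consumers register with a producer; the producer pushes each new value
// asynchronously, so a slow or failed consumer cannot hold up the others.
interface ParameterConsumer {
    void onValue(String parameterName, double value);
}

final class ParameterProducer {
    private final List<ParameterConsumer> consumers = new CopyOnWriteArrayList<>();
    private final ExecutorService dispatcher = Executors.newCachedThreadPool();

    void register(ParameterConsumer c)   { consumers.add(c); }
    void unregister(ParameterConsumer c) { consumers.remove(c); }

    void publish(String parameterName, double value) {
        for (ParameterConsumer c : consumers) {
            // one task per consumer: no consumer blocks the producer or its peers
            dispatcher.submit(() -> c.onValue(parameterName, value));
        }
    }
}
```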
A component-based, distributed object services architecture for a clinical workstation.
Chueh, H C; Raila, W F; Pappas, J J; Ford, M; Zatsman, P; Tu, J; Barnett, G O
1996-01-01
Attention to an architectural framework in the development of clinical applications can promote reusability of both legacy systems as well as newly designed software. We describe one approach to an architecture for a clinical workstation application which is based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components which can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces.
Software architecture of the Magdalena Ridge Observatory Interferometer
NASA Astrophysics Data System (ADS)
Farris, Allen; Klinglesmith, Dan; Seamons, John; Torres, Nicolas; Buscher, David; Young, John
2010-07-01
Merging software from 36 independent work packages into a coherent, unified software system with a lifespan of twenty years is the challenge faced by the Magdalena Ridge Observatory Interferometer (MROI). We solve this problem by using standardized interface software automatically generated from simple high-level descriptions of these systems, relying only on Linux, GNU, and POSIX without complex software such as CORBA. This approach, based on gigabit Ethernet with a TCP/IP protocol, provides the flexibility to integrate and manage diverse, independent systems using a centralized supervisory system that provides a database manager, data collectors, fault handling, and an operator interface.
The ALMA software architecture
NASA Astrophysics Data System (ADS)
Schwarz, Joseph; Farris, Allen; Sommer, Heiko
2004-09-01
The software for the Atacama Large Millimeter Array (ALMA) is being developed by many institutes on two continents. The software itself will function in a distributed environment, from the 0.5-14 km baselines that separate antennas to the larger distances that separate the array site at the Llano de Chajnantor in Chile from the operations and user support facilities in Chile, North America and Europe. Distributed development demands 1) interfaces that allow separated groups to work with minimal dependence on their counterparts at other locations; and 2) a common architecture to minimize duplication and ensure that developers can always perform similar tasks in a similar way. The Container/Component model provides a blueprint for the separation of functional from technical concerns: application developers concentrate on implementing functionality in Components, which depend on Containers to provide them with services such as access to remote resources, transparent serialization of entity objects to XML, logging, error handling and security. Early system integrations have verified that this architecture is sound and that developers can successfully exploit its features. The Containers and their services are provided by a system-oriented development team as part of the ALMA Common Software (ACS), middleware that is based on CORBA.
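The separation of functional from technical concerns can be pictured with the small Java sketch below; the interface and method names (ComponentServices, Component, lifecycle hooks) are hypothetical stand-ins for illustration only, not the actual ACS Container/Component API.

    // Hypothetical container-side services handed to every component at activation.
    interface ComponentServices {
        void log(String message);                          // centralized logging
        Object getRemoteComponent(String name);            // access to other, possibly remote, components
        String toXml(Object entityObject);                 // transparent serialization of entity objects
    }

    // Hypothetical component lifecycle: functional code only, no middleware details.
    interface Component {
        void initialize(ComponentServices services);       // the container injects its services here
        void execute();                                    // application-specific behaviour
        void cleanUp();                                    // called by the container at shutdown
    }

    // An application developer implements only the functional part.
    class AntennaMonitor implements Component {
        private ComponentServices services;
        public void initialize(ComponentServices services) { this.services = services; }
        public void execute() { services.log("monitoring antenna..."); }
        public void cleanUp() { services.log("monitor stopped"); }
    }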
Design and implementation of a distributed large-scale spatial database system based on J2EE
NASA Astrophysics Data System (ADS)
Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia
2003-03-01
With the increasing maturity of distributed object technology, CORBA, .NET and EJB are universally used in the traditional IT field. However, theories and practices of distributed spatial databases need further improvement because of the contradictions between large-scale spatial data and limited network bandwidth and between transitory sessions and long transaction processing. Differences and trends among CORBA, .NET and EJB are discussed in detail; afterwards the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are presented, comprising the GIS client application, web server, GIS application server and spatial data server. Moreover, the design and implementation of its components are explained: the GIS client application based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS Enterprise JavaBeans (containing session beans and entity beans). Besides, experiments on the relation between spatial data and response time under different conditions were conducted, which prove that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database based on the Internet is presented.
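As a rough illustration of the GIS application server tier, a stateless session bean facade in EJB 2.x style is sketched below; the SpatialQuery names and the extent-query method are hypothetical, and the delegation to entity beans or the spatial data server is omitted.

    import java.rmi.RemoteException;
    import javax.ejb.EJBObject;
    import javax.ejb.SessionBean;
    import javax.ejb.SessionContext;

    // Hypothetical remote interface exposed by the GIS application server.
    interface SpatialQuery extends EJBObject {
        String[] findFeaturesInExtent(double minX, double minY, double maxX, double maxY)
                throws RemoteException;
    }

    // Stateless session bean implementing the facade (EJB 2.x style).
    class SpatialQueryBean implements SessionBean {
        private SessionContext ctx;
        public void setSessionContext(SessionContext ctx) { this.ctx = ctx; }
        public void ejbCreate() {}
        public void ejbRemove() {}
        public void ejbActivate() {}
        public void ejbPassivate() {}

        // Business method: would delegate to entity beans / the spatial data server.
        public String[] findFeaturesInExtent(double minX, double minY, double maxX, double maxY) {
            return new String[0];   // placeholder result
        }
    }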
NASA Astrophysics Data System (ADS)
1998-06-01
The Object Management Group (OMG) Platform Technology Committee (PTC) ratified its support for a new asynchronous messaging service for CORBA at OMG's recent Technical Committee Meeting in Orlando, FL. The meeting, held from 8 - 12 June, saw the PTC send the Messaging Service out for a final vote among the OMG membership. The Messaging Service, which will integrate Message Oriented Middleware (MOM) with CORBA, will give CORBA a true asynchronous messaging capability - something of great interest to users and developers. Formal adoption of the specification will most likely occur by the end of the year. When adopted, the Messaging Service will be the world's first standard for Message Oriented Middleware and will give CORBA a true asynchronous messaging capability. Asynchronous messaging allows developers to build simpler, richer client environments. With asynchronous messaging there is less need for multi-threaded clients because the Asynchronous Method Invocation is non-blocking, meaning the client thread can continue work while the application waits for a reply. David Curtis, Director of Platform Technology for OMG, said: `This messaging service is one of the more valuable additions to CORBA. It enhances CORBA's existing asynchronous messaging capabilities which is a feature of many popular message oriented middleware products. This service will allow better integration between ORBs and MOM products. This enhanced messaging capability will only make CORBA more valuable for builders of distributed object systems.' The Messaging Service is one of sixteen technologies currently being worked on by the PTC. Additionally, seventeen Revision Task Forces (RTFs) are working on keeping OMG specifications up to date. The purpose of these Revision Task Forces is to take input from the implementors of OMG specifications and clarify or make necessary changes based on that input. The RTFs also ensure that the specifications remain up to date with changes in the OMA and with industry advances in general. On the domain side, thirty-eight technology processes are ongoing in the Domain Technology Committee (DTC). These range over a wide variety of industries, including healthcare, telecommunications, life sciences, manufacturing, business objects, electronic commerce, finance, transportation, utilities, and distributed simulation. These processes aim to enhance CORBA's value and provide interoperability for specific vertical industries. At the Orlando meeting, the Domain Technology Committee issued the following requests to industry: Telecom Wireless Access Request For Information (RFI); Statistics RFI; Clinical Image Access Service Request For Proposal (RFP); Distributed Simulation Request For Comment (RFC). The newly formed Statistics group at OMG plans to standardize interfaces for Statistical Services in CORBA, and their RFI, to which any person or company can respond, asks for input and guidance as they start this work, which will impact the broad spectrum of industries and processes that use statistics. The Clinical Image Access Service will standardize access to important medical images including digital x-rays, MRI scans, and other formats. The Distributed Simulation RFC, when complete, will establish the Distributed Simulation High-Level Architecture of the US Defense Modeling and Simulation Office as an OMG standard. For the next 90 days any person or company, not only OMG members, may submit comments on the submission.
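The non-blocking callback style that the Messaging Service standardizes can be pictured with the plain-Java sketch below; the Quoter/QuoteHandler names are hypothetical and no real generated AMI stubs are shown, only the shape of a sendc_-style call that returns immediately and hands the reply to a handler later.

    // Role of the AMI reply handler: receives the result (or the failure) later.
    interface QuoteHandler {
        void getQuoteReply(double price);
        void getQuoteExcep(Exception reason);
    }

    // Role of the asynchronous stub: returns immediately; the middleware delivers
    // the reply to the handler when it arrives.
    interface QuoterAsync {
        void sendc_getQuote(QuoteHandler replyHandler, String symbol);
    }

    class QuoteClient {
        void requestQuote(QuoterAsync quoter) {
            quoter.sendc_getQuote(new QuoteHandler() {
                public void getQuoteReply(double price) { System.out.println("price=" + price); }
                public void getQuoteExcep(Exception reason) { reason.printStackTrace(); }
            }, "ACME");
            // the client thread continues immediately; no blocking, no extra thread needed
        }
    }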
The OMG looks forward to its next meeting to be held in Helsinki, Finland, on 27 - 31 July and hosted by Nokia. OMG encourages anyone considering OMG membership to attend the meeting as a guest. For more information on attending call +1-508-820-4300 or e-mail info@omg.org. Note: descriptions for all RFPs, RFIs and RFCs in progress are available for viewing on the OMG Website at http://www.omg.org/schedule.htm, or contact OMG for a copy of the `Work in Progress' document. For more information on the OMG Technology Process please call Jeurgen Boldt, OMG Process Manager, at +1-508-820-4300 or email jeurgen@omg.org.
Configuration Management of an Optimization Application in a Research Environment
NASA Technical Reports Server (NTRS)
Townsend, James C.; Salas, Andrea O.; Schuler, M. Patricia
1999-01-01
Multidisciplinary design optimization (MDO) research aims to increase interdisciplinary communication and reduce design cycle time by combining system analyses (simulations) with design space search and decision making. The High Performance Computing and Communication Program's current High Speed Civil Transport application, HSCT4.0, at NASA Langley Research Center involves a highly complex analysis process with high-fidelity analyses that are more realistic than previous efforts at the Center. The multidisciplinary processes have been integrated to form a distributed application by using the Java language and Common Object Request Broker Architecture (CORBA) software techniques. HSCT4.0 is a research project in which both the application problem and the implementation strategy have evolved as the MDO and integration issues became better understood. Whereas earlier versions of the application and integrated system were developed with a simple, manual software configuration management (SCM) process, it was evident that this larger project required a more formal SCM procedure. This report briefly describes the HSCT4.0 analysis and its CORBA implementation and then discusses some SCM concepts and their application to this project. In anticipation that SCM will prove beneficial for other large research projects, the report concludes with some lessons learned in overcoming SCM implementation problems for HSCT4.0.
Control Software for the VERITAS Cerenkov Telescope System
NASA Astrophysics Data System (ADS)
Krawczynski, H.; Olevitch, M.; Sembroski, G.; Gibbs, K.
2003-07-01
The VERITAS collaboration is developing a system of initially 4 and eventually 7 Cerenkov telescopes of the 12 m diameter class for high sensitivity gamma-ray astronomy in the >50 GeV energy range. In this contribution we describe the software that controls and monitors the various VERITAS subsystems. The software uses an object-oriented approach to cope with the complexities that arise from using sub-groups of the 7 VERITAS telescopes to observe several sources at the same time. Inter-process communication is based on the CORBA Object Request Broker protocol and watch-dog processes monitor the sub-system performance.
Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation
NASA Technical Reports Server (NTRS)
Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.
2000-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.
NASA Astrophysics Data System (ADS)
Schwarz, Joseph; Raffi, Gianni
2002-12-01
The Atacama Large Millimeter Array (ALMA) is a joint project involving astronomical organizations in Europe and North America. ALMA will consist of at least 64 12-meter antennas operating in the millimeter and sub-millimeter range. It will be located at an altitude of about 5000m in the Chilean Atacama desert. The primary challenge to the development of the software architecture is the fact that both its development and runtime environments will be distributed. Groups at different institutes will develop the key elements such as Proposal Preparation tools, Instrument operation, On-line calibration and reduction, and Archiving. The Proposal Preparation software will be used primarily at scientists' home institutions (or on their laptops), while Instrument Operations will execute on a set of networked computers at the ALMA Operations Support Facility. The ALMA Science Archive, itself to be replicated at several sites, will serve astronomers worldwide. Building upon the existing ALMA Common Software (ACS), the system architects will prepare a robust framework that will use XML-encoded entity objects to provide an effective solution to the persistence needs of this system, while remaining largely independent of any underlying DBMS technology. Independence of distributed subsystems will be facilitated by an XML- and CORBA-based pass-by-value mechanism for exchange of objects. Proof of concept (as well as a guide to subsystem developers) will come from a prototype whose details will be presented.
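A minimal Java sketch of the XML pass-by-value idea described above: an entity object is serialized to an XML string and handed across a remote interface as plain text, so subsystems stay independent of each other's in-memory classes and of any underlying DBMS. All names below (SchedulingBlock, ArchiveService) are illustrative assumptions, not the ALMA data model.

    // Hypothetical entity object with a hand-rolled XML serialization.
    class SchedulingBlock {
        String id;
        double frequencyGHz;

        String toXml() {
            return "<SchedulingBlock id=\"" + id + "\">"
                 + "<frequencyGHz>" + frequencyGHz + "</frequencyGHz>"
                 + "</SchedulingBlock>";
        }
    }

    // Hypothetical remote interface: entities travel by value as XML text,
    // not as language- or ORB-specific object references.
    interface ArchiveService {
        void store(String entityXml);
    }

    class Client {
        void submit(ArchiveService archive, SchedulingBlock sb) {
            archive.store(sb.toXml());   // pass-by-value: only the XML document crosses the wire
        }
    }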
Photochemical Phenomenology Model for the New Millennium
NASA Technical Reports Server (NTRS)
Bishop, James; Evans, J. Scott
2000-01-01
This project tackles the problem of conversion of validated a priori physics-based modeling capabilities, specifically those relevant to the analysis and interpretation of planetary atmosphere observations, to application-oriented software for use in science and science-support activities. The software package under development, named the Photochemical Phenomenology Modeling Tool (PPMT), has particular focus on the atmospheric remote sensing data to be acquired by the CIRS instrument during the CASSINI Jupiter flyby and orbital tour of the Saturnian system. Overall, the project has followed the development outline given in the original proposal, and the Year 1 design and architecture goals have been met. Specific accomplishments and the difficulties encountered are summarized in this report. Most of the effort has gone into complete definition of the PPMT interfaces within the context of today's IT arena: adoption and adherence to the CORBA Component Model (CCM) has yielded a solid architecture basis, and CORBA-related issues (services, specification options, development plans, etc.) have been largely resolved. Implementation goals have been redirected somewhat so as to be more relevant to the upcoming CASSINI flyby of Jupiter, with focus now being more on data analysis and remote sensing retrieval applications.
The ALMA common software: dispatch from the trenches
NASA Astrophysics Data System (ADS)
Schwarz, J.; Sommer, H.; Jeram, B.; Sekoranja, M.; Chiozzi, G.; Grimstrup, A.; Caproni, A.; Paredes, C.; Allaert, E.; Harrington, S.; Turolla, S.; Cirami, R.
2008-07-01
The ALMA Common Software (ACS) provides both an application framework and CORBA-based middleware for the distributed software system of the Atacama Large Millimeter Array. Building upon open-source tools such as the JacORB, TAO and OmniORB ORBs, ACS supports the development of component-based software in any of three languages: Java, C++ and Python. Now in its seventh major release, ACS has matured, both in its feature set as well as in its reliability and performance. However, it is only recently that the ALMA observatory's hardware and application software has reached a level at which it can exploit and challenge the infrastructure that ACS provides. In particular, the availability of an Antenna Test Facility (ATF) at the site of the Very Large Array in New Mexico has enabled us to exercise and test the still evolving end-to-end ALMA software under realistic conditions. The major focus of ACS, consequently, has shifted from the development of new features to consideration of how best to use those that already exist. Configuration details which could be neglected for the purpose of running unit tests or skeletal end-to-end simulations have turned out to be sensitive levers for achieving satisfactory performance in a real-world environment. Surprising behavior in some open-source tools has required us to choose between patching code that we did not write or addressing its deficiencies by implementing workarounds in our own software. We will discuss these and other aspects of our recent experience at the ATF and in simulation.
The ALMA Common Software as a Basis for a Distributed Software Development
NASA Astrophysics Data System (ADS)
Raffi, Gianni; Chiozzi, Gianluca; Glendenning, Brian
The Atacama Large Millimeter Array (ALMA) is a joint project involving astronomical organizations in Europe, North America and Japan. ALMA will consist of 64 12-m antennas operating in the millimetre and sub-millimetre wavelength range, with baselines of more than 10 km. It will be located at an altitude above 5000 m in the Chilean Atacama desert. The ALMA Computing group is a joint group with staff scattered across 3 continents and is responsible for all the control and data flow software related to ALMA, including tools ranging from support of proposal preparation to archive access of automatically created images. Early in the project it was decided that an ALMA Common Software (ACS) would be developed as a way to provide to all partners involved in the development a common software platform. The original assumption was that some key middleware like communication via CORBA and the use of XML and Java would be part of the project. It was intended from the beginning to develop this software in an incremental way based on releases, so that it would then evolve into an essential embedded part of all ALMA software applications. In this way we would build a basic unity and coherence into a system that will have been developed in a distributed fashion. This paper evaluates our progress after 1.5 years of work, following a few tests and preliminary releases. It analyzes the advantages and difficulties of such an ambitious approach, which creates an interface across all the various control and data flow applications.
Numerical Propulsion System Simulation Architecture
NASA Technical Reports Server (NTRS)
Naiman, Cynthia G.
2004-01-01
The Numerical Propulsion System Simulation (NPSS) is a framework for performing analysis of complex systems. Because the NPSS was developed using the object-oriented paradigm, the resulting architecture is an extensible and flexible framework that is currently being used by a diverse set of participants in government, academia, and the aerospace industry. NPSS is being used by over 15 different institutions to support rockets, hypersonics, power and propulsion, fuel cells, ground based power, and aerospace. Full system-level simulations as well as subsystems may be modeled using NPSS. The NPSS architecture enables the coupling of analyses at various levels of detail, which is called numerical zooming. The middleware used to enable zooming and distributed simulations is the Common Object Request Broker Architecture (CORBA). The NPSS Developer's Kit offers tools for the developer to generate CORBA-based components and wrap codes. The Developer's Kit enables distributed multi-fidelity and multi-discipline simulations, preserves proprietary and legacy codes, and facilitates addition of customized codes. The platforms supported are PC, Linux, HP, Sun, and SGI.
The role of CORBA in enabling telemedicine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forslund, D.W.
1997-07-01
One of the most powerful tools available for telemedicine is a multimedia medical record accessible over a wide area and simultaneously editable by multiple physicians. The ability to do this through an intuitive interface linking multiple distributed data repositories while maintaining full data integrity is a fundamental enabling technology in healthcare. The author discusses the role of distributed object technology using CORBA in providing this capability including an example of such a system (TeleMed) which can be accessed through the World Wide Web. Issues of security, scalability, data integrity, and usability are emphasized.
GCS component development cycle
NASA Astrophysics Data System (ADS)
Rodríguez, Jose A.; Macias, Rosa; Molgo, Jordi; Guerra, Dailos; Pi, Marti
2012-09-01
The GTC is an optical-infrared 10-meter segmented mirror telescope at the ORM observatory in the Canary Islands (Spain). First light was on 13/07/2007 and since then it has been in the operation phase. The GTC control system (GCS) is a distributed object- and component-oriented system based on RT-CORBA and it is responsible for the management and operation of the telescope, including its instrumentation. GCS has used the Rational Unified Process (RUP) in its development. RUP is an iterative software development process framework. After analysing (use cases) and designing (UML) any of the GCS subsystems, an initial component description of its interface is obtained and from that information a component specification is written. In order to improve code productivity, GCS has adopted code generation to transform this component specification into the skeleton of component classes based on a software framework, called the Device Component Framework. Using the GCS development tools, based on javadoc and gcc, in only one step the component is generated, compiled and deployed, to be tested for the first time through our GUI inspector. The main advantages of this approach are the following: it reduces the learning curve of new developers and the development error rate, allows a systematic use of design patterns in the development and software reuse, speeds up the deliverables of the software product, markedly improves design consistency and design quality, and eliminates the future refactoring process required for the code.
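To make the generation step concrete, the sketch below shows the kind of skeleton a generator might emit from a component specification declaring one property and one command; the names, the spec syntax in the comment, and the Device base class are all hypothetical (GCS itself is written in C++; Java is used here only for brevity).

    // Hypothetical framework base class provided by the Device Component Framework.
    abstract class Device {
        protected void publish(String property, double value) { /* framework plumbing */ }
    }

    // Skeleton that a generator could emit from a specification such as:
    //   component M2Mirror { property double position; command move(double target); }
    class M2MirrorSkeleton extends Device {
        private double position;                 // generated property storage

        public double getPosition() {            // generated accessor
            return position;
        }

        public void move(double target) {        // generated command stub
            // TODO: developer fills in device-specific behaviour here
            position = target;
            publish("position", position);
        }
    }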
A Novel Trust Service Provider for Internet Based Commerce Applications.
ERIC Educational Resources Information Center
Siyal, M. Y.; Barkat, B.
2002-01-01
Presents a framework for enhancing trust in Internet commerce. Shows how trust can be provided through a network of Trust Service Providers (TSp). Identifies a set of services that should be offered by a TSp. Presents a distributed object-oriented implementation of trust services using CORBA, JAVA and XML. (Author/AEF)
Distributed information system architecture for Primary Health Care.
Grammatikou, M; Stamatelopoulos, F; Maglaris, B
2000-01-01
We present a distributed architectural framework for Primary Health Care (PHC) Centres. Distribution is handled through the introduction of the Roaming Electronic Health Care Record (R-EHCR) and the use of local caching and incremental update of a global index. The proposed architecture is designed to accommodate a specific PHC workflow model. Finally, we discuss a pilot implementation in progress, which is based on CORBA and web-based user interfaces. However, the conceptual architecture is generic and open to other middleware approaches like the DHE or HL7.
Open Radio Communications Architecture Core Framework V1.1.0 Volume 1 Software Users Manual
2005-02-01
on a PC utilizing the KDE desktop that comes with Red Hat Linux. The default desktop for most Red Hat Linux installations is the GNOME desktop. The ... SCA) v2.2. The software was designed for a desktop computer running the Linux operating system (OS). It was developed in C++, uses ACE/TAO for CORBA ... middleware, Xerces for the XML parser, and Red Hat Linux for the operating system. The software is referred to as Open Radio Communication
NASA Astrophysics Data System (ADS)
Rosich Minguell, Josefina; Garzón Lopez, Francisco
2012-09-01
The Mid-resolution InfRAreD Astronomical Spectrograph (MIRADAS, a near-infrared multi-object echelle spectrograph operating at spectral resolution R=20,000 over the 1-2.5μm bandpass) was selected in 2010 by the Gran Telescopio Canarias (GTC) partnership as the next-generation near-infrared spectrograph for the world's largest optical/infrared telescope, and is being developed by an international consortium. The MIRADAS consortium includes the University of Florida, Universidad de Barcelona, Universidad Complutense de Madrid, Instituto de Astrofísica de Canarias, Institut de Física d'Altes Energies, Institut d'Estudis Espacials de Catalunya and Universidad Nacional Autónoma de México. This paper gives an overview of the MIRADAS control software, which follows the standards defined by the telescope to permit the integration of this software into the GTC Control System (GCS). The MIRADAS control system is based on a distributed architecture according to a component model where every subsystem is self-contained. The GCS is a distributed environment written in object-oriented C++, which runs components on different computers, using CORBA middleware for communications. Each MIRADAS observing mode, including engineering, monitoring and calibration modes, will have its own predefined sequence, which is executed in the GCS Sequencer. These sequences will have the ability to communicate with other telescope subsystems.
TeleMed: Wide-area, secure, collaborative object computing with Java and CORBA for healthcare
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forslund, D.W.; George, J.E.; Gavrilov, E.M.
1998-12-31
Distributed computing is becoming commonplace in a variety of industries with healthcare being a particularly important one for society. The authors describe the development and deployment of TeleMed in a few healthcare domains. TeleMed is a 100% Java distributed application built on CORBA and OMG standards enabling the collaboration on the treatment of chronically ill patients in a secure manner over the Internet. These standards enable other systems to work interoperably with TeleMed and provide transparent access to high performance distributed computing to the healthcare domain. The goal of wide scale integration of electronic medical records is a grand-challenge scale problem of global proportions with far-reaching social benefits.
A component-based problem list subsystem for the HOLON testbed. Health Object Library Online.
Law, V.; Goldberg, H. S.; Jones, P.; Safran, C.
1998-01-01
One of the deliverables of the HOLON (Health Object Library Online) project is the specification of a reference architecture for clinical information systems that facilitates the development of a variety of discrete, reusable software components. One of the challenges facing the HOLON consortium is determining what kinds of components can be made available in a library for developers of clinical information systems. To further explore the use of component architectures in the development of reusable clinical subsystems, we have incorporated ongoing work in the development of enterprise terminology services into a Problem List subsystem for the HOLON testbed. We have successfully implemented a set of components using CORBA (Common Object Request Broker Architecture) and Java distributed object technologies that provide a functional problem list application and UMLS-based "Problem Picker." Through this development, we have overcome a variety of obstacles characteristic of rapidly emerging technologies, and have identified architectural issues necessary to scale these components for use and reuse within an enterprise clinical information system. PMID:9929252
National Cycle Program (NCP) Common Analysis Tool for Aeropropulsion
NASA Technical Reports Server (NTRS)
Follen, G.; Naiman, C.; Evans, A.
1999-01-01
Through the NASA/Industry Cooperative Effort (NICE) agreement, NASA Lewis and industry partners are developing a new engine simulation, called the National Cycle Program (NCP), which is the initial framework of NPSS. NCP is the first phase toward achieving the goal of NPSS. This new software supports the aerothermodynamic system simulation process for the full life cycle of an engine. The National Cycle Program (NCP) was written following the Object Oriented Paradigm (C++, CORBA). The software development process used was also based on the Object Oriented paradigm. Software reviews, configuration management, test plans, requirements, and design were all a part of the process used in developing NCP. Due to the many contributors to NCP, the stated software process was mandatory for building a common tool intended for use by so many organizations. The U.S. aircraft and airframe companies recognize NCP as the future industry standard for propulsion system modeling.
Software framework for automatic learning of telescope operation
NASA Astrophysics Data System (ADS)
Rodríguez, Jose A.; Molgó, Jordi; Guerra, Dailos
2016-07-01
The "Gran Telescopio de Canarias" (GTC) is an optical-infrared 10-meter segmented mirror telescope at the ORM observatory in Canary Islands (Spain). The GTC Control System (GCS) is a distributed object and component oriented system based on RT-CORBA and it is responsible for the operation of the telescope, including its instrumentation. The current development state of GCS is mature and fully operational. On the one hand telescope users as PI's implement the sequences of observing modes of future scientific instruments that will be installed in the telescope and operators, in turn, design their own sequences for maintenance. On the other hand engineers develop new components that provide new functionality required by the system. This great work effort is possible to minimize so that costs are reduced, especially if one considers that software maintenance is the most expensive phase of the software life cycle. Could we design a system that allows the progressive assimilation of sequences of operation and maintenance of the telescope, through an automatic self-programming system, so that it can evolve from one Component oriented organization to a Service oriented organization? One possible way to achieve this is to use mechanisms of learning and knowledge consolidation to reduce to the minimum expression the effort to transform the specifications of the different telescope users to the operational deployments. This article proposes a framework for solving this problem based on the combination of the following tools: data mining, self-Adaptive software, code generation, refactoring based on metrics, Hierarchical Agglomerative Clustering and Service Oriented Architectures.
Framework for teleoperated microassembly systems
NASA Astrophysics Data System (ADS)
Reinhart, Gunther; Anton, Oliver; Ehrenstrasser, Michael; Patron, Christian; Petzold, Bernd
2002-02-01
Manual assembly of minute parts is currently done using simple devices such as tweezers or magnifying glasses. The operator therefore requires a great deal of concentration for successful assembly. Teleoperated micro-assembly systems are a promising method for overcoming the scaling barrier. However, most of today's telepresence systems are based on proprietary and one-of-a-kind solutions. Frameworks which supply the basic functions of a telepresence system, e.g. to establish flexible communication links that depend on bandwidth requirements or to synchronize distributed components, are not currently available. Large amounts of time and money have to be invested in order to create task-specific teleoperated micro-assembly systems from scratch. For this reason, an object-oriented framework for telepresence systems that is based on CORBA as a common middleware was developed at the Institute for Machine Tools and Industrial Management (iwb). The framework is based on a distributed architectural concept and is realized in C++. External hardware components such as haptic, video or sensor devices are coupled to the system by means of defined software interfaces. In this case, the special requirements of teleoperation systems have to be considered, e.g. dynamic parameter settings for sensors during operation. Consequently, an architectural concept based on logical sensors has been developed to achieve maximum flexibility and to enable a task-oriented integration of hardware components.
Wrapping SRS with CORBA: from textual data to distributed objects.
Coupaye, T
1999-04-01
Biological data come in very different shapes. Databanks are maintained and used by distinct organizations. Text is the de facto standard exchange format. The SRS system can integrate heterogeneous textual databanks but it was lacking a way to structure the extracted data. This paper presents a CORBA interface to the SRS system which manages databanks in a flat file format. SRS Object Servers are CORBA wrappers for SRS. They allow client applications (visualisation tools, data mining tools, etc.) to access and query SRS servers remotely through an Object Request Broker (ORB). They provide loader objects that contain the information extracted from the databanks by SRS. Loader objects are not hard-coded but generated in a flexible way by using loader specifications which allow SRS administrators to package data coming from distinct databanks. The prototype may be available for beta-testing. Please contact the SRS group (http://srs.ebi.ac.uk).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, Robert; Rivers, Wilmer
any single computer program for seismic data analysis will not have all the capabilities needed to study reference events, since these detailed studies will be highly specialized. It may be necessary to develop and test new algorithms, and then these special codes must be integrated with existing software to use their conventional data-processing routines. We have investigated two means of establishing communications between the legacy and new codes: CORBA and XML/SOAP Web services. We have investigated making new Java code communicate with a legacy C-language program, geotool, running under Linux. Both methods were successful, but both were difficult to implement. C programs on UNIX/Linux are poorly supported for Web services, compared with the Java and .NET languages and platforms. Easier-to-use middleware will be required for scientists to construct distributed applications as easily as stand-alone ones. Considerable difficulty was encountered in modifying geotool, and this problem shows the need to use component-based user interfaces instead of large C-language codes where changes to one part of the program may introduce side effects into other parts. We have nevertheless made bug fixes and enhancements to that legacy program, but it remains difficult to expand it through communications with external software.
Web-Based Distributed Simulation of Aeronautical Propulsion System
NASA Technical Reports Server (NTRS)
Zheng, Desheng; Follen, Gregory J.; Pavlik, William R.; Kim, Chan M.; Liu, Xianyou; Blaser, Tammy M.; Lopez, Isaac
2001-01-01
An application was developed to allow users to run and view the Numerical Propulsion System Simulation (NPSS) engine simulations from web browsers. Simulations were performed on multiple Information Power Grid (IPG) test beds. The Common Object Request Broker Architecture (CORBA) was used for brokering data exchange among machines and IPG/Globus for job scheduling and remote process invocation. Web server scripting was performed by JavaServer Pages (JSP). This application has proven to be an effective and efficient way to couple heterogeneous distributed components.
Project Integration Architecture: Implementation of the CORBA-Served Application Infrastructure
NASA Technical Reports Server (NTRS)
Jones, William Henry
2005-01-01
The Project Integration Architecture (PIA) has been demonstrated in a single-machine C++ implementation prototype. The architecture is in the process of being migrated to a Common Object Request Broker Architecture (CORBA) implementation. The migration of the Foundation Layer interfaces is fundamentally complete. The implementation of the Application Layer infrastructure for that migration is reported. The Application Layer provides for distributed user identification and authentication, per-user/per-instance access controls, server administration, the formation of mutually-trusting application servers, a server locality protocol, and an ability to search for interface implementations through such trusted server networks.
Architecture for hospital information integration
NASA Astrophysics Data System (ADS)
Chimiak, William J.; Janariz, Daniel L.; Martinez, Ralph
1999-07-01
The integration of hospital information systems (HIS) continues. Data storage systems, data networks and computers improve, databases grow and health-care applications increase. Some computer operating systems continue to evolve and some fade. Health care delivery now depends on this computer-assisted environment. As a result, the critical harmonization of the various hospital information systems becomes increasingly difficult. The purpose of this paper is to present an architecture for HIS integration that is computer-language-neutral and computer-hardware-neutral for the informatics applications. The proposed architecture builds upon the work done at the University of Arizona on middleware, the work of the National Electrical Manufacturers Association, and the American College of Radiology. It is a fresh approach that allows applications engineers to access medical data easily, so they can concentrate on the application techniques in which they are expert without struggling with medical information syntaxes. The HIS can be modeled using a hierarchy of information sub-systems, thus facilitating its understanding. The architecture includes the resulting information model along with a strict but intuitive application programming interface, managed by CORBA. The CORBA requirement facilitates interoperability. It should also reduce software and hardware development times.
Eich, H P; Ohmann, C
1999-01-01
Inadequate informatics support of multi-centre clinical trials leads to poor quality. In order to support a multi-centre clinical trial, a data collection system using the WWW and the Internet, based on Java, has been developed. In this study a generalization and extension of this prototype has been performed. The prototype has been applied to another clinical trial and a knowledge server based on C++ has been integrated via CORBA. The investigation and implementation of security aspects of web-based data collection is now under evaluation.
A Software Architecture for Intelligent Synthesis Environments
NASA Technical Reports Server (NTRS)
Filman, Robert E.; Norvig, Peter (Technical Monitor)
2001-01-01
NASA's Intelligent Synthesis Environment (ISE) program is a grand attempt to develop a system to transform the way complex artifacts are engineered. This paper discusses a "middleware" architecture for enabling the development of ISE. Desirable elements of such an Intelligent Synthesis Architecture (ISA) include remote invocation; plug-and-play applications; scripting of applications; management of design artifacts, tools, and artifact and tool attributes; common system services; system management; and systematic enforcement of policies. This paper argues that the ISA extend conventional distributed object technology (DOT) such as CORBA and Product Data Managers with flexible repositories of product and tool annotations and "plug-and-play" mechanisms for inserting "ility" or orthogonal concerns into the system. I describe the Object Infrastructure Framework, an Aspect Oriented Programming (AOP) environment for developing distributed systems that provides "ility" insertion and enables consistent annotation maintenance. This technology can be used to enforce policies such as maintaining the annotations of artifacts, particularly the provenance and access control rules of artifacts; performing automatic datatype transformations between representations; supplying alternative servers of the same service; reporting on the status of jobs and the system; conveying privileges throughout an application; supporting long-lived transactions; maintaining version consistency; and providing software redundancy and mobility.
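A small Java sketch of the "insert an orthogonal concern without touching application code" idea, using a JDK dynamic proxy to wrap any service interface with an access-control check and an audit log; this is a generic illustration of the technique, not the Object Infrastructure Framework API, and the ArtifactStore interface and policy method are hypothetical.

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    // Any application-level service interface (hypothetical example).
    interface ArtifactStore {
        void save(String artifactId, byte[] data);
    }

    class IlityInjector {
        // Wrap a target service so every call is access-checked and logged.
        @SuppressWarnings("unchecked")
        static <T> T withPolicies(Class<T> iface, T target, String user) {
            InvocationHandler handler = (proxy, method, args) -> {
                if (!isAuthorized(user, method)) {
                    throw new SecurityException(user + " may not call " + method.getName());
                }
                System.out.println("AUDIT: " + user + " -> " + method.getName());
                return method.invoke(target, args);          // forward to the real service
            };
            return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface}, handler);
        }

        private static boolean isAuthorized(String user, Method m) {
            return true;   // placeholder policy; a real system would consult access-control rules
        }
    }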
NASA Technical Reports Server (NTRS)
Sang, Janche
2003-01-01
Within NASA's Aviation Safety Program, NASA GRC participates in the Modeling and Simulation Project called ASMM. NASA GRC's focus is to characterize the propulsion systems performance from a fleet management and maintenance perspective by modeling and, through simulation, predict the characteristics of two classes of commercial engines (CFM56 and GE90). In prior years, the High Performance Computing and Communication (HPCC) program funded NASA Glenn to develop a large-scale, detailed simulation for the analysis and design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). Three major aspects of this modeling, namely the integration of different engine components, the coupling of multiple disciplines, and engine component zooming at appropriate levels of fidelity, require relatively tight coupling of different analysis codes. Most of these codes in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase their reusability. Aviation Safety's modeling and simulation use in characterizing fleet management has similar needs. The modeling and simulation of these propulsion systems use existing Fortran and C codes that are instrumental in determining the performance of the fleet. The research centers on building a CORBA-based development environment for programmers to easily wrap and couple legacy Fortran codes. This environment consists of a C++ wrapper library to hide the details of CORBA and an efficient remote variable scheme to facilitate data exchange between the client and the server model. Additionally, a Web Service model should also be constructed for evaluation of this technology's use over the next two to three years.
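A rough Java sketch of the "remote variable" idea described above: a client-side holder lazily fetches a named variable from the server-side wrapper around the legacy code and pushes it back only when it has been modified, so repeated reads do not generate remote calls. All names are hypothetical; the environment described in the abstract is a C++ wrapper library over CORBA.

    // Hypothetical remote interface exposed by the server-side wrapper of a legacy code.
    interface LegacyModel {
        double get(String variable);
        void set(String variable, double value);
    }

    // Client-side remote variable: caches the value and synchronizes only when needed.
    class RemoteVariable {
        private final LegacyModel model;
        private final String name;
        private double cached;
        private boolean fetched = false, dirty = false;

        RemoteVariable(LegacyModel model, String name) { this.model = model; this.name = name; }

        double value() {
            if (!fetched) { cached = model.get(name); fetched = true; }   // one remote call, then reuse
            return cached;
        }

        void assign(double v) { cached = v; fetched = true; dirty = true; }

        void flush() {
            if (dirty) { model.set(name, cached); dirty = false; }        // push back only if modified
        }
    }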
Cognitive/emotional models for human behavior representation in 3D avatar simulations
NASA Astrophysics Data System (ADS)
Peterson, James K.
2004-08-01
Simplified models of human cognition and emotional response are presented which are based on models of auditory/ visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex which is based on new isocortex models presented recently by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher level abstract (meta level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python etc) and efficient use of legacy code.
Photochemical Phenomenology Model for the New Millennium
NASA Technical Reports Server (NTRS)
Bishop, James; Evans, J. Scott
2001-01-01
The "Photochemical Phenomenology Model for the New Millennium" project tackles the issue of reengineering and extension of validated physics-based modeling capabilities ("legacy" computer codes) to application-oriented software for use in science and science-support activities. While the design and architecture layouts are in terms of general particle distributions involved in scattering, impact, and reactive interactions, initial Photochemical Phenomenology Modeling Tool (PPMT) implementations are aimed at construction and evaluation of photochemical transport models with rapid execution for use in remote sensing data analysis activities in distributed systems. Current focus is on the Composite Infrared Spectrometer (CIRS) data acquired during the CASSINI flyby of Jupiter. Overall, the project has stayed on the development track outlined in the Year 1 annual report and most Year 2 goals have been met. The issues that have required the most attention are: implementation of the core photochemistry algorithms; implementation of a functional Java Graphical User Interface; completion of a functional CORBA Component Model framework; and assessment of performance issues. Specific accomplishments and the difficulties encountered are summarized in this report. Work to be carried out in the next year center on: completion of testing of the initial operational implementation; its application to analysis of the CASSINI/CIRS Jovian flyby data; extension of the PPMT to incorporate additional phenomenology algorithms; and delivery of a mature operational implementation.
Process Management inside ATLAS DAQ
NASA Astrophysics Data System (ADS)
Alexandrov, I.; Amorim, A.; Badescu, E.; Burckhart-Chromek, D.; Caprini, M.; Dobson, M.; Duval, P. Y.; Hart, R.; Jones, R.; Kazarov, A.; Kolos, S.; Kotov, V.; Liko, D.; Lucio, L.; Mapelli, L.; Mineev, M.; Moneta, L.; Nassiakou, M.; Pedro, L.; Ribeiro, A.; Roumiantsev, V.; Ryabov, Y.; Schweiger, D.; Soloviev, I.; Wolters, H.
2002-10-01
The Process Management component of the online software of the future ATLAS experiment data acquisition system is presented. The purpose of the Process Manager is to perform basic job control of the software components of the data acquisition system. It is capable of starting, stopping and monitoring the status of those components on the data acquisition processors independent of the underlying operating system. Its architecture is designed on the basis of a client-server model using CORBA-based communication. The server part relies on C++ software agent objects acting as an interface between the local operating system and client applications. Some of the major design challenges of the software agents were to achieve the maximum degree of autonomy possible and to create processes that are aware of dynamic conditions in their environment and able to determine corresponding actions. Issues such as the performance of the agents in terms of time needed for process creation and destruction, the scalability of the system taking into consideration the final ATLAS configuration, and minimizing the use of hardware resources were also of critical importance. Besides the details given on the architecture and the implementation, we also present scalability and performance test results of the Process Manager system.
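The basic job-control role of such an agent can be pictured with the short Java sketch below (modern java.lang.ProcessBuilder API, hypothetical class name); it only illustrates start/stop/status handling on the local machine, not the CORBA communication or the ATLAS implementation.

    import java.io.IOException;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical local agent: starts, stops and reports the status of managed processes.
    class ProcessAgent {
        private final Map<String, Process> managed = new ConcurrentHashMap<>();

        void start(String id, String... command) throws IOException {
            Process p = new ProcessBuilder(command).inheritIO().start();
            managed.put(id, p);
        }

        void stop(String id) {
            Process p = managed.remove(id);
            if (p != null) p.destroy();                 // ask the process to terminate
        }

        String status(String id) {
            Process p = managed.get(id);
            if (p == null) return "UNKNOWN";
            return p.isAlive() ? "RUNNING" : "EXITED(" + p.exitValue() + ")";
        }
    }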
Rural telemedicine project in northern New Mexico
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zink, S.; Hahn, H.; Rudnick, J.
A virtual electronic medical record system is being deployed over the Internet with security in northern New Mexico using TeleMed, a multimedia medical records management system that uses CORBA-based client-server technology and distributed database architecture. The goal of the NNM Rural Telemedicine Project is to implement TeleMed into fifteen rural clinics and two hospitals within a 25,000 square mile area of northern New Mexico. Evaluation of the project consists of three components: job task analysis, audit of immunized children, and time motion studies. Preliminary results of the evaluation components are presented.
NASA Astrophysics Data System (ADS)
Park, Soomyung; Joo, Seong-Soon; Yae, Byung-Ho; Lee, Jong-Hyun
2002-07-01
In this paper, we present the Optical Cross-Connect (OXC) management control system architecture, which is scalable, supports robust maintenance and provides a distributed management environment in the optical transport network. The OXC system we are developing, which is divided into hardware and internal and external software, is made up of the OXC subsystem with the Optical Transport Network (OTN) sub-layer hardware and the optical switch control system, the signaling control protocol subsystem performing the User-to-Network Interface (UNI) and Network-to-Network Interface (NNI) signaling control, the Operation Administration Maintenance & Provisioning (OAM&P) subsystem, and the network management subsystem. The OXC management control system can support the flexible expansion of the optical transport network, provide connectivity to heterogeneous external network elements, allow components to be added or deleted without interrupting OAM&P services, be remotely operated, provide a global view and detailed information for network planners and operators, and has a Common Object Request Broker Architecture (CORBA)-based open system architecture that makes it easy to add and delete intelligent service networking functions in the future. To meet these considerations, we adopt an object-oriented development method throughout the system analysis, design, and implementation steps to build an OXC management control system with scalability, maintainability, and a distributed management environment. As a consequence, the componentification of the OXC operation management functions of each subsystem makes maintenance robust and increases code reusability. Also, the component-based OXC management control system architecture is inherently flexible and scalable.
NASA Technical Reports Server (NTRS)
Lopez, Isaac; Follen, Gregory J.; Gutierrez, Richard; Foster, Ian; Ginsburg, Brian; Larsson, Olle; Martin, Stuart; Tuecke, Steven; Woodford, David
2000-01-01
This paper describes a project to evaluate the feasibility of combining Grid and Numerical Propulsion System Simulation (NPSS) technologies, with a view to leveraging the numerous advantages of commodity technologies in a high-performance Grid environment. A team from the NASA Glenn Research Center and Argonne National Laboratory has been studying three problems: a desktop-controlled parameter study using Excel (Microsoft Corporation); a multicomponent application using ADPAC, NPSS, and a controller program; and an aviation safety application running about 100 jobs in near real time. The team has successfully demonstrated (1) a Common Object Request Broker Architecture (CORBA)-to-Globus resource manager gateway that allows CORBA remote procedure calls to be used to control the submission and execution of programs on workstations and massively parallel computers, (2) a gateway from the CORBA Trader service to the Grid information service, and (3) a preliminary integration of CORBA and Grid security mechanisms. We have applied these technologies to two applications related to NPSS, namely a parameter study and a multicomponent simulation.
NASA Astrophysics Data System (ADS)
Sventek, Joe
1998-12-01
Hewlett-Packard Laboratories, 1501 Page Mill Road, Palo Alto, CA 94304, USA. Introduction: The USENIX Conference on Object-Oriented Technologies and Systems (COOTS) is held annually in the late spring. The conference evolved from a set of C++ workshops that were held under the auspices of USENIX, the first of which met in 1989. Given the growing diverse interest in object-oriented technologies, the C++ focus of the workshop eventually became too narrow, with the result that the scope was widened in 1995 to include object-oriented technologies and systems. COOTS is intended to showcase advanced R&D efforts in object-oriented technologies and software systems. The conference emphasizes experimental research and experience gained by using object-oriented techniques and languages to build complex software systems that meet real-world needs. COOTS solicits papers in the following general areas: application of, and experiences with, object-oriented technologies in particular domains (e.g. financial, medical, telecommunication); the architecture and implementation of distributed object systems (e.g. CORBA, DCOM, RMI); object-oriented programming and specification languages; object-oriented design and analysis. The 4th meeting of COOTS was held 27 - 30 April 1998 at the El Dorado Hotel, Santa Fe, New Mexico, USA. Several tutorials were given. The technical program proper consisted of a single track of six sessions, with three paper presentations per session. A keynote address and a provocative panel session rounded out the technical program. The program committee reviewed 56 papers, selecting the best 18 for presentation in the technical sessions. While we solicit papers across the spectrum of applications of object-oriented technologies, this year there was a predominance of distributed, object-oriented papers. The accepted papers reflected this asymmetry, with 15 papers on distributed objects and 3 papers on object-oriented languages. The papers in this special issue are the six best distributed object papers (in the opinion of the program committee). They represent the diversity of research in this particular area, and should give the reader a good idea of the types of papers presented at COOTS as well as the calibre of the work so presented. The papers: The paper by Jain, Widoff and Schmidt explores the suitability of Java for writing performance-sensitive distributed applications. Despite the popularity of Java, there are many concerns about its efficiency; in particular, networking and computation performance are key concerns when considering the use of Java to develop performance-sensitive distributed applications. This paper makes three contributions to the study of Java for these applications: it describes an architecture using Java and the Web to develop MedJava, which is a distributed electronic medical imaging system with stringent networking and computation requirements; it presents benchmarks of MedJava image processing and compares the results to the performance of xv, which is an equivalent image processing application written in C; it presents performance benchmarks using Java as a transport interface to exchange large medical images over high-speed ATM networks. The paper by Little and Shrivastava covers the integration of several important topics: transactions, distributed systems, Java, the Internet and security. The usefulness of this paper lies in the synthesis of an effective solution applying work in different areas of computing to the Java environment.
Securing applications constructed from distributed objects is important if these applications are to be used in mission-critical situations. Delegation is one aspect of distributed system security that is necessary for such applications. The paper by Nagaratnam and Lea describes a secure delegation model for Java-based, distributed object environments. The paper by Frølund and Koistinen addresses the topical issue of providing a common way for describing Quality-of-Service (QoS) features in distributed, object-oriented systems. They present a general QoS language, QML, that can be used to capture QoS properties as part of a design. They also show how to extend UML to support QML concepts. The paper by Szymaszek, Uszok and Zielinski discusses the important issue of efficient implementation and usage of fine-grained objects in CORBA-based applications. Fine-grained objects can have serious ramifications on overall application performance and scalability, and the paper suggests that such objects should not be treated as first-class CORBA objects, proposing instead the use of collections and smart proxies for efficient implementation. The paper by Milojicic, LaForge and Chauhan describes a mobile objects and agents infrastructure. Their particular research has focused on communication support across agent migration and extensive resource control. The paper also discusses issues regarding interoperation between agent systems.
Acknowledgments
The editor wishes to thank all of the authors, reviewers and publishers. Without their excellent work, and the contribution of their valuable time, this special issue would not have been possible.
Java-based cryptosystem for PACS and tele-imaging
NASA Astrophysics Data System (ADS)
Tjandra, Donny; Wong, Stephen T. C.; Yu, Yuan-Pin
1998-07-01
Traditional PACS systems are based on two-tier client-server architectures, and require the use of costly, high-end client workstations for image viewing. Consequently, PACS systems using the two-tier architecture do not scale well as data increases in size and complexity. Furthermore, use of dedicated viewing workstations incurs costs in deployment and maintenance. To address these issues, the use of digital library technologies, such as the World Wide Web, Java, and CORBA, is being explored to distribute PACS data to serve a broader range of healthcare providers in an economic and efficient manner. Integration of PACS systems with digital library technologies allows access to medical information through open networks such as the Internet. However, use of open networks to transmit medical data introduces problems with maintaining privacy and integrity of patient information. Cryptography and digital timestamping are used to protect sensitive information from unauthorized access or tampering. A major concern when using cryptography and digital timestamping is the performance degradation associated with the mathematical calculations needed to encrypt/decrypt an image dataset, or to calculate the hash value of an image. The performance issue is compounded by the extra layer associated with the CORBA middleware, and the use of programming languages interpreted at the client side, such as Java. This paper studies the extent to which Java-based cryptography and digital timestamping affect performance in a PACS system integrated with digital library technologies.
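To make the performance concern concrete, the kind of measurement the paper describes can be sketched with the standard Java Cryptography Architecture: hash and encrypt an in-memory image buffer and time both operations. This is an illustrative sketch only; the buffer size, SHA-256 and AES are assumptions made here (the paper predates both being common choices), not the configuration actually studied.

    import java.security.MessageDigest;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class ImageCryptoTiming {
        public static void main(String[] args) throws Exception {
            byte[] image = new byte[8 * 1024 * 1024];   // stand-in for an 8 MB image dataset

            // Hash the image (the kind of digest a digital timestamping service would sign).
            long t0 = System.nanoTime();
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(image);
            long hashMs = (System.nanoTime() - t0) / 1_000_000;

            // Encrypt the image with a symmetric cipher (AES is assumed purely for illustration).
            SecretKey key = KeyGenerator.getInstance("AES").generateKey();
            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            t0 = System.nanoTime();
            byte[] ciphertext = cipher.doFinal(image);
            long encMs = (System.nanoTime() - t0) / 1_000_000;

            System.out.printf("hash: %d bytes in %d ms; encrypt: %d bytes in %d ms%n",
                    digest.length, hashMs, ciphertext.length, encMs);
        }
    }

Measuring the same operations from inside a downloaded applet, with the data marshalled through CORBA, is what separates the paper's setting from this standalone timing loop.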
The SOFIA Mission Control System Software
NASA Astrophysics Data System (ADS)
Heiligman, G. M.; Brock, D. R.; Culp, S. D.; Decker, P. H.; Estrada, J. C.; Graybeal, J. B.; Nichols, D. M.; Paluzzi, P. R.; Sharer, P. J.; Pampell, R. J.; Papke, B. L.; Salovich, R. D.; Schlappe, S. B.; Spriestersbach, K. K.; Webb, G. L.
1999-05-01
The Stratospheric Observatory for Infrared Astronomy (SOFIA) will be delivered with a computerized mission control system (MCS). The MCS communicates with the aircraft's flight management system and coordinates the operations of the telescope assembly, mission-specific subsystems, and the science instruments. The software for the MCS must be reliable and flexible. It must be easily usable by many teams of observers with widely differing needs, and it must support non-intrusive access for education and public outreach. The technology must be appropriate for SOFIA's 20-year lifetime. The MCS software development process is an object-oriented, use case driven approach. The process is iterative: delivery will be phased over four "builds"; each build will be the result of many iterations; and each iteration will include analysis, design, implementation, and test activities. The team is geographically distributed, coordinating its work via Web pages, teleconferences, T.120 remote collaboration, and CVS (for Internet-enabled configuration management). The MCS software architectural design is derived in part from other observatories' experience. Some important features of the MCS are:
* distributed computing over several UNIX and VxWorks computers
* fast throughput of time-critical data
* use of third-party components, such as the Adaptive Communications Environment (ACE) and the Common Object Request Broker Architecture (CORBA)
* extensive configurability via stored, editable configuration files
* use of several computer languages so developers have "the right tool for the job": C++, Java, scripting languages, Interactive Data Language (from Research Systems, Int'l.), XML, and HTML will all be used in the final deliverables.
This paper reports on work in progress, with the final product scheduled for delivery in 2001. This work was performed for Universities Space Research Association for NASA under contract NAS2-97001.
What CORBA can do: An example of a new system developed with object technology: TeleMed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forslund, D.; Phillips, R.; Tomlinson, B.
1996-05-01
The TeleMed application grew out of a relationship with physicians at the National Jewish Center for Immunology and Respiratory Medicine (NJC) in Denver. These physicians are experts in pulmonary diseases and radiology, helping patients combat effects of TB and other lung diseases. To make the knowledge and experience at NJC available to a wider audience, LANL has developed a virtual patient record system called TeleMed, which is based on distributed national radiographic and patient record repositories located throughout the country. Without leaving their offices, participating doctors can view clinical drug and radiographic data via a sophisticated multimedia interface. TeleMed is also valuable for teaching and presentation. Thus a resident can use TeleMed for self-training in diagnostic techniques, and a physician can use it to explain to a patient the course of their illness. Data can be viewed simultaneously by users at two or more distant locations for consultation with specialists in different fields. This capability is made possible by integration of multimedia information using commercial CORBA technology linking object-enabled databases with client interfaces using a three-tiered architecture.
Healthcare information system approaches based on middleware concepts.
Holena, M; Blobel, B
1997-01-01
To meet the challenges of efficiency and high-quality care, health care systems must implement the "Shared Care" paradigm of distributed co-operating systems. To this end, both newly developed and legacy applications must be fully integrated into the care process. These requirements can be fulfilled by information systems based on middleware concepts. In the paper, the middleware approaches HL7, DHE, and CORBA are described. The relevance of these approaches to the healthcare domain is documented. The description presented here is complemented by two other papers in this volume, concentrating on the evaluation of the approaches and on their security threats and solutions.
ALMA Correlator Real-Time Data Processor
NASA Astrophysics Data System (ADS)
Pisano, J.; Amestica, R.; Perez, J.
2005-10-01
The design of a real-time Linux application utilizing Real-Time Application Interface (RTAI) to process real-time data from the radio astronomy correlator for the Atacama Large Millimeter Array (ALMA) is described. The correlator is a custom-built digital signal processor which computes the cross-correlation function of two digitized signal streams. ALMA will have 64 antennas with 2080 signal streams, each with a sample rate of 4 giga-samples per second. The correlator's aggregate data output will be 1 gigabyte per second. The software is defined by hard deadlines with high input and processing data rates, while requiring interfaces to non-real-time external computers. The designed computer system, the Correlator Data Processor (CDP), consists of a cluster of 17 SMP computers: 16 compute nodes plus a master controller node, all running real-time Linux kernels. Each compute node uses an RTAI kernel module to interface to a 32-bit parallel interface which accepts raw data at 64 megabytes per second in 1 megabyte chunks every 16 milliseconds. These data are transferred to tasks running on multiple CPUs in hard real-time using RTAI's LXRT facility to perform quantization corrections, data windowing, FFTs, and phase corrections for a processing rate of approximately 1 GFLOPS. Highly accurate timing signals are distributed to all seventeen computer nodes in order to synchronize them to other time-dependent devices in the observatory array. RTAI kernel tasks interface to the timing signals providing sub-millisecond timing resolution. The CDP interfaces, via the master node, to other computer systems on an external intranet for command and control, data storage, and further data (image) processing. The master node accesses these external systems utilizing ALMA Common Software (ACS), a CORBA-based client-server software infrastructure providing logging, monitoring, data delivery, and intra-computer function invocation. The software is being developed in tandem with the correlator hardware, which presents software engineering challenges as the hardware evolves. The current status of this project and future goals are also presented.
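As a quick consistency check of the quoted figures (an observation added here, not taken from the paper): each compute node accepts 1 MB chunks every 16 ms, so

    \[
    \frac{1\ \mathrm{MB}}{16\ \mathrm{ms}} = 62.5\ \mathrm{MB/s} \approx 64\ \mathrm{MB/s\ per\ node},
    \qquad
    16 \times 62.5\ \mathrm{MB/s} = 1000\ \mathrm{MB/s} \approx 1\ \mathrm{GB/s},
    \]

which agrees, to within rounding, with the correlator's stated aggregate output of 1 gigabyte per second.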
NASA Astrophysics Data System (ADS)
Roach, Colin; Carlsson, Johan; Cary, John R.; Alexander, David A.
2002-11-01
The National Transport Code Collaboration (NTCC) has developed an array of software, including a data client/server. The data server, which is written in C++, serves local data (in the ITER Profile Database format) as well as remote data (by accessing one or several MDS+ servers). The client, a web-invocable Java applet, provides a uniform, intuitive, user-friendly, graphical interface to the data server. The uniformity of the interface relieves the user from the trouble of mastering the differences between different data formats and lets him/her focus on the essentials: plotting and viewing the data. The user runs the client by visiting a web page using any Java-capable Web browser. The client is automatically downloaded and run by the browser. A reference to the data server is then retrieved via the standard Web protocol (HTTP). The communication between the client and the server is then handled by the mature, industry-standard CORBA middleware. CORBA has bindings for all common languages and many high-quality implementations are available (both Open Source and commercial). The NTCC data server has been installed at the ITPA International Multi-tokamak Confinement Profile Database, which is hosted by the UKAEA at Culham Science Centre. The installation of the data server is protected by an Internet firewall. To make it accessible to clients outside the firewall, some modifications of the server were required. The working version of the ITPA confinement profile database is not open to the public. Authentication of legitimate users is done utilizing built-in Java security features to demand a password to download the client. We present an overview of the NTCC data client/server and some details of how the CORBA firewall-traversal issues were resolved and how the user authentication is implemented.
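The bootstrap path described above (fetch a reference over the Web, then let CORBA take over) can be sketched in a few lines of Java using the standard org.omg.CORBA API. The URL, the file containing the stringified IOR, and the DataServer interface named in the comment are hypothetical; only the ORB calls themselves are the standard API.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import org.omg.CORBA.ORB;

    public class NtccClientBootstrap {
        public static void main(String[] args) throws Exception {
            // Fetch the stringified IOR of the data server over plain HTTP (URL is hypothetical).
            URL iorUrl = new URL("http://example.org/ntcc/dataserver.ior");
            String ior;
            try (BufferedReader in = new BufferedReader(new InputStreamReader(iorUrl.openStream()))) {
                ior = in.readLine();
            }

            // Turn the IOR string into a CORBA object reference; from here on CORBA handles the calls.
            ORB orb = ORB.init(args, null);
            org.omg.CORBA.Object ref = orb.string_to_object(ior);

            // A real client would narrow 'ref' with the helper generated from the server's IDL,
            // e.g. DataServerHelper.narrow(ref), and then invoke the plotting/retrieval operations.
            System.out.println("obtained object reference: " + (ref != null));
            orb.destroy();
        }
    }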
Project Integration Architecture: Distributed Lock Management, Deadlock Detection, and Set Iteration
NASA Technical Reports Server (NTRS)
Jones, William Henry
2005-01-01
The migration of the Project Integration Architecture (PIA) to the distributed object environment of the Common Object Request Broker Architecture (CORBA) brings with it the nearly unavoidable requirements of multiaccessor, asynchronous operations. In order to maintain the integrity of data structures in such an environment, it is necessary to provide a locking mechanism capable of protecting the complex operations typical of the PIA architecture. This paper reports on the implementation of a locking mechanism to treat that need. Additionally, the ancillary features necessary to make the distributed lock mechanism work are discussed.
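Deadlock detection in such lock managers is commonly built on a wait-for graph: before letting a requester block on a holder, check whether the chain of waits would loop back to the requester. The sketch below is a generic illustration of that idea in Java (the names and the single-wait-per-accessor simplification are assumptions), not the PIA implementation.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    /** Generic wait-for-graph check: each accessor is assumed to wait on at most one lock holder. */
    public class WaitForGraph {
        private final Map<String, String> waitsFor = new HashMap<>();  // accessor -> accessor it waits on

        /** True if letting 'requester' wait on 'holder' would close a wait-for cycle (a deadlock). */
        public synchronized boolean wouldDeadlock(String requester, String holder) {
            Set<String> seen = new HashSet<>();
            for (String cur = holder; cur != null && seen.add(cur); cur = waitsFor.get(cur)) {
                if (cur.equals(requester)) {
                    return true;
                }
            }
            return false;
        }

        public synchronized void recordWait(String requester, String holder) { waitsFor.put(requester, holder); }

        public synchronized void clearWait(String requester) { waitsFor.remove(requester); }
    }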
Development of telescope control system for the 50cm telescope of UC Observatory Santa Martina
NASA Astrophysics Data System (ADS)
Shen, Tzu-Chiang; Soto, Ruben; Reveco, Johnny; Vanzi, Leonardo; Fernández, Jose M.; Escarate, Pedro; Suc, Vincent
2012-09-01
The main telescope of the UC Observatory Santa Martina is a 50cm optical telescope donated by ESO to Pontificia Universidad Catolica de Chile. During the past years the telescope has been refurbished and used as the main facility for testing and validating new instruments under construction by the center of Astro-Engineering UC. As part of this work, the need to develop a more efficient and flexible control system arose. The new distributed control system has been developed on top of the Internet Communication Engine (ICE), a framework developed by ZeroC Inc. This framework features a lightweight but powerful and flexible inter-process communication infrastructure and provides bindings for classic and modern programming languages such as C/C++, Java, C#, Ruby, Objective-C, etc. The result of this work shows ICE to be a real alternative to CORBA and other de facto distributed programming frameworks. A classical control software architecture has been chosen, comprising an observation control system (OCS), the orchestrator of the observation, which controls the telescope control system (TCS) and the detector control system (DCS). The real-time control and monitoring system is deployed and running over ARM-based single-board computers. Other features such as logging and configuration services have been developed as well. Interoperation with other major astronomical control frameworks is foreseen in order to achieve a smooth integration of instruments when they are deployed at the main observatories in the north of Chile.
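For readers unfamiliar with ICE, a client in the classic (pre-3.7) Ice-for-Java mapping looks roughly like the sketch below. The Telescope Slice interface, its generated TelescopePrx/TelescopePrxHelper classes, the proxy string and the slewTo operation are all hypothetical; only the Ice.Util and Communicator calls are the library's own.

    // Classic Ice-for-Java client sketch; generated proxy classes and the operation are assumed.
    public class TcsClientSketch {
        public static void main(String[] args) {
            Ice.Communicator ic = null;
            try {
                ic = Ice.Util.initialize(args);
                Ice.ObjectPrx base = ic.stringToProxy("Telescope:default -h tcs-host -p 10000");
                TelescopePrx tel = TelescopePrxHelper.checkedCast(base);
                if (tel == null) {
                    throw new RuntimeException("proxy does not implement the Telescope interface");
                }
                tel.slewTo(83.822, -5.391);   // hypothetical operation: slew to an RA/Dec in degrees
            } finally {
                if (ic != null) {
                    ic.destroy();
                }
            }
        }
    }

The shape is essentially the same as a CORBA client (initialize, obtain proxy, narrow, invoke), which is why the paper can treat ICE as a drop-in alternative.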
Data analysis environment (DASH2000) for the Subaru telescope
NASA Astrophysics Data System (ADS)
Mizumoto, Yoshihiko; Yagi, Masafumi; Chikada, Yoshihiro; Ogasawara, Ryusuke; Kosugi, George; Takata, Tadafumi; Yoshida, Michitoshi; Ishihara, Yasuhide; Yanaka, Hiroshi; Yamamoto, Tadahiro; Morita, Yasuhiro; Nakamoto, Hiroyuki
2000-06-01
A new framework for the data analysis system (DASH) has been developed for the SUBARU Telescope. It is designed using an object-oriented methodology and adopts a restaurant model. DASH shares the load of CPU and I/O among distributed heterogeneous computers. The distributed object environment of the system is implemented with Java and CORBA. DASH has been evaluated through several prototypes. DASH2000 is the latest version, which will be released as the beta version of the data analysis system for the SUBARU Telescope.
High-performance data processing using distributed computing on the SOLIS project
NASA Astrophysics Data System (ADS)
Wampler, Stephen
2002-12-01
The SOLIS solar telescope collects data at a high rate, resulting in 500 GB of raw data each day. The SOLIS Data Handling System (DHS) has been designed to quickly process this data down to 156 GB of reduced data. The DHS design uses pools of distributed reduction processes that are allocated to different observations as needed. A farm of 10 dual-cpu Linux boxes contains the pools of reduction processes. Control is through CORBA and data is stored on a fibre channel storage area network (SAN). Three other Linux boxes are responsible for pulling data from the instruments using SAN-based ringbuffers. Control applications are Java-based while the reduction processes are written in C++. This paper presents the overall design of the SOLIS DHS and provides details on the approach used to control the pooled reduction processes. The various strategies used to manage the high data rates are also covered.
Session on High Speed Civil Transport Design Capability Using MDO and High Performance Computing
NASA Technical Reports Server (NTRS)
Rehder, Joe
2000-01-01
Since the inception of CAS in 1992, NASA Langley has been conducting research into applying multidisciplinary optimization (MDO) and high performance computing toward reducing aircraft design cycle time. The focus of this research has been the development of a series of computational frameworks and associated applications that increased in capability, complexity, and performance over time. The culmination of this effort is an automated high-fidelity analysis capability for a high speed civil transport (HSCT) vehicle installed on a network of heterogeneous computers with a computational framework built using Common Object Request Broker Architecture (CORBA) and Java. The main focus of the research in the early years was the development of the Framework for Interdisciplinary Design Optimization (FIDO) and associated HSCT applications. While the FIDO effort was eventually halted, work continued on HSCT applications of ever-increasing complexity. The current application, HSCT4.0, employs high fidelity CFD and FEM analysis codes. For each analysis cycle, the vehicle geometry and computational grids are updated using new values for design variables. Processes for aeroelastic trim, loads convergence, displacement transfer, stress and buckling, and performance have been developed. In all, a total of 70 processes are integrated in the analysis framework. Many of the key processes include automatic differentiation capabilities to provide sensitivity information that can be used in optimization. A software engineering process was developed to manage this large project. Defining the interactions among 70 processes turned out to be an enormous, but essential, task. A formal requirements document was prepared that defined data flow among processes and subprocesses. A design document was then developed that translated the requirements into actual software design. A validation program was defined and implemented to ensure that codes integrated into the framework produced the same results as their standalone counterparts. Finally, a Commercial Off the Shelf (COTS) configuration management system was used to organize the software development. A computational environment, CJOpt, based on the Common Object Request Broker Architecture (CORBA) and the Java programming language, has been developed as a framework for multidisciplinary analysis and optimization. The environment exploits the parallelism inherent in the application and distributes the constituent disciplines on machines best suited to their needs. In CJOpt, a discipline code is "wrapped" as an object. An interface to the object identifies the functionality (services) provided by the discipline, defined in Interface Definition Language (IDL) and implemented using Java. The results of using the HSCT4.0 capability are described. A summary of lessons learned is also presented. The use of some of the processes, codes, and techniques by industry is highlighted. The application of the methodology developed in this research to other aircraft is described. Finally, we show how the experience gained is being applied to entirely new vehicles, such as the Reusable Space Transportation System. Additional information is contained in the original.
Martinez, R; Rozenblit, J; Cook, J F; Chacko, A K; Timboe, H L
1999-05-01
In the Department of Defense (DoD), US Army Medical Command is now embarking on an extremely exciting new project--creating a virtual radiology environment (VRE) for the management of radiology examinations. The business of radiology in the military is therefore being reengineered on several fronts by the VRE Project. In the VRE Project, a set of intelligent agent algorithms determine where examinations are to routed for reading bases on a knowledge base of the entire VRE. The set of algorithms, called the Meta-Manager, is hierarchical and uses object-based communications between medical treatment facilities (MTFs) and medical centers that have digital imaging network picture archiving and communications systems (DIN-PACS) networks. The communications is based on use of common object request broker architecture (CORBA) objects and services to send patient demographics and examination images from DIN-PACS networks in the MTFs to the DIN-PACS networks at the medical centers for diagnosis. The Meta-Manager is also responsible for updating the diagnosis at the originating MTF. CORBA services are used to perform secure message communications between DIN-PACS nodes in the VRE network. The Meta-Manager has a fail-safe architecture that allows the master Meta-Manager function to float to regional Meta-Manager sites in case of server failure. A prototype of the CORBA-based Meta-Manager is being developed by the University of Arizona's Computer Engineering Research Laboratory using the unified modeling language (UML) as a design tool. The prototype will implement the main functions described in the Meta-Manager design specification. The results of this project are expected to reengineer the process of radiology in the military and have extensions to commercial radiology environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zitney, S.E.
This paper highlights the use of the CAPE-OPEN (CO) standard interfaces in the Advanced Process Engineering Co-Simulator (APECS) developed at the National Energy Technology Laboratory (NETL). The APECS system uses the CO unit operation, thermodynamic, and reaction interfaces to provide its plug-and-play co-simulation capabilities, including the integration of process simulation with computational fluid dynamics (CFD) simulation. APECS also relies heavily on the use of a CO COM/CORBA bridge for running process/CFD co-simulations on multiple operating systems. For process optimization in the face of multiple and sometimes conflicting objectives, APECS offers stochastic modeling and multi-objective optimization capabilities developed to comply with the CO software standard. At NETL, system analysts are applying APECS to a wide variety of advanced power generation systems, ranging from small fuel cell systems to commercial-scale power plants including the coal-fired, gasification-based FutureGen power and hydrogen production plant.
Design and applications of a multimodality image data warehouse framework.
Wong, Stephen T C; Hoo, Kent Soo; Knowlton, Robert C; Laxer, Kenneth D; Cao, Xinhau; Hawkins, Randall A; Dillon, William P; Arenson, Ronald L
2002-01-01
A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications--namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forslund, D.W.; Cook, J.L.
One of the most powerful tools available for telemedicine is a multimedia medical record accessible over a wide area and simultaneously editable by multiple physicians. The ability to do this through an intuitive interface linking multiple distributed data repositories while maintaining full data integrity is a fundamental enabling technology in healthcare. The authors discuss the role of distributed object technology using Java and CORBA in providing this capability including an example of such a system (TeleMed) which can be accessed through the World Wide Web. Issues of security, scalability, data integrity, and usability are emphasized.
2000-10-01
control systems and prototyped the approach by porting the ILU ORB from Xerox to the Lynx real-time operating system. They then provided a distributed...compliant real-time operating system, a real-time ORB, and an ODMG-compliant real-time ODBMS [12]. The MITRE system is an infrastructure for...the server's local operating system can handle. For instance, on a node controlled by the VxWorks real-time operating system with 256 local
Internet Based Robot Control Using CORBA Based Communications
2009-12-01
The MSG Central Facility - A Mission Control System for Windows NT
NASA Astrophysics Data System (ADS)
Thompson, R.
The MSG Central Facility, being developed by Science Systems for EUMETSAT, represents the first of a new generation of satellite mission control systems, based on the Windows NT operating system. The system makes use of a range of new technologies to provide an integrated environment for the planning, scheduling, control and monitoring of the entire Meteosat Second Generation mission. It supports packetised TM/TC and uses Science Systems' Space UNiT product to provide automated operations support at both Schedule (Timeline) and Procedure levels. Flexible access to historical data is provided through an operations archive based on ORACLE Enterprise Server, hosted on a large RAID array and off-line tape jukebox. Event driven real-time data distribution is based on the CORBA standard. Operations preparation and configuration control tools form a fully integrated element of the system.
Numerical Propulsion System Simulation: A Common Tool for Aerospace Propulsion Being Developed
NASA Technical Reports Server (NTRS)
Follen, Gregory J.; Naiman, Cynthia G.
2001-01-01
The NASA Glenn Research Center is developing an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). This simulation is initially being used to support aeropropulsion in the analysis and design of aircraft engines. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the Aviation Safety Program and Advanced Space Transportation. NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes using the Common Object Request Broker Architecture (CORBA) in the NPSS Developer's Kit to facilitate collaborative engineering. The NPSS Developer's Kit will provide the tools to develop custom components and to use the CORBA capability for zooming to higher fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These powerful capabilities will extend NPSS from a zero-dimensional simulation tool to a multifidelity, multidiscipline system-level simulation tool for the full life cycle of an engine.
Pollux: Enhancing the Quality of Service of the Global Information Grid (GIG)
2009-06-01
and throughput of standard-based and/or COTS-based QoS-enabled pub/sub technologies, including DDS, JMS, Web Services, and CORBA. 2. The DDS QoS...of service pICKER (QUICKER) model-driven engineering (MDE) toolchain shown in Figure 8. QUICKER extends the Platform-Independent Component Modeling
A Service Oriented Architecture for Robotic Platforms
2011-03-01
Composite patterns identify combinations of business and integration patterns such as those used in eCommerce applications, 4. Application patterns...systems and offers the same advantages and disadvantages of both layered and CORBA systems. 5. One commercial CORBA implementation that the author is...complexity to users of the SOA and Player approaches. The advantage of the SOA approach over the Player approach is through the ESB concept in which we
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perl, Joseph
2003-07-10
HepRep is a generic, hierarchical format for description of graphics representables that can be augmented by physics information and relational properties. It was developed for high energy physics event display applications and is especially suited to client/server or component frameworks. The GLAST experiment, an international effort led by NASA for a gamma-ray telescope to launch in 2006, chose HepRep to provide a flexible, extensible and maintainable framework for their event display without tying their users to any one graphics application. To support HepRep in their GAUDI infrastructure, GLAST developed a HepRep filler and builder architecture. The architecture hides the details of XML and CORBA in a set of base and helper classes, allowing physics experts to focus on what data they want to represent. GLAST has two GAUDI services: HepRepSvc, which registers HepRep fillers in a global registry and allows the HepRep to be exported to XML, and CorbaSvc, which allows the HepRep to be published through a CORBA interface and which allows the client application to feed commands back to GAUDI (such as start next event, or run some GAUDI algorithm). GLAST's HepRep solution gives users a choice of client applications, WIRED (written in Java) or FRED (written in C++ and Ruby), and leaves them free to move to any future HepRep-compliant event display.
Integrating knowledge based functionality in commercial hospital information systems.
Müller, M L; Ganslandt, T; Eich, H P; Lang, K; Ohmann, C; Prokosch, H U
2000-01-01
Successful integration of knowledge-based functions in the electronic patient record depends on direct and context-sensitive accessibility and availability to clinicians and must suit their workflow. In this paper we describe an exemplary integration of an existing standalone scoring system for acute abdominal pain into two different commercial hospital information systems using Java/CORBA technology.
NPSS on NASA's IPG: Using CORBA and Globus to Coordinate Multidisciplinary Aeroscience Applications
NASA Technical Reports Server (NTRS)
Lopez, Isaac; Follen, Gregory J.; Gutierrez, Richard; Naiman, Cynthia G.; Foster, Ian; Ginsburg, Brian; Larsson, Olle; Martin, Stuart; Tuecke, Steven; Woodford, David
2000-01-01
Within NASA's High Performance Computing and Communication (HPCC) program, the NASA Glenn Research Center is developing an environment for the analysis/design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. To this end, NPSS integrates multiple disciplines such as aerodynamics, structures, and heat transfer and supports "numerical zooming" from 0-dimensional to 1-, 2-, and 3-dimensional component engine codes. In order to facilitate the timely and cost-effective capture of complex physical processes, NPSS uses object-oriented technologies such as C++ objects to encapsulate individual engine components and CORBA ORBs for object communication and deployment across heterogeneous computing platforms. Recently, the HPCC program has initiated a concept called the Information Power Grid (IPG), a virtual computing environment that integrates computers and other resources at different sites. IPG implements a range of Grid services such as resource discovery, scheduling, security, instrumentation, and data access, many of which are provided by the Globus toolkit. IPG facilities have the potential to benefit NPSS considerably. For example, NPSS should in principle be able to use Grid services to discover dynamically and then co-schedule the resources required for a particular engine simulation, rather than relying on manual placement of ORBs as at present. Grid services can also be used to initiate simulation components on parallel computers (MPPs) and to address inter-site security issues that currently hinder the coupling of components across multiple sites. These considerations led NASA Glenn and Globus project personnel to formulate a collaborative project designed to evaluate whether and how benefits such as those just listed can be achieved in practice. This project involves firstly development of the basic techniques required to achieve co-existence of commodity object technologies and Grid technologies; and secondly the evaluation of these techniques in the context of NPSS-oriented challenge problems. The work on basic techniques seeks to understand how "commodity" technologies (CORBA, DCOM, Excel, etc.) can be used in concert with specialized "Grid" technologies (for security, MPP scheduling, etc.). In principle, this coordinated use should be straightforward because of the Globus and IPG philosophy of providing low-level Grid mechanisms that can be used to implement a wide variety of application-level programming models. (Globus technologies have previously been used to implement Grid-enabled message-passing libraries, collaborative environments, and parameter study tools, among others.) Results obtained to date are encouraging: we have successfully demonstrated a CORBA to Globus resource manager gateway that allows the use of CORBA RPCs to control submission and execution of programs on workstations and MPPs; a gateway from the CORBA Trader service to the Grid information service; and a preliminary integration of CORBA and Grid security mechanisms. The two challenge problems that we consider are the following: 1) Desktop-controlled parameter study. Here, an Excel spreadsheet is used to define and control a CFD parameter study, via a CORBA interface to a high throughput broker that runs individual cases on different IPG resources. 2) Aviation safety.
Here, about 100 near-real-time jobs running NPSS need to be submitted and run, with data returned in near real time. Evaluation will address such issues as time to port, execution time, potential scalability of the simulation, and reliability of resources. The full paper will present the following information: 1. A detailed analysis of the requirements that NPSS applications place on IPG. 2. A description of the techniques used to meet these requirements via the coordinated use of CORBA and Globus. 3. A description of results obtained to date in the first two challenge problems.
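To give a feel for the first challenge problem, the client side of the desktop-controlled parameter study reduces to resolving the broker through the CORBA Naming Service and submitting one case per design point. The sketch below is entirely illustrative: the CaseBroker interface, its helper class, the naming entry and the submitCase operation are assumptions, not the actual NPSS/IPG interfaces; only the ORB and Naming Service calls are standard.

    import org.omg.CORBA.ORB;
    import org.omg.CosNaming.NamingContextExt;
    import org.omg.CosNaming.NamingContextExtHelper;

    public class ParameterStudyClient {
        public static void main(String[] args) throws Exception {
            ORB orb = ORB.init(args, null);

            // Resolve the high-throughput broker via the CORBA Naming Service.
            // The name "CaseBroker" and the CaseBroker/CaseBrokerHelper types are hypothetical.
            NamingContextExt nc = NamingContextExtHelper.narrow(
                    orb.resolve_initial_references("NameService"));
            CaseBroker broker = CaseBrokerHelper.narrow(nc.resolve_str("CaseBroker"));

            // Submit one CFD case per design point; the broker farms the cases out to IPG resources.
            for (int i = 0; i <= 4; i++) {
                double mach = 2.0 + 0.1 * i;
                String caseId = broker.submitCase(new double[] { mach });   // hypothetical operation
                System.out.println("submitted case " + caseId + " for Mach " + mach);
            }
            orb.destroy();
        }
    }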
ACS sampling system: design, implementation, and performance evaluation
NASA Astrophysics Data System (ADS)
Di Marcantonio, Paolo; Cirami, Roberto; Chiozzi, Gianluca
2004-09-01
By means of the ACS (ALMA Common Software) framework we designed and implemented a sampling system which allows sampling of every Characteristic Component Property with a specific, user-defined, sustained frequency limited only by the hardware. Collected data are sent to various clients (one or more Java plotting widgets, a dedicated GUI or a COTS application) using the ACS/CORBA Notification Channel. The data transport is optimized: samples are cached locally and sent in packets with a lower and user-defined frequency to keep network load under control. Simultaneous sampling of the Properties of different Components is also possible. Together with the design and implementation issues we present the performance of the sampling system evaluated on two different platforms: on a VME-based system using the VxWorks RTOS (currently adopted by ALMA) and on a PC/104+ embedded platform using the Red Hat 9 Linux operating system. The PC/104+ solution offers, as an alternative, a low-cost PC-compatible hardware environment with a free and open operating system.
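The local-caching strategy described above (collect samples at the full acquisition rate, ship them at a lower, user-defined rate) is independent of the ACS API and can be sketched generically in Java. The class below is an illustration under those assumptions, not ACS code; the transport callback stands in for a push onto the notification channel.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.function.Consumer;

    /** Generic illustration: accept samples at the full rate, flush packets at a lower rate. */
    public class SampleBuffer {
        private final List<Double> cache = new ArrayList<>();
        private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        /** Called at the full sampling rate (e.g. every few milliseconds). */
        public synchronized void addSample(double value) {
            cache.add(value);
        }

        /** Ship the accumulated packet at a lower, user-defined period to limit network load. */
        public void startFlushing(long flushPeriodMs, Consumer<List<Double>> transport) {
            scheduler.scheduleAtFixedRate(() -> {
                List<Double> packet;
                synchronized (this) {
                    packet = new ArrayList<>(cache);
                    cache.clear();
                }
                if (!packet.isEmpty()) {
                    transport.accept(packet);   // e.g. push onto a notification channel
                }
            }, flushPeriodMs, flushPeriodMs, TimeUnit.MILLISECONDS);
        }
    }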
Secure medical digital libraries.
Papadakis, I; Chrissikopoulos, V; Polemi, D
2001-12-01
In this paper, a secure medical digital library is presented. It is based on the CORBA specifications for distributed systems. The described approach relies on a three-tier architecture. Interaction between the medical digital library and its users is achieved through a Web server. The choice of employing Web technology for the dissemination of medical data has many advantages compared to older approaches, but also poses extra requirements that need to be fulfilled. Thus, special attention is paid to the sensitive nature of such medical data, whose integrity and confidentiality should be preserved at all costs. This is achieved through the employment of Trusted Third Party (TTP) technology for the support of the required security services. Additionally, the proposed digital library employs smartcards for the management of the various security tokens that are used by the above services.
NASA Astrophysics Data System (ADS)
Reder, Leonard J.; Booth, Andrew; Hsieh, Jonathan; Summers, Kellee R.
2004-09-01
This paper presents a discussion of the evolution of a sequencer from a simple Experimental Physics and Industrial Control System (EPICS) based sequencer into a complex implementation designed utilizing UML (Unified Modeling Language) methodologies and a Computer Aided Software Engineering (CASE) tool approach. The main purpose of the Interferometer Sequencer (called the IF Sequencer) is to provide overall control of the Keck Interferometer to enable science operations to be carried out by a single operator (and/or observer). The interferometer links the two 10m telescopes of the W. M. Keck Observatory at Mauna Kea, Hawaii. The IF Sequencer is a high-level, multi-threaded, Harel finite state machine software program designed to orchestrate several lower-level hardware and software hard real-time subsystems that must perform their work in a specific and sequential order. The sequencing need not be done in hard real-time. Each state machine thread commands either a high-speed real-time multiple mode embedded controller via CORBA, or slower controllers via EPICS Channel Access interfaces. The overall operation of the system is simplified by the automation. The UML is discussed and our use of it to implement the sequencer is presented. The decision to use the Rhapsody product as our CASE tool is explained and reflected upon. Most importantly, a section on lessons learned is presented and the difficulty of integrating CASE tool automatically generated C++ code into a large control system consisting of multiple infrastructures is presented.
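Stripped of the CASE-tool machinery, each sequencer thread is essentially a state machine that issues one subsystem command per state and then advances. The flat, enum-based sketch below is only meant to illustrate that idea; the states and commands are hypothetical, and the real IF Sequencer uses hierarchical Harel statecharts generated by the CASE tool rather than hand-written code like this.

    /** Flat, illustrative state machine for one sequencing thread; states and commands are hypothetical. */
    public class SequenceThreadSketch {
        enum State { IDLE, ACQUIRE_TARGET, TRACK_FRINGES, RECORD, DONE }

        private State state = State.IDLE;

        /** Advance one step: command the subsystem for the current state, then move on. */
        public void step() {
            switch (state) {
                case IDLE:           state = State.ACQUIRE_TARGET; break;
                case ACQUIRE_TARGET: command("acquire"); state = State.TRACK_FRINGES; break;
                case TRACK_FRINGES:  command("track");   state = State.RECORD;        break;
                case RECORD:         command("record");  state = State.DONE;          break;
                case DONE:           break;
            }
        }

        private void command(String name) {
            // In the real system this would be a CORBA call to the real-time controller
            // or an EPICS Channel Access put; here it is just a placeholder.
            System.out.println("issuing command: " + name);
        }
    }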
Java Tool Framework for Automation of Hardware Commissioning and Maintenance Procedures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, J C; Fisher, J M; Gordon, J B
2007-10-02
The National Ignition Facility (NIF) is a 192-beam laser system designed to study high energy density physics. Each beam line contains a variety of line replaceable units (LRUs) that contain optics, stepping motors, sensors and other devices to control and diagnose the laser. During commissioning and subsequent maintenance of the laser, LRUs undergo a qualification process using the Integrated Computer Control System (ICCS) to verify and calibrate the equipment. The commissioning processes are both repetitive and tedious when we use remote manual computer controls, making them ideal candidates for software automation. Maintenance and Commissioning Tool (MCT) software was developed to improve the efficiency of the qualification process. The tools are implemented in Java, leveraging ICCS services and CORBA to communicate with the control devices. The framework provides easy-to-use mechanisms for handling configuration data, task execution, task progress reporting, and generation of commissioning test reports. The tool framework design and application examples will be discussed.
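The task-execution and progress-reporting responsibilities mentioned above can be illustrated with a small generic framework sketch. These are not the actual MCT interfaces; the names below are assumptions chosen only to show the shape of such a framework.

    import java.util.List;

    /** Generic illustration of a commissioning-task framework; not the actual MCT interfaces. */
    interface CommissioningTask {
        String name();
        void run(ProgressListener listener) throws Exception;
    }

    interface ProgressListener {
        void progress(String taskName, int percentComplete);
    }

    class TaskRunner {
        /** Runs tasks in order and reports progress; a real runner would also build the test report. */
        public void runAll(List<CommissioningTask> tasks, ProgressListener listener) {
            for (CommissioningTask task : tasks) {
                try {
                    listener.progress(task.name(), 0);
                    task.run(listener);
                    listener.progress(task.name(), 100);
                } catch (Exception e) {
                    System.err.println("task " + task.name() + " failed: " + e.getMessage());
                }
            }
        }
    }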
Advanced Operating System Technologies
NASA Astrophysics Data System (ADS)
Cittolin, Sergio; Riccardi, Fabio; Vascotto, Sandro
In this paper we describe an R&D effort to define an OS architecture suitable for the requirements of the Data Acquisition and Control of an LHC experiment. Large distributed computing systems are foreseen to be the core part of the DAQ and Control system of the future LHC experiments. Networks of thousands of processors, handling dataflows of several gigabytes per second, with very strict timing constraints (microseconds), will become a common experience in the following years. Problems like distributed scheduling, real-time communication protocols, failure tolerance, distributed monitoring and debugging will have to be faced. A solid software infrastructure will be required to manage this very complicated environment, and at this moment neither does CERN have the necessary expertise to build it, nor does any similar commercial implementation exist. Fortunately these problems are not unique to the particle and high energy physics experiments, and the current research work in the distributed systems field, especially in the distributed operating systems area, is trying to address many of the above mentioned issues. The world that we are going to face in the next ten years will be quite different and surely much more interconnected than the one we see now. Very ambitious projects exist, planning to link towns, nations and the world in a single "Data Highway". Teleconferencing, Video on Demand, and Distributed Multimedia Applications are just a few examples of the very demanding tasks to which the computer industry is committing itself. These projects are triggering a great research effort in the distributed, real-time, micro-kernel-based operating systems field and in the software engineering area. The purpose of our group is to collect the outcome of these different research efforts, and to establish a working environment where the different ideas and techniques can be tested, evaluated and possibly extended, to address the requirements of a DAQ and Control System suitable for LHC. Our work started in the second half of 1994, with a research agreement between CERN and Chorus Systemes (France), world leader in micro-kernel OS technology. The Chorus OS is targeted at distributed real-time applications, and it can very efficiently support different "OS personalities" in the same environment, like Posix, UNIX, and a CORBA-compliant distributed object architecture. Projects are being set up to verify the suitability of our work for LHC applications: we are building a scaled-down prototype of the DAQ system foreseen for the CMS experiment at LHC, where we will directly test our protocols and where we will be able to make measurements and benchmarks, guiding our development and allowing us to build an analytical model of the system, suitable for simulation and large-scale verification.
NASA Astrophysics Data System (ADS)
Candela, L.; Ruggieri, G.; Giancaspro, A.
2004-09-01
Within the Italian Space Agency's "Multi-Mission Ground Segment" project, innovative technologies such as CORBA[1], Z39.50[2], XML[3], Java[4], JavaServer Pages[4] and C++ have been evaluated. The SSPI system (Space Service Provider Infrastructure) is the prototype of a distributed environment aimed at facilitating access to Earth Observation (EO) data. SSPI allows users to ingest, archive, consolidate, visualize and evaluate these data. Hence, SSPI is not just a database or a data repository, but an application that, by means of a set of protocols, standards and specifications, provides unified access to multi-mission EO data.
Intelligent Launch and Range Operations Virtual Test Bed (ILRO-VTB)
NASA Technical Reports Server (NTRS)
Bardina, Jorge; Rajkumar, T.
2003-01-01
Intelligent Launch and Range Operations Virtual Test Bed (ILRO-VTB) is a real-time web-based command and control, communication, and intelligent simulation environment of ground-vehicle, launch and range operation activities. ILRO-VTB consists of a variety of simulation models combined with commercial and indigenous software developments (NASA Ames). It creates a hybrid software/hardware environment suitable for testing various integrated control system components of launch and range. The dynamic interactions of the integrated simulated control systems are not well understood. Insight into such systems can only be achieved through simulation/emulation. For that reason, NASA has established a VTB where we can learn the actual control and dynamics of designs for future space programs, including testing and performance evaluation. The current implementation of the VTB simulates the operations of a sub-orbital vehicle of mission, control, ground-vehicle engineering, launch and range operations. The present development of the test bed simulates the operations of the Space Shuttle Vehicle (SSV) at NASA Kennedy Space Center. The test bed supports a wide variety of shuttle missions with ancillary modeling capabilities like weather forecasting, lightning tracker, toxic gas dispersion model, debris dispersion model, telemetry, trajectory modeling, ground operations, payload models, etc. To achieve the simulations, all models are linked using Common Object Request Broker Architecture (CORBA). The test bed provides opportunities for government, universities, researchers and industries to conduct a real-time simulation of a shuttle launch in cyberspace.
Intelligent launch and range operations virtual testbed (ILRO-VTB)
NASA Astrophysics Data System (ADS)
Bardina, Jorge; Rajkumar, Thirumalainambi
2003-09-01
Intelligent Launch and Range Operations Virtual Test Bed (ILRO-VTB) is a real-time web-based command and control, communication, and intelligent simulation environment of ground-vehicle, launch and range operation activities. ILRO-VTB consists of a variety of simulation models combined with commercial and indigenous software developments (NASA Ames). It creates a hybrid software/hardware environment suitable for testing various integrated control system components of launch and range. The dynamic interactions of the integrated simulated control systems are not well understood. Insight into such systems can only be achieved through simulation/emulation. For that reason, NASA has established a VTB where we can learn the actual control and dynamics of designs for future space programs, including testing and performance evaluation. The current implementation of the VTB simulates the operations of a sub-orbital vehicle of mission, control, ground-vehicle engineering, launch and range operations. The present development of the test bed simulates the operations of the Space Shuttle Vehicle (SSV) at NASA Kennedy Space Center. The test bed supports a wide variety of shuttle missions with ancillary modeling capabilities like weather forecasting, lightning tracker, toxic gas dispersion model, debris dispersion model, telemetry, trajectory modeling, ground operations, payload models, etc. To achieve the simulations, all models are linked using Common Object Request Broker Architecture (CORBA). The test bed provides opportunities for government, universities, researchers and industries to conduct a real-time simulation of a shuttle launch in cyberspace.
Use of psychoactive substances in prison: Results of a study in the Lyon-Corbas prison, France.
Sahajian, F; Berger-Vergiat, A; Pot, E
2017-09-01
In prison, in 2012, according to various sources, from 4 to 56% of the European inmate population used psychoactive substances (PAS). The aim of our study was to describe PAS consumption during incarceration in the prison of Lyon-Corbas, France. A cross-sectional descriptive study was conducted between September 23rd and September 27th 2013 among all inmates of this prison. We used an anonymous self-administered questionnaire, distributed at lunchtime and collected, the same day, at dinnertime, by the mental health service personnel. Among 785 inmates present at the time of the study in the prison of Lyon-Corbas, 710 were included and the response rate was 64.4% (95% CI [60.8-67.8]). Among 457 responding inmates, 16.4% (95% CI [13.2-20.0]) reported no PAS consumption. Among 382 consumers, 74.4% (95% CI [69.8-78.5]) used tobacco, 36.8% (95% CI [32.2-41.8]) cannabis, 30.4% (95% CI [25.9-35.1]) alcohol, 7.7% (95% CI [5.2-10.6]) heroin and 10.3% (95% CI [7.5-13.6]) cocaine. Furthermore, 15% of consumers had started PAS consumption during their incarceration. Among consumers of at least one PAS other than tobacco, cannabis and alcohol, the route of consumption was sniffing for 60.0% (95% CI [48.5-70.2]) and injection for 31.0% (95% CI [21.6-42.1]). Use of several PAS at the same time and sharing sniffing and/or injection paraphernalia were other risky behaviors observed; 12% (95% CI [5.8-20.4]) of drug injectors declared using chlorine to sterilize their injection paraphernalia. Our study provides worrying data about PAS consumption in prison. The measures of prohibition do not prevent this consumption. There is even an initiation of consumption of PAS for 15% of the first-time incarcerated inmates. This finding should encourage public authorities to facilitate inmates' access to care structures in prisons, to improve drug use prevention and care programs and to develop activities (sports, cultural, educational and vocational). Copyright © 2017 Elsevier Masson SAS. All rights reserved.
New capabilities in the HENP grand challenge storage access systemand its application at RHIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernardo, L.; Gibbard, B.; Malon, D.
2000-04-25
The High Energy and Nuclear Physics Data Access Grand Challenge project has developed an optimizing storage access software system that was prototyped at RHIC. It is currently undergoing integration with the STAR experiment in preparation for data taking that starts in mid-2000. The behavior and lessons learned in the RHIC Mock Data Challenge exercises are described, as well as the observed performance under conditions designed to characterize scalability. Up to 250 simultaneous queries were tested and up to 10 million events across 7 event components were involved in these queries. The system coordinates the staging of "bundles" of files from the HPSS tape system, so that all the needed components of each event are in disk cache when accessed by the application software. The caching policy algorithm for the coordinated bundle staging is described in the paper. The initial prototype implementation interfaced to the Objectivity/DB. In this latest version, it evolved to work with arbitrary files and use CORBA interfaces to the tag database and file catalog services. The interface to the tag database and the MySQL-based file catalog services used by STAR are described along with the planned usage scenarios.
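The coordination problem at the heart of the bundle-staging scheme (dispatch an event bundle to the application only once every component file is on disk, staging the missing ones in the meantime) can be sketched generically. The class below is an illustration only, not the Grand Challenge code; the tape-staging call stands in for the HPSS interface.

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    /** Generic illustration of coordinated bundle staging: stage missing files, dispatch only when complete. */
    public class BundleStager {
        private final Set<String> diskCache = new HashSet<>();   // files currently resident on disk

        /** Returns true once every component file of the bundle is cached and the events can be processed. */
        public boolean stageBundle(List<String> componentFiles) {
            boolean complete = true;
            for (String file : componentFiles) {
                if (!diskCache.contains(file)) {
                    requestStageFromTape(file);   // stand-in for the HPSS staging request
                    complete = false;
                }
            }
            return complete;
        }

        /** Called when the mass-storage system reports that a file has landed on disk. */
        public void fileStaged(String file) {
            diskCache.add(file);
        }

        private void requestStageFromTape(String file) {
            System.out.println("staging from tape: " + file);
        }
    }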
2000-03-01
languages yet still be able to access the legacy relational databases that businesses have huge investments in. JDBC is a low-level API designed for...consider the return on investment. The system requirements, discussed in Chapter II, are the main source of input to developing the relational...1996. Inprise, Gatekeeper Guide, Inprise Corporation, 1999. Kroenke, D., Database Processing: Fundamentals, Design, and Implementation, Sixth Edition
Performance Evaluation and Software Design for EVA Robotic Assistant Stereo Vision Heads
NASA Technical Reports Server (NTRS)
DiPaolo, Daniel
2003-01-01
The purpose of this project was to aid the EVA Robotic Assistant project by evaluating and designing the necessary interfaces for two stereo vision heads - the TracLabs Biclops pan-tilt-verge head, and the Helpmate Zebra pan-tilt-verge head. The first half of the project consisted of designing the necessary software interface so that the other modules of the EVA Robotic Assistant had proper access to all of the functionalities offered by each of the stereovision heads. This half took most of the project time, due to a lack of ready-made CORBA drivers for either of the heads. Once this was overcome, the evaluation stage of the project began. The second half of the project was to take these interfaces and to evaluate each of the stereo vision heads in terms of usefulness to the project. In the key project areas such as stability and reliability, the Zebra pan-tilt-verge head came out on top. However, the Biclops did have many more advantages over the Zebra, such as: lower power consumption, faster communications, and a simpler, cleaner API. Overall, the Biclops pan-tilt-verge head outperformed the Zebra pan-tilt-verge head.
NHPP-Based Software Reliability Models Using Equilibrium Distribution
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi
Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
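For readers unfamiliar with the construction, the equilibrium distribution referred to here is the standard one from renewal theory: given a fault-detection time distribution F(t) with finite mean, the NHPP mean value function is built from the equilibrium counterpart of F rather than from F itself. In the usual notation (the paper's specific model variants may differ in detail),

    \[
    F_e(t) = \frac{1}{\mu}\int_0^{t}\bigl(1 - F(x)\bigr)\,dx,
    \qquad
    \mu = \int_0^{\infty}\bigl(1 - F(x)\bigr)\,dx,
    \qquad
    \Lambda(t) = \omega\,F_e(t),
    \]

where \(\omega\) is the expected total number of detectable faults and \(\Lambda(t)\) is the NHPP mean value function.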
Database technology and the management of multimedia data in the Mirror project
NASA Astrophysics Data System (ADS)
de Vries, Arjen P.; Blanken, H. M.
1998-10-01
Multimedia digital libraries require an open distributed architecture instead of a monolithic database system. In the Mirror project, we use the Monet extensible database kernel to manage different representations of multimedia objects. To maintain independence between content, meta-data, and the creation of meta-data, we allow distribution of data and operations using CORBA. This open architecture introduces new problems for data access. From an end user's perspective, the problem is how to search the available representations to fulfill an actual information need; the conceptual gap between human perceptual processes and the meta-data is too large. From a system's perspective, several representations of the data may semantically overlap or be irrelevant. We address these problems with an iterative query process and active user participation through relevance feedback. A retrieval model based on inference networks assists the user with query formulation. The integration of this model into the database design has two advantages. First, the user can query both the logical and the content structure of multimedia objects. Second, the use of different data models in the logical and the physical database design provides data independence and allows algebraic query optimization. We illustrate query processing with a music retrieval application.
Fumoto, Masaki; Miyazaki, Satoru; Sugawara, Hideaki
2002-01-01
Genome Information Broker (GIB) is a powerful tool for the study of comparative genomics. GIB allows users to retrieve and display partial and/or whole genome sequences together with the relevant biological annotation. GIB has accumulated all the completed microbial genomes and has recently been expanded to include Arabidopsis thaliana genome data from DDBJ/EMBL/GenBank. In the near future, hundreds of genome sequences will be determined. In order to handle such huge amounts of data, we have enhanced the GIB architecture by using XML, CORBA and distributed RDBs. We introduce the new GIB here. GIB is freely accessible at http://gib.genes.nig.ac.jp/. PMID:11752256
Contingency theoretic methodology for agent-based web-oriented manufacturing systems
NASA Astrophysics Data System (ADS)
Durrett, John R.; Burnell, Lisa J.; Priest, John W.
2000-12-01
The development of distributed, agent-based, web-oriented, N-tier Information Systems (IS) must be supported by a design methodology capable of responding to the convergence of shifts in business process design, organizational structure, and computing and telecommunications infrastructures. We introduce a contingency theoretic model for the use of open, ubiquitous software infrastructure in the design of flexible organizational IS. Our basic premise is that developers should change the way they view the software design process: from the solution of a problem to the dynamic creation of teams of software components. We postulate that developing effective, efficient, flexible, component-based distributed software requires reconceptualizing the current development model. The basic concepts of distributed software design are merged with the environment-causes-structure relationship from contingency theory; the task-uncertainty of organizational-information-processing relationships from information processing theory; and the concept of inter-process dependencies from coordination theory. Software processes are considered as employees, groups of processes as software teams, and distributed systems as software organizations. Design techniques already used in the design of flexible business processes and well researched in the domain of the organizational sciences are presented. Guidelines that can be utilized in the creation of component-based distributed software will be discussed.
Bidgood, W D; alSafadi, Y; Tucker, M; Prior, F; Hagan, G; Mattison, J E
1998-02-01
The decision to use Digital Imaging and Communications in Medicine (DICOM), Health Level 7 (HL7), a common object broker such as the Common Object Request Broker Architecture (CORBA) or ActiveX (Microsoft Corp, Redmond, WA), or any other protocol for the transfer of DICOM data depends on the requirements of a particular implementation. The selection of protocol is independent of the information model. Our goal as message standards developers is to design a data interchange infrastructure that will faithfully convey the computer-based patient record and make it available to authorized health care providers when and where it is needed for patient care. DICOM accurately and expressively represents the clinically significant properties of images and the semantics of image-related information. The DICOM data model is small and well-defined. The model can be expressed in Standard Generalized Markup Language (SGML), Object Management Group Interface Definition Language, or other common syntax, and can be implemented using any reliable communications protocol. Therefore, our opinion is that the DICOM semantic data model should serve as the basis for a logically equivalent set of specifications in HL7, CORBA, ActiveX, and SGML for the interchange of biomedical images and image-related information.
MITSI project : final local evaluation report
DOT National Transportation Integrated Search
2003-01-01
The mission statement for the MITSI project was facilitating National Standards Compliance migration for NaviGAtor, conducting National Architecture mapping for MARTA and E911, and evaluating CORBA as a methodology for exchanging data. This involved ...
NASA Astrophysics Data System (ADS)
Lanciotti, E.; Merino, G.; Bria, A.; Blomer, J.
2011-12-01
In a distributed computing model such as WLCG, experiment-specific application software has to be efficiently distributed to every site of the Grid. Application software is currently installed in a shared area of the site visible to all Worker Nodes (WNs) of the site through some protocol (NFS, AFS or other). The software is installed at the site by jobs which run on a privileged node of the computing farm where the shared area is mounted in write mode. This model presents several drawbacks which cause a non-negligible rate of job failure. An alternative model for software distribution based on the CERN Virtual Machine File System (CernVM-FS) has been tried at PIC, the Spanish Tier-1 site of WLCG. The test bed used and the results are presented in this paper.
Architecture of a wireless Personal Assistant for telemedical diabetes care.
García-Sáez, Gema; Hernando, M Elena; Martínez-Sarriegui, Iñaki; Rigla, Mercedes; Torralba, Verónica; Brugués, Eulalia; de Leiva, Alberto; Gómez, Enrique J
2009-06-01
Advanced information technologies, joined with the increasing use of continuous medical devices for monitoring and treatment, have made possible the definition of a new telemedical diabetes care scenario based on a hand-held Personal Assistant (PA). This paper describes the architecture, functionality and implementation of the PA, which connects different medical devices in a personal wireless network. The PA is a mobile system for patients with diabetes connected to a telemedical center. The software design follows a modular approach to make the integration of medical devices or new functionalities independent from the rest of its components. Physicians can remotely control medical devices from the telemedicine server through the integration of the Common Object Request Broker Architecture (CORBA) and mobile GPRS communications. Data for evaluating PA module usage and patients' behavior come from a pervasive tracing system implemented in the PA. The PA architecture has been technically validated with commercially available medical devices during a clinical experiment for ambulatory monitoring and expert feedback through telemedicine. The clinical experiment has allowed defining patients' patterns of usage and preferred scenarios, and it has proved the Personal Assistant's feasibility. The patients showed high acceptability and interest in the system, as recorded in the usability and utility questionnaires. Future work will be devoted to the validation of the system with automatic control strategies from the telemedical center as well as with closed-loop control algorithms.
Clustalnet: the joining of Clustal and CORBA.
Campagne, F
2000-07-01
Performing sequence alignment operations from a program other than the original sequence alignment code, and/or through a network connection, is often required. Interactive alignment editors and large-scale biological data analysis are common examples where such flexibility is important. Interoperability between the alignment engine and the client should be obtained regardless of the architectures and programming languages of the server and client. Clustalnet, a Clustal alignment CORBA server developed on the basis of Clustalw, is described. This server brings the robustness of the algorithms and implementations of Clustal to a new level of reuse. A Clustalnet server object can be accessed from a program, transparently through the network. We present interfaces to perform the alignment operations and to control these operations via immutable contexts. The interfaces that select the contexts do not depend on the nature of the operation to be performed, making the design modular. The IDL interfaces presented here are not specific to Clustal and can be implemented on top of different sequence alignment algorithm implementations.
van der Linden, Helma; Talmon, Jan; Tange, Huibert; Grimson, Jane; Hasman, Arie
2005-03-01
The PropeR EHR system (PropeRWeb) is an electronic health record (EHR) system for multidisciplinary use in extramural care of stroke patients. The system is built using existing open source components and is based on open standards. It is implemented as a web application using servlets and Java Server Pages (JSPs) with a CORBA connection to the database servers, which are based on the OMG HDTF specifications. PropeRWeb is a generic system which can be readily customized for use in a variety of clinical domains. The system proved to be stable and flexible, although some aspects (among others, user friendliness) could be improved. These improvements are currently under development in a second version.
An Examination of Multi-Tier Designs for Legacy Data Access
1997-12-01
heterogeneous relational database management systems. The first test system incorporates a two-tier architecture design using Java, and the second system...employs a three-tier architecture design using Java and CORBA. Data on replication times for the two-tier and three-tier designs are presented
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2002-01-01
The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type 1 censoring. The software was verified by reproducing results published by others.
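The abstract does not give the software's internals; the following sketch only illustrates the kind of calculation involved, namely a two-parameter Weibull fit by maximum likelihood to fatigue data with type I (time-truncated) censoring. The data values are made up for illustration.

    import numpy as np
    from scipy.optimize import minimize

    # Made-up gear fatigue data (millions of cycles): observed failures plus
    # units still running when the test was suspended (type I censoring).
    failures = np.array([1.3, 2.1, 2.8, 3.5, 4.0, 5.2, 6.1])
    censor_time = 7.0          # all surviving units censored at this time
    n_censored = 3

    def neg_log_likelihood(params):
        """Negative Weibull log-likelihood with type I right censoring."""
        beta, eta = params
        if beta <= 0 or eta <= 0:
            return np.inf
        z = failures / eta
        ll_fail = np.sum(np.log(beta / eta) + (beta - 1) * np.log(z) - z ** beta)
        ll_cens = n_censored * (-(censor_time / eta) ** beta)
        return -(ll_fail + ll_cens)

    result = minimize(neg_log_likelihood, x0=[1.5, 5.0], method="Nelder-Mead")
    beta_hat, eta_hat = result.x
    print(f"Weibull shape (beta) = {beta_hat:.3f}, scale (eta) = {eta_hat:.3f}")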
NASA Astrophysics Data System (ADS)
Liang, Likai; Bi, Yushen
Considering the distributed network management system's demands for high distribution, extensibility and reusability, a framework model of a three-tier distributed network management system based on COM/COM+ and DNA is proposed, which adopts software component technology and the N-tier application software framework design idea. We also give the concrete design plan for each layer of this model. Finally, we discuss the internal running process of each layer in the distributed network management system's framework model.
Using an architectural approach to integrate heterogeneous, distributed software components
NASA Technical Reports Server (NTRS)
Callahan, John R.; Purtilo, James M.
1995-01-01
Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their runtime environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This paper describes improvements to the concepts of software packaging and some of our experiences in constructing complex software systems from a wide variety of components in different execution environments. Software packaging is a process that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Software packaging provides a context that relates such mechanisms to software integration processes and reduces the cost of configuring applications whose components are distributed or implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX make by providing a rule-based approach to software integration that is independent of execution environments.
TeleMed: An example of a new system developed with object technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forslund, D.; Phillips, R.; Tomlinson, B.
1996-12-01
Los Alamos National Laboratory has developed a virtual patient record system called TeleMed which is based on a distributed national radiographic and patient record repository located throughout the country. Without leaving their offices, participating doctors can view clinical drug and radiographic data via a sophisticated multimedia interface. For example, a doctor can match a patient's radiographic information with the data in the repository, review treatment history and success, and then determine the best treatment. Furthermore, the features of TeleMed that make it attractive to clinicians and diagnosticians make it valuable for teaching and presentation as well. Thus, a resident can use TeleMed for self-training in diagnostic techniques and a physician can use it to explain to a patient the course of their illness. In fact, the data can be viewed simultaneously by users at two or more distant locations for consultation with specialists in different fields. This capability is of enormous value to a wide spectrum of healthcare providers. It is made possible by the integration of multimedia information using commercial CORBA technology linking object-enabled databases with client interfaces using a three-tiered architecture.
Molecular Isotopic Distribution Analysis (MIDAs) with Adjustable Mass Accuracy
NASA Astrophysics Data System (ADS)
Alves, Gelio; Ogurtsov, Aleksey Y.; Yu, Yi-Kuo
2014-01-01
In this paper, we present Molecular Isotopic Distribution Analysis (MIDAs), a new software tool designed to compute molecular isotopic distributions with adjustable accuracies. MIDAs offers two algorithms, one polynomial-based and one Fourier-transform-based, both of which compute molecular isotopic distributions accurately and efficiently. The polynomial-based algorithm contains few novel aspects, whereas the Fourier-transform-based algorithm consists mainly of improvements to other existing Fourier-transform-based algorithms. We have benchmarked the performance of the two algorithms implemented in MIDAs with that of eight software packages (BRAIN, Emass, Mercury, Mercury5, NeutronCluster, Qmass, JFC, IC) using a consensus set of benchmark molecules. Under the proposed evaluation criteria, MIDAs's algorithms, JFC, and Emass compute with comparable accuracy the coarse-grained (low-resolution) isotopic distributions and are more accurate than the other software packages. For fine-grained isotopic distributions, we compared IC, MIDAs's polynomial algorithm, and MIDAs's Fourier transform algorithm. Among the three, IC and MIDAs's polynomial algorithm compute isotopic distributions that better resemble their corresponding exact fine-grained (high-resolution) isotopic distributions. MIDAs can be accessed freely through a user-friendly web-interface at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/midas/index.html.
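A toy version of the polynomial (convolution) idea, not MIDAs itself: the isotopic distribution of a molecule is obtained by repeatedly convolving the isotope mass/abundance patterns of its constituent atoms. The isotope masses and abundances below are rounded approximations, and the example molecule is chosen arbitrarily.

    # Approximate isotope patterns: list of (mass, abundance) per element.
    ISOTOPES = {
        "C": [(12.0000, 0.9893), (13.0034, 0.0107)],
        "H": [(1.00783, 0.99988), (2.01410, 0.00012)],
        "O": [(15.9949, 0.99757), (16.9991, 0.00038), (17.9992, 0.00205)],
    }

    def convolve(dist_a, dist_b, prune=1e-10):
        """Convolve two (mass, prob) distributions, merging nearly equal masses."""
        combined = {}
        for ma, pa in dist_a:
            for mb, pb in dist_b:
                m = round(ma + mb, 4)
                combined[m] = combined.get(m, 0.0) + pa * pb
        return [(m, p) for m, p in sorted(combined.items()) if p > prune]

    def isotopic_distribution(formula):
        """Approximate isotopic distribution for a formula like {'C': 6, 'H': 12, 'O': 6}."""
        dist = [(0.0, 1.0)]
        for element, count in formula.items():
            for _ in range(count):
                dist = convolve(dist, ISOTOPES[element])
        return dist

    if __name__ == "__main__":
        # Glucose, C6H12O6, as an arbitrary example molecule.
        for mass, prob in isotopic_distribution({"C": 6, "H": 12, "O": 6})[:6]:
            print(f"m = {mass:9.4f}   p = {prob:.5f}")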
Proceedings of Tenth Annual Software Engineering Workshop
NASA Technical Reports Server (NTRS)
1985-01-01
Papers are presented on the following topics: measurement of software technology, recent studies of the Software Engineering Lab, software management tools, expert systems, error seeding as a program validation technique, software quality assurance, software engineering environments (including knowledge-based environments), the Distributed Computing Design System, and various Ada experiments.
Software Management for the NOνA Experiment
NASA Astrophysics Data System (ADS)
Davies, G. S.; Davies, J. P.; C Group; Rebel, B.; Sachdev, K.; Zirnstein, J.
2015-12-01
The NOvA software (NOνASoft) is written in C++, and built on the Fermilab Computing Division's art framework that uses ROOT analysis software. NOνASoft makes use of more than 50 external software packages, is developed by more than 50 developers and is used by more than 100 physicists from over 30 universities and laboratories in 3 continents. The software builds are handled by Fermilab's custom version of Software Release Tools (SRT), a UNIX based software management system for large, collaborative projects that is used by several experiments at Fermilab. The system provides software version control with SVN configured in a client-server mode and is based on the code originally developed by the BaBar collaboration. In this paper, we present efforts towards distributing the NOvA software via the CernVM File System distributed file system. We will also describe our recent work to use a CMake build system and Jenkins, the open source continuous integration system, for NOνASoft.
NASA Technical Reports Server (NTRS)
Yin, J.; Oyaki, A.; Hwang, C.; Hung, C.
2000-01-01
The purpose of this research and study paper is to provide a summary description and results of rapid development accomplishments at NASA/JPL in the area of advanced distributed computing technology using a Commercial-Off-The-Shelf (COTS)-based object oriented component approach to open inter-operable software development and software reuse.
Soft real-time alarm messages for ATLAS TDAQ
NASA Astrophysics Data System (ADS)
Darlea, G.; Al Shabibi, A.; Martin, B.; Lehmann Miotto, G.
2010-05-01
The ATLAS TDAQ network consists of three separate Ethernet-based networks (Data, Control and Management) with over 2000 end-nodes. The TDAQ system has to be aware of the meaningful network failures and events in order for it to take effective recovery actions. The first stage of the process is implemented with Spectrum, a commercial network management tool. Spectrum detects and registers all network events, then it publishes the information via a CORBA programming interface. A gateway program (called NSG—Network Service Gateway) connects to Spectrum through CORBA and exposes to its clients a Java RMI interface. This interface implements a callback mechanism that allows the clients to subscribe for monitoring "interesting" parts of the network. The last stage of the TDAQ network monitoring tool is implemented in a module named DNC (DAQ to Network Connection), which filters the events that are to be reported to the TDAQ system: it subscribes to the gateway only for the machines that are currently active in the system and it forwards only the alarms that are considered important for the current TDAQ data taking session. The network information is then synthesized and presented in a human-readable format. These messages can be further processed either by the shifter who is in charge, the network expert or the Online Expert System. This article aims to describe the different mechanisms of the chain that transports the network events to the front-end user, as well as the constraints and rules that govern the filtering and the final format of the alarm messages.
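The filtering chain described above (subscribe only for the machines active in the current session, then forward only the important alarms) can be illustrated with a small, purely schematic Python sketch; the class and event fields are assumptions for illustration and do not reflect the actual NSG or DNC interfaces.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class NetworkEvent:
        host: str        # machine that generated the event
        severity: int    # 0 = info ... 3 = critical
        message: str

    class AlarmFilter:
        """Schematic stand-in for a DNC-like component: it subscribes only for
        hosts active in the current data-taking session and forwards only
        events above a severity threshold."""

        def __init__(self, active_hosts, min_severity, forward: Callable[[str], None]):
            self.active_hosts = set(active_hosts)
            self.min_severity = min_severity
            self.forward = forward

        def on_event(self, event: NetworkEvent) -> None:
            # Callback invoked by the (hypothetical) gateway for every event.
            if event.host in self.active_hosts and event.severity >= self.min_severity:
                self.forward(f"[{event.host}] severity={event.severity}: {event.message}")

    if __name__ == "__main__":
        dnc = AlarmFilter({"sw-data-01", "ros-17"}, min_severity=2, forward=print)
        dnc.on_event(NetworkEvent("sw-data-01", 3, "uplink down"))   # forwarded
        dnc.on_event(NetworkEvent("sw-data-01", 1, "port flap"))     # filtered: low severity
        dnc.on_event(NetworkEvent("pc-lab-42", 3, "link down"))      # filtered: host not active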
Distributed software framework and continuous integration in hydroinformatics systems
NASA Astrophysics Data System (ADS)
Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao
2017-08-01
When encountering multiple and complicated models, multisource structured and unstructured data, and complex requirements analysis, the platform design and integration of hydroinformatics systems become a challenge. To properly solve these problems, we describe a distributed software framework and its continuous integration process in hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node and clients. Based on it, a GIS-based decision support system for jointly regulating the water quantity and water quality of a group of lakes in Wuhan, China, has been established.
NASA Technical Reports Server (NTRS)
Davis, George; Cary, Everett; Higinbotham, John; Burns, Richard; Hogie, Keith; Hallahan, Francis
2003-01-01
The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements, with functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival and distribution addressed.
Using Scrum Practices in GSD Projects
NASA Astrophysics Data System (ADS)
Paasivaara, Maria; Lassenius, Casper
In this chapter we present advice for applying Scrum practices to globally distributed software development projects. The chapter is based on a multiple-case study of four distributed Scrum projects. We discuss the use of distributed daily Scrums, Scrum-of-Scrums, Sprints, Sprint planning meetings, Sprint Demos, Retrospective meetings, and Backlogs. Moreover, we present lessons from non-agile globally distributed software development projects that distributed Scrum projects can also benefit from: frequent visits and multiple communication modes.
Software Framework for Peer Data-Management Services
NASA Technical Reports Server (NTRS)
Hughes, John; Hardman, Sean; Crichton, Daniel; Hyon, Jason; Kelly, Sean; Tran, Thuy
2007-01-01
Object Oriented Data Technology (OODT) is a software framework for creating a Web-based system for exchange of scientific data that are stored in diverse formats on computers at different sites under the management of scientific peers. OODT software consists of a set of cooperating, distributed peer components that provide distributed peer-to-peer (P2P) services that enable one peer to search and retrieve data managed by another peer. In effect, computers running OODT software at different locations become parts of an integrated data-management system.
Astronomical Software Directory Service
NASA Technical Reports Server (NTRS)
Hanisch, R. J.; Payne, H.; Hayes, J.
1998-01-01
This is the final report on the development of the Astronomical Software Directory Service (ASDS), a distributable, searchable, WWW-based database of software packages and their related documentation. ASDS provides integrated access to 56 astronomical software packages, with more than 16,000 URL's indexed for full-text searching.
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based on hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.
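As one example of the distribution-free direction mentioned above, a Kaplan-Meier estimate of the failure-time survival function can be computed without assuming any parametric form. The sketch below is generic (not the GSFC effort's method), and the failure data are invented.

    import numpy as np

    def kaplan_meier(times, observed):
        """Distribution-free (Kaplan-Meier) estimate of the survival function.

        times    : failure or censoring times
        observed : 1 if a failure was observed at that time, 0 if censored
        Returns (event_times, survival_probabilities)."""
        times = np.asarray(times, dtype=float)
        observed = np.asarray(observed, dtype=int)
        order = np.lexsort((-observed, times))   # at tied times, failures before censorings
        times, observed = times[order], observed[order]
        at_risk = len(times)
        surv = 1.0
        event_times, probs = [], []
        for t, d in zip(times, observed):
            if d == 1:  # an observed failure reduces the survival estimate
                surv *= (at_risk - 1) / at_risk
                event_times.append(t)
                probs.append(surv)
            at_risk -= 1  # failures and censored units both leave the risk set
        return np.array(event_times), np.array(probs)

    # Invented failure times (hours) for software runs; 0 marks censored runs.
    t = [120, 340, 560, 560, 800, 950, 1200, 1500]
    d = [1,   1,   1,   0,   1,   0,   1,    0]
    for ti, si in zip(*kaplan_meier(t, d)):
        print(f"t = {ti:6.0f} h   S(t) = {si:.3f}")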
A Distributed Simulation Software System for Multi-Spacecraft Missions
NASA Technical Reports Server (NTRS)
Burns, Richard; Davis, George; Cary, Everett
2003-01-01
The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements, with functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival and distribution addressed.
Global Software Development with Cloud Platforms
NASA Astrophysics Data System (ADS)
Yara, Pavan; Ramachandran, Ramaseshan; Balasubramanian, Gayathri; Muthuswamy, Karthik; Chandrasekar, Divya
Offshore and outsourced distributed software development models and processes are facing challenges, previously unknown, with respect to computing capacity, bandwidth, storage, security, complexity, reliability, and business uncertainty. Clouds promise to address these challenges by adopting recent advances in virtualization, parallel and distributed systems, utility computing, and software services. In this paper, we envision a cloud-based platform that addresses some of these core problems. We outline a generic cloud architecture, its design and our first implementation results for three cloud forms - a compute cloud, a storage cloud and a cloud-based software service - in the context of global distributed software development (GSD). Our "compute cloud" provides computational services such as continuous code integration and a compile server farm, the "storage cloud" offers storage (block or file-based) services with an on-line virtual storage service, whereas the on-line virtual labs represent a useful cloud service. We note some of the use cases for clouds in GSD and the lessons learned with our prototypes, and identify challenges that must be conquered before realizing the full business benefits. We believe that in the future, software practitioners will focus more on these cloud computing platforms and see clouds as a means to supporting an ecosystem of clients, developers and other key stakeholders.
CHIME: A Metadata-Based Distributed Software Development Environment
2005-01-01
structures by using typography, graphics, and animation. The Software Immersion in our conceptual model for CHIME can be seen as a form of Software...Even small- to medium-sized development efforts may involve hundreds of artifacts -- design documents, change requests, test cases and results, code...for managing and organizing information from all phases of the software lifecycle. CHIME is designed around an XML-based metadata architecture, in
NASA Astrophysics Data System (ADS)
de Faria Scheidt, Rafael; Vilain, Patrícia; Dantas, M. A. R.
2014-10-01
Petroleum reservoir engineering is a complex and interesting field that requires a large amount of computational facilities to achieve successful results. Usually, software environments for this field are developed without taking into account possible interactions and the extensibility required by reservoir engineers. In this paper, we present a research work characterized by the design and implementation of a software product line model for a real distributed reservoir engineering environment. Experimental results indicate the successful utilization of this approach for the design of a distributed software architecture. In addition, all components of the proposal provided greater visibility of the organization and processes for the reservoir engineers.
NASA Technical Reports Server (NTRS)
Hall, Laverne; Hung, Chaw-Kwei; Lin, Imin
2000-01-01
The purpose of this paper is to provide a description of NASA JPL Distributed Systems Technology (DST) Section's object-oriented component approach to open inter-operable systems software development and software reuse. It will address what is meant by the terminology object component software, give an overview of the component-based development approach and how it relates to infrastructure support of software architectures and promotes reuse, enumerate on the benefits of this approach, and give examples of application prototypes demonstrating its usage and advantages. Utilization of the object-oriented component technology approach for system development and software reuse will apply to several areas within JPL, and possibly across other NASA Centers.
Distributed Computing Framework for Synthetic Radar Application
NASA Technical Reports Server (NTRS)
Gurrola, Eric M.; Rosen, Paul A.; Aivazis, Michael
2006-01-01
We are developing an extensible software framework, in response to Air Force and NASA needs for distributed computing facilities for a variety of radar applications. The objective of this work is to develop a Python-based software framework, that is, the framework elements of the middleware that allow developers to control processing flow on a grid in a distributed computing environment. Framework architectures to date allow developers to connect processing functions together as interchangeable objects, thereby allowing a data flow graph to be devised for a specific problem to be solved. The Pyre framework, developed at the California Institute of Technology (Caltech), and now being used as the basis for next-generation radar processing at JPL, is a Python-based software framework. We have extended the Pyre framework to include new facilities to deploy processing components as services, including components that monitor and assess the state of the distributed network for eventual real-time control of grid resources.
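The data-flow-graph idea (interchangeable processing components wired together) can be illustrated independently of Pyre itself. The following minimal Python sketch is not the Pyre API, just a schematic of the pattern, with invented stage names.

    class Component:
        """A processing stage with zero or more upstream inputs and one output."""
        def __init__(self, name, func, *upstream):
            self.name, self.func, self.upstream = name, func, upstream

        def run(self):
            # Pull results from upstream components, then apply this stage.
            inputs = [u.run() for u in self.upstream]
            result = self.func(*inputs)
            print(f"{self.name}: {result}")
            return result

    # Wire a tiny flow graph: two sources feeding a combiner, feeding a final stage.
    raw_a = Component("raw_a", lambda: [1.0, 2.0, 3.0])
    raw_b = Component("raw_b", lambda: [0.5, 0.5, 0.5])
    combine = Component("combine", lambda a, b: [x + y for x, y in zip(a, b)], raw_a, raw_b)
    scale = Component("scale", lambda xs: [2.0 * x for x in xs], combine)

    if __name__ == "__main__":
        scale.run()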
MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.
Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui
A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this aspect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. And to realize this model, a mature data distribution standard, the data distribution service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By elaborately adapting and encapsulating the capability of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter and transport priority as well as the experiment on real robots validate the effectiveness of this work.
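The loosely coupled publish-subscribe model with a time-related constraint can be sketched schematically as follows; this is not the DDS API or the micROS-drt code, only an illustration of the pattern (a subscriber registering a deadline-style constraint on a topic).

    import time
    from collections import defaultdict

    class Bus:
        """Minimal topic-based publish-subscribe bus with a per-subscription deadline.

        A subscriber asks to be warned if no sample arrives on its topic within
        deadline_s seconds of the previous one, loosely mimicking a DDS-style
        deadline QoS. Purely schematic."""

        def __init__(self):
            self.subs = defaultdict(list)   # topic -> list of subscription records

        def subscribe(self, topic, callback, deadline_s=None):
            self.subs[topic].append({"cb": callback, "deadline": deadline_s,
                                     "last": time.monotonic()})

        def publish(self, topic, data):
            now = time.monotonic()
            for sub in self.subs[topic]:
                if sub["deadline"] is not None and now - sub["last"] > sub["deadline"]:
                    print(f"deadline missed on '{topic}'")
                sub["last"] = now
                sub["cb"](data)

    if __name__ == "__main__":
        bus = Bus()
        bus.subscribe("laser_scan", lambda d: print("got scan:", d), deadline_s=0.1)
        bus.publish("laser_scan", [0.8, 0.9, 1.1])
        time.sleep(0.2)                       # simulate a late sample
        bus.publish("laser_scan", [0.7, 0.9, 1.2])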
Microcomputer-Based Programs for Pharmacokinetic Simulations.
ERIC Educational Resources Information Center
Li, Ronald C.; And Others
1995-01-01
Microcomputer software that simulates drug-concentration time profiles based on user-assigned pharmacokinetic parameters such as central volume of distribution, elimination rate constant, absorption rate constant, dosing regimens, and compartmental transfer rate constants is described. The software is recommended for use in undergraduate…
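The kind of drug-concentration time profile such software simulates can be reproduced in a few lines of Python. The one-compartment oral-dosing model (Bateman equation) and the parameter values below are illustrative assumptions, not the software described in the abstract.

    import numpy as np

    def concentration(t, dose, F, V, ka, ke):
        """One-compartment model with first-order absorption (Bateman equation).

        dose : administered dose (mg), F : bioavailability fraction,
        V : volume of distribution (L), ka/ke : absorption and elimination
        rate constants (1/h)."""
        return (F * dose * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

    if __name__ == "__main__":
        times = np.arange(0, 25, 4.0)                       # hours after the dose
        profile = concentration(times, dose=500, F=0.9, V=40.0, ka=1.2, ke=0.15)
        for t, c in zip(times, profile):
            print(f"t = {t:4.1f} h   C = {c:6.2f} mg/L")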
Measurement and analysis of operating system fault tolerance
NASA Technical Reports Server (NTRS)
Lee, I.; Tang, D.; Iyer, R. K.
1992-01-01
This paper demonstrates a methodology to model and evaluate the fault tolerance characteristics of operational software. The methodology is illustrated through case studies on three different operating systems: the Tandem GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Measurements are made on these systems for substantial periods to collect software error and recovery data. In addition to investigating basic dependability characteristics such as major software problems and error distributions, we develop two levels of models to describe error and recovery processes inside an operating system and on multiple instances of an operating system running in a distributed environment. Based on the models, reward analysis is conducted to evaluate the loss of service due to software errors and the effect of the fault-tolerance techniques implemented in the systems. Software error correlation in multicomputer systems is also investigated.
Research into software executives for space operations support
NASA Technical Reports Server (NTRS)
Collier, Mark D.
1990-01-01
Research concepts pertaining to a software (workstation) executive which will support a distributed processing command and control system characterized by high-performance graphics workstations used as computing nodes are presented. Although a workstation-based distributed processing environment offers many advantages, it also introduces a number of new concerns. In order to solve these problems, allow the environment to function as an integrated system, and present a functional development environment to application programmers, it is necessary to develop an additional layer of software. This 'executive' software integrates the system, provides real-time capabilities, and provides the tools necessary to support the application requirements.
Software/hardware distributed processing network supporting the Ada environment
NASA Astrophysics Data System (ADS)
Wood, Richard J.; Pryk, Zen
1993-09-01
A high-performance, fault-tolerant, distributed network has been developed, tested, and demonstrated. The network is based on the MIPS Computer Systems, Inc. R3000 RISC for processing, VHSIC ASICs for high-speed, reliable, inter-node communications, and compatible commercial memory and I/O boards. The network is an evolution of the Advanced Onboard Signal Processor (AOSP) architecture. It supports Ada application software with an Ada-implemented operating system. A six-node implementation (capable of expansion up to 256 nodes) of the RISC multiprocessor architecture provides 120 MIPS of scalar throughput, 96 Mbytes of RAM and 24 Mbytes of non-volatile memory. The network provides for all ground processing applications, has merit for a space-qualified RISC-based network, and interfaces to advanced Computer Aided Software Engineering (CASE) tools for application software development.
NASA Technical Reports Server (NTRS)
Flora-Adams, Dana; Makihara, Jeanne; Benenyan, Zabel; Berner, Jeff; Kwok, Andrew
2007-01-01
Object Oriented Data Technology (OODT) is a software framework for creating a Web-based system for exchange of scientific data that are stored in diverse formats on computers at different sites under the management of scientific peers. OODT software consists of a set of cooperating, distributed peer components that provide distributed peer-to-peer (P2P) services that enable one peer to search and retrieve data managed by another peer. In effect, computers running OODT software at different locations become parts of an integrated data-management system.
A Review of DIMPACK Version 1.0: Conditional Covariance-Based Test Dimensionality Analysis Package
ERIC Educational Resources Information Center
Deng, Nina; Han, Kyung T.; Hambleton, Ronald K.
2013-01-01
DIMPACK Version 1.0 for assessing test dimensionality based on a nonparametric conditional covariance approach is reviewed. This software was originally distributed by Assessment Systems Corporation and now can be freely accessed online. The software consists of Windows-based interfaces of three components: DIMTEST, DETECT, and CCPROX/HAC, which…
IUWare and Computing Tools: Indiana University's Approach to Low-Cost Software.
ERIC Educational Resources Information Center
Sheehan, Mark C.; Williams, James G.
1987-01-01
Describes strategies for providing low-cost microcomputer-based software for classroom use on college campuses. Highlights include descriptions of the software (IUWare and Computing Tools); computing center support; license policies; documentation; promotion; distribution; staff, faculty, and user training; problems; and future plans. (LRW)
ETICS: the international software engineering service for the grid
NASA Astrophysics Data System (ADS)
Meglio, A. D.; Bégin, M.-E.; Couvares, P.; Ronchieri, E.; Takacs, E.
2008-07-01
The ETICS system is a distributed software configuration, build and test system designed to fulfil the needs of improving the quality, reliability and interoperability of distributed software in general and grid software in particular. The ETICS project is a consortium of five partners (CERN, INFN, Engineering Ingegneria Informatica, 4D Soft and the University of Wisconsin-Madison). The ETICS service consists of a build and test job execution system based on the Metronome software and an integrated set of web services and software engineering tools to design, maintain and control build and test scenarios. The ETICS system allows taking into account complex dependencies among applications and middleware components and provides a rich environment to perform static and dynamic analysis of the software and execute deployment, system and interoperability tests. This paper gives an overview of the system architecture and functionality set and then describes how the EC-funded EGEE, DILIGENT and OMII-Europe projects are using the software engineering services to build, validate and distribute their software. Finally a number of significant use and test cases will be described to show how ETICS can be used in particular to perform interoperability tests of grid middleware using the grid itself.
NASA Astrophysics Data System (ADS)
Korzeniewska, Ewa; Szczesny, Artur; Krawczyk, Andrzej; Murawski, Piotr; Mróz, Józef; Seme, Sebastian
2018-03-01
In this paper, the authors describe the distribution of temperatures around electroconductive pathways created by a physical vacuum deposition process on flexible textile substrates used in elastic electronics and textronics. Cordura material was chosen as the substrate. Silver with 99.99% purity was used as the deposited metal. This research was based on thermographic photographs of the produced samples. Analysis of the temperature field around the electroconductive layer was carried out using Image ThermaBase EU software. The analysis of the temperature distribution highlights the software's usefulness in determining the homogeneity of the created metal layer. Higher local temperatures and non-uniform distributions at the same time can negatively influence the work of the textronic system.
Adaptive Multilevel Middleware for Object Systems
2006-12-01
the system at the system-call level or using the CORBA-standard Extensible Transport Framework (ETF). Transparent insertion is highly desirable from an...often as it needs to. This is remedied by using the real-time scheduling class in a stock Linux kernel. We used the sched_setscheduler system call (with...real-time scheduling class (SCHED_FIFO) for all the ML-NFD programs, later experiments with CPU load indicate that a stock Linux kernel is not
Path Searching Based Fault Automated Recovery Scheme for Distribution Grid with DG
NASA Astrophysics Data System (ADS)
Xia, Lin; Qun, Wang; Hui, Xue; Simeng, Zhu
2016-12-01
Applying the path-searching method based on distribution network topology in setting software has a good effect, and a path-searching method that accounts for DG power sources is also applicable to the automatic generation and division of planned islands after a fault. This paper applies the path-searching algorithm to the automatic division of planned islands after faults: starting from the fault-isolation switch and ending at each power source, and, according to the line loads traversed by the search path and the important loads integrated along the optimized search path, forming an optimized division scheme of planned islands in which each DG serves as a power source and is balanced against the local important load. Finally, the COBASE software and the distribution network automation software in use are applied to illustrate the effectiveness of such an automatic restoration scheme.
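The path-searching idea (start at the fault-isolation switch, search toward each DG, and accumulate line loads along the way) can be sketched as a shortest-path search over the feeder graph. The network, loads and names below are invented for illustration; this is not the COBASE implementation.

    from heapq import heappush, heappop

    # Invented feeder graph after fault isolation: edge weights are the loads (kW)
    # picked up when the path traverses that line section.
    GRAPH = {
        "isolation_switch": {"bus1": 120, "bus2": 80},
        "bus1": {"bus3": 60, "DG_A": 0},
        "bus2": {"bus4": 150, "DG_B": 0},
        "bus3": {"DG_B": 40},
        "bus4": {},
    }

    def search_paths(source, graph):
        """Dijkstra-style search returning, for each reachable node, the minimum
        accumulated line load and the path that achieves it."""
        best = {source: (0, [source])}
        heap = [(0, source, [source])]
        while heap:
            load, node, path = heappop(heap)
            if load > best[node][0]:
                continue
            for nxt, line_load in graph.get(node, {}).items():
                new_load = load + line_load
                if nxt not in best or new_load < best[nxt][0]:
                    best[nxt] = (new_load, path + [nxt])
                    heappush(heap, (new_load, nxt, path + [nxt]))
        return best

    if __name__ == "__main__":
        result = search_paths("isolation_switch", GRAPH)
        for dg in ("DG_A", "DG_B"):
            load, path = result[dg]
            print(f"{dg}: path {' -> '.join(path)} picks up {load} kW")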
Distributed Visualization Project
NASA Technical Reports Server (NTRS)
Craig, Douglas; Conroy, Michael; Kickbusch, Tracey; Mazone, Rebecca
2016-01-01
Distributed Visualization allows anyone, anywhere to see any simulation at any time. Development focuses on algorithms, software, data formats, data systems and processes to enable sharing simulation-based information across temporal and spatial boundaries without requiring stakeholders to possess highly-specialized and very expensive display systems. It also introduces abstraction between the native and shared data, which allows teams to share results without giving away proprietary or sensitive data. The initial implementation of this capability is the Distributed Observer Network (DON) version 3.1. DON 3.1 is available for public release in the NASA Software Store (https://software.nasa.gov/software/KSC-13775) and works with version 3.0 of the Model Process Control specification (an XML Simulation Data Representation and Communication Language) to display complex graphical information and associated Meta-Data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
VOLTTRON is an agent execution platform providing services to its agents that allow them to easily communicate with physical devices and other resources. VOLTTRON delivers an innovative distributed control and sensing software platform that supports modern control strategies, including agent-based and transaction-based controls. It enables mobile and stationary software agents to perform information gathering, processing, and control actions. VOLTTRON can independently manage a wide range of applications, such as HVAC systems, electric vehicles, distributed energy or entire building loads, leading to improved operational efficiency.
Software for Demonstration of Features of Chain Polymerization Processes
ERIC Educational Resources Information Center
Sosnowski, Stanislaw
2013-01-01
Free software for the demonstration of the features of homo- and copolymerization processes (free radical, controlled radical, and living) is described. The software is based on the Monte Carlo algorithms and offers insight into the kinetics, molecular weight distribution, and microstructure of the macromolecules formed in those processes. It also…
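A minimal Monte Carlo sketch of the kind of simulation described: idealized free-radical chain growth where, at each step, a growing chain either propagates or terminates with fixed probabilities, yielding a chain-length (molecular weight) distribution. The probabilities are arbitrary and the model ignores real kinetics; it is not the software described in the abstract.

    import random
    from collections import Counter

    def grow_chain(p_propagate=0.99):
        """Grow one chain: each step propagates with probability p, else terminates."""
        length = 1
        while random.random() < p_propagate:
            length += 1
        return length

    def simulate(n_chains=50_000, p_propagate=0.99):
        lengths = [grow_chain(p_propagate) for _ in range(n_chains)]
        number_avg = sum(lengths) / len(lengths)
        weight_avg = sum(l * l for l in lengths) / sum(lengths)
        return lengths, number_avg, weight_avg

    if __name__ == "__main__":
        random.seed(1)
        lengths, dp_n, dp_w = simulate()
        print(f"number-average DP = {dp_n:.1f}, weight-average DP = {dp_w:.1f}, "
              f"dispersity = {dp_w / dp_n:.2f}")
        # Coarse histogram of the chain-length distribution.
        hist = Counter(l // 50 for l in lengths)
        for bucket in sorted(hist)[:6]:
            print(f"DP {bucket * 50:4d}-{bucket * 50 + 49:4d}: {hist[bucket]}")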
Space Physics Data Facility Web Services
NASA Technical Reports Server (NTRS)
Candey, Robert M.; Harris, Bernard T.; Chimiak, Reine A.
2005-01-01
The Space Physics Data Facility (SPDF) Web services provides a distributed programming interface to a portion of the SPDF software. (A general description of Web services is available at http://www.w3.org/ and in many current software-engineering texts and articles focused on distributed programming.) The SPDF Web services distributed programming interface enables additional collaboration and integration of the SPDF software system with other software systems, in furtherance of the SPDF mission to lead collaborative efforts in the collection and utilization of space physics data and mathematical models. This programming interface conforms to all applicable Web services specifications of the World Wide Web Consortium. The interface is specified by a Web Services Description Language (WSDL) file. The SPDF Web services software consists of the following components: 1) A server program for implementation of the Web services; and 2) A software developer s kit that consists of a WSDL file, a less formal description of the interface, a Java class library (which further eases development of Java-based client software), and Java source code for an example client program that illustrates the use of the interface.
Kendler, K S
2012-04-01
Our tendency to see the world of psychiatric illness in dichotomous and opposing terms has three major sources: the philosophy of Descartes, the state of neuropathology in late nineteenth century Europe (when disorders were divided into those with and without demonstrable pathology and labeled, respectively, organic and functional), and the influential concept of computer functionalism wherein the computer is viewed as a model for the human mind-brain system (brain=hardware, mind=software). These mutually re-enforcing dichotomies, which have had a pernicious influence on our field, make a clear prediction about how 'difference-makers' (aka causal risk factors) for psychiatric disorders should be distributed in nature. In particular, are psychiatric disorders like our laptops, which when they dysfunction, can be cleanly divided into those with software versus hardware problems? I propose 11 categories of difference-makers for psychiatric illness from molecular genetics through culture and review their distribution in schizophrenia, major depression and alcohol dependence. In no case do these distributions resemble that predicted by the organic-functional/hardware-software dichotomy. Instead, the causes of psychiatric illness are dappled, distributed widely across multiple categories. We should abandon Cartesian and computer-functionalism-based dichotomies as scientifically inadequate and an impediment to our ability to integrate the diverse information about psychiatric illness our research has produced. Empirically based pluralism provides a rigorous but dappled view of the etiology of psychiatric illness. Critically, it is based not on how we wish the world to be but how the difference-makers for psychiatric illness are in fact distributed.
A Requirement Specification Language for AADL
2016-06-01
An Inverse Modeling Plugin for HydroDesktop using the Method of Anchored Distributions (MAD)
NASA Astrophysics Data System (ADS)
Ames, D. P.; Osorio, C.; Over, M. W.; Rubin, Y.
2011-12-01
The CUAHSI Hydrologic Information System (HIS) software stack is based on an open and extensible architecture that facilitates the addition of new functions and capabilities at both the server side (using HydroServer) and the client side (using HydroDesktop). The HydroDesktop client plugin architecture is used here to expose a new scripting based plugin that makes use of the R statistics software as a means for conducting inverse modeling using the Method of Anchored Distributions (MAD). MAD is a Bayesian inversion technique for conditioning computational model parameters on relevant field observations yielding probabilistic distributions of the model parameters, related to the spatial random variable of interest, by assimilating multi-type and multi-scale data. The implementation of a desktop software tool for using the MAD technique is expected to significantly lower the barrier to use of inverse modeling in education, research, and resource management. The HydroDesktop MAD plugin is being developed following a community-based, open-source approach that will help both its adoption and long term sustainability as a user tool. This presentation will briefly introduce MAD, HydroDesktop, and the MAD plugin and software development effort.
Semantic interoperability--HL7 Version 3 compared to advanced architecture standards.
Blobel, B G M E; Engel, K; Pharow, P
2006-01-01
To meet the challenge for high quality and efficient care, highly specialized and distributed healthcare establishments have to communicate and co-operate in a semantically interoperable way. Information and communication technology must be open, flexible, scalable, knowledge-based and service-oriented as well as secure and safe. For enabling semantic interoperability, a unified process for defining and implementing the architecture, i.e. structure and functions of the cooperating systems' components, as well as the approach for knowledge representation, i.e. the used information and its interpretation, algorithms, etc. have to be defined in a harmonized way. Deploying the Generic Component Model, systems and their components, underlying concepts and applied constraints must be formally modeled, strictly separating platform-independent from platform-specific models. As HL7 Version 3 claims to represent the most successful standard for semantic interoperability, HL7 has been analyzed regarding the requirements for model-driven, service-oriented design of semantic interoperable information systems, thereby moving from a communication to an architecture paradigm. The approach is compared with advanced architectural approaches for information systems such as OMG's CORBA 3 or EHR systems such as GEHR/openEHR and CEN EN 13606 Electronic Health Record Communication. HL7 Version 3 is maturing towards an architectural approach for semantic interoperability. Despite current differences, there is a close collaboration between the teams involved guaranteeing a convergence between competing approaches.
[Example of product development by industry and research solidarity].
Seki, Masayoshi
2014-01-01
When industrial firms develop a product, using research results from research institutions, or reflecting users' ideas in the developed product, is significant for improving the product. Here, a jointly developed software product is taken as an example to describe the adopted development technique and its result, and to consider industry-research collaboration and joint development as seen from the company side. Software development methods each have merits and demerits, and it is necessary to choose the optimal development technique for the system being developed. We jointly developed dose distribution browsing software, adopting the prototype model as the software development method. In order to display the dose distribution information, it is necessary to load four objects - CT Image, Structure Set, RT-Plan, and RT-Dose - and display them in a composite manner. The prototype model, the development technique adopted in this joint development, was especially well suited to developing the dose distribution browsing software. In the prototype model, since the detailed design was created from the program source code after the program was completed, there was a benefit in shortening the documentation period and keeping the design and implementation consistent. This software was eventually released to the public as open source. Based on this prototype software, the release version of the dose distribution browsing software was developed. Developing this type of novel software normally takes two to three years, but because joint development was adopted, the development period was shortened to one year. Shortening the development period kept the company's development cost to a minimum, which is reflected in the product price. Specialists who make requests on the product from the user's point of view are important, and an increase in such specialists as professionals for product development will raise expectations of developing products that meet users' demands.
Component Technology for High-Performance Scientific Simulation Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epperly, T; Kohn, S; Kumfert, G
2000-11-09
We are developing scientific software component technology to manage the complexity of modern, parallel simulation software and increase the interoperability and re-use of scientific software packages. In this paper, we describe a language interoperability tool named Babel that enables the creation and distribution of language-independent software libraries using interface definition language (IDL) techniques. We have created a scientific IDL that focuses on the unique interface description needs of scientific codes, such as complex numbers, dense multidimensional arrays, complicated data types, and parallelism. Preliminary results indicate that in addition to language interoperability, this approach provides useful tools for thinking about the design of modern object-oriented scientific software libraries. Finally, we also describe a web-based component repository called Alexandria that facilitates the distribution, documentation, and re-use of scientific components and libraries.
Using Honeynets and the Diamond Model for ICS Threat Analysis
2016-05-11
RipleyGUI: software for analyzing spatial patterns in 3D cell distributions
Hansson, Kristin; Jafari-Mamaghani, Mehrdad; Krieger, Patrik
2013-01-01
The true revolution in the age of digital neuroanatomy is the ability to extensively quantify anatomical structures and thus investigate structure-function relationships in great detail. To facilitate the quantification of neuronal cell patterns we have developed RipleyGUI, a MATLAB-based software that can be used to detect patterns in the 3D distribution of cells. RipleyGUI uses Ripley's K-function to analyze spatial distributions. In addition, the software contains statistical tools to determine quantitative statistical differences, and tools for spatial transformations that are useful for analyzing non-stationary point patterns. The software has a graphical user interface that makes it easy to use without programming experience, and an extensive user manual explaining the basic concepts underlying the different statistical tools used to analyze spatial point patterns. The described analysis tool can be used for determining the spatial organization of neurons, which is important for a detailed study of structure-function relationships. For example, the neocortex, which can be subdivided into six layers based on cell density and cell types, can also be analyzed in terms of the organizational principles distinguishing the layers. PMID:23658544
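As a rough illustration of the kind of statistic RipleyGUI computes, the sketch below estimates Ripley's K-function for a synthetic 3D point pattern with plain NumPy/SciPy; it ignores edge correction and is not taken from the MATLAB implementation described above.

```python
import numpy as np
from scipy.spatial.distance import pdist

def ripley_k_3d(points, radii, volume):
    """Naive estimate of Ripley's K for a 3D point pattern (no edge correction).

    points : (n, 3) array of cell coordinates
    radii  : distances t at which K(t) is evaluated
    volume : volume of the observation window
    """
    n = len(points)
    dists = pdist(points)            # all pairwise distances (condensed form)
    intensity = n / volume           # estimated point density
    # K(t) = expected number of further points within distance t of a typical
    # point, divided by the intensity; each unordered pair counts twice.
    return np.array([2.0 * np.count_nonzero(dists <= t) / (n * intensity)
                     for t in radii])

# Synthetic example: 500 "cells" scattered uniformly in a 100 x 100 x 100 box.
rng = np.random.default_rng(0)
cells = rng.uniform(0.0, 100.0, size=(500, 3))
radii = np.linspace(1.0, 20.0, 10)
k_obs = ripley_k_3d(cells, radii, volume=100.0 ** 3)
k_csr = 4.0 / 3.0 * np.pi * radii ** 3   # expectation under complete spatial randomness
print(np.round(k_obs / k_csr, 2))        # ratios near 1 indicate a random pattern
```

Deviations of the observed K(t) above or below the complete-spatial-randomness curve indicate clustering or dispersion of the cells at that length scale.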
Network-Based Analysis of Software Change Propagation
Wang, Rongcun; Qu, Binbin
2014-01-01
Object-oriented software systems frequently evolve to meet new change requirements, and understanding the characteristics of changes helps testers and system designers improve software quality. Identifying important modules becomes a key issue in the process of evolution. In this context, a novel network-based approach is proposed to comprehensively investigate change distributions and the correlation between centrality measures and the scope of change propagation. First, software dependency networks are constructed at the class level. Then, the number of co-changes among classes is mined from software repositories. According to the dependency relationships and the co-change counts among classes, the scope of change propagation is calculated. Spearman rank correlation is used to analyze the correlation between centrality measures and the scope of change propagation. Three case studies on the Java open-source software projects FindBugs, Hibernate, and Spring are conducted to investigate the characteristics of change propagation. Experimental results show that (i) the change distribution is very uneven; (ii) PageRank, Degree, and CIRank are significantly correlated with the scope of change propagation. In particular, CIRank shows a higher correlation coefficient, which suggests that it can be a more useful indicator for measuring the scope of change propagation of classes in object-oriented software systems. PMID:24790557
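A minimal sketch of this style of analysis, assuming invented class names and made-up change-propagation values: networkx supplies the class-level dependency network and centrality measures, and SciPy computes the Spearman correlation (the paper's proposed CIRank measure is not reproduced here).

```python
import networkx as nx
from scipy.stats import spearmanr

# Hypothetical class-level dependency network: an edge A -> B means class A
# depends on class B (all class names are invented).
deps = [("OrderService", "OrderDao"), ("OrderService", "Logger"),
        ("OrderDao", "Logger"), ("ReportJob", "OrderDao"), ("ReportJob", "Logger")]
g = nx.DiGraph(deps)

# Made-up "scope of change propagation" per class, standing in for values
# mined from co-change counts in the version-control history.
scope = {"OrderService": 5, "OrderDao": 9, "Logger": 12, "ReportJob": 2}

# Two of the centrality measures considered in the paper.
measures = {"Degree": dict(g.degree()), "PageRank": nx.pagerank(g)}

classes = sorted(scope)
for name, values in measures.items():
    rho, p = spearmanr([values[c] for c in classes], [scope[c] for c in classes])
    print(f"{name}: Spearman rho = {rho:.2f} (p = {p:.2f})")
```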
Hardware/software codesign for embedded RISC core
NASA Astrophysics Data System (ADS)
Liu, Peng
2001-12-01
This paper describes a hardware/software codesign method for the extensible embedded RISC core VIRGO, which is based on the MIPS-I instruction set architecture. VIRGO is described in the Verilog hardware description language, has a five-stage pipeline with a shared 32-bit cache/memory interface, and is controlled by a distributed control scheme. Every pipeline stage has one small controller, which controls that stage's status and the cooperation among pipeline phases. Because the description uses a high-level language and the control structure is distributed, the VIRGO core is highly extensible and can meet the requirements of the application. Taking the high-definition television MPEG2 MPHL decoder chip as an example, we constructed a hardware/software codesign virtual prototyping machine on which the VIRGO core instruction set architecture, system-on-chip memory size requirements, system-on-chip software, etc. can be studied. We can also evaluate the system-on-chip design and the RISC instruction set on the virtual prototyping machine platform.
Software selection based on analysis and forecasting methods, practised in 1C
NASA Astrophysics Data System (ADS)
Vazhdaev, A. N.; Chernysheva, T. Y.; Lisacheva, E. I.
2015-09-01
The research focuses on the built-in mechanisms of the “1C: Enterprise 8” platform for data analysis and forecasting. It is important to evaluate and select proper software to develop effective strategies for customer relationship management in terms of sales, as well as for the implementation and further maintenance of software. The research data allow new forecast models to be created to schedule further software distribution.
Price Based Local Power Distribution Management System (Local Power Distribution Manager) v1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
BROWN, RICHARD E.; CZARNECKI, STEPHEN; SPEARS, MICHAEL
2016-11-28
A transactive energy micro-grid controller is implemented in the VOLTTRON distributed control platform. The system uses the price of electricity as the mechanism for conducting transactions that are used to manage energy use and to balance supply and demand. In order to allow testing and analysis of the control system, the implementation is designed to run completely as a software simulation, while allowing the inclusion of selected hardware that physically manages power. Equipment to be integrated with the micro-grid controller must have an IP (Internet Protocol)-based network connection, and a software "driver" must exist to translate data communications between the device and the controller.
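As a hedged illustration of how a price signal can balance supply and demand in such a transactive scheme, the sketch below clears a single auction round over made-up bids and offers; it is not the VOLTTRON implementation, and all names and numbers are invented.

```python
def clear_market(bids, offers):
    """Uniform-price clearing: raise the price until offered supply covers demand.

    bids   : list of (max_price_willing_to_pay, kW) from loads
    offers : list of (min_acceptable_price, kW) from generators/storage
    """
    for price in sorted({p for p, _ in bids} | {p for p, _ in offers}):
        demand = sum(kw for p, kw in bids if p >= price)    # loads still willing to buy
        supply = sum(kw for p, kw in offers if p <= price)  # sellers willing to sell
        if supply >= demand:
            return price, demand, supply
    return None

# One invented transaction round ($/kWh, kW).
bids = [(0.15, 5.0), (0.12, 3.0), (0.08, 4.0)]
offers = [(0.05, 4.0), (0.10, 4.0), (0.20, 6.0)]
price, demand, supply = clear_market(bids, offers)
print(f"cleared at ${price:.2f}/kWh: demand {demand} kW, supply {supply} kW")
```

Loads whose bids fall below the cleared price curtail their use, which is the basic mechanism by which price balances supply and demand in this kind of controller.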
IPAD 2: Advances in Distributed Data Base Management for CAD/CAM
NASA Technical Reports Server (NTRS)
Bostic, S. W. (Compiler)
1984-01-01
The Integrated Programs for Aerospace-Vehicle Design (IPAD) Project objective is to improve engineering productivity through better use of computer-aided design and manufacturing (CAD/CAM) technology. The focus is on development of technology and associated software for integrated company-wide management of engineering information. The objectives of this conference are as follows: to provide a greater awareness of the critical need by U.S. industry for advancements in distributed CAD/CAM data management capability; to present industry experiences and current and planned research in distributed data base management; and to summarize IPAD data management contributions and their impact on U.S. industry and computer hardware and software vendors.
2017-03-21
ESTCP project EW-201409 aimed at demonstrating the benefits of innovative software technology for building HVAC systems. These benefits included reduced system energy use and cost as well as improved ...
Applying the Goal-Question-Indicator-Metric (GQIM) Method to Perform Military Situational Analysis
2016-05-11
Progressive retry for software error recovery in distributed systems
NASA Technical Reports Server (NTRS)
Wang, Yi-Min; Huang, Yennun; Fuchs, W. K.
1993-01-01
In this paper, we describe a method of execution retry for bypassing software errors based on checkpointing, rollback, message reordering and replaying. We demonstrate how rollback techniques, previously developed for transient hardware failure recovery, can also be used to recover from software faults by exploiting message reordering to bypass software errors. Our approach intentionally increases the degree of nondeterminism and the scope of rollback when a previous retry fails. Examples from our experience with telecommunications software systems illustrate the benefits of the scheme.
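A toy sketch of the progressive-retry idea, assuming invented checkpoint and replay hooks: each retry level widens the rollback scope and increases nondeterminism (here, by reordering replayed messages), which is the mechanism the paper exploits to bypass software errors. The code is illustrative, not the authors' implementation.

```python
import random

def progressive_retry(replay, checkpoints):
    """Retry a failed execution with a progressively larger scope of rollback
    and a progressively higher degree of nondeterminism.

    replay(checkpoint, reorder) re-executes from the given checkpoint and
    returns True if the software error was bypassed; reorder=True asks the
    replay to permute the order in which logged messages are redelivered.
    """
    attempts = [
        (checkpoints[-1], False),   # step 1: deterministic replay, latest checkpoint
        (checkpoints[-1], True),    # step 2: reorder messages, same checkpoint
        (checkpoints[0], True),     # step 3: roll back further and reorder
    ]
    for level, (ckpt, reorder) in enumerate(attempts, start=1):
        if replay(ckpt, reorder):
            return level            # which retry level bypassed the error
    return None                     # give up and escalate (e.g. cold restart)

# Toy fault model: the error is triggered only by the original delivery order,
# so replaying with a different message ordering can bypass it.
def replay(ckpt, reorder):
    order = [1, 2, 3]
    if reorder:
        random.shuffle(order)
    return order != [1, 2, 3]

random.seed(1)
print(progressive_retry(replay, checkpoints=["ckpt_0", "ckpt_1"]))
```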
Distributed agile software development for the SKA
NASA Astrophysics Data System (ADS)
Wicenec, Andreas; Parsons, Rebecca; Kitaeff, Slava; Vinsen, Kevin; Wu, Chen; Nelson, Paul; Reed, David
2012-09-01
The SKA software will most probably be developed by many groups distributed across the globe and coming from different backgrounds, like industries and research institutions. The SKA software subsystems will have to cover a very wide range of different areas, but still they have to react and work together like a single system to achieve the scientific goals and satisfy the challenging data flow requirements. Designing and developing such a system in a distributed fashion requires proper tools and the setup of an environment to allow for efficient detection and tracking of interface and integration issues, in particular in a timely way. Agile development can provide much faster feedback mechanisms and also much tighter collaboration between the customer (scientist) and the developer. Continuous integration and continuous deployment on the other hand can provide much faster feedback of integration issues from the system level to the subsystem developers. This paper describes the results obtained from trialing a potential SKA development environment based on existing science software development processes like ALMA, the expected distribution of the groups potentially involved in the SKA development and experience gained in the development of large scale commercial software projects.
Software Comparison for Renewable Energy Deployment in a Distribution Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, David Wenzhong; Muljadi, Eduard; Tian, Tian
The main objective of this report is to evaluate different software options for performing robust distributed generation (DG) power system modeling. The features and capabilities of four simulation tools, OpenDSS, GridLAB-D, CYMDIST, and PowerWorld Simulator, are compared to analyze their effectiveness in analyzing distribution networks with DG. OpenDSS and GridLAB-D, two open-source packages, have the capability to simulate networks with fluctuating data values; they allow a simulation to be run at each time instant by iterating only the main script file. CYMDIST, a commercial package, allows for time-series simulation to study variations in network controls. PowerWorld Simulator, another commercial tool, has a batch-mode simulation function through the 'Time Step Simulation' tool, which obtains solutions for a list of specified time points. PowerWorld Simulator is intended for analysis of transmission-level systems, while the other three are designed for distribution systems. CYMDIST and PowerWorld Simulator feature easy-to-use graphical user interfaces (GUIs). OpenDSS and GridLAB-D, on the other hand, are based on command-line programs, which increases the time necessary to become familiar with the software packages.
Experiments in fault tolerant software reliability
NASA Technical Reports Server (NTRS)
Mcallister, David F.; Tai, K. C.; Vouk, Mladen A.
1987-01-01
The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued and formulation of new models was initiated.
Federated software defined network operations for LHC experiments
NASA Astrophysics Data System (ADS)
Kim, Dongkyun; Byeon, Okhwan; Cho, Kihyeon
2013-09-01
The most well-known high-energy physics collaboration, the Large Hadron Collider (LHC), which is based on e-Science, has been facing several challenges presented by its extraordinary instruments in terms of the generation, distribution, and analysis of large amounts of scientific data. Currently, data distribution issues are being resolved by adopting an advanced Internet technology called software defined networking (SDN). Stability of the SDN operations and management is demanded to keep the federated LHC data distribution networks reliable. Therefore, in this paper, an SDN operation architecture based on the distributed virtual network operations center (DvNOC) is proposed to enable LHC researchers to assume full control of their own global end-to-end data dissemination. This may achieve an enhanced data delivery performance based on data traffic offloading with delay variation. The evaluation results indicate that the overall end-to-end data delivery performance can be improved over multi-domain SDN environments based on the proposed federated SDN/DvNOC operation framework.
BEANS - a software package for distributed Big Data analysis
NASA Astrophysics Data System (ADS)
Hypki, Arkadiusz
2018-07-01
BEANS software is a web-based, easy to install and maintain, new tool to store and analyse in a distributed way a massive amount of data. It provides a clear interface for querying, filtering, aggregating, and plotting data from an arbitrary number of data sets. Its main purpose is to simplify the process of storing, examining, and finding new relations in huge data sets. The software is an answer to a growing need of the astronomical community to have a versatile tool to store, analyse, and compare the complex astrophysical numerical simulations with observations (e.g. simulations of the Galaxy or star clusters with the Gaia archive). However, this software was built in a general form and it is ready to use in any other research field. It can be used as a building block for other open-source software too.
Addressing the Barriers to Agile Development in DoD
2015-05-01
Briefing slides on addressing the barriers to agile development in DoD acquisition: small, frequent releases; iterative development; reviewing working software rather than extensive documentation; responsiveness to change. Covers the JCIDS IT Box model, a streamlined requirements process for software programs over $15M in which the JROC approves the IS-ICD, and contrasts services contracting (FAR Part 37), which pays for the time and expertise of an agile development contractor, with product-based contracting for a defined software delivery.
Zhao, Lei; Lim Choi Keung, Sarah N; Taweel, Adel; Tyler, Edward; Ogunsina, Ire; Rossiter, James; Delaney, Brendan C; Peterson, Kevin A; Hobbs, F D Richard; Arvanitis, Theodoros N
2012-01-01
Heterogeneous data models and coding schemes for electronic health records present challenges for automated search across distributed data sources. This paper describes a loosely coupled software framework based on the terminology controlled approach to enable the interoperation between the search interface and heterogeneous data sources. Software components interoperate via common terminology service and abstract criteria model so as to promote component reuse and incremental system evolution.
The SysMan monitoring service and its management environment
NASA Astrophysics Data System (ADS)
Debski, Andrzej; Janas, Ekkehard
1996-06-01
Management of modern information systems is becoming more and more complex. There is a growing need for powerful, flexible and affordable management tools to assist system managers in maintaining such systems. It is at the same time evident that effective management should integrate network management, system management and application management in a uniform way. Object oriented OSI management architecture with its four basic modelling concepts (information, organization, communication and functional models) together with widely accepted distribution platforms such as ANSA/CORBA, constitutes a reliable and modern framework for the implementation of a management toolset. This paper focuses on the presentation of concepts and implementation results of an object oriented management toolset developed and implemented within the framework of the ESPRIT project 7026 SysMan. An overview is given of the implemented SysMan management services including the System Management Service, Monitoring Service, Network Management Service, Knowledge Service, Domain and Policy Service, and the User Interface. Special attention is paid to the Monitoring Service which incorporates the architectural key entity responsible for event management. Its architecture and building components, especially filters, are emphasized and presented in detail.
SU-F-J-194: Development of Dose-Based Image Guided Proton Therapy Workflow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pham, R; Sun, B; Zhao, T
Purpose: To implement image-guided proton therapy (IGPT) based on daily proton dose distribution. Methods: Unlike x-ray therapy, simple alignment based on anatomy cannot ensure proper dose coverage in proton therapy. Anatomy changes along the beam path may lead to underdosing the target, or overdosing the organ-at-risk (OAR). With an in-room mobile computed tomography (CT) system, we are developing a dose-based IGPT software tool that allows patient positioning and treatment adaption based on daily dose distributions. During an IGPT treatment, daily CT images are acquired in treatment position. After initial positioning based on rigid image registration, proton dose distribution is calculated on daily CT images. The target and OARs are automatically delineated via deformable image registration. Dose distributions are evaluated to decide if repositioning or plan adaptation is necessary in order to achieve proper coverage of the target and sparing of OARs. Besides online dose-based image guidance, the software tool can also map daily treatment doses to the treatment planning CT images for offline adaptive treatment. Results: An in-room helical CT system is commissioned for IGPT purposes. It produces accurate CT numbers that allow proton dose calculation. GPU-based deformable image registration algorithms are developed and evaluated for automatic ROI-delineation and dose mapping. The online and offline IGPT functionalities are evaluated with daily CT images of the proton patients. Conclusion: The online and offline IGPT software tool may improve the safety and quality of proton treatment by allowing dose-based IGPT and adaptive proton treatments. Research is partially supported by Mevion Medical Systems.
NASA Astrophysics Data System (ADS)
Tsai, Chun-Wei; Lyu, Bo-Han; Wang, Chen; Hung, Cheng-Chieh
2017-05-01
We have already developed multi-function, easy-to-use modulation software based on the LabVIEW system. The software provides four main functions: computer-generated hologram (CGH) generation, CGH reconstruction, image trimming, and special phase distribution. Based on this CGH modulation software, we can enhance the performance of the liquid crystal on silicon spatial light modulator (LCoS-SLM) so that it behaves much like a diffractive optical element (DOE) and use it in various adaptive optics (AO) applications. Through the development of special phase distributions, we intend to use the LCoS-SLM with the CGH modulation software in AO technology, such as optical microscope systems. When the LCoS-SLM panel is integrated into an optical microscope system, it can be placed in the illumination path or in the image-forming path. The LCoS-SLM provides a program-controllable liquid crystal array for the optical microscope: it dynamically changes the amplitude or phase of light, which gives the system the obvious advantage of flexibility.
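The abstract does not say which CGH algorithm the LabVIEW software uses; as one common possibility, the sketch below computes a phase-only hologram with the Gerchberg-Saxton iteration in NumPy, producing the kind of phase pattern that would be loaded onto an LCoS-SLM. The target image and parameters are invented.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50, seed=0):
    """Iteratively compute a phase-only hologram whose far-field intensity
    approximates the target image (SLM plane <-> image plane via FFT)."""
    rng = np.random.default_rng(seed)
    field = np.exp(1j * 2 * np.pi * rng.random(target_amplitude.shape))
    for _ in range(iterations):
        image = np.fft.fft2(field)                               # propagate to image plane
        image = target_amplitude * np.exp(1j * np.angle(image))  # impose target amplitude
        field = np.fft.ifft2(image)                              # propagate back to SLM plane
        field = np.exp(1j * np.angle(field))                     # SLM is phase-only
    return np.angle(field)                                       # phase map for the LCoS-SLM

# Example target: a bright off-axis square in an otherwise dark image plane.
target = np.zeros((256, 256))
target[96:160, 32:96] = 1.0
phase_map = gerchberg_saxton(target)   # values in [-pi, pi], one per SLM pixel
```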
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Collins, Donald J.; Doyle, Richard J.; Jacobson, Allan S.
1991-01-01
Viewgraphs on DataHub knowledge based assistance for science visualization and analysis using large distributed databases. Topics covered include: DataHub functional architecture; data representation; logical access methods; preliminary software architecture; LinkWinds; data knowledge issues; expert systems; and data management.
Emerging Technologies for Software-Reliant Systems of Systems
2010-09-01
conditions, such as temperature, sound, vibration, light intensity, motion, or proximity to objects [Raghavendra 2006]. Cognitive Network: A cognitive...systems, evolutionary development, emergent behavior, geographic distribution. Maier also defines four types of SoS based on their management...by multinational teams. Many organizations use offshoring as a way to reduce costs of software development. Large web-based systems often use
Dynamic Data-Driven Prognostics and Condition Monitoring of On-board Electronics
2012-08-27
of functionality and accessibility; it is an open language unlike Java or Visual, meaning that it is also free. It is also one of the most popular...and C# are able to run without the use of a virtual machine like Java. 4.2.1.5 Implementation: For building an OSA-CBM system, the primer...documentation [7] recommends the following steps: 1. Choose a middleware technology (DCOM, CORBA, Web Services, Java RMI, etc.). 2. Transform the OSA-CBM UML
NASA Technical Reports Server (NTRS)
Follen, Gregory J.; Naiman, Cynthia
2003-01-01
The objective of the GRC CNIS/IE work is to build a plug-and-play infrastructure that provides the Grand Challenge Applications with a suite of tools for coupling codes together, numerically zooming between codes of different fidelity, and deploying these simulations onto the Information Power Grid. The GRC CNIS/IE work will streamline and improve this process by providing tighter integration of the various tools through the use of object-oriented design of component models and data objects and through the use of CORBA (Common Object Request Broker Architecture).
Bringing your tools to CyVerse Discovery Environment using Docker
Devisetty, Upendra Kumar; Kennedy, Kathleen; Sarando, Paul; Merchant, Nirav; Lyons, Eric
2016-01-01
Docker has become a very popular container-based virtualization platform for software distribution that has revolutionized the way in which scientific software and software dependencies (software stacks) can be packaged, distributed, and deployed. Docker makes the complex and time-consuming installation procedures needed for scientific software a one-time process. Because it enables platform-independent installation, versioning of software environments, and easy redeployment and reproducibility, Docker is an ideal candidate for the deployment of identical software stacks on different compute environments such as XSEDE and Amazon AWS. CyVerse’s Discovery Environment also uses Docker for integrating its powerful, community-recommended software tools into CyVerse’s production environment for public use. This paper will help users bring their tools into the CyVerse Discovery Environment (DE), which not only allows users to integrate their tools with relative ease compared to the earlier method of tool deployment in the DE but also helps users share their apps with collaborators and release them for public use. PMID:27803802
PP-SWAT: A Python-based computing software for efficient multiobjective calibration of SWAT
USDA-ARS?s Scientific Manuscript database
With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...
Using CLIPS in the domain of knowledge-based massively parallel programming
NASA Technical Reports Server (NTRS)
Dvorak, Jiri J.
1994-01-01
The Program Development Environment (PDE) is a tool for massively parallel programming of distributed-memory architectures. Adopting a knowledge-based approach, the PDE eliminates the complexity introduced by parallel hardware with distributed memory and offers complete transparency in respect of parallelism exploitation. The knowledge-based part of the PDE is realized in CLIPS. Its principal task is to find an efficient parallel realization of the application specified by the user in a comfortable, abstract, domain-oriented formalism. A large collection of fine-grain parallel algorithmic skeletons, represented as COOL objects in a tree hierarchy, contains the algorithmic knowledge. A hybrid knowledge base with rule modules and procedural parts, encoding expertise about application domain, parallel programming, software engineering, and parallel hardware, enables a high degree of automation in the software development process. In this paper, important aspects of the implementation of the PDE using CLIPS and COOL are shown, including the embedding of CLIPS with C++-based parts of the PDE. The appropriateness of the chosen approach and of the CLIPS language for knowledge-based software engineering are discussed.
Distributed Collaborative Homework Activities in a Problem-Based Usability Engineering Course
ERIC Educational Resources Information Center
Carroll, John M.; Jiang, Hao; Borge, Marcela
2015-01-01
Teams of students in an upper-division undergraduate Usability Engineering course used a collaborative environment to carry out a series of three distributed collaborative homework assignments. Assignments were case-based analyses structured using a jigsaw design; students were provided a collaborative software environment and introduced to a…
NASA Technical Reports Server (NTRS)
Burns, Richard D.; Davis, George; Cary, Everett; Higinbotham, John; Hogie, Keith
2003-01-01
A mission simulation prototype for Distributed Space Systems has been constructed using existing developmental hardware and software testbeds at NASA's Goddard Space Flight Center. A locally distributed ensemble of testbeds, connected through the local area network, operates in real time and demonstrates the potential to assess the impact of subsystem level modifications on system level performance and, ultimately, on the quality and quantity of the end product science data.
NASA Technical Reports Server (NTRS)
Zhang, Zhong
1997-01-01
The development of large-scale, composite software in a geographically distributed environment is an evolutionary process. In such evolving systems, striving for consistency is often complicated by many factors, because development participants have various locations, skills, responsibilities, roles, opinions, languages, terminology, and degrees of abstraction. This naturally leads to many partial specifications, or viewpoints. These multiple views on the system being developed usually overlap, and this overlap gives rise to the potential for inconsistency. Existing CASE tools do not efficiently manage inconsistencies in a distributed development environment for a large-scale project. Based on the ViewPoints framework, the WHERE (Web-Based Hypertext Environment for Requirements Evolution) toolkit aims to tackle inconsistency management issues within geographically distributed software development projects, and thereby helps make software more robust and supports the software assurance process. The long-term goal of the WHERE tools is the analysis and management of inconsistency in requirements specifications. A framework based on graph grammar theory and the TCM/JAVA toolkit is proposed to detect inconsistencies among viewpoints. This systematic approach uses three basic operations (UNION, DIFFERENCE, INTERSECTION) to study the static behavior of graphic and tabular notations; from these operations, subgraph Query, Selection, Merge, and Replacement operations can be derived. The approach uses graph PRODUCTIONS (rewriting rules) to study the dynamic transformations of graphs, and we discuss the feasibility of implementing these operations. We also present the process of porting the original TCM (Toolkit for Conceptual Modeling) project from C++ to the Java programming language. A scenario based on the NASA International Space Station specification is discussed to show the applicability of our approach. Finally, conclusions and future work on inconsistency management issues in the WHERE project are summarized.
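A small illustration of the three basic graph operations mentioned above, applied to two invented viewpoint graphs with networkx; it is not the TCM/JAVA implementation, and the node names are hypothetical.

```python
import networkx as nx

# Two invented viewpoints on the same design; nodes are components/requirements
# and edges are relationships asserted by that viewpoint.
view_a = nx.Graph([("Sensor", "Controller"), ("Controller", "Logger")])
view_b = nx.Graph([("Sensor", "Controller"), ("Controller", "Actuator")])

# networkx's intersection/difference require identical node sets, so align them.
all_nodes = set(view_a) | set(view_b)
view_a.add_nodes_from(all_nodes)
view_b.add_nodes_from(all_nodes)

union = nx.compose(view_a, view_b)         # UNION: everything stated in either view
overlap = nx.intersection(view_a, view_b)  # INTERSECTION: relationships both views share
only_a = nx.difference(view_a, view_b)     # DIFFERENCE: stated only in view A

print(list(overlap.edges()))  # the shared Sensor-Controller relationship
print(list(only_a.edges()))   # Controller-Logger: overlap gaps worth checking for inconsistency
```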
AIRSAR Web-Based Data Processing
NASA Technical Reports Server (NTRS)
Chu, Anhua; Van Zyl, Jakob; Kim, Yunjin; Hensley, Scott; Lou, Yunling; Madsen, Soren; Chapman, Bruce; Imel, David; Durden, Stephen; Tung, Wayne
2007-01-01
The AIRSAR automated, Web-based data processing and distribution system is an integrated, end-to-end synthetic aperture radar (SAR) processing system. Designed to function under limited resources and rigorous demands, AIRSAR eliminates operational errors and provides for paperless archiving. Also, it provides a yearly tune-up of the processor on flight missions, as well as quality assurance with new radar modes and anomalous data compensation. The software fully integrates a Web-based SAR data-user request subsystem, a data processing system to automatically generate co-registered multi-frequency images from both polarimetric and interferometric data collection modes in 80/40/20 MHz bandwidth, an automated verification quality assurance subsystem, and an automatic data distribution system for use in the remote-sensor community. Features include Survey Automation Processing in which the software can automatically generate a quick-look image from an entire 90-GB SAR raw data 32-MB/s tape overnight without operator intervention. Also, the software allows product ordering and distribution via a Web-based user request system. To make AIRSAR more user friendly, it has been designed to let users search by entering the desired mission flight line (Missions Searching), or to search for any mission flight line by entering the desired latitude and longitude (Map Searching). For precision image automation processing, the software generates the products according to each data processing request stored in the database via a Queue management system. Users are able to have automatic generation of coregistered multi-frequency images as the software generates polarimetric and/or interferometric SAR data processing in ground and/or slant projection according to user processing requests for one of the 12 radar modes.
Framework Support For Knowledge-Based Software Development
NASA Astrophysics Data System (ADS)
Huseth, Steve
1988-03-01
The advent of personal engineering workstations has brought substantial information processing power to the individual programmer. Advanced tools and environment capabilities supporting the software lifecycle are just beginning to become generally available. However, many of these tools are addressing only part of the software development problem by focusing on rapid construction of self-contained programs by a small group of talented engineers. Additional capabilities are required to support the development of large programming systems where a high degree of coordination and communication is required among large numbers of software engineers, hardware engineers, and managers. A major player in realizing these capabilities is the framework supporting the software development environment. In this paper we discuss our research toward a Knowledge-Based Software Assistant (KBSA) framework. We propose the development of an advanced framework containing a distributed knowledge base that can support the data representation needs of tools, provide environmental support for the formalization and control of the software development process, and offer a highly interactive and consistent user interface.
A knowledge based software engineering environment testbed
NASA Technical Reports Server (NTRS)
Gill, C.; Reedy, A.; Baker, L.
1985-01-01
The Carnegie Group Incorporated and Boeing Computer Services Company are developing a testbed which will provide a framework for integrating conventional software engineering tools with Artificial Intelligence (AI) tools to promote automation and productivity. The emphasis is on the transfer of AI technology to the software development process. Experiments relate to AI issues such as scaling up, inference, and knowledge representation. In its first year, the project has created a model of software development by representing software activities; developed a module representation formalism to specify the behavior and structure of software objects; integrated the model with the formalism to identify shared representation and inheritance mechanisms; demonstrated object programming by writing procedures and applying them to software objects; used data-directed and goal-directed reasoning to, respectively, infer the cause of bugs and evaluate the appropriateness of a configuration; and demonstrated knowledge-based graphics. Future plans include the introduction of knowledge-based systems for rapid prototyping or rescheduling; natural language interfaces; blackboard architecture; and distributed processing.
STARLSE -- Starlink Extensions to the VAX Language Sensitive Editor
NASA Astrophysics Data System (ADS)
Warren-Smith, R. F.
STARLSE is a ``Starlink Sensitive'' editor based on the VAX Language Sensitive Editor (LSE). It exploits the extensibility of LSE to provide additional features which assist in the writing of portable Fortran 77 software with a standard Starlink style. STARLSE is intended mainly for use by those writing ADAM applications and subroutine libraries for distribution as part of the Starlink Software Collection, although it may also be suitable for other software projects. It is designed to integrate with the SST (Simple Software Tools) package.
Research on distributed optical fiber sensing data processing method based on LabVIEW
NASA Astrophysics Data System (ADS)
Li, Zhonghu; Yang, Meifang; Wang, Luling; Wang, Jinming; Yan, Junhong; Zuo, Jing
2018-01-01
Pipeline leak detection and leak location have received extensive attention in industry. In this paper, a distributed optical fiber sensing system is designed for a heat supply pipeline, and the data processing method for distributed optical fiber sensing based on LabVIEW is studied in detail. The hardware system includes a laser, sensing optical fiber, wavelength division multiplexer, photoelectric detector, data acquisition card, computer, etc. The software system is developed using LabVIEW and adopts a wavelet denoising method to process the temperature information, which improves the SNR. By extracting characteristic values from the fiber temperature information, the system can realize temperature measurement, leak location, and measurement signal storage and query. Compared with the traditional negative pressure wave method or acoustic signal method, the distributed optical fiber temperature measuring system can measure several temperatures in one measurement and locate the leak point accurately. It has broad application prospects.
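As a rough sketch of the wavelet-denoising step described, the code below soft-thresholds the detail coefficients of a synthetic temperature trace using PyWavelets; the wavelet, threshold rule, and leak-localization step are illustrative assumptions, not the LabVIEW implementation.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold the wavelet detail coefficients to suppress sensor noise
    while keeping the slowly varying, leak-induced temperature feature."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest scale
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))   # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Synthetic trace: a localized hot spot (leak) near 640 m buried in noise.
rng = np.random.default_rng(0)
position = np.linspace(0.0, 1000.0, 2000)                 # metres along the fibre
clean = 20.0 + 3.0 * np.exp(-((position - 640.0) ** 2) / 50.0)
noisy = clean + rng.normal(0.0, 0.5, position.size)
denoised = wavelet_denoise(noisy)
print(f"estimated leak position: {position[np.argmax(denoised)]:.1f} m")
```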
Zhang, Ming-cai; Lü, Si-zhe; Cheng, Ying-wu; Gu, Li-xu; Zhan, Hong-sheng; Shi, Yin-yu; Wang, Xiang; Huang, Shi-rong
2011-02-01
To study the effect of vertebral semi-dislocation on the stress distribution in the facet joints and intervertebral discs of patients with cervical syndrome using a three-dimensional finite element model. A male patient with cervical spondylosis, 28 years old, diagnosed with cervical vertebra semi-dislocation by dynamic and static palpation and X-ray, was randomly chosen and scanned from C(1) to C(7) by CT at a slice thickness of 0.75 mm. Based on the CT data, software was used to construct a three-dimensional finite element model of the cervical vertebra semi-dislocation (C(4)-C(6)). Based on this model, virtual manipulation was applied in the software to correct the vertebral semi-dislocation, and the stress distribution was analyzed. The finite element analysis showed that the stress distribution of the C(5-6) facet joint and intervertebral disc changed after virtual manipulation. Vertebral semi-dislocation leads to an abnormal stress distribution in the facet joint and intervertebral disc.
Simulation study on electric field intensity above train roof
NASA Astrophysics Data System (ADS)
Fan, Yizhe; Li, Huawei; Yang, Shasha
2018-04-01
In order to accurately understand the distribution of the electric field in the space above the train roof and to select a reasonable installation position for the detection device, in this paper a 3D model of the pantograph-catenary system is established using SolidWorks, and the spatial electric field distribution of the pantograph-catenary model is simulated with Comsol. Based on the analysis of the electric field intensity within the 0.4 m space above the train roof, a reasonable installation position for the detection device is proposed.
The Software Distribution for Gemini Observatory's Science Operations Group
NASA Astrophysics Data System (ADS)
Hoenig, M. D.; Clarke, M.; Pohlen, M.; Hirst, P.
2014-05-01
Gemini Observatory consists of two telescopes in different hemispheres. It also operates mostly on a queue observing model, meaning observations are performed by staff working shifts as opposed to PIs. For these two reasons alone, maintaining and distributing a diverse software suite is not a trivial matter. We present a way to make the appropriate tools available to staff at Gemini North and South, whether they are working on the summit or from our base facility offices in Hilo, Hawai'i and La Serena, Chile.
ERIC Educational Resources Information Center
Fuchs, Karl Josef; Simonovits, Reinhard; Thaller, Bernd
2008-01-01
This paper describes a high school project where the mathematics teaching and learning software M@th Desktop (MD) based on the Computer Algebra System Mathematica was used for symbolical and numerical calculations and for visualisation. The mathematics teaching and learning software M@th Desktop 2.0 (MD) contains the modules Basics including tools…
Scalable and fail-safe deployment of the ATLAS Distributed Data Management system Rucio
NASA Astrophysics Data System (ADS)
Lassnig, M.; Vigne, R.; Beermann, T.; Barisits, M.; Garonne, V.; Serfon, C.
2015-12-01
This contribution details the deployment of Rucio, the ATLAS Distributed Data Management system. The main complication is that Rucio interacts with a wide variety of external services, and connects globally distributed data centres under different technological and administrative control, at an unprecedented data volume. It is therefore not possible to create a duplicate instance of Rucio for testing or integration. Every software upgrade or configuration change is thus potentially disruptive and requires fail-safe software and automatic error recovery. Rucio uses a three-layer scaling and mitigation strategy based on quasi-realtime monitoring. This strategy mainly employs independent stateless services, automatic failover, and service migration. The technologies used for deployment and mitigation include OpenStack, Puppet, Graphite, HAProxy and Apache. In this contribution, the interplay between these components, their deployment, software mitigation, and the monitoring strategy are discussed.
Espino, Jeremy U; Wagner, M; Szczepaniak, C; Tsui, F C; Su, H; Olszewski, R; Liu, Z; Chapman, W; Zeng, X; Ma, L; Lu, Z; Dara, J
2004-09-24
Computer-based outbreak and disease surveillance requires high-quality software that is well-supported and affordable. Developing software in an open-source framework, which entails free distribution and use of software and continuous, community-based software development, can produce software with such characteristics, and can do so rapidly. The objective of the Real-Time Outbreak and Disease Surveillance (RODS) Open Source Project is to accelerate the deployment of computer-based outbreak and disease surveillance systems by writing software and catalyzing the formation of a community of users, developers, consultants, and scientists who support its use. The University of Pittsburgh seeded the Open Source Project by releasing the RODS software under the GNU General Public License. An infrastructure was created, consisting of a website, mailing lists for developers and users, designated software developers, and shared code-development tools. These resources are intended to encourage growth of the Open Source Project community. Progress is measured by assessing website usage, number of software downloads, number of inquiries, number of system deployments, and number of new features or modules added to the code base. During September--November 2003, users generated 5,370 page views of the project website, 59 software downloads, 20 inquiries, one new deployment, and addition of four features. Thus far, health departments and companies have been more interested in using the software as is than in customizing or developing new features. The RODS laboratory anticipates that after initial installation has been completed, health departments and companies will begin to customize the software and contribute their enhancements to the public code base.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan
A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.
Software-Enabled Distributed Network Governance: The PopMedNet Experience.
Davies, Melanie; Erickson, Kyle; Wyner, Zachary; Malenfant, Jessica; Rosen, Rob; Brown, Jeffrey
2016-01-01
The expanded availability of electronic health information has led to increased interest in distributed health data research networks. The distributed research network model leaves data with and under the control of the data holder. Data holders, network coordinating centers, and researchers have distinct needs and challenges within this model. The concerns of network stakeholders are addressed in the design and governance models of the PopMedNet software platform. PopMedNet features include distributed querying, customizable workflows, and auditing and search capabilities. Its flexible role-based access control system enables the enforcement of varying governance policies. Four case studies describe how PopMedNet is used to enforce network governance models. Trust is an essential component of a distributed research network and must be built before data partners may be willing to participate further. The complexity of the PopMedNet system must be managed as networks grow and new data, analytic methods, and querying approaches are developed. The PopMedNet software platform supports a variety of network structures, governance models, and research activities through customizable features designed to meet the needs of network stakeholders.
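A minimal sketch of role-based access control in the sense used above; the roles and permissions are invented for illustration and do not reflect PopMedNet's actual permission model.

```python
# Invented roles and permissions for a distributed research network.
ROLE_PERMISSIONS = {
    "investigator":  {"compose_query", "submit_query", "view_results"},
    "data_partner":  {"review_query", "approve_query", "reject_query", "upload_results"},
    "network_admin": {"manage_users", "configure_workflow", "audit"},
}

def is_allowed(user_roles, action):
    """A user may perform an action if any of their roles grants it."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

# The data holder reviews and approves an incoming query before it runs locally,
# which is one way a "data stays with the data holder" governance model can be enforced.
assert is_allowed({"data_partner"}, "approve_query")
assert not is_allowed({"investigator"}, "approve_query")
```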
Ground Systems Development Environment (GSDE) interface requirements analysis
NASA Technical Reports Server (NTRS)
Church, Victor E.; Philips, John; Hartenstein, Ray; Bassman, Mitchell; Ruskin, Leslie; Perez-Davila, Alfredo
1991-01-01
A set of procedural and functional requirements are presented for the interface between software development environments and software integration and test systems used for space station ground systems software. The requirements focus on the need for centralized configuration management of software as it is transitioned from development to formal, target based testing. This concludes the GSDE Interface Requirements study. A summary is presented of findings concerning the interface itself, possible interface and prototyping directions for further study, and results of the investigation of the Cronus distributed applications environment.
NASA Astrophysics Data System (ADS)
Choo, Seongho; Li, Vitaly; Choi, Dong Hee; Jung, Gi Deck; Park, Hong Seong; Ryuh, Youngsun
2005-12-01
In the personal robot system currently under development, the internal architecture consists of modules with separate functions connected through heterogeneous network systems. This module-based architecture supports specialization and division of labor in both design and implementation, and as a result it can reduce module development time and cost. Furthermore, because every module is connected to the other modules through network systems, integration is easy and a synergy effect can be obtained by realizing advanced functions through the cooperation of several modules. In this architecture, one of the most important technologies is the network middleware that is in charge of communication among the modules connected through heterogeneous network systems. The network middleware acts like the human nervous system inside the personal robot: it relays, transmits, and translates information appropriately between modules, much as the nervous system does between human organs. The network middleware supports various hardware platforms and heterogeneous network systems (Ethernet, Wireless LAN, USB, IEEE 1394, CAN, CDMA-SMS, RS-232C). This paper discusses mechanisms of our network middleware for intercommunication and routing among modules, methods for real-time data communication, and fault-tolerant network service. For these goals we have designed and implemented a layered network middleware scheme, distributed routing management, and network monitoring/notification technology on heterogeneous networks. The main theme is how routing information is constructed in our network middleware; with this routing information table, we have added some further features. We are now designing and implementing a new version of the network middleware (which we call 'OO M/W') that supports object-oriented operation, and are updating the program sources themselves for an object-oriented architecture. It is lighter and faster and can support more operating systems and heterogeneous network systems, whereas general-purpose middlewares such as CORBA, UPnP, etc. typically support only one network protocol or operating system.
Mission Services Evolution Center Message Bus
NASA Technical Reports Server (NTRS)
Mayorga, Arturo; Bristow, John O.; Butschky, Mike
2011-01-01
The Goddard Mission Services Evolution Center (GMSEC) Message Bus is a robust, lightweight, fault-tolerant middleware implementation that supports all messaging capabilities of the GMSEC API. This architecture is a distributed software system that routes messages based on message subject names and knowledge of the locations in the network of the interested software components.
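A toy sketch of subject-based routing of the kind described: subscribers register patterns over hierarchical subject names and the bus dispatches published messages by matching subjects against those patterns. The class, subject names, and wildcard semantics are invented and do not reflect the GMSEC API.

```python
from collections import defaultdict
from fnmatch import fnmatch

class SubjectBus:
    """Toy publish/subscribe bus that routes messages by dot-separated subject
    names; subscription patterns may contain '*' wildcards."""

    def __init__(self):
        self._subs = defaultdict(list)          # pattern -> list of callbacks

    def subscribe(self, pattern, callback):
        self._subs[pattern].append(callback)

    def publish(self, subject, message):
        for pattern, callbacks in self._subs.items():
            if fnmatch(subject, pattern):       # subject-name based routing
                for cb in callbacks:
                    cb(subject, message)

bus = SubjectBus()
bus.subscribe("MISSION.SAT1.TLM.*", lambda s, m: print("telemetry handler:", s, m))
bus.subscribe("MISSION.*.LOG.*", lambda s, m: print("log archiver:", s, m))
bus.publish("MISSION.SAT1.TLM.POWER", {"bus_voltage": 28.1})
```

Because routing is driven entirely by subject names, components only need to agree on a subject naming convention, not on each other's network locations.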
Army Logistician. Volume 39, Issue 2, March-April 2007
2007-04-01
Army Reduces Tactical Supply System Footprint, by Thomas H. Ament, Jr.: centralizing all of the Army's Corps/Theater Automated Data Processing...Middleware, which comprises both hardware and software, revises data in the Standard Army Retail Supply System (SARSS), thereby extending the use of the...Logistics: Supply Based or Distribution Based?; The Changing Face of Fuel Management; Combat Logistics Patrol Methodology; Distribution-Based
2012-09-30
platform (HPC) was developed, called the HPC-Acoustic Data Accelerator, or HPC-ADA for short. The HPC-ADA was designed based on fielded systems [1-4...software (Detection cLassification for MAchine learning - High Performance Computing). The software package was designed to utilize parallel and...Sedna [7] and is designed using a parallel architecture, allowing existing algorithms to distribute to the various processing nodes with minimal changes
Design of a clinical notification system.
Wagner, M M; Tsui, F C; Pike, J; Pike, L
1999-01-01
We describe the requirements and design of an enterprise-wide notification system. From published descriptions of notification schemes, our own experience, and use cases provided by diverse users in our institution, we developed a set of functional requirements. The resulting design supports multiple communication channels, third party mappings (algorithms) from message to recipient and/or channel of delivery, and escalation algorithms. A requirement for multiple message formats is addressed by a document specification. We implemented this system in Java as a CORBA object. This paper describes the design and current implementation of our notification system.
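A minimal sketch of the escalation idea described above: delivery is attempted over an ordered list of channels, moving to the next channel when delivery is not acknowledged. The channel names, wait times, and functions are invented; the actual system implements such mappings as algorithms behind a Java/CORBA interface.

```python
import time

def page_oncall(recipient, message):
    return False          # pretend the page was not acknowledged in time

def send_email(recipient, message):
    return True           # pretend e-mail delivery (and acknowledgement) succeeded

# Ordered escalation chain: (channel name, delivery function, seconds to wait
# for an acknowledgement before escalating). Values are invented for the demo.
ESCALATION = [("pager", page_oncall, 2), ("email", send_email, 2)]

def notify(recipient, message):
    """Try each channel in turn, escalating while delivery is unacknowledged."""
    for channel, deliver, wait_seconds in ESCALATION:
        if deliver(recipient, message):
            return channel
        time.sleep(wait_seconds)     # wait for a late acknowledgement, then escalate
    return None                      # all channels exhausted; flag for manual follow-up

print(notify("on-call clinician", "critical lab value for patient 1234"))
```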
A Grid Infrastructure for Supporting Space-based Science Operations
NASA Technical Reports Server (NTRS)
Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)
2002-01-01
Emerging technologies for computational grid infrastructures have the potential for revolutionizing the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide a large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing for less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists and engineers the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.
Implementing Extreme Programming in Distributed Software Project Teams: Strategies and Challenges
NASA Astrophysics Data System (ADS)
Maruping, Likoebe M.
Agile software development methods and distributed forms of organizing teamwork are two team process innovations that are gaining prominence in today's demanding software development environment. Individually, each of these innovations has yielded gains in the practice of software development. Agile methods have enabled software project teams to meet the challenges of an ever turbulent business environment through enhanced flexibility and responsiveness to emergent customer needs. Distributed software project teams have enabled organizations to access highly specialized expertise across geographic locations. Although much progress has been made in understanding how to more effectively manage agile development teams and how to manage distributed software development teams, managers have little guidance on how to leverage these two potent innovations in combination. In this chapter, I outline some of the strategies and challenges associated with implementing agile methods in distributed software project teams. These are discussed in the context of a study of a large-scale software project in the United States that lasted four months.
Open source software integrated into data services of Japanese planetary explorations
NASA Astrophysics Data System (ADS)
Yamamoto, Y.; Ishihara, Y.; Otake, H.; Imai, K.; Masuda, K.
2015-12-01
Scientific data obtained by Japanese scientific satellites and lunar and planetary explorations are archived in DARTS (Data ARchives and Transmission System). DARTS provides the data with a simple method such as HTTP directory listing for long-term preservation, while DARTS tries to provide rich web applications for ease of access with modern web technologies based on open source software. This presentation showcases the availability of open source software through our services. KADIAS is a web-based application to search, analyze, and obtain scientific data measured by SELENE (Kaguya), a Japanese lunar orbiter. KADIAS uses OpenLayers to display maps distributed from a Web Map Service (WMS). As a WMS server, the open source software MapServer is adopted. KAGUYA 3D GIS (KAGUYA 3D Moon NAVI) provides a virtual globe for the SELENE data. The main purpose of this application is public outreach. The NASA World Wind Java SDK is used for development. C3 (Cross-Cutting Comparisons) is a tool to compare data from various observations and simulations. It uses Highcharts to draw graphs on web browsers. FLOW is a tool to simulate the Field-Of-View of an instrument onboard a spacecraft. This tool itself is open source software developed by JAXA/ISAS, and its license is the BSD 3-Clause License. The SPICE Toolkit is essential to compile FLOW. The SPICE Toolkit is also open source software developed by NASA/JPL, and the website distributes data for many spacecraft. Nowadays, open source software is an indispensable tool to integrate DARTS services.
The social disutility of software ownership.
Douglas, David M
2011-09-01
Software ownership allows the owner to restrict the distribution of software and to prevent others from reading the software's source code and building upon it. However, free software is released to users under software licenses that give them the right to read the source code, modify it, reuse it, and distribute the software to others. Proponents of free software such as Richard M. Stallman and Eben Moglen argue that the social disutility of software ownership is a sufficient justification for prohibiting it. This social disutility includes the social instability of disregarding laws and agreements covering software use and distribution, inequality of software access, and the inability to help others by sharing software with them. Here I consider these and other social disutility claims against withholding specific software rights from users, in particular, the rights to read the source code, duplicate, distribute, modify, imitate, and reuse portions of the software within new programs. I find that generally while withholding these rights from software users does cause some degree of social disutility, only the rights to duplicate, modify and imitate cannot legitimately be denied to users on this basis. The social disutility of withholding the rights to distribute the software, read its source code and reuse portions of it in new programs is insufficient to prohibit software owners from denying them to users. A compromise between the software owner and user can minimise the social disutility of withholding these particular rights from users. However, the social disutility caused by software patents is sufficient for rejecting such patents as they restrict the methods of reducing social disutility possible with other forms of software ownership.
NASA Astrophysics Data System (ADS)
Changyong, Dou; Huadong, Guo; Chunming, Han; Ming, Liu
2014-03-01
With more and more Earth observation data available to the community, how to manage and share these valuable remote sensing datasets is becoming an urgent issue. Web-based Geographical Information Systems (GIS) technology provides a convenient way for users in different locations to share and make use of the same dataset. In order to efficiently use the airborne Synthetic Aperture Radar (SAR) remote sensing data acquired by the Airborne Remote Sensing Center of the Institute of Remote Sensing and Digital Earth (RADI), Chinese Academy of Sciences (CAS), a Web-GIS based platform for airborne SAR data management, distribution and sharing was designed and developed. The major features of the system include a map-based navigation search interface, full-resolution imagery shown overlaid on the map, and the fact that all the software adopted in the platform is Open Source Software (OSS). The functions of the platform include browsing the imagery on the map-based navigation interface, ordering and downloading data online, image dataset and user management, etc. At present, the system is under testing in RADI and will come into regular operation soon.
Data-Driven Software Framework for Web-Based ISS Telescience
NASA Technical Reports Server (NTRS)
Tso, Kam S.
2005-01-01
Software that enables authorized users to monitor and control scientific payloads aboard the International Space Station (ISS) from diverse terrestrial locations equipped with Internet connections is undergoing development. This software reflects a data-driven approach to distributed operations. A Web-based software framework leverages prior developments in Java and Extensible Markup Language (XML) to create portable code and portable data, to which one can gain access via Web-browser software on almost any common computer. Open-source software is used extensively to minimize cost; the framework also accommodates enterprise-class server software to satisfy needs for high performance and security. To accommodate the diversity of ISS experiments and users, the framework emphasizes openness and extensibility. Users can take advantage of available viewer software to create their own client programs according to their particular preferences, and can upload these programs for custom processing of data, generation of views, and planning of experiments. The same software system, possibly augmented with a subset of data and additional software tools, could be used for public outreach by enabling public users to replay telescience experiments, conduct their experiments with simulated payloads, and create their own client programs and other custom software.
About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture
NASA Astrophysics Data System (ADS)
Grauer, Manfred; Barth, Thomas
2004-06-01
Permanently increasing complexity of products and their manufacturing processes combined with a shorter "time-to-market" leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information&Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can either comprise software systems, hardware systems, or communication networks. An appropriate IT-infrastructure must provide the means to integrate all these resources and enable their use even across a network to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating expert's knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation the CAD-system CATIA is used which is coupled with the FEM-simulation system INDEED for simulation of sheet-metal forming processes and the problem solving environment OpTiX for distributed optimization.
Jaikuna, Tanwiwat; Khadsiri, Phatchareewan; Chawapun, Nisa; Saekho, Suwit; Tharavichitkul, Ekkasit
2017-02-01
To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in treatment planning was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the difference between the dose volume histogram from CERR and the treatment planning system. An equivalent dose in 2 Gy fractions (EQD2) was calculated using the biological effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Differences in physical dose were found between CERR and the treatment planning system (TPS) in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum when determined by D2cc, and less than 1% in Pinnacle. The difference in the EQD2 between the software calculation and the manual calculation was not significant (0.00%), with p-values of 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively. The Isobio software is a feasible tool to generate the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
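For reference, the standard linear-quadratic (LQ) relations that such a conversion builds on can be written as below; this is only the baseline LQ form, stated here as an assumption, while the LQL model used in the paper additionally switches to a linear dose-response tail above a threshold dose per fraction.

```latex
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right),
\qquad
\mathrm{EQD}_2 = \frac{\mathrm{BED}}{1 + \dfrac{2}{\alpha/\beta}}
```

Here n is the number of fractions, d the dose per fraction, and α/β the tissue-specific fractionation sensitivity parameter.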
Brown, Jason L; Bennett, Joseph R; French, Connor M
2017-01-01
SDMtoolbox 2.0 is a software package for spatial studies of ecology, evolution, and genetics. The release of SDMtoolbox 2.0 allows researchers to use the most current ArcGIS software and MaxEnt software, and reduces the amount of time that would be spent developing common solutions. The central aim of this software is to automate complicated and repetitive spatial analyses in an intuitive graphical user interface. One core tenant facilitates careful parameterization of species distribution models (SDMs) to maximize each model's discriminatory ability and minimize overfitting. This includes carefully processing of occurrence data, environmental data, and model parameterization. This program directly interfaces with MaxEnt, one of the most powerful and widely used species distribution modeling software programs, although SDMtoolbox 2.0 is not limited to species distribution modeling or restricted to modeling in MaxEnt. Many of the SDM pre- and post-processing tools have 'universal' analogs for use with any modeling software. The current version contains a total of 79 scripts that harness the power of ArcGIS for macroecology, landscape genetics, and evolutionary studies. For example, these tools allow for biodiversity quantification (such as species richness or corrected weighted endemism), generation of least-cost paths and corridors among shared haplotypes, assessment of the significance of spatial randomizations, and enforcement of dispersal limitations of SDMs projected into future climates-to only name a few functions contained in SDMtoolbox 2.0. Lastly, dozens of generalized tools exists for batch processing and conversion of GIS data types or formats, which are broadly useful to any ArcMap user.
Astronomical Software Directory Service
NASA Astrophysics Data System (ADS)
Hanisch, Robert J.; Payne, Harry; Hayes, Jeffrey
1997-01-01
With the support of NASA's Astrophysics Data Program (NRA 92-OSSA-15), we have developed the Astronomical Software Directory Service (ASDS): a distributed, searchable, WWW-based database of software packages and their related documentation. ASDS provides integrated access to 56 astronomical software packages, with more than 16,000 URLs indexed for full-text searching. Users are performing about 400 searches per month. A new aspect of our service is the inclusion of telescope and instrumentation manuals, which prompted us to change the name to the Astronomical Software and Documentation Service. ASDS was originally conceived to serve two purposes: to provide a useful Internet service in an area of expertise of the investigators (astronomical software), and as a research project to investigate various architectures for searching through a set of documents distributed across the Internet. Two of the co-investigators were then installing and maintaining astronomical software as their primary job responsibility. We felt that a service which incorporated our experience in this area would be more useful than a straightforward listing of software packages. The original concept was for a service based on the client/server model, which would function as a directory/referral service rather than as an archive. For performing the searches, we began our investigation with a decision to evaluate the Isite software from the Center for Networked Information Discovery and Retrieval (CNIDR). This software was intended as a replacement for Wide-Area Information Service (WAIS), a client/server technology for performing full-text searches through a set of documents. Isite had some additional features that we considered attractive, and we enjoyed the cooperation of the Isite developers, who were happy to have ASDS as a demonstration project. We ended up staying with the software throughout the project, making modifications to take advantage of new features as they came along, as well as influencing the software development. The Web interface to the search engine is provided by a gateway program written in C++ by a consultant to the project (A. Warnock).
ERIC Educational Resources Information Center
Fernández-Alemán, José Luis; Carrillo-de-Gea, Juan Manuel; Meca, Joaquín Vidal; Ros, Joaquín Nicolás; Toval, Ambrosio; Idri, Ali
2016-01-01
This paper presents the results of two educational experiments carried out to determine whether the process of specifying requirements (catalog-based reuse as opposed to conventional specification) has an impact on effectiveness and productivity in co-located and distributed software development environments. The participants in the experiments…
ERIC Educational Resources Information Center
Smith, Garon C.; Hossain, Md Mainul
2017-01-01
Species TOPOS is a free software package for generating three-dimensional (3-D) topographic surfaces ("topos") for acid-base equilibrium studies. This upgrade adds 3-D species distribution topos to earlier surfaces that showed pH and buffer capacity behavior during titration and dilution procedures. It constructs topos by plotting…
NASA Technical Reports Server (NTRS)
Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn
2002-01-01
One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document proposes a strawman framework design for the climate community based on the integration of Cactus, from the relativistic physics community, and the UCLA/UCB Distributed Data Broker (DDB) from the climate community. This design is the result of an extensive survey of climate models and frameworks in the climate community as well as frameworks from many other scientific communities. The design addresses fundamental development and runtime needs using Cactus, a framework with interfaces for FORTRAN and C-based languages, and high-performance model communication needs using DDB. This document also specifically explores object-oriented design issues in the context of climate modeling as well as climate modeling issues in terms of object-oriented design.
Spectrophotometer-Based Color Measurements
2017-10-24
Approved for public release; distribution is unlimited. U.S. Army Armament Research, Development and Engineering Center, Weapons and Software Engineering Center. The report covers methods, assumptions, and procedures for spectrophotometer-based color measurements, values for Federal color standards, and tables of instrument precision and of method precision and operator variability.
Designing Distributed Learning Environments with Intelligent Software Agents
ERIC Educational Resources Information Center
Lin, Fuhua, Ed.
2005-01-01
"Designing Distributed Learning Environments with Intelligent Software Agents" reports on the most recent advances in agent technologies for distributed learning. Chapters are devoted to the various aspects of intelligent software agents in distributed learning, including the methodological and technical issues on where and how intelligent agents…
Instrument control software development process for the multi-star AO system ARGOS
NASA Astrophysics Data System (ADS)
Kulas, M.; Barl, L.; Borelli, J. L.; Gässler, W.; Rabien, S.
2012-09-01
The ARGOS project (Advanced Rayleigh guided Ground layer adaptive Optics System) will upgrade the Large Binocular Telescope (LBT) with an AO System consisting of six Rayleigh laser guide stars. This adaptive optics system integrates several control loops and many different components like lasers, calibration swing arms and slope computers that are dispersed throughout the telescope. The purpose of the instrument control software (ICS) is running this AO system and providing convenient client interfaces to the instruments and the control loops. The challenges for the ARGOS ICS are the development of a distributed and safety-critical software system with no defects in a short time, the creation of huge and complex software programs with a maintainable code base, the delivery of software components with the desired functionality and the support of geographically distributed project partners. To tackle these difficult tasks, the ARGOS software engineers reuse existing software like the novel middleware from LINC-NIRVANA, an instrument for the LBT, provide many tests at different functional levels like unit tests and regression tests, agree about code and architecture style and deliver software incrementally while closely collaborating with the project partners. Many ARGOS ICS components are already successfully in use in the laboratories for testing ARGOS control loops.
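As a minimal illustration of the unit-testing practice mentioned above, a JUnit 4 test might look like the sketch below; the class, method names and conversion factor are invented for the example and are not part of the ARGOS code base.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical unit under test: converts a measured wavefront slope to an
// actuator command; the name and the gain value are assumptions.
class SlopeToCommandConverter {
    double toCommand(double slope) {
        return 2.0 * slope;   // assumed gain for the example
    }
}

public class SlopeToCommandConverterTest {
    @Test
    public void convertsSlopeWithAssumedGain() {
        SlopeToCommandConverter converter = new SlopeToCommandConverter();
        assertEquals(4.0, converter.toCommand(2.0), 1e-9);
    }
}
```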
Distributed medical services within the ATM-based Berlin regional test bed
NASA Astrophysics Data System (ADS)
Thiel, Andreas; Bernarding, Johannes; Krauss, Manfred; Schulz, Sandra; Tolxdorff, Thomas
1996-05-01
The ATM-based Metropolitan Area Network (MAN) of Berlin connects two university hospitals (Benjamin Franklin University Hospital and Charite) with the computer resources of the Technical University of Berlin (TUB). New distributed medical services have been implemented and will be evaluated within the highspeed MAN of Berlin. The network with its data transmission rates of up to 155 Mbit/s renders these medical services externally available to practicing physicians. Resource and application sharing is demonstrated by the use of two software systems. The first software system is an interactive 3D reconstruction tool (3D-Medbild), based on a client-server mechanism. This structure allows the use of high-performance computers at the TUB from the low-level workstations in the hospitals. A second software system, RAMSES, utilizes a tissue database of Magnetic Resonance Images. For the remote control of the software, the developed applications use standards such as DICOM 3.0 and features of the World Wide Web. Data security concepts are being tested and integrated for the needs of the sensitive medical data. The highspeed network is the necessary prerequisite for the clinical evaluation of data in a joint teleconference. The transmission of digitized real-time sequences such as video and ultrasound and the interactive manipulation of data are made possible by multimedia tools.
University Approaches to Software Copyright and Licensure Policies.
ERIC Educational Resources Information Center
Hawkins, Brian L.
Issues of copyright policy and software licensure at Drexel University that were developed during the introduction of a new microcomputing program are discussed. Channels for software distribution include: individual purchase of externally-produced software, distribution of internally-developed software, institutional licensure, and "read…
MuffinInfo: HTML5-Based Statistics Extractor from Next-Generation Sequencing Data.
Alic, Andy S; Blanquer, Ignacio
2016-09-01
Usually, the information known a priori about a newly sequenced organism is limited. Even resequencing the same organism can generate unpredictable output. We introduce MuffinInfo, a FastQ/Fasta/SAM information extractor implemented in HTML5 capable of offering insights into next-generation sequencing (NGS) data. Our new tool can run on any software or hardware environment, from the command line or graphically, and in a browser or standalone. It presents information such as average length, base distribution, quality score distribution, k-mer histogram, and homopolymer analysis. MuffinInfo improves upon the existing extractors by adding the ability to save and then reload the results obtained after a run as a navigable file (also supporting saving pictures of the charts), by supporting custom statistics implemented by the user, and by offering user-adjustable parameters involved in the processing, all in one software package. At the moment, the extractor works with all base space technologies such as Illumina, Roche, Ion Torrent, Pacific Biosciences, and Oxford Nanopore. Owing to HTML5, our software demonstrates the readiness of web technologies for mildly intensive tasks encountered in bioinformatics.
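To make two of the listed statistics concrete, the toy Java sketch below computes a base distribution and a k-mer histogram over a single example read; it is an assumed, simplified illustration and not MuffinInfo's HTML5 implementation.

```java
import java.util.HashMap;
import java.util.Map;

public class ReadStats {
    public static void main(String[] args) {
        String read = "ACGTACGGTTAC";           // example read sequence
        int k = 3;                               // k-mer length

        // Base distribution: count occurrences of each nucleotide.
        Map<Character, Integer> baseCounts = new HashMap<>();
        for (char base : read.toCharArray()) {
            baseCounts.merge(base, 1, Integer::sum);
        }

        // k-mer histogram: count every overlapping substring of length k.
        Map<String, Integer> kmerCounts = new HashMap<>();
        for (int i = 0; i + k <= read.length(); i++) {
            kmerCounts.merge(read.substring(i, i + k), 1, Integer::sum);
        }

        System.out.println("Base distribution: " + baseCounts);
        System.out.println(k + "-mer histogram: " + kmerCounts);
    }
}
```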
Real-time control using open source RTOS
NASA Astrophysics Data System (ADS)
Irwin, Philip C.; Johnson, Richard L., Jr.
2002-12-01
Complex telescope systems such as interferometers tend to rely heavily on hard real-time operating systems (RTOS). It has been standard practice at NASA's Jet Propulsion Laboratory (JPL) and many other institutions to use costly commercial RTOSs and hardware. After developing a real-time toolkit for VxWorks on the PowerPC platform (dubbed RTC), the interferometry group at JPL is porting this code to the Real-Time Application Interface (RTAI), an open source RTOS that is essentially an extension to the Linux kernel. This port has the potential to reduce software and hardware costs for future projects, while increasing the level of performance. The goals of this paper are to briefly describe the RTC toolkit, highlight the successes and pitfalls of porting the toolkit from VxWorks to Linux-RTAI, and to discuss future enhancements that will be implemented as a direct result of this port. The first port of any body of code is always the most difficult since it uncovers the OS-specific calls and forces "red flags" into those portions of the code. For this reason, it has also been a huge benefit that the project chose a generic, platform-independent OS extension, ACE, and its CORBA counterpart, TAO. This port of RTC will pave the way for conversions to other environments, the most interesting of which is a non-real-time simulation environment, currently being considered by the Space Interferometry Mission (SIM) and the Terrestrial Planet Finder (TPF) Projects.
Climate tools in mainstream Linux distributions
NASA Astrophysics Data System (ADS)
McKinstry, Alastair
2015-04-01
Debian/meteorology is a project to integrate climate tools and analysis software into the mainstream Debian/Ubuntu Linux distributions. This work describes lessons learnt and recommends practices for scientific software to be adopted and maintained in OS distributions. In addition to standard analysis tools (cdo, grads, ferret, metview, ncl, etc.), software used by the Earth System Grid Federation was chosen for integration, to enable ESGF portals to be built on this base; however, exposing scientific codes via web APIs exposes security weaknesses that are normally ignorable. How tools are hardened, and what changes are required to handle security upgrades, are described. Secondly, enabling libraries and components (e.g. Python modules) to be integrated requires planning by their writers: it is not sufficient to assume users can upgrade their code when you make incompatible changes. Here, practices are recommended to enable upgrades and co-installability of C, C++, Fortran and Python codes. Finally, software packages such as NetCDF and HDF5 can be built in multiple configurations. Tools may then expect incompatible versions of these libraries (e.g. serial and parallel) to be simultaneously available; how this was solved in Debian using "pkg-config" and shared library interfaces is described, and best practices for software writers to enable this are summarised.
1984-09-28
Uncertainty is expressed as a probability density distribution, with a prior probability that the software contains errors; this prior is updated as test failure data are accumulated. Both parametric and nonparametric versions are presented, and it is shown by the author that the bootstrap underlies the jackknife method.
A distributed data base management system. [for Deep Space Network
NASA Technical Reports Server (NTRS)
Bryan, A. I.
1975-01-01
Major system design features of a distributed data management system for the NASA Deep Space Network (DSN) designed for continuous two-way deep space communications are described. The reasons for which the distributed data base utilizing third-generation minicomputers is selected as the optimum approach for the DSN are threefold: (1) with a distributed master data base, valid data is available in real-time to support DSN management activities at each location; (2) data base integrity is the responsibility of local management; and (3) the data acquisition/distribution and processing power of a third-generation computer enables the computer to function successfully as a data handler or as an on-line process controller. The concept of the distributed data base is discussed along with the software, data base integrity, and hardware used. The data analysis/update constraint is examined.
NASA Astrophysics Data System (ADS)
Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Yorozu, K.; Kim, S.
2012-04-01
Data assimilation methods have received increased attention for accomplishing uncertainty assessment and enhancing forecasting capability in various areas. Despite their potential, software frameworks applicable to probabilistic approaches and data assimilation are still limited because most hydrologic modeling software is based on a deterministic approach. In this study, we developed a hydrological modeling framework for sequential data assimilation, called MPI-OHyMoS. MPI-OHyMoS allows users to develop their own element models and to easily build a total simulation system model for hydrological simulations. Unlike a process-based modeling framework, this software framework benefits from its object-oriented features to flexibly represent hydrological processes without any change to the main library. Sequential data assimilation based on particle filters is available for any hydrologic model based on MPI-OHyMoS, considering various sources of uncertainty originating from input forcing, parameters and observations. The particle filters are a Bayesian learning process in which the propagation of all uncertainties is carried out by a suitable selection of randomly generated particles without any assumptions about the nature of the distributions. In MPI-OHyMoS, ensemble simulations are parallelized, which can take advantage of a high performance computing (HPC) system. We applied this software framework to short-term streamflow forecasting of several catchments in Japan using a distributed hydrologic model. Uncertainty of model parameters and of remotely-sensed rainfall data such as X-band or C-band radar is estimated and mitigated in the sequential data assimilation.
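A minimal sketch of one particle-filter update step as described above is given below: a Gaussian likelihood weight update followed by systematic resampling for a scalar state. The observation model, noise values and state are assumptions chosen for illustration and do not reproduce MPI-OHyMoS.

```java
import java.util.Random;

public class ParticleFilterStep {
    public static void main(String[] args) {
        Random rng = new Random(42);
        int n = 1000;
        double[] particles = new double[n];
        double[] weights = new double[n];

        // Initialize particles around an assumed prior state of 10.0.
        for (int i = 0; i < n; i++) particles[i] = 10.0 + rng.nextGaussian();

        // Weight update: Gaussian likelihood of an assumed observation.
        double observation = 10.5, obsStd = 0.5, sum = 0.0;
        for (int i = 0; i < n; i++) {
            double e = (observation - particles[i]) / obsStd;
            weights[i] = Math.exp(-0.5 * e * e);
            sum += weights[i];
        }
        for (int i = 0; i < n; i++) weights[i] /= sum;   // normalize weights

        // Systematic resampling: draw particles in proportion to their weights.
        double[] resampled = new double[n];
        double step = 1.0 / n, u = rng.nextDouble() * step, c = weights[0];
        int j = 0;
        for (int i = 0; i < n; i++) {
            double target = u + i * step;
            while (target > c && j < n - 1) c += weights[++j];
            resampled[i] = particles[j];
        }

        double mean = 0.0;
        for (double p : resampled) mean += p / n;
        System.out.println("Posterior mean estimate: " + mean);
    }
}
```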
Fuzzy-Neural Controller in Service Requests Distribution Broker for SOA-Based Systems
NASA Astrophysics Data System (ADS)
Fras, Mariusz; Zatwarnicka, Anna; Zatwarnicki, Krzysztof
The evolution of software architectures has led to the rising importance of the Service Oriented Architecture (SOA) concept. This architecture paradigm supports building flexible distributed service systems. In the paper the architecture of a service request distribution broker designed for use in SOA-based systems is proposed. The broker is built with the idea of fuzzy control. The functional and non-functional request requirements, in conjunction with monitoring of execution and communication links, are used to distribute requests. Decisions are made with the use of a fuzzy-neural network.
An IP-Based Software System for Real-time, Closed Loop, Multi-Spacecraft Mission Simulations
NASA Technical Reports Server (NTRS)
Cary, Everett; Davis, George; Higinbotham, John; Burns, Richard; Hogie, Keith; Hallahan, Francis
2003-01-01
This viewgraph presentation provides information on the architecture of a computerized testbest for simulating Distributed Space Systems (DSS) for controlling spacecraft flying in formation. The presentation also discusses and diagrams the Distributed Synthesis Environment (DSE) for simulating and planning DSS missions.
Model-based reasoning for power system management using KATE and the SSM/PMAD
NASA Technical Reports Server (NTRS)
Morris, Robert A.; Gonzalez, Avelino J.; Carreira, Daniel J.; Mckenzie, F. D.; Gann, Brian
1993-01-01
The overall goal of this research effort has been the development of a software system which automates tasks related to monitoring and controlling electrical power distribution in spacecraft electrical power systems. The resulting software system is called the Intelligent Power Controller (IPC). The specific tasks performed by the IPC include continuous monitoring of the flow of power from a source to a set of loads, fast detection of anomalous behavior indicating a fault to one of the components of the distribution systems, generation of diagnosis (explanation) of anomalous behavior, isolation of faulty object from remainder of system, and maintenance of flow of power to critical loads and systems (e.g. life-support) despite fault conditions being present (recovery). The IPC system has evolved out of KATE (Knowledge-based Autonomous Test Engineer), developed at NASA-KSC. KATE consists of a set of software tools for developing and applying structure and behavior models to monitoring, diagnostic, and control applications.
Design for Run-Time Monitor on Cloud Computing
NASA Astrophysics Data System (ADS)
Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design the Run-Time Monitor (RTM), which is system software that monitors application behavior at run-time, analyzes the collected information, and optimizes resources on cloud computing. RTM monitors application software through library instrumentation as well as the underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.
Web-based multi-channel analyzer
Gritzo, Russ E.
2003-12-23
The present invention provides an improved multi-channel analyzer designed to conveniently gather, process, and distribute spectrographic pulse data. The multi-channel analyzer may operate on a computer system having memory, a processor, and the capability to connect to a network and to receive digitized spectrographic pulses. The multi-channel analyzer may have a software module integrated with a general-purpose operating system that may receive digitized spectrographic pulses for at least 10,000 pulses per second. The multi-channel analyzer may further have a user-level software module that may receive user-specified controls dictating the operation of the multi-channel analyzer, making the multi-channel analyzer customizable by the end-user. The user-level software may further categorize and conveniently distribute spectrographic pulse data employing non-proprietary, standard communication protocols and formats.
Similarities between GCS and human motor cortex: complex movement coordination
NASA Astrophysics Data System (ADS)
Rodríguez, Jose A.; Macias, Rosa; Molgo, Jordi; Guerra, Dailos
2014-07-01
The "Gran Telescopio de Canarias" (GTC1) is an optical-infrared 10-meter segmented mirror telescope at the ORM observatory in Canary Islands (Spain). The GTC control system (GCS), the brain of the telescope, is is a distributed object & component oriented system based on RT-CORBA and it is responsible for the management and operation of the telescope, including its instrumentation. On the other hand, the Human motor cortex (HMC) is a region of the cerebrum responsible for the coordination of planning, control, and executing voluntary movements. If we analyze both systems, as far as the movement control of their mechanisms and body parts is concerned, we can find extraordinary similarities in their architectures. Both are structured in layers, and their functionalities are comparable from the movement conception until the movement action itself: In the GCS we can enumerate the Sequencer high level components, the Coordination libraries, the Control Kit library and the Device Driver library as the subsystems involved in the telescope movement control. If we look at the motor cortex, we can also enumerate the primary motor cortex, the secondary motor cortices, which include the posterior parietal cortex, the premotor cortex, and the supplementary motor area (SMA), the motor units, the sensory organs and the basal ganglia. From all these components/areas we will analyze in depth the several subcortical regions, of the the motor cortex, that are involved in organizing motor programs for complex movements and the GCS coordination framework, which is composed by a set of classes that allow to the high level components to transparently control a group of mechanisms simultaneously.
Center for Adaptive Optics | Software
The Center for Adaptive Optics, a University of California Science and Technology Center, acts as a clearing house for distributing adaptive optics software to institutes; it gives specialists in adaptive optics a place to distribute their software.
An open source platform for multi-scale spatially distributed simulations of microbial ecosystems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Segre, Daniel
2014-08-14
The goal of this project was to develop a tool for facilitating simulation, validation and discovery of multiscale dynamical processes in microbial ecosystems. This led to the development of an open-source software platform for Computation Of Microbial Ecosystems in Time and Space (COMETS). COMETS performs spatially distributed time-dependent flux balance based simulations of microbial metabolism. Our plan involved building the software platform itself, calibrating and testing it through comparison with experimental data, and integrating simulations and experiments to address important open questions on the evolution and dynamics of cross-feeding interactions between microbial species.
Architecture for distributed design and fabrication
NASA Astrophysics Data System (ADS)
McIlrath, Michael B.; Boning, Duane S.; Troxel, Donald E.
1997-01-01
We describe a flexible, distributed system architecture capable of supporting collaborative design and fabrication of semi-conductor devices and integrated circuits. Such capabilities are of particular importance in the development of new technologies, where both equipment and expertise are limited. Distributed fabrication enables direct, remote, physical experimentation in the development of leading edge technology, where the necessary manufacturing resources are new, expensive, and scarce. Computational resources, software, processing equipment, and people may all be widely distributed; their effective integration is essential in order to achieve the realization of new technologies for specific product requirements. Our architecture leverages current vendor and consortia developments to define software interfaces and infrastructure based on existing and emerging networking, CIM, and CAD standards. Process engineers and product designers access processing and simulation results through a common interface and collaborate across the distributed manufacturing environment.
Hyperspectral Soil Mapper (HYSOMA) software interface: Review and future plans
NASA Astrophysics Data System (ADS)
Chabrillat, Sabine; Guillaso, Stephane; Eisele, Andreas; Rogass, Christian
2014-05-01
With the upcoming launch of the next generation of hyperspectral satellites that will routinely deliver high spectral resolution images for the entire globe (e.g. EnMAP, HISUI, HyspIRI, HypXIM, PRISMA), an increasing demand for the availability/accessibility of hyperspectral soil products is coming from the geoscience community. Indeed, many robust methods for the prediction of soil properties based on imaging spectroscopy already exist and have been successfully used for a wide range of soil mapping airborne applications. Nevertheless, these methods require expert know-how and fine-tuning, which makes them used sparingly. More developments are needed toward easy-to-access soil toolboxes as a major step toward the operational use of hyperspectral soil products for Earth's surface processes monitoring and modelling, to allow non-experienced users to obtain new information based on inexpensive software packages where repeatability of the results is an important prerequisite. In this frame, based on the EU-FP7 EUFAR (European Facility for Airborne Research) project and the EnMAP satellite science program, higher performing soil algorithms were developed at the GFZ German Research Center for Geosciences as demonstrators for end-to-end processing chains with harmonized quality measures. The algorithms were built into the HYSOMA (Hyperspectral SOil MApper) software interface, providing an experimental platform for soil mapping applications of hyperspectral imagery that gives the choice of multiple algorithms for each soil parameter. The software interface focuses on fully automatic generation of semi-quantitative soil maps such as soil moisture, soil organic matter, iron oxide, clay content, and carbonate content. Additionally, a field calibration option calculates fully quantitative soil maps provided ground truth soil data are available. Implemented soil algorithms have been tested and validated using extensive in-situ ground truth data sets. The HYSOMA code was developed as standalone IDL software to allow easy implementation in the hyperspectral and non-hyperspectral communities. Indeed, within the hyperspectral community, the IDL language is very widely used, and for non-expert users that do not have an ENVI license, such software can be executed as a binary version using the free IDL virtual machine under various operating systems. Based on the growing interest of users in the software interface, the experimental software was adapted for a public release version in 2012, and since then ~80 users of hyperspectral soil products have downloaded the soil algorithms at www.gfz-potsdam.de/hysoma. The software interface was distributed for free as IDL plug-ins under the IDL virtual machine. Up to now, distribution of HYSOMA has been based on a closed-source license model, for non-commercial and educational purposes. Currently, HYSOMA is under further development in the context of the EnMAP satellite mission, for extension and implementation in the EnMAP Box as EnSoMAP (EnMAP SOil MAPper). The EnMAP Box is a freely available, platform-independent software distributed under an open source license. In the presentation we will focus on an update of the HYSOMA software interface status and upcoming implementation in the EnMAP Box. Scientific software validation, the associated publication record and user responses, as well as software management and transition to open source, will be discussed.
Advanced software integration: The case for ITV facilities
NASA Technical Reports Server (NTRS)
Garman, John R.
1990-01-01
The array of technologies and methodologies involved in the development and integration of avionics software has moved almost as rapidly as computer technology itself. Future avionics systems involve major advances and risks in the following areas: (1) Complexity; (2) Connectivity; (3) Security; (4) Duration; and (5) Software engineering. From an architectural standpoint, the systems will be much more distributed, involve session-based user interfaces, and have the layered architectures typified in the layers of abstraction concepts popular in networking. Typified in the NASA Space Station Freedom will be the highly distributed nature of software development itself. Systems composed of independent components developed in parallel must be bound by rigid standards and interfaces, and by clean requirements and specifications. Avionics software provides a challenge in that it cannot be flight-tested until the first time it literally flies. It is the binding of requirements for such an integration environment into the advances and risks of future avionics systems that forms the basis of the presented concept and the basic Integration, Test, and Verification concept within the development and integration life cycle of Space Station Mission and Avionics systems.
Compiling software for a hierarchical distributed processing system
Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E
2013-12-31
Compiling software for a hierarchical distributed processing system including providing to one or more compiling nodes software to be compiled, wherein at least a portion of the software to be compiled is to be executed by one or more nodes; compiling, by the compiling node, the software; maintaining, by the compiling node, any compiled software to be executed on the compiling node; selecting, by the compiling node, one or more nodes in a next tier of the hierarchy of the distributed processing system in dependence upon whether any compiled software is for the selected node or the selected node's descendents; sending to the selected node only the compiled software to be executed by the selected node or selected node's descendent.
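A rough sketch of the tree-wise distribution idea in the abstract above is given below, keeping at each tier only the compiled software destined for a node or its descendants; the node structure and method names are assumptions made for illustration, not the patented implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical node in the processing hierarchy.
class Node {
    final String name;
    final List<Node> children = new ArrayList<>();
    final List<String> installed = new ArrayList<>();
    Node(String name) { this.name = name; }

    // Names of this node and all of its descendants.
    List<String> subtreeNames() {
        List<String> names = new ArrayList<>();
        names.add(name);
        for (Node child : children) names.addAll(child.subtreeNames());
        return names;
    }
}

public class HierarchicalDistributor {
    // Send each node only the compiled artifacts addressed to it or its descendants.
    static void distribute(Node node, Map<String, String> compiledByTarget) {
        if (compiledByTarget.containsKey(node.name)) {
            node.installed.add(compiledByTarget.get(node.name));
        }
        for (Node child : node.children) {
            Map<String, String> forChild = new HashMap<>();
            for (String target : child.subtreeNames()) {
                if (compiledByTarget.containsKey(target)) {
                    forChild.put(target, compiledByTarget.get(target));
                }
            }
            distribute(child, forChild);   // the next tier receives only what it needs
        }
    }

    public static void main(String[] args) {
        Node root = new Node("compile-node");
        Node a = new Node("a"); Node b = new Node("b");
        root.children.add(a); root.children.add(b);
        Map<String, String> compiled = new HashMap<>();
        compiled.put("a", "libA.so"); compiled.put("b", "libB.so");
        distribute(root, compiled);
        System.out.println("a got " + a.installed + ", b got " + b.installed);
    }
}
```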
NASA Astrophysics Data System (ADS)
Lemmens, R.; Maathuis, B.; Mannaerts, C.; Foerster, T.; Schaeffer, B.; Wytzisk, A.
2009-12-01
This paper concerns easily accessible, integrated web-based analysis of satellite images with plug-in based open source software. The paper is targeted at both users and developers of geospatial software. Guided by a use case scenario, we describe the ILWIS software and its toolbox to access satellite images through the GEONETCast broadcasting system. The last two decades have shown a major shift from stand-alone software systems to networked ones, often client/server applications using distributed geo-(web-)services. This allows organisations to combine, without much effort, their own data with remotely available data and processing functionality. Key to this integrated spatial data analysis is low-cost access to data from within user-friendly and flexible software. Web-based open source software solutions are often a powerful option for developing countries. The Integrated Land and Water Information System (ILWIS) is a PC-based GIS and remote sensing software package, comprising a complete suite of image processing, spatial analysis and digital mapping, and was developed as commercial software from the early nineties onwards. Recent project efforts have migrated ILWIS into modular, plug-in-based open source software, and provide web-service support for OGC-based web mapping and processing. The core objective of the ILWIS Open source project is to provide a maintainable framework for researchers and software developers to implement training components, scientific toolboxes and (web-)services. The latest plug-ins have been developed for multi-criteria decision making, water resources analysis and spatial statistics analysis. The development of this framework has been carried out since 2007 in the context of 52°North, an open initiative that advances the development of cutting-edge open source geospatial software under the GPL license. GEONETCast, as part of the emerging Global Earth Observation System of Systems (GEOSS), puts essential environmental data at the fingertips of users around the globe. This user-friendly and low-cost information dissemination provides global information as a basis for decision-making in a number of critical areas, including public health, energy, agriculture, weather, water, climate, natural disasters and ecosystems. GEONETCast makes satellite images available via Digital Video Broadcast (DVB) technology. An OGC WMS interface and plug-ins which convert GEONETCast data streams allow an ILWIS user to integrate various distributed data sources with data stored locally on their machine. Our paper describes a use case in which ILWIS is used with GEONETCast satellite imagery for decision-making processes in Ghana. We also explain how the ILWIS software can be extended with additional functionality by means of building plug-ins, and unfold our plans to implement other OGC standards, such as WCS and WPS, in the same context. The latter, especially, can be seen as a major step forward in terms of moving well-proven desktop-based processing functionality to the web. This enables the embedding of ILWIS functionality in Spatial Data Infrastructures or even execution in scalable and on-demand cloud computing environments.
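For readers unfamiliar with the OGC WMS interface mentioned above, a GetMap request is just a parameterized HTTP query; the sketch below assembles one with standard WMS 1.1.1 parameters. The host name, layer name and bounding box are placeholders, not an actual GEONETCast or ILWIS endpoint.

```java
public class WmsGetMapExample {
    public static void main(String[] args) {
        // Standard OGC WMS 1.1.1 GetMap parameters; server and layer are hypothetical.
        String url = "http://example.org/wms?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap"
                + "&LAYERS=rainfall_estimate"
                + "&SRS=EPSG:4326"
                + "&BBOX=-3.5,4.5,1.5,11.5"      // lon/lat bounding box roughly over Ghana
                + "&WIDTH=512&HEIGHT=512"
                + "&FORMAT=image/png";
        System.out.println(url);
    }
}
```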
A decentralized software bus based on IP multicasting
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd
1995-01-01
We describe a decentralized, reconfigurable implementation of a conference management system based on the low-level Internet Protocol (IP) multicasting protocol. IP multicasting allows low-cost, world-wide, two-way transmission of data between large numbers of conferencing participants through the Multicasting Backbone (MBone). Each conference is structured as a software bus -- a messaging system that provides a run-time interconnection model that acts as a separate agent (i.e., the bus) for routing, queuing, and delivering messages between distributed programs. Unlike the client-server interconnection model, the software bus model provides a level of indirection that enhances the flexibility and reconfigurability of a distributed system. Current software bus implementations like POLYLITH, however, rely on a centralized bus process and point-to-point protocols (i.e., TCP/IP) to route, queue, and deliver messages. We implement a software bus called the MULTIBUS that relies on a separate process only for routing and uses a reliable IP multicasting protocol for delivery of messages. The use of multicasting means that interconnections are independent of IP machine addresses. This approach allows reconfiguration of bus participants during system execution without notifying other participants of new IP addresses. The use of IP multicasting also permits an economy of scale in the number of participants. We describe the MULTIBUS protocol elements and show how our implementation performs better than centralized bus implementations.
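The delivery path described above rests on ordinary IP multicast sockets. A minimal (unreliable) Java receiver joining a group address is sketched below; the group address and port are assumptions, and the reliability layer MULTIBUS adds on top is a separate concern not shown here.

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class BusReceiver {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.1.2.3"); // assumed group address
        try (MulticastSocket socket = new MulticastSocket(4446)) { // assumed port
            socket.joinGroup(group);            // subscribe to the bus "channel"
            byte[] buffer = new byte[1500];
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            socket.receive(packet);             // block until one bus message arrives
            System.out.println(new String(packet.getData(), 0, packet.getLength()));
            socket.leaveGroup(group);
        }
    }
}
```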
Marketing Education Computer Curriculum. Final Report.
ERIC Educational Resources Information Center
Pittsburgh Univ., PA. School of Education.
A project developed computer software based upon Interstate Distributive Education Curriculum Consortium (IDECC) competency-based materials to produce a new curriculum management system for Pennsylvania secondary marketing education programs. During the project, an advisory committee composed of secondary marketing teachers, business people, and…
InterProScan 5: genome-scale protein function classification
Jones, Philip; Binns, David; Chang, Hsin-Yu; Fraser, Matthew; Li, Weizhong; McAnulla, Craig; McWilliam, Hamish; Maslen, John; Mitchell, Alex; Nuka, Gift; Pesseat, Sebastien; Quinn, Antony F.; Sangrador-Vegas, Amaia; Scheremetjew, Maxim; Yong, Siew-Yit; Lopez, Rodrigo; Hunter, Sarah
2014-01-01
Motivation: Robust large-scale sequence analysis is a major challenge in modern genomic science, where biologists are frequently trying to characterize many millions of sequences. Here, we describe a new Java-based architecture for the widely used protein function prediction software package InterProScan. Developments include improvements and additions to the outputs of the software and the complete reimplementation of the software framework, resulting in a flexible and stable system that is able to use both multiprocessor machines and/or conventional clusters to achieve scalable distributed data analysis. InterProScan is freely available for download from the EMBL-EBI FTP site and the open source code is hosted at Google Code. Availability and implementation: InterProScan is distributed via FTP at ftp://ftp.ebi.ac.uk/pub/software/unix/iprscan/5/ and the source code is available from http://code.google.com/p/interproscan/. Contact: http://www.ebi.ac.uk/support or interhelp@ebi.ac.uk or mitchell@ebi.ac.uk PMID:24451626
Advanced Shutter Control for a Molecular Beam Epitaxy Reactor
An open-source hardware and software-based shutter controller solution was developed that communicates over Ethernet with our original equipment manufacturer (OEM) molecular beam epitaxy (MBE) reactor control software. An Arduino Mega microcontroller is used as the brain of the shutter controller, while a custom-designed circuit board distributes 24-V power to each of the 16 shutter solenoids available on the MBE. Using Ethernet
RFP Patterns and Techniques for Successful Agile Contracting
2016-11-01
Achieving High Performance With TCP Over 40 GbE on NUMA Architectures for CMS Data Acquisition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bawej, Tomasz; et al.
2014-01-01
TCP and the socket abstraction have barely changed over the last two decades, but at the network layer there has been a giant leap from a few megabits to 100 gigabits in bandwidth. At the same time, CPU architectures have evolved into the multicore era and applications are expected to make full use of all available resources. Applications in the data acquisition domain based on the standard socket library running in a Non-Uniform Memory Access (NUMA) architecture are unable to reach full efficiency and scalability without the software being adequately aware of the IRQ (Interrupt Request), CPU and memory affinities. During the first long shutdown of the LHC, the CMS DAQ system is going to be upgraded for operation from 2015 onwards, and a new software component has been designed and developed in the CMS online framework for transferring data with sockets. This software attempts to wrap the low-level socket library to ease higher-level programming with an API based on an asynchronous event-driven model similar to the DAT uDAPL API. It is an event-based application with NUMA optimizations that allows for a high throughput of data across a large distributed system. This paper describes the architecture, the technologies involved and the performance measurements of the software in the context of the CMS distributed event building.
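The asynchronous, event-driven socket model mentioned above can be illustrated with plain Java NIO. The sketch below is a generic single-threaded selector loop for accepting and reading connections; it is not the CMS online framework's API, omits the NUMA/IRQ affinity tuning the paper describes, and the port number is an assumption.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class EventLoop {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(5000));     // assumed port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocateDirect(64 * 1024);
        while (true) {
            selector.select();                         // wait for readiness events
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {              // new connection event
                    SocketChannel client = server.accept();
                    if (client != null) {
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    }
                } else if (key.isReadable()) {         // data-available event
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int n = client.read(buffer);
                    if (n < 0) client.close(); else System.out.println("read " + n + " bytes");
                }
            }
            selector.selectedKeys().clear();
        }
    }
}
```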
Development of land based radar polarimeter processor system
NASA Technical Reports Server (NTRS)
Kronke, C. W.; Blanchard, A. J.
1983-01-01
The processing subsystem of a land-based radar polarimeter was designed and constructed. This subsystem is labeled the remote data acquisition and distribution system (RDADS). The radar polarimeter, an experimental remote sensor, incorporates the RDADS to control all operations of the sensor. The RDADS uses industrial-standard components including an 8-bit microprocessor-based single-board computer, analog input/output boards, a dynamic random access memory board, and power supplies. A high-speed digital electronics board was specially designed and constructed to control range-gating for the radar. A complete system of software programs was developed to operate the RDADS. The software uses a powerful real-time, multi-tasking executive package as an operating system. The hardware and software used in the RDADS are detailed. Future system improvements are recommended.
JANIS 4: An Improved Version of the NEA Java-based Nuclear Data Information System
NASA Astrophysics Data System (ADS)
Soppera, N.; Bossant, M.; Dupont, E.
2014-06-01
JANIS is software developed to facilitate the visualization and manipulation of nuclear data, giving access to evaluated data libraries, and to the EXFOR and CINDA databases. It is stand-alone Java software, downloadable from the web and distributed on DVD. Used offline, the system also makes use of an internet connection to access the NEA Data Bank database. It is now also offered as a full web application, only requiring a browser. The features added in the latest version of the software and this new web interface are described.
JANIS 4: An Improved Version of the NEA Java-based Nuclear Data Information System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soppera, N., E-mail: nicolas.soppera@oecd.org; Bossant, M.; Dupont, E.
JANIS is software developed to facilitate the visualization and manipulation of nuclear data, giving access to evaluated data libraries, and to the EXFOR and CINDA databases. It is stand-alone Java software, downloadable from the web and distributed on DVD. Used offline, the system also makes use of an internet connection to access the NEA Data Bank database. It is now also offered as a full web application, only requiring a browser. The features added in the latest version of the software and this new web interface are described.
Software architecture of INO340 telescope control system
NASA Astrophysics Data System (ADS)
Ravanmehr, Reza; Khosroshahi, Habib
2016-08-01
Software architecture plays an important role in the distributed control systems of astronomical projects because many subsystems and components must work together in a consistent and reliable way. We have utilized a customized architecture design approach based on the "4+1 view model" in order to design the INOCS software architecture. In this paper, after reviewing the top-level INOCS architecture, we present the software architecture model of INOCS inspired by the "4+1 model"; for this purpose we provide logical, process, development, physical, and scenario views of the architecture using different UML diagrams and other illustrative visual charts. Each view presents the INOCS software architecture from a different perspective. We finish the paper with the science data operation of INO340 and concluding remarks.
Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm
NASA Astrophysics Data System (ADS)
Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian
2018-03-01
Current point cloud registration software has high hardware requirements and a heavy workload, requires multiple interactive definitions, and the source code of software with better processing results is not open. Therefore, a two-step registration method based on a normal-vector distribution feature and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. This method combines the fast point feature histogram (FPFH) algorithm, defines the adjacency region of the point cloud and a calculation model of the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix to finish rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.
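A minimal numpy/scipy sketch of the fine-registration stage only (point-to-point ICP). The FPFH-based rough registration is assumed to have produced the initial transform, and the iteration and convergence settings are illustrative, not those used in the paper.

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t

    def icp(source, target, init_R=np.eye(3), init_t=np.zeros(3), iters=50, tol=1e-6):
        """Refine a rough alignment of `source` onto `target` (both N x 3 arrays)."""
        tree = cKDTree(target)
        R, t = init_R, init_t
        prev_err = np.inf
        for _ in range(iters):
            moved = source @ R.T + t
            dists, idx = tree.query(moved)          # closest-point correspondences
            R_step, t_step = best_rigid_transform(moved, target[idx])
            R, t = R_step @ R, R_step @ t + t_step  # compose the incremental update
            err = dists.mean()
            if abs(prev_err - err) < tol:
                break
            prev_err = err
        return R, t, err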
Remote software upload techniques in future vehicles and their performance analysis
NASA Astrophysics Data System (ADS)
Hossain, Irina
Updating software in vehicle Electronic Control Units (ECUs) will become a mandatory requirement for a variety of reasons, for example, to update or fix the functionality of an existing system, add new functionality, remove software bugs, and keep up with the ITS infrastructure. Software modules of advanced vehicles can be updated using the Remote Software Upload (RSU) technique. RSU employs an infrastructure-based wireless communication technique where the software supplier sends the software to the targeted vehicle via a roadside Base Station (BS). However, security is critically important in RSU to avoid disasters due to malfunctions of the vehicle and to protect proprietary algorithms from hackers, competitors, or people with malicious intent. In this thesis, a mechanism for secure software upload in advanced vehicles is presented which employs mutual authentication of the software provider and the vehicle using a pre-shared authentication key before sending the software. The software packets are sent encrypted with a secret key along with a Message Digest (MD). To increase the security level, it is proposed that the vehicle receive more than one copy of the software, with an MD in each copy. The vehicle installs the new software only when it receives more than one identical copy. To validate the proposition, analytical expressions for the average number of packet transmissions required for a successful software update are determined. Different cases are investigated depending on the vehicle's buffer size and verification method. The analytical and simulation results show that it is sufficient to send two copies of the software to the vehicle to thwart any security attack while uploading the software. The above-mentioned unicast method for RSU is suitable when software needs to be uploaded to a single vehicle. Since multicasting is the most efficient method of group communication, updating software in the ECUs of a large number of vehicles could benefit from it. However, as with unicast RSU, the security requirements of multicast communication, i.e., authenticity, confidentiality and integrity of the transmitted software and access control of the group members, are challenging. In this thesis, infrastructure-based mobile multicasting for RSU in vehicle ECUs is proposed, where an ECU receives the software from a remote software distribution center using the roadside BSs as gateways. The Vehicular Software Distribution Network (VSDN) is divided into small regions administered by a Regional Group Manager (RGM). Two multicast Group Key Management (GKM) techniques are proposed based on the degree of trust in the BSs, named the Fully-trusted (FT) and Semi-trusted (ST) systems. Analytical models are developed to find the multicast session establishment latency and handover latency for these two protocols. The average latency to perform mutual authentication of the software vendor and a vehicle, to send the multicast session key during multicast session initialization, and the handoff latency during a multicast session are calculated. Analytical and simulation results show that the link establishment latency per vehicle of the proposed schemes is in the range of a few seconds, with the ST system requiring a few milliseconds more than the FT system. The handoff latency is also in the range of a few seconds, and in some cases the ST system requires less handoff time than the FT system.
Thus, it is possible to build an efficient GKM protocol without placing too much trust in the BSs.
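A simplified Python sketch of the per-packet protection described above, using the cryptography package for symmetric encryption and an HMAC as the message digest. Key distribution and the mutual-authentication handshake are assumed to have already taken place, and the packet layout and key names are illustrative rather than the exact scheme analyzed in the thesis.

    import hmac, hashlib
    from cryptography.fernet import Fernet

    SECRET_KEY = Fernet.generate_key()        # shared secret key after mutual authentication (assumed)
    MD_KEY = b"shared-digest-key"             # hypothetical key for the message digest

    def make_packet(payload: bytes) -> bytes:
        cipher = Fernet(SECRET_KEY).encrypt(payload)
        digest = hmac.new(MD_KEY, cipher, hashlib.sha256).digest()
        return cipher + digest                # ciphertext followed by a 32-byte MD

    def verify_and_decrypt(packet: bytes):
        cipher, digest = packet[:-32], packet[-32:]
        if not hmac.compare_digest(digest, hmac.new(MD_KEY, cipher, hashlib.sha256).digest()):
            return None                       # tampered packet: request retransmission
        return Fernet(SECRET_KEY).decrypt(cipher)

    def vehicle_accepts(copies) -> bool:
        """Install only if at least two received copies decrypt to identical images."""
        images = [verify_and_decrypt(c) for c in copies]
        images = [i for i in images if i is not None]
        return len(images) >= 2 and len(set(images)) == 1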
FluxPyt: a Python-based free and open-source software for 13C-metabolic flux analyses.
Desai, Trunil S; Srivastava, Shireesh
2018-01-01
13C-Metabolic flux analysis (MFA) is a powerful approach to estimate intracellular reaction rates which could be used in strain analysis and design. Processing and analysis of labeling data for calculation of fluxes and associated statistics is an essential part of MFA. However, various software currently available for data analysis employ proprietary platforms and thus limit accessibility. We developed FluxPyt, a Python-based truly open-source software package for conducting stationary 13C-MFA data analysis. The software is based on the efficient elementary metabolite unit framework. The standard deviations in the calculated fluxes are estimated using the Monte-Carlo analysis. FluxPyt also automatically creates flux maps based on a template for visualization of the MFA results. The flux distributions calculated by FluxPyt for two separate models: a small tricarboxylic acid cycle model and a larger Corynebacterium glutamicum model, were found to be in good agreement with those calculated by a previously published software. FluxPyt was tested in Microsoft™ Windows 7 and 10, as well as in Linux Mint 18.2. The availability of a free and open 13C-MFA software that works in various operating systems will enable more researchers to perform 13C-MFA and to further modify and develop the package.
FluxPyt: a Python-based free and open-source software for 13C-metabolic flux analyses
Desai, Trunil S.
2018-01-01
13C-Metabolic flux analysis (MFA) is a powerful approach to estimate intracellular reaction rates which could be used in strain analysis and design. Processing and analysis of labeling data for calculation of fluxes and associated statistics is an essential part of MFA. However, various software currently available for data analysis employ proprietary platforms and thus limit accessibility. We developed FluxPyt, a Python-based truly open-source software package for conducting stationary 13C-MFA data analysis. The software is based on the efficient elementary metabolite unit framework. The standard deviations in the calculated fluxes are estimated using the Monte-Carlo analysis. FluxPyt also automatically creates flux maps based on a template for visualization of the MFA results. The flux distributions calculated by FluxPyt for two separate models: a small tricarboxylic acid cycle model and a larger Corynebacterium glutamicum model, were found to be in good agreement with those calculated by a previously published software. FluxPyt was tested in Microsoft™ Windows 7 and 10, as well as in Linux Mint 18.2. The availability of a free and open 13C-MFA software that works in various operating systems will enable more researchers to perform 13C-MFA and to further modify and develop the package. PMID:29736347
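The Monte-Carlo error estimation mentioned above can be sketched generically in Python: the measured labeling data are repeatedly perturbed with their measurement noise, the flux fit is re-run, and the spread of the re-fitted fluxes gives their standard deviations. The linear stand-in model below is only a placeholder for the elementary-metabolite-unit simulation actually used by FluxPyt.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)

    A = rng.random((8, 3))                 # placeholder linear map: fluxes -> labeling measurements
    true_flux = np.array([1.0, 0.4, 2.2])
    measured = A @ true_flux + rng.normal(0, 0.01, size=8)
    sigma = np.full(8, 0.01)               # measurement standard deviations

    def fit_fluxes(y):
        return least_squares(lambda v: (A @ v - y) / sigma, x0=np.ones(3)).x

    samples = np.array([fit_fluxes(measured + rng.normal(0, sigma)) for _ in range(500)])
    flux_mean, flux_std = samples.mean(axis=0), samples.std(axis=0, ddof=1)
    print("fluxes:", flux_mean, "+/-", flux_std)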
Distributed and Collaborative Software Analysis
NASA Astrophysics Data System (ADS)
Ghezzi, Giacomo; Gall, Harald C.
Throughout the years software engineers have come up with a myriad of specialized tools and techniques that focus on a certain type of
NASA Technical Reports Server (NTRS)
Pordes, Ruth (Editor)
1989-01-01
Papers on real-time computer applications in nuclear, particle, and plasma physics are presented, covering topics such as expert systems tactics in testing FASTBUS segment interconnect modules, trigger control in a high energy physics experiment, the FASTBUS read-out system for the Aleph time projection chamber, multiprocessor data acquisition systems, DAQ software architecture for Aleph, a VME multiprocessor system for plasma control at the JT-60 upgrade, and a multitasking, multisinked, multiprocessor data acquisition front end. Other topics include real-time data reduction using a microVAX processor, a transputer based coprocessor for VEDAS, simulation of a macropipelined multi-CPU event processor for use in FASTBUS, a distributed VME control system for the LISA superconducting Linac, and a distributed system for laboratory process automation. Additional topics include a structure macro assembler for the event handler, a data acquisition and control system for Thomson scattering on ATF, remote procedure execution software for distributed systems, and a PC-based graphic display of real-time particle beam uniformity.
NASA Astrophysics Data System (ADS)
Rico, H.; Hauksson, E.; Thomas, E.; Friberg, P.; Given, D.
2002-12-01
The California Integrated Seismic Network (CISN) Display is part of a Web-enabled earthquake notification system alerting users in near real-time of seismicity, and also valuable geophysical information following a large earthquake. It will replace the Caltech/USGS Broadcast of Earthquakes (CUBE) and Rapid Earthquake Data Integration (REDI) Display as the principal means of delivering graphical earthquake information to users at emergency operations centers, and other organizations. Features distinguishing the CISN Display from other GUI tools are a state-full client/server relationship, a scalable message format supporting automated hyperlink creation, and a configurable platform-independent client with a GIS mapping tool; supporting the decision-making activities of critical users. The CISN Display is the front-end of a client/server architecture known as the QuakeWatch system. It is comprised of the CISN Display (and other potential clients), message queues, server, server "feeder" modules, and messaging middleware, schema and generators. It is written in Java, making it platform-independent, and offering the latest in Internet technologies. QuakeWatch's object-oriented design allows components to be easily upgraded through a well-defined set of application programming interfaces (APIs). Central to the CISN Display's role as a gateway to other earthquake products is its comprehensive XML-schema. The message model starts with the CUBE message format, but extends it by provisioning additional attributes for currently available products, and those yet to be considered. The supporting metadata in the XML-message provides the data necessary for the client to create a hyperlink and associate it with a unique event ID. Earthquake products deliverable to the CISN Display are ShakeMap, Ground Displacement, Focal Mechanisms, Rapid Notifications, OES Reports, and Earthquake Commentaries. Leveraging the power of the XML-format, the CISN Display provides prompt access to earthquake information on the Web. The links are automatically created when product generators deliver CUBE formatted packets to a Quake Data Distribution System (QDDS) hub (new distribution methods may be used later). The "feeder" modules tap into the QDDS hub and convert the packets into XML-messages. These messages are forwarded to message queues, and then distributed to clients where URLs are dynamically created for these products and linked to events on the CISN Display map. The products may be downloaded out-of-band; and with the inclusion of a GIS mapping tool users can plot organizational assets on the CISN Display map and overlay them against key spectral data, such as ground accelerations. This gives Emergency Response Managers information useful in allocating limited personnel and resources after a major event. At the heart of the system's robustness is a well-established and reliable set of communication protocols for best-effort delivery of data. For critical users a Common Object Request Broker Architecture (CORBA) state-full connection is used via a dedicated signaling channel. The system employs several CORBA methods that alert users of changes in the link status. Loss of connectivity triggers a strategy that attempts to reconnect through various physical and logical paths. Thus, by building on past application successes and proven Internet advances the CISN Display targets a specific audience by providing enhancements previously not available from other applications.
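A small Python sketch of the kind of client-side step described above: parsing product metadata out of an XML event message and turning it into hyperlinks keyed to the event ID, as a GUI would when attaching links to a map symbol. The element and attribute names are invented for illustration; the actual CISN/QuakeWatch schema is not reproduced here.

    import xml.etree.ElementTree as ET

    SAMPLE = """
    <event id="ci1234567" mag="4.8" lat="34.05" lon="-118.25">
      <product type="ShakeMap" url="https://example.org/shakemap/ci1234567"/>
      <product type="FocalMechanism" url="https://example.org/fm/ci1234567"/>
    </event>
    """

    def product_links(xml_text: str) -> dict:
        """Map product type -> URL for one event so the client can create hyperlinks."""
        event = ET.fromstring(xml_text)
        return {p.get("type"): p.get("url") for p in event.findall("product")}

    print(product_links(SAMPLE))   # {'ShakeMap': '...', 'FocalMechanism': '...'}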
Autonomous power system brassboard
NASA Technical Reports Server (NTRS)
Merolla, Anthony
1992-01-01
The Autonomous Power System (APS) brassboard is a 20 kHz power distribution system which has been developed at NASA Lewis Research Center, Cleveland, Ohio. The brassboard exists to provide a realistic hardware platform capable of testing artificially intelligent (AI) software. The brassboard's power circuit topology is based upon a Power Distribution Control Unit (PDCU), which is a subset of an advanced development 20 kHz electrical power system (EPS) testbed, originally designed for Space Station Freedom (SSF). The APS program is designed to demonstrate the application of intelligent software as a fault detection, isolation, and recovery methodology for space power systems. This report discusses both the hardware and software elements used to construct the present configuration of the brassboard. The brassboard power components are described. These include the solid-state switches (herein referred to as switchgear), transformers, sources, and loads. Closely linked to this power portion of the brassboard is the first level of embedded control. Hardware used to implement this control and its associated software is discussed. An Ada software program, developed by Lewis Research Center's Space Station Freedom Directorate for their 20 kHz testbed, is used to control the brassboard's switchgear, as well as monitor key brassboard parameters through sensors located within these switches. The Ada code is downloaded from a PC/AT, and is resident within the 8086 microprocessor-based embedded controllers. The PC/AT is also used for smart terminal emulation, capable of controlling the switchgear as well as displaying data from them. Intelligent control is provided through use of a T1 Explorer and the Autonomous Power Expert (APEX) LISP software. Real-time load scheduling is implemented through use of a 'C' program-based scheduling engine. The methods of communication between these computers and the brassboard are explored. In order to evaluate the features of both the brassboard hardware and intelligent controlling software, fault circuits have been developed and integrated as part of the brassboard. A description of these fault circuits and their function is included. The brassboard has become an extremely useful test facility, promoting artificial intelligence (AI) applications for power distribution systems. However, there are elements of the brassboard which could be enhanced, thus improving system performance. Modifications and enhancements to improve the brassboard's operation are discussed.
Gis-Based Spatial Statistical Analysis of College Graduates Employment
NASA Astrophysics Data System (ADS)
Tang, R.
2012-07-01
It is urgently necessary to be aware of the distribution and employment status of college graduates for proper allocation of human resources and the overall arrangement of strategic industries. This study provides empirical evidence regarding the use of geocoding and spatial analysis on the distribution and employment status of college graduates, based on 2004-2008 data from the Wuhan Municipal Human Resources and Social Security Bureau, China. The spatio-temporal distribution of employment units was analyzed with geocoding using ArcGIS software, and the stepwise multiple linear regression method via SPSS software was used to predict employment and to identify spatially associated enterprise and professional demand in the future. The results show that the enterprises in the Wuhan East Lake High and New Technology Development Zone increased dramatically from 2004 to 2008 and tended to be distributed southeastward. Furthermore, the models built by statistical analysis suggest that the specialty graduates major in has an important impact on the number employed and the number of graduates engaging in pillar industries. In conclusion, the combination of GIS and statistical analysis, which helps to simulate the spatial distribution of employment status, is a potential tool for human resource development research.
The equipment access software for a distributed UNIX-based accelerator control system
NASA Astrophysics Data System (ADS)
Trofimov, Nikolai; Zelepoukine, Serguei; Zharkov, Eugeny; Charrue, Pierre; Gareyte, Claire; Poirier, Hervé
1994-12-01
This paper presents a generic equipment access software package for a distributed control system using computers with UNIX or UNIX-like operating systems. The package consists of three main components, an application Equipment Access Library, Message Handler and Equipment Data Base. An application task, which may run in any computer in the network, sends requests to access equipment through Equipment Library calls. The basic request is in the form Equipment-Action-Data and is routed via a remote procedure call to the computer to which the given equipment is connected. In this computer the request is received by the Message Handler. According to the type of the equipment connection, the Message Handler either passes the request to the specific process software in the same computer or forwards it to a lower level network of equipment controllers using MIL1553B, GPIB, RS232 or BITBUS communication. The answer is then returned to the calling application. Descriptive information required for request routing and processing is stored in the real-time Equipment Data Base. The package has been written to be portable and is currently available on DEC Ultrix, LynxOS, HPUX, XENIX, OS-9 and Apollo domain.
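A toy Python sketch of the Equipment-Action-Data request flow: the library looks up in an equipment data base which computer serves a device and forwards the request there, where a message-handler-like dispatcher calls the connection-specific code. The table contents and handler behaviour are hypothetical; the real package routes requests with remote procedure calls and talks to MIL1553B/GPIB/RS232/BITBUS field buses, which are not modelled here.

    # Hypothetical equipment data base: device name -> (host, connection type)
    EQUIPMENT_DB = {
        "BHZ10.MAGNET": ("fecomputer1", "gpib"),
        "PS3.SUPPLY":   ("fecomputer2", "rs232"),
    }

    def equipment_call(equipment: str, action: str, data=None):
        host, link = EQUIPMENT_DB[equipment]          # routing information
        # In the real system this would be a remote procedure call to `host`;
        # here we dispatch locally to a stand-in message handler.
        return message_handler(equipment, action, data, link)

    def message_handler(equipment: str, action: str, data, link: str):
        if link == "gpib":
            return f"GPIB {action} on {equipment} -> {data}"
        if link == "rs232":
            return f"RS232 {action} on {equipment} -> {data}"
        raise ValueError(f"unsupported connection type {link}")

    print(equipment_call("BHZ10.MAGNET", "SET_CURRENT", 12.5))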
Windows .NET Network Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST)
Dowd, Scot E; Zaragoza, Joaquin; Rodriguez, Javier R; Oliver, Melvin J; Payton, Paxton R
2005-01-01
Background BLAST is one of the most common and useful tools for Genetic Research. This paper describes a software application we have termed Windows .NET Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST), which enhances the BLAST utility by improving usability, fault recovery, and scalability in a Windows desktop environment. Our goal was to develop an easy to use, fault tolerant, high-throughput BLAST solution that incorporates a comprehensive BLAST result viewer with curation and annotation functionality. Results W.ND-BLAST is a comprehensive Windows-based software toolkit that targets researchers, including those with minimal computer skills, and provides the ability to increase the performance of BLAST by distributing BLAST queries to any number of Windows-based machines across local area networks (LAN). W.ND-BLAST provides intuitive Graphic User Interfaces (GUI) for BLAST database creation, BLAST execution, BLAST output evaluation and BLAST result exportation. This software also provides several layers of fault tolerance and fault recovery to prevent loss of data if nodes or master machines fail. This paper lays out the functionality of W.ND-BLAST. W.ND-BLAST displays close to 100% performance efficiency when distributing tasks to 12 remote computers of the same performance class. A high-throughput BLAST job which took 662.68 minutes (11 hours) on one average machine was completed in 44.97 minutes when distributed to 17 nodes, which included lower performance class machines. Finally, there are comprehensive high-throughput BLAST Output Viewer (BOV) and Annotation Engine components, which provide comprehensive exportation of BLAST hits to text files, annotated fasta files, tables, or association files. Conclusion W.ND-BLAST provides an interactive tool that allows scientists to easily utilize their available computing resources for high-throughput and comprehensive sequence analyses. The install package for W.ND-BLAST is freely downloadable from . With registration the software is free; installation, networking, and usage instructions are provided, as well as a support forum. PMID:15819992
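The distribution idea can be illustrated with a short Python sketch that splits a query set and farms the pieces out to worker processes running the NCBI blastn command line. This is a conceptual stand-in, not the Windows/.NET implementation described in the paper, and the chunk file names and database name are placeholders.

    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    QUERY_CHUNKS = ["chunk_01.fasta", "chunk_02.fasta", "chunk_03.fasta"]   # pre-split queries
    DATABASE = "nt_local"                                                   # placeholder BLAST database

    def run_blast(chunk: str) -> str:
        out = chunk.replace(".fasta", ".blast.txt")
        # Standard NCBI BLAST+ command-line options: query file, database, output file, tabular format
        subprocess.run(["blastn", "-query", chunk, "-db", DATABASE, "-out", out, "-outfmt", "6"],
                       check=True)
        return out

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=3) as pool:   # one worker per node/core in this toy setup
            results = list(pool.map(run_blast, QUERY_CHUNKS))
        print("finished:", results)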
Simulation on friction taper plug welding of AA6063-20Gr metal matrix composite
NASA Astrophysics Data System (ADS)
Hynes, N. Rajesh Jesudoss; Nithin, Abeyram M.
2016-05-01
Friction taper plug welding, a variant of friction welding, is useful in the welding of similar and dissimilar materials. It could be used for joining composites to metals in sophisticated aerospace applications. In the present work, numerical simulation of the friction taper plug welding process is carried out using finite element based software. Graphite-reinforced AA6063 is modelled using the software ANSYS 15.0 and the temperature distribution is predicted. The effect of friction time on temperature distribution is numerically investigated. When the friction time is increased to 30 seconds, the tapered part of the plug gets detached and fills the hole in the AA6063 plate perfectly.
Okayama optical polarimetry and spectroscopy system (OOPS) II. Network-transparent control software.
NASA Astrophysics Data System (ADS)
Sasaki, T.; Kurakami, T.; Shimizu, Y.; Yutani, M.
The control system of the OOPS (Okayama Optical Polarimetry and Spectroscopy system) is designed to integrate several instruments whose controllers are distributed over a network: the OOPS instrument, a CCD camera and data acquisition unit, the 91 cm telescope, an autoguider, a weather monitor, and the image display tool SAOimage. With the help of message-based communication, the control processes cooperate with related processes to perform an astronomical observation under the supervisory control of a scheduler process. A logger process collects status data from all the instruments and distributes them to related processes upon request. The software structure of each process is described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mijnheer, B; Mans, A; Olaciregui-Ruiz, I
Purpose: To develop a 3D in vivo dosimetry method that is able to substitute pre-treatment verification in an efficient way, and to terminate treatment delivery if the online measured 3D dose distribution deviates too much from the predicted dose distribution. Methods: A back-projection algorithm has been further developed and implemented to enable automatic 3D in vivo dose verification of IMRT/VMAT treatments using a-Si EPIDs. New software tools were clinically introduced to allow automated image acquisition, to periodically inspect the record-and-verify database, and to automatically run the EPID dosimetry software. The comparison of the EPID-reconstructed and planned dose distribution is done offline to raise automatically alerts and to schedule actions when deviations are detected. Furthermore, a software package for online dose reconstruction was also developed. The RMS of the difference between the cumulative planned and reconstructed 3D dose distributions was used for triggering a halt of a linac. Results: The implementation of fully automated 3D EPID-based in vivo dosimetry was able to replace pre-treatment verification for more than 90% of the patient treatments. The process has been fully automated and integrated in our clinical workflow where over 3,500 IMRT/VMAT treatments are verified each year. By optimizing the dose reconstruction algorithm and the I/O performance, the delivered 3D dose distribution is verified in less than 200 ms per portal image, which includes the comparison between the reconstructed and planned dose distribution. In this way it was possible to generate a trigger that can stop the irradiation at less than 20 cGy after introducing large delivery errors. Conclusion: The automatic offline solution facilitated the large scale clinical implementation of 3D EPID-based in vivo dose verification of IMRT/VMAT treatments; the online approach has been successfully tested for various severe delivery errors.
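A numpy sketch of the kind of cumulative-dose check described above: after each portal image the reconstructed dose is accumulated and its RMS deviation from the expected fraction of the planned distribution is compared with a threshold that would trigger a beam hold. The threshold value, the assumption of uniform dose per image, and the array shapes are illustrative only, not clinical settings.

    import numpy as np

    RMS_THRESHOLD_GY = 0.2          # illustrative trip level, not a clinical value

    def rms_difference(reconstructed: np.ndarray, planned: np.ndarray) -> float:
        return float(np.sqrt(np.mean((reconstructed - planned) ** 2)))

    def monitor_delivery(planned: np.ndarray, per_image_doses):
        """Accumulate EPID-reconstructed dose image by image; halt if the RMS deviation grows too large."""
        cumulative = np.zeros_like(planned)
        n_images = len(per_image_doses)
        for i, dose in enumerate(per_image_doses, start=1):
            cumulative += dose
            expected = planned * (i / n_images)        # assumes uniform delivery per image
            rms = rms_difference(cumulative, expected)
            if rms > RMS_THRESHOLD_GY:
                return f"HALT after image {i}: RMS = {rms:.3f} Gy"
        return "delivery completed within tolerance"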
Software metrics: Software quality metrics for distributed systems. [reliability engineering
NASA Technical Reports Server (NTRS)
Post, J. V.
1981-01-01
Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.
The Web Measurement Environment (WebME): A Tool for Combining and Modeling Distributed Data
NASA Technical Reports Server (NTRS)
Tesoriero, Roseanne; Zelkowitz, Marvin
1997-01-01
Many organizations have incorporated data collection into their software processes for the purpose of process improvement. However, in order to improve, interpreting the data is just as important as the collection of data. With the increased presence of the Internet and the ubiquity of the World Wide Web, the potential for software processes being distributed among several physically separated locations has also grown. Because project data may be stored in multiple locations and in differing formats, obtaining and interpreting data from this type of environment becomes even more complicated. The Web Measurement Environment (WebME), a Web-based data visualization tool, is being developed to facilitate the understanding of collected data in a distributed environment. The WebME system will permit the analysis of development data in distributed, heterogeneous environments. This paper provides an overview of the system and its capabilities.
Distributed network scheduling
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Schaffer, Steven R.
2004-01-01
Distributed Network Scheduling is the scheduling of future communications of a network by nodes in the network. This report details software for doing this onboard spacecraft in a remote network. While prior work on distributed scheduling has been applied to remote spacecraft networks, the software reported here focuses on modeling communication activities in greater detail and on including quality-of-service constraints. Our main results are based on a Mars network of spacecraft and include identifying a maximum opportunity of improving the traverse exploration rate by a factor of three; a simulation showing reduction in one-way delivery times from a rover to Earth from as much as 5 hours to 1.5 hours; simulated response to unexpected events averaging under an hour onboard; and ground schedule generation ranging from seconds to 50 minutes for 15 to 100 communication goals.
BioContainers: an open-source and community-driven framework for software standardization.
da Veiga Leprevost, Felipe; Grüning, Björn A; Alves Aflitos, Saulo; Röst, Hannes L; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I; Perez-Riverol, Yasset
2017-08-15
BioContainers (biocontainers.pro) is an open-source and community-driven framework which provides platform independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software and combine tools into powerful analysis pipelines. BioContainers is based on the popular open-source Docker and rkt frameworks, which allow software to be installed and executed under an isolated and controlled environment. Also, it provides infrastructure and basic guidelines to create, manage and distribute bioinformatics containers with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). The software is freely available at github.com/BioContainers/. Contact: yperez@ebi.ac.uk. © The Author(s) 2017. Published by Oxford University Press.
BioContainers: an open-source and community-driven framework for software standardization
da Veiga Leprevost, Felipe; Grüning, Björn A.; Alves Aflitos, Saulo; Röst, Hannes L.; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C.; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I.; Perez-Riverol, Yasset
2017-01-01
Motivation: BioContainers (biocontainers.pro) is an open-source and community-driven framework which provides platform independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software and combine tools into powerful analysis pipelines. BioContainers is based on the popular open-source Docker and rkt frameworks, which allow software to be installed and executed under an isolated and controlled environment. Also, it provides infrastructure and basic guidelines to create, manage and distribute bioinformatics containers with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). Availability and Implementation: The software is freely available at github.com/BioContainers/. Contact: yperez@ebi.ac.uk. PMID:28379341
Understanding and Predicting the Process of Software Maintenance Releases
NASA Technical Reports Server (NTRS)
Basili, Victor; Briand, Lionel; Condon, Steven; Kim, Yong-Mi; Melo, Walcelio L.; Valett, Jon D.
1996-01-01
One of the major concerns of any maintenance organization is to understand and estimate the cost of maintenance releases of software systems. Planning the next release so as to maximize the increase in functionality and the improvement in quality are vital to successful maintenance management. The objective of this paper is to present the results of a case study in which an incremental approach was used to better understand the effort distribution of releases and build a predictive effort model for software maintenance releases. This study was conducted in the Flight Dynamics Division (FDD) of NASA Goddard Space Flight Center(GSFC). This paper presents three main results: 1) a predictive effort model developed for the FDD's software maintenance release process; 2) measurement-based lessons learned about the maintenance process in the FDD; and 3) a set of lessons learned about the establishment of a measurement-based software maintenance improvement program. In addition, this study provides insights and guidelines for obtaining similar results in other maintenance organizations.
Software structure for Vega/Chara instrument
NASA Astrophysics Data System (ADS)
Clausse, J.-M.
2008-07-01
VEGA (Visible spEctroGraph and polArimeter) is one of the focal instruments of the CHARA array at Mount Wilson near Los Angeles. Its control system is based on techniques developed on the GI2T interferometer (Grand Interferometre a 2 Telescopes) and on the SIRIUS fibered hyper telescope testbed at OCA (Observatoire de la Cote d'Azur). This article describes the software and electronics architecture of the instrument. It is based on a local network architecture and also uses Virtual Private Network connections. The server part is based on Windows XP (VC++). The control software is on Linux (C, GTK). For the control of the science detector and the fringe tracking systems, distributed APIs use real-time techniques. The control software gathers all the necessary information about the instrument and allows automatic management of the instrument by using an original task scheduler. This architecture is intended to allow driving the instrument from remote sites, such as our institute in the south of France.
Distributed Engine Control Empirical/Analytical Verification Tools
NASA Technical Reports Server (NTRS)
DeCastro, Jonathan; Hettler, Eric; Yedavalli, Rama; Mitra, Sayan
2013-01-01
NASA's vision for an intelligent engine will be realized with the development of a truly distributed control system featuring highly reliable, modular, and dependable components capable of both surviving the harsh engine operating environment and decentralized functionality. A set of control system verification tools was developed and applied to a C-MAPSS40K engine model, and metrics were established to assess the stability and performance of these control systems on the same platform. A software tool was developed that allows designers to assemble easily a distributed control system in software and immediately assess the overall impacts of the system on the target (simulated) platform, allowing control system designers to converge rapidly on acceptable architectures with consideration to all required hardware elements. The software developed in this program will be installed on a distributed hardware-in-the-loop (DHIL) simulation tool to assist NASA and the Distributed Engine Control Working Group (DECWG) in integrating DCS (distributed engine control systems) components onto existing and next-generation engines.The distributed engine control simulator blockset for MATLAB/Simulink and hardware simulator provides the capability to simulate virtual subcomponents, as well as swap actual subcomponents for hardware-in-the-loop (HIL) analysis. Subcomponents can be the communication network, smart sensor or actuator nodes, or a centralized control system. The distributed engine control blockset for MATLAB/Simulink is a software development tool. The software includes an engine simulation, a communication network simulation, control algorithms, and analysis algorithms set up in a modular environment for rapid simulation of different network architectures; the hardware consists of an embedded device running parts of the CMAPSS engine simulator and controlled through Simulink. The distributed engine control simulation, evaluation, and analysis technology provides unique capabilities to study the effects of a given change to the control system in the context of the distributed paradigm. The simulation tool can support treatment of all components within the control system, both virtual and real; these include communication data network, smart sensor and actuator nodes, centralized control system (FADEC full authority digital engine control), and the aircraft engine itself. The DECsim tool can allow simulation-based prototyping of control laws, control architectures, and decentralization strategies before hardware is integrated into the system. With the configuration specified, the simulator allows a variety of key factors to be systematically assessed. Such factors include control system performance, reliability, weight, and bandwidth utilization.
Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing
Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon
2011-01-01
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data. PMID:22163811
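A minimal Python sketch in the spirit of such a run-time monitor, sampling CPU and memory use of a target process with the psutil package and adjusting its CPU affinity when utilisation stays low. The sampling interval, threshold, and adaptation rule are invented for illustration and are not the RTM algorithm itself.

    import psutil

    def monitor_and_adapt(pid: int, samples: int = 10, low_cpu: float = 20.0) -> None:
        proc = psutil.Process(pid)
        usage = []
        for _ in range(samples):
            usage.append(proc.cpu_percent(interval=1.0))      # % of one core over 1 s
        mem_mb = proc.memory_info().rss / 2**20
        avg = sum(usage) / len(usage)
        print(f"avg cpu {avg:.1f}%  rss {mem_mb:.0f} MiB")
        # Toy adaptation: if the process is mostly idle, confine it to a single core
        # so the remaining cores can be handed to other services or virtual machines.
        if avg < low_cpu and hasattr(proc, "cpu_affinity"):
            proc.cpu_affinity([0])

    if __name__ == "__main__":
        import os
        monitor_and_adapt(os.getpid())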
3D Fiber Orientation Simulation for Plastic Injection Molding
NASA Astrophysics Data System (ADS)
Lin, Baojiu; Jin, Xiaoshi; Zheng, Rong; Costa, Franco S.; Fan, Zhiliang
2004-06-01
Glass fiber reinforced polymer is widely used in products made by injection molding. The distribution of fiber orientation inside plastic parts has direct effects on the quality of molded parts. Using computer simulation to predict fiber orientation distribution is one of the most efficient ways to assist engineers in warpage analysis and in finding a good design solution to produce high quality plastic parts. Fiber orientation simulation software based on 2-1/2D (midplane/dual domain mesh) techniques has been used in industry for a decade. However, the 2-1/2D technique is based on the planar Hele-Shaw approximation and is not suitable when the geometry has complex three-dimensional features which cannot be well approximated by 2D shells. Recently, full 3D fiber orientation simulation software has been developed and integrated into the Moldflow Plastics Insight 3D simulation software. The theory for this new 3D fiber orientation calculation module is described in this paper. Several examples are also presented to show the benefit of using 3D fiber orientation simulation.
Design and development of a run-time monitor for multi-core architectures in cloud computing.
Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon
2011-01-01
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data.
Logical optimization for database uniformization
NASA Technical Reports Server (NTRS)
Grant, J.
1984-01-01
Data base uniformization refers to the building of a common user interface facility to support uniform access to any or all of a collection of distributed heterogeneous data bases. Such a system should enable a user, situated anywhere along a set of distributed data bases, to access all of the information in the data bases without having to learn the various data manipulation languages. Furthermore, such a system should leave intact the component data bases, and in particular, their already existing software. A survey of various aspects of the data bases uniformization problem and a proposed solution are presented.
JIP: Java image processing on the Internet
NASA Astrophysics Data System (ADS)
Wang, Dongyan; Lin, Bo; Zhang, Jun
1998-12-01
In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or other areas such as employee training and charged software consumption.
Research on cross - Project software defect prediction based on transfer learning
NASA Astrophysics Data System (ADS)
Chen, Ya; Ding, Xiaoming
2018-04-01
To address the two challenges in cross-project software defect prediction, namely the distribution differences between the source project and target project datasets and the class imbalance in the data, a cross-project software defect prediction method based on transfer learning, named NTrA, is proposed. First, the class imbalance of the source project data is resolved with the Augmented Neighborhood Cleaning Algorithm. Second, the data gravity method is used to assign different weights based on the attribute similarity of the source project and target project data. Finally, a defect prediction model is constructed using the TrAdaBoost algorithm. Experiments were conducted using data from NASA and SOFTLAB, taken from a published PROMISE dataset. The results show that the method achieves good recall and F-measure values and good prediction results.
Anti-islanding Protection of Distributed Generation Using Rate of Change of Impedance
NASA Astrophysics Data System (ADS)
Shah, Pragnesh; Bhalja, Bhavesh
2013-08-01
Distributed Generation (DG), which is interconnected with the distribution system, inevitably affects that system. Integrating DG with the utility network demands an anti-islanding scheme to protect the system. Failure to trip islanded generators can lead to problems such as threats to personnel safety, out-of-phase reclosing, and degradation of power quality. In this article, a new method for anti-islanding protection based on impedance monitoring of the distribution network in the presence of DG is presented. The impedance measured between two phases is used to derive the rate of change of impedance (dz/dt), and its peak values are used for the final trip decision. Test data are generated using the PSCAD/EMTDC software package, and the performance of the proposed method is evaluated in MATLAB. The simulation results show the effectiveness of the proposed scheme, as it is capable of detecting islanding conditions accurately. It is also observed that the proposed scheme does not mal-operate during other disturbances such as short circuits and switching events.
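A small numpy sketch of the decision rule described above: the phase-to-phase impedance is estimated from sampled voltage and current, its rate of change dz/dt is computed numerically, and a trip is declared when the peak exceeds a threshold. The sample data, sampling rate, and threshold are illustrative, not the settings validated in PSCAD/EMTDC and MATLAB.

    import numpy as np

    FS = 1000.0              # sampling rate in Hz (illustrative)
    DZ_DT_TRIP = 50.0        # ohm-per-second threshold (illustrative)

    def islanding_trip(v_rms: np.ndarray, i_rms: np.ndarray) -> bool:
        """Return True if the rate of change of impedance suggests an islanding event."""
        z = v_rms / np.maximum(i_rms, 1e-6)        # phase-to-phase impedance magnitude
        dz_dt = np.gradient(z, 1.0 / FS)           # numerical derivative in ohm/s
        return bool(np.max(np.abs(dz_dt)) > DZ_DT_TRIP)

    # Toy example: impedance jumps when the utility breaker opens at t = 0.5 s
    t = np.arange(0, 1, 1 / FS)
    voltage = np.full_like(t, 230.0)
    current = np.where(t < 0.5, 10.0, 4.0)         # load current drops after islanding
    print("trip:", islanding_trip(voltage, current))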
OASIS: a data and software distribution service for Open Science Grid
NASA Astrophysics Data System (ADS)
Bockelman, B.; Caballero Bejar, J.; De Stefano, J.; Hover, J.; Quick, R.; Teige, S.
2014-06-01
The Open Science Grid encourages the concept of software portability: a user's scientific application should be able to run at as many sites as possible. It is necessary to provide a mechanism for OSG Virtual Organizations to install software at sites. Since its initial release, the OSG Compute Element has provided an application software installation directory to Virtual Organizations, where they can create their own sub-directory, install software into that sub-directory, and have the directory shared on the worker nodes at that site. The current model has shortcomings with regard to permissions, policies, versioning, and the lack of a unified, collective procedure or toolset for deploying software across all sites. Therefore, a new mechanism for data and software distribution is desirable. The architecture for the OSG Application Software Installation Service (OASIS) is a server-client model: the software and data are installed only once in a single place, and are automatically distributed to all client sites simultaneously. Central file distribution offers other advantages, including server-side authentication and authorization, activity records, quota management, data validation and inspection, and well-defined versioning and deletion policies. The architecture, as well as a complete analysis of the current implementation, will be described in this paper.
Assessing risk based on uncertain avalanche activity patterns
NASA Astrophysics Data System (ADS)
Zeidler, Antonia; Fromm, Reinhard
2015-04-01
Avalanches may affect critical infrastructure and may cause great economic losses. The planning horizon of infrastructure, e.g. hydropower generation facilities, reaches well into the future. Based on the results of previous studies on the effect of changing meteorological parameters (precipitation, temperature) on avalanche activity, we assume that the risk pattern will change in the future. Decision makers need to understand what the future might bring in order to best formulate their mitigation strategies. Therefore, we explore commercial risk software to calculate risk for the coming years that might help in decision processes. The software @risk is known to many larger companies, and we therefore explore its capability to include avalanche risk simulations in order to guarantee comparability of different risks. In a first step, we develop a model for a hydropower generation facility that reflects the problem of changing avalanche activity patterns in the future by selecting relevant input parameters and assigning likely probability distributions. The uncertain input variables include the probability of avalanches affecting an object, the vulnerability of an object, the expected costs for repairing the object, and the expected cost due to interruption. The crux is to find the distribution that best represents the input variables under changing meteorological conditions. Our focus is on including the uncertain probability of avalanches based on the analysis of past avalanche data and expert knowledge. In order to explore different likely outcomes, we base the analysis on three different climate scenarios (likely, worst case, baseline). For some variables it is possible to fit a distribution to historical data, whereas in cases where the past dataset is insufficient or not available the software allows a selection from over 30 different distribution types. The Monte Carlo simulation samples the probability distributions of the uncertain variables over all valid combinations of input values to simulate all possible outcomes. In our case the output is the expected risk (Euro/year) for each object considered (e.g. a water intake) and for the entire hydropower generation system. The output is again a distribution that is interpreted by the decision makers, as the final strategy depends on the needs and requirements of the end-user, which may be driven by personal preferences. In this presentation we will show how we used the uncertain information on future avalanche activity in commercial risk software, thereby bringing the knowledge of natural hazard experts to decision makers.
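The Monte-Carlo structure described above can be sketched in a few lines of Python instead of @risk: each uncertain input is drawn from an assumed distribution, and the annual risk (Euro/year) of an object is simulated many times to give an output distribution. All distribution choices and parameter values below are placeholders, not the ones elicited for the hydropower facility.

    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000                                          # Monte Carlo draws

    # Assumed input distributions (placeholders)
    p_hit = rng.beta(2, 40, N)                           # annual probability an avalanche reaches the object
    vulnerability = rng.triangular(0.1, 0.4, 0.9, N)     # fraction of value damaged if hit
    repair_cost = rng.lognormal(mean=12.0, sigma=0.5, size=N)        # Euro
    interruption_cost = rng.lognormal(mean=11.0, sigma=0.7, size=N)  # Euro

    risk = p_hit * (vulnerability * repair_cost + interruption_cost)   # Euro/year per draw

    print(f"expected risk: {risk.mean():,.0f} Euro/year")
    print(f"5th-95th percentile: {np.percentile(risk, 5):,.0f} - {np.percentile(risk, 95):,.0f}")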
Akiyama, M
2001-01-01
The Hospital Information System (HIS) has been positioned as the hub of the healthcare information management architecture. In Japan, the billing system assigns an "insurance disease names" to performed exams based on the diagnosis type. Departmental systems provide localized, departmental services, such as order receipt and diagnostic reporting, but do not provide patient demographic information. The system above has many problems. The departmental system's terminals and the HIS's terminals are not integrated. Duplicate data entry introduces errors and increases workloads. Order and exam data managed by the HIS can be sent to the billing system, but departmental data cannot usually be entered. Additionally, billing systems usually keep departmental data for only a short time before it is deleted. The billing system provides payment based on what is entered. The billing system is oriented towards diagnoses. Most importantly, the system is geared towards generating billing reports rather than at providing high-quality patient care. The role of the application server is that of a mediator between system components. Data and events generated by system components are sent to the application server that routes them to appropriate destinations. It also records all system events, including state changes to clinical data, access of clinical data and so on. Finally, the Resource Management System identifies all system resources available to the enterprise. The departmental systems are responsible for managing data and clinical processes at a departmental level. The client interacts with the system via the application server, which provides a general set of system-level functions. The system is implemented using current technologies CORBA and HTTP. System data is collected by the application server and assembled into XML documents for delivery to clients. Clients can access these URLs using standard HTTP clients, since each department provides an HTTP compliant web-server. We have implemented an integrated system communicating via CORBA middleware, consisting of an application server, endoscopy departmental server, pathology departmental server and wrappered legacy HIS. We have found this new approach solves the problems outlined earlier. It provides the services needed to ensure that data is never lost and is always available, that events that occur in the hospital are always captured, and that resources are managed and tracked effectively. Finally, it reduces costs, raises efficiency, increases the quality of patient care, and ultimately saves lives. Now, we are going to integrate all remaining hospital departments, and ultimately, all hospital functions.
Managing Communication among Geographically Distributed Teams: A Brazilian Case
NASA Astrophysics Data System (ADS)
Almeida, Ana Carina M.; de Farias Junior, Ivaldir H.; de S. Carneiro, Pedro Jorge
The growing demand for qualified professionals is making software companies opt for distributed software development (DSD). At project conception, communication and synchronization of information are critical factors for success. However, problems such as time-zone differences between teams, culture, language, and different development processes among sites can hamper communication among teams. Thus, the main goal of this paper is to describe the solution adopted by a Brazilian team to improve communication in a multisite project environment. The proposed solution was based on best practices described in the literature, and the communication plan was created based on the infrastructure needed by the project. The outcome of this work is to minimize the impact of communication issues in multisite projects, increasing productivity and mutual understanding and avoiding rework on code and document writing.
Software for Simulation of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Richtsmeier, Steven C.; Singer-Berk, Alexander; Bernstein, Lawrence S.
2002-01-01
A package of software generates simulated hyperspectral images for use in validating algorithms that generate estimates of Earth-surface spectral reflectance from hyperspectral images acquired by airborne and spaceborne instruments. This software is based on a direct simulation Monte Carlo approach for modeling three-dimensional atmospheric radiative transport as well as surfaces characterized by spatially inhomogeneous bidirectional reflectance distribution functions. In this approach, 'ground truth' is accurately known through input specification of surface and atmospheric properties, and it is practical to consider wide variations of these properties. The software can treat both land and ocean surfaces and the effects of finite clouds with surface shadowing. The spectral/spatial data cubes computed by use of this software can serve both as a substitute for and a supplement to field validation data.
Towards a Community Environmental Observation Network
NASA Astrophysics Data System (ADS)
Mertl, Stefan; Lettenbichler, Anton
2014-05-01
The Community Environmental Observation Network (CEON) is dedicated to the development of a free sensor network to collect and distribute environmental data (e.g. ground shaking, climate parameters). The data collection will be done with contributions from citizens, research institutions, and public authorities such as communities or schools. This will lead to a large, freely available data base which can be used for public information, research, the arts, and more. To start a free sensor network, the most important step is to provide easy access to free data collection and distribution tools. The initial aims of the project CEON are dedicated to the development of these tools. A high quality data logger based on open hardware and free software is being developed, and a software suite of already existing free software for near-real-time data communication and data distribution over the Internet will be assembled. Foremost, the development focuses on the collection of data related to the deformation of the earth (such as ground shaking, surface displacement of mass movements and glaciers) and the collection of climate data. The extension to other measurements will be considered in the design. The data logger is built using open hardware prototyping platforms like BeagleBone Black and Arduino. The main features of the data logger are: a 24-bit analog-to-digital converter; a GPS module for time reference and positioning; wireless mesh networking using Optimized Link State Routing; near real-time data transmission and communication; and near real-time differential GNSS positioning using the RTKLIB software. The project CEON is supported by the Internet Foundation Austria (IPA) within the NetIdee 2013 call.
NASA Astrophysics Data System (ADS)
Ames, D. P.; Osorio-Murillo, C.; Over, M. W.; Rubin, Y.
2012-12-01
The Method of Anchored Distributions (MAD) is an inverse modeling technique that is well-suited for estimation of spatially varying parameter fields using limited observations and Bayesian methods. This presentation will discuss the design, development, and testing of a free software implementation of the MAD technique using the open source DotSpatial geographic information system (GIS) framework, R statistical software, and the MODFLOW groundwater model. This new tool, dubbed MAD-GIS, is built using a modular architecture that supports the integration of external analytical tools and models for key computational processes including a forward model (e.g. MODFLOW, HYDRUS) and geostatistical analysis (e.g. R, GSLIB). The GIS-based graphical user interface provides a relatively simple way for new users of the technique to prepare the spatial domain, to identify observation and anchor points, to perform the MAD analysis using a selected forward model, and to view results. MAD-GIS uses the Managed Extensibility Framework (MEF) provided by the Microsoft .NET programming platform to support integration of different modeling and analytical tools at run-time through a custom "driver." Each driver establishes a connection with external programs through a programming interface, which provides the elements for communicating with the core MAD software. This presentation gives an example of adapting MODFLOW to serve as the external forward model in MAD-GIS for inferring the distribution functions of key MODFLOW parameters. Additional drivers for other models are being developed, and it is expected that the open source nature of the project will engender the development of additional model drivers by third-party scientists.
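To make the driver idea concrete, the sketch below shows a plug-in registry for forward-model drivers in Python. This is only an illustration of the pattern; MAD-GIS itself realizes it with the .NET Managed Extensibility Framework, and all class, method, and registry names here are hypothetical.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the "driver" concept: each external forward model is
# wrapped behind a common interface and discovered through a registry.
# MAD-GIS realizes this with .NET MEF; the names below are illustrative only.
class ForwardModelDriver(ABC):
    name: str

    @abstractmethod
    def write_inputs(self, parameter_field):
        """Translate a candidate parameter field into the model's input files."""

    @abstractmethod
    def run(self):
        """Execute the external model (e.g., by launching its binary)."""

    @abstractmethod
    def read_outputs(self):
        """Return simulated observations for comparison with field data."""

DRIVERS = {}

def register(driver_cls):
    DRIVERS[driver_cls.name] = driver_cls
    return driver_cls

@register
class ModflowDriver(ForwardModelDriver):
    name = "MODFLOW"
    def write_inputs(self, parameter_field): ...
    def run(self): ...
    def read_outputs(self): return []

# The core inversion engine only ever talks to the abstract interface:
driver = DRIVERS["MODFLOW"]()
```

With an interface like this, adding support for another forward model (for example HYDRUS) would amount to registering one more driver class.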
Aho-Corasick String Matching on Shared and Distributed Memory Parallel Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel
String matching is at the core of many critical applications, including network intrusion detection systems, search engines, virus scanners, spam filters, DNA and protein sequencing, and data mining. For all of these applications string matching requires a combination of (sometimes all) the following characteristics: high and/or predictable performance, support for large data sets and flexibility of integration and customization. Many software-based implementations targeting conventional cache-based microprocessors fail to achieve high and predictable performance requirements, while Field-Programmable Gate Array (FPGA) implementations and dedicated hardware solutions fail to support large data sets (dictionary sizes) and are difficult to integrate and customize. The advent of multicore, multithreaded, and GPU-based systems is opening the possibility for software-based solutions to reach very high performance at a sustained rate. This paper compares several software-based implementations of the Aho-Corasick string searching algorithm for high performance systems. We discuss the implementation of the algorithm on several types of shared-memory high-performance architectures (Niagara 2, large x86 SMPs and Cray XMT), distributed memory with homogeneous processing elements (InfiniBand cluster of x86 multicores) and heterogeneous processing elements (InfiniBand cluster of x86 multicores with NVIDIA Tesla C10 GPUs). We describe in detail how each solution achieves the objectives of supporting large dictionaries, sustaining high performance, and enabling customization and flexibility using various data sets.
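For reference, the core algorithm being benchmarked can be sketched compactly. The single-threaded Python version below builds the goto/failure/output automaton and streams a text through it; it illustrates the algorithm only and is not the paper's optimized shared- or distributed-memory implementation.

```python
from collections import deque

def build_automaton(patterns):
    """Build Aho-Corasick goto/failure/output tables from a list of patterns."""
    goto = [{}]          # goto[state][char] -> next state
    fail = [0]           # failure links
    output = [set()]     # patterns recognized at each state
    for pat in patterns:
        state = 0
        for ch in pat:
            if ch not in goto[state]:
                goto.append({})
                fail.append(0)
                output.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        output[state].add(pat)
    # Breadth-first pass to set failure links and merge outputs.
    queue = deque(goto[0].values())
    while queue:
        s = queue.popleft()
        for ch, nxt in goto[s].items():
            queue.append(nxt)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            output[nxt] |= output[fail[nxt]]
    return goto, fail, output

def search(text, goto, fail, output):
    """Yield (end_index, pattern) for every dictionary match in text."""
    state = 0
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for pat in output[state]:
            yield i, pat

tables = build_automaton(["he", "she", "his", "hers"])
print(list(search("ushers", *tables)))
```

Scanning "ushers" reports the matches "she", "he" and "hers" with their end positions, the classic textbook example.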
Project Management Software for Distributed Industrial Companies
NASA Astrophysics Data System (ADS)
Dobrojević, M.; Medjo, B.; Rakin, M.; Sedmak, A.
This paper gives an overview of the development of a new software solution for project management, intended mainly to use in industrial environment. The main concern of the proposed solution is application in everyday engineering practice in various, mainly distributed industrial companies. Having this in mind, special care has been devoted to development of appropriate tools for tracking, storing and analysis of the information about the project, and in-time delivering to the right team members or other responsible persons. The proposed solution is Internet-based and uses LAMP/WAMP (Linux or Windows - Apache - MySQL - PHP) platform, because of its stability, versatility, open source technology and simple maintenance. Modular structure of the software makes it easy for customization according to client specific needs, with a very short implementation period. Its main advantages are simple usage, quick implementation, easy system maintenance, short training and only basic computer skills needed for operators.
Evolutionary Telemetry and Command Processor (TCP) architecture
NASA Technical Reports Server (NTRS)
Schneider, John R.
1992-01-01
A low cost, modular, high performance, and compact Telemetry and Command Processor (TCP) is being built as the foundation of command and data handling subsystems for the next generation of satellites. The TCP product line will support command and telemetry requirements for small to large spacecraft and from low to high rate data transmission. It is compatible with the latest TDRSS, STDN and SGLS transponders and provides CCSDS protocol communications in addition to standard TDM formats. Its high performance computer provides computing resources for hosted flight software. Layered and modular software provides common services using standardized interfaces to applications, thereby enhancing software re-use, transportability, and interoperability. The TCP architecture is based on existing standards, distributed networking, distributed and open system computing, and packet technology. The first TCP application is planned for the 1994 SDIO SPAS 3 mission. The architecture supports rapid tailoring of functions, thereby reducing the costs and schedules of development for individual spacecraft missions.
T-LECS: The Control Software System for MOIRCS
NASA Astrophysics Data System (ADS)
Yoshikawa, T.; Omata, K.; Konishi, M.; Ichikawa, T.; Suzuki, R.; Tokoku, C.; Katsuno, Y.; Nishimura, T.
2006-07-01
MOIRCS (Multi-Object Infrared Camera and Spectrograph) is a new instrument for the Subaru Telescope. We present the system design of the control software system for MOIRCS, named T-LECS (Tohoku University - Layered Electronic Control System). T-LECS is a PC-Linux based network distributed system. Two PCs equipped with the focal plane array system operate the two HAWAII2 detectors, respectively, and another PC is used for user interfaces and a database server. These PCs also control various devices for observations distributed on a TCP/IP network. T-LECS has three interfaces: an interface to the devices and two user interfaces. One of the user interfaces connects to the integrated observation control system (Subaru Observation Software System) for observers, and the other provides system developers with direct access to the devices of MOIRCS. To help the communication between these interfaces, we employ an SQL database system.
Müller-Linow, Mark; Pinto-Espinosa, Francisco; Scharr, Hanno; Rascher, Uwe
2015-01-01
Three-dimensional canopies form complex architectures with temporally and spatially changing leaf orientations. Variations in canopy structure are linked to canopy function, and they occur within the scope of genetic variability as well as in reaction to environmental factors like light, water and nutrient supply, and stress. An important key measure to characterize these structural properties is the leaf angle distribution, which in turn requires knowledge of the three-dimensional single leaf surface. Despite a large number of 3-d sensors and methods, only a few systems are applicable for fast and routine measurements in plants and natural canopies. A suitable approach is stereo imaging, which combines depth and color information and allows for easy segmentation of green leaf material and the extraction of plant traits such as leaf angle distribution. We developed a software package which provides tools for the quantification of leaf surface properties within natural canopies via 3-d reconstruction from stereo images. Our approach includes a semi-automatic selection process for single leaves and different modes of surface characterization via polygon smoothing or surface model fitting. Based on the resulting surface meshes, leaf angle statistics are computed at the whole-leaf level or from local derivations. We include a case study to demonstrate the functionality of our software. Forty-eight images of small sugar beet populations (4 varieties) were analyzed on the basis of their leaf angle distribution in order to investigate seasonal, genotypic and fertilization effects on leaf angle distributions. We could show that leaf angle distributions change during the course of the season, with all varieties having a comparable development. Additionally, different varieties had different leaf angle orientations that could be separated by principal component analysis. In contrast, nitrogen treatment had no effect on leaf angles. We show that a stereo imaging setup together with the appropriate image processing tools is capable of retrieving the geometric leaf surface properties of plants and canopies. Our software package provides whole-leaf statistics but also a local estimation of leaf angles, which may have great potential for better understanding and quantifying structural canopy traits for guided breeding and optimized crop management.
NASA Astrophysics Data System (ADS)
Leuchter, S.; Reinert, F.; Müller, W.
2014-06-01
Procurement and design of system architectures capable of network centric operations demand an assessment scheme in order to compare different alternative realizations. In this contribution an assessment method for system architectures targeted at the C4ISR domain is presented. The method addresses the integration capability of software systems from a complex and distributed software system perspective, focusing on communication, interfaces and software. The aim is to evaluate the capability to integrate a system or its functions within a system-of-systems network. This method uses approaches from software architecture quality assessment and applies them at the system architecture level. It features a specific goal tree of several dimensions that are relevant for enterprise integration. These dimensions have to be weighed against each other and aggregated using methods from normative decision theory in order to reflect the intention of the particular enterprise integration effort. The indicators and measurements for many of the considered quality features rely on a model-based view of systems, networks, and the enterprise. That means the method is applicable to system-of-systems specifications based on enterprise architectural frameworks relying on defined meta-models or domain ontologies for defining views and viewpoints. In the defense context we use the NATO Architecture Framework (NAF) to ground the respective system models. The proposed assessment method allows evaluating and comparing competing system designs regarding their future integration potential. It is a contribution to the system-of-systems engineering methodology.
Lee, L.; Helsel, D.
2007-01-01
Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data, perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis", where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limits computation. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
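A common way to apply the K-M estimator to left-censored concentrations is to "flip" the data about a constant so that non-detects become right-censored observations, run the ordinary estimator, and flip back. The sketch below (plain Python/NumPy with made-up data, not the authors' S-language routines) illustrates the idea.

```python
import numpy as np

def kaplan_meier(times, observed):
    """Standard right-censored Kaplan-Meier estimate of S(t) = P(T > t)."""
    times, observed = np.asarray(times, float), np.asarray(observed, bool)
    n_at_risk = len(times)
    surv, points = 1.0, []
    for t in np.unique(times):
        at_t = times == t
        deaths = int(np.sum(observed[at_t]))
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
        points.append((t, surv))
        n_at_risk -= int(np.sum(at_t))
    return points

# Illustrative left-censored concentrations: value plus a "detected" flag
# (False = reported as below the detection limit given in `conc`).
conc     = np.array([0.5, 0.5, 1.2, 2.0, 1.0, 3.5, 0.8])
detected = np.array([False, True, True, True, False, True, True])

# Flip about a constant larger than every value so left-censoring becomes
# right-censoring, then run the ordinary K-M estimator on the flipped data.
M = conc.max() + 1.0
km = kaplan_meier(M - conc, detected)

# Flipping back, the empirical CDF of the original data is approximated by
# F(x) = P(X <= x) ~ S_flipped(M - x).
for t, s in km:
    print(f"F({M - t:.2f}) ~ {s:.3f}")
```

This is the sense in which a routine built for right-censored survival times carries over directly to detection-limited geochemical data.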
NASA Astrophysics Data System (ADS)
Verma, R. V.
2018-04-01
The Archive Inventory Management System (AIMS) is a software package for understanding the distribution, characteristics, integrity, and nuances of files and directories in large file-based data archives on a continuous basis.
[Design of a miniaturized blood temperature-varying system based on computer distributed control].
Xu, Qiang; Zhou, Zhaoying; Peng, Jiegang; Zhu, Junhua
2007-10-01
Blood temperature varying has been widely applied in clinical practice, for example in extracorporeal circulation for whole-body perfusion hyperthermia (WBPH), body rewarming, and blood temperature varying in organ transplantation. This paper reports on a novel DCS (computer distributed control)-based blood temperature-varying system which includes a therapy management function and whose hardware and software can be extended easily. Simulation results illustrate that this system provides precise temperature control with good performance under various operating conditions.
Emerging Technologies for Software-Reliant Systems
2011-02-24
Slide excerpts: loose coupling; global distribution of hardware, software and people; horizontal integration and convergence; virtualization. "Globalization is an essential part of ..." Required software engineering emphases due to emerging technologies include defensive programming, security, auto-adaptation, and globalization. (Webinar, February 2011, © 2011 Carnegie Mellon University.)
Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.
Chatzis, Sotirios P; Andreou, Andreas S
2015-11-01
Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics, and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research on the latter front of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made Bayesian approaches appear unattractive, and thus underdeveloped, in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using publicly available benchmark data sets.
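For orientation, the count-data regression setting the paper builds on is ordinary Poisson regression, in which the defect count y_i of a software module with metric vector x_i is modeled as

```latex
\[
y_i \sim \mathrm{Poisson}(\lambda_i), \qquad
\log \lambda_i = \mathbf{w}^{\mathsf T}\mathbf{x}_i, \qquad
p(y_i \mid \mathbf{x}_i, \mathbf{w}) = \frac{\lambda_i^{\,y_i}\, e^{-\lambda_i}}{y_i!}.
\]
```

The paper's contribution replaces maximum-likelihood estimation of the weights w with a max-margin, fully Bayesian treatment; those posterior updates are not reproduced here.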
Evidence of absence (v2.0) software user guide
Dalthorp, Daniel; Huso, Manuela; Dail, David
2017-07-06
Evidence of Absence software (EoA) is a user-friendly software application for estimating bird and bat fatalities at wind farms and for designing search protocols. The software is particularly useful in addressing whether the number of fatalities is below a given threshold and what search parameters are needed to give assurance that thresholds were not exceeded. The software also includes tools (1) for estimating carcass persistence distributions and searcher efficiency parameters from field trials, (2) for projecting future mortality based on past monitoring data, and (3) for exploring the potential consequences of various choices in the design of long-term incidental take permits for protected species. The software was designed specifically for cases where tolerance for mortality is low and carcass counts are small or even zero, but the tools may also be used for mortality estimates when carcass counts are large.
Web-based segmentation and display of three-dimensional radiologic image data.
Silverstein, J; Rubenstein, J; Millman, A; Panko, W
1998-01-01
In many clinical circumstances, viewing sequential radiological image data as three-dimensional models is proving beneficial. However, designing customized computer-generated radiological models is beyond the scope of most physicians, due to specialized hardware and software requirements. We have created a simple method for Internet users to remotely construct and locally display three-dimensional radiological models using only a standard web browser. Rapid model construction is achieved by distributing the hardware intensive steps to a remote server. Once created, the model is automatically displayed on the requesting browser and is accessible to multiple geographically distributed users. Implementation of our server software on large scale systems could be of great service to the worldwide medical community.
Temperature distribution of thick thermoset composites
NASA Astrophysics Data System (ADS)
Guo, Zhan-Sheng; Du, Shanyi; Zhang, Boming
2004-05-01
The development of temperature distribution of thick polymeric matrix laminates during an autoclave vacuum bag process was measured and compared with numerically calculated results. The finite element formulation of the transient heat transfer problem was carried out for polymeric matrix composite materials from the heat transfer differential equations including internal heat generation produced by exothermic chemical reactions. Software based on the general finite element software package was developed for numerical simulation of the entire composite process. From the experimental and numerical results, it was found that the measured temperature profiles were in good agreement with the numerical ones, and conventional cure cycles recommended by prepreg manufacturers for thin laminates should be modified to prevent temperature overshoot.
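The governing relation behind such simulations is the transient heat-conduction equation with an internal heat-generation term from the exothermic cure reaction. A generic through-thickness form (the abstract does not give the paper's exact formulation or cure-kinetics model) is

```latex
\[
\rho\, c_p \frac{\partial T}{\partial t}
  = \frac{\partial}{\partial z}\!\left(k_z \frac{\partial T}{\partial z}\right)
  + \rho_r H_r \frac{d\alpha}{dt},
\]
```

where \alpha is the degree of cure, H_r the total heat of reaction, and \rho_r the resin density; the exothermic source term is what drives the temperature overshoot observed in thick laminates.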
Regional Earthquake Shaking and Loss Estimation
NASA Astrophysics Data System (ADS)
Sesetyan, K.; Demircioglu, M. B.; Zulfikar, C.; Durukal, E.; Erdik, M.
2009-04-01
This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR and ETH-Zurich, is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as sub-contractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic data base and basic source parameters and, if and when possible, estimating fault rupture parameters from rapid inversion of data from on-line stations. 2. Estimating the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and shear wave velocity distributions (Shake Mapping). 4. Incorporating strong ground motion and other empirical macroseismic data for the improvement of the Shake Map. 5. Estimating the losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2) commensurate with the availability of the inventory of the human-built environment (Loss Mapping). Both the Level 0 (similar to the PAGER system of the USGS) and Level 1 analyses of the ELER routine are based on obtaining intensity distributions analytically and estimating the total number of casualties and their geographic distribution, either using regionally adjusted intensity-casualty or magnitude-casualty correlations (Level 0) or using regional building inventory data bases (Level 1). For given basic source parameters the intensity distributions can be computed using: a) regional intensity attenuation relationships, b) intensity correlations with attenuation-relationship-based PGV, PGA and spectral amplitudes and, c) intensity correlations with a synthetic Fourier amplitude spectrum. In the Level 1 analysis, EMS98-based building vulnerability relationships are used for regional estimates of building damage and the casualty distributions. Results obtained from pilot applications of the Level 0 and Level 1 analysis modes of the ELER software to the 1999 M 7.4 Kocaeli, 1995 M 6.1 Dinar, and 2007 M 5.4 Bingol earthquakes in terms of ground shaking and losses are presented, and comparisons with the observed losses are made. The regional earthquake shaking and loss information is intended for dissemination in a timely manner to related agencies for the planning and coordination of the post-earthquake emergency response. However, the same software can also be used for scenario earthquake loss estimation and related Monte Carlo type simulations.
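As a toy illustration of the shake-mapping step, the snippet below evaluates a generic intensity attenuation relation of the form I = a + bM - c log10(R_hypo) on a regular grid. The coefficients, magnitude and geometry are invented for illustration and are not the region-specific relationships used in ELER.

```python
import numpy as np

# Illustrative-only coefficients of a generic intensity attenuation relation.
a, b, c = 1.0, 1.5, 3.0
magnitude = 7.4
epicenter_xy = (0.0, 0.0)   # km, in a local Cartesian frame
depth_km = 15.0

# Regular grid of sites around the epicenter.
x = np.linspace(-100.0, 100.0, 201)
y = np.linspace(-100.0, 100.0, 201)
X, Y = np.meshgrid(x, y)
r_epi = np.hypot(X - epicenter_xy[0], Y - epicenter_xy[1])
r_hypo = np.sqrt(r_epi**2 + depth_km**2)

# Intensity field ("shake map"); losses would then be obtained by combining
# this field with vulnerability or casualty correlations.
intensity = a + b * magnitude - c * np.log10(r_hypo)
print("max predicted intensity:", round(float(intensity.max()), 2))
```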
Sensor Data Distribution With Robustness and Reliability: Toward Distributed Components Model
NASA Technical Reports Server (NTRS)
Alena, Richard L.; Lee, Charles
2005-01-01
In planetary surface exploration missions, sensor data distribution is required in many aspects, for example in navigation, scheduling, planning, monitoring, diagnostics, and automation of field tasks. The challenge is to distribute such data in a robust and reliable way so that we can minimize errors caused by miscalculations and misjudgments based on erroneous data input during the mission. The ad-hoc wireless network on a planetary surface is not constantly connected because of the rough terrain and the lack of permanent installations on the surface. There are disconnected moments in which the computation nodes re-associate with different repeaters or access points until connections are reestablished. Such conditions require that the sensor data distribution software be robust and reliable, with the ability to tolerate disconnected moments. This paper presents a distributed components model as a framework to accomplish such tasks. The software is written in Java and utilizes the available Java Message Service schema and the Boss implementation. The results of field experiments show that the model is very effective in completing the tasks.
EVA: Collaborative Distributed Learning Environment Based in Agents.
ERIC Educational Resources Information Center
Sheremetov, Leonid; Tellez, Rolando Quintero
In this paper, a Web-based learning environment developed within the project called Virtual Learning Spaces (EVA, in Spanish) is presented. The environment is composed of knowledge, collaboration, consulting, experimentation, and personal spaces as a collection of agents and conventional software components working over the knowledge domains. All…
Finite element analysis of container ship's cargo hold using ANSYS and POSEIDON software
NASA Astrophysics Data System (ADS)
Tanny, Tania Tamiz; Akter, Naznin; Amin, Osman Md.
2017-12-01
Nowadays ship structural analysis has become an integral part of preliminary ship design, providing further support for the development and detail design of ship structures. Structural analyses of container ships' cargo holds are carried out to balance their safety and capacity, as those ships are exposed to a high risk of structural damage during a voyage. Two different design methodologies have been considered for the structural analysis of a container ship's cargo hold. One is a rule-based methodology and the other is a more conventional software-based analysis. The rule-based analysis is done with DNV-GL's software POSEIDON and the conventional package-based analysis is done with the ANSYS structural module. Both methods have been applied to analyze some of the mechanical properties of the model, such as total deformation, stress-strain distribution, von Mises stress, and fatigue, following different design bases and approaches, to provide some guidance for further improvements in ship structural design.
Fault Tolerant Software Technology for Distributed Computer Systems
1989-03-01
Final Technical Report, 1989 (report number partially garbled: ...-TR-88-296). "Fault Tolerant Software Technology for Distributed Computing Systems," a two year effort performed at Georgia Institute of Technology as part of the Clouds Project.
Software Tools for Formal Specification and Verification of Distributed Real-Time Systems
1994-07-29
The goals of Phase 1 are to design in detail a toolkit environment based on formal methods for the specification and verification of distributed real-time systems and to evaluate the design. The evaluation of the design includes investigation of both the capability and potential usefulness of the toolkit environment and the feasibility of its implementation.
Characterizing Crowd Participation and Productivity of Foldit Through Web Scraping
2016-03-01
Abbreviations defined in the report include BOINC (Berkeley Open Infrastructure for Network Computing), CDF (cumulative distribution function), CPU (central processing unit), and CSSG (crowdsourced serious game). Excerpt fragments: "... computers at once can create a similar capacity. According to Anderson [6], principal investigator for the Berkeley Open Infrastructure for Network ... extraterrestrial life. From this project, a software-based distributed computing platform called the Berkeley Open Infrastructure for Network Computing ..."
Pedretti, Alessandro; Mazzolari, Angelica; Vistoli, Giulio
2018-05-21
The manuscript describes WarpEngine, a novel platform implemented within the VEGA ZZ suite of software for performing distributed simulations in both local and wide area networks. Despite being tailored for structure-based virtual screening campaigns, WarpEngine possesses the required flexibility to carry out distributed calculations utilizing various pieces of software, which can be easily encapsulated within this platform without changing their source code. WarpEngine takes advantage of all cheminformatics features implemented in the VEGA ZZ program as well as of its largely customizable scripting architecture, thus allowing an efficient distribution of various time-demanding simulations. To offer an example of the WarpEngine potential, the manuscript includes a set of virtual screening campaigns based on the ACE data set of the DUD-E collection using PLANTS as the docking application. Benchmarking analyses revealed a satisfactory linearity of the WarpEngine performance, the speed-up values being roughly equal to the number of utilized cores. Moreover, the computed scalability values emphasize that the vast majority (i.e., >90%) of the performed simulations benefit from the distributed platform presented here. WarpEngine can be freely downloaded along with the VEGA ZZ program at www.vegazz.net.
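The embarrassingly parallel character of such screening campaigns, where the speed-up is roughly the number of cores, can be illustrated with a toy dispatcher. The sketch below fans independent (mock) docking evaluations out over local worker processes; WarpEngine itself distributes real docking jobs (e.g. PLANTS runs) over a network through its scripting architecture, so everything here is illustrative only.

```python
from concurrent.futures import ProcessPoolExecutor
import os

def dock(ligand_id):
    """Placeholder for an external docking call; returns (id, fake score)."""
    score = -8.0 + (ligand_id % 10) * 0.1
    return ligand_id, score

if __name__ == "__main__":
    ligands = range(1000)                       # a small mock ligand library
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(dock, ligands)) # each ligand docked independently
    best = min(results, key=lambda r: r[1])     # lowest (best) score
    print("best-scoring ligand:", best)
```

Because every ligand is independent, throughput scales with the number of workers until I/O or job-dispatch overhead dominates, which matches the near-linear speed-up reported in the abstract.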
Business logic for geoprocessing of distributed geodata
NASA Astrophysics Data System (ADS)
Kiehle, Christian
2006-12-01
This paper describes the development of a business-logic component for the geoprocessing of distributed geodata. The business logic acts as a mediator between the data and the user, therefore playing a central role in any spatial information system. The component is used in service-oriented architectures to foster the reuse of existing geodata inventories. Based on a geoscientific case study of groundwater vulnerability assessment and mapping, the demands for such architectures are identified with special regard to software engineering tasks. Methods are derived from the field of applied Geosciences (Hydrogeology), Geoinformatics, and Software Engineering. In addition to the development of a business logic component, a forthcoming Open Geospatial Consortium (OGC) specification is introduced: the OGC Web Processing Service (WPS) specification. A sample application is introduced to demonstrate the potential of WPS for future information systems. The sample application Geoservice Groundwater Vulnerability is described in detail to provide insight into the business logic component, and demonstrate how information can be generated out of distributed geodata. This has the potential to significantly accelerate the assessment and mapping of groundwater vulnerability. The presented concept is easily transferable to other geoscientific use cases dealing with distributed data inventories. Potential application fields include web-based geoinformation systems operating on distributed data (e.g. environmental planning systems, cadastral information systems, and others).
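A WPS exposes its processes through standard HTTP requests, so a client can discover a business-logic component's offerings with a plain GetCapabilities call. The sketch below uses the standard WPS key-value parameters against a placeholder URL; it is not the actual endpoint of the Geoservice Groundwater Vulnerability application.

```python
import requests

# Placeholder endpoint; a real deployment would publish its own WPS URL.
WPS_URL = "https://example.org/wps"

# Standard OGC WPS 1.0.0 key-value-pair request.
resp = requests.get(WPS_URL, params={
    "service": "WPS",
    "request": "GetCapabilities",
    "version": "1.0.0",
})
resp.raise_for_status()
print(resp.text[:500])   # XML capabilities document listing the offered processes
```

An Execute request with the process identifier and its inputs would follow the same pattern, which is what lets thin web clients trigger geoprocessing on distributed geodata without holding the data themselves.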
NASA Astrophysics Data System (ADS)
Kumlander, Deniss
The globalization of companies' operations and the competition between software vendors demand improved quality of delivered software and a decreased overall cost. At the same time, this introduces many problems into the software development process, as it produces distributed organizations that break the co-location rule of modern software development methodologies. Here we propose a reformulation of the ambassador position that increases its productivity in order to bridge communication and workflow gaps by managing the entire communication process rather than concentrating purely on the communication result.
Design document for the Surface Currents Data Base (SCDB) Management System (SCDBMS), version 1.0
NASA Technical Reports Server (NTRS)
Krisnnamagaru, Ramesh; Cesario, Cheryl; Foster, M. S.; Das, Vishnumohan
1994-01-01
The Surface Currents Database Management System (SCDBMS) provides access to the Surface Currents Data Base (SCDB) which is maintained by the Naval Oceanographic Office (NAVOCEANO). The SCDBMS incorporates database technology in providing seamless access to surface current data. The SCDBMS is an interactive software application with a graphical user interface (GUI) that supports user control of SCDBMS functional capabilities. The purpose of this document is to define and describe the structural framework and logistical design of the software components/units which are integrated into the major computer software configuration item (CSCI) identified as the SCDBMS, Version 1.0. The preliminary design is based on functional specifications and requirements identified in the governing Statement of Work prepared by the Naval Oceanographic Office (NAVOCEANO) and distributed as a request for proposal by the National Aeronautics and Space Administration (NASA).
[Simulation and data analysis of stereological modeling based on virtual slices].
Wang, Hao; Shen, Hong; Bai, Xiao-yan
2008-05-01
The aim was to establish a computer-assisted stereological model for simulating the process of slice sectioning and to evaluate the relationship between the section surface and the estimated three-dimensional structure. The model was designed mathematically as Win32 software based on MFC, using Microsoft Visual Studio as the IDE, for simulating an unlimited sectioning process and analyzing the data derived from the model. The linearity of the fitting of the model was evaluated by comparison with the traditional formula. The Win32 software based on this algorithm allowed random sectioning of particles distributed randomly in an ideal virtual cube. The stereological parameters showed very high rates (>94.5% and 92%) in homogeneity and independence tests. The density, shape and size data of the sections were tested to conform to a normal distribution. The output of the model and that from the image analysis system showed statistical correlation and consistency. The algorithm described here can be used for evaluating the stereological parameters of the structure of tissue slices.
Katzman, G L; Morris, D; Lauman, J; Cochella, C; Goede, P; Harnsberger, H R
2001-06-01
To foster a community-supported evaluation process for open-source digital teaching file (DTF) development and maintenance. The mechanisms used to support this process will include standard web browsers, web servers, forum software, and custom additions to the forum software to potentially enable a mediated voting protocol. The web server will also serve as a focal point for beta and release software distribution, which is the desired end goal of this process. We foresee that www.mdtf.org will provide for widespread distribution of open-source DTF software that will include function and interface design decisions from community participation on the website forums.
Distributed Computer Networks in Support of Complex Group Practices
Wess, Bernard P.
1978-01-01
The economics of medical computer networks are presented in context with the patient care and administrative goals of medical networks. Design alternatives and network topologies are discussed with an emphasis on medical network design requirements in distributed data base design, telecommunications, satellite systems, and software engineering. The success of the medical computer networking technology is predicated on the ability of medical and data processing professionals to design comprehensive, efficient, and virtually impenetrable security systems to protect data bases, network access and services, and patient confidentiality.
The widest practicable dissemination: The NASA technical report server
NASA Technical Reports Server (NTRS)
Nelson, Michael L.; Gottlich, Gretchen L.; Bianco, David J.; Binkley, Robert L.; Kellogg, Yvonne D.; Paulson, Sharon S.; Beaumont, Chris J.; Schmunk, Robert B.; Kurtz, Michael; Accomazzi, Alberto
1995-01-01
The search for innovative methods to distribute NASA's information led a grass-roots team to create the NASA Technical Report Server (NTRS), which uses the World Wide Web and other popular Internet-based information systems as search engines. The NTRS is an inter-center effort which provides uniform access to various distributed publication servers residing on the Internet. Users have immediate desktop access to technical publications from NASA centers and institutes. This paper presents the NTRS architecture, usage metrics, and the lessons learned while implementing and maintaining the service over the initial 6-month period. The NTRS is largely constructed with freely available software running on existing hardware, and the resulting additional exposure for the body of literature it contains will allow NASA to ensure that its institutional knowledge base will continue to receive the widest practicable and appropriate dissemination.
Collaboration and decision making tools for mobile groups
NASA Astrophysics Data System (ADS)
Abrahamyan, Suren; Balyan, Serob; Ter-Minasyan, Harutyun; Degtyarev, Alexander
2017-12-01
Nowadays the use of distributed collaboration tools is widespread in many areas of human activity, but a lack of mobility and equipment dependency creates difficulties and slows the development and integration of such technologies. Mobile technologies allow individuals to interact with each other without the need for traditional office spaces and regardless of location. Hence, the realization of special infrastructures on mobile platforms with the help of ad-hoc wireless local networks could eliminate hardware attachment and also be useful from a scientific point of view. Solutions ranging from basic Internet messengers to complex software for online collaboration in large-scale workgroups are implementations of tools based on mobile infrastructures. Despite the growth of mobile infrastructures, applied distributed solutions for group decision-making and e-collaboration are not common. In this article we propose a software complex for real-time collaboration and decision-making based on mobile devices, describe its architecture and evaluate its performance.
Pressure distribution under flexible polishing tools. I - Conventional aspheric optics
NASA Astrophysics Data System (ADS)
Mehta, Pravin K.; Hufnagel, Robert E.
1990-10-01
The paper presents a mathematical model, based on Kirchhoff's thin flat plate theory, developed to determine the polishing pressure distribution for a flexible polishing tool. A two-layered tool in which the bending and compressive stiffnesses are equal is developed and is formulated as a plate on a linearly elastic foundation. An equivalent eigenvalue problem and solution for a free-free plate are created from the plate formulation. For aspheric, anamorphic optical surfaces, the tool misfit is derived; it is defined as the result of movement from the initial perfect fit on the optic to any other position. The Polisher Design (POD) software for circular tools on aspheric optics is introduced. NASTRAN-based finite element analysis results are compared with the POD software, showing high correlation. By employing existing free-free eigenvalues and eigenfunctions, the work may be extended to rectangular polishing tools as well.
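The core of such a model is the classical equation for a thin plate resting on a linearly elastic (Winkler-type) foundation. In generic form (not necessarily the exact formulation implemented in POD),

```latex
\[
D\,\nabla^4 w(x,y) + k\,w(x,y) = q(x,y), \qquad
D = \frac{E\,h^3}{12\,(1-\nu^2)},
\]
```

where w is the tool deflection, k the foundation (compressive) stiffness, q the applied polishing pressure, and D the flexural rigidity of the plate.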
West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Young, Nicholas E.; Stohlgren, Thomas J.; Talbert, Colin; Talbert, Marian; Morisette, Jeffrey; Anderson, Ryan
2016-01-01
Early detection of invasive plant species is vital for the management of natural resources and protection of ecosystem processes. The use of satellite remote sensing for mapping the distribution of invasive plants is becoming more common, however conventional imaging software and classification methods have been shown to be unreliable. In this study, we test and evaluate the use of five species distribution model techniques fit with satellite remote sensing data to map invasive tamarisk (Tamarix spp.) along the Arkansas River in Southeastern Colorado. The models tested included boosted regression trees (BRT), Random Forest (RF), multivariate adaptive regression splines (MARS), generalized linear model (GLM), and Maxent. These analyses were conducted using a newly developed software package called the Software for Assisted Habitat Modeling (SAHM). All models were trained with 499 presence points, 10,000 pseudo-absence points, and predictor variables acquired from the Landsat 5 Thematic Mapper (TM) sensor over an eight-month period to distinguish tamarisk from native riparian vegetation using detection of phenological differences. From the Landsat scenes, we used individual bands and calculated Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), and tasseled capped transformations. All five models identified current tamarisk distribution on the landscape successfully based on threshold independent and threshold dependent evaluation metrics with independent location data. To account for model specific differences, we produced an ensemble of all five models with map output highlighting areas of agreement and areas of uncertainty. Our results demonstrate the usefulness of species distribution models in analyzing remotely sensed data and the utility of ensemble mapping, and showcase the capability of SAHM in pre-processing and executing multiple complex models.
The process group approach to reliable distributed computing
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.
1992-01-01
The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.
Using a web-based survey tool to undertake a Delphi study: application for nurse education research.
Gill, Fenella J; Leslie, Gavin D; Grech, Carol; Latour, Jos M
2013-11-01
The Internet is increasingly being used as a data collection medium to access research participants. This paper reports on the experience and value of using web-survey software to conduct an eDelphi study to develop Australian critical care course graduate practice standards. The eDelphi technique used involved the iterative process of administering three rounds of surveys to a national expert panel. The survey was developed online using SurveyMonkey. Panel members responded to statements using one rating scale for round one and two scales for rounds two and three. Text boxes for panel comments were provided. For each round, the SurveyMonkey's email tool was used to distribute an individualized email invitation containing the survey web link. The distribution of panel responses, individual responses and a summary of comments were emailed to panel members. Stacked bar charts representing the distribution of responses were generated using the SurveyMonkey software. Panel response rates remained greater than 85% over all rounds. An online survey provided numerous advantages over traditional survey approaches including high quality data collection, ease and speed of survey administration, direct communication with the panel and rapid collation of feedback allowing data collection to be undertaken in 12 weeks. Only minor challenges were experienced using the technology. Ethical issues, specific to using the Internet to conduct research and external hosting of web-based software, lacked formal guidance. High response rates and an increased level of data quality were achieved in this study using web-survey software and the process was efficient and user-friendly. However, when considering online survey software, it is important to match the research design with the computer capabilities of participants and recognize that ethical review guidelines and processes have not yet kept pace with online research practices. Copyright © 2013 Elsevier Ltd. All rights reserved.
Experiences using OpenMP based on Computer Directed Software DSM on a PC Cluster
NASA Technical Reports Server (NTRS)
Hess, Matthias; Jost, Gabriele; Mueller, Matthias; Ruehle, Roland
2003-01-01
In this work we report on our experiences running OpenMP programs on a commodity cluster of PCs running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message passing counterparts and discuss the performance differences.
NASA Astrophysics Data System (ADS)
Chávez, G. Moreno; Sarocchi, D.; Santana, E. Arce; Borselli, L.
2015-12-01
The study of grain size distributions is fundamental for understanding sedimentological environments. Through these analyses, clast erosion, transport and deposition processes can be interpreted and modeled. However, grain size distribution analysis can be difficult in some outcrops due to the number and complexity of the arrangement of clasts and matrix and their physical size. Despite various technological advances, it is almost impossible to obtain the full grain size distribution (from blocks to sand grain size) with a single method or instrument of analysis. For this reason, development in this area continues to be fundamental. In recent years, various methods of particle size analysis by automatic image processing have been developed, due to their potential advantages over classical ones: speed and the detailed content of the final information (virtually for each analyzed particle). In this framework, we have developed a novel algorithm and software for grain size distribution analysis, based on color image segmentation using an entropy-controlled quadratic Markov measure field algorithm and the Rosiwal method for counting intersections between clasts and linear transects in the images. We tested the novel algorithm on different sedimentary deposit types from 14 varieties of sedimentological environments. The results of the new algorithm were compared with grain counts performed manually by experts using the same Rosiwal method. The new algorithm has the same accuracy as a classical manual counting process, but the application of this innovative methodology is much easier and dramatically less time-consuming. The final productivity of the new software for the analysis of clast deposits after recording field outcrop images can be increased significantly.
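The Rosiwal step reduces to counting where transect lines cross clast pixels in the segmented image. The NumPy sketch below does this for a synthetic binary mask with horizontal transects; it is a simplified stand-in for the paper's workflow, which first produces the clast/matrix segmentation with the entropy-controlled quadratic Markov measure field algorithm.

```python
import numpy as np

# Synthetic binary segmentation: 1 = clast pixel, 0 = matrix.
mask = np.zeros((100, 100), dtype=np.uint8)
mask[20:35, 10:40] = 1     # a fake "clast"
mask[60:80, 50:90] = 1     # another one

def chord_lengths_along_rows(mask, rows):
    """Return chord (intersection) lengths of clasts crossed by each transect row."""
    chords = []
    for r in rows:
        line = mask[r, :]
        # 0->1 transitions mark where the transect enters a clast, 1->0 where it leaves.
        diff = np.diff(np.concatenate(([0], line, [0])))
        starts = np.flatnonzero(diff == 1)
        ends = np.flatnonzero(diff == -1)
        chords.extend((ends - starts).tolist())
    return chords

transects = range(0, mask.shape[0], 10)          # evenly spaced horizontal lines
chords = chord_lengths_along_rows(mask, transects)
print("chord lengths (px):", chords)             # grain-size statistics follow from these
```

The distribution of chord lengths collected over many transects is what feeds the grain size statistics compared against the experts' manual counts.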
Technology Solutions | Distributed Generation Interconnection Collaborative
Hardware and software technologies can support the wider adoption of distributed generation on the grid. The penetration of distributed-generation photovoltaics (DGPV) has risen rapidly in recent years ... posed by high penetrations of distributed PV. Other promising technologies include new utility software ...
WE-DE-201-12: Thermal and Dosimetric Properties of a Ferrite-Based Thermo-Brachytherapy Seed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warrell, G; Shvydka, D; Parsai, E I
Purpose: The novel thermo-brachytherapy (TB) seed provides a simple means of adding hyperthermia to LDR prostate permanent implant brachytherapy. The high blood perfusion rate (BPR) within the prostate motivates the use of the ferrite and conductive outer layer design for the seed cores. We describe the results of computational analyses of the thermal properties of this ferrite-based TB seed in modelled patient-specific anatomy, as well as studies of the interseed and scatter (ISA) effect. Methods: The anatomies (including the thermophysical properties of the main tissue types) and seed distributions of 6 prostate patients who had been treated with LDR brachytherapy seeds were modelled in the finite element analysis software COMSOL, using ferrite-based TB and additional hyperthermia-only (HT-only) seeds. The resulting temperature distributions were compared to those computed for patient-specific seed distributions, but in uniform anatomy with a constant blood perfusion rate. The ISA effect was quantified in the Monte Carlo software package MCNP5. Results: Compared with temperature distributions calculated in modelled uniform tissue, temperature distributions in the patient-specific anatomy were higher and more heterogeneous. Moreover, the maximum temperature to the rectal wall was typically ∼1 °C greater for patient-specific anatomy than for uniform anatomy. The ISA effect of the TB and HT-only seeds caused a reduction in D90 similar to that found for previously-investigated NiCu-based seeds, but of a slightly smaller magnitude. Conclusion: The differences between temperature distributions computed for uniform and patient-specific anatomy for ferrite-based seeds are significant enough that heterogeneous anatomy should be considered. Both types of modelling indicate that ferrite-based seeds provide sufficiently high and uniform hyperthermia to the prostate, without excessively heating surrounding tissues. The ISA effect of these seeds is slightly less than that for the previously-presented NiCu-based seeds.
Reproducible Bioconductor workflows using browser-based interactive notebooks and containers.
Almugbel, Reem; Hung, Ling-Hong; Hu, Jiaming; Almutairy, Abeer; Ortogero, Nicole; Tamta, Yashaswi; Yeung, Ka Yee
2018-01-01
Bioinformatics publications typically include complex software workflows that are difficult to describe in a manuscript. We describe and demonstrate the use of interactive software notebooks to document and distribute bioinformatics research. We provide a user-friendly tool, BiocImageBuilder, that allows users to easily distribute their bioinformatics protocols through interactive notebooks uploaded to either a GitHub repository or a private server. We present four different interactive Jupyter notebooks using R and Bioconductor workflows to infer differential gene expression, analyze cross-platform datasets, process RNA-seq data and KinomeScan data. These interactive notebooks are available on GitHub. The analytical results can be viewed in a browser. Most importantly, the software contents can be executed and modified. This is accomplished using Binder, which runs the notebook inside software containers, thus avoiding the need to install any software and ensuring reproducibility. All the notebooks were produced using custom files generated by BiocImageBuilder. BiocImageBuilder facilitates the publication of workflows with a point-and-click user interface. We demonstrate that interactive notebooks can be used to disseminate a wide range of bioinformatics analyses. The use of software containers to mirror the original software environment ensures reproducibility of results. Parameters and code can be dynamically modified, allowing for robust verification of published results and encouraging rapid adoption of new methods. Given the increasing complexity of bioinformatics workflows, we anticipate that these interactive software notebooks will become as necessary for documenting software methods as traditional laboratory notebooks have been for documenting bench protocols, and as ubiquitous. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
NASA Astrophysics Data System (ADS)
Megherbi, Dalila B.; Yan, Yin; Tanmay, Parikh; Khoury, Jed; Woods, C. L.
2004-11-01
Recently, surveillance and Automatic Target Recognition (ATR) applications have been increasing as the cost of the computing power needed to process the massive amount of information continues to fall. This computing power has been made possible partly by the latest advances in FPGAs and SOPCs. In particular, to design and implement state-of-the-art electro-optical imaging systems that provide advanced surveillance capabilities, there is a need to integrate several technologies (e.g. telescopes, precise optics, cameras, and image/computer vision algorithms, which can be geographically distributed or share distributed resources) into programmable and DSP systems. Additionally, pattern recognition techniques and fast information retrieval are often important components of intelligent systems. The aim of this work is to use an embedded FPGA as a fast, configurable and synthesizable search engine for fast image pattern recognition/retrieval in a distributed hardware/software co-design environment. In particular, we propose and demonstrate a low cost Content Addressable Memory (CAM)-based distributed embedded FPGA hardware architecture with real-time recognition capabilities for pattern look-up, pattern recognition, and image retrieval. We show how the distributed CAM-based architecture offers an order-of-magnitude performance advantage over RAM (Random Access Memory)-based search for implementing high speed pattern recognition for image retrieval. The methods of designing, implementing, and analyzing the proposed CAM-based embedded architecture are described here. Other SOPC solutions and design issues are covered. Finally, experimental results, hardware verification, and performance evaluations using both the Xilinx Virtex-II and the Altera Apex20k are provided to show the potential and power of the proposed method for low cost reconfigurable fast image pattern recognition/retrieval at the hardware/software co-design level.
Mittal, Varun; Hung, Ling-Hong; Keswani, Jayant; Kristiyanto, Daniel; Lee, Sung Bong; Yeung, Ka Yee
2017-04-01
Software container technology such as Docker can be used to package and distribute bioinformatics workflows consisting of multiple software implementations and dependencies. However, Docker is a command line-based tool, and many bioinformatics pipelines consist of components that require a graphical user interface. We present a container tool called GUIdock-VNC that uses a graphical desktop sharing system to provide a browser-based interface for containerized software. GUIdock-VNC uses the Virtual Network Computing protocol to render the graphics within most commonly used browsers. We also present a minimal image builder that can add our proposed graphical desktop sharing system to any Docker packages, with the end result that any Docker packages can be run using a graphical desktop within a browser. In addition, GUIdock-VNC uses the Oauth2 authentication protocols when deployed on the cloud. As a proof-of-concept, we demonstrated the utility of GUIdock-noVNC in gene network inference. We benchmarked our container implementation on various operating systems and showed that our solution creates minimal overhead. © The Authors 2017. Published by Oxford University Press.
Development of confidence limits by pivotal functions for estimating software reliability
NASA Technical Reports Server (NTRS)
Dotson, Kelly J.
1987-01-01
The utility of pivotal functions is established for assessing software reliability. Based on the Moranda geometric de-eutrophication model of reliability growth, confidence limits for attained reliability and prediction limits for the time to the next failure are derived using a pivotal function approach. Asymptotic approximations to the confidence and prediction limits are considered and are shown to be inadequate in cases where only a few bugs are found in the software. Departures from the assumed exponentially distributed interfailure times in the model are also investigated. The effect of these departures is discussed relative to restricting the use of the Moranda model.
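For reference, a minimal statement of the Moranda geometric de-eutrophication model in its standard textbook form (not quoted from the report): the interfailure times are independent exponential random variables whose rate decays geometrically as bugs are removed,

\[ T_i \sim \mathrm{Exp}(\lambda_i), \qquad \lambda_i = D\,k^{\,i-1}, \qquad D > 0,\ 0 < k < 1, \]

so reliability growth corresponds to the geometric decay of the failure intensity after each repair.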
NASA Astrophysics Data System (ADS)
Prasad, Guru; Jayaram, Sanjay; Ward, Jami; Gupta, Pankaj
2004-09-01
In this paper, Aximetric proposes a decentralized Command and Control (C2) architecture for a distributed control of a cluster of on-board health monitoring and software enabled control systems called
DOE Office of Scientific and Technical Information (OSTI.GOV)
Childers, L.; Liming, L.; Foster, I.
2008-10-15
This report summarizes the methodology and results of a user perspectives study conducted by the Community Driven Improvement of Globus Software (CDIGS) project. The purpose of the study was to document the work-related goals and challenges facing today's scientific technology users, to record their perspectives on Globus software and the distributed-computing ecosystem, and to provide recommendations to the Globus community based on the observations. Globus is a set of open source software components intended to provide a framework for collaborative computational science activities. Rather than attempting to characterize all users or potential users of Globus software, our strategy has been to speak in detail with a small group of individuals in the scientific community whose work appears to be the kind that could benefit from Globus software, learn as much as possible about their work goals and the challenges they face, and describe what we found. The result is a set of statements about specific individuals' experiences. We do not claim that these are representative of a potential user community, but we do claim to have found commonalities and differences among the interviewees that may be reflected in the user community as a whole. We present these as a series of hypotheses that can be tested by subsequent studies, and we offer recommendations to Globus developers based on the assumption that these hypotheses are representative. Specifically, we conducted interviews with thirty technology users in the scientific community. We included both people who have used Globus software and those who have not. We made a point of including individuals who represent a variety of roles in scientific projects, for example, scientists, software developers, engineers, and infrastructure providers. The following material is included in this report: (1) A summary of the reported work-related goals, significant issues, and points of satisfaction with the use of Globus software; (2) A method for characterizing users according to their technology interactions, and identification of four user types among the interviewees using the method; (3) Four profiles that highlight points of commonality and diversity in each user type; (4) Recommendations for technology developers and future studies; (5) A description of the interview protocol and overall study methodology; (6) An anonymized list of the interviewees; and (7) Interview writeups and summary data. The interview summaries in Section 3 and transcripts in Appendix D illustrate the value of distributed computing software--and Globus in particular--to scientific enterprises. They also document opportunities to make these tools still more useful both to current users and to new communities. We aim our recommendations at developers who intend their software to be used and reused in many applications. (This kind of software is often referred to as 'middleware.') Our two core recommendations are as follows. First, it is essential for middleware developers to understand and explicitly manage the multiple user products in which their software components are used. We must avoid making assumptions about the commonality of these products and, instead, study and account for their diversity. Second, middleware developers should engage in different ways with different kinds of users. Having identified four general user types in Section 4, we provide specific ideas for how to engage them in Section 5.
Customer Communication Challenges and Solutions in Globally Distributed Agile Software Development
NASA Astrophysics Data System (ADS)
Pikkarainen, Minna; Korkala, Mikko
Working in the globally distributed market is one of the key trends among software organizations all over the world [1-5]. Several factors have contributed to the growth of distributed software development; time-zone independent "follow the sun" development, access to well-educated labour, maturation of the technical infrastructure and reduced costs are some of the most commonly cited benefits of distributed development [3, 6-8]. Furthermore, customers are often located in different countries because of the companies' internationalization purposes or good market opportunities.
NASA Tech Briefs, January 2003
NASA Technical Reports Server (NTRS)
2003-01-01
Topics covered include: Optoelectronic Tool Adds Scale Marks to Photographic Images; Compact Interconnection Networks Based on Quantum Dots; Laterally Coupled Quantum-Dot Distributed-Feedback Lasers; Bit-Serial Adder Based on Quantum Dots; Stabilized Fiber-Optic Distribution of Reference Frequency; Delay/Doppler-Mapping GPS-Reflection Remote-Sensing System; Ladar System Identifies Obstacles Partly Hidden by Grass; Survivable Failure Data Recorders for Spacecraft; Fiber-Optic Ammonia Sensors; Silicon Membrane Mirrors with Electrostatic Shape Actuators; Nanoscale Hot-Wire Probes for Boundary-Layer Flows; Theodolite with CCD Camera for Safe Measurement of Laser-Beam Pointing; Efficient Coupling of Lasers to Telescopes with Obscuration; Aligning Three Off-Axis Mirrors with Help of a DOE; Calibrating Laser Gas Measurements by Use of Natural CO2; Laser Ranging Simulation Program; Micro-Ball-Lens Optical Switch Driven by SMA Actuator; Evaluation of Charge Storage and Decay in Spacecraft Insulators; Alkaline Capacitors Based on Nitride Nanoparticles; Low-EC-Content Electrolytes for Low-Temperature Li-Ion Cells; Software for a GPS-Reflection Remote-Sensing System; Software for Building Models of 3D Objects via the Internet; "Virtual Cockpit Window" for a Windowless Aerospacecraft; CLARAty Functional-Layer Software; Java Library for Input and Output of Image Data and Metadata; Software for Estimating Costs of Testing Rocket Engines; Energy-Absorbing, Lightweight Wheels; Viscoelastic Vibration Dampers for Turbomachine Blades; Soft Landing of Spacecraft on Energy-Absorbing Self-Deployable Cushions; Pneumatically Actuated Miniature Peristaltic Vacuum Pumps; Miniature Gas-Turbine Power Generator; Pressure-Sensor Assembly Technique; Wafer-Level Membrane-Transfer Process for Fabricating MEMS; A Reactive-Ion Etch for Patterning Piezoelectric Thin Film; Wavelet-Based Real-Time Diagnosis of Complex Systems; Quantum Search in Hilbert Space; Analytic Method for Computing Instrument Pointing Jitter; and Semiselective Optoelectronic Sensors for Monitoring Microbes.
Knowledge-based processing for aircraft flight control
NASA Technical Reports Server (NTRS)
Painter, John H.; Glass, Emily; Economides, Gregory; Russell, Paul
1994-01-01
This Contractor Report documents research in Intelligent Control using knowledge-based processing in a manner dual to methods found in the classic stochastic decision, estimation, and control discipline. Such knowledge-based control has also been called Declarative and Hybrid. Software architectures were sought, employing the parallelism inherent in modern object-oriented modeling and programming. The viewpoint adopted was that Intelligent Control employs a class of domain-specific software architectures having features common over a broad variety of implementations, such as management of aircraft flight, power distribution, etc. As much attention was paid to software engineering issues as to artificial intelligence and control issues. This research considered that particular processing methods from the stochastic and knowledge-based worlds are duals, that is, similar in a broad context. They provide architectural design concepts which serve as bridges between the disparate disciplines of decision, estimation, control, and artificial intelligence. This research was applied to the control of a subsonic transport aircraft in the airport terminal area.
NASA Technical Reports Server (NTRS)
2001-01-01
REI Systems, Inc. developed a software solution that uses the Internet to eliminate the paperwork typically required to document and manage complex business processes. The data management solution, called Electronic Handbooks (EHBs), is presently used for the entire SBIR program processes at NASA. The EHB-based system is ideal for programs and projects whose users are geographically distributed and are involved in complex management processes and procedures. EHBs provide flexible access control and increased communications while maintaining security for systems of all sizes. Through Internet Protocol-based access, user authentication and user-based access restrictions, role-based access control, and encryption/decryption, EHBs provide the level of security required for confidential data transfer. EHBs contain electronic forms and menus, which can be used in real time to execute the described processes. EHBs use standard word processors that generate ASCII HTML code to set up electronic forms that are viewed within a web browser. EHBs require no end-user software distribution, significantly reducing operating costs. Each interactive handbook simulates a hard-copy version containing chapters with descriptions of participants' roles in the online process.
Rafael Moreno-Sanchez
2006-01-01
The aim of this paper is to provide a conceptual framework for the session: "The role of web-based Geographic Information Systems in supporting sustainable management." The concepts of sustainability, sustainable forest management, Web Services, Distributed Geographic Information Systems, interoperability, Open Specifications, and Open Source Software are defined...
Programming model for distributed intelligent systems
NASA Technical Reports Server (NTRS)
Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.
1988-01-01
A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.
Evaluation of a Game-Based Simulation During Distributed Exercises
2010-09-01
the management team guiding development of the software. The questionnaires have not been used enough to collect data sufficient for factor...capable of internationally distributed exercises without excessive time lags or technical problems, given that commercial games seem to manage while...established by RDECOM-STTC military liaison and managers. Engineering constraints combined to limit the number of participants and the possible roles that
A distributed data acquisition software scheme for the Laboratory Telerobotic Manipulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, P.L.; Glassell, R.L.; Rowe, J.C.
1990-01-01
A custom software architecture was developed for use in the Laboratory Telerobotic Manipulator (LTM) to provide support for the distributed data acquisition electronics. This architecture was designed to provide a comprehensive development environment that proved to be useful for both hardware and software debugging. This paper describes the development environment and the operational characteristics of the real-time data acquisition software. 8 refs., 5 figs.
Pi-Sat: A Low Cost Small Satellite and Distributed Spacecraft Mission System Test Platform
NASA Technical Reports Server (NTRS)
Cudmore, Alan
2015-01-01
Current technology and budget trends indicate a shift in satellite architectures from large, expensive single-satellite missions to small, low cost distributed spacecraft missions. At the center of this shift is the SmallSat/CubeSat architecture. The primary goal of the Pi-Sat project is to create a low cost and easy to use Distributed Spacecraft Mission (DSM) test bed to facilitate the research and development of next-generation DSM technologies and concepts. This test bed also serves as a realistic software development platform for small satellite and CubeSat architectures. The Pi-Sat is based on the popular $35 Raspberry Pi single board computer featuring a 700 MHz ARM processor, 512MB of RAM, a flash memory card, and a wealth of IO options. The Raspberry Pi runs the Linux operating system and can easily run Code 582's Core Flight System flight software architecture. The low cost and high availability of the Raspberry Pi make it an ideal platform for Distributed Spacecraft Mission and CubeSat software development. The Pi-Sat models currently include a Pi-Sat 1U Cube, a Pi-Sat Wireless Node, and a Pi-Sat CubeSat processor card. The Pi-Sat project takes advantage of many popular trends in the Maker community, including low cost electronics, 3D printing, and rapid prototyping, in order to provide a realistic platform for flight software testing, training, and technology development. The Pi-Sat has also provided fantastic hands-on training opportunities for NASA summer interns and Pathways students.
Johnson, Z. P.; Eady, R. D.; Ahmad, S. F.; Agravat, S.; Morris, T; Else, J; Lank, S. M.; Wiseman, R. W.; O’Connor, D. H.; Penedo, M. C. T.; Larsen, C. P.
2012-01-01
Here we describe the Immunogenetic Management Software (IMS) system, a novel web-based application that permits multiplexed analysis of complex immunogenetic traits that are necessary for the accurate planning and execution of experiments involving large animal models, including nonhuman primates. IMS is capable of housing complex pedigree relationships, microsatellite-based MHC typing data, as well as MHC pyrosequencing expression analysis of class I alleles. It includes a novel, automated MHC haplotype naming algorithm and has accomplished an innovative visualization protocol that allows users to view multiple familial and MHC haplotype relationships through a single, interactive graphical interface. Detailed DNA and RNA-based data can also be queried and analyzed in a highly accessible fashion, and flexible search capabilities allow experimental choices to be made based on multiple, individualized and expandable immunogenetic factors. This web application is implemented in Java, MySQL, Tomcat, and Apache, with supported browsers including Internet Explorer and Firefox on Windows and Safari on Mac OS. The software is freely available for distribution to noncommercial users by contacting Leslie.kean@emory.edu. A demonstration site for the software is available at http://typing.emory.edu/typing_demo, user name: imsdemo7@gmail.com and password: imsdemo. PMID:22080300
Review of the Water Resources Information System of Argentina
Hutchison, N.E.
1987-01-01
A representative of the U.S. Geological Survey traveled to Buenos Aires, Argentina, in November 1986, to discuss water information systems and data bank implementation in the Argentine Government Center for Water Resources Information. Software has been written by Center personnel for a minicomputer to be used to manage inventory (index) data and water quality data. Additional hardware and software have been ordered to upgrade the existing computer. Four microcomputers, statistical and data base management software, and network hardware and software for linking the computers have also been ordered. The Center plans to develop a nationwide distributed data base for Argentina that will include the major regional offices as nodes. Needs for continued development of the water resources information system for Argentina were reviewed. Identified needs include: (1) conducting a requirements analysis to define the content of the data base and insure that all user requirements are met, (2) preparing a plan for the development, implementation, and operation of the data base, and (3) developing a conceptual design to inform all development personnel and users of the basic functionality planned for the system. A quality assurance and configuration management program to provide oversight to the development process was also discussed. (USGS)
Johnson, Z P; Eady, R D; Ahmad, S F; Agravat, S; Morris, T; Else, J; Lank, S M; Wiseman, R W; O'Connor, D H; Penedo, M C T; Larsen, C P; Kean, L S
2012-04-01
Here we describe the Immunogenetic Management Software (IMS) system, a novel web-based application that permits multiplexed analysis of complex immunogenetic traits that are necessary for the accurate planning and execution of experiments involving large animal models, including nonhuman primates. IMS is capable of housing complex pedigree relationships, microsatellite-based MHC typing data, as well as MHC pyrosequencing expression analysis of class I alleles. It includes a novel, automated MHC haplotype naming algorithm and has accomplished an innovative visualization protocol that allows users to view multiple familial and MHC haplotype relationships through a single, interactive graphical interface. Detailed DNA and RNA-based data can also be queried and analyzed in a highly accessible fashion, and flexible search capabilities allow experimental choices to be made based on multiple, individualized and expandable immunogenetic factors. This web application is implemented in Java, MySQL, Tomcat, and Apache, with supported browsers including Internet Explorer and Firefox on Windows and Safari on Mac OS. The software is freely available for distribution to noncommercial users by contacting Leslie.kean@emory.edu. A demonstration site for the software is available at http://typing.emory.edu/typing_demo, user name: imsdemo7@gmail.com and password: imsdemo.
The new OGC Catalogue Services 3.0 specification - status of work
NASA Astrophysics Data System (ADS)
Bigagli, Lorenzo; Voges, Uwe
2013-04-01
We report on the work of the Open Geospatial Consortium Catalogue Services 3.0 Standards Working Group (OGC Cat 3.0 SWG for short), started in March 2008 with the purpose of processing change requests on the Catalogue Services 2.0.2 Implementation Specification (OGC 07-006r1) and producing a revised version thereof, comprising the related XML schemas and abstract test suite. The work was initially intended as a minor revision (version 2.1), but was later retargeted as a major update of the standard and rescheduled (the anticipated roadmap ended in 2008). The target audience of Catalogue Services 3.0 includes: • Implementors of catalogue services solutions. • Designers and developers of catalogue services profiles. • Providers/users of catalogue services. The two main general areas of enhancement included: restructuring the specification document according to the OGC standard for modular specifications (OGC 08-131r3, also known as the Core and Extension model); and incorporating the current mass-market technologies for discovery on the Web, namely OpenSearch. The document was initially split into four parts: the general model and the three protocol bindings HTTP, Z39.50, and CORBA. The CORBA binding, which was very rarely implemented, and the Z39.50 binding have later been dropped. Parts of the Z39.50 binding, namely Search/Retrieve via URL (SRU; same semantics as Z39.50, but stateless), have been provided as a discussion paper (OGC 12-082) for possibly developing a future SRU profile. The Catalogue Services 3.0 specification is structured as follows: • Part 1: General Model (Core) • Part 2: HTTP Protocol Binding (CSW). In CSW, the GET/KVP encoding is mandatory. The POST/XML encoding is optional. SOAP is supported as a special case of the POST/XML encoding. OpenSearch must always be supported, regardless of the implemented profiles, along with the OpenSearch Geospatial and Temporal Extensions (OGC 10-032r2). The latter specifies spatial (e.g. point-plus-radius, bounding box, polygons, in EPSG:4326/WGS84 coordinates) and temporal constraints (e.g. time start/end) for searching. The temporal extent (begin/end) is added as a core queryable and returnable property. Plenty of other changes are incorporated, including improvements to query distribution, WSDL and schema documents, and requirements and conformance classes. In conclusion, CS 3.0 is a long-awaited revision of the previous CS 2.0.2 Implementation Specification (approved in early 2007), whose requirements started to be collected as early as July 2007. Various profiles of CS 2.0.2 exist, with related dependencies, and will need to be harmonized with this specification.
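As an illustration of the kind of OpenSearch-style KVP query described above, the following Java sketch builds a GET request with a bounding box and a temporal window. The endpoint and parameter names (q, bbox, start, end) are assumptions for illustration; the exact names are defined by a service's OpenSearch description document and the OGC Geo and Time extensions.

import java.net.URI;

public class OpenSearchQuerySketch {
    public static void main(String[] args) {
        String base = "https://example.org/csw";      // hypothetical catalogue endpoint
        String bbox = "11.0,46.0,12.0,47.0";           // WGS84 lon/lat bounding box
        String start = "2012-01-01T00:00:00Z";         // temporal extent begin
        String end = "2012-12-31T23:59:59Z";           // temporal extent end

        // Assemble a simple KVP GET request combining keyword, spatial and
        // temporal constraints.
        URI query = URI.create(base
                + "?q=landsat"
                + "&bbox=" + bbox
                + "&start=" + start
                + "&end=" + end);
        System.out.println("GET " + query);
    }
}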
Generalized Support Software: Domain Analysis and Implementation
NASA Technical Reports Server (NTRS)
Stark, Mike; Seidewitz, Ed
1995-01-01
For the past five years, the Flight Dynamics Division (FDD) at NASA's Goddard Space Flight Center has been carrying out a detailed domain analysis effort and is now beginning to implement Generalized Support Software (GSS) based on this analysis. GSS is part of the larger Flight Dynamics Distributed System (FDDS), and is designed to run under the FDDS User Interface / Executive (UIX). The FDD is transitioning from a mainframe-based environment to systems running on engineering workstations. The GSS will be a library of highly reusable components that may be configured within the standard FDDS architecture to quickly produce low-cost satellite ground support systems. The estimate for the first release is that this library will contain approximately 200,000 lines of code. The main driver for developing generalized software is development cost and schedule improvement. The goal is to ultimately have at least 80 percent of all software required for a spacecraft mission (within the domain supported by the GSS) configured from the generalized components.
Distributed nuclear medicine applications using World Wide Web and Java technology.
Knoll, P; Höll, K; Mirzaei, S; Koriska, K; Köhn, H
2000-01-01
At present, medical applications applying World Wide Web (WWW) technology are mainly used to view static images and to retrieve some information. The Java platform is a relatively new way of computing, especially designed for network computing and distributed applications, which enables interactive connection between user and information via the WWW. The Java 2 Software Development Kit (SDK), including the Java2D API, Java Remote Method Invocation (RMI) technology, Object Serialization and the Java Advanced Imaging (JAI) extension, was used to achieve a robust, platform independent and network centric solution. Medical image processing software based on this technology is presented, and adequate performance capability of Java is demonstrated by an iterative reconstruction algorithm for single photon emission computerized tomography (SPECT).
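As a hedged sketch of the Java RMI pattern mentioned above (the interface and method names are hypothetical, not the authors' actual API), a remote reconstruction service could be declared and called roughly as follows, assuming a server has already bound the service in an RMI registry:

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

// Hypothetical remote interface for an image-processing service.
interface SpectReconstruction extends Remote {
    // Returns reconstructed slice data for the given projection data.
    float[] reconstruct(float[] projections, int iterations) throws RemoteException;
}

public class RmiClientSketch {
    public static void main(String[] args) throws Exception {
        // Look up the remote service published by a (hypothetical) server host.
        Registry registry = LocateRegistry.getRegistry("nm-server.example.org");
        SpectReconstruction service =
                (SpectReconstruction) registry.lookup("SpectReconstruction");
        float[] slice = service.reconstruct(new float[64 * 64], 10);
        System.out.println("Received " + slice.length + " reconstructed values");
    }
}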
A Case Study of Coordination in Distributed Agile Software Development
NASA Astrophysics Data System (ADS)
Hole, Steinar; Moe, Nils Brede
Global Software Development (GSD) has gained significant popularity as an emerging paradigm. Companies also show interest in applying agile approaches in distributed development to combine the advantages of both approaches. However, in their most radical forms, agile and GSD can be placed at opposite ends of a plan-based/agile spectrum because of how work is coordinated. We describe how three GSD projects applying agile methods coordinate their work. We found that trust is needed to reduce the need for standardization and direct supervision when coordinating work in a GSD project, and that electronic chatting supports mutual adjustment. Further, co-location and modularization mitigate communication problems, enable agility in at least part of a GSD project, and render the implementation of Scrum of Scrums possible.
A User-Friendly Software Package for HIFU Simulation
NASA Astrophysics Data System (ADS)
Soneson, Joshua E.
2009-04-01
A freely-distributed, MATLAB (The Mathworks, Inc., Natick, MA)-based software package for simulating axisymmetric high-intensity focused ultrasound (HIFU) beams and their heating effects is discussed. The package (HIFU_Simulator) consists of a propagation module which solves the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation and a heating module which solves Pennes' bioheat transfer (BHT) equation. The pressure, intensity, heating rate, temperature, and thermal dose fields are computed and plotted, and the output is released to the MATLAB workspace for further user analysis or postprocessing.
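For context, Pennes' bioheat transfer equation solved by the heating module has the standard textbook form (generic notation, not copied from the package documentation):

\[ \rho_t c_t \frac{\partial T}{\partial t} = k_t \nabla^2 T - w_b c_b\,(T - T_a) + Q, \]

where \(\rho_t\), \(c_t\) and \(k_t\) are the tissue density, specific heat and thermal conductivity, \(w_b\) and \(c_b\) the blood perfusion rate and specific heat, \(T_a\) the arterial temperature, and \(Q\) the local ultrasound heating rate.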
A VME-based software trigger system using UNIX processors
NASA Astrophysics Data System (ADS)
Atmur, Robert; Connor, David F.; Molzon, William
1997-02-01
We have constructed a distributed computing platform with eight processors to assemble and filter data from digitization crates. The filtered data were transported to a tape-writing UNIX computer via ethernet. Each processor ran a UNIX operating system and was installed in its own VME crate. Each VME crate contained dual-port memories which interfaced with the digitizers. Using standard hardware and software (VME and UNIX) allows us to select from a wide variety of non-proprietary products and makes upgrades simpler, if they are necessary.
Experiences Using OpenMP Based on Compiler Directed Software DSM on a PC Cluster
NASA Technical Reports Server (NTRS)
Hess, Matthias; Jost, Gabriele; Mueller, Matthias; Ruehle, Roland; Biegel, Bryan (Technical Monitor)
2002-01-01
In this work we report on our experiences running OpenMP programs on a commodity cluster of PCs (personal computers) running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS (NASA Advanced Supercomputing) Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message passing counterparts and discuss performance differences.
S-Cube: Enabling the Next Generation of Software Services
NASA Astrophysics Data System (ADS)
Metzger, Andreas; Pohl, Klaus
The Service Oriented Architecture (SOA) paradigm is increasingly adopted by industry for building distributed software systems. However, when designing, developing and operating innovative software services and service-based systems, several challenges exist. Those challenges include how to manage the complexity of those systems, how to establish, monitor and enforce Quality of Service (QoS) and Service Level Agreements (SLAs), as well as how to build those systems such that they can proactively adapt to dynamically changing requirements and context conditions. Developing foundational solutions for those challenges requires joint efforts of different research communities such as Business Process Management, Grid Computing, Service Oriented Computing and Software Engineering. This paper provides an overview of S-Cube, the European Network of Excellence on Software Services and Systems. S-Cube brings together researchers from leading research institutions across Europe, who join their competences to develop foundations, theories as well as methods and tools for future service-based systems.
Spacelab experiment computer study. Volume 1: Executive summary (presentation)
NASA Technical Reports Server (NTRS)
Lewis, J. L.; Hodges, B. C.; Christy, J. O.
1976-01-01
A quantitative cost for various Spacelab flight hardware configurations is provided along with varied software development options. A cost analysis of Spacelab computer hardware and software is presented. The cost study is discussed based on utilization of a central experiment computer with optional auxiliary equipment. Groundrules and assumptions used in deriving the costing methods for all options in the Spacelab experiment study are presented. The groundrules and assumptions are analysed, and the options, along with their cost considerations, are discussed. It is concluded that the Spacelab program cost for software development and maintenance is independent of experimental hardware and software options, that the distributed standard computer concept simplifies software integration without a significant increase in cost, and that decisions on flight computer hardware configurations should not be made until payload selection for a given mission and a detailed analysis of the mission requirements are completed.
NEXUS - Resilient Intelligent Middleware
NASA Astrophysics Data System (ADS)
Kaveh, N.; Hercock, R. Ghanea
Service-oriented computing, a composition of distributed-object computing, component-based, and Web-based concepts, is becoming the widespread choice for developing dynamic heterogeneous software assets available as services across a network. One of the major strengths of service-oriented technologies is the high abstraction layer and large granularity level at which software assets are viewed compared to traditional object-oriented technologies. Collaboration through encapsulated and separately defined service interfaces creates a service-oriented environment, whereby multiple services can be linked together through their interfaces to compose a functional system. This approach enables better integration of legacy and non-legacy services, via wrapper interfaces, and allows for service composition at a more abstract level especially in cases such as vertical market stacks. The heterogeneous nature of service-oriented technologies and the granularity of their software components makes them a suitable computing model in the pervasive domain.
Simulation of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Richsmeier, Steven C.; Singer-Berk, Alexander; Bernstein, Lawrence S.
2004-01-01
A software package generates simulated hyperspectral imagery for use in validating algorithms that generate estimates of Earth-surface spectral reflectance from hyperspectral images acquired by airborne and spaceborne instruments. This software is based on a direct simulation Monte Carlo approach for modeling three-dimensional atmospheric radiative transport, as well as reflections from surfaces characterized by spatially inhomogeneous bidirectional reflectance distribution functions. In this approach, "ground truth" is accurately known through input specification of surface and atmospheric properties, and it is practical to consider wide variations of these properties. The software can treat both land and ocean surfaces, as well as the effects of finite clouds with surface shadowing. The spectral/spatial data cubes computed by use of this software can serve both as a substitute for, and a supplement to, field validation data.
Software Authority Transition through Multiple Distributors
Han, Kyusunk; Shon, Taeshik
2014-01-01
The rapid growth in the use of smartphones and tablets has changed the software distribution ecosystem. The trend today is to purchase software through application stores rather than from traditional offline markets. Smartphone and tablet users can install applications easily by purchasing from the online store deployed in their device. Several systems, such as Android or PC-based OS units, allow users to install software from multiple sources. Such openness, however, can promote serious threats, including malware and illegal usage. In order to prevent such threats, several stores use online authentication techniques. These methods can, however, also present a problem whereby even licensed users cannot use their purchased application. In this paper, we discuss these issues and provide an authentication method that will make purchased applications available to the registered user at all times. PMID:25143971
Software authority transition through multiple distributors.
Han, Kyusunk; Shon, Taeshik
2014-01-01
The rapid growth in the use of smartphones and tablets has changed the software distribution ecosystem. The trend today is to purchase software through application stores rather than from traditional offline markets. Smartphone and tablet users can install applications easily by purchasing from the online store deployed in their device. Several systems, such as Android or PC-based OS units, allow users to install software from multiple sources. Such openness, however, can promote serious threats, including malware and illegal usage. In order to prevent such threats, several stores use online authentication techniques. These methods can, however, also present a problem whereby even licensed users cannot use their purchased application. In this paper, we discuss these issues and provide an authentication method that will make purchased applications available to the registered user at all times.
Comparison of Fiber Optic Strain Demodulation Implementations
NASA Technical Reports Server (NTRS)
Quach, Cuong C.; Vazquez, Sixto L.
2005-01-01
NASA Langley Research Center is developing instrumentation based upon principles of Optical Frequency-Domain Reflectometry (OFDR) for the provision of large-scale, dense distribution of strain sensors using fiber optics embedded with Bragg gratings. Fiber Optic Bragg Grating technology enables the distribution of thousands of sensors immune to moisture and electromagnetic interference with negligible weight penalty. At Langley, this technology provides a key component for research and development relevant to comprehensive aerospace vehicle structural health monitoring. A prototype system is under development that includes hardware and software necessary for the acquisition of data from an optical network and conversion of the data into strain measurements. This report documents the steps taken to verify the software that implements the algorithm for calculating the fiber strain. Brief descriptions of the strain measurement system and the test article are given. The scope of this report is the verification of software implementations as compared to a reference model. The algorithm will be detailed along with comparison results.
Error correction and diversity analysis of population mixtures determined by NGS
Burroughs, Nigel J.; Evans, David J.; Ryabov, Eugene V.
2014-01-01
The impetus for this work was the need to analyse nucleotide diversity in a viral mix taken from honeybees. The paper has two findings. First, a method for correction of next generation sequencing error in the distribution of nucleotides at a site is developed. Second, a package of methods for assessment of nucleotide diversity is assembled. The error correction method is statistically based and works at the level of the nucleotide distribution rather than the level of individual nucleotides. The method relies on an error model and a sample of known viral genotypes that is used for model calibration. A compendium of existing and new diversity analysis tools is also presented, allowing hypotheses about diversity and mean diversity to be tested and associated confidence intervals to be calculated. The methods are illustrated using honeybee viral samples. Software in both Excel and Matlab and a guide are available at http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/, the Warwick University Systems Biology Centre software download site. PMID:25405074
48 CFR 227.7203-9 - Copyright.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Software and Computer Software Documentation 227.7203-9 Copyright. (a) Copyright license. (1) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software... Government to reproduce the software or documentation, distribute copies, perform or display the software or...
48 CFR 227.7203-9 - Copyright.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Software and Computer Software Documentation 227.7203-9 Copyright. (a) Copyright license. (1) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software... Government to reproduce the software or documentation, distribute copies, perform or display the software or...
48 CFR 227.7203-9 - Copyright.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Software and Computer Software Documentation 227.7203-9 Copyright. (a) Copyright license. (1) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software... Government to reproduce the software or documentation, distribute copies, perform or display the software or...
48 CFR 227.7203-9 - Copyright.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Software and Computer Software Documentation 227.7203-9 Copyright. (a) Copyright license. (1) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software... Government to reproduce the software or documentation, distribute copies, perform or display the software or...
48 CFR 227.7203-9 - Copyright.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Software and Computer Software Documentation 227.7203-9 Copyright. (a) Copyright license. (1) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software... Government to reproduce the software or documentation, distribute copies, perform or display the software or...
Blagec, Kathrin; Jungwirth, David; Haluza, Daniela; Samwald, Matthias
2018-01-01
Medical device regulations, which aim to ensure safety standards, apply not only to hardware devices but also to standalone medical software, e.g. mobile apps. This study explored the effects of these regulations on the development and distribution of medical standalone software. We invited a convenience sample of 130 domain experts to participate in an online survey about the impact of current regulations on the development and distribution of medical standalone software; 21 respondents completed the questionnaire. Participants reported slight positive effects on usability, reliability, and data security of their products, whereas the ability to modify already deployed software and customization by end users were negatively impacted. The additional time and costs needed to go through the regulatory process were perceived as the greatest obstacles in developing and distributing medical software. Further research is needed to compare positive effects on software quality with negative impacts on market access and innovation. Strategies for avoiding over-regulation while still ensuring safety standards need to be devised.
Analysis and numerical simulation research of the heating process in the oven
NASA Astrophysics Data System (ADS)
Chen, Yawei; Lei, Dingyou
2016-10-01
How to use an oven to bake delicious food is a problem of great concern to both the designers and the users of the oven. To this end, this paper analyses the heat distribution in the oven based on its basic operating principles and carries out a numerical simulation of the temperature distribution over the rack cross-section. A differential equation model of the temperature changes in the pan during oven operation is constructed from heat radiation and heat conduction; based on the idea of using cellular automata to simulate the heat transfer process, ANSYS software is used to perform numerical simulation analyses of rectangular, round-cornered rectangular, elliptical and circular pans and to give the instantaneous temperature distribution for each pan shape. The temperature distributions of the rectangular and circular pans show that the product gets overcooked easily at the corners and edges of rectangular pans but not in a round pan.
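For reference, the conduction part of such a differential equation model typically takes the standard heat equation form (a generic statement, not the paper's exact model):

\[ \frac{\partial T}{\partial t} = \alpha \nabla^2 T, \qquad \alpha = \frac{k}{\rho c_p}, \]

with a radiative boundary condition of the form \( q = \varepsilon \sigma \left( T_{\mathrm{oven}}^4 - T_{\mathrm{surface}}^4 \right) \) representing the heat flux absorbed from the oven walls, where \(k\), \(\rho\), \(c_p\), \(\varepsilon\) and \(\sigma\) denote the thermal conductivity, density, specific heat, emissivity and the Stefan-Boltzmann constant, respectively.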
A controlled experiment on the impact of software structure on maintainability
NASA Technical Reports Server (NTRS)
Rombach, Dieter H.
1987-01-01
The impact of software structure on maintainability aspects including comprehensibility, locality, modifiability, and reusability in a distributed system environment is studied in a controlled maintenance experiment involving six medium-size distributed software systems implemented in LADY (language for distributed systems) and six in an extended version of sequential PASCAL. For all maintenance aspects except reusability, the results were quantitatively given in terms of complexity metrics which could be automated. The results showed LADY to be better suited to the development of maintainable software than the extension of sequential PASCAL. The strong typing combined with high parametrization of units is suggested to improve the reusability of units in LADY.
Cooperative organic mine avoidance path planning
NASA Astrophysics Data System (ADS)
McCubbin, Christopher B.; Piatko, Christine D.; Peterson, Adam V.; Donnald, Creighton R.; Cohen, David
2005-06-01
The JHU/APL Path Planning team has developed path planning techniques to look for paths that balance the utility and risk associated with different routes through a minefield. Extending on previous years' efforts, we investigated real-world Naval mine avoidance requirements and developed a tactical decision aid (TDA) that satisfies those requirements. APL has developed new mine path planning techniques using graph based and genetic algorithms which quickly produce near-minimum risk paths for complicated fitness functions incorporating risk, path length, ship kinematics, and naval doctrine. The TDA user interface, a Java Swing application that obtains data via Corba interfaces to path planning databases, allows the operator to explore a fusion of historic and in situ mine field data, control the path planner, and display the planning results. To provide a context for the minefield data, the user interface also renders data from the Digital Nautical Chart database, a database created by the National Geospatial-Intelligence Agency containing charts of the world's ports and coastal regions. This TDA has been developed in conjunction with the COMID (Cooperative Organic Mine Defense) system. This paper presents a description of the algorithms, architecture, and application produced.
West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Kumar, Sunil; Swallow, Aaron; Luizza, Matthew; Chignell, Steve
2017-01-01
Among the most pressing concerns of land managers in post-wildfire landscapes are the establishment and spread of invasive species. Land managers need accurate maps of invasive species cover for targeted management post-disturbance that are easily transferable across space and time. In this study, we sought to develop an iterative, replicable methodology based on limited invasive species occurrence data, freely available remotely sensed data, and open source software to predict the distribution of Bromus tectorum (cheatgrass) in a post-wildfire landscape. We developed four species distribution models using eight spectral indices derived from five months of Landsat 8 Operational Land Imager (OLI) data in 2014. These months corresponded to both cheatgrass growing period and time of field data collection in the study area. The four models were improved using an iterative approach in which a threshold for cover was established, and all models had high sensitivity values when tested on an independent dataset. We also quantified the area at highest risk for invasion in future seasons given 2014 distribution, topographic covariates, and seed dispersal limitations. These models demonstrate the effectiveness of using derived multi-date spectral indices as proxies for species occurrence on the landscape, the importance of selecting thresholds for invasive species cover to evaluate ecological risk in species distribution models, and the applicability of Landsat 8 OLI and the Software for Assisted Habitat Modeling for targeted invasive species management.
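To make the role of the cover threshold concrete, the following Java sketch (with made-up numbers, not the study's data) computes model sensitivity once a presence/absence threshold on predicted cover has been chosen:

public class ThresholdSensitivitySketch {
    public static void main(String[] args) {
        // Hypothetical model output (predicted cover fraction) and field
        // observations of presence for five plots.
        double[] predictedCover = {0.02, 0.15, 0.40, 0.01, 0.33};
        boolean[] observed = {false, true, true, false, true};
        double threshold = 0.10;  // cover fraction treated as "present"

        int truePositives = 0, falseNegatives = 0;
        for (int i = 0; i < observed.length; i++) {
            boolean predictedPresent = predictedCover[i] >= threshold;
            if (observed[i] && predictedPresent) truePositives++;
            if (observed[i] && !predictedPresent) falseNegatives++;
        }
        // Sensitivity = true positives / (true positives + false negatives).
        double sensitivity = (double) truePositives / (truePositives + falseNegatives);
        System.out.println("Sensitivity at threshold " + threshold + " = " + sensitivity);
    }
}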
NASA Astrophysics Data System (ADS)
West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Kumar, Sunil; Swallow, Aaron; Luizza, Matthew W.; Chignell, Stephen M.
2017-07-01
Among the most pressing concerns of land managers in post-wildfire landscapes are the establishment and spread of invasive species. Land managers need accurate maps of invasive species cover for targeted management post-disturbance that are easily transferable across space and time. In this study, we sought to develop an iterative, replicable methodology based on limited invasive species occurrence data, freely available remotely sensed data, and open source software to predict the distribution of Bromus tectorum (cheatgrass) in a post-wildfire landscape. We developed four species distribution models using eight spectral indices derived from five months of Landsat 8 Operational Land Imager (OLI) data in 2014. These months corresponded to both cheatgrass growing period and time of field data collection in the study area. The four models were improved using an iterative approach in which a threshold for cover was established, and all models had high sensitivity values when tested on an independent dataset. We also quantified the area at highest risk for invasion in future seasons given 2014 distribution, topographic covariates, and seed dispersal limitations. These models demonstrate the effectiveness of using derived multi-date spectral indices as proxies for species occurrence on the landscape, the importance of selecting thresholds for invasive species cover to evaluate ecological risk in species distribution models, and the applicability of Landsat 8 OLI and the Software for Assisted Habitat Modeling for targeted invasive species management.
A Disk-Based System for Producing and Distributing Science Products from MODIS
NASA Technical Reports Server (NTRS)
Masuoka, Edward; Wolfe, Robert; Sinno, Scott; Ye Gang; Teague, Michael
2007-01-01
Since beginning operations in 1999, the MODIS Adaptive Processing System (MODAPS) has evolved to take advantage of trends in information technology, such as the falling cost of computing cycles and disk storage and the availability of high quality open-source software (Linux, Apache and Perl), to achieve substantial gains in processing and distribution capacity and throughput while driving down the cost of system operations.
Low latency messages on distributed memory multiprocessors
NASA Technical Reports Server (NTRS)
Rosing, Matthew; Saltz, Joel
1993-01-01
Many of the issues in developing an efficient interface for communication on distributed memory machines are described and a portable interface is proposed. Although the hardware component of message latency is less than one microsecond on many distributed memory machines, the software latency associated with sending and receiving typed messages is on the order of 50 microseconds. The reason for this imbalance is that the software interface does not match the hardware. By changing the interface to match the hardware more closely, applications with fine grained communication can be put on these machines. Based on several tests that were run on the iPSC/860, an interface that will better match current distributed memory machines is proposed. The model used in the proposed interface consists of a computation processor and a communication processor on each node. Communication between these processors and other nodes in the system is done through a buffered network. Information that is transmitted is either data or procedures to be executed on the remote processor. The dual processor system is better suited for efficiently handling asynchronous communications compared to a single processor system. The ability to send data or procedures is very flexible for minimizing message latency, based on the type of communication being performed. The tests performed and the proposed interface are described.
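A minimal Java sketch of the "data or procedure" message idea described above (illustrative only; the paper's proposed interface is not reproduced here): the receiving node's communication processor dispatches each incoming message either to a data handler or to a registered procedure.

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class ActiveMessageSketch {
    // A message either carries raw data for storage or names a "procedure"
    // (handler) that the receiving node should execute on the payload.
    static class Message implements Serializable {
        final String handlerName;
        final byte[] payload;
        Message(String handlerName, byte[] payload) {
            this.handlerName = handlerName;
            this.payload = payload;
        }
    }

    // Handlers registered with the receiving node's communication processor.
    static final Map<String, Consumer<byte[]>> HANDLERS = new HashMap<>();

    public static void main(String[] args) {
        HANDLERS.put("storeData", bytes ->
                System.out.println("stored " + bytes.length + " bytes"));
        HANDLERS.put("runProcedure", bytes ->
                System.out.println("executing remote procedure on " + bytes.length + " bytes"));

        // On arrival, the message is dispatched without involving the
        // computation processor.
        Message incoming = new Message("runProcedure", new byte[256]);
        HANDLERS.get(incoming.handlerName).accept(incoming.payload);
    }
}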
Computer Sciences and Data Systems, volume 1
NASA Technical Reports Server (NTRS)
1987-01-01
Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.
Are the expected benefits of requirements reuse hampered by distance? An experiment.
Carrillo de Gea, Juan M; Nicolás, Joaquín; Fernández-Alemán, José L; Toval, Ambrosio; Idri, Ali
2016-01-01
Software development processes are often performed by distributed teams which may be separated by great distances. Global software development (GSD) has undergone a significant growth in recent years. The challenges concerning GSD are especially relevant to requirements engineering (RE). Stakeholders need to share a common ground, but there are many difficulties as regards the potentially variable interpretation of the requirements in different contexts. We posit that the application of requirements reuse techniques could alleviate this problem through the diminution of the number of requirements open to misinterpretation. This paper presents a reuse-based approach with which to address RE in GSD, with special emphasis on specification techniques, namely parameterised requirements and traceability relationships. An experiment was carried out with the participation of 29 university students enrolled on a Computer Science and Engineering course. Two main scenarios that represented co-localisation and distribution in software development were portrayed by participants from Spain and Morocco. The global teams achieved a slightly better performance than the co-located teams as regards effectiveness, which could be a result of the worse productivity of the global teams in comparison to the co-located teams. Subjective perceptions were generally more positive in the case of the distributed teams (difficulty, speed and understanding), with the exception of quality. A theoretical model has been proposed as an evaluation framework with which to analyse, from the point of view of the factor of distance, the effect of requirements specification techniques on a set of performance and perception-based variables. The experiment utilised a new internationalisation requirements catalogue. None of the differences found between co-located and distributed teams were significant according to the outcome of our statistical tests. The well-known benefits of requirements reuse in traditional co-located projects could, therefore, also be expected in GSD projects.
SU-E-P-43: A Knowledge Based Approach to Guidelines for Software Safety
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salomons, G; Kelly, D
Purpose: In the fall of 2012, a survey was distributed to medical physicists across Canada. The survey asked the respondents to comment on various aspects of software development and use in their clinic. The survey revealed that most centers employ locally produced (in-house) software of some kind. The respondents also indicated an interest in having software guidelines, but cautioned that the realities of cancer clinics include variations that preclude a simple solution. Traditional guidelines typically involve periodically repeating a set of prescribed tests with defined tolerance limits. However, applying a similar formula to software is problematic since it assumes that the users have a perfect knowledge of how and when to apply the software and that if the software operates correctly under one set of conditions it will operate correctly under all conditions. Methods: In the approach presented here the personnel involved with the software are included as an integral part of the system. Activities performed to improve the safety of the software are done with both software and people in mind. A learning-oriented approach is taken, following the premise that the best approach to safety is increasing the understanding of those associated with the use or development of the software. Results: The software guidance document is organized by areas of knowledge related to use and development of software. The categories include: knowledge of the underlying algorithm and its limitations; knowledge of the operation of the software, such as input values, parameters, error messages, and interpretation of output; and knowledge of the environment for the software, including both data and users. Conclusion: We propose a new approach to developing guidelines which is based on acquiring knowledge rather than performing tests. The ultimate goal is to provide robust software guidelines which will be practical and effective.
Karabatsos, George
2017-02-01
Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
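As background for the BNP infinite-mixture models listed above, the generic Dirichlet process mixture takes the standard form (generic notation, not specific to the package):

\[ y_i \mid \theta_i \sim f(y \mid \theta_i), \qquad \theta_i \mid G \sim G, \qquad G \sim \mathrm{DP}(\alpha, G_0), \]

where \(\alpha\) is the concentration parameter and \(G_0\) the base distribution; the regression variants additionally condition \(f\) and/or \(G\) on covariates.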
The Bioperl Toolkit: Perl Modules for the Life Sciences
Stajich, Jason E.; Block, David; Boulez, Kris; Brenner, Steven E.; Chervitz, Stephen A.; Dagdigian, Chris; Fuellen, Georg; Gilbert, James G.R.; Korf, Ian; Lapp, Hilmar; Lehväslaiho, Heikki; Matsalla, Chad; Mungall, Chris J.; Osborne, Brian I.; Pocock, Matthew R.; Schattner, Peter; Senger, Martin; Stein, Lincoln D.; Stupka, Elia; Wilkinson, Mark D.; Birney, Ewan
2002-01-01
The Bioperl project is an international open-source collaboration of biologists, bioinformaticians, and computer scientists that has evolved over the past 7 yr into the most comprehensive library of Perl modules available for managing and manipulating life-science information. Bioperl provides an easy-to-use, stable, and consistent programming interface for bioinformatics application programmers. The Bioperl modules have been successfully and repeatedly used to reduce otherwise complex tasks to only a few lines of code. The Bioperl object model has been proven to be flexible enough to support enterprise-level applications such as EnsEMBL, while maintaining an easy learning curve for novice Perl programmers. Bioperl is capable of executing analyses and processing results from programs such as BLAST, ClustalW, or the EMBOSS suite. Interoperation with modules written in Python and Java is supported through the evolving BioCORBA bridge. Bioperl provides access to data stores such as GenBank and SwissProt via a flexible series of sequence input/output modules, and to the emerging common sequence data storage format of the Open Bioinformatics Database Access project. This study describes the overall architecture of the toolkit, the problem domains that it addresses, and gives specific examples of how the toolkit can be used to solve common life-sciences problems. We conclude with a discussion of how the open-source nature of the project has contributed to the development effort. [Supplemental material is available online at www.genome.org. Bioperl is available as open-source software free of charge and is licensed under the Perl Artistic License (http://www.perl.com/pub/a/language/misc/Artistic.html). It is available for download at http://www.bioperl.org. Support inquiries should be addressed to bioperl-l@bioperl.org.] PMID:12368254
NASA Astrophysics Data System (ADS)
Kumar, T. S.
2016-08-01
In this paper, we describe the control unit and GUI software for positioning two filter wheels, a slit wheel, and a grism wheel in the ADFOSC instrument, a first-generation instrument being built for the 3.6 m Devasthal optical telescope. The control hardware consists of five electronic boards based on low-cost 8-bit PIC microcontrollers distributed over an I2C bus. The four wheels are controlled by four identical boards configured in I2C slave mode, while the fifth board acts as an I2C master, sending commands to and receiving status from the slave boards. The master also communicates with the interfacing PC over the TCP/IP protocol using simple ASCII commands. Stepper motors, together with suitable drive amplifiers, move the wheels, and homing after power-on is achieved using Hall-effect sensors. Implementing distributed control units of identical design provides modularity, enabling easier maintenance and upgrades. GUI-based software for commanding the instrument was developed in Microsoft Visual C++. During observations the user selects the normal mode, while an engineering mode offers additional flexibility and low-level control during maintenance and testing. A detailed time-stamped log of commands, status, and errors is continuously generated. Both the control unit and the software have been successfully tested and integrated with the ADFOSC instrument.
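As a rough sketch of the PC-side command path described above (simple ASCII commands over TCP/IP to the I2C master), the following Python snippet sends one command and reads a one-line reply. The host address, port, and command strings are hypothetical; the actual ADFOSC command set is not documented here.

```python
# Minimal sketch of a GUI/PC-side client sending ASCII commands to the
# instrument's I2C master controller over TCP. The host, port, and the
# command strings ("MOVE FILTER1 3", "STATUS") are hypothetical.
import socket

def send_command(host: str, port: int, command: str, timeout: float = 5.0) -> str:
    """Send one newline-terminated ASCII command and return the reply line."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall((command + "\n").encode("ascii"))
        reply = b""
        while not reply.endswith(b"\n"):          # read until end of reply line
            chunk = sock.recv(256)
            if not chunk:
                break
            reply += chunk
    return reply.decode("ascii").strip()

if __name__ == "__main__":
    # Example usage (assumed address and commands):
    # print(send_command("192.168.1.50", 5000, "MOVE FILTER1 3"))
    # print(send_command("192.168.1.50", 5000, "STATUS"))
    pass
```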
Development of Data Processing Software for NBI Spectroscopic Analysis System
NASA Astrophysics Data System (ADS)
Zhang, Xiaodan; Hu, Chundong; Sheng, Peng; Zhao, Yuanzhe; Wu, Deyun; Cui, Qinglong
2015-04-01
A set of data processing software for NBI spectroscopic data is presented in this paper. For better and more systematic management and querying, these data are managed uniformly by the NBI data server. The data processing software offers functions for uploading original and analysed beam spectral data to the data server, both manually and automatically, for querying and downloading all NBI data, and for handling local LZO data. The software suite is composed of a server program and a client program. The server software is programmed in C/C++ under a CentOS development environment, while the client software is developed on a VC 6.0 platform and offers a convenient operator interface. The network communication between the server and the client is based on TCP. With this software, the NBI spectroscopic analysis system achieves unattended automatic operation, and the clear interface makes it much more convenient to provide beam intensity distribution and beam power data to operators for operational decision-making. Supported by the National Natural Science Foundation of China (No. 11075183) and the Chinese Academy of Sciences Knowledge Innovation
Enforcement of entailment constraints in distributed service-based business processes.
Hummer, Waldemar; Gaubatz, Patrick; Strembeck, Mark; Zdun, Uwe; Dustdar, Schahram
2013-11-01
A distributed business process is executed in a distributed computing environment. The service-oriented architecture (SOA) paradigm is a popular option for the integration of software services and execution of distributed business processes. Entailment constraints, such as mutual exclusion and binding constraints, are important means to control process execution. Mutually exclusive tasks result from the division of powerful rights and responsibilities to prevent fraud and abuse. In contrast, binding constraints define that a subject who performed one task must also perform the corresponding bound task(s). We aim to provide a model-driven approach for the specification and enforcement of task-based entailment constraints in distributed service-based business processes. Based on a generic metamodel, we define a domain-specific language (DSL) that maps the different modeling-level artifacts to the implementation level. The DSL integrates elements from role-based access control (RBAC) with the tasks that are performed in a business process. Process definitions are annotated using the DSL, and our software platform uses automated model transformations to produce executable WS-BPEL specifications which enforce the entailment constraints. We evaluate the impact of constraint enforcement on runtime performance for five selected service-based processes from the existing literature. Our evaluation demonstrates that the approach correctly enforces task-based entailment constraints at runtime. The performance experiments illustrate that the runtime enforcement operates with an overhead that scales well up to the order of several tens of thousands of logged invocations. Using our DSL annotations, the user-defined process definition remains declarative and free of security enforcement code. Our approach decouples the concerns of (non-technical) domain experts from the technical details of entailment constraint enforcement. The developed framework integrates seamlessly with WS-BPEL and the Web services technology stack. Our prototype implementation shows the feasibility of the approach, and the evaluation points to future work and further performance optimizations.
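A minimal sketch of what runtime enforcement of such constraints amounts to is shown below, assuming a simple in-memory task history; the task names and constraint sets are hypothetical, and the paper's actual approach generates the enforcement logic from DSL-annotated WS-BPEL processes rather than hand-written checks.

```python
# Minimal sketch of runtime enforcement of task-based entailment constraints.
# Constraint definitions and task names are hypothetical.
MUTUAL_EXCLUSION = [("approve_payment", "issue_payment")]   # different subjects required
BINDING = [("prepare_contract", "sign_contract")]           # same subject required

def can_execute(task: str, subject: str, history: dict) -> bool:
    """history maps an already-performed task -> the subject who performed it."""
    for t1, t2 in MUTUAL_EXCLUSION:
        other = t2 if task == t1 else t1 if task == t2 else None
        if other and history.get(other) == subject:
            return False          # same subject on mutually exclusive tasks
    for t1, t2 in BINDING:
        other = t2 if task == t1 else t1 if task == t2 else None
        if other and other in history and history[other] != subject:
            return False          # bound tasks must share one subject
    return True

history = {"approve_payment": "alice", "prepare_contract": "bob"}
print(can_execute("issue_payment", "alice", history))   # False: mutual exclusion
print(can_execute("sign_contract", "bob", history))     # True: binding satisfied
```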
Software-defined Radio Based Measurement Platform for Wireless Networks
Chao, I-Chun; Lee, Kang B.; Candell, Richard; Proctor, Frederick; Shen, Chien-Chung; Lin, Shinn-Yan
2015-01-01
End-to-end latency is critical to many distributed applications and services that are based on computer networks. There has been a dramatic push to adopt wireless networking technologies and protocols (such as WiFi, ZigBee, WirelessHART, Bluetooth, ISA100.11a, etc.) into time-critical applications. Examples of such applications include industrial automation, telecommunications, power utility, and financial services. While performance measurement of wired networks has been extensively studied, measuring and quantifying the performance of wireless networks face new challenges and demand different approaches and techniques. In this paper, we describe the design of a measurement platform based on the technologies of software-defined radio (SDR) and IEEE 1588 Precision Time Protocol (PTP) for evaluating the performance of wireless networks. PMID:27891210
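The core measurement such a platform performs is simple once sender and receiver clocks are synchronized (here via IEEE 1588 PTP): one-way latency is the difference between receive and send timestamps. The sketch below, with made-up timestamps, only illustrates that computation; it is not the platform's software.

```python
# Sketch of the one-way latency computation enabled by synchronized clocks:
# latency = receive_time - send_time for each tagged packet. Values are made up.
import statistics

# (packet_id, send_timestamp_s, receive_timestamp_s) from synchronized clocks
records = [
    (1, 10.000000, 10.000812),
    (2, 10.010000, 10.011450),
    (3, 10.020000, 10.020790),
]

latencies_us = [(rx - tx) * 1e6 for _, tx, rx in records]
print("per-packet latency (us):", [round(v, 1) for v in latencies_us])
print("mean (us):", round(statistics.mean(latencies_us), 1))
print("jitter, stdev (us):", round(statistics.stdev(latencies_us), 1))
```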
EMCORE - Emotional Cooperative Groupware
NASA Astrophysics Data System (ADS)
Fasoli, N.; Messina, A.
In recent years considerable effort has been spent developing groupware applications. Despite this, groupware applications have not met with general acceptance in the computing field. An interdisciplinary approach could prove very useful in overcoming these difficulties. A workgroup is not simply a set of people gathered together to work toward a common goal; it can also be thought of as a strong mental reality. Indeed, sociological and psychological definitions of a group differ considerably. At the sociological level a group is generally described in terms of the activities and events occurring inside the group itself. The psychological approach, on the other hand, considers not only the actions occurring inside the group, but also all the mental activities that arise from belonging to the group, whether of an emotional or a rational nature. Since the early 1960s the simple work group (i.e., the discussion group) has been analyzed in terms of its psychological behavior. EMCORE is a prototype which aims to complement computer science methods with this psychological approach. The tool has been developed for a discussion group supported by heterogeneous distributed systems and has been implemented using the CORBA abstraction augmented by the machine-independent Java language. The tool supports all the common activities of a discussion group: discussion by voice, or by a chat board if multimedia devices are not present, and discussion and elaboration of a shared document with a text and/or graphic editor. At the same time, tools are provided for the psychoanalytic approach, according to a specific methodology.
Cowell, Robert G
2018-05-04
Current models for single-source and mixture samples, and the probabilistic genotyping software based on them for analysing STR electropherogram data, assume simple probability distributions, such as the gamma distribution, to model allelic peak-height variability given the initial amount of DNA prior to PCR amplification. Here we illustrate how amplicon number distributions, for a model of the process of sample DNA collection and PCR amplification, may be efficiently computed by evaluating probability generating functions using discrete Fourier transforms. Copyright © 2018 Elsevier B.V. All rights reserved.
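The underlying numerical idea, independent of the paper's specific DNA collection and amplification model, is that a probability generating function evaluated at the N-th roots of unity can be inverted with a discrete Fourier transform to recover the probability mass function. A minimal sketch using a Binomial PGF as a check:

```python
# Recovering a discrete distribution from its probability generating function
# by evaluating it at roots of unity and applying an inverse DFT. Shown here
# for a Binomial(n, p) PGF as a check; the paper applies the same idea to a
# PCR amplification model.
import numpy as np
from math import comb

n, p = 20, 0.3
N = 64                                        # transform size >= support size
z = np.exp(2j * np.pi * np.arange(N) / N)     # N-th roots of unity

G = (1 - p + p * z) ** n                      # Binomial PGF on the unit circle
# inverse step: p_k = (1/N) * sum_j G(z_j) * exp(-2*pi*i*j*k/N) = fft(G)/N
pmf = np.fft.fft(G).real / N

exact = np.array([comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)])
print("max abs error:", np.max(np.abs(pmf[: n + 1] - exact)))
```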
Tools for Supporting Distributed Agile Project Planning
NASA Astrophysics Data System (ADS)
Wang, Xin; Maurer, Frank; Morgan, Robert; Oliveira, Josyleuda
Agile project planning plays an important part in agile software development. In distributed settings, project planning is severely impacted by the lack of face-to-face communication and the inability to share paper index cards amongst all meeting participants. To address these issues, several distributed agile planning tools were developed. The tools vary in features, functions, and supported platforms. In this chapter, we first summarize the requirements for distributed agile planning. Then we give an overview of existing agile planning tools. We also evaluate existing tools against these requirements. Finally, we present some practical advice for both designers and users of distributed agile planning tools.
Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter
2017-06-28
High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.
Hail Size Distribution Mapping
NASA Technical Reports Server (NTRS)
2008-01-01
A 3-D weather radar visualization software program was developed and implemented as part of an experimental Launch Pad 39 Hail Monitor System. 3DRadPlot, a radar plotting program, is one of several software modules that form the building blocks of the hail data processing and analysis system (the complete software processing system under development). The spatial and temporal mapping algorithms were originally developed through research at the University of Central Florida, funded by NASA's Tropical Rainfall Measuring Mission (TRMM), where the goal was to merge National Weather Service (NWS) Next-Generation Weather Radar (NEXRAD) volume reflectivity data with drop size distribution data acquired from a cluster of raindrop disdrometers. In the current work, we adapted these algorithms to process data from a cluster of hail disdrometers positioned around Launch Pads 39A and 39B, along with the corresponding NWS radar data. Radar data from all NWS NEXRAD sites are archived at the National Climatic Data Center (NCDC). That data can be readily accessed at
Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Mount, Frances; Carreon, Patricia; Torney, Susan E.
2001-01-01
The Engineering and Mission Operations Directorates at NASA Johnson Space Center are combining laboratories and expertise to establish the Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations. This is a testbed for human-centered design, development, and evaluation of intelligent autonomous and assistant systems that will be needed for human exploration and development of space. The project will improve human-centered analysis, design, and evaluation methods for developing intelligent software. This software will support human-machine cognitive and collaborative activities in future interplanetary work environments where distributed computer and human agents cooperate. We are developing and evaluating prototype intelligent systems for distributed multi-agent mixed-initiative operations. The primary target domain is control of life support systems in a planetary base. Technical approaches will be evaluated for use during extended manned tests in the target domain, the Bioregenerative Advanced Life Support Systems Test Complex (BIO-Plex). A spinoff target domain is the International Space Station (ISS) Mission Control Center (MCC). Products of this project include human-centered intelligent software technology, innovative human interface designs, and human-centered software development processes, methods, and products. The testbed uses adjustable autonomy software and life support systems simulation models from the Adjustable Autonomy Testbed to represent operations on the remote planet. Ground operations prototypes and concepts will be evaluated in the Exploration Planning and Operations Center (ExPOC) and Jupiter Facility.
3D printed plano-freeform optics for non-coherent discontinuous beam shaping
NASA Astrophysics Data System (ADS)
Assefa, Bisrat G.; Saastamoinen, Toni; Biskop, Joris; Kuittinen, Markku; Turunen, Jari; Saarinen, Jyrki
2018-03-01
The design, fabrication, and characterization of freeform optics for LED-based complex target irradiance distribution are challenging. Here, we investigate a 3D printing technology called Printoptical® technology in order to relax or push forward both the fabrication limits and LED-based applications of thick freeform lenses with small slope features. The freeform designs are carried out with an assumption of small-sized LED source using an existing point-source-based Tailoring method, which is available in the semi-commercial software. The numerical methods of the designs are characterized by ray-tracing software. The irradiance patterns of the 3D printed freeform lenses are promising considering the average shape conformity deviation of around ± 40 µm and center area surface roughness around ± 12 nm, which is to our knowledge by far the best result reported for 3D printed freeform lenses with a thickness greater than 1 mm. Applications of freeform lenses with discontinuous target irradiance distribution patterns are expected in eco-friendly energy efficient lighting such as in zebra-cross lighting.
Implementation of medical monitor system based on networks
NASA Astrophysics Data System (ADS)
Yu, Hui; Cao, Yuzhen; Zhang, Lixin; Ding, Mingshi
2006-11-01
In this paper, the development trend of medical monitor systems is analyzed; portability and networking capability are becoming more and more common across all kinds of medical monitor devices. The architecture of a networked medical monitor system solution is presented, and the design and implementation details of the medical monitor terminal, the monitor center software, the distributed medical database, and two kinds of medical information terminals are discussed in particular. A Rabbit3000 system is used in the medical monitor terminal to implement security administration of data transfer over the network, the human-machine interface, power management, and the DSP interface, while the DSP chip TMS5402 is used for signal analysis and data compression. The distributed medical database is designed for the hospital center according to the DICOM information model and the HL7 standard. A pocket medical information terminal based on an ARM9 embedded platform is also developed to interact with the center database over the network. Two kernels based on WinCE are customized and the corresponding terminal software is developed for nurses' routine care and doctors' auxiliary diagnosis. An invention patent for the monitor terminal has been approved, and manufacturing and clinical test plans are scheduled. Patent applications have also been filed for the two medical information terminals.
NASA Astrophysics Data System (ADS)
Betz, Jessie M. Bethly
1993-12-01
The Video Distribution Subsystem (VDS) for Space Station Freedom provides onboard video communications. The VDS includes three major functions: external video switching; internal video switching; and sync and control generation. The Video Subsystem Routing (VSR) is a part of the VDS Manager Computer Software Configuration Item (VSM/CSCI). The VSM/CSCI is the software which controls and monitors the VDS equipment. VSR activates, terminates, and modifies video services in response to Tier-1 commands to connect video sources to video destinations. VSR selects connection paths based on availability of resources and updates the video routing lookup tables. This project involves investigating the current methodology to automate the Video Subsystem Routing and developing and testing a prototype as 'proof of concept' for designers.
Planning of electroporation-based treatments using Web-based treatment-planning software.
Pavliha, Denis; Kos, Bor; Marčan, Marija; Zupanič, Anže; Serša, Gregor; Miklavčič, Damijan
2013-11-01
Electroporation-based treatment combining high-voltage electric pulses and poorly permeant cytotoxic drugs, i.e., electrochemotherapy (ECT), is currently used for treating superficial tumor nodules by following standard operating procedures. Besides ECT, another electroporation-based treatment, nonthermal irreversible electroporation (N-TIRE), is also efficient at ablating deep-seated tumors. To perform ECT or N-TIRE of deep-seated tumors, following standard operating procedures is not sufficient, and patient-specific treatment planning is required for successful treatment. Treatment planning is required because of the use of individual long-needle electrodes and the diverse shape, size, and location of deep-seated tumors. Many institutions that already perform ECT of superficial metastases could benefit from treatment-planning software that would enable the preparation of patient-specific treatment plans. To this end, we have developed Web-based treatment-planning software for planning electroporation-based treatments that does not require prior engineering knowledge from the user (e.g., the clinician). The software includes algorithms for automatic tissue segmentation and, after segmentation, generation of a 3D model of the tissue. The procedure allows the user to define how the electrodes will be inserted. Finally, the electric field distribution is computed, the position of the electrodes and the voltage to be applied are optimized using the 3D model, and a downloadable treatment plan is made available to the user.
Distributed Software for Observations in the Near Infrared
NASA Astrophysics Data System (ADS)
Gavryusev, V.; Baffa, C.; Giani, E.
We have developed an integrated system that performs astronomical observations in near-infrared bands, operating the two-dimensional instruments ARNICA (http://helios.arcetri.astro.it:/home/idefix/Mosaic/instr/arnica/arnica.html) and LONGSP (http://helios.arcetri.astro.it:/home/idefix/Mosaic/instr/longsp/longsp.html) at the Italian National Infrared Facility. The software consists of several communicating processes, generally executed across a network but also able to run on a single computer. The user interface is organized as a widget-based X11 client. Interprocess communication is provided by sockets and uses TCP/IP. The processes that control the hardware (telescope and other instruments) currently run on a PC dedicated to this task under DESQview/X, while all other components (user interface, data analysis tools, etc.) can also run under UNIX. The hardware-independent part of the software is based on the Athena Widget Set and is compiled with GNU C for maximum portability.
RINGMesh: A programming library for developing mesh-based geomodeling applications
NASA Astrophysics Data System (ADS)
Pellerin, Jeanne; Botella, Arnaud; Bonneau, François; Mazuyer, Antoine; Chauvin, Benjamin; Lévy, Bruno; Caumon, Guillaume
2017-07-01
RINGMesh is a C++ open-source programming library for manipulating discretized geological models. It is designed to ease the development of applications and workflows that use discretized 3D models; it is neither a geomodeler nor a meshing software. RINGMesh implements functionalities to read discretized surface-based or volumetric structural models and to check their validity. The models can then be exported in various file formats. RINGMesh provides data structures to represent geological structural models, defined either by their discretized boundary surfaces and/or by discretized volumes. A programming interface allows the development of new geomodeling methods and the integration of external software. The goal of RINGMesh is to help researchers focus on the implementation of their specific method rather than on tedious tasks common to many applications. The documented code is open-source and distributed under the modified BSD license. It is available at https://www.ring-team.org/index.php/software/ringmesh.
The use of hypermedia to increase the productivity of software development teams
NASA Technical Reports Server (NTRS)
Coles, L. Stephen
1991-01-01
Rapid progress in low-cost commercial PC-class multimedia workstation technology will potentially have a dramatic impact on the productivity of distributed work groups of 50-100 software developers. Hypermedia/multimedia involves the seamless integration in a graphical user interface (GUI) of a wide variety of data structures, including high-resolution graphics, maps, images, voice, and full-motion video. Hypermedia will normally require the manipulation of large dynamic files, for which relational database technology and SQL servers are essential. Basic machine architecture, special-purpose video boards, video equipment, optical memory, software needed for animation, network technology, and the anticipated increase in productivity that will result from the introduction of hypermedia technology are covered. It is suggested that the cost of the hardware and software to support an individual multimedia workstation will be on the order of $10,000.
Development of Ada language control software for the NASA power management and distribution test bed
NASA Technical Reports Server (NTRS)
Wright, Ted; Mackin, Michael; Gantose, Dave
1989-01-01
The Ada language software developed to control the NASA Lewis Research Center's Power Management and Distribution testbed is described. The testbed is a reduced-scale prototype of the electric power system to be used on space station Freedom. It is designed to develop and test hardware and software for a 20-kHz power distribution system. The distributed, multiprocessor, testbed control system has an easy-to-use operator interface with an understandable English-text format. A simple interface for algorithm writers that uses the same commands as the operator interface is provided, encouraging interactive exploration of the system.
Web-Enabled Systems for Student Access.
ERIC Educational Resources Information Center
Harris, Chad S.; Herring, Tom
1999-01-01
California State University, Fullerton is developing a suite of server-based, Web-enabled applications that distribute the functionality of its student information system software to external customers without modifying the mainframe applications or databases. The cost-effective, secure, and rapidly deployable business solution involves using the…
ERIC Educational Resources Information Center
Martins, Rosane Maria; Chaves, Magali Ribeiro; Pirmez, Luci; Rust da Costa Carmo, Luiz Fernando
2001-01-01
Discussion of the need to filter and retrieve relevant information from the Internet focuses on the use of mobile agents, specific software components based on distributed artificial intelligence and integrated systems. Surveys agent technology and discusses the agent building package used to develop two applications using IBM's Aglet…
NASA Technical Reports Server (NTRS)
Quinn, Todd M.; Walters, Jerry L.
1991-01-01
Future space exploration will require long-term human presence in space. Space environments that provide working and living quarters for manned missions are becoming increasingly larger and more sophisticated. Monitoring and control of the space environment subsystems by expert system software, which emulates human reasoning processes, could maintain the health of the subsystems and help reduce the human workload. The Autonomous Power Expert (APEX) system was developed to emulate a human expert's reasoning processes used to diagnose fault conditions in the domain of space power distribution. APEX is a fault detection, isolation, and recovery (FDIR) system capable of autonomous monitoring and control of the power distribution system. APEX consists of a knowledge base, a database, an inference engine, and various support and interface software. APEX provides the user with an easy-to-use interactive interface. When a fault is detected, APEX informs the user of the detection. The user can direct APEX to isolate the probable cause of the fault. Once a fault has been isolated, the user can ask APEX to justify its fault isolation and to recommend actions to correct the fault. APEX implementation and capabilities are discussed.
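As a generic illustration of rule-based fault detection and isolation (not APEX's actual knowledge base), the sketch below scans hypothetical telemetry against a small set of condition/cause/action rules:

```python
# Generic sketch of rule-based fault detection and isolation for a power
# distribution system. Sensor names, thresholds, and rules are hypothetical.
telemetry = {"bus_voltage": 118.0, "load_current": 0.1, "switch_cmd": "CLOSED"}

RULES = [
    # (condition over telemetry, probable cause, recommended action)
    (lambda t: t["bus_voltage"] < 100.0,
     "upstream converter failure", "switch to redundant converter"),
    (lambda t: t["switch_cmd"] == "CLOSED" and t["load_current"] < 0.5,
     "open circuit downstream of switch", "inspect load feeder wiring"),
]

def diagnose(t):
    findings = [(cause, action) for cond, cause, action in RULES if cond(t)]
    return findings or [("no fault detected", "continue monitoring")]

for cause, action in diagnose(telemetry):
    print(f"probable cause: {cause}; recommended action: {action}")
```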
A self-referential HOWTO on release engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galassi, Mark C.
Release engineering is a fundamental part of the software development cycle: it is the point at which quality control is exercised and bug fixes are integrated. The way in which software is released also gives the end user her first experience of a software package, while in scientific computing release engineering can guarantee reproducibility. For these reasons and others, the release process is a good indicator of the maturity and organization of a development team. Software teams often do not put in place a release process at the beginning. This is unfortunate because the team does not have early and continuous execution of test suites, and it does not exercise the software in the same conditions as the end users. I describe an approach to release engineering based on the software tools developed and used by the GNU project, together with several specific proposals related to packaging and distribution. I do this in a step-by-step manner, demonstrating how this very paper is written and built using proper release engineering methods. Because many aspects of release engineering are not exercised in the building of the paper, the accompanying software repository also contains examples of software libraries.
48 CFR 227.7205 - Contracts for special works.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Computer Software and Computer Software Documentation 227.7205 Contracts for special works. (a) Use the... a specific need to control the distribution of computer software or computer software documentation..., modification, reproduction, release, performance, display, or disclosure of such software or documentation. Use...
48 CFR 227.7205 - Contracts for special works.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Computer Software and Computer Software Documentation 227.7205 Contracts for special works. (a) Use the... a specific need to control the distribution of computer software or computer software documentation..., modification, reproduction, release, performance, display, or disclosure of such software or documentation. Use...
The process group approach to reliable distributed computing
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.
1991-01-01
The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop and which are fault-tolerant and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.
Multiple Coulomb scattering in thin silicon
NASA Astrophysics Data System (ADS)
Berger, N.; Buniatyan, A.; Eckert, P.; Förster, F.; Gredig, R.; Kovalenko, O.; Kiehn, M.; Philipp, R.; Schöning, A.; Wiedner, D.
2014-07-01
We present a measurement of multiple Coulomb scattering of 1 to 6 GeV/c electrons in thin (50-140 μm) silicon targets. The data were obtained with the EUDET telescope Aconite at DESY and are compared to parametrisations as used in the Geant4 software package. We find good agreement between data and simulation in the scattering distribution width but large deviations in the shape of the distribution. In order to achieve a better description of the shape, a new scattering model based on a Student's t distribution is developed and compared to the data.
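A minimal illustration of the modeling choice, with made-up parameters rather than the paper's fitted values: a Student's t distribution scaled to the same core width as a Gaussian places far more probability in the tails, which is the feature used to better describe the observed scattering shape.

```python
# Illustration of why a Student's t distribution can describe scattering-angle
# data better than a Gaussian of the same core width: it has heavier tails.
# Parameters are made up and are not the fitted values from the paper.
import numpy as np

rng = np.random.default_rng(1)
theta0 = 0.5e-3            # assumed core width of the projected angle (rad)
nu = 3.0                   # assumed Student's t degrees of freedom

gauss = rng.normal(scale=theta0, size=1_000_000)
student = theta0 * rng.standard_t(df=nu, size=1_000_000)

for name, sample in [("gaussian", gauss), ("student-t", student)]:
    tail = np.mean(np.abs(sample) > 3 * theta0)   # fraction beyond 3*theta0
    print(f"{name}: fraction beyond 3*theta0 = {tail:.4f}")
```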
Algorithms and Object-Oriented Software for Distributed Physics-Based Modeling
NASA Technical Reports Server (NTRS)
Kenton, Marc A.
2001-01-01
The project seeks to develop methods to more efficiently simulate aerospace vehicles. The goals are to reduce model development time, increase accuracy (e.g., by allowing the integration of multidisciplinary models), facilitate collaboration by geographically distributed groups of engineers, support uncertainty analysis and optimization, reduce hardware costs, and increase execution speeds. These problems are the subject of considerable contemporary research (e.g., Biedron et al. 1999; Heath and Dick, 2000).
2013-10-10
Science and Engineering, Stony Brook University, Stony Brook, NY 11794. ... Spectra were recorded from 4000-500 cm-1 with a resolution of 2 cm-1, and were analyzed using the Nicolet OMNIC software suite. Raman ...
Flexible distributed architecture for semiconductor process control and experimentation
NASA Astrophysics Data System (ADS)
Gower, Aaron E.; Boning, Duane S.; McIlrath, Michael B.
1997-01-01
Semiconductor fabrication requires an increasingly expensive and integrated set of tightly controlled processes, driving the need for a fabrication facility with fully computerized, networked processing equipment. We describe an integrated, open system architecture enabling distributed experimentation and process control for plasma etching. The system was developed at MIT's Microsystems Technology Laboratories and employs in-situ CCD interferometry-based analysis in the sensor-feedback control of an Applied Materials Precision 5000 Plasma Etcher (AME5000). Our system supports accelerated, advanced research involving feedback control algorithms, and includes a distributed interface that utilizes the internet to make these fabrication capabilities available to remote users. The system architecture is both distributed and modular: the specific implementation of any one task does not restrict the implementation of another. The low-level architectural components include a host controller that communicates with the AME5000 equipment via SECS-II, and a host controller for the acquisition and analysis of the CCD sensor images. A cell controller (CC) manages communications between these equipment and sensor controllers. The CC is also responsible for process control decisions; algorithmic controllers may be integrated locally or via remote communications. Finally, a system server handles connections from internet/intranet (Web-based) clients and uses a direct link with the CC to access the system. Each component communicates via a predefined set of TCP/IP socket-based messages. This flexible architecture makes integration easier and more robust, and enables separate software components to run on the same or different computers independent of hardware or software platform.
ICESat Science Investigator led Processing System (I-SIPS)
NASA Astrophysics Data System (ADS)
Bhardwaj, S.; Bay, J.; Brenner, A.; Dimarzio, J.; Hancock, D.; Sherman, M.
2003-12-01
The ICESat Science Investigator-led Processing System (I-SIPS) generates the GLAS standard data products. It consists of two main parts: the Scheduling and Data Management System (SDMS) and the Geoscience Laser Altimeter System (GLAS) Science Algorithm Software. The system has been operational since the successful launch of ICESat. It ingests data from the GLAS instrument, generates GLAS data products, and distributes them to the GLAS Science Computing Facility (SCF), the Instrument Support Facility (ISF), and the National Snow and Ice Data Center (NSIDC) ECS DAAC. The SDMS is the planning, scheduling, and data management system that runs the GLAS Science Algorithm Software (GSAS). GSAS is based on the Algorithm Theoretical Basis Documents provided by the Science Team and is developed independently of SDMS. The SDMS provides the processing environment to plan jobs based on existing data, control job flow, data distribution, and archiving. The SDMS design is based on a mission-independent architecture that imposes few constraints on the science code, thereby facilitating I-SIPS integration. I-SIPS currently works in an autonomous manner to ingest GLAS instrument data, distribute these data to the ISF, run the science processing algorithms to produce the GLAS standard products, reprocess data when new versions of science algorithms are released, and distribute the products to the SCF, ISF, and NSIDC. I-SIPS has a proven performance record, delivering data to the SCF within hours after the initial instrument activation. The I-SIPS design philosophy gives this system a high potential for reuse in other science missions.
Improving the Aircraft Design Process Using Web-Based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)
2000-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
Improving the Aircraft Design Process Using Web-based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.
2003-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
A Linguistic Model in Component Oriented Programming
NASA Astrophysics Data System (ADS)
Crăciunean, Daniel Cristian; Crăciunean, Vasile
2016-12-01
Well-organized component-oriented programming can bring a large increase in efficiency to the development of large software systems. This paper proposes a model for building software systems by assembling components that can operate independently of each other. The model is based on a computing environment that runs parallel and distributed applications. The paper introduces concepts such as the abstract aggregation scheme and the aggregation application. Essentially, an aggregation application is an application obtained by combining corresponding components. In our model, an aggregation application is a word in a language.
NASA Astrophysics Data System (ADS)
Kintsakis, Athanassios M.; Psomopoulos, Fotis E.; Symeonidis, Andreas L.; Mitkas, Pericles A.
Hermes introduces a new "describe once, run anywhere" paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.
The StarLite Project Prototyping Real-Time Software
1991-10-01
... multiversion data objects using the prototyping environment. Section 5 concludes the paper. 2. Message-Based Simulation. When prototyping distributed ... phase locking and priority-based synchronization algorithms, and between a multiversion database and its corresponding single-version database, through ... its deadline, since the transaction is only aborted in the validation phase. 4.5. A Multiversion Database System. To illustrate the effectiveness of the ...
Logistics Force Planner Assistant (Log Planner)
1989-09-01
... elements. The system is implemented on an MS-DOS based microcomputer, using the "Knowledge Pro" software tool. ... service support structure. 3. A microcomputer-based knowledge system was developed and successfully demonstrated. Four modules of information are ... combat service support (CSS) units planning process to Army Staff logistics planners. Personnel newly assigned to logistics planning need an ...
Cost-aware request routing in multi-geography cloud data centres using software-defined networking
NASA Astrophysics Data System (ADS)
Yuan, Haitao; Bi, Jing; Li, Bo Hu; Tan, Wei
2017-03-01
Geographically distributed cloud data centres (CDCs) incur enormous energy and bandwidth costs in providing multiple cloud applications to users around the world. Previous studies focus only on energy cost minimisation in distributed CDCs. However, a CDC provider also needs to deliver huge volumes of data between users and distributed CDCs through internet service providers (ISPs). The geographical diversity of bandwidth and energy costs poses the highly challenging problem of how to minimise the total cost of a CDC provider. With recently emerging software-defined networking, we study the total cost minimisation problem for a CDC provider by exploiting the geographical diversity of energy and bandwidth costs. We formulate the total cost minimisation problem as a mixed integer non-linear program (MINLP). We then develop heuristic algorithms to solve the problem and to provide cost-aware request routing that jointly optimises the selection of ISPs and the number of servers in distributed CDCs. Besides, to tackle the dynamic workload in distributed CDCs, this article proposes a regression-based workload prediction method to obtain the future incoming workload. Finally, this work evaluates the cost-aware request routing by trace-driven simulation and compares it with existing approaches to demonstrate its effectiveness.
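As a toy illustration of the regression-based workload prediction step (not the paper's predictor or its MINLP cost model), the sketch below fits a linear trend to a short, made-up arrival-rate history and converts the forecast into a server count under an assumed per-server capacity.

```python
# Minimal sketch of regression-based workload prediction: fit a linear model
# on recent arrival-rate history and extrapolate the next interval. The data,
# window length, and per-server capacity are made up.
import numpy as np

history = np.array([820, 860, 900, 950, 1010, 1080, 1150])  # requests/s, last 7 slots
t = np.arange(len(history), dtype=float)

# Ordinary least squares fit of rate = a*t + b
a, b = np.polyfit(t, history, deg=1)
next_rate = a * len(history) + b
print(f"predicted workload for next slot: {next_rate:.0f} requests/s")

# Capacity planning step (illustrative): servers needed at 200 requests/s each
per_server = 200.0
print("servers to provision:", int(np.ceil(next_rate / per_server)))
```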
2012-09-30
... recognition. Algorithm design and statistical analysis and feature analysis. Post-Doctoral Associate, Cornell University, Bioacoustics Research ... short. The HPC-ADA was designed based on fielded systems [1-4, 6] that offer a variety of desirable attributes, specifically dynamic resource ... The software package was designed to utilize parallel and distributed processing for running recognition and other advanced algorithms. DeLMA ...
NASA Technical Reports Server (NTRS)
Leake, Stephen; Green, Tom; Cofer, Sue; Sauerwein, Tim
1989-01-01
HARPS is a telerobot control system that can perform some simple but useful tasks. This capability is demonstrated by performing the ORU exchange demonstration. HARPS is based on NASREM (NASA Standard Reference Model). All software is developed in Ada, and the project incorporates a number of different CASE (computer-aided software engineering) tools. NASREM was found to be a valid and useful model for building a telerobot control system. Its hierarchical and distributed structure creates a natural and logical flow for implementing large complex robust control systems. The ability of Ada to create and enforce abstraction enhanced the implementation of such control systems.
The Particle Physics Data Grid. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livny, Miron
2002-08-16
The main objective of the Particle Physics Data Grid (PPDG) project has been to implement and evaluate distributed (Grid-enabled) data access and management technology for current and future particle and nuclear physics experiments. The specific goals of PPDG have been to design, implement, and deploy a Grid-based software infrastructure capable of supporting the data generation, processing and analysis needs common to the physics experiments represented by the participants, and to adapt experiment-specific software to operate in the Grid environment and to exploit this infrastructure. To accomplish these goals, the PPDG focused on the implementation and deployment of several critical services: reliable and efficient file replication service, high-speed data transfer services, multisite file caching and staging service, and reliable and recoverable job management services. The focus of the activity was the job management services and the interplay between these services and distributed data access in a Grid environment. Software was developed to study the interaction between HENP applications and distributed data storage fabric. One key conclusion was the need for a reliable and recoverable tool for managing large collections of interdependent jobs. An attached document provides an overview of the current status of the Directed Acyclic Graph Manager (DAGMan) with its main features and capabilities.
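The "large collections of interdependent jobs" problem is naturally expressed as a directed acyclic graph of jobs executed in dependency order. The toy sketch below illustrates that idea only; it is not Condor DAGMan, and the job names and retry policy are hypothetical.

```python
# Toy sketch of managing interdependent jobs as a DAG: run each job only after
# its predecessors succeed, with one retry on failure. Not Condor DAGMan.
from graphlib import TopologicalSorter

dependencies = {                 # job -> set of jobs it depends on (hypothetical)
    "stage_input": set(),
    "reconstruct": {"stage_input"},
    "analyze": {"reconstruct"},
    "archive": {"reconstruct", "analyze"},
}

def run(job: str) -> bool:
    print(f"running {job}")
    return True                  # stand-in for submitting to the Grid fabric

done = set()
for job in TopologicalSorter(dependencies).static_order():
    ok = run(job) or run(job)    # one retry on failure
    if not ok:
        print(f"{job} failed twice; halting dependent jobs")
        break
    done.add(job)
print("completed:", sorted(done))
```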
Khammarnia, Mohammad; Sharifian, Roxana; Zand, Farid; Keshtkaran, Ali; Barati, Omid
2016-09-01
This study aimed to identify the functional requirements of computerized provider order entry software and to design this software in Iran. The study was conducted using document review, interviews, and focus group discussions at Shiraz University of Medical Sciences, the medical hub of Iran, in 2013-2015. The study sample consisted of physicians (n = 12) and nurses (n = 2) in the largest hospital in the southern part of Iran and information technology experts (n = 5) at Shiraz University of Medical Sciences. The functional requirements of the computerized provider order entry system were examined in three phases. Finally, the functional requirements were distributed over four levels, and the computerized provider order entry software was designed accordingly. The software had seven main dimensions: (1) data entry, (2) drug interaction management system, (3) warning system, (4) treatment services, (5) ability to write in software, (6) reporting from all sections of the software, and (7) technical capabilities of the software. The nurses and physicians emphasized quick access to the computerized provider order entry software, the order prescription section, and the applicability of the software. The software had some features that had not been mentioned in other studies. Ultimately, the software was designed by a company specializing in hospital information systems in Iran. This study was the first specific investigation of computerized provider order entry software design in Iran. Based on the results, it is suggested that this software be implemented in hospitals.
Parallelization of Rocket Engine System Software (Press)
NASA Technical Reports Server (NTRS)
Cezzar, Ruknet
1996-01-01
The main goal is to assess parallelization requirements for the Rocket Engine Numeric Simulator (RENS) project which, aside from gathering information on liquid-propelled rocket engines and setting forth requirements, involves a large FORTRAN-based package at NASA Lewis Research Center and TDK software developed by SUBR/UWF. The ultimate aim is to develop, test, integrate, and suitably deploy a family of software packages addressing various aspects and facets of liquid-propellant rocket engines. At present, all project efforts by the funding agency, NASA Lewis Research Center, and the HBCU participants are disseminated over the internet using World Wide Web home pages. Considering the obvious expense of actual field trials, the benefits of software simulators are potentially enormous. When realized, these benefits will be analogous to those provided by numerous CAD/CAM packages and flight-training simulators. According to the overall task assignments, Hampton University's role is to collect all available software, place it in a common format, assess and evaluate it, define interfaces, and provide integration. Most importantly, HU's mission is to ensure that real-time performance is assured. This involves source code translation, porting, and distribution. The porting will be done in two phases: first, all software will be placed on a Cray X-MP platform using FORTRAN. After testing and evaluation on the Cray X-MP, the code will be translated to C++ and ported to the parallel nCUBE platform. At present, we are evaluating another option of distributed processing over local area networks using Sun NFS, Ethernet, and TCP/IP. Considering the heterogeneous nature of the present software (e.g., it first started as an expert system using LISP machines) which now involves FORTRAN code, the effort is expected to be quite challenging.
Valjevac, Salih; Ridjanovic, Zoran; Masic, Izet
2009-01-01
Introduction: The Agency for Healthcare Quality and Accreditation in the Federation of Bosnia and Herzegovina (AKAZ) is the authorized body in the field of healthcare quality and safety improvement and accreditation of healthcare institutions. Besides accreditation standards for hospitals and primary health care centers, AKAZ has also developed accreditation standards for family medicine teams. Methods: Software development was primarily based on the Accreditation Standards for Family Medicine Teams. Seven chapters/topics (1. Physical factors; 2. Equipment; 3. Organization and management; 4. Health promotion and illness prevention; 5. Clinical services; 6. Patient survey; and 7. Patient's rights and obligations) contain 35 standards describing the expected level of a family medicine team's quality. Based on the structure of the accreditation standards and the needs of different potential users, it was concluded that the software backbone should be a database containing all accreditation standards, self-assessment, and external assessment details. In this article we present the development of standardized software for self and external evaluation of quality of service in family medicine, as well as plans for the future development of this software package. Conclusion: Electronic data gathering and storage enhance the management, access, and overall use of information. During this project we concluded that software for self-assessment and external assessment is ideal for distributing accreditation standards, for their review by family medicine team members, and for their self-assessment and external assessment. PMID:24109157
Improvements to the National Transport Code Collaboration Data Server
NASA Astrophysics Data System (ADS)
Alexander, David A.
2001-10-01
The data server of the National Transport Code Collaboration Project provides a universal network interface to interpolated or raw transport data accessible by a universal set of names. Data can be acquired from a local copy of the International Multi-Tokamak (ITER) profile database as well as from TRANSP trees of MDSplus data systems on the net. Data are provided to the user's network client via a CORBA interface, thus providing stateful data server instances, which have the advantage of remembering the desired interpolation, data set, etc. This paper reviews the status of the data server and discusses recent improvements, such as the modularization of the data server and the addition of HDF5 and MDSplus data file writing capability.
Bringing modeling to the masses: A web based system to predict potential species distributions
Graham, Jim; Newman, Greg; Kumar, Sunil; Jarnevich, Catherine S.; Young, Nick; Crall, Alycia W.; Stohlgren, Thomas J.; Evangelista, Paul
2010-01-01
Predicting current and potential species distributions and abundance is critical for managing invasive species, preserving threatened and endangered species, and conserving native species and habitats. Accurate predictive models are needed at local, regional, and national scales to guide field surveys, improve monitoring, and set priorities for conservation and restoration. Modeling capabilities, however, are often limited by access to software and environmental data required for predictions. To address these needs, we built a comprehensive web-based system that: (1) maintains a large database of field data; (2) provides access to field data and a wealth of environmental data; (3) accesses values in rasters representing environmental characteristics; (4) runs statistical spatial models; and (5) creates maps that predict the potential species distribution. The system is available online at www.niiss.org, and provides web-based tools for stakeholders to create potential species distribution models and maps under current and future climate scenarios.
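As a highly simplified stand-in for the statistical spatial models the system runs, the sketch below scores grid cells with a climate-envelope rule: a cell is predicted suitable when its environmental values fall within the range observed at presence locations. All data are synthetic.

```python
# Minimal "environmental envelope" sketch of potential-distribution modeling:
# a grid cell is suitable if each environmental variable falls within the
# range observed at known presence locations. Data are made up, and this is
# far simpler than the statistical spatial models offered by the system.
import numpy as np

rng = np.random.default_rng(2)
temp = rng.uniform(0, 30, size=(50, 50))        # mean temperature raster (degC)
precip = rng.uniform(200, 2000, size=(50, 50))  # annual precipitation raster (mm)

presence = [(10, 12), (20, 25), (33, 8), (40, 41)]   # (row, col) of field records
t_obs = np.array([temp[r, c] for r, c in presence])
p_obs = np.array([precip[r, c] for r, c in presence])

suitable = ((temp >= t_obs.min()) & (temp <= t_obs.max()) &
            (precip >= p_obs.min()) & (precip <= p_obs.max()))
print("fraction of cells predicted suitable:", suitable.mean())
```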
Integrating CLIPS applications into heterogeneous distributed systems
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1991-01-01
SOCIAL is an advanced, object-oriented development tool for integrating intelligent and conventional applications across heterogeneous hardware and software platforms. SOCIAL defines a family of 'wrapper' objects called agents, which incorporate predefined capabilities for distributed communication and control. Developers embed applications within agents and establish interactions between distributed agents via non-intrusive message-based interfaces. This paper describes a predefined SOCIAL agent that is specialized for integrating C Language Integrated Production System (CLIPS)-based applications. The agent's high-level Application Programming Interface supports bidirectional flow of data, knowledge, and commands to other agents, enabling CLIPS applications to initiate interactions autonomously, and respond to requests and results from heterogeneous remote systems. The design and operation of CLIPS agents are illustrated with two distributed applications that integrate CLIPS-based expert systems with other intelligent systems for isolating and mapping problems in the Space Shuttle Launch Processing System at the NASA Kennedy Space Center.
PisCES: Pis(cine) Community Estimation Software
PisCES predicts a fish community for any NHD-Plus stream reach in the conterminous United States. PisCES utilizes HUC-based distributional information for over 1,000 native and non-native species obtained from NatureServe, the USGS, and the Peterson Field Guide to Freshwater Fishes o...
Distributed Operations Planning
NASA Technical Reports Server (NTRS)
Fox, Jason; Norris, Jeffrey; Powell, Mark; Rabe, Kenneth; Shams, Khawaja
2007-01-01
Maestro software provides a secure and distributed mission planning system for long-term missions in general, and the Mars Exploration Rover Mission (MER) specifically. Maestro, the successor to the Science Activity Planner, has a heavy emphasis on portability and distributed operations, and requires no data replication or expensive hardware, instead relying on a set of services running on JPL institutional servers. Maestro works on most current computers with network connections, including laptops. When browsing downlink data from a spacecraft, Maestro functions much like a Web browser: after authenticating the user, it connects to a database server to query an index of data products, and then contacts a Web server to download and display the actual data products. The software also includes collaboration support based upon a highly reliable messaging system; modifications made to targets in one instance are quickly and securely transmitted to other instances of Maestro. The back end that has been developed for Maestro could benefit many future missions by reducing the cost of a centralized operations system architecture.
Autoadaptivity and optimization in distributed ECG interpretation.
Augustyniak, Piotr
2010-03-01
This paper addresses principal issues of ECG interpretation adaptivity in a distributed surveillance network. In the age of pervasive access to wireless digital communication, distributed biosignal interpretation networks may not only optimally solve difficult medical cases, but also adapt the data acquisition, interpretation, and transmission to the patient's variable status and the availability of technical resources. The basis of such adaptivity is the innovative use of results from automatic ECG analysis for seamless remote modification of the interpreting software. Since the medical relevance of issued diagnostic data depends on the patient's status, interpretation adaptivity implies flexibility in report content and frequency. The proposed solutions are based on research on human expert behavior, procedure reliability, and usage statistics. Despite the limited scale of our prototype client-server application, the tests yielded very promising results: transmission channel occupation was reduced by a factor of 2.6 to 5.6 compared with the rigid reporting mode, and the remotely computed diagnostic outcome improved in over 80% of software adaptation attempts.
INTERIM -- Starlink Software Environment
NASA Astrophysics Data System (ADS)
Pearce, Dave; Pavelin, Cliff; Lawden, M. D.
Early versions of this paper were based on a number of other papers produced at a very early stage of the Starlink project. They contained a description of a specific implementation of a subroutine library, speculations on the desirable attributes of a software environment, and future development plans. They reflected the experimental nature of the Starlink software environment at that time. Since then, the situation has changed. The implemented subroutine library, INTERIM_DIR:INTERIM.OLB, is now a well established and widely used piece of software. A completely new Starlink software environment (ADAM) has been developed and distributed. Thus the library released in 1980 as `STARLINK' and now called `INTERIM' has reached the end of its development cycle and is now frozen in its current state, apart from bug corrections. This paper has, therefore, been completely rewritten and restructured to reflect the new situation. Its aim is to describe the facilities of the INTERIM subroutine library as clearly and concisely as possible. It avoids speculation, discussion of design decisions, and announcements of future plans.
Hardware-assisted software clock synchronization for homogeneous distributed systems
NASA Technical Reports Server (NTRS)
Ramanathan, P.; Kandlur, Dilip D.; Shin, Kang G.
1990-01-01
A clock synchronization scheme that strikes a balance between hardware and software solutions is proposed. The proposed scheme is a software algorithm that uses minimal additional hardware to achieve reasonably tight synchronization. Unlike other software solutions, the guaranteed worst-case skews can be made insensitive to the maximum variation of message transit delay in the system. The scheme is particularly suitable for large, partially connected distributed systems with topologies that support simple point-to-point broadcast algorithms. Examples of such topologies include the hypercube and the mesh interconnection structures.
Software fault tolerance in computer operating systems
NASA Technical Reports Server (NTRS)
Iyer, Ravishankar K.; Lee, Inhwan
1994-01-01
This chapter provides data and analysis of the dependability and fault tolerance for three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, which was originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events occurring) differing from the original execution, is a major reason for the measured software fault tolerance. The IBM/MVS system fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.
Lee, L.; Helsel, D.
2005-01-01
Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
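As a rough illustration of the ROS approach described above, the following Python sketch imputes values for censored observations under the simplifying assumption of a single detection limit; the tools the abstract describes are R-based and handle multiple detection limits.

```python
# A minimal sketch of regression on order statistics (ROS) for a single
# detection limit, in the spirit of the method described above.  The real
# method handles multiple detection limits; this simplified version assumes
# one limit only, with all censored results below all detected results.
import numpy as np
from scipy import stats

def simple_ros(detects, n_censored):
    """Impute values for observations reported as '< detection limit'."""
    detects = np.sort(np.asarray(detects, dtype=float))
    n = len(detects) + n_censored

    # Weibull plotting positions i/(n+1); censored values occupy the lowest ranks.
    pp = np.arange(1, n + 1) / (n + 1)
    pp_cens, pp_det = pp[:n_censored], pp[n_censored:]

    # Fit log10(concentration) vs. normal scores of the detected observations.
    slope, intercept, *_ = stats.linregress(stats.norm.ppf(pp_det),
                                            np.log10(detects))

    # Predict the censored observations from the fitted line.
    imputed = 10 ** (intercept + slope * stats.norm.ppf(pp_cens))
    return np.concatenate([imputed, detects])

# Example: 8 detected values and 4 results reported below the detection limit.
full = simple_ros([0.5, 0.7, 0.8, 1.1, 1.5, 2.0, 3.2, 4.0], n_censored=4)
print(full.mean(), full.std(ddof=1))
```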
Tolerancing aspheres based on manufacturing knowledge
NASA Astrophysics Data System (ADS)
Wickenhagen, S.; Kokot, S.; Fuchs, U.
2017-10-01
A standard way of tolerancing optical elements or systems is to perform a Monte Carlo based analysis within a common optical design software package. Although different weightings and distributions are assumed, these analyses all rely on statistics, which usually means several hundred or thousand systems are needed for reliable results. Thus, employing these methods for small batch sizes is unreliable, especially when aspheric surfaces are involved. The huge database of asphericon was used to investigate the correlation between the given tolerance values and measured data sets. The resulting probability distributions of these measured data were analyzed with the aim of establishing a robust optical tolerancing process.
Tolerancing aspheres based on manufacturing statistics
NASA Astrophysics Data System (ADS)
Wickenhagen, S.; Möhl, A.; Fuchs, U.
2017-11-01
A standard way of tolerancing optical elements or systems is to perform a Monte Carlo based analysis within a common optical design software package. Although different weightings and distributions are assumed, these analyses all rely on statistics, which usually means several hundred or thousand systems are needed for reliable results. Thus, employing these methods for small batch sizes is unreliable, especially when aspheric surfaces are involved. The huge database of asphericon was used to investigate the correlation between the given tolerance values and measured data sets. The resulting probability distributions of these measured data were analyzed with the aim of establishing a robust optical tolerancing process.
The HydroServer Platform for Sharing Hydrologic Data
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Horsburgh, J. S.; Schreuders, K.; Maidment, D. R.; Zaslavsky, I.; Valentine, D. W.
2010-12-01
The CUAHSI Hydrologic Information System (HIS) is an internet based system that supports sharing of hydrologic data. HIS consists of databases connected using the Internet through Web services, as well as software for data discovery, access, and publication. The HIS system architecture comprises servers for publishing and sharing data, a centralized catalog to support cross server data discovery and a desktop client to access and analyze data. This paper focuses on HydroServer, the component developed for sharing and publishing space-time hydrologic datasets. A HydroServer is a computer server that contains a collection of databases, web services, tools, and software applications that allow data producers to store, publish, and manage the data from an experimental watershed or project site. HydroServer is designed to permit publication of data as part of a distributed national/international system, while still locally managing access to the data. We describe the HydroServer architecture and software stack, including tools for managing and publishing time series data for fixed point monitoring sites as well as spatially distributed, GIS datasets that describe a particular study area, watershed, or region. HydroServer adopts a standards based approach to data publication, relying on accepted and emerging standards for data storage and transfer. The CUAHSI-developed HydroServer code is free, with community code development managed through the CodePlex open source code repository and development system. There is some reliance on widely used commercial software for general purpose and standard data publication capability. The sharing of data in a common format is one way to stimulate interdisciplinary research and collaboration. It is anticipated that the growing, distributed network of HydroServers will facilitate cross-site comparisons and large scale studies that synthesize information from diverse settings, making the network as a whole greater than the sum of its parts in advancing hydrologic research. Details of the CUAHSI HIS can be found at http://his.cuahsi.org, and at the HydroServer CodePlex site, http://hydroserver.codeplex.com.
A Legal Guide for the Software Developer.
ERIC Educational Resources Information Center
Minnesota Small Business Assistance Office, St. Paul.
This booklet has been prepared to familiarize the inventor, creator, or developer of a new computer software product or software invention with the basic legal issues involved in developing, protecting, and distributing the software in the United States. Basic types of software protection and related legal matters are discussed in detail,…
21 CFR 801.50 - Labeling requirements for stand-alone software.
Code of Federal Regulations, 2014 CFR
2014-04-01
Title 21, Food and Drugs; § 801.50 Labeling requirements for stand-alone software. (a) Stand-alone software that is not distributed... in packaged form, stand-alone software regulated as a medical device must provide its unique device...
Toward Baseline Software Anomalies in NASA Missions
NASA Technical Reports Server (NTRS)
Layman, Lucas; Zelkowitz, Marvin; Basili, Victor; Nikora, Allen P.
2012-01-01
In this fast abstract, we provide preliminary findings from an analysis of 14,500 spacecraft anomalies from unmanned NASA missions. We provide some baselines for the distributions of software vs. non-software anomalies in spaceflight systems, the risk ratings of software anomalies, and the corrective actions associated with software anomalies.
Distributed Processing of Sentinel-2 Products using the BIGEARTH Platform
NASA Astrophysics Data System (ADS)
Bacu, Victor; Stefanut, Teodor; Nandra, Constantin; Mihon, Danut; Gorgan, Dorian
2017-04-01
The constellation of observational satellites orbiting around Earth is constantly increasing, providing more data that need to be processed in order to extract meaningful information and knowledge from it. Sentinel-2 satellites, part of the Copernicus Earth Observation program, aim to be used in agriculture, forestry and many other land management applications. ESA's SNAP toolbox can be used to process data gathered by Sentinel-2 satellites but is limited to the resources provided by a stand-alone computer. In this paper we present a cloud based software platform that makes use of this toolbox together with other remote sensing software applications to process Sentinel-2 products. The BIGEARTH software platform [1] offers an integrated solution for processing Earth Observation data coming from different sources (such as satellites or on-site sensors). The flow of processing is defined as a chain of tasks based on the WorDeL description language [2]. Each task could rely on a different software technology (such as Grass GIS and ESA's SNAP) in order to process the input data. One important feature of the BIGEARTH platform comes from this possibility of interconnection and integration, throughout the same flow of processing, of the various well known software technologies. All this integration is transparent from the user perspective. The proposed platform extends the SNAP capabilities by enabling specialists to easily scale the processing over distributed architectures, according to their specific needs and resources. The software platform [3] can be used in multiple configurations. In the basic one the software platform runs as a standalone application inside a virtual machine. Obviously in this case the computational resources are limited but it will give an overview of the functionalities of the software platform, and also the possibility to define the flow of processing and later on to execute it on a more complex infrastructure. The most complex and robust configuration is based on cloud computing and allows the installation on a private or public cloud infrastructure. In this configuration, the processing resources can be dynamically allocated and the execution time can be considerably improved by the available virtual resources and the number of parallelizable sequences in the processing flow. The presentation highlights the benefits and issues of the proposed solution by analyzing some significant experimental use cases. Main references for further information: [1] BigEarth project, http://cgis.utcluj.ro/projects/bigearth [2] Constantin Nandra, Dorian Gorgan: "Defining Earth data batch processing tasks by means of a flexible workflow description language", ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., III-4, 59-66, (2016). [3] Victor Bacu, Teodor Stefanut, Dorian Gorgan, "Adaptive Processing of Earth Observation Data on Cloud Infrastructures Based on Workflow Description", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp.444-454, (2015).
Ong, Edison; He, Yongqun
2016-01-01
Hundreds of biological and biomedical ontologies have been developed to support data standardization, integration and analysis. Although ontologies are typically developed for community usage, community efforts in ontology development are limited. To support ontology visualization, distribution, and community-based annotation and development, we have developed Ontokiwi, an ontology extension to the MediaWiki software. Ontokiwi displays hierarchical classes and ontological axioms. Ontology classes and axioms can be edited and added using the Ontokiwi form or the MediaWiki source editor. Ontokiwi also inherits MediaWiki features such as Wikitext editing and version control. Based on the Ontokiwi/MediaWiki software package, we have developed Ontobedia, which aims to support community-based development and annotation of biological and biomedical ontologies. As demonstrations, we have loaded the Ontology of Adverse Events (OAE) and the Cell Line Ontology (CLO) into Ontobedia. Our studies showed that Ontobedia achieved the expected Ontokiwi features. PMID:27570653
The component-based architecture of the HELIOS medical software engineering environment.
Degoulet, P; Jean, F C; Engelmann, U; Meinzer, H P; Baud, R; Sandblad, B; Wigertz, O; Le Meur, R; Jagermann, C
1994-12-01
The constitution of highly integrated health information networks and the growth of multimedia technologies raise new challenges for the development of medical applications. We describe in this paper the general architecture of the HELIOS medical software engineering environment devoted to the development and maintenance of multimedia distributed medical applications. HELIOS is made up of a set of software components, federated by a communication channel called the HELIOS Unification Bus. The HELIOS kernel includes three main components: the Analysis-Design Environment, the Object Information System and the Interface Manager. HELIOS services consist of a collection of toolkits providing the necessary facilities to medical application developers. They include Image-Related services, a Natural Language Processor, a Decision Support System and Connection services. The project gives special attention to both object-oriented approaches and software re-usability, which are considered crucial steps towards the development of more reliable, coherent and integrated applications.
How reliable is computerized assessment of readability?
Mailloux, S L; Johnson, M E; Fisher, D G; Pettibone, T J
1995-01-01
To assess the consistency and comparability of readability software programs, four software programs (Corporate Voice, Grammatix IV, Microsoft Word for Windows, and RightWriter) were compared. Standard materials included 28 pieces of printed educational materials on human immunodeficiency virus/acquired immunodeficiency syndrome distributed nationally and the Gettysburg Address. Statistical analyses for the educational materials revealed that each of the three formulas assessed (Flesch-Kincaid, Flesch Reading Ease, and Gunning Fog Index) provided significantly different grade equivalent scores and that the Microsoft Word program provided significantly lower grade levels and was more inconsistent in the scores provided. For the Gettysburg Address, considerable variation was revealed among formulas, with the discrepancy being up to two grade levels. When averaging across formulas, there was a variation of 1.3 grade levels between the four software programs. Given the variation between formulas and programs, implications for decisions based on results of these software programs are provided.
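For context, the three formulas compared above are simple functions of sentence, word, and syllable counts; the sketch below (not any of the reviewed programs) shows why implementations can disagree: each must estimate those counts, and the syllable heuristic here is deliberately crude.

```python
# A rough sketch of the three readability formulas named above.  The syllable
# counter is a naive vowel-group heuristic, purely illustrative; real programs
# differ mainly in how they count sentences, words, syllables, and "complex" words.
import re

def counts(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = [max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words]
    complex_words = sum(1 for s in syllables if s >= 3)
    return sentences, len(words), sum(syllables), complex_words

def readability(text):
    s, w, syl, cx = counts(text)
    return {
        "Flesch Reading Ease": 206.835 - 1.015 * (w / s) - 84.6 * (syl / w),
        "Flesch-Kincaid Grade": 0.39 * (w / s) + 11.8 * (syl / w) - 15.59,
        "Gunning Fog Index": 0.4 * ((w / s) + 100 * (cx / w)),
    }

print(readability("Four score and seven years ago our fathers brought forth "
                  "on this continent a new nation, conceived in liberty."))
```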
Implementation errors in the GingerALE Software: Description and recommendations.
Eickhoff, Simon B; Laird, Angela R; Fox, P Mickle; Lancaster, Jack L; Fox, Peter T
2017-01-01
Neuroscience imaging is a burgeoning, highly sophisticated field the growth of which has been fostered by grant-funded, freely distributed software libraries that perform voxel-wise analyses in anatomically standardized three-dimensional space on multi-subject, whole-brain, primary datasets. Despite the ongoing advances made using these non-commercial computational tools, the replicability of individual studies is an acknowledged limitation. Coordinate-based meta-analysis offers a practical solution to this limitation and, consequently, plays an important role in filtering and consolidating the enormous corpus of functional and structural neuroimaging results reported in the peer-reviewed literature. In both primary data and meta-analytic neuroimaging analyses, correction for multiple comparisons is a complex but critical step for ensuring statistical rigor. Reports of errors in multiple-comparison corrections in primary-data analyses have recently appeared. Here, we report two such errors in GingerALE, a widely used, US National Institutes of Health (NIH)-funded, freely distributed software package for coordinate-based meta-analysis. These errors have given rise to published reports with more liberal statistical inferences than were specified by the authors. The intent of this technical report is threefold. First, we inform authors who used GingerALE of these errors so that they can take appropriate actions including re-analyses and corrective publications. Second, we seek to exemplify and promote an open approach to error management. Third, we discuss the implications of these and similar errors in a scientific environment dependent on third-party software. Hum Brain Mapp 38:7-11, 2017. © 2016 Wiley Periodicals, Inc.
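For readers unfamiliar with the correction step at issue, the sketch below shows a generic Benjamini-Hochberg false-discovery-rate procedure; it is only an illustration of multiple-comparison correction, not GingerALE's permutation-based algorithm, and the p-values are invented.

```python
# Illustration only: a generic Benjamini-Hochberg false-discovery-rate step,
# shown to make concrete how a multiple-comparison correction decides which
# results survive.  Not GingerALE's actual correction method.
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Return a boolean mask of p-values that survive FDR control at level q."""
    p = np.asarray(p_values)
    order = np.argsort(p)
    ranks = np.arange(1, len(p) + 1)
    below = p[order] <= q * ranks / len(p)
    keep = np.zeros(len(p), dtype=bool)
    if below.any():
        cutoff = np.max(np.where(below)[0])     # largest rank meeting the criterion
        keep[order[:cutoff + 1]] = True
    return keep

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.5, 0.8]
print(benjamini_hochberg(pvals))   # uncorrected p<0.05 would pass 5; BH passes only 2
```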
A Component-based Programming Model for Composite, Distributed Applications
NASA Technical Reports Server (NTRS)
Eidson, Thomas M.; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
The nature of scientific programming is evolving to larger, composite applications that are composed of smaller element applications. These composite applications are more frequently being targeted for distributed, heterogeneous networks of computers. They are most likely programmed by a group of developers. Software component technology and computational frameworks are being proposed and developed to meet the programming requirements of these new applications. Historically, programming systems have had a hard time being accepted by the scientific programming community. In this paper, a programming model is outlined that attempts to organize the software component concepts and fundamental programming entities into programming abstractions that will be better understood by the application developers. The programming model is designed to support computational frameworks that manage many of the tedious programming details, but also that allow sufficient programmer control to design an accurate, high-performance application.
A parameterization of nuclear track profiles in CR-39 detector
NASA Astrophysics Data System (ADS)
Azooz, A. A.; Al-Nia'emi, S. H.; Al-Jubbori, M. A.
2012-11-01
In this work, the empirical parameterization describing the alpha particles’ track depth in CR-39 detectors is extended to describe longitudinal track profiles against etching time for protons and alpha particles. MATLAB based software is developed for this purpose. The software calculates and plots the depth, diameter, range, residual range, saturation time, and etch rate versus etching time. The software predictions are compared with other experimental data and with results of calculations using the original software, TRACK_TEST, developed for alpha track calculations. The software related to this work is freely downloadable and performs calculations for protons in addition to alpha particles. Program summary Program title: CR39 Catalog identifier: AENA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENA_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Copyright (c) 2011, Aasim Azooz Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met • Redistributions of source code must retain the above copyright, this list of conditions and the following disclaimer. • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution This software is provided by the copyright holders and contributors “as is” and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the copyright owner or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software, even if advised of the possibility of such damage. No. of lines in distributed program, including test data, etc.: 15598 No. of bytes in distributed program, including test data, etc.: 3933244 Distribution format: tar.gz Programming language: MATLAB. Computer: Any Desktop or Laptop. Operating system: Windows 1998 or above (with MATLAB R13 or above installed). RAM: 512 Megabytes or higher Classification: 17.5. Nature of problem: A new semispherical parameterization of charged particle tracks in CR-39 SSNTD is carried out in a previous paper. This parameterization is developed here into a MATLAB based software to calculate the track length and track profile for any proton or alpha particle energy or etching time. This software is intended to compete with the TRACK_TEST [1] and TRACK_VISION [2] software currently in use by all people working in the field of SSNTD. Solution method: Based on fitting of experimental results of protons and alpha particles track lengths for various energies and etching times to a new semispherical formula with four free fitting parameters, the best set of energy independent parameters were found. 
These parameters are introduced into the software and the software is programmed to solve the set of equations to calculate the track depth, track etching rate as a function of both time and residual range for particles of normal and oblique incidence, the track longitudinal profile at both normal and oblique incidence, and the three dimensional track profile at normal incidence. Running time: 1-8 s on Pentium (4) 2 GHz CPU, 3 GB of RAM depending on the etching time value References: [1] ADWT_v1_0 Track_Test Computer program TRACK_TEST for calculating parameters and plotting profiles for etch pits in nuclear track materials. D. Nikezic, K.N. Yu Comput. Phys. Commun. 174(2006)160 [2] AEAF_v1_0 TRACK_VISION Computer program TRACK_VISION for simulating optical appearance of etched tracks in CR-39 nuclear track detectors. D. Nikezic, K.N. Yu Comput. Phys. Commun. 178(2008)591
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kargupta, H.; Stafford, B.; Hamzaoglu, I.
This paper describes an experimental parallel/distributed data mining system PADMA (PArallel Data Mining Agents) that uses software agents for local data accessing and analysis and a web based interface for interactive data visualization. It also presents the results of applying PADMA for detecting patterns in unstructured texts of postmortem reports and laboratory test data for Hepatitis C patients.
Water Distribution System Risk Tool for Investment Planning (WaterRF Report 4332)
Product Description/Abstract The product consists of the Pipe Risk Screening Tool (PRST), and a report on the development and use of the tool. The PRST is a software-based screening aid to identify and rank candidate pipes for actions that range from active monitoring (including...
Intelligent resource discovery using ontology-based resource profiles
NASA Technical Reports Server (NTRS)
Hughes, J. Steven; Crichton, Dan; Kelly, Sean; Crichton, Jerry; Tran, Thuy
2004-01-01
Successful resource discovery across heterogeneous repositories is strongly dependent on the semantic and syntactic homogeneity of the associated resource descriptions. Ideally, resource descriptions are easily extracted from pre-existing standardized sources, expressed using standard syntactic and semantic structures, and managed and accessed within a distributed, flexible, and scaleable software framework.
A Performance Support Tool for Cisco Training Program Managers
ERIC Educational Resources Information Center
Benson, Angela D.; Bothra, Jashoda; Sharma, Priya
2004-01-01
Performance support systems can play an important role in corporations by managing and allowing distribution of information more easily. These systems run the gamut from simple paper job aids to sophisticated computer- and web-based software applications that support the entire corporate supply chain. According to Gery (1991), a performance…
Geologic Communications | Alaska Division of Geological & Geophysical Surveys
Maintains and improves a database for the Division's digital and map-based geological, geophysical, and geochemical data interfaces. DGGS metadata and digital data distribution: geospatial datasets published by DGGS are designed to be compatible with a broad variety of digital mapping software and to present DGGS's geospatial data.
Scalability and Validation of Big Data Bioinformatics Software.
Yang, Andrian; Troup, Michael; Ho, Joshua W K
2017-01-01
This review examines two important aspects that are central to modern big data bioinformatics analysis - software scalability and validity. We argue that not only are the issues of scalability and validation common to all big data bioinformatics analyses, they can be tackled by conceptually related methodological approaches, namely divide-and-conquer (scalability) and multiple executions (validation). Scalability is defined as the ability for a program to scale based on workload. It has always been an important consideration when developing bioinformatics algorithms and programs. Nonetheless the surge of volume and variety of biological and biomedical data has posed new challenges. We discuss how modern cloud computing and big data programming frameworks such as MapReduce and Spark are being used to effectively implement divide-and-conquer in a distributed computing environment. Validation of software is another important issue in big data bioinformatics that is often ignored. Software validation is the process of determining whether the program under test fulfils the task for which it was designed. Determining the correctness of the computational output of big data bioinformatics software is especially difficult due to the large input space and complex algorithms involved. We discuss how state-of-the-art software testing techniques that are based on the idea of multiple executions, such as metamorphic testing, can be used to implement an effective bioinformatics quality assurance strategy. We hope this review will raise awareness of these critical issues in bioinformatics.
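As a toy illustration of the multiple-execution idea discussed above, the following sketch applies a metamorphic relation (permutation invariance) to a made-up bioinformatics function; the function and the relation are assumptions chosen for illustration only.

```python
# A minimal sketch of metamorphic testing: when the exact output is hard to
# verify, check a relation that must hold between the outputs of multiple
# executions.  The toy computation below stands in for a real pipeline stage.
import random

def gc_content(sequence):
    """Toy bioinformatics computation: fraction of G/C bases in a read."""
    return sum(base in "GC" for base in sequence) / len(sequence)

def metamorphic_test(program, sequence, trials=100):
    """Relation: permuting the bases of a read must not change its GC content."""
    expected = program(sequence)
    for _ in range(trials):
        shuffled = list(sequence)
        random.shuffle(shuffled)
        if abs(program("".join(shuffled)) - expected) > 1e-12:
            return False
    return True

print(metamorphic_test(gc_content, "ACGTACGGTTAGCCGTA"))   # True if the relation holds
```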
2007-11-01
accuracy. FPGA ADC data acquisition is controlled by distributed Java-based software. A Java-based server application sits on each of the acquisition... JNI (Java Native Interface) is used to allow Java indirect control of the USB driver. Fig. 5: Photograph of mobile electronics rack... supplies with the monitor and keyboard. The server application on each of these machines is controlled by a remote client Java-based application.
NASA Astrophysics Data System (ADS)
Garov, A. S.; Karachevtseva, I. P.; Matveev, E. V.; Zubarev, A. E.; Florinsky, I. V.
2016-06-01
We are developing a unified distributed communication environment for processing of spatial data which integrates web, desktop and mobile platforms and combines a volunteer computing model with public cloud capabilities. The main idea is to create a flexible working environment for research groups, which may be scaled according to the required data volume and computing power, while keeping infrastructure costs at a minimum. It is based upon the "single window" principle, which combines data access via geoportal functionality, processing capabilities and communication between researchers. Using this innovative software environment, the recently developed planetary information system (http://cartsrv.mexlab.ru/geoportal) will be updated. The new system will provide spatial data processing, analysis and 3D visualization and will be tested on freely available Earth remote sensing data as well as Solar system planetary images from various missions. Based on this approach it will be possible to organize the research and the representation of results at a new technological level, which provides more possibilities for immediate and direct reuse of research materials, including data, algorithms, methodology, and components. The new software environment is targeted at remote scientific teams, and will provide access to existing distributed spatial information, for which we suggest implementing a user interface as an advanced front-end, e.g., for a virtual globe system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olch, A
2015-06-15
Purpose: Systematic radiotherapy plan quality assessment promotes quality improvement. Software tools can perform this analysis by applying site-specific structure dose metrics. The next step is to similarly evaluate the quality of the dose delivery. This study defines metrics for acceptable doses to targets and normal organs for a particular treatment site and scores each plan accordingly. The input can be the TPS or the measurement-based 3D patient dose. From this analysis, one can determine whether the delivered dose distribution to the patient receives a score which is comparable to the TPS plan score; otherwise replanning may be indicated. Methods: Eleven neuroblastoma patient plans were exported from Eclipse to the Quality Reports program. A scoring algorithm defined a score for each normal and target structure based on dose-volume parameters. Each plan was scored by this algorithm and the percentage of total possible points was obtained. Each plan also underwent IMRT QA measurements with a Mapcheck2 or ArcCheck. These measurements were input into the 3DVH program to compute the patient 3D dose distribution, which was analyzed using the same scoring algorithm as the TPS plan. Results: The mean quality score for the TPS plans was 75.37% (std dev=14.15%) compared to 71.95% (std dev=13.45%) for the 3DVH dose distribution. For 3/11 plans, the 3DVH-based quality score was higher than the TPS score, by between 0.5 and 8.4 percentage points. Scores for 8/11 plans decreased based on IMRT QA measurements, by 1.2 to 18.6 points. Conclusion: Software was used to determine the degree to which the plan quality score differed between the TPS and measurement-based dose. Although the delivery score was generally in good agreement with the planned dose score, there were some that improved, while there was one plan whose delivered dose quality was significantly less than planned. This methodology helps evaluate both planned and delivered dose quality. Sun Nuclear Corporation has provided a license for the software described.
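A hypothetical sketch of the kind of dose-volume scoring described above is given below; the metric names, thresholds, and point values are invented for illustration and are not the study's actual criteria.

```python
# Hypothetical sketch of dose-volume plan scoring: each structure earns points
# when its dose metric meets a site-specific goal, and the plan score is the
# percentage of possible points.  All metric names and limits are invented.
def score_plan(dose_metrics, goals):
    """dose_metrics: measured values, e.g. {"PTV_D95_Gy": 21.2, "Kidney_Dmean_Gy": 8.4}
    goals: {metric: (comparison, limit, points)}, e.g. {"PTV_D95_Gy": (">=", 21.0, 10)}"""
    earned = possible = 0
    for name, (comparison, limit, points) in goals.items():
        possible += points
        value = dose_metrics.get(name)
        if value is None:
            continue                      # metric missing from this plan
        met = value >= limit if comparison == ">=" else value <= limit
        earned += points if met else 0
    return 100.0 * earned / possible

goals = {"PTV_D95_Gy": (">=", 21.0, 10), "Kidney_Dmean_Gy": ("<=", 10.0, 5)}
print(score_plan({"PTV_D95_Gy": 21.2, "Kidney_Dmean_Gy": 8.4}, goals))   # 100.0
```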
A Tour of Big Data, Open Source Data Management Technologies from the Apache Software Foundation
NASA Astrophysics Data System (ADS)
Mattmann, C. A.
2012-12-01
The Apache Software Foundation, a non-profit foundation charged with dissemination of open source software for the public good, provides a suite of data management technologies for distributed archiving, data ingestion, data dissemination, processing, triage and a host of other functionalities that are becoming critical in the Big Data regime. Apache is the world's largest open source software organization, boasting over 3000 developers from around the world all contributing to some of the most pervasive technologies in use today, from the HTTPD web server that powers a majority of Internet web sites to the Hadoop technology that is now projected to underpin an industry worth over $1B. Apache data management technologies are emerging as de facto off-the-shelf components for searching, distributing, processing and archiving key science data sets, from the geophysical, space and planetary domains all the way to biomedicine. In this talk, I will give a virtual tour of the Apache Software Foundation, its meritocracy and governance structure, and also its key big data technologies that organizations can take advantage of today and use to save cost, schedule, and resources in implementing their Big Data needs. I'll illustrate the Apache technologies in the context of several national priority projects, including the U.S. National Climate Assessment (NCA), and in the International Square Kilometre Array (SKA) project, that are stretching the boundaries of volume, velocity, complexity, and other key Big Data dimensions.
Cardio-PACs: a new opportunity
NASA Astrophysics Data System (ADS)
Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary
2000-05-01
It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below: Image display: (1) Image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based with color monitor, and physician-friendly imaging software; (2) Performance specifications include: acquire 30 frames/sec; replay 15 frames/sec; access to file server 5 seconds, and to archive 5 minutes; (3) Compatibility of image file, transmission, and processing formats; (4) Image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) User-friendly control of image review. Image distribution: (1) Standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) Non-proprietary formats; (3) Bidirectional distribution. Image storage: (1) CD-ROM vs disk vs tape; (2) Verification of data integrity; (3) User-designated storage capacity for catheterization laboratory, file server, long-term archive. Costs: (1) Image acquisition equipment, file server, long-term archive; (2) Network infrastructure; (3) Review stations and software; (4) Maintenance and administration; (5) Future upgrades and expansion; (6) Personnel.
A common distributed language approach to software integration
NASA Technical Reports Server (NTRS)
Antonelli, Charles J.; Volz, Richard A.; Mudge, Trevor N.
1989-01-01
An important objective in software integration is the development of techniques to allow programs written in different languages to function together. Several approaches are discussed toward achieving this objective and the Common Distributed Language Approach is presented as the approach of choice.
Research and design of intelligent distributed traffic signal light control system based on CAN bus
NASA Astrophysics Data System (ADS)
Chen, Yu
2007-12-01
An intelligent distributed traffic signal light control system was designed based on infrared sensing, CAN bus, and single chip microprocessor (SCM) technologies. The traffic flow signal is processed by an AT89C51 SCM at the core of the system. At the same time, the SCM controls the CAN bus controller SJA1000 and transceiver PCA82C250 to build a CAN bus communication system to transmit data. Moreover, the host PC connects and communicates with the SCM through the PDIUSBD12 USB interface chip. The distributed traffic signal light control system provides three control modes: vehicle-flux, remote, and PC control. This paper introduces the system composition method and parts of the hardware/software design in detail.
NASA Astrophysics Data System (ADS)
Isnur Haryudo, Subuh; Imam Agung, Achmad; Firmansyah, Rifqi
2018-04-01
The purpose of this research is to develop learning media for control techniques using Matrix Laboratory software with an industry-requirements approach. Learning media serves as a tool for creating a better and more effective teaching and learning situation because it can accelerate the learning process and enhance the quality of learning. Control-technique instruction using Matrix Laboratory software can increase the interest and attention of students, provide real experience, and foster an independent attitude. The research design follows research and development (R & D) methods modified by a multi-disciplinary team of researchers. The research used a computer-based learning method consisting of a computer and Matrix Laboratory software integrated with teaching props. Matrix Laboratory can visualize the theory and analysis of control systems, integrating computing, visualization and programming in a form that is easy to use. The result of this instructional media development is the use of mathematical equations in Matrix Laboratory software for a control system application with a DC motor plant and a PID (Proportional-Integral-Derivative) controller. This is relevant because manufacturers in the field of Distributed Control Systems (DCSs), Programmable Logic Controllers (PLCs), and Microcontrollers (MCUs) widely use PID systems in industrial production processes.
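To make the described exercise concrete, here is a minimal simulation sketch (written in Python rather than Matrix Laboratory) of a PID controller driving a first-order DC-motor speed model; the plant parameters and controller gains are illustrative assumptions, not values from the study.

```python
# Minimal sketch: a PID controller regulating a first-order DC-motor speed model,
# integrated with a simple Euler step.  All numbers are illustrative assumptions.
def simulate_pid(setpoint=100.0, kp=0.8, ki=2.0, kd=0.01,
                 gain=2.0, tau=0.5, dt=0.001, t_end=3.0):
    speed, integral, prev_error = 0.0, 0.0, setpoint
    for _ in range(int(t_end / dt)):
        error = setpoint - speed
        integral += error * dt
        derivative = (error - prev_error) / dt
        voltage = kp * error + ki * integral + kd * derivative   # PID output
        # First-order plant: tau * dspeed/dt + speed = gain * voltage
        speed += dt * (gain * voltage - speed) / tau
        prev_error = error
    return speed

print(simulate_pid())   # should settle near the 100.0 setpoint
```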
NASA Astrophysics Data System (ADS)
Vikhlyantsev, O. P.; Generalov, L. N.; Kuryakin, A. V.; Karpov, I. A.; Gurin, N. E.; Tumkin, A. D.; Fil'chagin, S. V.
2017-12-01
A hardware-software complex for measuring the energy and angular distributions of charged particles formed in nuclear reactions is presented. The hardware and software structures of the complex, the basic set of modular nuclear-physics apparatus for a multichannel detecting system based on ΔE-E telescopes of silicon detectors, and the hardware for experimental data collection, storage, and processing are described.
The deployment of routing protocols in distributed control plane of SDN.
Jingjing, Zhou; Di, Cheng; Weiming, Wang; Rong, Jin; Xiaochun, Wu
2014-01-01
Software defined networking (SDN) provides a programmable network by decoupling the data plane, control plane, and application plane from the original closed system, thus revolutionizing the existing network architecture to improve performance and scalability. In this paper, we study the distributed characteristics of the Kandoo architecture and, drawing on the idea of the routing control platform (RCP), improve and optimize Kandoo's two levels of controllers. Finally, we analyze the deployment strategies of the BGP and OSPF protocols in a distributed SDN control plane. The simulation results show that our deployment strategies are superior to traditional routing strategies.
NASA Astrophysics Data System (ADS)
Yetman, G.; Downs, R. R.
2011-12-01
Software deployment is needed to process and distribute scientific data throughout the data lifecycle. Developing software in-house can take software development teams away from other software development projects and can require efforts to maintain the software over time. Adopting and reusing software and system modules that have been previously developed by others can reduce in-house software development and maintenance costs and can contribute to the quality of the system being developed. A variety of models are available for reusing and deploying software and systems that have been developed by others. These deployment models include open source software, vendor-supported open source software, commercial software, and combinations of these approaches. Deployment in Earth science data processing and distribution has demonstrated the advantages and drawbacks of each model. Deploying open source software offers advantages for developing and maintaining scientific data processing systems and applications. By joining an open source community that is developing a particular system module or application, a scientific data processing team can contribute to aspects of the software development without having to commit to developing the software alone. Communities of interested developers can share the work while focusing on activities that utilize in-house expertise and addresses internal requirements. Maintenance is also shared by members of the community. Deploying vendor-supported open source software offers similar advantages to open source software. However, by procuring the services of a vendor, the in-house team can rely on the vendor to provide, install, and maintain the software over time. Vendor-supported open source software may be ideal for teams that recognize the value of an open source software component or application and would like to contribute to the effort, but do not have the time or expertise to contribute extensively. Vendor-supported software may also have the additional benefits of guaranteed up-time, bug fixes, and vendor-added enhancements. Deploying commercial software can be advantageous for obtaining system or software components offered by a vendor that meet in-house requirements. The vendor can be contracted to provide installation, support and maintenance services as needed. Combining these options offers a menu of choices, enabling selection of system components or software modules that meet the evolving requirements encountered throughout the scientific data lifecycle.
Off-the-shelf Control of Data Analysis Software
NASA Astrophysics Data System (ADS)
Wampler, S.
The Gemini Project must provide convenient access to data analysis facilities to a wide user community. The international nature of this community makes the selection of data analysis software particularly interesting, with staunch advocates of systems such as ADAM and IRAF among the users. Additionally, the continuing trends towards increased use of networked systems and distributed processing impose additional complexity. To meet these needs, the Gemini Project is proposing the novel approach of using low-cost, off-the-shelf software to abstract out both the control and distribution of data analysis from the functionality of the data analysis software. For example, the orthogonal nature of control versus function means that users might select analysis routines from both ADAM and IRAF as appropriate, distributing these routines across a network of machines. It is the belief of the Gemini Project that this approach results in a system that is highly flexible, maintainable, and inexpensive to develop. The Khoros visualization system is presented as an example of control software that is currently available for providing the control and distribution within a data analysis system. The visual programming environment provided with Khoros is also discussed as a means to providing convenient access to this control.
NASA Technical Reports Server (NTRS)
Pisaich, Gregory; Flueckiger, Lorenzo; Neukom, Christian; Wagner, Mike; Buchanan, Eric; Plice, Laura
2007-01-01
The Mission Simulation Toolkit (MST) is a flexible software system for autonomy research. It was developed as part of the Mission Simulation Facility (MSF) project that was started in 2001 to facilitate the development of autonomous planetary robotic missions. Autonomy is a key enabling factor for robotic exploration. There has been a large gap between autonomy software (at the research level), and software that is ready for insertion into near-term space missions. The MST bridges this gap by providing a simulation framework and a suite of tools for supporting research and maturation of autonomy. MST uses a distributed framework based on the High Level Architecture (HLA) standard. A key feature of the MST framework is the ability to plug in new models to replace existing ones with the same services. This enables significant simulation flexibility, particularly the mixing and control of fidelity level. In addition, the MST provides automatic code generation from robot interfaces defined with the Unified Modeling Language (UML), methods for maintaining synchronization across distributed simulation systems, XML-based robot description, and an environment server. Finally, the MSF supports a number of third-party products including dynamic models and terrain databases. Although the communication objects and some of the simulation components that are provided with this toolkit are specifically designed for terrestrial surface rovers, the MST can be applied to any other domain, such as aerial, aquatic, or space.
Distributed operating system for NASA ground stations
NASA Technical Reports Server (NTRS)
Doyle, John F.
1987-01-01
NASA ground stations are characterized by ever changing support requirements, so application software is developed and modified on a continuing basis. A distributed operating system was designed to optimize the generation and maintenance of those applications. Unusual features include automatic program generation from detailed design graphs, on-line software modification in the testing phase, and the incorporation of a relational database within a real-time, distributed system.
Architecture-Centric Development in Globally Distributed Projects
NASA Astrophysics Data System (ADS)
Sauer, Joachim
In this chapter architecture-centric development is proposed as a means to strengthen the cohesion of distributed teams and to tackle challenges due to geographical and temporal distances and the clash of different cultures. A shared software architecture serves as blueprint for all activities in the development process and ties them together. Architecture-centric development thus provides a plan for task allocation, facilitates the cooperation of globally distributed developers, and enables continuous integration reaching across distributed teams. Advice is also provided for software architects who work with distributed teams in an agile manner.
Advances in parameter estimation techniques applied to flexible structures
NASA Technical Reports Server (NTRS)
Maben, Egbert; Zimmerman, David C.
1994-01-01
In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes are contrasted using the NASA Mini-Mast as the focus structure.
BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments
Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; ...
2015-11-09
Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. In this paper, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive.
Management of Globally Distributed Software Development Projects in Multiple-Vendor Constellations
NASA Astrophysics Data System (ADS)
Schott, Katharina; Beck, Roman; Gregory, Robert Wayne
Global information systems development outsourcing is a clear trend that is expected to continue in the foreseeable future. IS-related services are increasingly provided not only from different geographical sites simultaneously but also by multiple service providers based in different countries. The purpose of this paper is to understand how the involvement of multiple service providers affects the management of globally distributed information systems development projects. As research on this topic is scarce, we applied an exploratory, in-depth single-case study design as our research approach. The case we analyzed comprises a global software development outsourcing project initiated by a German bank together with several globally distributed vendors. For data collection and analysis we adopted techniques suggested by the grounded theory method. Whereas the extant literature points out the increased management overhead associated with multi-sourcing, the analysis of our case suggests that the effort required to manage global outsourcing projects with multiple vendors depends, among other things, on the maturity of the cooperation within the vendor portfolio. Furthermore, our data indicate that this interplay maturity is positively influenced by knowledge about the client derived from already existing client-vendor relationships. The paper concludes by offering theoretical and practical implications.
Measurement-device-independent quantum digital signatures
NASA Astrophysics Data System (ADS)
Puthoor, Ittoop Vergheese; Amiri, Ryan; Wallden, Petros; Curty, Marcos; Andersson, Erika
2016-08-01
Digital signatures play an important role in software distribution, modern communication, and financial transactions, where it is important to detect forgery and tampering. Signatures are a cryptographic technique for validating the authenticity and integrity of messages, software, or digital documents. The security of currently used classical schemes relies on computational assumptions. Quantum digital signatures (QDS), on the other hand, provide information-theoretic security based on the laws of quantum physics. Recent work on QDS [Amiri et al., Phys. Rev. A 93, 032325 (2016), 10.1103/PhysRevA.93.032325; Yin, Fu, and Zeng-Bing, Phys. Rev. A 93, 032316 (2016), 10.1103/PhysRevA.93.032316] shows that such schemes do not require trusted quantum channels and are unconditionally secure against general coherent attacks. However, in practical QDS, just as in quantum key distribution (QKD), the detectors can be subjected to side-channel attacks, which can make the actual implementations insecure. Motivated by the idea of measurement-device-independent quantum key distribution (MDI-QKD), we present a measurement-device-independent QDS (MDI-QDS) scheme, which is secure against all detector side-channel attacks. Based on the rapid development of practical MDI-QKD, our MDI-QDS protocol could also be experimentally implemented, since it requires a similar experimental setup.
Prediction of contaminant fate and transport in potable water systems using H2OFate
NASA Astrophysics Data System (ADS)
Devarakonda, Venkat; Manickavasagam, Sivakumar; VanBlaricum, Vicki; Ginsberg, Mark
2009-05-01
BlazeTech has recently developed a software package called H2OFate to predict the fate and transport of chemical and biological contaminants in water distribution systems. This software includes models for the reactions of these contaminants with residual disinfectant in bulk water and at the pipe wall, and their adhesion/reactions with the pipe walls. This software can be interfaced with sensors through SCADA systems to monitor water distribution networks for contamination events and activate countermeasures, as needed. This paper presents results from parametric calculations carried out using H2OFate for a simulated contaminant release into a sample water distribution network.
Chełkowski, Tadeusz; Gloor, Peter; Jemielniak, Dariusz
2016-01-01
While researchers are becoming increasingly interested in studying the OSS phenomenon, only a small number of studies have analyzed larger samples of projects to investigate the structure of activities among OSS developers. The significant amount of information that has been gathered in publicly available open-source software repositories and mailing-list archives offers an opportunity to analyze project structures and participant involvement. In this article, using commit data from 263 Apache project repositories (nearly all of them), we show that although OSS development is often described as collaborative, it in fact predominantly relies on radically solitary input and individual, non-collaborative contributions. We also show, in the first published study of this magnitude, that the engagement of contributors follows a power-law distribution.
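As a sketch of how such a power-law claim can be checked, the following snippet fits a power-law exponent to a synthetic, heavy-tailed sample of per-contributor commit counts by maximum likelihood; the data and the threshold are assumptions, not the Apache dataset.

```python
# A small sketch of the kind of check behind a power-law claim: estimate the
# exponent alpha of p(x) ~ x^(-alpha) for x >= x_min by maximum likelihood
# (the continuous Hill-type estimator).  The "commit counts" here are synthetic.
import numpy as np

def fit_power_law_exponent(samples, x_min=1.0):
    """Continuous MLE for the exponent of a power-law tail above x_min."""
    x = np.asarray([s for s in samples if s >= x_min], dtype=float)
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

# Synthetic example: a few prolific committers, a long tail of one-off contributors.
rng = np.random.default_rng(0)
fake_commit_counts = rng.pareto(1.5, size=5000) + 1.0   # heavy-tailed, exponent ~2.5
print(fit_power_law_exponent(fake_commit_counts))       # close to 2.5
```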
Clinical software development for the Web: lessons learned from the BOADICEA project
2012-01-01
Background In the past 20 years, society has witnessed the following landmark scientific advances: (i) the sequencing of the human genome, (ii) the distribution of software by the open source movement, and (iii) the invention of the World Wide Web. Together, these advances have provided a new impetus for clinical software development: developers now translate the products of human genomic research into clinical software tools; they use open-source programs to build them; and they use the Web to deliver them. Whilst this open-source component-based approach has undoubtedly made clinical software development easier, clinical software projects are still hampered by problems that traditionally accompany the software process. This study describes the development of the BOADICEA Web Application, a computer program used by clinical geneticists to assess risks to patients with a family history of breast and ovarian cancer. The key challenge of the BOADICEA Web Application project was to deliver a program that was safe, secure and easy for healthcare professionals to use. We focus on the software process, problems faced, and lessons learned. Our key objectives are: (i) to highlight key clinical software development issues; (ii) to demonstrate how software engineering tools and techniques can facilitate clinical software development for the benefit of individuals who lack software engineering expertise; and (iii) to provide a clinical software development case report that can be used as a basis for discussion at the start of future projects. Results We developed the BOADICEA Web Application using an evolutionary software process. Our approach to Web implementation was conservative and we used conventional software engineering tools and techniques. The principal software development activities were: requirements, design, implementation, testing, documentation and maintenance. The BOADICEA Web Application has now been widely adopted by clinical geneticists and researchers. BOADICEA Web Application version 1 was released for general use in November 2007. By May 2010, we had > 1200 registered users based in the UK, USA, Canada, South America, Europe, Africa, Middle East, SE Asia, Australia and New Zealand. Conclusions We found that an evolutionary software process was effective when we developed the BOADICEA Web Application. The key clinical software development issues identified during the BOADICEA Web Application project were: software reliability, Web security, clinical data protection and user feedback. PMID:22490389
Clinical software development for the Web: lessons learned from the BOADICEA project.
Cunningham, Alex P; Antoniou, Antonis C; Easton, Douglas F
2012-04-10
In the past 20 years, society has witnessed the following landmark scientific advances: (i) the sequencing of the human genome, (ii) the distribution of software by the open source movement, and (iii) the invention of the World Wide Web. Together, these advances have provided a new impetus for clinical software development: developers now translate the products of human genomic research into clinical software tools; they use open-source programs to build them; and they use the Web to deliver them. Whilst this open-source component-based approach has undoubtedly made clinical software development easier, clinical software projects are still hampered by problems that traditionally accompany the software process. This study describes the development of the BOADICEA Web Application, a computer program used by clinical geneticists to assess risks to patients with a family history of breast and ovarian cancer. The key challenge of the BOADICEA Web Application project was to deliver a program that was safe, secure and easy for healthcare professionals to use. We focus on the software process, problems faced, and lessons learned. Our key objectives are: (i) to highlight key clinical software development issues; (ii) to demonstrate how software engineering tools and techniques can facilitate clinical software development for the benefit of individuals who lack software engineering expertise; and (iii) to provide a clinical software development case report that can be used as a basis for discussion at the start of future projects. We developed the BOADICEA Web Application using an evolutionary software process. Our approach to Web implementation was conservative and we used conventional software engineering tools and techniques. The principal software development activities were: requirements, design, implementation, testing, documentation and maintenance. The BOADICEA Web Application has now been widely adopted by clinical geneticists and researchers. BOADICEA Web Application version 1 was released for general use in November 2007. By May 2010, we had > 1200 registered users based in the UK, USA, Canada, South America, Europe, Africa, Middle East, SE Asia, Australia and New Zealand. We found that an evolutionary software process was effective when we developed the BOADICEA Web Application. The key clinical software development issues identified during the BOADICEA Web Application project were: software reliability, Web security, clinical data protection and user feedback.
Application and study of land-reclaim based on Arc/Info
NASA Astrophysics Data System (ADS)
Zhao, Jun; Zhang, Ruiju; Wang, Zhian; Li, Shiyong
2005-10-01
This paper first puts forward an evaluation model of land reclamation, derived from the theory of fuzzy associative memory neural networks and supported by corresponding CASE tools; based on this model, the mode of land reclamation can be determined, and the elements of land reclamation are then displayed and synthesized visually and virtually using the Arc/Info software. In the process of land reclamation, it is particularly important to build the land-reclamation model and to map the distribution of soil elements, so that rational and feasible schemes can be adopted to guide the reclamation project. The paper takes the fourth mining area of East Beach as an example and puts the model into practice. Based on Arc/Info software, the application of land reclamation is studied and good results are achieved.
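As a rough, purely illustrative sketch of how a fuzzy-associative-memory evaluation model of this kind can be computed (the factors, relation matrix, and reclamation modes below are invented and are not taken from the paper), the basic recall step is a max-min composition of the factor membership grades with a fuzzy relation matrix:

```python
import numpy as np

# Membership grades of one land unit with respect to three hypothetical
# factors: soil thickness, slope, and groundwater level (values are made up).
factors = np.array([0.7, 0.4, 0.9])

# Fuzzy relation matrix R[i, j]: degree to which factor i supports
# reclamation mode j (farmland, forest, fish pond) -- illustrative only.
R = np.array([
    [0.8, 0.5, 0.2],   # soil thickness
    [0.3, 0.7, 0.6],   # slope
    [0.4, 0.2, 0.9],   # groundwater level
])

# Max-min composition b_j = max_i min(a_i, R[i, j]) -- the basic FAM recall step.
suitability = np.max(np.minimum(factors[:, None], R), axis=0)

modes = ["farmland", "forest", "fish pond"]
best = modes[int(np.argmax(suitability))]
print(dict(zip(modes, suitability.round(2))), "->", best)
```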
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malony, Allen D; Shende, Sameer
The primary goal of the University of Oregon's DOE "competitiveness" project was to create performance technology that embodies and supports knowledge of performance data, analysis, and diagnosis in parallel performance problem solving. The target of our development activities was the TAU Performance System, and the technology accomplishments reported in this and prior reports have all been incorporated in the TAU open software distribution. In addition, the project has been committed to maintaining strong interactions with the DOE SciDAC Performance Engineering Research Institute (PERI) and Center for Technology for Advanced Scientific Component Software (TASCS). This collaboration has proved valuable for translation of our knowledge-based performance techniques to parallel application development and performance engineering practice. Our outreach has also extended to the DOE Advanced CompuTational Software (ACTS) collection and project. Throughout the project we have participated in the PERI and TASCS meetings, as well as the ACTS annual workshops.
JANIS: NEA JAva-based Nuclear Data Information System
NASA Astrophysics Data System (ADS)
Soppera, Nicolas; Bossant, Manuel; Cabellos, Oscar; Dupont, Emmeric; Díez, Carlos J.
2017-09-01
JANIS (JAva-based Nuclear Data Information System) software is developed by the OECD Nuclear Energy Agency (NEA) Data Bank to facilitate the visualization and manipulation of nuclear data, giving access to evaluated nuclear data libraries, such as ENDF, JEFF, JENDL, and TENDL, and also to experimental nuclear data (EXFOR) and bibliographical references (CINDA). It is available as a standalone Java program, downloadable and distributed on DVD, and as a web application on the NEA website. One of the main new features in JANIS is the scripting capability via the command line, which notably automates plot generation and permits data to be extracted automatically from the JANIS database. Recent NEA software developments rely on these JANIS features to access nuclear data; for example, the Nuclear Data Sensitivity Tool (NDaST) makes use of covariance data in BOXER and COVERX formats, which are retrieved from the JANIS database. The new features added in this version of the JANIS software are described in this paper, with some examples.
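The NDaST use of covariance data mentioned above reduces, at its core, to the well-known "sandwich rule" for propagating nuclear-data covariances through a sensitivity vector, var = S^T C S. The sketch below uses made-up numbers and NumPy, and deliberately ignores the BOXER/COVERX file handling and the actual JANIS/NDaST interfaces.

```python
import numpy as np

# Hypothetical sensitivity of a response (e.g. k-eff) to a cross section in
# three energy groups (percent change in response per percent change in data).
S = np.array([0.12, 0.35, 0.08])

# Hypothetical relative covariance matrix for the same three groups
# (in practice this is what would be read from a BOXER or COVERX file).
C = np.array([
    [0.0025, 0.0010, 0.0002],
    [0.0010, 0.0040, 0.0008],
    [0.0002, 0.0008, 0.0016],
])

# Sandwich rule: relative variance of the response due to this nuclear data.
variance = S @ C @ S
print(f"relative uncertainty ~= {np.sqrt(variance):.4%}")
```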
Uranus: a rapid prototyping tool for FPGA embedded computer vision
NASA Astrophysics Data System (ADS)
Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.
2007-01-01
The starting point for all successful system development is simulation. Performing high-level simulation of a system can help to identify, isolate, and fix design problems. This work presents Uranus, a software tool for the simulation and evaluation of image processing algorithms with support for migrating them to an FPGA environment for algorithm acceleration and embedded processing purposes. The tool includes an integrated library of previously coded software operators and provides the necessary support to read and display image sequences as well as video files. The user can employ the previously compiled soft-operators in a high-level process chain and code his or her own operators. In addition to the prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing that are connected to a PowerPC-based system. The Uranus environment is intended for the rapid prototyping of machine vision applications and their migration to an FPGA accelerator platform, and it is distributed for academic purposes.
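To illustrate the idea of a high-level "process chain" of soft-operators, a minimal prototyping chain might look like the sketch below; the operator names and chaining function are hypothetical and are not the actual Uranus interface.

```python
import numpy as np

# Two toy soft-operators: grayscale conversion and binary thresholding.
def to_gray(rgb):
    # Standard luminance weights; expects an H x W x 3 array.
    return rgb @ np.array([0.299, 0.587, 0.114])

def threshold(img, level=128):
    return (img > level).astype(np.uint8) * 255

def run_chain(image, operators):
    """Apply each operator in sequence, as a prototyping chain would."""
    for op in operators:
        image = op(image)
    return image

# Hypothetical 64x64 RGB test frame standing in for one frame of a video file.
frame = np.random.randint(0, 256, size=(64, 64, 3)).astype(np.float64)
result = run_chain(frame, [to_gray, threshold])
print(result.shape, result.dtype, result.max())
```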