Microcomputers in Libraries: The Quiet Revolution.
ERIC Educational Resources Information Center
Boss, Richard
1985-01-01
This article defines three separate categories of microcomputers--personal, desk-top, and multi-user devices--and relates storage capabilities (expandability, floppy disks) to library applications. Highlights include de facto standards, operating systems, database management systems, applications software, circulation control systems, dumb and…
Software Defined Radio with Parallelized Software Architecture
NASA Technical Reports Server (NTRS)
Heckler, Greg
2013-01-01
This software implements software-defined radio processing over multi-core, multi-CPU systems in a way that maximizes the use of CPU resources in the system. The software treats each processing step in either a communications or navigation modulator or demodulator system as an independent, threaded block. Each threaded block is defined with a programmable number of input or output buffers; these buffers are implemented using POSIX pipes. In addition, each threaded block is assigned a unique thread upon block installation. A modulator or demodulator system is built by assembling the threaded blocks into a flow graph that accomplishes the desired signal processing. This software architecture allows the software to scale effortlessly between single-CPU/single-core computers and multi-CPU/multi-core computers without recompilation. NASA spaceflight and ground communications systems currently rely exclusively on ASICs or FPGAs. This software allows low- and medium-bandwidth (100 bps to approx. 50 Mbps) software defined radios to be designed and implemented solely in C/C++ software, while lowering development costs and facilitating reuse and extensibility.
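The threaded-block-and-pipe design described above can be sketched in Python (a simplified stand-in for the C/C++ implementation; `ThreadedBlock` and `build_flow_graph` are illustrative names, not NASA's API):

```python
import os
import threading

class ThreadedBlock:
    """One processing step: reads bytes from an input pipe, applies a
    function, and writes the result to an output pipe on its own thread."""
    def __init__(self, func, read_fd, write_fd):
        self.func, self.read_fd, self.write_fd = func, read_fd, write_fd
        self.thread = threading.Thread(target=self.run)

    def run(self):
        while True:
            chunk = os.read(self.read_fd, 4096)
            if not chunk:                 # upstream closed: propagate EOF
                os.close(self.write_fd)
                return
            os.write(self.write_fd, self.func(chunk))

def build_flow_graph(funcs, src_fd, sink_fd):
    """Chain processing functions with POSIX pipes and start one thread
    per block; the OS scheduler spreads the threads across cores."""
    blocks, read_fd = [], src_fd
    for func in funcs[:-1]:
        next_read_fd, write_fd = os.pipe()
        blocks.append(ThreadedBlock(func, read_fd, write_fd))
        read_fd = next_read_fd
    blocks.append(ThreadedBlock(funcs[-1], read_fd, sink_fd))
    for block in blocks:
        block.thread.start()
    return blocks
```

Because each block simply blocks on its pipe reads, the same flow graph runs unchanged on one core or many, which mirrors the scaling-without-recompilation claim above.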
Software defined radio (SDR) architecture for concurrent multi-satellite communications
NASA Astrophysics Data System (ADS)
Maheshwarappa, Mamatha R.
SDRs have emerged as a viable approach for space communications over the last decade by delivering low-cost hardware and flexible software solutions. The flexibility introduced by the SDR concept not only allows the realisation of multiple concurrent standards on one platform, but also promises to ease the implementation of one communication standard on differing SDR platforms by signal porting. This technology would facilitate implementing reconfigurable nodes for parallel satellite reception in mobile/deployable ground segments and Distributed Satellite Systems (DSS) for amateur radio/university satellite operations. This work outlines the recent advances in embedded technologies that can enable new communication architectures for concurrent multi-satellite or satellite-to-ground missions where multi-link challenges arise. This research proposes a novel concept for running advanced parallelised SDR back-end technologies on a Commercial-Off-The-Shelf (COTS) embedded system that can support multi-signal processing for multiple satellites simultaneously. The initial SDR implementation could support only one receiver chain due to system saturation. However, the design was optimised to accommodate multiple signals within the limited resources available on an embedded system at any given time. This was achieved by providing a VHDL solution alongside the existing Python and C/C++ implementations, together with parallelisation, so as to accelerate performance whilst maintaining flexibility. The improvement in performance was validated at every stage through profiling. Various cases of concurrent multiple signals with differing parameters, such as frequency (with Doppler effect) and symbol rate, were simulated in order to validate the novel architecture proposed in this research. The architecture also allows the system to be reconfigured by changing communication standards in soft real-time.
The chosen COTS solution provides a generic software methodology for both ground and space applications that will remain unaltered despite new evolutions in hardware, and supports concurrent multi-standard, multi-channel and multi-rate telemetry signals.
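The core idea of concurrent multi-satellite reception (one shared front end, one channelizer per Doppler-shifted carrier) can be illustrated with a toy Python sketch; the sample rate and Doppler offsets are invented for illustration, and this is not the VHDL/embedded implementation from the work above:

```python
import cmath

FS = 10_000.0  # sample rate in Hz (illustrative value)

def tone(freq_hz, n):
    """Complex exponential at freq_hz, standing in for one satellite's carrier."""
    return [cmath.exp(2j * cmath.pi * freq_hz * k / FS) for k in range(n)]

def channelize(samples, freq_hz):
    """Mix one Doppler-shifted carrier down to baseband and average
    (a crude low-pass filter) to estimate its complex amplitude."""
    mixed = [s * cmath.exp(-2j * cmath.pi * freq_hz * k / FS)
             for k, s in enumerate(samples)]
    return sum(mixed) / len(mixed)

N = 1000
sat_a = tone(1000.0 + 37.0, N)   # satellite A with +37 Hz Doppler
sat_b = tone(2500.0 - 81.0, N)   # satellite B with -81 Hz Doppler
composite = [a + b for a, b in zip(sat_a, sat_b)]  # one shared front end

amp_a = abs(channelize(composite, 1037.0))  # each channel recovers its own signal
amp_b = abs(channelize(composite, 2419.0))
```

Each `channelize` call is independent, so in a real parallelised receiver the per-satellite chains can run concurrently on separate cores or FPGA fabric.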
Miniature EVA Software Defined Radio
NASA Technical Reports Server (NTRS)
Pozhidaev, Aleksey
2012-01-01
As NASA embarks upon developing the Next-Generation Extra-Vehicular Activity (EVA) Radio for deep space exploration, the demands on EVA battery life will substantially increase. The number of modes and frequency bands required will continue to grow in order to enable efficient and complex multi-mode operations including communications, navigation, and tracking applications. Whether conducting astronaut excursions, communicating with soldiers, or supporting first responders at emergency hazards, NASA has developed an innovative, affordable, miniaturized, power-efficient software defined radio that offers unprecedented flexibility. This lightweight, programmable, S-band, multi-service, frequency-agile EVA software defined radio (SDR) supports data, telemetry, voice, and both standard and high-definition video. Features include a modular design and an easily scalable architecture, and the EVA SDR allows for both stationary and mobile battery-powered handheld operations. Currently, the radio is equipped with an S-band RF section; however, its scalable architecture can accommodate multiple RF sections simultaneously to cover multiple frequency bands. The EVA SDR also supports multiple network protocols. It currently implements a hybrid mesh network based on the 802.11s open standard protocol. The radio targets RF channel data rates up to 20 Mbps and can be equipped with a real-time operating system (RTOS) that can be switched off for power-aware applications. The EVA SDR's modular design permits implementation of the same hardware at all network nodes. This approach assures the portability of the same software into any radio in the system. It also brings several benefits to the entire system, including reduced system maintenance, system complexity, and development cost.
Software Formal Inspections Standard
NASA Technical Reports Server (NTRS)
1993-01-01
This Software Formal Inspections Standard (hereinafter referred to as Standard) is applicable to NASA software. This Standard defines the requirements that shall be fulfilled by the software formal inspections process whenever this process is specified for NASA software. The objective of this Standard is to define the requirements for a process that inspects software products to detect and eliminate defects as early as possible in the software life cycle. The process also provides for the collection and analysis of inspection data to improve the inspection process as well as the quality of the software.
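The data-collection-and-analysis side of the inspection process can be illustrated with a small sketch (the phases and defect counts are hypothetical, not drawn from the Standard):

```python
from collections import Counter

# Hypothetical inspection records: (life-cycle phase, defects found)
inspections = [
    ("requirements", 4),
    ("design", 7),
    ("design", 3),
    ("code", 12),
]

# Aggregate defect counts per phase; trends in such data feed back into
# improving both the inspection process and the quality of the software.
defects_by_phase = Counter()
for phase, found in inspections:
    defects_by_phase[phase] += found
```

Tracking where defects are found supports the Standard's objective of eliminating them as early as possible in the life cycle.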
Evolution of a Reconfigurable Processing Platform for a Next Generation Space Software Defined Radio
NASA Technical Reports Server (NTRS)
Kacpura, Thomas J.; Downey, Joseph A.; Anderson, Keffery R.; Baldwin, Keith
2014-01-01
The National Aeronautics and Space Administration (NASA)/Harris Ka-Band Software Defined Radio (SDR) is the first fully reprogrammable, space-qualified SDR operating in the Ka-Band frequency range. Providing significantly higher data communication rates than previously possible, this SDR offers in-orbit reconfiguration, multi-waveform operation, and fast deployment due to its highly modular hardware and software architecture. Currently in operation on the International Space Station (ISS), this new paradigm of reconfigurable technology is enabling experimenters to investigate navigation and networking in the space environment. The modular SDR and the NASA-developed Space Telecommunications Radio System (STRS) architecture standard are the basis for Harris's reusable digital signal processing space platform, trademarked as AppSTAR. Two new space radio products built on this platform are a synthetic aperture radar payload and an Automatic Dependent Surveillance-Broadcast (ADS-B) receiver. In addition, Harris is currently developing many new products similar to the Ka-Band software defined radio for other applications. For NASA's next-generation flight Ka-Band radio development, leveraging these advancements could lead to a more robust and more capable software defined radio. The space environment has special considerations, different from terrestrial applications, that must be addressed by any system operated in space. Each space mission has unique requirements that can make these systems one of a kind; such requirements can yield products that are expensive and limited in reuse. Space systems put a premium on size, weight, and power. A key trade is the amount of reconfigurability in a space system: the more reconfigurable the hardware platform, the easier it is to adapt the platform to the next mission, which reduces non-recurring engineering costs. However, more reconfigurable platforms often consume more spacecraft resources.
Software has similar considerations to hardware. Having an architecture standard promotes reuse of software and firmware. Space platforms have limited processor capability, which makes the trade on the amount of flexibility paramount.
Software Defined Radio Standard Architecture and its Application to NASA Space Missions
NASA Technical Reports Server (NTRS)
Andro, Monty; Reinhart, Richard C.
2006-01-01
A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture, and other aspects is considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing, and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass, power consumption, and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, industry, academia, and standards bodies are key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. This talk will describe NASA's current effort to investigate SDR applications to space missions and give a brief overview of a candidate architecture under consideration for space-based platforms.
Standardized Modular Power Interfaces for Future Space Explorations Missions
NASA Technical Reports Server (NTRS)
Oeftering, Richard
2015-01-01
Earlier studies show that future human exploration missions are composed of multi-vehicle assemblies with interconnected electric power systems. Some vehicles are intended to serve as flexible multi-purpose or multi-mission platforms. This drives the need for power architectures that can be reconfigured to support this level of flexibility. Power system development costs can be reduced, program-wide, by utilizing a common set of modular building blocks. Further, there are mission operational and logistics cost benefits to using a common set of modular spares. These benefits are the goals of the Advanced Exploration Systems (AES) Modular Power System (AMPS) project. A common set of modular blocks requires a substantial level of standardization in terms of the electrical, data system, and mechanical interfaces. The AMPS project is developing a set of proposed interface standards that will provide useful guidance for modular hardware developers without needlessly constraining technology options or limiting future growth in capability. In 2015 the AMPS project focused on standardizing the interfaces between the elements of spacecraft power distribution and energy storage. The development of the modular power standard starts with establishing mission assumptions and ground rules to define the design application space. The standards are defined in terms of the AMPS objectives: commonality, reliability-availability, flexibility-configurability, and supportability-reusability. The proposed standards are aimed at assembly- and sub-assembly-level building blocks. AMPS plans to adopt existing standards for spacecraft command and data, software, network interfaces, and electrical power interfaces where applicable. Other standards, including structural encapsulation, heat transfer, and fluid transfer, are governed by launch and spacecraft environments and bound by practical limitations of weight and volume.
Developing these mechanical interface standards is more difficult but is an essential part of defining the physical building blocks of modular power. This presentation describes the AMPS project's progress toward standardized modular power interfaces.
NASA Astrophysics Data System (ADS)
Zhao, Yongli; Li, Yajie; Wang, Xinbo; Chen, Bowen; Zhang, Jie
2016-09-01
A hierarchical software-defined networking (SDN) control architecture is designed for multi-domain optical networks with the OpenDaylight (ODL) controller. The OpenFlow-based Control Virtual Network Interface (CVNI) protocol is deployed between the network orchestrator and the domain controllers. A dynamic bandwidth-on-demand (BoD) provisioning solution is then proposed based on time scheduling in software-defined multi-domain optical networks (SD-MDON). Shared Risk Link Group (SRLG)-disjoint routing schemes are adopted to separate tenants for reliability. An SD-MDON testbed is built based on the proposed hierarchical control architecture, and the proposed time-scheduling-based BoD (Ts-BoD) solution is experimentally demonstrated on it. The performance of the Ts-BoD solution is evaluated with respect to blocking probability, resource utilization, and lightpath setup latency.
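The time-scheduled admission idea behind bandwidth-on-demand provisioning can be sketched as a toy admission-control loop (the capacities, time slots, and requests are invented for illustration; this is not the Ts-BoD algorithm itself):

```python
def admit(schedule, request, capacity):
    """Admit a bandwidth-on-demand request (start_slot, end_slot, bw)
    only if link capacity is never exceeded during its time window."""
    start, end, bw = request
    for t in range(start, end):
        used = sum(b for (s, e, b) in schedule if s <= t < e)
        if used + bw > capacity:
            return False                  # request is blocked
    schedule.append(request)
    return True

# Requests in arrival order on a 100-unit link
requests = [(0, 4, 60), (2, 6, 50), (5, 8, 70), (3, 7, 40)]
schedule, blocked = [], 0
for r in requests:
    if not admit(schedule, r, capacity=100):
        blocked += 1
blocking_probability = blocked / len(requests)
```

Evaluating `blocking_probability` over many such arrival patterns is essentially how the blocking-probability metric above is measured.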
Open Architecture Standard for NASA's Software-Defined Space Telecommunications Radio Systems
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Johnson, Sandra K.; Kacpura, Thomas J.; Hall, Charles S.; Smith, Carl R.; Liebetreu, John
2008-01-01
NASA is developing an architecture standard for software-defined radios used in space- and ground-based platforms to enable commonality among radio developments to enhance capability and services while reducing mission and programmatic risk. Transceivers (or transponders) with functionality primarily defined in software (e.g., firmware) have the ability to change their functional behavior through software alone. This radio architecture standard offers value by employing common waveform software interfaces, method of instantiation, operation, and testing among different compliant hardware and software products. These common interfaces within the architecture abstract application software from the underlying hardware to enable technology insertion independently at either the software or hardware layer. This paper presents the initial Space Telecommunications Radio System (STRS) Architecture for NASA missions to provide the desired software abstraction and flexibility while minimizing the resources necessary to support the architecture.
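The abstraction the STRS standard aims for, application software coded against common interfaces rather than against specific hardware, can be sketched in miniature (the class and method names here are hypothetical illustrations, not the actual STRS API):

```python
class PlatformHAL:
    """Hypothetical hardware abstraction layer; each compliant platform
    would supply its own implementation behind the same interface."""
    def transmit(self, samples):
        # Stand-in for driving the real RF front end
        return len(samples)

class Waveform:
    """Waveform application coded only against the HAL interface, so it
    can be instantiated on any compliant platform without change."""
    def __init__(self, hal):
        self.hal = hal
        self.running = False

    def start(self):
        self.running = True

    def stop(self):
        self.running = False

    def send(self, samples):
        if not self.running:
            raise RuntimeError("waveform not started")
        return self.hal.transmit(samples)

wf = Waveform(PlatformHAL())
wf.start()
sent = wf.send([0.0, 1.0, 0.0, -1.0])
```

Because `Waveform` never touches hardware directly, either layer can be replaced independently, which is the "technology insertion at either the software or hardware layer" point above.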
An Open Simulation System Model for Scientific Applications
NASA Technical Reports Server (NTRS)
Williams, Anthony D.
1995-01-01
A model for a generic and open environment for running multi-code or multi-application simulations - called the open Simulation System Model (OSSM) - is proposed and defined. This model attempts to meet the requirements of complex systems like the Numerical Propulsion Simulator System (NPSS). OSSM places no restrictions on the types of applications that can be integrated at any state of its evolution. This includes applications of different disciplines, fidelities, etc. An implementation strategy is proposed that starts with a basic prototype, and evolves over time to accommodate an increasing number of applications. Potential (standard) software is also identified which may aid in the design and implementation of the system.
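A registry that places no restrictions on the applications it integrates can be sketched as follows (the class name and the propulsion-flavored stages are illustrative assumptions, not the OSSM design itself):

```python
class OpenSimulationSystem:
    """Sketch of an open integration environment: applications of any
    discipline or fidelity register under one common run interface."""
    def __init__(self):
        self.applications = {}

    def register(self, name, app):
        # No restriction on the kind of application integrated
        self.applications[name] = app

    def run(self, name, state):
        return self.applications[name](state)

oss = OpenSimulationSystem()
# Two toy propulsion stages passing a shared state dictionary along
oss.register("inlet", lambda s: {**s, "pressure": s["pressure"] * 0.98})
oss.register("fan",   lambda s: {**s, "pressure": s["pressure"] * 1.50})

state = {"pressure": 100.0}
for stage in ("inlet", "fan"):
    state = oss.run(stage, state)
```

New applications can be registered at any stage of the system's evolution, matching the incremental implementation strategy described above.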
NASA's SDR Standard: Space Telecommunications Radio System
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Johnson, Sandra K.
2007-01-01
A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture, and other aspects is considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing, and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass, power consumption, and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, industry, academia, and standards bodies are key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. This talk will describe NASA's current effort to investigate SDR applications to space missions and give a brief overview of a candidate architecture under consideration for space-based platforms.
NASA Astrophysics Data System (ADS)
Zhao, Yongli; Ji, Yuefeng; Zhang, Jie; Li, Hui; Xiong, Qianjin; Qiu, Shaofeng
2014-08-01
Ultrahigh throughput capacity requirements are challenging current optical switching nodes, given the fast development of data center networks. Pbit/s-level all-optical switching networks will need to be deployed soon, which will greatly increase the complexity of node architectures. How to control the future network and the node equipment together then becomes a new problem. An enhanced Software Defined Networking (eSDN) control architecture is proposed in this paper, consisting of a Provider NOX (P-NOX) and Node NOXes (N-NOX). With the cooperation of the P-NOX and N-NOXes, flexible control of the entire network can be achieved. An all-optical switching network testbed has been experimentally demonstrated under efficient eSDN control. Pbit/s-level all-optical switching nodes in the testbed are implemented based on a multi-dimensional switching architecture, i.e. multi-level and multi-planar. Due to space and cost limitations, each optical switching node is equipped with only four input line boxes and four output line boxes. Experimental results are given to verify the performance of the proposed control and switching architecture.
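The hierarchical split between an orchestrator and per-node controllers can be sketched as follows (class names follow the paper's P-NOX/N-NOX terminology, but the methods and port numbering are invented for illustration):

```python
class NodeNOX:
    """Node controller: manages cross-connects inside one switching node."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.cross_connects = []

    def setup(self, in_port, out_port):
        self.cross_connects.append((in_port, out_port))
        return True

class ProviderNOX:
    """Network orchestrator: decomposes a network-wide request into
    per-node instructions carried out by the Node NOX controllers."""
    def __init__(self, nodes):
        self.nodes = {n.node_id: n for n in nodes}

    def provision_path(self, hops):
        # hops: [(node_id, in_port, out_port), ...] along the lightpath
        return all(self.nodes[nid].setup(i, o) for nid, i, o in hops)

p_nox = ProviderNOX([NodeNOX("A"), NodeNOX("B")])
ok = p_nox.provision_path([("A", 1, 3), ("B", 2, 4)])
```

The orchestrator never touches switch internals directly; it only issues per-node requests, which is what lets network-wide and node-level control evolve independently.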
Space Telecommunications Radio System (STRS) Architecture Standard. Release 1.02.1
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Kacpura, Thomas J.; Handler, Louis M.; Hall, C. Steve; Mortensen, Dale J.; Johnson, Sandra K.; Briones, Janette C.; Nappier, Jennifer M.; Downey, Joseph A.; Lux, James P.
2012-01-01
This document contains the NASA architecture standard for software defined radios used in space- and ground-based platforms to enable commonality among radio developments to enhance capability and services while reducing mission and programmatic risk. Transceivers (or transponders) with functionality primarily defined in software (e.g., firmware) have the ability to change their functional behavior through software alone. This radio architecture standard offers value by employing common waveform software interfaces, method of instantiation, operation, and testing among different compliant hardware and software products. These common interfaces within the architecture abstract application software from the underlying hardware to enable technology insertion independently at either the software or hardware layer.
Space Telecommunications Radio Architecture (STRS)
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.
2006-01-01
A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture, and other aspects is considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing, and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass, power consumption, and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, industry, academia, and standards bodies are key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. This talk will describe NASA's current effort to investigate SDR applications to space missions and give a brief overview of a candidate architecture under consideration for space-based platforms.
Space Telecommunications Radio Architecture (STRS): Technical Overview
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.
2006-01-01
A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture, and other aspects is considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing, and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass, power consumption, and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, industry, academia, and standards bodies are key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. This talk will describe NASA's current effort to investigate SDR applications to space missions and give a brief overview of a candidate architecture under consideration for space-based platforms.
NASA space station software standards issues
NASA Technical Reports Server (NTRS)
Tice, G. D., Jr.
1985-01-01
The selection and application of software standards present the NASA Space Station Program with the opportunity to serve as a pacesetter for the United States software industry in the area of software standards. The strengths and weaknesses of each of the NASA-defined software standards issues are summarized and discussed. Several significant standards issues are offered for NASA consideration. A challenge is presented for the NASA Space Station Program to serve as a pacesetter for the U.S. software industry through: (1) management commitment to software standards; (2) overall program participation in software standards; and (3) employment of the best available technology to support software standards.
Using component technology to facilitate external software reuse in ground-based planning systems
NASA Technical Reports Server (NTRS)
Chase, A.
2003-01-01
APGEN (Activity Plan GENerator - 314), a multi-mission planning tool, must interface with external software to best serve its users. APGEN's original method for incorporating external software, the User-Defined Library mechanism, has been very successful in allowing APGEN users access to external software functionality.
Advanced Data Visualization in Astrophysics: The X3D Pathway
NASA Astrophysics Data System (ADS)
Vogt, Frédéric P. A.; Owen, Chris I.; Verdes-Montenegro, Lourdes; Borthakur, Sanchayeeta
2016-02-01
Most modern astrophysical data sets are multi-dimensional; a characteristic that can nowadays generally be conserved and exploited scientifically during the data reduction/simulation and analysis cascades. However, the same multi-dimensional data sets are systematically cropped, sliced, and/or projected to printable two-dimensional diagrams at the publication stage. In this article, we introduce the concept of the “X3D pathway” as a means of simplifying and easing the access to data visualization and publication via three-dimensional (3D) diagrams. The X3D pathway exploits the facts that (1) the X3D 3D file format lies at the center of a product tree that includes interactive HTML documents, 3D printing, and high-end animations, and (2) all high-impact-factor and peer-reviewed journals in astrophysics are now published (some exclusively) online. We argue that the X3D standard is an ideal vector for sharing multi-dimensional data sets because it provides direct access to a range of different data visualization techniques, is fully open source, and is a well-defined standard from the International Organization for Standardization. Unlike earlier proposals to publish multi-dimensional data sets via 3D diagrams, the X3D pathway is not tied to specific software (prone to rapid and unexpected evolution), but instead is compatible with a range of open-source software already in use by our community. The interactive HTML branch of the X3D pathway is also actively supported by leading peer-reviewed journals in the field of astrophysics. Finally, this article provides interested readers with a detailed set of practical astrophysical examples designed to act as a stepping stone toward the implementation of the X3D pathway for any other data set.
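Because X3D is plain XML, a minimal 3D diagram can be generated with standard tooling; the sketch below serializes a point cloud as a bare-bones X3D scene (a real publication-quality scene would carry viewpoints, styling, and metadata that this omits):

```python
import xml.etree.ElementTree as ET

def points_to_x3d(points):
    """Serialize a list of (x, y, z) tuples as a minimal X3D scene
    containing a single PointSet node."""
    x3d = ET.Element("X3D", version="3.3")
    scene = ET.SubElement(x3d, "Scene")
    shape = ET.SubElement(scene, "Shape")
    pset = ET.SubElement(shape, "PointSet")
    coords = " ".join(f"{x} {y} {z}" for x, y, z in points)
    ET.SubElement(pset, "Coordinate", point=coords)
    return ET.tostring(x3d, encoding="unicode")

doc = points_to_x3d([(0, 0, 0), (1, 0, 0), (0, 1, 0)])
```

The same file can then feed any branch of the product tree above: embedded in an interactive HTML figure, converted for 3D printing, or rendered in an animation pipeline.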
NASA Technical Reports Server (NTRS)
Dehghani, Navid; Tankenson, Michael
2006-01-01
This paper details an architectural description of the Mission Data Processing and Control System (MPCS), an event-driven, multi-mission set of ground data processing components providing uplink, downlink, and data management capabilities, which will support the Mars Science Laboratory (MSL) project as its first target mission. MPCS is developed from a set of small reusable components, implemented in Java, each designed with a specific function and well-defined interfaces. An industry-standard messaging bus is used to transfer information among system components. Components generate standard messages, which are used to capture system information as well as to trigger actions in support of the event-driven architecture of the system. Event-driven systems are highly desirable for processing high-rate telemetry (science and engineering) data and for supporting automation of many mission operations processes.
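The event-driven messaging pattern described above can be sketched as a minimal publish/subscribe bus (the topic name and message fields are hypothetical, and MPCS itself uses an industry-standard bus in Java rather than this toy):

```python
from collections import defaultdict

class MessageBus:
    """Minimal publish/subscribe bus: components exchange standard
    messages by topic instead of calling each other directly."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

bus = MessageBus()
frames = []
bus.subscribe("telemetry.frame", frames.append)        # a data archiver
bus.subscribe("telemetry.frame", lambda m: None)       # an automation trigger
bus.publish("telemetry.frame", {"apid": 42, "length": 1115})
```

Each message fans out to every subscriber, so archiving, display, and automation components can react to the same telemetry event without knowing about one another.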
ImTK: an open source multi-center information management toolkit
NASA Astrophysics Data System (ADS)
Alaoui, Adil; Ingeholm, Mary Lou; Padh, Shilpa; Dorobantu, Mihai; Desai, Mihir; Cleary, Kevin; Mun, Seong K.
2008-03-01
The Information Management Toolkit (ImTK) Consortium is an open source initiative to develop robust, freely available tools related to the information management needs of basic, clinical, and translational research. An open source framework and agile programming methodology can enable distributed software development while an open architecture will encourage interoperability across different environments. The ISIS Center has conceptualized a prototype data sharing network that simulates a multi-center environment based on a federated data access model. This model includes the development of software tools to enable efficient exchange, sharing, management, and analysis of multimedia medical information such as clinical information, images, and bioinformatics data from multiple data sources. The envisioned ImTK data environment will include an open architecture and data model implementation that complies with existing standards such as Digital Imaging and Communications (DICOM), Health Level 7 (HL7), and the technical framework and workflow defined by the Integrating the Healthcare Enterprise (IHE) Information Technology Infrastructure initiative, mainly the Cross Enterprise Document Sharing (XDS) specifications.
Software technology insertion: A study of success factors
NASA Technical Reports Server (NTRS)
Lydon, Tom
1990-01-01
Managing software development in large organizations has become increasingly difficult due to increasing technical complexity, stricter government standards, a shortage of experienced software engineers, competitive pressure for improved productivity and quality, the need to co-develop hardware and software together, and the rapid changes in both hardware and software technology. The 'software factory' approach to software development minimizes risks while maximizing productivity and quality through standardization, automation, and training. However, in practice, this approach is relatively inflexible when adopting new software technologies. The methods that a large multi-project software engineering organization can use to increase the likelihood of successful software technology insertion (STI), especially in a standardized engineering environment, are described.
NASA Technical Reports Server (NTRS)
Simons, Rainee N.; Wintucky, Edwin G.; Landon, David G.; Sun, Jun Y.; Winn, James S.; Laraway, Stephen; McIntire, William K.; Metz, John L.; Smith, Francis J.
2011-01-01
The paper presents the first research and experimental results on combining a software-defined multi-Gbps modem with a broadband high-power space amplifier, tested with an extended form of the industry-standard DVB-S2 waveform and an LDPC rate 9/10 FEC codec. The modem supports waveforms including QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK, and 128-QAM. The broadband high-power amplifier is a space-qualified traveling-wave tube (TWT) with a passband greater than 3 GHz at 33 GHz, output power of 200 W, and efficiency greater than 60 percent. Together, the modem and the TWT amplifier (TWTA) enabled an unprecedented data rate of 20 Gbps with a low BER of 10^-9. The presented results include a plot of the received waveform constellation, BER vs. Eb/N0, and the implementation loss for each of the modulation types tested. When included in an RF link budget analysis, these results show that NASA's payload data rate can be increased by at least an order of magnitude (greater than 10X) over the current state of practice, limited only by the spacecraft EIRP, ground receiver G/T, range, and available spectrum or bandwidth.
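For context on the BER and Eb/N0 figures quoted above, the theoretical error rate of uncoded QPSK can be computed directly. This is the textbook AWGN formula, not data from the paper:

```python
import math

def qpsk_ber(ebn0_db):
    """Theoretical uncoded QPSK bit error rate over AWGN for a given
    Eb/N0 in dB: BER = Q(sqrt(2*Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10.0)  # dB -> linear
    return 0.5 * math.erfc(math.sqrt(ebn0))

# Uncoded QPSK needs roughly 12.6 dB of Eb/N0 to reach a BER near 1e-9;
# a strong FEC such as the LDPC rate-9/10 code cited above reaches the
# same BER at a much lower Eb/N0, which is what the link budget exploits.
```

The higher-order modulations in the abstract (16-APSK through 128-QAM) trade a steeper Eb/N0 requirement for more bits per symbol; the same Q-function machinery, with modulation-specific arguments, underlies their BER curves.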
Parallel Execution of Functional Mock-up Units in Buildings Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozmen, Ozgur; Nutaro, James J.; New, Joshua Ryan
2016-06-30
A Functional Mock-up Interface (FMI) defines a standardized interface to be used in computer simulations to develop complex cyber-physical systems. FMI implementation by a software modeling tool enables the creation of a simulation model that can be interconnected, or the creation of a software library called a Functional Mock-up Unit (FMU). This report describes an FMU wrapper implementation that imports FMUs into a C++ environment and uses an Euler solver that executes FMUs in parallel using Open Multi-Processing (OpenMP). The purpose of this report is to elucidate the runtime performance of the solver when a multi-component system is imported as a single FMU (for the whole system) or as multiple FMUs (for different groups of components as sub-systems). This performance comparison is conducted using two test cases: (1) a simple, multi-tank problem; and (2) a more realistic use case based on the Modelica Buildings Library. In both test cases, the performance gains are promising when each FMU consists of a large number of states and state events that are wrapped in a single FMU. Load balancing is demonstrated to be a critical factor in speeding up parallel execution of multiple FMUs.
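The per-FMU parallelism described in the report can be sketched as follows, with a thread pool standing in for OpenMP and plain callables standing in for FMUs. The two-tank dynamics and all names are illustrative, and in CPython the threads illustrate the task decomposition rather than true parallel speedup:

```python
from concurrent.futures import ThreadPoolExecutor

def euler_step(state, deriv, dt):
    """One explicit Euler update for a single subsystem (an FMU stand-in)."""
    return [x + dt * dx for x, dx in zip(state, deriv(state))]

def parallel_euler(states, derivs, dt, steps, workers=4):
    """Advance several independent subsystems in lockstep: one task per
    subsystem per time step, mirroring the per-FMU OpenMP parallelism."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(steps):
            states = list(pool.map(lambda sd: euler_step(sd[0], sd[1], dt),
                                   zip(states, derivs)))
    return states

# Hypothetical two-tank example: each tank drains at a rate
# proportional to its level, dh/dt = -k * h, with different k per tank.
derivs = [lambda s: [-0.5 * s[0]], lambda s: [-1.0 * s[0]]]
levels = parallel_euler([[1.0], [1.0]], derivs, dt=0.01, steps=100)
```

The lockstep barrier at the end of each `pool.map` is also where the report's load-balancing observation bites: the slowest FMU in a step gates every other one.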
Web-based multi-channel analyzer
Gritzo, Russ E.
2003-12-23
The present invention provides an improved multi-channel analyzer designed to conveniently gather, process, and distribute spectrographic pulse data. The multi-channel analyzer may operate on a computer system having memory, a processor, and the capability to connect to a network and to receive digitized spectrographic pulses. The multi-channel analyzer may have a software module, integrated with a general-purpose operating system, that may receive digitized spectrographic pulses at rates of at least 10,000 pulses per second. The multi-channel analyzer may further have a user-level software module that may receive user-specified controls dictating the operation of the multi-channel analyzer, making it customizable by the end user. The user-level software may further categorize and conveniently distribute spectrographic pulse data employing non-proprietary, standard communication protocols and formats.
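The core operation of any multi-channel analyzer, mapping each digitized pulse height to a channel and accumulating a spectrum, can be sketched briefly. The channel count and full-scale value below are illustrative, not figures from the patent:

```python
def bin_pulses(pulse_heights, n_channels=1024, full_scale=10.0):
    """Accumulate a spectrum (counts per channel) from digitized pulse
    heights: each height maps linearly onto one of n_channels bins."""
    spectrum = [0] * n_channels
    for height in pulse_heights:
        channel = int(height / full_scale * n_channels)
        if 0 <= channel < n_channels:  # discard out-of-range pulses
            spectrum[channel] += 1
    return spectrum

# Hypothetical pulses clustered around 5.0 (half of full scale)
# land near channel 512; the 12.0 pulse is over-range and dropped.
spectrum = bin_pulses([4.99, 5.00, 5.01, 12.0])
```

The patent's user-level module would sit above a loop like this, adjusting parameters such as the channel count and distributing the resulting spectrum over standard network protocols.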
Space Communication and Navigation SDR Testbed, Overview and Opportunity for Experiments
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.
2013-01-01
NASA has developed an experimental flight payload (referred to as the Space Communication and Navigation (SCAN) Test Bed) to investigate software defined radio (SDR) communications, networking, and navigation technologies operationally in the space environment. The payload consists of three software defined radios, each compliant to NASA's Space Telecommunications Radio System Architecture, a common software interface description standard for software defined radios. The software defined radios, new technology developments by NASA and industry partners, were launched in 2012. The payload is externally mounted to the International Space Station truss to conduct experiments representative of future mission capability. Experiment operations include in-flight reconfiguration of the SDR waveform functions and payload networking software. The flight system communicates with NASA's orbiting satellite relay network, the Tracking and Data Relay Satellite System, at both S-band and Ka-band, and with any Earth-based compatible S-band ground station. The system is available to industry, academia, and other government agencies for experiments in SDR technology assessments and standards advancements.
Sensor Open System Architecture (SOSA) evolution for collaborative standards development
NASA Astrophysics Data System (ADS)
Collier, Charles Patrick; Lipkin, Ilya; Davidson, Steven A.; Baldwin, Rusty; Orlovsky, Michael C.; Ibrahim, Tim
2017-04-01
The Sensor Open System Architecture (SOSA) is a C4ISR-focused technical and economic collaborative effort between the Air Force, Navy, Army, the Department of Defense (DoD), Industry, and other Governmental agencies to develop (and incorporate) a technical Open Systems Architecture standard in order to maximize C4ISR sub-system, system, and platform affordability, re-configurability, and hardware/software/firmware re-use. The SOSA effort will effectively create an operational and technical framework for the integration of disparate payloads into C4ISR systems, with a focus on the development of a modular decomposition (defining functions and behaviors) and associated key interfaces (physical and logical) for a common multi-purpose architecture for radar, EO/IR, SIGINT, EW, and Communications. SOSA addresses hardware, software, and mechanical/electrical interfaces. The modular decomposition will produce a set of re-useable components, interfaces, and sub-systems that engender reusable capabilities. This, in effect, creates a realistic and affordable ecosystem enabling mission effectiveness through systematic re-use of all available re-composed hardware, software, and electrical/mechanical base components and interfaces. To this end, SOSA will leverage existing standards as much as possible and evolve the SOSA architecture through modification, reuse, and enhancements to achieve C4ISR goals. This paper will present accomplishments over the first year of the SOSA initiative.
SAO mission support software and data standards, version 1.0
NASA Technical Reports Server (NTRS)
Hsieh, P.
1993-01-01
This document describes the software developed by the SAO AXAF Mission Support (MS) Program and defines standards for the software development process and control of data products generated by the software. The SAO MS is tasked to develop and use software to perform a variety of functions in support of the AXAF mission. Software is developed by software engineers and scientists, and commercial off-the-shelf (COTS) software is used either directly or customized through the use of scripts to implement analysis procedures. Software controls real-time laboratory instruments, performs data archiving, displays data, and generates model predictions. Much software is used in the analysis of data to generate data products that are required by the AXAF project, for example, on-orbit mirror performance predictions or detailed characterization of the mirror reflection performance with energy.
BYMUR software: a free and open source tool for quantifying and visualizing multi-risk analyses
NASA Astrophysics Data System (ADS)
Tonini, Roberto; Selva, Jacopo
2013-04-01
The BYMUR software aims to provide an easy-to-use open source tool both for computing multi-risk and for managing/visualizing/comparing all the inputs (e.g. hazard, fragilities and exposure) as well as the corresponding results (e.g. risk curves, risk indexes). For all inputs, a complete management of inter-model epistemic uncertainty is considered. The BYMUR software will be one of the final products of the homonymous ByMuR project (http://bymur.bo.ingv.it/), funded by the Italian Ministry of Education, Universities and Research (MIUR), which aims to (i) provide a quantitative and objective general method for a comprehensive long-term multi-risk analysis in a given area, accounting for inter-model epistemic uncertainty through Bayesian methodologies, and (ii) apply the methodology to seismic, volcanic and tsunami risks in Naples (Italy). More specifically, the BYMUR software will be able to separately account for the probabilistic hazard assessment of different kinds of hazardous phenomena, the relative (time-dependent/independent) vulnerabilities and exposure data, and their possible (predefined) interactions: the software will analyze these inputs and use them to estimate both the single- and the multi-risk associated with a specific target area. In addition, it will be possible to connect the software to further tools (e.g., a full hazard analysis), allowing a dynamic I/O of results. The use of the Python programming language guarantees that the final software will be open source and platform independent. Moreover, thanks to the integration of some of the most popular and feature-rich Python scientific modules (NumPy, Matplotlib, SciPy) with the wxPython graphical user toolkit, the final tool will be equipped with a comprehensive Graphical User Interface (GUI) able to control and visualize (in the form of tables, maps and/or plots) any stage of the multi-risk analysis.
The additional features of importing/exporting data in MySQL databases and/or standard XML formats (for instance, the global standards defined in the frame of the GEM project for seismic hazard and risk) will ensure interoperability with other FOSS software and tools and, at the same time, keep the tool within easy reach of the geo-scientific community. An already available example of connection is represented by the BET_VH(**) tool, whose probabilistic volcanic hazard outputs will be used as input for BYMUR. Finally, the prototype version of BYMUR will be applied to the case study of the municipality of Naples, considering three different natural hazards (volcanic eruptions, earthquakes and tsunamis) and carrying out the consequent long-term risk evaluation. (**)BET_VH (Bayesian Event Tree for Volcanic Hazard) is a probabilistic tool for long-term volcanic hazard assessment, recently re-designed and adjusted to run on the Vhub cyber-infrastructure, a free web-based collaborative tool for volcanology research (see http://vhub.org/resources/betvh).
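As a minimal illustration of the aggregation step BYMUR automates, single-hazard exceedance probabilities for one loss level can be combined into a multi-risk figure under an independence assumption. BYMUR itself also models predefined hazard interactions and epistemic uncertainty, both of which this sketch ignores:

```python
def combined_exceedance(per_hazard_probs):
    """Combine single-hazard exceedance probabilities for a given loss
    level into a multi-risk value, assuming independent hazards:
    P(any hazard exceeds) = 1 - prod(1 - p_i)."""
    survival = 1.0
    for p in per_hazard_probs:
        survival *= (1.0 - p)
    return 1.0 - survival

# Hypothetical long-term exceedance probabilities for one loss level
# in a target area: seismic, volcanic, tsunami.
multi_risk = combined_exceedance([0.10, 0.02, 0.01])
```

Repeating this for every loss level yields a multi-risk curve from the three single-risk curves, which is the kind of output the abstract describes visualizing as tables, maps and plots.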
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yixing; Zhang, Jianshun; Pelken, Michael
Executive Summary The objective of this study was to develop a “Virtual Design Studio (VDS)”: a software platform for the integrated, coordinated and optimized design of green building systems with low energy consumption, high indoor environmental quality (IEQ), and a high level of sustainability. The VDS is intended to assist collaborating architects, engineers and project management team members throughout the process, from the early phases to the detailed building design stages. It can be used to plan design tasks and workflow, and to evaluate the potential impacts of various green building strategies on building performance, by using state-of-the-art simulation tools as well as industrial/professional standards and guidelines for green building system design. Engaged in the development of VDS was a multi-disciplinary research team that included architects, engineers, and software developers. Based on a review and analysis of how existing professional practices in building systems design operate, particularly those used in the U.S., Germany and the UK, a generic process for performance-based building design, construction and operation was proposed. It divides the whole process into five distinct stages: Assess, Define, Design, Apply, and Monitor (ADDAM). The current VDS is focused on the first three stages. The VDS considers building design as a multi-dimensional process involving multiple design teams, design factors, and design stages. The intersection among these three dimensions defines a specific design task in terms of “who”, “what” and “when”. It also considers building design as a multi-objective process that aims to enhance five aspects of performance for green building systems: site sustainability, materials and resource efficiency, water utilization efficiency, energy efficiency and impacts on the atmospheric environment, and IEQ.
The current VDS development has been limited to energy efficiency and IEQ performance, with particular focus on evaluating thermal performance, air quality and lighting quality because of their strong interaction with the energy performance of buildings. The VDS software framework contains four major functions: 1) Design coordination: it enables users to define tasks using the Input-Process-Output flow approach, which specifies the anticipated activities (i.e., the process), the required input and output information, and the anticipated interactions with other tasks. It also allows task scheduling to define the workflow, and sharing of the design data and information via the internet. 2) Modeling and simulation: it enables users to perform building simulations to predict the energy consumption and IEQ conditions at any of the design stages by using EnergyPlus and a combined heat, air, moisture and pollutant simulation (CHAMPS) model. A co-simulation method was developed to allow the use of both models at the same time step for combined energy and indoor air quality analysis. 3) Results visualization: it enables users to display a 3-D geometric design of the building by reading a BIM (building information model) file generated by design software such as SketchUp, together with the predicted heat, air, moisture, pollutant and light distributions in the building. 4) Performance evaluation: it enables users to compare the performance of a proposed building design against a reference building defined for the same building type under the same climate conditions, and predicts the percentage improvement over the minimum requirements specified in ASHRAE Standards 55-2010, 62.1-2010 and 90.1-2010. An approach was developed to estimate the potential impact of a design factor on whole-building performance, and hence to assist the user in identifying the areas that offer the most payback for the investment.
The VDS software was developed in C++ with the conventional Model-View-Controller (MVC) software architecture. The software has been verified using a simple three-zone case building. The application of the VDS concepts and framework for building design and performance analysis has been illustrated using a medium-sized, five-story office building that received LEED Platinum Certification from the USGBC.
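The co-simulation method mentioned under function 2, both models advancing at the same time step while exchanging coupling variables, can be sketched generically. The two step functions below are toy relaxation models standing in for EnergyPlus and CHAMPS; all names and rates are invented:

```python
def cosimulate(energy_step, iaq_step, t_end, dt):
    """Lockstep co-simulation: at every shared time step the energy model
    and the air-quality model exchange coupling variables (here a zone
    temperature and a pollutant concentration), mirroring the
    same-time-step coupling described in the report."""
    t, zone_temp, pollutant = 0.0, 20.0, 0.0
    while t < t_end:
        zone_temp = energy_step(zone_temp, pollutant, dt)
        pollutant = iaq_step(pollutant, zone_temp, dt)
        t += dt
    return zone_temp, pollutant

# Toy models: temperature relaxes toward a 22 C setpoint; pollutant
# balances a constant source against ventilation removal.
temp, conc = cosimulate(
    lambda T, c, dt: T + dt * 0.1 * (22.0 - T),   # heating toward setpoint
    lambda c, T, dt: c + dt * (0.5 - 0.2 * c),    # source minus removal
    t_end=100.0, dt=1.0)
```

The key design point is that neither model runs ahead of the other: each step's outputs become the other model's inputs for the same step, which keeps the energy and indoor-air-quality states mutually consistent.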
ANOPP programming and documentation standards document
NASA Technical Reports Server (NTRS)
1976-01-01
Standards defining the requirements for preparing software for the Aircraft Noise Prediction Program (ANOPP) were given. It is the intent of these standards to provide definition, design, coding, and documentation criteria for the achievement of a unity among ANOPP products. These standards apply to all of ANOPP's standard software system. The standards encompass philosophy as well as techniques and conventions.
NASA Astrophysics Data System (ADS)
Li, Yajie; Zhao, Yongli; Zhang, Jie; Yu, Xiaosong; Chen, Haoran; Zhu, Ruijie; Zhou, Quanwei; Yu, Chenbei; Cui, Rui
2017-01-01
A Virtual Network Operator (VNO) is a provider and reseller of network services from other telecommunications suppliers. These network providers are categorized as virtual because they do not own the underlying telecommunication infrastructure. In terms of business operation, a VNO can provide customers with personalized services by leasing network infrastructure from traditional network providers. The unique business modes of VNOs lead to the emergence of network-on-demand (NoD) services. Conventional network provisioning involves a series of manual operations and configurations, which makes it time-consuming. Considering the advantages of Software Defined Networking (SDN), this paper proposes a novel NoD service provisioning solution to satisfy the private network needs of VNOs. The solution is first verified on a real software-defined multi-domain optical network with multi-vendor OTN equipment. With the proposed solution, NoD services can be deployed via online web portals in near-real time. It reinvents the customer experience and redefines how network services are delivered to customers via an online self-service portal. Ultimately, this means a customer will be able to simply go online, click a few buttons and have new services almost instantaneously.
Yang, Hui; Zhang, Jie; Ji, Yuefeng; Tian, Rui; Han, Jianrui; Lee, Young
2015-11-30
Data center interconnection with elastic optical networks is a promising scenario for meeting the high-burstiness and high-bandwidth requirements of data center services. In our previous work, we implemented multi-stratum resilience between IP and elastic optical networks to accommodate data center services. This study extends that work to consider resource integration that breaks the limits of individual network devices and can enhance resource utilization. We propose a novel multi-stratum resources integration (MSRI) architecture based on network function virtualization in a software defined elastic data center optical interconnect. A resource integrated mapping (RIM) scheme for MSRI is introduced in the proposed architecture. The MSRI can accommodate data center services with integrated resources when a single function or resource is too scarce to provision the services, and enables globally integrated optimization of optical network and application resources. The overall feasibility and efficiency of the proposed architecture are experimentally verified on the control plane of an OpenFlow-based enhanced software defined networking (eSDN) testbed. The performance of the RIM scheme under a heavy traffic load scenario is also quantitatively evaluated based on the MSRI architecture in terms of path blocking probability, provisioning latency and resource utilization, and compared with other provisioning schemes.
Shi, Zhenyu; Vickers, Claudia E
2016-12-01
Molecular Cloning Designer Simulator (MCDS) is a powerful new all-in-one cloning and genetic engineering design, simulation and management software platform developed for complex synthetic biology and metabolic engineering projects. In addition to standard functions, it has a number of features that are either unique, or are not found in combination in any one software package: (1) it has a novel interactive flow-chart user interface for complex multi-step processes, allowing an integrated overview of the whole project; (2) it can perform a user-defined workflow of cloning steps in a single execution of the software; (3) it can handle multiple types of genetic recombineering, a technique that is rapidly replacing classical cloning for many applications; (4) it includes experimental information to conveniently guide wet lab work; and (5) it can store results and comments to allow the tracking and management of the whole project in one platform. MCDS is freely available from https://mcds.codeplex.com.
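Feature (2), executing a user-defined workflow of cloning steps in a single run, can be illustrated with a toy sequence-editing pipeline. The steps, site, and sequences below are invented for the example and are not MCDS functionality:

```python
def run_workflow(seq, steps):
    """Execute a user-defined multi-step workflow in one pass: each step
    is a function from sequence to sequence, applied in order, in the
    spirit of MCDS running a whole cloning design in a single execution."""
    for step in steps:
        seq = step(seq)
    return seq

# Toy workflow on a short sequence containing an EcoRI site (GAATTC):
# 1) simulate ligating an insert just after the site,
# 2) append a poly-T tail.
product = run_workflow("AAGAATTCTT", [
    lambda s: s.replace("GAATTC", "GAATTCATGAAA", 1),  # ligate toy insert
    lambda s: s + "TTTTTT",                            # add poly-T tail
])
```

Because every step is an ordinary function, the whole design can be replayed, inspected step by step, or branched, which is the practical benefit of encoding a cloning project as an executable workflow rather than a series of manual edits.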
NASA Technical Reports Server (NTRS)
Albus, James S.; Mccain, Harry G.; Lumia, Ronald
1989-01-01
The document describes the NASA Standard Reference Model (NASREM) Architecture for the Space Station Telerobot Control System. It defines the functional requirements and high-level specifications of the control system for the IOC Flight Telerobot Servicer, serving as the NASA Space Station document for the functional specification and as a guideline for the development of the control system architecture. The NASREM telerobot control system architecture defines a set of standard modules and interfaces which facilitate software design, development, validation, and test, and make possible the integration of telerobotics software from a wide variety of sources. Standard interfaces also provide the software hooks necessary to incrementally upgrade future Flight Telerobot Systems as new capabilities develop in computer science, robotics, and autonomous system control.
Space and Missile Systems Center Standard: Software Development
2015-01-16
…maintenance, or any other activity or combination of activities resulting in products. Within this standard, requirements to “develop,” “define”… integration, reuse, reengineering, maintenance, or any other activity that results in products). The term “developer” encompasses all software team… activities that result in software products. Software development includes new development, modification, reuse, reengineering, maintenance, and any other…
MMX-I: data-processing software for multimodal X-ray imaging and tomography.
Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea
2016-05-01
A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field, and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes, even on a standard PC, with the input and management of the large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline. To the authors' knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments.
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Kacpura, Thomas J.; Johnson, Sandra K.; Lux, James P.
2010-01-01
NASA is developing an experimental flight payload (referred to as the Space Communication and Navigation (SCAN) Test Bed) to investigate software defined radio (SDR), networking, and navigation technologies operationally in the space environment. The payload consists of three software defined radios, each compliant to NASA's Space Telecommunications Radio System Architecture, a common software interface description standard for software defined radios. The software defined radios are new technology developments underway by NASA and industry partners. Planned for launch in early 2012, the payload will be externally mounted to the International Space Station truss and will conduct experiments representative of future mission capability.
Announcing a Community Effort to Create an Information Model for Research Software Archives
NASA Astrophysics Data System (ADS)
Million, C.; Brazier, A.; King, T.; Hayes, A.
2018-04-01
An effort has started to create recommendations and standards for the archiving of planetary science research software. The primary goal is to define an information model that is consistent with OAIS standards.
Software architecture standard for simulation virtual machine, version 2.0
NASA Technical Reports Server (NTRS)
Sturtevant, Robert; Wessale, William
1994-01-01
The Simulation Virtual Machine (SVM) is an Ada architecture which eases the effort involved in real-time software maintenance and sustaining engineering. The Software Architecture Standard defines the infrastructure from which all the simulation models are built. SVM was developed for and used in the Space Station Verification and Training Facility.
Preparation guide for class B software specification documents
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1979-01-01
General conceptual requirements and specific application rules and procedures are provided for the production of software specification documents in conformance with deep space network software standards and class B standards. Class B documentation is identified as the appropriate level applicable to implementation, sustaining engineering, and operational uses by qualified personnel. Special characteristics of class B documents are defined.
Low Power, Low Mass, Modular, Multi-band Software-defined Radios
NASA Technical Reports Server (NTRS)
Haskins, Christopher B. (Inventor); Millard, Wesley P. (Inventor)
2013-01-01
Methods and systems to implement and operate software-defined radios (SDRs). An SDR may be configured to perform a combination of fractional and integer frequency synthesis and direct digital synthesis under control of a digital signal processor, which may provide a set of relatively agile, flexible, low-noise, and low spurious, timing and frequency conversion signals, and which may be used to maintain a transmit path coherent with a receive path. Frequency synthesis may include dithering to provide additional precision. The SDR may include task-specific software-configurable systems to perform tasks in accordance with software-defined parameters or personalities. The SDR may include a hardware interface system to control hardware components, and a host interface system to provide an interface to the SDR with respect to a host system. The SDR may be configured for one or more of communications, navigation, radio science, and sensors.
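A minimal sketch of the direct digital synthesis and dithering ideas in the abstract: a fixed-point phase accumulator advances by a tuning word each sample, and randomly dithering the word represents its fractional part for extra frequency precision. The accumulator width, rates, and dithering scheme here are assumptions for illustration, not the patented design:

```python
import math
import random

def dds_samples(freq_hz, sample_rate, n, acc_bits=32, dither=True):
    """Direct digital synthesis sketch: accumulate a fixed-point phase
    and look up the sine of the accumulator each sample. The ideal
    tuning word is fractional; dithering toggles the integer step so
    the *average* step matches the fraction."""
    modulus = 1 << acc_bits
    tuning = freq_hz * modulus / sample_rate   # ideal (fractional) word
    word = int(tuning)
    frac = tuning - word
    acc, out = 0, []
    for _ in range(n):
        step = word + (1 if dither and random.random() < frac else 0)
        acc = (acc + step) % modulus
        out.append(math.sin(2 * math.pi * acc / modulus))
    return out

# 1 kHz tone at 48 kHz sampling; dithering disabled for repeatability.
samples = dds_samples(1000, 48000, 480, dither=False)
```

With dithering enabled, the residual frequency error of the truncated word is spread into wideband noise rather than appearing as a fixed offset, which is the precision benefit the abstract alludes to.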
Pathway collages: personalized multi-pathway diagrams.
Paley, Suzanne; O'Maille, Paul E; Weaver, Daniel; Karp, Peter D
2016-12-13
Metabolic pathway diagrams are a classical way of visualizing a linked cascade of biochemical reactions. However, to understand some biochemical situations, viewing a single pathway is insufficient, whereas viewing the entire metabolic network results in information overload. How do we enable scientists to rapidly construct personalized multi-pathway diagrams that depict a desired collection of interacting pathways that emphasize particular pathway interactions? We define software for constructing personalized multi-pathway diagrams called pathway-collages using a combination of manual and automatic layouts. The user specifies a set of pathways of interest for the collage from a Pathway/Genome Database. Layouts for the individual pathways are generated by the Pathway Tools software, and are sent to a Javascript Pathway Collage application implemented using Cytoscape.js. That application allows the user to re-position pathways; define connections between pathways; change visual style parameters; and paint metabolomics, gene expression, and reaction flux data onto the collage to obtain a desired multi-pathway diagram. We demonstrate the use of pathway collages in two application areas: a metabolomics study of pathogen drug response, and an Escherichia coli metabolic model. Pathway collages enable facile construction of personalized multi-pathway diagrams.
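The collage-assembly step, placing independently laid-out pathway diagrams onto one shared canvas, can be sketched as a coordinate-offset merge. The layout format below is invented for illustration; real Pathway Tools layouts carry much richer style and connection data:

```python
def build_collage(pathway_layouts, gap=50.0):
    """Place independently laid-out pathways side by side on one canvas,
    offsetting each pathway's node coordinates so the diagrams do not
    overlap; the user would then reposition pathways and add
    inter-pathway connections on top of this starting arrangement."""
    collage = {}
    x_offset = 0.0
    for name, nodes in pathway_layouts.items():
        width = max(x for x, _ in nodes.values()) if nodes else 0.0
        for node, (x, y) in nodes.items():
            collage[f"{name}:{node}"] = (x + x_offset, y)
        x_offset += width + gap
    return collage

# Two toy pathway layouts with node positions in canvas units.
collage = build_collage({
    "glycolysis": {"glc": (0.0, 0.0), "pyr": (100.0, 0.0)},
    "tca":        {"cit": (0.0, 0.0)},
})
```

Namespacing nodes as `pathway:node` keeps metabolites that appear in several pathways distinct until the user explicitly defines a connection between them.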
NASA software documentation standard software engineering program
NASA Technical Reports Server (NTRS)
1991-01-01
The NASA Software Documentation Standard (hereinafter referred to as Standard) can be applied to the documentation of all NASA software. This Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. This basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.
STRS Compliant FPGA Waveform Development
NASA Technical Reports Server (NTRS)
Nappier, Jennifer; Downey, Joseph
2008-01-01
The Space Telecommunications Radio System (STRS) Architecture Standard describes a standard for NASA space software defined radios (SDRs). It provides a common framework that can be used to develop and operate a space SDR in a reconfigurable and reprogrammable manner. One goal of the STRS Architecture is to promote waveform reuse among multiple software defined radios. Many space domain waveforms are designed to run in the special signal processing (SSP) hardware. However, the STRS Architecture is currently incomplete in defining a standard for designing waveforms in the SSP hardware. Therefore, the STRS Architecture needs to be extended to encompass waveform development in the SSP hardware. A transmit waveform for space applications was developed to determine ways to extend the STRS Architecture to a field programmable gate array (FPGA). These extensions include a standard hardware abstraction layer for FPGAs and a standard interface between waveform functions running inside an FPGA. Current standards were researched and new standard interfaces were proposed. The implementation of the proposed standard interfaces on a laboratory breadboard SDR will be presented.
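The idea of a standard interface between waveform functions can be modeled in software as a common block contract, so that independently developed blocks chain together. This API is illustrative only, not part of the STRS standard, and the scrambler is a generic additive LFSR example:

```python
from abc import ABC, abstractmethod

class WaveformBlock(ABC):
    """Toy stand-in for a standard waveform-function interface: every
    block exposes the same process() contract, so blocks from different
    sources can be composed into a transmit or receive chain."""
    @abstractmethod
    def process(self, bits):
        ...

class Scrambler(WaveformBlock):
    """Additive scrambler: XOR the bit stream with a 7-bit LFSR keystream.
    Because the keystream depends only on the seed, applying the same
    block twice with the same seed recovers the original bits."""
    def __init__(self, seed=0b1010101):
        self.state = seed

    def process(self, bits):
        out = []
        for bit in bits:
            fb = (self.state ^ (self.state >> 3)) & 1   # LFSR feedback tap
            self.state = (self.state >> 1) | (fb << 6)
            out.append(bit ^ fb)
        return out

data = [1, 0, 1, 1, 0, 0, 1]
scrambled = Scrambler().process(data)
recovered = Scrambler().process(scrambled)   # same seed descrambles
```

A fixed contract like `process()` is the software analogue of the standard FPGA-internal interfaces the paper proposes: it is what lets a waveform function be swapped or reused without rewiring its neighbors.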
Software-defined Quantum Networking Ecosystem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S.; Sadlier, Ronald
The software enables a user to perform modeling and simulation of software-defined quantum networks. The software addresses the problem of how to synchronize transmission of quantum and classical signals through multi-node networks and to demonstrate quantum information protocols such as quantum teleportation. The software approaches this problem by generating a graphical model of the underlying network and attributing properties to each node and link in the graph. The graphical model is then simulated using a combination of discrete-event simulators to calculate the expected state of each node and link in the graph at a future time. A user interacts with the software by providing an initial network model and instantiating methods for the nodes to transmit information with each other. This includes writing application scripts in Python that make use of the software library interfaces. A user then initiates the application scripts, which invokes the software simulation. The user then uses the built-in diagnostic tools to query the state of the simulation and to collect statistics on synchronization.
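The graph-plus-discrete-event approach described above can be illustrated with a minimal sketch (all names here are hypothetical; the actual package's interfaces are not reproduced):

```python
import heapq

class Link:
    """A network link attributed with a propagation latency (seconds)."""
    def __init__(self, latency):
        self.latency = latency

class Network:
    """Toy graphical model: nodes plus latency-weighted links."""
    def __init__(self):
        self.links = {}   # (src, dst) -> Link
        self.state = {}   # node -> last message received

    def connect(self, a, b, latency):
        self.links[(a, b)] = Link(latency)
        self.links[(b, a)] = Link(latency)

def simulate(net, sends):
    """Discrete-event loop: pop the earliest send, deliver after the
    link latency, and record each node's expected state over time."""
    events = list(sends)          # (time, src, dst, message)
    heapq.heapify(events)
    log = []
    while events:
        t, src, dst, msg = heapq.heappop(events)
        arrival = t + net.links[(src, dst)].latency
        net.state[dst] = msg
        log.append((arrival, dst, msg))
    return log

net = Network()
net.connect("alice", "bob", 0.005)   # a 5 ms link
log = simulate(net, [(0.0, "alice", "bob", "qubit-basis")])
```

A real model would also attribute quantum properties (loss, decoherence) to each node and link; this sketch only shows the event-queue mechanics used to track synchronization.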
Modular Software for Spacecraft Navigation Using the Global Positioning System (GPS)
NASA Technical Reports Server (NTRS)
Truong, S. H.; Hartman, K. R.; Weidow, D. A.; Berry, D. L.; Oza, D. H.; Long, A. C.; Joyce, E.; Steger, W. L.
1996-01-01
The Goddard Space Flight Center Flight Dynamics and Mission Operations Divisions have jointly investigated the feasibility of engineering modular Global Positioning System (GPS) navigation software to support both real-time flight and ground postprocessing configurations. The goals of this effort are to define standard GPS data interfaces and to engineer standard, reusable navigation software components that can be used to build a broad range of GPS navigation support applications. The paper discusses the GPS modular software (GMOD) system and operations concepts, major requirements, candidate software architecture, feasibility assessment, and recommended software interface standards. In addition, ongoing efforts to broaden the scope of the initial study and to develop modular software to support autonomous navigation using GPS are addressed.
Space Communication and Navigation Testbed Communications Technology for Exploration
NASA Technical Reports Server (NTRS)
Reinhart, Richard
2013-01-01
NASA developed and launched an experimental flight payload (referred to as the Space Communication and Navigation Test Bed) to investigate software defined radio, networking, and navigation technologies, operationally in the space environment. The payload consists of three software defined radios, each compliant to NASA's Space Telecommunications Radio System Architecture, a common software interface description standard for software defined radios. The software defined radios are new technology developed by NASA and industry partners. The payload is externally mounted to the International Space Station truss and available to NASA, industry, and university partners to conduct experiments representative of future mission capability. Experiment operations include in-flight reconfiguration of the SDR waveform functions and payload networking software. The flight system communicates with NASA's orbiting satellite relay network, the Tracking and Data Relay Satellite System, at both S-band and Ka-band, and with any Earth-based compatible S-band ground station.
The Infeasibility of Experimental Quantification of Life-Critical Software Reliability
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Finelli, George B.
1991-01-01
This paper affirms that quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultra-reliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multi-version software experiments support this affirmation.
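The scale of the infeasibility argument can be made concrete with a standard exponential-model calculation (this arithmetic is an illustration, not taken from the paper itself): to claim a failure rate below λ with confidence 1 − α from failure-free testing, the required test time is t = −ln(α)/λ.

```python
import math

def required_test_hours(target_rate_per_hour, confidence):
    """Failure-free test time needed to bound the failure rate at the
    target, assuming exponentially distributed times to failure."""
    alpha = 1.0 - confidence
    return -math.log(alpha) / target_rate_per_hour

# Ultra-reliability region: 1e-9 failures/hour at 99% confidence.
hours = required_test_hours(1e-9, 0.99)
years = hours / (24 * 365)
# roughly 4.6 billion hours, i.e. over half a million years of testing
```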
System Framework for a Multi-Band, Multi-Mode Software Defined Radio
2014-06-01
detection, while the VITA Radio Transport (VRT) protocol over Gigabit Ethernet (GIGE) is implemented for the data interface.
Lindoerfer, Doris; Mansmann, Ulrich
2017-07-01
Patient registries are instrumental for medical research. Often their structures are complex and their implementations use composite software systems to meet the wide spectrum of challenges. Commercial and open-source systems are available for registry implementation, but many research groups develop their own systems. Methodological approaches in the selection of software as well as the construction of proprietary systems are needed. We propose an evidence-based checklist, summarizing essential items for patient registry software systems (CIPROS), to accelerate the requirements engineering process. Requirements engineering activities for software systems follow traditional software requirements elicitation methods, general software requirements specification (SRS) templates, and standards. We performed a multistep procedure to develop a specific evidence-based CIPROS checklist: (1) a systematic literature review to build a comprehensive collection of technical concepts, (2) a qualitative content analysis to define a catalogue of relevant criteria, and (3) a checklist to construct a minimal appraisal standard. CIPROS is based on 64 publications and covers twelve sections with a total of 72 items. CIPROS also defines software requirements. Comparing CIPROS with traditional software requirements elicitation methods, SRS templates, and standards shows broad consensus but also differences regarding registry-specific aspects. Using an evidence-based approach to requirements engineering for registry software adds aspects to the traditional methods and accelerates the software engineering process for registry software. The method we used to construct CIPROS serves as a potential template for creating evidence-based checklists in other fields. The CIPROS list supports developers in assessing requirements for existing systems and formulating requirements for their own systems, while strengthening the reporting of patient registry software system descriptions. 
It may be a first step to create standards for patient registry software system assessments. Copyright © 2017 Elsevier Inc. All rights reserved.
Online Remote Sensing Interface
NASA Technical Reports Server (NTRS)
Lawhead, Joel
2007-01-01
BasinTools Module 1 processes remotely sensed raster data, including multi- and hyper-spectral data products, via a Web site with no downloads and no plug-ins required. The interface provides standardized algorithms designed so that a user with little or no remote-sensing experience can use the site. This Web-based approach reduces the amount of software, hardware, and computing power necessary to perform the specified analyses. Access to imagery and derived products is enterprise-level and controlled. Because the user never takes possession of the imagery, the licensing of the data is greatly simplified. BasinTools takes the "just-in-time" inventory control model from commercial manufacturing and applies it to remotely-sensed data. Products are created and delivered on-the-fly with no human intervention, even for casual users. Well-defined procedures can be combined in different ways to extend verified and validated methods in order to derive new remote-sensing products, which improves efficiency in any well-defined geospatial domain. Remote-sensing products produced in BasinTools are self-documenting, allowing procedures to be independently verified or peer-reviewed. The software can be used enterprise-wide to conduct low-level remote sensing, viewing, sharing, and manipulating of image data without the need for desktop applications.
GNSS software receiver sampling noise and clock jitter performance and impact analysis
NASA Astrophysics Data System (ADS)
Chen, Jian Yun; Feng, XuZhe; Li, XianBin; Wu, GuangYao
2015-02-01
Multi-frequency, multi-constellation GNSS software defined radio receivers are becoming more and more popular due to their simple architecture, flexible configuration, and good coherence in multi-frequency signal processing. They play an important role in navigation signal processing and signal quality monitoring. In particular, a GNSS software defined radio receiver that drives the sampling clock of the analogue-to-digital converter (ADC) from an FPGA permits a more flexible radio transceiver design. According to the concept of software defined radio (SDR), the ideal is to digitize as close to the antenna as possible. However, since GNSS carrier frequencies lie in the GHz range, converting at the carrier frequency is expensive and consumes more power. Band sampling is a cheaper, more effective alternative: it allows an RF signal to be sampled at twice the bandwidth of the signal rather than twice its carrier frequency. Unfortunately, as the other side of the coin, the SDR concept and the band sampling method degrade receiver performance. The ADC suffers larger sampling clock jitter when the clock is generated by the FPGA, and the low sampling frequency introduces more noise into the receiver, so the influence of sampling noise can no longer be neglected. The paper analyzes the sampling noise, presents its influence on the carrier-to-noise ratio, and derives the ranging error by calculating the synchronization error of the delay locked loop. Simulations aimed at each impact factor of sampling-noise-induced ranging error are performed. Simulation and experiment results show that if the target ranging accuracy is at the level of centimeters, the quantization length should be no less than 8 bits and the sampling clock jitter should not exceed 30 ps.
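The closing figures can be cross-checked against two textbook ADC formulas (a sketch under standard assumptions, not the paper's own derivation): ideal quantization SNR ≈ 6.02·N + 1.76 dB for N bits, and the jitter-limited SNR = −20·log10(2π·f·σ_t) for input frequency f and RMS clock jitter σ_t. The 50 MHz intermediate frequency below is a hypothetical choice for illustration.

```python
import math

def quantization_snr_db(bits):
    """Ideal ADC SNR for a full-scale sinusoid."""
    return 6.02 * bits + 1.76

def jitter_snr_db(freq_hz, jitter_s):
    """SNR ceiling imposed by RMS sampling clock jitter."""
    return -20.0 * math.log10(2.0 * math.pi * freq_hz * jitter_s)

snr_q = quantization_snr_db(8)         # about 49.9 dB for 8-bit samples
snr_j = jitter_snr_db(50e6, 30e-12)    # about 40.5 dB at 50 MHz, 30 ps jitter
```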
A miniaturized NQR spectrometer for a multi-channel NQR-based detection device
NASA Astrophysics Data System (ADS)
Beguš, Samo; Jazbinšek, Vojko; Pirnat, Janez; Trontelj, Zvonko
2014-10-01
A low frequency (0.5-5 MHz) battery operated sensitive pulsed NQR spectrometer with a transmitter power up to 5 W and a total mass of about 3 kg, aimed at detecting ¹⁴N NQR signals, predominantly of illicit materials, was designed and assembled. This spectrometer uses a standard software defined radio (SDR) platform for the data acquisition unit. Signal processing is done with the LabView Virtual instrument on a personal computer. We successfully tested the spectrometer by measuring ¹⁴N NQR signals from aminotetrazole monohydrate (ATMH), potassium nitrate (PN), paracetamol (PCM) and trinitrotoluene (TNT). Such a spectrometer is a feasible component of a portable single or multichannel ¹⁴N NQR based detection device.
JPL Space Telecommunications Radio System Operating Environment
NASA Technical Reports Server (NTRS)
Lux, James P.; Lang, Minh; Peters, Kenneth J.; Taylor, Gregory H.; Duncan, Courtney B.; Orozco, David S.; Stern, Ryan A.; Ahten, Earl R.; Girard, Mike
2013-01-01
A flight-qualified implementation of a Software Defined Radio (SDR) Operating Environment for the JPL-SDR built for the CoNNeCT Project has been developed. It is compliant with the NASA Space Telecommunications Radio System (STRS) Architecture Standard, and provides the software infrastructure for STRS compliant waveform applications. This software provides a standards-compliant abstracted view of the JPL-SDR hardware platform. It uses industry standard POSIX interfaces for most functions, as well as exposing the STRS API (Application Programming Interface) required by the standard. This software includes a standardized interface for IP components instantiated within a Xilinx FPGA (Field Programmable Gate Array). The software provides a standardized abstracted interface to platform resources such as data converters, file system, etc., which can be used by STRS standards conformant waveform applications. It provides a generic SDR operating environment with a much smaller resource footprint than similar products such as SCA (Software Communications Architecture) compliant implementations, or the DoD Joint Tactical Radio System (JTRS).
Experimental control in software reliability certification
NASA Technical Reports Server (NTRS)
Trammell, Carmen J.; Poore, Jesse H.
1994-01-01
There is growing interest in software 'certification', i.e., confirmation that software has performed satisfactorily under a defined certification protocol. Regulatory agencies, customers, and prospective reusers all want assurance that a defined product standard has been met. In other industries, products are typically certified under protocols in which random samples of the product are drawn, tests characteristic of operational use are applied, analytical or statistical inferences are made, and products meeting a standard are 'certified' as fit for use. A warranty statement is often issued upon satisfactory completion of a certification protocol. This paper outlines specific engineering practices that must be used to preserve the validity of the statistical certification testing protocol. The assumptions associated with a statistical experiment are given, and their implications for statistical testing of software are described.
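The protocol described above (draw random samples characteristic of operational use, test, then make a statistical inference) can be sketched with the standard zero-failure bound: after n failure-free tests, the per-demand failure probability is below 1 − α^(1/n) with confidence 1 − α. The operational profile below is invented for illustration.

```python
import math
import random

def failure_prob_bound(n_tests, confidence):
    """Upper bound on per-demand failure probability after n
    failure-free randomly sampled tests, at the given confidence."""
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / n_tests)

def draw_tests(operational_profile, n, seed=0):
    """Sample test cases in proportion to their operational usage."""
    rng = random.Random(seed)
    cases, weights = zip(*operational_profile.items())
    return rng.choices(cases, weights=weights, k=n)

profile = {"nominal": 0.90, "boundary": 0.08, "error-path": 0.02}
tests = draw_tests(profile, 1000)
bound = failure_prob_bound(1000, 0.95)   # about 0.003 per demand
```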
STAMPS: Software Tool for Automated MRI Post-processing on a supercomputer.
Bigler, Don C; Aksu, Yaman; Miller, David J; Yang, Qing X
2009-08-01
This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features. The STAMP image outputs can be used to perform brain analysis using Statistical Parametric Mapping (SPM) or single-/multi-image modality brain analysis using Support Vector Machines (SVMs). Since STAMPS is PBS-based, the supercomputer may be a multi-node computer cluster or one of the latest multi-core computers.
Second generation registry framework.
Bellgard, Matthew I; Render, Lee; Radochonski, Maciej; Hunter, Adam
2014-01-01
Information management systems are essential to capture data be it for public health and human disease, sustainable agriculture, or plant and animal biosecurity. In public health, the term patient registry is often used to describe information management systems that are used to record and track phenotypic data of patients. Appropriate design, implementation and deployment of patient registries enables rapid decision making and ongoing data mining ultimately leading to improved patient outcomes. A major bottleneck encountered is the static nature of these registries. That is, software developers are required to work with stakeholders to determine requirements, design the system, implement the required data fields and functionality for each patient registry. Additionally, software developer time is required for ongoing maintenance and customisation. It is desirable to deploy a sophisticated registry framework that can allow scientists and registry curators possessing standard computing skills to dynamically construct a complete patient registry from scratch and customise it for their specific needs with little or no need to engage a software developer at any stage. This paper introduces our second generation open source registry framework which builds on our previous rare disease registry framework (RDRF). This second generation RDRF is a new approach as it empowers registry administrators to construct one or more patient registries without software developer effort. New data elements for a diverse range of phenotypic and genotypic measurements can be defined at any time. Defined data elements can then be utilised in any of the created registries. Fine grained, multi-level user and workgroup access can be applied to each data element to ensure appropriate access and data privacy. We introduce the concept of derived data elements to assist the data element standards communities on how they might be best categorised. 
We introduce the second generation RDRF that enables the user-driven dynamic creation of patient registries. We believe this second generation RDRF is a novel approach to patient registry design, implementation and deployment and a significant advance on existing registry systems.
Second generation registry framework
2014-01-01
Background Information management systems are essential to capture data be it for public health and human disease, sustainable agriculture, or plant and animal biosecurity. In public health, the term patient registry is often used to describe information management systems that are used to record and track phenotypic data of patients. Appropriate design, implementation and deployment of patient registries enables rapid decision making and ongoing data mining ultimately leading to improved patient outcomes. A major bottleneck encountered is the static nature of these registries. That is, software developers are required to work with stakeholders to determine requirements, design the system, implement the required data fields and functionality for each patient registry. Additionally, software developer time is required for ongoing maintenance and customisation. It is desirable to deploy a sophisticated registry framework that can allow scientists and registry curators possessing standard computing skills to dynamically construct a complete patient registry from scratch and customise it for their specific needs with little or no need to engage a software developer at any stage. Results This paper introduces our second generation open source registry framework which builds on our previous rare disease registry framework (RDRF). This second generation RDRF is a new approach as it empowers registry administrators to construct one or more patient registries without software developer effort. New data elements for a diverse range of phenotypic and genotypic measurements can be defined at any time. Defined data elements can then be utilised in any of the created registries. Fine grained, multi-level user and workgroup access can be applied to each data element to ensure appropriate access and data privacy. We introduce the concept of derived data elements to assist the data element standards communities on how they might be best categorised. 
Conclusions We introduce the second generation RDRF that enables the user-driven dynamic creation of patient registries. We believe this second generation RDRF is a novel approach to patient registry design, implementation and deployment and a significant advance on existing registry systems. PMID:24982690
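The core RDRF idea described above, data elements defined at runtime and reused across registries with fine-grained per-element access control, can be sketched as follows (illustrative classes only; this is not RDRF's actual API):

```python
class DataElement:
    """A phenotypic or genotypic field defined at runtime, not in code."""
    def __init__(self, name, datatype, allowed_groups):
        self.name = name
        self.datatype = datatype
        self.allowed_groups = set(allowed_groups)  # per-element access control

class Registry:
    """A registry assembled from runtime-defined data elements."""
    def __init__(self, name):
        self.name = name
        self.elements = {}
        self.records = []

    def add_element(self, element):
        self.elements[element.name] = element

    def read(self, record, element_name, group):
        # Fine-grained access: each element carries its own permissions.
        if group not in self.elements[element_name].allowed_groups:
            raise PermissionError(f"{group} may not read {element_name}")
        return record[element_name]

# A curator defines an element and reuses it in a new registry.
height = DataElement("height_cm", float, {"clinician", "curator"})
registry = Registry("rare-disease-demo")
registry.add_element(height)
registry.records.append({"height_cm": 172.0})
value = registry.read(registry.records[0], "height_cm", "clinician")
```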
NASA Astrophysics Data System (ADS)
Xiong, Wenhao; Tian, Xin; Chen, Genshe; Pham, Khanh; Blasch, Erik
2017-05-01
Software defined radio (SDR) has become a popular tool for the implementation and testing of communications performance. The advantages of the SDR approach include: a re-configurable design, adaptive response to changing conditions, efficient development, and highly versatile implementation. In order to realize the benefits of SDR, the Space Telecommunications Radio System (STRS) was proposed by NASA Glenn Research Center (GRC) along with a standard application program interface (API) structure. Each component of the system uses a well-defined API to communicate with other components. The benefit of a standard API is to relax the platform limitation of each component, allowing additional options. For example, the waveform generating process can be supported by a field programmable gate array (FPGA), a personal computer (PC), or an embedded system. As long as the API requirements are met, the selected waveform generator will work with the complete system. In this paper, we demonstrate the design and development of an adaptive SDR following the STRS and standard API protocol. We introduce, step by step, the SDR testbed system, including the controlling graphical user interface (GUI), database, GNU Radio hardware control, and universal software radio peripheral (USRP) transceiving front end. In addition, a performance evaluation is shown on the effectiveness of the SDR approach for space telecommunication.
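The role of a well-defined component API can be illustrated with a minimal abstraction (a hypothetical interface in the spirit of STRS; the actual STRS API is defined by the NASA standard and is not reproduced here):

```python
from abc import ABC, abstractmethod

class WaveformComponent(ABC):
    """Any back end (FPGA, PC, or embedded) honoring this interface can
    be swapped into the processing chain without changing the rest."""

    @abstractmethod
    def configure(self, params):
        ...

    @abstractmethod
    def process(self, data):
        ...

class CpuBpskModulator(WaveformComponent):
    """A software (CPU) implementation of the same interface."""

    def __init__(self):
        self.amplitude = 1.0

    def configure(self, params):
        self.amplitude = params.get("amplitude", 1.0)

    def process(self, bits):
        # Map bits to antipodal symbols; a real modulator would also
        # pulse-shape and upconvert.
        return [complex(self.amplitude if b else -self.amplitude, 0.0)
                for b in bits]

mod = CpuBpskModulator()
mod.configure({"amplitude": 0.5})
symbols = mod.process([1, 0, 1])
```

An FPGA-backed class implementing the same two methods could replace `CpuBpskModulator` without touching the rest of the chain, which is the point of the standard API.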
7 CFR 1753.6 - Standards, specifications, and general requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... compliant, as defined in 7 CFR 1735.22(e). (d) All materials and equipment financed with loan funds are subject to the “Buy American” provision (7 U.S.C. 901 et seq. as amended in 1938). (e) All software, software systems, and firmware financed with loan funds must be year 2000 compliant, as defined in 7 CFR...
7 CFR 1753.6 - Standards, specifications, and general requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... compliant, as defined in 7 CFR 1735.22(e). (d) All materials and equipment financed with loan funds are subject to the “Buy American” provision (7 U.S.C. 901 et seq. as amended in 1938). (e) All software, software systems, and firmware financed with loan funds must be year 2000 compliant, as defined in 7 CFR...
7 CFR 1753.6 - Standards, specifications, and general requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... compliant, as defined in 7 CFR 1735.22(e). (d) All materials and equipment financed with loan funds are subject to the “Buy American” provision (7 U.S.C. 901 et seq. as amended in 1938). (e) All software, software systems, and firmware financed with loan funds must be year 2000 compliant, as defined in 7 CFR...
7 CFR 1753.6 - Standards, specifications, and general requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... compliant, as defined in 7 CFR 1735.22(e). (d) All materials and equipment financed with loan funds are subject to the “Buy American” provision (7 U.S.C. 901 et seq. as amended in 1938). (e) All software, software systems, and firmware financed with loan funds must be year 2000 compliant, as defined in 7 CFR...
NASA Software Documentation Standard
NASA Technical Reports Server (NTRS)
1991-01-01
The NASA Software Documentation Standard (hereinafter referred to as "Standard") is designed to support the documentation of all software developed for NASA; its goal is to provide a framework and model for recording the essential information needed throughout the development life cycle and maintenance of a software system. The NASA Software Documentation Standard can be applied to the documentation of all NASA software. The Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. The basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.
SEQ-REVIEW: A tool for reviewing and checking spacecraft sequences
NASA Astrophysics Data System (ADS)
Maldague, Pierre F.; El-Boushi, Mekki; Starbird, Thomas J.; Zawacki, Steven J.
1994-11-01
A key component of JPL's strategy to make space missions faster, better and cheaper is the Advanced Multi-Mission Operations System (AMMOS), a ground software intensive system currently in use and in further development. AMMOS intends to eliminate the cost of re-engineering a ground system for each new JPL mission. This paper discusses SEQ-REVIEW, a component of AMMOS that was designed to facilitate and automate the task of reviewing and checking spacecraft sequences. SEQ-REVIEW is a smart browser for inspecting files created by other sequence generation tools in the AMMOS system. It can parse sequence-related files according to a computer-readable version of a 'Software Interface Specification' (SIS), which is a standard document for defining file formats. It lets users display one or several linked files and check simple constraints using a Basic-like 'Little Language'. SEQ-REVIEW represents the first application of the Quality Function Deployment (QFD) method to sequence software development at JPL. The paper will show how the requirements for SEQ-REVIEW were defined and converted into a design based on object-oriented principles. The process starts with interviews of potential users, a small but diverse group that spans multiple disciplines and 'cultures'. It continues with the development of QFD matrices that related product functions and characteristics to user-demanded qualities. These matrices are then turned into a formal Software Requirements Document (SRD). The process concludes with the design phase, in which the CRC (Class, Responsibility, Collaboration) approach was used to convert requirements into a blueprint for the final product.
SEQ-REVIEW: A tool for reviewing and checking spacecraft sequences
NASA Technical Reports Server (NTRS)
Maldague, Pierre F.; El-Boushi, Mekki; Starbird, Thomas J.; Zawacki, Steven J.
1994-01-01
A key component of JPL's strategy to make space missions faster, better and cheaper is the Advanced Multi-Mission Operations System (AMMOS), a ground software intensive system currently in use and in further development. AMMOS intends to eliminate the cost of re-engineering a ground system for each new JPL mission. This paper discusses SEQ-REVIEW, a component of AMMOS that was designed to facilitate and automate the task of reviewing and checking spacecraft sequences. SEQ-REVIEW is a smart browser for inspecting files created by other sequence generation tools in the AMMOS system. It can parse sequence-related files according to a computer-readable version of a 'Software Interface Specification' (SIS), which is a standard document for defining file formats. It lets users display one or several linked files and check simple constraints using a Basic-like 'Little Language'. SEQ-REVIEW represents the first application of the Quality Function Deployment (QFD) method to sequence software development at JPL. The paper will show how the requirements for SEQ-REVIEW were defined and converted into a design based on object-oriented principles. The process starts with interviews of potential users, a small but diverse group that spans multiple disciplines and 'cultures'. It continues with the development of QFD matrices that related product functions and characteristics to user-demanded qualities. These matrices are then turned into a formal Software Requirements Document (SRD). The process concludes with the design phase, in which the CRC (Class, Responsibility, Collaboration) approach was used to convert requirements into a blueprint for the final product.
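The SEQ-REVIEW pattern, parsing files against a machine-readable format specification and then evaluating simple constraints, can be sketched as follows (the field names and toy 'SIS' are invented; real SIS grammars are mission-specific, and a real tool would use a safe expression parser rather than `eval`):

```python
# A toy "SIS": each line is comma-separated fields with declared types.
SIS_SPEC = [("command", str), ("start_s", float), ("duration_s", float)]

def parse_sequence(text):
    """Parse sequence lines into typed records per the SIS declaration."""
    records = []
    for line in text.strip().splitlines():
        fields = [cell.strip() for cell in line.split(",")]
        records.append({name: typ(raw)
                        for (name, typ), raw in zip(SIS_SPEC, fields)})
    return records

def check(records, constraint):
    """Evaluate a simple constraint expression against every record,
    in the spirit of SEQ-REVIEW's 'Little Language'; return violators."""
    return [r for r in records if not eval(constraint, {}, r)]

seq = parse_sequence("SLEW, 10.0, 5.0\nDOWNLINK, 12.0, 30.0")
violations = check(seq, "duration_s < 20.0")
# the 30 s DOWNLINK violates the 20 s limit
```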
A NASA family of minicomputer systems, Appendix A
NASA Technical Reports Server (NTRS)
Deregt, M. P.; Dulfer, J. E.
1972-01-01
This investigation was undertaken to establish sufficient specifications, or standards, for minicomputer hardware and software to provide NASA with realizable economics in quantity purchases, interchangeability of minicomputers, software, storage and peripherals, and a uniformly high quality. The standards will define minicomputer system component types, each specialized to its intended NASA application, in as many levels of capacity as required.
A Multi-Agent Environment for Negotiation
NASA Astrophysics Data System (ADS)
Hindriks, Koen V.; Jonker, Catholijn M.; Tykhonov, Dmytro
In this chapter we introduce the System for Analysis of Multi-Issue Negotiation (SAMIN). SAMIN offers a negotiation environment that supports and facilitates various negotiation setups. The environment has been designed to analyse negotiation processes between human negotiators, between human and software agents, and between software agents. It offers a range of different agents, different domains, and other options useful to define a negotiation setup. The environment has been used to test and evaluate a range of negotiation strategies in various domains, playing against other negotiating agents as well as humans. We discuss some of the results obtained by means of these experiments.
Current state of the mass storage system reference model
NASA Technical Reports Server (NTRS)
Coyne, Robert
1993-01-01
IEEE SSSWG was chartered in May 1990 to abstract the hardware and software components of existing and emerging storage systems and to define the software interfaces between these components. The immediate goal is the decomposition of a storage system into interoperable functional modules which vendors can offer as separate commercial products. The ultimate goal is to develop interoperable standards which define the software interfaces, and in the distributed case, the associated protocols to each of the architectural modules in the model. The topics are presented in viewgraph form and include the following: IEEE SSSWG organization; IEEE SSSWG subcommittees & chairs; IEEE standards activity board; layered view of the reference model; layered access to storage services; IEEE SSSWG emphasis; and features for MSSRM version 5.
A miniaturized NQR spectrometer for a multi-channel NQR-based detection device.
Beguš, Samo; Jazbinšek, Vojko; Pirnat, Janez; Trontelj, Zvonko
2014-10-01
A low frequency (0.5-5 MHz) battery operated sensitive pulsed NQR spectrometer with a transmitter power up to 5 W and a total mass of about 3 kg, aimed at detecting ¹⁴N NQR signals, predominantly of illicit materials, was designed and assembled. This spectrometer uses a standard software defined radio (SDR) platform for the data acquisition unit. Signal processing is done with the LabView Virtual instrument on a personal computer. We successfully tested the spectrometer by measuring ¹⁴N NQR signals from aminotetrazole monohydrate (ATMH), potassium nitrate (PN), paracetamol (PCM) and trinitrotoluene (TNT). Such a spectrometer is a feasible component of a portable single or multichannel ¹⁴N NQR based detection device. Copyright © 2014 Elsevier Inc. All rights reserved.
Information system life-cycle and documentation standards, volume 1
NASA Technical Reports Server (NTRS)
Callender, E. David; Steinbacher, Jody
1989-01-01
The Software Management and Assurance Program (SMAP) Information System Life-Cycle and Documentation Standards Document describes the Version 4 standard information system life-cycle in terms of processes, products, and reviews. The description of the products includes detailed documentation standards. The standards in this document set can be applied to the life-cycle, i.e., to each phase in the system's development, and to the documentation of all NASA information systems. This provides consistency across the agency as well as visibility into the completeness of the information recorded. An information system is software-intensive, but consists of any combination of software, hardware, and operational procedures required to process, store, or transmit data. This document defines a standard life-cycle model and content for associated documentation.
Biomorphic Multi-Agent Architecture for Persistent Computing
NASA Technical Reports Server (NTRS)
Lodding, Kenneth N.; Brewster, Paul
2009-01-01
A multi-agent software/hardware architecture, inspired by the multicellular nature of living organisms, has been proposed as the basis of design of a robust, reliable, persistent computing system. Just as a multicellular organism can adapt to changing environmental conditions and can survive despite the failure of individual cells, a multi-agent computing system, as envisioned, could adapt to changing hardware, software, and environmental conditions. In particular, the computing system could continue to function (perhaps at a reduced but still reasonable level of performance) if one or more component(s) of the system were to fail. One of the defining characteristics of a multicellular organism is unity of purpose. In biology, the purpose is survival of the organism. The purpose of the proposed multi-agent architecture is to provide a persistent computing environment in harsh conditions in which repair is difficult or impossible. A multi-agent, organism-like computing system would be a single entity built from agents or cells. Each agent or cell would be a discrete hardware processing unit that would include a data processor with local memory, an internal clock, and a suite of communication equipment capable of both local line-of-sight communications and global broadcast communications. Some cells, denoted specialist cells, could contain such additional hardware as sensors and emitters. Each cell would be independent in the sense that there would be no global clock, no global (shared) memory, no pre-assigned cell identifiers, no pre-defined network topology, and no centralized brain or control structure. Like each cell in a living organism, each agent or cell of the computing system would contain a full description of the system encoded as genes, but in this case, the genes would be components of a software genome.
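The agent/cell concept above can be sketched in a few lines. This is an illustrative Python sketch, not the proposed system's code, and all names in it are assumptions: each agent keeps its own drifting clock and a full replicated "genome", so the surviving agents still cover every encoded role after arbitrary failures.

```python
import random

class Agent:
    """One cell of the organism-like system: independent clock, local
    memory, and a full copy of the system description (the 'genome')."""
    def __init__(self, genome):
        self.genome = dict(genome)       # full system description, replicated
        self.local_clock = 0             # no global clock exists
        self.alive = True

    def step(self, broadcasts=()):
        """Advance local time and report the roles this cell can express."""
        if not self.alive:
            return None
        self.local_clock += random.choice([1, 2])  # clocks drift independently
        # A surviving agent can express any role encoded in the genome.
        return self.genome["roles"]

def system_roles(agents):
    """Union of roles still covered after arbitrary agent failures."""
    covered = set()
    for a in agents:
        roles = a.step()
        if roles:
            covered.update(roles)
    return covered

genome = {"roles": {"sense", "compute", "transmit"}}
agents = [Agent(genome) for _ in range(8)]
for a in agents[:5]:          # most cells fail
    a.alive = False
print(system_roles(agents))   # remaining cells still cover every role
```

Because every cell carries the whole genome, no particular cell is essential; that is the sense in which the architecture has "unity of purpose" without a centralized brain.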
STAR-CCM+ Verification and Validation Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pointer, William David
2016-09-30
The commercial Computational Fluid Dynamics (CFD) code STAR-CCM+ provides general purpose finite volume method solutions for fluid dynamics and energy transport. This document defines plans for verification and validation (V&V) of the base code and models implemented within the code by the Consortium for Advanced Simulation of Light Water Reactors (CASL). The software quality assurance activities described herein are part of the overall software life cycle defined in the CASL Software Quality Assurance (SQA) Plan [Sieger, 2015]. STAR-CCM+ serves as the principal foundation for development of an advanced predictive multi-phase boiling simulation capability within CASL. The CASL Thermal Hydraulics Methods (THM) team develops advanced closure models required to describe the subgrid-resolution behavior of secondary fluids or fluid phases in multiphase boiling flows within the Eulerian-Eulerian framework of the code. These include wall heat partitioning models that describe the formation of vapor on the surface and the forces that define bubble/droplet dynamic motion. The CASL models are implemented as user coding or field functions within the general framework of the code. This report defines procedures and requirements for V&V of the multi-phase CFD capability developed by CASL THM. Results of V&V evaluations will be documented in a separate STAR-CCM+ V&V assessment report. This report is expected to be a living document and will be updated as additional validation cases are identified and adopted as part of the CASL THM V&V suite.
STRS Compliant FPGA Waveform Development
NASA Technical Reports Server (NTRS)
Nappier, Jennifer; Downey, Joseph; Mortensen, Dale
2008-01-01
The Space Telecommunications Radio System (STRS) Architecture Standard describes a standard for NASA space software defined radios (SDRs). It provides a common framework that can be used to develop and operate a space SDR in a reconfigurable and reprogrammable manner. One goal of the STRS Architecture is to promote waveform reuse among multiple software defined radios. Many space domain waveforms are designed to run in the special signal processing (SSP) hardware. However, the STRS Architecture is currently incomplete in defining a standard for designing waveforms in the SSP hardware. Therefore, the STRS Architecture needs to be extended to encompass waveform development in the SSP hardware. The extension of STRS to the SSP hardware will promote easier waveform reconfiguration and reuse. A transmit waveform for space applications was developed to determine ways to extend the STRS Architecture to a field programmable gate array (FPGA). These extensions include a standard hardware abstraction layer for FPGAs and a standard interface between waveform functions running inside an FPGA. An FPGA-based transmit waveform implementation of the proposed standard interfaces on a laboratory breadboard SDR will be discussed.
STRS Radio Service Software for NASA's SCaN Testbed
NASA Technical Reports Server (NTRS)
Mortensen, Dale J.; Bishop, Daniel Wayne; Chelmins, David T.
2012-01-01
NASA's Space Communication and Navigation (SCaN) Testbed was launched to the International Space Station in 2012. The objective is to promote new software defined radio technologies and associated software application reuse, enabled by this first flight of NASA's Space Telecommunications Radio System (STRS) architecture standard. Pre-launch testing with the testbed's software defined radios was performed as part of system integration. Radio services for the JPL SDR were developed during system integration to allow the waveform application to operate properly in the space environment, especially considering thermal effects. These services include receiver gain control, frequency offset, IQ modulator balance, and transmit level control. Development, integration, and environmental testing of the radio services will be described. The added software allows the waveform application to operate properly in the space environment, and can be reused by future experimenters testing different waveform applications. Integrating such services with the platform-provided STRS operating environment will attract more users, and these services are candidates for interface standardization via STRS.
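Of the services listed, frequency-offset compensation is the simplest to illustrate: a complex baseband signal is rotated by a commanded offset. The sketch below is illustrative Python, not the SCaN Testbed software, and all names in it are assumptions.

```python
import cmath
import math

def apply_frequency_offset(samples, offset_hz, sample_rate_hz):
    """Shift a complex baseband signal by offset_hz (e.g. a Doppler
    pre-compensation commanded through a radio service) by multiplying
    each sample with a rotating complex exponential."""
    step = 2.0 * math.pi * offset_hz / sample_rate_hz
    return [s * cmath.exp(1j * step * n) for n, s in enumerate(samples)]

# A 1 kHz complex tone sampled at 48 kHz, shifted down by its own
# frequency, collapses to DC: the averaged samples have magnitude ~1.
fs, f0 = 48000.0, 1000.0
tone = [cmath.exp(2j * math.pi * f0 * n / fs) for n in range(480)]
baseband = apply_frequency_offset(tone, -f0, fs)
print(abs(sum(baseband)) / len(baseband))  # ~1.0, i.e. tone removed
```

The same multiply-by-exponential structure underlies receiver-side Doppler tracking; gain control and IQ balance correction are likewise per-sample arithmetic applied before the waveform proper.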
TMT approach to observatory software development process
NASA Astrophysics Data System (ADS)
Buur, Hanne; Subramaniam, Annapurni; Gillies, Kim; Dumas, Christophe; Bhatia, Ravinder
2016-07-01
The purpose of the Observatory Software System (OSW) is to integrate all software and hardware components of the Thirty Meter Telescope (TMT) to enable observations and data capture; thus it is a complex software system that is defined by four principal software subsystems: Common Software (CSW), Executive Software (ESW), Data Management System (DMS) and Science Operations Support System (SOSS), all of which have interdependencies with the observatory control systems and data acquisition systems. Therefore, the software development process and plan must consider dependencies to other subsystems, manage architecture, interfaces and design, manage software scope and complexity, and standardize and optimize use of resources and tools. Additionally, the TMT Observatory Software will largely be developed in India through TMT's workshare relationship with the India TMT Coordination Centre (ITCC) and use of Indian software industry vendors, which adds complexity and challenges to the software development process, communication and coordination of activities and priorities as well as measuring performance and managing quality and risk. The software project management challenge for the TMT OSW is thus a multi-faceted technical, managerial, communications and interpersonal relations challenge. 
TMT is managing this multifaceted challenge through a combination of measures: establishing an effective, geographically distributed software team (Integrated Product Team), with strong project management and technical leadership provided by the TMT Project Office (PO) and the ITCC partner, to manage plans, process, performance, risk and quality, and to facilitate effective communications; establishing an effective cross-functional software management team composed of stakeholders, OSW leadership and ITCC leadership to manage dependencies, software release plans, technical complexities and changes to approved interfaces, architecture, design and tool set, and to facilitate effective communications; adopting an agile-based software development process across the observatory to enable frequent software releases and help mitigate subsystem interdependencies; and defining concise scope and work packages for each of the OSW subsystems to facilitate effective outsourcing of software deliverables to the ITCC partner and to enable performance monitoring and risk management. At this stage, the architecture and high-level design of the software system have been established and reviewed. During construction, each subsystem will have a final design phase with reviews, followed by implementation and testing. The results of the TMT approach to the observatory software development process will only be preliminary at the time of submittal of this paper, but it is anticipated that the early results will be a favorable indication of progress.
Multi-Domain SDN Survivability for Agricultural Wireless Sensor Networks.
Huang, Tao; Yan, Siyu; Yang, Fan; Liu, Jiang
2016-11-06
Wireless sensor networks (WSNs) have been widely applied in the agricultural field; meanwhile, the advent of multi-domain software-defined networks (SDNs) has improved the wireless resource utilization rate and strengthened network management. In recent times, multi-domain SDNs have been applied to agricultural sensor networks, yielding multi-domain software-defined wireless sensor networks (SDWSNs). However, when the SDN controllers governing agricultural networks suddenly become unavailable, whether intra-domain or inter-domain, sensor network communication breaks down because of the loss of control. Moreover, there are controller and switch info-updating problems even after the controller becomes available again. To resolve these problems, this paper proposes a new approach based on an Open vSwitch extension for multi-domain SDWSNs, which can enhance agricultural network survivability and stability. We achieved this by designing a connection-state mechanism, a communication mechanism on both L2 and L3, and an info-updating mechanism based on Open vSwitch. The experimental results show that, whether inter-domain or intra-domain, during the controller failure period the sensor switches enter failure-recovery mode quickly, so that the sensor network maintains a stable throughput, a short failure-recovery time below 300 ms, and low packet loss. Further, the controller can smoothly regain control of the domain network once it becomes available again. This approach based on an Open vSwitch extension can enhance the survivability and stability of multi-domain SDWSNs in precision agriculture.
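The connection-state mechanism can be pictured as a small state machine on each sensor switch: lose the controller heartbeat beyond a timeout and keep forwarding autonomously; regain it and resynchronise. This is an illustrative Python sketch under assumed names, not the paper's Open vSwitch extension.

```python
import time

class SensorSwitch:
    """Toy connection-state model: 'controlled' while the controller is
    heard, 'recovery' (autonomous forwarding with cached L2/L3 rules)
    after the heartbeat times out. All names are assumptions."""
    TIMEOUT = 0.3  # seconds, mirroring the ~300 ms recovery target

    def __init__(self):
        self.mode = "controlled"
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Controller reachable again: push updated switch info, resume."""
        self.last_heartbeat = time.monotonic()
        self.mode = "controlled"

    def tick(self):
        """Periodic check run by the switch's connection monitor."""
        if time.monotonic() - self.last_heartbeat > self.TIMEOUT:
            self.mode = "recovery"
        return self.mode

sw = SensorSwitch()
print(sw.tick())          # "controlled" right after start-up
sw.last_heartbeat -= 1.0  # simulate a silent controller
print(sw.tick())          # "recovery": keep forwarding with cached rules
sw.heartbeat()
print(sw.mode)            # "controlled" once the controller returns
```

The design choice the paper's results motivate is visible even in the toy: forwarding state lives on the switch, so losing the control plane degrades management, not data delivery.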
An Exploration of Software-Based GNSS Signal Processing at Multiple Frequencies
NASA Astrophysics Data System (ADS)
Pasqual Paul, Manuel; Elosegui, Pedro; Lind, Frank; Vazquez, Antonio; Pankratius, Victor
2017-01-01
The Global Navigation Satellite System (GNSS; i.e., GPS, GLONASS, Galileo, and other constellations) has recently grown into numerous areas that go far beyond the traditional scope in navigation. In the geosciences, for example, high-precision GPS has become a powerful tool for a myriad of geophysical applications such as in geodynamics, seismology, paleoclimate, cryosphere, and remote sensing of the atmosphere. Positioning with millimeter-level accuracy can be achieved through carrier-phase-based, multi-frequency signal processing, which mitigates various biases and error sources such as those arising from ionospheric effects. Today, however, most receivers with multi-frequency capabilities are highly specialized hardware receiving systems with proprietary and closed designs, limited interfaces, and significant acquisition costs. This work explores alternatives that are entirely software-based, using Software-Defined Radio (SDR) receivers as a way to digitize the entire spectrum of interest. It presents an overview of existing open-source frameworks and outlines the next steps towards converting GPS software receivers from single-frequency to dual-frequency, geodetic-quality systems. In the future, this development will lead to a more flexible multi-constellation GNSS processing architecture that can be easily reused in different contexts, as well as to further miniaturization of receivers.
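The multi-frequency processing mentioned above rests on standard linear combinations: the first-order ionospheric delay scales as 1/f², so it cancels in the ionosphere-free combination of the two bands. A minimal sketch follows (illustrative Python with assumed values, not the authors' software), using dual-frequency code pseudoranges:

```python
# GPS L1 and L2 carrier frequencies, in Hz.
F1, F2 = 1575.42e6, 1227.60e6

def ionosphere_free(range_l1_m, range_l2_m):
    """Combine L1/L2 pseudoranges (metres) so that the first-order
    ionospheric delay, which scales as 1/f**2, cancels exactly:
    IF = (f1^2*R1 - f2^2*R2) / (f1^2 - f2^2)."""
    g = (F1 / F2) ** 2
    return (g * range_l1_m - range_l2_m) / (g - 1.0)

# Geometric range of 20,000 km plus a dispersive delay I/f**2 per band:
rho, ion = 2.0e7, 1.0e17
l1 = rho + ion / F1**2
l2 = rho + ion / F2**2
print(ionosphere_free(l1, l2))  # recovers the geometric range rho
```

An SDR front end that digitizes both bands is what makes this combination available to a software receiver; the cancellation itself is plain arithmetic once both observables exist.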
Glossary of Software Engineering Laboratory terms
NASA Technical Reports Server (NTRS)
1983-01-01
A glossary of terms used in the Software Engineering Laboratory (SEL) is given. The terms are defined within the context of the software development environment for flight dynamics at the Goddard Space Flight Center. A concise reference for clarifying the language employed in SEL documents and data collection forms is given. Basic software engineering concepts are explained and standard definitions for use by SEL personnel are established.
Alternatives to current flow cytometry data analysis for clinical and research studies.
Gondhalekar, Carmen; Rajwa, Bartek; Patsekin, Valery; Ragheb, Kathy; Sturgis, Jennifer; Robinson, J Paul
2018-02-01
Flow cytometry has well-established methods for data analysis based on traditional data collection techniques. These techniques typically involved manual insertion of tube samples into an instrument that, historically, could only measure 1-3 colors. The field has since evolved to incorporate new technologies for faster and highly automated sample preparation and data collection. For example, the use of microwell plates on benchtop instruments is now standard on virtually every new instrument, so users can easily accumulate multiple data sets quickly. Further, because the user must carefully define the layout of the plate, this information is already defined when considering the analytical process, expanding the opportunities for automated analysis. Advances in multi-parametric data collection, as demonstrated by the development of hyperspectral flow cytometry, 20-40 color polychromatic flow cytometry, and mass cytometry (CyTOF), are game-changing. As data and assay complexity increase, so too does the complexity of data analysis. Complex data analysis is already a challenge to traditional flow cytometry software. New methods for reviewing large and complex data sets can provide rapid insight into processes difficult to define without more advanced analytical tools. In settings such as clinical labs, where rapid and accurate data analysis is a priority, efficient and intuitive software is needed. This paper outlines opportunities for analysis of complex data sets using examples of multiplexed bead-based assays, drug screens and cell cycle analysis - any of which could become integrated into the clinical environment. Copyright © 2017. Published by Elsevier Inc.
Heegaard, P M; Holm, A; Hagerup, M
1993-01-01
A personal computer program for the conversion of linear amino acid sequences to multiple, small, overlapping peptide sequences has been developed. Peptide lengths and "jumps" (the distance between two consecutive overlapping peptides) are defined by the user. To facilitate the use of the program for parallel solid-phase chemical peptide syntheses for the synchronous production of multiple peptides, amino acids at each acylation step are laid out by the program in a convenient standard multi-well setup. Also, the total number of equivalents, as well as the derived amount in milligrams (depending on user-defined equivalent weights and molar surplus), of each amino acid are given. The program facilitates the implementation of multipeptide synthesis, e.g., for the elucidation of polypeptide structure-function relationships, and greatly reduces the risk of introducing mistakes at the planning step. It is written in Pascal and runs on any DOS-based personal computer. No special graphic display is needed.
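The core transformation the program performs is easy to state. A minimal Python sketch under assumed names follows; the published Pascal program additionally tallies equivalents and milligram amounts per amino acid, which is omitted here.

```python
def overlapping_peptides(sequence, length, jump):
    """Split a linear amino-acid sequence into overlapping peptides of a
    user-defined length, starting a new peptide every `jump` residues."""
    return [sequence[i:i + length]
            for i in range(0, len(sequence) - length + 1, jump)]

def acylation_layout(peptides):
    """Amino acids needed at each coupling step (C- to N-terminus): one
    row per synthesis cycle, one column per well/peptide, so each row is
    the multi-well layout for that acylation step."""
    length = len(peptides[0])
    return [[p[length - 1 - step] for p in peptides]
            for step in range(length)]

peps = overlapping_peptides("MKTAYIAKQR", length=6, jump=2)
print(peps)                       # ['MKTAYI', 'TAYIAK', 'YIAKQR']
print(acylation_layout(peps)[0])  # first coupling step: ['I', 'K', 'R']
```

Laying the steps out row by row is exactly what removes the planning-stage transcription errors the abstract mentions: the chemist reads one row per cycle instead of re-deriving positions by hand.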
ESML for Earth Science Data Sets and Analysis
NASA Technical Reports Server (NTRS)
Graves, Sara; Ramachandran, Rahul
2003-01-01
The primary objective of this research project was to transition ESML from design to application. The resulting schema and prototype software will foster community acceptance for the "Define once, use anywhere" concept central to ESML. Supporting goals include: 1) Refinement of the ESML schema and software libraries in cooperation with the user community; 2) Application of the ESML schema and software to a variety of Earth science data sets and analysis tools; 3) Development of supporting prototype software for enhanced ease of use; 4) Cooperation with standards bodies in order to assure ESML is aligned with related metadata standards as appropriate; and 5) Widespread publication of the ESML approach, schema, and software.
Earth Science Markup Language: Transitioning From Design to Application
NASA Technical Reports Server (NTRS)
Moe, Karen; Graves, Sara; Ramachandran, Rahul
2002-01-01
The primary objective of the proposed Earth Science Markup Language (ESML) research is to transition from design to application. The resulting schema and prototype software will foster community acceptance for the "define once, use anywhere" concept central to ESML. Supporting goals include: 1. Refinement of the ESML schema and software libraries in cooperation with the user community. 2. Application of the ESML schema and software libraries to a variety of Earth science data sets and analysis tools. 3. Development of supporting prototype software for enhanced ease of use. 4. Cooperation with standards bodies in order to assure ESML is aligned with related metadata standards as appropriate. 5. Widespread publication of the ESML approach, schema, and software.
A New Framework for Software Visualization: A Multi-Layer Approach
2006-09-01
…primary target is an exploration of the current state of the area so that we can discover the challenges and propose solutions for them. The study … defines both areas of study to collectively be a part of Software Visualization. … 'Visual Programming' (VP) refers to … a founded taxonomy, with the proper characteristics, can further investigation in any field of study. A common language or terminology and the existence of…
NASA Technical Reports Server (NTRS)
Briones, Janette C.; Handler, Louis M.; Hall, Steve C.; Reinhart, Richard C.; Kacpura, Thomas J.
2009-01-01
The Space Telecommunication Radio System (STRS) standard is a Software Defined Radio (SDR) architecture standard developed by NASA. The goal of STRS is to reduce NASA's dependence on custom, proprietary architectures with unique and varying interfaces and hardware and support reuse of waveforms across platforms. The STRS project worked with members of the Object Management Group (OMG), Software Defined Radio Forum, and industry partners to leverage existing standards and knowledge. This collaboration included investigating the use of the OMG's Platform-Independent Model (PIM) SWRadio as the basis for an STRS PIM. This paper details the influence of the OMG technologies on the STRS update effort, findings in the STRS/SWRadio mapping, and provides a summary of the SDR Forum recommendations.
2014-01-01
Background According to the latest amendment of the Medical Device Directive standalone software qualifies as a medical device when intended by the manufacturer to be used for medical purposes. In this context, the EN 62304 standard is applicable which defines the life-cycle requirements for the development and maintenance of medical device software. A pilot project was launched to acquire skills in implementing this standard in a hospital-based environment (in-house manufacture). Methods The EN 62304 standard outlines minimum requirements for each stage of the software life-cycle, defines the activities and tasks to be performed and scales documentation and testing according to its criticality. The required processes were established for the pre-existent decision-support software FlashDumpComparator (FDC) used during the quality assurance of treatment-relevant beam parameters. As the EN 62304 standard implicates compliance with the EN ISO 14971 standard on the application of risk management to medical devices, a risk analysis was carried out to identify potential hazards and reduce the associated risks to acceptable levels. Results The EN 62304 standard is difficult to implement without proper tools, thus open-source software was selected and integrated into a dedicated development platform. The control measures yielded by the risk analysis were independently implemented and verified, and a script-based test automation was retrofitted to reduce the associated test effort. After all documents facilitating the traceability of the specified requirements to the corresponding tests and of the control measures to the proof of execution were generated, the FDC was released as an accessory to the HIT facility. 
Conclusions The implementation of the EN 62304 standard was time-consuming, and a learning curve had to be overcome during the first iterations of the associated processes, but many process descriptions and all software tools can be re-utilized in follow-up projects. It has been demonstrated that a standards-compliant development of small and medium-sized medical software can be carried out by a small team with limited resources in a clinical setting. This is of particular relevance as the upcoming revision of the Medical Device Directive is expected to harmonize and tighten the current legal requirements for all European in-house manufacturers. PMID:24655818
Höss, Angelika; Lampe, Christian; Panse, Ralf; Ackermann, Benjamin; Naumann, Jakob; Jäkel, Oliver
2014-03-21
According to the latest amendment of the Medical Device Directive standalone software qualifies as a medical device when intended by the manufacturer to be used for medical purposes. In this context, the EN 62304 standard is applicable which defines the life-cycle requirements for the development and maintenance of medical device software. A pilot project was launched to acquire skills in implementing this standard in a hospital-based environment (in-house manufacture). The EN 62304 standard outlines minimum requirements for each stage of the software life-cycle, defines the activities and tasks to be performed and scales documentation and testing according to its criticality. The required processes were established for the pre-existent decision-support software FlashDumpComparator (FDC) used during the quality assurance of treatment-relevant beam parameters. As the EN 62304 standard implicates compliance with the EN ISO 14971 standard on the application of risk management to medical devices, a risk analysis was carried out to identify potential hazards and reduce the associated risks to acceptable levels. The EN 62304 standard is difficult to implement without proper tools, thus open-source software was selected and integrated into a dedicated development platform. The control measures yielded by the risk analysis were independently implemented and verified, and a script-based test automation was retrofitted to reduce the associated test effort. After all documents facilitating the traceability of the specified requirements to the corresponding tests and of the control measures to the proof of execution were generated, the FDC was released as an accessory to the HIT facility. The implementation of the EN 62304 standard was time-consuming, and a learning curve had to be overcome during the first iterations of the associated processes, but many process descriptions and all software tools can be re-utilized in follow-up projects. 
It has been demonstrated that a standards-compliant development of small and medium-sized medical software can be carried out by a small team with limited resources in a clinical setting. This is of particular relevance as the upcoming revision of the Medical Device Directive is expected to harmonize and tighten the current legal requirements for all European in-house manufacturers.
Analyzing Spacecraft Telecommunication Systems
NASA Technical Reports Server (NTRS)
Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric
2004-01-01
Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use and can easily be integrated with other software applications and run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.
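The kind of link calculation such a tool parameterizes can be sketched in a few lines. This is illustrative Python with assumed example values; MMTAT's actual computational models are far richer than this basic link-budget arithmetic.

```python
import math

def free_space_path_loss_db(distance_m, freq_hz):
    """Free-space path loss: 20*log10(4*pi*d/lambda)."""
    lam = 299792458.0 / freq_hz
    return 20.0 * math.log10(4.0 * math.pi * distance_m / lam)

def received_power_dbw(eirp_dbw, path_loss_db, rx_gain_dbi, misc_loss_db=0.0):
    """Basic link-budget arithmetic: EIRP minus losses plus receive gain,
    all in decibel units so the budget is a simple sum."""
    return eirp_dbw - path_loss_db + rx_gain_dbi - misc_loss_db

# Assumed example: an X-band (8.4 GHz) deep-space downlink over 2e11 m
# with 60 dBW EIRP into a 70 dBi ground antenna.
fspl = free_space_path_loss_db(2.0e11, 8.4e9)
print(round(fspl, 1))                                  # ~277 dB of path loss
print(round(received_power_dbw(60.0, fspl, 70.0), 1))  # ~-147 dBW received
```

Sweeping parameters such as distance or frequency through functions like these and plotting the result is precisely the kind of textual/graphical output loop the abstract describes.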
Software-defined microwave photonic filter with high reconfigurable resolution
Wei, Wei; Yi, Lilin; Jaouën, Yves; Hu, Weisheng
2016-01-01
Microwave photonic filters (MPFs) are of great interest in radio frequency systems since they provide prominent flexibility on microwave signal processing. Although filter reconfigurability and tunability have been demonstrated repeatedly, it is still difficult to control the filter shape with very high precision. Thus the MPF application is basically limited to signal selection. Here we present a polarization-insensitive single-passband arbitrary-shaped MPF with ~GHz bandwidth based on stimulated Brillouin scattering (SBS) in optical fibre. For the first time the filter shape, bandwidth and central frequency can all be precisely defined by software with ~MHz resolution. The unprecedented multi-dimensional filter flexibility offers new possibilities to process microwave signals directly in the optical domain with high precision, thus enhancing the MPF functionality. Nanosecond pulse shaping by implementing precisely defined filters is demonstrated to prove the filter superiority and practicability. PMID:27759062
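The software-defined shaping rests on a standard property of SBS: to first order, the filter's gain spectrum is the pump power spectrum convolved with the intrinsic Lorentzian Brillouin gain profile (linewidth of order tens of MHz), so defining the pump spectrum in software defines the filter. The following is an illustrative Python sketch with assumed values, not the authors' implementation:

```python
def lorentzian(f_hz, linewidth_hz):
    """Intrinsic Brillouin gain profile (unit peak), FWHM = linewidth."""
    half = linewidth_hz / 2.0
    return half ** 2 / (f_hz ** 2 + half ** 2)

def sbs_filter_response(pump_psd, grid_hz, linewidth_hz=30e6):
    """First-order SBS filter shape: the pump power spectrum convolved
    with the Lorentzian gain profile, on a uniform frequency grid."""
    n = len(pump_psd)
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(n):
            acc += pump_psd[j] * lorentzian((i - j) * grid_hz, linewidth_hz)
        out.append(acc)
    return out

# A flat-top pump comb spanning 1 GHz on a 10 MHz grid yields a
# ~1 GHz near-rectangular passband with Lorentzian-softened edges.
grid_hz = 10e6
pump = [1.0 if 50 <= k < 150 else 0.0 for k in range(200)]
shape = sbs_filter_response(pump, grid_hz)
print(shape[100] / max(shape))  # near 1: inside the passband
print(shape[0] / max(shape))    # small: well into the stopband
```

In practice the pump comb is generated by modulating the pump laser with a software-synthesized waveform, which is what gives the ~MHz shape resolution while the Brillouin linewidth sets the edge sharpness.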
NASA Technical Reports Server (NTRS)
Over, Ann P.; Barrett, Michael J.; Reinhart, Richard C.; Free, James M.; Cikanek, Harry A., III
2011-01-01
The Communication Navigation and Networking Reconfigurable Testbed (CoNNeCT) is a NASA-sponsored mission, which will investigate the usage of Software Defined Radios (SDRs) as a multi-function communication system for space missions. A software-defined radio system is a communication system in which typical components of the system (e.g., modulators) are incorporated into software. The software-defined capability allows flexibility and experimentation in different modulation, coding and other parameters to understand their effects on performance. This flexibility builds inherent redundancy into the system for improved operational efficiency, real-time changes to space missions and enhanced reliability/redundancy. The CoNNeCT Project is a collaboration between industrial radio providers and NASA. The industrial radio providers are providing the SDRs and NASA is designing, building and testing the entire flight system. The flight system will be integrated on the Express Logistics Carrier (ELC) on the International Space Station (ISS) after launch on the H-IIB Transfer Vehicle in 2012. This paper provides an overview of the technology research objectives, payload description, design challenges and pre-flight testing results.
Geometric registration of remotely sensed data with SAMIR
NASA Astrophysics Data System (ADS)
Gianinetto, Marco; Barazzetti, Luigi; Dini, Luigi; Fusiello, Andrea; Toldo, Roberto
2015-06-01
The commercial market offers several software packages for the registration of remotely sensed data through standard one-to-one image matching. Although very rapid and simple, this strategy does not take into consideration all the interconnections among the images of a multi-temporal data set. This paper presents a new scientific software package, called Satellite Automatic Multi-Image Registration (SAMIR), that extends the traditional registration approach to multi-image global processing. Tests carried out with high-resolution optical (IKONOS) and high-resolution radar (COSMO-SkyMed) data showed that SAMIR can improve the registration phase with a more rigorous and robust workflow, without initial approximations, user interaction or limitations in spatial/spectral data size. The validation highlighted sub-pixel accuracy in image co-registration for the considered imaging technologies, including optical and radar imagery.
libdrdc: software standards library
NASA Astrophysics Data System (ADS)
Erickson, David; Peng, Tie
2008-04-01
This paper presents the libdrdc software standards library, including internal nomenclature, definitions, units of measure, coordinate reference frames, and representations for use in autonomous systems research. This library is a configurable, portable C-function-wrapped C++ / Object-Oriented C library developed to be independent of software middleware, system architecture, processor, or operating system. It is designed to use the Automatically Tuned Linear Algebra Software (ATLAS) and Basic Linear Algebra Subprograms (BLAS) and to port to both firmware and software. The library's goal is to unify data collection and representation across various microcontrollers and Central Processing Unit (CPU) cores and to provide a common Application Binary Interface (ABI) for research projects at all scales. The library supports multi-platform development and currently works on Windows, Unix, GNU/Linux, and the Real-Time Executive for Multiprocessor Systems (RTEMS). It is made available under the LGPL version 2.1 license.
Bruland, Philipp; Dugas, Martin
2017-01-07
Data capture for clinical registries or pilot studies is often performed in spreadsheet-based applications like Microsoft Excel or IBM SPSS, and data is usually transferred into statistics software such as SAS, R or IBM SPSS Statistics for analysis afterwards. Spreadsheet-based solutions suffer from several drawbacks: sufficient rights and role management generally cannot be ensured, and it is not traced who changed data, when, and why. Therefore, such systems cannot comply with regulatory requirements for electronic data capture in clinical trials. In contrast, Electronic Data Capture (EDC) software enables reliable, secure and auditable collection of data. In this regard, most EDC vendors support the CDISC ODM standard to define, communicate and archive clinical trial meta- and patient data. Advantages of EDC systems include support for multi-user and multicenter clinical trials as well as auditable data. At present, however, migration from spreadsheet-based data collection to EDC systems is labor-intensive and time-consuming. Hence, the objectives of this work are to develop a mapping model, implement a converter between IBM SPSS and the CDISC ODM standard, and evaluate this approach for syntactic and semantic correctness. A mapping model between IBM SPSS and CDISC ODM data structures was developed. SPSS variables and patient values can be mapped and converted into ODM. Statistical and display attributes from SPSS do not correspond to any ODM elements, and study-related ODM elements are not available in SPSS. The S2O converting tool was implemented as a command-line tool using the SPSS internal Java plugin. Syntactic and semantic correctness was validated with different ODM tools and by reverse transformation from ODM into SPSS format. Clinical data values were also successfully transformed into the ODM structure. Transformation between the spreadsheet format IBM SPSS and the ODM standard for definition and exchange of trial data is therefore feasible.
S2O facilitates migration from Excel- or SPSS-based data collection towards reliable EDC systems. Advantages of EDC systems, such as a reliable software architecture for secure and traceable data collection and, in particular, compliance with regulatory requirements, thereby become achievable.
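The variable-level mapping that S2O performs can be illustrated with a minimal sketch. The ODM element names (ItemDef, CodeListRef) follow CDISC ODM conventions, but the variable record, the type table, and the function below are hypothetical illustrations, not S2O's actual implementation:

```python
import xml.etree.ElementTree as ET

# Illustrative mapping from SPSS format codes to ODM data types.
SPSS_TO_ODM_TYPE = {"F": "integer", "A": "text", "DATE": "date"}

def spss_variable_to_itemdef(var):
    """Convert one SPSS variable description into an ODM ItemDef element."""
    item = ET.Element("ItemDef", {
        "OID": "IT." + var["name"],
        "Name": var["label"],
        "DataType": SPSS_TO_ODM_TYPE.get(var["format"], "text"),
    })
    # SPSS value labels map naturally onto an ODM CodeList reference.
    if var.get("value_labels"):
        ET.SubElement(item, "CodeListRef",
                      {"CodeListOID": "CL." + var["name"]})
    return item

# A hypothetical SPSS variable with value labels attached.
var = {"name": "SEX", "label": "Sex of patient", "format": "F",
       "value_labels": {1: "male", 2: "female"}}
itemdef = spss_variable_to_itemdef(var)
```

Attributes that have no ODM counterpart (e.g., SPSS display width) would simply be dropped, which is the asymmetry the abstract notes.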
7 CFR 1753.6 - Standards, specifications, and general requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE TELECOMMUNICATIONS SYSTEM CONSTRUCTION POLICIES AND PROCEDURES... subject to the “Buy American” provision (7 U.S.C. 901 et seq. as amended in 1938). (e) All software, software systems, and firmware financed with loan funds must be year 2000 compliant, as defined in 7 CFR...
2011-10-31
designs with code division multiple access (CDMA). Analog chirp filters were used to produce an up-chirp, which is used as a radar waveform, coupled with...signals. A potential shortcoming of CDMA techniques is that the addition of two signals will result in a non-constant-amplitude signal which will be...of low-frequency A/Ds. As an example, for a multiple-carrier signal all the received signals from the multiple carriers are aliased onto the
Results of a Multi-Institutional Benchmark Test for Cranial CT/MR Image Registration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulin, Kenneth; Urie, Marcia M., E-mail: murie@qarc.or; Cherlow, Joel M.
2010-08-01
Purpose: Variability in computed tomography/magnetic resonance imaging (CT/MR) cranial image registration was assessed using a benchmark case developed by the Quality Assurance Review Center to credential institutions for participation in Children's Oncology Group Protocol ACNS0221 for treatment of pediatric low-grade glioma. Methods and Materials: Two DICOM image sets, an MR and a CT of the same patient, were provided to each institution. A small target in the posterior occipital lobe was readily visible on two slices of the MR scan and not visible on the CT scan. Each institution registered the two scans using whatever software system and method it ordinarily uses for such a case. The target volume was then contoured on the two MR slices, and the coordinates of the center of the corresponding target in the CT coordinate system were reported. The average of all submissions was used to determine the true center of the target. Results: Results are reported from 51 submissions representing 45 institutions and 11 software systems. The average error in the position of the center of the target was 1.8 mm (1 standard deviation = 2.2 mm). The least variation in position was in the lateral direction. Manual registration gave significantly better results than did automatic registration (p = 0.02). Conclusion: When MR and CT scans of the head are registered with currently available software, there is inherent uncertainty of approximately 2 mm (1 standard deviation), which should be considered when defining planning target volumes and PRVs for organs at risk on registered image sets.
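The benchmark's error statistic can be reproduced with a short sketch: the mean of all submitted target centers is taken as the true center, and each submission's error is its Euclidean distance from that consensus point. The coordinates below are invented for illustration:

```python
import math

# Hypothetical submitted target centers (x, y, z in mm, CT coordinates);
# the benchmark used the mean of all submissions as the true center.
submissions = [(10.2, -4.1, 33.0), (11.0, -3.8, 34.1),
               (9.7, -4.5, 32.6), (10.5, -4.0, 33.5)]

n = len(submissions)
true_center = tuple(sum(p[i] for p in submissions) / n for i in range(3))

# Each submission's error is its 3D distance from the consensus center.
errors = [math.dist(p, true_center) for p in submissions]
mean_error = sum(errors) / n
# Sample standard deviation of the error magnitudes.
std_error = math.sqrt(sum((e - mean_error) ** 2 for e in errors) / (n - 1))
```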
PyEvolve: a toolkit for statistical modelling of molecular evolution.
Butterfield, Andrew; Vedagiri, Vivek; Lang, Edward; Lawrence, Cath; Wakefield, Matthew J; Isaev, Alexander; Huttley, Gavin A
2004-01-05
Examining the distribution of variation has proven an extremely profitable technique in the effort to identify sequences of biological significance. Most approaches in the field, however, evaluate only the conserved portions of sequences, ignoring the biological significance of sequence differences. A suite of sophisticated likelihood-based statistical models from the field of molecular evolution provides the basis for extracting the information from the full distribution of sequence variation. The number of different problems to which phylogeny-based maximum likelihood calculations can be applied is extensive. Available software packages that can perform likelihood calculations suffer from a lack of flexibility and scalability, or employ error-prone approaches to model parameterisation. Here we describe the implementation of PyEvolve, a toolkit for the application of existing, and development of new, statistical methods for molecular evolution. We present the object architecture and design schema of PyEvolve, which includes an adaptable multi-level parallelisation schema. The approach for defining new methods is illustrated by implementing a novel dinucleotide model of substitution that includes a parameter for mutation of methylated CpGs, which required 8 lines of standard Python code to define. Benchmarking was performed using either a dinucleotide or codon substitution model applied to an alignment of BRCA1 sequences from 20 mammals, or a 10-species subset. Up to five-fold parallel performance gains over serial execution were recorded. Compared to leading alternative software, PyEvolve exhibited significantly better real-world performance for parameter-rich models with a large data set, reducing the time required for optimisation from approximately 10 days to approximately 6 hours. PyEvolve provides flexible functionality that can be used either for statistical modelling of molecular evolution, or the development of new methods in the field.
The toolkit can be used interactively or by writing and executing scripts. The toolkit uses efficient processes for specifying the parameterisation of statistical models, and implements numerous optimisations that make highly parameter-rich likelihood functions solvable within hours on multi-CPU hardware. PyEvolve can be readily adapted in response to changing computational demands and hardware configurations to maximise performance. PyEvolve is released under the GPL and can be downloaded from http://cbis.anu.edu.au/software.
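As a minimal illustration of the phylogeny-based likelihood calculations such toolkits perform (this is not PyEvolve's actual API), the closed-form Jukes-Cantor substitution model gives per-site likelihoods for a pairwise alignment:

```python
import math

def jc69_transition_prob(t, same):
    """Jukes-Cantor (1969) substitution probability after branch length t:
    P(same base) = 1/4 + 3/4 * exp(-4t/3)
    P(different)  = 1/4 - 1/4 * exp(-4t/3)"""
    e = math.exp(-4.0 * t / 3.0)
    return 0.25 + 0.75 * e if same else 0.25 - 0.25 * e

def site_likelihood(x, y, t):
    """Likelihood of observing bases x and y at one aligned site for two
    sequences separated by branch length t, with uniform base frequencies."""
    return 0.25 * jc69_transition_prob(t, x == y)

# Likelihood of a short pairwise alignment: product over independent sites.
seq1, seq2, t = "ACGT", "ACGA", 0.1
L = 1.0
for a, b in zip(seq1, seq2):
    L *= site_likelihood(a, b, t)
```

Maximum-likelihood inference then optimises t (and, in richer models such as the dinucleotide CpG model described above, many more parameters) to maximise L.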
Yang, Hui; Zhang, Jie; Zhao, Yongli; Ji, Yuefeng; Wu, Jialin; Lin, Yi; Han, Jianrui; Lee, Young
2015-05-18
Inter-data center interconnect with IP over elastic optical network (EON) is a promising scenario for meeting the high-burstiness and high-bandwidth requirements of data center services. In our previous work, we implemented multi-stratum resource integration among IP networks, optical networks and application-stratum resources to accommodate data center services. This study extends that work to consider service resilience in case of edge optical node failure. We propose a novel multi-stratum resources integrated resilience (MSRIR) architecture for services in software-defined inter-data center interconnects based on IP over EON, and introduce a global resources integrated resilience (GRIR) algorithm based on the proposed architecture. MSRIR enables cross-stratum optimization, provides resilience using the resources of multiple stratums, and enhances the responsiveness of data center service resilience to dynamic end-to-end service demands. The overall feasibility and efficiency of the proposed architecture is experimentally verified on the control plane of our OpenFlow-based enhanced SDN (eSDN) testbed. The performance of the GRIR algorithm under a heavy traffic load is also quantitatively evaluated on the MSRIR architecture in terms of path blocking probability, resilience latency and resource utilization, compared with other resilience algorithms.
Channegowda, M; Nejabati, R; Rashidi Fard, M; Peng, S; Amaya, N; Zervas, G; Simeonidou, D; Vilalta, R; Casellas, R; Martínez, R; Muñoz, R; Liu, L; Tsuritani, T; Morita, I; Autenrieth, A; Elbers, J P; Kostecki, P; Kaczmarek, P
2013-03-11
Software defined networking (SDN) and flexible-grid optical transport technology are two key technologies that allow network operators to customize their infrastructure based on application requirements, thereby minimizing the extra capital and operational costs required for hosting new applications. In this paper, for the first time, we report on the design, implementation and demonstration of a novel OpenFlow-based SDN unified control plane allowing seamless operation across heterogeneous state-of-the-art optical and packet transport domains. We verify and experimentally evaluate OpenFlow protocol extensions for flexible DWDM grid transport technology, along with their integration with fixed DWDM grid and layer-2 packet switching.
Structure and software tools of AIDA.
Duisterhout, J S; Franken, B; Witte, F
1987-01-01
AIDA consists of a set of software tools that allow for the fast development of easy-to-maintain Medical Information Systems. AIDA supports all aspects of such a system both during development and operation. It contains tools to build and maintain forms for interactive data entry and on-line input validation, a database management system including a data dictionary and a set of run-time routines for database access, and routines for querying the database and output formatting. Unlike an application generator, the user of AIDA may select parts of the tools to fulfill their needs and program other subsystems not developed with AIDA. The AIDA software uses as host language the ANSI-standard programming language MUMPS, an interpreted language embedded in an integrated database and programming environment. This greatly facilitates the portability of AIDA applications. The database facilities supported by AIDA are based on a relational data model. This data model is built on top of the MUMPS database, the so-called global structure, and overcomes the restrictions of the global structure regarding string length. The global structure is especially powerful for sorting purposes. Using MUMPS as a host language gives the user an easy interface between user-defined data validation checks or other user-defined code and the AIDA tools. AIDA has been designed primarily for prototyping and for the construction of Medical Information Systems in a research environment, which requires a flexible approach. The prototyping facility of AIDA operates terminal-independently and is even, to a great extent, multilingual. Most of these features are table-driven; this allows on-line changes of terminal type and language, but also causes overhead. AIDA provides a set of optimizing tools by which it is possible to build faster, but (of course) less flexible, code from these table definitions.
By separating the AIDA software in a source and a run-time version, one is able to write implementation-specific code which can be selected and loaded by a special source loader, being part of the AIDA software. This feature is also accessible for maintaining software on different sites and on different installations.
Criteria for software modularization
NASA Technical Reports Server (NTRS)
Card, David N.; Page, Gerald T.; Mcgarry, Frank E.
1985-01-01
A central issue in programming practice involves determining the appropriate size and information content of a software module. This study attempted to determine the effectiveness of two widely used criteria for software modularization, strength and size, in reducing fault rate and development cost. Data from 453 FORTRAN modules developed by professional programmers were analyzed. The results indicated that module strength is a good criterion with respect to fault rate, whereas arbitrary module size limitations inhibit programmer productivity. This analysis is a first step toward defining empirically based standards for software modularization.
36 CFR 1235.50 - What specifications and standards for transfer apply to electronic records?
Code of Federal Regulations, 2012 CFR
2012-07-01
... electronic records in a format that is independent of specific hardware or software. Except as specified in... a request from NARA to provide the software to decompress the records. (3) Agencies interested in... organization. Acceptable transfer formats include the Geography Markup Language (GML) as defined by the Open...
Multi-detector CT features of acute intestinal ischemia and their prognostic correlations.
Moschetta, Marco; Telegrafo, Michele; Rella, Leonarda; Stabile Ianora, Amato Antonio; Angelelli, Giuseppe
2014-05-28
Acute intestinal ischemia is an abdominal emergency occurring in nearly 1% of patients presenting with acute abdomen. The causes can be occlusive or non-occlusive. Early diagnosis is important to improve survival rates. In most cases of late or missed diagnosis, the mortality rate from intestinal infarction is very high, with reported values ranging from 60% to 90%. Multi-detector computed tomography (MDCT) is a fundamental imaging technique that must be promptly performed in all patients with suspected bowel ischemia. Thanks to new dedicated reconstruction programs, its diagnostic potential is much improved compared to the past and is currently superior to that of any other noninvasive technique. The increased spatial and temporal resolution, high-quality multi-planar reconstructions, maximum intensity projections, vessel probe, surface-shaded volume rendering and tissue-transition projections make MDCT the gold standard for the diagnosis of intestinal ischemia, with reported sensitivity, specificity, positive and negative predictive values of 64%-93%, 92%-100%, 90%-100% and 94%-98%, respectively. MDCT contributes to appropriate treatment planning and provides important prognostic information thanks to its ability to define the nature and extent of the disease. The purpose of this review is to examine the diagnostic and prognostic role of MDCT in bowel ischemia, with special regard to state-of-the-art reconstruction software.
Multi-User Space Link Extension (SLE) System
NASA Technical Reports Server (NTRS)
Perkins, Toby
2013-01-01
The Multi-User Space (MUS) Link Extension system, a software and data system, provides Space Link Extension (SLE) users with three space data transfer services in timely, complete, and offline modes as applicable according to standards defined by the Consultative Committee for Space Data Systems (CCSDS). MUS radically reduces the schedule, cost, and risk of implementing a new SLE user system, minimizes operating costs with a lights-out approach to SLE, and is designed to require no sustaining engineering expense during its lifetime unless changes in the CCSDS SLE standards, combined with new provider implementations, force changes. No software modification to MUS needs to be made to support a new mission. Any systems engineer with Linux experience can begin testing SLE user service instances with MUS starting from a personal computer (PC) within five days. For flight operators, MUS provides a familiar-looking Web page for entering SLE configuration data received from SLE. Operators can also use the Web page to back up a space mission's entire set of up to approximately 500 SLE service instances in less than five seconds, or to restore or transfer from another system the same amount of data from a MUS backup file in about the same amount of time. Missions operate each MUS SLE service instance independently by sending it MUS directives, which are legible, plain ASCII strings. MUS directives are usually (but not necessarily) sent through a TCP-IP (Transmission Control Protocol Internet Protocol) socket from a MOC (Mission Operations Center) or POCC (Payload Operations Control Center) system, under scripted control, during "lights-out" spacecraft operation. MUS permits the flight operations team to configure independently each of its data interfaces; not only commands and telemetry, but also MUS status messages to the MOC. 
Interfaces can use single- or multiple-client TCP/IP server sockets, TCP/IP client sockets, temporary disk files, the system log, or standard input, standard output, or standard error as applicable. By defining MUS templates in ASCII, the flight operations team can include any MUS system variable in telemetry or command headers or footers, and/or in status messages. Data fields can be arranged within messages in different sequences, according to the mission's needs. The only constraints imposed are on the format of MUS directive strings, and some bare-minimum logical requirements that must be met in order for MUS to read the mission control center's spacecraft command inputs. The MUS system imposes no limits or constraints on the numbers and combinations of missions and SLE service instances that it will support simultaneously. At any time, flight operators may add, change, delete, bind, connect, or disconnect.
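The directive interface described above, legible plain-ASCII strings sent over a TCP/IP socket, can be sketched as follows. The directive text, acknowledgement format, and mock server are hypothetical stand-ins, not the real MUS protocol:

```python
import socket
import threading

ready = threading.Event()
state = {}

def mock_mus_server():
    """Stand-in for a MUS service instance: reads one ASCII directive
    and acknowledges it. Real MUS directive syntax is not reproduced here."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 0))          # OS picks a free port
        srv.listen(1)
        state["port"] = srv.getsockname()[1]
        ready.set()
        conn, _ = srv.accept()
        with conn:
            directive = conn.recv(1024).decode("ascii").strip()
            conn.sendall(("ACK " + directive + "\n").encode("ascii"))

server = threading.Thread(target=mock_mus_server, daemon=True)
server.start()
ready.wait()

# A MOC-side script sends a plain-ASCII directive (hypothetical syntax):
with socket.create_connection(("127.0.0.1", state["port"])) as sock:
    sock.sendall(b"BIND SERVICE-INSTANCE-1\n")
    reply = sock.recv(1024).decode("ascii").strip()
server.join()
```

Because the directives are plain text over a standard socket, they can equally be driven by scripted control during lights-out operation, as the abstract describes.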
Information Metacatalog for a Grid
NASA Technical Reports Server (NTRS)
Kolano, Paul
2007-01-01
SWIM is a Software Information Metacatalog that gathers detailed information about the software components and packages installed on a grid resource. Information is currently gathered for Executable and Linking Format (ELF) executables and shared libraries, Java classes, shell scripts, and Perl and Python modules. SWIM is built on top of the POUR framework, which is described in the preceding article. SWIM consists of a set of Perl modules for extracting software information from a system, an XML schema defining the format of data that can be added by users, and a POUR XML configuration file that describes how these elements are used to generate periodic, on-demand, and user-specified information. Periodic software information is derived mainly from the package managers used on each system. SWIM collects information from native package managers in FreeBSD, Solaris, and IRIX as well as the RPM, Perl, and Python package managers on multiple platforms. Because not all software is available, or installed, in package form, SWIM also crawls the set of relevant paths from the File System Hierarchy Standard, which defines the standard file system structure used by all major UNIX distributions. Using these two techniques, the vast majority of software installed on a system can be located. SWIM computes the same information gathered by the periodic routines for specific files on specific hosts, and locates software on a system given only its name and type.
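The path-crawling technique can be sketched as follows: walk a set of FHS-style directories and classify files by their leading bytes (ELF magic number, shebang line). The classification rules below are a simplified illustration, not SWIM's actual Perl modules:

```python
import os
import tempfile

ELF_MAGIC = b"\x7fELF"

def classify(path):
    """Coarse software type for a file: ELF binaries by magic number,
    scripts by shebang line; None for anything else."""
    try:
        with open(path, "rb") as f:
            head = f.read(4)
    except OSError:
        return None
    if head == ELF_MAGIC:
        return "elf"
    if head[:2] == b"#!":
        return "script"
    return None

def crawl(paths):
    """Walk a set of FHS-style directories and index software by type."""
    index = {}
    for root in paths:
        for dirpath, _, files in os.walk(root):
            for name in files:
                full = os.path.join(dirpath, name)
                kind = classify(full)
                if kind:
                    index.setdefault(kind, []).append(full)
    return index

# Demo on a temporary directory standing in for a path like /usr/bin:
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "fake_elf"), "wb") as f:
    f.write(ELF_MAGIC + b"\x00" * 12)
with open(os.path.join(demo, "tool.sh"), "w") as f:
    f.write("#!/bin/sh\necho ok\n")
index = crawl([demo])
```

Combining such a crawl with package-manager queries covers both packaged and unpackaged software, which is the two-technique approach the abstract describes.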
NASA Astrophysics Data System (ADS)
Herbrechtsmeier, Stefan; Witkowski, Ulf; Rückert, Ulrich
Mobile robots are becoming more and more important in research and education. Small 'on the table' experiments attract particular interest because they need no additional or special laboratory equipment. In this context, platforms are desirable that are small, simple to access and relatively easy to program. An additional powerful information-processing unit is advantageous to simplify the implementation of algorithms and the porting of software from desktop computers to the robot platform. In this paper we present a new versatile miniature robot that is ideally suited for research and education. The small size of the robot (about 9 cm edge length), its robust drive and its modular structure make it a general-purpose device for single- and multi-robot experiments executed 'on the table'. For programming and evaluation, the robot can be connected wirelessly via Bluetooth or WiFi. The operating system of the robot is based on the standard Linux kernel and the GNU C standard library, and a Player/Stage model eases software development and testing.
Multi-layer service function chaining scheduling based on auxiliary graph in IP over optical network
NASA Astrophysics Data System (ADS)
Li, Yixuan; Li, Hui; Liu, Yuze; Ji, Yuefeng
2017-10-01
Software Defined Optical Network (SDON) can be considered an extension of Software Defined Network (SDN) into optical networks. SDON offers a unified control plane and makes the optical network an intelligent transport network with dynamic flexibility and service adaptability. For this reason, a comprehensive optical transmission service, able to achieve service differentiation all the way down to the optical transport layer, can be provided to service function chaining (SFC). IP over optical network, a promising networking architecture for interconnecting data centers, is one of the most widely used scenarios for SFC. In this paper, we offer a flexible and dynamic resource allocation method for diverse SFC service requests in the IP over optical network. To do so, we first propose the concept of optical service function (OSF) and a multi-layer SFC model. OSF represents the comprehensive optical transmission service (e.g., multicast, low latency, quality of service, etc.), which can be achieved in the multi-layer SFC model; OSF can also be considered a special type of SF. Secondly, we design a resource allocation algorithm, which we call the OSF-oriented optical service scheduling algorithm. It is able to address multi-layer SFC optical service scheduling and provide comprehensive optical transmission services while meeting multiple optical transmission requirements (e.g., bandwidth, latency, availability). Moreover, the algorithm exploits the concept of the Auxiliary Graph. Finally, we compare our algorithm with a baseline algorithm in simulation; the results show that our algorithm outperforms the baseline under low traffic load.
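Scheduling over an auxiliary graph ultimately reduces to a least-cost path search across layered nodes. The sketch below runs Dijkstra's algorithm on a toy two-layer (IP/optical) auxiliary graph; the node names, edge costs, and layering are invented for illustration:

```python
import heapq

def dijkstra(graph, src, dst):
    """Least-cost path over an auxiliary graph given as
    {node: [(neighbor, cost), ...]}. Returns (cost, path)."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# Toy auxiliary graph: IP-layer nodes (ip*) and optical-layer nodes (opt*);
# inter-layer edges model the cost of entering/leaving the optical layer.
aux = {
    "ip1":  [("ip2", 5.0), ("opt1", 1.0)],
    "opt1": [("opt2", 1.0)],
    "opt2": [("ip2", 1.0)],
}
cost, path = dijkstra(aux, "ip1", "ip2")
```

Here the cheaper route detours through the optical layer, which is how an auxiliary graph lets one scheduling pass weigh IP-layer hops against optical-layer lightpaths.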
Expert system verification and validation study. Delivery 3A and 3B: Trip summaries
NASA Technical Reports Server (NTRS)
French, Scott
1991-01-01
Key results are documented from attendance at the 4th workshop on verification, validation, and testing (VV&T). The most interesting part of the workshop was when representatives from the U.S., Japan, and Europe presented surveys of VV&T within their respective regions. Another interesting part focused on current efforts to define industry standards for artificial intelligence and how they might affect approaches to VV&T of expert systems. The next part of the workshop focused on VV&T methods, including applying mathematical techniques to verification of rule bases and techniques for capturing information about the software development process. The final part focused on software tools. A summary is also presented of the EPRI conference on 'Methodologies, Tools, and Standards for Cost Effective Reliable Software Verification and Validation.' The conference was divided into discussion sessions on the following issues: development process, automated tools, software reliability, methods, standards, and cost/benefit considerations.
Implementation of AAPG exchange format
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiser, K.; Guerrero, I.
1989-03-01
The American Association of Petroleum Geologists (AAPG) has proposed a format for exchanging geologic and other petroleum data. The AAPG Computer Applications Committee approved the proposal at the March 1988 AAPG annual meeting in Houston, Texas. By adopting this format, data input into application software and data exchange between software packages are greatly simplified; benefits to both users and suppliers of software are substantial. The AAPG exchange format supports a flexible, generic data structure. This flexibility allows application software to use the standard format for storing internal control data. In some cases, extensions to the standard format, such as separation of header and data files and use of data delimiters, permit the use of AAPG format translator programs on data that were defined and generated before the emergence of the exchange format. Translation software, programmed in C, has been written and contributes to successful implementation of the AAPG exchange format in application software.
NASA Astrophysics Data System (ADS)
Laracuente, Nicholas; Grossman, Carl
2013-03-01
We developed an algorithm and software to calculate autocorrelation functions from real-time photon-counting data using the fast, parallel capabilities of graphics processing units (GPUs). Recent developments in hardware and software have allowed for general-purpose computing with inexpensive GPU hardware. These devices are better suited to emulating hardware autocorrelators than traditional CPU-based software applications, because they emphasize parallel throughput over sequential speed. Incoming data are binned in a standard multi-tau scheme with configurable points-per-bin size and are mapped into a GPU memory pattern to reduce time-expensive memory access. Applications include dynamic light scattering (DLS) and fluorescence correlation spectroscopy (FCS) experiments. We ran the software on a 64-core graphics PCI card in a computer with a 3.2 GHz Intel i5 CPU running Linux. FCS measurements were made on Alexa 546 and Texas Red dyes in a standard buffer (PBS). Software correlations were compared to hardware-correlator measurements on the same signals. Supported by HHMI and Swarthmore College.
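The multi-tau binning scheme mentioned above can be sketched on the CPU as a reference for what the GPU kernels compute: each level correlates the signal at a fixed number of lags, then the signal is coarsened by averaging adjacent samples and the effective lag width doubles, giving quasi-logarithmic lag spacing. Parameter names and normalization here are illustrative:

```python
def autocorrelate_multi_tau(signal, m=8, levels=3):
    """Multi-tau autocorrelation: m lags per level; after each level the
    signal is coarsened by averaging adjacent pairs, doubling lag width.
    Returns a list of (lag in original samples, raw correlation <I(t)I(t+k)>)."""
    sig = list(signal)
    width = 1
    result = []
    for level in range(levels):
        # Level 0 covers lags 0..m-1; later levels only the upper half,
        # since shorter lags were already covered at finer resolution.
        first = 0 if level == 0 else m // 2
        for k in range(first, m):
            n = len(sig) - k
            if n <= 0:
                break
            g = sum(sig[i] * sig[i + k] for i in range(n)) / n
            result.append((k * width, g))
        # Coarsen: average adjacent pairs (standard multi-tau rebinning).
        sig = [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig) - 1, 2)]
        width *= 2
    return result

# A constant intensity c gives a flat correlation equal to c**2 at all lags.
result = autocorrelate_multi_tau([2.0] * 64)
```

The GPU version parallelizes the inner products across lags and levels, which is where the throughput advantage over a sequential CPU correlator comes from.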
Content analysis of cancer blog posts.
Kim, Sujin
2009-10-01
The efficacy of user-defined subject tagging and software-generated subject tagging for describing and organizing cancer blog contents was explored. The Technorati search engine was used to search the blogosphere for cancer blog postings generated during a two-month period. Postings were mined for relevant subject concepts, and blogger-defined tags and Text Analysis Portal for Research (TAPoR) software-defined tags were generated for each message. Descriptive data were collected, and the blogger-defined tags were compared with software-generated tags. Three standard vocabularies (Opinion Templates, Basic Resource, and Medical Subject Headings [MeSH] Resource) were used to assign subject terms to the blogs, with results compared for efficacy in information retrieval. Descriptive data showed that most of the studied cancer blogs (80%) contained fewer than 500 words each. The numbers of blogger-defined tags per posting (M = 4.49 per posting) were significantly smaller than the TAPoR keywords (M = 23.55 per posting). Both blogger-defined subject tags and software-generated subject tags were often overly broad or overly narrow in focus, producing less than effective search results for those seeking to extract information from cancer blogs. Additional exploration into methods for systematically organizing cancer blog postings is necessary if blogs are to become stable and efficacious information resources for cancer patients, friends, families, or providers.
Space Software Defined Radio Characterization to Enable Reuse
NASA Technical Reports Server (NTRS)
Mortensen, Dale J.; Bishop, Daniel W.; Chelmins, David
2012-01-01
NASA's Space Communication and Navigation Testbed is beginning operations on the International Space Station this year. The objective is to promote new software defined radio technologies and associated software application reuse, enabled by this first flight of NASA's Space Telecommunications Radio System architecture standard. The Space Station payload has three software defined radios onboard that allow for a wide variety of communications applications; however, each radio was only launched with one waveform application. By design the testbed allows new waveform applications to be uploaded and tested by experimenters in and outside of NASA. During the system integration phase of the testbed special waveform test modes and stand-alone test waveforms were used to characterize the SDR platforms for the future experiments. Characterization of the Testbed's JPL SDR using test waveforms and specialized ground test modes is discussed in this paper. One of the test waveforms, a record and playback application, can be utilized in a variety of ways, including new satellite on-orbit checkout as well as independent on-board testbed experiments.
Design Aids for Real-Time Systems (DARTS)
NASA Technical Reports Server (NTRS)
Szulewski, P. A.
1982-01-01
Design-Aids for Real-Time Systems (DARTS) is a tool that assists in defining embedded computer systems through tree structured graphics, military standard documentation support, and various analyses including automated Software Science parameter counting and metrics calculation. These analyses provide both static and dynamic design quality feedback which can potentially aid in producing efficient, high quality software systems.
The purpose of this SOP is to define the procedures used for the initial and periodic verification and validation of computer programs used during the Arizona NHEXAS project and the "Border" study. Keywords: Computers; Software; QA/QC.
The National Human Exposure Assessment Sur...
ERIC Educational Resources Information Center
Barajas-Saavedra, Arturo; Álvarez-Rodriguez, Francisco J.; Mendoza-González, Ricardo; Oviedo-De-Luna, Ana C.
2015-01-01
Development of digital educational resources is difficult due to the particular complexity of their pedagogical aspects. Another problem is the lack of well-defined development processes, documented experiences, and standard methodologies to guide and organize game development. Added to this, there is no documented technique to ensure correct…
Software Defined Networking for Next Generation Converged Metro-Access Networks
NASA Astrophysics Data System (ADS)
Ruffini, M.; Slyne, F.; Bluemm, C.; Kitsuwan, N.; McGettrick, S.
2015-12-01
While the concept of Software Defined Networking (SDN) has seen rapid deployment within the data center community, its adoption in telecommunications networks has progressed slowly, although the concept has been swiftly adopted by all major telecoms vendors. This paper presents a control plane architecture for SDN-driven converged metro-access networks, developed through the DISCUS European FP7 project. The SDN-based controller architecture was developed in a testbed implementation targeting two main scenarios: fast feeder fiber protection over dual-homed Passive Optical Networks (PONs) and dynamic service provisioning over a multi-wavelength PON. Implementation details and results of the experiment carried out over the second scenario are reported in the paper, showing the potential of SDN in providing assured on-demand services to end users.
A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.
Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong
2015-01-01
This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based collection method incurs too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.
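The joint choice the paper optimizes can be illustrated on a toy instance: for each flow, pick one candidate route and poll it at some switch on that route, minimizing the total reporting cost. The brute-force sketch below is not the paper's ILP or heuristics; the network, routes, and costs are invented for illustration:

```python
from itertools import product

# Toy instance (hypothetical): each flow has candidate routes (switch lists),
# each switch has a fixed cost to report collected statistics to the controller.
report_cost = {"s1": 3, "s2": 1, "s3": 2}
routes = {
    "f1": [["s1", "s2"], ["s1", "s3"]],
    "f2": [["s2", "s3"], ["s3"]],
}

def best_assignment(routes, report_cost):
    """Brute-force the joint choice of route and polling switch per flow."""
    best, best_cost = None, float("inf")
    for choice in product(*routes.values()):
        # Poll each flow at the cheapest switch on its chosen route.
        cost = sum(min(report_cost[s] for s in path) for path in choice)
        if cost < best_cost:
            best, best_cost = dict(zip(routes, choice)), cost
    return best, best_cost

assignment, cost = best_assignment(routes, report_cost)
print(cost)  # 2
```

The exponential enumeration is exactly what makes the exact ILP intractable at scale, motivating the paper's tree-topology algorithm and general heuristic.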
Lommen, Arjen; van der Kamp, Henk J; Kools, Harrie J; van der Lee, Martijn K; van der Weg, Guido; Mol, Hans G J
2012-11-09
A new alternative data processing tool set, metAlignID, has been developed for automated pre-processing and library-based identification and concentration estimation of target compounds after analysis by comprehensive two-dimensional gas chromatography with mass spectrometric detection. The tool set has been developed for and tested on LECO data. The software runs multi-threaded (one thread per processor core) on a standard PC (personal computer) under different operating systems and is thus capable of processing multiple data sets simultaneously. Raw data files are converted into netCDF (network Common Data Form) format using a fast conversion tool. They are then preprocessed using previously developed algorithms originating from the metAlign software. Next, the resulting reduced data files are searched against a user-composed library (derived from user or commercial NIST-compatible libraries; NIST = National Institute of Standards and Technology), and the identified compounds, including an indicative concentration, are reported in Excel format. Data can be processed batchwise. The overall time needed for conversion, processing, and searching of 30 raw data sets for 560 compounds is routinely within an hour. The screening performance is evaluated for detection of pesticides and contaminants in raw data obtained after analysis of soil and plant samples. Results are compared to the existing data-handling routine based on proprietary software (LECO, ChromaTOF). The developed software tool set, which is freely downloadable at www.metalign.nl, greatly accelerates data analysis and offers more options for fine-tuning automated identification toward specific application needs. The quality of the results obtained is slightly better than with the standard processing, and a quantitative concentration estimate is added.
The software tool set in combination with two-dimensional gas chromatography coupled to time-of-flight mass spectrometry shows great potential as a highly-automated and fast multi-residue instrumental screening method. Copyright © 2012 Elsevier B.V. All rights reserved.
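The one-thread-per-core, multiple-data-sets-at-once design described above follows a standard pattern: each raw file becomes an independent convert-then-search task on a thread pool. A hedged sketch, where `convert` and `search` are trivial stand-ins for the real netCDF conversion and library search:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the pipeline stages: convert a raw file,
# then search the reduced data against a compound library.
def convert(raw):           # stands in for vendor-format -> netCDF conversion
    return raw.lower()

def search(reduced, library):
    return [c for c in library if c in reduced]

def process_batch(raw_files, library, workers=4):
    """Run convert+search for many data sets concurrently, one task each."""
    def pipeline(raw):
        return search(convert(raw), library)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(pipeline, raw_files))

hits = process_batch(["Sample_A_ATRAZINE", "Sample_B_DDT"],
                     library=["atrazine", "ddt"])
print(hits)  # [['atrazine'], ['ddt']]
```

Because each data set is independent, throughput scales with core count without any coordination between tasks, which is what makes the 30-files-per-hour figure achievable on a standard PC.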
Model driven development of clinical information systems using openEHR.
Atalag, Koray; Yang, Hong Yul; Tempero, Ewan; Warren, Jim
2011-01-01
openEHR and the recent international standard (ISO 13606) define a model driven software development methodology for health information systems. However, there is little evidence in the literature describing implementation, especially for desktop clinical applications. This paper presents an implementation pathway using .Net/C# technology for Microsoft Windows desktop platforms. An endoscopy reporting application driven by openEHR Archetypes and Templates has been developed. A set of novel GUI directives has been defined and presented which guides the automatic graphical user interface generator to render widgets properly. We also reveal the development steps and important design decisions, from modelling to the final software product. This might provide guidance for other developers and form evidence required for the adoption of these standards by vendors and national programs alike.
C++ Coding Standards for the AMP Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Thomas M; Clarno, Kevin T
2009-09-01
This document provides an initial starting point to define the C++ coding standards used by the AMP nuclear fuel performance integrated code project and is a part of AMP's software development process. This document draws from the experiences, and documentation [1], of the developers of the Marmot Project at Los Alamos National Laboratory. Much of the software in AMP will be written in C++. The power of C++ can be abused easily, resulting in code that is difficult to understand and maintain. This document gives the practices that should be followed on the AMP project for all new code that is written. The intent is not to be onerous but to ensure that the code can be readily understood by the entire code team and serve as a basis for collectively defining a set of coding standards for use in future development efforts. At the end of the AMP development in fiscal year (FY) 2010, all developers will have experience with the benefits, restrictions, and limitations of the standards described and will collectively define a set of standards for future software development. External libraries that AMP uses do not have to meet these requirements, although we encourage external developers to follow these practices. For any code of which AMP takes ownership, the project will decide on any changes on a case-by-case basis. The practices that we are using in the AMP project have been in use in the Denovo project [2] for several years. The practices build on those given in References [3-5]; the practices given in these references should also be followed. Some of the practices given in this document can also be found in [6].
The RISC (Reduced Instruction Set Computer) Architecture and Computer Performance Evaluation.
1986-03-01
time where the main emphasis of the evaluation process is put on the software. The model is intended to provide a tool for computer architects to use... program, or 3) was to be implemented in random logic more effectively than the equivalent sequence of software instructions. Both data and address... definition is the IEEE Standard 729-1983, defining Computer Architecture as: "The process of defining a collection of hardware and software components and
Algorithm Design of CPCI Backboard's Interrupts Management Based on VxWorks' Multi-Tasks
NASA Astrophysics Data System (ADS)
Cheng, Jingyuan; An, Qi; Yang, Junfeng
2006-09-01
This paper begins with a brief introduction of the embedded real-time operating system VxWorks and the CompactPCI standard. It then presents the programming interfaces for Peripheral Component Interconnect (PCI) configuration, interrupt handling, and multi-task programming under VxWorks, and places emphasis on a software framework for CPCI interrupt management based on multiple tasks. The method is sound in design and easy to adapt, and it ensures that all possible interrupts are handled in time, which makes it suitable for high energy physics data acquisition systems with multiple channels, high data rates, and hard real-time requirements.
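The core of such a framework is the classic deferred-interrupt pattern: the interrupt service routine does the bare minimum and posts to a message queue, and a dedicated task drains the queue at task level. The VxWorks calls themselves (e.g. msgQSend from an ISR, msgQReceive in a task) cannot run here, so the sketch below only mimics the pattern with a Python queue and thread; it is an illustration of the structure, not VxWorks code:

```python
import queue
import threading

irq_queue = queue.Queue()      # stands in for a VxWorks message queue
handled = []

def isr(channel):
    """Interrupt service routine: do the minimum and defer to a task."""
    irq_queue.put(channel)     # analogous to msgQSend() from the ISR

def handler_task():
    """Task-level handler: drains queued interrupts in arrival order."""
    while True:
        ch = irq_queue.get()   # analogous to msgQReceive()
        if ch is None:         # sentinel used here to stop the demo task
            break
        handled.append(ch)     # real code would read out the channel here

t = threading.Thread(target=handler_task)
t.start()
for ch in (0, 1, 2):
    isr(ch)                    # simulate three back-to-back interrupts
irq_queue.put(None)            # shut the demo task down
t.join()
print(handled)  # [0, 1, 2]
```

Keeping the ISR short and moving the data readout into a prioritized task is what lets a multi-channel, high-rate system guarantee that no interrupt is lost while one is being serviced.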
NASA Astrophysics Data System (ADS)
Ortiz Geist, Estefania
2015-04-01
Precise GNSS orbit and clock solutions are essential for the generation of the Terrestrial Reference Frame (TRF) and required for a broad variety of applications. Over the last decades the combination products of the International GNSS Service (IGS) have become the standard for all kinds of GNSS applications requiring the highest accuracy. The emerging new GNSS constellations Galileo, BeiDou, and QZSS, as well as the modernization of the already established GPS and GLONASS constellations, will stimulate new developments in GNSS data processing in order to gain the best benefit from the new signals and systems for geodetic and geodynamic applications. This raises the question of the influence of this development on the orbit and clock products: what are the consequences for the consistency of the contributions from the Analysis Centres (ACs) of the IGS, and how does the combination procedure need to react to this development? Another set of questions is related to the expected scenario in which not all IGS ACs will include all GNSS. The algorithm for the orbit and clock combination needs to be adapted for a multi-system combination, on the one hand to keep the internal consistency between the GNSS during the combination procedure, and on the other to consider the differences in the expected orbit qualities between the satellite systems (e.g., due to the number of satellites or network coverage). To investigate these questions ESOC and AIUB have agreed on a joint research fellowship for three years. The objective of this research is to analyse the capabilities and challenges of combining hybrid multi-GNSS solutions and to develop a concept which compares and combines orbit and clock contributions to come up with a consistent, reliable, truly combined multi-GNSS product. Well-defined test scenarios shall be constructed and analysed based on the GNSS data processing software packages in the two institutions, namely "NAPEOS" and the "Bernese GNSS Software".
The presentation will show selected results from the on-going research, to address the impact of several key elements, on the potential combination, which will ultimately give the criteria for the weighting scheme undertaken in the combination.
Updates to the NASA Space Telecommunications Radio System (STRS) Architecture
NASA Technical Reports Server (NTRS)
Kacpura, Thomas J.; Handler, Louis M.; Briones, Janette; Hall, Charles S.
2008-01-01
This paper describes an update of the Space Telecommunications Radio System (STRS) open architecture for NASA space based radios. The STRS architecture has been defined as a framework for the design, development, operation and upgrade of space based software defined radios, where processing resources are constrained. The architecture has been updated based upon reviews by NASA missions, radio providers, and component vendors. The STRS Standard prescribes the architectural relationship between the software elements used in software execution and defines the Application Programmer Interface (API) between the operating environment and the waveform application. Modeling tools have been adopted to present the architecture. The paper will present a description of the updated API, configuration files, and constraints. Minimum compliance is discussed for early implementations. The paper then closes with a summary of the changes made and discussion of the relevant alignment with the Object Management Group (OMG) SWRadio specification, and enhancements to the specialized signal processing abstraction.
The role of open-source software in innovation and standardization in radiology.
Erickson, Bradley J; Langer, Steve; Nagy, Paul
2005-11-01
The use of open-source software (OSS), in which developers release the source code to applications they have developed, is popular in the software industry. This is done to allow others to modify and improve software (which may or may not be shared back to the community) and to allow others to learn from the software. Radiology was an early participant in this model, supporting OSS that implemented the ACR-National Electrical Manufacturers Association (now Digital Imaging and Communications in Medicine) standard for medical image communications. In radiology and in other fields, OSS has promoted innovation and the adoption of standards. Popular OSS is of high quality because access to source code allows many people to identify and resolve errors. Open-source software is analogous to the peer-review scientific process: one must be able to see and reproduce results to understand and promote what is shared. The authors emphasize that support for OSS need not threaten vendors; most vendors embrace and benefit from standards. Open-source development does not replace vendors but more clearly defines their roles, typically focusing on areas in which proprietary differentiators benefit customers and on professional services such as implementation planning and service. Continued support for OSS is essential for the success of our field.
NASA Astrophysics Data System (ADS)
Baru, C.; Lin, K.
2009-04-01
The Geosciences Network project (www.geongrid.org) has been developing cyberinfrastructure for data sharing in the Earth Science community based on a service-oriented architecture. The project defines a standard "software stack", which includes a standardized set of software modules and corresponding service interfaces. The system employs Grid certificates for distributed user authentication. The GEON Portal provides online access to these services via a set of portlets. This service-oriented approach has enabled the GEON network to easily expand to new sites and deploy the same infrastructure in new projects. To facilitate interoperation with other distributed geoinformatics environments, service standards are being defined and implemented for catalog services and federated search across distributed catalogs. The need arises because there may be multiple metadata catalogs in a distributed system, for example, for each institution, agency, geographic region, and/or country. Ideally, a geoinformatics user should be able to search across all such catalogs by making a single search request. In this paper, we describe our implementation for such a search capability across federated metadata catalogs in the GEON service-oriented architecture. The GEON catalog can be searched using spatial, temporal, and other metadata-based search criteria. The search can be invoked as a Web service and, thus, can be embedded in any software application.
The need for federated catalogs in GEON arises because, (i) GEON collaborators at the University of Hyderabad, India have deployed their own catalog, as part of the iGEON-India effort, to register information about local resources for broader access across the network, (ii) GEON collaborators in the GEO Grid (Global Earth Observations Grid) project at AIST, Japan have implemented a catalog for their ASTER data products, and (iii) we have recently deployed a search service to access all data products from the EarthScope project in the US (http://es-portal.geongrid.org), which are distributed across data archives at IRIS in Seattle, Washington, UNAVCO in Boulder, Colorado, and at the ICDP archives in GFZ, Potsdam, Germany. This service implements a "virtual" catalog--the actual/"physical" catalogs and data are stored at each of the remote locations. A federated search across all these catalogs would enable GEON users to discover data across all of these environments with a single search request. Our objective is to implement this search service via the OGC Catalog Services for the Web (CS-W) standard by providing appropriate CSW "wrappers" for each metadata catalog, as necessary. This paper will discuss technical issues in designing and deploying such a multi-catalog search service in GEON and describe an initial prototype of the federated search capability.
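The fan-out-and-merge structure of such a federated search can be sketched in a few lines. The catalog names echo those mentioned above, but their contents and the matching rule are invented for illustration; in the real system a CS-W wrapper call would replace each dictionary lookup:

```python
# Hypothetical catalogs: each maps a resource id to its metadata record.
catalogs = {
    "iGEON-India": {"r1": {"keyword": "basalt", "region": "Deccan"}},
    "GEO-Grid":    {"r2": {"keyword": "ASTER", "region": "Japan"}},
    "EarthScope":  {"r3": {"keyword": "basalt", "region": "US"}},
}

def federated_search(catalogs, keyword):
    """Fan one query out to every catalog and merge the hits."""
    hits = []
    for name, catalog in catalogs.items():
        for rid, meta in catalog.items():
            if meta["keyword"] == keyword:
                hits.append((name, rid))
    return sorted(hits)

print(federated_search(catalogs, "basalt"))
# [('EarthScope', 'r3'), ('iGEON-India', 'r1')]
```

The user issues one request and never needs to know which physical catalog, or which continent, a matching record lives in; that is the "virtual catalog" idea the paper describes.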
The purpose of this SOP is to define the procedures used for the initial and periodic verification and validation of computer programs used during the Arizona NHEXAS project and the Border study. Keywords: Computers; Software; QA/QC.
The U.S.-Mexico Border Program is sponsored ...
NASA Astrophysics Data System (ADS)
Boettcher, M. A.; Butt, B. M.; Klinkner, S.
2016-10-01
A major concern of a university satellite mission is to download the payload and telemetry data from a satellite. While the ground station antennas are in general easy to procure with limited effort, the receiving unit is most certainly not. The flexible and low-cost software-defined radio (SDR) transceiver "BladeRF" is used to receive the QPSK modulated and CCSDS compliant coded data of a satellite in the HAM radio S-band. The control software is based on the Open Source program GNU Radio, which also is used to perform CCSDS post processing of the binary bit stream. The test results show a good performance of the receiving system.
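One small step inside such a QPSK receive chain is the symbol-to-bits demapping. The real BladeRF/GNU Radio chain (synchronization, CCSDS decoding) is far more involved, and the Gray mapping below is chosen for illustration rather than taken from the mission's coding profile:

```python
# Gray-mapped QPSK: one constellation point per bit pair (mapping assumed).
GRAY_MAP = {(0, 0): 1+1j, (0, 1): -1+1j, (1, 1): -1-1j, (1, 0): 1-1j}

def demap(symbol):
    """Hard-decision demapping: pick the nearest constellation point."""
    bits, _ = min(((b, abs(symbol - p)) for b, p in GRAY_MAP.items()),
                  key=lambda t: t[1])
    return bits

# A noisy received symbol near the (0, 0) point still demaps correctly.
print(demap(0.9 + 1.2j))  # (0, 0)
```

With Gray mapping, adjacent constellation points differ in only one bit, so the most likely symbol error costs a single bit error, which is why it is the usual choice before CCSDS channel decoding.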
Tuszynski, Tobias; Rullmann, Michael; Luthardt, Julia; Butzke, Daniel; Tiepolt, Solveig; Gertz, Hermann-Josef; Hesse, Swen; Seese, Anita; Lobsien, Donald; Sabri, Osama; Barthel, Henryk
2016-06-01
For regional quantification of nuclear brain imaging data, defining volumes of interest (VOIs) by hand is still the gold standard. As this procedure is time-consuming and operator-dependent, a variety of software tools for automated identification of neuroanatomical structures were developed. As the quality and performance of those tools are poorly investigated so far in analyzing amyloid PET data, we compared in this project four algorithms for automated VOI definition (HERMES Brass, two PMOD approaches, and FreeSurfer) against the conventional method. We systematically analyzed florbetaben brain PET and MRI data of ten patients with probable Alzheimer's dementia (AD) and ten age-matched healthy controls (HCs) collected in a previous clinical study. VOIs were manually defined on the data as well as through the four automated workflows. Standardized uptake value ratios (SUVRs) with the cerebellar cortex as a reference region were obtained for each VOI. SUVR comparisons between ADs and HCs were carried out using Mann-Whitney-U tests, and effect sizes (Cohen's d) were calculated. SUVRs of automatically generated VOIs were correlated with SUVRs of conventionally derived VOIs (Pearson's tests). The composite neocortex SUVRs obtained by manually defined VOIs were significantly higher for ADs vs. HCs (p=0.010, d=1.53). This was also the case for the four tested automated approaches which achieved effect sizes of d=1.38 to d=1.62. SUVRs of automatically generated VOIs correlated significantly with those of the hand-drawn VOIs in a number of brain regions, with regional differences in the degree of these correlations. Best overall correlation was observed in the lateral temporal VOI for all tested software tools (r=0.82 to r=0.95, p<0.001). Automated VOI definition by the software tools tested has a great potential to substitute for the current standard procedure to manually define VOIs in β-amyloid PET data analysis.
Multi-Purpose Crew Vehicle Camera Asset Planning: Imagery Previsualization
NASA Technical Reports Server (NTRS)
Beaulieu, K.
2014-01-01
Using JSC-developed and other industry-standard off-the-shelf 3D modeling, animation, and rendering software packages, the Image Science Analysis Group (ISAG) supports Orion Project imagery planning efforts through dynamic 3D simulation and realistic previsualization of ground-, vehicle-, and air-based camera output.
KEYNOTE 2 : Rebuilding the Tower of Babel - Better Communication with Standards
2013-02-01
and a member of the Object Management Group (OMG) SysML specification team. He has been developing multi-national complex systems for almost 35 years...critical systems development, virtual team management, systems development, and software development with UML, SysML and Architectural Frameworks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, C; Yan, G; Helmig, R
2014-06-01
Purpose: To develop a system that can define the radiation isocenter and correlate this information with couch coordinates, laser alignment, optical distance indicator (ODI) settings, optical tracking system (OTS) calibrations, and mechanical isocenter walkout. Methods: Our team developed a multi-adapter, multi-purpose quality assurance (QA) and calibration device that uses an electronic portal imaging device (EPID) and in-house image-processing software to define the radiation isocenter, thereby allowing linear accelerator (Linac) components to be verified and calibrated. Motivated by the concept that each Linac component related to patient setup for image-guided radiotherapy based on cone-beam CT should be calibrated with respect to the radiation isocenter, we designed multiple concentric adapters of various materials and shapes to meet the needs of MV and KV radiation isocenter definition, laser alignment, and OTS calibration. The phantom's ability to accurately define the radiation isocenter was validated on 4 Elekta Linacs using a commercial ball bearing (BB) phantom as a reference. Radiation isocenter walkout and the accuracy of couch coordinates, ODI, and OTS were then quantified with the device. Results: The device was able to define the radiation isocenter within 0.3 mm. Radiation isocenter walkout was within ±1 mm at 4 cardinal angles. By switching adapters, we identified that the accuracy of the couch position digital readout, ODI, OTS, and mechanical isocenter walkout was within sub-mm. Conclusion: This multi-adapter, multi-purpose isocenter phantom can be used to accurately define the radiation isocenter and represents a potential paradigm shift in Linac QA. Moreover, multiple concentric adapters allowed for sub-mm accuracy for the other relevant components. This intuitive and user-friendly design is currently patent pending.
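The in-house image-processing software is not described in detail, but a common building block for locating a ball bearing (BB) shadow in an EPID image is an intensity-weighted centroid. A minimal sketch on a plain 2D list (a real tool would add thresholding, sub-pixel fitting, and pixel-to-mm calibration):

```python
# Minimal sketch: locate a BB in an EPID image by intensity-weighted centroid.
def bb_centroid(image):
    """Return the (row, col) centroid of above-background pixels."""
    background = sum(map(sum, image)) / (len(image) * len(image[0]))
    total = rsum = csum = 0.0
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            w = max(v - background, 0.0)   # weight by excess intensity
            total += w
            rsum += w * r
            csum += w * c
    return rsum / total, csum / total

# Synthetic 5x5 image with a bright BB centered at (2, 2)
img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 10.0
print(bb_centroid(img))  # (2.0, 2.0)
```

Repeating this localization at the cardinal gantry angles and intersecting the beam axes is what yields sub-mm estimates of the radiation isocenter and its walkout.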
Rocca-Serra, Philippe; Brandizi, Marco; Maguire, Eamonn; Sklyar, Nataliya; Taylor, Chris; Begley, Kimberly; Field, Dawn; Harris, Stephen; Hide, Winston; Hofmann, Oliver; Neumann, Steffen; Sterk, Peter; Tong, Weida; Sansone, Susanna-Assunta
2010-01-01
Summary: The first open source software suite for experimentalists and curators that (i) assists in the annotation and local management of experimental metadata from high-throughput studies employing one or a combination of omics and other technologies; (ii) empowers users to uptake community-defined checklists and ontologies; and (iii) facilitates submission to international public repositories. Availability and Implementation: Software, documentation, case studies and implementations at http://www.isa-tools.org Contact: isatools@googlegroups.com PMID:20679334
Lott, Gus K; Johnson, Bruce R; Bonow, Robert H; Land, Bruce R; Hoy, Ronald R
2009-01-01
We present g-PRIME, a software based tool for physiology data acquisition, analysis, and stimulus generation in education and research. This software was developed in an undergraduate neurophysiology course and strongly influenced by instructor and student feedback. g-PRIME is a free, stand-alone, windows application coded and "compiled" in Matlab (does not require a Matlab license). g-PRIME supports many data acquisition interfaces from the PC sound card to expensive high throughput calibrated equipment. The program is designed as a software oscilloscope with standard trigger modes, multi-channel visualization controls, and data logging features. Extensive analysis options allow real time and offline filtering of signals, multi-parameter threshold-and-window based event detection, and two-dimensional display of a variety of parameters including event time, energy density, maximum FFT frequency component, max/min amplitudes, and inter-event rate and intervals. The software also correlates detected events with another simultaneously acquired source (event triggered average) in real time or offline. g-PRIME supports parameter histogram production and a variety of elegant publication quality graphics outputs. A major goal of this software is to merge powerful engineering acquisition and analysis tools with a biological approach to studies of nervous system function.
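The threshold-and-window event detection mentioned above can be sketched compactly. This is a simplified illustration of the idea, not g-PRIME's actual algorithm: an event starts at a threshold crossing and is only accepted if the signal stays above threshold for a minimum number of samples:

```python
def detect_events(signal, threshold, window):
    """Return start indices of events: a crossing above `threshold` that
    stays above it for at least `window` consecutive samples."""
    events, run, start = [], 0, None
    for i, v in enumerate(signal):
        if v > threshold:
            if run == 0:
                start = i
            run += 1
            if run == window:
                events.append(start)
        else:
            run = 0
    return events

# A spike train with one wide event and one too-brief blip
trace = [0, 0, 5, 6, 7, 0, 0, 5, 0, 0]
print(detect_events(trace, threshold=4, window=2))  # [2]
```

The window requirement is what rejects single-sample noise spikes that a bare threshold would count as events, which matters when detected events drive downstream statistics such as inter-event intervals.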
Charliecloud: Unprivileged containers for user-defined software stacks in HPC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priedhorsky, Reid; Randles, Timothy C.
Supercomputing centers are seeing increasing demand for user-defined software stacks (UDSS), instead of or in addition to the stack provided by the center. These UDSS support user needs such as complex dependencies or build requirements, externally required configurations, portability, and consistency. The challenge for centers is to provide these services in a usable manner while minimizing the risks: security, support burden, missing functionality, and performance. We present Charliecloud, which uses the Linux user and mount namespaces to run industry-standard Docker containers with no privileged operations or daemons on center resources. Our simple approach avoids most security risks while maintaining access to the performance and functionality already on offer, doing so in less than 500 lines of code. Charliecloud promises to bring an industry-standard UDSS user workflow to existing, minimally altered HPC resources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jannetti, C.; Becker, R.
The software is an ABAQUS/Standard UMAT (user-defined material behavior subroutine) that implements the constitutive model for shape-memory alloy materials developed by Jannetti et al. (2003a), using a fully implicit time integration scheme to integrate the constitutive equations. The UMAT is used in conjunction with ABAQUS/Standard to perform finite-element analyses of SMA materials.
Programmable multi-node quantum network design and simulation
NASA Astrophysics Data System (ADS)
Dasari, Venkat R.; Sadlier, Ronald J.; Prout, Ryan; Williams, Brian P.; Humble, Travis S.
2016-05-01
Software-defined networking offers a device-agnostic programmable framework to encode new network functions. Externally centralized control plane intelligence allows programmers to write network applications and to build functional network designs. OpenFlow is a key protocol widely adopted to build programmable networks because of its programmability, flexibility and ability to interconnect heterogeneous network devices. We simulate the functional topology of a multi-node quantum network that uses programmable network principles to manage quantum metadata for protocols such as teleportation, superdense coding, and quantum key distribution. We first show how the OpenFlow protocol can manage the quantum metadata needed to control the quantum channel. We then use numerical simulation to demonstrate robust programmability of a quantum switch via the OpenFlow network controller while executing an application of superdense coding. We describe the software framework implemented to carry out these simulations and we discuss near-term efforts to realize these applications.
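The superdense coding application mentioned above is easy to state on state vectors: Alice encodes two classical bits by applying I, X, Z, or ZX to her half of a shared Bell pair, and Bob recovers both bits with a Bell measurement. The pure-Python toy below simulates only that quantum step, independent of OpenFlow or the paper's network simulation:

```python
# Superdense coding on 2-qubit state vectors (toy simulation).
import math

def kron_apply(gate, state):
    """Apply a 2x2 gate (a, b, c, d) to qubit 0 of a 2-qubit state."""
    a, b, c, d = gate
    return [a*state[0] + b*state[2], a*state[1] + b*state[3],
            c*state[0] + d*state[2], c*state[1] + d*state[3]]

I = (1, 0, 0, 1); X = (0, 1, 1, 0); Z = (1, 0, 0, -1)  # I is the 00 no-op

def encode(bits):
    """Alice encodes 2 classical bits on her half of |Phi+>."""
    s = [1/math.sqrt(2), 0, 0, 1/math.sqrt(2)]   # Bell state |Phi+>
    if bits[1]:
        s = kron_apply(X, s)
    if bits[0]:
        s = kron_apply(Z, s)
    return s

def decode(state):
    """Bob's Bell measurement, done here by overlap with the four Bell states."""
    r = 1/math.sqrt(2)
    bell = {(0, 0): [r, 0, 0, r], (0, 1): [0, r, r, 0],
            (1, 0): [r, 0, 0, -r], (1, 1): [0, r, -r, 0]}
    return max(bell, key=lambda b: abs(sum(x*y for x, y in zip(bell[b], state))))

print([decode(encode(b)) for b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Two classical bits ride on one transmitted qubit, but only because the entangled pair was distributed beforehand; that prior distribution is exactly the quantum-channel metadata an SDN controller would have to manage.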
SpaceWire Driver Software for Special DSPs
NASA Technical Reports Server (NTRS)
Clark, Douglas; Lux, James; Nishimoto, Kouji; Lang, Minh
2003-01-01
A computer program provides a high-level C-language interface to electronics circuitry that controls a SpaceWire interface in a system based on a space qualified version of the ADSP-21020 digital signal processor (DSP). SpaceWire is a spacecraft-oriented standard for packet-switching data-communication networks that comprise nodes connected through bidirectional digital serial links that utilize low-voltage differential signaling (LVDS). The software is tailored to the SMCS-332 application-specific integrated circuit (ASIC) (also available as the TSS901E), which provides three high-speed (150 Mbps) serial point-to-point links compliant with the proposed Institute of Electrical and Electronics Engineers (IEEE) Standard 1355.2 and equivalent European Space Agency (ESA) Standard ECSS-E-50-12. In the specific application of this software, the SpaceWire ASIC was combined with the DSP processor, memory, and control logic in a Multi-Chip Module DSP (MCM-DSP). The software is a collection of low-level driver routines that provide a simple message-passing application programming interface (API) for software running on the DSP. Routines are provided for interrupt-driven access to the two styles of interface provided by the SMCS: (1) the "word at a time" conventional host interface (HOCI); and (2) a higher performance "dual port memory" style interface (COMI).
BioBrick assembly standards and techniques and associated software tools.
Røkke, Gunvor; Korvald, Eirin; Pahr, Jarle; Oyås, Ove; Lale, Rahmi
2014-01-01
The BioBrick idea was developed to introduce the engineering principles of abstraction and standardization into synthetic biology. BioBricks are DNA sequences that serve a defined biological function and can be readily assembled with any other BioBrick parts to create new BioBricks with novel properties. In order to achieve this, several assembly standards can be used. Which assembly standards a BioBrick is compatible with, depends on the prefix and suffix sequences surrounding the part. In this chapter, five of the most common assembly standards will be described, as well as some of the most used assembly techniques, cloning procedures, and a presentation of the available software tools that can be used for deciding on the best method for assembling of different BioBricks, and searching for BioBrick parts in the Registry of Standard Biological Parts database.
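The mechanics of the most common standard, BioBrick Standard Assembly (RFC 10), can be sketched as string manipulation. The restriction sites and the TACTAGAG scar are per the standard; the part sequences and the simplified cut logic are illustrative only:

```python
# Toy model of BioBrick Standard Assembly (RFC 10).
SUFFIX = "TACTAGTAGCGGCCGCTGCAG"   # contains SpeI, NotI, PstI sites
PREFIX = "GAATTCGCGGCCGCTTCTAGAG"  # contains EcoRI, NotI, XbaI sites

def assemble(part_a, part_b):
    """Ligate part_a (vector cut with SpeI) to part_b (insert cut with XbaI)."""
    # SpeI cuts A^CTAGT: the upstream fragment keeps the bases before the cut.
    upstream = part_a + SUFFIX[:SUFFIX.index("ACTAGT") + 1]
    # XbaI cuts T^CTAGA: the downstream fragment keeps the overhang onward.
    downstream = PREFIX[PREFIX.index("TCTAGA") + 1:] + part_b
    return upstream + downstream

composite = assemble("ATGAAA", "ATGCCC")
print(composite)  # ATGAAATACTAGAGATGCCC  (scar TACTAGAG between the parts)
```

Because the ligated SpeI/XbaI junction (the scar) is cut by neither enzyme, the composite is itself a valid BioBrick flanked by a fresh prefix and suffix, which is what makes the assembly idempotent and repeatable.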
Artificial intelligence approaches to software engineering
NASA Technical Reports Server (NTRS)
Johannes, James D.; Macdonald, James R.
1988-01-01
Artificial intelligence approaches to software engineering are examined. The software development life cycle is a sequence of not-so-well-defined phases. Improved techniques for developing systems have been formulated over the past 15 years, but pressure to reduce costs continues, and software development technology seems to be standing still. The primary objective of the knowledge-based approach to software development presented in this paper is to avoid problem areas that lead to schedule slippages, cost overruns, or software products that fall short of their desired goals. Identifying and resolving software problems early, often in the phase in which they first occur, has been shown to contribute significantly to reducing risks in software development. Software development is not a mechanical process but a basic human activity: it requires clear thinking, work, and rework to be successful. The artificial intelligence approaches presented support the software development life cycle by changing current practices and methods, replacing them with better techniques that improve the process of software development and the quality of the resulting products. The software development process can be structured into well-defined steps whose interfaces are standardized, supported, and checked by automated procedures that provide error detection, produce documentation, and ultimately support the actual design of complex programs.
A multi-center study benchmarks software tools for label-free proteome quantification
Gillet, Ludovic C; Bernhardt, Oliver M.; MacLean, Brendan; Röst, Hannes L.; Tate, Stephen A.; Tsou, Chih-Chiang; Reiter, Lukas; Distler, Ute; Rosenberger, George; Perez-Riverol, Yasset; Nesvizhskii, Alexey I.; Aebersold, Ruedi; Tenzer, Stefan
2016-01-01
The consistent and accurate quantification of proteins by mass spectrometry (MS)-based proteomics depends on the performance of instruments, acquisition methods, and data analysis software. In collaboration with the software developers, we evaluated OpenSWATH, SWATH 2.0, Skyline, Spectronaut, and DIA-Umpire, five of the most widely used software tools for processing data from SWATH-MS (sequential window acquisition of all theoretical fragment ion spectra), a method that uses data-independent acquisition (DIA) for label-free protein quantification. We analyzed high-complexity test datasets from hybrid proteome samples of defined quantitative composition, acquired on two different MS instruments using different SWATH isolation-window setups. For consistent evaluation we developed LFQbench, an R package that calculates metrics of precision and accuracy in label-free quantitative MS and reports the identification performance, robustness, and specificity of each software tool. Our reference datasets enabled developers to improve their software tools. After optimization, all tools provided highly convergent identification and reliable quantification performance, underscoring their robustness for label-free quantitative proteomics. PMID:27701404
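As an illustration of the kind of precision and accuracy metrics such a benchmark reports (this is not LFQbench's actual code), consider proteins spiked at a known ratio between two samples: accuracy can be summarized as the deviation of the median observed log-ratio from the expected one, and precision as the spread of the observed ratios. All intensity values below are invented:

```python
# Illustrative only: the style of precision/accuracy metric computed for
# hybrid-proteome benchmarks, where proteins of one species are spiked at
# a known ratio between samples A and B.
import math
import statistics

def log2_ratios(intensities_a, intensities_b):
    return [math.log2(a / b) for a, b in zip(intensities_a, intensities_b)]

def accuracy_and_precision(intensities_a, intensities_b, expected_ratio):
    """Accuracy: median observed log2 ratio minus the expected log2 ratio.
    Precision: standard deviation of the observed log2 ratios."""
    ratios = log2_ratios(intensities_a, intensities_b)
    accuracy = statistics.median(ratios) - math.log2(expected_ratio)
    precision = statistics.stdev(ratios)
    return accuracy, precision

# Hypothetical yeast proteins spiked 2:1 (A:B); numbers are made up.
a = [200.0, 410.0, 195.0, 820.0]
b = [100.0, 200.0, 100.0, 400.0]
acc, prec = accuracy_and_precision(a, b, expected_ratio=2.0)
print(round(acc, 3), round(prec, 3))
```

A perfectly accurate and precise tool would report values near zero for both quantities.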
Data Acquisition System for Multi-Frequency Radar Flight Operations Preparation
NASA Technical Reports Server (NTRS)
Leachman, Jonathan
2010-01-01
A three-channel data acquisition system was developed for the NASA Multi-Frequency Radar (MFR) system. The system is based on a commercial-off-the-shelf (COTS) industrial PC (personal computer) and two dual-channel 14-bit digital receiver cards. The decimated complex envelope representations of the three radar signals are passed to the host PC via the PCI bus, and then processed in parallel by multiple cores of the PC CPU (central processing unit). The innovation is this parallelization of the radar data processing using multiple cores of a standard COTS multi-core CPU. The data processing portion of the data acquisition software was built using autonomous program modules or threads, which can run simultaneously on different cores. A master program module calculates the optimal number of processing threads, launches them, and continually supplies each with data. The benefit of this new parallel software architecture is that COTS PCs can be used to implement increasingly complex processing algorithms on an increasing number of radar range gates and data rates. As new PCs become available with higher numbers of CPU cores, the software will automatically utilize the additional computational capacity.
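The master/worker scheme described above can be sketched as follows; the per-gate processing function is a stand-in, since the actual radar algorithms are not given here:

```python
# A minimal sketch of the master/worker pattern: the master sizes a worker
# pool from the host's CPU core count, launches the workers, and keeps each
# supplied with data blocks. The "processing" is a stand-in computation.
import os
from concurrent.futures import ThreadPoolExecutor

def process_gate(samples):
    # Stand-in for per-range-gate processing (e.g., mean power estimation).
    return sum(s * s for s in samples) / len(samples)

def master(data_blocks):
    # On a machine with more cores, the same code automatically uses them.
    workers = os.cpu_count() or 1
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_gate, data_blocks))

blocks = [[1.0, 2.0], [3.0, 4.0]]
print(master(blocks))  # [2.5, 12.5]
```

For CPU-bound numeric work a process pool would typically replace the thread pool, but the structure, one autonomous worker per core fed by a master, is the same.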
Sustaining Software-Intensive Systems
2006-05-01
Excerpts from the report's readiness criteria for sustaining software-intensive systems: a completed Multi-Service Operational Test and Evaluation (MOT&E) for the potential production software package (or OT&E if not multi-service); a stable software production baseline; complete and current software documentation; and an Authority to Operate (ATO).
Assembling Appliances Standards from a Basket of Functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siderious, Hans-Paul; Meier, Alan
2014-08-11
Rapid innovation in product design challenges the current methodology for setting standards and labels, especially for electronics, software and networking. Major problems include defining the product, measuring its energy consumption, and choosing the appropriate metric and level for the standard. Most governments have tried to solve these problems by defining ever more specific product subcategories, along with their corresponding test methods and metrics. An alternative approach would treat each energy-using product as something that delivers a basket of functions. Then separate standards would be constructed for the individual functions that can be defined, tested, and evaluated. Case studies of thermostats, displays and network equipment are presented to illustrate the problems with the classical approach for setting standards and indicate the merits and drawbacks of the alternative. The functional approach appears best suited to products whose primary purpose is processing information and that have multiple functions.
Ontology Based Quality Evaluation for Spatial Data
NASA Astrophysics Data System (ADS)
Yılmaz, C.; Cömert, Ç.
2015-08-01
Many institutions will be providing data to the National Spatial Data Infrastructure (NSDI). The current technical background of the NSDI is based on syntactic web services; it is expected that these will be replaced by semantic web services. The quality of the data provided is important in terms of the decision-making process and the accuracy of transactions, and therefore the data quality needs to be tested. This topic has been neglected in Turkey. Data quality control for the NSDI may be done by private or public "data accreditation" institutions, and a methodology is required for data quality evaluation. There are studies on data quality, including ISO standards, academic studies, and software to evaluate spatial data quality. The ISO 19157 standard defines the data quality elements. Proprietary software, such as 1Spatial's 1Validate and Esri's Data Reviewer, offers quality evaluation based on its own classification of rules. Commonly, rule-based approaches are used for geospatial data quality checks. In this study, we look for the technical components needed to devise and implement a rule-based approach with ontologies, using free and open source software in the semantic web context. The semantic web uses ontologies to deliver well-defined web resources and make them accessible to end-users and processes. We have created an ontology conforming to the geospatial data and defined some sample rules to show how to test data with respect to data quality elements, including attribute, topo-semantic, and geometrical consistency, using free and open source software. To test data against rules, sample GeoSPARQL queries are created and associated with the specifications.
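The rule-based checking idea can be sketched without the ontology machinery; the feature records and rule predicates below are hypothetical stand-ins for what the paper expresses as ontology classes and GeoSPARQL queries:

```python
# Concept sketch only: rule-based quality checking over invented feature
# records, illustrating attribute and geometrical consistency rules.

def attribute_rule(feature):
    # Attribute consistency: a road must carry a positive lane count.
    return feature["type"] != "road" or feature.get("lanes", 0) > 0

def geometry_rule(feature):
    # Geometrical consistency: a polygon ring must close on itself.
    geom = feature["geometry"]
    return geom["kind"] != "polygon" or geom["ring"][0] == geom["ring"][-1]

RULES = [attribute_rule, geometry_rule]

def quality_report(features):
    """Map each feature id to the names of the rules it violates."""
    return {f["id"]: [r.__name__ for r in RULES if not r(f)] for f in features}

features = [
    {"id": "r1", "type": "road", "lanes": 2,
     "geometry": {"kind": "line", "ring": []}},
    {"id": "p1", "type": "parcel",
     "geometry": {"kind": "polygon", "ring": [(0, 0), (1, 0), (1, 1)]}},
]
print(quality_report(features))  # {'r1': [], 'p1': ['geometry_rule']}
```

In the paper's setting, each predicate would instead be a SPARQL/GeoSPARQL query evaluated against the ontology, but the pass/fail reporting structure is the same.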
AIRS Maps from Space Processing Software
NASA Technical Reports Server (NTRS)
Thompson, Charles K.; Licata, Stephen J.
2012-01-01
This software package processes Atmospheric Infrared Sounder (AIRS) Level 2 swath standard product geophysical parameters, and generates global, colorized, annotated maps. It automatically generates daily and multi-day averaged colorized and annotated maps of various AIRS Level 2 swath geophysical parameters. It also generates AIRS input data sets for Eyes on Earth, Puffer-sphere, and Magic Planet. This program is tailored to AIRS Level 2 data products. It re-projects data into 1/4-degree grids that can be combined and averaged for any number of days. The software scales and colorizes global grids utilizing AIRS-specific color tables, and annotates images with title and color bar. This software can be tailored for use with other swath data products for the purposes of visualization.
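The re-gridding step lends itself to a compact sketch. Assuming swath samples arrive as (latitude, longitude, value) triples, daily 1/4-degree grids can be built and then averaged over any number of days; the sample values below are invented:

```python
# Sketch of 1/4-degree gridding and multi-day averaging of swath samples.
# Sample coordinates and values are made up for illustration.
from collections import defaultdict

def cell_index(lat, lon, step=0.25):
    # Integer (row, col) index of the 1/4-degree cell containing the point.
    return (int(lat // step), int(lon // step))

def grid_daily(samples):
    """Bin (lat, lon, value) samples into cells; return per-cell means."""
    sums, counts = defaultdict(float), defaultdict(int)
    for lat, lon, value in samples:
        idx = cell_index(lat, lon)
        sums[idx] += value
        counts[idx] += 1
    return {idx: sums[idx] / counts[idx] for idx in sums}

def average_grids(daily_grids):
    # Multi-day average: mean of the daily cell means where data exist.
    acc, n = defaultdict(float), defaultdict(int)
    for grid in daily_grids:
        for idx, value in grid.items():
            acc[idx] += value
            n[idx] += 1
    return {idx: acc[idx] / n[idx] for idx in acc}

day1 = grid_daily([(10.1, 20.1, 280.0), (10.2, 20.2, 282.0)])
day2 = grid_daily([(10.15, 20.05, 284.0)])
print(average_grids([day1, day2]))  # {(40, 80): 282.5}
```

Colorization and annotation would then be applied to the resulting grid, which is outside the scope of this sketch.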
Modeling and Grid Generation of Iced Airfoils
NASA Technical Reports Server (NTRS)
Vickerman, Mary B.; Baez, Marivell; Braun, Donald C.; Hackenberg, Anthony W.; Pennline, James A.; Schilling, Herbert W.
2007-01-01
SmaggIce Version 2.0 is a software toolkit for geometric modeling and grid generation for two-dimensional, single- and multi-element, clean and iced airfoils. A previous version of SmaggIce was described in Preparing and Analyzing Iced Airfoils, NASA Tech Briefs, Vol. 28, No. 8 (August 2004), page 32. To recapitulate: Ice shapes make it difficult to generate quality grids around airfoils, yet these grids are essential for predicting ice-induced complex flow. This software efficiently creates high-quality structured grids with tools that are uniquely tailored for various ice shapes. SmaggIce Version 2.0 significantly enhances the previous version primarily by adding the capability to generate grids for multi-element airfoils. This version of the software is an important step in streamlining the aeronautical analysis of iced airfoils using computational fluid dynamics (CFD) tools. The user may prepare the ice shape, define the flow domain, decompose it into blocks, generate grids, modify/divide/merge blocks, and control grid density and smoothness. All these steps may be performed efficiently even for the difficult glaze and rime ice shapes. Providing the means to generate highly controlled grids near rough ice, the software includes the creation of a wrap-around block (called the "viscous sublayer block"), which is a thin, C-type block around the wake line and iced airfoil. For multi-element airfoils, the software makes use of grids that wrap around and fill in the areas between the viscous sublayer blocks for all elements that make up the airfoil. A scripting feature records the history of interactive steps, which can be edited and replayed later to produce other grids. Using this version of SmaggIce, ice shape handling and grid generation can become a practical engineering process, rather than a laborious research effort.
NASA Astrophysics Data System (ADS)
Paget, A. C.; Brodzik, M. J.; Long, D. G.; Hardman, M.
2016-02-01
The historical record of satellite-derived passive microwave brightness temperatures comprises data from multiple imaging radiometers (SMMR, SSM/I-SSMIS, AMSR-E), spanning nearly 40 years of Earth observations from 1978 to the present. Passive microwave data are used to monitor time series of many climatological variables, including ocean wind speeds, cloud liquid water, sea ice concentrations, and ice velocity. Gridded versions of passive microwave data have been produced using various map projections (polar stereographic, Lambert azimuthal equal-area, cylindrical equal-area, quarter-degree Plate Carrée) and data formats (flat binary, HDF). However, none of the currently available versions can be rendered in the common visualization standard, geoTIFF, without requiring cartographic reprojection. Furthermore, the reprojection details are complicated and often require expert knowledge of obscure software package options. We are producing a consistently calibrated, completely reprocessed data set of this valuable multi-sensor satellite record, using EASE-Grid 2.0, an improved equal-area projection definition that will require no reprojection for translation into geoTIFF. Our approach has been twofold: 1) define the projection ellipsoid to match the reference datum of the satellite data, and 2) include required file-level metadata for standard projection software to correctly render the data in the geoTIFF standard. The Calibrated, Enhanced Resolution Brightness Temperature (CETB) Earth System Data Record (ESDR) leverages image reconstruction techniques to enhance gridded spatial resolution to 3 km and uses newly available intersensor calibrations to improve the quality of derived geophysical products. We expect that our attention to easy geoTIFF compatibility will foster higher-quality analysis with the CETB product by enabling easy and correct intercomparison with other gridded and in situ data.
Software defined multi-OLT passive optical network for flexible traffic allocation
NASA Astrophysics Data System (ADS)
Zhang, Shizong; Gu, Rentao; Ji, Yuefeng; Zhang, Jiawei; Li, Hui
2016-10-01
With the rapid growth of 4G mobile networks and vehicular network services, mobile terminal users have an increasing demand for data sharing among different radio remote units (RRUs) and roadside units (RSUs). Meanwhile, commercial video-streaming and video/voice conferencing applications delivered through peer-to-peer (P2P) technology keep stimulating the sharp growth of bandwidth demand among both business and residential subscribers. However, a significant issue is that, although wavelength division multiplexing (WDM) and orthogonal frequency division multiplexing (OFDM) technology have been proposed to fulfil the ever-increasing bandwidth demand in the access network, the bandwidth of optical fiber is not unlimited, owing to the restrictions of optical component properties and modulation/demodulation technology, and blindly adding wavelengths cannot satisfy the cost-sensitive nature of the access network. In this paper, we propose a software defined multi-OLT PON architecture to support efficient scheduling of access network traffic. By introducing software defined networking technology and a wavelength selective switch into the TWDM PON system in the central office, multiple OLTs can be treated as a bandwidth resource pool that supports flexible traffic allocation for optical network units (ONUs). Moreover, under the configuration of the control plane, ONUs can change their affiliation between different OLTs under different traffic situations; thus inter-OLT traffic can be localized and the data exchange pressure on the core network can be relieved. Because this architecture is designed to follow the TWDM PON specification as closely as possible, the existing optical distribution network (ODN) investment can be preserved and conventional EPON/GPON equipment remains compatible with the proposed architecture.
Moreover, based on this architecture, we propose a dynamic wavelength scheduling algorithm, which can be deployed as an application on the control plane and effectively schedules wavelength resources between different OLTs under various traffic situations. Simulation results show that the scheduling algorithm optimizes network traffic between different OLTs effectively and improves the wavelength utilization of the multi-OLT system through flexible wavelength scheduling.
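A minimal sketch of the scheduling idea (not the paper's actual algorithm): treating the OLTs as a resource pool, ONUs are re-homed greedily from the most loaded OLT to the least loaded one until the load imbalance falls below a threshold. All identifiers are hypothetical:

```python
# Greedy load-balancing sketch for ONU-to-OLT affiliation. Real schedulers
# would weigh per-ONU traffic, not just ONU counts; this is an illustration.

def rebalance(olts, threshold=2):
    """olts: {olt_id: set of ONU ids}. Moves one ONU at a time and returns
    the list of (onu, from_olt, to_olt) moves performed."""
    moved = []
    while True:
        busiest = max(olts, key=lambda o: len(olts[o]))
        idlest = min(olts, key=lambda o: len(olts[o]))
        if len(olts[busiest]) - len(olts[idlest]) < threshold:
            return moved
        onu = sorted(olts[busiest])[0]  # pick one ONU deterministically
        olts[busiest].remove(onu)
        olts[idlest].add(onu)
        moved.append((onu, busiest, idlest))

pool = {"OLT1": {"onu1", "onu2", "onu3", "onu4"}, "OLT2": {"onu5"}}
moves = rebalance(pool)
print(moves)  # [('onu1', 'OLT1', 'OLT2')]
```

In the proposed architecture, each such move would be carried out by reconfiguring the wavelength selective switch from the control plane without disrupting ongoing traffic.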
2006-12-01
Guidance and Navigation Software Architecture Design for the Autonomous Multi-Agent Physically Interacting Spacecraft (AMPHIS) Test Bed, by Blake D. Eikenberry. Approved for public release; distribution is unlimited.
Study on Spacelab software development and integration concepts
NASA Technical Reports Server (NTRS)
1974-01-01
A study was conducted to define the complexity and magnitude of the Spacelab software challenge. The study was based on current Spacelab program concepts, anticipated flight schedules, and ground operation plans. The study was primarily directed toward identifying and solving problems related to the experiment flight application and tests and checkout software executing in the Spacelab onboard command and data management subsystem (CDMS) computers and electrical ground support equipment (EGSE). The study provides a conceptual base from which it is possible to proceed into the development phase of the Software Test and Integration Laboratory (STIL) and establishes guidelines for the definition of standards which will ensure that the total Spacelab software is understood prior to entering development.
Experimental land observing data system feasibility study
NASA Technical Reports Server (NTRS)
Buckley, J. L.; Kraiman, H.
1982-01-01
An end-to-end data system to support a Shuttle-based Multispectral Linear Array (MLA) mission in the mid-1980s was defined. The experimental Land Observing System (ELOS) is discussed. A ground system that exploits extensive assets from the LANDSAT-D Program to effectively meet the objectives of the ELOS mission was defined. The goal of 10 meter pixel precision, the variety of data acquisition capabilities, and the use of Shuttle are key to the mission requirements. Ground mission management functions are met through the use of GSFC's Multi-Satellite Operations Control Center (MSOCC). The MLA Image Generation Facility (MIGF) combines major hardware elements from the Applications Development Data System (ADDS) facility and LANDSAT Assessment System (LAS) with a special purpose MLA interface unit. LANDSAT-D image processing techniques, adapted to MLA characteristics, form the basis for the use of existing software and the definition of new software required.
Springback compensation for a vehicle's steel body panel
NASA Astrophysics Data System (ADS)
Bałon, Paweł; Świątoniowski, Andrzej; Szostak, Janusz; Kiełbasa, Bartłomiej
2017-10-01
This paper presents a structural element of a vehicle made from high-strength steels (HSS). Applying this kind of material considerably reduces construction mass owing to its high strength; nevertheless, it introduces springback, which depends mainly on the material used as well as the part geometry. Springback compensation helps the part reach its reference geometry and can be designed with Finite Element Method (FEM) software. The authors compared two methods of optimizing the die shape: the first compensates the die shape only for operation OP-20, while the second, multi-operation method compensates it for both the OP-20 and OP-50 operations. Predicting springback by trial and error is difficult and labor-intensive, so designing dies economically and quickly requires appropriate FEM software. Virtual compensation methods make it possible to obtain precise results in a short time, and the software-based die compensation was verified experimentally with a prototype die. Springback deformation is a critical problem especially for HSS when the geometry is complex.
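The compensation principle can be shown in a one-dimensional toy version of the displacement-adjustment idea: the die surface is shifted opposite to the measured springback deviation, scaled by a compensation factor. All profile values are invented:

```python
# Toy 1D illustration of springback compensation by displacement
# adjustment; real die compensation operates on full 3D FEM meshes.

def compensate(nominal, sprung, factor=1.0):
    """New die profile = nominal geometry minus factor * springback error."""
    return [n - factor * (s - n) for n, s in zip(nominal, sprung)]

nominal = [0.0, 5.0, 8.0, 5.0, 0.0]  # target part profile (mm)
sprung = [0.0, 5.5, 9.0, 5.5, 0.0]   # measured profile after springback (mm)
die = compensate(nominal, sprung)
print(die)  # [0.0, 4.5, 7.0, 4.5, 0.0]
```

Because springback is not perfectly linear, the factor is usually tuned and the compensate/simulate cycle is iterated until the formed part falls within tolerance.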
NASA Technical Reports Server (NTRS)
Richardson, Keith; Wong, Carla
1988-01-01
The role of verification and validation (V and V) in software has been to support and strengthen the software lifecycle and to ensure that the resultant code meets the standards of the requirements documents. Knowledge Based System (KBS) V and V should serve the same role, but the KBS lifecycle is ill-defined. The rationale of a simple form of the KBS lifecycle is presented, including accommodation to certain critical KBS differences from software development.
A cognitive mobile BTS solution with software-defined radioelectric sensing.
Muñoz, Jorge; Alonso, Javier Vales; García, Francisco Quiñoy; Costas, Sergio; Pillado, Marcos; Castaño, Francisco Javier González; Sánchez, Manuel García; Valcarce, Roberto López; Bravo, Cristina López
2013-02-05
Private communications inside large vehicles such as ships may be effectively provided using standard cellular systems. In this paper we propose a new solution based on software-defined radio with electromagnetic sensing support. Software-defined radio allows low-cost developments and, potentially, added-value services not available in commercial cellular networks. The platform of reference, OpenBTS, only supports single-channel cells. Our proposal, however, has the ability of changing BTS channel frequency without disrupting ongoing communications. This ability should be mandatory in vehicular environments, where neighbouring cell configurations may change rapidly, so a moving cell must be reconfigured in real-time to avoid interferences. Full details about frequency occupancy sensing and the channel reselection procedure are provided in this paper. Moreover, a procedure for fast terminal detection is proposed. This may be decisive in emergency situations, e.g., if someone falls overboard. Different tests confirm the feasibility of our proposal and its compatibility with commercial GSM terminals.
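The channel reselection logic can be sketched as follows; this is an illustration of the idea, not OpenBTS code, and the hysteresis threshold is an assumed parameter to avoid flapping between channels:

```python
# Simplified channel reselection sketch: given sensed interference power
# per candidate channel (channel number -> dBm), move to the quietest
# channel only when it is noticeably better than the current one.

def select_channel(current, sensed_dbm, hysteresis_db=6.0):
    quietest = min(sensed_dbm, key=sensed_dbm.get)
    # Reselect only if the gain over the current channel exceeds hysteresis.
    if sensed_dbm[current] - sensed_dbm[quietest] > hysteresis_db:
        return quietest
    return current

sensed = {40: -75.0, 41: -102.0, 42: -98.0}
print(select_channel(40, sensed))  # 41: moves to the quietest channel
print(select_channel(41, sensed))  # 41: already on the best channel, stays
```

In the vehicular scenario described above, a reselection decision would additionally trigger the handover signaling needed to keep ongoing calls alive across the frequency change.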
An Interoperability Framework and Capability Profiling for Manufacturing Software
NASA Astrophysics Data System (ADS)
Matsuda, M.; Arai, E.; Nakano, N.; Wakai, H.; Takeda, H.; Takata, M.; Sasaki, H.
ISO/TC184/SC5/WG4 is working on ISO 16100: Manufacturing software capability profiling for interoperability. This paper reports on a manufacturing software interoperability framework and a capability profiling methodology that were proposed and developed through this international standardization activity. Within the context of a manufacturing application, a manufacturing software unit is considered capable of performing a specific set of functions defined by a manufacturing software system architecture. A manufacturing software interoperability framework consists of a set of elements and rules for describing the capability of software units to support the requirements of a manufacturing application. The capability profiling methodology makes use of the domain-specific attributes and methods associated with each specific software unit to describe capability profiles in terms of unit name, manufacturing functions, and other needed class properties. In this methodology, manufacturing software requirements are expressed in terms of software unit capability profiles.
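The profiling and matching idea can be sketched as follows; the profile structure, unit names, and function names are invented for illustration and are not the ISO 16100 schema:

```python
# Sketch of capability profiling: each software unit publishes a profile of
# the manufacturing functions it supports, and application requirements are
# matched against the pool of profiles. All names are hypothetical.

profiles = {
    "SchedulerA": {"functions": {"dispatching", "sequencing"}},
    "PlannerB":   {"functions": {"capacity_planning", "sequencing"}},
}

def matching_units(required_functions, profiles):
    """Return the units whose profiles cover every required function."""
    need = set(required_functions)
    return sorted(u for u, p in profiles.items() if need <= p["functions"])

print(matching_units({"sequencing"}, profiles))
print(matching_units({"dispatching", "sequencing"}, profiles))
```

The standard's XML-based profiles carry much richer metadata (versions, interfaces, class properties), but the requirement-to-profile matching follows this subset-coverage pattern.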
Architecture for Survivable System Processing (ASSP)
NASA Astrophysics Data System (ADS)
Wood, Richard J.
1991-11-01
The Architecture for Survivable System Processing (ASSP) Program is a multi-phase effort to implement Department of Defense (DOD) and commercially developed high-tech hardware, software, and architectures for reliable space avionics and ground-based systems. System configuration options provide processing capabilities to address Time Dependent Processing (TDP), Object Dependent Processing (ODP), and Mission Dependent Processing (MDP) requirements through Open System Architecture (OSA) alternatives that allow for the enhancement, incorporation, and capitalization of a broad range of development assets. High-technology developments in hardware, software, and networking models address technology challenges of long processor lifetimes, fault tolerance, reliability, throughput, memories, radiation hardening, size, weight, power (SWAP), and security. Hardware and software design, development, and implementation focus on the interconnectivity/interoperability of an open system architecture, and new technology is being applied to practical OSA components. To ensure a widely acceptable architecture capable of interfacing with various commercial and military components, this program provides for regular interactions with standardization working groups, e.g., the International Organization for Standardization (ISO), American National Standards Institute (ANSI), Society of Automotive Engineers (SAE), and Institute of Electrical and Electronics Engineers (IEEE). Selection of a viable open architecture is based on the widely accepted standards that implement the ISO/OSI Reference Model.
Software-Defined Architectures for Spectrally Efficient Cognitive Networking in Extreme Environments
NASA Astrophysics Data System (ADS)
Sklivanitis, Georgios
The objective of this dissertation is the design, development, and experimental evaluation of novel algorithms and reconfigurable radio architectures for spectrally efficient cognitive networking in terrestrial, airborne, and underwater environments. Next-generation wireless communication architectures and networking protocols that maximize spectrum utilization efficiency in congested/contested or low-spectral availability (extreme) communication environments can enable a rich body of applications with unprecedented societal impact. In recent years, underwater wireless networks have attracted significant attention for military and commercial applications including oceanographic data collection, disaster prevention, tactical surveillance, offshore exploration, and pollution monitoring. Unmanned aerial systems that are autonomously networked and fully mobile can assist humans in extreme or difficult-to-reach environments and provide cost-effective wireless connectivity for devices without infrastructure coverage. Cognitive radio (CR) has emerged as a promising technology to maximize spectral efficiency in dynamically changing communication environments by adaptively reconfiguring radio communication parameters. At the same time, the fast developing technology of software-defined radio (SDR) platforms has enabled hardware realization of cognitive radio algorithms for opportunistic spectrum access. However, existing algorithmic designs and protocols for shared spectrum access do not effectively capture the interdependencies between radio parameters at the physical (PHY), medium-access control (MAC), and network (NET) layers of the network protocol stack. In addition, existing off-the-shelf radio platforms and SDR programmable architectures are far from fulfilling runtime adaptation and reconfiguration across PHY, MAC, and NET layers. 
Spectrum allocation in cognitive networks with multi-hop communication requirements depends on the location, network traffic load, and interference profile at each network node. As a result, the development and implementation of algorithms and cross-layer reconfigurable radio platforms that can jointly treat space, time, and frequency as a unified resource to be dynamically optimized according to inter- and intra-network interference constraints is of fundamental importance. In the next chapters, we present novel algorithmic and software/hardware implementation developments toward the deployment of spectrally efficient terrestrial, airborne, and underwater wireless networks. In Chapter 1 we review the state of the art in commercially available SDR platforms, describe their software and hardware capabilities, and classify them based on their ability to enable rapid prototyping and advance experimental research in wireless networks. Chapter 2 discusses system design and implementation details toward real-time evaluation of a software-radio platform for all-spectrum cognitive channelization in the presence of narrowband or wideband primary stations. All-spectrum channelization is achieved by designing maximum signal-to-interference-plus-noise ratio (SINR) waveforms that span the whole continuum of the device-accessible spectrum, while satisfying peak power and interference temperature (IT) constraints for the secondary and primary users, respectively. In Chapter 3, we introduce the concept of all-spectrum channelization based on max-SINR optimized sparse-binary waveforms, propose optimal and suboptimal waveform design algorithms, and evaluate their SINR and bit-error-rate (BER) performance in an SDR testbed.
Chapter 4 considers the problem of channel estimation with minimal pilot signaling in multi-cell multi-user multiple-input multiple-output (MIMO) systems with very large antenna arrays at the base station, and proposes a least-squares (LS)-type algorithm that iteratively extracts channel and data estimates from a short record of data measurements. Our algorithmic developments toward spectrally efficient cognitive networking through joint optimization of channel access code-waveforms and routes in a multi-hop network are described in Chapter 5. Algorithmic designs are software optimized on heterogeneous multi-core general-purpose processor (GPP)-based SDR architectures by leveraging a novel software-radio framework that offers self-optimization and real-time adaptation capabilities at the PHY, MAC, and NET layers of the network protocol stack. Our system design approach is experimentally validated under realistic conditions in a large-scale hybrid ground-air testbed deployment. Chapter 6 reviews the state of the art in software and hardware platforms for underwater wireless networking and proposes a software-defined acoustic modem prototype that enables (i) cognitive reconfiguration of PHY/MAC parameters, and (ii) cross-technology communication adaptation. The proposed modem design is evaluated in terms of effective communication data rate in both water tank and lake testbed setups. In Chapter 7, we present a novel receiver configuration for code-waveform-based multiple-access underwater communications. The proposed receiver is fully reconfigurable and executes (i) all-spectrum cognitive channelization, and (ii) combined synchronization, channel estimation, and demodulation. Experimental evaluation in terms of SINR and BER shows that all-spectrum channelization is a powerful proposition for underwater communications. At the same time, the proposed receiver design can significantly enhance bandwidth utilization.
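The iterative LS idea of Chapter 4 can be illustrated in a scalar toy model (the actual algorithm operates on large multi-antenna arrays): alternate between a least-squares channel estimate given the current symbol decisions and fresh symbol decisions given the channel, seeded by a short pilot. The noiseless BPSK data below are invented:

```python
# Toy scalar version of alternating LS channel/data estimation: the model
# is y[i] = h * s[i] with real BPSK symbols and a short known pilot.

def alternating_ls(received, pilot, iterations=5):
    symbols = pilot + [1.0] * (len(received) - len(pilot))  # initial guess
    h = 1.0
    for _ in range(iterations):
        # LS channel estimate given the current symbol decisions.
        h = sum(y * s for y, s in zip(received, symbols)) / \
            sum(s * s for s in symbols)
        # Re-detect the unknown (non-pilot) BPSK symbols given the channel.
        symbols = pilot + [1.0 if y / h >= 0 else -1.0
                           for y in received[len(pilot):]]
    return h, symbols

h_true = 0.8
data = [1.0, -1.0, 1.0, 1.0, -1.0, -1.0]
received = [h_true * s for s in data]  # noiseless for clarity
h_hat, s_hat = alternating_ls(received, pilot=data[:2])
print(round(h_hat, 3), s_hat)
```

With noise and fading the iteration still converges from a short pilot in benign conditions, which is the appeal of the approach when full pilot overhead is too costly.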
Finally, in Chapter 8, we focus on challenging practical issues that arise in underwater acoustic sensor network setups where co-located multi-antenna sensor deployment is not feasible due to power, computation, and hardware limitations, and design, implement, and evaluate an underwater receiver structure that accounts for multiple carrier frequency and timing offsets in virtual (distributed) MIMO underwater systems.
NASA Astrophysics Data System (ADS)
Rosich Minguell, Josefina; Garzón Lopez, Francisco
2012-09-01
The Mid-resolution InfRAreD Astronomical Spectrograph (MIRADAS, a near-infrared multi-object echelle spectrograph operating at spectral resolution R=20,000 over the 1-2.5μm bandpass) was selected in 2010 by the Gran Telescopio Canarias (GTC) partnership as the next-generation near-infrared spectrograph for the world's largest optical/infrared telescope, and is being developed by an international consortium. The MIRADAS consortium includes the University of Florida, Universidad de Barcelona, Universidad Complutense de Madrid, Instituto de Astrofísica de Canarias, Institut de Física d'Altes Energies, Institut d'Estudis Espacials de Catalunya and Universidad Nacional Autónoma de México. This paper presents an overview of the MIRADAS control software, which follows the standards defined by the telescope to permit the integration of this software into the GTC Control System (GCS). The MIRADAS Control System is based on a distributed architecture according to a component model in which every subsystem is self-contained. The GCS is a distributed environment written in object-oriented C++, which runs components on different computers, using CORBA middleware for communications. Each MIRADAS observing mode, including engineering, monitoring and calibration modes, will have its own predefined sequence, which is executed in the GCS Sequencer. These sequences will have the ability to communicate with other telescope subsystems.
Adapting Wireless Technology to Lighting Control and Environmental Sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana Teasdale; Francis Rubinstein; Dave Watson
The high cost of retrofitting buildings with advanced lighting control systems is a barrier to adoption of this energy-saving technology. Wireless technology, however, offers a solution to mounting installation costs since it requires no additional wiring to implement. To demonstrate the feasibility of such a system, a prototype wirelessly-controlled advanced lighting system was designed and built. The system includes the following components: a wirelessly-controllable analog circuit module (ACM), a wirelessly-controllable electronic dimmable ballast, a T8 3-lamp fixture, an environmental multi-sensor, a current transducer, and control software. The ACM, dimmable ballast, multi-sensor, and current transducer were all integrated with SmartMesh™ wireless mesh networking nodes, called motes, enabling wireless communication, sensor monitoring, and actuator control. Each mote-enabled device has a reliable communication path to the SmartMesh Manager, a single-board computer that controls network functions and connects the wireless network to a PC running lighting control software. The ACM is capable of locally driving one or more standard 0-10 Volt electronic dimmable ballasts through relay control and a 0-10 Volt controllable output. The mote-integrated electronic dimmable ballast is designed to drive a standard 3-lamp T8 light fixture. The environmental multi-sensor measures occupancy, light level and temperature. The current transducer is used to measure the power consumed by the fixture. Control software was developed to implement advanced lighting algorithms, including daylight ramping, occupancy control, and demand response. Engineering prototypes of each component were fabricated and tested in a bench-scale system. Based on standard industry practices, a cost analysis was conducted.
It is estimated that the installation cost of a wireless advanced lighting control system for a retrofit application is at least 30% lower than that of a comparable wired system for a typical 16,000 square-foot office building, with a payback period of less than 3 years.
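The abstract names the control algorithms (daylight ramping, occupancy control, demand response) without detailing them. As a minimal illustrative sketch of the daylight-harvesting idea only, here is a proportional closed-loop dimming step; the setpoint, gain, and dimming limits are invented for illustration, not the project's actual tuning:

```python
def daylight_step(level, measured_lux, setpoint_lux, gain=0.2,
                  lo=0.1, hi=1.0):
    """One proportional control step for daylight harvesting.

    level:        current dimming command in [lo, hi]
    measured_lux: photosensor reading at the work surface
    setpoint_lux: target illuminance (e.g. 500 lux, an assumed value)

    Abundant daylight pushes the command down; a dark room drives it up.
    """
    error = (setpoint_lux - measured_lux) / setpoint_lux
    return max(lo, min(hi, level + gain * error))

# 800 lux measured against a 500 lux target: dim the electric lights.
dimmed = daylight_step(0.8, 800, 500)      # below 0.8
# 200 lux measured: raise the output.
raised = daylight_step(0.8, 200, 500)      # above 0.8
```

In a real deployment each iteration would read the multi-sensor over the mesh network and write the new level to the ballast's 0-10 V input; the loop shown here is only the control arithmetic.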
Critical Software for Human Spaceflight
NASA Technical Reports Server (NTRS)
Preden, Antonio; Kaschner, Jens; Rettig, Felix; Rodriggs, Michael
2017-01-01
The NASA Orion vehicle that will fly to the Moon in the coming years is propelled along its mission by the European Service Module (ESM), developed by ESA and its prime contractor Airbus Defence and Space. This paper describes the development of the Propulsion Drive Electronics (PDE) software that provides the interface between the propulsion hardware of the European Service Module and the Orion flight computers, and highlights the challenges that were faced during development. In particular, the aspects specific to human spaceflight in an international cooperation are presented, such as compliance with both European and US standards and the classification of the software at the highest criticality category, A. An innovative aspect of the PDE SW is its Time-Triggered Ethernet interface with the Orion flight computers, which has never flown on any European spacecraft. Finally, the verification aspects are presented, applying the most stringent quality requirements defined in the European Cooperation for Space Standardization (ECSS) standards, such as structural coverage analysis of the object code and recourse to an independent software verification and validation activity carried out in parallel by a separate team.
TERMTrial--terminology-based documentation systems for cooperative clinical trials.
Merzweiler, A; Weber, R; Garde, S; Haux, R; Knaup-Gregori, P
2005-04-01
Within cooperative groups of multi-center clinical trials, standardized documentation is a prerequisite for communication and sharing of data. Standardizing documentation systems means standardizing the underlying terminology. The management and consistent application of terminology systems is a difficult and fault-prone task, which should be supported by appropriate software tools. Today, documentation systems for clinical trials are often implemented as so-called remote data entry systems (RDE systems). Although there are many commercial systems that support the development of RDE systems, none offers comprehensive terminological support. Therefore, we developed the software system TERMTrial, which consists of a component for the definition and management of terminology systems for cooperative groups of clinical trials and two components for the terminology-based automatic generation of trial databases and the terminology-based interactive design of electronic case report forms (eCRFs). TERMTrial combines the advantages of remote data entry with comprehensive terminological control.
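TERMTrial's generator components are not publicly specified; purely to illustrate the idea of deriving a trial database automatically from terminology items, here is a sketch in which the item names, data types, and table name are all invented:

```python
import sqlite3

# Hypothetical terminology items: (item_code, SQL data type).
# These are invented examples, not TERMTrial's actual terminology model.
ITEMS = [
    ("patient_pseudonym", "TEXT"),
    ("visit_date",        "TEXT"),
    ("weight_kg",         "REAL"),
    ("karnofsky_index",   "INTEGER"),
]

def create_table_sql(table, items):
    """Generate a CREATE TABLE statement from a list of terminology items,
    so the trial database schema follows the terminology definition."""
    cols = ", ".join(f"{code} {dtype}" for code, dtype in items)
    return f"CREATE TABLE {table} ({cols})"

sql = create_table_sql("baseline_crf", ITEMS)
con = sqlite3.connect(":memory:")
con.execute(sql)  # the generated statement is valid SQL
```

An eCRF designer built on the same items could render one form field per terminology entry, which is the sense in which a single terminology drives both the database and the forms.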
A Decision-Based Methodology for Object-Oriented Design
1988-12-16
willing to take the time to meet together weekly for mutual encouragement and prayer. Their friendship, uncompromising standards, and lifestyle were...assume the validity of the object-oriented and software engineering principles involved, and define and prototype a generic, language-independent...meaningful labels for variables, abstraction requires the ability to define new types that relieve the programmer from having to know or mess with
Autonomous aircraft initiative study
NASA Technical Reports Server (NTRS)
Hewett, Marle D.
1991-01-01
The results of a consulting effort to aid NASA Ames-Dryden in defining a new initiative in aircraft automation are described. The initiative described is a multi-year, multi-center technology development and flight demonstration program. The initiative features the further development of technologies in aircraft automation already being pursued at multiple NASA centers and Department of Defense (DoD) research and development (R&D) facilities. The proposed initiative involves the development of technologies in intelligent systems, guidance, control, software development, airborne computing, navigation, communications, sensors, unmanned vehicles, and air traffic control. It involves the integration and implementation of these technologies to the extent necessary to conduct selected and incremental flight demonstrations.
Space Generic Open Avionics Architecture (SGOAA) standard specification
NASA Technical Reports Server (NTRS)
Wray, Richard B.; Stovall, John R.
1994-01-01
This standard establishes the Space Generic Open Avionics Architecture (SGOAA). The SGOAA includes a generic functional model, a processing structural model, and an architecture interface model. This standard defines the requirements for applying these models to the development of spacecraft core avionics systems. The purpose of this standard is to provide an umbrella set of requirements for applying the generic architecture models to the design of a specific avionics hardware/software processing system. This standard defines a generic set of system interface points to facilitate identification of critical services and interfaces. It establishes the requirement for applying appropriate low-level detailed implementation standards to those interface points. The generic core avionics functions and processing structural models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.
NASA's Approach to Software Assurance
NASA Technical Reports Server (NTRS)
Wetherholt, Martha
2015-01-01
NASA defines software assurance as: the planned and systematic set of activities that ensure conformance of software life cycle processes and products to requirements, standards, and procedures via quality, safety, reliability, and independent verification and validation. NASA's implementation of this approach to the quality, safety, reliability, security and verification and validation of software is brought together in one discipline, software assurance. Organizationally, NASA has software assurance at each NASA center, a Software Assurance Manager at NASA Headquarters, a Software Assurance Technical Fellow (currently the same person as the SA Manager), and an Independent Verification and Validation Organization with its own facility. As an umbrella risk mitigation strategy for safety and mission success assurance of NASA's software, software assurance covers a wide area and is structured to address the dynamic changes in how software is developed, used, and managed, as well as its increasingly complex functionality. Being flexible, risk based, and prepared for challenges in software at NASA is essential, especially as much of our software is unique for each mission.
Martinon, Alice; Cronin, Ultan P; Wilkinson, Martin G
2012-01-01
In this article, four types of standards were assessed in a SYBR Green-based real-time PCR procedure for the quantification of Staphylococcus aureus (S. aureus) in DNA samples. The standards were purified S. aureus genomic DNA (type A), circular plasmid DNA containing a thermonuclease (nuc) gene fragment (type B), and DNA extracted from defined populations of S. aureus cells generated by Fluorescence Activated Cell Sorting (FACS) technology with (type C) or without purification of DNA by boiling (type D). The optimal efficiency of 2.016 was obtained using Roche LightCycler® 4.1 software for type C standards, whereas the lowest efficiency (1.682) corresponded to type D standards. Type C standards appeared to be more suitable for quantitative real-time PCR because of the use of defined populations for construction of standard curves. Overall, the Fieller confidence interval algorithm may be improved for replicates having a low standard deviation in cycle threshold (Ct) values, such as found for type B and C standards. Stabilities of diluted PCR standards stored at -20°C were compared after 0, 7, 14 and 30 days and were lower for type A or C standards than for type B standards. However, FACS-generated standards may be useful for bacterial quantification in real-time PCR assays once optimal storage and temperature conditions are defined.
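For context on the efficiency figures quoted (2.016, 1.682): in real-time PCR, amplification efficiency is conventionally derived from the slope of the standard curve (Ct plotted against log10 of the starting quantity), and unknowns are quantified by inverting the same line. A short sketch of that standard relationship:

```python
import math

def amplification_efficiency(slope):
    """Efficiency from the standard-curve slope (Ct vs. log10 quantity).

    A perfectly doubling reaction has slope -1/log10(2) ~ -3.32 and
    efficiency 2.0; values below 2.0 indicate sub-optimal amplification.
    """
    return 10 ** (-1 / slope)

def log10_quantity(ct, slope, intercept):
    """Back-calculate log10 starting quantity from an unknown's Ct,
    given the fitted standard-curve slope and intercept."""
    return (ct - intercept) / slope

perfect = amplification_efficiency(-1 / math.log10(2))  # exactly 2.0
```

The efficiencies reported in the abstract correspond to standard-curve slopes close to (type C) or shallower than (type D) the ideal -3.32 cycles per decade.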
Generalized Symbolic Execution for Model Checking and Testing
NASA Technical Reports Server (NTRS)
Khurshid, Sarfraz; Pasareanu, Corina; Visser, Willem; Kofmeyer, David (Technical Monitor)
2003-01-01
Modern software systems, which often are concurrent and manipulate complex data structures, must be extremely reliable. We present a novel framework, based on symbolic execution, for automated checking of such systems. We provide a two-fold generalization of traditional symbolic execution based approaches: one, we define a program instrumentation, which enables standard model checkers to perform symbolic execution; two, we give a novel symbolic execution algorithm that handles dynamically allocated structures (e.g., lists and trees), method preconditions (e.g., acyclicity of lists), data (e.g., integers and strings) and concurrency. The program instrumentation enables a model checker to automatically explore program heap configurations (using a systematic treatment of aliasing) and manipulate logical formulae on program data values (using a decision procedure). We illustrate two applications of our framework: checking correctness of multi-threaded programs that take inputs from unbounded domains with complex structure, and generation of non-isomorphic test inputs that satisfy a testing criterion. Our implementation for Java uses the Java PathFinder model checker.
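As a toy illustration of the path-condition idea only (not the actual Java PathFinder instrumentation), one can enumerate a small program's paths, record each path condition as a conjunction of branch constraints, and use brute-force search over a bounded integer domain as a stand-in for a real decision procedure:

```python
def classify(x):
    """Program under test: three feasible execution paths."""
    if x > 0:
        if x % 2 == 0:
            return "pos-even"
        return "pos-odd"
    return "non-pos"

# Each path, as (path condition, expected result along that path).
PATHS = [
    ([lambda x: x > 0, lambda x: x % 2 == 0], "pos-even"),
    ([lambda x: x > 0, lambda x: x % 2 != 0], "pos-odd"),
    ([lambda x: x <= 0],                      "non-pos"),
]

def witness(pc, domain=range(-50, 50)):
    """Find a concrete input satisfying a path condition -- a brute-force
    stand-in for the decision procedure a real symbolic executor would use."""
    return next((x for x in domain if all(c(x) for c in pc)), None)

# One concrete, non-redundant test input per feasible path.
test_inputs = [(witness(pc), expected) for pc, expected in PATHS]
```

This mirrors the second application above: covering every feasible path yields a small, non-isomorphic test suite rather than many inputs that exercise the same behavior.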
Developing a PLC-friendly state machine model: lessons learned
NASA Astrophysics Data System (ADS)
Pessemier, Wim; Deconinck, Geert; Raskin, Gert; Saey, Philippe; Van Winckel, Hans
2014-07-01
Modern Programmable Logic Controllers (PLCs) have become an attractive platform for controlling real-time aspects of astronomical telescopes and instruments due to their increased versatility, performance and standardization. Likewise, vendor-neutral middleware technologies such as OPC Unified Architecture (OPC UA) have recently demonstrated that they can greatly facilitate the integration of these industrial platforms into the overall control system. Many practical questions arise, however, when building multi-tiered control systems that consist of PLCs for low-level control, and conventional software and platforms for higher-level control. How should the PLC software be structured, so that it can rely on well-known programming paradigms on the one hand, and be mapped to a well-organized OPC UA interface on the other hand? Which programming languages of the IEC 61131-3 standard closely match the problem domains of the abstraction levels within this structure? How can the recent additions to the standard (such as the support for namespaces and object-oriented extensions) facilitate a model-based development approach? To what degree can our applications already take advantage of the more advanced parts of the OPC UA standard, such as the high expressiveness of the semantic modeling language that it defines, or the support for events, aggregation of data, automatic discovery, ...? What are the timing and concurrency problems to be expected for the higher-level tiers of the control system due to the cyclic execution of control and communication tasks by the PLCs? We try to answer these questions by demonstrating a semantic state machine model that can readily be implemented using IEC 61131 and OPC UA. It is a model that does not aim to capture all possible states of a system, but rather attempts to organize the coarse-grained structure and behaviour of a system.
In this paper we focus on the intricacies of this seemingly simple task, and on the lessons that we've learned during the development process of such a "PLC-friendly" state machine model.
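The paper's model itself is expressed in IEC 61131-3 and exposed over OPC UA; as a language-neutral sketch of the underlying idea of a coarse-grained state machine with explicit transitions (state and event names below are invented), consider:

```python
class StateMachine:
    """Minimal coarse-grained state machine: explicit states, and
    transitions keyed by (state, event). Unknown events are ignored,
    mirroring a PLC scan cycle that simply keeps its current state."""

    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # {(state, event): next_state}

    def dispatch(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# Invented example: a telescope mechanism with a fault/reset path.
TRANSITIONS = {
    ("IDLE",   "start"): "MOVING",
    ("MOVING", "done"):  "IDLE",
    ("MOVING", "fault"): "ERROR",
    ("ERROR",  "reset"): "IDLE",
}
sm = StateMachine("IDLE", TRANSITIONS)
```

In the multi-tiered setting the paper describes, the transition table would live in the PLC while the current state and the accepted events would be published as OPC UA nodes for the higher-level tiers to observe and command.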
ERIC Educational Resources Information Center
Fetaji, Bekim; Fetaji, Majlinda
2009-01-01
As a number of recent studies suggest applications of networked computers in education have very inconsistent results ranging from success stories to complete failures. Literally, thousands of e-learning projects have been carried out that greatly differ in their outcomes. Until now, however, there is no systematic or a standardized way of…
ERIC Educational Resources Information Center
Lippert, Margaret
2000-01-01
This abstract of a planned session on access to scientific and technical journals addresses policy and standard issues related to long-term archives; digital archiving models; economic factors; hardware and software issues; multi-publisher electronic journal content integration; format considerations; and future data migration needs. (LRW)
A real-time inverse quantised transform for multi-standard with dynamic resolution support
NASA Astrophysics Data System (ADS)
Sun, Chi-Chia; Lin, Chun-Ying; Zhang, Ce
2016-06-01
In this paper, a real-time configurable intellectual property (IP) core is presented for the image/video decoding process, compatible with both the MPEG-4 Visual and H.264/AVC standards. The unified inverse quantised discrete cosine and integer transform performs both the inverse quantised discrete cosine transform and the inverse quantised inverse integer transform using only shift and add operations. Meanwhile, the COordinate Rotation DIgital Computer (CORDIC) iterations and compensation steps are adjustable in order to trade video compression quality against data throughput. The implementations are embedded in the publicly available XviD codec 1.2.2 for the standard MPEG-4 Visual and in the H.264/AVC reference software JM 16.1, where the experimental results show that the balance between computational complexity and video compression quality is retained. Finally, FPGA synthesis results show that the proposed IP core offers low hardware cost and provides real-time performance for Full HD and 4K-2K video decoding.
A comprehensive quality control workflow for paired tumor-normal NGS experiments.
Schroeder, Christopher M; Hilke, Franz J; Löffler, Markus W; Bitzer, Michael; Lenz, Florian; Sturm, Marc
2017-06-01
Quality control (QC) is an important part of all NGS data analysis stages. Many available tools calculate QC metrics from different analysis steps of single-sample experiments (raw reads, mapped reads and variant lists). Multi-sample experiments, such as sequencing of tumor-normal pairs, require additional QC metrics to ensure the validity of results. These multi-sample QC metrics still lack standardization. We therefore suggest a new workflow for QC of DNA sequencing of tumor-normal pairs. With this workflow, well-known single-sample QC metrics and additional metrics specific to tumor-normal pairs can be calculated. The segmentation into different tools offers high flexibility and allows reuse for other purposes. All tools produce qcML, a generic XML format for QC of -omics experiments. qcML uses quality metrics defined in an ontology, which was adapted for NGS. All QC tools are implemented in C++ and run under both Linux and Windows. Plotting requires Python 2.7 and matplotlib. The software is available under the GNU General Public License version 2 as part of the ngs-bits project: https://github.com/imgag/ngs-bits. Contact: christopher.schroeder@med.uni-tuebingen.de. Supplementary data are available at Bioinformatics online.
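One example of the kind of pair-specific QC metric such a workflow targets is a sample-identity check: genotype concordance at common germline SNP sites between the tumor and its matched normal. The sketch below is a simplification of that general idea (the real tools work on BAM/VCF data; the site identifiers and genotype strings here are toy inventions):

```python
def genotype_concordance(normal, tumor):
    """Fraction of shared SNP sites with identical genotype calls in a
    tumor/normal pair. Somatic events perturb only a small fraction of
    common germline sites, so correctly matched pairs score near 1.0;
    a low value suggests a sample swap."""
    shared = set(normal) & set(tumor)
    if not shared:
        return 0.0
    agree = sum(normal[site] == tumor[site] for site in shared)
    return agree / len(shared)

# Toy genotype calls keyed by invented rs-style site identifiers.
normal  = {"rs1": "A/G", "rs2": "C/C", "rs3": "T/T", "rs4": "G/G"}
matched = {"rs1": "A/G", "rs2": "C/C", "rs3": "T/T", "rs4": "G/G"}
swapped = {"rs1": "G/G", "rs2": "C/T", "rs3": "T/A", "rs4": "A/G"}
```

In a qcML-style report this number would be emitted as one quality metric among many, with its definition anchored in the QC ontology.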
Ka-Band, Multi-Gigabit-Per-Second Transceiver
NASA Technical Reports Server (NTRS)
Simons, Rainee N.; Wintucky, Edwin G.; Smith, Francis J.; Harris, Johnny M.; Landon, David G.; Haddadin, Osama S.; McIntire, William K.; Sun, June Y.
2011-01-01
A document discusses a multi-Gigabit-per-second, Ka-band transceiver with a software-defined modem (SDM) capable of digitally encoding/decoding data and compensating for linear and nonlinear distortions in the end-to-end system, including the traveling-wave tube amplifier (TWTA). This innovation can increase data rates of space-to-ground communication links, and has potential application to NASA's future space-based Earth observation system. The SDM incorporates an extended version of the industry-standard DVB-S2 waveform and an LDPC rate 9/10 FEC codec. The SDM supports a suite of waveforms, including QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK, and 128-QAM. The Ka-band TWTA delivers an output power on the order of 200 W with efficiency greater than 60%, and a passband of at least 3 GHz. The modem and the TWTA together enable a data rate of 20 Gbps with a low bit error rate (BER). The payload data rates for spacecraft in NASA's integrated space communications network can be increased by an order of magnitude (>10x) over the current state of practice. This innovation enhances the data rate by using bandwidth-efficient modulation techniques, which transmit a higher number of bits per Hertz of bandwidth than the currently used quadrature phase shift keying (QPSK) waveforms.
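The quoted 20 Gbps is consistent with a back-of-envelope capacity check: bits per symbol times symbol rate times FEC code rate. A sketch of that arithmetic, in which the 3 Gbaud symbol rate is an assumption inferred from the stated 3 GHz passband rather than a figure from the document:

```python
import math

def info_rate_bps(symbol_rate_baud, constellation_points, code_rate):
    """Information rate of a coded modulation: log2(M) bits per symbol,
    scaled by the FEC code rate."""
    bits_per_symbol = math.log2(constellation_points)
    return symbol_rate_baud * bits_per_symbol * code_rate

# 128-QAM (7 bits/symbol) with the rate 9/10 LDPC over an assumed 3 Gbaud:
# about 18.9 Gbps, i.e. on the order of the 20 Gbps cited above.
rate = info_rate_bps(3e9, 128, 9 / 10)
```

The same function shows why QPSK (2 bits/symbol) caps the same passband an order of magnitude lower, which is the document's core argument for bandwidth-efficient modulation.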
BladeCAD: An Interactive Geometric Design Tool for Turbomachinery Blades
NASA Technical Reports Server (NTRS)
Miller, Perry L., IV; Oliver, James H.; Miller, David P.; Tweedt, Daniel L.
1996-01-01
A new methodology for interactive design of turbomachinery blades is presented. Software implementation of the methods provides a user interface that is intuitive to aero-designers while operating with standardized geometric forms. The primary contribution is that blade sections may be defined with respect to general surfaces of revolution, which may be defined to represent the path of fluid flow through the turbomachine. The completed blade design is represented as a non-uniform rational B-spline (NURBS) surface and is written to a standard IGES file, which is portable to most design, analysis, and manufacturing applications.
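A key property of the rational (weighted) form underlying NURBS is that it represents conics such as circular arcs exactly, which matters when blade sections ride on surfaces of revolution. As a minimal self-contained illustration of that property (my own sketch, not BladeCAD code), a quadratic rational Bézier with middle weight 1/sqrt(2) traces an exact quarter of the unit circle:

```python
import math

def rational_bezier(t, pts, wts):
    """Evaluate a rational Bezier curve at parameter t via the Bernstein
    basis: a weighted Bezier in homogeneous coordinates, projected back."""
    n = len(pts) - 1
    basis = [math.comb(n, i) * (1 - t) ** (n - i) * t ** i
             for i in range(n + 1)]
    w = sum(b * wt for b, wt in zip(basis, wts))
    x = sum(b * wt * p[0] for b, wt, p in zip(basis, wts, pts)) / w
    y = sum(b * wt * p[1] for b, wt, p in zip(basis, wts, pts)) / w
    return x, y

# Control polygon and weights for an exact quarter of the unit circle:
# every evaluated point satisfies x^2 + y^2 = 1.
PTS = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
WTS = [1.0, 1.0 / math.sqrt(2.0), 1.0]
```

A polynomial (unweighted) Bézier can only approximate the same arc, which is why NURBS, rather than plain B-splines, is the natural exchange form for IGES-bound geometry.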
gr-MRI: A software package for magnetic resonance imaging using software defined radios
NASA Astrophysics Data System (ADS)
Hasselwander, Christopher J.; Cao, Zhipeng; Grissom, William A.
2016-09-01
The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events, and the frequency with which the software recovered from those events, were also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.
TOUGH2_MP: A parallel version of TOUGH2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Keni; Wu, Yu-Shu; Ding, Chris
2003-04-09
TOUGH2_MP is a massively parallel version of TOUGH2. It was developed for running on distributed-memory parallel computers to simulate large problems that may not be solvable by the standard, single-CPU TOUGH2 code. The new code implements an efficient massively parallel scheme, while preserving the full capability and flexibility of the original TOUGH2 code. The new software uses the METIS software package for grid partitioning and the AZTEC software package for linear-equation solving. The standard message-passing interface (MPI) is adopted for communication among processors. Numerical performance of the current version of the code has been tested on CRAY T3E and IBM RS/6000 SP platforms. In addition, the parallel code has been successfully applied to real field problems of multi-million-cell simulations for three-dimensional multiphase and multicomponent fluid and heat flow, as well as solute transport. In this paper, we review the development of TOUGH2_MP and discuss its basic features, modules, and applications.
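METIS performs graph partitioning that balances the cell count per processor while minimizing the communication (edge cut) between partitions. As a much-simplified 1-D analogue of the load-balancing half of that job only (my own sketch, not the TOUGH2_MP partitioner), contiguous blocks of cells can be dealt out as evenly as possible:

```python
def block_partition(n_cells, n_procs):
    """Assign each processor a contiguous, near-equal block of cells,
    returned as half-open (start, end) index ranges. Real grid
    partitioners (METIS) work on the connectivity graph and also
    minimize inter-processor edge cut; this shows only load balance."""
    base, extra = divmod(n_cells, n_procs)
    sizes = [base + (1 if p < extra else 0) for p in range(n_procs)]
    bounds, start = [], 0
    for s in sizes:
        bounds.append((start, start + s))
        start += s
    return bounds

# A multi-million-cell grid split across 8 processors.
parts = block_partition(1_000_003, 8)
```

In the full code each processor would then own the equations for its cells and exchange only the boundary-cell values with its neighbors via MPI at each solver iteration.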
Prior, Fred W; Erickson, Bradley J; Tarbox, Lawrence
2007-11-01
The Cancer Bioinformatics Grid (caBIG) program was created by the National Cancer Institute to facilitate sharing of IT infrastructure, data, and applications among the National Cancer Institute-sponsored cancer research centers. The program was launched in February 2004 and now links more than 50 cancer centers. In April 2005, the In Vivo Imaging Workspace was added to promote the use of imaging in cancer clinical trials. At the inaugural meeting, four special interest groups (SIGs) were established. The Software SIG was charged with identifying projects that focus on open-source software for image visualization and analysis. To date, two projects have been defined by the Software SIG. The eXtensible Imaging Platform project has produced a rapid application development environment that researchers may use to create targeted workflows customized for specific research projects. The Algorithm Validation Tools project will provide a set of tools and data structures that will be used to capture measurement information and associated data needed to define a gold standard for a given database, against which change-analysis algorithms can be tested. Through these and future efforts, the caBIG In Vivo Imaging Workspace Software SIG endeavors to advance imaging informatics and provide new open-source software tools to advance cancer research.
XML-based scripting of multimodality image presentations in multidisciplinary clinical conferences
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Allada, Vivekanand; Dahlbom, Magdalena; Marcus, Phillip; Fine, Ian; Lapstra, Lorelle
2002-05-01
We developed a multi-modality image presentation software for display and analysis of images and related data from different imaging modalities. The software is part of a cardiac image review and presentation platform that supports integration of digital images and data from digital and analog media such as videotapes, analog x-ray films and 35 mm cine films. The software supports standard DICOM image files as well as AVI and PDF data formats. The system is integrated in a digital conferencing room that includes projections of digital and analog sources, remote videoconferencing capabilities, and an electronic whiteboard. The goal of this pilot project is to: 1) develop a new paradigm for image and data management for presentation in a clinically meaningful sequence adapted to case-specific scenarios, 2) design and implement a multi-modality review and conferencing workstation using component technology and customizable 'plug-in' architecture to support complex review and diagnostic tasks applicable to all cardiac imaging modalities and 3) develop an XML-based scripting model of image and data presentation for clinical review and decision making during routine clinical tasks and multidisciplinary clinical conferences.
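The paper does not publish its presentation-script schema, so purely as a hypothetical illustration of what an XML-scripted, case-specific presentation sequence might look like (element and attribute names below are invented), a script can be parsed into an ordered playlist of steps:

```python
import xml.etree.ElementTree as ET

# Hypothetical presentation script -- invented schema for illustration.
SCRIPT = """\
<presentation case="cardiac-001">
  <step order="1" modality="echo"  source="echo_lax.avi" label="Echo: parasternal long axis"/>
  <step order="2" modality="angio" source="cath_rao.dcm" label="Cath: RAO 30 projection"/>
  <step order="3" modality="pet"   source="fdg_rest.dcm" label="PET: resting perfusion"/>
</presentation>"""

def load_sequence(xml_text):
    """Parse a script into an ordered list of (modality, label) steps,
    the sequence a conference presenter would step through."""
    root = ET.fromstring(xml_text)
    steps = sorted(root.findall("step"), key=lambda s: int(s.get("order")))
    return [(s.get("modality"), s.get("label")) for s in steps]

sequence = load_sequence(SCRIPT)
```

A plug-in viewer per modality (DICOM, AVI, PDF) could then be dispatched on the `modality` attribute, which reflects the component architecture the paper describes.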
Integration of genomic and medical data into a 3D atlas of human anatomy.
Turinsky, Andrei L; Fanea, Elena; Trinh, Quang; Dong, Xiaoli; Stromer, Julie N; Shu, Xueling; Wat, Stephen; Hallgrímsson, Benedikt; Hill, Jonathan W; Edwards, Carol; Grosenick, Brenda; Yajima, Masumi; Sensen, Christoph W
2008-01-01
We have developed a framework for the visual integration and exploration of multi-scale biomedical data, which includes anatomical and molecular components. We have also created a Java-based software system that integrates molecular information, such as gene expression data, into a three-dimensional digital atlas of the male adult human anatomy. Our atlas is structured according to the Terminologia Anatomica. The underlying data-indexing mechanism uses open standards and semantic ontology-processing tools to establish the associations between heterogeneous data types. The software system makes an extensive use of virtual reality visualization.
The integration of the risk management process with the lifecycle of medical device software.
Pecoraro, F; Luzi, D
2014-01-01
The application of software in the Medical Device (MD) domain has become central to the improvement of diagnoses and treatments. The new European regulations that specifically address software as an important component of MDs require complex procedures to make software compliant with safety requirements, thereby introducing new challenges in the qualification and classification of MD software as well as in the performance of risk management activities. Under this perspective, the aim of this paper is to propose an integrated framework that combines the activities to be carried out by the manufacturer to develop safe software within the development lifecycle, based on the regulatory requirements reported in US and European regulations as well as in the relevant standards and guidelines. A comparative analysis was carried out to identify the main issues related to the application of the current new regulations. In addition, standards and guidelines recently released to harmonise procedures for the validation of MD software have been used to define the risk management activities to be carried out by the manufacturer during the software development process. This paper highlights the main issues related to the qualification and classification of MD software, providing an analysis of the different regulations applied in Europe and the US. A model that integrates the risk management process within the software development lifecycle is also proposed. It is based on regulatory requirements and treats software risk analysis as a central input to be managed by the manufacturer from the initial stages of software design, in order to prevent MD failures. Relevant changes in the process of MD development have been introduced with the recognition of software as an important component of MDs, as stated in regulations and standards.
This implies highly iterative processes that integrate risk management into the framework of software development. It also makes it necessary to involve both medical and software engineering competences to safeguard patient and user safety.
Bonaretti, Serena; Vilayphiou, Nicolas; Chan, Caroline Mai; Yu, Andrew; Nishiyama, Kyle; Liu, Danmei; Boutroy, Stephanie; Ghasem-Zadeh, Ali; Boyd, Steven K.; Chapurlat, Roland; McKay, Heather; Shane, Elizabeth; Bouxsein, Mary L.; Black, Dennis M.; Majumdar, Sharmila; Orwoll, Eric S.; Lang, Thomas F.; Khosla, Sundeep; Burghardt, Andrew J.
2017-01-01
Introduction HR-pQCT is increasingly used to assess bone quality, fracture risk and anti-fracture interventions. The contribution of the operator has not been adequately accounted for in measurement precision. Operators acquire a 2D projection (“scout view image”) and define the region to be scanned by positioning a “reference line” on a standard anatomical landmark. In this study, we (i) evaluated the contribution of positioning variability to in vivo measurement precision, (ii) measured intra- and inter-operator positioning variability, and (iii) tested if custom training software led to superior reproducibility in new operators compared to experienced operators. Methods To evaluate the operator's contribution to in vivo measurement precision, we compared precision errors calculated in 64 co-registered and non-co-registered scan-rescan images. To quantify operator variability, we developed software that simulates the positioning process of the scanner's software. Eight experienced operators positioned reference lines on scout view images designed to test intra- and inter-operator reproducibility. Finally, we developed modules for training and evaluation of reference line positioning. We enrolled 6 new operators to participate in a common training, followed by the same reproducibility experiments performed by the experienced group. Results In vivo precision errors were up to three-fold greater (Tt.BMD and Ct.Th) when variability in scan positioning was included. Inter-operator precision errors were significantly greater than short-term intra-operator precision (p<0.001). New trained operators achieved comparable intra-operator reproducibility to experienced operators, and lower inter-operator reproducibility (p<0.001). Precision errors were significantly greater for the radius than for the tibia. Conclusion Operator reference line positioning contributes significantly to in vivo measurement precision and is significantly greater for multi-operator datasets.
Inter-operator variability can be significantly reduced using a systematic training platform, now available online (http://webapps.radiology.ucsf.edu/refline/). PMID:27475931
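Precision errors of this kind are conventionally summarized as a root-mean-square standard deviation (RMS-SD) or coefficient of variation (%CV) over scan-rescan pairs. The abstract does not give the exact formula used in the study, so the following is only a minimal sketch of the standard duplicate-measurement calculation; the function name and example values are illustrative.

```python
import math

def rms_precision(pairs):
    """RMS-SD and %CV for duplicate (scan, rescan) measurements.

    For duplicate measurements the per-subject SD is |x1 - x2| / sqrt(2).
    """
    n = len(pairs)
    rms_sd = math.sqrt(sum((x1 - x2) ** 2 / 2 for x1, x2 in pairs) / n)
    grand_mean = sum(x1 + x2 for x1, x2 in pairs) / (2 * n)
    return rms_sd, 100.0 * rms_sd / grand_mean

# Three hypothetical subjects, each measured twice (arbitrary units)
rms_sd, cv = rms_precision([(100, 102), (98, 100), (101, 99)])
```

With these toy values the RMS-SD is sqrt(2) ≈ 1.41 units, i.e. a 1.41 %CV about a grand mean of 100.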
Development of a Multi-Channel, High Frequency QRS Electrocardiograph
NASA Technical Reports Server (NTRS)
DePalma, Jude L.
2003-01-01
With the advent of the ISS era and the potential requirement for increased cardiovascular monitoring of crewmembers during extended EVAs, NASA flight surgeons would stand to benefit from an evolving technology that allows for a more rapid diagnosis of myocardial ischemia compared to standard electrocardiography. Similarly, during the astronaut selection process, NASA flight surgeons and other physicians would also stand to benefit from a completely noninvasive technology that, either at rest or during maximal exercise tests, is more sensitive than standard ECG in identifying the presence of ischemia. Perhaps most importantly, practicing cardiologists and emergency medicine physicians could greatly benefit from such a device, as it could augment (or even replace) standard electrocardiography in settings where the rapid diagnosis of myocardial ischemia (or the lack thereof) is required for proper clinical decision-making. A multi-channel, high-frequency QRS electrocardiograph is currently under development in the Life Sciences Research Laboratories at JSC. Specifically, the project consisted of writing software code, some of which contained specially designed digital filters, to be incorporated into an existing commercial software program that already collects, plots and analyzes conventional 12-lead ECG signals on a desktop, portable or palm PC. The software will derive the high-frequency QRS signals, which will be analyzed (in numerous ways) and plotted alongside the conventional ECG signals, giving the PC-viewing clinician advanced diagnostic information never previously available in all 12 ECG leads simultaneously. After the hardware and software for the advanced digital ECG monitor have been fully integrated, plans are to use the monitor to begin clinical studies on healthy subjects and on patients with known coronary artery disease in both the outpatient and hospital settings. 
The ultimate goal is to get the technology out into the clinical world, where it has the potential to save lives.
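High-frequency QRS analysis typically isolates an upper band of the QRS complex (a 150-250 Hz band is commonly cited in the HF-QRS literature). The project's actual filter designs are not described in the abstract; the sketch below is only a generic windowed-sinc band-pass FIR of the kind such software might use, with all parameter values (sampling rate, band edges, tap count) chosen for illustration.

```python
import math

def bandpass_fir(lo_hz, hi_hz, fs, ntaps=101):
    """Windowed-sinc band-pass FIR taps, built as the difference of two
    unit-DC-gain low-pass kernels (Hamming-windowed)."""
    def lowpass(fc):
        m = ntaps - 1
        h = []
        for n in range(ntaps):
            x = n - m / 2
            v = 2 * fc / fs if x == 0 else math.sin(2 * math.pi * fc * x / fs) / (math.pi * x)
            v *= 0.54 - 0.46 * math.cos(2 * math.pi * n / m)  # Hamming window
            h.append(v)
        s = sum(h)
        return [v / s for v in h]  # normalise to unit gain at DC
    lp_hi, lp_lo = lowpass(hi_hz), lowpass(lo_hz)
    return [a - b for a, b in zip(lp_hi, lp_lo)]

fs = 1000  # assumed ECG sampling rate (Hz)
taps = bandpass_fir(150, 250, fs)

# Frequency response checks: zero gain at DC, ~unity gain at band centre
gain_dc = abs(sum(taps))
re = sum(h * math.cos(2 * math.pi * 200 * n / fs) for n, h in enumerate(taps))
im = sum(h * math.sin(2 * math.pi * 200 * n / fs) for n, h in enumerate(taps))
gain_200 = math.hypot(re, im)
```

By construction the taps sum to zero, so the slow ST-segment content is rejected while the 200 Hz band centre passes at roughly unit gain.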
Towards interoperable and reproducible QSAR analyses: Exchange of datasets.
Spjuth, Ola; Willighagen, Egon L; Guha, Rajarshi; Eklund, Martin; Wikberg, Jarl Es
2010-06-30
QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises addition of chemical structures as well as selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constraining collaborations and re-use of data. We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML) which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusions regarding descriptors by defining them crisply. 
This makes it easy to join, extend, and combine datasets and hence work collectively, and also allows for analyzing the effect descriptors have on the statistical model's performance. The presented Bioclipse plugins equip scientists with graphical tools that make QSAR-ML easily accessible for the community.
Towards interoperable and reproducible QSAR analyses: Exchange of datasets
2010-01-01
Background: QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises addition of chemical structures as well as selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constraining collaborations and re-use of data. Results: We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML) which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Conclusions: Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusions regarding descriptors by defining them crisply. 
This makes it easy to join, extend, and combine datasets and hence work collectively, and also allows for analyzing the effect descriptors have on the statistical model's performance. The presented Bioclipse plugins equip scientists with graphical tools that make QSAR-ML easily accessible for the community. PMID:20591161
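The central idea of QSAR-ML is that a dataset records not just descriptor values but exactly which descriptor implementation and version produced them. The sketch below illustrates that idea with Python's standard `xml.etree.ElementTree`; the element and attribute names are hypothetical stand-ins, not the real QSAR-ML schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical element names -- the actual QSAR-ML schema may differ.
root = ET.Element("qsarDataset", version="1.0")
ET.SubElement(root, "descriptor",
              id="cdk:ALOGP",                 # ontology identifier (assumed form)
              implementation="CDK",
              implementationVersion="1.2.3")  # pinned software version
struct = ET.SubElement(root, "structure", id="mol1")
ET.SubElement(struct, "value", descriptorRef="cdk:ALOGP").text = "1.27"

xml_text = ET.tostring(root, encoding="unicode")

# Round-trip: a reader can recover exactly which descriptor version was used
parsed = ET.fromstring(xml_text)
version_used = parsed.find("descriptor").get("implementationVersion")
```

Because the version travels with the data, a second lab can reproduce the descriptor calculation instead of guessing which software release was installed.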
RSA cryptography and multi prime RSA cryptography
NASA Astrophysics Data System (ADS)
Sani, Nur Atiqah Abdul; Kamarulhaili, Hailiza
2017-08-01
RSA cryptography is one of the most powerful and popular cryptosystems and remains in wide use today. One variant of RSA cryptography is Multi Prime RSA (MPRSA) cryptography, an improved version of RSA: only a few steps of the key generation need to be modified, and the Chinese Remainder Theorem (CRT) is applied in the decryption part to obtain the MPRSA algorithm. The focus of this research is to compare standard RSA cryptography and MPRSA cryptography in several respects. The research shows that MPRSA cryptography is more efficient than RSA cryptography. A timing analysis conducted with Mathematica software confirms that MPRSA cryptography takes less computational time than RSA cryptography. Mathematica version 9.0 and an HP ProBook 4331s laptop were used to measure the timings and to implement both algorithms.
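The abstract's description can be made concrete: MPRSA uses three or more primes in the modulus, and decryption does one small exponentiation per prime before recombining the residues with the CRT. A sketch with three toy primes follows (real keys use primes of hundreds of bits; these sizes are for illustration only).

```python
# Multi-prime RSA with three primes and CRT decryption (toy parameters).
p, q, r = 61, 53, 71
n = p * q * r
phi = (p - 1) * (q - 1) * (r - 1)
e = 65537
d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt_crt(c):
    # One reduced-size exponentiation per prime (Fermat: d mod (x - 1))...
    mp, mq, mr = (pow(c, d % (x - 1), x) for x in (p, q, r))
    # ...then recombine the three residues with the Chinese Remainder Theorem.
    m = 0
    for mi, x in ((mp, p), (mq, q), (mr, r)):
        nx = n // x
        m += mi * nx * pow(nx, -1, x)
    return m % n

c = encrypt(42)
plain = decrypt_crt(c)
```

The speedup comes from replacing one full-size exponentiation mod n with several exponentiations modulo much smaller primes, which is exactly where the paper's timing advantage originates.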
Integration of XNAT/PACS, DICOM, and Research Software for Automated Multi-modal Image Analysis.
Gao, Yurui; Burns, Scott S; Lauzon, Carolyn B; Fong, Andrew E; James, Terry A; Lubar, Joel F; Thatcher, Robert W; Twillie, David A; Wirt, Michael D; Zola, Marc A; Logan, Bret W; Anderson, Adam W; Landman, Bennett A
2013-03-29
Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software.
Integration of XNAT/PACS, DICOM, and research software for automated multi-modal image analysis
NASA Astrophysics Data System (ADS)
Gao, Yurui; Burns, Scott S.; Lauzon, Carolyn B.; Fong, Andrew E.; James, Terry A.; Lubar, Joel F.; Thatcher, Robert W.; Twillie, David A.; Wirt, Michael D.; Zola, Marc A.; Logan, Bret W.; Anderson, Adam W.; Landman, Bennett A.
2013-03-01
Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software.
Integration of XNAT/PACS, DICOM, and Research Software for Automated Multi-modal Image Analysis
Gao, Yurui; Burns, Scott S.; Lauzon, Carolyn B.; Fong, Andrew E.; James, Terry A.; Lubar, Joel F.; Thatcher, Robert W.; Twillie, David A.; Wirt, Michael D.; Zola, Marc A.; Logan, Bret W.; Anderson, Adam W.; Landman, Bennett A.
2013-01-01
Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software. PMID:24386548
GNSS CORS hardware and software enabling new science
NASA Astrophysics Data System (ADS)
Drummond, P.
2009-12-01
GNSS CORS networks are enabling new opportunities for science and for public- and private-sector business. This paper explores how the newest geodetic monitoring software and GNSS receiver hardware from Trimble Navigation Ltd are enabling new science, and surveys the associated technology trends and science opportunities. These trends include the installation of active GNSS control, automation of observations and processing, and the advantages of multi-observable and multi-constellation observations, all achieved with off-the-shelf products and industry-standard open-source data formats. The possibility of moving science from an after-the-fact post-processed model to a real-time epoch-by-epoch solution is also explored. This presentation also discusses combining existing GNSS CORS networks with project-specific installations used for monitoring. Experience shows that GNSS can provide higher-resolution data than previous methods, offering new tools for science, decision makers and financial planners.
The U.S. Environmental Protection Agency (EPA) is developing a comprehensive environmental exposure and risk analysis software system for agency-wide application using the methodology of a Multi-media, Multi-pathway, Multi-receptor Risk Assessment (3MRA) model. This software sys...
Controlling CAMAC instrumentation through the USB port
NASA Astrophysics Data System (ADS)
Ribas, R. V.
2012-02-01
A programmable device that interfaces CAMAC instrumentation to the USB port of computers, without the need for heavy, noisy and expensive CAMAC crates, is described in this article. Up to four single-width modules can be used. All software necessary for a multi-parametric data acquisition system was also developed. A standard crate controller based on the same project is being designed.
ScaMo: Realisation of an OO-functional DSL for cross platform mobile applications development
NASA Astrophysics Data System (ADS)
Macos, Dragan; Solymosi, Andreas
2013-10-01
The software market is changing dynamically: the Internet is going mobile, and software applications are shifting from desktop hardware onto mobile devices. The largest markets are mobile applications for iOS, Android and Windows Phone, for which the typical programming languages are Objective-C, Java and C#, respectively. Realizing native applications implies integrating the developed software into the environments of the mentioned mobile operating systems to enable access to different hardware components of the devices: GPS module, display, GSM module, etc. This paper deals with the definition and a possible implementation of an environment for automatic application generation for multiple mobile platforms. It is based on a DSL for mobile application development comprising the programming language Scala and a DSL defined in Scala. As part of a multi-stage cross-compiling algorithm, this language is translated into the language of the target mobile platform. The advantage of our method lies in the expressiveness of the defined language and the transparent source-code translation between different languages, which brings, for example, advantages in debugging and in developing the generated code.
Testing of CMA-2000 Microwave Landing System (MLS) airborne receiver
NASA Astrophysics Data System (ADS)
Labreche, L.; Murfin, A. J.
1989-09-01
The microwave landing system (MLS) is a precision approach and landing guidance system that provides position information and various air-to-ground data. Position information is provided over a wide coverage sector and is determined by an azimuth angle measurement, an elevation angle measurement, and a range measurement. MLS performance standards and testing of the MLS airborne receiver are mainly governed by Technical Standard Order TSO-C104, issued by the Federal Aviation Administration. This TSO defines detailed test procedures for use in determining the required performance under standard and stressed conditions. It also imposes disciplines on software development and testing procedures. Testing performed on the CMA-2000 MLS receiver and the methods used in its validation are described. A computer-automated test system has been developed to test for compliance with RTCA/DO-177 Minimum Operational Performance Standards. Extensive software verification and traceability tests designed to ensure compliance with RTCA/DO-178 are outlined.
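The three MLS measurements named above (azimuth angle, elevation angle, range) fully determine the aircraft's position relative to the ground station. A hedged sketch of that conversion to local Cartesian coordinates follows; the axis convention (north along the runway centreline, azimuth measured east of it, elevation above the horizon) is assumed for illustration and is not taken from the source.

```python
import math

def mls_position(azimuth_deg, elevation_deg, range_m):
    """Convert MLS azimuth/elevation/range to local ENU-style coordinates.

    Axis convention assumed for illustration only.
    """
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    east = range_m * math.cos(el) * math.sin(az)
    north = range_m * math.cos(el) * math.cos(az)
    up = range_m * math.sin(el)
    return east, north, up

# Aircraft 1000 m out, 30 degrees right of centreline, 10 degrees above horizon
e_, n_, u_ = mls_position(30.0, 10.0, 1000.0)
```

Whatever convention is chosen, the three coordinates recombine to the measured slant range, which is a useful sanity check in receiver test software.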
MultiMetEval: Comparative and Multi-Objective Analysis of Genome-Scale Metabolic Models
Gevorgyan, Albert; Kierzek, Andrzej M.; Breitling, Rainer; Takano, Eriko
2012-01-01
Comparative metabolic modelling is emerging as a novel field, supported by the development of reliable and standardized approaches for constructing genome-scale metabolic models in high throughput. New software solutions are needed to allow efficient comparative analysis of multiple models in the context of multiple cellular objectives. Here, we present the user-friendly software framework Multi-Metabolic Evaluator (MultiMetEval), built upon SurreyFBA, which allows the user to compose collections of metabolic models that together can be subjected to flux balance analysis. Additionally, MultiMetEval implements functionalities for multi-objective analysis by calculating the Pareto front between two cellular objectives. Using a previously generated dataset of 38 actinobacterial genome-scale metabolic models, we show how these approaches can lead to exciting novel insights. Firstly, after incorporating several pathways for the biosynthesis of natural products into each of these models, comparative flux balance analysis predicted that species like Streptomyces that harbour the highest diversity of secondary metabolite biosynthetic gene clusters in their genomes do not necessarily have the metabolic network topology most suitable for compound overproduction. Secondly, multi-objective analysis of biomass production and natural product biosynthesis in these actinobacteria shows that the well-studied occurrence of discrete metabolic switches during the change of cellular objectives is inherent to their metabolic network architecture. Comparative and multi-objective modelling can lead to insights that could not be obtained by normal flux balance analyses. MultiMetEval provides a powerful platform that makes these analyses straightforward for biologists. Sources and binaries of MultiMetEval are freely available from https://github.com/PiotrZakrzewski/MetEval/downloads. PMID:23272111
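The Pareto front between two cellular objectives, as computed by MultiMetEval, is the set of flux states where neither objective can be improved without worsening the other. MultiMetEval's own implementation is not shown in the abstract; the snippet below is a generic non-dominated-set sketch over hypothetical (biomass flux, product flux) pairs, both maximised.

```python
def pareto_front(points):
    """Return the non-dominated points when both objectives are maximised."""
    front = []
    for a in points:
        dominated = any(
            b != a and b[0] >= a[0] and b[1] >= a[1] and (b[0] > a[0] or b[1] > a[1])
            for b in points
        )
        if not dominated:
            front.append(a)
    return front

# Hypothetical (biomass flux, product flux) pairs from repeated FBA runs
front = pareto_front([(1, 5), (2, 4), (3, 3), (2, 2), (4, 1)])
```

Here (2, 2) is dominated by (2, 4) and drops out; the remaining four points trace the trade-off curve between growth and natural product biosynthesis.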
NASA Technical Reports Server (NTRS)
Weeks, Cindy Lou
1986-01-01
Experiments were conducted at NASA Ames Research Center to define multi-tasking software requirements for multiple-instruction, multiple-data stream (MIMD) computer architectures. The focus was on specifying solutions for algorithms in the field of computational fluid dynamics (CFD). The program objectives were to allow researchers to produce usable parallel application software as soon as possible after acquiring MIMD computer equipment, to provide researchers with an easy-to-learn and easy-to-use parallel software language which could be implemented on several different MIMD machines, and to enable researchers to list preferred design specifications for future MIMD computer architectures. Analysis of CFD algorithms indicated that extensions of an existing programming language, adaptable to new computer architectures, provided the best solution to meeting program objectives. The CoFORTRAN Language was written in response to these objectives and to provide researchers a means to experiment with parallel software solutions to CFD algorithms on machines with parallel architectures.
Space Generic Open Avionics Architecture (SGOAA) standard specification
NASA Technical Reports Server (NTRS)
Wray, Richard B.; Stovall, John R.
1993-01-01
The purpose of this standard is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of a specific avionics hardware/software system. This standard defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.
Standard classification of software documentation
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1976-01-01
General conceptual requirements are defined for standard levels of documentation and for the application of these requirements to intended usages. These standards encourage the policy of producing only those forms of documentation that are needed and adequate for the purpose. Documentation standards are defined with respect to detail and format quality. Classes A through D range, in order, from the most definitive down to the least definitive, and categories 1 through 4 range, in order, from high-quality typeset down to handwritten material. Criteria for each of the classes and categories, as well as suggested selection guidelines for each, are given.
47 CFR 2.944 - Software defined radios.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 1 2013-10-01 2013-10-01 false Software defined radios. 2.944 Section 2.944... Authorization § 2.944 Software defined radios. (a) Manufacturers must take steps to ensure that only software that has been approved with a software defined radio can be loaded into the radio. The software must...
47 CFR 2.944 - Software defined radios.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Software defined radios. 2.944 Section 2.944... Authorization § 2.944 Software defined radios. (a) Manufacturers must take steps to ensure that only software that has been approved with a software defined radio can be loaded into the radio. The software must...
47 CFR 2.944 - Software defined radios.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 1 2012-10-01 2012-10-01 false Software defined radios. 2.944 Section 2.944... Authorization § 2.944 Software defined radios. (a) Manufacturers must take steps to ensure that only software that has been approved with a software defined radio can be loaded into the radio. The software must...
47 CFR 2.944 - Software defined radios.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Software defined radios. 2.944 Section 2.944... Authorization § 2.944 Software defined radios. (a) Manufacturers must take steps to ensure that only software that has been approved with a software defined radio can be loaded into the radio. The software must...
47 CFR 2.944 - Software defined radios.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 1 2014-10-01 2014-10-01 false Software defined radios. 2.944 Section 2.944... Authorization § 2.944 Software defined radios. (a) Manufacturers must take steps to ensure that only software that has been approved with a software defined radio can be loaded into the radio. The software must...
NASA Technical Reports Server (NTRS)
2012-01-01
The topics include: 1) Spectral Profiler Probe for In Situ Snow Grain Size and Composition Stratigraphy; 2) Portable Fourier Transform Spectroscopy for Analysis of Surface Contamination and Quality Control; 3) In Situ Geochemical Analysis and Age Dating of Rocks Using Laser Ablation-Miniature Mass Spectrometer; 4) Physics Mining of Multi-Source Data Sets; 5) Photogrammetry Tool for Forensic Analysis; 6) Connect Global Positioning System RF Module; 7) Simple Cell Balance Circuit; 8) Miniature EVA Software Defined Radio; 9) Remotely Accessible Testbed for Software Defined Radio Development; 10) System-of-Systems Technology-Portfolio-Analysis Tool; 11) VESGEN Software for Mapping and Quantification of Vascular Regulators; 12) Constructing a Database From Multiple 2D Images for Camera Pose Estimation and Robot Localization; 13) Adaption of G-TAG Software for Validating Touch and Go Asteroid Sample Return Design Methodology; 14) 3D Visualization for Phoenix Mars Lander Science Operations; 15) RxGen General Optical Model Prescription Generator; 16) Carbon Nanotube Bonding Strength Enhancement Using Metal Wicking Process; 17) Multi-Layer Far-Infrared Component Technology; 18) Germanium Lift-Off Masks for Thin Metal Film Patterning; 19) Sealing Materials for Use in Vacuum at High Temperatures; 20) Radiation Shielding System Using a Composite of Carbon Nanotubes Loaded With Electropolymers; 21) Nano Sponges for Drug Delivery and Medicinal Applications; 22) Molecular Technique to Understand Deep Microbial Diversity; 23) Methods and Compositions Based on Culturing Microorganisms in Low Sedimental Fluid Shear Conditions; 24) Secure Peer-to-Peer Networks for Scientific Information Sharing; 25) Multiplexer/Demultiplexer Loading Tool (MDMLT); 26) High-Rate Data-Capture for an Airborne Lidar System; 27) Wavefront Sensing Analysis of Grazing Incidence Optical Systems; 28) Foam-on-Tile Damage Model; 29) Instrument Package Manipulation Through the Generation and Use of an 
Attenuated-Fluent Gas Fold; 30) Multicolor Detectors for Ultrasensitive Long-Wave Imaging Cameras; 31) Lunar Reconnaissance Orbiter (LRO) Command and Data Handling Flight Electronics Subsystem; and 32) Electro-Optic Segment-Segment Sensors for Radio and Optical Telescopes.
NASA Technical Reports Server (NTRS)
2013-01-01
Topics include: Cloud Absorption Radiometer Autonomous Navigation System - CANS, Software Method for Computed Tomography Cylinder Data Unwrapping, Re-slicing, and Analysis, Discrete Data Qualification System and Method Comprising Noise Series Fault Detection, Simple Laser Communications Terminal for Downlink from Earth Orbit at Rates Exceeding 10 Gb/s, Application Program Interface for the Orion Aerodynamics Database, Hyperspectral Imager-Tracker, Web Application Software for Ground Operations Planning Database (GOPDb) Management, Software Defined Radio with Parallelized Software Architecture, Compact Radar Transceiver with Included Calibration, Phase Change Material Thermal Power Generator, The Thermal Hogan - A Means of Surviving the Lunar Night, Micromachined Active Magnetic Regenerator for Low-Temperature Magnetic Coolers, Nano-Ceramic Coated Plastics, Preparation of a Bimetal Using Mechanical Alloying for Environmental or Industrial Use, Phase Change Material for Temperature Control of Imager or Sounder on GOES Type Satellites in GEO, Dual-Compartment Inflatable Suitlock, Modular Connector Keying Concept, Genesis Ultrapure Water Megasonic Wafer Spin Cleaner, Piezoelectrically Initiated Pyrotechnic Igniter, Folding Elastic Thermal Surface - FETS, Multi-Pass Quadrupole Mass Analyzer, Lunar Sulfur Capture System, Environmental Qualification of a Single-Crystal Silicon Mirror for Spaceflight Use, Planar Superconducting Millimeter-Wave/Terahertz Channelizing Filter, Qualification of UHF Antenna for Extreme Martian Thermal Environments, Ensemble Eclipse: A Process for Prefab Development Environment for the Ensemble Project, ISS Live!, Space Operations Learning Center (SOLC) iPhone/iPad Application, Software to Compare NPP HDF5 Data Files, Planetary Data Systems (PDS) Imaging Node Atlas II, Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit, Translating MAPGEN to ASPEN for 
MER, Support Routines for In Situ Image Processing, and Semi-Supervised Eigenbasis Novelty Detection.
On the formalization of multi-scale and multi-science processes for integrative biology
Díaz-Zuccarini, Vanessa; Pichardo-Almarza, César
2011-01-01
The aim of this work is to introduce the general concept of ‘Bond Graph’ (BG) techniques applied in the context of multi-physics and multi-scale processes. BG modelling has a natural place in these developments. BGs are inherently coherent as the relationships defined between the ‘elements’ of the graph are strictly defined by causality rules and power (energy) conservation. BGs clearly show how power flows between components of the systems they represent. The ‘effort’ and ‘flow’ variables enable bidirectional information flow in the BG model. When the power level of a system is low, BGs degenerate into signal flow graphs in which information is mainly one-dimensional and power is minimal, i.e. they find a natural limitation when dealing with populations of individuals or purely kinetic models, as the concept of energy conservation in these systems is no longer relevant. The aim of this work is twofold: on the one hand, we will introduce the general concept of BG techniques applied in the context of multi-science and multi-scale models and, on the other hand, we will highlight some of the most promising features in the BG methodology by comparing with examples developed using well-established modelling techniques/software that could suggest developments or refinements to the current state-of-the-art tools, by providing a consistent framework from a structural and energetic point of view. PMID:22670211
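The effort/flow pairing and energy bookkeeping described above can be illustrated with the simplest possible bond graph: a C element (capacitor) discharging through an R element (resistor), with effort as voltage and flow as current. This electrical-domain example is the author's illustration, not drawn from the paper.

```python
import math

# Minimal bond-graph-style simulation: capacitor (C element) discharging
# through a resistor (R element). Effort e = voltage, flow f = current;
# the power exchanged over each bond is e * f, and causality fixes which
# element imposes effort and which returns flow.
R, C = 1.0, 1.0
e_C = 1.0                      # initial capacitor effort (volts)
dt, steps = 0.001, 1000        # forward-Euler integration to t = 1 s
for _ in range(steps):
    f = e_C / R                # R element constitutive law: e = R * f
    e_C -= f / C * dt          # C element: de/dt = -f / C (discharge)

analytic = math.exp(-1.0)      # closed form e0 * exp(-t / (R*C)) at t = 1
```

The simulated effort tracks the analytic exponential decay; the same two constitutive laws, applied in another energy domain (hydraulic, mechanical), give the same graph structure, which is the portability the BG formalism promises.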
Design and Implementation of a Mobile Phone Locator Using Software Defined Radio
2007-09-01
time difference of arrival 15. NUMBER OF PAGES 116 16. PRICE CODE 17. SECURITY CLASSIFICATION OF REPORT Unclassified 18. SECURITY CLASSIFICATION OF...THIS PAGE Unclassified 19. SECURITY CLASSIFICATION OF ABSTRACT Unclassified 20. LIMITATION OF ABSTRACT UU NSN 7540012805500 Standard Form 298...relatively inexpensive device called the Universal Software Radio Peripheral (USRP). The USRP consists of a motherboard which performs the analog-to
Benchmarking the ATLAS software through the Kit Validation engine
NASA Astrophysics Data System (ADS)
De Salvo, Alessandro; Brasolin, Franco
2010-04-01
The measurement of experiment software performance is a very important metric for choosing the most effective resources to use and for discovering bottlenecks in the code implementation. In this work we present the benchmark techniques used to measure ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit Validation. The performance measurements, the data collection, and the online analysis and display of the results will be presented. The results of the measurements on different platforms and architectures will be shown, giving a full report on the CPU power and memory consumption of the Monte Carlo generation, simulation, digitization and reconstruction of the most CPU-intensive channels. The impact of multi-core computing on ATLAS software performance will also be presented, comparing the behavior of different architectures when increasing the number of concurrent processes. The benchmark techniques described in this paper have been used in the HEPiX group since the beginning of 2008 to help define the performance metrics for High Energy Physics applications, based on the real experiment software.
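The core of any such benchmark is repeated timing of a CPU-bound workload under controlled conditions. The Kit Validation engine itself is not reproduced here; the following is only a generic sketch of that pattern, with a toy workload standing in for event generation or reconstruction.

```python
import time

def benchmark(fn, *args, repeats=5):
    """Best wall-clock time over several runs (seconds); taking the minimum
    suppresses one-off interference from other processes."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

# Toy CPU-bound workload standing in for a reconstruction job
def workload(n):
    return sum(i * i for i in range(n))

t = benchmark(workload, 100_000)
```

Running the same harness with an increasing number of concurrent processes, as the paper does, then exposes how throughput scales on multi-core hosts.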
Reconfigurable firmware-defined radios synthesized from standard digital logic cells
NASA Astrophysics Data System (ADS)
Faisal, Muhammad; Park, Youngmin; Wentzloff, David D.
2011-06-01
This paper presents recent work on reconfigurable all-digital radio architectures. We leverage the flexibility and scalability of synthesized digital cells to construct reconfigurable radio architectures that consume significantly less power than a software defined radio implementing similar architectures. We present two prototypes of such architectures that can receive and demodulate FM and FRS-band signals. Moreover, a radio architecture based on a reconfigurable all-digital phase-locked loop for coherent demodulation is presented.
Space Telecommunications Radio System (STRS) Definitions and Acronyms
NASA Technical Reports Server (NTRS)
Briones, Janette C.; Handler, Louis M.; Johnson, Sandra K.; Nappier, Jennifer; Gnepp, Steven; Kacpura, Thomas J.; Reinhart, Richard C.; Hall, Charles S.; Mortensen, Dale
2008-01-01
Software-defined radio is a relatively new technology area, and industry consensus on terminology is not always consistent. Confusion exists because various organizations and standards bodies define radio terms differently, depending on the actual amount of reconfigurability of the radios. The Space Telecommunications Radio System (STRS) Definitions and Acronyms Document provides readers of the STRS documents with a common understanding of the terminology used and how it is applied to the STRS architecture.
Information Sharing for Computing Trust Metrics on COTS Electronic Components
2008-09-01
Surveys standard software development life cycles (SDLCs) used in the development of a system: there are many well-known SDLC models, the most popular of which are Waterfall, V-shaped, Spiral, and Agile, and these can be applied across the SDLC or to the software and hardware distribution chain. Jøsang's model is then defined: Jøsang expresses "opinions" mathematically…
Defect measurement and analysis of JPL ground software: a case study
NASA Technical Reports Server (NTRS)
Powell, John D.; Spagnuolo, John N., Jr.
2004-01-01
Ground software systems at JPL must meet high assurance standards while remaining on schedule, due to the relatively immovable launch dates of the spacecraft such systems will control. Toward this end, the Software Quality Improvement (SQI) project's Measurement and Benchmarking (M&B) team is collecting and analyzing defect data from JPL ground system software projects to build software defect prediction models. The aim of these models is to improve predictability with regard to software quality activities. Predictive models will quantitatively define typical trends for JPL ground systems as well as Critical Discriminators (CDs) to provide explanations for atypical deviations from the norm at JPL. CDs are software characteristics that can be estimated or foreseen early in a software project's planning. Thus, these CDs will assist in planning for the predicted degree to which software quality activities for a project are likely to deviate from the norm for JPL ground systems, based on past experience across the lab.
A software defined RTU multi-protocol automatic adaptation data transmission method
NASA Astrophysics Data System (ADS)
Jin, Huiying; Xu, Xingwu; Wang, Zhanfeng; Ma, Weijun; Li, Sheng; Su, Yong; Pan, Yunpeng
2018-02-01
The remote terminal unit (RTU) is the core device of monitoring systems in hydrology and water resources. Different devices often use different application-layer communication protocols, which complicates information analysis and communication networking. We therefore introduced the idea of software-defined hardware, abstracted the common features of mainstream RTU application-layer communication protocols, and proposed a unified common protocol model. The various application-layer communication protocol algorithms are then modularized according to this model. The executable code of each algorithm is labeled by virtual functions and stored in the flash chips of the embedded CPU to form a protocol stack. Driven by the configuration commands that initialize the RTU communication system, the RTU can dynamically assemble and load the various application-layer communication protocols and efficiently transport sensor data to the central station, while the data acquisition protocol of the sensors and the various external communication terminals remain unchanged.
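The dynamic assembling and loading idea can be sketched as a protocol registry. This is a hypothetical illustration only; the protocol names, frame layouts, and field meanings below are invented for the example:

```python
from typing import Callable, Dict

# Registry mapping a configured protocol name to its decoder module.
PROTOCOLS: Dict[str, Callable[[bytes], dict]] = {}

def register(name: str):
    """Register a parser for one application-layer protocol."""
    def wrap(fn):
        PROTOCOLS[name] = fn
        return fn
    return wrap

@register("proto_a")  # hypothetical hydrology protocol
def parse_proto_a(frame: bytes) -> dict:
    # Toy layout: 2-byte station id, 2-byte water level in cm (big-endian).
    return {"station": int.from_bytes(frame[0:2], "big"),
            "level_cm": int.from_bytes(frame[2:4], "big")}

@register("proto_b")  # hypothetical second protocol, different layout
def parse_proto_b(frame: bytes) -> dict:
    # Toy layout: 1-byte station id, 2-byte water level (big-endian).
    return {"station": frame[0],
            "level_cm": int.from_bytes(frame[1:3], "big")}

def decode(configured_protocol: str, frame: bytes) -> dict:
    """Select the codec named by the configuration command at run time."""
    return PROTOCOLS[configured_protocol](frame)
```

The acquisition pipeline only ever calls `decode`, so swapping protocols is a configuration change rather than a code change, which is the essence of the software-defined approach.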
Federated software defined network operations for LHC experiments
NASA Astrophysics Data System (ADS)
Kim, Dongkyun; Byeon, Okhwan; Cho, Kihyeon
2013-09-01
The most well-known high-energy physics collaboration, the Large Hadron Collider (LHC), which is based on e-Science, has been facing several challenges presented by its extraordinary instruments in terms of the generation, distribution, and analysis of large amounts of scientific data. Currently, data distribution issues are being resolved by adopting an advanced Internet technology called software defined networking (SDN). Stable SDN operations and management are required to keep the federated LHC data distribution networks reliable. Therefore, in this paper, an SDN operation architecture based on the distributed virtual network operations center (DvNOC) is proposed to enable LHC researchers to assume full control of their own global end-to-end data dissemination. This may enhance data delivery performance through data traffic offloading under delay variation. The evaluation results indicate that the overall end-to-end data delivery performance can be improved over multi-domain SDN environments based on the proposed federated SDN/DvNOC operation framework.
A DICOM based radiotherapy plan database for research collaboration and reporting
NASA Astrophysics Data System (ADS)
Westberg, J.; Krogh, S.; Brink, C.; Vogelius, I. R.
2014-03-01
Purpose: To create a central radiotherapy (RT) plan database for dose analysis and reporting, capable of calculating and presenting statistics on user defined patient groups. The goal is to facilitate multi-center research studies with easy and secure access to RT plans and statistics on protocol compliance. Methods: RT institutions are able to send data to the central database using DICOM communications on a secure computer network. The central system is composed of a number of DICOM servers, an SQL database and in-house developed software services to process the incoming data. A web site within the secure network allows the user to manage their submitted data. Results: The RT plan database has been developed in Microsoft .NET and users are able to send DICOM data between RT centers in Denmark. Dose-volume histogram (DVH) calculations performed by the system are comparable to those of conventional RT software. A permission system was implemented to ensure access control and easy, yet secure, data sharing across centers. The reports contain DVH statistics for structures in user defined patient groups. The system currently contains over 2200 patients in 14 collaborations. Conclusions: A central RT plan repository for use in multi-center trials and quality assurance was created. The system provides an attractive alternative to dummy runs by enabling continuous monitoring of protocol conformity and plan metrics in a trial.
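A cumulative dose-volume histogram (DVH) of the kind the system computes and compares can be sketched as follows. This is a minimal illustration assuming per-voxel dose samples for one structure; it is not the system's own implementation:

```python
import numpy as np

def cumulative_dvh(voxel_doses, bin_width=0.1):
    """Cumulative DVH: percent of structure volume receiving at least
    each dose level, computed from per-voxel dose samples (e.g. in Gy)."""
    doses = np.asarray(voxel_doses, dtype=float)
    dose_levels = np.arange(0.0, doses.max() + bin_width, bin_width)
    # Fraction of voxels at or above each dose level, as a percentage.
    volume_pct = np.array([(doses >= d).mean() * 100.0 for d in dose_levels])
    return dose_levels, volume_pct
```

Statistics reported per protocol (e.g. the volume receiving at least a threshold dose) are then simple lookups into this curve, which is what makes centralized, automated protocol-compliance checks practical.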
Li, Yajie; Zhao, Yongli; Zhang, Jie; Yu, Xiaosong; Jing, Ruiquan
2017-11-27
Network operators generally provide dedicated lightpaths for customers to meet the demand for high-quality transmission. Considering the variation of traffic load, customers usually rent peak bandwidth that exceeds the practical average traffic requirement. In this case, bandwidth provisioning is unmetered and customers have to pay according to peak bandwidth. Supposing that network operators could keep track of traffic load and allocate bandwidth dynamically, bandwidth can be provided as a metered service and customers would pay for the bandwidth that they actually use. To achieve cost-effective bandwidth provisioning, this paper proposes an autonomic bandwidth adjustment scheme based on data analysis of traffic load. The scheme is implemented in a software defined networking (SDN) controller and is demonstrated in the field trial of multi-vendor optical transport networks. The field trial shows that the proposed scheme can track traffic load and realize autonomic bandwidth adjustment. In addition, a simulation experiment is conducted to evaluate the performance of the proposed scheme. We also investigate the impact of different parameters on autonomic bandwidth adjustment. Simulation results show that the step size and adjustment period have significant influences on bandwidth savings and packet loss. A small value of step size and adjustment period can bring more benefits by tracking traffic variation with high accuracy. For network operators, the scheme can serve as technical support of realizing bandwidth as metered service in the future.
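The autonomic adjustment loop can be sketched as follows. This is a hypothetical simplification: the step-size and headroom parameters are invented for illustration, and one call represents one adjustment period:

```python
def adjust_bandwidth(current_bw, measured_load, step, headroom=1.2):
    """One adjustment-period decision: track the measured traffic load
    and step the allocated bandwidth up or down toward load * headroom."""
    target = measured_load * headroom
    if current_bw < target:
        return current_bw + step          # scale up to avoid packet loss
    if current_bw - step >= target:
        return current_bw - step          # scale down to save bandwidth
    return current_bw                     # already within one step of target

# Simulate a traffic trace (e.g. Mb/s) sampled once per adjustment period.
trace = [40, 55, 80, 70, 30, 20]
bw = 50
history = []
for load in trace:
    bw = adjust_bandwidth(bw, load, step=10)
    history.append(bw)
```

As the paper's simulations suggest, a smaller step and shorter period track the trace more tightly (less over-allocation) at the cost of more frequent reconfiguration.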
Hardware Architecture Study for NASA's Space Software Defined Radios
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Scardelletti, Maximilian C.; Mortensen, Dale J.; Kacpura, Thomas J.; Andro, Monty; Smith, Carl; Liebetreu, John
2008-01-01
This study defines a hardware architecture approach for software defined radios to enable commonality among NASA space missions. The architecture accommodates a range of reconfigurable processing technologies, including general purpose processors, digital signal processors, field programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs), in addition to flexible and tunable radio frequency (RF) front-ends, to satisfy varying mission requirements. The hardware architecture consists of modules, radio functions, and interfaces. The modules are a logical division of common radio functions that comprise a typical communication radio. This paper describes the architecture details, module definitions, and the typical functions on each module as well as the module interfaces. Trade-offs between a component-based, custom architecture and a functional-based, open architecture are described. The architecture does not specify the internal physical implementation within each module, nor does it mandate the standards or ratings of the hardware used to construct the radios.
NASA Astrophysics Data System (ADS)
Kiekebusch, Mario J.; Lucuix, Christian; Erm, Toomas M.; Chiozzi, Gianluca; Zamparelli, Michele; Kern, Lothar; Brast, Roland; Pirani, Werther; Reiss, Roland; Popovic, Dan; Knudstrup, Jens; Duchateau, Michel; Sandrock, Stefan; Di Lieto, Nicola
2014-07-01
ESO is currently in the final phase of the standardization process for PC-based Programmable Logic Controllers (PLCs) as the new platform for the development of control systems for future VLT/VLTI instruments. The standard solution used until now consists of a Local Control Unit (LCU), a VME-based system having a CPU and commercial and proprietary boards. This system includes several layers of software and many thousands of lines of code developed and maintained in house. LCUs have been used for several years as the interface to control instrument functions but are now being replaced by commercial off-the-shelf (COTS) systems based on BECKHOFF Embedded PCs and the EtherCAT fieldbus. ESO is working on the completion of the software framework that enables seamless integration into the VLT control system, in order to be ready to support upcoming instruments like ESPRESSO and ERIS, which will be the first fully VLT-compliant instruments using the new standard. The technology evaluation and standardization process has been a long, combined effort of various engineering disciplines, such as electronics, control, and software, working together to define a solution that meets the requirements and minimizes the impact on observatory operations and maintenance. This paper presents the challenges of the standardization process and the steps involved in such a change. It provides a technical overview of how industrial standards like EtherCAT, OPC-UA, PLCopen MC, and TwinCAT can be used to replace LCU features in various areas like software engineering and programming languages, motion control, time synchronization, and astronomical tracking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomopy is a Python toolbox to perform x-ray data processing, image reconstruction, and data exchange tasks at synchrotron facilities. The dependencies of the software are currently as follows. Python-related: the Python standard library (http://docs.python.org/2/library/), numpy (http://www.numpy.org/), scipy (http://scipy.org/), matplotlib (http://matplotlib.org/), sphinx (http://sphinx-doc.org), pil (http://www.pythonware.com/products/pil/), pyhdf (http://pysclint.sourceforge.net/pyhdf/), h5py (http://www.h5py.org), pywt (http://www.pybytes.com/pywavelets/), file.py (https://pyspec.svn.sourceforge.net/svnroot/pyspec/trunk/pyspec/ccd/files.py). C/C++-related: gridrec (anonymous C code written back in 1997 that uses the standard C library), fftw (http://www.fftw.org/), tomoRecon (multi-threaded C++ version of gridrec; author: Mark Rivers from APS; http://cars9.uchicago.edu/software/epics/tomoRecon.html), epics (http://www.aps.anl.gov/epics/).
Media independent interface. Interface control document
NASA Technical Reports Server (NTRS)
1987-01-01
A Media Independent Interface (MII) is specified, using current standards in the industry. The MII is described in hierarchical fashion. At the base are IEEE/International Standards Organization (ISO) documents (standards) which describe the functionality of the software modules or layers and their interconnection. These documents describe primitives which are to traverse the MII. The intent of the MII is to provide a universal interface to one or more Media Access Controls (MACs) for the Logical Link Controller and Station Manager. This interface includes both a standardized electrical and mechanical interface and a standardized functional specification which defines the services expected from the MAC.
Q&A: Defining Internet Architecture for Learning.
ERIC Educational Resources Information Center
Hernandez-Ramos, Pedro
1999-01-01
Presents Pedro Hernandez-Ramos's thoughts on Educom's Instructional Management Systems (IMS), a global coalition of organizations working together to create standards for software development in distributed learning. Focuses on the organization's relevance to community colleges, the benefits of participation, why IMS is a global effort, and how…
Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization.
Jung, Sang-Kyu; McDonald, Karen
2011-08-16
Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed a stand-alone software called Visual Gene Developer. The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net.
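One of the predefined optimization strategies such tools typically offer, most-frequent-codon back-translation, can be sketched as follows. The codon usage table is a toy subset with illustrative frequencies, not data from the software:

```python
# Hypothetical codon usage table for an illustrative expression host:
# amino acid -> {codon: relative frequency}.
CODON_USAGE = {
    "M": {"ATG": 1.0},
    "K": {"AAA": 0.74, "AAG": 0.26},
    "F": {"TTT": 0.58, "TTC": 0.42},
}

def optimize(protein: str) -> str:
    """Back-translate a protein sequence, picking the most frequently
    used codon for each residue in the host's usage table."""
    return "".join(max(CODON_USAGE[aa], key=CODON_USAGE[aa].get)
                   for aa in protein)
```

A user-defined strategy in such a tool would replace the `max` rule with, for example, frequency-weighted sampling or constraints on mRNA secondary structure.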
75 FR 10439 - Cognitive Radio Technologies and Software Defined Radios
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-08
... Technologies and Software Defined Radios AGENCY: Federal Communications Commission. ACTION: Final rule. SUMMARY... concerning the use of open source software to implement security features in software defined radios (SDRs... ongoing technical developments in cognitive and software defined radio (SDR) technologies. 2. On April 20...
Distance education course on spatial multi-hazard risk assessment, using Open Source software
NASA Astrophysics Data System (ADS)
van Westen, C. J.; Frigerio, S.
2009-04-01
As part of the capacity building activities of the United Nations University - ITC School on Disaster Geo-Information Management (UNU-ITC DGIM), the International Institute for Geoinformation Science and Earth Observation (ITC) has developed a distance education course on the application of Geographic Information Systems for multi-hazard risk assessment. This course is designed for academic staff, as well as for professionals working in (non-)governmental organizations where knowledge of disaster risk management is essential. The course guides the participants through the entire process of risk assessment, on the basis of a case study of a city exposed to multiple hazards in a developing country. The course consists of eight modules, each with a guide book explaining the theoretical background and guiding the participants through spatial data requirements for risk assessment, hazard assessment procedures, generation of elements-at-risk databases, vulnerability assessment, qualitative and quantitative risk assessment methods, risk evaluation, and risk reduction. Linked to the theory is a large set of exercises, with exercise descriptions, answer sheets, demos, and GIS data. The exercises deal with four different types of hazards: earthquakes, flooding, technological hazards, and landslides. One important consideration in designing the course was that people from developing countries should not be restricted in using it by the financial burden of software acquisition. Therefore the aim was to use Open Source software as a basis. The GIS exercises are written for the ILWIS software. All exercises have also been integrated into a WebGIS, using the Open Source software CartoWeb (based on the GNU License). It is modular and customizable thanks to its object-oriented architecture, and is based on a hierarchical structure to manage and organize every package of information for every step required in risk assessment.
Different switches have been defined for every component of the risk assessment course, and through various menus the user can set the options for every exercise. For every layer of information, tools for querying, printing, searching, and surface analysis are implemented, allowing users to compare maps at different scales and perform on-line interpretations.
Software engineering for ESO's VLT project
NASA Astrophysics Data System (ADS)
Filippi, G.
1994-12-01
This paper reports on the experience at the European Southern Observatory on the application of software engineering techniques to a 200 man-year control software project for the Very Large Telescope (VLT). This shall provide astronomers, before the end of the century, with one of the most powerful telescopes in the world. From the definition of the general model, described in the software management plan, specific activities have been and will be defined: standards for documents and for code development, design approach using a CASE tool, the process of reviewing both documentation and code, quality assurance, test strategy, etc. The initial choices, the current implementation and the future planned activities are presented and, where feedback is already available, pros and cons are discussed.
NASA Astrophysics Data System (ADS)
Daniell, James; Simpson, Alanna; Gunasekara, Rashmin; Baca, Abigail; Schaefer, Andreas; Ishizawa, Oscar; Murnane, Rick; Tijssen, Annegien; Deparday, Vivien; Forni, Marc; Himmelfarb, Anne; Leder, Jan
2015-04-01
Over the past few decades, a plethora of open access software packages for the calculation of earthquake, volcanic, tsunami, storm surge, wind, and flood risk have been produced globally. As part of the World Bank GFDRR Review released at the Understanding Risk 2014 Conference, over 80 such open access risk assessment software packages were examined. Commercial software was not considered in the evaluation. A preliminary analysis was used to determine whether the 80 models were currently supported and whether they were open access. This process was used to select a subset of 31 models, comprising 8 earthquake models, 4 cyclone models, 11 flood models, and 8 storm surge/tsunami models, for more detailed analysis. By using multi-criteria decision analysis (MCDA) and simple descriptions of the software uses, the review allows users to select a few relevant software packages for their own testing and development. The detailed analysis evaluated the models against over 100 criteria and provides a synopsis of available open access natural hazard risk modelling tools. In addition, volcano software packages have since been added, bringing the compendium of risk software tools to over 100. There has been a huge increase in the quality and availability of open access/source software over the past few years. For example, private entities such as Deltares now have an open source policy regarding some flood models (NGHS). In addition, leaders in developing risk models in the public sector, such as Geoscience Australia (EQRM, TCRM, TsuDAT, AnuGA) or CAPRA (ERN-Flood, Hurricane, CRISIS2007, etc.), are launching and/or helping many other initiatives. As we achieve greater interoperability between modelling tools, we will also achieve a future wherein different open source and open access modelling tools will be increasingly connected and adapted towards unified multi-risk model platforms and highly customised solutions.
It was seen that many software tools could be improved by enabling user-defined exposure and vulnerability. Without this function, many tools can only be used regionally and not at global or continental scale. It is becoming increasingly easy to use multiple packages for a single region and/or hazard to characterize the uncertainty in the risk, or use as checks for the sensitivities in the analysis. There is a potential for valuable synergy between existing software. A number of open source software packages could be combined to generate a multi-risk model with multiple views of a hazard. This extensive review has simply attempted to provide a platform for dialogue between all open source and open access software packages and to hopefully inspire collaboration between developers, given the great work done by all open access and open source developers.
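The weighted scoring behind an MCDA-style comparison of software packages can be sketched as follows. The package names, criteria, scores, and weights are all invented for illustration, not values from the review:

```python
def mcda_rank(packages: dict, weights: dict) -> list:
    """Rank packages by weighted criterion score, best first."""
    def score(crit_scores):
        return sum(weights[c] * crit_scores[c] for c in weights)
    return sorted(packages, key=lambda p: score(packages[p]), reverse=True)

# Hypothetical criterion scores on a 0-5 scale.
packages = {
    "model_a": {"documentation": 4, "hazard_coverage": 2, "active_support": 5},
    "model_b": {"documentation": 3, "hazard_coverage": 5, "active_support": 3},
}
# Weights express a user's priorities and must cover the same criteria.
weights = {"documentation": 0.2, "hazard_coverage": 0.5, "active_support": 0.3}
```

Changing the weights re-ranks the candidates, which is how such a review lets different users shortlist different packages from the same criterion table.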
IEEE 1451.2 based Smart sensor system using ADuc847
NASA Astrophysics Data System (ADS)
Sreejithlal, A.; Ajith, Jose
The IEEE 1451 standard defines a standard interface for connecting transducers to microprocessor-based data acquisition systems, instrumentation systems, and control and field networks. A smart transducer interface module (STIM) acts as a unit which provides signal conditioning, digitization, and data packet generation for the transducers connected to it. This paper describes the implementation of a microcontroller-based smart transducer interface module based on the IEEE 1451.2 standard. The module, implemented using an ADuc847 microcontroller, has 2 transducer channels and is programmed in embedded C. The sensor system consists of a network capable application processor (NCAP) module which controls the smart transducer interface module (STIM) over an IEEE 1451.2-RS232 bus. The NCAP module is implemented as a software module in C#. The hardware details, control principles involved, and the software implementation of the STIM are described in detail.
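The STIM's per-channel job of digitizing a reading and framing it into a data packet for the NCAP can be sketched as follows. The frame layout with STX/ETX delimiters and a one-byte checksum is purely illustrative, not the IEEE 1451.2 wire format:

```python
def build_frame(channel: int, reading: int) -> bytes:
    """Wrap one digitized reading in a framed packet: STX, channel,
    16-bit big-endian value, 8-bit additive checksum, ETX."""
    payload = bytes([channel]) + reading.to_bytes(2, "big")
    checksum = sum(payload) & 0xFF
    return b"\x02" + payload + bytes([checksum]) + b"\x03"

def parse_frame(frame: bytes):
    """NCAP-side decode: validate framing and checksum, return
    (channel, reading)."""
    assert frame[0] == 0x02 and frame[-1] == 0x03, "bad framing"
    payload, checksum = frame[1:-2], frame[-2]
    assert sum(payload) & 0xFF == checksum, "checksum mismatch"
    return payload[0], int.from_bytes(payload[1:3], "big")
```

Over the serial link the NCAP would issue a channel read command and apply `parse_frame` to the bytes the STIM returns.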
Liu, Hu; Su, Rong-jia; Wu, Min-jie; Zhang, Yi; Qiu, Xiang-jun; Feng, Jian-gang; Xie, Ting; Lu, Shu-liang
2012-06-01
To form an objective, standardized, and convenient wound information management scheme by means of a wound information management system. The system was set up with an acquisition terminal, a defined wound description, a data bank, and related software, and its efficacy was evaluated in clinical practice. The acquisition terminal was composed of a third-generation mobile phone and the software. It was feasible to access wound information, including descriptions, images, and therapeutic plans, from the data bank by mobile phone. During 4 months, a total of 232 wound treatment records were entered, and standardized data for 38 patients were formed automatically from them. This system can provide standardized wound information management through standardized techniques for the acquisition, transmission, and storage of wound information. It can be used widely in hospitals, especially primary medical institutions. The data resource of the system makes future epidemiological studies with large sample sizes possible.
NASA Astrophysics Data System (ADS)
Živanović, Dragan; Simić, Milan; Kokolanski, Zivko; Denić, Dragan; Dimcev, Vladimir
2018-04-01
A software-supported procedure for the generation of long-duration complex test sequences, suitable for testing instruments that detect standard voltage quality (VQ) disturbances, is presented in this paper. This solution for test signal generation includes significant improvements to the computer-based signal generator presented and described in a previously published paper [1]. The generator is based on virtual instrumentation software for defining the basic signal parameters, a data acquisition card NI 6343, and a power amplifier that raises the output voltage to the nominal RMS value of 230 V. Definition of the basic signal parameters in the LabVIEW application software is supported using script files, which allows simple repetition of specific test signals and the combination of several different test sequences into a complex composite test waveform. The basic advantage of this generator over similar signal-generation solutions is its ability to generate long-duration test sequences according to predefined complex test scenarios, including various combinations of VQ disturbances defined in accordance with the European standard EN50160. Experimental verification of the presented signal generator's capability is performed by testing the commercial power quality analyzer Fluke 435 Series II. Some characteristic complex test signals with various disturbances are shown, together with logged data obtained from the tested power quality analyzer.
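The kind of scripted disturbance segment such a generator produces can be sketched as follows. This is a hypothetical voltage-sag example: the sample rate, dip depth, and timing are illustrative, and amplitudes are in volts before the power amplifier stage:

```python
import math

def test_sequence(rms=230.0, freq=50.0, fs=10_000, sag_pu=0.5,
                  sag_start=0.04, sag_end=0.08, duration=0.12):
    """Sampled 50 Hz sine at nominal 230 V RMS, with a voltage dip
    (sag) to sag_pu of nominal amplitude between sag_start and
    sag_end seconds."""
    amp = rms * math.sqrt(2.0)
    samples = []
    for n in range(int(duration * fs)):
        t = n / fs
        scale = sag_pu if sag_start <= t < sag_end else 1.0
        samples.append(scale * amp * math.sin(2 * math.pi * freq * t))
    return samples

samples = test_sequence()
```

A long composite test scenario is then just a script that concatenates segments like this one, varying the disturbance type and parameters per segment.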
Development of land based radar polarimeter processor system
NASA Technical Reports Server (NTRS)
Kronke, C. W.; Blanchard, A. J.
1983-01-01
The processing subsystem of a land based radar polarimeter was designed and constructed. This subsystem is labeled the remote data acquisition and distribution system (RDADS). The radar polarimeter, an experimental remote sensor, incorporates the RDADS to control all operations of the sensor. The RDADS uses industry-standard components including an 8-bit microprocessor based single board computer, analog input/output boards, a dynamic random access memory board, and power supplies. A high-speed digital electronics board was specially designed and constructed to control range-gating for the radar. A complete system of software programs was developed to operate the RDADS. The software uses a powerful real-time, multi-tasking executive package as an operating system. The hardware and software used in the RDADS are detailed. Future system improvements are recommended.
Design of a secure remote management module for a software-operated medical device.
Burnik, Urban; Dobravec, Štefan; Meža, Marko
2017-12-09
Software-based medical devices need to be maintained throughout their entire life cycle. The efficiency of after-sales maintenance can be improved by managing medical systems remotely. This paper presents how to design the remote access function extensions in order to prevent risks imposed by uncontrolled remote access. A thorough analysis of standards and legislation requirements regarding safe operation and risk management of medical devices is presented. Based on the formal requirements, a multi-layer machine design solution is proposed that eliminates remote connectivity risks by strict separation of regular device functionalities from remote management service, deploys encrypted communication links and uses digital signatures to prevent mishandling of software images. The proposed system may also be used as an efficient version update of the existing medical device designs.
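The signed-image check described above can be sketched as follows. This is a self-contained illustration using an HMAC; the paper's design calls for digital signatures, for which a real device would use asymmetric keys rather than a shared secret:

```python
import hashlib
import hmac

# Illustrative provisioning secret; a real device would hold a public
# key and verify an asymmetric signature instead.
SECRET = b"device-provisioning-key"

def sign_image(image: bytes) -> bytes:
    """Compute the authentication tag for a software image."""
    return hmac.new(SECRET, image, hashlib.sha256).digest()

def verify_image(image: bytes, signature: bytes) -> bool:
    """Accept the image only if its tag matches (constant-time compare)."""
    return hmac.compare_digest(sign_image(image), signature)
```

The update service would refuse to install any image for which `verify_image` returns False, preventing mishandled or tampered images from reaching the device.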
Data to Pictures to Data: Outreach Imaging Software and Metadata
NASA Astrophysics Data System (ADS)
Levay, Z.
2011-07-01
A convergence between astronomy science and digital photography has enabled a steady stream of visually rich imagery from state-of-the-art data. The accessibility of hardware and software has facilitated an explosion of astronomical images for outreach, from space-based observatories, ground-based professional facilities and among the vibrant amateur astrophotography community. Producing imagery from science data involves a combination of custom software to understand FITS data (FITS Liberator), off-the-shelf, industry-standard software to composite multi-wavelength data and edit digital photographs (Adobe Photoshop), and application of photo/image-processing techniques. Some additional effort is needed to close the loop and enable this imagery to be conveniently available for various purposes beyond web and print publication. The metadata paradigms in digital photography are now complying with FITS and science software to carry information such as keyword tags and world coordinates, enabling these images to be usable in more sophisticated, imaginative ways exemplified by Sky in Google Earth and World Wide Telescope.
Certification of COTS Software in NASA Human Rated Flight Systems
NASA Technical Reports Server (NTRS)
Goforth, Andre
2012-01-01
Adoption of commercial off-the-shelf (COTS) products in safety-critical systems has been seen as a promising acquisition strategy to improve mission affordability and, yet, has come with significant barriers and challenges. Attempts to integrate COTS software components into NASA human-rated flight systems have been, for the most part, complicated by the verification and validation (V&V) requirements necessary for flight certification per NASA's own standards. For software from COTS sources and, more generally, from third-party sources (commercial, government, modified, or open source), the expectation is that it meets the same certification criteria as in-house software, and that it does so as if it were built in-house. The latter is a critical and hidden issue. This paper examines the longstanding barriers and challenges in the use of third-party software in safety-critical systems and covers recent efforts to use COTS software in NASA's Multi-Purpose Crew Vehicle (MPCV) project. It identifies core artifacts without which the use of COTS and third-party software is, for all practical purposes, a nonstarter for affordable and timely insertion into flight-critical systems. The paper covers NASA's first use in a flight-critical system of COTS software with prior FAA certification heritage, which was shown to meet the RTCA DO-178B standard, and how this certification may, in some cases, be leveraged to allow the use of analysis in lieu of testing. Finally, the paper proposes the establishment of an open source forum for the development of safety-critical third-party software.
NASA Technical Reports Server (NTRS)
Becker, Jeffrey C.
1995-01-01
The Thinking Machines CM-5 platform was designed to run single program, multiple data (SPMD) applications, i.e., to run a single binary across all nodes of a partition, with each node possibly operating on different data. Certain classes of applications, such as multi-disciplinary computational fluid dynamics codes, are facilitated by the ability to have subsets of the partition nodes running different binaries. In order to extend the CM-5 system software to permit such applications, a multi-program loader was developed. This system is based on the dld loader, which was originally developed for workstations. This paper provides a high-level description of dld and describes how it was ported to the CM-5 to provide support for multi-binary applications. Finally, it describes how the loader has been used to implement the CM-5 version of MPIRUN, a portable facility for running multi-disciplinary/multi-zonal MPI (Message-Passing Interface Standard) codes.
Surveillance and reconnaissance ground system architecture
NASA Astrophysics Data System (ADS)
Devambez, Francois
2001-12-01
Modern conflicts induce various modes of deployment, depending on the type of conflict, the type of mission, and the phase of the conflict. It is therefore impossible to define fixed-architecture systems for surveillance ground segments. Thales has developed a structure for a ground segment based on the operational functions required and on the definition of modules and networks. These modules are software and hardware modules, including communications and networks. This ground segment is called the MGS (Modular Ground Segment) and is intended for use in airborne reconnaissance systems, surveillance systems, and UAV systems. The main parameters for the definition of a modular ground image exploitation system are: compliance with various operational configurations; easy adaptation to the evolution of these configurations; interoperability with NATO and multinational forces; security; multi-sensor, multi-platform capabilities; technical modularity; evolutivity; and reduction of life-cycle cost. The general performance of the MGS is presented: type of sensors, acquisition process, exploitation of images, report generation, database management, dissemination, and interface with C4I. The MGS is then described as a set of hardware and software modules, together with their organization to build numerous operational configurations. Architectures range from a minimal configuration intended for a mono-sensor image exploitation system to a full image intelligence center for multilevel exploitation of multi-sensor data.
Metric Evaluation Pipeline for 3d Modeling of Urban Scenes
NASA Astrophysics Data System (ADS)
Bosch, M.; Leichtman, A.; Chilcott, D.; Goldberg, H.; Brown, M.
2017-05-01
Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state-of-the-art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high-resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger-scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline, developed as publicly available open source software, to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced with open source multi-view stereo methods. Data and software are made publicly available to enable further research and planned benchmarking activities.
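The point-cloud completeness and correctness metrics mentioned above can be sketched as nearest-neighbor threshold tests between a model cloud and lidar ground truth; this is a simplified stand-in for the pipeline's actual implementation, and the function names and tolerance are illustrative:

```python
import numpy as np

def nn_dist(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Brute-force nearest-neighbor distance from each point in `a` to set `b`
    # (a real pipeline would use a k-d tree for large clouds).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1)

def completeness_correctness(model: np.ndarray, truth: np.ndarray,
                             tol: float = 1.0):
    """Completeness: fraction of ground-truth points matched by the model.
    Correctness: fraction of model points matched by the ground truth."""
    completeness = float((nn_dist(truth, model) <= tol).mean())
    correctness = float((nn_dist(model, truth) <= tol).mean())
    return completeness, correctness
```

A model that reproduces only part of the scene scores low on completeness; one that hallucinates extra geometry scores low on correctness.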
2013-08-01
Approved for public release; distribution unlimited (PA Number 412-TW-PA-13395). [Nomenclature list partially recovered: generic function f; acceleration due to gravity g; altitude h; aerodynamic lift force L; Lagrange cost L; vehicle mass m; Mach number M; number of coefficients in polynomial regression n; highest order of polynomial regression p; dynamic pressure Q.] The method uses the Radau Pseudospectral Method (RPM); the collocation points are defined by the roots of Legendre-Gauss-Radau (LGR) functions. GPOPS also automatically refines the mesh.
van Dongen, J J M; Lhermitte, L; Böttcher, S; Almeida, J; van der Velden, V H J; Flores-Montero, J; Rawstron, A; Asnafi, V; Lécrevisse, Q; Lucio, P; Mejstrikova, E; Szczepański, T; Kalina, T; de Tute, R; Brüggemann, M; Sedek, L; Cullen, M; Langerak, A W; Mendonça, A; Macintyre, E; Martin-Ayuso, M; Hrusak, O; Vidriales, M B; Orfao, A
2012-01-01
Most consensus leukemia & lymphoma antibody panels consist of lists of markers based on expert opinions, but they have not been validated. Here we present the validated EuroFlow 8-color antibody panels for immunophenotyping of hematological malignancies. The single-tube screening panels and multi-tube classification panels fit into the EuroFlow diagnostic algorithm with entries defined by clinical and laboratory parameters. The panels were constructed in 2–7 sequential design–evaluation–redesign rounds, using novel Infinicyt software tools for multivariate data analysis. Two groups of markers are combined in each 8-color tube: (i) backbone markers to identify distinct cell populations in a sample, and (ii) markers for characterization of specific cell populations. In multi-tube panels, the backbone markers were optimally placed at the same fluorochrome position in every tube, to provide identical multidimensional localization of the target cell population(s). The characterization markers were positioned according to the diagnostic utility of the combined markers. Each proposed antibody combination was tested against reference databases of normal and malignant cells from healthy subjects and WHO-based disease entities, respectively. The EuroFlow studies resulted in validated and flexible 8-color antibody panels for multidimensional identification and characterization of normal and aberrant cells, optimally suited for immunophenotypic screening and classification of hematological malignancies. PMID:22552007
Software defined multi-spectral imaging for Arctic sensor networks
NASA Astrophysics Data System (ADS)
Siewert, Sam; Angoth, Vivek; Krishnamurthy, Ramnarayan; Mani, Karthikeyan; Mock, Kenrick; Singh, Surjith B.; Srivistava, Saurav; Wagner, Chris; Claus, Ryan; Vis, Matthew Demi
2016-05-01
Availability of off-the-shelf infrared sensors combined with high definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry Riddle Aeronautical University working with University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder have built several versions of a low-cost drop-in-place SDMSI to test alternatives for power efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, animal and marine vessel detection and tracking. 
The goal is to select the most power efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop-in-place installations in the Arctic. The prototype selected will be field tested in Alaska in the summer of 2016.
Flight Dynamic Model Exchange using XML
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce; Hildreth, Bruce L.
2002-01-01
The AIAA Modeling and Simulation Technical Committee has worked for several years to develop a standard by which the information needed to develop physics-based models of aircraft can be specified. The purpose of this standard is to provide a well-defined set of information, definitions, data tables and axis systems so that cooperating organizations can transfer a model from one simulation facility to another with maximum efficiency. This paper proposes using an application of the eXtensible Markup Language (XML) to implement the AIAA simulation standard. The motivation and justification for using a standard such as XML is discussed. Necessary data elements to be supported are outlined. An example of an aerodynamic model as an XML file is given. This example includes definition of independent and dependent variables for function tables, definition of key variables used to define the model, and axis systems used. The final steps necessary for implementation of the standard are presented. Software to take an XML-defined model and import/export it to/from a given simulation facility is discussed, but not demonstrated. That would be the next step in final implementation of standards for physics-based aircraft dynamic models.
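A minimal sketch of what an XML-encoded aerodynamic function table and its lookup might look like; the element names below are invented for illustration and do not reproduce the AIAA standard's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment: one independent variable, one tabulated dependent
# variable, piecewise-linear between breakpoints.
MODEL_XML = """
<aeroModel name="exampleLift">
  <independentVariable name="angleOfAttack" units="deg"/>
  <dependentVariable name="liftCoefficient"/>
  <functionTable>
    <breakpoints>0 4 8 12</breakpoints>
    <values>0.0 0.4 0.8 1.1</values>
  </functionTable>
</aeroModel>
"""

def load_table(xml_text: str):
    """Parse breakpoints and values out of the (hypothetical) XML table."""
    root = ET.fromstring(xml_text)
    bp = [float(x) for x in root.findtext(".//breakpoints").split()]
    vals = [float(x) for x in root.findtext(".//values").split()]
    return bp, vals

def interpolate(bp, vals, x):
    # Piecewise-linear lookup, clamped at the table ends.
    if x <= bp[0]:
        return vals[0]
    if x >= bp[-1]:
        return vals[-1]
    for i in range(len(bp) - 1):
        if bp[i] <= x <= bp[i + 1]:
            t = (x - bp[i]) / (bp[i + 1] - bp[i])
            return vals[i] + t * (vals[i + 1] - vals[i])
```

The point of the standard is that both simulation facilities parse the same file into the same table, so the import/export tooling reduces to code like this plus schema validation.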
Future Standardization of Space Telecommunications Radio System with Core Flight System
NASA Technical Reports Server (NTRS)
Hickey, Joseph P.; Briones, Janette C.; Roche, Rigoberto; Handler, Louis M.; Hall, Steven
2016-01-01
NASA Glenn Research Center (GRC) is integrating the NASA Space Telecommunications Radio System (STRS) Standard with the Core Flight System (cFS). The STRS standard provides a common, consistent framework to develop, qualify, operate and maintain complex, reconfigurable and reprogrammable radio systems. The cFS is a flexible, open architecture that features a plug-and-play software executive called the Core Flight Executive (cFE), a reusable library of software components for flight and space missions and an integrated tool suite. Together, STRS and cFS create a development environment that allows STRS-compliant applications to reference the STRS APIs through the cFS infrastructure. These APIs are used to standardize the communication protocols on NASA's space SDRs. The cFE-STRS Operating Environment (OE) is a portable cFS library, which adds the ability to run STRS applications on existing cFS platforms. The purpose of this paper is to discuss the cFE-STRS OE prototype and preliminary experimental results obtained using the Advanced Space Radio Platform (ASRP), the GRC S-band Ground Station and the SCaN (Space Communication and Navigation) Testbed currently flying onboard the International Space Station. Additionally, this paper presents a demonstration of the Consultative Committee for Space Data Systems (CCSDS) Spacecraft Onboard Interface Services (SOIS) using electronic data sheets inside cFE. This configuration allows the data sheets to specify binary formats for data exchange between STRS applications. The integration of STRS with cFS leverages mission-proven platform functions and mitigates barriers to integration with future missions. This reduces flight software development time and the costs of software-defined radio (SDR) platforms. Furthermore, the combined benefits of STRS standardization with the flexibility of cFS provide an effective, reliable and modular framework to minimize software development efforts for spaceflight missions.
NASA Astrophysics Data System (ADS)
Aufdenkampe, A. K.; Mayorga, E.; Horsburgh, J. S.; Lehnert, K. A.; Zaslavsky, I.
2015-12-01
We have developed a family of solutions to the challenges of integrating diverse data from the biological and geological (BiG) disciplines for Critical Zone (CZ) science. These standards and software solutions have been developed around the new Observations Data Model version 2.0 (ODM2, http://ODM2.org), which was designed as a profile of the Open Geospatial Consortium's (OGC) Observations and Measurements (O&M) standard. The ODM2 standards and software ecosystem has at its core an information model that balances specificity with flexibility to powerfully and equally serve the needs of multiple dataset types, from multivariate sensor-generated time series to geochemical measurements of specimen hierarchies to multi-dimensional spectral data to biodiversity observations. ODM2 has been adopted as the information model guiding the next generation of cyberinfrastructure development for the Interdisciplinary Earth Data Alliance (http://www.iedadata.org/) and the CUAHSI Water Data Center (https://www.cuahsi.org/wdc). Here we present several components of the ODM2 standards and software ecosystem that were developed specifically to help CZ scientists and their data managers to share and manage data through the national Critical Zone Observatory data integration project (CZOData, http://criticalzone.org/national/data/) and the bio integration with geo for critical zone science data project (BiG CZ Data, http://bigcz.org/). These include the ODM2 Controlled Vocabulary system (http://vocabulary.odm2.org), the YAML Observation Data Archive & exchange (YODA) File Format (https://github.com/ODM2/YODA-File) and the BiG CZ Toolbox, which will combine easy-to-install ODM2 databases (https://github.com/ODM2/ODM2) with a variety of graphical software packages for data management such as ODMTools (https://github.com/ODM2/ODMToolsPython) and the ODM2 Streaming Data Loader (https://github.com/ODM2/ODM2StreamingDataLoader).
A database for TMT interface control documents
NASA Astrophysics Data System (ADS)
Gillies, Kim; Roberts, Scott; Brighton, Allan; Rogers, John
2016-08-01
The TMT Software System consists of software components that interact with one another through a software infrastructure called TMT Common Software (CSW). CSW consists of software services and library code that is used by developers to create the subsystems and components that participate in the software system. CSW also defines the types of components that can be constructed and their roles. The use of common component types and shared middleware services allows standardized software interfaces for the components. A software system called the TMT Interface Database System was constructed to support the documentation of the interfaces for components based on CSW. The programmer describes a subsystem and each of its components using JSON-style text files. A command interface file describes each command a component can receive and any commands a component sends. The event interface files describe status, alarms, and events a component publishes and status and events subscribed to by a component. A web application was created to provide a user interface for the required features. Files are ingested into the software system's database. The user interface allows browsing subsystem interfaces, publishing versions of subsystem interfaces, and constructing and publishing interface control documents that consist of the intersection of two subsystem interfaces. All published subsystem interfaces and interface control documents are versioned for configuration control and follow the standard TMT change control processes. Subsystem interfaces and interface control documents can be visualized in the browser or exported as PDF files.
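A hypothetical sketch of such a JSON-style command interface file and a trivial reader; the real TMT files differ in field names and carry considerably more metadata:

```python
import json

# Invented example of a command interface file for one component.
COMMAND_FILE = """
{
  "subsystem": "TCS",
  "component": "mountAssembly",
  "receive": [
    {"name": "setAltAz", "parameters": [
      {"name": "alt", "type": "double", "units": "deg"},
      {"name": "az",  "type": "double", "units": "deg"}
    ]}
  ],
  "send": ["logEvent"]
}
"""

def received_command_names(text: str):
    """List the commands a component declares it can receive."""
    doc = json.loads(text)
    return [cmd["name"] for cmd in doc["receive"]]
```

An interface control document between two subsystems then amounts to intersecting one component's "send" declarations with the other's "receive" declarations across such files.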
A software communication tool for the tele-ICU.
Pimintel, Denise M; Wei, Shang Heng; Odor, Alberto
2013-01-01
The Tele Intensive Care Unit (tele-ICU) supports a high-volume, high-acuity population of patients. There is a high volume of incoming and outgoing calls through the tele-ICU hubs, especially during the evening and night hours. The tele-ICU clinicians must be able to communicate effectively with team members in order to support the care of complex and critically ill patients while maintaining a standard to improve time to intervention. This study describes a software communication tool that will improve the time to intervention over the paper-driven communication format presently used in the tele-ICU. The software provides a multi-relational database of message instances from which to mine information for evaluation and quality improvement for all entities that touch the tele-ICU. The software design incorporates years of critical care and software design experience combined with new skills acquired in an applied Health Informatics program. This software tool will function in the tele-ICU environment and perform as a front-end application that gathers, routes, and displays internal communication messages for intervention by priority and provider.
Research a Novel Integrated and Dynamic Multi-object Trade-Off Mechanism in Software Project
NASA Astrophysics Data System (ADS)
Jiang, Weijin; Xu, Yuhui
Aiming at the practical requirements of present software project management and control, this paper constructs an integrated multi-object trade-off model based on the software project process, so as to actualize integrated and dynamic trade-off of the project's multi-object system. Based on an analysis of the basic principles of dynamic control and of the integrated multi-object trade-off process, the paper integrates methods from cybernetics and network technology and, through monitoring of critical reference points determined by the control objects, discusses the integrated and dynamic multi-object trade-off model and the corresponding rules and mechanism, in order to realize the integration of process management with trade-off of the multi-object system.
Safe and Efficient Support for Embedded Multi-Processors in Ada
NASA Astrophysics Data System (ADS)
Ruiz, Jose F.
2010-08-01
New software demands increasing processing power, and multi-processor platforms are spreading as the answer to achieve the required performance. Embedded real-time systems are also subject to this trend, but in the case of real-time mission-critical systems, the properties of reliability, predictability and analyzability are also paramount. The Ada 2005 language defined a subset of its tasking model, the Ravenscar profile, that provides the basis for the implementation of deterministic and time analyzable applications on top of a streamlined run-time system. This Ravenscar tasking profile, originally designed for single processors, has proven remarkably useful for modelling verifiable real-time single-processor systems. This paper proposes a simple extension to the Ravenscar profile to support multi-processor systems using a fully partitioned approach. The implementation of this scheme is simple, and it can be used to develop applications amenable to schedulability analysis.
NASA Astrophysics Data System (ADS)
Newman, Andrew J.; Richardson, Casey L.; Kain, Sean M.; Stankiewicz, Paul G.; Guseman, Paul R.; Schreurs, Blake A.; Dunne, Jeffrey A.
2016-05-01
This paper introduces the game of reconnaissance blind multi-chess (RBMC) as a paradigm and test bed for understanding and experimenting with autonomous decision making under uncertainty and in particular managing a network of heterogeneous Intelligence, Surveillance and Reconnaissance (ISR) sensors to maintain situational awareness informing tactical and strategic decision making. The intent is for RBMC to serve as a common reference or challenge problem in fusion and resource management of heterogeneous sensor ensembles across diverse mission areas. We have defined a basic rule set and a framework for creating more complex versions, developed a web-based software realization to serve as an experimentation platform, and developed some initial machine intelligence approaches to playing it.
Network module detection: Affinity search technique with the multi-node topological overlap measure
Li, Ai; Horvath, Steve
2009-01-01
Background: Many clustering procedures only allow the user to input a pairwise dissimilarity or distance measure between objects. We propose a clustering method that can input a multi-point dissimilarity measure d(i1, i2, ..., iP) where the number of points P can be larger than 2. The work is motivated by gene network analysis, where clusters correspond to modules of highly interconnected nodes. Here, we define modules as clusters of network nodes with high multi-node topological overlap. The topological overlap measure is a robust measure of interconnectedness based on shared network neighbors. In previous work, we have shown that the multi-node topological overlap measure yields biologically meaningful results when used as input of network neighborhood analysis. Findings: We adapt network neighborhood analysis for use in module detection. We propose the Module Affinity Search Technique (MAST), a generalized version of the Cluster Affinity Search Technique (CAST). MAST can accommodate a multi-node dissimilarity measure. Clusters grow around user-defined or automatically chosen seeds (e.g. hub nodes). We propose both local and global cluster growth stopping rules. We use several simulations and a gene co-expression network application to argue that the MAST approach leads to biologically meaningful results. We compare MAST with hierarchical clustering and partitioning around medoid clustering. Conclusion: Our flexible module detection method is implemented in the MTOM software, which is available for download. PMID:19619323
Network module detection: Affinity search technique with the multi-node topological overlap measure.
Li, Ai; Horvath, Steve
2009-07-20
Many clustering procedures only allow the user to input a pairwise dissimilarity or distance measure between objects. We propose a clustering method that can input a multi-point dissimilarity measure d(i1, i2, ..., iP) where the number of points P can be larger than 2. The work is motivated by gene network analysis where clusters correspond to modules of highly interconnected nodes. Here, we define modules as clusters of network nodes with high multi-node topological overlap. The topological overlap measure is a robust measure of interconnectedness which is based on shared network neighbors. In previous work, we have shown that the multi-node topological overlap measure yields biologically meaningful results when used as input of network neighborhood analysis. We adapt network neighborhood analysis for the use of module detection. We propose the Module Affinity Search Technique (MAST), which is a generalized version of the Cluster Affinity Search Technique (CAST). MAST can accommodate a multi-node dissimilarity measure. Clusters grow around user-defined or automatically chosen seeds (e.g. hub nodes). We propose both local and global cluster growth stopping rules. We use several simulations and a gene co-expression network application to argue that the MAST approach leads to biologically meaningful results. We compare MAST with hierarchical clustering and partitioning around medoid clustering. Our flexible module detection method is implemented in the MTOM software which can be downloaded from the following webpage: http://www.genetics.ucla.edu/labs/horvath/MTOM/
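For reference, the pairwise (two-node) topological overlap measure that the multi-node MTOM measure generalizes can be sketched as follows; this is the standard pairwise formula based on shared neighbors, not the paper's multi-node extension:

```python
import numpy as np

def topological_overlap(adj) -> np.ndarray:
    """Pairwise topological overlap matrix for a symmetric adjacency
    matrix with zero diagonal:

        TOM_ij = (l_ij + a_ij) / (min(k_i, k_j) + 1 - a_ij),

    where l_ij = sum_u a_iu * a_uj counts (or weights) shared neighbors
    and k_i is the connectivity of node i.
    """
    a = np.asarray(adj, dtype=float)
    shared = a @ a                      # l_ij via shared neighbors
    k = a.sum(axis=0)                   # node connectivities
    denom = np.minimum.outer(k, k) + 1.0 - a
    tom = (shared + a) / denom
    np.fill_diagonal(tom, 1.0)
    return tom
```

Two nodes in a triangle have overlap 1 (fully shared neighborhood), while the endpoints of a two-edge chain, which are not adjacent but share one neighbor, get overlap 0.5.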
NASA Technical Reports Server (NTRS)
Currit, P. A.
1983-01-01
The Cleanroom software development methodology is designed to take the gamble out of product releases for both suppliers and receivers of the software. The ingredients of this procedure are a life cycle of executable product increments, representative statistical testing, and a standard estimate of the MTTF (Mean Time To Failure) of the product at the time of its release. A statistical approach to software product testing using randomly selected samples of test cases is considered. A statistical model is defined for the certification process which uses the timing data recorded during test. A reasonableness argument for this model is provided that uses previously published data on software product execution. Also included is a derivation of the certification model estimators and a comparison of the proposed least squares technique with the more commonly used maximum likelihood estimators.
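A least-squares fit of a log-linear reliability growth model to inter-failure times, in the spirit of the certification approach described above (the paper's exact model and estimators may differ; the MTTF_j = A * B**j form here is an illustrative assumption):

```python
import math

def fit_mttf_model(interfail_times):
    """Least-squares fit of MTTF_j ~ A * B**j to observed inter-failure
    times t_1..t_n, by ordinary regression of log(t_j) on j."""
    n = len(interfail_times)
    xs = list(range(1, n + 1))
    ys = [math.log(t) for t in interfail_times]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return math.exp(intercept), math.exp(slope)  # A, B

def predicted_mttf(A: float, B: float, j: int) -> float:
    # B > 1 indicates reliability growth across the failure sequence.
    return A * B ** j
```

With B estimated above 1, the extrapolated MTTF at release is the statistic a supplier would quote to a receiver.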
Churilov, Leonid; Liu, Daniel; Ma, Henry; Christensen, Soren; Nagakane, Yoshinari; Campbell, Bruce; Parsons, Mark W; Levi, Christopher R; Davis, Stephen M; Donnan, Geoffrey A
2013-04-01
The appropriateness of a software platform for rapid MRI assessment of the amount of salvageable brain tissue after stroke is critical for both the validity of the Extending the Time for Thrombolysis in Emergency Neurological Deficits (EXTEND) Clinical Trial of stroke thrombolysis beyond 4.5 hours and for stroke patient care outcomes. The objective of this research is to develop and implement a methodology for selecting the acute stroke imaging software platform most appropriate for the setting of a multi-centre clinical trial. A multi-disciplinary decision making panel formulated the set of preferentially independent evaluation attributes. Alternative Multi-Attribute Value Measurement methods were used to identify the best imaging software platform followed by sensitivity analysis to ensure the validity and robustness of the proposed solution. Four alternative imaging software platforms were identified. RApid processing of PerfusIon and Diffusion (RAPID) software was selected as the most appropriate for the needs of the EXTEND trial. A theoretically grounded generic multi-attribute selection methodology for imaging software was developed and implemented. The developed methodology assured both a high quality decision outcome and a rational and transparent decision process. This development contributes to stroke literature in the area of comprehensive evaluation of MRI clinical software. At the time of evaluation, RAPID software presented the most appropriate imaging software platform for use in the EXTEND clinical trial. The proposed multi-attribute imaging software evaluation methodology is based on sound theoretical foundations of multiple criteria decision analysis and can be successfully used for choosing the most appropriate imaging software while ensuring both robust decision process and outcomes. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.
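The additive multi-attribute value model underlying such a selection can be sketched as follows; the attribute names, weights, and scores below are invented for illustration and are not the EXTEND panel's actual values:

```python
def weighted_value(scores: dict, weights: dict) -> float:
    """Additive multi-attribute value: sum of weight * normalized score.
    Weights must sum to 1; scores are assumed normalized to [0, 1]."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[a] * scores[a] for a in weights)

def rank_platforms(platforms, weights):
    """Rank (name, scores) pairs by descending overall value."""
    return sorted(platforms, key=lambda p: weighted_value(p[1], weights),
                  reverse=True)
```

Sensitivity analysis, as used in the study, amounts to perturbing the weights and checking whether the top-ranked platform changes.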
A reprogrammable receiver architecture for wireless signal interception
NASA Astrophysics Data System (ADS)
Yao, Timothy S.
2003-09-01
In this paper, a reprogrammable receiver architecture for wireless signal interception, based on the software-defined-radio concept, is presented. The radio-frequency (RF) signal the receiver is to intercept may come from a terrestrial cellular network or from communication satellites, with carrier frequencies ranging from 800 MHz (civilian mobile) to 15 GHz (Ku band). To intercept signals across such a wide frequency range and across these varied communication systems, the traditional approach is to deploy multiple receivers to scan for and detect the desired signal. This approach is unattractive in terms of cost, efficiency, and accuracy. Instead, we propose a universal receiver, software-driven and reconfigurable, to intercept signals of interest. The software-defined-radio-based receiver first captures wideband RF energy (25 MHz) through the antenna, performs zero-IF down-conversion (homodyne architecture) to baseband, and digitally channelizes the baseband signal. The channelization module is a bank of high-performance digital filters whose bandwidth is programmable according to the wireless communication protocol under watch. In the baseband processing, high-performance digital signal processors carry out the detection process and microprocessors handle the communication protocols. The baseband processing is also reconfigurable for different wireless standards and protocols. These advantages of the software-defined-radio architecture over a traditional RF receiver make it a favorable technology for communication signal interception and surveillance.
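The channelization step can be illustrated with a toy critically sampled DFT filter bank; a deployed receiver would use a polyphase design with a proper prototype filter for channel isolation, rather than this rectangular-window sketch:

```python
import numpy as np

def dft_channelizer(samples: np.ndarray, n_channels: int) -> np.ndarray:
    """Split a complex baseband stream into n_channels sub-bands.

    Consecutive blocks of n_channels samples are transformed with an
    FFT, so row k carries the (decimated) content of the sub-band
    centered at k * fs / n_channels. Returns an array of shape
    (n_channels, len(samples) // n_channels).
    """
    n = (len(samples) // n_channels) * n_channels
    blocks = samples[:n].reshape(-1, n_channels)
    return np.fft.fft(blocks, axis=1).T
```

A tone placed exactly on a channel center lands entirely in that channel's output, which is the behavior the programmable filter bank exploits to isolate one protocol's band for detection.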
gr-MRI: A software package for magnetic resonance imaging using software defined radios.
Hasselwander, Christopher J; Cao, Zhipeng; Grissom, William A
2016-09-01
The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events was also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.
Copyright © 2016. Published by Elsevier Inc.
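As background to the single-pulse sequence above: a free induction signal is, to first approximation, an exponentially damped complex sinusoid at the resonance offset frequency. The sketch below (plain Python, not gr-MRI code; the function names and parameter values are invented for illustration) synthesizes such a signal and recovers its frequency with a naive DFT:

```python
import cmath
import math

def fid(freq_hz, t2_s, fs_hz, n):
    """Synthetic free induction decay: a complex sinusoid at freq_hz
    damped with time constant t2_s, sampled at fs_hz."""
    return [cmath.exp(2j * math.pi * freq_hz * (k / fs_hz))
            * math.exp(-(k / fs_hz) / t2_s)
            for k in range(n)]

def peak_frequency(samples, fs_hz):
    """Locate the strongest bin of a naive DFT and convert it to Hz."""
    n = len(samples)
    mags = []
    for b in range(n):
        acc = sum(s * cmath.exp(-2j * math.pi * b * k / n)
                  for k, s in enumerate(samples))
        mags.append(abs(acc))
    return max(range(n), key=mags.__getitem__) * fs_hz / n
```

With a 1 kHz offset sampled at 8 kHz, the recovered peak lands on the 1 kHz bin; gr-MRI's actual processing of course operates on hardware-sampled data rather than a synthetic decay.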
Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements
NASA Astrophysics Data System (ADS)
Kassab, Mohamed; Daneva, Maya; Ormandjieva, Olga
The increased awareness of non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient attention to this need. This paper presents a flexible, yet systematic approach to early requirements-based effort estimation, based on a Non-Functional Requirements ontology. It combines one standard functional size measurement model with a linear regression technique. We report on a case study which illustrates the application of our solution approach in context and also helps evaluate our experiences in using it.
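The size-to-effort relationship underlying the approach above can be illustrated with a minimal sketch: fit a linear model on historical (functional size, effort) pairs, then predict effort for a new project. The data points and person-hour scale below are invented, and a real calibration would also account for the non-functional requirements per the paper's ontology.

```python
def fit_linear(sizes, efforts):
    """Least-squares fit of effort = slope * size + intercept."""
    n = len(sizes)
    mean_s = sum(sizes) / n
    mean_e = sum(efforts) / n
    cov = sum((s - mean_s) * (e - mean_e) for s, e in zip(sizes, efforts))
    var = sum((s - mean_s) ** 2 for s in sizes)
    slope = cov / var
    return slope, mean_e - slope * mean_s

def estimate_effort(size, slope, intercept):
    """Predicted effort (person-hours) for a given functional size."""
    return slope * size + intercept
```

Fitting on three hypothetical past projects of sizes 100, 200, and 300 function points gives a line that interpolates a new 150-point project.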
Multi-Scenario Use Case based Demonstration of Buildings Cybersecurity Framework Webtool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gourisetti, Sri Nikhil G.; Mylrea, Michael E.; Gervais, Easton L.
The purpose of this paper is to demonstrate the cybersecurity and software capabilities of the Buildings Cybersecurity Framework (BCF) webtool. The webtool is designed based on the BCF document and existing NIST standards. Its capabilities and features are depicted through a building use case with four different investment scenarios geared towards improving the cybersecurity posture of the building. The BCF webtool also facilitates implementation of the goals outlined in the Presidential Executive Order (EO) on Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure (May 2017). In realization of the EO goals, BCF includes five core elements: Identify, Protect, Detect, Respond, and Recover, to help determine various policy and process level vulnerabilities and provide mitigation strategies. With the BCF webtool, an organization can perform a cybersecurity self-assessment; determine the current cybersecurity posture; define investment based goals to achieve a target state; connect the cybersecurity posture with business processes, functions, and continuity; and finally, develop plans to answer critical organizational cybersecurity questions. In this paper, the webtool and its core capabilities are depicted by performing an extensive comparative assessment over four different scenarios.
Working with the HL7 metamodel in a Model Driven Engineering context.
Martínez-García, A; García-García, J A; Escalona, M J; Parra-Calderón, C L
2015-10-01
HL7 (Health Level 7) International is an organization that defines health information standards. Most HL7 domain information models have been designed according to a proprietary graphic language whose domain models are based on the HL7 metamodel. Many researchers have considered using HL7 in the MDE (Model-Driven Engineering) context. A limitation has been identified: all MDE tools support UML (Unified Modeling Language), which is a standard model language, but most do not support the HL7 proprietary model language. We want to support software engineers without HL7 experience, allowing them to model real-world problems by defining system requirements in UML that are transparently compliant with HL7 domain models. The objective of the present research is to connect HL7 with software analysis using a generic model-based approach. This paper introduces a first approach to an HL7 MDE solution that considers the MIF (Model Interchange Format) metamodel proposed by HL7 by making use of a plug-in developed in the EA (Enterprise Architect) tool. Copyright © 2015 Elsevier Inc. All rights reserved.
Future Standardization of Space Telecommunications Radio System with Core Flight System
NASA Technical Reports Server (NTRS)
Briones, Janette C.; Hickey, Joseph P.; Roche, Rigoberto; Handler, Louis M.; Hall, Charles S.
2016-01-01
NASA Glenn Research Center (GRC) is integrating the NASA Space Telecommunications Radio System (STRS) Standard with the Core Flight System (cFS), an avionics software operating environment. The STRS standard provides a common, consistent framework to develop, qualify, operate and maintain complex, reconfigurable and reprogrammable radio systems. The cFS is a flexible, open architecture that features a plug-and-play software executive called the Core Flight Executive (cFE), a reusable library of software components for flight and space missions and an integrated tool suite. Together, STRS and cFS create a development environment that allows for STRS compliant applications to reference the STRS application programmer interfaces (APIs) that use the cFS infrastructure. These APIs are used to standardize the communication protocols on NASA's space SDRs. The cFS-STRS Operating Environment (OE) is a portable cFS library, which adds the ability to run STRS applications on existing cFS platforms. The purpose of this paper is to discuss the cFS-STRS OE prototype, preliminary experimental results performed using the Advanced Space Radio Platform (ASRP), the GRC S-band Ground Station and the SCaN (Space Communication and Navigation) Testbed currently flying onboard the International Space Station (ISS). Additionally, this paper presents a demonstration of the Consultative Committee for Space Data Systems (CCSDS) Spacecraft Onboard Interface Services (SOIS) using electronic data sheets (EDS) inside cFE. This configuration allows for the data sheets to specify binary formats for data exchange between STRS applications. The integration of STRS with cFS leverages mission-proven platform functions and mitigates barriers to integration with future missions. This reduces flight software development time and the costs of software-defined radio (SDR) platforms.
Furthermore, the combined benefits of STRS standardization with the flexibility of cFS provide an effective, reliable and modular framework to minimize software development efforts for spaceflight missions.
NASA Astrophysics Data System (ADS)
Ehlmann, Bryon K.
Current scientific experiments are often characterized by massive amounts of very complex data and the need for complex data analysis software. Object-oriented database (OODB) systems have the potential of improving the description of the structure and semantics of this data and of integrating the analysis software with the data. This dissertation results from research to enhance OODB functionality and methodology to support scientific databases (SDBs) and, more specifically, to support a nuclear physics experiments database for the Continuous Electron Beam Accelerator Facility (CEBAF). This research to date has identified a number of problems related to the practical application of OODB technology to the conceptual design of the CEBAF experiments database and other SDBs: the lack of a generally accepted OODB design methodology, the lack of a standard OODB model, the lack of a clear conceptual level in existing OODB models, and the limited support in existing OODB systems for many common object relationships inherent in SDBs. To address these problems, the dissertation describes an Object-Relationship Diagram (ORD) and an Object-oriented Database Definition Language (ODDL) that provide tools that allow SDB design and development to proceed systematically and independently of existing OODB systems. These tools define multi-level, conceptual data models for SDB design, which incorporate a simple notation for describing common types of relationships that occur in SDBs. ODDL allows these relationships and other desirable SDB capabilities to be supported by an extended OODB system. A conceptual model of the CEBAF experiments database is presented in terms of ORDs and the ODDL to demonstrate their functionality and use and provide a foundation for future development of experimental nuclear physics software using an OODB approach.
Savtchouk, Iaroslav; Carriero, Giovanni; Volterra, Andrea
2018-01-01
Recent advances in fast volumetric imaging have enabled rapid generation of large amounts of multi-dimensional functional data. While many computer frameworks exist for data storage and analysis of the multi-gigabyte Ca2+ imaging experiments in neurons, they are less useful for analyzing Ca2+ dynamics in astrocytes, where transients do not follow a predictable spatio-temporal distribution pattern. In this manuscript, we provide a detailed protocol and commentary for recording and analyzing three-dimensional (3D) Ca2+ transients through time in GCaMP6f-expressing astrocytes of adult brain slices in response to axonal stimulation, using our recently developed tools to perform interactive exploration, filtering, and time-correlation analysis of the transients. In addition to the protocol, we release our in-house software tools and discuss parameters pertinent to conducting axonal stimulation/response experiments across various brain regions and conditions. Our software tools are available from the Volterra Lab webpage at https://wwwfbm.unil.ch/dnf/group/glia-an-active-synaptic-partner/member/volterra-andrea-volterra in the form of software plugins for ImageJ (NIH), a de facto standard in scientific image analysis. Three programs are available: MultiROI_TZ_profiler for interactive graphing of several movable ROIs simultaneously, Gaussian_Filter5D for Gaussian filtering in several dimensions, and Correlation_Calculator for computing various cross-correlation parameters on voxel collections through time.
NASA Technical Reports Server (NTRS)
Park, Young W.; Montez, Moises N.
1994-01-01
A candidate onboard space navigation filter demonstrated excellent performance (less than 8 meter level RMS semi-major axis accuracy) in performing orbit determination of a low-Earth orbit Explorer satellite using single-frequency real GPS data. This performance is significantly better than predicted by other simulation studies using dual-frequency GPS data. The study results revealed the significance of two new modeling approaches evaluated in the work. One approach introduces a single-frequency ionospheric correction through pseudo-range and phase range averaging implementation. The other approach demonstrates a precise axis-dependent characterization of dynamic sample space uncertainty to compute a more accurate Kalman filter gain. Additionally, this navigation filter demonstrates a flexibility to accommodate both perturbational dynamic and observational biases required for multi-flight phase and inhomogeneous application environments. This paper reviews the potential application of these methods and the filter structure to terrestrial vehicle and positioning applications. Both the single-frequency ionospheric correction method and the axis-dependent state noise modeling approach offer valuable contributions in cost and accuracy improvements for terrestrial GPS receivers. With a modular design approach to either 'plug-in' or 'unplug' various force models, this multi-flight phase navigation filter design structure also provides a versatile GPS navigation software engine for both atmospheric and exo-atmospheric navigation or positioning use, thereby streamlining the flight phase or application-dependent software requirements. Thus, a standardized GPS navigation software engine that can reduce the development and maintenance cost of commercial GPS receivers is now possible.
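The pseudo-range and phase-range averaging mentioned above exploits the fact that, to first order, the ionosphere delays the code measurement and advances the carrier phase by equal magnitudes, so averaging the two cancels the ionospheric term. A minimal sketch of that combination (often called GRAPHIC in the GPS literature; the numeric values below are invented):

```python
def graphic_combination(pseudorange_m, carrier_phase_m):
    """Average the code pseudo-range (geometry + I) and the carrier
    phase range (geometry - I + ambiguity): the first-order ionospheric
    term I cancels, leaving geometry plus half the phase ambiguity."""
    return 0.5 * (pseudorange_m + carrier_phase_m)
```

The residual half-ambiguity is constant per pass, the kind of observational bias the filter described above is designed to accommodate as an estimated state.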
IMMAN: free software for information theory-based chemometric analysis.
Urias, Ricardo W Pino; Barigye, Stephen J; Marrero-Ponce, Yovani; García-Jacas, César R; Valdes-Martiní, José R; Perez-Gimenez, Facundo
2015-05-01
The features and theoretical background of a new and free computational program for chemometric analysis denominated IMMAN (acronym for Information theory-based CheMoMetrics ANalysis) are presented. This is multi-platform software developed in the Java programming language, designed with a remarkably user-friendly graphical interface for the computation of a collection of information-theoretic functions adapted for rank-based unsupervised and supervised feature selection tasks. A total of 20 feature selection parameters are presented, with the unsupervised and supervised frameworks represented by 10 approaches in each case. Several information-theoretic parameters traditionally used as molecular descriptors (MDs) are adapted for use as unsupervised rank-based feature selection methods. On the other hand, a generalization scheme for the previously defined differential Shannon's entropy is discussed, as well as the introduction of Jeffreys information measure for supervised feature selection. Moreover, well-known information-theoretic feature selection parameters, such as information gain, gain ratio, and symmetrical uncertainty are incorporated into the IMMAN software (http://mobiosd-hub.com/imman-soft/), following an equal-interval discretization approach. IMMAN offers data pre-processing functionalities, such as missing values processing, dataset partitioning, and browsing. Moreover, single parameter or ensemble (multi-criteria) ranking options are provided. Consequently, this software is suitable for tasks like dimensionality reduction, feature ranking, as well as comparative diversity analysis of data matrices. Simple examples of applications performed with this program are presented. A comparative study between IMMAN and WEKA feature selection tools using the Arcene dataset was performed, demonstrating similar behavior.
In addition, it is revealed that the use of IMMAN unsupervised feature selection methods improves the performance of both IMMAN and WEKA supervised algorithms. (Graphical abstract: Shannon's distribution for the MD-calculating software.)
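Among the supervised parameters listed above, information gain with equal-interval discretization is straightforward to sketch. This is a generic Python illustration, not IMMAN's Java implementation:

```python
import math

def equal_interval_bins(values, n_bins):
    """Discretize continuous values into n_bins equal-width intervals."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant feature
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    counts = {}
    for x in labels:
        counts[x] = counts.get(x, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def information_gain(feature, classes, n_bins=3):
    """H(class) minus the class entropy conditioned on the binned feature."""
    bins = equal_interval_bins(feature, n_bins)
    n = len(classes)
    cond = 0.0
    for b in set(bins):
        subset = [c for bi, c in zip(bins, classes) if bi == b]
        cond += len(subset) / n * entropy(subset)
    return entropy(classes) - cond
```

A feature whose bins separate the classes perfectly scores one full bit of gain, while a feature uncorrelated with the classes scores zero; ranking features by this score is the essence of the supervised filter approach.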
Forkert, N D; Cheng, B; Kemmling, A; Thomalla, G; Fiehler, J
2014-01-01
The objective of this work is to present the software tool ANTONIA, which has been developed to facilitate a quantitative analysis of perfusion-weighted MRI (PWI) datasets in general as well as the subsequent multi-parametric analysis of additional datasets for the specific purpose of acute ischemic stroke patient dataset evaluation. Three different methods for the analysis of DSC or DCE PWI datasets are currently implemented in ANTONIA, which can be case-specifically selected based on the study protocol. These methods comprise a curve fitting method as well as a deconvolution-based and deconvolution-free method integrating a previously defined arterial input function. The perfusion analysis is extended for the purpose of acute ischemic stroke analysis by additional methods that enable an automatic atlas-based selection of the arterial input function, an analysis of the perfusion-diffusion and DWI-FLAIR mismatch as well as segmentation-based volumetric analyses. For reliability evaluation, the described software tool was used by two observers for quantitative analysis of 15 datasets from acute ischemic stroke patients to extract the acute lesion core volume, FLAIR ratio, perfusion-diffusion mismatch volume with manually as well as automatically selected arterial input functions, and follow-up lesion volume. The results of this evaluation revealed that the described software tool leads to highly reproducible results for all parameters if the automatic arterial input function selection method is used. Due to the broad selection of processing methods that are available in the software tool, ANTONIA is especially helpful to support image-based perfusion and acute ischemic stroke research projects.
NASA Astrophysics Data System (ADS)
McKee, Shawn; Kissel, Ezra; Meekhof, Benjeman; Swany, Martin; Miller, Charles; Gregorowicz, Michael
2017-10-01
We report on the first year of the OSiRIS project (NSF Award #1541335, UM, IU, MSU and WSU) which is targeting the creation of a distributed Ceph storage infrastructure coupled together with software-defined networking to provide high-performance access for well-connected locations on any participating campus. The project's goal is to provide a single scalable, distributed storage infrastructure that allows researchers at each campus to read, write, manage and share data directly from their own computing locations. The NSF CC*DNI DIBBs program, which funded OSiRIS, is seeking solutions to the challenges of multi-institutional collaborations involving large amounts of data, and we are exploring the creative use of Ceph and networking to address those challenges. While OSiRIS will eventually be serving a broad range of science domains, its first adopter will be the LHC ATLAS detector project via the ATLAS Great Lakes Tier-2 (AGLT2) jointly located at the University of Michigan and Michigan State University. Part of our presentation will cover how ATLAS is using the OSiRIS infrastructure and our experiences integrating our first user community. The presentation will also review the motivations for and goals of the project, the technical details of the OSiRIS infrastructure, the challenges in providing such an infrastructure, and the technical choices made to address those challenges. We will conclude with our plans for the remaining 4 years of the project and our vision for what we hope to deliver by the project's end.
Utility of coupling nonlinear optimization methods with numerical modeling software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, M.J.
1996-08-05
Results of using GLO (Global Local Optimizer), a general purpose nonlinear optimization software package for investigating multi-parameter problems in science and engineering, are discussed. The package consists of the modular optimization control system (GLO), a graphical user interface (GLO-GUI), a pre-processor (GLO-PUT), a post-processor (GLO-GET), and nonlinear optimization software modules, GLOBAL & LOCAL. GLO is designed for controlling and easy coupling to any scientific software application. GLO runs the optimization module and scientific software application in an iterative loop. At each iteration, the optimization module defines new values for the set of parameters being optimized. GLO-PUT inserts the new parameter values into the input file of the scientific application. GLO runs the application with the new parameter values. GLO-GET determines the value of the objective function by extracting the results of the analysis and comparing to the desired result. GLO continues to run the scientific application over and over until it finds the "best" set of parameters by minimizing (or maximizing) the objective function. An example problem showing the optimization of a material model is presented (Taylor cylinder impact test).
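The iterate-insert-run-extract cycle described above can be sketched generically. Every name below (propose, run_application, objective) is illustrative; the real GLO modules exchange data through the application's input and output files rather than Python callables.

```python
def optimization_loop(initial_params, propose, run_application, objective,
                      iterations=50):
    """Greedy optimize-run-evaluate loop in the spirit of GLO: propose
    new parameters (the GLOBAL/LOCAL step), run the scientific
    application (GLO-PUT writes its input), score the result (GLO-GET
    extracts it), and keep the candidate only if the objective improves."""
    best = initial_params
    best_score = objective(run_application(best))
    for _ in range(iterations):
        candidate = propose(best)
        score = objective(run_application(candidate))
        if score < best_score:  # minimizing the objective
            best, best_score = candidate, score
    return best, best_score
```

For example, stepping a single parameter in fixed increments toward the minimum of (x - 3)^2 converges to x ≈ 3 and then rejects all further candidates.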
Evaluation of the Red Blood Cell Advanced Software Application on the CellaVision DM96.
Criel, M; Godefroid, M; Deckers, B; Devos, H; Cauwelier, B; Emmerechts, J
2016-08-01
The CellaVision Advanced Red Blood Cell (RBC) Software Application is a new software for advanced morphological analysis of RBCs on a digital microscopy system. Upon automated precharacterization into 21 categories, the software offers the possibility of reclassification of RBCs by the operator. We aimed to define the optimal cut-off to detect morphological RBC abnormalities and to evaluate the precharacterization performance of this software. Thirty-eight blood samples of healthy donors and sixty-eight samples of hospitalized patients were analyzed. Different methodologies to define a cut-off between negativity and positivity were used. Sensitivity and specificity were calculated according to these different cut-offs using the manual microscopic method as the gold standard. Imprecision was assessed by measuring analytical within-run and between-run variability and by measuring between-observer variability. By optimizing the cut-off between negativity and positivity, sensitivities exceeded 80% for 'critical' RBC categories (target cells, tear drop cells, spherocytes, sickle cells, and parasites), while specificities exceeded 80% for the other RBC morphological categories. Results of within-run, between-run, and between-observer variabilities were all clinically acceptable. The CellaVision Advanced RBC Software Application is an easy-to-use software that helps to detect most RBC morphological abnormalities in a sensitive and specific way without increasing work load, provided the proper cut-offs are chosen. However, evaluation of the images by an experienced observer remains necessary. © 2016 John Wiley & Sons Ltd.
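The cut-off selection described above reduces to scoring each candidate threshold against the manual gold standard. A generic sketch (not the CellaVision software; choosing the cut-off by the Youden index, sensitivity + specificity - 1, is one common convention, assumed here for illustration):

```python
def sensitivity_specificity(scores, truths, cutoff):
    """Classify score >= cutoff as positive; truths are booleans from
    the manual microscopic (gold standard) review."""
    tp = sum(1 for s, t in zip(scores, truths) if s >= cutoff and t)
    fn = sum(1 for s, t in zip(scores, truths) if s < cutoff and t)
    tn = sum(1 for s, t in zip(scores, truths) if s < cutoff and not t)
    fp = sum(1 for s, t in zip(scores, truths) if s >= cutoff and not t)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec

def best_cutoff(scores, truths):
    """Pick the observed score that maximizes sensitivity + specificity."""
    return max(set(scores),
               key=lambda c: sum(sensitivity_specificity(scores, truths, c)))
```

In practice one would sweep cut-offs per RBC category, since the paper reports different optimal trade-offs for 'critical' categories than for the rest.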
A future Outlook: Web based Simulation of Hydrodynamic models
NASA Astrophysics Data System (ADS)
Islam, A. S.; Piasecki, M.
2003-12-01
Despite recent advances to present simulation results as 3D graphs or animation contours, the modeling user community still faces some shortcomings when trying to move around and analyze data. Typical problems include the lack of common platforms with standard vocabulary to exchange simulation results from different numerical models, insufficient descriptions about data (metadata), lack of robust search and retrieval tools for data, and difficulties to reuse simulation domain knowledge. This research demonstrates how to create a shared simulation domain in the WWW and run a number of models through multi-user interfaces. Firstly, meta-datasets have been developed to describe hydrodynamic model data based on the geographic metadata standard (ISO 19115), which has been extended to satisfy the needs of the hydrodynamic modeling community. The Extensible Markup Language (XML) is used to publish this metadata via the Resource Description Framework (RDF). Specific domain ontology for Web Based Simulation (WBS) has been developed to explicitly define vocabulary for the knowledge based simulation system. Subsequently, this knowledge based system is converted into an object model using the Meta Object Facility (MOF). The knowledge based system acts as a meta model for the object oriented system, which aids in reusing the domain knowledge. Specific simulation software has been developed based on the object oriented model. Finally, all model data is stored in an object relational database. Database back-ends help store, retrieve and query information efficiently. This research uses open source software and technology such as Java Servlet and JSP, Apache web server, Tomcat Servlet Engine, PostgreSQL databases, Protégé ontology editor, RDQL and RQL for querying RDF at the semantic level, and the Jena Java API for RDF. Also, we use international standards such as the ISO 19115 metadata standard, and specifications such as XML, RDF, OWL, XMI, and UML.
The final web based simulation product is deployed as Web Archive (WAR) files, which are platform- and OS-independent and can be used on Windows, UNIX, or Linux. Keywords: Apache, ISO 19115, Java Servlet, Jena, JSP, Metadata, MOF, Linux, Ontology, OWL, PostgreSQL, Protégé, RDF, RDQL, RQL, Tomcat, UML, UNIX, Windows, WAR, XML
Intelligent Wireless Sensor Networks for System Health Monitoring
NASA Technical Reports Server (NTRS)
Alena, Rick
2011-01-01
Wireless sensor networks (WSNs) based on the IEEE 802.15.4 Personal Area Network (PAN) standard are finding increasing use in the home automation and emerging smart energy markets. The network and application layers, based on the ZigBee 2007 Standard, provide a convenient framework for component-based software that supports customer solutions from multiple vendors. WSNs provide the inherent fault tolerance required for aerospace applications. The Discovery and Systems Health Group at NASA Ames Research Center has been developing WSN technology for use aboard aircraft and spacecraft for System Health Monitoring of structures and life support systems using funding from the NASA Engineering and Safety Center and Exploration Technology Development and Demonstration Program. This technology provides key advantages for low-power, low-cost ancillary sensing systems, particularly across pressure interfaces and in areas where it is difficult to run wires. Intelligence for sensor networks could be defined as the capability of forming dynamic sensor networks, allowing high-level application software to identify and address any sensor that joined the network without the use of any centralized database defining the sensors' characteristics. The IEEE 1451 Standard defines methods for the management of intelligent sensor systems, and the IEEE 1451.4 section defines Transducer Electronic Datasheets (TEDS), which contain key information regarding the sensor's characteristics, such as name, description, serial number, calibration information, and user information such as location within a vehicle. By locating the TEDS information on the wireless sensor itself and enabling access to this information base from the application software, the application can identify the sensor unambiguously and interpret and present the sensor data stream without reference to any other information.
The application software is able to read the status of each sensor module, responding in real-time to changes of PAN configuration, providing the appropriate response for maintaining overall sensor system function, even when sensor modules fail or the WSN is reconfigured. The session will present the architecture and technical feasibility of creating fault-tolerant WSNs for aerospace applications based on our application of the technology to a Structural Health Monitoring testbed. The interim results of WSN development and testing including our software architecture for intelligent sensor management will be discussed in the context of the specific tradeoffs required for effective use. Initial certification measurement techniques and test results gauging WSN susceptibility to Radio Frequency interference are introduced as key challenges for technology adoption. A candidate Developmental and Flight Instrumentation implementation using intelligent sensor networks for wind tunnel and flight tests is developed as a guide to understanding key aspects of the aerospace vehicle design, test and operations life cycle.
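The TEDS-based self-identification idea above can be sketched as follows. The field names and dictionary representation are illustrative only; IEEE 1451.4 defines a compact binary TEDS format rather than this structure.

```python
# Hypothetical sketch: each node carries its own datasheet, so the
# application interprets a joining sensor with no central database.
class SensorNode:
    def __init__(self, teds):
        # teds: dict with name, units, serial, calibration, location...
        self.teds = teds

    def reading_in_units(self, raw):
        """Convert a raw count to engineering units using the node's
        own calibration entry."""
        cal = self.teds["calibration"]
        return raw * cal["gain"] + cal["offset"]

def on_join(network, node):
    """Register a newly joined node, keyed by the serial number read
    from its on-board TEDS."""
    network[node.teds["serial"]] = node
```

When a node drops off and rejoins after a PAN reconfiguration, the same lookup restores its identity and calibration without operator intervention, which is the fault-tolerance property the abstract emphasizes.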
A Summary of the Naval Postgraduate School Research Program
1989-08-30
Topics include: Fundamental Theory for Automatically Combining Changes to Software Systems; a Database-System Approach to Software Engineering Environments (SEEs); Multilevel Database Security; Temporal Database Management and Real-Time Database Computers; and the Multi-Lingual, Multi-Model, Multi-Backend Database.
The HomePlanet project: a HAVi multi-media network over POF
NASA Astrophysics Data System (ADS)
Roycroft, Brendan; Corbett, Brian; Kelleher, Carmel; Lambkin, John; Bareel, Baudouin; Goudeau, Jacques; Skiczuk, Peter
2005-06-01
This project has developed a low cost in-home network compatible with network standard IEEE1394b. We have developed all components of the network, from the red resonant cavity LEDs and VCSELs as light sources, the driver circuitry, plastic optical fibres for transmission, up to the network management software. We demonstrate plug-and-play operation of S100 and S200 (125 and 250 Mbps) data streams using 650 nm RCLEDs, and S400 (500 Mbps) data streams using VCSELs. The network software incorporates Home Audio Video interoperability (HAVi), which allows any HAVi device to be hot-plugged into the network and be instantly recognised and controllable over the network.
Applying Standard Interfaces to a Process-Control Language
NASA Technical Reports Server (NTRS)
Berthold, Richard T.
2005-01-01
A method of applying open-operating-system standard interfaces to the NASA User Interface Language (UIL) has been devised. UIL is a computing language that can be used in monitoring and controlling automated processes: for example, the Timeliner computer program, written in UIL, is a general-purpose software system for monitoring and controlling sequences of automated tasks in a target system. In providing the major elements of connectivity between UIL and the target system, the present method offers advantages over the prior method. Most notably, unlike in the prior method, the software description of the target system can be made independent of the applicable compiler software and need not be linked to the applicable executable compiler image. Also unlike in the prior method, it is not necessary to recompile the source code and relink the source code to a new executable compiler image. Abstraction of the description of the target system to a data file can be defined easily, with intuitive syntax, and knowledge of the source-code language is not needed for the definition.
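The abstraction of the target-system description into a data file with intuitive syntax might look like the following sketch. The "name : type @ address" syntax is invented for illustration; the actual UIL/Timeliner description format is not given in the abstract. The point is that the table is parsed at run time, so no recompilation or relinking is needed when the target system changes.

```python
def parse_target_description(text):
    """Parse a plain-text target-system description into a symbol
    table: {name: {"type": ..., "address": ...}}."""
    symbols = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments and blanks
        if not line:
            continue
        name, rest = [p.strip() for p in line.split(":", 1)]
        typ, addr = [p.strip() for p in rest.split("@", 1)]
        symbols[name] = {"type": typ, "address": int(addr, 0)}
    return symbols
```

Because the description lives in data rather than in compiled code, knowledge of the source-code language is not needed to define it, mirroring the advantage claimed above.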
Oscillator metrology with software defined radio.
Sherman, Jeff A; Jördens, Robert
2016-05-01
Analog electrical elements such as mixers, filters, transfer oscillators, isolating buffers, dividers, and even transmission lines contribute technical noise and unwanted environmental coupling in time and frequency measurements. Software defined radio (SDR) techniques replace many of these analog components with digital signal processing (DSP) on rapidly sampled signals. We demonstrate that, generically, commercially available multi-channel SDRs are capable of time and frequency metrology, outperforming purpose-built devices by as much as an order of magnitude. For example, for signals at 10 MHz and 6 GHz, we observe SDR time deviation noise floors of about 20 fs and 1 fs, respectively, in under 10 ms of averaging. Examining the other complex signal component, we find a relative amplitude measurement instability of 3 × 10⁻⁷ at 5 MHz. We discuss the scalability of a SDR-based system for simultaneous measurement of many clocks. SDR's frequency agility allows for comparison of oscillators at widely different frequencies. We demonstrate a novel and extreme example with optical clock frequencies differing by many terahertz: using a femtosecond-laser frequency comb and SDR, we show femtosecond-level time comparisons of ultra-stable lasers with zero measurement dead-time.
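The time deviation figures quoted above are a standard statistic computed from phase (time-error) samples. Below is a small sketch of the usual modified-Allan-based TDEV estimator, assuming evenly spaced phase samples x at interval tau0; this is a textbook formula, not the authors' code.

```python
import math

def time_deviation(x, tau0, m):
    """Overlapping TDEV estimate at averaging time m*tau0 from phase
    (time-error) samples x: TDEV = tau * MDEV / sqrt(3), with MDEV
    built from averaged second differences of the phase data."""
    n = len(x)
    if n < 3 * m + 1:
        raise ValueError("need at least 3*m+1 phase samples")
    acc = 0.0
    for j in range(n - 3 * m + 1):
        inner = sum(x[i + 2 * m] - 2 * x[i + m] + x[i]
                    for i in range(j, j + m))
        acc += inner * inner
    tau = m * tau0
    mdev_sq = acc / (2.0 * m ** 2 * tau ** 2 * (n - 3 * m + 1))
    return tau * math.sqrt(mdev_sq) / math.sqrt(3.0)
```

A pure frequency offset (linear phase ramp) has zero second difference and thus zero TDEV, while phase scatter produces a positive value, which is why TDEV isolates noise from deterministic frequency error.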
Software Engineering Research/Developer Collaborations (C104)
NASA Technical Reports Server (NTRS)
Shell, Elaine; Shull, Forrest
2005-01-01
The goal of this collaboration was to produce Flight Software Branch (FSB) process standards for software inspections which could be used across three new missions within the FSB. The standard was developed by Dr. Forrest Shull (Fraunhofer Center for Experimental Software Engineering, Maryland) using the Perspective-Based Inspection approach (PBI research has been funded by SARP), then tested on a pilot Branch project. Because the short time scale of the collaboration ruled out a quantitative evaluation, suitability of the standard for roll-out to other Branch projects would be decided on a qualitative measure: whether the standard received high ratings from Branch personnel as to usability and overall satisfaction. The project used for piloting the Perspective-Based Inspection approach was a multi-mission framework designed for reuse. This was a good choice because key representatives from the three new missions would be involved in the inspections. The perspective-based approach was applied to produce inspection procedures tailored for the specific quality needs of the branch. The technical information to do so was largely drawn from a series of interviews with Branch personnel. The framework team used the procedures to review requirements. The inspections were useful for indicating that a restructuring of the requirements document was needed, which led to changes in the development project plan. The standard was sent out to other Branch personnel for review. Branch personnel were very positive. However, important changes were identified because the perspective of Attitude Control System (ACS) developers had not been adequately represented, a result of the specific personnel interviewed. The net result is that with some further work to incorporate the ACS perspective, and in synchrony with the roll-out of independent Branch standards, the PBI approach will be implemented in the FSB.
Also, the project intends to continue its collaboration with the technology provider (Dr. Forrest Shull) past the end of the grant, to allow a more rigorous quantitative evaluation.
PUS Services Software Building Block Automatic Generation for Space Missions
NASA Astrophysics Data System (ADS)
Candia, S.; Sgaramella, F.; Mele, G.
2008-08-01
The Packet Utilization Standard (PUS) has been specified by the European Cooperation for Space Standardization (ECSS) and issued as ECSS-E-70-41A to define the application-level interface between Ground Segments and Space Segments. The ECSS-E-70-41A complements the ECSS-E-50 and the Consultative Committee for Space Data Systems (CCSDS) recommendations for packet telemetry and telecommand. The ECSS-E-70-41A characterizes the identified PUS Services from a functional point of view, and the ECSS-E-70-31 standard specifies the rules for their mission-specific tailoring. The current on-board software design for a space mission implies the production of several PUS terminals, each providing a specific tailoring of the PUS services. The associated on-board software building blocks are developed independently, leading to very different design choices and implementations even when the mission tailoring requires very similar services (from the Ground operative perspective). In this scenario, the automatic production of the PUS services building blocks for a mission would be a way to optimize the overall mission economy and improve the robustness and reliability of the on-board software and of the Ground-Space interactions. This paper presents the Space Software Italia (SSI) activities for the development of an integrated environment to support: the PUS services tailoring activity for a specific mission; the mission-specific PUS services configuration; and the generation of the UML model of the software building block implementing the mission-specific PUS services, along with the related source code, support documentation (software requirements, software architecture, test plans/procedures, operational manuals), and the TM/TC database.
The paper deals with: (a) the project objectives, (b) the tailoring, configuration, and generation process, (c) the description of the environments supporting the process phases, (d) the characterization of the meta-model used for the generation, and (e) the characterization of the reference avionics architecture and of the reference on-board software high-level architecture.
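The application-level interface defined by the PUS can be made concrete with a toy telecommand builder. The sketch below is an illustrative simplification, not SSI's generated code: field widths follow the CCSDS Space Packet primary header, the PUS data-field header is reduced to acknowledgement flags, service type, and subtype (no source ID or CRC), and the function names (`build_pus_tc`, `parse_service`) are hypothetical.

```python
import struct

def build_pus_tc(apid, seq_count, service, subtype, ack=0b1001, data=b""):
    """Build a simplified ECSS-E-70-41A telecommand packet."""
    # CCSDS primary header: version=0, type=1 (TC), secondary-header flag=1
    word1 = (0 << 13) | (1 << 12) | (1 << 11) | (apid & 0x7FF)
    word2 = (0b11 << 14) | (seq_count & 0x3FFF)      # unsegmented data
    # Simplified PUS data-field header: version/ack nibbles, service, subtype
    dfh = struct.pack(">BBB", 0x10 | (ack & 0x0F), service, subtype)
    body = dfh + data
    word3 = len(body) - 1                            # data length = octets - 1
    return struct.pack(">HHH", word1, word2, word3) + body

def parse_service(packet):
    """Return the (service, subtype) pair that routes the packet on board."""
    return packet[7], packet[8]
```

A generated building block for, say, service 17 (the connection test) would route on the (service, subtype) pair carried in bytes 7 and 8 of such a packet.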
NASA Astrophysics Data System (ADS)
Ziemke, Claas; Kuwahara, Toshinori; Kossev, Ivan
2011-09-01
Even in the field of small satellites, the on-board data handling subsystem has become complex and powerful. With the introduction of powerful CPUs and the availability of considerable amounts of memory on board a small satellite, it has become possible to utilize the flexibility and power of contemporary platform-independent real-time operating systems. The non-commercial sector in particular, such as university institutes and community projects like AMSAT or SSETI, is characterized by an inherent lack of financial as well as manpower resources. The opportunity to utilize such real-time operating systems will contribute significantly to achieving a successful mission. Nevertheless, the on-board software of a satellite is much more than just an operating system. It has to fulfill a multitude of functional requirements, such as telecommand interpretation and execution, execution of control loops, generation of telemetry data and frames, failure detection, isolation and recovery, communication with peripherals, and so on. Most of the aforementioned tasks are of a generic nature and have to be conducted on any satellite with only minor modifications. A general set of functional requirements as well as a protocol for communication is defined in the ECSS-E-70-41A standard "Telemetry and telecommand packet utilization". This standard not only defines the communication protocol of the satellite-ground link but also defines a set of so-called services which have to be available on board every compliant satellite and which are of a generic nature. In this paper, a platform-independent and reusable framework is described which implements not only the ECSS-E-70-41A standard but also functionalities for interprocess communication, scheduling, and a multitude of tasks commonly performed on board a satellite.
By making use of the capabilities of the high-level programming language C/C++, the powerful open source library BOOST, the real-time operating system RTEMS, and finally by providing generic functionalities compliant with the ECSS-E-70-41A standard, the proposed framework can provide a great boost in productivity. Together with open source tools such as the GNU toolchain, the Eclipse SDK, the simulation framework OpenSimKit, and the emulator QEMU, the proposed on-board software framework forms an integrated development framework. It is possible to design, code, and build the on-board software together with the operating system and then run it on a simulated satellite for performance analysis and debugging purposes. This makes it possible to rapidly develop and deploy full-fledged satellite on-board software at minimal cost and in a limited time frame.
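The generic service dispatch such a framework performs can be sketched in a few lines. This is a hypothetical illustration, not the framework's actual C++/BOOST/RTEMS implementation: telecommands are routed by their PUS (service, subtype) pair, and unknown requests produce a failure verification report in the spirit of service 1.

```python
class PusServiceRouter:
    """Route telecommands to handlers keyed by (service, subtype)."""

    def __init__(self):
        self._handlers = {}

    def register(self, service, subtype, handler):
        """Install a handler for one PUS service/subtype pair."""
        self._handlers[(service, subtype)] = handler

    def dispatch(self, service, subtype, payload):
        """Invoke the matching handler, or report a routing failure."""
        try:
            return self._handlers[(service, subtype)](payload)
        except KeyError:
            # analogous to a TM[1,8] "acceptance failure" report
            return ("TM[1,8]", "failure: unknown service")
```

Registering a connection-test handler for service 17, subtype 1, and dispatching to it is then a one-liner each.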
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sham, E; Sattarivand, M; Mulroy, L
Purpose: To evaluate planning performance of an automated treatment planning software (BrainLAB; Elements) for stereotactic radiosurgery (SRS) of multiple brain metastases. Methods: Brainlab's Multiple Metastases Elements (MME) uses a single-isocentre technique to treat up to 10 cranial planning target volumes (PTVs). The planning algorithm of the MME accounts for multiple PTVs overlapping with one another in the beam's eye view (BEV) and automatically selects a subset of all overlapping PTVs on each arc for sparing normal tissues in the brain. The algorithm also optimizes collimator angles, margins between multi-leaf collimators (MLCs) and PTVs, as well as monitor units (MUs), using minimization of the conformity index (CI) for all targets. Planning performance was evaluated by comparing the MME-calculated treatment plan parameters with the same parameters calculated with the Volumetric Modulated Arc Therapy (VMAT) optimization on Varian's Eclipse platform. Results: Figures 1 to 3 compare several treatment plan outcomes calculated between the MME and VMAT for 5 clinical multi-target SRS patient plans. Prescribed target dose was volume-dependent and defined based on the RTOG recommendation. For a total of 18 PTVs, mean values for the CI, PITV, and GI were comparable between the MME and VMAT within one standard deviation (σ). However, MME-calculated MDPD was larger than the same VMAT-calculated parameter. While both techniques delivered similar maximum point doses to the critical cranial structures and total MUs for the 5 patient plans, the MME required an order of magnitude less treatment planning time than VMAT. Conclusion: The MME and VMAT produce similar plan qualities in terms of MUs, target dose conformation, and OAR dose sparing.
While the selective use of PTVs for arc optimization with the MME significantly reduces the total planning time in comparison to VMAT, target dose homogeneity was compromised by the simplified inverse planning algorithm used.
NASA Astrophysics Data System (ADS)
André, M. P.; Galperin, M.; Berry, A.; Ojeda-Fournier, H.; O'Boyle, M.; Olson, L.; Comstock, C.; Taylor, A.; Ledgerwood, M.
Our computer-aided diagnostic (CADx) tool uses advanced image processing and artificial intelligence to analyze findings on breast sonography images. The goal is to standardize reporting of such findings using well-defined descriptors and to improve the accuracy and reproducibility of interpretation of breast ultrasound by radiologists. This study examined several factors that may impact the accuracy and reproducibility of the CADx software, which proved to be highly accurate and stable over several operating conditions.
Efficient, Multi-Scale Designs Take Flight
NASA Technical Reports Server (NTRS)
2003-01-01
Engineers can solve aerospace design problems faster and more efficiently with a versatile software product that performs automated structural analysis and sizing optimization. Collier Research Corporation's HyperSizer Structural Sizing Software is a design, analysis, and documentation tool that increases productivity and standardization for a design team. Based on established aerospace structural methods for strength, stability, and stiffness, HyperSizer can be used all the way from conceptual design to in-service support. The software originated from NASA's efforts to automate its capability to perform aircraft strength analyses, structural sizing, and weight prediction and reduction. With a strategy to combine finite element analysis with an automated design procedure, NASA's Langley Research Center led the development of a software code known as ST-SIZE from 1988 to 1995. Collier Research employees were principal developers of the code along with Langley researchers. The code evolved into one that could analyze the strength and stability of stiffened panels constructed of any material, including lightweight, fiber-reinforced composites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marney, Luke C.; Siegler, William C.; Parsons, Brendon A.
Two-dimensional (2D) gas chromatography coupled with time-of-flight mass spectrometry (GC × GC – TOFMS) is a highly capable instrumental platform that produces complex and information-rich multi-dimensional chemical data. The complex data can be overwhelming, especially when many samples (of various sample classes) are analyzed with multiple injections for each sample. Thus, the data must be analyzed in such a way as to extract the most meaningful information. The pixel-based and peak table-based algorithmic use of Fisher ratios has been used successfully in the past to reduce the multi-dimensional data down to those chemical compounds that are changing between classes relative to those that are not (i.e., chemical feature selection). We report on the initial development of a novel, computationally fast tile-based Fisher-ratio software that addresses challenges due to 2D retention time misalignment without explicitly aligning the data, which is a problem for both pixel-based and peak table-based methods. Concurrently, the tile-based Fisher-ratio software maximizes the sensitivity contrast of true positives against a background of potential false positives and noise. To study this software, eight compounds, plus one internal standard, were spiked into diesel at various concentrations. The tile-based F-ratio software was able to discover all spiked analytes, within the complex diesel sample matrix with thousands of potential false positives, in each possible concentration comparison, even at the lowest absolute spiked analyte concentration ratio of 1.06.
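The statistic underlying this feature selection is the Fisher ratio: between-class variance divided by pooled within-class variance of a signal measured across injections. Below is a minimal two-class sketch, illustrative only; the authors' tile-based method additionally sums signal inside each 2D retention-time tile first, so that small misalignments fall within a tile rather than corrupting the statistic.

```python
def fisher_ratio(class_a, class_b):
    """Two-class Fisher ratio: between-class variance over pooled
    within-class variance. Large values flag class-distinguishing signals."""
    mean = lambda xs: sum(xs) / len(xs)
    ma, mb = mean(class_a), mean(class_b)
    grand = mean(list(class_a) + list(class_b))
    # between-class sum of squares (df = k - 1 = 1 for two classes)
    between = len(class_a) * (ma - grand) ** 2 + len(class_b) * (mb - grand) ** 2
    # pooled within-class variance
    within = (sum((x - ma) ** 2 for x in class_a) +
              sum((x - mb) ** 2 for x in class_b)) / (len(class_a) + len(class_b) - 2)
    return between / within
```

A tile whose summed signal gives identical replicate values in both classes scores 0; a spiked analyte against a blank class scores orders of magnitude higher.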
Control vocabulary software designed for CMIP6
NASA Astrophysics Data System (ADS)
Nadeau, D.; Taylor, K. E.; Williams, D. N.; Ames, S.
2016-12-01
The Coupled Model Intercomparison Project Phase 6 (CMIP6) coordinates a number of intercomparison activities and includes many more experiments than its predecessor, CMIP5. In order to organize and facilitate use of the complex collection of expected CMIP6 model output, a standard set of descriptive information has been defined, which must be stored along with the data. This standard information enables automated machine interpretation of the contents of all model output files. The standard metadata is stored in compliance with the Climate and Forecast (CF) standard, which ensures that it can be interpreted and visualized by many standard software packages. Additional attributes (not standardized by CF) are required by CMIP6 to enhance identification of models and experiments, and to provide additional information critical for interpreting the model results. To ensure that CMIP6 data complies with the standards, a Python program called "PrePARE" (Pre-Publication Attribute Reviewer for the ESGF) has been developed to check the model output prior to its publication and release for analysis. If, for example, a required attribute is missing or incorrect (e.g., not included in the reference CMIP6 controlled vocabularies), then PrePARE will prevent publication. In some circumstances, missing attributes can be created or incorrect attributes can be replaced automatically by PrePARE, and the program will warn users about the changes that have been made. PrePARE provides a final check on model output, assuring baseline conformity across the output from all CMIP6 models, which will facilitate analysis by climate scientists. PrePARE is flexible and can be easily modified for use by similar projects that have a well-defined set of metadata and controlled vocabularies.
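The kind of check described can be sketched as follows. This is a hypothetical simplification: the controlled-vocabulary excerpt and the function name `check_attributes` are invented for illustration and are not taken from the actual PrePARE code, which works on netCDF file headers.

```python
REQUIRED_CV = {  # hypothetical excerpt of a controlled vocabulary
    "mip_era": {"CMIP6"},
    "frequency": {"mon", "day", "6hr"},
}

def check_attributes(attrs, cv=REQUIRED_CV):
    """Validate required global attributes against a controlled vocabulary.

    Returns (errors, fixed): errors that would block publication, and a
    copy of the attributes with unambiguous omissions auto-repaired,
    mirroring PrePARE's warn-and-fix behaviour.
    """
    errors, fixed = [], dict(attrs)
    for name, allowed in cv.items():
        value = attrs.get(name)
        if value is None:
            if len(allowed) == 1:            # only one legal value: auto-fill
                fixed[name] = next(iter(allowed))
            else:
                errors.append(f"missing attribute: {name}")
        elif value not in allowed:
            errors.append(f"invalid {name}: {value!r}")
    return errors, fixed
```

A file missing `mip_era` would be repaired silently (with a warning in the real tool), while an out-of-vocabulary `frequency` would block publication.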
Editorial: Challenges and solutions in GW calculations for complex systems
NASA Astrophysics Data System (ADS)
Giustino, F.; Umari, P.; Rubio, A.
2012-09-01
We report key advances in the area of GW calculations, review the available software implementations and define standardization criteria to render the comparison between GW calculations from different codes meaningful, and identify future major challenges in the area of quasiparticle calculations. This Topical Issue should be a reference point for further developments in the field.
Shuttle mission simulator requirements report, volume 1, revision A
NASA Technical Reports Server (NTRS)
Burke, J. F.
1973-01-01
The tasks required to design, develop, produce, and field-support a shuttle mission simulator for training crew members and ground support personnel are defined. The requirements for program management, control, systems engineering, and design and development are discussed, along with design and construction standards, software design, control and display, communication and tracking, and systems integration.
The purpose of this SOP is to define the procedures for the initial and periodic verification and validation of computer programs. The programs are used during the Arizona NHEXAS project and Border study at the Illinois Institute of Technology (IIT) site. Keywords: computers; s...
ERIC Educational Resources Information Center
Wilkinson-Riddle, G. J.; Patel, Ashok
1998-01-01
Discusses courseware development, including intelligent tutoring systems, under the Teaching and Learning Technology Programme and the Byzantium project that was designed to define computer-aided learning performance standards suitable for numerate business subjects; examine reasons to use computer-aided learning; and improve access to educational…
Ontology for Life-Cycle Modeling of Electrical Distribution Systems: Model View Definition
2013-06-01
building information models (BIM) at the coordinated design stage of building construction. 1.3 Approach To...standard for exchanging Building Information Modeling (BIM) data, which defines hundreds of classes for common use in software, currently supported by...specifications, Construction Operations Building information exchange (COBie), Building Information Modeling (BIM)
The Computational Infrastructure for Geodynamics as a Community of Practice
NASA Astrophysics Data System (ADS)
Hwang, L.; Kellogg, L. H.
2016-12-01
Computational Infrastructure for Geodynamics (CIG), geodynamics.org, originated in 2005 out of community recognition that the efforts of individual or small groups of researchers to develop scientifically sound software are impossible to sustain, duplicate effort, and make it difficult for scientists to adopt state-of-the-art computational methods that promote new discovery. As a community of practice, participants in CIG share an interest in computational modeling in geodynamics and work together on open source software to build the capacity to support complex, extensible, scalable, interoperable, reliable, and reusable software in an effort to increase the return on investment in scientific software development and increase the quality of the resulting software. The group interacts regularly to learn from each other and better their practices, formally through webinar series, workshops, and tutorials, and informally through listservs and hackathons. Over the past decade, we have learned that successful scientific software development requires at a minimum: collaboration between domain-expert researchers, software developers, and computational scientists; clearly identified and committed lead developer(s); well-defined scientific and computational goals that are regularly evaluated and updated; well-defined benchmarks and testing throughout development; attention throughout development to usability and extensibility; understanding and evaluation of the complexity of dependent libraries; and managed user expectations through education, training, and support. CIG's code donation standards provide the basis for recently formalized best practices in software development (geodynamics.org/cig/dev/best-practices/). Best practices include use of version control; widely used, open source software libraries; extensive test suites; portable configuration and build systems; extensive documentation internal and external to the code; and structured, human-readable input formats.
Multi-Quadrant Biopsy Technique Improves Diagnostic Ability in Large Heterogeneous Renal Masses.
Abel, E Jason; Heckman, Jennifer E; Hinshaw, Louis; Best, Sara; Lubner, Meghan; Jarrard, David F; Downs, Tracy M; Nakada, Stephen Y; Lee, Fred T; Huang, Wei; Ziemlewicz, Timothy
2015-10-01
Percutaneous biopsy obtained from a single location is prone to sampling error in large heterogeneous renal masses, leading to nondiagnostic results or failure to detect poor prognostic features. We evaluated the accuracy of percutaneous biopsy for large renal masses using a modified multi-quadrant technique vs a standard biopsy technique. Clinical and pathological data for all patients with cT2 or greater renal masses who underwent percutaneous biopsy from 2009 to 2014 were reviewed. The multi-quadrant technique was defined as multiple core biopsies from at least 4 separate solid enhancing areas in the tumor. The incidence of nondiagnostic findings, sarcomatoid features and procedural complications was recorded, and concordance between biopsy specimens and nephrectomy pathology was compared. A total of 122 biopsies were performed for 117 tumors in 116 patients (46 using the standard biopsy technique and 76 using the multi-quadrant technique). Median tumor size was 10 cm (IQR 8-12). Biopsy was nondiagnostic in 5 of 46 (10.9%) standard and 0 of 76 (0%) multi-quadrant biopsies (p=0.007). Renal cell carcinoma was identified in 96 of 115 (82.0%) tumors and nonrenal cell carcinoma tumors were identified in 21 (18.0%). One complication occurred using the standard biopsy technique and no complications were reported using the multi-quadrant technique. Sarcomatoid features were present in 23 of 96 (23.9%) large renal cell carcinomas studied. Sensitivity for identifying sarcomatoid features was higher using the multi-quadrant technique compared to the standard biopsy technique at 13 of 15 (86.7%) vs 2 of 8 (25.0%) (p=0.0062). The multi-quadrant percutaneous biopsy technique increases the ability to identify aggressive pathological features in large renal tumors and decreases nondiagnostic biopsy rates. Copyright © 2015 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Parallel processing architecture for H.264 deblocking filter on multi-core platforms
NASA Astrophysics Data System (ADS)
Prasad, Durga P.; Sonachalam, Sekar; Kunchamwar, Mangesh K.; Gunupudi, Nageswara Rao
2012-03-01
Massively parallel computing (multi-core) chips offer outstanding new solutions that satisfy the increasing demand for high resolution and high quality video compression technologies such as H.264. Such solutions not only provide exceptional quality but also efficiency, low power, and low latency, previously unattainable in software based designs. While custom hardware and Application Specific Integrated Circuit (ASIC) technologies may achieve low latency, low power, and real-time performance in some consumer devices, many applications require a flexible and scalable software-defined solution. The deblocking filter in H.264 encoder/decoder poses difficult implementation challenges because of heavy data dependencies and the conditional nature of the computations. Deblocking filter implementations tend to be fixed and difficult to reconfigure for different needs. The ability to scale up for higher quality requirements such as 10-bit pixel depth or a 4:2:2 chroma format often reduces the throughput of a parallel architecture designed for a lower feature set. A scalable architecture for deblocking filtering, created with a massively parallel processor based solution, means that the same encoder or decoder can be deployed in a variety of applications, at different video resolutions, for different power requirements, and at higher bit-depths and better color subsampling patterns such as YUV 4:2:2 or 4:4:4 formats. Low power, software-defined encoders/decoders may be implemented using a massively parallel processor array, like that found in HyperX technology, with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. This software programming model for massively parallel processors offers a flexible implementation and a power efficiency close to that of ASIC solutions.
This work describes a scalable parallel architecture for an H.264-compliant deblocking filter for multi-core platforms such as HyperX technology. Parallel techniques such as parallel processing of independent macroblocks, sub-blocks, and pixel rows are examined in this work. The deblocking architecture consists of a basic cell called the deblocking filter unit (DFU) and a dependent data buffer manager (DFM). The DFU can be instantiated several times, catering to different performance needs; the DFM serves the data required by the different number of DFUs and also manages all the neighboring data required for future data processing by the DFUs. This approach achieves the scalability, flexibility, and performance excellence required in deblocking filters.
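Macroblock-level parallelism of the kind examined here can be sketched generically. In deblocking, block (r, c) depends on its left and top neighbours, so all blocks on one anti-diagonal of the frame are mutually independent. The sketch below is hypothetical, not the DFU/DFM design: it simply schedules one parallel batch per diagonal.

```python
from concurrent.futures import ThreadPoolExecutor

def wavefront_order(rows, cols):
    """Anti-diagonal schedule: macroblock (r, c) depends only on its left
    and top neighbours, so each yielded batch is internally independent."""
    for d in range(rows + cols - 1):
        yield [(r, d - r) for r in range(rows) if 0 <= d - r < cols]

def filter_frame(rows, cols, filter_mb):
    """Apply filter_mb(r, c) over the frame, one parallel batch per diagonal;
    a batch only starts after the previous (dependency-supplying) batch ends."""
    with ThreadPoolExecutor() as pool:
        for diagonal in wavefront_order(rows, cols):
            list(pool.map(lambda rc: filter_mb(*rc), diagonal))
```

The number of concurrently runnable macroblocks grows with frame width, which is what lets such a filter scale across 100 or more processor elements.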
Gaertner, Jan; Siemens, Waldemar; Antes, Gerd; Meerpohl, Joerg J; Xander, Carola; Schwarzer, Guido; Stock, Stephanie; Becker, Gerhild
2015-09-25
Specialist palliative care (SPC) interventions aim to relieve and prevent suffering in the physical, psychological, social, and spiritual domain. Therefore, SPC is carried out by a multi-professional team with different occupations (e.g., physician, nurse, psychologist, and social worker). Remaining skepticism concerning the need for SPC may be based on the scarcity of high-quality evaluations about the external evidence for SPC. Therefore, we will conduct a systematic review according to Cochrane standards to examine the effects of SPC for adults with advanced illness. The comprehensive systematic literature search will include randomized controlled trials (RCTs) and cluster RCTs. We will search the databases MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials (CENTRAL), and PsycINFO. Patients must be adults suffering from life-limiting diseases. Proxy and caregiver outcomes will not be assessed in order to ensure a clear and well-defined research question for this review. Interventions may be in an in- or outpatient setting, e.g., consulting service, palliative care ward, and palliative outpatient clinic. In line with the multi-dimensional scope of palliative care, the primary outcome is quality of life (QoL). Key secondary outcomes are patients' symptom burden, place of death and survival, and health economic aspects. Subgroup analysis will assess results according to cancer type, age, early vs not early SPC, site of care, and setting. Analysis will be performed with the current RevMan software. We will use the Cochrane Collaboration risk of bias assessment tool. The quality of evidence will be judged according to the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach. The available evidence will be summarized and discussed to provide a basis for decision-making among health care professionals and policy makers. For SPC, we believe that multi-professional care is of utmost importance. 
Therefore, single-profession interventions such as physician consultations will not be included. Based on the multi-dimensional scope of palliative care, we chose QoL as the primary outcome, despite an expected heterogeneity among the QoL outcomes. We consider unidimensional endpoints such as "pain" for the physical domain to be inadequate for capturing the true scope of (S)PC (i.e., QoL) as defined by the World Health Organization. PROSPERO CRD42015020674.
Software Defined Network Monitoring Scheme Using Spectral Graph Theory and Phantom Nodes
2014-09-01
networks is the emergence of software-defined networking (SDN) [1]. SDN has existed for the...Chapter III for network monitoring. A. SOFTWARE DEFINED NETWORKS SDNs provide a new and innovative method to simplify network hardware by logically...and R. Giladi, "Performance analysis of software-defined networking (SDN)," in Proc. of IEEE 21st International Symposium on Modeling, Analysis
Blinov, Michael L.; Moraru, Ion I.
2011-01-01
Multi-state molecules and multi-component complexes are commonly involved in cellular signaling. Accounting for molecules that have multiple potential states, such as a protein that may be phosphorylated on multiple residues, and molecules that combine to form heterogeneous complexes located among multiple compartments, generates an effect of combinatorial complexity. Models involving relatively few signaling molecules can include thousands of distinct chemical species. Several software tools (StochSim, BioNetGen) are already available to deal with combinatorial complexity. Such tools need information standards if models are to be shared, jointly evaluated and developed. Here we discuss XML conventions that can be adopted for modeling biochemical reaction networks described by user-specified reaction rules. These could form a basis for possible future extensions of the Systems Biology Markup Language (SBML). PMID:21464833
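The combinatorial effect is easy to make concrete: a molecule with n independent binary modification sites gives rise to 2^n distinct chemical species, which is exactly what rule-based tools avoid writing out by hand. A toy enumeration (the function name and site labels are hypothetical):

```python
from itertools import product

def enumerate_states(sites):
    """Enumerate every species of a molecule whose sites are each either
    unmodified (False) or modified (True): 2**len(sites) species total."""
    return [dict(zip(sites, vals))
            for vals in product((False, True), repeat=len(sites))]
```

Three phosphorylation sites already yield 8 species; a dozen sites yield 4096, and complexes of such molecules multiply these counts together, so the reaction network quickly becomes too large to specify species-by-species.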
Ogawa, Yasushi; Fawaz, Farah; Reyes, Candice; Lai, Julie; Pungor, Erno
2007-01-01
Parameter settings of a parallel line analysis procedure were defined by applying statistical analysis procedures to the absorbance data from a cell-based potency bioassay for a recombinant adenovirus, Adenovirus 5 Fibroblast Growth Factor-4 (Ad5FGF-4). The parallel line analysis was performed with commercially available software, PLA 1.2. The software performs a Dixon outlier test on replicates of the absorbance data, performs linear regression analysis to define the linear region of the absorbance data, and tests parallelism between the linear regions of standard and sample. The width of the fiducial limit, expressed as a percentage of the measured potency, was developed as a criterion for rejection of assay data and was found to significantly improve the reliability of the assay results. With the linear range-finding criteria of the software set to a minimum of 5 consecutive dilutions and best statistical outcome, and in combination with the fiducial limit width acceptance criterion of <135%, 13% of the assay results were rejected. With these criteria applied, the assay was found to be linear over the range of 0.25 to 4 relative potency units, defined as the potency of the sample normalized to the potency of the Ad5FGF-4 standard containing 6 x 10^6 adenovirus particles/mL. The overall precision of the assay was estimated to be 52%. Without the application of the fiducial limit width criterion, the assay results were not linear over the range, and an overall precision of 76% was calculated from the data. An absolute unit of potency for the assay was defined by using the parallel line analysis procedure as the amount of Ad5FGF-4 that results in an absorbance value that is 121% of the average absorbance readings of the wells containing cells not infected with the adenovirus.
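The core of a parallel line analysis can be sketched as follows. This is an illustrative simplification, not the PLA 1.2 algorithm: it omits the outlier test, linear range finding, parallelism test, and fiducial limits, and just fits both log-dose response lines with a common slope so that the horizontal offset between them gives the log relative potency.

```python
def relative_potency(x_std, y_std, x_smp, y_smp):
    """Relative potency of sample vs standard from parallel log-dose lines.

    Fits y = a + b*x to both data sets with a pooled (common) slope b;
    log10(relative potency) is the horizontal shift (a2 - a1) / b.
    """
    def stats(xs, ys):
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        return mx, my, sxy, sxx

    mx1, my1, sxy1, sxx1 = stats(x_std, y_std)
    mx2, my2, sxy2, sxx2 = stats(x_smp, y_smp)
    b = (sxy1 + sxy2) / (sxx1 + sxx2)       # common slope
    a1 = my1 - b * mx1                       # standard intercept
    a2 = my2 - b * mx2                       # sample intercept
    return 10 ** ((a2 - a1) / b)             # relative potency
```

A sample that reaches the same response at half the dose of the standard comes out with a relative potency of 2.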
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simunovic, Srdjan; Piro, Markus H.A.
Thermochimica is a software library that determines a unique combination of phases and their compositions at thermochemical equilibrium. Thermochimica can be used for stand-alone calculations or it can be directly coupled to other codes. This release of the software does not have a graphical user interface (GUI) and it can be executed from the command line or from an Application Programming Interface (API). Also, it is not intended for thermodynamic model development or for constructing phase diagrams. The main purpose of the software is to be directly coupled with a multi-physics code to provide material properties and boundary conditions for various physical phenomena. Significant research efforts have been dedicated to enhancing computational performance through advanced algorithm development, such as improved estimation techniques and non-linear solvers. Various useful parameters can be provided as output from Thermochimica, such as: determination of which phases are stable at equilibrium, the mass of solution species and phases at equilibrium, mole fractions of solution phase constituents, thermochemical activities (which are related to partial pressures for gaseous species), chemical potentials of solution species and phases, and integral Gibbs energy (referenced relative to standard state). The overall goal is to provide an open source computational tool to enhance the predictive capability of multi-physics codes without significantly impeding computational performance.
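What "determining compositions at thermochemical equilibrium" means can be illustrated with a toy one-dimensional Gibbs-energy scan. This is purely a teaching sketch under ideal-mixture assumptions; Thermochimica itself uses advanced estimation techniques and non-linear solvers, not a grid search, and the function name is hypothetical.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def equilibrium_fraction(mu0_a, mu0_b, T, steps=20000):
    """Toy Gibbs-energy minimisation for A <=> B in an ideal mixture.

    Scans the mole fraction x of B and returns the composition minimising
    G(x) = (1-x)*(mu0_a + R*T*ln(1-x)) + x*(mu0_b + R*T*ln(x)).
    """
    best_x, best_g = None, float("inf")
    for i in range(1, steps):
        x = i / steps
        g = ((1 - x) * (mu0_a + R * T * math.log(1 - x))
             + x * (mu0_b + R * T * math.log(x)))
        if g < best_g:
            best_x, best_g = x, g
    return best_x
```

Setting mu0_a = mu0_b recovers the 50/50 mixture, and the scan reproduces the analytic result x/(1-x) = exp((mu0_a - mu0_b)/RT), i.e. the equilibrium constant.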
A Software Tool for Integrated Optical Design Analysis
NASA Technical Reports Server (NTRS)
Moore, Jim; Troy, Ed; DePlachett, Charles; Montgomery, Edward (Technical Monitor)
2001-01-01
Design of large precision optical systems requires multi-disciplinary analysis, modeling, and design. Thermal, structural, and optical characteristics of the hardware must be accurately understood in order to design a system capable of accomplishing the performance requirements. The interactions between the disciplines become stronger as systems are designed lighter weight for space applications. This coupling dictates a concurrent engineering design approach. In the past, integrated modeling tools have been developed that attempt to integrate all of the complex analysis within the framework of a single model. This often results in modeling simplifications, and it requires engineering specialists to learn new applications. The software described in this presentation addresses the concurrent engineering task using a different approach. The software tool, Integrated Optical Design Analysis (IODA), uses data fusion technology to enable a cross-discipline team of engineering experts to concurrently design an optical system using their standard validated engineering design tools.
The 4MOST facility control software
NASA Astrophysics Data System (ADS)
Pramskiy, Alexander; Mandel, Holger; Rothmaier, Florian; Stilz, Ingo; Winkler, Roland; Hahn, Thomas
2016-07-01
The 4-m Multi-Object Spectroscopic Telescope (4MOST) comprises one high-resolution (R ~ 18,000) and two low-resolution (R ~ 5,000) spectrographs covering the wavelength range between 390 and 950 nm. The spectrographs will be installed on the ESO VISTA telescope and will be fed by approximately 2400 fibres. The instrument is capable of simultaneously obtaining spectra of about 2400 objects distributed over a hexagonal field of view of four square degrees. This paper aims at giving an overview of the control software design, which is based on the standard ESO VLT software architecture and customised to fit the needs of the 4MOST instrument. In particular, the facility control software is intended to arrange the precise positioning of the fibres, to schedule and observe many surveys in parallel, and to combine the output from the three spectrographs. Moreover, 4MOST's software will include user-friendly graphical user interfaces that enable users to interact with the facility control system and to monitor all data-taking and calibration tasks of the instrument. A secondary guiding system will be implemented to correct for any fibre flexure and thus to improve 4MOST's guiding performance. The large number of fibres requires a custom design of data exchange to avoid performance issues. The observation sequences are designed to use the spectrographs in parallel, with synchronous points for data exchange between subsystems. In order to control hardware devices, Programmable Logic Controller (PLC) components will be used, the new standard for future instruments at ESO.
NASA Technical Reports Server (NTRS)
Jacob, Joseph; Katz, Daniel; Prince, Thomas; Berriman, Graham; Good, John; Laity, Anastasia
2006-01-01
The final version (3.0) of the Montage software has been released. To recapitulate from previous NASA Tech Briefs articles about Montage: This software generates custom, science-grade mosaics of astronomical images on demand from input files that comply with the Flexible Image Transport System (FITS) standard and contain image data registered on projections that comply with the World Coordinate System (WCS) standards. This software can be executed on single-processor computers, multi-processor computers, and such networks of geographically dispersed computers as the National Science Foundation's TeraGrid or NASA's Information Power Grid. The primary advantage of running Montage in a grid environment is that computations can be done on a remote supercomputer for efficiency. Multiple computers at different sites can be used for different parts of a computation, a significant advantage in cases of computations for large mosaics that demand more processor time than is available at any one site. Version 3.0 incorporates several improvements over prior versions. The most significant improvement is that this version is accessible to scientists located anywhere, through operational Web services that provide access to data from several large astronomical surveys and construct mosaics on either local workstations or remote computational grids as needed.
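The grid-execution idea described above, splitting one large mosaic into tiles that different processors handle independently, can be sketched generically. This is an illustration of the task-splitting pattern only, not Montage's actual API; the tile size and the per-tile work are placeholders.

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_tiles(width, height, tile):
    """Yield (x, y, w, h) sub-regions covering a width x height mosaic."""
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            yield (x, y, min(tile, width - x), min(tile, height - y))

def process_tile(region):
    """Stand-in for per-tile work (reprojection, background matching)."""
    x, y, w, h = region
    return (region, w * h)  # e.g. number of pixels resampled

def build_mosaic(width, height, tile, workers=4):
    """Fan the tiles out to a worker pool and gather the results."""
    regions = list(split_into_tiles(width, height, tile))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(process_tile, regions))
    total = sum(px for _, px in results)
    return len(regions), total

n_tiles, n_pixels = build_mosaic(1000, 800, 256)  # 16 tiles covering all 800,000 pixels
```

In a real grid setting the worker pool would be replaced by job submission to remote sites, but the decomposition and gather steps have the same shape.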
2017-03-20
Keywords: computation, prime implicates, Boolean abstraction, real-time embedded software, software synthesis, correct-by-construction software design, model… Cited work: "…types for time-dependent data-flow networks", J.-P. Talpin, P. Jouvelot, S. Shukla, ACM-IEEE Conference on Methods and Models for System Design.
NASA Astrophysics Data System (ADS)
Pillans, Luke; Harmer, Jack; Edwards, Tim; Richardson, Lee
2016-05-01
Geolocation is the process of calculating a target position based on bearing and range relative to the known location of the observer. A high-performance thermal imager with integrated geolocation functions is a powerful long-range targeting device. Firefly is a software-defined camera core incorporating a system-on-a-chip processor running the Android™ operating system. The processor has a range of industry-standard serial interfaces which were used to interface to peripheral devices, including a laser rangefinder and a digital magnetic compass. The core has a built-in Global Positioning System (GPS) receiver which provides the third variable required for geolocation. The graphical capability of Firefly allowed flexibility in the design of the man-machine interface (MMI), so the finished system can give access to extensive functionality without appearing cumbersome or over-complicated to the user. This paper covers both the hardware and software design of the system, including how the camera core influenced the selection of peripheral hardware, and the MMI design process, which incorporated user feedback at various stages.
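Geolocation as defined above, computing a target position from the observer's position plus a bearing and a range, is a standard great-circle forward calculation. The sketch below assumes a spherical Earth; the actual product may use a more precise geodetic model.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def geolocate(lat_deg, lon_deg, bearing_deg, range_m):
    """Target position from observer position, bearing (degrees clockwise
    from north) and range, via the spherical forward formula."""
    lat1 = math.radians(lat_deg)
    lon1 = math.radians(lon_deg)
    brg = math.radians(bearing_deg)
    d = range_m / EARTH_RADIUS_M  # angular distance in radians
    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

lat, lon = geolocate(0.0, 0.0, 0.0, 111195.0)  # ~1 degree due north of the origin
```

The three inputs map directly to the peripherals in the abstract: GPS gives the observer position, the magnetic compass gives the bearing, and the laser rangefinder gives the range.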
Huttary, Rudolf; Goubergrits, Leonid; Schütte, Christof; Bernhard, Stefan
2017-08-01
It has not yet been possible to obtain modeling approaches suitable for covering a wide range of real-world scenarios in cardiovascular physiology, because many of the system parameters are uncertain or even unknown. Natural variability and statistical variation of cardiovascular system parameters in healthy and diseased conditions are characteristic features for understanding cardiovascular diseases in more detail. This paper presents SISCA, a novel software framework for cardiovascular system modeling, and its MATLAB implementation. The framework defines a multi-model statistical ensemble approach for dimension-reduced, multi-compartment models and focuses on statistical variation, system identification, and patient-specific simulation based on clinical data. We also discuss a data-driven modeling scenario as a use-case example. The dataset considered originated from routine clinical examinations and comprised typical pre- and post-surgery clinical data from a patient diagnosed with coarctation of the aorta. We conducted patient- and disease-specific pre-/post-surgery modeling by adapting a validated nominal multi-compartment model with respect to structure and parametrization, using metadata and MRI geometry. In both models, the simulation reproduced measured pressures and flows fairly well with respect to stenosis and stent treatment and the pre-treatment cross-stenosis phase shift of the pulse wave. However, with post-treatment data showing unrealistic phase shifts and other more obvious inconsistencies within the dataset, the methods and results we present suggest that conditioning and uncertainty management of routine clinical data sets need significantly more attention to obtain reasonable results in patient-specific cardiovascular modeling. Copyright © 2017 Elsevier Ltd. All rights reserved.
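The simplest member of the lumped-parameter (multi-compartment) model family referred to above is the two-element Windkessel, in which one compliance C and one resistance R represent the arterial tree. The sketch below is a generic illustration of that idea, not SISCA's actual model; the inflow waveform and all parameter values are invented for demonstration.

```python
import math

def windkessel(R=1.0, C=1.5, heart_rate=75, t_end=10.0, dt=1e-3):
    """Two-element Windkessel: C dP/dt = Q_in(t) - P/R.
    Units are illustrative: Q in mL/s, R in mmHg*s/mL, C in mL/mmHg,
    P in mmHg. Integrated with forward Euler; returns (diastolic,
    systolic) pressure over the last cardiac cycle."""
    period = 60.0 / heart_rate
    p, t = 80.0, 0.0
    pressures = []
    while t < t_end:
        phase = (t % period) / period
        # half-sine inflow pulse during the first 35% of the cycle (systole)
        q_in = 400.0 * math.sin(math.pi * phase / 0.35) if phase < 0.35 else 0.0
        p += dt * (q_in - p / R) / C
        pressures.append(p)
        t += dt
    last_cycle = pressures[-int(period / dt):]
    return min(last_cycle), max(last_cycle)

p_dia, p_sys = windkessel()
```

Patient-specific fitting in a framework like SISCA amounts to adjusting parameters such as R and C (and the compartment structure) until simulated pressures and flows match the measured ones.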
MUSQA: a CS method to build a multi-standard quality management system
NASA Astrophysics Data System (ADS)
Cros, Elizabeth; Sneed, Isabelle
2002-07-01
CS Communication & Systèmes, through its long quality management experience, has been able to build and evolve its Quality Management System according to clients' requirements, norms, standards and models (ISO, DO178, ECSS, CMM, ...), evolving norms (the transition from ISO 9001:1994 to ISO 9001:2000) and the TQM approach currently being deployed. The aim of this paper is to show how, from this enriching and instructive experience, CS has defined and formalised its method: MuSQA (Multi-Standard Quality Approach). This method allows an organisation to build a new Quality Management System or to simplify and unify an existing one. MuSQA's objective is to provide any organisation with an open Quality Management System that can evolve easily and proves to be a useful instrument for everyone, operational as well as non-operational staff.
Hill, Jon; Davis, Katie E
2014-01-01
Building large supertrees involves the collection, storage, and processing of thousands of individual phylogenies to create large phylogenies with thousands to tens of thousands of taxa. Such large phylogenies are useful for macroevolutionary studies, comparative biology, and conservation and biodiversity work. No easy-to-use and fully integrated software package currently exists to carry out this task. Here, we present a new Python-based software package that uses a well-defined XML schema to manage both data and metadata. It builds on previous versions by 1) including new processing steps, such as Safe Taxonomic Reduction, 2) using a user-friendly GUI that guides the user to complete at least the minimum information required and includes context-sensitive documentation, and 3) a revised storage format that integrates both tree- and meta-data into a single file. These data can then be manipulated according to a well-defined, but flexible, processing pipeline using either the GUI or a command-line based tool. Processing steps include standardising names, deleting or replacing taxa, ensuring adequate taxonomic overlap, ensuring data independence, and safe taxonomic reduction. This software has been successfully used to store and process data consisting of over 1000 trees ready for analyses using standard supertree methods. This software makes large supertree creation a much easier task and provides far greater flexibility for further work.
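The core idea of one XML file holding both a source tree and its metadata, plus a name-standardisation processing step, can be sketched with the standard library. The element names and the Newick strings below are illustrative and are not the package's real schema.

```python
import xml.etree.ElementTree as ET

def make_source_tree(newick, author, year):
    """Bundle a tree (Newick string) and its metadata into one XML
    element, echoing the single-file tree-plus-metadata store."""
    src = ET.Element("source_tree")
    ET.SubElement(src, "tree").text = newick
    meta = ET.SubElement(src, "metadata")
    ET.SubElement(meta, "author").text = author
    ET.SubElement(meta, "year").text = str(year)
    return src

def substitute_taxon(src, old, new):
    """One processing step: standardise a taxon name in place."""
    tree_el = src.find("tree")
    tree_el.text = tree_el.text.replace(old, new)
    return src

src = make_source_tree("(Homo_sapiens,(Pan,Gorilla));", "Smith", 2010)
substitute_taxon(src, "Pan", "Pan_troglodytes")
```

Keeping tree and metadata in one element makes each processing step (substitution, deletion, overlap checks) a transformation of a single document rather than a synchronisation problem across files.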
Luna, Augustin; Karac, Evrim I; Sunshine, Margot; Chang, Lucas; Nussinov, Ruth; Aladjem, Mirit I; Kohn, Kurt W
2011-05-17
The Molecular Interaction Map (MIM) notation offers a standard set of symbols and rules on their usage for the depiction of cellular signaling network diagrams. Such diagrams are essential for disseminating biological information in a concise manner. A lack of software tools for the notation restricts wider usage of the notation. Development of software is facilitated by a more detailed specification regarding software requirements than has previously existed for the MIM notation. A formal implementation of the MIM notation was developed based on a core set of previously defined glyphs. This implementation provides a detailed specification of the properties of the elements of the MIM notation. Building upon this specification, a machine-readable format is provided as a standardized mechanism for the storage and exchange of MIM diagrams. This new format is accompanied by a Java-based application programming interface to help software developers to integrate MIM support into software projects. A validation mechanism is also provided to determine whether MIM datasets are in accordance with syntax rules provided by the new specification. The work presented here provides key foundational components to promote software development for the MIM notation. These components will speed up the development of interoperable tools supporting the MIM notation and will aid in the translation of data stored in MIM diagrams to other standardized formats. Several projects utilizing this implementation of the notation are outlined herein. The MIM specification is available as an additional file to this publication. Source code, libraries, documentation, and examples are available at http://discover.nci.nih.gov/mim.
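A validation mechanism of the kind described, checking a machine-readable MIM dataset against syntax rules, can be sketched in miniature as follows. The glyph vocabulary and the record layout here are invented for illustration and are not the actual MIM specification.

```python
# Allowed glyph types are illustrative, not the real MIM glyph set.
ALLOWED_GLYPHS = {"protein", "complex", "binding", "stimulation", "inhibition"}

def validate_mim(elements):
    """Return a list of syntax errors for a MIM-like dataset, where each
    element is a dict with 'id' and 'type' keys. An empty list means
    the dataset passes these (toy) syntax rules."""
    errors = []
    seen = set()
    for el in elements:
        if el.get("type") not in ALLOWED_GLYPHS:
            errors.append(f"{el.get('id')}: unknown glyph type {el.get('type')!r}")
        if el.get("id") in seen:
            errors.append(f"{el.get('id')}: duplicate identifier")
        seen.add(el.get("id"))
    return errors

errors = validate_mim([{"id": "p1", "type": "protein"},
                       {"id": "p1", "type": "laser"}])
# two errors: unknown glyph type and duplicate identifier
```

A real validator built on the published specification would check many more properties (glyph geometry, connection rules, required attributes), but the shape is the same: walk the dataset, accumulate rule violations, report.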
Development of a Multi-frequency Interferometer Telescope for Radio Astronomy (MITRA)
NASA Astrophysics Data System (ADS)
Ingala, Dominique Guelord Kumamputu
2015-03-01
This dissertation describes the development and construction of the Multi-frequency Interferometer Telescope for Radio Astronomy (MITRA) at the Durban University of Technology. The MITRA station consists of 2 antenna arrays separated by a baseline distance of 8 m. Each array consists of 8 Log-Periodic Dipole Antennas (LPDAs) operating from 200 MHz to 800 MHz. The design and construction of the LPDA antenna and receiver system is described. The receiver topology provides an equivalent noise temperature of 113.1 K and 55.1 dB of gain. The Intermediate Frequency (IF) stage was designed to produce a fixed IF frequency of 800 MHz. The digital back-end and correlator were implemented using a low-cost Software Defined Radio (SDR) platform and GNU Radio software. GNU Octave was used for data analysis to generate the relevant received-signal parameters, including total power and the real, imaginary, magnitude, and phase components. Measured results show that interference fringes were successfully detected within the bandwidth of the receiver using a Radio Frequency (RF) generator as a simulated source. This research was presented at the IEEE Africon 2013 / URSI Session Mauritius, and published in the proceedings.
larvalign: Aligning Gene Expression Patterns from the Larval Brain of Drosophila melanogaster.
Muenzing, Sascha E A; Strauch, Martin; Truman, James W; Bühler, Katja; Thum, Andreas S; Merhof, Dorit
2018-01-01
The larval brain of the fruit fly Drosophila melanogaster is a small, tractable model system for neuroscience. Genes for fluorescent marker proteins can be expressed in defined, spatially restricted neuron populations. Here, we introduce methods for 1) generating a standard template of the larval central nervous system (CNS), and 2) spatially mapping expression patterns from different larvae into a reference space defined by the standard template. We provide a manually annotated gold standard that serves for evaluation of the registration framework involved in template generation and mapping. A method for registration quality assessment enables the automatic detection of registration errors, and a semi-automatic registration method allows one to correct registrations, which is a prerequisite for a high-quality, curated database of expression patterns. All computational methods are available within the larvalign software package: https://github.com/larvalign/larvalign/releases/tag/v1.0.
Capurro, Daniel; Echeverry, Aisen; Figueroa, Rosa; Guiñez, Sergio; Taramasco, Carla; Galindo, César; Avendaño, Angélica; García, Alejandra; Härtel, Steffen
2017-01-01
Despite the continuous technical advancements around health information standards, a critical component to their widespread adoption involves political agreement between a diverse set of stakeholders. Countries that have addressed this issue have used diverse strategies. In this vision paper we present the path that Chile is taking to establish a national program to implement health information standards and achieve interoperability. The Chilean government established an inter-agency program to define the current interoperability situation, existing gaps, barriers, and facilitators for interoperable health information systems. As an answer to the identified issues, the government decided to fund a consortium of Chilean universities to create the National Center for Health Information Systems. This consortium should encourage the interaction between all health care stakeholders, both public and private, to advance the selection of national standards and define certification procedures for software and human resources in health information technologies.
Lee, Joong Ho; Tanaka, Eiji; Woo, Yanghee; Ali, Güner; Son, Taeil; Kim, Hyoung-Il; Hyung, Woo Jin
2017-12-01
The recent scientific and technologic advances have profoundly affected the training of surgeons worldwide. We describe a novel intraoperative real-time training module, the Advanced Robotic Multi-display Educational System (ARMES), which can provide standardized step-by-step guidance for robotic distal subtotal gastrectomy with D2 lymphadenectomy procedures. Short video clips of the 20 key steps in the standardized procedure for robotic gastrectomy were created and integrated with TilePro™ software for delivery on da Vinci Surgical Systems (Intuitive Surgical, Sunnyvale, CA). We successfully performed robotic distal subtotal gastrectomy with D2 lymphadenectomy for a patient with gastric cancer employing this new teaching method, without any transfer errors or system failures. Using this technique, the total operative time was 197 min, blood loss was 50 mL, and there were no intra- or post-operative complications. Our innovative real-time mentoring module, ARMES, enables standardized, systematic guidance during surgical procedures. © 2017 Wiley Periodicals, Inc.
MOFA Software for the COBRA Toolbox
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griesemer, Marc; Navid, Ali
MOFA-COBRA is a software code for Matlab that performs Multi-Objective Flux Analysis (MOFA), the solving of linear programming problems with multiple objectives. The leading software package for conducting different types of analyses using constraint-based models is the COBRA Toolbox for Matlab. MOFA-COBRA is an added tool for COBRA that solves multi-objective problems using a novel algorithm.
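Multi-objective flux analysis builds on multi-objective linear programming. A common entry point is weighted-sum scalarization: combine the objectives with weights and solve the resulting single-objective LP. The toy sketch below uses invented flux bounds, not a real metabolic model, and is not MOFA's novel algorithm; it enumerates polytope vertices, which suffices in two dimensions because an LP optimum always lies at a vertex.

```python
from itertools import combinations

# Toy constraint set (illustrative "flux" bounds):
#   v1 + v2 <= 10,  v1 <= 6,  v2 <= 8,  v1 >= 0,  v2 >= 0
# each written as a*v1 + b*v2 <= c.
CONSTRAINTS = [(1, 1, 10), (1, 0, 6), (0, 1, 8), (-1, 0, 0), (0, -1, 0)]

def vertices(constraints, tol=1e-9):
    """Enumerate feasible vertices of the 2-D polytope by intersecting
    constraint boundaries pairwise and keeping feasible points."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < tol:
            continue  # parallel boundaries, no intersection
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + tol for a, b, c in constraints):
            pts.append((x, y))
    return pts

def weighted_sum_optimum(w1, w2):
    """Scalarize two objectives (maximize v1 and v2) with weights and
    pick the best vertex of the feasible region."""
    return max(vertices(CONSTRAINTS), key=lambda p: w1 * p[0] + w2 * p[1])

best = weighted_sum_optimum(2.0, 1.0)  # favours v1: optimum at vertex (6, 4)
```

Sweeping the weights traces out the Pareto front of the two objectives; production tools such as the COBRA Toolbox solve the scalarized LPs with a proper solver rather than vertex enumeration.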
Prototype Packaged Databases and Software in Health
Gardenier, Turkan K.
1980-01-01
This paper describes the recent demand for packaged databases and software for health applications in light of developments in mini- and micro-computer technology. Specific features for defining prospective user groups are discussed; criticisms generated for large-scale epidemiological data use as a means of replacing clinical trials and associated controls are posed to the reader. The available collaborative efforts for access and analysis of jointly structured health data are stressed, with recommendations for new analytical techniques specifically geared to monitoring data, such as the CTSS (Cumulative Transitional State Score) generated for tracking ongoing patient status over time in clinical trials. Examples of graphic display are given from the Domestic Information Display System (DIDS), a collaborative multi-agency effort to computerize and make accessible user-specified U.S. and local maps relating to health, environment, socio-economic and energy data.
Organizational Deviance and Multi-Factor Leadership
ERIC Educational Resources Information Center
Aksu, Ali
2016-01-01
Organizational deviant behaviors can be defined as behaviors that deviate from standards and run contrary to the organization's expectations. Since such behaviors are thought to damage the organization, reducing them to a minimum level is necessary for a healthy organization. The aim of this research is…
Modeling and Analysis of Space Based Transceivers
NASA Technical Reports Server (NTRS)
Moore, Michael S.; Price, Jeremy C.; Reinhart, Richard; Liebetreu, John; Kacpura, Tom J.
2005-01-01
This paper presents the tool chain, methodology, and results of an on-going study being performed jointly by Space Communication Experts at NASA Glenn Research Center (GRC), General Dynamics C4 Systems (GD), and Southwest Research Institute (SwRI). The team is evaluating the applicability and tradeoffs concerning the use of Software Defined Radio (SDR) technologies for Space missions. The Space Telecommunications Radio Systems (STRS) project is developing an approach toward building SDR-based transceivers for space communications applications based on an accompanying software architecture that can be used to implement transceivers for NASA space missions. The study is assessing the overall cost and benefit of employing SDR technologies in general, and of developing a software architecture standard for its space SDR transceivers. The study is considering the cost and benefit of existing architectures, such as the Joint Tactical Radio Systems (JTRS) Software Communications Architecture (SCA), as well as potential new space-specific architectures.
Towards a Framework for Evaluating and Comparing Diagnosis Algorithms
NASA Technical Reports Server (NTRS)
Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia,David; Kuhn, Lukas; deKleer, Johan; vanGemund, Arjan; Feldman, Alexander
2009-01-01
Diagnostic inference involves the detection of anomalous system behavior and the identification of its cause, possibly down to a failed unit or to a parameter of a failed unit. Traditional approaches to solving this problem include expert/rule-based, model-based, and data-driven methods. Each approach (and the various techniques within each approach) uses different representations of the knowledge required to perform the diagnosis. The sensor data is expected to be combined with these internal representations to produce the diagnosis result. In spite of the availability of various diagnosis technologies, there have been only minimal efforts to develop a standardized software framework to run, evaluate, and compare different diagnosis technologies on the same system. This paper presents a framework that defines a standardized representation of the system knowledge, the sensor data, and the form of the diagnosis results, and provides a run-time architecture that can execute diagnosis algorithms, send sensor data to the algorithms at appropriate time steps from a variety of sources (including the actual physical system), and collect resulting diagnoses. We also define a set of metrics that can be used to evaluate and compare the performance of the algorithms, and provide software to calculate the metrics.
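A harness along these lines, with standardized sensor input, a pluggable algorithm, and metrics computed over the collected diagnoses, can be sketched as follows. The interface names and the detection-delay metric are illustrative, not the framework's actual API.

```python
def run_diagnosis(algorithm, sensor_stream):
    """Feed timestamped sensor samples to an algorithm and collect
    (time, diagnosis) pairs, mimicking a standardized run-time harness."""
    results = []
    for t, sample in sensor_stream:
        results.append((t, algorithm(sample)))
    return results

def detection_delay(results, fault_time):
    """Metric: time from true fault injection to the first non-nominal
    diagnosis at or after it; None if the fault is never detected."""
    for t, diagnosis in results:
        if t >= fault_time and diagnosis != "nominal":
            return t - fault_time
    return None

def threshold_algorithm(sample):
    """Trivial stand-in diagnosis algorithm: flag readings over 100.0."""
    return "overheat" if sample > 100.0 else "nominal"

stream = [(0.0, 25.0), (1.0, 30.0), (2.0, 120.0), (3.0, 130.0)]
results = run_diagnosis(threshold_algorithm, stream)
delay = detection_delay(results, fault_time=1.5)
```

Because every algorithm sees the same stream and emits results in the same form, metrics such as detection delay, false-alarm rate, or isolation accuracy can be computed uniformly across competing diagnosers.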
Scale factor measure method without turntable for angular rate gyroscope
NASA Astrophysics Data System (ADS)
Qi, Fangyi; Han, Xuefei; Yao, Yanqing; Xiong, Yuting; Huang, Yuqiong; Wang, Hua
2018-03-01
In this paper, a scale factor test method without a turntable is designed for the angular rate gyroscope. A test system is designed which consists of a test device, a data acquisition circuit, and data processing software based on the LabVIEW platform. Taking advantage of the gyroscope's sensitivity to angular rate, a gyroscope with a known scale factor serves as a standard gyroscope. The standard gyroscope is installed on the test device together with the measured gyroscope. By shaking the test device around its edge, which is parallel to the input axis of the gyroscopes, the scale factor of the measured gyroscope can be obtained in real time by the data processing software. This test method is fast, and it keeps the test system miniaturized and easy to carry or move. When the scale factor of a quartz MEMS gyroscope was measured multiple times by this method, the difference was less than 0.2%. Compared with testing on a turntable, the scale factor difference was less than 1%. The accuracy and repeatability of the test system are good.
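The ratio principle behind the method (both gyroscopes sense the same shaking rate, so the unknown scale factor follows from the known one) can be sketched as a least-squares ratio estimate. The function and the sample data below are illustrative, not the paper's actual software.

```python
def scale_factor(std_outputs, meas_outputs, std_scale_factor):
    """Estimate the measured gyro's scale factor from paired outputs of a
    standard gyro (known scale factor) and the gyro under test, both
    sensing the same angular rate:
        rate = std_output / SF_std   =>   SF_meas = meas_output / rate
    A least-squares fit of meas ~ k * std over all samples reduces noise;
    then SF_meas = k * SF_std."""
    num = sum(m * s for m, s in zip(meas_outputs, std_outputs))
    den = sum(s * s for s in std_outputs)
    return std_scale_factor * num / den

# Outputs in mV, scale factors in mV/(deg/s); values are made up.
sf = scale_factor([20.0, 40.0, 60.0], [30.0, 60.0, 90.0], std_scale_factor=2.0)
```

With the sample data the true rates are 10, 20, and 30 deg/s, so the estimate recovers a scale factor of 3.0 mV/(deg/s) for the gyro under test.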
Lopa, Silvia; Piraino, Francesco; Kemp, Raymond J; Di Caro, Clelia; Lovati, Arianna B; Di Giancamillo, Alessia; Moroni, Lorenzo; Peretti, Giuseppe M; Rasponi, Marco; Moretti, Matteo
2015-07-01
Three-dimensional (3D) culture models are widely used in basic and translational research. In this study, to generate and culture multiple 3D cell spheroids, we exploited laser ablation and replica molding for the fabrication of polydimethylsiloxane (PDMS) multi-well chips, which were validated using articular chondrocytes (ACs). Multi-well AC spheroids were comparable or superior to standard spheroids, as revealed by glycosaminoglycan and type-II collagen deposition. Moreover, the use of our multi-well chips significantly reduced the operation time for cell seeding and medium refresh. Exploiting a similar approach, we used clinical-grade fibrin to generate implantable multi-well constructs allowing for the precise distribution of multiple cell types. Multi-well fibrin constructs were seeded with ACs generating high cell density regions, as shown by histology and cell fluorescent staining. Multi-well constructs were compared to standard constructs with homogeneously distributed ACs. After 7 days in vitro, expression of SOX9, ACAN, COL2A1, and COMP was increased in both constructs, with multi-well constructs expressing significantly higher levels of chondrogenic genes than standard constructs. After 5 weeks in vivo, we found that despite a dramatic size reduction, the cell distribution pattern was maintained and glycosaminoglycan content per wet weight was significantly increased with respect to pre-implantation samples. In conclusion, multi-well chips for the generation and culture of multiple cell spheroids can be fabricated by low-cost rapid prototyping techniques. Furthermore, these techniques can be used to generate implantable constructs with defined architecture and controlled cell distribution, allowing for in vitro and in vivo investigation of cell interactions in a 3D environment. © 2015 Wiley Periodicals, Inc.
Standardization of clinical pharmacist's activities: Methodology.
Gregorová, Jana; Rychlíčková, Jitka; Šaloun, Jan
2017-09-01
The aim was to establish a standardized and controlled system of work at a clinical pharmacy department and to establish effective recording of the activities of a group of four clinical pharmacists providing clinical pharmaceutical care (CPC) in a hospital. The duration of the evaluated period was 5.5 years. The first part consisted of defining the purpose, methods, and activities of clinical pharmaceutical care; the next part was designing the software for recording patient data and CPC activities. To verify the functionality of our system, a third part was conducted (from January 1, 2015 to June 30, 2015). CPC activities were defined precisely. During the 6-month period, 3946 patients were reviewed (17% of patients admitted); in this group, 41% of patients were labeled as at-risk (these patients had one or more risk factors). A total of 1722 repeated reviews were performed and 884 drug therapy recommendations were recorded. The calculated average time necessary for one CPC activity is 28 min. Over the 5-year period, a standardized system of work in the clinical pharmacy department was established. This system is based on clearly defined activities and enables external control. Our results supply data for negotiations with health insurance companies.
An Architecture, System Engineering, and Acquisition Approach for Space System Software Resiliency
NASA Astrophysics Data System (ADS)
Phillips, Dewanne Marie
Software intensive space systems can harbor defects and vulnerabilities that may enable external adversaries or malicious insiders to disrupt or disable system functions, risking mission compromise or loss. Mitigating this risk demands a sustained focus on the security and resiliency of the system architecture, including software, hardware, and other components. Robust software engineering practices contribute to the foundation of a resilient system so that the system "can take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time". Software resiliency must be a priority and addressed early in life cycle development to contribute to a secure and dependable space system. Those who develop, implement, and operate software intensive space systems must determine the factors and systems engineering practices to address when investing in software resiliency. This dissertation offers methodical approaches for improving space system resiliency through software architecture design, system engineering, and increased software security, thereby reducing the risk of latent software defects and vulnerabilities. By providing greater attention to the early life cycle phases of development, we can alter the engineering process to help detect, eliminate, and avoid vulnerabilities before space systems are delivered. To achieve this objective, this dissertation identifies knowledge, techniques, and tools that engineers and managers can utilize to help them recognize how vulnerabilities are produced and discovered so that they can learn to circumvent them in future efforts. We conducted a systematic review of existing architectural practices, standards, security and coding practices, and the various threats, defects, and vulnerabilities that impact space systems, drawing on hundreds of relevant publications and interviews of subject matter experts.
We expanded on the system-level body of knowledge for resiliency and identified a new software architecture framework and acquisition methodology to improve the resiliency of space systems from a software perspective with an emphasis on the early phases of the systems engineering life cycle. This methodology involves seven steps: 1) Define technical resiliency requirements, 1a) Identify standards/policy for software resiliency, 2) Develop a request for proposal (RFP)/statement of work (SOW) for resilient space systems software, 3) Define software resiliency goals for space systems, 4) Establish software resiliency quality attributes, 5) Perform architectural tradeoffs and identify risks, 6) Conduct architecture assessments as part of the procurement process, and 7) Ascertain space system software architecture resiliency metrics. Data illustrates that software vulnerabilities can lead to opportunities for malicious cyber activities, which could degrade the space mission capability for the user community. Reducing the number of vulnerabilities by improving architecture and software system engineering practices can contribute to making space systems more resilient. Since cyber-attacks are enabled by shortfalls in software, robust software engineering practices and an architectural design are foundational to resiliency, which is a quality that allows the system to "take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time". To achieve software resiliency for space systems, acquirers and suppliers must identify relevant factors and systems engineering practices to apply across the lifecycle, in software requirements analysis, architecture development, design, implementation, verification and validation, and maintenance phases.
NASA Astrophysics Data System (ADS)
Orlich, A.; Hutchings, J. K.; Green, T. M.
2013-12-01
The Ice Watch Program is an open source forum to access in situ Arctic sea ice conditions. It provides the research community and additional stakeholders a convenient resource to monitor sea ice and its role in understanding the Arctic as a system, by implementing a standardized observation protocol and hosting a multi-service data portal. International vessels use the Arctic Shipborne Sea Ice Standardization Tool (ASSIST) software to report near-real-time sea ice conditions while underway. Essential observations of total ice concentration, distribution of multi-year ice and other ice types, as well as their respective stage of melt are reported. These current and historic sea ice conditions are visualized on interactive maps and in a variety of statistical analyses, with all data sets available to download for further investigation. The summer of 2012 was the debut of the ASSIST software and the Ice Watch campaign, with research vessels from six nations reporting over a wide spatio-temporal range spanning from the Beaufort Sea, across the North Pole and Arctic Basin, along the coast of Greenland, and into the Kara and Barents Seas during mid-season melt and the first stages of freeze-up. The 2013 summer field season sustained the observation and data archiving record, with participation from some of the same cruises as well as other geographic and seasonal realms covered by new users. These results are presented to illustrate the evolution of the program, increased participation, and critical statistics of ice regime change and the record of melt and freeze processes revealed by the data. As an ongoing effort, Ice Watch/ASSIST aims to standardize observations of Arctic-specific sea ice features and conditions while utilizing nomenclature and coding based on World Meteorological Organization (WMO) standards and the Antarctic Sea Ice Processes and Climate (ASPeCt) protocol.
Instigated by members of the CliC Sea Ice Working Group, the program has evolved with coordination from the International Arctic Research Center, software development by the Geographic Information Network of Alaska, and funding support from the Japanese Aerospace Exploration Agency (JAXA), the Japan Agency for Marine-Earth Science & Technology (JAMSTEC), and the National Science Foundation (NSF).
Standard Spacecraft Interfaces and IP Network Architectures: Prototyping Activities at the GSFC
NASA Technical Reports Server (NTRS)
Schnurr, Richard; Marquart, Jane; Lin, Michael
2003-01-01
Advancements in flight semiconductor technology have opened the door for IP-based networking in spacecraft architectures. The GSFC believes the same significant cost savings gained by using MIL-STD-1553/1773 as a standard low-rate interface for spacecraft busses can be realized for high-speed network interfaces. To that end, GSFC is developing hardware and software to support a seamless, space mission IP network based on Ethernet and MIL-STD-1553. The Ethernet network shall connect all flight computers and communications systems using interface standards defined by the CCSDS Spacecraft Onboard Interface (SOIF) Panel. This paper shall discuss the prototyping effort underway at GSFC and expected results.
Space Telecommunications Radio System (STRS) Architecture. Part 1; Tutorial - Overview
NASA Technical Reports Server (NTRS)
Handler, Louis M.; Briones, Janette C.; Mortensen, Dale J.; Reinhart, Richard C.
2012-01-01
The Space Telecommunications Radio System (STRS) Architecture Standard provides a NASA standard for software-defined radios. STRS is being demonstrated in the Space Communications and Navigation (SCaN) Testbed, formerly known as the Communications, Navigation and Networking Configurable Testbed (CoNNeCT). Ground station radios communicating with the SCaN Testbed are also being written to comply with the STRS architecture. The STRS Architecture Tutorial Overview presents a general introduction to the STRS architecture standard developed at the NASA Glenn Research Center (GRC), addresses frequently asked questions, and clarifies methods of implementing the standard. The STRS architecture should be used as a base for many of NASA's future telecommunications technologies. The presentation will provide a basic understanding of STRS.
Dresen, S; Ferreirós, N; Gnann, H; Zimmermann, R; Weinmann, W
2010-04-01
The multi-target screening method described in this work allows the simultaneous detection and identification of 700 drugs and metabolites in biological fluids using a hybrid triple-quadrupole linear ion trap mass spectrometer in a single analytical run. After standardization of the method, the retention times of 700 compounds were determined and transitions for each compound were selected by a "scheduled" survey MRM scan, followed by an information-dependent acquisition using the sensitive enhanced product ion scan of a Q TRAP hybrid instrument. The identification of the compounds in the samples analyzed was accomplished by searching the tandem mass spectrometry (MS/MS) spectra against the library we developed, which contains electrospray ionization-MS/MS spectra of over 1,250 compounds. The multi-target screening method together with the library was included in a software program for routine screening and quantitation to achieve automated acquisition and library searching. With the help of this software application, the time for evaluation and interpretation of the results could be drastically reduced. This new multi-target screening method has been successfully applied for the analysis of postmortem and traffic offense samples as well as proficiency testing, and complements screening with immunoassays, gas chromatography-mass spectrometry, and liquid chromatography-diode-array detection. Other possible applications are analysis in clinical toxicology (for intoxication cases), in psychiatry (antidepressants and other psychoactive drugs), and in forensic toxicology (drugs and driving, workplace drug testing, oral fluid analysis, drug-facilitated sexual assault).
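The library search step the abstract describes, matching an acquired MS/MS spectrum against reference spectra, is commonly implemented as a dot-product (cosine) score over matched peaks. The following Python sketch illustrates that idea with invented peak lists and compound names; it matches peaks on exact m/z values, whereas a real search engine would use an m/z tolerance window.

```python
import math

def cosine_similarity(spec_a, spec_b):
    """Cosine (dot-product) score between two centroided spectra,
    each given as {m/z: intensity}. Peaks are matched on exact m/z
    keys for simplicity; real search engines use an m/z tolerance."""
    shared = set(spec_a) & set(spec_b)
    dot = sum(spec_a[mz] * spec_b[mz] for mz in shared)
    norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def search_library(query, library, threshold=0.7):
    """Return (compound, score) hits above threshold, best first."""
    hits = [(name, cosine_similarity(query, ref))
            for name, ref in library.items()]
    return sorted([h for h in hits if h[1] >= threshold],
                  key=lambda h: -h[1])

# Illustrative reference spectra (not real library entries):
library = {
    "cocaine":  {82.1: 100.0, 182.1: 45.0, 105.0: 20.0},
    "diazepam": {154.1: 100.0, 193.1: 60.0, 222.1: 30.0},
}
query = {82.1: 95.0, 182.1: 50.0, 105.0: 18.0}
best = search_library(query, library)[0]
```

In a production screening workflow this scoring runs automatically over every information-dependent acquisition spectrum, which is what lets the software drastically cut evaluation time.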
Automatic Prediction of Protein 3D Structures by Probabilistic Multi-template Homology Modeling.
Meier, Armin; Söding, Johannes
2015-10-01
Homology modeling predicts the 3D structure of a query protein based on the sequence alignment with one or more template proteins of known structure. Its great importance for biological research is owed to its speed, simplicity, reliability and wide applicability, covering more than half of the residues in protein sequence space. Although multiple templates have been shown to generally increase model quality over single templates, the information from multiple templates has so far been combined using empirically motivated, heuristic approaches. We present here a rigorous statistical framework for multi-template homology modeling. First, we find that the query proteins' atomic distance restraints can be accurately described by two-component Gaussian mixtures. This insight allowed us to apply the standard laws of probability theory to combine restraints from multiple templates. Second, we derive theoretically optimal weights to correct for the redundancy among related templates. Third, a heuristic template selection strategy is proposed. We improve the average GDT-ha model quality score by 11% over single template modeling and by 6.5% over a conventional multi-template approach on a set of 1000 query proteins. Robustness with respect to wrong constraints is likewise improved. We have integrated our multi-template modeling approach with the popular MODELLER homology modeling software in our free HHpred server http://toolkit.tuebingen.mpg.de/hhpred and also offer open source software for running MODELLER with the new restraints at https://bitbucket.org/soedinglab/hh-suite.
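The combination of per-template restraints can be sketched as follows. This hypothetical Python example represents each template's distance restraint as a two-component Gaussian mixture (a sharp component plus a broad "error" component, as in the paper's model) and combines templates by a weighted sum of log-densities; the numbers and the weighting scheme are illustrative, not the authors' actual parameterization.

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, components):
    """Two-component Gaussian mixture: components = [(weight, mu, sigma), ...]."""
    return sum(w * gauss(x, mu, s) for w, mu, s in components)

def combined_log_density(x, template_restraints, template_weights):
    """Combine per-template mixture restraints by a weighted sum of
    log-densities; weights down-weight redundant, related templates."""
    return sum(w * math.log(mixture_pdf(x, r) + 1e-300)
               for r, w in zip(template_restraints, template_weights))

# Distance restraints on one atom pair from two templates (angstroms),
# each a sharp component plus a broad error component:
t1 = [(0.8, 6.0, 0.3), (0.2, 6.0, 1.5)]
t2 = [(0.8, 6.4, 0.3), (0.2, 6.4, 1.5)]
grid = [4.0 + 0.01 * i for i in range(401)]  # scan 4.0 .. 8.0 A
best = max(grid, key=lambda d: combined_log_density(d, [t1, t2], [1.0, 1.0]))
```

With equal template weights the combined restraint peaks between the two template distances, which is the qualitative behavior a statistically sound combination should show.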
NASA Technical Reports Server (NTRS)
Rasmussen, Robert; Bennett, Matthew
2006-01-01
The State Analysis Database Tool software establishes a productive environment for collaboration among software and system engineers engaged in the development of complex interacting systems. The tool embodies State Analysis, a model-based system engineering methodology founded on a state-based control architecture (see figure). A state represents a momentary condition of an evolving system, and a model may describe how a state evolves and is affected by other states. The State Analysis methodology is a process for capturing system and software requirements in the form of explicit models and states, and defining goal-based operational plans consistent with the models. Requirements, models, and operational concerns have traditionally been documented in a variety of system engineering artifacts that address different aspects of a mission s lifecycle. In State Analysis, requirements, models, and operations information are State Analysis artifacts that are consistent and stored in a State Analysis Database. The tool includes a back-end database, a multi-platform front-end client, and Web-based administrative functions. The tool is structured to prompt an engineer to follow the State Analysis methodology, to encourage state discovery and model description, and to make software requirements and operations plans consistent with model descriptions.
NASA Technical Reports Server (NTRS)
Wood, Richard J.
1992-01-01
The Architecture for Survivable Systems Processing (ASSP) program is a two phase program whose objective is the derivation, specification, development and validation of an open system architecture capable of supporting advanced processing needs of space, ground, and launch vehicle operations. The output of the first phase is a set of hardware and software standards and specifications defining this architecture at three levels. The second phase will validate these standards and develop the technology necessary to achieve strategic hardness, packaging density, throughput requirements, and interoperability/interchangeability.
Brian: a simulator for spiking neural networks in python.
Goodman, Dan; Brette, Romain
2008-01-01
"Brian" is a new simulator for spiking neural networks, written in Python (http://brian. di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.
NASA Technical Reports Server (NTRS)
Santiago-Espada, Yamira; Myer, Robert R.; Latorella, Kara A.; Comstock, James R., Jr.
2011-01-01
The Multi-Attribute Task Battery (MAT Battery), a computer-based task designed to evaluate operator performance and workload, has been redeveloped to operate in the Windows XP Service Pack 3, Windows Vista, and Windows 7 operating systems. MATB-II includes essentially the same tasks as the original MAT Battery, plus new configuration options, including a graphical user interface for controlling modes of operation. MATB-II can be executed in either training or testing mode, as defined by the MATB-II configuration file. The configuration file also allows setup of the default timeouts for the tasks and the flow rates of the pumps and tank levels of the Resource Management (RESMAN) task. MATB-II comes with a default event file that an experimenter can modify and adapt.
Analysis of Cisco Open Network Environment (ONE) OpenFlow Controller Implementation
2014-08-01
Software-Defined Networking (SDN), when fully realized, offers many improvements over the current rigid and ... functionalities like handshake, connection setup, switch management, and security. SUBJECT TERMS: OpenFlow, software-defined networking, Cisco ONE, SDN ... innovating packet-forwarding technologies. Network device roles are strictly defined with little or no flexibility. In Software-Defined Networks (SDNs), ...
Unambiguous UML Composite Structures: The OMEGA2 Experience
NASA Astrophysics Data System (ADS)
Ober, Iulian; Dragomir, Iulia
Starting from version 2.0, UML introduced hierarchical composite structures, which are a very expressive way of defining complex software architectures, but which have a very loosely defined semantics in the standard. In this paper we propose a set of consistency rules that ensure UML composite structures are unambiguous and can be given a precise semantics. Our primary application of the static consistency rules defined in this paper is within the OMEGA UML profile [6], but these rules are general and applicable to other hierarchical component models based on the same concepts, such as MARTE GCM or SysML. The rule set has been formalized in OCL and is currently used in the OMEGA UML compiler.
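A static consistency rule of the kind the paper formalizes in OCL can be illustrated in Python: the toy check below verifies that every connector joins two declared ports with complementary directions. The model representation and the specific rule are simplified stand-ins, not the OMEGA2 rule set itself.

```python
# Toy composite-structure model: parts expose ports; connectors join ports.
# One illustrative static consistency rule: every connector must join two
# ports that both exist and have complementary directions ("out" -> "in").

def check_connectors(ports, connectors):
    """ports: {(part, port_name): direction}; connectors: [(src, dst), ...].
    Returns a list of human-readable rule violations (empty = consistent)."""
    errors = []
    for src, dst in connectors:
        if src not in ports:
            errors.append(f"unknown source port {src}")
        elif dst not in ports:
            errors.append(f"unknown destination port {dst}")
        elif not (ports[src] == "out" and ports[dst] == "in"):
            errors.append(f"direction mismatch on {src} -> {dst}")
    return errors

ports = {("Sensor", "data"): "out", ("Filter", "raw"): "in"}
ok = check_connectors(ports, [(("Sensor", "data"), ("Filter", "raw"))])
bad = check_connectors(ports, [(("Sensor", "data"), ("Sensor", "data"))])
```

Running such checks before code generation is what allows a compiler, like the OMEGA UML compiler, to assign the composite structure a precise semantics.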
NASA Astrophysics Data System (ADS)
Kuhlmann, Arne; Herd, Daniel; Röβler, Benjamin; Gallmann, Eva; Jungbluth, Thomas
In pig production, software and electronic systems are widely used for process control and management. Unfortunately, most devices on farms are proprietary solutions that operate autonomously. To unify data communication of devices in agricultural husbandry, the international standard ISOagriNET (ISO 17532:2007) was developed. It defines data formats and exchange protocols to link up devices like climate controls, feeding systems, and sensors, as well as management software. The aim of the research project "Information and Data Collection in Livestock Systems" is to develop an ISOagriNET-compliant IT system, a so-called Farming Cell. It integrates all electronic components to acquire the available data and information for pig fattening. That way, an additional benefit to humans, animals, and the environment regarding process control and documentation can be generated. Developing the Farming Cell is very complex; in particular, it is difficult and time-consuming to integrate hardware and software from various vendors into an ISOagriNET-compliant IT system. This ISOagriNET prototype demonstrates, as a test environment, the potential of this new standard.
NASA Astrophysics Data System (ADS)
Krohn, Olivia; Armbruster, Aaron; Gao, Yongsheng; Atlas Collaboration
2017-01-01
Software tools developed for the purpose of modeling CERN LHC pp collision data to aid in its interpretation are presented. Some measurements are not adequately described by a Gaussian distribution; thus an interpretation assuming Gaussian uncertainties will inevitably introduce bias, necessitating analytical tools to recreate and evaluate non-Gaussian features. One example is the measurement of Higgs boson production rates in different decay channels, and the interpretation of these measurements. The ratios of data to Standard Model expectations (μ) for five arbitrary signals were modeled by building five Poisson distributions with mixed signal contributions such that the measured values of μ are correlated. Algorithms were designed to recreate probability distribution functions of μ as multi-variate Gaussians, where the standard deviation (σ) and correlation coefficients (ρ) are parametrized. The 1-D likelihood contours of μ were modeled with good success, and the multi-dimensional distributions were well modeled within 1σ, but the model began to diverge beyond 2σ due to unjustified assumptions made in developing ρ. Future plans to improve the algorithms and develop a user-friendly analysis package are also discussed. Supported by the NSF International Research Experiences for Students program.
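The parametrized multivariate Gaussian model of the measured μ values can be sketched directly from σ and ρ. The bivariate log-likelihood below uses illustrative numbers, not actual ATLAS measurements.

```python
import math

def gauss2d_loglik(mu1, mu2, x1, x2, s1, s2, rho):
    """Log-density of a bivariate Gaussian parametrised by standard
    deviations s1, s2 and correlation coefficient rho (|rho| < 1)."""
    z1, z2 = (x1 - mu1) / s1, (x2 - mu2) / s2
    q = (z1 * z1 - 2 * rho * z1 * z2 + z2 * z2) / (1 - rho * rho)
    norm = math.log(2 * math.pi * s1 * s2 * math.sqrt(1 - rho * rho))
    return -0.5 * q - norm

# Two signal strengths measured as 1.1 and 0.9 with ~10% uncertainties
# and a correlation of 0.3 between the channels (illustrative numbers):
at_meas = gauss2d_loglik(1.1, 0.9, 1.1, 0.9, 0.11, 0.09, 0.3)
away = gauss2d_loglik(1.1, 0.9, 1.4, 0.6, 0.11, 0.09, 0.3)
```

The abstract's observation is precisely that a model of this form tracks the true likelihood well within 1σ of the measured point but can diverge further out, where the Gaussian assumption breaks down.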
NASA Astrophysics Data System (ADS)
Kong, Xiang-Zhao; Tutolo, Benjamin M.; Saar, Martin O.
2013-02-01
SUPCRT92 is a widely used software package for calculating the standard thermodynamic properties of minerals, gases, aqueous species, and reactions. However, it is labor-intensive and error-prone to use it directly to produce databases for geochemical modeling programs such as EQ3/6, the Geochemist's Workbench, and TOUGHREACT. DBCreate is a SUPCRT92-based software program written in FORTRAN90/95 and was developed in order to produce the required databases for these programs in a rapid and convenient way. This paper describes the overall structure of the program and provides detailed usage instructions.
Payload transportation system study
NASA Technical Reports Server (NTRS)
1976-01-01
A standard-size set of shuttle payload transportation equipment was defined that will substantially reduce the cost of payload transportation and accommodate a wide range of payloads with minimum impact on payload design. The system was designed to accommodate payload shipments between the level 4 payload integration sites and the launch site during the calendar years 1979-1982. In addition to defining transportation multi-use mission support equipment (T-MMSE), the mode of travel, prime movers, and ancillary equipment required in the transportation process were also considered. Consistent with the STS goals of low cost and the use of standardized interfaces, the transportation system was designed to commercial-grade standards and uses the payload flight mounting interfaces for transportation. The technical, cost, and programmatic data required to permit selection of a baseline system of MMSE for intersite movement of shuttle payloads were developed.
NASA Astrophysics Data System (ADS)
Berres, A.; Karthik, R.; Nugent, P.; Sorokine, A.; Myers, A.; Pang, H.
2017-12-01
Building an integrated data infrastructure that can meet the needs of sustainable energy-water resource management requires a robust data management and geovisual analytics platform capable of cross-domain scientific discovery and knowledge generation. Such a platform can facilitate the investigation of diverse, complex research and policy questions for emerging priorities in Energy-Water Nexus (EWN) science areas. Using advanced data analytics, machine learning techniques, multi-dimensional statistical tools, and interactive geovisualization components, such a multi-layered federated platform, the Energy-Water Nexus Knowledge Discovery Framework (EWN-KDF), is being developed. This platform utilizes several enterprise-grade software design concepts and standards, such as extensible service-oriented architecture, open standard protocols, an event-driven programming model, an enterprise service bus, and adaptive user interfaces, to provide strategic value to the integrative computational and data infrastructure. EWN-KDF is built on the Compute and Data Environment for Science (CADES) environment at Oak Ridge National Laboratory (ORNL).
NASA Technical Reports Server (NTRS)
Hall, David G.; Bridges, James
1992-01-01
A sophisticated, multi-channel computerized data acquisition and processing system was developed at the NASA LeRC for use in noise experiments. This technology, which is available for transfer to industry, provides a convenient, cost-effective alternative to analog tape recording for high frequency acoustic measurements. This system provides 32-channel acquisition of microphone signals with an analysis bandwidth up to 100 kHz per channel. Cost was minimized through the use of off-the-shelf components. Requirements to allow for future expansion were met by choosing equipment which adheres to established industry standards for hardware and software. Data processing capabilities include narrow band and 1/3 octave spectral analysis, compensation for microphone frequency response/directivity, and correction of acoustic data to standard day conditions. The system was used successfully in a major wind tunnel test program at NASA LeRC to acquire and analyze jet noise data in support of the High Speed Civil Transport (HSCT) program.
15 CFR 30.37 - Miscellaneous exemptions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... requirements of the licensing Federal agency. (f) Exports of technology and software as defined in 15 CFR 772... required for mass-market software. For purposes of this part, mass-market software is defined as software... of commodities and software intended for use by individual USPPIs or by employees or representatives...
15 CFR 30.37 - Miscellaneous exemptions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... requirements of the licensing Federal agency. (f) Exports of technology and software as defined in 15 CFR 772... required for mass-market software. For purposes of this part, mass-market software is defined as software... of commodities and software intended for use by individual USPPIs or by employees or representatives...
15 CFR 30.37 - Miscellaneous exemptions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... requirements of the licensing Federal agency. (f) Exports of technology and software as defined in 15 CFR 772... required for mass-market software. For purposes of this part, mass-market software is defined as software... of commodities and software intended for use by individual USPPIs or by employees or representatives...
15 CFR 30.37 - Miscellaneous exemptions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... requirements of the licensing Federal agency. (f) Exports of technology and software as defined in 15 CFR 772... required for mass-market software. For purposes of this part, mass-market software is defined as software... of commodities and software intended for use by individual USPPIs or by employees or representatives...
NASA Astrophysics Data System (ADS)
Croitoru, Bogdan; Tulbure, Adrian; Abrudean, Mihail; Secara, Mihai
2015-02-01
The present paper describes a software method for creating and managing one type of Transducer Electronic Datasheet (TEDS) according to the IEEE 1451.4 standard, in order to develop a prototype smart multi-sensor platform (with up to ten different analog sensors connected simultaneously) with plug-and-play capabilities over Ethernet and Wi-Fi. The experiments used one analog temperature sensor, one analog light sensor, one PIC32-based microcontroller development board with analog and digital I/O ports and other computing resources, and one 24LC256 I2C (Inter-Integrated Circuit standard) serial Electrically Erasable Programmable Read-Only Memory (EEPROM) with 32 KB of available space and a 3-byte internal buffer for page writes (1 byte for data and 2 bytes for address). A prototype algorithm was developed for writing and reading TEDS information to and from I2C EEPROM memories using standard C (up to ten different TEDS blocks coexisting in the same EEPROM device at once). The algorithm is able to write and read one type of TEDS: transducer information with standard TEDS content. A second software application, written on the VB.NET platform, was developed in order to access the EEPROM sensor information from a computer through a serial interface (USB).
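The write/read algorithm can be sketched with a simulated EEPROM. This Python stand-in models the 2-byte addressing of the 24LC256 and stores each TEDS block as a 2-byte length prefix followed by the payload; that framing is an assumption for illustration, not the IEEE 1451.4 bit layout.

```python
# Simulated I2C EEPROM: a page write sends 2 address bytes then data.
# We model a 32 KB array and store TEDS blocks back to back, each as
# [2-byte length][payload], so several blocks can coexist in one device.

class SimEeprom:
    def __init__(self, size=32 * 1024):
        self.mem = bytearray(size)

    def write(self, addr, data):
        self.mem[addr:addr + len(data)] = data

    def read(self, addr, n):
        return bytes(self.mem[addr:addr + n])

def write_teds(eeprom, addr, payload):
    """Store one TEDS block at addr; return the address after the block."""
    eeprom.write(addr, len(payload).to_bytes(2, "big") + payload)
    return addr + 2 + len(payload)

def read_teds(eeprom, addr):
    """Return (payload, next_addr) for the block stored at addr."""
    n = int.from_bytes(eeprom.read(addr, 2), "big")
    return eeprom.read(addr + 2, n), addr + 2 + n

rom = SimEeprom()
a = write_teds(rom, 0, b"temp sensor, 0-100 C, 10 mV/C")
write_teds(rom, a, b"light sensor, 0-1000 lux")
first, nxt = read_teds(rom, 0)
second, _ = read_teds(rom, nxt)
```

The same round-trip logic, transcribed to C against the real I2C write/read primitives, is essentially what the prototype algorithm has to do for up to ten coexisting TEDS blocks.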
Active X based standards for healthcare integration.
Greenberg, D S; Welcker, B
1998-02-01
With cost pressures brought to the forefront by the growth of managed care, the integration of healthcare information systems is more important than ever. Providers of healthcare information are under increasing pressure to deliver timely information to end users in a cost-effective manner. Organizations have had to decide between the strong functionality that a multi-vendor 'best of breed' architecture provides and the strong integration provided by a single-vendor solution. As connectivity between systems increased, these interfaces were migrated to work across serial and, eventually, network connections. In addition, the content of the information became standardized through efforts like HL7, ANSI X12, and EDIFACT. Although content-based standards go a long way towards facilitating interoperability, considerable work is still required to connect two systems even when they both adhere to the standard. A key to accomplishing this goal is increasing the connectivity between disparate systems in the healthcare environment. Microsoft is working with healthcare organizations and independent software vendors to bring Microsoft's enterprise object technology, ActiveX, to the healthcare industry. Whilst object orientation has been heralded as the 'next big thing' in computer applications development, Microsoft believes that, in fact, component software is the technology that will provide the greatest benefit to end users.
Cost-aware request routing in multi-geography cloud data centres using software-defined networking
NASA Astrophysics Data System (ADS)
Yuan, Haitao; Bi, Jing; Li, Bo Hu; Tan, Wei
2017-03-01
Current geographically distributed cloud data centres (CDCs) incur enormous energy and bandwidth costs to provide multiple cloud applications to users around the world. Previous studies focus only on energy cost minimisation in distributed CDCs. However, a CDC provider needs to deliver massive amounts of data between users and distributed CDCs through internet service providers (ISPs). The geographical diversity of bandwidth and energy costs poses the highly challenging problem of how to minimise the total cost of a CDC provider. Building on the recently emerged software-defined networking, we study the total cost minimisation problem for a CDC provider by exploiting the geographical diversity of energy and bandwidth costs. We formulate the total cost minimisation problem as a mixed-integer non-linear program (MINLP). We then develop heuristic algorithms to solve the problem and to provide cost-aware request routing for the joint optimisation of the selection of ISPs and the number of servers in distributed CDCs. In addition, to tackle the dynamic workload in distributed CDCs, this article proposes a regression-based workload prediction method to obtain the future incoming workload. Finally, this work evaluates the cost-aware request routing by trace-driven simulation and compares it with existing approaches to demonstrate its effectiveness.
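A greedy version of cost-aware routing can be sketched as below: for each user region (heaviest demand first), pick the cheapest (ISP, CDC) pair that still has capacity. This is a simplified stand-in for the paper's heuristics, with invented prices and capacities.

```python
# Each region's requests can be served by any (ISP, CDC) pair; cost per
# unit of traffic = ISP bandwidth price + CDC energy price. A greedy
# assignment serves the heaviest region first with the cheapest
# feasible pair that has remaining capacity.

def route_requests(regions, isps, cdcs):
    """regions: {name: demand}; isps: {name: (price, capacity)};
    cdcs: {name: (price, capacity)}. Returns ({region: (isp, cdc)}, cost)."""
    isp_cap = {k: c for k, (_, c) in isps.items()}
    cdc_cap = {k: c for k, (_, c) in cdcs.items()}
    plan, total = {}, 0.0
    for region, demand in sorted(regions.items(), key=lambda kv: -kv[1]):
        choices = [(isps[i][0] + cdcs[d][0], i, d)
                   for i in isps if isp_cap[i] >= demand
                   for d in cdcs if cdc_cap[d] >= demand]
        price, i, d = min(choices)          # cheapest feasible pair
        isp_cap[i] -= demand
        cdc_cap[d] -= demand
        plan[region] = (i, d)
        total += price * demand
    return plan, total

plan, cost = route_requests(
    regions={"eu": 40, "us": 60},
    isps={"ispA": (2.0, 100), "ispB": (1.0, 60)},
    cdcs={"dc1": (3.0, 80), "dc2": (2.5, 60)},
)
```

Greedy assignment gives no optimality guarantee for the underlying MINLP, which is why the paper develops dedicated heuristics and validates them by trace-driven simulation.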
Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo
2014-04-21
Software defined networking (SDN) has become the focus in the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. Optical transport networks are also regarded as an important application scenario for SDN, which is adopted as the enabling technology of data communication networks (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN-based DCN for large-scale optical networks, which is very important for technology selection in future optical network deployment, has not been evaluated until now. In this paper we have built a large-scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN-based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters, including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time, have been demonstrated under various network environments, such as different traffic loads and different DCN bandwidths. The demonstration in this work can serve as a reference for future network deployment.
Multi-version software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1989-01-01
A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and the coverage provided by the testing process as measured by different metrics (including data-flow metrics). Considerable correlation was found between the coverage provided by some higher metrics and the elimination of faults in the code. Work continued on back-to-back testing as an efficient mechanism for the removal of uncorrelated faults and common-cause faults of variable span, and on software reliability estimation methods based on non-random sampling and the relationship between software reliability and the code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were completed, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance Voting scheme.
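Acceptance Voting can be sketched in a few lines: each version's output must first pass an acceptance test, and a majority vote is then taken over the accepted outputs only. The acceptance predicate and outputs below are illustrative, not from the study.

```python
# Acceptance voting for multi-version software: filter each version's
# output through an acceptance test, then majority-vote over the
# accepted outputs; return None if no safe answer can be produced.

def acceptance_vote(outputs, accept):
    accepted = [o for o in outputs if accept(o)]
    if not accepted:
        return None  # no acceptable result -> signal failure upward
    best = max(set(accepted), key=accepted.count)
    return best if accepted.count(best) * 2 > len(accepted) else None

# Three versions compute sqrt(2); one has a fault producing a value
# the acceptance test (a plausibility range) rejects:
outputs = [1.41421356, 1.41421356, -7.0]
result = acceptance_vote(outputs, accept=lambda o: 0 < o < 2)
```

The safety property the abstract highlights shows up here: a grossly wrong output never reaches the vote, and when no output is acceptable the scheme fails explicitly rather than returning a bad value.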
Quality control and assurance for validation of DOS/I measurements
NASA Astrophysics Data System (ADS)
Cerussi, Albert; Durkin, Amanda; Kwong, Richard; Quang, Timothy; Hill, Brian; Tromberg, Bruce J.; MacKinnon, Nick; Mantulin, William W.
2010-02-01
Ongoing multi-center clinical trials are crucial for Biophotonics to gain acceptance in medical imaging. In these trials, quality control (QC) and assurance (QA) are key to success and provide "data insurance". Quality control and assurance deal with standardization, validation, and compliance of procedures, materials and instrumentation. Specifically, QC/QA involves systematic assessment of testing materials, instrumentation performance, standard operating procedures, data logging, analysis, and reporting. QC and QA are important for FDA accreditation and acceptance by the clinical community. Our Biophotonics research in the Network for Translational Research in Optical Imaging (NTROI) program for breast cancer characterization focuses on QA/QC issues primarily related to the broadband Diffuse Optical Spectroscopy and Imaging (DOS/I) instrumentation, because this is an emerging technology with limited standardized QC/QA in place. In the multi-center trial environment, we implement the following QA/QC procedures:
1. Standardize and validate calibration standards and procedures. (DOS/I technology requires both frequency-domain and spectral calibration procedures, using tissue-simulating phantoms and reflectance standards, respectively.)
2. Standardize and validate data acquisition, processing and visualization (optimize instrument software, EZDOS; centralize data processing).
3. Monitor, catalog and maintain instrument performance (document performance; modularize maintenance; integrate new technology).
4. Standardize and coordinate trial data entry (from individual sites) into a centralized database.
5. Monitor, audit and communicate all research procedures (database, teleconferences, training sessions) between participants, ensuring "calibration".
This manuscript describes our ongoing efforts, successes and challenges implementing these strategies.
Considerations for Software Defined Networking (SDN): Approaches and use cases
NASA Astrophysics Data System (ADS)
Bakshi, K.
Software Defined Networking (SDN) is an evolutionary approach to network design and functionality based on the ability to programmatically modify the behavior of network devices. SDN uses user-customizable and configurable software, independent of hardware, to enable networked systems to expand data flow control. SDN is in large part about understanding and managing a network as a unified abstraction. It will make networks more flexible, dynamic, and cost-efficient, while greatly simplifying operational complexity. This approach provides several benefits, including network and service customizability, configurability, improved operations, and increased performance. There are several approaches to SDN and its practical implementation, among which two have risen to prominence, with differences in pedigree and implementation. This paper's main focus is to define, review, and evaluate salient approaches and use cases of the OpenFlow and Virtual Network Overlay approaches to SDN. OpenFlow is a communication protocol that gives access to the forwarding plane of a network's switches and routers. The Virtual Network Overlay relies on a completely virtualized network infrastructure and services to abstract the underlying physical network, which allows the overlay to be moved to other physical networks. This is an important requirement for cloud computing, where applications and associated network services are migrated to cloud service providers and remote data centers on the fly as resource demands dictate. The paper will discuss how and where SDN can be applied and implemented, including research and academia, virtual multi-tenant data centers, and cloud computing applications. Specific attention will be given to the cloud computing use case, where automated provisioning and a programmable overlay for scalable multi-tenancy are leveraged via the SDN approach.
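OpenFlow's match-action model can be illustrated with a toy flow table: each rule carries a match (with wildcards), a priority, and an action, and a table miss is punted to the controller. This sketch is a conceptual illustration in Python, not the OpenFlow wire protocol.

```python
# Minimal flow-table sketch: a rule's match maps field -> required value
# (missing fields act as wildcards). The "switch" applies the highest-
# priority matching rule; on a table miss, the packet goes to the
# controller, which may then install a new rule.

def matches(rule_match, packet):
    return all(packet.get(f) == v for f, v in rule_match.items())

def apply_table(table, packet, default="send_to_controller"):
    live = [r for r in table if matches(r["match"], packet)]
    if not live:
        return default  # table miss -> punt to the controller
    return max(live, key=lambda r: r["priority"])["action"]

table = [
    {"match": {"dst_ip": "10.0.0.2"}, "priority": 10, "action": "output:2"},
    {"match": {"dst_ip": "10.0.0.2", "tcp_dst": 22}, "priority": 20, "action": "drop"},
]
a1 = apply_table(table, {"dst_ip": "10.0.0.2", "tcp_dst": 80})
a2 = apply_table(table, {"dst_ip": "10.0.0.2", "tcp_dst": 22})
a3 = apply_table(table, {"dst_ip": "10.0.0.9"})
```

The key architectural point is that the table's contents are written by software running elsewhere, which is what makes the forwarding plane programmable.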
Lessons Learned on Quality (of) Standards
NASA Astrophysics Data System (ADS)
Gerlich, Rainer; Gerlich, Ralf
2011-08-01
Standards are used to describe and ensure the quality of products, services and processes throughout almost all branches of industry, including the field of software engineering. Contractors and suppliers are obligated by their customers and certification authorities to follow a certain set of standards during development. For example, a customer can more easily participate in and control the contractor's process when enforcing a standard process. However, as with any requirement, a standard may also impede the contractor or supplier in assuring the actual quality of the product in the sense of fitness for the purpose intended by the customer. This is the case when a standard defines specific quality assurance activities requiring a considerable amount of effort while other more efficient but equivalent or even superior approaches are blocked. Improvement of the ratio between cost and quality beyond minuscule advances is then heavily impeded. While in some parts being too specific in defining the mechanisms of the enforced process, standards are sometimes too weak in defining the principles or goals of control of product quality. Therefore this paper addresses the following issues: (1) Which conclusions can be drawn on the quality and efficiency of a standard? (2) If and how is it possible to improve or evolve a standard? (3) How well does a standard guide a user towards high quality of the end product? One conclusion is that the analyzed standards do interfere with technological innovation, even though the standards leave a lot of freedom for concretization and are understood as technology-independent. Another conclusion is that standards are not only a matter of quality but also a matter of competitiveness of the industry, depending on resulting costs and time-to-market. When the costs induced by a standard are not adequate to the achievable quality, industry encounters a significant disadvantage.
Development of NASA's Next Generation L-Band Digital Beamforming Synthetic Aperture Radar (DBSAR-2)
NASA Technical Reports Server (NTRS)
Rincon, Rafael; Fatoyinbo, Temilola; Osmanoglu, Batuhan; Lee, Seung-Kuk; Ranson, K. Jon; Marrero, Victor; Yeary, Mark
2014-01-01
NASA's next-generation Digital Beamforming SAR (DBSAR-2) is a state-of-the-art airborne L-band radar developed at the NASA Goddard Space Flight Center (GSFC). The instrument builds upon the advanced architectures of NASA's DBSAR-1 and EcoSAR instruments. The new instrument employs a 16-channel radar architecture characterized by multi-mode operation, software-defined waveform generation, digital beamforming, and configurable radar parameters. The instrument has been designed to support several disciplines in Earth and planetary sciences. The instrument was recently completed, then tested and calibrated in an anechoic chamber.
Data format standard for sharing light source measurements
NASA Astrophysics Data System (ADS)
Gregory, G. Groot; Ashdown, Ian; Brandenburg, Willi; Chabaud, Dominique; Dross, Oliver; Gangadhara, Sanjay; Garcia, Kevin; Gauvin, Michael; Hansen, Dirk; Haraguchi, Kei; Hasna, Günther; Jiao, Jianzhong; Kelley, Ryan; Koshel, John; Muschaweck, Julius
2013-09-01
Optical design requires accurate characterization of light sources for computer aided design (CAD) software. Various methods have been used to model sources, from accurate physical models to measurement of light output. It has become common practice for designers to include measured source data for design simulations. Typically, a measured source will contain rays which sample the output distribution of the source. The ray data must then be exported to various formats suitable for import into optical analysis or design software. Source manufacturers are also making measurements of their products and supplying CAD models along with ray data sets for designers. The increasing availability of data has been beneficial to the design community but has caused a large expansion in storage needs for the source manufacturers since each software program uses a unique format to describe the source distribution. In 2012, the Illuminating Engineering Society (IES) formed a working group to understand the data requirements for ray data and recommend a standard file format. The working group included representatives from software companies supplying the analysis and design tools, source measurement companies providing metrology, source manufacturers creating the data and users from the design community. Within one year the working group proposed a file format which was recently approved by the IES for publication as TM-25. This paper will discuss the process used to define the proposed format, highlight some of the significant decisions leading to the format and list the data to be included in the first version of the standard.
Characterization and demonstration of a 12-channel Laser-Doppler vibrometer
NASA Astrophysics Data System (ADS)
Haist, T.; Lingel, C.; Osten, W.; Bendel, K.; Giesen, M.; Gartner, M.; Rembe, C.
2013-04-01
Scanning laser-Doppler vibrometry is the standard optical, non-contact technology for vibration measurement applications in all areas of mechanical engineering. The vibration signals are measured from the different measurement points at different time points. This requires synchronization and the technology is limited to repeatable or periodic events. We have explored a new solution for the optical setup of the sensing system of a multi-channel vibrometer that we present in this paper. Our optical system is a 12-channel vibrometer and consists of a 12-channel interferometer unit which is connected with 12 optical fibers to a sensor head with 12 fiber-coupled objective lenses. Every objective lens can be focused manually and is placed in a sphere which can be tilted and fixed by a blocking screw. Thus it is possible to adjust a user defined measurement grid by hand. The user can define the geometry of the measurement grid in a camera image displayed in the software by just clicking on the laser foci. We use synchronous analog-digital conversion for the 12 heterodyne detector signals and a digital 12-channel-demodulator which is connected via USB to a computer. We can realize high deflection angles, good sensitivity, proper resolution, sufficient vibration bandwidth, and high maximum vibration amplitudes. In this paper, we demonstrate the optical and electrical setup of the manually adjustable 12-channel vibrometer, we present the experimentally evaluated performance of our device, and we present first measurements from real automotive applications.
Perfect count: a novel approach for the single platform enumeration of absolute CD4+ T-lymphocytes.
Storie, Ian; Sawle, Alex; Goodfellow, Karen; Whitby, Liam; Granger, Vivian; Ward, Rosalie Y; Peel, Janet; Smart, Theresa; Reilly, John T; Barnett, David
2004-01-01
The derivation of reliable CD4(+) T lymphocyte counts is vital for the monitoring of disease progression and therapeutic effectiveness in HIV(+) individuals. Flow cytometry has emerged as the method of choice for CD4(+) T lymphocyte enumeration, with single-platform technology, coupled with reference counting beads, fast becoming the "gold standard." However, although single-platform, bead-based sample acquisition requires the ratio of beads to cells to remain unchanged, until recently there has been no available method to monitor this. Perfect Count beads have been developed to address this issue; they incorporate two bead populations, with different densities, to allow the detection of inadequate mixing. Comparison of the relative proportions of both beads with the manufacturer's defined limits enables an internal QC check during sample acquisition. In this study, we have compared CD4(+) T lymphocyte counts, obtained from 104 HIV(+) patients, using TruCount beads with MultiSet software (defined as the predicate method) and the new Perfect Count beads, incorporating an in-house sequential gating strategy. We have demonstrated an excellent degree of correlation between the predicate method and the Perfect Count system (r(2) = 0.9955; Bland-Altman bias +27 CD4(+) T lymphocytes/microl). The Perfect Count system is a robust method for performing single-platform absolute counts and has the added advantage of internal QC checks. Such an approach enables the operator to identify potential problems during sample preparation, acquisition, and analysis. Copyright 2003 Wiley-Liss, Inc.
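The single-platform arithmetic and the dual-bead QC check described above can be sketched as follows; the bead concentration, event counts, and tolerance are illustrative values, not figures from the study.

```python
def absolute_count(cells_gated, beads_gated, beads_per_ul):
    """Single-platform absolute count: cells/uL = (cell events / bead events)
    multiplied by the known bead concentration of the tube."""
    return cells_gated / beads_gated * beads_per_ul

def bead_ratio_ok(low_density_beads, high_density_beads,
                  expected_ratio=1.0, tolerance=0.15):
    """QC check: the two bead populations of different densities should stay
    near their expected ratio; drift suggests inadequate mixing during
    acquisition. Expected ratio and tolerance here are hypothetical, not
    the manufacturer's actual limits."""
    ratio = low_density_beads / high_density_beads
    return abs(ratio - expected_ratio) <= tolerance * expected_ratio

# Hypothetical acquisition: 5,000 CD4+ events, 10,000 bead events, 52 beads/uL
cd4_per_ul = absolute_count(5000, 10000, 52)  # 26.0 cells/uL
```

Because the count depends only on the cell-to-bead event ratio, a mixing failure that changes that ratio silently corrupts the result, which is exactly what the two-population check is meant to catch.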
Sun, Jie; Li, Zhengdong; Pan, Shaoyou; Feng, Hao; Shao, Yu; Liu, Ningguo; Huang, Ping; Zou, Donghua; Chen, Yijiu
2018-05-01
The aim of the present study was to develop an improved method, using MADYMO multi-body simulation software combined with an optimization method and three-dimensional (3D) motion capture, for identifying the pre-impact conditions of a cyclist (walking or cycling) involved in a vehicle-bicycle accident. First, a 3D motion capture system was used to analyze coupled motions of a volunteer while walking and cycling. The motion capture results were used to define the posture of the human model during walking and cycling simulations. Then, cyclist, bicycle, and vehicle models were developed. Pre-impact parameters of the models were treated as unknown design variables. Finally, a multi-objective genetic algorithm, the nondominated sorting genetic algorithm II, was used to find optimal solutions. The objective functions of the walking scenario were significantly lower than those of the cycling scenario; thus, the cyclist was more likely to have been walking with the bicycle than riding it. In the most closely matched result found, all observed contact points matched and the injury parameters correlated well with the real injuries sustained by the cyclist. Based on the real accident reconstruction, the present study indicates that MADYMO multi-body simulation software, combined with an optimization method and 3D motion capture, can be used to identify the pre-impact conditions of a cyclist involved in a vehicle-bicycle accident. Copyright © 2018. Published by Elsevier Ltd.
ALFA: The new ALICE-FAIR software framework
NASA Astrophysics Data System (ADS)
Al-Turany, M.; Buncic, P.; Hristov, P.; Kollegger, T.; Kouzinopoulos, C.; Lebedev, A.; Lindenstruth, V.; Manafov, A.; Richter, M.; Rybalchenko, A.; Vande Vyvre, P.; Winckler, N.
2015-12-01
The commonalities between the ALICE and FAIR experiments and their computing requirements led to the development of large parts of a common software framework in an experiment-independent way. The FairRoot project has already shown the feasibility of such an approach for the FAIR experiments and of extending it beyond FAIR to experiments at other facilities [1, 2]. The ALFA framework is a joint development between the ALICE Online-Offline (O2) and FairRoot teams. ALFA is designed as a flexible, elastic system, which balances reliability and ease of development with performance using multi-processing and multithreading. A message-based approach has been adopted; such an approach will support the use of the software on different hardware platforms, including heterogeneous systems. Each process in ALFA assumes limited communication with and reliance on other processes. Such a design adds horizontal scaling (multiple processes) to the vertical scaling provided by multiple threads to meet computing and throughput demands. ALFA does not dictate any application protocols. Potentially, any content-based processor or any source can change the application protocol. The framework supports different serialization standards for data exchange between different hardware and software languages.
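The message-passing design described above, in which each processing unit communicates only through queues rather than shared state, can be illustrated with a minimal threaded pipeline. This is a generic Python sketch of the pattern, not the actual ALFA API (which is message-queue based and language-independent).

```python
import queue
import threading

def producer(out_q):
    # A source stage: emits messages, then an end-of-stream marker.
    for sample in range(3):
        out_q.put(sample)
    out_q.put(None)

def doubler(in_q, out_q):
    # A processing stage: consumes messages, transforms them, forwards them.
    # It knows nothing about the stages on either side except the queues.
    while (item := in_q.get()) is not None:
        out_q.put(item * 2)
    out_q.put(None)

def run_pipeline():
    q1, q2 = queue.Queue(), queue.Queue()
    threading.Thread(target=producer, args=(q1,)).start()
    threading.Thread(target=doubler, args=(q1, q2)).start()
    results = []
    while (item := q2.get()) is not None:
        results.append(item)
    return results  # [0, 2, 4]
```

Because each stage touches only its own queues, stages can be replicated across threads or moved into separate processes (or machines) without changing the stage code, which is the horizontal-scaling property the abstract describes.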
NASA Astrophysics Data System (ADS)
Martin, L.; Schatalov, M.; Hagner, M.; Goltz, U.; Maibaum, O.
Today's software for aerospace systems is typically very complex, due to the increasing number of features as well as the high demands on safety, reliability, and quality. This complexity also leads to significantly higher software development costs. To handle the software complexity, a structured development process is necessary. Additionally, compliance with relevant standards for quality assurance is a mandatory concern. To assure high software quality, techniques for verification are necessary. Besides traditional techniques like testing, automated verification techniques like model checking are becoming more popular. The latter examine the whole state space and, consequently, result in full test coverage. Nevertheless, despite the obvious advantages, this technique is as yet rarely used for the development of aerospace systems. In this paper, we propose a tool-supported methodology for the development and formal verification of safety-critical software in the aerospace domain. The methodology relies on the V-Model and defines a comprehensive workflow for model-based software development as well as automated verification in compliance with the European standard series ECSS-E-ST-40C. Furthermore, our methodology supports the generation and deployment of code. For tool support we use SCADE Suite (Esterel Technologies), an integrated design environment that covers all the requirements of our methodology. SCADE Suite is well established in the avionics and defense, rail transportation, energy, and heavy equipment industries. For evaluation purposes, we apply our approach to an up-to-date case study of the TET-1 satellite bus. In particular, the attitude and orbit control software is considered. The behavioral models for the subsystem are developed, formally verified, and optimized.
A Matrix Approach to Software Process Definition
NASA Technical Reports Server (NTRS)
Schultz, David; Bachman, Judith; Landis, Linda; Stark, Mike; Godfrey, Sally; Morisio, Maurizio; Powers, Edward I. (Technical Monitor)
2000-01-01
The Software Engineering Laboratory (SEL) is currently engaged in a Methodology and Metrics program for the Information Systems Center (ISC) at Goddard Space Flight Center (GSFC). This paper addresses the Methodology portion of the program. The purpose of the Methodology effort is to assist a software team lead in selecting and tailoring a software development or maintenance process for a specific GSFC project. It is intended that this process will also be compliant with both ISO 9001 and the Software Engineering Institute's Capability Maturity Model (CMM). Under the Methodology program, we have defined four standard ISO-compliant software processes for the ISC, and three tailoring criteria that team leads can use to categorize their projects. The team lead would select a process and appropriate tailoring factors, from which a software process tailored to the specific project could be generated. Our objective in the Methodology program is to present software process information in a structured fashion, to make it easy for a team lead to characterize the type of software engineering to be performed, and to apply tailoring parameters to search for an appropriate software process description. This will enable the team lead to follow a proven, effective software process and also satisfy NASA's requirement for compliance with ISO 9001 and the anticipated requirement for CMM assessment. This work is also intended to support the deployment of sound software processes across the ISC.
Implementation and Testing of the JANUS Standard with SSC Pacific’s Software-Defined Acoustic Modem
2017-12-01
This report presents Space and Naval Warfare (SPAWAR) Systems Center Pacific's (SSC Pacific) implementation and testing of the JANUS standard, which employs frequency-hopped binary frequency shift keying, with its software-defined acoustic modem. The work was carried out by the Advanced Photonic Technologies Branch (Code 55360) and the Communications and Networks Division for the Office of Naval Research's Forward Deployed Energy and Communications Outpost (FDECO) Innovative Naval Prototype (INP) program.
ERIC Educational Resources Information Center
Pinder, Patrice Juliet
2008-01-01
Historically, very little research that meets the scientifically based standards as defined by the No Child Left Behind Act has been conducted on the effectiveness of educational technology on student achievement. The purpose of this study was to explore and seek to understand urban city teachers' perspectives on the benefits or effects of…
Fault Injection Validation of a Safety-Critical TMR System
NASA Astrophysics Data System (ADS)
Irrera, Ivano; Madeira, Henrique; Zentai, Andras; Hergovics, Beata
2016-08-01
Digital systems and their software are the core technology for controlling and monitoring industrial systems in practically all activity domains. Functional safety standards such as the European standard EN 50128 for railway applications define the procedures and technical requirements for the development of software for railway control and protection systems. The validation of such systems is a highly demanding task. In this paper we discuss the use of fault injection techniques, which have been used extensively in several domains, particularly the space domain, to complement the traditional procedures for validating a SIL (Safety Integrity Level) 4 system for railway signalling, implementing a TMR (Triple Modular Redundancy) architecture. The fault injection tool is based on JTAG technology. The results of our injection campaign showed a high degree of tolerance to most of the injected faults, but several cases of unexpected behaviour were also observed, helping to understand worst-case scenarios.
Citizen Observatories: A Standards Based Architecture
NASA Astrophysics Data System (ADS)
Simonis, Ingo
2015-04-01
A number of large-scale research projects are currently under way exploring the various components of citizen observatories, e.g. CITI-SENSE (http://www.citi-sense.eu), Citclops (http://citclops.eu), COBWEB (http://cobwebproject.eu), OMNISCIENTIS (http://www.omniscientis.eu), and WeSenseIt (http://www.wesenseit.eu). Common to all projects is the motivation to develop a platform enabling effective participation by citizens in environmental projects, while considering important aspects such as security, privacy, long-term storage and availability, accessibility of raw and processed data, and its proper integration into catalogues and international exchange and collaboration systems such as GEOSS or INSPIRE. This paper describes the software architecture implemented for setting up crowdsourcing campaigns using standardized components, interfaces, security features, and distribution capabilities. It illustrates the Citizen Observatory Toolkit, a software suite for defining crowdsourcing campaigns, inviting registered and unregistered participants to take part in them, and analyzing, processing, and visualizing raw and quality-enhanced crowdsourcing data and derived products. The Citizen Observatory Toolkit is not a single software product. Instead, it is a framework of components that are built using internationally adopted standards wherever possible (e.g. OGC standards from Sensor Web Enablement, GeoPackage, and Web Mapping and Processing Services, as well as security and metadata/cataloguing standards), defines profiles of those standards where necessary (e.g. SWE O&M profile, SensorML profile), and implements design decisions based on the motivation to maximize interoperability and reusability of all components.
The toolkit contains tools to set up, manage and maintain crowdsourcing campaigns, allows building on-demand apps optimized for the specific sampling focus, supports offline and online sampling modes using modern cell phones with built-in sensing technologies, automates the upload of the raw data, and handles conflation services to match quality requirements and analysis challenges. The strict implementation of all components using internationally adopted standards ensures maximal interoperability and reusability of all components. The Citizen Observatory Toolkit is currently developed as part of the COBWEB research project. COBWEB is partially funded by the European Programme FP7/2007-2013 under grant agreement n° 308513; part of the topic ENV.2012.6.5-1 "Developing community based environmental monitoring and information systems using innovative and novel earth observation applications."
IPeak: An open source tool to combine results from multiple MS/MS search engines.
Wen, Bo; Du, Chaoqin; Li, Guilin; Ghali, Fawaz; Jones, Andrew R; Käll, Lukas; Xu, Shaohang; Zhou, Ruo; Ren, Zhe; Feng, Qiang; Xu, Xun; Wang, Jun
2015-09-01
Liquid chromatography coupled tandem mass spectrometry (LC-MS/MS) is an important technique for detecting peptides in proteomics studies. Here, we present an open source software tool, termed IPeak, a peptide identification pipeline that is designed to combine the Percolator post-processing algorithm and a multi-search strategy to enhance the sensitivity of peptide identifications without compromising accuracy. IPeak provides a graphical user interface (GUI) as well as a command-line interface, is implemented in Java, and works on all three major operating system platforms: Windows, Linux/Unix, and OS X. IPeak has been designed to work with the mzIdentML standard from the Proteomics Standards Initiative (PSI) as both input and output, and has also been fully integrated into the associated mzidLibrary project, providing access to the overall pipeline, as well as modules for calling Percolator on individual search engine result files. The integration thus enables IPeak (and Percolator) to be used in conjunction with any software packages implementing the mzIdentML data standard. IPeak is freely available and can be downloaded under an Apache 2.0 license at https://code.google.com/p/mzidentml-lib/. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Light at Night Markup Language (LANML): XML Technology for Light at Night Monitoring Data
NASA Astrophysics Data System (ADS)
Craine, B. L.; Craine, E. R.; Craine, E. M.; Crawford, D. L.
2013-05-01
Light at Night Markup Language (LANML) is a standard, based upon XML, useful in acquiring, validating, transporting, archiving, and analyzing multi-dimensional light at night (LAN) datasets of any size. The LANML standard can accommodate a variety of measurement scenarios, including single spot measures, static time-series, web-based monitoring networks, mobile measurements, and airborne measurements. LANML is human-readable, machine-readable, and does not require a dedicated parser. In addition, LANML is flexible, ensuring that future extensions of the format will remain backward compatible with analysis software. XML technology is at the heart of communication over the internet and is equally useful at the desktop level, making this standard particularly attractive for web-based applications, educational outreach, and efficient collaboration between research groups.
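Because LANML is XML-based and needs no dedicated parser, a record can be read with any standard XML library. The snippet below sketches what a single-spot measurement might look like; the element names and values are illustrative only, not the published LANML schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical single-spot light-at-night record (illustrative element names).
record = """
<lanml>
  <measurement>
    <site lat="32.23" lon="-110.95"/>
    <timestamp>2013-05-01T04:30:00Z</timestamp>
    <skyBrightness unit="mag/arcsec2">21.3</skyBrightness>
  </measurement>
</lanml>
"""

root = ET.fromstring(record)
m = root.find("measurement")
brightness = float(m.find("skyBrightness").text)  # 21.3 mag/arcsec2
```

Any general-purpose XML toolchain (validation against a schema, XPath queries, database ingestion) applies unchanged, which is what makes the format attractive for heterogeneous monitoring networks.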
DeviceEditor visual biological CAD canvas
2012-01-01
Background Biological Computer Aided Design (bioCAD) assists the de novo design and selection of existing genetic components to achieve a desired biological activity, as part of an integrated design-build-test cycle. To meet the emerging needs of Synthetic Biology, bioCAD tools must address the increasing prevalence of combinatorial library design, design rule specification, and scar-less multi-part DNA assembly. Results We report the development and deployment of web-based bioCAD software, DeviceEditor, which provides a graphical design environment that mimics the intuitive visual whiteboard design process practiced in biological laboratories. The key innovations of DeviceEditor include visual combinatorial library design, direct integration with scar-less multi-part DNA assembly design automation, and a graphical user interface for the creation and modification of design specification rules. We demonstrate how biological designs are rendered on the DeviceEditor canvas, and we present effective visualizations of genetic component ordering and combinatorial variations within complex designs. Conclusions DeviceEditor liberates researchers from DNA base-pair manipulation, and enables users to create successful prototypes using standardized, functional, and visual abstractions. Open and documented software interfaces support further integration of DeviceEditor with other bioCAD tools and software platforms. DeviceEditor saves researcher time and institutional resources through correct-by-construction design, the automation of tedious tasks, design reuse, and the minimization of DNA assembly costs. PMID:22373390
Jay, Raman; Heckman, J E; Hinshaw, L; Best, S; Lubner, M; Jarrard, D F; Downs, T M; Nakada, S Y; Lee, F T; Huang, W; Ziemlewicz, T
2017-03-01
Percutaneous biopsy obtained from a single location is prone to sampling error in large heterogeneous renal masses, leading to nondiagnostic results or failure to detect poor prognostic features. We evaluated the accuracy of percutaneous biopsy for large renal masses using a modified multi-quadrant technique vs. a standard biopsy technique. Clinical and pathological data for all patients with cT2 or greater renal masses who underwent percutaneous biopsy from 2009 to 2014 were reviewed. The multi-quadrant technique was defined as multiple core biopsies from at least 4 separate solid enhancing areas in the tumor. The incidence of nondiagnostic findings, sarcomatoid features and procedural complications was recorded, and concordance between biopsy specimens and nephrectomy pathology was compared. A total of 122 biopsies were performed for 117 tumors in 116 patients (46 using the standard biopsy technique and 76 using the multi-quadrant technique). Median tumor size was 10cm (IQR: 8-12). Biopsy was nondiagnostic in 5 of 46 (10.9%) standard and 0 of 76 (0%) multi-quadrant biopsies (P = 0.007). Renal cell carcinoma was identified in 96 of 115 (82.0%) tumors and nonrenal cell carcinoma tumors were identified in 21 (18.0%). One complication occurred using the standard biopsy technique and no complications were reported using the multi-quadrant technique. Sarcomatoid features were present in 23 of 96 (23.9%) large renal cell carcinomas studied. Sensitivity for identifying sarcomatoid features was higher using the multi-quadrant technique compared to the standard biopsy technique at 13 of 15 (86.7%) vs. 2 of 8 (25.0%) (P = 0.0062). The multi-quadrant percutaneous biopsy technique increases the ability to identify aggressive pathological features in large renal tumors and decreases nondiagnostic biopsy rates. Copyright © 2017. Published by Elsevier Inc.
The Wettzell System Monitoring Concept and First Realizations
NASA Technical Reports Server (NTRS)
Ettl, Martin; Neidhardt, Alexander; Muehlbauer, Matthias; Ploetz, Christian; Beaudoin, Christopher
2010-01-01
Automated monitoring of operational system parameters for the geodetic space techniques is becoming more important in order to improve the geodetic data and to ensure the safety and stability of automatic and remote-controlled observations. The Wettzell group has therefore developed the system monitoring software SysMon, which is based on a reliable, remotely controllable hardware/software realization. A multi-layered data logging system based on a fanless, robust industrial PC with an internal database system is used to collect data from several external, serial, bus, or PCI-based sensors. The internal communication is realized with Remote Procedure Calls (RPC) and uses generative programming with the interface software generator idl2rpc.pl developed at Wettzell. Each data monitoring stream can be configured individually via configuration files to define the logging rates or analog-digital-conversion parameters. First realizations are currently installed at the new laser ranging system at Wettzell to address safety issues and at the VLBI station O'Higgins as a meteorological data logger. The system monitoring concept should be realized for the Wettzell radio telescope in the near future.
2014-01-01
Building large supertrees involves the collection, storage, and processing of thousands of individual phylogenies to create phylogenies with thousands to tens of thousands of taxa. Such large phylogenies are useful for macroevolutionary studies, comparative biology, and conservation and biodiversity work. No easy-to-use, fully integrated software package currently exists to carry out this task. Here, we present a new Python-based software package that uses a well-defined XML schema to manage both data and metadata. It builds on previous versions by 1) including new processing steps, such as Safe Taxonomic Reduction, 2) using a user-friendly GUI that guides the user to complete at least the minimum information required and includes context-sensitive documentation, and 3) using a revised storage format that integrates both tree data and metadata into a single file. These data can then be manipulated according to a well-defined but flexible processing pipeline using either the GUI or a command-line tool. Processing steps include standardising names, deleting or replacing taxa, ensuring adequate taxonomic overlap, ensuring data independence, and safe taxonomic reduction. This software has been successfully used to store and process data consisting of over 1000 trees ready for analysis using standard supertree methods. This software makes large supertree creation a much easier task and provides far greater flexibility for further work. PMID:24891820
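The idea of integrating tree data and metadata into a single XML file can be sketched as follows; the element names and the Newick payload are illustrative, not the package's actual schema.

```python
import xml.etree.ElementTree as ET

def build_study(newick, author, year):
    """Store one source tree and its metadata together in a single XML
    document, so the pair can never drift apart on disk. Element names
    here are hypothetical, for illustration only."""
    root = ET.Element("study")
    meta = ET.SubElement(root, "metadata")
    ET.SubElement(meta, "author").text = author
    ET.SubElement(meta, "year").text = str(year)
    ET.SubElement(root, "tree", format="newick").text = newick
    return ET.tostring(root, encoding="unicode")

doc = build_study("((A,B),C);", "Smith", 2012)
parsed = ET.fromstring(doc)  # round-trips: tree and metadata in one file
```

Keeping both in one schema-validated file is what lets a downstream pipeline apply steps like name standardisation or taxon deletion while preserving the provenance of each tree.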
Specifications and implementation of the RT MHD control system for the EC launcher of FTU
NASA Astrophysics Data System (ADS)
Galperti, C.; Alessi, E.; Boncagni, L.; Bruschi, A.; Granucci, G.; Grosso, A.; Iannone, F.; Marchetto, C.; Nowak, S.; Panella, M.; Sozzi, C.; Tilia, B.
2012-09-01
To perform real-time plasma control experiments with EC heating waves using the new fast launcher installed on FTU, a dedicated data acquisition and processing system has recently been designed. A prototype of the acquisition/control system has been developed and will be tested on the FTU machine in its next experimental campaign. The open-source framework MARTe (Multi-threaded Application Real-Time executor) on the Linux/RTAI real-time operating system has been chosen as the software platform for the control system. Standard open-architecture industrial PCs, based on either the VME or CompactPCI bus and equipped with standard input/output cards, are the chosen hardware platform.
Status of emerging standards for data definitions and transfer in the petroleum industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winczewski, L.M.
1991-03-01
Leading-edge hardware and software to store, retrieve, process, analyze, visualize, and interpret geoscience and petroleum data are improving continuously. A babel of definitions and formats for common industry data items limits the overall effectiveness of these computer-aided exploration and production tools. Custom data conversion required to load applications causes delays and exposes data content to error and degradation. Emerging industry-wide standards for the management of geoscience and petroleum-related data are poised to overcome long-standing internal barriers to the full exploitation of these high-tech hardware/software systems. Industry technical organizations, such as AAPG, SEG, and API, have been actively pursuing industry-wide standards for data transfer, data definitions, and data models. These standard-defining groups are non-fee and solicit active participation from the entire petroleum community. The status of the most active of these groups is presented here. Data transfer standards are being pursued within AAPG (AAPG-B Data Transfer Standard), API (DLIS, for log data), and SEG (SEG-DEF, for seismic data). Converging data definitions, models, and glossaries are coming from the Petroleum Industry Data Dictionary Group (PIDD) and from subcommittees of the AAPG Computer Applications Committee. The National Computer Graphics Association is promoting development of standards for transfer of geographically oriented data. The API Well-Number standard is undergoing revision.
A USB-2 based portable data acquisition system for detector development and nuclear research
NASA Astrophysics Data System (ADS)
Jiang, Hao; Ojaruega, M.; Becchetti, F. D.; Griffin, H. C.; Torres-Isea, R. O.
2011-10-01
A highly portable high-speed CAMAC data acquisition system has been developed using Kmax software (Sparrow, Inc.) for Macintosh laptop and tower computers. It uses a USB-2 interface to the CAMAC crate controller with custom-written software drivers. Kmax permits 2D parameter gating and specific algorithms have been developed to facilitate the rapid evaluation of various multi-element nuclear detectors for energy and time-of-flight measurements. This includes tests using neutrons from 252Cf and a 2.5 MeV neutron generator as well as standard gamma calibration sources such as 60Co and 137Cs. In addition, the system has been used to measure gamma-gamma coincidences over extended time periods using radioactive sources (e.g., Ra-228, Pa-233, Np-237, and Am-243).
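The 2D parameter gating described above can be illustrated with a simplified rectangular gate on (energy, time-of-flight) pairs. Real gates in an acquisition package may be arbitrary regions, and the event values here are hypothetical.

```python
def in_gate(event, e_range, tof_range):
    """Keep an event if its (energy, time-of-flight) pair lies inside a
    rectangular 2D gate. A simplification: production systems often allow
    polygonal gates drawn on a 2D histogram."""
    energy, tof = event
    return (e_range[0] <= energy <= e_range[1]
            and tof_range[0] <= tof <= tof_range[1])

# Hypothetical (energy [MeV], time-of-flight [ns]) event pairs
events = [(1.2, 40.0), (2.5, 55.0), (0.3, 70.0)]
gated = [ev for ev in events if in_gate(ev, (1.0, 3.0), (30.0, 60.0))]
# gated == [(1.2, 40.0), (2.5, 55.0)]
```

Applying such a gate to coincidence data is what lets a multi-element detector be evaluated one region at a time, e.g. separating neutron and gamma events by their time-of-flight at a given deposited energy.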
Equivalent circuit modeling of a piezo-patch energy harvester on a thin plate with AC-DC conversion
NASA Astrophysics Data System (ADS)
Bayik, B.; Aghakhani, A.; Basdogan, I.; Erturk, A.
2016-05-01
As an alternative to beam-like structures, piezoelectric patch-based energy harvesters attached to thin plates can be readily integrated into plate-like structures in automotive, marine, and aerospace applications, in order to directly exploit structural vibration modes of the host system without the mass loading and volumetric occupancy of cantilever attachments. In this paper, a multi-mode equivalent circuit model of a piezo-patch energy harvester integrated into a thin plate is developed and coupled with a standard AC-DC conversion circuit. Equivalent circuit parameters are obtained in two different ways: (1) from the modal analysis solution of a distributed-parameter analytical model and (2) from the finite-element numerical model of the harvester by accounting for two-way coupling. After the analytical modeling effort, a multi-mode equivalent circuit representation of the harvester is obtained via the electronic circuit simulation software SPICE. Using the SPICE software, the electromechanical response of the piezoelectric energy harvester connected to linear and nonlinear circuit elements is computed. Simulation results are validated for the standard AC-AC and AC-DC configurations. For the AC input-AC output problem, voltage frequency response functions are calculated for various resistive loads, and they show excellent agreement with the modal analysis-based analytical closed-form solution and with the finite-element model. For the standard ideal AC input-DC output case, a full-wave rectifier and a smoothing capacitor are added to the harvester circuit for conversion of the AC voltage to a stable DC voltage, which is also validated against an existing solution by treating the single-mode plate dynamics as a single-degree-of-freedom system.
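For the AC input-AC output case, the single-mode behavior can be sketched by treating the patch as a current source in parallel with its internal capacitance and the resistive load. This is a textbook simplification with illustrative parameter values, not the paper's distributed-parameter or finite-element model.

```python
import math

def load_voltage(omega, theta, X, Cp, R):
    """Single-mode approximation: the piezo patch acts as a current source
    i = j*omega*theta*X (theta: electromechanical coupling, X: modal
    displacement amplitude) feeding its capacitance Cp in parallel with the
    resistive load R. All parameter values used below are illustrative."""
    i = 1j * omega * theta * X         # source current (A)
    Z = R / (1 + 1j * omega * R * Cp)  # parallel R-Cp impedance (ohm)
    return i * Z                       # complex voltage across the load (V)

# Illustrative excitation at 100 Hz
V = load_voltage(omega=2 * math.pi * 100, theta=1e-3, X=1e-6, Cp=50e-9, R=1e5)
```

Sweeping R in this model reproduces the familiar behavior behind the "various resistive loads" study: |V| grows monotonically with R, while the power delivered to the load, |V|^2/R, peaks near R = 1/(omega*Cp).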
Space-Based Reconfigurable Software Defined Radio Test Bed Aboard International Space Station
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Lux, James P.
2014-01-01
The National Aeronautics and Space Administration (NASA) recently launched a new software defined radio research test bed to the International Space Station. The test bed, sponsored by the Space Communications and Navigation (SCaN) Office within NASA, is referred to as the SCaN Testbed. The SCaN Testbed is a highly capable communications system, composed of three software defined radios, integrated into a flight system, and mounted to the truss of the International Space Station. Software defined radios offer the future promise of in-flight reconfigurability, autonomy, and eventually cognitive operation. The adoption of software defined radios offers space missions a new way to develop and operate space transceivers for communications and navigation. Reconfigurable or software defined radios with communications and navigation functions implemented in software or VHDL (VHSIC Hardware Description Language) provide the capability to change the functionality of the radio during development or after launch. The ability to change the operating characteristics of a radio through software once deployed to space offers the flexibility to adapt to new science opportunities, recover from anomalies within the science payload or communication system, and potentially reduce development cost and risk by adapting generic space platforms to meet specific mission requirements. The software defined radios on the SCaN Testbed are each compliant with NASA's Space Telecommunications Radio System (STRS) Architecture. The STRS Architecture is an open, non-proprietary architecture that defines interfaces for the connections between radio components. It provides an operating environment to abstract the communication waveform application from the underlying platform-specific hardware such as digital-to-analog converters, analog-to-digital converters, oscillators, RF attenuators, automatic gain control circuits, FPGAs, and general-purpose processors, and the interconnections among different radio components.
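The abstraction idea described here, a waveform application coded only against defined interfaces so it can move between radio platforms, can be sketched in a few lines. This is an illustrative toy, not the actual STRS API; all class and method names are invented:

```python
from abc import ABC, abstractmethod

class RadioPlatform(ABC):
    """Hides platform-specific hardware behind a uniform interface."""
    @abstractmethod
    def set_attenuation(self, db): ...
    @abstractmethod
    def write_dac(self, samples): ...

class BenchPlatform(RadioPlatform):
    """A stand-in platform that simply records what the waveform requested."""
    def __init__(self):
        self.log = []
    def set_attenuation(self, db):
        self.log.append(("atten", db))
    def write_dac(self, samples):
        self.log.append(("dac", len(samples)))

def run_waveform(platform, samples):
    """Waveform application: talks to hardware only through the interface."""
    platform.set_attenuation(10)
    platform.write_dac(samples)

p = BenchPlatform()
run_waveform(p, [0.0, 1.0, 0.0, -1.0])
```

Porting the waveform to a different radio then means supplying another `RadioPlatform` implementation; `run_waveform` itself is untouched, which is the point of the architecture.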
Standardization in synthetic biology: an engineering discipline coming of age.
Decoene, Thomas; De Paepe, Brecht; Maertens, Jo; Coussement, Pieter; Peters, Gert; De Maeseneire, Sofie L; De Mey, Marjan
2018-08-01
Leaping DNA read-and-write technologies, together with extensive automation and miniaturization, are radically transforming the field of biological experimentation by providing the tools that enable the cost-effective high throughput required to address the enormous complexity of biological systems. However, standardization of the synthetic biology workflow has not kept pace with dwindling technical and resource constraints, leading, for example, to the collection of multi-level and multi-omics large data sets that end up disconnected or remain under- or even unexploited. In this contribution, we critically evaluate the various efforts, and the (limited) success thereof, to introduce standards for defining, designing, assembling, characterizing, and sharing synthetic biology parts. The causes of this success or the lack thereof, as well as possible solutions to overcome these limitations, are discussed. As in other engineering disciplines, extensive standardization will undoubtedly speed up and reduce the cost of bioprocess development. In this respect, further implementation of synthetic biology standards will be crucial for the field in order to redeem its promise, i.e. to enable predictable forward engineering.
Recommendations for a service framework to access astronomical archives
NASA Technical Reports Server (NTRS)
Travisano, J. J.; Pollizzi, J.
1992-01-01
There are a large number of astronomical archives and catalogs on-line for network access, with many different user interfaces and features. Some systems are moving towards distributed access, supplying users with client software for their home sites which connects to servers at the archive site. Many of the issues involved in defining a standard framework of services that archive/catalog suppliers can use to achieve a basic level of interoperability are described. Such a framework would simplify the development of client and server programs to access the wide variety of astronomical archive systems. The primary services that are supplied by current systems include: catalog browsing, dataset retrieval, name resolution, and data analysis. The following issues (and probably more) need to be considered in establishing a standard set of client/server interfaces and protocols: Archive Access - dataset retrieval, delivery, file formats, data browsing, analysis, etc.; Catalog Access - database management systems, query languages, data formats, synchronous/asynchronous mode of operation, etc.; Interoperability - transaction/message protocols, distributed processing mechanisms (DCE, ONC/SunRPC, etc), networking protocols, etc.; Security - user registration, authorization/authentication mechanisms, etc.; Service Directory - service registration, lookup, port/task mapping, parameters, etc.; Software - public vs proprietary, client/server software, standard interfaces to client/server functions, software distribution, operating system portability, data portability, etc. Several archive/catalog groups, notably the Astrophysics Data System (ADS), are already working in many of these areas. In the process of developing StarView, which is the user interface to the Space Telescope Data Archive and Distribution Service (ST-DADS), these issues and the work of others were analyzed. 
A framework of standard interfaces for accessing services on any archive system which would benefit archive user and supplier alike is proposed.
NASA Technical Reports Server (NTRS)
Sledd, Annette; Danford, Mike; Key, Brian
2002-01-01
The EXpedite the PRocessing of Experiments to Space Station or EXPRESS Rack System was developed to provide Space Station accommodations for subrack payloads. The EXPRESS Rack accepts Space Shuttle middeck locker type payloads and International Subrack Interface Standard (ISIS) Drawer payloads, allowing previously flown payloads an opportunity to transition to the International Space Station. The EXPRESS Rack provides power, data command and control, video, water cooling, air cooling, vacuum exhaust, and Nitrogen supply to payloads. The EXPRESS Rack system also includes transportation racks to transport payloads to and from the Space Station, Suitcase Simulators to allow a payload developer to verify data interfaces at the development site, Functional Checkout Units to allow payload checkout at KSC prior to launch, and trainer racks for the astronauts to learn how to operate the EXPRESS Racks prior to flight. Standard hardware and software interfaces provided by the EXPRESS Rack simplify the integration processes, and facilitate simpler ISS payload development. Whereas most ISS Payload facilities are designed to accommodate one specific type of science, the EXPRESS Rack is designed to accommodate multi-discipline research within the same rack allowing for the independent operation of each subrack payload. On-orbit operations began with the EXPRESS Rack Project on April 24, 2001, with one rack operating continuously to support long-running payloads. The other on-orbit EXPRESS Racks operate based on payload need and resource availability. Sustaining Engineering and Logistics and Maintenance functions are in place to maintain operations and to provide software upgrades.
The Extension of ISS Resources for Multi-Discipline Subrack Payloads
NASA Technical Reports Server (NTRS)
Sledd, Annette M.; Gilbert, Paul A. (Technical Monitor)
2002-01-01
The EXpedite the PRocessing of Experiments to Space Station or EXPRESS Rack System was developed to provide Space Station accommodations for subrack payloads. The EXPRESS Rack accepts Space Shuttle middeck locker type payloads and International Subrack Interface Standard (ISIS) Drawer payloads, allowing previously flown payloads an opportunity to transition to the International Space Station. The EXPRESS Rack provides power, data command and control, video, water cooling, air cooling, vacuum exhaust, and Nitrogen supply to payloads. The EXPRESS Rack system also includes transportation racks to transport payloads to and from the Space Station, Suitcase Simulators to allow a payload developer to verify data interfaces at the development site, Functional Checkout Units to allow payload checkout at KSC prior to launch, and trainer racks for the astronauts to learn how to operate the EXPRESS Racks prior to flight. Standard hardware and software interfaces provided by the EXPRESS Rack simplify the integration processes, and facilitate simpler ISS payload development. Whereas most ISS Payload facilities are designed to accommodate one specific type of science, the EXPRESS Rack is designed to accommodate multi-discipline research within the same rack allowing for the independent operation of each subrack payload. On-orbit operations began with the EXPRESS Rack Project on April 24, 2001, with one rack operating continuously to support long-running payloads. The other on-orbit EXPRESS Racks operate based on payload need and resource availability. Sustaining Engineering and Logistics and Maintenance functions are in place to maintain operations and to provide software upgrades.
The deegree framework - Spatial Data Infrastructure solution for end-users and developers
NASA Astrophysics Data System (ADS)
Kiehle, Christian; Poth, Andreas
2010-05-01
The open source software framework deegree is a comprehensive implementation of standards as defined by ISO and the Open Geospatial Consortium (OGC). It has been developed with two goals in mind: to provide a uniform framework for implementing Spatial Data Infrastructures (SDI) and to adhere to standards as strictly as possible. Although it is open source software (Lesser GNU Public License, LGPL), deegree has been developed with a business model in mind: providing the general building blocks of SDIs without license fees while offering customization, consulting and tailoring through specialized companies. The core of deegree is a comprehensive Java Application Programming Interface (API) offering access to spatial features, analysis, metadata and coordinate reference systems. As a library, deegree can be, and has been, integrated as a core module inside spatial information systems. It is a reference implementation for several OGC standards and is based on an ISO 19107 geometry model. For end users, deegree is shipped as a web application providing easy-to-set-up components for web mapping and spatial analysis. Since 2000, deegree has been the backbone of many productive SDIs, first and foremost for governmental stakeholders (e.g. the Federal Agency for Cartography and Geodesy in Germany, the Ministry of Housing, Spatial Planning and the Environment in the Netherlands, etc.) as well as for research and development projects as an early adopter of standards, drafts and discussion papers. Besides mature standards like the Web Map Service, Web Feature Service and Catalogue Services, deegree also implements rather new standards like the Sensor Observation Service, the Web Processing Service and the Web Coordinate Transformation Service (WCTS). While a robust background in standardization (knowledge and implementation) is a must for consultancy, standard-compliant services and encodings alone do not provide solutions for customers.
The added value comes from a sophisticated set of client software for desktop and web environments. One focus lies on dedicated client solutions for specific standards like the Web Processing Service and the Web Coordinate Transformation Service. On the other hand, complex geoportal solutions composed of multiple standards and enhanced by components for user management, security and map client functionality show the demanding requirements of real-world solutions. The XPlan-GML standard defined by the German spatial planning authorities is a good example of how complex real-world requirements can get. XPlan-GML is intended to provide a framework for digital spatial planning documents and requires complex Geography Markup Language (GML) features along with Symbology Encoding (SE), Filter Encoding (FE), Web Map Services (WMS) and Web Feature Services (WFS). This complex infrastructure is to be used by urban and spatial planners and therefore requires a user-friendly graphical interface hiding the complexity of the underlying infrastructure. The challenges faced within customer projects underline the importance of easy-to-use software components. SDI solutions should be built upon ISO/OGC standards but, more importantly, should be user-friendly and support users in spatial data management and analysis.
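Because deegree's services speak standard OGC interfaces, a client can be assembled from the specification alone. A minimal WMS 1.1.1 GetMap request built with standard-library tools is sketched below; the endpoint and layer names are placeholders, not real deegree services:

```python
from urllib.parse import urlencode

def getmap_url(endpoint, layer, bbox, width, height):
    """Build a WMS 1.1.1 GetMap URL (mandatory parameters only)."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "STYLES": "",
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width, "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return endpoint + "?" + urlencode(params)
```

Any WMS-compliant server, deegree included, should answer such a request with a rendered map image, which is exactly the interoperability the standards are meant to buy.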
[Comparison of Different Methods of Area Measurement in Irregular Scar].
Ran, D; Li, W J; Sun, Q G; Li, J Q; Xia, Q
2016-10-01
To determine a measurement standard for irregular scar area by comparing the advantages and disadvantages of different methods of measuring the same irregular scar area. An irregular scar was digitized by scanning, and its area was measured by the coordinate reading method, the AutoCAD pixel method, the Photoshop lasso pixel method, the Photoshop magic wand filled-pixel method and the Foxit PDF reading software; aspects of these methods such as measurement time, repeatability, whether results could be recorded and whether they could be traced were compared and analyzed. There was no significant difference among the scar areas obtained by the measurement methods above. However, there were statistical differences in measurement time and in repeatability between one and multiple examiners, and only the Foxit PDF reading software allowed results to be traced back. All of the methods above can be used for measuring scar area, but each has its advantages and disadvantages. It is necessary to develop new measurement software for forensic identification. Copyright© by the Editorial Department of Journal of Forensic Medicine
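The pixel-based methods compared here all reduce to the same arithmetic: count the pixels classified as scar and multiply by the physical area of one pixel. A minimal sketch of that idea, with invented names and a toy binary mask standing in for a scanned image:

```python
def pixel_area(mask, mm_per_pixel):
    """Area in mm^2 given a binary mask (list of rows, 1 = scar pixel)
    and the scan resolution in millimetres per pixel."""
    n = sum(sum(row) for row in mask)      # number of scar pixels
    return n * mm_per_pixel ** 2           # each pixel covers (mm/px)^2
```

For example, three scar pixels scanned at 0.5 mm/pixel give 3 × 0.25 = 0.75 mm². The methods in the study differ mainly in how the mask is delineated (lasso, magic wand, CAD tracing), not in this final computation.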
A Browser-Based Multi-User Working Environment for Physicists
NASA Astrophysics Data System (ADS)
Erdmann, M.; Fischer, R.; Glaser, C.; Klingebiel, D.; Komm, M.; Müller, G.; Rieger, M.; Steggemann, J.; Urban, M.; Winchen, T.
2014-06-01
Many programs in experimental particle physics do not yet have a graphical interface, or impose demanding platform and software requirements. With the most recent development of the VISPA project, we provide graphical interfaces to existing software programs and access to multiple computing clusters through standard web browsers. The scalable client-server system allows analyses to be performed in sizable teams, and disburdens the individual physicist from installing and maintaining a software environment. The VISPA graphical interfaces are implemented in HTML and JavaScript, with extensions to the Python webserver. The webserver uses SSH and RPC to access user data, code and processes on remote sites. As example applications we present graphical interfaces for steering the reconstruction framework OFFLINE of the Pierre Auger experiment, and the analysis development toolkit PXL. The browser-based VISPA system was field-tested in biweekly homework of a third-year physics course by more than 100 students. We discuss the system deployment and the evaluation by the students.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-11
.... Based on initial comparative research, it appears that the proposed OPM-selected EHB-benchmark plans are... include any discriminatory benefit design elements as defined under 45 CFR 156.125. Response: In response... OPM-selected benchmarks and substitutions not be allowed in States having standard benefit designs...
Forde, Arnell S.; Flocks, James G.; Wiese, Dana S.; Fredericks, Jake J.
2016-03-29
The archived trace data are in standard SEG Y rev. 0 format (Barry and others, 1975); the first 3,200 bytes of the card image header are in American Standard Code for Information Interchange (ASCII) format instead of Extended Binary Coded Decimal Interchange Code (EBCDIC) format. The SEG Y files are available on the DVD version of this report or online, downloadable via the USGS Coastal and Marine Geoscience Data System (http://cmgds.marine.usgs.gov). The data are also available for viewing using GeoMapApp (http://www.geomapapp.org) and Virtual Ocean (http://www.virtualocean.org) multi-platform open source software. The Web version of this archive does not contain the SEG Y trace files. To obtain the complete DVD archive, contact USGS Information Services at 1-888-ASK-USGS or infoservices@usgs.gov. The SEG Y files may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU) (Cohen and Stockwell, 2010). See the How To Download SEG Y Data page for download instructions. The printable profiles are provided as Graphics Interchange Format (GIF) images processed and gained using SU software and can be viewed from the Profiles page or by using the links located on the trackline maps; refer to the Software page for links to example SU processing scripts.
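Because this archive stores the 3,200-byte card-image header as ASCII while SEG Y rev. 0 nominally specifies EBCDIC, software reading such files often has to detect the encoding first. A simple heuristic, sketched below under the assumption that genuine ASCII text is dominated by bytes in the printable ASCII range (EBCDIC letters and digits fall above 0x7E):

```python
def textual_header_is_ascii(header_bytes):
    """Guess whether a SEG Y card-image header is ASCII rather than EBCDIC
    by the fraction of bytes in the printable ASCII range."""
    printable = sum(0x20 <= b <= 0x7E or b in (0x0A, 0x0D) for b in header_bytes)
    return printable / max(len(header_bytes), 1) > 0.9
```

Python's stdlib `cp500` codec can decode the header once an EBCDIC file is identified; the 0.9 threshold is an assumption that works for typical "C 1 ..." card headers.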
Multi-Mission Automated Task Invocation Subsystem
NASA Technical Reports Server (NTRS)
Cheng, Cecilia S.; Patel, Rajesh R.; Sayfi, Elias M.; Lee, Hyun H.
2009-01-01
Multi-Mission Automated Task Invocation Subsystem (MATIS) is software that establishes a distributed data-processing framework for automated generation of instrument data products from a spacecraft mission. Each mission may set up a set of MATIS servers for processing its data products. MATIS embodies lessons learned from experience with prior instrument-data-product-generation software. MATIS is an event-driven workflow manager that interprets project-specific, user-defined rules for managing processes. It executes programs in response to specific events under specific conditions according to the rules. Because the requirements of different missions are too diverse to be satisfied by one program, MATIS accommodates plug-in programs. MATIS is flexible in that users can control such processing parameters as how many pipelines to run and on which computing machines to run them. MATIS has a fail-safe capability. At each step, MATIS captures and retains pertinent information needed to complete the step and start the next step. In the event of a restart, this information is retrieved so that processing can be resumed appropriately. At this writing, it is planned to develop a graphical user interface (GUI) for monitoring and controlling a product generation engine in MATIS. The GUI would enable users to schedule multiple processes and manage the data products produced in the processes. Although MATIS was initially designed for instrument data product generation,
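The rule-driven, event-triggered execution described here can be illustrated with a toy dispatcher (this is not MATIS code; all names are invented). A rule binds an event name and a condition to an action, and dispatching an event fires every matching rule:

```python
class WorkflowManager:
    """Toy event-driven workflow manager: rules = (event, condition, action)."""
    def __init__(self):
        self.rules = []
        self.ran = []

    def add_rule(self, event_name, condition, action):
        self.rules.append((event_name, condition, action))

    def dispatch(self, event_name, context):
        """Run every rule whose event matches and whose condition holds."""
        for name, condition, action in self.rules:
            if name == event_name and condition(context):
                self.ran.append(action(context))

wf = WorkflowManager()
# Rule: when new raw data arrives for instrument "cam", start calibration.
wf.add_rule("data_arrived",
            lambda ctx: ctx["instrument"] == "cam",
            lambda ctx: "calibrate:" + ctx["file"])
wf.dispatch("data_arrived", {"instrument": "cam", "file": "img001.raw"})
```

In a real system the actions would launch plug-in programs rather than return strings, and the captured context would be persisted to support the restart capability the abstract describes.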
Characterisation of group behaviour surface texturing with multi-layers fitting method
NASA Astrophysics Data System (ADS)
Kang, Zhengyang; Fu, Yonghong; Ji, Jinghu; Wang, Hao
2016-07-01
Surface texturing has been widely applied to improve the tribological properties of mechanical components, but the measurement of such textures has received comparatively little study. This study proposed the multi-layers fitting (MLF) method to characterise dimple-array textured surfaces. Exploiting the synergistic effect among the dimples, the MLF method rebuilds the 3D morphology of the textured surface from 2D stylus-profiler traces. The feasible regions of texture patterns and sensitive parameters were confirmed by non-linear programming, and the processing software for the MLF method was developed in Matlab®. A system of characterisation parameters for the dimples was defined mathematically, and the accuracy of the MLF method was investigated by a comparison experiment. The surface texture specimens were made by laser surface texturing, in which high consistency of dimple size and distribution was achieved. Then, 2D profiles of different dimples were captured with a Hommel-T1000 stylus profiler, and the data were further processed by the MLF software to rebuild the 3D morphology of a single dimple. The experimental results indicated that the MLF characterisation results were similar to those of the Wyko T1100 white-light interference microscope. It was also found that the stability of the MLF characterisation results depended strongly on the number of captured cross-sections.
Depth-tunable three-dimensional display with interactive light field control
NASA Astrophysics Data System (ADS)
Xie, Songlin; Wang, Peng; Sang, Xinzhu; Li, Chenyu; Dou, Wenhua; Xiao, Liquan
2016-07-01
A software-defined depth-tunable three-dimensional (3D) display with interactive 3D depth control is presented. With the proposed post-processing system, the disparity of multi-view media can be freely adjusted. Benefiting from the wealth of information inherently contained in dense multi-view images captured with a parallel camera array, the 3D light field is built and its structure is controlled to adjust the disparity without additionally acquired depth information, since the light field structure itself contains depth information. A statistical analysis based on least squares is carried out to extract the depth information inherent in the light field structure, and the resulting accurate depth information can be used to re-parameterize light fields for the autostereoscopic display, so that smooth motion parallax can be guaranteed. Experimental results show that the system is convenient and effective for adjusting the performance of a 3D scene on the 3D display.
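The least-squares idea can be made concrete: with a parallel camera array, a scene point's image position shifts linearly with camera index, and the slope of that line is the per-view disparity, which encodes depth. A minimal sketch of the fit (illustrative of the statistical principle, not the paper's exact pipeline):

```python
def disparity_least_squares(xs):
    """Fit x_i ~ a + d*i over camera index i by ordinary least squares and
    return the slope d, i.e. the per-view disparity of a tracked feature."""
    n = len(xs)
    idx = range(n)
    mean_i = sum(idx) / n
    mean_x = sum(xs) / n
    num = sum((i - mean_i) * (x - mean_x) for i, x in zip(idx, xs))
    den = sum((i - mean_i) ** 2 for i in idx)
    return num / den
```

Averaging over many views in this way suppresses per-image localization noise, which is why the dense array yields accurate depth without a separate depth sensor.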
NASA Astrophysics Data System (ADS)
Li, Ming; Yin, Hongxi; Xing, Fangyuan; Wang, Jingchao; Wang, Honghuan
2016-02-01
With its features of network virtualization and resource programmability, the Software Defined Optical Network (SDON) is considered the future development trend of optical networks, providing more flexible, efficient and open network functions and supporting both the intraconnection and the interconnection of data centers. Meanwhile, cloud platforms can provide powerful computing, storage and management capabilities. In this paper, through the coordination of SDON and a cloud platform, a multi-domain SDON architecture based on a cloud control plane is proposed, composed of data centers with a database (DB), a path computation element (PCE), an SDON controller and an orchestrator. In addition, the structures of the multi-domain SDON orchestrator and of an OpenFlow-enabled optical node are proposed to realize an effective management and control platform combining centralized and distributed approaches. Finally, functional verification and demonstration are performed on our experimental optical network.
Zero-forcing pre-coding for MIMO WiMAX transceivers: Performance analysis and implementation issues
NASA Astrophysics Data System (ADS)
Cattoni, A. F.; Le Moullec, Y.; Sacchi, C.
Next generation wireless communication networks are expected to achieve ever increasing data rates. Multi-User Multiple-Input-Multiple-Output (MU-MIMO) is a key technique for obtaining the expected performance, because it combines the high capacity achievable with a MIMO channel with the benefits of space division multiple access. In MU-MIMO systems, the base station transmits signals to two or more users over the same channel; as a result, every user can experience inter-user interference. This paper provides a capacity analysis of an online, interference-based pre-coding algorithm able to mitigate the multi-user interference of MU-MIMO systems in the context of a realistic WiMAX application scenario. Simulation results show that pre-coding can significantly increase the channel capacity. Furthermore, the paper presents several feasibility considerations for implementing the analyzed technique in a possible FPGA-based software defined radio.
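The zero-forcing pre-coding named in the title has a compact closed form: for a channel matrix H (rows = users), the precoder W = Hᴴ(HHᴴ)⁻¹ makes the effective channel HW the identity, cancelling inter-user interference. A minimal 2-user/2-antenna sketch in plain complex arithmetic (noise and transmit-power normalization omitted for brevity):

```python
def zf_precoder_2x2(H):
    """Zero-forcing precoder W = H^H (H H^H)^(-1) for a 2x2 channel matrix
    H = [[h11, h12], [h21, h22]] with complex entries (rows = users)."""
    (a, b), (c, d) = H
    # G = H H^H (2x2 Hermitian Gram matrix)
    g11 = a * a.conjugate() + b * b.conjugate()
    g12 = a * c.conjugate() + b * d.conjugate()
    g21 = g12.conjugate()
    g22 = c * c.conjugate() + d * d.conjugate()
    det = g11 * g22 - g12 * g21
    inv = [[g22 / det, -g12 / det], [-g21 / det, g11 / det]]  # G^(-1)
    Hh = [[a.conjugate(), c.conjugate()], [b.conjugate(), d.conjugate()]]
    return [[sum(Hh[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

Multiplying H by the returned W yields (numerically) the identity matrix, so each user sees only its own stream; in practice the columns of W are then scaled to respect the transmit-power budget, which is where the capacity trade-off analyzed in the paper enters.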
Wong, Linda; Hill, Beth L; Hunsberger, Benjamin C; Bagwell, C Bruce; Curtis, Adam D; Davis, Bruce H
2015-01-01
Leuko64™ (Trillium Diagnostics) is a flow cytometric assay that measures neutrophil CD64 expression and serves as an in vitro indicator of infection/sepsis or the presence of a systemic acute inflammatory response. The Leuko64 assay currently utilizes QuantiCALC, a semiautomated software that employs cluster algorithms to define cell populations. The software reduces subjective gating decisions, resulting in interanalyst variability of <5%. We evaluated a completely automated approach to measuring neutrophil CD64 expression using GemStone™ (Verity Software House) and probability state modeling (PSM). Four hundred and fifty-seven human blood samples were processed using the Leuko64 assay. Samples were analyzed on four different flow cytometer models: BD FACSCanto II, BD FACScan, BC Gallios/Navios, and BC FC500. A probability state model was designed to identify calibration beads and three leukocyte subpopulations based on differences in intensity levels of several parameters. PSM automatically calculates CD64 index values for each cell population using equations programmed into the model. GemStone software uses PSM that requires no operator intervention, thus totally automating data analysis and internal quality control flagging. Expert analysis with the predicate method (QuantiCALC) was performed. Interanalyst precision was evaluated for both methods of data analysis. PSM with GemStone correlates well with the expert manual analysis, r² = 0.99675 for the neutrophil CD64 index values with no intermethod bias detected. The average interanalyst imprecision for the QuantiCALC method was 1.06% (range 0.00-7.94%), which was reduced to 0.00% with the GemStone PSM. The operator-to-operator agreement in GemStone was a perfect correlation, r² = 1.000. Automated quantification of CD64 index values produced results that strongly correlate with expert analysis using a standard gate-based data analysis method.
PSM successfully evaluated flow cytometric data generated by multiple instruments across multiple lots of the Leuko64 kit in all 457 cases. The probability-based method provides greater objectivity, higher data analysis speed, and greater precision for in vitro diagnostic flow cytometric assays. © 2015 International Clinical Cytometry Society.
NASA Astrophysics Data System (ADS)
Friedman, Gary; Schwuttke, Ursula M.; Burleigh, Scott; Chow, Sanguan; Parlier, Randy; Lee, Lorrine; Castro, Henry; Gersbach, Jim
1993-03-01
In the early days of JPL's solar system exploration, each spacecraft mission required its own dedicated data system with all software applications written in the mainframe's native assembly language. Although these early telemetry processing systems were a triumph of engineering in their day, since that time the computer industry has advanced to the point where it is now advantageous to replace these systems with more modern technology. The Space Flight Operations Center (SFOC) Prototype group was established in 1985 as a workstation and software laboratory. The charter of the lab was to determine if it was possible to construct a multimission telemetry processing system using commercial, off-the-shelf computers that communicated via networks. The staff of the lab mirrored that of a typical skunk works operation -- a small, multi-disciplinary team with a great deal of autonomy that could get complex tasks done quickly. In an effort to determine which approaches would be useful, the prototype group experimented with all types of operating systems, inter-process communication mechanisms, network protocols, and packet-size parameters. Out of that pioneering work came the confidence that a multi-mission telemetry processing system could be built using high-level languages running in a heterogeneous, networked workstation environment. Experience revealed that the operating systems on all nodes should be similar (i.e., all VMS or all PC-DOS or all UNIX), and that a unique Data Transport Subsystem tool needed to be built to address the incompatibilities of network standards, byte ordering, and socket buffering. The advantages of building a telemetry processing system based on emerging industry standards were numerous: by employing these standards, we would no longer be locked into a single vendor.
When new technology came to market offering ten times the performance at one eighth the cost, it would be possible to attach the new machine to the network, re-compile the application code, and run. In addition, we would no longer be plagued with lack of manufacturer support when we encountered obscure bugs. And maybe, hopefully, the eternally elusive goal of software portability across different vendors' platforms would finally be available. Some highlights of our prototyping efforts are described.
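The byte-ordering incompatibility the Data Transport Subsystem had to absorb is handled today with an explicit wire format: pack in network (big-endian) byte order on the sender and unpack the same way on the receiver, regardless of each host's native endianness. A sketch with a hypothetical telemetry record layout (the field choices are invented for illustration):

```python
import struct

# '!' forces network byte order: u32 packet id, u16 flags, f64 value.
RECORD = struct.Struct("!IHd")

def encode(packet_id, flags, value):
    """Serialize one telemetry record into a portable 14-byte wire format."""
    return RECORD.pack(packet_id, flags, value)

def decode(buf):
    """Recover (packet_id, flags, value) on any host, any endianness."""
    return RECORD.unpack(buf)
```

Because both ends agree on the `!` wire format, a little-endian workstation and a big-endian one exchange identical bytes, which is precisely the portability the SFOC prototypes were chasing.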
NASA Technical Reports Server (NTRS)
Friedman, Gary; Schwuttke, Ursula M.; Burleigh, Scott; Chow, Sanguan; Parlier, Randy; Lee, Lorrine; Castro, Henry; Gersbach, Jim
1993-01-01
In the early days of JPL's solar system exploration, each spacecraft mission required its own dedicated data system with all software applications written in the mainframe's native assembly language. Although these early telemetry processing systems were a triumph of engineering in their day, since that time the computer industry has advanced to the point where it is now advantageous to replace these systems with more modern technology. The Space Flight Operations Center (SFOC) Prototype group was established in 1985 as a workstation and software laboratory. The charter of the lab was to determine if it was possible to construct a multimission telemetry processing system using commercial, off-the-shelf computers that communicated via networks. The staff of the lab mirrored that of a typical skunk works operation -- a small, multi-disciplinary team with a great deal of autonomy that could get complex tasks done quickly. In an effort to determine which approaches would be useful, the prototype group experimented with all types of operating systems, inter-process communication mechanisms, network protocols, and packet-size parameters. Out of that pioneering work came the confidence that a multi-mission telemetry processing system could be built using high-level languages running in a heterogeneous, networked workstation environment. Experience revealed that the operating systems on all nodes should be similar (i.e., all VMS or all PC-DOS or all UNIX), and that a unique Data Transport Subsystem tool needed to be built to address the incompatibilities of network standards, byte ordering, and socket buffering. The advantages of building a telemetry processing system based on emerging industry standards were numerous: by employing these standards, we would no longer be locked into a single vendor.
When new technology came to market offering ten times the performance at one eighth the cost, it would be possible to attach the new machine to the network, re-compile the application code, and run. In addition, we would no longer be plagued with lack of manufacturer support when we encountered obscure bugs. And maybe, hopefully, the eternally elusive goal of software portability across different vendors' platforms would finally be available. Some highlights of our prototyping efforts are described.
Adapting Wireless Technology to Lighting Control and Environmental Sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana Teasdale; Francis Rubinstein; David S. Watson
Although advanced lighting control systems offer significant energy savings, the high cost of retrofitting buildings with advanced lighting control systems is a barrier to adoption of this energy-saving technology. Wireless technology, however, offers a solution to mounting installation costs since it requires no additional wiring to implement. To demonstrate the feasibility of such a system, a prototype wirelessly-controlled advanced lighting system was designed and built. The system includes the following components: a wirelessly-controllable analog circuit module (ACM), a wirelessly-controllable electronic dimmable ballast, a T8 3-lamp fixture, an environmental multi-sensor, a current transducer, and control software. The ACM, dimmable ballast, multi-sensor, and current transducer were all integrated with SmartMesh™ wireless mesh networking nodes, called motes, enabling wireless communication, sensor monitoring, and actuator control. Each mote-enabled device has a reliable communication path to the SmartMesh Manager, a single board computer that controls network functions and connects the wireless network to a PC running lighting control software. The ACM is capable of locally driving one or more standard 0-10 Volt electronic dimmable ballasts through relay control and a 0-10 Volt controllable output, in addition to 0-24 Volt and 0-10 Volt inputs. The mote-integrated electronic dimmable ballast is designed to drive a standard 3-lamp T8 light fixture. The environmental multi-sensor measures occupancy, light level and temperature. The current transducer is used to measure the power consumed by the fixture. Control software was developed to implement advanced lighting algorithms, including open and closed-loop daylight ramping, occupancy control, and demand response. Engineering prototypes of each component were fabricated and tested in a bench-scale system. Based on standard industry practices, a cost analysis was conducted.
It is estimated that the installation cost of a wireless advanced lighting control system for a retrofit application is at least 20% lower than that of a comparable wired system for a typical 16,000-square-foot office building, with a payback period of less than 3 years. At 30% market penetration, a cumulative 695 billion kWh of energy could be saved through 2025, a cost savings of $52 billion.
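The control algorithms named above (closed-loop daylight ramping with occupancy control) can be sketched as a simple feedback loop. This is an illustrative sketch only, not the project's software: the gain, lux values, and lamp model below are invented for the example.

```python
# Hypothetical sketch of closed-loop daylight ramping: an integral-style
# controller nudges the ballast dim level so the sensed light level tracks
# a setpoint, while occupancy gates the fixture entirely.
def daylight_step(dim_level, measured_lux, setpoint_lux, occupied,
                  gain=0.001, min_dim=0.1, max_dim=1.0):
    """Return the next 0-1 dim command for a 0-10 V dimmable ballast."""
    if not occupied:
        return 0.0                      # occupancy control: lights off
    error = setpoint_lux - measured_lux
    dim_level += gain * error           # ramp toward the setpoint
    return max(min_dim, min(max_dim, dim_level))

# Example: daylight contributes 300 lux, so the controller dims the lamp
# down until daylight plus lamp output holds the 500 lux setpoint.
level = 1.0
for _ in range(200):
    daylight = 300.0
    lamp_lux = 600.0 * level            # assumed lamp output at full output
    level = daylight_step(level, daylight + lamp_lux, 500.0, occupied=True)
```

The same step function serves demand response by lowering `setpoint_lux` when a curtailment signal arrives.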
Geometric representation methods for multi-type self-defining remote sensing data sets
NASA Technical Reports Server (NTRS)
Anuta, P. E.
1980-01-01
Efficient and convenient representation of remote sensing data is highly important for its effective utilization. The task of merging different data types is currently handled by treating each case as an individual problem. A description is provided of work carried out to standardize the multidata merging process. The basic concept of the new approach is that of the self-defining data set (SDDS). The creation of a standard is proposed. This standard would be such that data of interest in a large number of earth resources remote sensing applications would be in a format that allows convenient and automatic merging. Attention is given to details regarding the multidata merging problem, a geometric description of multitype data sets, image reconstruction from track-type data, a data set generation system, and an example multitype data set.
Implementation and Testing of the JANUS Standard with SSC Pacific’s Software-Defined Acoustic Modem
2017-10-01
Communications Outpost (FDECO) Innovative Naval Prototype (INP) Program by the Advanced Photonic Technologies Branch (Code 55360), Space and Naval Warfare...underwater acoustic communication operations with NATO and non-NATO military and civilian maritime assets. iv ACRONYMS SPAWAR Space and Naval Warfare...the center frequency [1]. The ease of implementation and proven robustness in harsh underwater acoustic communication channels paved the way for
Database Design and Management in Engineering Optimization.
1988-02-01
scientific and engineering applications. The paper highlights the difference... method in the mid-1950s, along with modern digital computers, have made... is continuously redefined in an application program, DDL must have... application software can call standard subroutines from the DBMS library to define... operations... type data usually encountered in engineering applications. GFDGT: Computes the number of digits needed to display... A user
Control of autonomous ground vehicles: a brief technical review
NASA Astrophysics Data System (ADS)
Babak, Shahian-Jahromi; Hussain, Syed A.; Karakas, Burak; Cetin, Sabri
2017-07-01
This paper presents a brief review of the developments achieved in autonomous vehicle systems technology. A concise history of autonomous driver-assistance systems is presented, followed by a review of the current state-of-the-art sensor technology used in autonomous vehicles. Standard sensor fusion methods that have recently been explored are discussed. Finally, advances in embedded software methodologies that define the logic between sensory information and actuation decisions are reviewed.
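The sensor fusion step the review discusses can be illustrated by its scalar core: combining two noisy measurements of the same quantity by inverse-variance weighting, the building block of a Kalman update. This sketch is not from the paper; the sensor labels and numbers are invented.

```python
# Minimal sensor-fusion sketch: fuse two noisy range estimates (say, radar
# and lidar) by inverse-variance weighting. The fused variance is always
# smaller than either input variance.
def fuse(z1, var1, z2, var2):
    """Fuse two measurements of the same quantity; returns (estimate, variance)."""
    w1 = var2 / (var1 + var2)           # weight toward the less noisy sensor
    w2 = var1 / (var1 + var2)
    est = w1 * z1 + w2 * z2
    var = (var1 * var2) / (var1 + var2)
    return est, var

# Radar (tighter, var 0.04) and lidar (looser, var 0.16) disagree slightly:
est, var = fuse(10.2, 0.04, 9.8, 0.16)
```

The estimate lands closer to the lower-variance sensor, which is exactly the behavior a full Kalman filter generalizes to vector states.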
The computational structural mechanics testbed procedures manual
NASA Technical Reports Server (NTRS)
Stewart, Caroline B. (Compiler)
1991-01-01
The purpose of this manual is to document the standard high-level command language procedures of the Computational Structural Mechanics (CSM) Testbed software system. A description of each procedure, including its function, commands, data interface, and use, is presented. This manual is designed to assist users in defining and using command procedures to perform structural analysis, and complements the CSM Testbed User's Manual and the CSM Testbed Data Library Description.
Computational Software for Fitting Seismic Data to Epidemic-Type Aftershock Sequence Models
NASA Astrophysics Data System (ADS)
Chu, A.
2014-12-01
Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work introduces software to implement two of the ETAS models described in Ogata (1998). To find the maximum-likelihood estimates (MLEs), my software provides estimates of the homogeneous background rate parameter and the temporal and spatial parameters that govern triggering effects by applying the expectation-maximization (EM) algorithm introduced in Veen and Schoenberg (2008). Although other computer programs exist for similar data modeling purposes, the EM algorithm has the benefits of stability and robustness (Veen and Schoenberg, 2008). Spatial shapes that are very long and narrow cause difficulties in optimization convergence, and flat or multi-modal log-likelihood functions raise similar issues. My program uses a robust method that presets a parameter to overcome this non-convergence issue. In addition to model fitting, the software is equipped with useful tools for examining model-fitting results, for example, visualization of the estimated conditional intensity and estimation of the expected number of triggered aftershocks. A simulation generator is also provided, with flexible spatial shapes that may be defined by the user. This open-source software has a very simple user interface. The user may execute it on a local computer, and the program also has the potential to be hosted online. The Java language is used for the software's core computing part, and an optional interface to the statistical package R is provided.
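The conditional intensity being estimated and visualized above has a standard temporal form: a background rate plus Omori-law triggering from past events. The following is an illustrative sketch of that formula (not the author's Java code); the parameter values are invented.

```python
# Temporal ETAS conditional intensity: lambda(t) = mu + sum over past events
# of K * exp(alpha * (m_i - M0)) / (t - t_i + c)^p, where mu is the
# homogeneous background rate and K, alpha, c, p govern triggering.
import math

def etas_intensity(t, events, mu, K, alpha, c, p, M0):
    """events: list of (time, magnitude) pairs; only events before t trigger."""
    rate = mu
    for ti, mi in events:
        if ti < t:
            rate += K * math.exp(alpha * (mi - M0)) / (t - ti + c) ** p
    return rate

# Two past events raise the rate above the background mu = 0.1:
events = [(0.0, 5.0), (1.0, 4.2)]
lam = etas_intensity(2.0, events, mu=0.1, K=0.02, alpha=1.0, c=0.01, p=1.1, M0=3.0)
```

In an EM fit, the triggering sum is what assigns each event a probability of being background versus aftershock.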
Hardware independence checkout software
NASA Technical Reports Server (NTRS)
Cameron, Barry W.; Helbig, H. R.
1990-01-01
ACSI has developed a program utilizing CLIPS to assess compliance with various programming standards. Essentially, the program parses C code to extract the names of all function calls. These are asserted as CLIPS facts, which also include information about line numbers, source file names, and called functions. Rules have been devised to identify called functions that have not been defined in any of the source parsed. These are compared against lists of standards (represented as facts) using rules that check intersections and/or unions of these lists. By piping the output into other processes, the source is appropriately commented by generating and executing the parsed scripts.
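The checking logic described above can be sketched compactly in Python rather than CLIPS: extract called names from C source, subtract those defined locally, and flag anything not in an allowed standard. The regexes and the tiny "standard" set below are invented simplifications, not ACSI's rules.

```python
# Rough re-creation of the compliance check: find calls, find local
# definitions, and report calls that are neither defined nor allowed.
import re

ANSI_C = {"printf", "malloc", "free", "strcpy"}   # illustrative subset only

def check_calls(c_source, allowed=ANSI_C):
    calls = set(re.findall(r"\b(\w+)\s*\(", c_source))
    # crude "definition" pattern: name(params) followed by a function body
    defined = set(re.findall(r"\b(\w+)\s*\([^;{]*\)\s*\{", c_source))
    keywords = {"if", "for", "while", "switch", "return", "sizeof"}
    undefined = calls - defined - keywords
    return sorted(undefined - allowed)            # calls violating the standard

src = 'int main(void) { printf("hi"); legacy_io(); return 0; }'
violations = check_calls(src)                     # -> ['legacy_io']
```

A real checker would use a C parser instead of regexes, but the set algebra (intersections and differences of call, definition, and standard lists) is the same idea the CLIPS rules implement.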
Scenario driven data modelling: a method for integrating diverse sources of data and data streams
2011-01-01
Background Biology is rapidly becoming a data-intensive, data-driven science. It is essential that data is represented and connected in ways that best represent its full conceptual content and allow both automated integration and data-driven decision-making. Recent advancements in distributed multi-relational directed graphs, implemented in the form of the Semantic Web, make it possible to deal with complicated heterogeneous data in new and interesting ways. Results This paper presents a new approach, scenario driven data modelling (SDDM), that integrates multi-relational directed graphs with data streams. SDDM can be applied to virtually any data integration challenge with widely divergent types of data and data streams. In this work, we explored integrating genetics data with reports from traditional media. SDDM was applied to the New Delhi metallo-beta-lactamase gene (NDM-1), an emerging global health threat. The SDDM process constructed a scenario, created an RDF multi-relational directed graph that linked diverse types of data to the Semantic Web, implemented RDF conversion tools (RDFizers) to bring content into the Semantic Web, identified data streams and analytical routines to analyse those streams, and identified user requirements and graph traversals to meet end-user requirements. Conclusions We provided an example where SDDM was applied to a complex data integration challenge. The process created a model of the emerging NDM-1 health threat, identified and filled gaps in that model, and constructed reliable software that monitored data streams based on the scenario-derived multi-relational directed graph. The SDDM process significantly reduced the software requirements phase by letting the scenario and resulting multi-relational directed graph define what is possible and then set the scope of the user requirements.
Approaches like SDDM will be critical to the future of data intensive, data-driven science because they automate the process of converting massive data streams into usable knowledge. PMID:22165854
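The multi-relational directed graph at the heart of SDDM can be pictured as a set of RDF-style (subject, predicate, object) triples with traversals over named relations. The node and relation names below are invented for illustration and are not taken from the NDM-1 study.

```python
# Schematic multi-relational directed graph: each triple is one labeled,
# directed edge; a traversal follows one relation type outward from a node.
triples = {
    ("NDM-1", "is_a", "resistance_gene"),
    ("NDM-1", "reported_in", "news_article_17"),
    ("NDM-1", "found_in", "K. pneumoniae"),
    ("news_article_17", "published_in", "2010"),
}

def traverse(graph, start, predicate):
    """Return all objects reachable from `start` via one `predicate` edge."""
    return sorted(obj for s, p, obj in graph if s == start and p == predicate)

sources = traverse(triples, "NDM-1", "reported_in")   # -> ['news_article_17']
```

This is the structure a scenario defines in SDDM: end-user questions become graph traversals like the one above, and data streams add new triples as they arrive.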
Multi-Objective Reinforcement Learning-based Deep Neural Networks for Cognitive Space Communications
NASA Technical Reports Server (NTRS)
Ferreria, Paulo; Paffenroth, Randy; Wyglinski, Alexander M.; Hackett, Timothy; Bilen, Sven; Reinhart, Richard; Mortensen, Dale
2017-01-01
Future communication subsystems of space exploration missions can potentially benefit from software-defined radios (SDRs) controlled by machine learning algorithms. In this paper, we propose a novel hybrid radio resource allocation management control algorithm that integrates multi-objective reinforcement learning and deep artificial neural networks. The objective is to efficiently manage communications system resources by monitoring performance functions with common dependent variables that result in conflicting goals. The uncertainty in the performance of thousands of different possible combinations of radio parameters makes the trade-off between exploration and exploitation in reinforcement learning (RL) much more challenging for future critical space-based missions. Thus, the system should spend as little time as possible on exploring actions, and whenever it explores an action, it should perform at acceptable levels most of the time. The proposed approach enables on-line learning by interactions with the environment and restricts poor resource allocation performance through virtual environment exploration. Improvements in the multiobjective performance can be achieved via transmitter parameter adaptation on a packet-basis, with poorly predicted performance promptly resulting in rejected decisions. Simulations presented in this work considered the DVB-S2 standard adaptive transmitter parameters and additional ones expected to be present in future adaptive radio systems. Performance results are provided by analysis of the proposed hybrid algorithm when operating across a satellite communication channel from Earth to GEO orbit during clear sky conditions. The proposed approach constitutes part of the core cognitive engine proof-of-concept to be delivered to the NASA Glenn Research Center SCaN Testbed located onboard the International Space Station.
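The exploration/exploitation trade-off discussed above can be illustrated with the simplest RL machinery: epsilon-greedy selection over radio parameter combinations, scored by a scalarized multi-objective reward. This is a hedged toy sketch, not the paper's hybrid algorithm; the weights, rewards, and action set are invented.

```python
# Epsilon-greedy action selection over radio parameter combinations with a
# weighted scalarization of two conflicting goals (throughput vs. power).
import random

def scalarize(throughput, power, w_thr=1.0, w_pow=0.5):
    return w_thr * throughput - w_pow * power      # conflicting objectives

def select_action(q_values, epsilon, rng):
    if rng.random() < epsilon:                     # explore rarely
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

rng = random.Random(0)
q = [0.0, 0.0, 0.0]                                # one value per combo
true_reward = [scalarize(2.0, 1.0), scalarize(5.0, 2.0), scalarize(3.0, 3.0)]
for _ in range(500):
    a = select_action(q, epsilon=0.1, rng=rng)
    q[a] += 0.1 * (true_reward[a] - q[a])          # incremental value update
best = max(range(3), key=q.__getitem__)            # index of best trade-off
```

The paper's approach replaces the lookup table with deep networks and restricts exploration to a virtual environment, precisely because spending real on-orbit packets on bad actions is unacceptable.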
Li, Ting; Hong, Jun; Zhang, Jinhua; Guo, Feng
2014-03-15
The improvement of the resolution of brain signals and of the ability to control external devices has been the most important goal in the BMI research field. This paper describes a non-invasive brain-actuated manipulator experiment, which defined a paradigm for the motion control of a serial manipulator based on motor imagery and shared control. The techniques of component selection, spatial filtering, and classification of motor imagery were involved. A small-world neural network (SWNN) was used to classify five brain states. To verify the effectiveness of the proposed classifier, we replaced the SWNN classifier with a radial basis function (RBF) neural network, a standard multi-layered feed-forward backpropagation network (SMN), and a multi-SVM classifier, with the same features for the classification. The results indicate that the proposed classifier achieves a 3.83% improvement over the best results of the other classifiers. We proposed a shared control method consisting of two control patterns to expand the control of the BMI from the software angle. The job of path building for reaching the 'end' point was designated as an assessment task. We recorded all paths contributed by subjects and picked relevant parameters as evaluation coefficients. With the assistance of the two control patterns and a series of machine learning algorithms, the proposed BMI achieved, for the first time, motion control of a manipulator in the whole workspace. According to the experimental results, we confirmed the feasibility of the proposed BMI method for 3D motion control of a manipulator using EEG during motor imagery. Copyright © 2013 Elsevier B.V. All rights reserved.
Packet utilisation definitions for the ESA XMM mission
NASA Technical Reports Server (NTRS)
Nye, H. R.
1994-01-01
XMM, ESA's X-Ray Multi-Mirror satellite, due for launch at the end of 1999, will be the first ESA scientific spacecraft to implement the ESA packet telecommand and telemetry standards, and the first ESOC-controlled science mission to take advantage of the new flight control system infrastructure development (based on object-oriented design and distributed-system architecture) due for deployment in 1995. The implementation of the packet standards is well defined at the packet transport level. However, the standard relevant to the application level (the ESA Packet Utilization Standard) covers a wide range of on-board 'services' applicable in varying degrees to the needs of XMM. In defining which parts of the ESA PUS to implement, the XMM project first considered the mission objectives and the derived operations concept, and went on to identify a minimum set of packet definitions compatible with these aspects. This paper sets the scene as above and then describes the services needed for XMM and the telecommand and telemetry packet types necessary to support each service.
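In the PUS, each on-board service is addressed by a service type and subtype carried in the packet's data-field header. The sketch below packs a deliberately simplified telemetry header to show where those fields sit; the exact field layout here is an illustrative assumption, not the full CCSDS/PUS encoding.

```python
# Simplified 8-byte TM header sketch: CCSDS-style packet ID (version, type,
# secondary-header flag, APID), sequence control, then the PUS-style
# service type and subtype, padded to an even length.
import struct

def make_tm_header(apid, seq_count, service_type, service_subtype):
    packet_id = 0x0800 | (apid & 0x07FF)      # sec-hdr flag set: PUS header present
    seq_ctrl = 0xC000 | (seq_count & 0x3FFF)  # 'standalone packet' grouping flags
    return struct.pack(">HHBBH", packet_id, seq_ctrl,
                       service_type, service_subtype, 0)

# e.g. housekeeping telemetry report, service type 3, subtype 25:
hdr = make_tm_header(apid=100, seq_count=7, service_type=3, service_subtype=25)
```

Choosing which (type, subtype) pairs to implement is exactly the tailoring exercise the XMM project describes.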
NASA Astrophysics Data System (ADS)
Bennetti, Andrea; Ansari, Salim; Dewhirst, Tori; Catanese, Giuseppe
2010-08-01
The development of satellites and ground systems (and the technologies that support them) is complex and demands a great deal of rigor in the management of both the information it relies upon and the information it generates via the performance of well-established processes. To this extent, for the past fifteen years Sapienza Consulting has been supporting the European Space Agency (ESA) in the management of this information and has provided ESA with ECSS (European Cooperation for Space Standardization) Standards-based Project Management (PM), Product Assurance (PA) and Quality Assurance (QA) software applications. In 2009 Sapienza recognised the need to modernize, standardize, and integrate its core ECSS-based software tools into a single yet modularised suite of applications named ECLIPSE aimed at: • Fulfilling a wider range of historical and emerging requirements, • Providing a better experience for users, • Increasing the value of the information it collects and manages, • Lowering the cost of ownership and operation, • Increasing collaboration within and between space sector organizations, • Aiding in the performance of several PM, PA, QA, and configuration management tasks in adherence to ECSS standards. In this paper, Sapienza will first present the toolset and a rationale for its development, describing and justifying its architecture and basic module composition. Having defined the toolset architecture, this paper will address the current status of the individual applications. A compliance assessment will be presented for each module in the toolset with respect to the ECSS standard it addresses. Lastly, experience from early industry and institutional users will be presented.
Software Engineering Laboratory (SEL) cleanroom process model
NASA Technical Reports Server (NTRS)
Green, Scott; Basili, Victor; Godfrey, Sally; Mcgarry, Frank; Pajerski, Rose; Waligora, Sharon
1991-01-01
The Software Engineering Laboratory (SEL) cleanroom process model is described. The term 'cleanroom' originates in the integrated circuit (IC) production process, where ICs are assembled in dust-free 'clean rooms' to prevent the destructive effects of dust. When applying the cleanroom methodology to the development of software systems, the primary focus is on software defect prevention rather than defect removal. The model is based on data and analysis from previous cleanroom efforts within the SEL and is tailored to serve as a guideline in applying the methodology to future production software efforts. The phases that are part of the process model life cycle, from the delivery of requirements to the start of acceptance testing, are described. For each defined phase, a set of specific activities is discussed, and the appropriate data flow is described. Pertinent managerial issues, key similarities and differences between the SEL's cleanroom process model and the standard development approach used on SEL projects, and significant lessons learned from prior cleanroom projects are presented. It is intended that the process model described here will be further tailored as additional SEL cleanroom projects are analyzed.
A method for tailoring the information content of a software process model
NASA Technical Reports Server (NTRS)
Perkins, Sharon; Arend, Mark B.
1990-01-01
The framework is defined for a general method for selecting a necessary and sufficient subset of a general software life cycle's information products to support a new software development process. Procedures for characterizing problem domains in general and mapping them to a tailored set of life cycle processes and products are presented. An overview of the method is shown using the following steps: (1) During the problem concept definition phase, perform standardized interviews and dialogs between developer and user, and between user and customer; (2) Generate a quality needs profile of the software to be developed, based on information gathered in step 1; (3) Translate the quality needs profile into a profile of quality criteria that must be met by the software to satisfy the quality needs; (4) Map the quality criteria to a set of accepted processes and products for achieving each criterion; (5) Select the information products which match or support the accepted processes and products of step 4; and (6) Select the design methodology which produces the information products selected in step 5.
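Steps 3 through 5 of the method are essentially table lookups composed in sequence. The toy sketch below makes that concrete; the mapping tables are invented examples, not the method's actual profiles.

```python
# Toy pipeline for steps 3-5: quality needs -> quality criteria -> accepted
# processes/products. Real tables would come from the standardized interviews.
NEEDS_TO_CRITERIA = {
    "high reliability": ["fault tolerance", "testability"],
    "long service life": ["maintainability"],
}
CRITERIA_TO_PRODUCTS = {
    "fault tolerance": ["FMEA report"],
    "testability": ["unit test plan"],
    "maintainability": ["design rationale document"],
}

def tailor(quality_needs):
    """Return (criteria, information products) for a quality-needs profile."""
    criteria = sorted({c for n in quality_needs
                         for c in NEEDS_TO_CRITERIA.get(n, [])})
    products = sorted({p for c in criteria for p in CRITERIA_TO_PRODUCTS[c]})
    return criteria, products

criteria, products = tailor(["high reliability"])
```

Step 6 then picks the design methodology whose outputs cover the selected products.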
Löck, Steffen; Roth, Klaus; Skripcak, Tomas; Worbs, Mario; Helmbrecht, Stephan; Jakobi, Annika; Just, Uwe; Krause, Mechthild; Baumann, Michael; Enghardt, Wolfgang; Lühr, Armin
2015-09-01
To guarantee equal access to optimal radiotherapy, a concept of patient assignment to photon or particle radiotherapy using remote treatment plan exchange and comparison - ReCompare - was proposed. We demonstrate the implementation of this concept and present its clinical applicability. The ReCompare concept was implemented using a client-server-based software solution. A clinical workflow for the remote treatment plan exchange and comparison was defined. The steps required by the user and performed by the software for a complete plan transfer were described, and an additional module for dose-response modeling was added. The ReCompare software was successfully tested in cooperation with three external partner clinics and met all required specifications. It was compatible with several standard treatment planning systems, ensured patient data protection, and integrated into the clinical workflow. The ReCompare software can be applied to support non-particle radiotherapy institutions in making the patient-specific treatment decision on the optimal irradiation modality by remote treatment plan exchange and comparison. Copyright © 2015. Published by Elsevier GmbH.
A Multi-User Microcomputer System for Small Libraries.
ERIC Educational Resources Information Center
Leggate, Peter
1988-01-01
Describes the development of Bookshelf, a multi-user microcomputer system for small libraries that uses an integrated software package. The discussion covers the design parameters of the package, which were based on a survey of seven small libraries, and some characteristics of the software. (three notes with references) (CLB)
In-network adaptation of SHVC video in software-defined networks
NASA Astrophysics Data System (ADS)
Awobuluyi, Olatunde; Nightingale, James; Wang, Qi; Alcaraz Calero, Jose Maria; Grecos, Christos
2016-04-01
Software Defined Networks (SDNs), when combined with Network Function Virtualization (NFV), represent a paradigm shift in how future networks will behave and be managed. SDNs are expected to provide the underpinning technologies for future innovations such as 5G mobile networks and the Internet of Everything. The SDN architecture offers features that facilitate an abstracted and centralized global network view in which packet forwarding or dropping decisions are based on application flows. Software Defined Networks facilitate a wide range of network management tasks, including the adaptation of real-time video streams as they traverse the network. SHVC, the scalable extension to the recent H.265 standard, is a new video encoding standard that supports ultra-high-definition video streams with spatial resolutions of up to 7680×4320 and frame rates of 60 fps or more. The massive increase in bandwidth required to deliver these U-HD video streams dwarfs the bandwidth requirements of current high-definition (HD) video. Such large bandwidth increases pose very significant challenges for network operators. In this paper we go substantially beyond the limited number of existing implementations and proposals for video streaming in SDNs, all of which have primarily focused on traffic engineering solutions such as load balancing. By implementing and empirically evaluating an SDN-enabled Media Adaptation Network Entity (MANE) we provide valuable empirical insight into the benefits and limitations of SDN-enabled video adaptation for real-time video applications. The SDN-MANE is the video adaptation component of our Video Quality Assurance Manager (VQAM) SDN control plane application, which also includes an SDN monitoring component to acquire network metrics and a decision-making engine using algorithms to determine the optimum adaptation strategy for any real-time video application flow given the current network conditions.
Our proposed VQAM application has been implemented and evaluated on an SDN allowing us to provide important benchmarks for video streaming over SDN and for SDN control plane latency.
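The core decision a MANE makes for a scalable stream is which enhancement layers to forward under the current bandwidth. The sketch below shows that decision in isolation; the layer bitrates and the greedy keep-or-drop policy are invented for illustration, not VQAM's actual adaptation algorithm.

```python
# Greedy layer adaptation for a scalable (SHVC-style) stream: keep the base
# layer unconditionally, then add enhancement layers while they fit within
# the available bandwidth; drop everything above the first layer that doesn't.
def adapt_layers(layer_bitrates_kbps, available_kbps):
    """layer_bitrates_kbps[0] is the base layer; returns indices to forward."""
    kept, total = [], 0
    for i, rate in enumerate(layer_bitrates_kbps):
        if i == 0 or total + rate <= available_kbps:
            kept.append(i)                 # base layer is always forwarded
            total += rate
        else:
            break                          # drop this and all higher layers
    return kept

# Base 2 Mbps + two enhancement layers, 7 Mbps available on the path:
layers = adapt_layers([2000, 4000, 8000], available_kbps=7000)   # -> [0, 1]
```

Because SHVC layers are independently droppable, the MANE can do this per flow in the data plane without re-encoding, which is what makes in-network adaptation cheap relative to transcoding.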
Accelerator controls at CERN: Some converging trends
NASA Astrophysics Data System (ADS)
Kuiper, B.
1990-08-01
CERN's growing services to the high-energy physics community, delivered with frozen resources, have led to the implementation of "Technical Boards", mandated to assist the management by making recommendations for rationalizations in various technological domains. The Board on Process Control and Electronics for Accelerators, TEBOCO, has emphasized four main lines which might yield economy in resources. First, a common architecture for accelerator controls has been agreed between the three accelerator divisions. Second, a common hardware/software kit has been defined, from which the large majority of future process interfacing may be composed. A support service for this kit is an essential part of the plan. Third, high-level protocols have been developed for standardizing access to process devices. They derive from agreed standard models of the devices and involve a standard control message. This should ease application development and mobility of equipment. Fourth, a common software engineering methodology and a commercial package of application development tools have been adopted. Some rationalization in the field of the man-machine interface and in matters of synchronization is also under way.
A BPMN solution for chaining OGC services to quality assure location-based crowdsourced data
NASA Astrophysics Data System (ADS)
Meek, Sam; Jackson, Mike; Leibovici, Didier G.
2016-02-01
The Open Geospatial Consortium (OGC) Web Processing Service (WPS) standard enables access to a centralized repository of processes and services from compliant clients. A crucial part of the standard includes the provision to chain disparate processes and services to form a reusable workflow. To date this has been realized by methods such as embedding XML requests, using Business Process Execution Language (BPEL) engines, and using other external orchestration engines. Although these allow the user to define tasks and data artifacts as web services, they are often considered inflexible and complicated, often due to vendor-specific solutions and inaccessible documentation. This paper introduces a new method of flexible service chaining using the standard Business Process Model and Notation (BPMN). A prototype system has been developed upon an existing open-source BPMN suite to illustrate the advantages of the approach. The motivation for the software design is qualification of crowdsourced data for use in policy-making. The software is tested as part of a project that seeks to qualify, assure, and add value to crowdsourced data in a biological monitoring use case.
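The chaining idea itself is simple: each process consumes the previous one's output, as a BPMN engine would sequence WPS tasks. The sketch below shows that pattern on a crowdsourced-data qualification example; the two qualification steps are invented placeholders, not the paper's actual processes.

```python
# Minimal process-chaining sketch: a "workflow" is just an ordered list of
# processes, each taking the previous process's output as its input.
def drop_missing_coords(records):
    """Qualification step 1: discard records without coordinates."""
    return [r for r in records if r.get("lat") is not None
                               and r.get("lon") is not None]

def flag_outliers(records, max_abs_lat=90.0):
    """Qualification step 2: mark records with impossible latitudes."""
    return [dict(r, valid=abs(r["lat"]) <= max_abs_lat) for r in records]

def run_chain(records, *processes):
    for process in processes:              # output of one feeds the next
        records = process(records)
    return records

crowd = [{"lat": 52.9, "lon": -1.2},
         {"lat": None, "lon": 3.0},
         {"lat": 95.0, "lon": 0.0}]
qualified = run_chain(crowd, drop_missing_coords, flag_outliers)
```

A BPMN engine adds what this sketch lacks: a standard graphical notation for the chain, plus branching, error handling, and remote execution of each step as a WPS call.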
From data to knowledge: The future of multi-omics data analysis for the rhizosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen White, Richard; Borkum, Mark I.; Rivas-Ubach, Albert
The rhizosphere is the interface between a plant's roots and its surrounding soil. The rhizosphere microbiome, a complex microbial ecosystem, nourishes the terrestrial biosphere. Integrated multi-omics is a modern approach to systems biology that analyzes and interprets the datasets of multiple -omes of both individual organisms and multi-organism communities and consortia. The successful usage and application of integrated multi-omics to rhizospheric science is predicated upon the availability of rhizosphere-specific data, metadata and software. This review analyzes the availability of multi-omics data, metadata and software for rhizospheric science, identifying potential issues, challenges and opportunities.
A Unified Algebraic and Logic-Based Framework Towards Safe Routing Implementations
2015-08-13
Software-Defined Networks (SDN). We developed a declarative platform for implementing SDN protocols using declarative... and debugging several SDN applications. Example-based SDN synthesis. The recent emergence of software-defined networks offers an opportunity to design... domain of Software-Defined Networks (SDN). We developed a declarative platform for implementing SDN protocols using declarative networking
Kalina, Tomas; Flores-Montero, Juan; Lecrevisse, Quentin; Pedreira, Carlos E; van der Velden, Vincent H J; Novakova, Michaela; Mejstrikova, Ester; Hrusak, Ondrej; Böttcher, Sebastian; Karsch, Dennis; Sędek, Łukasz; Trinquand, Amelie; Boeckx, Nancy; Caetano, Joana; Asnafi, Vahid; Lucio, Paulo; Lima, Margarida; Helena Santos, Ana; Bonaccorso, Paola; van der Sluijs-Gelling, Alita J; Langerak, Anton W; Martin-Ayuso, Marta; Szczepański, Tomasz; van Dongen, Jacques J M; Orfao, Alberto
2015-02-01
Flow cytometric immunophenotyping has become essential for accurate diagnosis, classification, and disease monitoring in hemato-oncology. The EuroFlow Consortium has established a fully standardized "all-in-one" pipeline consisting of standardized instrument settings, reagent panels, sample preparation protocols, and software for data analysis and disease classification. For its reproducible implementation, parallel development of a quality assurance (QA) program was required. Here, we report on the results of four consecutive annual rounds of the novel external EuroFlow QA program. The novel QA scheme aimed at monitoring the whole flow cytometric analysis process (cytometer settings, sample preparation, acquisition, and analysis) by reading the median fluorescence intensities (MedFI) of defined lymphocyte subsets. Each QA participant applied the predefined reagent panel to blood cells of local healthy donors. A uniform gating strategy was applied to define lymphocyte subsets and to read MedFI values per marker. The MedFI values were compared with reference data, and deviations from reference values were quantified using performance score metrics. In four annual QA rounds, we analyzed 123 blood samples from local healthy donors on 14 different instruments in 11 laboratories from nine European countries. The immunophenotype of defined cellular subsets appeared sufficiently standardized to permit unified (software) data analysis. The coefficient of variation of MedFI remained below 30% for 7 of 11 markers, and the average MedFI in each QA round ranged from 86 to 125% of the overall median. Calculation of performance scores was instrumental in pinpointing standardization failures and their causes. Overall, the new EuroFlow QA system for the first time allowed quantification of the technical variation that is introduced in the measurement of fluorescence intensities in a multicentric setting over an extended period of time.
EuroFlow QA is a proficiency test specific for laboratories that use standardized EuroFlow protocols. It may be used to complement, but not replace, established proficiency tests. © 2014 International Society for Advancement of Cytometry.
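The acceptance metric used above, the coefficient of variation (CV = σ/µ), can be sketched in a few lines; the MedFI readings below are purely illustrative, not EuroFlow data.

```python
import statistics

def cv(values):
    """Coefficient of variation: sample standard deviation over the mean."""
    return statistics.stdev(values) / statistics.mean(values)

# Hypothetical MedFI readings for one marker across QA rounds
medfi = [100.0, 110.0, 90.0, 105.0, 95.0]
print(f"CV = {cv(medfi):.1%}")  # well below the 30% bound cited above
```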
The contaminant analysis automation robot implementation for the automated laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younkin, J.R.; Igou, R.E.; Urenda, T.D.
1995-12-31
The Contaminant Analysis Automation (CAA) project defines the automated laboratory as a series of standard laboratory modules (SLM) serviced by a robotic standard support module (SSM). These SLMs are designed to allow plug-and-play integration into automated systems that perform standard analysis methods (SAM). While the SLMs are autonomous in the execution of their particular chemical processing tasks, the SAM concept relies on a high-level task sequence controller (TSC) to coordinate the robotic delivery of materials required for SLM operations, initiate an SLM operation with the chemical-method-dependent operating parameters, and coordinate the robotic removal of materials from the SLM to ready them for transport operations. The Supervisor and Subsystems (GENISAS) software governs events from the SLMs and robot. The Intelligent System Operating Environment (ISOE) enables the inter-process communications used by GENISAS. CAA selected the Hewlett-Packard Optimized Robot for Chemical Analysis (ORCA) and its associated Windows-based Methods Development Software (MDS) as the robot SSM. The MDS software is used to teach the robot each SLM position and the required material-port motions. To allow the TSC to command these SLM motions, a hardware and software implementation was required that allowed message passing between different operating systems. This implementation used a VMEbus (VME) rack with a Force CPU-30 computer running VxWorks, a real-time multitasking operating system, and a RadiSys PC-compatible VME computer running MDS. A GENISAS server on the Force computer accepts a transport command from the TSC, a GENISAS supervisor, over Ethernet and notifies software on the RadiSys PC of the pending command through VMEbus shared memory. The command is then delivered to the MDS robot control software using a Windows Dynamic Data Exchange conversation.
A software for managing chemical processes in a multi-user laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Camino, Fernando E.
2016-10-26
Here, we report software for logging chemical processes in a multi-user laboratory, implementing a workflow designed to reduce hazardous situations associated with the disposal of chemicals in incompatible waste containers. The software allows users to perform only those processes displayed in their list of authorized chemical processes and provides the location and label code of waste containers, among other useful information. The software has been used for six years in the cleanroom of the Center for Functional Nanomaterials at Brookhaven National Laboratory and has been an important factor in the Center's excellent safety record.
NASA Technical Reports Server (NTRS)
Thompson David S.; Soni, Bharat K.
2001-01-01
An integrated geometry/grid/simulation software package, ICEG2D, is being developed to automate computational fluid dynamics (CFD) simulations for single- and multi-element airfoils with ice accretions. The current version, ICEG2D (v2.0), was designed to automatically perform four primary functions: (1) generate a grid-ready surface definition based on the geometrical characteristics of the iced airfoil surface, (2) generate high-quality structured and generalized grids starting from that surface definition, (3) generate the input and restart files needed to run the structured-grid CFD solver NPARC or the generalized-grid CFD solver HYBFL2D, and (4) using the flow solutions, generate solution-adaptive grids. ICEG2D (v2.0) can be operated either in batch mode using a script file or in interactive mode by entering directives from a command line within a Unix shell. This report summarizes activities completed in the first two years of a three-year research and development program to address automation issues related to CFD simulations for airfoils with ice accretions. As well as describing the technology employed in the software, this document serves as a user's manual providing installation and operating instructions. An evaluation of the software is also presented.
On process optimization considering LCA methodology.
Pieragostini, Carla; Mussati, Miguel C; Aguirre, Pío
2012-04-15
The goal of this work is to survey the state of the art in process optimization techniques and tools based on LCA, focused on the process engineering field. A collection of methods, approaches, applications, specific software packages, and insights regarding experiences and progress made in coupling the LCA methodology to optimization frameworks is provided, and general trends are identified. The "cradle-to-gate" concept for defining the system boundaries is the approach most used in practice, rather than the "cradle-to-grave" approach. Normally, the relationship between inventory data and impact category indicators is expressed linearly through the characterization factors; synergistic effects of the contaminants are therefore neglected. Among the LCIA methods, the eco-indicator 99, which is based on the endpoint category and the panel method, is the most used in practice. A single environmental impact function, resulting from the aggregation of environmental impacts, is formulated as the environmental objective in most analyzed cases. SimaPro is the software most used for LCA applications in the literature analyzed. Multi-objective optimization is the most common approach for dealing with this kind of problem, and the ε-constraint method is the technique most applied for generating the Pareto set. However, a renewed interest in formulating a single economic objective function in optimization frameworks can be observed, favored by the development of life cycle cost software and progress made in assessing the costs of environmental externalities. Finally, a trend toward handling multi-period scenarios in integrated LCA-optimization frameworks can be distinguished, providing more accurate results as data availability permits. Copyright © 2011 Elsevier Ltd. All rights reserved.
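The ε-constraint method mentioned above can be sketched on a toy discrete design space (all designs and values hypothetical): minimize cost subject to environmental impact ≤ ε, and sweep ε to trace the Pareto set.

```python
# Toy designs: (name, cost, environmental impact) -- hypothetical values
designs = [("A", 10.0, 9.0), ("B", 12.0, 6.0), ("C", 15.0, 4.0), ("D", 16.0, 8.0)]

def eps_constraint(designs, eps):
    """Minimize cost subject to impact <= eps; return None if infeasible."""
    feasible = [d for d in designs if d[2] <= eps]
    return min(feasible, key=lambda d: d[1]) if feasible else None

# Sweep epsilon over the observed impact range to trace the Pareto front
pareto = []
for eps in sorted({d[2] for d in designs}):
    best = eps_constraint(designs, eps)
    if best and best not in pareto:
        pareto.append(best)
print([d[0] for d in pareto])  # dominated design "D" never appears
```

In a real LCA-optimization coupling the inner minimization would be a full process model rather than a lookup over a list, but the ε sweep works the same way.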
NASA Technical Reports Server (NTRS)
Siamidis, John; Yuko, Jim
2014-01-01
The Space Communications and Navigation (SCaN) Program Office at NASA Headquarters oversees all of NASA's space communications activities. SCaN manages and directs the ground-based facilities and services provided by the Deep Space Network (DSN), Near Earth Network (NEN), and Space Network (SN). Through the SCaN Program Office, NASA GRC developed a Software Defined Radio (SDR) testbed experiment (SCaN testbed experiment) for use on the International Space Station (ISS). It comprises three different SDRs: the Jet Propulsion Laboratory (JPL) radio, the Harris Corporation radio, and the General Dynamics Corporation radio. The SCaN testbed experiment provides an on-orbit, adaptable, Space Telecommunications Radio System (STRS)-based facility to conduct a suite of experiments to advance SDR technology and the STRS standard, reduce risk (Technology Readiness Level (TRL) advancement) for candidate Constellation future space flight hardware and software, and demonstrate space communication links critical to future NASA exploration missions. The SCaN testbed project provides NASA, industry, other Government agencies, and academic partners the opportunity to develop and field communications, navigation, and networking technologies in the laboratory and space environment based on reconfigurable, software defined radio platforms and the STRS architecture. The SCaN testbed is resident on the P3 Express Logistics Carrier (ELC) on the exterior truss of the ISS. The SCaN testbed payload launched on the Japanese Aerospace Exploration Agency (JAXA) H-II Transfer Vehicle (HTV) and was installed on the ISS P3 ELC at the inboard RAM P3 site. Daily operations and testing are managed out of NASA GRC in the Telescience Support Center (TSC).
Multi-Scale Modeling of an Integrated 3D Braided Composite with Applications to Helicopter Arm
NASA Astrophysics Data System (ADS)
Zhang, Diantang; Chen, Li; Sun, Ying; Zhang, Yifan; Qian, Kun
2017-10-01
A study is conducted with the aim of developing a multi-scale analytical method for designing a composite helicopter arm with a three-dimensional (3D) five-directional braided structure. Based on the analysis of the 3D braided microstructure, a multi-scale finite element model is developed. Finite element analysis of the load capacity of the 3D five-directional braided composite helicopter arm is carried out using the software ABAQUS/Standard. The influences of the braiding angle and loading condition on the stress and strain distribution of the helicopter arm are simulated. The results show that the proposed multi-scale method is capable of accurately predicting the mechanical properties of 3D braided composites, as validated by comparison with the stress-strain curves of meso-scale RVCs. Furthermore, it is found that the braiding angle is an important factor affecting the mechanical properties of the 3D five-directional braided composite helicopter arm. Based on the optimized structural parameters, the nearly net-shaped composite helicopter arm is fabricated using a novel resin transfer molding (RTM) process.
Rahman, M Azizur; Rusteberg, Bernd; Gogu, R C; Lobo Ferreira, J P; Sauter, Martin
2012-05-30
This study reports the development of a new spatial multi-criteria decision analysis (SMCDA) software tool for selecting suitable sites for Managed Aquifer Recharge (MAR) systems. The new SMCDA software tool combines existing multi-criteria evaluation methods with modern decision analysis techniques. More specifically, non-compensatory screening, criteria standardization and weighting, and the Analytical Hierarchy Process (AHP) have been combined with Weighted Linear Combination (WLC) and Ordered Weighted Averaging (OWA). The SMCDA tool can accommodate a wide range of decision makers' preferences. The tool's user-friendly interface guides the decision maker through the sequential steps of site selection, namely constraint mapping, criteria hierarchy, criteria standardization and weighting, and criteria overlay. The tool offers predetermined default criteria and standard methods to balance ease of use and efficiency. Integrated into ArcGIS, the tool has the advantage of using GIS tools for spatial analysis, through which data may be processed and displayed. The tool is non-site-specific, adaptive, and comprehensive, and may be applied to any type of site-selection problem. To demonstrate the robustness of the new tool, a case study was planned and executed in the Algarve Region, Portugal. The efficiency of the SMCDA tool in the decision-making process for selecting suitable MAR sites was also demonstrated. Specific aspects of the tool, such as built-in default criteria, explicit decision steps, and flexibility in choosing different options, were key features that benefited the study. The new SMCDA tool can be augmented by groundwater flow and transport modeling to achieve a more comprehensive approach to selecting the best locations for MAR infiltration basins, as well as the locations of recovery wells and areas of groundwater protection.
The new spatial multi-criteria analysis tool has already been implemented within the GIS-based Gabardine decision support system as an innovative MAR planning tool. Copyright © 2012 Elsevier Ltd. All rights reserved.
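The Weighted Linear Combination step named above reduces to a weighted sum over standardized criteria scores; a minimal sketch follows, with the criteria names, weights, and site scores all hypothetical.

```python
def wlc(scores, weights):
    """Weighted linear combination of criteria standardized to [0, 1]."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[c] * w for c, w in weights.items())

# Hypothetical standardized criteria for one candidate MAR site
weights = {"slope": 0.2, "infiltration": 0.5, "aquifer_depth": 0.3}
site = {"slope": 0.8, "infiltration": 0.6, "aquifer_depth": 0.9}
print(round(wlc(site, weights), 2))  # overall suitability score: 0.73
```

In the actual tool the weights would come from the AHP pairwise-comparison step and the scores from the standardization step, evaluated per raster cell in ArcGIS.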
Software Program: Software Management Guidebook
NASA Technical Reports Server (NTRS)
1996-01-01
The purpose of this NASA Software Management Guidebook is twofold. First, this document defines the core products and activities required of NASA software projects. It defines life-cycle models and activity-related methods but acknowledges that no single life-cycle model is appropriate for all NASA software projects. It also acknowledges that the appropriate method for accomplishing a required activity depends on characteristics of the software project. Second, this guidebook provides specific guidance to software project managers and team leaders in selecting appropriate life cycles and methods to develop a tailored plan for a software engineering project.
Generating Mosaics of Astronomical Images
NASA Technical Reports Server (NTRS)
Bergou, Attila; Berriman, Bruce; Good, John; Jacob, Joseph; Katz, Daniel; Laity, Anastasia; Prince, Thomas; Williams, Roy
2005-01-01
"Montage" is the name of a service of the National Virtual Observatory (NVO), and of software being developed to implement the service via the World Wide Web. Montage generates science-grade custom mosaics of astronomical images on demand from input files that comply with the Flexible Image Transport System (FITS) standard and contain image data registered on projections that comply with the World Coordinate System (WCS) standards. "Science-grade" in this context signifies that terrestrial and instrumental features are removed from images in a way that can be described quantitatively. "Custom" refers to user-specified parameters of projection, coordinates, size, rotation, and spatial sampling. The greatest value of Montage is expected to lie in its ability to analyze images at multiple wavelengths, delivering them on a common projection, coordinate system, and spatial sampling, and thereby enabling further analysis as though they were part of a single, multi-wavelength image. Montage will be deployed as a computation-intensive service through existing astronomy portals and other Web sites. It will be integrated into the emerging NVO architecture and will be executed on the TeraGrid. The Montage software will also be portable and publicly available.
Uncertainty Quantification in Multi-Scale Coronary Simulations Using Multi-resolution Expansion
NASA Astrophysics Data System (ADS)
Tran, Justin; Schiavazzi, Daniele; Ramachandra, Abhay; Kahn, Andrew; Marsden, Alison
2016-11-01
Computational simulations of coronary flow can provide non-invasive information on hemodynamics that can aid in surgical planning and research on disease propagation. In this study, patient-specific geometries of the aorta and coronary arteries are constructed from CT imaging data and finite element flow simulations are carried out using the open-source software SimVascular. Lumped parameter networks (LPN), consisting of circuit representations of vascular hemodynamics and coronary physiology, are used as coupled boundary conditions for the solver. The outputs of these simulations depend on a set of clinically derived input parameters that define the geometry and boundary conditions; however, their values are subject to uncertainty. We quantify the effects of uncertainty from two sources: uncertainty in the material properties of the vessel wall, and uncertainty in the lumped parameter models, whose values are estimated by assimilating patient-specific clinical and literature data. We use a generalized multi-resolution chaos approach to propagate the uncertainty. The advantage of this approach lies in its ability to support inputs sampled from arbitrary distributions and its built-in adaptivity, which efficiently approximates stochastic responses characterized by steep gradients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saleh, H Al; Erickson, B; Paulson, E
Purpose: MRI-based adaptive brachytherapy (ABT) is an emerging treatment modality for patients with gynecological tumors. However, MR image intensity non-uniformities (IINU) can vary from fraction to fraction, complicating image interpretation and auto-contouring accuracy. We demonstrate here an automated MR image standardization and auto-contouring strategy for MRI-based ABT of cervix cancer. Methods: MR image standardization consisted of: 1) IINU correction using the MNI N3 algorithm, 2) noise filtering using anisotropic diffusion, and 3) signal intensity normalization using the volumetric median. This post-processing chain was implemented as a series of custom Matlab and Java extensions in MIM (v6.4.5, MIM Software) and was applied to 3D T2 SPACE images of six patients undergoing MRI-based ABT at 3T. Coefficients of variation (CV=σ/µ) were calculated for both original and standardized images and compared using Mann-Whitney tests. Patient-specific cumulative MR atlases of bladder, rectum, and sigmoid contours were constructed throughout ABT, using original and standardized MR images from all previous ABT fractions. Auto-contouring was performed in MIM in two ways: 1) best match of one atlas image to the daily MR image, 2) multi-match of all previous fraction atlas images to the daily MR image. Dice's Similarity Coefficients (DSCs) were calculated for auto-generated contours relative to reference contours for both original and standardized MR images and compared using Mann-Whitney tests. Results: Significant improvements in CV were detected following MR image standardization (p=0.0043), demonstrating an improvement in MR image uniformity. DSCs consistently increased for auto-contoured bladder, rectum, and sigmoid following MR image standardization, with the highest DSCs detected when the combination of MR image standardization and multi-match cumulative atlas-based auto-contouring was utilized.
Conclusion: MR image standardization significantly improves MR image uniformity. The combination of MR image standardization and multi-match cumulative atlas-based auto-contouring produced the highest DSCs and is a promising strategy for MRI-based ABT for cervix cancer.
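The Dice Similarity Coefficient used above to score auto-contours against reference contours has a compact set-based form; the voxel coordinates below are hypothetical stand-ins for real contour masks.

```python
def dice(a, b):
    """Dice similarity coefficient between two voxel sets: 2|A∩B| / (|A|+|B|)."""
    if not a and not b:
        return 1.0  # two empty contours agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

auto = {(1, 1), (1, 2), (2, 1), (2, 2)}       # hypothetical auto-contour voxels
reference = {(1, 2), (2, 1), (2, 2), (3, 2)}  # hypothetical reference voxels
print(dice(auto, reference))  # 2*3 / (4+4) = 0.75
```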
Providing scalable system software for high-end simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenberg, D.
1997-12-31
Detailed, full-system, complex physics simulations have been shown to be feasible on systems containing thousands of processors. In order to manage these computer systems it has been necessary to create scalable system services. In this talk Sandia's research on scalable systems will be described. The key concepts of low-overhead data movement through portals and of flexible services through multi-partition architectures will be illustrated in detail. The talk will conclude with a discussion of how these techniques can be applied outside of the standard monolithic MPP system.
Pioche, Mathieu; Rivory, Jérôme; Nishizawa, Toshihiro; Uraoka, Toshio; Touzet, Sandrine; O'Brien, Marc; Saurin, Jean-Christophe; Ponchon, Thierry; Denis, Angélique; Yahagi, Naohisa
2016-12-01
Background and study aim: Endoscopic submucosal dissection (ESD) is currently the reference method for achieving en bloc resection of large lesions; however, the technique is difficult and risky, with a long learning curve. In order to reduce morbidity, training courses using animal models are recommended. Recently, self-learning software has been developed to assist students in their training. The aim of this study was to evaluate the impact of this tool on the ESD learning curve. Methods: A prospective, randomized, comparative study enrolled 39 students who were experienced in interventional endoscopy. Each student was randomized to one of two groups and performed 30 ESDs of 30 mm standardized lesions in a bovine colon model. The software group used the self-learning software whereas the control group only observed an ESD procedure video. The primary outcome was the rate of successful ESD procedures, defined as complete en bloc resection without any perforation, performed in less than 75 minutes. Results: A total of 39 students performed 1170 ESDs. Success was achieved in 404 procedures (70.9%) in the software group and 367 (61.2%) in the control group (P = 0.03). Among the successful procedures, there were no significant differences between the software and control groups in terms of perforation rate (22 [4.0%] vs. 29 [5.1%], respectively; P = 0.27) or mean (SD) procedure duration (34.1 [13.4] vs. 32.3 [14.0] minutes, respectively; P = 0.52). For the 30th procedure, the rate of complete resection was superior in the software group (84.2%) compared with the control group (50.0%; P = 0.01). Conclusion: ESD self-learning software was effective in improving the quality of resection compared with a standard teaching method using procedure videos. This result suggests the benefit of incorporating such software into teaching programs. © Georg Thieme Verlag KG Stuttgart · New York.
Implementation of an open adoption research data management system for clinical studies.
Müller, Jan; Heiss, Kirsten Ingmar; Oberhoffer, Renate
2017-07-06
Research institutions need to manage multiple studies with individual data sets, processing rules, and different permissions. So far, there is no standard technology that provides an easy-to-use environment for creating databases and user interfaces for clinical trials or research studies. Therefore, various software solutions are in use, ranging from custom software explicitly designed for a specific study, to cost-intensive commercial Clinical Trial Management Systems (CTMS), to very basic approaches with self-designed Microsoft® databases. The technology applied to conduct these studies varies tremendously from study to study, making it difficult to evaluate data across studies (meta-analysis) and to keep a defined level of quality in database design, data processing, display, and export. Furthermore, the systems used to collect study data are often operated redundantly with systems used in patient care. As a consequence, data collection in studies is inefficient, and data quality may suffer from unsynchronized datasets, non-normalized database scenarios, and manually executed data transfers. With OpenCampus Research we implemented an open adoption software (OAS) solution on an open source basis, which provides a standard environment for state-of-the-art research database management at low cost.
Method and apparatus for measuring the mass flow rate of a fluid
Evans, Robert P.; Wilkins, S. Curtis; Goodrich, Lorenzo D.; Blotter, Jonathan D.
2002-01-01
A non-invasive method and apparatus is provided to measure the mass flow rate of a multi-phase fluid. An accelerometer is attached to a pipe carrying a multi-phase fluid. Flow-related measurements in pipes are sensitive to random velocity fluctuations whose magnitude is proportional to the mean mass flow rate. An analysis of the signal produced by the accelerometer shows a relationship between the mass flow of a fluid and the noise component of the accelerometer signal. The noise signal, defined as the standard deviation of the accelerometer signal, allows the method and apparatus of the present invention to measure the mass flow rate of a multi-phase fluid non-intrusively.
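The noise metric defined above, the standard deviation of the accelerometer signal, can be sketched as follows; the samples and the calibration constant `k` are purely illustrative (the patent's actual calibration is not given here).

```python
import statistics

def flow_noise_metric(samples):
    """Noise component of the signal: its standard deviation."""
    return statistics.stdev(samples)

# Hypothetical zero-mean accelerometer samples (detrended in practice)
samples = [0.02, -0.01, 0.03, -0.04, 0.01, -0.02, 0.04, -0.03]
sigma = flow_noise_metric(samples)
# A linear calibration, e.g. mass_flow = k * sigma, would be fit
# empirically against reference flow measurements; k is illustrative.
k = 250.0
print(round(k * sigma, 2))
```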
Kalendar, Ruslan; Tselykh, Timofey V; Khassenov, Bekbolat; Ramanculov, Erlan M
2017-01-01
This chapter introduces the FastPCR software as an integrated tool environment for PCR primer and probe design, which predicts properties of oligonucleotides based on experimental studies of PCR efficiency. The software provides comprehensive facilities for designing primers for most PCR applications and their combinations. These include standard PCR as well as multiplex, long-distance, inverse, real-time, group-specific, unique, and overlap-extension PCR for multi-fragment assembly cloning, and loop-mediated isothermal amplification (LAMP). It also contains a built-in program to design oligonucleotide sets both for long sequence assembly by ligase chain reaction and for the design of amplicons that tile across a region of interest. The software calculates the melting temperature for standard and degenerate oligonucleotides, including locked nucleic acid (LNA) and other modifications. It also provides analyses for a set of primers, with prediction of oligonucleotide properties, dimer and G/C-quadruplex detection, and linguistic complexity, as well as a primer dilution and resuspension calculator. The program includes various bioinformatics tools for analysis of sequences with GC or AT skew, CG% and GA% content, and purine-pyrimidine skew. It also analyzes linguistic sequence complexity, generates random DNA sequences, and performs restriction endonuclease analysis. The program allows the user to find or create restriction enzyme recognition sites for coding sequences and supports the clustering of sequences. It performs efficient and complete detection of various repeat types with visual display. The FastPCR software supports batch processing of sequence files, which is essential for automation. The program is available for download at http://primerdigital.com/fastpcr.html , and its online version is located at http://primerdigital.com/tools/pcr.html .
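FastPCR's own melting-temperature model is not reproduced here; as a simple illustration of the kind of calculation involved, the classic Wallace rule (2 °C per A/T base, 4 °C per G/C base, valid only for short primers) can be written as:

```python
def tm_wallace(seq):
    """Wallace rule: Tm = 2(A+T) + 4(G+C), a rough estimate for short primers."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

print(tm_wallace("ATGCGC"))  # 2*2 + 4*4 = 20 (degrees C)
```

Production tools such as FastPCR use nearest-neighbor thermodynamic models instead, which account for sequence context, salt concentration, and modified bases.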
Service-oriented Software Defined Optical Networks for Cloud Computing
NASA Astrophysics Data System (ADS)
Liu, Yuze; Li, Hui; Ji, Yuefeng
2017-10-01
With the development of big data and cloud computing technology, the traditional software-defined network is facing new challenges (e.g., ubiquitous accessibility, higher bandwidth, more flexible management, and greater security). This paper proposes a new service-oriented software defined optical network architecture, comprising a resource layer, a service abstraction layer, a control layer, and an application layer. We then dwell on the corresponding service provisioning method: a distinct service ID is used to identify the service a device can offer. Finally, we experimentally demonstrate that the proposed service provisioning method can be applied to transmit different services, based on the service ID, in the service-oriented software defined optical network.
Scientific Visualization of Radio Astronomy Data using Gesture Interaction
NASA Astrophysics Data System (ADS)
Mulumba, P.; Gain, J.; Marais, P.; Woudt, P.
2015-09-01
MeerKAT in South Africa (Meer = More Karoo Array Telescope) will require software to help visualize, interpret and interact with multidimensional data. While visualization of multi-dimensional data is a well explored topic, little work has been published on the design of intuitive interfaces to such systems. More specifically, the use of non-traditional interfaces (such as motion tracking and multi-touch) has not been widely investigated within the context of visualizing astronomy data. We hypothesize that a natural user interface would allow for easier data exploration which would in turn lead to certain kinds of visualizations (volumetric, multidimensional). To this end, we have developed a multi-platform scientific visualization system for FITS spectral data cubes using VTK (Visualization Toolkit) and a natural user interface to explore the interaction between a gesture input device and multidimensional data space. Our system supports visual transformations (translation, rotation and scaling) as well as sub-volume extraction and arbitrary slicing of 3D volumetric data. These tasks were implemented across three prototypes aimed at exploring different interaction strategies: standard (mouse/keyboard) interaction, volumetric gesture tracking (Leap Motion controller) and multi-touch interaction (multi-touch monitor). A Heuristic Evaluation revealed that the volumetric gesture tracking prototype shows great promise for interfacing with the depth component (z-axis) of 3D volumetric space across multiple transformations. However, this is limited by users needing to remember the required gestures. In comparison, the touch-based gesture navigation is typically more familiar to users as these gestures were engineered from standard multi-touch actions. Future work will address a complete usability test to evaluate and compare the different interaction modalities against the different visualization tasks.
Integrated System Health Management: Foundational Concepts, Approach, and Implementation
NASA Technical Reports Server (NTRS)
Figueroa, Fernando
2009-01-01
A sound basis to guide the community in the conception and implementation of ISHM (Integrated System Health Management) capability in operational systems was provided. The concept of an "ISHM Model of a System" and a related architecture, defined as a unique Data, Information, and Knowledge (DIaK) architecture, were described. The ISHM architecture is independent of the typical system architecture, which is based on grouping physical elements that are assembled to make up a subsystem, with subsystems combining to form systems, and so on. It was emphasized that ISHM capability needs to be implemented first at a low functional capability level (FCL), i.e., with limited ability to detect anomalies, diagnose, determine consequences, etc. As algorithms and tools that augment or improve the FCL are identified, they should be incorporated into the system. This means that the architecture, DIaK management, and software must be modular and standards-based, in order to enable systematic augmentation of the FCL (no ad hoc modifications). A set of technologies (and tools) needed to implement ISHM was described. One essential tool is a software environment to create the ISHM Model. The software environment encapsulates DIaK, along with an infrastructure to focus DIaK on determining health (detect anomalies, determine causes, determine effects, and provide integrated awareness of the system to the operator). The environment includes gateways for communicating in accordance with standards, especially the IEEE 1451.1 Standard for Smart Sensors and Actuators.
A Facility and Architecture for Autonomy Research
NASA Technical Reports Server (NTRS)
Pisanich, Greg; Clancy, Daniel (Technical Monitor)
2002-01-01
Autonomy is a key enabling factor in the advancement of remote robotic exploration. There is currently a large gap between autonomy software at the research level and software that is ready for insertion into near-term space missions. The Mission Simulation Facility (MSF) will bridge this gap by providing a simulation framework and a suite of simulation tools to support research in autonomy for remote exploration. This system will allow developers of autonomy software to test their models in a high-fidelity simulation and evaluate their system's performance against a set of integrated, standardized simulations. The Mission Simulation ToolKit (MST) uses a distributed architecture with a communication layer built on top of the standardized High Level Architecture (HLA). This architecture enables the use of existing high-fidelity models, allows mixing of simulation components from various computing platforms, and enforces the use of a standardized high-level interface among components. The components needed to achieve a realistic simulation can be grouped into four categories: environment generation (terrain, environmental features), robotic platform behavior (robot dynamics), instrument models (camera, spectrometer, etc.), and data analysis. The MST provides basic components in these areas but allows users to easily plug in any refined model by means of a communication protocol. Finally, a description file defines the robot and environment parameters for easy configuration and ensures that all the simulation models share the same information.
NASA Astrophysics Data System (ADS)
Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav
2017-10-01
In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers: the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and the parallelization approaches used in them. Our approach includes the analysis of each module's functionality, identification of source-code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using airborne laser scanning data representing the land surface in the test area and high-resolution digital terrain model grids derived from them. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speed on a standard multi-core computer while maintaining the accuracy of results in comparison to the output from the original modules. The presented approach demonstrates the simplicity and efficiency of parallelizing open-source GRASS GIS modules using OpenMP, leading to increased performance of this geospatial software on standard multi-core computers.
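OpenMP itself targets C, C++ and Fortran, but the row-wise domain decomposition described above can be sketched language-neutrally. The following is a hypothetical Python illustration (toy inverse-distance weighting stands in for the modules' actual computations; Python threads demonstrate the decomposition and result-equivalence, not real CPU speed-up):

```python
from concurrent.futures import ThreadPoolExecutor

def interpolate_row(row_index, points, width):
    # Toy inverse-distance-weighted value for each cell in one grid row
    # (a stand-in for the per-cell work of a module like v.surf.rst).
    row = []
    for col in range(width):
        num = den = 0.0
        for (px, py, pz) in points:
            d2 = (px - col) ** 2 + (py - row_index) ** 2 + 1e-9
            num += pz / d2
            den += 1.0 / d2
        row.append(num / den)
    return row

def interpolate_grid(points, height, width, n_threads=4):
    # Each row is an independent subtask, mirroring an OpenMP
    # "#pragma omp parallel for" work-sharing loop over grid rows.
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(lambda r: interpolate_row(r, points, width),
                             range(height)))

points = [(0.0, 0.0, 1.0), (3.0, 3.0, 5.0)]
serial = [interpolate_row(r, points, 4) for r in range(4)]
parallel = interpolate_grid(points, 4, 4)
assert parallel == serial  # same result regardless of thread count
```

The key property the paper verifies for the real modules is the same one asserted here: the parallel decomposition must reproduce the serial output exactly.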
A generic multi-flex-body dynamics, controls simulation tool for space station
NASA Technical Reports Server (NTRS)
London, Ken W.; Lee, John F.; Singh, Ramen P.; Schubele, Buddy
1991-01-01
An order-n multi-flex-body Space Station simulation tool is introduced. The flexible multibody modeling is generic enough to cover all phases of Space Station assembly, from build-up through the Assembly Complete configuration and beyond. Multibody subsystems such as the Mobile Servicing System (MSS) undergoing a prescribed translation and rotation are also supported. The software includes aerodynamic, gravity-gradient, and magnetic-field models. User-defined controllers can be discrete or continuous. Extensive preprocessing of body-by-body NASTRAN flex data is built in. A significant aspect is the integrated controls-design capability, which includes model reduction and analytic linearization.
Automatic Generation of Supervisory Control System Software Using Graph Composition
NASA Astrophysics Data System (ADS)
Nakata, Hideo; Sano, Tatsuro; Kojima, Taizo; Seo, Kazuo; Uchida, Tomoyuki; Nakamura, Yasuaki
This paper describes the automatic generation of system descriptions for SCADA (Supervisory Control And Data Acquisition) systems. The proposed method produces various types of data and programs for SCADA systems from equipment definitions using conversion rules. First, the method constructs directed graphs, which represent connections between the equipment, from the equipment definitions. System descriptions are then generated by applying the conversion rules while analyzing these directed graphs and finding groups of equipment that involve similar operations. The method can organize the conversion rules into multiple levels through graph composition, which reduces the number of rules and lets the developer define and manage them efficiently.
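The described flow, equipment definitions to directed graph to rule groups, can be sketched as a hypothetical miniature (the equipment names, types and grouping criterion below are illustrative, not from the paper):

```python
# Hypothetical equipment definitions: name -> (type, downstream equipment)
defs = {
    "BRK1": ("breaker", ["BUS1"]),
    "BRK2": ("breaker", ["BUS2"]),
    "BUS1": ("bus", ["XFMR1"]),
    "BUS2": ("bus", ["XFMR2"]),
    "XFMR1": ("transformer", []),
    "XFMR2": ("transformer", []),
}

# Step 1: build the directed graph of connections.
edges = [(src, dst) for src, (_, outs) in defs.items() for dst in outs]

# Step 2: a signature captures an item's own type plus the sorted types
# of its successors; equipment with identical signatures "involve similar
# operations" and can share a single conversion rule.
def signature(name):
    etype, outs = defs[name]
    return (etype, tuple(sorted(defs[o][0] for o in outs)))

groups = {}
for name in defs:
    groups.setdefault(signature(name), []).append(name)

# Two breakers feeding buses collapse into one rule group, and so on.
rule_groups = sorted(sorted(g) for g in groups.values())
```

One rule per group rather than per device is what reduces the rule count as systems grow.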
COMBINE archive and OMEX format: one file to share all information to reproduce a modeling project.
Bergmann, Frank T; Adams, Richard; Moodie, Stuart; Cooper, Jonathan; Glont, Mihai; Golebiewski, Martin; Hucka, Michael; Laibe, Camille; Miller, Andrew K; Nickerson, David P; Olivier, Brett G; Rodriguez, Nicolas; Sauro, Herbert M; Scharm, Martin; Soiland-Reyes, Stian; Waltemath, Dagmar; Yvon, Florent; Le Novère, Nicolas
2014-12-14
With the ever increasing use of computational models in the biosciences, the need to share models and reproduce the results of published studies efficiently and easily is becoming more important. To this end, various standards have been proposed that can be used to describe models, simulations, data or other essential information in a consistent fashion. These constitute various separate components required to reproduce a given published scientific result. We describe the Open Modeling EXchange format (OMEX). Together with the use of other standard formats from the Computational Modeling in Biology Network (COMBINE), OMEX is the basis of the COMBINE Archive, a single file that supports the exchange of all the information necessary for a modeling and simulation experiment in biology. An OMEX file is a ZIP container that includes a manifest file, listing the content of the archive, an optional metadata file adding information about the archive and its content, and the files describing the model. The content of a COMBINE Archive consists of files encoded in COMBINE standards whenever possible, but may include additional files defined by an Internet Media Type. Several tools that support the COMBINE Archive are available, either as independent libraries or embedded in modeling software. The COMBINE Archive facilitates the reproduction of modeling and simulation experiments in biology by embedding all the relevant information in one file. Having all the information stored and exchanged at once also helps in building activity logs and audit trails. We anticipate that the COMBINE Archive will become a significant help for modellers, as the domain moves to larger, more complex experiments such as multi-scale models of organs, digital organisms, and bioengineering.
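The container layout described above is simple enough to sketch with Python's standard zipfile module. This is an illustrative construction, not an official OMEX library; the manifest follows the omexManifest structure and format identifiers of the OMEX specification, which should be checked against the spec before real use:

```python
import io
import zipfile

# Minimal manifest listing the archive itself and one SBML model file.
MANIFEST = """<?xml version="1.0" encoding="UTF-8"?>
<omexManifest xmlns="http://identifiers.org/combine.specifications/omex-manifest">
  <content location="." format="http://identifiers.org/combine.specifications/omex"/>
  <content location="./model.xml"
           format="http://identifiers.org/combine.specifications/sbml"/>
</omexManifest>
"""

model = "<sbml><!-- model content would go here --></sbml>"

# An OMEX file is an ordinary ZIP container with a manifest at its root.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("manifest.xml", MANIFEST)
    z.writestr("model.xml", model)

# A consuming tool reads the manifest to discover the archive's contents.
with zipfile.ZipFile(buf) as z:
    names = set(z.namelist())
    manifest_text = z.read("manifest.xml").decode("utf-8")
```

Because the container is plain ZIP, any language with a ZIP library can produce or consume a COMBINE Archive, which is central to the tool interoperability the abstract describes.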
Considerations for the Next Revision of STRS
NASA Technical Reports Server (NTRS)
Johnson, Sandra K.; Handler, Louis M.; Briones, Janette C.
2016-01-01
Development of NASA's Software Defined Radio architecture, the Space Telecommunication Radio System (STRS), was initiated in 2004 with a goal of reducing the cost, risk and schedule of implementing Software Defined Radios (SDRs) for NASA space missions. Since STRS was first flown in 2012 on three Software Defined Radios on the Space Communication and Navigation (SCaN) Testbed, only minor changes have been made to the architecture. Multiple entities have since implemented the architecture and provided significant feedback for consideration in the next revision of the standard. The focus for the first set of updates to the architecture is items that enhance application portability. Items that require modifications to existing applications before migrating to the updated architecture will only be considered if there are compelling reasons to make the change. The significant suggestions that were further evaluated include expanding and clarifying the timing Application Programming Interfaces (APIs), improving handle name and identification (ID) definitions and use, and multiple items related to the implementation of STRS Devices. In addition to ideas suggested while implementing STRS, SDR technology has evolved significantly, and its impact on the architecture needs to be considered. Examples include incorporating cognitive concepts: learning from past decisions and making new decisions that the radio can act upon. SDRs are also being developed that do not contain a General Purpose Module, which is currently required for a platform to be STRS compliant. The purpose of this paper is to discuss the comments received, provide a summary of the evaluation considerations, and examine planned dispositions.
Reyon, Deepak; Khayter, Cyd; Regan, Maureen R; Joung, J Keith; Sander, Jeffry D
2012-10-01
Engineered transcription activator-like effector nucleases (TALENs) are broadly useful tools for performing targeted genome editing in a wide variety of organisms and cell types including plants, zebrafish, C. elegans, rat, human somatic cells, and human pluripotent stem cells. Here we describe detailed protocols for the serial, hierarchical assembly of TALENs that require neither PCR nor specialized multi-fragment ligations and that can be implemented by any laboratory. These restriction enzyme and ligation (REAL)-based protocols can be practiced using plasmid libraries and user-friendly, Web-based software that both identifies target sites in sequences of interest and generates printable graphical guides that facilitate assembly of TALENs. With the described platform of reagents, protocols, and software, researchers can easily engineer multiple TALENs within 2 weeks using standard cloning techniques. © 2012 John Wiley & Sons, Inc.
Common software and interface for different helmet-mounted display (HMD) aircraft symbology sets
NASA Astrophysics Data System (ADS)
Mulholland, Fred F.
2000-06-01
Different aircraft in different services and countries have their own sets of symbology that they want displayed on their HMDs. Even as flight symbology is standardized, there will still be some differences for types of aircraft, different weapons, different sensors, and different countries. As an HMD supplier, we want to provide a system that can be used across all these applications with no changes in the system, including no changes in the software. This HMD system must also provide the flexibility to accommodate new symbology as it is developed over the years, again with no change in the HMD software. VSI has developed its HMD software to accommodate the F-15, F-16, F-18, and F-22 symbology sets for the Joint Helmet Mounted Cueing System. It also has the flexibility to accommodate the aircraft types and services of the Joint Strike Fighter: the Conventional Takeoff and Landing variant for the USAF, the Carrier-based variant for the USN, and the Short Takeoff and Vertical Landing variant for the USMC and the U.K. Royal Navy and Air Force. The key to this flexibility is the interface definition. The interface parameters are established at power-on with the download of an interface definition data set. This data set is used to interpret symbology commands from the aircraft OFP during operation and to provide graphics commands to the HMD software. This presentation will define the graphics commands, provide an example of how the interface definition data set is defined, and then show how symbology commands produce a display.
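The mechanism described above is table-driven: a definition data set downloaded at power-on tells the HMD how to translate aircraft-specific symbology commands into generic graphics commands. A hypothetical sketch (the opcodes, primitive names and parameters are invented for illustration, not VSI's actual interface):

```python
# Hypothetical interface-definition data set, downloaded at power-on:
# opcode -> (generic graphics primitive, parameter names)
f16_defs = {
    0x01: ("draw_reticle", ("x", "y", "radius")),
    0x02: ("draw_text",    ("x", "y", "string")),
}

def interpret(defs, command_stream):
    """Translate aircraft OFP symbology commands into generic graphics
    commands using the downloaded definition data set."""
    graphics = []
    for opcode, args in command_stream:
        primitive, param_names = defs[opcode]
        graphics.append((primitive, dict(zip(param_names, args))))
    return graphics

cmds = [(0x01, (512, 384, 40)), (0x02, (10, 20, "ALT 5000"))]
out = interpret(f16_defs, cmds)
# The same interpreter serves a different aircraft simply by downloading
# a different definition data set -- no change to the HMD software.
```

This is the essence of the claimed flexibility: new symbology means a new data set, while the interpreter and the HMD software remain untouched.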
NASA Technical Reports Server (NTRS)
1989-01-01
At their March 1988 meeting, members of the National Aeronautics and Space Administration (NASA) Information Resources Management (IRM) Council expressed concern that NASA may not have the infrastructure necessary to support the use of Ada for major NASA software projects. Members also observed that the agency has no coordinated strategy for applying its experiences with Ada to subsequent projects (Hinners, 27 June 1988). To deal with these problems, the IRM Council chair appointed an intercenter Ada and Software Management Assessment Working Group (ASMAWG). They prepared a report (McGarry et al., March 1989) entitled, 'Ada and Software Management in NASA: Findings and Recommendations'. That report presented a series of recommendations intended to enable NASA to develop better software at lower cost through the use of Ada and other state-of-the-art software engineering technologies. The purpose here is to describe the steps (called objectives) by which this goal may be achieved, to identify the NASA officials or organizations responsible for carrying out the steps, and to define a schedule for doing so. This document sets forth four goals: adopt agency-wide software standards and policies; use Ada as the programming language for all mission software; establish an infrastructure to support software engineering, including the use of Ada, and to leverage the agency's software experience; and build the agency's knowledge base in Ada and software engineering. A schedule for achieving the objectives and goals is given.
[Requirements for the successful installation of a data management system].
Benson, M; Junger, A; Quinzio, L; Hempelmann, G
2002-08-01
Due to increasing requirements on medical documentation, especially with reference to German social legislation mandating quality management and the introduction of a new billing system (DRGs), an increasing number of departments are considering implementing a patient data management system (PDMS). The installation should be professionally planned as a project in order to ensure a successful and complete implementation. The following aspects are essential: composition of the project group, definition of goals, financing, networking, space considerations, hardware, software, configuration, education and support. Project and finance planning must be prepared before the project begins, and the project's progress must be constantly evaluated. In selecting the software, certain characteristics should be considered: use of standards, configurability, intercommunicability and modularity. Our experience has taught us that vaguely defined goals, insufficient project planning and the existing management culture are responsible for the failure of PDMS installations; the software used tends to play a less important role.
OpenMS: a flexible open-source software platform for mass spectrometry data analysis.
Röst, Hannes L; Sachsenberg, Timo; Aiche, Stephan; Bielow, Chris; Weisser, Hendrik; Aicheler, Fabian; Andreotti, Sandro; Ehrlich, Hans-Christian; Gutenbrunner, Petra; Kenar, Erhan; Liang, Xiao; Nahnsen, Sven; Nilse, Lars; Pfeuffer, Julianus; Rosenberger, George; Rurik, Marc; Schmitt, Uwe; Veit, Johannes; Walzer, Mathias; Wojnar, David; Wolski, Witold E; Schilling, Oliver; Choudhary, Jyoti S; Malmström, Lars; Aebersold, Ruedi; Reinert, Knut; Kohlbacher, Oliver
2016-08-30
High-resolution mass spectrometry (MS) has become an important tool in the life sciences, contributing to the diagnosis and understanding of human diseases, elucidating biomolecular structural information and characterizing cellular signaling networks. However, the rapid growth in the volume and complexity of MS data makes transparent, accurate and reproducible analysis difficult. We present OpenMS 2.0 (http://www.openms.de), a robust, open-source, cross-platform software specifically designed for the flexible and reproducible analysis of high-throughput MS data. The extensible OpenMS software implements common mass spectrometric data processing tasks through a well-defined application programming interface in C++ and Python and through standardized open data formats. OpenMS additionally provides a set of 185 tools and ready-made workflows for common mass spectrometric data processing tasks, which enable users to perform complex quantitative mass spectrometric analyses with ease.
BioModels.net Web Services, a free and integrated toolkit for computational modelling software.
Li, Chen; Courtot, Mélanie; Le Novère, Nicolas; Laibe, Camille
2010-05-01
Exchanging and sharing scientific results are essential for researchers in the field of computational modelling. BioModels.net defines agreed-upon standards for model curation. A fundamental one, MIRIAM (Minimum Information Requested in the Annotation of Models), standardises the annotation and curation process of quantitative models in biology. To support this standard, MIRIAM Resources maintains a set of standard data types for annotating models, and provides services for manipulating these annotations. Furthermore, BioModels.net creates controlled vocabularies, such as SBO (Systems Biology Ontology), which strictly indexes, defines and links terms used in Systems Biology. Finally, BioModels Database provides a free, centralised, publicly accessible database for storing, searching and retrieving curated and annotated computational models. Each resource provides a web interface to submit, search, retrieve and display its data. In addition, the BioModels.net team provides a set of Web Services which allow the community to programmatically access the resources. A user is then able to perform remote queries, such as retrieving a model and resolving all its MIRIAM annotations, as well as getting the details of the associated SBO terms. These web services use established standards: communications rely on SOAP (Simple Object Access Protocol) messages and the available queries are described in a WSDL (Web Services Description Language) file. Several libraries are provided in order to simplify the development of client software. BioModels.net Web Services take researchers one step further toward simulating and understanding a biological system in its entirety, by allowing them to retrieve biological models in their own tools, combine queries in workflows and efficiently analyse models.
OxMaR: open source free software for online minimization and randomization for clinical trials.
O'Callaghan, Christopher A
2014-01-01
Minimization is a valuable method for allocating participants between the control and experimental arms of clinical studies. The use of minimization reduces differences that might arise by chance between the study arms in the distribution of patient characteristics such as gender, ethnicity and age. However, unlike randomization, minimization requires real time assessment of each new participant with respect to the preceding distribution of relevant participant characteristics within the different arms of the study. For multi-site studies, this necessitates centralized computational analysis that is shared between all study locations. Unfortunately, there is no suitable freely available open source or free software that can be used for this purpose. OxMaR was developed to enable researchers in any location to use minimization for patient allocation and to access the minimization algorithm using any device that can connect to the internet such as a desktop computer, tablet or mobile phone. The software is complete in itself and requires no special packages or libraries to be installed. It is simple to set up and run over the internet using online facilities which are very low cost or even free to the user. Importantly, it provides real time information on allocation to the study lead or administrator and generates real time distributed backups with each allocation. OxMaR can readily be modified and customised and can also be used for standard randomization. It has been extensively tested and has been used successfully in a low budget multi-centre study. Hitherto, the logistical difficulties involved in minimization have precluded its use in many small studies and this software should allow more widespread use of minimization which should lead to studies with better matched control and experimental arms. OxMaR should be particularly valuable in low resource settings.
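The minimization procedure described above can be sketched in a few lines. This is a deterministic Pocock-Simon-style illustration, not OxMaR's actual algorithm (real systems typically add a weighted random element to the arm choice):

```python
def minimize_allocate(history, factors, arms=("control", "experimental")):
    """Pocock-Simon-style minimization (deterministic sketch).
    history: list of (arm, factors) for participants already allocated.
    factors: the new participant's levels, e.g. {"sex": "F"}."""
    imbalance = {}
    for arm in arms:
        score = 0
        for factor, level in factors.items():
            # How many participants in this arm already share this level?
            score += sum(1 for a, f in history
                         if a == arm and f.get(factor) == level)
        imbalance[arm] = score
    # Assign to the arm where this participant adds the least imbalance
    # (ties broken deterministically here, by arm order).
    return min(arms, key=lambda a: imbalance[a])

history = []
for participant in [{"sex": "F"}, {"sex": "F"}, {"sex": "M"}, {"sex": "F"}]:
    arm = minimize_allocate(history, participant)
    history.append((arm, participant))
```

Because each allocation depends on the full preceding history, a multi-site trial needs exactly the kind of centralized, real-time service the abstract describes.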
Bucur, Anca; van Leeuwen, Jasper; Chen, Njin-Zu; Claerhout, Brecht; de Schepper, Kristof; Perez-Rey, David; Paraiso-Medina, Sergio; Alonso-Calvo, Raul; Mehta, Keyur; Krykwinski, Cyril
2016-01-01
This paper describes a new Cohort Selection application implemented to support streamlining the definition phase of multi-centric clinical research in oncology. Our approach aims at both ease of use and precision in defining the selection filters expressing the characteristics of the desired population. The application leverages our standards-based Semantic Interoperability Solution and a Groovy DSL to provide high expressiveness in the definition of filters and flexibility in their composition into complex selection graphs including splits and merges. Widely-adopted ontologies such as SNOMED-CT are used to represent the semantics of the data and to express concepts in the application filters, facilitating data sharing and collaboration on joint research questions in large communities of clinical users. The application supports patient data exploration and efficient collaboration in multi-site, heterogeneous and distributed data environments. PMID:27570644
Platform-Independence and Scheduling In a Multi-Threaded Real-Time Simulation
NASA Technical Reports Server (NTRS)
Sugden, Paul P.; Rau, Melissa A.; Kenney, P. Sean
2001-01-01
Aviation research often relies on real-time, pilot-in-the-loop flight simulation as a means to develop new flight software, flight hardware, or pilot procedures. Often these simulations become so complex that a single processor is incapable of performing the necessary computations within a fixed time-step. Threads are an elegant means to distribute the computational workload when running on a symmetric multi-processor machine. However, programming with threads often requires operating-system-specific calls that reduce code portability and maintainability. And while a multi-threaded simulation allows a significant increase in simulation complexity, it also increases the workload of a simulation operator by requiring the operator to determine which models run on which thread. To address these concerns, an object-oriented design was implemented in the NASA Langley Standard Real-Time Simulation in C++ (LaSRS++) application framework. The design provides a portable and maintainable means to use threads and also provides a mechanism to automatically load-balance the simulation models.
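One common automatic load-balancing strategy, sketched here as a hypothetical illustration (not necessarily the mechanism LaSRS++ uses), is longest-processing-time-first: sort models by estimated per-frame cost and repeatedly assign each to the least-loaded thread:

```python
import heapq

def balance(models, n_threads):
    """Greedy LPT assignment of simulation models to threads.
    models: dict of model name -> estimated cost per frame."""
    heap = [(0.0, t, []) for t in range(n_threads)]  # (load, thread id, models)
    heapq.heapify(heap)
    for name, cost in sorted(models.items(), key=lambda kv: -kv[1]):
        load, tid, assigned = heapq.heappop(heap)  # least-loaded thread
        assigned.append(name)
        heapq.heappush(heap, (load + cost, tid, assigned))
    return sorted(heap, key=lambda e: e[1])  # order by thread id

# Illustrative model costs (arbitrary units), split across two threads.
models = {"aero": 5, "engines": 4, "gear": 3, "ecs": 3, "avionics": 2, "atmos": 1}
threads = balance(models, 2)
loads = [sum(models[m] for m in assigned) for _, _, assigned in threads]
```

Automating this removes the operator's burden of hand-assigning models to threads, which is exactly the concern the abstract raises.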
A Bayesian account of quantum histories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marlow, Thomas
2006-05-15
We investigate whether quantum history theories can be consistent with Bayesian reasoning and whether such an analysis helps clarify the interpretation of such theories. First, we summarise and extend recent work categorising two different approaches to formalising multi-time measurements in quantum theory. The standard approach consists of describing an ordered series of measurements in terms of history propositions with non-additive 'probabilities.' The non-standard approach consists of defining multi-time measurements as sets of exclusive and exhaustive history propositions and recovering the single-time exclusivity of results when discussing single-time history propositions. We analyse whether such history propositions can be consistent with Bayes' rule. We show that a certain class of histories is given a natural Bayesian interpretation, namely, the linearly positive histories originally introduced by Goldstein and Page. Thus, we argue that this gives a certain amount of interpretational clarity to the non-standard approach. We also attempt a justification of our analysis using Cox's axioms of probability theory.
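For reference, the linear positivity condition of Goldstein and Page mentioned above can be stated compactly, in the notation commonly used in the histories literature (with $C_\alpha$ the class operator of history $\alpha$, built from time-ordered projectors, and $\rho$ the initial state):

```latex
C_\alpha = P^{n}_{\alpha_n}(t_n) \cdots P^{1}_{\alpha_1}(t_1),
\qquad
p(\alpha) = \mathrm{Re}\,\mathrm{Tr}\!\left[ C_\alpha \rho \right].
```

A set of histories is linearly positive when $p(\alpha) \ge 0$ for every history $\alpha$ in the set; since the class operators of an exhaustive, exclusive set sum to the identity, these $p(\alpha)$ sum to one and can be read as probabilities, which is what underlies the Bayesian interpretation discussed in the abstract.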
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Kacpura, Thomas J.; Smith, Carl R.; Liebetreu, John; Hill, Gary; Mortensen, Dale J.; Andro, Monty; Scardelletti, Maximilian C.; Farrington, Allen
2008-01-01
This report defines a hardware architecture approach for software-defined radios to enable commonality among NASA space missions. The architecture accommodates a range of reconfigurable processing technologies including general-purpose processors, digital signal processors, field programmable gate arrays, and application-specific integrated circuits (ASICs) in addition to flexible and tunable radiofrequency front ends to satisfy varying mission requirements. The hardware architecture consists of modules, radio functions, and interfaces. The modules are a logical division of common radio functions that compose a typical communication radio. This report describes the architecture details, the module definitions, the typical functions on each module, and the module interfaces. Tradeoffs between component-based, custom architecture and a functional-based, open architecture are described. The architecture does not specify a physical implementation internally on each module, nor does the architecture mandate the standards or ratings of the hardware used to construct the radios.
Real-time SHVC software decoding with multi-threaded parallel processing
NASA Astrophysics Data System (ADS)
Gudumasu, Srinivas; He, Yuwen; Ye, Yan; He, Yong; Ryu, Eun-Seok; Dong, Jie; Xiu, Xiaoyu
2014-09-01
This paper proposes a parallel decoding framework for scalable HEVC (SHVC). Various optimization technologies are implemented on the basis of the SHVC reference software SHM-2.0 to achieve real-time decoding speed for the two-layer spatial scalability configuration. SHVC decoder complexity is analyzed with profiling information. The decoding process at each layer and the up-sampling process are designed in parallel and scheduled by a high-level application task manager. Within each layer, multi-threaded decoding is applied to accelerate the layer decoding speed. Entropy decoding, reconstruction, and in-loop processing are pipelined across multiple threads based on groups of coding tree units (CTUs). A group of CTUs is treated as a processing unit in each pipeline stage to achieve a better trade-off between parallelism and synchronization. Motion compensation, inverse quantization, and inverse transform modules are further optimized with SSE4 SIMD instructions. Simulations on a desktop with an Intel Core i7-2600 processor running at 3.4 GHz show that the parallel SHVC software decoder is able to decode 1080p spatial 2x at up to 60 fps (frames per second) and 1080p spatial 1.5x at up to 50 fps for bitstreams generated under the SHVC common test conditions of the JCT-VC standardization group. The decoding performance at various bitrates with different optimization technologies and different numbers of threads is compared in terms of decoding speed and resource usage, including processor and memory usage.
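The stage pipelining over CTU groups can be sketched generically. This is a hypothetical illustration (not SHM code): each stage runs in its own thread and forwards CTU groups downstream through a queue, so stage k+1 works on group i while stage k already processes group i+1:

```python
import queue
import threading

SENTINEL = None  # end-of-stream marker propagated down the pipeline

def stage(fn, inbox, outbox):
    # Each pipeline stage consumes CTU groups from its input queue,
    # processes them, and forwards them in order.
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)
            return
        outbox.put(fn(item))

# Three toy stages standing in for entropy decoding, reconstruction,
# and in-loop filtering; a "CTU group" is just an int here.
fns = [lambda g: g + 100, lambda g: g * 2, lambda g: g - 1]
queues = [queue.Queue() for _ in range(len(fns) + 1)]
workers = [threading.Thread(target=stage, args=(f, qi, qo))
           for f, qi, qo in zip(fns, queues, queues[1:])]
for w in workers:
    w.start()
for group in range(4):
    queues[0].put(group)
queues[0].put(SENTINEL)

decoded = []
while (item := queues[-1].get()) is not SENTINEL:
    decoded.append(item)
for w in workers:
    w.join()
```

Treating a whole group of CTUs (rather than a single CTU) as the queue item is the parallelism/synchronization trade-off the abstract mentions: larger groups mean fewer queue hand-offs but coarser overlap.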
Planning of electroporation-based treatments using Web-based treatment-planning software.
Pavliha, Denis; Kos, Bor; Marčan, Marija; Zupanič, Anže; Serša, Gregor; Miklavčič, Damijan
2013-11-01
Electroporation-based treatment combining high-voltage electric pulses and poorly permeant cytotoxic drugs, i.e., electrochemotherapy (ECT), is currently used for treating superficial tumor nodules by following standard operating procedures. Besides ECT, another electroporation-based treatment, nonthermal irreversible electroporation (N-TIRE), is also efficient at ablating deep-seated tumors. To perform ECT or N-TIRE of deep-seated tumors, following standard operating procedures is not sufficient, and patient-specific treatment planning is required for successful treatment. Treatment planning is required because of the use of individual long-needle electrodes and the diverse shape, size and location of deep-seated tumors. Many institutions that already perform ECT of superficial metastases could benefit from treatment-planning software that would enable the preparation of patient-specific treatment plans. To this end, we have developed Web-based treatment-planning software for planning electroporation-based treatments that does not require prior engineering knowledge from the user (e.g., the clinician). The software includes algorithms for automatic tissue segmentation and, after segmentation, generation of a 3D model of the tissue. The procedure allows the user to define how the electrodes will be inserted. Finally, the electric field distribution is computed, the position of the electrodes and the voltage to be applied are optimized using the 3D model, and a downloadable treatment plan is made available to the user.
Darwin Core: An Evolving Community-Developed Biodiversity Data Standard
Wieczorek, John; Bloom, David; Guralnick, Robert; Blum, Stan; Döring, Markus; Giovanni, Renato; Robertson, Tim; Vieglais, David
2012-01-01
Biodiversity data derive from myriad sources stored in various formats on many distinct hardware and software platforms. An essential step towards understanding global patterns of biodiversity is to provide a standardized view of these heterogeneous data sources to improve interoperability. Fundamental to this advance are definitions of common terms. This paper describes the evolution and development of Darwin Core, a data standard for publishing and integrating biodiversity information. We focus on the categories of terms that define the standard, differences between simple and relational Darwin Core, how the standard has been implemented, and the community processes that are essential for maintenance and growth of the standard. We present case-study extensions of the Darwin Core into new research communities, including metagenomics and genetic resources. We close by showing how Darwin Core records are integrated to create new knowledge products documenting species distributions and changes due to environmental perturbations. PMID:22238640
NASA Astrophysics Data System (ADS)
Perraud, Jean-Michel; Bennett, James C.; Bridgart, Robert; Robertson, David E.
2016-04-01
Research undertaken through the Water Information Research and Development Alliance (WIRADA) has laid the foundations for continuous deterministic and ensemble short-term forecasting services. One output of this research is the software Short-term Water Information Forecasting Tools version 2 (SWIFT2). SWIFT2 is developed for use in research on short-term streamflow forecasting techniques as well as operational forecasting services at the Australian Bureau of Meteorology. The variety of uses in research and operations requires a modular software system whose components can be arranged in applications that are fit for each particular purpose, without unnecessary software duplication. SWIFT2 modelling structures consist of sub-areas of hydrologic models, nodes and links with in-stream routing, and reservoirs. While this modelling structure is customary, SWIFT2 is built from the ground up for computationally and data-intensive applications such as the ensemble forecasts necessary for estimating the uncertainty in forecasts. Support for parallel computation on multiple processors or on a compute cluster is a primary use case. A convention is defined to store large multi-dimensional forecasting data and its metadata using the netCDF library. SWIFT2 is written in modern C++ with state-of-the-art software engineering techniques and practices. A salient technical feature is a well-defined application programming interface (API) to facilitate access from different applications and technologies. SWIFT2 is already seamlessly accessible on Windows and Linux via packages in R, Python, Matlab and .NET languages such as C# and F#. Command-line or graphical front-end applications are also feasible. This poster gives an overview of the technology stack and illustrates the resulting features of SWIFT2 for users. Research and operational uses share the same common core C++ modelling shell for consistency, augmented by different software modules suitable for each context.
The accessibility via interactive modelling languages is particularly amenable to using SWIFT2 in exploratory research, with a dynamic and versatile experimental modelling workflow. This does not come at the expense of the stability and reliability required for use in operations, where only mature and stable components are used.
NASA Astrophysics Data System (ADS)
Aditya, Pusala; Kumar, Hari; Kumar, Sunil; Rajashekar, Muralikrishna, M.; Muthukumar, V. Sai; Kumar, B. Siva; Sai, S. Siva Sankara; Rao, G. Nageshwar
2013-06-01
We report here the optical and nonlinear optical properties of six novel bis-chalcones, D-π-A-π-D derivatives of diarylideneacetone (DBA). These derivatives were synthesized by the Claisen-Schmidt condensation reaction and were well characterized using FTIR, 1H NMR, 13C NMR, UV-Visible absorption and mass spectroscopic techniques. The optical bandgap of each DBA derivative was determined both experimentally (UV-Visible spectra and Tauc plot) and theoretically by ab initio DFT calculations using the SIESTA software package; the two were found to be in close agreement. Second-harmonic generation from these organic chromophores was studied by the standard Kurtz-Perry powder SHG method at 1064 nm, and the compounds were found to have superior SHG conversion efficiency compared to urea (the standard sample). Further, we investigated the multi-photon absorption properties using the conventional open-aperture z-scan technique. These DBA derivatives exhibited strong two-photon absorption on the order of 10^-11 m/W. Hence, they are potential candidates for various photonic applications such as optical power limiting, photonic switching and frequency conversion.
Improved Sparse Multi-Class SVM and Its Application for Gene Selection in Cancer Classification
Huang, Lingkang; Zhang, Hao Helen; Zeng, Zhao-Bang; Bushel, Pierre R.
2013-01-01
Background Microarray techniques provide promising tools for cancer diagnosis using gene expression profiles. However, molecular diagnosis based on high-throughput platforms presents great challenges due to the overwhelming number of variables versus the small sample size and the complex nature of multi-type tumors. Support vector machines (SVMs) have shown superior performance in cancer classification due to their ability to handle high-dimensional, low-sample-size data. The multi-class SVM algorithm of Crammer and Singer provides a natural framework for multi-class learning. Despite its effective performance, the procedure utilizes all variables without selection. In this paper, we propose to improve the procedure by imposing shrinkage penalties in learning to enforce solution sparsity. Results The original multi-class SVM of Crammer and Singer is effective for multi-class classification but does not conduct variable selection. We improved the method by introducing soft-thresholding-type penalties to incorporate variable selection into multi-class classification for high-dimensional data. The new methods were applied to simulated data and two cancer gene expression data sets. The results demonstrate that the new methods can select a small number of genes for building accurate multi-class classification rules. Furthermore, the important genes selected by the methods overlap significantly, suggesting general agreement among different variable selection schemes. Conclusions High accuracy and sparsity make the new methods attractive for cancer diagnostics with gene expression data and defining targets of therapeutic intervention. Availability: The source MATLAB code is available from http://math.arizona.edu/~hzhang/software.html. PMID:23966761
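The sparsity mechanism can be illustrated with the soft-thresholding operator that such penalties induce; this is a generic sketch of the operator itself, not the authors' MATLAB implementation:

```python
def soft_threshold(w, lam):
    # S(w_j, lam) = sign(w_j) * max(|w_j| - lam, 0): coefficients whose
    # magnitude falls below the penalty level lam are set exactly to
    # zero, so the genes (variables) they weight drop out of the
    # classifier; larger coefficients are shrunk toward zero by lam.
    out = []
    for x in w:
        mag = max(abs(x) - lam, 0.0)
        out.append(mag if x >= 0 else -mag)
    return out
```

Applied to each class's coefficient vector during training, this is what turns the dense Crammer-Singer solution into one supported on a small gene subset.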
Modeling of a 3DTV service in the software-defined networking architecture
NASA Astrophysics Data System (ADS)
Wilczewski, Grzegorz
2014-11-01
In this article, a newly developed concept for modeling a multimedia service offering stereoscopic motion imagery is presented. The proposed model is based on the Software-Defined Networking (SDN) architecture. A definition of a 3D television service spanning the SDN concept is identified, exposing the basic characteristics of a 3DTV service in a modern networking organization layout. Furthermore, exemplary functionalities of the proposed 3DTV model are depicted. It is indicated that modeling a 3DTV service in the Software-Defined Networking architecture leads to a multiplicity of improvements, especially in the flexibility of a service supporting heterogeneous end-user devices.
Planning the Unplanned Experiment: Assessing the Efficacy of Standards for Safety Critical Software
NASA Technical Reports Server (NTRS)
Graydon, Patrick J.; Holloway, C. Michael
2015-01-01
We need well-founded means of determining whether software is fit for use in safety-critical applications. While software in industries such as aviation has an excellent safety record, the fact that software flaws have contributed to deaths illustrates the need for justifiably high confidence in software. It is often argued that software is fit for safety-critical use because it conforms to a standard for software in safety-critical systems. But little is known about whether such standards `work.' Reliance upon a standard without knowing whether it works is an experiment; without collecting data to assess the standard, this experiment is unplanned. This paper reports on a workshop intended to explore how standards could practicably be assessed. Planning the Unplanned Experiment: Assessing the Efficacy of Standards for Safety Critical Software (AESSCS) was held on 13 May 2014 in conjunction with the European Dependable Computing Conference (EDCC). We summarize and elaborate on the workshop's discussion of the topic, including both the presented positions and the dialogue that ensued.
Fast data transmission in dynamic data acquisition system for plasma diagnostics
NASA Astrophysics Data System (ADS)
Byszuk, Adrian; Poźniak, Krzysztof; Zabołotny, Wojciech M.; Kasprowicz, Grzegorz; Wojeński, Andrzej; Cieszewski, Radosław; Juszczyk, Bartłomiej; Kolasiński, Piotr; Zienkiewicz, Paweł; Chernyshova, Maryna; Czarski, Tomasz
2014-11-01
This paper describes the architecture of a new data acquisition system (DAQ) targeted mainly at plasma diagnostic experiments. Its modular architecture, in combination with selected hardware components, allows for straightforward reconfiguration of the whole system, both offline and online. The main emphasis is on the implementation of the data transmission subsystem. One of the biggest advantages of the described system is its modular architecture with well-defined boundaries between the main components: the analog frontend (AFE), the digital backplane, and the acquisition/control software. The use of FPGA chips allows for high flexibility in the design of analog frontends, including the ADC <--> FPGA interface. Data transmission between backplane boards and user software is accomplished with industry-standard PCI Express (PCIe) technology. The PCIe implementation includes both FPGA firmware and a Linux device driver. High flexibility of PCIe connections is achieved through a configurable PCIe switch. Wherever possible, the described DAQ system makes use of standard off-the-shelf components, including a typical x86 CPU and motherboard (acting as the PCIe controller) and cabling.
Advanced data management system architectures testbed
NASA Technical Reports Server (NTRS)
Grant, Terry
1990-01-01
The objective of the Architecture and Tools Testbed is to provide a working, experimental focus to the evolving automation applications for the Space Station Freedom data management system. Emphasis is on defining and refining real-world applications including the following: the validation of user needs; understanding system requirements and capabilities; and extending capabilities. The approach is to provide an open, distributed system of high performance workstations representing both the standard data processors and networks and advanced RISC-based processors and multiprocessor systems. The system provides a base from which to develop and evaluate new performance and risk management concepts and for sharing the results. Participants are given a common view of requirements and capability via: remote login to the testbed; standard, natural user interfaces to simulations and emulations; special attention to user manuals for all software tools; and E-mail communication. The testbed elements which instantiate the approach are briefly described including the workstations, the software simulation and monitoring tools, and performance and fault tolerance experiments.
Optical design of free face reflective headlamps
NASA Astrophysics Data System (ADS)
Cen, Zhao Feng; Li, Xiao Tong; Deng, Shi Tao
2005-02-01
Headlamps are installed at the front of automobiles for road lighting; standards such as the ECE regulations govern their illumination and anti-dazzle performance. Free-face reflective headlamps (FFR headlamps) are increasingly applied, and the light distribution design of the FFR mirror has become an important subject in the field of automobile parts. In this paper the surface shape of FFR headlamps is analyzed and described as a multi-partition aspherical surface with a few simple parameters. Following the fundamental principles of geometrical optics and using the theory of ray transmission with energy, millions of real rays emitted from the low-beam and high-beam filaments are traced, and the relative illumination intensity at a test screen 25 m from the automobile is obtained. The paper discusses the description of FFR mirrors, presents the algorithm for FFR headlamp design, gives its flow chart, and describes the light distribution simulation software developed with a friendly interface. In the light distribution graphic interface, the illumination area can be dragged to a given position while the parameters of the current partition of the FFR mirror are automatically updated. Using this software, FFR headlamps satisfying the criteria can be designed very quickly, and the 3D coordinates of any point on the mirror can be obtained, which simplifies CAM of FFR headlamps.
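The ray-tracing step can be sketched in two dimensions for the classical parabolic limit of a reflector partition; the focal length, filament angle range and bin count below are illustrative choices, not the paper's algorithm:

```python
import math
import random

def illuminance_profile(n_rays, f=0.025, theta_min=math.radians(100),
                        theta_max=math.radians(260), bins=10):
    # 2D sketch of the reflective-headlamp principle: a filament at the
    # focus of the parabola r = 2f / (1 - cos t) (polar form about the
    # focus) emits rays at angle t; each ray reflects parallel to the
    # optical axis and strikes a distant test screen at height
    # y = r * sin(t). Binning the screen heights gives a relative
    # illuminance profile, the 2D analogue of tracing millions of rays
    # to the 25 m screen.
    ys = []
    for _ in range(n_rays):
        t = random.uniform(theta_min, theta_max)
        r = 2 * f / (1 - math.cos(t))
        ys.append(r * math.sin(t))
    lo, hi = min(ys), max(ys)
    hist = [0] * bins
    for y in ys:
        hist[min(int((y - lo) / (hi - lo) * bins), bins - 1)] += 1
    return hist
```

A free-face design replaces the single parabola with per-partition aspherical surfaces, but the trace-then-accumulate structure is the same.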
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Rossetti, D.; Simula, F.; Tosoratto, L.; Vicini, P.
2015-12-01
In the attempt to develop an interconnection architecture optimized for hybrid HPC systems dedicated to scientific computing, we designed APEnet+, a point-to-point, low-latency and high-performance network controller supporting 6 fully bidirectional off-board links over a 3D torus topology. The first release of APEnet+ (named V4) was a board based on a 40 nm Altera FPGA, integrating 6 channels at 34 Gbps of raw bandwidth per direction and a PCIe Gen2 x8 host interface. It has been the first-of-its-kind device to implement an RDMA protocol to directly read/write data from/to Fermi and Kepler NVIDIA GPUs using NVIDIA peer-to-peer and GPUDirect RDMA protocols, obtaining real zero-copy GPU-to-GPU transfers over the network. The latest generation of APEnet+ systems (now named V5) implements a PCIe Gen3 x8 host interface on a 28 nm Altera Stratix V FPGA, with multi-standard fast transceivers (up to 14.4 Gbps) and an increased amount of configurable internal resources and hardware IP cores to support main interconnection standard protocols. Herein we present the APEnet+ V5 architecture, the status of its hardware and its system software design. Both its Linux Device Driver and the low-level libraries have been redeveloped to support the PCIe Gen3 protocol, introducing optimizations and solutions based on hardware/software co-design.
Development of an Integrated Biospecimen Database among the Regional Biobanks in Korea.
Park, Hyun Sang; Cho, Hune; Kim, Hwa Sun
2016-04-01
This study developed an integrated database for 15 regional biobanks that provides large quantities of high-quality bio-data to researchers, to be used for the prevention of disease, the development of personalized medicines, and genetics studies. We collected raw data managed independently by the 15 regional biobanks for database modeling, and analyzed and defined the metadata of the items. We also built a three-step (high, middle, and low) classification system for classifying the item concepts based on the metadata. To give the items clear meanings, clinical items were defined using the Systematized Nomenclature of Medicine Clinical Terms, and specimen items were defined using the Logical Observation Identifiers Names and Codes. To optimize database performance, we set up a multi-column index based on the classification system and the international standard codes. As a result of subdividing the 7,197,252 raw data items collected, we refined the metadata into 1,796 clinical items and 1,792 specimen items. The classification system consists of 15 high, 163 middle, and 3,588 low class items. International standard codes were linked to 69.9% of the clinical items and 71.7% of the specimen items. The database consists of 18 tables implemented on MySQL Server 5.6. In the performance evaluation, the multi-column index reduced query time by up to a factor of nine. The database is based on an international standard terminology system, providing an infrastructure that can integrate the 7,197,252 raw data items managed by the 15 regional biobanks. In particular, it resolves the otherwise inevitable interoperability issues in the exchange of information among the biobanks, and provides a solution to the synonym problem, which arises when the same concept is expressed in a variety of ways.
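The effect of such a multi-column (composite) index can be sketched with SQLite standing in for MySQL 5.6; the table and column names here are hypothetical, not the biobank schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE specimen (class_high TEXT, class_mid TEXT, std_code TEXT, value REAL)")
cur.executemany(
    "INSERT INTO specimen VALUES (?,?,?,?)",
    [(f"H{i % 15}", f"M{i % 163}", f"C{i}", float(i)) for i in range(20000)],
)
# Composite index mirroring the classification hierarchy plus the
# international standard code, so hierarchical lookups can be answered
# from the index instead of a full table scan.
cur.execute("CREATE INDEX idx_class_code ON specimen (class_high, class_mid, std_code)")
conn.commit()
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM specimen "
    "WHERE class_high = 'H3' AND class_mid = 'M33'"
).fetchall()
print(plan)
```

The query plan reports a search using `idx_class_code` on the two leading columns, which is the mechanism behind the reported query-time reduction.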
Defining Medical Levels of Care for Exploration Missions
NASA Technical Reports Server (NTRS)
Hailey, M.; Reyes, D.; Urbina, M.; Rubin, D.; Antonsen, E.
2017-01-01
NASA medical care standards establish requirements for providing health and medical programs for crewmembers during all phases of a mission. These requirements are intended to prevent or mitigate negative health consequences of long-duration spaceflight, thereby optimizing crew health and performance over the course of the mission. Current standards are documented in the two volumes of the NASA-STD-3001 Space Flight Human-System Standard, established by the Office of the Chief Health and Medical Officer. Its purpose is to provide uniform technical standards for the design, selection, and application of medical hardware, software, processes, procedures, practices, and methods for human-rated systems. NASA-STD-3001 Vol. 1 identifies five levels of care for human spaceflight, accompanied by several components that illustrate the type of medical care expected for each. The Exploration Medical Capability (ExMC) element of the Human Research Program has expanded the context of these levels of care and components. This supplemental information includes definitions for each component of care and example actions that describe the capabilities coinciding with each definition. This interpretation is necessary to fully and systematically define the capabilities required for each level of care, so that the medical requirements can be defined and the infrastructure needed for the medical systems of future exploration missions, such as one to Mars, can be planned.
NASA Astrophysics Data System (ADS)
Baluev, Roman V.
2013-08-01
We present PlanetPack, a new software tool that we developed to facilitate and standardize the advanced analysis of radial velocity (RV) data for the detection, characterization, and basic dynamical N-body simulation of exoplanets. PlanetPack is a command-line interpreter that can run either in an interactive mode or in a batch mode of automatic script interpretation. Its major abilities include: (i) advanced RV curve fitting with the proper maximum-likelihood treatment of unknown RV jitter; (ii) user-friendly multi-Keplerian as well as Newtonian N-body RV fits; (iii) more efficient maximum-likelihood periodograms that involve the full multi-planet fit (sometimes called "residual" or "recursive" periodograms); (iv) easy calculation of parametric 2D likelihood function level contours, reflecting the asymptotic confidence regions; (v) user-friendly fitting under useful functional constraints; (vi) basic short- and long-term planetary dynamical simulation using a fast Everhart-type integrator based on Gauss-Legendre spacings; (vii) fitting the data with red noise (auto-correlated errors); (viii) various analytical and numerical methods for determining statistical significance. Further functionality may be added to PlanetPack in the future. During the development of this software, much effort was made to improve computational speed, especially for CPU-demanding tasks. PlanetPack is written in pure C++ (1998/2003 standard) and is expected to be compilable and usable on a wide range of platforms.
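The "proper maximum-likelihood treatment of unknown RV jitter" amounts to fitting a per-point noise variance of the form sigma_i^2 + s^2, where s is the jitter. A minimal sketch of that idea, with a crude grid search standing in for a real numerical optimizer (this is illustrative, not PlanetPack's code):

```python
import math

def neg_log_likelihood(residuals, sigmas, jitter):
    # Gaussian likelihood in which each quoted error sigma_i is inflated
    # by an unknown jitter term s added in quadrature:
    #   var_i = sigma_i^2 + s^2.
    nll = 0.0
    for r, sig in zip(residuals, sigmas):
        var = sig * sig + jitter * jitter
        nll += 0.5 * (math.log(2 * math.pi * var) + r * r / var)
    return nll

def best_jitter(residuals, sigmas, grid=None):
    # Grid search for the maximum-likelihood jitter; a real tool would
    # optimize s jointly with the orbital (Keplerian) parameters.
    grid = grid if grid is not None else [i * 0.05 for i in range(201)]
    return min(grid, key=lambda s: neg_log_likelihood(residuals, sigmas, s))
```

When the RV residual scatter exceeds the quoted errors, the fitted jitter absorbs the excess; when it does not, the likelihood is maximized at s = 0.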
NASA Astrophysics Data System (ADS)
Arif Shah, Muhammad; Hashim, Rathiah; Shah, Adil Ali; Farooq Khattak, Umar
2016-11-01
Developing software through Global Software Development (GSD) has become very common in the software industry. Pakistan is one of the countries where projects are taken on and designed for clients from other countries, including Afghanistan. The purpose of this paper is to identify and analyze several communication barriers that can have a negative impact on a project, and to provide management guidelines for medium-size software organizations working in Pakistan with clients from Afghanistan, helping them overcome the communication barriers and challenges they face when coordinating with clients. Initially we performed a literature review to identify different communication barriers and to check whether any standardized communication-management guidelines for medium-size software houses had been provided in the past. The second stage of the research develops guidelines from the vendor's perspective, based on interviews and focus group discussions with stakeholders and employees of software houses with clients from Afghanistan. From those interviews and discussions, we established communication-management guidelines to overcome the communication problems and barriers of working with clients from Afghanistan. The literature review identified barriers such as cultural and language barriers as main reasons behind project failure, and we suggest that software organizations working in Pakistan follow defined communication guidelines to overcome communication barriers that directly affect the project.
NASA Technical Reports Server (NTRS)
Pedings, Marc
2007-01-01
RT-Display is a MATLAB-based data acquisition environment designed to use a variety of commercial off-the-shelf (COTS) hardware to digitize analog signals to a standard data format usable by other post-acquisition data analysis tools. This software presents the acquired data in real time using a variety of signal-processing algorithms. The acquired data is stored in a standard Operator Interactive Signal Processing Software (OISPS) data-formatted file. RT-Display is primarily configured to use the Agilent VXI (or equivalent) data acquisition boards used in such systems as MIDDAS (Multi-channel Integrated Dynamic Data Acquisition System). The software is generalized and deployable in almost any testing environment, without limitations or proprietary configuration for a specific test program or project. With the Agilent hardware configured and in place, users can start the program and, in one step, immediately begin digitizing multiple channels of data. Once the acquisition is completed, data is converted into a common binary format that also can be translated to specific formats used by external analysis software, such as OISPS and PC-Signal (product of AI Signal Research Inc.). RT-Display at the time of this reporting was certified on Agilent hardware capable of acquisition up to 196,608 samples per second. Data signals are presented to the user on-screen simultaneously for 16 channels. Each channel can be viewed individually, with a maximum capability of 160 signal channels (depending on hardware configuration). Current signal presentations include: time data, fast Fourier transforms (FFT), and power spectral density plots (PSD). Additional processing algorithms can be easily incorporated into this environment.
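The FFT and PSD presentations can be illustrated with a self-contained periodogram; the sample rate and test tone below are arbitrary choices, and RT-Display itself is MATLAB-based rather than Python:

```python
import cmath
import math

def fft(x):
    # Recursive radix-2 Cooley-Tukey FFT (input length must be a power of two).
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * math.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def psd(signal, fs):
    # One-sided power spectral density estimate (periodogram), the kind
    # of presentation RT-Display shows alongside raw time data and FFTs.
    n = len(signal)
    spec = fft(signal)
    return [(k * fs / n, (abs(spec[k]) ** 2) / (fs * n)) for k in range(n // 2)]

# A 1 kHz tone sampled at 8192 Hz peaks in the PSD bin nearest 1 kHz.
fs, n = 8192.0, 1024
tone = [math.sin(2 * math.pi * 1000.0 * i / fs) for i in range(n)]
peak_freq = max(psd(tone, fs), key=lambda p: p[1])[0]
```

A production system would use an optimized FFT and windowed averaging (e.g. Welch's method) rather than this single raw periodogram.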
2015-03-26
REAL-TIME RF-DNA FINGERPRINTING OF ZIGBEE DEVICES USING A SOFTWARE-DEFINED RADIO WITH FPGA PROCESSING. William M. Lowder, BSEE, BSCPE. AFIT-ENG-MS-15-M-054; not subject to copyright protection in the United States.
Using Cognitive Control in Software Defined Networking for Port Scan Detection
2017-07-01
ARL-TR-8059, July 2017, US Army Research Laboratory. Using Cognitive Control in Software-Defined Networking for Port Scan Detection, by Vinod K Mishra, Computational and Information Sciences Directorate, ARL.
Scalable software architecture for on-line multi-camera video processing
NASA Astrophysics Data System (ADS)
Camplani, Massimo; Salgado, Luis
2011-03-01
In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular; its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. As a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
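The PU / Central Unit split can be sketched with threads and queues; the class names follow the paper's terminology, but the code is an illustrative reduction, not the authors' system:

```python
import queue
import threading

class ProcessingUnit(threading.Thread):
    # Each PU owns an acquisition queue and a processing callback,
    # mirroring the acquisition-phase / processing-phase split.
    def __init__(self, camera_id, process):
        super().__init__(daemon=True)
        self.camera_id, self.process = camera_id, process
        self.frames, self.results = queue.Queue(), []

    def run(self):
        while True:
            frame = self.frames.get()
            if frame is None:          # poison pill from the Central Unit
                break
            self.results.append(self.process(self.camera_id, frame))

class CentralUnit:
    # Supervises the running PUs: spawns one per camera, dispatches
    # frames to them, and shuts them down cleanly.
    def __init__(self, n_cameras, process):
        self.pus = [ProcessingUnit(i, process) for i in range(n_cameras)]
        for pu in self.pus:
            pu.start()

    def dispatch(self, camera_id, frame):
        self.pus[camera_id].frames.put(frame)

    def shutdown(self):
        for pu in self.pus:
            pu.frames.put(None)
            pu.join()
        return [pu.results for pu in self.pus]
```

Because each PU has its own queue and thread, adding a camera adds one independent pipeline, which is the source of the architecture's scalability.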
Design and application of BIM based digital sand table for construction management
NASA Astrophysics Data System (ADS)
Fuquan, JI; Jianqiang, LI; Weijia, LIU
2018-05-01
This paper explores the design and application of a BIM-based digital sand table for construction management. Addressing the demands and features of construction management planning for bridge and tunnel engineering, the key functional features of the digital sand table should include three-dimensional GIS, model navigation, virtual simulation, information layers, and data exchange. These involve the technologies of 3D visualization and 4D virtual simulation of BIM, breakdown structure of the BIM model and project data, multi-dimensional information layers, and multi-source data acquisition and interaction. Overall, the digital sand table is a visual and virtual engineering information terminal, integrated under a unified data standard system. Its applications include visual construction scheming, virtual construction scheduling, and construction monitoring. Finally, the applicability of several basic software packages to the digital sand table is analyzed.
Cognitive software defined radar: waveform design for clutter and interference suppression
NASA Astrophysics Data System (ADS)
Kirk, Benjamin H.; Owen, Jonathan W.; Narayanan, Ram M.; Blunt, Shannon D.; Martone, Anthony F.; Sherbondy, Kelly D.
2017-05-01
Clutter and radio frequency interference (RFI) are prevalent issues in the field of radar and are of specific interest to cognitive radar. Here, methods for applying and testing the utility of cognitive radar for clutter and RFI mitigation are explored. Using the adaptable transmit capability, environmental database, and general "awareness" of a cognitive radar system (i.e., spectrum sensing, geographical location, etc.), a matched waveform is synthesized that improves the signal-to-clutter ratio (SCR), assuming at least an estimate of the target response and the environmental clutter response are known a priori. RFI may also be mitigated by sensing the RF spectrum and adapting the transmit center frequency and bandwidth using methods that optimize bandwidth and signal-to-interference-plus-noise ratio (SINR), such as the spectrum sensing, multi-objective (SS-MO) algorithm; the improvement is shown by a decrease in the noise floor. The effectiveness of the above methods is examined via a test bed developed around a software-defined radio (SDR). Commercial off-the-shelf (COTS) devices are desirable for their cost effectiveness, general ease of use, and technical and community support, but they pose design challenges of their own. The universal software radio peripheral (USRP) X310 SDR is a relatively cheap and portable device that has all the system components of a basic cognitive radar; its design challenges include phase coherency between channels, bandwidth limitations, dynamic range, and speed of computation and data communication/recording.
Principles to Products: Toward Realizing MOS 2.0
NASA Technical Reports Server (NTRS)
Bindschadler, Duane L.; Delp, Christopher L.
2012-01-01
This is a report on the Operations Revitalization Initiative, part of the ongoing NASA-funded Advanced Multi-Mission Operations Systems (AMMOS) program. We are implementing products that significantly improve efficiency and effectiveness of Mission Operations Systems (MOS) for deep-space missions. We take a multi-mission approach, in keeping with our organization's charter to "provide multi-mission tools and services that enable mission customers to operate at a lower total cost to NASA." Focusing first on architectural fundamentals of the MOS, we review the effort's progress. In particular, we note the use of stakeholder interactions and consideration of past lessons learned to motivate a set of Principles that guide the evolution of the AMMOS. Thus guided, we have created essential patterns and connections (detailed in companion papers) that are explicitly modeled and support elaboration at multiple levels of detail (system, sub-system, element...) throughout a MOS. This architecture is realized in design and implementation products that provide lifecycle support to a Mission at the system and subsystem level. The products include adaptable multi-mission engineering documentation that describes essentials such as operational concepts and scenarios, requirements, interfaces and agreements, information models, and mission operations processes. Because we have adopted a model-based system engineering method, these documents and their contents are meaningfully related to one another and to the system model. This means they are both more rigorous and reusable (from mission to mission) than standard system engineering products. The use of models also enables detailed, early (e.g., formulation phase) insight into the impact of changes (e.g., to interfaces or to software) that is rigorous and complete, allowing better decisions on cost or technical trades. 
Finally, our work provides clear and rigorous specification of operations needs to software developers, further enabling significant gains in productivity.
Chew, Avenell L.; Lamey, Tina; McLaren, Terri; De Roach, John
2016-01-01
Purpose To present en face optical coherence tomography (OCT) images generated by graph-search theory algorithm-based custom software and examine correlation with other imaging modalities. Methods En face OCT images derived from high density OCT volumetric scans of 3 healthy subjects and 4 patients using a custom algorithm (graph-search theory) and commercial software (Heidelberg Eye Explorer software (Heidelberg Engineering)) were compared and correlated with near infrared reflectance, fundus autofluorescence, adaptive optics flood-illumination ophthalmoscopy (AO-FIO) and microperimetry. Results Commercial software was unable to generate accurate en face OCT images in eyes with retinal pigment epithelium (RPE) pathology due to segmentation error at the level of Bruch’s membrane (BM). Accurate segmentation of the basal RPE and BM was achieved using custom software. The en face OCT images from eyes with isolated interdigitation or ellipsoid zone pathology were of similar quality between custom software and Heidelberg Eye Explorer software in the absence of any other significant outer retinal pathology. En face OCT images demonstrated angioid streaks, lesions of acute macular neuroretinopathy, hydroxychloroquine toxicity and Bietti crystalline deposits that correlated with other imaging modalities. Conclusions Graph-search theory algorithm helps to overcome the limitations of outer retinal segmentation inaccuracies in commercial software. En face OCT images can provide detailed topography of the reflectivity within a specific layer of the retina which correlates with other forms of fundus imaging. Our results highlight the need for standardization of image reflectivity to facilitate quantification of en face OCT images and longitudinal analysis. PMID:27959968
Standardized terminology for clinical trial protocols based on top-level ontological categories.
Heller, B; Herre, H; Lippoldt, K; Loeffler, M
2004-01-01
This paper describes a new method for the ontologically based standardization of concepts with regard to the quality assurance of clinical trial protocols. We developed a data dictionary for medical and trial-specific terms in which concepts and relations are defined context-dependently. The data dictionary is provided to different medical research networks by means of the software tool Onto-Builder via the internet. The data dictionary is based on domain-specific ontologies and the top-level ontology of GOL. The concepts and relations described in the data dictionary are represented in natural language, semi-formally or formally according to their use.
NASA Technical Reports Server (NTRS)
Harris, H. M.; Bergam, M. J.; Kim, S. L.; Smith, E. A.
1987-01-01
Shuttle Mission Design and Operations Software (SMDOS) assists in the design and operation of missions involving spacecraft in low Earth orbits by providing orbital and graphics information. SMDOS performs the following five functions: displays two world and two polar maps, or any user-defined window 5 degrees of latitude by 5 degrees of longitude, in one of eight standard projections; designates Earth sites by points or polygon shapes; plots the spacecraft ground track with 1-min demarcation lines; displays, by means of different colors, the availability of the Tracking and Data Relay Satellite to the Shuttle; and calculates available times and orbits to view a particular site, with the corresponding look angles. SMDOS is written in Laboratory Microsystems FORTH (1979 standard).
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2014-12-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
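Classifying a pdf rather than a single filtered value can be sketched as computing the transfer function's expectation under a voxel footprint's Gaussian-mixture pdf; the simple quadrature below stands in for the paper's fast convolutions, and the mixture parameters are illustrative:

```python
import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def apply_transfer_function(mixture, tf, lo=0.0, hi=1.0, steps=2048):
    # Expected transfer-function value over a pdf stored as a mixture of
    # 1D Gaussians (weight, mean, sigma) in the intensity range.
    # Because the full pdf is classified rather than one low-pass
    # filtered value, the result stays consistent across resolution
    # levels, which is the core idea of sparse pdf volumes.
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx
        p = sum(w * gaussian(x, mu, s) for (w, mu, s) in mixture)
        total += tf(x) * p * dx
    return total
```

With standard down-sampling, a sharp transfer function applied to an averaged value can miss features entirely; applied to the pdf, the contribution of each intensity mode is preserved.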
A proposal for an SDN-based SIEPON architecture
NASA Astrophysics Data System (ADS)
Khalili, Hamzeh; Sallent, Sebastià; Piney, José Ramón; Rincón, David
2017-11-01
Passive Optical Network (PON) elements such as Optical Line Terminal (OLT) and Optical Network Units (ONUs) are currently managed by inflexible legacy network management systems. Software-Defined Networking (SDN) is a new networking paradigm that improves the operation and management of networks. In this paper, we propose a novel architecture, based on the SDN concept, for Ethernet Passive Optical Networks (EPON) that includes the Service Interoperability standard (SIEPON). In our proposal, the OLT is partially virtualized and some of its functionalities are allocated to the core network management system, while the OLT itself is replaced by an OpenFlow (OF) switch. A new MultiPoint MAC Control (MPMC) sublayer extension based on the OpenFlow protocol is presented. This would allow the SDN controller to manage and enhance the resource utilization, flow monitoring, bandwidth assignment, quality-of-service (QoS) guarantees, and energy management of the optical network access, to name a few possibilities. The OpenFlow switch is extended with synchronous ports to retain the time-critical nature of the EPON network. OpenFlow messages are also extended with new functionalities to implement the concept of EPON Service Paths (ESPs). Our simulation-based results demonstrate the effectiveness of the new architecture, while retaining a similar (or improved) performance in terms of delay and throughput when compared to legacy PONs.
Caranguian, Luther Paul R; Pancho-Festin, Susan; Sison, Luis G
2012-01-01
In this study, we focused on the interoperability and authentication of medical devices in the context of telemedical systems. A recent standard, the ISO/IEEE 11073 Personal Health Device (X73-PHD) standard, addresses the device interoperability problem by defining common protocols for the agent (medical device) and manager (appliance) interface. The X73-PHD standard, however, has not addressed security and authentication of medical devices, which is important in establishing the integrity of a telemedical system. We have designed and implemented a security policy within the X73-PHD standards. The policy enables device authentication using asymmetric-key cryptography, with the RSA algorithm as the digital signature scheme. We used two approaches for performing the digital signatures: direct software implementation and use of embedded security modules (ESM). The two approaches were evaluated and compared in terms of execution time and memory requirements. For standard 2048-bit RSA, the ESM calculates digital signatures in only 12% of the time taken by the direct implementation. Moreover, analysis shows that the ESM offers additional security advantages, such as secure storage of keys, compared to the direct implementation. Interoperability with other systems was verified by testing the system with LNI Healthlink, a manager software that implements the X73-PHD standard. Lastly, a security analysis was done: the system's response to common attacks on authentication systems was analyzed, and several measures were implemented to protect the system against them.
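The hash-then-sign RSA scheme that the policy relies on can be sketched with toy parameters (illustrative only: real deployments use keys of at least 2048 bits and a padding scheme such as RSASSA-PSS, and the message below is hypothetical):

```python
import hashlib

# Toy RSA parameters; never use key sizes like this in practice.
p, q = 61, 53
n = p * q                   # modulus
phi = (p - 1) * (q - 1)
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent (Python 3.8+ modular inverse)

def sign(message: bytes) -> int:
    """Hash the message, reduce it mod n, and exponentiate with the private key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Recompute the hash and check it against the signature raised to e."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"blood pressure: 120/80")
```

An ESM would perform the `pow(h, d, n)` step inside tamper-resistant hardware so the private exponent never leaves the module.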
NeXML: rich, extensible, and verifiable representation of comparative data and metadata.
Vos, Rutger A; Balhoff, James P; Caravas, Jason A; Holder, Mark T; Lapp, Hilmar; Maddison, Wayne P; Midford, Peter E; Priyam, Anurag; Sukumaran, Jeet; Xia, Xuhua; Stoltzfus, Arlin
2012-07-01
In scientific research, integration and synthesis require a common understanding of where data come from, how much they can be trusted, and what they may be used for. To make such an understanding computer-accessible requires standards for exchanging richly annotated data. The challenges of conveying reusable data are particularly acute in regard to evolutionary comparative analysis, which comprises an ever-expanding list of data types, methods, research aims, and subdisciplines. To facilitate interoperability in evolutionary comparative analysis, we present NeXML, an XML standard (inspired by the current standard, NEXUS) that supports exchange of richly annotated comparative data. NeXML defines syntax for operational taxonomic units, character-state matrices, and phylogenetic trees and networks. Documents can be validated unambiguously. Importantly, any data element can be annotated, to an arbitrary degree of richness, using a system that is both flexible and rigorous. We describe how the use of NeXML by the TreeBASE and Phenoscape projects satisfies user needs that cannot be satisfied with other available file formats. By relying on XML Schema Definition, the design of NeXML facilitates the development and deployment of software for processing, transforming, and querying documents. The adoption of NeXML for practical use is facilitated by the availability of (1) an online manual with code samples and a reference to all defined elements and attributes, (2) programming toolkits in most of the languages used commonly in evolutionary informatics, and (3) input-output support in several widely used software applications. An active, open, community-based development process enables future revision and expansion of NeXML.
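To illustrate how NeXML documents can be processed with standard XML tooling, a schematic NeXML-style fragment (simplified for illustration, not schema-valid) can be parsed as follows:

```python
import xml.etree.ElementTree as ET

# A schematic NeXML-like fragment: element names follow the NeXML style,
# but this is a simplified sketch, not a complete valid document.
doc = """
<nexml xmlns="http://www.nexml.org/2009">
  <otus id="taxa1">
    <otu id="t1" label="Homo sapiens"/>
    <otu id="t2" label="Pan troglodytes"/>
  </otus>
  <trees otus="taxa1" id="trees1">
    <tree id="tree1"/>
  </trees>
</nexml>
"""

NS = {"nex": "http://www.nexml.org/2009"}
root = ET.fromstring(doc)
# Collect the operational taxonomic unit labels in document order.
labels = [otu.get("label") for otu in root.findall(".//nex:otu", NS)]
```

Because NeXML is backed by an XML Schema Definition, generic XML parsers and validators can process it without format-specific code.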
The United States Environmental Protection Agency (EPA) is developing a comprehensive environmental exposure and risk analysis software system for agency-wide application using the methodology of a Multi-media, Multi-pathway, Multi-receptor Risk Assessment (3MRA) model. This sof...
Visualization techniques to aid in the analysis of multi-spectral astrophysical data sets
NASA Technical Reports Server (NTRS)
Brugel, Edward W.; Domik, Gitta O.; Ayres, Thomas R.
1993-01-01
The goal of this project was to support the scientific analysis of multi-spectral astrophysical data by means of scientific visualization. Scientific visualization offers its greatest value if it is not used as a method separate from or alternative to other data analysis methods but rather in addition to these methods. Together with quantitative analysis of data, such as offered by statistical analysis and image or signal processing, visualization attempts to explore all information inherent in astrophysical data in the most effective way. Data visualization is one aspect of data analysis. Our taxonomy, as developed in Section 2, includes identification of and access to existing information, preprocessing and quantitative analysis of data, visual representation, and the user interface as major components of the software environment for astrophysical data analysis. In pursuing our goal to provide methods and tools for scientific visualization of multi-spectral astrophysical data, we therefore looked at scientific data analysis as one whole process, adding visualization tools to an already existing environment and integrating the various components that define a scientific data analysis environment. As long as the software development process of each component is separate from all other components, users of data analysis software are constantly interrupted in their scientific work in order to convert from one data format to another, to move from one storage medium to another, or to switch from one user interface to another. We also took an in-depth look at scientific visualization and its underlying concepts, current visualization systems, their contributions, and their shortcomings. The role of data visualization is to stimulate mental processes different from quantitative data analysis, such as the perception of spatial relationships or the discovery of patterns or anomalies while browsing through large data sets.
Visualization often leads to an intuitive understanding of the meaning of data values and their relationships by sacrificing accuracy in interpreting the data values. In order to be accurate in the interpretation, data values need to be measured, computed on, and compared to theoretical or empirical models (quantitative analysis). If visualization software hampers quantitative analysis (which happens with some commercial visualization products), its use is greatly diminished for astrophysical data analysis. The software system STAR (Scientific Toolkit for Astrophysical Research) was developed as a prototype during the course of the project to better understand the pragmatic concerns raised in the project. STAR led to a better understanding of the importance of collaboration between astrophysicists and computer scientists.
Pyrodiversity promotes avian diversity over the decade following forest fire
Morgan W. Tingley; Viviana Ruiz-Gutiérrez; Robert L. Wilkerson; Christine A. Howell; Rodney B. Siegel
2016-01-01
An emerging hypothesis in fire ecology is that pyrodiversity increases species diversity. We test whether pyrodiversity, defined as the standard deviation of fire severity, increases avian biodiversity at two spatial scales, and whether and how this relationship may change in the decade following fire. We use a dynamic Bayesian community model applied to a multi-year...
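The pyrodiversity metric, the standard deviation of fire severity, is straightforward to compute; the severity values below are invented for illustration:

```python
import statistics

def pyrodiversity(severities):
    """Pyrodiversity as defined in the abstract: the standard deviation
    of fire-severity values sampled across a burned landscape.
    (Population std here; sample std is an equally plausible choice.)"""
    return statistics.pstdev(severities)

uniform_burn = [3, 3, 3, 3]   # homogeneous severity -> zero pyrodiversity
patchy_burn = [1, 4, 2, 5]    # mixed severities -> higher pyrodiversity
```

A patchy burn mosaic thus scores higher than a uniform one even when mean severity is identical.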
A Software Product Line Process to Develop Agents for the IoT.
Ayala, Inmaculada; Amor, Mercedes; Fuentes, Lidia; Troya, José M
2015-07-01
One of the most important challenges of this decade is the Internet of Things (IoT), which aims to enable things to be connected anytime, anyplace, with anything and anyone, ideally using any path/network and any service. IoT systems are usually composed of heterogeneous and interconnected lightweight devices that support applications that are subject to change in their external environment and in the functioning of these devices. Managing the variability of these changes autonomously is a challenge in the development of these systems. Agents are a good option for developing self-managed IoT systems due to their distributed nature, context-awareness and self-adaptation. Our goal is to enhance the development of IoT applications using agents and software product lines (SPLs). Specifically, we propose to use Self-StarMAS (multi-agent system) agents and to define an SPL process using the Common Variability Language. In this contribution, we propose an SPL process for Self-StarMAS, paying particular attention to agents embedded in sensor motes.
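SPL-style variability resolution can be sketched as selecting optional features against a feature model; the feature names below are hypothetical, not taken from Self-StarMAS or the Common Variability Language:

```python
# Hypothetical feature model: mandatory features ship in every product,
# optional ones are selected per deployment (e.g. per sensor mote).
feature_model = {
    "mandatory": ["Communication"],
    "optional": ["SelfHealing", "LowPowerRadio"],
}

def derive_product(selected_optional):
    """Build a product configuration: all mandatory features plus the
    chosen optional ones, rejecting selections outside the model."""
    unknown = set(selected_optional) - set(feature_model["optional"])
    if unknown:
        raise ValueError(f"unknown features: {unknown}")
    return feature_model["mandatory"] + sorted(selected_optional)

product = derive_product(["LowPowerRadio"])
```

Each derived configuration corresponds to one concrete agent variant generated by the SPL process.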
Software Development Projects: Estimation of Cost and Effort (A Manager’s Digest).
1982-12-01
it later when changes need to be made to it. Various levels of difficulty are experienced due to the skill level of the programmer, poor program...impurities that if eliminated reduce the level of complexity of the program. They are as follows: 1. Complementary Operations: unreduced expressions 2...greater quality than that to support standard business applications. Remus defines quality as "...the number of program defects normalized by size over
Cloud Service Provider Methods for Managing Insider Threats: Analysis Phase 1
2013-11-01
of Standards and Technology (NIST) Special Publication 800-145 (NIST SP 800-145) defines three types of cloud services: Software as a Service (SaaS)...among these three models. NIST SP 800-145 describes the three service models as follows: SaaS: The capability provided to the consumer is to use the...Cloud Service Provider Methods for Managing Insider Threats: Analysis Phase I Greg Porter November 2013 TECHNICAL NOTE CMU/SEI-2013-TN-020
2016-03-01
in-vitro decision to incubate a startup, Lexumo [7], which is developing a commercial Software as a Service (SaaS) vulnerability assessment...LTS Label Transition System MUSE Mining and Understanding Software Enclaves RTEMS Real-Time Executive for Multi-processor Systems SaaS Software as a Service SSA Static Single Assignment SWE Software Epistemology UD/DU Def-Use/Use-Def Chains (Dataflow Graph)
Ka-band Technologies for Small Spacecraft Communications via Relays and Direct Data Downlink
NASA Technical Reports Server (NTRS)
Budinger, James M.; Niederhaus, Charles; Reinhart, Richard; Downey, Joe; Roberts, Anthony
2016-01-01
As the scientific capabilities and number of small spacecraft missions in the near Earth region increase, standard yet configurable user spacecraft terminals operating in Ka-band are needed to lower mission cost and risk and to enable significantly higher data return than current UHF or S-band terminals. These compact Ka-band terminals are intended to operate with both the current and next generation of Ka-band relay satellites and via direct data communications with near Earth tracking terminals. This presentation provides an overview of emerging NASA-sponsored and commercially provided technologies in software defined radios (SDRs), transceivers, and electronically steered antennas that will enable data rates from hundreds of kbps to over 1 Gbps, operate in multiple frequency bands (such as S- and X-bands), and expand the use of NASA's common Ka-band frequencies: 22.55-23.15 GHz for forward data or uplink, and 25.5-27.0 GHz for return data or downlink. Reductions in mass, power and volume come from integration of multiple radio functions, operation in Ka-band, high-efficiency amplifiers and receivers, and compact, flat and vibration-free electronically steered narrow-beam antennas with up to ±60 degrees field of regard. The software defined near Earth space transceiver (SD-NEST) described in the presentation is intended to be compliant with NASA's space telecommunications radio system (STRS) standard for communications waveforms and hardware interoperability.
NASA Technical Reports Server (NTRS)
1972-01-01
The shuttle GN&C software functions for horizontal flight operations are defined. Software functional requirements are grouped into two categories: first horizontal flight requirements and full mission horizontal flight requirements. The document provides the initial step in the shuttle GN&C software design process. It also serves as a management tool to identify analyses which are required to define requirements.
Chi, Bryan; DeLeeuw, Ronald J; Coe, Bradley P; MacAulay, Calum; Lam, Wan L
2004-02-09
Array comparative genomic hybridization (CGH) is a technique which detects copy number differences in DNA segments. Complete sequencing of the human genome and the development of an array representing a tiling set of tens of thousands of DNA segments spanning the entire human genome has made high resolution copy number analysis throughout the genome possible. Since array CGH provides signal ratio for each DNA segment, visualization would require the reassembly of individual data points into chromosome profiles. We have developed a visualization tool for displaying whole genome array CGH data in the context of chromosomal location. SeeGH is an application that translates spot signal ratio data from array CGH experiments to displays of high resolution chromosome profiles. Data is imported from a simple tab delimited text file obtained from standard microarray image analysis software. SeeGH processes the signal ratio data and graphically displays it in a conventional CGH karyotype diagram with the added features of magnification and DNA segment annotation. In this process, SeeGH imports the data into a database, calculates the average ratio and standard deviation for each replicate spot, and links them to chromosome regions for graphical display. Once the data is displayed, users have the option of hiding or flagging DNA segments based on user defined criteria, and retrieve annotation information such as clone name, NCBI sequence accession number, ratio, base pair position on the chromosome, and standard deviation. SeeGH represents a novel software tool used to view and analyze array CGH data. The software gives users the ability to view the data in an overall genomic view as well as magnify specific chromosomal regions facilitating the precise localization of genetic alterations. SeeGH is easily installed and runs on Microsoft Windows 2000 or later environments.
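The per-clone replicate averaging that SeeGH performs before plotting can be sketched as follows (the clone names and log2 ratios below are invented for illustration):

```python
import statistics

def summarize_replicates(spots):
    """For each clone, average the signal ratios of its replicate spots
    and report the standard deviation, as SeeGH does before linking the
    values to chromosome regions. `spots` maps clone name -> list of ratios."""
    summary = {}
    for clone, ratios in spots.items():
        summary[clone] = (statistics.mean(ratios), statistics.stdev(ratios))
    return summary

# Hypothetical array CGH replicate spots (log2 signal ratios).
spots = {
    "cloneA": [0.12, 0.10, 0.14],    # near-balanced copy number
    "cloneB": [-0.55, -0.49],        # candidate deletion
}
summary = summarize_replicates(spots)
```

A high standard deviation flags noisy replicates, which a user might then hide or flag in the karyotype display.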
Image analysis software versus direct anthropometry for breast measurements.
Quieregatto, Paulo Rogério; Hochman, Bernardo; Furtado, Fabianne; Machado, Aline Fernanda Perez; Sabino Neto, Miguel; Ferreira, Lydia Masako
2014-10-01
To compare breast measurements performed using the software packages ImageTool®, AutoCAD® and Adobe Photoshop® with direct anthropometric measurements. Points were marked on the breasts and arms of 40 volunteer women aged between 18 and 60 years. When connecting the points, seven linear segments and one angular measurement on each half of the body, and one medial segment common to both body halves were defined. The volunteers were photographed in a standardized manner. Photogrammetric measurements were performed by three independent observers using the three software packages and compared to direct anthropometric measurements made with calipers and a protractor. Measurements obtained with AutoCAD® were the most reproducible and those made with ImageTool® were the most similar to direct anthropometry, while measurements with Adobe Photoshop® showed the largest differences. Except for angular measurements, significant differences were found between measurements of line segments made using the three software packages and those obtained by direct anthropometry. AutoCAD® provided the highest precision and intermediate accuracy; ImageTool® had the highest accuracy and lowest precision; and Adobe Photoshop® showed intermediate precision and the worst accuracy among the three software packages.
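The linear and angular measurements compared in the study reduce to simple coordinate geometry once points are marked; a minimal sketch with made-up pixel coordinates:

```python
import math

def segment_length(p, q):
    """Euclidean length of the segment between two marked points (in pixels)."""
    return math.dist(p, q)

def angle_at(vertex, a, b):
    """Angle in degrees at `vertex` formed by rays toward points a and b,
    the kind of angular measurement otherwise taken with a protractor."""
    v1 = (a[0] - vertex[0], a[1] - vertex[1])
    v2 = (b[0] - vertex[0], b[1] - vertex[1])
    cosang = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

length = segment_length((0, 0), (3, 4))   # a 3-4-5 triangle hypotenuse
angle = angle_at((0, 0), (1, 0), (0, 1))  # perpendicular rays
```

Differences between the software packages then come down to how precisely each lets the observer place these points, not to the geometry itself.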
System approach to distributed sensor management
NASA Astrophysics Data System (ADS)
Mayott, Gregory; Miller, Gordon; Harrell, John; Hepp, Jared; Self, Mid
2010-04-01
Since 2003, the US Army's RDECOM CERDEC Night Vision Electronic Sensor Directorate (NVESD) has been developing a distributed Sensor Management System (SMS) that utilizes a framework which demonstrates application layer, net-centric sensor management. The core principles of the design support distributed and dynamic discovery of sensing devices and processes through a multi-layered implementation. This results in a sensor management layer that acts as a System with defined interfaces for which the characteristics, parameters, and behaviors can be described. Within the framework, the definition of a protocol is required to establish the rules for how distributed sensors should operate. The protocol defines the behaviors, capabilities, and message structures needed to operate within the functional design boundaries. The protocol definition addresses the requirements for a device (sensors or processes) to dynamically join or leave a sensor network, dynamically describe device control and data capabilities, and allow dynamic addressing of publish and subscribe functionality. The message structure is a multi-tiered definition that identifies standard, extended, and payload representations that are specifically designed to accommodate the need for standard representations of common functions, while supporting the need for feature-based functions that are typically vendor specific. The dynamic qualities of the protocol enable a User GUI application the flexibility of mapping widget-level controls to each device based on reported capabilities in real-time. The SMS approach is designed to accommodate scalability and flexibility within a defined architecture. The distributed sensor management framework and its application to a tactical sensor network will be described in this paper.
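The multi-tiered message structure described above might be modeled as follows; this is a hypothetical sketch, not the actual SMS protocol definition:

```python
from dataclasses import dataclass, field

@dataclass
class SensorMessage:
    """Hypothetical three-tier message: a standard tier for common
    functions, an extended tier for negotiated capabilities, and a
    vendor-specific payload tier."""
    device_id: str
    msg_type: str                               # e.g. "JOIN", "CAPABILITIES", "DATA"
    standard: dict = field(default_factory=dict)
    extended: dict = field(default_factory=dict)
    payload: bytes = b""

def make_join(device_id, capabilities):
    """A device dynamically joining the network announces its controls,
    letting a GUI map widgets to them at run time."""
    return SensorMessage(device_id, "JOIN", standard={"capabilities": capabilities})

msg = make_join("eo-cam-01", ["pan", "tilt", "zoom"])
```

Keeping vendor-specific features in a separate tier is what lets the standard tier stay stable while new device types join the network.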
Spacecraft Onboard Interface Services: Current Status and Roadmap
NASA Astrophysics Data System (ADS)
Prochazka, Marek; Lopez Trescastro, Jorge; Krueger, Sabine
2016-08-01
Spacecraft Onboard Interface Services (SOIS) is a set of CCSDS standards defining communication stack services for interacting with hardware equipment onboard spacecraft. In 2014 ESA kicked off three parallel activities to critically review the SOIS standards, take legacy spacecraft flight software (FSW), make it compliant with a preselected subset of SOIS standards, and assess the resulting performance and architecture. Each of the three parallel activities, led by Airbus DS Toulouse, OHB Bremen and Thales Alenia Space Cannes respectively, was also to provide feedback to ESA and CCSDS and to propose a roadmap for transition towards an operational FSW system fully compliant with the applicable SOIS standards. The objective of this paper is twofold: first, to summarise the main results of the three parallel activities; and second, based on those results, to propose a roadmap for the future.