Educational interactive multimedia software: The impact of interactivity on learning
NASA Astrophysics Data System (ADS)
Reamon, Derek Trent
This dissertation discusses the design, development, deployment and testing of two versions of educational interactive multimedia software. Both versions of the software are focused on teaching mechanical engineering undergraduates about the fundamentals of direct-current (DC) motor physics and selection. The two versions of the Motor Workshop software cover the same basic material on motors, but differ in the level of interactivity between the students and the software. Here, the level of interactivity refers to the particular role of the computer in the interaction between the user and the software. In one version, the students navigate through information that is organized by topic, reading text and viewing embedded video clips; this is referred to as "low-level interactivity" software because the computer simply presents the content. In the other version, the students are given a task to accomplish---they must design a small motor-driven 'virtual' vehicle that competes against computer-generated opponents. The interaction is guided by the software, which offers advice from 'experts' and provides contextual information; we refer to this as "high-level interactivity" software because the computer actively participates in the interaction. The software was used in two sets of experiments, where students using the low-level interactivity software served as the 'control group' and students using the highly interactive software were the 'treatment group.' Data, including pre- and post-performance tests, questionnaire responses, learning style characterizations, activity tracking logs and videotapes, were collected for analysis. Statistical and observational research methods were applied to the various data to test the hypothesis that the level of interactivity affects the learning situation, with higher levels of interactivity being more effective for learning. The results show that both the low-level and high-level interactive versions of the software were effective in promoting learning about the subject of motors. The focus of learning varied between users of the two versions, however. The low-level version was more effective for teaching concepts and terminology, while the high-level version seemed to be more effective for teaching engineering applications.
An overview of very high level software design methods
NASA Technical Reports Server (NTRS)
Asdjodi, Maryam; Hooper, James W.
1988-01-01
Very high level design methods emphasize the automatic transfer of requirements to formal design specifications, and/or concentrate on the automatic transformation of formal design specifications, including some semantic information about the system, into machine-executable form. Very high level design methods range from general, domain-independent methods to approaches implementable only for specific applications or domains. Different approaches to higher-level software design are being developed by applying AI techniques, abstract programming methods, domain heuristics, software engineering tools, library-based programming, and other methods. Though a given approach does not always fall exactly into one specific class, this paper provides a classification of very high level design methods, including examples for each class. These methods are analyzed and compared based on their basic approaches, their strengths, and their feasibility for future expansion toward automatic development of software systems.
Verification and Validation in a Rapid Software Development Process
NASA Technical Reports Server (NTRS)
Callahan, John R.; Easterbrook, Steve M.
1997-01-01
The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing of the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.
Programming Language Software For Graphics Applications
NASA Technical Reports Server (NTRS)
Beckman, Brian C.
1993-01-01
New approach reduces repetitive development of features common to different applications. High-level programming language and interactive environment with access to graphical hardware and software created by adding graphical commands and other constructs to standardized, general-purpose programming language, "Scheme". Designed for use in developing other software incorporating interactive computer-graphics capabilities into application programs. Provides alternative to programming entire applications in C or FORTRAN, specifically ameliorating design and implementation of complex control and data structures typifying applications with interactive graphics. Enables experimental programming and rapid development of prototype software, and yields high-level programs serving as executable versions of software-design documentation.
Porting and refurbishment of the WSS TNG control software
NASA Astrophysics Data System (ADS)
Caproni, Alessandro; Zacchei, Andrea; Vuerli, Claudio; Pucillo, Mauro
2004-09-01
The Workstation Software System (WSS) is the high-level control software of the Italian Galileo Galilei Telescope, located on La Palma in the Canary Islands, developed at the beginning of the 1990s for HP-UX workstations. WSS may be seen as a middle-layer software system that manages the communications between the real-time systems (VME), different workstations, and high-level applications, providing a uniform distributed environment. The project to port the control software from the HP workstations to the Linux environment started at the end of 2001. It aimed to refurbish the control software by introducing some of the newer software technologies and languages available for free in the Linux operating system. The project was realized by gradually substituting each HP workstation with a Linux PC, with the goal of avoiding major changes in the original software running under HP-UX. Three main phases characterized the project: creation of a simulated control room with several Linux PCs running WSS (to check all the functionality); insertion into the simulated control room of some HPs (to check the mixed environment); and substitution of the HP workstations in the real control room. From a software point of view, the project introduces some new technologies, like multi-threading, and the possibility to develop high-level WSS applications in almost every programming language that implements Berkeley sockets. A library for developing Java applications has also been created and tested.
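Since the refurbished WSS exposes its services over Berkeley sockets, a high-level client can in principle be written in any language with socket support. A minimal sketch in Python, assuming a plain text-over-TCP protocol; the host, port, and command syntax shown here are hypothetical placeholders, not the actual WSS wire protocol:

    import socket

    # Minimal sketch of a high-level WSS-style client over TCP. The host,
    # port, command syntax, and reply format are hypothetical placeholders.
    def send_wss_command(command, host="wss-server", port=5000):
        with socket.create_connection((host, port), timeout=5.0) as sock:
            sock.sendall((command + "\n").encode("ascii"))
            return sock.recv(4096).decode("ascii").strip()

    # Example (hypothetical command): print(send_wss_command("STATUS M1"))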
Quality Market: Design and Field Study of Prediction Market for Software Quality Control
ERIC Educational Resources Information Center
Krishnamurthy, Janaki
2010-01-01
Given the increasing competition in the software industry and the critical consequences of software errors, it has become important for companies to achieve high levels of software quality. While cost reduction and timeliness of projects continue to be important measures, software companies are placing increasing attention on identifying the user…
Python based high-level synthesis compiler
NASA Astrophysics Data System (ADS)
Cieszewski, Radosław; Pozniak, Krzysztof; Romaniuk, Ryszard
2014-11-01
This paper presents a Python-based high-level synthesis (HLS) compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and maps it to VHDL. An FPGA combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute cycle of traditional processors, and possibly exploiting a greater level of parallelism. Creating parallel programs implemented in FPGAs is not trivial. This article describes the design, implementation, and first results of the Python-based compiler.
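To illustrate the kind of mapping such a compiler performs, a one-line Python arithmetic function can be translated into a VHDL-like entity by walking the abstract syntax tree. This toy sketch is not the compiler described in the paper, only an illustration of the concept:

    import ast

    # Toy sketch of the Python-to-VHDL mapping idea (not the paper's
    # compiler): translate a single-expression function into a VHDL-like
    # entity declaration. Requires Python 3.9+ for ast.unparse.
    def to_vhdl(source):
        func = ast.parse(source).body[0]           # the function definition
        args = [a.arg for a in func.args.args]     # input port names
        expr = ast.unparse(func.body[0].value)     # the returned expression
        ports = ";\n    ".join(f"{a} : in  integer" for a in args)
        return (f"entity {func.name} is\n"
                f"  port (\n    {ports};\n    result : out integer\n  );\n"
                f"end {func.name};\n"
                f"-- architecture body would assign: result <= {expr};")

    print(to_vhdl("def mac(a, b, c):\n    return a * b + c"))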
Software Tools for Development on the Peregrine System | High-Performance Computing | NREL
Software tools are available on the Peregrine system to build and manage software at the source code level. The "Cross-Platform Make" (CMake) package is from Kitware, and SCons is a modern software build tool based on Python.
Dynamic Weather Routes Architecture Overview
NASA Technical Reports Server (NTRS)
Eslami, Hassan; Eshow, Michelle
2014-01-01
This overview presents the high-level software architecture of Dynamic Weather Routes (DWR), based on the CTAS software framework and the Direct-To automation tool. The document also covers external and internal data flows, the required datasets, changes to the Direct-To software for DWR, collection of software statistics, and the code structure.
Software and electronic developments for TUG - T60 robotic telescope
NASA Astrophysics Data System (ADS)
Parmaksizoglu, M.; Dindar, M.; Kirbiyik, H.; Helhel, S.
2014-12-01
A robotic telescope is a telescope that can make observations without hands-on human control. Its low-level behavior is automatic and computer-controlled. Robotic telescopes usually run under the control of a scheduler, which provides high-level control by selecting astronomical targets for observation. The TUBITAK National Observatory (TUG) T60 robotic telescope is controlled by the open-source OCAAS software, now named TALON. This study introduces improvements to the TALON software along with new electronic and mechanical designs. The design changes and software improvements were implemented in the T60 telescope control software and successfully tested on the real system.
ACTS: from ATLAS software towards a common track reconstruction software
NASA Astrophysics Data System (ADS)
Gumpert, C.; Salzburger, A.; Kiehn, M.; Hrdinka, J.; Calace, N.; ATLAS Collaboration
2017-10-01
Reconstruction of charged particles’ trajectories is a crucial task for most particle physics experiments. The high instantaneous luminosity achieved at the LHC leads to a high number of proton-proton collisions per bunch crossing, which has put the track reconstruction software of the LHC experiments through a thorough test. Preserving track reconstruction performance under increasingly difficult experimental conditions, while keeping the usage of computational resources at a reasonable level, is an inherent problem for many HEP experiments. Exploiting concurrent algorithms and using multivariate techniques for track identification are the primary strategies to achieve that goal. Starting from current ATLAS software, the ACTS project aims to encapsulate track reconstruction software into a generic, framework- and experiment-independent software package. It provides a set of high-level algorithms and data structures for performing track reconstruction tasks as well as fast track simulation. The software is developed with special emphasis on thread-safety to support parallel execution of the code and data structures are optimised for vectorisation to speed up linear algebra operations. The implementation is agnostic to the details of the detection technologies and magnetic field configuration which makes it applicable to many different experiments.
NASA Technical Reports Server (NTRS)
1992-01-01
This standard specifies the software assurance program for the provider of software. It also delineates the assurance activities for the provider and the assurance data that are to be furnished by the provider to the acquirer. In any software development effort, the provider is the entity or individual that actually designs, develops, and implements the software product, while the acquirer is the entity or individual who specifies the requirements and accepts the resulting products. This standard specifies at a high level an overall software assurance program for software developed for and by NASA. Assurance includes the disciplines of quality assurance, quality engineering, verification and validation, nonconformance reporting and corrective action, safety assurance, and security assurance. The application of these disciplines during a software development life cycle is called software assurance. Subsequent lower-level standards will specify the specific processes within these disciplines.
The Diamond Beamline Controls and Data Acquisition Software Architecture
NASA Astrophysics Data System (ADS)
Rees, N.
2010-06-01
The software for the Diamond Light Source beamlines [1] is based on two complementary software frameworks: low-level control is provided by the Experimental Physics and Industrial Control System (EPICS) framework [2][3], and the high-level user interface is provided by the Java-based Generic Data Acquisition system, or GDA [4][5]. EPICS provides a widely used, robust, generic interface across a wide range of hardware, with user interfaces focused on serving the needs of engineers and beamline scientists who require detailed low-level views of all aspects of the beamline control systems. GDA provides a high-level system that combines an understanding of scientific concepts, such as reciprocal lattice coordinates, a flexible Python-syntax scripting interface through which scientific users control their data acquisition, and graphical user interfaces where necessary. This paper describes the beamline software architecture in more detail, highlighting how these complementary frameworks provide a flexible system that can accommodate a wide range of requirements.
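As an illustration of the scripting style such a high-level layer offers, a step scan can be expressed in a few lines. The sketch below mimics the idea in plain Python; the real GDA Jython API differs, and the motor/detector objects here are assumed to exist with these invented methods:

    # Plain-Python mimic of a step scan as scripted in a GDA-like layer.
    def scan(motor, start, stop, step, detector, exposure_s):
        n_points = int(round((stop - start) / step)) + 1
        for i in range(n_points):
            position = start + i * step
            motor.move_to(position)              # move and wait for the axis
            counts = detector.read(exposure_s)   # acquire for exposure_s seconds
            print(f"{position:10.4f} {counts:12.1f}")

    # Usage (with hypothetical beamline objects):
    # scan(energy, 10.0, 11.0, 0.01, diode, 1.0)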
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laros, James H.; Grant, Ryan; Levenhagen, Michael J.
Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
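A portable interface of the kind proposed might look, in spirit, like the following Python sketch; all names are invented for illustration, and the actual Power API specification is defined for C:

    # Illustrative sketch of a portable power measurement/control interface
    # in the spirit of the proposal; names are invented, not the real API.
    class PowerObject:
        """One measurable/controllable component, e.g. 'node0.cpu0'."""
        def __init__(self, name):
            self.name = name

        def read_power_watts(self):
            raise NotImplementedError   # supplied by a platform plugin

        def set_power_cap_watts(self, cap):
            raise NotImplementedError   # supplied by a platform plugin

    # Every layer, from facility manager down to application, programs
    # against the same interface and swaps in a platform implementation.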
NASA Astrophysics Data System (ADS)
Kristianti, Y.; Prabawanto, S.; Suhendra, S.
2017-09-01
This study aims to examine the critical thinking ability of students who learn mathematics with the ASSURE learning model assisted by Autograph software. The design of this study was experimental, with pre-test and post-test control groups. The experimental group received mathematics instruction with the ASSURE model assisted by Autograph software, and the control group received mathematics instruction with the conventional model. The data were obtained through critical thinking skills tests. The research was conducted at the junior high school level; the population comprised students of a junior high school in Subang Regency in the 2016/2017 school year, and the sample consisted of two grade VIII classes from that school. The research data were analyzed quantitatively. The quantitative analysis was performed on the normalized gain between the two sample groups using a one-way ANOVA test. The results show that mathematics learning with the ASSURE model assisted by Autograph software can improve the critical thinking ability of junior high school students. Mathematics learning using the ASSURE model assisted by Autograph software is significantly better at improving the critical thinking skills of junior high school students than the conventional model.
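For reference, the reported analysis, a one-way ANOVA on normalized gain between the two groups, can be reproduced in a few lines of Python; the scores below are invented for illustration:

    # Worked sketch of the reported analysis: one-way ANOVA on normalized
    # gain, (post - pre) / (max - pre). All scores below are invented.
    from scipy.stats import f_oneway

    def normalized_gain(pre, post, max_score=100.0):
        return (post - pre) / (max_score - pre)

    assure_autograph = [normalized_gain(40, 80), normalized_gain(55, 85),
                        normalized_gain(50, 75)]
    conventional     = [normalized_gain(45, 60), normalized_gain(50, 65),
                        normalized_gain(40, 55)]

    f_stat, p_value = f_oneway(assure_autograph, conventional)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # small p -> groups differ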
Framework for Development of Object-Oriented Software
NASA Technical Reports Server (NTRS)
Perez-Poveda, Gus; Ciavarella, Tony; Nieten, Dan
2004-01-01
The Real-Time Control (RTC) Application Framework is a high-level software framework written in C++ that supports the rapid design and implementation of object-oriented application programs. This framework provides built-in functionality that solves common software development problems within distributed client-server, multi-threaded, and embedded programming environments. When using the RTC Framework to develop software for a specific domain, designers and implementers can focus entirely on the details of the domain-specific software rather than on creating custom solutions, utilities, and frameworks for the complexities of the programming environment. The RTC Framework was originally developed as part of a Space Shuttle Launch Processing System (LPS) replacement project called the Checkout and Launch Control System (CLCS). As a result of the framework's development, CLCS software development time was reduced by 66 percent. The framework is generic enough for developing applications outside of the launch-processing domain. Other applicable high-level domains include command and control systems and simulation/training systems.
Cost evaluation of a DSN high level real-time language
NASA Technical Reports Server (NTRS)
Mckenzie, M.
1977-01-01
The hypothesis that the implementation of a DSN High Level Real Time Language will reduce real time software expenditures is explored. The High Level Real Time Language is found to be both affordable and cost-effective.
A Software Application for Assessing Readability in the Japanese EFL Context
ERIC Educational Resources Information Center
Ozasa, Toshiaki; Weir, George R. S.; Fukui, Masayasu
2010-01-01
We have been engaged in developing a readability index and its application software attuned for Japanese EFL learners. The index program, Ozasa-Fukui Year Level Program, Ver. 1.0, was used in developing the readability metric Ozasa-Fukui Year Level Index but tended to assume a high level of computer knowledge in its users. As a result, the…
The need for scientific software engineering in the pharmaceutical industry
NASA Astrophysics Data System (ADS)
Luty, Brock; Rose, Peter W.
2017-03-01
Scientific software engineering is a distinct discipline from both computational chemistry project support and research informatics. A scientific software engineer not only has a deep understanding of the science of drug discovery but also the desire, skills and time to apply good software engineering practices. A good team of scientific software engineers can create a software foundation that is maintainable, validated and robust. If done correctly, this foundation enables the organization to investigate new and novel computational ideas with a very high level of efficiency.
Design, Development, and Automated Verification of an Integrity-Protected Hypervisor
2012-07-16
Hypervisors are a popular mechanism for implementing software virtualization. Since hypervisors execute at a very high privilege level, they must be secure. A fundamental security... using the CBMC model checker. CBMC verified XMHF's implementation, about 4700 lines of C code, in about 80 seconds using less than 2GB of RAM.
High-energy physics software parallelization using database techniques
NASA Astrophysics Data System (ADS)
Argante, E.; van der Stok, P. D. V.; Willers, I.
1997-02-01
A programming model for software parallelization, called CoCa, is introduced that copes with problems caused by typical features of high-energy physics software. By basing CoCa on the database transaction paradigm, the complexity induced by the parallelization is largely transparent to the programmer, resulting in a higher level of abstraction than native message passing software. CoCa is implemented on a Meiko CS-2 and on a SUN SPARCcenter 2000 parallel computer. On the CS-2, the performance is comparable with that of native PVM and MPI.
Software Management Environment (SME) concepts and architecture, revision 1
NASA Technical Reports Server (NTRS)
Hendrick, Robert; Kistler, David; Valett, Jon
1992-01-01
This document presents the concepts and architecture of the Software Management Environment (SME), developed for the Software Engineering Branch of the Flight Dynamics Division (FDD) of GSFC. The SME provides an integrated set of experience-based management tools that can assist software development managers in managing and planning flight dynamics software development projects. This document provides a high-level description of the types of information required to implement such an automated management tool.
Teaching Robotics Software with the Open Hardware Mobile Manipulator
ERIC Educational Resources Information Center
Vona, M.; Shekar, N. H.
2013-01-01
The "open hardware mobile manipulator" (OHMM) is a new open platform with a unique combination of features for teaching robotics software and algorithms. On-board low- and high-level processors support real-time embedded programming and motor control, as well as higher-level coding with contemporary libraries. Full hardware designs and…
NASA Astrophysics Data System (ADS)
Steiger, Damian S.; Haener, Thomas; Troyer, Matthias
Quantum computers promise to transform our notions of computation by offering a completely new paradigm. A high level quantum programming language and optimizing compilers are essential components to achieve scalable quantum computation. In order to address this, we introduce the ProjectQ software framework - an open source effort to support both theorists and experimentalists by providing intuitive tools to implement and run quantum algorithms. Here, we present our ProjectQ quantum compiler, which compiles a quantum algorithm from our high-level Python-embedded language down to low-level quantum gates available on the target system. We demonstrate how this compiler can be used to control actual hardware and to run high-performance simulations.
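A minimal ProjectQ program, following the framework's published examples, shows the high-level language in action: the compiler engine turns the abstract gates into gates available on the back-end (here the built-in simulator):

    from projectq import MainEngine
    from projectq.ops import H, Measure

    eng = MainEngine()             # default compiler with simulator back-end
    qubit = eng.allocate_qubit()   # allocate one logical qubit
    H | qubit                      # apply a Hadamard gate
    Measure | qubit                # measure in the computational basis
    eng.flush()                    # push the circuit through the compiler
    print("Measured:", int(qubit)) # 0 or 1, each with probability 1/2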
Student Use of Scaffolding Software: Relationships with Motivation and Conceptual Understanding
ERIC Educational Resources Information Center
Butler, Kyle A.; Lumpe, Andrew
2008-01-01
This study was designed to theoretically articulate and empirically assess the role of computer scaffolds. In this project, several examples of educational software were developed to scaffold the learning of students performing high level cognitive activities. The software used in this study, Artemis, focused on scaffolding the learning of…
The Impact of Software on Associate Degree Programs in Electronic Engineering Technology.
ERIC Educational Resources Information Center
Hata, David M.
1986-01-01
Assesses the range and extent of computer assisted instruction software available in electronic engineering technology education. Examines the need for software skills in four areas: (1) high-level languages; (2) assembly language; (3) computer-aided engineering; and (4) computer-aided instruction. Outlines strategies for the future in three…
Colossal Tooling Design: 3D Simulation for Ergonomic Analysis
NASA Technical Reports Server (NTRS)
Hunter, Steve L.; Dischinger, Charles; Thomas, Robert E.; Babai, Majid
2003-01-01
The application of high-level 3D simulation software to the design phase of colossal mandrel tooling for composite aerospace fuel tanks was undertaken to discover and resolve safety and human engineering problems. The analyses were conducted to determine the safety, ergonomic and human engineering aspects of the disassembly process of the fuel tank composite shell mandrel. High-level three-dimensional graphics software, incorporating various ergonomic analysis algorithms, was utilized to determine whether the process was within safety and health boundaries for the workers carrying out these tasks. In addition, the graphical software was extremely helpful in identifying material handling equipment and devices for the mandrel tooling assembly/disassembly process.
NASA Astrophysics Data System (ADS)
Silva, N.; Esper, A.
2012-01-01
The work presented in this article represents the results of applying RAMS analysis to a critical space control system, at both system and software levels. The system-level RAMS analysis allowed the assignment of criticalities to the high-level components, which was further refined by a tailored software-level RAMS analysis. The importance of the software-level RAMS analysis in identifying new failure modes, and its impact on the system-level RAMS analysis, is discussed. Changes to the software architecture have also been recommended in order to reduce the criticality of the software components to an acceptable minimum. The dependability analysis was performed in accordance with ECSS-Q-ST-80, which had to be tailored and complemented in some aspects. This tailoring is also detailed in the article, and lessons learned from its application are shared, underlining their importance for space system safety evaluations. The paper presents the applied techniques, the relevant results obtained, the effort required for performing the tasks, and the planned strategy for ROI estimation, as well as the soft skills required and acquired during these activities.
Development of Efficient Authoring Software for e-Learning Contents
NASA Astrophysics Data System (ADS)
Kozono, Kazutake; Teramoto, Akemi; Akiyama, Hidenori
Content creation for e-Learning systems has become an important problem. The contents of e-Learning should include figure and voice media to achieve a high level of educational effect. However, the use of figures and voice complicates the operation of authoring software considerably. New authoring software, which can build e-Learning contents efficiently, has been developed to solve this problem. This paper reports the development results for the authoring software.
2014-08-01
technologies and processes to achieve a required level of confidence that software systems and services function in the intended manner. 1.3 Security Example... that took three high-voltage lines out of service, and a software failure (a race condition) that disabled the computing service that notified the... service had failed. Instead of analyzing the details of the alarm server failure, the reviewers asked why the following software assurance claim had
1993-12-01
Unclassified/Unlimited 13. ABSTRACT ~Maximum 2W0 worr*J The purpose of this thesis is to develop a high-level model to create seli"adapting software which...Department of Computer Science ABSTRACT The purpose of this thesis is to develop a high-level model to create self-adapting software which teaches learning...stimulating and demanding. The power of the system model described herein is that it can vary as needed by the individual student. The system will
Zhang, Pei-feng; Hu, Yuan-man; He, Hong-shi
2010-05-01
The demand for accurate and up-to-date spatial information on urban buildings is becoming more and more important for urban planning, environmental protection, and other applications. Today's commercial high-resolution satellite imagery offers the potential to extract three-dimensional information about urban buildings. This paper extracted the three-dimensional information of urban buildings from QuickBird imagery and validated the precision of the extraction using the Barista software. It was shown that extracting three-dimensional building information from high-resolution satellite imagery with Barista has the advantages of modest demands on professional expertise, broad applicability, simple operation, and high precision. Point-positioning and height-determination accuracy at the one-pixel level could be achieved if the digital elevation model (DEM) and sensor orientation model were sufficiently precise and the off-nadir view angle was favorable.
NASA Technical Reports Server (NTRS)
Hamilton, M.
1973-01-01
The entry guidance software functional requirements (requirements design phase), its architectural requirements (specifications design phase), and the verified entry guidance code are discussed. It was found that the proper integration of designs at both the requirements and specifications levels is a high-priority consideration.
MYST: a comprehensive high-level AO control tool for GeMS
NASA Astrophysics Data System (ADS)
Rigaut, F.; Neichel, B.; Bec, M.; Garcia-Rissman, A.
2010-07-01
Myst is the Gemini MCAO System (GeMS) high-level control GUI. It is written in Yorick, Python and C. In this paper, we review the software architecture of Myst and its primary purposes, which are many: real-time display; high-level diagnostics; calibrations; and execution/sequencing of high-level actions (closing the loop, coordinating dithers, etc.).
CrossTalk: The Journal of Defense Software Engineering. Volume 18, Number 11
2005-11-01
languages. Our discipline of software engineering has really experienced phenomenal growth right before our eyes. A sign that software design has... approach on a high level of abstraction. The main emphasis is on the identification and allocation of a needed functionality (e.g., a target tracker), rather... messaging software that is the backbone of teenage culture. As increasing security constraints will increase the cost of developing and maintaining any
MYRaf: A new Approach with IRAF for Astronomical Photometric Reduction
NASA Astrophysics Data System (ADS)
Kilic, Y.; Shameoni Niaei, M.; Özeren, F. F.; Yesilyaprak, C.
2016-12-01
In this study, the design and development of the MYRaf software for astronomical photometric reduction are presented. MYRaf is an easy-to-use, reliable and fast GUI tool for IRAF aperture photometry. MYRaf is an important step towards automated software pipelines for robotic telescopes, and it uses IRAF, PyRAF, matplotlib, ginga, alipy, and SExtractor, built with the general-purpose, high-level programming language Python and the Qt framework.
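The kind of IRAF aperture-photometry sequence that MYRaf automates can be sketched in PyRAF as follows; the package loading and phot task follow common IRAF usage, but the parameter values and file names here are invented:

    # Sketch of the IRAF aperture-photometry sequence a tool like MYRaf
    # wraps; file names are invented and parameters are illustrative.
    from pyraf import iraf

    iraf.noao(_doprint=0)        # load the NOAO package quietly
    iraf.digiphot(_doprint=0)    # digital stellar photometry
    iraf.apphot(_doprint=0)      # aperture photometry tasks

    iraf.phot(image="field.fits", coords="stars.coo",
              output="field.mag", interactive="no")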
Robust, Flexible Motion Control for the Mars Exploration Rovers
NASA Technical Reports Server (NTRS)
Maimone, Mark; Biesiadecki, Jeffrey
2007-01-01
The Mobility Flight Software, running on computers aboard the Mars Exploration Rover (MER) robotic vehicles Spirit and Opportunity, affords the robustness and flexibility of control needed to enable safe and effective operation of these vehicles in traversing natural terrain. It can make the vehicles perform specific maneuvers commanded from Earth, and/or can autonomously administer multiple aspects of mobility, including choice of motion, measurement of actual motion, and even selection of targets to be approached. Motion of a vehicle can be commanded through multiple layers of control, ranging from motor control at a low level, through direct drive operations (e.g., motion along a circular arc, motion along a straight line, or a turn in place) at an intermediate level, to goal-position driving (that is, driving to a specified location) at a high level. The software can also perform high-level assessment of terrain and selection of safe paths across the terrain: this involves processing of the digital equivalent of a local traversability map generated from images acquired by stereoscopic pairs of cameras aboard the vehicles. Other functions of the software include interacting with the rest of the MER flight software and performing safety checks.
Reconfigurable Autonomy for Future Planetary Rovers
NASA Astrophysics Data System (ADS)
Burroughes, Guy
Extra-terrestrial planetary rover systems are uniquely remote, constrained with regard to communication, environmental uncertainty, and limited physical resources, and required to exhibit a high level of fault tolerance and resistance to hardware degradation. This thesis presents a novel self-reconfiguring autonomous software architecture designed to meet the needs of extra-terrestrial planetary environments. At runtime it can safely reconfigure low-level control systems, high-level decisional autonomy systems, and the managed software architecture. The architecture can perform automatic verification and validation of self-reconfiguration at run-time, and it enables a system to be self-optimising, self-protecting, and self-healing. A novel self-monitoring system, which is non-invasive, efficient, tunable, and autonomously deploying, is also presented. The architecture was validated through the use case of a highly autonomous extra-terrestrial planetary exploration rover. Three major forms of reconfiguration were demonstrated and tested: first, high-level adjustment of the system's internal architecture and goals; second, software module modification; and third, low-level alteration of hardware control in response to hardware degradation and environmental change. The architecture was demonstrated to be robust and effective in a Mars sample return mission use case testing the operational aspects of a novel, reconfigurable guidance, navigation, and control system for a planetary rover, all operating in concert through a scenario that required reconfiguration of all elements of the system.
Generating Safety-Critical PLC Code From a High-Level Application Software Specification
NASA Technical Reports Server (NTRS)
2008-01-01
The benefits of automatic application code generation are widely accepted within the software engineering community. These benefits include a raised abstraction level for application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at Kennedy Space Center recognized the need for PLC code generation while developing the new ground checkout and launch processing system, called the Launch Control System (LCS). Engineers developed a process and a prototype software tool that automatically translates a high-level representation or specification of application software into ladder logic that executes on a PLC. All the computer hardware in the LCS is planned to be commercial off-the-shelf (COTS), including industrial controllers or PLCs that are connected to the sensors and end items out in the field. Most of the software in LCS is also planned to be COTS, with only small adapter software modules that must be developed in order to interface between the various COTS software products. A domain-specific language (DSL) is a programming language designed to perform tasks and to solve problems in a particular domain, such as ground processing of launch vehicles. The LCS engineers created a DSL for developing test sequences for ground checkout and launch operations of future launch vehicle and spacecraft elements, and they are developing a tabular specification format that uses the DSL keywords and functions familiar to the ground and flight system users. The tabular specification format, or tabular spec, allows most ground and flight system users to document how the application software is intended to function and requires little or no software programming knowledge or experience. A small sample from a prototype tabular spec application is shown.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laros III, James H.; DeBonis, David; Grant, Ryan
Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
Autonomy Software: V&V Challenges and Characteristics
NASA Technical Reports Server (NTRS)
Schumann, Johann; Visser, Willem
2006-01-01
The successful operation of unmanned air vehicles requires software with a high degree of autonomy. Only if high-level functions can be carried out without human control and intervention can complex missions in a changing and potentially unknown environment be carried out successfully. Autonomy software is highly mission- and safety-critical: failures caused by flaws in the software can not only jeopardize the mission but could also endanger human life (e.g., a crash of a UAV in a densely populated area). Due to its large size, high complexity, and use of specialized algorithms (planners, constraint solvers, etc.), autonomy software poses specific challenges for verification, validation, and certification. We have carried out a survey among researchers and scientists at NASA to study these issues. In this paper, we present the major results of this study, discussing the broad spectrum of notions and characteristics of autonomy software and its challenges for design and development. A main focus of this survey was to evaluate verification and validation (V&V) issues and challenges compared to the development of "traditional" safety-critical software. We discuss important issues in V&V of autonomous software and advanced V&V tools which can help to mitigate software risks. The results of this survey will help to identify and understand safety concerns in autonomy software and will lead to improved strategies for mitigating these risks.
Khomtchouk, Bohdan B; Van Booven, Derek J; Wahlestedt, Claes
2014-01-01
The graphical visualization of gene expression data using heatmaps has become an integral component of modern-day medical research. Heatmaps are used extensively to plot quantitative differences in gene expression levels, such as those measured with RNAseq and microarray experiments, to provide qualitative large-scale views of the transcriptomic landscape. Creating high-quality heatmaps is a computationally intensive task, often requiring considerable programming experience, particularly when customizing features to the specific dataset at hand. Software to create publication-quality heatmaps is developed with the R programming language, the C++ programming language, and the OpenGL application programming interface (API) to create industry-grade high performance graphics. We create a graphical user interface (GUI) software package called HeatmapGenerator for Windows OS and Mac OS X as an intuitive, user-friendly alternative for researchers with minimal prior coding experience, allowing them to create publication-quality heatmaps using R graphics without sacrificing their desired level of customization. The simplicity of HeatmapGenerator is that it only requires the user to upload a preformatted input file and download the publicly available R software language, among a few other operating-system-specific requirements. Advanced features such as color, text labels, scaling, legend construction, and even database storage can be easily customized with no prior programming knowledge. We provide an intuitive and user-friendly software package, HeatmapGenerator, to create high-quality, customizable heatmaps generated using the high-resolution color graphics capabilities of R. The software is available for Microsoft Windows and Apple Mac OS X. HeatmapGenerator is released under the GNU General Public License and publicly available at: http://sourceforge.net/projects/heatmapgenerator/. The Mac OS X direct download is available at: http://sourceforge.net/projects/heatmapgenerator/files/HeatmapGenerator_MAC_OSX.tar.gz/download. The Windows OS direct download is available at: http://sourceforge.net/projects/heatmapgenerator/files/HeatmapGenerator_WINDOWS.zip/download.
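The underlying idea, rendering a genes-by-samples expression matrix as a color-coded grid, is independent of the R implementation; a concept sketch in Python with matplotlib, using fake data (this is not the tool's own API):

    # Concept sketch only: the published tool is built on R graphics, but
    # the core idea (a genes-by-samples matrix as a color grid) looks like
    # this in Python/matplotlib, with invented data.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(seed=0)
    expression = rng.normal(size=(10, 6))    # 10 genes x 6 samples, invented

    fig, ax = plt.subplots()
    im = ax.imshow(expression, aspect="auto", cmap="RdBu_r")
    ax.set_xlabel("sample")
    ax.set_ylabel("gene")
    fig.colorbar(im, ax=ax, label="expression level")
    fig.savefig("heatmap.png", dpi=300)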
Software architecture of the III/FBI segment of the FBI's integrated automated identification system
NASA Astrophysics Data System (ADS)
Booker, Brian T.
1997-02-01
This paper describes the software architecture of the Interstate Identification Index (III/FBI) segment of the FBI's Integrated Automated Fingerprint Identification System (IAFIS). IAFIS is currently under development, with deployment to begin in 1998. III/FBI will provide the repository of criminal history and photographs for criminal subjects, as well as identification data for military and civilian federal employees. Services provided by III/FBI include maintenance of the criminal and civil data, subject searches of the criminal and civil data, and response generation services for IAFIS. The III/FBI software will comprise COTS products and an estimated 250,000 lines of developed C code. This paper describes: (1) the high-level requirements of the III/FBI software; (2) the decomposition of the III/FBI software into Computer Software Configuration Items (CSCIs); (3) the top-level design of the III/FBI CSCIs; and (4) the relationships among the developed CSCIs and the COTS products that will comprise the III/FBI software.
The FoReVer Methodology: A MBSE Framework for Formal Verification
NASA Astrophysics Data System (ADS)
Baracchi, Laura; Mazzini, Silvia; Cimatti, Alessandro; Tonetta, Stefano; Garcia, Gerald
2013-08-01
The need for a high level of confidence and operational integrity in critical space (software) systems is well recognized in the space industry and has been addressed so far through rigorous system and software development processes and stringent verification and validation regimes. The Model Based Space System Engineering (MBSSE) process derived in the System and Software Functional Requirement Techniques study (SSFRT) focused on the application of model-based engineering technologies to support the space system and software development processes, from mission-level requirements to software implementation, through model refinements and translations. In this paper we report on our work in the ESA-funded FoReVer project, where we aim at developing methodological, theoretical and technological support for a systematic approach to space avionics system development in phases 0/A/B/C. FoReVer enriches the MBSSE process with contract-based formal verification of properties at different stages, from system to software, through a step-wise refinement approach, with support for a Software Reference Architecture.
Development, Validation and Integration of the ATLAS Trigger System Software in Run 2
NASA Astrophysics Data System (ADS)
Keyes, Robert; ATLAS Collaboration
2017-10-01
The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware, and software, associated with various sub-detectors, that must seamlessly cooperate in order to select one collision of interest out of every 40,000 delivered by the LHC every millisecond. These proceedings discuss the challenges, organization and work flow of the ongoing trigger software development, validation, and deployment. The goal of the development is to ensure that the most up-to-date algorithms are used to optimize the performance of the experiment. The goal of the validation is to ensure the reliability and predictability of the software performance. Integration tests are carried out to ensure that the software deployed to the online trigger farm during data-taking runs as desired. Trigger software is validated by emulating online conditions using a benchmark run and mimicking the reconstruction that occurs during normal data-taking. This exercise is computationally demanding and thus runs on the ATLAS high-performance computing grid with high priority. Performance metrics, ranging from low-level memory and CPU requirements to distributions and efficiencies of high-level physics quantities, are visualized and validated by a range of experts. This is a multifaceted critical task that ties together many aspects of the experimental effort and thus directly influences the overall performance of the ATLAS experiment.
Software Quality Control at Belle II
NASA Astrophysics Data System (ADS)
Ritter, M.; Kuhr, T.; Hauth, T.; Gebard, T.; Kristof, M.; Pulvermacher, C.;
2017-10-01
Over the last seven years the software stack of the next generation B factory experiment Belle II has grown to over one million lines of C++ and Python code, counting only the part included in offline software releases. There are several thousand commits to the central repository by about 100 individual developers per year. Keeping the software stack coherent and of such high quality that it can be sustained and used efficiently for data acquisition, simulation, reconstruction, and analysis over the lifetime of the Belle II experiment is a challenge. A set of tools is employed to monitor the quality of the software and provide fast feedback to the developers. They are integrated in a machinery that is controlled by a buildbot master and automates the quality checks. The tools include different compilers, cppcheck, the clang static analyzer, valgrind memcheck, doxygen, a geometry overlap checker, a check for missing or extra library links, unit tests, steering-file-level tests, a sophisticated high-level validation suite, and an issue tracker. The technological development infrastructure is complemented by organizational means to coordinate the development.
Kimiafar, Khalil; Sarbaz, Masoumeh; Sheikhtaheri, Abbas
2016-01-01
Background: There are no general strategies or tools to evaluate daily lesson plans; assessments conducted using traditional methods usually cover only course plans. This study aimed to evaluate the strengths and weaknesses of online survey software in collecting data on education in medical fields and the application of such software to evaluate students' views and modify lesson plans. Methods: After an investigation of the available online survey software, esurveypro was selected for assessing daily lesson plans. After using the software for one semester, a questionnaire was prepared to assess the advantages and disadvantages of this method and students' views in a cross-sectional study. Results: The majority of the students (51.7%) rated the evaluation of classes per session (lesson plans) using the online survey as useful or very useful. About 51% (n=36) of the students considered this method effective in improving the management of each session, 67.1% (n=47) considered it effective in improving the management of sessions for the next semester, and 51.4% (n=36) said it had a high impact on improving the educational content of subsequent sessions. Finally, 61.4% (n=43) of the students expressed high or very high levels of satisfaction with using an online survey at each session. Conclusion: The use of online surveys may be appropriate for improving lesson plans and educational planning at different levels. This method can be used for other evaluations and for assessing people's opinions at different levels of an educational system.
Open high-level data formats and software for gamma-ray astronomy
NASA Astrophysics Data System (ADS)
Deil, Christoph; Boisson, Catherine; Kosack, Karl; Perkins, Jeremy; King, Johannes; Eger, Peter; Mayer, Michael; Wood, Matthew; Zabalza, Victor; Knödlseder, Jürgen; Hassan, Tarek; Mohrmann, Lars; Ziegler, Alexander; Khelifi, Bruno; Dorner, Daniela; Maier, Gernot; Pedaletti, Giovanna; Rosado, Jaime; Contreras, José Luis; Lefaucheur, Julien; Brügge, Kai; Servillat, Mathieu; Terrier, Régis; Walter, Roland; Lombardi, Saverio
2017-01-01
In gamma-ray astronomy, a variety of data formats and proprietary software have traditionally been used, often developed for one specific mission or experiment. Especially for ground-based imaging atmospheric Cherenkov telescopes (IACTs), data and software are mostly private to the collaborations operating the telescopes. However, there is a general movement in science towards the use of open data and software. In addition, the next-generation IACT instrument, the Cherenkov Telescope Array (CTA), will be operated as an open observatory. We have created a GitHub organisation at https://github.com/open-gamma-ray-astro where we are developing high-level data format specifications. A public mailing list was set up at https://lists.nasa.gov/mailman/listinfo/open-gamma-ray-astro, and a first face-to-face meeting on the IACT high-level data model and formats took place in April 2016 in Meudon (France). This open multi-mission effort will help to accelerate the development of open data formats and open-source software for gamma-ray astronomy, leading to synergies in the development of analysis codes and eventually better scientific results (reproducible, multi-mission). This write-up presents this effort for the first time, explaining the motivation and context, the available resources and the process we use, as well as the status and planned next steps for the data format specifications. We hope that it will stimulate feedback and future contributions from the gamma-ray astronomy community.
Automated Translation of Safety Critical Application Software Specifications into PLC Ladder Logic
NASA Technical Reports Server (NTRS)
Leucht, Kurt W.; Semmel, Glenn S.
2008-01-01
The numerous benefits of automatic application code generation are widely accepted within the software engineering community. A few of these benefits include raising the abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at the NASA Kennedy Space Center (KSC) recognized the need for PLC code generation while developing their new ground checkout and launch processing system. They developed a process and a prototype software tool that automatically translates a high-level representation or specification of safety critical application software into ladder logic that executes on a PLC. This process and tool are expected to increase the reliability of the PLC code over that which is written manually, and may even lower life-cycle costs and shorten the development schedule of the new control system at KSC. This paper examines the problem domain and discusses the process and software tool that were prototyped by the KSC software engineers.
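The translation idea can be illustrated with a toy example: each row of a tabular spec becomes one ladder rung, with a contact on the verified input driving a coil on the commanded output. The row schema, keywords, and rung rendering below are hypothetical, not the KSC tool's actual format:

    # Toy sketch of spec-to-ladder translation; the row schema, keywords,
    # and rung rendering are hypothetical, not the KSC tool's format.
    spec_rows = [
        {"step": 1, "verify": "LOX_VALVE_OPEN",   "then_set": "FILL_ENABLE"},
        {"step": 2, "verify": "TANK_PRESSURE_OK", "then_set": "PUMP_START"},
    ]

    for row in spec_rows:
        # One rung per row: a contact on the verified input drives a coil
        # on the commanded output.
        print(f"RUNG {row['step']}: --| {row['verify']} |----( {row['then_set']} )--")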
ERIC Educational Resources Information Center
Hainey, Thomas; Connolly, Thomas M.; Stansfield, Mark; Boyle, Elizabeth A.
2011-01-01
A highly important part of software engineering education is requirements collection and analysis which is one of the initial stages of the Database Application Lifecycle and arguably the most important stage of the Software Development Lifecycle. No other conceptual work is as difficult to rectify at a later stage or as damaging to the overall…
Open Technology Approaches to Geospatial Interface Design
NASA Astrophysics Data System (ADS)
Crevensten, B.; Simmons, D.; Alaska Satellite Facility
2011-12-01
What problems do you not want your software developers to be solving? Choosing open technologies across the entire stack of software development, from low-level shared libraries to high-level user interaction implementations, is a way to help ensure that customized software yields innovative and valuable tools for Earth scientists. This demonstration will review developments in web application technologies and the recurring patterns of interaction design regarding exploration and discovery of geospatial data through Vertex, ASF's data portal interface, a project utilizing current open web application standards and technologies including HTML5, jQueryUI, Backbone.js and the Jasmine unit testing framework.
A Petri Net-Based Software Process Model for Developing Process-Oriented Information Systems
NASA Astrophysics Data System (ADS)
Li, Yu; Oberweis, Andreas
Aiming at increasing the flexibility, efficiency, effectiveness, and transparency of information processing and resource deployment in organizations to ensure customer satisfaction and high quality of products and services, process-oriented information systems (POIS) represent a promising realization form of computerized business information systems. Due to the complexity of POIS, explicit and specialized software process models are required to guide POIS development. In this chapter we characterize POIS with an architecture framework and present a Petri net-based software process model tailored for POIS development with consideration of organizational roles. As integrated parts of the software process model, we also introduce XML nets, a variant of high-level Petri nets, as a basic methodology for business process modeling, and an XML net-based software toolset providing comprehensive functionality for POIS development.
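The token-game semantics at the core of Petri nets (and, with XML documents attached to the tokens, of XML nets) can be sketched in a few lines of Python; the place and transition names are invented for illustration:

    # Minimal token-game sketch: a transition fires when every input place
    # holds a token, consuming from inputs and producing to outputs. XML
    # nets extend this by attaching structured XML documents to the tokens.
    marking = {"order_received": 1, "order_checked": 0}

    check_order = (["order_received"], ["order_checked"])  # (inputs, outputs)

    def fire(transition, marking):
        inputs, outputs = transition
        if all(marking[p] >= 1 for p in inputs):   # enabled?
            for p in inputs:
                marking[p] -= 1                    # consume tokens
            for p in outputs:
                marking[p] += 1                    # produce tokens
        return marking

    print(fire(check_order, marking))  # {'order_received': 0, 'order_checked': 1}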
Towards Portable Large-Scale Image Processing with High-Performance Computing.
Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A
2018-05-03
High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with differing hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability, from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.
The use of emulator-based simulators for on-board software maintenance
NASA Astrophysics Data System (ADS)
Irvine, M. M.; Dartnell, A.
2002-07-01
Traditionally, onboard software maintenance activities within the space sector are performed using hardware-based facilities. These facilities are developed around the use of hardware emulation or breadboards containing target processors. Some sort of environment is provided around the hardware to support the maintenance activities. However, these environments do not make it easy to set up the required test scenarios, particularly when the onboard software executes in a dynamic I/O environment, e.g. attitude control software or data handling software. In addition, the hardware and/or environment may not support the test set-up required during investigations into software anomalies (e.g., raising a spurious interrupt or failing memory), and the overall "visibility" of the executing software may be limited. The Software Maintenance Simulator (SOMSIM) is a tool that can support the traditional maintenance facilities. The following list contains some of the main benefits that SOMSIM can provide: a low-cost, flexible extension to an existing product, an operational simulator containing a software processor emulator; a system-level, high-fidelity test-bed in which the software "executes"; a high degree of control and configuration over the entire "system", including contingency conditions perhaps not possible with real hardware; and high visibility and control over execution of the emulated software. This paper describes the SOMSIM concept in more detail, and also describes the SOMSIM study being carried out for ESA/ESOC by VEGA IT GmbH.
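A minimal sketch of the kind of control SOMSIM-style emulation offers (this is not the SOMSIM interface; all names are invented): an instruction-stepped loop in which contingencies such as a spurious interrupt can be scheduled at an exact cycle, something a breadboard typically cannot do.

```python
# Illustrative sketch (not the SOMSIM API): an instruction-stepped
# emulator loop with fault-injection hooks of the kind the paper
# describes -- raising a spurious interrupt or failing a memory read.
class Emulator:
    def __init__(self, program):
        self.program, self.pc, self.faults = program, 0, {}

    def inject(self, cycle, fault):
        self.faults[cycle] = fault          # schedule a contingency

    def run(self, cycles):
        for cycle in range(cycles):
            if cycle in self.faults:
                print(f"cycle {cycle}: injected {self.faults[cycle]}")
            if self.pc < len(self.program):
                print(f"cycle {cycle}: exec {self.program[self.pc]}")
                self.pc += 1

emu = Emulator(["load r1", "add r1,r2", "store r1"])
emu.inject(1, "spurious interrupt IRQ3")
emu.run(3)
```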
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willenbring, James M.; Bartlett, Roscoe Ainsworth; Heroux, Michael Allen
2012-01-01
Software lifecycles are becoming an increasingly important issue for computational science and engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process, respecting the competing needs of research vs. production, cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but also are expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Here, we advocate three to four phases or maturity levels that address the appropriate handling of many issues associated with the transition from research to production software. The goals of this lifecycle model are to better communicate maturity levels with customers and to help to identify and promote Software Engineering (SE) practices that will help to improve productivity and produce better software. An important collection of software in this domain is Trilinos, which is used as the motivation and the initial target for this lifecycle model. However, many other related and similar CSE (and non-CSE) software projects can also make good use of this lifecycle model, especially those that use the TriBITS system. Indeed this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.
Prototyping with Data Dictionaries for Requirements Analysis.
1985-03-01
statistical packages and software for screen layout. These items work at a higher level than another category of prototyping tool, program generators... Program generators are software packages which, when given specifications, produce source listings, usually in a high order language such as COBOL... with users and this will not happen if he must stop to develop a detailed program. [Ref. 241] Hardware as well as software should be considered in
Final Report - Regulatory Considerations for Adaptive Systems
NASA Technical Reports Server (NTRS)
Wilkinson, Chris; Lynch, Jonathan; Bharadwaj, Raj
2013-01-01
This report documents the findings of a preliminary research study into new approaches to the software design assurance of adaptive systems. We suggest a methodology to overcome the software validation and verification difficulties posed by the underlying assumption of non-adaptive software in the requirements-based testing verification methods in RTCA/DO-178B and C. An analysis of the relevant RTCA/DO-178B and C objectives is presented, showing the reasons for the difficulties that arise in showing satisfaction of the objectives, and additional means by which they could be satisfied are suggested. We suggest that the software design assurance problem for adaptive systems is principally one of developing correct and complete high-level requirements and system-level constraints that define the necessary system functional and safety properties to assure the safe use of adaptive systems. We show how analytical techniques such as model-based design, mathematical modeling and formal or formal-like methods can be used to validate the high-level functional and safety requirements, establish the necessary constraints, and provide verification evidence for the satisfaction of requirements and constraints that supplements conventional testing. Finally, the report identifies the follow-on research topics needed to implement this methodology.
Software design for analysis of multichannel intracardial and body surface electrocardiograms.
Potse, Mark; Linnenbank, André C; Grimbergen, Cornelis A
2002-11-01
Analysis of multichannel ECG recordings (body surface maps (BSMs) and intracardial maps) requires special software. We created a software package and a user interface on top of a commercial data analysis package (MATLAB) by a combination of high-level and low-level programming. Our software was created to satisfy the needs of a diverse group of researchers. It can handle a large variety of recording configurations. It allows for interactive usage through a fast and robust user interface, and batch processing for the analysis of large amounts of data. The package is user-extensible, includes routines for both common and experimental data processing tasks, and works on several computer platforms. The source code is made intelligible using software for structured documentation and is available to the users. The package is currently used by more than ten research groups analysing ECG data worldwide.
Measuring and assessing maintainability at the end of high level design
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Morasca, Sandro; Basili, Victor R.
1993-01-01
Software architecture appears to be one of the main factors affecting software maintainability. Therefore, in order to predict and assess maintainability early in the development process, we need to be able to measure the high-level design characteristics that affect the change process. To this end, we propose a measurement approach based on precise assumptions derived from the change process; it builds on Object-Oriented Design principles and is partially language independent. We define metrics for cohesion, coupling, and visibility in order to capture the difficulty of isolating, understanding, designing and validating changes.
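The abstract does not reproduce the metric definitions, so the following is only a simplified count-based sketch of the idea: coupling as the number of outside modules a class uses, cohesion as the fraction of method pairs that share attributes.

```python
# Simplified illustration (not the paper's exact definitions): count
# dependencies between design-level classes as a coupling measure, and
# shared-attribute use between methods as a cohesion measure.
design = {
    "Telemetry": {"uses": ["Bus", "Clock"],
                  "methods": {"send": {"buf"}, "flush": {"buf"}}},
    "Bus":       {"uses": [],
                  "methods": {"write": {"queue"}, "reset": {"queue"}}},
}

def coupling(cls):
    return len(design[cls]["uses"])

def cohesion(cls):
    methods = list(design[cls]["methods"].values())
    pairs = [(a, b) for i, a in enumerate(methods) for b in methods[i + 1:]]
    shared = sum(1 for a, b in pairs if a & b)   # pairs sharing attributes
    return shared / len(pairs) if pairs else 1.0

for cls in design:
    print(cls, "coupling:", coupling(cls), "cohesion:", cohesion(cls))
```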
NASA Astrophysics Data System (ADS)
Candia, Sante; Lisio, Giovanni; Campolo, Giovanni; Pascucci, Dario
2010-08-01
The Avionics Software (ASW), in charge of controlling the Low Earth Orbit (LEO) Spacecraft PRIMA Platform (Piattaforma Ri-configurabile Italiana Multi-Applicativa), is evolving towards a highly modular and re-usable architecture based on an architectural framework allowing the effective integration of the software building blocks (SWBBs) that provide the on-board control functions. In recent years, the PRIMA ASW design and production processes have been improved to reach the following objectives: (a) at PUS Services level, separation of the mission-independent software mechanisms from the mission-dependent configuration information; (b) at Application level, identification of mission-independent recurrent functions to promote abstraction and obtain a more efficient and safe ASW production, with positive implications also for the software validation activities. This paper is dedicated to the characterisation activity performed at Application level for a software component abstracting a set of functions for the generic On-Board Assembly (OBA), a set of hardware units used to deliver an on-board service. Moreover, the ASW production process is outlined to show how it looks after the introduction of the new design features.
Kimiafar, Khalil; Sarbaz, Masoumeh; Sheikhtaheri, Abbas
2016-01-01
Background: There are no general strategies or tools to evaluate daily lesson plans; however, assessments conducted using traditional methods usually include course plans. This study aimed to evaluate the strengths and weaknesses of online survey software in collecting data on education in medical fields and the application of such software to evaluate students' views and modify lesson plans. Methods: After investigating the available online survey software, esurveypro was selected for assessing daily lesson plans. After using the software for one semester, a questionnaire was prepared to assess the advantages and disadvantages of this method and students' views in a cross-sectional study. Results: The majority of the students (51.7%) rated the evaluation of classes per session (lesson plans) using the online survey as useful or very useful. About 51% (n=36) of the students considered this method effective in improving the management of each session, 67.1% (n=47) considered it effective in improving the management of sessions for the next semester, and 51.4% (n=36) said it had a high impact on improving the educational content of subsequent sessions. Finally, 61.4% (n=43) of the students expressed high and very high levels of satisfaction with using an online survey at each session. Conclusion: The use of online surveys may be appropriate to improve lesson plans and educational planning at different levels. This method can be used for other evaluations and for assessing people's opinions at different levels of an educational system. PMID:28491839
The Legacy of Space Shuttle Flight Software
NASA Technical Reports Server (NTRS)
Hickey, Christopher J.; Loveall, James B.; Orr, James K.; Klausman, Andrew L.
2011-01-01
The initial goals of the Space Shuttle Program required that the avionics and software systems blaze new trails in advancing avionics system technology. Many of the requirements placed on avionics and software were accomplished for the first time on this program. Examples include comprehensive digital fly-by-wire technology, use of a digital databus for flight critical functions, fail operational/fail safe requirements, complex automated redundancy management, and the use of a high-order software language for flight software development. In order to meet the operational and safety goals of the program, the Space Shuttle software had to be extremely high quality, reliable, robust, reconfigurable and maintainable. To achieve this, the software development team evolved a software process focused on continuous process improvement and defect elimination that consistently produced highly predictable and top quality results, providing software managers the confidence needed to sign each Certificate of Flight Readiness (COFR). This process, which has been appraised at Capability Maturity Model (CMM)/Capability Maturity Model Integration (CMMI) Level 5, has resulted in one of the lowest software defect rates in the industry. This paper will present an overview of the evolution of the Primary Avionics Software System (PASS) project and processes over thirty years, an argument for strong statistical control of software processes with examples, an overview of the success story for identifying and driving out errors before flight, a case study of the few significant software issues and how they were either identified before flight or slipped through the process onto a flight vehicle, and identification of the valuable lessons learned over the life of the project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruiz, Steven Adriel
The following discussion contains a high-level description of methods used to implement software for data processing. It describes the directory structures and file handling required to use Excel's Visual Basic for Applications programming language, and how to identify shot, test and capture types to appropriately process data. It also describes how to interface with the software.
The IHMC CmapTools software in research and education: a multi-level use case in Space Meteorology
NASA Astrophysics Data System (ADS)
Messerotti, Mauro
2010-05-01
The IHMC (Institute for Human and Machine Cognition, Florida University System, USA) CmapTools software is a powerful multi-platform tool for knowledge modelling in graphical form based on concept maps. In this work we present its application to the high-level development of a set of multi-level concept maps in the framework of Space Meteorology, to act as the kernel of a space meteorology domain ontology. This is an example of a research use case, as a domain ontology coded in machine-readable form via e.g. OWL (Web Ontology Language) is suitable to be an active layer of any knowledge management system embedded in a Virtual Observatory (VO). Apart from being manageable at machine level, concept maps developed via CmapTools are intrinsically human-readable and can embed hyperlinks and objects of many kinds. Therefore they are suitable to be published on the web: the coded knowledge can be exploited for educational purposes by students and the public, as the level of information can be naturally organized among linked concept maps in progressively increasing complexity levels. Hence CmapTools and its advanced version COE (Concept-map Ontology Editor) represent effective and user-friendly software tools for high-level knowledge representation in research and education.
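On the machine-readable side, a concept map exported to OWL reduces to RDF triples. A small sketch using the rdflib Python package follows; the space-meteorology class names are invented for illustration and are not from the paper's ontology.

```python
# Sketch of the kind of machine-readable ontology a concept map can be
# exported to. Uses the rdflib package; the space-meteorology terms
# below are invented for illustration.
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

SM = Namespace("http://example.org/space-meteorology#")
g = Graph()
g.bind("sm", SM)

for cls in ("SolarFlare", "CoronalMassEjection", "SpaceWeatherEvent"):
    g.add((SM[cls], RDF.type, OWL.Class))
g.add((SM.SolarFlare, RDFS.subClassOf, SM.SpaceWeatherEvent))
g.add((SM.CoronalMassEjection, RDFS.subClassOf, SM.SpaceWeatherEvent))

print(g.serialize(format="turtle"))   # human-readable Turtle rendering
```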
Integrating Formal Methods and Testing 2002
NASA Technical Reports Server (NTRS)
Cukic, Bojan
2002-01-01
Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither of them alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting not only for formal verification and program testing, but also for the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given the assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10^-4 or better). The coming years will address methodologies to realistically estimate the impacts of various V&V techniques on system reliability and include the impact of operational risk in reliability assessment. The objectives are to: A) combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications; B) quantify the impact of these methods on software reliability; C) demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a given confidence level; and D) quantify and justify the reliability estimate for systems developed using various methods.
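One way to make objective C concrete is the classical zero-failure "success run" bound. This is not the paper's statistical framework, but it shows why pure testing at ultra-high levels is expensive and why credit for qualitative V&V matters:

```python
# Hedged illustration (not the paper's framework): the classical
# zero-failure "success run" bound. Demonstrating reliability R with
# confidence C requires n >= ln(1 - C) / ln(R) failure-free tests;
# credit from prior V&V would enter as a prior that lowers n.
import math

def tests_needed(reliability, confidence):
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

print(tests_needed(0.9999, 0.99))  # ~46,050 failure-free runs
```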
Software engineering processes for Class D missions
NASA Astrophysics Data System (ADS)
Killough, Ronnie; Rose, Debi
2013-09-01
Software engineering processes are often seen as anathemas; thoughts of CMMI key process areas and NPR 7150.2A compliance matrices can motivate a software developer to consider other career fields. However, with adequate definition, common-sense application, and an appropriate level of built-in flexibility, software engineering processes provide a critical framework in which to conduct a successful software development project. One problem is that current models seem to be built around an underlying assumption of "bigness," and assume that all elements of the process are applicable to all software projects regardless of size and tolerance for risk. This is best illustrated in NASA's NPR 7150.2A in which, aside from some special provisions for manned missions, the software processes are to be applied based solely on the criticality of the software to the mission, completely agnostic of the mission class itself. That is, the processes applicable to a Class A mission (high priority, very low risk tolerance, very high national significance) are precisely the same as those applicable to a Class D mission (low priority, high risk tolerance, low national significance). This paper will propose changes to NPR 7150.2A, taking mission class into consideration, and discuss how some of these changes are being piloted for a current Class D mission—the Cyclone Global Navigation Satellite System (CYGNSS).
NASA Technical Reports Server (NTRS)
Mizell, Carolyn; Malone, Linda
2007-01-01
It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with a high degree of accuracy early in a project. This paper provides an approach that utilizes a software development process simulation model that considers and conveys the level of uncertainty that exists when developing an initial estimate. A NASA project will be analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.
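A hedged sketch of how a simulation can convey estimate uncertainty rather than a single number follows; the phase distributions and figures are invented, not the paper's SEL-calibrated model.

```python
# Hedged sketch of conveying uncertainty in an early estimate: sample
# effort per phase from triangular distributions (made-up parameters,
# not the paper's SEL-calibrated model) and report a percentile range
# instead of one point estimate.
import random

random.seed(1)
phases = {  # (low, mode, high) staff-months per phase
    "requirements": (4, 6, 12),
    "design":       (8, 12, 24),
    "code_test":    (20, 30, 60),
}

totals = sorted(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in phases.values())
    for _ in range(10_000)
)
p10, p50, p90 = (totals[int(len(totals) * q)] for q in (0.1, 0.5, 0.9))
print(f"effort estimate: P10={p10:.0f}, P50={p50:.0f}, P90={p90:.0f} staff-months")
```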
Software-based high-level synthesis design of FPGA beamformers for synthetic aperture imaging.
Amaro, Joao; Yiu, Billy Y S; Falcao, Gabriel; Gomes, Marco A C; Yu, Alfred C H
2015-05-01
Field-programmable gate arrays (FPGAs) can potentially be configured as beamforming platforms for ultrasound imaging, but a long design time and skilled expertise in hardware programming are typically required. In this article, we present a novel approach to the efficient design of FPGA beamformers for synthetic aperture (SA) imaging via the use of software-based high-level synthesis techniques. Software kernels (coded in OpenCL) were first developed to handle SA beamforming operations stage-wise, and their corresponding FPGA logic circuitry was emulated through a high-level synthesis framework. After design space analysis, the fine-tuned OpenCL kernels were compiled into register transfer level descriptions to configure an FPGA as a beamformer module. The processing performance of this beamformer was assessed through a series of offline emulation experiments that sought to derive beamformed images from SA channel-domain raw data (40 MHz sampling rate, 12-bit resolution). With 128 channels, our FPGA-based SA beamformer can achieve 41 frames per second (fps) processing throughput (3.44 × 10^8 pixels per second for a frame size of 256 × 256 pixels) at 31.5 W power consumption (1.30 fps/W power efficiency). It utilized 86.9% of the FPGA fabric and operated at a 196.5 MHz clock frequency (after optimization). Based on these findings, we anticipate that FPGA and high-level synthesis can together foster rapid prototyping of real-time ultrasound processor modules at low power consumption budgets.
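The OpenCL kernels themselves are not given in the abstract; as a rough reference for the operation being accelerated, a per-pixel delay-and-sum step looks like the following NumPy sketch (array geometry and parameters are illustrative only).

```python
# Hedged sketch of the core delay-and-sum step an SA beamformer
# implements per pixel: delay each channel by its geometric
# time-of-flight and sum coherently. Parameters are illustrative,
# not the paper's OpenCL kernels.
import numpy as np

fs, c = 40e6, 1540.0                        # sample rate (Hz), sound speed (m/s)
elems = np.linspace(-9.5e-3, 9.5e-3, 128)   # element x-positions (m)
rf = np.random.randn(128, 4096)             # channel data: elements x samples

def beamform_pixel(x, z, tx_delay=0.0):
    rx_dist = np.hypot(elems - x, z)            # element-to-pixel distance (m)
    t = tx_delay + rx_dist / c                  # arrival time per channel (s)
    idx = np.clip((t * fs).astype(int), 0, rf.shape[1] - 1)
    return rf[np.arange(128), idx].sum()        # coherent sum

print(beamform_pixel(0.0, 20e-3))
```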
Nolden, Marco; Zelzer, Sascha; Seitel, Alexander; Wald, Diana; Müller, Michael; Franz, Alfred M; Maleike, Daniel; Fangerau, Markus; Baumhauer, Matthias; Maier-Hein, Lena; Maier-Hein, Klaus H; Meinzer, Hans-Peter; Wolf, Ivo
2013-07-01
The Medical Imaging Interaction Toolkit (MITK) has been available as open-source software for almost 10 years now. In this period the requirements of software systems in the medical image processing domain have become increasingly complex. The aim of this paper is to show how MITK evolved into a software system that is able to cover all steps of a clinical workflow including data retrieval, image analysis, diagnosis, treatment planning, intervention support, and treatment control. MITK provides modularization and extensibility on different levels. In addition to the original toolkit, a module system, micro services for small, system-wide features, a service-oriented architecture based on the Open Services Gateway initiative (OSGi) standard, and an extensible and configurable application framework allow MITK to be used, extended and deployed as needed. A refined software process was implemented to deliver high-quality software, ease the fulfillment of regulatory requirements, and enable teamwork in mixed-competence teams. MITK has been applied by a worldwide community and integrated into a variety of solutions, either at the toolkit level or as an application framework with custom extensions. The MITK Workbench has been released as a highly extensible and customizable end-user application. Optional support for tool tracking, image-guided therapy, diffusion imaging as well as various external packages (e.g. CTK, DCMTK, OpenCV, SOFA, Python) is available. MITK has also been used in several FDA/CE-certified applications, which demonstrates the high-quality software and rigorous development process. MITK provides a versatile platform with a high degree of modularization and interoperability and is well suited to meet the challenging tasks of today's and tomorrow's clinically motivated research.
Hardware/software codesign for embedded RISC core
NASA Astrophysics Data System (ADS)
Liu, Peng
2001-12-01
This paper describes the hardware/software codesign method of the extendible embedded RISC core VIRGO, which is based on the MIPS-I instruction set architecture. VIRGO is described in the Verilog hardware description language, has a five-stage pipeline with a shared 32-bit cache/memory interface, and is controlled by a distributed control scheme. Every pipeline stage has one small controller, which controls the pipeline stage status and the cooperation among the pipeline phases. Since the description uses a high-level language and the control structure is distributed, the VIRGO core is highly extensible and can meet the requirements of an application. Taking the high-definition television MPEG2 MP@HL decoder chip as a case study, we constructed a hardware/software codesign virtual prototyping machine that supports research on the VIRGO core instruction set architecture, system-on-chip memory size requirements, system-on-chip software, and related questions. The virtual prototyping machine platform also allows evaluation of the system-on-chip design and the RISC instruction set.
Lyerla, R; Gouws, E; García-Calleja, J M; Zaniewski, E
2006-06-01
This paper describes improvements and updates to an established approach to making epidemiological estimates of HIV prevalence in countries with low-level and concentrated epidemics. The structure of the software used to make the estimates is briefly described, with particular attention to changes and improvements. The approach focuses on identifying populations which, through their behaviour, are at high risk of infection with HIV or which are exposed through the risk behaviour of their sexual partners. Estimates of the size and HIV prevalence of these populations allow the total number of HIV-infected people in a country or region to be estimated. Major changes in the software focus on the move away from short-term projections and towards developing an epidemiological curve that more accurately represents the change in prevalence of HIV over time. The software continues to provide an output file for use in the Spectrum software so as to estimate the demographic impact of HIV infection at country level.
Automating software design system DESTA
NASA Technical Reports Server (NTRS)
Lovitsky, Vladimir A.; Pearce, Patricia D.
1992-01-01
'DESTA' is the acronym for the Dialogue Evolutionary Synthesizer of Turnkey Algorithms, which works by means of a natural-language (Russian or English) functional specification of the algorithms or software being developed. DESTA represents a computer-aided and/or automatic artificial intelligence 'forgiving' system which provides users with software tool support for algorithm and/or structured program development. The DESTA system is intended to provide support for the higher levels and earlier stages of engineering design of software, in contrast to conventional Computer-Aided Design (CAD) systems, which provide low-level tools for use at a stage when the major planning and structuring decisions have already been taken. DESTA is a knowledge-intensive system. The main elements of the knowledge are procedures, functions, modules, operating system commands, batch files, their natural language specifications, and their interlinks. The specific domain for the DESTA system is a high-level programming language like Turbo Pascal 6.0. The DESTA system is operational and runs on an IBM PC computer.
Oostenveld, Robert; Fries, Pascal; Maris, Eric; Schoffelen, Jan-Mathijs
2011-01-01
This paper describes FieldTrip, an open source software package that we developed for the analysis of MEG, EEG, and other electrophysiological data. The software is implemented as a MATLAB toolbox and includes a complete set of consistent and user-friendly high-level functions that allow experimental neuroscientists to analyze experimental data. It includes algorithms for simple and advanced analysis, such as time-frequency analysis using multitapers, source reconstruction using dipoles, distributed sources and beamformers, connectivity analysis, and nonparametric statistical permutation tests at the channel and source level. The implementation as toolbox allows the user to perform elaborate and structured analyses of large data sets using the MATLAB command line and batch scripting. Furthermore, users and developers can easily extend the functionality and implement new algorithms. The modular design facilitates the reuse in other software packages.
Methodology for object-oriented real-time systems analysis and design: Software engineering
NASA Technical Reports Server (NTRS)
Schoeffler, James D.
1991-01-01
Successful application of software engineering methodologies requires an integrated analysis and design life-cycle in which the various phases flow smoothly ('seamlessly') from analysis through design to implementation. Furthermore, different analysis methodologies often lead to different structurings of the system, so the transition from analysis to design may be awkward depending on the design methodology to be used. This is especially important when object-oriented programming is to be used for implementation but the original specification and perhaps the high-level design are non-object-oriented. Two approaches to real-time systems analysis which can lead to an object-oriented design are contrasted: (1) modeling the system using structured analysis with real-time extensions, which emphasizes data and control flows, followed by the abstraction of objects where the operations or methods of the objects correspond to processes in the data flow diagrams, and then design in terms of these objects; and (2) modeling the system from the beginning as a set of naturally occurring concurrent entities (objects), each having its own time-behavior defined by a set of states and state-transition rules, and seamlessly transforming the analysis models into high-level design models. A new concept of a 'real-time systems-analysis object' is introduced and becomes the basic building block of a series of seamlessly connected models which progress from the object-oriented real-time systems analysis logical models through the physical architectural models to the high-level design stages. The methodology is appropriate to the overall specification including hardware and software modules. In software modules, the systems analysis objects are transformed into software objects.
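A minimal sketch of the building block behind the second approach, with an invented example entity: its time-behavior is just a state set plus transition rules, a representation that can pass from analysis model to design model unchanged.

```python
# Sketch of the paper's second approach: model each concurrent entity
# as an object whose time-behavior is a set of states plus
# state-transition rules, carried through analysis and design intact.
class Valve:
    TRANSITIONS = {
        ("closed",  "open_cmd"):  "opening",
        ("opening", "limit_hit"): "open",
        ("open",    "close_cmd"): "closed",
    }

    def __init__(self):
        self.state = "closed"

    def handle(self, event):
        # unknown (state, event) pairs leave the state unchanged
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state

v = Valve()
for ev in ("open_cmd", "limit_hit", "close_cmd"):
    print(ev, "->", v.handle(ev))
```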
Pouplin, Samuel; Roche, Nicolas; Antoine, Jean-Yves; Vaugier, Isabelle; Pottier, Sandra; Figere, Marjorie; Bensmail, Djamel
2017-06-01
To determine whether activation of the frequency-of-use and automatic-learning parameters of word prediction software has an impact on text input speed. Forty-five participants with cervical spinal cord injury between C4 and C8, Asia A or B, agreed to participate in this study. Participants were separated into two groups: a high lesion group for participants whose lesion level is at or above C5 Asia AIS A or B, and a low lesion group for participants whose lesion is between C6 and C8 Asia AIS A or B. A single evaluation session was carried out for each participant. Text input speed was evaluated during three copying tasks: without word prediction software (WITHOUT condition); with automatic learning of words and frequency of use deactivated (NOT_ACTIV condition); and with automatic learning of words and frequency of use activated (ACTIV condition). Results: Text input speed was significantly higher in the WITHOUT than the NOT_ACTIV (p < 0.001) or ACTIV conditions (p = 0.02) for participants with low lesions. Text input speed was significantly higher in the ACTIV than in the NOT_ACTIV (p = 0.002) or WITHOUT (p < 0.001) conditions for participants with high lesions. Use of word prediction software with frequency of use and automatic learning activated increased text input speed in participants with high-level tetraplegia. For participants with low-level tetraplegia, the use of word prediction software with frequency of use and automatic learning activated only decreased the number of errors. Implications for rehabilitation: Access to technology can be difficult for persons with disabilities such as cervical spinal cord injury (SCI). Several methods have been developed to increase text input speed, such as word prediction software. This study shows that a parameter of word prediction software (frequency of use) affected text input speed in persons with cervical SCI and differed according to the level of the lesion. For persons with a high-level lesion, our results suggest that this parameter must be activated so that text input speed is increased. For persons in the low lesion group, this parameter must be activated so that the number of errors is decreased. In all cases, activation of the frequency-of-use parameter is essential to improve the efficiency of the word prediction software. Health-related professionals should use these results in their clinical practice for better results and therefore better patient satisfaction.
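The clinical software itself is not identified beyond its parameters, but the two parameters under test can be illustrated with a toy predictor: candidates ranked by frequency of use, with automatic learning optionally updating those frequencies as words are accepted.

```python
# Toy illustration of the two parameters under study: rank candidate
# completions by frequency of use, and (when learning is active)
# add/update words as the user accepts them. Not the clinical software.
from collections import Counter

class Predictor:
    def __init__(self, learning=True):
        self.freq = Counter({"the": 50, "they": 20, "there": 10})
        self.learning = learning

    def predict(self, prefix, k=3):
        hits = [w for w, _ in self.freq.most_common() if w.startswith(prefix)]
        return hits[:k]

    def accept(self, word):
        if self.learning:
            self.freq[word] += 1   # frequency drifts toward actual usage

p = Predictor(learning=True)
print(p.predict("the"))   # ['the', 'they', 'there']
p.accept("therapy")
p.accept("therapy")
print(p.predict("ther"))  # 'therapy' now appears among the candidates
```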
Simulation of fiber optic liquid level sensor demodulation system
NASA Astrophysics Data System (ADS)
Yi, Cong-qin; Luo, Yun; Zhang, Zheng-ping
Measuring liquid level with high accuracy is an urgent requirement. This paper focuses on the demodulation system of a fiber-optic liquid level sensor based on a Fabry-Perot cavity, and designs and simulates the demodulation system using single-chip simulation software.
Techniques for development of safety-related software for surgical robots.
Varley, P
1999-12-01
Regulatory bodies require evidence that software controlling potentially hazardous devices is developed to good manufacturing practices. Effective techniques used in other industries assume long timescales and high staffing levels and can be unsuitable for use without adaptation in developing electronic healthcare devices. This paper discusses a set of techniques used in practice to develop software for a particular innovative medical product, an endoscopic camera manipulator. These techniques include identification of potential hazards and tracing their mitigating factors through the project lifecycle.
Rocca-Serra, Philippe; Brandizi, Marco; Maguire, Eamonn; Sklyar, Nataliya; Taylor, Chris; Begley, Kimberly; Field, Dawn; Harris, Stephen; Hide, Winston; Hofmann, Oliver; Neumann, Steffen; Sterk, Peter; Tong, Weida; Sansone, Susanna-Assunta
2010-01-01
Summary: The first open source software suite for experimentalists and curators that (i) assists in the annotation and local management of experimental metadata from high-throughput studies employing one or a combination of omics and other technologies; (ii) empowers users to uptake community-defined checklists and ontologies; and (iii) facilitates submission to international public repositories. Availability and Implementation: Software, documentation, case studies and implementations at http://www.isa-tools.org Contact: isatools@googlegroups.com PMID:20679334
An object oriented implementation of the Yeadon human inertia model
Dembia, Christopher; Moore, Jason K.; Hubbard, Mont
2015-01-01
We present an open source software implementation of a popular mathematical method developed by M.R. Yeadon for calculating the body and segment inertia parameters of a human body. The software is written in a high level open source language and provides three interfaces for manipulating the data and the model: a Python API, a command-line user interface, and a graphical user interface. Thus the software can fit into various data processing pipelines and requires only simple geometrical measures as input. PMID:25717365
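A hedged usage sketch of the Python API mentioned above, assuming the package is the one published as yeadon, whose documented entry point is a Human object built from a file of geometric measurements:

```python
# Hedged usage sketch, assuming the package installed as "yeadon"
# exposes a Human class built from a measurements file (attribute
# names as its documentation describes).
import yeadon

human = yeadon.Human("measurements.txt")  # simple geometric measures in
print(human.mass)      # whole-body mass (kg)
print(human.inertia)   # whole-body inertia tensor about the mass center
human.draw()           # 3D view (also reachable via the GUI/CLI)
```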
Algorithmic synthesis using Python compiler
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej
2015-09-01
This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism by using many computational resources at the same time. Creating parallel programs for FPGAs in pure HDL is difficult and time consuming. By using a higher level of abstraction and a High-Level Synthesis compiler, implementation time can be reduced. The compiler has been implemented in the Python language. This article describes the design, implementation and results of the created tools.
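The accepted Python subset is not specified in this abstract; the following is merely an example of the kind of algorithmic description such a compiler could translate: a fixed-size FIR filter whose inner loop has a static bound and can therefore unroll into parallel multiply-accumulate hardware.

```python
# Hedged example of compiler input, not the paper's actual subset: a
# FIR filter with a fixed tap count. The statically bounded inner loop
# is the kind of structure an HLS tool can unroll into parallel
# multiply-accumulate units in VHDL.
TAPS = [1, 3, 3, 1]

def fir(samples):
    out = []
    for i in range(len(TAPS) - 1, len(samples)):
        acc = 0
        for j, t in enumerate(TAPS):      # fixed bound -> unrollable
            acc += t * samples[i - j]
        out.append(acc)
    return out

print(fir([0, 0, 0, 1, 0, 0, 0]))  # impulse response: [1, 3, 3, 1]
```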
Software engineering and Ada in design
NASA Technical Reports Server (NTRS)
Oneill, Don
1986-01-01
Modern software engineering promises significant reductions in software costs and improvements in software quality. The Ada language is the focus for these software methodology and tool improvements. The IBM FSD approach, including the software engineering practices that guide the systematic design and development of software products and the management of the software process, is examined. The revised Ada design language adaptation is described. This four-level design methodology is detailed, including the purpose of each level, the management strategy that integrates the software design activity with the program milestones, and the technical strategy that maps the Ada constructs to each level of design. A complete description of each design level is provided along with specific design language recording guidelines for each level. Finally, some testimony is offered on education, tools, architecture, and metrics resulting from project use of the four-level Ada design language adaptation.
A NASA family of minicomputer systems, Appendix A
NASA Technical Reports Server (NTRS)
Deregt, M. P.; Dulfer, J. E.
1972-01-01
This investigation was undertaken to establish sufficient specifications, or standards, for minicomputer hardware and software to provide NASA with realizable economies in quantity purchases, interchangeability of minicomputers, software, storage, and peripherals, and a uniformly high quality. The standards will define minicomputer system component types, each specialized to its intended NASA application, in as many levels of capacity as required.
Tailoring Software for Multiple Processor Systems
1982-10-01
resource management decisions. Despite the lack of programming support, the use of multiple processor systems has grown substantially. Software has... making resource management decisions. Specifically, programmers need not allocate specific hardware resources to individual program components... Instead, such allocation decisions are automatically made based on high-level resource directives stated by application programmers, where each directive
A General Sparse Tensor Framework for Electronic Structure Theory
Manzer, Samuel; Epifanovsky, Evgeny; Krylov, Anna I.; ...
2017-01-24
Linear-scaling algorithms must be developed in order to extend the domain of applicability of electronic structure theory to molecules of any desired size. But the increasing complexity of modern linear-scaling methods makes code development and maintenance a significant challenge. A major contributor to this difficulty is the lack of robust software abstractions for handling block-sparse tensor operations. We therefore report the development of a highly efficient symbolic block-sparse tensor library in order to provide access to high-level software constructs to treat such problems. Our implementation supports arbitrary multi-dimensional sparsity in all input and output tensors. We then avoid cumbersome machine-generated code by implementing all functionality as a high-level symbolic C++ language library, and demonstrate that our implementation attains very high performance for linear-scaling sparse tensor contractions.
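The library itself is C++; the core idea can nonetheless be miniaturized in Python: store only the nonzero blocks of each tensor, keyed by block indices, and contract block-by-block so that absent blocks cost nothing.

```python
# Toy version of the idea (the paper's library is C++): keep only
# nonzero blocks and contract block-by-block, skipping work whenever
# a contributing block is absent.
import numpy as np

def block_contract(A, B):
    """C[i,k] += A[i,j] @ B[j,k], over blocks present in both inputs."""
    C = {}
    for (i, j), a in A.items():
        for (j2, k), b in B.items():
            if j == j2:
                C[(i, k)] = C.get((i, k), 0) + a @ b
    return C

n = 4
A = {(0, 0): np.eye(n), (1, 2): np.ones((n, n))}
B = {(0, 1): 2 * np.eye(n), (2, 0): np.eye(n)}
C = block_contract(A, B)
print(sorted(C))   # [(0, 1), (1, 0)] -- only blocks with a matching j
```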
A Language for Specifying Compiler Optimizations for Generic Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willcock, Jeremiah J.
2007-01-01
Compiler optimization is important to software performance, and modern processor architectures make optimization even more critical. However, many modern software applications use libraries providing high levels of abstraction. Such libraries often hinder effective optimization: the libraries are difficult to analyze using current compiler technology. For example, high-level libraries often use dynamic memory allocation and indirectly expressed control structures, such as iterator-based loops. Programs using these libraries often cannot achieve an optimal level of performance. On the other hand, software libraries have also been recognized as potentially aiding in program optimization. One proposed implementation of library-based optimization is to allow the library author, or a library user, to define custom analyses and optimizations. Only limited systems have been created to take advantage of this potential, however. One problem in creating a framework for defining new optimizations and analyses is how users are to specify them: implementing them by hand inside a compiler is difficult and prone to errors. Thus, a domain-specific language for library-based compiler optimizations would be beneficial. Many optimization specification languages have appeared in the literature, but they tend to be either limited in power or unnecessarily difficult to use. Therefore, I have designed, implemented, and evaluated the Pavilion language for specifying program analyses and optimizations, designed for library authors and users. These analyses and optimizations can be based on the implementation of a particular library, its use in a specific program, or on the properties of a broad range of types, expressed through concepts. The new system is intended to provide a high level of expressiveness, even though the intended users are unlikely to be compiler experts.
NASA Technical Reports Server (NTRS)
Utz, Hans Heinrich
2011-01-01
This talk gives an overview of the Robot Applications Programmers Interface Delegate (RAPID) as well as the distributed systems middleware Data Distribution Service (DDS). DDS is an open software standard; RAPID is cleared for open-source release under NOSA. RAPID specifies data structures and semantics for high-level telemetry published by NASA robotic software. These data structures are supported by multiple robotic platforms at Johnson Space Center (JSC), Jet Propulsion Laboratory (JPL) and Ames Research Center (ARC), providing high-level interoperability between those platforms. DDS is used as the middleware for data transfer. The feature set of the middleware heavily influences the design decisions made in the RAPID specification, so it is appropriate to discuss both in this introductory talk.
NASA Technical Reports Server (NTRS)
Bordano, Aldo; Uhde-Lacovara, JO; Devall, Ray; Partin, Charles; Sugano, Jeff; Doane, Kent; Compton, Jim
1993-01-01
The Navigation, Control and Aeronautics Division (NCAD) at NASA-JSC is exploring ways of producing Guidance, Navigation and Control (GN&C) flight software faster, better, and cheaper. To achieve these goals NCAD established two hardware/software facilities that take an avionics design project from initial inception through high-fidelity real-time hardware-in-the-loop testing. Commercially available software products are used to develop the GN&C algorithms in block diagram form and then automatically generate source code from these diagrams. A high-fidelity real-time hardware-in-the-loop laboratory provides users with the capability to analyze mass memory usage within the targeted flight computer, verify hardware interfaces, and conduct system-level verification, performance and acceptance testing, as well as mission verification using reconfigurable and mission-unique data. To evaluate these concepts and tools, NCAD embarked on a project to build a real-time 6-DOF simulation of the Soyuz Assured Crew Return Vehicle flight software. To date, a productivity increase of 185 percent has been seen over traditional NASA methods for developing flight software.
NASA PC software evaluation project
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Kuan, Julie C.
1986-01-01
The USL NASA PC software evaluation project is intended to provide a structured framework for facilitating the development of quality NASA PC software products. The project will assist NASA PC development staff to understand the characteristics and functions of NASA PC software products. Based on the results of the project teams' evaluations and recommendations, users can judge the reliability, usability, acceptability, maintainability and customizability of all the PC software products. The objective here is to provide initial, high-level specifications and guidelines for NASA PC software evaluation. The primary tasks to be addressed in this project are as follows: to gain a strong understanding of what software evaluation entails and how to organize a structured software evaluation process; to define a structured methodology for conducting the software evaluation process; to develop a set of PC software evaluation criteria and evaluation rating scales; and to conduct PC software evaluations in accordance with the identified methodology. The software categories covered include Communication Packages, Network System Software, Graphics Support Software, Environment Management Software, and General Utilities. This report represents one of the 72 attachment reports to the University of Southwestern Louisiana's Final Report on NASA Grant NGT-19-010-900. Accordingly, appropriate care should be taken in using this report out of context of the full Final Report.
Software on the Peregrine System | High-Performance Computing | NREL
Development Tools: view a list of tools for build automation, version control, and high-level or specialized scripting. Toolchains: learn about the available toolchains for building applications from source code.
A Facility and Architecture for Autonomy Research
NASA Technical Reports Server (NTRS)
Pisanich, Greg; Clancy, Daniel (Technical Monitor)
2002-01-01
Autonomy is a key enabling factor in the advancement of remote robotic exploration. There is currently a large gap between autonomy software at the research level and software that is ready for insertion into near-term space missions. The Mission Simulation Facility (MSF) will bridge this gap by providing a simulation framework and a suite of simulation tools to support research in autonomy for remote exploration. This system will allow developers of autonomy software to test their models in a high-fidelity simulation and evaluate their system's performance against a set of integrated, standardized simulations. The Mission Simulation ToolKit (MST) uses a distributed architecture with a communication layer that is built on top of the standardized High Level Architecture (HLA). This architecture enables the use of existing high-fidelity models, allows mixing simulation components from various computing platforms, and enforces the use of a standardized high-level interface among components. The components needed to achieve a realistic simulation can be grouped into four categories: environment generation (terrain, environmental features), robotic platform behavior (robot dynamics), instrument models (camera/spectrometer/etc.), and data analysis. The MST will provide basic components in these areas but allows users to easily plug in any refined model by means of a communication protocol. Finally, a description file defines the robot and environment parameters for easy configuration and ensures that all the simulation models share the same information.
ERIC Educational Resources Information Center
Boon, Richard T.; Fore, Cecil, III; Spencer, Vicky G.
2007-01-01
The purpose of this study was to examine teachers' attitudes and perceptions toward the use of technology-based instruction (i.e., "Inspiration 6 software") as an effective instructional strategy in inclusive social studies classes. Three high school social studies teachers, one general education and two special education teachers, completed a…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warrant, Marilyn M.; Garcia, Rudy J.; Zhang, Pengchu
2004-09-15
Tomcat-Projects_RF is a software package for analyzing sensor data obtained from a database and displaying the results with JavaServer Pages (JSP). SQL views into the dataset are tailored for personnel having different roles in monitoring the items in a storage facility. For example, an inspector, a host treaty compliance officer, a system engineer, and software developers were the users identified that would need to access data at different levels of detail. The analysis provides a high-level status of the storage facility and allows the user to go deeper into the data details if desired.
The Implementation of Satellite Attitude Control System Software Using Object Oriented Design
NASA Technical Reports Server (NTRS)
Reid, W. Mark; Hansell, William; Phillips, Tom; Anderson, Mark O.; Drury, Derek
1998-01-01
NASA established the Small Explorer (SMEX) program in 1988 to provide frequent opportunities for highly focused and relatively inexpensive space science missions. The SMEX program has produced five satellites, three of which have been successfully launched. The remaining two spacecraft are scheduled for launch within the coming year. NASA has recently developed a prototype for the next generation Small Explorer spacecraft (SMEX-Lite). This paper describes the object-oriented design (OOD) of the SMEX-Lite Attitude Control System (ACS) software. The SMEX-Lite ACS is three-axis controlled and is capable of performing sub-arc-minute pointing. This paper first describes the high-level requirements governing the SMEX-Lite ACS software architecture. Next, the context in which the software resides is explained. The paper describes the principles of encapsulation, inheritance, and polymorphism with respect to the implementation of an ACS software system. This paper also discusses the design of several ACS software components. Specifically, object-oriented designs are presented for sensor data processing, attitude determination, attitude control, and failure detection. Finally, this paper addresses the establishment of the ACS Foundation Class (AFC) Library. The AFC is a large software repository, requiring a minimal amount of code modifications to produce ACS software for future projects.
Achieving behavioral control with millisecond resolution in a high-level programming environment.
Asaad, Wael F; Eskandar, Emad N
2008-08-30
The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the 1 ms time-scale that is relevant for the alignment of behavioral and neural events.
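One of the "proper measures" alluded to can be sketched generically (this is a Python illustration, not the authors' implementation): quantify scheduling jitter against a monotonic high-resolution clock, and hybrid-sleep toward each deadline so the final fraction of every interval is busy-waited.

```python
# Sketch of one measure the argument rests on: quantify the host's
# timing jitter with a monotonic high-resolution clock, and
# hybrid-sleep (coarse sleep, then busy-wait) to hit 1 ms deadlines.
import time

def wait_until(deadline, spin_margin=0.0015):
    while True:
        remaining = deadline - time.perf_counter()
        if remaining <= 0:
            return
        if remaining > spin_margin:
            time.sleep(remaining - spin_margin)  # coarse, OS-scheduled
        # else: busy-wait the last ~1.5 ms for sub-millisecond accuracy

period, errors = 0.001, []
next_tick = time.perf_counter() + period
for _ in range(1000):
    wait_until(next_tick)
    errors.append(abs(time.perf_counter() - next_tick))
    next_tick += period
print(f"max timing error: {max(errors) * 1e6:.0f} us")
```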
Using software to predict occupational hearing loss in the mining industry.
Azman, A S; Li, M; Thompson, J K
2016-01-01
Powerful mining systems typically generate high noise levels that can damage the hearing of miners. Engineering noise controls are the most desirable and effective remedy for overexposure to noise. However, the effects of these noise controls on the actual hearing status of workers are not easily measured. A tool that can provide guidance in assigning workers to jobs based on the noise levels to which they will be exposed is highly desirable. Therefore, the Pittsburgh Mining Research Division (PMRD) of the U.S. National Institute for Occupational Safety and Health (NIOSH) developed a tool to estimate, in a systematic way, the hearing loss due to occupational noise exposure and to evaluate the effectiveness of developed engineering controls. This computer program is based on the ISO 1999 standard and can be used to estimate the loss of hearing ability caused by occupational noise exposures. In this paper, the functionalities of this software are discussed, and several case studies related to mining machinery are presented to demonstrate them.
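ISO 1999 combines an age-associated hearing level H with a noise-induced permanent threshold shift N. A minimal sketch of that combination rule follows; the standard's tables for deriving H and N from age, sex, exposure level and duration are omitted here, so this is only the final step of such a tool.

```python
# Simplified sketch of the ISO 1999 combination rule the software is
# built on: the total hearing threshold level H' combines the
# age-related component H and the noise-induced shift N with a
# compression term, rather than adding them outright.
def total_threshold_shift(h_age_db, n_noise_db):
    """ISO 1999: H' = H + N - (H * N) / 120, all values in dB."""
    return h_age_db + n_noise_db - (h_age_db * n_noise_db) / 120.0

print(total_threshold_shift(20.0, 15.0))  # 32.5 dB, not a plain 35 dB sum
```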
Uranus: a rapid prototyping tool for FPGA embedded computer vision
NASA Astrophysics Data System (ADS)
Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.
2007-01-01
The starting point for all successful system development is simulation. Performing high-level simulation of a system can help to identify, isolate, and fix design problems. This work presents Uranus, a software tool for simulation and evaluation of image processing algorithms with support for migrating them to an FPGA environment for algorithm acceleration and embedded processing purposes. The tool includes an integrated library of previously coded operators in software and provides the necessary support to read and display image sequences as well as video files. The user can chain the precompiled soft-operators in a high-level process chain and code his own operators. In addition to the prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing that are connected to a PowerPC-based system. The Uranus environment is intended for rapid prototyping of machine vision and migration to an FPGA accelerator platform, and it is distributed for academic purposes.
NASA Technical Reports Server (NTRS)
Brodsky, Alexander; Segal, Victor E.
1999-01-01
The EOSCUBE constraint database system is designed to be a software productivity tool for high-level specification and efficient generation of EOSDIS and other scientific products. These products are typically derived from large volumes of multidimensional data which are collected via a range of scientific instruments.
Science of Security Lablet - Scalability and Usability
2014-12-16
mobile computing [19]. However, the high-level infrastructure design and our own implementation (both described throughout this paper) can easily... critical and infrastructural systems demands high levels of sophistication in the technical aspects of cybersecurity, software and hardware design... Forget, S. Komanduri, Alessandro Acquisti, Nicolas Christin, Lorrie Cranor, Rahul Telang. "Security Behavior Observatory: Infrastructure for Long-term
Towards an Open, Distributed Software Architecture for UxS Operations
NASA Technical Reports Server (NTRS)
Cross, Charles D.; Motter, Mark A.; Neilan, James H.; Qualls, Garry D.; Rothhaar, Paul M.; Tran, Loc; Trujillo, Anna C.; Allen, B. Danette
2015-01-01
To address the growing need to evaluate, test, and certify an ever expanding ecosystem of UxS platforms in preparation of cultural integration, NASA Langley Research Center's Autonomy Incubator (AI) has taken on the challenge of developing a software framework in which UxS platforms developed by third parties can be integrated into a single system which provides evaluation and testing, mission planning and operation, and out-of-the-box autonomy and data fusion capabilities. This software framework, named AEON (Autonomous Entity Operations Network), has two main goals. The first goal is the development of a cross-platform, extensible, onboard software system that provides autonomy at the mission execution and course-planning level, a highly configurable data fusion framework sensitive to the platform's available sensor hardware, and plug-and-play compatibility with a wide array of computer systems, sensors, software, and controls hardware. The second goal is the development of a ground control system that acts as a test-bed for integration of the proposed heterogeneous fleet, and allows for complex mission planning, tracking, and debugging capabilities. The ground control system should also be highly extensible and allow plug-and-play interoperability with third party software systems. In order to achieve these goals, this paper proposes an open, distributed software architecture which utilizes at its core the Data Distribution Service (DDS) standards, established by the Object Management Group (OMG), for inter-process communication and data flow. The design decisions proposed herein leverage the advantages of existing robotics software architectures and the DDS standards to develop software that is scalable, high-performance, fault tolerant, modular, and readily interoperable with external platforms and software.
NASA Astrophysics Data System (ADS)
Gunawan, D.; Amalia, A.; Rahmat, R. F.; Muchtar, M. A.; Siregar, I.
2018-02-01
Identification of software maturity level is a technique for determining the quality of software. By identifying the software maturity level, the weaknesses of the software can be observed, and the resulting recommendations can serve as a reference for future software maintenance and development. This paper discusses software Capability Level (CL) with a case study of the Quality Management Unit (Unit Manajemen Mutu) of the University of Sumatera Utara (UMM-USU). This research utilized the Standard CMMI Appraisal Method for Process Improvement class C (SCAMPI C) model with continuous representation, which focuses on activities for developing quality products and services. The observation covers three process areas: Project Planning (PP), Project Monitoring and Control (PMC), and Requirements Management (REQM). According to the measurement of software capability level for the UMM-USU software, the capability level for the observed process areas lies between CL1 and CL2. Project Planning (PP) is the only process area that reaches capability level 2; PMC and REQM are still at CL1, the performed level. This research reveals several weaknesses of the existing UMM-USU software, and therefore proposes several recommendations for UMM-USU to improve the capability level of the observed process areas.
pyam: Python Implementation of YaM
NASA Technical Reports Server (NTRS)
Myint, Steven; Jain, Abhinandan
2012-01-01
pyam is a software development framework with tools for facilitating the rapid development of software in a concurrent software development environment. pyam provides solutions for development challenges associated with software reuse, managing multiple software configurations, developing software product lines, and multiple-platform development and build management. pyam uses release-early, release-often development cycles to allow developers to integrate their changes incrementally into the system on a continual basis. It facilitates the creation and merging of branches to support the isolated development of immature software to avoid impacting the stability of the development effort. It uses modules and packages to organize and share software across multiple software products, and uses the concepts of link and work modules to reduce sandbox setup times even when the code base is large. One side benefit is the enforcement of a strong module-level encapsulation of a module's functionality and interface. This increases design transparency, system stability, and software reuse. pyam is written in Python and is organized as a set of utilities on top of the open-source SVN version control package. All development software is organized into a collection of modules. pyam packages are defined as sub-collections of the available modules. Developers can set up private sandboxes for module/package development. All module/package development takes place on private SVN branches. High-level pyam commands support the setup, update, and release of modules and packages. Released and pre-built versions of modules are available to developers. Developers can tailor the source/link module mix for their sandboxes so that new sandboxes (even large ones) can be built up easily and quickly by pointing to pre-existing module releases. All inter-module interfaces are publicly exported via links. A minimal, but uniform, convention is used for building modules.
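The link/work module distinction is the key to fast sandbox setup: link modules point at pre-built releases, while work modules hold source checked out on a private branch. The sketch below illustrates that idea in plain Python under invented names and paths; it is not pyam's actual implementation or command set.

```python
# Conceptual sketch of pyam-style "link" vs. "work" module resolution.
# Not pyam's actual code or API; names and layout are hypothetical.
import os

RELEASE_AREA = "/releases"   # pre-built module releases (hypothetical path)


def populate_sandbox(sandbox: str, link_modules: dict, work_modules: list) -> None:
    """Link modules point at existing releases (cheap to set up);
    work modules are placeholders for source on a private branch."""
    os.makedirs(sandbox, exist_ok=True)
    for name, version in link_modules.items():
        target = os.path.join(RELEASE_AREA, name, version)
        os.symlink(target, os.path.join(sandbox, name))   # instant, no build
    for name in work_modules:
        os.makedirs(os.path.join(sandbox, name), exist_ok=True)
        # a real tool would check out an SVN branch of `name` here


populate_sandbox(
    "/home/dev/sandbox",
    link_modules={"Math": "r3.2", "Graphics": "r1.7"},   # reuse releases
    work_modules=["Dynamics"],                           # under development
)
```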
ERIC Educational Resources Information Center
Journal of Chemical Education, 1989
1989-01-01
Reviews three chemistry software programs at the high school and college general chemistry level for the Apple II family. Includes "Chemical Nomenclature and Balancing Equations,""Principles of Stoichiometry," and "Solubility." (MVL)
From Goal-Oriented Requirements to Event-B Specifications
NASA Technical Reports Server (NTRS)
Aziz, Benjamin; Arenas, Alvaro E.; Bicarregui, Juan; Ponsard, Christophe; Massonet, Philippe
2009-01-01
In goal-oriented requirements engineering methodologies, goals are structured into refinement trees from high-level system-wide goals down to fine-grained requirements assigned to specific software/hardware/human agents that can realise them. Functional goals assigned to software agents need to be operationalised into specification of services that the agent should provide to realise those requirements. In this paper, we propose an approach for operationalising requirements into specifications expressed in the Event-B formalism. Our approach has the benefit of aiding software designers by bridging the gap between declarative requirements and operational system specifications in a rigorous manner, enabling powerful correctness proofs and allowing further refinements down to the implementation level. Our solution is based on verifying that a consistent Event-B machine exhibits properties corresponding to requirements.
A high-level 3D visualization API for Java and ImageJ.
Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin
2010-05-21
Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.
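Because ImageJ scripts can be written in Jython, the high-level API described above can be exercised in a few lines. The snippet below follows the 3D Viewer's published scripting examples (the Image3DUniverse class); treat the exact calls as indicative rather than authoritative.

```python
# Jython script for ImageJ illustrating the high-level 3D API. Class and
# method names follow the 3D Viewer's published examples; the sample
# image URL is ImageJ's standard demo stack.
from ij import IJ
from ij3d import Image3DUniverse

imp = IJ.openImage("http://imagej.nih.gov/ij/images/t1-head.zip")
univ = Image3DUniverse()
univ.show()            # open the interactive 3D window
univ.addVoltex(imp)    # one call: volume-render the whole stack
```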
Automated Engineering Design (AED); An approach to automated documentation
NASA Technical Reports Server (NTRS)
Mcclure, C. W.
1970-01-01
The Automated Engineering Design (AED) system is reviewed. It consists of a high-level systems programming language, a series of modular precoded subroutines, and a set of powerful software machine tools that effectively automate the production and design of new languages. AED is used primarily for the development of problem- and user-oriented languages. Software production phases are diagrammed, and factors that inhibit effective documentation are evaluated.
NASA Technical Reports Server (NTRS)
Hancock, David W., III
1999-01-01
This document provides the Software Management Plan for the GLAS Standard Data Software (SDS) supporting the GLAS instrument of the EOS ICESat Spacecraft. The SDS encompasses the ICESat Science Investigator-led Processing System (I-SIPS) Software and the Instrument Support Terminal (IST) Software. For the I-SIPS Software, the SDS will produce Level 0, Level 1, and Level 2 data products as well as the associated product quality assessments and descriptive information. For the IST Software, the SDS will accommodate the GLAS instrument support areas of engineering status, command, performance assessment, and instrument health status.
NASA Technical Reports Server (NTRS)
Wolverton, David A.; Dickson, Richard W.; Clinedinst, Winston C.; Slominski, Christopher J.
1993-01-01
The flight software developed for the Flight Management/Flight Controls (FM/FC) MicroVAX computer used on the Transport Systems Research Vehicle for Advanced Transport Operating Systems (ATOPS) research is described. The FM/FC software computes navigation position estimates, guidance commands, and those commands issued to the control surfaces to direct the aircraft in flight. Various modes of flight are provided for, ranging from computer assisted manual modes to fully automatic modes including automatic landing. A high-level system overview as well as a description of each software module comprising the system is provided. Digital systems diagrams are included for each major flight control component and selected flight management functions.
A software methodology for compiling quantum programs
NASA Astrophysics Data System (ADS)
Häner, Thomas; Steiger, Damian S.; Svore, Krysta; Troyer, Matthias
2018-04-01
Quantum computers promise to transform our notions of computation by offering a completely new paradigm. To achieve scalable quantum computation, optimizing compilers and a corresponding software design flow will be essential. We present a software architecture for compiling quantum programs from a high-level language program to hardware-specific instructions. We describe the necessary layers of abstraction and their differences and similarities to classical layers of a computer-aided design flow. For each layer of the stack, we discuss the underlying methods for compilation and optimization. Our software methodology facilitates more rapid innovation among quantum algorithm designers, quantum hardware engineers, and experimentalists. It enables scalable compilation of complex quantum algorithms and can be targeted to any specific quantum hardware implementation.
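The abstract does not name a specific toolchain, but ProjectQ, an open-source framework co-developed by two of the authors, illustrates the layered flow described here: a hardware-agnostic program is pushed through a chain of compiler engines down to backend-specific instructions. A minimal example:

```python
# Illustration of the layered compilation idea using ProjectQ (the
# abstract itself does not name a specific toolchain).
from projectq import MainEngine
from projectq.ops import All, CNOT, H, Measure

eng = MainEngine()            # default chain of compiler engines
qubits = eng.allocate_qureg(2)

H | qubits[0]                 # high-level circuit: prepare a Bell pair
CNOT | (qubits[0], qubits[1])
All(Measure) | qubits

eng.flush()                   # push the program through the compiler stack
print([int(q) for q in qubits])
```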
A new software on TUG-T60 autonomous telescope for astronomical transient events
NASA Astrophysics Data System (ADS)
Dindar, Murat; Helhel, Selçuk; Esenoğlu, Hasan; Parmaksızoğlu, Murat
2015-03-01
Robotic telescopes usually run under the control of a scheduler, which provides high-level control by selecting astronomical targets for observation. The TÜBİTAK (Scientific and Technological Research Council of Turkey) National Observatory (TUG) T60 robotic telescope is controlled by the open-source OCAAS software, now named Talon. This study introduces new software designed for Talon to catch GRB, Gaia, and other transient alerts. The new GRB software module, a daemon process called alertd, runs alongside the other Talon modules such as telescoped, the focus and dome daemons, camerad, and telrun. The maximum slew velocity and acceleration limits of the T60 telescope are fast enough for GRB and transient observations.
NASA Technical Reports Server (NTRS)
Moss, Thomas; Nurge, Mark; Perusich, Stephen
2011-01-01
The In-Situ Resource Utilization (ISRU) Regolith & Environmental Science and Oxygen & Lunar Volatiles Extraction (RESOLVE) software provides operation of the physical plant from a remote location with a high-level interface that can access and control the data from external software applications of other subsystems. This software allows autonomous control over the entire system, with manual computer control of individual system/process components. It gives non-programmer operators the capability to easily modify the high-level autonomous sequencing while the software is in operation, as well as the ability to modify the low-level, file-based sequences prior to system operation. Local automated control in a distributed system is also enabled, so that component control is maintained during a loss of network connectivity with the remote workstation. This innovation also minimizes network traffic. The software architecture commands and controls the latest generation of RESOLVE processes used to obtain, process, and quantify lunar regolith. The system is grouped into six sub-processes: Drill, Crush, Reactor, Lunar Water Resource Demonstration (LWRD), Regolith Volatiles Characterization (RVC), and Regolith Oxygen Extraction (ROE). Some processes are independent, some are dependent on other processes, and some are independent but run concurrently with other processes. The first goal is to analyze the volatiles emanating from lunar regolith, such as water, carbon monoxide, carbon dioxide, ammonia, hydrogen, and others. This is done by heating the soil and analyzing and capturing the volatilized product. The second goal is to produce water by reducing the soil at high temperatures with hydrogen. This is done by raising the reactor temperature to the range of 800 to 900 °C, adding hydrogen to drive the reaction, and then capturing the water product in a desiccant bed. The software needs to run the entire unit and all sub-processes; however, throughout testing, many variables and parameters need to be changed as more is learned about the system operation. The Master Events Controller (MEC) runs on a standard laptop PC using Windows XP. This PC runs in parallel with another laptop that monitors the gas chromatograph (GC), and a third PC that monitors the drilling/crushing operation. These three PCs interface to the process through a CompactRIO, OPC servers, and modems.
New Software for Ensemble Creation in the Spitzer Space Telescope Operations Database
NASA Technical Reports Server (NTRS)
Laher, Russ; Rector, John
2004-01-01
Some of the computer pipelines used to process digital astronomical images from NASA's Spitzer Space Telescope require multiple input images in order to generate high-level science and calibration products. The images are grouped into ensembles according to well-documented ensemble-creation rules by making explicit associations in the operations Informix database at the Spitzer Science Center (SSC). The advantage of this approach is that a simple database query can retrieve the required ensemble of pipeline input images. New and improved software for ensemble creation has been developed. The new software is much faster than the existing software because it uses pre-compiled database stored procedures written in Informix SPL (Stored Procedure Language). The new software is also more flexible because the ensemble-creation rules are now stored in and read from newly defined database tables. This table-driven approach was implemented so that ensemble rules can be inserted, updated, or deleted without modifying software.
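The table-driven idea is that the grouping rules are data, not code. The sketch below illustrates it with SQLite standing in for Informix; the table, column, and rule names are hypothetical, not the SSC's actual schema.

```python
# Sketch of table-driven ensemble creation: grouping rules live in a
# database table instead of in code. Schema names are hypothetical.
import sqlite3  # stand-in for the Informix connection used at the SSC

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ensemble_rules (product TEXT, group_key TEXT);
INSERT INTO ensemble_rules VALUES ('dark_cal', 'exposure_time'),
                                  ('mosaic',   'observation_request');
CREATE TABLE images (id INTEGER, product TEXT,
                     exposure_time REAL, observation_request INTEGER);
""")

def ensembles(product: str):
    """Group input images according to the rule row for this product."""
    (key,) = conn.execute(
        "SELECT group_key FROM ensemble_rules WHERE product = ?",
        (product,)).fetchone()
    return conn.execute(
        f"SELECT {key}, GROUP_CONCAT(id) FROM images "
        f"WHERE product = ? GROUP BY {key}", (product,)).fetchall()
```

Changing how a product's images are grouped then means updating one rule row, with no change to the pipeline software itself.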
A component-based software environment for visualizing large macromolecular assemblies.
Sanner, Michel F
2005-03-01
The interactive visualization of large biological assemblies poses a number of challenging problems, including the development of multiresolution representations and new interaction methods for navigating and analyzing these complex systems. An additional challenge is the development of flexible software environments that will facilitate the integration and interoperation of computational models and techniques from a wide variety of scientific disciplines. In this paper, we present a component-based software development strategy centered on the high-level, object-oriented, interpretive programming language: Python. We present several software components, discuss their integration, and describe some of their features that are relevant to the visualization of large molecular assemblies. Several examples are given to illustrate the interoperation of these software components and the integration of structural data from a variety of experimental sources. These examples illustrate how combining visual programming with component-based software development facilitates the rapid prototyping of novel visualization tools.
Analyzing and designing object-oriented missile simulations with concurrency
NASA Astrophysics Data System (ADS)
Randorf, Jeffrey Allen
2000-11-01
A software object model for the six-degree-of-freedom missile modeling domain is presented. As a precursor, a domain analysis of the missile modeling domain was started, based on the Feature-Oriented Domain Analysis (FODA) technique described by the Software Engineering Institute (SEI). It was subsequently determined that the FODA methodology is functionally equivalent to the Object Modeling Technique (OMT). The analysis used legacy software documentation and code from the ENDOSIM, KDEC, and TFrames 6-DOF modeling tools, as well as other technical literature. The SEI Object Connection Architecture (OCA) was the template for designing the object model. Three variants of the OCA were considered---a reference structure, a recursive structure, and a reference structure with augmentation for flight vehicle modeling. The reference OCA design option was chosen for maintaining simplicity while not compromising the expressive power of the OMT model. The missile architecture was then analyzed for potential areas of concurrent computing. It was shown how protected objects could be used for data passing between OCA object managers, allowing concurrent access without changing the OCA reference design intent or structure. The implementation language was the 1995 release of Ada, and it was shown how OCA software components can be expressed as Ada child packages. While acceleration of several low-level and higher-level operations is possible on appropriate hardware, there was a 33% degradation in fourth-order Runge-Kutta integrator performance for two simultaneous ordinary differential equations using Ada tasking on a single-processor machine. The Defense Department's High Level Architecture (HLA) was introduced and explained in context with the OCA. It was shown that the HLA and OCA are not mutually exclusive architectures, but complementary: HLA serves as an interoperability solution, with the OCA as an architectural vehicle for software reuse. Further directions for implementing a 6-DOF missile modeling environment are discussed.
Hierarchical Simulation to Assess Hardware and Software Dependability
NASA Technical Reports Server (NTRS)
Ries, Gregory Lawrence
1997-01-01
This thesis presents a method for conducting hierarchical simulations to assess system hardware and software dependability. The method is intended to model embedded microprocessor systems. A key contribution of the thesis is the idea of using fault dictionaries to propagate fault effects upward from the level of abstraction where a fault model is assumed to the system level where the ultimate impact of the fault is observed. A second important contribution is the analysis of the software behavior under faults as well as the hardware behavior. The simulation method is demonstrated and validated in four case studies analyzing Myrinet, a commercial, high-speed networking system. One key result from the case studies shows that the simulation method predicts the same fault impact 87.5% of the time as is obtained by similar fault injections into a real Myrinet system. Reasons for the remaining discrepancy are examined in the thesis. A second key result shows the reduction in the number of simulations needed due to the fault dictionary method. In one case study, 500 faults were injected at the chip level, but only 255 propagated to the system level. Of these 255 faults, 110 shared identical fault dictionary entries at the system level and so did not need to be resimulated. The necessary number of system-level simulations was therefore reduced from 500 to 145. Finally, the case studies show how the simulation method can be used to improve the dependability of the target system. The simulation analysis was used to add recovery to the target software for the most common fault propagation mechanisms that would cause the software to hang. After the modification, the number of hangs was reduced by 60% for fault injections into the real system.
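A toy version of the fault-dictionary bookkeeping makes the reported reduction concrete: faults whose effects share a system-level signature share one dictionary entry and therefore one higher-level simulation. A hedged Python sketch, with the chip-level simulator replaced by a random stand-in:

```python
# Toy illustration of the fault-dictionary reduction described above.
# The chip-level simulation is faked; only the bookkeeping is real.
import random

random.seed(0)

def chip_level_effect(fault_id: int) -> tuple:
    """Stand-in for a chip-level simulation; returns the fault's
    signature as observed at the system boundary (hypothetical)."""
    if random.random() < 0.49:
        return ()                  # fault never propagates
    return (fault_id % 37,)        # coarse system-level signature

dictionary = {}
for fault in range(500):           # 500 injected chip-level faults
    sig = chip_level_effect(fault)
    if sig:                        # propagated to the system level
        dictionary.setdefault(sig, []).append(fault)

print(f"propagated: {sum(len(v) for v in dictionary.values())}")
print(f"system-level simulations needed: {len(dictionary)}")
```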
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal
Open Computing Language (OpenCL) is a high-level language that enables software programmers to explore Field Programmable Gate Arrays (FPGAs) for application acceleration. The Intel FPGA software development kit (SDK) for OpenCL allows a user to specify applications at a high level and explore the performance of low-level hardware acceleration. In this report, we present the FPGA performance and power consumption results of the single-precision floating-point vector add OpenCL kernel using the Intel FPGA SDK for OpenCL on the Nallatech 385A FPGA board. The board features an Arria 10 FPGA. We evaluate the FPGA implementations using the compute unit duplication and kernel vectorization optimization techniques. On the Nallatech 385A FPGA board, the maximum compute kernel bandwidth we achieve is 25.8 GB/s, approximately 76% of the peak memory bandwidth. The power consumption of the FPGA device when running the kernels ranges from 29 W to 42 W.
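For reference, a single-precision vector-add kernel in portable OpenCL form is shown below, driven from Python with pyopencl on a generic device. The Intel FPGA SDK flow differs in that the kernel is compiled offline to an FPGA image and may carry attributes for compute-unit duplication and vectorization; this sketch shows only the functional kernel.

```python
# Portable OpenCL vector add, host code via pyopencl. The FPGA flow
# compiles the kernel offline instead of at run time.
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c) {
    int i = get_global_id(0);
    c[i] = a[i] + b[i];
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
a = np.random.rand(1 << 20).astype(np.float32)
b = np.random.rand(1 << 20).astype(np.float32)
mf = cl.mem_flags
a_d = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_d = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
c_d = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prog = cl.Program(ctx, KERNEL).build()
prog.vadd(queue, a.shape, None, a_d, b_d, c_d)   # one work-item per element

c = np.empty_like(a)
cl.enqueue_copy(queue, c, c_d)
assert np.allclose(c, a + b)
```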
Achieving behavioral control with millisecond resolution in a high-level programming environment
Asaad, Wael F.; Eskandar, Emad N.
2008-01-01
The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the one-millisecond time-scale that is relevant for the alignment of behavioral and neural events.
IB2d: a Python and MATLAB implementation of the immersed boundary method.
Battista, Nicholas A; Strickland, W Christopher; Miller, Laura A
2017-03-29
The development of fluid-structure interaction (FSI) software involves trade-offs between ease of use, generality, performance, and cost. Typically there are large learning curves when using low-level software to model the interaction of an elastic structure immersed in a uniform-density fluid. Many existing codes are not publicly available, and the commercial software that exists usually requires expensive licenses and may not be as robust or allow the necessary flexibility that in-house codes can provide. We present an open-source immersed boundary software package, IB2d, with full implementations in both MATLAB and Python, that is capable of running a vast range of biomechanics models and is accessible to scientists who have experience in high-level programming environments. IB2d contains multiple options for constructing material properties of the fiber structure, as well as the advection-diffusion of a chemical gradient, muscle mechanics models, and artificial forcing to drive boundaries with a preferred motion.
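At the heart of any immersed boundary code is the spreading of Lagrangian boundary forces onto the fluid grid through a regularized delta function. The numpy sketch below illustrates that step using Peskin's standard 4-point kernel; it is a self-contained illustration of the method, not IB2d's actual code.

```python
# Core immersed-boundary operation: spread boundary forces onto the
# fluid grid through a regularized delta function (Peskin 4-point).
import numpy as np

def peskin_delta(r):
    """Peskin's standard 4-point regularized delta kernel (1-D)."""
    r = np.abs(r)
    w = np.zeros_like(r)
    near = r < 1
    far = (r >= 1) & (r < 2)
    w[near] = (3 - 2*r[near] + np.sqrt(1 + 4*r[near] - 4*r[near]**2)) / 8
    w[far] = (5 - 2*r[far] - np.sqrt(-7 + 12*r[far] - 4*r[far]**2)) / 8
    return w

def spread_force(X, F, n, h):
    """Spread point forces F at boundary positions X onto an n x n grid."""
    f = np.zeros((n, n))
    grid = (np.arange(n) + 0.5) * h            # cell-centered coordinates
    for (x, y), (fx, fy) in zip(X, F):
        wx = peskin_delta((grid - x) / h) / h
        wy = peskin_delta((grid - y) / h) / h
        f += np.outer(wy, wx) * fx             # x-component only, for brevity
    return f
```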
A fault-tolerant intelligent robotic control system
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Tso, Kam Sing
1993-01-01
This paper describes the concept, design, and features of a fault-tolerant intelligent robotic control system being developed for space and commercial applications that require high dependability. The comprehensive strategy integrates system level hardware/software fault tolerance with task level handling of uncertainties and unexpected events for robotic control. The underlying architecture for system level fault tolerance is the distributed recovery block which protects against application software, system software, hardware, and network failures. Task level fault tolerance provisions are implemented in a knowledge-based system which utilizes advanced automation techniques such as rule-based and model-based reasoning to monitor, diagnose, and recover from unexpected events. The two level design provides tolerance of two or more faults occurring serially at any level of command, control, sensing, or actuation. The potential benefits of such a fault tolerant robotic control system include: (1) a minimized potential for damage to humans, the work site, and the robot itself; (2) continuous operation with a minimum of uncommanded motion in the presence of failures; and (3) more reliable autonomous operation providing increased efficiency in the execution of robotic tasks and decreased demand on human operators for controlling and monitoring the robotic servicing routines.
Software Fault Tolerance: A Tutorial
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2000-01-01
Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. Compounding the problems in building correct software is the difficulty in assessing the correctness of software for highly complex systems. After a brief overview of the software development processes, we note how hard-to-detect design faults are likely to be introduced during development and how software faults tend to be state-dependent and activated by particular input sequences. Although component reliability is an important quality measure for system level analysis, software reliability is hard to characterize and the use of post-verification reliability estimates remains a controversial issue. For some applications software safety is more important than reliability, and fault tolerance techniques used in those applications are aimed at preventing catastrophes. Single version software fault tolerance techniques discussed include system structuring and closure, atomic actions, inline fault detection, exception handling, and others. Multiversion techniques are based on the assumption that software built differently should fail differently and thus, if one of the redundant versions fails, it is expected that at least one of the other versions will provide an acceptable output. Recovery blocks, N-version programming, and other multiversion techniques are reviewed.
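The recovery-block scheme mentioned above is easy to state in code: run the primary version, apply an acceptance test, and fall back to an alternate built differently if the test fails. A minimal sketch, with a deliberately faulty primary:

```python
# Minimal recovery-block sketch: primary first, acceptance test, then
# alternates. Checkpoint/restore of state is elided for brevity.
def recovery_block(inputs, versions, acceptable):
    """versions: primary first, then alternates; acceptable: the test."""
    for version in versions:
        try:
            result = version(inputs)
            if acceptable(inputs, result):
                return result
        except Exception:
            pass                    # treat a crash as a failed test
    raise RuntimeError("all versions failed the acceptance test")


# Toy use: sort with a (buggy) fast primary and a simple alternate.
buggy_sort = lambda xs: xs          # primary: does nothing (the fault)
simple_sort = lambda xs: sorted(xs) # alternate, built differently
is_sorted = lambda xs, ys: ys == sorted(xs)

print(recovery_block([3, 1, 2], [buggy_sort, simple_sort], is_sorted))
```

N-version programming differs in that all versions run and a voter picks the majority output rather than trusting an acceptance test.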
NASA Astrophysics Data System (ADS)
Maj, P.; Kasiński, K.; Gryboś, P.; Szczygieł, R.; Kozioł, A.
2015-12-01
Integrated circuits designed for specific applications generally use non-standard communication methods. Hybrid pixel detector readout electronics produces a huge amount of data as a result of the high number of frames per second. The data needs to be transmitted to a higher-level system without limiting the ASIC's capabilities. Nowadays, the Camera Link interface is still one of the fastest communication methods, allowing transmission speeds up to 800 MB/s. In order to communicate between a higher-level system and the ASIC with a dedicated protocol, an FPGA with dedicated code is required. The configuration data is received from the PC and written to the ASIC. At the same time, the same FPGA should be able to transmit the data from the ASIC to the PC at very high speed. The camera should be an embedded system enabling autonomous operation and self-monitoring. In the presented solution, at least three different hardware platforms are used—an FPGA, a microprocessor with a real-time operating system, and a PC with end-user software. We present the use of a single software platform for high-speed data transfer from a 65k-pixel camera to a personal computer.
NASA Technical Reports Server (NTRS)
1973-01-01
Techniques are considered which would be used to characterize aerospace computers, with the space shuttle application as the end usage. The system-level digital problems which have been encountered and documented are surveyed. From the large cross section of tests, an optimum set is recommended that has a high probability of discovering documented system-level digital problems within laboratory environments. A baseline hardware/software system is defined, which is required as a laboratory tool to test aerospace computers. Hardware and software baselines and the additions necessary to interface the UTE to aerospace computers for test purposes are outlined.
The CMS High Level Trigger System: Experience and Future Development
NASA Astrophysics Data System (ADS)
Bauer, G.; Behrens, U.; Bowen, M.; Branson, J.; Bukowiec, S.; Cittolin, S.; Coarasa, J. A.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Flossdorf, A.; Gigi, D.; Glege, F.; Gomez-Reino, R.; Hartl, C.; Hegeman, J.; Holzner, A.; Hwong, Y. L.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Polese, G.; Racz, A.; Raginel, O.; Sakulin, H.; Sani, M.; Schwick, C.; Shpakov, D.; Simon, S.; Spataru, A. C.; Sumorok, K.
2012-12-01
The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first-level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ) and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of the order of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the 2010/2011 collider run is reported. The current architecture of the CMS HLT, and its integration with the CMS reconstruction framework and the CMS DAQ, are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, is discussed.
The K9 On-Board Rover Architecture
NASA Technical Reports Server (NTRS)
Bresina, John L.; Bualat, Maria; Fair, Michael; Washington, Richard; Wright, Anne
2006-01-01
This paper describes the software architecture of NASA Ames Research Center's K9 rover. The goal of the onboard software architecture team was to develop a modular, flexible framework that would allow both high- and low-level control of the K9 hardware. Examples of low-level control are the simple drive or pan/tilt commands, which are handled by the resource managers; examples of high-level control are the command sequences, which are handled by the conditional executive. In between these two control levels are complex behavioral commands handled by the pilot, such as driving to a goal with obstacle avoidance or visually servoing to a target. This paper presents the design of the architecture as of Fall 2000. We describe the state of the architecture implementation as well as its current evolution. An early version of the architecture was used for K9 operations during a dual-rover field experiment conducted by NASA Ames Research Center (ARC) and the Jet Propulsion Laboratory (JPL) from May 14 to May 16, 2000.
The Nebula Standard Computer Architecture,
The architecture was designed to be a good target for high-level languages; the designers also adopted a visibility approach in architecture design that provides more freedom for the hardware implementor while still maintaining software portability. (Author)
Analyzing and Predicting Effort Associated with Finding and Fixing Software Faults
NASA Technical Reports Server (NTRS)
Hamill, Maggie; Goseva-Popstojanova, Katerina
2016-01-01
Context: Software developers spend a significant amount of time fixing faults. However, not many papers have addressed the actual effort needed to fix software faults. Objective: The objective of this paper is twofold: (1) analysis of the effort needed to fix software faults and how it was affected by several factors and (2) prediction of the level of fix implementation effort based on the information provided in software change requests. Method: The work is based on data related to 1200 failures, extracted from the change tracking system of a large NASA mission. The analysis includes descriptive and inferential statistics. Predictions are made using three supervised machine learning algorithms and three sampling techniques aimed at addressing the imbalanced data problem. Results: Our results show that (1) 83% of the total fix implementation effort was associated with only 20% of failures. (2) Both safety critical failures and post-release failures required three times more effort to fix compared to non-critical and pre-release counterparts, respectively. (3) Failures with fixes spread across multiple components or across multiple types of software artifacts required more effort. The spread across artifacts was more costly than spread across components. (4) Surprisingly, some types of faults associated with later life-cycle activities did not require significant effort. (5) The level of fix implementation effort was predicted with 73% overall accuracy using the original, imbalanced data. Using oversampling techniques improved the overall accuracy up to 77%. More importantly, oversampling significantly improved the prediction of the high level effort, from 31% to around 85%. Conclusions: This paper shows the importance of tying software failures to changes made to fix all associated faults, in one or more software components and/or in one or more software artifacts, and the benefit of studying how the spread of faults and other factors affect the fix implementation effort.
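The paper does not name its three algorithms or sampling techniques, so the sketch below uses SMOTE oversampling and a random forest purely as representative choices, on synthetic data at roughly the same 1200-sample scale with a strong class imbalance:

```python
# Hedged sketch of the setup: classify fix-effort level from
# change-request features and oversample the minority (high-effort)
# class. SMOTE and a random forest are representative stand-ins only.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Stand-in for features extracted from change requests (imbalanced 9:1).
X, y = make_classification(n_samples=1200, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # oversample
clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)
print(classification_report(y_te, clf.predict(X_te)))
```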
ONRASIA Scientific Information Bulletin. Volume 17, Number 4, October-December 1992
1992-12-01
[Fragmentary scanned text. Recoverable topics include Oki's corporate software planning and software improvement environment, effective management of information systems, implementation of an ISO-standard high-level language, and the performance of the VP-400 system.]
NASA Astrophysics Data System (ADS)
Shameoni Niaei, M.; Kilic, Y.; Yildiran, B. E.; Yüzlükoglu, F.; Yesilyaprak, C.
2016-12-01
We describe new software (MIPS) for the analysis and image processing of meteorological satellite (Meteosat) data for an astronomical observatory. This software helps to make atmospheric forecasts (cloud, humidity, rain) from Meteosat data for robotic telescopes. MIPS uses a Python library for EUMETSAT data, aims to be completely open source, and is licensed under the GNU General Public License (GPL). MIPS is platform independent and uses h5py, numpy, and PIL with the general-purpose, high-level programming language Python and the Qt framework.
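The stated stack (h5py, numpy, PIL) suggests a data path like the following sketch, which reads one Meteosat channel from an HDF5 file and writes a quick-look image; the file name and dataset path are hypothetical.

```python
# Sketch of the MIPS-style data path: read a Meteosat channel from an
# EUMETSAT HDF5 file with h5py/numpy and render it with PIL.
# The file name and dataset path below are hypothetical.
import h5py
import numpy as np
from PIL import Image

with h5py.File("meteosat_ir108.h5", "r") as f:          # hypothetical file
    counts = np.array(f["U-MARF/MSG/Level1.5/IR_108"])  # hypothetical path

# Scale raw counts to 8-bit for a quick-look cloud image.
img = (255 * (counts - counts.min()) / np.ptp(counts)).astype(np.uint8)
Image.fromarray(img).save("ir108_quicklook.png")
```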
Behavior driven testing in ALMA telescope calibration software
NASA Astrophysics Data System (ADS)
Gil, Juan P.; Garces, Mario; Broguiere, Dominique; Shen, Tzu-Chiang
2016-07-01
The ALMA software development cycle includes well-defined testing stages that involve developers, testers, and scientists. We adapted Behavior Driven Development (BDD) to the testing activities applied to the Telescope Calibration (TELCAL) software. BDD is an agile technique that encourages communication between roles by defining test cases in natural language to specify features and scenarios, which allows participants to share a common language and provides a high-level set of automated tests. This work describes how we implemented and maintain BDD testing for TELCAL, the infrastructure needed to support it, and proposals to expand this technique to other subsystems.
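With a Python BDD tool such as behave, a natural-language scenario maps directly onto step definitions. The example below is illustrative only; the feature text and steps are invented, not TELCAL's actual test cases.

```python
# Illustration of the BDD style using the Python `behave` tool.
#
# features/phase_cal.feature (Gherkin, natural language):
#   Scenario: Phase calibration produces a result table
#     Given an observation with 2 antennas
#     When telescope calibration runs a phase calibration
#     Then a calibration result is produced
#
# features/steps/phase_cal_steps.py:
from behave import given, when, then

@given("an observation with {n:d} antennas")
def step_given_obs(context, n):
    context.observation = {"antennas": n}

@when("telescope calibration runs a phase calibration")
def step_run_calibration(context):
    # stand-in for invoking the calibration software on the observation
    context.result = {"type": "phase", "ok": True}

@then("a calibration result is produced")
def step_check_result(context):
    assert context.result["ok"]
```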
Integrated software system for low level waste management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Worku, G.
1995-12-31
In the continually changing and uncertain world of low-level waste management, many generators in the US are faced with the prospect of having to store their waste on site for the indefinite future. This consequently increases the set of tasks performed by the generators in the areas of packaging, characterizing, classifying, screening (if a set of acceptance criteria applies), and managing the inventory for the duration of onsite storage. When disposal sites become available, it is expected that the work will require re-evaluating the waste packages, including possible re-processing, re-packaging, or re-classifying in preparation for shipment for disposal under the regulatory requirements of the time. In this day and age, when there is wide use of computers and computer literacy is at high levels, an important waste management tool would be an integrated software system that aids waste management personnel in conducting these tasks quickly and accurately. It has become evident that such an integrated radwaste management software system offers great benefits to radwaste generators both in the US and other countries. This paper discusses one such approach to integrated radwaste management utilizing some globally accepted radiological assessment software applications.
Individual Differences in Diabetes Risk: Role of Sleep Disturbances
2008-08-01
[Fragmentary abstract. Recoverable content: the report uses the Automated Neuropsychological Assessment Metrics (ANAM) software, a commercial version of the Walter Reed Army Institute of Research battery; it reviews evidence that sleep restriction is associated with increased ghrelin levels, decreased leptin levels, and increased hunger and appetite, along with cross-sectional epidemiologic studies. Leptin is a satiety factor: high leptin levels inhibit and low leptin levels stimulate hunger and food intake.]
PINT, A Modern Software Package for Pulsar Timing
NASA Astrophysics Data System (ADS)
Luo, Jing; Ransom, Scott M.; Demorest, Paul; Ray, Paul S.; Stovall, Kevin; Jenet, Fredrick; Ellis, Justin; van Haasteren, Rutger; Bachetti, Matteo; NANOGrav PINT developer team
2018-01-01
Pulsar timing, first developed decades ago, has provided an extremely wide range of knowledge about our universe. It has been responsible for many important discoveries, such as the discovery of the first exoplanets and the orbital period decay of double neutron star systems. Currently, pulsar timing is the leading technique for detecting low-frequency (about 10^-9 Hz) gravitational waves (GW) using an array of pulsars as the detectors. To achieve this goal, high-precision pulsar timing data, at about the nanosecond level, is required. Most high-precision pulsar timing data are analyzed using the widely adopted software TEMPO/TEMPO2. But for a robust and believable GW detection, it is important to have independent software that can cross-check the results. In this poster we present the new-generation pulsar timing software PINT. This package provides a robust system for cross-checking high-precision timing results, completely independent of TEMPO and TEMPO2. In addition, PINT is designed to be a package that is easy to extend and modify, through use of a flexible code architecture and a modern programming language, Python, with modern technology and libraries.
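Typical high-level PINT usage, following the project's documentation, is to load a timing model and TOAs and then form residuals; the file names below are placeholders.

```python
# High-level PINT workflow: model + TOAs -> timing residuals.
# "pulsar.par" and "pulsar.tim" are placeholder file names.
import pint.models
import pint.toa
from pint.residuals import Residuals

model = pint.models.get_model("pulsar.par")   # timing-model parameters
toas = pint.toa.get_TOAs("pulsar.tim")        # times of arrival
res = Residuals(toas, model)
print(res.time_resids.to("us"))               # residuals in microseconds
```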
SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology.
Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E; Troein, Carl; Millar, Andrew J; Goryanin, Igor; Gilmore, Stephen
2013-03-01
Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI's use of standard data formats. All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials.
MINDS: A microcomputer interactive data system for 8086-based controllers
NASA Technical Reports Server (NTRS)
Soeder, J. F.
1985-01-01
A microcomputer interactive data system (MINDS) software package for the 8086 family of microcomputers is described. To enhance program understandability and ease of code maintenance, the software is written in PL/M-86, Intel Corporation's high-level system implementation language. The MINDS software is intended to run in residence with real-time digital control software to provide displays of steady-state and transient data. In addition, the MINDS package provides classic monitor capabilities along with extended provisions for debugging an executing control system. The software uses the CP/M-86 operating system developed by Digital Research, Inc., to provide program load capabilities along with a uniform file structure for data and table storage. Finally, a library of input and output subroutines to be used with consoles equipped with PL/M-86 and assembly language is described.
Long-term Preservation of Data Analysis Capabilities
NASA Astrophysics Data System (ADS)
Gabriel, C.; Arviset, C.; Ibarra, A.; Pollock, A.
2015-09-01
While the long-term preservation of scientific data obtained by large astrophysics missions is ensured through science archives, the issue of data analysis software preservation has hardly been addressed. Efforts by large data centres have contributed so far to maintain some instrument or mission-specific data reduction packages on top of high-level general purpose data analysis software. However, it is always difficult to keep software alive without support and maintenance once the active phase of a mission is over. This is especially difficult in the budgetary model followed by space agencies. We discuss the importance of extending the lifetime of dedicated data analysis packages and review diverse strategies under development at ESA using new paradigms such as Virtual Machines, Cloud Computing, and Software as a Service for making possible full availability of data analysis and calibration software for decades at minimal cost.
NEXUS - Resilient Intelligent Middleware
NASA Astrophysics Data System (ADS)
Kaveh, N.; Hercock, R. Ghanea
Service-oriented computing, a composition of distributed-object, component-based, and Web-based computing concepts, is becoming the widespread choice for developing dynamic heterogeneous software assets available as services across a network. One of the major strengths of service-oriented technologies is the high abstraction layer and large granularity level at which software assets are viewed compared to traditional object-oriented technologies. Collaboration through encapsulated and separately defined service interfaces creates a service-oriented environment whereby multiple services can be linked together through their interfaces to compose a functional system. This approach enables better integration of legacy and non-legacy services via wrapper interfaces, and allows for service composition at a more abstract level, especially in cases such as vertical market stacks. The heterogeneous nature of service-oriented technologies and the granularity of their software components make them a suitable computing model in the pervasive domain.
A curriculum for real-time computer and control systems engineering
NASA Technical Reports Server (NTRS)
Halang, Wolfgang A.
1990-01-01
An outline of a syllabus for the education of real-time-systems engineers is given. This comprises the treatment of basic concepts, real-time software engineering, and programming in high-level real-time languages, real-time operating systems with special emphasis on such topics as task scheduling, hardware architectures, and especially distributed automation structures, process interfacing, system reliability and fault-tolerance, and integrated project development support systems. Accompanying course material and laboratory work are outlined, and suggestions for establishing a laboratory with advanced, but low-cost, hardware and software are provided. How the curriculum can be extended into a second semester is discussed, and areas for possible graduate research are listed. The suitable selection of a high-level real-time language and supporting operating system for teaching purposes is considered.
LabVIEW Interface for PCI-SpaceWire Interface Card
NASA Technical Reports Server (NTRS)
Lux, James; Loya, Frank; Bachmann, Alex
2005-01-01
This software provides a LabVIEW interface to the NT drivers for the PCI-SpaceWire card, which is a peripheral component interface (PCI) bus interface that conforms to the IEEE-1355/SpaceWire standard. As SpaceWire grows in popularity, the ability to use SpaceWire links within LabVIEW will be important to electronic ground support equipment vendors. In addition, there is a need for a high-level LabVIEW interface to the low-level device-driver software supplied with the card. The LabVIEW virtual instrument (VI) provides graphical interfaces to support (1) all SpaceWire link functions, including message handling and routing; (2) monitoring as a passive tap using specialized hardware; and (3) low-level access to satellite mission-control subsystem functions. The software is supplied in a zip file that contains LabVIEW VI files, which provide the various functions of the PCI-SpaceWire card as well as higher-link-level functions. The VIs are suitably named according to the matching function names in the driver manual. A number of test programs also are provided to exercise the various functions.
Software Requirements Specification for Lunar IceCube
NASA Astrophysics Data System (ADS)
Glaser-Garbrick, Michael R.
Lunar IceCube is a 6U satellite that will orbit the Moon to measure water volatiles as a function of position, altitude, and time, and to measure water in its various phases. Lunar IceCube is a collaboration between Morehead State University, Vermont Technical University, Busek, and NASA. The Software Requirements Specification will serve as a contract between the overall team and the developers of the flight software. It will provide a system overview of the software that will be developed for Lunar IceCube, detailing the interconnects and protocols for each subsystem that Lunar IceCube will utilize. The flight software will be written in SPARK to the fullest extent practical, because SPARK supports formal verification that software is free of broad classes of errors. The LIC flight software makes use of a general-purpose, reusable application framework called CubedOS. This framework imposes some structuring requirements on the architecture and design of the flight software, but it does not impose any high-level requirements. The specification will also detail the tools to be used for Lunar IceCube, such as VxWorks, and the rationale for choosing them.
Architecture of a framework for providing information services for public transport.
García, Carmelo R; Pérez, Ricardo; Lorenzo, Alvaro; Quesada-Arencibia, Alexis; Alayón, Francisco; Padrón, Gabino
2012-01-01
This paper presents OnRoute, a framework for developing and running ubiquitous software that provides information services to passengers of public transportation, including payment systems and on-route guidance services. To achieve a high level of interoperability, accessibility and context awareness, OnRoute uses the ubiquitous computing paradigm. To guarantee the quality of the software produced, the reliable software principles used in critical contexts, such as automotive systems, are also considered by the framework. The main components of its architecture (run-time, system services, software components and development discipline) and how they are deployed in the transportation network (stations and vehicles) are described in this paper. Finally, to illustrate the use of OnRoute, the development of a guidance service for travellers is explained.
System support software for the Space Ultrareliable Modular Computer (SUMC)
NASA Technical Reports Server (NTRS)
Hill, T. E.; Hintze, G. C.; Hodges, B. C.; Austin, F. A.; Buckles, B. P.; Curran, R. T.; Lackey, J. D.; Payne, R. E.
1974-01-01
The highly transportable programming system designed and implemented to support the development of software for the Space Ultrareliable Modular Computer (SUMC) is described. The SUMC system support software consists of program modules called processors. The initial set of processors consists of the supervisor, the general-purpose assembler for SUMC instruction and microcode input, linkage editors, an instruction-level simulator, a microcode grid print processor, and user-oriented utility programs. A FORTRAN 4 compiler is under development. The design facilitates the addition of new processors with minimum effort and provides the user quasi host independence on the ground-based operational software development computer. Additional capability is provided to accommodate variations in the SUMC architecture without consequent major modifications of the initial processors.
A Software Framework for Aircraft Simulation
NASA Technical Reports Server (NTRS)
Curlett, Brian P.
2008-01-01
The National Aeronautics and Space Administration Dryden Flight Research Center has a long history in developing simulations of experimental fixed-wing aircraft from gliders to suborbital vehicles on platforms ranging from desktop simulators to pilot-in-the-loop/aircraft-in-the-loop simulators. Regardless of the aircraft or simulator hardware, much of the software framework is common to all NASA Dryden simulators. Some of this software has withstood the test of time, but in recent years the push toward high-fidelity user-friendly simulations has resulted in some significant changes. This report presents an overview of the current NASA Dryden simulation software framework and capabilities with an emphasis on the new features that have permitted NASA to develop more capable simulations while maintaining the same staffing levels.
Configurable software for satellite graphics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartzman, P D
An important goal in interactive computer graphics is to provide users with both quick system responses for basic graphics functions and enough computing power for complex calculations. One solution is to have a distributed graphics system in which a minicomputer and a powerful large computer share the work. The most versatile type of distributed system is an intelligent satellite system in which the minicomputer is programmable by the application user and can do most of the work while the large remote machine is used for difficult computations. At New York University, the hardware was configured from available equipment. The level of system intelligence resulted almost completely from software development. Unlike previous work with intelligent satellites, the resulting system had system control centered in the satellite. It also had the ability to reconfigure software during real-time operation. The design of the system was done at a very high level using set-theoretic language. The specification clearly illustrated processor boundaries and interfaces. The high-level specification also produced a compact, machine-independent virtual graphics data structure for picture representation. The software was written in a systems implementation language; thus, only one set of programs was needed for both machines. A user can program both machines in a single language. Tests of the system with an application program indicate that it has very high potential. A major result of this work is the demonstration that a gigantic investment in new hardware is not necessary for computing facilities interested in graphics.
Using software to predict occupational hearing loss in the mining industry
Azman, A.S.; Li, M.; Thompson, J.K.
2017-01-01
Powerful mining systems typically generate high-level noise that can damage the hearing ability of miners. Engineering noise controls are the most desirable and effective control for overexposure to noise. However, the effects of these noise controls on the actual hearing status of workers are not easily measured. A tool that can provide guidance in assigning workers to jobs based on the noise levels to which they will be exposed is highly desirable. Therefore, the Pittsburgh Mining Research Division (PMRD) of the U.S. National Institute for Occupational Safety and Health (NIOSH) developed a tool to estimate, in a systematic way, the hearing loss due to occupational noise exposure and to evaluate the effectiveness of developed engineering controls. This computer program is based on the ISO 1999 standard and can be used to estimate the loss of hearing ability caused by occupational noise exposures. In this paper, the functionalities of this software are discussed, and several case studies related to mining machinery are presented as demonstrations.
ABM Drag_Pass Report Generator
NASA Technical Reports Server (NTRS)
Fisher, Forest; Gladden, Roy; Khanampornpan, Teerapat
2008-01-01
The dragREPORT software was developed in parallel with abmREPORT, which is described in the preceding article, and both programs were built on the capabilities created during that process. This tool generates a drag_pass report that summarizes vital information from the MRO aerobraking drag_pass build process, both to facilitate sequence reviews and to provide a high-level summary of the sequence for mission management. The Perl script extracts information from the ENV, SSF, FRF, SCMFmax, and OPTG files, presenting it in a single, easy-to-check report that provides the majority of parameters needed for cross-checking and verification as part of the sequence review process. Prior to dragREPORT, all of the needed information was spread across a number of different files, each in a different format. This software could be adapted for future aerobraking missions to provide similar report, review, and summarization information.
Proteus: a reconfigurable computational network for computer vision
NASA Astrophysics Data System (ADS)
Haralick, Robert M.; Somani, Arun K.; Wittenbrink, Craig M.; Johnson, Robert; Cooper, Kenneth; Shapiro, Linda G.; Phillips, Ihsin T.; Hwang, Jenq N.; Cheung, William; Yao, Yung H.; Chen, Chung-Ho; Yang, Larry; Daugherty, Brian; Lorbeski, Bob; Loving, Kent; Miller, Tom; Parkins, Larye; Soos, Steven L.
1992-04-01
The Proteus architecture is a highly parallel MIMD (multiple-instruction, multiple-data) machine, optimized for large-granularity tasks such as machine vision and image processing. The system can achieve 20 Gigaflops (80 Gigaflops peak). It accepts data via multiple serial links at a rate of up to 640 megabytes/second. The system employs a hierarchical reconfigurable interconnection network, with the highest level being a circuit-switched Enhanced Hypercube serial interconnection network for internal data transfers. The system is designed to use 256 to 1,024 RISC processors. The processors use one-megabyte external Read/Write Allocating Caches for reduced multiprocessor contention. The system detects, locates, and replaces faulty subsystems using redundant hardware to facilitate fault tolerance. The parallelism is directly controllable through an advanced software system for partitioning, scheduling, and development. System software includes a translator for the INSIGHT language, a parallel debugger, low- and high-level simulators, and a message-passing system for all control needs. Image processing application software includes a variety of point operators, neighborhood operators, convolution, and the mathematical morphology operations of binary and gray-scale dilation, erosion, opening, and closing.
Prospective comparison of speckle tracking longitudinal bidimensional strain between two vendors.
Castel, Anne-Laure; Szymanski, Catherine; Delelis, François; Levy, Franck; Menet, Aymeric; Mailliet, Amandine; Marotte, Nathalie; Graux, Pierre; Tribouilloy, Christophe; Maréchaux, Sylvestre
2014-02-01
Speckle tracking is a relatively new, largely angle-independent technique used for the evaluation of myocardial longitudinal strain (LS). However, significant differences have been reported between LS values obtained by speckle tracking with the first generation of software products. Our aim was to compare LS values obtained with the most recently released equipment from two manufacturers. Systematic scanning with head-to-head acquisition, with no modification of the patient's position, was performed in 64 patients with equipment from two different manufacturers, with subsequent off-line post-processing for speckle tracking LS assessment (Philips QLAB 9.0 and General Electric [GE] EchoPAC BT12). The interobserver variability of each software product was tested on a randomly selected set of 20 echocardiograms from the study population. GE and Philips interobserver coefficients of variation (CVs) for global LS (GLS) were 6.63% and 5.87%, respectively, indicating good reproducibility. Reproducibility was highly variable for regional and segmental LS values, with CVs ranging from 7.58% to 49.21% with both software products. The concordance correlation coefficient (CCC) between GLS values was high at 0.95, indicating substantial agreement between the two methods. While good agreement was observed between midwall and apical regional strains with the two software products, basal regional strains were poorly correlated. The agreement between the two software products at the segmental level was very variable; the highest correlation was obtained for the apical cap (CCC 0.90) and the poorest for the basal segments (CCC range 0.31-0.56). A high level of agreement and reproducibility for global, but not for basal regional or segmental, LS was found with the two vendor-dependent software products. This finding may help to reinforce clinical acceptance of GLS in everyday clinical practice. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
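For reference, the two agreement statistics on which the comparison above rests can be computed in a few lines. The sketch below (Python/numpy, with made-up GLS values rather than data from the study) shows the interobserver coefficient of variation and Lin's concordance correlation coefficient.

import numpy as np

def coefficient_of_variation(x):
    # Interobserver CV (%) = sample standard deviation / mean * 100
    x = np.asarray(x, dtype=float)
    return np.std(x, ddof=1) / np.mean(x) * 100.0

def concordance_correlation(x, y):
    # Lin's CCC = 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return 2 * sxy / (np.var(x, ddof=1) + np.var(y, ddof=1)
                      + (x.mean() - y.mean()) ** 2)

# Hypothetical GLS values (%) from two vendors for the same patients
gls_a = [-18.2, -20.1, -15.4, -21.3]
gls_b = [-17.9, -19.8, -15.9, -20.7]
print(concordance_correlation(gls_a, gls_b))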
Simpson, Robin; Devenyi, Gabriel A; Jezzard, Peter; Hennessy, T Jay; Near, Jamie
2017-01-01
This work introduces a new toolkit for simulation and processing of magnetic resonance spectroscopy (MRS) data and demonstrates some of its novel features. The FID appliance (FID-A) is an open-source, MATLAB-based software toolkit for simulation and processing of MRS data. The software is designed specifically for processing data with multiple dimensions (eg, multiple radiofrequency channels, averages, spectral editing dimensions). It is equipped with functions for importing data in the formats of most major MRI vendors (eg, Siemens, Philips, GE, Agilent) and for exporting data into the formats of several common processing software packages (eg, LCModel, jMRUI, Tarquin). This paper introduces the FID-A software toolkit and uses examples to demonstrate its novel features, namely 1) the use of a spectral registration algorithm to carry out useful processing routines automatically, 2) automatic detection and removal of motion-corrupted scans, and 3) the ability to perform several major aspects of the MRS computational workflow from a single piece of software. This latter feature is illustrated through both high-level processing of in vivo GABA-edited MEGA-PRESS MRS data and detailed quantum mechanical simulations to generate an accurate LCModel basis set for analysis of the same data. All of the described processing steps resulted in a marked improvement in spectral quality compared with unprocessed data. Fitting of MEGA-PRESS data using a customized basis set resulted in improved fitting accuracy compared with a generic MEGA-PRESS basis set. The FID-A software toolkit enables high-level processing of MRS data and accurate simulation of in vivo MRS experiments. Magn Reson Med 77:23-33, 2017. © 2015 Wiley Periodicals, Inc.
Research on software behavior trust based on hierarchy evaluation
NASA Astrophysics Data System (ADS)
Long, Ke; Xu, Haishui
2017-08-01
In view of the correlation of software behaviors, we evaluate software behavior credibility at two levels: control flow and data flow. At the control flow level, a method for tracing software behavior based on a support vector machine (SVM) is proposed. At the data flow level, a behavioral evidence evaluation based on a fuzzy decision analysis method is put forward.
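As a concrete illustration of the control-flow level, the sketch below trains an SVM on traces reduced to fixed-length feature vectors. The features (hypothetical system-call bigram counts) and labels are assumptions made for illustration, not the paper's actual evidence model, and scikit-learn stands in for whatever SVM implementation the authors used.

from sklearn import svm

# Each row: counts of four hypothetical call bigrams; label 1 = trusted trace
X_train = [[4, 0, 1, 3], [5, 1, 0, 2], [0, 6, 4, 0], [1, 7, 3, 1]]
y_train = [1, 1, 0, 0]

clf = svm.SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)
print(clf.predict([[3, 1, 1, 2]]))  # classify a new behavior trace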
NASA Astrophysics Data System (ADS)
Puig, Albert; LHCb Starterkit Team
2017-10-01
The vast majority of high-energy physicists use and produce software every day. Software skills are usually acquired "on the go" and dedicated training courses are rare. The LHCb Starterkit is a new training format for getting LHCb collaborators started in effectively using software to perform their research. The course focuses on teaching basic skills for research computing. Unlike traditional tutorials, we start with the basics, present all the material live with a high degree of interactivity, and give priority to understanding the tools rather than handing out recipes that work "as if by magic". The LHCb Starterkit was started by two young members of the collaboration inspired by the principles of Software Carpentry, and the material is created in a collaborative fashion using the tools we teach. Three successful entry-level workshops, as well as an advanced one, have taken place since the start of the initiative in 2015, and were taught largely by PhD students to other PhD students.
Evaluation of the efficiency and fault density of software generated by code generators
NASA Technical Reports Server (NTRS)
Schreur, Barbara
1993-01-01
Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. Software development requires the generation of a considerable amount of code. The engineers who write the code make mistakes, and producing a large body of code with high reliability takes considerable time. Computer-aided software engineering (CASE) tools are available that generate code automatically from inputs supplied through graphical interfaces; these tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking: some check only the finished product, while others also allow checking of individual modules and combined sets of modules. Automatically generated code is also reputed to be as efficient in execution as the best manually generated code. Given NASA's reliability requirements, in-house verification of these claims is warranted.
Space Telecommunications Radio System (STRS) Architecture Goals/Objectives and Level 1 Requirements
NASA Technical Reports Server (NTRS)
Briones, Janette C.; Johnson, Sandra K.; VanDerAar, Lisa
2007-01-01
The Space Telecommunications Radio System (STRS) Architecture Requirements Document provides the basis for the development of an open architecture for NASA Software Defined Radios (SDRs) for space use. The main objective of this document is to evaluate the goals and objectives and the high-level (Level 1) requirements that have a bearing on the design of the architecture. The goals and objectives provide broad, fundamental direction and purpose. The high-level (Level 1) requirements are intended to guide the broader and longer-term aspects of the SDR architecture and to provide guidance for the development of Level 2 requirements.
Ada(R) Test and Verification System (ATVS)
NASA Technical Reports Server (NTRS)
Strelich, Tom
1986-01-01
The Ada Test and Verification System (ATVS) functional description and high level design are completed and summarized. The ATVS will provide a comprehensive set of test and verification capabilities specifically addressing the features of the Ada language, support for embedded system development, distributed environments, and advanced user interface capabilities. Its design emphasis was on effective software development environment integration and flexibility to ensure its long-term use in the Ada software development community.
2013-04-01
…machine transitions. 2. We developed the TraceContract API for trace analysis in the Scala programming language. TraceContract combines a high-level… awarded within the Software and Systems program. The original Program Manager was David Luginbuhl; Bob Bonneau took over as PM in March 2011. The award…
Software quality assurance
A control measure, "Sufficient level of detail in the requirements to develop test cases," was removed, since the level of detail needed to develop test cases is recorded for all test cases. (Note: this is mandatory for applications graded with a High Quality Assurance…
Garrett, Pauline H; Faraone, Karen L; Patzelt, Sebastian B M; Keaser, Michael L
2015-12-01
Little is known about self-directed and self-reflective assessment in preclinical dental curricula. The aim of this study was to evaluate a visual dental anatomy teaching tool to train dental students to self-assess their dental anatomy wax carving practical examinations. The students self-assessed two waxing practical examinations (tooth #8 and tooth #19) using high-quality digital images in an assessment tool incorporated into a digital testing program. Student self-assessments were compared to the faculty evaluations and the results of a software-based evaluation tool (E4D Compare). Out of a total of 130 first-year dental students at one U.S. dental school, wax-ups from 57 participants were available for this study. The assessment data were submitted to statistical analyses (p<0.05). For tooth #8, the student self-assessments were significantly different from the faculty and software assessments at a 400 micrometer level of tolerance (p=0.036), whereas the faculty assessment was not significantly different from the software assessment at a 300 micrometer level of tolerance (p=0.69). The evaluation of tooth #19 resulted in no significant differences between faculty members (p=0.94) or students (p=0.21) and the software at a level of tolerance of 400 micrometers. This study indicates that students can learn to self-assess their work using self-reflection in conjunction with faculty guidance, and that it may be possible to use software-based evaluation tools to assist in faculty calibration and as objective grading tools.
Digitized Educational Technology: A Learning Tool Using Remotely Sensed Data
NASA Technical Reports Server (NTRS)
Love, Gloria Carter
1999-01-01
Digitized educational software for different levels of instruction was developed and placed on the web (GeoCities). Students attending the Pre-Engineering Summer 1998 Camp at Dillard University explored the use of the software, which included presentations, applications, and special exercises. Student comments were received and considered for adjustments. The second outreach program included students from Colton Junior High School and Natural Science majors at Dillard University. The Natural Science majors completed a second survey concerning reasons why students select majors in the sciences and mathematics. Two student research assistants (DU) and faculty members/parents of Colton Junior High assisted.
Will the future of knowledge work automation transform personalized medicine?
Naik, Gauri; Bhide, Sanika S
2014-09-01
Today, we live in a world of 'information overload' that demands a high level of knowledge-based work. However, advances in computer hardware and software have opened possibilities to automate 'routine cognitive tasks' for knowledge processing. Engineering intelligent software systems that can process large data sets using unstructured commands and subtle judgments, and that have the ability to learn 'on the fly', is a significant step towards automation of knowledge work. The applications of this technology to high-throughput genomic analysis, database updating, reporting of clinically significant variants, and diagnostic imaging are explored using case studies.
NASA Technical Reports Server (NTRS)
Kapoor, Manju M.; Mehta, Manju
2010-01-01
The goal of this paper is to emphasize the importance of developing complete and unambiguous requirements early in the project cycle (prior to the Preliminary Design Phase). Having a complete set of requirements early in the project cycle allows sufficient time to generate a traceability matrix. Requirements traceability and analysis are key elements in improving the verification and validation process, and thus overall software quality. Traceability is most beneficial when the system changes: if changes are made to high-level requirements, the derived low-level requirements need to be modified as well, as the sketch below illustrates. Traceability ensures that requirements are appropriately and efficiently verified at various levels, whereas analysis ensures that a correctly interpreted set of requirements is produced.
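A traceability matrix can be as simple as a mapping from each high-level requirement to the low-level requirements derived from it, which makes the impact of a change mechanical to compute. The Python sketch below illustrates the idea; the requirement identifiers are hypothetical.

# Toy traceability matrix: high-level requirement -> derived low-level ones
trace = {
    "HLR-1": ["LLR-1.1", "LLR-1.2"],
    "HLR-2": ["LLR-2.1"],
}

def impacted(changed_hlr):
    # Low-level requirements to re-verify when a high-level requirement changes
    return trace.get(changed_hlr, [])

print(impacted("HLR-1"))  # ['LLR-1.1', 'LLR-1.2']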
Software Certification for Temporal Properties With Affordable Tool Qualification
NASA Technical Reports Server (NTRS)
Xia, Songtao; DiVito, Benedetto L.
2005-01-01
It has been recognized that a framework based on proof-carrying code (also called semantic-based software certification in its community) could be used as a candidate software certification process for the avionics industry. To meet this goal, tools in the "trust base" of a proof-carrying code system must be qualified by regulatory authorities. A family of semantic-based software certification approaches is described, each different in expressive power, level of automation and trust base. Of particular interest is the so-called abstraction-carrying code, which can certify temporal properties. When a pure abstraction-carrying code method is used in the context of industrial software certification, the fact that the trust base includes a model checker would incur a high qualification cost. This position paper proposes a hybrid of abstraction-based and proof-based certification methods so that the model checker used by a client can be significantly simplified, thereby leading to lower cost in tool qualification.
Wilson, Anna J; Dehaene, Stanislas; Pinel, Philippe; Revkin, Susannah K; Cohen, Laurent; Cohen, David
2006-05-30
Adaptive game software has been successful in remediation of dyslexia. Here we describe the cognitive and algorithmic principles underlying the development of similar software for dyscalculia. Our software is based on current understanding of the cerebral representation of number and the hypotheses that dyscalculia is due to a "core deficit" in number sense or in the link between number sense and symbolic number representations. "The Number Race" software trains children on an entertaining numerical comparison task, by presenting problems adapted to the performance level of the individual child. We report full mathematical specifications of the algorithm used, which relies on an internal model of the child's knowledge in a multidimensional "learning space" consisting of three difficulty dimensions: numerical distance, response deadline, and conceptual complexity (from non-symbolic numerosity processing to increasingly complex symbolic operations). The performance of the software was evaluated both by mathematical simulations and by five weeks of use by nine children with mathematical learning difficulties. The results indicate that the software adapts well to varying levels of initial knowledge and learning speeds. Feedback from children, parents and teachers was positive. A companion article describes the evolution of number sense and arithmetic scores before and after training. The software, open-source and freely available online, is designed for learning disabled children aged 5-8, and may also be useful for general instruction of normal preschool children. The learning algorithm reported is highly general, and may be applied in other domains.
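The adaptation idea can be pictured as a staircase over the three difficulty dimensions named above. The Python sketch below is a deliberately simplified stand-in (the update rule and step size are assumptions), not the published multidimensional learning-space algorithm.

# 1.0 = easiest setting on each dimension; harden after success, ease after failure
state = {"numerical_distance": 0.9,
         "response_deadline": 0.9,
         "conceptual_complexity": 0.9}

def update(state, correct, step=0.05):
    delta = -step if correct else +step
    return {dim: min(1.0, max(0.0, v + delta)) for dim, v in state.items()}

for outcome in [True, True, False, True]:
    state = update(state, outcome)
print(state)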
Some Future Software Engineering Opportunities and Challenges
NASA Astrophysics Data System (ADS)
Boehm, Barry
This paper provides an update and extension of a 2006 paper, “Some Future Trends and Implications for Systems and Software Engineering Processes,” Systems Engineering, Spring 2006. Some of its challenges and opportunities are similar, such as the need to simultaneously achieve high levels of both agility and assurance. Others have emerged as increasingly important, such as the challenges of dealing with ultralarge volumes of data, with multicore chips, and with software as a service. The paper is organized around eight relatively surprise-free trends and two “wild cards” whose trends and implications are harder to foresee. The eight surprise-free trends are:
High performance bilateral telerobot control.
Kline-Schoder, Robert; Finger, William; Hogan, Neville
2002-01-01
Telerobotic systems are used when the environment that requires manipulation is not easily accessible to humans, as in space, remote, hazardous, or microscopic applications, or to extend the capabilities of an operator by scaling motions and forces. The Creare control algorithm and software are an enabling technology that makes possible guaranteed stability and high performance for force-feedback telerobots. We have developed the necessary theory, structure, and software design required to implement high-performance telerobot systems with time delay. This includes controllers for the master and slave manipulators, the manipulator servo levels, the communication link, and impedance shaping modules. We verified the performance using both bench-top hardware and a commercial microsurgery system.
Vulnerability in Determining the Cost of Information System Project to Avoid Loses
NASA Astrophysics Data System (ADS)
Haryono, Kholid; Ikhsani, Zulfa Amalia
2018-03-01
Context: This study discusses the prioritization of cost variables in software development projects. Objectives: To show the costing models, the variables involved, and how practitioners assess and decide the priority of each variable; to strengthen the information, the risk of ignoring each variable was also confirmed. Method: Two approaches were used. First, a systematic literature review to find the models and variables used to decide the cost of software development; second, confirming and taking judgments from software developers about the level of importance and risk of each variable. Result: About 54 variables appearing in the 10 models discussed were obtained. The variables were categorized into 15 groups based on similarity of meaning, and each group became a variable. Confirmation with practitioners on the level of importance and risk showed two variables that are considered very important and high-risk if ignored: duration and effort. Conclusion: The relationship of variable ratings between the literature study and practitioner confirmation contributes to helping software business actors consider project cost variables.
Byrska-Bishop, Marta; Wallace, John; Frase, Alexander T; Ritchie, Marylyn D
2018-01-01
Motivation: BioBin is an automated bioinformatics tool for the multi-level biological binning of sequence variants. Herein, we present a significant update to BioBin which expands the software to facilitate a comprehensive rare variant analysis and incorporates novel features and analysis enhancements. Results: In BioBin 2.3, we extend our software tool by implementing statistical association testing, updating the binning algorithm, as well as incorporating novel analysis features providing for a robust, highly customizable, and unified rare variant analysis tool. Availability and implementation: The BioBin software package is open source and freely available to users at http://www.ritchielab.com/software/biobin-download Contact: mdritchie@geisinger.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28968757
Proposal for constructing an advanced software tool for planetary atmospheric modeling
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Sims, Michael H.; Podolak, Esther; Mckay, Christopher P.; Thompson, David E.
1990-01-01
Scientific model building can be a time intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot easily be distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We believe that advanced software techniques can facilitate both the model building and model sharing process. We propose to construct a scientific modeling software tool that serves as an aid to the scientist in developing and using models. The proposed tool will include an interactive intelligent graphical interface and a high level, domain specific, modeling language. As a testbed for this research, we propose development of a software prototype in the domain of planetary atmospheric modeling.
Architecture of a Framework for Providing Information Services for Public Transport
García, Carmelo R.; Pérez, Ricardo; Lorenzo, Álvaro; Quesada-Arencibia, Alexis; Alayón, Francisco; Padrón, Gabino
2012-01-01
This paper presents OnRoute, a framework for developing and running ubiquitous software that provides information services to passengers of public transportation, including payment systems and on-route guidance services. To achieve a high level of interoperability, accessibility and context awareness, OnRoute uses the ubiquitous computing paradigm. To guarantee the quality of the software produced, the reliable software principles used in critical contexts, such as automotive systems, are also considered by the framework. The main components of its architecture (run-time, system services, software components and development discipline) and how they are deployed in the transportation network (stations and vehicles) are described in this paper. Finally, to illustrate the use of OnRoute, the development of a guidance service for travellers is explained. PMID:22778585
Automated daily quality control analysis for mammography in a multi-unit imaging center.
Sundell, Veli-Matti; Mäkelä, Teemu; Meaney, Alexander; Kaasalainen, Touko; Savolainen, Sauli
2018-01-01
Background: The high requirements for mammography image quality necessitate a systematic quality assurance process. Digital imaging allows automation of the image quality analysis, which can potentially improve repeatability and objectivity compared to a visual evaluation made by the users. Purpose: To develop an automatic image quality analysis software for daily mammography quality control in a multi-unit imaging center. Material and Methods: An automated image quality analysis software using the discrete wavelet transform and multiresolution analysis was developed for the American College of Radiology accreditation phantom. The software was validated by analyzing 60 randomly selected phantom images from six mammography systems and 20 phantom images with different dose levels from one mammography system. The results were compared to a visual analysis made by four reviewers. Additionally, long-term image quality trends of a full-field digital mammography system and a computed radiography mammography system were investigated. Results: The automated software produced feature detection levels comparable to visual analysis. The agreement was good in the case of fibers, while the software detected somewhat more microcalcifications and characteristic masses. Long-term follow-up via a quality assurance web portal demonstrated the feasibility of using the software for monitoring the performance of mammography systems in a multi-unit imaging center. Conclusion: Automated image quality analysis enables monitoring the performance of digital mammography systems in an efficient, centralized manner.
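To make the wavelet step concrete, the sketch below uses PyWavelets for a multiresolution decomposition of a (random stand-in) phantom image and measures detail-band energy, the kind of quantity on which such feature detection can be built. The paper's actual thresholds and ACR scoring logic are not reproduced here.

import numpy as np
import pywt

image = np.random.rand(256, 256)          # stand-in for a phantom image
coeffs = pywt.wavedec2(image, "db2", level=3)
cH, cV, cD = coeffs[-1]                   # finest-scale detail sub-bands
detail_energy = sum((c ** 2).sum() for c in (cH, cV, cD))
print(detail_energy)  # compare against a baseline to flag degraded quality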
Cleared Hot: A Forward Air Control (Airborne) Concepts Trainer
2006-09-01
list of high-level objectives imitating a detailed requirements document. In those cases, software developers are forced to make best guesses about how to meet those objectives. Is there a better method? We embarked on a project to create a… with participants at the end of an 18-month development cycle, we did the next best thing: Cleared Hot was taken to the mission subject matter…
Achieving High Performance With TCP Over 40 GbE on NUMA Architectures for CMS Data Acquisition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bawej, Tomasz; et al.
2014-01-01
TCP and the socket abstraction have barely changed over the last two decades, but at the network layer there has been a giant leap from a few megabits to 100 gigabits in bandwidth. At the same time, CPU architectures have evolved into the multicore era, and applications are expected to make full use of all available resources. Applications in the data acquisition domain based on the standard socket library running on a Non-Uniform Memory Access (NUMA) architecture are unable to reach full efficiency and scalability without the software being adequately aware of the IRQ (Interrupt Request), CPU, and memory affinities. During the first long shutdown of the LHC, the CMS DAQ system is being upgraded for operation from 2015 onwards, and a new software component has been designed and developed in the CMS online framework for transferring data with sockets. This software wraps the low-level socket library to ease higher-level programming with an API based on an asynchronous, event-driven model similar to the DAT uDAPL API. It is an event-based application with NUMA optimizations that allows for a high throughput of data across a large distributed system. This paper describes the architecture, the technologies involved, and the performance measurements of the software in the context of CMS distributed event building.
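The affinity point generalizes beyond the CMS framework: on Linux, a receiving process can be pinned to CPUs local to the NIC's NUMA node and then serve sockets from a readiness-driven loop. The Python sketch below illustrates only that pattern; the CPU set is hypothetical, and the real component is C++.

import os
import selectors
import socket

os.sched_setaffinity(0, {0, 1})          # pin to CPUs on the NIC-local NUMA node

sel = selectors.DefaultSelector()
srv = socket.socket()
srv.bind(("0.0.0.0", 9000))
srv.listen()
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ)

while True:                               # event-driven, non-blocking loop
    for key, _ in sel.select(timeout=1.0):
        if key.fileobj is srv:
            conn, _ = srv.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = key.fileobj.recv(65536)
            if not data:
                sel.unregister(key.fileobj)
                key.fileobj.close()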
Generic and Automated Data Evaluation in Analytical Measurement.
Adam, Martin; Fleischer, Heidi; Thurow, Kerstin
2017-04-01
In the past years, automation has become more and more important in the field of elemental and structural chemical analysis, to reduce the high degree of manual operation, the processing time, and human errors. The high number of data points generated requires fast and automated data evaluation. For handling the preprocessed export data from different analytical devices running software from various vendors, a standardized solution that requires no programming knowledge is preferable. In modern laboratories, multiple users use such software on multiple personal computers with different operating systems (e.g., Windows, Macintosh, Linux), and mobile devices such as smartphones and tablets have gained growing importance. The developed software, Project Analytical Data Evaluation (ADE), is implemented as a web application. To transmit the pre-evaluated data from the device software to the Project ADE, the exported XML report files are detected and the included data are imported into the entities database using the Data Upload software. Different calculation types for a sample within one measurement series (e.g., method validation) are identified using information tags inside the sample name. The results are presented in tables and diagrams at different information levels (general, detailed for one analyte or sample).
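The XML import step might look like the following Python sketch. The report schema (tag and attribute names) is invented for illustration, since the paper does not publish one.

import xml.etree.ElementTree as ET

report = '''<report device="ICP-MS-1">
  <sample name="blank_01"><analyte symbol="Pb">0.02</analyte></sample>
  <sample name="probe_A"><analyte symbol="Pb">1.37</analyte></sample>
</report>'''

root = ET.fromstring(report)
for sample in root.iter("sample"):
    for analyte in sample.iter("analyte"):
        print(sample.get("name"), analyte.get("symbol"), float(analyte.text))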
OPEN-SOURCE SOFTWARE IN DENTISTRY: A SYSTEMATIC REVIEW.
Chruściel-Nogalska, Małgorzata; Smektała, Tomasz; Tutak, Marcin; Sporniak-Tutak, Katarzyna; Olszewski, Raphael
2017-01-01
Technological development and the need for electronic health records management have resulted in the need for a computer with dedicated, commercial software in daily dental practice. An alternative to commercial software may be open-source solutions. Therefore, this study reviewed the current literature on the availability and use of open-source software (OSS) in dentistry. A comprehensive database search was performed on February 1, 2017. Only articles published in peer-reviewed journals with a focus on the use or description of OSS were retrieved. The level of evidence, according to the Oxford EBM Centre Levels of Evidence Scale, was classified for all studies; experimental studies underwent additional quality-of-reporting assessment. The screening and evaluation process resulted in twenty-one studies from the 1,940 articles found, 10 of them experimental studies. None of the articles provided level 1 evidence, and only one study was considered high quality following quality assessment. Twenty-six different OSS programs were described in the included studies, of which ten were used for image visualization, five for healthcare records management, four for education processes, one for remote consultation and simulation, and six for general purposes. Our analysis revealed that the dental literature on OSS consists of scarce, incomplete, and methodologically low-quality information.
Orbit targeting specialist function: Level C formulation requirements
NASA Technical Reports Server (NTRS)
Dupont, A.; Mcadoo, S.; Jones, H.; Jones, A. K.; Pearson, D.
1978-01-01
A definition of the level C requirements for onboard maneuver targeting software is provided. Included are revisions of the level C software requirements delineated in JSC IN 78-FM-27, Proximity Operations Software: Level C Requirements, dated May 1978. The software supports the terminal phase midcourse (TPM) maneuver, braking, and close-in operations, as well as computation of the rendezvous corrective combination maneuver (NCC) and the terminal phase initiation (TPI) maneuver. Specific formulation is contained here for the orbit targeting specialist function, including the processing logic, linkage, and data base definitions for all modules. The crew interface with the software is through the keyboard and the ORBIT-TGT display.
NASA Technical Reports Server (NTRS)
Markos, H.
1978-01-01
Status of the computer programs dealing with space shuttle orbiter avionics is reported. Specific topics covered include: delivery status; SSW software; SM software; DL software; GNC software; level 3/4 testing; level 5 testing; performance analysis; SDL readiness for entry first article configuration inspection; and verification assessment.
Evaluation of the FIR Example using Xilinx Vivado High-Level Synthesis Compiler
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Zheming; Finkel, Hal; Yoshii, Kazutomo
Compared to central processing units (CPUs) and graphics processing units (GPUs), field-programmable gate arrays (FPGAs) have major advantages in reconfigurability and performance achieved per watt. The FPGA development flow has been augmented with a high-level synthesis (HLS) flow that can convert programs written in a high-level programming language to a hardware description language (HDL). Using high-level programming languages such as C, C++, and OpenCL for FPGA-based development allows software developers who have little FPGA knowledge to take advantage of FPGA-based application acceleration; this improves developer productivity and makes FPGA-based acceleration accessible to hardware and software developers alike. The Xilinx Vivado HLS compiler is a high-level synthesis tool that enables C, C++, and SystemC specifications to be targeted directly to Xilinx FPGAs without the need to create RTL manually. A white paper [1] recently published by Xilinx uses a finite impulse response (FIR) example to demonstrate the variable-precision features of the Vivado HLS compiler and the resource and power benefits of converting a design from floating point to fixed point. To better understand the variable-precision features in terms of resource usage and performance, this report presents experimental results from evaluating the FIR example using Vivado HLS 2017.1 and a Kintex UltraScale FPGA. In addition, we evaluated the half-precision floating-point data type against the double-precision and single-precision data types and present the detailed results.
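The float-to-fixed trade-off that the white paper explores can be previewed numerically before any synthesis. The Python sketch below quantizes FIR coefficients to Q1.15 and measures the resulting error; the tap values are arbitrary, and this is a numerical illustration, not Vivado HLS code.

import numpy as np

taps = np.array([0.02, -0.11, 0.3, 0.58, 0.3, -0.11, 0.02])
q15 = np.round(taps * 2**15).astype(np.int64)          # Q1.15 coefficients

x = np.random.uniform(-1, 1, 64)
y_float = np.convolve(x, taps, mode="valid")
x_q = np.round(x * 2**15).astype(np.int64)             # quantized input
y_fixed = np.convolve(x_q, q15, mode="valid") / 2**30  # undo the two scalings

print(np.max(np.abs(y_float - y_fixed)))               # worst-case quantization error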
Making Presentation Software Accessible to High School Students with Intellectual Disabilities
ERIC Educational Resources Information Center
Doyle, Mary Beth; Giangreco, Michael F.
2009-01-01
As students with moderate and severe intellectual disabilities transition from inclusive elementary and middle schools to high schools, they deserve similar opportunities for inclusive educational experiences at this next level--namely to participate in general education classes and other activities (e.g., co-curricular) with their classmates…
High School Students' Social Media Usage Habits
ERIC Educational Resources Information Center
Tezci, Erdogan; Içen, Mustafa
2017-01-01
Social media, an important product of computer and Internet technologies, has a usage level that grows day by day. The increasing level of social media usage creates opportunities for new software development and for investment in this area. From this aspect, therefore, social media has not only an economic function but also makes persons participate in…
NASA Astrophysics Data System (ADS)
Barbu, Alina L.; Laurent-Varin, Julien; Perosanz, Felix; Mercier, Flavien; Marty, Jean-Charles
2018-01-01
A more efficient filter needed to be implemented in the GINS CNES geodetic software to satisfy users who want to compute high-rate GNSS PPP solutions. We selected the SRI (square-root information) approach and a QR factorization technique, including an innovative algorithm that optimizes the matrix-reduction step; a full description of this algorithm is given for future users. The new capabilities of the software were tested using a set of 1 Hz data from the Japanese GEONET network that includes the Mw 9.0 2011 Tohoku earthquake. The station-coordinate solutions agreed at the sub-decimeter level with previous publications, as well as with solutions we computed with the Natural Resources Canada software. An additional benefit of the SRI filter implementation is the capability to estimate high-rate tropospheric parameters as well. Since the CPU time to estimate a 1 Hz kinematic solution from 1 h of data is now less than 1 min, we could produce series of coordinates for the full 1300 stations of the Japanese network. The corresponding movie shows the impressive co-seismic deformation as well as the wave propagation along the island. The processing was straightforward using a cluster of PCs, which illustrates the new potential of the GINS software for massive-network, high-rate PPP processing.
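The core of an SRI measurement update is a QR re-triangularization of the stacked prior and new observation rows. The numpy sketch below shows that step in isolation, as an illustration of the general technique rather than the GINS algorithm or its optimized reduction.

import numpy as np

def sri_update(R, z, A, y):
    # R, z: prior square-root information matrix and right-hand side
    # A, y: new (whitened) observation rows and measurements
    stacked = np.vstack([np.hstack([R, z[:, None]]),
                         np.hstack([A, y[:, None]])])
    _, r = np.linalg.qr(stacked)          # orthogonal reduction
    n = R.shape[1]
    return r[:n, :n], r[:n, n]

R, z = np.eye(3), np.zeros(3)             # weak prior on 3 coordinates
A = np.random.randn(5, 3)
y = A @ np.array([1.0, -2.0, 0.5])        # synthetic noise-free observations
R, z = sri_update(R, z, A, y)
print(np.linalg.solve(R, z))              # estimate close to [1, -2, 0.5]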
Software Template for Instruction in Mathematics
NASA Technical Reports Server (NTRS)
Shelton, Robert O.; Moebes, Travis A.; Beall, Anna
2005-01-01
Intelligent Math Tutor (IMT) is a software system that serves as a template for creating software for teaching mathematics. IMT can be easily connected to artificial-intelligence software and other analysis software through input and output of files. IMT provides an easy-to-use interface for generating courses that include tests that contain both multiple-choice and fill-in-the-blank questions, and enables tracking of test scores. IMT makes it easy to generate software for Web-based courses or to manufacture compact disks containing executable course software. IMT also can function as a Web-based application program, with features that run quickly on the Web, while retaining the intelligence of a high-level language application program with many graphics. IMT can be used to write application programs in text, graphics, and/or sound, so that the programs can be tailored to the needs of most handicapped persons. The course software generated by IMT follows a "back to basics" approach of teaching mathematics by inducing the student to apply creative mathematical techniques in the process of learning. Students are thereby made to discover mathematical fundamentals and thereby come to understand mathematics more deeply than they could through simple memorization.
Automated, Parametric Geometry Modeling and Grid Generation for Turbomachinery Applications
NASA Technical Reports Server (NTRS)
Harrand, Vincent J.; Uchitel, Vadim G.; Whitmire, John B.
2000-01-01
The objective of this Phase I project is to develop a highly automated software system for rapid geometry modeling and grid generation for turbomachinery applications. The proposed system features a graphical user interface for interactive control, a direct interface to commercial CAD/PDM systems, support for IGES geometry output, and a scripting capability for obtaining a high level of automation and end-user customization of the tool. The developed system is fully parametric and highly automated, and therefore significantly reduces the turnaround time for 3D geometry modeling, grid generation, and model setup. This facilitates design environments in which a large number of cases need to be generated, such as for parametric analysis and design optimization of turbomachinery equipment. In Phase I we successfully demonstrated the feasibility of the approach. The system has been tested on a wide variety of turbomachinery geometries, including several impellers and a multistage rotor-stator combination. In Phase II, we plan to integrate the developed system with turbomachinery design software and with commercial CAD/PDM software.
SpaceWire Driver Software for Special DSPs
NASA Technical Reports Server (NTRS)
Clark, Douglas; Lux, James; Nishimoto, Kouji; Lang, Minh
2003-01-01
A computer program provides a high-level C-language interface to electronics circuitry that controls a SpaceWire interface in a system based on a space-qualified version of the ADSP-21020 digital signal processor (DSP). SpaceWire is a spacecraft-oriented standard for packet-switching data-communication networks that comprise nodes connected through bidirectional digital serial links that utilize low-voltage differential signaling (LVDS). The software is tailored to the SMCS-332 application-specific integrated circuit (ASIC) (also available as the TSS901E), which provides three high-speed (150 Mbps) serial point-to-point links compliant with the proposed Institute of Electrical and Electronics Engineers (IEEE) Standard 1355.2 and the equivalent European Space Agency (ESA) Standard ECSS-E-50-12. In the specific application of this software, the SpaceWire ASIC was combined with the DSP processor, memory, and control logic in a Multi-Chip Module DSP (MCM-DSP). The software is a collection of low-level driver routines that provide a simple message-passing application programming interface (API) for software running on the DSP. Routines are provided for interrupt-driven access to the two styles of interface provided by the SMCS: (1) the "word at a time" conventional host interface (HOCI); and (2) a higher-performance "dual port memory" style interface (COMI).
Petkewich, Matthew D.; Daamen, Ruby C.; Roehl, Edwin A.; Conrads, Paul
2016-09-29
The generation of Everglades Depth Estimation Network (EDEN) daily water-level and water-depth maps is dependent on high-quality real-time data from over 240 water-level stations. To increase the accuracy of the daily water-surface maps, the Automated Data Assurance and Management (ADAM) tool was created by the U.S. Geological Survey as part of Greater Everglades Priority Ecosystems Science. The ADAM tool is used to provide accurate quality-assurance review of the real-time data from the EDEN network and allows estimation or replacement of missing or erroneous data. This user's manual describes how to install and operate the ADAM software. The file structure and operation of the ADAM software are explained using examples.
Wallot, Sebastian; Roepstorff, Andreas; Mønster, Dan
2016-01-01
We introduce Multidimensional Recurrence Quantification Analysis (MdRQA) as a tool to analyze multidimensional time-series data. We show how MdRQA can be used to capture the dynamics of high-dimensional signals, and how MdRQA can be used to assess coupling between two or more variables. In particular, we describe applications of the method in research on joint and collective action, as it provides a coherent analysis framework to systematically investigate dynamics at different group levels: from individual dynamics, to dyadic dynamics, up to the global group level of arbitrary size. The Appendix in the Supplementary Material contains a software implementation in MATLAB to calculate MdRQA measures.
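Although the published implementation is MATLAB, the heart of (Md)RQA fits in a few lines of numpy: treat each time point as a vector, threshold pairwise distances into a recurrence matrix, and quantify it. The sketch below computes only the recurrence rate, with an arbitrary radius; it is an illustration, not the authors' toolbox.

import numpy as np

def recurrence_rate(data, radius=0.5):
    # data: (time, dimensions) array; %REC = share of recurrent point pairs
    d = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=-1)
    return (d < radius).mean() * 100.0

signals = np.random.randn(200, 3)          # e.g., one channel per group member
print(recurrence_rate(signals))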
Popok, David W; West, Christopher R; Hubli, Michele; Currie, Katharine D; Krassioukov, Andrei V
2017-02-01
Cardiovascular disease is one of the leading causes of morbidity and mortality in the spinal cord injury (SCI) population. SCI may disrupt autonomic cardiovascular homeostasis, which can lead to persistent hypotension, irregular diurnal rhythmicity, and the development of autonomic dysreflexia (AD). There is currently no software available to perform automated detection and evaluation of cardiovascular autonomic dysfunctions from recordings such as those generated by 24 h ambulatory blood pressure monitoring (ABPM) in the clinical setting. The objective of this study was to compare the efficacy of a novel 24 h ABPM Autonomic Dysfunction Detection Software against manual detection, and to use the software to demonstrate the relationships between level of injury and the degree of autonomic cardiovascular impairment in a large cohort of individuals with SCI. A total of 46 individuals with cervical (group 1, n = 37) or high thoracic (group 2, n = 9) SCI participated in the study. Outcome measures included the frequency and severity of AD, the frequency of hypotensive events, and diurnal variations in blood pressure and heart rate. There was good agreement between the software and manual detection of AD events (Bland-Altman limits of agreement = ±1.458 events). Cervical SCI presented with more frequent (p = 0.0043) and more severe AD (p = 0.0343) than did high thoracic SCI. Cervical SCI exhibited higher systolic and diastolic blood pressure during the night and lower heart rate during the day than high thoracic SCI. In conclusion, our ABPM AD Detection Software was as effective as manual detection in detecting the frequency and severity of AD and hypotensive events, suggesting that this software can be used in the clinical setting to expedite ABPM analyses.
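A minimal detection pass over an ABPM series might look like the sketch below. The >=20 mmHg systolic rise above baseline is the conventional AD criterion in SCI, but the exact rules and baseline handling of the authors' software are not given in the abstract, so everything here is an assumption-laden illustration.

def detect_ad_events(sbp, baseline, rise=20):
    # Indices where systolic BP exceeds baseline by at least `rise` mmHg
    return [i for i, v in enumerate(sbp) if v - baseline >= rise]

sbp_series = [112, 115, 110, 138, 141, 118, 113]   # hypothetical readings (mmHg)
print(detect_ad_events(sbp_series, baseline=112))  # -> [3, 4]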
NASA Astrophysics Data System (ADS)
Zhao, Yongli; Ji, Yuefeng; Zhang, Jie; Li, Hui; Xiong, Qianjin; Qiu, Shaofeng
2014-08-01
Ultrahigh throughput capacity requirements are challenging current optical switching nodes, given the fast development of data center networks. Pbit/s-level all-optical switching networks will need to be deployed soon, which will greatly increase the complexity of node architectures. How to control the future network and node equipment together becomes a new problem. An enhanced Software Defined Networking (eSDN) control architecture is proposed in this paper, consisting of a Provider NOX (P-NOX) and a Node NOX (N-NOX). With the cooperation of the P-NOX and N-NOX, flexible control of the entire network can be achieved. An all-optical switching network testbed has been experimentally demonstrated with efficient eSDN control. The Pbit/s-level all-optical switching nodes in the testbed are implemented with a multi-dimensional switching architecture, i.e. multi-level and multi-planar. Due to space and cost limitations, each optical switching node is only equipped with four input line boxes and four output line boxes. Experimental results are given to verify the performance of our proposed control and switching architecture.
ProjectQ: Compiling quantum programs for various backends
NASA Astrophysics Data System (ADS)
Haener, Thomas; Steiger, Damian S.; Troyer, Matthias
In order to control quantum computers beyond the current generation, a high-level quantum programming language and optimizing compilers will be essential. Therefore, we have developed ProjectQ, an open-source software framework to facilitate implementing and running quantum algorithms both in software and on actual quantum hardware. Here, we introduce the backends available in ProjectQ. These include a high-performance simulator and emulator to test and debug quantum algorithms, tools for resource estimation, and interfaces to several small-scale quantum devices. We demonstrate the workings of the framework and show how easily it can be further extended to control upcoming quantum hardware.
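In the style of the project's documentation, a minimal ProjectQ program (assuming the default simulator backend) allocates a qubit, applies a Hadamard gate, and measures, yielding a random bit:

from projectq import MainEngine
from projectq.ops import H, Measure

eng = MainEngine()            # defaults to the high-performance simulator
qubit = eng.allocate_qubit()
H | qubit                     # put the qubit into an equal superposition
Measure | qubit
eng.flush()                   # send the circuit to the backend
print(int(qubit))             # 0 or 1 with equal probability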
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kidd, M.E.C.
1997-02-01
The goal of our work is to provide a high level of confidence that critical software-driven event sequences are maintained in the face of hardware failures, malevolent attacks, and harsh or unstable operating environments. This will be accomplished by providing dynamic fault-management measures directly to the software developer and to their varied development environments. The methodology employed here is inspired by previous work on path expressions. This paper presents the perceived problems, a brief overview of path expressions, the proposed methods, and a discussion of the differences between the proposed methods and traditional path-expression usage and implementation.
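One lightweight way to approximate a path-expression check, purely as an illustration of the idea rather than the paper's mechanism, is to compile the admissible event ordering into a regular expression over event symbols and match observed sequences against it:

import re

# Hypothetical critical sequence: arm, then one or more fires, then disarm
# (events encoded as single letters: a=arm, f=fire, d=disarm)
PATH_EXPR = "af+d"

def sequence_ok(events):
    return re.fullmatch(PATH_EXPR, "".join(events)) is not None

print(sequence_ok(["a", "f", "f", "d"]))  # True  -- sequence maintained
print(sequence_ok(["f", "a", "d"]))       # False -- fault: fire before arm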
Critical Design Decisions of The Planck LFI Level 1 Software
NASA Astrophysics Data System (ADS)
Morisset, N.; Rohlfs, R.; Türler, M.; Meharga, M.; Binko, P.; Beck, M.; Frailis, M.; Zacchei, A.
2010-12-01
The PLANCK satellite, with two on-board instruments, a Low Frequency Instrument (LFI) and a High Frequency Instrument (HFI), was launched on May 14th, 2009, on an Ariane 5. The ISDC Data Centre for Astrophysics in Versoix, Switzerland has developed and maintains the Planck LFI Level 1 software for the Data Processing Centre (DPC) in Trieste, Italy. The main tasks of the Level 1 processing are to retrieve the daily available scientific and housekeeping (HK) data of the LFI instrument, the Sorption Cooler, and the 4K Cooler from the Mission Operation Centre (MOC) in Darmstadt; to sort them by time and by type (detector, observing mode, etc.); to extract the spacecraft attitude information from auxiliary files; to flag the data according to several criteria; and to archive the resulting Time Ordered Information (TOI), which will then be used to produce maps of the sky in different spectral bands. The output of the Level 1 software is the set of TOI files in FITS format, later ingested into the Data Management Component (DMC) database. This software has been used during different phases of the LFI instrument development. We started by reusing some ISDC components for the LFI Qualification Model (QM) and completely reworked the software for the Flight Model (FM). This was motivated by critical design decisions taken jointly with the DPC. The main questions were: (a) the choice of the data format: FITS or DMC? (b) the design of the pipelines: use of the Planck Process Coordinator (ProC) or a simple Perl script? (c) should we adapt the existing QM software or restart from scratch? The timeline and available manpower were also important issues to take into account. We present here the orientation of our choices and discuss their pertinence based on the experience of the final pre-launch tests and the start of real Planck LFI operations.
High-Level Performance Modeling of SAR Systems
NASA Technical Reports Server (NTRS)
Chen, Curtis
2006-01-01
SAUSAGE (Still Another Utility for SAR Analysis that's General and Extensible) is a computer program for modeling the performance of synthetic-aperture radar (SAR) or interferometric synthetic-aperture radar (InSAR or IFSAR) systems. The user is assumed to be familiar with the basic principles of SAR imaging and interferometry. Given design parameters (e.g., altitude, power, and bandwidth) that characterize a radar system, the software predicts various performance metrics (e.g., signal-to-noise ratio and resolution). SAUSAGE is intended to be a general software tool for quick, high-level evaluation of radar designs; it is not meant to capture all the subtleties, nuances, and particulars of specific systems. SAUSAGE was written to facilitate the exploration of engineering tradeoffs within the multidimensional space of design parameters. Typically, this space is examined through an iterative process of adjusting the values of the design parameters and examining the effects of the adjustments on the overall performance of the system at each iteration. The software is designed to be modular and extensible to enable consideration of a variety of operating modes and antenna beam patterns, including, for example, strip-map and spotlight SAR acquisitions, polarimetry, burst modes, and squinted geometries.
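The kind of prediction involved can be illustrated with the standard radar equation. The Python sketch below computes a single-pulse SNR from made-up design parameters; none of the numbers or simplifications are from SAUSAGE itself.

import math

k = 1.38e-23                                   # Boltzmann constant, J/K
Pt, G, lam = 1500.0, 10**(35/10), 0.24         # power (W), antenna gain, wavelength (m)
sigma, R = 1.0, 800e3                          # target RCS (m^2), slant range (m)
T, B = 290.0, 80e6                             # system temperature (K), bandwidth (Hz)
F, L = 10**(4/10), 10**(2/10)                  # noise figure, system losses

Pr = Pt * G**2 * lam**2 * sigma / ((4 * math.pi)**3 * R**4 * L)
snr_db = 10 * math.log10(Pr / (k * T * B * F))
print(f"single-pulse SNR = {snr_db:.1f} dB")   # coherent processing adds further gain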
Development of a CCD based solar speckle imaging system
NASA Astrophysics Data System (ADS)
Nisenson, Peter; Stachnik, Robert V.; Noyes, Robert W.
1986-02-01
A program to develop software and hardware for obtaining high-angular-resolution images of the solar surface is described. The program included procurement of a charge-coupled-device (CCD) imaging system; extensive laboratory and remote-site testing of the camera system; development of a software package for speckle image reconstruction, which was eventually installed and tested at the Sacramento Peak Observatory; and experiments with the CCD system (coupled to an image intensifier) for low-light-level, narrow-spectral-band solar imaging.
NASA Astrophysics Data System (ADS)
Niblack, Carlton W.; Zhu, Xiaoming; Hafner, James L.; Breuel, Tom; Ponceleon, Dulce B.; Petkovic, Dragutin; Flickner, Myron D.; Upfal, Eli; Nin, Sigfredo I.; Sull, Sanghoon; Dom, Byron E.; Yeo, Boon-Lock; Srinivasan, Savitha; Zivkovic, Dan; Penner, Mike
1997-12-01
QBIC™ (Query By Image Content) is a set of technologies and associated software that allows a user to search, browse, and retrieve image, graphic, and video data from large on-line collections. This paper discusses current research directions of the QBIC project, such as indexing for high-dimensional multimedia data, retrieval of gray-level images, and storyboard generation suitable for video. It describes aspects of QBIC software, including scripting tools, application interfaces, and available GUIs, and gives examples of applications and demonstration systems using it.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lingerfelt, Eric J; Messer, II, Otis E
2017-01-02
The Bellerophon software system supports CHIMERA, a production-level HPC application that simulates the evolution of core-collapse supernovae. Bellerophon enables CHIMERA's geographically dispersed team of collaborators to perform job monitoring and real-time data analysis from multiple supercomputing resources, including platforms at OLCF, NERSC, and NICS. Its multi-tier architecture provides an encapsulated, end-to-end software solution that enables the CHIMERA team to quickly and easily access highly customizable animated and static views of results from anywhere in the world via a cross-platform desktop application.
NASA Technical Reports Server (NTRS)
Kavi, K. M.
1984-01-01
There have been a number of simulation packages developed for designing, testing, and validating computer systems, digital systems, and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment in which highly parallel complex systems can be defined, evaluated at all levels, and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.
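The firing rule that makes data flow uniform across hardware and software is simple: a node may execute as soon as all of its input tokens are available. A toy Python illustration follows; the node/graph encoding is ours, for exposition only.

def fire_when_ready(node, tokens):
    # node: (name, input names, function); tokens: values produced so far
    name, inputs, fn = node
    if all(i in tokens for i in inputs):
        tokens[name] = fn(*(tokens[i] for i in inputs))

tokens = {"a": 2, "b": 3}
graph = [("sum", ("a", "b"), lambda x, y: x + y),
         ("out", ("sum",), lambda s: s * 10)]
for node in graph:            # one sweep suffices for this ordered toy graph
    fire_when_ready(node, tokens)
print(tokens["out"])          # 50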
Crystal MD: The massively parallel molecular dynamics software for metal with BCC structure
NASA Astrophysics Data System (ADS)
Hu, Changjun; Bai, He; He, Xinfu; Zhang, Boyao; Nie, Ningming; Wang, Xianmeng; Ren, Yingwen
2017-02-01
Material irradiation effects are among the most important issues in the use of nuclear power. However, the lack of high-throughput irradiation facilities and of knowledge of the evolution process leads to little understanding of the addressed issues. With the help of high-performance computing, we can gain a further understanding of materials at the micro level. In this paper, a new data structure is proposed for the massively parallel simulation of the evolution of metal materials in an irradiation environment. Based on the proposed data structure, we developed new molecular dynamics software named Crystal MD. Simulations with Crystal MD achieved over 90% parallel efficiency in test cases, and the software requires more than 25% less memory on multi-core clusters than LAMMPS and IMD, two popular molecular dynamics packages. Using Crystal MD, a two-trillion-particle simulation has been performed on the Tianhe-2 cluster.
NASA Astrophysics Data System (ADS)
Mills, M. W.; Hutchinson, Matthew J.
2005-05-01
A variety of consumer applications, eg cellphone camera lenses, optical storage devices, digital cameras, etc, are driving the demand for small, high aspheric departure rotationally-symmetric moulded optics, manufactured both in polymer and glass materials. The mould tools for such components are manufactured by ultra-high precision techniques such as single point diamond turning and ultra-precision grinding, and must be accurate to <1/10μm levels for form, and exhibit nanometric surface finish quality. The aspheric forms of such components' optical surfaces exhibit high departure from best-fit sphere towards their outer edge, which renders this outer region especially critical for optical performance. The high slope of these components at the clear aperture has caused some restrictions on the use of profilometry in the measurement of form across their full diameter. Taylor Hobson designs and manufactures a range of ultra-precision profilometers for use in such industries as aspheric optics fabrication. In order to address the issues described, a new measurement system, Taylor Hobson Form Talysurf PGI 1250, has been developed, which contains new Aspheric Data Fusion Software, as well as Asphero-Diffractive Analysis Software, allowing the entire diametric profile to be analysed to the desired level of accuracy. This development removes the previous limitation of maximum slope for this type of measurement, thus enabling better quality control of high slope, high aspheric departure optics. Measurement data from the Form Talysurf PGI 1250 can be fed back directly to the machine tool, in order to optimize the form of the optical mould.
Upper Secondary and Vocational Level Teachers at Social Software
ERIC Educational Resources Information Center
Valtonen, Teemu; Kontkanen, Sini; Dillon, Patrick; Kukkonen, Jari; Väisänen, Pertti
2014-01-01
This study focuses on upper secondary and vocational level teachers as users of social software, i.e. what software they use during their leisure and work, and for what purposes they use software in teaching. The study is theorised within a technological pedagogical content knowledge framework; the emphasis is especially on technological knowledge…
Wilson, Anna J; Dehaene, Stanislas; Pinel, Philippe; Revkin, Susannah K; Cohen, Laurent; Cohen, David
2006-01-01
Background Adaptive game software has been successful in remediation of dyslexia. Here we describe the cognitive and algorithmic principles underlying the development of similar software for dyscalculia. Our software is based on current understanding of the cerebral representation of number and the hypotheses that dyscalculia is due to a "core deficit" in number sense or in the link between number sense and symbolic number representations. Methods "The Number Race" software trains children on an entertaining numerical comparison task, by presenting problems adapted to the performance level of the individual child. We report full mathematical specifications of the algorithm used, which relies on an internal model of the child's knowledge in a multidimensional "learning space" consisting of three difficulty dimensions: numerical distance, response deadline, and conceptual complexity (from non-symbolic numerosity processing to increasingly complex symbolic operations). Results The performance of the software was evaluated both by mathematical simulations and by five weeks of use by nine children with mathematical learning difficulties. The results indicate that the software adapts well to varying levels of initial knowledge and learning speeds. Feedback from children, parents and teachers was positive. A companion article [1] describes the evolution of number sense and arithmetic scores before and after training. Conclusion The software, open-source and freely available online, is designed for learning disabled children aged 5–8, and may also be useful for general instruction of normal preschool children. The learning algorithm reported is highly general, and may be applied in other domains. PMID:16734905
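The full adaptation algorithm is specified in the paper itself; as a rough illustration of the idea only (not the published specification), the sketch below moves a learner's position in the three-dimensional difficulty space named in the abstract, staircase-style: difficulty rises after a correct answer and falls faster after an error. Step sizes and the update rule are invented for illustration.

```python
# Hypothetical staircase adaptation in the 3-D "learning space"
# (numerical distance, response deadline, conceptual complexity).

DIMS = ("numerical_distance", "response_deadline", "conceptual_complexity")

def adapt(position, correct, step=0.05):
    """Raise difficulty after a correct answer, lower it faster after an error."""
    delta = step if correct else -2 * step
    return {d: min(1.0, max(0.0, position[d] + delta)) for d in DIMS}

position = {d: 0.1 for d in DIMS}          # start easy
for answer_correct in [True, True, False, True]:
    position = adapt(position, answer_correct)
print(position)
```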
The Software Industry in the Coffee Triangle of Colombia
NASA Astrophysics Data System (ADS)
Cuesta, Albeiro; Joyanes, Luis; López, Marcelo
The so-called "Coffee Triangle" is located in the Andean Region of central Colombia, South America, and comprises the Departments of Caldas, Quindío and Risaralda. The region has been characterized by the production of coffee as a worldwide industry supported by high quality and research standards, and these have become the key bastions for competing in international markets. After the decline of the coffee industry, it is necessary to consider alternatives, informed by the success of the software industry at the global level. The strengthening of the software industry in the Coffee Triangle seeks to establish a productive alternative for regional growth in a visionary way, where knowledge, a fundamental input of the software industry, is emerging as one of the greatest assets present in this geographical area of Colombia.
Construction of an advanced software tool for planetary atmospheric modeling
NASA Technical Reports Server (NTRS)
Friedland, Peter; Keller, Richard M.; Mckay, Christopher P.; Sims, Michael H.; Thompson, David E.
1993-01-01
Scientific model-building can be a time intensive and painstaking process, often involving the development of large complex computer programs. Despite the effort involved, scientific models cannot be distributed easily and shared with other scientists. In general, implemented scientific models are complicated, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We propose to construct a scientific modeling software tool that serves as an aid to the scientist in developing, using and sharing models. The proposed tool will include an interactive intelligent graphical interface and a high-level domain-specific modeling language. As a testbed for this research, we propose to develop a software prototype in the domain of planetary atmospheric modeling.
Managing MDO Software Development Projects
NASA Technical Reports Server (NTRS)
Townsend, J. C.; Salas, A. O.
2002-01-01
Over the past decade, the NASA Langley Research Center developed a series of 'grand challenge' applications demonstrating the use of parallel and distributed computation and multidisciplinary design optimization. All but the last of these applications were focused on the high-speed civil transport vehicle; the final application focused on reusable launch vehicles. Teams of discipline experts developed these multidisciplinary applications by integrating legacy engineering analysis codes. As teams became larger and the application development became more complex with increasing levels of fidelity and numbers of disciplines, the need for applying software engineering practices became evident. This paper briefly introduces the application projects and then describes the approaches taken in project management and software engineering for each project; lessons learned are highlighted.
Human-rated Safety Certification of a High Voltage Robonaut Lithium-ion Battery
NASA Technical Reports Server (NTRS)
Jeevarajan, Judith; Yayathi, S.; Johnson, M.; Waligora, T.; Verdeyen, W.
2013-01-01
NASA's rigorous certification process is being followed for the R2 high-voltage battery program for use of R2 on the International Space Station (ISS). Rigorous development testing at appropriate levels, under credible off-nominal conditions, together with review of test data, led to design improvements for safety at the virtual cell, cartridge and battery levels. Tests were carried out at all levels to confirm that both hardware and software controls work. Stringent flight acceptance testing of the flight battery will be completed before launch for mission use on the ISS.
On Designing Lightweight Threads for Substrate Software
NASA Technical Reports Server (NTRS)
Haines, Matthew
1997-01-01
Existing user-level thread packages employ a 'black box' design approach, where the implementation of the threads is hidden from the user. While this approach is often sufficient for application-level programmers, it hides critical design decisions that system-level programmers must be able to change in order to provide efficient service for high-level systems. By applying the principles of Open Implementation Analysis and Design, we construct a new user-level threads package that supports common thread abstractions and a well-defined meta-interface for altering the behavior of these abstractions. As a result, system-level programmers will have the advantages of using high-level thread abstractions without having to sacrifice performance, flexibility or portability.
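The paper's actual package is not shown in the abstract. The hypothetical Python sketch below only illustrates the open-implementation idea it describes: a fixed thread abstraction with a well-defined meta-interface through which system-level code can substitute the scheduling behavior. All class and method names are invented.

```python
import collections

# Hypothetical sketch: the meta-interface is the pluggable policy object.

class FIFOPolicy:
    def __init__(self):
        self.queue = collections.deque()
    def enqueue(self, task):
        self.queue.append(task)
    def pick(self):
        return self.queue.popleft() if self.queue else None

class ThreadPackage:
    def __init__(self, policy=None):
        self.policy = policy or FIFOPolicy()   # the meta-interface hook
    def spawn(self, fn):
        self.policy.enqueue(fn)
    def run(self):
        while (task := self.policy.pick()) is not None:
            task()                             # run-to-completion "thread"

pkg = ThreadPackage()                          # or ThreadPackage(MyPolicy())
pkg.spawn(lambda: print("task 1"))
pkg.spawn(lambda: print("task 2"))
pkg.run()
```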
Peterson, Elena S; McCue, Lee Ann; Schrimpe-Rutledge, Alexandra C; Jensen, Jeffrey L; Walker, Hyunjoo; Kobold, Markus A; Webb, Samantha R; Payne, Samuel H; Ansong, Charles; Adkins, Joshua N; Cannon, William R; Webb-Robertson, Bobbie-Jo M
2012-04-05
The procedural aspects of genome sequencing and assembly have become relatively inexpensive, yet the full, accurate structural annotation of these genomes remains a challenge. Next-generation sequencing transcriptomics (RNA-Seq), global microarrays, and tandem mass spectrometry (MS/MS)-based proteomics have demonstrated immense value to genome curators as individual sources of information; however, integrating these data types to validate and improve structural annotation remains a major challenge. Current visual and statistical analytic tools are focused on a single data type, or existing software tools are retrofitted to analyze new data forms. We present Visual Exploration and Statistics to Promote Annotation (VESPA), a new interactive visual analysis software tool focused on assisting scientists with the annotation of prokaryotic genomes through the integration of proteomics and transcriptomics data with current genome location coordinates. VESPA is a desktop Java™ application that integrates high-throughput proteomics data (peptide-centric) and transcriptomics (probe or RNA-Seq) data into a genomic context, all of which can be visualized at three levels of genomic resolution. Data is interrogated via searches linked to the genome visualizations to find regions with a high likelihood of mis-annotation. Search results are linked to exports for further validation outside of VESPA, or potential coding regions can be analyzed concurrently with the software through interaction with BLAST. VESPA is demonstrated on two use cases (Yersinia pestis Pestoides F and Synechococcus sp. PCC 7002), showing the rapid manner in which mis-annotations can be found and explored using either proteomics data alone or in combination with transcriptomic data. VESPA is an interactive visual analytics tool that integrates high-throughput data into a genomic context to facilitate the discovery of structural mis-annotations in prokaryotic genomes. Data is evaluated via visual analysis across multiple levels of genomic resolution, linked searches, and interaction with existing bioinformatics tools. We highlight the novel functionality of VESPA and the core programming requirements for visualization of these large heterogeneous datasets in a client-side application. The software is freely available at https://www.biopilot.org/docs/Software/Vespa.php.
ERIC Educational Resources Information Center
Ardiel, Evan L.; Giles, Andrew C.; Yu, Alex J.; Lindsay, Theodore H.; Lockery, Shawn R.; Rankin, Catharine H.
2016-01-01
Habituation is a highly conserved phenomenon that remains poorly understood at the molecular level. Invertebrate model systems, like "Caenorhabditis elegans," can be a powerful tool for investigating this fundamental process. Here we established a high-throughput learning assay that used real-time computer vision software for behavioral…
Experiences of Advanced High School Students in Synchronous Online Recitations
ERIC Educational Resources Information Center
Mayer, Greg; Lingle, Jeremy; Usselman, Marion
2017-01-01
The question of how to best design an online course that promotes student-centred learning is an area of ongoing research. This mixed-methods study focused on a section of advanced high school students, in college-level mathematics courses, that used a synchronous online environment mediated over web-conferencing software, and whether the…
Carreer, William J.; Flight, Robert M.; Moseley, Hunter N. B.
2013-01-01
New metabolomics applications of ultra-high resolution and accuracy mass spectrometry can provide thousands of detectable isotopologues, with the number of potentially detectable isotopologues increasing exponentially with the number of stable isotopes used in newer isotope tracing methods like stable isotope-resolved metabolomics (SIRM) experiments. This huge increase in usable data requires software capable of correcting the large number of isotopologue peaks resulting from SIRM experiments in a timely manner. We describe the design of a new algorithm and software system capable of handling these high volumes of data, while including quality control methods for maintaining data quality. We validate this new algorithm against a previous single isotope correction algorithm in a two-step cross-validation. Next, we demonstrate the algorithm and correct for the effects of natural abundance for both 13C and 15N isotopes on a set of raw isotopologue intensities of UDP-N-acetyl-D-glucosamine derived from a 13C/15N-tracing experiment. Finally, we demonstrate the algorithm on a full omics-level dataset. PMID:24404440
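As a minimal sketch of the core correction idea (single isotope only, not the authors' full two-isotope algorithm with its quality-control steps), observed isotopologue intensities can be modeled as a lower-triangular linear mixing of the tracer-derived isotopologues by natural abundance, and corrected by solving that linear system. The intensities below are invented.

```python
import numpy as np
from math import comb

# Single-isotope (13C-only) natural abundance correction sketch.
# p13 = natural abundance of 13C; the matrix gives the probability of
# observing i heavy atoms given j tracer-derived heavy atoms.

def correction_matrix(n_carbons, p13=0.0107):
    n = n_carbons + 1
    C = np.zeros((n, n))
    for j in range(n):                 # j = tracer-derived heavy atoms
        for i in range(j, n):          # i = observed heavy atoms
            k = i - j                  # extra heavy atoms from nature
            C[i, j] = (comb(n_carbons - j, k)
                       * p13**k * (1 - p13)**(n_carbons - j - k))
    return C

measured = np.array([0.70, 0.20, 0.08, 0.02])   # hypothetical M0..M3
corrected = np.linalg.solve(correction_matrix(3), measured)
print(corrected)
```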
ERIC Educational Resources Information Center
Chen, Yixing
2013-01-01
The objective of this study was to develop a "Virtual Design Studio (VDS)": a software platform for integrated, coordinated and optimized design of green building systems with low energy consumption, high indoor environmental quality (IEQ), and high level of sustainability. The VDS is intended to assist collaborating architects,…
NASA Technical Reports Server (NTRS)
Allen, B. Danette
1998-01-01
In the traditional 'waterfall' model of the software project life cycle, the Requirements Phase ends and flows into the Design Phase, which ends and flows into the Development Phase. Unfortunately, the process rarely, if ever, works so smoothly in practice. Instead, software developers often receive new requirements, or modifications to the original requirements, well after the earlier project phases have been completed. In particular, projects with shorter than ideal schedules are highly susceptible to frequent requirements changes, as the software requirements analysis phase is often forced to begin before the overall system requirements and top-level design are complete. This results in later modifications to the software requirements, even though the software design and development phases may be complete. Requirements changes received in the later stages of a software project inevitably lead to modification of existing developed software. Presented here is a series of software design techniques that can greatly reduce the impact of last-minute requirements changes. These techniques were successfully used to add built-in flexibility to two complex software systems in which the requirements were expected to (and did) change frequently. These large, real-time systems were developed at NASA Langley Research Center (LaRC) to test and control the Lidar In-Space Technology Experiment (LITE) instrument which flew aboard the space shuttle Discovery as the primary payload on the STS-64 mission.
Agile Methods for Open Source Safety-Critical Software
Enquobahrie, Andinet; Ibanez, Luis; Cheng, Patrick; Yaniv, Ziv; Cleary, Kevin; Kokoori, Shylaja; Muffih, Benjamin; Heidenreich, John
2011-01-01
The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the right amount of ceremony; therefore if safety-critical systems require greater emphasis on activities like formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project employing agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and only adding process elements as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested they are not suitable for safety-critical systems almost a decade ago; we present our experiences as a case study for renewing the discussion. PMID:21799545
Software Quality Assurance Audits Guidebooks
NASA Technical Reports Server (NTRS)
1990-01-01
The growth in cost and importance of software to NASA has caused NASA to address the improvement of software development across the agency. One of the products of this program is a series of guidebooks that define a NASA concept of the assurance processes that are used in software development. The Software Assurance Guidebook, NASA-GB-A201, issued in September 1989, provides an overall picture of the NASA concepts and practices in software assurance. Second-level guidebooks focus on specific activities that fall within the software assurance discipline, and provide more detailed information for the manager and/or practitioner. This is the second-level Software Quality Assurance Audits Guidebook, which describes software quality assurance audits in a way that is compatible with practices at NASA Centers.
Automated sequence analysis and editing software for HIV drug resistance testing.
Struck, Daniel; Wallis, Carole L; Denisov, Gennady; Lambert, Christine; Servais, Jean-Yves; Viana, Raquel V; Letsoalo, Esrom; Bronze, Michelle; Aitken, Sue C; Schuurman, Rob; Stevens, Wendy; Schmit, Jean Claude; Rinke de Wit, Tobias; Perez Bercoff, Danielle
2012-05-01
Access to antiretroviral treatment in resource-limited settings is inevitably paralleled by the emergence of HIV drug resistance. Monitoring treatment efficacy and HIV drug resistance testing are therefore of increasing importance in resource-limited settings. Yet low-cost technologies and procedures suited to the particular context and constraints of such settings are still lacking. The ART-A (Affordable Resistance Testing for Africa) consortium brought together public and private partners to address this issue by developing an automated sequence analysis and editing software to support high-throughput automated sequencing. The ART-A Software was designed to automatically process and edit ABI chromatograms or FASTA files from HIV-1 isolates. It performs the basecalling, assigns quality values, aligns query sequences against a set reference, infers a consensus sequence, identifies the HIV type and subtype, translates the nucleotide sequence to amino acids, and reports insertions/deletions, premature stop codons, ambiguities and mixed calls. The results can be automatically exported to Excel to identify mutations. Automated analysis was compared to manual analysis using a panel of 1624 PR-RT sequences generated in 3 different laboratories. Discrepancies between manual and automated sequence analysis were 0.69% at the nucleotide level and 0.57% at the amino acid level (668,047 AA analyzed); discordances at major resistance mutations were recorded in 62 cases (4.83% of differences, 0.04% of all AA) for PR and in 171 cases (6.18% of differences, 0.03% of all AA) for RT. The ART-A Software is a time-sparing tool for pre-analyzing HIV and viral quasispecies sequences in high-throughput laboratories and for highlighting positions requiring attention.
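Two of the pipeline steps listed above, translating the query nucleotide sequence and reporting amino-acid differences against a reference, can be sketched with Biopython. This is an illustration of those steps only, not the ART-A code; the sequence fragments are hypothetical.

```python
from Bio.Seq import Seq

# Translate hypothetical reference and query fragments, then report
# amino-acid substitutions in the usual "Q3R" style.

reference_aa = str(Seq("ATGCCTCAAATCACTCTT").translate())  # "MPQITL"
query_aa     = str(Seq("ATGCCTCGAATCACTCTT").translate())  # "MPRITL"

mutations = [f"{r}{pos}{q}"
             for pos, (r, q) in enumerate(zip(reference_aa, query_aa), start=1)
             if r != q]
print(mutations)   # ['Q3R']
```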
Giraldo, N A; Amariles, P; Monsalve, M; Faus, M J
Highly active antiretroviral therapy has extended the expected lifespan of patients with HIV/AIDS. However, the therapeutic benefits of some drugs used simultaneously with highly active antiretroviral therapy may be adversely affected by drug interactions. The goal was to design and develop free software to facilitate analysis, assessment, and clinical decision making according to the clinical relevance of drug interactions in patients with HIV/AIDS. A comprehensive Medline/PubMed database search of drug interactions was performed. Articles that recognized any drug interactions in HIV disease were selected; the publications accessed were limited to human studies in English or Spanish, with full texts retrieved. Drug interactions were analyzed, assessed, and grouped into four levels of clinical relevance according to gravity and probability. Software to systematize the information regarding drug interactions and their clinical relevance was designed and developed. Overall, 952 different references were retrieved and 446 selected; in addition, 67 articles were selected from the citation lists of identified articles. A total of 2119 pairs of drug interactions were identified; of this group, 2006 (94.7%) were drug-drug interactions, 1982 (93.5%) had an identified pharmacokinetic mechanism, and 1409 (66.5%) were mediated by enzyme inhibition. In terms of clinical relevance, 1285 (60.6%) drug interactions were clinically significant in patients with HIV (levels 1 and 2). With this information, a software program that facilitates identification and assessment of the clinical relevance of antiretroviral drug interactions (SIMARV®) was developed. A free software package with information on 2119 pairs of antiretroviral drug interactions was designed and developed that could facilitate analysis, assessment, and clinical decision making according to the clinical relevance of drug interactions in patients with HIV/AIDS.
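The abstract says interactions are graded into four clinical-relevance levels from gravity and probability, but does not give the rule; the lookup below is therefore a purely hypothetical illustration of how such a two-factor grading could be coded.

```python
# Hypothetical two-factor grading; the grid values are invented, not
# the mapping actually used by SIMARV.

GRAVITY = {"high": 3, "moderate": 2, "low": 1}
PROBABILITY = {"definite": 3, "probable": 2, "possible": 1}

def relevance_level(gravity, probability):
    score = GRAVITY[gravity] * PROBABILITY[probability]
    if score >= 6:
        return 1            # level 1: most clinically relevant
    if score >= 4:
        return 2
    if score >= 2:
        return 3
    return 4

print(relevance_level("high", "probable"))    # -> 1
```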
SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology
Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E.; Troein, Carl; Millar, Andrew J.; Goryanin, Igor; Gilmore, Stephen
2013-01-01
Summary: Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI’s use of standard data formats. Availability and implementation: All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials. Contact: stg@inf.ed.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23329415
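SBSI itself is a distributed Java/C++ suite, so the standalone sketch below only illustrates the underlying task it parallelizes: fitting model parameters to experimental data by least squares. The toy decay model and data are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# Fit a hypothetical one-parameter decay model to noisy "experimental"
# data; SBSI distributes this kind of optimization across servers.

def model(t, k):
    return np.exp(-k * t)

t_data = np.linspace(0.0, 5.0, 20)
y_data = np.exp(-0.8 * t_data) + np.random.normal(0.0, 0.01, t_data.size)

fit = least_squares(lambda p: model(t_data, p[0]) - y_data, x0=[0.1])
print(fit.x)    # estimated k, close to 0.8
```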
Contal, Olivier; Vignaux, Laurence; Combescure, Christophe; Pepin, Jean-Louis; Jolliet, Philippe; Janssens, Jean-Paul
2012-02-01
Current bilevel positive-pressure ventilators for home noninvasive ventilation (NIV) provide physicians with software that records items important for patient monitoring, such as compliance, tidal volume (Vt), and leaks. However, to our knowledge, the validity of this information has not yet been independently assessed. Testing was done for seven home ventilators on a bench model adapted to simulate NIV and generate unintentional leaks (ie, other than those of the mask exhalation valve). Five levels of leaks were simulated using a computer-driven solenoid valve (0-60 L/min) at different levels of inspiratory pressure (15 and 25 cm H(2)O) and at a fixed expiratory pressure (5 cm H(2)O), for a total of 10 conditions. Bench data were compared with results retrieved from ventilator software for leaks and Vt. For assessing leaks, three of the devices tested were highly reliable, with a small bias (0.3-0.9 L/min), narrow limits of agreement (LA), and high correlations (R(2), 0.993-0.997) when comparing ventilator software and bench results; conversely, for four ventilators, bias ranged from -6.0 L/min to -25.9 L/min, exceeding -10 L/min for two devices, with wide LA and lower correlations (R(2), 0.70-0.98). Bias for leaks increased markedly with the magnitude of leaks in three devices. Vt was underestimated by all devices, and bias (range, 66-236 mL) increased with higher insufflation pressures. Only two devices had a bias < 100 mL across all testing conditions. Physicians monitoring patients who use home ventilation must be aware of differences in the estimation of leaks and Vt by ventilator software. Also, leaks are reported in different ways according to the device used.
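The agreement statistics quoted above, bias and limits of agreement, are standard Bland-Altman quantities; a minimal sketch with hypothetical leak values:

```python
import numpy as np

# Bias and 95% limits of agreement between software-reported and
# bench-measured leaks; the values are invented for illustration.

software = np.array([10.2, 20.5, 30.1, 41.0, 50.3])   # L/min
bench    = np.array([10.0, 21.0, 31.5, 43.2, 54.0])   # L/min

diff = software - bench
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.1f} L/min, limits of agreement = "
      f"[{bias - half_width:.1f}, {bias + half_width:.1f}]")
```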
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, C.
1997-11-01
For many years, software quality assurance lagged behind hardware quality assurance in terms of methods, metrics, and successful results. New approaches such as Quality Function Deployment (QFD), the ISO 9000-9004 standards, the SEI maturity levels, and Total Quality Management (TQM) are starting to attract wide attention, and in some cases to bring software quality levels up to parity with manufacturing quality levels. Since software is on the critical path for many engineered products, and for internal business systems as well, the new approaches are starting to affect global competition and attract widespread international interest. It can be hypothesized that success in mastering software quality will be a key strategy for dominating global software markets in the 21st century.
The Transition to a Many-core World
NASA Astrophysics Data System (ADS)
Mattson, T. G.
2012-12-01
The need to increase performance within a fixed energy budget has pushed the computer industry to many-core processors. This is grounded in the physics of computing and is not a trend that will just go away. It is hard to overestimate the profound impact of many-core processors on software developers. Virtually every facet of the software development process will need to change to adapt to these new processors. In this talk, we will look at many-core hardware and consider its evolution from a perspective grounded in the CPU. We will show that the number of cores will inevitably increase, but in addition, a quest to maximize performance per watt will push these cores to be heterogeneous. We will show that the inevitable result of these changes is a computing landscape where the distinction between the CPU and the GPU is blurred. We will then consider the much more pressing problem of software in a many-core world. Writing software for heterogeneous many-core processors is well beyond the ability of current programmers. One solution is to support a software development process where programmer teams are split into two distinct groups: a large group of domain-expert productivity programmers and a much smaller team of computer-scientist efficiency programmers. The productivity programmers work in terms of high-level frameworks to express the concurrency in their problems while avoiding any details of how that concurrency is exploited. The second group, the efficiency programmers, map applications expressed in terms of these frameworks onto the target many-core system. In other words, we can solve the many-core software problem by creating a software infrastructure that only requires a small subset of programmers to become master parallel programmers. This is different from the discredited dream of automatic parallelism. Note that productivity programmers still need to define the architecture of their software in a way that exposes the concurrency inherent in their problem. We submit that domain-expert programmers understand "what is concurrent". The parallel programming problem emerges from the complexity of "how that concurrency is utilized" on real hardware. The research described in this talk was carried out in collaboration with the ParLab at UC Berkeley. We use a design pattern language to define the high-level frameworks exposed to domain-expert productivity programmers. We then use tools from the SEJITS project (Selective Embedded Just-In-Time Specializers) to build the software transformation tool chains that turn these framework-oriented designs into highly efficient code. The final ingredient is a software platform to serve as a target for these tools. One such platform is the OpenCL industry standard for programming heterogeneous systems. We will briefly describe OpenCL and show how it provides a vendor-neutral software target for current and future many-core systems, whether CPU-based, GPU-based, or heterogeneous combinations of the two.
Flight Software Development for the CHEOPS Instrument with the CORDET Framework
NASA Astrophysics Data System (ADS)
Cechticky, V.; Ottensamer, R.; Pasetti, A.
2015-09-01
CHEOPS is an ESA S-class mission dedicated to the precise measurement of radii of already known exoplanets using ultra-high precision photometry. The instrument flight software controlling the instrument and handling the science data is developed by the University of Vienna using the CORDET Framework offered by P&P Software GmbH. The CORDET Framework provides a generic software infrastructure for PUS-based applications. This paper describes how the framework is used for the CHEOPS application software to provide a consistent solution for the communication and control services, event handling and FDIR procedures. This approach is innovative in four respects: (a) it is a true third-party reuse; (b) re-use is done at specification, validation and code level; (c) the re-usable assets and their qualification data package are entirely open-source; (d) re-use is based on call-back, with the application developer providing functions which are called by the reusable architecture.
Craniux: A LabVIEW-Based Modular Software Framework for Brain-Machine Interface Research
Degenhart, Alan D.; Kelly, John W.; Ashmore, Robin C.; Collinger, Jennifer L.; Tyler-Kabara, Elizabeth C.; Weber, Douglas J.; Wang, Wei
2011-01-01
This paper presents “Craniux,” an open-access, open-source software framework for brain-machine interface (BMI) research. Developed in LabVIEW, a high-level graphical programming environment, Craniux offers both out-of-the-box functionality and a modular BMI software framework that is easily extendable. Specifically, it allows researchers to take advantage of multiple features inherent to the LabVIEW environment for on-the-fly data visualization, parallel processing, multithreading, and data saving. This paper introduces the basic features and system architecture of Craniux and describes the validation of the system under real-time BMI operation using simulated and real electrocorticographic (ECoG) signals. Our results indicate that Craniux is able to operate consistently in real time, enabling a seamless work flow to achieve brain control of cursor movement. The Craniux software framework is made available to the scientific research community to provide a LabVIEW-based BMI software platform for future BMI research and development. PMID:21687575
Reconfigurable Software for Controlling Formation Flying
NASA Technical Reports Server (NTRS)
Mueller, Joseph B.
2006-01-01
Software for a system to control the trajectories of multiple spacecraft flying in formation is being developed to reflect underlying concepts of (1) a decentralized approach to guidance and control and (2) reconfigurability of the control system, including reconfigurability of the software and of control laws. The software is organized as a modular network of software tasks. The computational load for both determining relative trajectories and planning maneuvers is shared equally among all spacecraft in a cluster. The flexibility and robustness of the software are apparent in the fact that tasks can be added, removed, or replaced during flight. In a computational simulation of a representative formation-flying scenario, it was demonstrated that the following are among the services performed by the software: uploading of commands from a ground station and distribution of the commands among the spacecraft; autonomous initiation and reconfiguration of formations; autonomous formation of teams through negotiations among the spacecraft; working out details of high-level commands (e.g., shapes and sizes of geometrically complex formations); implementation of a distributed guidance law providing autonomous optimization and assignment of target states; and implementation of a decentralized, fuel-optimal, impulsive control law for planning maneuvers.
Software quality for 1997 - what works and what doesn't?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, C.
1997-11-01
This presentation provides a view of software quality for 1997 - what works and what doesn't. For many years, software quality assurance lagged behind hardware quality assurance in terms of methods, metrics, and successful results. New approaches such as Quality Function Deployment (QFD), the ISO 9000-9004 standards, the SEI maturity levels, and Total Quality Management (TQM) are starting to attract wide attention, and in some cases to bring software quality levels up to parity with manufacturing quality levels.
Microcomputer Simulation of Real Gases--Part 1.
ERIC Educational Resources Information Center
Sperandeo-Mineo, R. M.; Tripi, G.
1987-01-01
Describes some simple computer programs designed to simulate the molecular dynamics of two-dimensional systems with a Lennard-Jones interaction potential. Discusses the use of the software in introductory physics courses at the high school and college level. (TW)
Assessing Requirements Quality through Requirements Coverage
NASA Technical Reports Server (NTRS)
Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt
2008-01-01
In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from some high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. This shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, determining that the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of the validation activities has been largely determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software requirements and existing model coverage metrics such as the Modified Condition and Decision Coverage (MC/DC) used when testing highly critical software in the avionics industry [8]. Our work is related to Chockler et al. [2], but we base our work on traditional testing techniques as opposed to verification techniques.
Health management and controls for Earth-to-orbit propulsion systems
NASA Astrophysics Data System (ADS)
Bickford, R. L.
1995-03-01
Avionics and health management technologies increase the safety and reliability while decreasing the overall cost for Earth-to-orbit (ETO) propulsion systems. New ETO propulsion systems will depend on highly reliable fault tolerant flight avionics, advanced sensing systems and artificial intelligence aided software to ensure critical control, safety and maintenance requirements are met in a cost effective manner. Propulsion avionics consist of the engine controller, actuators, sensors, software and ground support elements. In addition to control and safety functions, these elements perform system monitoring for health management. Health management is enhanced by advanced sensing systems and algorithms which provide automated fault detection and enable adaptive control and/or maintenance approaches. Aerojet is developing advanced fault tolerant rocket engine controllers which provide very high levels of reliability. Smart sensors and software systems which significantly enhance fault coverage and enable automated operations are also under development. Smart sensing systems, such as flight capable plume spectrometers, have reached maturity in ground-based applications and are suitable for bridging to flight. Software to detect failed sensors has reached similar maturity. This paper will discuss fault detection and isolation for advanced rocket engine controllers as well as examples of advanced sensing systems and software which significantly improve component failure detection for engine system safety and health management.
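The abstract mentions mature software for detecting failed sensors. One classic scheme for this (an assumption here, not necessarily the approach used in the work described) is mid-value selection across triple-redundant sensors with a disagreement limit:

```python
# Mid-value selection with a disagreement limit; readings and the
# limit are hypothetical values.

def select_and_flag(readings, limit):
    """Return the mid value of three redundant readings plus the
    indices of sensors disagreeing with it by more than `limit`."""
    mid = sorted(readings)[1]
    failed = [i for i, r in enumerate(readings) if abs(r - mid) > limit]
    return mid, failed

value, failed = select_and_flag([101.2, 100.9, 153.7], limit=5.0)
print(value, failed)    # 101.2 [2] -> third sensor flagged as failed
```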
Software Model Checking Without Source Code
NASA Technical Reports Server (NTRS)
Chaki, Sagar; Ivers, James
2009-01-01
We present a framework, called AIR, for verifying safety properties of assembly language programs via software model checking. AIR extends the applicability of predicate abstraction and counterexample guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, AIR allows verification of programs for which source code is unavailable (such as legacy and COTS software) and programs that use features (such as pointers, structures, and object-orientation) that are problematic for source-level software verification tools. In addition, AIR makes no assumptions about the underlying compiler technology. We have implemented a prototype of AIR and present encouraging results on several non-trivial examples.
Manyak, Kristin A.; Abdenour, Thomas E.; Rauh, Mitchell J.; Baweja, Harsimran S.
2016-01-01
Background As recently dictated by the American Medical Society, balance testing is an important component in the clinical evaluation of concussion. Despite this, previous research on the efficacy of balance testing for concussion diagnosis suggests low sensitivity (∼30%), based primarily on the popular Balance Error Scoring System (BESS). The Balance Tracking System (BTrackS, Balance Tracking Systems Inc., San Diego, CA, USA) consists of a force plate (BTrackS Balance Plate) and software (BTrackS Sport Balance) which can quickly (<2 min) perform concussion balance testing with gold standard accuracy. Purpose The present study aimed to determine the sensitivity of the BTrackS Balance Plate and Sport Balance software for concussion diagnosis. Study Design Cross-Sectional Study Methods Preseason baseline balance testing of 519 healthy Division I college athletes playing sports with a relatively high risk for concussions was performed with the BTrackS Balance Test. Testing was administered by certified athletic training staff using the BTrackS Balance Plate and Sport Balance software. Of the baselined athletes, 25 later experienced a concussion during the ensuing sport season. Post-injury balance testing was performed on these concussed athletes within 48 hours of injury, and the sensitivity of the BTrackS Balance Plate and Sport Balance software was estimated based on the number of athletes showing a balance decline according to the criteria specified in the Sport Balance software. This criterion is based on the minimal detectable change statistic with a 90% confidence level (i.e. 90% specificity). Results Of the 25 athletes who experienced concussions, 16 had balance declines relative to baseline testing results according to the BTrackS Sport Balance software criteria. This corresponds to an estimated concussion sensitivity of 64%, which is twice as great as that reported previously for the BESS. Conclusions The BTrackS Balance Plate and Sport Balance software has the greatest concussion sensitivity of any balance testing instrument reported to date. Level of Evidence Level 2 (Individual cross sectional diagnostic study) PMID:27104048
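The decline criterion described, exceeding baseline by more than the minimal detectable change (MDC) at 90% confidence, can be sketched with the standard MDC formula. Whether BTrackS uses exactly this computation, and the SEM and sway values below, are assumptions.

```python
import math

# Standard minimal detectable change at 90% confidence:
# MDC90 = z * SEM * sqrt(2), with z = 1.645 for 90%.

def mdc90(sem):
    return 1.645 * sem * math.sqrt(2)

def balance_declined(baseline_sway_cm, post_sway_cm, sem=1.5):
    return (post_sway_cm - baseline_sway_cm) > mdc90(sem)

print(balance_declined(baseline_sway_cm=22.0, post_sway_cm=28.0))   # True
```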
Pipeline for illumination correction of images for high-throughput microscopy.
Singh, S; Bray, M-A; Jones, T R; Carpenter, A E
2014-12-01
The presence of systematic noise in images in high-throughput microscopy experiments can significantly impact the accuracy of downstream results. Among the most common sources of systematic noise is non-homogeneous illumination across the image field. This often adds an unacceptable level of noise, obscures true quantitative differences and precludes biological experiments that rely on accurate fluorescence intensity measurements. In this paper, we seek to quantify the improvement in the quality of high-content screen readouts due to software-based illumination correction. We present a straightforward illumination correction pipeline that has been used by our group across many experiments. We test the pipeline on real-world high-throughput image sets and evaluate the performance of the pipeline at two levels: (a) Z'-factor to evaluate the effect of the image correction on a univariate readout, representative of a typical high-content screen, and (b) classification accuracy on phenotypic signatures derived from the images, representative of an experiment involving more complex data mining. We find that applying the proposed post-hoc correction method improves performance in both experiments, even when illumination correction has already been applied using software associated with the instrument. To facilitate the ready application and future development of illumination correction methods, we have made our complete test data sets as well as open-source image analysis pipelines publicly available. This software-based solution has the potential to improve outcomes for a wide variety of image-based HTS experiments.
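A minimal sketch of the kind of retrospective correction the paper evaluates: estimate a smooth illumination function from many images, then divide it out. The smoothing scale and normalization here are assumptions, not the authors' published pipeline settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def illumination_function(images, sigma=50):
    average = np.mean(images, axis=0)        # per-pixel mean intensity
    illum = gaussian_filter(average, sigma)  # smooth away object-level detail
    return illum / illum.mean()              # normalize around 1.0

def correct(image, illum):
    return image / illum                     # divide out the bias field

# Synthetic stack with a left-to-right illumination gradient:
stack = np.random.rand(100, 256, 256) * np.linspace(0.5, 1.5, 256)
illum = illumination_function(stack)
corrected = correct(stack[0], illum)
```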
DASL-Data and Activities for Solar Learning
NASA Technical Reports Server (NTRS)
Jones, Harrison P.; Henney, Carl; Hill, Frank; Gearen, Michael; Pompea, Stephen; Stagg, Travis; Stefaniak, Linda; Walker, Connie
2004-01-01
Data and Activities for Solar Learning (DASL) provides a classroom learning environment based on a twenty-five year record of solar magnetograms from the National Solar Observatory (NSO) at Kitt Peak, AZ. The data, together with image processing software for Macs or PCs, can be used to learn basic facts about the Sun and astronomy at the middle school level. At the high school level, students can study properties of the Sun's magnetic cycle with classroom exercises emphasizing data and error analysis and can participate in a new scientific study, Research in Active Solar Longitudes (RASL), in collaboration with classrooms throughout the country and scientists at NSO and NASA. We present a half-day course to train teachers in the scientific content of the project and its classroom use. We will provide a compact disc with the data and software and will demonstrate software installation and use, classroom exercises, and participation in RASL with computer projection.
PUS Services Software Building Block Automatic Generation for Space Missions
NASA Astrophysics Data System (ADS)
Candia, S.; Sgaramella, F.; Mele, G.
2008-08-01
The Packet Utilization Standard (PUS) has been specified by the European Cooperation for Space Standardization (ECSS) and issued as ECSS-E-70-41A to define the application-level interface between Ground Segments and Space Segments. The ECSS-E-70-41A complements the ECSS-E-50 and the Consultative Committee for Space Data Systems (CCSDS) recommendations for packet telemetry and telecommand. The ECSS-E-70-41A characterizes the identified PUS Services from a functional point of view, and the ECSS-E-70-31 standard specifies the rules for their mission-specific tailoring. The current on-board software design for a space mission implies the production of several PUS terminals, each providing a specific tailoring of the PUS services. The associated on-board software building blocks are developed independently, leading to very different design choices and implementations even when the mission tailoring requires very similar services (from the Ground operative perspective). In this scenario, the automatic production of the PUS services building blocks for a mission would be a way to optimize the overall mission economy and to improve the robustness and reliability of the on-board software and of the Ground-Space interactions. This paper presents the Space Software Italia (SSI) activities for the development of an integrated environment to support: the PUS services tailoring activity for a specific mission; the mission-specific PUS services configuration; and the generation of the UML model of the software building block implementing the mission-specific PUS services, together with the related source code, support documentation (software requirements, software architecture, test plans/procedures, operational manuals), and the TM/TC database. The paper deals with: (a) the project objectives, (b) the tailoring, configuration, and generation process, (c) the description of the environments supporting the process phases, (d) the characterization of the meta-model used for the generation, and (e) the characterization of the reference avionics architecture and of the reference on-board software high-level architecture.
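For orientation, a PUS service is identified inside each packet by a service type and subtype carried in the data-field header (for example, service 17, subtype 1 is the standard connection test). The sketch below builds a heavily simplified telecommand; the flag layout and the 0x10 secondary-header byte follow common PUS-A conventions but are assumptions here, and real packets also carry acknowledgement flags, CRCs, and more.

```python
import struct

def make_tc(apid, service_type, service_subtype, data=b""):
    # CCSDS primary header: version 0, type 1 (TC), secondary header present
    word0 = (0 << 13) | (1 << 12) | (1 << 11) | (apid & 0x7FF)
    word1 = (3 << 14) | 0                  # standalone packet, sequence count 0
    word2 = 3 + len(data) - 1              # CCSDS length: data-field bytes - 1
    secondary = struct.pack(">BBB", 0x10, service_type, service_subtype)
    return struct.pack(">HHH", word0, word1, word2) + secondary + data

ping = make_tc(apid=0x42, service_type=17, service_subtype=1)
print(ping.hex())
```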
Precise and Scalable Static Program Analysis of NASA Flight Software
NASA Technical Reports Server (NTRS)
Brat, G.; Venet, A.
2005-01-01
Recent NASA mission failures (e.g., Mars Polar Lander and Mars Orbiter) illustrate the importance of having an efficient verification and validation process for such systems. One software error, as simple as it may be, can cause the loss of an expensive mission, or lead to budget overruns and crunched schedules. Unfortunately, traditional verification methods cannot guarantee the absence of errors in software systems. Therefore, we have developed the CGS static program analysis tool, which can exhaustively analyze large C programs. CGS analyzes the source code and identifies statements in which arrays are accessed out of bounds, or, pointers are used outside the memory region they should address. This paper gives a high-level description of CGS and its theoretical foundations. It also reports on the use of CGS on real NASA software systems used in Mars missions (from Mars PathFinder to Mars Exploration Rover) and on the International Space Station.
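CGS is built on abstract interpretation. As a toy illustration of how such a tool can prove array accesses safe (not CGS's actual algorithms), the sketch below tracks an index variable in the interval domain and checks it against array bounds:

```python
# Toy interval-domain analysis: track the possible range of an index
# variable and flag accesses that may fall outside the array.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def add(self, c):                       # e.g. i + 1
        return Interval(self.lo + c, self.hi + c)
    def join(self, other):                  # merge two control-flow paths
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

def check_access(index, array_len):
    if index.lo < 0 or index.hi >= array_len:
        return "possible out-of-bounds access"
    return "safe"

i = Interval(0, 0).join(Interval(0, 9))     # e.g. a loop counter in 0..9
print(check_access(i, array_len=10))        # safe
print(check_access(i.add(1), array_len=10)) # possible out-of-bounds access
```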
Experiences Supporting the Lunar Reconnaissance Orbiter Camera: the Devops Model
NASA Astrophysics Data System (ADS)
Licht, A.; Estes, N. M.; Bowman-Cisneros, E.; Hanger, C. D.
2013-12-01
Introduction: The Lunar Reconnaissance Orbiter Camera (LROC) Science Operations Center (SOC) is responsible for instrument targeting, product processing, and archiving [1]. The LROC SOC maintains over 1,000,000 observations with over 300 TB of released data. Processing challenges compound with the acquisition of over 400 Gbits of observations daily creating the need for a robust, efficient, and reliable suite of specialized software. Development Environment: The LROC SOC's software development methodology has evolved over time. Today, the development team operates in close cooperation with the systems administration team in a model known in the IT industry as DevOps. The DevOps model enables a highly productive development environment that facilitates accomplishment of key goals within tight schedules[2]. The LROC SOC DevOps model incorporates industry best practices including prototyping, continuous integration, unit testing, code coverage analysis, version control, and utilizing existing open source software. Scientists and researchers at LROC often prototype algorithms and scripts in a high-level language such as MATLAB or IDL. After the prototype is functionally complete the solution is implemented as production ready software by the developers. Following this process ensures that all controls and requirements set by the LROC SOC DevOps team are met. The LROC SOC also strives to enhance the efficiency of the operations staff by way of weekly presentations and informal mentoring. Many small scripting tasks are assigned to the cognizant operations personnel (end users), allowing for the DevOps team to focus on more complex and mission critical tasks. In addition to leveraging open source software the LROC SOC has also contributed to the open source community by releasing Lunaserv [3]. Findings: The DevOps software model very efficiently provides smooth software releases and maintains team momentum. Scientists prototyping their work has proven to be very efficient as developers do not need to spend time iterating over small changes. Instead, these changes are realized in early prototypes and implemented before the task is seen by developers. The development practices followed by the LROC SOC DevOps team help facilitate a high level of software quality that is necessary for LROC SOC operations. Application to the Scientific Community: There is no replacement for having software developed by professional developers. While it is beneficial for scientists to write software, this activity should be seen as prototyping, which is then made production ready by professional developers. When constructed properly, even a small development team has the ability to increase the rate of software development for a research group while creating more efficient, reliable, and maintainable products. This strategy allows scientists to accomplish more, focusing on teamwork, rather than software development, which may not be their primary focus. 1. Robinson et al. (2010) Space Sci. Rev. 150, 81-124 2. DeGrandis. (2011) Cutter IT Journal. Vol 24, No. 8, 34-39 3. Estes, N.M.; Hanger, C.D.; Licht, A.A.; Bowman-Cisneros, E.; Lunaserv Web Map Service: History, Implementation Details, Development, and Uses, http://adsabs.harvard.edu/abs/2013LPICo1719.2609E.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bachan, John
Chisel is a new open-source hardware construction language developed at UC Berkeley that supports advanced hardware design using highly parameterized generators and layered domain-specific hardware languages. Chisel is embedded in the Scala programming language, which raises the level of hardware design abstraction by providing concepts including object orientation, functional programming, parameterized types, and type inference. From the same source, Chisel can generate a high-speed C++-based cycle-accurate software simulator, or low-level Verilog designed to pass on to standard ASIC or FPGA tools for synthesis and place and route.
Utilizing GIS to evaluate base schedules in paratransit operations
DOT National Transportation Integrated Search
1999-02-02
With ready access to street file names and inexpensive GIS software, paratransit systems can take advantage of GIS technology to evaluate base schedules on a regular basis in order to maintain system efficiency at consistently high levels. This proje...
NASA Astrophysics Data System (ADS)
Chu, X.
2011-12-01
This study, funded by the NSF CAREER program, focuses on developing new methods to quantify microtopography-controlled overland flow processes and on integrating cutting-edge hydrologic research with education and outreach activities at all levels. To achieve the educational goal, an interactive teaching-learning software package has been developed. This software, with enhanced visualization capabilities, integrates the new modeling techniques, computer-guided learning processes, and education-oriented tools in a user-friendly interface. Both Windows-based and web-based versions have been developed. The software is specifically designed for three major user levels: elementary (Level 1: K-12 and outreach education), intermediate (Level 2: undergraduate education), and advanced (Level 3: graduate education). Depending on their level, users are guided to different educational systems. Each system consists of a series of mini "libraries" featuring movies, pictures, and documentation that cover fundamental theories, experiments at varying scales, and computer modeling of overland flow generation, surface runoff, and infiltration processes. Testing and practical use of this educational software in undergraduate and graduate teaching demonstrate its effectiveness in promoting students' learning and interest in the hydrologic sciences. The software has also been used as a hydrologic demonstration tool for K-12 students and Native American students through the Nurturing American Tribal Undergraduate Research Education (NATURE) program and Science, Technology, Engineering and Mathematics (STEM) outreach activities.
NASA Software Assurance's Roles in Research and Technology
NASA Technical Reports Server (NTRS)
Wetherholt, Martha
2010-01-01
This slide presentation reviews the interactions between the scientists and engineers doing research and technology development and the software developers and others who are doing software assurance. There is a discussion of the role of Safety and Mission Assurance (SMA) in developing software to be used for research and technology, and of the importance of this role as the technology moves up the technology readiness levels (TRLs). There is also a call to change the way software is developed.
Enhancing instruction scheduling with a block-structured ISA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melvin, S.; Patt, Y.
It is now generally recognized that not enough parallelism exists within the small basic blocks of most general purpose programs to satisfy high performance processors. Thus, a wide variety of techniques have been developed to exploit instruction level parallelism across basic block boundaries. In this paper we discuss some previous techniques along with their hardware and software requirements. Then we propose a new paradigm for an instruction set architecture (ISA): block-structuring. This new paradigm is presented, its hardware and software requirements are discussed and the results from a simulation study are presented. We show that a block-structured ISA utilizes both dynamic and compile-time mechanisms for exploiting instruction level parallelism and has significant performance advantages over a conventional ISA.
Leightley, Daniel; Puddephatt, Jo-Anne; Goodwin, Laura; Rona, Roberto; Fear, Nicola T
2018-03-23
InDEx is a software package for reporting and monitoring alcohol consumption via a smartphone application. Consumption of alcohol is self-reported by the user, and the app provides a visual representation of drinking behaviour and offers feedback on consumption levels compared to the general population. InDEx is intended as an exemplar app, operating as a standalone smartphone application, and is highly customisable for a variety of research domains. InDEx is written in JavaScript using the IONIC framework, which is cross-platform, and is available under the liberal GNU General Public License (v3). The software is available from GitHub (https://github.com/DrDanL/index-app-public).
Markov chains for testing redundant software
NASA Technical Reports Server (NTRS)
White, Allan L.; Sjogren, Jon A.
1988-01-01
A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
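As a rough illustration of the approach, a hedged Python sketch follows: it estimates the transition probabilities of an error-state Markov chain from an observed test sequence and attaches simple normal-approximation confidence intervals. The states and data are invented; the paper's actual reliability model, which folds in the inertia of the controlled system, is more elaborate.

    # Hedged sketch: estimate Markov transition probabilities between error
    # states of multi-version software from observed test runs, with simple
    # Wald-style confidence bounds. States and data are illustrative.
    from collections import Counter
    import math

    observed = ["ok", "ok", "one_fails", "ok", "one_fails", "two_fail", "ok", "ok"]

    pairs = Counter(zip(observed, observed[1:]))   # counted transitions
    totals = Counter(observed[:-1])                # visits to each state

    z = 1.96  # ~95% confidence
    for (a, b), n in sorted(pairs.items()):
        p = n / totals[a]
        half = z * math.sqrt(p * (1 - p) / totals[a])
        print(f"P({a} -> {b}) = {p:.2f} +/- {half:.2f}")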
InDEx: Open Source iOS and Android Software for Self-Reporting and Monitoring of Alcohol Consumption
Leightley, Daniel; Puddephatt, Jo-Anne; Goodwin, Laura; Rona, Roberto; Fear, Nicola T.
2018-01-01
InDEx is a software package for reporting and monitoring alcohol consumption via a smartphone application. Consumption of alcohol is self-reported by the user, and the app provides a visual representation of drinking behaviour and offers feedback on consumption levels compared to the general population. InDEx is intended as an exemplar app, operating as a standalone smartphone application, and is highly customisable for a variety of research domains. InDEx is written in JavaScript using the IONIC framework, which is cross-platform, and is available under the liberal GNU General Public License (v3). The software is available from GitHub (https://github.com/DrDanL/index-app-public). PMID:29795769
Software Assurance Competency Model
2013-03-01
COTS) software, and software as a service (SaaS). L2: Define and analyze risks in the acquisition of contracted software, COTS software, and SaaS... [2010a]: Application of technologies and processes to achieve a required level of confidence that software systems and services function in the...
Delay Tolerant Networking on NASA's Space Communication and Navigation Testbed
NASA Technical Reports Server (NTRS)
Johnson, Sandra; Eddy, Wesley
2016-01-01
This presentation covers the status of the implementation of open source software that implements specifications developed by the CCSDS working group. Interplanetary Overlay Network (ION) is open source software implementing specifications developed by two international working groups, through the IETF and CCSDS. ION was implemented on the SCaN Testbed, a testbed located on an external pallet on the ISS, by the GRC team. The presentation will cover the architecture of the system, high-level implementation details, and issues in porting ION to VxWorks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Run; Su, Peng; Burge, James H.
The Software Configurable Optical Test System (SCOTS) uses deflectometry to measure surface slopes of general optical shapes without the need for additional null optics. Careful alignment of the test geometry and calibration of inherent system error improve the accuracy of SCOTS to a level where it competes with interferometry. We report a SCOTS measurement of an off-axis superpolished elliptical x-ray mirror that achieves <1 nm root-mean-square accuracy for the surface, with low-order terms included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, Elena S.; McCue, Lee Ann; Rutledge, Alexandra C.
2012-04-25
Visual Exploration and Statistics to Promote Annotation (VESPA) is an interactive visual analysis software tool that facilitates the discovery of structural mis-annotations in prokaryotic genomes. VESPA integrates high-throughput peptide-centric proteomics data and oligo-centric or RNA-Seq transcriptomics data into a genomic context. The data may be interrogated via visual analysis across multiple levels of genomic resolution, linked searches, exports, and interaction with BLAST to rapidly identify locations of interest within the genome and evaluate potential mis-annotations.
Parallel Computing for Probabilistic Response Analysis of High Temperature Composites
NASA Technical Reports Server (NTRS)
Sues, R. H.; Lua, Y. J.; Smith, M. D.
1994-01-01
The objective of this Phase I research was to establish the required software and hardware strategies to achieve large-scale parallelism in solving PCM problems. To meet this objective, several investigations were conducted. First, we identified the multiple levels of parallelism in PCM and the computational strategies to exploit them. Next, several software and hardware efficiency investigations were conducted. These involved the use of three different parallel programming paradigms and the solution of two example problems on both a shared-memory multiprocessor and a distributed-memory network of workstations.
A software framework for pipelined arithmetic algorithms in field programmable gate arrays
NASA Astrophysics Data System (ADS)
Kim, J. B.; Won, E.
2018-03-01
Pipelined algorithms implemented in field programmable gate arrays are extensively used for hardware triggers in modern experimental high energy physics, and the complexity of such algorithms increases rapidly. For development of such hardware triggers, algorithms are developed in C++, ported to a hardware description language for synthesizing firmware, and then ported back to C++ for simulating the firmware response down to the single-bit level. We present a C++ software framework which automatically simulates and generates hardware description language code for pipelined arithmetic algorithms.
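The framework described is C++-based; the sketch below uses Python for brevity to show the core idea of cycle-accurate simulation: explicit pipeline registers plus fixed-width masking, so that software results can be compared with firmware bit for bit. The two-stage adder is an invented example, not part of the framework.

    # Minimal sketch of cycle-accurate simulation of a 2-stage pipelined adder:
    # stage 1 registers the operands, stage 2 registers the sum. Masking to a
    # fixed bit width mimics firmware behaviour down to the single-bit level.
    WIDTH = 16
    MASK = (1 << WIDTH) - 1

    class PipelinedAdder:
        def __init__(self):
            self.stage1 = (0, 0)  # operand registers
            self.stage2 = 0       # result register

        def clock(self, a, b):
            """Advance one clock cycle; output has a latency of 2 cycles."""
            out = self.stage2
            self.stage2 = (self.stage1[0] + self.stage1[1]) & MASK
            self.stage1 = (a & MASK, b & MASK)
            return out

    adder = PipelinedAdder()
    for cycle, (a, b) in enumerate([(1, 2), (3, 4), (0, 0), (0, 0)]):
        print(cycle, adder.clock(a, b))  # each sum appears two cycles late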
Multiplexing the Ethernet Interface Among VAX/VMS Users.
1983-12-01
Microcomputer Development Systems (MDS) and VAX/VMS systems were used for the implementation and testing of the project. The software is designed in such a ... detailed design and implementation of the project. In reality, the chapter constitutes the documentation of the developed software. In some instances, things that ... changes. ... provides a high-level design of a virtual network, along with some hints which may be useful to those who will work in this
A synchronized computational architecture for generalized bilateral control of robot arms
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.; Szakaly, Zoltan
1987-01-01
This paper describes a computational architecture for an interconnected, high-speed distributed computing system for generalized bilateral control of robot arms. The key method of the architecture is the use of fully synchronized, interrupt-driven software. Since an objective of the development is to utilize the processing resources efficiently, the synchronization is done at the hardware level to reduce system software overhead. The architecture also achieves a balanced load on the communication channel. The paper also describes some architectural relations to trading or sharing manual and automatic control.
Software-based data path for raster-scanned multi-beam mask lithography
NASA Astrophysics Data System (ADS)
Rajagopalan, Archana; Agarwal, Ankita; Buck, Peter; Geller, Paul; Hamaker, H. Christopher; Rao, Nagswara
2016-10-01
According to the 2013 SEMATECH Mask Industry Survey [i], roughly half of all photomasks are produced using laser mask pattern generator (LMPG) lithography. LMPG lithography can be used for all layers at mature technology nodes, and for many non-critical and semi-critical masks at advanced nodes. The extensive use of multi-patterning at the 14-nm node significantly increases the number of critical mask layers, and the transition in wafer lithography from positive-tone to negative-tone resist at the 14-nm design node enables the switch from advanced binary masks back to attenuated phase-shifting masks, which require second-level writes to remove unwanted chrome. LMPG lithography is typically used for second-level writes due to its high productivity, absence of charging effects, and versatile non-actinic alignment capability. As multi-patterning expands from double to triple patterning and beyond, the number of LMPG second-level writes increases correspondingly. The desire to reserve the limited capacity of advanced electron beam writers for use when essential is another factor driving the demand for LMPG capacity. The increasing demand for cost-effective productivity has kept most of the laser mask writers ever manufactured running in production, sometimes long past their projected lifespan, and new writers continue to be built based on hardware developed some years ago [ii]. The data path is a case in point. While state-of-the-art when first introduced, hardware-based data path systems are difficult to modify or extend with new features to meet the changing requirements of the market. As data volumes increase, design styles change, and new uses are found for laser writers, it is useful to consider a replacement for this critical subsystem. The availability of low-cost, high-performance, distributed computer systems combined with highly scalable EDA software lends itself well to creating an advanced data path system. EDA software in routine production today scales well to hundreds or even thousands of CPU cores, offering the potential for virtually unlimited capacity. Features available in EDA software, such as sizing, scaling, tone reversal, OPC, MPC, rasterization, and others, are easily adapted to the requirements of a data path system. This paper presents the motivation, requirements, design, and performance of an advanced, scalable software data path system suitable for supporting multi-beam laser mask lithography.
Integration of the Remote Agent for the NASA Deep Space One Autonomy Experiment
NASA Technical Reports Server (NTRS)
Dorais, Gregory A.; Bernard, Douglas E.; Gamble, Edward B., Jr.; Kanefsky, Bob; Kurien, James; Muscettola, Nicola; Nayak, P. Pandurang; Rajan, Kanna; Lau, Sonie (Technical Monitor)
1998-01-01
This paper describes the integration of the Remote Agent (RA), a spacecraft autonomy system scheduled to control the Deep Space 1 spacecraft during a flight experiment in 1999. The RA is a reusable, model-based autonomy system that is quite different from the software typically used to control an aerospace system. We describe the integration challenges we faced, how we addressed them, and the lessons learned. We focus on those aspects of integrating the RA that were either easier or more difficult than integrating a more traditional large software application, because the RA is a model-based autonomous system. A number of characteristics of the RA made the integration process easier. One example is its model-based nature. Since the RA is model-based, most of its behavior is not hard-coded into procedural program code. Instead, engineers specify high-level models of the spacecraft's components, from which the Remote Agent automatically derives correct system-wide behavior on the fly. This high-level, modular, and declarative software description allowed some interfaces between RA components, and between the RA and the flight software, to be automatically generated and tested for completeness against the Remote Agent's models. In addition, the Remote Agent's model-based diagnosis system automatically detects when the RA models are not consistent with the behavior of the spacecraft. In flight, this feature is used to diagnose failures in the spacecraft hardware. During integration, it proved valuable in finding problems in the spacecraft simulator and flight software. Furthermore, when modifications are made to the spacecraft hardware or flight software, the RA models are easily changed because they only capture a description of the spacecraft; one does not have to maintain procedural code that implements the correct behavior for every expected situation. On the other hand, several features of the RA made it more difficult to integrate than typical flight software. For example, the definition of correct behavior is more difficult to specify for a system that is expected to reason about and flexibly react to its environment than for a traditional flight software system. Consequently, whenever a change is made to the RA, it is more time-consuming to determine whether the resulting behavior is correct. We conclude the paper with a discussion of future work on the Remote Agent as well as recommendations to ease the integration of similar autonomy projects.
Using Free Internet Videogames in Upper Extremity Motor Training for Children with Cerebral Palsy.
Sevick, Marisa; Eklund, Elizabeth; Mensch, Allison; Foreman, Matthew; Standeven, John; Engsberg, Jack
2016-06-07
Movement therapy is one type of upper extremity intervention for children with cerebral palsy (CP) to improve function. It requires high-intensity, repetitive, and task-specific training. Tedium and lack of motivation are substantial barriers to completing the training. One approach to overcoming these barriers is to couple the movement therapy with videogames. This investigation: (1) tested the feasibility of delivering a free Internet videogame upper extremity motor intervention to four children with CP (aged 8-17 years) with mild to moderate limitations to upper limb function; and (2) determined the level of intrinsic motivation during the intervention. The intervention used free Internet videogames in conjunction with the Microsoft Kinect motion sensor and the Flexible Action and Articulated Skeleton Toolkit (FAAST) software. Results indicated that the intervention could be successfully delivered in the laboratory and the home, and that pre- and post-intervention assessments of impairment, function, and performance were possible. Results also indicated a high level of motivation among the participants. It was concluded that the use of inexpensive hardware and software in conjunction with free Internet videogames has the potential to be very motivating in helping to improve the upper extremity abilities of children with CP. Future work should include results from additional participants and from a control group in a randomized controlled trial to establish efficacy.
Missileborne Artificial Vision System (MAVIS)
NASA Technical Reports Server (NTRS)
Andes, David K.; Witham, James C.; Miles, Michael D.
1994-01-01
Several years ago, when INTEL and China Lake designed the ETANN chip, analog VLSI appeared to be the only way to do high-density neural computing. In the last five years, however, digital parallel processing chips capable of performing neural computation functions have evolved to rough equality with analog chips in system-level computational density. The Naval Air Warfare Center, China Lake, has developed a real-time hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard-size 6U VME card featuring 256 fixed-point RISC processors running at 20 MHz in a SIMD configuration. Each COHO board has a companion board built to support a real-time VSB interface to an imaging seeker, an NTSC camera, and other COHO boards. The system is designed to have multiple SIMD machines, each performing different corticomorphic functions. System-level software has been developed that allows a high-level description of corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are those neural structures with a form similar to that of the retina, the lateral geniculate nucleus, or the visual cortex. This real-time hardware system is designed to be shrunk into a volume compatible with air-launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.
GOCE gravity field simulation based on actual mission scenario
NASA Astrophysics Data System (ADS)
Pail, R.; Goiginger, H.; Mayrhofer, R.; Höck, E.; Schuh, W.-D.; Brockmann, J. M.; Krasbutter, I.; Fecher, T.; Gruber, T.
2009-04-01
In the framework of the ESA-funded project "GOCE High-level Processing Facility", an operational hardware and software system for the scientific processing (Level 1B to Level 2) of GOCE data has been set up by the European GOCE Gravity Consortium EGG-C. One key component of this software system is the processing of a spherical harmonic Earth gravity field model and the corresponding full variance-covariance matrix from the precise GOCE orbit and from calibrated and corrected satellite gravity gradiometry (SGG) data. In the framework of the time-wise approach, a combination of several processing strategies has been set up for the optimum exploitation of the information content of the GOCE data: the Quick-Look Gravity Field Analysis is applied to derive a fast diagnosis of the GOCE system performance and to monitor the quality of the input data, while in the Core Solver processing a rigorous high-precision solution of the very large normal equation systems is derived by applying parallel processing techniques on a PC cluster. Before the availability of real GOCE data, the expected GOCE gravity field performance is evaluated by means of a realistic numerical case study based on the actual GOCE orbit and mission scenario and on simulation data stemming from the most recent ESA end-to-end simulation. Results from this simulation as well as recently developed features of the software system are presented. Additionally, some aspects of data combination with complementary data sources are addressed.
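The Core Solver's central computation can be caricatured in a few lines of NumPy: form the normal matrix from a design matrix, solve for the coefficients, and invert the normal matrix to obtain the full variance-covariance matrix. The toy linear model below stands in for the far larger spherical harmonic parameterization; all sizes and values are invented.

    # Toy illustration of a normal-equation solve: estimate coefficients x
    # from observations y = A x + noise, and obtain the full variance-
    # covariance matrix as the inverse of the normal matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((1000, 5))       # design matrix (toy sized)
    x_true = np.array([2.0, -1.0, 0.5, 0.0, 3.0])
    sigma = 0.1
    y = A @ x_true + sigma * rng.standard_normal(1000)

    N = A.T @ A                              # normal matrix
    b = A.T @ y
    x_hat = np.linalg.solve(N, b)            # least-squares estimate
    cov = sigma**2 * np.linalg.inv(N)        # full variance-covariance matrix

    print(x_hat.round(3))
    print(np.sqrt(np.diag(cov)).round(4))    # formal coefficient errors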
10 CFR 961.11 - Text of the contract.
Code of Federal Regulations, 2012 CFR
2012-01-01
... characteristic, of a specific or technical nature. It may, for example, document research, experimental... computer software documentation). Examples of technical data include research and engineering data... repository, to take title to the spent nuclear fuel or high-level radioactive waste involved as expeditiously...
10 CFR 961.11 - Text of the contract.
Code of Federal Regulations, 2011 CFR
2011-01-01
... characteristic, of a specific or technical nature. It may, for example, document research, experimental... computer software documentation). Examples of technical data include research and engineering data... repository, to take title to the spent nuclear fuel or high-level radioactive waste involved as expeditiously...
The Virtual Beach Manager Toolset (VB) is a set of decision support software tools developed to help local beach managers decide when beaches should be closed due to predicted high levels of waterborne pathogens. The tools are being developed under the umbrella of...
Science Software in High-Button Shoes.
ERIC Educational Resources Information Center
Dyrli, Odvard Egil
1984-01-01
Discusses inquiry-oriented science instruction and experientially based programs that support newer instructional strategies. Also provides examples of six process-oriented programs (Cat Lab, Geology Search, Rocky's Boots, Tell Star, Volcanoes, What's in Your Lunch?). Level, hardware needed, publisher, and description are provided for each…
An Exploration of Software-Based GNSS Signal Processing at Multiple Frequencies
NASA Astrophysics Data System (ADS)
Pasqual Paul, Manuel; Elosegui, Pedro; Lind, Frank; Vazquez, Antonio; Pankratius, Victor
2017-01-01
The Global Navigation Satellite System (GNSS; i.e., GPS, GLONASS, Galileo, and other constellations) has recently grown into numerous areas that go far beyond its traditional scope in navigation. In the geosciences, for example, high-precision GPS has become a powerful tool for a myriad of geophysical applications in geodynamics, seismology, paleoclimate, cryosphere studies, and remote sensing of the atmosphere. Positioning with millimeter-level accuracy can be achieved through carrier-phase-based, multi-frequency signal processing, which mitigates various biases and error sources such as those arising from ionospheric effects. Today, however, most receivers with multi-frequency capabilities are highly specialized hardware receiving systems with proprietary and closed designs, limited interfaces, and significant acquisition costs. This work explores alternatives that are entirely software-based, using Software-Defined Radio (SDR) receivers to digitize the entire spectrum of interest. It presents an overview of existing open-source frameworks and outlines the next steps towards converting GPS software receivers from single-frequency to dual-frequency, geodetic-quality systems. In the future, this development will lead to a more flexible multi-constellation GNSS processing architecture that can be easily reused in different contexts, as well as to further miniaturization of receivers.
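One concrete payoff of dual-frequency processing alluded to above is the ionosphere-free linear combination, which cancels the first-order ionospheric delay because that delay scales as 1/f^2. The sketch below demonstrates the cancellation with synthetic observables; only the GPS L1/L2 frequencies are real constants, everything else is invented.

    # Sketch of the standard ionosphere-free linear combination of dual-
    # frequency GPS carrier-phase observables, the kind of processing a
    # geodetic-quality dual-frequency software receiver enables.
    F1 = 1575.42e6  # GPS L1 frequency, Hz
    F2 = 1227.60e6  # GPS L2 frequency, Hz

    def iono_free(phi1_m, phi2_m):
        """Combine L1/L2 phase ranges (metres); cancels 1st-order ionosphere."""
        return (F1**2 * phi1_m - F2**2 * phi2_m) / (F1**2 - F2**2)

    rho = 21_000_000.0     # true geometric range, m (synthetic)
    iono_l1 = 3.2          # first-order ionospheric delay on L1, m (synthetic)
    phi1 = rho - iono_l1                    # carrier phase is advanced
    phi2 = rho - iono_l1 * (F1 / F2) ** 2   # delay scales with 1/f^2
    print(iono_free(phi1, phi2))            # ~rho: ionosphere cancelled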
OASYS (OrAnge SYnchrotron Suite): an open-source graphical environment for x-ray virtual experiments
NASA Astrophysics Data System (ADS)
Rebuffi, Luca; Sanchez del Rio, Manuel
2017-08-01
The evolution of hardware platforms, the modernization of software tools, the access of large numbers of young people to the codes, and the popularization of open source software for scientific applications drove us to design OASYS (OrAnge SYnchrotron Suite), a completely new graphical environment for modelling X-ray experiments. The implemented software architecture provides not only an intuitive and very easy-to-use graphical interface, but also high flexibility and rapidity for interactive simulations, making configuration changes to quickly compare multiple beamline configurations. Its purpose is to integrate in a synergetic way the most powerful calculation engines available. OASYS integrates different simulation strategies via the implementation of adequate simulation tools for X-ray optics (e.g., ray tracing and wave optics packages). It provides a language that makes them communicate by sending and receiving encapsulated data. Python has been chosen as the main programming language because of its universality and popularity in scientific computing. The software Orange, developed at the University of Ljubljana (SLO), is the high-level workflow engine that provides the interaction with the user and the communication mechanisms.
Advanced Autonomous Systems for Space Operations
NASA Astrophysics Data System (ADS)
Gross, A. R.; Smith, B. D.; Muscettola, N.; Barrett, A.; Mjolssness, E.; Clancy, D. J.
2002-01-01
New missions of exploration and space operations will require unprecedented levels of autonomy to successfully accomplish their objectives. Inherently high levels of complexity, cost, and communication distance will preclude the degree of human involvement common to current and previous space flight missions. With exponentially increasing capabilities of computer hardware and software, including networks and communication systems, a new balance of work is being developed between humans and machines. This new balance holds the promise of not only meeting the greatly increased space exploration requirements, but simultaneously dramatically reducing the design, development, test, and operating costs. New information technologies, which take advantage of knowledge-based software, model-based reasoning, and high performance computer systems, will enable the development of a new generation of design and development tools, schedulers, and vehicle and system health management capabilities. Such tools will provide a degree of machine intelligence and associated autonomy that has previously been unavailable. These capabilities are critical to the future of advanced space operations, since the science and operational requirements specified by such missions, as well as the budgetary constraints, will limit the current practice of monitoring and controlling missions by a standing army of ground-based controllers. System autonomy capabilities have made great strides in recent years, for both ground and space flight applications. Autonomous systems have flown on advanced spacecraft, providing new levels of spacecraft capability and mission safety. Such on-board systems operate by utilizing model-based reasoning that provides the capability to work from high-level mission goals, while deriving the detailed system commands internally, rather than having to have such commands transmitted from Earth. This enables missions of such complexity and communication distances as are not otherwise possible, as well as many more efficient and low-cost applications. In addition, utilizing component and system modeling and reasoning capabilities, autonomous systems will play an increasing role in ground operations for space missions, where they will both reduce the human workload as well as provide greater levels of monitoring and system safety. This paper will focus specifically on new and innovative software for remote, autonomous, space systems flight operations. Topics to be presented will include a brief description of key autonomous control concepts, the Remote Agent program that commanded the Deep Space 1 spacecraft to new levels of system autonomy, recent advances in distributed autonomous system capabilities, and concepts for autonomous vehicle health management systems. A brief description of teaming spacecraft and rovers for complex exploration missions will also be provided. New on-board software for autonomous science data acquisition for planetary exploration will be described, as well as advanced systems for safe planetary landings. A new multi-agent architecture that addresses some of the challenges of autonomous systems will be presented. Autonomous operation of ground systems will also be considered, including software for autonomous in-situ propellant production and management, and closed-loop ecological life support systems (CELSS). Finally, plans and directions for the future will be discussed.
Multi-Threaded Algorithms for GPGPU in the ATLAS High Level Trigger
NASA Astrophysics Data System (ADS)
Conde Muíño, P.; ATLAS Collaboration
2017-10-01
General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located at the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with Level-1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPUs. The High Level Trigger reduces the trigger rate from the 100 kHz Level-1 acceptance rate to 1.5 kHz for recording, requiring an average per-event processing time of ∼ 250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant challenge that will increase with future LHC upgrades. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further to 7.5 times the design value in 2026 following LHC and ATLAS upgrades. Corresponding improvements in the speed of the reconstruction code will be needed to provide the required trigger selection power within affordable computing resources. Key factors determining the potential benefit of including GPGPU as part of the HLT processor farm are: the relative speed of the CPU and GPGPU algorithm implementations; the relative execution times of the GPGPU algorithms and serial code remaining on the CPU; the number of GPGPUs required; and the relative financial cost of the selected GPGPU. We give a brief overview of the algorithms implemented and present new measurements that compare the performance of various configurations exploiting GPGPU cards.
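The trade-off enumerated above (relative algorithm speed versus the serial remainder left on the CPU) can be framed as an Amdahl-style estimate. The sketch below is a back-of-envelope model with invented numbers, not an ATLAS measurement; only the 250 ms per-event budget is taken from the abstract.

    # Back-of-envelope model of the HLT offload trade-off: overall per-event
    # speedup when a fraction of the reconstruction moves to GPGPU. All
    # numbers are illustrative, not ATLAS results.
    def event_speedup(offload_fraction, gpu_speedup):
        """Amdahl-style estimate: the serial remainder stays on the CPU."""
        return 1.0 / ((1.0 - offload_fraction) + offload_fraction / gpu_speedup)

    t_cpu_ms = 250.0  # average per-event budget quoted in the abstract
    for frac in (0.3, 0.5, 0.7):
        s = event_speedup(frac, gpu_speedup=10.0)
        print(f"offload {frac:.0%}: x{s:.2f} -> {t_cpu_ms / s:.0f} ms/event")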
NASA Technical Reports Server (NTRS)
Jain, Abhinandan; Cameron, Jonathan M.; Myint, Steven
2013-01-01
This software runs a suite of arbitrary software tests spanning various software languages and types of tests (unit level, system level, or file comparison tests). The dtest utility can be set to automate periodic testing of large suites of software, as well as running individual tests. It supports distributing multiple tests over multiple CPU cores, if available. The dtest tool is a utility program (written in Python) that scans through a directory (and its subdirectories) and finds all directories that match a certain pattern and then executes any tests in that directory as described in simple configuration files.
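dtest's own configuration format is not reproduced here, but the pattern the description names, scanning a directory tree for test directories and fanning the tests out over CPU cores, can be sketched as follows; the TESTS file name and layout are invented stand-ins, not dtest's actual conventions.

    # Sketch of the pattern described above: walk a tree for directories
    # matching a pattern, read a simple config listing test commands, and run
    # them across CPU cores. Config format and names are illustrative.
    import os, subprocess
    from concurrent.futures import ProcessPoolExecutor

    def find_tests(root, dirname="test"):
        for path, dirs, files in os.walk(root):
            if os.path.basename(path) == dirname and "TESTS" in files:
                with open(os.path.join(path, "TESTS")) as fh:
                    for line in fh:
                        cmd = line.strip()
                        if cmd and not cmd.startswith("#"):
                            yield path, cmd

    def run_test(item):
        path, cmd = item
        result = subprocess.run(cmd, shell=True, cwd=path, capture_output=True)
        return path, cmd, result.returncode

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:   # one worker per core by default
            for path, cmd, rc in pool.map(run_test, find_tests(".")):
                print("PASS" if rc == 0 else "FAIL", path, cmd)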
V & V Within Reuse-Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1996-01-01
Verification and validation (V&V) is used to increase the level of assurance of critical software, particularly that of safety-critical and mission critical software. This paper describes the working group's success in identifying V&V tasks that could be performed in the domain engineering and transition levels of reuse-based software engineering. The primary motivation for V&V at the domain level is to provide assurance that the domain requirements are correct and that the domain artifacts correctly implement the domain requirements. A secondary motivation is the possible elimination of redundant V&V activities at the application level. The group also considered the criteria and motivation for performing V&V in domain engineering.
Software design and implementation concepts for an interoperable medical communication framework.
Besting, Andreas; Bürger, Sebastian; Kasparick, Martin; Strathen, Benjamin; Portheine, Frank
2018-02-23
The new IEEE 11073 service-oriented device connectivity (SDC) standard proposals for networked point-of-care and surgical devices constitute the basis for improved, vendor-independent interoperability. To accelerate the adoption of the standard, a reference implementation is indispensable. However, the implementation of such a framework has to overcome several non-trivial challenges. First, the high level of complexity of the underlying standard must be reflected in the software design. An efficient implementation has to consider the limited resources of the underlying hardware. Moreover, the framework's purpose of realizing a distributed system demands a high degree of reliability of the framework itself and its internal mechanisms. Additionally, a framework must provide an easy-to-use and fail-safe application programming interface (API). In this work, we address these challenges by discussing suitable software engineering principles and practical coding guidelines. A descriptive model is developed that identifies key strategies. General feasibility is shown by outlining environments in which our implementation has been utilized.
Beamline Insertions Manager at Jefferson Lab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Michael C.
2015-09-01
The beam viewer system at Jefferson Lab provides operators and beam physicists with qualitative and quantitative information on the transverse electron beam properties. There are over 140 beam viewers installed on the 12 GeV CEBAF accelerator. This paper describes an upgrade consisting of replacing the EPICS-based system tasked with managing all viewers with a mixed system utilizing EPICS and high-level software. Most devices, particularly the beam viewers, cannot be safely inserted into the beam line during high-current beam operations. Software is partly responsible for protecting the machine from untimely insertions. The multiplicity of beam-blocking and beam-vulnerable devices motivates us to try a data-driven approach. The beamline insertions application components are centrally managed and configured through an object-oriented software framework created for this purpose. A rules-based engine tracks the configuration and status of every device, along with the beam status of the machine segment containing the device. The application uses this information to decide which device actions are allowed at any given time.
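A minimal sketch of such a data-driven insertion rule follows; the device records, segment states, and rule logic are invented for illustration and are far simpler than the framework described above.

    # Illustrative sketch of a data-driven insertion check: a device may be
    # inserted only when the beam state of its segment permits it. Device
    # records and rules are invented, not Jefferson Lab's own.
    DEVICES = {
        "viewer_2L02": {"segment": "2L", "beam_blocking": True},
        "bpm_2L05":    {"segment": "2L", "beam_blocking": False},
    }
    SEGMENT_BEAM_ON = {"2L": True}

    def insertion_allowed(name):
        dev = DEVICES[name]
        if not dev["beam_blocking"]:
            return True                              # beam-compatible device
        return not SEGMENT_BEAM_ON[dev["segment"]]   # blocking: beam must be off

    for name in DEVICES:
        print(name, "->", "allowed" if insertion_allowed(name) else "blocked")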
Performance Evaluation of Communication Software Systems for Distributed Computing
NASA Technical Reports Server (NTRS)
Fatoohi, Rod
1996-01-01
In recent years there has been increasing interest in object-oriented distributed computing, since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed, and compared. These systems are: the BSD socket programming interface; IONA's Orbix, an implementation of the CORBA specification; and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.
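As a flavor of the low-level baseline such comparisons rest on, here is a minimal loopback round-trip timing loop over BSD sockets in Python; it is an illustrative sketch, not the paper's benchmark harness, and loopback figures are of course not comparable to the FDDI/HiPPI/ATM results reported.

    # Minimal loopback round-trip latency measurement with BSD sockets, the
    # kind of low-level baseline middleware such as CORBA or PVM is compared
    # against. Payload size and iteration count are arbitrary choices.
    import socket, threading, time

    def echo_server(sock):
        conn, _ = sock.accept()
        with conn:
            while data := conn.recv(4096):
                conn.sendall(data)

    srv = socket.create_server(("127.0.0.1", 0))
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

    cli = socket.create_connection(srv.getsockname())
    payload = b"x" * 1024
    t0 = time.perf_counter()
    for _ in range(1000):
        cli.sendall(payload)
        cli.recv(4096)
    t1 = time.perf_counter()
    print(f"mean round trip: {(t1 - t0) / 1000 * 1e6:.1f} us for 1 KB")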
Software Past, Present, and Future: Views from Government, Industry and Academia
NASA Technical Reports Server (NTRS)
Holcomb, Lee; Page, Jerry; Evangelist, Michael
2000-01-01
Views on software development past, present, and future, from the NASA CIO and the NASA Software Engineering Workshop, are presented. The topics include: 1) Software Past; 2) Software Present; 3) NASA's Largest Software Challenges; 4) 8,330 Software Projects in Industry (Standish Group's 1994 Report); 5) Software Future; 6) Capability Maturity Model (CMM): Software Engineering Institute (SEI) Levels; 7) System Engineering Quality Also Part of the Problem; 8) University Environment Trends Will Increase the Problem in Software Engineering; and 9) NASA Software Engineering Goals.
PsyToolkit: a software package for programming psychological experiments using Linux.
Stoet, Gijsbert
2010-11-01
PsyToolkit is a set of software tools for programming psychological experiments on Linux computers. Given that PsyToolkit is freely available under the GNU Public License, open source, and designed such that it can easily be modified and extended for individual needs, it is suitable not only for technically oriented Linux users, but also for students, researchers on small budgets, and universities in developing countries. The software includes a high-level scripting language, a library for the programming language C, and a questionnaire presenter. The software easily integrates with other open source tools, such as the statistical software package R. PsyToolkit is designed to work with external hardware (including IoLab and Cedrus response keyboards and two common digital input/output boards) and to support millisecond timing precision. Four in-depth examples explain the basic functionality of PsyToolkit. Example 1 demonstrates a stimulus-response compatibility experiment. Example 2 demonstrates a novel mouse-controlled visual search experiment. Example 3 shows how to control light-emitting diodes using PsyToolkit, and Example 4 shows how to build a light-detection sensor. The last two examples explain the electronic hardware setup such that they can even be used with other software packages.
Assistive Software Tools for Secondary-Level Students with Literacy Difficulties
ERIC Educational Resources Information Center
Lange, Alissa A.; McPhillips, Martin; Mulhern, Gerry; Wylie, Judith
2006-01-01
The present study assessed the compensatory effectiveness of four assistive software tools (speech synthesis, spellchecker, homophone tool, and dictionary) on literacy. Secondary-level students (N = 93) with reading difficulties completed computer-based tests of literacy skills. Training on their respective software followed for those assigned to…
DistributedFBA.jl: High-level, high-performance flux balance analysis in Julia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heirendt, Laurent; Thiele, Ines; Fleming, Ronan M. T.
Flux balance analysis and its variants are widely used methods for predicting steady-state reaction rates in biochemical reaction networks. The exploration of high dimensional networks with such methods is currently hampered by software performance limitations. DistributedFBA.jl is a high-level, high-performance, open-source implementation of flux balance analysis in Julia. It is tailored to solve multiple flux balance analyses on a subset or all the reactions of large and huge-scale networks, on any number of threads or nodes.
DistributedFBA.jl: High-level, high-performance flux balance analysis in Julia
Heirendt, Laurent; Thiele, Ines; Fleming, Ronan M. T.
2017-01-16
Flux balance analysis and its variants are widely used methods for predicting steady-state reaction rates in biochemical reaction networks. The exploration of high dimensional networks with such methods is currently hampered by software performance limitations. DistributedFBA.jl is a high-level, high-performance, open-source implementation of flux balance analysis in Julia. It is tailored to solve multiple flux balance analyses on a subset or all the reactions of large and huge-scale networks, on any number of threads or nodes.
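DistributedFBA.jl itself is written in Julia; the toy below instead poses a single flux balance analysis as a linear program in Python with SciPy, maximizing a target flux subject to the steady-state constraint S v = 0 and per-reaction bounds. The three-reaction network is invented for illustration.

    # Toy flux balance analysis: maximize an objective flux subject to
    # steady state (S v = 0) and per-reaction capacity bounds.
    import numpy as np
    from scipy.optimize import linprog

    # Reactions: R1 uptake -> A, R2 A -> B, R3 B -> objective (e.g. biomass)
    S = np.array([
        [1, -1,  0],   # metabolite A balance
        [0,  1, -1],   # metabolite B balance
    ])
    bounds = [(0, 10), (0, 5), (0, None)]  # R2's capacity limits the chain

    # linprog minimizes, so negate the objective flux v3 to maximize it.
    res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print(res.x)       # optimal flux vector, expected [5, 5, 5]
    print(-res.fun)    # optimal objective flux, expected 5.0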
Optimum concentric circular array antenna with high gain and side lobe reduction at 5.8 GHz
NASA Astrophysics Data System (ADS)
Zaid, Mohammed; Rafiqul Islam, Md; Habaebi, Mohamed H.; Zahirul Alam, AHM; Abdullah, Khaizuran
2017-11-01
The significance of high-gain directional antennas stems from the need to keep pace with constantly progressing wireless communication systems. Because the widely used microstrip antenna has low gain, combining multiple antennas in a proper geometry increases the gain with good directive properties. Among array configurations, this paper adopts the concentric circular array for its compact structure and inherent symmetry in azimuth. The proposed array is composed of 9 elements on an FR-4 substrate and is designed for WLAN applications at 5.8 GHz. Antenna Magus software is used for synthesis, while CST software is used for optimization. The proposed array is designed with optimum inter-element spacing and number of elements, achieving a directional gain of 15.7 dB, compared with 14.2 dB in the available literature, together with a strongly reduced side lobe level of -17.6 dB.
Amazing "Speaking Dynamically" Tools Utilized in High School Special Education Classroom.
ERIC Educational Resources Information Center
Robinson, Deann A.; Attix, Gerald V.
This paper describes how Alameda High School in California is using the Speaking Dynamically Pro v2.0 software by Mayer-Johnson to help 13 special education students (ages 14-18) communicate and make the world more accessible. The students have a range of disabilities and skill levels. This computer program enables the students to make choices,…
The Implementation of Satellite Control System Software Using Object Oriented Design
NASA Technical Reports Server (NTRS)
Anderson, Mark O.; Reid, Mark; Drury, Derek; Hansell, William; Phillips, Tom
1998-01-01
NASA established the Small Explorer (SMEX) program in 1988 to provide frequent opportunities for highly focused and relatively inexpensive space science missions that can be launched into low earth orbit by small expendable vehicles. The development schedule for each SMEX spacecraft was three years from start to launch. The SMEX program has produced five satellites; Solar Anomalous and Magnetospheric Particle Explorer (SAMPEX), Fast Auroral Snapshot Explorer (FAST), Submillimeter Wave Astronomy Satellite (SWAS), Transition Region and Coronal Explorer (TRACE) and Wide-Field Infrared Explorer (WIRE). SAMPEX and FAST are on-orbit, TRACE is scheduled to be launched in April of 1998, WIRE is scheduled to be launched in September of 1998, and SWAS is scheduled to be launched in January of 1999. In each of these missions, the Attitude Control System (ACS) software was written using a modular procedural design. Current program goals require complete spacecraft development within 18 months. This requirement has increased pressure to write reusable flight software. Object-Oriented Design (OOD) offers the constructs for developing an application that only needs modification for mission unique requirements. This paper describes the OOD that was used to develop the SMEX-Lite ACS software. The SMEX-Lite ACS is three-axis controlled, momentum stabilized, and is capable of performing sub-arc-minute pointing. The paper first describes the high level requirements which governed the architecture of the SMEX-Lite ACS software. Next, the context in which the software resides is explained. The paper describes the benefits of encapsulation, inheritance and polymorphism with respect to the implementation of an ACS software system. This paper will discuss the design of several software components that comprise the ACS software. Specifically, Object-Oriented designs are presented for sensor data processing, attitude control, attitude determination and failure detection. The paper addresses the benefits of the OOD versus a conventional procedural design. The final discussion in this paper will address the establishment of the ACS Foundation Class (AFC) Library. The AFC is a large software repository, requiring a minimal amount of code modifications to produce ACS software for future projects, saving production time and costs.
NASA Technical Reports Server (NTRS)
1987-01-01
Genigraphics Corporation's Masterpiece 8770 FilmRecorder is an advanced high-resolution system designed to improve and expand a company's in-house graphics production. The GRAFTIME software package was designed to allow office personnel with minimal training to produce professional-level graphics for business communications and presentations. The products are no longer being manufactured.
UTM TCL2 Software Requirements
NASA Technical Reports Server (NTRS)
Smith, Irene S.; Rios, Joseph L.; McGuirk, Patrick O.; Mulfinger, Daniel G.; Venkatesan, Priya; Smith, David R.; Baskaran, Vijayakumar; Wang, Leo
2017-01-01
The Unmanned Aircraft Systems (UAS) Traffic Management (UTM) Technical Capability Level (TCL) 2 software implements the UTM TCL 2 software requirements described herein. These software requirements are linked to the higher-level UTM TCL 2 system requirements. Each successive TCL implements additional UTM functionality, enabling additional use cases. TCL 2 demonstrated how to enable expanded multiple operations by implementing automation for beyond-visual-line-of-sight flight, tracking of operations, and operations over sparsely populated areas.
HEP Software Foundation Community White Paper Working Group - Data Analysis and Interpretation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauerdick, Lothar
At the heart of experimental high energy physics (HEP) is the development of facilities and instrumentation that provide sensitivity to new phenomena. Our understanding of nature at its most fundamental level is advanced through the analysis and interpretation of data from sophisticated detectors in HEP experiments. The goal of data analysis systems is to realize the maximum possible scientific potential of the data within the constraints of computing and human resources in the least time. To achieve this goal, future analysis systems should empower physicists to access the data with a high level of interactivity, reproducibility and throughput capability. As part of the HEP Software Foundation Community White Paper process, a working group on Data Analysis and Interpretation was formed to assess the challenges and opportunities in HEP data analysis and develop a roadmap for activities in this area over the next decade. In this report, the key findings and recommendations of the Data Analysis and Interpretation Working Group are presented.
GATE: software for the analysis and visualization of high-dimensional time series expression data.
MacArthur, Ben D; Lachmann, Alexander; Lemischka, Ihor R; Ma'ayan, Avi
2010-01-01
We present Grid Analysis of Time series Expression (GATE), an integrated computational software platform for the analysis and visualization of high-dimensional biomolecular time series. GATE uses a correlation-based clustering algorithm to arrange molecular time series on a two-dimensional hexagonal array and dynamically colors individual hexagons according to the expression level of the molecular component to which they are assigned, creating animated movies of systems-level molecular regulatory dynamics. In order to infer potential regulatory control mechanisms from patterns of correlation, GATE also allows interactive interrogation of movies against a wide variety of prior-knowledge datasets. GATE movies can be paused and are interactive, allowing users to reconstruct networks and perform functional enrichment analyses. Movies created with GATE can be saved in Flash format and can be inserted directly into PDF manuscript files as interactive figures. GATE is available for download and is free for academic use from http://amp.pharm.mssm.edu/maayan-lab/gate.htm
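The correlation-based step at the heart of GATE-like tools can be sketched as follows: compute pairwise Pearson correlations between expression time series and cluster with 1 - r as the distance. The data are synthetic, and GATE's hexagonal layout algorithm is not reproduced; this is a deliberately simplified stand-in.

    # Sketch of correlation-based grouping of expression time series:
    # pairwise Pearson correlation, with 1 - r as a distance for
    # hierarchical clustering. Profiles are synthetic.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(1)
    t = np.linspace(0, 2 * np.pi, 12)    # 12 time points
    profiles = np.vstack([
        np.sin(t), np.sin(t) + 0.1 * rng.standard_normal(12),
        np.cos(t), np.cos(t) + 0.1 * rng.standard_normal(12),
    ])

    r = np.corrcoef(profiles)                        # pairwise correlation
    dist = squareform(1 - r, checks=False)           # condensed distances
    labels = fcluster(linkage(dist, method="average"),
                      t=0.5, criterion="distance")
    print(labels)  # sine-like and cosine-like profiles separate cleanly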
The Chandra X-ray Center data system: supporting the mission of the Chandra X-ray Observatory
NASA Astrophysics Data System (ADS)
Evans, Janet D.; Cresitello-Dittmar, Mark; Doe, Stephen; Evans, Ian; Fabbiano, Giuseppina; Germain, Gregg; Glotfelty, Kenny; Hall, Diane; Plummer, David; Zografou, Panagoula
2006-06-01
The Chandra X-ray Center Data System provides end-to-end scientific software support for Chandra X-ray Observatory mission operations. The data system includes the following components: (1) observers' science proposal planning tools; (2) science mission planning tools; (3) science data processing, monitoring, and trending pipelines and tools; and (4) data archive and database management. A subset of the science data processing component is ported to multiple platforms and distributed to end-users as a portable data analysis package. Web-based user tools are also available for data archive search and retrieval. We describe the overall architecture of the data system and its component pieces, and consider the design choices and their impacts on maintainability. We discuss the many challenges involved in maintaining a large, mission-critical software system with limited resources. These challenges include managing continually changing software requirements and ensuring the integrity of the data system and resulting data products while being highly responsive to the needs of the project. We describe our use of COTS and OTS software at the subsystem and component levels, our methods for managing multiple release builds, and adapting a large code base to new hardware and software platforms. We review our experiences during the life of the mission so far, and our approaches for keeping a small, but highly talented, development team engaged during the maintenance phase of a mission.
Managing Scientific Software Complexity with Bocca and CCA
Allan, Benjamin A.; Norris, Boyana; Elwasif, Wael R.; ...
2008-01-01
In high-performance scientific software development, the emphasis is often on short time to first solution. Even when the development of new components mostly reuses existing components or libraries and only small amounts of new code must be created, dealing with the component glue code and software build processes to obtain complete applications is still tedious and error-prone. Component-based software meant to reduce complexity at the application level increases complexity to the extent that the user must learn and remember the interfaces and conventions of the component model itself. To address these needs, we introduce Bocca, the first tool to enable application developers to perform rapid component prototyping while maintaining robust software-engineering practices suitable to HPC environments. Bocca provides project management and a comprehensive build environment for creating and managing applications composed of Common Component Architecture components. Of critical importance for high-performance computing (HPC) applications, Bocca is designed to operate in a language-agnostic way, simultaneously handling components written in any of the languages commonly used in scientific applications: C, C++, Fortran, Python and Java. Bocca automates the tasks related to the component glue code, freeing the user to focus on the scientific aspects of the application. Bocca embraces the philosophy pioneered by Ruby on Rails for web applications: start with something that works, and evolve it to the user's purpose.
Chemistry Modeling for Aerothermodynamics and TPS
NASA Technical Reports Server (NTRS)
Wang, Dunyou; Stallcop, James R.; Dateo, Christopher E.; Schwenke, David W.; Halicioglu, Timur; Huo, Winifred M.
2005-01-01
Recent advances in supercomputers and highly scalable quantum chemistry software render computational chemistry methods a viable means of providing chemistry data for aerothermal analysis at a specific level of confidence. Four examples of first principles quantum chemistry calculations will be presented. Study of the highly nonequilibrium rotational distribution of a nitrogen molecule from the exchange reaction N + N2 illustrates how chemical reactions can influence rotational distribution. The reaction C2H + H2 is one example of a radical reaction that occurs during hypersonic entry into an atmosphere containing methane. A study of the etching of a Si surface illustrates our approach to surface reactions. A recently developed web accessible database and software tool (DDD) that provides the radiation profile of diatomic molecules is also described.
Chemistry Modeling for Aerothermodynamics and TPS
NASA Technical Reports Server (NTRS)
Wang, Dun-You; Stallcop, James R.; Dateo, Christopher E.; Schwenke, David W.; Halicioglu, Timur; Huo, Winifred
2004-01-01
Recent advances in supercomputers and highly scalable quantum chemistry software render computational chemistry methods a viable means of providing chemistry data for aerothermal analysis at a specific level of confidence. Four examples of first principles quantum chemistry calculations will be presented. The study of the highly nonequilibrium rotational distribution of nitrogen molecule from the exchange reaction N + N2 illustrates how chemical reactions can influence the rotational distribution. The reaction C2H + H2 is one example of a radical reaction that occurs during hypersonic entry into a methane containing atmosphere. A study of the etching of Si surface illustrates our approach to surface reactions. A recently developed web accessible database and software tool (DDD) that provides the radiation profile of diatomic molecules is also described.
Multiple-function multi-input/multi-output digital control and on-line analysis
NASA Technical Reports Server (NTRS)
Hoadley, Sherwood T.; Wieseman, Carol D.; McGraw, Sandra M.
1992-01-01
The design and capabilities of two digital controller systems (DCSs) for aeroelastic wind-tunnel models are described. The first allowed control of flutter while performing roll maneuvers with wing load control, as well as coordinating the acquisition, storage, and transfer of data for on-line analysis. This system, which employs several digital signal processor (DSP) boards programmed in high-level software languages, is housed in a Sun workstation environment. The second DCS provides a measure of wind-tunnel safety by functioning as a trip system during testing in the event of high model dynamic response or failure of the first DCS. The second DCS uses National Instruments LabVIEW software and hardware within a Macintosh environment.
NASA Astrophysics Data System (ADS)
Dervilllé, A.; Labrosse, A.; Zimmermann, Y.; Foucher, J.; Gronheid, R.; Boeckx, C.; Singh, A.; Leray, P.; Halder, S.
2016-03-01
The dimensional scaling in IC manufacturing strongly drives the demands on CD and defect metrology techniques and their measurement uncertainties. Defect review has become as important as CD metrology, and together they create a new metrology paradigm, with a completely new need for flexible, robust and scalable metrology software. Current software architectures and metrology algorithms perform well, but they must be pushed to a higher level in order to keep pace with roadmap speed and requirements: for example, managing defects and CD in a one-step algorithm, customizing algorithms and output features for each R&D team environment, and providing software updates every day or every week so that R&D teams can easily explore various development strategies. The final goal is to avoid spending hours and days manually tuning algorithms to analyze metrology data, and to allow R&D teams to stay focused on their expertise. The benefits are drastic cost reductions, more efficient R&D teams and better process quality. In this paper, we propose a new generation of software platform and development infrastructure which can integrate specific metrology business modules. For example, we show the integration of a chemistry module dedicated to electronics materials such as Directed Self-Assembly features. We show a new generation of image analysis algorithms which are able to manage at the same time defect rates, image classification, CD and roughness measurements, with high-throughput performance compatible with HVM. In a second part, we assess the reliability and customization of the algorithms and the software platform's capability to meet new semiconductor metrology software requirements: flexibility, robustness, high throughput and scalability. Finally, we demonstrate how such an environment has allowed a drastic reduction of data analysis cycle time.
Software analysis in the semantic web
NASA Astrophysics Data System (ADS)
Taylor, Joshua; Hall, Robert T.
2013-05-01
Many approaches in software analysis, particularly dynamic malware analysis, benefit greatly from the use of linked data and other Semantic Web technology. In this paper, we describe AIS, Inc.'s Semantic Extractor (SemEx) component from the Malware Analysis and Attribution through Genetic Information (MAAGI) effort, funded under DARPA's Cyber Genome program. The SemEx generates OWL-based semantic models of high- and low-level behaviors in malware samples from system call traces generated by AIS's introspective hypervisor, IntroVirt™. Within MAAGI, these semantic models were used by modules that cluster malware samples by functionality and construct "genealogical" malware lineages. Herein, we describe the design, implementation, and use of the SemEx, as well as the C2DB, an OWL ontology used for representing software behavior and cyber-environments.
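The C2DB ontology itself is not reproduced in the abstract; as a minimal sketch of the general idea, lifting a system-call trace into an OWL/RDF model, the code below uses the open-source rdflib package with an invented example namespace standing in for C2DB.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/c2db#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# A toy system-call trace: (process, syscall, argument).
trace = [("proc1", "CreateFile", "C:\\evil.dll"),
         ("proc1", "RegSetValue", "HKLM\\Software\\Run")]

for i, (proc, call, arg) in enumerate(trace):
    ev = EX[f"event{i}"]
    g.add((ev, RDF.type, EX.SystemCall))
    g.add((ev, EX.process, EX[proc]))
    g.add((ev, EX.name, Literal(call)))
    g.add((ev, EX.argument, Literal(arg)))

print(g.serialize(format="turtle"))
```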
A complexity-scalable software-based MPEG-2 video encoder.
Chen, Guo-bin; Lu, Xin-ning; Wang, Xing-guo; Liu, Ji-lin
2004-05-01
With the development of general-purpose processors (GPPs) and video signal processing algorithms, it is possible to implement a software-based real-time video encoder on a GPP, and its low cost and easy upgradability attract developers to move video encoding from specialized hardware to more flexible software. In this paper, the encoding structure is first set up to support complexity scalability; then high-performance algorithms are applied to the key time-consuming modules in the coding process; finally, at the programming level, processor characteristics are considered to improve data access efficiency and processing parallelism. Other programming methods, such as lookup tables, are adopted to reduce the computational complexity. Simulation results showed that these ideas could not only improve the global performance of video coding, but also provide great flexibility in complexity regulation.
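Of the optimizations named, the lookup table is easy to illustrate; the sketch below (not the authors' code) precomputes a saturation table so that clamping IDCT output to the 8-bit range costs one array read per pixel instead of two comparisons.

```python
import numpy as np

# Precompute once: index -512..1023 maps to the clipped value 0..255.
OFFSET = 512
CLIP_TABLE = np.clip(np.arange(-OFFSET, 1024), 0, 255).astype(np.uint8)

def clip_block(block):
    """Saturate an IDCT output block to [0, 255] via table lookup."""
    return CLIP_TABLE[block + OFFSET]

idct_out = np.array([[-30, 300], [128, 17]])
print(clip_block(idct_out))  # [[  0 255] [128  17]]
```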
NASA Technical Reports Server (NTRS)
Albus, James S.; McCain, Harry G.; Lumia, Ronald
1989-01-01
The document describes the NASA Standard Reference Model (NASREM) architecture for the Space Station Telerobot Control System. It defines the functional requirements and high-level specifications of the control system for the NASA Space Station Flight Telerobot Servicer, and is intended as a document for the functional specification, and a guideline for the development of the control system architecture, of the IOC Flight Telerobot Servicer. The NASREM telerobot control system architecture defines a set of standard modules and interfaces which facilitates software design, development, validation, and test, and makes possible the integration of telerobotics software from a wide variety of sources. Standard interfaces also provide the software hooks necessary to incrementally upgrade future Flight Telerobot Systems as new capabilities develop in computer science, robotics, and autonomous system control.
Systems aspects of COBE science data compression
NASA Technical Reports Server (NTRS)
Freedman, I.; Boggess, E.; Seiler, E.
1993-01-01
A general approach to compression of diverse data from large scientific projects has been developed, and this paper addresses the appropriate system and scientific constraints together with the algorithm development and test strategy. This framework has been implemented for the COsmic Background Explorer (COBE) spacecraft by retrofitting the existing VAX-based data management system with high-performance compression software permitting random access to the data. Algorithms which incorporate scientific knowledge and consume relatively few system resources are preferred over ad hoc methods. COBE exceeded its planned storage by a large and growing factor, and the retrieval of data significantly affects the processing, delaying the availability of data for scientific usage and software test. Embedded compression software is planned to make the project tractable by reducing the data storage volume to an acceptable level during normal processing.
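The abstract stresses random access into compressed data; a standard way to provide it, shown here as a sketch of the general technique rather than the COBE implementation, is to compress fixed-size groups of records independently and keep an offset index into the compressed stream.

```python
import zlib

RECORDS_PER_CHUNK = 1000
RECORD_SIZE = 64  # bytes per record (illustrative value)

def compress_chunks(data: bytes):
    """Compress fixed-size chunks independently; return (blob, index)."""
    chunk_bytes = RECORDS_PER_CHUNK * RECORD_SIZE
    blob, index = b"", []
    for i in range(0, len(data), chunk_bytes):
        index.append(len(blob))
        blob += zlib.compress(data[i:i + chunk_bytes])
    index.append(len(blob))
    return blob, index

def read_record(blob: bytes, index, n: int) -> bytes:
    """Random access: decompress only the chunk containing record n."""
    c = n // RECORDS_PER_CHUNK
    chunk = zlib.decompress(blob[index[c]:index[c + 1]])
    off = (n % RECORDS_PER_CHUNK) * RECORD_SIZE
    return chunk[off:off + RECORD_SIZE]
```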
Wallot, Sebastian; Roepstorff, Andreas; Mønster, Dan
2016-01-01
We introduce Multidimensional Recurrence Quantification Analysis (MdRQA) as a tool to analyze multidimensional time-series data. We show how MdRQA can be used to capture the dynamics of high-dimensional signals, and how MdRQA can be used to assess coupling between two or more variables. In particular, we describe applications of the method in research on joint and collective action, as it provides a coherent analysis framework to systematically investigate dynamics at different group levels: from individual dynamics, to dyadic dynamics, up to the global group level for groups of arbitrary size. The Appendix in Supplementary Material contains a software implementation in MATLAB to calculate MdRQA measures. PMID:27920748
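The authors' reference implementation is the MATLAB code in the Appendix; purely as an illustration of the core computation, the Python sketch below builds the recurrence matrix of a multidimensional time series and reports the recurrence rate, one standard RQA measure.

```python
import numpy as np

def recurrence_matrix(data, radius):
    """Recurrence matrix for a (T, d) multidimensional time series.

    Two time points are 'recurrent' when the Euclidean distance
    between their d-dimensional states is within `radius`.
    """
    diff = data[:, None, :] - data[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return (dist <= radius).astype(int)

rng = np.random.default_rng(0)
series = rng.standard_normal((200, 3))  # e.g., one signal per group member
rec = recurrence_matrix(series, radius=1.0)
print(f"recurrence rate: {rec.mean():.3f}")
```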
Opportunities and choice in a new vector era
NASA Astrophysics Data System (ADS)
Nowak, A.
2014-06-01
This work discusses the significant changes in the computing landscape related to the progression of Moore's Law, and the implications for scientific computing. Particular attention is devoted to the High Energy Physics (HEP) domain, which has always made good use of threading, but where levels of parallelism closer to the hardware were often left underutilized. Findings of the CERN openlab Platform Competence Center are reported in the context of expanding "performance dimensions", and especially the resurgence of vectors. These suggest that data-oriented designs are feasible in HEP and have considerable potential for performance improvements on multiple levels, but will rarely trump algorithmic enhancements. Finally, an analysis of upcoming hardware and software technologies identifies heterogeneity as a major challenge for software, which will require more emphasis on scalable, efficient design.
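As a minimal illustration of the data-oriented designs the report finds feasible (not code from the report), the sketch below uses a structure-of-arrays layout so that a per-track kinematic quantity is computed in one vectorized pass over contiguous memory.

```python
import numpy as np

n = 1_000_000
# Structure-of-arrays: one contiguous array per field, vector-friendly.
px = np.random.rand(n)
py = np.random.rand(n)

# One fused, vectorized pass; an array-of-structures design would
# instead loop over track objects one at a time.
pt = np.hypot(px, py)
print(pt[:3])
```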
Towards a general object-oriented software development methodology
NASA Technical Reports Server (NTRS)
Seidewitz, Ed; Stark, Mike
1986-01-01
Object diagrams were used to design a 5000-statement team training exercise and to design the entire dynamics simulator. The object diagrams are also being used to design another 50,000-statement Ada system and a personal computer based system that will be written in Modula-2. The design methodology evolves out of these experiences as well as the limitations of other methods that were studied. Object diagrams, abstraction analysis, and associated principles provide a unified framework which encompasses concepts from Yourdon, Booch, and Cherry. This general object-oriented approach handles high-level system design, possibly with concurrency, through object-oriented decomposition down to a completely functional level. How object-oriented concepts can be used in other phases of the software life-cycle, such as specification and testing, is being studied concurrently.
The ASSERT Virtual Machine Kernel: Support for Preservation of Temporal Properties
NASA Astrophysics Data System (ADS)
Zamorano, J.; de la Puente, J. A.; Pulido, J. A.; Urueña
2008-08-01
A new approach to building embedded real-time software has been developed in the ASSERT project. One of its key elements is the concept of a virtual machine preserving the non-functional properties of the system, and especially real-time properties, all the way from high-level design models down to executable code. The paper describes one instance of the virtual machine concept that provides support for the preservation of temporal properties both at the source code level, by accepting only "legal" entities, i.e. software components with statically analysable real-time behaviour, and at run time, by monitoring the temporal behaviour of the system. The virtual machine has been validated on several pilot projects carried out by aerospace companies in the framework of the ASSERT project.
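The run-time half of the approach can be sketched as a watchdog comparing a job's measured execution time against its statically analysed budget; this illustrates the monitoring concept only, not the ASSERT kernel's actual mechanism.

```python
import time

class TemporalMonitor:
    """Flag jobs that exceed their statically analysed time budget."""

    def __init__(self, wcet_s: float):
        self.wcet_s = wcet_s  # worst-case execution time budget (seconds)

    def run(self, job, *args):
        start = time.perf_counter()
        result = job(*args)
        elapsed = time.perf_counter() - start
        if elapsed > self.wcet_s:
            raise RuntimeError(
                f"temporal violation: {elapsed:.6f}s > {self.wcet_s:.6f}s")
        return result

mon = TemporalMonitor(wcet_s=0.010)
print(mon.run(sum, range(1000)))  # well under the 10 ms budget
```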
High/Scope Buyer's Guide to Children's Software. 11th Edition.
ERIC Educational Resources Information Center
Hohmann, Charles; And Others
This 11th edition of the High/Scope Buyer's Guide to Children's Software was designed to help teachers, caregivers, and parents make good choices when purchasing software to enhance children's learning. The book consists of an introduction, a chapter on finding the best software, and software reviews for 48 different software products. The…
1992-12-01
provide program managers some level of confidence that their software will operate at an acceptable level of risk. A number of structured safety... safety within the constraints of operational effectiveness, schedule, and cost through timely application of system safety management and engineering... Master of Science in Software Systems Management. Peter W. Colan, B.S.E., Captain, USAF; Robert W. Prouhet, B.S., Captain, USAF. December 1992. Approved for
High-Level Data Fusion Software for SHOALS-1000TH FY07 Annual Report
2007-01-01
This survey covered the lakeside town of Alpena, Michigan, and the shoreline of Lake Huron. Additionally a small set of ground reflectance... Figure 2. SHOALS green laser reflectance image of the eastern part of Alpena, Michigan, and the shoreline of Thunder Bay
Library Development Handbook. Central Archive for Reusable Defense Software (CARDS)
1993-10-29
features. This feature benefits the individual not versed in the terminology of the domain. When class requirements become part of the domain criteria, they... franchisee - Group to whom a franchise is granted. generic architecture - A collection of high-level paradigms and constraints that characterize the
Virtual Beach: Decision Support Tools for Beach Pathogen Prediction
The Virtual Beach Managers Tool (VB) is decision-making software developed to help local beach managers decide when beaches should be closed due to predicted high levels of waterborne pathogens. The tool is being developed under the umbrella of EPA's Advanced Monit...
Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR).
Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Robins, Marthony; Colsher, James; Samei, Ehsan
2013-11-01
Volume quantification of lung nodules with multidetector computed tomography (CT) images provides useful information for monitoring nodule development. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification, with dose and slice thickness as additional variables. Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model-based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR; B: iNtuition), and analyzed for accuracy and precision. Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted for different dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with software A and for all slice thicknesses with software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with software A and MBIR vs FBP at 0.625 mm with software A. The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions. In addition, a calibration process may help reduce the dependency of accuracy on reconstruction algorithms, such that volumes quantified from scans with different reconstruction algorithms can be compared. The small difference found between the precision of FBP and iterative reconstructions could be a result of both iterative reconstruction's diminished noise reduction at the edges of the nodules and its loss of resolution at high noise levels. The findings do not rule out a potential advantage of iterative reconstruction that might be evident in a study that uses a larger number of nodules or repeated scans.
Popok, David; West, Christopher; Frias, Barbara; Krassioukov, Andrei V
2016-07-29
Spinal cord injury (SCI) is a debilitating neurological condition characterized by somatic and autonomic dysfunction. In particular, SCI above the mid-thoracic level can lead to a potentially life-threatening hypertensive condition called autonomic dysreflexia (AD) that is often triggered by noxious or non-noxious somatic or visceral stimuli below the level of injury. One of the most common triggers of AD is the distension of pelvic viscera, such as during bladder and bowel distension or evacuation. This protocol presents a novel pattern recognition algorithm, developed as Java-platform software, to study the fluctuations of cardiovascular parameters as well as the number, severity and duration of spontaneously occurring AD events. The software applies the pattern recognition algorithm to hemodynamic data, such as systolic blood pressure (SBP) and heart rate (HR), extracted from telemetry recordings of conscious and unrestrained animals before and after complete thoracic (T3) transection. With this software, hemodynamic parameters and episodes of AD can be detected and analyzed with minimal experimenter bias.
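The exact detection criterion is not given in the abstract; the sketch below uses a rule common in the rodent AD literature, an SBP rise of at least 20 mmHg over baseline with a concurrent HR decrease, and should be read as an assumed stand-in for the authors' algorithm.

```python
import numpy as np

def detect_ad_samples(sbp, hr, baseline_sbp, baseline_hr, rise_mmhg=20.0):
    """Flag telemetry samples consistent with autonomic dysreflexia.

    Assumed criterion: SBP at least `rise_mmhg` above baseline while
    HR is below baseline (reflex bradycardia).
    """
    sbp, hr = np.asarray(sbp), np.asarray(hr)
    return (sbp >= baseline_sbp + rise_mmhg) & (hr < baseline_hr)

sbp = [118, 122, 145, 150, 121]   # mmHg
hr = [420, 415, 360, 355, 418]    # beats/min, typical rat telemetry
print(detect_ad_samples(sbp, hr, baseline_sbp=120, baseline_hr=410))
# [False False  True  True False]
```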
Development of the functional simulator for the Galileo attitude and articulation control system
NASA Technical Reports Server (NTRS)
Namiri, M. K.
1983-01-01
A simulation program for verifying and checking the performance of the Galileo spacecraft's Attitude and Articulation Control Subsystem (AACS) flight software is discussed. The program, called the Functional Simulator (FUNSIM), provides a simple method of interfacing user-supplied mathematical models, coded in FORTRAN, that describe spacecraft dynamics, sensors, and actuators with the AACS flight software, coded in HAL/S (High-level Advanced Language/Shuttle). It is thus able to simulate the AACS flight software accurately, to the HAL/S statement level, in the environment of a mainframe computer system. FUNSIM also has a command and data subsystem (CDS) simulator. It is noted that the input/output data and timing are simulated with the same precision as the flight microprocessor. FUNSIM uses a variable-stepsize numerical integration algorithm, complete with individual error bound control on each state variable, to solve the equations of motion. The program has been designed to provide both line-printer and dot-matrix plotting of the variables requested in the run section, and to provide error diagnostics.
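FUNSIM's variable-stepsize integration with per-variable error bounds corresponds to what modern adaptive solvers expose as an absolute-tolerance array; the sketch below shows the idea with SciPy on a toy single-axis attitude model, not the FUNSIM code.

```python
import numpy as np
from scipy.integrate import solve_ivp

I = 2.0  # moment of inertia, kg*m^2 (illustrative value)

def dynamics(t, y, torque=0.01):
    """Toy single-axis rigid body: state y = [angle, rate]."""
    angle, rate = y
    return [rate, torque / I]

# Variable-step integration with an individual error bound on each
# state variable, mirroring FUNSIM's per-variable error control.
sol = solve_ivp(dynamics, (0.0, 100.0), [0.0, 0.0],
                rtol=1e-8, atol=[1e-9, 1e-11])
print(sol.y[:, -1])  # angle and rate at t = 100 s
```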
NASA Technical Reports Server (NTRS)
Langston, L. J.
1976-01-01
The formulation of Level C requirements for guidance software was reported. Requirements for a PEG supervisor which controls all input/output interfaces with other processors and determines which PEG mode is to be utilized were studied in detail. A description of the two guidance modes for which Level C requirements have been formulated was presented. Functions required for proper execution of the guidance software were defined. The requirements for a navigation function that is used in the prediction logic of PEG mode 4 were discussed. It is concluded that this function is extracted from the current navigation FSSR.
Automated benchmark generation based upon a specification language
NASA Technical Reports Server (NTRS)
Rajan, N.; Feteih, S. E.; Saito, J.
1984-01-01
The problem of validating and verifying digital flight control system (DFCS) software is addressed in this paper. A new specification language, DIVERS, is proposed as the keystone of the approach. The language consists of keywords, where each keyword represents an element in the block diagram of a DFCS. DIVERS has a dictionary which contains all the keywords a DFCS designer might need. Translator programs convert the system specifications into an executable, high-level language program. The features of the translators are discussed and elucidated by examples. The language is used to describe a typical flight software module.
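DIVERS's actual keyword dictionary is not given in the abstract, so the sketch below invents two block-diagram keywords (GAIN, SUM) purely to show the translation pattern: each keyword maps to a code template, and a specification becomes an executable program.

```python
# Hypothetical keywords and templates; the real DIVERS dictionary differs.
TEMPLATES = {
    "GAIN": "{out} = {k} * {inp}",
    "SUM": "{out} = {a} + {b}",
}

def translate(spec):
    """Turn a keyword specification into executable Python lines."""
    return "\n".join(
        TEMPLATES[entry["keyword"]].format(**entry) for entry in spec)

spec = [{"keyword": "GAIN", "out": "u", "k": 2.5, "inp": "err"},
        {"keyword": "SUM", "out": "cmd", "a": "u", "b": "trim"}]
code = translate(spec)
print(code)

env = {"err": 0.4, "trim": 0.1}
exec(code, env)    # run the generated controller step
print(env["cmd"])  # 2.5*0.4 + 0.1 = 1.1
```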
Quantum Computing Architectural Design
NASA Astrophysics Data System (ADS)
West, Jacob; Simms, Geoffrey; Gyure, Mark
2006-03-01
Large scale quantum computers will invariably require scalable architectures in addition to high fidelity gate operations. Quantum computing architectural design (QCAD) addresses the problems of actually implementing fault-tolerant algorithms given physical and architectural constraints beyond those of basic gate-level fidelity. Here we introduce a unified framework for QCAD that enables the scientist to study the impact of varying error correction schemes, architectural parameters including layout and scheduling, and physical operations native to a given architecture. Our software package, aptly named QCAD, provides compilation, manipulation/transformation, multi-paradigm simulation, and visualization tools. We demonstrate various features of the QCAD software package through several examples.
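As a self-contained stand-in for the kind of trade study QCAD supports (the package's own API is not shown in the abstract), the sketch below Monte-Carlo-simulates a three-qubit bit-flip repetition code with majority-vote decoding and reports the logical error rate as the physical error rate varies.

```python
import random

def logical_error_rate(p: float, trials: int = 100_000) -> float:
    """3-qubit bit-flip code: decoding fails when 2+ qubits flip."""
    failures = sum(
        sum(random.random() < p for _ in range(3)) >= 2
        for _ in range(trials))
    return failures / trials

for p in (0.01, 0.05, 0.10):
    print(f"physical p = {p:.2f} -> logical p = {logical_error_rate(p):.4f}")
```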
ERIC Educational Resources Information Center
Biju, Soly Mathew
2008-01-01
Many software development firms are now adopting the agile software development method. This method involves the customer at every level of software development, thus reducing the impact of changes in requirements at a later stage. In this article, the principles of the agile method for software development are explored and there is a focus on…
VML 3.0 Reactive Sequencing Objects and Matrix Math Operations for Attitude Profiling
NASA Technical Reports Server (NTRS)
Grasso, Christopher A.; Riedel, Joseph E.
2012-01-01
VML (Virtual Machine Language) has been used as the sequencing flight software on over a dozen JPL deep-space missions, most recently flying on GRAIL and JUNO. In conjunction with the NASA SBIR entitled "Reactive Rendezvous and Docking Sequencer", VML version 3.0 has been enhanced to include object-oriented element organization, built-in queuing operations, and sophisticated matrix/vector operations. These improvements allow VML scripts to easily perform much of the work that formerly would have required a great deal of expensive flight software development to realize. Autonomous turning and tracking makes considerable use of the new VML features. Profiles generated by flight software are managed using object-oriented VML data constructs executed in discrete time by the VML flight software. VML vector and matrix operations provide the ability to calculate and supply quaternions to the attitude controller flight software, which produces torque requests. Using VML-based attitude planning components eliminates flight software development effort and reduces corresponding costs. In addition, the direct management of the quaternions allows turning and tracking to be tied in with sophisticated high-level VML state machines. These state machines provide autonomous management of spacecraft operations during critical tasks like a hypothetical Mars sample return rendezvous and docking. State machines created for autonomous science observations can also use this sort of attitude planning system, allowing heightened autonomy levels to reduce operations costs. VML state machines cannot be considered merely sequences; they are reactive logic constructs capable of autonomous decision making within a well-defined domain. The state machine approach enabled by VML 3.0 is progressing toward flight capability with a wide array of applicable mission activities.
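The quaternion handoff can be illustrated generically; the sketch below computes, in plain NumPy rather than VML syntax, the shortest-arc quaternion rotating a body-frame boresight onto a target direction, the kind of quantity a VML 3.0 script would supply to the attitude controller.

```python
import numpy as np

def pointing_quaternion(boresight, target):
    """Shortest-arc quaternion (x, y, z, w) rotating boresight onto target.

    Note: the antiparallel case (boresight == -target) is not handled
    in this sketch.
    """
    b = boresight / np.linalg.norm(boresight)
    t = target / np.linalg.norm(target)
    q = np.array([*np.cross(b, t), 1.0 + float(b @ t)])
    return q / np.linalg.norm(q)

# Point a +X instrument boresight at a target direction.
print(pointing_quaternion(np.array([1.0, 0.0, 0.0]),
                          np.array([0.0, 1.0, 1.0])))
```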
New generation of exploration tools: interactive modeling software and microcomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krajewski, S.A.
1986-08-01
Software packages offering interactive modeling techniques are now available for use on microcomputer hardware systems. These packages are reasonably priced for both company and independent explorationists; they do not require users to have high levels of computer literacy; they are capable of rapidly completing complex ranges of sophisticated geologic and geophysical modeling tasks; and they can produce presentation-quality output for comparison with real-world data. For example, interactive packages are available for mapping, log analysis, seismic modeling, reservoir studies, and financial projections, as well as for applying a variety of statistical and geostatistical techniques to the analysis of exploration data. More importantly, these packages enable explorationists to directly apply their geologic expertise when developing and fine-tuning models for identifying new prospects and for extending producing fields. As a result of these features, microcomputers and interactive modeling software are becoming common tools in many exploration offices. Gravity and magnetics software programs illustrate some of the capabilities of such exploration tools.
NASA Technical Reports Server (NTRS)
Rosenberg, Linda H.; Arthur, James D.; Stapko, Ruth K.; Davani, Darush
1999-01-01
The Software Assurance Technology Center (SATC) at NASA Goddard Space Flight Center has been investigating how projects can determine when sufficient testing has been completed. For most projects, schedules are underestimated, and the last phase of software development, testing, must be compressed. Two questions are frequently asked: "To what extent is the software error-free?" and "How much time and effort is required to detect and remove the remaining errors?" Clearly, neither question can be answered with absolute certainty. Nonetheless, the ability to answer these questions with some acceptable level of confidence is highly desirable. First, knowing the extent to which a product is error-free, we can judge when it is time to terminate testing. Second, if errors are judged to be present, we can perform a cost/benefit trade-off analysis to estimate when the software will be ready for use and at what cost. This paper explains the efforts of the SATC to help projects determine what constitutes sufficient testing and when is the most cost-effective time to stop testing.
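The abstract does not name the SATC's model; one common way to answer both questions, shown here as an assumption, is to fit a Goel-Okumoto reliability-growth curve m(t) = a(1 - e^(-bt)) to the cumulative defect counts and read off the estimated residual errors a - m(T).

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative defects found by time t."""
    return a * (1.0 - np.exp(-b * t))

# Weeks of testing vs. cumulative defects found (illustrative data).
t = np.arange(1, 11)
found = np.array([12, 22, 30, 36, 41, 45, 48, 50, 52, 53])

(a, b), _ = curve_fit(goel_okumoto, t, found, p0=(60.0, 0.2))
remaining = a - goel_okumoto(t[-1], a, b)
print(f"estimated total defects: {a:.1f}, still latent: {remaining:.1f}")
```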
NASA Astrophysics Data System (ADS)
Deliparaschos, Kyriakos M.; Michail, Konstantinos; Zolotas, Argyrios C.; Tzafestas, Spyros G.
2016-05-01
This work presents a field programmable gate array (FPGA)-based embedded software platform coupled with a software-based plant, forming a hardware-in-the-loop (HIL) system that is used to validate a systematic sensor selection framework. The framework combines multi-objective optimization, linear-quadratic-Gaussian (LQG)-type control, and the nonlinear model of a maglev suspension. A robustness analysis of the closed loop follows (prior to implementation), supporting the appropriateness of the solution under parametric variation. The analysis also shows that quantization is robust under different controller gains. While the LQG controller is implemented on an FPGA, the physical process is realized in a high-level system modeling environment. FPGA technology enables rapid evaluation of the algorithms and test designs under realistic scenarios, avoiding the heavy time penalty associated with hardware description language (HDL) simulators. The HIL technique facilitates a significant speed-up in the required execution time when compared to its software-based counterpart model.
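The LQG design step can be sketched generically: the code below computes an LQR state-feedback gain with SciPy's continuous-time Riccati solver for a double integrator, a deliberately simple stand-in for the maglev suspension model, which the abstract does not reproduce.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator stand-in for the plant (assumption, not the maglev model).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])  # state weighting
R = np.array([[0.1]])     # control weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # optimal state feedback u = -K x
print("LQR gain K =", K)
```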
Reducing Risk in DoD Software-Intensive Systems Development
2016-03-01
intensive systems development risk. This research addresses the use of the Technical Readiness Assessment (TRA) using the nine-level software Technology...The software TRLs are ineffective in reducing technical risk for the software component development. • Without the software TRLs, there is no...effective method to perform software TRA or reduce the technical development risk. The software component will behave as a new, untried technology in nearly
Concept Designed and Developed for Distortion-Tolerant, High-Stability Engine Control
NASA Technical Reports Server (NTRS)
1995-01-01
Future aircraft turbine engines, both commercial and military, must be able to successfully accommodate expected increased levels of steady-state and dynamic engine-face distortion. Advanced tactical aircraft are likely to use thrust vectoring to enhance their maneuverability. As a result, the engines will see more extreme aircraft angles of attack and sideslip levels than are currently encountered with present-day aircraft. Also, the mixed-compression inlets needed for the High Speed Civil Transport will likely encounter disturbances similar to those seen by tactical aircraft, in addition to planar pulse, inlet buzz, and high distortion levels at low flight speed and off-design operation. The current approach of incorporating a sufficient component design stall margin to tolerate these increased levels of distortion would significantly reduce performance. The objective of the High Stability Engine Control (HISTEC) program is to design, develop, and flight demonstrate an advanced, high-stability, integrated engine-control system that uses measurement-based, real-time estimates of distortion to enhance engine stability. The resulting distortion-tolerant control reduces the required design stall margin, with a corresponding increase in performance and decrease in fuel burn. The HISTEC concept has been designed and developed, and the software implementing the concept has successfully accommodated time-varying distortion. The NASA Lewis Research Center is currently overseeing the development and validation of the hardware and software necessary to flight test the HISTEC concept. HISTEC is a contracted effort with Pratt & Whitney of West Palm Beach, Florida. The HISTEC approach includes two major systems: a Distortion Estimation System (DES) and a Stability Management Control (SMC). The DES is an aircraft-mounted, high-speed processor that estimates the amount and type of distortion present and its effect on the engine. It uses high-response pressure measurements at the engine face to calculate indicators of the type and extent of distortion in real time. From these indicators, the DES determines the effects of distortion on the propulsion system and the corresponding engine match point necessary to accommodate it. DES output consists of fan and compressor pressure ratio trim commands that are passed to the SMC. In addition, the DES uses maneuver information, consisting of angle of attack and sideslip from the flight control, to anticipate high inlet distortion conditions. The SMC, which is contained in the engine-mounted Improved Digital Electronic Engine Control (IDEEC), includes advanced control laws to directly control the fan and compressor transient operating line (pressure ratio). These advanced control laws, with a multivariable design, have the potential for higher bandwidth and the resulting more precise control of engine match. The ability to measure and assess distortion effects in real time, coupled with a high-response controller, improves engine stability at high levels of distortion. The software algorithms implementing the DES have been designed, developed, and demonstrated, and integration testing of the DES and SMC software has been completed. The results show that the HISTEC system will be able to sense inlet distortion, determine the effect on engine stability, and accommodate distortion by maintaining an adequate margin for engine surge. The Pratt & Whitney Comprehensive Engine Diagnostic Unit was chosen as the DES processor.
An instrumented inlet case for sensing distortion was designed and fabricated. HISTEC is scheduled for flight test on the ACTIVE F-15 aircraft at the NASA Dryden Flight Research Center in Edwards, California, in late 1996.
Firefly: an optical lithographic system for the fabrication of holographic security labels
NASA Astrophysics Data System (ADS)
Calderón, Jorge; Rincón, Oscar; Amézquita, Ricardo; Pulido, Iván; Amézquita, Sebastián; Bernal, Andrés; Romero, Luis; Agudelo, Viviana
2016-03-01
This paper introduces Firefly, an optical lithography origination system that has been developed to produce holographic masters of high quality. This mask-less lithography system has a resolution of 418 nm half-pitch, and generates holographic masters with the optical characteristics required for security applications at level 1 (visual verification), level 2 (pocket reader verification) and level 3 (forensic verification). The holographic master constitutes the core of the manufacturing process for the security holographic labels used for the authentication of products and documents worldwide. Additionally, the Firefly is equipped with a software tool that allows the hologram to be designed from graphic formats stored as bitmaps. The software is capable of generating and configuring basic optical effects such as animation and color, as well as effects of high complexity such as Fresnel lenses, engravings and encrypted images, among others. The Firefly technology brings together optical lithography, digital image processing and the most advanced control systems, making possible competitive equipment that challenges the best technologies in the holographic origination industry around the world. In this paper, a general description of the origination system is provided, as well as some examples of its capabilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
YU, DANTONG; Jin, Shudong
2014-03-01
Data-intensive applications, including high energy and nuclear physics, astrophysics, climate modeling, nano-scale materials science, genomics, and finance, are expected to generate exabytes of data over the coming years, which must be transferred, visualized, and analyzed by geographically distributed teams of users. High-performance network capabilities must be available to these users at the application level in a transparent, virtualized manner. Moreover, the application users must have the capability to move large datasets from local and remote locations across network environments to their home institutions. To address these challenges, the main goal of our project is to design and evaluate high-performance data transfer software to support various data-intensive applications. First, we have designed middleware software that provides access to Remote Direct Memory Access (RDMA) functionality. This middleware integrates network access, memory management and multitasking in its core design. We address a number of issues related to its efficient implementation, for instance, explicit buffer management and memory registration, and parallelization of RDMA operations, which are vital to delivering the benefit of RDMA to the applications. Built on top of this middleware, an implementation and experimental evaluation of the RDMA-based FTP software, RFTP, is described and evaluated. This application has been implemented by our team to exploit the full capabilities of advanced RDMA mechanisms for ultra-high speed bulk data transfer applications on the Energy Sciences Network (ESnet). Second, we designed our data transfer software to optimize TCP/IP-based data transfer performance so that RFTP is fully compatible with today's Internet. Our kernel optimization techniques, using the Linux system calls sendfile and splice, reduce data copy cost. In this report, we summarize the technical challenges of our project, the primary software design methods, the major project milestones achieved, and the testbed evaluation work and demonstrations during the project's lifetime.
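The TCP-path optimization is concrete enough to illustrate: sendfile(2) moves file data to a socket without a round trip through user space. The sketch below uses Python's os.sendfile binding (Unix only) to stream a file over a connected socket; it illustrates the system call, not RFTP's implementation.

```python
import os
import socket

def send_file_zero_copy(sock: socket.socket, path: str) -> int:
    """Stream a file over a socket via sendfile(2), avoiding user-space copies."""
    sent_total = 0
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        while sent_total < size:
            sent = os.sendfile(sock.fileno(), f.fileno(),
                               sent_total, size - sent_total)
            if sent == 0:
                break
            sent_total += sent
    return sent_total
```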
CognitionMaster: an object-based image analysis framework
2013-01-01
Background: Automated image analysis methods are becoming more and more important for extracting and quantifying image features in microscopy-based biomedical studies, and several commercial or open-source tools are available. However, most of the approaches rely on pixel-wise operations, a concept that has limitations when high-level object features and relationships between objects are studied and when user interactivity on the object level is desired. Results: In this paper we present an open-source software that facilitates the analysis of content features and object relationships by using objects as the basic processing unit instead of individual pixels. Our approach enables even users without programming knowledge to compose "analysis pipelines" that exploit the object-level approach. We demonstrate the design and use of example pipelines for immunohistochemistry-based cell proliferation quantification in breast cancer and for two-photon fluorescence microscopy data on bone-osteoclast interaction, which underline the advantages of the object-based concept. Conclusions: We introduce an open-source software system that offers object-based image analysis. The object-based concept allows for the straightforward development of object-related interactive or fully automated image analysis solutions. The presented software may therefore serve as a basis for various applications in the field of digital image analysis. PMID:23445542
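Independently of the tool itself, the object-based concept, segment first and then measure per object, is easy to sketch; the code below labels connected objects in a toy binary image with SciPy and computes per-object features, the kind of step an object-level pipeline chains together.

```python
import numpy as np
from scipy import ndimage

# Toy binary image with two "cells".
img = np.zeros((8, 8), dtype=int)
img[1:3, 1:3] = 1
img[5:8, 4:7] = 1

# Objects, not pixels, become the processing unit: label, then measure.
labels, n = ndimage.label(img)
idx = range(1, n + 1)
areas = ndimage.sum(img, labels, index=idx)
centroids = ndimage.center_of_mass(img, labels, index=idx)

for i, (area, com) in enumerate(zip(areas, centroids), start=1):
    print(f"object {i}: area = {area:.0f} px, centroid = {com}")
```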
Toward Intelligent Software Defect Detection
NASA Technical Reports Server (NTRS)
Benson, Markland J.
2011-01-01
Source code level software defect detection has gone from state of the art to a software engineering best practice. Automated code analysis tools streamline many of the aspects of formal code inspections but have the drawback of being difficult to construct and either prone to false positives or severely limited in the set of defects that can be detected. Machine learning technology provides the promise of learning software defects by example, easing construction of detectors and broadening the range of defects that can be found. Pinpointing software defects with the same level of granularity as prominent source code analysis tools distinguishes this research from past efforts, which focused on analyzing software engineering metrics data with granularity limited to that of a particular function rather than a line of code.
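A minimal sketch of learning line-level defects by example, using invented surface features and scikit-learn rather than the models or features of this research:

```python
from sklearn.linear_model import LogisticRegression

def features(line: str):
    """Crude per-line surface features (illustrative only)."""
    return [len(line), line.count("("),
            int("==" in line), int("=" in line and "==" not in line)]

# Tiny labeled corpus: 1 marks a defective line (assignment in a condition).
train = ["if (p = NULL)", "while (n = next)", "x = y + 1;", "if (i == j)"]
labels = [1, 1, 0, 0]

clf = LogisticRegression().fit([features(l) for l in train], labels)
print(clf.predict([features("if (q = NULL)")]))  # likely [1] on this toy data
```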
Bringing the Tools of Big Science to Bear on Local Environmental Challenges
ERIC Educational Resources Information Center
Bronson, Scott; Jones, Keith W.; Brown, Maria
2013-01-01
We describe an interactive collaborative environmental education project that makes advanced laboratory facilities at Brookhaven National Laboratory accessible for one-year or multi-year science projects for the high school level. Cyber-enabled Environmental Science (CEES) utilizes web conferencing software to bring multi-disciplinary,…
Marine beaches are occasionally contaminated by unacceptably high levels of fecal indicator bacteria (FIB) that exceed EPA water quality criteria. Here we describe application of a recent version of the software package Virtual Beach (VB 3.0.6) to build and evaluate multiple...
TGeoCad: an Interface between ROOT and CAD Systems
NASA Astrophysics Data System (ADS)
Luzzi, C.; Carminati, F.
2014-06-01
In the simulation of high energy physics experiments, very high precision in the description of the detector geometry is essential to achieve the required performance. The physicists in charge of Monte Carlo simulation of the detector need to collaborate efficiently with the engineers working on the mechanical design of the detector. Often, this collaboration is made hard by the usage of different and incompatible software. ROOT is an object-oriented C++ framework used by physicists for storing, analyzing and simulating data produced by high-energy physics experiments, while CAD (Computer-Aided Design) software is used for mechanical design in the engineering field. The necessity to improve the level of communication between physicists and engineers led to the implementation of an interface between the ROOT geometrical modeler used by the virtual Monte Carlo simulation software and CAD systems. In this paper we describe the design and implementation of the TGeoCad interface, which has been developed to enable the use of ROOT geometrical models in several CAD systems. To achieve this goal, the ROOT geometry description is converted into the STEP file format (ISO 10303), which can be imported and used by many CAD systems.
High-quality remote interactive imaging in the operating theatre
NASA Astrophysics Data System (ADS)
Grimstead, Ian J.; Avis, Nick J.; Evans, Peter L.; Bocca, Alan
2009-02-01
We present a high-quality display system that enables the remote access within an operating theatre of high-end medical imaging and surgical planning software. Currently, surgeons often use printouts from such software for reference during surgery; our system enables surgeons to access and review patient data in a sterile environment, viewing real-time renderings of MRI & CT data as required. Once calibrated, our system displays shades of grey in Operating Room lighting conditions (removing any gamma correction artefacts). Our system does not require any expensive display hardware, is unobtrusive to the remote workstation and works with any application without requiring additional software licenses. To extend the native 256 levels of grey supported by a standard LCD monitor, we have used the concept of "PseudoGrey" where slightly off-white shades of grey are used to extend the intensity range from 256 to 1,785 shades of grey. Remote access is facilitated by a customized version of UltraVNC, which corrects remote shades of grey for display in the Operating Room. The system is successfully deployed at Morriston Hospital, Swansea, UK, and is in daily use during Maxillofacial surgery. More formal user trials and quantitative assessments are being planned for the future.
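The 1,785-shade figure is reachable by arithmetic: bumping a single RGB channel by one step changes luminance by less than a full grey step, so ordering slightly off-grey triplets by luminance yields roughly seven usable sub-steps per grey level (255 x 7 = 1,785). A minimal sketch of such a palette construction, with Rec.601 luminance weights assumed:

```python
# Build an extended grey ramp from slightly off-grey RGB triplets,
# ordered by luminance. Channel deltas of {0, 1} around each base grey
# give sub-steps smaller than one full grey level.
def luminance(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b  # Rec.601 weights (assumed)

palette = set()
for base in range(255):
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                palette.add((base + dr, base + dg, base + db))

shades = sorted(palette, key=lambda c: luminance(*c))
print(len(shades))  # 1786 distinct triplets, ~7 sub-steps per grey level
```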
The High-Level Interface Definitions in the ASTRI/CTA Mini Array Software System (MASS)
NASA Astrophysics Data System (ADS)
Conforti, V.; Tosti, G.; Schwarz, J.; Bruno, P.; Cefalà, M.; Paola, A. D.; Gianotti, F.; Grillo, A.; Russo, F.; Tanci, C.; Testa, V.; Antonelli, L. A.; Canestrari, R.; Catalano, O.; Fiorini, M.; Gallozzi, S.; Giro, E.; Palombara, N. L.; Leto, G.; Maccarone, M. C.; Pareschi, G.; Stringhetti, L.; Trifoglio, M.; Vercellone, S.; Astri Collaboration; Cta Consortium
2015-09-01
ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a Flagship Project funded by the Italian Ministry of Education, University and Research, and led by INAF, the Italian National Institute of Astrophysics. Within this framework, INAF is currently developing an end-to-end prototype, named ASTRI SST-2M, of a Small Size Dual-Mirror Telescope for the Cherenkov Telescope Array, CTA. A second goal of the project is the realization of the ASTRI/CTA mini-array, which will be composed of seven SST-2M telescopes placed at the CTA Southern Site. The ASTRI Mini Array Software System (MASS) is designed to support the ASTRI/CTA mini-array operations. MASS is being built on top of the ALMA Common Software (ACS) framework, which provides support for the implementation of distributed data acquisition and control systems, and functionality for log and alarm management, message-driven communication and hardware device management. The first version of the MASS system, which will comply with the CTA requirements and guidelines, will be tested on the ASTRI SST-2M prototype. In this contribution we present the interface definitions of the high-level MASS components in charge of the ASTRI SST-2M observation scheduling, telescope control and monitoring, and data taking. Particular emphasis is given to their potential reuse for the ASTRI/CTA mini-array.
Using Free Internet Videogames in Upper Extremity Motor Training for Children with Cerebral Palsy
Sevick, Marisa; Eklund, Elizabeth; Mensch, Allison; Foreman, Matthew; Standeven, John; Engsberg, Jack
2016-01-01
Movement therapy is one type of upper extremity intervention for children with cerebral palsy (CP) to improve function. It requires high-intensity, repetitive and task-specific training. Tedium and lack of motivation are substantial barriers to completing the training. An approach to overcoming these barriers is to couple the movement therapy with videogames. This investigation: (1) tested the feasibility of delivering a free Internet videogame upper extremity motor intervention to four children with CP (aged 8–17 years) with mild to moderate limitations in upper limb function; and (2) determined the level of intrinsic motivation during the intervention. The intervention used free Internet videogames in conjunction with the Microsoft Kinect motion sensor and the Flexible Action and Articulated Skeleton Toolkit (FAAST) software. Results indicated that the intervention could be successfully delivered in the laboratory and the home, and that pre- and post-intervention assessments of impairment, function and performance were possible. Results also indicated a high level of motivation among the participants. It was concluded that the use of inexpensive hardware and software in conjunction with free Internet videogames has the potential to be very motivating in helping to improve the upper extremity abilities of children with CP. Future work should include results from additional participants and from a control group in a randomized controlled trial to establish efficacy. PMID:27338485
ERIC Educational Resources Information Center
Mitchell, Susan Marie
2012-01-01
Uncontrollable costs, schedule overruns, and poor end product quality continue to plague the software engineering field. Innovations formulated with the expectation to minimize or eliminate cost, schedule, and quality problems have generally fallen into one of three categories: programming paradigms, software tools, and software process…
Comprehensive measures of sound exposures in cinemas using smart phones.
Huth, Markus E; Popelka, Gerald R; Blevins, Nikolas H
2014-01-01
Sensorineural hearing loss from sound overexposure has a considerable prevalence. Identification of sound hazards is crucial because, in the absence of definitive therapies, prevention is the sole alternative to hearing aids. One subjectively loud, yet little studied, potential sound hazard is movie theaters. This study evaluates smart phones as widely available, validated sound pressure level (SPL) meters and uses them to measure sound levels in movie theaters, to determine whether sound levels exceed safe occupational noise exposure limits and whether sound levels in movie theaters differ as a function of movie, movie theater, presentation time, and seat location within the theater. Six smart phones with an SPL meter software application were calibrated against a precision SPL meter and validated as SPL meters. Additionally, three different smart phone generations were measured in comparison to an integrating SPL meter. Two different movies, an action movie and a children's movie, were measured six times each in 10 different venues (n = 117). To maximize representativeness, movies were selected from large-release productions with probable high attendance. Movie theaters were selected in the San Francisco, CA, area based on whether they screened both chosen movies, and to represent the largest variety of theater proprietors. Measurements were analyzed for differences between theaters, location within the theater, and movie, as well as presentation time and day as indirect indicators of film attendance. The smart phone measurements demonstrated high accuracy and reliability. Overall, sound levels in movie theaters do not exceed safe exposure limits by occupational standards. Sound levels vary significantly across theaters, and were statistically significantly higher, with higher exposures, in the action movie compared to the children's movie. Sound levels decrease with distance from the screen. However, no influence of time of day or day of the week, as indirect indicators of film attendance, could be found. Calibrated smart phones with an appropriate software application, as used in this study, can serve as validated SPL meters. Because of their wide availability, smart phones in combination with the software application can provide a high quantity of recreational sound exposure measurements, which can facilitate the identification of potential noise hazards. Sound levels in movie theaters decrease with distance to the screen, but do not exceed safe occupational noise exposure limits. Additionally, there are significant differences in sound levels across movie theaters and movies, but not across presentation times.
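The occupational comparison reduces to computable quantities: an equivalent continuous level (Leq) and a noise dose against a criterion such as NIOSH's 85 dBA over 8 h with a 3 dB exchange rate. A minimal sketch, assuming one A-weighted SPL sample per second:

```python
import numpy as np

def leq(spl_dba):
    """Equivalent continuous level of 1-second A-weighted samples."""
    spl = np.asarray(spl_dba, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (spl / 10.0)))

def niosh_dose(spl_dba):
    """% of NIOSH daily dose (85 dBA for 8 h, 3 dB exchange rate)."""
    spl = np.asarray(spl_dba, dtype=float)
    hours_per_sample = 1.0 / 3600.0
    allowed_hours = 8.0 / (2.0 ** ((spl - 85.0) / 3.0))
    return 100.0 * np.sum(hours_per_sample / allowed_hours)

# Illustrative movie: 100 min at 78 dBA plus 10 min of 95 dBA action scenes.
movie = np.concatenate([np.full(6000, 78.0), np.full(600, 95.0)])
print(f"Leq = {leq(movie):.1f} dBA, NIOSH dose = {niosh_dose(movie):.1f}%")
```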
PyNCS: a microkernel for high-level definition and configuration of neuromorphic electronic systems
Stefanini, Fabio; Neftci, Emre O.; Sheik, Sadique; Indiveri, Giacomo
2014-01-01
Neuromorphic hardware offers an electronic substrate for the realization of asynchronous event-based sensory-motor systems and large-scale spiking neural network architectures. In order to characterize these systems, configure them, and carry out modeling experiments, it is often necessary to interface them to workstations. The software used for this purpose typically consists of a large monolithic block of code which is highly specific to the hardware setup used. While this approach can lead to highly integrated hardware/software systems, it hampers the development of modular and reconfigurable infrastructures thus preventing a rapid evolution of such systems. To alleviate this problem, we propose PyNCS, an open-source front-end for the definition of neural network models that is interfaced to the hardware through a set of Python Application Programming Interfaces (APIs). The design of PyNCS promotes modularity, portability and expandability and separates implementation from hardware description. The high-level front-end that comes with PyNCS includes tools to define neural network models as well as to create, monitor and analyze spiking data. Here we report the design philosophy behind the PyNCS framework and describe its implementation. We demonstrate its functionality with two representative case studies, one using an event-based neuromorphic vision sensor, and one using a set of multi-neuron devices for carrying out a cognitive decision-making task involving state-dependent computation. PyNCS, already applicable to a wide range of existing spike-based neuromorphic setups, will accelerate the development of hybrid software/hardware neuromorphic systems, thanks to its code flexibility. The code is open-source and available online at https://github.com/inincs/pyNCS. PMID:25232314
Sankaranarayanan, Ganesh; Halic, Tansel; Arikatla, Venkata Sreekanth; Lu, Zhonghua; De, Suvranu
2010-01-01
Purpose: Surgical simulations require haptic interactions and collaboration in a shared virtual environment. A software framework for decoupled surgical simulation based on a multi-controller and multi-viewer model-view-controller (MVC) pattern was developed and tested. Methods: A software framework for multimodal virtual environments was designed, supporting both visual interactions and haptic feedback while providing developers with an integration tool for heterogeneous architectures maintaining high performance, simplicity of implementation, and straightforward extension. The framework uses decoupled simulation with updates of over 1,000 Hz for haptics and accommodates networked simulation with delays of over 1,000 ms without performance penalty. Results: The simulation software framework was implemented and was used to support the design of virtual reality-based surgery simulation systems. The framework supports the high level of complexity of such applications and the fast response required for interaction with haptics. The efficacy of the framework was tested by implementation of a minimally invasive surgery simulator. Conclusion: A decoupled simulation approach can be implemented as a framework to handle simultaneous processes of the system at the various frame rates each process requires. The framework was successfully used to develop collaborative virtual environments (VEs) involving geographically distributed users connected through a network, with the results comparable to VEs for local users. PMID:20714933
Maciel, Anderson; Sankaranarayanan, Ganesh; Halic, Tansel; Arikatla, Venkata Sreekanth; Lu, Zhonghua; De, Suvranu
2011-07-01
Surgical simulations require haptic interactions and collaboration in a shared virtual environment. A software framework for decoupled surgical simulation based on a multi-controller and multi-viewer model-view-controller (MVC) pattern was developed and tested. A software framework for multimodal virtual environments was designed, supporting both visual interactions and haptic feedback while providing developers with an integration tool for heterogeneous architectures maintaining high performance, simplicity of implementation, and straightforward extension. The framework uses decoupled simulation with updates of over 1,000 Hz for haptics and accommodates networked simulation with delays of over 1,000 ms without performance penalty. The simulation software framework was implemented and was used to support the design of virtual reality-based surgery simulation systems. The framework supports the high level of complexity of such applications and the fast response required for interaction with haptics. The efficacy of the framework was tested by implementation of a minimally invasive surgery simulator. A decoupled simulation approach can be implemented as a framework to handle simultaneous processes of the system at the various frame rates each process requires. The framework was successfully used to develop collaborative virtual environments (VEs) involving geographically distributed users connected through a network, with the results comparable to VEs for local users.
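The decoupled, multi-rate pattern, a haptics loop served at roughly 1 kHz while a slower loop updates the simulation, can be sketched with two threads sharing the latest state; this shows the pattern only, not the framework's internals.

```python
import threading
import time

state = {"force": 0.0}
lock = threading.Lock()
stop = threading.Event()

def haptics_loop():  # ~1 kHz: must never block on the simulation
    while not stop.is_set():
        with lock:
            f = state["force"]  # read the most recent simulation result
        # ... send f to the haptic device here ...
        time.sleep(0.001)

def simulation_loop():  # ~30 Hz: heavier physics update
    while not stop.is_set():
        with lock:
            state["force"] += 0.1  # stand-in for a physics solve
        time.sleep(1.0 / 30.0)

threads = [threading.Thread(target=haptics_loop),
           threading.Thread(target=simulation_loop)]
for t in threads:
    t.start()
time.sleep(0.5)
stop.set()
for t in threads:
    t.join()
```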
A Study of Emotions in Requirements Engineering
NASA Astrophysics Data System (ADS)
Colomo-Palacios, Ricardo; Hernández-López, Adrián; García-Crespo, Ángel; Soto-Acosta, Pedro
Requirements engineering (RE) is a crucial activity in software development projects. This phase of the software development cycle is knowledge intensive, and thus human capital intensive. From the human point of view, emotions play an important role in behavior and can even act as behavioral motivators. Thus, if we consider that RE represents a set of knowledge-intensive tasks, which include acceptance and negotiation activities, then the emotional factor represents a key element in these issues. However, the emotional factor in RE has not received the attention it deserves. This paper aims to integrate stakeholders' emotions into the requirements process, proposing to catalogue them like any other factor in the process, such as clarity or stability. Results show that high arousal and low pleasure levels are predictors of requirements with high versioning.
The software development process at the Chandra X-ray Center
NASA Astrophysics Data System (ADS)
Evans, Janet D.; Evans, Ian N.; Fabbiano, Giuseppina
2008-08-01
Software development for the Chandra X-ray Center Data System began in the mid-1990s, and the waterfall model of development was mandated by our documents. Although we initially tried this approach, we found that a process with elements of the spiral model worked better in our science-based environment. High-level science requirements are usually established by scientists and provided to the software development group. We follow with review and refinement of those requirements prior to the design phase. Design reviews are conducted for substantial projects within the development team, and include scientists whenever appropriate. Development follows agreed-upon schedules that include several internal releases of the task before completion. Feedback from science testing early in the process helps to identify and resolve misunderstandings present in the detailed requirements, and allows review of intangible requirements. The development process includes specific testing of requirements, developer and user documentation, and support after deployment to operations or to users. We discuss the process we follow at the Chandra X-ray Center (CXC) to develop software and support operations. We review the role of the science and development staff from conception to release of software, and some lessons learned from managing CXC software development for over a decade.
LArSoft: toolkit for simulation, reconstruction and analysis of liquid argon TPC neutrino detectors
NASA Astrophysics Data System (ADS)
Snider, E. L.; Petrillo, G.
2017-10-01
LArSoft is a set of detector-independent software tools for the simulation, reconstruction and analysis of data from liquid argon (LAr) neutrino experiments. The common features of LAr time projection chambers (TPCs) enable sharing of algorithm code across detectors of very different size and configuration. LArSoft is currently used in production simulation and reconstruction by the ArgoNeuT, DUNE, LArIAT, MicroBooNE, and SBND experiments. The software suite offers a wide selection of algorithms and utilities, including those for associated photo-detectors and the handling of auxiliary detectors outside the TPCs. Available algorithms cover the full range of simulation and reconstruction, from raw waveforms to high-level reconstructed objects, event topologies and classification. The common code within LArSoft is contributed by adopting experiments, which also provide detector-specific geometry descriptions and code for the treatment of electronic signals. LArSoft is also a collaboration of experiments, Fermilab and associated software projects which cooperate in setting requirements, priorities, and schedules. In this talk, we outline the general architecture of the software and the interaction with external libraries and detector-specific code. We also describe the dynamics of LArSoft software development between the contributing experiments, the projects supporting the software infrastructure LArSoft relies on, and the core LArSoft support project.
The software-cycle model for re-engineering and reuse
NASA Technical Reports Server (NTRS)
Bailey, John W.; Basili, Victor R.
1992-01-01
This paper reports on the progress of a study which will contribute to our ability to perform high-level, component-based programming by describing means to obtain useful components, methods for the configuration and integration of those components, and an underlying economic model of the costs and benefits associated with this approach to reuse. One goal of the study is to develop and demonstrate methods to recover reusable components from domain-specific software through a combination of tools (to perform the identification, extraction, and re-engineering of components) and domain experts (to direct the application of those tools). A second goal of the study is to enable the reuse of those components by identifying techniques for configuring and recombining the re-engineered software. This component-recovery or software-cycle model addresses not only the selection and re-engineering of components, but also their recombination into new programs. Once a model of reuse activities has been developed, the quantification of the costs and benefits of various reuse options will enable the development of an adaptable economic model of reuse, which is the principal goal of the overall study. This paper reports on the conception of the software-cycle model and on several supporting techniques of software recovery, measurement, and reuse which will lead to the development of the desired economic model.
Dalmaijer, Edwin S; Mathôt, Sebastiaan; Van der Stigchel, Stefan
2014-12-01
The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility. PyGaze can be used for visual and auditory stimulus presentation; for response collection via keyboard, mouse, joystick, and other external hardware; and for the online detection of eye movements using a custom algorithm. A wide range of eyetrackers of different brands (EyeLink, SMI, and Tobii systems) are supported. The novelty of PyGaze lies in providing an easy-to-use layer on top of the many different software libraries that are required for implementing eyetracking experiments. Essentially, PyGaze is a software bridge for eyetracking research.
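The following minimal script shows the flavor of a PyGaze experiment, based on the toolbox's published usage examples. The exact behavior depends on experiment settings (display back-end, tracker brand, screen geometry) normally supplied through PyGaze's constants module, so treat this as a sketch rather than a verified recipe.

```python
# Sketch of a minimal PyGaze experiment: calibrate, show a fixation
# cross, record gaze for two seconds, and report one gaze sample.
from pygaze.display import Display
from pygaze.screen import Screen
from pygaze.eyetracker import EyeTracker
import pygaze.libtime as timer

disp = Display()                      # display settings come from constants
scr = Screen()
tracker = EyeTracker(disp)            # brand selected via configuration

tracker.calibrate()
scr.draw_fixation(fixtype='cross')
disp.fill(scr)
tracker.start_recording()
disp.show()
timer.pause(2000)                     # present the display for 2000 ms
print("gaze sample:", tracker.sample())
tracker.stop_recording()
tracker.close()
disp.close()
```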
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mcdonald, Kathleen Herrera
2016-02-29
KIVA is a family of Fortran-based computational fluid dynamics software developed by LANL. The software predicts complex fuel and air flows as well as ignition, combustion, and pollutant-formation processes in engines. The KIVA models have been used to understand combustion chemistry processes, such as auto-ignition of fuels, and to optimize diesel engines for high efficiency and low emissions. Fuel economy is heavily dependent upon engine efficiency, which in turn depends to a large degree on how fuel is burned within the cylinders of the engine. Higher in-cylinder pressures and temperatures lead to increased fuel economy, but they also create more difficulty in controlling the combustion process. Poorly controlled and incomplete combustion can cause higher levels of emissions and lower engine efficiencies.
Sun, Yongmei; Li, Xing; Wu, Di; Pan, Qi; Ji, Yuefeng; Ren, Hong; Ding, Keyue
2016-01-01
RNA editing is one of the post- or co-transcriptional processes that can lead to amino acid substitutions in protein sequences, alternative pre-mRNA splicing, and changes in gene expression levels. Although several methods have been suggested to identify RNA editing sites, challenges remain in distinguishing true RNA editing sites from genomic counterparts and technical artifacts. In addition, a software framework to identify and visualize potential RNA editing sites has been lacking. Here, we present software, 'RED' (RNA Editing sites Detector), for the identification of RNA editing sites by integrating multiple rule-based and statistical filters. Potential RNA editing sites can be visualized at the genome and site levels through a graphical user interface (GUI). To improve performance, we used the MySQL database management system (DBMS) for high-throughput data storage and querying. We demonstrated the validity and utility of RED by identifying experimentally validated C→U RNA editing sites, in comparison with REDItools, a command-line tool for high-throughput investigation of RNA editing. In an analysis of a sample dataset with 28 experimentally validated C→U RNA editing sites, RED had a sensitivity of 0.64 and a specificity of 0.5; REDItools had better sensitivity (0.75) but similar specificity (0.5). RED is easy-to-use, platform-independent, Java-based software, and can be applied to RNA-seq data with or without DNA sequencing data. The package is freely available under the GPLv3 license at http://github.com/REDetector/RED or https://sourceforge.net/projects/redetector.
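For readers unfamiliar with the reported statistics, the sketch below shows the arithmetic. The false/true negative counts are invented purely to reproduce numbers of the same magnitude: recovering 18 of the 28 validated sites gives the reported sensitivity of about 0.64, while the negative counts are made up solely to illustrate a specificity of 0.5.

```python
# Worked example of the sensitivity/specificity arithmetic behind the
# RED vs. REDItools comparison. Only the 28 validated positives are
# taken from the abstract; all other counts are hypothetical.
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(tp=18, fn=10, tn=5, fp=5)
print(round(sens, 2), round(spec, 2))   # 0.64 0.5
```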
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phisherman is an online software tool that was created to help experimenters study phishing; it can potentially be re-purposed to run other human studies. Phisherman enables studies to be run online, so that users can participate from their own computers, which means experimenters can get data from subjects in their natural settings. Alternatively, an experimenter can run the app online in a lab-based setting, if desired. The software enables the online deployment of a study that comprises three main parts: (1) a consent page, (2) a survey, and (3) an identification task, with instruction/transition screens between each part, allowing the experimenter to provide the user with instructions and messages. Upon logging in, the subject is taken to the consent page, where they agree or decline to take part in the study. If the subject agrees to participate, the software randomly chooses between presenting the survey first (and the identification task second) or the identification task first (and the survey second); this balances possible order effects in the data. Procedurally, in the identification task, the software shows the stimuli to the subject and asks whether they think each one is a phish (yes/no) and how confident they are in their answer. The subject is given 5 levels of certainty to select from, labeled "low" (1), "medium" (3), and "high" (5), with the option of picking a level between low and medium (2) or between medium and high (4). After selecting a confidence level, the "Next" button activates, allowing the user to move to the next email. The software saves a given subject's progress in the identification task, so that they may log in and out of the site. The consent page is a space for the experimenter to provide the subject with human studies board/internal review board information, and for the subject to formally consent to participate in the study. The survey is a space for the experimenter to provide questions and spaces for users to input answers (allowing both multiple-choice and free-answer options). Phisherman includes administrative pages for managing the stimuli and users. This includes a tool for the experimenter to create, preview, edit, delete (if desired), and manage stimuli (emails). The stimuli may include pictures (uploaded to an appropriate folder) and links, for realism. The software includes a safety feature that prevents the user from going to any link location or opening a file/image: instead of re-directing the subject's browser, the software provides a pop-up box with the URL of where the user would have gone. Another administrative page may be used to create fake subject accounts for testing the software prior to deployment, as well as to delete subject accounts when necessary. Data from the experiment can be downloaded from another administrative page.
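Two of the design details described above, counterbalancing the task order and the 5-point confidence scale, are easy to sketch. The snippet below is an illustrative reconstruction, not the Phisherman code; all names are invented.

```python
# Sketch of order counterbalancing and the 5-level confidence scale
# described in the abstract; levels 2 and 4 are unlabeled intermediates.
import random

CONFIDENCE_SCALE = {1: "low", 2: "low-medium", 3: "medium",
                    4: "medium-high", 5: "high"}

def assign_task_order():
    parts = ["survey", "identification"]
    random.shuffle(parts)                  # balances possible order effects
    return parts

def record_judgement(judged_phish: bool, confidence: int):
    assert confidence in CONFIDENCE_SCALE
    return {"judged_phish": judged_phish, "confidence": confidence}

print(assign_task_order())
print(record_judgement(True, 4))
```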
Storage system software solutions for high-end user needs
NASA Technical Reports Server (NTRS)
Hogan, Carole B.
1992-01-01
Today's high-end storage user is one who requires rapid access to a reliable terabyte-capacity storage system running in a distributed environment. This paper discusses conventional storage system software and concludes that this software, designed for other purposes, cannot meet high-end storage requirements. The paper also reviews the philosophy and design of evolving storage system software. It concludes that this new software, designed with high-end requirements in mind, has the potential to solve not only the storage needs of today but also those of the foreseeable future.
Digital readout for image converter cameras
NASA Astrophysics Data System (ADS)
Honour, Joseph
1991-04-01
There is an increasing need for fast and reliable analysis of recorded sequences from image converter cameras so that experimental information can be readily evaluated without recourse to more time-consuming photographic procedures. A digital readout system has been developed using a randomly triggerable, high-resolution CCD camera whose output is suitable for use with an IBM AT-compatible PC. Within half a second of receipt of the trigger pulse, the frame reformatter displays the image, and transfer to storage media can be readily achieved via the PC and dedicated software. Two software programs offer different levels of image manipulation, including enhancement routines and parameter calculations with accuracy down to the pixel level. Hard-copy prints can be acquired using a specially adapted Polaroid printer; outputs for laser and video printers extend the overall versatility of the system.
Situation Awareness and Levels of Automation
NASA Technical Reports Server (NTRS)
Kaber, David B.
1999-01-01
During the first year of this project, a taxonomy of theoretical levels of automation (LOAs) was applied to advanced commercial aircraft by categorizing actual modes of McDonnell Douglas MD-11 autoflight system operation in terms of the taxonomy. In addition, high LOAs included in the taxonomy (e.g., supervisory control) were modeled in the context of MD-11 autoflight systems through development of a virtual flight simulator. The flight simulator was an integration of a re-configurable simulator developed by the Georgia Institute of Technology and new software prototypes of autoflight system modules found in the MD-11 cockpit. In addition to this work, a version of the Situation Awareness Global Assessment Technique (SAGAT) was developed for application to commercial piloting tasks. A software package was developed to deliver the SAGAT and was integrated with the virtual flight simulator.
Comparison of in-hospital versus 30-day mortality assessments for selected medical conditions.
Borzecki, Ann M; Christiansen, Cindy L; Chew, Priscilla; Loveland, Susan; Rosen, Amy K
2010-12-01
In-hospital mortality measures such as the Agency for Healthcare Research and Quality (AHRQ) Inpatient Quality Indicators (IQIs) are easily derived using hospital discharge abstracts and publicly available software. However, hospital assessments based on a 30-day postadmission interval might be more accurate given potential differences in facility discharge practices. Our objective was to compare in-hospital and 30-day mortality rates for 6 medical conditions using the AHRQ IQI software. We used IQI software (v3.1) and 2004-2007 Veterans Health Administration (VA) discharge and Vital Status files to derive 4-year facility-level in-hospital and 30-day observed mortality rates and observed/expected ratios (O/Es) for admissions with a principal diagnosis of acute myocardial infarction, congestive heart failure, stroke, gastrointestinal hemorrhage, hip fracture, and pneumonia. We standardized software-calculated O/Es to the VA population and compared O/Es and outlier status across sites using correlation, observed agreement, and kappas. Across 119 facilities, in-hospital versus 30-day mortality O/E correlations were generally high (median: r = 0.78; range: 0.31-0.86). Examining outlier status, observed agreement was high (median: 84.7%; range: 80.7%-89.1%). Kappas showed at least moderate agreement (k > 0.40) for all indicators except stroke and hip fracture (k ≤ 0.22). Across indicators, few sites changed from high outlier to nonoutlier or low outlier status, or vice versa (median: 10; range: 7-13). The AHRQ IQI software can be easily adapted to generate 30-day mortality rates. Although 30-day mortality has better face validity as a hospital performance measure than in-hospital mortality, site assessments were similar regardless of the definition used. Thus, the measure selected for internal benchmarking should depend primarily on the healthcare system's data linkage capabilities.
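As a hedged illustration of the agreement statistics used in this comparison, the following snippet computes observed agreement and Cohen's kappa (via scikit-learn) on synthetic outlier classifications for 119 facilities. The data are fabricated solely to show the computation, not to reproduce the study's results.

```python
# Synthetic illustration: agreement between outlier classifications
# derived from in-hospital vs. 30-day mortality across 119 facilities.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
inhosp = rng.choice(["low", "non", "high"], size=119, p=[0.1, 0.8, 0.1])
day30 = inhosp.copy()
flip = rng.random(119) < 0.15               # ~15% of sites change status
day30[flip] = rng.choice(["low", "non", "high"], size=flip.sum())

agreement = float(np.mean(inhosp == day30)) # observed agreement
kappa = cohen_kappa_score(inhosp, day30)    # chance-corrected agreement
print(f"observed agreement={agreement:.3f} kappa={kappa:.3f}")
```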
Manage your blood sugar (image)
... bedtime. Your blood sugar meter may have computer software to help you track your blood sugar level. ...
2008-05-01
... communicate with other weapon models in a mission-level simulation; (3) introduces the four configuration levels of the M&S framework; and (4) presents a cost-effective M&S laboratory plan ...
Developing Systems Engineering Skills Through NASA Summer Intern Project
NASA Technical Reports Server (NTRS)
Bhasin, Kul; Barritt, Brian; Golden, Bert; Knoblock, Eric; Matthews, Seth; Warner, Joe
2010-01-01
During the Formulation phases of the NASA Project Life Cycle, communication systems engineers are responsible for designing space communication links and analyzing their performance to ensure that the proposed communication architecture is capable of satisfying high-level mission requirements. Senior engineers with extensive experience in communications systems perform these activities. However, the increasing complexity of space systems, coupled with the current shortage of communications systems engineers, has led to an urgent need for expedited training of new systems engineers. A pilot program, in which college-bound high school and undergraduate students studying various engineering disciplines are immersed in NASA's systems engineering practices, was conceived out of this need. This rapid, summer-long training approach is feasible because of the availability of advanced software and technology tools and the students' inherent ability to operate such tools. During this pilot internship program, a team of college-level and recently hired engineers configured and utilized various software applications in the design and analysis of communication links for a plausible lunar sortie mission. The approach taken was to first design the direct-to-Earth communication links for the lunar mission elements, then to design the links between lunar surface and lunar orbital elements. Based on the data obtained from these software applications, an integrated communication system design was realized and the students gained valuable systems engineering knowledge. This paper describes this approach to rapidly training college-bound high school and undergraduate engineering students from various disciplines in NASA's systems engineering practices and tools. A summary of the potential use of NASA's emerging systems engineering internship program in broader applications is also described.
Experience with Data Science as an Intern with the Jet Propulsion Laboratory
NASA Astrophysics Data System (ADS)
Whittell, J.; Mattmann, C. A.; Whitehall, K. D.; Ramirez, P.; Goodale, C. E.; Boustani, M.; Hart, A. F.; Kim, J.; Waliser, D. E.; Joyce, M. J.
2013-12-01
The Regional Climate Model Evaluation System (RCMES, http://rcmes.jpl.nasa.gov) at NASA's Jet Propulsion Laboratory seeks to improve regional climate model output by comparing past model predictions with Earth-orbiting satellite data (Mattmann et al. 2013). RCMES ingests satellite and RCM data and processes these data into a common format; as needed, the software queries the RCMES database for these datasets, on which it runs a series of statistical metrics, including model-satellite comparisons. The development of the RCMES software relies on collaboration between climatologists and computer scientists, as evinced by RCMES's longstanding work with CORDEX (Kim et al. 2012). Over a total of 17 weeks in 2011, 2012, and 2013, I worked as an intern at NASA's Jet Propulsion Laboratory in a supportive capacity for RCMES. A high school student, I had no formal background in either Earth science or computer technology, but was immersed in both fields. In 2011, I researched three Earth-science data management projects, producing a high-level explanation of these endeavors. The following year, I studied Python, contributing a command-line user interface to the RCMES project code. In 2013, I assisted with data acquisition, wrote a file-header-information plugin, and worked with the visualization tool GrADS. The experience demonstrated the importance of an interdisciplinary approach to data processing: to streamline data ingestion and processing, scientists must understand, at least at a high level, any programs they might utilize, while to best serve the needs of Earth scientists, software engineers must understand the science behind the data they handle.
Simultaneous operation and control of about 100 telescopes for the Cherenkov Telescope Array
NASA Astrophysics Data System (ADS)
Wegner, P.; Colomé, J.; Hoffmann, D.; Houles, J.; Köppel, H.; Lamanna, G.; Le Flour, T.; Lopatin, A.; Lyard, E.; Melkumyan, D.; Oya, I.; Panazol, L.-I.; Punch, M.; Schlenstedt, S.; Schmidt, T.; Stegmann, C.; Schwanke, U.; Walter, R.; CTA Consortium
2012-12-01
The Cherenkov Telescope Array (CTA) project is an initiative to build the next-generation ground-based very high energy (VHE) gamma-ray instrument. Compared to current imaging atmospheric Cherenkov telescope experiments, CTA will extend the energy range and improve the angular resolution while increasing the sensitivity by up to a factor of 10. With about 100 separate telescopes it will be operated as an observatory open to a wide astrophysics and particle physics community, providing a deep insight into the non-thermal high-energy universe. The CTA Array Control system (ACTL) is responsible for several essential control tasks supporting the evaluation and selection of proposals, as well as the preparation, scheduling, and finally the execution of observations with the array. One basic distributed software framework being considered for ACTL is the ALMA Common Software (ACS). The ACS framework follows a container-component model and contains a high-level abstraction layer for integrating different types of devices. To consolidate the connection of control hardware at a low level, OPC UA (Open Connectivity Unified Architecture) client functionality is integrated directly into ACS, allowing interaction with other OPC UA-capable hardware. The CTA data acquisition system comprises the data readout of all cameras and the transfer of the data to a camera server farm, using standard hardware and software technologies. CTA array control also covers concepts for a possible array trigger system and the corresponding clock distribution. The design of the CTA observation scheduler introduces new algorithmic techniques to achieve the required flexibility.
NASA Astrophysics Data System (ADS)
Ames, D.; Kadlec, J.; Horsburgh, J. S.; Maidment, D. R.
2009-12-01
The Consortium of Universities for the Advancement of Hydrologic Sciences (CUAHSI) Hydrologic Information System (HIS) project includes extensive development of data storage and delivery tools and standards, including WaterML (a language for sharing hydrologic data sets via web services) and HIS Server (a software tool set for delivering WaterML from a server). These and other CUAHSI HIS tools have been under development and deployment for several years and, together, present a relatively complete software “stack” to support the consistent storage and delivery of hydrologic and other environmental observation data. This presentation describes the development of a new HIS software tool called “HydroDesktop” and the development of an online open source software development community to update and maintain the software. HydroDesktop is a local (i.e., not server-based) client-side software tool that ultimately will run on multiple operating systems and will provide a highly usable level of access to HIS services. The software provides many key capabilities, including data query, map-based visualization, data download, local data maintenance, editing, graphing, data export to selected model-specific data formats, linkage with integrated modeling systems such as OpenMI, and ultimately upload to HIS servers from the local desktop software. As the software is presently in the early stages of development, this presentation focuses on the design approach and paradigm, and is viewed as an opportunity to encourage participation in the open development community. Indeed, recognizing the value of community-based code development as a means of ensuring end-user adoption, this project has adopted an “iterative” or “spiral” software development approach, which is described in this presentation.
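A minimal sketch of the kind of client-side data access HydroDesktop automates is shown below. The endpoint URL, query parameters, and XML tag names are placeholders (real CUAHSI HIS services are exposed through WaterOneFlow and use namespaced WaterML elements), so this is illustrative only.

```python
# Hedged sketch: pulling a WaterML-style time series over HTTP and
# reading the values with the standard library. The URL, parameter
# names, and the "value" tag below are hypothetical stand-ins.
import xml.etree.ElementTree as ET
import requests

SERVICE_URL = "http://example.org/waterml/GetValues"   # hypothetical endpoint
params = {"site": "EXAMPLE:SiteCode",
          "variable": "EXAMPLE:Streamflow",
          "startDate": "2009-01-01",
          "endDate": "2009-01-31"}

response = requests.get(SERVICE_URL, params=params, timeout=30)
root = ET.fromstring(response.content)
for value in root.iter("value"):        # real WaterML tags are namespaced
    print(value.get("dateTime"), value.text)
```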
The Use of Computer Software to Teach High Technology Skills to Vocational Students.
ERIC Educational Resources Information Center
Farmer, Edgar I.
A study examined the type of computer software that is best suited to teach high technology skills to vocational students. During the study, 50 manufacturers of computer software and hardware were sent questionnaires designed to gather data concerning their recommendations in regard to: software to teach high technology skills to vocational…
NASA Astrophysics Data System (ADS)
Babaali, Parisa; Gonzalez, Lidia
2015-07-01
Supporting student success in entry-level mathematics courses at the undergraduate level has been, and continues to be, a challenge. Recently we have seen an increased reliance on technological supports, including software to supplement more traditional in-class instruction. In this paper, we explore the effects on student performance of using a computer software program to supplement instruction in an entry-level undergraduate mathematics course, specifically, a pre-calculus course. Relying on data from multiple sections of the course over various semesters, we compare the performance of students in classes that utilized the software against that of students in classes that did not. Quantitative analysis of the data leads us to conclusions about the effectiveness of the software, as well as recommendations for future iterations of the course and others like it.
Point-Mass Aircraft Trajectory Prediction Using a Hierarchical, Highly-Adaptable Software Design
NASA Technical Reports Server (NTRS)
Karr, David A.; Vivona, Robert A.; Woods, Sharon E.; Wing, David J.
2017-01-01
A highly adaptable and extensible method for predicting four-dimensional trajectories of civil aircraft has been developed. This method, Behavior-Based Trajectory Prediction, is based on taxonomic concepts developed for the description and comparison of trajectory prediction software. A hierarchical approach to the "behavioral" layer of a point-mass model of aircraft flight, a clear separation between the "behavioral" and "mathematical" layers of the model, and an abstraction of the methods of integrating differential equations in the "mathematical" layer have been demonstrated to support aircraft models of different types (in particular, turbojet vs. turboprop aircraft) using performance models at different levels of detail and in different formats, and promise to be easily extensible to other aircraft types and sources of data. The resulting trajectories predict location, altitude, lateral and vertical speeds, and fuel consumption along the flight path of the subject aircraft accurately and quickly, accounting for local conditions of wind and outside air temperature. The Behavior-Based Trajectory Prediction concept was implemented in NASA's Traffic Aware Planner (TAP) flight-optimizing cockpit software application.
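The layering described above can be sketched in a few lines: a "mathematical" layer that hides the integration scheme behind a common interface, and a "behavioral" layer that supplies the point-mass dynamics for the current flight segment. The numbers below are invented stand-ins for performance-model lookups, not values from the paper.

```python
# Sketch of the behavioral/mathematical separation in a point-mass
# trajectory predictor. All rates and masses are illustrative only.
from dataclasses import dataclass

@dataclass
class State:
    x: float      # along-track distance, m
    h: float      # altitude, m
    v: float      # true airspeed, m/s
    fuel: float   # fuel mass, kg

def climb_derivative(s: State) -> State:
    """Behavioral layer: constant-speed climb segment of a point-mass model."""
    climb_rate = 8.0    # m/s, stand-in for a performance-model lookup
    fuel_flow = 0.9     # kg/s, likewise
    return State(x=s.v, h=climb_rate, v=0.0, fuel=-fuel_flow)

def euler_step(deriv, s: State, dt: float) -> State:
    """Mathematical layer: one integration scheme behind a common interface."""
    d = deriv(s)
    return State(s.x + d.x * dt, s.h + d.h * dt,
                 s.v + d.v * dt, s.fuel + d.fuel * dt)

s = State(x=0.0, h=3000.0, v=150.0, fuel=5000.0)
for _ in range(600):                   # 600 s of climb at dt = 1 s
    s = euler_step(climb_derivative, s, 1.0)
print(f"x={s.x/1000:.1f} km  h={s.h:.0f} m  fuel={s.fuel:.0f} kg")
```

Swapping `euler_step` for a higher-order integrator, or `climb_derivative` for a different flight segment or aircraft type, leaves the rest of the loop untouched; that substitutability is the point of the hierarchy the paper describes.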
SPCC- Software Elements for Security Partition Communication Controller
NASA Astrophysics Data System (ADS)
Herpel, H. J.; Willig, G.; Montano, G.; Tverdyshev, S.; Eckstein, K.; Schoen, M.
2016-08-01
Future satellite missions, whether for Earth observation, telecommunication or any other purpose, are likely to be exposed to various threats aiming at exploiting vulnerabilities of the involved systems and communications. Moreover, the growing complexity of systems coupled with more ambitious types of operational scenarios implies increased security vulnerabilities in the future. In this paper we describe an architecture and software elements to ensure a high level of security on board a spacecraft. First, the threats to the Security Partition Communication Controller (SPCC) are addressed, including the identification of specific vulnerabilities of the SPCC. Furthermore, appropriate security objectives and security requirements are identified to counter the identified threats. The security evaluation of the SPCC will be done in accordance with the Common Criteria (CC). The software elements for the SPCC have been implemented on flight-representative hardware consisting of two major elements: the I/O board and the SPCC board. The SPCC board provides the interfaces with the ground while the I/O board interfaces with typical spacecraft equipment buses. Both boards are physically interconnected by a high-speed SpaceWire (SpW) link.
Can your software engineer program your PLC?
NASA Astrophysics Data System (ADS)
Borrowman, Alastair J.; Taylor, Philip
2016-07-01
The use of Programmable Logic Controllers (PLCs) in the control of large physics experiments is ubiquitous [1, 2, 3]. The programming of these controllers is normally the domain of engineers with a background in electronics; this paper introduces PLC program development from the software engineer's perspective. PLC programs provide the link between control software running on PC-architecture systems and physical hardware controlled and monitored by digital and analog signals. The higher-level software running on the PC is typically responsible for accepting operator input and, from this, deciding when and how hardware connected to the PLC is controlled. The PLC accepts demands from the PC, considers the current state of its connected hardware and, if correct to do so (based upon interlocks or other constraints), adjusts its hardware output signals appropriately for the PC's demands. A published ICD (Interface Control Document) defines the PLC memory locations available to be written and read by the PC to control and monitor the hardware. Historically the method of programming PLCs has been ladder diagrams, which closely resemble circuit diagrams; however, PLC manufacturers nowadays also provide, and promote, the use of higher-level programming languages [4]. Based on techniques used in the development of high-level PC software to control PLCs for multiple telescopes, this paper examines the development of PLC programs to operate the hardware of a medical cyclotron beamline controlled from a PC using the Experimental Physics and Industrial Control System (EPICS), which is also widely used in telescope control [5, 6, 7]. The PLC used is the new-generation Siemens S7-1200, programmed using Siemens' Pascal-based Structured Control Language (SCL), their implementation of Structured Text (ST). The approach described is that from a software engineer's perspective, utilising the Siemens Totally Integrated Automation (TIA) Portal integrated development environment (IDE) to create modular PLC programs based upon reusable functions capable of being unit tested without the PLC connected to hardware. Emphasis has been placed on designing an interface between EPICS and SCL that enforces correct operation of hardware through stringent separation of PC-accessible PLC memory and hardware I/O addresses used only by the PLC. The paper also introduces the method used to automate the creation, from the same source document, of the PLC memory structure (tag) definitions (defining memory used to access hardware I/O and that accessed by the PC) and of the PC program data structures (EPICS database records) used to access the permitted PLC addresses. From direct experience, this paper demonstrates the advantages of PLC program development being shared between electronic and software engineers, enabling use of the most appropriate processes from both the perspective of the hardware and that of the higher-level software used to control it.
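From the PC side, the pattern described above reduces to reading and writing a small set of published addresses. The sketch below uses pyepics, a real EPICS client library, with hypothetical process variable names; in the described design the PC writes only to designated demand records, while the PLC alone drives the hardware I/O after checking its interlocks.

```python
# Hedged sketch of the PC side of a PC/PLC interface mediated by EPICS.
# The PV names are hypothetical; only the put/get pattern is the point.
from epics import PV

demand = PV("BL:MAG1:CURRENT_SP")    # setpoint the PLC is allowed to accept
status = PV("BL:MAG1:CURRENT_RBV")   # readback maintained by the PLC

demand.put(12.5)                     # request 12.5 A; the PLC applies its
                                     # interlocks before acting on the demand
print("readback:", status.get())
```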
Policy-Based Management Natural Language Parser
NASA Technical Reports Server (NTRS)
James, Mark
2009-01-01
The Policy-Based Management Natural Language Parser (PBEM) is a rules-based approach to enterprise management that can be used to automate certain management tasks. This parser simplifies the management of a given endeavor by establishing policies to deal with situations that are likely to occur. Policies are operating rules that can be referred to as a means of maintaining order, security, consistency, or other ways of successfully furthering a goal or mission. PBEM provides a way of managing the configuration of network elements, applications, and processes via a set of high-level rules or business policies rather than by managing individual elements, thus switching control to a higher level. This software allows unique management rules (or commands) to be specified and applied to a cross-section of the Global Information Grid (GIG). The software embodies a parser that is capable of recognizing and understanding conversational English. Because all possible dialect variants cannot be anticipated, a unique capability was developed that parses based on conversational intent rather than on the exact way the words are used. This software can increase productivity by enabling a user to converse with the system in conversational English to define network policies. PBEM can be used in both manned and unmanned science-gathering programs. Because policy statements can be domain-independent, this software can be applied equally to a wide variety of applications.
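A toy illustration of matching on intent rather than exact wording is given below. It is in no way the PBEM parser, merely a reminder of the difference between keyword-for-keyword matching and intent classification; the intents and cue words are invented.

```python
# Toy intent matcher: several phrasings map to one policy action,
# rather than requiring an exact command string.
INTENTS = {
    "restrict_bandwidth": {"throttle", "limit", "cap", "restrict"},
    "block_traffic": {"block", "deny", "drop"},
}

def classify(utterance: str) -> str:
    words = set(utterance.lower().split())
    for intent, cues in INTENTS.items():
        if words & cues:
            return intent
    return "unknown"

print(classify("Please cap video traffic at peak hours"))  # restrict_bandwidth
print(classify("Deny all inbound telnet"))                  # block_traffic
```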
DyNAMiC Workbench: an integrated development environment for dynamic DNA nanotechnology
Grun, Casey; Werfel, Justin; Zhang, David Yu; Yin, Peng
2015-01-01
Dynamic DNA nanotechnology provides a promising avenue for implementing sophisticated assembly processes, mechanical behaviours, sensing and computation at the nanoscale. However, design of these systems is complex and error-prone, because the need to control the kinetic pathway of a system greatly increases the number of design constraints and possible failure modes for the system. Previous tools have automated some parts of the design workflow, but an integrated solution is lacking. Here, we present software implementing a three ‘tier’ design process: a high-level visual programming language is used to describe systems, a molecular compiler builds a DNA implementation and nucleotide sequences are generated and optimized. Additionally, our software includes tools for analysing and ‘debugging’ the designs in silico, and for importing/exporting designs to other commonly used software systems. The software we present is built on many existing pieces of software, but is integrated into a single package—accessible using a Web-based interface at http://molecular-systems.net/workbench. We hope that the deep integration between tools and the flexibility of this design process will lead to better experimental results, fewer experimental design iterations and the development of more complex DNA nanosystems. PMID:26423437
Nickerson, David; Atalag, Koray; de Bono, Bernard; Geiger, Jörg; Goble, Carole; Hollmann, Susanne; Lonien, Joachim; Müller, Wolfgang; Regierer, Babette; Stanford, Natalie J; Golebiewski, Martin; Hunter, Peter
2016-04-06
Reconstructing and understanding the Human Physiome virtually is a complex mathematical problem, and a highly demanding computational challenge. Mathematical models spanning from the molecular level through to whole populations of individuals must be integrated, then personalized. This requires interoperability with multiple disparate and geographically separated data sources, and myriad computational software tools. Extracting and producing knowledge from such sources, even when the databases and software are readily available, is a challenging task. Despite the difficulties, researchers must frequently perform these tasks so that available knowledge can be continually integrated into the common framework required to realize the Human Physiome. Software and infrastructures that support the communities that generate these, together with their underlying standards to format, describe and interlink the corresponding data and computer models, are pivotal to the Human Physiome being realized. They provide the foundations for integrating, exchanging and re-using data and models efficiently, and correctly, while also supporting the dissemination of growing knowledge in these forms. In this paper, we explore the standards, software tooling, repositories and infrastructures that support this work, and detail what makes them vital to realizing the Human Physiome.
NASA Astrophysics Data System (ADS)
Mori, Kensaku; Suenaga, Yasuhito; Toriwaki, Jun-ichiro
2003-05-01
This paper describes a software-based fast volume rendering (VolR) method on a PC platform that uses multimedia instructions, such as SIMD instructions, currently available in PC CPUs. The method achieves fast rendering through highly optimized software rather than an improved rendering algorithm. In volume rendering using ray casting, the system requires fast execution of the following processes: (a) interpolation of voxel or color values at sample points, (b) computation of normal vectors (gray-level gradient vectors), (c) calculation of shaded values obtained from dot products of normal vectors and light-source direction vectors, (d) memory access to a huge area, and (e) efficient ray skipping at translucent regions. The proposed software implements these fundamental volume rendering processes using special instruction sets for multimedia processing. It can generate virtual endoscopic images of a 3-D volume of 512x512x489 voxels by volume rendering with perspective projection, specular reflection, and on-the-fly normal vector computation on a conventional PC, without any special hardware, at thirteen frames per second.
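Process (a) above, interpolating voxel values at sample points along a cast ray, can be sketched with NumPy vectorization standing in for the SIMD instructions the paper exploits. The volume and sample points below are synthetic, and the function is a generic trilinear interpolator, not the paper's optimized code.

```python
# Trilinear interpolation of a volume at arbitrary fractional sample
# points, vectorized over all points at once (SIMD-style data parallelism).
import numpy as np

def trilinear(volume, pts):
    """Interpolate `volume` at fractional coordinates pts, shape (N, 3)."""
    i0 = np.floor(pts).astype(int)
    f = pts - i0                                  # fractional offsets
    i0 = np.clip(i0, 0, np.array(volume.shape) - 2)
    x, y, z = i0.T
    fx, fy, fz = f.T
    c = 0.0
    for dx in (0, 1):                             # blend the 8 corner voxels
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                c = c + w * volume[x + dx, y + dy, z + dz]
    return c

vol = np.random.rand(64, 64, 64).astype(np.float32)
samples = np.random.rand(1000, 3) * 62            # sample points along rays
print(trilinear(vol, samples).shape)              # (1000,)
```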
Level 1 Processing of MODIS Direct Broadcast Data From Terra
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Smith, Peter; Shotland, Larry; El-Ghazawi, Tarek; Zhu, Ming
2000-01-01
In February 2000, an effort was begun to adapt the Moderate Resolution Imaging Spectroradiometer (MODIS) Level 1 production software to process direct broadcast data. Three Level 1 algorithms have been adapted and packaged for release: Level 1A converts raw (Level 0) data into Hierarchical Data Format (HDF), unpacking packets into scans; Geolocation computes geographic information for the data points in the Level 1A product; and Level 1B computes geolocated, calibrated radiances from the Level 1A and Geolocation products. One useful aspect of adapting the production software is the ability to incorporate enhancements contributed by the MODIS Science Team; we have therefore tried to limit changes to the software. However, in order to process the data immediately on receipt, we have taken advantage of a branch in the geolocation software that reads orbit and attitude information from the packets themselves, rather than from the external ancillary files used in standard production. We have also verified that the algorithms can be run with smaller time increments (2.5 minutes) than the five-minute increments used in production. To make the code easier to build and run, we have simplified directories and build scripts, and dependencies on a commercial numerics library have been replaced by public domain software. A version of the adapted code has been released for Silicon Graphics machines running IRIX. Perhaps owing to its origin in production, the software is rather CPU-intensive. Consequently, a port to Linux is underway, followed by a version to run on PC clusters, with an eventual goal of running in near-real time (i.e., processing a ten-minute pass in ten minutes).
The (mis)use of subjective process measures in software engineering
NASA Technical Reports Server (NTRS)
Valett, Jon D.; Condon, Steven E.
1993-01-01
A variety of measures are used in software engineering research to develop an understanding of the software process and product. These measures fall into three broad categories: quantitative, characteristics, and subjective. Quantitative measures are those to which a numerical value can be assigned, for example, effort or lines of code (LOC). Characteristics describe the software process or product; they might include the programming language or the type of application. While such factors do not provide a quantitative measurement of a process or product, they do help characterize them. Subjective measures (as defined in this study) are those that are based on the opinion or opinions of individuals; they are somewhat unique and difficult to quantify. Capturing subjective measure data typically involves development of some type of scale. For example, 'team experience' is one of the subjective measures that were collected and studied by the Software Engineering Laboratory (SEL). Certainly, team experience could have an impact on the software process or product; actually measuring a team's experience, however, is not a strictly mathematical exercise. Simply adding up each team member's years of experience appears inadequate; in fact, most researchers would agree that 'years' do not directly translate into 'experience.' Team experience must be defined subjectively, and then a scale must be developed, e.g., high versus low experience; high, medium, or low experience; or a different or more granular scale. Using this type of scale, a particular team's overall experience can be compared with that of other teams in the development environment. Defining, collecting, and scaling subjective measures is difficult. First, precise definitions of the measures must be established. Next, choices must be made about whose opinions will be solicited to constitute the data. Finally, care must be given to defining the right scale and level of granularity for measurement.
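The mechanical part of the scaling step, mapping a subjective quantity onto an ordinal scale, is trivial to express; the thresholds in the sketch below are invented and only illustrate the kind of granularity choice the passage discusses, not a validated definition of experience.

```python
# Illustrative ordinal scaling of a subjective measure; the cut points
# are made up and would need to be defined and justified per study.
def experience_level(avg_domain_years: float) -> str:
    if avg_domain_years < 2:
        return "low"
    if avg_domain_years < 5:
        return "medium"
    return "high"

print(experience_level(3.5))   # medium
```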
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw; Linczuk, Maciej
2016-09-01
The development of FPGA technology and the increasing complexity of applications in recent decades have forced compilers to move to higher abstraction levels. A compiler interprets an algorithmic description of a desired behavior written in a High-Level Language (HLL) and translates it to a Hardware Description Language (HDL). This paper presents an RPython-based high-level synthesis (HLS) compiler. The compiler takes the configuration parameters and maps an RPython program to VHDL; the VHDL code can then be used to program FPGA chips. Compared with other technologies, FPGAs have the potential to achieve far greater performance than software, as a result of omitting the fetch-decode-execute operations of general-purpose processors and introducing more parallel computation, which can be exploited by utilizing many resources at the same time. Creating parallel algorithms for FPGAs in pure HDL is difficult and time-consuming; implementation time can be greatly reduced with a high-level synthesis compiler. This article describes the design methodologies and tools, the implementation, and first results of the VHDL backend created for the RPython compiler.
NASA Technical Reports Server (NTRS)
Brown, David B.
1988-01-01
A history of the Query Utility Environment for Software Testing (QUEST)/Ada is presented. A fairly comprehensive literature review which is targeted toward issues of Ada testing is given. The definition of the system structure and the high level interfaces are then presented. The design of the three major components is described. The QUEST/Ada IORL System Specifications to this point in time are included in the Appendix. A paper is also included in the appendix which gives statistical evidence of the validity of the test case generation approach which is being integrated into QUEST/Ada.
Hyperswitch Communication Network Computer
NASA Technical Reports Server (NTRS)
Peterson, John C.; Chow, Edward T.; Priel, Moshe; Upchurch, Edwin T.
1993-01-01
The Hyperswitch Communication Network (HCN) computer is a prototype multiple-processor computer being developed. It incorporates an improved version of the hyperswitch communication network described in "Hyperswitch Network For Hypercube Computer" (NPO-16905) and is designed to support high-level software and expansion of itself. The HCN computer is a message-passing, multiple-instruction/multiple-data computer offering significant advantages over older single-processor and bus-based multiple-processor computers with respect to price/performance ratio, reliability, availability, and manufacturing. The design of the HCN operating-system software provides a flexible computing environment accommodating both parallel and distributed processing. It also achieves a balance among the following competing factors: performance in processing and communications, ease of use, and tolerance of (and recovery from) faults.
WIRM: An Open Source Toolkit for Building Biomedical Web Applications
Jakobovits, Rex M.; Rosse, Cornelius; Brinkley, James F.
2002-01-01
This article describes an innovative software toolkit that allows the creation of web applications that facilitate the acquisition, integration, and dissemination of multimedia biomedical data over the web, thereby reducing the cost of knowledge sharing. There is a lack of high-level web application development tools suitable for use by researchers, clinicians, and educators who are not skilled programmers. Our Web Interfacing Repository Manager (WIRM) is a software toolkit that reduces the complexity of building custom biomedical web applications. WIRM’s visual modeling tools enable domain experts to describe the structure of their knowledge, from which WIRM automatically generates full-featured, customizable content management systems. PMID:12386108
Automatic Certification of Kalman Filters for Reliable Code Generation
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd; Schumann, Johann; Richardson, Julian
2005-01-01
AUTOFILTER is a tool for automatically deriving Kalman filter code from high-level declarative specifications of state estimation problems. It can generate code with a range of algorithmic characteristics and for several target platforms. The tool has been designed with reliability of the generated code in mind and is able to automatically certify that the code it generates is free from various error classes. Since documentation is an important part of software assurance, AUTOFILTER can also automatically generate various human-readable documents, containing both design and safety related information. We discuss how these features address software assurance standards such as DO-178B.
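For readers who have not seen one, the sketch below is a minimal hand-written Kalman filter of the kind AUTOFILTER derives from declarative specifications: a one-dimensional position/velocity state estimated from noisy position measurements. All matrices and noise levels are illustrative, not taken from the tool.

```python
# Minimal linear Kalman filter: constant-velocity model, position-only
# measurements. The predict/update loop is the standard formulation.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # measure position only
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.5]])                   # measurement noise covariance

x = np.zeros(2)                         # state estimate [pos, vel]
P = np.eye(2)                           # estimate covariance

rng = np.random.default_rng(1)
for k in range(20):
    z = 1.0 * k + rng.normal(0, 0.7)    # truth: unit velocity, noisy sensor
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(f"pos={x[0]:.2f} vel={x[1]:.2f}")  # velocity should approach 1.0
```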
AIRS Maps from Space Processing Software
NASA Technical Reports Server (NTRS)
Thompson, Charles K.; Licata, Stephen J.
2012-01-01
This software package processes Atmospheric Infrared Sounder (AIRS) Level 2 swath standard product geophysical parameters, and generates global, colorized, annotated maps. It automatically generates daily and multi-day averaged colorized and annotated maps of various AIRS Level 2 swath geophysical parameters. It also generates AIRS input data sets for Eyes on Earth, Puffer-sphere, and Magic Planet. This program is tailored to AIRS Level 2 data products. It re-projects data into 1/4-degree grids that can be combined and averaged for any number of days. The software scales and colorizes global grids utilizing AIRS-specific color tables, and annotates images with title and color bar. This software can be tailored for use with other swath data products for the purposes of visualization.
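The re-projection step described above, binning swath samples into a global quarter-degree grid and averaging over any number of days, can be sketched as follows. The data are synthetic and the grid convention (row 0 at 90N) is an assumption, not taken from the AIRS software.

```python
# Bin scattered swath samples into a 0.25-degree global grid and average.
import numpy as np

n = 10000
lat = np.random.uniform(-90, 90, n)
lon = np.random.uniform(-180, 180, n)
val = np.random.uniform(200, 320, n)            # e.g., brightness temperatures

nrow, ncol = 720, 1440                          # 0.25-degree global grid
r = np.clip(((90 - lat) / 0.25).astype(int), 0, nrow - 1)
c = np.clip(((lon + 180) / 0.25).astype(int), 0, ncol - 1)

total = np.zeros((nrow, ncol))
count = np.zeros((nrow, ncol))
np.add.at(total, (r, c), val)                   # accumulate samples per cell
np.add.at(count, (r, c), 1)
grid = np.where(count > 0, total / np.maximum(count, 1), np.nan)
print(np.nanmean(grid))                         # grids are then colorized
```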
Read All about It: Motivate Your Students with These Exercises
ERIC Educational Resources Information Center
Tuttle, Harry Grover
2007-01-01
Educators at elementary, middle, and high school levels will find that integrating digital tools and resources--many commonly used by students in their "out of school" lives--can be a springboard to creativity and new skills. In this article, the author describes how word processors, presentation software and hardware, mind-mapping applications,…
Educational JavaBeans: a Requirements Driven Architecture.
ERIC Educational Resources Information Center
Hall, Jon; Rapanotti, Lucia
This paper investigates, through a case study, the development of a software architecture that is compatible with a system's high-level requirements. The case study is an example of an extended customer/supplier relationship (post-point of sale support) involved in e-universities and is representative of a class of enterprise without current…
The Impact of Interstate Migration on Human Capital Development in Washington
ERIC Educational Resources Information Center
Spaulding, Randy
2010-01-01
Washington State is a leader in the innovation economy largely due to the combination of aerospace, software, and biomedical industries centered in the greater Seattle area; and, the state's high level of international trade. Despite Washington's national ranking, the state is overly reliant on importing educated workers from other states and…
On Parallel Software Engineering Education Using Python
ERIC Educational Resources Information Center
Marowka, Ami
2018-01-01
Python is gaining popularity in academia as the preferred language to teach novices serial programming. The syntax of Python is clean, easy, and simple to understand. At the same time, it is a high-level programming language that supports multi programming paradigms such as imperative, functional, and object-oriented. Therefore, by default, it is…
Developing Instructional Applications at the Secondary Level. The Computer as a Tool.
ERIC Educational Resources Information Center
McManus, Jack; And Others
Case studies are presented for seven Los Angeles area (California) high schools that worked with Pepperdine University in the IBM/ETS (International Business Machines/Educational Testing Service) Model Schools program, a project which provided training for selected secondary school teachers in the use of personal computers and selected software as…
The Gender Factor in Computer Anxiety and Interest among Some Australian High School Students.
ERIC Educational Resources Information Center
Okebukola, Peter Akinsola
1993-01-01
Western Australia eleventh graders (142 boys, 139 girls) were compared on such variables as computers at home, computer classes, experience with computers, and socioeconomic status. Girls had higher anxiety levels, boys higher computer interest. Possible causes included social beliefs about computer use, teacher sex bias, and software (games) more…
ERIC Educational Resources Information Center
Fontes, Kris
2008-01-01
Not every art department is fortunate enough to have access to digital cameras and image-editing software, but if a scanner, computer, and printer are available, students can create some imaginative and surreal work. This high-school level lesson begins with a discussion of self-portraits, and then moves to students creating images by scanning…
Virtual Reality Robotic Programming Software in the Technology Classroom
ERIC Educational Resources Information Center
Geissler, Jason; Knott, Patrick J.; Vazquez, Matthew R.; Wright, John R., Jr.
2004-01-01
Robots make a wonderful context for teaching students about many concepts important to technological literacy. They can provide an authentic context and produce high levels of motivation. According to Standards for Technological Literacy: Content for the Study of Technology (STL) (ITEA, 2000, 2002), there are six core concepts that should be…
READ 180. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2009
2009-01-01
"READ 180" is a reading program designed for students in elementary through high school whose reading achievement is below the proficient level. The goal of "READ 180" is to address gaps in students' skills through the use of a computer program, literature, and direct instruction in reading skills. The software component of the…
GELATIO: a general framework for modular digital analysis of high-purity Ge detector signals
NASA Astrophysics Data System (ADS)
Agostini, M.; Pandola, L.; Zavarise, P.; Volynets, O.
2011-08-01
GELATIO is a new software framework for advanced data analysis and digital signal processing developed for the GERDA neutrinoless double beta decay experiment. The framework is tailored to handle the full analysis flow of signals recorded by high purity Ge detectors and photo-multipliers from the veto counters. It is designed to support a multi-channel modular and flexible analysis, widely customizable by the user either via human-readable initialization files or via a graphical interface. The framework organizes the data into a multi-level structure, from the raw data up to the condensed analysis parameters, and includes tools and utilities to handle the data stream between the different levels. GELATIO is implemented in C++. It relies upon ROOT and its extension TAM, which provides compatibility with PROOF, enabling the software to run in parallel on clusters of computers or many-core machines. It was tested on different platforms and benchmarked in several GERDA-related applications. A stable version is presently available for the GERDA Collaboration and it is used to provide the reference analysis of the experiment data.
The Role and Quality of Software Safety in the NASA Constellation Program
NASA Technical Reports Server (NTRS)
Layman, Lucas; Basili, Victor R.; Zelkowitz, Marvin V.
2010-01-01
In this study, we examine software safety risk in the early design phase of the NASA Constellation spaceflight program. Obtaining an accurate, program-wide picture of software safety risk is difficult across multiple, independently developing systems. We leverage one source of safety information, hazard analysis, to provide NASA quality assurance managers with information regarding the ongoing state of software safety across the program. The goal of this research is two-fold: 1) to quantify the relative importance of software with respect to system safety; and 2) to quantify the level of risk presented by software in the hazard analysis. We examined 154 hazard reports created during the preliminary design phase of three major flight hardware systems within the Constellation program. To quantify the importance of software, we collected metrics based on the number of software-related causes and controls of hazardous conditions. To quantify the level of risk presented by software, we created a metric scheme to measure the specificity of these software causes. We found that 49-70% of hazardous conditions in the three systems could be caused by software, or software was involved in the prevention of the hazardous condition. We also found that 12-17% of the 2,013 hazard causes involved software, and that 23-29% of all causes had a software control. Furthermore, 10-12% of all controls were software-based. There is potential for inaccuracy in these counts, however, as software causes are not consistently scoped, and the presence of software in a cause or control is not always clear. The application of our software specificity metrics also identified risks in the hazard reporting process. In particular, we found that a number of traceability risks in the hazard reports may impede verification of software and system safety.
Evaluating Software Assurance Knowledge and Competency of Acquisition Professionals
2014-10-01
of ISO 12207-2008, both internationally and in the United States [7]. That standard documents a comprehensive set of activities and supporting...cyberattacks grows, organizations must ensure that their procurement agents acquire high-quality, secure software. ISO 12207 and the Software Assurance Competency...
Free and Open Source Software for land degradation vulnerability assessment
NASA Astrophysics Data System (ADS)
Imbrenda, Vito; Calamita, Giuseppe; Coluzzi, Rosa; D'Emilio, Mariagrazia; Lanfredi, Maria Teresa; Perrone, Angela; Ragosta, Maria; Simoniello, Tiziana
2013-04-01
Nowadays the role of FOSS software in scientific research is becoming increasingly important. Besides the important issues of reduced licence costs, legality, and security, there are many other reasons that make FOSS software attractive. First, opening the code is a warranty of quality, since it permits thousands of developers around the world to check the code and fix bugs, rather than relying on vendor claims. FOSS communities are usually enthusiastic about helping other users solve problems and about expanding or customizing software (flexibility). Most important for this study, interoperability makes it possible to combine the user-friendly QGIS with the powerful GRASS-GIS and the rich statistical methods of R, in order to process remote sensing data and perform geo-statistical analysis in a single environment. This study is focused on land degradation (i.e., the reduction in the capacity of the land to provide ecosystem goods and services and assure its functions) and in particular on the estimation of vulnerability levels, in order to suggest appropriate policy actions to reduce or halt land degradation impacts, using the above-mentioned software. The area investigated is the Basilicata Region (Southern Italy), where large natural areas are mixed with anthropized areas. To identify different levels of vulnerability we adopted the Environmentally Sensitive Areas (ESAs) model, based on the combination of indicators related to soil, climate, vegetation and anthropic stress. Such indicators were estimated using the following data sources: the Basilicata Region Geoportal, to assess soil vulnerability; the DESERTNET2 project, to evaluate potential vegetation vulnerability and climate vulnerability; NDVI-MODIS satellite time series (2000-2010) with 250 m resolution, available as 16-day composites from the NASA LP DAAC, to characterize the dynamic component of vegetation; and Agricultural Census data 2010, Corine Land Cover 2006, and morphological information, to assess vulnerability to anthropic factors mainly connected with agricultural and grazing management. To obtain the final ESAs Index, depicting the overall vulnerability to degradation of the investigated area, we applied the geometric mean to the cross-normalized indices of each examined component. In this context QGIS was used to display data and perform basic GIS calculations, whereas GRASS was used for map-algebra operations and image processing. Finally, R was used for statistical analysis (Principal Component Analysis) aimed at determining the relative importance of each adopted indicator. Our results show that the GRASS, QGIS, and R software are suitable for mapping land degradation vulnerability and identifying highly vulnerable areas in which rehabilitation/recovery interventions are urgent. In addition, they allow us to highlight the most important drivers of degradation, thus supplying basic information for setting up intervention strategies. Ultimately, Free and Open Source Software offers a fair chance for geoscientific investigations thanks to its high interoperability and flexibility, which preserve the accuracy of the data and reduce processing time. Moreover, the presence of several communities that steadily support users allows high-quality results to be achieved, making free open source software a valuable and easy alternative to conventional commercial software.
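The aggregation step lends itself to a compact illustration. The sketch below computes an ESAs-style index as the cell-by-cell geometric mean of normalized component indices, using random arrays in place of the co-registered rasters; the [1, 2] scaling follows the usual ESAs convention, but all values here are invented.

```python
import numpy as np

# Stand-ins for co-registered rasters of the four component indices
# (soil, climate, vegetation, anthropic stress), each normalized to
# [1, 2] as in the ESAs methodology (1 = least, 2 = most vulnerable).
rng = np.random.default_rng(0)
shape = (4, 4)
soil, climate, vegetation, anthropic = (1 + rng.random(shape) for _ in range(4))

layers = np.stack([soil, climate, vegetation, anthropic])
esai = np.exp(np.log(layers).mean(axis=0))   # geometric mean, cell by cell
print(esai.round(2))
```

In GRASS this is a single r.mapcalc-style map-algebra expression; the geometric mean penalizes a cell that scores poorly on any one component more strongly than an arithmetic mean would.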
NASA Technical Reports Server (NTRS)
Greene, P. H.
1972-01-01
Both in practical engineering and in the control of muscular systems, low-level subsystems automatically provide crude approximations to the proper response. Through low-level tuning of these approximations, the proper response variant can emerge from standardized high-level commands. Such systems are expressly suited to emerging large-scale integrated circuit technology. A computer, using symbolic descriptions of subsystem responses, can select and shape the responses of low-level digital or analog microcircuits. A mathematical theory that reveals the significant informational units in this style of control, and software for realizing such information structures, are formulated.
The 1988 Directory of Educational Software Publishing Companies.
ERIC Educational Resources Information Center
Electronic Learning, 1988
1988-01-01
Based on questionnaires sent to educational software companies in January 1988, this directory lists 78 companies. Information given includes company address, curriculum subject areas for which the company publishes software, types of machines and operating systems on which the software operates, and grade level for which it is targeted. (LRW)
ERIC Educational Resources Information Center
Marson, Guilherme A.; Torres, Bayardo B.
2011-01-01
This work presents a convenient framework for developing interactive chemical education software to facilitate the integration of macroscopic, microscopic, and symbolic dimensions of chemical concepts--specifically, via the development of software for gel permeation chromatography. The instructional role of the software was evaluated in a study…
Berkowitz, Murray R
2013-01-01
Current information systems for use in detecting bioterrorist attacks lack a consistent, overarching information architecture. An overview of the use of biological agents as weapons during a bioterrorist attack is presented. Proposed are the design, development, and implementation of a medical informatics system to mine pertinent databases, retrieve relevant data, invoke appropriate biostatistical and epidemiological software packages, and automatically analyze these data. The top-level information architecture is presented. Systems requirements and functional specifications for this level are presented. Finally, future studies are identified.
Concurrent Probabilistic Simulation of High Temperature Composite Structural Response
NASA Technical Reports Server (NTRS)
Abdi, Frank
1996-01-01
A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software, GENOA, is dedicated to parallel and high-speed analysis to perform probabilistic evaluation of the high-temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives achieved in this development were: (1) utilization of the power of parallel processing and static/dynamic load-balancing optimization to make the complex simulation of the structure, material, and processing of high-temperature composites affordable; (2) computational integration and synchronization of probabilistic mathematics, structural/material mechanics, and parallel computing; (3) implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism and to increase convergence rates through high- and low-level processor assignment; (4) creation of a framework for a portable parallel architecture for machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid, and distributed-workstation types of computers; and (5) market evaluation. The results of the Phase 2 effort provide a good basis for continuation and warrant a Phase 3 government and industry partnership.
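The probabilistic part of such an analysis is, at its core, an embarrassingly parallel sampling loop. A minimal Python sketch of that idea follows, with a hypothetical limit-state function standing in for GENOA's structural/material models.

```python
import numpy as np
from multiprocessing import Pool

# Illustrative only: a trivially parallel Monte Carlo loop of the kind
# GENOA exploits, propagating uncertain material properties to a failure
# probability. The stress/strength numbers are invented.

def failure_sample(seed):
    rng = np.random.default_rng(seed)
    strength = rng.normal(100.0, 10.0)   # uncertain material strength
    stress = rng.normal(70.0, 15.0)      # uncertain applied stress
    return stress > strength             # failure indicator for this sample

if __name__ == "__main__":
    n = 100_000
    with Pool() as pool:                 # work split across processors
        failures = sum(pool.map(failure_sample, range(n)))
    print("probability of failure ~", failures / n)
```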
Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.
2010-01-01
Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475
An Overview of Starfish: A Table-Centric Tool for Interactive Synthesis
NASA Technical Reports Server (NTRS)
Tsow, Alex
2008-01-01
Engineering is an interactive process that requires intelligent interaction at many levels. My thesis [1] advances an engineering discipline for high-level synthesis and architectural decomposition that integrates perspicuous representation, designer interaction, and mathematical rigor. Starfish, the software prototype for the design method, implements a table-centric transformation system for reorganizing control-dominated system expressions into high-level architectures. Based on the digital design derivation (DDD) system, a designer-guided synthesis technique that applies correctness-preserving transformations to synchronous data-flow specifications expressed as co-recursive stream equations, Starfish enhances user interaction and extends the reachable design space by incorporating four innovations: behavior tables, serialization tables, data refinement, and operator retiming. Behavior tables express systems of co-recursive stream equations as a table of guarded signal updates. Developers and users of the DDD system used manually constructed behavior tables to help them decide which transformations to apply and how to specify them. These design exercises produced several formally constructed hardware implementations: the FM9001 microprocessor, an SECD machine for evaluating LISP, and the SchemEngine, a garbage-collected machine for interpreting a byte-code representation of compiled Scheme programs. Bose and Tuna, two of DDD's developers, have subsequently commercialized the design derivation methodology at Derivation Systems, Inc. (DSI). DSI has formally derived and validated PCI bus interfaces and a Java byte-code processor; they further executed a contract to prototype SPIDER, NASA's ultra-reliable communications bus. To date, most derivations from DDD and DRS have targeted hardware due to its synchronous design paradigm. However, Starfish expressions are independent of the synchronization mechanism; there is no commitment to hardware or globally broadcast clocks. Though software back-ends for design derivation are limited to the DDD stream interpreter, targeting synchronous or real-time software is not substantively different from targeting hardware.
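A behavior table can be pictured as a set of guarded signal updates applied once per synchronous step. The toy Python sketch below shows only that execution view; the signals and guards are invented, and Starfish manipulates such tables as formal objects rather than executing them this way.

```python
# Toy rendering of a "behavior table": each signal advances once per
# synchronous step according to guarded update rules (a trivial counter
# with an enable signal; all names here are invented).
def step(state):
    new = dict(state)
    if state["en"]:                        # guard on the update of count
        new["count"] = state["count"] + 1
    new["done"] = new["count"] >= 3        # derived signal
    return new

state = {"en": True, "count": 0, "done": False}
for _ in range(4):                         # drive the stream four ticks
    state = step(state)
    print(state)
```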
Ramírez-Fernández, Cristina; Morán, Alberto L; García-Canseco, Eloísa; Gómez-Montalvo, Jorge R
2017-03-23
1) To enhance the content of an ontology for designing virtual environments (VEs) for upper limb motor rehabilitation of stroke patients according to the suggestions and comments of rehabilitation specialists and software developers, 2) to characterize the perceived importance level of the ontology, 3) to determine the perceived usefulness of the ontology, and 4) to identify the safety characteristics of the ontology for VEs design according to the rehabilitation specialists. Using two semi-structured Web questionnaires, we asked six rehabilitation specialists and six software developers to provide us with their perception regarding the level of importance and the usability of the ontology. From their responses we have identified themes related to perceived and required safety characteristics of the ontology. Significant differences in the importance level were obtained for the Stroke Disability, VE Configuration, Outcome Measures, and Safety Calibration classes, which were perceived as highly important by rehabilitation specialists. Regarding usability, the ontology was perceived by both groups with high usefulness, ease of use, learnability and intention of use. Concerning the thematic analysis of recommendations, eight topics for safety characteristics of the ontology were identified: adjustment of therapy strategies; selection and delimitation of movements; selection and proper calibration of the interaction device; proper selection of measuring instruments; gradual modification of the difficulty of the exercise; adaptability and variability of therapy exercises; feedback according to the capabilities of the patient; and real-time support for exercise training. The rehabilitation specialists and software developers confirmed the importance of the information contained in the ontology regarding motor rehabilitation of the upper limb. Their recommendations highlight the safety features and the advantages of the ontology as a guide for the effective design of VEs.
The Kepler Science Data Processing Pipeline Source Code Road Map
NASA Technical Reports Server (NTRS)
Wohler, Bill; Jenkins, Jon M.; Twicken, Joseph D.; Bryson, Stephen T.; Clarke, Bruce Donald; Middour, Christopher K.; Quintana, Elisa Victoria; Sanderfer, Jesse Thomas; Uddin, Akm Kamal; Sabale, Anima;
2016-01-01
We give an overview of the operational concepts and architecture of the Kepler Science Processing Pipeline. Designed, developed, operated, and maintained by the Kepler Science Operations Center (SOC) at NASA Ames Research Center, the Science Processing Pipeline is a central element of the Kepler Ground Data System. The SOC consists of an office at Ames Research Center, software development and operations departments, and a data center which hosts the computers required to perform data analysis. The SOC's charter is to analyze stellar photometric data from the Kepler spacecraft and report results to the Kepler Science Office for further analysis. We describe how this is accomplished via the Kepler Science Processing Pipeline, including the software algorithms. We present the high-performance, parallel computing software modules of the pipeline that perform transit photometry, pixel-level calibration, systematic error correction, attitude determination, stellar target management, and instrument characterization.
Viger, Roland J.
2008-01-01
This fact sheet provides a high-level description of the GIS Weasel, a software system designed to aid users in preparing spatial information as input to lumped and distributed parameter environmental simulation models (ESMs). The GIS Weasel provides geographic information system (GIS) tools to help create maps of geographic features relevant to the application of a user's ESM and to generate parameters from those maps. The operation of the GIS Weasel does not require a user to be a GIS expert, only that a user has an understanding of the spatial information requirements of the model. The GIS Weasel software system provides a GIS-based graphical user interface (GUI), C programming language executables, and general utility scripts. The software will run on any computing platform where ArcInfo Workstation (version 8.1 or later) and the GRID extension are accessible. The user controls the GIS Weasel by interacting with menus, maps, and tables.
Chew, Avenell L.; Lamey, Tina; McLaren, Terri; De Roach, John
2016-01-01
Purpose To present en face optical coherence tomography (OCT) images generated by custom software based on a graph-search theory algorithm and to examine their correlation with other imaging modalities. Methods En face OCT images derived from high-density OCT volumetric scans of 3 healthy subjects and 4 patients using the custom algorithm (graph-search theory) and commercial software (Heidelberg Eye Explorer; Heidelberg Engineering) were compared and correlated with near-infrared reflectance, fundus autofluorescence, adaptive optics flood-illumination ophthalmoscopy (AO-FIO) and microperimetry. Results Commercial software was unable to generate accurate en face OCT images in eyes with retinal pigment epithelium (RPE) pathology due to segmentation error at the level of Bruch's membrane (BM). Accurate segmentation of the basal RPE and BM was achieved using the custom software. The en face OCT images from eyes with isolated interdigitation or ellipsoid zone pathology were of similar quality between the custom software and Heidelberg Eye Explorer in the absence of any other significant outer retinal pathology. En face OCT images demonstrated angioid streaks, lesions of acute macular neuroretinopathy, hydroxychloroquine toxicity and Bietti crystalline deposits that correlated with other imaging modalities. Conclusions The graph-search theory algorithm helps to overcome the limitations of outer retinal segmentation inaccuracies in commercial software. En face OCT images can provide detailed topography of the reflectivity within a specific layer of the retina which correlates with other forms of fundus imaging. Our results highlight the need for standardization of image reflectivity to facilitate quantification of en face OCT images and longitudinal analysis. PMID:27959968
Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests
NASA Technical Reports Server (NTRS)
Douglas, Freddie; Bourgeois, Edit Kaminsky
2005-01-01
The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS), a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained by use of sets of these parameters, along with costs of past tests. Thereafter, the user feeds HACEM a simple input text file that contains the parameters of a planned test or series of tests, the user selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of cost(s).
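The basic shape of a fuzzy inference estimate is easy to show. The sketch below is a zero-order Sugeno estimator with two rules; the membership centers and rule costs are invented, whereas HACEM's ANFIS networks learn such parameters from historical test-cost data.

```python
import numpy as np

# Zero-order Sugeno fuzzy estimator in the spirit of (but much simpler
# than) ANFIS. All parameters below are hypothetical.

def gauss(x, c, s):
    """Gaussian membership: how strongly x belongs to a fuzzy set."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def estimate_cost(duration_h):
    # Two rules: "short test" and "long test", each with a crisp cost.
    w_short = gauss(duration_h, c=1.0, s=1.0)
    w_long = gauss(duration_h, c=8.0, s=3.0)
    costs = np.array([50e3, 400e3])          # invented rule outputs ($)
    weights = np.array([w_short, w_long])
    return float(weights @ costs / weights.sum())   # weighted average

print(f"${estimate_cost(4.0):,.0f}")
```

In a real ANFIS, the membership parameters and rule outputs are tuned by gradient descent and least squares against past test costs rather than set by hand.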
Sea Level Data Archaeology for the Global Sea Level Observing System (GLOSS)
NASA Astrophysics Data System (ADS)
Bradshaw, Elizabeth; Matthews, Andy; Rickards, Lesley; Jevrejeva, Svetlana
2015-04-01
The Global Sea Level Observing System (GLOSS) was set up in 1985 to collect long-term tide gauge observations and has carried out a number of data archaeology activities over the past decade, including sending member organisations questionnaires to report on their repositories. The GLOSS Group of Experts (GLOSS GE) is looking to future developments in sea level data archaeology and will provide its user community with guidance on finding, digitising, quality-controlling and distributing historic records. Many records may not be held in organisational archives and may instead be in national libraries, archives and other collections. GLOSS will promote a Citizen Science approach to discovering long-term records by providing tools for volunteers to report data. Tide gauge data come in two different formats: charts and hand-written ledgers. Charts are paper analogue records generated by the mechanical instrument driving a pen trace. Several GLOSS members have developed software to automatically digitise these charts, and the various methods were reported in a paper on automated techniques for the digitization of archived mareograms delivered to the 13th GLOSS GE meeting. GLOSS is creating a repository of software for scanning analogue charts. NUNIEAU is the only publicly available software for digitising tide gauge charts, but other organisations have developed their own tide gauge digitising software that is available internally. There are several other freely available software packages that convert image data to numerical values. GLOSS could coordinate a comparison study of the various digitising software programs by sending the same charts to each organisation and asking everyone to digitise them using their own procedures, comparing the digitised data, and providing recommendations to the GLOSS community. The other major form of analogue sea level data is handwritten ledgers, which usually record high and low waters but sometimes contain higher-frequency data. The current standard method for digitising these data is to enter the values manually, which has been done by GLOSS countries including France and Spain. Because this process is time-consuming, the GLOSS GE is exploring other methods for future use. Current projects to improve Handwritten Text Recognition (HTR), e.g. tranScriptorium (a project funded by the European Union's Seventh Framework Programme), tend to work with the written word and so require knowledge of sentence structures and word-occurrence probabilities to reconstruct sentences. This approach would not be directly applicable to sea level data; however, tidal data by their very nature contain periodicity and predictability, and HTR technology could be adapted to take this into account and improve the automatic digitisation of handwritten tide gauge ledgers. There are many challenges facing the sea level data archaeology community, but it is hoped that improvements in technology can overcome some of the obstacles: faster automated digitisation of tide gauge charts, minimal user input, and automatic transcribing of handwritten ledgers. The GLOSS GE will provide a central location to share software and guidelines for quality-controlling data, and the GLOSS data archive centres will be the repository of the newly created datasets.
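The chart-digitising step can be caricatured in a few lines: scan the raster column by column, take the darkest pixel as the pen trace, and convert rows to levels. The calibration constants below are assumed, and real charts need skew, overlap, and time-axis handling that this sketch omits.

```python
import numpy as np

# Toy chart digitisation: a synthetic white page with a dark pen trace.
image = np.ones((100, 500))                  # 1.0 = paper, 0.0 = ink
rows = (50 + 30 * np.sin(np.linspace(0, 6, 500))).astype(int)
image[rows, np.arange(500)] = 0.0            # synthetic tide trace

trace_rows = image.argmin(axis=0)            # darkest pixel per column
mm_per_pixel, datum_row = 2.0, 50            # chart calibration (assumed)
level_mm = (datum_row - trace_rows) * mm_per_pixel
print(level_mm[:5])
```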
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-07-01
New hardware and software tools build on existing platforms and add performance and ease-of-use benefits as the struggle to find and produce hydrocarbons at the lowest cost becomes more and more competitive. Software tools now provide geoscientists and petroleum engineers with a better understanding of reservoirs, from the shape and makeup of formations to behavior projections as hydrocarbons are extracted. Petroleum software tools allow scientists to simulate oil flow, predict the life expectancy of a reservoir, and even help determine how to extend the life and economic viability of the reservoir. The petroleum industry's requirement to find and extract petroleum more efficiently drives the solutions provided by software and service companies. To one extent or another, most of the petroleum software products available today have achieved an acceptable level of competency. Innovative, high-impact products from small, focused companies often were bought out by larger companies with deeper pockets when their developers couldn't fund their expansion. Other products disappeared from the scene because they were unable to evolve fast enough to compete. There are still enough small companies around producing excellent products to prevent the marketplace from feeling too narrow and lacking in choice. Oil companies requiring specific solutions to their problems have helped fund product development within the commercial sector. As the industry has matured, strategic alliances between vendors, both hardware and software, have provided market advantages, often combining strengths to enter new and undeveloped areas for technology. The pace of technological development has been fast and constant.
The KATE shell: An implementation of model-based control, monitor and diagnosis
NASA Technical Reports Server (NTRS)
Cornell, Matthew
1987-01-01
The conventional control and monitor software currently used by the Space Center for Space Shuttle processing has many limitations, such as high maintenance costs, limited diagnostic capabilities, and limited simulation support. These limitations led to the development of a knowledge-based (or model-based) shell to generically control and monitor electro-mechanical systems. The knowledge base describes the system's structure and function and is used by a software shell to do real-time constraint checking, low-level control of components, diagnosis of detected faults, sensor validation, automatic generation of schematic diagrams, and automatic recovery from failures. This approach is more versatile and more powerful than the conventional hard-coded approach and offers many advantages over it, although, for systems which require high-speed reaction times or are not well understood, knowledge-based control and monitor systems may not be appropriate.
King, Robert; Parker, Simon; Mouzakis, Kon; Fletcher, Winston; Fitzgerald, Patrick
2007-11-01
The Integrated Task Modeling Environment (ITME) is a user-friendly software tool that has been developed to automatically recode low-level data into an empirical record of meaningful task performance. The present research investigated and validated the performance of the ITME software package by conducting complex simulation missions and comparing the task analyses produced by ITME with task analyses produced by experienced video analysts. A very high interrater reliability (≥ .94) existed between experienced video analysts and the ITME for the task analyses produced for each mission. The mean session time:analysis time ratio was 1:24 using video analysis techniques and 1:5 using the ITME. It was concluded that the ITME produced task analyses that were as reliable as those produced by experienced video analysts, and significantly reduced the time cost associated with these analyses.
PTools: an opensource molecular docking library
Saladin, Adrien; Fiorucci, Sébastien; Poulain, Pierre; Prévost, Chantal; Zacharias, Martin
2009-01-01
Background Macromolecular docking is a challenging field of bioinformatics. Developing new algorithms is a slow process generally involving routine tasks that should be found in a robust library and not programmed from scratch for every new software application. Results We present an object-oriented Python/C++ library to help the development of new docking methods. This library contains low-level routines like PDB-format manipulation functions as well as high-level tools for docking and analyzing results. We also illustrate the ease of use of this library with the detailed implementation of a 3-body docking procedure. Conclusion The PTools library can handle molecules at coarse-grained or atomic resolution and allows users to rapidly develop new software. The library is already in use for protein-protein and protein-DNA docking with the ATTRACT program and for simulation analysis. This library is freely available under the GNU GPL license, together with detailed documentation. PMID:19409097
PTools: an opensource molecular docking library.
Saladin, Adrien; Fiorucci, Sébastien; Poulain, Pierre; Prévost, Chantal; Zacharias, Martin
2009-05-01
Macromolecular docking is a challenging field of bioinformatics. Developing new algorithms is a slow process generally involving routine tasks that should be found in a robust library and not programmed from scratch for every new software application. We present an object-oriented Python/C++ library to help the development of new docking methods. This library contains low-level routines like PDB-format manipulation functions as well as high-level tools for docking and analyzing results. We also illustrate the ease of use of this library with the detailed implementation of a 3-body docking procedure. The PTools library can handle molecules at coarse-grained or atomic resolution and allows users to rapidly develop new software. The library is already in use for protein-protein and protein-DNA docking with the ATTRACT program and for simulation analysis. This library is freely available under the GNU GPL license, together with detailed documentation.
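Without reproducing the actual PTools API, the flavor of the low-level routines such a library wraps can be shown with a pure-NumPy rigid-body transform; the coordinates below are invented pseudo-atoms, not real PDB data.

```python
import numpy as np

# Illustration of a low-level rigid-body operation of the kind a docking
# library wraps. This is NOT the PTools API; everything here is invented.

def rigid_transform(coords, rotation, translation):
    """Apply a rotation matrix and translation vector to an N x 3 array."""
    return coords @ rotation.T + translation

ligand = np.array([[0.0, 0.0, 0.0],          # two coarse-grained
                   [1.5, 0.0, 0.0]])         # pseudo-atoms

theta = np.pi / 2                            # 90-degree rotation about z
rot_z = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])

print(rigid_transform(ligand, rot_z, np.array([10.0, 0.0, 0.0])))
```

A docking engine layers a scoring function and a minimizer on top of exactly this kind of primitive, sampling many rotations and translations of one partner relative to the other.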
The development of a program analysis environment for Ada
NASA Technical Reports Server (NTRS)
Brown, David B.; Carlisle, Homer W.; Chang, Kai-Hsiung; Cross, James H.; Deason, William H.; Haga, Kevin D.; Huggins, John R.; Keleher, William R. A.; Starke, Benjamin B.; Weyrich, Orville R.
1989-01-01
A unit-level Ada software module testing system, called Query Utility Environment for Software Testing of Ada (QUEST/Ada), is described. The project calls for the design and development of a prototype system. QUEST/Ada design began with a definition of the overall system structure and a description of component dependencies. The project team was divided into three groups to resolve the preliminary designs of the parser/scanner, the test data generator, and the test coverage analyzer. The Phase 1 report is a working document from which the system documentation will evolve. It provides history, a guide to report sections, a literature review, the definition of the system structure and high-level interfaces, descriptions of the prototype scope, the three major components, and the plan for the remainder of the project. The appendices include specifications, statistics, two papers derived from the current research, a preliminary users' manual, and the proposal and work plan for Phase 2.
Guidance and Control Software,
1980-05-01
commitments of function, cost, and schedule. The phrase "software engineering" was intended to contrast with the phrase "computer science"; the latter aims...the software problems of cost, delivery schedule, and quality were gradually being recognized at the highest management levels. Thus, in a project... schedule dates. Although the analysis of software problems indicated that the entire software development process (figure 1) needed new methods, only
DOD Weapon Systems Software Management Study, Appendix B. Shipborne Systems
1975-06-01
program management, from inception to development maintenance, 2. Detailed documentation requirements, 3. Standard high-level language development (CS-1...the Guided Missile School (GMS) at Dam Neck. The APL Land-Based Test Site (LETS) consisted of a Mk 152 digital fire control computer, SPG-55B radar...instruction and data segments are respectively placed in low and high core addresses to take advantage of UYK-7 memory-accessing time savings. UYK-7
Bilingual parallel programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, I.; Overbeek, R.
1990-01-01
Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.
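The bilingual idea is directly visible in modern scripting languages. As a small concrete example on a POSIX system, the standard ctypes module lets high-level Python orchestrate a compiled C routine (here the C math library, chosen only for familiarity):

```python
# High-level Python provides the orchestration; the numerical kernel
# runs in compiled C. Here the "low-level component" is simply the
# system C math library, loaded through the standard ctypes module
# (POSIX assumed; find_library may behave differently on Windows).
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

# The surrounding logic stays in expressive high-level code; only the
# inner call crosses the language boundary into C.
print([libm.cos(0.1 * i) for i in range(5)])
```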
NASA Technical Reports Server (NTRS)
Pisaich, Gregory; Flueckiger, Lorenzo; Neukom, Christian; Wagner, Mike; Buchanan, Eric; Plice, Laura
2007-01-01
The Mission Simulation Toolkit (MST) is a flexible software system for autonomy research. It was developed as part of the Mission Simulation Facility (MSF) project, which was started in 2001 to facilitate the development of autonomous planetary robotic missions. Autonomy is a key enabling factor for robotic exploration. There has been a large gap between autonomy software at the research level and software that is ready for insertion into near-term space missions. The MST bridges this gap by providing a simulation framework and a suite of tools for supporting research and maturation of autonomy. MST uses a distributed framework based on the High Level Architecture (HLA) standard. A key feature of the MST framework is the ability to plug in new models to replace existing ones with the same services. This enables significant simulation flexibility, particularly the mixing and control of fidelity level. In addition, the MST provides automatic code generation from robot interfaces defined with the Unified Modeling Language (UML), methods for maintaining synchronization across distributed simulation systems, XML-based robot description, and an environment server. Finally, the MSF supports a number of third-party products including dynamic models and terrain databases. Although the communication objects and some of the simulation components that are provided with this toolkit are specifically designed for terrestrial surface rovers, the MST can be applied to any other domain, such as aerial, aquatic, or space.
NASA Technical Reports Server (NTRS)
Gill, Esther Naomi
1986-01-01
A review was conducted of software packages currently on the market which might be integrated with the interface language and aid in reaching the objectives of customization, standardization, transparency, reliability, maintainability, language substitution, expandability, portability, and flexibility. Recommendations are given for the best choices in hardware and software acquisition for in-house testing of these possible integrations. Recommended software acquisitions included tools to aid expert-system development and/or novice program development, artificial-intelligence voice technology, touch-screen, joystick, or mouse utilization, and networking. Other recommendations concerned using the language Ada for the user interface language shell, because of its high level of standardization, its structure, its ability to accept and execute programs written in other programming languages, and its DoD ownership and control, and keeping the user interface language simple so that many users will find the commercialization of space within their realm of possibility, which is, after all, the purpose of the Space Station.
Automatic programming for critical applications
NASA Technical Reports Server (NTRS)
Loganantharaj, Raj L.
1988-01-01
The important phases of a software life cycle include verification and maintenance. Usually, execution performance is an expected requirement in a software development process. Unfortunately, the verification and maintenance of programs are the time-consuming and frustrating aspects of software engineering. Verification cannot be waived for programs used in critical applications such as military, space, and nuclear-plant systems. As a consequence, synthesis of programs from specifications, an alternative way of developing correct programs, is becoming popular. What is understood by automatic programming has changed along with our expectations. At present, the goal of automatic programming is the automation of the programming process. Specifically, it means the application of artificial intelligence to software engineering in order to define techniques and create environments that help in the creation of high-level programs. The automatic programming process may be divided into two phases: the problem acquisition phase and the program synthesis phase. In the problem acquisition phase, an informal specification of the problem is transformed into an unambiguous specification, while in the program synthesis phase such a specification is further transformed into a concrete, executable program.
Mark 4A antenna control system data handling architecture study
NASA Technical Reports Server (NTRS)
Briggs, H. C.; Eldred, D. B.
1991-01-01
A high-level review was conducted to provide an analysis of the existing architecture used to handle data and implement control algorithms for NASA's Deep Space Network (DSN) antennas and to make system-level recommendations for improving this architecture so that the DSN antennas can support the ever-tightening requirements of the next decade and beyond. It was found that the existing system is seriously overloaded, with processor utilization approaching 100 percent. A number of factors contribute to this overloading, including dated hardware, inefficient software, and a message-passing strategy that depends on serial connections between machines. At the same time, the system has shortcomings and idiosyncrasies that require extensive human intervention. A custom operating system kernel and an obscure programming language exacerbate the problems and should be modernized. A new architecture is presented that addresses these and other issues. Key features of the new architecture include a simplified message passing hierarchy that utilizes a high-speed local area network, redesign of particular processing function algorithms, consolidation of functions, and implementation of the architecture in modern hardware and software using mainstream computer languages and operating systems. The system would also allow incremental hardware improvements as better and faster hardware for such systems becomes available, and costs could potentially be low enough that redundancy would be provided economically. Such a system could support DSN requirements for the foreseeable future, though thorough consideration must be given to hard computational requirements, porting existing software functionality to the new system, and issues of fault tolerance and recovery.
NASA Technical Reports Server (NTRS)
Reder, Leonard J.; Booth, Andrew; Hsieh, Jonathan; Summers, Kellee
2004-01-01
This paper presents a discussion of the evolution of a sequencer from a simple EPICS (Experimental Physics and Industrial Control System) based sequencer into a complex implementation designed utilizing UML (Unified Modeling Language) methodologies and a CASE (Computer Aided Software Engineering) tool approach. The main purpose of the sequencer (called the IF Sequencer) is to provide overall control of the Keck Interferometer to enable science operations to be carried out by a single operator (and/or observer). The interferometer links the two 10m telescopes of the W. M. Keck Observatory at Mauna Kea, Hawaii. The IF Sequencer is a high-level, multi-threaded, Harel finite state machine software program designed to orchestrate several lower-level hardware and software hard real-time subsystems that must perform their work in a specific and sequential order. The sequencing need not be done in hard real-time. Each state machine thread commands either a high-speed real-time multiple mode embedded controller via CORBA, or slower controllers via EPICS Channel Access interfaces. The overall operation of the system is simplified by the automation. The UML is discussed and our use of it to implement the sequencer is presented. The decision to use the Rhapsody product as our CASE tool is explained and reflected upon. Most importantly, a section on lessons learned is presented and the difficulty of integrating CASE tool automatically generated C++ code into a large control system consisting of multiple infrastructures is presented.
NASA Astrophysics Data System (ADS)
Reder, Leonard J.; Booth, Andrew; Hsieh, Jonathan; Summers, Kellee R.
2004-09-01
This paper presents a discussion of the evolution of a sequencer from a simple Experimental Physics and Industrial Control System (EPICS) based sequencer into a complex implementation designed utilizing UML (Unified Modeling Language) methodologies and a Computer Aided Software Engineering (CASE) tool approach. The main purpose of the Interferometer Sequencer (called the IF Sequencer) is to provide overall control of the Keck Interferometer to enable science operations to be carried out by a single operator (and/or observer). The interferometer links the two 10m telescopes of the W. M. Keck Observatory at Mauna Kea, Hawaii. The IF Sequencer is a high-level, multi-threaded, Harel finite state machine software program designed to orchestrate several lower-level hardware and software hard real-time subsystems that must perform their work in a specific and sequential order. The sequencing need not be done in hard real-time. Each state machine thread commands either a high-speed real-time multiple mode embedded controller via CORBA, or slower controllers via EPICS Channel Access interfaces. The overall operation of the system is simplified by the automation. The UML is discussed and our use of it to implement the sequencer is presented. The decision to use the Rhapsody product as our CASE tool is explained and reflected upon. Most importantly, a section on lessons learned is presented and the difficulty of integrating CASE tool automatically generated C++ code into a large control system consisting of multiple infrastructures is presented.
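The sequencing pattern described in both records above reduces to threads stepping explicit state machines on incoming events. The Python sketch below shows that skeleton only; the states, events, and transitions are invented, and the real IF Sequencer uses Harel statecharts generated from UML and commands subsystems over CORBA and EPICS Channel Access.

```python
import queue
import threading

# Invented transition table: (current state, event) -> next state.
TRANSITIONS = {("idle", "start"): "slewing",
               ("slewing", "on_target"): "tracking",
               ("tracking", "stop"): "idle"}

def sequencer(events):
    """One sequencer thread: consume events, step the state machine."""
    state = "idle"
    while True:
        event = events.get()
        if event == "shutdown":
            break
        state = TRANSITIONS.get((state, event), state)  # ignore bad events
        print("state ->", state)   # real code would command hardware here

events = queue.Queue()
t = threading.Thread(target=sequencer, args=(events,))
t.start()
for e in ["start", "on_target", "stop", "shutdown"]:
    events.put(e)
t.join()
```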
Computed gray levels in multislice and cone-beam computed tomography.
Azeredo, Fabiane; de Menezes, Luciane Macedo; Enciso, Reyes; Weissheimer, Andre; de Oliveira, Rogério Belle
2013-07-01
Gray level is the range of shades of gray in the pixels, representing the x-ray attenuation coefficient that allows for tissue density assessments in computed tomography (CT). An in-vitro study was performed to investigate the relationship between computed gray levels in 3 cone-beam CT (CBCT) scanners and 1 multislice spiral CT device using 5 software programs. Six materials (air, water, wax, acrylic, plaster, and gutta-percha) were scanned with the CBCT and CT scanners, and the computed gray levels for each material at predetermined points were measured with OsiriX Medical Imaging software (Geneva, Switzerland), OnDemand3D (CyberMed International, Seoul, Korea), E-Film (Merge Healthcare, Milwaukee, Wis), Dolphin Imaging (Dolphin Imaging & Management Solutions, Chatsworth, Calif), and InVivo Dental Software (Anatomage, San Jose, Calif). The repeatability of these measurements was calculated with intraclass correlation coefficients, and the gray levels were averaged to represent each material. Repeated analysis of variance tests were used to assess the differences in gray levels among scanners and materials. There were no differences in mean gray levels with the different software programs. There were significant differences in gray levels between scanners for each material evaluated (P <0.001). The software programs were reliable and had no influence on the CT and CBCT gray level measurements. However, the gray levels might have discrepancies when different CT and CBCT scanners are used. Therefore, caution is essential when interpreting or evaluating CBCT images because of the significant differences in gray levels between different CBCT scanners, and between CBCT and CT values. Copyright © 2013 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
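The repeatability assessment mentioned above is typically a one-way intraclass correlation over repeated measurements. A minimal sketch, with invented gray-level values (rows are materials, columns are repeated measurements):

```python
import numpy as np

# One-way intraclass correlation, ICC(1,1), over repeated gray-level
# measurements. The values below are invented for illustration.
x = np.array([[-995, -998, -996],     # air
              [   2,    0,    1],     # water
              [ 118,  121,  119]])    # acrylic
n, k = x.shape
grand = x.mean()
ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(round(icc, 4))                  # ~1.0 here: highly repeatable
```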
Reuse Metrics for Object Oriented Software
NASA Technical Reports Server (NTRS)
Bieman, James M.
1998-01-01
One way to increase the quality of software products and the productivity of software development is to reuse existing software components when building new software systems. In order to monitor improvements in reuse, the level of reuse must be measured. In this NASA supported project we (1) derived a suite of metrics which quantify reuse attributes for object oriented, object based, and procedural software, (2) designed prototype tools to take these measurements in Ada, C++, Java, and C software, (3) evaluated the reuse in available software, (4) analyzed the relationship between coupling, cohesion, inheritance, and reuse, (5) collected object oriented software systems for our empirical analyses, and (6) developed quantitative criteria and methods for restructuring software to improve reusability.
Software for Dosage Individualization of Voriconazole for Immunocompromised Patients
VanGuilder, Michael; Donnelly, J. Peter; Blijlevens, Nicole M. A.; Brüggemann, Roger J. M.; Jelliffe, Roger W.; Neely, Michael N.
2013-01-01
The efficacy of voriconazole is potentially compromised by considerable pharmacokinetic variability. There are increasing insights into voriconazole concentrations that are safe and effective for treatment of invasive fungal infections. Therapeutic drug monitoring is increasingly advocated. Software to aid in the individualization of dosing would be an extremely useful clinical tool. We developed software to enable the individualization of voriconazole dosing to attain predefined serum concentration targets. The process of individualized voriconazole therapy was based on concepts of Bayesian stochastic adaptive control. Multiple-model dosage design with feedback control was used to calculate dosages that achieved desired concentration targets with maximum precision. The performance of the software program was assessed using the data from 10 recipients of an allogeneic hematopoietic stem cell transplant (HSCT) receiving intravenous (i.v.) voriconazole. The program was able to model the plasma concentrations with a high level of precision, despite the wide range of concentration trajectories and interindividual pharmacokinetic variability. The voriconazole concentrations predicted after the last dosages were largely concordant with those actually measured. Simulations provided an illustration of the way in which the software can be used to adjust dosages of patients falling outside desired concentration targets. This software appears to be an extremely useful tool to further optimize voriconazole therapy and aid in therapeutic drug monitoring. Further prospective studies are now required to define the utility of the controller in daily clinical practice. PMID:23380734
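The control loop described above can be sketched with a deliberately tiny model: a discrete set of candidate pharmacokinetic models carries the uncertainty, a measured level reweights them by Bayes' rule, and the dose is chosen against the weighted prediction. All numbers and the one-parameter model below are invented for illustration; the clinical software uses full population PK models and a precision-weighted objective.

```python
import numpy as np

# Multiple-model dosage design in miniature. Candidate clearances stand
# in for the population of PK models (all values hypothetical).
clearances = np.array([2.0, 4.0, 6.0])         # L/h
prior = np.array([1 / 3, 1 / 3, 1 / 3])

def predicted_css(dose_mg_per_h, cl):
    return dose_mg_per_h / cl                   # steady-state concentration

# Bayesian update from one observed level at the current 12 mg/h dose.
observed, sigma = 2.4, 0.5                      # mg/L, assay error SD
lik = np.exp(-0.5 * ((observed - predicted_css(12, clearances)) / sigma) ** 2)
post = prior * lik
post /= post.sum()

# Pick the dose whose weighted predicted trough is closest to target.
target = 3.0                                    # mg/L
doses = np.linspace(4, 30, 131)
pred = np.array([(post * predicted_css(d, clearances)).sum() for d in doses])
best = doses[np.argmin((pred - target) ** 2)]
print(f"posterior {post.round(2)}, recommended dose {best:.1f} mg/h")
```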
Bendels, Michael H K; Beed, Prateep; Leibold, Christian; Schmitz, Dietmar; Johenning, Friedrich W
2008-10-30
Optical uncaging of caged compounds is a well-established method to study the functional anatomy of a brain region on the circuit level. We present an alternative approach to existing experimental setups. Using a low-magnification objective we acquire images for planning the spatial patterns of stimulation. Then high-magnification objectives are used during laser stimulation providing a laser spot between 2 microm and 20 microm size. The core of this system is a video-based control software that monitors and controls the connected devices, allows for planning of the experiment, coordinates the stimulation process and manages automatic data storage. This combines a high-resolution analysis of neuronal circuits with flexible and efficient online planning and execution of a grid of spatial stimulation patterns on a larger scale. The software offers special optical features that enable the system to achieve a maximum degree of spatial reliability. The hardware is mainly built upon standard laboratory devices and thus ideally suited to cost-effectively complement existing electrophysiological setups with a minimal amount of additional equipment. Finally, we demonstrate the performance of the system by mapping the excitatory and inhibitory connections of entorhinal cortex layer II stellate neurons and present an approach for the analysis of photo-induced synaptic responses in high spontaneous activity.
Different source image fusion based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Xiao; Piao, Yan
2016-03-01
Video image fusion uses technical means to make the video streams obtained by different image sensors complement each other, so as to obtain video that is rich in information and suited to the human visual system. Infrared cameras have good penetrating power in harsh environments such as smoke, fog and low-light conditions, but their ability to capture image detail is poor and their output does not suit the human visual system. Visible-light imaging alone can produce detailed, high-resolution images that suit the visual system, but visible images are easily affected by the external environment. The fusion of infrared and visible video involves algorithms of high complexity and computational load, occupying considerable memory resources and demanding high clock rates; such fusion has mostly been implemented in software (e.g., C++ or C) and less often on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible-light images, software and hardware are combined: the registration parameters are obtained in MATLAB, and the gray-level weighted-average method is implemented on the hardware platform to perform the fusion. The resulting fused image effectively increases the amount of information acquired from the scene.
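The gray-level weighted-average step is simple enough to state exactly. The sketch below applies it to stand-in arrays; the weight is a free parameter chosen here for illustration, and on the FPGA the same operation becomes a fixed-point multiply-accumulate per pixel.

```python
import numpy as np

# Gray-level weighted-average fusion of registered IR and visible frames
# (random arrays stand in for the camera data; alpha is assumed).
rng = np.random.default_rng(1)
ir = rng.integers(0, 256, (4, 4)).astype(np.float32)
visible = rng.integers(0, 256, (4, 4)).astype(np.float32)

alpha = 0.6                                   # assumed IR weight
fused = np.clip(alpha * ir + (1 - alpha) * visible, 0, 255).astype(np.uint8)
print(fused)
```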
ERIC Educational Resources Information Center
Northwest Regional Educational Lab., Portland, OR.
This document consists of 24 microcomputer software package evaluations prepared by the MicroSIFT (Microcomputer Software and Information for Teachers) Clearinghouse at the Northwest Regional Educational Laboratory. Each software review lists source, cost, ability level, subject, topic, medium of transfer, required hardware, required software,…
AADL and Model-based Engineering
2014-10-20
Feiler, Oct 20, 2014, © 2014 Carnegie Mellon University. We rely on software for safe aircraft operation. Embedded software systems... (slide residue: compute platform, runtime architecture, application software; system, software, and hardware engineering roles; data stream characteristics; latency) ...Why do system-level failures still occur despite fault tolerance techniques being deployed in systems? Embedded software
Estimating Software Effort Hours for Major Defense Acquisition Programs
ERIC Educational Resources Information Center
Wallshein, Corinne C.
2010-01-01
Software Cost Estimation (SCE) uses labor hours or effort required to conceptualize, develop, integrate, test, field, or maintain program components. Department of Defense (DoD) SCE can use initial software data parameters to project effort hours for large, software-intensive programs for contractors reporting the top levels of process maturity,…
EOS MLS Level 2 Data Processing Software Version 3
NASA Technical Reports Server (NTRS)
Livesey, Nathaniel J.; VanSnyder, Livesey W.; Read, William G.; Schwartz, Michael J.; Lambert, Alyn; Santee, Michelle L.; Nguyen, Honghanh T.; Froidevaux, Lucien; wang, Shuhui; Manney, Gloria L.;
2011-01-01
This software accepts the EOS MLS calibrated measurements of microwave radiance products and operational meteorological data, and produces a set of estimates of atmospheric temperature and composition. This version has been designed to be as flexible as possible. The software is controlled by a Level 2 Configuration File that controls all aspects of the software: defining the contents of state and measurement vectors, defining the configurations of the various forward models available, reading appropriate a priori, spectroscopic, and calibration data, performing retrievals, post-processing results, computing diagnostics, and outputting results in appropriate files. In production mode, the software operates in a parallel form, with one instance of the program acting as a master, coordinating the work of multiple slave instances on a cluster of computers, each computing the results for individual chunks of data. In addition to performing conventional retrieval calculations and producing geophysical products, the Level 2 Configuration File can instruct the software to produce files of simulated radiances based on a state vector formed from a set of geophysical product files taken as input. Combining both the retrieval and simulation tasks in a single piece of software makes it far easier to ensure that identical forward model algorithms and parameters are used in both tasks. This also dramatically reduces the complexity of the code maintenance effort.
Leveraging Existing Mission Tools in a Re-Usable, Component-Based Software Environment
NASA Technical Reports Server (NTRS)
Greene, Kevin; Grenander, Sven; Kurien, James; O'Reilly, Taifun
2006-01-01
Emerging methods in component-based software development offer significant advantages but may seem incompatible with existing mission operations applications. In this paper we relate our positive experiences integrating existing mission applications into component-based tools we are delivering to three missions. In most operations environments, a number of software applications have been integrated together to form the mission operations software. In contrast, with component-based software development, chunks of related functionality and data structures, referred to as components, can be individually delivered, integrated, and re-used. With the advent of powerful tools for managing component-based development, complex software systems can potentially see significant benefits from these techniques in ease of integration, testability, and reusability. These benefits motivate us to ask how component-based development techniques can be relevant in a mission operations environment, where there is significant investment in software tools that are not component-based and may not be written in languages for which component-based tools even exist. Trusted and complex software tools for sequencing, validation, navigation, and other vital functions cannot simply be re-written or abandoned in order to gain the advantages offered by emerging component-based software techniques. Thus some middle ground must be found. We have faced exactly this issue and have found several solutions. Ensemble is an open platform for development, integration, and deployment of mission operations software that we are developing. Ensemble itself is an extension of an open source, component-based software development platform called Eclipse. Due to the advantages of component-based development, we have been able to very rapidly develop mission operations tools for three surface missions by mixing and matching from a common set of mission operations components. We have also had to determine how to integrate existing mission applications for sequence development, sequence validation, high-level activity planning, and other functions into a component-based environment. For each of these, we used a somewhat different technique based upon the structure and usage of the existing application.
ERIC Educational Resources Information Center
Pfaffman, Jay
2008-01-01
Free/Open Source Software (FOSS) applications meet many of the software needs of high school science classrooms. In spite of the availability and quality of FOSS tools, they remain unknown to many teachers and utilized by fewer still. In a world where most software has restrictions on copying and use, FOSS is an anomaly, free to use and to…
BATS: a Bayesian user-friendly software for analyzing time series microarray experiments.
Angelini, Claudia; Cutillo, Luisa; De Canditiis, Daniela; Mutarelli, Margherita; Pensky, Marianna
2008-10-06
Gene expression levels in a given cell can be influenced by different factors, such as pharmacological or medical treatments. The response to a given stimulus is usually different for different genes and may depend on time. One of the goals of modern molecular biology is the high-throughput identification of genes associated with a particular treatment or biological process of interest. From a methodological and computational point of view, analyzing high-dimensional time-course microarray data requires a very specific set of tools, which are usually not included in standard software packages. Recently, the authors of this paper developed a fully Bayesian approach which allows one to identify differentially expressed genes in a 'one-sample' time-course microarray experiment, to rank them, and to estimate their expression profiles. The method is based on explicit expressions for its calculations and is, hence, very computationally efficient. The software package BATS (Bayesian Analysis of Time Series) presented here implements this methodology. It allows a user to automatically identify and rank differentially expressed genes and to estimate their expression profiles when at least 5-6 time points are available. The package has a user-friendly interface. BATS successfully manages various technical difficulties which arise in time-course microarray experiments, such as a small number of observations, non-uniform sampling intervals, and replicated or missing data. BATS is free, user-friendly software for the analysis of both simulated and real microarray time-course experiments. The software, the user manual and a brief illustrative example are freely available online at the BATS website: http://www.na.iac.cnr.it/bats.
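As a rough illustration of the task BATS automates (and emphatically not of its Bayesian method), the toy sketch below ranks genes in a one-sample time course by how much a low-order polynomial trend improves on a flat profile; it tolerates non-uniform time points but none of the other difficulties BATS handles.

```python
import numpy as np

def rank_genes(expr, t, degree=3):
    """expr: (n_genes, n_times) log-ratios; t: (possibly non-uniform) time points."""
    scores = []
    for y in expr:
        flat_rss = np.sum((y - y.mean()) ** 2)        # null: constant profile
        coef = np.polyfit(t, y, degree)               # alternative: polynomial trend
        fit_rss = np.sum((y - np.polyval(coef, t)) ** 2)
        scores.append((flat_rss - fit_rss) / (fit_rss + 1e-12))
    scores = np.asarray(scores)
    return np.argsort(scores)[::-1], scores           # most "changed" genes first
```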
A cyber infrastructure for the SKA Telescope Manager
NASA Astrophysics Data System (ADS)
Barbosa, Domingos; Barraca, João. P.; Carvalho, Bruno; Maia, Dalmiro; Gupta, Yashwant; Natarajan, Swaminathan; Le Roux, Gerhard; Swart, Paul
2016-07-01
The Square Kilometre Array Telescope Manager (SKA TM) will be responsible for assisting SKA Operations and Observation Management, carrying out system diagnosis and collecting Monitoring and Control data from the SKA subsystems and components. To provide adequate compute resources, scalability, operation continuity and high availability, as well as strict Quality of Service, the TM cyber-infrastructure (embodied in the Local Infrastructure - LINFRA) consists of COTS hardware and infrastructural software (for example: server monitoring software, host operating system, virtualization software, device firmware), providing a specially tailored Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solution. The TM infrastructure provides services in the form of computational power, software-defined networking, power, storage abstractions, and high-level, state-of-the-art IaaS and PaaS management interfaces. This cyber platform will be tailored to each of the two SKA Phase 1 telescope instances (SKA_MID in South Africa and SKA_LOW in Australia), each presenting different computational and storage infrastructures conditioned by location. This cyber platform will provide a compute model enabling TM to manage the deployment and execution of its multiple components (observation scheduler, proposal submission tools, M&C components, forensic tools, several databases, etc.). In this sense, the TM LINFRA is primarily focused on the provision of isolated instances, mostly resorting to virtualization technologies, while defaulting to bare hardware where specifically required for performance, security, availability, or other requirements.
Implementation of an object oriented track reconstruction model into multiple LHC experiments*
NASA Astrophysics Data System (ADS)
Gaines, Irwin; Gonzalez, Saul; Qian, Sijin
2001-10-01
An Object Oriented (OO) model (Gaines et al., 1996; 1997; Gaines and Qian, 1998; 1999) for track reconstruction by the Kalman filtering method has been designed for high energy physics experiments at high luminosity hadron colliders. The model has been coded in the C++ programming language and has been successfully implemented into the OO computing environments of both the CMS (1994) and ATLAS (1994) experiments at the future Large Hadron Collider (LHC) at CERN. We shall report: how the OO model was adapted, with largely the same code, to different scenarios and serves different reconstruction aims in different experiments (i.e. the level-2 trigger software for ATLAS and the offline software for CMS); how the OO model has been incorporated into different OO environments with a similar integration structure (demonstrating the ease of re-use of OO programs); the OO model's performance, including execution time, memory usage, track-finding efficiency, ghost rate, etc.; and additional physics performance based on use of the OO tracking model. We shall also mention the experience and lessons learned from the implementation of the OO model into the general OO software frameworks of the experiments. In summary, our practice shows that OO technology really makes software development and integration straightforward and convenient; this may be particularly beneficial for general non-computer-professional physicists.
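For readers unfamiliar with the technique, a minimal Kalman-filter track fit looks like the sketch below: a straight-line state (position, slope) is propagated detector layer by layer and updated with each hit. This is a generic textbook illustration, not the LHC code, which uses richer track models, material effects, and C++.

```python
import numpy as np

def kalman_track_fit(layer_z, hits, hit_sigma):
    x = np.array([0.0, 0.0])             # state: [position, slope]
    P = np.diag([1e4, 1e4])              # large initial uncertainty
    H = np.array([[1.0, 0.0]])           # we measure position only
    R = np.array([[hit_sigma ** 2]])
    z_prev = 0.0
    for z, m in zip(layer_z, hits):
        F = np.array([[1.0, z - z_prev], [0.0, 1.0]])   # propagate to this layer
        x, P = F @ x, F @ P @ F.T
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
        x = x + (K @ (np.atleast_1d(m) - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        z_prev = z
    return x, P   # fitted track parameters and their covariance
```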
Mapping CMMI Level 2 to Scrum Practices: An Experience Report
NASA Astrophysics Data System (ADS)
Diaz, Jessica; Garbajosa, Juan; Calvo-Manzano, Jose A.
CMMI has been adopted advantageously in large companies for improvements in software quality, budget fulfillment, and customer satisfaction. However, SPI strategies based on CMMI-DEV require heavy software development processes and large investments in terms of cost and time that medium and small companies cannot cope with. The so-called light software development processes, such as Agile Software Development (ASD), address these challenges. ASD welcomes changing requirements and stresses the importance of adaptive planning, simplicity and the continuous delivery of valuable software in short, time-framed iterations. ASD is becoming attractive in an increasingly global and changing software market. It would be greatly useful to be able to introduce agile methods such as Scrum in compliance with the CMMI process model. This paper intends to increase the understanding of the relationship between ASD and CMMI-DEV by reporting empirical results that confirm theoretical comparisons between ASD practices and CMMI level 2.
TMT approach to observatory software development process
NASA Astrophysics Data System (ADS)
Buur, Hanne; Subramaniam, Annapurni; Gillies, Kim; Dumas, Christophe; Bhatia, Ravinder
2016-07-01
The purpose of the Observatory Software System (OSW) is to integrate all software and hardware components of the Thirty Meter Telescope (TMT) to enable observations and data capture; thus it is a complex software system that is defined by four principal software subsystems: Common Software (CSW), Executive Software (ESW), Data Management System (DMS) and Science Operations Support System (SOSS), all of which have interdependencies with the observatory control systems and data acquisition systems. Therefore, the software development process and plan must consider dependencies on other subsystems, manage architecture, interfaces and design, manage software scope and complexity, and standardize and optimize use of resources and tools. Additionally, the TMT Observatory Software will largely be developed in India through TMT's workshare relationship with the India TMT Coordination Centre (ITCC) and use of Indian software industry vendors, which adds complexity and challenges to the software development process, communication and coordination of activities and priorities as well as measuring performance and managing quality and risk. The software project management challenge for the TMT OSW is thus a multi-faceted technical, managerial, communications and interpersonal relations challenge. The approach TMT is using to manage this multi-faceted challenge is a combination of establishing an effective geographically distributed software team (Integrated Product Team) with strong project management and technical leadership provided by the TMT Project Office (PO) and the ITCC partner to manage plans, process, performance, risk and quality, and to facilitate effective communications; establishing an effective cross-functional software management team composed of stakeholders, OSW leadership and ITCC leadership to manage dependencies and software release plans, technical complexities and change to approved interfaces, architecture, design and tool set, and to facilitate effective communications; adopting an agile-based software development process across the observatory to enable frequent software releases to help mitigate subsystem interdependencies; defining concise scope and work packages for each of the OSW subsystems to facilitate effective outsourcing of software deliverables to the ITCC partner, and to enable performance monitoring and risk management. At this stage, the architecture and high-level design of the software system have been established and reviewed. During construction, each subsystem will have a final design phase with reviews, followed by implementation and testing. The results of the TMT approach to the Observatory Software development process will only be preliminary at the time of the submittal of this paper, but it is anticipated that the early results will be a favorable indication of progress.
SEPAC software configuration control plan and procedures, revision 1
NASA Technical Reports Server (NTRS)
1981-01-01
The SEPAC Software Configuration Control Plan and Procedures are presented. The objective of software configuration control is to establish the process for maintaining configuration control of the SEPAC software, beginning with the baselining of SEPAC Flight Software Version 1 and encompassing the integration and verification tests through Spacelab Level IV Integration. The plan and procedures are designed to provide a simplified but complete configuration control process; the intent is to require a minimum amount of paperwork while providing total traceability of the SEPAC software.
Software engineering from a Langley perspective
NASA Technical Reports Server (NTRS)
Voigt, Susan
1994-01-01
A brief introduction to software engineering is presented. The talk is divided into four sections, beginning with the question 'What is software engineering?', followed by a brief history of the progression of software engineering at the Langley Research Center in the context of an expanding computing environment. Several basic concepts and terms are introduced, including software development life cycles and maturity levels. Finally, comments are offered on what software engineering means for the Langley Research Center and where to find more information on the subject.
Learning Photogrammetry with Interactive Software Tool PhoX
NASA Astrophysics Data System (ADS)
Luhmann, T.
2016-06-01
Photogrammetry is a complex topic in high-level university teaching, especially in the fields of geodesy, geoinformatics and metrology where high-quality results are demanded. In addition, more and more black-box solutions for 3D image processing and point cloud generation are available that generate nice results easily, e.g. by structure-from-motion approaches. Within this context, the classical approach to teaching photogrammetry (e.g. focusing on aerial stereophotogrammetry) has to be reformed in order to educate students and professionals in new topics and give them more insight into what happens behind the scenes. For around 20 years, photogrammetry courses at the Jade University of Applied Sciences in Oldenburg, Germany, have included the use of digital photogrammetry software that provides individual exercises, deep analysis of calculation results and a wide range of visualization tools for almost all standard tasks in photogrammetry. In recent years the software package PhoX has been developed; it is part of a new didactic concept in photogrammetry and related subjects, and also serves as an analysis tool in recent research projects. PhoX consists of a project-oriented data structure for images, image data, measured points and features, and 3D objects. It provides almost all basic photogrammetric measurement tools, image processing, calculation methods, graphical analysis functions, simulations and much more. Students use the program to conduct predefined exercises where they have the opportunity to analyse results at a high level of detail. This includes the analysis of statistical quality parameters but also the meaning of transformation parameters, rotation matrices, calibration and orientation data. As one specific advantage, PhoX allows for the interactive modification of single parameters and a direct view of the resulting effect in image or object space.
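The kind of interactive parameter experiment described, changing one orientation angle and watching the image coordinates react, can be sketched with the standard collinearity equations. The code below is our generic illustration, not PhoX itself; all function names are ours.

```python
import numpy as np

def rotation(omega, phi, kappa):
    """Rotation matrix from the three photogrammetric orientation angles."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(X, X0, angles, c=0.05):
    """Collinearity equations: object point X -> image coordinates, camera constant c."""
    d = rotation(*angles).T @ (np.asarray(X, float) - np.asarray(X0, float))
    return -c * d[0] / d[2], -c * d[1] / d[2]

# Nudge kappa and observe the effect in image space:
# project([10, 5, 2], [0, 0, 100], (0.0, 0.0, 0.01))
```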
Hyperspectral Cubesat Constellation for Rapid Natural Hazard Response
NASA Astrophysics Data System (ADS)
Mandl, D.; Huemmrich, K. F.; Ly, V. T.; Handy, M.; Ong, L.; Crum, G.
2015-12-01
With the advent of high-performance space networks that provide total coverage for cubesats, a paradigm of low-cost, high-temporal-coverage observation with hyperspectral instruments becomes more feasible. The combination of ground cloud computing resources, high-performance low-power onboard processing, total coverage for the cubesats, and social media provides an opportunity for an architecture that delivers cost-effective hyperspectral data products for natural hazard response and decision support. This paper describes a series of pathfinder efforts to create a scalable Intelligent Payload Module (IPM) that has flown on a variety of airborne vehicles, including Cessna airplanes, Citation jets and a helicopter, and will fly on an Unmanned Aerial System (UAS) hexacopter to monitor natural phenomena. The IPMs developed thus far were built on platforms that emulate a satellite environment, using real satellite flight software and real ground software. In addition, science processing software has been developed that performs hyperspectral processing onboard, using various parallel processing techniques to create onboard hyperspectral data products while consuming little power. A cubesat design was developed that is low cost and scalable to larger constellations, and thus can provide daily hyperspectral observations for any spot on Earth. The design was based on the existing IPM prototypes and metrics developed over the past few years, and on a shrunken IPM that can sustain up to 800 Mbps of throughput. A constellation of such hyperspectral cubesats could constantly monitor spectra with spectral angle mappers after Level 0, Level 1 radiometric correction, and atmospheric correction processing. This provides the opportunity for daily monitoring of any spot on Earth at 30 meter resolution, which is not available today.
Mapping of H.264 decoding on a multiprocessor architecture
NASA Astrophysics Data System (ADS)
van der Tol, Erik B.; Jaspers, Egbert G.; Gelderblom, Rob H.
2003-05-01
Due to the increasing significance of development costs in the competitive domain of high-volume consumer electronics, generic solutions are required to enable reuse of the design effort and to increase the potential market volume. As a result, Systems-on-Chip (SoCs) contain a growing number of fully programmable media processing devices, as opposed to application-specific systems, which used to offer the most attractive solutions due to their high performance density. The following motivates this trend. First, SoCs are increasingly dominated by their communication infrastructure and embedded memory, making the cost of the functional units less significant. Moreover, the continuously growing design costs require generic solutions that can be applied over a broad product range. Hence, powerful programmable SoCs are becoming increasingly attractive. However, to enable power-efficient designs that are also scalable over advancing VLSI technology, parallelism should be fully exploited. Both task-level and instruction-level parallelism can be provided by means of, e.g., a VLIW multiprocessor architecture. To provide the above-mentioned scalability, we propose to partition the data over the processors, instead of the traditional functional partitioning. An advantage of this approach is the inherent locality of data, which is extremely important for communication-efficient software implementations. Consequently, a software implementation is discussed that enables, e.g., SD-resolution H.264 decoding with a two-processor architecture, whereas High-Definition (HD) decoding can be achieved with an eight-processor system executing the same software. Experimental results show that data communication is reduced by up to 65%, directly improving the overall performance. Apart from the considerable improvement in memory bandwidth, this novel concept of partitioning offers a natural approach for optimally balancing the load of all processors, further improving the overall speedup.
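A toy rendering of the proposed data partitioning: each processor decodes its own band of macroblock rows of every frame, so most pixel references stay local. decode_rows() is a placeholder, not a real H.264 decoder, and a real implementation must additionally synchronize at band borders for intra prediction and deblocking.

```python
from concurrent.futures import ThreadPoolExecutor

def decode_rows(frame_bytes, first_row, last_row):
    # placeholder: entropy-decode and reconstruct macroblock rows [first, last]
    return (first_row, last_row)

def decode_frame_data_parallel(frame_bytes, n_rows, n_procs):
    band = (n_rows + n_procs - 1) // n_procs          # rows per processor
    with ThreadPoolExecutor(max_workers=n_procs) as ex:
        futures = [ex.submit(decode_rows, frame_bytes, p * band,
                             min((p + 1) * band, n_rows) - 1)
                   for p in range(n_procs)]
        return [f.result() for f in futures]

# e.g. two processors for SD, eight for HD, running the same code:
# decode_frame_data_parallel(b"", n_rows=68, n_procs=8)
```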
ERIC Educational Resources Information Center
Guzeller, Cem Oktay; Akin, Ayca
2014-01-01
The purpose of this study is to determine the power of ICT variables, including Internet/entertainment use (IEU), program/software use (PRGUSE), confidence in internet tasks (INTCONF) and confidence in ICT high level tasks (HIGHCONF), to predict mathematics achievement, based on PISA 2006 data. This study indicates that the ICT variables…
Damian Cirelli; Victor J. Lieffers; Melvin T. Tyree
2012-01-01
Measuring whole-plant transpiration is highly relevant considering the increasing interest in understanding and improving plant water use at the whole-plant level. We present an original software package (Amalthea) and a design for a system that measures transpiration using laboratory balances, based on readily available commodity hardware. The system is...
ERIC Educational Resources Information Center
Nicholes, Justin
2016-01-01
This classroom quasi-experiment aimed to learn if and to what degree supplementing classroom instruction with Rosetta Stone (RS), Tell Me More (TMM), Memrise (MEM), or ESL WOW (WOW) impacted high-stakes English test performance in areas of university-level writing, reading, speaking, listening, and grammar. Seventy-eight (N = 78) Chinese learners…
NASA Astrophysics Data System (ADS)
Wan Salleh, Masturah; Sulaiman, Hajar
2013-04-01
The use of technology in the teaching of mathematics at the university level was introduced long ago, but many lecturers, especially those who have taught for many years, still opt for the traditional teaching method of the lecture talk. One reason is that the lecturers themselves were not exposed to the available technologies and how they can assist the teaching and learning (T&L) process in mathematics. GeoGebra is an open and free mathematical software package that has only recently been introduced in Malaysia. Compared with Cabri Geometry and Geometer's Sketchpad (GSP), which focus only on geometry, GeoGebra is able to connect geometric, algebraic and numerical representations. Realizing this, the researchers conducted a study to expose university lecturers to the use of GeoGebra in T&L. The researchers chose to study the mathematics lecturers at the Department of Computer Science and Mathematics (JSKM), Universiti Teknologi Mara (UiTM), Penang. The objective of this study is to determine whether exposure to the GeoGebra software can affect the conceptual knowledge and procedural teaching of mathematics at the university level. This study combines descriptive and qualitative methods. One session was conducted as an open workshop for all 45 lecturers; from that total, four people were selected as a sample using a simple random sampling method. The study used materials in the form of modules during the workshop. In terms of conceptual knowledge, the results showed that the GeoGebra software is appropriate, relevant and highly effective for in-depth understanding of the selected topics. As for the procedural aspects of teaching, the software can serve as a teaching aid and considerably facilitates the lecturers' work.
A federated design for a neurobiological simulation engine: the CBI federated software architecture.
Cornelis, Hugo; Coop, Allan D; Bower, James M
2012-01-01
Simulator interoperability and extensibility have become a growing requirement in computational biology. To address this, we have developed a federated software architecture. It is federated by its union of independent disparate systems under a single cohesive view, provides interoperability through its capability to communicate, execute programs, or transfer data among different independent applications, and supports extensibility by enabling simulator expansion or enhancement without the need for major changes to system infrastructure. Historically, simulator interoperability has relied on development of declarative markup languages such as the neuron modeling language NeuroML, while simulator extension typically occurred through modification of existing functionality. The software architecture we describe here allows for both these approaches. However, it is designed to support alternative paradigms of interoperability and extensibility through the provision of logical relationships and defined application programming interfaces. They allow any appropriately configured component or software application to be incorporated into a simulator. The architecture defines independent functional modules that run stand-alone. They are arranged in logical layers that naturally correspond to the occurrence of high-level data (biological concepts) versus low-level data (numerical values) and distinguish data from control functions. The modular nature of the architecture and its independence from a given technology facilitates communication about similar concepts and functions for both users and developers. It provides several advantages for multiple independent contributions to software development. Importantly, these include: (1) Reduction in complexity of individual simulator components when compared to the complexity of a complete simulator, (2) Documentation of individual components in terms of their inputs and outputs, (3) Easy removal or replacement of unnecessary or obsolete components, (4) Stand-alone testing of components, and (5) Clear delineation of the development scope of new components.
Computer systems and software engineering
NASA Technical Reports Server (NTRS)
Mckay, Charles W.
1988-01-01
The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.
FTOOLS: A general package of software to manipulate FITS files
NASA Astrophysics Data System (ADS)
Blackburn, J. K.; Shaw, R. A.; Payne, H. E.; Hayes, J. J. E.; Heasarc
1999-12-01
FTOOLS, a highly modular collection of utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Research Archive Center) at NASA's Goddard Space Flight Center. The FTOOLS package contains many utility programs which perform modular tasks on any FITS image or table, as well as higher-level analysis programs designed specifically for data from current and past high energy astrophysics missions. The utility programs for FITS tables are especially rich and powerful, and provide functions for presentation of file contents, extraction of specific rows or columns, appending or merging tables, binning values in a column or selecting subsets of rows based on a boolean expression. Individual FTOOLS programs can easily be chained together in scripts to achieve more complex operations such as the generation and displaying of spectra or light curves. FTOOLS development began in 1991 and has produced the main set of data analysis software for the current ASCA and RXTE space missions and for other archival sets of X-ray and gamma-ray data. The FTOOLS software package is supported on most UNIX platforms and on Windows machines. The user interface is controlled by standard parameter files that are very similar to those used by IRAF. The package is self-documenting through a stand-alone help task called fhelp. Software is written in ANSI C and FORTRAN to provide portability across most computer systems. The data format dependencies between hardware platforms are isolated through the FITSIO library package.
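Chaining FTOOLS tasks from a script looks roughly like the sketch below. The task names (fselect, fdump) and their positional parameters follow our recollection of the package; consult fhelp for the authoritative signatures before relying on them.

```python
import subprocess

def filter_and_dump(events_fits, out_fits, expr="PI > 50 && PI < 200"):
    # Select rows matching a boolean expression into a new FITS table ...
    subprocess.run(["fselect", events_fits, out_fits, expr], check=True)
    # ... then print two columns of the result (all rows) to stdout.
    subprocess.run(["fdump", out_fits, "STDOUT", "TIME PI", "-"], check=True)
```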
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez, A.; Acero, J.; Alberdi, B.
High-precision coil current control, stability and ripple content are very important aspects of a stellarator design. The TJ-II coils will be supplied by network-commutated current converters, and therefore the coil currents will contain harmonics which have to be kept to a very low level. An analytical investigation as well as numerous simulations with EMTP, SABER® and other software packages have been done in order to predict the harmonic currents and to verify compliance with the specified maximum levels. The calculations and the results are presented.
Relay Forward-Link File Management Services (MaROS Phase 2)
NASA Technical Reports Server (NTRS)
Allard, Daniel A.; Wallick, Michael N.; Hy, Franklin H.; Gladden, Roy E.
2013-01-01
This software provides the service-level functionality to manage the delivery of files from a lander mission repository to an orbiter mission repository for eventual spacelink relay by the orbiter asset on a specific communications pass. It provides further functions to deliver and track a set of mission-defined messages detailing lander authorization instructions and orbiter data delivery state. All of the information concerning these transactions is persisted in a database providing a high level of accountability of the forward-link relay process.
Macro-actor execution on multilevel data-driven architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaudiot, J.L.; Najjar, W.
1988-12-31
The data-flow model of computation brings to multiprocessors high programmability at the expense of increased overhead. Applying the model at a higher level leads to better performance but also introduces a loss of parallelism. We demonstrate here syntax-directed program decomposition methods for the creation of large macro-actors in numerical algorithms. In order to alleviate some of the problems introduced by the lower-resolution interpretation, we describe a multi-level resolution scheme and analyze the requirements for its actual hardware and software integration.
ERIC Educational Resources Information Center
Wulfson, Stephen
1988-01-01
Presents reviews of six computer software programs for teaching science. Provides the publisher, grade level, cost, and descriptions of software, including: (1) "Recycling Logic"; (2) "Introduction to Biochemistry"; (3) "Food for Thought"; (4) "Watts in a Home"; (5) "Geology in Action"; and (6)…
2010-01-01
offshoring, or producing major software components overseas (Defense Science Board, 2009). These trends raise concerns about the level of trust that…
Nuclear and Particle Physics Simulations: The Consortium of Upper-Level Physics Software
NASA Astrophysics Data System (ADS)
Bigelow, Roberta; Moloney, Michael J.; Philpott, John; Rothberg, Joseph
1995-06-01
The Consortium for Upper Level Physics Software (CUPS) has developed a comprehensive series of nine book/software packages that Wiley will publish in FY '95 and '96. CUPS is an international group of 27 physicists, all with extensive backgrounds in the research, teaching, and development of instructional software. The project is being supported by the National Science Foundation (PHY-9014548), and it has received other support from the IBM Corp., Apple Computer Corp., and George Mason University. The simulations being developed are: Astrophysics, Classical Mechanics, Electricity & Magnetism, Modern Physics, Nuclear and Particle Physics, Quantum Mechanics, Solid State, Thermal and Statistical, and Waves and Optics.
Incorporating Code-Based Software in an Introductory Statistics Course
ERIC Educational Resources Information Center
Doehler, Kirsten; Taylor, Laura
2015-01-01
This article is based on the experiences of two statistics professors who have taught students to write and effectively utilize code-based software in a college-level introductory statistics course. Advantages of using software and code-based software in this context are discussed. Suggestions are made on how to ease students into using code with…
Shuttle program. MCC Level C formulation requirements: Entry guidance and entry autopilot
NASA Technical Reports Server (NTRS)
Harpold, J. C.; Hill, O.
1980-01-01
A set of preliminary entry guidance and autopilot software formulations is presented for use in the Mission Control Center (MCC) entry processor. These software formulations meet all level B requirements. Revision 2 incorporates the modifications required to functionally simulate optimal TAEM targeting capability (OTT). Implementation of this logic in the MCC must be coordinated with flight software OTT implementation and MCC TAEM guidance OTT. The entry guidance logic is based on the Orbiter avionics entry guidance software. This MCC requirements document contains a definition of coordinate systems, a list of parameter definitions for the software formulations, a description of the entry guidance detailed formulation requirements, a description of the detailed autopilot formulation requirements, a description of the targeting routine, and a set of formulation flow charts.
NASA Astrophysics Data System (ADS)
Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian
2018-01-01
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
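The device-level load balancing mentioned above can be as simple as splitting the photon budget in proportion to each device's measured throughput. The sketch below shows that idea with invented names; it is not the actual OpenCL implementation.

```python
def partition_photons(total_photons, device_rates):
    """device_rates: photons/second per device from a short calibration run."""
    total_rate = sum(device_rates.values())
    shares = {dev: int(total_photons * r / total_rate)
              for dev, r in device_rates.items()}
    fastest = max(device_rates, key=device_rates.get)
    shares[fastest] += total_photons - sum(shares.values())   # rounding remainder
    return shares

# e.g. partition_photons(10**8, {"gpu0": 9.5e6, "gpu1": 9.1e6, "cpu": 8.0e5})
```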
Trigger and data acquisition system for the N-N̄ experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldo-Ceolin, M.; Bobisut, F.; Bonaiti, V.
1991-04-01
In this paper the trigger and data acquisition system of the N-N̄ experiment at the Institut Laue-Langevin in Grenoble is presented, together with CAMAC modules especially designed for this experiment. The trigger system is organized on three logical levels; it works in the presence of a high level of beam-induced noise, without beam pulse synchronization, looking for a very rare signal. The data acquisition is based on a MicroVAX II computer, in a cluster with 4 VAXstations, running the DAQP software developed at CERN. The system has been working for a year with high efficiency and reliability.
The Knowledge-Based Software Assistant: Beyond CASE
NASA Technical Reports Server (NTRS)
Carozzoni, Joseph A.
1993-01-01
This paper will outline the similarities and differences between two paradigms of software development. Both support the whole software life cycle and provide automation for most of the software development process, but have different approaches. The CASE approach is based on a set of tools linked by a central data repository. This tool-based approach is data driven and views software development as a series of sequential steps, each resulting in a product. The Knowledge-Based Software Assistant (KBSA) approach, a radical departure from existing software development practices, is knowledge driven and centers around a formalized software development process. KBSA views software development as an incremental, iterative, and evolutionary process with development occurring at the specification level.
Simulink/PARS Integration Support
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vacaliuc, B.; Nakhaee, N.
2013-12-18
The state of the art in signal processor hardware has far outpaced the development tools for placing applications on that hardware. In addition, signal processors are available in a variety of architectures, each uniquely capable of handling specific types of signal processing efficiently. With these processors becoming smaller and demanding less power, it has become possible to group multiple processors (a heterogeneous set of processors) into single systems, with different portions of the desired problem set assigned to different processor types as appropriate. As software development tools do not keep pace with these processors, especially when multiple processors of different types are used, a method is needed to enable software code portability among multiple processors, and multiple types of processors, along with their respective software environments. Sundance DSP, Inc. has developed a software toolkit called "PARS", whose objective is to provide a framework that uses suites of tools provided by different vendors, along with modeling tools and a real-time operating system, to build an application that spans different processor types. The software language used to express the behavior of the system is a very high-level modeling language, "Simulink", a MathWorks product. ORNL has used this toolkit to effectively implement several deliverables. This CRADA report describes the collaboration between ORNL and Sundance DSP, Inc.
Efficient system interrupt concept design at the microprogramming level
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fakharzadeh, M.M.
1989-01-01
Over the past decade the demand for high-speed super microcomputers has increased tremendously. To satisfy this demand, many high-speed 32-bit microcomputers have been designed. However, the currently available 32-bit systems do not provide an adequate solution to many highly demanding problems, such as multitasking and interrupt-driven applications, which both require context switching. Systems for these purposes usually incorporate sophisticated software. In order to be efficient, a high-end microprocessor-based system must satisfy stringent software demands. Although these microprocessors use the latest fabrication technology and run at very high speed, they still suffer from insufficient hardware support for such applications. All too often, this lack is also the prime cause of execution inefficiency. In this dissertation a micro-programmable control unit and operation unit are considered in an advanced design. An automaton controller is designed for high-speed micro-level interrupt handling. Different stack models are designed for single-task and multitasking environments. The stacks are used for storage of various components of the processor state during interrupt calls, procedure calls, and task switching. A universal (for example, seven-port) register file is designed for high-speed parameter passing and inter-task communication in the multitasking environment. In addition, the register file provides a direct path between the ALU and peripheral data, which is important in real-time control applications. The overall system is a highly parallel architecture, with no pipeline or internal cache memory, which allows the designer to predict the processor's behavior during critical times.
A software package for interactive motor unit potential classification using fuzzy k-NN classifier.
Rasheed, Sarbast; Stashuk, Daniel; Kamel, Mohamed
2008-01-01
We present an interactive software package for implementing the supervised classification task during the electromyographic (EMG) signal decomposition process, using a fuzzy k-NN classifier and utilizing the MATLAB high-level programming language and its interactive environment. The method employs an assertion-based classification that takes into account a combination of motor unit potential (MUP) shapes and two modes of use of motor unit firing pattern information: the passive and the active modes. The developed package consists of several graphical user interfaces used to detect individual MUP waveforms from a raw EMG signal, extract relevant features, and classify the MUPs into motor unit potential trains (MUPTs) using assertion-based classifiers.
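For orientation, a Keller-style fuzzy k-NN assigns each candidate MUP a membership in every class, weighted by inverse distance to its k nearest labeled neighbors; a low maximum membership can be left unassigned, which matches the assertion-based flavor described. The sketch below is a generic NumPy illustration, not the MATLAB package.

```python
import numpy as np

def fuzzy_knn(train_X, train_labels, x, k=5, m=2, n_classes=2):
    d = np.linalg.norm(train_X - x, axis=1)
    nn = np.argsort(d)[:k]                                 # k nearest neighbors
    w = 1.0 / np.maximum(d[nn], 1e-12) ** (2.0 / (m - 1))  # inverse-distance weights
    memberships = np.zeros(n_classes)
    for idx, weight in zip(nn, w):
        memberships[int(train_labels[idx])] += weight
    return memberships / w.sum()   # sums to 1; assert a class only if the max is high
```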
Terrain following of arbitrary surfaces using a high intensity LED proximity sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, J.E.
1992-01-01
Many robotic operations, e.g., mapping, scanning, feature following, etc., require accurate surface following of arbitrary targets. This paper presents a versatile surface following and mapping system designed to promote hardware, software and application independence, modular development, and upward expandability. These goals are met by: a full, a priori specification of the hardware and software interfaces; a modular system architecture; and a hierarchical surface-data analysis method permitting application-specific tuning at each conceptual level of topological abstraction. This surface following system was fully designed independently of any specific robotic host, then successfully integrated with and demonstrated on a completely a priori unknown, real-time robotic system.
Open source cardiology electronic health record development for DIGICARDIAC implementation
NASA Astrophysics Data System (ADS)
Dugarte, Nelson; Medina, Rubén; Huiracocha, Lourdes; Rojas, Rubén
2015-12-01
This article presents the development of a Cardiology Electronic Health Record (CEHR) system. The software consists of a structured algorithm designed under Health Level-7 (HL7) international standards. The novelty of the system is the integration of high-resolution ECG (HRECG) signal acquisition and processing tools, patient information management tools, and telecardiology tools. The acquisition tools manage and control the DIGICARDIAC electrocardiograph functions. The processing tools support HRECG signal analysis, searching for patterns indicative of cardiovascular pathologies. The incorporation of telecardiology tools allows the system to communicate with other health care centers, decreasing access time to patient information. The CEHR system was developed entirely with open source software. Preliminary validation results showed the system's efficiency.
Cognitive tutor: applied research in mathematics education.
Ritter, Steven; Anderson, John R; Koedinger, Kenneth R; Corbett, Albert
2007-04-01
For 25 years, we have been working to build cognitive models of mathematics, which have become a basis for middle- and high-school curricula. We discuss the theoretical background of this approach and evidence that the resulting curricula are more effective than other approaches to instruction. We also discuss how embedding a well specified theory in our instructional software allows us to dynamically evaluate the effectiveness of our instruction at a more detailed level than was previously possible. The current widespread use of the software is allowing us to test hypotheses across large numbers of students. We believe that this will lead to new approaches both to understanding mathematical cognition and to improving instruction.
NSLS-II HIGH LEVEL APPLICATION INFRASTRUCTURE AND CLIENT API DESIGN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, G.; Yang; L.
2011-03-28
The beam commissioning software framework of the NSLS-II project adopts a client/server-based architecture to replace the more traditional monolithic high-level application approach. It is an open-structure platform, and we try to provide a narrow API set for client applications. With this narrow API, existing applications developed in different languages under different architectures can be ported to our platform with small modifications. This paper describes the system infrastructure design, the client API, system integration, and the latest progress. As NSLS-II is a new 3rd-generation synchrotron light source with ultra-low emittance, there are new requirements and challenges in controlling and manipulating the beam. A use case study and a theoretical analysis have been performed to clarify the requirements and challenges for the high-level application (HLA) software environment. To satisfy those requirements and challenges, an adequate system architecture for the software framework is critical for beam commissioning, study, and operation. The existing traditional approaches are self-consistent and monolithic. Some of them have adopted the concept of a middle layer to separate low-level hardware processing from numerical algorithm computing, physics modelling, data manipulation, plotting, and error handling. However, none of the existing approaches can satisfy the requirements. A new design has been proposed by introducing service-oriented architecture technology. The HLA is a combination of tools for accelerator physicists and operators, as in the traditional approach. In NSLS-II, these include monitoring applications and control routines. A scripting environment is very important for the latter part of the HLA, and both parts are designed on top of a common set of APIs. Physicists and operators are the users of these APIs, while control system engineers and a few accelerator physicists are their developers. With our client/server-based approach, we leave how to retrieve information to the developers of the APIs, and how to use them to form a physics application to the users. For example, how channels are related to a magnet, and what a magnet's current real-time setting is in physics units, are internals of the APIs; measuring chromaticities is a task for the users of the APIs. All API users work with magnet and instrument names in physics units; low-level communications in current or voltage units are minimized. In this paper, we discuss our recent progress on infrastructure development and the client API.
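A hypothetical rendering of the "narrow API" idea: clients see element names and physics units only, while the server owns the channel mappings and unit conversions. Every name below is invented for illustration; it is not the NSLS-II client API.

```python
class PhysicsClient:
    """Client-side facade; the server resolves names to channels and units."""
    def __init__(self, service):
        self.service = service

    def get(self, element, field="setpoint"):
        return self.service.read(element, field)    # value in physics units

    def set(self, element, value, field="setpoint"):
        self.service.write(element, field, value)

# A chromaticity measurement would be written only against get()/set(),
# never against channel names or raw current/voltage values.
```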
The Goddard Space Flight Center (GSFC) robotics technology testbed
NASA Technical Reports Server (NTRS)
Schnurr, Rick; Obrien, Maureen; Cofer, Sue
1989-01-01
Much of the technology planned for use in NASA's Flight Telerobotic Servicer (FTS) and the Demonstration Test Flight (DTF) is relatively new and untested. To provide the answers needed to design safe, reliable, and fully functional robotics for flight, NASA/GSFC is developing a robotics technology testbed for research into issues such as zero-g robot control, dual-arm teleoperation, simulation, and hierarchical control using a high-level programming language. The testbed will be used to investigate these high-risk technologies required for the FTS and DTF projects. The robotics technology testbed is centered around the dual-arm teleoperation of a pair of 7-degree-of-freedom (DOF) manipulators, each with its own 6-DOF mini-master hand controller. Several levels of safety are implemented using the control processor, a separate watchdog computer, and other low-level features. High-speed input/output ports allow the control processor to interface to a simulation workstation: all or part of the testbed hardware can be used in real-time dynamic simulation of testbed operations, allowing a quick and safe means of testing new control strategies. The NASA/National Bureau of Standards Standard Reference Model for Telerobot Control System Architecture (NASREM) hierarchical control scheme is being used as the reference standard for system design. All software developed for the testbed, excluding some of the simulation workstation software, is being developed in Ada. The testbed is being developed in phases. The first phase, which is nearing completion, is described, and future developments are highlighted.
The Speckle Toolbox: A Powerful Data Reduction Tool for CCD Astrometry
NASA Astrophysics Data System (ADS)
Harshaw, Richard; Rowe, David; Genet, Russell
2017-01-01
Recent advances in high-speed, low-noise CCD and CMOS cameras, coupled with breakthroughs in data reduction software that runs on desktop PCs, have opened the domain of speckle interferometry and high-accuracy CCD measurements of double stars to amateurs, allowing them to do useful science of high quality. This paper describes how to use a speckle interferometry reduction program, the Speckle Tool Box (STB), to achieve this level of result. For over a year one of the authors (Harshaw) has been using STB (and its predecessor, Plate Solve 3) to obtain measurements of double stars based on CCD camera technology, for pairs that are either too wide (the stars not sharing the same isoplanatic patch, roughly 5 arc-seconds in diameter) or too faint to image in the coherence time required for speckle (usually under 40 ms). This same approach - using speckle reduction software to measure CCD pairs with greater accuracy than is possible with lucky imaging - has been used, it turns out, for several years by the U.S. Naval Observatory.
The change in critical technologies for computational physics
NASA Technical Reports Server (NTRS)
Watson, Val
1990-01-01
It is noted that the types of technology required for computational physics are changing as the field matures. Emphasis has shifted from computer technology to algorithm technology and, finally, to visual analysis technology as areas of critical research for this field. High-performance graphical workstations tied to a supercomputer by high-speed communications, along with the development of specially tailored visualization software, have enabled the analysis of highly complex fluid-dynamics simulations. Particular reference is made here to the development of visual analysis tools at NASA's Numerical Aerodynamic Simulation Facility. The next technology this field requires is one that eliminates visual clutter by extracting the key features of physics simulations in order to create displays that clearly portray those features. Research into the tuning of visual displays to human cognitive abilities is proposed. The immediate transfer of this technology to all levels of computers, specifically the inclusion of visualization primitives in basic software development for all workstations and PCs, is recommended.
Scheduling algorithms for automatic control systems for technological processes
NASA Astrophysics Data System (ADS)
Chernigovskiy, A. S.; Tsarev, R. Yu; Kapulin, D. V.
2017-01-01
The wide use of automatic process control systems, and of high-performance systems containing a number of computers (processors), creates opportunities for high-quality, fast production that increases the competitiveness of an enterprise. Exact and fast calculations, control computation, and the processing of big data arrays all require a high level of productivity and, at the same time, minimum time for data handling and receiving results. In order to achieve the best time, it is necessary not only to use computing resources optimally, but also to design and develop the software so that the time gain is maximal. For this purpose, task (job or operation) scheduling techniques for multi-machine/multiprocessor systems are applied. Some basic task scheduling methods for multi-machine process control systems are considered in this paper, their advantages and disadvantages are highlighted, and some considerations on their use in developing software for automatic process control systems are made.
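One classic member of this family is longest-processing-time (LPT) list scheduling onto identical processors: sort tasks by decreasing time and always give the next task to the least-loaded processor, which keeps the makespan within a 4/3 factor of optimal for independent tasks. A minimal sketch:

```python
import heapq

def lpt_schedule(task_times, n_procs):
    heap = [(0.0, p) for p in range(n_procs)]      # (current load, processor id)
    heapq.heapify(heap)
    assignment = {}
    for task, t in sorted(enumerate(task_times), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)              # least-loaded processor
        assignment[task] = p
        heapq.heappush(heap, (load + t, p))
    return assignment, max(load for load, _ in heap)   # schedule and makespan

# lpt_schedule([7, 5, 4, 3, 3], n_procs=2) -> makespan 12 (the optimum is 11),
# illustrating the bounded gap of the greedy heuristic.
```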
Software Configuration Management Guidebook
NASA Technical Reports Server (NTRS)
1995-01-01
The growth in cost and importance of software to NASA has caused NASA to address the improvement of software development across the agency. One of the products of this program is a series of guidebooks that define a NASA concept of the assurance processes which are used in software development. The Software Assurance Guidebook, SMAP-GB-A201, issued in September, 1989, provides an overall picture of the concepts and practices of NASA in software assurance. Lower level guidebooks focus on specific activities that fall within the software assurance discipline, and provide more detailed information for the manager and/or practitioner. This is the Software Configuration Management Guidebook which describes software configuration management in a way that is compatible with practices in industry and at NASA Centers. Software configuration management is a key software development process, and is essential for doing software assurance.
The CMS electron and photon trigger for the LHC Run 2
NASA Astrophysics Data System (ADS)
Dezoort, Gage; Xia, Fan
2017-01-01
The CMS experiment implements a sophisticated two-level triggering system composed of a Level-1 trigger, instrumented with custom-designed hardware boards, and a software High-Level Trigger. A new Level-1 trigger architecture with improved performance is now being used to maintain the thresholds that were used in LHC Run I under the more challenging luminosity conditions experienced during Run II. The upgrades to the calorimeter trigger will be described along with performance data. The algorithms for the selection of final states with electrons and photons, both for precision measurements and for searches for new physics beyond the Standard Model, will be described in detail.
NASA Technical Reports Server (NTRS)
Lawrence, Stella
1991-01-01
The object of this project was to develop and calibrate quantitative models for predicting the quality of software. Reliable flight and supporting ground software is a highly important factor in the successful operation of the space shuttle program. The modeling tool used in the present study was SMERFS (Statistical Modeling and Estimation of Reliability Functions for Software), which contains ten models. For a first run, modeling the cumulative number of failures versus execution time gave fairly good results for our data. Plots of cumulative software failures versus calendar weeks were made, and the model results were compared with the historical data on the same graph. If a model agrees with the actual historical behavior for a set of data, then there is confidence in future predictions for that data. Considering the quality of the data, the models have given some significant results, even at this early stage. With better care in data collection, data analysis, recording of the fixing of failures, and CPU execution times, the models should prove extremely helpful in making predictions regarding the future pattern of failures, including an estimate of the number of errors remaining in the software and the additional testing time required for the software quality to reach acceptable levels. It appears that there is no single 'best' model for all cases; it is for this reason that the aim of this project was to test several models. One of the recommendations resulting from this study is that great care must be taken in the collection of data: when using a model, the data should satisfy the model's assumptions.
Leveraging OpenStudio's Application Programming Interfaces: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, N.; Ball, B.; Goldwasser, D.
2013-11-01
OpenStudio development efforts have been focused on providing Application Programming Interfaces (APIs) where users are able to extend OpenStudio without the need to compile the open source libraries. This paper will discuss the basic purposes and functionalities of the core libraries that have been wrapped with APIs, including the Building Model, Results Processing, Advanced Analysis, Uncertainty Quantification, and Data Interoperability through Translators. Several building energy modeling applications have been produced using OpenStudio's API and Software Development Kits (SDK), including the United States Department of Energy's Asset Score Calculator, a mobile-based audit tool, an energy design assistance reporting protocol, and a portfolio-scale incentive optimization analysis methodology. Each of these software applications will be discussed briefly, with a description of how the APIs were leveraged for various uses including high-level modeling, data transformations from detailed building audits, error checking/quality assurance of models, and use of high-performance computing for mass simulations.
Fault Injection Validation of a Safety-Critical TMR System
NASA Astrophysics Data System (ADS)
Irrera, Ivano; Madeira, Henrique; Zentai, Andras; Hergovics, Beata
2016-08-01
Digital systems and their software are the core technology for controlling and monitoring industrial systems in practically all activity domains. Functional safety standards such as the European standard EN 50128 for railway applications define the procedures and technical requirements for the development of software for railway control and protection systems. The validation of such systems is a highly demanding task. In this paper we discuss the use of fault injection techniques, which have been used extensively in several domains, particularly the space domain, to complement the traditional procedures for validating a SIL (Safety Integrity Level) 4 system for railway signalling that implements a TMR (Triple Modular Redundancy) architecture. The fault injection tool is based on JTAG technology. The results of our injection campaign showed a high degree of tolerance to most of the injected faults, but several cases of unexpected behaviour were also observed, helping to understand worst-case scenarios.
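The TMR principle under test can be summarized in a few lines: three redundant channels compute the same output, and a voter masks any single faulty channel. The toy voter below (ours, not the railway system's) shows what a fault injection campaign exercises: corrupt one channel and check that the voted output is unaffected.

```python
def tmr_vote(a, b, c):
    """Majority vote over three redundant channels; masks one faulty channel."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: fall back to the safe state")

assert tmr_vote(42, 42, 999) == 42   # a single injected fault is masked
```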
Mapping RNA-seq Reads with STAR
Dobin, Alexander; Gingeras, Thomas R.
2015-01-01
Mapping of large sets of high-throughput sequencing reads to a reference genome is one of the foundational steps in RNA-seq data analysis. The STAR software package performs this task with high levels of accuracy and speed. In addition to detecting annotated and novel splice junctions, STAR is capable of discovering more complex RNA sequence arrangements, such as chimeric and circular RNA. STAR can align spliced sequences of any length with moderate error rates, providing scalability for emerging sequencing technologies. STAR generates output files that can be used for many downstream analyses such as transcript/gene expression quantification, differential gene expression, novel isoform reconstruction, signal visualization, and so forth. In this unit we describe computational protocols that produce various output files, use different RNA-seq datatypes, and utilize different mapping strategies. STAR is open source software that can be run on Unix, Linux or Mac OS X systems. PMID:26334920
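As a usage illustration, a typical two-step STAR run (genome indexing, then read mapping) driven from Python; paths, file names, and thread count are assumptions, while the flags shown are standard STAR options:

```python
# Minimal sketch of a STAR workflow; input/output names are illustrative.
import subprocess

# Step 1: build the genome index once.
subprocess.run([
    "STAR", "--runMode", "genomeGenerate",
    "--runThreadN", "8",
    "--genomeDir", "star_index",                       # assumed output dir
    "--genomeFastaFiles", "genome.fa",                 # assumed reference
    "--sjdbGTFfile", "annotation.gtf",                 # assumed annotation
], check=True)

# Step 2: map paired-end reads, writing a coordinate-sorted BAM.
subprocess.run([
    "STAR", "--runThreadN", "8",
    "--genomeDir", "star_index",
    "--readFilesIn", "reads_1.fastq", "reads_2.fastq", # assumed inputs
    "--outSAMtype", "BAM", "SortedByCoordinate",
], check=True)
```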
Highly efficient simulation environment for HDTV video decoder in VLSI design
NASA Astrophysics Data System (ADS)
Mao, Xun; Wang, Wei; Gong, Huimin; He, Yan L.; Lou, Jian; Yu, Lu; Yao, Qingdong; Pirsch, Peter
2002-01-01
With the increasing complexity of VLSI designs, especially SoC (System on Chip) implementations of MPEG-2 video decoders with HDTV scalability, simulation and verification of the full design, even at the behavioral level in HDL, often prove very slow and costly, and full verification is difficult to perform until late in the design process. These tasks therefore become the bottleneck of the HDTV video decoder design procedure and strongly influence its time-to-market. In this paper, the architecture of the hardware/software interface of an HDTV video decoder is studied, and a Hardware-Software Mixed Simulation (HSMS) platform, based on the MPEG-2 video decoding algorithm, is proposed to detect and correct errors in the early design stages. The application of HSMS to a target system can be achieved by employing several of the introduced approaches, which speed up the simulation and verification task without decreasing performance.
The information protection level assessment system implementation
NASA Astrophysics Data System (ADS)
Trapeznikov, E. V.
2018-04-01
Currently, the threat of various attacks increases significantly as automated systems become more widespread. On the basis of the conducted analysis, the objective of establishing an information protection level assessment system was identified. The paper presents a software implementation of information protection level assessment in an information system, written in the programming language C#. In conclusion, the software features are identified and experimental results are presented.
ERIC Educational Resources Information Center
Tofan, Daniel C.
2009-01-01
This paper describes an upper-level undergraduate and graduate-level course on computers in chemical education that was developed and offered for the first time in Fall 2007. The course provides future chemistry teachers with exposure to current software tools that can improve productivity in teaching, curriculum development, and education…
[Research progress of probe design software of oligonucleotide microarrays].
Chen, Xi; Wu, Zaoquan; Liu, Zhengchun
2014-02-01
DNA microarray has become an essential medical genetic diagnostic tool thanks to its high throughput, miniaturization, and automation. The design and selection of oligonucleotide probes are critical for preparing gene chips of high quality. Several probe design software packages have been developed and are now available to perform this work. Each package aims at different target sequences and shows different advantages and limitations. In this article, the research and development of these packages are reviewed against three main criteria: specificity, sensitivity, and melting temperature (Tm). In addition, based on experimental results from the literature, the packages are classified according to their applications. This review will help users choose appropriate probe-design software. It should also reduce the costs of microarrays, improve the application efficiency of microarrays, and promote both the research and development (R&D) and commercialization of high-performance probe design software.
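As a minimal illustration of the Tm criterion mentioned above, using the simple Wallace rule rather than the nearest-neighbor thermodynamic models that real probe-design software employs:

```python
# Minimal sketch: the Wallace rule, Tm = 2*(A+T) + 4*(G+C) degrees C,
# shows why Tm depends on probe composition; production tools use more
# elaborate nearest-neighbor models.
def wallace_tm(probe: str) -> int:
    """Wallace-rule Tm (deg C) for a short DNA oligo (roughly < 14 nt)."""
    probe = probe.upper()
    at = probe.count("A") + probe.count("T")
    gc = probe.count("G") + probe.count("C")
    return 2 * at + 4 * gc

print(wallace_tm("ATGCATGCATGC"))  # 36 deg C for this balanced 12-mer
```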
MODIS. Volume 1: MODIS level 1A software baseline requirements
NASA Technical Reports Server (NTRS)
Masuoka, Edward; Fleig, Albert; Ardanuy, Philip; Goff, Thomas; Carpenter, Lloyd; Solomon, Carl; Storey, James
1994-01-01
This document describes the level 1A software requirements for the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument, including internal and external requirements. Internal requirements include functional, operational, and data processing as well as performance, quality, safety, and security engineering requirements. External requirements include those imposed by data archive and distribution systems (DADS); scheduling, control, monitoring, and accounting (SCMA); the product management (PM) system; the MODIS log; and the product generation system (PGS). Implementation constraints and requirements for adapting the software to the physical environment are also included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, K.R.; Hansen, F.R.; Napolitano, L.M.
1992-01-01
DART (DSP Array for Reconfigurable Tasks) is a parallel architecture of two high-performance DSP (digital signal processing) chips with the flexibility to handle a wide range of real-time applications. Each of the 32-bit floating-point DSP processors in DART is programmable in a high-level language (C or Ada). We have added extensions to the real-time operating system used by DART in order to support parallel processing. The combination of high-level language programmability, a real-time operating system, and parallel processing support significantly reduces the development cost of application software for signal processing and control applications. We have demonstrated this capability by using DART to reconstruct images in the prototype VIP (Video Imaging Projectile) ground station.
An Agile Constructionist Mentoring Methodology for Software Projects in the High School
ERIC Educational Resources Information Center
Meerbaum-Salant, Orni; Hazzan, Orit
2010-01-01
This article describes the construction process and evaluation of the Agile Constructionist Mentoring Methodology (ACMM), a mentoring method for guiding software development projects in the high school. The need for such a methodology has arisen due to the complexity of mentoring software project development in the high school. We introduce the…
NASA Astrophysics Data System (ADS)
Fadil, A.; Barriot, J.; Sichoix, L.; Ortega, P.; Willis, P.; Serafini, J.
2010-12-01
Monitoring vertical land motion is of crucial interest for observations of long-term sea level change and its reconstruction, but it is among the most challenging tasks of space geodesy. The aim of this paper is to compare vertical velocity estimates for Tahiti Island obtained from six independent geophysical measurements, namely a decade of GPS, DORIS, and GRACE data, a 17-year sea level difference (altimeter minus tide gauge (TG)) time series, ICE-5G (VM2 L90) Post-Glacial Rebound (PGR) model predictions, and coral reef stratigraphy. Except for the Glacial Isostatic Adjustment (GIA, also known as PGR) model, all the techniques are in good agreement and reveal a very slow subsidence of Tahiti Island, averaging -0.3 mm/yr, which is barely significant. Nevertheless, despite that vertical motion, Tahiti remains an ideal location for the calibration of satellite altimeter measurements. [Figure: estimated vertical crustal motions from GPS, DORIS, GRACE, (altimetry minus tide gauge) sea level records, coral reef stratigraphy, and GIA. GG = GAMIT-GLOBK software package; GOA = GIPSY-OASIS II software package.]
Evaluation of features to support safety and quality in general practice clinical software
2011-01-01
Background Electronic prescribing is now the norm in many countries. We wished to find out if clinical software systems used by general practitioners in Australia include features (functional capabilities and other characteristics) that facilitate improved patient safety and care, with a focus on quality use of medicines. Methods Seven clinical software systems used in general practice were evaluated. Fifty software features that were previously rated as likely to have a high impact on safety and/or quality of care in general practice were tested and are reported here. Results The range of results for the implementation of 50 features across the 7 clinical software systems was as follows: 17-31 features (34-62%) were fully implemented, 9-13 (18-26%) partially implemented, and 9-20 (18-40%) not implemented. Key findings included: access to evidence-based drug and therapeutic information was limited; decision support for prescribing was available but varied markedly between systems; during prescribing there was potential for medicine mis-selection in some systems, and linking a medicine with its indication was optional; the definition of 'current medicines' versus 'past medicines' was not always clear; and there were limited resources for patients, with some medicines lists for patients being suboptimal. Results were provided to the software vendors, who were keen to improve their systems. Conclusions The clinical systems tested lack some of the features expected to support patient safety and quality of care. Standards and certification for clinical software would ensure that safety features are present and that there is a minimum level of clinical functionality that clinicians could expect to find in any system.
Integrated Environment for Development and Assurance
2015-01-26
[Slide-deck extraction; only fragments are recoverable.] We rely on software for safe aircraft operation. Embedded software systems introduce a new class of... Developer, compute platform, runtime architecture, application software, embedded SW system engineer, data stream characteristics. Latency jitter affects... Why do system-level failures still occur despite fault tolerance techniques being deployed in systems? Embedded software system as major source of...
Improved CLARAty Functional-Layer/Decision-Layer Interface
NASA Technical Reports Server (NTRS)
Estlin, Tara; Rabideau, Gregg; Gaines, Daniel; Johnston, Mark; Chouinard, Caroline; Nessnas, Issa; Shu, I-Hsiang
2008-01-01
Improved interface software for communication between the CLARAty Decision and Functional layers has been developed. [The Coupled Layer Architecture for Robotics Autonomy (CLARAty) was described in Coupled-Layer Robotics Architecture for Autonomy (NPO-21218), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 48. To recapitulate: the CLARAty architecture was developed to improve the modularity of robotic software while tightening coupling between planning/execution and basic control subsystems. Whereas prior robotic software architectures typically contained three layers, CLARAty contains two: a decision layer (DL) and a functional layer (FL).] Types of communication supported by the present software include sending commands from DL modules to FL modules and sending data updates from FL modules to DL modules. The present software supplants prior interface software that had little error-checking capability, supported data parameters in string form only, supported commanding at only one level of the FL, and supported only limited updates of the state of the robot. The present software offers strong error checking, supports complex data structures and commanding at multiple levels of the FL, and, relative to the prior software, offers a much wider spectrum of state-update capabilities.
Open core control software for surgical robots.
Arata, Jumpei; Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo
2010-05-01
Today, patients and doctors in the operating room are surrounded by many medical devices as a result of recent advances in medical technology. However, these cutting-edge medical devices work independently and do not collaborate with each other, even though collaborations between devices such as navigation systems and medical imaging devices are becoming very important for accomplishing complex surgical tasks (such as a tumor removal procedure while checking the tumor location in neurosurgery). Meanwhile, several surgical robots have been commercialized and are becoming common, but these robots are not yet open to collaboration with external medical devices. A cutting-edge "intelligent surgical robot" becomes possible when a surgical robot collaborates with various kinds of sensors, a navigation system, and so on. At the same time, most academic software development for surgical robots is "home-made" within individual research institutions and not open to the public; open source control software for surgical robots can therefore be beneficial in this field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges. In general, control software has hardware dependencies based on actuators, sensors, and various kinds of internal devices, and so cannot be used on different types of robots without modification. The structure of the Open Core Control software, however, can be reused for various types of robots by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices. OpenIGTLink is adopted in the Interface class, which plays the role of communicating with external medical devices. At the same time, it is essential to maintain stable operation through asynchronous data transactions over the network; several techniques for this purpose were introduced in the Open Core Control software. The virtual fixture is a well-known technique serving as a "force guide" that supports operators in performing precise manipulation with a master-slave robot. A virtual fixture for precise and safe surgery was implemented on the system to demonstrate the idea of high-level collaboration between a surgical robot and a navigation system. The virtual fixture extension is not itself part of the Open Core Control system; however, such a function cannot be realized without tight collaboration between cutting-edge medical devices. Using the virtual fixture, operators can pre-define an accessible area on the navigation system, and the area information can be transferred to the robot. In this manner, the surgical console generates a reflection force when the operator tries to move out of the pre-defined accessible area during surgery. The Open Core Control software was implemented on a surgical master-slave robot, and stable operation was observed in a motion test. The tip of the surgical robot was displayed on a navigation system by connecting the surgical robot to a 3D position sensor through OpenIGTLink. The accessible area was pre-defined before the operation, and the virtual fixture was displayed as a "force guide" on the surgical console. In addition, the system showed stable performance in a duration test with network disturbance. In this paper, a design of the Open Core Control software for surgical robots and the implementation of the virtual fixture are described.
The Open Core Control software was implemented on a surgical robot system and showed stable performance in high-level collaboration work. The Open Core Control software is being developed to become a widely used platform for surgical robots. Safety issues are essential for the control software of such complex medical devices. It is important to follow global specifications such as the FDA guidance "General Principles of Software Validation" or IEC 62304. To comply with these regulations, it is important to develop a self-test environment; a test environment is therefore now under development to test various kinds of interference in the operating room, such as noise from an electric knife, taking into account safety and test-environment standards such as ISO 13849 and IEC 61508. The Open Core Control software is currently developed in an open-source manner and is available on the Internet. Standardization of software interfaces is becoming a major trend in this field; from this perspective, the Open Core Control software can be expected to make contributions in this field.
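A minimal sketch of the virtual-fixture behaviour described above, assuming a spherical accessible area and illustrative gains (the actual system's geometry and control law are not specified in the abstract):

```python
# Minimal sketch: a pre-defined accessible region (a sphere, for
# illustration) and a reflection force that pushes the tool tip back
# when it tries to leave. Gains and geometry are assumptions.
import numpy as np

CENTER = np.array([0.0, 0.0, 0.0])   # accessible-area centre (from navigation)
RADIUS = 25.0                        # accessible-area radius in mm (assumed)
STIFFNESS = 0.5                      # reflection-force gain in N/mm (assumed)

def reflection_force(tip_position: np.ndarray) -> np.ndarray:
    """Force fed back to the master console for a given slave tip position."""
    offset = tip_position - CENTER
    dist = np.linalg.norm(offset)
    if dist <= RADIUS:
        return np.zeros(3)                   # inside the fixture: no force
    penetration = dist - RADIUS              # how far past the boundary
    return -STIFFNESS * penetration * offset / dist  # push back toward centre

print(reflection_force(np.array([30.0, 0.0, 0.0])))  # -> [-2.5, 0., 0.]
```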
V&V Within Reuse-Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1996-01-01
Verification and Validation (V&V) is used to increase the level of assurance of critical software, particularly that of safety-critical and mission-critical software. V&V is a systems engineering discipline that evaluates the software in a systems context, and is currently applied during the development of a specific application system. In order to bring the effectiveness of V&V to bear within reuse-based software engineering, V&V must be incorporated within the domain engineering process.
Software engineering project management - A state-of-the-art report
NASA Technical Reports Server (NTRS)
Thayer, R. H.; Lehman, J. H.
1977-01-01
The management of software engineering projects in the aerospace industry was investigated. The survey assessed such features as contract type, specification preparation techniques, software documentation required by customers, planning and cost-estimating, quality control, the use of advanced program practices, software tools and test procedures, the education levels of project managers, programmers and analysts, work assignment, automatic software monitoring capabilities, design and coding reviews, production times, success rates, and organizational structure of the projects.
Survey of Software Assurance Techniques for Highly Reliable Systems
NASA Technical Reports Server (NTRS)
Nelson, Stacy
2004-01-01
This document provides a survey of software assurance techniques for highly reliable systems including a discussion of relevant safety standards for various industries in the United States and Europe, as well as examples of methods used during software development projects. It contains one section for each industry surveyed: Aerospace, Defense, Nuclear Power, Medical Devices and Transportation. Each section provides an overview of applicable standards and examples of a mission or software development project, software assurance techniques used and reliability achieved.
NASA Technical Reports Server (NTRS)
Mckee, James W.
1990-01-01
This volume (1 of 4) gives a summary of the original AMPS software system configuration, points out some of the problem areas in the original software design that this project is to address, and collects, in the appendix, all the bimonthly status reports. The purpose of AMPS is to provide a self-reliant system to control the generation and distribution of power in the space station. The software in the AMPS breadboard can be divided into three levels: the operating environment software, the protocol software, and the station-specific software. This project deals only with the operating environment software and the protocol software. The present station-specific software will not change except as necessary to conform to new data formats.
The Cherenkov Telescope Array Observatory: top level use cases
NASA Astrophysics Data System (ADS)
Bulgarelli, A.; Kosack, K.; Hinton, J.; Tosti, G.; Schwanke, U.; Schwarz, J.; Colomé, P.; Conforti, V.; Khelifi, B.; Goullon, J.; Ong, R.; Markoff, S.; Contreras, J. L.; Lucarelli, F.; Antonelli, L. A.; Bigongiari, C.; Boisson, C.; Bosnjak, Z.; Brau-Nogué, S.; Carosi, A.; Chen, A.; Cotter, G.; Covino, S.; Daniel, M.; De Cesare, G.; de Ona Wilhelmi, E.; Della Volpe, M.; Di Pierro, F.; Fioretti, V.; Füßling, M.; Garczarczyk, M.; Gaug, M.; Glicenstein, J. F.; Goldoni, P.; Götz, D.; Grandi, P.; Heller, M.; Hermann, G.; Inoue, S.; Knödlseder, J.; Lenain, J.-P.; Lindfors, E.; Lombardi, S.; Luque-Escamilla, P.; Maier, G.; Marisaldi, M.; Mundell, C.; Neyroud, N.; Noda, K.; O'Brien, P.; Petrucci, P. O.; Martí Ribas, J.; Ribó, M.; Rodriguez, J.; Romano, P.; Schmid, J.; Serre, N.; Sol, H.; Schussler, F.; Stamerra, A.; Stolarczyk, T.; Vandenbrouck, J.; Vercellone, S.; Vergani, S.; Zech, A.; Zoli, A.
2016-08-01
Today the scientific community faces increasing complexity in its projects, from both a technological and a management point of view. The reason lies in the advance of science itself, as new experiments with unprecedented levels of accuracy, precision, and coverage (temporal and spatial) are realised. Astronomy is one of the fields of the physical sciences where strong interaction between the scientists, the instrument developers, and the software developers is necessary to achieve the goals of any Big Science project. The Cherenkov Telescope Array (CTA) will be the largest ground-based very high-energy gamma-ray observatory of the coming decades. To achieve the full potential of the CTA Observatory, a system must be put into place that enables users to operate the telescopes productively. The software will cover all stages of the CTA system, from the preparation of observing proposals to the final data reduction, and must also fit into the overall system. Scientists, engineers, operators and others will use the system to operate the Observatory, hence they should be involved in the design process from the beginning. We have organised a workgroup and a workflow for the definition of the CTA Top Level Use Cases in the context of the Requirement Management activities of the CTA Observatory. Scientists, instrument developers and software developers are collaborating and sharing information to provide a common and general understanding of the Observatory from a functional point of view. Scientists who will use the CTA Observatory will provide mainly science-driven Use Cases, whereas software engineers will subsequently provide more detailed Use Cases, comments, and feedback. The main purposes are to define observing modes and strategies, and to provide a framework for the flow-down of Use Cases and requirements, to check for missing requirements, and to check against the Use-Case models already developed at the CTA sub-system level. The Use Cases will also provide the basis for the definition of the Acceptance Test Plan for the validation of the overall CTA system. In this contribution we present the organisation and workflow of the Top Level Use Cases workgroup.
Using high-resolution displays for high-resolution cardiac data.
Goodyer, Christopher; Hodrien, John; Wood, Jason; Kohl, Peter; Brodlie, Ken
2009-07-13
The ability to perform fast, accurate, high-resolution visualization is fundamental to improving our understanding of anatomical data. As data volumes increase with improvements in scanning technology, the methods applied to visualization must evolve. In this paper, we address the interactive display of data from high-resolution magnetic resonance imaging scanning of a rabbit heart and subsequent histological imaging. We describe a visualization environment involving a tiled liquid-crystal-display panel wall and associated software, which provides an interactive and intuitive user interface. The oView software is an OpenGL application written for the VR Juggler environment. This environment abstracts displays and devices away from the application itself, aiding portability between different systems, from desktop PCs to multi-tiled display walls. Portability between display walls has been demonstrated through its use on walls at the universities of both Leeds and Oxford. We discuss important factors to be considered for interactive two-dimensional display of large three-dimensional datasets, including the use of intuitive input devices and level-of-detail aspects.
ERIC Educational Resources Information Center
Danan, Martine
2010-01-01
This article describes a series of exploratory L1 to L2 dubbing projects for which students translated and used editing software to dub short American film and TV clips into their target language. Translating and dubbing into the target language involve students in multifaceted, high-level language production tasks that lead to enhanced vocabulary…
Metagram Software - A New Perspective on the Art of Computation.
1981-10-01
[Report documentation page extraction; only fragments are recoverable.] Keywords: computer programming, information and analysis, metagramming, philosophy, intelligence, information systems, abstraction and metasystems. ...control would also serve well in the analysis of military and political intelligence, and in other areas where highly abstract methods of thought serve... needed in intelligence because several levels of abstraction are involved in a political or military system, because analysis entails a complex interplay...
ERIC Educational Resources Information Center
Xie, Ying; Sharma, Priya
2013-01-01
Reflective learning refers to a learner's purposeful and conscious manipulation of ideas toward meaningful learning. Blogs have been used to support reflective thinking, but the commonly seen blog software usually does not provide overt mechanisms for students' high-level reflections. A new tool was designed to support the reflective…
NASA Technical Reports Server (NTRS)
Delaat, J. C.; Merrill, W. C.
1983-01-01
A sensor failure detection, isolation, and accommodation algorithm was developed which incorporates analytic sensor redundancy through software. This algorithm was implemented in a high-level language on a microprocessor-based controls computer. Parallel processing and state-of-the-art 16-bit microprocessors are used along with efficient programming practices to achieve real-time operation.
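A minimal sketch of the analytic-redundancy idea (signals, model, and threshold are illustrative assumptions; the actual algorithm's detection logic is not reproduced here): instead of a duplicate hardware sensor, a software model estimates what a sensor should read, and a residual test flags and accommodates failures.

```python
# Minimal sketch of residual-based sensor failure detection and
# accommodation; threshold and values are illustrative.
THRESHOLD = 3.0   # residual magnitude that declares a failure (assumed units)

def detect_sensor_failure(measured: float, analytic_estimate: float) -> bool:
    """Compare a sensor reading against its model-based analytic estimate."""
    residual = measured - analytic_estimate
    return abs(residual) > THRESHOLD

def accommodate(measured: float, analytic_estimate: float) -> float:
    """If the sensor fails the test, substitute the analytic estimate
    so the control loop keeps running."""
    if detect_sensor_failure(measured, analytic_estimate):
        return analytic_estimate
    return measured

print(accommodate(102.7, 98.1))   # residual 4.6 > 3.0 -> use estimate 98.1
```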
Framework for Flexible Security in Group Communications
NASA Technical Reports Server (NTRS)
McDaniel, Patrick; Prakash, Atul
2006-01-01
The Antigone software system defines a framework for the flexible definition and implementation of security policies in group communication systems. Antigone does not dictate the available security policies, but provides high-level mechanisms for implementing them. A central element of the Antigone architecture is a suite of such mechanisms comprising micro-protocols that provide the basic services needed by secure groups.
ERIC Educational Resources Information Center
Hasenekoglu, Ismet; Timucin, Melih
2007-01-01
The aim of this study is to collect and evaluate opinions of CAI experts and biology teachers about a high school level Computer Assisted Biology Instruction Material presenting computer-made modelling and simulations. It is a case study. A material covering "Nucleic Acids and Protein Synthesis" topic was developed as the…
Tracking at High Level Trigger in CMS
NASA Astrophysics Data System (ADS)
Tosi, M.
2016-04-01
The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of the experiments. A reduction of several orders of magnitude in the event rate is needed to reach values compatible with detector readout, offline storage, and analysis capability. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking, the maximum reconstruction time at the HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Track reconstruction algorithms are widely used in the HLT, for the reconstruction of physics objects as well as in the identification of b-jets and lepton isolation. Reconstructed tracks are also used to distinguish the primary vertex, which identifies the hard interaction process, from pileup vertices. This task is particularly important in the LHC environment given the large number of interactions per bunch crossing: on average 25 in 2012, and expected to be around 40 in Run II. We present the performance of the HLT tracking algorithms, discussing their impact on the CMS physics program, as well as new developments made towards the next data taking in 2015.
Framework for architecture-independent run-time reconfigurable applications
NASA Astrophysics Data System (ADS)
Lehn, David I.; Hudson, Rhett D.; Athanas, Peter M.
2000-10-01
Configurable Computing Machines (CCMs) have emerged as a technology offering the computational benefits of custom ASICs as well as the flexibility and reconfigurability of general-purpose microprocessors. Significant effort from the research community has focused on techniques to move this reconfigurability from a rapid application development tool to a run-time tool. This requires the ability to change the hardware design while the application is executing, known as Run-Time Reconfiguration (RTR). Widespread acceptance of run-time reconfigurable custom computing depends upon the existence of high-level automated design tools. Such tools must reduce the designer's effort to port applications between different platforms as the architecture, hardware, and software evolve. A Java implementation of a high-level application framework, called Janus, is presented here. In this environment, developers create Java classes that describe the structural behavior of an application. The framework allows hardware and software modules to be freely mixed and interchanged. A compilation phase of the development process analyzes the structure of the application and adapts it to the target platform. Janus is capable of structuring the run-time behavior of an application to take advantage of the memory and computational resources available.
Rivara, Federico; Lumetti, Simone; Calciolari, Elena; Toffoli, Andrea; Forlani, Gianfranco; Manfredi, Edoardo
2016-06-01
The position of dental implants placed with software-guided systems should be highly accurate in order to ensure safety and a passive fit of the immediate prosthesis. The purpose of this study was to measure the discrepancy between the clinical and software-planned positions of dental implants by applying a photogrammetric method. Two casts were obtained, one from the surgical template and one from the actual position of the implants on the alveolar ridge of a patient. Photogrammetry was then applied to precisely locate the position of each implant on the casts. Because this mathematical technique requires the identification of image points and of their relative spatial coordinates, 4 marks were drilled on each implant screw. The position of each implant was then identified as the geometric center of the 4 marks, while the orientation of the implant axis was represented by a vector normal to the plane fitting the points. A series of 16 convergent images all around the object was made using a high-resolution digital camera. A mathematical method called "rototranslation" was used to superimpose the cast images for the comparison. The tests performed on the casts resulted in an average precision level of 4 μm for the locations and less than 1 degree for the axes of the implants. A series of empirical and numerical tests was performed to assess the performance of the procedure and of the measurement protocol. The photogrammetric method is reproducible and can be used to measure the discrepancy between the software-planned and the real position of dental implants. Considering that the average precision level required for an implant-based prosthesis is approximately 50 μm, the error associated with this method can be considered negligible.
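A minimal sketch of the geometric step described above, with made-up mark coordinates: the centroid of the four marks gives the implant position, and an SVD plane fit gives the axis direction.

```python
# Minimal sketch: implant position = centroid of the four drilled marks;
# implant axis = normal of the best-fit plane through them.
# Coordinates are illustrative, not measured data from the study.
import numpy as np

marks = np.array([            # illustrative mark coordinates in mm
    [10.02,  5.01, 3.00],
    [10.98,  5.00, 3.01],
    [10.51,  5.49, 2.99],
    [10.49,  4.51, 3.00],
])

center = marks.mean(axis=0)   # geometric centre = implant position

# Best-fit plane via SVD: the right singular vector with the smallest
# singular value of the centred points is the plane normal = axis direction.
_, _, vt = np.linalg.svd(marks - center)
axis = vt[-1]

print("implant position:", center)
print("implant axis direction:", axis)
```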
NASA Astrophysics Data System (ADS)
Piras, Annamaria; Malucchi, Giovanni
2012-08-01
In the design and development phase of a new program, one of the critical aspects is the integration of all the functional requirements of the system and control of the overall consistency between the identified needs on one side and the available resources on the other, especially when both are not yet consolidated but evolve as the program matures. The Integrated Engineering Harness Avionics and Software database (IDEHAS) is a tool developed to support this process, within the avionics and software disciplines, through the different phases of the program. The tool is designed to allow an incremental build-up of the avionics and software systems, from the description of the high-level architectural data (available in the early stages of the program) to the definition of pin-to-pin connectivity information (typically consolidated in the design finalization stages), and finally to the construction and validation of the detailed telemetry parameters and commands to be used in the test phases and in the Mission Control Centre. The key feature of this approach and of the associated tool is that it allows the definition and maintenance of all these data in a single, consistent environment. On one side, a system-level, concurrent approach requires the ability to easily integrate and update the best available data from the early stages of a program, in order to improve confidence in consistency and to control the design information. On the other side, the amount of information of different typologies and the cross-relationships among the data imply highly consolidated structures requiring many checks to guarantee data consistency, with negative effects on simplicity and flexibility, often limiting the attention paid to special needs and to interfaces with other disciplines.
How reliable is computerized assessment of readability?
Mailloux, S L; Johnson, M E; Fisher, D G; Pettibone, T J
1995-01-01
To assess the consistency and comparability of readability software programs, four software programs (Corporate Voice, Grammatix IV, Microsoft Word for Windows, and RightWriter) were compared. Standard materials included 28 pieces of printed educational materials on human immunodeficiency virus/acquired immunodeficiency syndrome distributed nationally and the Gettysburg Address. Statistical analyses for the educational materials revealed that each of the three formulas assessed (Flesch-Kincaid, Flesch Reading Ease, and Gunning Fog Index) provided significantly different grade equivalent scores and that the Microsoft Word program provided significantly lower grade levels and was more inconsistent in the scores provided. For the Gettysburg Address, considerable variation was revealed among formulas, with the discrepancy being up to two grade levels. When averaging across formulas, there was a variation of 1.3 grade levels between the four software programs. Given the variation between formulas and programs, implications for decisions based on results of these software programs are provided.
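For reference, the three formulas the study compares are standard; the sketch below (with approximate, illustrative counts for the Gettysburg Address) shows that the computed grade depends entirely on how words, sentences, and syllables are counted, which is precisely where the programs differed.

```python
# Minimal sketch of the three standard readability formulas; the counts
# passed in are the part each commercial program computed differently.

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def gunning_fog(words: int, sentences: int, complex_words: int) -> float:
    # "Complex" words are those with three or more syllables.
    return 0.4 * ((words / sentences) + 100.0 * (complex_words / words))

# Approximate counts for the Gettysburg Address (illustrative only).
print(flesch_kincaid_grade(words=271, sentences=10, syllables=371))
```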
Elementary Keyboarding Software Product Reports.
ERIC Educational Resources Information Center
Northwest Regional Educational Lab., Portland, OR.
This report provides detailed product descriptions of 45 software programs designed to teach or improve the keyboarding skills of elementary school students that were identified by the MicroSIFT (Microcomputer Information and Software for Teachers) staff. The descriptions include program titles, producer names, costs, grade levels, hardware,…
Intermediate Levels of Autonomy within the SSM/PMAD Breadboard
NASA Technical Reports Server (NTRS)
Dugal-Whitehead, Norma R.; Walls, Bryan
1995-01-01
The Space Station Module Power Management and Distribution (SSM/PMAD) breadboard is a test bed for the development of advanced power system control and automation. Software control in the SSM/PMAD breadboard is through cooperating systems, called Autonomous Agents. Agents can be a mixture of algorithmic software and expert systems. The early SSM/PMAD system was envisioned as being completely autonomous. It soon became apparent, though, that there would always be a need for human intervention, at least as long as a human interacts with the system in any way. In a system designed only for autonomous operation, manual intervention meant taking full control of the whole system and losing whatever expertise was in the system. Several methods for allowing humans to interact at an appropriate level of control were developed. This paper examines some of these intermediate modes of autonomy. The least intrusive mode is simple monitoring. The ability to modify future behavior by altering a schedule involves high-level interaction. Modification of operating activities comes next. The coarsest mode of control is unplanned operation of individual power system components. Each of these levels is integrated into the SSM/PMAD breadboard, with support for the user (such as warnings of the consequences of control decisions) at every level.
Teaching ESL Beginners Metacognitive Writing Strategies through Multimedia Software
ERIC Educational Resources Information Center
Wei, Jing; Chen, Julian Chengchiang; Adawu, Anthony
2014-01-01
This case study explores how strategy-based instruction (SBI), assisted by multimedia software, can be incorporated to teach beginning-level ESL learners metacognitive writing strategies. Two beginning-level adult learners participated in a 10-session SBI on planning and organizing strategies. The Cognitive Academic Language Learning Approach…