A Matrix Approach to Software Process Definition
NASA Technical Reports Server (NTRS)
Schultz, David; Bachman, Judith; Landis, Linda; Stark, Mike; Godfrey, Sally; Morisio, Maurizio; Powers, Edward I. (Technical Monitor)
2000-01-01
The Software Engineering Laboratory (SEL) is currently engaged in a Methodology and Metrics program for the Information Systems Center (ISC) at Goddard Space Flight Center (GSFC). This paper addresses the Methodology portion of the program. The purpose of the Methodology effort is to assist a software team lead in selecting and tailoring a software development or maintenance process for a specific GSFC project. It is intended that this process will also be compliant with both ISO 9001 and the Software Engineering Institute's Capability Maturity Model (CMM). Under the Methodology program, we have defined four standard ISO-compliant software processes for the ISC, and three tailoring criteria that team leads can use to categorize their projects. The team lead would select a process and appropriate tailoring factors, from which a software process tailored to the specific project could be generated. Our objective in the Methodology program is to present software process information in a structured fashion, to make it easy for a team lead to characterize the type of software engineering to be performed, and to apply tailoring parameters to search for an appropriate software process description. This will enable the team lead to follow a proven, effective software process and also satisfy NASA's requirement for compliance with ISO 9001 and the anticipated requirement for CMM assessment. This work is also intended to support the deployment of sound software processes across the ISC.
Workflow-Based Software Development Environment
NASA Technical Reports Server (NTRS)
Izygon, Michel E.
2013-01-01
The Software Developer's Assistant (SDA) helps software teams more efficiently and accurately conduct or execute software processes associated with NASA mission-critical software. SDA is a process enactment platform that guides software teams through project-specific standards, processes, and procedures. Software projects are decomposed into all of their required process steps or tasks, and each task is assigned to project personnel. SDA orchestrates the performance of work required to complete all process tasks in the correct sequence. The software then notifies team members when they may begin work on their assigned tasks and provides the tools, instructions, reference materials, and supportive artifacts that allow users to compliantly perform the work. A combination of technology components captures and enacts any software process used to support the software lifecycle, creating an adaptive workflow environment that can be modified as needed. SDA achieves software process automation through a Business Process Management (BPM) approach to managing the software lifecycle for mission-critical projects. It contains five main parts: TieFlow (workflow engine), Business Rules (rules to alter process flow), Common Repository (storage for project artifacts, versions, history, schedules, etc.), SOA (interface to allow internal, GFE, or COTS tools integration), and the Web Portal Interface (collaborative web environment).
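To make the enactment idea above concrete, here is a minimal sketch (in Python, not SDA code) of a workflow loop that releases tasks in dependency order and notifies assignees when their tasks become ready; the task names, assignees, and print-based notification are invented for illustration.

```python
# Illustrative sketch only: a minimal workflow-enactment loop in the spirit of the
# platform described above (decompose a project into tasks, respect ordering, and
# notify assignees when their tasks become ready). Not SDA code; names are invented.

from collections import namedtuple

Task = namedtuple("Task", "name assignee depends_on")

def enact(tasks):
    """Release tasks in dependency order, 'notifying' each assignee in turn."""
    done = set()
    pending = list(tasks)
    while pending:
        ready = [t for t in pending if set(t.depends_on) <= done]
        if not ready:
            raise RuntimeError("circular or unsatisfiable dependencies")
        for t in ready:
            print(f"notify {t.assignee}: task '{t.name}' is ready to start")
            done.add(t.name)  # assume the task completes before the next wave
            pending.remove(t)

if __name__ == "__main__":
    enact([
        Task("write requirements", "analyst", []),
        Task("design", "architect", ["write requirements"]),
        Task("code", "developer", ["design"]),
        Task("peer review", "reviewer", ["code"]),
    ])
```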
Final Report of the NASA Office of Safety and Mission Assurance Agile Benchmarking Team
NASA Technical Reports Server (NTRS)
Wetherholt, Martha
2016-01-01
To ensure that the NASA Safety and Mission Assurance (SMA) community remains in a position to perform reliable Software Assurance (SA) on NASA's critical software (SW) systems as the software industry rapidly transitions from waterfall to Agile processes, Terry Wilcutt, Chief, Safety and Mission Assurance, Office of Safety and Mission Assurance (OSMA), established the Agile Benchmarking Team (ABT). The Team's tasks were: 1. Research background literature on current Agile processes; 2. Perform benchmark activities with other organizations involved in Agile software processes to determine best practices; 3. Collect information on Agile-developed systems to enable improvements to the current NASA standards and processes and enhance their ability to support reliable software assurance of NASA Agile-developed systems; 4. Suggest additional guidance and recommendations for updates to those standards and processes, as needed. The ABT's findings and recommendations for software management, engineering, and software assurance are addressed herein.
NASA Astrophysics Data System (ADS)
Musil, Juergen; Schweda, Angelika; Winkler, Dietmar; Biffl, Stefan
Based on our observations of Austrian video game software development (VGSD) practices, we identified a lack of systematic process/method support and inefficient collaboration between the various disciplines involved, i.e., engineers and artists. VGSD includes heterogeneous disciplines, e.g., creative arts, game/content design, and software. Improving team collaboration and process support therefore remains an ongoing challenge in enabling a comprehensive view of game development projects. Lessons learned from software engineering practice can help game developers improve game development processes within a heterogeneous environment. Based on a state-of-the-practice survey in the Austrian games industry, this paper (a) presents first results with a focus on process/method support and (b) suggests a candidate flexible process approach based on Scrum to improve VGSD and team collaboration. Results (a) showed a trend toward highly flexible software processes involving various disciplines and (b) identified the suggested flexible process approach as feasible and useful for project application.
Supporting the Use of CERT (registered trademark) Secure Coding Standards in DoD Acquisitions
2012-07-01
Capability Maturity Model Integration (CMMI®) [Davis 2009]. Team Software Process, TSP, and Capability Maturity Model Integration are service... STP: Software Test Plan; TEP: Test and Evaluation Plan; TSP: Team Software Process; V&V: verification and validation. CMU/SEI-2012-TN-016 | 47... Supporting the Use of CERT® Secure Coding Standards in DoD Acquisitions. Tim Morrow (Software Engineering Institute), Robert Seacord (Software
Implementing Extreme Programming in Distributed Software Project Teams: Strategies and Challenges
NASA Astrophysics Data System (ADS)
Maruping, Likoebe M.
Agile software development methods and distributed forms of organizing teamwork are two team process innovations that are gaining prominence in today's demanding software development environment. Individually, each of these innovations has yielded gains in the practice of software development. Agile methods have enabled software project teams to meet the challenges of an ever turbulent business environment through enhanced flexibility and responsiveness to emergent customer needs. Distributed software project teams have enabled organizations to access highly specialized expertise across geographic locations. Although much progress has been made in understanding how to more effectively manage agile development teams and how to manage distributed software development teams, managers have little guidance on how to leverage these two potent innovations in combination. In this chapter, I outline some of the strategies and challenges associated with implementing agile methods in distributed software project teams. These are discussed in the context of a study of a large-scale software project in the United States that lasted four months.
Leader Delegation and Trust in Global Software Teams
ERIC Educational Resources Information Center
Zhang, Suling
2008-01-01
Virtual teams are an important work structure in global software development. The distributed team structure enables access to a diverse set of expertise which is often not available in one location, to a cheaper labor force, and to a potentially accelerated development process that uses a twenty-four hour work structure. Many software teams…
The (mis)use of subjective process measures in software engineering
NASA Technical Reports Server (NTRS)
Valett, Jon D.; Condon, Steven E.
1993-01-01
A variety of measures are used in software engineering research to develop an understanding of the software process and product. These measures fall into three broad categories: quantitative, characteristics, and subjective. Quantitative measures are those to which a numerical value can be assigned, for example effort or lines of code (LOC). Characteristics describe the software process or product; they might include programming language or the type of application. While such factors do not provide a quantitative measurement of a process or product, they do help characterize them. Subjective measures (as defined in this study) are those that are based on the opinion or opinions of individuals; they are somewhat unique and difficult to quantify. Capturing of subjective measure data typically involves development of some type of scale. For example, 'team experience' is one of the subjective measures that were collected and studied by the Software Engineering Laboratory (SEL). Certainly, team experience could have an impact on the software process or product; actually measuring a team's experience, however, is not a strictly mathematical exercise. Simply adding up each team member's years of experience appears inadequate. In fact, most researchers would agree that 'years' do not directly translate into 'experience.' Team experience must be defined subjectively and then a scale must be developed e.g., high experience versus low experience; or high, medium, low experience; or a different or more granular scale. Using this type of scale, a particular team's overall experience can be compared with that of other teams in the development environment. Defining, collecting, and scaling subjective measures is difficult. First, precise definitions of the measures must be established. Next, choices must be made about whose opinions will be solicited to constitute the data. Finally, care must be given to defining the right scale and level of granularity for measurement.
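As a hedged illustration of the scaling step described above, and not the SEL's actual procedure, the sketch below maps raw per-member experience (in years) onto an ordinal team-experience rating; the thresholds and labels are assumptions chosen only for the example.

```python
# Illustrative sketch only: converts raw per-member experience (years) into an
# ordinal "team experience" rating, as one possible way to scale a subjective
# measure. Thresholds and labels are hypothetical, not the SEL's actual scale.

def team_experience_rating(years_per_member, low_cutoff=2.0, high_cutoff=6.0):
    """Return 'low', 'medium', or 'high' for a team's overall experience."""
    if not years_per_member:
        raise ValueError("need at least one team member")
    mean_years = sum(years_per_member) / len(years_per_member)
    if mean_years < low_cutoff:
        return "low"
    if mean_years < high_cutoff:
        return "medium"
    return "high"

if __name__ == "__main__":
    # Example: compare two teams on the same ordinal scale.
    print(team_experience_rating([1, 2, 3]))       # low
    print(team_experience_rating([8, 10, 5, 12]))  # high
```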
NASA Technical Reports Server (NTRS)
Tijidjian, Raffi P.
2010-01-01
The TEAMS model analyzer is a supporting tool developed to work with models created with TEAMS (Testability, Engineering, and Maintenance System), which was developed by QSI. In an effort to reduce the time each TEAMS modeler must spend on the manual preparation of reports for model reviews, a new tool has been developed as an aid for models developed in TEAMS. The software allows for the viewing, reporting, and checking of TEAMS models that are checked into the TEAMS model database. The software allows the user to selectively view the model in a hierarchical tree outline that displays the components, failure modes, and ports. The reporting features allow the user to quickly gather statistics about the model and generate an input/output report covering all of the components. Rules can be automatically validated against the model, with a report generated containing any resulting inconsistencies. In addition to reducing manual effort, this software also provides an automated process framework for the Verification and Validation (V&V) effort that will follow development of these models. Such an automated tool would have a significant impact on the V&V process.
Fully Employing Software Inspections Data
NASA Technical Reports Server (NTRS)
Shull, Forrest; Feldmann, Raimund L.; Seaman, Carolyn; Regardie, Myrna; Godfrey, Sally
2009-01-01
Software inspections provide a proven approach to quality assurance for software products of all kinds, including requirements, design, code, and test plans, among others. Common to all inspections is the aim of finding and fixing defects as early as possible, thereby providing cost savings by minimizing the amount of rework necessary later in the lifecycle. Measurement data, such as the number and type of defects found and the effort spent by the inspection team, not only provide direct feedback about the software product to the project team but are also valuable for process improvement activities. In this paper, we discuss NASA's use of software inspections and the rich set of data that has resulted. In particular, we present results from analysis of inspection data that illustrate the benefits of fully utilizing that data for process improvement at several levels. Examining such data across multiple inspections or projects allows team members to monitor and trigger cross-project improvements. Such improvements may focus on the software development processes of the whole organization as well as on the applied inspection process itself.
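The following sketch illustrates, under assumed field names and invented numbers, the kind of cross-inspection aggregation the abstract describes: combining defect counts and inspection effort into a simple defect-yield indicator per artifact type. It is not the NASA analysis itself.

```python
# Illustrative sketch only: aggregates hypothetical inspection records into the kind
# of cross-project indicator the paper describes (defects found per hour of
# inspection effort). Field names and numbers are invented for the example.

from dataclasses import dataclass

@dataclass
class InspectionRecord:
    project: str
    artifact_type: str   # e.g. "requirements", "design", "code"
    defects_found: int
    effort_hours: float  # total effort spent by the inspection team

def defects_per_hour(records):
    """Return {artifact_type: defects found per inspection hour} across records."""
    totals = {}
    for r in records:
        d, h = totals.get(r.artifact_type, (0, 0.0))
        totals[r.artifact_type] = (d + r.defects_found, h + r.effort_hours)
    return {kind: d / h for kind, (d, h) in totals.items() if h > 0}

if __name__ == "__main__":
    data = [
        InspectionRecord("A", "requirements", 14, 6.0),
        InspectionRecord("A", "code", 9, 4.5),
        InspectionRecord("B", "code", 21, 12.0),
    ]
    for kind, rate in defects_per_hour(data).items():
        print(f"{kind}: {rate:.2f} defects/hour")
```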
Student Team Projects in Information Systems Development: Measuring Collective Creative Efficacy
ERIC Educational Resources Information Center
Cheng, Hsiu-Hua; Yang, Heng-Li
2011-01-01
For information systems development project student teams, learning how to improve software development processes is an important part of training. Software process improvement is an outcome of a number of creative behaviours. Social cognitive theory states that the efficacy of judgment influences behaviours. This study explores the impact of three types…
Management Guidelines for Database Developers' Teams in Software Development Projects
NASA Astrophysics Data System (ADS)
Rusu, Lazar; Lin, Yifeng; Hodosi, Georg
The worldwide job market for database developers (DBDs) has been growing continually over the last several years. In some companies, DBDs are organized as a special team (DBD team) to support other projects and roles. As a new role, the DBD team faces a major problem: there are no management guidelines for it. The team manager does not know which kinds of tasks should be assigned to this team and what practices should be used during DBDs' work. Therefore, in this paper we develop a set of management guidelines, which includes 8 fundamental tasks and 17 practices from the software development process, using two methodologies, the Capability Maturity Model (CMM) and agile software development (in particular Scrum), in order to improve the DBD team's work. Moreover, the management guidelines developed here have been complemented with practices from the authors' experience in this area and have been evaluated in the case of a software company. The management guidelines for DBD teams presented in this paper could be very useful for other companies that use a DBD team and could contribute to increasing the efficiency of these teams in their work on software development projects.
NASA Astrophysics Data System (ADS)
Yetman, G.; Downs, R. R.
2011-12-01
Software deployment is needed to process and distribute scientific data throughout the data lifecycle. Developing software in-house can take software development teams away from other software development projects and can require efforts to maintain the software over time. Adopting and reusing software and system modules that have been previously developed by others can reduce in-house software development and maintenance costs and can contribute to the quality of the system being developed. A variety of models are available for reusing and deploying software and systems that have been developed by others. These deployment models include open source software, vendor-supported open source software, commercial software, and combinations of these approaches. Deployment in Earth science data processing and distribution has demonstrated the advantages and drawbacks of each model. Deploying open source software offers advantages for developing and maintaining scientific data processing systems and applications. By joining an open source community that is developing a particular system module or application, a scientific data processing team can contribute to aspects of the software development without having to commit to developing the software alone. Communities of interested developers can share the work while focusing on activities that utilize in-house expertise and address internal requirements. Maintenance is also shared by members of the community. Deploying vendor-supported open source software offers similar advantages to open source software. However, by procuring the services of a vendor, the in-house team can rely on the vendor to provide, install, and maintain the software over time. Vendor-supported open source software may be ideal for teams that recognize the value of an open source software component or application and would like to contribute to the effort, but do not have the time or expertise to contribute extensively. Vendor-supported software may also have the additional benefits of guaranteed up-time, bug fixes, and vendor-added enhancements. Deploying commercial software can be advantageous for obtaining system or software components offered by a vendor that meet in-house requirements. The vendor can be contracted to provide installation, support, and maintenance services as needed. Combining these options offers a menu of choices, enabling selection of system components or software modules that meet the evolving requirements encountered throughout the scientific data lifecycle.
Case Study: Accelerating Process Improvement by Integrating the TSP and CMMI
2007-06-01
Could software development teams and individuals apply similar principles to improve their work? Watts S. Humphrey, a founder of the process... was an authorized PSP instructor. At Schwalb's urging, Watts Humphrey briefed the SLT on the PSP and TSP, and after the briefing, the team... [Humphrey 96] Humphrey, Watts S. Introduction to the Personal Software Process. Boston, MA: Addison-Wesley Publishing Company, Inc., 1996 (ISBN
SLS Flight Software Testing: Using a Modified Agile Software Testing Approach
NASA Technical Reports Server (NTRS)
Bolton, Albanie T.
2016-01-01
NASA's Space Launch System (SLS) is an advanced launch vehicle for a new era of exploration beyond Earth orbit (BEO). The world's most powerful rocket, SLS will launch crews of up to four astronauts in the agency's Orion spacecraft on missions to explore multiple deep-space destinations. Boeing is developing the SLS core stage, including the avionics that will control the vehicle during flight. The core stage will be built at NASA's Michoud Assembly Facility (MAF) in New Orleans, LA, using state-of-the-art manufacturing equipment. At the same time, the rocket's avionics computer software is being developed at Marshall Space Flight Center in Huntsville, AL. At Marshall, the Flight and Ground Software division provides comprehensive engineering expertise for development of flight and ground software. Within that division, the Software Systems Engineering Branch's test and verification (T&V) team uses an agile test approach in testing and verification of software. The agile software test method opens the door for regular short sprint release cycles. The basic premise behind agile software development and testing is that work is iterative and incremental. Agile testing follows an iterative development methodology in which requirements and solutions evolve through collaboration between cross-functional teams. With testing and development done incrementally, releases gain increased features and enhanced value. This value can be seen throughout the T&V team processes that are documented in various work instructions within the branch. The T&V team produces procedural test results at a higher rate, resolves issues found in the software with designers at an earlier stage rather than in a later release, and team members gain increased knowledge of the system architecture by interfacing with designers. SLS Flight Software teams want to continue uncovering better ways of developing software in an efficient and project-beneficial manner. Through agile testing, there has been increased value through individuals and interactions over processes and tools, improved customer collaboration, and improved responsiveness to change through controlled planning. The presentation will describe the agile testing methodology as adopted by the SLS FSW Test and Verification team at Marshall Space Flight Center.
Streamlining Software Aspects of Certification: Report on the SSAC Survey
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Dorsey, Cheryl A.; Knight, John C.; Leveson, Nancy G.; McCormick, G. Frank
1999-01-01
The aviation system now depends on information technology more than ever before to ensure safety and efficiency. To address concerns about the efficacy of software aspects of the certification process, the Federal Aviation Administration (FAA) began the Streamlining Software Aspects of Certification (SSAC) program. The SSAC technical team was commissioned to gather data, analyze results, and propose recommendations to maximize efficiency and minimize cost and delay, without compromising safety. The technical team conducted two public workshops to identify and prioritize software approval issues, and conducted a survey to validate the most urgent of those issues. The SSAC survey, containing over two hundred questions about the FAA's software approval process, reached over four hundred industry software developers, aircraft manufacturers, and FAA designated engineering representatives. Three hundred people responded. This report presents the SSAC program rationale, survey process, preliminary findings, and recommendations.
Using Modern Methodologies with Maintenance Software
NASA Technical Reports Server (NTRS)
Streiffert, Barbara A.; Francis, Laurie K.; Smith, Benjamin D.
2014-01-01
Jet Propulsion Laboratory uses multi-mission software produced by the Mission Planning and Sequencing (MPS) team to process, simulate, translate, and package the commands that are sent to a spacecraft. MPS works under the auspices of the Multi-Mission Ground Systems and Services (MGSS). This software consists of nineteen applications that are in maintenance. The MPS software is classified as either Class B (mission critical) or Class C (mission important). The scheduling of tasks is difficult because mission needs must be addressed before any other tasks, and those needs often spring up unexpectedly. Keeping track of the tasks that everyone is working on is also difficult because each person is working on a different software component. Recently the group adopted the Scrum methodology for planning and scheduling tasks. Scrum is one of the newer methodologies typically used in agile development. In the Scrum development environment, teams pick the tasks to be completed within a sprint based on priority. The team specifies the sprint length, usually a month or less. Scrum is typically used for new development of a single application. In the Scrum methodology there is a scrum master, a facilitator who tries to make sure that everything moves smoothly; a product owner, who represents the user(s) of the software; and the team. MPS is not the traditional environment for the Scrum methodology: MPS has many software applications in maintenance, team members who are working on disparate applications, many users, and work that is interruptible based on mission needs, issues, and requirements. In order to use Scrum, the methodology needed adaptation to MPS. Scrum was chosen because it is adaptable. This paper describes the development of the process for using Scrum, a newer development methodology, with a team that works on disparate, interruptible tasks across multiple software applications.
Empirical studies of design software: Implications for software engineering environments
NASA Technical Reports Server (NTRS)
Krasner, Herb
1988-01-01
The empirical studies team of MCC's Design Process Group conducted three studies in 1986-87 in order to gather data on professionals designing software systems in a range of situations. The first study (the Lift Experiment) used thinking-aloud protocols in a controlled laboratory setting to study the cognitive processes of individual designers. The second study (the Object Server Project) involved the observation, videotaping, and data collection of a design team on a medium-sized development project over several months in order to study team dynamics. The third study (the Field Study) involved interviews with personnel from 19 large development projects at the MCC shareholder companies in order to study how the process of design is affected by organizational and project behavior. The focus of this report is on key observations of the design process (at several levels) and their implications for the design of environments.
Contingency theoretic methodology for agent-based web-oriented manufacturing systems
NASA Astrophysics Data System (ADS)
Durrett, John R.; Burnell, Lisa J.; Priest, John W.
2000-12-01
The development of distributed, agent-based, web-oriented, N-tier Information Systems (IS) must be supported by a design methodology capable of responding to the convergence of shifts in business process design, organizational structure, and computing and telecommunications infrastructures. We introduce a contingency theoretic model for the use of open, ubiquitous software infrastructure in the design of flexible organizational IS. Our basic premise is that developers should change the way they view the software design process: from solving a problem to dynamically creating teams of software components. We postulate that developing effective, efficient, flexible, component-based distributed software requires reconceptualizing the current development model. The basic concepts of distributed software design are merged with the environment-causes-structure relationship from contingency theory; the task-uncertainty of organizational-information-processing relationships from information processing theory; and the concept of inter-process dependencies from coordination theory. Software processes are considered as employees, groups of processes as software teams, and distributed systems as software organizations. Design techniques already used in the design of flexible business processes and well researched in the domain of the organizational sciences are presented. Guidelines that can be utilized in the creation of component-based distributed software are discussed.
CrossTalk: The Journal of Defense Software Engineering. Volume 18, Number 9
2005-09-01
2004. 12. Humphrey, Watts. Introduction to the Personal Software Process. Addison-Wesley, 1997. 13. Humphrey, Watts. Introduction to the Team... The Personal Software Process (PSP) is a software development process originated by Watts Humphrey at the Software Engineering Institute (SEI) in the... meets its commitments and bring a sense of control and predictability into an apparently chaotic project. References 1. Humphrey, Watts. Coaching
A Bibliography of the Personal Software Process (PSP) and the Team Software Process (TSP)
2009-10-01
Postmortem." Proceedings of the TSP Symposium (September 2007). http://www.sei.cmu.edu/tspsymposium/ Rickets, Chris; Lindeman, Robert; & Hodgins, Brad... Rickets, Chris A. "A TSP Software Maintenance Life Cycle." CrossTalk (March 2005). Rozanc, I. & Mahnic, V. "Teaching Software Quality with Emphasis on PSP
A self-referential HOWTO on release engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galassi, Mark C.
Release engineering is a fundamental part of the software development cycle: it is the point at which quality control is exercised and bug fixes are integrated. The way in which software is released also gives the end user her first experience of a software package, while in scientific computing release engineering can guarantee reproducibility. For these reasons and others, the release process is a good indicator of the maturity and organization of a development team. Software teams often do not put in place a release process at the beginning. This is unfortunate because the team does not have early and continuous execution of test suites, and it does not exercise the software in the same conditions as the end users. I describe an approach to release engineering based on the software tools developed and used by the GNU project, together with several specific proposals related to packaging and distribution. I do this in a step-by-step manner, demonstrating how this very paper is written and built using proper release engineering methods. Because many aspects of release engineering are not exercised in the building of the paper, the accompanying software repository also contains examples of software libraries.
TOPEX Software Document Series. Volume 5; Rev. 1; TOPEX GDR Processing
NASA Technical Reports Server (NTRS)
Lee, Jeffrey; Lockwood, Dennis; Hancock, David W., III
2003-01-01
This document is a compendium of the WFF TOPEX Software Development Team's knowledge regarding Geophysical Data Record (GDR) Processing. It includes many elements of a requirements document, a software specification document, a software design document, and a user's manual. In the more technical sections, this document assumes the reader is familiar with TOPEX and instrument files.
Requirements Engineering in Building Climate Science Software
ERIC Educational Resources Information Center
Batcheller, Archer L.
2011-01-01
Software has an important role in supporting scientific work. This dissertation studies teams that build scientific software, focusing on the way that they determine what the software should do. These requirements engineering processes are investigated through three case studies of climate science software projects. The Earth System Modeling…
Requirements Engineering in Building Climate Science Software
NASA Astrophysics Data System (ADS)
Batcheller, Archer L.
Software has an important role in supporting scientific work. This dissertation studies teams that build scientific software, focusing on the way that they determine what the software should do. These requirements engineering processes are investigated through three case studies of climate science software projects. The Earth System Modeling Framework assists modeling applications, the Earth System Grid distributes data via a web portal, and the NCAR (National Center for Atmospheric Research) Command Language is used to convert, analyze and visualize data. Document analysis, observation, and interviews were used to investigate the requirements-related work. The first research question is about how and why stakeholders engage in a project, and what they do for the project. Two key findings arise. First, user counts are a vital measure of project success, which makes adoption important and makes counting tricky and political. Second, despite the importance of quantities of users, a few particular "power users" develop a relationship with the software developers and play a special role in providing feedback to the software team and integrating the system into user practice. The second research question focuses on how project objectives are articulated and how they are put into practice. The team seeks to both build a software system according to product requirements but also to conduct their work according to process requirements such as user support. Support provides essential communication between users and developers that assists with refining and identifying requirements for the software. It also helps users to learn and apply the software to their real needs. User support is a vital activity for scientific software teams aspiring to create infrastructure. The third research question is about how change in scientific practice and knowledge leads to changes in the software, and vice versa. The "thickness" of a layer of software infrastructure impacts whether the software team or users have control and responsibility for making changes in response to new scientific ideas. Thick infrastructure provides more functionality for users, but gives them less control of it. The stability of infrastructure trades off against the responsiveness that the infrastructure can have to user needs.
Quantitative CMMI Assessment for Offshoring through the Analysis of Project Management Repositories
NASA Astrophysics Data System (ADS)
Sunetnanta, Thanwadee; Nobprapai, Ni-On; Gotel, Olly
The nature of distributed teams and the existence of multiple sites in offshore software development projects pose a challenging setting for software process improvement. Often, the improvement and appraisal of software processes is achieved through a turnkey solution in which best practices are imposed or transferred from a company's headquarters to its offshore units. In this setting, successful project health checks and quality monitoring of software processes require strong project management skills, well-built onshore-offshore coordination, and often regular onsite visits by software process improvement consultants from the headquarters' team. This paper focuses on software process improvement as guided by the Capability Maturity Model Integration (CMMI) and proposes a model to evaluate the status of such improvement efforts in the context of distributed multi-site projects without some of this overhead. The paper discusses the application of quantitative CMMI assessment through the collection and analysis of project data gathered directly from project repositories to facilitate CMMI implementation and reduce the cost of such implementation for offshore-outsourced software development projects. We exemplify this approach to quantitative CMMI assessment through the analysis of project management data and discuss the future directions of this work in progress.
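As one hedged illustration of deriving quantitative indicators directly from project repositories, the sketch below computes an effort-estimation error baseline per site from hypothetical task records; the record format, sites, and numbers are assumptions, not the paper's actual model.

```python
# Illustrative sketch only: derives a simple process-performance baseline
# (effort-estimation error per site) from hypothetical project-repository
# records, as one conceivable quantitative input to a CMMI-style appraisal.
# The record format and values are assumptions, not the paper's model.

from statistics import mean, pstdev

def estimation_error_baseline(tasks):
    """tasks: iterable of (site, estimated_hours, actual_hours).
    Returns {site: (mean relative error, spread of relative error)}."""
    by_site = {}
    for site, est, actual in tasks:
        if est <= 0:
            continue  # skip records that cannot yield a relative error
        by_site.setdefault(site, []).append((actual - est) / est)
    return {s: (mean(errs), pstdev(errs)) for s, errs in by_site.items()}

if __name__ == "__main__":
    repo_data = [
        ("onshore", 10, 12), ("onshore", 8, 8), ("onshore", 20, 26),
        ("offshore", 15, 24), ("offshore", 5, 9),
    ]
    for site, (m, s) in estimation_error_baseline(repo_data).items():
        print(f"{site}: mean error {m:+.0%}, spread {s:.0%}")
```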
WFF TOPEX Software Documentation Altimeter Instrument File (AIF) Processing, October 1998. Volume 3
NASA Technical Reports Server (NTRS)
Lee, Jeffrey; Lockwood, Dennis
2003-01-01
This document is a compendium of the WFF TOPEX Software Development Team's knowledge regarding Sensor Data Record (SDR) Processing. It includes many elements of a requirements document, a software specification document, a software design document, and a user's manual. In the more technical sections, this document assumes the reader is familiar with TOPEX and instrument files.
Cleanroom Software Engineering Reference Model. Version 1.0.
1996-11-01
teams. It also serves as a baseline for continued evolution of Cleanroom practice. The scope of the CRM is software management, specification... In addition to project staff, participants include management, peer organization representatives, and customer representatives as appropriate for... 2 Review the status of the process with management, the project team, peer groups, and the customer. These verification activities include
NASA Technical Reports Server (NTRS)
Mahmot, Ron; Koslosky, John T.; Beach, Edward; Schwarz, Barbara
1994-01-01
The Mission Operations Division (MOD) at Goddard Space Flight Center builds Mission Operations Centers, which are used by Flight Operations Teams to monitor and control satellites. Reducing system life cycle costs through software reuse has always been a priority of the MOD. The MOD's Transportable Payload Operations Control Center (TPOCC) development team established an extensive library of 14 subsystems with over 100,000 delivered source instructions of reusable, generic software components. To date, nine TPOCC-based control centers support 11 satellites and have achieved an average software reuse level of more than 75 percent. This paper shares experiences of how the TPOCC building blocks were developed and how building block developers, mission development teams, and users are all part of the process.
TOPEX SDR Processing, October 1998. Volume 4
NASA Technical Reports Server (NTRS)
Lee, Jeffrey E.; Lockwood, Dennis W.
2003-01-01
This document is a compendium of the WFF TOPEX Software Development Team's knowledge regarding Sensor Data Record (SDR) Processing. It includes many elements of a requirements document, a software specification document, a software design document, and a user's manual. In the more technical sections, this document assumes the reader is familiar with TOPEX and instrument files.
Why and how Mastering an Incremental and Iterative Software Development Process
NASA Astrophysics Data System (ADS)
Dubuc, François; Guichoux, Bernard; Cormery, Patrick; Mescam, Jean Christophe
2004-06-01
One of the key issues regularly mentioned in the current software crisis of the space domain is related to the software development process that must be performed while the system definition is not yet frozen. This is especially true for complex systems like launchers or space vehicles. Several more or less mature solutions are under study by EADS SPACE Transportation and are presented in this paper. The basic principle is to develop the software through an iterative and incremental process instead of the classical waterfall approach, with the following advantages: - It permits systematic management and incorporation of requirements changes over the development cycle at minimal cost. As far as possible, the most dimensioning requirements are analyzed and developed in priority, so that the architecture concept is validated very early without the details. - A software prototype is available very quickly. It improves the communication between system and software teams, as it enables a very early and efficient check of the common understanding of the system requirements. - It allows the software team to complete a whole development cycle very early, and thus to become quickly familiar with the software development environment (methodology, technology, tools...). This is particularly important when the team is new, or when the environment has changed since the previous development. In any case, it greatly improves the learning curve of the software team. These advantages seem very attractive, but mastering an iterative development process efficiently is not so easy and raises many difficulties, such as: - How to freeze one configuration of the system definition as a development baseline, while most of the system requirements are completely and naturally unstable? - How to distinguish stable/unstable and dimensioning/standard requirements? - How to plan the development of each increment? - How to link classical waterfall development milestones with an iterative approach: when should the classical reviews be performed: Software Specification Review? Preliminary Design Review? Critical Design Review? Code Review? Etc. Several solutions envisaged or already deployed by EADS SPACE Transportation are presented, both from a methodological and a technological point of view: - How the MELANIE EADS ST internal methodology improves the concurrent engineering activities between GNC, software, and simulation teams in a very iterative and reactive way. - How the CMM approach can help by better formalizing the Requirements Management and Planning processes. - How Automatic Code Generation with "certified" tools (SCADE) can still dramatically shorten the development cycle. The presentation concludes with an evaluation of the cost and schedule reduction based on a pilot application, comparing figures from two similar projects: one with the classical waterfall process, the other with an iterative and incremental approach.
TMT approach to observatory software development process
NASA Astrophysics Data System (ADS)
Buur, Hanne; Subramaniam, Annapurni; Gillies, Kim; Dumas, Christophe; Bhatia, Ravinder
2016-07-01
The purpose of the Observatory Software System (OSW) is to integrate all software and hardware components of the Thirty Meter Telescope (TMT) to enable observations and data capture; thus it is a complex software system that is defined by four principal software subsystems: Common Software (CSW), Executive Software (ESW), Data Management System (DMS) and Science Operations Support System (SOSS), all of which have interdependencies with the observatory control systems and data acquisition systems. Therefore, the software development process and plan must consider dependencies to other subsystems, manage architecture, interfaces and design, manage software scope and complexity, and standardize and optimize use of resources and tools. Additionally, the TMT Observatory Software will largely be developed in India through TMT's workshare relationship with the India TMT Coordination Centre (ITCC) and use of Indian software industry vendors, which adds complexity and challenges to the software development process, communication and coordination of activities and priorities as well as measuring performance and managing quality and risk. The software project management challenge for the TMT OSW is thus a multi-faceted technical, managerial, communications and interpersonal relations challenge. The approach TMT is using to manage this multifaceted challenge is a combination of establishing an effective geographically distributed software team (Integrated Product Team) with strong project management and technical leadership provided by the TMT Project Office (PO) and the ITCC partner to manage plans, process, performance, risk and quality, and to facilitate effective communications; establishing an effective cross-functional software management team composed of stakeholders, OSW leadership and ITCC leadership to manage dependencies and software release plans, technical complexities and change to approved interfaces, architecture, design and tool set, and to facilitate effective communications; adopting an agile-based software development process across the observatory to enable frequent software releases to help mitigate subsystem interdependencies; defining concise scope and work packages for each of the OSW subsystems to facilitate effective outsourcing of software deliverables to the ITCC partner, and to enable performance monitoring and risk management. At this stage, the architecture and high-level design of the software system has been established and reviewed. During construction each subsystem will have a final design phase with reviews, followed by implementation and testing. The results of the TMT approach to the Observatory Software development process will only be preliminary at the time of the submittal of this paper, but it is anticipated that the early results will be a favorable indication of progress.
Predicting Software Suitability Using a Bayesian Belief Network
NASA Technical Reports Server (NTRS)
Beaver, Justin M.; Schiavone, Guy A.; Berrios, Joseph S.
2005-01-01
The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian Belief Networks, a machine learning method. This research presents a Bayesian Network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.
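To illustrate the idea, the sketch below hand-rolls a tiny Bayesian belief network over the three driving factors named in the abstract (team skill, process maturity, problem complexity) and infers the probability that the product is suitable; every probability value here is invented for the example, and the paper's actual network is considerably richer.

```python
# Illustrative sketch only: a tiny hand-rolled Bayesian belief network relating
# team skill, process maturity, and problem complexity to product suitability.
# The structure mirrors the factors named in the abstract, but every probability
# is invented for illustration; this is not the authors' model.

from itertools import product

# Priors over the three driving factors (values: "low" / "high").
P_SKILL      = {"low": 0.4, "high": 0.6}
P_MATURITY   = {"low": 0.5, "high": 0.5}
P_COMPLEXITY = {"low": 0.7, "high": 0.3}

def p_suitable(skill, maturity, complexity):
    """P(product is suitable | parents) -- invented conditional probabilities."""
    base = 0.25
    base += 0.30 if skill == "high" else 0.0
    base += 0.25 if maturity == "high" else 0.0
    base -= 0.15 if complexity == "high" else 0.0
    return min(max(base, 0.05), 0.95)

def posterior_suitability(evidence):
    """P(suitable | evidence) by brute-force enumeration over the parents.
    evidence: dict that may fix 'skill', 'maturity', or 'complexity'."""
    num = den = 0.0
    for s, m, c in product(P_SKILL, P_MATURITY, P_COMPLEXITY):
        if any(evidence.get(k) not in (None, v)
               for k, v in (("skill", s), ("maturity", m), ("complexity", c))):
            continue  # configuration inconsistent with the observed evidence
        joint = P_SKILL[s] * P_MATURITY[m] * P_COMPLEXITY[c]
        num += joint * p_suitable(s, m, c)
        den += joint
    return num / den

if __name__ == "__main__":
    print(posterior_suitability({}))                                     # prior belief
    print(posterior_suitability({"skill": "high", "maturity": "high"}))  # updated belief
```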
Developing high-quality educational software.
Johnson, Lynn A; Schleyer, Titus K L
2003-11-01
The development of effective educational software requires a systematic process executed by a skilled development team. This article describes the core skills required of the development team members for the six phases of successful educational software development. During analysis, the foundation of product development is laid including defining the audience and program goals, determining hardware and software constraints, identifying content resources, and developing management tools. The design phase creates the specifications that describe the user interface, the sequence of events, and the details of the content to be displayed. During development, the pieces of the educational program are assembled. Graphics and other media are created, video and audio scripts written and recorded, the program code created, and support documentation produced. Extensive testing by the development team (alpha testing) and with students (beta testing) is conducted. Carefully planned implementation is most likely to result in a flawless delivery of the educational software and maintenance ensures up-to-date content and software. Due to the importance of the sixth phase, evaluation, we have written a companion article on it that follows this one. The development of a CD-ROM product is described including the development team, a detailed description of the development phases, and the lessons learned from the project.
Software Quality Assurance Metrics
NASA Technical Reports Server (NTRS)
McRae, Kalindra A.
2004-01-01
Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements and a degree of excellence and refinement of a project or product. Software quality is a set of attributes of a software product by which its quality is described and evaluated. The set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software metrics help us understand the technical process that is used to develop a product. The process is measured to improve it, and the product is measured to increase quality throughout the life cycle of software. Software metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of software development can be measured. If software metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA metrics that have been used in other projects and are not currently being used by the SA team, and report them to the Software Assurance team to see if any metrics can be implemented in their software assurance life cycle process.
Architecture-Centric Development in Globally Distributed Projects
NASA Astrophysics Data System (ADS)
Sauer, Joachim
In this chapter architecture-centric development is proposed as a means to strengthen the cohesion of distributed teams and to tackle challenges due to geographical and temporal distances and the clash of different cultures. A shared software architecture serves as blueprint for all activities in the development process and ties them together. Architecture-centric development thus provides a plan for task allocation, facilitates the cooperation of globally distributed developers, and enables continuous integration reaching across distributed teams. Advice is also provided for software architects who work with distributed teams in an agile manner.
ERIC Educational Resources Information Center
Chen, Chung-Yang; Hong, Ya-Chun; Chen, Pei-Chi
2014-01-01
Software development relies heavily on teamwork; determining how to streamline this collaborative development is an essential training subject in computer and software engineering education. A team process known as the meetings-flow (MF) approach has recently been introduced in software capstone projects in engineering programs at various…
Experiences Supporting the Lunar Reconnaissance Orbiter Camera: the Devops Model
NASA Astrophysics Data System (ADS)
Licht, A.; Estes, N. M.; Bowman-Cisneros, E.; Hanger, C. D.
2013-12-01
Introduction: The Lunar Reconnaissance Orbiter Camera (LROC) Science Operations Center (SOC) is responsible for instrument targeting, product processing, and archiving [1]. The LROC SOC maintains over 1,000,000 observations with over 300 TB of released data. Processing challenges compound with the acquisition of over 400 Gbits of observations daily, creating the need for a robust, efficient, and reliable suite of specialized software. Development Environment: The LROC SOC's software development methodology has evolved over time. Today, the development team operates in close cooperation with the systems administration team in a model known in the IT industry as DevOps. The DevOps model enables a highly productive development environment that facilitates accomplishment of key goals within tight schedules [2]. The LROC SOC DevOps model incorporates industry best practices including prototyping, continuous integration, unit testing, code coverage analysis, version control, and utilizing existing open source software. Scientists and researchers at LROC often prototype algorithms and scripts in a high-level language such as MATLAB or IDL. After the prototype is functionally complete, the solution is implemented as production-ready software by the developers. Following this process ensures that all controls and requirements set by the LROC SOC DevOps team are met. The LROC SOC also strives to enhance the efficiency of the operations staff by way of weekly presentations and informal mentoring. Many small scripting tasks are assigned to the cognizant operations personnel (end users), allowing the DevOps team to focus on more complex and mission-critical tasks. In addition to leveraging open source software, the LROC SOC has also contributed to the open source community by releasing Lunaserv [3]. Findings: The DevOps software model very efficiently provides smooth software releases and maintains team momentum. Having scientists prototype their work has proven to be very efficient, as developers do not need to spend time iterating over small changes. Instead, these changes are realized in early prototypes and implemented before the task is seen by developers. The development practices followed by the LROC SOC DevOps team help facilitate the high level of software quality that is necessary for LROC SOC operations. Application to the Scientific Community: There is no replacement for having software developed by professional developers. While it is beneficial for scientists to write software, this activity should be seen as prototyping, which is then made production-ready by professional developers. When constructed properly, even a small development team has the ability to increase the rate of software development for a research group while creating more efficient, reliable, and maintainable products. This strategy allows scientists to accomplish more, focusing on teamwork rather than software development, which may not be their primary focus. 1. Robinson et al. (2010) Space Sci. Rev. 150, 81-124. 2. DeGrandis (2011) Cutter IT Journal, Vol 24, No. 8, 34-39. 3. Estes, N.M.; Hanger, C.D.; Licht, A.A.; Bowman-Cisneros, E.; Lunaserv Web Map Service: History, Implementation Details, Development, and Uses, http://adsabs.harvard.edu/abs/2013LPICo1719.2609E.
A Brief Survey of the Team Software ProcessSM (TSPSM)
2011-10-24
spent more than 20 years in industry as a software engineer, system designer, project leader, and development manager working on control systems... InnerWorkings, Inc.; Instituto Tecnologico y de Estudios Superiores de Monterrey; Siemens AG; SILAC Ingenieria de Software S.A. de C.V.
Empirical studies of software design: Implications for SSEs
NASA Technical Reports Server (NTRS)
Krasner, Herb
1988-01-01
Implications for Software Engineering Environments (SEEs) are presented in viewgraph format for characteristics of projects studied; significant problems and crucial problem areas in software design for large systems; layered behavioral model of software processes; implications of field study results; software project as an ecological system; results of the LIFT study; information model of design exploration; software design strategies; results of the team design study; and a list of publications.
Production Techniques for Computer-Based Learning Material.
ERIC Educational Resources Information Center
Moonen, Jef; Schoenmaker, Jan
Experiences in the development of educational software in the Netherlands have included the use of individual and team approaches, the determination of software content and how it should be presented, and the organization of the entire development process, from experimental programs to prototype to final product. Because educational software is a…
Team Software Process (TSP) Coach Mentoring Program Guidebook
2009-08-01
SEI TSP Initiative Team. • All training was conducted in English only, and observations were limited to English-speaking coaches and teams. The... Certified TSP Mentor Coach programs also enable the expansion of TSP implementation to non-English-speaking teams and organizations. This program also... Communication: Needs Significant Improvement / Could Benefit from Development / Capable and Effective Role Model. 1. I listen before speaking. 2. I
Using SFOC to fly the Magellan Venus mapping mission
NASA Technical Reports Server (NTRS)
Bucher, Allen W.; Leonard, Robert E., Jr.; Short, Owen G.
1993-01-01
Traditionally, spacecraft flight operations at the Jet Propulsion Laboratory (JPL) have been performed by teams of spacecraft experts utilizing ground software designed specifically for the current mission. The Jet Propulsion Laboratory set out to reduce the cost of spacecraft mission operations by designing ground data processing software that could be used by multiple spacecraft missions, either sequentially or concurrently. The Space Flight Operations Center (SFOC) System was developed to provide the ground data system capabilities needed to monitor several spacecraft simultaneously and provide enough flexibility to meet the specific needs of individual projects. The Magellan Spacecraft Team utilizes the SFOC hardware and software designed for engineering telemetry analysis, both real-time and non-real-time. The flexibility of the SFOC System has allowed the spacecraft team to integrate their own tools with SFOC tools to perform the tasks required to operate a spacecraft mission. This paper describes how the Magellan Spacecraft Team is utilizing the SFOC System in conjunction with their own software tools to perform the required tasks of spacecraft event monitoring as well as engineering data analysis and trending.
Implementing Kanban for agile process management within the ALMA Software Operations Group
NASA Astrophysics Data System (ADS)
Reveco, Johnny; Mora, Matias; Shen, Tzu-Chiang; Soto, Ruben; Sepulveda, Jorge; Ibsen, Jorge
2014-07-01
After the inauguration of the Atacama Large Millimeter/submillimeter Array (ALMA), the Software Operations Group in Chile has refocused its objectives on: (1) providing software support to tasks related to System Integration, Scientific Commissioning and Verification, as well as Early Science observations; (2) testing the remaining software features still under development by the Integrated Computing Team across the world; and (3) designing and developing processes to optimize and increase the level of automation of operational tasks. Because of their different stakeholders, these tasks vary widely in importance, lifespan, and complexity. Aiming to provide the proper priority and traceability for every task without stressing our engineers, we introduced the Kanban methodology into our processes in order to balance the demand on the team against the throughput of the delivered work. The aim of this paper is to share experiences gained during the implementation of Kanban in our processes, describing the difficulties we found and the solutions and adaptations that led to our current but still evolving implementation, which has greatly improved our throughput, prioritization, and problem traceability.
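A minimal sketch of the core Kanban mechanism described above, a board whose columns enforce work-in-progress limits so that demand is balanced against throughput; the column names, limits, and tasks are hypothetical and do not reflect the ALMA group's actual configuration or tooling.

```python
# Illustrative sketch only: a minimal Kanban board with per-column work-in-progress
# (WIP) limits, showing how pull-based flow caps the demand on a team. Column names
# and limits are hypothetical, not the ALMA group's configuration.

class KanbanBoard:
    def __init__(self, wip_limits):
        # wip_limits: ordered {column_name: max concurrent tasks (None = unlimited)}
        self.columns = {name: [] for name in wip_limits}
        self.wip_limits = wip_limits

    def add(self, task, column="backlog"):
        self._check_capacity(column)
        self.columns[column].append(task)

    def pull(self, task, src, dst):
        """Move a task downstream only if the destination column has capacity."""
        self._check_capacity(dst)
        self.columns[src].remove(task)
        self.columns[dst].append(task)

    def _check_capacity(self, column):
        limit = self.wip_limits[column]
        if limit is not None and len(self.columns[column]) >= limit:
            raise RuntimeError(f"WIP limit reached for '{column}' ({limit})")

if __name__ == "__main__":
    board = KanbanBoard({"backlog": None, "in_progress": 2, "done": None})
    for t in ["fix pipeline", "automate report", "test feature"]:
        board.add(t)
    board.pull("fix pipeline", "backlog", "in_progress")
    board.pull("automate report", "backlog", "in_progress")
    try:
        board.pull("test feature", "backlog", "in_progress")  # exceeds WIP limit
    except RuntimeError as e:
        print(e)
```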
A Quantitative Study of Global Software Development Teams, Requirements, and Software Projects
ERIC Educational Resources Information Center
Parker, Linda L.
2016-01-01
The study explored the relationship between global software development teams, effective software requirements, and stakeholders' perception of successful software development projects within the field of information technology management. It examined the critical relationship between Global Software Development (GSD) teams creating effective…
From Prime to Extended Mission: Evolution of the MER Tactical Uplink Process
NASA Technical Reports Server (NTRS)
Mishkin, Andrew H.; Laubach, Sharon
2006-01-01
To support a 90-day surface mission for two robotic rovers, the Mars Exploration Rover mission designed and implemented an intensive tactical operations process, enabling daily commanding of each rover. Using a combination of new processes, custom software tools, a Mars-time staffing schedule, and seven-day-a-week operations, the MER team was able to compress the traditional weeks-long command-turnaround for a deep space robotic mission to about 18 hours. However, the pace of this process was never intended to be continued indefinitely. Even before the end of the three-month prime mission, MER operations began evolving towards greater sustainability. A combination of continued software tool development, increasing team experience, and availability of reusable sequences first reduced the mean process duration to approximately 11 hours. The number of workshifts required to perform the process dropped, and the team returned to a modified 'Earth-time' schedule. Additional process and tool adaptation eventually provided the option of planning multiple Martian days of activity within a single workshift, making 5-day-a-week operations possible. The vast majority of the science team returned to their home institutions, continuing to participate fully in the tactical operations process remotely. MER has continued to operate for over two Earth-years as many of its key personnel have moved on to other projects, the operations team and budget have shrunk, and the rovers have begun to exhibit symptoms of aging.
Integrating CMMI and TSP/PSP: Using TSP Data to Create Process Performance Models
2009-11-01
Humphrey, Watts S. PSP: A Self-Improvement Process for Software Engineers. Addison-Wesley, 2005. http://www.sei.cmu.edu/library/abstracts/books ... Engineering. Addison-Wesley, 2002. [Humphrey 00] Humphrey, Watts S. The Personal Software Process (PSP) (CMU/SEI-2000-TR-022, ADA387268). Pittsburgh ... 0321305493.cfm [Humphrey 06a] Humphrey, W. S. TSP: Leading a Development Team. Addison-Wesley, 2006.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turgeon, Jennifer L.; Minana, Molly A.; Hackney, Patricia
2009-01-01
The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in the US Department of Energy/National Nuclear Security Administration (DOE/NNSA) Quality Criteria, Revision 10 (QC-1) as 'conformance to customer requirements and expectations'. This quality plan defines the SNL ASC Program software quality engineering (SQE) practices and provides a mapping of these practices to the SNL Corporate Process Requirement (CPR) 001.3.6, 'Corporate Software Engineering Excellence'. This plan also identifies ASC management's and the software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals. This SNL ASC Software Quality Plan establishes the signatories' commitments to improving software products by applying cost-effective SQE practices. This plan enumerates the SQE practices that comprise the development of SNL ASC's software products and explains the project teams' opportunities for tailoring and implementing the practices.
Project Report: Automatic Sequence Processor Software Analysis
NASA Technical Reports Server (NTRS)
Benjamin, Brandon
2011-01-01
The Mission Planning and Sequencing (MPS) element of Multi-Mission Ground System and Services (MGSS) provides space missions with multi-purpose software to plan spacecraft activities, sequence spacecraft commands, and then integrate these products and execute them on spacecraft. The Jet Propulsion Laboratory (JPL) is currently flying many missions. The processes for building, integrating, and testing the multi-mission uplink software need to be improved to meet the needs of the missions and the operations teams that command the spacecraft. The Multi-Mission Sequencing Team is responsible for collecting and processing the observations, experiments, and engineering activities that are to be performed on a selected spacecraft. The collection of these activities is called a sequence, and ultimately a sequence becomes a sequence of spacecraft commands. The operations teams check the sequence to make sure that no constraints are violated. The workflow process involves sending a program start command, which activates the Automatic Sequence Processor (ASP). The ASP is currently a file-based system composed of scripts written in Perl, C shell, and awk. Once this start process is complete, the system checks for errors and aborts if there are any; otherwise the system converts the commands to binary and then sends the resultant information to be radiated to the spacecraft.
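The ASP workflow described above (start command, error check, abort on errors, otherwise convert to binary and queue for radiation) can be pictured with a small, hedged Python sketch. The real ASP is a file-based system of Perl, C-shell, and awk scripts; the file names, constraint check, and "binary conversion" below are placeholders, not the actual MPS implementation.

```python
import sys
from pathlib import Path

def check_constraints(sequence_file: Path):
    """Hypothetical constraint check: flag any command line marked 'VIOLATION'."""
    return [line for line in sequence_file.read_text().splitlines()
            if "VIOLATION" in line]

def convert_to_binary(sequence_file: Path, out_file: Path):
    """Stand-in for the command-to-binary translation step."""
    out_file.write_bytes(sequence_file.read_text().encode("utf-8"))

def run_asp(sequence_file: Path, radiate_queue: Path) -> int:
    errors = check_constraints(sequence_file)
    if errors:
        print(f"ASP abort: {len(errors)} constraint violation(s)", file=sys.stderr)
        return 1                            # abort: nothing is queued for radiation
    binary = sequence_file.with_suffix(".bin")
    convert_to_binary(sequence_file, binary)
    radiate_queue.write_text(str(binary))   # hand off for uplink/radiation
    print(f"ASP complete: {binary} queued for radiation")
    return 0

if __name__ == "__main__":
    seq = Path("sequence.txt")
    seq.write_text("CMD turn_on_instrument\nCMD take_image\n")
    sys.exit(run_asp(seq, Path("radiate.queue")))
```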
Introduction to the Navigation Team: Johnson Space Center EG6 Internship
NASA Technical Reports Server (NTRS)
Gualdoni, Matthew
2017-01-01
The EG6 navigation team at NASA Johnson Space Center, like any team of engineers, interacts with the engineering process from beginning to end; from exploring solutions to a problem, to prototyping and studying the implementations, all the way to polishing and verifying a final flight-ready design. This summer, I was privileged enough to gain exposure to each of these processes, while also getting to truly experience working within a team of engineers. My summer can be broken up into three projects: i) Initial study and prototyping: investigating a manual navigation method that can be utilized onboard Orion in the event of catastrophic failure of navigation systems; ii) Finalizing and verifying code: altering a software routine to improve its robustness and reliability, as well as designing unit tests to verify its performance; and iii) Development of testing equipment: assisting in developing and integrating of a high-fidelity testbed to verify the performance of software and hardware.
The image-guided surgery toolkit IGSTK: an open source C++ software toolkit.
Enquobahrie, Andinet; Cheng, Patrick; Gary, Kevin; Ibanez, Luis; Gobbi, David; Lindseth, Frank; Yaniv, Ziv; Aylward, Stephen; Jomier, Julien; Cleary, Kevin
2007-11-01
This paper presents an overview of the image-guided surgery toolkit (IGSTK). IGSTK is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications. It is intended for fast prototyping and development of image-guided surgery applications. The toolkit was developed through a collaboration between academic and industry partners. Because IGSTK was designed for safety-critical applications, the development team has adopted lightweight software processes that emphasize safety and robustness while, at the same time, supporting geographically separated developers. A software process that is philosophically similar to agile software methods was adopted, emphasizing iterative, incremental, and test-driven development principles. The guiding principle in the architecture design of IGSTK is patient safety. The IGSTK team implemented a component-based architecture and used state machine software design methodologies to improve the reliability and safety of the components. Every IGSTK component has a well-defined set of features that are governed by state machines. The state machine ensures that the component is always in a valid state and that all state transitions are valid and meaningful. Realizing that the continued success and viability of an open source toolkit depends on a strong user community, the IGSTK team is following several key strategies to build an active user community. These include maintaining users' and developers' mailing lists, providing documentation (application programming interface reference document and book), presenting demonstration applications, and delivering tutorial sessions at relevant scientific conferences.
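The state-machine idea described above can be illustrated with a short sketch. This is not IGSTK's C++ API; the component, states, and events are invented, but the pattern is the same: only transitions listed in a table are accepted, so the component can never leave the set of valid states.

```python
from enum import Enum, auto

class TrackerState(Enum):
    IDLE = auto()
    INITIALIZED = auto()
    TRACKING = auto()

# Only these (state, event) pairs are valid; anything else is rejected.
TRANSITIONS = {
    (TrackerState.IDLE, "initialize"): TrackerState.INITIALIZED,
    (TrackerState.INITIALIZED, "start_tracking"): TrackerState.TRACKING,
    (TrackerState.TRACKING, "stop_tracking"): TrackerState.INITIALIZED,
}

class TrackerComponent:
    def __init__(self):
        self.state = TrackerState.IDLE

    def handle(self, event: str) -> bool:
        nxt = TRANSITIONS.get((self.state, event))
        if nxt is None:
            # Invalid request: stay in the current (still valid) state.
            print(f"rejected '{event}' in state {self.state.name}")
            return False
        print(f"{self.state.name} --{event}--> {nxt.name}")
        self.state = nxt
        return True

tracker = TrackerComponent()
tracker.handle("start_tracking")   # rejected: not initialized yet
tracker.handle("initialize")
tracker.handle("start_tracking")
```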
The Effects of Development Team Skill on Software Product Quality
NASA Technical Reports Server (NTRS)
Beaver, Justin M.; Schiavone, Guy A.
2006-01-01
This paper provides an analysis of the effect of the skill/experience of the software development team on the quality of the final software product. A method for the assessment of software development team skill and experience is proposed, and was derived from a workforce management tool currently in use by the National Aeronautics and Space Administration. Using data from 26 small-scale software development projects, the team skill measures are correlated to 5 software product quality metrics from the ISO/IEC 9126 Software Engineering Product Quality standard. In the analysis of the results, development team skill is found to be a significant factor in the adequacy of the design and implementation. In addition, the results imply that inexperienced software developers are tasked with responsibilities ill-suited to their skill level, and thus have a significant adverse effect on the quality of the software product. Keywords: software quality, development skill, software metrics
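A minimal sketch of the kind of analysis described, correlating a team skill score against a product quality metric. The data values are invented for illustration and are not the 26 projects or the ISO/IEC 9126 metrics used in the paper.

```python
from statistics import mean, stdev

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Invented example: team skill score vs. defect density (defects/KSLOC).
skill_scores   = [2.1, 3.4, 3.8, 4.5, 2.9, 4.9, 3.1, 4.2]
defect_density = [9.5, 6.2, 5.8, 3.1, 7.4, 2.5, 6.9, 3.8]

r = pearson(skill_scores, defect_density)
print(f"skill vs. defect density: r = {r:.2f}")   # strongly negative in this toy data
```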
IMSF: Infinite Methodology Set Framework
NASA Astrophysics Data System (ADS)
Ota, Martin; Jelínek, Ivan
Software development is usually an integration task in an enterprise environment - few software applications work autonomously now. It is usually a collaboration of heterogeneous and unstable teams. One serious problem is a lack of resources; popular responses are outsourcing and 'body shopping', which indirectly cause team and team-member fluctuation. Outsourced sub-deliveries easily become black boxes with no clear development method used, which has a negative impact on supportability. Such environments then often face the problems of quality assurance and enterprise know-how management. The methodology used is one of the key factors. Each methodology was created as a generalization of a number of solved projects, and each methodology is thus more or less connected with a set of task types. When the task type is not suitable, it causes problems that usually result in an undocumented ad hoc solution. This was the motivation behind formalizing a simple process for collaborative software engineering. The Infinite Methodology Set Framework (IMSF) defines the ICT business process of adaptive use of methods for classified types of tasks. The article introduces IMSF and briefly comments on its meta-model.
NA-42 TI Shared Software Component Library FY2011 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knudson, Christa K.; Rutz, Frederick C.; Dorow, Kevin E.
The NA-42 TI program initiated an effort in FY2010 to standardize its software development efforts with the long term goal of migrating toward a software management approach that will allow for the sharing and reuse of code developed within the TI program, improve integration, ensure a level of software documentation, and reduce development costs. The Pacific Northwest National Laboratory (PNNL) has been tasked with two activities that support this mission. PNNL has been tasked with the identification, selection, and implementation of a Shared Software Component Library. The intent of the library is to provide a common repository that is accessible by all authorized NA-42 software development teams. The repository facilitates software reuse through a searchable and easy to use web based interface. As software is submitted to the repository, the component registration process captures meta-data and provides version control for compiled libraries, documentation, and source code. This meta-data is then available for retrieval and review as part of library search results. In FY2010, PNNL and staff from the Remote Sensing Laboratory (RSL) teamed up to develop a software application with the goal of replacing the aging Aerial Measuring System (AMS). The application under development includes an Advanced Visualization and Integration of Data (AVID) framework and associated AMS modules. Throughout development, PNNL and RSL have utilized a common AMS code repository for collaborative code development. The AMS repository is hosted by PNNL, is restricted to the project development team, is accessed via two different geographic locations and continues to be used. The knowledge gained from the collaboration and hosting of this repository in conjunction with PNNL software development and systems engineering capabilities were used in the selection of a package to be used in the implementation of the software component library on behalf of NA-42 TI. The second task managed by PNNL is the development and continued maintenance of the NA-42 TI Software Development Questionnaire. This questionnaire is intended to help software development teams working under NA-42 TI in documenting their development activities. When sufficiently completed, the questionnaire illustrates that the software development activities recorded incorporate significant aspects of the software engineering lifecycle. The questionnaire template is updated as comments are received from NA-42 and/or its development teams and revised versions distributed to those using the questionnaire. PNNL also maintains a list of questionnaire recipients. The blank questionnaire template, the AVID and AMS software being developed, and the completed AVID AMS specific questionnaire are being used as the initial content to be established in the TI Component Library. This report summarizes the approach taken to identify requirements, search for and evaluate technologies, and the approach taken for installation of the software needed to host the component library. Additionally, it defines the process by which users request access for the contribution and retrieval of library content.
Team Software Process (TSP) Coach Mentoring Program Guidebook Version 1.1
2010-06-01
All training was conducted in English only, and observations were limited to English-speaking coaches and teams. The SEI-Certified TSP Coach ... programs also enable the expansion of TSP implementation to non-English-speaking teams and organizations. This expanded capacity for qualifying candidate ... Needs Significant Improvement / Could Benefit from Development / Capable and Effective / Role Model. 1. I listen before speaking. 2. I demonstrate persuasiveness in ...
NASA Technical Reports Server (NTRS)
Jefferys, S.; Johnson, W.; Lewis, R.; Rich, R.
1981-01-01
This specification establishes the requirements, concepts, and preliminary design for a set of software known as the IGDS/TRAP Interface Program (ITIP). This software provides the capability to develop, at an Interactive Graphics Design System (IGDS) design station, process flow diagrams for use by the NASA Coal Gasification Task Team. In addition, ITIP will use the Data Management and Retrieval System (DMRS) to maintain a data base from which a properly formatted input file to the Time-Line and Resources Analysis Program (TRAP) can be extracted. This set of software will reside on the PDP-11/70 and will become the primary interface between the Coal Gasification Task Team and IGDS, DMRS, and TRAP. The user manual for the computer program is presented.
Global Situational Awareness with Free Tools
2015-01-15
Client Technical Solutions • Software Engineering Measurement and Analysis • Architecture Practices • Product Line Practice • Team Software Process ... multiple data sources • Snort (Snorby on Security Onion) • Nagios • SharePoint RSS • Flow • Others • Leverage standard data formats • Keyhole Markup Language
Computer Software Configuration Item-Specific Flight Software Image Transfer Script Generator
NASA Technical Reports Server (NTRS)
Bolen, Kenny; Greenlaw, Ronald
2010-01-01
A K-shell UNIX script enables the International Space Station (ISS) Flight Control Team (FCT) operators in NASA's Mission Control Center (MCC) in Houston to transfer an entire or partial computer software configuration item (CSCI) from a flight software compact disk (CD) to the onboard Portable Computer System (PCS). The tool is designed to read the content stored on a flight software CD and generate individual CSCI transfer scripts that are capable of transferring the flight software content in a given subdirectory on the CD to the scratch directory on the PCS. The flight control team can then transfer the flight software from the PCS scratch directory to the Electronically Erasable Programmable Read Only Memory (EEPROM) of an ISS Multiplexer/Demultiplexer (MDM) via the Indirect File Transfer capability. The individual CSCI scripts and the CSCI Specific Flight Software Image Transfer Script Generator (CFITSG), when executed a second time, will remove all components from their original execution. The tool will identify errors in the transfer process and create logs of the transferred software for the purposes of configuration management.
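A hedged sketch of the generator idea: scan the CD image for CSCI subdirectories and emit one transfer script per CSCI. The real tool is a K-shell script; the directory layout, PCS path, and generated commands below are placeholders, not the actual flight procedure.

```python
from pathlib import Path

def generate_transfer_scripts(cd_root: Path, out_dir: Path):
    """For each CSCI subdirectory on the flight software CD image, emit a
    small shell script that copies that subdirectory to a scratch area."""
    out_dir.mkdir(parents=True, exist_ok=True)
    scripts = []
    for csci_dir in sorted(p for p in cd_root.iterdir() if p.is_dir()):
        script = out_dir / f"transfer_{csci_dir.name}.sh"
        lines = ["#!/bin/sh",
                 f"# Transfer CSCI '{csci_dir.name}' to the PCS scratch directory (illustrative path)",
                 f"cp -R '{csci_dir}'/* /pcs/scratch/{csci_dir.name}/"]
        script.write_text("\n".join(lines) + "\n")
        scripts.append(script)
    return scripts

if __name__ == "__main__":
    cd = Path("cd_image")
    for name in ("CSCI_A", "CSCI_B"):           # fake CD content for the demo
        (cd / name).mkdir(parents=True, exist_ok=True)
    for s in generate_transfer_scripts(cd, Path("scripts")):
        print("generated", s)
```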
Considerations for Using Agile in DoD Acquisition
2010-04-01
successfully used in manufacturing throughout the world for decades, such as "just-in-time," Lean, Kanban, and work-flow-based planning. Another new ... of this analysis is provided in Table 2. ... Kanban/lean style of Agile might be the most relevant for this phase. ... family of approaches, including Kanban [14], Rational Unified Process (RUP), Personal Software Process (PSP), Team Software Process (TSP), and Cleanroom
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minana, Molly A.; Sturtevant, Judith E.; Heaphy, Robert
2005-01-01
The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in DOE/AL Quality Criteria (QC-1) as conformance to customer requirements and expectations. This quality plan defines the ASC program software quality practices and provides mappings of these practices to the SNL Corporate Process Requirements (CPR 1.3.2 and CPR 1.3.6) and the Department of Energy (DOE) document, ASCI Software Quality Engineering: Goals, Principles, and Guidelines (GP&G). This quality plan identifies ASC management's and software project teams' responsibilities for cost-effective software engineering quality practices. The SNL ASC Software Quality Plan establishes the signatories' commitment to improving software products by applying cost-effective software engineering quality practices. This document explains the project teams' opportunities for tailoring and implementing the practices; enumerates the practices that compose the development of SNL ASC's software products; and includes a sample assessment checklist that was developed based upon the practices in this document.
Managing Communication among Geographically Distributed Teams: A Brazilian Case
NASA Astrophysics Data System (ADS)
Almeida, Ana Carina M.; de Farias Junior, Ivaldir H.; de S. Carneiro, Pedro Jorge
The growing demand for qualified professionals is making software companies opt for distributed software development (DSD). At project conception, communication and synchronization of information are critical success factors. However, problems such as time-zone differences between teams, culture, language, and different development processes among sites can hamper communication among teams. The main goal of this paper is therefore to describe the solution adopted by a Brazilian team to improve communication in a multisite project environment. The proposed solution was based on the best practices described in the literature, and the communication plan was created based on the infrastructure needed by the project. The outcome of this work is to minimize the impact of communication issues in multisite projects, increasing productivity and mutual understanding and avoiding rework on code and document writing.
Towards a balanced software team formation based on Belbin team role using fuzzy technique
NASA Astrophysics Data System (ADS)
Omar, Mazni; Hasan, Bikhtiyar; Ahmad, Mazida; Yasin, Azman; Baharom, Fauziah; Mohd, Haslina; Darus, Norida Muhd
2016-08-01
In software engineering (SE), team roles have a significant impact on project success. To ensure the optimal outcome of the project the team is working on, it is essential that team members are assigned to the right role with the right characteristics. One of the prevalent team role models is the Belbin team role. A successful team must have a balance of team roles. Thus, this study demonstrates the steps taken to determine the balance of a software team formation based on Belbin team roles using a fuzzy technique. The fuzzy technique was chosen because it allows the analysis of imprecise data and the classification of selected criteria. In this study, two Belbin team roles, Shaper (Sh) and Plant (Pl), were chosen for assigning specific roles in the software team. Results show that the technique can be used to determine the balance of team roles. Future work will focus on validating the proposed method using empirical data in an industrial setting.
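To illustrate the fuzzy-classification step, the sketch below grades how strongly a member fits the Shaper or Plant role using triangular membership functions over a personality-test score. The breakpoints and scores are invented; the paper's actual criteria and membership functions may differ.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def role_memberships(shaper_score, plant_score):
    """Grade (0..1) of fit to the Belbin Shaper and Plant roles.
    The breakpoints below are illustrative, not from the paper."""
    return {
        "Shaper": triangular(shaper_score, 40, 70, 100),
        "Plant":  triangular(plant_score, 40, 70, 100),
    }

for member, (sh, pl) in {"Ana": (75, 30), "Bakr": (45, 68), "Chen": (20, 85)}.items():
    grades = role_memberships(sh, pl)
    best = max(grades, key=grades.get)
    print(member, grades, "-> assign as", best)
```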
Team Software Process (TSP) Body of Knowledge (BOK)
2010-07-01
styles that correspond to stereotypical extremes of group control and coordination, as shown in Figure 5: closed, random, open, and synchronous group ... and confirming the resolutions • managing the design change process and coordinating changes with the configuration control board • reporting ... members. 4. Coaching - Obtain a lead coach and the coaches for each team. 5. Conceptual design - Form a working group of
CMMI Level 5 and the Team Software Process
2007-04-01
could meet the rigors of a CMMI assessment and achieve their group's goal of Level 5. Watts Humphrey, who is widely acknowledged as the founder of the ... Capability Maturity Model (CMM) approach to improvement and who later created the Personal Software Process (PSP) and TSP, has noted that one of the ... intents of PSP and TSP is to be an operational process enactment of CMM Level 5 processes at the personal and project levels respectively [1]. CMM
Software Technology Transfer and Export Control.
1981-01-01
development projects of their own. By analogy, a Soviet team might be able to repeat the learning experience of the ADEPT-50 junior staff ... recommendations concerning product form and further study. The posture of this group has been to consider software technology and its transfer as a process ... and views of the Software Subgroup of Technical Working Group 7 (Computers) of the Critical Technologies Project. The work reported
NASA Astrophysics Data System (ADS)
Monaghan, Conal; Bizumic, Boris; Reynolds, Katherine; Smithson, Michael; Johns-Boast, Lynette; van Rooy, Dirk
2015-01-01
One prominent approach in the exploration of the variations in project team performance has been to study two components of the aggregate personalities of the team members: conscientiousness and agreeableness. A second line of research, known as self-categorisation theory, argues that identifying as team members and the team's performance norms should substantially influence the team's performance. This paper explores the influence of both these perspectives in university software engineering project teams. Eighty students worked to complete a piece of software in small project teams during 2007 or 2008. To reduce limitations in statistical analysis, Monte Carlo simulation techniques were employed to extrapolate from the results of the original sample to a larger simulated sample (2043 cases, within 319 teams). The results emphasise the importance of taking into account personality (particularly conscientiousness), and both team identification and the team's norm of performance, in order to cultivate higher levels of performance in student software engineering project teams.
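The Monte Carlo extrapolation mentioned above can be sketched as a simple resampling of team-level records with added noise. The records, noise model, and sampling scheme here are illustrative only, not the authors' procedure or data.

```python
import random
from statistics import mean

random.seed(42)

# Invented team-level records: (mean conscientiousness, team performance score).
observed_teams = [(3.2, 61), (3.9, 74), (4.1, 78), (2.8, 55),
                  (3.5, 66), (4.4, 82), (3.0, 59), (3.7, 70)]

def simulate(observed, n_teams, noise=3.0):
    """Draw teams with replacement and jitter the performance score."""
    sample = []
    for _ in range(n_teams):
        consc, perf = random.choice(observed)
        sample.append((consc, perf + random.gauss(0, noise)))
    return sample

simulated = simulate(observed_teams, n_teams=319)
print(len(simulated), "simulated teams")
print("mean simulated performance:", round(mean(p for _, p in simulated), 1))
```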
NASA Astrophysics Data System (ADS)
Aguilar Cisneros, Jorge; Vargas Martinez, Hector; Pedroza Melendez, Alejandro; Alonso Arevalo, Miguel
2013-09-01
Mexico is a country that is just beginning to gain experience in building software for satellite applications. This is a delicate situation because in the near future we will need to develop software for SATEX-II (the Mexican Experimental Satellite). SATEX-II is a project of SOMECyTA (the Mexican Society of Aerospace Science and Technology). We have experience applying software development methodologies, such as TSP (Team Software Process) and Scrum, in other areas. We analyzed these methodologies and concluded that they can be applied to develop software for SATEX-II; we also supported these methodologies with the ESA PSS-05-0 Standard, in particular ESA PSS-05-11. Our analysis focused on the main characteristics of each methodology and how these methodologies could be used with the ESA PSS-05-0 Standards. Our outcomes may, in general, be used by teams who need to build small satellites; in particular, they will be used when we build the onboard software applications for SATEX-II.
Motivating Company Personnel by Applying the Semi-self-organized Teams Principle
NASA Astrophysics Data System (ADS)
Kumlander, Deniss
Nowadays, the only way to improve the stability of the software development process in a rapidly evolving global world is to be innovative and to involve professionals in projects, motivating them through both material and non-material factors. In this paper, self-organized teams are discussed. Unfortunately, not all kinds of organizations can benefit directly from agile methods, including the use of self-organized teams. The paper proposes semi-self-organized teams, presenting them as a new and promising motivating factor that retains many of the positive aspects of being self-organized and partly agile while complying with less strict conditions for following this innovative process. Semi-self-organized teams are reliable, at least in the short-term perspective, and are simple to organize and support.
Test/score/report: Simulation techniques for automating the test process
NASA Technical Reports Server (NTRS)
Hageman, Barbara H.; Sigman, Clayton B.; Koslosky, John T.
1994-01-01
A Test/Score/Report capability is currently being developed for the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) system, which will automate testing of the Goddard Space Flight Center (GSFC) Payload Operations Control Center (POCC) and Mission Operations Center (MOC) software in three areas: telemetry decommutation, spacecraft command processing, and spacecraft memory load and dump processing. Automated computer control of the acceptance test process is one of the primary goals of a test team. With the proper simulation tools and user interface, acceptance testing, regression testing, and repetition of specific test procedures for a ground data system become simpler tasks. Ideally, the goal for complete automation would be to plug the operational deliverable into the simulator, press the start button, execute the test procedure, accumulate and analyze the data, score the results, and report the results to the test team along with a go/no-go recommendation. In practice, this may not be possible because of inadequate test tools, pressures of schedules, limited resources, etc. Most tests are accomplished using a certain degree of automation and test procedures that are labor intensive. This paper discusses some simulation techniques that can improve the automation of the test process. The TASS system tests the POCC/MOC software and provides a score based on the test results. The TASS system displays statistics on the success of the POCC/MOC system processing in each of the three areas, as well as event messages pertaining to the Test/Score/Report processing. The TASS system also provides formatted reports documenting each step performed during the tests and the results of each step. A prototype of the Test/Score/Report capability is available and currently being used to test some POCC/MOC software deliveries. When this capability is fully operational it should greatly reduce the time necessary to test a POCC/MOC software delivery, as well as improve the quality of the test process.
A Core Plug and Play Architecture for Reusable Flight Software Systems
NASA Technical Reports Server (NTRS)
Wilmot, Jonathan
2006-01-01
The Flight Software Branch, at Goddard Space Flight Center (GSFC), has been working on a run-time approach to facilitate a formal software reuse process. The reuse process is designed to enable rapid development and integration of high-quality software systems and to more accurately predict development costs and schedule. Previous reuse practices have been somewhat successful when the same teams are moved from project to project. But this typically requires taking the software system in an all-or-nothing approach where useful components cannot be easily extracted from the whole. As a result, the system is less flexible and scalable with limited applicability to new projects. This paper will focus on the rationale behind, and implementation of the run-time executive. This executive is the core for the component-based flight software commonality and reuse process adopted at Goddard.
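A toy sketch of the plug-and-play idea behind a run-time executive: components register against a small interface, and the executive drives whatever happens to be plugged in, so useful components can be added or removed without rewriting the whole system. This illustrates the pattern only; it is not the GSFC executive's actual interface, and the component names are invented.

```python
class Executive:
    """Toy run-time executive: components register a named step function,
    and the executive runs whatever is currently plugged in."""

    def __init__(self):
        self._components = {}          # name -> callable step function

    def register(self, name, step):
        self._components[name] = step

    def run_cycle(self):
        for name, step in self._components.items():
            print(f"[executive] running {name}")
            step()

executive = Executive()
executive.register("telemetry", lambda: print("  packing housekeeping telemetry"))
executive.register("thermal", lambda: print("  checking heater setpoints"))
executive.run_cycle()

# A later mission can plug in a new component without touching the executive:
executive.register("payload", lambda: print("  servicing payload requests"))
executive.run_cycle()
```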
Combining Architecture-Centric Engineering with the Team Software Process
2010-12-01
colleagues from Quarksoft and CIMAT have recently reported on their experiences in "Introducing Software Architecture Development Methods into a TSP ... Postmortem: lessons, new goals, new requirements, new risks, etc. Business and technical goals; estimates, plans, process, commitment; work products ... architecture to mitigate the risks uncovered by the ATAM. At the end of the iteration, version 1.0 of the architecture is available. Implement a second
Costs and Benefits of Software Process Improvement
1997-12-01
... in this field, an organization's chance for success depends first on having an exceptional manager and an effective development team (PEOPLE) ... Secondly, it depends on its effective use of TECHNOLOGY, and finally, on its PROCESS maturity. [Ref. 4] In a software organization: PEOPLE refers to
NASA Astrophysics Data System (ADS)
Dervilllé, A.; Labrosse, A.; Zimmermann, Y.; Foucher, J.; Gronheid, R.; Boeckx, C.; Singh, A.; Leray, P.; Halder, S.
2016-03-01
The dimensional scaling in IC manufacturing strongly drives the demands on CD and defect metrology techniques and their measurement uncertainties. Defect review has become as important as CD metrology, and together they create a new metrology paradigm, because they create a completely new need for flexible, robust and scalable metrology software. Current software architectures and metrology algorithms perform well, but they must be pushed to a higher level in order to keep pace with roadmap speed and requirements: for example, managing defects and CD in a one-step algorithm, customizing algorithms and output features for each R&D team environment, and providing software updates every day or every week so that R&D teams can easily explore various development strategies. The final goal is to avoid spending hours and days manually tuning algorithms to analyze metrology data, and to allow R&D teams to stay focused on their expertise. The benefits are drastic cost reductions, more efficient R&D teams and better process quality. In this paper, we propose a new generation of software platform and development infrastructure which can integrate specific metrology business modules. For example, we will show the integration of a chemistry module dedicated to electronic materials such as Directed Self-Assembly features. We will show a new generation of image analysis algorithms which are able to manage, at the same time, defect rates, image classification, CD and roughness measurements with high-throughput performance in order to be compatible with HVM. In a second part, we will assess the reliability, the customization of algorithms and the software platform's capability to meet new semiconductor metrology software requirements: flexibility, robustness, high throughput and scalability. Finally, we will demonstrate how such an environment has allowed a drastic reduction of the data analysis cycle time.
NASA Technical Reports Server (NTRS)
Lockwood, Dennis W.; Conger, A. M.
2003-01-01
This document is a compendium of the WFF GFO Software Development Team's knowledge regarding GFO CAL/VAL Data. It includes many elements of a requirements document, a software specification document, a software design document, and a user's guide. In the more technical sections, this document assumes the reader is familiar with GFO and its CAL/VAL Data.
Software Estimation: Developing an Accurate, Reliable Method
2011-08-01
... Activity, the systems engineering team is responsible for system and software requirements. Process Dashboard is a software planning and tracking tool ... Brad Hodgins is an interim TSP Mentor Coach, SEI-Authorized TSP Coach, SEI-Certified PSP/TSP Instructor, and SEI
The NCC project: A quality management perspective
NASA Technical Reports Server (NTRS)
Lee, Raymond H.
1993-01-01
The Network Control Center (NCC) Project introduced the concept of total quality management (TQM) in mid-1990. The CSC project team established a program which focused on continuous process improvement in software development methodology and consistent deliveries of high quality software products for the NCC. The vision of the TQM program was to produce error free software. Specific goals were established to allow continuing assessment of the progress toward meeting the overall quality objectives. The total quality environment, now a part of the NCC Project culture, has become the foundation for continuous process improvement and has resulted in the consistent delivery of quality software products over the last three years.
NASA Technical Reports Server (NTRS)
Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron
1994-01-01
This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.
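As an example of the reliability estimation described above, the sketch below fits a simple exponential (constant failure rate) model to inter-failure times observed during test and evaluates the probability of failure-free operation. This basic model is a stand-in for illustration; it is not necessarily the model used by the JSC tools, and the times are invented.

```python
import math

def estimate_failure_rate(interfailure_hours):
    """Maximum-likelihood failure rate for a simple exponential model:
    lambda = number of failures / total test time."""
    return len(interfailure_hours) / sum(interfailure_hours)

def reliability(t_hours, rate):
    """Probability of failure-free operation for t hours: R(t) = exp(-lambda * t)."""
    return math.exp(-rate * t_hours)

# Invented inter-failure times (hours of test between successive failures).
times = [12.0, 30.0, 45.0, 80.0, 130.0]
lam = estimate_failure_rate(times)
print(f"estimated failure rate: {lam:.4f} failures/hour")
print(f"P(no failure in 24 h): {reliability(24, lam):.2f}")
print(f"P(no failure in 100 h): {reliability(100, lam):.2f}")
```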
Extending Team Software Process (TSP) to Systems Engineering: A NAVAIR Experience Report
2010-03-01
instrumental in formulating the concepts and approaches presented in this report: Dan Burton, Anita Carleton, Timothy Chick, Mike Fehring, Watts Humphrey ... Senate," GAO-04-393, Defense Acquisitions, 2004. http://www.gao.gov/new.items/d04393.pdf [Humphrey 06] W. S. Humphrey, TSP: Leading a Development ... [Humphrey 08] W. S. Humphrey, "The Process Revolution," CrossTalk: The Journal of Defense Software Engineering, August 2008, Volume 28, Number 8
Sociotechnical Challenges of Developing an Interoperable Personal Health Record
Gaskin, G.L.; Longhurst, C.A.; Slayton, R.; Das, A.K.
2011-01-01
Objectives To analyze sociotechnical issues involved in the process of developing an interoperable commercial Personal Health Record (PHR) in a hospital setting, and to create guidelines for future PHR implementations. Methods This qualitative study utilized observational research and semi-structured interviews with 8 members of the hospital team, as gathered over a 28 week period of developing and adapting a vendor-based PHR at Lucile Packard Children’s Hospital at Stanford University. A grounded theory approach was utilized to code and analyze over 100 pages of typewritten field notes and interview transcripts. This grounded analysis allowed themes to surface during the data collection process which were subsequently explored in greater detail in the observations and interviews. Results Four major themes emerged: (1) Multidisciplinary teamwork helped team members identify crucial features of the PHR; (2) Divergent goals for the PHR existed even within the hospital team; (3) Differing organizational conceptions of the end-user between the hospital and software company differentially shaped expectations for the final product; (4) Difficulties with coordination and accountability between the hospital and software company caused major delays and expenses and strained the relationship between hospital and software vendor. Conclusions Though commercial interoperable PHRs have great potential to improve healthcare, the process of designing and developing such systems is an inherently sociotechnical process with many complex issues and barriers. This paper offers recommendations based on the lessons learned to guide future development of such PHRs. PMID:22003373
2010-12-01
PSP and TSP books by Watts Humphrey or in the TSP-MT (multi-team) process extension. A few additional items should be created, e.g., see OPD-2 ... Institute, Carnegie Mellon University, 2000. www.sei.cmu.edu/library/abstracts/reports/00tr023.cfm [Humphrey 2005] Humphrey, Watts S. PSP: A Self... [Humphrey 2006] Humphrey, Watts S. TSP: Coaching Development Teams. Addison-Wesley, 2006 (ISBN 978-0201731132). www.sei.cmu.edu/library/abstracts/
Automation of Cassini Support Imaging Uplink Command Development
NASA Technical Reports Server (NTRS)
Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert
2010-01-01
"Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.
The Comparison of VLBI Data Analysis Using Software Globl and Globk
NASA Astrophysics Data System (ADS)
Guangli, W.; Xiaoya, W.; Jinling, L.; Wenyao, Z.
The comparison of different geodetic data analysis software packages is a frequently discussed topic. In this paper we try to find out the differences between the software packages GLOBL and GLOBK when they are used to process the same set of VLBI data. GLOBL is software developed by the VLBI team, Geodesy Branch, GSFC/NASA, to process geodetic VLBI data using an arc-parameter-elimination algorithm, while GLOBK, which uses a Kalman filtering algorithm, is mainly used in GPS data analysis and is also used in VLBI data analysis. Our work focuses on whether there are significant differences when the two software packages are used to analyze the same VLBI data set, and on investigating the reasons for any differences found.
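Since GLOBK's approach is Kalman filtering, a minimal scalar Kalman filter may help fix the idea of sequentially combining noisy estimates. This is only a one-dimensional illustration with invented numbers, not GLOBK's actual multi-parameter formulation for station coordinates and Earth orientation.

```python
def kalman_1d(measurements, meas_var, process_var, x0=0.0, p0=1e6):
    """Scalar Kalman filter for a slowly varying parameter (random-walk model).
    Returns the filtered estimate after each measurement."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + process_var                # predict: uncertainty grows between epochs
        k = p / (p + meas_var)             # Kalman gain
        x = x + k * (z - x)                # update with the new measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Invented daily estimates of a station coordinate offset (mm), with noise.
obs = [3.1, 2.7, 3.4, 2.9, 3.2, 3.0]
print([round(v, 2) for v in kalman_1d(obs, meas_var=0.25, process_var=0.01)])
```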
NASA Technical Reports Server (NTRS)
Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam;
2009-01-01
The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.
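The "single button" sequencing idea can be pictured with a small sketch that runs a fixed pipeline of programs in order and stops on the first failure. MAS itself is written in PERL; the placeholder commands below stand in for the actual navigation, sequencing, and reporting tools, which are not named here.

```python
import subprocess

# Ordered pipeline of maneuver-related programs (placeholder commands;
# assumes a POSIX-like 'echo' is available).
PIPELINE = [
    ["echo", "designing maneuver"],         # stand-in for the trajectory/design tool
    ["echo", "building command sequence"],  # stand-in for the sequencing tool
    ["echo", "predicting performance"],     # stand-in for the report generator
]

def run_pipeline(steps):
    """Run each step in order; stop and report if any step fails."""
    for i, cmd in enumerate(steps, start=1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"step {i} failed: {' '.join(cmd)}")
            return False
        print(f"step {i} ok: {result.stdout.strip()}")
    return True

if __name__ == "__main__":
    run_pipeline(PIPELINE)
```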
2017-04-01
... Combinatorial Design Methods: Identification of Significant Improvement Opportunity; Methodology Development; Piloting ... Process Performance Modeling and Analysis: Identification of Significant Improvement Opportunity; Methodology Development ...
Tactical Approaches for Making a Successful Satellite Passive Microwave ESDR
NASA Astrophysics Data System (ADS)
Hardman, M.; Brodzik, M. J.; Gotberg, J.; Long, D. G.; Paget, A. C.
2014-12-01
Our NASA MEaSUREs project is producing a new, enhanced resolution gridded Earth System Data Record for the entire satellite passive microwave (SMMR, SSM/I-SSMIS and AMSR-E) time series. Our project goals are twofold: to produce a well-documented, consistently processed, high-quality historical record at higher spatial resolutions than have previously been available, and to transition the production software to the NSIDC DAAC for ongoing processing after our project completion. In support of these goals, our distributed team at BYU and NSIDC faces project coordination challenges to produce a high-quality data set that our user community will accept as a replacement for the currently available historical versions of these data. We work closely with our DAAC liaison on format specifications, data and metadata plans, and project progress. In order for the user community to understand and support our project, we have solicited a team of Early Adopters who are reviewing and evaluating a prototype version of the data. Early Adopter feedback will be critical input to our final data content and format decisions. For algorithm transparency and accountability, we have released an Algorithm Theoretical Basis Document (ATBD) and detailed supporting technical documentation, with rationale for all algorithm implementation decisions. For distributed team management, we are using collaborative tools for software revision control and issue tracking. For reliably transitioning a research-quality image reconstruction software system to production-quality software suitable for use at the DAAC, we have adopted continuous integration methods for running automated regression testing. Our presentation will summarize both advantages and challenges of each of these tactics in ensuring production of a successful ESDR and an enduring production software system.
Scheduling System Assessment, and Development and Enhancement of Re-engineered Version of GPSS
NASA Technical Reports Server (NTRS)
Loganantharaj, Rasiah; Thomas, Bushrod; Passonno, Nicole
1996-01-01
The objective of this project is twofold. First, to provide an evaluation of a commercially developed version of the Ground Processing Scheduling System (GPSS) for its applicability to the Kennedy Space Center (KSC) ground processing problem. Second, to work with the KSC GPSS development team and provide enhancements to the existing software. Systems reengineering is required to provide a sustainable system for the users and the software maintenance group. Using the LISP profile prototype code developed by the GPSS reverse-reengineering group as a building block, we have implemented the resource-deconfliction portion of GPSS in Common Lisp using its object-oriented features. The prototype corrects and extends some of the deficiencies of the current production version, plus it uses and builds on the classes from the development team's profile prototype.
2014-05-18
... with the intention of offering improved software libraries for GNSS signal acquisition. It has been the team's mission to implement new and improved techniques to ...
Model-Driven Useware Engineering
NASA Astrophysics Data System (ADS)
Meixner, Gerrit; Seissler, Marc; Breiner, Kai
User-oriented hardware and software development relies on a systematic development process based on a comprehensive analysis focusing on the users' requirements and preferences. Such a development process calls for the integration of numerous disciplines, from psychology and ergonomics to computer sciences and mechanical engineering. Hence, a correspondingly interdisciplinary team must be equipped with suitable software tools to allow it to handle the complexity of a multimodal and multi-device user interface development approach. An abstract, model-based development approach seems to be adequate for handling this complexity. This approach comprises different levels of abstraction requiring adequate tool support. Thus, in this chapter, we present the current state of our model-based software tool chain. We introduce the use model as the core model of our model-based process, transformation processes, and a model-based architecture, and we present different software tools that provide support for creating and maintaining the models or performing the necessary model transformations.
Cyber security best practices for the nuclear industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badr, I.
2012-07-01
When deploying software-based systems, such as digital instrumentation and controls, for the nuclear industry, it is vital to include cyber security assessment as part of the architecture and development process. When integrating and delivering software-intensive systems for the nuclear industry, engineering teams should make use of a secure, requirements-driven software development life cycle, ensuring security compliance and optimum return on investment. Reliability protections, data loss prevention, and privacy enforcement provide a strong case for installing strict cyber security policies. (authors)
A Team Building Model for Software Engineering Courses Term Projects
ERIC Educational Resources Information Center
Sahin, Yasar Guneri
2011-01-01
This paper proposes a new model for team building, which enables teachers to build coherent teams rapidly and fairly for the term projects of software engineering courses. Moreover, the model can also be used to build teams for any type of project, if the team member candidates are students, or if they are inexperienced on a certain subject. The…
Software Capability Evaluation Version 2.0 Method Description
1994-06-01
These criteria are discussed below; they include training, team composition, team leadership, team member experience and knowledge, individual ... previous SCEs. No more than one team member should have less than two years of professional software experience. Leadership. Ideally, the team leader ... features: leadership - the assignment of responsibility, the presence of sponsorship; organizational policies - there are written policies governing the
Spitzer observatory operations: increasing efficiency in mission operations
NASA Astrophysics Data System (ADS)
Scott, Charles P.; Kahr, Bolinda E.; Sarrel, Marc A.
2006-06-01
This paper explores the how's and why's of the Spitzer Mission Operations System's (MOS) success, efficiency, and affordability in comparison to other observatory-class missions. MOS exploits today's flight, ground, and operations capabilities, embraces automation, and balances both risk and cost. With operational efficiency as the primary goal, MOS maintains a strong control process by translating lessons learned into efficiency improvements, thereby enabling the MOS processes, teams, and procedures to rapidly evolve from concept (through thorough validation) into in-flight implementation. Operational teaming, planning, and execution are designed to enable re-use. Mission changes, unforeseen events, and continuous improvement have often times forced us to learn to fly anew. Collaborative spacecraft operations and remote science and instrument teams have become well integrated, and worked together to improve and optimize each human, machine, and software-system element. Adaptation to tighter spacecraft margins has facilitated continuous operational improvements via automated and autonomous software coupled with improved human analysis. Based upon what we now know and what we need to improve, adapt, or fix, the projected mission lifetime continues to grow - as does the opportunity for numerous scientific discoveries.
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A.; Victor, Elias; Vasquez, Angel L.; Urbina, Alfredo R.
2017-01-01
A multi-threaded software application has been developed in-house by the Ground Special Power (GSP) team at NASA Kennedy Space Center (KSC) to separately simulate and fully emulate all units that supply VDC power and battery-based power backup to multiple KSC launch ground support systems for the NASA Space Launch System (SLS) rocket.
Software Project Management and Measurement on the World-Wide-Web (WWW)
NASA Technical Reports Server (NTRS)
Callahan, John; Ramakrishnan, Sudhaka
1996-01-01
We briefly describe a system for forms-based, work-flow management that helps members of a software development team overcome geographical barriers to collaboration. Our system, called the Web Integrated Software Environment (WISE), is implemented as a World-Wide-Web service that allows for management and measurement of software development projects based on dynamic analysis of change activity in the workflow. WISE tracks issues in a software development process, provides informal communication between the users with different roles, supports to-do lists, and helps in software process improvement. WISE minimizes the time devoted to metrics collection and analysis by providing implicit delivery of messages between users based on the content of project documents. The use of a database in WISE is hidden from the users who view WISE as maintaining a personal 'to-do list' of tasks related to the many projects on which they may play different roles.
Decentralized Formation Flying Control in a Multiple-Team Hierarchy
NASA Technical Reports Server (NTRS)
Mueller, Joseph; Thomas, Stephanie J.
2005-01-01
This paper presents the prototype of a system that addresses these objectives-a decentralized guidance and control system that is distributed across spacecraft using a multiple-team framework. The objective is to divide large clusters into teams of manageable size, so that the communication and computational demands driven by N decentralized units are related to the number of satellites in a team rather than the entire cluster. The system is designed to provide a high-level of autonomy, to support clusters with large numbers of satellites, to enable the number of spacecraft in the cluster to change post-launch, and to provide for on-orbit software modification. The distributed guidance and control system will be implemented in an object-oriented style using MANTA (Messaging Architecture for Networking and Threaded Applications). In this architecture, tasks may be remotely added, removed or replaced post-launch to increase mission flexibility and robustness. This built-in adaptability will allow software modifications to be made on-orbit in a robust manner. The prototype system, which is implemented in MATLAB, emulates the object-oriented and message-passing features of the MANTA software. In this paper, the multiple-team organization of the cluster is described, and the modular software architecture is presented. The relative dynamics in eccentric reference orbits is reviewed, and families of periodic, relative trajectories are identified, expressed as sets of static geometric parameters. The guidance law design is presented, and an example reconfiguration scenario is used to illustrate the distributed process of assigning geometric goals to the cluster. Next, a decentralized maneuver planning approach is presented that utilizes linear-programming methods to enact reconfiguration and coarse formation keeping maneuvers. Finally, a method for performing online collision avoidance is discussed, and an example is provided to gauge its performance.
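A toy sketch of the multiple-team partitioning idea: satellites are grouped into teams of bounded size so that coordination traffic scales with team size rather than cluster size. The grouping rule and the "captain" role here are purely illustrative; they are not the MANTA implementation or the paper's team-assignment algorithm.

```python
def form_teams(satellite_ids, team_size):
    """Partition the cluster into teams of at most `team_size` members;
    the first member of each team acts as the team captain."""
    teams = []
    for i in range(0, len(satellite_ids), team_size):
        members = satellite_ids[i:i + team_size]
        teams.append({"captain": members[0], "members": members})
    return teams

cluster = [f"SAT-{n:02d}" for n in range(1, 11)]   # a 10-satellite cluster
for team in form_teams(cluster, team_size=4):
    # Each member only needs to coordinate within its own team, so message
    # traffic grows with team size, not with the size of the whole cluster.
    print(team["captain"], "->", team["members"])
```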
Are the expected benefits of requirements reuse hampered by distance? An experiment.
Carrillo de Gea, Juan M; Nicolás, Joaquín; Fernández-Alemán, José L; Toval, Ambrosio; Idri, Ali
2016-01-01
Software development processes are often performed by distributed teams which may be separated by great distances. Global software development (GSD) has undergone a significant growth in recent years. The challenges concerning GSD are especially relevant to requirements engineering (RE). Stakeholders need to share a common ground, but there are many difficulties as regards the potentially variable interpretation of the requirements in different contexts. We posit that the application of requirements reuse techniques could alleviate this problem through the diminution of the number of requirements open to misinterpretation. This paper presents a reuse-based approach with which to address RE in GSD, with special emphasis on specification techniques, namely parameterised requirements and traceability relationships. An experiment was carried out with the participation of 29 university students enrolled on a Computer Science and Engineering course. Two main scenarios that represented co-localisation and distribution in software development were portrayed by participants from Spain and Morocco. The global teams achieved a slightly better performance than the co-located teams as regards effectiveness, which could be a result of the worse productivity of the global teams in comparison to the co-located teams. Subjective perceptions were generally more positive in the case of the distributed teams (difficulty, speed and understanding), with the exception of quality. A theoretical model has been proposed as an evaluation framework with which to analyse, from the point of view of the factor of distance, the effect of requirements specification techniques on a set of performance and perception-based variables. The experiment utilised a new internationalisation requirements catalogue. None of the differences found between co-located and distributed teams were significant according to the outcome of our statistical tests. The well-known benefits of requirements reuse in traditional co-located projects could, therefore, also be expected in GSD projects.
Software Engineering for Human Spaceflight
NASA Technical Reports Server (NTRS)
Fredrickson, Steven E.
2014-01-01
The Spacecraft Software Engineering Branch of NASA Johnson Space Center (JSC) provides world-class products, leadership, and technical expertise in software engineering, processes, technology, and systems management for human spaceflight. The branch contributes to major NASA programs (e.g. ISS, MPCV/Orion) with in-house software development and prime contractor oversight, and maintains the JSC Engineering Directorate CMMI rating for flight software development. Software engineering teams work with hardware developers, mission planners, and system operators to integrate flight vehicles, habitats, robotics, and other spacecraft elements. They seek to infuse automation and autonomy into missions, and apply new technologies to flight processor and computational architectures. This presentation will provide an overview of key software-related projects, software methodologies and tools, and technology pursuits of interest to the JSC Spacecraft Software Engineering Branch.
Software quality and process improvement in scientific simulation codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ambrosiano, J.; Webster, R.
1997-11-01
This report contains viewgraphs on the quest to develop better simulation code quality through process modeling and improvement. This study is based on the experience of the authors and interviews with ten subjects chosen from simulation code development teams at LANL. This study is descriptive rather than scientific.
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Pressburger, Thomas; Markosian, Lawrence; Feather, Martin S.
2006-01-01
New processes, methods and tools are constantly appearing in the field of software engineering. Many of these promise great potential for improving software development processes, resulting in higher quality software with greater levels of assurance. However, there are a number of obstacles that impede their infusion into software development practices. These are the recurring obstacles common to many forms of research. Practitioners cannot readily identify the emerging techniques that may most benefit them, and cannot afford to risk time and effort in evaluating and experimenting with them while there is still uncertainty about whether they will have payoff in this particular context. Similarly, researchers cannot readily identify those practitioners whose problems would be amenable to their techniques, and they lack the feedback from practical applications necessary to help them evolve their techniques and make them more likely to be successful. This paper describes an ongoing effort conducted by a software engineering research infusion team, and the NASA Research Infusion Initiative, established by NASA's Software Engineering Initiative, to overcome these obstacles.
NASA Technical Reports Server (NTRS)
Green, Scott; Kouchakdjian, Ara; Basili, Victor; Weidow, David
1990-01-01
This case study analyzes the application of the cleanroom software development methodology to the development of production software at the NASA/Goddard Space Flight Center. The cleanroom methodology emphasizes human discipline in program verification to produce reliable software products that are right the first time. Preliminary analysis of the cleanroom case study shows that the method can be applied successfully in the FDD environment and may increase staff productivity and product quality. Compared to typical Software Engineering Laboratory (SEL) activities, there is evidence of lower failure rates, a more complete and consistent set of inline code documentation, a different distribution of phase effort activity, and a different growth profile in terms of lines of code developed. The major goals of the study were to: (1) assess the process used in the SEL cleanroom model with respect to team structure, team activities, and effort distribution; (2) analyze the products of the SEL cleanroom model and determine the impact on measures of interest, including reliability, productivity, overall life-cycle cost, and software quality; and (3) analyze the residual products in the application of the SEL cleanroom model, such as fault distribution, error characteristics, system growth, and computer usage.
A Survey of Commonly Applied Methods for Software Process Improvement
1994-02-01
One cited study conducted a controlled experiment on the effectiveness of the method, comparing 10 cleanroom teams with 5 non-cleanroom teams working for six... (Authors: Robert D. Austin, Doctoral Candidate, Carnegie Mellon University; Daniel J. Paulish, Resident Affiliate, Siemens Corporate Research, Inc.)
Absorbing Software Testing into the Scrum Method
NASA Astrophysics Data System (ADS)
Tuomikoski, Janne; Tervonen, Ilkka
In this paper we study how to absorb software testing into the Scrum method. We conducted the research as an action research study during 2007-2008, with three iterations. The results showed that testing can, and even should, be absorbed into the Scrum method. The testing team was merged into the Scrum teams. The teams can now deliver better working software in a shorter time, because testing keeps track of the progress of the development. Team spirit is also higher, because the Scrum team members are committed to the same goal. The biggest change from the test manager's point of view was the organized Product Owner Team. The test manager no longer has a dedicated testing team, and in the future all testing tasks have to be assigned through the Product Backlog.
NASA Technical Reports Server (NTRS)
2001-01-01
Qualtech Systems, Inc. developed a complete software system with capabilities of multisignal modeling, diagnostic analysis, run-time diagnostic operations, and intelligent interactive reasoners. Commercially available as the TEAMS (Testability Engineering and Maintenance System) tool set, the software can be used to reveal unanticipated system failures. The TEAMS software package is broken down into four companion tools: TEAMS-RT, TEAMATE, TEAMS-KB, and TEAMS-RDS. TEAMS-RT identifies good, bad, and suspect components in the system in real-time. It reports system health results from onboard tests, and detects and isolates failures within the system, allowing for rapid fault isolation. TEAMATE takes over from where TEAMS-RT left off by intelligently guiding the maintenance technician through the troubleshooting procedure, repair actions, and operational checkout. TEAMS-KB serves as a model management and collection tool. TEAMS-RDS (TEAMS-Remote Diagnostic Server) has the ability to continuously assess a system and isolate any failure in that system or its components, in real time. RDS incorporates TEAMS-RT, TEAMATE, and TEAMS-KB in a large-scale server architecture capable of providing advanced diagnostic and maintenance functions over a network, such as the Internet, with a web browser user interface.
Team Software Development for Aerothermodynamic and Aerodynamic Analysis and Design
NASA Technical Reports Server (NTRS)
Alexandrov, N.; Atkins, H. L.; Bibb, K. L.; Biedron, R. T.; Carpenter, M. H.; Gnoffo, P. A.; Hammond, D. P.; Jones, W. T.; Kleb, W. L.; Lee-Rausch, E. M.
2003-01-01
A collaborative approach to software development is described. The approach employs agile development techniques (project retrospectives, Scrum status meetings, and elements of Extreme Programming) to efficiently develop a cohesive and extensible software suite. The software product under development is a fluid dynamics simulator for performing aerodynamic and aerothermodynamic analysis and design. The functionality of the software product is achieved both through the merging, with substantial rewrite, of separate legacy codes and the authorship of new routines. Examples of rapid implementation of new functionality demonstrate the benefits obtained with this agile software development process. The appendix contains a discussion of coding issues encountered while porting legacy Fortran 77 code to Fortran 95, software design principles, and a Fortran 95 coding standard.
Software Process Automation: Experiences from the Trenches.
1996-07-01
Case material lists the tools integrated in each organization's process automation effort, including Weaver, WordPerfect, All-in-One, Oracle, configuration management systems, FrameMaker (document processing), Worldview (document viewing), Autoplan (project management), Cadre Teamwork, a requirements traceability tool, a homegrown scheduling tool, and a homegrown tool integrator, used to handle change requests and problem reports.
The Legacy of Space Shuttle Flight Software
NASA Technical Reports Server (NTRS)
Hickey, Christopher J.; Loveall, James B.; Orr, James K.; Klausman, Andrew L.
2011-01-01
The initial goals of the Space Shuttle Program required that the avionics and software systems blaze new trails in advancing avionics system technology. Many of the requirements placed on avionics and software were accomplished for the first time on this program. Examples include comprehensive digital fly-by-wire technology, use of a digital databus for flight critical functions, fail operational/fail safe requirements, complex automated redundancy management, and the use of a high-order software language for flight software development. In order to meet the operational and safety goals of the program, the Space Shuttle software had to be extremely high quality, reliable, robust, reconfigurable and maintainable. To achieve this, the software development team evolved a software process focused on continuous process improvement and defect elimination that consistently produced highly predictable and top quality results, providing software managers the confidence needed to sign each Certificate of Flight Readiness (COFR). This process, which has been appraised at Capability Maturity Model (CMM)/Capability Maturity Model Integration (CMMI) Level 5, has resulted in one of the lowest software defect rates in the industry. This paper will present an overview of the evolution of the Primary Avionics Software System (PASS) project and processes over thirty years, an argument for strong statistical control of software processes with examples, an overview of the success story for identifying and driving out errors before flight, a case study of the few significant software issues and how they were either identified before flight or slipped through the process onto a flight vehicle, and identification of the valuable lessons learned over the life of the project.
NASA Astrophysics Data System (ADS)
Rimland, Jeffrey; McNeese, Michael; Hall, David
2013-05-01
Although the capability of computer-based artificial intelligence techniques for decision-making and situational awareness has seen notable improvement over the last several decades, the current state-of-the-art still falls short of creating computer systems capable of autonomously making complex decisions and judgments in many domains where data is nuanced and accountability is high. However, there is a great deal of potential for hybrid systems in which software applications augment human capabilities by focusing the analyst's attention on relevant information elements based on both a priori knowledge of the analyst's goals and the processing/correlation of a series of data streams too numerous and heterogeneous for the analyst to digest without assistance. Researchers at Penn State University are exploring ways in which an information framework influenced by Klein's Recognition-Primed Decision (RPD) model, Endsley's model of situational awareness, and the Joint Directors of Laboratories (JDL) data fusion process model can be implemented through a novel combination of Complex Event Processing (CEP) and Multi-Agent Software (MAS). Though originally designed for stock market and financial applications, the high-performance, data-driven nature of CEP techniques provides a natural complement to the proven capabilities of MAS systems for modeling naturalistic decision-making, performing process adjudication, and optimizing networked processing and cognition via the use of "mobile agents." This paper addresses the challenges and opportunities of such a framework for augmenting human observational capability as well as enabling the ability to perform collaborative context-aware reasoning in both human teams and hybrid human / software agent teams.
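A toy sketch of the complex event processing idea follows: a sliding-window rule that flags when events from two different streams co-occur, the kind of correlation that would cue an analyst's attention. The stream names, window length, and rule are invented for illustration and are not drawn from the Penn State framework.

```python
# Toy illustration only: a sliding-window correlation rule in the spirit of CEP,
# flagging when related events from two streams co-occur within a short window.
from collections import deque

WINDOW = 5.0  # seconds

def correlate(stream):
    """stream yields (timestamp, source, value); flag sensor/report co-occurrence."""
    recent = deque()
    for t, source, value in stream:
        recent.append((t, source, value))
        while recent and t - recent[0][0] > WINDOW:
            recent.popleft()                      # drop events outside the window
        sources = {s for _, s, _ in recent}
        if {"sensor", "field_report"} <= sources:
            yield (t, "ALERT: corroborated activity", list(recent))

events = [(0.0, "sensor", "motion"), (2.5, "field_report", "vehicle sighted"),
          (20.0, "sensor", "motion")]
for alert in correlate(events):
    print(alert)
```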
Orbit Determination and Navigation Software Testing for the Mars Reconnaissance Orbiter
NASA Technical Reports Server (NTRS)
Pini, Alex
2011-01-01
During the extended science phase of the Mars Reconnaissance Orbiter's lifecycle, the operational duties pertaining to navigation primarily involve orbit determination. The orbit determination process utilizes radiometric tracking data and is used for the prediction and reconstruction of MRO's trajectories. Predictions are done twice per week for ephemeris updates on-board the spacecraft and for planning purposes. Orbit Trim Maneuvers (OTM-s) are also designed using the predicted trajectory. Reconstructions, which incorporate a batch estimator, provide precise information about the spacecraft state to be synchronized with scientific measurements. These tasks were conducted regularly to validate the results obtained by the MRO Navigation Team. Additionally, the team is in the process of converting to newer versions of the navigation software and operating system. The capability to model multiple densities in the Martian atmosphere is also being implemented. However, testing outputs among these different configurations was necessary to ensure compliance to a satisfactory degree.
The Package-Based Development Process in the Flight Dynamics Division
NASA Technical Reports Server (NTRS)
Parra, Amalia; Seaman, Carolyn; Basili, Victor; Kraft, Stephen; Condon, Steven; Burke, Steven; Yakimovich, Daniil
1997-01-01
The Software Engineering Laboratory (SEL) has been operating for more than two decades in the Flight Dynamics Division (FDD) and has adapted to the constant movement of the software development environment. The SEL's Improvement Paradigm shows that process improvement is an iterative process. Understanding, Assessing and Packaging are the three steps that are followed in this cyclical paradigm. As the improvement process cycles back to the first step, after having packaged some experience, the level of understanding will be greater. In the past, products resulting from the packaging step have been large process documents, guidebooks, and training programs. As the technical world moves toward more modularized software, we have made a move toward more modularized software development process documentation; as such, the products of the packaging step are becoming smaller and more frequent. In this manner, the Quality Improvement Paradigm (QIP) takes on a more spiral approach rather than a waterfall one. This paper describes the state of the FDD in the area of software development processes, as revealed through the understanding and assessing activities conducted by the COTS study team. The insights presented include: (1) a characterization of a typical FDD Commercial Off the Shelf (COTS) intensive software development life-cycle process, (2) lessons learned through the COTS study interviews, and (3) a description of changes in the SEL due to the changing and accelerating nature of software development in the FDD.
Software Development in the Water Sciences: a view from the divide (Invited)
NASA Astrophysics Data System (ADS)
Miles, B.; Band, L. E.
2013-12-01
While statistical methods are an important part of many earth scientists' training, these scientists often learn the bulk of their software development skills in an ad hoc, just-in-time manner. Yet to carry out contemporary research, scientists are spending more and more time developing software. Here I present perspectives, as an earth sciences graduate student with professional software engineering experience, on the challenges scientists face in adopting software engineering practices, with an emphasis on areas of the science software development lifecycle that could benefit most from improved engineering. This work builds on experience gained as part of the NSF-funded Water Science Software Institute (WSSI) conceptualization award (NSF Award # 1216817). Throughout 2013, the WSSI team held a series of software scoping and development sprints with the goals of: (1) adding features to better model green infrastructure within the Regional Hydro-Ecological Simulation System (RHESSys); and (2) infusing test-driven agile software development practices into the processes employed by the RHESSys team. The goal of efforts such as the WSSI is to ensure that investments by current and future scientists in software engineering training will enable transformative science by improving both scientific reproducibility and researcher productivity. Experience with the WSSI indicates: (1) the potential for achieving this goal; and (2) that while scientists are willing to adopt some software engineering practices, transformative science will require continued collaboration between domain scientists and cyberinfrastructure experts for the foreseeable future.
NASA Astrophysics Data System (ADS)
Orngreen, Rikke; Clemmensen, Torkil; Pejtersen, Annelise Mark
The boundaries and work processes for how virtual teams interact are undergoing changes, from a tool and stand-alone application orientation to the use of multiple generic platforms chosen and redesigned for the specific context. These platforms are often designed at the same time both by professional software developers and by the individual members of the virtual teams, rather than determined at a single organizational level. There may be no impact of the technology per se on individuals, groups or organizations, as the technology for virtual teams rather enhances situation ambiguity and disrupts existing task-artifact cycles. This ambiguous situation calls for new methods for empirical work analysis and interaction design that can help us understand how organizations, teams and individuals learn to organize, design and work in virtual teams in various networked contexts.
NASA PC software evaluation project
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Kuan, Julie C.
1986-01-01
The USL NASA PC software evaluation project is intended to provide a structured framework for facilitating the development of quality NASA PC software products. The project will assist NASA PC development staff to understand the characteristics and functions of NASA PC software products. Based on the results of the project teams' evaluations and recommendations, users can judge the reliability, usability, acceptability, maintainability and customizability of all the PC software products. The objective here is to provide initial, high-level specifications and guidelines for NASA PC software evaluation. The primary tasks to be addressed in this project are as follows: to gain a strong understanding of what software evaluation entails and how to organize a structured software evaluation process; to define a structured methodology for conducting the software evaluation process; to develop a set of PC software evaluation criteria and evaluation rating scales; and to conduct PC software evaluations in accordance with the identified methodology. The product categories addressed include Communication Packages, Network System Software, Graphics Support Software, Environment Management Software, and General Utilities. This report represents one of the 72 attachment reports to the University of Southwestern Louisiana's Final Report on NASA Grant NGT-19-010-900. Accordingly, appropriate care should be taken in using this report out of context of the full Final Report.
Bridging the Qualitative/Quantitative Software Divide
Annechino, Rachelle; Antin, Tamar M. J.; Lee, Juliet P.
2011-01-01
To compare and combine qualitative and quantitative data collected from respondents in a mixed methods study, the research team developed a relational database to merge survey responses stored and analyzed in SPSS and semistructured interview responses stored and analyzed in the qualitative software package ATLAS.ti. The process of developing the database, as well as practical considerations for researchers who may wish to use similar methods, are explored. PMID:22003318
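A minimal sketch of the merge described above follows, assuming both exports are keyed by a shared respondent identifier; the table and column names, and the use of SQLite, are illustrative assumptions rather than details of the study's actual database.

```python
# Minimal sketch, assuming exports keyed by a shared respondent ID (table and column
# names are illustrative, not those of the study's actual relational database).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE survey   (respondent_id TEXT PRIMARY KEY, alcohol_30day INTEGER);
CREATE TABLE interview(respondent_id TEXT, code TEXT, quote TEXT);
""")
con.executemany("INSERT INTO survey VALUES (?, ?)",
                [("R01", 4), ("R02", 0)])
con.executemany("INSERT INTO interview VALUES (?, ?, ?)",
                [("R01", "social_drinking", "Mostly at parties with friends...")])

# Side-by-side view of quantitative and qualitative data for each respondent
for row in con.execute("""
    SELECT s.respondent_id, s.alcohol_30day, i.code, i.quote
    FROM survey s LEFT JOIN interview i USING (respondent_id)"""):
    print(row)
```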
Commercial Mobile Alert Service (CMAS) Scenarios
2012-05-01
Special Report CMU/SEI-2012-SR-020, prepared by the WEA Project Team, CERT Division, Software Engineering Institute, Carnegie Mellon University, May 2012.
Distributed Visualization Project
NASA Technical Reports Server (NTRS)
Craig, Douglas; Conroy, Michael; Kickbusch, Tracey; Mazone, Rebecca
2016-01-01
Distributed Visualization allows anyone, anywhere to see any simulation at any time. Development focuses on algorithms, software, data formats, data systems and processes to enable sharing simulation-based information across temporal and spatial boundaries without requiring stakeholders to possess highly-specialized and very expensive display systems. It also introduces abstraction between the native and shared data, which allows teams to share results without giving away proprietary or sensitive data. The initial implementation of this capability is the Distributed Observer Network (DON) version 3.1. DON 3.1 is available for public release in the NASA Software Store (https://software.nasa.gov/software/KSC-13775) and works with version 3.0 of the Model Process Control specification (an XML Simulation Data Representation and Communication Language) to display complex graphical information and associated Meta-Data.
NASA Astrophysics Data System (ADS)
Guzman, J. C.; Bennett, T.
2008-08-01
The Convergent Radio Astronomy Demonstrator (CONRAD) is a collaboration between the computing teams of two SKA pathfinder instruments, MeerKAT (South Africa) and ASKAP (Australia). Our goal is to produce the required common software to operate, process and store the data from the two instruments. Both instruments are synthesis arrays composed of a large number of antennas (40-100) operating at centimeter wavelengths with wide-field capabilities. Key challenges are the processing of high volumes of data in real time as well as the remote mode of operations. Here we present the software architecture for CONRAD. Our design approach is to maximize the use of open solutions and third-party software widely deployed in commercial applications, such as SNMP and LDAP, and to utilize modern web-based technologies for the user interfaces, such as AJAX.
Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, Wes
2016-07-24
The primary challenge motivating this team's work is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who are able to perform analysis only on a small fraction of the data they compute, resulting in the very real likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, an approach that is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by DOE science projects. In large part, our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE HPC facilities, though we expected to have impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve that objective, we assembled a unique team of researchers consisting of representatives from DOE national laboratories, academia, and industry, and engaged in software technology R&D, as well as in close partnerships with DOE science code teams, to produce software technologies that were shown to run effectively at scale on DOE HPC platforms.
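The core idea of in situ processing can be illustrated with a schematic sketch: the analysis runs inside the simulation loop and reduces each timestep's field to a small summary while the data are still in memory, so the full fields never need to be written out. The solver update and statistics here are placeholders, not the project's production infrastructure.

```python
# Schematic example of the in situ idea (not the project's actual infrastructure):
# reduce each timestep's field to summary statistics while it is still in memory,
# instead of writing the full field to disk for post-hoc analysis.
import numpy as np

def in_situ_analysis(step, field):
    return {"step": step, "min": float(field.min()),
            "max": float(field.max()), "mean": float(field.mean())}

def run_simulation(n_steps=5, shape=(256, 256), analyze=in_situ_analysis):
    rng = np.random.default_rng(0)
    field = rng.random(shape)
    summaries = []
    for step in range(n_steps):
        field += 0.01 * rng.standard_normal(shape)   # stand-in for a solver update
        summaries.append(analyze(step, field))       # analysis co-located with the solve
    return summaries                                  # small result; full fields never stored

for s in run_simulation():
    print(s)
```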
Ye, Xin
2018-01-01
The awareness of others' activities has been widely recognized in the Computer-Supported Cooperative Work community as essential for facilitating coordination within a team. Several field studies of software developers in large software companies such as Microsoft have shown that coworker and artifact awareness are the most common information needs for software developers; however, they are also two of the seven most frequently unsatisfied information needs. To address this problem, we built a workspace awareness tool named TeamWATCH to visualize developer activities using a 3-D city metaphor. In this paper, we discuss the importance of awareness in software development, review existing workspace awareness tools, present the design and implementation of TeamWATCH, and evaluate how it could help detect and resolve conflicts earlier and better maintain group awareness via a controlled experiment. The experimental results showed that the subjects using TeamWATCH performed significantly better with respect to early conflict detection and resolution. PMID:29558519
A Comparison of Authoring Software for Developing Mathematics Self-Learning Software Packages.
ERIC Educational Resources Information Center
Suen, Che-yin; Pok, Yang-ming
Four years ago, the authors started to develop a self-paced mathematics learning software called NPMaths by using an authoring package called Tencore. However, NPMaths had some weak points. A development team was hence formed to develop similar software called Mathematics On Line. This time the team used another development language called…
Extreme Programming: Maestro Style
NASA Technical Reports Server (NTRS)
Norris, Jeffrey; Fox, Jason; Rabe, Kenneth; Shu, I-Hsiang; Powell, Mark
2009-01-01
"Extreme Programming: Maestro Style" is the name of a computer programming methodology that has evolved as a custom version of a methodology, called extreme programming that has been practiced in the software industry since the late 1990s. The name of this version reflects its origin in the work of the Maestro team at NASA's Jet Propulsion Laboratory that develops software for Mars exploration missions. Extreme programming is oriented toward agile development of software resting on values of simplicity, communication, testing, and aggressiveness. Extreme programming involves use of methods of rapidly building and disseminating institutional knowledge among members of a computer-programming team to give all the members a shared view that matches the view of the customers for whom the software system is to be developed. Extreme programming includes frequent planning by programmers in collaboration with customers, continually examining and rewriting code in striving for the simplest workable software designs, a system metaphor (basically, an abstraction of the system that provides easy-to-remember software-naming conventions and insight into the architecture of the system), programmers working in pairs, adherence to a set of coding standards, collaboration of customers and programmers, frequent verbal communication, frequent releases of software in small increments of development, repeated testing of the developmental software by both programmers and customers, and continuous interaction between the team and the customers. The environment in which the Maestro team works requires the team to quickly adapt to changing needs of its customers. In addition, the team cannot afford to accept unnecessary development risk. Extreme programming enables the Maestro team to remain agile and provide high-quality software and service to its customers. However, several factors in the Maestro environment have made it necessary to modify some of the conventional extreme-programming practices. The single most influential of these factors is that continuous interaction between customers and programmers is not feasible.
NASA's TReK Project: A Case Study in Using the Spiral Model of Software Development
NASA Technical Reports Server (NTRS)
Hendrix, T. Dean; Schneider, Michelle P.
1998-01-01
Software development projects face numerous challenges that threaten their successful completion. Whether it is not enough money, too little time, or a case of "requirements creep" that has turned into a full sprint, projects must meet these challenges or face possible disastrous consequences. A robust, yet flexible process model can provide a mechanism through which software development teams can meet these challenges head on and win. This article describes how the spiral model has been successfully tailored to a specific project and relates some notable results to date.
NASA Technical Reports Server (NTRS)
Yang, Genevie Velarde; Mohr, David; Kirby, Charles E.
2008-01-01
To keep Cassini on its complex trajectory, more than 200 orbit trim maneuvers (OTMs) have been planned from July 2004 to July 2010. With only a few days between many of these OTMs, the operations process of planning and executing the necessary commands had to be automated. The resulting Maneuver Automation Software (MAS) process minimizes the workforce required for, and maximizes the efficiency of, the maneuver design and uplink activities. The MAS process is a well-organized and logically constructed interface between Cassini's Navigation (NAV), Spacecraft Operations (SCO), and Ground Software teams. Upon delivery of an orbit determination (OD) from NAV, the MAS process can generate a maneuver design and all related uplink and verification products within 30 minutes. To date, all 112 OTMs executed by the Cassini spacecraft have been successful. MAS was even used to successfully design and execute a maneuver while the spacecraft was in safe mode.
Next Generation Simulation Framework for Robotic and Human Space Missions
NASA Technical Reports Server (NTRS)
Cameron, Jonathan M.; Balaram, J.; Jain, Abhinandan; Kuo, Calvin; Lim, Christopher; Myint, Steven
2012-01-01
The Dartslab team at NASA's Jet Propulsion Laboratory (JPL) has a long history of developing physics-based simulations based on the Darts/Dshell simulation framework that have been used to simulate many planetary robotic missions, such as the Cassini spacecraft and the rovers that are currently driving on Mars. Recent collaboration efforts between the Dartslab team at JPL and the Mission Operations Directorate (MOD) at NASA Johnson Space Center (JSC) have led to significant enhancements to the Dartslab DSENDS (Dynamics Simulator for Entry, Descent and Surface landing) software framework. The new version of DSENDS is now being used for new planetary mission simulations at JPL. JSC is using DSENDS as the foundation for a suite of software known as COMPASS (Core Operations, Mission Planning, and Analysis Spacecraft Simulation) that is the basis for their new human space mission simulations and analysis. In this paper, we will describe the collaborative process with the JPL Dartslab and the JSC MOD team that resulted in the redesign and enhancement of the DSENDS software. We will outline the improvements in DSENDS that simplify creation of new high-fidelity robotic/spacecraft simulations. We will illustrate how DSENDS simulations are assembled and show results from several mission simulations.
An Investigation of Agility Issues in Scrum Teams Using Agility Indicators
NASA Astrophysics Data System (ADS)
Pikkarainen, Minna; Wang, Xiaofeng
Agile software development methods have emerged and become increasingly popular in recent years, yet the issues encountered by software development teams that strive to achieve agility using agile methods have yet to be explored systematically. Built upon a previous study that established a set of indicators of agility, this study investigates what issues are manifested in software development teams using agile methods, focusing particularly on Scrum teams. In other words, the goal of the chapter is to evaluate Scrum teams using agility indicators and thereby further validate the previously presented agility indicators within additional cases. A multiple case study research method is employed. The findings of the study reveal that the teams using Scrum do not necessarily achieve agility in terms of team autonomy, sharing, stability and embraced uncertainty. Possible reasons include a previous organizational plan-driven culture, resistance towards the Scrum roles and changing resources.
Maintaining Quality and Confidence in Open-Source, Evolving Software: Lessons Learned with PFLOTRAN
NASA Astrophysics Data System (ADS)
Frederick, J. M.; Hammond, G. E.
2017-12-01
Software evolution in an open-source framework poses a major challenge to a geoscientific simulator, but when properly managed, the pay-off can be enormous for both the developers and the community at large. Developers must juggle implementing new scientific process models, adopting increasingly efficient numerical methods and programming paradigms, and changing funding sources (or a total lack of funding), while also ensuring that legacy code remains functional and reported bugs are fixed in a timely manner. With robust software engineering and a plan for long-term maintenance, a simulator can evolve over time, incorporating and leveraging many advances in the computational and domain sciences. In this positive light, what practices in software engineering and code maintenance can be employed within open-source development to maximize the positive aspects of software evolution and community contributions while minimizing its negative side effects? This presentation discusses steps taken in the development of PFLOTRAN (www.pflotran.org), an open source, massively parallel subsurface simulator for multiphase, multicomponent, and multiscale reactive flow and transport processes in porous media. As PFLOTRAN's user base and development team continues to grow, it has become increasingly important to implement strategies which ensure sustainable software development while maintaining software quality and community confidence. In this presentation, we will share our experiences and "lessons learned" within the context of our open-source development framework and community engagement efforts. Topics discussed will include how we've leveraged both standard software engineering principles, such as coding standards, version control, and automated testing, as well as the unique advantages of object-oriented design in process model coupling, to ensure software quality and confidence. We will also be prepared to discuss the major challenges faced by most open-source software teams, such as on-boarding new developers or one-time contributions, dealing with competitors or lookie-loos, and other downsides of complete transparency, as well as our approach to community engagement, including a user group email list, hosting short courses and workshops for new users, and maintaining a website. SAND2017-8174A
Remote Internet access to advanced analytical facilities: a new approach with Web-based services.
Sherry, N; Qin, J; Fuller, M Suominen; Xie, Y; Mola, O; Bauer, M; McIntyre, N S; Maxwell, D; Liu, D; Matias, E; Armstrong, C
2012-09-04
Over the past decade, the increasing availability of the World Wide Web has held out the possibility that the efficiency of scientific measurements could be enhanced in cases where experiments were being conducted at distant facilities. Examples of early successes have included X-ray diffraction (XRD) experimental measurements of protein crystal structures at synchrotrons and access to scanning electron microscopy (SEM) and NMR facilities by users from institutions that do not possess such advanced capabilities. Experimental control, visual contact, and receipt of results have used some form of X forwarding and/or VNC (virtual network computing) software that transfers the screen image of a server at the experimental site to that of the users' home site. A more recent development is a web services platform called Science Studio that provides teams of scientists with secure links to experiments at one or more advanced research facilities. The software provides a widely distributed team with a set of controls and screens to operate, observe, and record essential parts of the experiment. As well, Science Studio provides high speed network access to computing resources to process the large data sets that are often involved in complex experiments. The simple web browser and the rapid transfer of experimental data to a processing site allow efficient use of the facility and assist decision making during the acquisition of the experimental results. The software provides users with a comprehensive overview and record of all parts of the experimental process. A prototype network is described involving X-ray beamlines at two different synchrotrons and an SEM facility. An online parallel processing facility has been developed that analyzes the data in near-real time using stream processing. Science Studio can be expanded to include many other analytical applications, providing teams of users with rapid access to processed results along with the means for detailed discussion of their significance.
Investigating Team Cohesion in COCOMO II.2000
ERIC Educational Resources Information Center
Snowdeal-Carden, Betty A.
2013-01-01
Software engineering is team oriented and intensely complex, relying on human collaboration and creativity more than any other engineering discipline. Poor software estimation is a problem that within the United States costs over a billion dollars per year. Effective measurement of team cohesion is foundationally important to gain accurate…
Teaching Tip: Managing Software Engineering Student Teams Using Pellerin's 4-D System
ERIC Educational Resources Information Center
Doman, Marguerite; Besmer, Andrew; Olsen, Anne
2015-01-01
In this article, we discuss the use of Pellerin's Four Dimension Leadership System (4-D) as a way to manage teams in a classroom setting. Over a 5-year period, we used a modified version of the 4-D model to manage teams within a senior level Software Engineering capstone course. We found that this approach for team management in a classroom…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prescott, Ryan; Marger, Bernard L.; Chiu, Ailsa
During the second iteration of the US NDC Modernization Elaboration phase (E2), the SNL US NDC Modernization project team completed follow-on COTS surveys & exploratory prototyping related to the Object Storage & Distribution (OSD) mechanism, and the processing control software infrastructure. This report summarizes the E2 prototyping work.
2011-03-01
performance of Federal Government Contract Number FA8721-05- C -0003 with Carnegie Mellon University for the operation of the Software Engineering... C Roles and Responsibilities 195 Appendix D Reporting Requirements and Options 201 Appendix E Managed Discovery 203 Appendix F Scoping and...Upgrade Team (SUT) • Mary Busby , Lockheed Martin • Palma Buttles-Valdez, Software Engineering Institute • Paul Byrnes, Integrated System Diagnostics
Remediating Non-Positive Definite State Covariances for Collision Probability Estimation
NASA Technical Reports Server (NTRS)
Hall, Doyle T.; Hejduk, Matthew D.; Johnson, Lauren C.
2017-01-01
The NASA Conjunction Assessment Risk Analysis team estimates the probability of collision (Pc) for a set of Earth-orbiting satellites. The Pc estimation software processes satellite position and velocity states and their associated covariance matrices. On occasion, the software encounters non-positive definite (NPD) state covariances, which can adversely affect or prevent the Pc estimation process. Interpolation inaccuracies appear to account for the majority of such covariances, although other mechanisms contribute also. This paper investigates the origin of NPD state covariance matrices, three different methods for remediating these covariances when and if necessary, and the associated effects on the Pc estimation process.
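One widely used remediation for a non-positive definite covariance is eigenvalue clipping, sketched below for illustration; the paper compares several remediation methods, and this sketch is not drawn from its specific algorithms. The example matrix is invented.

```python
# One common remediation (eigenvalue clipping) shown as an illustration only; the paper
# evaluates several methods, and this sketch does not reproduce its specific algorithms.
import numpy as np

def remediate_npd(cov, floor=0.0):
    """Clip negative eigenvalues of a symmetric covariance to a small floor value."""
    sym = 0.5 * (cov + cov.T)                  # enforce symmetry first
    vals, vecs = np.linalg.eigh(sym)
    clipped = np.maximum(vals, floor)
    return vecs @ np.diag(clipped) @ vecs.T

cov = np.array([[ 4.0,  2.0,  0.0],
                [ 2.0,  1.0,  0.1],
                [ 0.0,  0.1, -0.01]])          # slightly non-positive definite example
fixed = remediate_npd(cov, floor=1e-12)
print(np.linalg.eigvalsh(cov))                 # contains a negative eigenvalue
print(np.linalg.eigvalsh(fixed))               # all eigenvalues >= floor
```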
Enhancing Collaborative Learning through Group Intelligence Software
NASA Astrophysics Data System (ADS)
Tan, Yin Leng; Macaulay, Linda A.
Employers increasingly demand not only academic excellence from graduates but also excellent interpersonal skills and the ability to work collaboratively in teams. This paper discusses the role of Group Intelligence software in helping to develop these higher-order skills in the context of an enquiry-based learning (EBL) project. The software supports teams in generating ideas, categorizing, prioritizing, voting and multi-criteria decision making, and automatically generates a report of each team session. Students worked in a Group Intelligence lab designed to support both face-to-face and computer-mediated communication, and employers provided feedback at two key points in the year-long team project. Evaluation of the effectiveness of Group Intelligence software in collaborative learning was based on five key concepts: creativity, participation, productivity, engagement and understanding.
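As an illustration of the multi-criteria decision support mentioned above, the sketch below ranks team-generated ideas by a weighted sum of averaged criterion votes; the criteria, weights, and ideas are invented for the example and are not features of the specific Group Intelligence tool used in the course.

```python
# Illustrative only: a weighted-sum multi-criteria ranking of team-generated ideas,
# the kind of aggregation a Group Intelligence tool might perform after voting.
criteria_weights = {"feasibility": 0.4, "impact": 0.4, "cost": 0.2}

# Each idea holds the team's average vote (1-5) per criterion
ideas = {
    "mobile prototype":  {"feasibility": 4.2, "impact": 3.8, "cost": 2.5},
    "field study":       {"feasibility": 3.0, "impact": 4.5, "cost": 3.5},
    "literature review": {"feasibility": 4.8, "impact": 2.9, "cost": 4.6},
}

def score(votes):
    return sum(criteria_weights[c] * v for c, v in votes.items())

for name, votes in sorted(ideas.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name:18s} {score(votes):.2f}")
```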
The Australian Computational Earth Systems Simulator
NASA Astrophysics Data System (ADS)
Mora, P.; Muhlhaus, H.; Lister, G.; Dyskin, A.; Place, D.; Appelbe, B.; Nimmervoll, N.; Abramson, D.
2001-12-01
Numerical simulation of the physics and dynamics of the entire earth system offers an outstanding opportunity for advancing earth system science and technology but represents a major challenge due to the range of scales and physical processes involved, as well as the magnitude of the software engineering effort required. However, new simulation and computer technologies are bringing this objective within reach. Under a special competitive national funding scheme to establish new Major National Research Facilities (MNRF), the Australian government, together with a consortium of universities and research institutions, has funded construction of the Australian Computational Earth Systems Simulator (ACcESS). The Simulator, or computational virtual earth, will provide the Australian earth systems science community with the research infrastructure required for simulations of dynamical earth processes at scales ranging from microscopic to global. It will consist of thematic supercomputer infrastructure and an earth systems simulation software system. The Simulator models and software will be constructed over a five year period by a multi-disciplinary team of computational scientists, mathematicians, earth scientists, civil engineers and software engineers. The construction team will integrate numerical simulation models (3D discrete elements/lattice solid model, particle-in-cell large deformation finite-element method, stress reconstruction models, multi-scale continuum models, etc.) with geophysical, geological and tectonic models, through advanced software engineering and visualization technologies. When fully constructed, the Simulator aims to provide the software and hardware infrastructure needed to model solid earth phenomena including global scale dynamics and mineralisation processes, crustal scale processes including plate tectonics, mountain building, interacting fault system dynamics, and micro-scale processes that control the geological, physical and dynamic behaviour of earth systems. ACcESS represents a part of Australia's contribution to the APEC Cooperation for Earthquake Simulation (ACES) international initiative. Together with other national earth systems science initiatives including the Japanese Earth Simulator and US General Earthquake Model projects, ACcESS aims to provide a driver for scientific advancement and technological breakthroughs including: quantum leaps in understanding of earth evolution at global, crustal, regional and microscopic scales; new knowledge of the physics of crustal fault systems required to underpin the grand challenge of earthquake prediction; and new understanding and predictive capabilities of geological processes such as tectonics and mineralisation.
Methodology for Software Reliability Prediction. Volume 2.
1987-11-01
Surviving excerpts list factors considered in software reliability prediction and estimation, including the resources, schedule, management, structure, and controls of the overall acquisition program; independent verification and validation; programming team structure; the educational level of team members; the experience level of team members; and the methods used. The stated objectives reference structured programming studies and Government procurement.
ERIC Educational Resources Information Center
Galloway, Edward A.; Michalek, Gabrielle V.
1995-01-01
Discusses Carnegie Mellon University's project to convert the congressional papers of Senator John Heinz into digital format and to provide electronic access to these papers. Topics include collection background, project team structure, document processing, scanning, use of optical character recognition software, verification…
The role of metrics and measurements in a software intensive total quality management environment
NASA Technical Reports Server (NTRS)
Daniels, Charles B.
1992-01-01
Paramax Space Systems began its mission as a member of the Rockwell Space Operations Company (RSOC) team which was the successful bidder on a massive operations consolidation contract for the Mission Operations Directorate (MOD) at JSC. The contract awarded to the team was the Space Transportation System Operations Contract (STSOC). Our initial challenge was to accept responsibility for a very large, highly complex and fragmented collection of software from eleven different contractors and transform it into a coherent, operational baseline. Concurrently, we had to integrate a diverse group of people from eleven different companies into a single, cohesive team. Paramax executives recognized the absolute necessity to develop a business culture based on the concept of employee involvement to execute and improve the complex process of our new environment. Our executives clearly understood that management needed to set the example and lead the way to quality improvement. The total quality management policy and the metrics used in this endeavor are presented.
The software development process at the Chandra X-ray Center
NASA Astrophysics Data System (ADS)
Evans, Janet D.; Evans, Ian N.; Fabbiano, Giuseppina
2008-08-01
Software development for the Chandra X-ray Center Data System began in the mid 1990's, and the waterfall model of development was mandated by our documents. Although we initially tried this approach, we found that a process with elements of the spiral model worked better in our science-based environment. High-level science requirements are usually established by scientists, and provided to the software development group. We follow with review and refinement of those requirements prior to the design phase. Design reviews are conducted for substantial projects within the development team, and include scientists whenever appropriate. Development follows agreed upon schedules that include several internal releases of the task before completion. Feedback from science testing early in the process helps to identify and resolve misunderstandings present in the detailed requirements, and allows review of intangible requirements. The development process includes specific testing of requirements, developer and user documentation, and support after deployment to operations or to users. We discuss the process we follow at the Chandra X-ray Center (CXC) to develop software and support operations. We review the role of the science and development staff from conception to release of software, and some lessons learned from managing CXC software development for over a decade.
Achieving Agility and Stability in Large-Scale Software Development
2013-01-16
A temporary team is assigned to prepare layers and frameworks (presentation layer, domain layer, data access layer) for future feature teams. (Presentation slides, Software Engineering Institute, Carnegie Mellon University.)
ERIC Educational Resources Information Center
Smith, James Robert
2012-01-01
This cross-sectional study explored how IT system and software development team members communicated in the workplace and whether teams that used more verbal communication (and less text-based communication) experienced higher levels of collaboration as measured using the Teamwork Quality (TWQ) scale. Although computer-mediated communication tools…
Big Software for Big Data: Scaling Up Photometry for LSST (Abstract)
NASA Astrophysics Data System (ADS)
Rawls, M.
2017-06-01
(Abstract only) The Large Synoptic Survey Telescope (LSST) will capture mosaics of the sky every few nights, each containing more data than your computer's hard drive can store. As a result, the software to process these images is as critical to the science as the telescope and the camera. I discuss the algorithms and software being developed by the LSST Data Management team to handle such a large volume of data. All of our work is open source and available to the community. Once LSST comes online, our software will produce catalogs of objects and a stream of alerts. These will bring exciting new opportunities for follow-up observations and collaborations with LSST scientists.
Coast to Coast Support of the E-2C Hawkeye using Distributed TSP
2008-05-02
Presentation slides from the NAVAIR Systems/Software Support Center (NSSC) describe a distributed Team Software Process effort: sharing the history of each subgroup and establishing a vision for the future of the newly formed team, establishing team operating principles, and site visits by team management to build bridges between the sites. Keys to success noted include that high quality products from multiple teams delivered on time and cost do not happen in a vacuum, and that there is a need for common processes.
NASA Technical Reports Server (NTRS)
Ferrell, Bob A.; Lewis, Mark E.; Perotti, Jose M.; Brown, Barbara L.; Oostdyk, Rebecca L.; Goetz, Jesse W.
2010-01-01
This paper's main purpose is to detail issues and lessons learned regarding designing, integrating, and implementing Fault Detection Isolation and Recovery (FDIR) for Constellation Exploration Program (CxP) Ground Operations at Kennedy Space Center (KSC). As part of the overall implementation of National Aeronautics and Space Administration's (NASA's) CxP, FDIR is being implemented in three main components of the program (Ares, Orion, and Ground Operations/Processing). While FDIR was not initially part of the design baseline for CxP Ground Operations, NASA felt it was important enough to develop that NASA's Exploration Systems Mission Directorate's (ESMD's) Exploration Technology Development Program (ETDP) initiated a task for it under its Integrated System Health Management (ISHM) research area. This task, referred to as the FDIR project, is a multi-year, multi-center effort. The primary purpose of the FDIR project is to develop a prototype and pathway upon which Fault Detection and Isolation (FDI) may be transitioned into the Ground Operations baseline. Currently, Qualtech Systems Inc. (QSI) Commercial Off The Shelf (COTS) software products Testability Engineering and Maintenance System (TEAMS) Designer and TEAMS RDS/RT are being utilized in the implementation of FDI within the FDIR project. The TEAMS Designer COTS software product is being utilized to model the system with Functional Fault Models (FFMs). A limited set of systems in Ground Operations are being modeled by the FDIR project, and the entire Ares Launch Vehicle is being modeled under the Functional Fault Analysis (FFA) project at Marshall Space Flight Center (MSFC). Integration of the Ares FFMs and the Ground Processing FFMs is also being done under the FDIR project, utilizing the TEAMS Designer COTS software product. One of the most significant challenges related to integration is to ensure that FFMs developed by different organizations can be integrated easily and without errors. Software Interface Control Documents (ICDs) for the FFMs and their usage will be addressed as the solution to this issue. In particular, the advantages and disadvantages of these ICDs across physically separate development groups will be delineated.
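To illustrate the kind of reasoning a functional fault model supports, the sketch below performs single-fault isolation from a small dependency table mapping hypothetical faults to the tests they would fail; the component and test names are invented, and the real FFMs are built and evaluated with the TEAMS toolset rather than hand-coded tables like this.

```python
# Simplified illustration of dependency-matrix fault isolation; the actual FFMs are
# modeled and evaluated with the TEAMS toolset, and these names are invented.
failure_signatures = {              # which tests a given fault would cause to fail
    "valve_stuck":      {"pressure_check", "flow_check"},
    "sensor_bias":      {"pressure_check"},
    "pump_degradation": {"flow_check", "current_check"},
}

def isolate(test_results):
    """Single-fault isolation: keep candidates whose signature covers every failed
    test and is not contradicted by any passing test."""
    failed = {t for t, ok in test_results.items() if not ok}
    passed = {t for t, ok in test_results.items() if ok}
    return [f for f, sig in failure_signatures.items()
            if failed <= sig and not (sig & passed)]

print(isolate({"pressure_check": False, "flow_check": False, "current_check": True}))
# -> ['valve_stuck']
```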
Implementation of Task-Tracking Software for Clinical IT Management.
Purohit, Anne-Maria; Brutscheck, Clemens; Prokosch, Hans-Ulrich; Ganslandt, Thomas; Schneider, Martin
2017-01-01
Often in clinical IT departments, many different methods and IT systems are used for task-tracking and project organization. Based on managers' personal preferences and knowledge about project management methods, tools differ from team to team and even from employee to employee. This causes communication problems, especially when tasks need to be done in cooperation with different teams. Monitoring tasks and resources becomes impossible: there are no defined deliverables, which prevents reliable deadlines. Because of these problems, we implemented task-tracking software which is now in use across all seven teams at the University Hospital Erlangen. Over a period of seven months, a working group defined types of tasks (project, routine task, etc.), workflows, and views to monitor the tasks of the 7 divisions, 20 teams and 340 different IT services. The software has been in use since December 2016.
Agile Methods for Open Source Safety-Critical Software
Enquobahrie, Andinet; Ibanez, Luis; Cheng, Patrick; Yaniv, Ziv; Cleary, Kevin; Kokoori, Shylaja; Muffih, Benjamin; Heidenreich, John
2011-01-01
The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the right amount of ceremony; therefore if safety-critical systems require greater emphasis on activities like formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project employing agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and only adding process elements as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested they are not suitable for safety-critical systems almost a decade ago, we present our experiences as a case study for renewing the discussion. PMID:21799545
Agile Methods for Open Source Safety-Critical Software.
Gary, Kevin; Enquobahrie, Andinet; Ibanez, Luis; Cheng, Patrick; Yaniv, Ziv; Cleary, Kevin; Kokoori, Shylaja; Muffih, Benjamin; Heidenreich, John
2011-08-01
The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the right amount of ceremony; therefore if safety-critical systems require greater emphasis on activities like formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project employing agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and only adding process elements as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested, almost a decade ago, that they are not suitable for safety-critical systems; we present our experiences as a case study for renewing the discussion.
Noninvasive Test Detects Cardiovascular Disease
NASA Technical Reports Server (NTRS)
2007-01-01
At NASA's Jet Propulsion Laboratory (JPL), NASA-developed Video Imaging Communication and Retrieval (VICAR) software laid the groundwork for analyzing images of all kinds. A project seeking to use imaging technology for health care diagnosis began when the imaging team considered using the VICAR software to analyze X-ray images of soft tissue. With marginal success using X-rays, the team applied the same methodology to ultrasound imagery, which was already digitally formatted. The new approach proved successful for assessing amounts of plaque build-up and arterial wall thickness, direct predictors of heart disease, and the result was a noninvasive diagnostic system with the ability to accurately predict heart health. Medical Technologies International Inc. (MTI) further developed and then submitted the technology to a rigorous review process at the FDA, which cleared the software for public use. The software, patented under the name Prowin, is being used in MTI's patented ArterioVision, a carotid intima-media thickness (CIMT) test that uses ultrasound image-capturing and analysis software to noninvasively identify the risk for the major cause of heart attacks and strokes: atherosclerosis. ArterioVision provides a direct measurement of atherosclerosis by safely and painlessly measuring the thickness of the first two layers of the carotid artery wall using an ultrasound procedure and advanced image-analysis software. The technology is now in use in all 50 states and in many countries throughout the world.
Repository-Based Software Engineering Program: Working Program Management Plan
NASA Technical Reports Server (NTRS)
1993-01-01
Repository-Based Software Engineering Program (RBSE) is a National Aeronautics and Space Administration (NASA) sponsored program dedicated to introducing and supporting common, effective approaches to software engineering practices. The process of conceiving, designing, building, and maintaining software systems by using existing software assets that are stored in a specialized operational reuse library or repository, accessible to system designers, is the foundation of the program. In addition to operating a software repository, RBSE promotes (1) software engineering technology transfer, (2) academic and instructional support of reuse programs, (3) the use of common software engineering standards and practices, (4) software reuse technology research, and (5) interoperability between reuse libraries. This Program Management Plan (PMP) is intended to communicate program goals and objectives, describe major work areas, and define a management report and control process. This process will assist the Program Manager, University of Houston at Clear Lake (UHCL) in tracking work progress and describing major program activities to NASA management. The goal of this PMP is to make managing the RBSE program a relatively easy process that improves the work of all team members. The PMP describes work areas addressed and work efforts being accomplished by the program; however, it is not intended as a complete description of the program. Its focus is on providing management tools and management processes for monitoring, evaluating, and administering the program; and it includes schedules for charting milestones and deliveries of program products. The PMP was developed by soliciting and obtaining guidance from appropriate program participants, analyzing program management guidance, and reviewing related program management documents.
ALFA: The new ALICE-FAIR software framework
NASA Astrophysics Data System (ADS)
Al-Turany, M.; Buncic, P.; Hristov, P.; Kollegger, T.; Kouzinopoulos, C.; Lebedev, A.; Lindenstruth, V.; Manafov, A.; Richter, M.; Rybalchenko, A.; Vande Vyvre, P.; Winckler, N.
2015-12-01
The commonalities between the ALICE and FAIR experiments and their computing requirements led to the development of large parts of a common software framework in an experiment-independent way. The FairRoot project has already shown the feasibility of such an approach for the FAIR experiments and extending it beyond FAIR to experiments at other facilities [1, 2]. The ALFA framework is a joint development between the ALICE Online-Offline (O2) and FairRoot teams. ALFA is designed as a flexible, elastic system, which balances reliability and ease of development with performance using multi-processing and multithreading. A message-based approach has been adopted; such an approach will support the use of the software on different hardware platforms, including heterogeneous systems. Each process in ALFA assumes limited communication and reliance on other processes. Such a design will add horizontal scaling (multiple processes) to vertical scaling provided by multiple threads to meet computing and throughput demands. ALFA does not dictate any application protocols. Potentially, any content-based processor or any source can change the application protocol. The framework supports different serialization standards for data exchange between different hardware and software languages.
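To make the message-based, share-nothing style concrete, here is a toy three-stage pipeline using Python's standard multiprocessing queues. It is only an illustration of the pattern: the stage roles and the message payload are invented, and ALFA's real transport and devices are implemented in C++ within the FairRoot software stack, not in Python.

```python
# A minimal sketch of a message-based pipeline: each stage communicates only through
# messages and assumes nothing about which process produced them, so more processes
# can be added for horizontal scaling. Stage names and payloads are invented.
import multiprocessing as mp

def sampler(out_q, n_events):
    """Produce raw 'events' and push them downstream as messages."""
    for i in range(n_events):
        out_q.put({"event": i, "adc": [i, i + 1, i + 2]})
    out_q.put(None)  # end-of-stream marker

def processor(in_q, out_q):
    """Consume messages, transform them, and forward the result downstream."""
    while (msg := in_q.get()) is not None:
        msg["sum_adc"] = sum(msg["adc"])
        out_q.put(msg)
    out_q.put(None)

def sink(in_q):
    while (msg := in_q.get()) is not None:
        print(f"event {msg['event']}: sum_adc={msg['sum_adc']}")

if __name__ == "__main__":
    q1, q2 = mp.Queue(), mp.Queue()
    stages = [mp.Process(target=sampler, args=(q1, 5)),
              mp.Process(target=processor, args=(q1, q2)),
              mp.Process(target=sink, args=(q2,))]
    for p in stages:
        p.start()
    for p in stages:
        p.join()
```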
Idea Paper: The Lifecycle of Software for Scientific Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubey, Anshu; McInnes, Lois C.
The software lifecycle is a well researched topic that has produced many models to meet the needs of different types of software projects. However, one class of projects, software development for scientific computing, has received relatively little attention from lifecycle researchers. In particular, software for end-to-end computations for obtaining scientific results has received few lifecycle proposals and no formalization of a development model. An examination of development approaches employed by the teams implementing large multicomponent codes reveals a great deal of similarity in their strategies. This idea paper formalizes these related approaches into a lifecycle model for end-to-end scientific application software, featuring loose coupling between submodels for development of infrastructure and scientific capability. We also invite input from stakeholders to converge on a model that captures the complexity of these development processes and provides needed lifecycle guidance to the scientific software community.
Payload Operations Support Team Tools
NASA Technical Reports Server (NTRS)
Askew, Bill; Barry, Matthew; Burrows, Gary; Casey, Mike; Charles, Joe; Downing, Nicholas; Jain, Monika; Leopold, Rebecca; Luty, Roger; McDill, David;
2007-01-01
Payload Operations Support Team Tools is a software system that assists in (1) development and testing of software for payloads to be flown aboard the space shuttles and (2) training of payload customers, flight controllers, and flight crews in payload operations.
Managing complex research datasets using electronic tools: A meta-analysis exemplar
Brown, Sharon A.; Martin, Ellen E.; Garcia, Theresa J.; Winter, Mary A.; García, Alexandra A.; Brown, Adama; Cuevas, Heather E.; Sumlin, Lisa L.
2013-01-01
Meta-analyses of broad scope and complexity require investigators to organize many study documents and manage communication among several research staff. Commercially available electronic tools, e.g., EndNote, Adobe Acrobat Pro, Blackboard, Excel, and IBM SPSS Statistics (SPSS), are useful for organizing and tracking the meta-analytic process, as well as enhancing communication among research team members. The purpose of this paper is to describe the electronic processes we designed, using commercially available software, for an extensive quantitative model-testing meta-analysis we are conducting. Specific electronic tools improved the efficiency of (a) locating and screening studies, (b) screening and organizing studies and other project documents, (c) extracting data from primary studies, (d) checking data accuracy and analyses, and (e) communication among team members. The major limitation in designing and implementing a fully electronic system for meta-analysis was the requisite upfront time to: decide on which electronic tools to use, determine how these tools would be employed, develop clear guidelines for their use, and train members of the research team. The electronic process described here has been useful in streamlining the process of conducting this complex meta-analysis and enhancing communication and sharing documents among research team members. PMID:23681256
Managing complex research datasets using electronic tools: a meta-analysis exemplar.
Brown, Sharon A; Martin, Ellen E; Garcia, Theresa J; Winter, Mary A; García, Alexandra A; Brown, Adama; Cuevas, Heather E; Sumlin, Lisa L
2013-06-01
Meta-analyses of broad scope and complexity require investigators to organize many study documents and manage communication among several research staff. Commercially available electronic tools, for example, EndNote, Adobe Acrobat Pro, Blackboard, Excel, and IBM SPSS Statistics (SPSS), are useful for organizing and tracking the meta-analytic process as well as enhancing communication among research team members. The purpose of this article is to describe the electronic processes designed, using commercially available software, for an extensive, quantitative model-testing meta-analysis. Specific electronic tools improved the efficiency of (a) locating and screening studies, (b) screening and organizing studies and other project documents, (c) extracting data from primary studies, (d) checking data accuracy and analyses, and (e) communication among team members. The major limitation in designing and implementing a fully electronic system for meta-analysis was the requisite upfront time to decide on which electronic tools to use, determine how these tools would be used, develop clear guidelines for their use, and train members of the research team. The electronic process described here has been useful in streamlining the process of conducting this complex meta-analysis and enhancing communication and sharing documents among research team members.
Improving collaborative learning in online software engineering education
NASA Astrophysics Data System (ADS)
Neill, Colin J.; DeFranco, Joanna F.; Sangwan, Raghvinder S.
2017-11-01
Team projects are commonplace in software engineering education. They address a key educational objective, provide students critical experience relevant to their future careers, allow instructors to set problems of greater scale and complexity than could be tackled individually, and are a vehicle for socially constructed learning. While all student teams experience challenges, those in fully online programmes must also deal with remote working, asynchronous coordination, and computer-mediated communications all of which contribute to greater social distance between team members. We have developed a facilitation framework to aid team collaboration and have demonstrated its efficacy, in prior research, with respect to team performance and outcomes. Those studies indicated, however, that despite experiencing improved project outcomes, students working in effective software engineering teams did not experience significantly improved individual achievement. To address this deficiency we implemented theoretically grounded refinements to the collaboration model based upon peer-tutoring research. Our results indicate a modest, but statistically significant (p = .08), improvement in individual achievement using this refined model.
Taking advantage of ground data systems attributes to achieve quality results in testing software
NASA Technical Reports Server (NTRS)
Sigman, Clayton B.; Koslosky, John T.; Hageman, Barbara H.
1994-01-01
During the software development life cycle process, basic testing starts with the development team. At the end of the development process, an acceptance test is performed for the user to ensure that the deliverable is acceptable. Ideally, the delivery is an operational product with zero defects. However, the goal of zero defects is rarely achieved, though it is approached to varying degrees. With the emphasis on building low-cost ground support systems while maintaining a quality product, a key element in the test process is simulator capability. This paper reviews the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) test tool that is used in the acceptance test process for unmanned satellite operations control centers. The TASS is designed to support the development, test and operational environments of the Goddard Space Flight Center (GSFC) operations control centers. The TASS uses the same basic architecture as the operations control center. This architecture is characterized by its use of distributed processing, industry standards, commercial off-the-shelf (COTS) hardware and software components, and reusable software. The TASS uses much of the same TPOCC architecture and reusable software that the operations control center developer uses. The TASS also makes use of reusable simulator software in the mission-specific versions of the TASS. Very little new software needs to be developed, mainly mission-specific telemetry communication and command processing software. By taking advantage of the ground data system attributes, successful software reuse for operational systems provides the opportunity to extend the reuse concept into the test area. Consistency in test approach is a major step in achieving quality results.
Effective Team Support: From Modeling to Software Agents
NASA Technical Reports Server (NTRS)
Remington, Roger W. (Technical Monitor); John, Bonnie; Sycara, Katia
2003-01-01
The purpose of this research contract was to perform multidisciplinary research between CMU psychologists, computer scientists and engineers and NASA researchers to design a next generation collaborative system to support a team of human experts and intelligent agents. To achieve robust performance enhancement of such a system, we had proposed to perform task and cognitive modeling to thoroughly understand the impact technology makes on the organization and on key individual personnel. Guided by cognitively-inspired requirements, we would then develop software agents that support the human team in decision making, information filtering, information distribution and integration to enhance team situational awareness. During the period covered by this final report, we made substantial progress in modeling infrastructure and task infrastructure. Work is continuing under a different contract to complete empirical data collection, cognitive modeling, and the building of software agents to support the team's tasks.
NASA Technical Reports Server (NTRS)
Remington, Roger W. (Technical Monitor); John, Bonnie E.; Sycara, Katia
2005-01-01
The purpose of this research contract was to perform multidisciplinary research between CMU psychologists, computer scientists and NASA researchers to design a next generation collaborative system to support a team of human experts and intelligent agents. To achieve robust performance enhancement of such a system, we had proposed to perform task and cognitive modeling to thoroughly understand the impact technology makes on the organization and on key individual personnel. Guided by cognitively-inspired requirements, we would then develop software agents that support the human team in decision making, information filtering, information distribution and integration to enhance team situational awareness. During the period covered by this final report, we made substantial progress in completing a system for empirical data collection, cognitive modeling, and the building of software agents to support a team's tasks, and in running experiments for the collection of baseline data.
Development of a PC-based ground support system for a small satellite instrument
NASA Astrophysics Data System (ADS)
Deschambault, Robert L.; Gregory, Philip R.; Spenler, Stephen; Whalen, Brian A.
1993-11-01
The importance of effective ground support for the remote control and data retrieval of a satellite instrument cannot be overstated. Problems with ground support may include the need to base personnel at a ground tracking station for extended periods, and the delay between the instrument observation and the processing of the data by the science team. Flexible solutions to such problems in the case of small satellite systems are provided by using low-cost, powerful personal computers and off-the-shelf software for data acquisition and processing, and by using the Internet as a communication pathway to enable scientists to view and manipulate satellite data in real time at any ground location. The personal computer based ground support system is illustrated for the case of the cold plasma analyzer flown on the Freja satellite. Commercial software was used as building blocks for writing the ground support equipment software. Several levels of hardware support, including unit tests and development, functional tests, and integration were provided by portable and desktop personal computers. Satellite stations in Saskatchewan and Sweden were linked to the science team via phone lines and the Internet, which provided remote control through a central point. These successful strategies will be used on future small satellite space programs.
Case Study: Accelerating Process Improvement by Integrating the TSP and CMMI
2005-12-01
improve their work? Watts S. Humphrey, a founder of the process improvement initiative at the SEI, decided to apply SW-CMM principles to the...authorized PSP instructor. At Schwalb's urging, Watts Humphrey briefed the SLT on the PSP and TSP, and after the briefing, the team understood...hefley.html. [Humphrey 96] Humphrey, Watts S. Introduction to the Personal Software Process. Boston, MA: Addison-Wesley Publishing Company, Inc., 1996
Recipe for Success: Digital Viewables
NASA Technical Reports Server (NTRS)
LaPha, Steven; Gaydos, Frank
2014-01-01
The Engineering Services Contract (ESC) and Information Management Communication Support contract (IMCS) at Kennedy Space Center (KSC) provide services to NASA with respect to flight and ground systems design and development. These groups provide the necessary tools, aid, and best-practice methodologies required for efficient, optimized design and process development. The team is responsible for configuring and implementing systems and software, along with training, documentation, and administering standards. The team supports over 200 engineers and design specialists with the use of Windchill, Creo Parametric, NX, AutoCAD, and a variety of other design and analysis tools.
Streamlining Software Aspects of Certification: Technical Team Report on the First Industry Workshop
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Holloway, C. Michael; Knight, John C.; Leveson, Nancy G.; Yang, Jeffrey C.; Dorsey, Cheryl A.; McCormick, G. Frank
1998-01-01
To address concerns about time and expense associated with software aspects of certification, the Federal Aviation Administration (FAA) began the Streamlining Software Aspects of Certification (SSAC) program. As part of this program, a Technical Team was established to determine whether the cost and time associated with certifying aircraft can be reduced while maintaining or improving safety, with the intent of impacting the FAA's Flight 2000 program. The Technical Team conducted a workshop to gain a better understanding of the major concerns in industry about software cost and schedule. Over 120 people attended the workshop, including representatives from the FAA, commercial transport and general aviation aircraft manufacturers and suppliers, and procurers and developers of non-airborne systems; more than 200 issues about software aspects of certification were recorded. This paper provides an overview of the SSAC program, motivation for the workshop, details of the workshop activities and outcomes, and recommendations for follow-on work.
Managing distributed software development in the Virtual Astronomical Observatory
NASA Astrophysics Data System (ADS)
Evans, Janet D.; Plante, Raymond L.; Boneventura, Nina; Busko, Ivo; Cresitello-Dittmar, Mark; D'Abrusco, Raffaele; Doe, Stephen; Ebert, Rick; Laurino, Omar; Pevunova, Olga; Refsdal, Brian; Thomas, Brian
2012-09-01
The U.S. Virtual Astronomical Observatory (VAO) is a product-driven organization that provides new scientific research capabilities to the astronomical community. Software development for the VAO follows a lightweight framework that guides development of science applications and infrastructure. Challenges to be overcome include distributed development teams, part-time efforts, and highly constrained schedules. We describe the process we followed to conquer these challenges while developing Iris, the VAO application for analysis of 1-D astronomical spectral energy distributions (SEDs). Iris was successfully built and released in less than a year with a team distributed across four institutions. The project followed existing International Virtual Observatory Alliance inter-operability standards for spectral data and contributed a SED library as a by-product of the project. We emphasize lessons learned that will be folded into future development efforts. In our experience, a well-defined process that provides guidelines to ensure the project is cohesive and stays on track is key to success. Internal product deliveries with a planned test and feedback loop are critical. Release candidates are measured against use cases established early in the process, and provide the opportunity to assess priorities and make course corrections during development. Also key is the participation of a stakeholder such as a lead scientist who manages the technical questions, advises on priorities, and is actively involved as a lead tester. Finally, frequent scheduled communications (for example a bi-weekly tele-conference) assure issues are resolved quickly and the team is working toward a common vision.
NASA Astrophysics Data System (ADS)
Fu, L.; West, P.; Zednik, S.; Fox, P. A.
2013-12-01
For simple portals such as vocabulary-based services, which contain small amounts of data and require only hyper-textual representation, it is often overkill to adopt the whole software stack of database, middleware and front end, or to use a general Web development framework as the starting point of development. Directly combining open source software is a much more favorable approach. However, our experience with the Coastal and Marine Spatial Planning Vocabulary (CMSPV) service portal shows that there are still issues, such as system configuration and accommodating a new team member, that need to be handled carefully. In this contribution, we share our experience in the context of the CMSPV portal, and focus on the tools and mechanisms we've developed to ease the configuration job and the incorporation process of new project members. We discuss the configuration issues that arise when we don't have complete control over how the software in use is configured and need to follow existing configuration styles that may not be well documented, especially when multiple pieces of such software need to work together as a combined system. As for the CMSPV portal, it is built on two pieces of open source software that are still under rapid development: a Fuseki data server and an Epimorphics Linked Data API (ELDA) front end. Both lack mature documentation and tutorials. We developed comparison and labeling tools to ease the problem of system configuration. Another problem that slowed down the project is that project members came and went during the development process, so new members needed to start with a partially configured system and incomplete documentation left by old members. We developed documentation/tutorial maintenance mechanisms based on our comparison and labeling tools to make it easier for new members to be incorporated into the project. These tools and mechanisms also provided benefit to other projects that reused the software components from the CMSPV system.
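The comparison and labeling tools themselves are not described in detail here; as a purely illustrative stand-in, the sketch below diffs two flat key/value configurations and labels each differing entry. The key names and the simple key=value format are assumptions for the example; Fuseki and ELDA each use their own configuration formats.

```python
# Hypothetical sketch of a configuration-comparison step: label each key as missing,
# extra, or changed relative to a known-good reference configuration.

def load_config(path):
    """Optional helper: parse a flat 'key = value' file into a dict (format assumed)."""
    entries = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            entries[key.strip()] = value.strip()
    return entries

def compare_configs(reference, candidate):
    """Return a label for every key that differs between the two configurations."""
    labels = {}
    for key in reference.keys() | candidate.keys():
        if key not in candidate:
            labels[key] = "missing"
        elif key not in reference:
            labels[key] = "extra"
        elif reference[key] != candidate[key]:
            labels[key] = f"changed ({reference[key]!r} -> {candidate[key]!r})"
    return labels

if __name__ == "__main__":
    # invented entries standing in for a reference setup and a new member's attempt
    ref = {"port": "3030", "dataset": "/cmspv", "timeout": "30000"}
    new = {"port": "3030", "dataset": "/cmspv-test", "cache": "on"}
    for key, label in sorted(compare_configs(ref, new).items()):
        print(f"{key}: {label}")
```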
The Navy’s Management of Software Licenses Needs Improvement
2013-08-07
Enterprise Software Licensing (ESL) as a primary DON efficiency target. Through policy and Integrated Product Team actions, this efficiency...review, as well as with DoD Enterprise Software Initiative (ESI) Blanket Purchase Agreements and any related Federal Acquisition Regulation and General...organizational and multi-functional DON ESL team. The DON is also participating in DoD-level enterprise software licensing projects through the DoD
1979-12-01
team programming in reducing software development costs relative to ad hoc approaches and improving software product quality relative to...are interpreted as demonstrating the advantages of disciplined team programming in reducing software development costs relative to ad hoc approaches...is due partially to the cost and impracticality of a valid experimental setup within a production environment. Thus the question remains, are
Ellis, Heidi J C; Nowling, Ronald J; Vyas, Jay; Martyn, Timothy O; Gryk, Michael R
2011-04-11
The CONNecticut Joint University Research (CONNJUR) team is a group of biochemical and software engineering researchers at multiple institutions. The vision of the team is to develop a comprehensive application that integrates a variety of existing analysis tools with workflow and data management to support the process of protein structure determination using Nuclear Magnetic Resonance (NMR). The use of multiple disparate tools and lack of data management, currently the norm in NMR data processing, provides strong motivation for such an integrated environment. This manuscript briefly describes the domain of NMR as used for protein structure determination and explains the formation of the CONNJUR team and its operation in developing the CONNJUR application. The manuscript also describes the evolution of the CONNJUR application through four prototypes and describes the challenges faced while developing the CONNJUR application and how those challenges were met.
NASA Astrophysics Data System (ADS)
Plasson, Ph.
2006-11-01
LESIA, in close cooperation with CNES, DLR and IWF, is responsible for the tests and validation of the CoRoT instrument digital process unit, which is made up of the BEX and DPU assembly. The main part of the work consisted of validating the DPU software and testing the BEX/DPU coupling. This work took more than two years due to the central role of the software tested and its technical complexity. The first task in the validation process was to carry out the acceptance tests of the DPU software. These tests consisted of checking each of the 325 requirements identified in the URD (User Requirements Document) and were played in a configuration using the DPU coupled to a BEX simulator. During the acceptance tests, all the transversal functionalities of the DPU software, like the TC/TM management, the state machine management, the BEX driving, the system monitoring or the maintenance functionalities, were checked in depth. The functionalities associated with the seismology and exoplanetology processing, like the loading of window and mask descriptors or the configuration of the service execution parameters, were also exhaustively tested. After having validated the DPU software against the user requirements using a BEX simulator, the following step consisted of coupling the DPU and the BEX in order to check that the formed unit worked correctly and met the performance requirements. These tests were conducted in two phases: the first one was devoted to the functional aspects and the tests of interface, and the second one to the performance aspects. The performance tests were based on the use of the DPU software scientific services and on the use of full images representative of a realistic sky as inputs. These tests were also based on the use of a reference set of windows and parameters, which was provided by the scientific team and was representative, in terms of load and complexity, of the one that could be used during the observation mode of the CoRoT instrument. They were played in a configuration using either a BCC simulator or a real BCC coupled to a video simulator to feed the BEX/DPU unit. The validation of the scientific algorithms was conducted in parallel to the phase of the BEX/DPU coupling tests. The objective of this phase was to check that the algorithms implemented in the scientific services of the DPU software were in good conformity with those specified in the URD and that the obtained numerical precision corresponded to that expected. Forty cases of tests were defined, covering the fine and rough angular error measurement processing, the rejection of the brilliant pixels, the subtraction of the offset and the sky background, the photometry algorithms, the SAA handling and reference image management. For each test case, the LESIA scientific team produced, by simulation, using the model instrument, the dynamic data files and the parameter sets used to feed, on the one hand, the DPU and, on the other hand, a model of the onboard software. These data files correspond to FITS images (black windows, star windows, offset windows) containing more or less disturbances and making it possible to test the DPU software in dynamic mode over durations of up to 48 hours. To perform the test and validation activities of the CoRoT instrument digital process unit, a set of software testing tools was developed by LESIA (Software Ground Support Equipment, hereafter "SGSE").
Thanks to their versatility and modularity, these software testing tools were actually used during all the activities of integration, tests and validation of the instrument and its subsystems CoRoTCase and CoRoTCam. The CoRoT SGSE were specified, designed and developed by LESIA. The objective was to have a software system allowing the users (validation team of the onboard software, instrument integration team, etc.) to remotely control and monitor the whole instrument or only one of the subsystems of the instrument like the DPU coupled to a simulator BEX or the BEX/DPU unit coupled to a BCC simulator. The idea was to be able to interact in real time with the system under test by driving the various EGSE, but also to play test procedures implemented as scripts organized into libraries, to record the telemetries and housekeeping data in a database, and to be able to carry out post-mortem analyses.
The Cooperate Assistive Teamwork Environment for Software Description Languages.
Groenda, Henning; Seifermann, Stephan; Müller, Karin; Jaworek, Gerhard
2015-01-01
Versatile description languages such as the Unified Modeling Language (UML) are commonly used in software engineering across different application domains in theory and practice. They often use graphical notations and leverage visual memory for expressing complex relations. Those notations are hard to access for people with visual impairment and impede their smooth inclusion in an engineering team. Existing approaches provide textual notations but require manual synchronization between the notations. This paper presents requirements for an accessible and language-aware teamwork environment as well as our plan for the assistive implementation of Cooperate. An industrial software engineering team consisting of people with and without visual impairment will evaluate the implementation.
Evolution of Software-Only-Simulation at NASA IV and V
NASA Technical Reports Server (NTRS)
McCarty, Justin; Morris, Justin; Zemerick, Scott
2014-01-01
Software-Only-Simulations have been an emerging but quickly developing field of study throughout NASA. The NASA Independent Verification & Validation (IV&V) Independent Test Capability (ITC) team has been rapidly building a collection of simulators for a wide range of NASA missions. ITC specializes in full end-to-end simulations that enable developers, V&V personnel, and operators to test-as-you-fly. In four years, the team has delivered a wide variety of spacecraft simulations that have ranged from low-complexity science missions such as the Global Precipitation Measurement (GPM) satellite and the Deep Space Climate Observatory (DSCOVR) to extremely complex missions such as the James Webb Space Telescope (JWST) and the Space Launch System (SLS). This paper describes the evolution of ITC's technologies and processes that have been utilized to design, implement, and deploy end-to-end simulation environments for various NASA missions. A comparison of mission simulators is discussed with a focus on technology and lessons learned in complexity, hardware modeling, and continuous integration. The paper also describes the methods for executing the missions' unmodified flight software binaries (not cross-compiled) for verification and validation activities.
NASA Technical Reports Server (NTRS)
Madden, Michael G.; Wyrick, Roberta; O'Neill, Dale E.
2005-01-01
Space Shuttle Processing is a complicated and highly variable project. The planning and scheduling problem, categorized as a Resource-Constrained Stochastic Project Scheduling Problem (RC-SPSP), has a great deal of variability in the Orbiter Processing Facility (OPF) process flow from one flight to the next. Simulation modeling is a useful tool for estimating the makespan of the overall process. However, simulation requires a model to be developed, which is itself a labor- and time-consuming effort. With such a dynamic process, the model would often be out of synchronization with the actual process, limiting the applicability of the simulation answers in solving the actual estimation problem. Integration of the TEAMS model-enabling software with our existing scheduling software is the basis of our solution. This paper explains the approach used to auto-generate a simulation model from planning and scheduling efforts and available data.
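To show why simulation is attractive for a stochastic flow like this, here is a minimal Monte Carlo makespan estimate for a small task network with precedence constraints and triangular duration distributions. The task names, durations, and links are invented and bear no relation to the real OPF flow or to the TEAMS-based model generation described above.

```python
# A minimal Monte Carlo sketch of makespan estimation for a stochastic project network.
# Tasks, precedence links, and triangular duration parameters are placeholders.
import random

# task -> ((low, most_likely, high) duration in shifts, list of predecessor tasks)
TASKS = {
    "power_up":     ((1, 2, 4), []),
    "tps_inspect":  ((3, 5, 9), ["power_up"]),
    "ome_checkout": ((2, 3, 6), ["power_up"]),
    "payload_bay":  ((2, 4, 7), ["tps_inspect", "ome_checkout"]),
    "closeout":     ((1, 2, 3), ["payload_bay"]),
}

def sample_makespan():
    """Draw one random schedule realization and return its makespan."""
    finish = {}
    for task, (params, preds) in TASKS.items():  # dict order is a valid topological order here
        low, mode, high = params
        start = max((finish[p] for p in preds), default=0.0)
        finish[task] = start + random.triangular(low, high, mode)
    return max(finish.values())

if __name__ == "__main__":
    samples = sorted(sample_makespan() for _ in range(10_000))
    mean = sum(samples) / len(samples)
    p90 = samples[int(0.9 * len(samples))]
    print(f"mean makespan: {mean:.1f} shifts, 90th percentile: {p90:.1f} shifts")
```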
Using failure mode and effects analysis to plan implementation of smart i.v. pump technology.
Wetterneck, Tosha B; Skibinski, Kathleen A; Roberts, Tanita L; Kleppin, Susan M; Schroeder, Mark E; Enloe, Myra; Rough, Steven S; Hundt, Ann Schoofs; Carayon, Pascale
2006-08-15
Failure mode and effects analysis (FMEA) was used to evaluate a smart i.v. pump as it was implemented into a redesigned medication-use process. A multidisciplinary team conducted a FMEA to guide the implementation of a smart i.v. pump that was designed to prevent pump programming errors. The smart i.v. pump was equipped with a dose-error reduction system that included a pre-defined drug library in which dosage limits were set for each medication. Monitoring for potential failures and errors occurred for three months postimplementation of FMEA. Specific measures were used to determine the success of the actions that were implemented as a result of the FMEA. The FMEA process at the hospital identified key failure modes in the medication process with the use of the old and new pumps, and actions were taken to avoid errors and adverse events. I.V. pump software and hardware design changes were also recommended. Thirteen of the 18 failure modes reported in practice after pump implementation had been identified by the team. A beneficial outcome of FMEA was the development of a multidisciplinary team that provided the infrastructure for safe technology implementation and effective event investigation after implementation. With the continual updating of i.v. pump software and hardware after implementation, FMEA can be an important starting place for safe technology choice and implementation and can produce site experts to follow technology and process changes over time. FMEA was useful in identifying potential problems in the medication-use process with the implementation of new smart i.v. pumps. Monitoring for system failures and errors after implementation remains necessary.
Development and application of an acceptance testing model
NASA Technical Reports Server (NTRS)
Pendley, Rex D.; Noonan, Caroline H.; Hall, Kenneth R.
1992-01-01
The process of acceptance testing large software systems for NASA has been analyzed, and an empirical planning model of the process constructed. This model gives managers accurate predictions of the staffing needed, the productivity of a test team, and the rate at which the system will pass. Applying the model to a new system shows a high level of agreement between the model and actual performance. The model also gives managers an objective measure of process improvement.
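The abstract does not give the form of the empirical model, so the sketch below is a purely hypothetical stand-in: it assumes a linear staffing/productivity relation and an exponential pass-rate curve simply to show how such a planning model might be exercised.

```python
# Illustrative stand-in for an acceptance-test planning model; the coefficients and
# functional forms below are assumptions, not the model from the paper.
import math

def plan_acceptance_test(test_items, staff, items_per_staff_week, pass_rate_constant=0.25):
    """Estimate test duration and the weekly count of items executed and passed."""
    weeks = math.ceil(test_items / (staff * items_per_staff_week))
    schedule = []
    for week in range(1, weeks + 1):
        executed = min(test_items, week * staff * items_per_staff_week)
        # assume the passing fraction approaches 1.0 exponentially as retests clear defects
        passed = executed * (1.0 - math.exp(-pass_rate_constant * week))
        schedule.append((week, executed, round(passed)))
    return weeks, schedule

if __name__ == "__main__":
    total_weeks, schedule = plan_acceptance_test(test_items=400, staff=5, items_per_staff_week=10)
    print(f"estimated duration: {total_weeks} weeks")
    for week, executed, passed in schedule:
        print(f"week {week:2d}: executed {executed:3d}, passed ~{passed:3d}")
```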
Continuous Energy Photon Transport Implementation in MCATK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Terry R.; Trahan, Travis John; Sweezy, Jeremy Ed
2016-10-31
The Monte Carlo Application ToolKit (MCATK) code development team has implemented Monte Carlo photon transport into the MCATK software suite. The current particle transport capabilities in MCATK, which process the tracking and collision physics, have been extended to enable tracking of photons using the same continuous energy approximation. We describe the four photoatomic processes implemented, which are coherent scattering, incoherent scattering, pair-production, and photoelectric absorption. The accompanying background, implementation, and verification of these processes will be presented.
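A core step in any continuous-energy photon tracker is choosing which of these photoatomic processes occurs at a collision, with probability proportional to each process's cross section. The sketch below illustrates that sampling step only; the cross-section values are placeholders rather than evaluated data, and this is not MCATK code.

```python
# Sample which photoatomic process occurs at a collision site, proportional to each
# process's cross section at the photon's current energy. Values are placeholders.
import random

def sample_interaction(cross_sections):
    """cross_sections: dict of process name -> macroscopic cross section (1/cm)."""
    total = sum(cross_sections.values())
    xi = random.uniform(0.0, total)
    running = 0.0
    for process, sigma in cross_sections.items():
        running += sigma
        if xi <= running:
            return process
    return process  # guard against floating-point round-off

if __name__ == "__main__":
    # placeholder values for a ~1 MeV photon in some material
    xs = {
        "coherent_scatter": 0.002,
        "incoherent_scatter": 0.060,
        "pair_production": 0.0,   # below the 1.022 MeV threshold
        "photoelectric": 0.004,
    }
    tallies = {p: 0 for p in xs}
    for _ in range(100_000):
        tallies[sample_interaction(xs)] += 1
    print(tallies)
```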
Communities of Ethical Practice: Using New Technologies for Ethical Dialectical Discourse
ERIC Educational Resources Information Center
Newman, Linda; Findlay, John
2008-01-01
The authors report on a project in which a new experiential form of professional learning combined ethical thinking processes with a collaborative meeting technology known as the Zing team learning system (ZTLS). A new software program called "Working Wisely" was built by the completion of the project. The ZTLS in combination with…
Using the Agile Development Methodology and Applying Best Practice Project Management Processes
2014-12-01
side of this writing: Like finicky domestic helpers who announce that they ‘don’t do windows,’ I’ve often heard software developers state proudly...positioned or motivated, but rather because they were the least skilled developer (2012, 34). This result turned a team of what should be generalists
2004-09-08
KENNEDY SPACE CENTER, FLA. - The work to clean up and secure the roof of the Processing Control Center which sustained damage from Hurricane Frances is under way. The storm's path over Florida took it through Cape Canaveral and KSC property during Labor Day weekend. Located in Launch Complex 39, the facility houses some of the staff and computers responsible for Launch Processing System (LPS) software development, launch team training, and LPS maintenance.
2004-09-08
KENNEDY SPACE CENTER, FLA. - KSC employees secure the roof of the Processing Control Center which sustained damage from Hurricane Frances. The storm's path over Florida took it through Cape Canaveral and KSC property during Labor Day weekend. Located in Launch Complex 39 adjacent to the Vehicle Assembly Building (background right), the facility houses some of the staff and computers responsible for Launch Processing System (LPS) software development, launch team training, and LPS maintenance.
2004-09-08
KENNEDY SPACE CENTER, FLA. - KSC employees begin the work to clean up and secure the roof of the Processing Control Center which sustained damage from Hurricane Frances. The storm's path over Florida took it through Cape Canaveral and KSC property during Labor Day weekend. Located in Launch Complex 39, the facility houses some of the staff and computers responsible for Launch Processing System (LPS) software development, launch team training, and LPS maintenance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Leary, Patrick
The primary challenge motivating this project is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who can perform analysis only on a small fraction of the data they calculate, resulting in the substantial likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, which is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by Department of Energy (DOE) science projects. Our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE high-performance computing (HPC) facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve this objective, we engaged in software technology research and development (R&D), in close partnerships with DOE science code teams, to produce software technologies that were shown to run efficiently at scale on DOE HPC platforms.
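The essence of the in situ idea can be shown with a toy loop in which a small reduction runs on each timestep's data while it is still in memory, and only the reduced results are kept. The "solver" update and the reduction below are placeholders; production in situ infrastructures couple real simulation codes to analysis and visualization libraries.

```python
# Toy in situ sketch: analyze each timestep's field while it is resident in memory and
# persist only a small reduced result, instead of dumping the full field every step.
import numpy as np

def in_situ_reduce(step, field):
    """Reduce a full 3-D field to a few scalars while it is still in memory."""
    return {"step": step, "min": float(field.min()),
            "max": float(field.max()), "mean": float(field.mean())}

def run_simulation(steps=100, shape=(64, 64, 64)):
    rng = np.random.default_rng(0)
    field = rng.random(shape)
    reduced_history = []
    for step in range(steps):
        field = 0.99 * field + 0.01 * rng.random(shape)      # stand-in for a solver update
        reduced_history.append(in_situ_reduce(step, field))  # analyze in place, no full dump
    return reduced_history

if __name__ == "__main__":
    history = run_simulation()
    print(history[-1])  # only the reduced results would be written out for later analysis
```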
An Independent Orbit Determination Simulation for the OSIRIS-REx Asteroid Sample Return Mission
NASA Technical Reports Server (NTRS)
Getzandanner, Kenneth; Rowlands, David; Mazarico, Erwan; Antreasian, Peter; Jackman, Coralie; Moreau, Michael
2016-01-01
After arriving at the near-Earth asteroid (101955) Bennu in late 2018, the OSIRIS-REx spacecraft will execute a series of observation campaigns and orbit phases to accurately characterize Bennu and ultimately collect a sample of pristine regolith from its surface. While in the vicinity of Bennu, the OSIRIS-REx navigation team will rely on a combination of ground-based radiometric tracking data and optical navigation (OpNav) images to generate and deliver precision orbit determination products. Long before arrival at Bennu, the navigation team is performing multiple orbit determination simulations and thread tests to verify navigation performance and ensure interfaces between multiple software suites function properly. In this paper, we will summarize the results of an independent orbit determination simulation of the Orbit B phase of the mission performed to test the interface between the OpNav image processing and orbit determination software packages.
Agile development approach for the observatory control software of the DAG 4m telescope
NASA Astrophysics Data System (ADS)
Güçsav, B. Bülent; Çoker, Deniz; Yeşilyaprak, Cahit; Keskin, Onur; Zago, Lorenzo; Yerli, Sinan K.
2016-08-01
Observatory Control Software for the upcoming 4m infrared telescope of DAG (Eastern Anatolian Observatory in Turkish) is at the beginning of its lifecycle. After the process of elicitation and validation of the initial requirements, we have focused on preparing a rapid conceptual design, not only to see the big picture of the system but also to clarify the further development methodology. The existing preliminary designs for both software (including TCS and active optics control system) and hardware are presented here in brief to illustrate the challenges the DAG software team has been facing. The potential benefits of an agile approach for the development will be discussed depending on the published experience of the community and on the resources available to us.
Flight Planning Branch NASA Co-op Tour
NASA Technical Reports Server (NTRS)
Marr, Aja M.
2013-01-01
This semester I worked with the Flight Planning Branch at the NASA Johnson Space Center. I learned about the different aspects of flight planning for the International Space Station as well as the software that is used internally and ISSLive! which is used to help educate the public on the space program. I had the opportunity to do on the job training in the Mission Control Center with the planning team. I transferred old timeline records from the planning team's old software to the new software in order to preserve the data for the future when the software is retired. I learned about the operations of the International Space Station, the importance of good communication between the different parts of the planning team, and enrolled in professional development classes as well as technical classes to learn about the space station.
Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papka, M.; Messina, P.; Coffey, R.
The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to implement those algorithms. The Data Analytics and Visualization Team lends expertise in tools and methods for high-performance post-processing of large datasets, interactive data exploration, batch visualization, and production visualization. The Operations Team ensures that system hardware and software work reliably and optimally; system tools are matched to the unique system architectures and scale of ALCF resources; the entire system software stack works smoothly together; and I/O performance issues, bug fixes, and requests for system software are addressed. The User Services and Outreach Team offers frontline services and support to existing and potential ALCF users. The team also provides marketing and outreach to users, DOE, and the broader community.
Adaptive awareness for personal and small group decision making.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perano, Kenneth J.; Tucker, Steve; Pancerella, Carmen M.
2003-12-01
Many situations call for the use of sensors monitoring physiological and environmental data. In order to use the large amounts of sensor data to affect decision making, we are coupling heterogeneous sensors with small, light-weight processors, other powerful computers, wireless communications, and embedded intelligent software. The result is an adaptive awareness and warning tool, which provides both situation awareness and personal awareness to individuals and teams. Central to this tool is a sensor-independent architecture, which combines both software agents and a reusable core software framework that manages the available hardware resources and provides services to the agents. Agents can recognize cues from the data, warn humans about situations, and act as decision-making aids. Within the agents, self-organizing maps (SOMs) are used to process physiological data in order to provide personal awareness. We have employed a novel clustering algorithm to train the SOM to discern individual body states and activities. This awareness tool has broad applicability to emergency teams, military squads, military medics, individual exercise and fitness monitoring, health monitoring for sick and elderly persons, and environmental monitoring in public places. This report discusses our hardware decisions, software framework, and a pilot awareness tool, which has been developed at Sandia National Laboratories.
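As a rough illustration of how a SOM can separate body states from sensor streams, here is a minimal NumPy self-organizing map trained on synthetic two-feature samples (a stand-in for heart rate and motion). The map size, training schedule, and data are invented; this is not the novel clustering algorithm the report describes.

```python
# Minimal self-organizing map sketch in NumPy for clustering physiological samples
# into discrete "body states". All parameters and data below are placeholders.
import numpy as np

def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # best-matching unit for this sample
            bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(axis=2)), grid)
            # decay learning rate and neighborhood width over training
            frac = step / n_steps
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
            dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
            h = np.exp(-dist2 / (2 * sigma ** 2))[:, :, None]
            weights += lr * h * (x - weights)
            step += 1
    return weights

def map_sample(weights, x):
    """Return the grid node a (normalized) sample maps to."""
    return np.unravel_index(np.argmin(((weights - x) ** 2).sum(axis=2)), weights.shape[:2])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    resting = rng.normal([70, 0.2], 0.05 * np.array([70, 0.2]), size=(200, 2))   # synthetic HR, accel
    active = rng.normal([120, 1.5], 0.05 * np.array([120, 1.5]), size=(200, 2))
    data = np.vstack([resting, active])
    mean, std = data.mean(0), data.std(0)
    som = train_som((data - mean) / std)
    print("resting sample maps to node", map_sample(som, (np.array([72, 0.25]) - mean) / std))
    print("active sample maps to node ", map_sample(som, (np.array([118, 1.4]) - mean) / std))
```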
Collaboration, Communication and Co-ordination in Agile Software Development Practice
NASA Astrophysics Data System (ADS)
Robinson, Hugh; Sharp, Helen
This chapter analyses the results of a series of observational studies of
Autonomous Real Time Requirements Tracing
NASA Technical Reports Server (NTRS)
Plattsmier, George I.; Stetson, Howard K.
2014-01-01
One of the more challenging aspects of software development is the ability to verify and validate the functional software requirements dictated by the Software Requirements Specification (SRS) and the Software Detail Design (SDD). Ensuring the software has achieved the intended requirements is the responsibility of the Software Quality team and the Software Test team. The utilization of Timeliner-TLX(TM) Auto-Procedures for relocating ground operations positions to ISS automated on-board operations has begun the transition that would be required for manned deep space missions with minimal crew requirements. This transition also moves the auto-procedures from the procedure realm into the flight software arena, and as such the operational requirements and testing will be more structured and rigorous. The auto-procedures would be required to meet NASA software standards as specified in the Software Safety Standard (NASA-STD-8719), the Software Engineering Requirements (NPR 7150), the Software Assurance Standard (NASA-STD-8739) and also the Human Rating Requirements (NPR-8705). The Autonomous Fluid Transfer System (AFTS) test-bed utilizes the Timeliner-TLX(TM) Language for development of autonomous command and control software. The Timeliner-TLX(TM) system has the unique feature of providing the current line of the statement in execution during real-time execution of the software. The feature of execution line number internal reporting unlocks the capability of monitoring the execution autonomously by use of a companion Timeliner-TLX(TM) sequence, as the line number reporting is embedded inside the Timeliner-TLX(TM) execution engine. This negates I/O processing of this type of data, as the line number status of executing sequences is built in as a function reference. This paper will outline the design and capabilities of the AFTS Autonomous Requirements Tracker, which traces and logs SRS requirements as they are being met during real-time execution of the targeted system. It is envisioned that real-time requirements tracing will greatly assist the movement of auto-procedures to flight software, enhancing the software assurance of auto-procedures and also their acceptance as reliable commanders.
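The tracing idea can be sketched as a lookup from reported execution line numbers to requirement IDs, logging each requirement the first time its line is reached. The sequence names, line numbers, and requirement IDs below are hypothetical, and the real tracker consumes Timeliner-TLX execution-line telemetry rather than a Python list.

```python
# Hypothetical sketch: map (sequence, line number) reports to requirement IDs and log
# each requirement the first time its line executes. All identifiers are invented.
from datetime import datetime, timezone

# requirement ID -> (sequence name, line number) that demonstrates it
REQUIREMENT_MAP = {
    "SRS-101": ("aft_transfer_main", 42),
    "SRS-102": ("aft_transfer_main", 87),
    "SRS-205": ("aft_leak_monitor", 13),
}

def trace_requirements(execution_stream):
    """execution_stream yields (sequence_name, line_number) tuples as execution proceeds."""
    line_to_req = {loc: req for req, loc in REQUIREMENT_MAP.items()}
    satisfied = {}
    for location in execution_stream:
        req = line_to_req.get(location)
        if req and req not in satisfied:
            satisfied[req] = datetime.now(timezone.utc)
            print(f"{satisfied[req].isoformat()} requirement {req} exercised at {location}")
    return satisfied

if __name__ == "__main__":
    simulated_stream = [("aft_transfer_main", 40), ("aft_transfer_main", 42),
                        ("aft_leak_monitor", 13), ("aft_transfer_main", 87)]
    met = trace_requirements(simulated_stream)
    missing = set(REQUIREMENT_MAP) - set(met)
    print("requirements not yet traced:", sorted(missing) or "none")
```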
Autonomous Real Time Requirements Tracing
NASA Technical Reports Server (NTRS)
Plattsmier, George; Stetson, Howard
2014-01-01
One of the more challenging aspects of software development is the ability to verify and validate the functional software requirements dictated by the Software Requirements Specification (SRS) and the Software Detail Design (SDD). Ensuring the software has achieved the intended requirements is the responsibility of the Software Quality team and the Software Test team. The utilization of Timeliner-TLX(TM) Auto-Procedures for relocating ground operations positions to ISS automated on-board operations has begun the transition that would be required for manned deep space missions with minimal crew requirements. This transition also moves the auto-procedures from the procedure realm into the flight software arena, and as such the operational requirements and testing will be more structured and rigorous. The auto-procedures would be required to meet NASA software standards as specified in the Software Safety Standard (NASA-STD-8719), the Software Engineering Requirements (NPR 7150), the Software Assurance Standard (NASA-STD-8739) and also the Human Rating Requirements (NPR-8705). The Autonomous Fluid Transfer System (AFTS) test-bed utilizes the Timeliner-TLX(TM) Language for development of autonomous command and control software. The Timeliner-TLX(TM) system has the unique feature of providing the current line of the statement in execution during real-time execution of the software. The feature of execution line number internal reporting unlocks the capability of monitoring the execution autonomously by use of a companion Timeliner-TLX(TM) sequence, as the line number reporting is embedded inside the Timeliner-TLX(TM) execution engine. This negates I/O processing of this type of data, as the line number status of executing sequences is built in as a function reference. This paper will outline the design and capabilities of the AFTS Autonomous Requirements Tracker, which traces and logs SRS requirements as they are being met during real-time execution of the targeted system. It is envisioned that real-time requirements tracing will greatly assist the movement of auto-procedures to flight software, enhancing the software assurance of auto-procedures and also their acceptance as reliable commanders.
Software ``Best'' Practices: Agile Deconstructed
NASA Astrophysics Data System (ADS)
Fraser, Steven
Software “best” practices depend entirely on context - in terms of the problem domain, the system constructed, the software designers, and the “customers” ultimately deriving value from the system. Agile practices no longer have the luxury of “choosing” small non-mission critical projects with co-located teams. Project stakeholders are selecting and adapting practices based on a combination of interest, need and staffing. For example, growing product portfolios through a merger or the acquisition of a company exposes legacy systems to new staff, new software integration challenges, and new ideas. Innovation in communications (tools and processes) to span the growth and contraction of both information and organizations, while managing the adoption of changing software practices, is imperative for success. Traditional web-based tools such as web pages, document libraries, and forums are not sufficient. A blend of tweeting, blogs, wikis, instant messaging, web-based conferencing, and telepresence creates a new dimension of communication “best” practices.
Infusing Software Assurance Research Techniques into Use
NASA Technical Reports Server (NTRS)
Pressburger, Thomas; DiVito, Ben; Feather, Martin S.; Hinchey, Michael; Markosian, Lawrence; Trevino, Luis C.
2006-01-01
Research in the software engineering community continues to lead to new development techniques that encompass processes, methods and tools. However, a number of obstacles impede their infusion into software development practices. These are the recurring obstacles common to many forms of research. Practitioners cannot readily identify the emerging techniques that may benefit them, and cannot afford to risk time and effort evaluating and trying one out while there remains uncertainty about whether it will work for them. Researchers cannot readily identify the practitioners whose problems would be amenable to their techniques, and, lacking feedback from practical applications, are hard-pressed to gauge where and in what ways to evolve their techniques to make them more likely to be successful. This paper describes an ongoing effort conducted by a software engineering research infusion team established by NASA's Software Engineering Initiative to overcome these obstacles.
Using "Facebook" to Improve Communication in Undergraduate Software Development Teams
ERIC Educational Resources Information Center
Charlton, Terence; Devlin, Marie; Drummond, Sarah
2009-01-01
As part of the CETL ALiC initiative (Centre of Excellence in Teaching and Learning: Active Learning in Computing), undergraduate computing science students at Newcastle and Durham universities participated in a cross-site team software development project. To ensure we offer adequate resources to support this collaboration, we conducted an…
NASA Technical Reports Server (NTRS)
Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen
2015-01-01
The engineering development of the new Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex system engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission conditions, fault tolerance, and response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team for addressing fault management early in the development lifecycle for the SLS initiative. As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. Additionally, the team has developed processes for implementing and validating these algorithms for concept validation and risk reduction for the SLS program. The flexibility of the Vehicle Management End-to-end Testbed (VMET) enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS. The intent of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software development infrastructure and its related testing entities. In any software development process there is inherent risk in the interpretation and implementation of concepts through requirements and test cases into flight software, compounded by potential human errors throughout the development lifecycle. Risk reduction is addressed by the M&FM analysis group working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities.
In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses that can be tested in VMET to ensure that failures can be detected, and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - ARINC 653 partitioned OS, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as those used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of M&FM algorithm performance in the FSW development and test processes.
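As a rough illustration of the nominal/off-nominal test pattern described above (not the actual C++ M&FM algorithms or VMET test cases), the sketch below exercises a toy persistence-based fault monitor, expressed as a small state machine, against a configurable suite of cases; all states, thresholds, and data are invented.

```python
# A toy persistence-based monitor, expressed as a small state machine, exercised
# against nominal and off-nominal cases. All states, thresholds, and pressure
# values are invented for illustration; this is not the SLS M&FM flight code.

def step(state, pressure, persistence):
    """Advance the monitor one cycle; require persistence before declaring SAFING."""
    if pressure < 200.0:                    # hypothetical low-pressure limit
        persistence += 1
        state = "SAFING" if persistence >= 3 else "FAULT_SUSPECTED"
    else:
        state, persistence = "NOMINAL", 0
    return state, persistence

def run_case(samples):
    state, persistence = "NOMINAL", 0
    for p in samples:
        state, persistence = step(state, p, persistence)
    return state

# Configurable suite of nominal and off-nominal cases, in the spirit of VMET
cases = {
    "nominal":           ([250, 255, 248, 252], "NOMINAL"),
    "transient_glitch":  ([250, 190, 251, 253], "NOMINAL"),
    "sustained_failure": ([250, 190, 185, 180], "SAFING"),
}

for name, (samples, expected) in cases.items():
    result = run_case(samples)
    print(f"{name:18s} -> {result:16s} {'PASS' if result == expected else 'FAIL'}")
```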
ERIC Educational Resources Information Center
Monaghan, Conal; Bizumic, Boris; Reynolds, Katherine; Smithson, Michael; Johns-Boast, Lynette; van Rooy, Dirk
2015-01-01
One prominent approach in the exploration of the variations in project team performance has been to study two components of the aggregate personalities of the team members: conscientiousness and agreeableness. A second line of research, known as self-categorisation theory, argues that identifying as team members and the team's performance norms…
Reinventing The Design Process: Teams and Models
NASA Technical Reports Server (NTRS)
Wall, Stephen D.
1999-01-01
The future of space mission designing will be dramatically different from the past. Formerly, performance-driven paradigms emphasized data return with cost and schedule being secondary issues. Now and in the future, costs are capped and schedules are fixed; these two variables must be treated as independent in the design process. Accordingly, JPL has redesigned its design process. At the conceptual level, design times have been reduced by properly defining the required design depth, improving the linkages between tools, and managing team dynamics. In implementation-phase design, system requirements will be held in crosscutting models, linked to subsystem design tools through a central database that captures the design and supplies needed configuration management and control. Mission goals will then be captured in timelining software that drives the models, testing their capability to execute the goals. Metrics are used to measure and control both processes and to ensure that design parameters converge through the design process within schedule constraints. This methodology manages margins controlled by acceptable risk levels. Thus, teams can evolve risk tolerance (and cost) as they would any engineering parameter. This new approach allows more design freedom for a longer time, which tends to encourage revolutionary and unexpected improvements in design.
2017-03-17
NASA engineers and test directors gather in Firing Room 3 in the Launch Control Center at NASA's Kennedy Space Center in Florida, to watch a demonstration of the automated command and control software for the agency's Space Launch System (SLS) and Orion spacecraft. The software is called the Ground Launch Sequencer. It will be responsible for nearly all of the launch commit criteria during the final phases of launch countdowns. The Ground and Flight Application Software Team (GFAST) demonstrated the software. It was developed by the Command, Control and Communications team in the Ground Systems Development and Operations (GSDO) Program. GSDO is helping to prepare the center for the first test flight of Orion atop the SLS on Exploration Mission 1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Branson, Donald
The KCNSC Automated RAIL (Rolling Action Item List) system provides an electronic platform to manage and escalate rolling action items within a business and manufacturing environment at Honeywell. The software enables a tiered approach to issue management where issues are escalated up a management chain based on team input and compared to business metrics. The software manages action items at different levels of the organization and allows all users to discuss action items concurrently. In addition, the software drives accountability through timely emails and proper visibility during team meetings.
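A minimal sketch of the tiered-escalation idea, assuming hypothetical tiers and review windows rather than the actual KCNSC RAIL schema:

```python
# Hedged sketch of tiered escalation: an action item moves up one management tier
# each time its age exceeds the current tier's review window. Tiers, windows, and
# fields are illustrative and are not the actual RAIL system's data model.

from dataclasses import dataclass

TIER_WINDOWS_DAYS = {1: 7, 2: 14, 3: 30}   # hypothetical review window per tier

@dataclass
class ActionItem:
    title: str
    owner: str
    age_days: int
    tier: int = 1

def escalate(item: ActionItem) -> ActionItem:
    """Raise the item's tier while it is older than the current tier's window."""
    while item.tier < max(TIER_WINDOWS_DAYS) and item.age_days > TIER_WINDOWS_DAYS[item.tier]:
        item.tier += 1
    return item

rail = [
    ActionItem("Calibrate press 4", "j.smith", age_days=3),
    ActionItem("Update weld procedure", "a.lee", age_days=12),
    ActionItem("Replace HVAC filter bank", "m.chen", age_days=40),
]

for item in map(escalate, rail):
    print(f"tier {item.tier}: {item.title} ({item.owner}, {item.age_days} days open)")
```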
Development of N-version software samples for an experiment in software fault tolerance
NASA Technical Reports Server (NTRS)
Lauterbach, L.
1987-01-01
The report documents the task planning and software development phases of an effort to obtain twenty versions of code independently designed and developed from a common specification. These versions were created for use in future experiments in software fault tolerance, in continuation of the experimental series underway at the Systems Validation Methods Branch (SVMB) at NASA Langley Research Center. The 20 versions were developed under controlled conditions at four U.S. universities, by 20 teams of two researchers each. The versions process raw data from a modified Redundant Strapped Down Inertial Measurement Unit (RSDIMU). The specifications, and over 200 questions submitted by the developers concerning the specifications, are included as appendices to this report. Design documents, and design and code walkthrough reports for each version, were also obtained in this task for use in future studies.
Key ingredients needed when building large data processing systems for scientists
NASA Technical Reports Server (NTRS)
Miller, K. C.
2002-01-01
Why is building a large science software system so painful? Weren't teams of software engineers supposed to make life easier for scientists? Does it sometimes feel as if it would be easier to write the million lines of code in Fortran 77 yourself? The cause of this dissatisfaction is that many of the needs of the science customer remain hidden in discussions with software engineers until after a system has already been built. In fact, many of the hidden needs of the science customer conflict with stated needs and are therefore very difficult to meet unless they are addressed from the outset in a system's architectural requirements. What's missing is the consideration of a small set of key software properties in initial agreements about the requirements, the design and the cost of the system.
Building a Snow Data System on the Apache OODT Open Technology Stack
NASA Astrophysics Data System (ADS)
Goodale, C. E.; Painter, T. H.; Mattmann, C. A.; Hart, A. F.; Ramirez, P.; Zimdars, P.; Bryant, A. C.; Snow Data System Team
2011-12-01
Snow cover and its melt dominate regional climate and hydrology in many of the world's mountainous regions. One-sixth of Earth's population depends on snow- or glacier-melt for water resources. Operationally, seasonal forecasts of snowmelt-generated streamflow are leveraged through empirical relations based on past snowmelt periods. These historical data show that climate is changing, but the changes reduce the reliability of the empirical relations. Therefore optimal future management of snowmelt derived water resources will require explicit physical models driven by remotely sensed snow property data. Toward this goal, the Snow Optics Laboratory at the Jet Propulsion Laboratory has initiated a near real-time processing pipeline to generate and publish post-processed snow data products within a few hours of satellite acquisition. To solve this challenge, a Scientific Data Management and Processing System was required and the JPL Team leveraged an open-source project called Object Oriented Data Technology (OODT). OODT was developed within NASA's Jet Propulsion Laboratory across the last 10 years. OODT has supported various scientific data management and processing projects, providing solutions in the Earth, Planetary, and Medical science fields. It became apparent that the project needed to be opened to a larger audience to foster and promote growth and adoption. OODT was open-sourced at the Apache Software Foundation in November 2010 and has a growing community of users and committers that are constantly improving the software. Leveraging OODT, the JPL Snow Data System (SnowDS) Team was able to install and configure a core Data Management System (DMS) that would download MODIS raw data files and archive the products in a local repository for post processing. The team has since built an online data portal, and an algorithm-processing pipeline using the Apache OODT software as the foundation. We will present the working SnowDS system with its core remote sensing components: the MODIS Snow Covered Area and Grain size model (MODSCAG) and the MODIS Dust Radiative Forcing in Snow (MOD-DRFS). These products will be delivered in near real time to water managers and the broader cryosphere and climate community beginning in Winter 2012. We will then present the challenges and opportunities we see in the future as the SnowDS matures and contributions are made back to the OODT project.
Mars Science Laboratory Boot Robustness Testing
NASA Technical Reports Server (NTRS)
Banazadeh, Payam; Lam, Danny
2011-01-01
Mars Science Laboratory (MSL) is one of the most complex spacecraft in the history of mankind. Due to the nature of its complexity, a large number of flight software (FSW) requirements have been written for implementation. In practice, these requirements necessitate very complex and very precise flight software with no room for error. One of flight software's responsibilities is to be able to boot up and check the state of all devices on the spacecraft after the wake-up process. This boot-up and initialization is crucial to mission success since any misbehavior of different devices needs to be handled through the flight software. I have created a test toolkit that allows the FSW team to exhaustively test the flight software under a variety of unexpected scenarios and validate that flight software can handle any situation after booting up. The test includes initializing different devices on the spacecraft to different configurations and validating, at the end of the flight software boot-up, that the flight software has initialized those devices to what they are supposed to be in that particular scenario.
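The exhaustive-scenario idea can be pictured with a small sketch that enumerates initial device states, runs a toy initializer, and checks the post-boot configuration; the devices, states, and the deliberately unhandled case are invented and bear no relation to the actual MSL flight software.

```python
# Illustrative sketch of exhaustive boot testing: enumerate combinations of initial
# device states, "boot" a simulated initializer against each, and check that every
# device ends up in its expected post-boot state. All names and states are invented.

from itertools import product

DEVICES = ["imu", "uhf_radio", "motor_ctrl"]
INITIAL_STATES = ["OFF", "ON", "FAULTED"]
EXPECTED_AFTER_BOOT = {"imu": "ON", "uhf_radio": "ON", "motor_ctrl": "OFF"}

def boot(initial):
    """Toy initializer with a deliberate gap: it does not clear a faulted motor controller."""
    final = dict(EXPECTED_AFTER_BOOT)
    if initial["motor_ctrl"] == "FAULTED":
        final["motor_ctrl"] = "FAULTED"   # unhandled case the exhaustive sweep should expose
    return final

failures = 0
for combo in product(INITIAL_STATES, repeat=len(DEVICES)):
    initial = dict(zip(DEVICES, combo))
    ok = boot(initial) == EXPECTED_AFTER_BOOT
    failures += not ok
    print(f"{initial} -> {'PASS' if ok else 'FAIL'}")

print(f"{failures} failing scenario(s) out of {len(INITIAL_STATES) ** len(DEVICES)}")
```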
Collaborative engineering and design management for the Hobby-Eberly Telescope tracker upgrade
NASA Astrophysics Data System (ADS)
Mollison, Nicholas T.; Hayes, Richard J.; Good, John M.; Booth, John A.; Savage, Richard D.; Jackson, John R.; Rafal, Marc D.; Beno, Joseph H.
2010-07-01
The engineering and design of systems as complex as the Hobby-Eberly Telescope's* new tracker require that multiple tasks be executed in parallel and overlapping efforts. When the design of individual subsystems is distributed among multiple organizations, teams, and individuals, challenges can arise with respect to managing design productivity and coordinating successful collaborative exchanges. This paper focuses on design management issues and current practices for the tracker design portion of the Hobby-Eberly Telescope Wide Field Upgrade project. The scope of the tracker upgrade requires engineering contributions and input from numerous fields including optics, instrumentation, electromechanics, software controls engineering, and site-operations. Successful system-level integration of tracker subsystems and interfaces is critical to the telescope's ultimate performance in astronomical observation. Software and process controls for design information and workflow management have been implemented to assist the collaborative transfer of tracker design data. The tracker system architecture and selection of subsystem interfaces has also proven to be a determining factor in design task formulation and team communication needs. Interface controls and requirements change controls will be discussed, and critical team interactions are recounted (a group-participation Failure Modes and Effects Analysis [FMEA] is one of special interest). This paper will be of interest to engineers, designers, and managers engaging in multi-disciplinary and parallel engineering projects that require coordination among multiple individuals, teams, and organizations.
High Resolution X-Ray Micro-CT of Ultra-Thin Wall Space Components
NASA Technical Reports Server (NTRS)
Roth, Don J.; Rauser, R. W.; Bowman, Randy R.; Bonacuse, Peter; Martin, Richard E.; Locci, I. E.; Kelley, M.
2012-01-01
A high resolution micro-CT system has been assembled and is being used to provide optimal characterization for ultra-thin wall space components. The Glenn Research Center NDE Sciences Team, using this CT system, has assumed the role of inspection vendor for the Advanced Stirling Convertor (ASC) project at NASA. This article will discuss many aspects of the development of the CT scanning for this type of component, including CT system overview; inspection requirements; process development, software utilized and developed to visualize, process, and analyze results; calibration sample development; results on actual samples; correlation with optical/SEM characterization; CT modeling; and development of automatic flaw recognition software. Keywords: Nondestructive Evaluation, NDE, Computed Tomography, Imaging, X-ray, Metallic Components, Thin Wall Inspection
Bosma, Laine; Balen, Robert M; Davidson, Erin; Jewesson, Peter J
2003-01-01
The development and integration of a personal digital assistant (PDA)-based point-of-care database into an intravenous resource nurse (IVRN) consultation service for the purposes of consultation management and service characterization are described. The IVRN team provides a consultation service 7 days a week in this 1000-bed tertiary adult care teaching hospital. No simple, reliable method for documenting IVRN patient care activity and facilitating IVRN-initiated patient follow-up evaluation was available. Implementation of a PDA database with exportability of data to statistical analysis software was undertaken in July 2001. A Palm IIIXE PDA was purchased and a three-table, 13-field database was developed using HanDBase software. During the 7-month period of data collection, the IVRN team recorded 4868 consultations for 40 patient care areas. Full analysis of service characteristics was conducted using SPSS 10.0 software. Team members adopted the new technology with few problems, and the authors now can efficiently track and analyze the services provided by their IVRN team.
A Fundamental Mathematical Model of a Microbial Predenitrification System
NASA Technical Reports Server (NTRS)
Hoo, Karlene A.
2005-01-01
Space flight beyond Low Earth Orbit requires sophisticated systems to support all aspects of the mission (life support, real-time communications, etc.). A common concern that cuts across all these systems is the selection of information technology (IT) methodology, software and hardware architectures to provide robust monitoring, diagnosis, and control support. Another dimension of the problem space is that different systems must be integrated seamlessly so that communication speed and data handling appear as a continuum (uninterrupted). One such team investigating this problem is the Advanced Integration Matrix (AIM) team whose role is to define the critical requirements expected of software and hardware to support an integrated approach to the command and control of Advanced Life Support (ALS) for future long-duration human space missions, including permanent human presence on the Moon and Mars. A goal of the AIM team is to set the foundation for testing criteria that will assist in specifying tasks, control schemes and test scenarios to validate and verify systems capabilities. This project is to contribute to the goals of the AIM team by assisting with controls planning for ALS. Control for ALS is an enormous problem: it involves air revitalization, water recovery, food production, solids processing, and the crew. In more general terms, these systems can be characterized as involving both continuous and discrete processes, dynamic interactions among the sub-systems, nonlinear behavior due to the complex operations, and a large number of multivariable interactions due to the dimension of the state space. It is imperative that a baseline approach from which to measure performance is established, especially when the expectation for the control system is complete autonomous control.
Relay Sequence Generation Software
NASA Technical Reports Server (NTRS)
Gladden, Roy E.; Khanampompan, Teerapat
2009-01-01
Due to thermal and electromagnetic interactivity between the UHF (ultrahigh frequency) radio onboard the Mars Reconnaissance Orbiter (MRO), which performs relay sessions with the Martian landers, and the remainder of the MRO payloads, relay sessions must be integrated and de-conflicted with the MRO science plan. The MRO relay SASF/PTF (spacecraft activity sequence file/payload target file) generation software facilitates this process by generating a PTF that is needed to integrate the periods of time during which MRO supports relay activities with the rest of the MRO science plans. The software also generates the needed command products that initiate the relay sessions, some features of which are provided by the lander team, some managed by MRO internally, and some derived.
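A hedged sketch of the de-confliction step, using made-up time windows to show how relay sessions can be checked for overlap against planned science activities (the real SASF/PTF formats are not shown):

```python
# Illustrative overlap check between relay-session windows and planned science
# activities. All times are arbitrary units invented for the example.

def overlaps(a_start, a_end, b_start, b_end):
    return a_start < b_end and b_start < a_end

relay_windows = [(100, 160), (400, 470)]   # (start, end) of relay support periods
science_plan = [
    ("HiRISE imaging", 150, 210),
    ("CRISM scan",     300, 350),
    ("SHARAD pass",    430, 500),
]

for name, start, end in science_plan:
    hits = [w for w in relay_windows if overlaps(start, end, *w)]
    status = f"conflicts with relay window(s) {hits}" if hits else "clear"
    print(f"{name:15s} [{start},{end}] -> {status}")
```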
Decentralized formation flying control in a multiple-team hierarchy.
Mueller, Joseph B; Thomas, Stephanie J
2005-12-01
In recent years, formation flying has been recognized as an enabling technology for a variety of mission concepts in both the scientific and defense arenas. Examples of developing missions at NASA include magnetospheric multiscale (MMS), solar imaging radio array (SIRA), and terrestrial planet finder (TPF). For each of these missions, a multiple satellite approach is required in order to accomplish the large-scale geometries imposed by the science objectives. In addition, the paradigm shift of using a multiple satellite cluster rather than a large, monolithic spacecraft has also been motivated by the expected benefits of increased robustness, greater flexibility, and reduced cost. However, the operational costs of monitoring and commanding a fleet of close-orbiting satellites are likely to be unreasonable unless the onboard software is sufficiently autonomous, robust, and scalable to large clusters. This paper presents the prototype of a system that addresses these objectives: a decentralized guidance and control system that is distributed across spacecraft using a multiple team framework. The objective is to divide large clusters into teams of "manageable" size, so that the communication and computation demands driven by N decentralized units are related to the number of satellites in a team rather than the entire cluster. The system is designed to provide a high level of autonomy, to support clusters with large numbers of satellites, to enable the number of spacecraft in the cluster to change post-launch, and to provide for on-orbit software modification. The distributed guidance and control system will be implemented in an object-oriented style using a messaging architecture for networking and threaded applications (MANTA). In this architecture, tasks may be remotely added, removed, or replaced post-launch to increase mission flexibility and robustness. This built-in adaptability will allow software modifications to be made on-orbit in a robust manner. The prototype system, which is implemented in Matlab, emulates the object-oriented and message-passing features of the MANTA software. In this paper, the multiple team organization of the cluster is described, and the modular software architecture is presented. The relative dynamics in eccentric reference orbits are reviewed, and families of periodic, relative trajectories are identified, expressed as sets of static geometric parameters. The guidance law design is presented, and an example reconfiguration scenario is used to illustrate the distributed process of assigning geometric goals to the cluster. Next, a decentralized maneuver planning approach is presented that utilizes linear-programming methods to enact reconfiguration and coarse formation keeping maneuvers. Finally, a method for performing online collision avoidance is discussed, and an example is provided to gauge its performance.
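A minimal sketch of the team-partitioning idea, assuming a simple fixed team size and a contiguous grouping with a first-member coordinator rather than the actual MANTA design:

```python
# Sketch of dividing a cluster into teams of "manageable" size so that per-satellite
# coordination scales with team size rather than cluster size. Team size and the
# grouping scheme are illustrative choices, not the design described in the paper.

def form_teams(satellite_ids, team_size):
    """Group satellites into contiguous teams of at most team_size members."""
    return [satellite_ids[i:i + team_size] for i in range(0, len(satellite_ids), team_size)]

cluster = [f"sat-{i:02d}" for i in range(1, 13)]        # hypothetical 12-satellite cluster
teams = form_teams(cluster, team_size=4)

for k, team in enumerate(teams):
    leader = team[0]                                    # e.g., the first member coordinates the team
    print(f"team {k}: leader {leader}, members {team}")

# Peer-to-peer links per satellite: (team_size - 1) instead of (N - 1)
print(f"links per satellite: {len(teams[0]) - 1} vs {len(cluster) - 1} in a flat cluster")
```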
EPOS Data and Service Provision
NASA Astrophysics Data System (ADS)
Bailo, Daniele; Jeffery, Keith G.; Atakan, Kuvvet; Harrison, Matt
2017-04-01
EPOS is now in IP (implementation phase) after a successful PP (preparatory phase). EPOS consists of essentially two components, one ICS (Integrated Core Services) representing the integrating ICT (Information and Communication Technology) and many TCS (Thematic Core Services) representing the scientific domains. The architecture developed, demonstrated and agreed within the project during the PP is now being developed utilising co-design with the TCS teams and agile, spiral methods within the ICS team. The 'heart' of EPOS is the metadata catalog. This provides for the ICS a digital representation of the TCS assets (services, data, software, equipment, expertise…) thus facilitating access, interoperation and (re-)use. A major part of the work has been interactions with the TCS. The original intention to harvest information from the TCS required (and still requires) discussions to understand fully the TCS organisational structures linked with rights, security and privacy; their (meta)data syntax (structure) and semantics (meaning); their workflows and methods of working and the services offered. To complicate matters further, the TCS are each at varying stages of development and the ICS design has to accommodate pre-existing, developing and expected future standards for metadata, data, software and processes. Through information documents, questionnaires and interviews/meetings the EPOS ICS team has collected DDSS (Data, Data Products, Software and Services) information from the TCS. The ICS team developed a simplified metadata model for presentation to the TCS and the ICS team will perform the mapping and conversion from this model to the internal detailed technical metadata model using CERIF (an EU recommendation to Member States, maintained, developed and promoted by euroCRIS, www.eurocris.org). At the time of writing the final modifications of the EPOS metadata model are being made, and the mappings to CERIF designed, prior to the main phase of (meta)data collection into the EPOS metadata catalog. In parallel, work proceeds on the user interface software, the APIs (Application Programming Interfaces) to the TCS services, the harvesting method and software, the AAAI (Authentication, Authorisation, Accounting Infrastructure) and the system manager. The next steps will involve interfaces to ICS-D (Distributed ICS, i.e. facilities and services for computing, data storage, detectors and instruments for data collection etc.) to which requests, software and data will be deployed and from which data will be generated. Associated with this will be the development of the workflow system which will assist the end-user in building a workflow to achieve the scientific objectives.
A study about the photometric variability in the M42 region
NASA Astrophysics Data System (ADS)
Lima, G. H. R. A.; Vaz, L. P. R.; Reipurth, B.
2003-08-01
The M42 region in Orion is one of the most active regarding stellar formation in the neighborhood of the solar system. At a distance of 450 pc, it gives us an excellent opportunity to study star formation processes. By studying 22 films of this region, covering an area of 5 by 5 degrees, taken at almost regular intervals over 2.5 years by the ESO 1 m Schmidt Telescope in La Silla, Chile, we seek to discover variable stars among the young stars. These films were digitized by the SuperCOSMOS (the most precise scientific scanner today) team, and each film was exposed for 30 minutes. Our knowledge about the variability of low-mass young variable stars was outdated and was based on old photographic plates, which were studied by so-called blink comparators and iris photometers. We have now developed a process to study these data and identify candidate stars that may be constant or variable, and we have developed software based on this process. We also used software supplied by the SuperCOSMOS team to help our analysis of the dataset. After identifying the stars that we can definitively consider variable, we will study them in more depth in the hope of obtaining more data about the formation process. We expect to detect thousands of new variables within our data, as well as the light curves for each detected star.
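One common way to flag variable-star candidates from repeated photometry is to compare each star's magnitude scatter across epochs with its typical measurement error; the sketch below illustrates that idea with invented data and a 3-sigma threshold, and is not the pipeline actually applied to the SuperCOSMOS scans.

```python
# Hedged sketch of variability flagging: a star whose magnitude scatter across epochs
# clearly exceeds its per-epoch photometric error is a variable candidate. The star
# names, magnitudes, errors, and threshold are illustrative only.

from statistics import mean, stdev

# star -> (magnitudes over several epochs, typical per-epoch error in magnitudes)
stars = {
    "star_A": ([14.02, 14.05, 13.98, 14.01, 14.03], 0.05),
    "star_B": ([13.50, 13.92, 13.41, 14.10, 13.65], 0.05),
}

for name, (mags, err) in stars.items():
    scatter = stdev(mags)
    flag = "variable candidate" if scatter > 3 * err else "constant"
    print(f"{name}: mean={mean(mags):.2f}  scatter={scatter:.3f}  err={err}  -> {flag}")
```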
An Open Source Tool to Test Interoperability
NASA Astrophysics Data System (ADS)
Bermudez, L. E.
2012-12-01
Scientists interact with information at various levels, from gathering raw observed data to accessing portrayals of processed, quality-controlled data. Geoinformatics tools help scientists with the acquisition, storage, processing, dissemination, and presentation of geospatial information. Most of the interactions occur in a distributed environment between software components that take the role of either client or server. The communication between components includes protocols, encodings of messages, and managing of errors. Testing of these communication components is important to guarantee proper implementation of standards. The communication between clients and servers can be ad hoc or follow standards. By following standards, interoperability between components increases while the time needed to develop new software decreases. The Open Geospatial Consortium (OGC) not only coordinates the development of standards but also, within the Compliance Testing Program (CITE), provides a testing infrastructure to test clients and servers. The OGC Web-based Test Engine Facility, based on TEAM Engine, allows developers to test Web services and clients for correct implementation of OGC standards. TEAM Engine is a Java open source facility, available at SourceForge, that can be run via command line, deployed in a web servlet container, or integrated into a developer's environment via Maven. The TEAM Engine uses the Compliance Test Language (CTL) and TestNG to test HTTP requests, SOAP services, and XML instances against Schemas and Schematron-based assertions of any type of web service, not only OGC services. For example, the OGC Web Feature Service (WFS) 1.0.0 test has more than 400 test assertions. Some of these assertions include conformance of HTTP responses; conformance of GML-encoded data; proper values for elements and attributes in the XML; and correct error responses. This presentation will provide an overview of TEAM Engine, an introduction to how to test via the OGC testing web site, and a description of performing local tests. It will also provide information about how to participate in the open source code development of TEAM Engine.
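The flavor of such compliance assertions can be sketched outside TEAM Engine as well; the example below (Python rather than CTL, with a placeholder endpoint URL) checks an HTTP response for the expected status, content type, and well-formed XML.

```python
# Illustrative sketch, not TEAM Engine itself: a few of the kinds of assertions a
# compliance test makes against a web service. The endpoint URL is a placeholder,
# and a real test suite would add schema and Schematron validation on top of this.

import urllib.request
import xml.etree.ElementTree as ET

def check_capabilities(url):
    results = {}
    with urllib.request.urlopen(url, timeout=30) as resp:
        body = resp.read()
        results["http_status_200"] = (resp.status == 200)
        results["xml_content_type"] = ("xml" in resp.headers.get("Content-Type", ""))
        try:
            ET.fromstring(body)            # well-formedness only; schema checks would go further
            results["well_formed_xml"] = True
        except ET.ParseError:
            results["well_formed_xml"] = False
    return results

if __name__ == "__main__":
    url = "https://example.org/wfs?service=WFS&request=GetCapabilities"   # placeholder endpoint
    for name, ok in check_capabilities(url).items():
        print(f"{name:20s}: {'PASS' if ok else 'FAIL'}")
```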
NASA Astrophysics Data System (ADS)
Cravens, Amanda E.
2016-02-01
Environmental managers and planners have become increasingly enthusiastic about the potential of decision support tools (DSTs) to improve environmental decision-making processes as information technology transforms many aspects of daily life. Discussions about DSTs, however, rarely recognize the range of ways software can influence users' negotiation, problem-solving, or decision-making strategies and incentives, in part because there are few empirical studies of completed processes that used technology. This mixed-methods study—which draws on data from approximately 60 semi-structured interviews and an online survey—examines how one geospatial DST influenced participants' experiences during a multi-year marine planning process in California. Results suggest that DSTs can facilitate communication by creating a common language, help users understand the geography and scientific criteria in play during the process, aid stakeholders in identifying shared or diverging interests, and facilitate joint problem solving. The same design features that enabled the tool to aid in decision making, however, also presented surprising challenges in certain circumstances by, for example, making it difficult for participants to discuss information that was not spatially represented on the map-based interface. The study also highlights the importance of the social context in which software is developed and implemented, suggesting that the relationship between the software development team and other participants may be as important as technical software design in shaping how DSTs add value. The paper concludes with considerations to inform the future use of DSTs in environmental decision-making processes.
Intelligent systems for KSC ground processing
NASA Technical Reports Server (NTRS)
Heard, Astrid E.
1992-01-01
The ground processing and launch of Shuttle vehicles and their payloads is the primary task of Kennedy Space Center. It is a process which is largely manual and contains little inherent automation. Business is conducted today much as it was during previous NASA programs such as Apollo. In light of new programs and decreasing budgets, NASA must find more cost effective ways in which to do business while retaining the quality and safety of activities. Advanced technologies including artificial intelligence could cut manpower and processing time. This paper is an overview of the research and development in AI technology at KSC with descriptions of the systems which have been implemented, as well as a few under development which are promising additions to ground processing software. Projects discussed cover many facets of ground processing activities, including computer sustaining engineering, subsystem monitor and diagnosis tools and launch team assistants. The deployed AI applications have proven an effectiveness which has helped to demonstrate the benefits of utilizing intelligent software in the ground processing task.
Dynamic feature analysis for Voyager at the Image Processing Laboratory
NASA Technical Reports Server (NTRS)
Yagi, G. M.; Lorre, J. J.; Jepsen, P. L.
1978-01-01
Voyager 1 and 2 were launched from Cape Kennedy to Jupiter, Saturn, and beyond on September 5, 1977 and August 20, 1977. The role of the Image Processing Laboratory is to provide the Voyager Imaging Team with the necessary support to identify atmospheric features (tiepoints) for Jupiter and Saturn data, and to analyze and display them in a suitable form. This support includes the software needed to acquire and store tiepoints, the hardware needed to interactively display images and tiepoints, and the general image processing environment necessary for decalibration and enhancement of the input images. The objective is an understanding of global circulation in the atmospheres of Jupiter and Saturn. Attention is given to the Voyager imaging subsystem, the Voyager imaging science objectives, hardware, software, display monitors, a dynamic feature study, decalibration, navigation, and data base.
Software Users Manual (SUM): Extended Testability Analysis (ETA) Tool
NASA Technical Reports Server (NTRS)
Maul, William A.; Fulton, Christopher E.
2011-01-01
This software user manual describes the implementation and use of the Extended Testability Analysis (ETA) Tool. The ETA Tool is a software program that augments the analysis and reporting capabilities of a commercial-off-the-shelf (COTS) testability analysis software package called the Testability Engineering And Maintenance System (TEAMS) Designer. An initial diagnostic assessment is performed by the TEAMS Designer software using a qualitative, directed-graph model of the system being analyzed. The ETA Tool utilizes system design information captured within the diagnostic model and testability analysis output from the TEAMS Designer software to create a series of six reports for various system engineering needs. The ETA Tool allows the user to perform additional studies on the testability analysis results by determining the detection sensitivity to the loss of certain sensors or tests. The ETA Tool was developed to support design and development of the NASA Ares I Crew Launch Vehicle. The diagnostic analysis provided by the ETA Tool was proven to be valuable system engineering output that provided consistency in the verification of system engineering requirements. This software user manual provides a description of each output report generated by the ETA Tool. The manual also describes the example diagnostic model and supporting documentation - also provided with the ETA Tool software release package - that were used to generate the reports presented in the manual.
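The sensor-loss sensitivity study can be pictured with a small sketch: given a toy mapping of tests to the faults they detect, drop the tests tied to one sensor and recompute fault coverage. The faults, tests, and sensors below are invented and are not the Ares I diagnostic model.

```python
# Hedged sketch of detection-sensitivity analysis: remove the tests associated with
# one sensor and report which faults become undetectable. All names are illustrative.

DETECTS = {                       # test -> set of faults it can detect
    "pt_101_high":  {"tank_overpress"},
    "pt_101_low":   {"feedline_leak", "valve_stuck_closed"},
    "tc_204_high":  {"turbopump_overtemp"},
    "flow_310_low": {"feedline_leak"},
}
SENSOR_OF = {"pt_101_high": "PT-101", "pt_101_low": "PT-101",
             "tc_204_high": "TC-204", "flow_310_low": "FLOW-310"}

ALL_FAULTS = set().union(*DETECTS.values())

def coverage(lost_sensor=None):
    """Return the set of faults still detectable when one sensor's tests are removed."""
    covered = set()
    for test, faults in DETECTS.items():
        if SENSOR_OF[test] != lost_sensor:
            covered |= faults
    return covered

baseline = coverage()
for sensor in sorted(set(SENSOR_OF.values())):
    remaining = coverage(lost_sensor=sensor)
    newly_lost = sorted(baseline - remaining)
    print(f"lose {sensor:8s}: {len(remaining)}/{len(ALL_FAULTS)} faults still detectable; "
          f"newly undetectable: {newly_lost or 'none'}")
```

A real tool works from the directed-graph diagnostic model rather than an explicit table like this, but the sensitivity question it answers is the same.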
A new approach for instrument software at Gemini
NASA Astrophysics Data System (ADS)
Gillies, Kim; Nunez, Arturo; Dunn, Jennifer
2008-07-01
Gemini Observatory is now developing its next generation of astronomical instruments, the Aspen instruments. These new instruments are sophisticated and costly, requiring large, distributed, collaborative teams. Instrument software groups often include experienced team members with existing mature code. Gemini has taken its experience from the previous generation of instruments and current hardware and software technology to create an approach for developing instrument software that takes advantage of the strengths of our instrument builders and our own operations needs. This paper describes this new software approach that couples a lightweight infrastructure and software library with aspects of modern agile software development. The Gemini Planet Imager instrument project, which is currently approaching its critical design review, is used to demonstrate aspects of this approach. New facilities under development will face similar issues in the future, and the approach presented here can be applied to other projects.
Supporting NASA Facilities Through GIS
NASA Technical Reports Server (NTRS)
Ingham, Mary E.
2000-01-01
The NASA GIS Team supports NASA facilities and partners in the analysis of spatial data. Geographic Information System (GIS) is an integration of computer hardware, software, and personnel linking topographic, demographic, utility, facility, image, and other geo-referenced data. The system provides a graphic interface to relational databases and supports decision making processes such as planning, design, maintenance and repair, and emergency response.
An Autonomous Flight Safety System
2008-11-01
are taken. AFSS can take vehicle navigation data from redundant onboard sensors and make flight termination decisions using software-based rules...implemented on redundant flight processors. By basing these decisions on actual Instantaneous Impact Predictions and by providing for an arbitrary...number of mission rules, it is the contention of the AFSS development team that the decision making process used by Missile Flight Control Officers
NASA Astrophysics Data System (ADS)
Comendant, T.; Strittholt, J. R.; Ward, B. C.; Bachelet, D. M.; Grossman, D.; Stevenson-Molnar, N.; Henifin, K.; Lundin, M.; Marvin, T. S.; Peterman, W. L.; Corrigan, G. N.; O'Connor, K.
2013-12-01
A multi-disciplinary team of scientists, software engineers, and outreach staff at the Conservation Biology Institute launched an open-access, web-based spatial data platform called Data Basin (www.databasin.org) in 2010. Primarily built to support research and environmental resource planning, Data Basin provides the capability for individuals and organizations to explore, create, interpret, and collaborate around their priority topics and geographies. We used a stakeholder analysis to assess the needs of data consumers/producers and help prioritize primary and secondary audiences. Data Basin's simple and user-friendly interface makes mapping and geo-processing tools more accessible to less technical audiences. Input from users is considered in system planning, testing, and implementation. The team continually develops using an agile software development approach, which allows new features, improvements, and bug fixes to be deployed to the live system on a frequent basis. The data import process is handled through administrative approval and Data Basin requires spatial data (biological, physical, and socio-economic) to be well-documented. Outreach and training is used to convey the scope and appropriate use of the scientific information and available resources.
Using Animated Language Software with Children Diagnosed with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Mulholland, Rita; Pete, Ann Marie; Popeson, Joanne
2008-01-01
We examined the impact of using an animated software program (Team Up With Timo) on the expressive and receptive language abilities of five children ages 5-9 in a self-contained Learning and Language Disabilities class. We chose to use Team Up With Timo (Animated Speech Corporation) because it allows the teacher to personalize the animation for…
The Cascading Impacts of Technology Selection: Incorporating Ruby on Rails into ECHO
NASA Astrophysics Data System (ADS)
Pilone, D.; Cechini, M.
2010-12-01
NASA’s Earth Observing System (EOS) ClearingHOuse (ECHO) is an SOA-based Earth Science data search and order system implemented in Java with one significant exception: the web client used by 98% of our users is written in Perl. After several decades of maintenance the Perl-based application had reached the end of its serviceable life and ECHO was tasked with implementing a replacement. Despite a broad investment in Java, the ECHO team conducted a survey of modern development technologies including Flex, Python/Django, JSF2/Spring and Ruby on Rails. The team ultimately chose Ruby on Rails (RoR) with Cucumber for testing due to its perceived applicability to web application development and corresponding development efficiency gains. Both positive and negative impacts on the entire ECHO team, including our stakeholders, were immediate and sometimes subtle. The technology selection caused shifts in our architecture and design, development and deployment procedures, requirement definition approach, testing approach, and, somewhat surprisingly, our project team structure and software process. This presentation discusses our experiences, including technical, process, and psychological, using RoR on a production system. During this session we will discuss:
- Real impacts of introducing a dynamic language to a Java team
- Real and perceived efficiency advantages
- Impediments to adoption and effectiveness
- Impacts of transition from Test Driven Development to Behavior Driven Development
- Leveraging Cucumber to provide fully executable requirement documents
- Impacts on team structure and roles
Losiak, Anna; Gołębiowska, Izabela; Orgel, Csilla; Moser, Linda; MacArthur, Jane; Boyd, Andrea; Hettrich, Sebastian; Jones, Natalie; Groemer, Gernot
2014-05-01
MARS2013 was an integrated Mars analog field simulation in eastern Morocco performed by the Austrian Space Forum between February 1 and 28, 2013. The purpose of this paper is to discuss the system of data processing and utilization adopted by the Remote Science Support (RSS) team during this mission. The RSS team procedures were designed to optimize operational efficiency of the Flightplan, field crew, and RSS teams during a long-term analog mission with an introduced 10 min time delay in communication between "Mars" and Earth. The RSS workflow was centered on a single-file, easy-to-use, spatially referenced database that included all the basic information about the conditions at the site of study, as well as all previous and planned activities. This database was prepared in Google Earth software. The lessons learned from MARS2013 RSS team operations are as follows: (1) using a spatially referenced database is an efficient way of data processing and data utilization in a long-term analog mission with a large amount of data to be handled, (2) mission planning based on iterations can be efficiently supported by preparing suitability maps, (3) the process of designing cartographical products should start early in the planning stages of a mission and involve representatives of all teams, (4) all team members should be trained in usage of cartographical products, (5) technical problems (e.g., usage of a geological map while wearing a space suit) should be taken into account when planning a work flow for geological exploration, (6) a system that helps the astronauts to efficiently orient themselves in the field should be designed as part of future analog studies.
Human-computer interaction reflected in the design of user interfaces for general practitioners.
Stoicu-Tivadar, Lacramioara; Stoicu-Tivadar, Vasile
2006-01-01
Our objective is to address the problem of properly built health information systems in general practice, an important issue for their approval and use in clinical practice. We present how a national general practitioner (GP) network was built and put into practice, and several results of its activity seen from the clinician's and the software application team's points of view. We used a multi-level incremental development appropriate for the conditions of the required information system. After the development of the first version of the software components (based on rapid prototyping) of the sentinel network, a questionnaire addressed the needs and improvements required by the health professionals. Based on the answers, the functionality of the system and the interface were improved regarding the real needs expressed by the end-users. The network is functional and the collected data from the network are being processed using statistical methods. The academic software team developed a GP application that is well received by the GPs in the network, as shown by the survey and discussions during the training period. As an added confirmation, several GPs outside the network enrolled after seeing the software at work. Another confirmation that we did a good job was that after the final presentation of the results of the project a representative from the Romanian Society for Cardiology expressed the wish of this society to access the data yielded by the network.
NASA Technical Reports Server (NTRS)
Entin, Elliot E.; Kerrigan, Caroline; Serfaty, Daniel; Young, Philip
1998-01-01
The goals of this project were to identify and investigate aspects of team and individual decision-making and risk-taking behaviors hypothesized to be most affected by prolonged isolation. A key premise driving our research approach is that effects of stressors that impact individual and team cognitive processes in an isolated, confined, and hazardous environment will be projected onto the performance of a simulation task. To elicit and investigate these team behaviors we developed a search and rescue task concept as a scenario domain that would be relevant for isolated crews. We modified the Distributed Dynamic Decision-making (DDD) simulator, a platform that has been extensively used for empirical research in team processes and taskwork performance, to portray the features of a search and rescue scenario and present the task components incorporated into that scenario. The resulting software is called DDD-Search and Rescue (Version 1.0). To support the use of the DDD-Search and Rescue simulator in isolated experiment settings, we wrote a player's manual for teaching team members to operate the simulator and play the scenario. We then developed a research design and experiment plan that would allow quantitative measures of individual and team decision making skills using the DDD-Search and Rescue simulator as the experiment platform. A description of these activities and the associated materials that were produced under this contract are contained in this report.
A recent Cleanroom success story: The Redwing project
NASA Technical Reports Server (NTRS)
Hausler, Philip A.
1992-01-01
Redwing is the largest completed Cleanroom software engineering project in IBM, both in terms of lines of code and project staffing. The product provides a decision-support facility that utilizes artificial intelligence (AI) technology for predicting and preventing complex operating problems in an MVS environment. The project used the Cleanroom process for development and realized a defect rate of 2.6 errors/KLOC, measured from first execution. This represents the total amount of errors that were found in testing and installation at three field test sites. Development productivity was 486 LOC/PM, which included all development labor expended in design specification through completion of incremental testing. In short, the Redwing team produced a complex systems software product with an extraordinarily low error rate, while maintaining high productivity. All of this was accomplished by a project team using Cleanroom for the first time. An 'introductory implementation' of Cleanroom was defined and used on Redwing. This paper describes the quality and productivity results, the Redwing project, and how Cleanroom was implemented.
The SOFIA Mission Control System Software
NASA Astrophysics Data System (ADS)
Heiligman, G. M.; Brock, D. R.; Culp, S. D.; Decker, P. H.; Estrada, J. C.; Graybeal, J. B.; Nichols, D. M.; Paluzzi, P. R.; Sharer, P. J.; Pampell, R. J.; Papke, B. L.; Salovich, R. D.; Schlappe, S. B.; Spriestersbach, K. K.; Webb, G. L.
1999-05-01
The Stratospheric Observatory for Infrared Astronomy (SOFIA) will be delivered with a computerized mission control system (MCS). The MCS communicates with the aircraft's flight management system and coordinates the operations of the telescope assembly, mission-specific subsystems, and the science instruments. The software for the MCS must be reliable and flexible. It must be easily usable by many teams of observers with widely differing needs, and it must support non-intrusive access for education and public outreach. The technology must be appropriate for SOFIA's 20-year lifetime. The MCS software development process is an object-oriented, use case driven approach. The process is iterative: delivery will be phased over four "builds"; each build will be the result of many iterations; and each iteration will include analysis, design, implementation, and test activities. The team is geographically distributed, coordinating its work via Web pages, teleconferences, T.120 remote collaboration, and CVS (for Internet-enabled configuration management). The MCS software architectural design is derived in part from other observatories' experience. Some important features of the MCS are: * distributed computing over several UNIX and VxWorks computers * fast throughput of time-critical data * use of third-party components, such as the Adaptive Communications Environment (ACE) and the Common Object Request Broker Architecture (CORBA) * extensive configurability via stored, editable configuration files * use of several computer languages so developers have "the right tool for the job". C++, Java, scripting languages, Interactive Data Language (from Research Systems, Int'l.), XML, and HTML will all be used in the final deliverables. This paper reports on work in progress, with the final product scheduled for delivery in 2001. This work was performed for Universities Space Research Association for NASA under contract NAS2-97001.
Data Management for a Climate Data Record in an Evolving Technical Landscape
NASA Astrophysics Data System (ADS)
Moore, K. D.; Walter, J.; Gleason, J. L.
2017-12-01
For nearly twenty years, NASA Langley Research Center's Clouds and the Earth's Radiant Energy System (CERES) Science Team has been producing a suite of data products that forms a persistent climate data record of the Earth's radiant energy budget. Many of the team's physical scientists and key research contributors have been with the team since the launch of the first CERES instrument in 1997. This institutional knowledge is irreplaceable and its longevity and continuity are among the reasons that the team has been so productive. Such legacy involvement, however, can also be a limiting factor. Some CERES scientists-cum-coders might possess skills that were state-of-the-field when they were emerging scientists but may now be outdated with respect to developments in software development best practices and supporting technologies. Both programming languages and processing frameworks have evolved significantly in the past twenty years, and updating one of these factors warrants consideration of updating the other. With the imminent launch of a final CERES instrument and the good health of those in flight, the CERES data record stands to continue far into the future. The CERES Science Team is, therefore, undergoing a re-architecture of its codebase to maintain compatibility with newer data processing platforms and technologies and to leverage modern software development best practices. This necessitates training our staff and consequently presents several challenges, including: Development continues immediately on the next "edition" of research algorithms upon release of the previous edition. How can code be rewritten at the same time that the science algorithms are being updated and integrated? With limited time to devote to training, how can we update the staff's existing skillset without slowing progress or introducing new errors? The CERES Science Team is large and complex, much like the current state of its codebase. How can we identify, in a breadth-wise manner, areas for code improvement across multiple research groups that maintain code with varying semantics but common concepts? In this work, we discuss the successes and pitfalls of this major re-architecture effort and share how we will sustain improvement into the future.
NASA Astrophysics Data System (ADS)
Martin, Adrian
As the applications of mobile robotics evolve it has become increasingly less practical for researchers to design custom hardware and control systems for each problem. This research presents a new approach to control system design that looks beyond end-of-lifecycle performance and considers control system structure, flexibility, and extensibility. Toward these ends the Control ad libitum philosophy is proposed, stating that to make significant progress in the real-world application of mobile robot teams the control system must be structured such that teams can be formed in real-time from diverse components. The Control ad libitum philosophy was applied to the design of the HAA (Host, Avatar, Agent) architecture: a modular hierarchical framework built with provably correct distributed algorithms. A control system for exploration and mapping, search and deploy, and foraging was developed to evaluate the architecture in three sets of hardware-in-the-loop experiments. First, the basic functionality of the HAA architecture was studied, specifically the ability to: a) dynamically form the control system, b) dynamically form the robot team, c) dynamically form the processing network, and d) handle heterogeneous teams. Second, the real-time performance of the distributed algorithms was tested and proved effective for the moderately sized systems evaluated. Furthermore, the distributed Just-in-time Cooperative Simultaneous Localization and Mapping (JC-SLAM) algorithm demonstrated accuracy equal to or better than traditional approaches in resource-starved scenarios, while reducing exploration time significantly. The JC-SLAM strategies are also suitable for integration into many existing particle filter SLAM approaches, complementing their unique optimizations. Third, the control system was subjected to concurrent software and hardware failures in a series of increasingly complex experiments. Even with unrealistically high rates of failure the control system was able to successfully complete its tasks. The HAA implementation designed following the Control ad libitum philosophy proved to be capable of dynamic team formation and extremely robust against both hardware and software failure; and, due to the modularity of the system, there is significant potential for reuse of assets and future extensibility. One future goal is to make the source code publicly available and establish a forum for the development and exchange of new agents.
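For orientation only, the sketch below shows the generic predict/weight/resample step shared by the particle-filter SLAM family the abstract refers to; it is not the JC-SLAM algorithm itself, and the single range-to-landmark measurement model is an assumption made purely for illustration.

```python
import numpy as np

def particle_filter_step(particles, weights, control, measurement,
                         motion_noise=0.1, meas_noise=0.5):
    """One generic particle-filter update (predict, weight, resample).

    Illustrative only; particles is an (N, 2) array of hypothesised positions,
    and the "map" is reduced to a single landmark at the origin.
    """
    n = len(particles)
    # Predict: apply the commanded displacement plus motion noise.
    particles = particles + control + np.random.normal(0, motion_noise, particles.shape)
    # Weight: score each particle by how well it explains the range measurement.
    predicted_range = np.linalg.norm(particles, axis=1)
    likelihood = np.exp(-0.5 * ((predicted_range - measurement) / meas_noise) ** 2)
    weights = weights * likelihood
    weights /= weights.sum()
    # Systematic resampling when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        positions = (np.arange(n) + np.random.uniform()) / n
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```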
Real-Time Multimission Event Notification System for Mars Relay
NASA Technical Reports Server (NTRS)
Wallick, Michael N.; Allard, Daniel A.; Gladden, Roy E.; Wang, Paul; Hy, Franklin H.
2013-01-01
As the Mars Relay Network is in constant flux (missions and teams going through their daily workflow), it is imperative that users are aware of such state changes. For example, a change by an orbiter team can affect operations on a lander team. This software provides an ambient view of the real-time status of the Mars network. The Mars Relay Operations Service (MaROS) comprises a number of tools to coordinate, plan, and visualize various aspects of the Mars Relay Network. As part of MaROS, a feature set was developed that operates on several levels of the software architecture. These levels include a Web-based user interface, a back-end "ReSTlet" built in Java, and databases that store the data as it is received from the network. The result is a real-time event notification and management system, so mission teams can track and act upon events on a moment-by-moment basis. This software retrieves events from MaROS and displays them to the end user. Updates happen in real time, i.e., messages are pushed to the user while logged into the system, and queued when the user is not online for later viewing. The software does not do away with the email notifications, but augments them with in-line notifications. Further, this software expands the events that can generate a notification, and allows user-generated notifications. Existing software sends a smaller subset of mission-generated notifications via email. A common complaint of users was that the system-generated e-mails often "get lost" with other e-mail that comes in. This software allows for an expanded set (including user-generated) of notifications displayed in-line of the program. By separating notifications, this can improve a user's workflow.
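The push-when-online, queue-when-offline behaviour described above is easy to picture with a small sketch. The class and method names below are hypothetical; the actual MaROS feature is implemented as a web UI over a Java "ReSTlet" back end and is not reproduced here.

```python
from collections import defaultdict, deque

class EventNotifier:
    """Sketch of the push-or-queue notification behaviour described for MaROS."""

    def __init__(self):
        self.online = {}                  # user -> callback for in-line display
        self.queued = defaultdict(deque)  # user -> events held until next login

    def login(self, user, display_callback):
        self.online[user] = display_callback
        while self.queued[user]:          # deliver anything that arrived while offline
            display_callback(self.queued[user].popleft())

    def logout(self, user):
        self.online.pop(user, None)

    def notify(self, user, event):
        if user in self.online:
            self.online[user](event)         # push in real time
        else:
            self.queued[user].append(event)  # queue for later viewing

notifier = EventNotifier()
notifier.notify("lander_team", "Orbiter overflight window changed")
notifier.login("lander_team", print)         # queued event is delivered on login
```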
AnClim and ProClimDB software for data quality control and homogenization of time series
NASA Astrophysics Data System (ADS)
Stepanek, Petr
2015-04-01
During the last decade, a software package consisting of AnClim, ProClimDB and LoadData for processing (mainly climatological) data has been created. This software offers a comprehensive solution for processing climatological time series, starting from loading the data from a central database (e.g. Oracle, via the LoadData software), through data quality control and homogenization, to time series analysis, extreme value evaluation, and RCM output verification and correction (ProClimDB and AnClim software). The detection of inhomogeneities is carried out on a monthly scale through the application of AnClim, or, more recently, by R functions called from ProClimDB, while quality control, the preparation of reference series and the correction of detected breaks are carried out by the ProClimDB software. The software combines many statistical tests, types of reference series and time scales (monthly, seasonal and annual, daily and sub-daily ones). These can be used to create an "ensemble" of solutions, which may be more reliable than any single method. AnClim software is suitable for educational purposes, e.g. for students getting acquainted with methods used in climatology. Built-in graphical tools and comparison of various statistical tests help in better understanding of a given method. ProClimDB, on the contrary, is a tool aimed at processing large climatological datasets. Recently, functions from R can be used within the software, making it more efficient in data processing and capable of easily incorporating new methods (when available in R). An example of usage is easy comparison of methods for correction of inhomogeneities in daily data (HOM of Paul Della-Marta, SPLIDHOM method of Olivier Mestre, DAP - own method, QM of Xiaolan Wang and others). The software is available together with further information on www.climahom.eu. Acknowledgement: this work was partially funded by the project "Building up a multidisciplinary scientific team focused on drought" No. CZ.1.07/2.3.00/20.0248.
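To make the break-detection step concrete, here is a minimal sketch of one classical test of the kind that can be combined into the "ensemble" mentioned above (the standard normal homogeneity test applied to a candidate-minus-reference difference series). It is not the AnClim/ProClimDB implementation, and the example data are synthetic.

```python
import numpy as np

def snht_break(candidate, reference):
    """Locate the most likely break point in a candidate series (SNHT sketch)."""
    q = np.asarray(candidate, float) - np.asarray(reference, float)  # difference series
    z = (q - q.mean()) / q.std(ddof=1)                               # standardise
    n = len(z)
    t_stat = np.empty(n - 1)
    for k in range(1, n):
        z1, z2 = z[:k].mean(), z[k:].mean()
        t_stat[k - 1] = k * z1 ** 2 + (n - k) * z2 ** 2
    k_break = int(np.argmax(t_stat)) + 1
    return k_break, t_stat[k_break - 1]   # candidate break position and test value

# Example: a 0.8-unit shift inserted after point 30 of a 60-point annual series.
rng = np.random.default_rng(0)
series = rng.normal(10.0, 0.5, 60)
series[30:] += 0.8
print(snht_break(series, np.full(60, 10.0)))
```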
VLTI auxiliary telescopes: a full object-oriented approach
NASA Astrophysics Data System (ADS)
Chiozzi, Gianluca; Duhoux, Philippe; Karban, Robert
2000-06-01
The Very Large Telescope (VLT) Telescope Control Software (TCS) is a portable system. It is now in use or will be used in a whole family of ESO telescopes (VLT Unit Telescopes, VLTI Auxiliary Telescopes, NTT, La Silla 3.6, VLT Survey Telescope, and Astronomical Site Monitors in Paranal and La Silla). Although it has been developed making extensive use of Object Oriented (OO) methodologies, the overall development process chosen at the beginning of the project used traditional methods. In order to guarantee a longer lifetime for the system (improving documentation and maintainability) and to prepare for future projects, we have introduced a full OO process. We have taken as a basis the Unified Software Development Process with the Unified Modeling Language (UML) and we have adapted the process to our specific needs. This paper describes how the process has been applied to the VLTI Auxiliary Telescopes Control Software (ATCS). The ATCS is based on the portable VLT TCS, but some subsystems are new or have specific characteristics. The complete process has been applied to the new subsystems, while reused code has been integrated into the UML models. We have used the ATCS on the one hand to tune the process and train the team members, and on the other hand to provide a UML- and WWW-based documentation for the portable VLT TCS.
The SeaDAS Processing and Analysis System: SeaWiFS, MODIS, and Beyond
NASA Astrophysics Data System (ADS)
MacDonald, M. D.; Ruebens, M.; Wang, L.; Franz, B. A.
2005-12-01
The SeaWiFS Data Analysis System (SeaDAS) is a comprehensive software package for the processing, display, and analysis of ocean data from a variety of satellite sensors. Continuous development and user support by programmers and scientists for more than a decade has helped to make SeaDAS the most widely used software package in the world for ocean color applications, with a growing base of users from the land and sea surface temperature community. Full processing support for past (CZCS, OCTS, MOS) and present (SeaWiFS, MODIS) sensors, and anticipated support for future missions such as NPP/VIIRS, enables end users to reproduce the standard ocean archive product suite distributed by NASA's Ocean Biology Processing Group (OBPG), as well as a variety of evaluation and intermediate ocean, land, and atmospheric products. Availability of the processing algorithm source codes and a software build environment also provide users with the tools to implement custom algorithms. Recent SeaDAS enhancements include synchronization of MODIS processing with the latest code and calibration updates from the MODIS Calibration Support Team (MCST), support for all levels of MODIS processing including Direct Broadcast, a port to the Macintosh OS X operating system, release of the display/analysis-only SeaDAS-Lite, and an extremely active web-based user support forum.
Multidisciplinary Concurrent Design Optimization via the Internet
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Kelkar, Atul G.; Koganti, Gopichand
2001-01-01
A methodology is presented which uses commercial design and analysis software and the Internet to perform concurrent multidisciplinary optimization. The methodology provides a means to develop multidisciplinary designs without requiring that all software be accessible from the same local network. The procedures are amenable to design and development teams whose members, expertise, and respective software are not geographically co-located. This methodology facilitates multidisciplinary teams working concurrently on a design problem of common interest. Partitioning the design software across different machines allows each constituent tool to run on the machine that provides the most economy and efficiency. The methodology is demonstrated on the concurrent design of a spacecraft structure and attitude control system. Results are compared to those derived from performing the design with an autonomous FORTRAN program.
2017-03-17
NASA engineers and test directors gather in Firing Room 3 in the Launch Control Center at NASA's Kennedy Space Center in Florida, to watch a demonstration of the automated command and control software for the agency's Space Launch System (SLS) and Orion spacecraft. In front, far right, is Charlie Blackwell-Thompson, launch director for Exploration Mission 1 (EM-1). The software is called the Ground Launch Sequencer. It will be responsible for nearly all of the launch commit criteria during the final phases of launch countdowns. The Ground and Flight Application Software Team (GFAST) demonstrated the software. It was developed by the Command, Control and Communications team in the Ground Systems Development and Operations (GSDO) Program. GSDO is helping to prepare the center for the first test flight of Orion atop the SLS on EM-1.
SPRITE: the Spitzer proposal review website
NASA Astrophysics Data System (ADS)
Crane, Megan K.; Storrie-Lombardi, Lisa J.; Silbermann, Nancy A.; Rebull, Luisa M.
2008-07-01
The Spitzer Science Center (SSC), located on the campus of the California Institute of Technology, supports the science operations of NASA's infrared Spitzer Space Telescope. The SSC issues an annual Call for Proposals inviting investigators worldwide to submit Spitzer Space Telescope proposals. The Spitzer Proposal Review Website (SPRITE) is a MySQL/PHP web database application designed to support the SSC proposal review process. Review panel members use the software to view, grade, and write comments about the proposals, and SSC support team members monitor the grading and ranking process and ultimately generate a ranked list of all the proposals. The software is also used to generate, edit, and email award letters to the proposers. This work was performed at the California Institute of Technology under contract to the National Aeronautics and Space Administration.
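The grade-then-rank step described above can be illustrated with a small relational sketch. SPRITE itself is a MySQL/PHP web application; the table layout, column names, and scores below are assumptions made only to show the idea of aggregating panel grades into a ranked list.

```python
import sqlite3

# Hypothetical schema standing in for the proposal/grade tables of a review system.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE proposal (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE grade (proposal_id INTEGER, reviewer TEXT, score REAL, comment TEXT);
""")
db.executemany("INSERT INTO proposal VALUES (?, ?)",
               [(1, "Dust in nearby galaxies"), (2, "YSO variability survey")])
db.executemany("INSERT INTO grade VALUES (?, ?, ?, ?)",
               [(1, "panelist_a", 4.5, "strong case"),
                (1, "panelist_b", 4.0, "feasible"),
                (2, "panelist_a", 3.0, "unclear sample")])

# Ranked list of proposals by mean panel score.
for row in db.execute("""
        SELECT p.title, AVG(g.score) AS mean_score
        FROM proposal p JOIN grade g ON g.proposal_id = p.id
        GROUP BY p.id ORDER BY mean_score DESC"""):
    print(row)
```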
Strengthening Interprofessional Requirements Engineering Through Action Sheets: A Pilot Study.
Kunz, Aline; Pohlmann, Sabrina; Heinze, Oliver; Brandner, Antje; Reiß, Christina; Kamradt, Martina; Szecsenyi, Joachim; Ose, Dominik
2016-10-18
The importance of information and communication technology for healthcare is steadily growing. Newly developed tools are addressing different user groups: physicians, other health care professionals, social workers, patients, and family members. Since often many different actors with different expertise and perspectives are involved in the development process it can be a challenge to integrate the user-reported requirements of those heterogeneous user groups. Nevertheless, the understanding and consideration of user requirements is the prerequisite of building a feasible technical solution. In the course of the presented project it proved to be difficult to gain clear action steps and priorities for the development process out of the primary requirements compilation. Even if a regular exchange between involved teams took place there was a lack of a common language. The objective of this paper is to show how the already existing requirements catalog was subdivided into specific, prioritized, and coherent working packages and the cooperation of multiple interprofessional teams within one development project was reorganized at the same time. In the case presented, the manner of cooperation was reorganized and a new instrument called an Action Sheet was implemented. This paper introduces the newly developed methodology which was meant to smooth the development of a user-centered software product and to restructure interprofessional cooperation. There were 10 focus groups in which views of patients with colorectal cancer, physicians, and other health care professionals were collected in order to create a requirements catalog for developing a personal electronic health record. Data were audio- and videotaped, transcribed verbatim, and thematically analyzed. Afterwards, the requirements catalog was reorganized in the form of Action Sheets which supported the interprofessional cooperation referring to the development process of a personal electronic health record for the Rhine-Neckar region. In order to improve the interprofessional cooperation the idea arose to align the requirements arising from the implementation project with the method of software development applied by the technical development team. This was realized by restructuring the original requirements set in a standardized way and under continuous adjustment between both teams. As a result not only the way of displaying the user demands but also of interprofessional cooperation was steered in a new direction. User demands must be taken into account from the very beginning of the development process, but it is not always obvious how to bring them together with IT knowhow and knowledge of the contextual factors of the health care system. Action Sheets seem to be an effective tool for making the software development process more tangible and convertible for all connected disciplines. Furthermore, the working method turned out to support interprofessional ideas exchange.
Intelligent Software for System Design and Documentation
NASA Technical Reports Server (NTRS)
2002-01-01
In an effort to develop a real-time, on-line database system that tracks documentation changes in NASA's propulsion test facilities, engineers at Stennis Space Center teamed with ECT International of Brookfield, WI, through the NASA Dual-Use Development Program to create the External Data Program and Hyperlink Add-on Modules for the promis*e software. Promis*e is ECT's top-of-the-line intelligent software for control system design and documentation. With promis*e the user can make use of the automated design process to quickly generate control system schematics, panel layouts, bills of material, wire lists, terminal plans and more. NASA and its testing contractors currently use promis*e to create the drawings and schematics at the E2 Cell 2 test stand located at Stennis Space Center.
Fostering soft skills in project-oriented learning within an agile atmosphere
NASA Astrophysics Data System (ADS)
Chassidim, Hadas; Almog, Dani; Mark, Shlomo
2018-07-01
The project-oriented and Agile approaches have motivated a new generation of software engineers. Within the academic curriculum, the issue of whether students are being sufficiently prepared for the future has been raised. The objective of this work is to present the project-oriented environment as an influential factor that the software engineering profession requires, using the second-year course 'Software Development and Management in Agile Approach' as a case study. This course combines academic topics, self-learning and the application of soft skills, the call for creativity, and the recognition of updated technologies and dynamic circumstances. The results of a survey that evaluated the perceived value of the course showed that the highest contribution of our environment was in the effectiveness of the team-work and the overall development process of the project.
Large scale database scrubbing using object oriented software components.
Herting, R L; Barnes, M R
1998-01-01
Now that case managers, quality improvement teams, and researchers use medical databases extensively, the ability to share and disseminate such databases while maintaining patient confidentiality is paramount. A process called scrubbing addresses this problem by removing personally identifying information while keeping the integrity of the medical information intact. Scrubbing entire databases, containing multiple tables, requires that the implicit relationships between data elements in different tables of the database be maintained. To address this issue we developed DBScrub, a Java program that interfaces with any JDBC compliant database and scrubs the database while maintaining the implicit relationships within it. DBScrub uses a small number of highly configurable object-oriented software components to carry out the scrubbing. We describe the structure of these software components and how they maintain the implicit relationships within the database.
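The core idea, replacing identifying values consistently so that implicit joins between tables still hold after scrubbing, can be sketched briefly. DBScrub itself is a Java/JDBC tool whose components are not reproduced here; the class and field names below are illustrative assumptions.

```python
import uuid

class Scrubber:
    """Sketch: every occurrence of the same identifying value, in any table,
    maps to the same surrogate, so cross-table relationships survive scrubbing."""

    def __init__(self):
        self._map = {}

    def surrogate(self, value):
        if value not in self._map:
            self._map[value] = uuid.uuid4().hex[:8]
        return self._map[value]

    def scrub_rows(self, rows, identifying_columns):
        for row in rows:
            yield {col: (self.surrogate(val) if col in identifying_columns else val)
                   for col, val in row.items()}

s = Scrubber()
patients = [{"mrn": "12345", "dx": "I10"}]
visits   = [{"mrn": "12345", "visit_date": "1998-03-01"}]
print(list(s.scrub_rows(patients, {"mrn"})),
      list(s.scrub_rows(visits, {"mrn"})))   # same surrogate MRN appears in both tables
```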
Implementation of a Campuswide Distributed Mass Storage Service: the Dream Versus Reality
NASA Technical Reports Server (NTRS)
Prahst, Stephen; Armstead, Betty Jo
1996-01-01
In 1990, a technical team at NASA Lewis Research Center, Cleveland, Ohio, began defining a Mass Storage Service to provide long-term archival storage, short-term storage for very large files, distributed Network File System access, and backup services for critical data that resides on workstations and personal computers. Because of software availability and budgets, the total service was phased in over several years. During the process of building the service from the commercial technologies available, our Mass Storage Team refined the original vision and learned from the problems and mistakes that occurred. We also enhanced some technologies to better meet the needs of users and system administrators. This report describes our team's journey from dream to reality, outlines some of the problem areas that still exist, and suggests some solutions.
Results of the Software Process Improvement Efforts of the Early Adopters in NAVAIR 4.0
2007-12-01
AIRSpeed utilizes DMAIC (Define, Measure, Analyze, Improve, Control), a structured problem-solving methodology widely used in business, aimed at reducing costs and improving productivity and customer satisfaction. DMAIC leads project teams through the logical steps from problem definition to problem resolution. Each phase has a specific set ...
McDonald, Sandra A; Velasco, Elizabeth; Ilasi, Nicholas T
2010-12-01
Pfizer, Inc.'s Tissue Bank, in conjunction with Pfizer's BioBank (biofluid repository), endeavored to create an overarching internal software package to cover all general functions of both research facilities, including sample receipt, reconciliation, processing, storage, and ordering. Business process flow diagrams were developed by the Tissue Bank and Informatics teams as a way of characterizing best practices both within the Bank and in its interactions with key internal and external stakeholders. Besides serving as a first step for the software development, such formalized process maps greatly assisted the identification and communication of best practices and the optimization of current procedures. The diagrams shared here could assist other biospecimen research repositories (both pharmaceutical and other settings) for comparative purposes or as a guide to successful informatics design. Therefore, it is recommended that biorepositories consider establishing formalized business process flow diagrams for their laboratories, to address these objectives of communication and strategy.
Unified Geophysical Cloud Platform (UGCP) for Seismic Monitoring and other Geophysical Applications.
NASA Astrophysics Data System (ADS)
Synytsky, R.; Starovoit, Y. O.; Henadiy, S.; Lobzakov, V.; Kolesnikov, L.
2016-12-01
We present the Unified Geophysical Cloud Platform (UGCP), or UniGeoCloud, as an innovative approach to geophysical data processing in the Cloud environment, with the ability to run any type of data processing software in an isolated environment within a single Cloud platform. We have developed a simple and quick installation method for several widely known open-source seismic software packages (SeisComp3, Earthworm, Geotool, MSNoise) that does not require knowledge of system administration, configuration, or OS compatibility issues, sparing users other time-consuming system configuration work. The installation process is simplified to a "mouse click" on the selected software package in the Cloud marketplace. The main objective of the developed capability was a set of software tools with which users are able to quickly design and install their own highly reliable and highly available virtual IT infrastructure for the organization of seismic (and, in future, other geophysical) data processing for either research or monitoring purposes. These tools provide access to any seismic station data available in open IP configuration from the different networks affiliated with different institutions and organizations. They also allow users to set up their own network as desired, by selecting either regionally deployed stations or a worldwide global network based on station selection from the global map. The processing software, products, and research results can easily be monitored from everywhere using a variety of user devices, from desktop computers to mobile gadgets. Current efforts of the development team are directed at achieving Scalability, Reliability and Sustainability (SRS) of the proposed solutions, allowing any user to run their applications with the confidence of no data loss and no failure of the monitoring or research software components. The system is suitable for quick rollout of the NDC-in-Box software package developed for State Signatories and aimed at promoting the processing of data collected by the IMS Network.
Conversion from Tree to Graph Representation of Requirements
NASA Technical Reports Server (NTRS)
Mayank, Vimal; Everett, David Frank; Shmunis, Natalya; Austin, Mark
2009-01-01
A procedure and software to implement the procedure have been devised to enable conversion from a tree representation to a graph representation of the requirements governing the development and design of an engineering system. The need for this procedure and software and for other requirements-management tools arises as follows: In systems-engineering circles, it is well known that requirements-management capability improves the likelihood of success in the team-based development of complex systems involving multiple technological disciplines. It is especially desirable to be able to visualize (in order to identify and manage) requirements early in the system-design process, when errors can be corrected most easily and inexpensively.
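One plausible way to picture a tree-to-graph conversion is to merge requirements that appear in several branches of the tree into a single graph node with multiple parents, so shared requirements are identified rather than duplicated. The abstract does not give the tool's actual merge rules, so the following Python sketch is only an illustration under that assumption.

```python
# Sketch: convert a requirements tree into a graph by merging duplicate nodes.
def tree_to_graph(tree, parent=None, nodes=None, edges=None):
    nodes = {} if nodes is None else nodes
    edges = set() if edges is None else edges
    name, children = tree["name"], tree.get("children", [])
    nodes.setdefault(name, {"name": name})       # merge duplicates by name
    if parent is not None:
        edges.add((parent, name))                # parent -> requirement edge
    for child in children:
        tree_to_graph(child, name, nodes, edges)
    return nodes, edges

system = {"name": "System", "children": [
    {"name": "Avionics", "children": [{"name": "Radiation tolerance"}]},
    {"name": "Power",    "children": [{"name": "Radiation tolerance"}]},
]}
nodes, edges = tree_to_graph(system)
print(len(nodes), sorted(edges))   # "Radiation tolerance" appears once, with two parents
```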
NASA Technical Reports Server (NTRS)
Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David
2015-01-01
The development of the Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large complex systems engineering challenge being addressed in part by focusing on the specific subsystems handling of off-nominal mission and fault tolerance. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA also has formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. Risk reduction is addressed by working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and detection and responses that can be tested in VMET and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - ARINC 653 partitioned OS, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM. The plan for VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as that used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of M&FM algorithms performance in the FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. 
Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET followed by section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. Its details are then further presented in section IV followed by Section V presenting integration, test status, and state analysis. Finally, section VI addresses the summary and forward directions followed by the appendices presenting relevant information on terminology and documentation.
The Standard Autonomous File Server, A Customized, Off-the-Shelf Success Story
NASA Technical Reports Server (NTRS)
Semancik, Susan K.; Conger, Annette M.; Obenschain, Arthur F. (Technical Monitor)
2001-01-01
The Standard Autonomous File Server (SAFS), which includes both off-the-shelf hardware and software, uses an improved automated file transfer process to provide a quicker, more reliable, prioritized file distribution for customers of near real-time data without interfering with the assets involved in the acquisition and processing of the data. It operates as a stand-alone solution, monitoring itself, and providing an automated fail-over process to enhance reliability. This paper describes the unique problems and lessons learned both during the COTS selection and integration into SAFS, and during the system's first year of operation in support of NASA's satellite ground network. COTS was the key factor in allowing the two-person development team to deploy systems in less than a year, meeting the required launch schedule. The SAFS system has been so successful that it is becoming a NASA standard resource, leading to its nomination for NASA's Software of the Year Award in 1999.
Evaluation of tactical training in team handball by means of artificial neural networks.
Hassan, Amr; Schrapf, Norbert; Ramadan, Wael; Tilp, Markus
2017-04-01
While tactical performance in competition has been analysed extensively, the assessment of training processes of tactical behaviour has rather been neglected in the literature. Therefore, the purpose of this study is to provide a methodology to assess the acquisition and implementation of offensive tactical behaviour in team handball. The use of game analysis software combined with artificial neural network (ANN) software enabled identifying tactical target patterns from high-level junior players based on their positions during offensive actions. These patterns were then trained by an amateur junior handball team (n = 14, age 17 (0.5) years). Following 6 weeks of tactical training, an exhibition game was performed in which the players were advised to use the target patterns as often as possible. Subsequently, the position data of the game was analysed with an ANN. The test revealed that 58% of the played patterns could be related to the trained target patterns. The similarity between executed patterns and target patterns was assessed by calculating the mean distance between key positions of the players in the game and the target pattern, which was 0.49 (0.20) m. In summary, the presented method appears to be a valid instrument to assess tactical training.
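The similarity measure quoted above (mean distance between corresponding key positions) is simple to state in code. The pattern recognition itself was done with an artificial neural network and is not reproduced here; the coordinates below are invented for illustration.

```python
import numpy as np

def mean_pattern_distance(executed, target):
    """Mean Euclidean distance between corresponding key positions (in metres)."""
    executed, target = np.asarray(executed, float), np.asarray(target, float)
    return float(np.linalg.norm(executed - target, axis=1).mean())

executed = [[2.1, 5.9], [4.0, 7.6], [6.4, 6.1]]   # player key positions in the game
target   = [[2.0, 6.0], [4.5, 7.5], [6.0, 6.0]]   # key positions of the target pattern
print(round(mean_pattern_distance(executed, target), 2))
```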
Simulations in nursing practice: toward authentic leadership.
Shapira-Lishchinsky, Orly
2014-01-01
Aim This study explores nurses' ethical decision-making in team simulations in order to identify the benefits of these simulations for authentic leadership. Background While previous studies have indicated that team simulations may improve ethics in the workplace by reducing the number of errors, those studies focused mainly on clinical aspects and not on nurses' ethical experiences or on the benefits of authentic leadership. Methods Fifty nurses from 10 health institutions in central Israel participated in the study. Data about nurses' ethical experiences were collected from 10 teams. Qualitative data analysis based on Grounded Theory was applied, using the atlas.ti 5.0 software package. Findings Simulation findings suggest four main benefits that reflect the underlying components of authentic leadership: self-awareness, relational transparency, balanced information processing and internalized moral perspective. Conclusions Team-based simulation as a training tool may lead to authentic leadership among nurses. Implications for nursing management Nursing management should incorporate team simulations into nursing practice to help resolve power conflicts and to develop authentic leadership in nursing. Consequently, errors will decrease, patients' safety will increase and optimal treatment will be provided. © 2012 John Wiley & Sons Ltd.
Unobtrusive Monitoring of Spaceflight Team Functioning
NASA Technical Reports Server (NTRS)
Maidel, Veronica; Stanton, Jeffrey M.
2010-01-01
This document contains a literature review suggesting that research on industrial performance monitoring has limited value in assessing, understanding, and predicting team functioning in the context of space flight missions. The review indicates that a more relevant area of research explores the effectiveness of teams and how team effectiveness may be predicted through the elicitation of individual and team mental models. Note that the mental models referred to in this literature typically reflect a shared operational understanding of a mission setting such as the cockpit controls and navigational indicators on a flight deck. In principle, however, mental models also exist pertaining to the status of interpersonal relations on a team, collective beliefs about leadership, success in coordination, and other aspects of team behavior and cognition. Pursuing this idea, the second part of this document provides an overview of available off-the-shelf products that might assist in extraction of mental models and elicitation of emotions based on an analysis of communicative texts among mission personnel. The search for text analysis software or tools revealed no available tools to enable extraction of mental models automatically, relying only on collected communication text. Nonetheless, using existing software to analyze how a team is functioning may be relevant for selection or training, when human experts are immediately available to analyze and act on the findings. Alternatively, if output can be sent to the ground periodically and analyzed by experts on the ground, then these software packages might be employed during missions as well. A demonstration of two text analysis software applications is presented. Another possibility explored in this document is the option of collecting biometric and proxemic measures such as keystroke dynamics and interpersonal distance in order to expose various individual or dyadic states that may be indicators or predictors of certain elements of team functioning. This document summarizes interviews conducted with personnel currently involved in observing or monitoring astronauts or who are in charge of technology that allows communication and monitoring. The objective of these interviews was to elicit their perspectives on monitoring team performance during long-duration missions and the feasibility of potential automatic non-obtrusive monitoring systems. Finally, in the last section, the report describes several priority areas for research that can help transform team mental models, biometrics, and/or proxemics into workable systems for unobtrusive monitoring of space flight team effectiveness. Conclusions from this work suggest that unobtrusive monitoring of space flight personnel is likely to be a valuable future tool for assessing team functioning, but that several research gaps must be filled before prototype systems can be developed for this purpose.
Implementing Large Projects in Software Engineering Courses
ERIC Educational Resources Information Center
Coppit, David
2006-01-01
In software engineering education, large projects are widely recognized as a useful way of exposing students to the real-world difficulties of team software development. But large projects are difficult to put into practice. First, educators rarely have additional time to manage software projects. Second, classrooms have inherent limitations that…
Intelligence algorithms for autonomous navigation in a ground vehicle
NASA Astrophysics Data System (ADS)
Petkovsek, Steve; Shakya, Rahul; Shin, Young Ho; Gautam, Prasanna; Norton, Adam; Ahlgren, David J.
2012-01-01
This paper will discuss the approach to autonomous navigation used by "Q," an unmanned ground vehicle designed by the Trinity College Robot Study Team to participate in the Intelligent Ground Vehicle Competition (IGVC). For the 2011 competition, Q's intelligence was upgraded in several different areas, resulting in a more robust decision-making process and a more reliable system. In 2010-2011, the software of Q was modified to operate in a modular parallel manner, with all subtasks (including motor control, data acquisition from sensors, image processing, and intelligence) running simultaneously in separate software processes using the National Instruments (NI) LabVIEW programming language. This eliminated processor bottlenecks and increased flexibility in the software architecture. Though overall throughput was increased, the long runtime of the image processing process (150 ms) reduced the precision of Q's realtime decisions. Q had slow reaction times to obstacles detected only by its cameras, such as white lines, and was limited to slow speeds on the course. To address this issue, the image processing software was simplified and also pipelined to increase the image processing throughput and minimize the robot's reaction times. The vision software was also modified to detect differences in the texture of the ground, so that specific surfaces (such as ramps and sand pits) could be identified. While previous iterations of Q failed to detect white lines that were not on a grassy surface, this new software allowed Q to dynamically alter its image processing state so that appropriate thresholds could be applied to detect white lines in changing conditions. In order to maintain an acceptable target heading, a path history algorithm was used to deal with local obstacle fields and GPS waypoints were added to provide a global target heading. These modifications resulted in Q placing 5th in the autonomous challenge and 4th in the navigation challenge at IGVC.
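The surface-dependent thresholding idea described above can be sketched in a few lines. The actual vision code is written in NI LabVIEW, and the threshold values and surface classes below are assumptions made only to show the mechanism of switching the image-processing state with the detected ground texture.

```python
import numpy as np

def detect_white_line(gray_frame, surface):
    """Threshold a grayscale frame for white-line pixels, per surface type."""
    thresholds = {"grass": 200, "ramp": 230, "sand": 245}   # hypothetical values
    return gray_frame >= thresholds[surface]

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
frame[240, :] = 255                                   # synthetic white line
print(detect_white_line(frame, "grass")[240].all())   # line row is detected
```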
WFF TOPEX Software Documentation Overview, May 1999. Volume 2
NASA Technical Reports Server (NTRS)
Brooks, Ronald L.; Lee, Jeffrey
2003-01-01
This document provides an overview of software development activities and the resulting products and procedures developed by the TOPEX Software Development Team (SWDT) at Wallops Flight Facility, in support of the WFF TOPEX Engineering Assessment and Verification efforts.
GPM Timeline Inhibits For IT Processing
NASA Technical Reports Server (NTRS)
Dion, Shirley K.
2014-01-01
The Safety Inhibit Timeline Tool was created as one approach to capturing and understanding inhibits and controls from IT through launch. Global Precipitation Measurement (GPM) Mission, which launched from Japan in March 2014, was a joint mission under a partnership between the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA). GPM was one of the first NASA Goddard in-house programs that extensively used software controls. Using this tool during the GPM buildup allowed a thorough review of inhibit and safety critical software design for hazardous subsystems such as the high gain antenna boom, solar array, and instrument deployments, transmitter turn-on, propulsion system release, and instrument radar turn-on. The GPM safety team developed a methodology to document software safety as part of the standard hazard report. As a result of this process, a new tool safety inhibit timeline was created for management of inhibits and their controls during spacecraft buildup and testing during IT at GSFC and at the launch range in Japan. The Safety Inhibit Timeline Tool was a pathfinder approach for reviewing software that controls the electrical inhibits. The Safety Inhibit Timeline Tool strengthens the Safety Analysts understanding of the removal of inhibits during the IT process with safety critical software. With this tool, the Safety Analyst can confirm proper safe configuration of a spacecraft during each IT test, track inhibit and software configuration changes, and assess software criticality. In addition to understanding inhibits and controls during IT, the tool allows the Safety Analyst to better communicate to engineers and management the changes in inhibit states with each phase of hardware and software testing and the impact of safety risks. Lessons learned from participating in the GPM campaign at NASA and JAXA will be discussed during this session.
Valjevac, Salih; Ridjanovic, Zoran; Masic, Izet
2009-01-01
Introduction The Agency for healthcare quality and accreditation in the Federation of Bosnia and Herzegovina (AKAZ) is the authorized body in the field of healthcare quality and safety improvement and accreditation of healthcare institutions. Beside accreditation standards for hospitals and primary health care centers, AKAZ has also developed accreditation standards for family medicine teams. Methods Software development was primarily based on the Accreditation Standards for Family Medicine Teams. Seven chapters / topics (1. Physical factors; 2. Equipment; 3. Organization and Management; 4. Health promotion and illness prevention; 5. Clinical services; 6. Patient survey; and 7. Patient’s rights and obligations) contain 35 standards describing the expected level of a family medicine team’s quality. Based on the accreditation standards structure and the needs of different potential users, it was concluded that the software backbone should be a database containing all accreditation standards, self assessment and external assessment details. In this article we present the development of standardized software for self and external evaluation of quality of service in family medicine, as well as plans for the future development of this software package. Conclusion Electronic data gathering and storing enhances the management, access and overall use of information. During this project we came to the conclusion that software for self assessment and external assessment is ideal for accreditation standards distribution, their overview by the family medicine team members, their self assessment and external assessment. PMID:24109157
Perfecting scientists’ collaboration and problem-solving skills in the virtual team environment
USDA-ARS?s Scientific Manuscript database
Perfecting Scientists’ Collaboration and Problem-Solving Skills in the Virtual Team Environment Numerous factors have contributed to the proliferation of conducting work in virtual teams at the domestic, national, and global levels: innovations in technology, critical developments in software, co-lo...
Use of Dynamic Models and Operational Architecture to Solve Complex Navy Challenges
NASA Technical Reports Server (NTRS)
Grande, Darby; Black, J. Todd; Freeman, Jared; Sorber, TIm; Serfaty, Daniel
2010-01-01
The United States Navy established 8 Maritime Operations Centers (MOC) to enhance the command and control of forces at the operational level of warfare. Each MOC is a headquarters manned by qualified joint operational-level staffs, and enabled by globally interoperable C4I systems. To assess and refine MOC staffing, equipment, and schedules, a dynamic software model was developed. The model leverages pre-existing operational process architecture, joint military task lists that define activities and their precedence relations, as well as Navy documents that specify manning and roles per activity. The software model serves as a "computational wind-tunnel" in which to test a MOC on a mission, and to refine its structure, staffing, processes, and schedules. More generally, the model supports resource allocation decisions concerning Doctrine, Organization, Training, Materiel, Leadership, Personnel and Facilities (DOTMLPF) at MOCs around the world. A rapid prototype effort efficiently produced this software in less than five months, using an integrated process team consisting of MOC military and civilian staff, modeling experts, and software developers. The work reported here was conducted for Commander, United States Fleet Forces Command in Norfolk, Virginia, code N5-OLW (Operational Level of War), which facilitates the identification, consolidation, and prioritization of MOC capabilities requirements, and implementation and delivery of MOC solutions.
ULSGEN (Uplink Summary Generator)
NASA Technical Reports Server (NTRS)
Wang, Y.-F.; Schrock, M.; Reeve, T.; Nguyen, K.; Smith, B.
2014-01-01
Uplink is an important part of spacecraft operations. Ensuring the accuracy of uplink content is essential to mission success. Before commands are radiated to the spacecraft, the command and sequence must be reviewed and verified by various teams. In most cases, this process requires collecting the command data, reviewing the data during a command conference meeting, and providing physical signatures by designated members of various teams to signify approval of the data. If commands or sequences are disapproved for some reason, the whole process must be restarted. Recording data and decision history is important for traceability reasons. Given that many steps and people are involved in this process, an easily accessible software tool for managing the process is vital to reducing human error which could result in uplinking incorrect data to the spacecraft. An uplink summary generator called ULSGEN was developed to assist this uplink content approval process. ULSGEN generates a web-based summary of uplink file content and provides an online review process. Spacecraft operations personnel view this summary as a final check before actual radiation of the uplink data.
Space Shuttle Ascent Flight Design Process: Evolution and Lessons Learned
NASA Technical Reports Server (NTRS)
Picka, Bret A.; Glenn, Christopher B.
2011-01-01
The Space Shuttle Ascent Flight Design team is responsible for defining a launch to orbit trajectory profile that satisfies all programmatic mission objectives and defines the ground and onboard reconfiguration requirements for this high-speed and demanding flight phase. This design, verification and reconfiguration process ensures that all applicable mission scenarios are enveloped within integrated vehicle and spacecraft certification constraints and criteria, and includes the design of the nominal ascent profile and trajectory profiles for both uphill and ground-to-ground aborts. The team also develops a wide array of associated training, avionics flight software verification, onboard crew and operations facility products. These key ground and onboard products provide the ultimate users and operators the necessary insight and situational awareness for trajectory dynamics, performance and event sequences, abort mode boundaries and moding, flight performance and impact predictions for launch vehicle stages for use in range safety, and flight software performance. These products also provide the necessary insight to or reconfiguration of communications and tracking systems, launch collision avoidance requirements, and day of launch crew targeting and onboard guidance, navigation and flight control updates that incorporate the final vehicle configuration and environment conditions for the mission. Over the course of the Space Shuttle Program, ascent trajectory design and mission planning has evolved in order to improve program flexibility and reduce cost, while maintaining outstanding data quality. Along the way, the team has implemented innovative solutions and technologies in order to overcome significant challenges. A number of these solutions may have applicability to future human spaceflight programs.
NASA Astrophysics Data System (ADS)
Isnur Haryudo, Subuh; Imam Agung, Achmad; Firmansyah, Rifqi
2018-04-01
The purpose of this research is to develop learning media for control techniques using Matrix Laboratory software with an industry-requirements approach. Learning media serve as a tool for creating a better and more effective teaching and learning situation because they can accelerate the learning process and enhance the quality of learning. Control techniques taught with Matrix Laboratory software can increase the interest and attention of students, provide real experience, and foster an independent attitude. The research design follows research and development (R & D) methods that have been modified by a multi-disciplinary team of researchers. The research used a computer-based learning method consisting of a computer and Matrix Laboratory software integrated with physical props. Matrix Laboratory has the ability to visualize the theory and analysis of control systems, integrating computation, visualization and programming in a form that is easy to use. The result of this instructional media development is the use of mathematical equations in Matrix Laboratory software for a control system application with a DC motor plant and PID (Proportional-Integral-Derivative) control. This is relevant because PID control, implemented on Distributed Control Systems (DCSs), Programmable Controllers (PLCs), and Microcontrollers (MCUs), is widely used in industrial production processes.
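Since the learning media centre on PID control of a DC-motor plant, a worked example of the textbook discrete PID law may help. The gains, sample time, and toy first-order plant below are arbitrary illustration values, not those used in the described course materials (which are built in Matrix Laboratory rather than Python).

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a crude first-order motor model toward 100 rad/s.
pid, speed = PID(kp=0.8, ki=2.0, kd=0.01, dt=0.01), 0.0
for _ in range(2000):
    voltage = pid.update(100.0, speed)
    speed += (voltage - 0.1 * speed) * 0.01   # toy plant dynamics
print(round(speed, 1))                        # settles near the 100 rad/s setpoint
```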
Enhanced Training for Cyber Situational Awareness in Red versus Blue Team Exercises
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbajal, Armida J.; Stevens-Adams, Susan Marie; Silva, Austin Ray
This report summarizes research conducted through the Sandia National Laboratories Enhanced Training for Cyber Situational Awareness in Red Versus Blue Team Exercises Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding concerning how to best structure training for cyber defenders. Two modes of training were considered. The baseline training condition (Tool-Based training) was based on current practices where classroom instruction focuses on the functions of a software tool with various exercises in which students apply those functions. In the second training condition (Narrative-Based training), classroom instruction addressed software functions, but in the context of adversary tactics and techniques. It was hypothesized that students receiving narrative-based training would gain a deeper conceptual understanding of the software tools and this would be reflected in better performance within a red versus blue team exercise.
NASA Technical Reports Server (NTRS)
Irwin, Daniel E.
2004-01-01
The overall purpose of this training session is to familiarize Central American project cooperators with the remote sensing and image processing research that is being conducted by the NASA research team and to acquaint them with the data products being produced in the areas of Land Cover and Land Use Change and carbon modeling under the NASA SERVIR project. The training session, therefore, will be both informative and practical in nature. Specifically, the course will focus on the physics of remote sensing, various satellite and airborne sensors (Landsat, MODIS, IKONOS, Star-3i), processing techniques, and commercial off the shelf image processing software.
Cooperative Search by UAV Teams: A Model Predictive Approach Using Dynamic Graphs
2011-10-01
decentralized processing and control architecture. SLAMEM asset models accurately represent the Unicorn UAV platforms and other standard military platforms in...IMPLEMENTATION The CGBMPS algorithm has been successfully field-tested using both Unicorn [27] and Raven [20] UAV platforms. This section describes...the hardware-software system setup and implementation used for testing with Unicorns , Toyon’s UAV test platform. We also present some results from the
A Layered Solution for Supercomputing Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grider, Gary
To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.
Just-in-Time Technology to Encourage Incremental, Dietary Behavior Change
Intille, Stephen S.; Kukla, Charles; Farzanfar, Ramesh; Bakr, Waseem
2003-01-01
Our multi-disciplinary team is developing mobile computing software that uses “just-in-time” presentation of information to motivate behavior change. Using a participatory design process, preliminary interviews have helped us to establish 10 design goals. We have employed some to create a prototype of a tool that encourages better dietary decision making through incremental, just-in-time motivation at the point of purchase. PMID:14728379
NASA Astrophysics Data System (ADS)
Wasielewska, K.; Ganzha, M.
2012-10-01
In this paper we consider combining ontologically demarcated information with Saaty's Analytic Hierarchy Process (AHP) [1] for the multicriterial assessment of offers during contract negotiations. The context for the proposal is provided by the Agents in Grid project (AiG; [2]), which aims at the development of an agent-based infrastructure for efficient resource management in the Grid. In the AiG project, software agents representing users can either (1) join a team and earn money, or (2) find a team to execute a job. Moreover, agents form teams, whose managers negotiate terms of potential collaboration with clients and workers. Here, ontologically described contracts (Service Level Agreements) are the results of autonomous multiround negotiations. Therefore, taking into account the relatively complex nature of the negotiated contracts, multicriterial assessment of proposals plays a crucial role. The AHP method is based on pairwise comparisons of criteria and relies on the judgement of a panel of experts. It measures how well an offer serves the objective of a decision maker. In this paper, we propose how the AHP method can be used to assess ontologically described contract proposals.
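Purely as an illustration of the AHP step described above (not the AiG implementation), the following sketch derives criterion weights from a pairwise comparison matrix via its principal eigenvector and ranks hypothetical offers; the criteria, judgment values, and offer scores are invented for the example.

    import numpy as np

    # Hypothetical pairwise comparison matrix for three negotiation criteria
    # (price, availability, reliability); entry [i][j] says how strongly
    # criterion i is preferred over criterion j on Saaty's 1-9 scale.
    comparisons = np.array([
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 2.0],
        [1/5, 1/2, 1.0],
    ])

    # Criterion weights = normalized principal eigenvector of the matrix.
    eigvals, eigvecs = np.linalg.eig(comparisons)
    principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    weights = principal / principal.sum()

    # Hypothetical per-criterion scores (0..1) for two contract offers.
    offers = {"offer_A": np.array([0.6, 0.9, 0.4]),
              "offer_B": np.array([0.8, 0.5, 0.7])}

    # Rank offers by weighted score, i.e. how well each serves the objective.
    for name, scores in sorted(offers.items(),
                               key=lambda kv: -float(weights @ kv[1])):
        print(f"{name}: {float(weights @ scores):.3f}")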
Application of parallelized software architecture to an autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Shakya, Rahul; Wright, Adam; Shin, Young Ho; Momin, Orko; Petkovsek, Steven; Wortman, Paul; Gautam, Prasanna; Norton, Adam
2011-01-01
This paper presents improvements made to Q, an autonomous ground vehicle designed to participate in the Intelligent Ground Vehicle Competition (IGVC). For the 2010 IGVC, Q was upgraded with a new parallelized software architecture and a new vision processor. Improvements were made to the power system, reducing the number of batteries required for operation from six to one. In previous years, a single state machine was used to execute the bulk of processing activities including sensor interfacing, data processing, path planning, navigation algorithms and motor control. This inefficient approach led to poor software performance and made the system difficult to maintain or modify. For IGVC 2010, the team implemented a modular parallel architecture using the National Instruments (NI) LabVIEW programming language. The new architecture divides all the necessary tasks - motor control, navigation, sensor data collection, etc. - into well-organized components that execute in parallel, providing considerable flexibility and facilitating efficient use of processing power. Computer vision is used to detect white lines on the ground and determine their location relative to the robot. With the new vision processor and some optimization of the image processing algorithm used last year, two frames can be acquired and processed in 70 ms. With all these improvements, Q placed 2nd in the autonomous challenge.
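The competition entry itself was written in LabVIEW; the sketch below is only a language-neutral illustration of the modular parallel idea, with hypothetical sensor-collection, navigation, and motor-control components running as independent threads that exchange data through queues.

    import queue, threading, time

    sensor_q = queue.Queue()   # sensor data -> navigation
    command_q = queue.Queue()  # navigation -> motor control

    def sensor_collection():
        # Stand-in for camera/GPS acquisition running at its own rate.
        for i in range(5):
            sensor_q.put({"frame": i, "line_offset_m": 0.1 * i})
            time.sleep(0.07)            # roughly 70 ms per processed frame
        sensor_q.put(None)              # shutdown signal

    def navigation():
        # Consumes sensor data, plans a heading, emits motor commands.
        while (reading := sensor_q.get()) is not None:
            command_q.put({"steer": -reading["line_offset_m"], "speed": 1.0})
        command_q.put(None)

    def motor_control():
        while (cmd := command_q.get()) is not None:
            print(f"steer={cmd['steer']:+.2f} speed={cmd['speed']:.1f}")

    threads = [threading.Thread(target=f)
               for f in (sensor_collection, navigation, motor_control)]
    for t in threads: t.start()
    for t in threads: t.join()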
Florida alternative NTCIP testing software (ANTS) for actuated signal controllers.
DOT National Transportation Integrated Search
2009-01-01
The scope of this research project included the development of a software tool to test devices for NTCIP compliance. The Florida Alternative NTCIP Testing Software (ANTS) was developed by the research team due to limitations found w...
Level 1 Processing of MODIS Direct Broadcast Data From Terra
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Smith, Peter; Shotland, Larry; El-Ghazawi, Tarek; Zhu, Ming
2000-01-01
In February 2000, an effort was begun to adapt the Moderate Resolution Imaging Spectroradiometer (MODIS) Level 1 production software to process direct broadcast data. Three Level 1 algorithms have been adapted and packaged for release: Level 1A converts raw (level 0) data into Hierarchical Data Format (HDF), unpacking packets into scans; Geolocation computes geographic information for the data points in the Level 1A; and the Level 1B computes geolocated, calibrated radiances from the Level 1A and Geolocation products. One useful aspect of adapting the production software is the ability to incorporate enhancements contributed by the MODIS Science Team. We have therefore tried to limit changes to the software. However, in order to process the data immediately on receipt, we have taken advantage of a branch in the geolocation software that reads orbit and attitude information from the packets themselves, rather than external ancillary files used in standard production. We have also verified that the algorithms can be run with smaller time increments (2.5 minutes) than the five-minute increments used in production. To make the code easier to build and run, we have simplified directories and build scripts. Also, dependencies on a commercial numerics library have been replaced by public domain software. A version of the adapted code has been released for Silicon Graphics machines running IRIX. Perhaps owing to its origin in production, the software is rather CPU-intensive. Consequently, a port to Linux is underway, followed by a version to run on PC clusters, with an eventual goal of running in near-real-time (i.e., process a ten-minute pass in ten minutes).
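As a rough sketch of the processing chain described above (Level 0 packets to Level 1A, Geolocation, and Level 1B), consider the following; the function bodies are placeholders and do not represent the MODIS science algorithms.

    # Hypothetical skeleton of the direct-broadcast Level 1 chain; each stage
    # stands in for the corresponding MODIS production algorithm.

    def level_1a(level0_packets):
        """Unpack raw packets into scans (stand-in for the L1A algorithm)."""
        return [{"scan": i, "raw": pkt} for i, pkt in enumerate(level0_packets)]

    def geolocation(l1a_scans):
        """Attach lat/lon to each scan, using orbit/attitude from the packets."""
        return [dict(scan, lat=0.0, lon=float(scan["scan"])) for scan in l1a_scans]

    def level_1b(l1a_scans, geo):
        """Compute calibrated, geolocated radiances (placeholder calibration)."""
        return [{"scan": s["scan"], "lat": g["lat"], "lon": g["lon"],
                 "radiance": s["raw"] * 0.01} for s, g in zip(l1a_scans, geo)]

    def process_pass(level0_packets, granule_minutes=2.5):
        # Process in short granules (e.g. 2.5 min) so data can be handled on receipt.
        l1a = level_1a(level0_packets)
        geo = geolocation(l1a)
        return level_1b(l1a, geo)

    print(process_pass([100, 101, 102]))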
Multimedia software to help caregivers cope.
Chambers, Mary G; Connor, Samantha L; McGonigle, Mary; Diver, Mike G
2003-01-01
This report describes the design and evaluation of a software application to help carers cope when faced with caring problems and emergencies. The design process involved users at each stage to ensure the content of the software application was appropriate, and the research team carefully considered the requirements of disabled and elderly users. Focus group discussions and individual interviews were conducted in five European countries to ascertain the needs of caregivers in this area. The findings were used to design a three-part multimedia software application to help family caregivers prepare to cope with sudden, unexpected, and difficult situations that may arise during their time as a caregiver. This prototype then was evaluated via user trials and usability questionnaires to consider the usability and acceptance of the application and any changes that may be required. User acceptance of the software application was high, and the key features of usability such as content, appearance, and navigation were highly rated. In general, comments were positive and enthusiastic regarding the content of the software application and relevance to the caring situation. The software application has the potential to offer information and support to those who are caring for the elderly and disabled at home and to help them prepare for a crisis.
Domain specific software architectures: Command and control
NASA Technical Reports Server (NTRS)
Braun, Christine; Hatch, William; Ruegsegger, Theodore; Balzer, Bob; Feather, Martin; Goldman, Neil; Wile, Dave
1992-01-01
GTE is the Command and Control contractor for the Domain Specific Software Architectures program. The objective of this program is to develop and demonstrate an architecture-driven, component-based capability for the automated generation of command and control (C2) applications. Such a capability will significantly reduce the cost of C2 applications development and will lead to improved system quality and reliability through the use of proven architectures and components. A major focus of GTE's approach is the automated generation of application components in particular subdomains. Our initial work in this area has concentrated in the message handling subdomain; we have defined and prototyped an approach that can automate one of the most software-intensive parts of C2 systems development. This paper provides an overview of the GTE team's DSSA approach and then presents our work on automated support for message processing.
2014-08-15
CAPE CANAVERAL, Fla. – Florida middle school students and their teachers greet students from other locations via webex before the start of the Zero Robotics finals competition. The Florida teams are at the Space Station Processing Facility at NASA's Kennedy Space Center in Florida. Students designed software to control Synchronized Position Hold Engage and Reorient Experimental Satellites, or SPHERES, and competed with other teams locally. Zero Robotics is a robotics programming competition where the robots are SPHERES. The competition starts online, where teams program the SPHERES to solve an annual challenge. After several phases of virtual competition in a simulation environment that mimics the real SPHERES, finalists are selected to compete in a live championship aboard the space station. Students compete to win a technically challenging game by programming their strategies into the SPHERES satellites. The programs are autonomous and the students cannot control the satellites during the test. Photo credit: NASA/Daniel Casper
2014-08-15
CAPE CANAVERAL, Fla. – Florida middle school students and their teachers watch the Zero Robotics finals competition broadcast live via webex from the International Space Station. The Florida teams are at the Space Station Processing Facility at NASA's Kennedy Space Center in Florida. Students designed software to control Synchronized Position Hold Engage and Reorient Experimental Satellites, or SPHERES, and competed with other teams locally. Zero Robotics is a robotics programming competition where the robots are SPHERES. The competition starts online, where teams program the SPHERES to solve an annual challenge. After several phases of virtual competition in a simulation environment that mimics the real SPHERES, finalists are selected to compete in a live championship aboard the space station. Students compete to win a technically challenging game by programming their strategies into the SPHERES satellites. The programs are autonomous and the students cannot control the satellites during the test. Photo credit: NASA/Daniel Casper
Team Expo: A State-of-the-Art JSC Advanced Design Team
NASA Technical Reports Server (NTRS)
Tripathi, Abhishek
2001-01-01
In concert with the NASA-wide Intelligent Synthesis Environment Program, the Exploration Office at the Johnson Space Center has assembled an Advanced Design Team. The purpose of this team is two-fold. The first is to identify, use, and develop software applications, tools, and design processes that streamline and enhance a collaborative engineering environment. The second is to use this collaborative engineering environment to produce conceptual, system-level-of-detail designs in a relatively short turnaround time, using a standing team of systems and integration experts. This includes running rapid trade studies on varying mission architectures, as well as producing vehicle and/or subsystem designs. The standing core team is made up of experts from all of the relevant engineering divisions (e.g. Power, Thermal, Structures, etc.) as well as representatives from Risk and Safety, Mission Operations, and Crew Life Sciences among others. The Team works together during 2-hour sessions in the same specially enhanced room to ensure real-time integration/identification of cross-disciplinary issues and solutions. All subsystem designs are collectively reviewed and approved during these same sessions. In addition there is an Information sub-team that captures and formats all data and makes it accessible for use by the following day. The result is Team Expo: an Advanced Design Team that is leading the change from a philosophy of "over the fence" design to one of collaborative engineering that pushes the envelope to achieve the next-generation analysis and design environment.
U.S. Team Green Building Challenge 2002
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2002-09-01
Flier about the U.S. Team and its projects participating in the International Green Building Challenge. Along with many other countries, the United States accepted the Green Building Challenge (GBC), an international effort to evaluate and improve the performance of buildings worldwide. GBC started out in 1996 as a competition to determine which country had the greenest buildings; it evolved into a cooperative process among the countries to measure the performance of green buildings. Although the auto industry can easily measure efficiency in terms of miles per gallon, the buildings industry has no standard way to quantify energy and environmental performance. The Green Building Challenge participants hope that better tools for measuring the energy and environmental performance of buildings will be an outcome of their efforts and that these tools will lead to higher and better performance levels in buildings around the world. The ultimate goal is to design, construct, and operate buildings that contribute to global sustainability by conserving and/or regenerating natural resources and minimizing nonrenewable energy use. The United States' Green Building Challenge Team '02 selected five buildings from around the country to serve as case studies; each of the five U.S. building designs (as well as all international case studies) was assessed using an in-depth evaluation tool, called the Green Building Assessment Tool (GBTool). The GBTool was specifically created and refined by international teams for the GBC efforts. The goal of this collaborative effort is to improve this evaluation software tool so that it can be used globally, while taking into account regional and national conditions. The GBTool was used by the U.S. Team to assess and evaluate the energy and environmental performance of these five buildings: (1) Retail (in operation): BigHorn Home Improvement Center, Silverthorne, Colorado; (2) Office (in operation), Philip Merrill Environmental; (3) School (in construction), Clearview Elementary School, Hanover, Pennsylvania; (4) Multi-family residential (in construction), Twenty River Terrace, Battery Park City, New York; and (5) Office/lab (in design), National Oceanic Atmospheric Administration, Honolulu, Hawaii. These projects were selected, not only because they were good examples of high-performance buildings and had interested owners/design team members, but also because building data was available as inputs to test the software tool. Both the tool and the process have been repeatedly refined and enhanced since the first Green Building Challenge event in 1998; participating countries are continuously providing feedback to further improve the tool and global process for the greatest positive effect.
GLAS Long-Term Archive: Preservation and Stewardship for a Vital Earth Observing Mission
NASA Astrophysics Data System (ADS)
Fowler, D. K.; Moses, J. F.; Zwally, J.; Schutz, B. E.; Hancock, D.; McAllister, M.; Webster, D.; Bond, C.
2012-12-01
Data stewardship, preservation, and reproducibility are fast becoming principal parts of a data manager's work. In an era of distributed data and information systems, it is of vital importance that organizations make a commitment to both current and long-term goals of data management and the preservation of scientific data. Satellite missions and instruments go through a lifecycle that involves pre-launch calibration, on-orbit data acquisition and product generation, and final reprocessing. Data products and descriptions flow to the archives for distribution on a regular basis during the active part of the mission. However, there is additional information from the product generation and science teams needed to ensure the observations will be useful for long term climate studies. Examples include ancillary input datasets, product generation software, and production history as developed by the team during the course of product generation. These data and information will need to be archived after product data processing is completed. NASA has developed a set of Earth science data and information content requirements for long term preservation that is being used for all the EOS missions as they come to completion. Since the ICESat/GLAS mission was one of the first to end, NASA and NSIDC, in collaboration with the science team, are collecting data, software, and documentation, preparing for long-term support of the ICESat mission. For a long-term archive, it is imperative to preserve sufficient information about how products were prepared in order to assure future researchers that the scientific results are accurate, understandable, and useable. Our experience suggests data centers know what to preserve in most cases. That is, the processing algorithms along with the Level 0 or Level 1a input and ancillary products used to create the higher-level products will be archived and made available to users. In other cases, such as pre-launch, calibration/validation, and test data, the data centers must seek guidance from the science team. All these data are essential for product provenance, contributing to and helping establish the integrity of the scientific observations for long term climate studies. In this presentation we will describe the application of information gathering with guidance from the ICESat/GLAS Science Team, and the flow of additional information from the ICESat Science team and Science Investigator-Led Processing System to the NSIDC Distributed Active Archive Center. This presentation will also cover how we envision user support through the years of the Long-Term Archive.
Biotechnology software in the digital age: are you winning?
Scheitz, Cornelia Johanna Franziska; Peck, Lawrence J; Groban, Eli S
2018-01-16
There is a digital revolution taking place and biotechnology companies are slow to adapt. Many pharmaceutical, biotechnology, and industrial bio-production companies believe that software must be developed and maintained in-house and that data are more secure on internal servers than on the cloud. In fact, most companies in this space continue to employ large IT and software teams and acquire computational infrastructure in the form of in-house servers. This is due to a fear of the cloud not sufficiently protecting in-house resources and the belief that their software is valuable IP. Over the next decade, the ability to quickly adapt to changing market conditions, with agile software teams, will quickly become a compelling competitive advantage. Biotechnology companies that do not adopt the new regime may lose on key business metrics such as return on invested capital, revenue, profitability, and eventually market share.
EOS MLS Science Data Processing System: A Description of Architecture and Capabilities
NASA Technical Reports Server (NTRS)
Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.
2006-01-01
This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.
User systems guidelines for software projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abrahamson, L.
1986-04-01
This manual presents guidelines for software standards which were developed so that software project-development teams and management involved in approving the software could have a generalized view of all phases in the software production procedure and the steps involved in completing each phase. Guidelines are presented for six phases of software development: project definition, building a user interface, designing software, writing code, testing code, and preparing software documentation. The discussions for each phase include examples illustrating the recommended guidelines. 45 refs. (DWL)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yixing; Zhang, Jianshun; Pelken, Michael
Executive Summary The objective of this study was to develop a “Virtual Design Studio (VDS)”: a software platform for integrated, coordinated and optimized design of green building systems with low energy consumption, high indoor environmental quality (IEQ), and high level of sustainability. This VDS is intended to assist collaborating architects, engineers and project management team members throughout, from the early phases to the detailed building design stages. It can be used to plan design tasks and workflow, and evaluate the potential impacts of various green building strategies on the building performance by using state-of-the-art simulation tools as well as industrial/professional standards and guidelines for green building system design. Engaged in the development of VDS was a multi-disciplinary research team that included architects, engineers, and software developers. Based on the review and analysis of how existing professional practices in building systems design operate, particularly those used in the U.S., Germany and UK, a generic process for performance-based building design, construction and operation was proposed. It divides the whole process into five distinct stages: Assess, Define, Design, Apply, and Monitoring (ADDAM). The current VDS is focused on the first three stages. The VDS considers building design as a multi-dimensional process, involving multiple design teams, design factors, and design stages. The intersection among these three dimensions defines a specific design task in terms of “who”, “what” and “when”. It also considers building design as a multi-objective process that aims to enhance the five aspects of performance for green building systems: site sustainability, materials and resource efficiency, water utilization efficiency, energy efficiency and impacts to the atmospheric environment, and IEQ. The current VDS development has been limited to energy efficiency and IEQ performance, with particular focus on evaluating thermal performance, air quality and lighting environmental quality because of their strong interaction with the energy performance of buildings. The VDS software framework contains four major functions: 1) Design coordination: It enables users to define tasks using the Input-Process-Output flow approach, which specifies the anticipated activities (i.e., the process), required input and output information, and anticipated interactions with other tasks. It also allows task scheduling to define the work flow, and sharing of the design data and information via the internet. 2) Modeling and simulation: It enables users to perform building simulations to predict the energy consumption and IEQ conditions at any of the design stages by using EnergyPlus and a combined heat, air, moisture and pollutant simulation (CHAMPS) model. A method for co-simulation was developed to allow the use of both models at the same time step for the combined energy and indoor air quality analysis. 3) Results visualization: It enables users to display a 3-D geometric design of the building by reading the BIM (building information model) file generated by design software such as SketchUp, and the predicted results of heat, air, moisture, pollutant and light distributions in the building.
4) Performance evaluation: It enables the users to compare the performance of a proposed building design against a reference building that is defined for the same type of buildings under the same climate condition, and predicts the percent of improvements over the minimum requirements specified in ASHRAE Standard 55-2010, 62.1-2010 and 90.1-2010. An approach was developed to estimate the potential impact of a design factor on the whole building performance, and hence can assist the user in identifying areas that offer the most payback for investment. The VDS software was developed by using C++ with the conventional Model, View and Controller (MVC) software architecture. The software has been verified by using a simple 3-zone case building. The application of the VDS concepts and framework for building design and performance analysis has been illustrated by using a medium-sized, five story office building that received LEED Platinum Certification from USGBC.
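To illustrate the co-simulation idea mentioned under function 2 (running the energy model and the indoor air quality model in lock step and exchanging state every time step), here is a minimal, hypothetical loop; it uses placeholder models rather than EnergyPlus or CHAMPS, and all coefficients are invented.

    # Minimal sketch of lock-step co-simulation between an energy model and an
    # indoor air quality (IAQ) model, exchanging state every time step.

    def energy_step(zone_temp_c, outdoor_temp_c, supply_airflow_m3s):
        # Placeholder: zone temperature relaxes toward outdoor temperature and
        # drops slightly with more ventilation; heating holds it near 21 C.
        new_temp = (zone_temp_c + 0.1 * (outdoor_temp_c - zone_temp_c)
                    - 2.0 * supply_airflow_m3s)
        heating_w = max(0.0, 21.0 - new_temp) * 500.0
        return new_temp, heating_w

    def iaq_step(co2_ppm, occupants, supply_airflow_m3s):
        # Placeholder: CO2 rises with occupancy and falls with ventilation.
        return co2_ppm + 5.0 * occupants - 50.0 * supply_airflow_m3s

    zone_temp, co2 = 20.0, 600.0
    for step in range(4):                      # e.g. four 15-minute steps
        airflow = 0.5 if co2 > 800 else 0.2    # IAQ result feeds back to energy
        zone_temp, heating = energy_step(zone_temp, outdoor_temp_c=5.0,
                                         supply_airflow_m3s=airflow)
        co2 = iaq_step(co2, occupants=4, supply_airflow_m3s=airflow)
        print(f"step {step}: T={zone_temp:.1f}C heat={heating:.0f}W CO2={co2:.0f}ppm")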
NASA Astrophysics Data System (ADS)
Tomlin, M. C.; Jenkyns, R.
2015-12-01
Ocean Networks Canada (ONC) collects data from observatories in the northeast Pacific, Salish Sea, Arctic Ocean, Atlantic Ocean, and land-based sites in British Columbia. Data are streamed, collected autonomously, or transmitted via satellite from a variety of instruments. The Software Engineering group at ONC develops and maintains Oceans 2.0, an in-house software system that acquires and archives data from sensors, and makes data available to scientists, the public, government and non-government agencies. The Oceans 2.0 workflow tool was developed by ONC to manage a large volume of tasks and processes required for instrument installation, recovery and maintenance activities. Since 2013, the workflow tool has supported 70 expeditions and grown to include 30 different workflow processes for the increasing complexity of infrastructures at ONC. The workflow tool strives to keep pace with an increasing heterogeneity of sensors, connections and environments by supporting versioning of existing workflows, and allowing the creation of new processes and tasks. Despite challenges in training and gaining mutual support from multidisciplinary teams, the workflow tool has become invaluable in project management in an innovative setting. It provides a collective place to contribute to ONC's diverse projects and expeditions and encourages more repeatable processes, while promoting interactions between the multidisciplinary teams who manage various aspects of instrument development and the data they produce. The workflow tool inspires documentation of terminologies and procedures, and effectively links to other tools at ONC such as JIRA, Alfresco and Wiki. Motivated by growing sensor schemes, modes of collecting data, archiving, and data distribution at ONC, the workflow tool ensures that infrastructure is managed completely from instrument purchase to data distribution. It integrates all areas of expertise and helps fulfill ONC's mandate to offer quality data to users.
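Oceans 2.0 and its workflow tool are in-house systems, so the snippet below is only a generic sketch of how a versioned workflow process with ordered tasks might be represented; the class and task names are invented.

    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        assignee: str
        done: bool = False

    @dataclass
    class WorkflowProcess:
        name: str              # e.g. "instrument installation"
        version: int           # versioning lets existing workflows evolve
        tasks: list[Task] = field(default_factory=list)

        def next_task(self):
            return next((t for t in self.tasks if not t.done), None)

    wf = WorkflowProcess("instrument installation", version=3, tasks=[
        Task("record purchase and serial number", "operations"),
        Task("bench test and calibration", "instrumentation team"),
        Task("deploy on expedition", "marine operations"),
        Task("verify data acquisition and archiving", "data team"),
    ])
    wf.tasks[0].done = True
    print(f"{wf.name} v{wf.version}: next -> {wf.next_task().name}")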
ToxPredictor: a Toxicity Estimation Software Tool
The Computational Toxicology Team within the National Risk Management Research Laboratory has developed a software tool that will allow the user to estimate the toxicity for a variety of endpoints (such as acute aquatic toxicity). The software tool is coded in Java and can be ac...
NASA Technical Reports Server (NTRS)
Clark, David A.
1998-01-01
In light of the escalation of terrorism, the Department of Defense spearheaded the development of new antiterrorist software for all Government agencies by issuing a Broad Agency Announcement to solicit proposals. This Government-wide competition resulted in a team that includes NASA Lewis Research Center's Computer Services Division, who will develop the graphical user interface (GUI) and test it in their usability lab. The team launched a program entitled Joint Sphere of Security (JSOS), crafted a design architecture, and is testing the interface. This software system has a state-of-the-art, object-oriented architecture, with a main kernel composed of the Dynamic Information Architecture System (DIAS) developed by Argonne National Laboratory. DIAS will be used as the software "breadboard" for assembling the components of explosions, such as blast and collapse simulations.
Impact of agile methodologies on team capacity in automotive radio-navigation projects
NASA Astrophysics Data System (ADS)
Prostean, G.; Hutanu, A.; Volker, S.
2017-01-01
The development processes used in automotive radio-navigation projects are constantly under adaptation pressure. While the software development models are based on automotive production processes, the integration of peripheral components into an automotive system will trigger a high number of requirement modifications. The use of traditional development models in the automotive industry will bring a team's development capacity to its limits. The root cause lies in the inflexibility of current processes and their adaptation limits. This paper addresses a new project management approach for the development of radio-navigation projects. Understanding the weaknesses of currently used models helped us develop and integrate agile methodologies into the traditional development model structure. In the first part we focus on change management methods to reduce the inflow of change requests. Established change management risk analysis processes enable project management to judge the impact of a requirement change and also give the project time to implement some changes. However, in big automotive radio-navigation projects the saved time is not enough to implement the large number of changes submitted to the project. In the second part of this paper we focus on increasing team capacity by integrating agile methodologies into the traditional model at critical project phases. The overall objective of this paper is to demonstrate the need for process adaptation in order to solve project team capacity bottlenecks.
Shaping Software Engineering Curricula Using Open Source Communities: A Case Study
ERIC Educational Resources Information Center
Bowring, James; Burke, Quinn
2016-01-01
This paper documents four years of a novel approach to teaching a two-course sequence in software engineering as part of the ABET-accredited computer science curriculum at the College of Charleston. This approach is team-based and centers on learning software engineering in the context of open source software projects. In the first course, teams…
Zero to Integration in Eight Months, the Dawn Ground Data System Engineering Challenge
NASA Technical Reports Server (NTRS)
Dubon, Lydia P.
2006-01-01
The Dawn Project has presented the Ground Data System (GDS) with technical challenges driven by cost and schedule constraints commonly associated with National Aeronautics and Space Administration (NASA) Discovery Projects. The Dawn mission consists of a new and exciting Deep Space partnership among the Jet Propulsion Laboratory (JPL), responsible for project management and flight operations; Orbital Sciences Corporation (OSC), spacecraft builder and responsible for flight system test and integration; and the University of California, Los Angeles (UCLA), responsible for science planning and operations. As a cost-capped mission, one of Dawn's implementation strategies is to leverage both flight and ground heritage. OSC's ground data system is used for flight system test and integration as part of the flight heritage strategy. Mission operations, however, are to be conducted with JPL's ground system. The system engineering challenge of dealing with two heterogeneous ground systems emerged immediately. During the first technical interchange meeting between JPL's GDS Team and OSC's Flight Software Team, August 2003, the need to integrate the ground system with the flight software was brought to the table. This need was driven by the project's commitment to enable instrument engineering model integration in a spacecraft simulator environment, for both demonstration and risk mitigation purposes, by April 2004. This paper will describe the system engineering approach that was undertaken by JPL's GDS Team in order to meet the technical challenge within a non-negotiable eight-month schedule. Key to the success was adherence to an overall systems engineering process and fundamental systems engineering practices: decomposition of the project request into manageable requirements; definition of a structured yet flexible development process; integration of multiple ground disciplines and experts into a focused team effort; in-process risk management; and aggregation of the intermediate products into an integrated final product. In addition, this paper will highlight the role of lessons learned from the integration experience. The lessons learned from an early GDS deployment have served as the foundation for the design and implementation of the Dawn Ground Data System.
NASA Technical Reports Server (NTRS)
Maidel, Veronica; Stanton, Jeffrey M.
2010-01-01
This document contains a literature review suggesting that research on industrial performance monitoring has limited value in assessing, understanding, and predicting team functioning in the context of space flight missions. The review indicates that a more relevant area of research explores the effectiveness of teams and how team effectiveness may be predicted through the elicitation of individual and team mental models. Note that the mental models referred to in this literature typically reflect a shared operational understanding of a mission setting such as the cockpit controls and navigational indicators on a flight deck. In principle, however, mental models also exist pertaining to the status of interpersonal relations on a team, collective beliefs about leadership, success in coordination, and other aspects of team behavior and cognition. Pursuing this idea, the second part of this document provides an overview of available off-the-shelf products that might assist in extraction of mental models and elicitation of emotions based on an analysis of communicative texts among mission personnel. The search for text analysis software or tools revealed no available tools to enable extraction of mental models automatically, relying only on collected communication text. Nonetheless, using existing software to analyze how a team is functioning may be relevant for selection or training, when human experts are immediately available to analyze and act on the findings. Alternatively, if output can be sent to the ground periodically and analyzed by experts on the ground, then these software packages might be employed during missions as well. A demonstration of two text analysis software applications is presented. Another possibility explored in this document is the option of collecting biometric and proxemic measures such as keystroke dynamics and interpersonal distance in order to expose various individual or dyadic states that may be indicators or predictors of certain elements of team functioning. This document summarizes interviews conducted with personnel currently involved in observing or monitoring astronauts or who are in charge of technology that allows communication and monitoring. The objective of these interviews was to elicit their perspectives on monitoring team performance during long-duration missions and the feasibility of potential automatic non-obtrusive monitoring systems. Finally, in the last section, the report describes several priority areas for research that can help transform team mental models, biometrics, and/or proxemics into workable systems for unobtrusive monitoring of space flight team effectiveness. Conclusions from this work suggest that unobtrusive monitoring of space flight personnel is likely to be a valuable future tool for assessing team functioning, but that several research gaps must be filled before prototype systems can be developed for this purpose.
An Overview of the JPSS Ground Project Algorithm Integration Process
NASA Astrophysics Data System (ADS)
Vicente, G. A.; Williams, R.; Dorman, T. J.; Williamson, R. C.; Shaw, F. J.; Thomas, W. M.; Hung, L.; Griffin, A.; Meade, P.; Steadley, R. S.; Cember, R. P.
2015-12-01
The smooth transition, implementation and operationalization of scientific software from the National Oceanic and Atmospheric Administration (NOAA) development teams to the Joint Polar Satellite System (JPSS) Ground Segment requires a variety of experience and expertise. This task has been accomplished by a dedicated group of scientists and engineers working in close collaboration with the NOAA Satellite and Information Services (NESDIS) Center for Satellite Applications and Research (STAR) science teams for the JPSS/Suomi-NPOESS Preparatory Project (S-NPP) Advanced Technology Microwave Sounder (ATMS), Cross-track Infrared Sounder (CrIS), Visible Infrared Imaging Radiometer Suite (VIIRS) and Ozone Mapping and Profiler Suite (OMPS) instruments. The purpose of this presentation is to describe the JPSS project process for algorithm implementation, from the very early delivery stages by the science teams to full operationalization in the Interface Data Processing Segment (IDPS), the processing system that provides Environmental Data Records (EDRs) to NOAA. Special focus is given to the NASA Data Products Engineering and Services (DPES) Algorithm Integration Team (AIT) functional and regression test activities. In the functional testing phase, the AIT uses one or a few specific chunks of data (granules) selected by the NOAA STAR Calibration and Validation (cal/val) Teams to demonstrate that a small change in the code performs properly and does not disrupt the rest of the algorithm chain. In the regression testing phase, the modified code is placed into the Government Resources for Algorithm Verification, Integration, Test and Evaluation (GRAVITE) Algorithm Development Area (ADA), a simulated and smaller version of the operational IDPS. Baseline files are swapped out, not edited, and the whole code package runs on one full orbit of Science Data Records (SDRs) using Calibration Look Up Tables (Cal LUTs) for the time of the orbit. The purpose of the regression test is to identify unintended outcomes. Overall, the presentation provides a general and easy-to-follow overview of the JPSS Algorithm Change Process (ACP) and is intended to facilitate the audience's understanding of a very extensive and complex process.
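As a loose illustration of the regression-testing idea (swap baseline files, rerun a full orbit, then compare the outputs of the two runs to spot unintended changes), the sketch below diffs two directories of hypothetical product files; it is not the GRAVITE/ADA tooling, and the file pattern is an assumption.

    import hashlib, pathlib

    def checksum(path: pathlib.Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def regression_diff(baseline_dir: str, candidate_dir: str):
        """Report products whose contents changed between two processing runs."""
        base = {p.name: checksum(p) for p in pathlib.Path(baseline_dir).glob("*.h5")}
        cand = {p.name: checksum(p) for p in pathlib.Path(candidate_dir).glob("*.h5")}
        changed = sorted(n for n in base if n in cand and base[n] != cand[n])
        missing = sorted(set(base) - set(cand))
        new = sorted(set(cand) - set(base))
        return {"changed": changed, "missing": missing, "new": new}

    # Hypothetical usage after running one full orbit of SDRs through each code version:
    # print(regression_diff("run_baseline/", "run_candidate/"))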
Desiderata for a Computer-Assisted Audit Tool for Clinical Data Source Verification Audits
Duda, Stephany N.; Wehbe, Firas H.; Gadd, Cynthia S.
2013-01-01
Clinical data auditing often requires validating the contents of clinical research databases against source documents available in health care settings. Currently available data audit software, however, does not provide features necessary to compare the contents of such databases to source data in paper medical records. This work enumerates the primary weaknesses of using paper forms for clinical data audits and identifies the shortcomings of existing data audit software, as informed by the experiences of an audit team evaluating data quality for an international research consortium. The authors propose a set of attributes to guide the development of a computer-assisted clinical data audit tool to simplify and standardize the audit process. PMID:20841814
Wireless Sensor Networks for Developmental and Flight Instrumentation
NASA Technical Reports Server (NTRS)
Alena, Richard; Figueroa, Fernando; Becker, Jeffrey; Foster, Mark; Wang, Ray; Gamudevelli, Suman; Studor, George
2011-01-01
Wireless sensor networks (WSN) based on the IEEE 802.15.4 Personal Area Network and ZigBee Pro 2007 standards are finding increasing use in home automation and smart energy markets providing a framework for interoperable software. The Wireless Connections in Space Project, funded by the NASA Engineering and Safety Center, is developing technology, metrics and requirements for next-generation spacecraft avionics incorporating wireless data transport. The team from Stennis Space Center and Mobitrum Corporation, working under a NASA SBIR grant, has developed techniques for embedding plug-and-play software into ZigBee WSN prototypes implementing the IEEE 1451 Transducer Electronic Datasheet (TEDS) standard. The TEDS provides meta-information regarding sensors such as serial number, calibration curve and operational status. Incorporation of TEDS into wireless sensors leads directly to building application level software that can recognize sensors at run-time, dynamically instantiating sensors as they are added or removed. The Ames Research Center team has been experimenting with this technology building demonstration prototypes for on-board health monitoring. Innovations in technology, software and process can lead to dramatic improvements for managing sensor systems applied to Developmental and Flight Instrumentation (DFI) aboard aerospace vehicles. A brief overview of the plug-and-play ZigBee WSN technology is presented along with specific targets for application within the aerospace DFI market. The software architecture for the sensor nodes incorporating the TEDS information is described along with the functions of the Network Capable Gateway processor which bridges 802.15.4 PAN to the TCP/IP network. Client application software connects to the Gateway and is used to display TEDS information and real-time sensor data values updated every few seconds, incorporating error detection and logging to help measure performance and reliability in relevant target environments. Test results from our prototype WSN running the Mobitrum software system are summarized and the implications to the scalability and reliability for DFI applications are discussed. Our demonstration system, incorporating sensors for life support system and structural health monitoring, is described along with test results obtained by running the demonstration prototype in relevant environments such as the Wireless Habitat Testbed at Johnson Space Center in Houston. An operations concept for improved sensor process flow from design to flight test is outlined specific to the areas of Environmental Control and Life Support System performance characterization and structural health monitoring of human-rated spacecraft. This operations concept will be used to highlight the areas where WSN technology, particularly plug-and-play software based on IEEE 1451, can improve the current process, resulting in significant reductions in the technical effort, overall cost and schedule for providing DFI capability for future spacecraft.
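A minimal sketch of the plug-and-play idea, reading an IEEE 1451-style TEDS record at run time and instantiating a sensor from it: the field names and linear calibration format are simplified inventions for illustration, not the IEEE 1451 binary layout or the Mobitrum software.

    from dataclasses import dataclass

    @dataclass
    class Teds:
        serial_number: str
        sensor_type: str           # e.g. "temperature", "strain"
        units: str
        calibration: tuple         # (slope, offset) for a simplified linear curve
        status: str = "operational"

    class Sensor:
        def __init__(self, teds: Teds):
            self.teds = teds

        def engineering_value(self, raw_counts: int) -> float:
            slope, offset = self.teds.calibration
            return slope * raw_counts + offset

    registry = {}

    def on_node_joined(teds: Teds):
        """Called when a wireless node announces itself; instantiate dynamically."""
        registry[teds.serial_number] = Sensor(teds)
        print(f"registered {teds.sensor_type} sensor {teds.serial_number}")

    on_node_joined(Teds("SN-0042", "temperature", "degC", (0.02, -5.0)))
    print(registry["SN-0042"].engineering_value(1500), "degC")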
Robot Sequencing and Visualization Program (RSVP)
NASA Technical Reports Server (NTRS)
Cooper, Brian K.; Maxwell, Scott A.; Hartman, Frank R.; Wright, John R.; Yen, Jeng; Toole, Nicholas T.; Gorjian, Zareh; Morrison, Jack C.
2013-01-01
The Robot Sequencing and Visualization Program (RSVP) is being used in the Mars Science Laboratory (MSL) mission for downlink data visualization and command sequence generation. RSVP reads and writes downlink data products from the operations data server (ODS) and writes uplink data products to the ODS. The primary users of RSVP are members of the Rover Planner team (part of the Integrated Planning and Execution Team (IPE)), who use it to perform traversability/articulation analyses, take activity plan input from the Science and Mission Planning teams, and create a set of rover sequences to be sent to the rover every sol. The primary inputs to RSVP are downlink data products and activity plans in the ODS database. The primary outputs are command sequences to be placed in the ODS for further processing prior to uplink to each rover. RSVP is composed of two main subsystems. The first, called the Robot Sequence Editor (RoSE), understands the MSL activity and command dictionaries and takes care of converting incoming activity level inputs into command sequences. The Rover Planners use the RoSE component of RSVP to put together command sequences and to view and manage command level resources like time, power, temperature, etc. (via a transparent realtime connection to SEQGEN). The second component of RSVP is called HyperDrive, a set of high-fidelity computer graphics displays of the Martian surface in 3D and in stereo. The Rover Planners can explore the environment around the rover, create commands related to motion of all kinds, and see the simulated result of those commands via its underlying tight coupling with flight navigation, motor, and arm software. This software is the evolutionary replacement for the Rover Sequencing and Visualization software used to create command sequences (and visualize the Martian surface) for the Mars Exploration Rover mission.
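Purely to illustrate the activity-to-sequence step, here is a hypothetical expansion of an activity plan into a command sequence with a simple resource check; the activity names, command names, and energy figures are invented and do not come from the MSL command dictionary or RoSE.

    # Hypothetical expansion of activities into commands, with a power check.
    COMMAND_DICTIONARY = {
        "drive_to_target": [("ARM_STOW", 5.0), ("DRIVE", 80.0), ("TAKE_NAVCAM", 3.0)],
        "arm_contact":     [("ARM_UNSTOW", 8.0), ("PLACE_APXS", 12.0)],
    }

    def build_sequence(activity_plan, power_budget_wh):
        """Expand each activity into its commands, tracking an energy budget."""
        sequence, used = [], 0.0
        for activity in activity_plan:
            for command, energy_wh in COMMAND_DICTIONARY[activity]:
                if used + energy_wh > power_budget_wh:
                    raise RuntimeError(f"power budget exceeded at {command}")
                used += energy_wh
                sequence.append(command)
        return sequence, used

    seq, energy = build_sequence(["drive_to_target", "arm_contact"],
                                 power_budget_wh=150.0)
    print(seq, f"{energy} Wh")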
Post-Flight Data Analysis Tool
NASA Technical Reports Server (NTRS)
George, Marina
2018-01-01
A software tool that facilitates the retrieval and analysis of post-flight data. This allows our team and other teams to effectively and efficiently analyze and evaluate post-flight data in order to certify commercial providers.
STAR Algorithm Integration Team - Facilitating operational algorithm development
NASA Astrophysics Data System (ADS)
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Applications and Research (STAR) provides technical support for the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. The AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
Challenges and Approaches to Make Multidisciplinary Team Meetings Interoperable - The KIMBo Project.
Krauss, Oliver; Holzer, Karl; Schuler, Andreas; Egelkraut, Reinhard; Franz, Barbara
2017-01-01
Multidisciplinary team meetings (MDTMs) are already in use for certain areas in healthcare (e.g. treatment of cancer). Due to the lack of common standards and accessibility for the applied IT systems, their potential is not yet completely exploited. Common requirements for MDTMs shall be identified and aggregated into a process definition to be automated by an application architecture utilizing modern standards in electronic healthcare, e.g. HL7 FHIR. To identify requirements, an extensive literature review as well as semi-structured expert interviews were conducted. Results showed that interoperability and flexibility in terms of the process are key requirements to be addressed. An architecture blueprint as well as an aggregated process definition were derived from the insights gained. To evaluate the feasibility of the identified requirements, methods of explorative prototyping in software engineering were used. MDTMs will become an important part of modern and future healthcare, but the need for standardization in terms of interoperability is imminent.
ERIC Educational Resources Information Center
Holcomb, Glenda S.
2010-01-01
This qualitative, phenomenological doctoral dissertation research study explored software project team members' perceptions of changing organizational cultures based on management decisions made at project deviation points. The research study provided a view into challenged or failing government software projects through the lived experiences…
Resource Allocation Planning Helper (RALPH): Lessons learned
NASA Technical Reports Server (NTRS)
Durham, Ralph; Reilly, Norman B.; Springer, Joe B.
1990-01-01
The current task of the Resource Allocation Process includes the planning and apportionment of JPL's Ground Data System composed of the Deep Space Network and Mission Control and Computing Center facilities. The addition of the data driven, rule based planning system, RALPH, has expanded the planning horizon from 8 weeks to 10 years and has resulted in large labor savings. Use of the system has also resulted in important improvements in science return through enhanced resource utilization. In addition, RALPH has been instrumental in supporting rapid turnaround for an increased volume of special 'what if' studies. The status of RALPH is briefly reviewed, and important lessons learned from the creation of a highly functional design team are examined, drawn from an evolutionary design and implementation period in which an AI shell was selected, prototyped, and ultimately abandoned, and from the fundamental changes to the very process that spawned the tool kit. Principal topics include proper integration of software tools within the planning environment, transition from prototype to delivered software, changes in the planning methodology as a result of evolving software capabilities, and creation of the ability to develop and process generic requirements to allow planning flexibility.
ERIC Educational Resources Information Center
Houck, Christiana L.
2013-01-01
This interpretative phenomenological study used semi-structured interviews of 10 participants to gain a deeper understanding of the experience for virtual team members using collaborative technology. The participants were knowledge workers from global software companies working on cross-functional project teams at a distance. There were no…
A Layered Solution for Supercomputing Storage
Grider, Gary
2018-06-13
To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.
Defense Healthcare Information Assurance Program
1999-11-01
• 56k Modem • Cisco IOS 12.0 • Cisco 3640 Router Configuration File • 24 56k Modems. For operation at the MTFs, the Team recommended the Cisco 3600 series...temporarily substituted by the vendor pending availability of ordered components (e.g., the modem circuit board for the router). Each site created a...control for software/hardware modifications and upgrades? 3.5 Is there a process for introducing new equipment (such as hosts, printers, or modems
Best practices for team-based assistive technology design courses.
Goldberg, Mary R; Pearlman, Jonathan L
2013-09-01
Team-based design courses focused on products for people with disabilities have become relatively common, in part because of training grants such as the NSF Research to Aid Persons with Disabilities course grants. An output from these courses is an annual description of courses and projects, but this output has yet to be compiled into a "best practices guide," though such a guide could be helpful for instructors. To meet this need, we conducted a study to generate best practices for assistive technology product development courses and how to use these courses to teach students the fundamentals of innovation. A full list of recommendations is provided in the manuscript; they include identifying a client through a reliable clinical partner; allowing for transparency between the instructors, the client, and the team(s); establishing multi-disciplinary teams; using a process-oriented vs. solution-oriented product development model; using project management software to facilitate and archive communication and outputs; facilitating client interaction through frequent communication; seeking to develop professional role confidence to inspire students' commitment to engineering and (where applicable) the rehabilitation field; publishing student designs on repositories; incorporating both formal and informal education opportunities related to design; and encouraging students to submit their designs to local or national entrepreneurship competitions.
Computer-aided field editing in DHS: the Turkey experiment.
1995-01-01
A study comparing field editing using a Notebook computer, computer-aided field editing (CAFE), with that done manually in the standard manner during the 1993 Demographic and Health Survey (DHS) in Turkey demonstrated that there was less missing data and a lower mean number of errors for teams using CAFE. Six of 13 teams used CAFE in the Turkey experiment; the computers were equipped with Integrated System for Survey Analysis (ISSA) software for editing the DHS questionnaires. The CAFE teams completed 2466 out of 8619 household questionnaires and 1886 out of 6649 individual questionnaires. The CAFE team editor entered data into the computer and marked any detected errors on the questionnaire; the errors were then corrected by the editor, in the field, based on other responses in the questionnaire, or on corrections made by the interviewer, to whom the questionnaire was returned. Errors in questionnaires edited manually are not identified until they are sent to the survey office for data processing, when it is too late to ask for clarification from respondents. There was one area where the error rate was higher for CAFE teams; the CAFE editors paid less attention to errors presented as warnings only.
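A minimal sketch of the kind of consistency check a field editor's software might apply to one questionnaire record; the field names, eligibility range, and rules are invented for illustration and are not the ISSA edit specification used in the survey.

    def edit_checks(record):
        """Return a list of (field, message) errors/warnings for one questionnaire."""
        problems = []
        for field in ("age", "marital_status", "children_ever_born"):
            if record.get(field) is None:
                problems.append((field, "missing response"))
        age, children = record.get("age"), record.get("children_ever_born")
        if age is not None and not (15 <= age <= 49):
            problems.append(("age", "respondent outside assumed eligible range 15-49"))
        if age is not None and children is not None and children > max(0, age - 10):
            problems.append(("children_ever_born", "implausibly high for age (warning)"))
        return problems

    # The editor reviews flagged items while the team is still in the field,
    # so a questionnaire can go back to the interviewer for clarification.
    print(edit_checks({"age": 17, "marital_status": None, "children_ever_born": 9}))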
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lingerfelt, Eric J; Messer, II, Otis E
2017-01-02
The Bellerophon software system supports CHIMERA, a production-level HPC application that simulates the evolution of core-collapse supernovae. Bellerophon enables CHIMERA's geographically dispersed team of collaborators to perform job monitoring and real-time data analysis from multiple supercomputing resources, including platforms at OLCF, NERSC, and NICS. Its multi-tier architecture provides an encapsulated, end-to-end software solution that enables the CHIMERA team to quickly and easily access highly customizable animated and static views of results from anywhere in the world via a cross-platform desktop application.
DOT National Transportation Integrated Search
2013-04-01
The Rural Road Upgrade Inventory and Cost Estimation Software was designed by the AUTC research team to help the Fairbanks North Star Borough (FNSB) estimate the cost of upgrading rural roads located in the Borough's Service Areas. The Software pe...
Knowledge Sharing through Pair Programming in Learning Environments: An Empirical Study
ERIC Educational Resources Information Center
Kavitha, R. K.; Ahmed, M. S.
2015-01-01
Agile software development is an iterative and incremental methodology, where solutions evolve from self-organizing, cross-functional teams. Pair programming is a type of agile software development technique where two programmers work together with one computer for developing software. This paper reports the results of the pair programming…
Space and Missile Systems Center Standard: Software Development
2015-01-16
maintenance, or any other activity or combination of activities resulting in products. Within this standard, requirements to “develop,” “define...integration, reuse, reengineering, maintenance, or any other activity that results in products). The term “developer” encompasses all software team...activities that results in software products. Software development includes new development, modification, reuse, reengineering, maintenance, and any other
Implementing large projects in software engineering courses
NASA Astrophysics Data System (ADS)
Coppit, David
2006-03-01
In software engineering education, large projects are widely recognized as a useful way of exposing students to the real-world difficulties of team software development. But large projects are difficult to put into practice. First, educators rarely have additional time to manage software projects. Second, classrooms have inherent limitations that threaten the realism of large projects. Third, quantitative evaluation of individuals who work in groups is notoriously difficult. As a result, many software engineering courses compromise the project experience by reducing the team sizes, project scope, and risk. In this paper, we present an approach to teaching a one-semester software engineering course in which 20 to 30 students work together to construct a moderately sized (15 KLOC) software system. The approach combines carefully coordinated lectures and homework assignments, a hierarchical project management structure, modern communication technologies, and a web-based project tracking and individual assessment system. Our approach provides a more realistic project experience for the students without incurring significant additional overhead for the instructor. We present our experiences using the approach over the last two years for the software engineering course at The College of William and Mary. Although the approach has some weaknesses, we believe that they are strongly outweighed by the pedagogical benefits.
Managing MDO Software Development Projects
NASA Technical Reports Server (NTRS)
Townsend, J. C.; Salas, A. O.
2002-01-01
Over the past decade, the NASA Langley Research Center developed a series of 'grand challenge' applications demonstrating the use of parallel and distributed computation and multidisciplinary design optimization. All but the last of these applications were focused on the high-speed civil transport vehicle; the final application focused on reusable launch vehicles. Teams of discipline experts developed these multidisciplinary applications by integrating legacy engineering analysis codes. As teams became larger and the application development became more complex with increasing levels of fidelity and numbers of disciplines, the need for applying software engineering practices became evident. This paper briefly introduces the application projects and then describes the approaches taken in project management and software engineering for each project; lessons learned are highlighted.
Clinical records anonymisation and text extraction (CRATE): an open-source software system.
Cardinal, Rudolf N
2017-04-26
Electronic medical records contain information of value for research, but contain identifiable and often highly sensitive confidential information. Patient-identifiable information cannot in general be shared outside clinical care teams without explicit consent, but anonymisation/de-identification allows research uses of clinical data without explicit consent. This article presents CRATE (Clinical Records Anonymisation and Text Extraction), an open-source software system with separable functions: (1) it anonymises or de-identifies arbitrary relational databases, with sensitivity and precision similar to previous comparable systems; (2) it uses public secure cryptographic methods to map patient identifiers to research identifiers (pseudonyms); (3) it connects relational databases to external tools for natural language processing; (4) it provides a web front end for research and administrative functions; and (5) it supports a specific model through which patients may consent to be contacted about research. Creation and management of a research database from sensitive clinical records with secure pseudonym generation, full-text indexing, and a consent-to-contact process is possible and practical using entirely free and open-source software.
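To make the pseudonym-generation idea concrete, the sketch below shows one way a keyed cryptographic mapping from patient identifiers to research identifiers could look in Python; the HMAC-SHA-256 construction, key, and identifier format are illustrative assumptions, not CRATE's actual implementation.

    # Illustrative only: a keyed mapping from a patient identifier to a research
    # pseudonym. The key, identifier format, and truncation are assumptions.
    import hmac
    import hashlib

    SECRET_KEY = b"long-random-key-held-by-the-data-controller"  # hypothetical

    def pseudonymise(patient_id: str) -> str:
        """Return a stable, non-reversible research identifier."""
        digest = hmac.new(SECRET_KEY, patient_id.encode("utf-8"), hashlib.sha256)
        return digest.hexdigest()[:16]

    # The same input always maps to the same pseudonym, so records can be linked
    # across tables without exposing the original identifier.
    print(pseudonymise("patient-1234567890"))

The key property is that record linkage is preserved while reversing the mapping requires the secret key.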
Towards a Better Understanding of CMMI and Agile Integration - Multiple Case Study of Four Companies
NASA Astrophysics Data System (ADS)
Pikkarainen, Minna
The amount of software is increasing across different domains in Europe. This provides industries in smaller countries with good opportunities to work in international markets. Success in the global markets, however, demands the rapid production of high-quality, error-free software. Both CMMI and agile methods seem to provide a ready solution for quality and lead time improvements. There is not, however, much empirical evidence available either about 1) how the integration of these two aspects can be done in practice or 2) what it actually demands from assessors and software process improvement groups. The goal of this paper is to increase the understanding of CMMI and agile integration, in particular, focusing on the research question: how to use a ‘lightweight’ style of CMMI assessment in agile contexts. This is done via four case studies in which assessments were conducted using the goals of the CMMI integrated project management and collaboration and coordination with relevant stakeholders process areas, together with practices from XP and Scrum. The study shows that the use of agile practices may support the fulfilment of the goals of CMMI process areas, but there are still many challenges for the agile teams to be solved within the continuous improvement programs. It also identifies practical advice for assessors and improvement groups to take into consideration when conducting assessments in the context of agile software development.
Software Program: Software Management Guidebook
NASA Technical Reports Server (NTRS)
1996-01-01
The purpose of this NASA Software Management Guidebook is twofold. First, this document defines the core products and activities required of NASA software projects. It defines life-cycle models and activity-related methods but acknowledges that no single life-cycle model is appropriate for all NASA software projects. It also acknowledges that the appropriate method for accomplishing a required activity depends on characteristics of the software project. Second, this guidebook provides specific guidance to software project managers and team leaders in selecting appropriate life cycles and methods to develop a tailored plan for a software engineering project.
The KSC Simulation Team practices for contingencies in Firing Room 1
NASA Technical Reports Server (NTRS)
1998-01-01
In Firing Room 1 at KSC, Shuttle launch team members put the Shuttle system through an integrated simulation. The control room is set up with software used to simulate flight and ground systems in the launch configuration. A Simulation Team, composed of KSC engineers, introduces 12 or more major problems to prepare the launch team for worst-case scenarios. Such tests and simulations keep the Shuttle launch team sharp and ready for liftoff. The next liftoff is targeted for Oct. 29.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stamp, Jason E.; Eddy, John P.; Jensen, Richard P.
Microgrids are a focus of localized energy production that support resiliency, security, local control, and increased access to renewable resources (among other potential benefits). The Smart Power Infrastructure Demonstration for Energy Reliability and Security (SPIDERS) Joint Capability Technology Demonstration (JCTD) program between the Department of Defense (DOD), Department of Energy (DOE), and Department of Homeland Security (DHS) resulted in the preliminary design and deployment of three microgrids at military installations. This paper is focused on the analysis process and supporting software used to determine optimal designs for energy surety microgrids (ESMs) in the SPIDERS project. There are two key pieces of software: an existing software application developed by Sandia National Laboratories (SNL) called Technology Management Optimization (TMO), and a new simulation developed for SPIDERS called the performance reliability model (PRM). TMO is a decision support tool that performs multi-objective optimization over a mixed discrete/continuous search space for which the performance measures are unrestricted in form. The PRM is able to statistically quantify the performance and reliability of a microgrid operating in islanded mode (disconnected from any utility power source). Together, these two software applications were used as part of the ESM process to generate the preliminary designs presented by the SNL-led DOE team to the DOD. Acknowledgements: Sandia National Laboratories and the SPIDERS technical team would like to acknowledge the following for help in the project: Mike Hightower, who has been the key driving force for Energy Surety Microgrids; Juan Torres and Abbas Akhil, who developed the concept of microgrids for military installations; Merrill Smith, U.S. Department of Energy SPIDERS Program Manager; Ross Roley and Rich Trundy from U.S. Pacific Command; Bill Waugaman and Bill Beary from U.S. Northern Command; Tarek Abdallah, Melanie Johnson, and Harold Sanborn of the U.S. Army Corps of Engineers Construction Engineering Research Laboratory; and colleagues from Sandia National Laboratories (SNL) for their reviews, suggestions, and participation in the work.
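As a rough illustration of what a statistical islanded-mode reliability estimate involves, the sketch below runs a small Monte Carlo simulation; the generator capacities, availabilities, and critical load are hypothetical values, and the code is not part of TMO or the PRM.

    # Simplified Monte Carlo estimate of serving a critical load in islanded mode.
    # All parameters are hypothetical; this is not SPIDERS design data.
    import random

    GENERATORS = [(500, 0.95), (500, 0.95), (250, 0.90)]  # (kW capacity, availability)
    CRITICAL_LOAD_KW = 800
    TRIALS = 100_000

    rng = random.Random(42)
    served = 0
    for _ in range(TRIALS):
        capacity = sum(kw for kw, p in GENERATORS if rng.random() < p)
        served += capacity >= CRITICAL_LOAD_KW

    print(f"Estimated probability of serving the critical load: {served / TRIALS:.3f}")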
A project-based geoscience curriculum: select examples
NASA Astrophysics Data System (ADS)
Brown, L. M.; Kelso, P. R.; White, R. J.; Rexroad, C. B.
2007-12-01
Principles of constructivist educational philosophy serve as a foundation for the recently completed National Science Foundation sponsored undergraduate curricular revision undertaken by the Geology Department of Lake Superior State University. We integrate lecture and laboratory sessions utilizing active learning strategies that focus on real-world geoscience experiences and problems. In this presentation, we discuss details of three research-like projects that require students to access original data, process and model the data using appropriate geological software, interpret and defend results, and disseminate results in reports, posters, and class presentations. The projects are from three upper division courses, Carbonate Systems, Sequence Stratigraphy, and Geophysical Systems, where teams of two to four students are presented with defined problems of durations ranging from a few weeks to an entire semester. Project goals and location, some background information, and specified dates and expectations for interim and final written and oral reports are provided to students. Some projects require the entire class to work on one data set, some require each team to be initially responsible for a portion of the project with teams ultimately merging data for interpretation and to arrive at final conclusions. Some projects require students to utilize data from appropriate geological web sites such as state geological surveys. Others require students to design surveys and utilize appropriate instruments of their choice for field data collection. Students learn usage and applications of appropriate geological software in compiling, processing, modeling, and interpreting data and preparing formal reports and presentations. Students uniformly report heightened interest and motivation when engaged in these projects. Our new curriculum has resulted in an increase in students' quantitative and interpretive skills along with dramatic improvement in communication and interpersonal skills related to group dynamics.
The X-windows interactive navigation data editor
NASA Technical Reports Server (NTRS)
Rinker, G. C.
1992-01-01
A new computer program called the X-Windows Interactive Data Editor (XIDE) was developed and demonstrated as a prototype application for editing radio metric data in the orbit-determination process. The program runs on a variety of workstations and employs pull-down menus and graphical displays, which allow users to easily inspect and edit radio metric data in the orbit data files received from the Deep Space Network (DSN). The XIDE program is based on the Open Software Foundation OSF/Motif Graphical User Interface (GUI) and has proven to be an efficient tool for editing radio metric data in the navigation operations environment. It was adopted by the Magellan Navigation Team as their primary data-editing tool. Because the software was designed from the beginning to be portable, the prototype was successfully moved to new workstation environments. It was also integrated into the design of the next-generation software tool for DSN multimission navigation interactive launch support.
Technology evaluation, assessment, modeling, and simulation: the TEAMS capability
NASA Astrophysics Data System (ADS)
Holland, Orgal T.; Stiegler, Robert L.
1998-08-01
The United States Marine Corps' Technology Evaluation, Assessment, Modeling and Simulation (TEAMS) capability, located at the Naval Surface Warfare Center in Dahlgren, Virginia, provides an environment for detailed test, evaluation, and assessment of live and simulated sensor and sensor-to-shooter systems for the joint warfare community. Frequent use of modeling and simulation allows for cost effective testing, bench-marking, and evaluation of various levels of sensors and sensor-to-shooter engagements. Interconnectivity to live, instrumented equipment operating in real battle space environments and to remote modeling and simulation facilities participating in advanced distributed simulations (ADS) exercises is available to support a wide range of situational assessment requirements. TEAMS provides a valuable resource for a variety of users. Engineers, analysts, and other technology developers can use TEAMS to evaluate, assess and analyze tactically relevant phenomenological data on tactical situations. Expeditionary warfare and USMC concept developers can use the facility to support and execute advanced warfighting experiments (AWE) to better assess operational maneuver from the sea (OMFTS) concepts, doctrines, and technology developments. Developers can use the facility to support sensor system hardware, software and algorithm development as well as combat development, acquisition, and engineering processes. Test and evaluation specialists can use the facility to plan, assess, and augment their processes. This paper presents an overview of the TEAMS capability and focuses specifically on the technical challenges associated with the integration of live sensor hardware into a synthetic environment and how those challenges are being met. Existing sensors, recent experiments and facility specifications are featured.
Temporal motifs reveal collaboration patterns in online task-oriented networks
NASA Astrophysics Data System (ADS)
Xuan, Qi; Fang, Huiting; Fu, Chenbo; Filkov, Vladimir
2015-05-01
Real networks feature layers of interactions and complexity. In them, different types of nodes can interact with each other via a variety of events. Examples of this complexity are task-oriented social networks (TOSNs), where teams of people share tasks towards creating a quality artifact, such as academic research papers or software development in commercial or open source environments. Accomplishing those tasks involves both work, e.g., writing the papers or code, and communication, to discuss and coordinate. Taking into account the different types of activities and how they alternate over time can result in much more precise understanding of the TOSNs behaviors and outcomes. That calls for modeling techniques that can accommodate both node and link heterogeneity as well as temporal change. In this paper, we report on methodology for finding temporal motifs in TOSNs, limited to a system of two people and an artifact. We apply the methods to publicly available data of TOSNs from 31 Open Source Software projects. We find that these temporal motifs are enriched in the observed data. When applied to software development outcome, temporal motifs reveal a distinct dependency between collaboration and communication in the code writing process. Moreover, we show that models based on temporal motifs can be used to more precisely relate both individual developer centrality and team cohesion to programmer productivity than models based on aggregated TOSNs.
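A minimal sketch of the kind of two-person-plus-artifact temporal motif counting described here is shown below; the event format, time window, and motif definition are illustrative assumptions rather than the authors' code.

    # Illustrative sketch: count a simple temporal motif in a time-ordered event
    # stream: a message from A to B followed, within a window, by a commit from B.
    from collections import Counter

    WINDOW = 3600  # seconds; hypothetical choice

    # (timestamp, event_type, actor, target): target is a person for "msg" events
    # and an artifact for "commit" events.
    events = [
        (100, "msg", "alice", "bob"),
        (900, "commit", "bob", "module.c"),
        (5000, "msg", "alice", "bob"),
        (9500, "commit", "bob", "module.c"),
    ]

    motifs = Counter()
    for i, (t1, kind1, actor1, target1) in enumerate(events):
        if kind1 != "msg":
            continue
        for t2, kind2, actor2, target2 in events[i + 1:]:
            if t2 - t1 > WINDOW:
                break  # events are time-ordered, so later events cannot qualify
            if kind2 == "commit" and actor2 == target1:
                motifs[("msg", "commit")] += 1

    print(motifs)  # Counter({('msg', 'commit'): 1})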
Temporal motifs reveal collaboration patterns in online task-oriented networks.
Xuan, Qi; Fang, Huiting; Fu, Chenbo; Filkov, Vladimir
2015-05-01
Real networks feature layers of interactions and complexity. In them, different types of nodes can interact with each other via a variety of events. Examples of this complexity are task-oriented social networks (TOSNs), where teams of people share tasks towards creating a quality artifact, such as academic research papers or software development in commercial or open source environments. Accomplishing those tasks involves both work, e.g., writing the papers or code, and communication, to discuss and coordinate. Taking into account the different types of activities and how they alternate over time can result in much more precise understanding of the TOSNs behaviors and outcomes. That calls for modeling techniques that can accommodate both node and link heterogeneity as well as temporal change. In this paper, we report on methodology for finding temporal motifs in TOSNs, limited to a system of two people and an artifact. We apply the methods to publicly available data of TOSNs from 31 Open Source Software projects. We find that these temporal motifs are enriched in the observed data. When applied to software development outcome, temporal motifs reveal a distinct dependency between collaboration and communication in the code writing process. Moreover, we show that models based on temporal motifs can be used to more precisely relate both individual developer centrality and team cohesion to programmer productivity than models based on aggregated TOSNs.
Telescience Resource Kit Software Capabilities and Future Enhancements
NASA Technical Reports Server (NTRS)
Schneider, Michelle
2004-01-01
The Telescience Resource Kit (TReK) is a suite of PC-based software applications that can be used to monitor and control a payload on board the International Space Station (ISS). This software provides a way for payload users to operate their payloads from their home sites. It can be used by an individual or a team of people. TReK provides both local ground support system services and an interface to utilize remote services provided by the Payload Operations Integration Center (POIC). For example, TReK can be used to receive payload data distributed by the POIC and to perform local data functions such as processing the data, storing it in local files, and forwarding it to other computer systems. TReK can also be used to build, send, and track payload commands. In addition to these features, work is in progress to add a new command management capability. This capability will provide a way to manage a multi-platform command environment that can include geographically distributed computers. This is intended to help those teams that need to manage a shared on-board resource such as a facility class payload. The environment can be configured such that one individual can manage all the command activities associated with that payload. This paper will provide a summary of existing TReK capabilities and a description of the new command management capability.
Automating Mission Scheduling for Space-Based Observatories
NASA Technical Reports Server (NTRS)
Pell, Barney; Muscettola, Nicola; Hansson, Othar; Mohan, Sunil
1998-01-01
In this paper we describe the use of our planning and scheduling framework, HSTS, to reduce the complexity of science mission planning. This work is part of an overall project to enable a small team of scientists to control the operations of a spacecraft. The present process is highly labor intensive. Users (scientists and operators) rely on a non-codified understanding of the different spacecraft subsystems and of their operating constraints. They use a variety of software tools to support their decision making process. This paper considers the types of decision making that need to be supported/automated, the nature of the domain constraints and the capabilities needed to address them successfully, and the nature of external software systems with which the core planning/scheduling engine needs to interact. HSTS has been applied to science scheduling for EUVE and Cassini and is being adapted to support autonomous spacecraft operations in the New Millennium initiative.
Freimuth, Robert R; Schauer, Michael W; Lodha, Preeti; Govindrao, Poornima; Nagarajan, Rakesh; Chute, Christopher G
2008-11-06
The caBIG Compatibility Review System (CRS) is a web-based application to support compatibility reviews, which certify that software applications that pass the review meet a specific set of criteria that allow them to interoperate. The CRS contains workflows that support both semantic and syntactic reviews, which are performed by the caBIG Vocabularies and Common Data Elements (VCDE) and Architecture workspaces, respectively. The CRS increases the efficiency of compatibility reviews by reducing administrative overhead and it improves uniformity by ensuring that each review is conducted according to a standard process. The CRS provides metrics that allow the review team to evaluate the level of data element reuse in an application, a first step towards quantifying the extent of harmonization between applications. Finally, functionality is being added that will provide automated validation of checklist criteria, which will further simplify the review process.
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Schrock, Mitchell; Baldwin, John R.; Borden, Charles S.
2010-01-01
The Ground Resource Allocation and Planning Environment (GRAPE 1.0) is a Web-based, collaborative team environment based on the Microsoft SharePoint platform, which provides Deep Space Network (DSN) resource planners tools and services for sharing information and performing analysis.
The Standard Autonomous File Server, a Customized, Off-the-Shelf Success Story
NASA Technical Reports Server (NTRS)
Semancik, Susan K.; Conger, Annette M.; Obenschain, Arthur F. (Technical Monitor)
2001-01-01
The Standard Autonomous File Server (SAFS), which includes both off-the-shelf hardware and software, uses an improved automated file transfer process to provide a quicker, more reliable, prioritized file distribution for customers of near real-time data without interfering with the assets involved in the acquisition and processing of the data. It operates as a stand-alone solution, monitoring itself, and providing an automated fail-over process to enhance reliability. This paper will describe the unique problems and lessons learned both during the COTS selection and integration into SAFS, and the system's first year of operation in support of NASA's satellite ground network. COTS was the key factor in allowing the two-person development team to deploy systems in less than a year, meeting the required launch schedule. The SAFS system has been so successful that it is becoming a NASA standard resource, leading to its nomination for NASA's Software of the Year Award in 1999.
Photo-realistic Terrain Modeling and Visualization for Mars Exploration Rover Science Operations
NASA Technical Reports Server (NTRS)
Edwards, Laurence; Sims, Michael; Kunz, Clayton; Lees, David; Bowman, Judd
2005-01-01
Modern NASA planetary exploration missions employ complex systems of hardware and software managed by large teams of engineers and scientists in order to study remote environments. The most complex and successful of these recent projects is the Mars Exploration Rover mission. The Computational Sciences Division at NASA Ames Research Center delivered a 3D visualization program, Viz, to the MER mission that provides an immersive, interactive environment for science analysis of the remote planetary surface. In addition, Ames provided the Athena Science Team with high-quality terrain reconstructions generated with the Ames Stereo-pipeline. The on-site support team for these software systems responded to unanticipated opportunities to generate 3D terrain models during the primary MER mission. This paper describes Viz, the Stereo-pipeline, and the experiences of the on-site team supporting the scientists at JPL during the primary MER mission.
ERIC Educational Resources Information Center
Pieterse, Vreda; Thompson, Lisa
2010-01-01
The acquisition of effective teamwork skills is crucial in all disciplines. Using an interpretive approach, this study investigates collaboration and co-operation in teams of software engineering students. Teams that were homogeneous and teams that were heterogeneous in terms of their members' academic abilities, skills and goals were identified and…
The Chandra X-ray Center data system: supporting the mission of the Chandra X-ray Observatory
NASA Astrophysics Data System (ADS)
Evans, Janet D.; Cresitello-Dittmar, Mark; Doe, Stephen; Evans, Ian; Fabbiano, Giuseppina; Germain, Gregg; Glotfelty, Kenny; Hall, Diane; Plummer, David; Zografou, Panagoula
2006-06-01
The Chandra X-ray Center Data System provides end-to-end scientific software support for Chandra X-ray Observatory mission operations. The data system includes the following components: (1) observers' science proposal planning tools; (2) science mission planning tools; (3) science data processing, monitoring, and trending pipelines and tools; and (4) data archive and database management. A subset of the science data processing component is ported to multiple platforms and distributed to end-users as a portable data analysis package. Web-based user tools are also available for data archive search and retrieval. We describe the overall architecture of the data system and its component pieces, and consider the design choices and their impacts on maintainability. We discuss the many challenges involved in maintaining a large, mission-critical software system with limited resources. These challenges include managing continually changing software requirements and ensuring the integrity of the data system and resulting data products while being highly responsive to the needs of the project. We describe our use of COTS and OTS software at the subsystem and component levels, our methods for managing multiple release builds, and adapting a large code base to new hardware and software platforms. We review our experiences during the life of the mission so far, and our approaches for keeping a small, but highly talented, development team engaged during the maintenance phase of a mission.
A Metadata Management Framework for Collaborative Review of Science Data Products
NASA Astrophysics Data System (ADS)
Hart, A. F.; Cinquini, L.; Mattmann, C. A.; Thompson, D. R.; Wagstaff, K.; Zimdars, P. A.; Jones, D. L.; Lazio, J.; Preston, R. A.
2012-12-01
Data volumes generated by modern scientific instruments often preclude archiving the complete observational record. To compensate, science teams have developed a variety of "triage" techniques for identifying data of potential scientific interest and marking it for prioritized processing or permanent storage. This may involve multiple stages of filtering with both automated and manual components operating at different timescales. A promising approach exploits a fast, fully automated first stage followed by a more reliable offline manual review of candidate events. This hybrid approach permits a 24-hour rapid real-time response while also preserving the high accuracy of manual review. To support this type of second-level validation effort, we have developed a metadata-driven framework for the collaborative review of candidate data products. The framework consists of a metadata processing pipeline and a browser-based user interface that together provide a configurable mechanism for reviewing data products via the web, and capturing the full stack of associated metadata in a robust, searchable archive. Our system heavily leverages software from the Apache Object Oriented Data Technology (OODT) project, an open source data integration framework that facilitates the construction of scalable data systems and places a heavy emphasis on the utilization of metadata to coordinate processing activities. OODT provides a suite of core data management components for file management and metadata cataloging that form the foundation for this effort. The system has been deployed at JPL in support of the V-FASTR experiment [1], a software-based radio transient detection experiment that operates commensally at the Very Long Baseline Array (VLBA), and has a science team that is geographically distributed across several countries. Daily review of automatically flagged data is a shared responsibility for the team, and is essential to keep the project within its resource constraints. We describe the development of the platform using open source software, and discuss our experience deploying the system operationally. [1] R. B. Wayth, W. F. Brisken, A. T. Deller, W. A. Majid, D. R. Thompson, S. J. Tingay, and K. L. Wagstaff, "V-FASTR: The VLBA fast radio transients experiment," The Astrophysical Journal, vol. 735, no. 2, p. 97, 2011. Acknowledgement: This effort was supported by the Jet Propulsion Laboratory, managed by the California Institute of Technology under a contract with the National Aeronautics and Space Administration.
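The following sketch illustrates, in simplified form, the flavor of a metadata-driven review archive like the one described; the field names and functions are hypothetical and this is not the OODT API.

    # Illustrative metadata-driven review archive (hypothetical names; not OODT).
    from dataclasses import dataclass, field

    @dataclass
    class CandidateProduct:
        product_id: str
        metadata: dict                     # e.g. {"antenna": "VLBA-PT", "dm": 560.0}
        reviews: list = field(default_factory=list)

    archive = {}

    def ingest(product_id, metadata):
        archive[product_id] = CandidateProduct(product_id, metadata)

    def review(product_id, reviewer, verdict):
        archive[product_id].reviews.append({"reviewer": reviewer, "verdict": verdict})

    def search(**criteria):
        return [p for p in archive.values()
                if all(p.metadata.get(k) == v for k, v in criteria.items())]

    ingest("cand-0001", {"antenna": "VLBA-PT", "dm": 560.0})
    review("cand-0001", "reviewer-1", "keep")
    print([p.product_id for p in search(antenna="VLBA-PT")])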
NASA Technical Reports Server (NTRS)
Equils, Douglas J.
2008-01-01
Launched on October 15, 1997, the Cassini-Huygens spacecraft began its ambitious journey to the Saturnian system with a complex suite of 12 scientific instruments, and another 6 instruments aboard the European Space Agency's Huygens Probe. Over the next 6 1/2 years, Cassini would continue its relatively simple cruise-phase operations, flying past Venus, Earth, and Jupiter. However, following Saturn Orbit Insertion (SOI), Cassini would become involved in a complex series of tasks that required detailed resource management, distributed operations collaboration, and a database for capturing science objectives. Collectively, these needs were met through a web-based software tool designed to help with the Cassini uplink process and ultimately used to generate more robust sequences for spacecraft operations. In 2001, in conjunction with the Southwest Research Institute (SwRI) and later Venustar Software and Engineering Inc., the Cassini Information Management System (CIMS) was released, which enabled the Cassini spacecraft and science planning teams to perform complex information management and team collaboration between scientists and engineers in 17 countries. Originally tailored to help manage the science planning uplink process, CIMS has been actively evolving since its inception to meet the changing and growing needs of the Cassini uplink team and effectively reduce mission risk through a series of resource management validation algorithms. These algorithms have been implemented in the web-based software tool to identify potential sequence conflicts early in the science planning process. CIMS mitigates these sequence conflicts through identification of timing incongruities, pointing inconsistencies, flight rule violations, and data volume issues, and by assisting in Deep Space Network (DSN) coverage analysis. In preparation for extended mission operations, CIMS has also evolved further to assist in the planning and coordination of the dual playback redundancy of high-value data from targets such as Titan and Enceladus. This paper will outline the critical role that CIMS has played for Cassini in the distributed ops paradigm throughout operations. This paper will also examine the evolution that CIMS has undergone in the face of new science discoveries and fluctuating operational needs. Finally, this paper will conclude with a theoretical adaptation of CIMS for other projects and the potential savings in cost and risk that future missions could realize.
Lesselroth, Blake J; Adams, Kathleen; Tallett, Stephanie; Wood, Scott D; Keeling, Amy; Cheng, Karen; Church, Victoria L; Felder, Robert; Tran, Hanna
2013-01-01
Our objectives were to (1) develop an in-depth understanding of the workflow and information flow in medication reconciliation, and (2) design medication reconciliation support technology using a combination of rapid-cycle prototyping and human-centered design. Although medication reconciliation is a national patient safety goal, limitations of both the physical environment and workflow can make it challenging to implement durable systems. We used several human factors techniques to gather requirements and develop a new process to collect a medication history at hospital admission. We completed an ethnography and time and motion analysis of pharmacists in order to illustrate the processes used to reconcile medications. We then used the requirements to design prototype multimedia software for collecting a bedside medication history. We observed how pharmacists incorporated the technology into their physical environment and documented usability issues. Admissions occurred in three phases: (1) list compilation, (2) order processing, and (3) team coordination. Current medication reconciliation processes at the hospital average 19 minutes to complete and do not include a bedside interview. Use of our technology during a bedside interview required an average of 29 minutes. The software represents a viable proof-of-concept to automate parts of history collection and enhance patient communication. However, we discovered several usability issues that require attention. We designed a patient-centered technology to enhance how clinicians collect a patient's medication history. By using multiple human factors methods, our research team identified system themes and design constraints that influence the quality of the medication reconciliation process and implementation effectiveness of new technology. Keywords: evidence-based design, human factors, patient-centered care, safety, technology.
The development of participatory health research among incarcerated women in a Canadian prison
Murphy, K.; Hanson, D.; Hemingway, C.; Ramsden, V.; Buxton, J.; Granger-Brown, A.; Condello, L-L.; Buchanan, M.; Espinoza-Magana, N.; Edworthy, G.; Hislop, T. G.
2009-01-01
This paper describes the development of a unique prison participatory research project, in which incarcerated women formed a research team, the research activities and the lessons learned. The participatory action research project was conducted in the main short sentence minimum/medium security women's prison located in a Western Canadian province. An ethnographic multi-method approach was used for data collection and analysis. Quantitative data was collected by surveys and analysed using descriptive statistics. Qualitative data was collected from orientation package entries, audio recordings, and written archives of research team discussions, forums and debriefings, and presentations. These data and ethnographic observations were transcribed and analysed using iterative and interpretative qualitative methods and NVivo 7 software. Up to 15 women worked each day as prison research team members; a total of 190 women participated at some time in the project between November 2005 and August 2007. Incarcerated women peer researchers developed the research processes including opportunities for them to develop leadership and technical skills. Through these processes, including data collection and analysis, nine health goals emerged. Lessons learned from the research processes were confirmed by the common themes that emerged from thematic analysis of the research activity data. Incarceration provides a unique opportunity for engagement of women as expert partners alongside academic researchers and primary care workers in participatory research processes to improve their health. PMID:25759141
Using Pilots to Assess the Value and Approach of CMMI Implementation
NASA Technical Reports Server (NTRS)
Godfrey, Sara; Andary, James; Rosenberg, Linda
2002-01-01
At Goddard Space Flight Center (GSFC), we have chosen to use Capability Maturity Model Integrated (CMMI) to guide our process improvement program. Projects at GSFC consist of complex systems of software and hardware that control satellites, operate ground systems, run instruments, manage databases and data and support scientific research. It is a challenge to launch a process improvement program that encompasses our diverse systems, yet is manageable in terms of cost effectiveness. In order to establish the best approach for improvement, our process improvement effort was divided into three phases: 1) Pilot projects; 2) Staged implementation; and 3) Sustainment and continual improvement. During Phase 1 the focus of the activities was on a baselining process, using pre-appraisals in order to get a baseline for making a better cost and effort estimate for the improvement effort. Pilot pre-appraisals were conducted from different perspectives so different approaches for process implementation could be evaluated. Phase 1 also concentrated on establishing an improvement infrastructure and training of the improvement teams. At the time of this paper, three pilot appraisals have been completed. Our initial appraisal was performed in a flight software area, considering the flight software organization as the organization. The second appraisal was done from a project perspective, focusing on systems engineering and acquisition, and using the organization as GSFC. The final appraisal was in a ground support software area, again using GSFC as the organization. This paper will present our initial approach, lessons learned from all three pilots and the changes in our approach based on the lessons learned.
The need for scientific software engineering in the pharmaceutical industry
NASA Astrophysics Data System (ADS)
Luty, Brock; Rose, Peter W.
2017-03-01
Scientific software engineering is a distinct discipline from both computational chemistry project support and research informatics. A scientific software engineer not only has a deep understanding of the science of drug discovery but also the desire, skills and time to apply good software engineering practices. A good team of scientific software engineers can create a software foundation that is maintainable, validated and robust. If done correctly, this foundation enables the organization to investigate new and novel computational ideas with a very high level of efficiency.
The need for scientific software engineering in the pharmaceutical industry.
Luty, Brock; Rose, Peter W
2017-03-01
Scientific software engineering is a distinct discipline from both computational chemistry project support and research informatics. A scientific software engineer not only has a deep understanding of the science of drug discovery but also the desire, skills and time to apply good software engineering practices. A good team of scientific software engineers can create a software foundation that is maintainable, validated and robust. If done correctly, this foundation enables the organization to investigate new and novel computational ideas with a very high level of efficiency.
Improving hospital weekend handover: a user-centered, standardised approach.
Mehra, Avi; Henein, Christin
2014-01-01
Clinical handover remains one of the most perilous procedures in medicine (1). Weekend handover has emerged as a key area of concern with high variability in handover processes across hospitals (1, 2, 4, 5-10). Studying weekend handover processes within medicine at an acute teaching hospital revealed huge variability in documented content and structure. A total of 12 different pro formas were in use by the medical day-team to hand over to the weekend team on-call. A Likert survey of doctors revealed 93% felt the current handover system needed improvement, with 71% stating that it did not ensure patient safety (Chi-squared, p-value <0.001, n=32). Semi-structured interviews of doctors identified common themes including "a lack of consistency in approach", "poor standardization", and "high variability". Seeking to address concerns of standardization, a standardized handover pro forma was developed using Royal College of Physicians (RCP) guidelines (2), with direct end-user input. Results following implementation revealed a considerable improvement in documented ceiling of care, urgency of task, and team member assignment, with 100% uptake of the new pro forma at both 4-week and 6-month post-implementation analyses. 88% of doctors surveyed perceived that the new pro forma improved patient safety (p<0.01, n=25), with 62% highlighting that it allowed doctors to work more efficiently. Results also revealed that 44% felt further improvements were needed and highlighted electronic solutions and handover training as main priorities. Handover briefing was subsequently incorporated into junior doctor induction and education modules delivered, with good feedback. Following collaboration with key stakeholders and with end-user input, integrated electronic handover software was designed and funding secured. The software is currently under final development. Introducing a standardized handover pro forma can be an effective initial step in improving weekend handover. Handover education and end-user involvement are key in improving the process. Electronic handover solutions have been shown to significantly increase the quality of handover and are worth considering (9, 10).
Improving hospital weekend handover: a user-centered, standardised approach
Mehra, Avi; Henein, Christin
2014-01-01
Clinical handover remains one of the most perilous procedures in medicine (1). Weekend handover has emerged as a key area of concern with high variability in handover processes across hospitals (1, 2, 4, 5–10). Studying weekend handover processes within medicine at an acute teaching hospital revealed huge variability in documented content and structure. A total of 12 different pro formas were in use by the medical day-team to hand over to the weekend team on-call. A Likert survey of doctors revealed 93% felt the current handover system needed improvement, with 71% stating that it did not ensure patient safety (Chi-squared, p-value <0.001, n=32). Semi-structured interviews of doctors identified common themes including “a lack of consistency in approach”, “poor standardization”, and “high variability”. Seeking to address concerns of standardization, a standardized handover pro forma was developed using Royal College of Physicians (RCP) guidelines (2), with direct end-user input. Results following implementation revealed a considerable improvement in documented ceiling of care, urgency of task, and team member assignment, with 100% uptake of the new pro forma at both 4-week and 6-month post-implementation analyses. 88% of doctors surveyed perceived that the new pro forma improved patient safety (p<0.01, n=25), with 62% highlighting that it allowed doctors to work more efficiently. Results also revealed that 44% felt further improvements were needed and highlighted electronic solutions and handover training as main priorities. Handover briefing was subsequently incorporated into junior doctor induction and education modules delivered, with good feedback. Following collaboration with key stakeholders and with end-user input, integrated electronic handover software was designed and funding secured. The software is currently under final development. Introducing a standardized handover pro forma can be an effective initial step in improving weekend handover. Handover education and end-user involvement are key in improving the process. Electronic handover solutions have been shown to significantly increase the quality of handover and are worth considering (9, 10). PMID:26734248
Spacelab software development and integration concepts study report. Volume 2: Appendices
NASA Technical Reports Server (NTRS)
1973-01-01
Software considerations were developed for incorporation in the spacelab systems design, and include management concepts for top-down structured programming, composite designs for modular programs, and team management methods for production programming.
NASA Astrophysics Data System (ADS)
Gordov, Evgeny; Shiklomanov, Alexander; Okladinikov, Igor; Prusevich, Alex; Titov, Alexander
2016-04-01
Description and first results of the cooperative project "Development of Distributed Research Center for monitoring and projecting of regional climatic and environmental changes" recently started by SCERT IMCES and ESRC UNH are reported. The project is aimed at developing a prototype hardware and software platform for a Distributed Research Center (DRC) for monitoring and projecting regional climatic and environmental changes over the areas of mutual interest, and at demonstrating the benefits of a collaboration that complements skills and regional knowledge across the northern extratropics. In the framework of the project, innovative approaches to "cloud" processing and analysis of large geospatial datasets will be developed on the technical platforms of two leading U.S. and Russian institutions involved in research of climate change and its consequences. Anticipated results will create a pathway for the development and deployment of thematic international virtual research centers focused on interdisciplinary environmental studies by international research teams. The DRC under development will combine the best features and functionality of the information-computational systems RIMS (http://rims.unh.edu) and CLIMATE (http://climate.scert.ru/), developed earlier by the cooperating teams and widely used in environmental studies of Northern Eurasia. The project includes several major directions of research (Tasks) listed below. 1. Development of the architecture and definition of the major hardware and software components of the DRC for monitoring and projecting regional environmental changes. 2. Development of an information database and computing software suite for distributed processing and analysis of large geospatial data hosted at ESRC and IMCES SB RAS. 3. Development of a geoportal, thematic web client, and web services providing international research teams with access to "cloud" computing resources at the DRC; two options will be executed: access through a basic graphical web browser and access using geographic information systems (GIS). 4. Using the output of the first three tasks, compilation of the DRC prototype, its validation, and testing of the DRC's feasibility for analyses of recent regional environmental changes over Northern Eurasia and North America. Results of the first stage of the Project implementation are presented. This work is supported by the Ministry of Education and Science of the Russian Federation, Agreement № 14.613.21.0037.
Nolden, Marco; Zelzer, Sascha; Seitel, Alexander; Wald, Diana; Müller, Michael; Franz, Alfred M; Maleike, Daniel; Fangerau, Markus; Baumhauer, Matthias; Maier-Hein, Lena; Maier-Hein, Klaus H; Meinzer, Hans-Peter; Wolf, Ivo
2013-07-01
The Medical Imaging Interaction Toolkit (MITK) has been available as open-source software for almost 10 years now. In this period the requirements of software systems in the medical image processing domain have become increasingly complex. The aim of this paper is to show how MITK evolved into a software system that is able to cover all steps of a clinical workflow including data retrieval, image analysis, diagnosis, treatment planning, intervention support, and treatment control. MITK provides modularization and extensibility on different levels. In addition to the original toolkit, a module system, micro services for small, system-wide features, a service-oriented architecture based on the Open Services Gateway initiative (OSGi) standard, and an extensible and configurable application framework allow MITK to be used, extended and deployed as needed. A refined software process was implemented to deliver high-quality software, ease the fulfillment of regulatory requirements, and enable teamwork in mixed-competence teams. MITK has been applied by a worldwide community and integrated into a variety of solutions, either at the toolkit level or as an application framework with custom extensions. The MITK Workbench has been released as a highly extensible and customizable end-user application. Optional support for tool tracking, image-guided therapy, diffusion imaging as well as various external packages (e.g. CTK, DCMTK, OpenCV, SOFA, Python) is available. MITK has also been used in several FDA/CE-certified applications, which demonstrates the high-quality software and rigorous development process. MITK provides a versatile platform with a high degree of modularization and interoperability and is well suited to meet the challenging tasks of today's and tomorrow's clinically motivated research.
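As a conceptual illustration of the service-registration pattern behind such a modular, micro-service-style architecture, the Python sketch below shows the idea in miniature; MITK itself is C++ with an OSGi-style service layer, and none of these names come from its API.

    # Conceptual illustration of service registration and lookup (not MITK code).
    services = {}

    def register_service(interface: str, implementation) -> None:
        services.setdefault(interface, []).append(implementation)

    def get_service(interface: str):
        providers = services.get(interface, [])
        return providers[-1] if providers else None  # most recently registered wins

    class GaussianSmoothing:
        def apply(self, image):
            return f"smoothed({image})"

    register_service("ImageFilter", GaussianSmoothing())
    print(get_service("ImageFilter").apply("ct_scan_01"))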
Software thresholds alter the bias of actigraphy for monitoring sleep in team-sport athletes.
Fuller, Kate L; Juliff, Laura; Gore, Christopher J; Peiffer, Jeremiah J; Halson, Shona L
2017-08-01
Actical® actigraphy is commonly used to monitor athlete sleep. The proprietary software, called Actiware®, processes data with three different sleep-wake thresholds (Low, Medium or High), but there is no standardisation regarding their use. The purpose of this study was to examine validity and bias of the sleep-wake thresholds for processing Actical® sleep data in team sport athletes. This was a validation study comparing actigraphy against the accepted gold standard, polysomnography (PSG). Sixty-seven nights of sleep were recorded simultaneously with polysomnography and Actical® devices. Individual night data was compared across five sleep measures for each sleep-wake threshold using Actiware® software. Accuracy of each sleep-wake threshold compared with PSG was evaluated from mean bias with 95% confidence limits, Pearson product-moment correlation and associated standard error of estimate. The Medium threshold generated the smallest mean bias compared with polysomnography for total sleep time (8.5 min), sleep efficiency (1.8%) and wake after sleep onset (-4.1 min); whereas the Low threshold had the smallest bias (7.5 min) for wake bouts. Bias in sleep onset latency was the same across thresholds (-9.5 min). The standard error of the estimate was similar across all thresholds; total sleep time ∼25 min, sleep efficiency ∼4.5%, wake after sleep onset ∼21 min, and wake bouts ∼8 counts. Sleep parameters measured by the Actical® device are greatly influenced by the sleep-wake threshold applied. In the present study the Medium threshold produced the smallest bias for most parameters compared with PSG. Given the magnitude of measurement variability, confidence limits should be employed when interpreting changes in sleep parameters. Copyright © 2017 Sports Medicine Australia. All rights reserved.
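The mean-bias comparison against polysomnography can be sketched as below; the paired values are invented for illustration and the code is unrelated to the Actiware software or the study's data.

    # Sketch of a mean bias with approximate 95% confidence limits for paired
    # actigraphy vs. PSG total sleep time (minutes). Values are invented.
    import statistics

    pairs = [(452, 440), (420, 415), (398, 405), (465, 450), (430, 428)]

    diffs = [actigraphy - psg for actigraphy, psg in pairs]
    bias = statistics.mean(diffs)
    se = statistics.stdev(diffs) / len(diffs) ** 0.5
    ci_low, ci_high = bias - 1.96 * se, bias + 1.96 * se

    print(f"mean bias = {bias:.1f} min (95% CI {ci_low:.1f} to {ci_high:.1f})")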
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alex; Billings, Jay Jay; de Almeida, Valmor F
2011-08-01
This report details the progress made in the development of the Reprocessing Plant Toolkit (RPTk) for the DOE Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. RPTk is an ongoing development effort intended to provide users with an extensible, integrated, and scalable software framework for the modeling and simulation of spent nuclear fuel reprocessing plants by enabling the insertion and coupling of user-developed physicochemical modules of variable fidelity. The NEAMS Safeguards and Separations IPSC (SafeSeps) and the Enabling Computational Technologies (ECT) supporting program element have partnered to release an initial version of the RPTk with a focus on software usability and utility. RPTk implements a data flow architecture that is the source of the system's extensibility and scalability. Data flows through physicochemical modules sequentially, with each module importing data, evolving it, and exporting the updated data to the next downstream module. This is accomplished through various architectural abstractions designed to give RPTk true plug-and-play capabilities. A simple application of this architecture, as well as RPTk data flow and evolution, is demonstrated in Section 6 with an application consisting of two coupled physicochemical modules. The remaining sections describe this ongoing work in full, from system vision and design inception to full implementation. Section 3 describes the relevant software development processes used by the RPTk development team. These processes allow the team to manage system complexity and ensure stakeholder satisfaction. This section also details the work done on the RPTk "black box" and "white box" models, with a special focus on the separation of concerns between the RPTk user interface and application runtime. Sections 4 and 5 discuss the application runtime component in more detail, and describe the dependencies, behavior, and rigorous testing of its constituent components.
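The sequential import-evolve-export data flow described above can be pictured with a short sketch; the module names and quantities below are hypothetical and the code is not taken from RPTk.

    # Minimal sketch of a sequential data-flow pipeline (hypothetical modules).
    def dissolver(stream):
        stream["dissolved_fraction"] = 0.98
        return stream

    def solvent_extraction(stream):
        stream["uranium_recovered"] = stream["dissolved_fraction"] * 0.95
        return stream

    PIPELINE = [dissolver, solvent_extraction]

    def run(pipeline, stream):
        # Each module imports the data, evolves it, and exports it downstream.
        for module in pipeline:
            stream = module(stream)
        return stream

    print(run(PIPELINE, {"fuel_batch": "A-17"}))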
Software Engineering for Scientific Computer Simulations
NASA Astrophysics Data System (ADS)
Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.
2004-11-01
Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.
Learning to Write Programs with Others: Collaborative Quadruple Programming
ERIC Educational Resources Information Center
Arora, Ritu; Goel, Sanjay
2012-01-01
Most software development is carried out by teams of software engineers working collaboratively to achieve the desired goal. Consequently software development education not only needs to develop a student's ability to write programs that can be easily comprehended by others and be able to comprehend programs written by others, but also the ability…
Are Vulnerability Disclosure Deadlines Justified?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miles McQueen; Jason L. Wright; Lawrence Wellman
2011-09-01
Vulnerability research organizations Rapid7, the Google Security team, and the Zero Day Initiative recently imposed grace periods for public disclosure of vulnerabilities. The grace periods ranged from 45 to 182 days, after which disclosure might occur with or without an effective mitigation from the affected software vendor. At this time there is indirect evidence that the shorter grace periods of 45 and 60 days may not be practical. However, there is strong evidence that the recently announced Zero Day Initiative grace period of 182 days yields benefit in speeding up the patch creation process, and may be practical for many software products. Unfortunately, there is also evidence that the 182-day grace period results in more vulnerability announcements without an available patch.
NASA Technical Reports Server (NTRS)
Johnson, Jeffrey R.
2006-01-01
This viewgraph presentation reviews the problems that non-mission researchers have in accessing data to use in their analysis of Mars. The increasing complexity of Mars datasets results in custom software development by instrument teams that is often the only means to visualize and analyze the data. The proposed solutions are to continue efforts toward synergizing data from multiple missions and making the data, software, and derived products available in standardized, easily accessible formats; to encourage release of "lite" versions of mission-related software prior to end-of-mission; and to process planetary image data systematically and in a coordinated way so that it is available in an easily accessed form. The recommendations of the Mars Environmental GIS Workshop are reviewed.
Datathons and Software to Promote Reproducible Research.
Celi, Leo Anthony; Lokhandwala, Sharukh; Montgomery, Robert; Moses, Christopher; Naumann, Tristan; Pollard, Tom; Spitz, Daniel; Stretch, Robert
2016-08-24
Datathons facilitate collaboration between clinicians, statisticians, and data scientists in order to answer important clinical questions. Previous datathons have resulted in numerous publications of interest to the critical care community and serve as a viable model for interdisciplinary collaboration. We report on an open-source software package called Chatto that was created by members of our group, in the context of the second international Critical Care Datathon, held in September 2015. Datathon participants formed teams to discuss potential research questions and the methods required to address them. They were provided with the Chatto suite of tools to facilitate their teamwork. Each multidisciplinary team spent the next 2 days with clinicians working alongside data scientists to write code, extract and analyze data, and reformulate their queries in real time as needed. All projects were then presented on the last day of the datathon to a panel of judges that consisted of clinicians and scientists. Use of Chatto was particularly effective in the datathon setting, enabling teams to reduce the time spent configuring their research environments to just a few minutes, a process that would normally take hours to days. Chatto continued to serve as a useful research tool after the conclusion of the datathon. This suite of tools fulfills two purposes: (1) facilitation of interdisciplinary teamwork through archiving and version control of datasets, analytical code, and team discussions, and (2) advancement of research reproducibility by functioning postpublication as an online environment in which independent investigators can rerun or modify analyses with relative ease. With the introduction of Chatto, we hope to solve a variety of challenges presented by collaborative data mining projects while improving research reproducibility.
1998-08-20
In Firing Room 1 at KSC, Shuttle launch team members put the Shuttle system through an integrated simulation. The control room is set up with software used to simulate flight and ground systems in the launch configuration. A Simulation Team, composed of KSC engineers, introduces 12 or more major problems to prepare the launch team for worst-case scenarios. Such tests and simulations keep the Shuttle launch team sharp and ready for liftoff. The next liftoff is targeted for Oct. 29.
The Transition to a Many-core World
NASA Astrophysics Data System (ADS)
Mattson, T. G.
2012-12-01
The need to increase performance within a fixed energy budget has pushed the computer industry to many-core processors. This is grounded in the physics of computing and is not a trend that will just go away. It is hard to overestimate the profound impact of many-core processors on software developers. Virtually every facet of the software development process will need to change to adapt to these new processors. In this talk, we will look at many-core hardware and consider its evolution from a perspective grounded in the CPU. We will show that the number of cores will inevitably increase, but in addition, a quest to maximize performance per watt will push these cores to be heterogeneous. We will show that the inevitable result of these changes is a computing landscape where the distinction between the CPU and the GPU is blurred. We will then consider the much more pressing problem of software in a many-core world. Writing software for heterogeneous many-core processors is well beyond the ability of current programmers. One solution is to support a software development process where programmer teams are split into two distinct groups: a large group of domain-expert productivity programmers and a much smaller team of computer-scientist efficiency programmers. The productivity programmers work in terms of high-level frameworks to express the concurrency in their problems while avoiding any details of how that concurrency is exploited. The second group, the efficiency programmers, map applications expressed in terms of these frameworks onto the target many-core system. In other words, we can solve the many-core software problem by creating a software infrastructure that only requires a small subset of programmers to become master parallel programmers. This is different from the discredited dream of automatic parallelism. Note that productivity programmers still need to define the architecture of their software in a way that exposes the concurrency inherent in their problem. We submit that domain-expert programmers understand "what is concurrent". The parallel programming problem emerges from the complexity of "how that concurrency is utilized" on real hardware. The research described in this talk was carried out in collaboration with the ParLab at UC Berkeley. We use a design pattern language to define the high-level frameworks exposed to domain-expert, productivity programmers. We then use tools from the SEJITS project (Selective Embedded Just-In-Time Specializers) to build the software transformation tool chains that turn these framework-oriented designs into highly efficient code. The final ingredient is a software platform to serve as a target for these tools. One such platform is the OpenCL industry standard for programming heterogeneous systems. We will briefly describe OpenCL and show how it provides a vendor-neutral software target for current and future many-core systems, whether CPU-based, GPU-based, or heterogeneous combinations of the two.
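To make the two-tier split concrete, here is a minimal Python sketch, assuming a hypothetical framework function and using the standard multiprocessing module as a stand-in for the efficiency layer; it illustrates the idea only and is not the ParLab or SEJITS tooling itself.

```python
# Minimal sketch of the productivity/efficiency split (hypothetical names; not the
# ParLab or SEJITS API). The domain expert states only *what* is concurrent; the
# framework decides *how* it runs - here via multiprocessing, but a SEJITS-style
# specializer could instead emit OpenCL or vectorized C for the same description.
from multiprocessing import Pool

def parallel_map(func, data, workers=4):
    """Efficiency layer: one possible mapping of a declared-concurrent loop
    onto a multicore CPU."""
    with Pool(workers) as pool:
        return pool.map(func, data)

def brightness_correction(pixel):
    """Productivity layer: the domain expert writes only the per-element science."""
    return min(255, int(pixel * 1.2))

if __name__ == "__main__":
    image_row = [10, 100, 200, 250]
    print(parallel_map(brightness_correction, image_row))  # [12, 120, 240, 255]
```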
A proposed research program in information processing
NASA Technical Reports Server (NTRS)
Schorr, Herbert
1992-01-01
The goal of the Formalized Software Development (FSD) project was to demonstrate improvements in the productivity of software development and maintenance through the use of a new software lifecycle paradigm. The paradigm calls for the mechanical, but human-guided, derivation of software implementations from formal specifications of the desired software behavior. It relies on altering a system's specification and rederiving its implementation as the standard technology for software maintenance. A system definition for this paradigm is composed of a behavioral specification together with a body of annotations that control the derivation of executable code from the specification. Annotations generally achieve the selection of certain data representations and/or algorithms that are consistent with, but not mandated by, the behavioral specification. In doing this, they may yield systems which exhibit only certain behaviors among multiple alternatives permitted by the behavioral specification. The FSD project proposed to construct a testbed in which to explore the realization of this new paradigm. The testbed was to provide an operational support environment for software design, implementation, and maintenance. The testbed was proposed to provide highly automated support for individual programmers ('programming in the small'), but not to address the additional needs of programming teams ('programming in the large'). The testbed proposed to focus on supporting rapid construction and evolution of useful prototypes of software systems, as opposed to focusing on the problems of achieving production quality performance of systems.
The STARLINK software collection
NASA Astrophysics Data System (ADS)
Penny, A. J.; Wallace, P. T.; Sherman, J. C.; Terret, D. L.
1993-12-01
A demonstration will be given of some recent Starlink software. STARLINK is: a network of computers used by UK astronomers; a collection of programs for the calibration and analysis of astronomical data; a team of people giving hardware, software and administrative support. The Starlink Project has been in operation since 1980 to provide UK astronomers with interactive image processing and data reduction facilities. There are now Starlink computer systems at 25 UK locations, serving about 1500 registered users. The Starlink software collection now has about 25 major packages covering a wide range of astronomical data reduction and analysis techniques, as well as many smaller programs and utilities. At the core of most of the packages is a common `software environment', which provides many of the functions which applications need and offers standardized methods of structuring and accessing data. The software environment simplifies programming and support, and makes it easy to use different packages for different stages of the data reduction. Users see a consistent style, and can mix applications without hitting problems of differing data formats. The Project group coordinates the writing and distribution of this software collection, which is Unix based. Outside the UK, Starlink is used at a large number of places, which range from installations at major UK telescopes, which are Starlink-compatible and managed like Starlink sites, to individuals who run only small parts of the Starlink software collection.
Managing Risk in Safety Critical Operations - Lessons Learned from Space Operations
NASA Technical Reports Server (NTRS)
Gonzalez, Steven A.
2002-01-01
The Mission Control Center (MCC) at Johnson Space Center (JSC) has a rich legacy of supporting Human Space Flight operations throughout the Apollo, Shuttle and International Space Station eras. Through the evolution of ground operations and the Mission Control Center facility, NASA has gained a wealth of experience of what it takes to manage the risk in Safety Critical Operations, especially when human life is at risk. The focus of the presentation will be on the processes (training, operational rigor, team dynamics) that enable the JSC/MCC team to be so successful. The presentation will also share the evolution of the Mission Control Center architecture and how the evolution was introduced while managing the risk to the programs supported by the team. The details of the MCC architecture (e.g., the specific software, hardware or tools used in the facility) will not be shared at the conference since it would not give any additional insight as to how risk is managed in Space Operations.
2014-08-15
CAPE CANAVERAL, Fla. – The Kennedy Space Center Visitor Complex Spaceperson poses for a photo with Carver Middle School students and their teacher from Orlando, Florida, during the Zero Robotics finals competition at NASA Kennedy Space Center's Space Station Processing Facility in Florida. The team, members of the After School All-Stars, were regional winners and advanced to the final competition. For the competition, students designed software to control Synchronized Position Hold Engage and Reorient Experimental Satellites, or SPHERES, and competed with other teams locally. Zero Robotics is a robotics programming competition where the robots are SPHERES. The competition starts online, where teams program the SPHERES to solve an annual challenge. After several phases of virtual competition in a simulation environment that mimics the real SPHERES, finalists are selected to compete in a live championship aboard the space station. Students compete to win a technically challenging game by programming their strategies into the SPHERES satellites. The programs are autonomous and the students cannot control the satellites during the test. Photo credit: NASA/Daniel Casper
Applications of Modeling and Simulation for Flight Hardware Processing at Kennedy Space Center
NASA Technical Reports Server (NTRS)
Marshall, Jennifer L.
2010-01-01
The Boeing Design Visualization Group (DVG) is responsible for the creation of highly-detailed representations of both on-site facilities and flight hardware using computer-aided design (CAD) software, with a focus on the ground support equipment (GSE) used to process and prepare the hardware for space. Throughout my ten weeks at this center, I have had the opportunity to work on several projects: the modification of the Multi-Payload Processing Facility (MPPF) High Bay, weekly mapping of the Space Station Processing Facility (SSPF) floor layout, kinematics applications for the Orion Command Module (CM) hatches, and the design modification of the Ares I Upper Stage hatch for maintenance purposes. The main goal of each of these projects was to generate an authentic simulation or representation using DELMIA V5 software. This allowed for evaluation of facility layouts, support equipment placement, and greater process understanding once it was used to demonstrate future processes to customers and other partners. As such, I have had the opportunity to contribute to a skilled team working on diverse projects with a central goal of providing essential planning resources for future center operations.
Image processing and products for the Magellan mission to Venus
NASA Technical Reports Server (NTRS)
Clark, Jerry; Alexander, Doug; Andres, Paul; Lewicki, Scott; Mcauley, Myche
1992-01-01
The Magellan mission to Venus is providing planetary scientists with massive amounts of new data about the surface geology of Venus. Digital image processing is an integral part of the ground data system that provides data products to the investigators. The mosaicking of synthetic aperture radar (SAR) image data from the spacecraft is being performed at JPL's Multimission Image Processing Laboratory (MIPL). MIPL hosts and supports the Image Data Processing Subsystem (IDPS), which was developed in a VAXcluster environment of hardware and software that includes optical disk jukeboxes and the TAE-VICAR (Transportable Applications Executive-Video Image Communication and Retrieval) system. The IDPS is being used by processing analysts of the Image Data Processing Team to produce the Magellan image data products. Various aspects of the image processing procedure are discussed.
A Capstone Course on Agile Software Development Using Scrum
ERIC Educational Resources Information Center
Mahnic, V.
2012-01-01
In this paper, an undergraduate capstone course in software engineering is described that not only exposes students to agile software development, but also makes it possible to observe the behavior of developers using Scrum for the first time. The course requires students to work as Scrum Teams, responsible for the implementation of a set of user…
Which factors affect software projects maintenance cost more?
Dehaghani, Sayed Mehdi Hejazi; Hajrahimi, Nafiseh
2013-03-01
The software industry has made significant progress in recent years. The entire life of software includes two phases: production and maintenance. Software maintenance cost is increasingly growing, and estimates show that about 90% of software life cost is related to its maintenance phase. Extracting and considering the factors affecting software maintenance cost helps to estimate the cost and to reduce it by controlling the factors. In this study, the factors affecting software maintenance cost were determined and then ranked based on their priority, after which effective ways to reduce maintenance costs were presented. This paper is a research study. Fifteen software systems related to health care center information systems and hospital functions at Isfahan University of Medical Sciences were studied in the years 2010 to 2011. Among medical software maintenance team members, 40 were selected as the sample. After interviews with experts in this field, factors affecting maintenance cost were determined. To prioritize the factors with the Analytic Hierarchy Process (AHP), measurement criteria (the factors found) were first assessed by members of the maintenance team and eventually prioritized with the help of EC software. Based on the results of this study, 32 factors were obtained and classified into six groups. "Project" was ranked the most effective feature in maintenance cost, with the highest priority. By taking into account major elements such as careful feasibility studies of IT projects, full documentation, and involving the designers in the maintenance phase, good results can be achieved in reducing maintenance costs and increasing the longevity of the software.
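As an illustration of the ranking step, the following Python sketch computes Analytic Hierarchy Process (AHP) priorities from a pairwise comparison matrix using the common column-normalization method; the matrix values and factor names are invented and are not the study's data.

```python
# Illustrative AHP priority calculation (column normalization, then row averages).
# The comparison values and factor names below are made up for this example only.
def ahp_priorities(matrix):
    n = len(matrix)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(row) / n for row in normalized]

# Hypothetical pairwise comparison of three cost factors:
# Project vs Documentation vs Personnel.
pairwise = [
    [1.0,   3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 0.5, 1.0],
]
print(ahp_priorities(pairwise))  # weights sum to ~1.0, with "Project" weighted highest
```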
Certification of production-quality gLite Job Management components
NASA Astrophysics Data System (ADS)
Andreetto, P.; Bertocco, S.; Capannini, F.; Cecchi, M.; Dorigo, A.; Frizziero, E.; Giacomini, F.; Gianelle, A.; Mezzadri, M.; Molinari, E.; Monforte, S.; Prelz, F.; Rebatto, D.; Sgaravatto, M.; Zangrando, L.
2011-12-01
With the advent of the recent European Union (EU) funded projects aimed at achieving an open, coordinated and proactive collaboration among the European communities that provide distributed computing services, stricter requirements and quality standards will be demanded of middleware providers. Such a highly competitive and dynamic environment, organized to comply with a business-oriented model, has already started pursuing quality criteria, thus requiring rigorous procedures, interfaces and roles to be formally defined for each step of the software life-cycle. This will ensure quality-certified releases and updates of the Grid middleware. In the European Middleware Initiative (EMI), the release management for one or more components will be organized into Product Team (PT) units, fully responsible for delivering production-ready, quality-certified software and for coordinating with each other to contribute to the EMI release as a whole. This paper presents the certification process, with respect to integration, installation, configuration and testing, adopted at INFN by the Product Team responsible for the gLite Web-Service based Computing Element (CREAM CE) and for the Workload Management System (WMS). The resources used, the testbed layout, the integration and deployment methods, and the certification steps taken to provide feedback to developers and to ensure quality results are described.
Technology-driven dietary assessment: a software developer’s perspective
Buday, Richard; Tapia, Ramsey; Maze, Gary R.
2015-01-01
Dietary researchers need new software to improve nutrition data collection and analysis, but creating information technology is difficult. Software development projects may be unsuccessful due to inadequate understanding of needs, management problems, technology barriers or legal hurdles. Cost overruns and schedule delays are common. Barriers facing scientific researchers developing software include workflow, cost, schedule, and team issues. Different methods of software development and the role that intellectual property rights play are discussed. A dietary researcher must carefully consider multiple issues to maximize the likelihood of success when creating new software. PMID:22591224
Proposing an Evidence-Based Strategy for Software Requirements Engineering.
Lindoerfer, Doris; Mansmann, Ulrich
2016-01-01
This paper discusses an evidence-based approach to software requirements engineering. The approach is called evidence-based, since it uses publications on the specific problem as a surrogate for stakeholder interests, to formulate risks and testing experiences. This complements the idea that agile software development models are more relevant, in which requirements and solutions evolve through collaboration between self-organizing cross-functional teams. The strategy is exemplified and applied to the development of a Software Requirements list used to develop software systems for patient registries.
Albert, S; Cristofari, J-P; Cox, A; Bensimon, J-L; Guedon, C; Barry, B
2011-12-01
The techniques of free tissue transfer are mainly used for mandibular reconstruction by specialized surgical teams. This type of reconstruction is mostly performed for head and neck cancers affecting the mandibular bone and requiring a wide surgical resection and interruption of the mandible. To decrease the duration of the operation, the surgical procedure generally involves two teams, one devoted to cancer resection and the other to raising the fibular flap and performing the reconstruction. For better preparation of this surgical procedure, we propose here the use of medical imaging software enabling mandibular reconstructions in three dimensions using the CT scan done during the initial disease-staging checkup. The software used is Osirix®, developed since 2004 by a team of radiologists from Geneva and UCLA, running on Apple® computers and downloadable free of charge in its basic version. We report here our experience with this software in 17 patients, with preoperative three-dimensional modelling of the mandible and of the segment of mandible to be removed. The software also forecasts the number of fibula fragments needed and the location of the osteotomies. Copyright © 2009 Elsevier Masson SAS. All rights reserved.
Arrott, M.; Alexander, Corrine; Graybeal, J.; Mueller, C.; Signell, R.; de La Beaujardière, J.; Taylor, A.; Wilkin, J.; Powell, B.; Orcutt, J.
2011-01-01
The NOAA-led U.S. Integrated Ocean Observing System (IOOS) and the National Science Foundation's Ocean Observatories Initiative (OOI) have been collaborating since 2007 on advanced tools and technologies that ensure open access to ocean observations and models. Initial collaboration focused on serving ocean data via cloud computing, a key component of the OOI cyberinfrastructure (CI) architecture. As the OOI transitioned from planning to execution in the Fall of 2009, an OOI/IOOS team developed a customer-based "use case" to align more closely with the emerging objectives of the OOI-CI team's first software release scheduled for Summer 2011 and provide a quantitative capacity for stress-testing these tools and protocols. A requirements process was initiated with coastal modelers, focusing on improved workflows to deliver ocean observation data. Accomplishments to date include the documentation and assessment of scientific workflows for two "early adopter" modeling teams from IOOS Regional partners (Rutgers-the State University of New Jersey and the University of Hawaii's School of Ocean and Earth Science and Technology) to enable full understanding of data sources and needs; generation of all-inclusive lists of the data sets required and those obtainable through IOOS; a more complete understanding of areas where IOOS can expand data access capabilities to better serve the needs of the modeling community; and development of "data set agents" (software) to facilitate data acquisition from numerous data providers and conversions of the data format to the OOI-CI canonical form. © 2011 MTS.
Model-driven approach to data collection and reporting for quality improvement
Curcin, Vasa; Woodcock, Thomas; Poots, Alan J.; Majeed, Azeem; Bell, Derek
2014-01-01
Continuous data collection and analysis have been shown essential to achieving improvement in healthcare. However, the data required for local improvement initiatives are often not readily available from hospital Electronic Health Record (EHR) systems or not routinely collected. Furthermore, improvement teams are often restricted in time and funding thus requiring inexpensive and rapid tools to support their work. Hence, the informatics challenge in healthcare local improvement initiatives consists of providing a mechanism for rapid modelling of the local domain by non-informatics experts, including performance metric definitions, and grounded in established improvement techniques. We investigate the feasibility of a model-driven software approach to address this challenge, whereby an improvement model designed by a team is used to automatically generate required electronic data collection instruments and reporting tools. To that goal, we have designed a generic Improvement Data Model (IDM) to capture the data items and quality measures relevant to the project, and constructed Web Improvement Support in Healthcare (WISH), a prototype tool that takes user-generated IDM models and creates a data schema, data collection web interfaces, and a set of live reports, based on Statistical Process Control (SPC) for use by improvement teams. The software has been successfully used in over 50 improvement projects, with more than 700 users. We present in detail the experiences of one of those initiatives, Chronic Obstructive Pulmonary Disease project in Northwest London hospitals. The specific challenges of improvement in healthcare are analysed and the benefits and limitations of the approach are discussed. PMID:24874182
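As a sketch of the kind of SPC computation such generated reports rely on, the following Python snippet derives the centre line and 3-sigma limits for an individuals (XmR) control chart; the data and approach are generic illustrations, not the WISH implementation.

```python
# Generic individuals (XmR) control-chart limits of the sort an SPC-based improvement
# report might show. This is an illustration only, not the WISH implementation.
def xmr_limits(values):
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    centre = sum(values) / len(values)
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma_hat = mr_bar / 1.128          # d2 constant for moving ranges of size 2
    return centre, centre - 3 * sigma_hat, centre + 3 * sigma_hat

weekly_counts = [12, 15, 11, 14, 18, 13, 16]   # made-up weekly measurements
centre, lcl, ucl = xmr_limits(weekly_counts)
print(f"centre={centre:.1f}  LCL={lcl:.1f}  UCL={ucl:.1f}")
```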
Hands-on Marine Geology and Geophysics Field Instruction at the University of Texas
NASA Astrophysics Data System (ADS)
Saustrup, S.; Gulick, S. P. S.; Goff, J. A.; Fernandez, R.; Davis, M. B.; Duncan, D.
2015-12-01
The University of Texas Institute for Geophysics, part of the Jackson School of Geosciences, annually offers an intensive three-week marine geology and geophysics field course during the spring-summer intersession. Now in its ninth year, the course provides instruction in survey design, data acquisition, processing, interpretation, and visualization. Methods covered include seismic reflection, multibeam bathymetry, sidescan sonar, and sediment sampling. The emphasis of the course is team-oriented, hands-on, field training in real-world situations. The course begins with classroom instruction covering the field area and field methods, followed by a week of at-sea field work in 4-student teams. The students then return to the classroom where they integrate, interpret, and visualize data using industry-standard software. The teams present results in a series of professional-level final presentations before academic and industry supporters. Our rotating field areas provide ideal locations for students to investigate coastal and sedimentary processes of the Gulf Coast and continental shelf. In the field, student teams rotate between two research vessels: the smaller vessel, the Jackson School's newly-commissioned R/V Scott Petty (26 feet LOA), is used principally for multibeam bathymetry, sidescan sonar, and sediment sampling; the other, NOAA's R/V Manta (82 feet LOA), is used for high-resolution seismic reflection, CHIRP sub-bottom profiling, multibeam bathymetry, gravity coring, and vibracoring. Teams also rotate through a field laboratory performing processing of geophysical data and sediment samples. This past year's course in Freeport, Texas, proceeded unabated despite concurrent record-breaking rainfall and flooding, which offered students a unique opportunity to observe and image, in real time, flood-related bedform migration on a time scale of hours. The data also allowed an in-class opportunity to examine natural and anthropogenic processes recorded in the river and coastal morphology and stratigraphy. http://www.ig.utexas.edu/research/mgg/courses/geof348K/
MaROS: Information Management Service
NASA Technical Reports Server (NTRS)
Allard, Daniel A.; Gladden, Roy E.; Wright, Jesse J.; Hy, Franklin H.; Rabideau, Gregg R.; Wallick, Michael N.
2011-01-01
This software is provided by the Mars Relay Operations Service (MaROS) task to a variety of Mars projects for the purpose of coordinating communications sessions between landed spacecraft assets and orbiting spacecraft assets at Mars. The Information Management Service centralizes a set of functions previously distributed across multiple spacecraft operations teams, and as such, greatly improves visibility into the end-to-end strategic coordination process. Most of the process revolves around the scheduling of communications sessions between the spacecraft during periods of time when a landed asset on Mars is geometrically visible by an orbiting spacecraft. These relay sessions are used to transfer data both to and from the landed asset via the orbiting asset on behalf of Earth-based spacecraft operators. This software component is an application process running as a Java virtual machine. The component provides all service interfaces via a Representational State Transfer (REST) protocol over https to external clients. There are two general interaction modes with the service: upload and download of data. For data upload, the service must execute logic specific to the upload data type and trigger any applicable calculations including pass delivery latencies and overflight conflicts. For data download, the software must retrieve and correlate requested information and deliver to the requesting client. The provision of this service enables several key advancements over legacy processes and systems. For one, this service represents the first time that end-to-end relay information is correlated into a single shared repository. The software also provides the first multimission latency calculator; previous latency calculations had been performed on a mission-by-mission basis.
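The description above mentions two interaction modes (upload and download) over REST and HTTPS. The following Python sketch shows what a client of such a service could look like; the host name, endpoint paths, and payload fields are hypothetical and do not represent the actual MaROS interface.

```python
# Hypothetical client for a MaROS-style REST service using the requests library.
# Only the upload/download-over-HTTPS pattern comes from the description above;
# the URL, endpoints, and fields are placeholders.
import requests

BASE = "https://relay-ops.example.nasa.gov/maros/api"   # placeholder host and path

def upload_overflight_request(session, payload):
    # Upload mode: the service validates the data type and triggers calculations
    # such as pass delivery latencies and overflight conflict checks.
    resp = session.post(f"{BASE}/overflights", json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()

def download_pass_schedule(session, lander, start, end):
    # Download mode: the service correlates the requested relay information
    # and returns it to the client.
    resp = session.get(f"{BASE}/passes",
                       params={"lander": lander, "start": start, "end": end},
                       timeout=30)
    resp.raise_for_status()
    return resp.json()

# Example usage (would require a reachable service and credentials):
# with requests.Session() as s:
#     schedule = download_pass_schedule(s, "MSL", "2011-08-01", "2011-08-07")
```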
2004-10-01
Top-Level Process for Identification and Analysis of Safety-Related Requirements 4.4 Collaborators The primary SEI team members were Don Firesmith...Graff, M. & van Wyk, K. Secure Coding Principles & Practices. O'Reilly, 2003. • Hoglund, G. & McGraw, G. Exploiting Software: How to Break Code. Addison...Eisenecker, U.; Glück, R.; Vandevoorde, D.; & Veldhuizen, T. "Generative Programming and Active Libraries (Extended Abstract)" <osl.iu.edu/~tveldhui/papers
Computer Aided Software Engineering (CASE) Environment Issues.
1987-06-01
tasks tend to be error prone and slowv when done by humans . Ti-.c,. are e’.el nt anidates for automation using a computer. (MacLennan. 10S1. p. 51 2...CASE r,’sourCcs; * human resources. Lonsisting of the people who use and facilitate utilization in !:1e case of manual resource, of the environment...engineering process in a given er,%irent rnizthe nature of rnanua! and human resources. CA.SU_ -esources should provide the softwvare enizincerin2 team
ERIC Educational Resources Information Center
Stamm, Meelis; Stamm, Raini; Koskel, Sade
2008-01-01
Study aim: Assessment of the feasibility of using the authors' own computer software "Game" at competitions. Material and methods: The data were collected during the Estonian championships in 2006 for male volleyball teams of the 13-15-year age group (n = 8). In all games, the performance of both teams was recorded in parallel with two computers. A total of…
1998-08-19
KENNEDY SPACE CENTER, FLA. -- In Firing Room 1 at KSC, Shuttle launch team members put the Shuttle system through an integrated simulation. The control room is set up with software used to simulate flight and ground systems in the launch configuration. A Simulation Team, comprising KSC engineers, introduces 12 or more major problems to prepare the launch team for worst-case scenarios. Such tests and simulations keep the Shuttle launch team sharp and ready for liftoff. The next liftoff is targeted for Oct. 29.
1998-08-20
KENNEDY SPACE CENTER, FLA. -- In Firing Room 1 at KSC, Shuttle launch team members put the Shuttle system through an integrated simulation. The control room is set up with software used to simulate flight and ground systems in the launch configuration. A Simulation Team, comprising KSC engineers, introduces 12 or more major problems to prepare the launch team for worst-case scenarios. Such tests and simulations keep the Shuttle launch team sharp and ready for liftoff. The next liftoff is targeted for Oct. 29.
Putting the Power of Configuration in the Hands of the Users
NASA Technical Reports Server (NTRS)
Al-Shihabi, Mary-Jo; Brown, Mark; Rigolini, Marianne
2011-01-01
The goal was to reduce the overall cost of human space flight while maintaining the most demanding standards for safety and mission success. In support of this goal, a project team was chartered to replace 18 legacy Space Shuttle nonconformance processes and systems with one fully integrated system. Problem Reporting and Corrective Action (PRACA) processes provide a closed-loop system for the identification, disposition, resolution, closure, and reporting of all Space Shuttle hardware/software problems. PRACA processes are integrated throughout the Space Shuttle organizational processes and are critical to assuring a safe and successful program. The primary project objectives were to develop a fully integrated system that provides an automated workflow with electronic signatures, to support multiple NASA programs and contracts with a single "system" architecture, and to define standard processes, implement best practices, and minimize process variations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Kris A.; Scholtz, Jean; Whiting, Mark A.
The VAST Challenge has been a popular venue for academic and industry participants for over ten years. Many participants comment that the majority of their time in preparing VAST Challenge entries is discovering elements in their software environments that need to be redesigned in order to solve the given task. Fortunately, there is no need to wait until the VAST Challenge is announced to test out software systems. The Visual Analytics Benchmark Repository contains all past VAST Challenge tasks, data, solutions and submissions. This paper details the various types of evaluations that may be conducted using the Repository information. In this paper we describe how developers can do informal evaluations of various aspects of their visual analytics environments using VAST Challenge information. Aspects that can be evaluated include the appropriateness of the software for various tasks, the various data types and formats that can be accommodated, the effectiveness and efficiency of the process supported by the software, and the intuitiveness of the visualizations and interactions. Researchers can compare their visualizations and interactions to those submitted to determine novelty. In addition, the paper provides pointers to various guidelines that software teams can use to evaluate the usability of their software. While these evaluations are not a replacement for formal evaluation methods, this information can be extremely useful during the development of visual analytics environments.
Sleep apps: what role do they play in clinical medicine?
Lorenz, Christopher P; Williams, Adrian J
2017-11-01
Today's smartphones boast more computing power than the Apollo Guidance Computer. Given the ubiquity and popularity of smartphones, are we already carrying around miniaturized sleep labs in our pockets? There is still a lack of validation studies for consumer sleep technologies in general and apps for monitoring sleep in particular. To overcome this gap, multidisciplinary teams are needed that focus on feasibility work at the intersection of software engineering, data science and clinical sleep medicine. To date, no smartphone app for monitoring sleep through movement sensors has been successfully validated against polysomnography, despite the role and validity of actigraphy in sleep medicine having been well established. Missing separation of concerns, not methodology, poses the key limiting factor: the two essential steps in the monitoring process, data collection and scoring, are chained together inside a black box due to the closed nature of consumer devices. This leaves researchers with little room for influence, nor can they access raw data. Multidisciplinary teams that wield complete power over the sleep monitoring process are sorely needed.
Team Production of Learner-Controlled Courseware: A Progress Report.
ERIC Educational Resources Information Center
Bunderson, C. Victor
A project being conducted by the MITRE Corporation and Brigham Young University (BYU) is developing hardware, software, and courseware for the TICCIT (Time Shared, Interactive, Computer Controlled Information Television) computer-assisted instructional system. Four instructional teams at BYU, each having an instructional psychologist, subject…
Automated Sequence Processor: Something Old, Something New
NASA Technical Reports Server (NTRS)
Streiffert, Barbara; Schrock, Mitchell; Fisher, Forest; Himes, Terry
2012-01-01
High productivity is required for operations teams to meet schedules, and risk must be minimized. Scripting is used to automate processes, and scripts perform essential operations functions. The Automated Sequence Processor (ASP) was a grass-roots task built to automate the command uplink process, and a system engineering task for ASP revitalization was organized. ASP is a set of approximately 200 scripts written in Perl, C Shell, AWK, and other scripting languages. ASP processes, checks, and packages non-interactive commands automatically. Non-interactive commands are guaranteed to be safe and have been checked by hardware or software simulators. ASP verifies that commands are non-interactive, processes them through a command simulator, and then packages them if there are no errors. ASP must be active 24 hours a day, 7 days a week.
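The check-simulate-package flow described above can be illustrated with a small Python sketch; ASP itself is a collection of Perl, C Shell, and AWK scripts, so the function names and command fields here are purely hypothetical.

```python
# Illustration of the ASP-style check -> simulate -> package flow. The real ASP is
# ~200 Perl/C Shell/AWK scripts; these function names and fields are hypothetical.
def is_non_interactive(command):
    # Only commands pre-verified as safe (non-interactive) may be auto-processed.
    return not command.get("requires_interaction", True)

def simulate(command):
    # Stand-in for the command simulator; returns a list of errors (empty = clean).
    return [] if command.get("opcode") else ["missing opcode"]

def process_uplink(commands):
    packaged, rejected = [], []
    for cmd in commands:
        if is_non_interactive(cmd) and not simulate(cmd):
            packaged.append(cmd)       # ready for uplink packaging
        else:
            rejected.append(cmd)       # held for manual review
    return packaged, rejected

cmds = [{"opcode": "NOOP", "requires_interaction": False},
        {"opcode": "",     "requires_interaction": False}]
print(process_uplink(cmds))
```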
The shuttle main engine: A first look
NASA Technical Reports Server (NTRS)
Schreur, Barbara
1996-01-01
Anyone entering the Space Shuttle Main Engine (SSME) team attends a two-week course to become familiar with the design and workings of the engine. This course provides intensive coverage of the individual hardware items and their functions. Some individuals, particularly those involved with software maintenance and development, have felt overwhelmed by this volume of material and their lack of a logical framework in which to place it. To provide this logical framework, it was decided that a brief self-taught introduction to the overall operation of the SSME should be designed. To aid new team members with an interest in the software, this new course should also explain the structure and functioning of the controller and its software. This paper presents a description of this presentation.
NASA Technical Reports Server (NTRS)
Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David
2015-01-01
The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission conditions and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms, a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET). For addressing fault management (FM) early in the development lifecycle for the SLS program, NASA formed the M&FM team as part of the Integrated Systems Health Management and Automation Branch under the Spacecraft Vehicle Systems Department at the Marshall Space Flight Center (MSFC). To support the development of the FM algorithms, the VMET developed by the M&FM team provides the ability to integrate the algorithms, perform test cases, and integrate vendor-supplied physics-based launch vehicle (LV) subsystem models. Additionally, the team has developed processes for implementing and validating the M&FM algorithms for concept validation and risk reduction. The flexibility of the VMET capabilities enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS, GNC, and others. One of the principal functions of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software test and validation processes. In any software development process there is inherent risk in the interpretation and implementation of concepts from requirements and test cases into flight software, compounded by potential human errors throughout the development and regression testing lifecycle.
Risk reduction is addressed by the M&FM group but in particular by the Analysis Team working with other organizations such as S&MA, Structures and Environments, GNC, Orion, Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission (LOM) and Loss of Crew (LOC) probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses to be tested in VMET to ensure reliable failure detection, and confirm responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - the ARINC 653-partitioned Operating System, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as those used by FSW. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure their effectiveness and performance in the exterior FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET, followed by Section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. Its details are then further presented in Section IV, followed by Section V, which presents integration, test status, and state analysis. Finally, Section VI addresses the summary and forward directions, followed by the appendices presenting relevant information on terminology and documentation.
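To illustrate the kind of detection-and-response logic a test bed such as VMET exercises, here is a generic Python state-machine sketch; the states, threshold, persistence count, and response are invented and do not represent the actual SLS M&FM algorithms.

```python
# Generic detect-and-respond state machine of the kind a test bed like VMET might
# exercise with nominal and off-nominal cases. Names, thresholds, and the response
# are invented for illustration; this is not SLS M&FM flight logic.
class FaultMonitor:
    def __init__(self, limit, persistence=3):
        self.limit = limit
        self.persistence = persistence   # consecutive exceedances before tripping
        self.count = 0
        self.state = "NOMINAL"

    def step(self, measurement):
        if self.state == "NOMINAL":
            self.count = self.count + 1 if measurement > self.limit else 0
            if self.count >= self.persistence:
                self.state = "FAULT_DETECTED"
        elif self.state == "FAULT_DETECTED":
            self.state = "RESPONSE_ISSUED"   # e.g. request a safing action
        return self.state

monitor = FaultMonitor(limit=100.0)
for sample in [90.0, 105.0, 110.0, 120.0, 95.0]:
    print(monitor.step(sample))   # stays NOMINAL, then trips after 3 exceedances
```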
Using Docker Containers to Extend Reproducibility Architecture for the NASA Earth Exchange (NEX)
NASA Technical Reports Server (NTRS)
Votava, Petr; Michaelis, Andrew; Spaulding, Ryan; Becker, Jeffrey C.
2016-01-01
NASA Earth Exchange (NEX) is a data, supercomputing and knowledge collaboratory that houses NASA satellite, climate and ancillary data where a focused community can come together to address large-scale challenges in Earth sciences. As NEX has been growing into a petabyte-size platform for analysis, experiments and data production, it has been increasingly important to enable users to easily retrace their steps, identify what datasets were produced by which process chains, and give them the ability to readily reproduce their results. This can be a tedious and difficult task even for a small project, but is almost impossible on large processing pipelines. We have developed an initial reproducibility and knowledge capture solution for the NEX; however, if users want to move the code to another system, whether it is their home institution cluster, laptop or the cloud, they have to find, build and install all the required dependencies that would run their code. This can be a very tedious and tricky process and is a big impediment to moving code to data and reproducibility outside the original system. The NEX team has tried to assist users who wanted to move their code into OpenNEX on Amazon cloud by creating custom virtual machines with all the software and dependencies installed, but this, while solving some of the issues, creates a new bottleneck that requires the NEX team to be involved with any new request, updates to virtual machines and general maintenance support. In this presentation, we will describe a solution that integrates NEX and Docker to bridge the gap in code-to-data migration. The core of the solution is the semi-automatic conversion of science codes, tools and services that are already tracked and described in the NEX provenance system, to Docker - an open-source Linux container software. Docker is available on most computer platforms, easy to install and capable of seamlessly creating and/or executing any application packaged in the appropriate format. We believe this is an important step towards seamless process deployment in heterogeneous environments that will enhance community access to NASA data and tools in a scalable way, promote software reuse, and improve reproducibility of scientific results.
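As a sketch of how a containerized processing step could be launched once code has been packaged this way, the snippet below uses the Docker SDK for Python; the image name, command, and paths are placeholders, and the snippet is not the NEX team's actual tooling.

```python
# Run one containerized processing step with the Docker SDK for Python (docker-py).
# The image tag, command, and host paths are placeholders for illustration only.
import docker

client = docker.from_env()

logs = client.containers.run(
    image="nex/example-pipeline:1.0",                    # hypothetical image tag
    command="python run_step.py --scene LC08_example",   # hypothetical entry point
    volumes={"/data/input":  {"bind": "/input",  "mode": "ro"},
             "/data/output": {"bind": "/output", "mode": "rw"}},
    remove=True,   # clean up the container after the step completes
)
print(logs.decode())
```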
Using Docker Containers to Extend Reproducibility Architecture for the NASA Earth Exchange (NEX)
NASA Astrophysics Data System (ADS)
Votava, P.; Michaelis, A.; Spaulding, R.; Becker, J. C.
2016-12-01
NASA Earth Exchange (NEX) is a data, supercomputing and knowledge collaboratory that houses NASA satellite, climate and ancillary data where a focused community can come together to address large-scale challenges in Earth sciences. As NEX has been growing into a petabyte-size platform for analysis, experiments and data production, it has been increasingly important to enable users to easily retrace their steps, identify what datasets were produced by which process chains, and give them the ability to readily reproduce their results. This can be a tedious and difficult task even for a small project, but is almost impossible on large processing pipelines. We have developed an initial reproducibility and knowledge capture solution for the NEX; however, if users want to move the code to another system, whether it is their home institution cluster, laptop or the cloud, they have to find, build and install all the required dependencies that would run their code. This can be a very tedious and tricky process and is a big impediment to moving code to data and reproducibility outside the original system. The NEX team has tried to assist users who wanted to move their code into OpenNEX on Amazon cloud by creating custom virtual machines with all the software and dependencies installed, but this, while solving some of the issues, creates a new bottleneck that requires the NEX team to be involved with any new request, updates to virtual machines and general maintenance support. In this presentation, we will describe a solution that integrates NEX and Docker to bridge the gap in code-to-data migration. The core of the solution is the semi-automatic conversion of science codes, tools and services that are already tracked and described in the NEX provenance system, to Docker - an open-source Linux container software. Docker is available on most computer platforms, easy to install and capable of seamlessly creating and/or executing any application packaged in the appropriate format. We believe this is an important step towards seamless process deployment in heterogeneous environments that will enhance community access to NASA data and tools in a scalable way, promote software reuse, and improve reproducibility of scientific results.
The Use of Flexible, Interactive, Situation-Focused Software for the E-Learning of Mathematics.
ERIC Educational Resources Information Center
Farnsworth, Ralph Edward
This paper discusses the classroom, home, and distance use of new, flexible, interactive, application-oriented software known as Active Learning Suite. The actual use of the software, not just a controlled experiment, is reported on. Designed for the e-learning of university mathematics, the program was developed by a joint U.S.-Russia team and…
Zero to Integration in Eight Months, the Dawn Ground Data System Engineering Challenge
NASA Technical Reports Server (NTRS)
Dubon, Lydia P.
2006-01-01
The Dawn Project has presented the Ground Data System (GDS) with technical challenges driven by cost and schedule constraints commonly associated with National Aeronautics and Space Administration (NASA) Discovery Projects. The Dawn mission consists of a new and exciting deep space partnership among the Jet Propulsion Laboratory (JPL), which manages the project and is responsible for flight operations; Orbital Sciences Corporation (OSC), which is the spacecraft builder and is responsible for flight system test and integration; and the University of California, Los Angeles (UCLA), which is responsible for science planning and operations. As a cost-capped mission, one of Dawn's implementation strategies is to leverage from both flight and ground heritage. OSC's ground data system is used for flight system test and integration as part of the flight heritage strategy. Mission operations, however, are to be conducted with JPL's ground system. The system engineering challenge of dealing with two heterogeneous ground systems emerged immediately. During the first technical interchange meeting between JPL's GDS Team and OSC's Flight Software Team, in August 2003, the need to integrate the ground system with the flight software was brought to the table. This need was driven by the project's commitment to enable instrument engineering model integration in a spacecraft simulator environment, for both demonstration and risk mitigation purposes, by April 2004. This paper will describe the system engineering approach that was undertaken by JPL's GDS Team in order to meet the technical challenge within a non-negotiable eight-month schedule. Key to the success was adherence to fundamental systems engineering practices: decomposition of the project request into manageable requirements; integration of multiple ground disciplines and experts into a focused team effort; definition of a structured yet flexible development process; definition of an in-process risk reduction plan; and aggregation of the intermediate products to an integrated final product. In addition, this paper will highlight the role of lessons learned from the integration experience. The lessons learned from an early GDS deployment have served as the foundation for the design and implementation of the Dawn Ground Data System.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Kristin A.; Scholtz, Jean; Whiting, Mark A.
The VAST Challenge has been a popular venue for academic and industry participants for over ten years. Many participants comment that the majority of their time in preparing VAST Challenge entries is discovering elements in their software environments that need to be redesigned in order to solve the given task. Fortunately, there is no need to wait until the VAST Challenge is announced to test out software systems. The Visual Analytics Benchmark Repository contains all past VAST Challenge tasks, data, solutions and submissions. This paper details the various types of evaluations that may be conducted using the Repository information. In this paper we describe how developers can do informal evaluations of various aspects of their visual analytics environments using VAST Challenge information. Aspects that can be evaluated include the appropriateness of the software for various tasks, the various data types and formats that can be accommodated, the effectiveness and efficiency of the process supported by the software, and the intuitiveness of the visualizations and interactions. Researchers can compare their visualizations and interactions to those submitted to determine novelty. In addition, the paper provides pointers to various guidelines that software teams can use to evaluate the usability of their software. While these evaluations are not a replacement for formal evaluation methods, this information can be extremely useful during the development of visual analytics environments.
SDDL- SOFTWARE DESIGN AND DOCUMENTATION LANGUAGE
NASA Technical Reports Server (NTRS)
Kleine, H.
1994-01-01
Effective, efficient communication is an essential element of the software development process. The Software Design and Documentation Language (SDDL) provides an effective communication medium to support the design and documentation of complex software applications. SDDL supports communication between all the members of a software design team and provides for the production of informative documentation on the design effort. Even when an entire development task is performed by a single individual, it is important to explicitly express and document communication between the various aspects of the design effort including concept development, program specification, program development, and program maintenance. SDDL ensures that accurate documentation will be available throughout the entire software life cycle. SDDL offers an extremely valuable capability for the design and documentation of complex programming efforts ranging from scientific and engineering applications to data management and business systems. Throughout the development of a software design, the SDDL-generated Software Design Document always represents the definitive word on the current status of the ongoing, dynamic design development process. The document is easily updated and readily accessible in a familiar, informative form to all members of the development team. This makes the Software Design Document an effective instrument for reconciling misunderstandings and disagreements in the development of design specifications, engineering support concepts, and the software design itself. Using the SDDL-generated document to analyze the design makes it possible to eliminate many errors that might not be detected until coding and testing is attempted. As a project management aid, the Software Design Document is useful for monitoring progress and for recording task responsibilities. SDDL is a combination of language, processor, and methodology. The SDDL syntax consists of keywords to invoke design structures and a collection of directives which control processor actions. The designer has complete control over the choice of keywords, commanding the capabilities of the processor in a way which is best suited to communicating the intent of the design. The SDDL processor translates the designer's creative thinking into an effective document for communication. The processor performs as many automatic functions as possible, thereby freeing the designer's energy for the creative effort. Document formatting includes graphical highlighting of structure logic, accentuation of structure escapes and module invocations, logic error detection, and special handling of title pages and text segments. The SDDL-generated document contains software design summary information including module invocation hierarchy, module cross reference, and cross reference tables of user-selected words or phrases appearing in the document. The basic forms of the methodology are module and block structures and the module invocation statement. A design is stated in terms of modules that represent problem abstractions which are complete and independent enough to be treated as separate problem entities. Blocks are lower-level structures used to build the modules. Both kinds of structures may have an initiator part, a terminator part, an escape segment, or a substructure. The SDDL processor is written in PASCAL for batch execution on a DEC VAX series computer under VMS. SDDL was developed in 1981 and last updated in 1984.
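One of the summaries mentioned above, the module invocation cross reference, can be illustrated with a small Python sketch; because SDDL leaves keyword choice to the designer, the MODULE and INVOKE keywords below are placeholders chosen for this example, not fixed SDDL syntax.

```python
# Toy derivation of a module-invocation table from design text, illustrating one of
# the summaries an SDDL-style processor produces. The MODULE/INVOKE keywords are
# placeholders for this sketch; SDDL lets the designer choose the actual keywords.
from collections import defaultdict

def invocation_table(design_lines):
    table, current = defaultdict(list), None
    for line in design_lines:
        words = line.split()
        if not words:
            continue
        if words[0] == "MODULE":
            current = words[1]
        elif words[0] == "INVOKE" and current:
            table[current].append(words[1])
    return dict(table)

design = ["MODULE TELEMETRY_READER",
          "  INVOKE FRAME_SYNC",
          "  INVOKE DECOMMUTATE",
          "MODULE FRAME_SYNC"]
print(invocation_table(design))   # {'TELEMETRY_READER': ['FRAME_SYNC', 'DECOMMUTATE']}
```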
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturtevant, Judith E.; Heaphy, Robert; Hodges, Ann Louise
2006-09-01
The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR 1.3.2 and 1.3.6 and to a Department of Energy document, ASCI Software Quality Engineering: Goals, Principles, and Guidelines. This document also identifies the responsibilities of ASC management and software project teams in implementing the software quality practices and in assessing progress towards achieving their software quality goals.
KEYNOTE 2: Rebuilding the Tower of Babel - Better Communication with Standards
2013-02-01
and a member of the Object Management Group (OMG) SysML specification team. He has been developing multi-national complex systems for almost 35 years...critical systems development, virtual team management, systems development, and software development with UML, SysML and Architectural Frameworks
Evolving the Reuse Process at the Flight Dynamics Division (FDD) Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Condon, S.; Seaman, C.; Basili, Victor; Kraft, S.; Kontio, J.; Kim, Y.
1996-01-01
This paper presents the interim results from the Software Engineering Laboratory's (SEL) Reuse Study. The team conducting this study has, over the past few months, been studying the Generalized Support Software (GSS) domain asset library and architecture, and the various processes associated with it. In particular, we have characterized the process used to configure GSS-based attitude ground support systems (AGSS) to support satellite missions at NASA's Goddard Space Flight Center. To do this, we built detailed models of the tasks involved, the people who perform these tasks, and the interdependencies and information flows among these people. These models were based on information gleaned from numerous interviews with people involved in this process at various levels. We also analyzed effort data in order to determine the cost savings in moving from actual development of AGSSs to support each mission (which was necessary before GSS was available) to configuring AGSS software from the domain asset library. While characterizing the GSS process, we became aware of several interesting factors which affect the successful continued use of GSS. Many of these issues fall under the subject of evolving technologies, which were not available at the inception of GSS, but are now. Some of these technologies could be incorporated into the GSS process, thus making the whole asset library more usable. Other technologies are being considered as an alternative to the GSS process altogether. In this paper, we outline some of the issues we will be considering in our continued study of GSS and the impact of evolving technologies.
Agile Software Teams: How They Engage with Systems Engineering on DoD Acquisition Programs
2014-07-01
under Contract No. FA8721-05-C-0003 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded...issues that would preclude or limit the use of Agile methods within the DoD” [Broadus 2013]. As operational tempos increase and programs fight to...environment in which it operates. This makes software different from other disciplines that have tolerances, generally resulting in software engineering
Spaceport Command and Control System - Support Software Development
NASA Technical Reports Server (NTRS)
Tremblay, Shayne
2016-01-01
The Information Architecture Support (IAS) Team, the component of the Spaceport Command and Control System (SCCS) that is in charge of all the pre-runtime data, was in need of some report features to be added to their internal web application, Information Architecture (IA). Development of these reports is crucial for the speed and productivity of the development team, as they are needed to quickly and efficiently make specific and complicated data requests against the massive IA database. These reports had been put on the back burner, as other development of IA was prioritized over them, but the need for them resulted in internships being created to fill the gap. The creation of these reports required learning Ruby on Rails development, along with related web technologies, and they will continue to serve IAS and other support software teams and their IA data needs.
Telemetry Monitoring and Display Using LabVIEW
NASA Technical Reports Server (NTRS)
Wells, George; Baroth, Edmund C.
1993-01-01
The Measurement Technology Center of the Instrumentation Section configures automated data acquisition systems to meet the diverse needs of JPL's experimental research community. These systems are based on personal computers or workstations (Apple, IBM/Compatible, Hewlett-Packard, and Sun Microsystems) and often include integrated data analysis, visualization and experiment control functions in addition to data acquisition capabilities. These integrated systems may include sensors, signal conditioning, data acquisition interface cards, software, and a user interface. Graphical programming is used to simplify configuration of such systems. Employment of a graphical programming language is the most important factor in enabling the implementation of data acquisition, analysis, display and visualization systems at low cost. Other important factors are the use of commercial software packages and off-the-shelf data acquisition hardware where possible. Understanding the experimenter's needs is also critical. An interactive approach to user interface construction and training of operators is also important. One application was created as a result of a competitive effort between a graphical programming language team and a text-based C language programming team to verify the advantages of using a graphical programming language approach. With approximately eight weeks of funding over a period of three months, the text-based programming team accomplished about 10% of the basic requirements, while the Macintosh/LabVIEW team accomplished about 150%, having gone beyond the original requirements to simulate a telemetry stream and provide utility programs. This application verified that using graphical programming can significantly reduce software development time. As a result of this initial effort, additional follow-on work was awarded to the graphical programming team.
What Not To Do: Anti-patterns for Developing Scientific Workflow Software Components
NASA Astrophysics Data System (ADS)
Futrelle, J.; Maffei, A. R.; Sosik, H. M.; Gallager, S. M.; York, A.
2013-12-01
Scientific workflows promise to enable efficient scaling-up of researcher code to handle large datasets and workloads, as well as documentation of scientific processing via standardized provenance records, etc. Workflow systems and related frameworks for coordinating the execution of otherwise separate components are limited, however, in their ability to overcome software engineering design problems commonly encountered in pre-existing components, such as scripts developed externally by scientists in their laboratories. In practice, this often means that components must be rewritten or replaced in a time-consuming, expensive process. In the course of an extensive workflow development project involving large-scale oceanographic image processing, we have begun to identify and codify 'anti-patterns'--problematic design characteristics of software--that make components fit poorly into complex automated workflows. We have gone on to develop and document low-effort solutions and best practices that efficiently address the anti-patterns we have identified. The issues, solutions, and best practices can be used to evaluate and improve existing code, as well as guiding the development of new components. For example, we have identified a common anti-pattern we call 'batch-itis' in which a script fails and then cannot perform more work, even if that work is not precluded by the failure. The solution we have identified--removing unnecessary looping over independent units of work--is often easier to code than the anti-pattern, as it eliminates the need for complex control flow logic in the component. Other anti-patterns we have identified are similarly easy to identify and often easy to fix. We have drawn upon experience working with three science teams at Woods Hole Oceanographic Institution, each of which has designed novel imaging instruments and associated image analysis code. By developing use cases and prototypes within these teams, we have undertaken formal evaluations of software components developed by programmers with widely varying levels of expertise, and have been able to discover and characterize a number of anti-patterns. Our evaluation methodology and testbed have also enabled us to assess the efficacy of strategies to address these anti-patterns according to scientifically relevant metrics, such as the ability of algorithms to perform faster than the rate of data acquisition and the accuracy of workflow component output relative to ground truth. The set of anti-patterns and solutions we have identified augments the body of more well-known software engineering anti-patterns by addressing additional concerns that obtain when a software component has to function as part of a workflow assembled out of independently-developed codebases. Our experience shows that identifying and resolving these anti-patterns reduces development time and improves performance without reducing component reusability.
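A hedged illustration of the 'batch-itis' anti-pattern named in the abstract, and of the simpler per-item alternative it recommends, is sketched below. The processing function and file names are hypothetical, not the authors' oceanographic code.

```python
# Sketch of 'batch-itis' versus a per-unit-of-work component.
import sys

def process(path):
    """Placeholder for a scientist's per-image analysis step."""
    if path.endswith("corrupt.png"):
        raise ValueError(f"cannot decode {path}")
    return f"{path}: ok"

def batch_main(paths):
    # Anti-pattern: the script owns the loop, so one bad file stops all
    # remaining (independent) work unless extra control-flow logic is added.
    for p in paths:
        print(process(p))          # a single exception aborts the whole batch

def item_main():
    # Fix: the component handles exactly one unit of work; the workflow
    # engine supplies the looping, retries, and parallelism.
    path = sys.argv[1] if len(sys.argv) > 1 else "example.png"
    print(process(path))

if __name__ == "__main__":
    item_main()
```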
A Browser-Based Multi-User Working Environment for Physicists
NASA Astrophysics Data System (ADS)
Erdmann, M.; Fischer, R.; Glaser, C.; Klingebiel, D.; Komm, M.; Müller, G.; Rieger, M.; Steggemann, J.; Urban, M.; Winchen, T.
2014-06-01
Many programs in experimental particle physics do not yet have a graphical interface, or impose demanding platform and software requirements. With the most recent development of the VISPA project, we provide graphical interfaces to existing software programs and access to multiple computing clusters through standard web browsers. The scalable client-server system allows analyses to be performed in sizable teams, and disburdens the individual physicist from installing and maintaining a software environment. The VISPA graphical interfaces are implemented in HTML, JavaScript and extensions to the Python webserver. The webserver uses SSH and RPC to access user data, code and processes on remote sites. As example applications we present graphical interfaces for steering the reconstruction framework OFFLINE of the Pierre-Auger experiment, and the analysis development toolkit PXL. The browser-based VISPA system was field-tested in biweekly homework of a third-year physics course by more than 100 students. We discuss the system deployment and the evaluation by the students.
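The pattern described (a Python web backend reaching user data and processes on a remote cluster over SSH) can be sketched as follows. This is not VISPA code; the use of the paramiko library, the host name, and the command are assumptions for illustration only.

```python
# Hedged sketch: a web backend executing a command on a remote site over SSH.
import paramiko

def run_remote(host, user, command):
    """Execute a command on the remote site and return its stdout and stderr."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname=host, username=user)   # assumes key-based auth
    try:
        _, stdout, stderr = client.exec_command(command)
        return stdout.read().decode(), stderr.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    out, err = run_remote("cluster.example.org", "physicist", "ls ~/analyses")
    print(out or err)
```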
Spaceport Command and Control System Automated Verification Software Development
NASA Technical Reports Server (NTRS)
Backus, Michael W.
2017-01-01
For as long as we have walked the Earth, humans have always been explorers. We have visited our nearest celestial body and sent Voyager 1 beyond our solar system out into interstellar space. Now it is finally time for us to step beyond our home and onto another planet. The Spaceport Command and Control System (SCCS) is being developed along with the Space Launch System (SLS) to take us on a journey further than ever attempted. Within SCCS are separate subsystems and system level software, each of which has to be tested and verified. Testing is a long and tedious process, so automating it will be much more efficient and also helps to remove the possibility of human error from mission operations. I was part of a team of interns and full-time engineers who automated tests for the requirements on SCCS, and with that was able to help verify that the software systems are performing as expected.
Continuous integration for concurrent MOOSE framework and application development on GitHub
Slaughter, Andrew E.; Peterson, John W.; Gaston, Derek R.; ...
2015-11-20
For the past several years, Idaho National Laboratory’s MOOSE framework team has employed modern software engineering techniques (continuous integration, joint application/framework source code repositories, automated regression testing, etc.) in developing closed-source multiphysics simulation software (Gaston et al., Journal of Open Research Software vol. 2, article e10, 2014). In March 2014, the MOOSE framework was released under an open source license on GitHub, significantly expanding and diversifying the pool of current active and potential future contributors on the project. Despite this recent growth, the same philosophy of concurrent framework and application development continues to guide the project’s development roadmap. Several specific practices, including techniques for managing multiple repositories, conducting automated regression testing, and implementing a cascading build process are discussed in this short paper. Furthermore, special attention is given to describing the manner in which these practices naturally synergize with the GitHub API and GitHub-specific features such as issue tracking, Pull Requests, and project forks.
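One small step a cascading build process of this kind might take is sketched below: after the framework's regression tests pass, a commit status is reported back through the GitHub API so that dependent application repositories can begin their own builds. The repository name, SHA, status context, and token are placeholders, not the MOOSE project's actual configuration.

```python
# Hedged sketch: report a commit status via the GitHub REST API as one
# link in a cascading framework/application build chain.
import os
import requests

def report_status(owner, repo, sha, state, context, description):
    url = f"https://api.github.com/repos/{owner}/{repo}/statuses/{sha}"
    resp = requests.post(
        url,
        headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
        json={"state": state, "context": context, "description": description},
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    report_status("example-org", "example-framework", "0123abcd" * 5,
                  "success", "ci/framework-regression",
                  "framework tests passed; downstream apps may build")
```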
Issues in NASA Program and Project Management: Focus on Project Planning and Scheduling
NASA Technical Reports Server (NTRS)
Hoffman, Edward J. (Editor); Lawbaugh, William M. (Editor)
1997-01-01
Topics addressed include: Planning and scheduling training for working project teams at NASA, overview of project planning and scheduling workshops, project planning at NASA, new approaches to systems engineering, software reliability assessment, and software reuse in wind tunnel control systems.
Using Selection Pressure as an Asset to Develop Reusable, Adaptable Software Systems
NASA Technical Reports Server (NTRS)
Berrick, Stephen; Lynnes, Christopher
2007-01-01
The Goddard Earth Sciences Data and Information Services Center (GES DISC) at NASA has over the years developed and honed several reusable architectural components for supporting large-scale data centers with a large customer base. These include a processing system (S4PM) and an archive system (S4PA) based upon a workflow engine called the Simple Scalable Script based Science Processor (S4P) and an online data visualization and analysis system (Giovanni). These subsystems are currently reused internally in a variety of combinations to implement customized data management on behalf of instrument science teams and other science investigators. Some of these subsystems (S4P and S4PM) have also been reused by other data centers for operational science processing. Our experience has been that development and utilization of robust, interoperable and reusable software systems can actually flourish in environments defined by heterogeneous commodity hardware systems, an emphasis on value-added customer service, and the continual goal of achieving higher cost efficiencies. The repeated internal reuse that is fostered by such an environment encourages and even forces changes to the software that make it more reusable and adaptable. Allowing and even encouraging such selective pressures on software development has been a key factor in the success of S4P and S4PM, which are now available to the open source community under the NASA Open Source Agreement.
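The general pattern behind a script-based science processor of this kind can be sketched as a directory-based workflow station: each station polls an input directory for work orders, runs its processing step, and hands the result to the next station. The sketch below is an illustration of that pattern under assumed directory names and a made-up JSON work-order format; it is not the actual S4P code.

```python
# Hedged sketch of a directory-based workflow "station".
import json
import pathlib
import time

def run_station(station_dir, downstream_dir, handler, poll_seconds=5):
    station = pathlib.Path(station_dir)
    downstream = pathlib.Path(downstream_dir)
    downstream.mkdir(parents=True, exist_ok=True)
    while True:
        for order in sorted(station.glob("DO.*.json")):   # pending work orders
            work = json.loads(order.read_text())
            result = handler(work)                         # the science step
            (downstream / order.name).write_text(json.dumps(result))
            order.unlink()                                 # mark order as done
        time.sleep(poll_seconds)

def example_handler(work):
    """Placeholder processing step: annotate the work order and pass it on."""
    work["status"] = "processed"
    return work
```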
Custom software development for use in a clinical laboratory
Sinard, John H.; Gershkovich, Peter
2012-01-01
In-house software development for use in a clinical laboratory is a controversial issue. Many of the objections raised are based on outdated software development practices, an exaggeration of the risks involved, and an underestimation of the benefits that can be realized. Buy versus build analyses typically do not consider total costs of ownership, and unfortunately decisions are often made by people who are not directly affected by the workflow obstacles or benefits that result from those decisions. We have been developing custom software for clinical use for over a decade, and this article presents our perspective on this practice. A complete analysis of the decision to develop or purchase must ultimately examine how the end result will mesh with the departmental workflow, and custom-developed solutions typically can have the greater positive impact on efficiency and productivity, substantially altering the decision balance sheet. Involving the end-users in preparation of the functional specifications is crucial to the success of the process. A large development team is not needed, and even a single programmer can develop significant solutions. Many of the risks associated with custom development can be mitigated by a well-structured development process, use of open-source tools, and embracing an agile development philosophy. In-house solutions have the significant advantage of being adaptable to changing departmental needs, contributing to efficient and higher quality patient care. PMID:23372985
Using CASE to Exploit Process Modeling in Technology Transfer
NASA Technical Reports Server (NTRS)
Renz-Olar, Cheryl
2003-01-01
A successful business will be one that has processes in place to run that business. Creating processes, reengineering processes, and continually improving processes can be accomplished through extensive modeling. Casewise(R) Corporate Modeler(TM) CASE is a computer aided software engineering tool that will enable the Technology Transfer Department (TT) at NASA Marshall Space Flight Center (MSFC) to capture these abilities. After successful implementation of CASE, it could then go on to be applied in other departments at MSFC and other centers at NASA. The success of a business process is dependent upon the players working as a team and continuously improving the process. A good process fosters customer satisfaction as well as internal satisfaction in the organizational infrastructure. CASE provides a method for business process success through functions consisting of systems and processes business models; specialized diagrams; matrix management; simulation; report generation and publishing; and linking, importing, and exporting documents and files. The software has an underlying repository or database to support these functions. The Casewise manual informs us that dynamics modeling is a technique used in business design and analysis. Feedback is used as a tool for the end users and generates different ways of dealing with the process. Feedback on this project resulted from the collection of issues through interviews conducted by a systems analyst with process coordinators and Technical Points of Contact (TPOCs).
ERIC Educational Resources Information Center
Al-Busaidi, Fatma; Al Hashmi, Abdullah; Al Musawi, Ali; Kazem, Ali
2016-01-01
This paper is part of a strategic research project that aimed to assess the effectiveness of the design and use of new software for Arabic language learning (ALL). However, the focus of this paper is to understand Arabic teachers' perceptions of the effectiveness of the software that was designed purposely by the project's team to facilitate ALL…
Repository-based software engineering program
NASA Technical Reports Server (NTRS)
Wilson, James
1992-01-01
The activities performed during September 1992 in support of Tasks 01 and 02 of the Repository-Based Software Engineering Program are outlined. The recommendations and implementation strategy defined at the September 9-10 meeting of the Reuse Acquisition Action Team (RAAT) are attached along with the viewgraphs and reference information presented at the Institute for Defense Analyses brief on legal and patent issues related to software reuse.
Computer Technology and Its Impact on Recreation and Sport Programs.
ERIC Educational Resources Information Center
Ross, Craig M.
This paper describes several types of computer programs that can be useful to sports and recreation programs. Computerized tournament scheduling software is helpful to recreation and parks staff working with tournaments of 50 teams/individuals or more. Important features include team capacity, league formation, scheduling conflicts, scheduling…
2014-08-15
CAPE CANAVERAL, Fla. – Kennedy Space Center Director and former astronaut Bob Cabana, talks to Florida middle school students and their teachers during the Zero Robotics finals competition at the center's Space Station Processing Facility in Florida. Students designed software to control Synchronized Position Hold Engage and Reorient Experimental Satellites, or SPHERES, and competed with other teams locally. The Zero Robotics is a robotics programming competition where the robots are SPHERES. The competition starts online, where teams program the SPHERES to solve an annual challenge. After several phases of virtual competition in a simulation environment that mimics the real SPHERES, finalists are selected to compete in a live championship aboard the space station. Students compete to win a technically challenging game by programming their strategies into the SPHERES satellites. The programs are autonomous and the students cannot control the satellites during the test. Photo credit: NASA/Daniel Casper
Using OpenEHR in SICTI an electronic health record system for critical medicine
NASA Astrophysics Data System (ADS)
Filgueira, R.; Odriazola, A.; Simini, F.
2007-11-01
SICTI is a software tool for registering health records in critical medicine environments. Version 1.0 has been in use since 2003. The Biomedical Engineering Group (Núcleo de Ingeniería Biomédica), with support from the Technological Development Programme (Programa de Desarrollo Tecnológico), decided to develop a new version to support additional critical medicine processes, based on a framework that would make the application adaptable to changes in the domain. The team analyzed three alternatives: to develop an original product based on new research, to base the development on the OpenEHR framework, or to use the HL7 RIM as the reference model for SICTI. The team opted for OpenEHR. This work describes the use of OpenEHR, its strong and weak points, and outlines perspectives for future work.
PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joubert, Wayne; Kothe, Douglas B; Nam, Hai Ah
2009-12-01
In 2009 the Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy (DOE) facility at the Oak Ridge National Laboratory (ORNL) National Center for Computational Sciences (NCCS), elicited petascale computational science requirements from leading computational scientists in the international science community. This effort targeted science teams whose projects received large computer allocation awards on OLCF systems. A clear finding of this process was that in order to reach their science goals over the next several years, multiple projects will require computational resources in excess of an order of magnitude more powerful than those currently available. Additionally, for the longer term, next-generation science will require computing platforms of exascale capability in order to reach DOE science objectives over the next decade. It is generally recognized that achieving exascale in the proposed time frame will require disruptive changes in computer hardware and software. Processor hardware will become necessarily heterogeneous and will include accelerator technologies. Software must undergo the concomitant changes needed to extract the available performance from this heterogeneous hardware. This disruption portends to be substantial, not unlike the change to the message passing paradigm in the computational science community over 20 years ago. Since technological disruptions take time to assimilate, we must aggressively embark on this course of change now, to insure that science applications and their underlying programming models are mature and ready when exascale computing arrives. This includes initiation of application readiness efforts to adapt existing codes to heterogeneous architectures, support of relevant software tools, and procurement of next-generation hardware testbeds for porting and testing codes. The 2009 OLCF requirements process identified numerous actions necessary to meet this challenge: (1) Hardware capabilities must be advanced on multiple fronts, including peak flops, node memory capacity, interconnect latency, interconnect bandwidth, and memory bandwidth. (2) Effective parallel programming interfaces must be developed to exploit the power of emerging hardware. (3) Science application teams must now begin to adapt and reformulate application codes to the new hardware and software, typified by hierarchical and disparate layers of compute, memory and concurrency. (4) Algorithm research must be realigned to exploit this hierarchy. (5) When possible, mathematical libraries must be used to encapsulate the required operations in an efficient and useful way. (6) Software tools must be developed to make the new hardware more usable. (7) Science application software must be improved to cope with the increasing complexity of computing systems. (8) Data management efforts must be readied for the larger quantities of data generated by larger, more accurate science models. Requirements elicitation, analysis, validation, and management comprise a difficult and inexact process, particularly in periods of technological change. Nonetheless, the OLCF requirements modeling process is becoming increasingly quantitative and actionable, as the process becomes more developed and mature, and the process this year has identified clear and concrete steps to be taken.
This report discloses (1) the fundamental science case driving the need for the next generation of computer hardware, (2) application usage trends that illustrate the science need, (3) application performance characteristics that drive the need for increased hardware capabilities, (4) resource and process requirements that make the development and deployment of science applications on next-generation hardware successful, and (5) summary recommendations for the required next steps within the computer and computational science communities.
State of the Practice of Intrusion Detection Technologies
2000-01-01
security incident response teams) - the role of IDS in threat management, such as defining alarm severity, monitoring, alerting, and policy-based...attacks in an effort to sneak under the radar of security specialists and intrusion detection software, a U.S. Navy network security team said today...to get the smoking gun," said Stephen Northcutt, head of the Shadow intrusion detection team at the Naval Surface Warfare Center. "To know what’s
1994-02-28
improvements. Release Manager: The Release Manager provides franchisees with media copies of existing libraries, as needed. Security...implementors, and potential library franchisees. Security Team: The Security Team assists the Security Officer with security analysis. Team members are...and Franchisees. A Potential User is an individual who requests a Library Account. A User Recruit has been sent a CARDS Library Account Registration
Virtual Team Governance: Addressing the Governance Mechanisms and Virtual Team Performance
NASA Astrophysics Data System (ADS)
Zhan, Yihong; Bai, Yu; Liu, Ziheng
As technology has improved and collaborative software has been developed, virtual teams with geographically dispersed members spread across diverse physical locations have become increasingly prominent. Virtual teams are supported by advancing communication technologies, which enable them largely to transcend time and space. Virtual teams have changed the corporate landscape; they are more complex and dynamic than traditional teams, since their members are spread across diverse geographical locations and play different roles within the team. Therefore, how to realize good governance of a virtual team and arrive at good virtual team performance is becoming critical and challenging. Good virtual team governance is essential for a high-performance virtual team. This paper explores the performance and the governance mechanisms of virtual teams. It establishes a model to explain the relationship between performance and governance mechanisms in virtual teams. The paper focuses on managing virtual teams and aims to identify strategies that help business organizations improve the performance of their virtual teams and meet the objectives of good virtual team management.
NASA Technical Reports Server (NTRS)
Weise, Timothy M
2012-01-01
NASA's Dawn mission to the asteroid Vesta and dwarf planet Ceres launched September 27, 2007 and arrived at Vesta in July of 2011. This mission uses ion propulsion to achieve the necessary delta-V to reach and maneuver at Vesta and Ceres. This paper will show how the evolution of ground system automation and process improvement allowed a relatively small engineering team to transition from cruise operations to asteroid operations while maintaining robust processes. The cruise to Vesta phase lasted almost 4 years and consisted of activities that were built with software tools, but each tool was open loop and required engineers to review the output to ensure consistency. Additionally, this same time period was characterized by the evolution from manually retrieved and reviewed data products to automatically generated data products and data value checking. Furthermore, the team originally took about three to four weeks to design and build about four weeks of spacecraft activities, with spacecraft contacts only once a week. Operations around the asteroid Vesta increased the tempo dramatically by transitioning from one contact a week to three or four contacts a week, to fourteen contacts a week (every 12 hours). This was accompanied by a similar increase in activity complexity as well as very fast turn-around activity design and build cycles. The design process became more automated and the tools became closed loop, allowing the team to build more activities without sacrificing rigor. Additionally, these activities were dependent on the results of flight system performance, so more automation was added to analyze the flight data and provide results in a timely fashion to feed the design cycle. All of this automation and process improvement enabled the engineers to focus on other aspects of spacecraft operations, including spacecraft health monitoring and anomaly resolution.
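The automated data-value checking mentioned above can be illustrated with a small sketch: downlinked engineering values are compared against a limit table and out-of-range channels are flagged for the team. The channel names and limits below are hypothetical, not actual Dawn telemetry.

```python
# Hedged sketch of automated data-value (limit) checking on telemetry.
LIMITS = {
    "ips_thruster_temp_C": (-20.0, 180.0),
    "bus_voltage_V": (22.0, 36.0),
}

def check_channels(samples, limits=LIMITS):
    """samples: {channel: [values]} -> list of (channel, value) violations."""
    violations = []
    for channel, values in samples.items():
        lo, hi = limits[channel]
        violations.extend((channel, v) for v in values if not lo <= v <= hi)
    return violations

if __name__ == "__main__":
    downlink = {"ips_thruster_temp_C": [75.2, 191.0], "bus_voltage_V": [28.1]}
    for channel, value in check_channels(downlink):
        print(f"LIMIT VIOLATION: {channel} = {value}")
```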
Perspectives on bioanalytical mass spectrometry and automation in drug discovery.
Janiszewski, John S; Liston, Theodore E; Cole, Mark J
2008-11-01
The use of high speed synthesis technologies has resulted in a steady increase in the number of new chemical entities active in the drug discovery research stream. Large organizations can have thousands of chemical entities in various stages of testing and evaluation across numerous projects on a weekly basis. Qualitative and quantitative measurements made using LC/MS are integrated throughout this process from early stage lead generation through candidate nomination. Nearly all analytical processes and procedures in modern research organizations are automated to some degree. This includes both hardware and software automation. In this review we discuss bioanalytical mass spectrometry and automation as components of the analytical chemistry infrastructure in pharma. Analytical chemists are presented as members of distinct groups with similar skillsets that build automated systems, manage test compounds, assays and reagents, and deliver data to project teams. The ADME-screening process in drug discovery is used as a model to highlight the relationships between analytical tasks in drug discovery. Emerging software and process automation tools are described that can potentially address gaps and link analytical chemistry related tasks. The role of analytical chemists and groups in modern 'industrialized' drug discovery is also discussed.
An overview of the model integration process: From pre ...
Integration of models requires linking models which can be developed using different tools, methodologies, and assumptions. We performed a literature review with the aim of improving our understanding of the model integration process, and also presenting better strategies for building integrated modeling systems. We identified five different phases that characterize the integration process: pre-integration assessment, preparation of models for integration, orchestration of models during simulation, data interoperability, and testing. Commonly, there is little reuse of existing frameworks beyond the development teams and not much sharing of science components across frameworks. We believe this must change to enable researchers and assessors to form complex workflows that leverage the current environmental science available. In this paper, we characterize the model integration process and compare the integration practices of different groups. We highlight key strategies, features, standards, and practices that can be employed by developers to increase reuse and interoperability of science software components and systems. The paper provides a review of the literature regarding techniques and methods employed by various modeling system developers to facilitate science software interoperability. The intent of the paper is to illustrate the wide variation in methods and the limiting effect the variation has on inter-framework reuse and interoperability. A series of recommendation
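One practice that the orchestration and data-interoperability phases point toward is wrapping independently developed models behind a shared interface so a simple orchestrator can run them in sequence and pass data between them. The sketch below is illustrative only; the model classes, coefficients, and variable names are assumptions, not taken from any reviewed framework.

```python
# Hedged sketch: a shared interface plus a minimal orchestrator.
from abc import ABC, abstractmethod

class ModelComponent(ABC):
    @abstractmethod
    def run(self, inputs: dict) -> dict:
        """Consume a dict of named inputs, return a dict of named outputs."""

class RainfallModel(ModelComponent):
    def run(self, inputs):
        return {"runoff_mm": 0.6 * inputs["rainfall_mm"]}

class StreamflowModel(ModelComponent):
    def run(self, inputs):
        return {"discharge_m3s": 0.02 * inputs["runoff_mm"]}

def orchestrate(chain, initial):
    state = dict(initial)
    for model in chain:                   # orchestration: fixed execution order
        state.update(model.run(state))    # data interoperability via shared keys
    return state

if __name__ == "__main__":
    print(orchestrate([RainfallModel(), StreamflowModel()], {"rainfall_mm": 42.0}))
```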
Distributed subterranean exploration and mapping with teams of UAVs
NASA Astrophysics Data System (ADS)
Rogers, John G.; Sherrill, Ryan E.; Schang, Arthur; Meadows, Shava L.; Cox, Eric P.; Byrne, Brendan; Baran, David G.; Curtis, J. Willard; Brink, Kevin M.
2017-05-01
Teams of small autonomous UAVs can be used to map and explore unknown environments which are inaccessible to teams of human operators in humanitarian assistance and disaster relief efforts (HA/DR). In addition to HA/DR applications, teams of small autonomous UAVs can enhance Warfighter capabilities and provide operational stand-off for military operations such as cordon and search, counter-WMD, and other intelligence, surveillance, and reconnaissance (ISR) operations. This paper will present a hardware platform and software architecture to enable distributed teams of heterogeneous UAVs to navigate, explore, and coordinate their activities to accomplish a search task in a previously unknown environment.
Globus Online: Climate Data Management for Small Teams
NASA Astrophysics Data System (ADS)
Ananthakrishnan, R.; Foster, I.
2013-12-01
Large and highly distributed climate data demands new approaches to data organization and lifecycle management. We need, in particular, catalogs that can allow researchers to track the location and properties of large numbers of data files, and management tools that can allow researchers to update data properties and organization during their research, move data among different locations, and invoke analysis computations on data--all as easily as if they were working with small numbers of files on their desktop computer. Both catalogs and management tools often need to be able to scale to extremely large quantities of data. When developing solutions to these problems, it is important to distinguish between the needs of (a) large communities, for whom the ability to organize published data is crucial (e.g., by implementing formal data publication processes, assigning DOIs, recording definitive metadata, providing for versioning), and (b) individual researchers and small teams, who are more frequently concerned with tracking the diverse data and computations involved in highly dynamic and iterative research processes. Key requirements in the latter case include automated data registration and metadata extraction, ease of update, close-to-zero management overheads (e.g., no local software install); and flexible, user-managed sharing support, allowing read and write privileges within small groups. We describe here how new capabilities provided by the Globus Online system address the needs of the latter group of climate scientists, providing for the rapid creation and establishment of lightweight individual- or team-specific catalogs; the definition of logical groupings of data elements, called datasets; the evolution of catalogs, dataset definitions, and associated metadata over time, to track changes in data properties and organization as a result of research processes; and the manipulation of data referenced by catalog entries (e.g., replication of a dataset to a remote location for analysis, sharing of a dataset). Its software-as-a-service ('SaaS') architecture means that these capabilities are provided to users over the network, without a need for local software installation. In addition, Globus Online provides well defined APIs, thus providing a platform that can be leveraged to integrate the capabilities with other portals and applications. We describe early applications of these new Globus Online capabilities to climate science. We focus in particular on applications that demonstrate how Globus Online capabilities complement those of the Earth System Grid Federation (ESGF), the premier system for publication and discovery of large community datasets. ESGF already uses Globus Online mechanisms for data download. We demonstrate methods by which the two systems can be further integrated and harmonized, so that for example data collections produced within a small team can be easily published from Globus Online to ESGF for archival storage and broader access--and a Globus Online catalog can be used to organize an individual view of a subset of data held in ESGF.
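To give a sense of how a small team's workflow might drive such a hosted catalog/transfer service programmatically, the sketch below registers a dataset grouping and requests replication through a REST API. The endpoint paths, payload fields, base URL, and token handling are illustrative placeholders only, not the documented Globus Online API; real integrations should follow the service's own API documentation.

```python
# Hedged sketch of driving a hosted catalog/transfer service over REST.
import requests

BASE = "https://catalog.example.org/api"      # placeholder service URL
TOKEN = "REPLACE_WITH_ACCESS_TOKEN"           # placeholder credential

def register_dataset(catalog_id, name, members):
    """Create a logical dataset grouping over existing catalog entries."""
    resp = requests.post(
        f"{BASE}/catalogs/{catalog_id}/datasets",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"name": name, "members": members},
    )
    resp.raise_for_status()
    return resp.json()["dataset_id"]

def request_replication(dataset_id, destination_endpoint):
    """Ask the service to replicate a dataset to a remote analysis site."""
    resp = requests.post(
        f"{BASE}/datasets/{dataset_id}/replicate",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"destination": destination_endpoint},
    )
    resp.raise_for_status()
    return resp.json()["task_id"]
```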
NASA Technical Reports Server (NTRS)
Shell, Elaine M.; Lue, Yvonne; Chu, Martha I.
1999-01-01
Flight software is a mission critical element of spacecraft functionality and performance. When ground operations personnel interface to a spacecraft, they are typically dealing almost entirely with the capabilities of onboard software. This software, even more than critical ground/flight communications systems, is expected to perform perfectly during all phases of spacecraft life. Due to the fact that it can be reprogrammed on-orbit to accommodate degradations or failures in flight hardware, new insights into spacecraft characteristics, new control options which permit enhanced science options, etc., the on-orbit flight software maintenance team is usually significantly responsible for the long term success of a science mission. Failure of flight software to perform as needed can result in very expensive operations work-around costs and lost science opportunities. There are three basic approaches to maintaining spacecraft software--namely using the original developers, using the mission operations personnel, or assembling a center of excellence for multi-spacecraft software maintenance. Not planning properly for flight software maintenance can lead to unnecessarily high on-orbit costs and/or unacceptably long delays, or errors, in patch installations. A common approach for flight software maintenance is to access the original development staff. The argument for utilizing the development staff is that the people who developed the software will be the best people to modify the software on-orbit. However, it can quickly become a challenge to obtain the services of these key people. They may no longer be available to the organization. They may have a more urgent job to perform, quite likely on another project under different project management. If they haven't worked on the software for a long time, they may need precious time for refamiliarization with the software, testbeds and tools. Further, a lack of insight into issues related to flight software in its on-orbit environment may find the developer unprepared for the challenges. The second approach is to train a member of the flight operations team to maintain the spacecraft software. This can prove to be a costly and inflexible solution. The person assigned to this duty may not have enough work to do during a problem free period and may have too much to do when a problem arises. If the person is a talented software engineer, he/she may not enjoy the limited software opportunities available in this position; and may eventually leave for newer technology computer science opportunities. Training replacement flight software personnel can be a difficult and lengthy process. The third approach is to assemble a center of excellence for on-orbit spacecraft software maintenance. Personnel in this specialty center can be managed to support flight software of multiple missions at once. The variety of challenges among a set of on-orbit missions can result in a dedicated, talented staff which is fully trained and available to support each mission's needs. Such staff are not software developers but are rather spacecraft software systems engineers. The cost to any one mission is extremely low because the software staff works and charges minimally on missions with no current operations issues; and their professional insight into on-orbit software troubleshooting and maintenance methods ensures low risk, effective and minimal-cost solutions to on-orbit issues.
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2004-01-01
In 2001, NASA Goddard Space Flight Center's Laboratory for Terrestrial Physics started the construction of a science Investigator-led Processing System (SIPS) for processing data from the Ozone Monitoring Instrument (OMI) which will launch on the Aura platform in mid 2004. The Ozone Monitoring Instrument (OMI) is a contribution of the Netherlands Agency for Aerospace Programs (NIVR) in collaboration with the Finnish Meteorological Institute (FMI) to the Earth Observing System (EOS) Aura mission. It will continue the Total Ozone Mapping Spectrometer (TOMS) record for total ozone and other atmospheric parameters related to ozone chemistry and climate. OMI measurements will be highly synergistic with the other instruments on the EOS Aura platform. The LTP previously developed the Moderate Resolution Imaging Spectroradiometer (MODIS) Data Processing System (MODAPS), which has been in full operations since the launches of the Terra and Aqua spacecraft in December 1999 and May 2002, respectively. During that time, it has continually evolved to better support the needs of the MODIS team. We now run multiple instances of the system managing faster than real time reprocessings of the data as well as continuing forward processing. The new OMI Data Processing System (OMIDAPS) was adapted from the MODAPS. It will ingest raw data from the satellite ground station and process it to produce calibrated, geolocated higher level data products. These data products will be transmitted to the Goddard Distributed Active Archive Center (GDAAC) instance of the Earth Observing System (EOS) Data and Information System (EOSDIS) for long term archive and distribution to the public. The OMIDAPS will also provide data distribution to the OMI Science Team for quality assessment, algorithm improvement, calibration, etc. We have taken advantage of lessons learned from the MODIS experience and software already developed for MODIS. We made some changes in the hardware system organization, database, and software to adapt the system for OMI. We replaced the fundamental database system, Sybase, with an open source RDBMS called PostgreSQL, and based the entire OMIDAPS on a cluster of Linux based commodity computers rather than the large SGI servers that MODAPS uses. Rather than relying on a central I/O server host, the new system distributes its data archive among multiple server hosts in the cluster. OMI is also customizing the graphical user interfaces and reporting structure to more closely meet the needs of the OMI Science Team. Prior to 2003, simulated OMI data and the science algorithms were not ready for production testing. We initially constructed a prototype system and tested using a 25-year dataset of Total Ozone Mapping Spectrometer (TOMS) and Solar Backscatter Ultraviolet Instrument (SBUV) data. This prototype system provided a platform to support the adaptation of the algorithms for OMI, and provided reprocessing of the historical data aiding in its analysis. In a recent reanalysis of the TOMS data, the OMIDAPS processed 108,000 full orbits of data through 4 processing steps per orbit, producing about 800,000 files (400 GiB) of level 2 and greater data files. More recently we have installed two instances of the OMIDAPS for integration and testing of OMI science processes as they get delivered from the Science Team. A Test instance of the OMIDAPS has also supported a series of "Interface Confidence Tests" (ICTs) and End-to-End Ground System tests to ensure the launch readiness of the system.
This paper will discuss the high-level hardware, software, and database organization of the OMIDAPS and how it builds on the MODAPS heritage system. It will also provide an overview of the testing and implementation of the production OMIDAPS.
Model-driven approach to data collection and reporting for quality improvement.
Curcin, Vasa; Woodcock, Thomas; Poots, Alan J; Majeed, Azeem; Bell, Derek
2014-12-01
Continuous data collection and analysis have been shown to be essential to achieving improvement in healthcare. However, the data required for local improvement initiatives are often not readily available from hospital Electronic Health Record (EHR) systems or not routinely collected. Furthermore, improvement teams are often restricted in time and funding, thus requiring inexpensive and rapid tools to support their work. Hence, the informatics challenge in healthcare local improvement initiatives consists of providing a mechanism for rapid modelling of the local domain by non-informatics experts, including performance metric definitions, and grounded in established improvement techniques. We investigate the feasibility of a model-driven software approach to address this challenge, whereby an improvement model designed by a team is used to automatically generate required electronic data collection instruments and reporting tools. To that goal, we have designed a generic Improvement Data Model (IDM) to capture the data items and quality measures relevant to the project, and constructed Web Improvement Support in Healthcare (WISH), a prototype tool that takes user-generated IDM models and creates a data schema, data collection web interfaces, and a set of live reports, based on Statistical Process Control (SPC), for use by improvement teams. The software has been successfully used in over 50 improvement projects, with more than 700 users. We present in detail the experiences of one of those initiatives, the Chronic Obstructive Pulmonary Disease project in Northwest London hospitals. The specific challenges of improvement in healthcare are analysed and the benefits and limitations of the approach are discussed. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
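The model-driven idea described above can be sketched in miniature: a small, user-written metric definition drives both the data-collection schema and a basic SPC (individuals/XmR) report. The metric, field names, and sample values below are illustrative assumptions, not the actual IDM or WISH implementation; the XmR limits use the conventional 2.66 moving-range constant.

```python
# Hedged sketch: derive a schema and SPC control limits from a metric model.
from statistics import mean

METRIC = {                       # a toy "improvement data model" entry
    "name": "days_to_follow_up",
    "fields": {"patient_id": "str", "admission_date": "date", "value": "float"},
}

def collection_schema(metric):
    """Derive a flat table schema from the metric definition."""
    return {metric["name"]: metric["fields"]}

def xmr_limits(values):
    """Individuals-chart centre line and control limits (mean +/- 2.66 * mean MR)."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    centre = mean(values)
    width = 2.66 * mean(moving_ranges)
    return centre - width, centre, centre + width

if __name__ == "__main__":
    print(collection_schema(METRIC))
    print(xmr_limits([12, 9, 14, 11, 10, 8, 13, 9]))
```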
Laireiter, Anton Rupert
2017-01-01
Background In recent years, the assessment of mental disorders has become more and more personalized. Modern advancements such as Internet-enabled mobile phones and increased computing capacity make it possible to tap sources of information that have long been unavailable to mental health practitioners. Objective Software packages that combine algorithm-based treatment planning, process monitoring, and outcome monitoring are scarce. The objective of this study was to assess whether the DynAMo Web application can fill this gap by providing a software solution that can be used by both researchers to conduct state-of-the-art psychotherapy process research and clinicians to plan treatments and monitor psychotherapeutic processes. Methods In this paper, we report on the current state of a Web application that can be used for assessing the temporal structure of mental disorders using information on their temporal and synchronous associations. A treatment planning algorithm automatically interprets the data and delivers priority scores of symptoms to practitioners. The application is also capable of monitoring psychotherapeutic processes during therapy and of monitoring treatment outcomes. This application was developed using the R programming language (R Core Team, Vienna) and the Shiny Web application framework (RStudio, Inc, Boston). It is made entirely from open-source software packages and thus is easily extensible. Results The capabilities of the proposed application are demonstrated. Case illustrations are provided to exemplify its usefulness in clinical practice. Conclusions With the broad availability of Internet-enabled mobile phones and similar devices, collecting data on psychopathology and psychotherapeutic processes has become easier than ever. The proposed application is a valuable tool for capturing, processing, and visualizing these data. The combination of dynamic assessment and process- and outcome monitoring has the potential to improve the efficacy and effectiveness of psychotherapy. PMID:28729233
Janus: Graphical Software for Analyzing In-Situ Measurements of Solar-Wind Ions
NASA Astrophysics Data System (ADS)
Maruca, B.; Stevens, M. L.; Kasper, J. C.; Korreck, K. E.
2016-12-01
In-situ observations of solar-wind ions provide tremendous insights into the physics of space plasmas. Instruments on spacecraft measure distributions of ion energies, which can be processed into scientifically useful data (e.g., values for ion densities and temperatures). This analysis requires a strong technical understanding of the instrument, so it has traditionally been carried out by the instrument teams using automated software that they had developed for that purpose. The automated routines are optimized for typical solar-wind conditions, so they can fail to capture the complex (and scientifically interesting) microphysics of transient solar-wind structures, such as coronal mass ejections (CMEs) and co-rotating interaction regions (CIRs), which are often better analyzed manually. This presentation reports on the ongoing development of Janus, a new software package for processing in-situ measurements of solar-wind ions. Janus will provide users with an easy-to-use graphical user interface (GUI) for carrying out highly customized analyses. Transparent to the user, Janus will automatically handle the most technical tasks (e.g., the retrieval and calibration of measurements). For the first time, users with only limited knowledge about the instruments (e.g., non-instrumentalists and students) will be able to easily process measurements of solar-wind ions. Version 1 of Janus focuses specifically on such measurements from the Wind spacecraft's Faraday Cups and is slated for public release in time for this presentation.
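The kind of processing such tools wrap can be illustrated by a simple moment calculation: estimating proton density, bulk speed, and temperature from a one-dimensional reduced velocity distribution. This is a generic textbook-style sketch, not Janus code, and the sample distribution is synthetic rather than Wind/Faraday Cup data.

```python
# Hedged sketch: density, bulk speed, and temperature from velocity moments.
import numpy as np

K_B = 1.380649e-23       # Boltzmann constant, J/K
M_P = 1.67262192e-27     # proton mass, kg

def moments(v, f):
    """v: velocities (m/s); f: 1-D reduced distribution (s/m^4). Returns n, u, T."""
    dv = np.gradient(v)
    n = np.sum(f * dv)                                       # density (1/m^3)
    u = np.sum(v * f * dv) / n                               # bulk speed (m/s)
    t = M_P * np.sum((v - u) ** 2 * f * dv) / (n * K_B)      # temperature (K)
    return n, u, t

if __name__ == "__main__":
    v = np.linspace(300e3, 500e3, 200)                       # m/s
    n0, u0, vth = 5e6, 400e3, 30e3                           # synthetic Maxwellian
    f = n0 / (np.sqrt(np.pi) * vth) * np.exp(-((v - u0) / vth) ** 2)
    print(moments(v, f))
```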
Evaluation and Validation (E&V) Team Public Report. Volume 5
1990-10-31
aspects, software engineering practices, etc. The E&V requirements which are developed will be used to guide the E&V technical effort. The currently...interoperability of Ada software engineering environment tools and data. The scope of the CAIS-A includes the functionality affecting transportability that is...requirement that they be CAIS conforming tools or data. That is, for example numerous CIVC data exist on special purpose software currently available
A new dataset validation system for the Planetary Science Archive
NASA Astrophysics Data System (ADS)
Manaud, N.; Zender, J.; Heather, D.; Martinez, S.
2007-08-01
The Planetary Science Archive is the official archive for the Mars Express mission. It received its first data at the end of 2004. These data are delivered by the PI teams to the PSA team as datasets, which are formatted in conformance with the Planetary Data System (PDS) standard. The PI teams are responsible for analyzing and calibrating the instrument data as well as the production of reduced and calibrated data. They are also responsible for the scientific validation of these data. ESA is responsible for the long-term data archiving and distribution to the scientific community and must ensure, in this regard, that all archived products meet quality standards. To do so, an archive peer-review is used to control the quality of the Mars Express science data archiving process. However, a full validation of its content is missing. An independent review board recently recommended that the completeness of the archive as well as the consistency of the delivered data should be validated following well-defined procedures. A new validation software tool is being developed to complete the overall data quality control system functionality. This new tool aims to improve the quality of data and services provided to the scientific community through the PSA, and shall make it possible to track anomalies in datasets and to control their completeness. It shall ensure that the PSA end-users: (1) can rely on the result of their queries, (2) will get data products that are suitable for scientific analysis, (3) can find all science data acquired during a mission. We defined dataset validation as the verification and assessment process to check the dataset content against pre-defined top-level criteria, which represent the general characteristics of good quality datasets. The dataset content that is checked includes the data and all types of information that are essential in the process of deriving scientific results and those interfacing with the PSA database. The validation software tool is a multi-mission tool that has been designed to provide the user with the flexibility of defining and implementing various types of validation criteria, to iteratively and incrementally validate datasets, and to generate validation reports.
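Criteria-driven dataset validation of this kind can be sketched as a set of small checks run over a dataset directory, with the results gathered into a report. The file-layout criteria below (a volume descriptor, index files, labels paired with data products) are illustrative of PDS-style conventions only and are not the actual PSA validation rules.

```python
# Hedged sketch: run configurable validation criteria over a dataset directory.
import pathlib

def has_required_files(dataset):
    required = ["VOLDESC.CAT", "INDEX/INDEX.LBL", "INDEX/INDEX.TAB"]
    return [f"missing {name}" for name in required
            if not (dataset / name).exists()]

def labels_have_products(dataset):
    """Every detached .LBL file should sit next to a same-named data product."""
    issues = []
    for label in dataset.rglob("*.LBL"):
        candidates = (label.with_suffix(".TAB"), label.with_suffix(".IMG"))
        if not any(p.exists() for p in candidates):
            issues.append(f"label without product: {label}")
    return issues

def validate(dataset_path, criteria=(has_required_files, labels_have_products)):
    dataset = pathlib.Path(dataset_path)
    report = []
    for criterion in criteria:
        report.extend(criterion(dataset))
    return report or ["dataset passed all configured criteria"]

if __name__ == "__main__":
    print("\n".join(validate(".")))
```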
NASA Astrophysics Data System (ADS)
Regnell, Björn; Höst, Martin; Nilsson, Fredrik; Bengtsson, Henrik
When developing software-intensive products for a market-place it is important for a development organisation to create innovative features for coming releases in order to achieve advantage over competitors. This paper focuses on assessment of innovation capability at team level in relation to the requirements engineering that is taking place before the actual product development projects are decided, when new business models, technology opportunities and intellectual property rights are created and investigated through e.g. prototyping and concept development. The result is a measurement framework focusing on four areas: innovation elicitation, selection, impact and ways-of-working. For each area, candidate measurements were derived from interviews to be used as inspiration in the development of a tailored measurement program. The framework is based on interviews with participants of a software team with specific innovation responsibilities and validated through cross-case analysis and feedback from practitioners.
A Computer Supported Teamwork Project for People with a Visual Impairment.
ERIC Educational Resources Information Center
Hale, Greg
2000-01-01
Discussion of the use of computer supported teamwork (CSTW) in team-based organizations focuses on problems that visually impaired people have reading graphical user interface software via screen reader software. Describes a project that successfully used email for CSTW, and suggests issues needing further research. (LRW)
Tutor Training in Computer Science: Tutor Opinions and Student Results.
ERIC Educational Resources Information Center
Carbone, Angela; Mitchell, Ian
Edproj, a project team of faculty from the departments of computer science, software development and education at Monash University (Australia) investigated the quality of teaching and student learning and understanding in the computer science and software development departments. Edproj's research led to the development of a training program to…
Improving Collaborative Learning in Online Software Engineering Education
ERIC Educational Resources Information Center
Neill, Colin J.; DeFranco, Joanna F.; Sangwan, Raghvinder S.
2017-01-01
Team projects are commonplace in software engineering education. They address a key educational objective, provide students critical experience relevant to their future careers, allow instructors to set problems of greater scale and complexity than could be tackled individually, and are a vehicle for socially constructed learning. While all…
Learning Teamwork Skills in University Programming Courses
ERIC Educational Resources Information Center
Sancho-Thomas, Pilar; Fuentes-Fernandez, Ruben; Fernandez-Manjon, Baltasar
2009-01-01
University courses about computer programming usually seek to provide students not only with technical knowledge, but also with the skills required to work in real-life software projects. Nowadays, the development of software applications requires the coordinated efforts of the members of one or more teams. Therefore, it is important for software…
Performance and Perceptions of Student Teams Created and Stratified Based on Academic Abilities.
Camiel, Lana Dvorkin; Kostka-Rokosz, Maria; Tataronis, Gary; Goldman, Jennifer
2017-04-01
Objective. To compare student performance, elements of peer evaluation, and satisfaction of teams created according to students' course entrance grade point average (GPA). Methods. Two course sections were divided into teams of four to five students using Comprehensive Assessment of Team Member Effectiveness (CATME) software. Results. Of 336 students enrolled, 324 consented to participation. Weekly team quiz averages were 99.1% (higher GPA), 97.2% (lower GPA), and 97.7% (mixed GPA). Weekly individual quiz averages were 87.2% (higher GPA), 83.3% (lower GPA), and 85.2% (mixed GPA). Students with the same GPA performed similarly individually, independent of team assignment. Satisfaction scores were 4.52 (higher GPA), 4.73 (lower GPA), and 4.53 (mixed GPA). Conclusion. Academically stronger students in mixed GPA teams appeared to be at a slight disadvantage compared with similar students in higher GPA teams. There was no difference in team performance for academically weaker students in lower GPA versus mixed GPA teams. Team satisfaction was higher in lower GPA teams.
Lessons Learned from Optical Payload for Lasercomm Science (OPALS) Mission Operations
NASA Technical Reports Server (NTRS)
Sindiy, Oleg V.; Abrahamson, Matthew J.; Biswas, Abhijit; Wright, Malcolm W.; Padams, Jordan H.; Konyha, Alexander L.
2015-01-01
This paper provides an overview of Optical Payload for Lasercomm Science (OPALS) activities and lessons learned during mission operations. Activities described cover the periods of commissioning, prime, and extended mission operations, during which primary and secondary mission objectives were achieved for demonstrating space-to-ground optical communications. Lessons learned cover Mission Operations System topics in areas of: architecture verification and validation, staffing, mission support area, workstations, workstation tools, interfaces with support services, supporting ground stations, team training, procedures, flight software upgrades, post-processing tools, and public outreach.
2010-06-01
researchers outside the government to produce the kinds of algorithms and software that would easily transition into solutions for teams of autonomous ... vehicles for military scenarios. To accomplish this, we began modifying the RoboCup soccer game step-by-step to incorporate rules that simulate these
Ada training evaluation and recommendations from the Gamma Ray Observatory Ada Development Team
NASA Technical Reports Server (NTRS)
1985-01-01
The Ada training experiences of the Gamma Ray Observatory Ada development team are related, and recommendations are made concerning future Ada training for software developers. Training methods are evaluated, deficiencies in the training program are noted, and a recommended approach, including course outline, time allocation, and reference materials, is offered.
ERIC Educational Resources Information Center
Rogers, Camille, Ed.
The conference paper topics include: business and information technology (IT) education; knowledge management; teaching software applications; development of multimedia teaching materials; technology job skills in demand; IT management for executives; self-directed teams in information systems courses; a team building exercise to software…
Adaptive cyber-attack modeling system
NASA Astrophysics Data System (ADS)
Gonsalves, Paul G.; Dougherty, Edward T.
2006-05-01
The pervasiveness of software and networked information systems is evident across a broad spectrum of business and government sectors. Such reliance provides ample opportunity not only for the nefarious exploits of lone-wolf computer hackers, but also for more systematic software attacks from organized entities. While much effort and focus have been placed on preventing and ameliorating network and OS attacks, a concomitant emphasis is required to address protection of mission-critical software. Typical evaluation and verification and validation (V&V) of software protection techniques and methodologies involves the use of a team of subject matter experts (SMEs) to mimic potential attackers or hackers. This manpower-intensive, time-consuming, and potentially cost-prohibitive approach is not amenable to performing the multiple non-subjective analyses required to quantify software protection levels. To facilitate the evaluation and V&V of software protection solutions, we have designed and developed a prototype adaptive cyber-attack modeling system. Our approach integrates an off-line mechanism for rapid construction of Bayesian belief network (BN) attack models with an on-line model instantiation, adaptation, and knowledge acquisition scheme. Off-line model construction is supported via a knowledge elicitation approach for identifying key domain requirements and a process for translating these requirements into a library of BN-based cyber-attack models. On-line attack modeling and knowledge acquisition are supported via BN evidence propagation and model parameter learning.
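The sketch below illustrates the basic mechanism behind BN evidence propagation on a deliberately tiny, hand-rolled attack model: the posterior probability of a compromise is computed by enumerating over a hidden node once evidence is observed. Node names and probabilities are invented; the actual system works from a library of elicited BN attack models.

```python
# Toy Bayesian-network attack model: P(compromise | observed probe activity).
# Node names and probabilities are invented for illustration only.

# Priors and conditional probability tables (CPTs)
p_protection_weak = 0.3                       # P(W = protection is weak)
p_probe_given = {True: 0.8, False: 0.2}       # P(probe observed | W)
p_compromise_given = {True: 0.6, False: 0.05} # P(compromise | W)


def posterior_compromise(probe_observed: bool) -> float:
    """Enumerate over the hidden 'protection weak' node to get
    P(compromise | probe evidence)."""
    num = 0.0   # P(compromise, evidence)
    den = 0.0   # P(evidence)
    for weak, p_w in ((True, p_protection_weak), (False, 1 - p_protection_weak)):
        p_e = p_probe_given[weak] if probe_observed else 1 - p_probe_given[weak]
        den += p_w * p_e
        num += p_w * p_e * p_compromise_given[weak]
    return num / den


print(f"P(compromise | probe seen)    = {posterior_compromise(True):.3f}")
print(f"P(compromise | no probe seen) = {posterior_compromise(False):.3f}")
```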
Supporting Real-Time Operations and Execution through Timeline and Scheduling Aids
NASA Technical Reports Server (NTRS)
Marquez, Jessica J.; Pyrzak, Guy; Hashemi, Sam; Ahmed, Samia; McMillin, Kevin Edward; Medwid, Joseph Daniel; Chen, Diana; Hurtle, Esten
2013-01-01
Since 2003, the NASA Ames Research Center has been actively involved in researching and advancing the state of the art of planning and scheduling tools for NASA mission operations. Our planning toolkit SPIFe (Scheduling and Planning Interface for Exploration) has supported a variety of missions and field tests, scheduling activities for Mars rovers as well as crew on board the International Space Station and NASA Earth analogs. The scheduled plan is the integration of all the activities for the day or days. In turn, the agents (rovers, landers, spaceships, crew) execute from this schedule while the mission support team members (e.g., flight controllers) follow the schedule during execution. Over the last couple of years, our team has begun to research and validate methods that will better support users during real-time operations and execution of scheduled activities. Our team applies human-computer interaction principles to research user needs, identify workflow processes, prototype software aids, and user-test them. This paper discusses three specific prototypes developed and user-tested to support real-time operations: Score Mobile, Playbook, and Mobile Assistant for Task Execution (MATE).
NASA Astrophysics Data System (ADS)
Yamada, Yoshiyuki; Gouda, Naoteru; Yoshioka, Satoshi
2015-08-01
We are planning JASMINE (Japan Astrometric Satellite Mission for INfrared Exploration) as a series of missions: Nano-JASMINE, Small-JASMINE, and JASMINE. Nano-JASMINE data analysis will be performed in collaboration with the Gaia data analysis team. We apply the Gaia core processing software, named AGIS, as the Nano-JASMINE core solution; its applicability has been confirmed by D. Michalik and the Gaia DPAC team. Converting telemetry data to AGIS input is the JASMINE team's task, and includes centroid calculation of the stellar images. The accuracy of Gaia is two orders of magnitude better than that of Nano-JASMINE, but these are the only two astrometric satellite missions using CCD detectors for global astrometry, so Nano-JASMINE will have a role in calibrating Gaia data. Bright-star centroiding is the most important science target. Small-JASMINE has a completely different observation strategy: it will perform step-stare observations, with about a million observations of each individual star. Sub-milliarcsecond centroid errors of individual stellar images will be reduced by two orders of magnitude, reaching 10 microarcsecond astrometric accuracy by applying the square-root-N law to the million observations. Various systematic noise sources should be estimated, modelled, and subtracted. Some statistical studies will be shown in this poster.
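The square-root-N law invoked above is simple to state: averaging N independent measurements reduces the random error by a factor of the square root of N. A back-of-envelope illustration follows; the single-measurement error and observation count are placeholders, not Small-JASMINE's actual error budget.

```python
# Back-of-envelope square-root-N averaging (placeholder numbers): combining N
# independent centroid measurements reduces the random error by sqrt(N).
import math

single_measurement_error_uas = 1000.0   # ~1 milliarcsecond, assumed
n_observations = 1_000_000

combined_error_uas = single_measurement_error_uas / math.sqrt(n_observations)
print(f"Combined random error: {combined_error_uas:.1f} microarcseconds")
# Systematic and correlated errors do not average down this way, which is why
# the abstract stresses that they must be modelled and subtracted separately.
```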
Rupcic, Sonia; Tamrat, Tigest; Kachnowski, Stan
2012-11-01
This study reviews the state of diabetes information technology (IT) initiatives and presents a set of recommendations for improvement based on interviews with commercial IT innovators. Semistructured interviews were conducted with 10 technology developers, representing 12 of the most successful IT companies in the world. Average interview time was approximately 45 min. Interviews were audio-recorded, transcribed, and entered into ATLAS.ti for qualitative data analysis. Themes were identified through a process of selective and open coding by three researchers. We identified two practices, common among successful IT companies, that have allowed them to avoid or surmount the challenges that confront healthcare professionals involved in diabetes IT development: (1) employing a diverse research team of software developers and engineers, statisticians, consumers, and business people and (2) conducting rigorous research and analytics on technology use and user preferences. Because of the nature of their respective fields, healthcare professionals and commercial innovators face different constraints. With these in mind we present three recommendations, informed by practices shared by successful commercial developers, for those involved in developing diabetes IT programming: (1) include software engineers on the implementation team throughout the intervention, (2) conduct more extensive baseline testing of users and monitor the usage data derived from the technology itself, and (3) pursue Institutional Review Board-exempt research.
Optimal Planning and Problem-Solving
NASA Technical Reports Server (NTRS)
Clement, Bradley; Schaffer, Steven; Rabideau, Gregg
2008-01-01
CTAEMS MDP Optimal Planner is problem-solving software designed to command a single spacecraft/rover, or a team of spacecraft/rovers, to perform the best action possible at all times according to an abstract model of the spacecraft/rover and its environment. It may also be useful in solving logistical problems encountered in commercial applications such as shipping and manufacturing. The planner reasons about uncertainty according to specified probabilities of outcomes, using a plan hierarchy to avoid exploring certain kinds of suboptimal actions. Also, planned actions are calculated as the state-action space is expanded, rather than afterward, reducing the processing time and memory used by an order of magnitude. The software solves planning problems with actions that can execute concurrently, that have uncertain duration and quality, and that have functional dependencies on others that affect quality. These problems are modeled in a hierarchical planning language called C_TAEMS, a derivative of the TAEMS language for specifying domains for the DARPA Coordinators program. In realistic environments, actions often have uncertain outcomes and can have complex relationships with other tasks. The planner approaches problems by considering all possible actions that may be taken from any state reachable from a given initial state, and from within the constraints of a given task hierarchy that specifies what tasks may be performed by which team member.
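A generic finite-horizon sketch of the underlying idea, choosing the action with the best expected value over uncertain outcomes, is shown below. The toy state/action model is invented for illustration and is unrelated to C_TAEMS or the planner's actual algorithms.

```python
# Minimal finite-horizon MDP sketch: pick the action with the best expected
# value over uncertain outcomes.  The tiny model below is invented for
# illustration and is unrelated to C_TAEMS or the actual planner.
from functools import lru_cache

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "idle": {
        "drive":   [(0.8, "at_rock", 5.0), (0.2, "stuck", -2.0)],
        "observe": [(1.0, "idle", 1.0)],
    },
    "at_rock": {"sample": [(0.9, "done", 10.0), (0.1, "at_rock", 0.0)]},
    "stuck":   {},
    "done":    {},
}


@lru_cache(maxsize=None)
def value(state: str, horizon: int) -> float:
    """Expected value of acting optimally for `horizon` more steps."""
    if horizon == 0 or not transitions[state]:
        return 0.0
    return max(
        sum(p * (r + value(s2, horizon - 1)) for p, s2, r in outcomes)
        for outcomes in transitions[state].values()
    )


def best_action(state: str, horizon: int) -> str:
    """Action maximizing expected reward plus expected future value."""
    return max(transitions[state],
               key=lambda a: sum(p * (r + value(s2, horizon - 1))
                                 for p, s2, r in transitions[state][a]))


print(best_action("idle", horizon=3))
```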
Mertens, Wilson C; Christov, Stefan C; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Cassells, Lucinda J; Marquard, Jenna L
2012-11-01
Chemotherapy ordering and administration, in which errors have potentially severe consequences, was quantitatively and qualitatively evaluated by employing process formalism (or formal process definition), a technique derived from software engineering, to elicit and rigorously describe the process, after which validation techniques were applied to confirm the accuracy of the described process. The chemotherapy ordering and administration process, including exceptional situations and individuals' recognition of and responses to those situations, was elicited through informal, unstructured interviews with members of an interdisciplinary team. The process description (or process definition), written in a notation developed for software quality assessment purposes, guided process validation (which consisted of direct observations and semistructured interviews to confirm the elicited details for the treatment plan portion of the process). The overall process definition yielded 467 steps; 207 steps (44%) were dedicated to handling 59 exceptional situations. Validation yielded 82 unique process events (35 new expected but not yet described steps, 16 new exceptional situations, and 31 new steps in response to exceptional situations). Process participants actively altered the process as ambiguities and conflicts were discovered by the elicitation and validation components of the study. Chemotherapy error rates declined significantly during and after the project, which was conducted from October 2007 through August 2008. Each elicitation method and the subsequent validation discussions contributed uniquely to understanding the chemotherapy treatment plan review process, supporting rapid adoption of changes, improved communication regarding the process, and ensuing error reduction.
Development of a smart type motor operated valve for nuclear power plants
NASA Astrophysics Data System (ADS)
Kim, Chang-Hwoi; Park, Joo-Hyun; Lee, Dong-young; Koo, In-Soo
2005-12-01
In this paper, the design concept of a smart-type motor-operated valve for nuclear power plants is described. The development objective of the smart valve is to achieve superior accuracy, long-term reliability, and ease of use. For these reasons, the developed smart valve has fieldbus communication (such as DeviceNet and Profibus-DP), an auto-tuning PID controller, self-diagnostics, and on-line calibration capabilities. To achieve pressure, temperature, and flow control with the internal PID controller, a pressure sensor and transmitter were included in the valve, and a temperature and flow signal acquisition port was provided. The developed smart valve will undergo equipment qualification tests, such as environmental, EMI/EMC, and vibration tests, at the Korea Test Lab, and its performance is tested in a test loop located in a Seoul National University laboratory. For application in nuclear power plants, the software is being developed according to a software life cycle, and the developed software is verified by an independent software V&V team. It is expected that the smart valve can be applied to existing NPPs as a replacement or to new nuclear power plants. The design and fabrication of the smart valve are now in progress.
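For readers unfamiliar with the control loop mentioned above, the following sketch shows a generic fixed-gain discrete PID update driving a crude first-order plant. The gains, sample time, and plant model are placeholders, and the sketch does not attempt the auto-tuning that the valve's controller is said to provide.

```python
# Generic discrete PID loop (illustrative only; gains, sample time, and the
# first-order "plant" below are placeholders, not the valve's controller).
def pid_step(setpoint, measurement, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    """One PID update; `state` carries the integral and previous error."""
    error = setpoint - measurement
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative


# Crude first-order process (e.g., pressure responding to valve opening).
pressure = 0.0
state = {"integral": 0.0, "prev_error": 0.0}
for step in range(50):
    command = pid_step(setpoint=10.0, measurement=pressure, state=state)
    pressure += 0.05 * (command - pressure)   # simple lag response
    if step % 10 == 0:
        print(f"t={step * 0.1:4.1f}s  pressure={pressure:6.3f}")
```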
Multidisciplinary Tool for Systems Analysis of Planetary Entry, Descent, and Landing
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2011-01-01
Systems analysis of a planetary entry (SAPE), descent, and landing (EDL) is a multidisciplinary activity in nature. SAPE improves the performance of the systems analysis team by automating and streamlining the process, and this improvement can reduce the errors that stem from manual data transfer among discipline experts. SAPE is a multidisciplinary tool for systems analysis of planetary EDL for Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Titan. It performs EDL systems analysis for any planet, operates cross-platform (i.e., Windows, Mac, and Linux operating systems), uses existing software components and open-source software to avoid software licensing issues, performs low-fidelity systems analysis in one hour on a computer that is comparable to an average laptop, and keeps discipline experts in the analysis loop. SAPE uses Python, a platform-independent, open-source language, for integration and for the user interface. Development has relied heavily on the object-oriented programming capabilities that are available in Python. Modules are provided to interface with commercial and government off-the-shelf software components (e.g., thermal protection systems and finite-element analysis). SAPE currently includes the following analysis modules: geometry, trajectory, aerodynamics, aerothermal, thermal protection system, and interface for structural sizing.
Accessing Information on the Mars Exploration Rovers Mission
NASA Astrophysics Data System (ADS)
Walton, J. D.; Schreiner, J. A.
2005-12-01
In January 2004, the Mars Exploration Rovers (MER) mission successfully deployed two robotic geologists - Spirit and Opportunity - to opposite sides of the red planet. Onboard each rover is an array of cameras and scientific instruments that send data back to Earth, where ground-based systems process and store the information. During the height of the mission, a team of about 250 scientists and engineers worked around the clock to analyze the collected data, determine a strategy and activities for the next day and then carefully compose the command sequences that would instruct the rovers in how to perform their tasks. The scientists and engineers had to work closely together to balance the science objectives with the engineering constraints so that the mission achieved its goals safely and quickly. To accomplish this coordinated effort, they adhered to a tightly orchestrated schedule of meetings and processes. To keep on time, it was critical that all team members were aware of what was happening, knew how much time they had to complete their tasks, and could easily access the information they need to do their jobs. Computer scientists and software engineers at NASA Ames Research Center worked closely with the mission managers at the Jet Propulsion Laboratory (JPL) to create applications that support the mission. One such application, the Collaborative Information Portal (CIP), helps mission personnel perform their daily tasks, whether they work inside mission control or the science areas at JPL, or in their homes, schools, or offices. With a three-tiered, service-oriented architecture (SOA) - client, middleware, and data repository - built using Java and commercial software, CIP provides secure access to mission schedules and to data and images transmitted from the Mars rovers. This services-based approach proved highly effective for building distributed, flexible applications, and is forming the basis for the design of future mission software systems. Almost two years after the landings on Mars, the rovers are still going strong, and CIP continues to provide data access to mission personnel.
Scientific Data Analysis and Software Support: Geodynamics
NASA Technical Reports Server (NTRS)
Klosko, Steven; Sanchez, B. (Technical Monitor)
2000-01-01
The support on this contract centers on development of data analysis strategies, geodynamic models, and software codes to study four-dimensional geodynamic and oceanographic processes, as well as studies and mission support for near-Earth and interplanetary satellite missions. SRE had a subcontract to maintain the optical laboratory for the LTP, where instruments such as MOLA and GLAS are developed. NVI performed work on a Raytheon laser altimetry task through a subcontract, providing data analysis and final data production for distribution to users. HBG had a subcontract for specialized digital topography analysis and map generation. Over the course of this contract, Raytheon ITSS staff have supported over 60 individual tasks. Some tasks have remained in place during this entire interval whereas others have been completed and were of shorter duration. Over the course of events, task numbers were changed to reflect changes in the character of the work or new funding sources. The description presented below will detail the technical accomplishments that have been achieved according to their science and technology areas. What will be shown is a brief overview of the progress that has been made in each of these investigative and software development areas. Raytheon ITSS staff members have received many awards for their work on this contract, including GSFC Group Achievement Awards for TOPEX Precision Orbit Determination and the Joint Gravity Model One Team. NASA JPL gave the TOPEX/POSEIDON team a medal commemorating the completion of the primary mission and a Certificate of Appreciation. Raytheon ITSS has also received a Certificate of Appreciation from GSFC for its extensive support of the Shuttle Laser Altimeter Experiment.
AWIPS II Application Development, a SPoRT Perspective
NASA Technical Reports Server (NTRS)
Burks, Jason E.; Smith, Matthew; McGrath, Kevin M.
2014-01-01
The National Weather Service (NWS) is deploying its next-generation decision support system, called AWIPS II (Advanced Weather Interactive Processing System II). NASA's Short-term Prediction Research and Transition (SPoRT) Center has developed several software 'plug-ins' to extend the capabilities of AWIPS II. SPoRT aims to continue its mission of improving short-term forecasts by providing NASA and NOAA products on the decision support system used at NWS weather forecast offices (WFOs). These products are not included in the standard Satellite Broadcast Network feed provided to WFOs. SPoRT has had success in providing support to WFOs as they have transitioned to AWIPS II. Specific examples of transitioning SPoRT plug-ins to WFOs with newly deployed AWIPS II systems will be presented. Proving Ground activities (GOES-R and JPSS) will dominate SPoRT's future AWIPS II activities, including tool development as well as enhancements to existing products. In early 2012 SPoRT initiated the Experimental Product Development Team, a group of AWIPS II developers from several institutions supporting NWS forecasters with innovative products. The results of the team's spring and fall 2013 meeting will be presented. Since AWIPS II developers now include employees at WFOs, as well as many other institutions related to weather forecasting, the NWS has dealt with a multitude of software governance issues related to the difficulties of multiple remotely collaborating software developers. This presentation will provide additional examples of Research-to-Operations plugins, as well as an update on how governance issues are being handled in the AWIPS II developer community.
NASA Technical Reports Server (NTRS)
Mayer, Richard J.; Blinn, Thomas M.; Dewitte, Paula S.; Crump, John W.; Ackley, Keith A.
1992-01-01
In the second volume of the Demonstration Framework Document, the graphical representation of the demonstration framework is given. This second document was created to facilitate the reading and comprehension of the demonstration framework. It is designed to be viewed in parallel with Section 4.2 of the first volume to help give a picture of the relationships between the UOBs (Units of Behavior) of the model. The model is quite large, and the design team felt that this form of presentation would make it easier for the reader to get a feel for the processes described in this document. The IDEF3 (Process Description Capture Method) diagrams of the processes of an Information System Development are presented. Volume 1 describes the processes and the agents involved with each process, while this volume graphically shows the precedence relationships among the processes.
PRIDE: new developments and new datasets.
Jones, Philip; Côté, Richard G; Cho, Sang Yun; Klie, Sebastian; Martens, Lennart; Quinn, Antony F; Thorneycroft, David; Hermjakob, Henning
2008-01-01
The PRIDE (http://www.ebi.ac.uk/pride) database of protein and peptide identifications was previously described in the NAR Database Special Edition in 2006. Since this publication, the volume of public data in the PRIDE relational database has increased by more than an order of magnitude. Several significant public datasets have been added, including identifications and processed mass spectra generated by the HUPO Brain Proteome Project and the HUPO Liver Proteome Project. The PRIDE software development team has made several significant changes and additions to the user interface and tool set associated with PRIDE. The focus of these changes has been to facilitate the submission process and to improve the mechanisms by which PRIDE can be queried. The PRIDE team has developed a Microsoft Excel workbook that allows the required data to be collated in a series of relatively simple spreadsheets, with automatic generation of PRIDE XML at the end of the process. The ability to query PRIDE has been augmented by the addition of a BioMart interface allowing complex queries to be constructed. Collaboration with groups outside the EBI has been fruitful in extending PRIDE, including an approach to encode iTRAQ quantitative data in PRIDE XML.
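The collate-then-generate pattern described above (tabular identifications in, XML submission out) can be illustrated in a few lines of Python; the element names below are invented for the example and are not the real PRIDE XML schema.

```python
# Illustrative conversion of tabular identifications into XML.  The element
# names here are invented and are NOT the real PRIDE XML schema; they only
# show the collate-then-generate pattern described above.
import csv
import io
import xml.etree.ElementTree as ET

rows = io.StringIO(
    "protein_accession,peptide_sequence,charge\n"
    "P12345,AGLLK,2\n"
    "P12345,VVDLTK,3\n"
)

root = ET.Element("Submission")
for row in csv.DictReader(rows):
    ident = ET.SubElement(root, "Identification",
                          accession=row["protein_accession"])
    peptide = ET.SubElement(ident, "Peptide", charge=row["charge"])
    peptide.text = row["peptide_sequence"]

print(ET.tostring(root, encoding="unicode"))
```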
NASA Software Engineering Benchmarking Study
NASA Technical Reports Server (NTRS)
Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.
2013-01-01
To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R and D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, are items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. 
Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 5. Consolidate, collect, and, if needed, develop common processes, principles, and other assets across the Agency in order to provide more consistency in software development and acquisition practices and to reduce the overall cost of maintaining or increasing current NASA CMMI maturity levels. 6. Provide additional support for small projects that includes: (a) guidance for appropriate tailoring of requirements for small projects, (b) availability of suitable tools, including support tool set-up and training, and (c) training for small project personnel, assurance personnel, and technical authorities on the acceptable options for tailoring requirements and performing assurance on small projects. 7. Develop software training classes for the more experienced software engineers using on-line training, videos, or small separate modules of training that can be accommodated as needed throughout a project. 8. Create guidelines to structure non-classroom training opportunities such as mentoring, peer reviews, lessons learned sessions, and on-the-job training. 9. Develop a set of predictive software defect data and a process for assessing software testing metric data against it. 10. Assess Agency-wide licenses for commonly used software tools. 11. Fill the knowledge gap in common software engineering practices for new hires and co-ops. 12. Work through the Science, Technology, Engineering and Mathematics (STEM) program with universities in strengthening education in the use of common software engineering practices and standards. 13. Follow up this benchmark study with a deeper look into what both internal and external organizations perceive as the scope of software assurance, the value they expect to obtain from it, and the shortcomings they experience in current practice. 14. Continue interactions with the external software engineering environment through collaborations, knowledge sharing, and benchmarking.
Addressing Challenges in the Acquisition of Secure Software Systems With Open Architectures
2012-04-30
as a “broker” to market specific research topics identified by our sponsors to NPS graduate students. This three-pronged approach provides for a...breaks, and the day-ending socials. Many of our researchers use these occasions to establish new teaming arrangements for future research work. In the...software (CSS) and open source software (OSS). Federal government acquisition policy, as well as many leading enterprise IT centers, now encourage the use
Penn State University ground software support for X-ray missions.
NASA Astrophysics Data System (ADS)
Townsley, L. K.; Nousek, J. A.; Corbet, R. H. D.
1995-03-01
The X-ray group at Penn State is charged with two software development efforts in support of X-ray satellite missions. As part of the ACIS instrument team for AXAF, the authors are developing part of the ground software to support the instrument's calibration. They are also designing a translation program for Ginga data, to change it from the non-standard FRF format, which closely parallels the original telemetry format, to FITS.
2014-08-15
CAPE CANAVERAL, Fla. – Former astronaut Greg Johnson, executive director of the Center for the Advancement of Science in Space, talks to Florida middle school students and their teachers before the start of the Zero Robotics finals competition at NASA Kennedy Space Center's Space Station Processing Facility in Florida. Students designed software to control Synchronized Position Hold Engage and Reorient Experimental Satellites, or SPHERES, and competed with other teams locally. Zero Robotics is a robotics programming competition in which the robots are SPHERES. The competition starts online, where teams program the SPHERES to solve an annual challenge. After several phases of virtual competition in a simulation environment that mimics the real SPHERES, finalists are selected to compete in a live championship aboard the space station. Students compete to win a technically challenging game by programming their strategies into the SPHERES satellites. The programs are autonomous and the students cannot control the satellites during the test. Photo credit: NASA/Daniel Casper
Booth, N; Jain, N L; Sugden, B
1999-01-01
The TextBase project is a laboratory experiment to assess the feasibility of a common exchange format for sending a transcription of the contents of the Electronic Patient Record (EPR) between different general practices, when patients move from one practice to another in the NHS in England. The project was managed using a partnership arrangement between the four EPR systems vendors who agreed to collaborate and the project team. It lasted one year and consisted of an iterative design process followed by creation of message generation and reading modules within the collaborating EPR systems according to a software requirement specification created by the project team. The paper describes the creation of a common record display format, the implementation of transfer using a floppy disk in the lab, and considers the further barriers before a national implementation might be achieved.
Language and Program for Documenting Software Design
NASA Technical Reports Server (NTRS)
Kleine, H.; Zepko, T. M.
1986-01-01
Software Design and Documentation Language (SDDL) provides an effective communication medium to support the design and documentation of complex software applications. SDDL supports communication among all members of a software design team and provides for the production of informative documentation on the design effort. Using the SDDL-generated document to analyze the design makes it possible to eliminate many errors that would otherwise not be detected until coding and testing are attempted. The SDDL processor program translates the designer's creative thinking into an effective document for communication. The processor performs as many automatic functions as possible, freeing the designer's energy for creative effort. The SDDL processor program is written in PASCAL.
Crawling The Web for Libre: Selecting, Integrating, Extending and Releasing Open Source Software
NASA Astrophysics Data System (ADS)
Truslove, I.; Duerr, R. E.; Wilcox, H.; Savoie, M.; Lopez, L.; Brandt, M.
2012-12-01
Libre is a project developed by the National Snow and Ice Data Center (NSIDC). Libre is devoted to liberating science data from its traditional constraints of publication, location, and findability. Libre embraces and builds on the notion of making knowledge freely available, and both Creative Commons licensed content and Open Source Software are crucial building blocks for, as well as required deliverable outcomes of the project. One important aspect of the Libre project is to discover cryospheric data published on the internet without prior knowledge of the location or even existence of that data. Inspired by well-known search engines and their underlying web crawling technologies, Libre has explored tools and technologies required to build a search engine tailored to allow users to easily discover geospatial data related to the polar regions. After careful consideration, the Libre team decided to base its web crawling work on the Apache Nutch project (http://nutch.apache.org). Nutch is "an open source web-search software project" written in Java, with good documentation, a significant user base, and an active development community. Nutch was installed and configured to search for the types of data of interest, and the team created plugins to customize the default Nutch behavior to better find and categorize these data feeds. This presentation recounts the Libre team's experiences selecting, using, and extending Nutch, and working with the Nutch user and developer community. We will outline the technical and organizational challenges faced in order to release the project's software as Open Source, and detail the steps actually taken. We distill these experiences into a set of heuristics and recommendations for using, contributing to, and releasing Open Source Software.
CrossTalk, The Journal of Defense Software Engineering. Volume 28 Number 1. Jan/Feb 2015
2015-02-01
Table 7. Group 1 & 2 Pretest and Posttest Means and Gain Scores. The one ... linked to team performance [6][7][8] and is considered one of the most important small group variables [9], with cohesion-performance being driven by ... increased team cohesion. Measuring Cohesion: in order to measure team cohesion, one must first understand the correlated cohesion constructs. The Group
The benefits of flexible team interaction during crises.
Stachowski, Alicia A; Kaplan, Seth A; Waller, Mary J
2009-11-01
Organizations increasingly rely on teams to respond to crises. While research on team effectiveness during nonroutine events is growing, naturalistic studies examining team behaviors during crises are relatively scarce. Furthermore, the relevant literature offers competing theoretical rationales concerning effective team response to crises. In this article, the authors investigate whether high- versus average-performing teams can be distinguished on the basis of the number and complexity of their interaction patterns. Using behavioral observation methodology, the authors coded the discrete verbal and nonverbal behaviors of 14 nuclear power plant control room crews as they responded to a simulated crisis. Pattern detection software revealed systematic differences among crews in their patterns of interaction. Mean comparisons and discriminant function analysis indicated that higher performing crews exhibited fewer, shorter, and less complex interaction patterns. These results illustrate the limitations of standardized response patterns and highlight the importance of team adaptability. Implications for future research and for team training are included.
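A simple stand-in for this kind of pattern detection is to count recurring short sequences of coded behaviors and compare crews on how many distinct patterns they reuse, as in the sketch below; this is only an illustration, not the pattern-detection software used in the study.

```python
# Count recurring short sequences of coded behaviours so crews can be compared
# on how many distinct interaction patterns they reuse (illustrative only).
from collections import Counter

def recurring_patterns(behaviors, min_len=2, max_len=4, min_count=2):
    """Return every behaviour subsequence of length min_len..max_len that
    occurs at least min_count times."""
    counts = Counter()
    for n in range(min_len, max_len + 1):
        for i in range(len(behaviors) - n + 1):
            counts[tuple(behaviors[i:i + n])] += 1
    return {seq: c for seq, c in counts.items() if c >= min_count}

# Toy coded behaviour streams for two crews (invented data).
crew_a = ["ask", "confirm", "command", "ask", "confirm", "command", "monitor"]
crew_b = ["command", "monitor", "ask", "command", "confirm", "monitor", "ask"]

for name, coded in (("crew_a", crew_a), ("crew_b", crew_b)):
    patterns = recurring_patterns(coded)
    print(name, "recurring patterns:", len(patterns))
```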
2011-02-01
Command; CASE: Computer Aided Software Engineering; CASEVAC: Casualty Evacuation; CASTFOREM: Combined Arms And Support Task Force Evaluation Model; CAT: Center for ... Advanced Technologies; CAT: Civil Affairs Team; CAT: Combined Arms Training; CAT: Crew Integration; CAT: Crisis Action Team; CATIA: Computer-Aided Three-Dimensional Interactive Application; CATOX: Catalytic Oxidation; CATS: Combined Arms Training Strategy; CATT: Combined Arms Tactical Trainer; CATT: Computer
Orion MPCV GN and C End-to-End Phasing Tests
NASA Technical Reports Server (NTRS)
Neumann, Brian C.
2013-01-01
End-to-end integration tests are critical risk reduction efforts for any complex vehicle. Phasing tests are an end-to-end integrated test that validates system directional phasing (polarity) from sensor measurement through software algorithms to end effector response. Phasing tests are typically performed on a fully integrated and assembled flight vehicle where sensors are stimulated by moving the vehicle and the effectors are observed for proper polarity. Orion Multi-Purpose Crew Vehicle (MPCV) Pad Abort 1 (PA-1) Phasing Test was conducted from inertial measurement to Launch Abort System (LAS). Orion Exploration Flight Test 1 (EFT-1) has two end-to-end phasing tests planned. The first test from inertial measurement to Crew Module (CM) reaction control system thrusters uses navigation and flight control system software algorithms to process commands. The second test from inertial measurement to CM S-Band Phased Array Antenna (PAA) uses navigation and communication system software algorithms to process commands. Future Orion flights include Ascent Abort Flight Test 2 (AA-2) and Exploration Mission 1 (EM-1). These flights will include additional or updated sensors, software algorithms and effectors. This paper will explore the implementation of end-to-end phasing tests on a flight vehicle which has many constraints, trade-offs and compromises. Orion PA-1 Phasing Test was conducted at White Sands Missile Range (WSMR) from March 4-6, 2010. This test decreased the risk of mission failure by demonstrating proper flight control system polarity. Demonstration was achieved by stimulating the primary navigation sensor, processing sensor data to commands and viewing propulsion response. PA-1 primary navigation sensor was a Space Integrated Inertial Navigation System (INS) and Global Positioning System (GPS) (SIGI) which has onboard processing, INS (3 accelerometers and 3 rate gyros) and no GPS receiver. SIGI data was processed by GN&C software into thrust magnitude and direction commands. The processing changes through three phases of powered flight: pitchover, downrange and reorientation. The primary inputs to GN&C are attitude position, attitude rates, angle of attack (AOA) and angle of sideslip (AOS). Pitch and yaw attitude and attitude rate responses were verified by using a flight spare SIGI mounted to a 2-axis rate table. AOA and AOS responses were verified by using a data recorded from SIGI movements on a robotic arm located at NASA Johnson Space Center. The data was consolidated and used in an open-loop data input to the SIGI. Propulsion was the Launch Abort System (LAS) Attitude Control Motor (ACM) which consisted of a solid motor with 8 nozzles. Each nozzle has active thrust control by varying throat area with a pintle. LAS ACM pintles are observable through optically transparent nozzle covers. SIGI movements on robot arm, SIGI rate table movements and LAS ACM pintle responses were video recorded as test artifacts for analysis and evaluation. The PA-1 Phasing Test design was determined based on test performance requirements, operational restrictions and EGSE capabilities. This development progressed during different stages. For convenience these development stages are initial, working group, tiger team, Engineering Review Team (ERT) and final.
Modi, Riddhi A; Mugavero, Michael J; Amico, Rivet K; Keruly, Jeanne; Quinlivan, Evelyn Byrd; Crane, Heidi M; Guzman, Alfredo; Zinski, Anne; Montue, Solange; Roytburd, Katya; Church, Anna; Willig, James H
2017-06-16
Meticulous tracking of study data must begin early in the study recruitment phase and must account for regulatory compliance, minimize missing data, and provide high information integrity and/or reduction of errors. In behavioral intervention trials, participants typically complete several study procedures at different time points. Among HIV-infected patients, behavioral interventions can favorably affect health outcomes. In order to empower newly diagnosed HIV positive individuals to learn skills to enhance retention in HIV care, we developed the behavioral health intervention Integrating ENGagement and Adherence Goals upon Entry (iENGAGE) funded by the National Institute of Allergy and Infectious Diseases (NIAID), where we deployed an in-clinic behavioral health intervention in 4 urban HIV outpatient clinics in the United States. To scale our intervention strategy homogenously across sites, we developed software that would function as a behavioral sciences research platform. This manuscript aimed to: (1) describe the design and implementation of a Web-based software application to facilitate deployment of a multisite behavioral science intervention; and (2) report on results of a survey to capture end-user perspectives of the impact of this platform on the conduct of a behavioral intervention trial. In order to support the implementation of the NIAID-funded trial iENGAGE, we developed software to deploy a 4-site behavioral intervention for new clinic patients with HIV/AIDS. We integrated the study coordinator into the informatics team to participate in the software development process. Here, we report the key software features and the results of the 25-item survey to evaluate user perspectives on research and intervention activities specific to the iENGAGE trial (N=13). The key features addressed are study enrollment, participant randomization, real-time data collection, facilitation of longitudinal workflow, reporting, and reusability. We found 100% user agreement (13/13) that participation in the database design and/or testing phase made it easier to understand user roles and responsibilities and recommended participation of research teams in developing databases for future studies. Users acknowledged ease of use, color flags, longitudinal work flow, and data storage in one location as the most useful features of the software platform and issues related to saving participant forms, security restrictions, and worklist layout as least useful features. The successful development of the iENGAGE behavioral science research platform validated an approach of early and continuous involvement of the study team in design development. In addition, we recommend post-hoc collection of data from users as this led to important insights on how to enhance future software and inform standard clinical practices. Clinicaltrials.gov NCT01900236; (https://clinicaltrials.gov/ct2/show/NCT01900236 (Archived by WebCite at http://www.webcitation.org/6qAa8ld7v). ©Riddhi A Modi, Michael J Mugavero, Rivet K Amico, Jeanne Keruly, Evelyn Byrd Quinlivan, Heidi M Crane, Alfredo Guzman, Anne Zinski, Solange Montue, Katya Roytburd, Anna Church, James H Willig. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 16.06.2017.
Passive perception system for day/night autonomous off-road navigation
NASA Astrophysics Data System (ADS)
Rankin, Arturo L.; Bergh, Charles F.; Goldberg, Steven B.; Bellutta, Paolo; Huertas, Andres; Matthies, Larry H.
2005-05-01
Passive perception of terrain features is a vital requirement for military-related unmanned autonomous vehicle operations, especially under electromagnetic signature management conditions. As a member of Team Raptor, the Jet Propulsion Laboratory developed a self-contained passive perception system under the DARPA-funded PerceptOR program. An environmentally protected forward-looking sensor head was designed and fabricated in-house to straddle an off-the-shelf pan-tilt unit. The sensor head contained three color cameras for multi-baseline daytime stereo ranging, a pair of cooled mid-wave infrared cameras for nighttime stereo ranging, and supporting electronics to synchronize captured imagery. Narrow-baseline stereo provided improved range data density in cluttered terrain, while wide-baseline stereo provided more accurate ranging for operation at higher speeds in relatively open areas. The passive perception system processed stereo images and output, over a local area network, terrain maps containing elevation, terrain type, and detected hazards. A novel software architecture was designed and implemented to distribute the data processing on a 533 MHz quad 7410 PowerPC single-board computer under the VxWorks real-time operating system. This architecture, which is general enough to operate on N processors, has subsequently been tested on Pentium-based processors under Windows and Linux, and on a SPARC-based processor under Unix. The passive perception system was operated during FY04 PerceptOR program evaluations at Fort A. P. Hill, Virginia, and Yuma Proving Ground, Arizona. This paper discusses the Team Raptor passive perception system hardware and software design, implementation, and performance, and describes a road map to faster and improved passive perception.
Using Selection Pressure as an Asset to Develop Reusable, Adaptable Software Systems
NASA Astrophysics Data System (ADS)
Berrick, S. W.; Lynnes, C.
2007-12-01
The Goddard Earth Sciences Data and Information Services Center (GES DISC) at NASA has over the years developed and honed a number of reusable architectural components for supporting large-scale data centers with a large customer base. These include a processing system (S4PM) and an archive system (S4PA) based upon a workflow engine called the Simple, Scalable, Script-based Science Processor (S4P); an online data visualization and analysis system (Giovanni); and the radically simple and fast data search tool, Mirador. These subsystems are currently reused internally in a variety of combinations to implement customized data management on behalf of instrument science teams and other science investigators. Some of these subsystems (S4P and S4PM) have also been reused by other data centers for operational science processing. Our experience has been that development and utilization of robust, interoperable, and reusable software systems can actually flourish in environments defined by heterogeneous commodity hardware systems, the emphasis on value-added customer service, and continual cost reduction pressures. The repeated internal reuse that is fostered by such an environment encourages and even forces changes to the software that make it more reusable and adaptable. Allowing and even encouraging such selective pressures to software development has been a key factor in the success of S4P and S4PM, which are now available to the open source community under the NASA Open Source Agreement.
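One common way to structure a script-based processor of this kind is as stations that pick up work-order files, run a processing step, and hand new work orders downstream. The Python sketch below illustrates that pattern only; the file naming and directory layout are assumptions, not necessarily S4P's actual conventions or implementation language.

```python
# Minimal sketch of a script-driven "station" in the spirit of a workflow
# engine such as S4P: pick up work-order files from an input directory,
# run a processing step, and drop new work orders into the next station's
# directory.  File names and layout here are assumptions for illustration.
import pathlib

def run_station(in_dir: pathlib.Path, out_dir: pathlib.Path, process) -> int:
    """Process every pending work order once; return how many were handled."""
    out_dir.mkdir(parents=True, exist_ok=True)
    handled = 0
    for work_order in sorted(in_dir.glob("DO.*.wo")):
        result = process(work_order.read_text())
        (out_dir / work_order.name).write_text(result)   # hand off downstream
        work_order.unlink()                               # mark as consumed
        handled += 1
    return handled

# Example wiring: a "subset" station feeding an "archive" station.
base = pathlib.Path("stations")
(base / "subset").mkdir(parents=True, exist_ok=True)
(base / "subset" / "DO.granule_001.wo").write_text("granule_001")
n = run_station(base / "subset", base / "archive",
                process=lambda text: text + "\nsubset: done")
print(f"subset station handled {n} work order(s)")
```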
NASA Astrophysics Data System (ADS)
Brewer, Denise
The air transport industry (ATI) is a dynamic, communal, international, and intercultural environment in which the daily operations of airlines, airports, and service providers are dependent on information technology (IT). Many of the IT legacy systems are more than 30 years old, and current regulations and the globally distributed workplace have brought profound changes to the way the ATI community interacts. The purpose of the study was to identify the areas of resistance to change in the ATI community and the corresponding factors in change management requirements that minimize product development delays and lead to a successful and timely shift from legacy to open web-based systems in upgrading ATI operations. The research questions centered on product development team processes as well as the members' perceived need for acceptance of change. A qualitative case study approach rooted in complexity theory was employed using a single case of an intercultural product development team dispersed globally. Qualitative data gathered from questionnaires were organized using Nvivo software, which coded the words and themes. Once coded, themes emerged identifying the areas of resistance within the product development team. Results of follow-up interviews with team members suggests that intercultural relationship building prior to and during project execution; focus on common team goals; and, development of relationships to enhance interpersonal respect, understanding and overall communication help overcome resistance to change. Positive social change in the form of intercultural group effectiveness evidenced in increased team functioning during major project transitions is likely to result when global managers devote time to cultural understanding.
[Investigation of team processes that enhance team performance in business organization].
Nawata, Kengo; Yamaguchi, Hiroyuki; Hatano, Toru; Aoshima, Mika
2015-02-01
Many researchers have suggested team processes that enhance team performance. However, past team process models were based on crew teams, in which all members perform an indivisible, temporary task. These models may be inapplicable to business teams, whose members perform medium- and long-term tasks assigned to them individually. This study modified the teamwork model of Dickinson and McIntyre (1997) and aimed to demonstrate a whole team process that enhances the performance of business teams. We surveyed five companies (member N = 1,400, team N = 161) and investigated team-level processes. Results showed that there were two sides of team processes: "communication" and "collaboration to achieve a goal." Team processes in which communication enhanced collaboration improved team performance with regard to all aspects of the quantitative objective index (e.g., current income and number of sales), supervisor rating, and self-rating measurements. On the basis of these results, we discuss the entire process by which teamwork enhances team performance in business organizations.
An approach to verification and validation of a reliable multicasting protocol: Extended Abstract
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1995-01-01
This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. This initial version did not handle off-nominal cases such as network partitions or site failures. Meanwhile, the V&V team concurrently developed a formal model of the requirements using a variant of SCR-based state tables. Based on these requirements tables, the V&V team developed test cases to exercise the implementation. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test in the model and implementation agreed, then the test either found a potential problem or verified a required behavior. However, if the execution of a test was different in the model and implementation, then the differences helped identify inconsistencies between the model and implementation. In either case, the dialogue between both teams drove the co-evolution of the model and implementation. We have found that this interactive, iterative approach to development allows software designers to focus on delivery of nominal functionality while the V&V team can focus on analysis of off nominal cases. Testing serves as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP. Although RMP has provided our research effort with a rich set of test cases, it also has practical applications within NASA. For example, RMP is being considered for use in the NASA EOSDIS project due to its significant performance benefits in applications that need to replicate large amounts of data to many network sites.
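The testing loop described above, running the same test against the state model and against the implementation and comparing the outcomes, can be illustrated with a small sketch. The states, events and seeded defect below are invented for illustration and bear no relation to RMP's actual SCR-style state tables.

```python
# Illustrative sketch of the model-vs-implementation testing loop: the same event
# sequence is replayed against a simple state model and against the implementation,
# and any divergence flags an inconsistency to be investigated by the two teams.

MODEL = {                     # (state, event) -> next state
    ("idle", "join"): "member",
    ("member", "send"): "member",
    ("member", "leave"): "idle",
}

def run_model(events):
    state = "idle"
    for ev in events:
        state = MODEL.get((state, ev), "off-nominal")
    return state

class Implementation:         # stand-in for the protocol code under test
    def __init__(self):
        self.state = "idle"
    def step(self, ev):
        if ev == "join":
            self.state = "member"
        elif ev == "leave":
            self.state = "idle"
        # note: "send" while idle is silently ignored here -- a seeded defect
    def run(self, events):
        for ev in events:
            self.step(ev)
        return self.state

def compare(events):
    m, i = run_model(events), Implementation().run(events)
    return "agree" if m == i else f"inconsistency: model={m} impl={i}"

print(compare(["join", "send", "leave"]))   # agree
print(compare(["send"]))                    # exposes the model/implementation divergence
```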
A Toolkit For CryoSat Investigations By The ESRIN EOP-SER Altimetry Team
NASA Astrophysics Data System (ADS)
Dinardo, Salvatore; Bruno, Lucas; Benveniste, Jerome
2013-12-01
The scope of this work is to feature the new tool for the exploitation of CryoSat data, designed and developed entirely by the Altimetry Team at ESRIN EOP-SER (Earth Observation - Exploitation, Research and Development). The tool framework is composed of two separate components: the first handles data collection and management, the second is the processing toolkit. The CryoSat FBR (Full Bit Rate) data is downlinked uncompressed from the satellite and contains un-averaged individual echoes. This data is made available on the Kiruna CalVal server in a 10-day rolling archive. Daily at ESRIN, all the CryoSat FBR data in SAR and SARin mode (around 30 gigabytes) are downloaded, catalogued and archived on local ESRIN EOP-SER workstations. As of March 2013, the total amount of FBR data is over 9 terabytes, with CryoSat acquisition dates spanning January 2011 to February 2013 (with some gaps). This archive was built by merging partial datasets available at ESTEC and NOAA, which were kindly made available to the EOP-SER team. On-demand access to this low-level data is restricted to expert users with validated ESA P.I. credentials. Currently the main users of the archiving functionality are the team members of the project CP4O (STSE CryoSat Plus for Oceans), CNES and NOAA. The second component of the service is the processing toolkit. On the EOP-SER workstations there is internally and independently developed software that is able to process the FBR data in SAR/SARin mode to generate multi-looked echoes (Level 1B) and subsequently to re-track them in SAR and SARin mode (Level 2) over the open ocean, exploiting the SAMOSA model and other internally developed models. The processing segment is used for research and development purposes: supporting awarded development contracts by comparison against the deliverables to ESA, on-site demonstrations and training for selected users, cross-comparison against third-party products (the CLS/CNES CPP products, for instance), preparation for the Sentinel-3 mission, publications, etc. Samples of these experimental SAR/SARin L1b/L2 products can be provided on request to the scientific community for comparison with self-processed data. So far, the processing has been designed and optimized for open-ocean studies and is fully functional only over this kind of surface, but there are plans to extend this processing capacity to coastal zones, inland waters and land, with a view to maximizing the exploitation of the upcoming Sentinel-3 topographic mission over all surfaces. There are also plans to make the toolkit fully accessible through software “gridification” to run in the ESRIN G-POD (Grid Processing on Demand) service and to extend the tool's functionalities to support the Sentinel-3 mission (both simulated and real data). Graphs and statistics about the spatial coverage and amount of FBR data actually archived on the EOP-SER workstations, and some scientific results, will be shown in this paper along with the tests that have been designed and performed to validate the products (tests against CryoSat Kiruna PDGS products and against transponder data).
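As a rough illustration of the daily catalogue-and-archive step described above, the sketch below files downloaded FBR granules by mode and acquisition date and records them in a small catalogue. The directory layout, file naming and metadata fields are assumptions; the actual ESRIN EOP-SER archive and its interface to the Kiruna rolling archive are not described at this level in the paper.

```python
# Minimal sketch of a daily catalogue-and-archive step for CryoSat FBR granules.
# All paths, the file-naming convention and the metadata fields are assumptions.
import os, shutil, sqlite3
from datetime import date

ARCHIVE_ROOT = "/data/fbr"            # hypothetical local archive root
CATALOG_DB = "/data/fbr/catalog.db"   # hypothetical catalogue database

def catalog_file(path: str, mode: str, acq_date: date) -> None:
    """Move one downloaded FBR granule into the archive and record it."""
    dest_dir = os.path.join(ARCHIVE_ROOT, mode, acq_date.isoformat())
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, os.path.basename(path))
    shutil.move(path, dest)
    with sqlite3.connect(CATALOG_DB) as db:
        db.execute(
            "CREATE TABLE IF NOT EXISTS granules "
            "(name TEXT PRIMARY KEY, mode TEXT, acq_date TEXT, size_bytes INTEGER)"
        )
        db.execute(
            "INSERT OR REPLACE INTO granules VALUES (?, ?, ?, ?)",
            (os.path.basename(dest), mode, acq_date.isoformat(), os.path.getsize(dest)),
        )

# Hypothetical usage:
# catalog_file("/tmp/CS_FBR_SAR_20130201.DBL", "SAR", date(2013, 2, 1))
```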
Dust Tsunamis, Blackouts and 50 deg C: Teaching MATLAB in East Africa
NASA Astrophysics Data System (ADS)
Trauth, M. H.
2016-12-01
MATLAB is the tool of choice for analyzing earth and environmental data from East Africa. The software and its companion toolboxes help to process satellite images and digital elevation models, to detect trends, cycles, and recurrent, characteristic types of climate transitions in climate time series, and to model the hydrological balance of ancient lakes. The advantage of MATLAB is that the user can do many different types of analyses with the same software, making it very attractive for young scientists at African universities. Since 2009 we have been organizing summer schools on data analysis with various tools, including MATLAB, in Ethiopia, Kenya and Tanzania. Throughout the summer school, participants are instructed by teams of senior researchers together with young scientists, some of whom were participants in an earlier summer school. The participants are themselves integrated into the teaching, depending on previous knowledge, so that the boundary between teachers and learners constantly shifts or even dissolves. We report on the extraordinarily positive experience, but also on the difficulties, of teaching data analysis methods with MATLAB in East Africa.
Introducing Risk Management Techniques Within Project Based Software Engineering Courses
NASA Astrophysics Data System (ADS)
Port, Daniel; Boehm, Barry
2002-03-01
In 1996, USC switched its core two-semester software engineering course from a hypothetical-project, homework-and-exam course based on the Bloom taxonomy of educational objectives (knowledge, comprehension, application, analysis, synthesis, and evaluation) to a real-client, team-project course based on the CRESST model of learning objectives (content understanding, problem solving, collaboration, communication, and self-regulation). We used the CRESST cognitive demands analysis to determine the student skills required for software risk management and the other major project activities, and have been refining the approach over the last 5 years of experience, including revised versions for one-semester undergraduate and graduate project courses at Columbia. This paper summarizes our experiences in evolving the risk management aspects of the project course. These have helped us mature more general techniques such as risk-driven specifications, domain-specific simplifier and complicator lists, and the schedule-as-an-independent-variable (SAIV) process model. The largely positive results in terms of pass/fail rates, client evaluations, product adoption rates, and hiring-manager feedback are summarized as well.
Formal methods demonstration project for space applications
NASA Technical Reports Server (NTRS)
Divito, Ben L.
1995-01-01
The Space Shuttle program is cooperating in a pilot project to apply formal methods to live requirements analysis activities. As one of the larger ongoing shuttle Change Requests (CRs), the Global Positioning System (GPS) CR involves a significant upgrade to the Shuttle's navigation capability. Shuttles are to be outfitted with GPS receivers, and the primary avionics software will be enhanced to accept GPS-provided positions and integrate them into navigation calculations. Prior to implementing the CR, requirements analysts at Loral Space Information Systems, the Shuttle software contractor, must scrutinize the CR to identify and resolve any requirements issues. We describe an ongoing task of the Formal Methods Demonstration Project for Space Applications whose goal is to find an effective way to use formal methods in the GPS CR requirements analysis phase. This phase is currently under way, and a small team from NASA Langley, ViGYAN Inc. and Loral is now engaged in this task. Background on the GPS CR is provided and an overview of the hardware/software architecture is presented. We outline the approach being taken to formalize the requirements, only a subset of which is being attempted. The approach features the use of the PVS specification language to model 'principal functions', which are major units of Shuttle software. Conventional state machine techniques form the basis of our approach. Given this background, we present interim results based on a snapshot of work in progress. Samples of requirements specifications rendered in PVS are offered as illustration. We walk through a specification sketch for the principal function known as GPS Receiver State processing. Results to date are summarized and feedback from Loral requirements analysts is highlighted. Preliminary data is shown comparing issues detected by the formal methods team versus those detected using existing requirements analysis methods. We conclude by discussing our plan to complete the remaining activities of this task.
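The abstract describes modeling 'principal functions' with conventional state machine techniques in PVS. The sketch below is not the team's PVS model; it is a minimal Python rendering of the same idea, a total transition function over a typed state, with invented states and events loosely evocative of a GPS receiver function.

```python
# Conventional state-machine sketch (Python, not PVS). States, events and the
# four-satellite tracking rule are invented for illustration only.
from dataclasses import dataclass

STATES = {"INIT", "ACQUIRING", "TRACKING", "FAULT"}

@dataclass(frozen=True)
class GpsState:
    mode: str
    satellites: int

def transition(s: GpsState, event: str) -> GpsState:
    """Total transition function: every (state, event) pair yields a defined result."""
    assert s.mode in STATES
    if event == "power_on" and s.mode == "INIT":
        return GpsState("ACQUIRING", 0)
    if event == "sat_locked" and s.mode in {"ACQUIRING", "TRACKING"}:
        n = s.satellites + 1
        return GpsState("TRACKING" if n >= 4 else "ACQUIRING", n)
    if event == "receiver_fail":
        return GpsState("FAULT", 0)
    return s  # all other events leave the state unchanged

s = GpsState("INIT", 0)
for ev in ["power_on"] + ["sat_locked"] * 4:
    s = transition(s, ev)
print(s)  # GpsState(mode='TRACKING', satellites=4)
```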
SU-E-T-419: Workflow and FMEA in a New Proton Therapy (PT) Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, C; Wessels, B; Hamilton, H
2014-06-01
Purpose: Workflow is an important component in the operational planning of a new proton facility. By integrating the concept of failure mode and effects analysis (FMEA) with traditional QA requirements, a workflow for a proton therapy treatment course is set up. This workflow serves as the blueprint for the planning of computer hardware/software requirements and network flow. A slight modification of the workflow generates a process map (PM) for FMEA and the planning of a QA program in PT. Methods: A flowchart is first developed outlining the sequence of processes involved in a PT treatment course. Each process consists of a number of sub-processes to encompass a broad scope of treatment and QA procedures. For each sub-process, the personnel involved, the equipment needed, and the computer hardware/software as well as network requirements are defined by a team of clinical staff, administrators and IT personnel. Results: Eleven intermediate processes with a total of 70 sub-processes involved in a PT treatment course are identified. The number of sub-processes varies, ranging from 2 to 12. The sub-processes within each process are used for the operational planning. For example, in the CT-Sim process, there are 12 sub-processes: three involve data entry/retrieval from a record-and-verify system, two are controlled by the CT computer, two require the department/hospital network, and the other five are setup procedures. IT then decides the number of computers needed and the software and network requirements. By removing the traditional QA procedures from the workflow, a PM is generated for FMEA analysis to design a QA program for PT. Conclusion: Significant efforts are involved in the development of the workflow for a PT treatment course. Our hybrid model combining FMEA and a traditional QA program serves the dual purpose of efficient operational planning and design of a QA program in PT.
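A process map of this kind is typically turned into an FMEA worksheet by scoring each failure mode; the risk priority number (RPN = severity x occurrence x detectability) is the conventional FMEA metric. The sub-processes, failure modes and scores in the sketch below are invented and are not taken from the paper's CT-Sim analysis.

```python
# Sketch of turning a process map into an FMEA worksheet ranked by risk priority
# number. The example sub-processes, failure modes and scores are invented.
from dataclasses import dataclass

@dataclass
class FailureMode:
    process: str
    subprocess: str
    failure: str
    severity: int       # 1 (negligible) .. 10 (catastrophic)
    occurrence: int     # 1 (rare) .. 10 (frequent)
    detectability: int  # 1 (always caught) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detectability

worksheet = [
    FailureMode("CT-Sim", "record-and-verify entry", "wrong patient selected", 9, 2, 3),
    FailureMode("CT-Sim", "network transfer", "image set not transferred", 5, 3, 2),
    FailureMode("CT-Sim", "setup", "immobilization device mismatch", 7, 2, 4),
]

for fm in sorted(worksheet, key=lambda f: f.rpn, reverse=True):
    print(f"RPN {fm.rpn:3d}  {fm.subprocess}: {fm.failure}")
```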
2002-11-21
The second X-45A Unmanned Combat Air Vehicle (UCAV) technology demonstrator completed its first flight on November 21, 2002, after taking off from a dry lakebed at NASA's Dryden Flight Research Center, Edwards Air Force Base, California. X-45A vehicle two flew for approximately 30 minutes and reached an airspeed of 195 knots and an altitude of 7500 feet. This flight validated the functionality of the UCAV flight software on the second air vehicle. Dryden is supporting the DARPA/Boeing team in the design, development, integration, and demonstration of the critical technologies, processes, and system attributes leading to an operational UCAV system. Dryden support of the X-45A demonstrator system includes analysis, component development, simulations, ground and flight tests.
Plagiarism Detection by Online Solutions.
Masic, Izet; Begic, Edin; Dobraca, Amra
2017-01-01
The problem of plagiarism represents one of the burning issues of the modern scientific world. Detection of plagiarism is a problem that Editorial Boards encounter in their daily work. Software tools represent a good solution for the detection of plagiarism. The problem of plagiarism will become the most discussed topic of the modern scientific world, especially due to the development of standard measures that rank the work of an author. Investment in education, and the education of young research personnel about the importance of scientific research, with particular attention to ethical behavior, becomes an imperative of academic staff. Editors have to invest additional effort in developing their base of reviewers as well as in their proper guidance, because after all, despite the software solutions, reviewers are the best weapon in the fight against plagiarism. The peer review process should be a key to the successful operation of each journal.
Workflow in interventional radiology: uterine fibroid embolization (UFE)
NASA Astrophysics Data System (ADS)
Lindisch, David; Neumuth, Thomas; Burgert, Oliver; Spies, James; Cleary, Kevin
2008-03-01
Workflow analysis can be used to record the steps taken during clinical interventions with the goal of identifying bottlenecks and streamlining procedure efficiency. In this study, we recorded the workflow for uterine fibroid embolization (UFE) procedures in the interventional radiology suite at Georgetown University Hospital in Washington, DC, USA. We employed a custom client/server software architecture developed by the Innovation Center for Computer Assisted Surgery (ICCAS) at the University of Leipzig, Germany. This software runs in a JAVA environment and enables an observer to record the actions taken by the physician and surgical team during these interventions. The recorded data is stored as an XML document, which can then be further processed. We recorded data from 30 patients and found a mean intervention time of 01:49:46 (+/- 16:04). The critical intervention step, the embolization, had a mean time of 00:15:42 (+/- 05:49), which was only 15% of the total intervention time.
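The recording approach, timestamped workflow steps serialized to an XML document, can be sketched in a few lines. The example below uses Python's standard library rather than the ICCAS Java tool, and the element and attribute names are invented rather than taken from the ICCAS schema.

```python
# Minimal sketch of recording timestamped intervention steps to an XML document.
# Element and attribute names are invented; the ICCAS recorder uses its own schema.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def new_recording(procedure: str) -> ET.Element:
    return ET.Element("workflowRecording", procedure=procedure)

def record_step(root: ET.Element, actor: str, action: str) -> None:
    ET.SubElement(
        root, "step",
        actor=actor, action=action,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

rec = new_recording("UFE")
record_step(rec, "physician", "femoral artery access")
record_step(rec, "physician", "embolization of left uterine artery")
ET.ElementTree(rec).write("ufe_case.xml", xml_declaration=True, encoding="utf-8")
```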
From, by, and for the OSSD: Software Engineering Education Using an Open Source Software Approach
ERIC Educational Resources Information Center
Huang, Kun; Dong, Yifei; Ge, Xun
2006-01-01
Computing is a complex, multidisciplinary field that requires a range of professional proficiencies. Computing students are expected to develop in-depth knowledge and skills, integrate and apply their knowledge flexibly to solve complex problems, and work successfully in teams. However, many students who graduate with degrees in computing fail to…
Agile Methods: Selected DoD Management and Acquisition Concerns
2011-10-01
Acronym list fragment: SIDRE, Software Intensive Innovative Development and Reengineering/Evolution; SLIM, Software Lifecycle Management-Estimate; SLOC, source lines of code. Reference list fragment: ISBN #0321502752; Coaching Agile Teams, Lyssa Adkins, ISBN #0321637704; Agile Project Management: Creating Innovative Products, Second Edition, Jim Highsmith ... Accessed July 13, 2011. [Highsmith 2009] Highsmith, J. Agile Project Management: Creating Innovative Products, 2nd ed. Addison-Wesley, 2009.
2007-09-01
Software listing fragment: ...Motion, URL: http://www.blackberry.com/products/blackberry/index.shtml; Software Name: Bricolage, Company: Bricolage, URL: http://www.bricolage.cc. Feature/Description/Software/Company table fragment: Workflow, customizable control over editorial content, Bricolage, Bricolage; Workflow, allows development of ... content for Nuxeo Collaborative Portal projects, Nuxeo; Workspace, add, edit, delete content through web interface, Bricolage, Bricolage.
Using iKidTools™ Software Support Systems to Develop and Implement Self-Monitoring Interventions
ERIC Educational Resources Information Center
Patti, Angela L.; Miller, Kevin J.
2011-01-01
Educational teams often are faced with the task of developing and implementing Behavioral Intervention Plans (BIPs) for students who present challenging and/or disruptive behaviors. This article describes the steps used to develop and implement a self-monitoring BIP that incorporated an innovative software system, iKidTools™. An authentic case…
NURBS-Based Geometry for Integrated Structural Analysis
NASA Technical Reports Server (NTRS)
Oliver, James H.
1997-01-01
This grant was initiated in April 1993 and completed in September 1996. The primary goal of the project was to exploit the emerging de facto CAD standard of Non-Uniform Rational B-spline (NURBS) based curve and surface geometry to integrate and streamline the process of turbomachinery structural analysis. We focused our efforts on critical geometric modeling challenges typically posed by the requirements of structural analysts. We developed a suite of software tools that facilitate pre- and post-processing of NURBS-based turbomachinery blade models for finite element structural analyses. We also developed tools to facilitate the modeling of blades in their manufactured (or cold) state based on nominal operating shape and conditions. All of the software developed in the course of this research is written in the C++ language using the Iris Inventor 3D graphical interface toolkit from Silicon Graphics. In addition to enhanced modularity, improved maintainability, and efficient prototype development, this design facilitates the reuse of code developed for other NASA projects and provides a uniform and professional 'look and feel' for all applications developed by the Iowa State team.
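For readers unfamiliar with NURBS geometry, the sketch below evaluates a point on a NURBS curve using the Cox-de Boor recursion for the basis functions and a rational (weighted) combination of control points. It is a textbook illustration in Python, not part of the grant's C++/Iris Inventor tools; the quarter-circle example exercises a property that distinguishes NURBS from plain B-splines, namely exact representation of conics.

```python
# Textbook NURBS curve evaluation: Cox-de Boor basis functions plus a rational
# combination of weighted control points. Valid for u inside the half-open knot span.
def basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, degree, ctrl, weights, knots):
    """Evaluate a 2D NURBS curve point as a weighted rational combination of control points."""
    num = [0.0, 0.0]
    den = 0.0
    for i, (P, w) in enumerate(zip(ctrl, weights)):
        b = basis(i, degree, u, knots) * w
        num[0] += b * P[0]
        num[1] += b * P[1]
        den += b
    return (num[0] / den, num[1] / den)

# Quadratic NURBS quarter circle from (1, 0) to (0, 1): a classic exact-conic example.
ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
weights = [1.0, 2 ** 0.5 / 2, 1.0]
knots = [0, 0, 0, 1, 1, 1]
x, y = nurbs_point(0.5, 2, ctrl, weights, knots)
print(x, y, (x * x + y * y) ** 0.5)  # the evaluated point lies on the unit circle
```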
NASA Technical Reports Server (NTRS)
Freeman, W.; Ilcewicz, L.; Swanson, G.; Gutowski, T.
1992-01-01
The Structures Technology Program Office (STPO) at NASA LaRC has initiated development of a conceptual and preliminary designers' cost prediction model. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a database and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. This paper presents the team members, approach, goals, plans, and progress to date for development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).
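The abstract's "theoretical cost functions that relate geometric design features to summed material cost and labor content" can be pictured with a toy parametric model. The functional form and every coefficient below are invented placeholders, not COSTADE's actual relations.

```python
# Toy geometry-driven cost function: material cost from panel volume plus labor
# content from ply count and area. All rates and coefficients are assumed values
# chosen only to make the example run; they are not COSTADE's model.
def panel_cost(area_m2, thickness_mm, ply_count,
               material_rate=55.0,          # $ per kg of prepreg (assumed)
               density=1.6e-3,              # kg per cm^3 for carbon/epoxy (approx.)
               layup_min_per_ply_m2=12.0,   # labor minutes per ply per m^2 (assumed)
               labor_rate=90.0):            # $ per labor hour (assumed)
    volume_cm3 = area_m2 * 1e4 * (thickness_mm / 10.0)
    material = volume_cm3 * density * material_rate
    labor_hours = ply_count * area_m2 * layup_min_per_ply_m2 / 60.0
    labor = labor_hours * labor_rate
    return {"material": material, "labor": labor, "total": material + labor}

print(panel_cost(area_m2=2.0, thickness_mm=4.0, ply_count=32))
```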
Cassini-Huygens maneuver automation for navigation
NASA Technical Reports Server (NTRS)
Goodson, Troy; Attiyah, Amy; Buffington, Brent; Hahn, Yungsun; Pojman, Joan; Stavert, Bob; Strange, Nathan; Stumpf, Paul; Wagner, Sean; Wolff, Peter;
2006-01-01
Many times during the Cassini-Huygens mission to Saturn, propulsive maneuvers must be spaced so closely together that there isn't enough time or workforce to execute the maneuver-related software manually, one subsystem at a time. Automation is required. Automating the maneuver design process has involved close cooperation between teams. We present the contribution from the Navigation system. In scope, this includes trajectory propagation and search, generation of ephemerides, general tasks such as email notification and file transfer, and presentation materials. The software has been used to help understand maneuver optimization results, Huygens probe delivery statistics, and Saturn ring-plane crossing geometry. The Maneuver Automation Software (MAS), developed for the Cassini-Huygens program, enables frequent maneuvers by handling mundane tasks such as creation of deliverable files, file delivery, generation and transmission of email announcements, and generation of presentation material and other supporting documentation. By hand, these tasks took up hours, if not days, of work for each maneuver. Automated, they may be completed in under an hour. During the cruise trajectory, the spacing of maneuvers was such that development of a maneuver design could span about a month, involving several other processes in addition to those described above. Often, about the last five days of this process covered the generation of a final design using an updated orbit-determination estimate. To support the tour trajectory, the orbit-determination data cut-off of five days before the maneuver needed to be reduced to approximately one day, and the whole maneuver development process needed to be reduced to less than a week.
Perfecting Scientists' Collaboration and Problem-Solving Skills in the Virtual Team Environment
NASA Astrophysics Data System (ADS)
Jabro, A.; Jabro, J.
2012-04-01
Numerous factors have contributed to the proliferation of conducting work in virtual teams at the domestic, national, and global levels: innovations in technology, critical developments in software, co-located research partners and diverse funding sources, dynamic economic and political environments, and a changing workforce. Today's scientists must be prepared not only to perform work in the virtual team environment, but to work effectively and efficiently despite physical and cultural barriers. Research supports that students who have been exposed to virtual team experiences are desirable in the professional and academic arenas. Research also supports that establishing and maintaining protocols for communication behavior prior to task discussion provides for successful team outcomes. Research conducted on graduate and undergraduate virtual teams' behaviors led to the development of successful pedagogic practices and assessment strategies.
A Distributed Simulation Software System for Multi-Spacecraft Missions
NASA Technical Reports Server (NTRS)
Burns, Richard; Davis, George; Cary, Everett
2003-01-01
The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements, with functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival and distribution addressed.
NASA Astrophysics Data System (ADS)
Segret, Boris; Semery, Alain; Vannitsen, Jordan; Mosser, Benoît.; Miau, Jiun-Jih; Juang, Jyh-Ching; Deleflie, Florent
2014-08-01
The AGILE principles of the software industry seem well adapted to the paradigm of CubeSat missions that involve students in the development of space missions. Some well-known engineering and program processes are revisited using the example of an interplanetary CubeSat mission profile that has been developed by several teams of students in various countries and at various educational levels since 02/2013. The lessons learned in adapting traditional space mission methods are emphasized; they produce a metaphoric image of paving stones.
The First Year of Croatian Meteor Network
NASA Astrophysics Data System (ADS)
Andreic, Zeljko; Segon, Damir
2010-08-01
The idea behind and a short history of the Croatian Meteor Network (CMN) are described. Based on the use of cheap surveillance cameras, standard PC-TV cards and old PCs, the Network allows schools, amateur societies and individuals to participate in a photographic meteor patrol program. The network has a strong educational component, and many cameras are located at or around teaching facilities. Data obtained by these cameras are collected and processed by the scientific team of the network. Currently 14 cameras are operable, covering a large part of the Croatian sky; data gathering is fully functional, and the data reduction software is in the testing phase.
VR Medical Gamification for Training and Education.
Nicola, Stelian; Virag, Ioan; Stoicu-Tivadar, Lăcrămioara
2017-01-01
New virtual reality (VR) based medical applications are providing a better understanding of healthcare-related subjects for both medical students and physicians. The work presented in this paper underlines gamification as a concept and uses VR as a new modality to study the human skeleton. The team proposes a mobile Android platform application based on the Unity 5.4 editor and the Google VR SDK. The results confirmed that the approach provides a more intuitive user experience during the learning process, concluding that the gamification of classical medical software provides an increased level of interactivity for medical students during the study of the human skeleton.
Rotational fluid flow experiment
NASA Technical Reports Server (NTRS)
1991-01-01
This project, which began in 1986 as part of the Worcester Polytechnic Institute (WPI) Advanced Space Design Program, focuses on the design and implementation of an electromechanical system for studying vortex behavior in a microgravity environment. Most of the existing equipment was revised and redesigned by this project team, as necessary. Emphasis was placed on documentation and integration of the electrical and mechanical subsystems. Project results include reconfiguration and thorough testing of all hardware subsystems, implementation of an infrared gas entrainment detector, new signal processing circuitry for the ultrasonic fluid circulation device, improved prototype interface circuits, and software for overall control of experiment operation.
Solar wind monitor—a school geophysics project
NASA Astrophysics Data System (ADS)
Robinson, Ian
2018-05-01
Described is an established geophysics project to construct a solar wind monitor based on a nT-resolution fluxgate magnetometer. Low-cost and appropriate from school to university level, it incorporates elements of astrophysics, geophysics, electronics, programming, computer networking and signal processing. The system monitors the Earth's field in real time, uploading data and graphs to a website every few minutes. Modular design encourages construction and testing by teams of students as well as expansion and refinement. The system has been tested running unattended for months at a time. Both the hardware design and the software are published as open source [1, 10].
Farinango, Charic D; Benavides, Juan S; Cerón, Jesús D; López, Diego M; Álvarez, Rosa E
2018-01-01
Previous studies have demonstrated the effectiveness of information and communication technologies to support healthy lifestyle interventions. In particular, personal health record systems (PHR-Ss) empower self-care, which is essential to support lifestyle changes. Approaches such as user-centered design (UCD), which is already a standard within the software industry (ISO 9241-210:2010), provide specifications and guidelines to guarantee user acceptance and quality of eHealth systems. However, no single PHR-S for metabolic syndrome (MS) developed following the recommendations of the ISO 9241-210:2010 specification has been found in the literature. The aim of this study was to describe the development of a PHR-S for the management of MS according to the principles and recommendations of the ISO 9241-210 standard. The proposed PHR-S was developed using a formal software development process which, in addition to the traditional activities of any software process, included the principles and recommendations of the ISO 9241-210 standard. To gather user information, a survey of 1,187 individuals, eight interviews, and a focus group with seven people were conducted. Throughout five iterations, three prototypes were built. Potential users evaluated each prototype. The quality attributes of efficiency, effectiveness, and user satisfaction were assessed using metrics defined in the ISO/IEC 25022 standard. The following results were obtained: 1) a technology profile of 1,187 individuals at risk for MS from the city of Popayan, Colombia, identifying that 75.2% of the people use the Internet and 51% had a smartphone; 2) a PHR-S to manage MS, with five main functionalities: recording the five MS risk factors, sharing these measures with health care professionals, and three educational modules on nutrition, stress management, and physical activity; and 3) usability tests on each prototype, with the following results: 100% effectiveness, 100% efficiency, and 84.2 points on the System Usability Scale. The software development methodology used was based on the ISO 9241-210 standard, which allowed the development team to maintain a focus on users' needs and requirements throughout the project, resulting in increased satisfaction and acceptance of the system. Additionally, the establishment of a multidisciplinary team allowed the application of considerations not only from the disciplines of software engineering and health sciences but also from other disciplines such as graphical design and media communication. Finally, usability testing allowed the observation of flaws in the designs, which helped to improve the solution.
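The 84.2-point usability figure is reported on the System Usability Scale (SUS). The standard SUS scoring rule (not code from this study) is shown below: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the total is scaled by 2.5 to a 0-100 range. The example responses are invented.

```python
# Standard System Usability Scale (SUS) scoring, shown only to explain the metric
# used in the study; the responses below are a made-up respondent.
def sus_score(responses):
    """responses: list of ten 1-5 Likert answers, item 1 first."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for idx, r in enumerate(responses, start=1):
        total += (r - 1) if idx % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([5, 2, 4, 1, 5, 2, 5, 1, 4, 2]))  # 87.5 for this invented respondent
```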
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Nicholas R.; Pointer, William David; Sieger, Matt
2016-04-01
The goal of this review is to enable application of codes or software packages for safety assessment of advanced sodium-cooled fast reactor (SFR) designs. To address near-term programmatic needs, the authors have focused on two objectives. First, they identified the requirements for software QA that must be satisfied to enable the application of software to future safety analyses. Second, they collected best practices applied by other code development teams to minimize the cost and time of initial code qualification activities and to recommend a path to the stated goal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Randy R.; Bass, Robert B.; Kouzes, Richard T.
2003-01-20
This paper provides a brief overview of the implementation of the Advanced Encryption Standard (AES) as a hash function for confirming the identity of software resident on a computer system. The PNNL Software Authentication team chose to use a hash function to confirm software identity on a system for situations where: (1) there is limited time to perform the confirmation and (2) access to the system is restricted to keyboard or thumbwheel input and output can only be displayed on a monitor. PNNL reviewed three popular algorithms: the Secure Hash Algorithm-1 (SHA-1), the Message Digest-5 (MD-5), and the Advanced Encryption Standard (AES), and selected the AES to incorporate in the software confirmation tool we developed. This paper gives a brief overview of the SHA-1, MD-5, and AES and cites references for further detail. It then explains the overall processing steps of the AES to reduce a large amount of generic data (the plaintext, such as is present in memory and other data storage media in a computer system) to a small amount of data (the hash digest), which is a mathematically unique representation or signature of the former and can be displayed on a computer's monitor. This paper starts with a simple definition and example to illustrate the use of a hash function. It concludes with a description of how the software confirmation tool uses the hash function to confirm the identity of software on a computer system.
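The abstract does not specify how PNNL built a hash out of AES, so the sketch below shows one generic construction, Davies-Meyer (H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}), over AES-128 using the PyCryptodome library, purely to illustrate reducing a large plaintext to a small fixed-size digest. It should not be read as the PNNL tool's actual algorithm.

```python
# Generic Davies-Meyer hash over AES-128: each 16-byte message block is used as
# the AES key to encrypt the running digest, which is then XORed back in.
# This is an illustrative construction, not PNNL's confirmation tool.
from Crypto.Cipher import AES  # pip install pycryptodome

BLOCK = 16  # AES block size in bytes

def aes_davies_meyer(data: bytes) -> bytes:
    # Pad with zero bytes plus an 8-byte length field so inputs of different
    # lengths cannot collide trivially.
    padded = data + b"\x00" * ((-len(data) - 8) % BLOCK) + len(data).to_bytes(8, "big")
    digest = b"\x00" * BLOCK  # fixed initial value
    for off in range(0, len(padded), BLOCK):
        block = padded[off:off + BLOCK]
        enc = AES.new(block, AES.MODE_ECB).encrypt(digest)
        digest = bytes(a ^ b for a, b in zip(enc, digest))
    return digest

# In real use the input would be the installed software image read from disk or memory;
# here a bytes literal stands in so the example runs as-is.
print(aes_davies_meyer(b"example software image contents").hex())
```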
Höss, Angelika; Lampe, Christian; Panse, Ralf; Ackermann, Benjamin; Naumann, Jakob; Jäkel, Oliver
2014-03-21
According to the latest amendment of the Medical Device Directive standalone software qualifies as a medical device when intended by the manufacturer to be used for medical purposes. In this context, the EN 62304 standard is applicable which defines the life-cycle requirements for the development and maintenance of medical device software. A pilot project was launched to acquire skills in implementing this standard in a hospital-based environment (in-house manufacture). The EN 62304 standard outlines minimum requirements for each stage of the software life-cycle, defines the activities and tasks to be performed and scales documentation and testing according to its criticality. The required processes were established for the pre-existent decision-support software FlashDumpComparator (FDC) used during the quality assurance of treatment-relevant beam parameters. As the EN 62304 standard implicates compliance with the EN ISO 14971 standard on the application of risk management to medical devices, a risk analysis was carried out to identify potential hazards and reduce the associated risks to acceptable levels. The EN 62304 standard is difficult to implement without proper tools, thus open-source software was selected and integrated into a dedicated development platform. The control measures yielded by the risk analysis were independently implemented and verified, and a script-based test automation was retrofitted to reduce the associated test effort. After all documents facilitating the traceability of the specified requirements to the corresponding tests and of the control measures to the proof of execution were generated, the FDC was released as an accessory to the HIT facility. The implementation of the EN 62304 standard was time-consuming, and a learning curve had to be overcome during the first iterations of the associated processes, but many process descriptions and all software tools can be re-utilized in follow-up projects. It has been demonstrated that a standards-compliant development of small and medium-sized medical software can be carried out by a small team with limited resources in a clinical setting. This is of particular relevance as the upcoming revision of the Medical Device Directive is expected to harmonize and tighten the current legal requirements for all European in-house manufacturers.
Ease of adoption of clinical natural language processing software: An evaluation of five systems.
Zheng, Kai; Vydiswaran, V G Vinod; Liu, Yang; Wang, Yue; Stubbs, Amber; Uzuner, Özlem; Gururaj, Anupama E; Bayer, Samuel; Aberdeen, John; Rumshisky, Anna; Pakhomov, Serguei; Liu, Hongfang; Xu, Hua
2015-12-01
In recognition of potential barriers that may inhibit the widespread adoption of biomedical software, the 2014 i2b2 Challenge introduced a special track, Track 3 - Software Usability Assessment, in order to develop a better understanding of the adoption issues that might be associated with state-of-the-art clinical NLP systems. This paper reports the ease-of-adoption assessment methods we developed for this track, and the results of evaluating five clinical NLP system submissions. A team of human evaluators performed a series of scripted adoptability test tasks with each of the participating systems. The evaluation team consisted of four "expert evaluators" with training in computer science, and eight "end user evaluators" with mixed backgrounds in medicine, nursing, pharmacy, and health informatics. We assessed how easy it is to adopt the submitted systems along the following three dimensions: communication effectiveness (i.e., how effective a system is in communicating its designed objectives to the intended audience), effort required to install, and effort required to use. We used a formal software usability testing tool, TURF, to record the evaluators' interactions with the systems and 'think-aloud' data revealing their thought processes when installing and using the systems and when resolving unexpected issues. Overall, the ease-of-adoption ratings that the five systems received are unsatisfactory. Installation of some of the systems proved to be rather difficult, and some systems failed to adequately communicate their designed objectives to intended adopters. Further, the average ratings provided by the end user evaluators on ease of use and ease of interpreting output are -0.35 and -0.53, respectively, indicating that this group of users generally deemed the systems extremely difficult to work with. While the ratings provided by the expert evaluators are higher, 0.6 and 0.45, respectively, these ratings are still low, indicating that they also experienced considerable struggles. The results of the Track 3 evaluation show that the adoptability of the five participating clinical NLP systems has a great margin for improvement. Remedy strategies suggested by the evaluators included (1) more detailed and operating-system-specific use instructions; (2) provision of more pertinent onscreen feedback for easier diagnosis of problems; (3) including screen walk-throughs in use instructions so users know what to expect and what might have gone wrong; (4) avoiding jargon and acronyms in materials intended for end users; and (5) packaging prerequisites required within software distributions so that prospective adopters of the software do not have to obtain each of the third-party components on their own. Copyright © 2015 Elsevier Inc. All rights reserved.
FY 2002 Report on Software Visualization Techniques for IV and V
NASA Technical Reports Server (NTRS)
Fotta, Michael E.
2002-01-01
One of the major challenges software engineers often face in performing IV&V is developing an understanding of a system created by a development team they have not been part of. As budgets shrink and software increases in complexity, this challenge will become even greater as these software engineers face increased time and resource constraints. This research will determine which current aspects of providing this understanding (e.g., code inspections, use of control graphs, use of adjacency matrices, requirements traceability) are critical to performing IV&V and amenable to visualization techniques. We will then develop state-of-the-art software visualization techniques to facilitate the use of these aspects to understand software and perform IV&V.
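One of the artifacts listed above, the adjacency matrix of a program's call or control structure, is easy to picture with a small sketch. The module names and call relation below are invented; a real IV&V tool would extract them from the code under review and feed the matrix to a visualization layer.

```python
# Build and print an adjacency matrix from a call relation. The modules and calls
# here are invented; in practice they would be extracted from the system under IV&V.
calls = {                      # caller -> callees (hypothetical system)
    "main": ["init", "scheduler"],
    "scheduler": ["dispatch", "log"],
    "dispatch": ["log"],
    "init": [],
    "log": [],
}

modules = sorted(calls)
index = {m: i for i, m in enumerate(modules)}
matrix = [[0] * len(modules) for _ in modules]
for caller, callees in calls.items():
    for callee in callees:
        matrix[index[caller]][index[callee]] = 1

print(" " * 10 + " ".join(f"{m:>9}" for m in modules))
for m in modules:
    print(f"{m:>10} " + " ".join(f"{v:>9}" for v in matrix[index[m]]))
```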
NASA Technical Reports Server (NTRS)
Hicks, Rebecca
2010-01-01
A fiber Bragg grating is a portion of the core of a fiber optic strand that has been treated to affect the way light travels through the strand. Light within a certain narrow range of wavelengths will be reflected along the fiber by the grating, while light outside that range will pass through the grating mostly undisturbed. Since the range of wavelengths that can penetrate the grating depends on the grating itself as well as temperature and mechanical strain, fiber Bragg gratings can be used as temperature and strain sensors. This capability, along with the light-weight nature of the fiber optic strands in which the gratings reside, makes fiber optic sensors an ideal candidate for flight testing and monitoring in which temperature and wing strain are factors. A team of NASA Dryden engineers has been working to advance fiber optic sensor technology since the mid-1990s. The team has been able to improve the dependability and sample rate of fiber optic sensor systems, making them more suitable for real-time wing shape and strain monitoring and capable of rivaling traditional strain gauge sensors in accuracy. The sensor system was recently tested on the Ikhana unmanned aircraft and will be used on the Global Observer unmanned aircraft. Since a fiber Bragg grating sensor can be placed every half-inch on each optic fiber, and since fibers of approximately 40 feet in length each are to be used on the Global Observer, each of these fibers will have approximately 1,000 sensors. A total of 32 fibers are to be placed on the Global Observer aircraft, to be sampled at a rate of about 50 Hz, meaning about 1.6 million data points will be taken every second. The fiber optic sensor system is capable of producing massive amounts of potentially useful data; however, methods to capture, record, and analyze all of this data in a way that makes the information useful to flight test engineers are currently limited. The purpose of this project is to research the availability of software capable of processing massive amounts of data in both real-time and post-flight settings, and to produce software segments that can be integrated to assist in the task as well. The selected software must be able to: (1) process massive amounts of data (up to 4 GB) at a speed useful in real-time settings (small fractions of a second); (2) process data in post-flight settings to allow test reproduction or further data analysis; (3) produce, or make it easier to produce, three-dimensional plots/graphs to make the data accessible to flight test engineers; and (4) be customizable to allow users to use their own processing formulas or functions and display the data in formats they prefer. Several software programs were evaluated to determine their utility in completing the research objectives. These programs include: OriginLab, Graphis, 3D Grapher, Visualization Sciences Group (VSG) Avizo Wind, Interactive Analysis and Display System (IADS), SigmaPlot, and MATLAB.
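The data-rate figure quoted above follows directly from the sensor counts: 32 fibers times roughly 1,000 gratings per fiber sampled at about 50 Hz. The sketch below just reproduces that arithmetic; the 4-byte sample size is an assumption introduced only to translate the sample rate into an approximate byte rate.

```python
# Back-of-the-envelope check of the quoted data volume: 32 fibers, ~1,000 Bragg
# gratings per fiber, sampled at about 50 Hz. The 4-byte sample size is assumed
# only to convert the sample rate into a byte rate.
fibers = 32
sensors_per_fiber = 1_000      # ~one grating every half-inch on a 40 ft fiber
sample_rate_hz = 50
bytes_per_sample = 4           # assumed single-precision value per sensor

samples_per_second = fibers * sensors_per_fiber * sample_rate_hz
print(f"{samples_per_second:,} samples/s")                               # 1,600,000 samples/s
print(f"{samples_per_second * bytes_per_sample / 1e6:.1f} MB/s (assumed 4-byte samples)")
```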
2013-09-11
CAPE CANAVERAL, Fla. – Engineers from NASA's Kennedy Space Center prep a remote-controlled aircraft for take-off. The aircraft is equipped with a unique set of sensors and software and was assembled by a team of engineers for a competition at the agency's Kennedy Space Center. Teams from Johnson Space Center and Marshall Space Flight Center joined the Kennedy team in competing in an unmanned aerial systems event to evaluate designs and work by engineers learning new specialties. The competition took place at the Shuttle Landing Facility at Kennedy. Photo credit: NASA/Dmitri Gerondidakis
2013-09-11
CAPE CANAVERAL, Fla. – Engineers from NASA's Marshall Space Flight Center prep a remote-controlled aircraft for take-off. The aircraft is equipped with a unique set of sensors and software and was assembled by a team of engineers for a competition at the agency's Kennedy Space Center. Teams from Johnson Space Center and Marshall Space Flight Center joined the Kennedy team in competing in an unmanned aerial systems event to evaluate designs and work by engineers learning new specialties. The competition took place at the Shuttle Landing Facility at Kennedy. Photo credit: NASA/Dmitri Gerondidakis
2013-09-11
CAPE CANAVERAL, Fla. – Engineers from NASA's Kennedy Space Center prep a remote-controlled aircraft for take-off. The aircraft is equipped with a unique set of sensors and software and was assembled by a team of engineers for a competition at the agency's Kennedy Space Center. Teams from Johnson Space Center and Marshall Space Flight Center joined the Kennedy team in competing in an unmanned aerial systems event to evaluate designs and work by engineers learning new specialties. The competition took place at the Shuttle Landing Facility at Kennedy. Photo credit: NASA/Dmitri Gerondidakis
2013-09-11
CAPE CANAVERAL, Fla. – An engineer from NASA's Marshall Space Flight Center preps a remote-controlled aircraft for take-off. The aircraft is equipped with a unique set of sensors and software and was assembled by a team of engineers for a competition at the agency's Kennedy Space Center. Teams from Johnson Space Center and Marshall Space Flight Center joined the Kennedy team in competing in an unmanned aerial systems event to evaluate designs and work by engineers learning new specialties. The competition took place at the Shuttle Landing Facility at Kennedy. Photo credit: NASA/Dmitri Gerondidakis
2013-09-11
CAPE CANAVERAL, Fla. – An engineer from NASA's Marshall Space Flight Center watches the landing of a remote-controlled aircraft. The aircraft is equipped with a unique set of sensors and software and was assembled by a team of engineers for a competition at the agency's Kennedy Space Center. Teams from Johnson Space Center and Marshall Space Flight Center joined a Kennedy team in competing in an unmanned aerial systems event to evaluate designs and work by engineers learning new specialties. The competition took place at the Shuttle Landing Facility at Kennedy. Photo credit: NASA/Dmitri Gerondidakis
NASA Astrophysics Data System (ADS)
Jeffery, Keith; Harrison, Matt; Bailo, Daniele
2016-04-01
The EPOS-PP Project 2010-2014 proposed an architecture and demonstrated feasibility with a prototype. Requirements based on use cases were collected and an inventory of assets (e.g. datasets, software, users, computing resources, equipment/detectors, laboratory services) (RIDE) was developed. The architecture evolved through three stages of refinement with much consultation, both with the EPOS community representing EPOS users and participants in geoscience and with the overall ICT community, especially those working on research such as the RDA (Research Data Alliance) community. The architecture consists of a central ICS (Integrated Core Services) comprising a portal and catalog, the latter providing to end-users a 'map' of all EPOS resources (datasets, software, users, computing, equipment/detectors etc.). ICS is extended to ICS-d (distributed ICS) for certain services (such as visualisation software services or Cloud computing resources) and CES (Computational Earth Science) for specific simulation or analytical processing. ICS also communicates with TCS (Thematic Core Services), which represent European-wide portals to national and local assets, resources and services in the various specific domains (e.g. seismology, volcanology, geodesy) of EPOS. The EPOS-IP project 2015-2019 started October 2015. Two work-packages cover the ICT aspects; WP6 involves interaction with the TCS while WP7 concentrates on ICS, including interoperation with ICS-d and CES offerings: in short, the ICT architecture. Based on the experience and results of EPOS-PP, the ICT team held a pre-meeting in July 2015 and set out a project plan. The first major activity involved requirements (re-)collection with use cases and also updating the inventory of assets held by the various TCS in EPOS. The RIDE database of assets is currently being converted to CERIF (Common European Research Information Format - an EU Recommendation to Member States) to provide the basis for the EPOS-IP ICS Catalog. In parallel the ICT team is tracking developments in ICT for relevance to EPOS-IP. In particular, the potential utilisation of e-Is (e-Infrastructures) such as GEANT (network), AARC (security), EGI (GRID computing), EUDAT (data curation), PRACE (High Performance Computing), and HELIX-Nebula / Open Science Cloud (Cloud computing) is being assessed. Similarly, relationships to other e-RIs (e-Research Infrastructures) such as ENVRI+, EXCELERATE and other ESFRI (European Strategic Forum for Research Infrastructures) projects are being developed to share experience and technology and to promote interoperability. EPOS ICT team members are also involved in VRE4EIC, a project developing a reference architecture and component software services for a Virtual Research Environment to be superimposed on EPOS-ICS. The challenge now being tackled is therefore to keep consistency and interoperability among the different modules, initiatives and actors which participate in the process of running the EPOS platform. This implies both a continuous update on the IT aspects of the mentioned initiatives and a refinement of the e-architecture designed so far. One major aspect of EPOS-IP is the ICT support for the legal, financial and governance aspects of the EPOS ERIC to be initiated during EPOS-IP. This implies a sophisticated AAAI (authentication, authorisation and accounting infrastructure) with consistency throughout the software, communications and data stack.
2001-02-03
The lid is off the shipping container with the Multi-Purpose Logistics Module Donatello inside. It sits on a transporter inside the Space Station Processing Facility. In the SSPF, Donatello will undergo processing by the payload test team, including integrated electrical tests with other Station elements in the SSPF, leak tests, electrical and software compatibility tests with the Space Shuttle (using the Cargo Integrated Test equipment) and an Interface Verification Test once the module is installed in the Space Shuttle’s payload bay at the launch pad. The most significant mechanical task to be performed on Donatello in the SSPF is the installation and outfitting of the racks for carrying the various experiments and cargo. Donatello will be launched on mission STS-130, currently planned for September 2004
2001-02-03
Workers in the Space Station Processing Facility attach an overhead crane to the Multi-Purpose Logistics Module Donatello to lift it out of the shipping container. In the SSPF, Donatello will undergo processing by the payload test team, including integrated electrical tests with other Station elements in the SSPF, leak tests, electrical and software compatibility tests with the Space Shuttle (using the Cargo Integrated Test equipment) and an Interface Verification Test once the module is installed in the Space Shuttle’s payload bay at the launch pad. The most significant mechanical task to be performed on Donatello in the SSPF is the installation and outfitting of the racks for carrying the various experiments and cargo. Donatello will be launched on mission STS-130, currently planned for September 2004