Sample records for implementing team software

  1. Software Defined GPS API: Development and Implementation of GPS Correlator Architectures Using MATLAB with Focus on SDR Implementations

    DTIC Science & Technology

    2014-05-18

    ...with the intention of offering improved software libraries for GNSS signal acquisition. It has been the team mission to implement new and improved techniques to...

  2. Implementing Extreme Programming in Distributed Software Project Teams: Strategies and Challenges

    NASA Astrophysics Data System (ADS)

    Maruping, Likoebe M.

    Agile software development methods and distributed forms of organizing teamwork are two team process innovations that are gaining prominence in today's demanding software development environment. Individually, each of these innovations has yielded gains in the practice of software development. Agile methods have enabled software project teams to meet the challenges of an ever-turbulent business environment through enhanced flexibility and responsiveness to emergent customer needs. Distributed software project teams have enabled organizations to access highly specialized expertise across geographic locations. Although much progress has been made in understanding how to more effectively manage agile development teams and how to manage distributed software development teams, managers have little guidance on how to leverage these two potent innovations in combination. In this chapter, I outline some of the strategies and challenges associated with implementing agile methods in distributed software project teams. These are discussed in the context of a study of a large-scale software project in the United States that lasted four months.

  3. The Effects of Development Team Skill on Software Product Quality

    NASA Technical Reports Server (NTRS)

    Beaver, Justin M.; Schiavone, Guy A.

    2006-01-01

    This paper provides an analysis of the effect of the skill and experience of the software development team on the quality of the final software product. A method for the assessment of software development team skill and experience is proposed, derived from a workforce management tool currently in use by the National Aeronautics and Space Administration. Using data from 26 small-scale software development projects, the team skill measures are correlated with five software product quality metrics from the ISO/IEC 9126 Software Engineering Product Quality standard. In the analysis of the results, development team skill is found to be a significant factor in the adequacy of the design and implementation. In addition, the results imply that inexperienced software developers are tasked with responsibilities ill-suited to their skill level, and thus have a significant adverse effect on the quality of the software product. Keywords: software quality, development skill, software metrics
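
    The core of such an analysis is straightforward to sketch. The snippet below is a minimal, hypothetical illustration of correlating per-project team skill scores against one quality metric; the data values and the choice of Pearson correlation are assumptions for illustration, not the paper's actual dataset or statistical method.

      # Hypothetical sketch of the kind of correlation analysis the paper
      # describes: per-project team skill scores vs. a quality metric.
      # All data values here are invented for illustration.
      from scipy.stats import pearsonr

      team_skill = [2.1, 3.4, 4.0, 2.8, 3.9, 1.7]       # skill score per project
      defect_density = [9.5, 5.2, 3.1, 7.8, 4.0, 11.3]  # defects per KLOC

      r, p = pearsonr(team_skill, defect_density)
      print(f"r = {r:.2f}, p = {p:.3f}")  # expect a negative correlation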

  4. Implementation of Task-Tracking Software for Clinical IT Management.

    PubMed

    Purohit, Anne-Maria; Brutscheck, Clemens; Prokosch, Hans-Ulrich; Ganslandt, Thomas; Schneider, Martin

    2017-01-01

    Often in clinical IT departments, many different methods and IT systems are used for task-tracking and project organization. Based on managers' personal preferences and knowledge about project management methods, tools differ from team to team and even from employee to employee. This causes communication problems, especially when tasks need to be done in cooperation with different teams. Monitoring tasks and resources becomes impossible: there are no defined deliverables, which prevents reliable deadlines. Because of these problems, we implemented task-tracking software which is now in use across all seven divisions at the University Hospital Erlangen. Over a period of seven months, a working group defined types of tasks (project, routine task, etc.), workflows, and views to monitor the tasks of the seven divisions, 20 teams, and 340 different IT services. The software has been in use since December 2016.

  5. The Cooperate Assistive Teamwork Environment for Software Description Languages.

    PubMed

    Groenda, Henning; Seifermann, Stephan; Müller, Karin; Jaworek, Gerhard

    2015-01-01

    Versatile description languages such as the Unified Modeling Language (UML) are commonly used in software engineering across different application domains in theory and practice. They often use graphical notations and leverage visual memory for expressing complex relations. Those notations are hard to access for people with visual impairment and impede their smooth inclusion in an engineering team. Existing approaches provide textual notations but require manual synchronization between the notations. This paper presents requirements for an accessible and language-aware team work environment as well as our plan for the assistive implementation of Cooperate. An industrial software engineering team consisting of people with and without visual impairment will evaluate the implementation.

  6. Using iKidTools™ Software Support Systems to Develop and Implement Self-Monitoring Interventions

    ERIC Educational Resources Information Center

    Patti, Angela L.; Miller, Kevin J.

    2011-01-01

    Educational teams often are faced with the task of developing and implementing Behavioral Intervention Plans (BIPs) for students who present challenging and/or disruptive behaviors. This article describes the steps used to develop and implement a self-monitoring BIP that incorporated an innovative software system, iKidTools™. An authentic case…

  7. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan : ASC software quality engineering practices Version 3.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turgeon, Jennifer L.; Minana, Molly A.; Hackney, Patricia

    2009-01-01

    The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in the US Department of Energy/National Nuclear Security Administration (DOE/NNSA) Quality Criteria, Revision 10 (QC-1) as 'conformance to customer requirements and expectations'. This quality plan defines the SNL ASC Program software quality engineering (SQE) practices and provides a mapping of these practices to the SNL Corporate Process Requirement (CPR) 001.3.6, 'Corporate Software Engineering Excellence'. This plan also identifies ASC management's and the software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals. This SNL ASC Software Quality Plan establishes the signatories' commitment to improving software products by applying cost-effective SQE practices. This plan enumerates the SQE practices that comprise the development of SNL ASC's software products and explains the project teams' opportunities for tailoring and implementing the practices.

  8. TeamWATCH: Visualizing development activities using a 3-D city metaphor to improve conflict detection and team awareness

    PubMed Central

    Ye, Xin

    2018-01-01

    The awareness of others’ activities has been widely recognized as essential in facilitating coordination in a team among Computer-Supported Cooperative Work communities. Several field studies of software developers in large software companies such as Microsoft have shown that coworker and artifact awareness are the most common information needs for software developers; however, they are also two of the seven most frequently unsatisfied information needs. To address this problem, we built a workspace awareness tool named TeamWATCH to visualize developer activities using a 3-D city metaphor. In this paper, we discuss the importance of awareness in software development, review existing workspace awareness tools, present the design and implementation of TeamWATCH, and evaluate how it could help detect and resolve conflicts earlier and better maintain group awareness via a controlled experiment. The experimental results showed that the subjects using TeamWATCH performed significantly better with respect to early conflict detection and resolution. PMID:29558519
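
    As a rough illustration of the kind of workspace awareness TeamWATCH provides, the sketch below flags artifacts that two or more developers are editing concurrently. The developer names, file paths, and flat-dictionary representation are invented for illustration and are not TeamWATCH's actual data model.

      # Minimal sketch (not TeamWATCH itself) of early conflict detection:
      # flag artifacts that two or more developers are editing at once.
      from collections import defaultdict

      workspace_edits = {
          "alice": {"src/parser.c", "src/lexer.c"},
          "bob":   {"src/lexer.c", "docs/readme.md"},
          "carol": {"tests/test_parser.c"},
      }

      editors = defaultdict(set)
      for dev, files in workspace_edits.items():
          for f in files:
              editors[f].add(dev)

      for f, devs in editors.items():
          if len(devs) > 1:
              print(f"potential conflict on {f}: {sorted(devs)}")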

  9. Implementing Large Projects in Software Engineering Courses

    ERIC Educational Resources Information Center

    Coppit, David

    2006-01-01

    In software engineering education, large projects are widely recognized as a useful way of exposing students to the real-world difficulties of team software development. But large projects are difficult to put into practice. First, educators rarely have additional time to manage software projects. Second, classrooms have inherent limitations that…

  10. Improving collaborative learning in online software engineering education

    NASA Astrophysics Data System (ADS)

    Neill, Colin J.; DeFranco, Joanna F.; Sangwan, Raghvinder S.

    2017-11-01

    Team projects are commonplace in software engineering education. They address a key educational objective, provide students critical experience relevant to their future careers, allow instructors to set problems of greater scale and complexity than could be tackled individually, and are a vehicle for socially constructed learning. While all student teams experience challenges, those in fully online programmes must also deal with remote working, asynchronous coordination, and computer-mediated communications, all of which contribute to greater social distance between team members. We have developed a facilitation framework to aid team collaboration and have demonstrated its efficacy, in prior research, with respect to team performance and outcomes. Those studies indicated, however, that despite experiencing improved project outcomes, students working in effective software engineering teams did not experience significantly improved individual achievement. To address this deficiency, we implemented theoretically grounded refinements to the collaboration model based upon peer-tutoring research. Our results indicate a modest, but statistically significant (p = .08), improvement in individual achievement using this refined model.
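
    A comparison of this kind typically reduces to a two-sample significance test on achievement scores. The sketch below shows the general shape of such a test; the scores are invented, and the one-sided independent-samples t-test is an assumption for illustration, not necessarily the procedure the authors used.

      # Illustrative two-sample comparison of individual achievement under
      # a baseline vs. refined collaboration model. Scores are invented;
      # the study's actual data and test are not reproduced here.
      from scipy.stats import ttest_ind

      baseline = [78, 82, 75, 90, 68, 84]
      refined  = [81, 88, 79, 93, 74, 86]

      t, p = ttest_ind(refined, baseline, alternative="greater")
      print(f"t = {t:.2f}, one-sided p = {p:.3f}")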

  11. Team Software Process (TSP) Coach Mentoring Program Guidebook

    DTIC Science & Technology

    2009-08-01

    ...SEI TSP Initiative Team. All training was conducted in English only, and observations were limited to English-speaking coaches and teams. The...Certified TSP Mentor Coach programs also enable the expansion of TSP implementation to non-English-speaking teams and organizations. This program also... [The snippet also captures a coach self-assessment rating scale: Needs Significant Improvement / Could Benefit from Development / Capable and Effective Role Model, with items such as "I listen before speaking."]

  12. Implementing Kanban for agile process management within the ALMA Software Operations Group

    NASA Astrophysics Data System (ADS)

    Reveco, Johnny; Mora, Matias; Shen, Tzu-Chiang; Soto, Ruben; Sepulveda, Jorge; Ibsen, Jorge

    2014-07-01

    After the inauguration of the Atacama Large Millimeter/submillimeter Array (ALMA), the Software Operations Group in Chile has refocused its objectives to: (1) providing software support to tasks related to System Integration, Scientific Commissioning and Verification, as well as Early Science observations; (2) testing the remaining software features, still under development by the Integrated Computing Team across the world; and (3) designing and developing processes to optimize and increase the level of automation of operational tasks. Because they serve different stakeholders, these tasks vary widely in importance, lifespan, and complexity. Aiming to provide the proper priority and traceability for every task without stressing our engineers, we introduced the Kanban methodology into our processes in order to balance the demand on the team against the throughput of the delivered work. The aim of this paper is to share experiences gained during the implementation of Kanban in our processes, describing the difficulties we found and the solutions and adaptations that led to our current but still evolving implementation, which has greatly improved our throughput, prioritization, and problem traceability.
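
    The mechanism Kanban adds is simple enough to show in a few lines. Below is a minimal, generic sketch of a work-in-progress (WIP) limit on a board column; the class, column name, and task names are illustrative inventions, not part of the group's actual tooling.

      # Sketch of the core Kanban rule the abstract describes: a column
      # refuses new work once its WIP limit is reached, so demand is
      # balanced against throughput. Names here are illustrative only.
      class KanbanColumn:
          def __init__(self, name, wip_limit):
              self.name, self.wip_limit, self.tasks = name, wip_limit, []

          def pull(self, task):
              if len(self.tasks) >= self.wip_limit:
                  return False  # task stays upstream until capacity frees up
              self.tasks.append(task)
              return True

      in_progress = KanbanColumn("In Progress", wip_limit=3)
      for task in ["commissioning fix", "ICT feature test", "ops automation", "triage"]:
          if not in_progress.pull(task):
              print(f"'{task}' waits in the backlog (WIP limit reached)")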

  13. Delay Tolerant Networking on NASA's Space Communication and Navigation Testbed

    NASA Technical Reports Server (NTRS)

    Johnson, Sandra; Eddy, Wesley

    2016-01-01

    This presentation covers the status of an open-source software implementation of the specifications developed by the CCSDS Working Group. Interplanetary Overlay Network (ION) is open source software that implements specifications developed by two international working groups, through the IETF and CCSDS. ION was implemented on the SCaN Testbed, a testbed located on an external pallet on the ISS, by the GRC team. The presentation will cover the architecture of the system, high-level implementation details, and issues porting ION to VxWorks.

  14. Experiences Supporting the Lunar Reconnaissance Orbiter Camera: the Devops Model

    NASA Astrophysics Data System (ADS)

    Licht, A.; Estes, N. M.; Bowman-Cisneros, E.; Hanger, C. D.

    2013-12-01

    Introduction: The Lunar Reconnaissance Orbiter Camera (LROC) Science Operations Center (SOC) is responsible for instrument targeting, product processing, and archiving [1]. The LROC SOC maintains over 1,000,000 observations with over 300 TB of released data. Processing challenges compound with the acquisition of over 400 Gbits of observations daily, creating the need for a robust, efficient, and reliable suite of specialized software. Development Environment: The LROC SOC's software development methodology has evolved over time. Today, the development team operates in close cooperation with the systems administration team in a model known in the IT industry as DevOps. The DevOps model enables a highly productive development environment that facilitates accomplishment of key goals within tight schedules [2]. The LROC SOC DevOps model incorporates industry best practices including prototyping, continuous integration, unit testing, code coverage analysis, version control, and utilizing existing open source software. Scientists and researchers at LROC often prototype algorithms and scripts in a high-level language such as MATLAB or IDL. After the prototype is functionally complete, the solution is implemented as production-ready software by the developers. Following this process ensures that all controls and requirements set by the LROC SOC DevOps team are met. The LROC SOC also strives to enhance the efficiency of the operations staff by way of weekly presentations and informal mentoring. Many small scripting tasks are assigned to the cognizant operations personnel (end users), allowing the DevOps team to focus on more complex and mission-critical tasks. In addition to leveraging open source software, the LROC SOC has also contributed to the open source community by releasing Lunaserv [3]. Findings: The DevOps software model very efficiently provides smooth software releases and maintains team momentum. Having scientists prototype their work has proven to be very efficient, as developers do not need to spend time iterating over small changes. Instead, these changes are realized in early prototypes and implemented before the task is seen by developers. The development practices followed by the LROC SOC DevOps team help facilitate a high level of software quality that is necessary for LROC SOC operations. Application to the Scientific Community: There is no replacement for having software developed by professional developers. While it is beneficial for scientists to write software, this activity should be seen as prototyping, which is then made production-ready by professional developers. When constructed properly, even a small development team has the ability to increase the rate of software development for a research group while creating more efficient, reliable, and maintainable products. This strategy allows scientists to accomplish more, focusing on teamwork rather than software development, which may not be their primary focus. 1. Robinson et al. (2010) Space Sci. Rev. 150, 81-124. 2. DeGrandis (2011) Cutter IT Journal, Vol. 24, No. 8, 34-39. 3. Estes, N.M.; Hanger, C.D.; Licht, A.A.; Bowman-Cisneros, E.; Lunaserv Web Map Service: History, Implementation Details, Development, and Uses, http://adsabs.harvard.edu/abs/2013LPICo1719.2609E.

  15. Developing high-quality educational software.

    PubMed

    Johnson, Lynn A; Schleyer, Titus K L

    2003-11-01

    The development of effective educational software requires a systematic process executed by a skilled development team. This article describes the core skills required of the development team members for the six phases of successful educational software development. During analysis, the foundation of product development is laid, including defining the audience and program goals, determining hardware and software constraints, identifying content resources, and developing management tools. The design phase creates the specifications that describe the user interface, the sequence of events, and the details of the content to be displayed. During development, the pieces of the educational program are assembled: graphics and other media are created, video and audio scripts are written and recorded, the program code is created, and support documentation is produced. Extensive testing by the development team (alpha testing) and with students (beta testing) is conducted. Carefully planned implementation is most likely to result in a flawless delivery of the educational software, and maintenance ensures up-to-date content and software. Because of the importance of the sixth phase, evaluation, we have written a companion article on it that follows this one. The development of a CD-ROM product is described, including the development team, a detailed description of the development phases, and the lessons learned from the project.

  16. Quantitative CMMI Assessment for Offshoring through the Analysis of Project Management Repositories

    NASA Astrophysics Data System (ADS)

    Sunetnanta, Thanwadee; Nobprapai, Ni-On; Gotel, Olly

    The nature of distributed teams and the existence of multiple sites in offshore software development projects pose a challenging setting for software process improvement. Often, the improvement and appraisal of software processes is achieved through a turnkey solution where best practices are imposed or transferred from a company’s headquarters to its offshore units. In so doing, successful project health checks and monitoring for quality on software processes requires strong project management skills, well-built onshore-offshore coordination, and often needs regular onsite visits by software process improvement consultants from the headquarters’ team. This paper focuses on software process improvement as guided by the Capability Maturity Model Integration (CMMI) and proposes a model to evaluate the status of such improvement efforts in the context of distributed multi-site projects without some of this overhead. The paper discusses the application of quantitative CMMI assessment through the collection and analysis of project data gathered directly from project repositories to facilitate CMMI implementation and reduce the cost of such implementation for offshore-outsourced software development projects. We exemplify this approach to quantitative CMMI assessment through the analysis of project management data and discuss the future directions of this work in progress.

  17. A Capstone Course on Agile Software Development Using Scrum

    ERIC Educational Resources Information Center

    Mahnic, V.

    2012-01-01

    In this paper, an undergraduate capstone course in software engineering is described that not only exposes students to agile software development, but also makes it possible to observe the behavior of developers using Scrum for the first time. The course requires students to work as Scrum Teams, responsible for the implementation of a set of user…

  18. Team Software Process (TSP) Coach Mentoring Program Guidebook Version 1.1

    DTIC Science & Technology

    2010-06-01

    All training was conducted in English only, and observations were limited to English-speaking coaches and teams. The SEI-Certified TSP Coach...programs also enable the expansion of TSP implementation to non-English-speaking teams and organizations. This expanded capacity for qualifying candidate... [The snippet also captures the coach self-assessment rating scale: Needs Significant Improvement / Could Benefit from Development / Capable and Effective Role Model, with items such as "I listen before speaking" and "I demonstrate persuasiveness in..."]

  19. An agile implementation of SCRUM

    NASA Astrophysics Data System (ADS)

    Gannon, Michele

    Is Agile a way to cut corners? To some, the use of an Agile software development methodology has a negative connotation: "Oh, you're just not producing any documentation." So can a team with no experience in Agile successfully implement and use SCRUM?

  20. Lessons Learned on Implementing Fault Detection, Isolation, and Recovery (FDIR) in a Ground Launch Environment

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob A.; Lewis, Mark E.; Perotti, Jose M.; Brown, Barbara L.; Oostdyk, Rebecca L.; Goetz, Jesse W.

    2010-01-01

    This paper's main purpose is to detail issues and lessons learned regarding designing, integrating, and implementing Fault Detection, Isolation, and Recovery (FDIR) for Constellation Exploration Program (CxP) Ground Operations at Kennedy Space Center (KSC). As part of the overall implementation of the National Aeronautics and Space Administration's (NASA's) CxP, FDIR is being implemented in three main components of the program (Ares, Orion, and Ground Operations/Processing). While FDIR was not initially part of the design baseline for CxP Ground Operations, NASA felt it was important enough to develop that NASA's Exploration Systems Mission Directorate's (ESMD's) Exploration Technology Development Program (ETDP) initiated a task for it under their Integrated System Health Management (ISHM) research area. This task, referred to as the FDIR project, is a multi-year, multi-center effort. The primary purpose of the FDIR project is to develop a prototype and pathway upon which Fault Detection and Isolation (FDI) may be transitioned into the Ground Operations baseline. Currently, Qualtech Systems Inc. (QSI) Commercial Off The Shelf (COTS) software products Testability Engineering and Maintenance System (TEAMS) Designer and TEAMS RDS/RT are being utilized in the implementation of FDI within the FDIR project. The TEAMS Designer COTS software product is being utilized to model the system with Functional Fault Models (FFMs). A limited set of systems in Ground Operations is being modeled by the FDIR project, and the entire Ares Launch Vehicle is being modeled under the Functional Fault Analysis (FFA) project at Marshall Space Flight Center (MSFC). Integration of the Ares FFMs and the Ground Processing FFMs is also being done under the FDIR project utilizing the TEAMS Designer COTS software product. One of the most significant challenges related to integration is to ensure that FFMs developed by different organizations can be integrated easily and without errors. Software Interface Control Documents (ICDs) for the FFMs and their usage will be addressed as the solution to this issue. In particular, the advantages and disadvantages of these ICDs across physically separate development groups will be delineated.

  1. Decentralized Formation Flying Control in a Multiple-Team Hierarchy

    NASA Technical Reports Server (NTRS)

    Mueller, Joseph; Thomas, Stephanie J.

    2005-01-01

    This paper presents the prototype of a system that addresses these objectives: a decentralized guidance and control system that is distributed across spacecraft using a multiple-team framework. The objective is to divide large clusters into teams of manageable size, so that the communication and computational demands driven by N decentralized units are related to the number of satellites in a team rather than the entire cluster. The system is designed to provide a high level of autonomy, to support clusters with large numbers of satellites, to enable the number of spacecraft in the cluster to change post-launch, and to provide for on-orbit software modification. The distributed guidance and control system will be implemented in an object-oriented style using MANTA (Messaging Architecture for Networking and Threaded Applications). In this architecture, tasks may be remotely added, removed or replaced post-launch to increase mission flexibility and robustness. This built-in adaptability will allow software modifications to be made on-orbit in a robust manner. The prototype system, which is implemented in MATLAB, emulates the object-oriented and message-passing features of the MANTA software. In this paper, the multiple-team organization of the cluster is described, and the modular software architecture is presented. The relative dynamics in eccentric reference orbits are reviewed, and families of periodic, relative trajectories are identified, expressed as sets of static geometric parameters. The guidance law design is presented, and an example reconfiguration scenario is used to illustrate the distributed process of assigning geometric goals to the cluster. Next, a decentralized maneuver planning approach is presented that utilizes linear-programming methods to enact reconfiguration and coarse formation keeping maneuvers. Finally, a method for performing online collision avoidance is discussed, and an example is provided to gauge its performance.
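
    The linear-programming step of such maneuver planning can be sketched compactly: minimize total delta-v (an L1 fuel proxy) subject to a linearized constraint that the burns achieve a required state change. The 2x3 dynamics matrix and target vector below are invented for illustration and are not the paper's actual formulation.

      # Hedged sketch of LP-based maneuver planning: minimize sum |dv|
      # subject to B * dv = target. |dv| is made linear by splitting
      # dv into nonnegative parts p and n with dv = p - n.
      import numpy as np
      from scipy.optimize import linprog

      B = np.array([[1.0, 0.5, 0.2],
                    [0.0, 1.0, 0.7]])    # maps 3 burn components to 2 state errors
      target = np.array([0.4, -0.1])     # required state correction (invented)

      A_eq = np.hstack([B, -B])
      c = np.ones(6)                     # cost = sum(p) + sum(n) = sum |dv|
      res = linprog(c, A_eq=A_eq, b_eq=target, bounds=[(0, None)] * 6)
      dv = res.x[:3] - res.x[3:]
      print("burn plan:", dv, "total |dv|:", np.abs(dv).sum())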

  2. Point of care use of a personal digital assistant for patient consultation management: experience of an intravenous resource nurse team in a major Canadian teaching hospital.

    PubMed

    Bosma, Laine; Balen, Robert M; Davidson, Erin; Jewesson, Peter J

    2003-01-01

    The development and integration of a personal digital assistant (PDA)-based point-of-care database into an intravenous resource nurse (IVRN) consultation service for the purposes of consultation management and service characterization are described. The IVRN team provides a consultation service 7 days a week in this 1000-bed tertiary adult care teaching hospital. No simple, reliable method for documenting IVRN patient care activity and facilitating IVRN-initiated patient follow-up evaluation was available. Implementation of a PDA database with exportability of data to statistical analysis software was undertaken in July 2001. A Palm IIIXE PDA was purchased and a three-table, 13-field database was developed using HanDBase software. During the 7-month period of data collection, the IVRN team recorded 4868 consultations for 40 patient care areas. Full analysis of service characteristics was conducted using SPSS 10.0 software. Team members adopted the new technology with few problems, and the authors now can efficiently track and analyze the services provided by their IVRN team.
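
    To make the "three-table, 13-field database" concrete, here is a small relational sketch of a consultation-tracking schema in the same spirit; the table and field names are hypothetical stand-ins, not the authors' actual HanDBase design.

      # Hypothetical consultation-tracking schema in the spirit of the
      # three-table PDA database described above. Table and field names
      # are assumptions for illustration.
      import sqlite3

      db = sqlite3.connect(":memory:")
      db.executescript("""
      CREATE TABLE patient (id INTEGER PRIMARY KEY, ward TEXT);
      CREATE TABLE nurse   (id INTEGER PRIMARY KEY, name TEXT);
      CREATE TABLE consult (id INTEGER PRIMARY KEY,
                            patient_id INTEGER REFERENCES patient(id),
                            nurse_id   INTEGER REFERENCES nurse(id),
                            reason TEXT, seen_at TEXT, followup_needed INTEGER);
      """)
      db.execute("INSERT INTO patient VALUES (1, 'Ward 7B')")
      db.execute("INSERT INTO nurse VALUES (1, 'IVRN on call')")
      db.execute("INSERT INTO consult VALUES (1, 1, 1, 'PICC assessment', '2001-07-15', 1)")
      for row in db.execute("SELECT reason FROM consult WHERE followup_needed = 1"):
          print("follow-up:", row[0])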

  3. Repository-based software engineering program

    NASA Technical Reports Server (NTRS)

    Wilson, James

    1992-01-01

    The activities performed during September 1992 in support of Tasks 01 and 02 of the Repository-Based Software Engineering Program are outlined. The recommendations and implementation strategy defined at the September 9-10 meeting of the Reuse Acquisition Action Team (RAAT) are attached along with the viewgraphs and reference information presented at the Institute for Defense Analyses brief on legal and patent issues related to software reuse.

  4. Software Quality Assurance Metrics

    NASA Technical Reports Server (NTRS)

    McRae, Kalindra A.

    2004-01-01

    Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements and a degree of excellence and refinement of a project or product. Software quality is a set of attributes of a software product by which its quality is described and evaluated. The set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software metrics help us understand the technical process that is used to develop a product. The process is measured to improve it, and the product is measured to increase quality throughout the life cycle of software. Software metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of the software development can be measured. If software metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA metrics that have been used in other projects and that are not currently being used by the SA team, and report them to the Software Assurance team to see if any metrics can be implemented in their software assurance life cycle process.

  5. NA-42 TI Shared Software Component Library FY2011 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knudson, Christa K.; Rutz, Frederick C.; Dorow, Kevin E.

    The NA-42 TI program initiated an effort in FY2010 to standardize its software development efforts with the long term goal of migrating toward a software management approach that will allow for the sharing and reuse of code developed within the TI program, improve integration, ensure a level of software documentation, and reduce development costs. The Pacific Northwest National Laboratory (PNNL) has been tasked with two activities that support this mission. PNNL has been tasked with the identification, selection, and implementation of a Shared Software Component Library. The intent of the library is to provide a common repository that is accessible by all authorized NA-42 software development teams. The repository facilitates software reuse through a searchable and easy to use web based interface. As software is submitted to the repository, the component registration process captures meta-data and provides version control for compiled libraries, documentation, and source code. This meta-data is then available for retrieval and review as part of library search results. In FY2010, PNNL and staff from the Remote Sensing Laboratory (RSL) teamed up to develop a software application with the goal of replacing the aging Aerial Measuring System (AMS). The application under development includes an Advanced Visualization and Integration of Data (AVID) framework and associated AMS modules. Throughout development, PNNL and RSL have utilized a common AMS code repository for collaborative code development. The AMS repository is hosted by PNNL, is restricted to the project development team, is accessed via two different geographic locations, and continues to be used. The knowledge gained from the collaboration and hosting of this repository, in conjunction with PNNL software development and systems engineering capabilities, was used in the selection of a package to be used in the implementation of the software component library on behalf of NA-42 TI. The second task managed by PNNL is the development and continued maintenance of the NA-42 TI Software Development Questionnaire. This questionnaire is intended to help software development teams working under NA-42 TI in documenting their development activities. When sufficiently completed, the questionnaire illustrates that the software development activities recorded incorporate significant aspects of the software engineering lifecycle. The questionnaire template is updated as comments are received from NA-42 and/or its development teams, and revised versions are distributed to those using the questionnaire. PNNL also maintains a list of questionnaire recipients. The blank questionnaire template, the AVID and AMS software being developed, and the completed AVID AMS specific questionnaire are being used as the initial content to be established in the TI Component Library. This report summarizes the approach taken to identify requirements, search for and evaluate technologies, and the approach taken for installation of the software needed to host the component library. Additionally, it defines the process by which users request access for the contribution and retrieval of library content.

  6. Verifying Architectural Design Rules of the Flight Software Product Line

    NASA Technical Reports Server (NTRS)

    Ganesan, Dharmalingam; Lindvall, Mikael; Ackermann, Chris; McComas, David; Bartholomew, Maureen

    2009-01-01

    This paper presents experiences of verifying architectural design rules of the NASA Core Flight Software (CFS) product line implementation. The goal of the verification is to check whether the implementation is consistent with the CFS architectural rules derived from the developer's guide. The results indicate that consistency checking helps (a) identify architecturally significant deviations that eluded code reviews, (b) clarify the design rules to the team, and (c) assess the overall implementation quality. Furthermore, it helps connect business goals to architectural principles, and to the implementation. This paper is the first step in the definition of a method for analyzing and evaluating product line implementations from an architecture-centric perspective.
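
    In essence, such consistency checking compares dependencies extracted from the code against the architecture's allowed-dependency rules. The sketch below is a generic illustration of that idea; the module names and rule set are invented and are not the CFS design rules.

      # Generic sketch of architecture conformance checking: compare
      # extracted module dependencies against an allowed-dependency set.
      # Module names and rules are invented for illustration.
      ALLOWED = {("app", "library"), ("app", "os_abstraction"),
                 ("library", "os_abstraction")}

      extracted_deps = [("app", "library"),
                        ("library", "os_abstraction"),
                        ("os_abstraction", "app")]  # an architectural violation

      for src, dst in extracted_deps:
          if (src, dst) not in ALLOWED:
              print(f"violation: {src} -> {dst} is not permitted by the design rules")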

  7. Mitigating Motion Base Safety Issues: The NASA LaRC CMF Implementation

    NASA Technical Reports Server (NTRS)

    Bryant, Richard B., Jr.; Grupton, Lawrence E.; Martinez, Debbie; Carrelli, David J.

    2005-01-01

    The NASA Langley Research Center (LaRC) Cockpit Motion Facility (CMF) motion base design has taken advantage of inherent hydraulic characteristics to implement safety features using hardware solutions only. Motion system safety has always been a concern, and its implementation is addressed differently by each organization. Some approaches rely heavily on software safety features. Software that performs safety functions is subject to greater scrutiny, making its approval, modification, and development time-consuming and expensive. NASA LaRC's CMF motion system is used for research and, as such, requires that the software be updated or modified frequently. The CMF's customers need the ability to update the simulation software frequently without the associated cost incurred with safety-critical software. This paper describes the CMF engineering team's approach to achieving motion base safety by designing and implementing all safety features in hardware, resulting in applications software (including motion cueing and actuator dynamic control) being completely independent of the safety devices. This allows the CMF safety systems to remain intact and unaffected by frequent research system modifications.

  8. Using failure mode and effects analysis to plan implementation of smart i.v. pump technology.

    PubMed

    Wetterneck, Tosha B; Skibinski, Kathleen A; Roberts, Tanita L; Kleppin, Susan M; Schroeder, Mark E; Enloe, Myra; Rough, Steven S; Hundt, Ann Schoofs; Carayon, Pascale

    2006-08-15

    Failure mode and effects analysis (FMEA) was used to evaluate a smart i.v. pump as it was implemented into a redesigned medication-use process. A multidisciplinary team conducted a FMEA to guide the implementation of a smart i.v. pump that was designed to prevent pump programming errors. The smart i.v. pump was equipped with a dose-error reduction system that included a predefined drug library in which dosage limits were set for each medication. Monitoring for potential failures and errors occurred for three months post-implementation of the FMEA. Specific measures were used to determine the success of the actions that were implemented as a result of the FMEA. The FMEA process at the hospital identified key failure modes in the medication process with the use of the old and new pumps, and actions were taken to avoid errors and adverse events. I.V. pump software and hardware design changes were also recommended. Thirteen of the 18 failure modes reported in practice after pump implementation had been identified by the team. A beneficial outcome of the FMEA was the development of a multidisciplinary team that provided the infrastructure for safe technology implementation and effective event investigation after implementation. With the continual updating of i.v. pump software and hardware after implementation, FMEA can be an important starting place for safe technology choice and implementation and can produce site experts to follow technology and process changes over time. FMEA was useful in identifying potential problems in the medication-use process with the implementation of new smart i.v. pumps. Monitoring for system failures and errors after implementation remains necessary.
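
    FMEA's prioritization step is simple arithmetic: each failure mode is scored for severity (S), occurrence (O), and detectability (D), and the risk priority number RPN = S x O x D ranks where mitigation effort should go first. The failure modes and scores below are invented examples, not the hospital team's actual worksheet.

      # Illustrative FMEA arithmetic: rank invented failure modes by
      # RPN = severity * occurrence * detectability (each scored 1-10).
      failure_modes = [
          ("wrong drug selected from library", 9, 3, 4),
          ("dose limit override ignored",      8, 2, 3),
          ("pump programmed outside library",  7, 5, 2),
      ]
      for name, s, o, d in sorted(failure_modes, key=lambda m: -(m[1] * m[2] * m[3])):
          print(f"RPN {s * o * d:3d}  {name}")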

  9. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan. Part 1 : ASC software quality engineering practices version 1.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minana, Molly A.; Sturtevant, Judith E.; Heaphy, Robert

    2005-01-01

    The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in DOE/AL Quality Criteria (QC-1) as conformance to customer requirements and expectations. This quality plan defines the ASC program software quality practices and provides mappings of these practices to the SNL Corporate Process Requirements (CPR 1.3.2 and CPR 1.3.6) and the Department of Energy (DOE) document, ASCI Software Quality Engineering: Goals, Principles, and Guidelines (GP&G). This quality plan identifies ASC management's and software project teams' responsibilities for cost-effective software engineering quality practices. The SNL ASC Software Quality Plan establishes the signatories' commitment to improving software products by applying cost-effective software engineering quality practices. This document explains the project teams' opportunities for tailoring and implementing the practices; enumerates the practices that compose the development of SNL ASC's software products; and includes a sample assessment checklist that was developed based upon the practices in this document.

  10. The Texas Children's Hospital immunization forecaster: conceptualization to implementation.

    PubMed

    Cunningham, Rachel M; Sahni, Leila C; Kerr, G Brady; King, Laura L; Bunker, Nathan A; Boom, Julie A

    2014-12-01

    Immunization forecasting systems evaluate patient vaccination histories and recommend the dates and vaccines that should be administered. We described the conceptualization, development, implementation, and distribution of a novel immunization forecaster, the Texas Children's Hospital (TCH) Forecaster. In 2007, TCH convened an internal expert team that included a pediatrician, immunization nurse, software engineer, and immunization subject matter experts to develop the TCH Forecaster. Our team developed the design of the model, wrote the software, populated the Excel tables, integrated the software, and tested the Forecaster. We created a table of rules that contained each vaccine's recommendations, minimum ages and intervals, and contraindications, which served as the basis for the TCH Forecaster. We created 15 vaccine tables that incorporated 79 unique dose states and 84 vaccine types to operationalize the entire United States recommended immunization schedule. The TCH Forecaster was implemented throughout the TCH system, the Indian Health Service, and the Virginia Department of Health. The TCH Forecast Tester is currently being used nationally. Immunization forecasting systems might positively affect adherence to vaccine recommendations. Efforts to support health care provider utilization of immunization forecasting systems and to evaluate their impact on patient care are needed.
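
    The "table of rules" the authors describe reduces, per vaccine, to minimum-age and minimum-interval checks against a dose count. The sketch below is a deliberately simplified illustration of that evaluation; the HepB rule values are placeholders, not the actual ACIP schedule or the TCH rule tables.

      # Simplified next-dose evaluation in the spirit of a rule-table
      # forecaster. Rule values are placeholders, not clinical guidance.
      from datetime import date, timedelta

      RULES = {"HepB": {"min_age_days": 0, "min_interval_days": 28, "doses": 3}}

      def next_due(vaccine, birth, history):
          rule = RULES[vaccine]
          if len(history) >= rule["doses"]:
              return None  # series complete
          earliest_by_age = birth + timedelta(days=rule["min_age_days"])
          if not history:
              return earliest_by_age
          earliest_by_interval = max(history) + timedelta(days=rule["min_interval_days"])
          return max(earliest_by_age, earliest_by_interval)

      print(next_due("HepB", date(2014, 1, 1), [date(2014, 1, 1)]))  # 2014-01-29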

  11. Clinician user involvement in the real world: Designing an electronic tool to improve interprofessional communication and collaboration in a hospital setting.

    PubMed

    Tang, Terence; Lim, Morgan E; Mansfield, Elizabeth; McLachlan, Alexander; Quan, Sherman D

    2018-02-01

    User involvement is vital to the success of health information technology implementation. However, involving clinician users effectively and meaningfully in complex healthcare organizations remains challenging. The objective of this paper is to share our real-world experience of applying a variety of user involvement methods in the design and implementation of a clinical communication and collaboration platform aimed at facilitating care of complex hospitalized patients by an interprofessional team of clinicians. We designed and implemented an electronic clinical communication and collaboration platform in a large community teaching hospital. The design team consisted of both technical and healthcare professionals. Agile software development methodology was used to facilitate rapid iterative design and user input. We involved clinician users at all stages of the development lifecycle using a variety of user-centered, user co-design, and participatory design methods. Thirty-six software releases were delivered over 24 months. User involvement resulted in improvements to the user interface design, identification of software defects, creation of new modules that facilitated workflow, and early identification of necessary changes to the scope of the project. A variety of user involvement methods were complementary and benefited the design and implementation of a complex health IT solution. Combining these methods with agile software development methodology can turn designs into a functioning clinical system that supports iterative improvement.

  12. Toughen up.

    PubMed

    Donaldson, D; Mayes, M

    1999-10-01

    Within six months, AHS needed to integrate three recently merged hospitals running on disparate hardware and software systems into one unified system. AHS partnered with DataStudy Inc., Parsippany, N.J., and formed a team to address the specific enterprise resource planning needs of this healthcare organization. The implementation team completed the project within the six-month time frame and incorporated functionality that went beyond the initial specifications for the project. "To maximize the return on the always substantial ERP investment, healthcare executives must be aware of the many pitfalls waiting to derail every well-intentioned implementation."

  13. Software Users Manual (SUM): Extended Testability Analysis (ETA) Tool

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Fulton, Christopher E.

    2011-01-01

    This software user manual describes the implementation and use of the Extended Testability Analysis (ETA) Tool. The ETA Tool is a software program that augments the analysis and reporting capabilities of a commercial-off-the-shelf (COTS) testability analysis software package called the Testability Engineering And Maintenance System (TEAMS) Designer. An initial diagnostic assessment is performed by the TEAMS Designer software using a qualitative, directed-graph model of the system being analyzed. The ETA Tool utilizes system design information captured within the diagnostic model and testability analysis output from the TEAMS Designer software to create a series of six reports for various system engineering needs. The ETA Tool allows the user to perform additional studies on the testability analysis results by determining the detection sensitivity to the loss of certain sensors or tests. The ETA Tool was developed to support design and development of the NASA Ares I Crew Launch Vehicle. The diagnostic analysis provided by the ETA Tool proved to be valuable system engineering output that provided consistency in the verification of system engineering requirements. This software user manual provides a description of each output report generated by the ETA Tool. The manual also describes the example diagnostic model and supporting documentation, also provided with the ETA Tool software release package, that were used to generate the reports presented in the manual.
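
    The sensitivity study described above amounts to removing a test point from the directed-graph model and recomputing which faults still propagate to some remaining test. The sketch below illustrates that computation on a toy graph; the node names are invented, and this is not the TEAMS or ETA Tool implementation.

      # Toy sensitivity study on a directed fault-propagation graph:
      # which faults remain detectable after a test point is removed?
      import networkx as nx

      g = nx.DiGraph([("valve_fault", "pressure_node"),
                      ("pump_fault", "pressure_node"),
                      ("pressure_node", "sensor_P1"),
                      ("pump_fault", "sensor_V2")])
      faults = {"valve_fault", "pump_fault"}
      tests = {"sensor_P1", "sensor_V2"}

      def detectable(graph, active_tests):
          return {f for f in faults
                  if any(nx.has_path(graph, f, t) for t in active_tests if t in graph)}

      print("full coverage:", detectable(g, tests))
      print("without sensor_P1:", detectable(g, tests - {"sensor_P1"}))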

  14. Catalyst for Change: A Case Report of a Campus-wide Student Information System Software Implementation Project

    ERIC Educational Resources Information Center

    Stivers, Jan; Garrity, N. B.

    2004-01-01

    When a mid-sized public college made a politically unpopular decision to purchase new student information system software, a team of fourteen people from across campus was assembled and charged with facilitating the transition from the home-grown system. This case report describes the challenges they faced as they worked to understand their…

  15. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan. Part 1: ASC software quality engineering practices, Version 2.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sturtevant, Judith E.; Heaphy, Robert; Hodges, Ann Louise

    2006-09-01

    The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR 1.3.2 and 1.3.6 and to a Department of Energy document, ASCI Software Quality Engineering: Goals, Principles, and Guidelines. This document also identifies ASC management's and software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals.

  16. Implementation of Audio Computer-Assisted Interviewing Software in HIV/AIDS Research

    PubMed Central

    Pluhar, Erika; Yeager, Katherine A.; Corkran, Carol; McCarty, Frances; Holstad, Marcia McDonnell; Denzmore-Nwagbara, Pamela; Fielder, Bridget; DiIorio, Colleen

    2007-01-01

    Computer assisted interviewing (CAI) has begun to play a more prominent role in HIV/AIDS prevention research. Despite the increased popularity of CAI, particularly audio computer assisted self-interviewing (ACASI), some research teams are still reluctant to implement ACASI technology due to lack of familiarity with the practical issues related to using these software packages. The purpose of this paper is to describe the implementation of one particular ACASI software package, the Questionnaire Development System™ (QDS™), in several nursing and HIV/AIDS prevention research settings. We present acceptability and satisfaction data from two large-scale public health studies in which we have used QDS with diverse populations. We also address issues related to developing and programming a questionnaire, discuss practical strategies related to planning for and implementing ACASI in the field, including selecting equipment, training staff, and collecting and transferring data, and summarize advantages and disadvantages of computer assisted research methods. PMID:17662924

  17. Continuous Energy Photon Transport Implementation in MCATK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Terry R.; Trahan, Travis John; Sweezy, Jeremy Ed

    2016-10-31

    The Monte Carlo Application ToolKit (MCATK) code development team has implemented Monte Carlo photon transport into the MCATK software suite. The current particle transport capabilities in MCATK, which process the tracking and collision physics, have been extended to enable tracking of photons using the same continuous energy approximation. We describe the four photoatomic processes implemented, which are coherent scattering, incoherent scattering, pair-production, and photoelectric absorption. The accompanying background, implementation, and verification of these processes will be presented.
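
    A standard building block behind any such photon-transport capability is sampling which of the competing processes occurs at a collision, in proportion to the cross sections. The sketch below shows that discrete sampling step in generic form; the cross-section values are invented, and this is not MCATK code.

      # Generic Monte Carlo step: sample the interaction type at a
      # collision in proportion to (invented) macroscopic cross sections.
      import random

      def sample_interaction(xs):  # xs: {process: macroscopic cross section}
          total = sum(xs.values())
          u = random.uniform(0.0, total)
          for process, sigma in xs.items():
              u -= sigma
              if u <= 0.0:
                  return process
          return process  # guard against floating-point round-off

      xs = {"coherent": 0.02, "incoherent": 0.30,
            "pair_production": 0.05, "photoelectric": 0.13}
      print(sample_interaction(xs))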

  18. Implementing large projects in software engineering courses

    NASA Astrophysics Data System (ADS)

    Coppit, David

    2006-03-01

    In software engineering education, large projects are widely recognized as a useful way of exposing students to the real-world difficulties of team software development. But large projects are difficult to put into practice. First, educators rarely have additional time to manage software projects. Second, classrooms have inherent limitations that threaten the realism of large projects. Third, quantitative evaluation of individuals who work in groups is notoriously difficult. As a result, many software engineering courses compromise the project experience by reducing the team sizes, project scope, and risk. In this paper, we present an approach to teaching a one-semester software engineering course in which 20 to 30 students work together to construct a moderately sized (15 KLOC) software system. The approach combines carefully coordinated lectures and homeworks, a hierarchical project management structure, modern communication technologies, and a web-based project tracking and individual assessment system. Our approach provides a more realistic project experience for the students without incurring significant additional overhead for the instructor. We present our experiences using the approach over the last two years for the software engineering course at The College of William and Mary. Although the approach has some weaknesses, we believe they are strongly outweighed by the pedagogical benefits.

  19. Combining Architecture-Centric Engineering with the Team Software Process

    DTIC Science & Technology

    2010-12-01

    ...colleagues from Quarksoft and CIMAT have recently reported on their experiences in "Introducing Software Architecture Development Methods into a TSP..." [The snippet also captures TSP launch-cycle diagram labels: Postmortem (lessons, new goals, new requirements, new risks, etc.); business and technical goals; estimates, plans, process, commitment; work products.] ...architecture to mitigate the risks uncovered by the ATAM. At the end of the iteration, version 1.0 of the architecture is available. Implement a second...

  20. Introduction to the Navigation Team: Johnson Space Center EG6 Internship

    NASA Technical Reports Server (NTRS)

    Gualdoni, Matthew

    2017-01-01

    The EG6 navigation team at NASA Johnson Space Center, like any team of engineers, interacts with the engineering process from beginning to end; from exploring solutions to a problem, to prototyping and studying the implementations, all the way to polishing and verifying a final flight-ready design. This summer, I was privileged enough to gain exposure to each of these processes, while also getting to truly experience working within a team of engineers. My summer can be broken up into three projects: i) Initial study and prototyping: investigating a manual navigation method that can be utilized onboard Orion in the event of catastrophic failure of navigation systems; ii) Finalizing and verifying code: altering a software routine to improve its robustness and reliability, as well as designing unit tests to verify its performance; and iii) Development of testing equipment: assisting in developing and integrating of a high-fidelity testbed to verify the performance of software and hardware.

  1. Software Capability Evaluation (SCE) Version 2.0 Implementation Guide

    DTIC Science & Technology

    1994-02-01

    [The snippet captures entries from the guide's list of figures: ...Affected By SCE (p. B-40); Figure 3-1, SCE Usage Decision Making Criteria (p. 3-44); Figure 3-2, Estimated SCE Labor For One Source Selection (p. 3-53); Figure 3-3, SCE...] ...incorporated into the source selection sponsoring organization's technical/management team for incorporation into acquisition decisions. The SCE team...expertise, past performance, and organizational capacity in acquisition decisions. The Capability Maturity Model: Basic Concepts. The CMM is based on the

  2. Team Software Development for Aerothermodynamic and Aerodynamic Analysis and Design

    NASA Technical Reports Server (NTRS)

    Alexandrov, N.; Atkins, H. L.; Bibb, K. L.; Biedron, R. T.; Carpenter, M. H.; Gnoffo, P. A.; Hammond, D. P.; Jones, W. T.; Kleb, W. L.; Lee-Rausch, E. M.

    2003-01-01

    A collaborative approach to software development is described. The approach employs the agile development techniques: project retrospectives, Scrum status meetings, and elements of Extreme Programming to efficiently develop a cohesive and extensible software suite. The software product under development is a fluid dynamics simulator for performing aerodynamic and aerothermodynamic analysis and design. The functionality of the software product is achieved both through the merging, with substantial rewrite, of separate legacy codes and the authorship of new routines. Examples of rapid implementation of new functionality demonstrate the benefits obtained with this agile software development process. The appendix contains a discussion of coding issues encountered while porting legacy Fortran 77 code to Fortran 95, software design principles, and a Fortran 95 coding standard.

  3. Maneuver Automation Software

    NASA Technical Reports Server (NTRS)

    Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam

    2009-01-01

    The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.
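
    Conceptually, the "single button" replaces a serial hand-off with a scripted pipeline that runs each maneuver-related program in order and halts on the first failure. The sketch below shows that pattern in generic form; the program names and flags are placeholders, not the actual Cassini navigation tools (which MAS drives with PERL scripts).

      # Generic pipeline runner in the spirit of MAS: run each step in
      # sequence and stop on the first failure. Program names and flags
      # below are placeholders, not real tools.
      import subprocess

      PIPELINE = [
          ["design_maneuver", "--tracking-data", "nav.dat"],
          ["build_command_sequence", "--out", "maneuver.seq"],
          ["predict_performance", "--report", "report.txt"],
      ]

      def run_pipeline(steps):
          for cmd in steps:
              result = subprocess.run(cmd)
              if result.returncode != 0:
                  raise RuntimeError(f"step failed: {' '.join(cmd)}")

      # run_pipeline(PIPELINE)  # would execute the chain end to end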

  4. Implementation of audio computer-assisted interviewing software in HIV/AIDS research.

    PubMed

    Pluhar, Erika; McDonnell Holstad, Marcia; Yeager, Katherine A; Denzmore-Nwagbara, Pamela; Corkran, Carol; Fielder, Bridget; McCarty, Frances; Diiorio, Colleen

    2007-01-01

    Computer-assisted interviewing (CAI) has begun to play a more prominent role in HIV/AIDS prevention research. Despite the increased popularity of CAI, particularly audio computer-assisted self-interviewing (ACASI), some research teams are still reluctant to implement ACASI technology because of lack of familiarity with the practical issues related to using these software packages. The purpose of this report is to describe the implementation of one particular ACASI software package, the Questionnaire Development System (QDS; Nova Research Company, Bethesda, MD), in several nursing and HIV/AIDS prevention research settings. The authors present acceptability and satisfaction data from two large-scale public health studies in which they have used QDS with diverse populations. They also address issues related to developing and programming a questionnaire; discuss practical strategies related to planning for and implementing ACASI in the field, including selecting equipment, training staff, and collecting and transferring data; and summarize advantages and disadvantages of computer-assisted research methods.

  5. Implementation Guidance for the Accelerated Improvement Method (AIM). Software Engineering Process Management: Special Report

    DTIC Science & Technology

    2010-12-01

    PSP and TSP books by Watts Humphrey or in the TSP-MT (multi-team) process extension. A few additional items should be created, e.g., see OPD-2...Institute, Carnegie Mellon University, 2000. www.sei.cmu.edu/library/abstracts/reports/00tr023.cfm [Humphrey 2005] Humphrey, Watts S. PSP: A Self... [Humphrey 2006] Humphrey, Watts S. TSP: Coaching Development Teams. Addison Wesley, 2006 (ISBN 978-0201731132). www.sei.cmu.edu/library/abstracts/

  6. A recent Cleanroom success story: The Redwing project

    NASA Technical Reports Server (NTRS)

    Hausler, Philip A.

    1992-01-01

    Redwing is the largest completed Cleanroom software engineering project in IBM, both in terms of lines of code and project staffing. The product provides a decision-support facility that utilizes artificial intelligence (AI) technology for predicting and preventing complex operating problems in an MVS environment. The project used the Cleanroom process for development and realized a defect rate of 2.6 errors/KLOC, measured from first execution. This represents the total number of errors found in testing and installation at three field test sites. Development productivity was 486 LOC/PM, which included all development labor expended from design specification through completion of incremental testing. In short, the Redwing team produced a complex systems software product with an extraordinarily low error rate, while maintaining high productivity. All of this was accomplished by a project team using Cleanroom for the first time. An 'introductory implementation' of Cleanroom was defined and used on Redwing. This paper describes the quality and productivity results, the Redwing project, and how Cleanroom was implemented.

  7. The image-guided surgery toolkit IGSTK: an open source C++ software toolkit.

    PubMed

    Enquobahrie, Andinet; Cheng, Patrick; Gary, Kevin; Ibanez, Luis; Gobbi, David; Lindseth, Frank; Yaniv, Ziv; Aylward, Stephen; Jomier, Julien; Cleary, Kevin

    2007-11-01

    This paper presents an overview of the image-guided surgery toolkit (IGSTK). IGSTK is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications. It is intended for fast prototyping and development of image-guided surgery applications. The toolkit was developed through a collaboration between academic and industry partners. Because IGSTK was designed for safety-critical applications, the development team has adopted lightweight software processes that emphasize safety and robustness while, at the same time, supporting geographically separated developers. A software process that is philosophically similar to agile software methods was adopted, emphasizing iterative, incremental, and test-driven development principles. The guiding principle in the architecture design of IGSTK is patient safety. The IGSTK team implemented a component-based architecture and used state machine software design methodologies to improve the reliability and safety of the components. Every IGSTK component has a well-defined set of features that are governed by state machines. The state machine ensures that the component is always in a valid state and that all state transitions are valid and meaningful. Realizing that the continued success and viability of an open source toolkit depends on a strong user community, the IGSTK team is following several key strategies to build an active user community. These include maintaining a users' and developers' mailing list, providing documentation (an application programming interface reference document and book), presenting demonstration applications, and delivering tutorial sessions at relevant scientific conferences.
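    The state-machine discipline described above can be pictured in a few lines of Python (the component, states, and events here are invented for illustration and are not actual IGSTK classes): every request is checked against an explicit transition table, so an invalid request is rejected rather than silently corrupting the component.

```python
# Minimal sketch of a state-machine-governed component: inputs are
# validated against a transition table, so the component can never
# enter an invalid state. States and events are hypothetical.
class TrackerComponent:
    TRANSITIONS = {
        ("Idle", "attach_tool"): "ToolAttached",
        ("ToolAttached", "start_tracking"): "Tracking",
        ("Tracking", "stop_tracking"): "ToolAttached",
    }

    def __init__(self):
        self.state = "Idle"

    def handle(self, event: str) -> None:
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            # Invalid request: reject it instead of corrupting state.
            raise ValueError(f"'{event}' not allowed in state '{self.state}'")
        self.state = self.TRANSITIONS[key]

tracker = TrackerComponent()
tracker.handle("attach_tool")
tracker.handle("start_tracking")
print(tracker.state)  # Tracking
```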

  8. A Quantitative Study of Global Software Development Teams, Requirements, and Software Projects

    ERIC Educational Resources Information Center

    Parker, Linda L.

    2016-01-01

    The study explored the relationship between global software development teams, effective software requirements, and stakeholders' perception of successful software development projects within the field of information technology management. It examined the critical relationship between Global Software Development (GSD) teams creating effective…

  9. Cooperative Search by UAV Teams: A Model Predictive Approach Using Dynamic Graphs

    DTIC Science & Technology

    2011-10-01

    decentralized processing and control architecture. SLAMEM asset models accurately represent the Unicorn UAV platforms and other standard military platforms in...IMPLEMENTATION The CGBMPS algorithm has been successfully field-tested using both Unicorn [27] and Raven [20] UAV platforms. This section describes...the hardware-software system setup and implementation used for testing with Unicorns, Toyon's UAV test platform. We also present some results from the

  10. A proposed research program in information processing

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert

    1992-01-01

    The goal of the Formalized Software Development (FSD) project was to demonstrate improvements in the productivity of software development and maintenance through the use of a new software lifecycle paradigm. The paradigm calls for the mechanical, but human-guided, derivation of software implementations from formal specifications of the desired software behavior. It relies on altering a system's specification and rederiving its implementation as the standard technology for software maintenance. A system definition for this paradigm is composed of a behavioral specification together with a body of annotations that control the derivation of executable code from the specification. Annotations generally achieve the selection of certain data representations and/or algorithms that are consistent with, but not mandated by, the behavioral specification. In doing this, they may yield systems which exhibit only certain behaviors among multiple alternatives permitted by the behavioral specification. The FSD project proposed to construct a testbed in which to explore the realization of this new paradigm. The testbed was to provide an operational support environment for software design, implementation, and maintenance. The testbed was to provide highly automated support for individual programmers ('programming in the small'), but not to address the additional needs of programming teams ('programming in the large'). The testbed was to focus on supporting rapid construction and evolution of useful prototypes of software systems, as opposed to focusing on the problems of achieving production-quality performance of systems.
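    A toy Python sketch of the annotation idea (the specification, annotation names, and derivation logic below are invented for this example; FSD itself worked from formal behavioral specifications): two derivations satisfy the same behavioral specification, and the annotation selects which data representation the derived implementation uses.

```python
# Two derivations of "membership test over a collection": both satisfy
# the behavioral spec; the annotation picks the representation.
SPEC = {"behavior": "membership test over a collection"}

def derive(spec, annotation):
    if annotation == "hash-set":
        return lambda items: set(items).__contains__
    if annotation == "sorted-list":
        import bisect
        def member(items):
            s = sorted(items)
            def contains(x):
                i = bisect.bisect_left(s, x)
                return i < len(s) and s[i] == x
            return contains
        return member
    raise ValueError(f"unknown annotation: {annotation}")

contains = derive(SPEC, "hash-set")([3, 1, 4])
print(contains(4), contains(2))  # True False, regardless of annotation
```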

  11. Exploring the Use of a Test Automation Framework

    NASA Technical Reports Server (NTRS)

    Cervantes, Alex

    2009-01-01

    It is known that software testers, more often than not, lack the time needed to fully test the delivered software product within the time period allotted to them. When problems occur in the implementation phase of a development project, they normally cause the software delivery date to slip. As a result, testers either need to work longer hours or supplementary resources need to be added to the test team in order to meet aggressive test deadlines. One solution to this problem is to provide testers with a test automation framework to facilitate the development of automated test solutions.

  12. Factors to Consider When Implementing Automated Software Testing

    DTIC Science & Technology

    2016-11-10

    programming, e.g., Java or Visual Basic. Subject Matter Experts (SME) with firm grasp of application being automated. 2. Additional costs for setup (e.g...Abilities (KSA) required (e.g., Test and Evaluation). 2. Analyze programming skills needed (e.g., Java, C, C++, Visual Basic). 3. Compose team – testers

  13. Assisting Instructional Assessment of Undergraduate Collaborative Wiki and SVN Activities

    ERIC Educational Resources Information Center

    Kim, Jihie; Shaw, Erin; Xu, Hao; Adarsh, G. V.

    2012-01-01

    In this paper we examine the collaborative performance of undergraduate engineering students who used shared project documents (Wikis, Google documents) and a software version control system (SVN) to support project collaboration. We present an initial implementation of TeamAnalytics, an instructional tool that facilitates the analyses of the…

  14. A qualitative study identifying the cost categories associated with electronic health record implementation in the UK

    PubMed Central

    Slight, Sarah P; Quinn, Casey; Avery, Anthony J; Bates, David W; Sheikh, Aziz

    2014-01-01

    Objective We conducted a prospective evaluation of different forms of electronic health record (EHR) systems to better understand the costs incurred during implementation and the factors that can influence these costs. Methods We selected a range of diverse organizations across three different geographical areas in England that were at different stages of implementing three centrally procured applications, that is, iSOFT's Lorenzo Regional Care, Cerner's Millennium, and CSE's RiO. 41 semi-structured interviews were conducted with hospital staff, members of the implementation team, and those involved in the implementation at a national level. Results Four main overarching cost categories were identified: infrastructure (eg, hardware and software), personnel (eg, training team), estates/facilities (eg, space), and other (eg, training materials). Many factors were felt to impact on these costs, with different hospitals choosing varying amounts and types of infrastructure, diverse training approaches for staff, and different software applications to integrate with the new system. Conclusions Improving the quality and safety of patient care through EHR adoption is a priority area for UK and US governments and policy makers worldwide. With cost considered one of the most significant barriers, it is important for hospitals and governments to be clear from the outset of the major cost categories involved and the factors that may impact on these costs. Failure to adequately train staff or to follow key steps in implementation has preceded many of the failures in this domain, which can create new safety hazards. PMID:24523391

  15. A performance improvement plan to increase nurse adherence to use of medication safety software.

    PubMed

    Gavriloff, Carrie

    2012-08-01

    Nurses can protect patients receiving intravenous (IV) medication by using medication safety software to program "smart" pumps to administer IV medications. After a patient safety event identified inconsistent use of medication safety software by nurses, a performance improvement team implemented the Deming Cycle performance improvement methodology. The combined use of improved direct care nurse communication, programming strategies, staff education, medication safety champions, adherence monitoring, and technology acquisition resulted in a statistically significant (p < .001) increase in nurse adherence to using medication safety software from 28% to above 85%, exceeding national benchmark adherence rates (Cohen, Cooke, Husch & Woodley, 2007; Carefusion, 2011). Copyright © 2012 Elsevier Inc. All rights reserved.

  16. An approach to verification and validation of a reliable multicasting protocol: Extended Abstract

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1995-01-01

    This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. This initial version did not handle off-nominal cases such as network partitions or site failures. Meanwhile, the V&V team concurrently developed a formal model of the requirements using a variant of SCR-based state tables. Based on these requirements tables, the V&V team developed test cases to exercise the implementation. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test in the model and implementation agreed, then the test either found a potential problem or verified a required behavior. However, if the execution of a test was different in the model and implementation, then the differences helped identify inconsistencies between the model and implementation. In either case, the dialogue between both teams drove the co-evolution of the model and implementation. We have found that this interactive, iterative approach to development allows software designers to focus on delivery of nominal functionality while the V&V team can focus on analysis of off-nominal cases. Testing serves as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP. Although RMP has provided our research effort with a rich set of test cases, it also has practical applications within NASA. For example, RMP is being considered for use in the NASA EOSDIS project due to its significant performance benefits in applications that need to replicate large amounts of data to many network sites.
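    The dialogue between the two teams can be pictured with a small Python sketch (the toy model, implementation, and event below are invented; the real work used SCR-style state tables and the RMP code): each test event is run through both the state model and the implementation, and any divergence flags either a defect or model/implementation drift.

```python
# Model-based testing sketch: run the same events through the formal
# model and the implementation, and report the first divergence.
class SpecModel:
    """Toy stand-in for the SCR-style requirements model."""
    def __init__(self): self.state = "joined"
    def step(self, event):
        if self.state == "joined" and event == "partition":
            self.state = "recovering"   # off-nominal case is modeled
        return self.state

class RmpImpl:
    """Toy stand-in for the implementation under test."""
    def __init__(self): self.state = "joined"
    def step(self, event):
        return self.state               # partition handling is missing

def compare(events):
    model, impl = SpecModel(), RmpImpl()
    for e in events:
        m, i = model.step(e), impl.step(e)
        if m != i:
            print(f"divergence on '{e}': model={m}, impl={i}")
            return False
    print("model and implementation agree")
    return True

compare(["partition"])  # flags the unhandled off-nominal behavior
```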

  17. DEVELOPING THE NATIONAL GEOTHERMAL DATA SYSTEM ADOPTION OF CKAN FOR DOMESTIC & INTERNATIONAL DATA DEPLOYMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Ryan J.; Kuhmuench, Christoph; Richard, Stephen M.

    2013-03-01

    The National Geothermal Data System (NGDS) Design and Testing Team is developing NGDS software currently referred to as the “NGDS Node-In-A-Box”. The software targets organizations or individuals who wish to host at least one of the following: an online repository containing resources for the NGDS; an online site for creating metadata to register resources with the NGDS; NGDS-conformant Web APIs that enable access to NGDS data (e.g., WMS, WFS, WCS); NGDS-conformant Web APIs that support discovery of NGDS resources via catalog service (e.g., CSW); or a web site that supports discovery and understanding of NGDS resources. A number of different frameworks for development of this online application were reviewed. The NGDS Design and Testing Team determined to use CKAN (http://ckan.org/), because it provides the closest match between out-of-the-box functionality and NGDS node-in-a-box requirements. To achieve the NGDS vision and goals, this software development project has been initiated to provide NGDS data consumers with a highly functional interface to access the system, and to ease the burden on data providers who wish to publish data in the system. It is important to note that this software package constitutes a reference implementation. The NGDS software is based on open standards, which means other server software can make resources available, and other client applications can utilize NGDS data. A number of international organizations have expressed interest in the NGDS approach to data access. The CKAN node implementation can provide a simple path for deploying this technology in other settings.
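    As a hedged sketch of why CKAN fits this role (the host URL below is a placeholder; package_list is a standard CKAN action-API endpoint), a thin Python client can enumerate the datasets registered on a node:

```python
# Query a CKAN node's catalog through its JSON "action" API.
# The host below is a placeholder, not an actual NGDS node.
import json
import urllib.request

def list_datasets(ckan_host: str) -> list:
    """Return the names of all datasets registered on a CKAN node."""
    url = f"{ckan_host}/api/3/action/package_list"
    with urllib.request.urlopen(url) as resp:
        body = json.load(resp)
    return body["result"] if body.get("success") else []

# e.g. list_datasets("https://example-ngds-node.org")
```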

  18. Decentralized formation flying control in a multiple-team hierarchy.

    PubMed

    Mueller, Joseph B; Thomas, Stephanie J

    2005-12-01

    In recent years, formation flying has been recognized as an enabling technology for a variety of mission concepts in both the scientific and defense arenas. Examples of developing missions at NASA include magnetospheric multiscale (MMS), solar imaging radio array (SIRA), and terrestrial planet finder (TPF). For each of these missions, a multiple satellite approach is required in order to accomplish the large-scale geometries imposed by the science objectives. In addition, the paradigm shift of using a multiple satellite cluster rather than a large, monolithic spacecraft has also been motivated by the expected benefits of increased robustness, greater flexibility, and reduced cost. However, the operational costs of monitoring and commanding a fleet of close-orbiting satellites are likely to be unreasonable unless the onboard software is sufficiently autonomous, robust, and scalable to large clusters. This paper presents the prototype of a system that addresses these objectives: a decentralized guidance and control system that is distributed across spacecraft using a multiple team framework. The objective is to divide large clusters into teams of "manageable" size, so that the communication and computation demands driven by N decentralized units are related to the number of satellites in a team rather than the entire cluster. The system is designed to provide a high level of autonomy, to support clusters with large numbers of satellites, to enable the number of spacecraft in the cluster to change post-launch, and to provide for on-orbit software modification. The distributed guidance and control system will be implemented in an object-oriented style using a messaging architecture for networking and threaded applications (MANTA). In this architecture, tasks may be remotely added, removed, or replaced post-launch to increase mission flexibility and robustness. This built-in adaptability will allow software modifications to be made on-orbit in a robust manner. The prototype system, which is implemented in Matlab, emulates the object-oriented and message-passing features of the MANTA software. In this paper, the multiple team organization of the cluster is described, and the modular software architecture is presented. The relative dynamics in eccentric reference orbits are reviewed, and families of periodic, relative trajectories are identified and expressed as sets of static geometric parameters. The guidance law design is presented, and an example reconfiguration scenario is used to illustrate the distributed process of assigning geometric goals to the cluster. Next, a decentralized maneuver planning approach is presented that utilizes linear-programming methods to enact reconfiguration and coarse formation-keeping maneuvers. Finally, a method for performing online collision avoidance is discussed, and an example is provided to gauge its performance.
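    The team decomposition can be sketched simply in Python (the chunking rule and leader assignment here are illustrative, not the paper's actual algorithm): partitioning an N-satellite cluster into fixed-size teams keeps communication and computation proportional to team size rather than cluster size.

```python
# Partition a cluster into teams of "manageable" size so that each
# satellite communicates mostly within its own team. The chunking and
# leader choice are invented for illustration.
def form_teams(satellite_ids, max_team_size=5):
    """Split the cluster into teams; the first member of each leads it."""
    teams = [satellite_ids[i:i + max_team_size]
             for i in range(0, len(satellite_ids), max_team_size)]
    return [{"leader": t[0], "members": t} for t in teams]

print(form_teams(list(range(12)), max_team_size=5))
# -> 3 teams; per-team demands stay bounded as the cluster grows
```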

  19. Reconfigurable Software for Controlling Formation Flying

    NASA Technical Reports Server (NTRS)

    Mueller, Joseph B.

    2006-01-01

    Software for a system to control the trajectories of multiple spacecraft flying in formation is being developed to reflect underlying concepts of (1) a decentralized approach to guidance and control and (2) reconfigurability of the control system, including reconfigurability of the software and of control laws. The software is organized as a modular network of software tasks. The computational load for both determining relative trajectories and planning maneuvers is shared equally among all spacecraft in a cluster. The flexibility and robustness of the software are apparent in the fact that tasks can be added, removed, or replaced during flight. In a computational simulation of a representative formation-flying scenario, it was demonstrated that the following are among the services performed by the software: Uploading of commands from a ground station and distribution of the commands among the spacecraft, Autonomous initiation and reconfiguration of formations, Autonomous formation of teams through negotiations among the spacecraft, Working out details of high-level commands (e.g., shapes and sizes of geometrically complex formations), Implementation of a distributed guidance law providing autonomous optimization and assignment of target states, and Implementation of a decentralized, fuel-optimal, impulsive control law for planning maneuvers.
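    A minimal Python sketch of this modular task network (task names and the interface are invented for illustration): tasks are registered by name and can be added, removed, or replaced while the system runs, which is the mechanism behind in-flight reconfiguration.

```python
# Sketch of a reconfigurable task network: tasks are looked up by name
# each cycle, so adding, removing, or replacing one takes effect
# immediately. Task names are hypothetical.
class TaskNetwork:
    def __init__(self):
        self._tasks = {}

    def add(self, name, fn):
        self._tasks[name] = fn       # re-adding a name replaces the task

    def remove(self, name):
        self._tasks.pop(name, None)

    def run_cycle(self, state):
        for name, fn in self._tasks.items():
            state = fn(state)
        return state

net = TaskNetwork()
net.add("guidance", lambda s: s + ["target states assigned"])
net.add("control", lambda s: s + ["impulsive maneuver planned"])
print(net.run_cycle([]))
net.add("control", lambda s: s + ["fuel-optimal plan (new law)"])  # replace
print(net.run_cycle([]))
```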

  20. Considering Subcontractors in Distributed Scrum Teams

    NASA Astrophysics Data System (ADS)

    Rudzki, Jakub; Hammouda, Imed; Mikkola, Tuomas; Mustonen, Karri; Systä, Tarja

    In this chapter we present our experiences of working with subcontractors in distributed Scrum teams. The context of our experiences is a medium-sized software service provider company. We present the way subcontractors are selected and how Scrum practices can be used in real-life projects. We discuss team arrangements and tools used in distributed development teams, highlighting aspects that are important when working with subcontractors. We also present an illustrative example in which the different phases of a project involving subcontractors are described. The example also provides practical tips on working in such projects. Finally, we present a summary of our data, which was collected from Scrum and non-Scrum projects implemented over a few years. This chapter should provide a practical point of view on working with subcontractors in Scrum teams for those who are considering such cooperation.

  1. Implementing LibGuides 2: An Academic Case Study

    ERIC Educational Resources Information Center

    Duncan, Vicky; Lucky, Shannon; McLean, Jaclyn

    2015-01-01

    Since 1997, the University of Saskatchewan Library has used "subject pages" to highlight key library resources. When Springshare announced it was launching LibGuides v2, a project team was assembled to transition a mixture of locally produced guides and guides created with the original LibGuides v1 software. This article synthesizes best…

  2. Validation and verification of a virtual environment for training naval submarine officers

    NASA Astrophysics Data System (ADS)

    Zeltzer, David L.; Pioch, Nicholas J.

    1996-04-01

    A prototype virtual environment (VE) has been developed for training a submarine officer of the deck (OOD) to perform in-harbor navigation on a surfaced submarine. The OOD, stationed on the conning tower of the vessel, is responsible for monitoring the progress of the boat as it negotiates a marked channel, as well as verifying the navigational suggestions of the below-deck piloting team. The VE system allows an OOD trainee to view a particular harbor and associated waterway through a head-mounted display, receive spoken reports from a simulated piloting team, give spoken commands to the helmsman, and receive verbal confirmation of command execution from the helm. The task analysis of in-harbor navigation and the derivation of application requirements are briefly described. This is followed by a discussion of the implementation of the prototype. This implementation underwent a series of validation and verification assessment activities, including operational validation, data validation, and software verification of individual software modules as well as the integrated system. Validation and verification procedures are discussed with respect to the OOD application in particular, and with respect to VE applications in general.

  3. Towards a balanced software team formation based on Belbin team role using fuzzy technique

    NASA Astrophysics Data System (ADS)

    Omar, Mazni; Hasan, Bikhtiyar; Ahmad, Mazida; Yasin, Azman; Baharom, Fauziah; Mohd, Haslina; Darus, Norida Muhd

    2016-08-01

    In software engineering (SE), team roles have a significant impact on project success. To ensure the optimal outcome of the project the team is working on, it is essential that team members are assigned to the right roles with the right characteristics. One of the prevalent team-role models is the Belbin team role. A successful team must have a balance of team roles. Thus, this study demonstrates the steps taken to determine the balance of a software team formed on the basis of Belbin team roles using a fuzzy technique. The fuzzy technique was chosen because it allows imprecise data to be analyzed and selected criteria to be classified. In this study, two Belbin team roles, Shaper (Sh) and Plant (Pl), were chosen for assigning specific roles in the software team. Results show that the technique can be used to determine the balance of team roles. Future work will focus on validating the proposed method using empirical data in an industrial setting.
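    A minimal Python sketch of the fuzzy idea (the membership functions, trait names, and score ranges are assumptions, not the paper's actual formulation): a candidate's trait scores map to graded memberships in the Shaper and Plant roles rather than a crisp yes/no assignment.

```python
# Fuzzy role classification sketch: graded membership instead of a
# hard assignment. Traits and breakpoints are hypothetical.
def triangular(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def role_memberships(drive, creativity):
    # Traits scored 0-10; Shaper keyed to drive, Plant to creativity.
    return {
        "Shaper": triangular(drive, 4, 8, 10),
        "Plant": triangular(creativity, 4, 8, 10),
    }

print(role_memberships(drive=7.5, creativity=5.0))
# {'Shaper': 0.875, 'Plant': 0.25} -- graded, not crisp, assignments
```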

  4. Recipe for Success: Digital Viewables

    NASA Technical Reports Server (NTRS)

    LaPha, Steven; Gaydos, Frank

    2014-01-01

    The Engineering Services Contract (ESC) and Information Management Communication Support contract (IMCS) at Kennedy Space Center (KSC) provide services to NASA with respect to flight and ground systems design and development. These groups provide the necessary tools, aid, and best-practice methodologies required for efficient, optimized design and process development. The team is responsible for configuring and implementing systems and software, along with providing training, documentation, and standards administration. The team supports over 200 engineers and design specialists in the use of Windchill, Creo Parametric, NX, AutoCAD, and a variety of other design and analysis tools.

  5. Arra: Tas::89 0227::Tas Recovery Act 100g Ftp: An Ultra-High Speed Data Transfer Service Over Next Generation 100 Gigabit Per Second Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    YU, DANTONG; Jin, Shudong

    2014-03-01

    Data-intensive applications, including high energy and nuclear physics, astrophysics, climate modeling, nano-scale materials science, genomics, and finance, are expected to generate exabytes of data over the coming years, which must be transferred, visualized, and analyzed by geographically distributed teams of users. High-performance network capabilities must be available to these users at the application level in a transparent, virtualized manner. Moreover, the application users must have the capability to move large datasets from local and remote locations across network environments to their home institutions. To address these challenges, the main goal of our project is to design and evaluate high-performance data transfer software to support various data-intensive applications. First, we designed middleware software that provides access to Remote Direct Memory Access (RDMA) functionality. This middleware integrates network access, memory management and multitasking in its core design. We address a number of issues related to its efficient implementation, for instance, explicit buffer management and memory registration, and parallelization of RDMA operations, which are vital to delivering the benefit of RDMA to the applications. Built on top of this middleware, the RDMA-based FTP software, RFTP, is described and experimentally evaluated. This application has been implemented by our team to exploit the full capabilities of advanced RDMA mechanisms for ultra-high-speed bulk data transfer applications on the Energy Sciences Network (ESnet). Second, we designed our data transfer software to optimize TCP/IP-based data transfer performance so that RFTP is fully compatible with today's Internet. Our kernel optimization techniques, using the Linux system calls sendfile and splice, reduce data-copy costs. In this report, we summarize the technical challenges of our project, the primary software design methods, and the major project milestones achieved, as well as the testbed evaluation work and demonstrations during our project lifetime.
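    The sendfile optimization mentioned above can be sketched in Python (this illustrates the kernel-level technique, not RFTP's actual code): os.sendfile() moves bytes from file to socket inside the kernel, avoiding the user-space copy that a read()/send() loop would incur.

```python
# Zero-copy transfer sketch using the sendfile system call (Linux).
import os
import socket

def send_file_zero_copy(path: str, sock: socket.socket) -> int:
    """Stream a file over a connected socket without copying the data
    through user space; returns the number of bytes sent."""
    sent_total = 0
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        while sent_total < size:
            sent = os.sendfile(sock.fileno(), f.fileno(),
                               sent_total, size - sent_total)
            if sent == 0:
                break  # peer closed the connection
            sent_total += sent
    return sent_total
```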

  6. A Core Plug and Play Architecture for Reusable Flight Software Systems

    NASA Technical Reports Server (NTRS)

    Wilmot, Jonathan

    2006-01-01

    The Flight Software Branch, at Goddard Space Flight Center (GSFC), has been working on a run-time approach to facilitate a formal software reuse process. The reuse process is designed to enable rapid development and integration of high-quality software systems and to more accurately predict development costs and schedule. Previous reuse practices have been somewhat successful when the same teams are moved from project to project. But this typically requires adopting the software system in an all-or-nothing approach, where useful components cannot be easily extracted from the whole. As a result, the system is less flexible and scalable, with limited applicability to new projects. This paper will focus on the rationale behind, and implementation of, the run-time executive. This executive is the core of the component-based flight software commonality and reuse process adopted at Goddard.

  7. Telemetry Monitoring and Display Using LabVIEW

    NASA Technical Reports Server (NTRS)

    Wells, George; Baroth, Edmund C.

    1993-01-01

    The Measurement Technology Center of the Instrumentation Section configures automated data acquisition systems to meet the diverse needs of JPL's experimental research community. These systems are based on personal computers or workstations (Apple, IBM/Compatible, Hewlett-Packard, and Sun Microsystems) and often include integrated data analysis, visualization and experiment control functions in addition to data acquisition capabilities. These integrated systems may include sensors, signal conditioning, data acquisition interface cards, software, and a user interface. Graphical programming is used to simplify configuration of such systems. Employment of a graphical programming language is the most important factor in enabling the implementation of data acquisition, analysis, display and visualization systems at low cost. Other important factors are the use of commercial software packages and off-the-shelf data acquisition hardware where possible. Understanding the experimenter's needs is also critical. An interactive approach to user interface construction and training of operators is also important. One application was created as a result of a competitive effort between a graphical programming language team and a text-based C language programming team to verify the advantages of using a graphical programming language approach. With approximately eight weeks of funding over a period of three months, the text-based programming team accomplished about 10% of the basic requirements, while the Macintosh/LabVIEW team accomplished about 150%, having gone beyond the original requirements to simulate a telemetry stream and provide utility programs. This application verified that using graphical programming can significantly reduce software development time. As a result of this initial effort, additional follow-on work was awarded to the graphical programming team.

  8. Project Management Software for Distributed Industrial Companies

    NASA Astrophysics Data System (ADS)

    Dobrojević, M.; Medjo, B.; Rakin, M.; Sedmak, A.

    This paper gives an overview of the development of a new software solution for project management, intended mainly for use in industrial environments. The main focus of the proposed solution is application in everyday engineering practice in various, mainly distributed, industrial companies. With this in mind, special care has been devoted to developing appropriate tools for tracking, storing and analyzing information about the project, and for delivering it on time to the right team members or other responsible persons. The proposed solution is Internet-based and uses the LAMP/WAMP (Linux or Windows - Apache - MySQL - PHP) platform, because of its stability, versatility, open-source technology and simple maintenance. The modular structure of the software makes it easy to customize according to client-specific needs, with a very short implementation period. Its main advantages are simple usage, quick implementation, easy system maintenance, short training and only basic computer skills needed for operators.

  9. RINGMesh: A programming library for developing mesh-based geomodeling applications

    NASA Astrophysics Data System (ADS)

    Pellerin, Jeanne; Botella, Arnaud; Bonneau, François; Mazuyer, Antoine; Chauvin, Benjamin; Lévy, Bruno; Caumon, Guillaume

    2017-07-01

    RINGMesh is a C++ open-source programming library for manipulating discretized geological models. It is designed to ease the development of applications and workflows that use discretized 3D models. It is neither a geomodeler nor meshing software. RINGMesh implements functionalities to read discretized surface-based or volumetric structural models and to check their validity. The models can then be exported in various file formats. RINGMesh provides data structures to represent geological structural models, defined either by their discretized boundary surfaces and/or by discretized volumes. A programming interface allows the development of new geomodeling methods and the integration of external software. The goal of RINGMesh is to help researchers focus on the implementation of their specific methods rather than on tedious tasks common to many applications. The documented code is open-source and distributed under the modified BSD license. It is available at https://www.ring-team.org/index.php/software/ringmesh.

  10. Idea Paper: The Lifecycle of Software for Scientific Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubey, Anshu; McInnes, Lois C.

    The software lifecycle is a well-researched topic that has produced many models to meet the needs of different types of software projects. However, one class of projects, software development for scientific computing, has received relatively little attention from lifecycle researchers. In particular, software for end-to-end computations for obtaining scientific results has received few lifecycle proposals and no formalization of a development model. An examination of development approaches employed by the teams implementing large multicomponent codes reveals a great deal of similarity in their strategies. This idea paper formalizes these related approaches into a lifecycle model for end-to-end scientific application software, featuring loose coupling between submodels for development of infrastructure and scientific capability. We also invite input from stakeholders to converge on a model that captures the complexity of this development process and provides needed lifecycle guidance to the scientific software community.

  11. Performance of student software development teams: the influence of personality and identifying as team members

    NASA Astrophysics Data System (ADS)

    Monaghan, Conal; Bizumic, Boris; Reynolds, Katherine; Smithson, Michael; Johns-Boast, Lynette; van Rooy, Dirk

    2015-01-01

    One prominent approach in the exploration of the variations in project team performance has been to study two components of the aggregate personalities of the team members: conscientiousness and agreeableness. A second line of research, known as self-categorisation theory, argues that identifying as team members and the team's performance norms should substantially influence the team's performance. This paper explores the influence of both these perspectives in university software engineering project teams. Eighty students worked to complete a piece of software in small project teams during 2007 or 2008. To reduce limitations in statistical analysis, Monte Carlo simulation techniques were employed to extrapolate from the results of the original sample to a larger simulated sample (2043 cases, within 319 teams). The results emphasise the importance of taking into account personality (particularly conscientiousness), and both team identification and the team's norm of performance, in order to cultivate higher levels of performance in student software engineering project teams.

  12. LHCb Build and Deployment Infrastructure for run 2

    NASA Astrophysics Data System (ADS)

    Clemencic, M.; Couturier, B.

    2015-12-01

    After the successful run 1 of the LHC, the LHCb Core software team has taken advantage of the long shutdown to consolidate and improve its build and deployment infrastructure. Several of the related projects have already been presented, such as the build system using Jenkins and the LHCb Performance and Regression testing infrastructure. Some components are completely new, like the Software Configuration Database (using the graph database Neo4j) and the new packaging installation using RPM packages. Furthermore, all these parts are integrated to allow easier and quicker releases of the LHCb software stack, thereby reducing the risk of operational errors. Integration and regression tests are also now easier to implement, allowing the software checks to be improved further.

  13. Scheduling System Assessment, and Development and Enhancement of Re-engineered Version of GPSS

    NASA Technical Reports Server (NTRS)

    Loganantharaj, Rasiah; Thomas, Bushrod; Passonno, Nicole

    1996-01-01

    The objective of this project is two-fold: first, to provide an evaluation of a commercially developed version of the ground processing scheduling system (GPSS) for its applicability to the Kennedy Space Center (KSC) ground processing problem; second, to work with the KSC GPSS development team and provide enhancements to the existing software. Systems reengineering is required to provide a sustainable system for the users and the software maintenance group. Using the LISP profile prototype code developed by the GPSS reverse-engineering group as a building block, we have implemented the resource-deconfliction portion of GPSS in Common LISP using its object-oriented features. The prototype corrects and extends some of the deficiencies of the current production version, and it uses and builds on the classes from the development team's profile prototype.

  14. Architecture-Based Unit Testing of the Flight Software Product Line

    NASA Technical Reports Server (NTRS)

    Ganesan, Dharmalingam; Lindvall, Mikael; McComas, David; Bartholomew, Maureen; Slegel, Steve; Medina, Barbara

    2010-01-01

    This paper presents an analysis of the unit testing approach developed and used by the Core Flight Software (CFS) product line team at NASA GSFC. The goal of the analysis is to understand, review, and recommend strategies for improving the existing unit testing infrastructure, as well as to capture lessons learned and best practices that can be used by other product line teams for their unit testing. The CFS unit testing framework is designed and implemented as a set of variation points, and thus testing support is built into the product line architecture. The analysis found that the CFS unit testing approach has many practical and good solutions that are worth considering when deciding how to design the testing architecture for a product line; these are documented in this paper along with some suggested improvements.
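    The variation-point idea can be pictured with a small Python sketch (the names and interfaces here are illustrative, not the actual CFS test framework, which is written in C): the unit under test reaches its platform services through an injected interface, so a test swaps in a stub that records calls and forces error paths.

```python
# Variation-point sketch: the platform layer is injected, so tests can
# substitute a scripted stub for the real OS services.
class OsalStub:
    """Stand-in for an OS abstraction layer, scripted by the test."""
    def __init__(self, fail_write=False):
        self.calls, self.fail_write = [], fail_write

    def write(self, msg):
        self.calls.append(msg)            # record the call for assertions
        return -1 if self.fail_write else 0

def log_event(osal, msg):
    """Unit under test: must tolerate a failing platform write."""
    return "logged" if osal.write(msg) == 0 else "error-handled"

assert log_event(OsalStub(), "boot") == "logged"
assert log_event(OsalStub(fail_write=True), "boot") == "error-handled"
```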

  15. The TextBase project--implementation of a base level message supporting electronic patient record transfer in English general practice.

    PubMed

    Booth, N; Jain, N L; Sugden, B

    1999-01-01

    The TextBase project is a laboratory experiment to assess the feasibility of a common exchange format for sending a transcription of the contents of the Electronic Patient Record (EPR) between different general practices, when patients move from one practice to another in the NHS in England. The project was managed using a partnership arrangement between the four EPR systems vendors who agreed to collaborate and the project team. It lasted one year and consisted of an iterative design process followed by creation of message generation and reading modules within the collaborating EPR systems according to a software requirement specification created by the project team. The paper describes the creation of a common record display format, the implementation of transfer using a floppy disk in the lab, and considers the further barriers before a national implementation might be achieved.

  16. Maintaining Quality and Confidence in Open-Source, Evolving Software: Lessons Learned with PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Frederick, J. M.; Hammond, G. E.

    2017-12-01

    Software evolution in an open-source framework poses a major challenge to a geoscientific simulator, but when properly managed, the pay-off can be enormous for both the developers and the community at large. Developers must juggle implementing new scientific process models, adopting increasingly efficient numerical methods and programming paradigms, and changing funding sources (or a total lack of funding), while also ensuring that legacy code remains functional and reported bugs are fixed in a timely manner. With robust software engineering and a plan for long-term maintenance, a simulator can evolve over time, incorporating and leveraging many advances in the computational and domain sciences. In this positive light, what practices in software engineering and code maintenance can be employed within open-source development to maximize the positive aspects of software evolution and community contributions while minimizing its negative side effects? This presentation discusses steps taken in the development of PFLOTRAN (www.pflotran.org), an open source, massively parallel subsurface simulator for multiphase, multicomponent, and multiscale reactive flow and transport processes in porous media. As PFLOTRAN's user base and development team continues to grow, it has become increasingly important to implement strategies which ensure sustainable software development while maintaining software quality and community confidence. In this presentation, we will share our experiences and "lessons learned" within the context of our open-source development framework and community engagement efforts. Topics discussed will include how we've leveraged both standard software engineering principles, such as coding standards, version control, and automated testing, as well as unique advantages of object-oriented design in process model coupling, to ensure software quality and confidence. We will also be prepared to discuss the major challenges faced by most open-source software teams, such as on-boarding new developers or one-time contributions, dealing with competitors or lookie-loos, and other downsides of complete transparency, as well as our approach to community engagement, including a user group email list, hosting short courses and workshops for new users, and maintaining a website. SAND2017-8174A
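    One of the practices named above, automated regression testing, can be sketched in a few lines of Python (the file format, keys, and tolerance are assumptions, not PFLOTRAN's actual test harness): rerun a case and compare key outputs against stored gold values within a tolerance, so that refactoring or new process models cannot silently change answers.

```python
# Regression-test sketch: compare a run's outputs to stored gold values.
import json

TOLERANCE = 1e-8  # assumed acceptance tolerance

def check_regression(new_results: dict, gold_path: str) -> list:
    """Return a list of (key, expected, actual) mismatches; empty means
    the new run still reproduces the gold data within tolerance."""
    with open(gold_path) as f:
        gold = json.load(f)
    failures = []
    for key, expected in gold.items():
        actual = new_results.get(key)
        if actual is None or abs(actual - expected) > TOLERANCE:
            failures.append((key, expected, actual))
    return failures
```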

  17. Software Project Management and Measurement on the World-Wide-Web (WWW)

    NASA Technical Reports Server (NTRS)

    Callahan, John; Ramakrishnan, Sudhaka

    1996-01-01

    We briefly describe a system for forms-based, work-flow management that helps members of a software development team overcome geographical barriers to collaboration. Our system, called the Web Integrated Software Environment (WISE), is implemented as a World-Wide-Web service that allows for management and measurement of software development projects based on dynamic analysis of change activity in the workflow. WISE tracks issues in a software development process, provides informal communication between the users with different roles, supports to-do lists, and helps in software process improvement. WISE minimizes the time devoted to metrics collection and analysis by providing implicit delivery of messages between users based on the content of project documents. The use of a database in WISE is hidden from the users who view WISE as maintaining a personal 'to-do list' of tasks related to the many projects on which they may play different roles.

  18. Management Guidelines for Database Developers' Teams in Software Development Projects

    NASA Astrophysics Data System (ADS)

    Rusu, Lazar; Lin, Yifeng; Hodosi, Georg

    The worldwide job market for database developers (DBDs) has grown continually over the last several years. In some companies, DBDs are organized as a special team (DBD team) to support other projects and roles. As a new role, the DBD team faces a major problem: there are no management guidelines for it. The team manager does not know which kinds of tasks should be assigned to this team and what practices should be used during DBDs' work. Therefore, in this paper we have developed a set of management guidelines, which includes 8 fundamental tasks and 17 practices from the software development process, using two methodologies, the Capability Maturity Model (CMM) and agile software development (in particular Scrum), in order to improve the DBD team's work. Moreover, the management guidelines developed here have been complemented with practices from the authors' experience in this area and have been evaluated in the case of a software company. The management guidelines for DBD teams presented in this paper could be very useful for other companies that use a DBD team and could contribute to increasing the efficiency of these teams in their work on software development projects.

  19. Spitzer observatory operations: increasing efficiency in mission operations

    NASA Astrophysics Data System (ADS)

    Scott, Charles P.; Kahr, Bolinda E.; Sarrel, Marc A.

    2006-06-01

    This paper explores the hows and whys of the Spitzer Mission Operations System's (MOS) success, efficiency, and affordability in comparison to other observatory-class missions. MOS exploits today's flight, ground, and operations capabilities, embraces automation, and balances both risk and cost. With operational efficiency as the primary goal, MOS maintains a strong control process by translating lessons learned into efficiency improvements, thereby enabling the MOS processes, teams, and procedures to rapidly evolve from concept (through thorough validation) into in-flight implementation. Operational teaming, planning, and execution are designed to enable reuse. Mission changes, unforeseen events, and continuous improvement have oftentimes forced us to learn to fly anew. Collaborative spacecraft operations and remote science and instrument teams have become well integrated and have worked together to improve and optimize each human, machine, and software-system element. Adaptation to tighter spacecraft margins has facilitated continuous operational improvements via automated and autonomous software coupled with improved human analysis. Based upon what we now know and what we need to improve, adapt, or fix, the projected mission lifetime continues to grow - as does the opportunity for numerous scientific discoveries.

  20. TMT approach to observatory software development process

    NASA Astrophysics Data System (ADS)

    Buur, Hanne; Subramaniam, Annapurni; Gillies, Kim; Dumas, Christophe; Bhatia, Ravinder

    2016-07-01

    The purpose of the Observatory Software System (OSW) is to integrate all software and hardware components of the Thirty Meter Telescope (TMT) to enable observations and data capture; thus it is a complex software system that is defined by four principal software subsystems: Common Software (CSW), Executive Software (ESW), Data Management System (DMS) and Science Operations Support System (SOSS), all of which have interdependencies with the observatory control systems and data acquisition systems. Therefore, the software development process and plan must consider dependencies on other subsystems, manage architecture, interfaces and design, manage software scope and complexity, and standardize and optimize the use of resources and tools. Additionally, the TMT Observatory Software will largely be developed in India through TMT's workshare relationship with the India TMT Coordination Centre (ITCC) and the use of Indian software industry vendors, which adds complexity and challenges to the software development process, to communication and coordination of activities and priorities, and to measuring performance and managing quality and risk. The software project management challenge for the TMT OSW is thus a multifaceted technical, managerial, communications and interpersonal-relations challenge. The approach TMT is using to manage this multifaceted challenge is a combination of: establishing an effective geographically distributed software team (Integrated Product Team) with strong project management and technical leadership provided by the TMT Project Office (PO) and the ITCC partner to manage plans, process, performance, risk and quality, and to facilitate effective communications; establishing an effective cross-functional software management team composed of stakeholders, OSW leadership and ITCC leadership to manage dependencies and software release plans, technical complexities and change to approved interfaces, architecture, design and tool set, and to facilitate effective communications; adopting an agile-based software development process across the observatory to enable frequent software releases and help mitigate subsystem interdependencies; and defining concise scope and work packages for each of the OSW subsystems to facilitate effective outsourcing of software deliverables to the ITCC partner and to enable performance monitoring and risk management. At this stage, the architecture and high-level design of the software system have been established and reviewed. During construction each subsystem will have a final design phase with reviews, followed by implementation and testing. The results of the TMT approach to the observatory software development process will only be preliminary at the time of the submittal of this paper, but it is anticipated that the early results will be a favorable indication of progress.

  1. Distributed Visualization Project

    NASA Technical Reports Server (NTRS)

    Craig, Douglas; Conroy, Michael; Kickbusch, Tracey; Mazone, Rebecca

    2016-01-01

    Distributed Visualization allows anyone, anywhere to see any simulation at any time. Development focuses on algorithms, software, data formats, data systems and processes to enable sharing simulation-based information across temporal and spatial boundaries without requiring stakeholders to possess highly-specialized and very expensive display systems. It also introduces abstraction between the native and shared data, which allows teams to share results without giving away proprietary or sensitive data. The initial implementation of this capability is the Distributed Observer Network (DON) version 3.1. DON 3.1 is available for public release in the NASA Software Store (https://software.nasa.gov/software/KSC-13775) and works with version 3.0 of the Model Process Control specification (an XML Simulation Data Representation and Communication Language) to display complex graphical information and associated Meta-Data.

  2. Orbit Determination and Navigation Software Testing for the Mars Reconnaissance Orbiter

    NASA Technical Reports Server (NTRS)

    Pini, Alex

    2011-01-01

    During the extended science phase of the Mars Reconnaissance Orbiter's lifecycle, the operational duties pertaining to navigation primarily involve orbit determination. The orbit determination process utilizes radiometric tracking data and is used for the prediction and reconstruction of MRO's trajectories. Predictions are done twice per week for ephemeris updates on board the spacecraft and for planning purposes. Orbit Trim Maneuvers (OTMs) are also designed using the predicted trajectory. Reconstructions, which incorporate a batch estimator, provide precise information about the spacecraft state to be synchronized with scientific measurements. These tasks were conducted regularly to validate the results obtained by the MRO Navigation Team. Additionally, the team is in the process of converting to newer versions of the navigation software and operating system. The capability to model multiple densities in the Martian atmosphere is also being implemented. However, testing outputs across these different configurations was necessary to ensure consistency to a satisfactory degree.
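    A minimal Python sketch of the batch-estimation step at the heart of reconstruction (the measurement model here is deliberately trivial; real orbit determination linearizes orbital dynamics and iterates): solve for the state correction that best fits all residuals at once by weighted linear least squares.

```python
# Batch least-squares sketch: one correction fitted to all residuals.
import numpy as np

def batch_estimate(H, residuals, weights):
    """Weighted linear least squares for the state correction dx that
    minimizes sum_i w_i * (residual_i - H_i . dx)**2."""
    sw = np.sqrt(np.asarray(weights, dtype=float))
    dx, *_ = np.linalg.lstsq(H * sw[:, None], residuals * sw, rcond=None)
    return dx

# Toy example: a two-parameter state observed through four measurements.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
y = np.array([0.11, -0.04, 0.08, 0.15])
print(batch_estimate(H, y, weights=[1, 1, 0.5, 0.5]))
```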

  3. Evolution of Software-Only-Simulation at NASA IV and V

    NASA Technical Reports Server (NTRS)

    McCarty, Justin; Morris, Justin; Zemerick, Scott

    2014-01-01

    Software-Only-Simulations have been an emerging but quickly developing field of study throughout NASA. The NASA Independent Verification & Validation (IV&V) Independent Test Capability (ITC) team has been rapidly building a collection of simulators for a wide range of NASA missions. ITC specializes in full end-to-end simulations that enable developers, V&V personnel, and operators to test-as-you-fly. In four years, the team has delivered a wide variety of spacecraft simulations, ranging from lower-complexity science missions such as the Global Precipitation Measurement (GPM) satellite and the Deep Space Climate Observatory (DSCOVR) to extremely complex missions such as the James Webb Space Telescope (JWST) and the Space Launch System (SLS). This paper describes the evolution of ITC's technologies and processes that have been utilized to design, implement, and deploy end-to-end simulation environments for various NASA missions. A comparison of mission simulators is presented, with focus on technology and lessons learned in complexity, hardware modeling, and continuous integration. The paper also describes the methods for executing the missions' unmodified flight software binaries (not cross-compiled) for verification and validation activities.

  4. An Autonomous Flight Safety System

    DTIC Science & Technology

    2008-11-01

    are taken. AFSS can take vehicle navigation data from redundant onboard sensors and make flight termination decisions using software-based rules...implemented on redundant flight processors. By basing these decisions on actual Instantaneous Impact Predictions and by providing for an arbitrary...number of mission rules, it is the contention of the AFSS development team that the decision making process used by Missile Flight Control Officers

  5. JRTF: A Flexible Software Framework for Real-Time Control in Magnetic Confinement Nuclear Fusion Experiments

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Zheng, G. Z.; Zheng, W.; Chen, Z.; Yuan, T.; Yang, C.

    2016-04-01

    Magnetic confinement nuclear fusion experiments require various real-time control applications, such as plasma control. ITER has designed the Fast Plant System Controller (FPSC) for this job and has provided hardware and software standards and guidelines for building an FPSC. In order to develop various real-time FPSC applications efficiently, a flexible real-time software framework called the J-TEXT real-time framework (JRTF) has been developed by the J-TEXT tokamak team. JRTF allows developers to implement different functions as independent and reusable modules called Application Blocks (ABs). AB developers only need to focus on implementing the control tasks or the algorithms; the timing, scheduling, data sharing and eventing are handled by the JRTF pipelines. JRTF provides great flexibility in developing ABs: unit tests against ABs can be developed easily, and ABs can even be used in non-JRTF applications. JRTF also provides interfaces allowing JRTF applications to be configured and monitored at runtime. JRTF is compatible with ITER standard FPSC hardware and the ITER CODAC (Control, Data Access and Communication) Core software, and it can be configured and monitored using EPICS (Experimental Physics and Industrial Control System). Moreover, JRTF can be ported to different platforms and integrated with supervisory control software other than EPICS. The paper presents the design and implementation of JRTF as well as brief test results.
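    A minimal Python sketch of the Application Block pattern (the block names and interface are invented for illustration; they are not the actual JRTF API): each block implements one control task behind a common interface, the framework pipeline handles sequencing and data sharing each cycle, and each block remains unit-testable on its own.

```python
# Pluggable-block pipeline sketch: the framework sequences blocks and
# passes shared data between them each control cycle.
class ApplicationBlock:
    def process(self, data: dict) -> dict:
        raise NotImplementedError

class DensityEstimator(ApplicationBlock):
    def process(self, data):
        # Hypothetical calibration from an interferometer signal.
        data["density"] = 2.0 * data["interferometer"]
        return data

class FeedbackController(ApplicationBlock):
    def process(self, data):
        # Simple proportional actuation toward the density target.
        data["gas_puff"] = max(0.0, data["target"] - data["density"])
        return data

def run_pipeline(blocks, data):
    """What the framework does each control cycle."""
    for block in blocks:
        data = block.process(data)
    return data

print(run_pipeline([DensityEstimator(), FeedbackController()],
                   {"interferometer": 1.2, "target": 3.0}))
```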

  6. Tactical Approaches for Making a Successful Satellite Passive Microwave ESDR

    NASA Astrophysics Data System (ADS)

    Hardman, M.; Brodzik, M. J.; Gotberg, J.; Long, D. G.; Paget, A. C.

    2014-12-01

    Our NASA MEaSUREs project is producing a new, enhanced resolution gridded Earth System Data Record for the entire satellite passive microwave (SMMR, SSM/I-SSMIS and AMSR-E) time series. Our project goals are twofold: to produce a well-documented, consistently processed, high-quality historical record at higher spatial resolutions than have previously been available, and to transition the production software to the NSIDC DAAC for ongoing processing after our project completion. In support of these goals, our distributed team at BYU and NSIDC faces project coordination challenges to produce a high-quality data set that our user community will accept as a replacement for the currently available historical versions of these data. We work closely with our DAAC liaison on format specifications, data and metadata plans, and project progress. In order for the user community to understand and support our project, we have solicited a team of Early Adopters who are reviewing and evaluating a prototype version of the data. Early Adopter feedback will be critical input to our final data content and format decisions. For algorithm transparency and accountability, we have released an Algorithm Theoretical Basis Document (ATBD) and detailed supporting technical documentation, with rationale for all algorithm implementation decisions. For distributed team management, we are using collaborative tools for software revision control and issue tracking. For reliably transitioning a research-quality image reconstruction software system to production-quality software suitable for use at the DAAC, we have adopted continuous integration methods for running automated regression testing. Our presentation will summarize both advantages and challenges of each of these tactics in ensuring production of a successful ESDR and an enduring production software system.

  7. Knowledge and attitude toward interdisciplinary team working among obstetricians and gynecologists in teaching hospitals in South East Nigeria.

    PubMed

    Iyoke, Chukwuemeka Anthony; Lawani, Lucky Osaheni; Ugwu, George Onyemaechi; Ajah, Leonard Ogbonna; Ezugwu, Euzebus Chinonye; Onah, Paul; Onwuka, Chidinma Ifechi

    2015-01-01

    Interdisciplinary team working could facilitate the efficient provision and coordination of increasingly diverse health services, thereby improving the quality of patient care. The purpose of this study was to describe knowledge of interdisciplinary team working among obstetricians and gynecologists in two teaching hospitals in South East Nigeria and to determine their attitude toward an interdisciplinary collaborative approach to patient care in these institutions. This was a questionnaire-based cross-sectional study. Data analysis involved descriptive statistics and was carried out using Statistical Package for the Social Sciences software version 17.0 for Windows. In total, 116 doctors participated in the study. The mean age of the respondents was 31.9±7.0 (range 22-51) years. Approximately 74% of respondents were aware of the concept of interdisciplinary team working. Approximately 15% of respondents who were aware of the concept of interdisciplinary team working had very good knowledge of it; 52% had good knowledge and 33% had poor knowledge. Twenty-nine percent of knowledgeable respondents reported ever receiving formal teaching/training on interdisciplinary team working in the course of their professional development. About 78% of those aware of team working believed that interdisciplinary teams would be useful in obstetrics and gynecology practice in Nigeria, with 89% stating that it would be very useful. Approximately 77% of those aware of team working would support establishment and implementation of interdisciplinary teams at their centers. There was a high degree of knowledge of the concept and a positive attitude toward interdisciplinary team working among obstetricians and gynecologists in the study centers. This suggests that the attitude of physicians may not be an impediment to implementation of a collaborative interdisciplinary approach to clinical care in the study centers.

  8. Knowledge and attitude toward interdisciplinary team working among obstetricians and gynecologists in teaching hospitals in South East Nigeria

    PubMed Central

    Iyoke, Chukwuemeka Anthony; Lawani, Lucky Osaheni; Ugwu, George Onyemaechi; Ajah, Leonard Ogbonna; Ezugwu, Euzebus Chinonye; Onah, Paul; Onwuka, Chidinma Ifechi

    2015-01-01

    Background Interdisciplinary team working could facilitate the efficient provision and coordination of increasingly diverse health services, thereby improving the quality of patient care. The purpose of this study was to describe knowledge of interdisciplinary team working among obstetricians and gynecologists in two teaching hospitals in South East Nigeria and to determine their attitude toward an interdisciplinary collaborative approach to patient care in these institutions. Methods This was a questionnaire-based cross-sectional study. Data analysis involved descriptive statistics and was carried out using Statistical Package for the Social Sciences software version 17.0 for Windows. Results In total, 116 doctors participated in the study. The mean age of the respondents was 31.9±7.0 (range 22–51) years. Approximately 74% of respondents were aware of the concept of interdisciplinary team working. Approximately 15% of respondents who were aware of the concept of interdisciplinary team working had very good knowledge of it; 52% had good knowledge and 33% had poor knowledge. Twenty-nine percent of knowledgeable respondents reported ever receiving formal teaching/training on interdisciplinary team working in the course of their professional development. About 78% of those aware of team working believed that interdisciplinary teams would be useful in obstetrics and gynecology practice in Nigeria, with 89% stating that it would be very useful. Approximately 77% of those aware of team working would support establishment and implementation of interdisciplinary teams at their centers. Conclusion There was a high degree of knowledge of the concept and a positive attitude toward interdisciplinary team working among obstetricians and gynecologists in the study centers. This suggests that the attitude of physicians may not be an impediment to implementation of a collaborative interdisciplinary approach to clinical care in the study centers. PMID:26064058

  9. The Cascading Impacts of Technology Selection: Incorporating Ruby on Rails into ECHO

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Cechini, M.

    2010-12-01

    NASA’s Earth Observing System (EOS) ClearingHOuse (ECHO) is a SOA-based Earth Science Data search and order system implemented in Java, with one significant exception: the web client used by 98% of our users is written in Perl. After several decades of maintenance, the Perl-based application had reached the end of its serviceable life, and ECHO was tasked with implementing a replacement. Despite a broad investment in Java, the ECHO team conducted a survey of modern development technologies including Flex, Python/Django, JSF2/Spring and Ruby on Rails. The team ultimately chose Ruby on Rails (RoR) with Cucumber for testing due to its perceived applicability to web application development and corresponding development efficiency gains. Both positive and negative impacts on the entire ECHO team, including our stakeholders, were immediate and sometimes subtle. The technology selection caused shifts in our architecture and design, development and deployment procedures, requirement definition approach, testing approach, and, somewhat surprisingly, our project team structure and software process. This presentation discusses our technical, process, and psychological experiences using RoR on a production system. During this session we will discuss: - Real impacts of introducing a dynamic language to a Java team - Real and perceived efficiency advantages - Impediments to adoption and effectiveness - Impacts of the transition from Test Driven Development to Behavior Driven Development - Leveraging Cucumber to provide fully executable requirement documents - Impacts on team structure and roles
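
    The "fully executable requirement documents" item refers to the Cucumber pattern: requirements written as Given/When/Then scenarios bound to step definitions. Cucumber proper is Ruby-based; the sketch below illustrates the same pattern with Python's behave package for consistency with the other examples here, and the scenario and data are invented, not ECHO's actual requirements.

    ```python
    # A requirement as a Gherkin scenario (normally in its own .feature file):
    #
    #   Scenario: Search by temporal range
    #     Given a collection with granules from 2009
    #     When I search for granules between 2009-01-01 and 2009-12-31
    #     Then every result falls inside that range
    #
    # Step definitions (below, in a steps/ module) bind each line to code, so
    # the requirement document itself is executable.
    from behave import given, when, then


    @given("a collection with granules from {year}")
    def step_collection(context, year):
        context.granules = [f"{year}-03-14", f"{year}-11-02"]


    @when("I search for granules between {start} and {end}")
    def step_search(context, start, end):
        context.results = [g for g in context.granules if start <= g <= end]


    @then("every result falls inside that range")
    def step_check(context):
        assert context.results == context.granules
    ```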

  10. Leader Delegation and Trust in Global Software Teams

    ERIC Educational Resources Information Center

    Zhang, Suling

    2008-01-01

    Virtual teams are an important work structure in global software development. The distributed team structure enables access to a diverse set of expertise which is often not available in one location, to a cheaper labor force, and to a potentially accelerated development process that uses a twenty-four hour work structure. Many software teams…

  11. The roles of the AAS Journals' Data Editors

    NASA Astrophysics Data System (ADS)

    Muench, August; NASA/SAO ADS, CERN/Zenodo.org, Harvard/CfA Wolbach Library

    2018-01-01

    I will summarize the community services provided by the AAS Journals' Data Editors to support authors when citing and preserving the software and data used in the published literature. In addition, I will describe the life of a piece of code as it passes through the current workflows for software citation in astronomy. Using this “lifecycle” I will detail the ongoing work, funded by a grant from the Alfred P. Sloan Foundation to the American Astronomical Society, to improve the citation of software in the literature. The funded development team and advisory boards, made up of non-profit publishers, literature indexers, and preservation archives, are implementing the Force11 software citation principles for astronomy journals. The outcome of this work will be new workflows for authors and developers that fit into their current practices while enabling versioned citation of software and granular credit for its creators.

  12. A Matrix Approach to Software Process Definition

    NASA Technical Reports Server (NTRS)

    Schultz, David; Bachman, Judith; Landis, Linda; Stark, Mike; Godfrey, Sally; Morisio, Maurizio; Powers, Edward I. (Technical Monitor)

    2000-01-01

    The Software Engineering Laboratory (SEL) is currently engaged in a Methodology and Metrics program for the Information Systems Center (ISC) at Goddard Space Flight Center (GSFC). This paper addresses the Methodology portion of the program. The purpose of the Methodology effort is to assist a software team lead in selecting and tailoring a software development or maintenance process for a specific GSFC project. It is intended that this process will also be compliant with both ISO 9001 and the Software Engineering Institute's Capability Maturity Model (CMM). Under the Methodology program, we have defined four standard ISO-compliant software processes for the ISC, and three tailoring criteria that team leads can use to categorize their projects. The team lead would select a process and appropriate tailoring factors, from which a software process tailored to the specific project could be generated. Our objective in the Methodology program is to present software process information in a structured fashion, to make it easy for a team lead to characterize the type of software engineering to be performed, and to apply tailoring parameters to search for an appropriate software process description. This will enable the team lead to follow a proven, effective software process and also satisfy NASA's requirement for compliance with ISO 9001 and the anticipated requirement for CMM assessment. This work is also intended to support the deployment of sound software processes across the ISC.
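
    A minimal sketch of the matrix idea, assuming hypothetical tailoring criteria and process names (the SEL's actual four processes and three criteria are not reproduced here): the team lead's criteria index into a table of process descriptions.

    ```python
    # Tailoring criteria index into a matrix of process descriptions.
    PROCESS_MATRIX = {
        ("development", "small"): "Lightweight development process",
        ("development", "large"): "Full-lifecycle development process",
        ("maintenance", "small"): "Streamlined maintenance process",
        ("maintenance", "large"): "Formal maintenance process",
    }


    def select_process(work_type: str, project_size: str) -> str:
        try:
            return PROCESS_MATRIX[(work_type, project_size)]
        except KeyError:
            raise ValueError(f"no process defined for {work_type}/{project_size}")


    print(select_process("development", "small"))
    ```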

  13. Design and Implementation of a Modern Automatic Deformation Monitoring System

    NASA Astrophysics Data System (ADS)

    Engel, Philipp; Schweimler, Björn

    2016-03-01

    The deformation monitoring of structures and buildings is an important task field of modern engineering surveying, ensuring the standing and reliability of supervised objects over a long period. Several commercial hardware and software solutions for the realization of such monitoring measurements are available on the market. In addition to them, a research team at the University of Applied Sciences in Neubrandenburg (NUAS) is actively developing a software package for monitoring purposes in geodesy and geotechnics, which is distributed under an open source licence and free of charge. The task of managing an open source project is well-known in computer science, but it is fairly new in a geodetic context. This paper contributes to that issue by detailing applications, frameworks, and interfaces for the design and implementation of open hardware and software solutions for sensor control, sensor networks, and data management in automatic deformation monitoring. It will be discussed how the development effort of networked applications can be reduced by using free programming tools, cloud computing technologies, and rapid prototyping methods.

  14. Omics Metadata Management Software v. 1 (OMMS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and to perform bioinformatics analyses and information management tasks via a simple and intuitive web-based interface. Several use cases with short-read sequence datasets are provided to showcase the full functionality of the OMMS, from metadata curation tasks, to bioinformatics analyses and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed research teams. Our software was developed with open-source bundles, is flexible, extensible and easily installed and run by operators with general system administration and scripting language literacy.

  15. An Open Source Tool to Test Interoperability

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.

    2012-12-01

    Scientists interact with information at various levels, from gathering raw observed data to accessing portrayed, processed, quality-controlled data. Geoinformatics tools help scientists with the acquisition, storage, processing, dissemination and presentation of geospatial information. Most of these interactions occur in a distributed environment between software components that take the role of either client or server, and the communication between components includes protocols, encodings of messages and managing of errors. Testing these communication components is important to guarantee proper implementation of standards. The communication between clients and servers can be ad hoc or follow standards; by following standards, interoperability between components increases while the time needed to develop new software decreases. The Open Geospatial Consortium (OGC) not only coordinates the development of standards but also, within the Compliance Testing Program (CITE), provides a testing infrastructure to test clients and servers. The OGC Web-based Test Engine Facility, based on TEAM Engine, allows developers to test Web services and clients for correct implementation of OGC standards. TEAM Engine is a Java open source facility, available on SourceForge, that can be run via the command line, deployed in a web servlet container, or integrated into a developer's environment via Maven. TEAM Engine uses the Compliance Test Language (CTL) and TestNG to test HTTP requests, SOAP services and XML instances against schemas and Schematron-based assertions for any type of web service, not only OGC services. For example, the OGC Web Feature Service (WFS) 1.0.0 test has more than 400 test assertions. Some of these assertions include conformance of HTTP responses; conformance of GML-encoded data; proper values for elements and attributes in the XML; and correct error responses. This presentation will provide an overview of TEAM Engine, an introduction to testing via the OGC testing web site, and a description of performing local tests. It will also provide information about how to participate in the open source development of TEAM Engine.
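
    The flavor of assertion such a test engine automates can be sketched with the Python standard library (TEAM Engine itself expresses these checks in CTL and TestNG); the endpoint URL below is a placeholder, not a real service.

    ```python
    # The kinds of conformance assertions a compliance test automates, using
    # only the standard library. ENDPOINT is a placeholder.
    import urllib.request
    import xml.etree.ElementTree as ET

    ENDPOINT = "http://example.org/wfs?service=WFS&request=GetCapabilities"

    with urllib.request.urlopen(ENDPOINT, timeout=30) as response:
        # Conformance of the HTTP response: status code and content type.
        assert response.status == 200
        assert "xml" in response.headers.get("Content-Type", "")
        body = response.read()

    # Conformance of the XML payload: it must parse and advertise capabilities.
    root = ET.fromstring(body)
    assert root.tag.endswith("WFS_Capabilities"), f"unexpected root {root.tag}"
    ```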

  16. Impact of Top Management Team on Firm Performance in Small and Medium-Sized Enterprises Adopting Commercial Open-Source Enterprise Resource Planning

    ERIC Educational Resources Information Center

    Cereola, Sandra J.; Wier, Benson; Norman, Carolyn Strand

    2012-01-01

    Based on the large number of small and medium-sized enterprises (SMEs) in the United States, their increasing interest in enterprise-wide software systems and their impact on the US economy, it is important to understand the determinants that can facilitate the successful implementation and assimilation of such technology into these firms' daily…

  17. NASA Work Breakdown Structure (WBS) Handbook

    NASA Technical Reports Server (NTRS)

    Terrell, Stefanie M.

    2018-01-01

    The purpose of this document is to provide program/project teams necessary instruction and guidance in the best practices for Work Breakdown Structure (WBS) and WBS dictionary development and use for project implementation and management control. This handbook can be used for all types of NASA projects and work activities including research, development, construction, test and evaluation, and operations. The products of these work efforts may be hardware, software, data, or service elements (alone or in combination). The aim of this document is to assist project teams in the development of effective work breakdown structures that provide a framework of common reference for all project elements.

  18. A Team Building Model for Software Engineering Courses Term Projects

    ERIC Educational Resources Information Center

    Sahin, Yasar Guneri

    2011-01-01

    This paper proposes a new model for team building, which enables teachers to build coherent teams rapidly and fairly for the term projects of software engineering courses. Moreover, the model can also be used to build teams for any type of project, if the team member candidates are students, or if they are inexperienced on a certain subject. The…

  19. Software Capability Evaluation Version 2.0 Method Description

    DTIC Science & Technology

    1994-06-01

    These criteria are discussed below; they include training, team composition, team leadership, team member experience and knowledge, individual...previous SCEs. No more than one team member should have less than two years of professional software experience. Leadership: ideally, the team leader...features: leadership - the assignment of responsibility, the presence of sponsorship; organizational policies - there are written policies governing the

  20. From Prime to Extended Mission: Evolution of the MER Tactical Uplink Process

    NASA Technical Reports Server (NTRS)

    Mishkin, Andrew H.; Laubach, Sharon

    2006-01-01

    To support a 90-day surface mission for two robotic rovers, the Mars Exploration Rover mission designed and implemented an intensive tactical operations process, enabling daily commanding of each rover. Using a combination of new processes, custom software tools, a Mars-time staffing schedule, and seven-day-a-week operations, the MER team was able to compress the traditional weeks-long command-turnaround for a deep space robotic mission to about 18 hours. However, the pace of this process was never intended to be continued indefinitely. Even before the end of the three-month prime mission, MER operations began evolving towards greater sustainability. A combination of continued software tool development, increasing team experience, and availability of reusable sequences first reduced the mean process duration to approximately 11 hours. The number of workshifts required to perform the process dropped, and the team returned to a modified 'Earth-time' schedule. Additional process and tool adaptation eventually provided the option of planning multiple Martian days of activity within a single workshift, making 5-day-a-week operations possible. The vast majority of the science team returned to their home institutions, continuing to participate fully in the tactical operations process remotely. MER has continued to operate for over two Earth-years as many of its key personnel have moved on to other projects, the operations team and budget have shrunk, and the rovers have begun to exhibit symptoms of aging.

  1. Mars Science Laboratory Boot Robustness Testing

    NASA Technical Reports Server (NTRS)

    Banazadeh, Payam; Lam, Danny

    2011-01-01

    Mars Science Laboratory (MSL) is one of the most complex spacecraft in the history of mankind. Because of this complexity, a large number of flight software (FSW) requirements have been written for implementation. In practice, these requirements necessitate very complex and very precise flight software with no room for error. One of the flight software's responsibilities is to boot up and check the state of all devices on the spacecraft after the wake-up process. This boot-up and initialization is crucial to mission success, since any misbehavior of the various devices must be handled by the flight software. I have created a test toolkit that allows the FSW team to exhaustively test the flight software under a variety of unexpected scenarios and validate that it can handle any situation after booting up. The test initializes different devices on the spacecraft to different configurations and validates, at the end of the flight software boot-up, that the flight software has initialized those devices to what they are supposed to be in that particular scenario.
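
    The exhaustive-scenario idea can be sketched as follows, with invented device names and a stand-in for the real boot simulation; the point is the enumeration of initial configurations and the check that boot-up leaves every device in its expected state.

    ```python
    # Enumerate every combination of initial device states, "boot" against each,
    # and verify the post-boot state. Devices, states, and the boot stand-in are
    # invented; the real toolkit drives the actual FSW and simulated hardware.
    import itertools

    DEVICES = ["imu", "radio", "thruster_valve"]
    STATES = ["off", "on", "faulted"]


    def boot_and_initialize(initial_states):
        # Stand-in: the FSW is expected to drive every device to a known
        # safe state regardless of how it was left before boot.
        return {device: "safe" for device in initial_states}


    def test_all_boot_scenarios():
        for combo in itertools.product(STATES, repeat=len(DEVICES)):
            initial = dict(zip(DEVICES, combo))
            final = boot_and_initialize(initial)
            for device, state in final.items():
                assert state == "safe", f"{device} left '{state}' from {initial}"


    test_all_boot_scenarios()
    ```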

  2. Advantages of Brahms for Specifying and Implementing a Multiagent Human-Robotic Exploration System

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2003-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, all-terrain vehicles, robotic assistant, crew in a local habitat, and mission support team. Software processes ('agents'), implemented in the Brahms language, run on multiple mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a runtime system. Thus, Brahms provides a language, engine, and system builder's toolkit for specifying and implementing multiagent systems.

  3. Support of Herschel Key Programme Teams at the NASA Herschel Science Center

    NASA Astrophysics Data System (ADS)

    Shupe, David L.; Appleton, P. N.; Ardila, D.; Bhattacharya, B.; Mei, Y.; Morris, P.; Rector, J.; NHSC Team

    2010-01-01

    The first science data from the Herschel Space Observatory were distributed to Key Programme teams in September 2009. This poster describes a number of resources that have been developed by the NASA Herschel Science Center (NHSC) to support the first users of the observatory. The NHSC webpages and Helpdesk serve as the starting point for information and queries from the US community. Details about the use of the Herschel Common Science Software can be looked up in the Helpdesk Knowledgebase. The capability of real-time remote support through desktop sharing has been implemented. The NHSC continues to host workshops on data analysis and observation planning. Key Programme teams have been provided Wiki sites upon request for their team's private use and for sharing information with other teams. A secure data storage area is in place for troubleshooting purposes and for use by visitors. The NHSC draws upon close working relationships with Instrument Control Centers and the Herschel Science Center in Madrid in order to have the necessary expertise on hand to assist Herschel observers, including both Key Programme teams and respondents to upcoming open time proposal calls.

  4. Intelligent Command and Control Systems for Satellite Ground Operations

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1999-01-01

    This grant, Intelligent Command and Control Systems for Satellite Ground Operations, funded by NASA Goddard Space Flight Center, has spanned almost a decade. During this time, it has supported a broad range of research addressing the changing needs of NASA operations. It is important to note that many of NASA's evolving needs, for example, the use of automation to drastically reduce operations costs (e.g., by 70%), mirror requirements found in both the government and private sectors. Initially, the research addressed the appropriate use of emerging and inexpensive computational technologies, such as X Windows, graphics, and color, together with COTS (commercial-off-the-shelf) hardware and software such as standard Unix workstations to re-engineer satellite operations centers. The first phase of research supported by this grant explored the development of principled design methodologies to make effective use of emerging and inexpensive technologies. The ultimate performance measures for new designs were whether or not they increased system effectiveness while decreasing costs. GT-MOCA (The Georgia Tech Mission Operations Cooperative Associate) and GT-VITA (Georgia Tech Visual and Inspectable Tutor and Assistant), whose latter stages were supported by this research, explored model-based design of collaborative operations teams and the design of intelligent tutoring systems, respectively. Implemented in proof-of-concept form for satellite operations, empirical evaluations of both, using satellite operators for the former and personnel involved in satellite control operations for the latter, demonstrated unequivocally the feasibility and effectiveness of the proposed modeling and design strategy underlying both research efforts. The proof-of-concept implementation of GT-MOCA showed that the methodology could specify software requirements that enabled a human-computer operations team to perform without any significant performance differences from the standard two-person satellite operations team. GT-VITA, using the same underlying methodology, the operator function model (OFM), and its computational implementation, OFMspert, successfully taught satellite control knowledge required by flight operations team members. The tutor structured knowledge in three ways: declarative knowledge (e.g., What is this? What does it do?), procedural knowledge, and operational skill. Operational skill is essential in real-time operations. It combines the two former knowledge types, assisting a student to use them effectively in a dynamic, multi-tasking, real-time operations environment. A high-fidelity simulator of the operator interface to the ground control system, including an almost full replication of both the human-computer interface and human interaction with the dynamic system, was used in the GT-MOCA and GT-VITA evaluations. The GT-VITA empirical evaluation, conducted with a range of 'novices' that included GSFC operations management, GSFC operations software developers, and new flight operations team members, demonstrated that GT-VITA effectively taught a wide range of knowledge in a succinct and engaging manner.

  5. Absorbing Software Testing into the Scrum Method

    NASA Astrophysics Data System (ADS)

    Tuomikoski, Janne; Tervonen, Ilkka

    In this paper we study how to absorb software testing into the Scrum method. We conducted the research as an action research study during the years 2007-2008, with three iterations. The results showed that testing can, and even should, be absorbed into the Scrum method. The testing team was merged into the Scrum teams. The teams can now deliver better working software in a shorter time, because testing keeps track of the progress of the development. Team spirit is also higher, because the Scrum team members are committed to the same goal. The biggest change from the test manager’s point of view was the organized Product Owner Team. The test manager no longer has a testing team, and in the future all testing tasks have to be assigned through the Product Backlog.

  6. A Genuine TEAM Player

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Qualtech Systems, Inc. developed a complete software system with capabilities of multisignal modeling, diagnostic analysis, run-time diagnostic operations, and intelligent interactive reasoners. Commercially available as the TEAMS (Testability Engineering and Maintenance System) tool set, the software can be used to reveal unanticipated system failures. The TEAMS software package is broken down into four companion tools: TEAMS-RT, TEAMATE, TEAMS-KB, and TEAMS-RDS. TEAMS-RT identifies good, bad, and suspect components in the system in real-time. It reports system health results from onboard tests, and detects and isolates failures within the system, allowing for rapid fault isolation. TEAMATE takes over from where TEAMS-RT left off by intelligently guiding the maintenance technician through the troubleshooting procedure, repair actions, and operational checkout. TEAMS-KB serves as a model management and collection tool. TEAMS-RDS (TEAMS-Remote Diagnostic Server) has the ability to continuously assess a system and isolate any failure in that system or its components, in real time. RDS incorporates TEAMS-RT, TEAMATE, and TEAMS-KB in a large-scale server architecture capable of providing advanced diagnostic and maintenance functions over a network, such as the Internet, with a web browser user interface.
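
    The good/bad/suspect classification that TEAMS-RT performs is, at its core, dependency-matrix fault isolation: each test exercises a known set of components, passed tests exonerate their components, and failed tests implicate the rest. A toy sketch, with an invented model rather than a real TEAMS multisignal model:

    ```python
    # Toy dependency-matrix diagnosis: not a real TEAMS multisignal model.
    TEST_COVERAGE = {
        "t1": {"power", "sensor_a"},
        "t2": {"power", "sensor_b"},
        "t3": {"sensor_b", "actuator"},
    }

    results = {"t1": "pass", "t2": "fail", "t3": "fail"}

    # Passed tests exonerate every component they exercise...
    good = set().union(*(TEST_COVERAGE[t] for t, r in results.items() if r == "pass"))
    # ...failed tests implicate theirs, minus anything already exonerated.
    implicated = set().union(*(TEST_COVERAGE[t] for t, r in results.items() if r == "fail"))
    suspect = implicated - good

    print("good:", good)        # {'power', 'sensor_a'}
    print("suspect:", suspect)  # {'sensor_b', 'actuator'}
    ```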

  7. Conversion from Tree to Graph Representation of Requirements

    NASA Technical Reports Server (NTRS)

    Mayank, Vimal; Everett, David Frank; Shmunis, Natalya; Austin, Mark

    2009-01-01

    A procedure and software to implement the procedure have been devised to enable conversion from a tree representation to a graph representation of the requirements governing the development and design of an engineering system. The need for this procedure and software and for other requirements-management tools arises as follows: In systems-engineering circles, it is well known that requirements-management capability improves the likelihood of success in the team-based development of complex systems involving multiple technological disciplines. It is especially desirable to be able to visualize (in order to identify and manage) requirements early in the system-design process, when errors can be corrected most easily and inexpensively.
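
    The essence of the conversion is that a tree forces a shared requirement to appear once per parent, whereas a graph stores it once with multiple parents. A minimal sketch with invented requirements:

    ```python
    # In a tree, a requirement shared by two subsystems is duplicated; merging
    # nodes with identical text yields a graph with multiple parents per node.
    tree = {
        "system": ["avionics", "propulsion"],
        "avionics": ["REQ: operate at -40C"],
        "propulsion": ["REQ: operate at -40C"],
    }


    def tree_to_graph(tree):
        graph = {}  # node -> set of parents
        for parent, children in tree.items():
            for child in children:
                graph.setdefault(child, set()).add(parent)
        return graph


    print(tree_to_graph(tree)["REQ: operate at -40C"])  # {'avionics', 'propulsion'}
    ```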

  8. Managing Complexity in the MSL/Curiosity Entry, Descent, and Landing Flight Software and Avionics Verification and Validation Campaign

    NASA Technical Reports Server (NTRS)

    Stehura, Aaron; Rozek, Matthew

    2013-01-01

    The complexity of the Mars Science Laboratory (MSL) mission presented the Entry, Descent, and Landing systems engineering team with many challenges in its Verification and Validation (V&V) campaign. This paper describes some of the logistical hurdles related to managing a complex set of requirements, test venues, test objectives, and analysis products in the implementation of a specific portion of the overall V&V program to test the interaction of flight software with the MSL avionics suite. Application-specific solutions to these problems are presented herein, which can be generalized to other space missions and to similar formidable systems engineering problems.

  9. Launch Vehicle Operations Simulator

    NASA Technical Reports Server (NTRS)

    Blackledge, J. W.

    1974-01-01

    The Saturn Launch Vehicle Operations Simulator (LVOS) was developed for NASA at Kennedy Space Center. LVOS simulates the Saturn launch vehicle and its ground support equipment. The simulator was intended primarily to be used as a launch crew trainer but it is also being used for test procedure and software validation. A NASA/contractor team of engineers and programmers implemented the simulator after the Apollo XI lunar landing during the low activity periods between launches.

  10. SLS Flight Software Testing: Using a Modified Agile Software Testing Approach

    NASA Technical Reports Server (NTRS)

    Bolton, Albanie T.

    2016-01-01

    NASA's Space Launch System (SLS) is an advanced launch vehicle for a new era of exploration beyond Earth orbit (BEO). The world's most powerful rocket, SLS, will launch crews of up to four astronauts in the agency's Orion spacecraft on missions to explore multiple deep-space destinations. Boeing is developing the SLS core stage, including the avionics that will control the vehicle during flight. The core stage will be built at NASA's Michoud Assembly Facility (MAF) in New Orleans, LA, using state-of-the-art manufacturing equipment. At the same time, the rocket's avionics computer software is being developed here at Marshall Space Flight Center in Huntsville, AL. At Marshall, the Flight and Ground Software division provides comprehensive engineering expertise for development of flight and ground software. Within that division, the Software Systems Engineering Branch's test and verification (T&V) team uses an agile test approach in testing and verification of software. The agile software test method opens the door for regular short sprint release cycles. The basic premise of agile software development and testing is that work proceeds iteratively and incrementally. Agile testing follows an iterative development methodology in which requirements and solutions evolve through collaboration between cross-functional teams. Because testing and development are done incrementally, each release can add features and value. This value can be seen throughout the T&V team processes that are documented in various work instructions within the branch. The T&V team produces procedural test results at a higher rate, resolves software issues with designers at an earlier stage rather than in a later release, and gains increased knowledge of the system architecture by interfacing with designers. SLS Flight Software teams want to continue uncovering better ways of developing software in an efficient and project-beneficial manner. Through agile testing, there has been increased value through individuals and interactions over processes and tools, improved customer collaboration, and improved responsiveness to changes through controlled planning. The presentation will describe the agile testing methodology as practiced by the SLS FSW Test and Verification team at Marshall Space Flight Center.

  11. Collaborative engineering and design management for the Hobby-Eberly Telescope tracker upgrade

    NASA Astrophysics Data System (ADS)

    Mollison, Nicholas T.; Hayes, Richard J.; Good, John M.; Booth, John A.; Savage, Richard D.; Jackson, John R.; Rafal, Marc D.; Beno, Joseph H.

    2010-07-01

    The engineering and design of systems as complex as the Hobby-Eberly Telescope's* new tracker require that multiple tasks be executed in parallel and overlapping efforts. When the design of individual subsystems is distributed among multiple organizations, teams, and individuals, challenges can arise with respect to managing design productivity and coordinating successful collaborative exchanges. This paper focuses on design management issues and current practices for the tracker design portion of the Hobby-Eberly Telescope Wide Field Upgrade project. The scope of the tracker upgrade requires engineering contributions and input from numerous fields including optics, instrumentation, electromechanics, software controls engineering, and site-operations. Successful system-level integration of tracker subsystems and interfaces is critical to the telescope's ultimate performance in astronomical observation. Software and process controls for design information and workflow management have been implemented to assist the collaborative transfer of tracker design data. The tracker system architecture and selection of subsystem interfaces has also proven to be a determining factor in design task formulation and team communication needs. Interface controls and requirements change controls will be discussed, and critical team interactions are recounted (a group-participation Failure Modes and Effects Analysis [FMEA] is one of special interest). This paper will be of interest to engineers, designers, and managers engaging in multi-disciplinary and parallel engineering projects that require coordination among multiple individuals, teams, and organizations.

  12. Modeling and Analysis of Space Based Transceivers

    NASA Technical Reports Server (NTRS)

    Moore, Michael S.; Price, Jeremy C.; Reinhart, Richard; Liebetreu, John; Kacpura, Tom J.

    2005-01-01

    This paper presents the tool chain, methodology, and results of an on-going study being performed jointly by Space Communication Experts at NASA Glenn Research Center (GRC), General Dynamics C4 Systems (GD), and Southwest Research Institute (SwRI). The team is evaluating the applicability and tradeoffs concerning the use of Software Defined Radio (SDR) technologies for Space missions. The Space Telecommunications Radio Systems (STRS) project is developing an approach toward building SDR-based transceivers for space communications applications based on an accompanying software architecture that can be used to implement transceivers for NASA space missions. The study is assessing the overall cost and benefit of employing SDR technologies in general, and of developing a software architecture standard for its space SDR transceivers. The study is considering the cost and benefit of existing architectures, such as the Joint Tactical Radio Systems (JTRS) Software Communications Architecture (SCA), as well as potential new space-specific architectures.

  13. Fostering soft skills in project-oriented learning within an agile atmosphere

    NASA Astrophysics Data System (ADS)

    Chassidim, Hadas; Almog, Dani; Mark, Shlomo

    2018-07-01

    The project-oriented and Agile approaches have motivated a new generation of software engineers. Within the academic curriculum, the issue has been raised of whether students are being sufficiently prepared for the future. The objective of this work is to present the project-oriented environment as an influential factor that the software engineering profession requires, using the second-year course 'Software Development and Management in Agile Approach' as a case study. This course combines academic topics, self-learning, the implementation of soft skills, a call for creativity, and the recognition of up-to-date technologies and dynamic circumstances. The results of a survey that evaluated the perceived value of the course showed that the highest contribution of our environment was to the effectiveness of the teamwork and the overall development process of the project.

  14. Staff perceptions of a Productive Community Services implementation: A qualitative interview study.

    PubMed

    Bradley, Dominique Kim Frances; Griffin, Murray

    2015-06-01

    The Productive Series is a collection of change programmes designed by the English National Health Service (NHS) Institute for Innovation and Improvement to help frontline healthcare staff improve quality and reduce wasted time, so that this time can be reinvested into time spent with patients. The programmes have been implemented in at least 14 countries around the world. This study examines an implementation of the Productive Community Services programme that took place in a Community healthcare organisation in England from July 2010 to March 2012. To explore staff members' perceptions of a Productive Community Services implementation. Cross-sectional interview. Community Healthcare Organisation in East Anglia, England. 45 participants were recruited using purposive, snowballing and opportunistic sampling methods to represent five main types of staff group in the organisation: clinical team members, administrative team members, service managers/team leaders, senior managers and software support staff. Team members were recruited on the basis that they had submitted data for at least one Productive Community Services module. Semi-structured individual and group interviews were carried out after the programme concluded and analysed using thematic analysis. This report focuses on six of the themes identified. The analysis found that communication was not always effective, and there was a lack of awareness, knowledge and understanding of the programme. Many staff did not find the Productive Community Services work relevant, and although certain improvements were sustained, suboptimal practices crept back. Although negative outcomes were reported, such as the programme taking time away from patients initially, many benefits were described including improved stock control and work environments, and better use of the Electronic Patient Record system. One of the themes identified highlighted the positive perceptions of the programme; however, a focus on five other themes indicates that important aspects of the implementation could have been improved. The innovation and implementation literature already addresses the issues identified, which suggests a gap between theory and practice for implementation teams. A lack of perceived relevance also suggests that similar programmes need to be made more easily adaptable for the varied specialisms found in Community Services. Further research on Productive Community Services implementations and knowledge transfer is required, and publication of studies focusing on the less positive aspects of implementations may accelerate this process. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Rotational fluid flow experiment

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This project which began in 1986 as part of the Worcester Polytechnic Institute (WPI) Advanced Space Design Program focuses on the design and implementation of an electromechanical system for studying vortex behavior in a microgravity environment. Most of the existing equipment was revised and redesigned by this project team, as necessary. Emphasis was placed on documentation and integration of the electrical and mechanical subsystems. Project results include reconfiguration and thorough testing of all hardware subsystems, implementation of an infrared gas entrainment detector, new signal processing circuitry for the ultrasonic fluid circulation device, improved prototype interface circuits, and software for overall control of experiment operation.

  16. Sociotechnical Challenges of Developing an Interoperable Personal Health Record

    PubMed Central

    Gaskin, G.L.; Longhurst, C.A.; Slayton, R.; Das, A.K.

    2011-01-01

    Objectives To analyze sociotechnical issues involved in the process of developing an interoperable commercial Personal Health Record (PHR) in a hospital setting, and to create guidelines for future PHR implementations. Methods This qualitative study utilized observational research and semi-structured interviews with 8 members of the hospital team, as gathered over a 28 week period of developing and adapting a vendor-based PHR at Lucile Packard Children’s Hospital at Stanford University. A grounded theory approach was utilized to code and analyze over 100 pages of typewritten field notes and interview transcripts. This grounded analysis allowed themes to surface during the data collection process which were subsequently explored in greater detail in the observations and interviews. Results Four major themes emerged: (1) Multidisciplinary teamwork helped team members identify crucial features of the PHR; (2) Divergent goals for the PHR existed even within the hospital team; (3) Differing organizational conceptions of the end-user between the hospital and software company differentially shaped expectations for the final product; (4) Difficulties with coordination and accountability between the hospital and software company caused major delays and expenses and strained the relationship between hospital and software vendor. Conclusions Though commercial interoperable PHRs have great potential to improve healthcare, the process of designing and developing such systems is an inherently sociotechnical process with many complex issues and barriers. This paper offers recommendations based on the lessons learned to guide future development of such PHRs. PMID:22003373

  17. Implementation of a Campuswide Distributed Mass Storage Service: the Dream Versus Reality

    NASA Technical Reports Server (NTRS)

    Prahst, Stephen; Armstead, Betty Jo

    1996-01-01

    In 1990, a technical team at NASA Lewis Research Center, Cleveland, Ohio, began defining a Mass Storage Service to provide long-term archival storage, short-term storage for very large files, distributed Network File System access, and backup services for critical data that resides on workstations and personal computers. Because of software availability and budgets, the total service was phased in over several years. During the process of building the service from the commercial technologies available, our Mass Storage Team refined the original vision and learned from the problems and mistakes that occurred. We also enhanced some technologies to better meet the needs of users and system administrators. This report describes our team's journey from dream to reality, outlines some of the problem areas that still exist, and suggests some solutions.

  18. A Comparison of Authoring Software for Developing Mathematics Self-Learning Software Packages.

    ERIC Educational Resources Information Center

    Suen, Che-yin; Pok, Yang-ming

    Four years ago, the authors started to develop self-paced mathematics learning software called NPMaths by using an authoring package called Tencore. However, NPMaths had some weak points. A development team was hence formed to develop similar software called Mathematics On Line. This time the team used another development language called…

  19. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papka, M.; Messina, P.; Coffey, R.

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to implement those algorithms. The Data Analytics and Visualization Team lends expertise in tools and methods for high-performance, post-processing of large datasets, interactive data exploration, batch visualization, and production visualization. The Operations Team ensures that system hardware and software work reliably and optimally; system tools are matched to the unique system architectures and scale of ALCF resources; the entire system software stack works smoothly together; and I/O performance issues, bug fixes, and requests for system software are addressed. The User Services and Outreach Team offers frontline services and support to existing and potential ALCF users. The team also provides marketing and outreach to users, DOE, and the broader community.

  20. Extreme Programming: Maestro Style

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Fox, Jason; Rabe, Kenneth; Shu, I-Hsiang; Powell, Mark

    2009-01-01

    "Extreme Programming: Maestro Style" is the name of a computer programming methodology that has evolved as a custom version of a methodology, called extreme programming that has been practiced in the software industry since the late 1990s. The name of this version reflects its origin in the work of the Maestro team at NASA's Jet Propulsion Laboratory that develops software for Mars exploration missions. Extreme programming is oriented toward agile development of software resting on values of simplicity, communication, testing, and aggressiveness. Extreme programming involves use of methods of rapidly building and disseminating institutional knowledge among members of a computer-programming team to give all the members a shared view that matches the view of the customers for whom the software system is to be developed. Extreme programming includes frequent planning by programmers in collaboration with customers, continually examining and rewriting code in striving for the simplest workable software designs, a system metaphor (basically, an abstraction of the system that provides easy-to-remember software-naming conventions and insight into the architecture of the system), programmers working in pairs, adherence to a set of coding standards, collaboration of customers and programmers, frequent verbal communication, frequent releases of software in small increments of development, repeated testing of the developmental software by both programmers and customers, and continuous interaction between the team and the customers. The environment in which the Maestro team works requires the team to quickly adapt to changing needs of its customers. In addition, the team cannot afford to accept unnecessary development risk. Extreme programming enables the Maestro team to remain agile and provide high-quality software and service to its customers. However, several factors in the Maestro environment have made it necessary to modify some of the conventional extreme-programming practices. The single most influential of these factors is that continuous interaction between customers and programmers is not feasible.

  1. Practical experience with test-driven development during commissioning of the multi-star AO system ARGOS

    NASA Astrophysics Data System (ADS)

    Kulas, M.; Borelli, Jose Luis; Gässler, Wolfgang; Peter, Diethard; Rabien, Sebastian; Orban de Xivry, Gilles; Busoni, Lorenzo; Bonaglia, Marco; Mazzoni, Tommaso; Rahmer, Gustavo

    2014-07-01

    Commissioning time for an instrument at an observatory is precious, especially at night. Whenever astronomers come up with a software feature request or point out a software defect, the software engineers have the task of finding a solution and implementing it as fast as possible. In this project phase, the software engineers work under time pressure and stress to deliver functional instrument control software (ICS). The shortness of development time during commissioning is a constraint for software engineering teams and applies to the ARGOS project as well. The goal of the ARGOS (Advanced Rayleigh guided Ground layer adaptive Optics System) project is the upgrade of the Large Binocular Telescope (LBT) with an adaptive optics (AO) system consisting of six Rayleigh laser guide stars and wavefront sensors. For developing the ICS, we used Test-Driven Development (TDD), whose main rule demands that the programmer write test code before production code. Thereby, TDD can yield a software system that grows without defects and eases maintenance. Having applied TDD in a calm and relaxed environment such as the office and laboratory, the ARGOS team had profited from the benefits of TDD. Before the commissioning, we were worried that the time pressure in that tough project phase would force us to drop TDD because we would spend more time writing test code than it would be worth. Despite this initial concern, we were able to keep TDD most of the time in this project phase as well. This report describes the practical application and performance of TDD, including its benefits, limitations and problems, during the ARGOS commissioning. Furthermore, it covers our experience with pair programming and continuous integration at the telescope.
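
    TDD's main rule can be shown in miniature; the example below is illustrative Python rather than ARGOS's actual control software: the test is written first and fails, then just enough production code is added to make it pass.

    ```python
    # Red: the test is written first, before centroid() exists, and fails.
    # Green: centroid() is then written just to satisfy it.
    import unittest


    def centroid(pixels):
        """Intensity-weighted mean position of a 1-D pixel row."""
        return sum(i * v for i, v in enumerate(pixels)) / sum(pixels)


    class TestCentroid(unittest.TestCase):
        def test_symmetric_spot_centers(self):
            self.assertAlmostEqual(centroid([0, 1, 2, 1, 0]), 2.0)


    if __name__ == "__main__":
        unittest.main()
    ```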

  2. Terra Harvest Open Source Environment (THOSE): a universal unattended ground sensor controller

    NASA Astrophysics Data System (ADS)

    Gold, Joshua; Klawon, Kevin; Humeniuk, David; Landoll, Darren

    2011-06-01

    Under the Terra Harvest Program, the Defense Intelligence Agency (DIA) has the objective of developing a universal Controller for the Unattended Ground Sensor (UGS) community. The mission is to define, implement, and thoroughly document an open architecture that universally supports UGS missions, integrating disparate systems, peripherals, etc. The Controller's inherent interoperability with numerous systems enables the integration of both legacy and future Unattended Ground Sensor System (UGSS) components, while the design's open architecture supports rapid third-party development to ensure operational readiness. The successful accomplishment of these objectives by the program's Phase 3b contractors is demonstrated via integration of the companies' respective plug-'n-play contributions that include various peripherals, such as sensors, cameras, etc., and their associated software drivers. In order to independently validate the Terra Harvest architecture, L-3 Nova Engineering, along with its partner, the University of Dayton Research Institute (UDRI), is developing the Terra Harvest Open Source Environment (THOSE), a Java-based system running on an embedded Linux Operating System (OS). The use cases on which the software is developed support the full range of UGS operational scenarios such as remote sensor triggering, image capture, and data exfiltration. The Team is additionally developing an ARM microprocessor evaluation platform that is both energy-efficient and operationally flexible. The paper describes the overall THOSE architecture, as well as the implementation strategy for some of the key software components. Preliminary integration/test results and the Team's approach for transitioning the THOSE design and source code to the Government are also presented.
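
    The plug-'n-play driver idea can be sketched as a registry that third parties extend without touching core controller code. THOSE itself is Java on embedded Linux; the Python below, with invented names, only illustrates the pattern.

    ```python
    # A driver registry: third parties add peripherals without touching the core.
    DRIVER_REGISTRY = {}


    def register_driver(kind):
        """Class decorator used by plug-in authors to register a peripheral."""
        def wrap(cls):
            DRIVER_REGISTRY[kind] = cls
            return cls
        return wrap


    @register_driver("camera")
    class CameraDriver:
        def capture(self):
            return b"jpeg-bytes"  # placeholder for real image capture


    def trigger(kind):
        # Controller-side code stays generic over whatever drivers are installed.
        return DRIVER_REGISTRY[kind]().capture()


    print(trigger("camera"))
    ```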

  3. Supporting the Use of CERT (registered trademark) Secure Coding Standards in DoD Acquisitions

    DTIC Science & Technology

    2012-07-01

    Capability Maturity Model Integration (CMMI) [Davis 2009]. Team Software Process, TSP, and Capability Maturity Model Integration are service...STP, Software Test Plan; TEP, Test and Evaluation Plan; TSP, Team Software Process; V&V, verification and validation (CMU/SEI-2012-TN-016). Supporting the Use of CERT® Secure Coding Standards in DoD Acquisitions: Tim Morrow (Software Engineering Institute), Robert Seacord (Software

  4. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex system engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team for addressing fault management early in the development lifecycle for the SLS initiative. As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. Additionally, the team has developed processes for implementing and validating these algorithms for concept validation and risk reduction for the SLS program. The flexibility of the Vehicle Management End-to-end Testbed (VMET) enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS. The intent of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software development infrastructure and its related testing entities. In any software development process there is inherent risk in interpreting and implementing concepts into flight software through requirements and test cases, compounded by potential human errors throughout the development lifecycle. Risk reduction is addressed by the M&FM analysis group working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities.
In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detections and responses that can be tested in VMET to ensure that failures can be detected, and to confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation, free of inherent hindrances such as meeting FSW processor scheduling constraints imposed by the target platform (an ARINC 653 partitioned OS), resource limitations, and other factors related to integration with subsystems not directly involved with M&FM, such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as those used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of M&FM algorithm performance in the FSW development and test processes.
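
    The abstract notes that the M&FM algorithms are expressed as C++ state machines. The following is a minimal, hypothetical sketch of that architectural idea; all states, names, and thresholds are invented for illustration and are not the actual SLS flight software.

```cpp
// Hypothetical fault-monitoring state machine in the spirit of the M&FM
// algorithms described above. States, thresholds, and names are invented.
#include <iostream>

enum class VehicleState { Nominal, FaultDetected, SafingInitiated, AbortCommanded };

class FaultMonitor {
public:
    // Evaluate one sensor reading and transition states accordingly.
    void update(double chamberPressure) {
        switch (state_) {
        case VehicleState::Nominal:
            if (chamberPressure < kAbortLimit)      state_ = VehicleState::AbortCommanded;
            else if (chamberPressure < kFaultLimit) state_ = VehicleState::FaultDetected;
            break;
        case VehicleState::FaultDetected:
            // A detected fault precipitates a safing action; a worsening
            // condition escalates to an abort, mirroring the detection ->
            // response flow the abstract describes.
            state_ = (chamberPressure < kAbortLimit) ? VehicleState::AbortCommanded
                                                     : VehicleState::SafingInitiated;
            break;
        default:
            break; // safing/abort are terminal in this toy model
        }
    }
    VehicleState state() const { return state_; }

private:
    static constexpr double kFaultLimit = 80.0; // hypothetical threshold (bar)
    static constexpr double kAbortLimit = 50.0; // hypothetical threshold (bar)
    VehicleState state_ = VehicleState::Nominal;
};

int main() {
    FaultMonitor monitor;
    for (double p : {100.0, 75.0, 40.0}) { // nominal -> fault -> abort
        monitor.update(p);
        std::cout << static_cast<int>(monitor.state()) << '\n';
    }
}
```

    A testbed such as VMET would drive a state machine like this with configurable nominal and off-nominal input suites and check the resulting state transitions against expected responses.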

  5. Crosstalk: The Journal of Defense Software Engineering. Volume 22, Number 2, February 2009

    DTIC Science & Technology

    2009-02-01

    IT Investment With Service-Oriented Architecture (SOA), Geoffrey Raines examines how an SOA offers federal senior leadership teams an incremental and...values, and is used by 30 million people. [1] Given budget constraints, an incremental approach seems to be required. A Path Forward SOA, as implemented...point of view, SOA offers several positive benefits. Language Neutral Integration Web-enabling applications with a common browser interface became a

  6. Implementation of Phased Array Antenna Technology Providing a Wireless Local Area Network to Enhance Port Security and Maritime Interdiction Operations

    DTIC Science & Technology

    2009-09-01

    boarding team, COTS, WLAN, smart antenna, OpenVPN application, wireless base station, OFDM, latency, point-to-point wireless link...SSL/TLS...EXPERIMENT METHODOLOGY...network frame at Layer 2 has already been secured by encryption at a higher level. OpenVPN is open source software that provides a VPN

  7. A Quantitative and Qualitative Review of the Implementation of a Healthcare Information Network.

    DTIC Science & Technology

    1997-04-01

    Kekre, Sunder, and Mayuram S. Krishnan. "Drivers of Customer Satisfaction for Software Products: Implications for Design and Service Support." Management...1995. Huth, E.J. "Needed: An Economic Approach to Systems for Medical Information." Ann Internal Med. 103, no. 4 (1985): 617-19. Kekre, Sunder...Kekre et al. discuss similar issues as Goodhue and Austin, but they use slightly different terminology. Kekre's team determined that

  8. An Investigation of Agility Issues in Scrum Teams Using Agility Indicators

    NASA Astrophysics Data System (ADS)

    Pikkarainen, Minna; Wang, Xiaofeng

    Agile software development methods have emerged and become increasingly popular in recent years; yet the issues encountered by software development teams that strive to achieve agility using agile methods are yet to be explored systematically. Built upon a previous study that established a set of indicators of agility, this study investigates what issues are manifested in software development teams using agile methods. It focuses particularly on Scrum teams. In other words, the goal of the chapter is to evaluate Scrum teams using agility indicators and thereby further validate the previously presented agility indicators with additional cases. A multiple case study research method is employed. The findings of the study reveal that the teams using Scrum do not necessarily achieve agility in terms of team autonomy, sharing, stability, and embraced uncertainty. The possible reasons include a previous organizational plan-driven culture, resistance towards the Scrum roles, and changing resources.

  9. Automation of Cassini Support Imaging Uplink Command Development

    NASA Technical Reports Server (NTRS)

    Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert

    2010-01-01

    "Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.

  10. Investigating Team Cohesion in COCOMO II.2000

    ERIC Educational Resources Information Center

    Snowdeal-Carden, Betty A.

    2013-01-01

    Software engineering is team oriented and intensely complex, relying on human collaboration and creativity more than any other engineering discipline. Poor software estimation is a problem that costs the United States over a billion dollars per year. Effective measurement of team cohesion is foundationally important to gain accurate…
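
    For context on the model named in the title: in COCOMO II.2000, estimated effort PM (person-months) scales project Size (KSLOC) by an exponent built from five scale factors, of which team cohesion (TEAM) is one. The standard published form and calibration are sketched below; the study's own data are not reproduced here.

```latex
% COCOMO II.2000 effort equation; team cohesion (TEAM) is one of the
% five scale factors SF_j that set the exponent E.
\begin{align*}
  \mathrm{PM} &= A \cdot \mathrm{Size}^{E} \cdot \prod_{i=1}^{17} \mathrm{EM}_i, \\
  E &= B + 0.01 \sum_{j=1}^{5} \mathrm{SF}_j, \qquad A = 2.94,\; B = 0.91,
\end{align*}
```

    where the EM_i are effort multipliers. A poorer TEAM rating yields a larger scale-factor value, raising the exponent E, so the same size of project is estimated to require more effort; this is why measuring cohesion accurately matters for estimation.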

  11. Teaching Tip: Managing Software Engineering Student Teams Using Pellerin's 4-D System

    ERIC Educational Resources Information Center

    Doman, Marguerite; Besmer, Andrew; Olsen, Anne

    2015-01-01

    In this article, we discuss the use of Pellerin's Four Dimension Leadership System (4-D) as a way to manage teams in a classroom setting. Over a 5-year period, we used a modified version of the 4-D model to manage teams within a senior level Software Engineering capstone course. We found that this approach for team management in a classroom…

  12. Ground Data System Risk Mitigation Techniques for Faster, Better, Cheaper Missions

    NASA Technical Reports Server (NTRS)

    Catena, John J.; Saylor, Rick; Casasanta, Ralph; Weikel, Craig; Powers, Edward I. (Technical Monitor)

    2000-01-01

    With the advent of faster, cheaper, and better missions, NASA Projects acknowledged that a higher level of risk was inherent and accepted with this approach. It was incumbent however upon each component of the Project, whether spacecraft, payload, launch vehicle, or ground data system, to ensure that the mission would nevertheless be an unqualified success. The Small Explorer (SMEX) program's ground data system (GDS) team developed risk mitigation techniques to achieve these goals starting in 1989. These techniques have evolved through the SMEX series of missions and are practiced today under the Triana program. These techniques are: (1) Mission Team Organization--empowerment of a close-knit ground data system team comprising system engineering, software engineering, testing, and flight operations personnel; (2) Common Spacecraft Test and Operational Control System--utilization of the pre-launch spacecraft integration system as the post-launch ground data system on-orbit command and control system; (3) Utilization of operations personnel in pre-launch testing--making the flight operations team an integrated member of the spacecraft testing activities at the beginning of the spacecraft fabrication phase; (4) Consolidated Test Team--combined system, mission readiness and operations testing to optimize test opportunities with the ground system and spacecraft; and (5) Reuse of Spacecraft, Systems and People--reuse of people, software and on-orbit spacecraft throughout the SMEX mission series. The SMEX ground system development approach for faster, cheaper, better missions has been very successful. This paper will discuss these risk management techniques in the areas of ground data system design, implementation, test, and operational readiness.

  13. Enhancing Collaborative Learning through Group Intelligence Software

    NASA Astrophysics Data System (ADS)

    Tan, Yin Leng; Macaulay, Linda A.

    Employers increasingly demand not only academic excellence from graduates but also excellent interpersonal skills and the ability to work collaboratively in teams. This paper discusses the role of Group Intelligence software in helping to develop these higher-order skills in the context of an enquiry-based learning (EBL) project. The software supports teams in generating ideas, categorizing, prioritizing, voting, and multi-criteria decision making, and automatically generates a report of each team session. Students worked in a Group Intelligence lab designed to support both face-to-face and computer-mediated communication, and employers provided feedback at two key points in the year-long team project. Evaluation of the effectiveness of Group Intelligence software in collaborative learning was based on five key concepts: creativity, participation, productivity, engagement, and understanding.

  14. Using Modern Methodologies with Maintenance Software

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara A.; Francis, Laurie K.; Smith, Benjamin D.

    2014-01-01

    Jet Propulsion Laboratory uses multi-mission software produced by the Mission Planning and Sequencing (MPS) team to process, simulate, translate, and package the commands that are sent to a spacecraft. MPS works under the auspices of the Multi-Mission Ground Systems and Services (MGSS). This software consists of nineteen applications that are in maintenance. The MPS software is classified as either class B (mission critical) or class C (mission important). Scheduling tasks is difficult because mission needs must be addressed before any other tasks, and those needs often spring up unexpectedly. Keeping track of the tasks that everyone is working on is also difficult because each person is working on a different software component. Recently the group adopted the Scrum methodology for planning and scheduling tasks. Scrum is one of the newer methodologies typically used in agile development. In the Scrum development environment, teams pick the tasks to be completed within a sprint based on priority. The team specifies the sprint length, usually a month or less. Scrum is typically used for new development of one application. The Scrum methodology defines a scrum master, a facilitator who tries to make sure that everything moves smoothly; a product owner, who represents the user(s) of the software; and the team. MPS is not the traditional environment for the Scrum methodology: it has many software applications in maintenance, team members working on disparate applications, and many users, and it is interruptible based on mission needs, issues, and requirements. In order to use Scrum, the methodology needed adaptation to MPS; Scrum was chosen because it is adaptable. This paper is about the development of the process for using Scrum, a new development methodology, with a team that works on disparate, interruptible tasks on multiple software applications.

  15. Evaluation of tactical training in team handball by means of artificial neural networks.

    PubMed

    Hassan, Amr; Schrapf, Norbert; Ramadan, Wael; Tilp, Markus

    2017-04-01

    While tactical performance in competition has been analysed extensively, the assessment of training processes of tactical behaviour has rather been neglected in the literature. Therefore, the purpose of this study is to provide a methodology to assess the acquisition and implementation of offensive tactical behaviour in team handball. Game analysis software combined with artificial neural network (ANN) software enabled the identification of tactical target patterns from high-level junior players based on their positions during offensive actions. These patterns were then trained by an amateur junior handball team (n = 14, 17 (0.5) years). Following 6 weeks of tactical training, an exhibition game was performed in which the players were advised to use the target patterns as often as possible. Subsequently, the position data of the game were analysed with an ANN. The test revealed that 58% of the played patterns could be related to the trained target patterns. The similarity between executed patterns and target patterns was assessed by calculating the mean distance between key positions of the players in the game and in the target pattern, which was 0.49 (0.20) m. In summary, the presented method appears to be a valid instrument to assess tactical training.
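
    The similarity measure is simple to state precisely. Below is a minimal sketch of the mean key-position distance under an assumed data layout (one court position per key player); the layout and values are illustrative, not the study's data.

```cpp
// Sketch of the pattern-similarity measure described above: the mean
// Euclidean distance between corresponding key player positions of an
// executed pattern and a target pattern.
#include <cassert>
#include <cmath>
#include <iostream>
#include <vector>

struct Position { double x, y; }; // court coordinates in metres

double meanKeyPositionDistance(const std::vector<Position>& executed,
                               const std::vector<Position>& target) {
    assert(executed.size() == target.size() && !executed.empty());
    double sum = 0.0;
    for (std::size_t i = 0; i < executed.size(); ++i) {
        sum += std::hypot(executed[i].x - target[i].x,
                          executed[i].y - target[i].y);
    }
    return sum / static_cast<double>(executed.size());
}

int main() {
    std::vector<Position> executed = {{2.0, 5.5}, {6.1, 7.0}}; // invented values
    std::vector<Position> target   = {{2.4, 5.2}, {5.8, 7.3}};
    std::cout << meanKeyPositionDistance(executed, target) << " m\n";
}
```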

  16. Final Report of the NASA Office of Safety and Mission Assurance Agile Benchmarking Team

    NASA Technical Reports Server (NTRS)

    Wetherholt, Martha

    2016-01-01

    To ensure that the NASA Safety and Mission Assurance (SMA) community remains in a position to perform reliable Software Assurance (SA) on NASA's critical software (SW) systems as the software industry rapidly transitions from waterfall to Agile processes, Terry Wilcutt, Chief, Safety and Mission Assurance, Office of Safety and Mission Assurance (OSMA), established the Agile Benchmarking Team (ABT). The Team's tasks were: 1. Research background literature on current Agile processes; 2. Perform benchmark activities with other organizations that are involved in software Agile processes to determine best practices; 3. Collect information on Agile-developed systems to enable improvements to the current NASA standards and processes to enhance their ability to perform reliable software assurance on NASA Agile-developed systems; 4. Suggest additional guidance and recommendations for updates to those standards and processes, as needed. The ABT's findings and recommendations for software management, engineering, and software assurance are addressed herein.

  17. First experiences with the implementation of the European standard EN 62304 on medical device software for the quality assurance of a radiotherapy unit

    PubMed Central

    2014-01-01

    Background According to the latest amendment of the Medical Device Directive standalone software qualifies as a medical device when intended by the manufacturer to be used for medical purposes. In this context, the EN 62304 standard is applicable which defines the life-cycle requirements for the development and maintenance of medical device software. A pilot project was launched to acquire skills in implementing this standard in a hospital-based environment (in-house manufacture). Methods The EN 62304 standard outlines minimum requirements for each stage of the software life-cycle, defines the activities and tasks to be performed and scales documentation and testing according to its criticality. The required processes were established for the pre-existent decision-support software FlashDumpComparator (FDC) used during the quality assurance of treatment-relevant beam parameters. As the EN 62304 standard entails compliance with the EN ISO 14971 standard on the application of risk management to medical devices, a risk analysis was carried out to identify potential hazards and reduce the associated risks to acceptable levels. Results The EN 62304 standard is difficult to implement without proper tools, thus open-source software was selected and integrated into a dedicated development platform. The control measures yielded by the risk analysis were independently implemented and verified, and a script-based test automation was retrofitted to reduce the associated test effort. After all documents facilitating the traceability of the specified requirements to the corresponding tests and of the control measures to the proof of execution were generated, the FDC was released as an accessory to the HIT facility. Conclusions The implementation of the EN 62304 standard was time-consuming, and a learning curve had to be overcome during the first iterations of the associated processes, but many process descriptions and all software tools can be re-utilized in follow-up projects. It has been demonstrated that a standards-compliant development of small and medium-sized medical software can be carried out by a small team with limited resources in a clinical setting. This is of particular relevance as the upcoming revision of the Medical Device Directive is expected to harmonize and tighten the current legal requirements for all European in-house manufacturers. PMID:24655818

  18. First experiences with the implementation of the European standard EN 62304 on medical device software for the quality assurance of a radiotherapy unit.

    PubMed

    Höss, Angelika; Lampe, Christian; Panse, Ralf; Ackermann, Benjamin; Naumann, Jakob; Jäkel, Oliver

    2014-03-21

    According to the latest amendment of the Medical Device Directive standalone software qualifies as a medical device when intended by the manufacturer to be used for medical purposes. In this context, the EN 62304 standard is applicable which defines the life-cycle requirements for the development and maintenance of medical device software. A pilot project was launched to acquire skills in implementing this standard in a hospital-based environment (in-house manufacture). The EN 62304 standard outlines minimum requirements for each stage of the software life-cycle, defines the activities and tasks to be performed and scales documentation and testing according to its criticality. The required processes were established for the pre-existent decision-support software FlashDumpComparator (FDC) used during the quality assurance of treatment-relevant beam parameters. As the EN 62304 standard entails compliance with the EN ISO 14971 standard on the application of risk management to medical devices, a risk analysis was carried out to identify potential hazards and reduce the associated risks to acceptable levels. The EN 62304 standard is difficult to implement without proper tools, thus open-source software was selected and integrated into a dedicated development platform. The control measures yielded by the risk analysis were independently implemented and verified, and a script-based test automation was retrofitted to reduce the associated test effort. After all documents facilitating the traceability of the specified requirements to the corresponding tests and of the control measures to the proof of execution were generated, the FDC was released as an accessory to the HIT facility. The implementation of the EN 62304 standard was time-consuming, and a learning curve had to be overcome during the first iterations of the associated processes, but many process descriptions and all software tools can be re-utilized in follow-up projects. It has been demonstrated that a standards-compliant development of small and medium-sized medical software can be carried out by a small team with limited resources in a clinical setting. This is of particular relevance as the upcoming revision of the Medical Device Directive is expected to harmonize and tighten the current legal requirements for all European in-house manufacturers.

  19. Methodology for Software Reliability Prediction. Volume 2.

    DTIC Science & Technology

    1987-11-01

    The overall acquisition program shall include the resources, schedule, management, structure, and controls necessary to ensure that specified...Independent Verification/Validation - Programming Team Structure - Educational Level of Team Members - Experience Level of Team Members - Methods Used...Prediction or Estimation Parameter Supported: Software Characteristics. Objectives: Structured programming studies and Government procurement

  20. EPOS Data and Service Provision

    NASA Astrophysics Data System (ADS)

    Bailo, Daniele; Jeffery, Keith G.; Atakan, Kuvvet; Harrison, Matt

    2017-04-01

    EPOS is now in IP (implementation phase) after a successful PP (preparatory phase). EPOS consists of essentially two components: one ICS (Integrated Core Services), representing the integrating ICT (Information and Communication Technology), and many TCS (Thematic Core Services), representing the scientific domains. The architecture developed, demonstrated, and agreed within the project during the PP is now being developed utilising co-design with the TCS teams and agile, spiral methods within the ICS team. The 'heart' of EPOS is the metadata catalog. This provides for the ICS a digital representation of the TCS assets (services, data, software, equipment, expertise…), thus facilitating access, interoperation, and (re-)use. A major part of the work has been interactions with the TCS. The original intention to harvest information from the TCS required (and still requires) discussions to understand fully the TCS organisational structures linked with rights, security, and privacy; their (meta)data syntax (structure) and semantics (meaning); their workflows and methods of working; and the services offered. To complicate matters further, the TCS are each at varying stages of development, and the ICS design has to accommodate pre-existing, developing, and expected future standards for metadata, data, software, and processes. Through information documents, questionnaires, and interviews/meetings, the EPOS ICS team has collected DDSS (Data, Data Products, Software and Services) information from the TCS. The ICS team developed a simplified metadata model for presentation to the TCS, and the ICS team will perform the mapping and conversion from this model to the internal detailed technical metadata model using CERIF (an EU recommendation to Member States, maintained, developed, and promoted by euroCRIS, www.eurocris.org). At the time of writing, the final modifications of the EPOS metadata model are being made and the mappings to CERIF designed, prior to the main phase of (meta)data collection into the EPOS metadata catalog. In parallel, work proceeds on the user interface software, the APIs (Application Programming Interfaces) to the TCS services, the harvesting method and software, the AAAI (Authentication, Authorisation, Accounting Infrastructure), and the system manager. The next steps will involve interfaces to ICS-D (Distributed ICS, i.e. facilities and services for computing, data storage, detectors and instruments for data collection etc.) to which requests, software, and data will be deployed and from which data will be generated. Associated with this will be the development of the workflow system, which will assist the end-user in building a workflow to achieve the scientific objectives.

  1. OOD/OOP experience in the Science Operations Center part of the ground system for X ray Timing Explorer mission

    NASA Technical Reports Server (NTRS)

    Choudhary, Abdur Rahim

    1994-01-01

    The Science Operations Center (SOC) for the X-ray Timing Explorer (XTE) mission is an important component of the XTE ground system. Its mandate includes: (1) command and telemetry for the three XTE instruments, using CCSDS standards; (2) monitoring of the real-time science operations, reconfiguration of the experiment and the instruments, and real-time commanding to address the targets of opportunity (TOO) and alternate observations; and (3) analysis, processing, and archival of the XTE telemetry, and the timely delivery of the data products to the principal investigator (PI) teams and the guest observers (GO). The SOC has two major components: the science operations facility (SOF), which addresses the first two objectives stated above, and the guest observer facility (GOF), which addresses the third. The SOF adopted object-oriented design and implementation, while the GOF uses the traditional approach in order to take advantage of existing software developed in support of previous missions. This paper details the SOF development using object-oriented design (OOD) and its implementation using object-oriented programming (OOP) in C++ in a Unix environment on a client-server architecture of Sun workstations. It also illustrates how the object-oriented (OO) and traditional approaches coexist in the SOF and GOF, the lessons learned, and how OOD facilitated the distributed software development carried out collaboratively by four different teams. Details are presented for the SOF system, its major subsystems, its interfaces with the rest of the XTE ground data system, and its design and implementation approaches.

  2. Building information models for astronomy projects

    NASA Astrophysics Data System (ADS)

    Ariño, Javier; Murga, Gaizka; Campo, Ramón; Eletxigerra, Iñigo; Ampuero, Pedro

    2012-09-01

    A Building Information Model is a digital representation of the physical and functional characteristics of a building. BIMs represent the geometrical characteristics of the building, but also properties like bills of quantities, definitions of COTS components, the status of material in the different stages of the project, project economic data, etc. The BIM methodology, which is well established in the Architecture, Engineering and Construction (AEC) domain for conventional buildings, has been brought one step forward in its application to astronomical/scientific facilities. In these facilities, steel/concrete structures have high dynamic and seismic requirements, M&E installations are complex, and a large amount of special equipment and mechanisms is involved as a fundamental part of the facility. The detailed design definition is typically implemented by different design teams in specialized design software packages. In order to allow the coordinated work of different engineering teams, the overall model, and its associated engineering database, is progressively integrated using a coordination and roaming software which can be used before the construction phase starts for checking interferences, planning the construction sequence, studying maintenance operations, reporting to the project office, etc. This integrated design and construction approach allows efficient planning of the construction sequence (4D). This is a powerful tool to study and analyze in detail alternative construction sequences and ideally coordinate the work of different construction teams. In addition, the engineering, construction, and operational databases can be linked to the virtual model (6D), which gives end users an invaluable tool for lifecycle management, as all the facility information can be easily accessed, added, or replaced. This paper presents the BIM methodology as implemented by IDOM, with the E-ELT and ATST enclosures as application examples.

  3. Command and Control Software Development Memory Management

    NASA Technical Reports Server (NTRS)

    Joseph, Austin Pope

    2017-01-01

    This internship was initially meant to cover the implementation of unit test automation for a NASA ground control project. As is often the case with large development projects, the scope and breadth of the internship changed. Instead, the internship focused on finding and correcting memory leaks and errors as reported by a COTS software product meant to track such issues. Memory leaks come in many different flavors, and some of them are more benign than others. At the extreme end, a program might be dynamically allocating memory and not correctly deallocating it when it is no longer in use. This is called a direct memory leak, and in the worst case it can use all the available memory and crash the program. If the leaks are small, they may simply slow the program down, which, in a safety-critical system (a system for which a failure or design error can cause a risk to human life), is still unacceptable. The ground control system is managed in smaller sub-teams, referred to as CSCIs. The CSCI that this internship focused on is responsible for monitoring the health and status of the system. This team's software had several methods/modules that were leaking significant amounts of memory. Since most of the code in this system is safety-critical, correcting memory leaks is a necessity.
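
    A generic illustration of the direct-leak pattern described here, together with the usual RAII fix, follows; the types are invented and this is not the ground-system code.

```cpp
// Illustration of a direct memory leak and the idiomatic C++ fix.
#include <memory>

struct StatusRecord { int health = 0; }; // hypothetical type

void leaky(bool abortEarly) {
    StatusRecord* rec = new StatusRecord;
    if (abortEarly) return; // early return skips delete: a direct leak
    // ... use rec ...
    delete rec; // freed only on the non-aborting path
}

void fixed(bool abortEarly) {
    // RAII: the unique_ptr releases the record on every exit path,
    // including early returns and exceptions, so no leak is possible.
    auto rec = std::make_unique<StatusRecord>();
    if (abortEarly) return;
    // ... use rec ...
} // rec freed automatically here

int main() {
    leaky(true); // leaks one StatusRecord (what a leak tracker would flag)
    fixed(true); // leaks nothing
}
```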

  4. Achieving Agility and Stability in Large-Scale Software Development

    DTIC Science & Technology

    2013-01-16

    A temporary team is assigned to prepare layers and frameworks for future feature teams: Presentation Layer, Domain Layer, Data Access Layer...

  5. Implementation of the AES as a Hash Function for Confirming the Identity of Software on a Computer System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Randy R.; Bass, Robert B.; Kouzes, Richard T.

    2003-01-20

    This paper provides a brief overview of the implementation of the Advanced Encryption Standard (AES) as a hash function for confirming the identity of software resident on a computer system. The PNNL Software Authentication team chose to use a hash function to confirm software identity on a system for situations where: (1) there is limited time to perform the confirmation and (2) access to the system is restricted to keyboard or thumbwheel input and output can only be displayed on a monitor. PNNL reviewed three popular algorithms: the Secure Hash Algorithm-1 (SHA-1), the Message Digest-5 (MD-5), and the Advanced Encryption Standard (AES), and selected the AES to incorporate in the software confirmation tool we developed. This paper gives a brief overview of SHA-1, MD-5, and the AES and cites references for further detail. It then explains the overall processing steps of the AES to reduce a large amount of generic data (the plain text, such as is present in memory and other data storage media in a computer system) to a small amount of data (the hash digest), which is a mathematically unique representation or signature of the former that can be displayed on a computer's monitor. This paper starts with a simple definition and example to illustrate the use of a hash function. It concludes with a description of how the software confirmation tool uses the hash function to confirm the identity of software on a computer system.
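
    The processing the paper outlines (chaining fixed-size blocks of input through AES until only a digest remains) can be sketched generically. A Matyas-Meyer-Oseas-style construction is assumed below; the abstract does not specify PNNL's chaining mode, and the block cipher is stubbed with a toy function so the sketch runs.

```cpp
// Sketch of hashing with a block cipher: zero-padded 16-byte blocks are
// chained through an encryption primitive, and the final chaining value
// is the 128-bit digest. NOT PNNL's actual tool.
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

using Block = std::array<std::uint8_t, 16>; // AES block size

// Placeholder standing in for AES-128 encryption of `in` under `key` so
// that the sketch compiles and runs; it is NOT AES and a real tool must
// call a vetted AES implementation here.
Block toyEncryptBlock(const Block& key, const Block& in) {
    Block out{};
    for (std::size_t i = 0; i < 16; ++i)
        out[i] = static_cast<std::uint8_t>(in[i] ^ key[i] ^ (31 * i + 7));
    return out;
}

Block hashWithBlockCipher(const std::vector<std::uint8_t>& data) {
    Block state{}; // fixed all-zero initial chaining value
    for (std::size_t off = 0; off < data.size(); off += 16) {
        Block m{}; // zero-padded final block
        for (std::size_t i = 0; i < 16 && off + i < data.size(); ++i)
            m[i] = data[off + i];
        Block c = toyEncryptBlock(state, m); // chaining value is the key
        for (std::size_t i = 0; i < 16; ++i)
            state[i] = c[i] ^ m[i]; // feed-forward makes the step one-way
    }
    return state; // digest small enough to display on a monitor
}

int main() {
    std::vector<std::uint8_t> image(1000, 0xAB); // stand-in for memory contents
    Block digest = hashWithBlockCipher(image);
    for (auto b : digest) std::printf("%02x", b);
    std::printf("\n");
}
```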

  6. Workflow-Based Software Development Environment

    NASA Technical Reports Server (NTRS)

    Izygon, Michel E.

    2013-01-01

    The Software Developer's Assistant (SDA) helps software teams more efficiently and accurately conduct or execute software processes associated with NASA mission-critical software. SDA is a process enactment platform that guides software teams through project-specific standards, processes, and procedures. Software projects are decomposed into all of their required process steps or tasks, and each task is assigned to project personnel. SDA orchestrates the performance of work required to complete all process tasks in the correct sequence. The software then notifies team members when they may begin work on their assigned tasks and provides the tools, instructions, reference materials, and supportive artifacts that allow users to compliantly perform the work. A combination of technology components captures and enacts any software process used to support the software lifecycle. It creates an adaptive workflow environment that can be modified as needed. SDA achieves software process automation through a Business Process Management (BPM) approach to managing the software lifecycle for mission-critical projects. It contains five main parts: TieFlow (workflow engine), Business Rules (rules to alter process flow), Common Repository (storage for project artifacts, versions, history, schedules, etc.), SOA (interface to allow internal, GFE, or COTS tools integration), and the Web Portal Interface (collaborative web environment).
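
    The sequencing behavior described here, releasing a task to its owner only once all predecessor tasks are complete, is essentially a topological ordering of the task graph. A minimal sketch follows, with hypothetical task names and no claim to SDA's or TieFlow's internals.

```cpp
// Kahn's algorithm over a task-dependency graph: a task becomes ready
// (and its owner is notified) only when all prerequisites are done.
#include <iostream>
#include <map>
#include <queue>
#include <string>
#include <vector>

int main() {
    // task -> prerequisite tasks (hypothetical process steps)
    std::map<std::string, std::vector<std::string>> prereqs = {
        {"design", {}},
        {"implement", {"design"}},
        {"code review", {"implement"}},
        {"test", {"implement", "code review"}},
    };
    std::map<std::string, int> pending; // unmet-prerequisite counts
    std::map<std::string, std::vector<std::string>> dependents;
    for (const auto& [task, reqs] : prereqs) {
        pending[task] = static_cast<int>(reqs.size());
        for (const auto& r : reqs) dependents[r].push_back(task);
    }
    std::queue<std::string> ready;
    for (const auto& [task, n] : pending)
        if (n == 0) ready.push(task);
    while (!ready.empty()) {
        std::string task = ready.front();
        ready.pop();
        std::cout << "notify owner: '" << task << "' may begin\n";
        for (const auto& d : dependents[task]) // completing a task releases
            if (--pending[d] == 0) ready.push(d); // its dependents
    }
}
```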

  7. Relating Communications Mode Choice and Teamwork Quality: Conversational versus Textual Communication in IT System and Software Development Teams

    ERIC Educational Resources Information Center

    Smith, James Robert

    2012-01-01

    This cross-sectional study explored how IT system and software development team members communicated in the workplace and whether teams that used more verbal communication (and less text-based communication) experienced higher levels of collaboration as measured using the Teamwork Quality (TWQ) scale. Although computer-mediated communication tools…

  8. ALLIANCE: An architecture for fault tolerant, cooperative control of heterogeneous mobile robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, L.E.

    1995-02-01

    This research addresses the problem of achieving fault tolerant cooperation within small- to medium-sized teams of heterogeneous mobile robots. The author describes a novel behavior-based, fully distributed architecture, called ALLIANCE, that utilizes adaptive action selection to achieve fault tolerant cooperative control in robot missions involving loosely coupled, largely independent tasks. The robots in this architecture possess a variety of high-level functions that they can perform during a mission, and must at all times select an appropriate action based on the requirements of the mission, the activities of other robots, the current environmental conditions, and their own internal states. Since such cooperative teams often work in dynamic and unpredictable environments, the software architecture allows the team members to respond robustly and reliably to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. After presenting ALLIANCE, the author describes in detail experimental results of an implementation of this architecture on a team of physical mobile robots performing a cooperative box-pushing demonstration. These experiments illustrate the ability of ALLIANCE to achieve adaptive, fault-tolerant cooperative control amidst dynamic changes in the capabilities of the robot team.
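
    ALLIANCE's adaptive action selection is driven by motivational behaviors whose activation levels grow over time. A much-simplified sketch of that idea is given below; the rates and threshold are illustrative, and the published architecture includes additional terms (e.g. acquiescence and sensory feedback) that are omitted here.

```cpp
// Toy motivational behavior: motivation for a task grows (impatience),
// grows more slowly while another robot appears to be making progress,
// and activates the behavior set when it crosses a threshold.
#include <iostream>

struct Motivation {
    double level = 0.0;
    double impatience = 1.0;     // growth per tick when the task is unclaimed
    double slowImpatience = 0.2; // growth per tick while another robot works
    double threshold = 10.0;     // activation level (illustrative)

    // Returns true when this robot should take the task on itself.
    bool tick(bool otherRobotActive, bool taskComplete) {
        if (taskComplete) { level = 0.0; return false; }
        level += otherRobotActive ? slowImpatience : impatience;
        return level >= threshold;
    }
};

int main() {
    Motivation m;
    // Another robot works the task for 40 ticks but never finishes; our
    // impatience eventually triggers a fault-tolerant takeover.
    for (int t = 0; t < 60; ++t) {
        if (m.tick(/*otherRobotActive=*/t < 40, /*taskComplete=*/false)) {
            std::cout << "takeover at tick " << t << '\n';
            break;
        }
    }
}
```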

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mancuso, C.A.

    The INEL Database of BNCT Information and Treatment (TIDBIT) has been under development for several years. Late in 1993, a new software development team took over the project, did an assessment of the current implementation status, and determined that the user interface was unsatisfactory for the expected users and that the data structures were out of step with the current state of reality. The team evaluated several tools that would improve the user interface to make the system easier to use. Uniface turned out to be the product of choice. During 1994, TIDBIT got its name, underwent a complete change of appearance, had a major overhaul to the data structures that support the application, and system documentation was begun. A prototype of the system was demonstrated in September 1994.

  10. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  11. Achieving Agility and Stability in Large-Scale Software Development

    DTIC Science & Technology

    2013-01-16

    A temporary team is assigned to prepare layers and frameworks for future feature teams: Presentation Layer, Domain Layer, Data Access Layer, Framework...

  12. The (mis)use of subjective process measures in software engineering

    NASA Technical Reports Server (NTRS)

    Valett, Jon D.; Condon, Steven E.

    1993-01-01

    A variety of measures are used in software engineering research to develop an understanding of the software process and product. These measures fall into three broad categories: quantitative, characteristics, and subjective. Quantitative measures are those to which a numerical value can be assigned, for example, effort or lines of code (LOC). Characteristics describe the software process or product; they might include programming language or the type of application. While such factors do not provide a quantitative measurement of a process or product, they do help characterize them. Subjective measures (as defined in this study) are those that are based on the opinion or opinions of individuals; they are somewhat unique and difficult to quantify. Capturing subjective measure data typically involves development of some type of scale. For example, 'team experience' is one of the subjective measures that were collected and studied by the Software Engineering Laboratory (SEL). Certainly, team experience could have an impact on the software process or product; actually measuring a team's experience, however, is not a strictly mathematical exercise. Simply adding up each team member's years of experience appears inadequate. In fact, most researchers would agree that 'years' do not directly translate into 'experience.' Team experience must be defined subjectively, and then a scale must be developed, e.g., high experience versus low experience; high, medium, or low experience; or a different, more granular scale. Using this type of scale, a particular team's overall experience can be compared with that of other teams in the development environment. Defining, collecting, and scaling subjective measures is difficult. First, precise definitions of the measures must be established. Next, choices must be made about whose opinions will be solicited to constitute the data. Finally, care must be given to defining the right scale and level of granularity for measurement.
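
    The scaling step described above can be made concrete with a toy sketch: a rater's subjective judgement, not a sum of years, is mapped onto an ordinal scale. The cut-points and the three-level scale are invented for illustration and are not the SEL's actual instrument.

```cpp
// Toy mapping of a subjective rating onto an ordinal experience scale.
#include <iostream>

enum class Experience { Low, Medium, High };

// raterScore: a rater's 1-5 judgement of the team's experience.
Experience scaleTeamExperience(int raterScore) {
    if (raterScore <= 2) return Experience::Low;
    if (raterScore == 3) return Experience::Medium;
    return Experience::High;
}

int main() {
    std::cout << static_cast<int>(scaleTeamExperience(4)) << '\n'; // 2 = High
}
```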

  13. The Nuclear Energy Advanced Modeling and Simulation Safeguards and Separations Reprocessing Plant Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaskey, Alex; Billings, Jay Jay; de Almeida, Valmor F

    2011-08-01

    This report details the progress made in the development of the Reprocessing Plant Toolkit (RPTk) for the DOE Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. RPTk is an ongoing development effort intended to provide users with an extensible, integrated, and scalable software framework for the modeling and simulation of spent nuclear fuel reprocessing plants by enabling the insertion and coupling of user-developed physicochemical modules of variable fidelity. The NEAMS Safeguards and Separations IPSC (SafeSeps) and the Enabling Computational Technologies (ECT) supporting program element have partnered to release an initial version of the RPTk with a focus on software usability and utility. RPTk implements a data flow architecture that is the source of the system's extensibility and scalability. Data flows through physicochemical modules sequentially, with each module importing data, evolving it, and exporting the updated data to the next downstream module. This is accomplished through various architectural abstractions designed to give RPTk true plug-and-play capabilities. A simple application of this architecture, as well as RPTk data flow and evolution, is demonstrated in Section 6 with an application consisting of two coupled physicochemical modules. The remaining sections describe this ongoing work in full, from system vision and design inception to full implementation. Section 3 describes the relevant software development processes used by the RPTk development team. These processes allow the team to manage system complexity and ensure stakeholder satisfaction. This section also details the work done on the RPTk "black box" and "white box" models, with a special focus on the separation of concerns between the RPTk user interface and application runtime. Sections 4 and 5 discuss that application runtime component in more detail, and describe the dependencies, behavior, and rigorous testing of its constituent components.
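
    The sequential data-flow architecture described here can be sketched abstractly: modules share an interface and are invoked in order, each evolving the stream data before it moves downstream. The module names and record layout below are hypothetical, not the actual RPTk interfaces.

```cpp
// Plug-and-play pipeline of physicochemical modules: import -> evolve ->
// export to the next downstream module.
#include <iostream>
#include <memory>
#include <vector>

struct StreamData { double uraniumKgPerHr = 0.0; }; // hypothetical record

class Module {
public:
    virtual ~Module() = default;
    virtual void evolve(StreamData& d) const = 0;
};

class Dissolver : public Module {
public:
    void evolve(StreamData& d) const override { d.uraniumKgPerHr += 10.0; }
};

class SolventExtraction : public Module {
public:
    void evolve(StreamData& d) const override { d.uraniumKgPerHr *= 0.95; }
};

int main() {
    // User-developed modules of varying fidelity can be inserted or
    // reordered here without changing the framework code.
    std::vector<std::unique_ptr<Module>> plant;
    plant.push_back(std::make_unique<Dissolver>());
    plant.push_back(std::make_unique<SolventExtraction>());
    StreamData d;
    for (const auto& m : plant) m->evolve(d); // sequential data flow
    std::cout << d.uraniumKgPerHr << " kg/hr\n";
}
```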

  14. Process-based quality management for clinical implementation of adaptive radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noel, Camille E.; Santanam, Lakshmi; Parikh, Parag J.

    Purpose: Intensity-modulated adaptive radiotherapy (ART) has been the focus of considerable research and developmental work due to its potential therapeutic benefits. However, in light of its unique quality assurance (QA) challenges, no one has described a robust framework for its clinical implementation. In fact, recent position papers by ASTRO and AAPM have firmly endorsed pretreatment patient-specific IMRT QA, which limits the feasibility of online ART. The authors aim to address these obstacles by applying failure mode and effects analysis (FMEA) to identify high-priority errors and appropriate risk-mitigation strategies for clinical implementation of intensity-modulated ART. Methods: An experienced team of two clinical medical physicists, one clinical engineer, and one radiation oncologist was assembled to perform a standard FMEA for intensity-modulated ART. A set of 216 potential radiotherapy failures composed by the forthcoming AAPM task group 100 (TG-100) was used as the basis. Of the 216 failures, 127 were identified as most relevant to an ART scheme. Using the associated TG-100 FMEA values as a baseline, the team considered how the likeliness of occurrence (O), outcome severity (S), and likeliness of failure being undetected (D) would change for ART. New risk priority numbers (RPN) were calculated. Failures characterized by RPN ≥ 200 were identified as potentially critical. Results: FMEA revealed that ART RPN increased for 38% (n = 48/127) of potential failures, with 75% (n = 36/48) attributed to failures in the segmentation and treatment planning processes. Forty-three of 127 failures were identified as potentially critical. Risk-mitigation strategies include implementing a suite of quality control and decision support software, specialty QA software/hardware tools, and an increase in specially trained personnel. Conclusions: Results of the FMEA-based risk assessment demonstrate that intensity-modulated ART introduces different (but not necessarily more) risks than standard IMRT and may be safely implemented with the proper mitigations.
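
    The RPN arithmetic used in the analysis is simply the product of the three scores, with 200 as the criticality threshold. A minimal sketch with invented scores (not values from the study):

```cpp
// Risk priority number: RPN = O * S * D, flagged when >= 200.
#include <iostream>

struct FailureMode {
    const char* description;
    int o, s, d; // occurrence, severity, detectability (typically 1-10)
};

int main() {
    FailureMode fm{"hypothetical segmentation error", 5, 8, 6};
    int rpn = fm.o * fm.s * fm.d; // 240
    std::cout << fm.description << ": RPN = " << rpn
              << (rpn >= 200 ? " (potentially critical)" : "") << '\n';
}
```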

  15. Process-based quality management for clinical implementation of adaptive radiotherapy

    PubMed Central

    Noel, Camille E.; Santanam, Lakshmi; Parikh, Parag J.; Mutic, Sasa

    2014-01-01

    Purpose: Intensity-modulated adaptive radiotherapy (ART) has been the focus of considerable research and developmental work due to its potential therapeutic benefits. However, in light of its unique quality assurance (QA) challenges, no one has described a robust framework for its clinical implementation. In fact, recent position papers by ASTRO and AAPM have firmly endorsed pretreatment patient-specific IMRT QA, which limits the feasibility of online ART. The authors aim to address these obstacles by applying failure mode and effects analysis (FMEA) to identify high-priority errors and appropriate risk-mitigation strategies for clinical implementation of intensity-modulated ART. Methods: An experienced team of two clinical medical physicists, one clinical engineer, and one radiation oncologist was assembled to perform a standard FMEA for intensity-modulated ART. A set of 216 potential radiotherapy failures composed by the forthcoming AAPM task group 100 (TG-100) was used as the basis. Of the 216 failures, 127 were identified as most relevant to an ART scheme. Using the associated TG-100 FMEA values as a baseline, the team considered how the likeliness of occurrence (O), outcome severity (S), and likeliness of failure being undetected (D) would change for ART. New risk priority numbers (RPN) were calculated. Failures characterized by RPN ≥ 200 were identified as potentially critical. Results: FMEA revealed that ART RPN increased for 38% (n = 48/127) of potential failures, with 75% (n = 36/48) attributed to failures in the segmentation and treatment planning processes. Forty-three of 127 failures were identified as potentially critical. Risk-mitigation strategies include implementing a suite of quality control and decision support software, specialty QA software/hardware tools, and an increase in specially trained personnel. Conclusions: Results of the FMEA-based risk assessment demonstrate that intensity-modulated ART introduces different (but not necessarily more) risks than standard IMRT and may be safely implemented with the proper mitigations. PMID:25086527

  16. Process-based quality management for clinical implementation of adaptive radiotherapy.

    PubMed

    Noel, Camille E; Santanam, Lakshmi; Parikh, Parag J; Mutic, Sasa

    2014-08-01

    Intensity-modulated adaptive radiotherapy (ART) has been the focus of considerable research and developmental work due to its potential therapeutic benefits. However, in light of its unique quality assurance (QA) challenges, no one has described a robust framework for its clinical implementation. In fact, recent position papers by ASTRO and AAPM have firmly endorsed pretreatment patient-specific IMRT QA, which limits the feasibility of online ART. The authors aim to address these obstacles by applying failure mode and effects analysis (FMEA) to identify high-priority errors and appropriate risk-mitigation strategies for clinical implementation of intensity-modulated ART. An experienced team of two clinical medical physicists, one clinical engineer, and one radiation oncologist was assembled to perform a standard FMEA for intensity-modulated ART. A set of 216 potential radiotherapy failures composed by the forthcoming AAPM task group 100 (TG-100) was used as the basis. Of the 216 failures, 127 were identified as most relevant to an ART scheme. Using the associated TG-100 FMEA values as a baseline, the team considered how the likeliness of occurrence (O), outcome severity (S), and likeliness of failure being undetected (D) would change for ART. New risk priority numbers (RPN) were calculated. Failures characterized by RPN ≥ 200 were identified as potentially critical. FMEA revealed that ART RPN increased for 38% (n = 48/127) of potential failures, with 75% (n = 36/48) attributed to failures in the segmentation and treatment planning processes. Forty-three of 127 failures were identified as potentially critical. Risk-mitigation strategies include implementing a suite of quality control and decision support software, specialty QA software/hardware tools, and an increase in specially trained personnel. Results of the FMEA-based risk assessment demonstrate that intensity-modulated ART introduces different (but not necessarily more) risks than standard IMRT and may be safely implemented with the proper mitigations.

  17. Requirements Engineering in Building Climate Science Software

    NASA Astrophysics Data System (ADS)

    Batcheller, Archer L.

    Software has an important role in supporting scientific work. This dissertation studies teams that build scientific software, focusing on the way that they determine what the software should do. These requirements engineering processes are investigated through three case studies of climate science software projects. The Earth System Modeling Framework assists modeling applications, the Earth System Grid distributes data via a web portal, and the NCAR (National Center for Atmospheric Research) Command Language is used to convert, analyze and visualize data. Document analysis, observation, and interviews were used to investigate the requirements-related work. The first research question is about how and why stakeholders engage in a project, and what they do for the project. Two key findings arise. First, user counts are a vital measure of project success, which makes adoption important and makes counting tricky and political. Second, despite the importance of quantities of users, a few particular "power users" develop a relationship with the software developers and play a special role in providing feedback to the software team and integrating the system into user practice. The second research question focuses on how project objectives are articulated and how they are put into practice. The team seeks to both build a software system according to product requirements but also to conduct their work according to process requirements such as user support. Support provides essential communication between users and developers that assists with refining and identifying requirements for the software. It also helps users to learn and apply the software to their real needs. User support is a vital activity for scientific software teams aspiring to create infrastructure. The third research question is about how change in scientific practice and knowledge leads to changes in the software, and vice versa. The "thickness" of a layer of software infrastructure impacts whether the software team or users have control and responsibility for making changes in response to new scientific ideas. Thick infrastructure provides more functionality for users, but gives them less control of it. The stability of infrastructure trades off against the responsiveness that the infrastructure can have to user needs.

  18. Extreme Ultraviolet Imaging Telescope (EIT)

    NASA Technical Reports Server (NTRS)

    Lemen, J. R.; Freeland, S. L.

    1997-01-01

    Efforts concentrated on development and implementation of the SolarSoft (SSW) data analysis system. From an EIT analysis perspective, this system was designed to facilitate efficient reuse and conversion of software developed for Yohkoh/SXT and to take advantage of a large existing body of software developed by the SDAC, Yohkoh, and SOHO instrument teams. Another strong motivation for this system was to provide an EIT analysis environment which permits coordinated analysis of EIT data in conjunction with data from important supporting instruments, including Yohkoh/SXT and the other SOHO coronal instruments: CDS, SUMER, and LASCO. In addition, the SSW system will support coordinated EIT/TRACE analysis (by design) when TRACE data is available; TRACE launch is currently planned for March 1998. Working with Jeff Newmark, the Chianti software package (K.P. Dere et al.) and UV/EUV database were fully integrated into the SSW system to facilitate EIT temperature and emission analysis.

  19. Improving mapping for Ebola response through mobilising a local community with self-owned smartphones: Tonkolili District, Sierra Leone, January 2015.

    PubMed

    Nic Lochlainn, Laura M; Gayton, Ivan; Theocharopoulos, Georgios; Edwards, Robin; Danis, Kostas; Kremer, Ronald; Kleijer, Karline; Tejan, Sumaila M; Sankoh, Mohamed; Jimissa, Augustin; Greig, Jane; Caleo, Grazia

    2018-01-01

    During the 2014-16 Ebola virus disease (EVD) outbreak, the Magburaka Ebola Management Centre (EMC) operated by Médecins Sans Frontières (MSF) in Tonkolili District, Sierra Leone, identified that available district maps lacked up-to-date village information to facilitate timely implementation of EVD control strategies. In January 2015, we undertook a survey in chiefdoms within the MSF EMC catchment area to collect mapping and village data. We explore the feasibility and cost of mobilising a local community for this survey, describe validation against existing mapping sources and use of the data to prioritise areas for interventions, and lessons learned. We recruited local people with self-owned Android smartphones installed with open-source survey software (OpenDataKit (ODK)) and open-source navigation software (OpenStreetMap Automated Navigation Directions (OsmAnd)). Surveyors were paired with local motorbike drivers to travel to eligible villages. The collected mapping data were validated by checking for duplication and comparing the village names against a pre-existing village name and location list using a geographic distance and text string-matching algorithm. The survey teams gained sufficient familiarity with the ODK and OsmAnd software within 1-2 hours. Nine chiefdoms in Tonkolili District and three in Bombali District were surveyed within two weeks. Following de-duplication, the surveyors collected data from 891 villages with an estimated 127,021 households. The overall survey cost was €3,395; €3.80 per village surveyed. The MSF GIS team (MSF-OCG) created improved maps for the MSF Magburaka EMC team which were used to support surveillance, investigation of suspect EVD cases, hygiene-kit distribution and EVD survivor support. We shared the mapping data with OpenStreetMap, the local Ministry of Health and Sanitation and Sierra Leone District and National Ebola Response Centres. Involving the local community and using accessible technology allowed rapid implementation, at moderate cost, of a survey to collect geographic and essential village information, and creation of updated maps. These methods could be used for future emergencies to facilitate response.
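
    The validation step combines a geographic distance with a name-similarity test. Below is a minimal sketch assuming a haversine distance and a Levenshtein edit distance, with invented cut-offs and place names; the paper does not specify its exact algorithm or thresholds.

```cpp
// Match a surveyed village against an existing record by combining
// geographic proximity with string similarity of the names.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <string>
#include <vector>

constexpr double kPi = 3.141592653589793;

double haversineKm(double lat1, double lon1, double lat2, double lon2) {
    constexpr double R = 6371.0; // mean Earth radius (km)
    const double rad = kPi / 180.0;
    double dLat = (lat2 - lat1) * rad, dLon = (lon2 - lon1) * rad;
    double a = std::sin(dLat / 2) * std::sin(dLat / 2) +
               std::cos(lat1 * rad) * std::cos(lat2 * rad) *
               std::sin(dLon / 2) * std::sin(dLon / 2);
    return 2.0 * R * std::asin(std::sqrt(a));
}

std::size_t levenshtein(const std::string& a, const std::string& b) {
    std::vector<std::size_t> row(b.size() + 1);
    for (std::size_t j = 0; j <= b.size(); ++j) row[j] = j;
    for (std::size_t i = 1; i <= a.size(); ++i) {
        std::size_t prev = row[0]++;
        for (std::size_t j = 1; j <= b.size(); ++j) {
            std::size_t cur = row[j];
            row[j] = std::min({row[j] + 1, row[j - 1] + 1,
                               prev + (a[i - 1] != b[j - 1] ? 1u : 0u)});
            prev = cur;
        }
    }
    return row[b.size()];
}

int main() {
    // Same village recorded twice: a small spelling edit and a ~0.4 km
    // GPS offset (coordinates and cut-offs invented for illustration).
    bool similarName = levenshtein("Mabontor", "Mabonto") <= 2;
    bool nearby = haversineKm(8.851, -11.742, 8.853, -11.745) < 1.0;
    std::cout << (similarName && nearby ? "likely duplicate\n" : "distinct\n");
}
```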

  20. Improving mapping for Ebola response through mobilising a local community with self-owned smartphones: Tonkolili District, Sierra Leone, January 2015

    PubMed Central

    Gayton, Ivan; Theocharopoulos, Georgios; Edwards, Robin; Danis, Kostas; Kremer, Ronald; Kleijer, Karline; Tejan, Sumaila M.; Sankoh, Mohamed; Jimissa, Augustin; Greig, Jane; Caleo, Grazia

    2018-01-01

    Background During the 2014–16 Ebola virus disease (EVD) outbreak, the Magburaka Ebola Management Centre (EMC) operated by Médecins Sans Frontières (MSF) in Tonkolili District, Sierra Leone, identified that available district maps lacked up-to-date village information to facilitate timely implementation of EVD control strategies. In January 2015, we undertook a survey in chiefdoms within the MSF EMC catchment area to collect mapping and village data. We explore the feasibility and cost of mobilising a local community for this survey, describe validation against existing mapping sources and use of the data to prioritise areas for interventions, and lessons learned. Methods We recruited local people with self-owned Android smartphones installed with open-source survey software (OpenDataKit (ODK)) and open-source navigation software (OpenStreetMap Automated Navigation Directions (OsmAnd)). Surveyors were paired with local motorbike drivers to travel to eligible villages. The collected mapping data were validated by checking for duplication and comparing the village names against a pre-existing village name and location list using a geographic distance and text string-matching algorithm. Results The survey teams gained sufficient familiarity with the ODK and OsmAnd software within 1–2 hours. Nine chiefdoms in Tonkolili District and three in Bombali District were surveyed within two weeks. Following de-duplication, the surveyors collected data from 891 villages with an estimated 127,021 households. The overall survey cost was €3,395; €3.80 per village surveyed. The MSF GIS team (MSF-OCG) created improved maps for the MSF Magburaka EMC team which were used to support surveillance, investigation of suspect EVD cases, hygiene-kit distribution and EVD survivor support. We shared the mapping data with OpenStreetMap, the local Ministry of Health and Sanitation and Sierra Leone District and National Ebola Response Centres. Conclusions Involving the local community and using accessible technology allowed rapid implementation, at moderate cost, of a survey to collect geographic and essential village information, and creation of updated maps. These methods could be used for future emergencies to facilitate response. PMID:29298314

  1. Payload Operations Support Team Tools

    NASA Technical Reports Server (NTRS)

    Askew, Bill; Barry, Matthew; Burrows, Gary; Casey, Mike; Charles, Joe; Downing, Nicholas; Jain, Monika; Leopold, Rebecca; Luty, Roger; McDill, David

    2007-01-01

    Payload Operations Support Team Tools is a software system that assists in (1) development and testing of software for payloads to be flown aboard the space shuttles and (2) training of payload customers, flight controllers, and flight crews in payload operations.

  2. DiaFit: The Development of a Smart App for Patients with Type 2 Diabetes and Obesity.

    PubMed

    Modave, François; Bian, Jiang; Rosenberg, Eric; Mendoza, Tonatiuh; Liang, Zhan; Bhosale, Ravi; Maeztu, Carlos; Rodriguez, Camila; Cardel, Michelle I

    2016-01-01

    Optimal management of chronic diseases, such as type 2 diabetes (T2D) and obesity, requires patient-provider communication and proactive self-management from the patient. Mobile apps could be an effective strategy for improving patient-provider communication and for providing self-management resources to patients themselves. The objective of this paper is to describe the development of a mobile tool for patients with T2D and obesity that utilizes an integrative approach to facilitate patient-centered app development, with patient and physician interfaces. Our implementation strategy focused on building a multidisciplinary team to create a user-friendly and evidence-based app, to be used by patients in a home setting or at the point of care. We present the iterative design, development, and testing of DiaFit, an app designed to improve the self-management of T2D and obesity, using an adapted Agile approach to software implementation. The production team consisted of experts in mobile health, nutrition sciences, and obesity; software engineers; and clinicians. Additionally, the team included citizen scientists and clinicians who acted as the de facto software clients for DiaFit and therefore interacted with the production team throughout the entire app creation, from design to testing. DiaFit (version 1.0) is an open-source, inclusive iOS app that incorporates nutrition data, physical activity data, and medication and glucose values, as well as patient-reported outcomes. DiaFit supports the uploading of data from sensor devices via Bluetooth for physical activity (iOS step counts, Fitbit, Apple Watch) and glucose monitoring (iHealth glucose meter). The app provides summary statistics and graphics for step counts, dietary information, and glucose values that can be used by patients and their providers to make informed health decisions. The DiaFit iOS app was developed in Swift (version 2.2) with a Web back-end deployed on the Health Insurance Portability and Accountability Act compliant-ready Amazon Web Services cloud computing platform. DiaFit is publicly available on GitHub to the diabetes community at large, under the GNU General Public License agreement. Given the proliferation of health-related apps available to health consumers, it is essential to ensure that apps are evidence-based and user-oriented, with specific health conditions in mind. To this end, we have used a software development approach focusing on community and clinical engagement to create DiaFit, an app that assists patients with T2D and obesity to better manage their health through active communication with their providers and proactive self-management of their diseases.

  3. DiaFit: The Development of a Smart App for Patients with Type 2 Diabetes and Obesity

    PubMed Central

    Modave, François; Bian, Jiang; Rosenberg, Eric; Mendoza, Tonatiuh; Liang, Zhan; Bhosale, Ravi; Maeztu, Carlos; Rodriguez, Camila; Cardel, Michelle I

    2018-01-01

    Background Optimal management of chronic diseases, such as type 2 diabetes (T2D) and obesity, requires patient-provider communication and proactive self-management from the patient. Mobile apps could be an effective strategy for improving patient-provider communication and for providing self-management resources to patients themselves. Objective The objective of this paper is to describe the development of a mobile tool for patients with T2D and obesity that utilizes an integrative approach to facilitate patient-centered app development, with patient and physician interfaces. Our implementation strategy focused on building a multidisciplinary team to create a user-friendly and evidence-based app, to be used by patients in a home setting or at the point of care. Methods We present the iterative design, development, and testing of DiaFit, an app designed to improve the self-management of T2D and obesity, using an adapted Agile approach to software implementation. The production team consisted of experts in mobile health, nutrition sciences, and obesity; software engineers; and clinicians. Additionally, the team included citizen scientists and clinicians who acted as the de facto software clients for DiaFit and therefore interacted with the production team throughout the entire app creation, from design to testing. Results DiaFit (version 1.0) is an open-source, inclusive iOS app that incorporates nutrition data, physical activity data, and medication and glucose values, as well as patient-reported outcomes. DiaFit supports the uploading of data from sensor devices via Bluetooth for physical activity (iOS step counts, Fitbit, Apple Watch) and glucose monitoring (iHealth glucose meter). The app provides summary statistics and graphics for step counts, dietary information, and glucose values that can be used by patients and their providers to make informed health decisions. The DiaFit iOS app was developed in Swift (version 2.2) with a Web back-end deployed on the Health Insurance Portability and Accountability Act compliant-ready Amazon Web Services cloud computing platform. DiaFit is publicly available on GitHub to the diabetes community at large, under the GNU General Public License agreement. Conclusions Given the proliferation of health-related apps available to health consumers, it is essential to ensure that apps are evidence-based and user-oriented, with specific health conditions in mind. To this end, we have used a software development approach focusing on community and clinical engagement to create DiaFit, an app that assists patients with T2D and obesity to better manage their health through active communication with their providers and proactive self-management of their diseases. PMID:29388609

  4. TEAMS Model Analyzer

    NASA Technical Reports Server (NTRS)

    Tijidjian, Raffi P.

    2010-01-01

    The TEAMS model analyzer is a supporting tool developed to work with models created with TEAMS (Testability, Engineering, and Maintenance System), which was developed by QSI. To reduce the time each TEAMS modeler spends manually preparing reports for model reviews, a new tool has been developed as an aid for models developed in TEAMS. The software allows for the viewing, reporting, and checking of TEAMS models that are checked into the TEAMS model database. The software allows the user to selectively view the model in a hierarchical tree outline that displays the components, failure modes, and ports. The reporting features allow the user to quickly gather statistics about the model and generate an input/output report pertaining to all of the components. Rules can be automatically validated against the model, with a report generated containing any resulting inconsistencies. In addition to reducing manual effort, this software also provides an automated process framework for the Verification and Validation (V&V) effort that will follow development of these models. The aid of such an automated tool would have a significant impact on the V&V process.

  5. Transportable Payload Operations Control Center reusable software: Building blocks for quality ground data systems

    NASA Technical Reports Server (NTRS)

    Mahmot, Ron; Koslosky, John T.; Beach, Edward; Schwarz, Barbara

    1994-01-01

    The Mission Operations Division (MOD) at Goddard Space Flight Center builds Mission Operations Centers which are used by Flight Operations Teams to monitor and control satellites. Reducing system life cycle costs through software reuse has always been a priority of the MOD. The MOD's Transportable Payload Operations Control Center (TPOCC) development team established an extensive library of 14 subsystems with over 100,000 delivered source instructions of reusable, generic software components. To date, nine TPOCC-based control centers support 11 satellites and have achieved an average software reuse level of more than 75 percent. This paper shares experiences of how the TPOCC building blocks were developed and how building-block developers, mission development teams, and users are all part of the process.

  6. MoniQA: a general approach to monitor quality assurance

    NASA Astrophysics Data System (ADS)

    Jacobs, J.; Deprez, T.; Marchal, G.; Bosmans, H.

    2006-03-01

    MoniQA ("Monitor Quality Assurance") is a new, non-commercial, independent quality assurance software application developed in our medical physics team. It is a complete Java™-based modular environment for the evaluation of radiological viewing devices, and it thus fits in the global quality assurance network of our (filmless) radiology department. The purpose of the software tool is to guide the medical physicist through an acceptance protocol and the radiologist through a constancy check protocol by presentation of the necessary test patterns and by automated data collection. Data are then sent to a central management system for further analysis. At the moment more than 55 patterns have been implemented, which can be grouped in schemes to implement protocols (i.e. AAPM TG18, DIN and EUREF). Some test patterns are dynamically created and 'drawn' on the viewing device with random parameters, as is the case in a recently proposed new pattern for constancy testing. The software is installed on 35 diagnostic stations (70 monitors) in a filmless radiology department. Learning time was very limited. A constancy check with the new pattern, which assesses luminance decrease, resolution problems and geometric distortion, takes only 2 minutes and 28 seconds per monitor. The modular approach of the software allows the evaluation of new or emerging test patterns. We will report on the software and its usability: the practicality of the constancy check tests in our hospital and the results from acceptance tests of viewing stations for digital mammography.

  7. Effective Team Support: From Modeling to Software Agents

    NASA Technical Reports Server (NTRS)

    Remington, Roger W. (Technical Monitor); John, Bonnie; Sycara, Katia

    2003-01-01

    The purpose of this research contract was to perform multidisciplinary research between CMU psychologists, computer scientists and engineers and NASA researchers to design a next generation collaborative system to support a team of human experts and intelligent agents. To achieve robust performance enhancement of such a system, we had proposed to perform task and cognitive modeling to thoroughly understand the impact technology makes on the organization and on key individual personnel. Guided by cognitively-inspired requirements, we would then develop software agents that support the human team in decision making, information filtering, information distribution and integration to enhance team situational awareness. During the period covered by this final report, we made substantial progress in modeling infrastructure and task infrastructure. Work is continuing under a different contract to complete empirical data collection, cognitive modeling, and the building of software agents to support the team's tasks.

  8. Effective Team Support: From Task and Cognitive Modeling to Software Agents for Time-Critical Complex Work Environments

    NASA Technical Reports Server (NTRS)

    Remington, Roger W. (Technical Monitor); John, Bonnie E.; Sycara, Katia

    2005-01-01

    The purpose of this research contract was to perform multidisciplinary research between CMU psychologists, computer scientists and NASA researchers to design a next generation collaborative system to support a team of human experts and intelligent agents. To achieve robust performance enhancement of such a system, we had proposed to perform task and cognitive modeling to thoroughly understand the impact technology makes on the organization and on key individual personnel. Guided by cognitively-inspired requirements, we would then develop software agents that support the human team in decision making, information filtering, information distribution and integration to enhance team situational awareness. During the period covered by this final report, we made substantial progress in completing a system for empirical data collection, cognitive modeling, and the building of software agents to support a team's tasks, and in running experiments for the collection of baseline data.

  9. Using Pilots to Assess the Value and Approach of CMMI Implementation

    NASA Technical Reports Server (NTRS)

    Godfrey, Sara; Andary, James; Rosenberg, Linda

    2002-01-01

    At Goddard Space Flight Center (GSFC), we have chosen to use the Capability Maturity Model Integration (CMMI) to guide our process improvement program. Projects at GSFC consist of complex systems of software and hardware that control satellites, operate ground systems, run instruments, manage databases and data, and support scientific research. It is a challenge to launch a process improvement program that encompasses our diverse systems, yet is manageable in terms of cost effectiveness. In order to establish the best approach for improvement, our process improvement effort was divided into three phases: 1) Pilot projects; 2) Staged implementation; and 3) Sustainment and continual improvement. During Phase 1, the focus of the activities was on baselining, using pre-appraisals to make better cost and effort estimates for the improvement effort. Pilot pre-appraisals were conducted from different perspectives so that different approaches for process implementation could be evaluated. Phase 1 also concentrated on establishing an improvement infrastructure and training the improvement teams. At the time of this paper, three pilot appraisals had been completed. Our initial appraisal was performed in a flight software area, considering the flight software organization as the organization. The second appraisal was done from a project perspective, focusing on systems engineering and acquisition, and using GSFC as the organization. The final appraisal was in a ground support software area, again using GSFC as the organization. This paper will present our initial approach, lessons learned from all three pilots, and the changes in our approach based on the lessons learned.

  10. A Web-Based Data Collection Platform for Multisite Randomized Behavioral Intervention Trials: Development, Key Software Features, and Results of a User Survey.

    PubMed

    Modi, Riddhi A; Mugavero, Michael J; Amico, Rivet K; Keruly, Jeanne; Quinlivan, Evelyn Byrd; Crane, Heidi M; Guzman, Alfredo; Zinski, Anne; Montue, Solange; Roytburd, Katya; Church, Anna; Willig, James H

    2017-06-16

    Meticulous tracking of study data must begin early in the study recruitment phase and must account for regulatory compliance, minimize missing data, and provide high information integrity and/or reduction of errors. In behavioral intervention trials, participants typically complete several study procedures at different time points. Among HIV-infected patients, behavioral interventions can favorably affect health outcomes. In order to empower newly diagnosed HIV-positive individuals to learn skills to enhance retention in HIV care, we developed the behavioral health intervention Integrating ENGagement and Adherence Goals upon Entry (iENGAGE) funded by the National Institute of Allergy and Infectious Diseases (NIAID), where we deployed an in-clinic behavioral health intervention in 4 urban HIV outpatient clinics in the United States. To scale our intervention strategy homogeneously across sites, we developed software that would function as a behavioral sciences research platform. This manuscript aimed to: (1) describe the design and implementation of a Web-based software application to facilitate deployment of a multisite behavioral science intervention; and (2) report on results of a survey to capture end-user perspectives of the impact of this platform on the conduct of a behavioral intervention trial. In order to support the implementation of the NIAID-funded trial iENGAGE, we developed software to deploy a 4-site behavioral intervention for new clinic patients with HIV/AIDS. We integrated the study coordinator into the informatics team to participate in the software development process. Here, we report the key software features and the results of the 25-item survey to evaluate user perspectives on research and intervention activities specific to the iENGAGE trial (N=13). The key features addressed are study enrollment, participant randomization, real-time data collection, facilitation of longitudinal workflow, reporting, and reusability. We found 100% user agreement (13/13) that participation in the database design and/or testing phase made it easier to understand user roles and responsibilities, and users recommended participation of research teams in developing databases for future studies. Users acknowledged ease of use, color flags, longitudinal workflow, and data storage in one location as the most useful features of the software platform, and issues related to saving participant forms, security restrictions, and worklist layout as the least useful features. The successful development of the iENGAGE behavioral science research platform validated an approach of early and continuous involvement of the study team in design development. In addition, we recommend post-hoc collection of data from users, as this led to important insights on how to enhance future software and inform standard clinical practices. ClinicalTrials.gov NCT01900236 (https://clinicaltrials.gov/ct2/show/NCT01900236; archived by WebCite at http://www.webcitation.org/6qAa8ld7v).

  11. Real time monitoring of risk-adjusted paediatric cardiac surgery outcomes using variable life-adjusted display: implementation in three UK centres

    PubMed Central

    Pagel, Christina; Utley, Martin; Crowe, Sonya; Witter, Thomas; Anderson, David; Samson, Ray; McLean, Andrew; Banks, Victoria; Tsang, Victor; Brown, Katherine

    2013-01-01

    Objective To implement routine in-house monitoring of risk-adjusted 30-day mortality following paediatric cardiac surgery. Design Collaborative monitoring software development and implementation in three specialist centres. Patients and methods Analyses incorporated 2 years of data routinely audited by the National Institute of Cardiac Outcomes Research (NICOR). Exclusion criteria were patients over 16 or undergoing non-cardiac or only catheter procedures. We applied the partial risk adjustment in surgery (PRAiS) risk model for death within 30 days following surgery and generated variable life-adjusted display (VLAD) charts for each centre. These were shared with each clinical team and feedback was sought. Results Participating centres were Great Ormond Street Hospital, Evelina Children's Hospital and The Royal Hospital for Sick Children in Glasgow. Data captured all procedures performed between 1 January 2010 and 31 December 2011. This incorporated 2490 30-day episodes of care, 66 of which were associated with a death within 30 days. The VLAD charts generated for each centre displayed trends in outcomes benchmarked to recent national outcomes. All centres ended the 2-year period within four deaths of what would be expected. The VLAD charts were shared in multidisciplinary meetings and clinical teams reported that they were a useful addition to existing quality assurance initiatives. Each centre is continuing to use the prototype software to monitor their in-house surgical outcomes. Conclusions Timely and routine monitoring of risk-adjusted mortality following paediatric cardiac surgery is feasible. Close liaison with hospital data managers as well as clinicians was crucial to the success of the project. PMID:23564473
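
    A variable life-adjusted display of the kind described is, in essence, a running total of expected minus observed deaths, where the expectation comes from the risk model (here PRAiS). A minimal sketch, using invented case data rather than NICOR data:

        def vlad_series(cases):
            """Cumulative expected-minus-observed deaths for an ordered case list.

            `cases` is a sequence of (predicted_risk, died) pairs, where
            predicted_risk is the model's 30-day mortality probability and
            died is 1 if the patient died within 30 days, else 0. The running
            total drifts upward when outcomes are better than predicted.
            """
            total, series = 0.0, []
            for predicted_risk, died in cases:
                total += predicted_risk - died
                series.append(total)
            return series

        # Illustrative data only: three low-risk survivors, then one death.
        print(vlad_series([(0.02, 0), (0.03, 0), (0.05, 0), (0.20, 1)]))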

  12. Streamlining Software Aspects of Certification: Technical Team Report on the First Industry Workshop

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.; Holloway, C. Michael; Knight, John C.; Leveson, Nancy G.; Yang, Jeffrey C.; Dorsey, Cheryl A.; McCormick, G. Frank

    1998-01-01

    To address concerns about time and expense associated with software aspects of certification, the Federal Aviation Administration (FAA) began the Streamlining Software Aspects of Certification (SSAC) program. As part of this program, a Technical Team was established to determine whether the cost and time associated with certifying aircraft can be reduced while maintaining or improving safety, with the intent of impacting the FAA's Flight 2000 program. The Technical Team conducted a workshop to gain a better understanding of the major concerns in industry about software cost and schedule. Over 120 people attended the workshop, including representatives from the FAA, commercial transport and general aviation aircraft manufacturers and suppliers, and procurers and developers of non-airborne systems; more than 200 issues about software aspects of certification were recorded. This paper provides an overview of the SSAC program, motivation for the workshop, details of the workshop activities and outcomes, and recommendations for follow-on work.

  13. Improving Video Game Development: Facilitating Heterogeneous Team Collaboration through Flexible Software Processes

    NASA Astrophysics Data System (ADS)

    Musil, Juergen; Schweda, Angelika; Winkler, Dietmar; Biffl, Stefan

    Based on our observations of Austrian video game software development (VGSD) practices, we identified a lack of systematic process/method support and inefficient collaboration between the various disciplines involved, i.e. engineers and artists. VGSD includes heterogeneous disciplines, e.g. creative arts, game/content design, and software. Nevertheless, improving team collaboration and process support is an ongoing challenge to enable a comprehensive view on game development projects. Lessons learned from software engineering practices can help game developers to improve game development processes within a heterogeneous environment. Based on a state-of-the-practice survey in the Austrian games industry, this paper (a) presents first results with a focus on process/method support and (b) suggests a candidate flexible process approach based on Scrum to improve VGSD and team collaboration. Results (a) showed a trend to highly flexible software processes involving various disciplines and (b) identified the suggested flexible process approach as feasible and useful for project application.

  14. The Navy’s Management of Software Licenses Needs Improvement

    DTIC Science & Technology

    2013-08-07

    Enterprise Software Licensing (ESL) as a primary DON efficiency target. Through policy and Integrated Product Team actions, this efficiency...review, as well as with DoD Enterprise Software Initiative (ESI) Blanket Purchase Agreements and any related Federal Acquisition Regulation and General...organizational and multi-functional DON ESL team. The DON is also participating in DoD-level enterprise software licensing projects through the DoD

  15. An Experimental Investigation of Computer Program Development Approaches and Computer Programming Metrics.

    DTIC Science & Technology

    1979-12-01

    team programming in reducing software development costs relative to ad hoc approaches and improving software product quality relative to...are interpreted as demonstrating the advantages of disciplined team programming in reducing software development costs relative to ad hoc approaches...is due partially to the cost and impracticality of a valid experimental setup within a production environment. Thus the question remains, are

  16. Models for Deploying Open Source and Commercial Software to Support Earth Science Data Processing and Distribution

    NASA Astrophysics Data System (ADS)

    Yetman, G.; Downs, R. R.

    2011-12-01

    Software deployment is needed to process and distribute scientific data throughout the data lifecycle. Developing software in-house can take software development teams away from other software development projects and can require efforts to maintain the software over time. Adopting and reusing software and system modules that have been previously developed by others can reduce in-house software development and maintenance costs and can contribute to the quality of the system being developed. A variety of models are available for reusing and deploying software and systems that have been developed by others. These deployment models include open source software, vendor-supported open source software, commercial software, and combinations of these approaches. Deployment in Earth science data processing and distribution has demonstrated the advantages and drawbacks of each model. Deploying open source software offers advantages for developing and maintaining scientific data processing systems and applications. By joining an open source community that is developing a particular system module or application, a scientific data processing team can contribute to aspects of the software development without having to commit to developing the software alone. Communities of interested developers can share the work while focusing on activities that utilize in-house expertise and addresses internal requirements. Maintenance is also shared by members of the community. Deploying vendor-supported open source software offers similar advantages to open source software. However, by procuring the services of a vendor, the in-house team can rely on the vendor to provide, install, and maintain the software over time. Vendor-supported open source software may be ideal for teams that recognize the value of an open source software component or application and would like to contribute to the effort, but do not have the time or expertise to contribute extensively. Vendor-supported software may also have the additional benefits of guaranteed up-time, bug fixes, and vendor-added enhancements. Deploying commercial software can be advantageous for obtaining system or software components offered by a vendor that meet in-house requirements. The vendor can be contracted to provide installation, support and maintenance services as needed. Combining these options offers a menu of choices, enabling selection of system components or software modules that meet the evolving requirements encountered throughout the scientific data lifecycle.

  17. Predicting Software Suitability Using a Bayesian Belief Network

    NASA Technical Reports Server (NTRS)

    Beaver, Justin M.; Schiavone, Guy A.; Berrios, Joseph S.

    2005-01-01

    The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian Belief Networks, a machine learning method. This research presents a Bayesian Network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.
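
    To make the hypothesized cause-effect structure concrete, the sketch below hand-rolls a toy discrete network in which team skill, process maturity, and problem complexity drive a single suitability node. All priors and conditional probabilities are invented for illustration; they are not the paper's calibrated values, and the actual network has more nodes than this.

        import itertools

        # Invented priors over the three driving factors.
        P_skill = {"high": 0.6, "low": 0.4}          # development team skill
        P_maturity = {"high": 0.5, "low": 0.5}       # software process maturity
        P_complexity = {"high": 0.3, "low": 0.7}     # problem complexity

        # Invented CPT: P(suitable = yes | skill, maturity, complexity).
        P_suitable = {
            ("high", "high", "low"): 0.95, ("high", "high", "high"): 0.80,
            ("high", "low", "low"): 0.75,  ("high", "low", "high"): 0.55,
            ("low", "high", "low"): 0.60,  ("low", "high", "high"): 0.35,
            ("low", "low", "low"): 0.40,   ("low", "low", "high"): 0.15,
        }

        def p_suitable(evidence=None):
            """Marginal P(suitable=yes), optionally conditioned on observed factors."""
            evidence = evidence or {}
            def consistent(s, m, c):
                return (evidence.get("skill", s) == s and
                        evidence.get("maturity", m) == m and
                        evidence.get("complexity", c) == c)
            states = list(itertools.product(P_skill, P_maturity, P_complexity))
            mass = sum(P_skill[s] * P_maturity[m] * P_complexity[c]
                       for s, m, c in states if consistent(s, m, c))
            num = sum(P_skill[s] * P_maturity[m] * P_complexity[c] * P_suitable[(s, m, c)]
                      for s, m, c in states if consistent(s, m, c))
            return num / mass

        print(p_suitable())                   # prior forecast of suitability
        print(p_suitable({"skill": "low"}))   # forecast after observing a low-skill team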

  18. A self-referential HOWTO on release engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galassi, Mark C.

    Release engineering is a fundamental part of the software development cycle: it is the point at which quality control is exercised and bug fixes are integrated. The way in which software is released also gives the end user her first experience of a software package, while in scientific computing release engineering can guarantee reproducibility. For these reasons and others, the release process is a good indicator of the maturity and organization of a development team. Software teams often do not put in place a release process at the beginning. This is unfortunate because the team does not have early and continuous execution of test suites, and it does not exercise the software in the same conditions as the end users. I describe an approach to release engineering based on the software tools developed and used by the GNU project, together with several specific proposals related to packaging and distribution. I do this in a step-by-step manner, demonstrating how this very paper is written and built using proper release engineering methods. Because many aspects of release engineering are not exercised in the building of the paper, the accompanying software repository also contains examples of software libraries.

  19. Conserving analyst attention units: use of multi-agent software and CEP methods to assist information analysis

    NASA Astrophysics Data System (ADS)

    Rimland, Jeffrey; McNeese, Michael; Hall, David

    2013-05-01

    Although the capability of computer-based artificial intelligence techniques for decision-making and situational awareness has seen notable improvement over the last several decades, the current state-of-the-art still falls short of creating computer systems capable of autonomously making complex decisions and judgments in many domains where data is nuanced and accountability is high. However, there is a great deal of potential for hybrid systems in which software applications augment human capabilities by focusing the analyst's attention on relevant information elements based on both a priori knowledge of the analyst's goals and the processing/correlation of a series of data streams too numerous and heterogeneous for the analyst to digest without assistance. Researchers at Penn State University are exploring ways in which an information framework influenced by Klein's Recognition-Primed Decision (RPD) model, Endsley's model of situational awareness, and the Joint Directors of Laboratories (JDL) data fusion process model can be implemented through a novel combination of Complex Event Processing (CEP) and Multi-Agent Software (MAS). Though originally designed for stock market and financial applications, the high-performance, data-driven nature of CEP techniques provides a natural complement to the proven capabilities of MAS systems for modeling naturalistic decision-making, performing process adjudication, and optimizing networked processing and cognition via the use of "mobile agents." This paper addresses the challenges and opportunities of such a framework for augmenting human observational capability as well as enabling the ability to perform collaborative context-aware reasoning in both human teams and hybrid human/software agent teams.
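
    As a toy illustration of the CEP half of such a hybrid framework (not the Penn State implementation), the sketch below applies a sliding-window pattern rule to an event stream and emits alerts that could be handed to software agents or surfaced to an analyst. The event fields, window size, and threshold are all invented.

        from collections import deque

        def cep_alerts(stream, window=5, threshold=3):
            """Yield an alert whenever at least `threshold` 'suspicious'
            events fall within the last `window` events of the stream."""
            recent = deque(maxlen=window)
            for event in stream:
                recent.append(event)
                hits = sum(1 for e in recent if e["kind"] == "suspicious")
                if hits >= threshold:
                    yield {"alert": "pattern-detected", "evidence": list(recent)}

        events = [{"kind": "normal"}, {"kind": "suspicious"}, {"kind": "suspicious"},
                  {"kind": "normal"}, {"kind": "suspicious"}]
        for alert in cep_alerts(events):
            print(alert["alert"], "over", len(alert["evidence"]), "events")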

  20. Zero to Integration in Eight Months, the Dawn Ground Data System Engineering Challenge

    NASA Technical Reports Server (NTRS)

    Dubon, Lydia P.

    2006-01-01

    The Dawn Project has presented the Ground Data System (GDS) with technical challenges driven by cost and schedule constraints commonly associated with National Aeronautics and Space Administration (NASA) Discovery Projects. The Dawn mission consists of a new and exciting Deep Space partnership among the Jet Propulsion Laboratory (JPL), which manages the project and is responsible for flight operations; Orbital Sciences Corporation (OSC), the spacecraft builder, responsible for flight system test and integration; and the University of California, Los Angeles (UCLA), responsible for science planning and operations. As a cost-capped mission, one of Dawn's implementation strategies is to leverage from both flight and ground heritage. OSC's ground data system is used for flight system test and integration as part of the flight heritage strategy. Mission operations, however, are to be conducted with JPL's ground system. The system engineering challenge of dealing with two heterogeneous ground systems emerged immediately. During the first technical interchange meeting between JPL's GDS Team and OSC's Flight Software Team, August 2003, the need to integrate the ground system with the flight software was brought to the table. This need was driven by the project's commitment to enable instrument engineering model integration in a spacecraft simulator environment, for both demonstration and risk mitigation purposes, by April 2004. This paper will describe the system engineering approach that was undertaken by JPL's GDS Team in order to meet the technical challenge within a non-negotiable eight-month schedule. Key to the success was adherence to fundamental systems engineering practices: decomposition of the project request into manageable requirements; integration of multiple ground disciplines and experts into a focused team effort; definition of a structured yet flexible development process; definition of an in-process risk reduction plan; and aggregation of the intermediate products into an integrated final product. In addition, this paper will highlight the role of lessons learned from the integration experience. The lessons learned from an early GDS deployment have served as the foundation for the design and implementation of the Dawn Ground Data System.

  1. Zero to Integration in Eight Months, the Dawn Ground Data System Engineering Challenge

    NASA Technical Reports Server (NTRS)

    Dubon, Lydia P.

    2006-01-01

    The Dawn Project has presented the Ground Data System (GDS) with technical challenges driven by cost and schedule constraints commonly associated with National Aeronautics and Space Administration (NASA) Discovery Projects. The Dawn mission consists of a new and exciting Deep Space partnership among the Jet Propulsion Laboratory (JPL), responsible for project management and flight operations; Orbital Sciences Corporation (OSC), spacecraft builder and responsible for flight system test and integration; and the University of California, Los Angeles (UCLA), responsible for science planning and operations. As a cost-capped mission, one of Dawn's implementation strategies is to leverage from both flight and ground heritage. OSC's ground data system is used for flight system test and integration as part of the flight heritage strategy. Mission operations, however, are to be conducted with JPL's ground system. The system engineering challenge of dealing with two heterogeneous ground systems emerged immediately. During the first technical interchange meeting between JPL's GDS Team and OSC's Flight Software Team, August 2003, the need to integrate the ground system with the flight software was brought to the table. This need was driven by the project's commitment to enable instrument engineering model integration in a spacecraft simulator environment, for both demonstration and risk mitigation purposes, by April 2004. This paper will describe the system engineering approach that was undertaken by JPL's GDS Team in order to meet the technical challenge within a non-negotiable eight-month schedule. Key to the success was adherence to an overall systems engineering process and fundamental systems engineering practices: decomposition of the project request into manageable requirements; definition of a structured yet flexible development process; integration of multiple ground disciplines and experts into a focused team effort; in-process risk management; and aggregation of the intermediate products into an integrated final product. In addition, this paper will highlight the role of lessons learned from the integration experience. The lessons learned from an early GDS deployment have served as the foundation for the design and implementation of the Dawn Ground Data System.

  2. PopED lite: An optimal design software for preclinical pharmacokinetic and pharmacodynamic studies.

    PubMed

    Aoki, Yasunori; Sundqvist, Monika; Hooker, Andrew C; Gennemark, Peter

    2016-04-01

    Optimal experimental design approaches are seldom used in preclinical drug discovery. The objective is to develop an optimal design software tool specifically designed for preclinical applications in order to increase the efficiency of drug discovery in vivo studies. Several realistic experimental design case studies were collected and many preclinical experimental teams were consulted to determine the design goal of the software tool. The tool obtains an optimized experimental design by solving a constrained optimization problem, where each experimental design is evaluated using some function of the Fisher Information Matrix. The software was implemented in C++ using the Qt framework to assure a responsive user-software interaction through a rich graphical user interface and, at the same time, to achieve the desired computational speed. In addition, a discrete global optimization algorithm was developed and implemented. The software design goals were simplicity, speed and intuition. Based on these design goals, we have developed the publicly available software PopED lite (http://www.bluetree.me/PopED_lite). Optimization computation was on average, over 14 test problems, 30 times faster in PopED lite compared to an already existing optimal design software tool. PopED lite is now used in real drug discovery projects and a few of these case studies are presented in this paper. PopED lite is designed to be simple, fast and intuitive. Simple, to give many users access to basic optimal design calculations. Fast, to fit a short design-execution cycle and allow interactive experimental design (test one design, discuss proposed design, test another design, etc.). Intuitive, so that the input to and output from the software tool can easily be understood by users without knowledge of the theory of optimal design. In this way, PopED lite is highly useful in practice and complements existing tools.
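
    The core computation described, scoring each candidate design by a function of the Fisher Information Matrix, can be sketched briefly. The example below uses numpy finite-difference sensitivities to find a D-optimal pair of sampling times for a one-compartment bolus model; the model, nominal parameters, and error level are assumptions for illustration, and this is not PopED lite's actual C++ algorithm.

        import numpy as np
        from itertools import combinations

        def conc(theta, t, dose=100.0):
            """One-compartment IV bolus: C(t) = (dose/V) * exp(-(CL/V) * t)."""
            cl, v = theta
            return (dose / v) * np.exp(-(cl / v) * t)

        def fim(theta, times, sigma=0.1):
            """Fisher information for additive Gaussian error, using
            finite-difference sensitivities of the model prediction."""
            m = np.zeros((2, 2))
            for t in times:
                g = np.zeros(2)
                for j in range(2):
                    d = np.array(theta, dtype=float)
                    h = 1e-6 * max(abs(d[j]), 1.0)
                    d[j] += h
                    g[j] = (conc(d, t) - conc(theta, t)) / h
                m += np.outer(g, g) / sigma**2
            return m

        theta0 = (2.0, 20.0)                    # nominal CL (L/h) and V (L), invented
        candidates = np.linspace(0.25, 24, 40)  # feasible sampling times (h)
        best = max(combinations(candidates, 2),
                   key=lambda ts: np.linalg.slogdet(fim(theta0, ts))[1])
        print("D-optimal sampling times:", best)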

  3. Evidence-based hamstring injury prevention is not adopted by the majority of Champions League or Norwegian Premier League football teams: the Nordic Hamstring survey.

    PubMed

    Bahr, Roald; Thorborg, Kristian; Ekstrand, Jan

    2015-11-01

    The Nordic hamstring (NH) exercise programme was introduced in 2001 and has been shown to reduce the risk of acute hamstring injuries in football by at least 50%. Despite this, the rate of hamstring injuries has not decreased over the past decade in male elite football. To examine the implementation of the NH exercise programme at the highest level of male football in Europe, the UEFA Champions League (UCL), and to compare this to the Norwegian Premier League, Tippeligaen, where the pioneer research on the NH programme was conducted. Retrospective survey. 50 professional football teams, 32 from the UCL and 18 from Tippeligaen. A questionnaire, based on the Reach, Efficacy, Adoption, Implementation and Maintenance framework, addressing key issues related to the implementation of the NH programme during three seasons from 2012 through 2014, was distributed to team medical staff using electronic survey software. The response rate was 100%. Of the 150 club-seasons covered by the study, the NH programme was completed in full in 16 (10.7%) and in part in an additional 9 (6%) seasons. Consequently, 125 (83.3%) club-seasons were classified as non-compliant. There was no difference in compliance between the UCL and Tippeligaen in any season (χ²: 0.41 to 0.52). Adoption and implementation of the NH exercise programme at the highest levels of male football in Europe is low; too low to expect any overall effect on acute hamstring injury rates.

  4. FPA Depot - Web Application

    NASA Technical Reports Server (NTRS)

    Avila, Edwin M. Martinez; Muniz, Ricardo; Szafran, Jamie; Dalton, Adam

    2011-01-01

    Lines of code (LOC) analysis is one of the methods used to measure programmer productivity and estimate schedules of programming projects. The Launch Control System (LCS) had previously used this method to estimate the amount of work and to plan development efforts. The disadvantage of using LOC as a measure of effort is that only 30% to 35% of the total effort of software projects involves coding [8]. In this application, function points are used instead of LOC for a better estimation of the hours needed to develop each piece of software. Because of these disadvantages, Jamie Szafran of the System Software Branch of Control And Data Systems (NE-C3) at Kennedy Space Center developed a web application called Function Point Analysis (FPA) Depot. The objective of this web application is that the LCS software architecture team can use the data to more accurately estimate the effort required to implement customer requirements. This paper describes the evolution of the domain model used for function point analysis as project managers continually strive to generate more accurate estimates.
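
    For readers unfamiliar with function point analysis, the unadjusted count is a weighted sum over counted components. A minimal sketch using the standard IFPUG weight table follows; the example counts are invented, and FPA Depot's actual domain model is necessarily richer than this.

        # Standard IFPUG weights: (simple, average, complex) per component type.
        WEIGHTS = {
            "external_input": (3, 4, 6),
            "external_output": (4, 5, 7),
            "external_inquiry": (3, 4, 6),
            "internal_logical_file": (7, 10, 15),
            "external_interface_file": (5, 7, 10),
        }
        COMPLEXITY = {"simple": 0, "average": 1, "complex": 2}

        def unadjusted_fp(counts):
            """Sum weighted counts; `counts` maps (component, complexity) -> number found."""
            return sum(WEIGHTS[comp][COMPLEXITY[cplx]] * n
                       for (comp, cplx), n in counts.items())

        example = {
            ("external_input", "average"): 5,         # 5 * 4  = 20
            ("external_output", "simple"): 3,         # 3 * 4  = 12
            ("internal_logical_file", "complex"): 2,  # 2 * 15 = 30
        }
        print(unadjusted_fp(example))  # 62 unadjusted function points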

  5. Principles and Best Practices Emerging from Data Basin: A Data Platform Supporting Scientific Research and Landscape Conservation Planning

    NASA Astrophysics Data System (ADS)

    Comendant, T.; Strittholt, J. R.; Ward, B. C.; Bachelet, D. M.; Grossman, D.; Stevenson-Molnar, N.; Henifin, K.; Lundin, M.; Marvin, T. S.; Peterman, W. L.; Corrigan, G. N.; O'Connor, K.

    2013-12-01

    A multi-disciplinary team of scientists, software engineers, and outreach staff at the Conservation Biology Institute launched an open-access, web-based spatial data platform called Data Basin (www.databasin.org) in 2010. Primarily built to support research and environmental resource planning, Data Basin provides the capability for individuals and organizations to explore, create, interpret, and collaborate around their priority topics and geographies. We used a stakeholder analysis to assess the needs of data consumers/producers and help prioritize primary and secondary audiences. Data Basin's simple and user-friendly interface makes mapping and geo-processing tools more accessible to less technical audiences. Input from users is considered in system planning, testing, and implementation. The team continually develops using an agile software development approach, which allows new features, improvements, and bug fixes to be deployed to the live system on a frequent basis. The data import process is handled through administrative approval, and Data Basin requires spatial data (biological, physical, and socio-economic) to be well-documented. Outreach and training are used to convey the scope and appropriate use of the scientific information and available resources.

  6. Electronic dental records: start taking the steps.

    PubMed

    Bergoff, Jana

    2011-01-01

    Converting paper patient record charts into their electronic counterparts (EDRs) not only has many advantages, but also could become a legal requirement in the future. Several steps are key to a successful transition, including assessing the needs of the dental team and what they require as part of the implementation. Existing software and hardware must be evaluated for continued use and expansion. Proper protocols for information transfer must be established to ensure complete records while maintaining HIPAA regulations regarding patient privacy. Reduce anxiety by setting realistic deadlines and using trusted backup methods.

  7. Flight Planning Branch NASA Co-op Tour

    NASA Technical Reports Server (NTRS)

    Marr, Aja M.

    2013-01-01

    This semester I worked with the Flight Planning Branch at the NASA Johnson Space Center. I learned about the different aspects of flight planning for the International Space Station, as well as the software that is used internally and ISSLive!, which is used to help educate the public on the space program. I had the opportunity to do on-the-job training in the Mission Control Center with the planning team. I transferred old timeline records from the planning team's old software to the new software in order to preserve the data for the future when the old software is retired. I learned about the operations of the International Space Station and the importance of good communication between the different parts of the planning team, and I enrolled in professional development classes as well as technical classes to learn about the space station.

  8. Fully Employing Software Inspections Data

    NASA Technical Reports Server (NTRS)

    Shull, Forrest; Feldmann, Raimund L.; Seaman, Carolyn; Regardie, Myrna; Godfrey, Sally

    2009-01-01

    Software inspections provide a proven approach to quality assurance for software products of all kinds, including requirements, design, code, and test plans, among others. Common to all inspections is the aim of finding and fixing defects as early as possible, thereby providing cost savings by minimizing the amount of rework necessary later in the lifecycle. Measurement data, such as the number and type of defects found and the effort spent by the inspection team, provide not only direct feedback about the software product to the project team but are also valuable for process improvement activities. In this paper, we discuss NASA's use of software inspections and the rich set of data that has resulted. In particular, we present results from analysis of inspection data that illustrate the benefits of fully utilizing that data for process improvement at several levels. Examining such data across multiple inspections or projects allows team members to monitor and trigger cross-project improvements. Such improvements may focus on the software development processes of the whole organization as well as improvements to the applied inspection process itself.
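
    As a small illustration of the kind of cross-inspection analysis described, the sketch below aggregates inspection records into defects found per inspection hour, grouped by artifact type. The record fields and numbers are assumptions for illustration, not NASA's actual schema or data.

        from collections import defaultdict

        inspections = [
            {"project": "A", "artifact": "requirements", "defects": 14, "effort_h": 6.0},
            {"project": "A", "artifact": "code",         "defects": 22, "effort_h": 9.0},
            {"project": "B", "artifact": "requirements", "defects": 5,  "effort_h": 4.0},
        ]

        def defect_rates(records):
            """Defects found per inspection hour, grouped by artifact type."""
            defects, effort = defaultdict(int), defaultdict(float)
            for r in records:
                defects[r["artifact"]] += r["defects"]
                effort[r["artifact"]] += r["effort_h"]
            return {a: round(defects[a] / effort[a], 2) for a in defects}

        print(defect_rates(inspections))  # {'requirements': 1.9, 'code': 2.44}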

  9. Collaboration, Communication and Co-ordination in Agile Software Development Practice

    NASA Astrophysics Data System (ADS)

    Robinson, Hugh; Sharp, Helen

    This chapter analyses the results of a series of observational studies of agile software development teams, identifying commonalities in collaboration, co-ordination and communication activities. Pairing and customer collaboration are focussed on to illustrate the nature of collaboration and communication, as are two simple physical artefacts that emerged through analysis as being an information-rich focal point for the co-ordination of collaboration and communication activities. The analysis shows that pairing has common characteristics across all teams, while customer collaboration differs between the teams depending on the application and organisational context of development.

  10. CoRoTlog

    NASA Astrophysics Data System (ADS)

    Plasson, Ph.

    2006-11-01

    LESIA, in close cooperation with CNES, DLR and IWF, is responsible for the tests and validation of the CoRoT instrument digital process unit, which is made up of the BEX and DPU assembly. The main part of the work has consisted in validating the DPU software and in testing the BEX/DPU coupling. This work took more than two years due to the central role of the software tested and its technical complexity. The first task in the validation process was to carry out the acceptance tests of the DPU software. These tests consisted in checking each of the 325 requirements identified in the URD (User Requirements Document) and were played in a configuration using the DPU coupled to a BEX simulator. During the acceptance tests, all the transversal functionalities of the DPU software, like the TC/TM management, the state machine management, the BEX driving, the system monitoring or the maintenance functionalities, were checked in depth. The functionalities associated with the seismology and exoplanetology processing, like the loading of window and mask descriptors or the configuration of the service execution parameters, were also exhaustively tested. After having validated the DPU software against the user requirements using a BEX simulator, the following step consisted in coupling the DPU and the BEX in order to check that the formed unit worked correctly and met the performance requirements. These tests were conducted in two phases: the first one was devoted to the functional aspects and the interface tests, the second one to the performance aspects. The performance tests were based on the use of the DPU software scientific services and on the use of full images representative of a realistic sky as inputs. These tests were also based on the use of a reference set of windows and parameters, which was provided by the scientific team and was representative, in terms of load and complexity, of the one that could be used during the observation mode of the CoRoT instrument. They were played in a configuration using either a BCC simulator or a real BCC coupled to a video simulator, to feed the BEX/DPU unit. The validation of the scientific algorithms was conducted in parallel with the BEX/DPU coupling tests. The objective of this phase was to check that the algorithms implemented in the scientific services of the DPU software were in good conformity with those specified in the URD and that the obtained numerical precision corresponded to that expected. Forty test cases were defined, covering the fine and rough angular error measurement processing, the rejection of the brilliant pixels, the subtraction of the offset and the sky background, the photometry algorithms, the SAA handling and reference image management. For each test case, the LESIA scientific team produced by simulation, using the instrument model, the dynamic data files and the parameter sets needed to feed the DPU on the one hand and, on the other hand, a model of the onboard software. These data files correspond to FITS images (black windows, star windows, offset windows) containing more or less disturbances and making it possible to test the DPU software in dynamic mode over durations of up to 48 hours. To perform the test and validation activities of the CoRoT instrument digital process unit, a set of software testing tools was developed by LESIA (Software Ground Support Equipment, hereafter "SGSE").
Thanks to their versatility and modularity, these software testing tools were actually used during all the activities of integration, tests and validation of the instrument and its subsystems CoRoTCase and CoRoTCam. The CoRoT SGSE were specified, designed and developed by LESIA. The objective was to have a software system allowing the users (validation team of the onboard software, instrument integration team, etc.) to remotely control and monitor the whole instrument or only one of the subsystems of the instrument like the DPU coupled to a simulator BEX or the BEX/DPU unit coupled to a BCC simulator. The idea was to be able to interact in real time with the system under test by driving the various EGSE, but also to play test procedures implemented as scripts organized into libraries, to record the telemetries and housekeeping data in a database, and to be able to carry out post-mortem analyses.

  11. Improving hospital weekend handover: a user-centered, standardised approach.

    PubMed

    Mehra, Avi; Henein, Christin

    2014-01-01

    Clinical Handover remains one of the most perilous procedures in medicine (1). Weekend handover has emerged as a key area of concern with high variability in handover processes across hospitals (1,2,4,5-10). Studying weekend handover processes within medicine at an acute teaching hospital revealed huge variability in documented content and structure. A total of 12 different pro formas were in use by the medical day-team to hand over to the weekend team on-call. A Likert-survey of doctors revealed 93% felt the current handover system needed improvement, with 71% stating that it did not ensure patient safety (Chi-squared, p-value <0.001, n=32). Semi-structured interviews of doctors identified common themes including "a lack of consistency in approach", "poor standardization" and "high variability". Seeking to address concerns of standardization, a standardized handover pro forma was developed using Royal College of Physicians (RCP) guidelines (2), with direct end-user input. Results following implementation revealed a considerable improvement in documented ceiling of care, urgency of task and team member assignment, with 100% uptake of the new proforma at both 4-week and 6-month post-implementation analyses. 88% of doctors surveyed perceived that the new proforma improved patient safety (p<0.01, n=25), with 62% highlighting that it allowed doctors to work more efficiently. Results also revealed that 44% felt further improvements were needed and highlighted electronic solutions and handover training as main priorities. Handover briefing was subsequently incorporated into junior doctor induction, and education modules were delivered with good feedback. Following collaboration with key stakeholders and with end-user input, integrated electronic handover software was designed and funding secured. The software is currently under final development. Introducing a standardized handover proforma can be an effective initial step in improving weekend handover. Handover education and end-user involvement are key in improving the process. Electronic handover solutions have been shown to significantly increase the quality of handover and are worth considering (9, 10).

  12. Improving hospital weekend handover: a user-centered, standardised approach

    PubMed Central

    Mehra, Avi; Henein, Christin

    2014-01-01

    Clinical Handover remains one of the most perilous procedures in medicine (1). Weekend handover has emerged as a key area of concern with high variability in handover processes across hospitals (1,2,4,5–10). Studying weekend handover processes within medicine at an acute teaching hospital revealed huge variability in documented content and structure. A total of 12 different pro formas were in use by the medical day-team to hand over to the weekend team on-call. A Likert-survey of doctors revealed 93% felt the current handover system needed improvement, with 71% stating that it did not ensure patient safety (Chi-squared, p-value <0.001, n=32). Semi-structured interviews of doctors identified common themes including “a lack of consistency in approach”, “poor standardization” and “high variability”. Seeking to address concerns of standardization, a standardized handover pro forma was developed using Royal College of Physicians (RCP) guidelines (2), with direct end-user input. Results following implementation revealed a considerable improvement in documented ceiling of care, urgency of task and team member assignment, with 100% uptake of the new proforma at both 4-week and 6-month post-implementation analyses. 88% of doctors surveyed perceived that the new proforma improved patient safety (p<0.01, n=25), with 62% highlighting that it allowed doctors to work more efficiently. Results also revealed that 44% felt further improvements were needed and highlighted electronic solutions and handover training as main priorities. Handover briefing was subsequently incorporated into junior doctor induction, and education modules were delivered with good feedback. Following collaboration with key stakeholders and with end-user input, integrated electronic handover software was designed and funding secured. The software is currently under final development. Introducing a standardized handover proforma can be an effective initial step in improving weekend handover. Handover education and end-user involvement are key in improving the process. Electronic handover solutions have been shown to significantly increase the quality of handover and are worth considering (9, 10). PMID:26734248

  13. A Browser-Based Multi-User Working Environment for Physicists

    NASA Astrophysics Data System (ADS)

    Erdmann, M.; Fischer, R.; Glaser, C.; Klingebiel, D.; Komm, M.; Müller, G.; Rieger, M.; Steggemann, J.; Urban, M.; Winchen, T.

    2014-06-01

    Many programs in experimental particle physics do not yet have a graphical interface, or impose demanding platform and software requirements. With the most recent development of the VISPA project, we provide graphical interfaces to existing software programs and access to multiple computing clusters through standard web browsers. The scalable client-server system allows analyses to be performed by sizable teams and relieves the individual physicist of installing and maintaining a software environment. The VISPA graphical interfaces are implemented in HTML and JavaScript, with extensions to the Python webserver. The webserver uses SSH and RPC to access user data, code and processes on remote sites. As example applications, we present graphical interfaces for steering the reconstruction framework OFFLINE of the Pierre Auger experiment and the analysis development toolkit PXL. The browser-based VISPA system was field-tested in the biweekly homework of a third-year physics course by more than 100 students. We discuss the system deployment and the evaluation by the students.
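
    The SSH-based remote access mentioned above is easy to picture. What follows is an illustrative Python sketch only, not VISPA code: the host name, endpoint path, and remote directory are hypothetical placeholders, and key-based SSH authentication is assumed.

      # Illustrative sketch: a tiny web front end that proxies a command to a
      # remote site over SSH, in the spirit of the VISPA client-server design.
      # REMOTE_HOST, the username, and the endpoint are hypothetical.
      from http.server import BaseHTTPRequestHandler, HTTPServer
      import paramiko

      REMOTE_HOST = "cluster.example.org"  # hypothetical compute site

      def run_remote(command: str) -> str:
          """Execute a command on the remote site over SSH and return stdout."""
          client = paramiko.SSHClient()
          client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
          client.connect(REMOTE_HOST, username="physicist")  # assumes key-based auth
          try:
              _stdin, stdout, _stderr = client.exec_command(command)
              return stdout.read().decode()
          finally:
              client.close()

      class Handler(BaseHTTPRequestHandler):
          def do_GET(self):
              # e.g. GET / lists the user's remote analysis directory
              output = run_remote("ls ~/analysis")
              self.send_response(200)
              self.send_header("Content-Type", "text/plain")
              self.end_headers()
              self.wfile.write(output.encode())

      if __name__ == "__main__":
          HTTPServer(("localhost", 8080), Handler).serve_forever()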

  14. Evaluating Sustainability Models for Interoperability through Brokering Software

    NASA Astrophysics Data System (ADS)

    Pearlman, Jay; Benedict, Karl; Best, Mairi; Fyfe, Sue; Jacobs, Cliff; Michener, William; Nativi, Stefano; Powers, Lindsay; Turner, Andrew

    2016-04-01

    Sustainability of software and research support systems is an element of innovation that is not often discussed. Yet, sustainment is essential if we expect research communities to make the time investment to learn and adopt new technologies. As the Research Data Alliance (RDA) is developing new approaches to interoperability, the question of uptake and sustainability is important. Brokering software sustainability is one of the areas that is being addressed in RDA. The Business Models Team of the Research Data Alliance Brokering Governance Working Group examined several support models proposed to promote the long-term sustainability of brokering middleware. The business model analysis includes examination of funding source, implementation frameworks and challenges, and policy and legal considerations. Results of this comprehensive analysis highlight advantages and disadvantages of the various models with respect to the specific requirements for brokering services. We offer recommendations based on the outcomes of this analysis that suggest that hybrid funding models present the most likely avenue to long term sustainability.

  15. Software-centric View on OVMS for LBT

    NASA Astrophysics Data System (ADS)

    Trowitzsch, J.; Borelli, J.; Pott, J.; Kürster, M.

    2012-09-01

    The performance of infrared interferometry (IF) and adaptive optics (AO) strongly depends on the mitigation and correction of telescope vibrations. Therefore, the OVMS, the Optical Path Difference and Vibration Monitoring System, is being installed at the Large Binocular Telescope (LBT). It is meant to ensure suitable conditions for adaptive optics and interferometry. The vibration information is collected from accelerometers that are distributed over the optical elements of the LBT. The collected vibration measurements are converted into tip-tilt and optical path difference data. These data are utilized in the control strategies of the LBT adaptive secondary mirrors and the beam-combining interferometers, LINC-NIRVANA and LBTI. Within the OVMS, the software part is the responsibility of the LINC-NIRVANA team at MPIA Heidelberg. It comprises the software for real-time data acquisition from the accelerometers as well as the related telemetry interface and the vibration-monitoring quick-look tools. The basic design ideas, implementation details and special features are explained here.
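
    The abstract does not detail how accelerometer readings become tip-tilt and optical path difference data, so the following Python toy is an assumption-laden sketch, not the OVMS algorithm: it double-integrates synthetic acceleration to displacement and differences two mirror displacements to estimate an OPD. The sampling rate, drift removal, and signals are all invented.

      # Toy sketch (not the OVMS pipeline): double-integrate acceleration to
      # displacement and difference two arms to estimate an optical path
      # difference (OPD). All numbers are synthetic.
      import numpy as np

      def accel_to_displacement(accel: np.ndarray, dt: float) -> np.ndarray:
          """Double-integrate acceleration [m/s^2] sampled at interval dt [s]."""
          velocity = np.cumsum(accel) * dt            # first integration
          velocity -= velocity.mean()                 # crude drift removal
          displacement = np.cumsum(velocity) * dt     # second integration
          return displacement - displacement.mean()

      dt = 1e-3                                        # 1 kHz sampling (assumed)
      t = np.arange(0.0, 1.0, dt)
      accel_left = 1e-3 * np.sin(2 * np.pi * 12 * t)         # 12 Hz vibration
      accel_right = 1e-3 * np.sin(2 * np.pi * 12 * t + 0.3)  # phase-shifted arm

      opd = accel_to_displacement(accel_left, dt) - accel_to_displacement(accel_right, dt)
      print(f"peak-to-peak OPD estimate: {np.ptp(opd):.2e} m")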

  16. Continuous integration for concurrent MOOSE framework and application development on GitHub

    DOE PAGES

    Slaughter, Andrew E.; Peterson, John W.; Gaston, Derek R.; ...

    2015-11-20

    For the past several years, Idaho National Laboratory’s MOOSE framework team has employed modern software engineering techniques (continuous integration, joint application/framework source code repositories, automated regression testing, etc.) in developing closed-source multiphysics simulation software (Gaston et al., Journal of Open Research Software vol. 2, article e10, 2014). In March 2014, the MOOSE framework was released under an open source license on GitHub, significantly expanding and diversifying the pool of current active and potential future contributors on the project. Despite this recent growth, the same philosophy of concurrent framework and application development continues to guide the project’s development roadmap. Several specific practices, including techniques for managing multiple repositories, conducting automated regression testing, and implementing a cascading build process, are discussed in this short paper. Furthermore, special attention is given to describing the manner in which these practices naturally synergize with the GitHub API and GitHub-specific features such as issue tracking, Pull Requests, and project forks.
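
    As one concrete illustration of how a cascading build can hook into the GitHub API, a CI job may report a framework build result as a commit status that downstream application repositories gate on. This is a hedged sketch, not the MOOSE team's actual tooling; the organization, repository name, SHA, and token variable are placeholders.

      # Sketch: report a CI result through the GitHub commit status API so that
      # downstream (application) builds can key off the framework result.
      # Organization, repository, SHA, and token are placeholder values.
      import os
      import requests

      GITHUB_API = "https://api.github.com"
      TOKEN = os.environ["GITHUB_TOKEN"]  # personal access token, assumed set

      def post_commit_status(owner, repo, sha, state, description):
          """Attach a status ('pending', 'success', 'failure', 'error') to a commit."""
          url = f"{GITHUB_API}/repos/{owner}/{repo}/statuses/{sha}"
          resp = requests.post(
              url,
              headers={"Authorization": f"token {TOKEN}"},
              json={"state": state, "context": "ci/cascade", "description": description},
          )
          resp.raise_for_status()

      # Framework tests passed, so application builds may proceed.
      sha = "0f5c2e7a"  # commit SHA from the CI environment (placeholder)
      post_commit_status("example-org", "framework", sha, "success",
                         "framework tests passed; triggering application builds")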

  17. Continuous integration for concurrent MOOSE framework and application development on GitHub

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaughter, Andrew E.; Peterson, John W.; Gaston, Derek R.

    For the past several years, Idaho National Laboratory’s MOOSE framework team has employed modern software engineering techniques (continuous integration, joint application/framework source code repositories, automated regression testing, etc.) in developing closed-source multiphysics simulation software (Gaston et al., Journal of Open Research Software vol. 2, article e10, 2014). In March 2014, the MOOSE framework was released under an open source license on GitHub, significantly expanding and diversifying the pool of current active and potential future contributors on the project. Despite this recent growth, the same philosophy of concurrent framework and application development continues to guide the project’s development roadmap. Several specific practices, including techniques for managing multiple repositories, conducting automated regression testing, and implementing a cascading build process, are discussed in this short paper. Furthermore, special attention is given to describing the manner in which these practices naturally synergize with the GitHub API and GitHub-specific features such as issue tracking, Pull Requests, and project forks.

  18. Using "Facebook" to Improve Communication in Undergraduate Software Development Teams

    ERIC Educational Resources Information Center

    Charlton, Terence; Devlin, Marie; Drummond, Sarah

    2009-01-01

    As part of the CETL ALiC initiative (Centre of Excellence in Teaching and Learning: Active Learning in Computing), undergraduate computing science students at Newcastle and Durham universities participated in a cross-site team software development project. To ensure we offer adequate resources to support this collaboration, we conducted an…

  19. Student Team Projects in Information Systems Development: Measuring Collective Creative Efficacy

    ERIC Educational Resources Information Center

    Cheng, Hsiu-Hua; Yang, Heng-Li

    2011-01-01

    For information systems development project student teams, learning how to improve software development processes is an important training. Software process improvement is an outcome of a number of creative behaviours. Social cognitive theory states that the efficacy of judgment influences behaviours. This study explores the impact of three types…

  20. Using SFOC to fly the Magellan Venus mapping mission

    NASA Technical Reports Server (NTRS)

    Bucher, Allen W.; Leonard, Robert E., Jr.; Short, Owen G.

    1993-01-01

    Traditionally, spacecraft flight operations at the Jet Propulsion Laboratory (JPL) have been performed by teams of spacecraft experts utilizing ground software designed specifically for the current mission. The Jet Propulsion Laboratory set out to reduce the cost of spacecraft mission operations by designing ground data processing software that could be used by multiple spacecraft missions, either sequentially or concurrently. The Space Flight Operations Center (SFOC) System was developed to provide the ground data system capabilities needed to monitor several spacecraft simultaneously and provide enough flexibility to meet the specific needs of individual projects. The Magellan Spacecraft Team utilizes the SFOC hardware and software designed for engineering telemetry analysis, both real-time and non-real-time. The flexibility of the SFOC System has allowed the spacecraft team to integrate their own tools with SFOC tools to perform the tasks required to operate a spacecraft mission. This paper describes how the Magellan Spacecraft Team is utilizing the SFOC System in conjunction with their own software tools to perform the required tasks of spacecraft event monitoring as well as engineering data analysis and trending.

  1. Managing complex research datasets using electronic tools: A meta-analysis exemplar

    PubMed Central

    Brown, Sharon A.; Martin, Ellen E.; Garcia, Theresa J.; Winter, Mary A.; García, Alexandra A.; Brown, Adama; Cuevas, Heather E.; Sumlin, Lisa L.

    2013-01-01

    Meta-analyses of broad scope and complexity require investigators to organize many study documents and manage communication among several research staff. Commercially available electronic tools, e.g., EndNote, Adobe Acrobat Pro, Blackboard, Excel, and IBM SPSS Statistics (SPSS), are useful for organizing and tracking the meta-analytic process, as well as enhancing communication among research team members. The purpose of this paper is to describe the electronic processes we designed, using commercially available software, for an extensive quantitative model-testing meta-analysis we are conducting. Specific electronic tools improved the efficiency of (a) locating and screening studies, (b) screening and organizing studies and other project documents, (c) extracting data from primary studies, (d) checking data accuracy and analyses, and (e) communication among team members. The major limitation in designing and implementing a fully electronic system for meta-analysis was the requisite upfront time to: decide on which electronic tools to use, determine how these tools would be employed, develop clear guidelines for their use, and train members of the research team. The electronic process described here has been useful in streamlining the process of conducting this complex meta-analysis and enhancing communication and sharing documents among research team members. PMID:23681256

  2. Managing complex research datasets using electronic tools: a meta-analysis exemplar.

    PubMed

    Brown, Sharon A; Martin, Ellen E; Garcia, Theresa J; Winter, Mary A; García, Alexandra A; Brown, Adama; Cuevas, Heather E; Sumlin, Lisa L

    2013-06-01

    Meta-analyses of broad scope and complexity require investigators to organize many study documents and manage communication among several research staff. Commercially available electronic tools, for example, EndNote, Adobe Acrobat Pro, Blackboard, Excel, and IBM SPSS Statistics (SPSS), are useful for organizing and tracking the meta-analytic process as well as enhancing communication among research team members. The purpose of this article is to describe the electronic processes designed, using commercially available software, for an extensive, quantitative model-testing meta-analysis. Specific electronic tools improved the efficiency of (a) locating and screening studies, (b) screening and organizing studies and other project documents, (c) extracting data from primary studies, (d) checking data accuracy and analyses, and (e) communication among team members. The major limitation in designing and implementing a fully electronic system for meta-analysis was the requisite upfront time to decide on which electronic tools to use, determine how these tools would be used, develop clear guidelines for their use, and train members of the research team. The electronic process described here has been useful in streamlining the process of conducting this complex meta-analysis and enhancing communication and sharing documents among research team members.

  3. Exploring interpersonal behavior and team sensemaking during health information technology implementation.

    PubMed

    Kitzmiller, Rebecca R; McDaniel, Reuben R; Johnson, Constance M; Lind, E Allan; Anderson, Ruth A

    2013-01-01

    We examine how interpersonal behavior and social interaction influence team sensemaking and subsequent team actions during a hospital-based health information technology (HIT) implementation project. Over the course of 18 months, we directly observed the interpersonal interactions of HIT implementation teams using a sensemaking lens. We identified three voice-promoting strategies enacted by team leaders that fostered team member voice and sensemaking: communicating a vision, connecting goals to team member values, and seeking team member input. However, infrequent leader expressions of anger quickly undermined team sensemaking, halting the dialog essential to problem solving. By seeking team member opinions, team leaders overcame the negative effects of anger. Leaders must enact voice-promoting behaviors and use them throughout a team's engagement. Further, training teams in how to use conflict to achieve greater innovation may improve the sensemaking essential to project risk mitigation. Health care work processes are complex; teams involved in implementing improvements must be prepared to deal with conflicting, contentious issues that will arise during change. Therefore, team conflict training may be essential to sustaining sensemaking. Future research should seek to identify team interactions that foster sensemaking, especially when topics are difficult or unwelcome, and then determine the association between staff sensemaking and the impact on HIT implementation outcomes. We are among the first to focus on project teams tasked with HIT implementation. This research extends our understanding of how leaders' behaviors might facilitate or impede speaking up among project teams in health care settings.

  4. Performance of Student Software Development Teams: The Influence of Personality and Identifying as Team Members

    ERIC Educational Resources Information Center

    Monaghan, Conal; Bizumic, Boris; Reynolds, Katherine; Smithson, Michael; Johns-Boast, Lynette; van Rooy, Dirk

    2015-01-01

    One prominent approach in the exploration of the variations in project team performance has been to study two components of the aggregate personalities of the team members: conscientiousness and agreeableness. A second line of research, known as self-categorisation theory, argues that identifying as team members and the team's performance norms…

  5. GFAST Software Demonstration

    NASA Image and Video Library

    2017-03-17

    NASA engineers and test directors gather in Firing Room 3 in the Launch Control Center at NASA's Kennedy Space Center in Florida, to watch a demonstration of the automated command and control software for the agency's Space Launch System (SLS) and Orion spacecraft. The software is called the Ground Launch Sequencer. It will be responsible for nearly all of the launch commit criteria during the final phases of launch countdowns. The Ground and Flight Application Software Team (GFAST) demonstrated the software. It was developed by the Command, Control and Communications team in the Ground Systems Development and Operations (GSDO) Program. GSDO is helping to prepare the center for the first test flight of Orion atop the SLS on Exploration Mission 1.

  6. The role of staff turnover in the implementation of evidence-based practices in mental health care.

    PubMed

    Woltmann, Emily M; Whitley, Rob; McHugo, Gregory J; Brunette, Mary; Torrey, William C; Coots, Laura; Lynde, David; Drake, Robert E

    2008-07-01

    This study examined turnover rates of teams implementing psychosocial evidence-based practices in public-sector mental health settings. It also explored the relationship between turnover and implementation outcomes in an effort to understand whether practitioner perspectives on turnover are related to implementation outcomes. Team turnover was measured for 42 implementing teams participating in a national demonstration project examining implementation of five evidence-based practices between 2002 and 2005. Regression techniques were used to analyze the effects of team turnover on penetration and fidelity. Qualitative data collected throughout the project were blended with the quantitative data to examine the significance of team turnover to those attempting to implement the practices. High team turnover was common (M±SD = 81%±46%) and did not vary by practice. The 24-month turnover rate was inversely related to fidelity scores at 24 months (N=40, β=-0.005, p=.01). A negative trend was observed for penetration. Further analysis indicated that 71% of teams noted that turnover was a relevant factor in implementation. The behavioral health workforce remains in flux. High turnover most often had a negative impact on implementation, although some teams were able to use strategies to improve implementation through turnover. Implementation models must consider turbulent behavioral health workforce conditions.

  7. KCNSC Automated RAIL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Branson, Donald

    The KCNSC Automated RAIL (Rolling Action Item List) system provides an electronic platform to manage and escalate rolling action items within a business and manufacturing environment at Honeywell. The software enables a tiered approach to issue management in which issues are escalated up a management chain based on team input and comparison against business metrics. The software manages action items at different levels of the organization and allows all users to discuss action items concurrently. In addition, the software drives accountability through timely emails and proper visibility during team meetings.
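
    The tiered escalation can be modeled schematically. The Python sketch below assumes one simple rule, that an open item past its due date moves up one management tier; the tier names, fields, and rule are illustrative, not the actual RAIL schema or Honeywell business metrics.

      # Schematic escalation of a rolling action item up a management chain.
      # Tiers, fields, and the overdue rule are illustrative assumptions.
      from dataclasses import dataclass
      from datetime import date

      TIERS = ["team", "supervisor", "manager", "director"]

      @dataclass
      class ActionItem:
          title: str
          due: date
          tier: int = 0        # index into TIERS
          closed: bool = False

      def escalate_if_overdue(item: ActionItem, today: date) -> None:
          """Move an open, overdue item one tier up (a real system would also email)."""
          if not item.closed and today > item.due and item.tier < len(TIERS) - 1:
              item.tier += 1
              print(f"escalating '{item.title}' to {TIERS[item.tier]}")

      item = ActionItem("replace fixture on line 3", due=date(2024, 1, 15))
      escalate_if_overdue(item, today=date(2024, 1, 20))  # -> supervisor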

  8. Impact of agile methodologies on team capacity in automotive radio-navigation projects

    NASA Astrophysics Data System (ADS)

    Prostean, G.; Hutanu, A.; Volker, S.

    2017-01-01

    The development processes used in automotive radio-navigation projects are constantly under pressure to adapt. While the software development models are based on automotive production processes, the integration of peripheral components into an automotive system triggers a high number of requirement modifications. The use of traditional development models in the automotive industry pushes a team's development capacity to its boundaries. The root cause lies in the inflexibility of current processes and the limits of their adaptability. This paper addresses a new project management approach for the development of radio-navigation projects. Understanding the weaknesses of currently used models helped us develop and integrate agile methodologies into the traditional development model structure. In the first part we focus on change management methods to reduce the inflow of requests for change. Established change-management risk analysis processes enable project management to judge the impact of a requirement change and also buy the project time to implement some changes. However, in large automotive radio-navigation projects the time saved is not enough to implement the large number of changes submitted to the project. In the second part of this paper we focus on increasing team capacity by integrating agile methodologies into the traditional model at critical project phases. The overall objective of this paper is to demonstrate the need for process adaptation in order to resolve project team capacity bottlenecks.

  9. Passive perception system for day/night autonomous off-road navigation

    NASA Astrophysics Data System (ADS)

    Rankin, Arturo L.; Bergh, Charles F.; Goldberg, Steven B.; Bellutta, Paolo; Huertas, Andres; Matthies, Larry H.

    2005-05-01

    Passive perception of terrain features is a vital requirement for military-related unmanned autonomous vehicle operations, especially under electromagnetic signature management conditions. As a member of Team Raptor, the Jet Propulsion Laboratory developed a self-contained passive perception system under the DARPA-funded PerceptOR program. An environmentally protected forward-looking sensor head was designed and fabricated in-house to straddle an off-the-shelf pan-tilt unit. The sensor head contained three color cameras for multi-baseline daytime stereo ranging, a pair of cooled mid-wave infrared cameras for nighttime stereo ranging, and supporting electronics to synchronize the captured imagery. Narrow-baseline stereo provided improved range data density in cluttered terrain, while wide-baseline stereo provided more accurate ranging for operation at higher speeds in relatively open areas. The passive perception system processed stereo images and output terrain maps containing elevation, terrain type, and detected hazards over a local area network. A novel software architecture was designed and implemented to distribute the data processing on a 533 MHz quad 7410 PowerPC single-board computer under the VxWorks real-time operating system. This architecture, which is general enough to operate on N processors, has subsequently been tested on Pentium-based processors under Windows and Linux, and on a SPARC-based processor under Unix. The passive perception system was operated during FY04 PerceptOR program evaluations at Fort A. P. Hill, Virginia, and Yuma Proving Ground, Arizona. This paper discusses the Team Raptor passive perception system hardware and software design, implementation, and performance, and describes a road map to faster and improved passive perception.
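
    The narrow- versus wide-baseline trade-off follows directly from stereo geometry: range is Z = fB/d, so a fixed disparity error produces a range error that grows as Z²/(fB). The short sketch below illustrates this with invented camera numbers; the focal length and matching error are assumptions, not values from the paper.

      # Back-of-envelope stereo baseline trade-off: for disparity error e_d,
      # range error ~= Z^2 / (f * B) * e_d. All numbers are invented.
      F_PX = 800.0      # focal length in pixels (assumed)
      DISP_ERR = 0.25   # stereo matching error in pixels (assumed)

      def range_error(z_m: float, baseline_m: float) -> float:
          """Approximate range error at distance z_m for a given baseline."""
          return (z_m ** 2 / (F_PX * baseline_m)) * DISP_ERR

      for baseline in (0.3, 1.0):  # narrow vs. wide baseline, in meters
          print(f"B={baseline} m: range error at 20 m = "
                f"{range_error(20.0, baseline):.2f} m")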

  10. TEAM (Technologies Enabling Agile Manufacturing) shop floor control requirements guide: Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-03-28

    TEAM will create a shop floor control (SFC) system to link pre-production planning to shop floor execution. SFC must meet the requirements of a multi-facility corporation, where control must be maintained between co-located facilities down to individual workstations within each facility. SFC must also meet the requirements of a small corporation, where there may be only one small facility. A hierarchical architecture is required to meet these diverse needs. The hierarchy contains the following levels: Enterprise, Factory, Cell, Station, and Equipment. SFC is focused on the top three levels. Each level of the hierarchy is divided into three basic functions: Scheduler, Dispatcher, and Monitor. The requirements of each function depend on the hierarchical level in which it is used. For example, the scheduler at the Enterprise level must allocate production to individual factories and assign due dates; the scheduler at the Cell level must provide detailed start and stop times of individual operations. Finally, the system shall be distributed and have an open architecture. Open-architecture software is required so that the appropriate technology can be used at each level of the SFC hierarchy, and even at different instances within the same hierarchical level (for example, Factory A uses discrete-event simulation scheduling software, and Factory B uses an optimization-based scheduler). A distributed implementation is required to reduce the computational burden of the overall system and allow for localized control. A distributed, open-architecture implementation will also require standards for communication between hierarchical levels.
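
    A minimal sketch of the level hierarchy with a pluggable scheduler per level follows. The class and scheduler names are illustrative assumptions; the point is only that an open architecture lets each level, or even each factory, plug in its own scheduling technology.

      # Sketch: each control level owns a pluggable scheduler and pushes the
      # scheduled work down the hierarchy. Schedulers here are trivial stand-ins.
      from typing import Callable, List

      class ControlLevel:
          def __init__(self, name: str,
                       scheduler: Callable[[List[str]], List[str]]):
              self.name = name
              self.scheduler = scheduler   # e.g. simulation- or optimization-based
              self.children: List["ControlLevel"] = []

          def dispatch(self, jobs: List[str]) -> None:
              """Schedule jobs at this level, then hand them down the hierarchy."""
              ordered = self.scheduler(jobs)
              print(f"{self.name}: dispatching {ordered}")
              for child in self.children:
                  child.dispatch(ordered)

      enterprise = ControlLevel("Enterprise", lambda jobs: list(jobs))  # FIFO
      factory_a = ControlLevel("Factory A", sorted)  # a different scheduler here
      enterprise.children.append(factory_a)
      enterprise.dispatch(["order-7", "order-3"])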

  11. Medical team training and coaching in the Veterans Health Administration; assessment and impact on the first 32 facilities in the programme.

    PubMed

    Neily, Julia; Mills, Peter D; Lee, Pamela; Carney, Brian; West, Priscilla; Percarpio, Katherine; Mazzia, Lisa; Paull, Douglas E; Bagian, James P

    2010-08-01

    Communication is problematic in healthcare. The Veterans Health Administration is implementing Medical Team Training. The authors describe results of the first 32 of 130 sites to undergo the programme. This report is unique; it provides aggregate results of a crew resource-management programme for numerous facilities. Facilities were taught medical team training and implemented briefings, debriefings and other projects. The authors coached teams through consultative phone interviews over a year. Implementation teams self-reported implementation and rated programme impact: 1='no impact' and 5='significant impact.' We used logistic regression to examine implementation of briefing/debriefing. Ninety-seven per cent of facilities implemented briefings and debriefings, and all implemented an additional project. As of the final interview, 73% of OR and 67% of ICU implementation teams self-reported and rated staff impact 4-5. Eighty-six per cent of OR and 82% of ICU implementation teams self-reported and rated patient impact 4-5. Improved teamwork was reported by 84% of OR and 75% of ICU implementation teams. Efficiency improvements were reported by 94% of OR implementation teams. Almost all facilities (97%) reported a success story or avoiding an undesirable event. Sites with lower volume were more likely to conduct briefings/debriefings in all cases for all surgical services (p=0.03). Sites are implementing the programme with a positive impact on patients and staff, and improving teamwork, efficiency and safety. A unique feature of the programme is that implementation was facilitated through follow-up support. This may have contributed to the early success of the programme.

  12. Cleanroom Software Engineering Reference Model. Version 1.0.

    DTIC Science & Technology

    1996-11-01

    teams. It also serves as a baseline for continued evolution of Cleanroom practice. The scope of the CRM is software management, specification...in addition to project staff, participants include management, peer organization representatives, and customer representatives as appropriate for...2 Review the status of the process with management, the project team, peer groups, and the customer. These verification activities include

  13. The SOFIA Mission Control System Software

    NASA Astrophysics Data System (ADS)

    Heiligman, G. M.; Brock, D. R.; Culp, S. D.; Decker, P. H.; Estrada, J. C.; Graybeal, J. B.; Nichols, D. M.; Paluzzi, P. R.; Sharer, P. J.; Pampell, R. J.; Papke, B. L.; Salovich, R. D.; Schlappe, S. B.; Spriestersbach, K. K.; Webb, G. L.

    1999-05-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) will be delivered with a computerized mission control system (MCS). The MCS communicates with the aircraft's flight management system and coordinates the operations of the telescope assembly, mission-specific subsystems, and the science instruments. The software for the MCS must be reliable and flexible. It must be easily usable by many teams of observers with widely differing needs, and it must support non-intrusive access for education and public outreach. The technology must be appropriate for SOFIA's 20-year lifetime. The MCS software development process is an object-oriented, use-case-driven approach. The process is iterative: delivery will be phased over four "builds"; each build will be the result of many iterations; and each iteration will include analysis, design, implementation, and test activities. The team is geographically distributed, coordinating its work via Web pages, teleconferences, T.120 remote collaboration, and CVS (for Internet-enabled configuration management). The MCS software architectural design is derived in part from other observatories' experience. Some important features of the MCS are: distributed computing over several UNIX and VxWorks computers; fast throughput of time-critical data; use of third-party components, such as the Adaptive Communications Environment (ACE) and the Common Object Request Broker Architecture (CORBA); extensive configurability via stored, editable configuration files; and use of several computer languages so developers have "the right tool for the job". C++, Java, scripting languages, Interactive Data Language (from Research Systems, Int'l.), XML, and HTML will all be used in the final deliverables. This paper reports on work in progress, with the final product scheduled for delivery in 2001. This work was performed for Universities Space Research Association for NASA under contract NAS2-97001.

  14. Requirements Engineering in Building Climate Science Software

    ERIC Educational Resources Information Center

    Batcheller, Archer L.

    2011-01-01

    Software has an important role in supporting scientific work. This dissertation studies teams that build scientific software, focusing on the way that they determine what the software should do. These requirements engineering processes are investigated through three case studies of climate science software projects. The Earth System Modeling…

  15. Toward fidelity between specification and implementation

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.; Morrison, Jeff; Wu, Yunqing

    1994-01-01

    This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.
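
    The loop described above, in which test cases derived from state transition paths exercise the implementation, is easy to picture with a small example. The states and events below are invented for illustration; RMP's actual SCR-derived model is far larger.

      # Sketch: derive test cases from a state model by enumerating event
      # sequences the model accepts. States and events are invented examples.
      from itertools import product

      # (state, event) -> next state
      TRANSITIONS = {
          ("idle", "join"): "member",
          ("member", "send"): "member",
          ("member", "leave"): "idle",
      }

      def derive_test_cases(start: str, depth: int):
          """Yield accepted event sequences plus the state they should end in."""
          events = sorted({e for (_, e) in TRANSITIONS})
          for seq in product(events, repeat=depth):
              state = start
              for event in seq:
                  state = TRANSITIONS.get((state, event))
                  if state is None:
                      break
              else:
                  yield seq, state  # run seq on the implementation, expect state

      for case in derive_test_cases("idle", 3):
          print(case)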

  16. Architecture-Centric Development in Globally Distributed Projects

    NASA Astrophysics Data System (ADS)

    Sauer, Joachim

    In this chapter architecture-centric development is proposed as a means to strengthen the cohesion of distributed teams and to tackle challenges due to geographical and temporal distances and the clash of different cultures. A shared software architecture serves as blueprint for all activities in the development process and ties them together. Architecture-centric development thus provides a plan for task allocation, facilitates the cooperation of globally distributed developers, and enables continuous integration reaching across distributed teams. Advice is also provided for software architects who work with distributed teams in an agile manner.

  17. A new approach for instrument software at Gemini

    NASA Astrophysics Data System (ADS)

    Gillies, Kim; Nunez, Arturo; Dunn, Jennifer

    2008-07-01

    Gemini Observatory is now developing its next generation of astronomical instruments, the Aspen instruments. These new instruments are sophisticated and costly, requiring large, distributed, collaborative teams. Instrument software groups often include experienced team members with existing mature code. Gemini has taken its experience from the previous generation of instruments and current hardware and software technology to create an approach for developing instrument software that takes advantage of the strengths of our instrument builders and our own operations needs. This paper describes this new software approach, which couples a lightweight infrastructure and software library with aspects of modern agile software development. The Gemini Planet Imager instrument project, which is currently approaching its critical design review, is used to demonstrate aspects of this approach. New facilities under development will face similar issues in the future, and the approach presented here can be applied to other projects.

  18. A Framework for the Development of Scalable Heterogeneous Robot Teams with Dynamically Distributed Processing

    NASA Astrophysics Data System (ADS)

    Martin, Adrian

    As the applications of mobile robotics evolve, it has become increasingly less practical for researchers to design custom hardware and control systems for each problem. This research presents a new approach to control system design that looks beyond end-of-lifecycle performance and considers control system structure, flexibility, and extensibility. Toward these ends, the Control ad libitum philosophy is proposed, stating that to make significant progress in the real-world application of mobile robot teams, the control system must be structured such that teams can be formed in real time from diverse components. The Control ad libitum philosophy was applied to the design of the HAA (Host, Avatar, Agent) architecture: a modular hierarchical framework built with provably correct distributed algorithms. A control system for exploration and mapping, search and deploy, and foraging was developed to evaluate the architecture in three sets of hardware-in-the-loop experiments. First, the basic functionality of the HAA architecture was studied, specifically the ability to: a) dynamically form the control system, b) dynamically form the robot team, c) dynamically form the processing network, and d) handle heterogeneous teams. Secondly, the real-time performance of the distributed algorithms was tested and proved effective for the moderate-sized systems tested. Furthermore, the distributed Just-in-time Cooperative Simultaneous Localization and Mapping (JC-SLAM) algorithm demonstrated accuracy equal to or better than traditional approaches in resource-starved scenarios, while reducing exploration time significantly. The JC-SLAM strategies are also suitable for integration into many existing particle filter SLAM approaches, complementing their unique optimizations. Thirdly, the control system was subjected to concurrent software and hardware failures in a series of increasingly complex experiments. Even with unrealistically high rates of failure, the control system was able to successfully complete its tasks. The HAA implementation designed following the Control ad libitum philosophy proved to be capable of dynamic team formation and extremely robust against both hardware and software failure; and, due to the modularity of the system, there is significant potential for reuse of assets and future extensibility. One future goal is to make the source code publicly available and establish a forum for the development and exchange of new agents.

  19. Using ACIS on the Chandra X-ray Observatory as a Particle Radiation Monitor II

    NASA Technical Reports Server (NTRS)

    Grant, C. E.; Ford, P. G.; Bautz, M. W.; ODell, S. L.

    2012-01-01

    The Advanced CCD Imaging Spectrometer (ACIS) is an instrument on the Chandra X-ray Observatory. CCDs are vulnerable to radiation damage, particularly by soft protons in the radiation belts and during solar storms. The Chandra team has implemented procedures to protect ACIS during high-radiation events, including autonomous protection triggered by an on-board radiation monitor. Elevated temperatures have reduced the effectiveness of the on-board monitor. The ACIS team has therefore developed an algorithm that uses data from the CCDs themselves to detect periods of high radiation, and a flight software patch applying this algorithm is currently active on board the instrument. In this paper, we explore the ACIS response to particle radiation through comparisons with a number of external measures of the radiation environment. We hope to better understand the efficiency of the algorithm as a function of the flux and spectrum of the particles and the time profile of the radiation event.
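
    The abstract names the algorithm's input (CCD event data) but not its logic, so the Python below is a hypothetical stand-in rather than the flight algorithm: a running-mean threshold on per-frame event counts that flags sudden rate jumps. The window size, trigger factor, and floor are invented parameters.

      # Hypothetical stand-in for an on-board radiation trigger: flag frames
      # whose event count jumps well above the recent running mean.
      from collections import deque

      def radiation_trigger(counts, window=16, factor=5.0, floor=50.0):
          """Yield frame indices whose count exceeds factor * recent mean."""
          recent = deque(maxlen=window)
          for i, c in enumerate(counts):
              if recent and c > max(factor * sum(recent) / len(recent), floor):
                  yield i        # a flight system would command safing here
              recent.append(c)

      frames = [20, 22, 19, 21, 24, 400, 650, 700]   # synthetic storm at frame 5
      print(list(radiation_trigger(frames)))          # -> [5, 6]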

  20. The Emirates Mars Mission Science Data Center

    NASA Astrophysics Data System (ADS)

    Craft, James; Hammadi, Omran Al; DeWolfe, Alexandria; Staley, Bryan; Schafer, Corey; Pankratz, Chris

    2017-04-01

    The Emirates Mars Mission (EMM), led by the Mohammed Bin Rashid Space Center (MBRSC) in Dubai, United Arab Emirates, is expected to arrive at Mars in January 2021. The EMM Science Data Center (SDC) is to be developed as a joint effort between MBRSC and the University of Colorado's Laboratory for Atmospheric and Space Physics (LASP). The EMM SDC is responsible for the production, management, distribution, and archiving of science data collected from the three instruments on board the Hope spacecraft. With the respective SDC teams on opposite sides of the world, evolutionary techniques and cloud-based technologies are being utilized in the development of the EMM SDC. This presentation will provide a top-down view of the EMM SDC, summarizing the cloud-based technologies being implemented in the design, as well as the tools, best practices, and lessons learned for software development and management in a geographically distributed team.

  1. The Emirates Mars Mission Science Data Center

    NASA Astrophysics Data System (ADS)

    Craft, J.; Al Hammadi, O.; DeWolfe, A. W.; Staley, B.; Schafer, C.; Pankratz, C. K.

    2017-12-01

    The Emirates Mars Mission (EMM), led by the Mohammed Bin Rashid Space Center (MBRSC) in Dubai, United Arab Emirates, is expected to arrive at Mars in January 2021. The EMM Science Data Center (SDC) is to be developed as a joint effort between MBRSC and the University of Colorado's Laboratory for Atmospheric and Space Physics (LASP). The EMM SDC is responsible for the production, management, distribution, and archiving of science data collected from the three instruments on board the Hope spacecraft. With the respective SDC teams on opposite sides of the world, evolutionary techniques and cloud-based technologies are being utilized in the development of the EMM SDC. This presentation will provide a top-down view of the EMM SDC, summarizing the cloud-based technologies being implemented in the design, as well as the tools, best practices, and lessons learned for software development and management in a geographically distributed team.

  2. Is it possible to improve radiotherapy team members' communication skills? A randomized study assessing the efficacy of a 38-h communication skills training program.

    PubMed

    Gibon, Anne-Sophie; Merckaert, Isabelle; Liénard, Aurore; Libert, Yves; Delvaux, Nicole; Marchal, Serge; Etienne, Anne-Marie; Reynaert, Christine; Slachmuylder, Jean-Louis; Scalliet, Pierre; Van Houtte, Paul; Coucke, Philippe; Salamon, Emile; Razavi, Darius

    2013-10-01

    Optimizing communication between radiotherapy team members and patients and between colleagues requires training. This study used a randomized controlled design to assess the efficacy of a 38-h communication skills training program. Four radiotherapy teams were randomly assigned either to a training program or to a waiting list. Team members' communication skills and their self-efficacy to communicate in the context of an encounter with a simulated patient were the primary endpoints. These encounters were scheduled at baseline and after training for the training group, and at baseline and four months later for the waiting-list group. Encounters were audiotaped and transcribed. Transcripts were analyzed with content analysis software (LaComm) and by an independent rater. Eighty team members were included in the study. Compared to untrained team members, trained team members used more turns of speech with content oriented toward available resources in the team (relative rate [RR]=1.38; p=0.023), more assessment utterances (RR=1.69; p<0.001), more empathy (RR=4.05; p=0.037), more negotiation (RR=2.34; p=0.021) and more emotional words (RR=1.32; p=0.030), and their self-efficacy to communicate increased (p=0.024 and p=0.008, respectively). The training program was effective in improving team members' communication skills and their self-efficacy to communicate in the context of an encounter with a simulated patient. Future studies should assess the effect of this training program on communication with actual patients and their satisfaction. Moreover, a cost-benefit analysis is needed before implementing such an intensive training program on a broader scale. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  3. Using Animated Language Software with Children Diagnosed with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Mulholland, Rita; Pete, Ann Marie; Popeson, Joanne

    2008-01-01

    We examined the impact of using an animated software program (Team Up With Timo) on the expressive and receptive language abilities of five children ages 5-9 in a self-contained Learning and Language Disabilities class. We chose to use Team Up With Timo (Animated Speech Corporation) because it allows the teacher to personalize the animation for…

  4. Interprofessional Health Team Communication About Hospital Discharge: An Implementation Science Evaluation Study.

    PubMed

    Bahr, Sarah J; Siclovan, Danielle M; Opper, Kristi; Beiler, Joseph; Bobay, Kathleen L; Weiss, Marianne E

    The Consolidated Framework for Implementation Research guided formative evaluation of the implementation of a redesigned interprofessional team rounding process. The purpose of the redesigned process was to improve health team communication about hospital discharge. Themes emerging from interviews of patients, nurses, and providers revealed the inherent value and positive characteristics of the new process, but also workflow, team hierarchy, and process challenges to successful implementation. The evaluation identified actionable recommendations for modifying the implementation process.

  5. Streamlining Software Aspects of Certification: Report on the SSAC Survey

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.; Dorsey, Cheryl A.; Knight, John C.; Leveson, Nancy G.; McCormick, G. Frank

    1999-01-01

    The aviation system now depends on information technology more than ever before to ensure safety and efficiency. To address concerns about the efficacy of software aspects of the certification process, the Federal Aviation Administration (FAA) began the Streamlining Software Aspects of Certification (SSAC) program. The SSAC technical team was commissioned to gather data, analyze results, and propose recommendations to maximize efficiency and minimize cost and delay, without compromising safety. The technical team conducted two public workshops to identify and prioritize software approval issues, and conducted a survey to validate the most urgent of those issues. The SSAC survey, containing over two hundred questions about the FAA's software approval process, reached over four hundred industry software developers, aircraft manufacturers, and FAA designated engineering representatives. Three hundred people responded. This report presents the SSAC program rationale, survey process, preliminary findings, and recommendations.

  6. Strengthening Interprofessional Requirements Engineering Through Action Sheets: A Pilot Study.

    PubMed

    Kunz, Aline; Pohlmann, Sabrina; Heinze, Oliver; Brandner, Antje; Reiß, Christina; Kamradt, Martina; Szecsenyi, Joachim; Ose, Dominik

    2016-10-18

    The importance of information and communication technology for healthcare is steadily growing. Newly developed tools address different user groups: physicians, other health care professionals, social workers, patients, and family members. Since many different actors with different expertise and perspectives are often involved in the development process, it can be a challenge to integrate the user-reported requirements of these heterogeneous user groups. Nevertheless, understanding and considering user requirements is a prerequisite for building a feasible technical solution. In the course of the project presented here, it proved difficult to derive clear action steps and priorities for the development process from the primary requirements compilation. Even though a regular exchange between the involved teams took place, there was a lack of a common language. The objective of this paper is to show how the existing requirements catalog was subdivided into specific, prioritized, and coherent working packages while the cooperation of multiple interprofessional teams within one development project was reorganized at the same time. In the case presented, the manner of cooperation was reorganized and a new instrument called an Action Sheet was implemented. This paper introduces the newly developed methodology, which was meant to smooth the development of a user-centered software product and to restructure interprofessional cooperation. There were 10 focus groups in which the views of patients with colorectal cancer, physicians, and other health care professionals were collected in order to create a requirements catalog for developing a personal electronic health record. Data were audio- and videotaped, transcribed verbatim, and thematically analyzed. Afterwards, the requirements catalog was reorganized in the form of Action Sheets, which supported the interprofessional cooperation in the development of a personal electronic health record for the Rhine-Neckar region. In order to improve the interprofessional cooperation, the idea arose to align the requirements arising from the implementation project with the method of software development applied by the technical development team. This was realized by restructuring the original requirements set in a standardized way and under continuous adjustment between both teams. As a result, not only the way of displaying user demands but also the interprofessional cooperation was steered in a new direction. User demands must be taken into account from the very beginning of the development process, but it is not always obvious how to bring them together with IT know-how and knowledge of the contextual factors of the health care system. Action Sheets seem to be an effective tool for making the software development process more tangible and convertible for all connected disciplines. Furthermore, the working method turned out to support the interprofessional exchange of ideas.
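
    To make the method concrete, an Action Sheet can be pictured as a small structured record: a specific, prioritized, coherent working package distilled from the requirements catalog. The fields in the Python sketch below are hypothetical, since the paper does not publish a schema; it only illustrates how prioritized sheets could drive a shared backlog.

      # Hypothetical Action Sheet record; field names are invented.
      from dataclasses import dataclass, field
      from typing import List

      @dataclass(order=True)
      class ActionSheet:
          priority: int                      # 1 = highest; drives backlog order
          title: str = field(compare=False)
          requirements: List[str] = field(compare=False, default_factory=list)
          status: str = field(compare=False, default="open")

      backlog = [
          ActionSheet(2, "document upload", ["patients attach PDF reports"]),
          ActionSheet(1, "medication list", ["clinicians edit", "patients view"]),
      ]
      for sheet in sorted(backlog):          # interprofessional review, top-down
          print(sheet.priority, sheet.title, sheet.status)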

  7. Strengthening Interprofessional Requirements Engineering Through Action Sheets: A Pilot Study

    PubMed Central

    Pohlmann, Sabrina; Heinze, Oliver; Brandner, Antje; Reiß, Christina; Kamradt, Martina; Szecsenyi, Joachim; Ose, Dominik

    2016-01-01

    Background The importance of information and communication technology for healthcare is steadily growing. Newly developed tools address different user groups: physicians, other health care professionals, social workers, patients, and family members. Since many different actors with different expertise and perspectives are often involved in the development process, it can be a challenge to integrate the user-reported requirements of these heterogeneous user groups. Nevertheless, understanding and considering user requirements is a prerequisite for building a feasible technical solution. In the course of the project presented here, it proved difficult to derive clear action steps and priorities for the development process from the primary requirements compilation. Even though a regular exchange between the involved teams took place, there was a lack of a common language. Objective The objective of this paper is to show how the existing requirements catalog was subdivided into specific, prioritized, and coherent working packages while the cooperation of multiple interprofessional teams within one development project was reorganized at the same time. In the case presented, the manner of cooperation was reorganized and a new instrument called an Action Sheet was implemented. This paper introduces the newly developed methodology, which was meant to smooth the development of a user-centered software product and to restructure interprofessional cooperation. Methods There were 10 focus groups in which the views of patients with colorectal cancer, physicians, and other health care professionals were collected in order to create a requirements catalog for developing a personal electronic health record. Data were audio- and videotaped, transcribed verbatim, and thematically analyzed. Afterwards, the requirements catalog was reorganized in the form of Action Sheets, which supported the interprofessional cooperation in the development of a personal electronic health record for the Rhine-Neckar region. Results In order to improve the interprofessional cooperation, the idea arose to align the requirements arising from the implementation project with the method of software development applied by the technical development team. This was realized by restructuring the original requirements set in a standardized way and under continuous adjustment between both teams. As a result, not only the way of displaying user demands but also the interprofessional cooperation was steered in a new direction. Conclusions User demands must be taken into account from the very beginning of the development process, but it is not always obvious how to bring them together with IT know-how and knowledge of the contextual factors of the health care system. Action Sheets seem to be an effective tool for making the software development process more tangible and convertible for all connected disciplines. Furthermore, the working method turned out to support the interprofessional exchange of ideas. PMID:27756716

  8. Implementing clinical guidelines in stroke: a qualitative study of perceived facilitators and barriers.

    PubMed

    Donnellan, Claire; Sweetman, S; Shelley, E

    2013-08-01

    Clinical guidelines are frequently used as a mechanism for implementing evidence-based practice. However, research indicates that health professionals vary in the extent to which they adhere to these guidelines. This study aimed to explore the perceptions of stakeholders and health professionals regarding the facilitators of and barriers to implementing national stroke guidelines in Ireland. Qualitative interviews using focus groups were conducted with stakeholders (n=3) and multidisciplinary team members from hospitals involved in stroke care (n=7). All focus group interviews were semi-structured, using open-ended questions. Data were managed and analysed using NVivo 9 software. The main themes to emerge from the focus groups with stakeholders and hospital multidisciplinary teams were very similar in terms of topics discussed. These were resources, national stroke guidelines as a tool for change, characteristics of national stroke guidelines, advocacy at local level and community stroke care challenges. Facilitators perceived by stakeholders and health professionals included having dedicated resources, user-friendly guidelines relevant at local level and having supportive advocates on the ground. Barriers were inadequate resources, poor guideline characteristics and insufficient training and education. This study highlights health professionals' perspectives regarding many key concepts which may affect the implementation of stroke care guidelines. The introduction of stroke clinical guidelines at a national level is not sufficient to improve health care quality; they should be incorporated in a quality assurance cycle with education programmes and feedback from surveys of clinical practice. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  9. 3D Visualization for Phoenix Mars Lander Science Operations

    NASA Technical Reports Server (NTRS)

    Edwards, Laurence; Keely, Leslie; Lees, David; Stoker, Carol

    2012-01-01

    Planetary surface exploration missions present considerable operational challenges in the form of substantial communication delays, limited communication windows, and limited communication bandwidth. 3D visualization software was developed and delivered to the 2008 Phoenix Mars Lander (PML) mission. The components of the system include an interactive 3D visualization environment called Mercator, terrain reconstruction software called the Ames Stereo Pipeline, and a server providing distributed access to terrain models. The software was successfully utilized during the mission for science analysis, site understanding, and science operations activity planning. A terrain server was implemented that provided distribution of terrain models from a central repository to clients running the Mercator software. The Ames Stereo Pipeline generates accurate, high-resolution, texture-mapped, 3D terrain models from stereo image pairs. These terrain models can then be visualized within the Mercator environment. The central cross-cutting goal for these tools is to provide an easy-to-use, high-quality, full-featured visualization environment that enhances the mission science team's ability to develop low-risk, productive science activity plans. In addition, for the Mercator and Viz visualization environments, extensibility and adaptability to different missions and application areas are key design goals.
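
    The terrain server's role, distributing models from a central repository to Mercator clients, can be pictured as a fetch-and-cache client. The server address, URL layout, and file format below are hypothetical placeholders; the actual PML interface is not described in the abstract.

      # Sketch: pull a terrain model from a central server once, then reuse the
      # local copy. Server URL and file layout are hypothetical.
      import pathlib
      import urllib.request

      SERVER = "http://terrain.example.org"   # placeholder address
      CACHE = pathlib.Path("terrain_cache")

      def fetch_terrain(site: str, sol: int) -> pathlib.Path:
          """Download a terrain model on first use; serve from cache afterwards."""
          CACHE.mkdir(exist_ok=True)
          local = CACHE / f"{site}_sol{sol:04d}.obj"
          if not local.exists():
              urllib.request.urlretrieve(f"{SERVER}/{site}/sol{sol:04d}.obj",
                                         str(local))
          return local

      print(fetch_terrain("workspace", sol=12))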

  10. The ALMA software architecture

    NASA Astrophysics Data System (ADS)

    Schwarz, Joseph; Farris, Allen; Sommer, Heiko

    2004-09-01

    The software for the Atacama Large Millimeter Array (ALMA) is being developed by many institutes on two continents. The software itself will function in a distributed environment, from the 0.5-14 km baselines that separate antennas to the larger distances that separate the array site at the Llano de Chajnantor in Chile from the operations and user support facilities in Chile, North America and Europe. Distributed development demands 1) interfaces that allow separated groups to work with minimal dependence on their counterparts at other locations; and 2) a common architecture to minimize duplication and ensure that developers can always perform similar tasks in a similar way. The Container/Component model provides a blueprint for the separation of functional from technical concerns: application developers concentrate on implementing functionality in Components, which depend on Containers to provide them with services such as access to remote resources, transparent serialization of entity objects to XML, logging, error handling and security. Early system integrations have verified that this architecture is sound and that developers can successfully exploit its features. The Containers and their services are provided by a system-oriented development team as part of the ALMA Common Software (ACS), middleware that is based on CORBA.
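
    A minimal sketch of the Container/Component separation, using invented class names rather than the real ACS API, is given below: the component carries only functional code, while the container injects a technical service (here, logging) on activation.

      # Sketch of Container/Component separation; names are illustrative, not ACS.
      import logging

      class Container:
          """Supplies technical services so components stay purely functional."""
          def __init__(self):
              self.logger = logging.getLogger("acs-like")

          def activate(self, component_cls):
              component = component_cls()
              component.logger = self.logger   # dependency injection
              return component

      class AntennaControl:
          """Functional component: no logging/CORBA/serialization code of its own."""
          def point(self, az: float, el: float):
              self.logger.info("pointing antenna to az=%.1f el=%.1f", az, el)

      logging.basicConfig(level=logging.INFO)
      antenna = Container().activate(AntennaControl)
      antenna.point(120.0, 45.0)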

  11. Real-Time Multimission Event Notification System for Mars Relay

    NASA Technical Reports Server (NTRS)

    Wallick, Michael N.; Allard, Daniel A.; Gladden, Roy E.; Wang, Paul; Hy, Franklin H.

    2013-01-01

    As the Mars Relay Network is in constant flux (missions and teams going through their daily workflow), it is imperative that users are aware of such state changes. For example, a change by an orbiter team can affect operations on a lander team. This software provides an ambient view of the real-time status of the Mars network. The Mars Relay Operations Service (MaROS) comprises a number of tools to coordinate, plan, and visualize various aspects of the Mars Relay Network. As part of MaROS, a feature set was developed that operates on several levels of the software architecture. These levels include a Web-based user interface, a back-end "ReSTlet" built in Java, and databases that store the data as it is received from the network. The result is a real-time event notification and management system, so mission teams can track and act upon events on a moment-by-moment basis. This software retrieves events from MaROS and displays them to the end user. Updates happen in real time, i.e., messages are pushed to the user while logged into the system, and queued when the user is not online for later viewing. The software does not do away with email notifications, but augments them with in-line notifications. Further, this software expands the events that can generate a notification, and allows user-generated notifications. Existing software sends a smaller subset of mission-generated notifications via email. A common complaint of users was that the system-generated emails often "get lost" among the other email that comes in. This software allows for an expanded set of notifications (including user-generated ones) displayed in-line in the program. By separating notifications, this can improve a user's workflow.
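
    The push-or-queue behavior can be modeled schematically: events are pushed to users who are logged in and queued for everyone else, to be delivered at the next login. The Python below is a schematic model, not the MaROS implementation.

      # Schematic in-line notification hub: push to online users, queue for
      # offline users until their next login. Not the MaROS code.
      from collections import defaultdict

      class NotificationHub:
          def __init__(self):
              self.online = set()
              self.pending = defaultdict(list)   # user -> queued events

          def publish(self, user: str, event: str) -> None:
              if user in self.online:
                  print(f"push to {user}: {event}")    # real-time, in-line
              else:
                  self.pending[user].append(event)     # held for later viewing

          def login(self, user: str) -> None:
              self.online.add(user)
              for event in self.pending.pop(user, []):
                  print(f"queued for {user}: {event}")

      hub = NotificationHub()
      hub.publish("lander-team", "orbiter pass rescheduled")  # offline: queued
      hub.login("lander-team")                                # delivered now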

  12. Next generation of decision making software for nanopatterns characterization: application to semiconductor industry

    NASA Astrophysics Data System (ADS)

    Dervilllé, A.; Labrosse, A.; Zimmermann, Y.; Foucher, J.; Gronheid, R.; Boeckx, C.; Singh, A.; Leray, P.; Halder, S.

    2016-03-01

    The dimensional scaling in IC manufacturing strongly drives the demands on CD and defect metrology techniques and their measurement uncertainties. Defect review has become as important as CD metrology, and together they create a new metrology paradigm because they create a completely new need for flexible, robust and scalable metrology software. Current software architectures and metrology algorithms are performant, but they must be pushed to a higher level in order to keep pace with roadmap speed and requirements. For example: managing defects and CD in a one-step algorithm, customizing algorithms and output features for each R&D team environment, and providing software updates every day or every week so that R&D teams can easily explore various development strategies. The final goal is to avoid spending hours and days manually tuning algorithms to analyze metrology data, and to allow R&D teams to stay focused on their expertise. The benefits are drastic cost reduction, more efficient R&D teams and better process quality. In this paper, we propose a new generation of software platform and development infrastructure which can integrate specific metrology business modules. For example, we will show the integration of a chemistry module dedicated to electronics materials such as Directed Self-Assembly features. We will show a new generation of image analysis algorithms which are able to manage, at the same time, defect rates, image classification, CD and roughness measurements with high-throughput performance in order to be compatible with HVM. In a second part, we will assess the reliability, the customization of algorithms and the software platform's capability to follow new specific semiconductor metrology software requirements: flexibility, robustness, high throughput and scalability. Finally, we will demonstrate how such an environment has allowed a drastic reduction of data analysis cycle time.

  13. "Think different": a qualitative assessment of commercial innovation for diabetes information technology programs.

    PubMed

    Rupcic, Sonia; Tamrat, Tigest; Kachnowski, Stan

    2012-11-01

    This study reviews the state of diabetes information technology (IT) initiatives and presents a set of recommendations for improvement based on interviews with commercial IT innovators. Semistructured interviews were conducted with 10 technology developers, representing 12 of the most successful IT companies in the world. Average interview time was approximately 45 min. Interviews were audio-recorded, transcribed, and entered into ATLAS.ti for qualitative data analysis. Themes were identified through a process of selective and open coding by three researchers. We identified two practices, common among successful IT companies, that have allowed them to avoid or surmount the challenges that confront healthcare professionals involved in diabetes IT development: (1) employing a diverse research team of software developers and engineers, statisticians, consumers, and business people and (2) conducting rigorous research and analytics on technology use and user preferences. Because of the nature of their respective fields, healthcare professionals and commercial innovators face different constraints. With these in mind we present three recommendations, informed by practices shared by successful commercial developers, for those involved in developing diabetes IT programming: (1) include software engineers on the implementation team throughout the intervention, (2) conduct more extensive baseline testing of users and monitor the usage data derived from the technology itself, and (3) pursue Institutional Review Board-exempt research.

  14. Iteratively Developing an mHealth HIV Prevention Program for Sexual Minority Adolescent Men

    PubMed Central

    Prescott, Tonya L.; Philips, Gregory L.; Bull, Sheana S.; Parsons, Jeffrey T.; Mustanski, Brian

    2015-01-01

    Five activities were implemented between November 2012 and June 2014 to develop an mHealth HIV prevention program for adolescent gay, bisexual, and queer men (AGBM): (1) focus groups to gauge the acceptability of the program components; (2) ongoing development of content; (3) Content Advisory Teams to confirm the tone, flow, and understandability of program content; (4) an internal team test to alpha test software functionality; and (5) a beta test of the protocol and intervention messages. Findings suggest that AGBM preferred content that was positive and friendly but, at the same time, did not try to sound like a peer. They deemed the number of daily text messages (i.e., 8–15 per day) to be acceptable. The Text Buddy component was well received, but youth needed concrete direction about appropriate discussion topics. AGBM also found the self-safety assessment acceptable. Its feasible implementation in the beta test suggests that AGBM can actively self-determine their potential danger when participating in sexual health programs. Partnering with the target population in intervention development is critical to ensure that a salient final product and feasible protocol are created. PMID:26238038

  15. Verification and validation of a reliable multicast protocol

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1995-01-01

    This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.
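
    As a toy illustration of deriving test cases from state transition paths, the Python sketch below enumerates event sequences through a small state model. The states and events are invented and far simpler than the actual RMP model:

      # (state, event) -> next state, for a miniature protocol model.
      MODEL = {
          ("idle", "join"): "member",
          ("member", "send"): "member",
          ("member", "leave"): "idle",
      }

      def transition_paths(model, start, depth):
          """Enumerate all event sequences up to the given length; each
          path becomes one test case for the implementation."""
          paths = [([], start)]
          for _ in range(depth):
              next_paths = []
              for events, state in paths:
                  for (s, e), nxt in model.items():
                      if s == state:
                          next_paths.append((events + [e], nxt))
              paths = next_paths
              yield from paths

      for events, final_state in transition_paths(MODEL, "idle", 2):
          print("test:", events, "-> expect state", final_state)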

  16. The GenABEL Project for statistical genomics.

    PubMed

    Karssen, Lennart C; van Duijn, Cornelia M; Aulchenko, Yurii S

    2016-01-01

    Development of free/libre open source software is usually done by a community of people with an interest in the tool. For scientific software, however, this is less often the case. Most scientific software is written by only a few authors, often a student working on a thesis. Once the paper describing the tool has been published, the tool is no longer developed further and is left to its own devices. Here we describe the broad, multidisciplinary community we formed around a set of tools for statistical genomics. The GenABEL project for statistical omics actively promotes open interdisciplinary development of statistical methodology and its implementation in efficient and user-friendly software under an open source licence. The software tools developed within the project collectively make up the GenABEL suite, which currently consists of eleven tools. The open framework of the project actively encourages involvement of the community in all stages, from formulation of methodological ideas to application of software to specific data sets. A web forum is used to channel user questions and discussions, further promoting the use of the GenABEL suite. Developer discussions take place on a dedicated mailing list, and development is further supported by robust development practices including use of public version control, code review and continuous integration. Use of this open science model attracts contributions from users and developers outside the "core team", facilitating agile statistical omics methodology development and fast dissemination.

  17. Interprofessional Team's Perception of Care Delivery After Implementation of a Pediatric Pain and Sedation Protocol.

    PubMed

    Staveski, Sandra L; Wu, May; Tesoro, Tiffany M; Roth, Stephen J; Cisco, Michael J

    2017-06-01

    Pain and agitation are common experiences of patients in pediatric cardiac intensive care units. Variability in assessments by health care providers, communication, and treatment of pain and agitation creates challenges in management of pain and sedation. To develop guidelines for assessment and treatment of pain, agitation, and delirium in the pediatric cardiac intensive care unit in an academic children's hospital and to document the effects of implementation of the guidelines on the interprofessional team's perception of care delivery and team function. Before and after implementation of the guidelines, interprofessional team members were surveyed about their perception of analgesia, sedation, and delirium management. Members of the interprofessional team felt more comfortable with pain and sedation management after implementation of the guidelines. Team members reported improvements in team communication about patients' comfort. Members thought that important information was less likely to be lost during transfer of care. They also noted that the team carried out comfort management plans and used pharmacological and nonpharmacological therapies better after implementation of the guidelines than they did before implementation. Guidelines for pain and sedation management were associated with perceived improvements in team function and patient care by members of the interprofessional team. ©2017 American Association of Critical-Care Nurses.

  18. Multidisciplinary Concurrent Design Optimization via the Internet

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Kelkar, Atul G.; Koganti, Gopichand

    2001-01-01

    A methodology is presented which uses commercial design and analysis software and the Internet to perform concurrent multidisciplinary optimization. The methodology provides a means to develop multidisciplinary designs without requiring that all software be accessible from the same local network. The procedures are amenable to design and development teams whose members, expertise, and respective software are not geographically co-located. This methodology facilitates multidisciplinary teams working concurrently on a design problem of common interest. Partitioning the design software across different machines allows each constituent software package to be used on the machine that provides the most economy and efficiency. The methodology is demonstrated on the concurrent design of a spacecraft structure and attitude control system. Results are compared to those derived from performing the design with an autonomous FORTRAN program.

  19. GFAST Software Demonstration

    NASA Image and Video Library

    2017-03-17

    NASA engineers and test directors gather in Firing Room 3 in the Launch Control Center at NASA's Kennedy Space Center in Florida, to watch a demonstration of the automated command and control software for the agency's Space Launch System (SLS) and Orion spacecraft. In front, far right, is Charlie Blackwell-Thompson, launch director for Exploration Mission 1 (EM-1). The software is called the Ground Launch Sequencer. It will be responsible for nearly all of the launch commit criteria during the final phases of launch countdowns. The Ground and Flight Application Software Team (GFAST) demonstrated the software. It was developed by the Command, Control and Communications team in the Ground Systems Development and Operations (GSDO) Program. GSDO is helping to prepare the center for the first test flight of Orion atop the SLS on EM-1.

  20. Space Shuttle Ascent Flight Design Process: Evolution and Lessons Learned

    NASA Technical Reports Server (NTRS)

    Picka, Bret A.; Glenn, Christopher B.

    2011-01-01

    The Space Shuttle Ascent Flight Design team is responsible for defining a launch-to-orbit trajectory profile that satisfies all programmatic mission objectives and defines the ground and onboard reconfiguration requirements for this high-speed and demanding flight phase. This design, verification, and reconfiguration process ensures that all applicable mission scenarios are enveloped within integrated vehicle and spacecraft certification constraints and criteria, and includes the design of the nominal ascent profile and trajectory profiles for both uphill and ground-to-ground aborts. The team also develops a wide array of associated products for training, avionics flight software verification, the onboard crew, and operations facilities. These key ground and onboard products provide the ultimate users and operators the necessary insight and situational awareness for trajectory dynamics, performance and event sequences, abort mode boundaries and moding, flight performance and impact predictions for launch vehicle stages for use in range safety, and flight software performance. These products also provide the necessary insight into, or reconfiguration of, communications and tracking systems, launch collision avoidance requirements, and day-of-launch crew targeting and onboard guidance, navigation, and flight control updates that incorporate the final vehicle configuration and environment conditions for the mission. Over the course of the Space Shuttle Program, ascent trajectory design and mission planning has evolved in order to improve program flexibility and reduce cost, while maintaining outstanding data quality. Along the way, the team has implemented innovative solutions and technologies in order to overcome significant challenges. A number of these solutions may have applicability to future human spaceflight programs.

  1. Video recording of neonatal resuscitation: A feasibility study to inform widespread adoption

    PubMed Central

    Shivananda, Sandesh; Twiss, Jennifer; el-Gouhary, Enas; el-Helou, Salhab; Williams, Connie; Murthy, Prashanth; Suresh, Gautham

    2017-01-01

    AIM To determine the feasibility of introducing video recording (VR) of neonatal resuscitation (NR) in a perinatal centre. METHODS This was a prospective cohort quality improvement study of preterm infants and their caregivers. Based on evidence and the experience of other centers using VR intervention, a contextually relevant implementation and evaluation strategy was designed in the planning phase. The components of the intervention were a pre-resuscitation team huddle, VR of NR, and video debriefing (VD), all occurring on the same day. Various domains of feasibility and sustainability, as well as feasibility criteria, were predefined. Data for analysis were collected using quantitative and qualitative methods. RESULTS Seventy-one caregivers participated in VD of 14 NRs facilitated by six trained instructors. Ninety-one percent of caregivers perceived enhanced learning and patient safety, and 48 issues were identified related to policy, caregiver roles, and latent safety threats. Ninety percent of caregivers expressed their willingness to participate in VD activity and supported the idea of integrating it into the resuscitation team routine. Eighty-three percent and 50% of instructors, respectively, expressed satisfaction with the video review software and the quality of the audio VR. No issues were reported regarding maintenance of infant or caregivers' confidentiality or erasure of videos. Criteria for feasibility were met (refusal rate of < 10%, VR performed on > 50% of occasions, and < 20% of caregivers perceiving a negative impact on team performance). Necessary adaptations to enhance sustainability were identified. CONCLUSION VR of NR as a standard-of-care quality assurance activity to enhance caregivers' learning and create opportunities that improve patient safety is feasible. Despite its complexity, with inherent challenges in implementation, the intervention was acceptable, implementable, and potentially sustainable with adaptations. PMID:28224098

  2. The widest practicable dissemination: The NASA technical report server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Gottlich, Gretchen L.; Bianco, David J.; Binkley, Robert L.; Kellogg, Yvonne D.; Paulson, Sharon S.; Beaumont, Chris J.; Schmunk, Robert B.; Kurtz, Michael; Accomazzi, Alberto

    1995-01-01

    The search for innovative methods to distribute NASA's information led a grass-roots team to create the NASA Technical Report Server (NTRS), which uses the World Wide Web and other popular Internet-based information systems as search engines. The NTRS is an inter-center effort which provides uniform access to various distributed publication servers residing on the Internet. Users have immediate desktop access to technical publications from NASA centers and institutes. This paper presents the NTRS architecture, usage metrics, and the lessons learned while implementing and maintaining the services over the initial 6-month period. The NTRS is largely constructed with freely available software running on existing hardware. NTRS builds upon existing hardware and software, and the resulting additional exposure for the body of literature contained will allow NASA to ensure that its institutional knowledge base will continue to receive the widest practicable and appropriate dissemination.

  3. Safety and Mission Assurance for In-House Design Lessons Learned from Ares I Upper Stage

    NASA Technical Reports Server (NTRS)

    Anderson, Joel M.

    2011-01-01

    This viewgraph presentation identifies lessons learned in the course of the Ares I Upper Stage design and in-house development effort. The contents include: 1) Constellation Organization; 2) Upper Stage Organization; 3) Presentation Structure; 4) Lesson-Importance of Systems Engineering/Integration; 5) Lesson-Importance of Early S&MA Involvement; 6) Lesson-Importance of Appropriate Staffing Levels; 7) Lesson-Importance S&MA Team Deployment; 8) Lesson-Understanding of S&MA In-Line Engineering versus Assurance; 9) Lesson-Importance of Close Coordination between Supportability and Reliability/Maintainability; 10) Lesson-Importance of Engineering Data Systems; 11) Lesson-Importance of Early Development of Supporting Databases; 12) Lesson-Importance of Coordination with Safety Assessment/Review Panels; 13) Lesson-Implementation of Software Reliability; 14) Lesson-Implementation of S&MA Technical Authority/Chief S&MA Officer; 15) Lesson-Importance of S&MA Evaluation of Project Risks; 16) Lesson-Implementation of Critical Items List and Government Mandatory Inspections; 17) Lesson-Implementation of Critical Items List Mandatory Inspections; 18) Lesson-Implementation of Test Article Safety Analysis; and 19) Lesson-Importance of Procurement Quality.

  4. Implementing and integrating a clinically driven electronic medical record for radiation oncology in a large medical enterprise.

    PubMed

    Kirkpatrick, John P; Light, Kim L; Walker, Robyn M; Georgas, Debra L; Antoine, Phillip A; Clough, Robert W; Cozart, Heidi B; Yin, Fang-Fang; Yoo, Sua; Willett, Christopher G

    2013-01-01

    While our department is heavily invested in computer-based treatment planning, we historically relied on paper-based charts for management of Radiation Oncology patients. In early 2009, we initiated the process of conversion to an electronic medical record (EMR), eliminating the need for paper charts. Key goals included the ability to readily access information wherever and whenever needed, without compromising safety, treatment quality, confidentiality, or productivity. In February 2009, we formed a multi-disciplinary team of Radiation Oncology physicians, nurses, therapists, administrators, physicists/dosimetrists, and information technology (IT) specialists, along with staff from the Duke Health System IT department. The team identified all existing processes and associated information/reports, established the framework for the EMR system and generated, tested and implemented specific EMR processes. Two broad classes of information were identified: information which must be readily accessed by anyone in the health system versus that used solely within the Radiation Oncology department. Examples of the former are consultation reports, weekly treatment check notes, and treatment summaries; the latter includes treatment plans, daily therapy records, and quality assurance reports. To manage the former, we utilized the enterprise-wide system, which required an intensive effort to design and implement procedures to export information from Radiation Oncology into that system. To manage "Radiation Oncology" data, we used our existing system (ARIA, Varian Medical Systems). The ability to access both systems simultaneously from a single workstation (WS) was essential, requiring new WSs and modified software. As of January 2010, all new treatments were managed solely with an EMR. We find that an EMR makes information more widely accessible and does not compromise patient safety, treatment quality, or confidentiality. However, compared to paper charts, the time required by clinicians to access/enter patient information has substantially increased. While productivity is improving with experience, substantial growth will require better integration of the system components, decreased access times, and improved user interfaces. $127K was spent on new hardware and software; elimination of paper yields projected savings of $21K/year. One year after conversion to an EMR, more than 90% of department staff favored the EMR over the previous paper charts. Successful implementation of a Radiation Oncology EMR required not only the effort and commitment of all functions of the department, but support from senior health system management, corporate IT, and vendors. Realization of the full benefits of an EMR will require experience, faster/better integrated software, and continual improvement in underlying clinical processes.

  5. Neurophysiological analytics for all! Free open-source software tools for documenting, analyzing, visualizing, and sharing using electronic notebooks.

    PubMed

    Rosenberg, David M; Horn, Charles C

    2016-08-01

    Neurophysiology requires an extensive workflow of information analysis routines, which often includes incompatible proprietary software, introducing limitations based on financial costs, transfer of data between platforms, and the ability to share. An ecosystem of free open-source software exists to fill these gaps, including thousands of analysis and plotting packages written in Python and R, which can be implemented in a sharable and reproducible format, such as the Jupyter electronic notebook. This tool chain can largely replace current routines by importing data, producing analyses, and generating publication-quality graphics. An electronic notebook like Jupyter allows these analyses, along with documentation of procedures, to display locally or remotely in an internet browser, and can be saved as an HTML, PDF, or other file format for sharing with team members and the scientific community. The present report illustrates these methods using data from electrophysiological recordings of the musk shrew vagus, a model system used to investigate gut-brain communication, for example in cancer chemotherapy-induced emesis. We show methods for spike sorting (including statistical validation), spike train analysis, and analysis of compound action potentials in notebooks. Raw data and code are available from notebooks in data supplements or from an executable online version, which replicates all analyses without installing software, an implementation of reproducible research. This demonstrates the promise of combining disparate analyses into one platform, along with the ease of sharing this work. In an age of diverse, high-throughput computational workflows, this methodology can increase efficiency, transparency, and the collaborative potential of neurophysiological research. Copyright © 2016 the American Physiological Society.
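
    For flavor, the following Python sketch performs the kind of thresholded spike detection such a notebook might contain, using synthetic data rather than the authors' recordings:

      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(0)
      fs = 20_000                                  # sampling rate, Hz
      t = np.arange(0, 1.0, 1 / fs)
      trace = rng.normal(0, 1, t.size)             # noise stands in for a recording
      trace[::4000] += 8                           # inject obvious "spikes"

      # Robust noise estimate (median absolute deviation) sets the threshold.
      threshold = 4 * np.median(np.abs(trace)) / 0.6745
      spike_idx = np.flatnonzero(trace > threshold)

      plt.plot(t, trace, lw=0.5)
      plt.plot(t[spike_idx], trace[spike_idx], "r.")
      plt.xlabel("time (s)"); plt.ylabel("amplitude (a.u.)")
      plt.savefig("spikes.png")                    # or render inline in a notebook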

  6. Implementation of a team-based learning course: Work required and perceptions of the teaching team.

    PubMed

    Morris, Jenny

    2016-11-01

    Team-based learning was selected as a strategy to help engage pre-registration undergraduate nursing students in a second-year evidence-informed decision making course. To detail the preparatory work required to deliver a team-based learning course; and to explore the perceptions of the teaching team of their first experience using team-based learning. Descriptive evaluation. Information was extracted from a checklist and process document developed by the course leader to document the work required prior to and during implementation. Members of the teaching team were interviewed by a research assistant at the end of the course using a structured interview schedule to explore perceptions of first time implementation. There were nine months between the time the decision was made to use team-based learning and the first day of the course. Approximately 60 days were needed to reconfigure the course for team-based learning delivery, develop the knowledge and expertise of the teaching team, and develop and review the resources required for the students and the teaching team. This reduced to around 12 days for the subsequent delivery. Interview data indicated that the teaching team were positive about team-based learning, felt prepared for the course delivery and did not identify any major problems during this first implementation. Implementation of team-based learning required time and effort to prepare the course materials and the teaching team. The teaching team felt well prepared, were positive about using team-based learning and did not identify any major difficulties. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  7. Design of admission medication reconciliation technology: a human factors approach to requirements and prototyping.

    PubMed

    Lesselroth, Blake J; Adams, Kathleen; Tallett, Stephanie; Wood, Scott D; Keeling, Amy; Cheng, Karen; Church, Victoria L; Felder, Robert; Tran, Hanna

    2013-01-01

    Our objectives were to (1) develop an in-depth understanding of the workflow and information flow in medication reconciliation, and (2) design medication reconciliation support technology using a combination of rapid-cycle prototyping and human-centered design. Although medication reconciliation is a national patient safety goal, limitations of both the physical environment and workflow can make it challenging to implement durable systems. We used several human factors techniques to gather requirements and develop a new process to collect a medication history at hospital admission. We completed an ethnography and a time-and-motion analysis of pharmacists in order to illustrate the processes used to reconcile medications. We then used the requirements to design prototype multimedia software for collecting a bedside medication history. We observed how pharmacists incorporated the technology into their physical environment and documented usability issues. Admissions occurred in three phases: (1) list compilation, (2) order processing, and (3) team coordination. Current medication reconciliation processes at the hospital average 19 minutes to complete and do not include a bedside interview. Use of our technology during a bedside interview required an average of 29 minutes. The software represents a viable proof of concept to automate parts of history collection and enhance patient communication. However, we discovered several usability issues that require attention. We designed a patient-centered technology to enhance how clinicians collect a patient's medication history. By using multiple human factors methods, our research team identified system themes and design constraints that influence the quality of the medication reconciliation process and the implementation effectiveness of new technology. Keywords: evidence-based design, human factors, patient-centered care, safety, technology.

  8. Development of AN Open-Source Automatic Deformation Monitoring System for Geodetical and Geotechnical Measurements

    NASA Astrophysics Data System (ADS)

    Engel, P.; Schweimler, B.

    2016-04-01

    The deformation monitoring of structures and buildings is an important task in modern engineering surveying, ensuring the stability and reliability of supervised objects over a long period. Several commercial hardware and software solutions for the realization of such monitoring measurements are available on the market. In addition to them, a research team at the Neubrandenburg University of Applied Sciences (NUAS) is actively developing a software package for monitoring purposes in geodesy and geotechnics, which is distributed under an open source licence and free of charge. The task of managing an open source project is well known in computer science, but it is fairly new in a geodetic context. This paper contributes to that issue by detailing applications, frameworks, and interfaces for the design and implementation of open hardware and software solutions for sensor control, sensor networks, and data management in automatic deformation monitoring. It discusses how the development effort for networked applications can be reduced by using free programming tools, cloud computing technologies, and rapid prototyping methods.
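
    A minimal Python sketch of the sensor-polling-plus-data-management loop that such a system automates is shown below. The sensor driver is a placeholder and the database schema is invented for illustration:

      import sqlite3, time

      def read_sensor():
          # Placeholder for a real instrument driver (e.g., a total station
          # polled over a serial line); returns a synthetic observation.
          return {"epoch": time.time(), "target": "P1", "slope_dist_m": 102.357}

      db = sqlite3.connect("monitoring.sqlite")
      db.execute("""CREATE TABLE IF NOT EXISTS obs
                    (epoch REAL, target TEXT, slope_dist_m REAL)""")

      for _ in range(3):                       # one short monitoring cycle
          o = read_sensor()
          db.execute("INSERT INTO obs VALUES (?, ?, ?)",
                     (o["epoch"], o["target"], o["slope_dist_m"]))
          db.commit()
          time.sleep(1)                        # real systems poll on a schedule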

  9. Computer Software Configuration Item-Specific Flight Software Image Transfer Script Generator

    NASA Technical Reports Server (NTRS)

    Bolen, Kenny; Greenlaw, Ronald

    2010-01-01

    A K-shell UNIX script enables the International Space Station (ISS) Flight Control Team (FCT) operators in NASA's Mission Control Center (MCC) in Houston to transfer an entire or partial computer software configuration item (CSCI) from a flight software compact disk (CD) to the onboard Portable Computer System (PCS). The tool is designed to read the content stored on a flight software CD and generate individual CSCI transfer scripts that are capable of transferring the flight software content in a given subdirectory on the CD to the scratch directory on the PCS. The flight control team can then transfer the flight software from the PCS scratch directory to the Electronically Erasable Programmable Read Only Memory (EEPROM) of an ISS Multiplexer/Demultiplexer (MDM) via the Indirect File Transfer capability. The individual CSCI scripts and the CSCI-Specific Flight Software Image Transfer Script Generator (CFITSG), when executed a second time, will remove all components from their original execution. The tool will identify errors in the transfer process and create logs of the transferred software for the purposes of configuration management.
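
    Although the actual tool is a K-shell script, the generation step it performs can be sketched in Python. The mount point, output layout, and target paths below are hypothetical:

      from pathlib import Path

      CD_ROOT = Path("/mnt/fsw_cd")       # hypothetical mount point of the CD
      OUT_DIR = Path("transfer_scripts")
      OUT_DIR.mkdir(exist_ok=True)

      # Emit one transfer script per CSCI subdirectory found on the CD.
      for csci_dir in sorted(p for p in CD_ROOT.iterdir() if p.is_dir()):
          files = [f for f in csci_dir.rglob("*") if f.is_file()]
          lines = ["#!/bin/sh",
                   f"mkdir -p /scratch/{csci_dir.name}"]
          lines += [f"cp '{f}' /scratch/{csci_dir.name}/" for f in files]
          script = OUT_DIR / f"transfer_{csci_dir.name}.sh"
          script.write_text("\n".join(lines) + "\n")
          print(f"wrote {script} ({len(files)} files)")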

  10. The application of automated operations at the Institutional Processing Center

    NASA Technical Reports Server (NTRS)

    Barr, Thomas H.

    1993-01-01

    The JPL Institutional and Mission Computing Division, Communications, Computing and Network Services Section, with its mission contractor, OAO Corporation, has for some time been applying automation to the operation of JPL's Information Processing Center (IPC). Automation does not come in one easy-to-use package. Automation for a data processing center is made up of many different software and hardware products supported by trained personnel. The IPC automation effort formally began with console automation, and has since spiraled out to include production scheduling, data entry, report distribution, online reporting, failure reporting and resolution, documentation, library storage, and operator and user education, while requiring the interaction of multi-vendor and locally developed software. To begin the process, automation goals are determined. Then a team including operations personnel is formed to research and evaluate available options. By acquiring knowledge of current products and those in development, taking an active role in industry organizations, and learning of other data centers' experiences, a forecast can be developed as to what direction technology is moving. With IPC management's approval, an implementation plan is developed and resources are identified to test or implement new systems. As an example, IPC's new automated data entry system was researched by Data Entry, Production Control, and Advance Planning personnel. A proposal was then submitted to management for review. A determination to implement the new system was made, and the elements/personnel involved with the initial planning performed the implementation. The final steps of the implementation were educating data entry personnel in the areas affected and making the procedural changes necessary to the successful operation of the new system.

  11. Unobtrusive Monitoring of Spaceflight Team Functioning

    NASA Technical Reports Server (NTRS)

    Maidel, Veronica; Stanton, Jeffrey M.

    2010-01-01

    This document contains a literature review suggesting that research on industrial performance monitoring has limited value in assessing, understanding, and predicting team functioning in the context of space flight missions. The review indicates that a more relevant area of research explores the effectiveness of teams and how team effectiveness may be predicted through the elicitation of individual and team mental models. Note that the mental models referred to in this literature typically reflect a shared operational understanding of a mission setting such as the cockpit controls and navigational indicators on a flight deck. In principle, however, mental models also exist pertaining to the status of interpersonal relations on a team, collective beliefs about leadership, success in coordination, and other aspects of team behavior and cognition. Pursuing this idea, the second part of this document provides an overview of available off-the-shelf products that might assist in extraction of mental models and elicitation of emotions based on an analysis of communicative texts among mission personnel. The search for text analysis software revealed no available tools that can extract mental models automatically from collected communication text alone. Nonetheless, using existing software to analyze how a team is functioning may be relevant for selection or training, when human experts are immediately available to analyze and act on the findings. Alternatively, if output can be sent to the ground periodically and analyzed by experts on the ground, then these software packages might be employed during missions as well. A demonstration of two text analysis software applications is presented. Another possibility explored in this document is the option of collecting biometric and proxemic measures such as keystroke dynamics and interpersonal distance in order to expose various individual or dyadic states that may be indicators or predictors of certain elements of team functioning. This document summarizes interviews conducted with personnel currently involved in observing or monitoring astronauts or who are in charge of technology that allows communication and monitoring. The objective of these interviews was to elicit their perspectives on monitoring team performance during long-duration missions and the feasibility of potential automatic non-obtrusive monitoring systems. Finally, in the last section, the report describes several priority areas for research that can help transform team mental models, biometrics, and/or proxemics into workable systems for unobtrusive monitoring of space flight team effectiveness. Conclusions from this work suggest that unobtrusive monitoring of space flight personnel is likely to be a valuable future tool for assessing team functioning, but that several research gaps must be filled before prototype systems can be developed for this purpose.

  12. Case Study: Accelerating Process Improvement by Integrating the TSP and CMMI

    DTIC Science & Technology

    2007-06-01

    Could software development teams and individuals apply similar principles to improve their work? Watts S. Humphrey, a founder of the process...was an authorized PSP instructor. At Schwalb's urging, Watts Humphrey briefed the SLT on the PSP and TSP, and after the briefing, the team... [Humphrey 96] Humphrey, Watts S. Introduction to the Personal Software Process. Boston, MA: Addison-Wesley Publishing Company, Inc., 1996 (ISBN

  13. Qualitative evaluation of the implementation of the Interdisciplinary Management Tool: a reflective tool to enhance interdisciplinary teamwork using Structured, Facilitated Action Research for Implementation.

    PubMed

    Nancarrow, Susan A; Smith, Tony; Ariss, Steven; Enderby, Pamela M

    2015-07-01

    Reflective practice is used increasingly to enhance team functioning and service effectiveness; however, there is little evidence of its use in interdisciplinary teams. This paper presents the qualitative evaluation of the Interdisciplinary Management Tool (IMT), an evidence-based change tool designed to enhance interdisciplinary teamwork through structured team reflection. The IMT incorporates three components: an evidence-based resource guide; a reflective implementation framework based on Structured, Facilitated Action Research for Implementation methodology; and formative and summative evaluation components. The IMT was implemented with intermediate care teams supported by independent facilitators in England. Each intervention lasted 6 months and was evaluated over a 12-month period. Data sources include interviews, a focus group with facilitators, questionnaires completed by team members and documentary feedback from structured team reports. Data were analysed qualitatively using the Framework approach. The IMT was implemented with 10 teams, including 253 staff from more than 10 different disciplines. Team challenges included lack of clear vision; communication issues; limited career progression opportunities; inefficient resource use; need for role clarity and service development. The IMT successfully engaged staff in the change process, and resulted in teams developing creative strategies to address the issues identified. Participants valued dedicated time to focus on the processes of team functioning; however, some were uncomfortable with a focus on teamwork at the expense of delivering direct patient care. The IMT is a relatively low-cost, structured, reflective way to enhance team function. It empowers individuals to understand and value their own, and others', roles and responsibilities within the team; identify barriers to effective teamwork; and develop and implement appropriate solutions to these. To be successful, teams need protected time for reflection, and executive support to broker changes that are beyond the scope of the team. © 2014 John Wiley & Sons Ltd.

  14. Planned change or emergent change implementation approach and nurses' professional clinical autonomy.

    PubMed

    Luiking, Marie-Louise; Aarts, Leon; Bras, Leo; Grypdonck, Maria; van Linge, Roland

    2017-11-01

    Nurses' clinical autonomy is considered important for patients' outcome and influenced by the implementation approach of innovations. Emergent change approach with participation in the implementation process is thought to increase clinical autonomy. Planned change approach without this participation is thought not to increase clinical autonomy. Evidence of these effects on clinical autonomy is however limited. To examine the changes in clinical autonomy and in personal norms and values for a planned change and emergent change implementation of an innovation, e.g. intensive insulin therapy. Prospective comparative study with two geographically separated nurses' teams on one intensive care unit (ICU), randomly assigned to the experimental conditions. Data were collected from March 2008 to January 2009. Pre-existing differences in perception of team and innovation characteristics were excluded using instruments based on the innovation contingency model. The Nursing Activity Scale was used to measure clinical autonomy. The Personal Values and Norms instrument was used to assess orientation towards nursing activities and the Team Learning Processes instrument to assess learning as a team. Pre-implementation the measurements did not differ. Post-implementation, clinical autonomy was increased in the emergent change team and decreased in the planned change team. The Personal Values and Norms instrument showed in the emergent change team a decreased hierarchic score and increased developmental and rational scores. In the planned change team the hierarchical and group scores were increased. Learning as a team did not differ between the teams. In both teams there was a change in clinical autonomy and orientation towards nursing activities, in line with the experimental conditions. Emergent change implementation resulted in more clinical autonomy than planned change implementation. If an innovation requires the nurses to make their own clinical decisions, an emergent change implementation should help to establish this clinical autonomy. © 2015 British Association of Critical Care Nurses.

  15. WFF TOPEX Software Documentation Overview, May 1999. Volume 2

    NASA Technical Reports Server (NTRS)

    Brooks, Ronald L.; Lee, Jeffrey

    2003-01-01

    This document provides an overview of software development activities and the resulting products and procedures developed by the TOPEX Software Development Team (SWDT) at Wallops Flight Facility, in support of the WFF TOPEX Engineering Assessment and Verification efforts.

  16. Development and Evaluation of a Computer-Based Program for Assessing Quality of Family Medicine Teams Based on Accreditation Standards

    PubMed Central

    Valjevac, Salih; Ridjanovic, Zoran; Masic, Izet

    2009-01-01

    Introduction: The Agency for Healthcare Quality and Accreditation in the Federation of Bosnia and Herzegovina (AKAZ) is the authorized body in the field of healthcare quality and safety improvement and accreditation of healthcare institutions. Besides accreditation standards for hospitals and primary health care centers, AKAZ has also developed accreditation standards for family medicine teams. Methods: Software development was primarily based on the Accreditation Standards for Family Medicine Teams. Seven chapters/topics (1. Physical factors; 2. Equipment; 3. Organization and management; 4. Health promotion and illness prevention; 5. Clinical services; 6. Patient survey; and 7. Patient's rights and obligations) contain 35 standards describing the expected level of a family medicine team's quality. Based on the structure of the accreditation standards and the needs of different potential users, it was concluded that the software backbone should be a database containing all accreditation standards and all self-assessment and external assessment details. In this article we present the development of standardized software for self and external evaluation of quality of service in family medicine, as well as plans for the future development of this software package. Conclusion: Electronic data gathering and storage enhances the management, access and overall use of information. During this project we came to the conclusion that software for self-assessment and external assessment is ideal for distributing accreditation standards, allowing their overview by family medicine team members, their self-assessment and their external assessment. PMID:24109157

  17. Perfecting scientists’ collaboration and problem-solving skills in the virtual team environment

    USDA-ARS?s Scientific Manuscript database

    Perfecting Scientists’ Collaboration and Problem-Solving Skills in the Virtual Team Environment Numerous factors have contributed to the proliferation of conducting work in virtual teams at the domestic, national, and global levels: innovations in technology, critical developments in software, co-lo...

  18. An Overview of the JPSS Ground Project Algorithm Integration Process

    NASA Astrophysics Data System (ADS)

    Vicente, G. A.; Williams, R.; Dorman, T. J.; Williamson, R. C.; Shaw, F. J.; Thomas, W. M.; Hung, L.; Griffin, A.; Meade, P.; Steadley, R. S.; Cember, R. P.

    2015-12-01

    The smooth transition, implementation, and operationalization of scientific software from the National Oceanic and Atmospheric Administration (NOAA) development teams to the Joint Polar Satellite System (JPSS) Ground Segment requires a variety of experiences and expertise. This task has been accomplished by a dedicated group of scientists and engineers working in close collaboration with the NOAA Satellite and Information Services (NESDIS) Center for Satellite Applications and Research (STAR) science teams for the JPSS/Suomi-NPOESS Preparatory Project (S-NPP) Advanced Technology Microwave Sounder (ATMS), Cross-track Infrared Sounder (CrIS), Visible Infrared Imaging Radiometer Suite (VIIRS), and Ozone Mapping and Profiler Suite (OMPS) instruments. The purpose of this presentation is to describe the JPSS project process for algorithm implementation, from the very early delivery stages by the science teams to full operationalization in the Interface Processing Segment (IDPS), the processing system that provides Environmental Data Records (EDRs) to NOAA. Special focus is given to the NASA Data Products Engineering and Services (DPES) Algorithm Integration Team (AIT) functional and regression test activities. In the functional testing phase, the AIT uses one or a few specific chunks of data (granules), selected by the NOAA STAR Calibration and Validation (cal/val) teams, to demonstrate that a small change in the code performs properly and does not disrupt the rest of the algorithm chain. In the regression testing phase, the modified code is placed into the Government Resources for Algorithm Verification, Integration, Test and Evaluation (GRAVITE) Algorithm Development Area (ADA), a simulated and smaller version of the operational IDPS. Baseline files are swapped out, not edited, and the whole code package runs on one full orbit of Science Data Records (SDRs), using Calibration Look Up Tables (Cal LUTs) for the time of the orbit. The purpose of the regression test is to identify unintended outcomes. Overall, the presentation provides a general and easy-to-follow overview of the JPSS Algorithm Change Process (ACP) and is intended to facilitate the audience's understanding of a very extensive and complex process.
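
    The regression-test idea, rerunning the modified package over a full orbit and flagging any product that differs from the baseline run, can be sketched in Python. Byte-level comparison is used here for brevity; a real test of science products would allow numerical tolerances:

      import hashlib
      from pathlib import Path

      def checksum(path):
          return hashlib.sha256(Path(path).read_bytes()).hexdigest()

      def regression_check(baseline_dir, candidate_dir, pattern="*.h5"):
          """Return the products whose content differs from the baseline;
          each difference is an 'unintended outcome' unless explained."""
          diffs = []
          for base in sorted(Path(baseline_dir).glob(pattern)):
              cand = Path(candidate_dir) / base.name
              if not cand.exists() or checksum(base) != checksum(cand):
                  diffs.append(base.name)
          return diffs

      # Example (directory names are hypothetical):
      # print(regression_check("baseline_orbit", "candidate_orbit"))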

  19. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The development of the Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large, complex systems engineering challenge being addressed in part by focusing on the specific subsystems' handling of off-nominal missions and fault tolerance. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA also has formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. Risk reduction is addressed by working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects, detection, and responses that can be tested in VMET and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - ARINC 653 partitioned OS, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM. The plan for VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as those used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of M&FM algorithm performance in the FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing.
Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET, followed by Section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. Details are then further presented in Section IV, followed by Section V, which presents integration, test status, and state analysis. Finally, Section VI addresses the summary and forward directions, followed by the appendices, which present relevant information on terminology and documentation.
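
    In the spirit of VMET's nominal and off-nominal suites, the toy Python sketch below exercises a fault-management state machine with both kinds of test case. The states, inputs, and safing logic are invented for illustration:

      SAFE, MONITOR, ABORT = "SAFE", "MONITOR", "ABORT"

      def step(state, sensor_ok, redundancy_ok):
          # Off-nominal inputs should drive the machine toward safing actions.
          if state == MONITOR and not sensor_ok:
              return SAFE if redundancy_ok else ABORT
          return state

      # Nominal case: healthy inputs leave the machine monitoring.
      assert step(MONITOR, sensor_ok=True, redundancy_ok=True) == MONITOR
      # Off-nominal cases: a failed sensor triggers safing or abort.
      assert step(MONITOR, sensor_ok=False, redundancy_ok=True) == SAFE
      assert step(MONITOR, sensor_ok=False, redundancy_ok=False) == ABORT
      print("all fault-response cases passed")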

  20. Reduction in Mortality Following Pediatric Rapid Response Team Implementation.

    PubMed

    Kolovos, Nikoleta S; Gill, Jeff; Michelson, Peter H; Doctor, Allan; Hartman, Mary E

    2018-05-01

    To evaluate the effectiveness of a physician-led rapid response team program on morbidity and mortality following unplanned admission to the PICU. Before-after study. Single-center quaternary-referral PICU. All unplanned PICU admissions from the ward from 2005 to 2011. The dataset was divided into pre- and post-rapid response team groups for comparison. A Cox proportional hazards model was used to identify the patient characteristics associated with mortality following unplanned PICU admission. Following rapid response team implementation, Pediatric Risk of Mortality, version 3, illness severity was reduced by 28.7%, PICU length of stay was reduced by 19.0%, and mortality declined by 22%. The relative risk of death following unplanned admission to the PICU after rapid response team implementation was 0.685. For children requiring unplanned admission to the PICU, rapid response team implementation is associated with reduced mortality, admission severity of illness, and length of stay. Rapid response team implementation led to more proximal capture of, and more aggressive intervention in, the trajectory of a decompensating pediatric ward patient.
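
    As a sketch of the kind of Cox proportional hazards analysis described, the Python snippet below fits such a model with the open-source lifelines library. The data are synthetic stand-ins, not the study's dataset:

      import pandas as pd
      from lifelines import CoxPHFitter

      # Time to death after unplanned PICU admission, with an indicator
      # for the post-rapid-response-team era and an illness severity score.
      df = pd.DataFrame({
          "days_to_event": [3, 10, 7, 21, 5, 14, 30, 2],
          "died":          [1,  0, 1,  0, 1,  0,  0, 1],
          "post_rrt":      [0,  0, 0,  0, 1,  1,  1, 1],
          "prism3":        [12, 4, 15, 3, 9,  5,  2, 18],
      })

      cph = CoxPHFitter()
      cph.fit(df, duration_col="days_to_event", event_col="died")
      cph.print_summary()   # hazard ratio for post_rrt ~ relative risk of death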

  1. Making sense of health information technology implementation: A qualitative study protocol.

    PubMed

    Kitzmiller, Rebecca R; Anderson, Ruth A; McDaniel, Reuben R

    2010-11-29

    Implementing new practices, such as health information technology (HIT), is often difficult due to the disruption of the highly coordinated, interdependent processes (e.g., information exchange, communication, relationships) of providing care in hospitals. Thus, HIT implementation may occur slowly as staff members observe and make sense of unexpected disruptions in care. As a critical organizational function, sensemaking, defined as the social process of searching for answers and meaning which drive action, leads to unified understanding, learning, and effective problem solving -- strategies that studies have linked to successful change. Project teamwork is a change strategy increasingly used by hospitals that facilitates sensemaking by providing a formal mechanism for team members to share ideas, construct the meaning of events, and take next actions. In this longitudinal case study, we aim to examine project teams' sensemaking and action as the team prepares to implement new information technology in a tertiary care hospital. Based on management and healthcare literature on HIT implementation and project teamwork, we chose sensemaking as an alternative to traditional models for understanding organizational change and teamwork. Our methods choices are derived from this conceptual framework. Data on project team interactions will be prospectively collected through direct observation and organizational document review. Through qualitative methods, we will identify sensemaking patterns and explore variation in sensemaking across teams. Participant demographics will be used to explore variation in sensemaking patterns. Outcomes of this research will be new knowledge about sensemaking patterns of project teams, such as: the antecedents and consequences of the ongoing, evolutionary, social process of implementing HIT; the internal and external factors that influence the project team, including team composition, team member interaction, and interaction between the project team and the larger organization; the ways in which internal and external factors influence project team processes; and the ways in which project team processes facilitate team task accomplishment. These findings will lead to new methods of implementing HIT in hospitals.

  3. Why and how Mastering an Incremental and Iterative Software Development Process

    NASA Astrophysics Data System (ADS)

    Dubuc, François; Guichoux, Bernard; Cormery, Patrick; Mescam, Jean Christophe

    2004-06-01

    One of the key issues regularly mentioned in the current software crisis of the space domain is related to the software development process that must be performed while the system definition is not yet frozen. This is especially true for complex systems like launchers or space vehicles. Several more or less mature solutions are under study by EADS SPACE Transportation and are presented in this paper. The basic principle is to develop the software through an iterative and incremental process instead of the classical waterfall approach, with the following advantages: - It permits systematic management and incorporation of requirements changes over the development cycle at minimal cost. As far as possible, the most dimensioning requirements are analyzed and developed first, so that the architecture concept is validated very early without the details. - A software prototype is available very quickly. It improves the communication between system and software teams, as it enables them to check the common understanding of the system requirements very early and efficiently. - It allows the software team to complete a whole development cycle very early, and thus to become quickly familiar with the software development environment (methodology, technology, tools...). This is particularly important when the team is new, or when the environment has changed since the previous development. In any case, it greatly improves the learning curve of the software team. These advantages seem very attractive, but mastering an iterative development process efficiently is not so easy and raises many difficulties, such as: - How to freeze one configuration of the system definition as a development baseline, while most of the system requirements are completely and naturally unstable? - How to distinguish stable/unstable and dimensioning/standard requirements? - How to plan the development of each increment? - How to link classical waterfall development milestones with an iterative approach: when should the classical reviews be performed: Software Specification Review? Preliminary Design Review? Critical Design Review? Code Review? Etc. Several solutions envisaged or already deployed by EADS SPACE Transportation are presented, both from a methodological and a technological point of view: - How the MELANIE EADS ST internal methodology improves the concurrent engineering activities between GNC, software, and simulation teams in a very iterative and reactive way. - How the CMM approach can help by better formalizing Requirements Management and Planning processes. - How Automatic Code Generation with "certified" tools (SCADE) can further dramatically shorten the development cycle. The presentation concludes with an evaluation of the cost and schedule reduction based on a pilot application, comparing figures from two similar projects: one with the classical waterfall process, the other with an iterative and incremental approach.
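
    The classification questions above (stable/unstable, dimensioning/standard) suggest a simple two-axis triage for ordering increments. The sketch below is purely illustrative, with hypothetical requirement data; it is not taken from the EADS methodology:

        # Hypothetical triage of requirements for incremental planning:
        # dimensioning requirements drive the architecture, so they are
        # scheduled first; stability breaks ties within each group.
        requirements = [
            {"id": "R1", "stable": True,  "dimensioning": True},
            {"id": "R2", "stable": False, "dimensioning": True},
            {"id": "R3", "stable": True,  "dimensioning": False},
            {"id": "R4", "stable": False, "dimensioning": False},
        ]

        def increment_priority(req):
            # Lower tuple sorts earlier; False < True in Python.
            return (not req["dimensioning"], not req["stable"])

        for req in sorted(requirements, key=increment_priority):
            print(req["id"])   # R1, R2, R3, R4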

  4. A Bibliography of the Personal Software Process (PSP) and the Team Software Process (TSP)

    DTIC Science & Technology

    2009-10-01

    Postmortem." Proceedings of the TSP Symposium (September 2007). http://www.sei.cmu.edu/tspsymposium/ Rickets, Chris; Lindeman, Robert; & Hodgins, Brad... Rickets, Chris A. "A TSP Software Maintenance Life Cycle." CrossTalk (March 2005). Rozanc, I. & Mahnic, V. "Teaching Software Quality with Emphasis on PSP

  5. Effects of the Meetings-Flow Approach on Quality Teamwork in the Training of Software Capstone Projects

    ERIC Educational Resources Information Center

    Chen, Chung-Yang; Hong, Ya-Chun; Chen, Pei-Chi

    2014-01-01

    Software development relies heavily on teamwork; determining how to streamline this collaborative development is an essential training subject in computer and software engineering education. A team process known as the meetings-flow (MF) approach has recently been introduced in software capstone projects in engineering programs at various…

  6. Analysis of Software Development Methodologies to Build Safety Software Applications for the SATEX-II: A Mexican Experimental Satellite

    NASA Astrophysics Data System (ADS)

    Aguilar Cisneros, Jorge; Vargas Martinez, Hector; Pedroza Melendez, Alejandro; Alonso Arevalo, Miguel

    2013-09-01

    Mexico is only beginning to build experience in software for satellite applications. This is a delicate situation because, in the near future, software must be developed for the SATEX-II (Mexican Experimental Satellite), a project of SOMECyTA (the Mexican Society of Aerospace Science and Technology). We have experience applying software development methodologies such as TSP (Team Software Process) and SCRUM in other areas. We analyzed these methodologies and concluded that they can be applied to develop software for the SATEX-II, supported by the ESA PSS-05-0 Standard, in particular ESA PSS-05-11. Our analysis focused on the main characteristics of each methodology and on how these methodologies could be used together with the ESA PSS-05-0 Standards. Our outcomes may be used generally by teams who need to build small satellites, and in particular they will be used when we build the on-board software applications for the SATEX-II.

  7. Empirical studies of design software: Implications for software engineering environments

    NASA Technical Reports Server (NTRS)

    Krasner, Herb

    1988-01-01

    The empirical studies team of MCC's Design Process Group conducted three studies in 1986-87 in order to gather data on professionals designing software systems in a range of situations. The first study (the Lift Experiment) used thinking-aloud protocols in a controlled laboratory setting to study the cognitive processes of individual designers. The second study (the Object Server Project) involved the observation, videotaping, and data collection of a design team on a medium-sized development project over several months in order to study team dynamics. The third study (the Field Study) involved interviews with personnel from 19 large development projects at MCC shareholder companies in order to study how the process of design is affected by organizational and project behavior. The focus of this report is on key observations of the design process (at several levels) and their implications for the design of environments.

  8. Enhanced Training for Cyber Situational Awareness in Red versus Blue Team Exercises

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbajal, Armida J.; Stevens-Adams, Susan Marie; Silva, Austin Ray

    This report summarizes research conducted through the Sandia National Laboratories Enhanced Training for Cyber Situational Awareness in Red Versus Blue Team Exercises Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding concerning how best to structure training for cyber defenders. Two modes of training were considered. The baseline training condition (Tool-Based training) was based on current practices, where classroom instruction focuses on the functions of a software tool, with various exercises in which students apply those functions. In the second training condition (Narrative-Based training), classroom instruction addressed software functions, but in the context of adversary tactics and techniques. It was hypothesized that students receiving narrative-based training would gain a deeper conceptual understanding of the software tools, and that this would be reflected in better performance within a red versus blue team exercise.

  9. A representational basis for the development of a distributed expert system for Space Shuttle flight control

    NASA Technical Reports Server (NTRS)

    Helly, J. J., Jr.; Bates, W. V.; Cutler, M.; Kelem, S.

    1984-01-01

    A new representation of malfunction procedure logic which permits the automation of these procedures using Boolean normal forms is presented. This representation is discussed in the context of the development of an expert system for space shuttle flight control including software and hardware implementation modes, and a distributed architecture. The roles and responsibility of the flight control team as well as previous work toward the development of expert systems for flight control support at Johnson Space Center are discussed. The notion of malfunction procedures as graphs is introduced as well as the concept of hardware-equivalence.
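
    The abstract does not reproduce the representation itself, but the idea of automating malfunction procedures via Boolean normal forms can be sketched: each conjunction of telemetry conditions in a disjunctive normal form maps to one recovery action. The names and conditions below are invented for illustration:

        # Hypothetical malfunction-procedure logic in disjunctive normal
        # form: each rule is a conjunction of telemetry conditions, and
        # the first rule whose terms all hold selects the action.
        PROCEDURE = [
            ({"bus_undervolt", "fuel_cell_offline"}, "switch_to_backup_bus"),
            ({"bus_undervolt"}, "shed_nonessential_loads"),
        ]

        def evaluate(procedure, active_conditions):
            for terms, action in procedure:
                if terms <= active_conditions:   # all terms are active
                    return action
            return "no_action"

        print(evaluate(PROCEDURE, {"bus_undervolt"}))  # shed_nonessential_loads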

  10. A Real-Time Telemetry Simulator of the IUS Spacecraft

    NASA Technical Reports Server (NTRS)

    Drews, Michael E.; Forman, Douglas A.; Baker, Damon M.; Khazoyan, Louis B.; Viazzo, Danilo

    1998-01-01

    A real-time telemetry simulator of the IUS spacecraft has recently entered operation to train Flight Control Teams for the launch of the AXAF telescope from the Shuttle. The simulator has proven to be a successful higher fidelity implementation of its predecessor, while affirming the rapid development methodology used in its design. Although composed of COTS hardware and software, the system simulates the full breadth of the mission: Launch, Pre-Deployment-Checkout, Burn Sequence, and AXAF/IUS separation. Realism is increased through patching the system into the operations facility to simulate IUS telemetry, Shuttle telemetry, and the Tracking Station link (commands and status message).

  11. Empirical studies of software design: Implications for SSEs

    NASA Technical Reports Server (NTRS)

    Krasner, Herb

    1988-01-01

    Implications for Software Engineering Environments (SEEs) are presented in viewgraph format for characteristics of projects studied; significant problems and crucial problem areas in software design for large systems; layered behavioral model of software processes; implications of field study results; software project as an ecological system; results of the LIFT study; information model of design exploration; software design strategies; results of the team design study; and a list of publications.

  12. Florida alternative NTCIP testing software (ANTS) for actuated signal controllers.

    DOT National Transportation Integrated Search

    2009-01-01

    The scope of this research project included the development of a software tool to test devices for NTCIP compliance. The Florida Alternative NTCIP Testing Software (ANTS) was developed by the research team due to limitations found w...

  13. Reinventing The Design Process: Teams and Models

    NASA Technical Reports Server (NTRS)

    Wall, Stephen D.

    1999-01-01

    The future of space mission design will be dramatically different from the past. Formerly, performance-driven paradigms emphasized data return, with cost and schedule being secondary issues. Now and in the future, costs are capped and schedules fixed; these two variables must be treated as independent in the design process. Accordingly, JPL has redesigned its design process. At the conceptual level, design times have been reduced by properly defining the required design depth, improving the linkages between tools, and managing team dynamics. In implementation-phase design, system requirements will be held in crosscutting models, linked to subsystem design tools through a central database that captures the design and supplies needed configuration management and control. Mission goals will then be captured in timelining software that drives the models, testing their capability to execute the goals. Metrics are used to measure and control both processes and to ensure that design parameters converge through the design process within schedule constraints. This methodology manages margins controlled by acceptable risk levels. Thus, teams can evolve risk tolerance (and cost) as they would any engineering parameter. This new approach allows more design freedom for a longer time, which tends to encourage revolutionary and unexpected improvements in design.

  14. Biotechnology software in the digital age: are you winning?

    PubMed

    Scheitz, Cornelia Johanna Franziska; Peck, Lawrence J; Groban, Eli S

    2018-01-16

    There is a digital revolution taking place and biotechnology companies are slow to adapt. Many pharmaceutical, biotechnology, and industrial bio-production companies believe that software must be developed and maintained in-house and that data are more secure on internal servers than on the cloud. In fact, most companies in this space continue to employ large IT and software teams and acquire computational infrastructure in the form of in-house servers. This is due to a fear of the cloud not sufficiently protecting in-house resources and the belief that their software is valuable IP. Over the next decade, the ability to quickly adapt to changing market conditions, with agile software teams, will quickly become a compelling competitive advantage. Biotechnology companies that do not adopt the new regime may lose on key business metrics such as return on invested capital, revenue, profitability, and eventually market share.

  15. User systems guidelines for software projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abrahamson, L.

    1986-04-01

    This manual presents guidelines for software standards which were developed so that software project-development teams and management involved in approving the software could have a generalized view of all phases in the software production procedure and the steps involved in completing each phase. Guidelines are presented for six phases of software development: project definition, building a user interface, designing software, writing code, testing code, and preparing software documentation. The discussions for each phase include examples illustrating the recommended guidelines. 45 refs. (DWL)

  16. Implementing a rapid response team: factors influencing success.

    PubMed

    Murray, Theresa; Kleinpell, Ruth

    2006-12-01

    Rapid response teams (RRTs), or medical emergency teams, focus on preventing a patient crisis by addressing changes in patient status before a cardiopulmonary arrest occurs. In responding to acute changes, RRTs and medical emergency teams are similar to "code" teams; the exception, however, is that they step into action before a patient arrests. Although RRTs are acknowledged as an important initiative, implementation can present many challenges. This article reports on the implementation and ongoing use of an RRT in a community health care setting, highlighting important considerations and strategies for success.

  17. Understanding Implementation of Complex Interventions in Primary Care Teams.

    PubMed

    Luig, Thea; Asselin, Jodie; Sharma, Arya M; Campbell-Scherer, Denise L

    2018-01-01

    The implementation of interventions to support practice change in primary care settings is complex. Pragmatic strategies, grounded in empirical data, are needed to navigate real-world challenges and unanticipated interactions with context that can impact implementation and outcomes. This article uses the example of the "5As Team" randomized controlled trial to explore implementation strategies to promote knowledge transfer, capacity building, and practice integration, and their interaction within the context of an interdisciplinary primary care team. We performed a qualitative evaluation of the implementation process of the 5As Team intervention study, a randomized controlled trial of a complex intervention in primary care. We conducted thematic analysis of field notes of intervention sessions, logbooks of the practice facilitation team members, and semistructured interviews with 29 interdisciplinary clinician participants. We used and further developed the Interactive Systems Framework for dissemination and implementation to interpret and structure the findings. Three themes emerged that illuminate interactions between implementation processes, context, and outcomes: (1) facilitating team communication supported collective and individual sense-making and adoption of the innovation; (2) iterative evaluation of the implementation process and real-time, feedback-driven adaptations of the intervention proved crucial for sustainable, context-appropriate intervention impact; (3) stakeholder engagement led both to knowledge exchange that contributes to local problem solving and to shaping a clinical context that is supportive of practice change. Our findings contribute pragmatic strategies that can help practitioners and researchers navigate interactions between context, intervention, and implementation factors to increase implementation success. We further developed an implementation framework that includes sustained engagement with stakeholders, facilitation of team sense-making, and dynamic evaluation and intervention design as integral parts of complex intervention implementation. NCT01967797. 18 October 2013. © Copyright 2018 by the American Board of Family Medicine.

  18. Promoting Action on Research Implementation in Health Services framework applied to TeamSTEPPS implementation in small rural hospitals.

    PubMed

    Ward, Marcia M; Baloh, Jure; Zhu, Xi; Stewart, Greg L

    A particularly useful model for examining implementation of quality improvement interventions in health care settings is the PARIHS (Promoting Action on Research Implementation in Health Services) framework developed by Kitson and colleagues. The PARIHS framework proposes three elements (evidence, context, and facilitation) that are related to successful implementation. An evidence-based program focused on quality enhancement in health care, termed TeamSTEPPS (Team Strategies and Tools to Enhance Performance and Patient Safety), has been widely promoted by the Agency for Healthcare Research and Quality, but research is needed to better understand its implementation. We apply the PARIHS framework in studying TeamSTEPPS implementation to identify elements that are most closely related to successful implementation. Quarterly interviews were conducted over a 9-month period in 13 small rural hospitals that implemented TeamSTEPPS. Interview quotes that were related to each of the PARIHS elements were identified using directed content analysis. Transcripts were also scored quantitatively, and bivariate regression analysis was employed to explore relationships between PARIHS elements and successful implementation related to planning activities. The current findings provide support for the PARIHS framework and identified two of the three PARIHS elements (context and facilitation) as important contributors to successful implementation. This study applies the PARIHS framework to TeamSTEPPS, a widely used quality initiative focused on improving health care quality and patient safety. By focusing on small rural hospitals that undertook this quality improvement activity of their own accord, our findings represent effectiveness research in an understudied segment of the health care delivery system. By identifying context and facilitation as the most important contributors to successful implementation, these analyses provide a focus for efficient and effective sustainment of TeamSTEPPS efforts.

  19. STAR-CCM+ Verification and Validation Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pointer, William David

    2016-09-30

    The commercial Computational Fluid Dynamics (CFD) code STAR-CCM+ provides general-purpose finite volume method solutions for fluid dynamics and energy transport. This document defines plans for verification and validation (V&V) of the base code and of models implemented within the code by the Consortium for Advanced Simulation of Light water reactors (CASL). The software quality assurance activities described herein are part of the overall software life cycle defined in the CASL Software Quality Assurance (SQA) Plan [Sieger, 2015]. STAR-CCM+ serves as the principal foundation for development of an advanced predictive multi-phase boiling simulation capability within CASL. The CASL Thermal Hydraulics Methods (THM) team develops advanced closure models required to describe the subgrid-resolution behavior of secondary fluids or fluid phases in multiphase boiling flows within the Eulerian-Eulerian framework of the code. These include wall heat partitioning models that describe the formation of vapor on the surface, and models of the forces that define bubble/droplet dynamic motion. The CASL models are implemented as user coding or field functions within the general framework of the code. This report defines procedures and requirements for V&V of the multi-phase CFD capability developed by CASL THM. Results of V&V evaluations will be documented in a separate STAR-CCM+ V&V assessment report. This report is expected to be a living document and will be updated as additional validation cases are identified and adopted as part of the CASL THM V&V suite.

  20. ToxPredictor: a Toxicity Estimation Software Tool

    EPA Science Inventory

    The Computational Toxicology Team within the National Risk Management Research Laboratory has developed a software tool that will allow the user to estimate the toxicity for a variety of endpoints (such as acute aquatic toxicity). The software tool is coded in Java and can be ac...

  1. Use of Dynamic Models and Operational Architecture to Solve Complex Navy Challenges

    NASA Technical Reports Server (NTRS)

    Grande, Darby; Black, J. Todd; Freeman, Jared; Sorber, Tim; Serfaty, Daniel

    2010-01-01

    The United States Navy established 8 Maritime Operations Centers (MOC) to enhance the command and control of forces at the operational level of warfare. Each MOC is a headquarters manned by qualified joint operational-level staffs and enabled by globally interoperable C4I systems. To assess and refine MOC staffing, equipment, and schedules, a dynamic software model was developed. The model leverages pre-existing operational process architecture, joint military task lists that define activities and their precedence relations, and Navy documents that specify manning and roles per activity. The software model serves as a "computational wind tunnel" in which to test a MOC on a mission, and to refine its structure, staffing, processes, and schedules. More generally, the model supports resource allocation decisions concerning Doctrine, Organization, Training, Materiel, Leadership, Personnel and Facilities (DOTMLPF) at MOCs around the world. A rapid prototyping effort produced this software in less than five months, using an integrated process team consisting of MOC military and civilian staff, modeling experts, and software developers. The work reported here was conducted for Commander, United States Fleet Forces Command in Norfolk, Virginia, code N5-0LW (Operational Level of War), which facilitates the identification, consolidation, and prioritization of MOC capabilities requirements, and the implementation and delivery of MOC solutions.

  2. Antiterrorist Software

    NASA Technical Reports Server (NTRS)

    Clark, David A.

    1998-01-01

    In light of the escalation of terrorism, the Department of Defense spearheaded the development of new antiterrorist software for all Government agencies by issuing a Broad Agency Announcement to solicit proposals. This Government-wide competition resulted in a team that includes NASA Lewis Research Center's Computer Services Division, which will develop the graphical user interface (GUI) and test it in their usability lab. The team launched a program entitled Joint Sphere of Security (JSOS), crafted a design architecture, and is testing the interface. This software system has a state-of-the-art, object-oriented architecture, with a main kernel composed of the Dynamic Information Architecture System (DIAS) developed by Argonne National Laboratory. DIAS will be used as the software "breadboard" for assembling the components of explosions, such as blast and collapse simulations.

  3. IGDS/TRAP Interface Program (ITIP). Software User Manual (SUM). [network flow diagrams for coal gasification studies

    NASA Technical Reports Server (NTRS)

    Jefferys, S.; Johnson, W.; Lewis, R.; Rich, R.

    1981-01-01

    This specification establishes the requirements, concepts, and preliminary design for a set of software known as the IGDS/TRAP Interface Program (ITIP). This software provides the capability to develop at an Interactive Graphics Design System (IGDS) design station process flow diagrams for use by the NASA Coal Gasification Task Team. In addition, ITIP will use the Data Management and Retrieval System (DMRS) to maintain a data base from which a properly formatted input file to the Time-Line and Resources Analysis Program (TRAP) can be extracted. This set of software will reside on the PDP-11/70 and will become the primary interface between the Coal Gasification Task Team and IGDS, DMRS, and TRAP. The user manual for the computer program is presented.

  4. TOPEX Software Document Series. Volume 5; Rev. 1; TOPEX GDR Processing

    NASA Technical Reports Server (NTRS)

    Lee, Jeffrey; Lockwood, Dennis; Hancock, David W., III

    2003-01-01

    This document is a compendium of the WFF TOPEX Software Development Team's knowledge regarding Geophysical Data Record (GDR) Processing. It includes many elements of a requirements document, a software specification document, a software design document, and a user's manual. In the more technical sections, this document assumes the reader is familiar with TOPEX and instrument files.

  5. Shaping Software Engineering Curricula Using Open Source Communities: A Case Study

    ERIC Educational Resources Information Center

    Bowring, James; Burke, Quinn

    2016-01-01

    This paper documents four years of a novel approach to teaching a two-course sequence in software engineering as part of the ABET-accredited computer science curriculum at the College of Charleston. This approach is team-based and centers on learning software engineering in the context of open source software projects. In the first course, teams…

  6. Unobtrusive Monitoring of Spaceflight Team Functioning. Literature Review and Operational Assessment for NASA Behavioral Health and Performance Element

    NASA Technical Reports Server (NTRS)

    Maidel, Veronica; Stanton, Jeffrey M.

    2010-01-01

    This document contains a literature review suggesting that research on industrial performance monitoring has limited value in assessing, understanding, and predicting team functioning in the context of space flight missions. The review indicates that a more relevant area of research explores the effectiveness of teams and how team effectiveness may be predicted through the elicitation of individual and team mental models. Note that the mental models referred to in this literature typically reflect a shared operational understanding of a mission setting such as the cockpit controls and navigational indicators on a flight deck. In principle, however, mental models also exist pertaining to the status of interpersonal relations on a team, collective beliefs about leadership, success in coordination, and other aspects of team behavior and cognition. Pursuing this idea, the second part of this document provides an overview of available off-the-shelf products that might assist in extraction of mental models and elicitation of emotions based on an analysis of communicative texts among mission personnel. The search for text analysis software or tools revealed no available tools to enable extraction of mental models automatically, relying only on collected communication text. Nonetheless, using existing software to analyze how a team is functioning may be relevant for selection or training, when human experts are immediately available to analyze and act on the findings. Alternatively, if output can be sent to the ground periodically and analyzed by experts on the ground, then these software packages might be employed during missions as well. A demonstration of two text analysis software applications is presented. Another possibility explored in this document is the option of collecting biometric and proxemic measures such as keystroke dynamics and interpersonal distance in order to expose various individual or dyadic states that may be indicators or predictors of certain elements of team functioning. This document summarizes interviews conducted with personnel currently involved in observing or monitoring astronauts or who are in charge of technology that allows communication and monitoring. The objective of these interviews was to elicit their perspectives on monitoring team performance during long-duration missions and the feasibility of potential automatic non-obtrusive monitoring systems. Finally, in the last section, the report describes several priority areas for research that can help transform team mental models, biometrics, and/or proxemics into workable systems for unobtrusive monitoring of space flight team effectiveness. Conclusions from this work suggest that unobtrusive monitoring of space flight personnel is likely to be a valuable future tool for assessing team functioning, but that several research gaps must be filled before prototype systems can be developed for this purpose.
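
    As a concrete illustration of the keystroke-dynamics measures mentioned above, the basic features are timing statistics over key events. This is a minimal sketch with made-up timestamps, not the instrumentation discussed in the report:

        # Minimal keystroke-dynamics features: inter-key latencies from a
        # stream of (key, press_time_in_seconds) events. Real systems also
        # use key hold times and digraph-specific latencies.
        from statistics import mean, stdev

        events = [("t", 0.00), ("e", 0.18), ("a", 0.31), ("m", 0.55)]
        latencies = [t2 - t1 for (_, t1), (_, t2) in zip(events, events[1:])]
        print({"mean_latency": mean(latencies), "latency_sd": stdev(latencies)})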

  7. An Approach to Verification and Validation of a Reliable Multicasting Protocol

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1994-01-01

    This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.
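
    The co-evolution loop described here (drive the state model and the implementation with the same test and treat any divergence as an inconsistency) can be sketched generically. Everything below is hypothetical and greatly simplified; it is not the RMP code:

        # Hypothetical model-based differential test: run the reference
        # state model and the implementation on the same event sequence
        # and flag the first point where their states diverge.
        def model_step(state, event):
            table = {("idle", "join"): "joined", ("joined", "leave"): "idle"}
            return table.get((state, event), state)

        def run_differential(impl_step, events, start="idle"):
            m_state = i_state = start
            for n, event in enumerate(events):
                m_state = model_step(m_state, event)
                i_state = impl_step(i_state, event)
                if m_state != i_state:
                    return f"divergence at event {n}: model={m_state}, impl={i_state}"
            return "model and implementation agree"

        # An implementation with a seeded off-nominal bug: 'leave' is ignored.
        buggy_impl = lambda state, event: "joined" if event == "join" else state
        print(run_differential(buggy_impl, ["join", "leave"]))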

  8. An approach to verification and validation of a reliable multicasting protocol

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1995-01-01

    This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.

  9. Data management in clinical research: Synthesizing stakeholder perspectives.

    PubMed

    Johnson, Stephen B; Farach, Frank J; Pelphrey, Kevin; Rozenblit, Leon

    2016-04-01

    This study assesses data management needs in clinical research from the perspectives of researchers, software analysts and developers. This is a mixed-methods study that employs sublanguage analysis in an innovative manner to link the assessments. We performed content analysis using sublanguage theory on transcribed interviews conducted with researchers at four universities. A business analyst independently extracted potential software features from the transcriptions, which were translated into the sublanguage. This common sublanguage was then used to create survey questions for researchers, analysts and developers about the desirability and difficulty of features. Results were synthesized using the common sublanguage to compare stakeholder perceptions with the original content analysis. Individual researchers exhibited significant diversity of perspectives that did not correlate by role or site. Researchers had mixed feelings about their technologies, and sought improvements in integration, interoperability and interaction as well as engaging with study participants. Researchers and analysts agreed that data integration has higher desirability and mobile technology has lower desirability but disagreed on the desirability of data validation rules. Developers agreed that data integration and validation are the most difficult to implement. Researchers perceive tasks related to study execution, analysis and quality control as highly strategic, in contrast with tactical tasks related to data manipulation. Researchers have only partial technologic support for analysis and quality control, and poor support for study execution. Software for data integration and validation appears critical to support clinical research, but may be expensive to implement. Features to support study workflow, collaboration and engagement have been underappreciated, but may prove to be easy successes. Software developers should consider the strategic goals of researchers with regard to the overall coordination of research projects and teams, workflow connecting data collection with analysis and processes for improving data quality. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Coordinated Fault-Tolerance for High-Performance Computing Final Project Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panda, Dhabaleswar Kumar; Beckman, Pete

    2011-07-28

    With the Coordinated Infrastructure for Fault Tolerance Systems (CIFTS, as the original project came to be called) project, our aim has been to understand and tackle the following broad research questions, the answers to which will help the HEC community analyze and shape the direction of research in the field of fault tolerance and resiliency on future high-end leadership systems. Will availability of global fault information, obtained by fault information exchange between the different HEC software on a system, allow individual system software to better detect, diagnose, and adaptively respond to faults? If fault awareness is raised throughout the system through fault information exchange, is it possible to get all system software working together to provide more comprehensive end-to-end fault management on the system? What are the missing fault-tolerance features that widely used HEC system software lacks today that would inhibit such software from taking advantage of systemwide global fault information? What are the practical limitations of a systemwide approach for end-to-end fault management based on fault awareness and coordination? What mechanisms, tools, and technologies are needed to bring about fault awareness and coordination of responses on a leadership-class system? What standards, outreach, and community interaction are needed for adoption of the concept of fault awareness and coordination for fault management on future systems? Keeping our overall objectives in mind, the CIFTS team took a parallel fourfold approach. Our central goal was to design and implement a lightweight, scalable infrastructure with a simple, standardized interface to allow communication of fault-related information through the system and facilitate coordinated responses. This work led to the development of the Fault Tolerance Backplane (FTB) publish-subscribe API specification, together with a reference implementation and several experimental implementations on top of existing publish-subscribe tools. We enhanced the intrinsic fault tolerance capabilities of representative implementations of a variety of key HPC software subsystems and integrated them with the FTB. Targeted software subsystems included MPI communication libraries, checkpoint/restart libraries, resource managers and job schedulers, and system monitoring tools. Leveraging the aforementioned infrastructure, as well as developing and utilizing additional tools, we examined issues associated with expanded, end-to-end fault response from both system and application viewpoints. From the standpoint of system operations, we investigated log and root cause analysis, anomaly detection and fault prediction, and generalized notification mechanisms. Our applications work included libraries for fault-tolerant linear algebra, application frameworks for coupled multiphysics applications, and external frameworks to support monitoring and response for general applications. Our final goal was to engage the high-end computing community to increase awareness of tools and issues around coordinated end-to-end fault management.
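
    The FTB specification itself defines a C interface; purely as an illustration of the publish-subscribe pattern it standardizes (all names below are invented, not the FTB API):

        # Illustrative fault-information backplane: components publish
        # fault events into named event spaces, and every subscriber to a
        # space is notified so responses can be coordinated.
        from collections import defaultdict

        class Backplane:
            def __init__(self):
                self.subscribers = defaultdict(list)  # event space -> callbacks

            def subscribe(self, event_space, callback):
                self.subscribers[event_space].append(callback)

            def publish(self, event_space, payload):
                for callback in self.subscribers[event_space]:
                    callback(payload)

        bp = Backplane()
        bp.subscribe("node.memory", lambda e: print("scheduler notified:", e))
        bp.publish("node.memory", {"severity": "fatal", "node": 42})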

  11. Post-Flight Data Analysis Tool

    NASA Technical Reports Server (NTRS)

    George, Marina

    2018-01-01

    A software tool that facilitates the retrieval and analysis of post-flight data. This allows our team and other teams to effectively and efficiently analyze and evaluate post-flight data in order to certify commercial providers.

  12. Self-Managed Work Teams in Nursing Homes: Implementing and Empowering Nurse Aide Teams

    ERIC Educational Resources Information Center

    Yeatts, Dale E.; Cready, Cynthia; Ray, Beth; DeWitt, Amy; Queen, Courtney

    2004-01-01

    Purpose: This article describes the progress of our study to examine the advantages and costs of using self-managed nurse aide teams in nursing homes, steps that are being taken to implement such teams, and management strategies being used to manage the teams. Design and Methods: A quasi-experimental design is underway where certified nurse aide…

  13. Managing Communication among Geographically Distributed Teams: A Brazilian Case

    NASA Astrophysics Data System (ADS)

    Almeida, Ana Carina M.; de Farias Junior, Ivaldir H.; de S. Carneiro, Pedro Jorge

    The growing demand for qualified professionals is leading software companies to opt for distributed software development (DSD). At project conception, communication and synchronization of information are critical success factors. However, problems such as time-zone differences between teams, culture, language, and differing development processes among sites can hinder communication among teams. The main goal of this paper is therefore to describe the solution adopted by a Brazilian team to improve communication in a multisite project environment. The proposed solution was based on best practices described in the literature, and the communication plan was created based on the infrastructure needed by the project. The aim of this work is to minimize the impact of communication issues in multisite projects, increasing productivity and mutual understanding and avoiding rework in code and document writing.

  14. Happy@feet application for the management of diabetic foot osteomyelitis.

    PubMed

    Fiquet, S; Desbiez, F; Tauveron, I; Mrozek, N; Vidal, M; Lesens, O

    2016-12-01

    We aimed to develop and implement an application that could improve the management of patients presenting with diabetic foot osteomyelitis. Physicians from the multidisciplinary diabetic foot infection team and a software engineer first assessed the needs of infection management and of the application. An experimental version was then designed and progressively improved. A final version was implemented in clinical practice in 2013 by the multidisciplinary diabetic foot infection team of our university hospital. The application, known as Happy@feet, helps gather, and provides access to, all the data required for patient management; generates prescriptions (antibiotics, nursing care, blood tests); and helps follow the evolution of the wound. At the end of the consultation, a customizable letter is generated and may be sent directly to the persons concerned. The application also facilitates clinical and economic research. In 2014, Happy@feet was used to follow 83 patients during 271 consultations, 88 of which were day-care hospitalizations. The Happy@feet application is useful for managing these complex patients. Once the learning period is over, the time required for data collection is offset by the rapid generation of prescriptions and letters. Happy@feet can be used for research projects and will be used in a remote patient management project. Copyright © 2016. Published by Elsevier SAS.

  15. A Functional Data Model Realized: LaTiS Deployments

    NASA Astrophysics Data System (ADS)

    Baltzer, T.; Lindholm, D. M.; Wilson, A.; Putnam, B.; Christofferson, R.; Flores, N.; Roughton, S.

    2016-12-01

    At prior AGU annual meetings, members of the University of Colorado Laboratory for Atmospheric and Space Physics (LASP) Web Team have described work being done on a functional data model and the software framework, called LaTiS, that implements it. This presentation describes the evolution of LaTiS and presents several instances of LaTiS in operation today that demonstrate its capabilities. With LaTiS, serving a new dataset can be as simple as adding a small descriptor file. From providing access to spacecraft telemetry data in a variety of forms for the LASP mission operations group, to providing access to scientific data for the MMS and MAVEN science teams, to server-side functionality such as fusing satellite visible and infrared data with forecast model data into a GeoTIFF image for situational-awareness purposes, LaTiS has demonstrated itself to be a highly flexible, standards-based framework that provides easy data access, dynamic reformatting, and customizable server-side functionality.
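
    The central idea of the functional data model (a dataset is a function from an independent variable to dependent values, so operations become function manipulation rather than format-specific parsing) can be shown in a few lines. This is an illustration of the concept only, not the LaTiS API:

        # A dataset modeled as a function from time to a measured value.
        # Subsetting is then a filter over the function's domain.
        samples = {0.0: 1360.2, 1.0: 1360.5, 2.0: 1359.9}  # time -> value

        def dataset(t):
            return samples[t]

        def subset(domain, predicate):
            return [t for t in domain if predicate(t)]

        for t in subset(samples, lambda t: t >= 1.0):
            print(t, dataset(t))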

  16. Integration and Value of Earth Observations Data for Water Management Decision-Making in the Western U.S.

    NASA Astrophysics Data System (ADS)

    Larsen, S. G.; Willardson, T.

    2017-12-01

    Some exciting new science and tools are under development for water management decision-making in the Western U.S. This session will highlight a number of examples where remotely sensed observation data have been directly beneficial to water resource stakeholders, and discuss the steps needed between receipt of the data and their delivery as a finished data product or tool. We will explore case studies of how NASA scientists and researchers have worked together with western state water agencies and other stakeholders as a team to develop and interpret remotely sensed data observations, implement easy-to-use software and tools, train team members in their operation, and transition those tools into the institutions' workflows. The benefits of integrating these tools into stakeholder, agency, and end-user operations can be seen on the ground, when water is optimally managed for the decision-maker's objectives. These cases also point to the importance of building relationships and conduits for communication between researchers and their institutional counterparts.

  17. Integration and Value of Earth Observations Data for Water Management Decision-Making in the Western U.S.

    NASA Astrophysics Data System (ADS)

    Larsen, S. G.; Willardson, T.

    2016-12-01

    Some exciting new science and tools are under development for water management decision-making in the Western U.S. This session will highlight a number of examples where remotely sensed observation data have been directly beneficial to water resource stakeholders, and discuss the steps needed between receipt of the data and their delivery as a finished data product or tool. We will explore case studies of how NASA scientists and researchers have worked together with western state water agencies and other stakeholders as a team to develop and interpret remotely sensed data observations, implement easy-to-use software and tools, train team members in their operation, and transition those tools into the institutions' workflows. The benefits of integrating these tools into stakeholder, agency, and end-user operations can be seen on the ground, when water is optimally managed for the decision-maker's objectives. These cases also point to the importance of building relationships and conduits for communication between researchers and their institutional counterparts.

  18. Unintended adverse consequences of a clinical decision support system: two cases.

    PubMed

    Stone, Erin G

    2018-05-01

    Many institutions have implemented clinical decision support systems (CDSSs). While CDSS research papers have focused on benefits of these systems, there is a smaller body of literature showing that CDSSs may also produce unintended adverse consequences (UACs). Detailed here are 2 cases of UACs resulting from a CDSS. Both of these cases were related to external systems that fed data into the CDSS. In the first case, lack of knowledge of data categorization in an external pharmacy system produced a UAC; in the second case, the change of a clinical laboratory instrument produced the UAC. CDSSs rely on data from many external systems. These systems are dynamic and may have changes in hardware, software, vendors, or processes. Such changes can affect the accuracy of CDSSs. These cases point to the need for the CDSS team to be familiar with these external systems. This team (manager and alert builders) should include members in specific clinical specialties with deep knowledge of these external systems.

  19. Formal methods demonstration project for space applications

    NASA Technical Reports Server (NTRS)

    Divito, Ben L.

    1995-01-01

    The Space Shuttle program is cooperating in a pilot project to apply formal methods to live requirements analysis activities. As one of the larger ongoing Shuttle Change Requests (CRs), the Global Positioning System (GPS) CR involves a significant upgrade to the Shuttle's navigation capability. Shuttles are to be outfitted with GPS receivers, and the primary avionics software will be enhanced to accept GPS-provided positions and integrate them into navigation calculations. Prior to implementing the CR, requirements analysts at Loral Space Information Systems, the Shuttle software contractor, must scrutinize the CR to identify and resolve any requirements issues. We describe an ongoing task of the Formal Methods Demonstration Project for Space Applications whose goal is to find an effective way to use formal methods in the GPS CR requirements analysis phase. This phase is currently under way, and a small team from NASA Langley, ViGYAN Inc., and Loral is now engaged in this task. Background on the GPS CR is provided, and an overview of the hardware/software architecture is presented. We outline the approach being taken to formalize the requirements, only a subset of which is being attempted. The approach features the use of the PVS specification language to model 'principal functions', which are major units of Shuttle software. Conventional state machine techniques form the basis of our approach. Given this background, we present interim results based on a snapshot of work in progress. Samples of requirements specifications rendered in PVS are offered as illustration. We walk through a specification sketch for the principal function known as GPS Receiver State Processing. Results to date are summarized, and feedback from Loral requirements analysts is highlighted. Preliminary data comparing issues detected by the formal methods team with those detected using existing requirements analysis methods are shown. We conclude by discussing our plan to complete the remaining activities of this task.

  20. Implementing the Team Approach in Higher Education: Important Questions and Advice for Administrators

    ERIC Educational Resources Information Center

    Lara, Tracy M.; Hughey, Aaron W.

    2008-01-01

    Many companies have implemented the team approach as a way to empower their employees in an effort to enhance productivity, quality and overall profitability. While application of the concept to higher education administration has been limited, colleges and universities could benefit from the team approach if implemented appropriately and…

  1. A Brief Survey of the Team Software ProcessSM (TSPSM)

    DTIC Science & Technology

    2011-10-24

    spent more than 20 years in industry as a software engineer, system designer, project leader, and development manager working on control systems...InnerWorkings, Inc. Instituto Tecnologico y de Estudios Superiores de Monterrey Siemens AG SILAC Ingenieria de Software S.A. de C.V

  2. Organizational Stresses and Practices Impeding Quality Software Development in Government Procurements

    ERIC Educational Resources Information Center

    Holcomb, Glenda S.

    2010-01-01

    This qualitative, phenomenological doctoral dissertation research study explored software project team members' perceptions of changing organizational cultures based on management decisions made at project deviation points. The research study provided a view into challenged or failing government software projects through the lived experiences…

  3. The Essence of Using Collaborative Technology for Virtual Team Members: A Study Using Interpretative Phenomenology

    ERIC Educational Resources Information Center

    Houck, Christiana L.

    2013-01-01

    This interpretative phenomenological study used semi-structured interviews of 10 participants to gain a deeper understanding of the experience for virtual team members using collaborative technology. The participants were knowledge workers from global software companies working on cross-functional project teams at a distance. There were no…

  4. Spacecraft operations automation: Automatic alarm notification and web telemetry display

    NASA Astrophysics Data System (ADS)

    Short, Owen G.; Leonard, Robert E.; Bucher, Allen W.; Allen, Bryan

    1999-11-01

    In these times of Faster, Better, Cheaper (FBC) spacecraft, Spacecraft Operations Automation is an area that is targeted by many Operations Teams. To meet the challenges of the FBC environment, the Mars Global Surveyor (MGS) Operations Team designed and quickly implemented two new low-cost technologies: one which monitors spacecraft telemetry, checks the status of the telemetry, and contacts technical experts by pager when any telemetry datapoints exceed alarm limits, and a second which allows quick and convenient remote access to data displays. The first new technology is Automatic Alarm Notification (AAN). AAN monitors spacecraft telemetry and will notify engineers automatically if any telemetry is received which creates an alarm condition. The second new technology is Web Telemetry Display (WTD). WTD captures telemetry displays generated by the flight telemetry system and makes them available to the project web server. This allows engineers to check the health and status of the spacecraft from any computer capable of connecting to the global internet, without needing normally-required specialized hardware and software. Both of these technologies have greatly reduced operations costs by alleviating the need to have operations engineers monitor spacecraft performance on a 24 hour per day, 7 day per week basis from a central Mission Support Area. This paper gives details on the design and implementation of AAN and WTD, discusses their limitations, and lists the ongoing benefits which have accrued to MGS Flight Operations since their implementation in late 1996.
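
    At its core, the AAN concept is a limit-checking loop over incoming telemetry with a notification side effect. The sketch below uses hypothetical limits and a print stand-in for the pager gateway; it is not the MGS flight software:

        # Hypothetical limit check in the spirit of Automatic Alarm
        # Notification: compare each telemetry point against its alarm
        # limits and page the on-call engineer on any violation.
        ALARM_LIMITS = {"battery_voltage": (24.0, 32.0), "tank_temp_c": (-10.0, 45.0)}

        def notify_pager(point, value):
            print(f"PAGE: {point}={value} outside alarm limits")

        def check_telemetry(frame):
            for point, value in frame.items():
                low, high = ALARM_LIMITS[point]
                if not (low <= value <= high):
                    notify_pager(point, value)

        check_telemetry({"battery_voltage": 23.1, "tank_temp_c": 21.0})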

  5. Does team training work? Principles for health care.

    PubMed

    Salas, Eduardo; DiazGranados, Deborah; Weaver, Sallie J; King, Heidi

    2008-11-01

    Teamwork is integral to a working environment conducive to patient safety and care. Team training is one methodology designed to equip team members with the competencies necessary for optimizing teamwork. There is evidence of team training's effectiveness in highly complex and dynamic work environments, such as aviation and health care. However, most quantitative evaluations of training do not offer any insight into the actual reasons why, how, and when team training is effective. To address this gap in understanding, and to provide guidance for members of the health care community interested in implementing team training programs, this article presents both quantitative results and a specific qualitative review and content analysis of team training implemented in health care. Based on this review, we offer eight evidence-based principles for effective planning, implementation, and evaluation of team training programs specific to health care.

  6. National plan to enhance aviation safety through human factors improvements

    NASA Technical Reports Server (NTRS)

    Foushee, Clay

    1990-01-01

    The purpose of this section of the plan is to establish a development and implementation strategy plan for improving safety and efficiency in the Air Traffic Control (ATC) system. These improvements will be achieved through the proper applications of human factors considerations to the present and future systems. The program will have four basic goals: (1) prepare for the future system through proper hiring and training; (2) develop a controller work station team concept (managing human errors); (3) understand and address the human factors implications of negative system results; and (4) define the proper division of responsibilities and interactions between the human and the machine in ATC systems. This plan addresses six program elements which together address the overall purpose. The six program elements are: (1) determine principles of human-centered automation that will enhance aviation safety and the efficiency of the air traffic controller; (2) provide new and/or enhanced methods and techniques to measure, assess, and improve human performance in the ATC environment; (3) determine system needs and methods for information transfer between and within controller teams and between controller teams and the cockpit; (4) determine how new controller work station technology can optimally be applied and integrated to enhance safety and efficiency; (5) assess training needs and develop improved techniques and strategies for selection, training, and evaluation of controllers; and (6) develop standards, methods, and procedures for the certification and validation of human engineering in the design, testing, and implementation of any hardware or software system element which affects information flow to or from the human.

  7. Application of the probabilistic model BET_UNREST during a volcanic unrest simulation exercise in Dominica, Lesser Antilles

    NASA Astrophysics Data System (ADS)

    Constantinescu, Robert; Robertson, Richard; Lindsay, Jan M.; Tonini, Roberto; Sandri, Laura; Rouwet, Dmitri; Smith, Patrick; Stewart, Roderick

    2016-11-01

    We report on the first "real-time" application of the BET_UNREST (Bayesian Event Tree for Volcanic Unrest) probabilistic model, during a VUELCO Simulation Exercise carried out on the island of Dominica, Lesser Antilles, in May 2015. Dominica has a concentration of nine potentially active volcanic centers; frequent volcanic earthquake swarms at shallow depths, intense geothermal activity, and recent phreatic explosions (1997) indicate the region is still active. The exercise scenario was developed in secret by a team of scientists from The University of the West Indies (Trinidad and Tobago) and the University of Auckland (New Zealand). The simulated unrest activity was provided to the exercise's Scientific Team in three "phases" through exercise injects comprising processed monitoring data. We applied the newly created BET_UNREST model, through its software implementation PyBetUnrest, to estimate the probabilities of having (i) unrest of (ii) magmatic, hydrothermal, or tectonic origin, which may or may not lead to (iii) an eruption. The probabilities obtained for each simulated phase raised controversy and intense deliberations among the members of the Scientific Team. The results were often considered to be "too high" and were not included in any of the reports presented to the ODM (Office for Disaster Management), revealing interesting crisis-communication challenges. We concluded that the PyBetUnrest application itself was successful and brought the tool one step closer to full implementation. However, as with any newly proposed method, it needs more testing, and we therefore make a series of recommendations for future applications.
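
    The event-tree structure underlying BET_UNREST chains conditional probabilities along the nodes named above: unrest, its origin, and eruption. A minimal sketch of that bookkeeping follows, with invented numbers; the real PyBetUnrest estimates each node from monitoring data and prior distributions rather than from fixed values.

      # Toy event-tree calculation in the spirit of BET_UNREST.
      # All node probabilities below are invented for illustration only.
      p_unrest            = 0.60   # node 1: P(unrest)
      p_magmatic_g_unrest = 0.30   # node 2: P(magmatic origin | unrest)
      p_erupt_g_magmatic  = 0.20   # node 3: P(eruption | magmatic unrest)

      # The absolute probability of eruption is the product along the branch.
      p_eruption = p_unrest * p_magmatic_g_unrest * p_erupt_g_magmatic
      print(f"P(eruption) = {p_eruption:.3f}")   # -> P(eruption) = 0.036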

  8. CrossTalk: The Journal of Defense Software Engineering. Volume 18, Number 9

    DTIC Science & Technology

    2005-09-01

    2004. 12. Humphrey, Watts. Introduction to the Personal Software Process. Addison-Wesley, 1997. 13. Humphrey, Watts. Introduction to the Team... Personal Software Process (PSP) is a software development process originated by Watts Humphrey at the Software Engineering Institute (SEI) in the... meets its commitments and bring a sense of control and predictability into an apparently chaotic project. References 1. Humphrey, Watts. Coaching

  9. Implementing an Open Source Electronic Health Record System in Kenyan Health Care Facilities: Case Study

    PubMed Central

    Magare, Steve; Monda, Jonathan; Kamau, Onesmus; Houston, Stuart; Fraser, Hamish; Powell, John; English, Mike; Paton, Chris

    2018-01-01

    Background The Kenyan government, working with international partners and local organizations, has developed an eHealth strategy, specified standards and guidelines for electronic health record adoption in public hospitals, and implemented two major health information technology projects: District Health Information Software Version 2, for collating national health care indicators, and a rollout of the KenyaEMR and International Quality Care Health Management Information Systems, for managing 600 HIV clinics across the country. Following these projects, a modified version of the Open Medical Record System electronic health record was specified and developed to fulfill the clinical and administrative requirements of health care facilities operated by devolved counties in Kenya and to automate the process of collating health care indicators and entering them into the District Health Information Software Version 2 system. Objective We aimed to present a descriptive case study of the implementation of an open source electronic health record system in public health care facilities in Kenya. Methods We conducted a landscape review of existing literature concerning eHealth policies and electronic health record development in Kenya. Following initial discussions with the Ministry of Health, the World Health Organization, and implementing partners, we conducted a series of visits to implementing sites to conduct semistructured individual interviews and group discussions with stakeholders to produce a historical case study of the implementation. Results This case study describes how consultants based in Kenya, working with developers in India and project stakeholders, implemented the new system into several public hospitals in a county in rural Kenya. The implementation process included upgrading the hospital information technology infrastructure, training users, and attempting to garner administrative and clinical buy-in for adoption of the system. The initial deployment was ultimately scaled back due to a complex mix of sociotechnical and administrative issues. Learning from these early challenges, the system is now being redesigned and prepared for deployment in 6 new counties across Kenya. Conclusions Implementing electronic health record systems is a challenging process in high-income settings. In low-income settings, such as Kenya, open source software may offer some respite from the high costs of software licensing, but the familiar challenges of clinical and administrative buy-in, the need to adequately train users, and the need for the provision of ongoing technical support are common across the North-South divide. Strategies such as creating local support teams, using local development resources, ensuring end user buy-in, and rolling out in smaller facilities before larger hospitals are being incorporated into the project. These are positive developments to help maintain momentum as the project continues. Further integration with existing open source communities could help ongoing development and implementations of the project. We hope this case study will provide some lessons and guidance for other challenging implementations of electronic health record systems as they continue across Africa. PMID:29669709

  10. Work-team implementation.

    PubMed

    Reiste, K K; Hubrich, A

    1996-02-01

    The authors describe the implementation of the Work-Team Concept at the Frigidaire plant in Jefferson, Iowa. By forming teams, plant staff have made significant improvements in worker safety, product quality, customer service, cost-effectiveness, and overall employee well-being.

  11. Bellerophon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lingerfelt, Eric J; Messer, II, Otis E

    2017-01-02

    The Bellerophon software system supports CHIMERA, a production-level HPC application that simulates the evolution of core-collapse supernovae. Bellerophon enables CHIMERA's geographically dispersed team of collaborators to perform job monitoring and real-time data analysis from multiple supercomputing resources, including platforms at OLCF, NERSC, and NICS. Its multi-tier architecture provides an encapsulated, end-to-end software solution that enables the CHIMERA team to quickly and easily access highly customizable animated and static views of results from anywhere in the world via a cross-platform desktop application.

  12. Fault-tolerant software - Experiment with the sift operating system. [Software Implemented Fault Tolerance computer

    NASA Technical Reports Server (NTRS)

    Brunelle, J. E.; Eckhardt, D. E., Jr.

    1985-01-01

    Results are presented of an experiment conducted in the NASA Avionics Integrated Research Laboratory (AIRLAB) to investigate the implementation of fault-tolerant software techniques on fault-tolerant computer architectures, in particular the Software Implemented Fault Tolerance (SIFT) computer. The N-version programming and recovery block techniques were implemented on a portion of the SIFT operating system. The results indicate that effectively implementing fault-tolerant software design techniques will impact system requirements, and they suggest that retrofitting fault-tolerant software onto existing designs will be inefficient and may require system modification.
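
    As a reminder of how the recovery-block technique named above is organized, the sketch below runs a primary routine, screens its result with an acceptance test, and falls back to an independent alternate on failure. The routines here are toy examples standing in for the SIFT operating-system functions used in the experiment.

      import math

      def acceptance_test(result):
          """Plausibility check applied to each candidate result."""
          return isinstance(result, float) and math.isfinite(result) and result >= 0.0

      def primary(x):
          """Preferred implementation; raises ValueError for x < 0."""
          return math.sqrt(x)

      def alternate(x):
          """Independent fallback with a degraded but safe definition."""
          return math.sqrt(abs(x))

      def recovery_block(x):
          for routine in (primary, alternate):
              try:
                  result = routine(x)
              except Exception:
                  continue                 # a crash counts as a failed test
              if acceptance_test(result):
                  return result
          raise RuntimeError("all alternates failed the acceptance test")

      print(recovery_block(-4.0))          # primary fails; alternate yields 2.0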

  13. Production Techniques for Computer-Based Learning Material.

    ERIC Educational Resources Information Center

    Moonen, Jef; Schoenmaker, Jan

    Experiences in the development of educational software in the Netherlands have included the use of individual and team approaches, the determination of software content and how it should be presented, and the organization of the entire development process, from experimental programs to prototype to final product. Because educational software is a…

  14. Fairbanks North Star borough rural roads upgrade inventory and cost estimation software user guide : version I.

    DOT National Transportation Integrated Search

    2013-04-01

    The Rural Road Upgrade Inventory and Cost Estimation Software is designed by the AUTC research team to help the Fairbanks North Star Borough (FNSB) estimate the cost of upgrading rural roads located in the Borough's Service Areas. The Software pe...

  15. Knowledge Sharing through Pair Programming in Learning Environments: An Empirical Study

    ERIC Educational Resources Information Center

    Kavitha, R. K.; Ahmed, M. S.

    2015-01-01

    Agile software development is an iterative and incremental methodology, where solutions evolve from self-organizing, cross-functional teams. Pair programming is a type of agile software development technique where two programmers work together with one computer for developing software. This paper reports the results of the pair programming…

  16. WFF TOPEX Software Documentation Altimeter Instrument File (AIF) Processing, October 1998. Volume 3

    NASA Technical Reports Server (NTRS)

    Lee, Jeffrey; Lockwood, Dennis

    2003-01-01

    This document is a compendium of the WFF TOPEX Software Development Team's knowledge regarding Sensor Data Record (SDR) Processing. It includes many elements of a requirements document, a software specification document, a software design document, and a user's manual. In the more technical sections, this document assumes the reader is familiar with TOPEX and instrument files.

  17. Space and Missile Systems Center Standard: Software Development

    DTIC Science & Technology

    2015-01-16

    maintenance, or any other activity or combination of activities resulting in products. Within this standard, requirements to “develop,” “define...integration, reuse, reengineering, maintenance, or any other activity that results in products). The term “developer” encompasses all software team...activities that results in software products. Software development includes new development, modification, reuse, reengineering, maintenance, and any other

  18. Evaluation of Multi-Age Team (MAT): Implementation at Crabapple Middle School: Report for 1995-1996.

    ERIC Educational Resources Information Center

    Elmore, Randy; Wisenbaker, Joseph

    In fall 1993, administrators and faculty at the Crabapple Middle School in Roswell, Georgia, implemented the Multi-Age Team (MAT) program, creating multiage teams of sixth-, seventh-, and eighth-grade students. The project's main goal was to enhance self-esteem. Additional goals included implementation of interdisciplinary, thematic instruction;…

  19. Evaluation of Multi-Age Team (MAT) Implementation at Crabapple Middle School: Report for 1994-1995.

    ERIC Educational Resources Information Center

    Elmore, Randy; Wisenbaker, Joseph

    In fall 1993, administrators and faculty at the Crabapple Middle School in Roswell, Georgia, implemented the Multi-Age Team (MAT) program, creating multi-age teams of sixth-, seventh-, and eighth-grade students. The project's main goal was to enhance self-esteem. Additional goals included implementation of interdisciplinary, thematic instruction;…

  20. The Effects of Team-Based Learning on Social Studies Knowledge Acquisition in High School

    ERIC Educational Resources Information Center

    Wanzek, Jeanne; Vaughn, Sharon; Kent, Shawn C.; Swanson, Elizabeth A.; Roberts, Greg; Haynes, Martha; Fall, Anna-Mária; Stillman-Spisak, Stephanie J.; Solis, Michael

    2014-01-01

    This randomized control trial examined the efficacy of team-based learning implemented within 11th-grade social studies classes. A randomized blocked design was implemented with 26 classes randomly assigned to treatment or comparison. In the treatment classes teachers implemented team-based learning practices to support students in engaging in…

  1. Team learning and innovation in nursing, a review of the literature.

    PubMed

    Timmermans, Olaf; Van Linge, Roland; Van Petegem, Peter; Van Rompaey, Bart; Denekens, Joke

    2012-01-01

    The capability to learn and innovate has been recognized as a key factor for nursing teams to deliver high-quality performance. Researchers suggest there is a relation between team-learning activities and changes in nursing teams throughout the implementation of novelties. A review of the literature was conducted with regard to the relation between team learning and the implementation of innovations in nursing teams, and to explore factors that contribute to or hinder team learning. The search was limited to studies that were published in English or Dutch between 1998 and 2010. Eight studies were included in the review. The results of this review revealed that research on team learning and innovation in nursing is limited. The included studies showed moderate methodological quality and low levels of evidence. Team learning included processes to gather, process, and store information from different innovations within the nursing team, and the prevalence of team-learning activities was promoted or hindered by individual and contextual factors. Further research is needed on the relation between team learning and the implementation of innovations in nursing. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Managing MDO Software Development Projects

    NASA Technical Reports Server (NTRS)

    Townsend, J. C.; Salas, A. O.

    2002-01-01

    Over the past decade, the NASA Langley Research Center developed a series of 'grand challenge' applications demonstrating the use of parallel and distributed computation and multidisciplinary design optimization. All but the last of these applications were focused on the high-speed civil transport vehicle; the final application focused on reusable launch vehicles. Teams of discipline experts developed these multidisciplinary applications by integrating legacy engineering analysis codes. As teams became larger and the application development became more complex with increasing levels of fidelity and numbers of disciplines, the need for applying software engineering practices became evident. This paper briefly introduces the application projects and then describes the approaches taken in project management and software engineering for each project; lessons learned are highlighted.

  3. Software Program: Software Management Guidebook

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The purpose of this NASA Software Management Guidebook is twofold. First, this document defines the core products and activities required of NASA software projects. It defines life-cycle models and activity-related methods but acknowledges that no single life-cycle model is appropriate for all NASA software projects. It also acknowledges that the appropriate method for accomplishing a required activity depends on characteristics of the software project. Second, this guidebook provides specific guidance to software project managers and team leaders in selecting appropriate life cycles and methods to develop a tailored plan for a software engineering project.

  4. The KSC Simulation Team practices for contingencies in Firing Room 1

    NASA Technical Reports Server (NTRS)

    1998-01-01

    In Firing Room 1 at KSC, Shuttle launch team members put the Shuttle system through an integrated simulation. The control room is set up with software used to simulate flight and ground systems in the launch configuration. A Simulation Team, comprised of KSC engineers, introduces 12 or more major problems to prepare the launch team for worst-case scenarios. Such tests and simulations keep the Shuttle launch team sharp and ready for liftoff. The next liftoff is targeted for Oct. 29.

  5. Development of Distributed Research Center for monitoring and projecting regional climatic and environmental changes: first results

    NASA Astrophysics Data System (ADS)

    Gordov, Evgeny; Shiklomanov, Alexander; Okladinikov, Igor; Prusevich, Alex; Titov, Alexander

    2016-04-01

    Description and first results of the cooperative project "Development of Distributed Research Center for monitoring and projecting of regional climatic and environmental changes," recently started by SCERT IMCES and ESRC UNH, are reported. The project is aimed at developing a prototype hardware and software platform for a Distributed Research Center (DRC) for monitoring and projecting regional climatic and environmental changes over areas of mutual interest, and at demonstrating the benefits of such collaboration, which complements skills and regional knowledge across the northern extratropics. In the framework of the project, innovative approaches to "cloud" processing and analysis of large geospatial datasets will be developed on the technical platforms of two leading U.S. and Russian institutions involved in research on climate change and its consequences. The anticipated results will create a pathway for the development and deployment of thematic international virtual research centers focused on interdisciplinary environmental studies by international research teams. The DRC under development will combine the best features and functionality of the information-computational systems RIMS (http://rims.unh.edu) and CLIMATE (http://climate.scert.ru/), developed earlier by the cooperating teams and widely used in Northern Eurasia environmental studies. The project includes several major directions of research (Tasks) listed below. 1. Development of the architecture and definition of the major hardware and software components of the DRC for monitoring and projecting regional environmental changes. 2. Development of an information database and computing software suite for distributed processing and analysis of large geospatial data hosted at ESRC and IMCES SB RAS. 3. Development of a geoportal, thematic web client, and web services providing international research teams with access to "cloud" computing resources at the DRC; two options will be executed: access through a basic graphical web browser and through geographic information systems (GIS). 4. Using the output of the first three tasks, compilation of the DRC prototype, its validation, and testing of the DRC's feasibility for analyses of recent regional environmental changes over Northern Eurasia and North America. Results of the first stage of the project implementation are presented. This work is supported by the Ministry of Education and Science of the Russian Federation, Agreement № 14.613.21.0037.

  6. Improving Clinical Workflow in Ambulatory Care: Implemented Recommendations in an Innovation Prototype for the Veteran’s Health Administration

    PubMed Central

    Patterson, Emily S.; Lowry, Svetlana Z.; Ramaiah, Mala; Gibbons, Michael C.; Brick, David; Calco, Robert; Matton, Greg; Miller, Anne; Makar, Ellen; Ferrer, Jorge A.

    2015-01-01

    Introduction: Human factors workflow analyses prior to technology implementation are recommended to improve workflow in ambulatory care settings. In this paper we describe how insights from a workflow analysis conducted by NIST were implemented in a software prototype developed for a Veteran’s Health Administration (VHA) VAi2 innovation project, along with the associated lessons learned. Methods: We organize the original recommendations and associated stages and steps visualized in process maps from NIST, and the VA’s lessons learned from implementing the recommendations in the VAi2 prototype, according to four stages: 1) before the patient visit, 2) during the visit, 3) discharge, and 4) visit documentation. NIST recommendations to improve workflow in ambulatory care (outpatient) settings and process map representations were based on reflective statements collected during one-hour discussions with three physicians. The development of the VAi2 prototype was initially conducted independently of the NIST recommendations, but at a midpoint in the process development, all of the implementation elements were compared with the NIST recommendations and lessons learned were documented. Findings: Story-based displays and templates with default preliminary order sets were used to support scheduling, time-critical notifications, drafting medication orders, and supporting a diagnosis-based workflow. These templates enabled customization to the level of diagnostic uncertainty. Functionality was designed to support cooperative work across interdisciplinary team members, including shared documentation sessions with tracking of text modifications, medication lists, and patient education features. Displays were customized to the role and included access for consultants and site-defined educator teams. Discussion: Workflow, usability, and patient safety can be enhanced through clinician-centered design of electronic health records. The lessons learned from implementing NIST recommendations to improve workflow in ambulatory care using an EHR provide a first step in moving from a billing-centered perspective on how to maintain accurate, comprehensive, and up-to-date information about a group of patients to a clinician-centered perspective. These recommendations point the way towards a “patient visit management system,” which incorporates broader notions of supporting workload management, supporting flexible flow of patients and tasks, enabling accountable distributed work across members of the clinical team, and supporting dynamic tracking of steps in tasks that have longer time distributions. PMID:26290887

  7. Are the expected benefits of requirements reuse hampered by distance? An experiment.

    PubMed

    Carrillo de Gea, Juan M; Nicolás, Joaquín; Fernández-Alemán, José L; Toval, Ambrosio; Idri, Ali

    2016-01-01

    Software development processes are often performed by distributed teams which may be separated by great distances. Global software development (GSD) has undergone a significant growth in recent years. The challenges concerning GSD are especially relevant to requirements engineering (RE). Stakeholders need to share a common ground, but there are many difficulties as regards the potentially variable interpretation of the requirements in different contexts. We posit that the application of requirements reuse techniques could alleviate this problem through the diminution of the number of requirements open to misinterpretation. This paper presents a reuse-based approach with which to address RE in GSD, with special emphasis on specification techniques, namely parameterised requirements and traceability relationships. An experiment was carried out with the participation of 29 university students enrolled on a Computer Science and Engineering course. Two main scenarios that represented co-localisation and distribution in software development were portrayed by participants from Spain and Morocco. The global teams achieved a slightly better performance than the co-located teams as regards effectiveness, which could be a result of the worse productivity of the global teams in comparison to the co-located teams. Subjective perceptions were generally more positive in the case of the distributed teams (difficulty, speed, and understanding), with the exception of quality. A theoretical model has been proposed as an evaluation framework with which to analyse, from the point of view of the factor of distance, the effect of requirements specification techniques on a set of performance and perception-based variables. The experiment utilised a new internationalisation requirements catalogue. None of the differences found between co-located and distributed teams were significant according to the outcome of our statistical tests. The well-known benefits of requirements reuse in traditional co-located projects could, therefore, also be expected in GSD projects.
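
    For readers unfamiliar with the specification technique named above, a parameterised requirement is a reusable template whose placeholders are bound per project. The example below is invented for illustration and is not drawn from the study's internationalisation catalogue.

      # Minimal sketch of a parameterised requirement (hypothetical wording).
      TEMPLATE = ("The {system} shall display all user-facing text in "
                  "{language}, using the {encoding} character encoding.")

      def instantiate(**params):
          """Bind the template's parameters for a concrete project."""
          return TEMPLATE.format(**params)

      print(instantiate(system="patient portal", language="Spanish",
                        encoding="UTF-8"))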

  8. Distributing Data to Hand-Held Devices in a Wireless Network

    NASA Technical Reports Server (NTRS)

    Hodges, Mark; Simmons, Layne

    2008-01-01

    ADROIT is a developmental computer program for real-time distribution of complex data streams for display on Web-enabled, portable terminals held by members of an operational team of a spacecraft-command-and-control center who may be located away from the center. Examples of such terminals include personal data assistants, laptop computers, and cellular telephones. ADROIT would make it unnecessary to equip each terminal with platform- specific software for access to the data streams or with software that implements the information-sharing protocol used to deliver telemetry data to clients in the center. ADROIT is a combination of middleware plus software specific to the center. (Middleware enables one application program to communicate with another by performing such functions as conversion, translation, consolidation, and/or integration.) ADROIT translates a data stream (voice, video, or alphanumerical data) from the center into Extensible Markup Language, effectuates a subscription process to determine who gets what data when, and presents the data to each user in real time. Thus, ADROIT is expected to enable distribution of operations and to reduce the cost of operations by reducing the number of persons required to be in the center.
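
    The pattern described (translate each telemetry sample to XML, then deliver it to subscribed clients) can be sketched as follows; the channel names and data model are hypothetical, as the article does not detail ADROIT's schema or wire protocol.

      # Sketch of ADROIT's subscribe-and-translate pattern (invented data model).
      import xml.etree.ElementTree as ET

      subscriptions = {}   # channel name -> list of subscriber callbacks

      def subscribe(channel, callback):
          subscriptions.setdefault(channel, []).append(callback)

      def publish(channel, sample):
          """Translate one telemetry sample to XML and push it to subscribers."""
          root = ET.Element("sample", channel=channel)
          for key, value in sample.items():
              ET.SubElement(root, key).text = str(value)
          payload = ET.tostring(root, encoding="unicode")
          for callback in subscriptions.get(channel, []):
              callback(payload)

      subscribe("power", lambda xml: print("handheld received:", xml))
      publish("power", {"bus_voltage": 28.1, "utc": "2008-01-01T00:00:00Z"})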

  9. Resource Allocation Planning Helper (RALPH): Lessons learned

    NASA Technical Reports Server (NTRS)

    Durham, Ralph; Reilly, Norman B.; Springer, Joe B.

    1990-01-01

    The current task of the Resource Allocation Process includes the planning and apportionment of JPL's Ground Data System, composed of the Deep Space Network and Mission Control and Computing Center facilities. The addition of the data-driven, rule-based planning system, RALPH, has expanded the planning horizon from 8 weeks to 10 years and has resulted in large labor savings. Use of the system has also resulted in important improvements in science return through enhanced resource utilization. In addition, RALPH has been instrumental in supporting rapid turnaround for an increased volume of special "what if" studies. The status of RALPH is briefly reviewed, and important lessons learned are examined: the creation of a highly functional design team; an evolutionary design and implementation period in which an AI shell was selected, prototyped, and ultimately abandoned; and fundamental changes to the very process that spawned the tool kit. Principal topics include proper integration of software tools within the planning environment, transition from prototype to delivered software, changes in the planning methodology as a result of evolving software capabilities, and creation of the ability to develop and process generic requirements to allow planning flexibility.

  10. Landsat-7 Simulation and Testing Environments

    NASA Technical Reports Server (NTRS)

    Holmes, E.; Ha, K.; Hawkins, K.; Lombardo, J.; Ram, M.; Sabelhaus, P.; Scott, S.; Phillips, R.

    1999-01-01

    A spacecraft Attitude Control and Determination Subsystem (ACDS) is heavily dependent upon simulation throughout its entire development, implementation, and ground test cycle. Engineering simulation tools are typically developed to design and analyze control systems and validate the design, and software simulation tools are required to qualify the flight software. However, the need for simulation does not end here. Operating the ACDS of a spacecraft on the ground requires the simulation of spacecraft dynamics, disturbance modeling, and celestial body motion. Sensor data must also be simulated and substituted for actual sensor data on the ground so that the spacecraft will respond by sending commands to the actuators as it will on orbit. And finally, the simulator is the primary training tool and test-bed for the Flight Operations Team. In this paper, the various ACDS simulators developed for or used by the Landsat 7 project are described. The paper includes a description of each tool, its unique attributes, and its role in the overall development and testing of the ACDS. Finally, a section is included which discusses how the coordinated use of these simulation tools can maximize the probability of uncovering software, hardware, and operations errors during the ground test process.

  11. Design validation of an eye-safe scanning aerosol lidar with the Center for Lidar and Atmospheric Sciences Students (CLASS) at Hampton University

    NASA Astrophysics Data System (ADS)

    Richter, Dale A.; Higdon, N. S.; Ponsardin, Patrick L.; Sanchez, David; Chyba, Thomas H.; Temple, Doyle A.; Gong, Wei; Battle, Russell; Edmondson, Mika; Futrell, Anne; Harper, David; Haughton, Lincoln; Johnson, Demetra; Lewis, Kyle; Payne-Baggott, Renee S.

    2002-01-01

    ITT's Advanced Engineering and Sciences Division and the Hampton University Center for Lidar and Atmospheric Sciences Students (CLASS) team have worked closely to design, fabricate, and test an eye-safe, scanning aerosol-lidar system that can be safely deployed and used by students from a variety of disciplines. CLASS is a 5-year undergraduate-research training program funded by NASA to provide hands-on atmospheric-science and lidar-technology education. The system is based on a 1.5 micron, 125 mJ, 20 Hz eye-safe optical parametric oscillator (OPO) and will be used by the HU researchers and students to evaluate the biological impact of aerosols, clouds, and pollution. The system design tasks we addressed cover a variety of systems issues: the development of software to calculate eye-safety levels and to model lidar performance, implementation of eye-safety features in the lidar transmitter, optimization of the receiver using optical ray-tracing software, evaluation of detectors and amplifiers in the near IR, testing of OPO and receiver technology, and development of hardware and software for laser and scanner control and video display of the scan region.

  12. Team Collaboration Software

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Schrock, Mitchell; Baldwin, John R.; Borden, Charles S.

    2010-01-01

    The Ground Resource Allocation and Planning Environment (GRAPE 1.0) is a Web-based, collaborative team environment based on the Microsoft SharePoint platform, which provides Deep Space Network (DSN) resource planners tools and services for sharing information and performing analysis.

  13. Photo-realistic Terrain Modeling and Visualization for Mars Exploration Rover Science Operations

    NASA Technical Reports Server (NTRS)

    Edwards, Laurence; Sims, Michael; Kunz, Clayton; Lees, David; Bowman, Judd

    2005-01-01

    Modern NASA planetary exploration missions employ complex systems of hardware and software managed by large teams of engineers and scientists in order to study remote environments. The most complex and successful of these recent projects is the Mars Exploration Rover (MER) mission. The Computational Sciences Division at NASA Ames Research Center delivered a 3D visualization program, Viz, to the MER mission that provides an immersive, interactive environment for science analysis of the remote planetary surface. In addition, Ames provided the Athena Science Team with high-quality terrain reconstructions generated with the Ames Stereo-pipeline. The on-site support team for these software systems responded to unanticipated opportunities to generate 3D terrain models during the primary MER mission. This paper describes Viz, the Stereo-pipeline, and the experiences of the on-site team supporting the scientists at JPL during the primary MER mission.

  14. Intelligent systems for KSC ground processing

    NASA Technical Reports Server (NTRS)

    Heard, Astrid E.

    1992-01-01

    The ground processing and launch of Shuttle vehicles and their payloads is the primary task of Kennedy Space Center. It is a process which is largely manual and contains little inherent automation. Business is conducted today much as it was during previous NASA programs such as Apollo. In light of new programs and decreasing budgets, NASA must find more cost-effective ways in which to do business while retaining the quality and safety of activities. Advanced technologies, including artificial intelligence, could cut manpower and processing time. This paper is an overview of the research and development in AI technology at KSC, with descriptions of the systems which have been implemented as well as a few under development which are promising additions to ground processing software. Projects discussed cover many facets of ground processing activities, including computer sustaining engineering, subsystem monitoring and diagnosis tools, and launch team assistants. The deployed AI applications have proven effective, which has helped to demonstrate the benefits of utilizing intelligent software in the ground processing task.

  15. Logistics Modeling for Lunar Exploration Systems

    NASA Technical Reports Server (NTRS)

    Andraschko, Mark R.; Merrill, R. Gabe; Earle, Kevin D.

    2008-01-01

    The extensive logistics required to support extended crewed operations in space make effective modeling of logistics requirements and deployment critical to predicting the behavior of human lunar exploration systems. This paper discusses the software that has been developed as part of the Campaign Manifest Analysis Tool in support of strategic analysis activities under the Constellation Architecture Team - Lunar. The described logistics module enables definition of logistics requirements across multiple surface locations and allows for the transfer of logistics between those locations. A key feature of the module is the loading algorithm that is used to efficiently load logistics by type into carriers and then onto landers. Attention is given to the capabilities and limitations of this loading algorithm, particularly with regard to surface transfers. These capabilities are described within the context of the object-oriented software implementation, with details provided on the applicability of using this approach to model other human exploration scenarios. Some challenges of incorporating probabilistic analysis into this type of logistics model are discussed at a high level.
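
    A loading algorithm of the kind described can be approximated by two stages of first-fit-decreasing bin packing, as in the sketch below; the masses, capacities, and tare values are invented, and the actual Campaign Manifest Analysis Tool module additionally distinguishes logistics types and handles surface transfers.

      # Greedy first-fit-decreasing sketch of the two-stage loading step:
      # items into carriers, then loaded carriers onto landers.
      def first_fit(items, capacity):
          """Pack masses into bins, largest first, reusing space where it fits."""
          bins = []                            # each bin: [remaining, contents]
          for item in sorted(items, reverse=True):
              for b in bins:
                  if item <= b[0]:
                      b[0] -= item
                      b[1].append(item)
                      break
              else:
                  bins.append([capacity - item, [item]])
          return [contents for _, contents in bins]

      supplies = [120, 75, 200, 60, 90, 150]   # kg per logistics pallet (invented)
      carriers = first_fit(supplies, 300)      # stage 1: pallets into carriers
      carrier_masses = [sum(c) + 25 for c in carriers]   # add 25 kg carrier tare
      landers = first_fit(carrier_masses, 700) # stage 2: carriers onto landers
      print("carriers:", carriers)
      print("landers:", landers)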

  16. Implementation of Motion Simulation Software and Visual-Auditory Electronics for Use in a Low Gravity Robotic Testbed

    NASA Technical Reports Server (NTRS)

    Martin, William Campbell

    2011-01-01

    The Jet Propulsion Laboratory (JPL) is developing the All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) to assist in manned space missions. One of the proposed targets for this robotic vehicle is a near-Earth asteroid (NEA); such bodies typically exhibit a surface gravity of only a few micro-g. In order to properly test ATHLETE in such an environment, the development team has constructed an inverted Stewart platform testbed that acts as a robotic motion simulator. This project focused on creating physical simulation software that is able to predict how ATHLETE will function on and around a NEA. The corresponding platform configurations are calculated and then passed to the testbed to control ATHLETE's motion. In addition, imitation attitude control thrusters were designed and fabricated for use on ATHLETE. These utilize a combination of high power LEDs and audio amplifiers to provide visual and auditory cues that correspond to the physics simulation.
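
    For a Stewart platform, the inverse kinematics that turn a desired platform pose into actuator commands reduce to computing six leg lengths, one per base-to-platform anchor pair. The sketch below shows that calculation with invented geometry; the testbed's actual anchor layout (and its inverted mounting) is not specified in the abstract.

      # Inverse-kinematics sketch for a Stewart platform (invented geometry).
      import numpy as np

      def rotation(roll, pitch, yaw):
          cr, sr = np.cos(roll), np.sin(roll)
          cp, sp = np.cos(pitch), np.sin(pitch)
          cy, sy = np.cos(yaw), np.sin(yaw)
          rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
          ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
          rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
          return rz @ ry @ rx

      def leg_lengths(base_pts, plat_pts, position, rpy):
          """Leg i spans base anchor i to the transformed platform anchor i."""
          moved = (rotation(*rpy) @ plat_pts.T).T + position
          return np.linalg.norm(moved - base_pts, axis=1)

      # Six anchors on a circle for each frame (radii in meters, invented).
      ang_b = np.radians([0, 60, 120, 180, 240, 300])
      ang_p = ang_b + np.radians(30)
      base = np.c_[2.0 * np.cos(ang_b), 2.0 * np.sin(ang_b), np.zeros(6)]
      plat = np.c_[1.2 * np.cos(ang_p), 1.2 * np.sin(ang_p), np.zeros(6)]

      print(leg_lengths(base, plat, np.array([0.0, 0.0, 1.5]),
                        (0.02, -0.01, 0.1)))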

  17. The K9 On-Board Rover Architecture

    NASA Technical Reports Server (NTRS)

    Bresina, John L.; Bualat, Maria; Fair, Michael; Washington, Richard; Wright, Anne

    2006-01-01

    This paper describes the software architecture of NASA Ames Research Center's K9 rover. The goal of the onboard software architecture team was to develop a modular, flexible framework that would allow both high- and low-level control of the K9 hardware. Examples of low-level control are the simple drive or pan/tilt commands, which are handled by the resource managers, and examples of high-level control are the command sequences, which are handled by the conditional executive. In between these two control levels are complex behavioral commands handled by the pilot, such as driving to a goal with obstacle avoidance or visually servoing to a target. This paper presents the design of the architecture as of Fall 2000. We describe the state of the architecture implementation as well as its current evolution. An early version of the architecture was used for K9 operations during a dual-rover field experiment conducted by NASA Ames Research Center (ARC) and the Jet Propulsion Laboratory (JPL) from May 14 to May 16, 2000.

  18. Comparative analysis of data base management systems

    NASA Technical Reports Server (NTRS)

    Smith, R.

    1983-01-01

    A study to determine if the Remote File Inquiry (RFI) system would handle the future requirements of the user community is discussed. RFI is a locally written and locally maintained on-line query/update package. The current and future on-line requirements of the user community were studied. Additional consideration was given to the types of data structuring the users required. The survey indicated the features of greatest benefit were: sort, subtotals, totals, record selection, storage of queries, global updating, and the ability to page break. The major deficiencies were: a single level of hierarchy, excessive response time, software unreliability, difficulty in adding, deleting, and modifying records, complicated error messages, and the lack of ability to perform interfield comparisons. Missing features users required were: formatted screens, interfield comparisons, interfield arithmetic, multiple file access, security, and data integrity. The survey team recommended Kennedy Space Center move forward to state-of-the-art software: a Data Base Management System which is thoroughly tested and easy to implement and use.

  19. Capturing district nursing through a knowledge-based electronic caseload analysis tool (eCAT).

    PubMed

    Kane, Kay

    2014-03-01

    The Electronic Caseload Analysis Tool (eCAT) is a knowledge-based software tool to assist the caseload analysis process. The tool provides a wide range of graphical reports, along with an integrated clinical advisor, to assist district nurses, team leaders, operational and strategic managers with caseload analysis by describing, comparing and benchmarking district nursing practice in the context of population need, staff resources, and service structure. District nurses and clinical lead nurses in Northern Ireland developed the tool, along with academic colleagues from the University of Ulster, working in partnership with a leading software company. The aim was to use the eCAT tool to identify the nursing need of local populations, along with the variances in district nursing practice, and match the workforce accordingly. This article reviews the literature, describes the eCAT solution and discusses the impact of eCAT on nursing practice, staff allocation, service delivery and workforce planning, using fictitious exemplars and a post-implementation evaluation from the trusts.

  20. Academic Alignment to Reduce the Presence of "Social Loafers" and "Diligent Isolates" in Student Teams

    ERIC Educational Resources Information Center

    Pieterse, Vreda; Thompson, Lisa

    2010-01-01

    The acquisition of effective teamwork skills is crucial in all disciplines. Using an interpretive approach, this study investigates collaboration and co-operation in teams of software engineering students. Teams whose members were both homogeneous and heterogeneous in terms of their members' academic abilities, skills and goals were identified and…

  1. Digital Transplantation Pathology: Combining Whole Slide Imaging, Multiplex Staining, and Automated Image Analysis

    PubMed Central

    Isse, Kumiko; Lesniak, Andrew; Grama, Kedar; Roysam, Badrinath; Minervini, Martha I.; Demetris, Anthony J

    2013-01-01

    Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. “-Omics” analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: a) spatial-temporal relationships; b) rare events/cells; c) complex structural context; and d) integration into a “systems” model. Nevertheless, except for immunostaining, no transformative advancements have “modernized” routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the gap between traditional histology and global “-omic” analyses. Included are side-by-side comparisons, objective quantification of biopsy findings, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. PMID:22053785

  2. Gathering, strategizing, motivating and celebrating: the team huddle in a teaching general practice.

    PubMed

    Walsh, Allyn; Moore, Ainsley; Everson, Jennifer; DeCaire, Katharine

    2018-03-01

    To understand how implementing a daily team huddle affected the function of a complex interprofessional team including learners. A qualitative descriptive study using semi-structured interviews in focus groups. An academic teaching general practice. All members of one interprofessional team, including nurses, general practitioners, junior doctors, and support staff. Focus group interviews using semi-structured guidance were transcribed and the results analysed using qualitative content analysis. Four interrelated themes were identified: communication and knowledge sharing; efficiency of care; relationship and team building; and shared responsibility for team function. The implementation of the daily team huddle was seen by participants to enhance the collaboration within the team and to contribute to work-life enjoyment. Participants perceived that problems were anticipated and solved quickly. Clinical updates and information about patients benefited the team, including learners. Junior doctors quickly understood the scope of practice of other team members, but some felt reluctant to offer clinical opinions. The implementation of a daily team huddle was viewed as worthwhile by this large interprofessional general practice team. The delivery of patient care was more efficient, knowledge was readily distributed, and problem solving was shared across the team, including junior doctors.

  3. Contingency theoretic methodology for agent-based web-oriented manufacturing systems

    NASA Astrophysics Data System (ADS)

    Durrett, John R.; Burnell, Lisa J.; Priest, John W.

    2000-12-01

    The development of distributed, agent-based, web-oriented, N-tier Information Systems (IS) must be supported by a design methodology capable of responding to the convergence of shifts in business process design, organizational structure, computing, and telecommunications infrastructures. We introduce a contingency theoretic model for the use of open, ubiquitous software infrastructure in the design of flexible organizational IS. Our basic premise is that developers should change the way they view the software design process, from the solution of a problem to the dynamic creation of teams of software components. We postulate that developing effective, efficient, flexible, component-based distributed software requires reconceptualizing the current development model. The basic concepts of distributed software design are merged with the environment-causes-structure relationship from contingency theory; the task-uncertainty of organizational information-processing relationships from information processing theory; and the concept of inter-process dependencies from coordination theory. Software processes are considered as employees, groups of processes as software teams, and distributed systems as software organizations. Design techniques already used in the design of flexible business processes and well researched in the domain of the organizational sciences are presented. Guidelines that can be utilized in the creation of component-based distributed software will be discussed.

  4. The need for scientific software engineering in the pharmaceutical industry

    NASA Astrophysics Data System (ADS)

    Luty, Brock; Rose, Peter W.

    2017-03-01

    Scientific software engineering is a distinct discipline from both computational chemistry project support and research informatics. A scientific software engineer not only has a deep understanding of the science of drug discovery but also the desire, skills, and time to apply good software engineering practices. A good team of scientific software engineers can create a software foundation that is maintainable, validated, and robust. If done correctly, this foundation enables the organization to investigate new and novel computational ideas with a very high level of efficiency.

  5. The need for scientific software engineering in the pharmaceutical industry.

    PubMed

    Luty, Brock; Rose, Peter W

    2017-03-01

    Scientific software engineering is a distinct discipline from both computational chemistry project support and research informatics. A scientific software engineer not only has a deep understanding of the science of drug discovery but also the desire, skills, and time to apply good software engineering practices. A good team of scientific software engineers can create a software foundation that is maintainable, validated, and robust. If done correctly, this foundation enables the organization to investigate new and novel computational ideas with a very high level of efficiency.

  6. Multidisciplinary teams of case managers in the implementation of an innovative integrated services delivery for the elderly in France.

    PubMed

    de Stampa, Matthieu; Vedel, Isabelle; Trouvé, Hélène; Ankri, Joël; Saint Jean, Olivier; Somme, Dominique

    2014-04-07

    The case management process is now well defined, and teams of case managers have been implemented in integrated services delivery. However, little is known about the role played by the team of case managers and the value of having multidisciplinary case management teams. The objectives were to develop a fuller understanding of the role played by the case manager team and to identify the value of inter-professional collaboration in multidisciplinary teams during the implementation of an innovative integrated service in France. We conducted a qualitative study with focus groups comprising 14 multidisciplinary teams, for a total of 59 case managers, six months after their recruitment to the MAIA program (Maison Autonomie Integration Alzheimer). Most of the case managers saw themselves as being part of a team of case managers (91.5%). Case management teams help case managers develop a comprehensive understanding of the integration concept, meet the complex needs of elderly people, and change their professional practices. Multidisciplinary case management teams add value by helping case managers move from theory to practice, by encouraging them to develop a comprehensive clinical vision, and by initiating the interdisciplinary approach. The multidisciplinary team of case managers is central to the implementation of case management and helps case managers develop their new role and a core inter-professional competency.

  7. PACS for Bhutan: a cost effective open source architecture for emerging countries.

    PubMed

    Ratib, Osman; Roduit, Nicolas; Nidup, Dechen; De Geer, Gerard; Rosset, Antoine; Geissbuhler, Antoine

    2016-10-01

    This paper reports the design and implementation of an innovative and cost-effective imaging management infrastructure suitable for radiology centres in emerging countries. It was implemented in the main referral hospital of Bhutan, equipped with a CT, an MRI, digital radiology, and a suite of several ultrasound units. The hospital lacked the necessary informatics infrastructure for image archiving and interpretation and needed a system for distribution of images to clinical wards. The solution developed for this project combines several open source software platforms in a robust and versatile archiving and communication system connected to analysis workstations equipped with an FDA-certified version of the highly popular open-source software. The whole system was implemented on standard off-the-shelf hardware. The system was installed in three days, and training of the radiologists as well as the technical and IT staff was provided onsite to ensure full ownership of the system by the local team. Radiologists were rapidly capable of reading and interpreting studies on the diagnostic workstations, which had a significant benefit on their workflow and ability to perform diagnostic tasks more efficiently. Furthermore, images were also made available to several clinical units on standard desktop computers through a web-based viewer. • Open source imaging informatics platforms can provide cost-effective alternatives for PACS • Robust and cost-effective open architecture can provide adequate solutions for emerging countries • Imaging informatics is often lacking in hospitals equipped with digital modalities.

  8. EX6AFS: A data acquisition system for high-speed dispersive EXAFS measurements implemented using object-oriented programming techniques

    NASA Astrophysics Data System (ADS)

    Jennings, Guy; Lee, Peter L.

    1995-02-01

    In this paper we describe the design and implementation of a computerized data-acquisition system for high-speed energy-dispersive EXAFS experiments on the X6A beamline at the National Synchrotron Light Source. The acquisition system drives the stepper motors used to move the components of the experimental setup and controls the readout of the EXAFS spectra. The system runs on a Macintosh IIfx computer and is written entirely in the object-oriented language C++. Large segments of the system are implemented by means of commercial class libraries, specifically the MacApp application framework from Apple, the Rogue Wave class library, and the Hierarchical Data Format (HDF) datafile library from the National Center for Supercomputing Applications. This reduces the amount of code that must be written and enhances reliability. The system makes use of several advanced features of C++: multiple inheritance allows the code to be decomposed into independent software components, and exception handling allows the system to be much more reliable in the event of unexpected errors. Object-oriented techniques allow the program to be extended easily as new requirements develop. All sections of the program related to a particular concept are located in a small set of source files. The program will also be used as a prototype for future software development plans for the Basic Energy Science Synchrotron Radiation Center Collaborative Access Team beamlines being designed and built at the Advanced Photon Source.

  9. Spacelab software development and integration concepts study report. Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Software considerations were developed for incorporation in the spacelab systems design, and include management concepts for top-down structured programming, composite designs for modular programs, and team management methods for production programming.

  10. Orthorectified High Resolution Multispectral Imagery for Application to Change Detection and Analysis

    NASA Technical Reports Server (NTRS)

    Benkelman, Cody A.

    1997-01-01

    The project team has outlined several technical objectives which will allow the companies to improve on their current capabilities. These include modifications to the imaging system, enabling it to operate more cost effectively and with greater ease of use, automation of the post-processing software to mosaic and orthorectify the image scenes collected, and the addition of radiometric calibration to greatly aid in the ability to perform accurate change detection. Business objectives include fine tuning of the market plan plus specification of future product requirements, expansion of sales activities (including identification of necessary additional resources required to meet stated revenue objectives), development of a product distribution plan, and implementation of a world wide sales effort.

  11. Upgrades to Electronic Speckle Interferometer (ESPI) Operation and Data Analysis at NASA's Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Connelly, Joseph; Blake, Peter; Jones, Joycelyn

    2008-01-01

    The authors report operational upgrades and streamlined data analysis of a commissioned electronic speckle interferometer (ESPI) in a permanent in-house facility at NASA's Goddard Space Flight Center. Our ESPI was commercially purchased for use by the James Webb Space Telescope (JWST) development team. We have quantified and reduced systematic error sources, improved the software operability with a user-friendly graphic interface, developed an instrument simulator, streamlined data analysis for long-duration testing, and implemented a turn-key approach to speckle interferometry. We also summarize results from a test of the JWST support structure (previously published), and present new results from several pieces of test hardware at various environmental conditions.

  12. Controlador para un Reloj GPS de Referencia en el Protocolo NTP [Driver for a GPS Reference Clock in the NTP Protocol]

    NASA Astrophysics Data System (ADS)

    Hauscarriaga, F.; Bareilles, F. A.

    The synchronization between computers in a local network plays a very important role in environments similar to the IAR's. Calculations of exact time are needed before, during, and after an observation. For this purpose the IAR's GNU/Linux Software Development Team implemented a driver within the NTP protocol (an Internet standard for time synchronization of computers) for a GPS receiver acquired a few years ago by the IAR, which previously lacked support in that protocol. Today our Institute has a stable and reliable time base, synchronized to the atomic clocks on board GPS satellites in accordance with the computer-synchronization standard, offering precise time services to the entire scientific community and particularly to the University of La Plata. FULL TEXT IN SPANISH
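
    For context, ntpd addresses a reference clock through the pseudo-IP convention 127.127.t.u, where t selects the driver type and u the unit number. The excerpt below uses driver type 20 (the generic NMEA driver) purely as a stand-in, since the record does not give the type number assigned to the IAR's custom driver.

      # Hypothetical ntp.conf excerpt: a reference clock is selected via the
      # 127.127.t.u pseudo-address (t = driver type, u = unit number).
      server 127.127.20.0 minpoll 4 prefer        # GPS refclock, unit 0
      fudge  127.127.20.0 time2 0.400 refid GPS   # calibrate serial-port delay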

  13. Software Engineering for Scientific Computer Simulations

    NASA Astrophysics Data System (ADS)

    Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.

    2004-11-01

    Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.

  14. Learning to Write Programs with Others: Collaborative Quadruple Programming

    ERIC Educational Resources Information Center

    Arora, Ritu; Goel, Sanjay

    2012-01-01

    Most software development is carried out by teams of software engineers working collaboratively to achieve the desired goal. Consequently, software development education needs to develop not only a student's ability to write programs that others can easily comprehend, and to comprehend programs written by others, but also the ability…

  15. TOPEX SDR Processing, October 1998. Volume 4

    NASA Technical Reports Server (NTRS)

    Lee, Jeffrey E.; Lockwood, Dennis W.

    2003-01-01

    This document is a compendium of the WFF TOPEX Software Development Team's knowledge regarding Sensor Data Record (SDR) Processing. It includes many elements of a requirements document, a software specification document, a software design document, and a user's manual. In the more technical sections, this document assumes the reader is familiar with TOPEX and instrument files.

  16. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithms' monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management.

    Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines, such as Systems Engineering (SE), Flight Software (FSW), and Safety and Mission Assurance (S&MA), together with the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle, from inception through FSW certification, are an important focus of SLS's development effort to further ensure reliable detection of and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight.

    To test and validate these M&FM algorithms, a dedicated testbed was developed for full Vehicle Management End-to-End Testing (VMET). To address fault management (FM) early in the development lifecycle for the SLS program, NASA formed the M&FM team as part of the Integrated Systems Health Management and Automation Branch under the Spacecraft Vehicle Systems Department at the Marshall Space Flight Center (MSFC). To support the development of the FM algorithms, the VMET developed by the M&FM team provides the ability to integrate the algorithms, perform test cases, and integrate vendor-supplied physics-based launch vehicle (LV) subsystem models. Additionally, the team has developed processes for implementing and validating the M&FM algorithms for concept validation and risk reduction. The flexibility of the VMET capabilities enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms against actual subsystem models such as MPS, GNC, and others. One of the principal functions of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform, exterior to the flight software test and validation processes. In any software development process there is inherent risk in the interpretation and implementation of concepts from requirements and test cases into flight software, compounded by potential human errors throughout the development and regression testing lifecycle.
    Risk reduction is addressed by the M&FM group, and in particular by the Analysis Team working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations, by assessing the performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission (LOM) and Loss of Crew (LOC) probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detections and responses to be tested in VMET, to ensure reliable failure detection and to confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation, without inherent hindrances such as FSW processor scheduling constraints imposed by their target platform (the ARINC 653 partitioned operating system), resource limitations, and other factors related to integration with subsystems not directly involved with M&FM, such as telemetry packing and processing.

    The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as those used by FSW. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure their effectiveness and performance in the exterior FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET, followed by Section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. Its details are then further presented in Section IV, followed by Section V, which presents integration, test status, and state analysis. Finally, Section VI addresses the summary and forward directions, followed by the appendices presenting relevant information on terminology and documentation.
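
    Since the abstract emphasizes state-machine architectural concepts for the M&FM algorithms (implemented in C++ for FSW), a minimal, language-neutral illustration of that monitor pattern may help. The sketch below is invented for exposition, written in Python, and is in no way SLS flight code: a monitor watches a sensor value, flags a suspected fault, and transitions to a safing response only after a persistence criterion is met.

        # Minimal illustration (not SLS flight code) of the state-machine monitor
        # pattern: detect an off-nominal condition, require persistence, then
        # transition to a safing response. All names and thresholds are invented.
        from enum import Enum, auto

        class State(Enum):
            NOMINAL = auto()
            FAULT_SUSPECTED = auto()
            SAFING = auto()

        class PressureMonitor:
            """Hypothetical monitor: two consecutive out-of-range samples -> safing."""
            LIMIT = 350.0  # illustrative redline

            def __init__(self):
                self.state = State.NOMINAL
                self.strikes = 0

            def step(self, reading: float) -> State:
                if reading > self.LIMIT:
                    self.strikes += 1
                    self.state = (State.SAFING if self.strikes >= 2
                                  else State.FAULT_SUSPECTED)
                else:
                    self.strikes = 0
                    self.state = State.NOMINAL
                return self.state

        monitor = PressureMonitor()
        for sample in (340.0, 352.1, 355.7):   # third sample confirms the fault
            print(sample, monitor.step(sample).name)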

  17. Service evaluation of the implementation of a digitally-enabled care pathway for the recognition and management of acute kidney injury.

    PubMed

    Connell, Alistair; Montgomery, Hugh; Morris, Stephen; Nightingale, Claire; Stanley, Sarah; Emerson, Mary; Jones, Gareth; Sadeghi-Alavijeh, Omid; Merrick, Charles; King, Dominic; Karthikesalingam, Alan; Hughes, Cian; Ledsam, Joseph; Back, Trevor; Rees, Geraint; Raine, Rosalind; Laing, Christopher

    2017-01-01

    Acute Kidney Injury (AKI), an abrupt deterioration in kidney function, is defined by changes in urine output or serum creatinine. AKI is common (affecting up to 20% of acute hospital admissions in the United Kingdom), associated with significant morbidity and mortality, and expensive (excess costs to the National Health Service in England alone may exceed £1 billion per year). NHS England has mandated the implementation of an automated algorithm to detect AKI based on changes in serum creatinine and to alert clinicians. It is uncertain, however, whether 'alerting' alone improves care quality. We have thus developed a digitally-enabled care pathway as a clinical service to inpatients in the Royal Free Hospital (RFH), a large London hospital. This pathway incorporates a mobile software application, the "Streams-AKI" app developed by DeepMind Health, that applies the NHS AKI algorithm to routinely collected serum creatinine data in hospital inpatients. Streams-AKI alerts clinicians to potential AKI cases, furnishing them with a trend view of kidney function alongside other relevant data, in real time, on a mobile device. A clinical response team comprising nephrologists and critical care nurses responds to these AKI alerts by reviewing individual patients and administering interventions according to existing clinical practice guidelines. We propose a mixed-methods service evaluation of the implementation of this care pathway. This evaluation will assess how the care pathway meets the health and care needs of service users (RFH inpatients) in terms of clinical outcome, processes of care, and NHS costs. It will also assess acceptance of the pathway by members of the response team and the wider hospital community. All analyses will be undertaken by the service evaluation team from UCL (Department of Applied Health Research) and St George's, University of London (Population Health Research Institute).
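
    The detection algorithm itself is rule-based: it compares an incoming serum creatinine value against a baseline and flags staged alerts. As a heavily simplified, hypothetical sketch of that idea (the production NHS algorithm defines baselines and branches far more carefully), a KDIGO-style ratio rule might look like:

        # Hedged sketch of a creatinine-ratio staging rule, along KDIGO lines.
        # Simplified for illustration; the real NHS algorithm has more branches
        # and carefully defined baseline windows.
        def aki_stage(current_scr: float, baseline_scr: float) -> int:
            """Return 0 (no alert) or AKI stage 1-3 from the SCr ratio."""
            ratio = current_scr / baseline_scr
            if ratio >= 3.0:
                return 3
            if ratio >= 2.0:
                return 2
            if ratio >= 1.5:
                return 1
            return 0

        print(aki_stage(current_scr=160.0, baseline_scr=80.0))  # -> 2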

  18. The Impact of Software Culture on the Management of Community Data

    NASA Astrophysics Data System (ADS)

    Collins, J. A.; Pulsifer, P. L.; Sheffield, E.; Lewis, S.; Oldenburg, J.

    2013-12-01

    The Exchange for Local Observations and Knowledge of the Arctic (ELOKA), a program hosted at the National Snow and Ice Data Center (NSIDC), supports the collection, curation, and distribution of Local and Traditional Knowledge (LTK) data, as well as some quantitative data products. Investigations involving LTK data often involve community participation, and therefore require flexible and robust user interfaces to support a reliable process of data collection and management. Often, investigators focused on LTK and community-based monitoring choose to use ELOKA's data services based on our ability to provide rapid proofs of concept and economical delivery of a usable product. To satisfy these two overarching criteria, ELOKA is experimenting with modifications to its software development culture, both in how its software applications are developed and in the kinds of applications (or components) being developed. Over the past several years, NSIDC has shifted its software development culture from one of assigning individual scientific programmers to support particular principal investigators or projects to an Agile software methodology implementation using Scrum practices. ELOKA has participated in this process by working with other product owners to schedule and prioritize development work, which is then implemented by a team of application developers. Scrum, along with practices such as Test-Driven Development (TDD) and pair programming, improves the quality of the software product delivered to the user community. To meet the need for rapid prototyping and to maximize product development and support with limited developer input, our software development efforts are now focused on creating a platform of application modules that can be quickly customized to suit the needs of a variety of LTK projects. This approach is in contrast to the strategy of delivering custom applications for individual projects. To date, we have integrated components of the Nunaliit Atlas framework (a Java/JavaScript client-server web-based application) with an existing Ruby on Rails application. This approach requires transitioning individual applications to expose a service layer, thus allowing interapplication communication via RESTful services. In this presentation we will report on our experiences using Agile Scrum practices, our efforts to move from custom solutions to a platform of customizable modules, and the impact of each on our ability to support researchers and Arctic residents in the domain of community-based observations and knowledge.
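
    The service-layer integration described above can be made concrete with a small sketch. The endpoint, data, and use of Flask here are illustrative assumptions, not ELOKA's actual code; the point is simply that once one application exposes its data over REST, any other application (a Rails or Nunaliit client, say) can consume it.

        # Hedged sketch of a RESTful service layer: a tiny Flask app exposing
        # observation records as JSON for consumption by a separate application.
        from flask import Flask, jsonify

        app = Flask(__name__)

        # Hypothetical in-memory store standing in for a project's observation data.
        OBSERVATIONS = [
            {"id": 1, "site": "community-A", "note": "early sea-ice breakup"},
            {"id": 2, "site": "community-B", "note": "late goose migration"},
        ]

        @app.route("/api/observations")
        def list_observations():
            """Any other application can GET this endpoint over HTTP."""
            return jsonify(OBSERVATIONS)

        if __name__ == "__main__":
            app.run(port=5000)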

  19. Negotiation and Decision Making with Collaborative Software: How MarineMap 'Changed the Game' in California's Marine Life Protection Act Initiative.

    PubMed

    Cravens, Amanda E

    2016-02-01

    Environmental managers and planners have become increasingly enthusiastic about the potential of decision support tools (DSTs) to improve environmental decision-making processes as information technology transforms many aspects of daily life. Discussions about DSTs, however, rarely recognize the range of ways software can influence users' negotiation, problem-solving, or decision-making strategies and incentives, in part because there are few empirical studies of completed processes that used technology. This mixed-methods study, which draws on data from approximately 60 semi-structured interviews and an online survey, examines how one geospatial DST influenced participants' experiences during a multi-year marine planning process in California. Results suggest that DSTs can facilitate communication by creating a common language, help users understand the geography and scientific criteria in play during the process, aid stakeholders in identifying shared or diverging interests, and facilitate joint problem solving. The same design features that enabled the tool to aid in decision making, however, also presented surprising challenges in certain circumstances by, for example, making it difficult for participants to discuss information that was not spatially represented on the map-based interface. The study also highlights the importance of the social context in which software is developed and implemented, suggesting that the relationship between the software development team and other participants may be as important as technical software design in shaping how DSTs add value. The paper concludes with considerations to inform the future use of DSTs in environmental decision-making processes.

  20. Negotiation and Decision Making with Collaborative Software: How MarineMap `Changed the Game' in California's Marine Life Protection Act Initiative

    NASA Astrophysics Data System (ADS)

    Cravens, Amanda E.

    2016-02-01

    Environmental managers and planners have become increasingly enthusiastic about the potential of decision support tools (DSTs) to improve environmental decision-making processes as information technology transforms many aspects of daily life. Discussions about DSTs, however, rarely recognize the range of ways software can influence users' negotiation, problem-solving, or decision-making strategies and incentives, in part because there are few empirical studies of completed processes that used technology. This mixed-methods study—which draws on data from approximately 60 semi-structured interviews and an online survey—examines how one geospatial DST influenced participants' experiences during a multi-year marine planning process in California. Results suggest that DSTs can facilitate communication by creating a common language, help users understand the geography and scientific criteria in play during the process, aid stakeholders in identifying shared or diverging interests, and facilitate joint problem solving. The same design features that enabled the tool to aid in decision making, however, also presented surprising challenges in certain circumstances by, for example, making it difficult for participants to discuss information that was not spatially represented on the map-based interface. The study also highlights the importance of the social context in which software is developed and implemented, suggesting that the relationship between the software development team and other participants may be as important as technical software design in shaping how DSTs add value. The paper concludes with considerations to inform the future use of DSTs in environmental decision-making processes.

  1. Building the dream team: don't make it a nightmare.

    PubMed

    Nelson, M; Nelson, S

    1997-11-01

    This article covers the often-overlooked area of team management concepts through a discussion of what many companies have done to implement these new concepts successfully. It describes the basics of how to do so and explains why people resist the implementation process. The main topics are (1) team formation, (2) pitfalls to avoid, and (3) team measurement.

  2. Extra-team connections for knowledge transfer between staff teams

    PubMed Central

    Ramanadhan, Shoba; Wiecha, Jean L.; Emmons, Karen M.; Gortmaker, Steven L.; Viswanath, Kasisomayajula

    2009-01-01

    As organizations implement novel health promotion programs across multiple sites, they face great challenges related to knowledge management. Staff social networks may be a useful medium for transferring program-related knowledge in multi-site implementation efforts. To study this potential, we focused on the role of extra-team connections (ties between staff members based in different site teams) as potential channels for knowledge sharing. Data come from a cross-sectional study of afterschool childcare staff implementing a health promotion program at 20 urban sites of the Young Men's Christian Association of Greater Boston. We conducted a sociometric social network analysis and attempted a census of 91 program staff members. We surveyed 80 individuals and included in this study 73 coordinators and general staff, who lead and support implementation, respectively. A multiple linear regression model demonstrated a positive relationship between extra-team connections (β = 3.41, P < 0.0001) and skill receipt, a measure of knowledge transfer. Intra-team connections (within-team ties between staff members) were also positively related to skill receipt. Connections between teams appear to support knowledge transfer in this network, but likely require more active facilitation, perhaps via organizational changes. Further research on extra-team connections and knowledge transfer in low-resource, high-turnover environments is needed. PMID:19528313
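
    To make the sociometric measure concrete, the toy sketch below (invented data, not the study's) counts each staff member's intra-team versus extra-team ties from an edge list; these counts are the raw ingredients of the regression described above.

        # Hedged illustration of the extra-team tie measure, using networkx.
        # Names and team assignments are invented for exposition.
        import networkx as nx

        team_of = {"ana": "site1", "ben": "site1", "caro": "site2", "dev": "site3"}

        G = nx.Graph()
        G.add_edges_from([("ana", "ben"), ("ana", "caro"), ("ben", "dev")])

        for person in G.nodes:
            ties = list(G.neighbors(person))
            intra = sum(team_of[t] == team_of[person] for t in ties)
            extra = len(ties) - intra
            print(f"{person}: {intra} intra-team, {extra} extra-team ties")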

  3. Software-implemented fault insertion: An FTMP example

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1987-01-01

    This report presents a model for fault insertion through software; describes its implementation on a fault-tolerant computer, FTMP; presents a summary of fault detection, identification, and reconfiguration data collected with software-implemented fault insertion; and compares the results to hardware fault insertion data. Experimental results show detection time to be a function of the time of insertion and the system workload. For fault detection time, there is no correlation between software-inserted faults and hardware-inserted faults; this is because hardware-inserted faults must manifest as errors before detection, whereas software-inserted faults immediately exercise the error detection mechanisms. In summary, software-implemented fault insertion can be used as a technique for evaluating a system's fault-handling capabilities in fault detection, identification, and recovery. Although software-inserted faults do not map directly to hardware-inserted faults, experiments show that software-implemented fault insertion is capable of emulating hardware fault insertion with greater ease and automation.
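
    A toy sketch of the idea, far simpler than FTMP's mechanisms: flip one bit in a protected data word and run a checksum-based detector. Because the inserted fault is already an error in the data, detection fires immediately, echoing the report's observation about software-inserted faults.

        # Toy illustration of software-implemented fault insertion (not FTMP's
        # mechanism): corrupt one bit in a protected data word, then run a
        # checksum-based error-detection check.
        def checksum(words):
            """Simple 16-bit additive checksum over a list of data words."""
            return sum(words) & 0xFFFF

        memory = [0x1234, 0xBEEF, 0x0042]   # protected data words
        stored_sum = checksum(memory)       # reference checksum

        memory[1] ^= 0x0400                 # software fault insertion: one bit flip

        if checksum(memory) != stored_sum:  # the error-detection mechanism
            print("fault detected: checksum mismatch")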

  4. Using a human patient simulator to study the relationship between communication and nursing students' team performance.

    PubMed

    Hirokawa, Randy Y; Daub, Katharyn; Lovell, Eileen; Smith, Sarah; Davis, Alice; Beck, Christine

    2012-11-01

    This study examined the relationship between communication and nursing students' team performance by determining whether variations in team performance are related to differences in communication regarding five task-relevant functions: assessment, diagnosis, planning, implementation, and evaluation. The study results indicate a positive relationship between nursing students' team performance and comments focused on the implementation of treatment(s) and the evaluation of treatment options. A negative relationship between nursing students' team performance and miscellaneous comments made by team members was also observed. Copyright 2012, SLACK Incorporated.

  5. Neurophysiological analytics for all! Free open-source software tools for documenting, analyzing, visualizing, and sharing using electronic notebooks

    PubMed Central

    2016-01-01

    Neurophysiology requires an extensive workflow of information analysis routines, which often includes incompatible proprietary software, introducing limitations based on financial costs, transfer of data between platforms, and the ability to share. An ecosystem of free open-source software exists to fill these gaps, including thousands of analysis and plotting packages written in Python and R, which can be implemented in a sharable and reproducible format, such as the Jupyter electronic notebook. This tool chain can largely replace current routines by importing data, producing analyses, and generating publication-quality graphics. An electronic notebook like Jupyter allows these analyses, along with documentation of procedures, to display locally or remotely in an internet browser, which can be saved as an HTML, PDF, or other file format for sharing with team members and the scientific community. The present report illustrates these methods using data from electrophysiological recordings of the musk shrew vagus—a model system to investigate gut-brain communication, for example, in cancer chemotherapy-induced emesis. We show methods for spike sorting (including statistical validation), spike train analysis, and analysis of compound action potentials in notebooks. Raw data and code are available from notebooks in data supplements or from an executable online version, which replicates all analyses without installing software—an implementation of reproducible research. This demonstrates the promise of combining disparate analyses into one platform, along with the ease of sharing this work. In an age of diverse, high-throughput computational workflows, this methodology can increase efficiency, transparency, and the collaborative potential of neurophysiological research. PMID:27098025
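
    A minimal, hypothetical example of the kind of analysis this workflow supports, runnable in a Jupyter notebook with only free tools (NumPy, SciPy, Matplotlib): threshold-based spike detection on a synthetic trace, using a common robust noise estimate. This is a sketch of the approach, not the authors' published code.

        # Detect spikes in a synthetic extracellular trace with free tools.
        import numpy as np
        from scipy.signal import find_peaks
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(0)
        fs = 20_000                               # sampling rate, Hz
        t = np.arange(0, 1.0, 1 / fs)
        trace = rng.normal(0, 0.05, t.size)       # synthetic noise...
        trace[::4000] += 1.0                      # ...with injected "spikes"

        # Threshold at 5x a robust noise-SD estimate (median-based heuristic).
        threshold = 5 * np.median(np.abs(trace)) / 0.6745
        peaks, _ = find_peaks(trace, height=threshold)

        plt.plot(t, trace, lw=0.5)
        plt.plot(t[peaks], trace[peaks], "rx")
        plt.xlabel("time (s)"); plt.ylabel("amplitude (a.u.)")
        plt.title(f"{peaks.size} spikes detected")
        plt.show()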

  6. Suggestions for Layout and Functional Behavior of Software-Based Voice Switch Keysets

    NASA Technical Reports Server (NTRS)

    Scott, David W.

    2010-01-01

    Marshall Space Flight Center (MSFC) provides communication services for a number of real-time environments, including Space Shuttle Propulsion support and International Space Station (ISS) payload operations. In such settings, control team members speak with each other via multiple voice circuits or loops. Each loop has a particular purpose and constituency, and users are assigned listen and/or talk capabilities for a given loop based on their role in fulfilling that purpose. A voice switch is a given facility's hardware and software that supports such communication, and it may be interconnected with other facilities' switches to create a large network that, from an end-user perspective, acts like a single system. Since users typically monitor and/or respond to several voice loops concurrently for hours on end, and real-time operations can be very dynamic and intense, it's vital that a control panel or keyset for interfacing with the voice switch be a servant that reduces stress, not a master that adds it. Implementing the visual interface on a computer screen provides tremendous flexibility and configurability, but there's a very real risk of overcomplication. (Remember how office automation made life easier, which led to a deluge of documents that made life harder?) This paper a) discusses some basic human factors considerations related to keysets implemented as application software windows, b) suggests what to standardize at the facility level and what to leave to the user's preference, and c) provides screen-shot mockups for a robust but reasonably simple user experience. The concepts apply to keyset needs in almost any type of operations control or support center.

  7. KSC-98pc970

    NASA Image and Video Library

    1998-08-20

    In Firing Room 1 at KSC, Shuttle launch team members put the Shuttle system through an integrated simulation. The control room is set up with software used to simulate flight and ground systems in the launch configuration. A Simulation Team composed of KSC engineers introduces 12 or more major problems to prepare the launch team for worst-case scenarios. Such tests and simulations keep the Shuttle launch team sharp and ready for liftoff. The next liftoff is targeted for Oct. 29.

  8. 77 FR 14350 - North Pacific Fishery Management Council; Public Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-09

    ... Pacific Fishery Management Council Individual Fishing Quota (IFQ) Implementation Team. SUMMARY: The North Pacific Fishery Management Council (Council) IFQ Implementation Team will meet March 26, 2012 in Anchorage...-2809. SUPPLEMENTARY INFORMATION: The Team will review the discussion papers on Vessel Monitoring System...

  9. Implementation of an Anesthesia Information Management System in an Ambulatory Surgery Center.

    PubMed

    Mudumbai, Seshadri C

    2016-01-01

    Anesthesia information management systems (AIMS) are increasingly being implemented throughout the United States. However, little information exists on the implementation process for AIMS within ambulatory surgery centers (ASC). The objectives of this descriptive study are to document: 1) the phases of implementation of an AIMS at an ASC; and 2) lessons learnt from a socio-technical perspective. The ASC, within the Veterans Health Administration (VHA), has hosted an AIMS since 2008. As a quality improvement effort, we implemented a new version of the AIMS. This new version involved fundamental software changes to enhance clinical care such as real-time importing of laboratory data and total hardware exchange. The pre-implementation phase involved coordinated preparation over six months between multiple informatics teams along with local leadership. During this time, we conducted component, integration, and validation testing to ensure correct data flow from medical devices to AIMS and centralized databases. The implementation phase occurred in September 2014 over three days and was successful. Over the next several months, during post-implementation phase, we addressed residual items like latency of the application. Important lessons learnt from the implementation included the utility of partnering early with executive leadership; ensuring end user acceptance of new clinical workflow; continuous testing of data flow; use of a staged rollout; and providing additional personnel throughout implementation. Implementation of an AIMS at an ASC can utilize methods developed for large hospitals. However, issues unique to an ASC such as limited number of support personnel and distinctive workflows must be considered.

  10. CASIS Fact Sheet: Hardware and Facilities

    NASA Technical Reports Server (NTRS)

    Solomon, Michael R.; Romero, Vergel

    2016-01-01

    Vencore is a proven information solutions, engineering, and analytics company that helps our customers solve their most complex challenges. For more than 40 years, we have designed, developed, and delivered mission-critical solutions as our customers' trusted partner. The Engineering Services Contract, or ESC, provides engineering and design services to the NASA organizations engaged in the development of new technologies at the Kennedy Space Center. Vencore is the ESC prime contractor, with teammates that include Stinger Ghaffarian Technologies, Sierra Lobo, Nelson Engineering, EASi, and Craig Technologies. The Vencore team designs and develops systems and equipment to be used for the processing of space launch vehicles, spacecraft, and payloads. We perform flight systems engineering for spaceflight hardware and software, and develop technologies that serve NASA's mission requirements and operations needs for the future. Our Flight Payload Support (FPS) team at Kennedy Space Center (KSC) provides engineering, development, and certification services as well as payload integration and management services to NASA and commercial customers. Our main objective is to assist principal investigators (PIs) in integrating their science experiments into payload hardware for research aboard the International Space Station (ISS), commercial spacecraft, suborbital vehicles, parabolic flight aircraft, and ground-based studies. Vencore's FPS team is AS9100 certified and a recognized implementation partner for the Center for the Advancement of Science in Space (CASIS).

  11. NASA Work Breakdown Structure (WBS) Handbook

    NASA Technical Reports Server (NTRS)

    Fleming, Jon F.; Poole, Kenneth W.

    2016-01-01

    The purpose of this document is to provide program/project teams necessary instruction and guidance in the best practices for Work Breakdown Structure (WBS) and WBS dictionary development and use for project implementation and management control. This handbook can be used for all types of NASA projects and work activities including research, development, construction, test and evaluation, and operations. The products of these work efforts may be hardware, software, data, or service elements (alone or in combination). The aim of this document is to assist project teams in the development of effective work breakdown structures that provide a framework of common reference for all project elements. The WBS and WBS dictionary are effective management processes for planning, organizing, and administering NASA programs and projects. The guidance contained in this document is applicable to both in-house, NASA-led effort and contracted effort. It assists management teams from both entities in fulfilling necessary responsibilities for successful accomplishment of project cost, schedule, and technical goals. Benefits resulting from the use of an effective WBS include, but are not limited to: providing a basis for assigned project responsibilities, providing a basis for project schedule and budget development, simplifying a project by dividing the total work scope into manageable units, and providing a common reference for all project communication.

  12. Work Breakdown Structure (WBS) Handbook

    NASA Technical Reports Server (NTRS)

    2010-01-01

    The purpose of this document is to provide program/project teams necessary instruction and guidance in the best practices for Work Breakdown Structure (WBS) and WBS dictionary development and use for project implementation and management control. This handbook can be used for all types of NASA projects and work activities including research, development, construction, test and evaluation, and operations. The products of these work efforts may be hardware, software, data, or service elements (alone or in combination). The aim of this document is to assist project teams in the development of effective work breakdown structures that provide a framework of common reference for all project elements. The WBS and WBS dictionary are effective management processes for planning, organizing, and administering NASA programs and projects. The guidance contained in this document is applicable to both in-house, NASA-led effort and contracted effort. It assists management teams from both entities in fulfilling necessary responsibilities for successful accomplishment of project cost, schedule, and technical goals. Benefits resulting from the use of an effective WBS include, but are not limited to: providing a basis for assigned project responsibilities, providing a basis for project schedule development, simplifying a project by dividing the total work scope into manageable units, and providing a common reference for all project communication.

  13. Mechanisms that Trigger a Good Health-Care Response to Intimate Partner Violence in Spain. Combining Realist Evaluation and Qualitative Comparative Analysis Approaches.

    PubMed

    Goicolea, Isabel; Vives-Cases, Carmen; Hurtig, Anna-Karin; Marchal, Bruno; Briones-Vozmediano, Erica; Otero-García, Laura; García-Quinto, Marta; San Sebastian, Miguel

    2015-01-01

    Health care professionals, especially those working in primary health-care services, can play a key role in preventing and responding to intimate partner violence. However, there are huge variations in the way health care professionals and primary health care teams respond to intimate partner violence. In this study we tested a previously developed programme theory on 15 primary health care center teams located in four different Spanish regions: Murcia, C. Valenciana, Castilla-León, and Cantabria. The aim was to identify the key combinations of contextual factors and mechanisms that trigger a good primary health care center team response to intimate partner violence. A multiple case-study design was used. Qualitative and quantitative information was collected from each of the 15 centers (cases). In order to handle the large amount of information without losing familiarity with each case, qualitative comparative analysis was undertaken. Conditions (context and mechanisms) and outcomes were identified and assessed for each of the 15 cases, and solution formulae were calculated using qualitative comparative analysis software. The emerging programme theory highlighted the importance of the combination of each team's self-efficacy, perceived preparation, and women-centredness in generating a good team response to intimate partner violence. The use of the protocol and accumulated experience in primary health care were the most relevant contextual/intervention conditions for triggering a good response. However, in order to achieve this, they must be combined with other conditions, such as an enabling team climate, having a champion social worker, and having staff with training in intimate partner violence. Interventions to improve primary health care teams' response to intimate partner violence should focus on strengthening teams' self-efficacy, perceived preparation, and the implementation of a woman-centred approach. The use of the protocol combined with extensive working experience in primary health care, and other factors such as training, a good team climate, and having a champion social worker on the team, also played a key role. Measures to sustain such interventions and promote these contextual factors should be encouraged.

  14. ICU team composition and its association with ABCDE implementation in a quality collaborative.

    PubMed

    Costa, Deena Kelly; Valley, Thomas S; Miller, Melissa A; Manojlovich, Milisa; Watson, Sam R; McLellan, Phyllis; Pope, Corine; Hyzy, Robert C; Iwashyna, Theodore J

    2018-04-01

    The Awakening, Breathing Coordination, Delirium, and Early Mobility (ABCDE) bundle should involve an interprofessional team, yet no studies describe what team composition supports implementation. We administered a survey at the MHA Keystone Center ICU 2015 workshop. We measured team composition by the frequency of nurse, respiratory therapist, physician, physical therapist, nurse practitioner/physician assistant, or nursing assistant involvement in 1) spontaneous awakening trials (SATs), 2) spontaneous breathing trials, 3) delirium, and 4) early mobility. We assessed ABCDE implementation using a 5-point Likert scale (from "routine part of every patient's care" to "no plans to implement"). We used ordinal logistic regression to examine team composition and ABCDE implementation, adjusting for confounders and clustering. From 293 surveys (75% response rate), we found that frequent nurse [OR 6.1 (1.1-34.9)] and physician involvement [OR 4.2 (1.3-13.4)] in SATs, nurse [OR 4.7 (1.6-13.4)] and nursing assistant involvement [OR 3.9 (1.2-13.5)] in delirium, and nurse [OR 2.8 (1.2-6.7)], physician [OR 3.6 (1.2-10.3)], and nursing assistant involvement [OR 2.3 (1.1-4.8)] in early mobility were significantly associated with higher odds of routine ABCDE implementation. ABCDE implementation was associated with frequent involvement of team members, suggesting a need for role articulation and coordination. Copyright © 2017 Elsevier Inc. All rights reserved.
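
    For readers unfamiliar with the analysis, the sketch below fits an ordinal logistic regression on synthetic data (not the study's) and converts a coefficient to an odds ratio, mirroring the reported ORs. It assumes statsmodels 0.13+ for OrderedModel; all variable names are invented.

        # Illustrative ordinal logistic regression with synthetic data.
        import numpy as np
        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        rng = np.random.default_rng(1)
        n = 293
        nurse = rng.integers(0, 5, n).astype(float)   # hypothetical 0-4 involvement
        latent = 0.8 * nurse + rng.logistic(size=n)
        y = pd.Series(pd.cut(latent, bins=[-np.inf, 0.5, 2.0, 3.5, np.inf]))
        # y is an ordered 4-level categorical outcome (implementation level)

        model = OrderedModel(y, pd.DataFrame({"nurse": nurse}), distr="logit")
        result = model.fit(method="bfgs", disp=False)
        print(f"OR per unit increase: {np.exp(result.params['nurse']):.2f}")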

  15. Field Guide for Designing Human Interaction with Intelligent Systems

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Thronesbery, Carroll G.

    1998-01-01

    The characteristics of this Field Guide approach address the problems of designing innovative software to support user tasks. The requirements for novel software are difficult to specify a priori, because there is not sufficient understanding of how the users' tasks should be supported, and there are no obvious pre-existing design solutions. When the design team is in unfamiliar territory, care must be taken to avoid rushing into detailed design, requirements specification, or implementation of the wrong product. The challenge is to get the right design and requirements in an efficient, cost-effective manner. This document's purpose is to describe the methods we are using to design human interactions with intelligent systems that support Space Shuttle flight controllers in the Mission Control Center at NASA/Johnson Space Center. Although these software systems usually have some intelligent features, the design challenges arise primarily from the innovation needed in the software design. While these methods are tailored to our specific context, they should be extensible, and helpful to designers of human interaction with other types of automated systems. We review the unique features of this context so that you can determine how to apply these methods to your project. Throughout this Field Guide, goals of the design methods are discussed. This should help designers understand how a specific method might need to be adapted to the project at hand.

  16. WTEC monograph on instrumentation, control and safety systems of Canadian nuclear facilities

    NASA Technical Reports Server (NTRS)

    Uhrig, Robert E.; Carter, Richard J.

    1993-01-01

    This report updates a 1989-90 survey of advanced instrumentation and controls (I&C) technologies and associated human factors issues in the U.S. and Canadian nuclear industries carried out by a team from Oak Ridge National Laboratory (Carter and Uhrig 1990). The authors found that the most advanced I&C systems are in the Canadian CANDU plants, where the newest plant (Darlington) has digital systems in almost 100 percent of its control systems and in over 70 percent of its plant protection system. Increased emphasis on human factors and cognitive science in modern control rooms has resulted in a reduced workload for the operators and the elimination of many human errors. Automation implemented through digital instrumentation and control is effectively changing the role of the operator to that of a systems manager. The hypothesis that properly introducing digital systems increases safety is supported by the Canadian experience. The performance of these digital systems has been achieved using appropriate quality assurance programs for both hardware and software development. Recent regulatory authority review of the development of safety-critical software has resulted in the creation of isolated software modules with well defined interfaces and more formal structure in the software generation. The ability of digital systems to detect impending failures and initiate a fail-safe action is a significant safety issue that should be of special interest to nuclear utilities and regulatory authorities around the world.

  17. Using Selection Pressure as an Asset to Develop Reusable, Adaptable Software Systems

    NASA Technical Reports Server (NTRS)

    Berrick, Stephen; Lynnes, Christopher

    2007-01-01

    The Goddard Earth Sciences Data and Information Services Center (GES DISC) at NASA has over the years developed and honed several reusable architectural components for supporting large-scale data centers with a large customer base. These include a processing system (S4PM) and an archive system (S4PA), based upon a workflow engine called the Simple, Scalable, Script-based Science Processor (S4P), and an online data visualization and analysis system (Giovanni). These subsystems are currently reused internally in a variety of combinations to implement customized data management on behalf of instrument science teams and other science investigators. Some of these subsystems (S4P and S4PM) have also been reused by other data centers for operational science processing. Our experience has been that development and utilization of robust, interoperable, and reusable software systems can actually flourish in environments defined by heterogeneous commodity hardware systems, an emphasis on value-added customer service, and the continual goal of achieving higher cost efficiencies. The repeated internal reuse that is fostered by such an environment encourages and even forces changes to the software that make it more reusable and adaptable. Allowing and even encouraging such selective pressures on software development has been a key factor in the success of S4P and S4PM, which are now available to the open source community under the NASA Open Source Agreement.

  18. Terra Harvest software architecture

    NASA Astrophysics Data System (ADS)

    Humeniuk, Dave; Klawon, Kevin

    2012-06-01

    Under the Terra Harvest Program, the DIA has the objective of developing a universal Controller for the Unattended Ground Sensor (UGS) community. The mission is to define, implement, and thoroughly document an open architecture that universally supports UGS missions, integrating disparate systems, peripherals, etc. The Controller's inherent interoperability with numerous systems enables the integration of both legacy and future UGS System (UGSS) components, while the design's open architecture supports rapid third-party development to ensure operational readiness. The successful accomplishment of these objectives by the program's Phase 3b contractors is demonstrated via integration of the companies' respective plug-'n'-play contributions, which include controllers, various peripherals (such as sensors and cameras), and their associated software drivers. In order to independently validate the Terra Harvest architecture, L-3 Nova Engineering, along with its partner, the University of Dayton Research Institute, is developing the Terra Harvest Open Source Environment (THOSE), a Java Virtual Machine (JVM) running on an embedded Linux operating system. The use cases around which the software is developed support the full range of UGS operational scenarios, such as remote sensor triggering, image capture, and data exfiltration. The team is additionally developing an ARM microprocessor-based evaluation platform that is both energy-efficient and operationally flexible. The paper describes the overall THOSE architecture, as well as the design decisions for some of the key software components. The development process for THOSE is discussed as well.

  19. The human side of lean teams.

    PubMed

    Wackerbarth, Sarah B; Strawser-Srinath, Jamie R; Conigliaro, Joseph C

    2015-05-01

    Organizations use lean principles to increase quality and decrease costs. Lean projects require an understanding of systems-wide processes and utilize interdisciplinary teams. Most lean tools are straightforward, and the biggest barrier to successful implementation is often development of the team aspect of the lean approach. The purpose of this article is to share challenges experienced by a lean team charged with improving a hospital discharge process. Reflection on the experience provides an opportunity to highlight lessons from The Team Handbook by Peter Scholtes and colleagues. To improve the likelihood that process improvement initiatives, including lean projects, will be successful, organizations should consider providing training in organizational change principles and team building. The authors' lean team learned these lessons the hard way. Despite the challenges, the team successfully implemented changes throughout the organization that have had a positive impact. Training to understand the psychology of change might have decreased the resistance faced in implementing these changes. © 2014 by the American College of Medical Quality.

  20. Determinants of treatment plan implementation in multidisciplinary team meetings for patients with chronic diseases: a mixed-methods study

    PubMed Central

    Raine, Rosalind; Xanthopoulou, Penny; Wallace, Isla; Nic a’ Bháird, Caoimhe; Lanceley, Anne; Clarke, Alex; Livingston, Gill; Prentice, Archie; Ardron, Dave; Harris, Miriam; King, Michael; Michie, Susan; Blazeby, Jane M; Austin-Parsons, Natalie; Gibbs, Simon; Barber, Julie

    2014-01-01

    Objective: Multidisciplinary team (MDT) meetings are assumed to produce better decisions and are extensively used to manage chronic disease in the National Health Service (NHS). However, evidence for their effectiveness is mixed. Our objective was to investigate determinants of MDT effectiveness by examining factors influencing the implementation of MDT treatment plans. This is a proxy measure of effectiveness, because it lies on the pathway to improvements in health, and reflects team decision making which has taken account of clinical and non-clinical information. Additionally, this measure can be compared across MDTs for different conditions. Methods: We undertook a prospective mixed-methods study of 12 MDTs in London and North Thames. Data were collected by observation of 370 MDT meetings, interviews with 53 MDT members, and from 2654 patient medical records. We examined the influence of patient-related factors (disease, age, sex, deprivation, whether their preferences and other clinical/health behaviours were mentioned) and MDT features (as measured using the ‘Team Climate Inventory’ and skill mix) on the implementation of MDT treatment plans. Results: The adjusted odds (or likelihood) of implementation was reduced by 25% for each additional professional group represented at the MDT meeting. Implementation was more likely in MDTs with clear goals and processes and a good ‘Team Climate’ (adjusted OR 1.96; 95% CI 1.15 to 3.31 for a unit increase in Team Climate Inventory (TCI) score). Implementation varied by disease category, with the lowest adjusted odds of implementation in mental health teams. Implementation was also lower for patients living in more deprived areas (adjusted odds of implementation for patients in the most compared with least deprived areas was 0.60, 95% CI 0.39 to 0.91). Conclusions: Greater multidisciplinarity is not necessarily associated with more effective decision making. Explicit goals and procedures are also crucial. Decision implementation should be routinely monitored to ensure the equitable provision of care. PMID:24915539

  1. The Design of a Fault-Tolerant COTS-Based Bus Architecture

    NASA Technical Reports Server (NTRS)

    Chau, Savio N.; Alkalai, Leon; Burt, John B.; Tai, Ann T.

    1999-01-01

    In this paper, we report our experiences and findings on the design of a fault-tolerant bus architecture composed of two COTS buses, the IEEE 1394 and the I2C. This fault-tolerant bus is the backbone system bus for the avionics architecture of the X2000 program at the Jet Propulsion Laboratory. COTS buses are attractive because of the availability of low-cost commercial products. However, they are not specifically designed for highly reliable applications such as long-life deep-space missions. The X2000 design team has devised a multi-level fault tolerance approach to compensate for this shortcoming of COTS buses. First, the approach enhances the fault tolerance capabilities of the IEEE 1394 and I2C buses by adding a layer of fault-handling hardware and software. Second, algorithms are developed to enable the IEEE 1394 and I2C buses to assist each other in isolating and recovering from faults. Third, the set of IEEE 1394 and I2C buses is duplicated to further enhance system reliability. The X2000 design team has paid special attention to guaranteeing that the fault tolerance provisions do not cause the bus design to deviate from the commercial standard specifications; otherwise, the economic attractiveness of using COTS would be diminished. The hardware and software design of the X2000 fault-tolerant bus is being implemented, and flight hardware will be delivered to the ST4 and Europa Orbiter missions.

  2. Information technology implementing globalization on strategies for quality care provided to children submitted to cardiac surgery: International Quality Improvement Collaborative Program--IQIC.

    PubMed

    Sciarra, Adilia Maria Pires; Croti, Ulisses Alexandre; Batigalia, Fernando

    2014-01-01

    Congenital heart diseases are the world's most common major birth defect, affecting one in every 120 children. Ninety percent of these children are born in areas where appropriate medical care is inadequate or unavailable. The objective was to share knowledge and experience between an international center of excellence in pediatric cardiac surgery and a related program in Brazil. The strategy used by the program was based on long-term technological and educational support models used in that center, contributing to the creation and implementation of new programs. The Telemedicine platform was used for real-time monthly broadcasts of themes. Chat software was used for interaction between participating members and the group from the center of excellence. Professionals specialized in the care provided to this population had the opportunity to share the knowledge conveyed. It was possible to observe that the technological resources that implement the globalization of human knowledge were effective in the dissemination and improvement of the team regarding the care provided to children with congenital heart diseases.

  3. Information technology implementing globalization on strategies for quality care provided to children submitted to cardiac surgery: International Quality Improvement Collaborative Program - IQIC

    PubMed Central

    Sciarra, Adilia Maria Pires; Croti, Ulisses Alexandre; Batigalia, Fernando

    2014-01-01

    Introduction: Congenital heart diseases are the world's most common major birth defect, affecting one in every 120 children. Ninety percent of these children are born in areas where appropriate medical care is inadequate or unavailable. Objective: To share knowledge and experience between an international center of excellence in pediatric cardiac surgery and a related program in Brazil. Methods: The strategy used by the program was based on long-term technological and educational support models used in that center, contributing to the creation and implementation of new programs. The Telemedicine platform was used for real-time monthly broadcasts of themes. Chat software was used for interaction between participating members and the group from the center of excellence. Results: Professionals specialized in the care provided to this population had the opportunity to share the knowledge conveyed. Conclusion: It was possible to observe that the technological resources that implement the globalization of human knowledge were effective in the dissemination and improvement of the team regarding the care provided to children with congenital heart diseases. PMID:24896168

  4. Which factors affect software projects maintenance cost more?

    PubMed

    Dehaghani, Sayed Mehdi Hejazi; Hajrahimi, Nafiseh

    2013-03-01

    The software industry has made significant progress in recent years. The life of software includes two phases: production and maintenance. Software maintenance cost is growing steadily, and estimates show that about 90% of software lifetime cost is related to the maintenance phase. Extracting and considering the factors affecting software maintenance cost helps to estimate the cost and to reduce it by controlling those factors. In this study, the factors affecting software maintenance cost were determined and then ranked by priority, after which effective ways to reduce maintenance costs were presented. This is a research study: 15 software systems related to the information systems of health care centers and hospitals of Isfahan University of Medical Sciences were studied in the years 2010 to 2011. Forty members of medical software maintenance teams were selected as the sample. After interviews with experts in this field, the factors affecting maintenance cost were determined. To prioritize the factors, the analytic hierarchy process (AHP) was used: measurement criteria (the identified factors) were weighted by members of the maintenance teams and then prioritized with the help of EC software. Based on the results of this study, 32 factors were obtained and classified into six groups. "Project" was ranked the most influential factor in maintenance cost, with the highest priority. By taking into account major elements such as careful feasibility study of IT projects, full documentation, and keeping the designers involved in the maintenance phase, good results can be achieved in reducing maintenance costs and increasing the longevity of the software.
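
    The AHP prioritization step can be made concrete with a small sketch: derive priority weights from a pairwise comparison matrix via the principal eigenvector, then check consistency. The matrix entries and factor names below are invented for illustration, not the study's data.

        # AHP priority weights from a pairwise comparison matrix (Saaty 1-9 scale).
        # Three hypothetical factors: project, documentation, personnel.
        # A[i, j] = importance of factor i relative to factor j.
        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)              # principal eigenvalue
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()                 # normalized priorities

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
        cr = ci / 0.58                           # random index RI = 0.58 for n = 3
        print("priorities:", weights.round(3), " CR:", round(cr, 3))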

  5. Software Development in the Water Sciences: a view from the divide (Invited)

    NASA Astrophysics Data System (ADS)

    Miles, B.; Band, L. E.

    2013-12-01

    While training in statistical methods is an important part of many earth scientists' education, these scientists often learn the bulk of their software development skills in an ad hoc, just-in-time manner. Yet to carry out contemporary research, scientists are spending more and more time developing software. Here I present perspectives, as an earth sciences graduate student with professional software engineering experience, on the challenges scientists face in adopting software engineering practices, with an emphasis on the areas of the science software development lifecycle that could benefit most from improved engineering. This work builds on experience gained as part of the NSF-funded Water Science Software Institute (WSSI) conceptualization award (NSF Award # 1216817). Throughout 2013, the WSSI team held a series of software scoping and development sprints with the goals of: (1) adding features to better model green infrastructure within the Regional Hydro-Ecological Simulation System (RHESSys); and (2) infusing test-driven agile software development practices into the processes employed by the RHESSys team. The goal of efforts such as the WSSI is to ensure that investments by current and future scientists in software engineering training will enable transformative science by improving both scientific reproducibility and researcher productivity. Experience with the WSSI indicates: (1) the potential for achieving this goal; and (2) that while scientists are willing to adopt some software engineering practices, transformative science will require continued collaboration between domain scientists and cyberinfrastructure experts for the foreseeable future.

  6. Technology-driven dietary assessment: a software developer’s perspective

    PubMed Central

    Buday, Richard; Tapia, Ramsey; Maze, Gary R.

    2015-01-01

    Dietary researchers need new software to improve nutrition data collection and analysis, but creating information technology is difficult. Software development projects may be unsuccessful due to inadequate understanding of needs, management problems, technology barriers or legal hurdles. Cost overruns and schedule delays are common. Barriers facing scientific researchers developing software include workflow, cost, schedule, and team issues. Different methods of software development and the role that intellectual property rights play are discussed. A dietary researcher must carefully consider multiple issues to maximize the likelihood of success when creating new software. PMID:22591224

  7. Proposing an Evidence-Based Strategy for Software Requirements Engineering.

    PubMed

    Lindoerfer, Doris; Mansmann, Ulrich

    2016-01-01

    This paper discusses an evidence-based approach to software requirements engineering. The approach is called evidence-based because it uses publications on the specific problem as a surrogate for stakeholder interests, in order to formulate risks and testing experiences. This complements the idea that agile software development models, in which requirements and solutions evolve through collaboration between self-organizing cross-functional teams, are more relevant. The strategy is exemplified and applied to the development of a software requirements list used to develop software systems for patient registries.

  8. Implementation of the BETTER 2 program: a qualitative study exploring barriers and facilitators of a novel way to improve chronic disease prevention and screening in primary care.

    PubMed

    Sopcak, Nicolette; Aguilar, Carolina; O'Brien, Mary Ann; Nykiforuk, Candace; Aubrey-Bassler, Kris; Cullen, Richard; Grunfeld, Eva; Manca, Donna Patricia

    2016-12-01

    BETTER (Building on Existing Tools to Improve Chronic Disease Prevention and Screening in Primary Care) is a patient-based intervention to improve chronic disease prevention and screening (CDPS) for cardiovascular disease, diabetes, cancer, and associated lifestyle factors in patients aged 40 to 65. The key component of BETTER is a prevention practitioner (PP), a health care professional with specialized skills in CDPS who meets with patients to develop a personalized prevention prescription, using the BETTER toolkit and Brief Action Planning. The purpose of this qualitative study was to understand facilitators and barriers of the implementation of the BETTER 2 program among clinicians, patients, and stakeholders in three (urban, rural, and remote) primary care settings in Newfoundland and Labrador, Canada. We collected and analyzed responses from 20 key informant interviews and 5 focus groups, as well as memos and field notes. Data were organized using Nvivo 10 software and coded using constant comparison methods. We then employed the Consolidated Framework for Implementation Research (CFIR) to focus our analysis on the domains most relevant for program implementation. The following key elements, within the five CFIR domains, were identified as impacting the implementation of BETTER 2: (1) intervention characteristics-complexity and cost of the intervention; (2) outer setting-perception of fit including lack of remuneration, lack of resources, and duplication of services, as well as patients' needs as perceived by physicians and patients; (3) characteristics of prevention practitioners-interest in prevention and ability to support and motivate patients; (4) inner setting-the availability of a local champion and working in a team versus working as a team; and (5) process-planning and engaging, collaboration, and teamwork. The implementation of a novel CDPS program into new primary care settings is a complex, multi-level process. This study identified key elements that hindered or facilitated the implementation of the BETTER approach in three primary care settings in Newfoundland and Labrador. Employing the CFIR as an overarching typology allows for comparisons with other contexts and settings, and may be useful for practices, researchers, and policy-makers interested in the implementation of CDPS programs.

  9. Dipy, a library for the analysis of diffusion MRI data.

    PubMed

    Garyfallidis, Eleftherios; Brett, Matthew; Amirbekian, Bagrat; Rokem, Ariel; van der Walt, Stefan; Descoteaux, Maxime; Nimmo-Smith, Ian

    2014-01-01

    Diffusion Imaging in Python (Dipy) is a free and open source software project for the analysis of data from diffusion magnetic resonance imaging (dMRI) experiments. dMRI is an application of MRI that can be used to measure structural features of brain white matter. Many methods have been developed to use dMRI data to model the local configuration of white matter nerve fiber bundles and infer the trajectory of bundles connecting different parts of the brain. Dipy gathers implementations of many different methods in dMRI, including: diffusion signal pre-processing; reconstruction of diffusion distributions in individual voxels; fiber tractography and fiber track post-processing, analysis and visualization. Dipy aims to provide transparent implementations for all the different steps of dMRI analysis with a uniform programming interface. We have implemented classical signal reconstruction techniques, such as the diffusion tensor model and deterministic fiber tractography. In addition, cutting edge novel reconstruction techniques are implemented, such as constrained spherical deconvolution and diffusion spectrum imaging (DSI) with deconvolution, as well as methods for probabilistic tracking and original methods for tractography clustering. Many additional utility functions are provided to calculate various statistics, informative visualizations, as well as file-handling routines to assist in the development and use of novel techniques. In contrast to many other scientific software projects, Dipy is not being developed by a single research group. Rather, it is an open project that encourages contributions from any scientist/developer through GitHub and open discussions on the project mailing list. Consequently, Dipy today has an international team of contributors, spanning seven different academic institutions in five countries and three continents, which is still growing.
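
    A brief example of the uniform interface described above, for one classical step: fitting the diffusion tensor model and extracting fractional anisotropy. The file names are placeholders; the calls follow Dipy's documented reconstruction pattern.

    ```python
    import nibabel as nib
    from dipy.core.gradients import gradient_table
    from dipy.io.gradients import read_bvals_bvecs
    from dipy.reconst.dti import TensorModel

    img = nib.load("dwi.nii.gz")                        # placeholder dMRI volume
    data = img.get_fdata()
    bvals, bvecs = read_bvals_bvecs("dwi.bval", "dwi.bvec")
    gtab = gradient_table(bvals, bvecs)                 # acquisition scheme

    tenfit = TensorModel(gtab).fit(data)                # voxel-wise tensor reconstruction
    fa = tenfit.fa                                      # fractional anisotropy map
    print(fa.shape)
    ```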

  11. Evaluating the Medical Kit System for the International Space Station(ISS) - A Paradigm Revisited

    NASA Technical Reports Server (NTRS)

    Hailey, Melinda J.; Urbina, Michelle C.; Hughlett, Jessica L.; Gilmore, Stevan; Locke, James; Reyna, Baraquiel; Smith, Gwyn E.

    2010-01-01

    Medical capabilities aboard the International Space Station (ISS) have been packaged to help astronaut crew medical officers (CMO) mitigate both urgent and non-urgent medical issues during their 6-month expeditions. Two ISS crewmembers are designated as CMOs for each 3-crewmember mission and are typically not physicians. In addition, the ISS may have communication gaps of up to 45 minutes during each orbit, necessitating medical equipment that can be reliably operated autonomously during flight. The retirement of the space shuttle combined with ten years of manned ISS expeditions led the Space Medicine Division at the NASA Johnson Space Center to reassess the current ISS Medical Kit System. This reassessment led to the system being streamlined to meet future logistical considerations with current Russian space vehicles and future NASA/commercial space vehicle systems. Methods The JSC Space Medicine Division coordinated the development of requirements and fabrication of prototypes, and conducted usability testing for the new ISS Medical Kit System, in concert with implementing updated versions of the ISS Medical Checklist and associated in-flight software applications. The teams constructed a medical kit system with the flexibility for use on the ISS and resupply on the Russian Progress space vehicle and future NASA/commercial space vehicles. Results Prototype systems were developed, reviewed, and tested for implementation. Completion of the Preliminary and Critical Design Reviews resulted in a streamlined ISS Medical Kit System that is being used for training by ISS crews starting with Expedition 27 (June 2011). Conclusions The team will present the process for designing, developing, implementing, and training with this new ISS Medical Kit System.

  12. [Mandibular reconstruction with fibula free flap. Experience of virtual reconstruction using Osirix®, a free and open source software for medical imagery].

    PubMed

    Albert, S; Cristofari, J-P; Cox, A; Bensimon, J-L; Guedon, C; Barry, B

    2011-12-01

    The techniques of free tissue transfer are mainly used for mandibular reconstruction by specialized surgical teams. This type of reconstruction is mostly performed for head and neck cancers affecting the mandibular bone and requiring wide surgical resection with interruption of the mandible. To decrease the duration of the operation, the surgical procedure generally involves two teams, one devoted to cancer resection and the other to raising the fibular flap and performing the reconstruction. For better preparation of this surgical procedure, we propose the use of medical imaging software enabling three-dimensional mandibular reconstruction from the CT scan acquired during the initial disease-staging checkup. The software used is Osirix®, developed since 2004 by a team of radiologists from Geneva and UCLA; it runs on Apple® computers and can be downloaded free of charge in its basic version. We report our experience with this software in 17 patients, with preoperative three-dimensional modelling of the mandible and of the mandibular segment to be removed. It also forecasts the number of fibula fragments needed and the location of the osteotomies. Copyright © 2009 Elsevier Masson SAS. All rights reserved.

  13. Spacelab user implementation assessment study. (Software requirements analysis). Volume 2: Technical report

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The engineering analyses and evaluation studies conducted for the Software Requirements Analysis are discussed. Included are the development of the study data base, synthesis of implementation approaches for software required by both mandatory onboard computer services and command/control functions, and identification and implementation of software for ground processing activities.

  14. Barriers to and facilitators of implementing complex workplace dietary interventions: process evaluation results of a cluster controlled trial.

    PubMed

    Fitzgerald, Sarah; Geaney, Fiona; Kelly, Clare; McHugh, Sheena; Perry, Ivan J

    2016-04-21

    Ambiguity exists regarding the effectiveness of workplace dietary interventions. Rigorous process evaluation is vital to understand this uncertainty. This study was conducted as part of the Food Choice at Work trial which assessed the comparative effectiveness of a workplace environmental dietary modification intervention and an educational intervention both alone and in combination versus a control workplace. Effectiveness was assessed in terms of employees' dietary intakes, nutrition knowledge and health status in four large manufacturing workplaces. The study aimed to examine barriers to and facilitators of implementing complex workplace interventions, from the perspectives of key workplace stakeholders and researchers involved in implementation. A detailed process evaluation monitored and evaluated intervention implementation. Interviews were conducted at baseline (27 interviews) and at 7-9 month follow-up (27 interviews) with a purposive sample of workplace stakeholders (managers and participating employees). Topic guides explored factors which facilitated or impeded implementation. Researchers involved in recruitment and data collection participated in focus groups at baseline and at 7-9 month follow-up to explore their perceptions of intervention implementation. Data were imported into NVivo software and analysed using a thematic framework approach. Four major themes emerged; perceived benefits of participation, negotiation and flexibility of the implementation team, viability and intensity of interventions and workplace structures and cultures. The latter three themes either positively or negatively affected implementation, depending on context. The implementation team included managers involved in coordinating and delivering the interventions and the researchers who collected data and delivered intervention elements. Stakeholders' perceptions of the benefits of participating, which facilitated implementation, included managers' desire to improve company image and employees seeking health improvements. Other facilitators included stakeholder buy-in, organisational support and stakeholder cohesiveness with regards to the level of support provided to the intervention. Anticipation of employee resistance towards menu changes, workplace restructuring and target-driven workplace cultures impeded intervention implementation. Contextual factors such as workplace structures and cultures need to be considered in the implementation of future workplace dietary interventions. Negotiation and flexibility of key workplace stakeholders plays an integral role in overcoming the barriers of workplace cultures, structures and resistance to change. Current Controlled Trials: ISRCTN35108237. Date of registration: 02/07/2013.

  15. Development and Deployment of the OpenMRS-Ebola Electronic Health Record System for an Ebola Treatment Center in Sierra Leone

    PubMed Central

    Jazayeri, Darius; Teich, Jonathan M; Ball, Ellen; Nankubuge, Patricia Alexandra; Rwebembera, Job; Wing, Kevin; Sesay, Alieu Amara; Kanter, Andrew S; Ramos, Glauber D; Walton, David; Cummings, Rachael; Checchi, Francesco; Fraser, Hamish S

    2017-01-01

    Background Stringent infection control requirements at Ebola treatment centers (ETCs), which are specialized facilities for isolating and treating Ebola patients, create substantial challenges for recording and reviewing patient information. During the 2014-2016 West African Ebola epidemic, paper-based data collection systems at ETCs compromised the quality, quantity, and confidentiality of patient data. Electronic health record (EHR) systems have the potential to address such problems, with benefits for patient care, surveillance, and research. However, no suitable software was available for deployment when large-scale ETCs opened as the epidemic escalated in 2014. Objective We present our work on rapidly developing and deploying OpenMRS-Ebola, an EHR system for the Kerry Town ETC in Sierra Leone. We describe our experience, lessons learned, and recommendations for future health emergencies. Methods We used the OpenMRS platform and Agile software development approaches to build OpenMRS-Ebola. Key features of our work included daily communications between the development team and ground-based operations team, iterative processes, and phased development and implementation. We made design decisions based on the restrictions of the ETC environment and regular user feedback. To evaluate the system, we conducted predeployment user questionnaires and compared the EHR records with duplicate paper records. Results We successfully built OpenMRS-Ebola, a modular stand-alone EHR system with a tablet-based application for infectious patient wards and a desktop-based application for noninfectious areas. OpenMRS-Ebola supports patient tracking (registration, bed allocation, and discharge); recording of vital signs and symptoms; medication and intravenous fluid ordering and monitoring; laboratory results; clinician notes; and data export. It displays relevant patient information to clinicians in infectious and noninfectious zones. We implemented phase 1 (patient tracking; drug ordering and monitoring) after 2.5 months of full-time development. OpenMRS-Ebola was used for 112 patient registrations, 569 prescription orders, and 971 medication administration recordings. We were unable to fully implement phases 2 and 3 as the ETC closed because of a decrease in new Ebola cases. The phase 1 evaluation suggested that OpenMRS-Ebola worked well in the context of the rollout, and the user feedback was positive. Conclusions To our knowledge, OpenMRS-Ebola is the most comprehensive adaptable clinical EHR built for a low-resource setting health emergency. It is designed to address the main challenges of data collection in highly infectious environments that require robust infection prevention and control measures and it is interoperable with other electronic health systems. Although we built and deployed OpenMRS-Ebola more rapidly than typical software, our work highlights the challenges of having to develop an appropriate system during an emergency rather than being able to rapidly adapt an existing one. Lessons learned from this and previous emergencies should be used to ensure that a set of well-designed, easy-to-use, pretested health software is ready for quick deployment in future. PMID:28827211

  16. Fast Deployment on the Cloud of Integrated Postgres, API and a Jupyter Notebook for Geospatial Collaboration

    NASA Astrophysics Data System (ADS)

    Fatland, R.; Tan, A.; Arendt, A. A.

    2016-12-01

    We describe a Python-based implementation of a PostgreSQL database accessed through an Application Programming Interface (API) hosted on the Amazon Web Services public cloud. The data are geospatial and concern hydrological model results in the glaciated catchment basins of southcentral and southeast Alaska. The implementation, however, is intended to generalize to other forms of geophysical data, particularly data intended to be shared across a collaborative team or publicly. An example (moderate-size) dataset is provided together with the code base and a complete installation tutorial on GitHub. An enthusiastic scientist with some familiarity with software installation can replicate the example system in two hours. The installation includes the database, the API, a test client, and a supporting Jupyter Notebook, specifically oriented towards Python 3, with markup text to comprise an executable paper. Installation 'on the cloud' often engenders discussion of cloud cost and data safety. By treating the process as somewhat "cookbook", we hope first to demonstrate the feasibility of the proposition. A discussion of cost and data security is provided in this presentation and in the accompanying tutorial/documentation. This geospatial data system case study is part of a larger effort at the University of Washington to enable research teams to take advantage of the public cloud to meet challenges in data management and analysis.
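
    A minimal sketch of the database-behind-an-API pattern described here, assuming a hypothetical model_results table and connection settings; it is not the project's GitHub code base, which bundles the full database, API, client, and notebook.

    ```python
    from flask import Flask, jsonify
    import psycopg2

    app = Flask(__name__)
    DSN = "dbname=hydro user=reader host=localhost"  # assumed connection settings

    @app.route("/basins/<int:basin_id>/runoff")
    def runoff(basin_id):
        # One short-lived connection per request; a pool would be used in production.
        conn = psycopg2.connect(DSN)
        try:
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT obs_date, runoff_mm FROM model_results "
                    "WHERE basin_id = %s ORDER BY obs_date",
                    (basin_id,),
                )
                rows = [{"date": d.isoformat(), "runoff_mm": float(v)}
                        for d, v in cur.fetchall()]
        finally:
            conn.close()
        return jsonify(rows)

    if __name__ == "__main__":
        app.run(port=8080)
    ```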

  17. Proficiency Assessment of Male Volleyball Teams of the 13-15-Year Age Group at Estonian Championships

    ERIC Educational Resources Information Center

    Stamm, Meelis; Stamm, Raini; Koskel, Sade

    2008-01-01

    Study aim: To assess the feasibility of using the authors' own computer software, "Game", at competitions. Material and methods: The data were collected during the 2006 Estonian championships for male volleyball teams of the 13-15-year age group (n = 8). In all games, the performance of both teams was recorded in parallel with two computers. A total of…

  18. A systematic review of team formulation in clinical psychology practice: Definition, implementation, and outcomes.

    PubMed

    Geach, Nicole; Moghaddam, Nima G; De Boos, Danielle

    2018-06-01

    Team formulation is promoted by professional practice guidelines for clinical psychologists. However, it is unclear whether team formulation is understood/implemented in consistent ways - or whether there is outcome evidence to support the promotion of this practice. This systematic review aimed to (1) synthesize how team formulation practice is defined and implemented by practitioner psychologists and (2) analyse the range of team formulation outcomes in the peer-reviewed literature. Seven electronic bibliographic databases were searched in June 2016. Eleven articles met inclusion criteria and were quality assessed. Extracted data were synthesized using content analysis. Descriptions of team formulation revealed three main forms of instantiation: (1) a structured, consultation approach; (2) semi-structured, reflective practice meetings; and (3) unstructured/informal sharing of ideas through routine interactions. Outcome evidence linked team formulation to a range of outcomes for staff teams and service users, including some negative outcomes. Quality appraisal identified significant issues with evaluation methods; such that, overall, outcomes were not well-supported. There is weak evidence to support the claimed beneficial outcomes of team formulation in practice. There is a need for greater specification and standardization of 'team formulation' practices, to enable a clearer understanding of any relationships with outcomes and implications for best-practice implementations. Under the umbrella term of 'team formulation', three types of practice are reported: (1) highly structured consultation; (2) reflective practice meetings; and (3) informal sharing of ideas. Outcomes linked to team formulation, including some negative outcomes, were not well evidenced. Research using robust study designs is required to investigate the process and outcomes of team formulation practice. © 2017 The British Psychological Society.

  19. An interprofessional team approach to tracheostomy care: a mixed-method investigation into the mechanisms explaining tracheostomy team effectiveness.

    PubMed

    Mitchell, Rebecca; Parker, Vicki; Giles, Michelle

    2013-04-01

    In an effort to reduce tracheostomy-related complications, many acute care facilities have implemented specialist tracheostomy teams. Some studies, however, generate only mixed support for the connection between tracheostomy teams and patient outcomes. This suggests that the effect of collaborative teamwork in tracheostomy care is still not well understood. The aim of this paper is to investigate the mechanisms through which an interprofessional team approach can improve the management of patients with a tracheostomy. The achievement of this research objective requires the collection of rich empirical data, which indicates the use of a qualitative methodology. A case study approach provided an opportunity to collect a wealth of data on tracheostomy team activities and dynamics. Data were collected on an interprofessional tracheostomy team in a large tertiary referral hospital in Australia. The team was composed of clinical nurse consultants, a physiotherapist, a speech pathologist, a dietician, a social worker and medical officers. Data were collected through a focus group and one-to-one, semi-structured in-depth interviews, and thematic analysis was used to analyse the experiences of tracheostomy team members. Qualitative analysis resulted in two main themes: interprofessional protocol development and implementation; and interprofessional decision-making. Our findings suggest that tracheostomy teams enhance consistency of care through the development and implementation of interprofessional protocols. In addition, such teams allow more efficient and effective communication and decision-making consequent to the collocation of diverse professionals. These findings provide new insight into the role of tracheostomy teams in successfully implementing complex protocols and the explanatory mechanisms through which interprofessional teams may generate positive outcomes for tracheostomy patients. Copyright © 2012. Published by Elsevier Ltd.

  20. Extra-team Connections for Knowledge Transfer between Staff Teams

    ERIC Educational Resources Information Center

    Ramanadhan, Shoba; Wiecha, Jean L.; Emmons, Karen M.; Gortmaker, Steven L.; Viswanath, Kasisomayajula

    2009-01-01

    As organizations implement novel health promotion programs across multiple sites, they face great challenges related to knowledge management. Staff social networks may be a useful medium for transferring program-related knowledge in multi-site implementation efforts. To study this potential, we focused on the role of extra-team connections (ties…

  1. 42 CFR 460.106 - Plan of care.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Services § 460.106 Plan of care. (a) Basic requirement. The interdisciplinary team must promptly develop a... outcomes to be achieved. (c) Implementation of the plan of care. (1) The team must implement, coordinate...) The team must continuously monitor the participant's health and psychosocial status, as well as the...

  2. 42 CFR 460.106 - Plan of care.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Services § 460.106 Plan of care. (a) Basic requirement. The interdisciplinary team must promptly develop a... outcomes to be achieved. (c) Implementation of the plan of care. (1) The team must implement, coordinate...) The team must continuously monitor the participant's health and psychosocial status, as well as the...

  3. Implementing an Open Source Electronic Health Record System in Kenyan Health Care Facilities: Case Study.

    PubMed

    Muinga, Naomi; Magare, Steve; Monda, Jonathan; Kamau, Onesmus; Houston, Stuart; Fraser, Hamish; Powell, John; English, Mike; Paton, Chris

    2018-04-18

    The Kenyan government, working with international partners and local organizations, has developed an eHealth strategy, specified standards and guidelines for electronic health record adoption in public hospitals, and implemented two major health information technology projects: District Health Information Software Version 2, for collating national health care indicators, and a rollout of the KenyaEMR and International Quality Care Health Management Information Systems, for managing 600 HIV clinics across the country. Following these projects, a modified version of the Open Medical Record System electronic health record was specified and developed to fulfill the clinical and administrative requirements of health care facilities operated by devolved counties in Kenya and to automate the process of collating health care indicators and entering them into the District Health Information Software Version 2 system. We aimed to present a descriptive case study of the implementation of an open source electronic health record system in public health care facilities in Kenya. We conducted a landscape review of existing literature concerning eHealth policies and electronic health record development in Kenya. Following initial discussions with the Ministry of Health, the World Health Organization, and implementing partners, we conducted a series of visits to implementing sites to conduct semistructured individual interviews and group discussions with stakeholders to produce a historical case study of the implementation. This case study describes how consultants based in Kenya, working with developers in India and project stakeholders, implemented the new system in several public hospitals in a county in rural Kenya. The implementation process included upgrading the hospital information technology infrastructure, training users, and attempting to garner administrative and clinical buy-in for adoption of the system. The initial deployment was ultimately scaled back due to a complex mix of sociotechnical and administrative issues. Learning from these early challenges, the system is now being redesigned and prepared for deployment in 6 new counties across Kenya. Implementing electronic health record systems is a challenging process in high-income settings. In low-income settings, such as Kenya, open source software may offer some respite from the high costs of software licensing, but the familiar challenges of clinical and administrative buy-in, the need to adequately train users, and the need for the provision of ongoing technical support are common across the North-South divide. Strategies such as creating local support teams, using local development resources, ensuring end user buy-in, and rolling out in smaller facilities before larger hospitals are being incorporated into the project. These are positive developments to help maintain momentum as the project continues. Further integration with existing open source communities could help ongoing development and implementations of the project. We hope this case study will provide some lessons and guidance for other challenging implementations of electronic health record systems as they continue across Africa. ©Naomi Muinga, Steve Magare, Jonathan Monda, Onesmus Kamau, Stuart Houston, Hamish Fraser, John Powell, Mike English, Chris Paton. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 18.04.2018.

  4. KSC-98pc969

    NASA Image and Video Library

    1998-08-19

    KENNEDY SPACE CENTER, FLA. -- In Firing Room 1 at KSC, Shuttle launch team members put the Shuttle system through an integrated simulation. The control room is set up with software used to simulate flight and ground systems in the launch configuration. A Simulation Team, comprising KSC engineers, introduces 12 or more major problems to prepare the launch team for worst-case scenarios. Such tests and simulations keep the Shuttle launch team sharp and ready for liftoff. The next liftoff is targeted for Oct. 29.

  5. KSC-98pc971

    NASA Image and Video Library

    1998-08-20

    KENNEDY SPACE CENTER, FLA. -- In Firing Room 1 at KSC, Shuttle launch team members put the Shuttle system through an integrated simulation. The control room is set up with software used to simulate flight and ground systems in the launch configuration. A Simulation Team, comprising KSC engineers, introduces 12 or more major problems to prepare the launch team for worst-case scenarios. Such tests and simulations keep the Shuttle launch team sharp and ready for liftoff. The next liftoff is targeted for Oct. 29.

  6. Implementing Total Quality Management in a University Setting.

    ERIC Educational Resources Information Center

    Coate, L. Edwin

    1991-01-01

    Oregon State University implemented Total Quality Management in nine phases: exploration; establishing a pilot study team; defining customer needs; adopting the breakthrough planning process; performing breakthrough planning in divisions; forming daily management teams; initiating cross-functional pilot projects; implementing cross-functional…

  7. Maternity Nurses' Perceptions of Implementation of the Ten Steps to Successful Breastfeeding.

    PubMed

    Cunningham, Emilie M; Doyle, Eva I; Bowden, Rodney G

    The purpose of this study was to determine maternity nurses' perceptions of implementing the Ten Steps to Successful Breastfeeding. An online survey and a focus group were used to evaluate maternity nurses' perceptions of implementing the Ten Steps to Successful Breastfeeding in an urban Texas hospital at the onset of the project. Responses were transcribed and coded using Nvivo software. Thematic analysis was conducted, and consensus was reached among the research team to validate themes. Twenty-eight maternity nurses participated. Nurses perceived a number of barriers to implementing the Ten Steps to Successful Breastfeeding, including nurse staffing shortages, variations in practice among nurses, different levels of nurse education and knowledge about breastfeeding, lack of parental awareness and knowledge about breastfeeding, culture, and postpartum issues such as maternal fatigue, visitors, and routine required procedures during recovery care that interfered with skin-to-skin positioning. Maternity nurses desired more education about breastfeeding; specifically, a hands-on approach, rather than formal classroom instruction, to be able to promote successful implementation of the Ten Steps. More education on breastfeeding for new mothers, their families, and healthcare providers was recommended. Nurse staffing should be adequate to support nurses in their efforts to promote breastfeeding. Skin-to-skin positioning should be integrated into the recovery period. Hospital leadership support for full implementation and policy adherence is essential. Challenges in implementing the Ten Steps were identified along with potential solutions.

  8. Putting the MeaT into TeaM Training: Development, Delivery, and Evaluation of a Surgical Team-Training Workshop.

    PubMed

    Seymour, Neal E; Paige, John T; Arora, Sonal; Fernandez, Gladys L; Aggarwal, Rajesh; Tsuda, Shawn T; Powers, Kinga A; Langlois, Gerard; Stefanidis, Dimitrios

    2016-01-01

    Despite its importance to patient care, team training is infrequently used in surgical education. To address this, a workshop was developed by the Association for Surgical Education Simulation Committee to teach team training using high-fidelity patient simulators and the American College of Surgeons-Association of Program Directors in Surgery team-training curriculum. Workshops were conducted at 3 national meetings. Participants completed preworkshop and postworkshop questionnaires to define experience, confidence in using simulation, intention to implement, as well as workshop content quality. The course consisted of (A) a didactic review of Preparation, Implementation, and Debriefing and (B) facilitated small group simulation sessions followed by debriefings. Of 78 participants, 51 completed the workshops. Overall, 65% indicated that residents at their institutions used patient simulation, but only 33% used the American College of Surgeons-Association of Program Directors in Surgery team-training modules. The workshop increased confidence to implement simulation team training (3.4 ± 1.3 vs 4.5 ± 0.9). Quality and importance were rated highly (5.4 ± 0.6; highest score = 6). Preparation for simulation-based team training is possible in this workshop setting, although the effect on actual implementation remains to be determined. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  9. Making Sense, Making Do: Local District Implementation of a New State Induction Policy

    ERIC Educational Resources Information Center

    Ellis, Chad D.

    2016-01-01

    Connecticut's Teacher Education and Mentoring (TEAM) program is in its early stages of implementation. This study examined how local school districts implemented TEAM and identified factors that affected implementation. It was based on interviews with twenty-two participants at the state, district, and local school levels. The intentions of the…

  10. IMSF: Infinite Methodology Set Framework

    NASA Astrophysics Data System (ADS)

    Ota, Martin; Jelínek, Ivan

    Software development is usually an integration task in an enterprise environment - few software applications work autonomously now. It is usually a collaboration of heterogeneous and unstable teams. One serious problem is lack of resources, a popular result being outsourcing and 'body shopping', and, indirectly, team and team-member fluctuation. Outsourced sub-deliveries easily become black boxes with no clear development method used, which has a negative impact on supportability. Such environments then often face problems of quality assurance and enterprise know-how management. The methodology used is one of the key factors. Each methodology was created as a generalization of a number of solved projects, and each methodology is thus more or less connected with a set of task types. When the task type is not suitable, problems arise that usually result in an undocumented ad hoc solution. This was the motivation behind formalizing a simple process for collaborative software engineering. The Infinite Methodology Set Framework (IMSF) defines the ICT business process of adaptive use of methods for classified types of tasks. The article introduces IMSF and briefly comments on its meta-model.

  11. Teams communicating through STEPPS.

    PubMed

    Stead, Karen; Kumar, Saravana; Schultz, Timothy J; Tiver, Sue; Pirone, Christy J; Adams, Robert J; Wareham, Conrad A

    2009-06-01

    To evaluate the effectiveness of the implementation of a TeamSTEPPS (Team Strategies and Tools to Enhance Performance and Patient Safety) program at an Australian mental health facility. TeamSTEPPS is an evidence-based teamwork training system developed in the United States. Five health care sites in South Australia implemented TeamSTEPPS using a train-the-trainer model over an 8-month intervention period commencing January 2008 and concluding September 2008. A team of senior clinical staff was formed at each site to drive the improvement process. Independent researchers used direct observation and questionnaire surveys to evaluate the effectiveness of the implementation in three outcome areas: observed team behaviours; staff attitudes and opinions; and clinical performance and outcome. The results reported here focus on one site, an inpatient mental health facility. Team knowledge, skills and attitudes; patient safety culture; incident reporting rates; seclusion rates; observation for the frequency of use of TeamSTEPPS tools. Outcomes included restructuring of multidisciplinary meetings and the introduction of structured communication tools. The evaluation of patient safety culture and of staff knowledge, skills and attitudes (KSA) to teamwork and communication indicated a significant improvement in two dimensions of patient safety culture (frequency of event reporting, and organisational learning) and a 6.8% increase in the total KSA score. Clinical outcomes included reduced rates of seclusion. TeamSTEPPS implementation had a substantial impact on patient safety culture, teamwork and communication at an Australian mental health facility. It encouraged a culture of learning from patient safety incidents and making continuous improvements.

  12. Team Production of Learner-Controlled Courseware: A Progress Report.

    ERIC Educational Resources Information Center

    Bunderson, C. Victor

    A project being conducted by the MITRE Corporation and Brigham Young University (BYU) is developing hardware, software, and courseware for the TICCIT (Time Shared, Interactive, Computer Controlled Information Television) computer-assisted instructional system. Four instructional teams at BYU, each having an instructional psychologist, subject…

  13. The shuttle main engine: A first look

    NASA Technical Reports Server (NTRS)

    Schreur, Barbara

    1996-01-01

    Anyone entering the Space Shuttle Main Engine (SSME) team attends a two-week course to become familiar with the design and workings of the engine. This course provides intensive coverage of the individual hardware items and their functions. Some individuals, particularly those involved with software maintenance and development, have felt overwhelmed by this volume of material and their lack of a logical framework in which to place it. To provide this logical framework, it was decided that a brief self-taught introduction to the overall operation of the SSME should be designed. To aid new team members with an interest in the software, this new course also explains the structure and functioning of the controller and its software. This paper describes that introduction.

  14. Fault tolerant software modules for SIFT

    NASA Technical Reports Server (NTRS)

    Hecht, M.; Hecht, H.

    1982-01-01

    The implementation of software fault tolerance is investigated for critical modules of the Software Implemented Fault Tolerance (SIFT) operating system, which supports the computational and reliability requirements of advanced fly-by-wire transport aircraft. Fault-tolerant designs generated for the error-reporting and global executive modules are examined. A description of the alternate routines, implementation requirements, and software validation is included.
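
    The "alternate routines" suggest the classic recovery-block pattern; the sketch below (toy routines, not SIFT code) runs a primary routine, applies an acceptance test, and falls back to an alternate on failure.

    ```python
    def recovery_block(x, routines, acceptance_test):
        for routine in routines:                  # primary first, then alternates
            result = routine(x)
            if acceptance_test(x, result):        # e.g., a range or consistency check
                return result
        raise RuntimeError("all alternates failed the acceptance test")

    primary = lambda x: x ** 0.5
    alternate = lambda x: x / (x ** 0.5)          # algebraically equivalent fallback
    accept = lambda x, r: abs(r * r - x) < 1e-6   # accept r only if it squares back to x
    print(recovery_block(2.0, [primary, alternate], accept))
    ```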

  15. The Use of Flexible, Interactive, Situation-Focused Software for the E-Learning of Mathematics.

    ERIC Educational Resources Information Center

    Farnsworth, Ralph Edward

    This paper discusses the classroom, home, and distance use of new, flexible, interactive, application-oriented software known as Active Learning Suite. The actual use of the software, not just a controlled experiment, is reported on. Designed for the e-learning of university mathematics, the program was developed by a joint U.S.-Russia team and…

  16. The cleanroom case study in the Software Engineering Laboratory: Project description and early analysis

    NASA Technical Reports Server (NTRS)

    Green, Scott; Kouchakdjian, Ara; Basili, Victor; Weidow, David

    1990-01-01

    This case study analyzes the application of the cleanroom software development methodology to the development of production software at the NASA/Goddard Space Flight Center. The cleanroom methodology emphasizes human discipline in program verification to produce reliable software products that are right the first time. Preliminary analysis of the cleanroom case study shows that the method can be applied successfully in the FDD environment and may increase staff productivity and product quality. Compared to typical Software Engineering Laboratory (SEL) activities, there is evidence of lower failure rates, a more complete and consistent set of inline code documentation, a different distribution of phase effort activity, and a different growth profile in terms of lines of code developed. The major goals of the study were to: (1) assess the process used in the SEL cleanroom model with respect to team structure, team activities, and effort distribution; (2) analyze the products of the SEL cleanroom model and determine the impact on measures of interest, including reliability, productivity, overall life-cycle cost, and software quality; and (3) analyze the residual products in the application of the SEL cleanroom model, such as fault distribution, error characteristics, system growth, and computer usage.

  17. Robot Tracking of Human Subjects in Field Environments

    NASA Technical Reports Server (NTRS)

    Graham, Jeffrey; Shillcutt, Kimberly

    2003-01-01

    Future planetary exploration will involve both humans and robots. Understanding and improving their interaction is a main focus of research in the Intelligent Systems Branch at NASA's Johnson Space Center. By teaming intelligent robots with astronauts on surface extra-vehicular activities (EVAs), safety and productivity can be improved. The EVA Robotic Assistant (ERA) project was established to study the issues of human-robot teams, to develop a testbed robot to assist space-suited humans in exploration tasks, and to experimentally determine the effectiveness of an EVA assistant robot. A companion paper discusses the ERA project in general, its history starting with ASRO (Astronaut-Rover project), and the results of recent field tests in Arizona. This paper focuses on one aspect of the research, robot tracking, in greater detail: the software architecture and algorithms. The ERA robot is capable of moving towards and/or continuously following mobile or stationary targets or sequences of targets. The contributions made by this research include how the low-level pose data is assembled, normalized and communicated, how the tracking algorithm was generalized and implemented, and qualitative performance reports from recent field tests.
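
    As a purely illustrative sketch of the follow-a-target behavior described (generic proportional control with hypothetical gains, not the ERA flight software), one control step toward a possibly moving target might look like:

    ```python
    import math

    def follow_step(robot_x, robot_y, robot_heading, target_x, target_y,
                    k_lin=0.5, k_ang=1.5, standoff=2.0):
        """Return (linear, angular) velocity commands for one control step.

        The robot turns toward the target and closes range, holding a standoff
        distance so it can follow a moving subject without colliding.
        """
        dx, dy = target_x - robot_x, target_y - robot_y
        distance = math.hypot(dx, dy)
        bearing = math.atan2(dy, dx)
        # Wrap the heading error into [-pi, pi]
        heading_error = (bearing - robot_heading + math.pi) % (2 * math.pi) - math.pi
        linear = k_lin * max(0.0, distance - standoff)
        angular = k_ang * heading_error
        return linear, angular

    print(follow_step(0.0, 0.0, 0.0, 10.0, 5.0))
    ```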

  18. The researchers' role in knowledge translation: a realist evaluation of the development and implementation of diagnostic pathways for cancer in two United Kingdom localities.

    PubMed

    Banks, Jon; Wye, Lesley; Hall, Nicola; Rooney, James; Walter, Fiona M; Hamilton, Willie; Gjini, Ardiana; Rubin, Greg

    2017-12-13

    In examining an initiative to develop and implement new cancer diagnostic pathways in two English localities, this paper evaluates 'what works' and examines the role of researchers in facilitating knowledge translation amongst teams of local clinicians and policy-makers. Using realist evaluation with a mixed methods case study approach, we conducted documentary analysis of meeting minutes and pathway iterations to map pathway development. We interviewed 14 participants to identify the contexts, mechanisms and outcomes (CMOs) that led to successful pathway development and implementation. Interviews were analysed thematically and four CMO configurations were developed. One site produced three fully implemented pathways, while the other produced two that were partly implemented. In explaining the differences, we found that a respected, independent, well-connected leader modelling partnership working and who facilitates a local, stable group that agree about the legitimacy of the data and project (context) can empower local teams to become sufficiently autonomous (mechanism) to develop and implement research-based pathways (outcome). Although both teams designed relevant, research-based cancer pathways, in the site where the pathways were successfully implemented the research team merely assisted, while, in the other, the research team drove the initiative. Based on our study findings, local stakeholders can apply local and research knowledge to develop and implement research-based pathways. However, success will depend on how academics empower local teams to create autonomy. Crucially, after re-packaging and translating research for local circumstances, identifying fertile environments with the right elements for implementation and developing collaborative relationships with local leaders, academics must step back.

  19. Software fault-tolerance by design diversity DEDIX: A tool for experiments

    NASA Technical Reports Server (NTRS)

    Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Lyu, R. T.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.

    1986-01-01

    The use of multiple versions of a computer program, independently designed from a common specification, to reduce the effects of an error is discussed. If these versions are designed by independent programming teams, it is expected that a fault in one version will not have the same behavior as any fault in the other versions. Since the errors in the output of the versions are different and uncorrelated, it is possible to run the versions concurrently, cross-check their results at prespecified points, and mask errors. A DEsign DIversity eXperiments (DEDIX) testbed was implemented to study the influence of common mode errors which can result in a failure of the entire system. The layered design of DEDIX and its decision algorithm are described.
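
    At each prespecified cross-check point, the concurrent cross-check-and-mask idea reduces to a majority vote over version outputs. A minimal sketch (stand-in values, not DEDIX itself):

    ```python
    from collections import Counter

    def vote(results):
        """Return the majority value among version outputs, or None if no majority."""
        value, count = Counter(results).most_common(1)[0]
        return value if count > len(results) / 2 else None

    # Outputs of three independently designed versions at a cross-check point;
    # one version is faulty, and its uncorrelated error is outvoted and masked.
    print(vote([42, 42, 41]))  # 42
    ```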

  20. A problem of optimal control and observation for distributed homogeneous multi-agent system

    NASA Astrophysics Data System (ADS)

    Kruglikov, Sergey V.

    2017-12-01

    The paper considers the implementation of an algorithm for controlling a distributed complex of several mobile robots. The concept of a unified information space of the controlling system is applied. The presented information and mathematical models of participants and obstacles, as real agents, and of goals and scenarios, as virtual agents, form the basis of the algorithmic and software background for a computer decision support system. The control scheme assumes indirect management of the robotic team on the basis of an optimal control and observation problem that predicts intelligent behavior in a dynamic, hostile environment. The basic model problem is the transportation of a compound cargo by a group of participants under a distributed control scheme in terrain with multiple obstacles.

  1. Artificial intelligence support for scientific model-building

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    1992-01-01

    Scientific model-building can be a time-intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot easily be distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientific development team to understand. We believe that artificial intelligence techniques can facilitate both the model-building and model-sharing process. In this paper, we overview our effort to build a scientific modeling software tool that aids the scientist in developing and using models. This tool includes an interactive intelligent graphical interface, a high-level domain specific modeling language, a library of physics equations and experimental datasets, and a suite of data display facilities.
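
    To make the "library of physics equations" idea concrete (an assumed illustration, not the NASA tool itself), named symbolic equations can be stored and composed by the model-builder:

    ```python
    import sympy as sp

    T, rho, P, R = sp.symbols("T rho P R", positive=True)

    EQUATION_LIBRARY = {
        # ideal gas law in density form
        "ideal_gas": sp.Eq(P, rho * R * T),
    }

    # A scientist building a model pulls an equation and solves for a variable
    eq = EQUATION_LIBRARY["ideal_gas"]
    print(sp.solve(eq, T)[0])  # P/(R*rho)
    ```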

  2. A randomized wait-list controlled analysis of the implementation integrity of team-initiated problem solving processes.

    PubMed

    Newton, J Stephen; Horner, Robert H; Algozzine, Bob; Todd, Anne W; Algozzine, Kate

    2012-08-01

    Members of Positive Behavior Interventions and Supports (PBIS) teams from 34 elementary schools participated in a Team-Initiated Problem Solving (TIPS) Workshop and follow-up technical assistance. Within the context of a randomized wait-list controlled trial, team members who were the first recipients of the TIPS intervention demonstrated greater implementation integrity in using the problem-solving processes during their team meetings than did members of PBIS Teams in the Wait-List Control group. The success of TIPS at improving implementation integrity of the problem-solving processes is encouraging and suggests the value of conducting additional research focused on determining whether there is a functional relation between use of these problem-solving processes and actual resolution of targeted student academic and social problems. Copyright © 2012 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  3. Implementation of critical care response team.

    PubMed

    Al Shimemeri, Abdullah

    2014-04-01

    Analyses of hospital deaths have indicated that a significant proportion of the reported deaths might have been prevented had the patients received intensive level care early enough. Over the past few decades the critical care response team has become an important means of preventing these deaths. As the proactive arm of intensive care delivery, the critical care response team places emphasis on early identification of signs of clinical deterioration, which then prompts the mobilization of intensive care brought right to the patient's bedside. However, the setting up of a critical care response team is a difficult undertaking involving different levels of cooperation between all service stakeholders, and a bringing together of professional expertise and experience in its operations. The implementation of a critical care response team often involves a high-level restructuring of a hospital's service orientation. In the present work, the various factors and different models to be considered in implementing a critical care response team are addressed.

  4. Systematic review of practice guideline dissemination and implementation strategies for healthcare teams and team-based practice.

    PubMed

    Medves, Jennifer; Godfrey, Christina; Turner, Carly; Paterson, Margo; Harrison, Margaret; MacKenzie, Lindsay; Durando, Paola

    2010-06-01

    To synthesise the literature relevant to guideline dissemination and implementation strategies for healthcare teams and team-based practice. A systematic approach utilising Joanna Briggs Institute methods was used. Two reviewers screened all articles, and where there was disagreement, a third reviewer determined inclusion. The initial search revealed 12,083 articles, of which 88 met the inclusion criteria. Ten dissemination and implementation strategies were identified, with distribution of educational materials the most common. Studies were assessed for patient or practitioner outcomes and changes in practice, knowledge and economic outcomes. A descriptive analysis revealed that multiple approaches using teams of healthcare providers were reported to have statistically significant results in knowledge, practice and/or outcomes for 72.7% of the studies. Team-based care using locally adapted practice guidelines can positively affect patient and provider outcomes. © 2010 The Authors. Journal Compilation © Blackwell Publishing Asia Pty Ltd.

  5. Towards implementing coordinated healthy lifestyle promotion in primary care: a mixed method study.

    PubMed

    Thomas, Kristin; Bendtsen, Preben; Krevers, Barbro

    2015-01-01

    Primary care is increasingly being encouraged to integrate healthy lifestyle promotion in routine care. However, implementation has been suboptimal. Coordinated care could facilitate lifestyle promotion practice, but more empirical knowledge is needed about the implementation process of coordinated care initiatives. This study aimed to evaluate the implementation of a coordinated healthy lifestyle promotion initiative in a primary care setting. A mixed method, convergent, parallel design was used. Three primary care centres took part in a two-year research project. Data collection methods included individual interviews, document data and questionnaires. The General Theory of Implementation was used as a framework in the analysis to integrate the data sources. Multi-disciplinary teams were implemented in the centres, although the role of the teams as a resource for coordinated lifestyle promotion was not fully embedded at the centres. Embedding of the teams was challenged by differences among staff, patients and team members in resources, commitment, social norms and roles. The study highlights the importance of identifying and engaging key stakeholders early in an implementation process. The findings showed how the development phase influenced the implementation and embedding processes, which adds aspects to the General Theory of Implementation.

  6. 45 CFR 153.350 - Risk adjustment data validation standards.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... implementation of any risk adjustment software and ensure proper validation of a statistically valid sample of... respect to implementation of risk adjustment software or as a result of data validation conducted pursuant... implementation of risk adjustment software or data validation. ...

  7. KEYNOTE 2 : Rebuilding the Tower of Babel - Better Communication with Standards

    DTIC Science & Technology

    2013-02-01

    and a member of the Object Management Group (OMG) SysML specification team. He has been developing multi-national complex systems for almost 35 years...critical systems development, virtual team management, systems development, and software development with UML, SysML and Architectural Frameworks

  8. Wireless Sensor Networks for Developmental and Flight Instrumentation

    NASA Technical Reports Server (NTRS)

    Alena, Richard; Figueroa, Fernando; Becker, Jeffrey; Foster, Mark; Wang, Ray; Gamudevelli, Suman; Studor, George

    2011-01-01

    Wireless sensor networks (WSN) based on the IEEE 802.15.4 Personal Area Network and ZigBee Pro 2007 standards are finding increasing use in home automation and smart energy markets providing a framework for interoperable software. The Wireless Connections in Space Project, funded by the NASA Engineering and Safety Center, is developing technology, metrics and requirements for next-generation spacecraft avionics incorporating wireless data transport. The team from Stennis Space Center and Mobitrum Corporation, working under a NASA SBIR grant, has developed techniques for embedding plug-and-play software into ZigBee WSN prototypes implementing the IEEE 1451 Transducer Electronic Datasheet (TEDS) standard. The TEDS provides meta-information regarding sensors such as serial number, calibration curve and operational status. Incorporation of TEDS into wireless sensors leads directly to building application level software that can recognize sensors at run-time, dynamically instantiating sensors as they are added or removed. The Ames Research Center team has been experimenting with this technology building demonstration prototypes for on-board health monitoring. Innovations in technology, software and process can lead to dramatic improvements for managing sensor systems applied to Developmental and Flight Instrumentation (DFI) aboard aerospace vehicles. A brief overview of the plug-and-play ZigBee WSN technology is presented along with specific targets for application within the aerospace DFI market. The software architecture for the sensor nodes incorporating the TEDS information is described along with the functions of the Network Capable Gateway processor which bridges 802.15.4 PAN to the TCP/IP network. Client application software connects to the Gateway and is used to display TEDS information and real-time sensor data values updated every few seconds, incorporating error detection and logging to help measure performance and reliability in relevant target environments. Test results from our prototype WSN running the Mobitrum software system are summarized and the implications to the scalability and reliability for DFI applications are discussed. Our demonstration system, incorporating sensors for life support system and structural health monitoring is described along with test results obtained by running the demonstration prototype in relevant environments such as the Wireless Habitat Testbed at Johnson Space Center in Houston. An operations concept for improved sensor process flow from design to flight test is outlined specific to the areas of Environmental Control and Life Support System performance characterization and structural health monitoring of human-rated spacecraft. This operations concept will be used to highlight the areas where WSN technology, particularly plug-and-play software based on IEEE 1451, can improve the current process, resulting in significant reductions in the technical effort, overall cost and schedule for providing DFI capability for future spacecraft.
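
    A hedged software analogue of the TEDS-driven plug-and-play behavior described above (simplified fields, not the IEEE 1451 binary format): sensors announce their datasheet at run time and the application instantiates them dynamically.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Teds:
        serial_number: str
        measurement: str     # e.g., "temperature"
        units: str           # e.g., "degC"
        calibration: tuple   # (slope, offset) of a linear calibration curve

    class SensorRegistry:
        """Instantiates sensors at run time as they announce their TEDS."""
        def __init__(self):
            self.sensors = {}

        def announce(self, teds):
            self.sensors[teds.serial_number] = teds   # node joined or refreshed

        def remove(self, serial_number):
            self.sensors.pop(serial_number, None)     # node left the network

        def engineering_value(self, serial_number, raw_counts):
            slope, offset = self.sensors[serial_number].calibration
            return slope * raw_counts + offset        # apply the TEDS calibration

    registry = SensorRegistry()
    registry.announce(Teds("SN-0042", "temperature", "degC", (0.05, -10.0)))
    print(registry.engineering_value("SN-0042", 612))  # 20.6 degC
    ```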

  9. Do learning collaboratives strengthen communication? A comparison of organizational team communication networks over time.

    PubMed

    Bunger, Alicia C; Lengnick-Hall, Rebecca

    Collaborative learning models were designed to support quality improvements, such as innovation implementation, by promoting communication within organizational teams. Yet the effect of collaborative learning approaches on organizational team communication during implementation is untested. The aim of this study was to explore change in communication patterns within teams from children's mental health organizations during a year-long learning collaborative focused on implementing a new treatment. We adopt a social network perspective to examine intraorganizational communication within each team and assess change in (a) the frequency of communication among team members, (b) communication across organizational hierarchies, and (c) the overall structure of team communication networks. A pretest-posttest design compared communication among 135 participants from 21 organizational teams at the start and end of a learning collaborative. At both time points, participants were asked to list the members of their team and rate the frequency of communication with each along a 7-point Likert scale. Several individual, pair-wise, and team-level communication network metrics were calculated and compared over time. At the individual level, participants reported communicating with more team members by the end of the learning collaborative. Cross-hierarchical communication did not change. At the team level, these changes manifested differently depending on team size. In large teams, communication frequency increased, and networks grew denser and slightly less centralized. In small teams, communication frequency declined, and networks grew sparser and more centralized. Results suggest that team communication patterns change minimally but evolve differently depending on size. Learning collaboratives may be more helpful for enhancing communication among larger teams; thus, managers might consider selecting and sending larger staff teams to learning collaboratives. This study highlights key future research directions that can disentangle the relationship between learning collaboratives and team networks.
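
    For readers unfamiliar with the network measures used in this study, density and degree centralization can be computed directly from a team's reported communication ties. A short sketch using the networkx library follows; the five-member team is hypothetical, not data from the study.

        import networkx as nx

        # Hypothetical five-member team; edges are reported communication ties.
        team = nx.Graph([("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E")])

        # Density: observed ties divided by possible ties, n*(n-1)/2 for n members.
        print(nx.density(team))  # 5 / 10 = 0.5

        # Freeman degree centralization: how strongly communication concentrates
        # on one member (1.0 for a star network, 0.0 for a perfectly even one).
        n = team.number_of_nodes()
        degrees = [d for _, d in team.degree()]
        centralization = sum(max(degrees) - d for d in degrees) / ((n - 1) * (n - 2))
        print(centralization)  # 5 / 12 ~ 0.42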

  10. Using Hourly Time-Outs and a Standardized Tool to Promote Team Communication, Medical Record Documentation, and Patient Satisfaction During Second-Stage Labor.

    PubMed

    Wood, Jessica; Stevenson, Eleanor

    2018-04-12

    During labor, effective communication and collaboration among the healthcare team is critical for patient safety; however, there is currently no standard for communication and documentation of the plan of care as agreed upon by healthcare team members and the woman in labor. The goal of this project was to increase consistency in communication and collaboration between clinicians and laboring women during second-stage labor. An hourly "time-out" meeting of all healthcare team members was initiated for all women during second-stage labor. A documentation tool was implemented to ensure regular and clear communication between the clinical team and laboring women. Data were collected via review of medical records for cases of second-stage labor lasting more than 2 hours (n = 21 in the pre-implementation group; n = 39 for 3 months post-implementation; and n = 468 patients for 2 years post-implementation). Surveys were conducted of the clinical team (n = 40) and patients (n = 28). Following implementation, documented agreement on the plan of care increased from 14.3% before the project to 82.1% 3 months after implementation and remained at 81.6% 2 years after implementation. All nurses who participated in the survey reported a clear understanding of how and when to complete necessary medical record documentation during second-stage labor. The providers viewed the project favorably. Most women (92.9%) reported satisfaction with their experience. This project enhanced collaborative communication between members of the clinical team and laboring women and improved patient satisfaction. The improvements were sustainable over a 2-year period.

  11. Value Driven Outcomes (VDO): a pragmatic, modular, and extensible software framework for understanding and improving health care costs and outcomes

    PubMed Central

    Kawamoto, Kensaku; Martin, Cary J; Williams, Kip; Tu, Ming-Chieh; Park, Charlton G; Hunter, Cheri; Staes, Catherine J; Bray, Bruce E; Deshmukh, Vikrant G; Holbrook, Reid A; Morris, Scott J; Fedderson, Matthew B; Sletta, Amy; Turnbull, James; Mulvihill, Sean J; Crabtree, Gordon L; Entwistle, David E; McKenna, Quinn L; Strong, Michael B; Pendleton, Robert C; Lee, Vivian S

    2015-01-01

    Objective: To develop expeditiously a pragmatic, modular, and extensible software framework for understanding and improving healthcare value (costs relative to outcomes). Materials and Methods: In 2012, a multidisciplinary team was assembled by the leadership of the University of Utah Health Sciences Center and charged with rapidly developing a pragmatic and actionable analytics framework for understanding and enhancing healthcare value. Based on an analysis of relevant prior work, a value analytics framework known as Value Driven Outcomes (VDO) was developed using an agile methodology. Evaluation consisted of measurement against project objectives, including implementation timeliness, system performance, completeness, accuracy, extensibility, adoption, satisfaction, and the ability to support value improvement. Results: A modular, extensible framework was developed to allocate clinical care costs to individual patient encounters. For example, labor costs in a hospital unit are allocated to patients based on the hours they spent in the unit; actual medication acquisition costs are allocated to patients based on utilization; and radiology costs are allocated based on the minutes required for study performance. Relevant process and outcome measures are also available. A visualization layer facilitates the identification of value improvement opportunities, such as high-volume, high-cost case types with high variability in costs across providers. Initial implementation was completed within 6 months, and all project objectives were fulfilled. The framework has been improved iteratively and is now a foundational tool for delivering high-value care. Conclusions: The framework described can be expeditiously implemented to provide a pragmatic, modular, and extensible approach to understanding and improving healthcare value. PMID:25324556
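
    The allocation logic in these examples is essentially proration of a cost pool over a utilization driver. A minimal sketch with hypothetical figures (not the VDO implementation or actual University of Utah data):

        # Allocate a hospital unit's labor cost pool to patient encounters
        # in proportion to the hours each patient spent in the unit.
        labor_cost_pool = 120_000.0  # hypothetical monthly unit labor cost, USD

        hours_in_unit = {"encounter_1": 30.0, "encounter_2": 50.0, "encounter_3": 20.0}
        total_hours = sum(hours_in_unit.values())

        allocated = {
            encounter: labor_cost_pool * hours / total_hours
            for encounter, hours in hours_in_unit.items()
        }
        print(allocated)
        # {'encounter_1': 36000.0, 'encounter_2': 60000.0, 'encounter_3': 24000.0}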

  12. Agile Software Teams: How They Engage with Systems Engineering on DoD Acquisition Programs

    DTIC Science & Technology

    2014-07-01

    under Contract No. FA8721-05-C-0003 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded...issues that would preclude or limit the use of Agile methods within the DoD" [Broadus 2013]. As operational tempos increase and programs fight to...environment in which it operates. This makes software different from other disciplines that have tolerances, generally resulting in software engineering

  13. The Medical Imaging Interaction Toolkit: challenges and advances : 10 years of open-source development.

    PubMed

    Nolden, Marco; Zelzer, Sascha; Seitel, Alexander; Wald, Diana; Müller, Michael; Franz, Alfred M; Maleike, Daniel; Fangerau, Markus; Baumhauer, Matthias; Maier-Hein, Lena; Maier-Hein, Klaus H; Meinzer, Hans-Peter; Wolf, Ivo

    2013-07-01

    The Medical Imaging Interaction Toolkit (MITK) has been available as open-source software for almost 10 years now. In this period the requirements of software systems in the medical image processing domain have become increasingly complex. The aim of this paper is to show how MITK evolved into a software system that is able to cover all steps of a clinical workflow including data retrieval, image analysis, diagnosis, treatment planning, intervention support, and treatment control. MITK provides modularization and extensibility on different levels. In addition to the original toolkit, a module system, micro services for small, system-wide features, a service-oriented architecture based on the Open Services Gateway initiative (OSGi) standard, and an extensible and configurable application framework allow MITK to be used, extended and deployed as needed. A refined software process was implemented to deliver high-quality software, ease the fulfillment of regulatory requirements, and enable teamwork in mixed-competence teams. MITK has been applied by a worldwide community and integrated into a variety of solutions, either at the toolkit level or as an application framework with custom extensions. The MITK Workbench has been released as a highly extensible and customizable end-user application. Optional support for tool tracking, image-guided therapy, diffusion imaging as well as various external packages (e.g. CTK, DCMTK, OpenCV, SOFA, Python) is available. MITK has also been used in several FDA/CE-certified applications, which demonstrates the high-quality software and rigorous development process. MITK provides a versatile platform with a high degree of modularization and interoperability and is well suited to meet the challenging tasks of today's and tomorrow's clinically motivated research.
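
    The micro-service mechanism mentioned above, in which small features are registered system-wide and looked up by interface rather than linked directly, can be sketched generically. The Python below is illustrative only; MITK itself implements this in C++ on OSGi-style service interfaces.

        from typing import Dict, List, Type, TypeVar

        T = TypeVar("T")

        class ServiceRegistry:
            """System-wide lookup: modules register implementations of an interface."""
            def __init__(self) -> None:
                self._services: Dict[type, List[object]] = {}

            def register(self, interface: Type[T], impl: T) -> None:
                self._services.setdefault(interface, []).append(impl)

            def get(self, interface: Type[T]) -> T:
                return self._services[interface][0]  # first registered implementation

        class ImageReader:  # the interface consumers depend on
            def read(self, path: str) -> None: ...

        class DicomReader(ImageReader):
            def read(self, path: str) -> None:
                print(f"reading DICOM series from {path}")

        registry = ServiceRegistry()
        registry.register(ImageReader, DicomReader())     # done by a module at load time
        registry.get(ImageReader).read("/data/ct_study")  # consumer never names DicomReader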

  14. Ontario's emergency department process improvement program: the experience of implementation.

    PubMed

    Rotteau, Leahora; Webster, Fiona; Salkeld, Erin; Hellings, Chelsea; Guttmann, Astrid; Vermeulen, Marian J; Bell, Robert S; Zwarenstein, Merrick; Rowe, Brian H; Nigam, Amit; Schull, Michael J

    2015-06-01

    In recent years, Lean manufacturing principles have been applied to health care quality improvement efforts to improve wait times. In Ontario, an emergency department (ED) process improvement program based on Lean principles was introduced by the Ministry of Health and Long-Term Care as part of a strategy to reduce ED length of stay (LOS) and to improve patient flow. This article aims to describe the hospital-based teams' experiences during the ED process improvement program implementation and the teams' perceptions of the key factors that influenced the program's success or failure. A qualitative evaluation was conducted based on semistructured interviews with hospital implementation team members, such as team leads, medical leads, and executive sponsors, at 10 purposively selected hospitals in Ontario, Canada. Sites were selected based, in part, on their changes in median ED LOS following the implementation period. A thematic framework approach was used for the interviews, and a standard thematic coding framework was developed. Twenty-four interviews were coded and analyzed. The results are organized according to participants' experience and are grouped into four themes that were identified as significantly affecting the implementation experience: local contextual factors, relationship between improvement team and support players, staff engagement, and success and sustainability. The results demonstrate the importance of the context of implementation, establishing strong relationships and communication strategies, and preparing for implementation and sustainability prior to the start of the project. Several key factors were identified as important to the success of the program, such as preparing for implementation, ensuring strong executive support, creation of implementation teams based on the tasks and outcomes of the initiative, and using multiple communication strategies throughout the implementation process. Explicit incorporation of these factors into the development and implementation of future similar interventions in health care settings could be useful. © 2015 by the Society for Academic Emergency Medicine.

  15. Application of parallelized software architecture to an autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Shakya, Rahul; Wright, Adam; Shin, Young Ho; Momin, Orko; Petkovsek, Steven; Wortman, Paul; Gautam, Prasanna; Norton, Adam

    2011-01-01

    This paper presents improvements made to Q, an autonomous ground vehicle designed to participate in the Intelligent Ground Vehicle Competition (IGVC). For the 2010 IGVC, Q was upgraded with a new parallelized software architecture and a new vision processor. Improvements were made to the power system, reducing the number of batteries required for operation from six to one. In previous years, a single state machine was used to execute the bulk of processing activities, including sensor interfacing, data processing, path planning, navigation algorithms and motor control. This inefficient approach led to poor software performance and made the system difficult to maintain and modify. For IGVC 2010, the team implemented a modular parallel architecture using the National Instruments (NI) LabVIEW programming language. The new architecture divides all the necessary tasks - motor control, navigation, sensor data collection, etc. - into well-organized components that execute in parallel, providing considerable flexibility and facilitating efficient use of processing power. Computer vision is used to detect white lines on the ground and determine their location relative to the robot. With the new vision processor and some optimization of the image processing algorithm used last year, two frames can be acquired and processed in 70 ms. With all these improvements, Q placed 2nd in the autonomous challenge.
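
    The LabVIEW code itself is not reproduced here, but the pattern described, independent components communicating through queues rather than one monolithic state machine, can be sketched briefly. The Python below is an illustrative analogue, not the team's competition code.

        import queue
        import threading
        import time

        sensor_q: "queue.Queue[float | None]" = queue.Queue()

        def sensor_task() -> None:
            """Collects sensor readings independently of the consumers."""
            for reading in (0.1, 0.4, 0.9):
                sensor_q.put(reading)
                time.sleep(0.01)
            sensor_q.put(None)  # sentinel: no more data

        def navigation_task() -> None:
            """Consumes sensor data in parallel with acquisition."""
            while (reading := sensor_q.get()) is not None:
                print(f"steering correction for reading {reading}")

        threads = [threading.Thread(target=sensor_task),
                   threading.Thread(target=navigation_task)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()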

  16. The SeaDAS Processing and Analysis System: SeaWiFS, MODIS, and Beyond

    NASA Astrophysics Data System (ADS)

    MacDonald, M. D.; Ruebens, M.; Wang, L.; Franz, B. A.

    2005-12-01

    The SeaWiFS Data Analysis System (SeaDAS) is a comprehensive software package for the processing, display, and analysis of ocean data from a variety of satellite sensors. Continuous development and user support by programmers and scientists for more than a decade have helped to make SeaDAS the most widely used software package in the world for ocean color applications, with a growing base of users from the land and sea surface temperature communities. Full processing support for past (CZCS, OCTS, MOS) and present (SeaWiFS, MODIS) sensors, and anticipated support for future missions such as NPP/VIIRS, enables end users to reproduce the standard ocean archive product suite distributed by NASA's Ocean Biology Processing Group (OBPG), as well as a variety of evaluation and intermediate ocean, land, and atmospheric products. Availability of the processing algorithm source codes and a software build environment also provides users with the tools to implement custom algorithms. Recent SeaDAS enhancements include synchronization of MODIS processing with the latest code and calibration updates from the MODIS Calibration Support Team (MCST), support for all levels of MODIS processing including Direct Broadcast, a port to the Macintosh OS X operating system, release of the display/analysis-only SeaDAS-Lite, and an extremely active web-based user support forum.

  17. Spaceport Command and Control System - Support Software Development

    NASA Technical Reports Server (NTRS)

    Tremblay, Shayne

    2016-01-01

    The Information Architecture Support (IAS) Team, the component of the Spaceport Command and Control System (SCCS) that is in charge of all the pre-runtime data, needed several reporting features added to its internal web application, Information Architecture (IA). Development of these reports is crucial for the speed and productivity of the development team, as they are needed to quickly and efficiently make specific and complicated data requests against the massive IA database. These reports had been put on the back burner as other IA development was prioritized over them, but the need for them led to the creation of internships to fill the gap. Creating these reports required learning Ruby on Rails development along with related web technologies, and the reports will continue to serve IAS and other support software teams and their IA data needs.

  18. Digital transplantation pathology: combining whole slide imaging, multiplex staining and automated image analysis.

    PubMed

    Isse, K; Lesniak, A; Grama, K; Roysam, B; Minervini, M I; Demetris, A J

    2012-01-01

    Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. "-Omics" analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: (a) spatial-temporal relationships; (b) rare events/cells; (c) complex structural context; and (d) integration into a "systems" model. Nevertheless, except for immunostaining, no transformative advancements have "modernized" routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology-global "-omic" analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. ©Copyright 2011 The American Society of Transplantation and the American Society of Transplant Surgeons.

  19. Effectiveness of a multi-level implementation strategy for ASD interventions: study protocol for two linked cluster randomized trials.

    PubMed

    Brookman-Frazee, Lauren; Stahmer, Aubyn C

    2018-05-09

    The Centers for Disease Control (2018) estimates that 1 in 59 children has autism spectrum disorder, and the annual cost of ASD in the U.S. is estimated to be $236 billion. Evidence-based interventions have been developed and demonstrate effectiveness in improving child outcomes. However, research on generalizable methods to scale up these practices in the multiple service systems caring for these children has been limited and is critical to meet this growing public health need. This project includes two coordinated studies testing the effectiveness of the Translating Evidence-based Interventions (EBI) for ASD: Multi-Level Implementation Strategy (TEAMS) model. TEAMS focuses on improving implementation leadership, organizational climate, and provider attitudes and motivation in order to improve two key implementation outcomes - provider training completion and intervention fidelity - and subsequent child outcomes. The TEAMS Leadership Institute applies implementation leadership strategies, and TEAMS Individualized Provider Strategies for training applies motivational interviewing strategies, to facilitate provider and organizational behavior change. A cluster randomized hybrid type 3 implementation/effectiveness trial with a dismantling design will be used to understand the effectiveness of TEAMS and the mechanisms of change across settings and participants. Study #1 will test the TEAMS model with AIM HI (An Individualized Mental Health Intervention for ASD) in publicly funded mental health services. Study #2 will test TEAMS with CPRT (Classroom Pivotal Response Teaching) in education settings. Thirty-seven mental health programs and 37 school districts will be randomized, stratified by county and study, to one of four groups (Standard Provider Training Only, Standard Provider Training + Leader Training, Enhanced Provider Training, Enhanced Provider Training + Leader Training) to test the effectiveness of combining standard, EBI-specific training with the two TEAMS modules individually and together on multiple implementation outcomes. Implementation outcomes including provider training completion, fidelity (coded by observers blind to group assignment), and child behavior change will be examined for 295 mental health providers, 295 teachers, and 590 children. This implementation intervention has the potential to increase the quality of care for ASD in publicly funded settings by improving the effectiveness of intervention implementation. The process and modules will be generalizable to multiple service systems, providers, and interventions, providing broad impact in community services. This study is registered with Clinicaltrials.gov (NCT03380078). Registered 20 December 2017, retrospectively registered.

  20. The Elements of an Effective Software Development Plan - Software Development Process Guidebook

    DTIC Science & Technology

    2011-11-11

    standards and practices required for all XMPL software development. This SDP implements the <corporate> Standard Software Process (SSP), as tailored...Developing and integrating reusable software products • Approach to managing COTS/Reuse software implementation • COTS/Reuse software selection...final selection and submit to change board for approval MAINTENANCE Monitor current products for obsolescence or end of support Track new

  1. Issues in NASA Program and Project Management: Focus on Project Planning and Scheduling

    NASA Technical Reports Server (NTRS)

    Hoffman, Edward J. (Editor); Lawbaugh, William M. (Editor)

    1997-01-01

    Topics addressed include: Planning and scheduling training for working project teams at NASA, overview of project planning and scheduling workshops, project planning at NASA, new approaches to systems engineering, software reliability assessment, and software reuse in wind tunnel control systems.

  2. People, Process and Technology: Strategies for Assuring Sustainable Implementation of EMRs at Public-Sector Health Facilities in Kenya

    PubMed Central

    Kang'a, Samuel G.; Muthee, Veronica M.; Liku, Nzisa; Too, Diana; Puttkammer, Nancy

    2016-01-01

    The Ministry of Health (MoH) rollout of electronic medical record systems (EMRs) has continuously been embraced across health facilities in Kenya since 2012. This has been driven by a government-led process, supported by PEPFAR, that recommended standardized systems for facilities. Various strategies were deployed to assure meaningful and sustainable EMRs implementation: sensitization of leadership; user training; formation of health facility-level multi-disciplinary teams; formation of county-level Technical Working Groups; data migration; routine data quality assessments; point-of-care adoption; successive release of software upgrades; and power provision. Successes recorded include goodwill and leadership from the county management (22 counties), growth in the number of EMR-trained users (2561 health care workers), collaboration in, among other things, data migration (90 health facilities completed), and the establishment of county TWGs (13 TWGs). Sustained demand for EMRs across facilities is possible through county TWG oversight, timely resolution of users' issues, and provision of reliable power. PMID:28269864

  3. A modeling paradigm for interdisciplinary water resources modeling: Simple Script Wrappers (SSW)

    NASA Astrophysics Data System (ADS)

    Steward, David R.; Bulatewicz, Tom; Aistrup, Joseph A.; Andresen, Daniel; Bernard, Eric A.; Kulcsar, Laszlo; Peterson, Jeffrey M.; Staggenborg, Scott A.; Welch, Stephen M.

    2014-05-01

    Holistic understanding of a water resources system requires tools capable of model integration. This team has developed an adaptation of the OpenMI (Open Modelling Interface) that allows easy interaction with the data passed between models. Capabilities have been developed that allow programs written in common languages such as matlab, python and scilab to share their data with other programs and accept other programs' data. We call this interface the Simple Script Wrapper (SSW). An implementation of SSW is shown that integrates groundwater, economic, and agricultural models in the High Plains region of Kansas. Output from these models illustrates the interdisciplinary discovery facilitated through the use of SSW-implemented models. Reference: Bulatewicz, T., A. Allen, J.M. Peterson, S. Staggenborg, S.M. Welch, and D.R. Steward, The Simple Script Wrapper for OpenMI: Enabling interdisciplinary modeling studies, Environmental Modelling & Software, 39, 283-294, 2013. http://dx.doi.org/10.1016/j.envsoft.2012.07.006 http://code.google.com/p/simple-script-wrapper/
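
    The coupling pattern behind such wrappers, each model exposing initialize and update entry points so a driver can pass state between models at every time step, can be sketched as follows. This is an illustrative analogue in Python, not the SSW or OpenMI API, and the model equations are invented for the example.

        class GroundwaterModel:
            def initialize(self) -> None:
                self.head = 100.0  # water-table elevation, arbitrary units

            def update(self, pumping: float) -> float:
                self.head -= 0.01 * pumping  # drawdown driven by pumping
                return self.head

        class EconomicModel:
            def initialize(self) -> None:
                self.pumping = 50.0  # initial pumping rate, arbitrary units

            def update(self, head: float) -> float:
                # Pumping declines as the water table falls and lifting costs rise.
                self.pumping = max(0.0, self.pumping * head / 100.0)
                return self.pumping

        gw, econ = GroundwaterModel(), EconomicModel()
        gw.initialize()
        econ.initialize()
        pumping = econ.pumping
        for year in range(3):
            head = gw.update(pumping)    # hydrology responds to pumping
            pumping = econ.update(head)  # economics responds to hydrology
            print(year, round(head, 2), round(pumping, 2))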

  4. People, Process and Technology: Strategies for Assuring Sustainable Implementation of EMRs at Public-Sector Health Facilities in Kenya.

    PubMed

    Kang'a, Samuel G; Muthee, Veronica M; Liku, Nzisa; Too, Diana; Puttkammer, Nancy

    2016-01-01

    The Ministry of Health (MoH) rollout of electronic medical record systems (EMRs) has continuously been embraced across health facilities in Kenya since 2012. This has been driven by a government-led process, supported by PEPFAR, that recommended standardized systems for facilities. Various strategies were deployed to assure meaningful and sustainable EMRs implementation: sensitization of leadership; user training; formation of health facility-level multi-disciplinary teams; formation of county-level Technical Working Groups; data migration; routine data quality assessments; point-of-care adoption; successive release of software upgrades; and power provision. Successes recorded include goodwill and leadership from the county management (22 counties), growth in the number of EMR-trained users (2561 health care workers), collaboration in, among other things, data migration (90 health facilities completed), and the establishment of county TWGs (13 TWGs). Sustained demand for EMRs across facilities is possible through county TWG oversight, timely resolution of users' issues, and provision of reliable power.

  5. The Evolution of On-Board Emergency Training for the International Space Station Crew

    NASA Technical Reports Server (NTRS)

    LaBuff, Skyler

    2015-01-01

    The crew of the International Space Station (ISS) receives extensive ground-training in order to safely and effectively respond to any potential emergency event while on-orbit, but few people realize that their training is not concluded when they launch into space. The evolution of the emergency On-Board Training events (OBTs) has recently moved from paper "scripts" to an intranet-based software simulation that allows the crew, as well as the flight control teams in Mission Control Centers across the world, to share in an improved and more realistic training event. This emergency OBT simulator ensures that the participants experience the training event as it unfolds, completely unaware of the type, location, or severity of the simulated emergency until the scenario begins. The crew interfaces with the simulation software via iPads that they keep with them as they translate through the ISS modules, receiving prompts and information as they proceed through the response. Personnel in the control centers bring up the simulation via an intranet browser at their console workstations, and can view additional telemetry signatures in simulated ground displays in order to assist the crew and communicate vital information to them as applicable. The Chief Training Officers and emergency instructors set the simulation in motion, choosing the type of emergency (rapid depressurization, fire, or toxic atmosphere) and specific initial conditions to emphasize the desired training objectives. Project development, testing, and implementation were a collaborative effort between ISS emergency instructors, Chief Training Officers, Flight Directors, and the Crew Office using commercial off-the-shelf (COTS) hardware along with simulation software created in-house. Due to the success of the Emergency OBT simulator, the already-developed software has been leveraged and repurposed to develop a new emulator used during fire response ground-training to deliver data that the crew receives from the handheld Compound Specific Analyzer for Combustion Products (CSA-CP). This CSA-CP emulator makes use of a portion of the codebase from the Emergency OBT simulator dealing with atmospheric contamination during fire scenarios, and feeds various data signatures to the crew via an iPod Touch with a flight-like CSA-CP display. These innovative simulations, which make use of COTS hardware with custom in-house software, have yielded drastic improvements to emergency training effectiveness and risk reduction for ISS crew and flight control teams during on-orbit and ground training events.

  6. Distributed teaming on JPL projects

    NASA Technical Reports Server (NTRS)

    Baroff, L. E.

    2002-01-01

    This paper addresses structures, actions and technologies that contribute to real team development of a distributed team, and the leadership skills and tools that are used to implement that team development.

  7. Circles of Care: Implementation and Evaluation of Support Teams for African Americans with Cancer

    ERIC Educational Resources Information Center

    Hanson, Laura C.; Green, Melissa A.; Hayes, Michelle; Diehl, Sandra J.; Warnock, Steven; Corbie-Smith, Giselle; Lin, Feng-Chang; Earp, Jo Anne

    2014-01-01

    Background: Community-based peer support may help meet the practical, emotional, and spiritual needs of African Americans with advanced cancer. Support teams are a unique model of peer support for persons facing serious illness, but research is rare. This study sought to (a) implement new volunteer support teams for African Americans with advanced…

  8. Career Development via Counselor/Teacher Teams; Guide for Implementation.

    ERIC Educational Resources Information Center

    Royal Oak City School District, MI.

    The career development modules of the implementation guide, designed by counselor/teacher teams in Royal Oak, Michigan for junior high students, are intended to be used as a working copy for counselor/teacher teams. Career education concepts of self-awareness, assessment, and decision-making are correlated with the broad questions of: Who am I?…

  9. Teachers' Perceptions of the Effectiveness of Using Arabic Language Teaching Software in Omani Basic Education

    ERIC Educational Resources Information Center

    Al-Busaidi, Fatma; Al Hashmi, Abdullah; Al Musawi, Ali; Kazem, Ali

    2016-01-01

    This paper is part of a strategic research project that aimed to assess the effectiveness of the design and use of new software for Arabic language learning (ALL). However, the focus of this paper is to understand Arabic teachers' perceptions of the effectiveness of the software that was designed purposely by the project's team to facilitate ALL…

  10. Software Technology Transfer and Export Control.

    DTIC Science & Technology

    1981-01-01

    development projects of their own. By analogy, a Soviet team might be able to repeat the learning experience of the ADEPT-50 junior staff...recommendations concerning product form and further study. The posture of this group has been to consider software technology and its transfer as a process...and views of the Software Subgroup of Technical Working Group 7 (Computers) of the Critical Technologies Project. The work reported

  11. Computer Technology and Its Impact on Recreation and Sport Programs.

    ERIC Educational Resources Information Center

    Ross, Craig M.

    This paper describes several types of computer programs that can be useful to sports and recreation programs. Computerized tournament scheduling software is helpful to recreation and parks staff working with tournaments of 50 teams/individuals or more. Important features include team capacity, league formation, scheduling conflicts, scheduling…

  12. An Analysis of Programs and Implementation of Professional Learning Communities in the Red Clay Consolidated School District with Recommendations for Future Implementation

    ERIC Educational Resources Information Center

    Goodwin, Kenneth L., Jr.

    2012-01-01

    During the 2010-2011 school year, schools throughout the Red Clay Consolidated School District were expected to implement Professional Learning Communities (PLCs); however, little to no guidance was provided to school-level administrators and teacher teams. Not surprisingly, many schools implemented team meetings that were not aligned with…

  13. Implementing Role-Changing Versus Time-Changing Innovations in Health Care: Differences in Helpfulness of Staff Improvement Teams, Management, and Network for Learning.

    PubMed

    Nembhard, Ingrid M; Morrow, Christopher T; Bradley, Elizabeth H

    2015-12-01

    Health care organizations often fail in their effort to implement care-improving innovations. This article differentiates role-changing innovations, altering what workers do, from time-changing innovations, altering when tasks are performed or for how long. We examine our hypothesis that the degree to which access to groups that can alter organizational learning--staff, management, and external network--facilitates implementation depends on innovation type. Our longitudinal study using ordinal logistic regression and survey data on 517 hospitals' implementation of evidence-based practices for treating heart attack confirmed our thesis for factors granting access to each group: improvement team's representativeness (of affected staff), senior management engagement, and network membership. Although team representativeness and network membership were positively associated with implementing role-changing practices, senior management engagement was not. In contrast, senior management engagement was positively associated with implementing time-changing practices, whereas team representativeness was not, and network membership was not unless there was limited management engagement. These findings advance implementation science by explaining mixed results across past studies: Nature of change for workers alters potential facilitators' effects on implementation. © The Author(s) 2015.

  14. Protecting Against Faults in JPL Spacecraft

    NASA Technical Reports Server (NTRS)

    Morgan, Paula

    2007-01-01

    A paper discusses techniques for protecting against faults in spacecraft designed and operated by NASA's Jet Propulsion Laboratory (JPL). The paper addresses, more specifically, fault-protection requirements and techniques common to most JPL spacecraft (in contradistinction to unique, mission-specific techniques), standard practices in the implementation of these techniques, and fault-protection software architectures. Common requirements include those to protect onboard command, data-processing, and control computers; protect against loss of Earth/spacecraft radio communication; maintain safe temperatures; and recover from power overloads. The paper describes fault-protection techniques as part of a fault-management strategy that also includes functional redundancy, redundant hardware, and autonomous monitoring of (1) the operational and health statuses of spacecraft components, (2) temperatures inside and outside the spacecraft, and (3) allocation of power. The strategy also provides for preprogrammed automated responses to anomalous conditions. In addition, the software running in almost every JPL spacecraft incorporates a general-purpose "Safe Mode" response algorithm that configures the spacecraft in a lower-power state that is safe and predictable, thereby facilitating diagnosis of more complex faults by a team of human experts on Earth.
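
    A minimal sketch of the "Safe Mode" idea, where an out-of-limits reading triggers a transition to a predictable low-power configuration pending ground diagnosis, follows; the thresholds and state names are illustrative assumptions, not JPL flight software.

        from enum import Enum, auto

        class Mode(Enum):
            NOMINAL = auto()
            SAFE = auto()

        class Spacecraft:
            def __init__(self) -> None:
                self.mode = Mode.NOMINAL
                self.payload_on = True

            def monitor(self, bus_power_w: float, temp_c: float) -> None:
                # Autonomous monitoring: any out-of-limits reading triggers safing.
                if bus_power_w > 900.0 or not -20.0 <= temp_c <= 50.0:
                    self.enter_safe_mode()

            def enter_safe_mode(self) -> None:
                # Configure a low-power, predictable state: shed the payload
                # and wait for diagnosis and commands from the ground team.
                self.mode = Mode.SAFE
                self.payload_on = False
                print("SAFE MODE: payload off, awaiting ground diagnosis")

        sc = Spacecraft()
        sc.monitor(bus_power_w=950.0, temp_c=25.0)  # power overload -> safing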

  15. Reconfigurable, Intelligently-Adaptive, Communication System, an SDR Platform

    NASA Technical Reports Server (NTRS)

    Roche, Rigoberto

    2016-01-01

    The Space Telecommunications Radio System (STRS) provides a common, consistent framework to abstract the application software from the radio platform hardware. STRS aims to reduce the cost and risk of using complex, configurable and reprogrammable radio systems across NASA missions. The Glenn Research Center (GRC) team made a software-defined radio (SDR) platform STRS compliant by adding an STRS operating environment and a field programmable gate array (FPGA) wrapper, capable of implementing each of the platform's interfaces, as well as a test waveform to exercise those interfaces. This effort serves to provide a framework for waveform development on an STRS-compliant platform to support future space communication systems for advanced exploration missions. Validated STRS-compliant applications provide tested code with extensive documentation to potentially reduce risk, cost and effort in the development of space-deployable SDRs. This paper discusses the advantages of STRS, the integration of STRS onto a Reconfigurable, Intelligently-Adaptive, Communication System (RIACS) SDR platform, the sample waveform, and the wrapper development efforts. The paper emphasizes the infusion of the STRS architecture onto the RIACS platform for potential use in next-generation SDRs for advanced exploration missions.
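
    The central idea, abstracting the waveform application from the radio hardware behind a standard operating-environment interface, can be illustrated with a short sketch. The Python below is illustrative only and does not follow the actual STRS API; RiacsPlatform is simply a stand-in name.

        from abc import ABC, abstractmethod

        class RadioPlatform(ABC):
            """Platform-facing interface; the waveform never touches hardware."""
            @abstractmethod
            def set_frequency(self, hz: float) -> None: ...

            @abstractmethod
            def transmit(self, samples: bytes) -> None: ...

        class RiacsPlatform(RadioPlatform):
            def set_frequency(self, hz: float) -> None:
                print(f"FPGA wrapper: tuning to {hz / 1e6:.1f} MHz")

            def transmit(self, samples: bytes) -> None:
                print(f"FPGA wrapper: sending {len(samples)} bytes")

        class TestWaveform:
            """Portable application: depends only on the abstract interface."""
            def __init__(self, platform: RadioPlatform) -> None:
                self.platform = platform

            def run(self) -> None:
                self.platform.set_frequency(2.4e9)
                self.platform.transmit(b"\x01\x02\x03")

        # Swapping radio platforms requires no change to the waveform code.
        TestWaveform(RiacsPlatform()).run()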

  16. Agenda 21 goes electronic.

    PubMed

    Carter, D

    1996-01-01

    The Canada Center for Remote Sensing, in collaboration with the International Development Research Center, is developing an electronic atlas of Agenda 21, the Earth Summit action plan. This initiative promises to ease access for researchers and practitioners who implement the Agenda 21 action plan, which in its pilot study will focus on biological diversity. Known as the Biodiversity Volume of the Electronic Atlas of Agenda 21 (ELADA 21), this computer software technology will contain information and data on biodiversity, genetics, species, ecosystems, and ecosystem services. Specifically, it includes several country studies and documentation, as well as interactive scenarios linking biodiversity to socioeconomic issues. ELADA 21 will empower countries and agencies to report on and better manage biodiversity and related information. The atlas can be used to develop and test various scenarios and to exchange information within the South and with industrialized countries. At present, ELADA 21 has generated interest and is becoming more widely available. The challenge confronting the project team, however, is to find the atlas a permanent home, a country or agency willing to assume responsibility for maintaining, upgrading, and updating the software.

  17. The NASA Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Gottlich, Gretchen L.; Bianco, David J.; Paulson, Sharon S.; Binkley, Robert L.; Kellogg, Yvonne D.; Beaumont, Chris J.; Schmunk, Robert B.; Kurtz, Michael J.; Accomazzi, Alberto

    1995-01-01

    The National Aeronautics and Space Act of 1958 established NASA and charged it to "provide for the widest practicable and appropriate dissemination of information concerning its activities and the results thereof." The search for innovative methods to distribute NASA's information led a grass-roots team to create the NASA Technical Report Server (NTRS), which uses the World Wide Web and other popular Internet-based information systems as search engines. The NTRS is an inter-center effort which provides uniform access to various distributed publication servers residing on the Internet. Users have immediate desktop access to technical publications from NASA centers and institutes. The NTRS comprises several units, some constructed especially for inclusion in NTRS, and others that are existing NASA publication services that NTRS reuses. This paper presents the NTRS architecture, usage metrics, and the lessons learned while implementing and maintaining the service. The NTRS is largely constructed with freely available software running on existing hardware, and the resulting additional exposure for the body of literature it contains ensures that NASA's institutional knowledge base will continue to receive the widest practicable and appropriate dissemination.

  18. Brahms Mobile Agents: Architecture and Field Tests

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2002-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, rover/All-Terrain Vehicle (ATV), robotic assistant, other personnel in a local habitat, and a remote mission support team (with time delay). Software processes, called agents, implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system (e.g., "return here later" and "bring this back to the habitat"). This combination of agents, rover, and model-based spoken dialogue interface constitutes a personal assistant. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a run-time system.
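
    The coordination described, software agents interpreting and relaying data between people and system components, can be sketched generically. The Python below is illustrative only, not the Brahms language, and the agent and topic names are invented.

        from typing import Callable, Dict, List

        class Agent:
            """Subscribes to topics and reacts to messages routed its way."""
            def __init__(self, name: str) -> None:
                self.name = name
                self.handlers: Dict[str, Callable[[dict], None]] = {}

            def on(self, topic: str, handler: Callable[[dict], None]) -> None:
                self.handlers[topic] = handler

            def receive(self, topic: str, msg: dict) -> None:
                if topic in self.handlers:
                    self.handlers[topic](msg)

        class Bus:
            """Routes messages between agents running on different platforms."""
            def __init__(self) -> None:
                self.agents: List[Agent] = []

            def publish(self, topic: str, msg: dict) -> None:
                for agent in self.agents:
                    agent.receive(topic, msg)

        bus = Bus()
        astronaut, rover = Agent("astronaut_assistant"), Agent("rover_agent")
        bus.agents += [astronaut, rover]
        rover.on("voice_command", lambda m: print(f"rover: driving to {m['place']}"))
        bus.publish("voice_command", {"place": "waypoint alpha"})  # from speech interface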

  19. Implementing an electronic medication overview in Belgium.

    PubMed

    Storms, Hannelore; Marquet, Kristel; Nelissen, Katherine; Hulshagen, Leen; Lenie, Jan; Remmen, Roy; Claes, Neree

    2014-12-16

    An accurate medication overview is essential to reduce medication errors. Therefore, it is important to keep the medication overview up-to-date and to exchange healthcare information between healthcare professionals and patients. Digitally shared information yields possibilities to improve communication. However, implementing a digitally shared medication overview is challenging. This article describes the development process of a secured, electronic platform designed for exchanging medication information, as executed in a pilot study in Belgium called "Vitalink". The goal of "Vitalink" is to improve the exchange of medication information between professionals working in healthcare and patients in order to achieve more efficient cooperation and better quality of care. Healthcare professionals from primary and secondary health care and patients of four Belgian regions participated in the project. In each region, project groups coordinated implementation and reported back to the steering committee supervising the pilot study. The electronic medication overview was developed based on consensus in the project groups. The steering committee agreed to establish secured and authorized access through the use of electronic identity documents (eID) and a secured eHealth platform, conforming to prior governmental regulations regarding the privacy and security of healthcare information. A successful implementation of an electronic medication overview strongly depends on the accessibility and usability of the tool for healthcare professionals. The coordinating teams of the project groups concluded, based on their own observations and on problems reported to them, that secured and quick access to medical data needed to be pursued. According to their observations, the identification process using the eHealth platform, crucial to ensure secured data, was very time consuming. Secondly, software packages should meet the needs of their users and thus be adapted to the daily activities of healthcare professionals. Moreover, software should be easy to install and run properly. The project would have benefited from a cost analysis executed by the national bodies prior to implementation.

  20. State of the Practice of Intrusion Detection Technologies

    DTIC Science & Technology

    2000-01-01

    security incident response teams) - the role of IDS in threat management, such as defining alarm severity, monitoring, alerting, and policy-based...attacks in an effort to sneak under the radar of security specialists and intrusion detection software, a U.S. Navy network security team said today...to get the smoking gun," said Stephen Northcutt, head of the Shadow intrusion detection team at the Naval Surface Warfare Center. "To know what's

  1. Library Operations Policies and Procedures, Volume 2. Central Archive for Reusable Defense Software (CARDS)

    DTIC Science & Technology

    1994-02-28

    improvements. Release Manager The Release Manager provides franchisees with media copies of existing libraries, as needed. Security...implementors, and potential library franchisees. Security Team The Security Team assists the Security Officer with security analysis. Team members are...and Franchisees. A Potential User is an individual who requests a Library Account. A User Recruit has been sent a CARDS Library Account Registration

  2. Design study of Software-Implemented Fault-Tolerance (SIFT) computer

    NASA Technical Reports Server (NTRS)

    Wensley, J. H.; Goldberg, J.; Green, M. W.; Kutz, W. H.; Levitt, K. N.; Mills, M. E.; Shostak, R. E.; Whiting-Okeefe, P. M.; Zeidler, H. M.

    1982-01-01

    The design of a software-implemented fault-tolerant (SIFT) computer for commercial aviation is reported. A SIFT design concept is presented, and alternate strategies for physical implementation are considered. Hardware and software design correctness is addressed, and system modeling and effectiveness evaluation are considered from a fault-tolerance point of view.

  3. Inter-professional in-situ simulated team and resuscitation training for patient safety: Description and impact of a programmatic approach.

    PubMed

    Zimmermann, Katja; Holzinger, Iris Bachmann; Ganassi, Lorena; Esslinger, Peter; Pilgrim, Sina; Allen, Meredith; Burmester, Margarita; Stocker, Martin

    2015-10-29

    Inter-professional teamwork is key for patient safety, and team training is an effective strategy to improve patient outcome. In-situ simulation is a relatively new strategy with emerging efficacy, but best practices for its design, delivery, and implementation have yet to be evaluated. Our aim is to describe and evaluate the implementation of an inter-professional in-situ simulated team and resuscitation training in a teaching hospital with a programmatic approach. We designed and implemented a team and resuscitation training program according to Kern's six-step approach for curriculum development. General and specific needs assessments were conducted as independent cross-sectional surveys. Teamwork, technical skills, and detection of latent safety threats were defined as specific objectives. Inter-professional in-situ simulation was used as the educational strategy. The training was embedded within the workdays of participants and implemented in our highest-acuity wards (emergency department, intensive care unit, intermediate care unit). Self-perceived impact and self-efficacy were sampled with an anonymous evaluation questionnaire after every simulated training session. Assessment of team performance was done with the team-based self-assessment tool TeamMonitor, applying Van der Vleuten's conceptual framework of longitudinal evaluation, after experienced real events. Latent safety threats were reported during training sessions and after experienced real events. The general and specific needs assessments clearly identified the problems, revealed specific training needs, and assisted with stakeholder engagement. Ninety-five interdisciplinary staff members of the Children's Hospital participated in 20 in-situ simulated training sessions within 2 years. Participant feedback showed a high effect and acceptance of training with reference to self-perceived impact and self-efficacy. Thirty-five team members experiencing 8 real critical events assessed team performance with TeamMonitor. Team performance assessment with TeamMonitor was feasible and identified specific areas to target in future team training sessions. Training sessions as well as experienced real events revealed important latent safety threats that directed system changes. The programmatic approach of Kern's six steps for curriculum development helped to overcome barriers in the design, implementation, and assessment of an in-situ team and resuscitation training program. This approach may help improve the effectiveness and impact of an in-situ simulated training program.

  4. Cassini Information Management System in Distributed Operations Collaboration and Cassini Science Planning

    NASA Technical Reports Server (NTRS)

    Equils, Douglas J.

    2008-01-01

    Launched on October 15, 1997, the Cassini-Huygens spacecraft began its ambitious journey to the Saturnian system with a complex suite of 12 scientific instruments, and another 6 instruments aboard the European Space Agency's Huygens probe. Over the next 6 1/2 years, Cassini would continue its relatively simple cruise-phase operations, flying past Venus, Earth, and Jupiter. However, following Saturn Orbit Insertion (SOI), Cassini would become involved in a complex series of tasks that required detailed resource management, distributed operations collaboration, and a database for capturing science objectives. Collectively, these needs were met through a web-based software tool designed to help with the Cassini uplink process and ultimately used to generate more robust sequences for spacecraft operations. In 2001, in conjunction with the Southwest Research Institute (SwRI) and later Venustar Software and Engineering Inc., the Cassini Information Management System (CIMS) was released, which enabled the Cassini spacecraft and science planning teams to perform complex information management and team collaboration between scientists and engineers in 17 countries. Originally tailored to help manage the science planning uplink process, CIMS has been actively evolving since its inception to meet the changing and growing needs of the Cassini uplink team and effectively reduce mission risk through a series of resource management validation algorithms. These algorithms have been implemented in the web-based software tool to identify potential sequence conflicts early in the science planning process. CIMS mitigates these sequence conflicts through identification of timing incongruities, pointing inconsistencies, flight rule violations, data volume issues, and by assisting in Deep Space Network (DSN) coverage analysis. In preparation for extended mission operations, CIMS has also evolved further to assist in the planning and coordination of the dual playback redundancy of high-value data from targets such as Titan and Enceladus. This paper will outline the critical role that CIMS has played for Cassini in the distributed ops paradigm throughout operations. It will also examine the evolution that CIMS has undergone in the face of new science discoveries and fluctuating operational needs. Finally, it will conclude with the theoretical adaptation of CIMS for other projects and the potential savings in cost and risk available to future missions.

  5. Virtual Team Governance: Addressing the Governance Mechanisms and Virtual Team Performance

    NASA Astrophysics Data System (ADS)

    Zhan, Yihong; Bai, Yu; Liu, Ziheng

    As technology has improved and collaborative software has been developed, virtual teams with geographically dispersed members spread across diverse physical locations have become increasingly prominent. Supported by advancing communication technologies, virtual teams can largely transcend time and space. They have changed the corporate landscape and are more complex and dynamic than traditional teams, since their members are spread across diverse geographical locations and hold differing roles. How to govern virtual teams well, and thereby achieve strong virtual team performance, has therefore become a critical challenge, because good governance is essential for a high-performance virtual team. This paper explores the performance and governance mechanisms of virtual teams, establishing a model that explains the relationship between governance mechanisms and performance. Focusing on the management of virtual teams, it aims to identify strategies that help business organizations improve the performance of their virtual teams and meet the objectives of good virtual team management.

  6. Global Situational Awareness with Free Tools

    DTIC Science & Technology

    2015-01-15

    Client Technical Solutions • Software Engineering Measurement and Analysis • Architecture Practices • Product Line Practice • Team Software Process...multiple data sources • Snort (Snorby on Security Onion) • Nagios • SharePoint RSS • Flow • Others • Leverage standard data formats • Keyhole Markup Language

  7. The development and implementation of a Hospital Emergency Response Team (HERT) for out-of-hospital surgical care.

    PubMed

    Scott, Christopher; Putnam, Brant; Bricker, Scott; Schneider, Laura; Raby, Stephanie; Koenig, William; Gausche-Hill, Marianne

    2012-06-01

    Over the past two decades, Los Angeles County has implemented a Hospital Emergency Response Team (HERT) to provide on-scene, advanced surgical care of injured patients as an element of the local Emergency Medical Services (EMS) system. Since 2008, the primary responsibility of the team has been to perform surgical procedures in the austere field setting when prolonged extrication is anticipated. Following the maxim of "life over limb," the team is equipped to provide rapid amputation of an entrapped extremity as well as other procedures and medical care, such as anxiolytics and advanced pain control. This report describes the development and implementation of a local EMS system HERT.

  8. The Evolution of an Interprofessional Shared Decision-Making Research Program: Reflective Case Study of an Emerging Paradigm.

    PubMed

    Dogba, Maman Joyce; Menear, Matthew; Stacey, Dawn; Brière, Nathalie; Légaré, France

    2016-07-19

    Healthcare research increasingly focuses on interprofessional collaboration and on shared decision making, but knowledge gaps remain about effective strategies for implementing interprofessional collaboration and shared decision-making together in clinical practice. We used Kuhn's theory of scientific revolutions to reflect on how an integrated interprofessional shared decision-making approach was developed and implemented over time. In 2007, an interdisciplinary team initiated a new research program to promote the implementation of an interprofessional shared decision-making approach in clinical settings. For this reflective case study, two new team members analyzed the team's four projects, six research publications, one unpublished and two published protocols and organized them into recognizable phases according to Kuhn's theory. The merging of two young disciplines led to challenges characteristic of emerging paradigms. Implementation of interprofessional shared-decision making was hindered by a lack of conceptual clarity, a dearth of theories and models, little methodological guidance, and insufficient evaluation instruments. The team developed a new model, identified new tools, and engaged knowledge users in a theory-based approach to implementation. However, several unresolved challenges remain. This reflective case study sheds light on the evolution of interdisciplinary team science. It offers new approaches to implementing emerging knowledge in the clinical context.

  9. Team Training in the Perioperative Arena: A Methodology for Implementation and Auditing Behavior.

    PubMed

    Rhee, Amanda J; Valentin-Salgado, Yessenia; Eshak, David; Feldman, David; Kischak, Pat; Reich, David L; LoPachin, Vicki; Brodman, Michael

    Preventable medical errors in the operating room are most often caused by ineffective communication and suboptimal team dynamics. TeamSTEPPS is a government-funded, evidence-based program that provides tools and education to improve teamwork in medicine. The study hospital implemented TeamSTEPPS in the operating room and merged the program with a surgical safety checklist. Audits were performed to collect both quantitative and qualitative information on time out (brief) and debrief conversations, using a standardized audit tool. A total of 1610 audits over 6 months were performed by live auditors. Performance was sustained at desired levels or improved for all qualitative metrics, using χ² and linear regression analyses. Additionally, the absolute numbers of wrong site/side/person surgeries and unintentionally retained foreign bodies decreased after TeamSTEPPS implementation.
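
    As a worked illustration of the χ² comparison applied to audit counts, hypothetical before-and-after tallies (not the study's data) can be tested as follows:

        from scipy.stats import chi2_contingency

        # Hypothetical counts of complete vs. incomplete debrief conversations
        # before and after TeamSTEPPS implementation (not the study's data).
        table = [[120, 80],   # before: 120 complete, 80 incomplete
                 [180, 20]]   # after:  180 complete, 20 incomplete
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2={chi2:.1f}, dof={dof}, p={p:.2g}")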

  10. Distributed subterranean exploration and mapping with teams of UAVs

    NASA Astrophysics Data System (ADS)

    Rogers, John G.; Sherrill, Ryan E.; Schang, Arthur; Meadows, Shava L.; Cox, Eric P.; Byrne, Brendan; Baran, David G.; Curtis, J. Willard; Brink, Kevin M.

    2017-05-01

    Teams of small autonomous UAVs can be used to map and explore unknown environments which are inaccessible to teams of human operators in humanitarian assistance and disaster relief efforts (HA/DR). In addition to HA/DR applications, teams of small autonomous UAVs can enhance Warfighter capabilities and provide operational stand-off for military operations such as cordon and search, counter-WMD, and other intelligence, surveillance, and reconnaissance (ISR) operations. This paper will present a hardware platform and software architecture to enable distributed teams of heterogeneous UAVs to navigate, explore, and coordinate their activities to accomplish a search task in a previously unknown environment.
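
    The abstract leaves the coordination mechanism unspecified, so the following is only a minimal sketch of one common primitive in multi-robot exploration: greedily assigning unexplored frontier points to UAVs by distance. The function name, data layout, and greedy policy are assumptions for illustration, not the paper's architecture.

      # Hypothetical sketch: greedy distance-based frontier assignment.
      from math import dist

      def assign_frontiers(uav_positions, frontiers):
          """Consider (uav, frontier) pairs in order of increasing distance;
          each UAV gets at most one goal and each frontier one visitor."""
          pairs = sorted(
              (dist(pos, f), uav, tuple(f))
              for uav, pos in uav_positions.items()
              for f in frontiers
          )
          assigned, taken = {}, set()
          for _, uav, f in pairs:
              if uav not in assigned and f not in taken:
                  assigned[uav] = f
                  taken.add(f)
          return assigned

      # Two UAVs, three frontier cells (map coordinates in meters).
      print(assign_frontiers({"uav1": (0.0, 0.0), "uav2": (10.0, 2.0)},
                             [(1.0, 1.0), (9.0, 3.0), (5.0, 5.0)]))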

  11. A new role for the ACNP: the rapid response team leader.

    PubMed

    Morse, Kate J; Warshawsky, Deborah; Moore, Jacqueline M; Pecora, Denise C

    2006-01-01

    The implementation of a rapid response team or medical emergency team is 1 of the 6 initiatives of the Institute for Healthcare Improvement's 100,000 Lives Campaign with the goal to reduce the number of cardiopulmonary arrests outside the intensive care unit and inpatient mortality rates. The concept of RRT was pioneered in Australia and is now being implemented in many hospitals across the United States. This article reviews the current literature and describes the implementation of an RRT in a community hospital. The first-quarter data after implementation are described. The unique role of the acute care nurse practitioner in this hospital's model is described.

  12. Using Selection Pressure as an Asset to Develop Reusable, Adaptable Software Systems

    NASA Astrophysics Data System (ADS)

    Berrick, S. W.; Lynnes, C.

    2007-12-01

    The Goddard Earth Sciences Data and Information Services Center (GES DISC) at NASA has over the years developed and honed a number of reusable architectural components for supporting large-scale data centers with a large customer base. These include a processing system (S4PM) and an archive system (S4PA) based upon a workflow engine called the Simple, Scalable, Script-based Science Processor (S4P); an online data visualization and analysis system (Giovanni); and the radically simple and fast data search tool, Mirador. These subsystems are currently reused internally in a variety of combinations to implement customized data management on behalf of instrument science teams and other science investigators. Some of these subsystems (S4P and S4PM) have also been reused by other data centers for operational science processing. Our experience has been that development and utilization of robust, interoperable, and reusable software systems can actually flourish in environments defined by heterogeneous commodity hardware systems, an emphasis on value-added customer service, and continual cost-reduction pressures. The repeated internal reuse that is fostered by such an environment encourages and even forces changes to the software that make it more reusable and adaptable. Allowing and even encouraging such selection pressures on software development has been a key factor in the success of S4P and S4PM, which are now available to the open source community under the NASA Open Source Agreement.
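
    S4P's published design organizes processing as work-order files moving between station directories, each watched by a small script. The toy below imitates that pattern in spirit only; the DO.* naming, directory layout, and run_station helper are hypothetical, not the S4P API.

      # Toy, file-driven workflow station in the spirit of S4P (not S4P code).
      import shutil
      from pathlib import Path

      def run_station(inbox: Path, outbox: Path, process) -> int:
          """Run `process` on every work-order file waiting in `inbox`,
          then hand each one to the downstream station via `outbox`."""
          outbox.mkdir(parents=True, exist_ok=True)
          handled = 0
          for order in sorted(inbox.glob("DO.*")):  # assumed naming scheme
              process(order)                        # the station's script
              shutil.move(str(order), str(outbox / order.name))
              handled += 1
          return handled

      # Example: chain stations by pointing one's outbox at the next's inbox.
      # run_station(Path("stations/ingest"), Path("stations/archive"), my_script)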

  13. Complex Instruction and Teaming: The Relationship between School Organization and the Introduction of an Instructional Innovation.

    ERIC Educational Resources Information Center

    Lee, Ginny; Filby, Nikola

    This document presents findings of a study that examined the impact of teacher teaming on the implementation of a comprehensive program of curriculum and instruction. The program, Complex Instruction (CI), was implemented in four middle schools in the Riverdale School District (Arizona), each of which utilized some form of teaming instruction. CI…

  14. Learning Together and Working Apart: Routines for Organizational Learning in Virtual Teams

    ERIC Educational Resources Information Center

    Dixon, Nancy

    2017-01-01

    Purpose: Research suggests that teaming routines facilitate learning in teams. This paper identifies and details how specific teaming routines, implemented in a virtual team, support its continual learning. The study's focus was to generate authentic and descriptive accounts of the interviewees' experiences with virtual teaming routines.…

  15. Implementing augmentative and alternative communication in inclusive educational settings: a case study.

    PubMed

    Stoner, Julia B; Angell, Maureen E; Bailey, Rita L

    2010-06-01

    The purpose of this study was to describe a single case of augmentative and alternative communication (AAC) implementation. Case study methodology was used to describe the perspectives of educational team members regarding AAC implementation for Joey, a high school junior with athetoid cerebral palsy. Benefits included greater intelligibility for Joey and subsequent comfort of the staff. Facilitators of Joey's AAC system use included the team's student-focused disposition and willingness to implement use of the device, Joey's increased intelligibility, peers' acceptance of the technology, and the resulting increase in Joey's socialization. Limited team cohesiveness, problem solving, and communication were the true barriers in this case. Implications of these facilitators and barriers are discussed and recommendations for school-based AAC implementation are made.

  16. Study and Implementation of the End-to-End Data Pipeline for the VIRTIS Imaging Spectrometer Onboard Venus Express: "From Science Operations Planning to Data Archiving and Higher Level Processing"

    NASA Astrophysics Data System (ADS)

    Cardesín Moinelo, Alejandro

    2010-04-01

    This PhD Thesis describes the activities performed during the Research Program undertaken for two years at the Istituto Nazionale di AstroFisica in Rome, Italy, as an active member of the VIRTIS Technical and Scientific Team, and one additional year at the European Space Astronomy Centre in Madrid, Spain, as a member of the Mars Express Science Ground Segment. This document presents a study of all sections of the Science Ground Segment of the Venus Express mission, from the planning of the scientific operations to the generation, calibration, and archiving of the science data, including the production of valuable high-level products. We present and discuss the end-to-end diagram of the ground segment from the technical and scientific point of view, in order to describe the overall flow of information: from the original scientific requests of the principal investigator and interdisciplinary teams, up to the spacecraft, and down again for the analysis of the measurements and interpretation of the scientific results. These scientific results lead to new and more elaborate scientific requests, which feed back into the planning cycle, closing the loop. Special attention is given to the implementation and development of the data pipeline for the VIRTIS instrument onboard Venus Express. During the research program, both the raw data generation pipeline and the data calibration pipeline were developed and automated in order to produce the final raw and calibrated data products from the input telemetry of the instrument. The final raw and calibrated products presented in this work are currently being used by the VIRTIS Science team for data analysis and are distributed to the whole scientific community via the Planetary Science Archive. More than 20,000 raw data files and 10,000 calibrated products have already been generated after almost 4 years of mission operations. In the final part of the Thesis, we also present some high-level data processing methods developed for the Mapping channel of the VIRTIS instrument. These methods have been implemented for the generation of high-level global maps of measured radiance over the whole planet, which can then be used to understand the global dynamics and morphology of the Venusian atmosphere. The approach is currently being used to compare different emissions probing different altitudes, from the low cloud layers up to the upper mesosphere, by using the averaged projected values of radiance observed by the instrument, such as the near-infrared windows at 1.7 μm and 2.3 μm and the thermal region at 3.8 μm and 5 μm, plus the analysis of particular emissions on the night and day sides of the planet. This research has been undertaken under the guidance and supervision of Giuseppe Piccioni, VIRTIS co-Principal Investigator, with support of the entire VIRTIS technical and scientific team, in particular the Archiving team in Paris (LESIA-Meudon). The work has also been done in close collaboration with the Science and Mission Operations Centres in Madrid and Darmstadt (European Space Agency), the EGSE software developer (Techno Systems), the manufacturer of the VIRTIS instrument (Galileo Avionica), and the developer of the VIRTIS onboard software (DLR Berlin). The outcome of the technical and scientific work presented in this thesis is currently being used by the VIRTIS team to continue the investigations of the Venusian atmosphere and to plan new scientific observations that improve the overall knowledge of the solar system.
At the end of this document we show some of the many technical and scientific contributions, which have already been published in several international journals and conferences, and some articles of the European Space Agency used for public outreach.
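
    As a concrete illustration of the mapping step described above, the sketch below averages individually projected radiance samples onto a global latitude/longitude grid. It is a hypothetical numpy fragment with an assumed grid resolution, not the VIRTIS pipeline code.

      # Hypothetical sketch: bin projected radiance samples into a global grid.
      import numpy as np

      def global_radiance_map(lat, lon, radiance, res_deg=1.0):
          """Average point measurements (lat/lon in degrees) per grid cell;
          cells with no samples are returned as NaN."""
          nlat, nlon = int(180 / res_deg), int(360 / res_deg)
          i = np.clip(((np.asarray(lat) + 90.0) / res_deg).astype(int), 0, nlat - 1)
          j = np.clip(((np.asarray(lon) + 180.0) / res_deg).astype(int), 0, nlon - 1)
          total = np.zeros((nlat, nlon))
          count = np.zeros((nlat, nlon))
          np.add.at(total, (i, j), radiance)   # sum radiance per cell
          np.add.at(count, (i, j), 1.0)        # sample count per cell
          with np.errstate(divide="ignore", invalid="ignore"):
              return np.where(count > 0, total / count, np.nan)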

  17. ISS Operations Cost Reductions Through Automation of Real-Time Planning Tasks

    NASA Technical Reports Server (NTRS)

    Hall, Timothy A.; Clancey, William J.; McDonald, Aaron; Toschlog, Jason; Tucker, Tyson; Khan, Ahmed; Madrid, Steven (Eric)

    2011-01-01

    In 2007, the Johnson Space Center's Mission Operations Directorate (MOD) management team challenged their organizations to find ways to reduce the cost of operations for supporting the International Space Station (ISS) in the Mission Control Center (MCC). Each MOD organization was asked to define and execute projects that would help them attain cost reductions by 2012. The MOD Operations Division Flight Planning Branch responded to this challenge by launching several software automation projects that would allow them to greatly improve console operations, reduce ISS console staffing, and in turn reduce operating costs. These tasks ranged from improving the management and integration of mission plan changes to automating the uploading and downloading of information to and from the ISS, along with the associated ground-complex tasks that required multiple decision points. The software solutions leveraged several different technologies, including customized web applications and an implementation of industry-standard web services architecture, as well as a previously TRL 4-5 technology developed by Ames Research Center (ARC) that utilized an intelligent agent-based system to manage and automate file traffic flow, archive data, and generate console logs. These projects to date have allowed the MOD Operations organization to remove one full-time (7 x 24 x 365) ISS console position in 2010, with the goal of eliminating a second full-time ISS console support position by 2012. The team will also reduce one long-range planning console position by 2014. When complete, these Flight Planning Branch projects will account for the elimination of 3 console positions and a reduction in staffing of 11 engineering personnel (EP) for ISS.

  18. Cost-minimization model of a multidisciplinary antibiotic stewardship team based on a successful implementation on a urology ward of an academic hospital.

    PubMed

    Dik, Jan-Willem H; Hendrix, Ron; Friedrich, Alex W; Luttjeboer, Jos; Panday, Prashant Nannan; Wilting, Kasper R; Lo-Ten-Foe, Jerome R; Postma, Maarten J; Sinha, Bhanu

    2015-01-01

    In order to stimulate appropriate antimicrobial use and thereby lower the chances of resistance development, an Antibiotic Stewardship Team (A-Team) has been implemented at the University Medical Center Groningen, the Netherlands. Focus of the A-Team was a pro-active day 2 case-audit, which was financially evaluated here to calculate the return on investment from a hospital perspective. Effects were evaluated by comparing audited patients with a historic cohort with the same diagnosis-related groups. Based upon this evaluation a cost-minimization model was created that can be used to predict the financial effects of a day 2 case-audit. Sensitivity analyses were performed to deal with uncertainties. Finally, the model was used to financially evaluate the A-Team. One whole year including 114 patients was evaluated. Implementation costs were calculated to be €17,732, which represent total costs spent to implement this A-Team. For this specific patient group admitted to a urology ward and consulted on day 2 by the A-Team, the model estimated total savings of €60,306 after one year for this single department, leading to a return on investment of 5.9. The implemented multi-disciplinary A-Team performing a day 2 case-audit in the hospital had a positive return on investment caused by a reduced length of stay due to a more appropriate antibiotic therapy. Based on the extensive data analysis, a model of this intervention could be constructed. This model could be used by other institutions, using their own data to estimate the effects of a day 2 case-audit in their hospital.
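
    The arithmetic skeleton of such a cost-minimization model is compact, as in the hypothetical sketch below, where savings scale with audited patients and avoided bed days. Only the patient count and implementation cost echo the abstract; the bed-day cost and length-of-stay reduction are invented placeholders, which is why the output does not reproduce the paper's reported 5.9.

      # Toy return-on-investment sketch; placeholder inputs, not the paper's model.
      def audit_roi(patients, bed_day_cost, los_reduction_days, implementation_cost):
          """Savings come from a shorter length of stay for audited patients."""
          savings = patients * los_reduction_days * bed_day_cost
          return savings - implementation_cost, savings / implementation_cost

      # 114 patients and EUR 17,732 echo the abstract; the other two inputs
      # are invented for illustration.
      net, roi = audit_roi(patients=114, bed_day_cost=500.0,
                           los_reduction_days=2.0, implementation_cost=17_732.0)
      print(f"net savings EUR {net:,.0f}, return on investment {roi:.1f}")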

  19. A collaborative institutional model for integrating computer applications in the medical curriculum.

    PubMed Central

    Friedman, C. P.; Oxford, G. S.; Juliano, E. L.

    1991-01-01

    The introduction and promotion of information technology in an established medical curriculum with existing academic and technical support structures poses a number of challenges. The UNC School of Medicine has developed the Taskforce on Educational Applications in Medicine (TEAM) to coordinate this effort. TEAM works as a confederation of existing research and support units with interests in computers and education, along with a core of interested faculty with curricular responsibilities. Constituent units of the TEAM confederation include the medical center library, medical television studios, basic science teaching laboratories, educational development office, microcomputer and network support groups, academic affairs administration, and a subset of course directors and teaching faculty. Among our efforts have been the establishment of (1) a mini-grant program to support faculty-initiated development and implementation of computer applications in the curriculum, (2) a symposium series with visiting speakers to acquaint faculty with current developments in medical informatics and related curricular efforts at other institutions, (3) 20 computer workstations located in the multipurpose teaching labs where first- and second-year students do much of their academic work, and (4) a demonstration center for evaluation of courseware and technologically advanced delivery systems. The student workstations provide convenient access to electronic mail, University schedules and calendars, the CoSy computer conferencing system, and several software applications integral to their courses in pathology, histology, microbiology, biochemistry, and neurobiology. The progress achieved toward the primary goal has modestly exceeded our initial expectations, while the collegiality and interest expressed toward TEAM activities in the local environment stand as empirical measures of the success of the concept. PMID:1807705

  20. Next Generation Simulation Framework for Robotic and Human Space Missions

    NASA Technical Reports Server (NTRS)

    Cameron, Jonathan M.; Balaram, J.; Jain, Abhinandan; Kuo, Calvin; Lim, Christopher; Myint, Steven

    2012-01-01

    The Dartslab team at NASA's Jet Propulsion Laboratory (JPL) has a long history of developing physics-based simulations based on the Darts/Dshell simulation framework that have been used to simulate many planetary robotic missions, such as the Cassini spacecraft and the rovers that are currently driving on Mars. Recent collaboration efforts between the Dartslab team at JPL and the Mission Operations Directorate (MOD) at NASA Johnson Space Center (JSC) have led to significant enhancements to the Dartslab DSENDS (Dynamics Simulator for Entry, Descent and Surface landing) software framework. The new version of DSENDS is now being used for new planetary mission simulations at JPL. JSC is using DSENDS as the foundation for a suite of software known as COMPASS (Core Operations, Mission Planning, and Analysis Spacecraft Simulation) that is the basis for their new human space mission simulations and analysis. In this paper, we describe the collaborative process between the JPL Dartslab team and the JSC MOD team that resulted in the redesign and enhancement of the DSENDS software. We outline the improvements in DSENDS that simplify creation of new high-fidelity robotic/spacecraft simulations. We illustrate how DSENDS simulations are assembled and show results from several mission simulations.

  1. Advancing Perspectives of Sustainability and Large-Scale Implementation of Design Teams in Ghana's Polytechnics: Issues and Opportunities

    ERIC Educational Resources Information Center

    Bakah, Marie Afua Baah; Voogt, Joke M.; Pieters, Jules M.

    2012-01-01

    Polytechnic staff perspectives are sought on the sustainability and large-scale implementation of design teams (DT), as a means for collaborative curriculum design and teacher professional development in Ghana's polytechnics, months after implementation. Data indicates that teachers still collaborate in DTs for curriculum design and professional…

  2. Deliberation Makes a Difference: Preparation Strategies for TeamSTEPPS Implementation in Small and Rural Hospitals

    PubMed Central

    Zhu, Xi; Baloh, Jure; Ward, Marcia M.; Stewart, Greg L.

    2016-01-01

    Small and rural hospitals face special challenges to implement and sustain organization-wide quality improvement (QI) initiatives due to limited resources and infrastructures. We studied the implementation of TeamSTEPPS, a national QI initiative, in 14 critical access hospitals. Drawing on QI and organization development theories, we propose five strategic preparation steps for TeamSTEPPS: assess needs, reflect on the context, set goals, develop a shared understanding, and select change agents. We explore how hospitals’ practices correspond to suggested best practices by analyzing qualitative data collected through quarterly interviews with key informants. We find that the level of deliberation was a key factor that differentiated hospitals’ practices. Hospitals that were more deliberate in preparing for the five strategic steps were more likely to experience engagement, perceive efficacy, foresee and manage barriers, and achieve progress during implementation. We discuss potential steps that hospitals may take to better prepare for TeamSTEPPS implementation. PMID:26429835

  3. Development of a web service for analysis in a distributed network.

    PubMed

    Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila

    2014-01-01

    We describe functional specifications and practicalities in the software development process for a web service that allows the construction of the multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. We recently developed and published a web service for model construction and data analysis in a distributed environment. That paper provided an overview of the system that is useful for users, but included very few details relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. We followed a two-stage development approach, first implementing the backbone system and then incrementally improving the user experience through interactions with potential users during development. Our system went through stages including proof of concept, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones. We leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support cross-platform deployment. During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We worked out solutions as well as enabling factors to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes.
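
    The statistical idea behind GLORE can be sketched compactly: each site computes gradient and Hessian contributions of the logistic log-likelihood on its own patients, and only those aggregates are shared with a coordinator running Newton-Raphson. The numpy sketch below illustrates that idea under simplifying assumptions (dense arrays, a fixed iteration count); it is not the web service's code.

      # Sketch of GLORE-style distributed logistic regression (illustrative).
      import numpy as np

      def site_contribution(X, y, beta):
          """Computed locally at one site; only these aggregates leave it."""
          p = 1.0 / (1.0 + np.exp(-X @ beta))
          grad = X.T @ (y - p)
          hess = -X.T @ (X * (p * (1 - p))[:, None])
          return grad, hess

      def fit_distributed(sites, dim, iters=25):
          """`sites` holds per-site (X, y); the coordinator never sees rows."""
          beta = np.zeros(dim)
          for _ in range(iters):
              parts = [site_contribution(X, y, beta) for X, y in sites]
              grad = sum(g for g, _ in parts)      # aggregate across sites
              hess = sum(h for _, h in parts)
              beta -= np.linalg.solve(hess, grad)  # Newton-Raphson step
          return beta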

  4. Development of a Web Service for Analysis in a Distributed Network

    PubMed Central

    Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila

    2014-01-01

    Objective: We describe functional specifications and practicalities in the software development process for a web service that allows the construction of the multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. Background: We recently developed and published a web service for model construction and data analysis in a distributed environment. That paper provided an overview of the system that is useful for users, but included very few details relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. Methods: We followed a two-stage development approach, first implementing the backbone system and then incrementally improving the user experience through interactions with potential users during development. Our system went through stages including proof of concept, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones. We leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support cross-platform deployment. Discussion: During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We worked out solutions as well as enabling factors to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. Conclusion: Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes. PMID:25848586

  5. Rates, levels, and determinants of electronic health record system adoption: a study of hospitals in Riyadh, Saudi Arabia.

    PubMed

    Aldosari, Bakheet

    2014-05-01

    Outside a small number of OECD countries, little information exists regarding the rates, levels, and determinants of hospital electronic health record (EHR) system adoption. This study examines EHR system adoption in Riyadh, Saudi Arabia. Respondents from 22 hospitals were surveyed regarding the implementation, maintenance, and improvement phases of EHR system adoption. Thirty-seven items were graded on a three-point scale of preparedness/completion. Measured determinants included hospital size, level of care, ownership, and EHR system development team composition. Eleven of the hospitals had implemented fully functioning EHR systems, eight had systems in progress, and three had not adopted a system. Sixteen different systems were being used across the 19 adopting hospitals. Differential adoption levels were positively related to hospital size and negatively to the level of care (secondary versus tertiary). Hospital ownership (nonprofit versus private) and development team composition showed mixed effects depending on the particular adoption phase being considered. Adoption rates compare favourably with those reported from other countries and other districts in Saudi Arabia, but wide variations exist among hospitals in the levels of adoption of individual items. General weaknesses in the implementation phase concern the legacy of paper data systems, including document scanning and data conversion; in the maintenance phase concern updating/maintaining software; and in the improvement phase concern the communication and exchange of health information. This study is the first to investigate the level and determinants of EHR system adoption for public, other nonprofit, and private hospitals in Saudi Arabia. Wide interhospital variations in adoption bear implications for policy-making and funding intervention. Identified areas of weakness require action to increase the degree of adoption and usefulness of EHR systems.

  6. GEOSAT Follow-On (GFO) Altimeter Document Series. Volume 5; Version 1; GFO Radar Altimeter Processing at Wallops Flight Facility

    NASA Technical Reports Server (NTRS)

    Lockwood, Dennis W.; Conger, A. M.

    2003-01-01

    This document is a compendium of the WFF GFO Software Development Team's knowledge regarding GFO CAL/VAL Data. It includes many elements of a requirements document, a software specification document, a software design document, and a user's guide. In the more technical sections, this document assumes the reader is familiar with GFO and its CAL/VAL Data.

  7. Evaluation and Validation (E&V) Team Public Report. Volume 5

    DTIC Science & Technology

    1990-10-31

    aspects, software engineering practices, etc. The E&V requirements which are developed will be used to guide the E&V technical effort. The currently...interoperability of Ada software engineering environment tools and data. The scope of the CAIS-A includes the functionality affecting transportability that is...requirement that they be CAIS conforming tools or data. That is, for example numerous CIVC data exist on special purpose software currently available

  8. Software Estimation: Developing an Accurate, Reliable Method

    DTIC Science & Technology

    2011-08-01

    ...the systems engineering team is responsible for system and software requirements. Process Dashboard is a software planning and tracking tool... Brad Hodgins is an interim TSP Mentor Coach, SEI-Authorized TSP Coach, SEI-Certified PSP/TSP Instructor, and SEI...

  9. A Measurement Framework for Team Level Assessment of Innovation Capability in Early Requirements Engineering

    NASA Astrophysics Data System (ADS)

    Regnell, Björn; Höst, Martin; Nilsson, Fredrik; Bengtsson, Henrik

    When developing software-intensive products for a market-place it is important for a development organisation to create innovative features for coming releases in order to achieve advantage over competitors. This paper focuses on assessment of innovation capability at team level in relation to the requirements engineering that is taking place before the actual product development projects are decided, when new business models, technology opportunities and intellectual property rights are created and investigated through e.g. prototyping and concept development. The result is a measurement framework focusing on four areas: innovation elicitation, selection, impact and ways-of-working. For each area, candidate measurements were derived from interviews to be used as inspiration in the development of a tailored measurement program. The framework is based on interviews with participants of a software team with specific innovation responsibilities and validated through cross-case analysis and feedback from practitioners.

  10. A Computer Supported Teamwork Project for People with a Visual Impairment.

    ERIC Educational Resources Information Center

    Hale, Greg

    2000-01-01

    Discussion of the use of computer supported teamwork (CSTW) in team-based organizations focuses on problems that visually impaired people have reading graphical user interface software via screen reader software. Describes a project that successfully used email for CSTW, and suggests issues needing further research. (LRW)

  11. Tutor Training in Computer Science: Tutor Opinions and Student Results.

    ERIC Educational Resources Information Center

    Carbone, Angela; Mitchell, Ian

    Edproj, a project team of faculty from the departments of computer science, software development and education at Monash University (Australia) investigated the quality of teaching and student learning and understanding in the computer science and software development departments. Edproj's research led to the development of a training program to…

  12. Improving Collaborative Learning in Online Software Engineering Education

    ERIC Educational Resources Information Center

    Neill, Colin J.; DeFranco, Joanna F.; Sangwan, Raghvinder S.

    2017-01-01

    Team projects are commonplace in software engineering education. They address a key educational objective, provide students critical experience relevant to their future careers, allow instructors to set problems of greater scale and complexity than could be tackled individually, and are a vehicle for socially constructed learning. While all…

  13. Learning Teamwork Skills in University Programming Courses

    ERIC Educational Resources Information Center

    Sancho-Thomas, Pilar; Fuentes-Fernandez, Ruben; Fernandez-Manjon, Baltasar

    2009-01-01

    University courses about computer programming usually seek to provide students not only with technical knowledge, but also with the skills required to work in real-life software projects. Nowadays, the development of software applications requires the coordinated efforts of the members of one or more teams. Therefore, it is important for software…

  14. Performance and Perceptions of Student Teams Created and Stratified Based on Academic Abilities.

    PubMed

    Camiel, Lana Dvorkin; Kostka-Rokosz, Maria; Tataronis, Gary; Goldman, Jennifer

    2017-04-01

    Objective. To compare student performance, elements of peer evaluation, and satisfaction of teams created according to students' course entrance grade point average (GPA). Methods. Two course sections were divided into teams of four to five students utilizing Comprehensive Assessment of Team Member Effectiveness (CATME) software. Results. Of 336 students enrolled, 324 consented to participation. Weekly team quiz averages were 99.1% (higher GPA), 97.2% (lower GPA), and 97.7% (mixed GPA). Weekly individual quiz averages were 87.2% (higher GPA), 83.3% (lower GPA), and 85.2% (mixed GPA). Students with the same GPA performed similarly on individual quizzes regardless of team assignment. Mean satisfaction scores were 4.52 (higher GPA), 4.73 (lower GPA), and 4.53 (mixed GPA). Conclusion. Academically stronger students in mixed-GPA teams appeared to be at a slight disadvantage compared to similar students in higher-GPA teams. There was no difference in team performance for academically weaker students in lower-GPA versus mixed-GPA teams. Team satisfaction was highest in lower-GPA teams.
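
    As a simplified illustration of the stratification described (not the CATME algorithm itself), the sketch below forms homogeneous higher- or lower-GPA teams by slicing a ranked roster into consecutive blocks, and mixed teams by dealing students round-robin.

      # Illustrative team formation by GPA; not the CATME implementation.
      def make_teams(students, team_size=4, mixed=False):
          """`students` is a list of (name, gpa) tuples."""
          ranked = sorted(students, key=lambda s: s[1], reverse=True)
          if not mixed:
              # Consecutive blocks: teams end up uniformly high or low GPA.
              return [ranked[i:i + team_size]
                      for i in range(0, len(ranked), team_size)]
          # Round-robin deal: every team spans the GPA range.
          n_teams = -(-len(ranked) // team_size)  # ceiling division
          teams = [[] for _ in range(n_teams)]
          for k, student in enumerate(ranked):
              teams[k % n_teams].append(student)
          return teams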

  15. Performance and Perceptions of Student Teams Created and Stratified Based on Academic Abilities

    PubMed Central

    Kostka-Rokosz, Maria; Tataronis, Gary; Goldman, Jennifer

    2017-01-01

    Objective. To compare student performance, elements of peer evaluation, and satisfaction of teams created according to students’ course entrance grade point average (GPA). Methods. Two course sections were divided into teams of four to five students utilizing Comprehensive Assessment of Team Member Effectiveness (CATME) software. Results. Of 336 students enrolled, 324 consented to participation. Weekly team quiz averages were 99.1% (higher GPA), 97.2% (lower GPA), and 97.7% (mixed GPA). Weekly individual quiz averages were 87.2% (higher GPA), 83.3% (lower GPA), and 85.2% (mixed GPA). Students with the same GPA performed similarly on individual quizzes regardless of team assignment. Mean satisfaction scores were 4.52 (higher GPA), 4.73 (lower GPA), and 4.53 (mixed GPA). Conclusion. Academically stronger students in mixed-GPA teams appeared to be at a slight disadvantage compared to similar students in higher-GPA teams. There was no difference in team performance for academically weaker students in lower-GPA versus mixed-GPA teams. Team satisfaction was highest in lower-GPA teams. PMID:28496267

  16. Adapting the RoboCup Simulation for Autonomous Vehicle Team Information Fusion and Decision Making Experimentation

    DTIC Science & Technology

    2010-06-01

    researchers outside the government to produce the kinds of algorithms and software that would easily transition into solutions for teams of autonomous ... vehicles for military scenarios. To accomplish this, we began modifying the RoboCup soccer game step-by-step to incorporate rules that simulate these

  17. Ada training evaluation and recommendations from the Gamma Ray Observatory Ada Development Team

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The Ada training experiences of the Gamma Ray Observatory Ada development team are related, and recommendations are made concerning future Ada training for software developers. Training methods are evaluated, deficiencies in the training program are noted, and a recommended approach, including course outline, time allocation, and reference materials, is offered.

  18. Proceedings of the International Academy for Information Management (IAIM) Annual Conference (13th, Helsinki, Finland, December 11-13, 1998).

    ERIC Educational Resources Information Center

    Rogers, Camille, Ed.

    The conference paper topics include: business and information technology (IT) education; knowledge management; teaching software applications; development of multimedia teaching materials; technology job skills in demand; IT management for executives; self-directed teams in information systems courses; a team building exercise to software…

  19. UTM TCL2 Software Requirements

    NASA Technical Reports Server (NTRS)

    Smith, Irene S.; Rios, Joseph L.; McGuirk, Patrick O.; Mulfinger, Daniel G.; Venkatesan, Priya; Smith, David R.; Baskaran, Vijayakumar; Wang, Leo

    2017-01-01

    The Unmanned Aircraft Systems (UAS) Traffic Management (UTM) Technical Capability Level (TCL) 2 software implements the UTM TCL 2 software requirements described herein. These software requirements are linked to the higher level UTM TCL 2 System Requirements. Each successive TCL implements additional UTM functionality, enabling additional use cases. TCL 2 demonstrated how to enable expanded multiple operations by implementing automation for beyond visual line-of-sight, tracking operations, and operations flying over sparsely populated areas.

  20. Teaming for Speech and Auditory Training.

    ERIC Educational Resources Information Center

    Nussbaum, Debra B.; Waddy-Smith, Bettie

    1985-01-01

    The article suggests three strategies for the audiologist and speech/communication specialist to use in assisting the preschool teacher to implement a student's individualized education program: (1) demonstration teaming, (2) dual teaming, and (3) rotation teaming. (CL)
