Sample records for software deployment updating

  1. The Propulsive Small Expendable Deployer System (ProSEDS)

    NASA Technical Reports Server (NTRS)

    Lorenzini, Enrico C.; Cosmo, Mario L.; Estes, Robert D.; Sanmartin, Juan; Pelaez, Jesus; Ruiz, Manuel

    2003-01-01

    This Final Report covers the following main topics: 1) Brief Description of ProSEDS; 2) Mission Analysis; 3) Dynamics Reference Mission; 4) Dynamics Stability; 5) Deployment Control; 6) Updated System Performance; 7) Updated Mission Analysis; 8) Updated Dynamics Reference Mission; 9) Updated Deployment Control Profiles and Simulations; 10) Updated Reference Mission; 11) Evaluation of Power Delivered by the Tether; 12) Deployment Control Profile Ref. #78 and Simulations; 13) Kalman Filters for Mission Estimation; 14) Analysis/Estimation of Deployment Flight Data; 15) Comparison of ED Tethers and Electrical Thrusters; 16) Dynamics Analysis for Mission Starting at a Lower Altitude; 17) Deployment Performance at a Lower Altitude; 18) Satellite Orbit after a Tether Cut; 19) Deployment with Shorter Dyneema Tether Length; 20) Interactive Software for ED Tethers.

  2. The Propulsive Small Expendable Deployer System (ProSEDS)

    NASA Technical Reports Server (NTRS)

    Lorenzini, Enrico C.; Estes, Robert D.; Cosmo, Mario L.

    2001-01-01

    This is the Annual Report #2 entitled "The Propulsive Small Expendable Deployer System (ProSEDS)" prepared by the Smithsonian Astrophysical Observatory for NASA Marshall Space Flight Center. This report covers the period of activity from 1 August 2000 through 30 July 2001. The topics include: 1) Updated System Performance; 2) Mission Analysis; 3) Updated Dynamics Reference Mission; 4) Updated Deployment Control Profiles and Simulations; 5) Comparison of ED tethers and electrical thrusters; 6) Kalman filters for mission estimation; and 7) Delivery of interactive software for ED tethers.

  3. Specifying and Verifying the Correctness of Dynamic Software Updates

    DTIC Science & Technology

    2011-11-15

    additional branching introduced by update points and the need to analyze the state transformer code. As tools become faster and more effective, our ... It shows the effectiveness of merging-based verification on practical examples, including Redis [20], a widely deployed server program. ... Gupta's reachability while side-stepping the problem that reachability can leave behavior under-constrained. For example, for the vsftpd update ...

  4. The Propulsive Small Expendable Deployer System (ProSEDS)

    NASA Technical Reports Server (NTRS)

    Lorenzini, Enrico C.

    2002-01-01

    This Annual Report covers the following main topics: 1) Updated Reference Mission. The reference ProSEDS (Propulsive Small Expendable Deployer System) mission is evaluated for an updated launch date in the Summer of 2002 and for the new 80-s current operating cycle. Simulations are run for nominal solar activity condition at the time of launch and for extreme conditions of dynamic forcing. Simulations include the dynamics of the system, the electrodynamics of the bare tether, the neutral atmosphere and the thermal response of the tether. 2) Evaluation of power delivered by the tether system. The power delivered by the tethered system during the battery charging mode is computed under the assumption of minimum solar activity for the new launch date. 3) Updated Deployment Control Profiles and Simulations. A number of new deployment profiles were derived based on the latest results of the deployment ground tests. The flight profile is then derived based on the friction characteristics obtained from the deployment tests of the F-1 tether. 4) Analysis/estimation of deployment flight data. A process was developed to estimate the deployment trajectory of the endmass with respect to the Delta and the final libration amplitude from the data of the deployer turn counters. This software was tested successfully during the ProSEDS mission simulation at MSFC (Marshall Space Flight Center) EDAC (Environments Data Analysis Center).

  5. SHINE Virtual Machine Model for In-flight Updates of Critical Mission Software

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2008-01-01

    This software is a new target for the Spacecraft Health Inference Engine (SHINE) knowledge base that compiles a knowledge base to a language called Tiny C - an interpreted version of C that can be embedded on flight processors. This new target allows portions of a running SHINE knowledge base to be updated on a "live" system without needing to halt and restart the containing SHINE application. This enhancement provides this capability directly, without the risk of software validation problems, and can also enable complete integration of BEAM and SHINE into a single application. This innovation enables SHINE deployment in domains where autonomy is used during flight-critical applications that require updates. It eliminates the need to halt the application and perform a potentially hazardous total-system upload before resuming it, with the attendant loss of system integrity. This software enables additional applications at JPL (microsensors, embedded mission hardware) and increases the marketability of these applications outside of JPL.
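
    The core idea above - swapping part of a running knowledge base without a restart - can be illustrated outside SHINE. The sketch below is a hypothetical, minimal Python analogue (SHINE itself compiles rules to interpreted Tiny C on the flight processor): rules live behind a lock, so installing a revised rule is atomic with respect to the running evaluation loop.

      # Hypothetical sketch of live rule updating (not SHINE's actual code):
      # a rule table that can be swapped while the host application runs.
      import threading

      class LiveKnowledgeBase:
          """Named rules; installs are atomic with respect to evaluation."""
          def __init__(self):
              self._lock = threading.Lock()
              self._rules = {}

          def install(self, name, rule_fn):
              with self._lock:        # readers never see a half-applied update
                  self._rules[name] = rule_fn

          def evaluate(self, name, facts):
              with self._lock:
                  rule_fn = self._rules[name]
              return rule_fn(facts)

      kb = LiveKnowledgeBase()
      kb.install("overtemp", lambda f: f["temp_c"] > 80.0)
      # ... application keeps running; a revised rule arrives later ...
      kb.install("overtemp", lambda f: f["temp_c"] > 75.0 or f["ramp"] > 2.0)
      print(kb.evaluate("overtemp", {"temp_c": 78.0, "ramp": 0.1}))  # True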

  6. Recovering from "amnesia" brought about by radiation. Verification of the "Over the air" (OTA) application software update mechanism On-Board Solar Orbiter's Energetic Particle Detector

    NASA Astrophysics Data System (ADS)

    Da Silva, Antonio; Sánchez Prieto, Sebastián; Rodriguez Polo, Oscar; Parra Espada, Pablo

    Computer memories are not supposed to forget, but they do. Because of the proximity of the Sun, from the Solar Orbiter boot software perspective it is mandatory to look out for permanent memory errors, resulting from single-event latch-up (SEL) failures, in application binaries stored in EEPROM and in their SDRAM deployment areas. In this situation, the last line of defense established by FDIR mechanisms is the capability of the boot software to provide an accurate report of the memory damage and to perform an application software update that avoids the harmed locations by flashing the EEPROM with a new binary. This paper describes the verification of the OTA EEPROM firmware update procedure of the boot software that will run in the Instrument Control Unit (ICU) of the Energetic Particle Detector (EPD) on-board Solar Orbiter. Since the maximum number of rewrites on the real EEPROM is limited, and permanent memory faults cannot easily be emulated on real hardware, the verification has been accomplished by use of a LEON2 virtual platform (Leon2ViP) with fault-injection capabilities and real SpaceWire interfaces, developed by the Space Research Group (SRG) of the University of Alcalá. This way it is possible to run the exact same target binary as would run on the real ICU platform. Furthermore, this virtual hardware-in-the-loop (VHIL) approach makes it possible to communicate with Electrical Ground Support Equipment (EGSE) through real SpaceWire interfaces in an agile, controlled and deterministic environment.
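
    As a rough illustration of the boot-software behavior described above (report the damage, then flash the new binary around it), the Python sketch below places an image into the first contiguous run of undamaged EEPROM pages and verifies it by read-back. The page size, the damage report, and the byte-array "EEPROM" are invented stand-ins for the real ICU hardware, not the EPD flight code.

      # Hypothetical sketch: flash an update while avoiding damaged pages.
      import zlib

      PAGE_SIZE = 256

      def find_free_region(pages_needed, total_pages, damaged):
          """Return the first contiguous run of undamaged pages that fits."""
          run_start, run_len = 0, 0
          for p in range(total_pages):
              if p in damaged:
                  run_start, run_len = p + 1, 0   # run broken by a bad page
              else:
                  run_len += 1
                  if run_len == pages_needed:
                      return run_start
          raise RuntimeError("no undamaged region large enough")

      def flash_update(eeprom, binary, damaged):
          pages = -(-len(binary) // PAGE_SIZE)      # ceiling division
          start = find_free_region(pages, len(eeprom) // PAGE_SIZE, damaged)
          lo, hi = start * PAGE_SIZE, start * PAGE_SIZE + len(binary)
          eeprom[lo:hi] = binary
          # Read back and checksum before declaring the update good.
          assert zlib.crc32(bytes(eeprom[lo:hi])) == zlib.crc32(binary)
          return start

      eeprom = bytearray(16 * PAGE_SIZE)            # toy 4 KiB "EEPROM"
      damaged = {1, 2}                              # from the damage report
      print(flash_update(eeprom, b"\x7fELF" + b"\x00" * 600, damaged))  # 3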

  7. A Networked Sensor System for the Analysis of Plot-Scale Hydrology.

    PubMed

    Villalba, German; Plaza, Fernando; Zhong, Xiaoyang; Davis, Tyler W; Navarro, Miguel; Li, Yimei; Slater, Thomas A; Liang, Yao; Liang, Xu

    2017-03-20

    This study presents the latest updates to the Audubon Society of Western Pennsylvania (ASWP) testbed, a $50,000 USD, 104-node outdoor multi-hop wireless sensor network (WSN). The network collects environmental data from over 240 sensors, including the EC-5, MPS-1 and MPS-2 soil moisture and soil water potential sensors and self-made sap flow sensors, across a heterogeneous deployment composed of MICAz, IRIS and TelosB wireless motes. A low-cost sensor board and software driver were developed for communicating with the analog and digital sensors. Innovative techniques (e.g., balanced energy-efficient routing and heterogeneous over-the-air mote reprogramming) maintained high success rates (>96%) and enabled effective software updating throughout the large-scale heterogeneous WSN. The edaphic properties monitored by the network showed strong agreement with data logger measurements and were fitted to pedotransfer functions for estimating local soil hydraulic properties. Furthermore, sap flow measurements, scaled to tree stand transpiration, were found to be at or below potential evapotranspiration estimates. While outdoor WSNs still present numerous challenges, the ASWP testbed proves to be an effective and (relatively) low-cost environmental monitoring solution and represents a step towards developing a platform for monitoring and quantifying statistically relevant environmental parameters from large-scale network deployments.

  8. A Networked Sensor System for the Analysis of Plot-Scale Hydrology

    PubMed Central

    Villalba, German; Plaza, Fernando; Zhong, Xiaoyang; Davis, Tyler W.; Navarro, Miguel; Li, Yimei; Slater, Thomas A.; Liang, Yao; Liang, Xu

    2017-01-01

    This study presents the latest updates to the Audubon Society of Western Pennsylvania (ASWP) testbed, a $50,000 USD, 104-node outdoor multi-hop wireless sensor network (WSN). The network collects environmental data from over 240 sensors, including the EC-5, MPS-1 and MPS-2 soil moisture and soil water potential sensors and self-made sap flow sensors, across a heterogeneous deployment composed of MICAz, IRIS and TelosB wireless motes. A low-cost sensor board and software driver were developed for communicating with the analog and digital sensors. Innovative techniques (e.g., balanced energy-efficient routing and heterogeneous over-the-air mote reprogramming) maintained high success rates (>96%) and enabled effective software updating throughout the large-scale heterogeneous WSN. The edaphic properties monitored by the network showed strong agreement with data logger measurements and were fitted to pedotransfer functions for estimating local soil hydraulic properties. Furthermore, sap flow measurements, scaled to tree stand transpiration, were found to be at or below potential evapotranspiration estimates. While outdoor WSNs still present numerous challenges, the ASWP testbed proves to be an effective and (relatively) low-cost environmental monitoring solution and represents a step towards developing a platform for monitoring and quantifying statistically relevant environmental parameters from large-scale network deployments. PMID:28335534

  9. Working paper : national costs of the metropolitan ITS infrastructure : updated with 2004 deployment data

    DOT National Transportation Integrated Search

    The purpose of this report, "Working Paper National Costs of the Metropolitan ITS infrastructure: Updated with 2004 Deployment Data," is to update the estimates of the costs remaining to deploy Intelligent Transportation Systems (ITS) infrastructure ...

  10. Working paper : national costs of the metropolitan ITS infrastructure : updated with 2005 deployment data

    DOT National Transportation Integrated Search

    2006-07-01

    The purpose of this report, "Working Paper National Costs of the Metropolitan ITS Infrastructure: Updated with 2005 Deployment Data," is to update the estimates of the costs remaining to fully deploy Intelligent Transportation Systems (ITS) infrastru...

  11. Design of a secure remote management module for a software-operated medical device.

    PubMed

    Burnik, Urban; Dobravec, Štefan; Meža, Marko

    2017-12-09

    Software-based medical devices need to be maintained throughout their entire life cycle. The efficiency of after-sales maintenance can be improved by managing medical systems remotely. This paper presents how to design the remote access function extensions in order to prevent risks imposed by uncontrolled remote access. A thorough analysis of standards and legislation requirements regarding safe operation and risk management of medical devices is presented. Based on the formal requirements, a multi-layer machine design solution is proposed that eliminates remote connectivity risks by strict separation of regular device functionalities from remote management service, deploys encrypted communication links and uses digital signatures to prevent mishandling of software images. The proposed system may also be used as an efficient version update of the existing medical device designs.
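
    A minimal sketch of the digital-signature safeguard mentioned above, assuming the third-party Python 'cryptography' package and an Ed25519 vendor key; the key handling and update format here are hypothetical illustrations, not the paper's actual design.

      # Verify a vendor signature before applying a software image.
      from cryptography.hazmat.primitives.asymmetric.ed25519 import (
          Ed25519PrivateKey,
      )
      from cryptography.exceptions import InvalidSignature

      vendor_key = Ed25519PrivateKey.generate()   # vendor side (illustration)
      image = b"medical-device-firmware-v2.bin contents"
      signature = vendor_key.sign(image)
      public_key = vendor_key.public_key()        # shipped with the device

      def apply_update(image: bytes, signature: bytes) -> bool:
          """Install the image only if the vendor signature verifies."""
          try:
              public_key.verify(signature, image)  # raises on any mismatch
          except InvalidSignature:
              return False                         # reject mishandled image
          # ... write image to the device's update partition here ...
          return True

      print(apply_update(image, signature))                 # True
      print(apply_update(image + b"tampered", signature))   # False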

  12. Mars Pathfinder Atmospheric Entry Navigation Operations

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Spencer, D. A.; Kallemeyn, P. H.; Vaughan, R. M.

    1997-01-01

    On July 4, 1997, after traveling close to 500 million km, the Pathfinder spacecraft successfully completed entry, descent, and landing, coming to rest on the surface of Mars just 27 km from its target point. In the present paper, the atmospheric entry and approach navigation activities required in support of this mission are discussed. In particular, the flight software parameter update and landing site prediction analyses performed by the Pathfinder operations navigation team are described. A suite of simulation tools developed during Pathfinder's design cycle, but extendible to Pathfinder operations, is also presented. Data regarding the accuracy of the primary parachute deployment algorithm are extracted from the Pathfinder flight data, demonstrating that this algorithm performed as predicted. The increased probability of mission success through the software parameter update process is discussed. This paper also demonstrates the importance of modeling atmospheric flight uncertainties in the estimation of an accurate landing site. With these atmospheric effects included, the final landed ellipse prediction differs from the post-flight determined landing site by less than 0.5 km in downtrack.

  13. Application of Advanced Decision-Analytic Technology to Rapid Deployment Joint Task Force Problems

    DTIC Science & Technology

    1981-06-01

    [OCR fragments of a cost/benefit options table covering Diego Garcia airfield and comm/nav-aid alternatives] ... meetings: (1) To organize, display, and update the working group's judgements about the relative costs and benefits of each level of each variable in ... benefit to the organization. (3) Assess costs - In the DESIGN software, there is one type of limited resource to be allocated to the variables. This ...

  14. A Data Handling System for Modern and Future Fermilab Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Illingworth, R. A.

    2014-01-01

    Current and future Fermilab experiments such as MINERvA, NOνA, and MicroBooNE are now using an improved version of the Fermilab SAM data handling system. SAM was originally used by the CDF and D0 experiments for Run II of the Fermilab Tevatron to provide file metadata and location cataloguing, uploading of new files to tape storage, dataset management, file transfers between global processing sites, and processing history tracking. However, SAM was heavily tailored to the Run II environment and required complex and hard-to-deploy client software, which made it hard to adapt to new experiments. The Fermilab Computing Sector has progressively updated SAM to use modern, standardized technologies in order to more easily deploy it for current and upcoming Fermilab experiments, and to support the data preservation efforts of the Run II experiments.

  15. Models for Deploying Open Source and Commercial Software to Support Earth Science Data Processing and Distribution

    NASA Astrophysics Data System (ADS)

    Yetman, G.; Downs, R. R.

    2011-12-01

    Software deployment is needed to process and distribute scientific data throughout the data lifecycle. Developing software in-house can take software development teams away from other software development projects and can require efforts to maintain the software over time. Adopting and reusing software and system modules that have been previously developed by others can reduce in-house software development and maintenance costs and can contribute to the quality of the system being developed. A variety of models are available for reusing and deploying software and systems that have been developed by others. These deployment models include open source software, vendor-supported open source software, commercial software, and combinations of these approaches. Deployment in Earth science data processing and distribution has demonstrated the advantages and drawbacks of each model. Deploying open source software offers advantages for developing and maintaining scientific data processing systems and applications. By joining an open source community that is developing a particular system module or application, a scientific data processing team can contribute to aspects of the software development without having to commit to developing the software alone. Communities of interested developers can share the work while focusing on activities that utilize in-house expertise and address internal requirements. Maintenance is also shared by members of the community. Deploying vendor-supported open source software offers similar advantages to open source software. However, by procuring the services of a vendor, the in-house team can rely on the vendor to provide, install, and maintain the software over time. Vendor-supported open source software may be ideal for teams that recognize the value of an open source software component or application and would like to contribute to the effort, but do not have the time or expertise to contribute extensively. Vendor-supported software may also have the additional benefits of guaranteed up-time, bug fixes, and vendor-added enhancements. Deploying commercial software can be advantageous for obtaining system or software components offered by a vendor that meet in-house requirements. The vendor can be contracted to provide installation, support and maintenance services as needed. Combining these options offers a menu of choices, enabling selection of system components or software modules that meet the evolving requirements encountered throughout the scientific data lifecycle.

  16. Using Consumer Electronics and Apps in Industrial Environments - Development of a Framework for Dynamic Feature Deployment and Extension by Using Apps on Field Devices

    NASA Astrophysics Data System (ADS)

    Schmitt, Mathias

    2014-12-01

    The aim of this paper is to give a preliminary insight into current work in the field of mobile interaction in industrial environments using established interaction technologies and metaphors from the consumer goods industry. The major objective is the development and implementation of a holistic app framework which enables dynamic feature deployment and extension by using mobile apps on industrial field devices. As a result, field device functionalities can be updated and adapted effectively in accordance with well-known app concepts from consumer electronics, to comply with the urgent requirements of more flexible and changeable factory systems of the future. In addition, a much more user-friendly and usable interaction with field devices can be realized. Proprietary software solutions and device-stationary user interfaces can be overcome and replaced by uniform, cross-vendor solutions.

  17. Global Deployment Analysis System Algorithm Description (With Updates)

    DTIC Science & Technology

    1998-09-01

    Global Deployment Analysis System Algorithm Description (with Updates), by Noetics, Inc., for U.S. Army Concepts Analysis Agency Contract ... This Algorithm Description for the Global Deployment Analysis System (GDAS) was prepared by Noetics, Inc. ... support for Paradox Runtime will be provided by the GDAS developers, CAA and Noetics Inc., and not by Borland International. GDAS for Windows has ...

  18. Autonomous system for Web-based microarray image analysis.

    PubMed

    Bozinov, Daniel

    2003-12-01

    Software-based feature extraction from DNA microarray images still requires human intervention on various levels. Manual adjustment of grid and metagrid parameters, precise alignment of superimposed grid templates and gene spots, or simply identification of large-scale artifacts have to be performed beforehand to reliably analyze DNA signals and correctly quantify their expression values. Ideally, a Web-based system with input solely confined to a single microarray image and a data table as output containing measurements for all gene spots would directly transform raw image data into abstracted gene expression tables. Sophisticated algorithms with advanced procedures for iterative correction function can overcome imminent challenges in image processing. Herein is introduced an integrated software system with a Java-based interface on the client side that allows for decentralized access and furthermore enables the scientist to instantly employ the most updated software version at any given time. This software tool is extended from PixClust as used in Extractiff incorporated with Java Web Start deployment technology. Ultimately, this setup is destined for high-throughput pipelines in genome-wide medical diagnostics labs or microarray core facilities aimed at providing fully automated service to its users.

  19. Sensor Placement Optimization using Chama

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klise, Katherine A.; Nicholson, Bethany L.; Laird, Carl Damon

    Continuous or regularly scheduled monitoring has the potential to quickly identify changes in the environment. However, even with low-cost sensors, only a limited number of sensors can be deployed. The physical placement of these sensors, along with the sensor technology and operating conditions, can have a large impact on the performance of a monitoring strategy. Chama is an open source Python package which includes mixed-integer, stochastic programming formulations to determine sensor locations and technology that maximize monitoring effectiveness. The methods in Chama are general and can be applied to a wide range of applications. Chama is currently being used to design sensor networks to monitor airborne pollutants and to monitor water quality in water distribution systems. The following documentation includes installation instructions and examples, a description of software features, and the software license. The software is intended to be used by regulatory agencies, industry, and the research community. It is assumed that the reader is familiar with the Python programming language. References are included for additional background on software components. Online documentation, hosted at http://chama.readthedocs.io/, will be updated as new features are added. The online version includes API documentation.
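
    Chama's formulations are mixed-integer stochastic programs; the sketch below is not Chama's API but a minimal greedy illustration of the underlying placement problem: given a fixed sensor budget, repeatedly pick the candidate location that detects the most still-undetected scenarios. The coverage data are invented.

      # Greedy sensor placement (an illustrative baseline, NOT Chama's MILP).
      def greedy_placement(coverage, budget):
          """coverage: {location: set of scenario ids it can detect}"""
          chosen, detected = [], set()
          for _ in range(budget):
              best = max(coverage, key=lambda loc: len(coverage[loc] - detected))
              if not coverage[best] - detected:
                  break                       # no remaining marginal gain
              chosen.append(best)
              detected |= coverage[best]
          return chosen, detected

      coverage = {                            # invented detection scenarios
          "node_A": {1, 2, 3},
          "node_B": {3, 4},
          "node_C": {4, 5, 6, 7},
          "node_D": {2, 7},
      }
      sensors, detected = greedy_placement(coverage, budget=2)
      print(sensors, sorted(detected))        # ['node_C', 'node_A'] [1..7]

    Greedy selection is a common fast baseline for this coverage-maximization structure; the optimization formulations in Chama trade that speed for optimality guarantees.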

  20. Lessons learned in deploying software estimation technology and tools

    NASA Technical Reports Server (NTRS)

    Panlilio-Yap, Nikki; Ho, Danny

    1994-01-01

    Developing a software product involves estimating various project parameters. This is typically done in the planning stages of the project when there is much uncertainty and very little information. Coming up with accurate estimates of effort, cost, schedule, and reliability is a critical problem faced by all software project managers. The use of estimation models and commercially available tools in conjunction with the best bottom-up estimates of software-development experts enhances the ability of a product development group to derive reasonable estimates of important project parameters. This paper describes the experience of the IBM Software Solutions (SWS) Toronto Laboratory in selecting software estimation models and tools and deploying their use to the laboratory's product development groups. It introduces the SLIM and COSTAR products, the software estimation tools selected for deployment to the product areas, and discusses the rationale for their selection. The paper also describes the mechanisms used for technology injection and tool deployment, and concludes with a discussion of important lessons learned in the technology and tool insertion process.
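
    The paper does not reproduce the internals of SLIM or COSTAR, but as a flavor of what parametric estimation models compute, here is the classic basic COCOMO effort/schedule relation (Boehm, 1981) in Python. COSTAR is generally described as a COCOMO-family tool, while SLIM uses Putnam's different model, so the constants below are purely illustrative.

      # Basic COCOMO, organic mode (illustration only; not the paper's tools).
      def cocomo_basic(kloc: float, a: float = 2.4, b: float = 1.05):
          effort = a * kloc ** b              # estimated person-months
          schedule = 2.5 * effort ** 0.38     # estimated elapsed months
          return effort, schedule

      effort, months = cocomo_basic(32.0)     # a hypothetical 32 KLOC product
      print(f"{effort:.0f} person-months over {months:.0f} months")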

  1. Physician use of updated anti-virus software in a tertiary Nigerian hospital.

    PubMed

    Laabes, E P; Nyango, D D; Ayedima, M M; Ladep, N G

    2010-01-01

    While physicians are becoming increasingly dependent on computers and the internet, highly lethal malware continues to be loaded into cyberspace. We sought to assess the proportion of physicians with updated anti-virus software in Jos University Teaching Hospital, Nigeria, and to determine perceived barriers to getting updates. We used a pre-tested, semi-structured, self-administered questionnaire to conduct a cross-sectional survey among 118 physicians. The mean age (+/- SD) of subjects was 34 (+/- 4) years, with 94 male and 24 female physicians. Forty-two (36.5%) of 115 physicians with anti-virus software used an updated program (95% CI: 27, 45). The top three anti-virus programs were: McAfee, 40 (33.9%); AVG, 37 (31.4%); and Norton, 17 (14.4%). Common infections were: Trojan horse, 22 (29.7%); Brontok worm, 8 (10.8%); and Ravmonlog.exe, 5 (6.8%). Internet browsing with a firewall was an independent determinant of the use of updated anti-virus software (OR 4.3; 95% CI: 1.86, 10.02; P < 0.001). A busy schedule (40; 33.9%) and lack of a credit card (39; 33.1%) were perceived barriers to updating anti-virus software. The use of regularly updated anti-virus software is sub-optimal among physicians, implying vulnerability to computer viruses. Physicians should be careful with flash drives and should avoid becoming victims of the raging arms race between malware producers and anti-virus software developers.

  2. Open-source framework for documentation of scientific software written on MATLAB-compatible programming languages

    NASA Astrophysics Data System (ADS)

    Konnik, Mikhail V.; Welsh, James

    2012-09-01

    Numerical simulators for adaptive optics systems have become an essential tool for the research and development of future advanced astronomical instruments. However, the growing code base of a numerical simulator makes it difficult to continue to support the code itself. The problem of adequate documentation of astronomical software for adaptive optics simulators can complicate development, since the documentation must contain up-to-date schemes and mathematical descriptions of what is implemented in the software code. Although most modern programming environments like MATLAB or Octave have built-in documentation abilities, these are often insufficient for the description of a typical adaptive optics simulator code. This paper describes a general cross-platform framework for the documentation of scientific software using open-source tools such as LaTeX, Mercurial, Doxygen, and Perl. Using a Perl script that translates MATLAB M-file comments into C-like comments, one can use Doxygen to generate and update the documentation for the scientific source code. The documentation generated by this framework contains the current code description with mathematical formulas, images, and bibliographical references. A detailed description of the framework components is presented, as well as guidelines for the framework's deployment. Examples of the code documentation for the scripts and functions of a MATLAB-based adaptive optics simulator are provided.
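
    The comment-translation step can be sketched briefly. The framework's actual filter is a Perl script; the hypothetical Python equivalent below does the same core job and could be wired in through Doxygen's INPUT_FILTER option, which runs a program per source file and reads the transformed text from stdout. It handles only line-leading '%' comments; the real filter does more (e.g., function signatures).

      # Hypothetical stand-in for the Perl comment translator: Doxygen's
      # INPUT_FILTER calls this with a file name and reads stdout, so
      # MATLAB '%' comment lines become C++-style '//' lines.
      import re
      import sys

      def matlab_to_clike(line: str) -> str:
          # '%%' section markers and '%' comments both become '//';
          # all other lines pass through unchanged.
          return re.sub(r"^(\s*)%+", r"\1//", line)

      if __name__ == "__main__":
          with open(sys.argv[1], encoding="utf-8", errors="replace") as src:
              for line in src:
                  sys.stdout.write(matlab_to_clike(line))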

  3. NASA's Global Imagery Browse Services - Technologies for Visualizing Earth Science Data

    NASA Astrophysics Data System (ADS)

    Cechini, M. F.; Boller, R. A.; Baynes, K.; Schmaltz, J. E.; Thompson, C. K.; Roberts, J. T.; Rodriguez, J.; Wong, M. M.; King, B. A.; King, J.; De Luca, A. P.; Pressley, N. N.

    2017-12-01

    For more than 20 years, the NASA Earth Observing System (EOS) has collected earth science data for thousands of scientific parameters, now totaling nearly 15 petabytes of data. In 2013, NASA's Global Imagery Browse Services (GIBS) formed its vision to "transform how end users interact and discover [EOS] data through visualizations." This vision included leveraging scientific and community best practices and standards to provide a scalable, compliant, and authoritative source for EOS earth science data visualizations. Since that time, GIBS has grown quickly and now services millions of daily requests for over 500 imagery layers representing hundreds of earth science parameters to a broad community of users. For many of these parameters, visualizations are available within hours of acquisition from the satellite. For others, visualizations are available for the entire mission of the satellite. The GIBS system is built upon the OnEarth and MRF open source software projects, which are provided by the GIBS team. This software facilitates standards-based access for compliance with existing GIS tools. The GIBS imagery layers are predominantly rasterized images represented in two-dimensional coordinate systems, though multiple projections are supported. The OnEarth software also supports the GIBS ingest pipeline to facilitate low-latency updates to new or updated visualizations. This presentation will focus on the following topics: an overview of GIBS visualizations and the user community; current benefits and limitations of the OnEarth and MRF software projects and related standards; GIBS access methods and their (in)compatibilities with existing GIS libraries and applications; considerations for visualization accuracy and understandability; and future plans for more advanced visualization concepts (including vertical profiles and vector-based representations) and for Amazon Web Service support and deployments.

  4. Software as a service approach to sensor simulation software deployment

    NASA Astrophysics Data System (ADS)

    Webster, Steven; Miller, Gordon; Mayott, Gregory

    2012-05-01

    Traditionally, military simulation has been problem-domain specific. Executing an exercise currently requires multiple simulation software providers to specialize, deploy, and configure their respective implementations, integrate the collection of software to achieve a specific system behavior, and then execute it for the purpose at hand. This approach leads to rigid system integrations which require simulation expertise for each deployment due to changes in location, hardware, and software. Our alternative is Software as a Service (SaaS), predicated on the virtualization of Night Vision Electronic Sensors (NVESD) sensor simulations as an exemplary case. Management middleware elements layer self-provisioning, configuration, and integration services onto the virtualized sensors to present a system of services at run time. Given an Infrastructure as a Service (IaaS) environment, the enabled and managed system of simulations yields durable SaaS delivery without requiring user simulation expertise. Persistent SaaS simulations would provide on-demand availability to connected users, decrease integration costs and timelines, and allow the domain community to benefit from immediate deployment of lessons learned.

  5. AWIPS II Application Development, a SPoRT Perspective

    NASA Technical Reports Server (NTRS)

    Burks, Jason E.; Smith, Matthew; McGrath, Kevin M.

    2014-01-01

    The National Weather Service (NWS) is deploying its next-generation decision support system, called AWIPS II (Advanced Weather Interactive Processing System II). NASA's Short-term Prediction Research and Transition (SPoRT) Center has developed several software 'plug-ins' to extend the capabilities of AWIPS II. SPoRT aims to continue its mission of improving short-term forecasts by providing NASA and NOAA products on the decision support system used at NWS weather forecast offices (WFOs). These products are not included in the standard Satellite Broadcast Network feed provided to WFOs. SPoRT has had success in providing support to WFOs as they have transitioned to AWIPS II. Specific examples of transitioning SPoRT plug-ins to WFOs with newly deployed AWIPS II systems will be presented. Proving Ground activities (GOES-R and JPSS) will dominate SPoRT's future AWIPS II activities, including tool development as well as enhancements to existing products. In early 2012 SPoRT initiated the Experimental Product Development Team, a group of AWIPS II developers from several institutions supporting NWS forecasters with innovative products. The results of the team's spring and fall 2013 meeting will be presented. Since AWIPS II developers now include employees at WFOs, as well as many other institutions related to weather forecasting, the NWS has dealt with a multitude of software governance issues related to the difficulties of multiple remotely collaborating software developers. This presentation will provide additional examples of Research-to-Operations plugins, as well as an update on how governance issues are being handled in the AWIPS II developer community.

  6. Email Updates about ADOPT | Transportation Research | NREL

    Science.gov Websites

    Subscription page for email updates about ADOPT, the Automotive Deployment Options Projection Tool; subscribers provide an email address (required), name, and organization/affiliation.

  7. NASA Tech Briefs, November 2003

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Topics covered include: Computer Program Recognizes Patterns in Time-Series Data; Program for User-Friendly Management of Input and Output Data Sets; Noncoherent Tracking of a Source of a Data-Modulated Signal; Software for Acquiring Image Data for PIV; Detecting Edges in Images by Use of Fuzzy Reasoning; A Timer for Synchronous Digital Systems; Prototype Parts of a Digital Beam-Forming Wide-Band Receiver; High-Voltage Droplet Dispenser; Network Extender for MIL-STD-1553 Bus; MMIC HEMT Power Amplifier for 140 to 170 GHz; Piezoelectric Diffraction-Based Optical Switches; Numerical Modeling of Nanoelectronic Devices; Organizing Diverse, Distributed Project Information; Eigensolver for a Sparse, Large Hermitian Matrix; Modified Polar-Format Software for Processing SAR Data; e-Stars Template Builder; Software for Acoustic Rendering; Functionally Graded Nanophase Beryllium/Carbon Composites; Thin Thermal-Insulation Blankets for Very High Temperatures; Prolonging Microgravity on Parabolic Airplane Flights; Device for Locking a Control Knob; Cable-Dispensing Cart; Foam Sensor Structures Would be Self-Deployable and Survive Hard Landings; Real-Gas Effects on Binary Mixing Layers; Earth-Space Link Attenuation Estimation via Ground Radar Kdp; Wedge Heat-Flux Indicators for Flash Thermography; Measuring Diffusion of Liquids by Common-Path Interferometry; Zero-Shear, Low-Disturbance Optical Delay Line; Whispering-Gallery Mode-Locked Lasers; Spatial Light Modulators as Optical Crossbar Switches; Update on EMD and Hilbert-Spectra Analysis of Time Series; Quad-Tree Visual-Calculus Analysis of Satellite Coverage; Dyakonov-Perel Effect on Spin Dephasing in n-Type GaAs; Update on Area Production in Mixing of Supercritical Fluids; and Quasi-Sun-Pointing of Spacecraft Using Radiation Pressure.

  8. Lessons Learned from the Deployment and Integration of a Microwave Sounder Based Tropical Cyclone Intensity and Surface Wind Estimation Algorithm into NOAA/NESDIS Satellite Product Operations

    NASA Astrophysics Data System (ADS)

    Longmore, S. P.; Knaff, J. A.; Schumacher, A.; Dostalek, J.; DeMaria, R.; Chirokova, G.; Demaria, M.; Powell, D. C.; Sigmund, A.; Yu, W.

    2014-12-01

    The Colorado State University (CSU) Cooperative Institute for Research in the Atmosphere (CIRA) has recently deployed a tropical cyclone (TC) intensity and surface wind radii estimation algorithm that utilizes the Suomi National Polar-orbiting Partnership (S-NPP) satellite's Advanced Technology Microwave Sounder (ATMS) and the Advanced Microwave Sounding Unit (AMSU) from the NOAA-18, NOAA-19 and MetOp-A polar-orbiting satellites for testing, integration and operations under the Product System Development and Implementation (PSDI) projects at NOAA's National Environmental Satellite, Data, and Information Service (NESDIS). This presentation discusses the evolution of the NPP/AMSU TC algorithms internally at CIRA and their migration and integration into the NOAA Data Exploitation (NDE) development and testing frameworks. The discussion will focus on 1) the development cycle of internal NPP/AMSU TC algorithm components by scientists and software engineers, 2) the exchange of these components into the NPP/AMSU TC software systems using the Subversion version control system and other exchange methods, 3) testing, debugging and integration of the NPP/AMSU TC systems at both CIRA and NESDIS, and 4) the update cycle of new releases through continuous integration. Lastly, the methods that were effective, and those that need revision, will be detailed for the next iteration of the NPP/AMSU TC system.

  9. openBEB: open biological experiment browser for correlative measurements

    PubMed Central

    2014-01-01

    Background: New experimental methods must be developed to study interaction networks in systems biology. To reduce biological noise, individual subjects, such as single cells, should be analyzed using high throughput approaches. The measurement of several correlative physical properties would further improve data consistency. Accordingly, a considerable quantity of data must be acquired, correlated, catalogued and stored in a database for subsequent analysis. Results: We have developed openBEB (open Biological Experiment Browser), a software framework for data acquisition, coordination, annotation and synchronization with database solutions such as openBIS. OpenBEB consists of two main parts: a core program and a plug-in manager. Whereas the data-type-independent core of openBEB maintains a local container of raw data and metadata and provides annotation and data management tools, all data-specific tasks are performed by plug-ins. The open architecture of openBEB enables the fast integration of plug-ins, e.g., for data acquisition or visualization. A macro-interpreter allows the automation and coordination of the different modules. An update and deployment mechanism keeps the core program, the plug-ins and the metadata definition files in sync with a central repository. Conclusions: The versatility, the simple deployment and update mechanism, and the scalability in terms of module integration offered by openBEB make this software interesting for a large scientific community. OpenBEB targets three types of researcher, ideally working closely together: (i) engineers and scientists developing new methods and instruments, e.g., for systems biology, (ii) scientists performing biological experiments, and (iii) theoreticians and mathematicians analyzing data. The design of openBEB enables the rapid development of plug-ins, which inherently benefit from the "housekeeping" abilities of the core program. We report the use of openBEB to combine live cell microscopy, microfluidic control and visual proteomics. In this example, measurements from diverse complementary techniques are combined and correlated. PMID:24666611

  10. 47 CFR 400.7 - Eligible uses for grant funds.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the acquisition and deployment of hardware and software that enables the implementation and operation of Phase II E-911 services, for the acquisition and deployment of hardware and software to enable the migration to an IP-enabled emergency network, for the training in the use of such hardware and software, or...

  11. 47 CFR 400.7 - Eligible uses for grant funds.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... the acquisition and deployment of hardware and software that enables the implementation and operation of Phase II E-911 services, for the acquisition and deployment of hardware and software to enable the migration to an IP-enabled emergency network, for the training in the use of such hardware and software, or...

  12. 47 CFR 400.7 - Eligible uses for grant funds.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the acquisition and deployment of hardware and software that enables the implementation and operation of Phase II E-911 services, for the acquisition and deployment of hardware and software to enable the migration to an IP-enabled emergency network, for the training in the use of such hardware and software, or...

  13. 47 CFR 400.7 - Eligible uses for grant funds.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... the acquisition and deployment of hardware and software that enables the implementation and operation of Phase II E-911 services, for the acquisition and deployment of hardware and software to enable the migration to an IP-enabled emergency network, for the training in the use of such hardware and software, or...

  14. 47 CFR 400.7 - Eligible uses for grant funds.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the acquisition and deployment of hardware and software that enables the implementation and operation of Phase II E-911 services, for the acquisition and deployment of hardware and software to enable the migration to an IP-enabled emergency network, for the training in the use of such hardware and software, or...

  15. Using Docker Compose for the Simple Deployment of an Integrated Drug Target Screening Platform.

    PubMed

    List, Markus

    2017-06-10

    Docker virtualization allows software tools to be executed in an isolated and controlled environment referred to as a container. In Docker containers, dependencies are provided exactly as intended by the developer and, consequently, they simplify the distribution of scientific software and foster reproducible research. The Docker paradigm is that each container encapsulates one particular software tool. However, to analyze complex biomedical data sets, it is often necessary to combine several software tools into elaborate workflows. To address this challenge, several Docker containers need to be instantiated and properly integrated, which complicates the software deployment process unnecessarily. Here, we demonstrate how an extension to Docker, Docker Compose, can be used to mitigate these problems by providing a unified setup routine that deploys several tools in an integrated fashion. We demonstrate the power of this approach by the example of a Docker Compose setup for a drug target screening platform consisting of five integrated web applications and shared infrastructure, deployable in just two lines of code.
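
    A minimal, hypothetical docker-compose.yml conveys the pattern (the paper's actual platform defines five web applications plus shared infrastructure; the service and image names below are placeholders). The "two lines" then amount to fetching the images and starting the whole stack.

      # docker-compose.yml - hypothetical two-service sketch of the pattern
      version: "3"
      services:
        webapp:
          image: example/screening-webapp:latest   # placeholder image name
          ports:
            - "8080:8080"
          depends_on:
            - db
        db:
          image: postgres:9.6
          environment:
            POSTGRES_PASSWORD: example

      # Deployment then reduces to two commands:
      #   docker-compose pull
      #   docker-compose up -d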

  16. 76 FR 16785 - Meeting for Software Developers on the Technical Specifications for Common Formats for Patient...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-25

    ... Software Developers on the Technical Specifications for Common Formats for Patient Safety Data Collection... designed as an interactive forum where PSOs and software developers can provide input on these technical... updated event descriptions, forms, and technical specifications for software developers. As an update to...

  17. Introducing the CUAHSI Hydrologic Information System Desktop Application (HydroDesktop) and Open Development Community

    NASA Astrophysics Data System (ADS)

    Ames, D.; Kadlec, J.; Horsburgh, J. S.; Maidment, D. R.

    2009-12-01

    The Consortium of Universities for the Advancement of Hydrologic Sciences (CUAHSI) Hydrologic Information System (HIS) project includes extensive development of data storage and delivery tools and standards, including WaterML (a language for sharing hydrologic data sets via web services) and HIS Server (a software tool set for delivering WaterML from a server). These and other CUAHSI HIS tools have been under development and deployment for several years and together present a relatively complete software "stack" to support the consistent storage and delivery of hydrologic and other environmental observation data. This presentation describes the development of a new HIS software tool called "HydroDesktop" and the development of an online open source software development community to update and maintain the software. HydroDesktop is a local (i.e., not server-based) client-side software tool that ultimately will run on multiple operating systems and will provide a highly usable level of access to HIS services. The software provides many key capabilities including data query, map-based visualization, data download, local data maintenance, editing, graphing, data export to selected model-specific data formats, linkage with integrated modeling systems such as OpenMI, and ultimately upload to HIS servers from the local desktop software. As the software is presently in the early stages of development, this presentation will focus on the design approach and paradigm, and is viewed as an opportunity to encourage participation in the open development community. Indeed, recognizing the value of community-based code development as a means of ensuring end-user adoption, this project has adopted an "iterative" or "spiral" software development approach, which will be described in this presentation.

  18. Design of Measure and Control System for Precision Pesticide Deploying Dynamic Simulating Device

    NASA Astrophysics Data System (ADS)

    Liang, Yong; Liu, Pingzeng; Wang, Lu; Liu, Jiping; Wang, Lang; Han, Lei; Yang, Xinxin

    A measurement and control system for a precision pesticide-deployment simulator is designed in order to study pesticide deployment technology. The system can simulate every state of practical pesticide deployment and carry out precise, simultaneous measurement of every factor affecting deployment effectiveness. The hardware and software follow a modular structural design: the system is divided into distinct hardware and software function modules, and corresponding modules are developed for each. The module interfaces are uniformly defined, which simplifies module connection and enhances the system's generality, development efficiency, and reliability, making the program easy to extend and maintain. Several of the hardware and software modules can be readily adapted to other measurement and control systems. The paper introduces the design of the special numeric control system, the main module of the information acquisition system, and the speed acquisition module in order to explain the module design process.

  19. Distributed Common Ground System Army Increment 1 (DCGS-A Inc 1)

    DTIC Science & Technology

    2016-03-01

    [Acronym list fragment: DoD - Department of Defense; DoDAF - DoD Architecture Framework; FD - Full Deployment; FDD - Full Deployment Decision; FY - Fiscal Year] ... updated prior to the FDD ITAB in December 2012 and provided additional COA analysis/validation referenced in the FDD ADM (December 14, 2012) and FDD ... required by 10 U.S.C. 2334(a)(6). The Army Cost Review Board developed the FDD Army Cost Position (ACP), dated October 19, 2012, through the update of ...

  20. Bringing your tools to CyVerse Discovery Environment using Docker

    PubMed Central

    Devisetty, Upendra Kumar; Kennedy, Kathleen; Sarando, Paul; Merchant, Nirav; Lyons, Eric

    2016-01-01

    Docker has become a very popular container-based virtualization platform for software distribution that has revolutionized the way in which scientific software and software dependencies (software stacks) can be packaged, distributed, and deployed. Docker makes the complex and time-consuming installation procedures needed for scientific software a one-time process. Because it enables platform-independent installation, versioning of software environments, and easy redeployment and reproducibility, Docker is an ideal candidate for the deployment of identical software stacks on different compute environments such as XSEDE and Amazon AWS. CyVerse’s Discovery Environment also uses Docker for integrating its powerful, community-recommended software tools into CyVerse’s production environment for public use. This paper will help users bring their tools into the CyVerse Discovery Environment (DE), which not only allows users to integrate their tools with relative ease compared to the earlier method of tool deployment in the DE, but also helps users share their apps with collaborators and release them for public use. PMID:27803802

  1. Bringing your tools to CyVerse Discovery Environment using Docker.

    PubMed

    Devisetty, Upendra Kumar; Kennedy, Kathleen; Sarando, Paul; Merchant, Nirav; Lyons, Eric

    2016-01-01

    Docker has become a very popular container-based virtualization platform for software distribution that has revolutionized the way in which scientific software and software dependencies (software stacks) can be packaged, distributed, and deployed. Docker makes the complex and time-consuming installation procedures needed for scientific software a one-time process. Because it enables platform-independent installation, versioning of software environments, and easy redeployment and reproducibility, Docker is an ideal candidate for the deployment of identical software stacks on different compute environments such as XSEDE and Amazon AWS. CyVerse's Discovery Environment also uses Docker for integrating its powerful, community-recommended software tools into CyVerse's production environment for public use. This paper will help users bring their tools into the CyVerse Discovery Environment (DE), which not only allows users to integrate their tools with relative ease compared to the earlier method of tool deployment in the DE, but also helps users share their apps with collaborators and release them for public use.
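
    For context, packaging a tool for this kind of integration starts from an ordinary Dockerfile; the one below is a hypothetical minimal example (the tool name, base image, and paths are placeholders, not CyVerse requirements).

      # Dockerfile - hypothetical minimal packaging of an analysis tool
      FROM ubuntu:16.04
      # Install the interpreter the tool needs, then clean apt caches
      RUN apt-get update && \
          apt-get install -y --no-install-recommends python python-pip && \
          rm -rf /var/lib/apt/lists/*
      # Placeholder tool script copied into the image
      COPY count_kmers.py /usr/local/bin/count_kmers.py
      ENTRYPOINT ["python", "/usr/local/bin/count_kmers.py"]

      # Build once, then run the identical stack anywhere Docker runs:
      #   docker build -t myuser/count-kmers:1.0 .
      #   docker run --rm -v $PWD:/data myuser/count-kmers:1.0 /data/reads.fa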

  2. SIDECACHE: Information access, management and dissemination framework for web services.

    PubMed

    Doderer, Mark S; Burkhardt, Cory; Robbins, Kay A

    2011-06-14

    Many bioinformatics algorithms and data sets are deployed using web services so that the results can be explored via the Internet and easily integrated into other tools and services. These services often include data from other sites that is accessed either dynamically or through file downloads. Developers of these services face several problems because of the dynamic nature of the information from the upstream services. Many publicly available repositories of bioinformatics data frequently update their information. When such an update occurs, the developers of the downstream service may also need to update. For file downloads, this process is typically performed manually followed by web service restart. Requests for information obtained by dynamic access of upstream sources is sometimes subject to rate restrictions. SideCache provides a framework for deploying web services that integrate information extracted from other databases and from web sources that are periodically updated. This situation occurs frequently in biotechnology where new information is being continuously generated and the latest information is important. SideCache provides several types of services including proxy access and rate control, local caching, and automatic web service updating. We have used the SideCache framework to automate the deployment and updating of a number of bioinformatics web services and tools that extract information from remote primary sources such as NCBI, NCIBI, and Ensembl. The SideCache framework also has been used to share research results through the use of a SideCache derived web service.
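
    The proxy-plus-local-cache idea can be sketched compactly. The code below is not SideCache itself; it is a minimal Python stand-in showing the mechanism: serve a cached copy of an upstream response until a time-to-live expires, so the primary source sees at most one request per path per TTL window. The upstream URL and TTL are placeholders.

      # Minimal caching-proxy sketch (illustrative, not SideCache's code).
      import time
      import urllib.request
      from http.server import BaseHTTPRequestHandler, HTTPServer

      UPSTREAM = "https://example.org"   # placeholder primary source
      TTL = 600                          # seconds before a re-fetch
      _cache = {}                        # path -> (fetched_at, body)

      class CachingProxy(BaseHTTPRequestHandler):
          def do_GET(self):
              entry = _cache.get(self.path)
              if entry is None or time.time() - entry[0] > TTL:
                  # Cache miss or stale: fetch once from the upstream source
                  with urllib.request.urlopen(UPSTREAM + self.path) as resp:
                      entry = (time.time(), resp.read())
                  _cache[self.path] = entry
              self.send_response(200)
              self.end_headers()
              self.wfile.write(entry[1])   # serve the cached body

      if __name__ == "__main__":
          HTTPServer(("localhost", 8080), CachingProxy).serve_forever()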

  3. DSS command software update

    NASA Technical Reports Server (NTRS)

    Stinnett, W. G.

    1980-01-01

    The modifications, additions, and testing results for a version of the Deep Space Station command software, generated for support of the Voyager Saturn encounter, are discussed. The software update requirements included efforts to: (1) recode portions of the software to permit recovery of approximately 2000 words of memory; (2) correct five Voyager Ground Data System liens; (3) provide the capability to automatically turn off the command processor assembly local printer during periods of low activity; and (4) correct anomalies existing in the software.

  4. CANES Contracting Strategies for Full Deployment

    DTIC Science & Technology

    2012-01-01

    CANES Program Functions in Full Deployment ... contractors will design CANES, identifying specific hardware and developing the integration software necessary to consolidate existing C4I functions. At ... would be responsible for executing the purchased design and assembling the systems, ensuring that the integration software is functioning. An ...

  5. 47 CFR 59.3 - Information concerning deployment of new services and equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... services and equipment, including any software or upgrades of software integral to the use or operation of... services and equipment. 59.3 Section 59.3 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INFRASTRUCTURE SHARING § 59.3 Information concerning deployment of...

  6. Component-Based Visualization System

    NASA Technical Reports Server (NTRS)

    Delgado, Francisco

    2005-01-01

    A software system has been developed that gives engineers and operations personnel with no "formal" programming expertise, but who are familiar with the Microsoft Windows operating system, the ability to create visualization displays to monitor the health and performance of aircraft/spacecraft. This software system is currently supporting the X38 V201 spacecraft component/system testing and is intended to give users the ability to create, test, deploy, and certify their subsystem displays in a fraction of the time that it would take to do so using previous software and programming methods. Within the visualization system there are three major components: the developer, the deployer, and the widget set. The developer is a blank canvas with widget menu items that give users the ability to easily create displays. The deployer is an application that allows for the deployment of the displays created using the developer application. The deployer has additional functionality that the developer does not have, such as printing of displays, screen captures to files, windowing of displays, and also serves as the interface into the documentation archive and help system. The third major component is the widget set. The widgets are the visual representation of the items that will make up the display (i.e., meters, dials, buttons, numerical indicators, string indicators, and the like). This software was developed using Visual C++ and uses COTS (commercial off-the-shelf) software where possible.

  7. Scalable and fail-safe deployment of the ATLAS Distributed Data Management system Rucio

    NASA Astrophysics Data System (ADS)

    Lassnig, M.; Vigne, R.; Beermann, T.; Barisits, M.; Garonne, V.; Serfon, C.

    2015-12-01

    This contribution details the deployment of Rucio, the ATLAS Distributed Data Management system. The main complication is that Rucio interacts with a wide variety of external services, and connects globally distributed data centres under different technological and administrative control, at an unprecedented data volume. It is therefore not possible to create a duplicate instance of Rucio for testing or integration. Every software upgrade or configuration change is thus potentially disruptive and requires fail-safe software and automatic error recovery. Rucio uses a three-layer scaling and mitigation strategy based on quasi-realtime monitoring. This strategy mainly employs independent stateless services, automatic failover, and service migration. The technologies used for deployment and mitigation include OpenStack, Puppet, Graphite, HAProxy and Apache. In this contribution, the interplay between these components, their deployment, software mitigation, and the monitoring strategy are discussed.
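
    At the client level, the fail-safe behavior described above reduces to retrying idempotent requests against redundant stateless endpoints. The Python sketch below illustrates only that idea; the endpoint names are invented, and Rucio's real deployment achieves this (and more) with HAProxy load balancing in front of the services.

      # Toy client-side failover over redundant stateless endpoints.
      import urllib.request
      from urllib.error import URLError

      ENDPOINTS = [
          "https://rucio-server-1.example.org",   # hypothetical replicas
          "https://rucio-server-2.example.org",
          "https://rucio-server-3.example.org",
      ]

      def call(path: str) -> bytes:
          last_err = None
          for base in ENDPOINTS:                  # try each replica in turn
              try:
                  with urllib.request.urlopen(base + path, timeout=5) as r:
                      return r.read()
              except URLError as err:
                  last_err = err                  # replica down: fail over
          raise RuntimeError(f"all endpoints failed: {last_err}")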

  8. Contributing opportunistic resources to the grid with HTCondor-CE-Bosco

    NASA Astrophysics Data System (ADS)

    Weitzel, Derek; Bockelman, Brian

    2017-10-01

    The HTCondor-CE [1] is the primary Compute Element (CE) software for the Open Science Grid. While it offers many advantages for large sites, for smaller, WLCG Tier-3 sites or opportunistic clusters, it can be a difficult task to install, configure, and maintain the HTCondor-CE. Installing a CE typically involves understanding several pieces of software, installing hundreds of packages on a dedicated node, updating several configuration files, and implementing grid authentication mechanisms. On the other hand, accessing remote clusters from personal computers has been dramatically improved with Bosco: site admins only need to setup SSH public key authentication and appropriate accounts on a login host. In this paper, we take a new approach with the HTCondor-CE-Bosco, a CE which combines the flexibility and reliability of the HTCondor-CE with the easy-to-install Bosco. The administrators of the opportunistic resource are not required to install any software: only SSH access and a user account are required from the host site. The OSG can then run the grid-specific portions from a central location. This provides a new, more centralized, model for running grid services, which complements the traditional distributed model. We will show the architecture of a HTCondor-CE-Bosco enabled site, as well as feedback from multiple sites that have deployed it.

  9. Deployment of a tool for measuring freeway safety performance.

    DOT National Transportation Integrated Search

    2011-12-01

    This project updated and deployed a freeway safety performance measurement tool, building upon a previous project that developed the core methodology. The tool evaluates the cumulative risk over time of an accident or a particular kind of accident. T...
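    The abstract is truncated, so the tool's actual formulation is not reproduced here; purely as a generic illustration of cumulative risk over time, one might compound a constant per-period accident probability as follows.

    ```python
    # Illustrative only: probability of at least one accident over n periods,
    # assuming a constant, independent per-period accident probability p.
    def cumulative_risk(p, n):
        return 1.0 - (1.0 - p) ** n

    # e.g., a 0.1% daily accident probability compounded over a year:
    print(f"{cumulative_risk(0.001, 365):.3f}")   # ~0.306
    ```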

  10. Metropolitan ITS deployment tracking : extract of data on traffic signals

    DOT National Transportation Integrated Search

    2000-03-01

    Metropolitan deployment tracking uses surveys targeted at state, county, and local agencies within the metropolitan planning boundary for 78 of the largest metropolitan areas. Data were gathered in this manner in 1997 and these data were updated in 19...

  11. Six pitfalls in firewall deployment

    NASA Astrophysics Data System (ADS)

    Wilner, Bruce

    1996-03-01

    This note describes six key pitfalls in the deployment of popular commercial firewalls. The term 'deployment' is intended to include the architecture of the firewall software itself, the integration of the firewall with the operating system platform, and the interconnection of the complete hardware/software combination within its target environment. After reviewing the evolution of Internet firewalls against the backdrop of classical trusted systems development, specific flaws and oversights in the familiar commercial deployments are analyzed in some detail. While significantly costlier solutions are available that address some of these problems, the analysis is applicable to the overwhelming majority of firewalls in use at both commercial and Government installations.

  12. Geothopica and the interactive analysis and visualization of the updated Italian National Geothermal Database

    NASA Astrophysics Data System (ADS)

    Trumpy, Eugenio; Manzella, Adele

    2017-02-01

    The Italian National Geothermal Database (BDNG) is the largest collection of Italian geothermal data and was set up in the 1980s. It has since been updated both in terms of content and management tools: information on deep wells and thermal springs (with temperature > 30 °C) is currently organized and stored in a PostgreSQL relational database management system, which guarantees high performance, data security and easy access through different client applications. The BDNG is the core of the Geothopica web site, whose webGIS tool allows different types of user to access geothermal data, to visualize multiple types of datasets, and to perform integrated analyses. The webGIS tool has recently been improved with two specially designed, programmed and implemented visualization tools to display data on well lithology and underground temperatures. This paper describes the contents of the database, its software and data updates, and the webGIS tool, including the new tools for lithology and temperature data visualization. The geoinformation organized in the database and accessible through Geothopica is of use not only for geothermal purposes, but also for any kind of georesource and CO2 storage project requiring the organization of, and access to, deep underground data. Geothopica also supports project developers, researchers, and decision makers in the assessment, management and sustainable deployment of georesources.
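    Since the abstract describes a PostgreSQL back end with a >30 °C threshold for thermal springs, a client query might look like the sketch below; the table and column names are invented for illustration and are not taken from the real BDNG schema.

    ```python
    # Hypothetical read-only query against a PostgreSQL schema like the BDNG's.
    import psycopg2

    conn = psycopg2.connect(dbname="bdng", host="localhost", user="reader")
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT name, temperature FROM thermal_springs WHERE temperature > %s",
            (30,),  # the >30 degC threshold mentioned in the abstract
        )
        for name, temperature in cur.fetchall():
            print(name, temperature)
    ```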

  13. A Mathematics Software Database Update.

    ERIC Educational Resources Information Center

    Cunningham, R. S.; Smith, David A.

    1987-01-01

    Contains an update of an earlier listing of software for mathematics instruction at the college level. Topics are: advanced mathematics, algebra, calculus, differential equations, discrete mathematics, equation solving, general mathematics, geometry, linear and matrix algebra, logic, statistics and probability, and trigonometry. (PK)

  14. Marshburn updates software on the WHC UPA in the Node 3

    NASA Image and Video Library

    2013-01-17

    ISS034-E-031133 (17 Jan. 2013) --- NASA astronaut Tom Marshburn, Expedition 34 flight engineer, updates software on the Waste and Hygiene Compartment's Urine Processor Assembly in the Tranquility node of the International Space Station.

  15. Marshburn updates software on the WHC UPA in the Node 3

    NASA Image and Video Library

    2013-01-17

    ISS034-E-031130 (17 Jan. 2013) --- NASA astronaut Tom Marshburn, Expedition 34 flight engineer, updates software on the Waste and Hygiene Compartment's Urine Processor Assembly in the Tranquility node of the International Space Station.

  16. The impact of Docker containers on the performance of genomic pipelines

    PubMed Central

    Palumbo, Emilio; Chatzou, Maria; Prieto, Pablo; Heuer, Michael L.; Notredame, Cedric

    2015-01-01

    Genomic pipelines consist of several pieces of third-party software and, because of their experimental nature, frequent changes and updates are commonly necessary, raising serious deployment and reproducibility issues. Docker containers are emerging as a possible solution for many of these problems, as they allow the packaging of pipelines in an isolated and self-contained manner. This makes it easy to distribute and execute pipelines in a portable manner across a wide range of computing platforms. The question that arises, then, is to what extent the use of Docker containers might affect the performance of these pipelines. Here we address this question and conclude that Docker containers have only a minor impact on the performance of common genomic pipelines, and that the impact is negligible when the executed jobs are long in terms of computational time. PMID:26421241

  17. The impact of Docker containers on the performance of genomic pipelines.

    PubMed

    Di Tommaso, Paolo; Palumbo, Emilio; Chatzou, Maria; Prieto, Pablo; Heuer, Michael L; Notredame, Cedric

    2015-01-01

    Genomic pipelines consist of several pieces of third-party software and, because of their experimental nature, frequent changes and updates are commonly necessary, raising serious deployment and reproducibility issues. Docker containers are emerging as a possible solution for many of these problems, as they allow the packaging of pipelines in an isolated and self-contained manner. This makes it easy to distribute and execute pipelines in a portable manner across a wide range of computing platforms. The question that arises, then, is to what extent the use of Docker containers might affect the performance of these pipelines. Here we address this question and conclude that Docker containers have only a minor impact on the performance of common genomic pipelines, and that the impact is negligible when the executed jobs are long in terms of computational time.
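    The comparison the paper makes can be sketched as timing the same tool run natively and inside a container; the image name, input file, and command below are placeholders, not the paper's benchmark setup.

    ```python
    # Rough sketch: measure container overhead for one pipeline step.
    import subprocess, time

    def timed(cmd):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        return time.perf_counter() - start

    native = timed(["bwa", "index", "genome.fa"])
    docker = timed(["docker", "run", "--rm", "-v", "/data:/data",
                    "biocontainers/bwa", "bwa", "index", "/data/genome.fa"])
    print(f"container overhead: {docker - native:.1f}s")
    ```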

  18. Patch Transporter: Incentivized, Decentralized Software Patch System for WSN and IoT Environments

    PubMed Central

    Lee, JongHyup

    2018-01-01

    In the complicated settings of WSN (Wireless Sensor Networks) and IoT (Internet of Things) environments, keeping a number of heterogeneous devices updated is a challenging job, especially with respect to effectively discovering target devices and rapidly delivering the software updates. In this paper, we convert the traditional software update process to a distributed service. We set up an incentive system for faithfully transporting the patches to the recipient devices. The incentive system motivates independent, self-interested transporters to help the devices get updated. To ensure the system operates correctly, we employ a blockchain system that enforces the commitment in a decentralized manner. We also present a detailed specification of the proposed protocol and validate its correctness by model checking and simulations. PMID:29438337

  19. Patch Transporter: Incentivized, Decentralized Software Patch System for WSN and IoT Environments.

    PubMed

    Lee, JongHyup

    2018-02-13

    In the complicated settings of WSN (Wireless Sensor Networks) and IoT (Internet of Things) environments, keeping a number of heterogeneous devices updated is a challenging job, especially with respect to effectively discovering target devices and rapidly delivering the software updates. In this paper, we convert the traditional software update process to a distributed service. We set up an incentive system for faithfully transporting the patches to the recipient devices. The incentive system motivates independent, self-interested transporters to help the devices get updated. To ensure the system operates correctly, we employ a blockchain system that enforces the commitment in a decentralized manner. We also present a detailed specification of the proposed protocol and validate its correctness by model checking and simulations.
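    The commitment idea can be sketched under the assumption that the ledger stores a SHA-256 digest of each released patch: a device accepts a patch from an untrusted transporter only if the digest matches the on-chain commitment. The dictionary below is a stand-in for the ledger, not the paper's actual protocol.

    ```python
    # Sketch: hash-committed patch delivery with an untrusted transporter.
    import hashlib

    ON_CHAIN_COMMITMENTS = {}  # patch_id -> hex digest, published by the vendor

    def publish(patch_id, patch_bytes):
        ON_CHAIN_COMMITMENTS[patch_id] = hashlib.sha256(patch_bytes).hexdigest()

    def accept(patch_id, delivered_bytes):
        expected = ON_CHAIN_COMMITMENTS.get(patch_id)
        actual = hashlib.sha256(delivered_bytes).hexdigest()
        return expected is not None and expected == actual  # reward only if True

    publish("sensor-fw-1.2", b"...patch payload...")
    print(accept("sensor-fw-1.2", b"...patch payload..."))   # True
    print(accept("sensor-fw-1.2", b"tampered payload"))      # False
    ```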

  20. India Solar Resource Data: Enhanced Data for Accelerated Deployment (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Identifying potential locations for solar photovoltaic (PV) and concentrating solar power (CSP) projects requires an understanding of the underlying solar resource. Under a bilateral partnership between the United States and India - the U.S.-India Energy Dialogue - the National Renewable Energy Laboratory has updated Indian solar data and maps using data provided by the Ministry of New and Renewable Energy (MNRE) and the National Institute for Solar Energy (NISE). This fact sheet overviews the updated maps and data, which help identify high-quality solar energy projects. This can help accelerate the deployment of solar energy in India.

  1. India Solar Resource Data: Enhanced Data for Accelerated Deployment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    Identifying potential locations for solar photovoltaic (PV) and concentrating solar power (CSP) projects requires an understanding of the underlying solar resource. Under a bilateral partnership between the United States and India - the U.S.-India Energy Dialogue - the National Renewable Energy Laboratory has updated Indian solar data and maps using data provided by the Ministry of New and Renewable Energy (MNRE) and the National Institute for Solar Energy (NISE). This fact sheet overviews the updated maps and data, which help identify high-quality solar energy projects. This can help accelerate the deployment of solar energy in India.

  2. Large Deployable Reflector (LDR) feasibility study update

    NASA Technical Reports Server (NTRS)

    Alff, W. H.; Banderman, L. W.

    1983-01-01

    In 1982 a workshop was held to refine the science rationale for large deployable reflectors (LDR) and develop technology requirements that support the science rationale. At the end of the workshop, a set of LDR consensus systems requirements was established. The subject study was undertaken to update the initial LDR study using the new systems requirements. The study included mirror materials selection and configuration, thermal analysis, structural concept definition and analysis, dynamic control analysis and recommendations for further study. The primary emphasis was on the dynamic controls requirements and the sophistication of the controls system needed to meet LDR performance goals.

  3. A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures

    NASA Technical Reports Server (NTRS)

    Moore, Ashley

    2005-01-01

    The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target from camera images. Videogrammetry is based on the same principle, except that a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using PhotoModeler software. The accuracy of the PhotoModeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with results from the Australis photogrammetry software, which simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect system accuracy, to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.

  4. Intelligent Transportation Systems in the National Parks System and Other Federal Public Lands - 2011 Update.

    DOT National Transportation Integrated Search

    2011-09-30

    The Intelligent Transportation Systems in Federal Public Lands report details the state of ITS deployment across all federal land management agencies (FLMAs) in 2011, updating a Volpe Center report completed in 2005. An assessment of the types ...

  5. Removing a barrier to computer-based outbreak and disease surveillance--the RODS Open Source Project.

    PubMed

    Espino, Jeremy U; Wagner, M; Szczepaniak, C; Tsui, F C; Su, H; Olszewski, R; Liu, Z; Chapman, W; Zeng, X; Ma, L; Lu, Z; Dara, J

    2004-09-24

    Computer-based outbreak and disease surveillance requires high-quality software that is well-supported and affordable. Developing software in an open-source framework, which entails free distribution and use of software and continuous, community-based software development, can produce software with such characteristics, and can do so rapidly. The objective of the Real-Time Outbreak and Disease Surveillance (RODS) Open Source Project is to accelerate the deployment of computer-based outbreak and disease surveillance systems by writing software and catalyzing the formation of a community of users, developers, consultants, and scientists who support its use. The University of Pittsburgh seeded the Open Source Project by releasing the RODS software under the GNU General Public License. An infrastructure was created, consisting of a website, mailing lists for developers and users, designated software developers, and shared code-development tools. These resources are intended to encourage growth of the Open Source Project community. Progress is measured by assessing website usage, number of software downloads, number of inquiries, number of system deployments, and number of new features or modules added to the code base. During September--November 2003, users generated 5,370 page views of the project website, 59 software downloads, 20 inquiries, one new deployment, and addition of four features. Thus far, health departments and companies have been more interested in using the software as is than in customizing or developing new features. The RODS laboratory anticipates that after initial installation has been completed, health departments and companies will begin to customize the software and contribute their enhancements to the public code base.

  6. Updating the Inductee Delivery Schedule.

    DTIC Science & Technology

    1987-03-01

    deployed forces at risk with the anticipated opposing forces for the expected level of combat intensity. An estimate of the number of individuals who...identification of shortfalls in critical skills. It prescribes the anticipation of requirements and return of personnel resources to military control as...with the Time Phased Force Deployment Data lists the forces that will be deployed over time. Each unit is then assigned to a risk group (forces

  7. Deployment of Shaped Charges by a Semi-Autonomous Ground Vehicle

    DTIC Science & Technology

    2007-06-01

    lives on a daily basis. BigFoot seeks to replace the local human component by deploying and remotely detonating shaped charges to destroy IEDs...robotic arm to deploy and remotely detonate shaped charges. BigFoot incorporates improved communication range over previous Autonomous Ground Vehicles...and an updated user interface that includes controls for the arm and camera by interfacing multiple microprocessors. BigFoot is capable of avoiding

  8. Delta XTE Spacecraft Solar Panel Deployment, Hangar AO at Cape Canaveral Air Station

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The footage shows technicians in the clean room checking and adjusting the deployment mechanism of the solar panel for the XTE spacecraft. Other scenes show several technicians making adjustments to software for deployment of the solar panels.

  9. System for Continuous Delivery of MODIS Imagery to Internet Mapping Applications

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2008-01-01

    This software represents a complete, unsupervised processing chain that generates a continuously updating global image of the Earth, at 250 m per pixel, from the most recent available MODIS Level 1B scenes.

  10. An efficient approach to the deployment of complex open source information systems

    PubMed Central

    Cong, Truong Van Chi; Groeneveld, Eildert

    2011-01-01

    Complex open source information systems are usually implemented as component-based software to inherit the available functionality of existing software packages developed by third parties. Consequently, the deployment of these systems not only requires the installation of the operating system, the application framework and the configuration of services, but also needs to resolve the dependencies among components. The problem becomes more challenging when the application must be installed and used on different platforms such as Linux and Windows. To address this, an efficient approach using virtualization technology is suggested and discussed in this paper. The approach has been applied in our project to deploy a web-based integrated information system in molecular genetics labs. It is a low-cost solution that benefits both software developers and end-users. PMID:22102770

  11. Improved CLARAty Functional-Layer/Decision-Layer Interface

    NASA Technical Reports Server (NTRS)

    Estlin, Tara; Rabideau, Gregg; Gaines, Daniel; Johnston, Mark; Chouinard, Caroline; Nesnas, Issa; Shu, I-Hsiang

    2008-01-01

    Improved interface software for communication between the CLARAty Decision and Functional layers has been developed. [The Coupled Layer Architecture for Robotics Autonomy (CLARAty) was described in Coupled-Layer Robotics Architecture for Autonomy (NPO-21218), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 48. To recapitulate: the CLARAty architecture was developed to improve the modularity of robotic software while tightening the coupling between planning/execution and basic control subsystems. Whereas prior robotic software architectures typically contained three layers, CLARAty contains two: a decision layer (DL) and a functional layer (FL).] Types of communication supported by the present software include sending commands from DL modules to FL modules and sending data updates from FL modules to DL modules. The present software supplants prior interface software that had little error-checking capability, supported data parameters in string form only, supported commanding at only one level of the FL, and supported only limited updates of the state of the robot. The present software offers strong error checking, supports complex data structures and commanding at multiple levels of the FL, and offers a much wider spectrum of state-update capabilities than the prior software.
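    The two message directions and the error checking described above can be sketched as follows; this is illustrative Python, not CLARAty code, and the module name, command name, and schema are invented.

    ```python
    # Sketch: DL->FL commands and FL->DL state updates with parameter checking.
    from dataclasses import dataclass, field

    @dataclass
    class Command:              # Decision Layer -> Functional Layer
        target: str             # FL module, e.g. a hypothetical "arm" module
        name: str
        params: dict = field(default_factory=dict)

    @dataclass
    class StateUpdate:          # Functional Layer -> Decision Layer
        source: str
        values: dict

    def check_command(cmd, schema):
        """Reject commands whose parameters do not match the declared schema."""
        expected = schema[cmd.target][cmd.name]
        for key, typ in expected.items():
            if not isinstance(cmd.params.get(key), typ):
                raise ValueError(f"bad parameter {key!r} for {cmd.target}.{cmd.name}")

    schema = {"arm": {"move_to": {"x": float, "y": float, "z": float}}}
    check_command(Command("arm", "move_to", {"x": 0.1, "y": 0.2, "z": 0.3}), schema)
    ```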

  12. National Deployment Estimate of the Metropolitan ITS Infrastructure : Updated with 2010 Deployment Data, 7th Revision

    DOT National Transportation Integrated Search

    2011-12-01

    The purpose of this report is to provide a summary and back-up information on the methodology, data sources, and results for the estimate of Intelligent Transportation Systems (ITS) capital expenditures in the top 75 metropolitan areas as of FY 2010....

  13. RAEGE Project Update: Yebes Observatory Broadband Receiver Ready for VGOS

    NASA Astrophysics Data System (ADS)

    IGN Yebes Observatory staff

    2016-12-01

    An update on the deployment and activities of the Spanish/Portuguese RAEGE project ("Atlantic Network of Geodynamical and Space Stations") is presented. While regular observations with the Yebes radio telescope are ongoing, technological development of VGOS receivers is progressing at the Yebes laboratories.

  14. Certification of production-quality gLite Job Management components

    NASA Astrophysics Data System (ADS)

    Andreetto, P.; Bertocco, S.; Capannini, F.; Cecchi, M.; Dorigo, A.; Frizziero, E.; Giacomini, F.; Gianelle, A.; Mezzadri, M.; Molinari, E.; Monforte, S.; Prelz, F.; Rebatto, D.; Sgaravatto, M.; Zangrando, L.

    2011-12-01

    With the advent of the recent European Union (EU) funded projects aimed at achieving an open, coordinated and proactive collaboration among the European communities that provide distributed computing services, stricter requirements and quality standards will be imposed on middleware providers. Such a highly competitive and dynamic environment, organized to comply with a business-oriented model, has already started pursuing quality criteria, requiring rigorous procedures, interfaces and roles to be formally defined for each step of the software life-cycle. This will ensure quality-certified releases and updates of the Grid middleware. In the European Middleware Initiative (EMI), the release management for one or more components will be organized into Product Team (PT) units, fully responsible for delivering production-ready, quality-certified software and for coordinating with each other to contribute to the EMI release as a whole. This paper presents the certification process, with respect to integration, installation, configuration and testing, adopted at INFN by the Product Team responsible for the gLite Web-Service based Computing Element (CREAM CE) and for the Workload Management System (WMS). The resources used, the testbed layouts, the integration and deployment methods, and the certification steps taken to provide feedback to developers and to guarantee quality results are described.

  15. Designing and Implementing a Distributed System Architecture for the Mars Rover Mission Planning Software (Maestro)

    NASA Technical Reports Server (NTRS)

    Goldgof, Gregory M.

    2005-01-01

    Distributed systems allow scientists from around the world to plan missions concurrently, while being updated on the revisions of their colleagues in real time. However, permitting multiple clients to simultaneously modify a single data repository can quickly lead to data corruption or inconsistent states between users. Since our message broker, the Java Message Service, does not ensure that messages will be received in the order they were published, we must implement our own numbering scheme to guarantee that changes to mission plans are performed in the correct sequence. Furthermore, distributed architectures must ensure that as new users connect to the system, they synchronize with the database without missing any messages or falling into an inconsistent state. Robust systems must also guarantee that all clients will remain synchronized with the database even in the case of multiple client failure, which can occur at any time due to lost network connections or a user's own system instability. The final design for the distributed system behind the Mars rover mission planning software fulfills all of these requirements and upon completion will be deployed to MER at the end of 2005 as well as Phoenix (2007) and MSL (2009).
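    The numbering scheme described above can be sketched with a reordering buffer: each change carries a sequence number, and a client holds out-of-order messages until the gap is filled. The message structure below is hypothetical, not Maestro's actual wire format.

    ```python
    # Sketch: apply broker messages in publication order despite out-of-order delivery.
    class OrderedApplier:
        def __init__(self):
            self.next_seq = 0
            self.pending = {}   # seq -> change, held until its turn

        def receive(self, seq, change):
            self.pending[seq] = change
            applied = []
            while self.next_seq in self.pending:
                applied.append(self.pending.pop(self.next_seq))
                self.next_seq += 1
            return applied  # changes now safe to apply, in order

    a = OrderedApplier()
    print(a.receive(1, "edit B"))   # [] -- still waiting for seq 0
    print(a.receive(0, "edit A"))   # ['edit A', 'edit B']
    ```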

  16. Beam Position and Phase Monitor - Wire Mapping System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, Heath A; Shurter, Robert B.; Gilpatrick, John D.

    2012-04-10

    The Los Alamos Neutron Science Center (LANSCE) deploys many cylindrical beam position and phase monitors (BPPM) throughout the linac to measure the beam central position, phase and bunched-beam current. Each monitor is calibrated and qualified prior to installation to ensure it meets LANSCE requirements. The BPPM wire mapping system is used to map the BPPM electrode offset, sensitivity and higher-order coefficients. This system uses a three-axis motion table to position the wire antenna structure within the cavity, simulating the beam excitation of a BPPM at a fundamental frequency of 201.25 MHz. RF signal strength is measured and recorded for the four electrodes as the antenna position is updated. An effort is underway to extend the system's service to the LANSCE facility by replacing obsolete electronic hardware and taking advantage of software enhancements. This paper describes the upgraded wire positioning system's new hardware and software capabilities, including its revised antenna structure, motion control interface, RF measurement equipment and LabVIEW software upgrades. The main purpose of the wire mapping system at LANSCE is to characterize the amplitude response versus beam central position of BPPMs before they are installed in the beam line. The wire mapping system is able to simulate a beam using a thin wire and measure the signal response as the wire position is varied within the BPPM aperture.
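    As a first-order illustration of what such a wire map calibrates: for a four-electrode pickup, position is commonly estimated from difference-over-sum ratios of opposing electrode amplitudes scaled by a sensitivity coefficient. This is a generic textbook relation, not the LANSCE calibration, which also fits offsets and higher-order terms; the value of k below is made up.

    ```python
    # Generic difference-over-sum position estimate for a four-electrode pickup.
    def position_estimate(top, bottom, left, right, k=10.0):
        # k is an assumed sensitivity coefficient in mm per unit ratio
        x = k * (right - left) / (right + left)
        y = k * (top - bottom) / (top + bottom)
        return x, y

    print(position_estimate(top=1.02, bottom=0.98, left=0.95, right=1.05))
    ```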

  17. 76 FR 9339 - State Energy Advisory Board (STEAB); Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-17

    ... energy advancement and deployment, and update members of the STEAB on routine business matters affecting... Berkeley National Laboratory (LBNL) in order to receive updates on new and emerging technologies as well as... empowered to conduct the meeting in a fashion that will facilitate the orderly conduct of business. This...

  18. Further Automate Planned Cluster Maintenance to Minimize System Downtime during Maintenance Windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Springmeyer, R.

    This report documents the integration and testing of the automated update process for compute clusters in LC, intended to minimize the impact on user productivity. Description: A set of scripts will be written and deployed to further standardize cluster maintenance activities and minimize downtime during planned maintenance windows. Completion Criteria: When the scripts have been deployed and used during planned maintenance windows and a timing comparison is completed between the existing process and the new, more automated process, this milestone is complete. This milestone was completed on Aug 23, 2016 on the new CTS1 cluster called Jade, when a request to upgrade the version of TOSS 3 was initiated while SWL jobs and normal user jobs were running. Jobs that were running when the update to the system began continued to run to completion. New jobs on the cluster started on the new release of TOSS 3. No system administrator action was required. Current update procedures in TOSS 2 begin by killing all user jobs. Then all diskfull nodes are updated, which can take a few hours. Only after the updates are applied are all nodes rebooted and finally put back into service. A system administrator is required for all steps. In terms of human time spent during a cluster OS update, the TOSS 3 automated procedure on Jade took 0 FTE hours. Doing the same update without the TOSS Update Tool would have required 4 FTE hours.

  19. Efficient Software Systems for Cardio Surgical Departments

    NASA Astrophysics Data System (ADS)

    Fountoukis, S. G.; Diomidous, M. J.

    2009-08-01

    Herein, the design, implementation and deployment of an object-oriented software system suitable for monitoring cardio surgical departments is investigated. Distributed design architectures are applied, and the implemented software system can be deployed on distributed infrastructures. The software is flexible and adaptable to any cardio surgical environment, regardless of the department resources used. The system exploits the relations and interdependency of the successive bed positions that patients occupy in the different health care units during their stay in a cardio surgical department to determine bed availability and to perform patient scheduling, with instant rescheduling whenever necessary. It also aims at efficiently monitoring the workings of cardio surgical departments.

  20. Updates to the NASA Space Telecommunications Radio System (STRS) Architecture

    NASA Technical Reports Server (NTRS)

    Kacpura, Thomas J.; Handler, Louis M.; Briones, Janette; Hall, Charles S.

    2008-01-01

    This paper describes an update of the Space Telecommunications Radio System (STRS) open architecture for NASA space-based radios. The STRS architecture has been defined as a framework for the design, development, operation and upgrade of space-based software defined radios, where processing resources are constrained. The architecture has been updated based upon reviews by NASA missions, radio providers, and component vendors. The STRS Standard prescribes the architectural relationship between the software elements used in software execution and defines the Application Programmer Interface (API) between the operating environment and the waveform application. Modeling tools have been adopted to present the architecture. The paper presents a description of the updated API, configuration files, and constraints. Minimum compliance is discussed for early implementations. The paper then closes with a summary of the changes made, a discussion of the alignment with the Object Management Group (OMG) SWRadio specification, and enhancements to the specialized signal processing abstraction.

  1. Maintaining the Health of Software Monitors

    NASA Technical Reports Server (NTRS)

    Person, Suzette; Rungta, Neha

    2013-01-01

    Software health management (SWHM) techniques complement the rigorous verification and validation processes that are applied to safety-critical systems prior to their deployment. These techniques are used to monitor deployed software in its execution environment, serving as the last line of defense against the effects of a critical fault. SWHM monitors use information from the specification and implementation of the monitored software to detect violations, predict possible failures, and help the system recover from faults. Changes to the monitored software, such as adding new functionality or fixing defects, therefore, have the potential to impact the correctness of both the monitored software and the SWHM monitor. In this work, we describe how the results of a software change impact analysis technique, Directed Incremental Symbolic Execution (DiSE), can be applied to monitored software to identify the potential impact of the changes on the SWHM monitor software. The results of DiSE can then be used by other analysis techniques, e.g., testing, debugging, to help preserve and improve the integrity of the SWHM monitor as the monitored software evolves.

  2. StrAuto: automation and parallelization of STRUCTURE analysis.

    PubMed

    Chhatre, Vikram E; Emerson, Kevin J

    2017-03-24

    Population structure inference using the software STRUCTURE has become an integral part of population genetic studies covering a broad spectrum of taxa, including humans. The ever-expanding size of genetic data sets poses computational challenges for this analysis. Although at least one tool currently implements parallel computing to reduce the computational load of this analysis, it does not fully automate the use of the replicate STRUCTURE runs required for downstream inference of optimal K. There is a pressing need for a tool that can deploy population structure analysis on high performance computing clusters. We present an updated version of the popular Python program StrAuto to streamline population structure analysis using parallel computing. StrAuto implements a pipeline that combines STRUCTURE analysis with the Evanno ΔK analysis and visualization of results using STRUCTURE HARVESTER. Using benchmarking tests, we demonstrate that StrAuto significantly reduces the computational time needed to perform iterative STRUCTURE analysis by distributing runs over two or more processors. StrAuto is the first tool to integrate STRUCTURE analysis with post-processing using a pipeline approach, in addition to implementing parallel computation - a setup ideal for deployment on computing clusters. StrAuto is distributed under the GNU GPL (General Public License) and is available to download from http://strauto.popgen.org.
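    Because replicate STRUCTURE runs are independent, the parallelization strategy can be sketched with a process pool; this is not StrAuto's actual code, and the result paths and pool size are placeholders (the -K and -o options are standard STRUCTURE command-line overrides).

    ```python
    # Sketch: farm out replicate STRUCTURE runs for each K to a process pool.
    import subprocess
    from multiprocessing import Pool

    def run_structure(args):
        k, rep = args
        out = f"results/K{k}_rep{rep}"
        # Input files and mainparams are assumed to be set up in the working dir.
        subprocess.run(["structure", "-K", str(k), "-o", out], check=True)
        return out

    if __name__ == "__main__":
        jobs = [(k, rep) for k in range(1, 11) for rep in range(1, 6)]  # K=1..10, 5 reps
        with Pool(processes=4) as pool:
            outputs = pool.map(run_structure, jobs)  # inputs to the Evanno deltaK step
    ```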

  3. Problems Related to Alcohol, Other Drugs, and Violence among Military Students. Prevention Update

    ERIC Educational Resources Information Center

    Higher Education Center for Alcohol, Drug Abuse, and Violence Prevention, 2011

    2011-01-01

    According to a Research Update from the National Institute on Drug Abuse, ongoing operations in Iraq and Afghanistan "continue to strain military personnel, returning veterans, and their families. Some have experienced long and multiple deployments, combat exposure, and physical injuries, as well as post-traumatic stress disorder (PTSD) and…

  4. Monitoring Climate Variability and Change in Northern Alaska: Updates to the U.S. Geological Survey (USGS) Climate and Permafrost Monitoring Network

    NASA Astrophysics Data System (ADS)

    Urban, F. E.; Clow, G. D.; Meares, D. C.

    2004-12-01

    Observations of long-term climate and surficial geological processes are sparse in most of the Arctic, despite the fact that this region is highly sensitive to climate change. Instrumental networks that monitor the interplay of climatic variability and geological/cryospheric processes are a necessity for documenting and understanding climate change. Improvements to the spatial coverage and temporal scale of Arctic climate data are in progress. The USGS, in collaboration with the Bureau of Land Management (BLM) and the Fish and Wildlife Service (FWS), currently maintains two types of monitoring networks in northern Alaska: (1) a 15-site network of continuously operating active-layer and climate monitoring stations, and (2) a 21-element array of deep boreholes in which the thermal state of deep permafrost is monitored. Here, we focus on the USGS Alaska Active Layer and Climate Monitoring Network (AK-CLIM). These 15 stations are deployed in longitudinal transects that span Alaska north of the Brooks Range (11 in the National Petroleum Reserve-Alaska (NPRA) and 4 in the Arctic National Wildlife Refuge (ANWR)). An informative overview and update of the USGS AK-CLIM network is presented, including insight into current data, processing and analysis software, and plans for data telemetry. Data collection began in 1998, and parameters currently measured include air temperature, soil temperatures (5-120 cm), snow depth, incoming and reflected short-wave radiation, soil moisture (15 cm), and wind speed and direction. Custom processing and analysis software has been written that calculates additional parameters such as active layer thaw depth, thawing-degree-days, albedo, cloudiness, and duration of seasonal snow cover. Data from selected AK-CLIM stations are now temporally sufficient to begin identifying trends, anomalies, and inter-annual variability in the climate of northern Alaska.
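    One of the derived parameters mentioned above has a standard definition that is easy to illustrate: thawing-degree-days are the sum of daily mean air temperatures above 0 °C. The input values below are made up.

    ```python
    # Illustrative computation of thawing-degree-days from daily mean temperatures.
    def thawing_degree_days(daily_mean_temps_c):
        return sum(t for t in daily_mean_temps_c if t > 0)

    summer = [-2.1, 0.5, 3.2, 7.8, 4.4, -0.3]
    print(thawing_degree_days(summer))   # 15.9
    ```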

  5. LHCb Build and Deployment Infrastructure for run 2

    NASA Astrophysics Data System (ADS)

    Clemencic, M.; Couturier, B.

    2015-12-01

    After the successful run 1 of the LHC, the LHCb core software team has taken advantage of the long shutdown to consolidate and improve its build and deployment infrastructure. Several of the related projects have already been presented, such as the build system using Jenkins and the LHCb performance and regression testing infrastructure. Some components are completely new, like the Software Configuration Database (using the graph database Neo4j) and the new package installation using RPM packages. Furthermore, all these parts are integrated to allow easier and quicker releases of the LHCb software stack, thereby reducing the risk of operational errors. Integration and regression tests are also now easier to implement, allowing the software checks to be further improved.

  6. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    PubMed

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.

  7. Simplified Deployment of Health Informatics Applications by Providing Docker Images.

    PubMed

    Löbe, Matthias; Ganslandt, Thomas; Lotzmann, Lydia; Mate, Sebastian; Christoph, Jan; Baum, Benjamin; Sariyar, Murat; Wu, Jie; Stäubert, Sebastian

    2016-01-01

    Due to the specific needs of biomedical researchers, in-house development of software is widespread. A common problem is maintaining and enhancing software after the funded project has ended. Even though many tools are made open source, only a handful of projects manage to attract a user base large enough to ensure sustainability. Reasons for this include the complex installation and configuration of biomedical software as well as ambiguous terminology for the features provided, all of which make evaluation of software laborious. Docker is a para-virtualization technology based on Linux containers that eases deployment of applications and facilitates evaluation. We investigated a suite of software developments funded by a large umbrella organization for networked medical research within the last 10 years and created Docker containers for a number of applications to support utilization and dissemination.

  8. AceTree: a major update and case study in the long term maintenance of open-source scientific software.

    PubMed

    Katzman, Braden; Tang, Doris; Santella, Anthony; Bao, Zhirong

    2018-04-04

    AceTree, a software application first released in 2006, facilitates exploration, curation and editing of tracked C. elegans nuclei in 4-dimensional (4D) fluorescence microscopy datasets. Since its initial release, AceTree has been continuously used to interact with, edit and interpret C. elegans lineage data. In its 11-year lifetime, AceTree has been periodically updated to meet the technical and research demands of its community of users. This paper presents the newest iteration of AceTree, which contains extensive updates, demonstrates the new applicability of AceTree in other developmental contexts, and presents its evolutionary software development paradigm as a viable model for maintaining scientific software. Large-scale updates have been made to the user interface for an improved user experience. Tools have been grouped according to functionality, and obsolete methods have been removed. Internal requirements have been changed to enable greater flexibility of use both in C. elegans contexts and in other model organisms. Additionally, the original 3-dimensional (3D) viewing window has been completely reimplemented. The new window provides a new suite of tools for data exploration. By responding to technical advancements and research demands, AceTree has remained a useful tool for scientific research for over a decade. The updates made to the codebase have extended AceTree's applicability beyond its initial use in C. elegans and enabled its usage with other model organisms. The evolution of AceTree demonstrates a viable model for maintaining scientific software over long periods of time.

  9. Robonaut 2 - Building a Robot on the International Space Station

    NASA Technical Reports Server (NTRS)

    Diftler, Myron; Badger, Julia; Joyce, Charles; Potter, Elliott; Pike, Leah

    2015-01-01

    In 2010, the Robonaut Project embarked on a multi-phase mission to perform technology demonstrations on board the International Space Station (ISS), showcasing state-of-the-art robotics technologies through the use of Robonaut 2 (R2). This phased approach implements a strategy that allows for the use of the ISS as a test bed during early development, both to demonstrate capability and to test technology, while advancements continue in Earth-based laboratories for future testing and operations in space. While R2 was performing experimental trials onboard the ISS during the first phase, engineers were actively designing for Phase 2, Intra-Vehicular Activity (IVA) Mobility, which utilizes a set of zero-g climbing legs outfitted with grippers to grasp handrails and seat tracks. In addition to affixing the new climbing legs to the existing R2 torso, it became clear that upgrades to the torso were required, both to physically accommodate the climbing legs and to expand the processing power and capabilities of the robot. Alongside these upgrades, a new safety architecture was implemented to account for the expanded capabilities of the robot. The IVA climbing legs not only needed to attach structurally to the R2 torso on the ISS, but also required power and data connections that did not exist in the upper body. The climbing legs were outfitted with a blind-mate adapter and coarse alignment guides for easy installation, but the upper body required extensive rewiring to accommodate the power and data connections. This was achieved by mounting a custom adapter plate to the torso and routing the additional wiring through the waist joint to connect to the new set of processors. In addition to the power and data channels, the integrated unit also required updated electronics boards, additional sensors and updated processors to accommodate a new operating system, software platform, and custom control system. To perform the unprecedented task of building a robot in space, extensive practice sessions and meticulous procedures were required. Since crew training time is at a premium, the R2 team took a skills-based training approach to ensure the astronauts were proficient with a basic skill set while refining the detailed procedures over several practice sessions and simulations. In addition to the crew activities, meticulous ground procedures were required to upgrade the firmware on the upper body motor drivers. The new firmware for the IVA mobility unit needed to be deployed using the old software system. This also provided an opportunity to upgrade the upper body joints with new software and allowed limited insight into the success of the updates. Complete verification that the updated firmware was successfully loaded could not be made until the rewiring of the upper body torso was complete.

  10. Evaluating Non-In-Place Update Techniques for Flash-Based Transaction Processing Systems

    NASA Astrophysics Data System (ADS)

    Wang, Yongkun; Goda, Kazuo; Kitsuregawa, Masaru

    Recently, flash memory has been emerging as a storage device. With prices sliding fast, its cost per unit capacity is approaching that of SATA disk drives. So far flash memory has been widely deployed in consumer electronics and partly in mobile computing environments, while its deployment in enterprise systems has been studied by many researchers and developers. In terms of access performance characteristics, flash memory is quite different from disk drives. Having no mechanical components, flash memory offers very high random read performance, whereas its random write performance is limited by the erase-before-write design and is comparable to, or even worse than, that of disk drives. Due to this performance asymmetry, naive deployment in enterprise systems may not fully exploit the potential performance of flash memory. This paper studies the effectiveness of using non-in-place-update (NIPU) techniques throughout the IO path of flash-based transaction processing systems. Our experiments using both an open-source DBMS and a commercial DBMS validated the potential benefits: a 3.0x to 6.6x performance improvement was confirmed by incorporating non-in-place-update techniques into the file system without any modification of applications or storage devices.
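    The core NIPU idea can be shown with a toy log-structured store: instead of erasing and rewriting a page in place, each update is appended sequentially and a mapping table redirects the logical page to its newest physical copy. Page sizes and garbage collection are omitted for brevity.

    ```python
    # Toy model of a non-in-place-update (log-structured) write path.
    class LogStructuredStore:
        def __init__(self):
            self.log = []        # append-only physical pages (fast sequential writes)
            self.mapping = {}    # logical page -> index into the log

        def write(self, logical_page, data):
            self.log.append(data)                 # no erase-before-write needed
            self.mapping[logical_page] = len(self.log) - 1
            # stale copies stay in the log until garbage collection reclaims them

        def read(self, logical_page):
            return self.log[self.mapping[logical_page]]

    store = LogStructuredStore()
    store.write(7, b"v1")
    store.write(7, b"v2")          # a random write becomes a sequential append
    print(store.read(7))           # b'v2'
    ```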

  11. 77 FR 50090 - Update to the 26 September 2011 Military Freight Traffic Unified Rules Publication (MFTURP) NO. 1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-20

    ... DEPARTMENT OF DEFENSE Department of the Army Update to the 26 September 2011 Military Freight Traffic Unified Rules Publication (MFTURP) NO. 1 AGENCY: Department of the Army, DoD. SUMMARY: The Military Surface Deployment and Distribution Command (SDDC) is providing notice that it is releasing an...

  12. First year of ALMA site software deployment: where everything comes together

    NASA Astrophysics Data System (ADS)

    González, Víctor; Mora, Matias; Araya, Rodrigo; Arredondo, Diego; Bartsch, Marcelo; Burgos, Pablo; Ibsen, Jorge; Reveco, Johnny; Sáez, Norman; Schemrl, Anton; Sepulveda, Jorge; Shen, Tzu-Chiang; Soto, Rubén; Troncoso, Nicolás; Zambrano, Mauricio; Barriga, Nicolás; Glendenning, Brian; Raffi, Gianni; Kern, Jeff

    2010-07-01

    Starting in 2009, the ALMA project initiated one of the most exciting phases of its construction: the first antenna from one of the vendors was delivered to the Assembly, Integration and Verification team. With this milestone and the closure of the ALMA Test Facility in New Mexico, the JAO Computing Group in Chile found itself on the front line of the project's software deployment and integration effort. Among the group's main responsibilities are the deployment, configuration and support of the observation systems, in addition to infrastructure administration, all of which needs to be done in close coordination with the development groups in Europe, North America and Japan. Software support has been the primary point of interaction with the current users (mainly scientists, operators and hardware engineers), as the software is normally the most visible part of the system. During this first year of work with the production hardware, three consecutive software releases have been deployed and commissioned. Also, the first three antennas have been moved to the Array Operations Site, at 5,000 meters elevation, and the complete end-to-end system has been successfully tested. This paper shares the experience of this 15-person group, part of the construction team at the ALMA site and working together with the Computing IPT, covering the achievements and problems overcome during this period. It explores the excellent results of teamwork, and also some of the troubles that such a complex and geographically distributed project can run into. Finally, it approaches the challenges still to come with the transition to the ALMA operations plan.

  13. AADL and Model-based Engineering

    DTIC Science & Technology

    2014-10-20

    [Briefing slides; only fragmentary text survived extraction. Recoverable content:] We rely on software for safe aircraft operation. Embedded software systems involve the application software, the runtime architecture, and the compute platform, cutting across system, software, and hardware engineering roles. Why do system-level failures still occur despite fault tolerance techniques being deployed in systems?

  14. Annual Report: Carbon Capture Simulation Initiative (CCSI) (30 September 2013)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, David C.; Syamlal, Madhava; Cottrell, Roger

    2013-09-30

    The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that is developing and deploying state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models, with uncertainty quantification (UQ), optimization, risk analysis and decision making capabilities. The CCSI Toolset incorporates commercial and open-source software currently in use by industry and is also developing new software tools as necessary to fill technology gaps identified during execution of the project. Ultimately, the CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. CCSI is led by the National Energy Technology Laboratory (NETL) and leverages the Department of Energy (DOE) national laboratories' core strengths in modeling and simulation, bringing together the best capabilities at NETL, Los Alamos National Laboratory (LANL), Lawrence Berkeley National Laboratory (LBNL), Lawrence Livermore National Laboratory (LLNL), and Pacific Northwest National Laboratory (PNNL). The CCSI's industrial partners provide representation from the power generation industry, equipment manufacturers, technology providers and engineering and construction firms. The CCSI's academic participants (Carnegie Mellon University, Princeton University, West Virginia University, Boston University and the University of Texas at Austin) bring unparalleled expertise in multiphase flow reactors, combustion, process synthesis and optimization, planning and scheduling, and process control techniques for energy processes. During Fiscal Year (FY) 13, CCSI announced the initial release of its first set of computational tools and models during the October 2012 meeting of its Industry Advisory Board. This initial release led to five companies licensing the CCSI Toolset under a Test and Evaluation Agreement this year. By the end of FY13, the CCSI Technical Team had completed development of an updated suite of computational tools and models. The list below summarizes the new and enhanced toolset components that were released following comprehensive testing during October 2013. 1. FOQUS. Framework for Optimization and Quantification of Uncertainty and Sensitivity. Package includes: FOQUS Graphic User Interface (GUI), simulation-based optimization engine, Turbine Client, and heat integration capabilities. There is also an updated simulation interface and new configuration GUI for connecting Aspen Plus or Aspen Custom Modeler (ACM) simulations to FOQUS and the Turbine Science Gateway. 2. A new MFIX-based Computational Fluid Dynamics (CFD) model to predict particle attrition. 3. A new dynamic reduced model (RM) builder, which generates computationally efficient RMs of the behavior of a dynamic system. 4. A completely re-written version of the algebraic surrogate model builder for optimization (ALAMO). The new version is several orders of magnitude faster than the initial release and eliminates the MATLAB dependency. 5. A new suite of high resolution filtered models for the hydrodynamics associated with horizontal cylindrical objects in a flow path. 6. The new Turbine Science Gateway (Cluster), which supports FOQUS for running multiple simulations for optimization or UQ using a local computer or cluster. 7. A new statistical tool (BSS-ANOVA-UQ) for calibration and validation of CFD models. 8. A new basic data submodel in Aspen Plus format for a representative high viscosity capture solvent, the 2-MPZ system. 9. An updated RM tool for CFD (REVEAL) that can create a RM from MFIX. A new lightweight, stand-alone version will be available in late 2013. 10. An updated RM integration tool to convert the RM from REVEAL into a CAPE-OPEN or ACM model for use in a process simulator. 11. An updated suite of unified steady-state and dynamic process models for solid sorbent carbon capture, including bubbling fluidized bed and moving bed reactors. 12. An updated and unified set of compressor models, including a steady-state design point model and a dynamic model with surge detection. 13. A new framework for the synthesis and optimization of coal oxycombustion power plants using advanced optimization algorithms. This release focuses on modeling and optimization of a cryogenic air separation unit (ASU). 14. A new technical risk model in spreadsheet format. 15. An updated version of the sorbent kinetic/equilibrium model for parameter estimation for the 1st generation sorbent model. 16. An updated process synthesis superstructure model to determine optimal process configurations utilizing surrogate models from ALAMO for adsorption and regeneration in a solid sorbent process. 17. Validation models for the NETL Carbon Capture Unit utilizing sorbent AX. Additional validation models will be available for sorbent 32D in 2014. 18. An updated hollow fiber membrane model and system example for carbon capture. 19. An updated reference power plant model in Thermoflex that includes additional steam extraction and reinjection points to enable the heat integration module. 20. An updated financial risk model in spreadsheet format.

  15. The CoreWall Project: An Update for 2007

    NASA Astrophysics Data System (ADS)

    Yu-Chung Chen, J.; Higgins, S.; Hur, H.; Ito, E.; Jenkins, C. J.; Johnson, A.; Leigh, J.; Morin, P.; Lee, J.

    2007-12-01

    The CoreWall Suite is an NSF-supported collaborative development of real-time core description (Corelyzer), stratigraphic correlation (Correlator), and data visualization (CoreNavigator) software to be used by the marine, terrestrial and Antarctic science communities. The overall goal of the CoreWall software development is to bring portable, cross-platform tools to the broader drilling and coring communities to expand and enhance data visualization and enhance collaborative integration of multiple datasets. The CoreWall Project is now in its second year, and significant progress has been made on all three software components. Corelyzer has undergone two field deployments, with testing by the ANDRILL program in 2006 (and again in fall 2007) and by ICDP's SAFOD project (summer 2007). In addition, the CoreWall group and ICDP are working together so that the core description (DIS) system can expose DIS core data directly into Corelyzer seamlessly and be available to future ICDP and IODP-Mission Specific Platform expeditions. Educators have also taken note of the software's ease of use and strong visualization capabilities and have begun exploring curriculum projects with Corelyzer software. To ensure that the software development is integrated with other community IT activities, such as the development of the U.S. IODP-Phase 2 Scientific Ocean Drilling Vessel (SODV), a Steering Committee was constituted. It is composed of key U.S. IODP and related database (e.g., CHRONOS, SedDB) developers and users as well as representatives of other core-based enterprises (e.g., ANDRILL, ICDP, LacCore). Corelyzer (CoreWall's main visual core description tool) displays digital core images from one or more cores along with discrete data streams (e.g., physical properties, downhole logs) and nested images (e.g., thin sections, fossils) to provide a robust approach to the description of sediment cores. Corelyzer's digital image handling allows cores to be viewed from micron to km scale, determined by the image resolution, along a sliding plane, effectively making it a "digital microscope". Detailed features such as lithologic variation, macroscopic grain size variation, bioturbation intensity, chemical composition and micropaleontology are easier to interpret and annotate. Significant new capabilities have been added to allow for importing multiple images and data types, sharing/exporting Corelyzer "work sessions" for multiple users, and enhanced annotations, as well as support for other activities like examining clasts and handling sample requests. The new Correlator software, an updated version of the Splicer/Sagan software used by ODP for over 10 years, has been ported into a single new analysis tool that will work across multiple platforms and interact seamlessly with JANUS (ODP's relational database), CHRONOS, PetDB, SedDB, dbSEABED and other databases. This functionality will result in a CoreWall Suite module that can be used and distributed anywhere for stratigraphic and age correlation tasks. CoreNavigator, a spatial data discovery tool, has taken on a virtual-globe interface that allows users to enter Corelyzer from a geographic-visual standpoint.

  16. Remote software upload techniques in future vehicles and their performance analysis

    NASA Astrophysics Data System (ADS)

    Hossain, Irina

    Updating software in vehicle Electronic Control Units (ECUs) will become a mandatory requirement for a variety of reasons: for example, to update or fix the functionality of an existing system, to add new functionality, to remove software bugs, and to keep up with ITS infrastructure. Software modules of advanced vehicles can be updated using the Remote Software Upload (RSU) technique. RSU employs an infrastructure-based wireless communication technique in which the software supplier sends the software to the targeted vehicle via a roadside Base Station (BS). However, security is critically important in RSU to avoid disasters due to malfunctions of the vehicle and to protect proprietary algorithms from hackers, competitors, or people with malicious intent. In this thesis, a mechanism for secure software upload in advanced vehicles is presented that employs mutual authentication of the software provider and the vehicle using a pre-shared authentication key before sending the software. The software packets are sent encrypted with a secret key along with a Message Digest (MD). To increase the security level, it is proposed that the vehicle receive more than one copy of the software, each copy carrying its MD. The vehicle installs the new software only when it receives more than one identical copy of the software. To validate the proposition, analytical expressions for the average number of packet transmissions needed for a successful software update are determined. Different cases are investigated depending on the vehicle's buffer size and verification method. The analytical and simulation results show that it is sufficient to send two copies of the software to the vehicle to thwart any security attack while uploading the software. The above unicast method for RSU is suitable when software needs to be uploaded to a single vehicle. Since multicasting is the most efficient method of group communication, updating software in the ECUs of a large number of vehicles could benefit from it. However, as with unicast RSU, the security requirements of multicast communication, i.e., authenticity, confidentiality, and integrity of the transmitted software and access control of the group members, are challenging. In this thesis, infrastructure-based mobile multicasting for RSU in vehicle ECUs is proposed in which an ECU receives the software from a remote software distribution center using the roadside BSs as gateways. The Vehicular Software Distribution Network (VSDN) is divided into small regions administered by a Regional Group Manager (RGM). Two multicast Group Key Management (GKM) techniques are proposed based on the degree of trust in the BSs, named the Fully-trusted (FT) and Semi-trusted (ST) systems. Analytical models are developed to find the multicast session establishment latency and the handover latency for these two protocols. The average latency to perform mutual authentication of the software vendor and a vehicle and to send the multicast session key during multicast session initialization, and the handoff latency during a multicast session, are calculated. Analytical and simulation results show that the link establishment latency per vehicle of the proposed schemes is in the range of a few seconds, with the ST system requiring a few milliseconds more than the FT system. The handoff latency is also in the range of a few seconds, and in some cases the ST system requires less handoff time than the FT system. Thus, it is possible to build an efficient GKM protocol without placing too much trust in the BSs.
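
    A minimal sketch of the dual-copy integrity check described above, assuming a pre-shared key and using a keyed SHA-256 digest as the Message Digest (MD). The key, payload, and helper names are illustrative, and the packet-level encryption of the real scheme is elided:

        import hashlib
        import hmac

        AUTH_KEY = b"pre-shared-authentication-key"   # hypothetical pre-shared key

        def package(software: bytes, key: bytes) -> tuple[bytes, bytes]:
            """Sender side: attach a keyed Message Digest (MD) to the payload.
            The real scheme also encrypts the payload with a secret key;
            encryption is elided to keep this sketch dependency-free."""
            md = hmac.new(key, software, hashlib.sha256).digest()
            return software, md

        def vehicle_accepts(copies: list[tuple[bytes, bytes]], key: bytes) -> bool:
            """Install only if more than one copy arrives, every MD verifies,
            and all copies are byte-identical (the redundancy rule)."""
            if len(copies) < 2:
                return False
            payloads = []
            for payload, md in copies:
                expected = hmac.new(key, payload, hashlib.sha256).digest()
                if not hmac.compare_digest(md, expected):
                    return False
                payloads.append(payload)
            return all(p == payloads[0] for p in payloads)

        image = b"ECU firmware v2.1"
        copy1 = package(image, AUTH_KEY)
        copy2 = package(image, AUTH_KEY)
        print(vehicle_accepts([copy1, copy2], AUTH_KEY))   # True
        print(vehicle_accepts([copy1], AUTH_KEY))          # False: one copy only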

  17. Evolution of a radio communication relay system

    NASA Astrophysics Data System (ADS)

    Nguyen, Hoa G.; Pezeshkian, Narek; Hart, Abraham; Burmeister, Aaron; Holz, Kevin; Neff, Joseph; Roth, Leif

    2013-05-01

    Providing long-distance, non-line-of-sight control for unmanned ground robots has long been recognized as a problem, considering the nature of the required high-bandwidth radio links. In the early 2000s, the DARPA Mobile Autonomous Robot Software (MARS) program funded the Space and Naval Warfare Systems Center (SSC) Pacific to demonstrate a capability for autonomous mobile communication relaying on a number of Pioneer laboratory robots. This effort also resulted in the development of ad hoc networking radios and software that were later leveraged in the development of a more practical and logistically simpler system, the Automatically Deployed Communication Relays (ADCR). Funded by the Joint Ground Robotics Enterprise and internally by SSC Pacific, several generations of ADCR systems introduced increasingly capable hardware and software for automatic maintenance of communication links through deployment of static relay nodes from mobile robots. This capability was finally tapped in 2010 to fulfill an urgent need from theater: 243 kits of ruggedized, robot-deployable communication relays were produced and sent to Afghanistan in 2012 to extend the range of EOD and tactical ground robots. This paper provides a summary of the evolution of the radio relay technology at SSC Pacific and then focuses on the two most recent stages: the Manually-Deployed Communication Relays and the latest effort to automate the deployment of these ruggedized and fielded relay nodes.

  18. Assessment of the Combat Developer’s Role in Post-Deployment Software Support (PDSS) 30 June 1980 - 28 February 1981. Volume IV.

    DTIC Science & Technology

    1981-01-31

    ...responsibilities of the US Army Intelligence and Security Command (INSCOM), the US Army Communications Command (USACC), and the US Army Computer Systems Command (USACSC). ...necessary to sustain, modify, and improve a deployed system's computer software, as defined by the User or his representative. It includes evaluation

  19. LESS: Link Estimation with Sparse Sampling in Intertidal WSNs

    PubMed Central

    Ji, Xiaoyu; Chen, Yi-chao; Li, Xiaopeng; Xu, Wenyuan

    2018-01-01

    Deploying wireless sensor networks (WSNs) in the intertidal area is an effective approach for environmental monitoring. To sustain reliable data delivery in such a dynamic environment, a link quality estimation mechanism is crucial. However, our observations in two real WSN systems deployed in intertidal areas reveal that link updates in routing protocols often waste energy and bandwidth due to frequent link quality measurements and updates. In this paper, we carefully investigate the network dynamics using real-world sensor network data and find it feasible to achieve accurate estimation of link quality using sparse sampling. We design and implement a compressive-sensing-based link quality estimation protocol, LESS, which incorporates both spatial and temporal characteristics of the system to aid link updates in routing protocols. We evaluate LESS in both real WSN systems and a large-scale simulation, and the results show that LESS can reduce energy and bandwidth consumption by up to 50% while still achieving more than 90% link quality estimation accuracy. PMID:29494557
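
    The abstract gives no algorithmic details, so the following is only a generic illustration of the idea behind sparse-sampling link estimation: if link qualities are spatially and temporally correlated, a links-by-time matrix is approximately low rank and can be recovered from a small fraction of measurements. This naive SVD-based completion is a stand-in, not the LESS protocol, and all data and parameters are synthetic:

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic link-quality matrix (links x time): smooth trends give it
        # low rank, mimicking the spatial/temporal correlation LESS exploits.
        t = np.linspace(0.0, 1.0, 100)
        truth = np.vstack([0.9 - 0.2 * t,
                           0.6 + 0.3 * t,
                           0.8 - 0.1 * np.sin(2 * np.pi * t)])

        mask = rng.random(truth.shape) < 0.3       # measure only ~30% of entries
        observed = np.where(mask, truth, 0.0)

        def complete(obs, mask, rank=2, iters=200):
            """Naive low-rank completion: alternate a truncated SVD with
            re-imposing the sparse measurements."""
            x = obs.copy()
            for _ in range(iters):
                u, s, vt = np.linalg.svd(x, full_matrices=False)
                x = (u[:, :rank] * s[:rank]) @ vt[:rank]
                x[mask] = obs[mask]                # keep the real samples
            return x

        est = complete(observed, mask)
        err = np.abs(est - truth)[~mask].mean()
        print(f"mean absolute error on unmeasured entries: {err:.4f}")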

  20. Challenges of Implementing Free and Open Source Software (FOSS): Evidence from the Indian Educational Setting

    ERIC Educational Resources Information Center

    Thankachan, Briju; Moore, David Richard

    2017-01-01

    The use of Free and Open Source Software (FOSS), a subset of Information and Communication Technology (ICT), can reduce the cost of purchasing software. Despite the benefit in the initial purchase price of software, deploying software incurs a total cost that goes beyond the initial purchase price. Total cost is a silent issue of FOSS and can only…

  1. Achieving Better Buying Power through Acquisition of Open Architecture Software Systems. Volume 2 Understanding Open Architecture Software Systems: Licensing and Security Research and Recommendations

    DTIC Science & Technology

    2016-01-06

    ...best-of-breed software components and software product lines (SPLs) that are subject to different IP license and cybersecurity requirements. ...commercially priced closed source software components, to be used in the design, implementation, deployment, and evolution of open architecture (OA)... The Department

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Mathew; Bowen, Brian; Coles, Dwight

    The Middleware Automated Deployment Utilities consist of these three components: MAD: a utility designed to automate the deployment of Java applications to multiple Java application servers; the product contains a front-end web utility and back-end deployment scripts. MAR: a web front end to maintain and update the components inside the database. MWR-Encrypt: a web utility to convert a text string to an encrypted string for use by the Oracle WebLogic application server; the encryption is done using the built-in functions of the Oracle WebLogic product and is mainly used to create an encrypted version of a database password.

  3. SEDS1 mission software verification using a signal simulator

    NASA Technical Reports Server (NTRS)

    Pierson, William E.

    1992-01-01

    The first flight of the Small Expendable Deployer System (SEDS1) is scheduled to fly as the secondary payload of a Delta II in March 1993. The objective of the SEDS1 mission is to collect data to validate the concept of tethered satellite systems and to verify computer simulations used to predict their behavior. SEDS1 will deploy a 50 lb instrumented satellite as an end mass using a 20 km tether. Langley Research Center is providing the end mass instrumentation, while the Marshall Space Flight Center is designing and building the deployer. The objective of the experiment is to test the SEDS design concept by demonstrating that the system will satisfactorily deploy the full 20 km tether without stopping prematurely, come to a smooth stop on application of a brake, and cut the tether at the proper time after it swings to the local vertical. SEDS1 will also collect data which will be used to test the accuracy of the tether dynamics models used to simulate this type of deployment. The experiment will last about 1.5 hours and complete approximately 1.5 orbits. Radar tracking of the Delta II and the end mass is planned. In addition, the SEDS1 on-board computer will continuously record, store, and transmit mission data over the Delta II S-band telemetry system. The Data System will count tether windings as the tether unwinds, log the times of each turn and other mission events, monitor tether tension, and record the temperature of system components. A summary of the measurements taken during the SEDS1 mission is presented. The Data System will also control the tether brake and cutter mechanisms. Preliminary versions of two major sections of the flight software, the data telemetry modules and the data collection modules, were developed and tested under the 1990 NASA/ASEE Summer Faculty Fellowship Program. To facilitate the debugging of these software modules, a prototype SEDS Data System was programmed to simulate turn-count signals. During the 1991 summer program, the concept of simulating signals produced by the SEDS electronics systems and circuits was expanded and more precisely defined. During the 1992 summer program, the SEDS signal simulator was programmed to test the requirements of the SEDS Mission Software, and this simulator will be used in the formal verification of the SEDS Mission Software. A formal test procedures specification was written which incorporates the use of the signal simulator to test the SEDS Mission Software and procedures for testing the other major component of the SEDS software, the Monitor Software.
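
    A hedged sketch of what a turn-count signal simulator of this kind might look like: it emits synthetic pulse timestamps following an assumed deployment-rate profile, the sort of input stream a prototype data system could feed to mission software under test. The profile shape and all numbers are invented for illustration, not SEDS1 values:

        import numpy as np

        def turn_rate(t: float) -> float:
            """Assumed deployment profile (turns/s): slow spin-up, then braking."""
            if t < 3000.0:
                return 0.5 + 2.5 * (t / 3000.0)
            return max(3.0 - 0.01 * (t - 3000.0), 0.1)

        def simulate_turn_events(duration_s: float, dt: float = 0.01):
            """Emit timestamps of simulated turn-count pulses, the kind of
            signal the prototype data system generated for software testing."""
            events, accum, t = [], 0.0, 0.0
            while t < duration_s:
                accum += turn_rate(t) * dt     # fractional turns elapsed
                if accum >= 1.0:
                    events.append(t)
                    accum -= 1.0
                t += dt
            return np.asarray(events)

        events = simulate_turn_events(5400.0)   # roughly a 1.5 h experiment
        print(f"{events.size} simulated turns; first at t = {events[0]:.2f} s")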

  4. 2012 Eco-Logical grant program annual report

    DOT National Transportation Integrated Search

    2000-01-01

    What is IDAS? IDAS, which stands for the ITS Deployment Analysis System, is software developed by the Federal Highway Administration that can be used to perform sketch planning analysis for ITS deployments. Planners and others can use IDAS to calcula...

  5. Development of the updated system of city underground pipelines based on Visual Studio

    NASA Astrophysics Data System (ADS)

    Zhang, Jianxiong; Zhu, Yun; Li, Xiangdong

    2009-10-01

    Our city operates an integrated pipeline network management system built on ArcGIS Engine 9.1 as the underlying development platform, with Oracle9i as the basic database for storing data. In this system, ArcGIS SDE 9.1 serves as the spatial data engine, and the system is a comprehensive management application developed with Visual Studio visual development tools. Because the system's pipeline update function suffered from slow updates and occasional data loss, we added a new, self-developed update module to ensure that underground pipeline data can be updated conveniently and frequently in real time and that the data remain current and complete. The module provides powerful data update functions, including data input and output and rapid bulk updates. The newly developed module is built with Visual Studio visual development tools and uses Access as its underlying database. Graphics can be edited in AutoCAD, and the database is updated through a link between the graphics and the system. Practice shows that the update module is compatible with the original system and updates the database reliably and efficiently.

  6. Web Application Software for Ground Operations Planning Database (GOPDb) Management

    NASA Technical Reports Server (NTRS)

    Lanham, Clifton; Kallner, Shawn; Gernand, Jeffrey

    2013-01-01

    A Web application facilitates collaborative development of the ground operations planning document. This will reduce costs and development time for new programs by incorporating the data governance, access control, and revision tracking of the ground operations planning data. Ground Operations Planning requires the creation and maintenance of detailed timelines and documentation. The GOPDb Web application was created using state-of-the-art Web 2.0 technologies, and was deployed as SaaS (Software as a Service), with an emphasis on data governance and security needs. Application access is managed using two-factor authentication, with data write permissions tied to user roles and responsibilities. Multiple instances of the application can be deployed on a Web server to meet the robust needs for multiple, future programs with minimal additional cost. This innovation features high availability and scalability, with no additional software that needs to be bought or installed. For data governance and security (data quality, management, business process management, and risk management for data handling), the software uses NAMS. No local copy/cloning of data is permitted. Data change log/tracking is addressed, as well as collaboration, work flow, and process standardization. The software provides on-line documentation and detailed Web-based help. There are multiple ways that this software can be deployed on a Web server to meet ground operations planning needs for future programs. The software could be used to support commercial crew ground operations planning, as well as commercial payload/satellite ground operations planning. The application source code and database schema are owned by NASA.

  7. Talking Back: Weapons, Warfare, and Feedback

    DTIC Science & Technology

    2010-04-01

    realize that these laws are not laws of physics. They don't allow for performance or effectiveness comparisons either, as they don't have a common... the weapon's next software update. Software updates are done by physical connections, like most legacy systems, as well as by secure data link... Generally the land-based Air Force squadrons use physical connections due to the increased reliability, while sea-based squadrons use the wireless

  8. Accurate and efficient integration for molecular dynamics simulations at constant temperature and pressure

    NASA Astrophysics Data System (ADS)

    Lippert, Ross A.; Predescu, Cristian; Ierardi, Douglas J.; Mackenzie, Kenneth M.; Eastwood, Michael P.; Dror, Ron O.; Shaw, David E.

    2013-10-01

    In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity.
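
    A toy sketch of the separation the paper describes, assuming a harmonic test system and a simple Berendsen-style velocity rescaling as a stand-in thermostat (not the authors' integrator): the velocity-Verlet particle update runs every step, while the thermostat is applied only every THERMO_INTERVAL steps:

        import numpy as np

        rng = np.random.default_rng(1)
        n, dt, kT, mass = 64, 0.002, 1.0, 1.0    # toy, dimensionless units
        x = rng.normal(size=(n, 3))
        v = rng.normal(scale=np.sqrt(kT / mass), size=(n, 3))

        def forces(pos):
            return -pos   # toy harmonic forces; real MD evaluates pair interactions

        def rescale(vel, kT_target, coupling=0.5):
            """Weak-coupling velocity rescaling (a stand-in thermostat step)."""
            kT_now = mass * np.mean(np.sum(vel * vel, axis=1)) / 3.0
            lam = np.sqrt(1.0 + coupling * (kT_target / kT_now - 1.0))
            return vel * lam

        THERMO_INTERVAL = 50   # thermostat every 50 steps, not every step
        f = forces(x)
        for step in range(1000):
            v += 0.5 * dt * f / mass        # velocity-Verlet half kick
            x += dt * v                     # drift
            f = forces(x)
            v += 0.5 * dt * f / mass        # second half kick
            if step % THERMO_INTERVAL == 0:
                v = rescale(v, kT)          # infrequent, communication-heavy step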

  9. An Alternative Flight Software Trigger Paradigm: Applying Multivariate Logistic Regression to Sense Trigger Conditions Using Inaccurate or Scarce Information

    NASA Technical Reports Server (NTRS)

    Smith, Kelly M.; Gay, Robert S.; Stachowiak, Susan J.

    2013-01-01

    In late 2014, NASA will fly the Orion capsule on a Delta IV-Heavy rocket for the Exploration Flight Test-1 (EFT-1) mission. For EFT-1, the Orion capsule will be flying with a new GPS receiver and new navigation software. Given the experimental nature of the flight, the flight software must be robust to the loss of GPS measurements. Once the high-speed entry is complete, the drogue parachutes must be deployed within the proper conditions to stabilize the vehicle prior to deploying the main parachutes. When GPS is available in nominal operations, the vehicle will deploy the drogue parachutes based on an altitude trigger. However, when GPS is unavailable, the navigated altitude errors become excessively large, driving the need for a backup barometric altimeter to improve altitude knowledge. In order to increase overall robustness, the vehicle also has an alternate method of triggering the parachute deployment sequence based on planet-relative velocity if both the GPS and the barometric altimeter fail. However, this backup trigger results in large altitude errors relative to the targeted altitude. Motivated by this challenge, this paper demonstrates how logistic regression may be employed to semi-automatically generate robust triggers based on statistical analysis. Logistic regression is used as a ground processor pre-flight to develop a statistical classifier. The classifier would then be implemented in flight software and executed in real-time. This technique offers improved performance even in the face of highly inaccurate measurements. Although the logistic regression-based trigger approach will not be implemented within EFT-1 flight software, the methodology can be carried forward for future missions and vehicles.
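
    A hedged illustration of the pre-flight workflow the abstract outlines, using scikit-learn's LogisticRegression: train a classifier on Monte Carlo samples that pair a noisy sensed quantity with a ground-truth trigger label, then carry the fitted model into a cheap real-time test. It is reduced to a single feature for brevity, and all numbers, trends, and thresholds are invented, not EFT-1 values:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)

        # Hypothetical pre-flight Monte Carlo: true altitude (ft) and the
        # noisy planet-relative velocity (ft/s) the vehicle would sense.
        n = 5000
        alt_true = rng.uniform(15_000, 45_000, n)
        vel_true = 400.0 + 0.004 * alt_true              # invented trend
        vel_meas = vel_true + rng.normal(0.0, 25.0, n)   # inaccurate sensing

        # Ground-truth label for training: has the vehicle descended below
        # the assumed deploy altitude?
        label = alt_true < 25_000

        clf = LogisticRegression().fit(vel_meas.reshape(-1, 1), label)

        # The fitted coefficients would ride along in flight software as a
        # cheap real-time test on the sensed velocity.
        p = clf.predict_proba(np.array([[480.0]]))[0, 1]
        print(f"P(below deploy altitude | measured 480 ft/s) = {p:.2f}")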

  10. An Alternative Flight Software Paradigm: Applying Multivariate Logistic Regression to Sense Trigger Conditions using Inaccurate or Scarce Information

    NASA Technical Reports Server (NTRS)

    Smith, Kelly; Gay, Robert; Stachowiak, Susan

    2013-01-01

    In late 2014, NASA will fly the Orion capsule on a Delta IV-Heavy rocket for the Exploration Flight Test-1 (EFT-1) mission. For EFT-1, the Orion capsule will be flying with a new GPS receiver and new navigation software. Given the experimental nature of the flight, the flight software must be robust to the loss of GPS measurements. Once the high-speed entry is complete, the drogue parachutes must be deployed within the proper conditions to stabilize the vehicle prior to deploying the main parachutes. When GPS is available in nominal operations, the vehicle will deploy the drogue parachutes based on an altitude trigger. However, when GPS is unavailable, the navigated altitude errors become excessively large, driving the need for a backup barometric altimeter to improve altitude knowledge. In order to increase overall robustness, the vehicle also has an alternate method of triggering the parachute deployment sequence based on planet-relative velocity if both the GPS and the barometric altimeter fail. However, this backup trigger results in large altitude errors relative to the targeted altitude. Motivated by this challenge, this paper demonstrates how logistic regression may be employed to semi-automatically generate robust triggers based on statistical analysis. Logistic regression is used as a ground processor pre-flight to develop a statistical classifier. The classifier would then be implemented in flight software and executed in real-time. This technique offers improved performance even in the face of highly inaccurate measurements. Although the logistic regression-based trigger approach will not be implemented within EFT-1 flight software, the methodology can be carried forward for future missions and vehicles.

  11. An Alternative Flight Software Trigger Paradigm: Applying Multivariate Logistic Regression to Sense Trigger Conditions using Inaccurate or Scarce Information

    NASA Technical Reports Server (NTRS)

    Smith, Kelly M.; Gay, Robert S.; Stachowiak, Susan J.

    2013-01-01

    In late 2014, NASA will fly the Orion capsule on a Delta IV-Heavy rocket for the Exploration Flight Test-1 (EFT-1) mission. For EFT-1, the Orion capsule will be flying with a new GPS receiver and new navigation software. Given the experimental nature of the flight, the flight software must be robust to the loss of GPS measurements. Once the high-speed entry is complete, the drogue parachutes must be deployed within the proper conditions to stabilize the vehicle prior to deploying the main parachutes. When GPS is available in nominal operations, the vehicle will deploy the drogue parachutes based on an altitude trigger. However, when GPS is unavailable, the navigated altitude errors become excessively large, driving the need for a backup barometric altimeter. In order to increase overall robustness, the vehicle also has an alternate method of triggering the drogue parachute deployment based on planet-relative velocity if both the GPS and the barometric altimeter fail. However, this velocity-based trigger results in large altitude errors relative to the targeted altitude. Motivated by this challenge, this paper demonstrates how logistic regression may be employed to automatically generate robust triggers based on statistical analysis. Logistic regression is used as a ground processor pre-flight to develop a classifier. The classifier would then be implemented in flight software and executed in real-time. This technique offers excellent performance even in the face of highly inaccurate measurements. Although the logistic regression-based trigger approach will not be implemented within EFT-1 flight software, the methodology can be carried forward for future missions and vehicles.

  12. FMT (Flight Software Memory Tracker) For Cassini Spacecraft-Software Engineering Using JAVA

    NASA Technical Reports Server (NTRS)

    Kan, Edwin P.; Uffelman, Hal; Wax, Allan H.

    1997-01-01

    The software engineering design of the Flight Software Memory Tracker (FMT) Tool is discussed in this paper. FMT is a ground analysis software set, consisting of utilities and procedures, designed to track the flight software, i.e., images of memory load and updatable parameters of the computers on-board Cassini spacecraft. FMT is implemented in Java.

  13. Software Engineering Infrastructure in a Large Virtual Campus

    ERIC Educational Resources Information Center

    Cristobal, Jesus; Merino, Jorge; Navarro, Antonio; Peralta, Miguel; Roldan, Yolanda; Silveira, Rosa Maria

    2011-01-01

    Purpose: The design, construction and deployment of a large virtual campus are a complex issue. Present virtual campuses are made of several software applications that complement e-learning platforms. In order to develop and maintain such virtual campuses, a complex software engineering infrastructure is needed. This paper aims to analyse the…

  14. Integrated Environment for Development and Assurance

    DTIC Science & Technology

    2015-01-26

    We rely on software for safe aircraft operation. Embedded software systems introduce a new class of... Developer, Compute Platform, Runtime Architecture, Application Software, Embedded SW System Engineer, Data Stream Characteristics... Latency jitter affects... Why do system-level failures still occur despite fault tolerance techniques being deployed in systems? Embedded software system as major source of

  15. Computer-Assisted Language Learning for Japanese on the Macintosh: An Update of What's Available.

    ERIC Educational Resources Information Center

    Darnall, Cliff; And Others

    This paper outlines a presentation on available Macintosh computer software for learning Japanese. The software systems described are categorized by their emphasis on speaking, writing, or reading, with a special section on software for young learners. Software that emphasizes spoken language includes "Berlitz for Business…

  16. GeoSciML and EarthResourceML Update, 2012

    NASA Astrophysics Data System (ADS)

    Richard, S. M.; Commission for the Management and Application of Geoscience Information (CGI) Interoperability Working Group

    2012-12-01

    CGI Interoperability Working Group activities during 2012 include deployment of services using the GeoSciML-Portrayal schema, addition of new vocabularies to support properties added in version 3.0, improvements to server software for deploying services, introduction of EarthResourceML v.2 for mineral resources, and collaboration with the IUSS on a markup language for soils information. GeoSciML and EarthResourceML have been used as the basis for the INSPIRE Geology and Mineral Resources specifications respectively. GeoSciML-Portrayal is an OGC GML simple-feature application schema for presentation of geologic map unit, contact, and shear displacement structure (fault and ductile shear zone) descriptions in web map services. Use of standard vocabularies for geologic age and lithology enables map services using shared legends to achieve visual harmonization of maps provided by different services. New vocabularies have been added to the collection of CGI vocabularies provided to support interoperable GeoSciML services, and can be accessed through http://resource.geosciml.org. Concept URIs can be dereferenced to obtain SKOS RDF or HTML representations using the SISSVoc vocabulary service. New releases of the FOSS GeoServer application greatly improve support for complex XML feature schemas like GeoSciML, and the ArcGIS for INSPIRE extension implements similar complex feature support for ArcGIS Server. These improved server implementations greatly facilitate deploying GeoSciML services. EarthResourceML v2 adds features for information related to mining activities. SoilML provides an interchange format for soil material, soil profile, and terrain information. Work is underway to add GeoSciML to the portfolio of Open Geospatial Consortium (OGC) specifications.
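
    A small sketch of dereferencing a CGI concept URI for a SKOS RDF representation via HTTP content negotiation, as the abstract describes; the specific URI below follows the resource.geosciml.org pattern but should be treated as illustrative:

        import requests

        # Illustrative concept URI following the resource.geosciml.org pattern.
        uri = "http://resource.geosciml.org/classifier/cgi/lithology/basalt"

        resp = requests.get(
            uri,
            headers={"Accept": "application/rdf+xml"},   # ask for RDF, not HTML
            timeout=30,
            allow_redirects=True,                         # SISSVoc-style redirect
        )
        resp.raise_for_status()
        print(resp.headers.get("Content-Type"))
        print(resp.text[:400])   # start of the SKOS description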

  17. GFEChutes Lo-Fi

    NASA Technical Reports Server (NTRS)

    Gist, Emily; Turner, Gary; Shelton, Robert; Vautier, Mana; Shaikh, Ashraf

    2013-01-01

    NASA needed to provide a software model of a parachute system for a manned re-entry vehicle. NASA has parachute codes, e.g., the Descent Simulation System (DSS), that date back to the Apollo Program. Since the space shuttle did not rely on parachutes as its primary descent control mechanism, DSS has not been maintained or incorporated into modern simulation architectures such as Osiris and Antares, which are used for new mission simulations. GFEChutes Lo-Fi is an object-oriented implementation of conventional parachute codes designed for use in modern simulation environments. The GFE (Government Furnished Equipment), low-fidelity (Lo-Fi) parachute model (GFEChutes Lo-Fi) is a software package capable of modeling the effects of multiple parachutes, deployed concurrently and/or sequentially, on a vehicle during the subsonic phase of reentry into planetary atmosphere. The term "low-fidelity" distinguishes models that represent the parachutes as simple forces acting on the vehicle, as opposed to independent aerodynamic bodies. GFEChutes Lo-Fi was created from these existing models to be clean, modular, certified as NASA Class C software, and portable, or "plug and play." The GFE Lo-Fi Chutes Model provides basic modeling capability of a sequential series of parachute activities. Actions include deploying the parachute, changing the reefing on the parachute, and cutting away the parachute. Multiple chutes can be deployed at any given time, but all chutes in that case are assumed to behave as individually isolated chutes; there is no modeling of any interactions between deployed chutes. Drag characteristics of a deployed chute are based on a coefficient of drag, the face area of the chute, and the local dynamic pressure only. The orientation of the chute is approximately modeled for purposes of obtaining torques on the vehicle, but the dynamic state of the chute as a separate entity is not integrated - the treatment is simply an approximation. The innovation in GFEChutes Lo-Fi is to use an object design that closely followed the mechanical characteristics and structure of a physical system of parachutes and their deployment mechanisms. Software objects represent the components of the system, and use of an object hierarchy allows a progression from general component outlines to specific implementations. These extra chutes were not part of the baseline deceleration sequence of drogues and mains, but still had to be simulated. The major innovation in GFEChutes Lo-Fi is the software design and architecture.
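
    A minimal sketch of the low-fidelity drag treatment described above, where a deployed chute contributes a force equal to its drag coefficient times face area times local dynamic pressure, and chutes can be deployed or cut away in sequence. The class, coefficients, and flight conditions are illustrative, not the GFEChutes implementation:

        from dataclasses import dataclass

        @dataclass
        class Chute:
            """One low-fidelity parachute: drag from Cd, face area, and
            local dynamic pressure only, per the description above."""
            cd: float            # drag coefficient (changes when reefing changes)
            area_m2: float       # face (reference) area
            deployed: bool = False

            def drag_n(self, rho: float, v_mps: float) -> float:
                if not self.deployed:
                    return 0.0
                q = 0.5 * rho * v_mps ** 2     # local dynamic pressure
                return self.cd * self.area_m2 * q

        # Sequential events: deploy the drogue, then cut it away and deploy mains.
        drogue = Chute(cd=0.6, area_m2=15.0)
        mains = Chute(cd=0.9, area_m2=100.0)
        rho, v = 0.9, 120.0                    # assumed air density and speed
        drogue.deployed = True
        print(f"drogue drag: {drogue.drag_n(rho, v):.0f} N")
        drogue.deployed, mains.deployed = False, True   # cutaway + main deploy
        total = drogue.drag_n(rho, v) + mains.drag_n(rho, v)
        print(f"mains drag:  {total:.0f} N")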

  18. NASA's Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Ramsay, Christopher M.

    2007-01-01

    NASA relies more and more on software to control, monitor, and verify its safety-critical systems, facilities, and operations. Since the 1960s there has hardly been a spacecraft launched that does not have a computer on board to provide command and control services. There have been recent incidents where software played a role in high-profile mission failures and hazardous incidents. For example, the Mars Climate Orbiter, Mars Polar Lander, DART (Demonstration of Autonomous Rendezvous Technology), and MER (Mars Exploration Rover) Spirit anomalies were all caused by, or contributed to by, software. The Mission Control Centers for the Shuttle, ISS, and unmanned programs are highly dependent on software for data displays, analysis, and mission planning. Despite this growing dependence on software control and monitoring, there has been little to no consistent application of software safety practices and methodology to NASA's projects with safety-critical software. Meanwhile, academia and private industry have been stepping forward with procedures and standards for safety-critical systems and software, for example Dr. Nancy Leveson's book Safeware: System Safety and Computers. The NASA Software Safety Standard, originally published in 1997, was widely ignored due to its complexity and poor organization. It also focused on concepts rather than definite procedural requirements organized around a software project lifecycle. Led by the NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard has recently undergone a significant update. The new standard provides the procedures and guidelines for evaluating a project for safety criticality and then lays out the minimum project lifecycle requirements to assure the software is created, operated, and maintained in the safest possible manner. The update clearly delineates the minimum set of software safety requirements for a project without detailing the implementation of those requirements, which gives projects leeway to meet the requirements in whatever form best suits a particular project's needs and safety risk. In other words, it tells the project what to do, not how to do it. The update also incorporates advances in the state of the practice of software safety from academia and private industry, and addresses some of the more common issues now facing software developers in the NASA environment, such as the use of Commercial Off-the-Shelf software (COTS), Modified OTS (MOTS), Government OTS (GOTS), and reused software. A team from across NASA developed the update, and it has had NASA-wide internal reviews by software engineering, quality, safety, and project management, as well as expert external review. This presentation and paper will discuss the new NASA Software Safety Standard, its organization, and key features. It will start with a brief discussion of some NASA mission failures and incidents that had software as one of their root causes, then give a brief overview of the NASA Software Safety Process, including the key personnel responsibilities and functions that must be performed for safety-critical software.

  19. Preparation and Deployment of a Forward-Deployed, Heavy Air Defense Battalion to Southwest Asia

    DTIC Science & Technology

    1993-04-15

    were updated and present in dental files. These efforts made the preparation for movement to Southwest Asia a simple task. It also allowed soldiers to...the change of command on 6 July 1990. Several fundamental changes in the leadership, training, maintenance, logistics, and care of soldiers and families...associated problems in the areas of personnel accountability and assignment, maintenance administration and readiness, and training deficiencies

  20. Experiences of engineering Grid-based medical software.

    PubMed

    Estrella, F; Hauer, T; McClatchey, R; Odeh, M; Rogulin, D; Solomonides, T

    2007-08-01

    Grid-based technologies are emerging as potential solutions for managing and collaborating distributed resources in the biomedical domain. Few examples exist, however, of successful implementations of Grid-enabled medical systems, and even fewer have been deployed for evaluation in practice. The objective of this paper is to evaluate the use in clinical practice of a Grid-based imaging prototype and to establish directions for engineering future medical Grid developments and their subsequent deployment. The MammoGrid project has deployed a prototype system for clinicians using the Grid as its information infrastructure. To assist in the specification of the system requirements (and for the first time in healthgrid applications), use-case modelling has been carried out in close collaboration with clinicians and radiologists who had no prior experience of this modelling technique. A critical qualitative and, where possible, quantitative analysis of the MammoGrid prototype is presented, leading to a set of recommendations from the delivery of the first deployed Grid-based medical imaging application. We report critically on the application of software engineering techniques in the specification and implementation of the MammoGrid project and show that use-case modelling is a suitable vehicle for representing medical requirements and for communicating effectively with the clinical community. This paper also discusses the practical advantages and limitations of applying the Grid to real-life clinical applications and presents the consequent lessons learned. The work presented in this paper demonstrates that, given suitable commitment from collaborating radiologists, it is practical to deploy medical imaging analysis applications on the Grid in clinical practice, but that standardization and stability of the Grid software are necessary prerequisites for successful healthgrids. The MammoGrid prototype has therefore paved the way for further advanced Grid-based deployments in the medical and biomedical domains.

  1. 78 FR 32169 - Facilitating the Deployment of Text-to-911 and Other Next Generation 911 Applications

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-29

    ...providers of interconnected text messaging services (i.e., all providers of software applications that enable a consumer to send text...

  2. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valassi, A. (CERN); Bartoldus, R.

    The CORAL software is widely used at CERN by the LHC experiments to access the data they store on relational databases, such as Oracle. Two new components have recently been added to implement a model involving a middle-tier 'CORAL server' deployed close to the database and a tree of 'CORAL server proxies', providing data caching and multiplexing, deployed close to the client. A first implementation of the two new components, released in summer 2009, is now deployed in the ATLAS online system to read the data needed by the High Level Trigger, allowing the configuration of a farm of several thousand processes. This paper reviews the architecture of the software, its development status, and its usage in ATLAS.
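
    A toy illustration of the proxy's caching/multiplexing role, with SQLite standing in for the Oracle back end and a memoized read function standing in for the CORAL server proxy tree; none of this reflects the actual CORAL API:

        import functools
        import sqlite3

        # SQLite stands in for the Oracle tier behind the CORAL server.
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE conditions (run INTEGER PRIMARY KEY, payload TEXT)")
        db.executemany("INSERT INTO conditions VALUES (?, ?)",
                       [(i, f"calib-{i}") for i in range(5)])

        @functools.lru_cache(maxsize=1024)
        def cached_read(query: str, params: tuple):
            """Proxy role: identical read-only queries from thousands of
            trigger processes hit the database once, then the cache."""
            return db.execute(query, params).fetchall()

        for _ in range(3000):   # many HLT-like clients asking the same thing
            rows = cached_read("SELECT payload FROM conditions WHERE run=?", (3,))
        print(rows, cached_read.cache_info())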

  3. Use of application containers and workflows for genomic data analysis.

    PubMed

    Schulz, Wade L; Durant, Thomas J S; Siddon, Alexa J; Torres, Richard

    2016-01-01

    The rapid acquisition of biological data and development of computationally intensive analyses has led to a need for novel approaches to software deployment. In particular, the complexity of common analytic tools for genomics makes them difficult to deploy and decreases the reproducibility of computational experiments. Recent technologies that allow for application virtualization, such as Docker, allow developers and bioinformaticians to isolate these applications and deploy secure, scalable platforms that have the potential to dramatically increase the efficiency of big data processing. While limitations exist, this study demonstrates a successful implementation of a pipeline with several discrete software applications for the analysis of next-generation sequencing (NGS) data. With this approach, we significantly reduced the amount of time needed to perform clonal analysis from NGS data in acute myeloid leukemia.
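
    A minimal sketch of launching one containerized pipeline stage from Python with the Docker SDK (docker-py); the image tag, mount paths, and aligner command line are placeholders rather than the study's actual pipeline:

        import docker   # Docker SDK for Python: pip install docker

        client = docker.from_env()
        logs = client.containers.run(
            image="biocontainers/bwa:v0.7.17_cv1",           # placeholder image tag
            command="bwa mem /data/ref.fa /data/sample.fq",  # placeholder command
            volumes={"/home/user/ngs": {"bind": "/data", "mode": "rw"}},
            remove=True,    # discard the container after the step completes
        )
        print(logs.decode()[:200])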

  4. Reconstruction of Cyber and Physical Software Using Novel Spread Method

    NASA Astrophysics Data System (ADS)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    Cyber-physical software has received attention for many years, since around 2010. Many researchers would disagree with deploying the traditional spread method for the reconstruction of cyber-physical software, which embodies the key principles of cyber-physical system reconstruction. NSM (novel spread method), our new methodology for the reconstruction of cyber-physical software, is proposed as the solution to these challenges.

  5. Development of advanced fermentor control applications for use in an industrial automation environment.

    PubMed

    Hamilton, Ryan; Tamminana, Krishna; Boyd, John; Sasaki, Gen; Toda, Alex; Haskell, Sid; Danbe, Elizabeth

    2013-04-01

    We present a software platform developed by Genentech and MathWorks Consulting Group that allows arbitrary MATLAB (MATLAB is a registered trademark of The MathWorks, Inc.) functions to perform supervisory control of process equipment (in this case, fermentors) via the OLE for process control (OPC) communication protocol, under the direction of an industrial automation layer. The software features automated synchronization and deployment of server control code and has been proven to be tolerant of OPC communication interruptions. Since deployment in the spring of 2010, this software has successfully performed supervisory control of more than 700 microbial fermentations in the Genentech pilot plant and has enabled significant reductions in the time required to develop and implement novel control strategies (months reduced to days). The software is available for download at the MathWorks File Exchange Web site at http://www.mathworks.com/matlabcentral/fileexchange/36866.

  6. IMPROVING (SOFTWARE) PATENT QUALITY THROUGH THE ADMINISTRATIVE PROCESS

    PubMed Central

    Rai, Arti K.

    2014-01-01

    The available evidence indicates that patent quality, particularly in the area of software, needs improvement. This Article argues that even an agency as institutionally constrained as the U.S. Patent and Trademark Office (“PTO”) could implement a portfolio of pragmatic, cost-effective quality improvement strategies. The argument in favor of these strategies draws upon not only legal theory and doctrine but also new data from a PTO software examination unit with relatively strict practices. Strategies that revolve around Section 112 of the patent statute could usefully be deployed at the initial examination stage. Other strategies could be deployed within the new post-issuance procedures available to the agency under the America Invents Act. Notably, although the strategies the Article discusses have the virtue of being neutral as to technology, they are likely to have a very significant practical impact in the area of software. PMID:25221346

  7. IMPROVING (SOFTWARE) PATENT QUALITY THROUGH THE ADMINISTRATIVE PROCESS.

    PubMed

    Rai, Arti K

    2013-11-24

    The available evidence indicates that patent quality, particularly in the area of software, needs improvement. This Article argues that even an agency as institutionally constrained as the U.S. Patent and Trademark Office ("PTO") could implement a portfolio of pragmatic, cost-effective quality improvement strategies. The argument in favor of these strategies draws upon not only legal theory and doctrine but also new data from a PTO software examination unit with relatively strict practices. Strategies that revolve around Section 112 of the patent statute could usefully be deployed at the initial examination stage. Other strategies could be deployed within the new post-issuance procedures available to the agency under the America Invents Act. Notably, although the strategies the Article discusses have the virtue of being neutral as to technology, they are likely to have a very significant practical impact in the area of software.

  8. Integrated Functional and Executional Modelling of Software Using Web-Based Databases

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak; Marietta, Roberta

    1998-01-01

    NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software satisfy safety goals. This report discusses the difficulties encountered in doing so and presents a solution based on integrated modelling of software, use of automatic information extraction tools, web technology, and databases.

  9. Secure it now or secure it later: the benefits of addressing cyber-security from the outset

    NASA Astrophysics Data System (ADS)

    Olama, Mohammed M.; Nutaro, James

    2013-05-01

    The majority of funding for research and development (R&D) in cyber-security is focused on the end of the software lifecycle, where systems have been deployed or are nearing deployment. Recruiting of cyber-security personnel is similarly focused on end-of-life expertise. By emphasizing cyber-security at these late stages, security problems are found and corrected when it is most expensive to do so, thus increasing the cost of owning and operating complex software systems. Worse, expenditures on expensive security measures often mean less money for innovative developments. These unwanted increases in cost and potential slowing of innovation are unavoidable consequences of an approach to security that finds and remediates faults after software has been implemented. We argue that software security can be improved and the total cost of a software system can be substantially reduced by an appropriate allocation of resources to the early stages of a software project. We propose that, by adopting a similar allocation of R&D funds to the early stages of the software lifecycle, the costs of cyber-security can be better controlled and, consequently, the positive effects of this R&D on industry will be much more pronounced.

  10. Hedge math: Theoretical limits on minimum stockpile size across nuclear hedging strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lafleur, Jarret Marshall; Roesler, Alexander W.

    2016-09-01

    In June 2013, the Department of Defense published a congressionally mandated, unclassified update on the U.S. Nuclear Employment Strategy. Among the many updates in this document are three key ground rules for guiding the sizing of the non-deployed U.S. nuclear stockpile. Furthermore, these ground rules form an important and objective set of criteria against which potential future stockpile hedging strategies can be evaluated.

  11. Novel features and enhancements in BioBin, a tool for the biologically inspired binning and association analysis of rare variants

    PubMed Central

    Byrska-Bishop, Marta; Wallace, John; Frase, Alexander T; Ritchie, Marylyn D

    2018-01-01

    Motivation: BioBin is an automated bioinformatics tool for the multi-level biological binning of sequence variants. Herein, we present a significant update to BioBin which expands the software to facilitate a comprehensive rare variant analysis and incorporates novel features and analysis enhancements. Results: In BioBin 2.3, we extend our software tool by implementing statistical association testing, updating the binning algorithm, and incorporating novel analysis features, providing a robust, highly customizable, and unified rare variant analysis tool. Availability and implementation: The BioBin software package is open source and freely available to users at http://www.ritchielab.com/software/biobin-download. Contact: mdritchie@geisinger.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28968757

  12. DSN Array Simulator

    NASA Technical Reports Server (NTRS)

    Tikidjian, Raffi; Mackey, Ryan

    2008-01-01

    The DSN Array Simulator (wherein 'DSN' signifies NASA's Deep Space Network) is an updated version of software previously denoted the DSN Receive Array Technology Assessment Simulation. This software is used for computational modeling of a proposed DSN facility comprising user-defined arrays of antennas and transmitting and receiving equipment for microwave communication with spacecraft on interplanetary missions. The simulation includes variations in the spacecraft tracked and changes in communication demand for up to several decades of future operation. Such modeling is performed to estimate facility performance, evaluate requirements that govern facility design, and evaluate proposed improvements in hardware and/or software. The updated version of this software affords enhanced capability for characterizing facility performance against user-defined mission sets. The software includes a Monte Carlo simulation component that enables rapid generation of key mission-set metrics (e.g., numbers of links, data rates, and data volumes) and statistical distributions thereof as functions of time. The updated version also offers expanded capability for mixed-asset network modeling, for example, for running scenarios that involve user-definable mixtures of antennas having different diameters (in contradistinction to a fixed number of antennas having the same fixed diameter). The improved version also affords greater simulation fidelity, sufficient for validation by comparison with actual DSN operations and analytically predictable performance metrics.
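
    A hedged sketch of the kind of Monte Carlo mission-set metric generation the abstract mentions: draw weekly pass counts per mission from assumed demand distributions and accumulate link counts and data volumes. The mission parameters are invented for illustration, not actual DSN values:

        import numpy as np

        rng = np.random.default_rng(7)

        # Invented mission set: (mean tracking passes per week, data rate Mb/s).
        missions = [(7.0, 0.5), (14.0, 2.0), (3.0, 6.0)]
        PASS_HOURS = 8.0   # assumed pass duration

        def sample_week():
            """One Monte Carlo draw of weekly demand across the mission set."""
            total_passes, total_gb = 0, 0.0
            for mean_passes, rate_mbps in missions:
                passes = rng.poisson(mean_passes)
                total_passes += passes
                # Mb/s * s / (8 bits per byte * 1000 MB per GB) -> gigabytes
                total_gb += passes * rate_mbps * PASS_HOURS * 3600 / 8 / 1000
            return total_passes, total_gb

        draws = np.array([sample_week() for _ in range(10_000)])
        print("mean weekly passes:", draws[:, 0].mean())
        print("95th-percentile weekly volume (GB):",
              round(np.percentile(draws[:, 1], 95), 1))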

  13. 24 CFR 908.104 - Requirements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... contracts with a service bureau to provide the system, the software must be periodically updated to.... Housing agencies that currently use automated software packages to transmit Forms HUD-50058 and HUD-50058... software required to develop and maintain an in-house automated data processing system (ADP) used to...

  14. Home | Simulation Research

    Science.gov Websites

    The group specializes in the research, development, and deployment of software supporting the design and control of building energy systems, including the Spawn of EnergyPlus next-generation simulation engine and OpenBuildingControl tools to support control design, deployment, and verification.

  15. The cloud services innovation platform- enabling service-based environmental modelling using infrastructure-as-a-service cloud computing

    USDA-ARS?s Scientific Manuscript database

    Service oriented architectures allow modelling engines to be hosted over the Internet abstracting physical hardware configuration and software deployments from model users. Many existing environmental models are deployed as desktop applications running on user's personal computers (PCs). Migration ...

  16. Logistics Modernization Program Increment 2 (LMP Inc 2)

    DTIC Science & Technology

    2016-03-01

    Executive DoD - Department of Defense DoDAF - DoD Architecture Framework FD - Full Deployment FDD - Full Deployment Decision FY - Fiscal Year IA...Documentation within the LMP Increment 2 MS C ADM, the LMP Increment 2 Business Case was updated for the FDD using change pages to remove information...following approval of the Army Cost Position being developed for the FDD. The LMP Increment 2 Business Case Change Pages were approved and signed by the

  17. Uses of the Drupal CMS Collaborative Framework in the Woods Hole Scientific Community (Invited)

    NASA Astrophysics Data System (ADS)

    Maffei, A. R.; Chandler, C. L.; Work, T. T.; Shorthouse, D.; Furfey, J.; Miller, H.

    2010-12-01

    Organizations that comprise the Woods Hole scientific community (Woods Hole Oceanographic Institution, Marine Biological Laboratory, USGS Woods Hole Coastal and Marine Science Center, Woods Hole Research Center, NOAA NMFS Northeast Fisheries Science Center, SEA Education Association) have a long history of collaborative activity regarding computing, computer network and information technologies that support common, inter-disciplinary science needs. Over the past several years there has been growing interest in the use of the Drupal Content Management System (CMS) playing a variety of roles in support of research projects resident at several of these organizations. Many of these projects are part of science programs that are national and international in scope. Here we survey the current uses of Drupal within the Woods Hole scientific community and examine reasons it has been adopted. The promise of emerging semantic features in the Drupal framework is examined and projections of how pre-existing Drupal-based websites might benefit are made. Closer examination of Drupal software design exposes it as more than simply a content management system. The flexibility of its architecture; the power of its taxonomy module; the care taken in nurturing the open-source developer community that surrounds it (including organized and often well-attended code sprints); the ability to bind emerging software technologies as Drupal modules; the careful selection process used in adopting core functionality; multi-site hosting and cross-site deployment of updates and a recent trend towards development of use-case inspired Drupal distributions casts Drupal as a general-purpose application deployment framework. Recent work in the semantic arena casts Drupal as an emerging RDF framework as well. Examples of roles played by Drupal-based websites within the Woods Hole scientific community that will be discussed include: science data metadata database, organization main website, biological taxonomy development, bibliographic database, physical media data archive inventory manager, disaster-response website development framework, science project task management, science conference planning, and spreadsheet-to-database converter.

  18. SUNREL Documentation | Buildings | NREL

    Science.gov Websites

    using SUNREL software. When changes are made to the manual, we post updates under the updates link below. Although limited technical support is provided to users, we do post answers to some frequently asked

  19. R-189 (C-620) air compressor control logic software documentation. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, K.E.

    1995-06-08

    This relates to FFTF plant air compressors. The purpose of this document is to provide an updated Computer Software Description for the software to be used on the R-189 (C-620-C) air compressor programmable controllers. Logic software design changes were required to allow automatic starting of a compressor that had not been previously started.

  20. Lifecycle Prognostics Architecture for Selected High-Cost Active Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    N. Lybeck; B. Pham; M. Tawfik

    There is an extensive body of knowledge, and there are some commercial products available, for calculating prognostics, remaining useful life, and damage index parameters. The application of these technologies within the nuclear power community is still in its infancy. Online monitoring and condition-based maintenance are seeing increasing acceptance and deployment, and these activities provide the technological bases for expanding to add predictive/prognostics capabilities. In looking to deploy prognostics, three key aspects of systems are presented and discussed: (1) component/system/structure selection, (2) prognostic algorithms, and (3) prognostics architectures. Criteria are presented for component selection: feasibility, failure probability, consequences of failure, and benefits of the prognostics and health management (PHM) system. The basis and methods commonly used for prognostics algorithms are reviewed and summarized. Criteria for evaluating PHM architectures are presented: open, modular architecture; platform independence; graphical user interface for system development and/or results viewing; web enabled tools; scalability; and standards compatibility. Thirteen software products were identified and discussed in the context of being potentially useful for deployment in a PHM program applied to systems in a nuclear power plant (NPP). These products were evaluated by using information available from company websites, product brochures, fact sheets, scholarly publications, and direct communication with vendors. The thirteen products were classified into four groups of software: (1) research tools, (2) PHM system development tools, (3) deployable architectures, and (4) peripheral tools. Eight software tools fell into the deployable architectures category. Of those eight, only two employ all six modules of a full PHM system. Five systems did not offer prognostic estimates, and one system employed the full health monitoring suite but lacked operations and maintenance support. Each product is briefly described in Appendix A. Selection of the most appropriate software package for a particular application will depend on the chosen component, system, or structure. Ongoing research will determine the most appropriate choices for a successful demonstration of PHM systems in aging NPPs.

  1. Separating Added Value from Hype: Some Experiences and Prognostications

    NASA Astrophysics Data System (ADS)

    Reed, Dan

    2004-03-01

    These are exciting times for the interplay of science and computing technology. As new data archives, instruments and computing facilities are connected nationally and internationally, a new model of distributed scientific collaboration is emerging. However, any new technology brings both opportunities and challenges, and Grids are no exception. In this talk, we discuss experiences deploying Grid software in production environments, illustrated with examples from the NSF PACI Alliance, the NSF Extensible Terascale Facility (ETF) and other Grid projects. From these experiences, we derive guidelines for deployment and suggestions for community engagement, software development and infrastructure.

  2. Use of application containers and workflows for genomic data analysis

    PubMed Central

    Schulz, Wade L.; Durant, Thomas J. S.; Siddon, Alexa J.; Torres, Richard

    2016-01-01

    Background: The rapid acquisition of biological data and the development of computationally intensive analyses have led to a need for novel approaches to software deployment. In particular, the complexity of common analytic tools for genomics makes them difficult to deploy and decreases the reproducibility of computational experiments. Methods: Recent technologies for application virtualization, such as Docker, allow developers and bioinformaticians to isolate these applications and deploy secure, scalable platforms that have the potential to dramatically increase the efficiency of big data processing. Results: While limitations exist, this study demonstrates a successful implementation of a pipeline with several discrete software applications for the analysis of next-generation sequencing (NGS) data. Conclusions: With this approach, we significantly reduced the amount of time needed to perform clonal analysis from NGS data in acute myeloid leukemia. PMID:28163975
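
    The study reports its pipeline rather than publishing code here; as a rough sketch of the container-per-tool pattern it describes, the following uses the Python Docker SDK. The image tags, commands, and data paths are placeholders for illustration, not the study's actual pipeline.

      import docker  # pip install docker

      client = docker.from_env()

      # Each analysis stage runs in its own container image; a bind-mounted
      # working directory passes intermediate files between stages.
      stages = [
          ("biocontainers/bwa:latest", "bwa mem ref.fa reads.fq -o aln.sam"),
          ("biocontainers/samtools:latest", "samtools sort aln.sam -o aln.bam"),
      ]

      for image, command in stages:
          client.containers.run(
              image,
              command,
              volumes={"/data/run1": {"bind": "/data", "mode": "rw"}},
              working_dir="/data",
              remove=True,  # discard the container once the stage completes
          )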

  3. LATUX: An Iterative Workflow for Designing, Validating, and Deploying Learning Analytics Visualizations

    ERIC Educational Resources Information Center

    Martinez-Maldonado, Roberto; Pardo, Abelardo; Mirriahi, Negin; Yacef, Kalina; Kay, Judy; Clayphan, Andrew

    2015-01-01

    Designing, validating, and deploying learning analytics tools for instructors or students is a challenge that requires techniques and methods from different disciplines, such as software engineering, human-computer interaction, computer graphics, educational design, and psychology. Whilst each has established its own design methodologies, we now…

  4. Software platform virtualization in chemistry research and university teaching

    PubMed Central

    2009-01-01

    Background Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, Linux or Mac OS X. Instead of installing software on different computers, it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single host operating system to execute multiple guest operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Results Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages, we have confirmed that the computational speed penalty for using virtual machines is low, around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Conclusion Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and the development of software for different operating systems. To obtain maximum performance, the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs, especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics, as well as the missing cheminformatics education at universities worldwide. PMID:20150997

  5. Software platform virtualization in chemistry research and university teaching.

    PubMed

    Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver

    2009-11-16

    Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, Linux or Mac OS X. Instead of installing software on different computers, it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single host operating system to execute multiple guest operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages, we have confirmed that the computational speed penalty for using virtual machines is low, around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and the development of software for different operating systems. To obtain maximum performance, the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs, especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics, as well as the missing cheminformatics education at universities worldwide.
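
    The 5% to 10% penalty figure invites an obvious measurement recipe: time the same workload natively and inside the virtual machine, then compare. A minimal sketch, assuming a command-line chemistry tool as the workload (the obabel invocation is an example command, not one of the packages benchmarked in the paper):

      import statistics
      import subprocess
      import time

      def median_runtime(cmd, repeats=5):
          """Median wall-clock time of an external command over several runs."""
          samples = []
          for _ in range(repeats):
              start = time.perf_counter()
              subprocess.run(cmd, check=True)
              samples.append(time.perf_counter() - start)
          return statistics.median(samples)

      # Run this script natively, then again inside the VM, and compare:
      # penalty (%) = (vm_time - native_time) / native_time * 100
      t = median_runtime(["obabel", "in.sdf", "-O", "out.smi"])
      print(f"median runtime: {t:.2f} s")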

  6. Virus Alert: Ten Steps to Safe Computing.

    ERIC Educational Resources Information Center

    Gunter, Glenda A.

    1997-01-01

    Discusses computer viruses and explains how to detect them; discusses virus protection and the need to update antivirus software; and offers 10 safe computing tips, including scanning floppy disks and commercial software, how to safely download files from the Internet, avoiding pirated software copies, and backing up files. (LRW)

  7. The eSMAF: a software for the assessment and follow-up of functional autonomy in geriatrics

    PubMed Central

    Boissy, Patrick; Brière, Simon; Tousignant, Michel; Rousseau, Eric

    2007-01-01

    Background Functional status or disability forms the core of most assessment instruments used to identify the mix and level of resources and services needed by older adults who possess common characteristics. The Functional Autonomy Measurement System (SMAF) is a 29-item scale measuring functional ability in five different areas. It has been recommended for use in home care, for allocation of chronic beds, for developing care plans in institutional settings and for epidemiological and evaluative studies. The SMAF can also be used with a case-mix classification system (Iso-SMAF) to allocate resources based on patients' functional autonomy characteristics. The objective of this project was to develop a software version of the SMAF to facilitate the evaluation of the functional status of older adults in health services research and to optimize the clinical decision-making process. Results The eSMAF was developed over a 24-month period using a modified waterfall software engineering process. Requirements and functional specifications were determined using focus groups of stakeholders. Different versions of the software were iteratively field-tested in clinical and research environments and software adaptations made accordingly. User documentation and online help were created to assist the deployment of the software. The software is available in French or English versions under a 30-day unregistered demonstration license or a free restricted registered academic license. It can be used locally on a Windows-based PC or over a network to input SMAF data into a database, search and aggregate client data according to clinical and/or administrative criteria, and generate summary or detailed reports of selected data sets for print or export to another database. Conclusion In the last year, the software has been successfully deployed in the clinical workflow of different institutions in research and clinical applications. The software performed relatively well in terms of stability and performance. Barriers to implementation included antiquated computer hardware, low computer literacy and limited access to IT support. Key factors for the deployment of the software included standardization of the workflow, user training and support. PMID:17298673

  8. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT; ENVIRONMENTAL DECISION SUPPORT SOFTWARE; ENVIRONMENTAL SOFTWARE SITEPRO VERSION 2.0

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has created the Environmental Technology Verification Program (ETV) to facilitate the deployment of innovative or improved environmental technologies through performance verification and dissemination of information. The goal of the...

  9. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.

    2014-06-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It allows resources to be allocated to any application dynamically and efficiently, and virtual machines to be tailored to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
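
    Because the site exposes an EC2-compatible API, a new worker VM can be launched and contextualized from any EC2 client. The sketch below uses boto3 against a hypothetical endpoint; the endpoint URL, credentials, image id, and cloud-init payload are all placeholders, not the INFN-Torino configuration.

      import boto3  # pip install boto3

      # Placeholder endpoint for the private cloud's EC2-compatible API.
      ec2 = boto3.client(
          "ec2",
          endpoint_url="https://cloud.example.org:4567",
          aws_access_key_id="ACCESS_KEY",
          aws_secret_access_key="SECRET_KEY",
          region_name="default",
      )

      # Cloud-init user-data contextualizes a generic image at boot,
      # e.g. turning it into a batch-farm worker.
      user_data = ("#cloud-config\n"
                   "packages: [htcondor]\n"
                   "runcmd:\n"
                   "  - [systemctl, start, condor]\n")

      ec2.run_instances(
          ImageId="ami-00000001",   # placeholder image id
          InstanceType="m1.large",
          MinCount=1,
          MaxCount=1,
          UserData=user_data,
      )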

  10. Trajectory Design for a Cislunar Cubesat Leveraging Dynamical Systems Techniques: The Lunar Icecube Mission

    NASA Technical Reports Server (NTRS)

    Bosanac, Natasha; Cox, Andrew; Howell, Kathleen C.; Folta, David C.

    2017-01-01

    Lunar IceCube is a 6U CubeSat that is designed to detect and observe lunar volatiles from a highly inclined orbit. This spacecraft, equipped with a low-thrust engine, will be deployed from the upcoming Exploration Mission-1 vehicle in late 2018. However, significant uncertainty in the deployment conditions for secondary payloads impacts both the availability and geometry of transfers that deliver the spacecraft to the lunar vicinity. A framework that leverages dynamical systems techniques is applied to a recently updated set of deployment conditions and spacecraft parameter values for the Lunar IceCube mission, demonstrating the capability for rapid trajectory design.

  11. Large Deployable Reflector (LDR) system concept and technology definition study. Analysis of space station requirements for LDR

    NASA Astrophysics Data System (ADS)

    Agnew, Donald L.; Vinkey, Victor F.; Runge, Fritz C.

    1989-04-01

    A study was conducted to determine how the Large Deployable Reflector (LDR) might benefit from the use of the space station for assembly, checkout, deployment, servicing, refurbishment, and technology development. Requirements that must be met by the space station to supply benefits for a selected scenario are summarized. Quantitative and qualitative data are supplied. Space station requirements for LDR which may be utilized by other missions are identified. A technology development mission for LDR is outlined and requirements summarized. A preliminary experiment plan is included. Space Station Data Base SAA 0020 and TDM 2411 are updated.

  12. Large Deployable Reflector (LDR) system concept and technology definition study. Analysis of space station requirements for LDR

    NASA Technical Reports Server (NTRS)

    Agnew, Donald L.; Vinkey, Victor F.; Runge, Fritz C.

    1989-01-01

    A study was conducted to determine how the Large Deployable Reflector (LDR) might benefit from the use of the space station for assembly, checkout, deployment, servicing, refurbishment, and technology development. Requirements that must be met by the space station to supply benefits for a selected scenario are summarized. Quantitative and qualitative data are supplied. Space station requirements for LDR which may be utilized by other missions are identified. A technology development mission for LDR is outlined and requirements summarized. A preliminary experiment plan is included. Space Station Data Base SAA 0020 and TDM 2411 are updated.

  13. Trajectory design for a cislunar CubeSat leveraging dynamical systems techniques: The Lunar IceCube mission

    NASA Astrophysics Data System (ADS)

    Bosanac, Natasha; Cox, Andrew D.; Howell, Kathleen C.; Folta, David C.

    2018-03-01

    Lunar IceCube is a 6U CubeSat that is designed to detect and observe lunar volatiles from a highly inclined orbit. This spacecraft, equipped with a low-thrust engine, is expected to be deployed from the upcoming Exploration Mission-1 vehicle. However, significant uncertainty in the deployment conditions for secondary payloads impacts both the availability and geometry of transfers that deliver the spacecraft to the lunar vicinity. A framework that leverages dynamical systems techniques is applied to a recently updated set of deployment conditions and spacecraft parameter values for the Lunar IceCube mission, demonstrating the capability for rapid trajectory design.

  14. The Role of Organizational Sub-Cultures in Higher Education Adoption of Open Source Software (OSS) for Teaching/Learning

    ERIC Educational Resources Information Center

    Williams van Rooij, Shahron

    2010-01-01

    This paper contrasts the arguments offered in the literature advocating the adoption of open source software (OSS)--software delivered with its source code--for teaching and learning applications, with the reality of limited enterprise-wide deployment of those applications in U.S. higher education. Drawing on the fields of organizational…

  15. An Incremental Life-cycle Assurance Strategy for Critical System Certification

    DTIC Science & Technology

    2014-11-04

    Fragments extracted from briefing charts: embedded software systems introduce a new class of problems not addressed by traditional system modeling and analysis; latency jitter in data streams affects control behavior; and system-level failures still occur despite fault-tolerance techniques being deployed, with the embedded software system a major source of failure.

  16. Utilization of Intelligent Software Agent Features for Improving E-Learning Efforts: A Comprehensive Investigation

    ERIC Educational Resources Information Center

    Farzaneh, Mandana; Vanani, Iman Raeesi; Sohrabi, Babak

    2012-01-01

    E-learning is one of the most important learning approaches within which intelligent software agents can be efficiently used so as to automate and facilitate the process of learning. The aim of this paper is to illustrate a comprehensive categorization of intelligent software agent features, which is valuable for being deployed in the virtual…

  17. Building Partner Capacity: DOD Should Improve Its Reporting to Congress on Challenges to Expanding Ministry of Defense Advisors Program

    DTIC Science & Technology

    2015-02-11

    ...with additional information on the program's performance; and (3) develop a time frame for updating the policy for the MODA program. DOD...requirement development, State concurrence, DOD formal approval, recruitment, and training and pre-deployment (see fig. 1). While some of these...Georgia, and Bosnia and Herzegovina (see table 1). For more information on DOD's first 2 Global MODA deployments, see app. II.

  18. Advanced public transportation systems deployment in the United States : year 2002 update

    DOT National Transportation Integrated Search

    2003-06-01

    This report documents work performed under the Federal Transit Administration's Advanced Public Transportation Systems (APTS) Program, a program structured to undertake research and development of innovative applications of advanced navigation, infor...

  19. Advanced public transportation systems deployment in the United States : year 2000 update

    DOT National Transportation Integrated Search

    2002-05-01

    This report documents work performed under the Federal Transit Administration's Advanced Public Transportation Systems (APTS) Program, a program structured to undertake research and development of innovative applications of advanced navigation, infor...

  20. Advanced public transportation systems deployment in the United States : year 2004 update

    DOT National Transportation Integrated Search

    2005-06-01

    This report documents work performed under the Federal Transit Administration's Advanced Public Transportation Systems (APTS) Program, a program structured to undertake research and development of innovative applications of advanced navigation, infor...

  1. Advanced Public Transportation Systems Deployment in the United States, Year 2000, Update

    DOT National Transportation Integrated Search

    2002-05-01

    This report documents work performed under the Federal Transit Administration's Advanced Public Transportation Systems (APTS) Program, a program structured to undertake research and development of innovative applications of advanced navigation, infor...

  2. Advanced Public Transportation Systems Deployment in the United States. Update, January 1999

    DOT National Transportation Integrated Search

    1999-01-01

    This report documents work performed under FTA's Advanced Public Transportation Systems (APTS) Program, a program structured to undertake research and development of innovative applications of advanced navigation, information, and communication techn...

  3. Advanced public transportation systems deployment in the United States : update, January 1999

    DOT National Transportation Integrated Search

    1999-01-01

    This report documents work performed under FTA's Advanced Public Transportation Systems (APTS) Program, a program structured to undertake research and development of innovative applications of advanced navigation, information, and communication techn...

  4. Mission Planning System Increment 5 (MPS Inc 5)

    DTIC Science & Technology

    2016-03-01

    Acronyms: DoD - Department of Defense; DoDAF - DoD Architecture Framework; FD - Full Deployment; FDD - Full Deployment Decision; FY - Fiscal Year; IA... Schedule (Objective/Threshold): Alternative Selected (Funds First Obligated (FFO)): Mar 2013 / Mar 2013; MS B: Apr 2012 / Apr 2012; MS C: N/A / N/A; FDD (O/T... The program is a "Deployed Software Intensive Program" as described in DOD Instruction 5000.02, January 7, 2015. FDD provides approval to field the...

  5. Annotated bibliography of software engineering laboratory literature

    NASA Technical Reports Server (NTRS)

    Buhler, Melanie; Valett, Jon

    1989-01-01

    An annotated bibliography is presented of technical papers, documents, and memorandums produced by or related to the Software Engineering Laboratory. The bibliography was updated and reorganized substantially since the original version (SEL-82-006, November 1982). All materials were grouped into eight general subject areas for easy reference: (1) The Software Engineering Laboratory; (2) The Software Engineering Laboratory: Software Development Documents; (3) Software Tools; (4) Software Models; (5) Software Measurement; (6) Technology Evaluations; (7) Ada Technology; and (8) Data Collection. Subject and author indexes further classify these documents by specific topic and individual author.

  6. GenFAS - Decentralised PUS-Based Data Handling Software Using SOIS and SpaceWire

    NASA Astrophysics Data System (ADS)

    Fowell, Stuart D.; Wheeler, Simon; Mendham, Peter; Gasti, Wahida

    2011-08-01

    This paper describes GenFAS, a decentralised PUS-based Data Handling onboard software architecture, based on the SOIS and SpaceWire communication specifications. GenFAS was initially developed for and deployed on the MARC system under an ESA GSTP contract.

  7. A Nonlinear Dynamic Model and Free Vibration Analysis of Deployable Mesh Reflectors

    NASA Technical Reports Server (NTRS)

    Shi, H.; Yang, B.; Thomson, M.; Fang, H.

    2011-01-01

    This paper presents a dynamic model of deployable mesh reflectors, in which geometric and material nonlinearities of such a space structure are fully described. Then, by linearization around an equilibrium configuration of the reflector structure, a linearized model is obtained. With this linearized model, the natural frequencies and mode shapes of a reflector can be computed. The nonlinear dynamic model of deployable mesh reflectors is verified by using commercial finite element software in numerical simulation. As shall be seen, the proposed nonlinear model is useful for shape (surface) control of deployable mesh reflectors under thermal loads.

  8. Use of Docker for deployment and testing of astronomy software

    NASA Astrophysics Data System (ADS)

    Morris, D.; Voutsinas, S.; Hambly, N. C.; Mann, R. G.

    2017-07-01

    We describe preliminary investigations of using Docker for the deployment and testing of astronomy software. Docker is a relatively new containerization technology that is developing rapidly and being adopted across a range of domains. It is based upon virtualization at operating system level, which presents many advantages in comparison to the more traditional hardware virtualization that underpins most cloud computing infrastructure today. A particular strength of Docker is its simple format for describing and managing software containers, which has benefits for software developers, system administrators and end users. We report on our experiences from two projects - a simple activity to demonstrate how Docker works, and a more elaborate set of services that demonstrates more of its capabilities and what they can achieve within an astronomical context - and include an account of how we solved problems through interaction with Docker's very active open source development community, which is currently the key to the most effective use of this rapidly-changing technology.
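
    The projects are described at the level of experience reports; as a minimal sketch of the build-then-test pattern Docker enables, the following uses the Python Docker SDK (the image tag, build path, and test command are assumptions for illustration):

      import docker  # pip install docker

      client = docker.from_env()

      # Build an image for the tool from a Dockerfile in the current directory.
      image, build_log = client.images.build(path=".", tag="astro-tool:test")

      # Run the test suite inside a throwaway container of that image.
      output = client.containers.run(
          "astro-tool:test",
          "pytest -q",
          remove=True,  # delete the container once the tests finish
      )
      print(output.decode())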

  9. A System Dynamics Model of the Departmental Deployment of Instructional Resources.

    ERIC Educational Resources Information Center

    Beck, Bruce D.

    This paper reports on the development and testing of a system dynamics model of the departmental deployment of instructional resources at the University of Wisconsin-Madison. A model was developed using the Stella II computer software package. The model describes how departments keep student enrollments, number of course sections, and…

  10. March of the Starbugs: Configuring Fiber-bearing Robots on the UK-Schmidt Optical Plane

    NASA Astrophysics Data System (ADS)

    Lorente, N. P. F.; Vuong, M.; Satorre, C.; Hong, S. E.; Shortridge, K.; Goodwin, M.; Kuehn, K.

    2015-09-01

    The TAIPAN instrument, currently being developed for the Australian Astronomical Observatory's UK Schmidt telescope at Siding Spring Observatory, makes use of the AAO's Starbug technology to deploy 150 science fibers to target positions on the optical plane. This paper describes the software system for controlling and deploying the fiber-bearing Starbug robots. The TAIPAN software is responsible for allocating each Starbug to its next target position based on its current position and the distribution of targets, finding a collision-free path for each Starbug, and then simultaneously controlling the Starbug hardware in a closed loop, with a metrology camera used to determine the position of each Starbug in the field during reconfiguration. The software is written in C++ and Java and employs a DRAMA middleware layer (Farrell et al. 1995).
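
    The closed-loop reconfiguration described above can be sketched generically: measure all Starbug positions with the metrology camera, command bounded corrections, and repeat until every bug is on target. In this hypothetical fragment, measure_positions and command_moves stand in for the instrument's camera and hardware interfaces, and the tolerances are invented.

      import numpy as np

      def reconfigure(measure_positions, command_moves, targets,
                      tol=0.01, max_step=0.5, max_iters=200):
          """Iterate bounded moves until all bugs are within tol of targets."""
          for _ in range(max_iters):
              positions = measure_positions()        # metrology camera fix
              errors = targets - positions           # (N, 2) array of offsets
              if np.all(np.linalg.norm(errors, axis=1) < tol):
                  return True                        # every bug on target
              steps = np.clip(errors, -max_step, max_step)
              command_moves(steps)                   # move all bugs at once
          return False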

  11. The repository-based software engineering program: Redefining AdaNET as a mainstream NASA source

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Repository-based Software Engineering Program (RBSE) is described to inform and update senior NASA managers about the program. Background and historical perspective on software reuse and RBSE for NASA managers who may not be familiar with these topics are provided. The paper draws upon and updates information from the RBSE Concept Document, baselined by NASA Headquarters, Johnson Space Center, and the University of Houston - Clear Lake in April 1992. Several of NASA's software problems and what RBSE is now doing to address those problems are described. Also, next steps to be taken to derive greater benefit from this Congressionally-mandated program are provided. The section on next steps describes the need to work closely with other NASA software quality, technology transfer, and reuse activities and focuses on goals and objectives relative to this need. RBSE's role within NASA is addressed; however, there is also the potential for systematic transfer of technology outside of NASA in later stages of the RBSE program. This technology transfer is discussed briefly.

  12. Register to Download the Automotive Deployment Options Projection Tool |

    Science.gov Websites

    Registration form for downloading the Automotive Deployment Options Projection Tool (ADOPT); registrants may opt in to periodic email updates about ADOPT.

  13. OHD/HL - National Weather Hydrology Laboratory

    Science.gov Websites

    Design and Programming Standards and Guidelines: General Programming; C; C++; FORTRAN; Java v2.0 (updated 3/28/2008); Java v1.9; Korn and Bash Shell; Software Design Phase Guidelines; OHD Design Specification Template; OHD Design Specification Example; Software Peer Review Guidelines and Checklists; Software...

  14. Integrated Functional and Executional Modelling of Software Using Web-Based Databases

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak; Marietta, Roberta

    1998-01-01

    NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software satisfy safety goals. This report discusses the difficulties encountered in doing so and presents a solution based on integrated modelling of software, the use of automatic information extraction tools, web technology and databases. To appear in the Journal of Database Management.

  15. Mitigating Motion Base Safety Issues: The NASA LaRC CMF Implementation

    NASA Technical Reports Server (NTRS)

    Bryant, Richard B., Jr.; Grupton, Lawrence E.; Martinez, Debbie; Carrelli, David J.

    2005-01-01

    The NASA Langley Research Center (LaRC) Cockpit Motion Facility (CMF) motion base design takes advantage of inherent hydraulic characteristics to implement safety features using hardware solutions only. Motion system safety has always been a concern, and its implementation is addressed differently by each organization. Some approaches rely heavily on software safety features. Software that performs safety functions is subject to greater scrutiny, making its approval, modification, and development time-consuming and expensive. The NASA LaRC CMF motion system is used for research and, as such, requires that the software be updated or modified frequently. The CMF's customers need the ability to update the simulation software frequently without the cost associated with safety-critical software. This paper describes the CMF engineering team's approach to achieving motion base safety by designing and implementing all safety features in hardware, so that the applications software (including motion cueing and actuator dynamic control) is completely independent of the safety devices. This allows the CMF safety systems to remain intact and unaffected by frequent research system modifications.

  16. The CARMEN software as a service infrastructure.

    PubMed

    Weeks, Michael; Jessop, Mark; Fletcher, Martyn; Hodge, Victoria; Jackson, Tom; Austin, Jim

    2013-01-28

    The CARMEN platform allows neuroscientists to share data, metadata, services and workflows, and to execute these services and workflows remotely via a Web portal. This paper describes how we implemented a service-based infrastructure into the CARMEN Virtual Laboratory. A Software as a Service framework was developed to allow generic new and legacy code to be deployed as services on a heterogeneous execution framework. Users can submit analysis code, typically written in Matlab, Python, C/C++ or R, as non-interactive standalone command-line applications and wrap them as services in a form suitable for deployment on the platform. The CARMEN Service Builder tool enables neuroscientists to quickly wrap their analysis software for deployment to the CARMEN platform as a service, without knowledge of the service framework or the CARMEN system. A metadata schema describes each service in terms of both system and user requirements. The search functionality allows services to be quickly discovered from the many services available. Within the platform, services may be combined into more complicated analyses using the workflow tool. CARMEN and the service infrastructure are targeted towards the neuroscience community; however, it is a generic platform and can be targeted towards any discipline.
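
    The wrapping idea is straightforward to sketch: a non-interactive command-line tool plus a metadata description of its inputs and outputs. The fragment below is a hypothetical illustration of that pattern, not the actual CARMEN Service Builder format; the spike_detect tool and the schema fields are invented.

      import subprocess
      from pathlib import Path

      # Invented metadata in the spirit of a service description.
      SERVICE_METADATA = {
          "name": "spike-detect",
          "inputs": [{"name": "recording", "type": "file"}],
          "outputs": [{"name": "spikes", "type": "file"}],
      }

      def run_service(input_path: str, output_dir: str) -> Path:
          """Invoke the wrapped command-line tool on one input file."""
          out = Path(output_dir) / "spikes.csv"
          subprocess.run(
              ["spike_detect", "--in", input_path, "--out", str(out)],
              check=True,
          )
          return out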

  17. Development of a ROV Deployed Video Analysis Tool for Rapid Measurement of Submerged Oil/Gas Leaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savas, Omer

    Expanded deep sea drilling around the globe makes it necessary to have readily available tools to quickly and accurately measure discharge rates from accidental submerged oil/gas leak jets, so that first responders can deploy adequate resources for containment. We have developed and tested a field-deployable video analysis software package which provides flow rate estimates in the field that are sufficiently accurate for initial responders to accidental oil discharges in submarine operations. The essence of our approach is tracking coherent features at the interface in the near field of immiscible turbulent jets. The software package, UCB_Plume, is ready to be used by first responders for field implementation. We have tested the tool on submerged water and oil jets made visible using fluorescent dyes and have been able to estimate the discharge rate within 20% accuracy. A high-end Windows laptop computer is suggested as the operating platform, and a USB-connected high-speed, high-resolution monochrome camera as the imaging device; these are sufficient for acquiring flow images under continuous unidirectional illumination and running the software in the field. Results are obtained in a matter of minutes.
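
    The coherent-feature-tracking idea can be illustrated with a generic dense optical flow computation: per-frame velocity fields, averaged and later scaled by frame rate and pixel calibration, give apparent interface speeds. This is a hedged sketch of the general technique using OpenCV, not the UCB_Plume algorithm; the file name is a placeholder.

      import cv2
      import numpy as np

      cap = cv2.VideoCapture("jet.avi")  # placeholder video file
      ok, prev = cap.read()
      prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

      speeds = []
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          # Dense optical flow between consecutive frames (Farneback method).
          flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                              0.5, 3, 15, 3, 5, 1.2, 0)
          speeds.append(np.linalg.norm(flow, axis=2).mean())  # px/frame
          prev_gray = gray

      print(f"mean apparent speed: {np.mean(speeds):.2f} px/frame")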

  18. Defense AT&L Magazine: A Publication of the Defense Acquisition University. Volume 34, Number 3, DAU 184

    DTIC Science & Technology

    2005-01-01

    ...developed a partnership with the Defense Acquisition University to integrate DISA's systems engineering processes, software, and network...in place, with processes being implemented: deployment management; systems engineering; software engineering; configuration management; test and...CSS systems engineering is a transition partner with Carnegie Mellon University's Software Engineering Institute and its work on the capability...

  19. Architectural Implications of Cloud Computing

    DTIC Science & Technology

    2011-10-24

    ...Public Cloud. Cloud computing types, based on type of capability: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS)... Software-as-a-Service (SaaS): a model of software deployment in which a third party...and System Solutions (RTSS) Program. Her current interests and projects are in service-oriented architecture (SOA), cloud computing, and context...

  20. Cloud Computing

    DTIC Science & Technology

    2009-11-12

    Cloud computing types, based on type of capability and based on access: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS)... Software-as-a-Service (SaaS): application-specific capabilities, e.g., a service that provides customer management; allows organizations...Software as a Service (SaaS): a model of software deployment in which a provider licenses an application to customers for use as a service on...

  1. Using the Battlefield Telemedicine System (BTS) to train deployed medical personnel in complicated medical tasks - a proof of concept.

    PubMed

    Irizarry, Daniel; Wadman, Michael C; Bernhagen, Mary A; Miljkovic, Nikola; Boedeker, Ben H

    2012-01-01

    This work describes the use of Adobe Connect software, along with algorithm software, to provide the necessary audiovisual communication platform for telementoring a complex medical procedure to novice providers located at a distant site.

  2. Military Interoperable Digital Hospital Testbed (MIDHT)

    DTIC Science & Technology

    2013-10-01

    These activities are selected highlights completed by Northrop Grumman during the year. Cycle 4 development: increased the max_allowed_packet size in MySQL...; deployment with the Java install that is required by CONNECT v3.3.1.3; updated the MIDHT code base to work with the CONNECT v3.3.1.3 Core Libraries...; provided TATRC the CONNECTUniversalClientGUI binaries for use with CONNECT v3.3.1.3; created and deployed a common Java library for the CONNECT...

  3. Automatic Detection of Previously-Unseen Application States for Deployment Environment Testing and Analysis

    PubMed Central

    Murphy, Christian; Vaughan, Moses; Ilahi, Waseem; Kaiser, Gail

    2010-01-01

    For large, complex software systems, it is typically impossible in terms of time and cost to reliably test the application in all possible execution states and configurations before releasing it into production. One proposed way of addressing this problem has been to continue testing and analysis of the application in the field, after it has been deployed. A practical limitation of many such automated approaches is the potentially high performance overhead incurred by the necessary instrumentation. However, it may be possible to reduce this overhead by selecting test cases and performing analysis only in previously-unseen application states, thus reducing the number of redundant tests and analyses that are run. Solutions for fault detection, model checking, security testing, and fault localization in deployed software may all benefit from a technique that ignores application states that have already been tested or explored. In this paper, we present a solution that ensures that deployment environment tests are only executed in states that the application has not previously encountered. In addition to discussing our implementation, we present the results of an empirical study that demonstrates its effectiveness, and explain how the new approach can be generalized to assist other automated testing and analysis techniques intended for the deployment environment. PMID:21197140
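
    The core mechanism, skipping instrumentation in states already seen, can be sketched with a state fingerprint and a seen-set. The fragment below is a schematic of the idea under the assumption that the relevant state can be captured as a serializable dictionary; it is not the authors' implementation.

      import hashlib
      import json

      _seen = set()

      def fingerprint(state):
          """Stable hash of a serializable snapshot of application state."""
          canonical = json.dumps(state, sort_keys=True)
          return hashlib.sha256(canonical.encode()).hexdigest()

      def maybe_run_checks(state, run_checks):
          """Run the expensive in-field tests only in previously-unseen states."""
          fp = fingerprint(state)
          if fp in _seen:
              return False          # state already explored; skip the overhead
          _seen.add(fp)
          run_checks(state)         # e.g., deployment-environment test cases
          return True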

  4. Software Update.

    ERIC Educational Resources Information Center

    Currents, 2000

    2000-01-01

    A chart of 40 alumni-development database systems provides information on vendor/Web site, address, contact/phone, software name, price range, minimum suggested workstation/suggested server, standard reports/reporting tools, minimum/maximum record capacity, and number of installed sites/client type. (DB)

  5. Calibration Software for Use with Jurassicprok

    NASA Technical Reports Server (NTRS)

    Chapin, Elaine; Hensley, Scott; Siqueira, Paul

    2004-01-01

    The Jurassicprok Interferometric Calibration Software (also called "Calibration Processor" or simply "CP") estimates the calibration parameters of an airborne synthetic-aperture-radar (SAR) system, the raw measurement data of which are processed by the Jurassicprok software described in the preceding article. Calibration parameters estimated by CP include time delays, baseline offsets, phase screens, and radiometric offsets. CP examines raw radar-pulse data, single-look complex image data, and digital elevation map data. For each type of data, CP compares the actual values with values expected on the basis of ground-truth data. CP then converts the differences between the actual and expected values into updates for the calibration parameters in an interferometric calibration file (ICF) and a radiometric calibration file (RCF) for the particular SAR system. The updated ICF and RCF are used as inputs to both Jurassicprok and to the companion Motion Measurement Processor software (described in the following article) for use in generating calibrated digital elevation maps.

  6. Improving Earth Science Metadata: Modernizing ncISO

    NASA Astrophysics Data System (ADS)

    O'Brien, K.; Schweitzer, R.; Neufeld, D.; Burger, E. F.; Signell, R. P.; Arms, S. C.; Wilcox, K.

    2016-12-01

    ncISO is a package of tools developed at NOAA's National Center for Environmental Information (NCEI) that facilitates the generation of ISO 19115-2 metadata from NetCDF data sources. The tool currently exists in two iterations: a command line utility and a web-accessible service within the THREDDS Data Server (TDS). Several projects, including NOAA's Unified Access Framework (UAF), depend upon ncISO to generate the ISO-compliant metadata from their data holdings and use the resulting information to populate discovery tools such as NCEI's ESRI Geoportal and NOAA's data.noaa.gov CKAN system. In addition to generating ISO 19115-2 metadata, the tool calculates a rubric score based on how well the dataset follows the Attribute Conventions for Dataset Discovery (ACDD). The result of this rubric calculation, along with information about what has been included and what is missing, is displayed in an HTML document generated by the ncISO software package. Recently ncISO has fallen behind in supporting updates to conventions such as the ACDD. With the blessing of the original programmer, NOAA's UAF has been working to modernize the ncISO software base. In addition to upgrading ncISO to utilize version 1.3 of the ACDD, we have been working with partners at Unidata and IOOS to unify the tool's code base. In essence, we are merging the command line capabilities into the same software that will now be used by the TDS service, allowing easier updates when conventions such as the ACDD are updated in the future. In this presentation, we will discuss the work the UAF project has done to support updated conventions within ncISO, as well as describe how the updated tool is helping to improve metadata throughout the earth and ocean sciences.
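
    The rubric idea, scoring a dataset by how completely it carries ACDD global attributes, is easy to illustrate. The toy check below looks for a handful of standard ACDD attribute names in a NetCDF file; the real ncISO rubric covers many more attributes and weights them, so this is only a schematic.

      from netCDF4 import Dataset  # pip install netCDF4

      CHECKED_ATTRS = ["title", "summary", "keywords", "license",
                       "geospatial_lat_min", "geospatial_lat_max",
                       "time_coverage_start", "time_coverage_end"]

      def acdd_completeness(path):
          """Fraction of the checked ACDD global attributes present in a file."""
          with Dataset(path) as nc:
              attrs = set(nc.ncattrs())
          missing = [a for a in CHECKED_ATTRS if a not in attrs]
          return 1.0 - len(missing) / len(CHECKED_ATTRS), missing

      score, missing = acdd_completeness("example.nc")  # placeholder file
      print(f"ACDD completeness: {score:.0%}; missing: {missing}")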

  7. GEOS-5 During ORACLES: Status Update

    NASA Technical Reports Server (NTRS)

    da Silva, Arlindo; Longo, Karla

    2017-01-01

    In this talk we summarize the GEOS-5 capabilities to be deployed during the ORACLES 2016 Campaign. We describe model configuration, data products and web services available. We also discuss the measurement and flight requirements for the GEOS-5 Team.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katipamula, Srinivas; Gowri, Krishnan; Hernandez, George

    This paper describes a reference process that can be deployed to provide continuous automated condition-based maintenance management for buildings that have a BIM, a building automation system (BAS) and a computerized maintenance management system (CMMS). The process can be deployed using an open source transactional network platform, VOLTTRON™, designed for distributed sensing and controls and supporting both energy efficiency and grid services.

  9. Performance analysis of next-generation lunar laser retroreflectors

    NASA Astrophysics Data System (ADS)

    Ciocci, Emanuele; Martini, Manuele; Contessa, Stefania; Porcelli, Luca; Mastrofini, Marco; Currie, Douglas; Delle Monache, Giovanni; Dell'Agnello, Simone

    2017-09-01

    Starting from 1969, Lunar Laser Ranging (LLR) to the Apollo and Lunokhod Cube Corner Retroreflectors (CCRs) provided several tests of General Relativity (GR). When deployed, the Apollo/Lunokhod CCR design contributed only a negligible fraction of the ranging error budget. Today the improvement over the years in the laser ground stations makes the lunar libration contribution relevant, so the libration now dominates the error budget, limiting the precision of the experimental tests of gravitational theories. The MoonLIGHT-2 project (Moon Laser Instrumentation for General relativity High-accuracy Tests - Phase 2) is a next-generation LLR payload developed by the Satellite/lunar/GNSS laser ranging/altimetry and Cube/microsat Characterization Facilities Laboratory (SCF_Lab) at the INFN-LNF in collaboration with the University of Maryland. With its unique design consisting of a single large CCR unaffected by librations, MoonLIGHT-2 can significantly reduce the reflectors' error contribution to the measurement of the lunar geodetic precession and other GR tests compared to the Apollo/Lunokhod CCRs. This paper treats only this specific next-generation lunar laser retroreflector (MoonLIGHT-2) and is by no means intended to address other contributions to the global LLR error budget. MoonLIGHT-2 is approved to be launched with the Moon Express 1 (MEX-1) mission and will be deployed on the Moon surface in 2018. To validate/optimize MoonLIGHT-2, the SCF_Lab is carrying out a unique experimental test called the SCF-Test: the concurrent measurement of the optical Far Field Diffraction Pattern (FFDP) and the temperature distribution of the CCR under thermal conditions produced with a close-match solar simulator and a simulated space environment. The focus of this paper is to describe the SCF_Lab's specialized characterization of the performance of our next-generation LLR payload. While this payload will improve the contribution of the space segment (MoonLIGHT-2) to the error budget of GR tests and of constraints on new gravitational theories (like non-minimally coupled gravity and spacetime torsion), the description of the associated physics analysis and global LLR error budget is outside the chosen scope of the present paper. We note that, according to Reasenberg et al. (2016), software models used for LLR physics and lunar science cannot process residuals with an accuracy better than a few centimeters and that, in order to process millimeter ranging data (or better) coming from (not only) future reflectors, it is necessary to update and improve the respective models inside the software package. The work presented here on the results of the SCF-Test thermal and optical analysis shows that good performance is expected from MoonLIGHT-2 after its deployment on the Moon. This in turn will stimulate improvements in LLR ground segment hardware and help refine the LLR software code and models. Without a significant improvement of the LLR space segment, the acquisition of improved ground LLR hardware and challenging LLR software refinements may languish for lack of motivation, since the librations of the old-generation LLR payloads largely dominate the global LLR error budget.

  10. Annotated bibliography of software engineering laboratory literature

    NASA Technical Reports Server (NTRS)

    Groves, Paula; Valett, Jon

    1990-01-01

    An annotated bibliography of technical papers, documents, and memorandums produced by or related to the Software Engineering Laboratory is given. More than 100 publications are summarized. These publications cover many areas of software engineering and range from research reports to software documentation. This document has been updated and reorganized substantially since the original version (SEL-82-006, November 1982). All materials have been grouped into eight general subject areas for easy reference: the Software Engineering Laboratory; the Software Engineering Laboratory-software development documents; software tools; software models; software measurement; technology evaluations; Ada technology; and data collection. Subject and author indexes further classify these documents by specific topic and individual author.

  11. Annotated bibliography of Software Engineering Laboratory literature

    NASA Technical Reports Server (NTRS)

    Morusiewicz, Linda; Valett, Jon

    1993-01-01

    This document is an annotated bibliography of technical papers, documents, and memorandums produced by or related to the Software Engineering Laboratory. Nearly 200 publications are summarized. These publications cover many areas of software engineering and range from research reports to software documentation. This document has been updated and reorganized substantially since the original version (SEL-82-006, November 1982). All materials have been grouped into eight general subject areas for easy reference: the Software Engineering Laboratory; the Software Engineering Laboratory: software development documents; software tools; software models; software measurement; technology evaluations; Ada technology; and data collection. This document contains an index of these publications classified by individual author.

  12. MISR Data Product Specifications

    Atmospheric Science Data Center

    2016-11-25

    ...and usage of metadata. Improvements to MISR algorithmic software occasionally result in changes to file formats. While these changes... (DPS). DPS Revision: Rev. S; Software Version: 5.0.9; Date: September 20, 2010, updated April...

  13. Deployment dynamics and control of large-scale flexible solar array system with deployable mast

    NASA Astrophysics Data System (ADS)

    Li, Hai-Quan; Liu, Xiao-Feng; Guo, Shao-Jing; Cai, Guo-Ping

    2016-10-01

    In this paper, the deployment dynamics and control of a large-scale flexible solar array system with a deployable mast are investigated. The adopted solar array system is introduced first, including the system configuration, the deployable mast and the solar arrays with several mechanisms. The dynamic equation of the solar array system is then established by the Jourdain velocity variation principle, and a method for dynamics with topology changes is introduced. In addition, a PD controller with disturbance estimation is designed to eliminate the drift of the spacecraft main body. Finally, the validity of the dynamic model is verified through a comparison with ADAMS software, and the deployment process and dynamic behavior of the system are studied in detail. Simulation results indicate that the proposed model is effective in describing the deployment dynamics of the large-scale flexible solar arrays and that the proposed controller is practical for eliminating the drift of the spacecraft main body.
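
    The paper gives the controller only at a descriptive level; the fragment below is a generic PD law with a crude disturbance estimate, included to make the structure concrete. The gains, inertia, and observer form are invented for illustration, not taken from the paper.

      # Invented constants for illustration.
      KP, KD, INERTIA, DT = 2.0, 1.5, 100.0, 0.01

      def pd_torque(theta, omega, theta_ref, d_hat):
          """PD control plus feedforward cancellation of the estimated disturbance."""
          return KP * (theta_ref - theta) - KD * omega - d_hat

      def update_disturbance(d_hat, torque, omega_dot, gain=5.0):
          """Low-pass the residual between measured and modeled acceleration
          to track a slowly varying deployment disturbance."""
          residual = INERTIA * omega_dot - torque
          return d_hat + gain * (residual - d_hat) * DT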

  14. A space release/deployment system actuated by shape memory wires

    NASA Astrophysics Data System (ADS)

    Fragnito, Marino; Vetrella, Sergio

    2002-11-01

    In this paper, the design of an innovative hold-down/release and deployment device actuated by shape memory wires, to be used for the first time for the SMART microsatellite solar wings, is shown. The release and deployment mechanisms are actuated by a Shape Memory Alloy wire (Nitinol), which allows a complete symmetrical and synchronous release, in a very short time, of the four wings in pairs. The hold-down kinematic mechanism is preloaded to avoid vibration nonlinearities and unwanted deployment at launch. The deployment mechanism is a simple pulley system. The stiffness of the deployed panel-hinge system must be dimensioned to meet the on-orbit requirement for attitude control. One-way roller clutches are used to keep the panel at the desired angle during the mission. Ad hoc software has been developed to simulate both the release and deployment operations, coupling the SMA wire behavior with the system mechanics.

  15. Design and implementation of handheld and desktop software for the structured reporting of hepatic masses using the LI-RADS schema.

    PubMed

    Clark, Toshimasa J; McNeeley, Michael F; Maki, Jeffrey H

    2014-04-01

    The Liver Imaging Reporting and Data System (LI-RADS) can enhance communication between radiologists and clinicians if applied consistently. We identified an institutional need to improve liver imaging report standardization and developed handheld and desktop software to serve this purpose. We developed two complementary applications that implement the LI-RADS schema. A mobile application for iOS devices written in the Objective-C language allows for rapid characterization of hepatic observations under a variety of circumstances. A desktop application written in the Java language allows for comprehensive observation characterization and standardized report text generation. We chose the applications' languages and feature sets based on the computing resources of target platforms, anticipated usage scenarios, and ease of application installation, deployment, and updating. Our primary results are the publication of the core source code implementing the LI-RADS algorithm and the availability of the applications for use worldwide via our website, http://www.liradsapp.com/. The Java application is free open-source software that can be integrated into nearly any vendor's reporting system. The iOS application is distributed through Apple's iTunes App Store. Observation categorizations of both programs have been manually validated to be correct. The iOS application has been used to characterize liver tumors during multidisciplinary conferences of our institution, and several faculty members, fellows, and residents have adopted the generated text of Java application into their diagnostic reports. Although these two applications were developed for the specific reporting requirements of our liver tumor service, we intend to apply this development model to other diseases as well. Through semiautomated structured report generation and observation characterization, we aim to improve patient care while increasing radiologist efficiency. Published by Elsevier Inc.
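
    To show how a categorization schema becomes report-generation logic, here is a deliberately simplified decision function loosely inspired by the LI-RADS major-feature table. The thresholds and mapping are a toy paraphrase for illustration only; they are not the clinical algorithm, not the authors' code, and not suitable for diagnostic use.

      def toy_lirads_category(size_mm, aphe, n_major_features):
          """size_mm: observation diameter; aphe: arterial phase
          hyperenhancement; n_major_features: count of additional major
          features (e.g., washout, capsule, threshold growth)."""
          if aphe and size_mm >= 20 and n_major_features >= 1:
              return "LR-5"
          if aphe and 10 <= size_mm < 20 and n_major_features >= 2:
              return "LR-5"
          if aphe or n_major_features >= 1:
              return "LR-4"
          return "LR-3"

      print(toy_lirads_category(25, True, 1))  # "LR-5" under this toy mapping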

  16. Integrated Web-Based Immersive Exploration of the Coordinated Canyon Experiment Data using Open Source STOQS Software

    NASA Astrophysics Data System (ADS)

    McCann, M. P.; Gwiazda, R.; O'Reilly, T. C.; Maier, K. L.; Lundsten, E. M.; Parsons, D. R.; Paull, C. K.

    2017-12-01

    The Coordinated Canyon Experiment (CCE) in Monterey Submarine Canyon has produced a wealth of oceanographic measurements whose analysis will improve understanding of turbidity current processes. Exploration of this data set, consisting of over 60 parameters from 15 platforms, is facilitated by using the open source Spatial Temporal Oceanographic Query System (STOQS) software (https://github.com/stoqs/stoqs). The Monterey Bay Aquarium Research Institute (MBARI) originally developed STOQS to help manage and visualize upper water column oceanographic measurements, but the generality of its data model permits effective use for any kind of spatial/temporal measurement data. STOQS consists of a PostgreSQL database and server-side Python/Django software; the client-side is jQuery JavaScript supporting AJAX requests to update a single page web application. The User Interface (UI) is optimized to provide a quick overview of data in spatial and temporal dimensions, as well as in parameter, platform, and data value space. A user may zoom into any feature of interest and select it, initiating a filter operation that updates the UI with an overview of all the data in the new filtered selection. When details are desired, radio buttons and checkboxes are selected to generate a number of different types of visualizations. These include color-filled temporal section and line plots, parameter-parameter plots, 2D map plots, and interactive 3D spatial visualizations. The Extensible 3D (X3D) standard and X3DOM JavaScript library provide the technology for presenting animated 3D data directly within the web browser. Most of the oceanographic measurements from the CCE (e.g. mooring mounted ADCP and CTD data) are easily visualized using established methods. However, unified integration and multiparameter display of several concurrently deployed sensors across a network of platforms is a challenge we hope to solve. Moreover, STOQS also allows display of data from a new instrument - the Benthic Event Detector (BED). The BED records 50Hz samples of orientation and acceleration when it moves. These data are converted to the CF-NetCDF format and then loaded into a STOQS database. Using the Spatial-3D view a user may interact with a virtual playback of BED motions, giving new insight into submarine canyon sediment density flows.
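
    STOQS serves its database over HTTP, so filtered measurement queries can be scripted as well as explored in the UI. The sketch below issues a hypothetical JSON request with the requests library; the host name, database alias, endpoint path, and filter parameters are placeholders, not documented STOQS routes.

      import requests

      BASE = "https://stoqs.example.org/stoqs_cce/api"  # placeholder server

      resp = requests.get(
          f"{BASE}/measuredparameter.json",
          params={"parameter__name": "turbidity", "limit": 100},
          timeout=30,
      )
      resp.raise_for_status()
      for record in resp.json():
          print(record)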

  17. Integrated Budget Office Toolbox

    NASA Technical Reports Server (NTRS)

    Rushing, Douglas A.; Blakeley, Chris; Chapman, Gerry; Robertson, Bill; Horton, Allison; Besser, Thomas; McCarthy, Debbie

    2010-01-01

    The Integrated Budget Office Toolbox (IBOT) combines budgeting, resource allocation, organizational funding, and reporting features in an automated, integrated tool that provides data from a single source for Johnson Space Center (JSC) personnel. Using a common interface, concurrent users can utilize the data without compromising its integrity. IBOT tracks planning changes and updates throughout the year using both phasing and POP-related (program-operating-plan-related) budget information for the current year, and up to six years out. Separating lump-sum funds received from HQ (Headquarters) into separate labor, travel, procurement, Center G&A (general & administrative), and servicepool categories, IBOT creates a script that significantly reduces manual input time. IBOT also manages the movement of travel and procurement funds down to the organizational level and, using its integrated funds management feature, helps better track funding at lower levels. Third-party software is used to create integrated reports in IBOT that can be generated for plans, actuals, funds received, and other combinations of data that are currently maintained in the centralized format. Based on Microsoft SQL, IBOT incorporates generic budget processes, is transportable, and is economical to deploy and support.

  18. A Framework for Analyzing and Testing the Performance of Software Services

    NASA Astrophysics Data System (ADS)

    Bertolino, Antonia; de Angelis, Guglielmo; di Marco, Antinisca; Inverardi, Paola; Sabetta, Antonino; Tivoli, Massimo

    Networks "Beyond the 3rd Generation" (B3G) are characterized by mobile and resource-limited devices that communicate through different kinds of network interfaces. Software services deployed in such networks shall adapt themselves according to possible execution contexts and requirement changes. At the same time, software services have to be competitive in terms of the Quality of Service (QoS) provided, or perceived by the end user.

  19. Joint Logistics Commanders’ Biennial Software Workshop (4th) Orlando II: Solving the PDSS (Post Deployment Software Support) Challenge Held in Orlando, Florida on 27-29 January 87. Volume 2. Proceedings

    DTIC Science & Technology

    1987-06-01

    described the state of maturity of software engineering as being equivalent to the state of maturity of Civil Engineering before Pythagoras invented the... formal verification languages, theorem provers or secure configuration management tools would have to be maintained and used in the PDSS Center to

  20. Strategic Mobility 21 Transition Plan: From Research Federation to Business Enterprise

    DTIC Science & Technology

    2010-12-31

    Transportation Management System (GTMS), Service Oriented Architecture (SOA), Software-as-a-Service (SaaS), Joint Capability Technology Demonstration... the Software-as-a-Service (SaaS) format, whereby users access the application with the appropriate Internet authorizations. Security is provided by... integrating best-of-breed dual-use systems deployed in the software-as-a-service (SaaS) environment. It includes single sign-on capabilities and was

  1. A Content Markup Language for Data Services

    NASA Astrophysics Data System (ADS)

    Noviello, C.; Acampa, P.; Mango Furnari, M.

    Network content delivery and document sharing are possible using a variety of technologies, such as distributed databases, service-oriented applications, and so forth. The development of such systems is a complex job, because the document life cycle involves strong cooperation between domain experts and software developers. Furthermore, emerging software methodologies, such as service-oriented architecture and knowledge organization (e.g., the semantic web), have not really solved the problems faced in a real distributed and cooperating setting. In this chapter the authors' efforts to design and deploy a distributed and cooperating content management system are described. The main features of the system are a user-configurable document type definition and a management middleware layer, which allows CMS developers to orchestrate the composition of specialized software components around the structure of a document. The chapter also reports some of the experience gained in deploying the developed framework in a cultural heritage dissemination setting.

  2. Human-System Integration Scorecard Update to VB.Net

    NASA Technical Reports Server (NTRS)

    Sanders, Blaze D.

    2009-01-01

    The purpose of this project was to create Human-System Integration (HSI) scorecard software, which could be utilized to validate that human factors have been considered early in hardware/system specifications and design. The HSI scorecard is partially based upon the revised Human Rating Requirements (HRR) intended for NASA's Constellation program. This software scorecard will allow for quick appraisal of HSI factors by using visual aids to highlight low and rapidly changing scores. This project consisted of creating a user-friendly Visual Basic program that could be easily distributed and updated, to and by colleagues. Updating the Microsoft Word version of the HSI scorecard to a computer application allows for the addition of useful features, improved ease of use, and decreased completion time for users. One significant addition is the ability to create Microsoft Excel graphs automatically from scorecard data, to allow for clear presentation of problematic areas. The purpose of this paper is to describe the rationale and benefits of creating the HSI scorecard software, the problems and goals of the project, and future work that could be done.

  3. Update of GRASP/Ada reverse engineering tools for Ada

    NASA Technical Reports Server (NTRS)

    Cross, James H., II

    1993-01-01

    The GRASP/Ada project (Graphical Representations of Algorithms, Structures, and Processes for Ada) successfully created and prototyped a new algorithmic level graphical representation for Ada software, the Control Structure Diagram (CSD). The primary impetus for creation of the CSD was to improve the comprehension efficiency of Ada software and, as a result, improve reliability and reduce costs. The emphasis was on the automatic generation of the CSD from Ada PDL or source code to support reverse engineering and maintenance. The CSD has the potential to replace traditional pretty-printed Ada source code. In Phase 1 of the GRASP/Ada project, the CSD graphical constructs were created and applied manually to several small Ada programs. A prototype CSD generator (Version 1) was designed and implemented using FLEX and BISON running under VMS on a VAX 11-780. In Phase 2, the prototype was improved and ported to the Sun 4 platform under UNIX. A user interface was designed and partially implemented using the HP widget toolkit and the X Window System. In Phase 3, the user interface was extensively reworked using the Athena widget toolkit and X Windows. The prototype was applied successfully to numerous Ada programs ranging in size from several hundred to several thousand lines of source code. Following Phase 3, two update phases were completed. Update '92 focused on the initial analysis of evaluation data collected from software engineering students at Auburn University and the addition of significant enhancements to the user interface. Update '93 (the current update) focused on the statistical analysis of the data collected in the previous update and preparation of Version 3.4 of the prototype for limited distribution to facilitate further evaluation. The current prototype provides the capability for the user to generate CSDs from Ada PDL or source code in a reverse engineering as well as forward engineering mode with a level of flexibility suitable for practical application. An overview of the GRASP/Ada project with an emphasis on the current update is provided.

  4. Enhancements to TauDEM to support Rapid Watershed Delineation Services

    NASA Astrophysics Data System (ADS)

    Sazib, N. S.; Tarboton, D. G.

    2015-12-01

    Watersheds are widely recognized as the basic functional unit for water resources management studies and are important for a variety of problems in hydrology, ecology, and geomorphology. Nevertheless, delineating a watershed spread across a large region is still cumbersome due to the processing burden of working with large Digital Elevation Models. Terrain Analysis Using Digital Elevation Models (TauDEM) software supports the delineation of watersheds and stream networks from within desktop Geographic Information Systems, and a rich set of watershed and stream network attributes are computed. However, the TauDEM desktop tools have limitations: (1) they support only one raster data type (TIFF), (2) they require installation of software for parallel processing, and (3) data must be in a projected coordinate system. This paper presents enhancements to TauDEM that have been developed to extend its generality and support web-based watershed delineation services. The enhancements include (1) reading and writing raster data with the open-source Geospatial Data Abstraction Library (GDAL), no longer limited to the TIFF format, and (2) support for both geographic and projected coordinates. To support web services for rapid watershed delineation, a procedure has been developed for subsetting the domain based on sub-catchments, with preprocessed data stored for each catchment. This allows the watershed delineation to function locally while extending to the full extent of watersheds using preprocessed information. Additional capabilities of this program include computation of average watershed properties and geomorphic and channel network variables such as drainage density, shape factor, relief ratio, and stream ordering. The updated version of TauDEM increases its practical applicability in terms of raster data type, size, and coordinate system. The watershed delineation web service functionality is useful for web-based software-as-a-service deployments that alleviate the need for users to install and work with desktop GIS software.
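
    The format-generality enhancement can be illustrated with a short sketch of raster input through GDAL's Python bindings, where the driver is inferred from the file so the DEM need not be a TIFF; this is a minimal reading pattern under that assumption, not TauDEM's actual I/O code.

      # Minimal sketch of format-agnostic DEM input via GDAL: the driver
      # is chosen from the file itself, so any GDAL-supported format works.
      from osgeo import gdal

      def read_dem(path):
          ds = gdal.Open(path)                 # any GDAL-supported raster
          band = ds.GetRasterBand(1)
          dem = band.ReadAsArray()             # elevations as a NumPy array
          geotransform = ds.GetGeoTransform()  # origin and cell size
          projection = ds.GetProjection()      # WKT; geographic or projected
          nodata = band.GetNoDataValue()
          return dem, geotransform, projection, nodata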

  5. Ecohydrologic coevolution in drylands: relative roles of vegetation, soil depth and runoff connectivity on ecosystem shifts.

    NASA Astrophysics Data System (ADS)

    Saco, P. M.; Moreno de las Heras, M.; Willgoose, G. R.

    2014-12-01


  6. Quick Overview Scout 2008 Version 1.0

    EPA Science Inventory

    The Scout 2008 version 1.0 statistical software package has been updated from past DOS and Windows versions to provide classical and robust univariate and multivariate graphical and statistical methods that are not typically available in commercial or freeware statistical softwar...

  7. Working paper : national costs of the metropolitan ITS infrastructure : updated with 2002 deployment data

    DOT National Transportation Integrated Search

    1995-02-01

    This paper addresses the relationship of truck size and weight (TS&W) policy, vehicle handling and stability, and safety. Handling and stability are the primary mechanisms relating vehicle characteristics and safety. Vehicle characteristics may also ...

  8. An Improved Co-evolutionary Particle Swarm Optimization for Wireless Sensor Networks with Dynamic Deployment

    PubMed Central

    Wang, Xue; Wang, Sheng; Ma, Jun-Jie

    2007-01-01

    The effectiveness of wireless sensor networks (WSNs) depends on the coverage and target detection probability provided by dynamic deployment, which is usually supported by the virtual force (VF) algorithm. However, in the VF algorithm, the virtual force exerted by stationary sensor nodes hinders the movement of mobile sensor nodes. Particle swarm optimization (PSO) has been introduced as another dynamic deployment algorithm, but in this case the required computation time is the main bottleneck. This paper proposes a dynamic deployment algorithm named "virtual force directed co-evolutionary particle swarm optimization" (VFCPSO), which combines co-evolutionary particle swarm optimization (CPSO) with the VF algorithm: the CPSO uses multiple swarms to cooperatively optimize different components of the solution vectors for dynamic deployment, and the velocity of each particle is updated according not only to the historical local and global optimal solutions but also to the virtual forces on the sensor nodes. Simulation results demonstrate that the proposed VFCPSO is competent for dynamic deployment in WSNs and has better performance with respect to computation time and effectiveness than the VF, PSO and VFPSO algorithms.
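
    A minimal sketch of the velocity rule described above follows: the two standard PSO attraction terms plus a virtual-force term. The weight c3 and the way the force enters are assumptions for illustration, not the authors' exact formulation.

      # Sketch of a VFCPSO-style velocity update: inertia, cognitive and
      # social PSO terms, plus a virtual-force term on the sensor node.
      import numpy as np

      def update_velocity(v, x, p_best, g_best, virtual_force,
                          w=0.7, c1=1.5, c2=1.5, c3=0.5, rng=np.random):
          r1, r2 = rng.random(x.shape), rng.random(x.shape)
          return (w * v
                  + c1 * r1 * (p_best - x)   # attraction to personal best
                  + c2 * r2 * (g_best - x)   # attraction to swarm best
                  + c3 * virtual_force)      # virtual force from VF algorithm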

  9. Cross-platform validation and analysis environment for particle physics

    NASA Astrophysics Data System (ADS)

    Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.

    2017-11-01

    A multi-platform validation and analysis framework for public Monte Carlo simulation for high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for online validation of Monte Carlo event samples through a web interface.

  10. An Integrated RFID and Barcode Tagged Item Inventory System for Deployment at New Brunswick Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younkin, James R; Kuhn, Michael J; Gradle, Colleen

    New Brunswick Laboratory (NBL) has a large inventory containing thousands of plutonium and uranium certified reference materials. The current manual inventory process is well established but lengthy, requiring significant oversight and double-checking to ensure correctness. Oak Ridge National Laboratory has worked with NBL to develop and deploy a new inventory system, termed the Tagged Item Inventory System (TIIS), which utilizes handheld computers with barcode scanners and radio frequency identification (RFID) readers. Certified reference materials are identified by labels that incorporate RFID tags and barcodes. The label printing process and RFID tag association process are integrated into the main desktop software application. Software on the handheld computers syncs with software on designated desktop machines and the NBL inventory database to provide a seamless inventory process. This process includes: 1) identifying items to be inventoried, 2) downloading the current inventory information to the handheld computer, 3) using the handheld to read item and location labels, and 4) syncing the handheld computer with a designated desktop machine to analyze the results, print reports, etc. The security of this inventory software has been a major concern. Designated roles linked to authenticated logins are used to control access to the desktop software, while password protection and badge verification are used to control access to the handheld computers. The overall system design and deployment at NBL will be presented. The performance of the system will also be discussed with respect to a small piece of the overall inventory. Future work includes performing a full inventory at NBL with the Tagged Item Inventory System and comparing performance, cost, and radiation exposures to the current manual inventory process.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oppel, III, Fred; Hart, Brian; Hart, Derek

    Umbra is a software package that has been in development at Sandia National Laboratories since 1995, under the name Umbra since 1997. Umbra is a software framework written in C++ and Tcl/Tk that has been applied to many operations, primarily dealing with robotics and simulation. Umbra executables are C++ libraries orchestrated with Tcl/Tk scripts. Two major feature upgrades occurred from 4.7 to 4.8: (1) a System Umbra Module with its own Update Graph within the C++ framework, and (2) a new terrain graph for fast line-of-sight calculations. All other changes were minor updates, such as later versions of Visual Studio, OpenSceneGraph, and Boost.

  12. Automation of Military Civil Engineering and Site Design Functions: Software Evaluation

    DTIC Science & Technology

    1989-09-01

    promising advantage over manual methods, USACERL is to evaluate available software to determine which, if any, is best suited to the type of civil... moved. Therefore, original surface data were assembled by scaling the northing and easting distances of field elevations and entering them manually into... in the software or requesting an update or addition to the software or manuals. Responses to forms submitted during the test were received at

  13. Closing the loop on improvement: Packaging experience in the Software Engineering Laboratory

    NASA Technical Reports Server (NTRS)

    Waligora, Sharon R.; Landis, Linda C.; Doland, Jerry T.

    1994-01-01

    As part of its award-winning software process improvement program, the Software Engineering Laboratory (SEL) has developed an effective method for packaging organizational best practices based on real project experience into useful handbooks and training courses. This paper shares the SEL's experience over the past 12 years creating and updating software process handbooks and training courses. It provides cost models and guidelines for successful experience packaging derived from SEL experience.

  14. Are Academic Programs Adequate for the Software Profession?

    ERIC Educational Resources Information Center

    Koster, Alexis

    2010-01-01

    According to the Bureau of Labor Statistics, close to 1.8 million people, or 77% of all computer professionals, were working in the design, development, deployment, maintenance, and management of software in 2006. The ACM [Association for Computing Machinery] model curriculum for the BS in computer science proposes that about 42% of the core body…

  15. Perceptions of Open Source versus Commercial Software: Is Higher Education Still on the Fence?

    ERIC Educational Resources Information Center

    van Rooij, Shahron Williams

    2007-01-01

    This exploratory study investigated the perceptions of technology and academic decision-makers about open source benefits and risks versus commercial software applications. The study also explored reactions to a concept for outsourcing campus-wide deployment and maintenance of open source. Data collected from telephone interviews were analyzed,…

  16. Extended System Operations Studies for Automated Guideway Transit Systems : Procedure for the Analysis of Representative AGT Deployments

    DOT National Transportation Integrated Search

    1981-12-01

    The purpose of this report is to present a general procedure for using the SOS software to analyze AGT systems. Data to aid the analyst in specifying input information, required as input to the software, are summarized in the appendices. The data are...

  17. User Documentation for Multiple Software Releases

    NASA Technical Reports Server (NTRS)

    Humphrey, R.

    1982-01-01

    In proposed solution to problems of frequent software releases and updates, documentation would be divided into smaller packages, each of which contains data relating to only one of several software components. Changes would not affect entire document. Concept would improve dissemination of information regarding changes and would improve quality of data supporting packages. Would help to ensure both timeliness and more thorough scrutiny of changes.

  18. Software requirements for the study of contextual classifiers and label imperfections

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The software requirements for the study of contextual classifiers and imperfections in the labels are presented. In particular, the requirements are described for updating the a posteriori probability of the picture element under consideration using information from its local neighborhood, designing the Fisher classifier, and other required routines. Only the necessary equations are given for the development of software.
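
    The neighborhood update can be written schematically as a Bayes rule in which the local neighborhood supplies the prior; the notation below is an illustrative reconstruction, not the report's own equations:

      P(\omega_i \mid x, \mathcal{N}) \;\propto\; p(x \mid \omega_i)\, P(\omega_i \mid \mathcal{N})

    where x is the measurement at the picture element under consideration, \omega_i a candidate class label, and P(\omega_i \mid \mathcal{N}) the class probability conditioned on information from the local neighborhood \mathcal{N}.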

  19. Acquisition Handbook - Update. Comprehensive Approach to Reusable Defensive Software (CARDS)

    DTIC Science & Technology

    1994-03-25

    designs, and implementation components (source code, test plans, procedures and results, and system/software documentation). This handbook provides a... activities where software components are acquired, evaluated, tested and sometimes modified. In addition to serving as a facility for the acquisition and... systems from such components [1]. Implementation components are at the lowest level and consist of: specifications; detailed designs; code, test

  20. Optimization modeling of U.S. renewable electricity deployment using local input variables

    NASA Astrophysics Data System (ADS)

    Bernstein, Adam

    For the past five years, state Renewable Portfolio Standard (RPS) laws have been a primary driver of renewable electricity (RE) deployments in the United States. However, four key trends now developing may limit the efficacy of RPS laws over the remainder of the current statutes' lifetimes: (i) lower natural gas prices, (ii) slower growth in electricity demand, (iii) the challenge of balancing intermittent RE within U.S. transmission regions, and (iv) fewer economical sites for RE development. An outsized proportion of U.S. RE build occurs in a small number of favorable locations, increasing the effects of these variables on marginal RE capacity additions. A state-by-state analysis is necessary to study the U.S. electric sector and to generate technology-specific generation forecasts. We used LP optimization modeling similar to the National Renewable Energy Laboratory (NREL) Renewable Energy Development System (ReEDS) to forecast RE deployment across the 8 U.S. states with the largest electricity load, and found state-level RE projections to Year 2031 significantly lower than those implied in the Energy Information Administration (EIA) 2013 Annual Energy Outlook forecast. Additionally, the majority of states do not achieve their RPS targets in our forecast. Combined with the tendency of prior research and RE forecasts to focus on larger national and global scale models, we posit that further bottom-up state and local analysis is needed for more accurate policy assessment, forecasting, and ongoing revision of variables as parameter values evolve through time. Current optimization software eliminates much of the need for algorithm coding and programming, allowing for rapid model construction and updating across many customized state and local RE parameters. Further, our results can be tested against the empirical outcomes that will be observed over the coming years, and the forecast deviation from the actuals can be attributed to discrete parameter variances.
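
    As a toy illustration of the LP formulation style (not the study's actual model), the sketch below chooses wind and solar builds to meet an RPS-style generation target at minimum cost; all coefficients are placeholder values.

      # Toy state-level LP: choose wind and solar capacity (GW) to meet an
      # RPS-style renewable generation target at minimum cost. Numbers are
      # illustrative placeholders, not calibrated inputs.
      from scipy.optimize import linprog

      cost = [1.6, 1.1]            # $B per GW: [wind, solar]
      cap_factor = [0.35, 0.22]    # average capacity factors
      hours = 8760.0
      required_twh = 20.0          # renewable generation target, TWh/yr

      # linprog minimizes subject to A_ub @ x <= b_ub, so express
      # "generation >= target" as "-generation <= -target".
      A_ub = [[-cf * hours / 1000.0 for cf in cap_factor]]
      b_ub = [-required_twh]
      bounds = [(0, 15), (0, 25)]  # economical-site limits, GW

      res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
      print(res.x)                 # least-cost GW of wind and solar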

  1. Virtual Observer Controller (VOC) for Small Unit Infantry Laser Simulation Training

    DTIC Science & Technology

    2007-04-01

    per-seat license when deployed. As a result, ViaVoice was abandoned early in development. Next, the SPHINX engine from Carnegie Mellon University was... examined. Sphinx is Java-based software, providing cross-platform functionality, and it is also free, open-source software. Software developers at... IST had experience using SPHINX, so it was initially selected to be the VOC speech engine. After implementing a small portion of the VOC grammar

  2. Computer Bits: Child Care Center Management Software Buying Guide Update.

    ERIC Educational Resources Information Center

    Neugebauer, Roger

    1987-01-01

    Compares seven center management programs used for basic financial and data management tasks such as accounting, payroll and attendance records, and mailing lists. Describes three other specialized programs and gives guidelines for selecting the best software for a particular center. (NH)

  3. 78 FR 78939 - 36(b)(1) Arms Sales Notification

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-27

    ... Quantity or Quantities of Articles or Services under Consideration for Purchase: C-130J technical, engineering and software support; software updates and patches; familiarization training for Portable Flight... and contractor technical support services; and other related elements of logistics and program support...

  4. Update on PISCES

    NASA Technical Reports Server (NTRS)

    Pearson, Don; Hamm, Dustin; Kubena, Brian; Weaver, Jonathan K.

    2010-01-01

    An updated version of the Platform Independent Software Components for the Exploration of Space (PISCES) software library is available. A previous version was reported in Library for Developing Spacecraft-Mission-Planning Software (MSC-22983), NASA Tech Briefs, Vol. 25, No. 7 (July 2001), page 52. To recapitulate: This software provides for Web-based, collaborative development of computer programs for planning trajectories and trajectory-related aspects of spacecraft-mission design. The library was built using state-of-the-art object-oriented concepts and software-development methodologies. The components of PISCES include Java-language application programs arranged in a hierarchy of classes that facilitates the reuse of the components. As its full name suggests, the PISCES library affords platform-independence: the Java language makes it possible to use the classes and application programs with a Java virtual machine, which is available in most Web-browser programs. Another advantage is expandability: object orientation facilitates expansion of the library through creation of a new class. Improvements in the library since the previous version include development of orbital-maneuver-planning and rendezvous-launch-window application programs, enhancement of capabilities for propagation of orbits, and development of a desktop user interface.

  5. Annotated bibliography of software engineering laboratory literature

    NASA Technical Reports Server (NTRS)

    Kistler, David; Bristow, John; Smith, Don

    1994-01-01

    This document is an annotated bibliography of technical papers, documents, and memorandums produced by or related to the Software Engineering Laboratory. Nearly 200 publications are summarized. These publications cover many areas of software engineering and range from research reports to software documentation. This document has been updated and reorganized substantially since the original version (SEL-82-006, November 1982). All materials have been grouped into eight general subject areas for easy reference: (1) The Software Engineering Laboratory; (2) The Software Engineering Laboratory: Software Development Documents; (3) Software Tools; (4) Software Models; (5) Software Measurement; (6) Technology Evaluations; (7) Ada Technology; and (8) Data Collection. This document contains an index of these publications classified by individual author.

  6. NEON's Mobile Deployment Platform: A Resource for Community Research

    NASA Astrophysics Data System (ADS)

    Sanclements, M.

    2015-12-01

    Here we provide an update on construction and validation of the NEON Mobile Deployment Platforms (MDPs) as well as a description of the infrastructure and sensors available to researchers in the future. The MDPs will provide the means to observe stochastic or spatially important events, gradients, or quantities that cannot be reliably observed using fixed-location sampling (e.g., fires and floods). Due to the transient temporal and spatial nature of such events, the MDPs will be designed to accommodate rapid deployment for time periods up to ~1 year. Broadly, the MDPs will comprise infrastructure and instrumentation capable of functioning individually or in conjunction with one another to support observations of ecological change, as well as education, training, and outreach.

  7. ATM over hybrid fiber-coaxial cable networks: practical issues in deploying residential ATM services

    NASA Astrophysics Data System (ADS)

    Laubach, Mark

    1996-11-01

    Residential broadband access network technology based on asynchronous transfer mode (ATM) will soon reach commercial availability. The capabilities provided by ATM access networks promise integrated-services bandwidth in excess of that provided by traditional twisted-pair copper wire public telephone networks. ATM to the side of the home places needed quality-of-service capability closest to the subscriber, allowing immediate support for Internet services and traditional voice telephony. Other services such as desktop video teleconferencing and enhanced server-based application support can be added as part of the future evolution of the network. Additionally, advanced subscriber home networks can be supported easily. This paper presents an updated summary of the standardization efforts for the ATM-over-HFC definition work currently taking place in the ATM Forum's residential broadband working group and the standards progress in the IEEE 802.14 cable TV media access control and physical protocol working group. This update is fundamental for establishing the foundation for delivering ATM-based integrated services via a cable TV network. An economic model for deploying multi-tiered services is presented, showing that a single-tier service is insufficient for a viable cable operator business. Finally, an ATM-based system lends itself well to various deployment scenarios of synchronous optical networks (SONET).

  8. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    PubMed

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.
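
    A scoring call to such a service could look like the hypothetical sketch below, which posts patient features wrapped in a FHIR-style Observation and reads back a score; the endpoint URL, resource contents, and response field are assumptions, not the paper's actual interface.

      # Hypothetical client call to a FHIR-style scoring endpoint.
      import requests

      observation = {
          "resourceType": "Observation",
          "subject": {"reference": "Patient/123"},
          "code": {"text": "risk-model-input"},
          "component": [
              {"code": {"text": "age"}, "valueQuantity": {"value": 67}},
              {"code": {"text": "creatinine"}, "valueQuantity": {"value": 1.4}},
          ],
      }

      resp = requests.post("https://example.org/fhir/RiskScore",  # assumed URL
                           json=observation, timeout=5)
      print(resp.json()["predictionScore"])                       # assumed field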

  9. Clinical Predictive Modeling Development and Deployment through FHIR Web Services

    PubMed Central

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction. PMID:26958207

  10. Software Smarts

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Under an SBIR (Small Business Innovative Research) contract with Johnson Space Center, Knowledge Based Systems Inc. (KBSI) developed an intelligent software environment for modeling and analyzing mission planning activities, simulating behavior, and, using a unique constraint propagation mechanism, updating plans with each change in mission planning activities. KBSI developed this technology into a commercial product, PROJECTLINK, a two-way bridge between PROSIm, KBSI's process modeling and simulation software, and leading project management software such as Microsoft Project and Primavera's SureTrak Project Manager.

  11. Open Source Surrogate Safety Assessment Model, 2017 Enhancement and Update: SSAM Version 3.0 [Tech Brief

    DOT National Transportation Integrated Search

    2016-11-17

    The ETFOMM (Enhanced Transportation Flow Open Source Microscopic Model) Cloud Service (ECS) is a software product sponsored by the U.S. Department of Transportation in conjunction with the Microscopic Traffic Simulation Models and Software: An Op...

  12. Online Videoconferencing Products: Update

    ERIC Educational Resources Information Center

    Burton, Douglas; Kitchen, Tim

    2011-01-01

    Software allowing real-time online video connectivity is rapidly evolving. The ability to connect students, staff, and guest speakers instantaneously carries great benefits for the online distance education classroom. This evaluation report compares four software applications at opposite ends of the cost spectrum: "DimDim", "Elluminate VCS",…

  13. Crispen's Five Antivirus Rules.

    ERIC Educational Resources Information Center

    Crispen, Patrick Douglas

    2000-01-01

    Explains five rules to protect computers from viruses. Highlights include commercial antivirus software programs and the need to upgrade them periodically (every year to 18 months); updating virus definitions at least weekly; scanning attached files from email with antivirus software before opening them; Microsoft Word macro protection; and the…

  14. Improved Ant Algorithms for Software Testing Cases Generation

    PubMed Central

    Yang, Shunkun; Xu, Jiaqi

    2014-01-01

    Ant colony optimization (ACO) for software test case generation is a popular topic in software testing engineering. However, traditional ACO has flaws: pheromone is relatively scarce early in the search, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO variants for test case generation: an improved local pheromone update strategy, an improved pheromone volatilization coefficient (IPVACO), and an improved global path pheromone update strategy (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO) based on all three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved methods can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391
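
    For reference, the baseline global update that variants such as IGPACO modify combines evaporation with reinforcement of the best path found; this is a standard-ACO sketch with assumed parameters, not the paper's tuned rule.

      # Standard global pheromone update: evaporate everywhere, then
      # reinforce edges on the best path found so far.
      import numpy as np

      def update_pheromone(tau, best_path, best_cost, rho=0.1, Q=1.0):
          tau *= (1.0 - rho)                 # evaporation counters stagnation
          for i, j in zip(best_path, best_path[1:]):
              tau[i, j] += Q / best_cost     # reward the best path's edges
          return tau

      tau = np.ones((5, 5))                  # pheromone matrix, 5-node graph
      update_pheromone(tau, best_path=[0, 2, 4], best_cost=3.0)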

  15. Software engineering laboratory series: Annotated bibliography of software engineering laboratory literature

    NASA Technical Reports Server (NTRS)

    Morusiewicz, Linda; Valett, Jon

    1992-01-01

    This document is an annotated bibliography of technical papers, documents, and memorandums produced by or related to the Software Engineering Laboratory. More than 100 publications are summarized. These publications cover many areas of software engineering and range from research reports to software documentation. This document has been updated and reorganized substantially since the original version (SEL-82-006, November 1982). All materials have been grouped into eight general subject areas for easy reference: (1) the Software Engineering Laboratory; (2) the Software Engineering Laboratory: Software Development Documents; (3) Software Tools; (4) Software Models; (5) Software Measurement; (6) Technology Evaluations; (7) Ada Technology; and (8) Data Collection. This document contains an index of these publications classified by individual author.

  16. Annotated bibliography of Software Engineering Laboratory literature

    NASA Technical Reports Server (NTRS)

    1985-01-01

    An annotated bibliography of technical papers, documents, and memorandums produced by or related to the Software Engineering Laboratory is presented. More than 100 publications are summarized. These publications cover many areas of software engineering and range from research reports to software documentation. This document has been updated and reorganized substantially since the original version (SEL-82-006, November 1982). All materials are grouped into five general subject areas for easy reference: (1) the software engineering laboratory; (2) software tools; (3) models and measures; (4) technology evaluations; and (5) data collection. An index further classifies these documents by specific topic.

  17. Working paper : national costs of the metropolitan ITS infrastructure : update to the FHWA 1995 report

    DOT National Transportation Integrated Search

    2001-07-01

    This working paper has been prepared to provide new estimates of the costs to deploy Intelligent Transportation System (ITS) infrastructure elements in the largest metropolitan areas in the United States. It builds upon estimates that were distribute...

  18. Working Paper : national costs of the metropolitan ITS infrastructure : update to the FHWA 1995 report

    DOT National Transportation Integrated Search

    2000-08-01

    This working paper has been prepared to provide new estimates of the costs to deploy Intelligent Transportation System (ITS) infrastructure elements in the largest metropolitan areas in the United States. It builds upon estimates that were distribute...

  19. A software upgrade method for micro-electronics medical implants.

    PubMed

    Cao, Yang; Hao, Hongwei; Xue, Lin; Li, Luming; Ma, Bozhi

    2006-01-01

    A software upgrade method for microelectronics medical implants is designed to enhance device function or renew the software when bugs are found, the software needs updating, or some memory units fail. Implants need not be surgically replaced if faults can be corrected through reprogramming, which reduces patient discomfort and effectively improves safety. This paper introduces a software upgrade method using in-application programming (IAP) and emphasizes how to ensure the reliability and stability of the system, especially the implanted part, while upgrading.
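
    The reliability concern translates into a verify-before-commit pattern: program the new image into a spare memory bank, check its integrity, and switch banks only on success, so a failed transfer never leaves the implant without working firmware. The sketch below models that logic in Python for clarity; the bank layout, CRC check, and switch mechanism are assumptions, not the paper's implementation.

      # Conceptual model of an IAP-style verify-then-switch upgrade.
      import zlib

      banks = {"active": b"old firmware", "inactive": b""}

      def upgrade(image, expected_crc):
          banks["inactive"] = image                 # program the spare bank
          if zlib.crc32(banks["inactive"]) != expected_crc:
              return False                          # old image keeps running
          banks["active"], banks["inactive"] = (    # switch only after verify
              banks["inactive"], banks["active"])
          return True

      new_image = b"new firmware"
      print(upgrade(new_image, zlib.crc32(new_image)))   # True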

  20. Thermal Tracker: The Secret Lives of Bats and Birds Revealed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Offshore wind developers and stakeholders can accelerate the sustainable, widespread deployment of offshore wind using a new open-source software program called ThermalTracker. Researchers can now collect the data they need to better understand the potential effects of offshore wind turbines on bird and bat populations. This plug-and-play software can be used with any standard desktop computer, thermal camera, and statistical software to identify species and behaviors of animals in offshore locations.

  1. Usability Evaluation of Multimedia Courseware (MEL-SindD)

    NASA Astrophysics Data System (ADS)

    Yussof, Rahmah Lob; Badioze Zaman, Halimah

    Constructive evaluations of any software are needed to ensure the effectiveness and usability of the software. This assessment of the multimedia courseware is part of the researcher's study of the development and usability of early reading software for students with Down Syndrome (MEL-SindD). This paper discusses the usability assessment of this courseware, the methods used for the evaluation, as well as suitable approaches that can be deployed to evaluate the courseware's effectiveness for disabled children.

  2. Development, Validation and Integration of the ATLAS Trigger System Software in Run 2

    NASA Astrophysics Data System (ADS)

    Keyes, Robert; ATLAS Collaboration

    2017-10-01

    The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware, and software, associated with various sub-detectors that must seamlessly cooperate in order to select one collision of interest out of every 40,000 delivered by the LHC every millisecond. These proceedings discuss the challenges, organization, and work flow of the ongoing trigger software development, validation, and deployment. The goal of the development is to ensure that the most up-to-date algorithms are used to optimize the performance of the experiment. The goal of the validation is to ensure the reliability and predictability of the software performance. Integration tests are carried out to ensure that the software deployed to the online trigger farm runs as desired during data-taking. Trigger software is validated by emulating online conditions using a benchmark run and mimicking the reconstruction that occurs during normal data-taking. This exercise is computationally demanding and thus runs on the ATLAS high performance computing grid with high priority. Performance metrics, ranging from low-level memory and CPU requirements to distributions and efficiencies of high-level physics quantities, are visualized and validated by a range of experts. This is a multifaceted critical task that ties together many aspects of the experimental effort and thus directly influences the overall performance of the ATLAS experiment.

  3. IHE cross-enterprise document sharing for imaging: interoperability testing software

    PubMed Central

    2010-01-01

    Background: With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. Results: In this paper we describe a software that is used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the elected design solutions. Conclusions: EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for an easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specifications ambiguities, or to resolve implementations difficulties. PMID:20858241

  4. IHE cross-enterprise document sharing for imaging: interoperability testing software.

    PubMed

    Noumeir, Rita; Renaud, Bérubé

    2010-09-21

    With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. In this paper we describe a software that is used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the elected design solutions. EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for an easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specifications ambiguities, or to resolve implementations difficulties.

  5. Math on the Fast Track

    ERIC Educational Resources Information Center

    Howe, Quincy

    2006-01-01

    In this article, the author relates how a math-assessment software has allowed his school to track the academic progress of its students. The author relates that in the first year that the software was deployed, schoolwide averages in terms of national standing on the math ITBS rose from the 42nd to 59th percentile. In addition, a significant…

  6. Software for Collaborative Use of Large Interactive Displays

    NASA Technical Reports Server (NTRS)

    Trimble, Jay; Shab, Thodore; Wales, Roxana; Vera, Alonso; Tollinger, Irene; McCurdy, Michael; Lyubimov, Dmitriy

    2006-01-01

    The MERBoard Collaborative Workspace, which is currently being deployed to support the Mars Exploration Rover (MER) missions, is the first instantiation of a new computing architecture designed to support collaborative and group computing using computing devices situated in NASA mission operations rooms. It is a software system for the generation of large-screen interactive displays by multiple users.

  7. Scalable computing for evolutionary genomics.

    PubMed

    Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert

    2012-01-01

    Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster and pipeline in a few steps. This allows researchers to scale up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages of interest to evolutionary biology are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. Next to the downloadable BioNode images, we provide tutorials online, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives on creating and building such images.
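
    The "poor man's parallelization" pattern amounts to launching whole programs as separate OS processes; a minimal sketch follows, fanning a command-line tool out over many inputs. The blastp invocation and file names are illustrative, and a real cluster would typically use a job scheduler instead.

      # Run a legacy command-line tool over many inputs in parallel,
      # one OS process per input.
      from concurrent.futures import ProcessPoolExecutor
      import subprocess

      def run_one(fasta):
          subprocess.run(["blastp", "-query", fasta, "-db", "nr",
                          "-out", fasta + ".out"], check=True)

      if __name__ == "__main__":
          inputs = ["seqs_%03d.fa" % i for i in range(32)]
          with ProcessPoolExecutor(max_workers=8) as pool:
              list(pool.map(run_one, inputs))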

  8. Quantitative Microbial Risk Assessment Tutorial Installation of Software for Watershed Modeling in Support of QMRA - Updated 2017

    EPA Science Inventory

    This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling: • QMRA Installation • SDMProjectBuilder (which includes the Microbial ...

  9. Gambling on CD-ROM.

    ERIC Educational Resources Information Center

    Lowe, John B.

    1988-01-01

    If the CD-ROM revolution is likened to gambling, players are information providers and consumers; the stakes are development, production, distribution, hardware, and software costs; and betting is represented by the costs of updating disks and hardware and software maintenance, and by pricing. Strategy should take into account cost savings,…

  10. Apollo: Changing the Way We Work.

    ERIC Educational Resources Information Center

    Schroeder, John R.; Bleed, Ron

    In January 1994, Arizona's Maricopa Community College District issued a request for proposals to develop new administrative software applications to solve problems related to high maintenance costs for existing systems and difficulties in updating software. The result was the Apollo Project, in which the District contracted with Oracle Corporation…

  11. Cross-platform validation and analysis environment for particle physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.

    A multi-platform validation and analysis framework for public Monte Carlo simulation for high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for online validation of Monte Carlo event samples through a web interface.

  12. Evaluation of a Mobile Platform for Proof-of-Concept Autonomous Site Selection and Preparation

    NASA Astrophysics Data System (ADS)

    Gammell, Jonathan

    A mobile robotic platform for Autonomous Site Selection and Preparation (ASSP) was developed for an analogue deployment to Mauna Kea, Hawai`i. A team of rovers performed an autonomous Ground Penetrating Radar (GPR) survey and constructed a level landing pad. They used interchangeable payloads that allowed the GPR and blade to be easily exchanged. Autonomy was accomplished by integrating the individual hardware devices with software based on the ArgoSoft framework previously developed at UTIAS. The rovers were controlled by an on-board netbook. The successes and failures of the devices and software modules are evaluated within. Recommendations are presented to address problems discovered during the deployment and to guide future research on the platform.

  13. Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry

    NASA Technical Reports Server (NTRS)

    Brown, Denise L.; Munoz, Jean-Philippe; Gay, Robert

    2011-01-01

    The EFT-1 mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on onboard altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data are not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger deployment of the drogue and main parachutes. It is therefore important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. There are four primary error sources impacting the sensed pressure: sensor errors, analog-to-digital conversion errors, aerodynamic errors, and atmosphere modeling errors. This last error source is induced by the conversion from pressure to altitude in the vehicle flight software, which requires an atmosphere model such as the US Standard Atmosphere 1976. There are several secondary error sources as well, such as waves, tides, and latencies in data transmission. Typically, for error budget calculations it is assumed that all error sources are independent, normally distributed variables. Thus, the initial approach to developing the EFT-1 barometric altimeter altitude error budget was to create an itemized error budget under these assumptions. This budget was to be verified by simulation using high-fidelity models of the vehicle hardware and software. The simulation barometric altimeter model includes hardware error sources and a data-driven model of the aerodynamic errors expected to impact the pressure in the midbay compartment in which the sensors are located. The aerodynamic model includes the difference between midbay compartment pressure and free-stream pressure as a function of altitude, oscillations in sensed pressure due to wake effects, and an acoustics model capturing fluctuations in pressure due to motion of the passive vents separating the barometric altimeters from the outside of the vehicle.
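
    The pressure-to-altitude conversion at the heart of the modeling-error discussion can be sketched for the first (troposphere) layer of the US Standard Atmosphere 1976, valid below about 11 km; this is the textbook inversion, not the EFT-1 flight software:

      # Barometric altitude from sensed pressure, US Standard Atmosphere
      # 1976 troposphere layer (valid below ~11 km).
      T0, L = 288.15, 0.0065        # sea-level temperature (K), lapse rate (K/m)
      P0 = 101325.0                 # sea-level pressure (Pa)
      G0, M, R = 9.80665, 0.0289644, 8.31446

      def pressure_to_altitude(p):
          return (T0 / L) * (1.0 - (p / P0) ** (R * L / (G0 * M)))  # meters

      print(pressure_to_altitude(89874.6))   # roughly 1000 m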

  14. Updates to the CMAQ Post Processing and Evaluation Tools for 2016

    EPA Science Inventory

    In the spring of 2016, the evaluation tools distributed with the CMAQ model code were updated and new tools were added to the existing set of tools. Observation data files, compatible with the AMET software, were also made available on the CMAS website for the first time with the...

  15. A reduced-form approach for representing the impacts of wind and solar PV deployment on the structure and operation of the electricity system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Nils; Strubegger, Manfred; McPherson, Madeleine

    In many climate change mitigation scenarios, integrated assessment models of the energy and climate systems rely heavily on renewable energy technologies with variable and uncertain generation, such as wind and solar PV, to achieve substantial decarbonization of the electricity sector. However, these models often include very little temporal resolution and thus have difficulty in representing the integration costs that arise from mismatches between electricity supply and demand. The global integrated assessment model, MESSAGE, has been updated to explicitly model the trade-offs between variable renewable energy (VRE) deployment and its impacts on the electricity system, including the implications for electricity curtailment, backup capacity, and system flexibility. These impacts have been parameterized using a reduced-form approach, which allows VRE integration impacts to be quantified on a regional basis. In addition, thermoelectric technologies were updated to include two modes of operation, baseload and flexible, to better account for the cost, efficiency, and availability penalties associated with flexible operation. In this paper, the modeling approach used in MESSAGE is explained and the implications for VRE deployment in mitigation scenarios are assessed. Three important stylized facts associated with integrating high VRE shares are successfully reproduced by our modeling approach: (1) the significant reduction in the utilization of non-VRE power plants; (2) the diminishing role for traditional baseload generators, such as nuclear and coal, and the transition to more flexible technologies; and (3) the importance of electricity storage and hydrogen electrolysis in facilitating the deployment of VRE.
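
    A reduced-form parameterization of this kind can be pictured as a simple response curve, e.g. curtailment rising nonlinearly once the VRE share exceeds what the system can flexibly absorb; the functional form and coefficients below are placeholders for illustration, not MESSAGE's fitted values.

      # Placeholder reduced-form curve: curtailed fraction of VRE output
      # as a function of VRE share, shifted by a system-flexibility term.
      def curtailment_fraction(vre_share, flexibility=0.3):
          excess = max(0.0, vre_share - flexibility)
          return min(1.0, excess ** 2 / (1.0 - flexibility))

      for share in (0.2, 0.4, 0.6, 0.8):
          print(share, round(curtailment_fraction(share), 3))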

  16. Software Engineering Research/Developer Collaborations in 2005

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom

    2006-01-01

    In CY 2005, three collaborations between software engineering technology providers and NASA software development personnel deployed three software engineering technologies on NASA development projects (a different technology on each project). The main purposes were to benefit the projects, infuse the technologies if beneficial into NASA, and give feedback to the technology providers to improve the technologies. Each collaboration project produced a final report. Section 2 of this report summarizes each project, drawing from the final reports and communications with the software developers and technology providers. Section 3 indicates paths to further infusion of the technologies into NASA practice. Section 4 summarizes some technology transfer lessons learned. Also included is an acronym list.

  17. A Software Engineering Approach based on WebML and BPMN to the Mediation Scenario of the SWS Challenge

    NASA Astrophysics Data System (ADS)

    Brambilla, Marco; Ceri, Stefano; Valle, Emanuele Della; Facca, Federico M.; Tziviskou, Christina

    Although Semantic Web Services are expected to produce a revolution in the development of Web-based systems, very few enterprise-wide design experiences are available; one of the main reasons is the lack of sound Software Engineering methods and tools for the deployment of Semantic Web applications. In this chapter, we present an approach to software development for the Semantic Web based on classical Software Engineering methods (i.e., formal business process development, computer-aided and component-based software design, and automatic code generation) and on semantic methods and tools (i.e., ontology engineering, semantic service annotation and discovery).

  18. Open source molecular modeling.

    PubMed

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-09-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. An updated online version of this catalog can be found at https://opensourcemolecularmodeling.github.io. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  19. Cortical Coupling Reflects Bayesian Belief Updating in the Deployment of Spatial Attention.

    PubMed

    Vossel, Simone; Mathys, Christoph; Stephan, Klaas E; Friston, Karl J

    2015-08-19

    The deployment of visuospatial attention and the programming of saccades are governed by the inferred likelihood of events. In the present study, we combined computational modeling of psychophysical data with fMRI to characterize the computational and neural mechanisms underlying this flexible attentional control. Sixteen healthy human subjects performed a modified version of Posner's location-cueing paradigm in which the percentage of cue validity varied in time and the targets required saccadic responses. Trialwise estimates of the certainty (precision) of the prediction that the target would appear at the cued location were derived from a hierarchical Bayesian model fitted to individual trialwise saccadic response speeds. Trial-specific model parameters then entered analyses of fMRI data as parametric regressors. Moreover, dynamic causal modeling (DCM) was performed to identify the most likely functional architecture of the attentional reorienting network and its modulation by (Bayes-optimal) precision-dependent attention. While the frontal eye fields (FEFs), intraparietal sulcus, and temporoparietal junction (TPJ) of both hemispheres showed higher activity on invalid relative to valid trials, reorienting responses in right FEF, TPJ, and the putamen were significantly modulated by precision-dependent attention. Our DCM results suggested that the precision of predictability underlies the attentional modulation of the coupling of TPJ with FEF and the putamen. Our results shed new light on the computational architecture and neuronal network dynamics underlying the context-sensitive deployment of visuospatial attention. Spatial attention and its neural correlates in the human brain have been studied extensively with the help of fMRI and cueing paradigms in which the location of targets is pre-cued on a trial-by-trial basis. One aspect that has so far been neglected concerns the question of how the brain forms attentional expectancies when no a priori probability information is available but needs to be inferred from observations. This study elucidates the computational and neural mechanisms by which probabilistic inference governs attentional deployment. Our results show that Bayesian belief updating explains changes in cortical connectivity, in that directional influences from the temporoparietal junction on the frontal eye fields and the putamen were modulated by (Bayes-optimal) updates. Copyright © 2015 Vossel et al.
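
    The core computational idea, trialwise updating of the certainty of a prediction, can be illustrated with a deliberately simplified stand-in for the paper's hierarchical model: a leaky beta-binomial belief over cue validity whose precision is the inverse variance of the Beta distribution. All parameters are illustrative.

        def update_belief(alpha, beta_, valid, leak=0.95):
            """One trial of a leaky beta-binomial update of cue validity.
            `leak` discounts old evidence so the belief can track changes
            in the true validity (a simplification, not the paper's model)."""
            alpha, beta_ = leak * alpha, leak * beta_
            return (alpha + 1, beta_) if valid else (alpha, beta_ + 1)

        def precision(alpha, beta_):
            """Inverse variance of the Beta(alpha, beta) belief."""
            n = alpha + beta_
            var = alpha * beta_ / (n * n * (n + 1.0))
            return 1.0 / var

        a, b = 1.0, 1.0                              # flat prior over cue validity
        for valid in [1, 1, 1, 0, 1, 1, 0, 0, 0]:    # validity changes mid-run
            a, b = update_belief(a, b, valid)
            print(round(a / (a + b), 2), round(precision(a, b), 1))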

  20. Automating NEURON Simulation Deployment in Cloud Resources.

    PubMed

    Stockton, David B; Santamaria, Fidel

    2017-01-01

    Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Cloud Computing, based on Amazon's proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model.
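
    The "simple common interface" across back ends is essentially a polymorphic resource abstraction. The sketch below is a hypothetical API in that spirit, not NeuroManager's actual classes; each back end implements the same submit/status pair so a caller can mix local, HPC, and cloud resources.

        from abc import ABC, abstractmethod
        import subprocess

        class ComputeResource(ABC):
            """Uniform interface over heterogeneous back ends (hypothetical API)."""

            @abstractmethod
            def submit(self, command):
                """Start a simulation; return an opaque job handle."""

            @abstractmethod
            def status(self, handle):
                """Return 'running' or 'done'."""

        class LocalServer(ComputeResource):
            def submit(self, command):
                return subprocess.Popen(command)

            def status(self, handle):
                return "running" if handle.poll() is None else "done"

        # A cloud or HPC back end would implement the same two methods with
        # OpenStack, EC2, or batch-scheduler calls, leaving callers unchanged.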

  1. Automating NEURON Simulation Deployment in Cloud Resources

    PubMed Central

    Santamaria, Fidel

    2016-01-01

    Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Cloud Computing, based on Amazon’s proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model. PMID:27655341

  2. The deployment of routing protocols in distributed control plane of SDN.

    PubMed

    Jingjing, Zhou; Di, Cheng; Weiming, Wang; Rong, Jin; Xiaochun, Wu

    2014-01-01

    Software-defined networking (SDN) provides a programmable network by decoupling the data plane, control plane, and application plane from the original closed system, thus revolutionizing the existing network architecture to improve performance and scalability. In this paper, we study the distributed characteristics of the Kandoo architecture and improve and optimize Kandoo's two levels of controllers, drawing on ideas from RCP (routing control platform). Finally, we analyze the deployment strategies of the BGP and OSPF protocols in a distributed control plane of SDN. The simulation results show that our deployment strategies are superior to traditional routing strategies.

  3. History Microcomputer Games: Update 2.

    ERIC Educational Resources Information Center

    Sargent, James E.

    1985-01-01

    Provides full narrative reviews of B-1 Nuclear Bomber (Avalon, 1982); American History Adventure (Social Science Microcomputer Review Software, 1985); Government Simulations (Prentice-Hall, 1985); and The Great War, FDR and the New Deal, and Hitler's War, all from New Worlds Software, 1985. Lists additional information on five other history and…

  4. Multiyear Interactive Computer Almanac (MICA)

    Science.gov Websites

    Website excerpt from the U.S. Naval Observatory's MICA pages (About MICA, Features, System Requirements, Delta T File and Software Updates, FAQ and Bug Reports, Ordering). Recoverable feature notes: MICA can compute delta T and the twilight, rise, set, and transit times for major solar system bodies and selected bright stars.

  5. COMPILATION OF SATURATED AND UNSATURATED ZONE MODELING SOFTWARE

    EPA Science Inventory

    The full report provides readers an overview of available ground-water modeling programs and related software. It is an update of EPA/600/R-93/118 and EPA/600/R-94/028, two previous reports from the same program at the International Ground Water Modeling Center (IGWMC) in Colora...

  6. 78 FR 79564 - Discontinuance of Annual Financial Assessments-Delay in Implementation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-30

    ... that due to delays in modifying computer software, VA is postponing implementation of this change. FOR... computer matching of income reported to the Internal Revenue Service (IRS) and Social Security... implemented by December 31, 2013. Due to delays in revising and updating supporting computer software, VA is...

  7. Advanced software development workstation project: Engineering scripting language. Graphical editor

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Software development is widely considered to be a bottleneck in the development of complex systems, both in terms of development and in terms of maintenance of deployed systems. Cost of software development and maintenance can also be very high. One approach to reducing costs and relieving this bottleneck is increasing the reuse of software designs and software components. A method for achieving such reuse is a software parts composition system. Such a system consists of a language for modeling software parts and their interfaces, a catalog of existing parts, an editor for combining parts, and a code generator that takes a specification and generates code for that application in the target language. The Advanced Software Development Workstation is intended to be an expert system shell designed to provide the capabilities of a software part composition system.

  8. A Bayesian Framework for Reliability Analysis of Spacecraft Deployments

    NASA Technical Reports Server (NTRS)

    Evans, John W.; Gallo, Luis; Kaminsky, Mark

    2012-01-01

    Deployable subsystems are essential to mission success of most spacecraft. These subsystems enable critical functions including power, communications and thermal control. The loss of any of these functions will generally result in loss of the mission. These subsystems and their components often consist of unique designs and applications for which various standardized data sources are not applicable for estimating reliability and for assessing risks. In this study, a two-stage sequential Bayesian framework for reliability estimation of spacecraft deployment was developed for this purpose. This process was then applied to the James Webb Space Telescope (JWST) Sunshield subsystem, a unique design intended for thermal control of the Optical Telescope Element. Initially, detailed studies of NASA deployment history, "heritage information", were conducted, extending over 45 years of spacecraft launches. This information was then coupled to a non-informative prior and a binomial likelihood function to create a posterior distribution for deployments of various subsystems using Markov chain Monte Carlo sampling. Select distributions were then coupled to a subsequent analysis, using test data and anomaly occurrences on successive ground test deployments of scale model test articles of JWST hardware, to update the NASA heritage data. This allowed for a realistic prediction for the reliability of the complex Sunshield deployment, with credibility limits, within this two-stage Bayesian framework.
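
    With a conjugate Beta prior, the two-stage sequential update has a closed form, which makes the framework easy to sketch. The study itself used Markov chain Monte Carlo sampling, and the counts below are hypothetical, chosen only to show the mechanics.

        # Two-stage conjugate (beta-binomial) sketch of the sequential update.
        # All deployment counts are hypothetical, for illustration only.

        a, b = 0.5, 0.5                 # Jeffreys (non-informative) prior

        # Stage 1: heritage data, e.g. 180 successful deployments in 184 tries
        a, b = a + 180, b + 4

        # Stage 2: ground-test deployments of scale-model hardware, e.g. 12 of 12
        a, b = a + 12, b + 0

        print(f"posterior mean reliability: {a / (a + b):.4f}")
        # Credibility limits come from the Beta(a, b) quantiles, e.g.
        # scipy.stats.beta.ppf([0.05, 0.95], a, b)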

  9. Cyber security best practices for the nuclear industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badr, I.

    2012-07-01

    When deploying software-based systems, such as digital instrumentation and controls for the nuclear industry, it is vital to include cyber security assessment as part of the architecture and development process. When integrating and delivering software-intensive systems for the nuclear industry, engineering teams should make use of a secure, requirements-driven software development life cycle, ensuring security compliance and optimum return on investment. Reliability protections, data loss prevention, and privacy enforcement provide a strong case for installing strict cyber security policies. (authors)

  10. Lower Total Cost of Ownership of ONE-NET by Using Thin-Client Desktop Deployment and Virtualization-Based Server Technology

    DTIC Science & Technology

    2010-09-01

    A cost per seat (CPS) model developed by Naval Network Warfare Command (NNWC) was used to calculate major cost components—labor, hardware, software, and transport, while a VMware tool was used to calculate power and...cooling costs for both solutions. In addition, VMware provided a cost estimate for the upfront hardware and software licensing costs needed to support...

  11. Lessons Learned from Autonomous Sciencecraft Experiment

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Sherwood, Rob; Tran, Daniel; Cichy, Benjamin; Rabideau, Gregg; Castano, Rebecca; Davies, Ashley; Mandl, Dan; Frye, Stuart; Trout, Bruce

    2005-01-01

    An Autonomous Science Agent has been flying onboard the Earth Observing One spacecraft since 2003. This software enables the spacecraft to autonomously detect and respond to science events occurring on the Earth, such as volcanoes, flooding, and snow melt. The package includes AI-based software systems that perform science data analysis, deliberative planning, and run-time robust execution. This software is in routine use to fly the EO-1 mission. In this paper we briefly review the agent architecture and discuss lessons learned from this multi-year flight effort pertinent to deployment of software agents to critical applications.

  12. Automation and Networking of Public Libraries in India Using the E-Granthalaya Software from the National Informatics Centre

    ERIC Educational Resources Information Center

    Matoria, Ram Kumar; Upadhyay, P. K.; Moni, Madaswamy

    2007-01-01

    Purpose: To describe the development of the library management system, e-Granthalaya, for public libraries in India. This is an initiative of the Indian government's National Informatics Centre (NIC). The paper outlines the challenges and the potential of a full-scale deployment of this software at a national level. Design/methodology/approach:…

  13. Deploying an Intelligent Pairing Assistant for Air Operation Centers

    DTIC Science & Technology

    2016-06-23

    The primary contributions of this case study are applying artificial intelligence techniques to a novel domain and discussing the software evaluation...their standard workflows. ...users for more efficient and accurate pairing? Participants in the evaluation consisted of three SMEs employed at Intelligent Software

  14. Demographic Variables as Factors Influencing Accessibility and Utilisation of Library Software by Undergraduates in Two Private Universities in Nigeria

    ERIC Educational Resources Information Center

    Tolulope, Akano

    2017-01-01

    Libraries before the 21st century carried out daily routine library tasks such as cataloguing and classification, acquisition, reference services, etc. using manual procedures only, but the advent of Information Technology has transformed these routine tasks such that libraries can now automate their activities by deploying library software in…

  15. Enhancing reproducibility in scientific computing: Metrics and registry for Singularity containers.

    PubMed

    Sochat, Vanessa V; Prybol, Cameron J; Kurtzer, Gregory M

    2017-01-01

    Here we present Singularity Hub, a framework to build and deploy Singularity containers for mobility of compute, and the singularity-python software with novel metrics for assessing reproducibility of such containers. Singularity containers make it possible for scientists and developers to package reproducible software, and Singularity Hub adds automation to this workflow by building, capturing metadata for, visualizing, and serving containers programmatically. Our novel metrics, based on custom filters of content hashes of container contents, allow for comparison of an entire container, including operating system, custom software, and metadata. First we will review Singularity Hub's primary use cases and how the infrastructure has been designed to support modern, common workflows. Next, we conduct three analyses to demonstrate build consistency, reproducibility metric and performance and interpretability, and potential for discovery. This is the first effort to demonstrate a rigorous assessment of measurable similarity between containers and operating systems. We provide these capabilities within Singularity Hub, as well as the source software singularity-python that provides the underlying functionality. Singularity Hub is available at https://singularity-hub.org, and we are excited to provide it as an openly available platform for building, and deploying scientific containers.
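
    The underlying comparison can be sketched as a Jaccard similarity over per-file content hashes of two unpacked container trees. This mirrors the general idea, though not the exact custom filters implemented in singularity-python.

        import hashlib, os

        def content_hashes(root, chunk=1 << 20):
            """Map each file path under `root` to a SHA-256 of its bytes."""
            hashes = {}
            for dirpath, _, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    h = hashlib.sha256()
                    try:
                        with open(path, "rb") as f:
                            for block in iter(lambda: f.read(chunk), b""):
                                h.update(block)
                    except OSError:
                        continue      # skip unreadable files
                    hashes[os.path.relpath(path, root)] = h.hexdigest()
            return hashes

        def jaccard(h1, h2):
            """Similarity over (path, digest) pairs of two unpacked containers."""
            s1, s2 = set(h1.items()), set(h2.items())
            return len(s1 & s2) / len(s1 | s2) if (s1 or s2) else 1.0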

  16. Enhancing reproducibility in scientific computing: Metrics and registry for Singularity containers

    PubMed Central

    Prybol, Cameron J.; Kurtzer, Gregory M.

    2017-01-01

    Here we present Singularity Hub, a framework to build and deploy Singularity containers for mobility of compute, and the singularity-python software with novel metrics for assessing reproducibility of such containers. Singularity containers make it possible for scientists and developers to package reproducible software, and Singularity Hub adds automation to this workflow by building, capturing metadata for, visualizing, and serving containers programmatically. Our novel metrics, based on custom filters of content hashes of container contents, allow for comparison of an entire container, including operating system, custom software, and metadata. First we will review Singularity Hub’s primary use cases and how the infrastructure has been designed to support modern, common workflows. Next, we conduct three analyses to demonstrate build consistency, reproducibility metric and performance and interpretability, and potential for discovery. This is the first effort to demonstrate a rigorous assessment of measurable similarity between containers and operating systems. We provide these capabilities within Singularity Hub, as well as the source software singularity-python that provides the underlying functionality. Singularity Hub is available at https://singularity-hub.org, and we are excited to provide it as an openly available platform for building, and deploying scientific containers. PMID:29186161

  17. Safe Surgery Trainer

    DTIC Science & Technology

    2014-11-15

    design, testing, and development. b) Prototype Development – Continue developing SST software, game-flow, and mechanics. Continue developing art...refined learning objectives into measurement outlines. Update IRB submissions, edit usability game play study, and update I/ITSEC IRB. Provide case...minimal or near zero. 9) Related Activities a) Presenting at the Design of Learning Games Community Workshop, at I/ITSEC, Wednesday, Dec 3rd

  18. Mosquito Control Techniques Developed for the US Military and an Update on the AMCA

    USDA-ARS?s Scientific Manuscript database

    Scientists at the USDA Center for Medical, Agricultural and Veterinary Entomology developed and field tested novel techniques to protect deployed military troops from diseases transmitted by mosquitoes and sand flies. Methods that proved to be very effective included (1) novel military personal prot...

  19. Update on SPLAT and cranberry fruitworm degree-days

    USDA-ARS?s Scientific Manuscript database

    This talk reviews the mating disruption mechanism and work that we have done in the Steffan lab summers 2012-2014. In 2016 we mechanized the deployment of mating disruption with the use of unmanned aerial vehicles. We are not continuing this form of mechanization at this time due to challenges with ...

  20. Project Update: Increased Fuel Affordability through Deployable Refining Technology

    DTIC Science & Technology

    2016-08-01

    gal of jet fuel to meet fit-for-purpose specifications for ultra-low sulfur diesel (< 15 ppm S). The treated fuel will be utilized in a ~40-hr...engine test to verify operating performance characteristics. Follow-on field demonstration opportunities may include treatment of overseas diesel fuel

  1. Districts Deploy Digital Tools to Engage Parents

    ERIC Educational Resources Information Center

    Fleming, Nora

    2012-01-01

    Digital technology is providing a growing variety of methods for school leaders to connect with parents anywhere, anytime--a tactic mirroring how technology is used to engage students. Through Twitter feeds, Facebook pages, and text messages sent in multiple languages, school staff members are giving parents instant updates, news, and information…

  2. Software quality in 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, C.

    1997-11-01

    For many years, software quality assurance lagged behind hardware quality assurance in terms of methods, metrics, and successful results. New approaches such as Quality Function Deployment (QFD) the ISO 9000-9004 standards, the SEI maturity levels, and Total Quality Management (TQM) are starting to attract wide attention, and in some cases to bring software quality levels up to a parity with manufacturing quality levels. Since software is on the critical path for many engineered products, and for internal business systems as well, the new approaches are starting to affect global competition and attract widespread international interest. It can be hypothesized that success in mastering software quality will be a key strategy for dominating global software markets in the 21st century.

  3. The Five 'R's' for Developing Trusted Software Frameworks to increase confidence in, and maximise reuse of, Open Source Software.

    NASA Astrophysics Data System (ADS)

    Fraser, Ryan; Gross, Lutz; Wyborn, Lesley; Evans, Ben; Klump, Jens

    2015-04-01

    Recent investments in HPC, cloud, and petascale data stores have dramatically increased the scale and resolution at which earth science challenges can now be tackled. These new infrastructures are highly parallelised, and to fully utilise them and access the large volumes of earth science data now available, a new approach to software stack engineering needs to be developed. The size, complexity and cost of the new infrastructures mean any software deployed has to be reliable, trusted and reusable. Increasingly software is available via open source repositories, but these usually only enable code to be discovered and downloaded. As a user it is hard for a scientist to judge the suitability and quality of individual codes: rarely is there information on how and where codes can be run, what the critical dependencies are, and in particular, on the version requirements and licensing of the underlying software stack. A trusted software framework is proposed to enable reliable software to be discovered, accessed and then deployed on multiple hardware environments. More specifically, this framework will enable those who generate the software, and those who fund the development of software, to gain credit for the effort, IP, time and dollars spent, and facilitate quantification of the impact of individual codes. For scientific users, the framework delivers reviewed and benchmarked scientific software with mechanisms to reproduce results. The trusted framework will have five separate, but connected components: Register, Review, Reference, Run, and Repeat. 1) The Register component will facilitate discovery of relevant software from multiple open source code repositories. Registration of a code should include information about licensing and the hardware environments it can be run on, define appropriate validation (testing) procedures, and list the critical dependencies. 2) The Review component targets verification of the software, typically against a set of benchmark cases. This will be achieved by linking the code in the software framework to peer review forums such as Mozilla Science or appropriate journals (e.g., Geoscientific Model Development) to help users know which codes to trust. 3) Referencing will be accomplished by linking the software framework to groups such as Figshare or ImpactStory that help disseminate and measure the impact of scientific research, including program code. 4) The Run component will draw on information supplied in the registration process, benchmark cases described in the review, and other relevant information to instantiate the scientific code on the selected environment. 5) The Repeat component will tap into existing provenance workflow engines that automatically capture information relating to a particular run of the software, including identification of all input and output artefacts, and all elements and transactions within that workflow. The proposed trusted software framework will enable users to rapidly discover and access reliable code, reduce the time to deploy it, and greatly facilitate sharing, reuse and reinstallation of code. Properly designed, it could scale out to massively parallel systems and be accessed nationally and internationally for multiple use cases, including supercomputer centres, cloud facilities, and local computers.
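
    As an illustration of the Register component, the kind of metadata record it would capture might look as follows. The field names and values are hypothetical, not a published schema.

        # Hypothetical registration record for the proposed Register component;
        # all names and values are illustrative placeholders, not a schema.
        registration = {
            "name": "example-solver",
            "repository": "https://example.org/example-solver.git",
            "license": "Apache-2.0",
            "hardware_environments": ["linux-x86_64", "cray-xc40"],
            "critical_dependencies": {"python": ">=3.8", "mpi": "openmpi>=4.0"},
            "validation": ["benchmarks/run_all.sh"],
        }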

  4. Development of a New VLBI Data Analysis Software

    NASA Technical Reports Server (NTRS)

    Bolotin, Sergei; Gipson, John M.; MacMillan, Daniel S.

    2010-01-01

    We present an overview of a new VLBI analysis software under development at NASA GSFC. The new software will replace CALC/SOLVE and many related utility programs. It will have the capabilities of the current system as well as incorporate new models and data analysis techniques. In this paper we give a conceptual overview of the new software. We formulate the main goals of the software. The software should be flexible and modular to implement models and estimation techniques that currently exist or will appear in the future. On the other hand it should be reliable and possess production quality for processing standard VLBI sessions. Also, it needs to be capable of processing observations from a fully deployed network of VLBI2010 stations in a reasonable time. We describe the software development process and outline the software architecture.

  5. Motion and ranging sensor system for through-the-wall surveillance system

    NASA Astrophysics Data System (ADS)

    Black, Jeffrey D.

    2002-08-01

    A portable Through-the-Wall Surveillance System is being developed for law enforcement, counter-terrorism, and military use. The Motion and Ranging Sensor is a radar that operates in a frequency band that allows for surveillance penetration of most non-metallic walls. Changes in the sensed radar returns are analyzed to detect the human motion that would typically be present during a hostage or barricaded suspect scenario. The system consists of a Sensor Unit, a handheld Remote Display Unit, and an optional laptop computer Command Display Console. All units are battery powered and a wireless link provides command and data communication between units. The Sensor Unit is deployed close to the wall or door through which the surveillance is to occur. After deploying the sensor the operator may move freely as required by the scenario. Up to five Sensor Units may be deployed at a single location. A software upgrade to the Command Display Console is also being developed. This software upgrade will combine the motion detected by multiple Sensor Units and determine and track the location of detected motion in two dimensions.

  6. Operationalizing Cyberspace for Today’s Combat Air Force

    DTIC Science & Technology

    2010-04-01

    rootkit techniques to run inside common Windows services (sometimes bundled with fake antivirus software) or in Windows safe mode, and it can hide...has shifted to downloading other malware, with its main focus on fake alerts and rogue antivirus software. 5. TR/Dldr.Agent.JKH - Compromised U.S...patch, software update, or security breach away from failure. In short, what works today may not work tomorrow; this fact

  7. Design and deploying study of a new petal-type deployable solid surface antenna

    NASA Astrophysics Data System (ADS)

    Huang, He; Guan, Fu-Ling; Pan, Liang-Lai; Xu, Yan

    2018-07-01

    Deployable solid-surface reflectors remain one of the most important ways to achieve ultra-high-accuracy, ultra-large-aperture reflector antennas. However, limited integral stiffness has been a main problem for solid-surface reflectors in earlier research. To address this problem, a New Petal-type Deployable Solid Surface Antenna (NPDSSA) is developed in this study. Drag springs are applied as linkages between adjacent petals to improve the integral rigidity. The structural design is introduced, and the geometric parameters are analyzed to find their effects on the rotation and packaging capacities. Software simulations and laboratory model tests are conducted to verify the deploying process of the NPDSSA. Two models are employed to study the behavior of the linkage butts and drag springs. It is indicated that the NPDSSA model with linkage butts and drag springs has better overall integrity and stability during deployment. Finally, it is concluded that the NPDSSA is feasible for space applications.

  8. A Roadmap to Continuous Integration for ATLAS Software Development

    NASA Astrophysics Data System (ADS)

    Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration

    2017-10-01

    The ATLAS software infrastructure facilitates the efforts of more than 1000 developers working on a code base of 2200 packages with 4 million lines of C++ and 1.4 million lines of Python code. The ATLAS offline code management system is a powerful, flexible framework for processing new package version requests, probing code changes in the Nightly Build System, migrating to new platforms and compilers, deploying production releases for worldwide access, and supporting physicists with tools and interfaces for efficient software use. It maintains a multi-stream, parallel development environment with about 70 multi-platform branches of nightly releases and provides vast opportunities for testing new packages, for verifying patches to existing software, and for migrating to new platforms and compilers. The system's evolution is currently aimed at the adoption of modern continuous integration (CI) practices focused on building nightly releases early and often, with rigorous unit and integration testing. This paper describes the CI incorporation program for the ATLAS software infrastructure. It brings modern open source tools such as Jenkins and GitLab into the ATLAS Nightly System, rationalizes hardware resource allocation and administrative operations, and provides improved feedback and means for developers to fix broken builds promptly. Once adopted, ATLAS CI practices will improve and accelerate innovation cycles and result in increased confidence in new software deployments. The paper reports the status of Jenkins integration with the ATLAS Nightly System as well as short- and long-term plans for the incorporation of CI practices.

  9. The impact of organizational structure on flight software cost risk

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus; Lum, Karen; Monson, Erik

    2004-01-01

    This paper summarizes the final results of the follow-up study, which updates the estimated software effort growth for those projects that were still under development and adds an evaluation of roles versus observed cost risk for the missions included in the original study, expanding the data set to thirteen missions.

  10. 77 FR 46763 - Documents to Support Submission of an Electronic Common Technical Document; Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-06

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2011-N-0724... not prepared at present to accept submissions utilizing this new version because eCTD software vendors need time to update their software to accommodate this information and because its use will require...

  11. RELAP-7 Software Verification and Validation Plan - Requirements Traceability Matrix (RTM) Part 2: Code Assessment Strategy, Procedure, and RTM Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Jun Soo; Choi, Yong Joon; Smith, Curtis Lee

    2016-09-01

    This document addresses two subjects involved with the RELAP-7 Software Verification and Validation Plan (SVVP): (i) the principles and plan to assure the independence of RELAP-7 assessment through the code development process, and (ii) the work performed to establish the RELAP-7 assessment plan, i.e., the assessment strategy, literature review, and identification of RELAP-7 requirements. Then, the Requirements Traceability Matrices (RTMs) proposed in the previous document (INL-EXT-15-36684) are updated. These RTMs provide an efficient way to evaluate the RELAP-7 development status as well as the maturity of RELAP-7 assessment through the development process.

  12. Adaptive System Modeling for Spacecraft Simulation

    NASA Technical Reports Server (NTRS)

    Thomas, Justin

    2011-01-01

    This invention introduces a methodology and associated software tools for automatically learning spacecraft system models without any assumptions regarding system behavior. Data stream mining techniques were used to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). Evaluation on historical ISS telemetry data shows that adaptive system modeling reduces simulation error anywhere from 50 to 90 percent over existing approaches. The purpose of the methodology is to outline how someone can create accurate system models from sensor (telemetry) data. The purpose of the software is to support the methodology. The software provides analysis tools to design the adaptive models. The software also provides the algorithms to initially build system models and continuously update them from the latest streaming sensor data. The main strengths are as follows: Creates accurate spacecraft system models without in-depth system knowledge or any assumptions about system behavior. Automatically updates/calibrates system models using the latest streaming sensor data. Creates device specific models that capture the exact behavior of devices of the same type. Adapts to evolving systems. Can reduce computational complexity (faster simulations).
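
    A generic way to "continuously update system models from the latest streaming sensor data" is a recursive estimator with forgetting. The sketch below uses exponentially weighted recursive least squares for a per-device linear model; it is an illustrative stand-in, not the algorithm used in the NASA tool.

        import numpy as np

        class StreamingLinearModel:
            """Recursive least squares with exponential forgetting: keeps a
            per-device linear model calibrated against the latest telemetry
            (illustrative; not the NASA tool's actual algorithm)."""

            def __init__(self, n_features, forget=0.99):
                self.w = np.zeros(n_features)
                self.P = np.eye(n_features) * 1e3   # large initial uncertainty
                self.lam = forget

            def update(self, x, y):
                x = np.asarray(x, dtype=float)
                Px = self.P @ x
                k = Px / (self.lam + x @ Px)        # gain vector
                self.w += k * (y - x @ self.w)      # correct toward new sample
                self.P = (self.P - np.outer(k, Px)) / self.lam

            def predict(self, x):
                return np.asarray(x, dtype=float) @ self.w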

  13. SNAPSHOT: A MODERN, SUSTAINABLE HOLDUP MEASUREMENT SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rowe, Nathan C; Younkin, James R; Smith, Steven E

    2016-01-01

    SNAPSHOT is a software platform designed to eventually replace Holdup Measurement System 4 (HMS 4), which is the current state-of-the-art for acquisition and analysis of nondestructive assay measurement data for in situ nuclear materials, holdup, in support of criticality safety and material control and accounting. HMS 4 is over 10 years old and is currently unsustainable due to hardware and software incompatibilities that have arisen from advances in detector electronics, primarily updates to multi-channel analyzers (MCAs), and both computer and handheld operating systems. SNAPSHOT is a complete redesign of HMS 4 that addresses the issue of compatibility with modern MCAs and operating systems and that is designed with a flexible architecture to support long-term sustainability. It also provides an updated and more user friendly interface and is being developed under an NQA 1 software quality assurance (SQA) program to facilitate site acceptance for safety-related applications. This paper provides an overview of the SNAPSHOT project including details of the software development process, the SQA program, and the architecture designed to support sustainability.

  14. High Performance Computing Operations Review Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cupps, Kimberly C.

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  15. Global Precipitation Measurement (GPM) Safety Inhibit Timeline Tool

    NASA Technical Reports Server (NTRS)

    Dion, Shirley

    2012-01-01

    The Global Precipitation Measurement (GPM) Observatory is a joint mission of the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA). The NASA Goddard Space Flight Center (GSFC) has the lead management responsibility for NASA on GPM. The GPM program will measure precipitation on a global basis with sufficient quality, Earth coverage, and sampling to improve prediction of the Earth's climate, weather, and specific components of the global water cycle. As part of the development process, NASA built the spacecraft in-house at GSFC and provided one instrument (the GPM Microwave Imager (GMI), developed by Ball Aerospace). JAXA provided the launch vehicle (H2-A by MHI) and one instrument (the Dual-Frequency Precipitation Radar (DPR), developed by NTSpace). Each instrument developer provided a safety assessment, which was incorporated into the NASA GPM Safety Hazard Assessment. Inhibit design was reviewed for hazardous subsystems, which included the High Gain Antenna System (HGAS) deployment, solar array deployment, transmitter turn on, propulsion system release, GMI deployment, and DPR radar turn on. The safety inhibits for these listed hazards are controlled by software. GPM developed a "pathfinder" approach for reviewing software that controls the electrical inhibits. This is one of the first GSFC in-house programs that extensively used software controls. The GPM safety team developed a methodology to document software safety as part of the standard hazard report. As part of this process, a new tool, the "safety inhibit timeline," was created for managing inhibits and their controls during spacecraft buildup and testing during I&T at GSFC and at the range in Japan. In addition to supporting understanding of inhibits and controls during I&T, the tool allows the safety analyst to better communicate to others the changes in inhibit states with each phase of hardware and software testing. The tool was very useful for communicating compliance with safety requirements, especially when working with a foreign partner.

  16. Enhanced In-Pile Instrumentation at the Advanced Test Reactor

    NASA Astrophysics Data System (ADS)

    Rempe, Joy L.; Knudson, Darrell L.; Daw, Joshua E.; Unruh, Troy; Chase, Benjamin M.; Palmer, Joe; Condie, Keith G.; Davis, Kurt L.

    2012-08-01

    Many of the sensors deployed at materials and test reactors cannot withstand the high flux/high temperature test conditions often requested by users at U.S. test reactors, such as the Advanced Test Reactor (ATR) at the Idaho National Laboratory. To address this issue, an instrumentation development effort was initiated as part of the ATR National Scientific User Facility in 2007 to support the development and deployment of enhanced in-pile sensors. This paper provides an update on this effort. Specifically, this paper identifies the types of sensors currently available to support in-pile irradiations and those sensors currently available to ATR users. Accomplishments from new sensor technology deployment efforts are highlighted by describing new temperature and thermal conductivity sensors now available to ATR users. Efforts to deploy enhanced in-pile sensors for detecting elongation and real-time flux detectors are also reported, and recently-initiated research to evaluate the viability of advanced technologies to provide enhanced accuracy for measuring key parameters during irradiation testing are noted.

  17. ToxPi Graphical User Interface 2.0: Dynamic exploration, visualization, and sharing of integrated data models.

    PubMed

    Marvel, Skylar W; To, Kimberly; Grimm, Fabian A; Wright, Fred A; Rusyn, Ivan; Reif, David M

    2018-03-05

    Drawing integrated conclusions from diverse source data requires synthesis across multiple types of information. The ToxPi (Toxicological Prioritization Index) is an analytical framework that was developed to enable integration of multiple sources of evidence by transforming data into integrated, visual profiles. Methodological improvements have advanced ToxPi and expanded its applicability, necessitating a new, consolidated software platform to provide functionality, while preserving flexibility for future updates. We detail the implementation of a new graphical user interface for ToxPi (Toxicological Prioritization Index) that provides interactive visualization, analysis, reporting, and portability. The interface is deployed as a stand-alone, platform-independent Java application, with a modular design to accommodate inclusion of future analytics. The new ToxPi interface introduces several features, from flexible data import formats (including legacy formats that permit backward compatibility) to similarity-based clustering to options for high-resolution graphical output. We present the new ToxPi interface for dynamic exploration, visualization, and sharing of integrated data models. The ToxPi interface is freely available as a single compressed download that includes the main Java executable, all libraries, example data files, and a complete user manual from http://toxpi.org.
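
    In simplified form, a ToxPi-style profile combines unit-scaled slice values with user-chosen weights. The sketch below shows that reduced scoring step only; it is not the full published ToxPi formulation or the GUI's analytics.

        import numpy as np

        def toxpi_scores(data, weights):
            """data: (n_chemicals, n_slices) raw values, higher = more concern.
            Each slice is min-max scaled to [0, 1], then combined as a
            weighted average (a simplified sketch of the ToxPi score)."""
            data = np.asarray(data, dtype=float)
            lo, hi = data.min(axis=0), data.max(axis=0)
            span = np.where(hi > lo, hi - lo, 1.0)   # avoid divide-by-zero
            scaled = (data - lo) / span
            w = np.asarray(weights, dtype=float)
            return scaled @ (w / w.sum())

        print(toxpi_scores([[1.0, 10.0], [2.0, 30.0], [3.0, 20.0]], [2, 1]))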

  18. TreeVector: scalable, interactive, phylogenetic trees for the web.

    PubMed

    Pethica, Ralph; Barker, Gary; Kovacs, Tim; Gough, Julian

    2010-01-28

    Phylogenetic trees are complex data forms that need to be graphically displayed to be human-readable. Traditional techniques of plotting phylogenetic trees focus on rendering a single static image, but increases in the production of biological data and large-scale analyses demand scalable, browsable, and interactive trees. We introduce TreeVector, a Scalable Vector Graphics- and Java-based method that allows trees to be integrated and viewed seamlessly in standard web browsers with no extra software required, and can be modified and linked using standard web technologies. There are now many bioinformatics servers and databases with a range of dynamic processes and updates to cope with the increasing volume of data. TreeVector is designed as a framework to integrate with these processes and produce user-customized phylogenies automatically. We also address the strengths of phylogenetic trees as part of a linked-in browsing process rather than an end graphic for print. TreeVector is fast and easy to use and is available to download precompiled, but is also open source. It can also be run from the web server listed below or the user's own web server. It has already been deployed on two recognized and widely used database Web sites.

  19. Software Engineering Improvement Plan

    NASA Technical Reports Server (NTRS)

    2006-01-01

    In performance of this task order, bd Systems personnel provided support to the Flight Software Branch and the Software Working Group through multiple tasks related to software engineering improvement and to activities of the independent Technical Authority (iTA) Discipline Technical Warrant Holder (DTWH) for software engineering. To ensure that the products, comments, and recommendations complied with customer requirements and the statement of work, bd Systems personnel maintained close coordination with the customer. These personnel performed work in areas such as update of agency requirements and directives database, software effort estimation, software problem reports, a web-based process asset library, miscellaneous documentation review, software system requirements, issue tracking software survey, systems engineering NPR, and project-related reviews. This report contains a summary of the work performed and the accomplishments in each of these areas.

  20. Software testing

    NASA Astrophysics Data System (ADS)

    Price-Whelan, Adrian M.

    2016-01-01

    Now more than ever, scientific results are dependent on sophisticated software and analysis. Why should we trust code written by others? How do you ensure your own code produces sensible results? How do you make sure it continues to do so as you update, modify, and add functionality? Software testing is an integral part of code validation and writing tests should be a requirement for any software project. I will talk about Python-based tools that make managing and running tests much easier and explore some statistics for projects hosted on GitHub that contain tests.

  1. Analysis of Single-Frequency GNSS Data for Determination of Time-Dependent Flow and Deformation of Fast-Moving Glaciers

    NASA Astrophysics Data System (ADS)

    Davis, J. L.; Elosegui, P.; Nettles, M.

    2012-12-01

    Single-frequency GNSS data has not generally been used for high-accuracy geodetic applications since the 1990s, but there are significant advantages if single-frequency GNSS receivers can be usefully deployed for studies of fast-moving outlet glaciers. The cost for these receivers is significantly lower (~50%) than for dual-frequency receivers, a significant benefit given the high spatial density at which these systems are deployed on the glacier and the high risk for damage or loss in the glacial environment. In addition, the size of the data files that need to be transferred from extremely remote locations, often at very slow transmission rates, is significantly reduced. Consideration of single-frequency systems for this application is viable because of the relatively small extent (< 50 km) of the entire network to be deployed. Unfortunately, the availability of research-quality software that can perform kinematic solutions on single-frequency data is limited. We have developed the BAKAR software employing a stochastic filter to analyze single-frequency GNSS data. The software can implement a range of stochastic models for time-dependent site position. In this presentation, we describe the BAKAR software and discuss its strengths and weaknesses. On one hand, chief among the challenges we have encountered are determination of accurate prior positions, and bursts of polar ionospheric activity that impede cycle-slip detection, even over intersite distances as short as 10 km. On the other hand, use of a single-frequency observable is theoretically less sensitive to multipath and signal scattering. We will quantitatively assess these effects, and assess the accuracy of BAKAR in a range of situations and applications.
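
    A minimal example of the kind of stochastic model such a filter can carry is a constant-velocity Kalman filter over one site coordinate. The model and noise parameters below are generic illustrations, not BAKAR's actual formulation.

        import numpy as np

        def kalman_cv(positions, dt, sigma_obs=0.01, sigma_acc=1e-6):
            """Constant-velocity Kalman filter for one site coordinate (m).
            A generic sketch of a kinematic position filter; all parameters
            are illustrative."""
            F = np.array([[1.0, dt], [0.0, 1.0]])              # state transition
            Q = sigma_acc**2 * np.array([[dt**4 / 4, dt**3 / 2],
                                         [dt**3 / 2, dt**2]])  # process noise
            H = np.array([[1.0, 0.0]])                         # observe position only
            R = np.array([[sigma_obs**2]])
            x, P, out = np.array([positions[0], 0.0]), np.eye(2), []
            for z in positions:
                x, P = F @ x, F @ P @ F.T + Q                  # predict
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
                x = x + K @ (np.atleast_1d(z) - H @ x)         # update state
                P = (np.eye(2) - K @ H) @ P
                out.append(x.copy())
            return np.array(out)

        est = kalman_cv([0.0, 0.011, 0.019, 0.032], dt=30.0)   # position, velocity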

  2. Effects of Deployment Investment on the Growth of the Biofuels Industry. 2016 Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vimmerstedt, Laura J.; Warner, Ethan S.; Stright, Dana

    This report updates the 2013 report of the same title. Some text originally published in that report is retained and indicated in gray. In support of the national goals for biofuel use in the United States, numerous technologies have been developed that convert biomass to biofuels. Some of these biomass to biofuel conversion technology pathways are operating at commercial scales, while others are in earlier stages of development. The advancement of a new pathway toward commercialization involves various types of progress, including yield improvements, process engineering, and financial performance. Actions of private investors and public programs can accelerate the demonstration and deployment of new conversion technology pathways. These investors (both private and public) will pursue a range of pilot, demonstration, and pioneer scale biorefinery investments; the most cost-effective set of investments for advancing the maturity of any given biomass to biofuel conversion technology pathway is unknown. In some cases, whether or not the pathway itself will ultimately be technically and financially successful is also unknown. This report presents results from the Biomass Scenario Model--a system dynamics model of the biomass to biofuels system--that estimate effects of investments in biorefineries at different maturity levels and operational scales. The report discusses challenges in estimating effects of such investments and explores the interaction between this deployment investment and a volumetric production incentive. Model results show that investments in demonstration and deployment have a substantial growth impact on the development of the biofuels industry. Results also show that other conditions, such as accompanying incentives, have major impacts on the effectiveness of such investments. Results from the 2013 report are compared to new results. This report does not advocate for or against investments, incentives, or policies, but analyzes simulations of their hypothetical effects.

  3. Security and Privacy Assurance Research (SPAR) Pilot Final Report

    DTIC Science & Technology

    2015-11-30

    for a single querier interacting with a single encrypted database. In order to deploy the technology, the underlying cryptography must support multiple...underlying cryptography. A full SPAR system should be evaluated too, including the software itself. Software should be checked for consistency with...ESPADA included cryptography libraries (e.g., gnutls, nettle, and openssl). Consider a hypothetical scenario in which a vulnerability is discovered in

  4. Burn Resuscitation Decision Support System (BRDSS)

    DTIC Science & Technology

    2013-09-01

    effective for burn care in the deployed and en route care settings. In this period, we completed Human Factors studies, hardware testing, software design ... designated U.S. Army Institute of Surgical Research (USAISR) clinical team. Phase 1 System Requirements and Software Development Arcos will draft a...airworthiness testing. The hardware finalists will be sent to U.S. Army Aeromedical Research Laboratory (USAARL) for critical airworthiness testing. Phase

  5. Computational methods and software systems for dynamics and control of large space structures

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Felippa, C. A.; Farhat, C.; Pramono, E.

    1990-01-01

    Two key areas of crucial importance to the computer-based simulation of large space structures are discussed. The first area involves multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second area deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area involves massively parallel computers.

  6. Department of Defense High Performance Computing Modernization Program. 2006 Annual Report

    DTIC Science & Technology

    2007-03-01

    Department. We successfully completed several software development projects that introduced parallel, scalable production software now in use across the...imagined. They are developing and deploying weather and ocean models that allow our soldiers, sailors, marines and airmen to plan missions more effectively...and to navigate adverse environments safely. They are modeling molecular interactions leading to the development of higher energy fuels, munitions

  7. The Deployment of Routing Protocols in Distributed Control Plane of SDN

    PubMed Central

    Jingjing, Zhou; Di, Cheng; Weiming, Wang; Rong, Jin; Xiaochun, Wu

    2014-01-01

    Software-defined networking (SDN) provides a programmable network by decoupling the data plane, control plane, and application plane from the original closed system, thus revolutionizing the existing network architecture to improve performance and scalability. In this paper, we study the distributed characteristics of the Kandoo architecture and improve and optimize Kandoo's two levels of controllers, drawing on ideas from RCP (routing control platform). Finally, we analyze the deployment strategies of the BGP and OSPF protocols in a distributed control plane of SDN. The simulation results show that our deployment strategies are superior to traditional routing strategies. PMID:25250395

  8. An updated system for guidance of heterogeneous platforms used for multiple gliders in a real-time experiment

    NASA Astrophysics Data System (ADS)

    Smedstad, L.; Barron, C. N.; Book, J. W.; Osborne, J. J.; Souopgui, I.; Rice, A. E.; Linzell, R. S.

    2017-12-01

    The Guidance of Heterogeneous Observation Systems (GHOST) is a tool designed to sample ocean model outputs to determine a suite of possible path options for unmanned platforms. The system is built around a Runge-Kutta method that generates all possible paths, followed by a cost function calculation, enforcement of a safe operating area, and an analysis that identifies and ranks the paths in the top 10% of the cost function. A field experiment took place from 16 May until 5 June 2017 aboard the R/V Savannah operating out of the Duke University Marine Laboratory (DUML) in Beaufort, NC. Gliders were deployed in alternating groups with missions defined by one of two possible categories: a station-keeping array and a moving array. Unlike previous versions of the software, which monitored platforms individually, these gliders were placed in groups of 2-5 gliders with the same tasks. Daily runs of the GHOST software were performed for each mission category and for two different 1 km orientations of the Navy Coastal Ocean Model (NCOM). By limiting the number of trial solutions and by sorting through the best results, a quick turnaround was made possible for glider operators to determine waypoints in order to remain in desired areas or to move in paths that sampled areas of highest thermohaline variability. Limiting risk by restricting solutions to defined areas with statistically less likely occurrences of high ocean currents was an important consideration in this study area, which was located just inshore of the Gulf Stream.
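
    The pipeline, Runge-Kutta path propagation, a cost function, a safe-area constraint, and top-10% ranking, can be sketched schematically. Everything below (the current field, speeds, and cost) is synthetic and hypothetical, standing in for NCOM fields and GHOST's actual cost functions.

        import numpy as np

        def current(p):
            """Synthetic current field (m/s) standing in for model output."""
            return np.array([0.2 * np.sin(p[1] / 5e3), 0.1])

        def rk4_step(p, heading, speed, dt):
            """One RK4 step of position under glider speed plus advection."""
            u = speed * np.array([np.cos(heading), np.sin(heading)])
            f = lambda q: u + current(q)
            k1 = f(p); k2 = f(p + 0.5 * dt * k1)
            k3 = f(p + 0.5 * dt * k2); k4 = f(p + dt * k3)
            return p + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

        def rank_paths(start, target, safe, speed=0.3, dt=3600.0, steps=24):
            """Propagate candidates (each holds a fixed heading, a
            simplification), drop unsafe ones, keep the best 10% by cost."""
            results = []
            for heading in np.linspace(0, 2 * np.pi, 72, endpoint=False):
                p, ok = np.array(start, dtype=float), True
                for _ in range(steps):
                    p = rk4_step(p, heading, speed, dt)
                    if not safe(p):
                        ok = False
                        break
                if ok:
                    results.append((np.linalg.norm(p - target), heading))
            results.sort()                        # lowest cost first
            return results[:max(1, len(results) // 10)]

        best = rank_paths((0.0, 0.0), np.array([15e3, 5e3]),
                          safe=lambda p: abs(p[1]) < 20e3)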

  9. Emergency deployment of genetically engineered veterinary vaccines in Europe.

    PubMed

    Ramezanpour, Bahar; de Foucauld, Jean; Kortekaas, Jeroen

    2016-06-24

    On the 9th of November 2015, preceding the World Veterinary Vaccine Congress, a workshop was held to discuss how veterinary vaccines can be deployed more rapidly to appropriately respond to future epizootics in Europe. Considering their potential and unprecedented suitability for surge production, the workshop focussed on vaccines based on genetically engineered viruses and replicon particles. The workshop was attended by academics and representatives from leading pharmaceutical companies, regulatory experts, the European Medicines Agency and the European Commission. We here outline the present regulatory pathways for genetically engineered vaccines in Europe and describe the incentive for the organization of the pre-congress workshop. The participants agreed that existing European regulations on the deliberate release of genetically engineered vaccines into the environment should be updated to facilitate quick deployment of these vaccines in emergency situations. Copyright © 2016.

  10. The role of the ADS in software discovery and citation

    NASA Astrophysics Data System (ADS)

    Accomazzi, Alberto

    2018-01-01

    As the primary index of scholarly content in astronomy and physics, the NASA Astrophysics Data System (ADS) is collaborating with the AAS journals and the Zenodo repository in an effort to promote the preservation of scientific software used in astronomy research and its citation in scholarly publications. In this talk I will discuss how ADS is updating its service infrastructure to allow for the publication, indexing, and citation of software records in scientific articles.

  11. UTM TCL 2.0 Software Version Description (SVD) Document

    NASA Technical Reports Server (NTRS)

    Mcguirk, Patrick

    2017-01-01

    This is the Unmanned Aircraft Systems (UAS) Traffic Management (UTM) Technical Capability Level (TCL) 2.0 Software Version Description (SVD) document. This UTM TCL 2.0 SVD describes the following four topics: 1. Software Release Contents: A listing of the files comprising this release 2. Installation Instructions: How to install the release and get it running 3. Changes Since Previous Release: General updates since the previous UTM release 4. Known Issues: Known issues and limitations in this release

  12. Revolution Now 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul Donohoo-Vallett

    Revolution Now is an annually updated report produced by the Energy Department’s Office of Energy Efficiency and Renewable Energy that documents the accelerated deployment of five clean energy technologies thriving in the U.S. market – wind turbines, solar technologies for both utility-scale and distributed photovoltaic (PV), electric vehicles (EVs) and light-emitting diodes (LEDs).

  13. The Generalized Support Software (GSS) Domain Engineering Process: An Object-Oriented Implementation and Reuse Success at Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Condon, Steven; Hendrick, Robert; Stark, Michael E.; Steger, Warren

    1997-01-01

    The Flight Dynamics Division (FDD) of NASA's Goddard Space Flight Center (GSFC) recently embarked on a far-reaching revision of its process for developing and maintaining satellite support software. The new process relies on an object-oriented software development method supported by a domain specific library of generalized components. This Generalized Support Software (GSS) Domain Engineering Process is currently in use at the NASA GSFC Software Engineering Laboratory (SEL). The key facets of the GSS process are (1) an architecture for rapid deployment of FDD applications, (2) a reuse asset library for FDD classes, and (3) a paradigm shift from developing software to configuring software for mission support. This paper describes the GSS architecture and process, results of fielding the first applications, lessons learned, and future directions.

  14. Lessons Learned from Deploying an Analytical Task Management Database

    NASA Technical Reports Server (NTRS)

    O'Neil, Daniel A.; Welch, Clara; Arceneaux, Joshua; Bulgatz, Dennis; Hunt, Mitch; Young, Stephen

    2007-01-01

    Defining requirements, missions, technologies, and concepts for space exploration involves multiple levels of organizations, teams of people with complementary skills, and analytical models and simulations. Analytical activities range from filling a To-Be-Determined (TBD) in a requirement to creating animations and simulations of exploration missions. In a program as large as returning to the Moon, there are hundreds of simultaneous analysis activities. A way to manage and integrate efforts of this magnitude is to deploy a centralized database that provides the capability to define tasks, identify resources, describe products, schedule deliveries, and generate a variety of reports. This paper describes a web-accessible task management system and explains the lessons learned during the development and deployment of the database. Through the database, managers and team leaders can define tasks, establish review schedules, assign teams, link tasks to specific requirements, identify products, and link the task data records to external repositories that contain the products. Data filters and spreadsheet export utilities provide a powerful capability to create custom reports. Import utilities provide a means to populate the database from previously filled form files. Within a four month period, a small team analyzed requirements, developed a prototype, conducted multiple system demonstrations, and deployed a working system supporting hundreds of users across the aerospace community. Open-source technologies and agile software development techniques, applied by a skilled team, enabled this impressive achievement. Topics in the paper cover the web application technologies, agile software development, an overview of the system's functions and features, dealing with increasing scope, and deploying new versions of the system.
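
    The paper does not publish its schema, but the structure it describes (tasks linked to requirements, teams, delivery dates, and products held in external repositories) suggests something like the minimal sqlite3 sketch below; all table names, columns, and sample values are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE task (
    id          INTEGER PRIMARY KEY,
    title       TEXT NOT NULL,
    team        TEXT,
    requirement TEXT,           -- link to the requirement the task fills
    due_date    TEXT            -- ISO date of the scheduled delivery
);
CREATE TABLE product (
    id        INTEGER PRIMARY KEY,
    task_id   INTEGER REFERENCES task(id),
    name      TEXT NOT NULL,
    repo_url  TEXT              -- external repository holding the product
);
""")
# Sample rows (requirement ID, names, and URL are invented).
conn.execute("INSERT INTO task (title, team, requirement, due_date) "
             "VALUES ('Fill TBD in thermal requirement', 'Thermal', 'REQ-042', '2007-06-01')")
conn.execute("INSERT INTO product (task_id, name, repo_url) "
             "VALUES (1, 'Thermal margin memo', 'https://example.org/repo/memo-1')")

# A "data filter" style report: deliveries ordered by due date, by team.
for row in conn.execute("SELECT team, title, due_date FROM task ORDER BY due_date"):
    print(row)
```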

  15. Data update in a land information network

    NASA Astrophysics Data System (ADS)

    Mullin, Robin C.

    1988-01-01

    The ongoing update of data exchanged in a land information network is examined. In the past, major developments have been undertaken to enable the exchange of data between land information systems. A model of a land information network and the data update process have been developed. Based on these, a functional description of the database and software to perform data updating is presented. A prototype of the data update process was implemented using the ARC/INFO geographic information system. This was used to test four approaches to data updating: bulk, block, incremental, and alert updates. A bulk update is performed by replacing a complete file with an updated file. A block update requires that the data set be partitioned into blocks; when an update occurs, only the affected blocks need to be transferred. An incremental update approach records each feature that is added or deleted and transmits only the features needed to update the copy of the file. An alert is a marker indicating that an update has occurred; it can be placed in a file to warn users that updated data is available if they are active in an area containing markers. The four approaches have been tested using a cadastral data set.
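
    The four approaches can be contrasted in a small sketch, assuming a data set is simply a mapping from feature IDs to geometries; the representations below are illustrative, not ARC/INFO structures.

```python
# Minimal sketch of the four update approaches, assuming a data set is a
# dict of feature_id -> geometry and a network node holds a local copy.

def bulk_update(local, remote):
    """Bulk: replace the complete file with the updated file."""
    return dict(remote)

def block_update(local, remote_blocks, changed_block_ids):
    """Block: transfer only the partitioned blocks that were affected."""
    for bid in changed_block_ids:
        local[bid] = dict(remote_blocks[bid])
    return local

def incremental_update(local, change_log):
    """Incremental: apply only the recorded feature additions/deletions."""
    for op, fid, geom in change_log:
        if op == "add":
            local[fid] = geom
        elif op == "delete":
            local.pop(fid, None)
    return local

def alert_check(markers, area_of_interest):
    """Alert: warn the user if update markers fall in their working area."""
    return [m for m in markers if m in area_of_interest]

local = {"parcel-1": "poly-A", "parcel-2": "poly-B"}
log = [("delete", "parcel-2", None), ("add", "parcel-3", "poly-C")]
print(incremental_update(local, log))   # {'parcel-1': 'poly-A', 'parcel-3': 'poly-C'}
```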

  16. Information architecture for a planetary 'exploration web'

    NASA Technical Reports Server (NTRS)

    Lamarra, N.; McVittie, T.

    2002-01-01

    'Web services' is a common way of deploying distributed applications whose software components and data sources may be in different locations, formats, languages, etc. Although such collaboration is not utilized significantly in planetary exploration, we believe there is significant benefit in developing an architecture in which missions could leverage each other's capabilities. We believe that an incremental deployment of such an architecture could significantly contribute to the evolution of increasingly capable, efficient, and even autonomous remote exploration.

  17. Optimizing Automatic Deployment Using Non-functional Requirement Annotations

    NASA Astrophysics Data System (ADS)

    Kugele, Stefan; Haberl, Wolfgang; Tautschnig, Michael; Wechs, Martin

    Model-driven development has become common practice in the design of safety-critical real-time systems. High-level modeling constructs help to reduce the overall system complexity apparent to developers. This abstraction helps reduce implementation errors in the resulting systems. In order to retain correctness of the model down to the software executed on a concrete platform, human faults during implementation must be avoided. This calls for an automatic, unattended deployment process including allocation, scheduling, and platform configuration.
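
    The allocation step alone might be sketched as a greedy first-fit over annotated resource demands, as below; the process described in the paper also covers scheduling and platform configuration, and every task, node, and budget value here is invented.

```python
# Greedy first-fit allocation of software tasks to ECUs under a memory budget.
# Each task carries a non-functional annotation (memory demand, in KB).

tasks = {"brake_ctrl": 120, "logger": 300, "sensor_fusion": 220, "watchdog": 40}
nodes = {"ecu1": 400, "ecu2": 400}   # capacities in KB (illustrative)

allocation = {}
free = dict(nodes)
# Place the most demanding tasks first to reduce fragmentation.
for task, demand in sorted(tasks.items(), key=lambda kv: -kv[1]):
    target = next((n for n, cap in free.items() if cap >= demand), None)
    if target is None:
        raise RuntimeError(f"no feasible node for {task}")
    allocation[task] = target
    free[target] -= demand

print(allocation)   # e.g. {'logger': 'ecu1', 'sensor_fusion': 'ecu2', ...}
```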

  18. Evaluating the Measure of Effectiveness of Using a Deployed Command and Control System on Land Battlefield

    DTIC Science & Technology

    2015-09-01

    Acronyms: SOA, Service-Oriented Architecture; SOTM, Satellite Communications-on-the-Move; SoS, System of Systems; SwCIs, Software Criticality Indices; TPM, Technical... into the C2 system. To manage stakeholders' expectations, there is a need to evaluate the effectiveness of the deployed C2 system having implemented...the C2 system. However, there is a need to recognize the limitations and constraints on the land battlefield to implement these requirements. There

  19. Genome re-annotation: a wiki solution?

    PubMed Central

    Salzberg, Steven L

    2007-01-01

    The annotation of most genomes becomes outdated over time, owing in part to our ever-improving knowledge of genomes and in part to improvements in bioinformatics software. Unfortunately, annotation is rarely if ever updated and resources to support routine reannotation are scarce. Wiki software, which would allow many scientists to edit each genome's annotation, offers one possible solution. PMID:17274839

  20. Computer Program Re-layers Engineering Drawings

    NASA Technical Reports Server (NTRS)

    Crosby, Dewey C., III

    1990-01-01

    RULCHK computer program aids in structuring layers of information pertaining to part or assembly designed with software described in article "Software for Drawing Design Details Concurrently" (MFS-28444). Checks and optionally updates structure of layers for part. Enables designer to construct model and annotate its documentation without burden of manually layering part to conform to standards at design time.

  1. Real-Time GPS Monitoring for Earthquake Rapid Assessment in the San Francisco Bay Area

    NASA Astrophysics Data System (ADS)

    Guillemot, C.; Langbein, J. O.; Murray, J. R.

    2012-12-01

    The U.S. Geological Survey Earthquake Science Center has deployed a network of eight real-time Global Positioning System (GPS) stations in the San Francisco Bay area and is implementing software applications to continuously evaluate the status of the deformation within the network. Real-time monitoring of the station positions is expected to provide valuable information for rapidly estimating source parameters should a large earthquake occur in the San Francisco Bay area. Because earthquake response applications require robust data access, as a first step we have developed a suite of web-based applications which are now routinely used to monitor the network's operational status and data streaming performance. The web tools provide continuously updated displays of important telemetry parameters such as data latency and receive rates, as well as source voltage and temperature information within each instrument enclosure. Automated software on the backend uses the streaming performance data to mitigate the impact of outages, radio interference and bandwidth congestion on deformation monitoring operations. A separate set of software applications manages the recovery of lost data due to faulty communication links. Displacement estimates are computed in real-time for various combinations of USGS, Plate Boundary Observatory (PBO) and Bay Area Regional Deformation (BARD) network stations. We are currently comparing results from two software packages (one commercial and one open-source) used to process 1-Hz data on the fly and produce estimates of differential positions. The continuous monitoring of telemetry makes it possible to tune the network to minimize the impact of transient interruptions of the data flow, from one or more stations, on the estimated positions. Ongoing work is focused on using data streaming performance history to optimize the quality of the position, reduce drift and outliers by switching to the best set of stations within the network, and automatically select the "next best" station to use as reference. We are also working towards minimizing the loss of streamed data during concurrent data downloads by improving file management on the GPS receivers.
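
    The "next best" reference-station selection could be sketched as a simple scoring over streaming-performance history, as below; the station names, metrics, and weights are illustrative assumptions, not the USGS implementation.

```python
# Rank stations by recent streaming performance and pick the reference plus a
# fallback. Fields and weights are invented; the real system tracks telemetry
# such as data latency, receive rate, and outage history per station.

stations = {
    "STN_A": {"latency_s": 1.2, "receive_rate": 0.998, "outages_24h": 0},
    "STN_B": {"latency_s": 0.8, "receive_rate": 0.950, "outages_24h": 3},
    "STN_C": {"latency_s": 2.5, "receive_rate": 0.999, "outages_24h": 1},
}

def score(perf):
    """Lower is better: penalize latency, data loss, and recent outages."""
    return perf["latency_s"] + 100 * (1 - perf["receive_rate"]) + 2 * perf["outages_24h"]

ranked = sorted(stations, key=lambda s: score(stations[s]))
reference, next_best = ranked[0], ranked[1]
print(f"reference={reference}, fallback={next_best}")
```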

  2. Bio-Docklets: virtualization containers for single-step execution of NGS pipelines.

    PubMed

    Kim, Baekdoo; Ali, Thahmina; Lijeron, Carlos; Afgan, Enis; Krampis, Konstantinos

    2017-08-01

    Processing of next-generation sequencing (NGS) data requires significant technical skills, involving installation, configuration, and execution of bioinformatics data pipelines, in addition to specialized postanalysis visualization and data mining software. In order to address some of these challenges, developers have leveraged virtualization containers toward seamless deployment of preconfigured bioinformatics software and pipelines on any computational platform. We present an approach for abstracting the complex data operations of multistep bioinformatics pipelines for NGS data analysis. As examples, we have deployed 2 pipelines for RNA sequencing and chromatin immunoprecipitation sequencing, preconfigured within Docker virtualization containers we call Bio-Docklets. Each Bio-Docklet exposes a single data input and output endpoint and, from a user perspective, running the pipelines is as simple as running a single bioinformatics tool. This is achieved using a "meta-script" that automatically starts the Bio-Docklets and controls the pipeline execution through the BioBlend software library and the Galaxy Application Programming Interface. The pipeline output is postprocessed by integration with the Visual Omics Explorer framework, providing interactive data visualizations that users can access through a web browser. Our goal is to enable easy access to NGS data analysis pipelines for nonbioinformatics experts on any computing environment, whether a laboratory workstation, university computer cluster, or a cloud service provider. Beyond end users, Bio-Docklets also enable developers to programmatically deploy and run a large number of pipeline instances for concurrent analysis of multiple datasets. © The Authors 2017. Published by Oxford University Press.
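
    The meta-script pattern (drive the preconfigured Galaxy instance inside the container through the BioBlend library and the Galaxy API) might look roughly like the sketch below; the URL, API key, workflow name, and input mapping are placeholders, and the actual Bio-Docklets meta-script also manages the container lifecycle.

```python
from bioblend.galaxy import GalaxyInstance

# Connect to the Galaxy server running inside the Bio-Docklet container
# (URL and API key are placeholders, not published Bio-Docklets values).
gi = GalaxyInstance(url="http://localhost:8080", key="YOUR_API_KEY")

# Create a history and upload a dataset to serve as the single input endpoint.
history = gi.histories.create_history(name="ngs-run")
upload = gi.tools.upload_file("reads.fastq", history["id"])
dataset_id = upload["outputs"][0]["id"]

# Find the preconfigured pipeline workflow by name (name is a placeholder)
# and invoke it on the uploaded dataset.
workflow = next(w for w in gi.workflows.get_workflows()
                if w["name"] == "rnaseq-pipeline")
inputs = {"0": {"src": "hda", "id": dataset_id}}   # map dataset to input step 0
gi.workflows.invoke_workflow(workflow["id"], inputs=inputs,
                             history_id=history["id"])
```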

  3. Bio-Docklets: virtualization containers for single-step execution of NGS pipelines

    PubMed Central

    Kim, Baekdoo; Ali, Thahmina; Lijeron, Carlos; Afgan, Enis

    2017-01-01

    Processing of next-generation sequencing (NGS) data requires significant technical skills, involving installation, configuration, and execution of bioinformatics data pipelines, in addition to specialized postanalysis visualization and data mining software. In order to address some of these challenges, developers have leveraged virtualization containers toward seamless deployment of preconfigured bioinformatics software and pipelines on any computational platform. We present an approach for abstracting the complex data operations of multistep bioinformatics pipelines for NGS data analysis. As examples, we have deployed 2 pipelines for RNA sequencing and chromatin immunoprecipitation sequencing, preconfigured within Docker virtualization containers we call Bio-Docklets. Each Bio-Docklet exposes a single data input and output endpoint and, from a user perspective, running the pipelines is as simple as running a single bioinformatics tool. This is achieved using a "meta-script" that automatically starts the Bio-Docklets and controls the pipeline execution through the BioBlend software library and the Galaxy Application Programming Interface. The pipeline output is postprocessed by integration with the Visual Omics Explorer framework, providing interactive data visualizations that users can access through a web browser. Our goal is to enable easy access to NGS data analysis pipelines for nonbioinformatics experts on any computing environment, whether a laboratory workstation, university computer cluster, or a cloud service provider. Beyond end users, Bio-Docklets also enable developers to programmatically deploy and run a large number of pipeline instances for concurrent analysis of multiple datasets. PMID:28854616

  4. Software Engineering Research/Developer Collaborations in 2004 (C104)

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Markosian, Lawrance

    2005-01-01

    In 2004, six collaborations between software engineering technology providers and NASA software development personnel deployed a total of five software engineering technologies (for references, see Section 7.2) on the NASA projects. The main purposes were to benefit the projects, infuse the technologies if beneficial into NASA, and give feedback to the technology providers to improve the technologies. Each collaboration project produced a final report (for references, see Section 7.1). Section 2 of this report summarizes each project, drawing from the final reports and communications with the software developers and technology providers. Section 3 indicates paths to further infusion of the technologies into NASA practice. Section 4 summarizes some technology transfer lessons learned. Section 6 lists the acronyms used in this report.

  5. Rapid Deployment of Optimal Control for Building HVAC Systems Using Innovative Software Tools and a Hybrid Heuristic/Model Based Control Approach

    DTIC Science & Technology

    2017-03-21

    ESTCP project EW-201409 aimed at demonstrating the benefits of innovative software technology for building HVAC systems. These benefits included reduced system energy use and cost as well as improved...

  6. Computational Methods for Identification, Optimization and Control of PDE Systems

    DTIC Science & Technology

    2010-04-30

    focused on the development of numerical methods and software specifically for the purpose of solving control, design, and optimization problems where...that provide the foundations of simulation software must play an important role in any research of this type, the demands placed on numerical methods...y sus Aplicaciones, Ciudad de Córdoba - Argentina, October 2007. 3. Inverse Problems in Deployable Space Structures, Fourth Conference on Inverse

  7. Joint Information Environment: DOD Needs to Strengthen Governance and Management

    DTIC Science & Technology

    2016-07-01

    provide fast and secure connections to any application or service from any authorized network at any time. Software application rationalization and...deployment at all sites. DOD further defines an automated information system as a system of computer hardware, computer software, data or telecommunications...Why GAO Did This Study: For fiscal year 2017, DOD plans to spend more than $38 billion on information technology to support thousands of networks and

  8. Management and Stewardship of Airborne Observational Data for the NSF/NCAR HIAPER (GV) and NSF/NCAR C-130 at the National Center for Atmospheric Research (NCAR) Earth Observing Laboratory (EOL)

    NASA Astrophysics Data System (ADS)

    Aquino, J.

    2014-12-01

    The National Science Foundation (NSF) provides the National Center for Atmospheric Research (NCAR) Earth Observing Laboratory (EOL) funding for the operation, maintenance and upgrade of two research aircraft: the NSF/NCAR High-performance Instrumented Airborne Platform for Environmental Research (HIAPER) Gulfstream V and the NSF/NCAR Hercules C-130. A suite of in-situ and remote sensing airborne instruments housed at the EOL Research Aviation Facility (RAF) provide a basic set of measurements that are typically deployed on most airborne field campaigns. In addition, instruments to address more specific research requirements are provided by collaborating participants from universities, industry, NASA, NOAA or other agencies. The data collected are an important legacy of these field campaigns. A comprehensive metadata database and integrated cyber-infrastructure, along with a robust data workflow that begins during the field phase and extends to long-term archival (current aircraft data holdings go back to 1967), assure that: all data and associated software are safeguarded throughout the data handling process; community standards of practice for data stewardship and software version control are followed; simple and timely community access to collected data and associated software tools is provided; and the quality of the collected data is preserved, with the ultimate goal of supporting research and the reproducibility of published results. The components of this data system to be presented include: robust, searchable web access to data holdings; reliable, redundant data storage; web-based tools and scripts for efficient creation, maintenance and update of data holdings; access to supplemental data and documentation; storage of data in standardized data formats; comprehensive metadata collection; mature version control; human-discernable storage practices; and procedures to inform users of changes. In addition, lessons learned, shortcomings, and desired upgrades will be discussed.

  9. Crustaceans from a tropical estuarine sand-mud flat, Pacific, Costa Rica, (1984-1988) revisited.

    PubMed

    Vargas-Zamora, José A; Sibaja-Cordero, Jeffrey A; Vargas-Castillo, Rita

    2012-12-01

    The availability of data sets covering periods of more than a year is scarce for tropical environments. Advances in hardware and software speed up the re-analysis of old data sets and facilitate the description of population oscillations. Using recent taxonomic literature and software, we have updated and re-analyzed the information on crustacean diversity and population fluctuations from a set of cores collected at a mud-sand flat in the mid upper Gulf of Nicoya estuary, Pacific coast of Costa Rica (1984-1988). A total of 112 morphological species of macroinvertebrates was found, of which 29 were crustaceans. Taxonomic problems, mainly with the peracarids, prevented the identification of a group of species. The abundance patterns of the crab Pinnixa valerii, the ostracod Cyprideis pacifica, and the cumacean Coricuma nicoyensis were analyzed with the Generalized Additive Models of the free software R. The models evidenced a variety of population oscillations during the sampling period. These oscillations probably included perturbations induced by external factors, like the strong red tide events of 1985. In addition, early in 1984 the populations might have been at an altered state due to the impact of El Niño 1982-83. Thus, the oscillations observed during the study period departed from the expected seasonal (dry vs. rainy) pattern and are considered atypical for this tropical estuarine tidal flat. Crustacean diversity and population peaks were within the range of examples found in worldwide literature. However, abundances of the cumacean C. nicoyensis, an endemic species, are the highest reported for a tropical estuary. Comparative data on tropical tidal-flat crustaceans continue to be scarce. Crustaceans (total vs. groups) showed population changes in response to the deployment of predator exclusion cages during the dry and rainy seasons of 1985. Temporal and spatial patchiness characterized the abundances of P. valerii, C. pacifica and C. nicoyensis.

  10. The ALICE Software Release Validation cluster

    NASA Astrophysics Data System (ADS)

    Berzano, D.; Krzewicki, M.

    2015-12-01

    One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automatic tests and manual reviews, and all of them must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service: in High Energy Physics, a full simulation and reconstruction chain on a distributed computing environment, backed with a sample "golden" dataset, is also necessary for the quality sign-off. The ALICE experiment uses dedicated and virtualized computing infrastructures for the Release Validation in order not to taint the production environment (i.e. CVMFS and the Grid) with non-validated software and validation jobs: the ALICE Release Validation cluster is a disposable virtual cluster appliance based on CernVM and the Virtual Analysis Facility, capable of deploying on demand, with a single command, a dedicated virtual HTCondor cluster with an automatically scalable number of virtual workers on any cloud supporting the standard EC2 interface. Input and output data are externally stored on EOS, and a dedicated CVMFS service is used to provide the software to be validated. We will show how the Release Validation Cluster deployment and disposal are completely transparent for the Release Manager, who simply triggers the validation from the ALICE build system's web interface. CernVM 3, based entirely on CVMFS, makes it possible to boot any snapshot of the operating system in time: we will show how this allows us to certify each ALICE software release for an exact CernVM snapshot, addressing the problem of Long Term Data Preservation by ensuring a consistent environment for software execution and data reprocessing in the future.

  11. TEMPUS: Simulating personnel and tasks in a 3-D environment

    NASA Technical Reports Server (NTRS)

    Badler, N. I.; Korein, J. D.

    1985-01-01

    The latest TEMPUS installation occurred in March, 1985. Another update is slated for early June, 1985. An updated User's Manual is in preparation and will be delivered approximately mid-June, 1985. NASA JSC has full source code listings and internal documentation for installed software. NASA JSC staff has received instruction in the use of TEMPUS. Telephone consultations have augmented on-site instruction.

  12. Deploying the ODIS robot in Iraq and Afghanistan

    NASA Astrophysics Data System (ADS)

    Smuda, Bill; Schoenherr, Edward; Andrusz, Henry; Gerhart, Grant

    2005-05-01

    The wars in Iraq and Afghanistan have shown the importance of robotic technology as a force multiplier and a tool for moving soldiers out of harm's way. Situations on the ground make soldiers performing checkpoint operations easy targets for snipers and suicide bombers. Robotics technology reduces risk to soldiers and other personnel at checkpoints. Early user involvement in innovative and aggressive development and acquisition strategies is the key to moving robotic and associated technology into the hands of the user. This paper updates activity associated with the rapid development of the Omni-Directional Inspection System (ODIS) robot for under-vehicle inspection and reports on our field experience with robotics in Iraq and Afghanistan. In February of 2004, two TARDEC Engineers departed for a mission to Iraq and Afghanistan with ten ODIS Robots. Six robots were deployed in the Green Zone in Baghdad. Two robots were deployed at Kandahar Army Airfield and two were deployed at Bagram Army Airfield in Afghanistan. The TARDEC Engineers who performed this mission trained the soldiers and provided initial on-site support. They also trained Exponent employees assigned to the Rapid Equipping Force in ODIS repair. We will discuss our initial deployment, lessons learned and future plans.

  13. Fiji: an open-source platform for biological-image analysis.

    PubMed

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2012-06-28

    Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.

  14. The Research of Software Engineering Curriculum Reform

    NASA Astrophysics Data System (ADS)

    Kuang, Li-Qun; Han, Xie

    Software engineering training cannot meet the needs of the community. This paper analyzes some outstanding problems in software engineering curriculum teaching, such as outdated teaching content, weak practical training, and low teacher quality. We propose teaching reforms guided by market demand: update the teaching content, optimize the teaching methods, reform the teaching practice, strengthen teacher-student exchange, and promote the joint progress of teachers and students. We carried out the reform actively and achieved the desired results.

  15. Software Supportability Risk Assessment in OT&E (Operational Test and Evaluation): Literature Review, Current Research Review, and Data Base Assemblage.

    DTIC Science & Technology

    1984-09-28

    variables before simulation of model - Search for reality checks - Express uncertainty as a probability density distribution. ...probability that the software contains errors. This prior is updated as test failure data are accumulated. Only a p of 1 (software known to contain...discussed; both parametric and nonparametric versions are presented. It is shown by the author that the bootstrap underlies the jackknife method and

  16. NASA Tech Briefs, March 2003

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Topics covered include: Tool for Bending a Metal Tube Precisely in a Confined Space; Multiple-Use Mechanisms for Attachment to Seat Tracks; Force-Measuring Clamps; Cellular Pressure-Actuated Joint; Block QCA Fault-Tolerant Logic Gates; Hybrid VLSI/QCA Architecture for Computing FFTs; Arrays of Carbon Nanotubes as RF Filters in Waveguides; Carbon Nanotubes as Resonators for RF Spectrum Analyzers; Software for Viewing Landsat Mosaic Images; Updated Integrated Mission Program; Software for Sharing and Management of Information; Update on Integrated Optical Design Analyzer; Optical-Quality Thin Polymer Membranes; Rollable Thin Shell Composite-Material Paraboloidal Mirrors; Folded Resonant Horns for Power Ultrasonic Applications; Touchdown Ball-Bearing System for Magnetic Bearings; Flux-Based Deadbeat Control of Induction-Motor Torque; Block Copolymers as Templates for Arrays of Carbon Nanotubes; Throttling Cryogen Boiloff To Control Cryostat Temperature; Collaborative Software Development Approach Used to Deliver the New Shuttle Telemetry Ground Station; Turbulence in Supercritical O2/H2 and C7H16/N2 Mixing Layers; and Time-Resolved Measurements in Optoelectronic Microbioanal.

  17. Operational CryoSat Product Quality Assessment

    NASA Astrophysics Data System (ADS)

    Mannan, Rubinder; Webb, Erica; Hall, Amanda; Bouzinac, Catherine

    2013-12-01

    The performance and quality of the CryoSat data products are routinely assessed by the Instrument Data quality Evaluation and Analysis Service (IDEAS). This information is then conveyed to the scientific and user community in order to allow them to utilise CryoSat data with confidence. This paper presents details of the Quality Control (QC) activities performed for CryoSat products under the IDEAS contract. Details of the different QC procedures and tools deployed by IDEAS to assess the quality of operational data are presented. The latest updates to the Instrument Processing Facility (IPF) for the Fast Delivery Marine (FDM) products and the future update to Baseline-C are discussed.

  18. Phylesystem: a git-based data store for community-curated phylogenetic estimates.

    PubMed

    McTavish, Emily Jane; Hinchliff, Cody E; Allman, James F; Brown, Joseph W; Cranston, Karen A; Holder, Mark T; Rees, Jonathan A; Smith, Stephen A

    2015-09-01

    Phylogenetic estimates from published studies can be archived using general platforms like Dryad (Vision, 2010) or TreeBASE (Sanderson et al., 1994). Such services fulfill a crucial role in ensuring transparency and reproducibility in phylogenetic research. However, digital tree data files often require some editing (e.g. rerooting) to improve the accuracy and reusability of the phylogenetic statements. Furthermore, establishing the mapping between tip labels used in a tree and taxa in a single common taxonomy dramatically improves the ability of other researchers to reuse phylogenetic estimates. As the process of curating a published phylogenetic estimate is not error-free, retaining a full record of the provenance of edits to a tree is crucial for openness, allowing editors to receive credit for their work and making errors introduced during curation easier to correct. Here, we report the development of software infrastructure to support the open curation of phylogenetic data by the community of biologists. The backend of the system provides an interface for the standard database operations of creating, reading, updating and deleting records by making commits to a git repository. The record of the history of edits to a tree is preserved by git's version control features. Hosting this data store on GitHub (http://github.com/) provides open access to the data store using tools familiar to many developers. We have deployed a server running the 'phylesystem-api', which wraps the interactions with git and GitHub. The Open Tree of Life project has also developed and deployed a JavaScript application that uses the phylesystem-api and other web services to enable input and curation of published phylogenetic statements. Source code for the web service layer is available at https://github.com/OpenTreeOfLife/phylesystem-api. The data store can be cloned from: https://github.com/OpenTreeOfLife/phylesystem. A web application that uses the phylesystem web services is deployed at http://tree.opentreeoflife.org/curator. Code for that tool is available from https://github.com/OpenTreeOfLife/opentree. mtholder@gmail.com. © The Author 2015. Published by Oxford University Press.
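
    The create/read/update/delete-as-commits pattern described here can be sketched with GitPython; the file layout, study IDs, and commit messages below are illustrative, not the phylesystem-api's actual document conventions.

```python
import os
import git  # GitPython

repo_dir = "phylesystem-demo"
repo = git.Repo.init(repo_dir)                     # open or create the data store
actor = git.Actor("Demo Curator", "curator@example.org")

def write_study(study_id, content, message):
    """Create or update a record: write the file, then commit the change."""
    with open(os.path.join(repo_dir, f"{study_id}.json"), "w") as fh:
        fh.write(content)
    repo.index.add([f"{study_id}.json"])
    return repo.index.commit(message, author=actor, committer=actor)

def delete_study(study_id, message):
    """Delete a record as a commit, so it remains recoverable from history."""
    repo.index.remove([f"{study_id}.json"], working_tree=True)
    return repo.index.commit(message, author=actor, committer=actor)

write_study("ot_101", '{"tree": "((A,B),C);"}', "create study ot_101")
write_study("ot_101", '{"tree": "(A,(B,C));"}', "reroot study ot_101")

# The provenance of every edit is simply the git commit history.
for commit in repo.iter_commits(paths="ot_101.json"):
    print(commit.hexsha[:8], commit.message)
```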

  19. LEGOS: Object-based software components for mission-critical systems. Final report, June 1, 1995--December 31, 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-08-01

    An estimated 85% of the installed base of software is a custom application with a production quantity of one. In practice, almost 100% of military software systems are custom software. Paradoxically, the marginal costs of producing additional units are near zero. So why hasn't the software market, a market with high design costs and low production costs, evolved like other custom widget industries, such as automobiles and hardware chips? The military software industry seems immune to the market pressures that have motivated a multilevel supply chain structure in other widget industries: design cost recovery, improved quality through specialization, and rapid assembly from purchased components. The primary goal of the ComponentWare Consortium (CWC) technology plan was to overcome barriers to building and deploying mission-critical information systems by using verified, reusable software components (ComponentWare). The adoption of the ComponentWare infrastructure is predicated upon a critical mass of the leading platform vendors' inevitable adoption of emerging, object-based, distributed computing frameworks--initially CORBA and COM/OLE. The long-range goal of this work is to build and deploy military systems from verified reusable architectures. The promise of component-based applications is to enable developers to snap together new applications by mixing and matching prefabricated software components. A key result of this effort is the concept of reusable software architectures. A second important contribution is the notion that a software architecture is something that can be captured in a formal language and reused across multiple applications. The formalization and reuse of software architectures provide major cost and schedule improvements. The Unified Modeling Language (UML) is fast becoming the industry standard for object-oriented analysis and design notation for object-based systems. However, the lack of a standard real-time distributed object operating system, of a standard Computer-Aided Software Environment (CASE) tool notation, and of a standard CASE tool repository has limited the realization of component software. The approach to fulfilling this need is the software component factory innovation. The factory approach takes advantage of emerging standards such as UML, CORBA, Java and the Internet. The key technical innovation of the software component factory is the ability to assemble and test new system configurations as well as assemble new tools on demand from existing tools and architecture design repositories.

  20. A Methodology for Cybercraft Requirement Definition and Initial System Design

    DTIC Science & Technology

    2008-06-01

    the software development concepts of the SDLC: requirements, use cases, and domain modeling. It ...collectively as Software Development Life Cycle (SDLC) models. While there are numerous models that fit under the SDLC definition, all are based on... developed that provided expanded understanding of the domain, it is necessary to either update an existing domain model or create another domain

  1. Rad Toolbox User's Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eckerman, Keith F.; Sjoreen, Andrea L.

    2013-05-01

    The Radiological Toolbox software developed by Oak Ridge National Laboratory (ORNL) for the U.S. Nuclear Regulatory Commission (NRC) is designed to provide electronic access to the vast and varied data that underlie the field of radiation protection. These data represent physical, chemical, anatomical, physiological, and mathematical parameters detailed in the various handbooks that a health physicist might consult while in the office. The initial motivation for the software was to serve the needs of health physicists away from the office and without access to their handbooks, e.g., NRC inspectors. The earlier releases of the software were widely used and accepted around the world not only by practicing health physicists but also within educational programs. This release updates the software to accommodate changes in Windows operating systems and, in some aspects, radiation protection. This release has been tested on Windows 7 and 8 and on 32- and 64-bit machines. The nuclear decay data have been updated, and thermal neutron capture cross sections and cancer risk coefficients have been included. This document and the software's user's guide provide further details and documentation of the information captured within the Radiological Toolbox.

  2. A Distributed Cache Update Deployment Strategy in CDN

    NASA Astrophysics Data System (ADS)

    E, Xinhua; Zhu, Binjie

    2018-04-01

    The CDN management system distributes content objects to the edge of the internet so that users can access them nearby. The cache strategy is an important problem in network content distribution. A cache strategy was designed in which content diffuses effectively within the cache group, so that more content is stored in the cache and the group hit rate is improved.
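
    One plausible reading of the strategy is that each edge cache answers a local miss from peers in its group before falling back to the origin, so any copy held in the group counts as a group hit. The sketch below illustrates that reading; it is an interpretation of the abstract, not the authors' algorithm.

```python
from collections import OrderedDict

class EdgeCache:
    """LRU cache that consults its group peers before the origin server."""
    def __init__(self, capacity=2):
        self.store = OrderedDict()
        self.capacity = capacity
        self.peers = []

    def get(self, key, origin):
        if key in self.store:                 # local hit
            self.store.move_to_end(key)
            return self.store[key], "local"
        for peer in self.peers:               # group hit: fetch from a peer
            if key in peer.store:
                self.put(key, peer.store[key])
                return self.store[key], "group"
        value = origin[key]                   # miss: fetch from the origin
        self.put(key, value)
        return value, "origin"

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict the least recently used

origin = {f"obj{i}": f"content-{i}" for i in range(5)}
a, b = EdgeCache(), EdgeCache()
a.peers, b.peers = [b], [a]
a.get("obj1", origin)          # first request goes to the origin
print(b.get("obj1", origin))   # ('content-1', 'group'): served by the group
```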

  3. NREL: International Activities - U.S.-China Renewable Energy Partnership

    Science.gov Websites

    Working groups on solar PV and TC88 wind. Projects enhance renewable energy policies, including collaboration on innovative business models and financing solutions for solar PV deployment and micrositing. Current projects include recommendations for photovoltaic (PV) and wind grid code updates.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werling, Eric

    This report presents the Building America Research-to-Market Plan (Plan), including the integrated Building America Technology-to-Market Roadmaps (Roadmaps) that will guide Building America’s research, development, and deployment (RD&D) activities over the coming years. The Plan and Roadmaps will be updated as necessary to adapt to research findings and evolving stakeholder needs, and they will reflect input from DOE and stakeholders.

  5. Experiences with a generator tool for building clinical application modules.

    PubMed

    Kuhn, K A; Lenz, R; Elstner, T; Siegele, H; Moll, R

    2003-01-01

    To elaborate main system characteristics and relevant deployment experiences for the health information system (HIS) Orbis/OpenMed, which is in widespread use in Germany, Austria, and Switzerland. In a deployment phase of 3 years in a 1,200-bed university hospital, where the system underwent significant improvements, the system's functionality and its software design have been analyzed in detail. We focus on an integrated CASE tool for generating embedded clinical applications and for incremental system evolution. We present a participatory and iterative software engineering process developed for efficient utilization of such a tool. The system's functionality is comparable to other commercial products' functionality; its components are embedded in a vendor-specific application framework, and standard interfaces are being used for connecting subsystems. The integrated generator tool is a remarkable feature; it became a key factor of our project. Tool generated applications are workflow enabled and embedded into the overall data base schema. Rapid prototyping and iterative refinement are supported, so application modules can be adapted to the users' work practice. We consider tools supporting an iterative and participatory software engineering process highly relevant for health information system architects. The potential of a system to continuously evolve and to be effectively adapted to changing needs may be more important than sophisticated but hard-coded HIS functionality. More work will focus on HIS software design and on software engineering. Methods and tools are needed for quick and robust adaptation of systems to health care processes and changing requirements.

  6. NASA Integrated Network Monitor and Control Software Architecture

    NASA Technical Reports Server (NTRS)

    Shames, Peter; Anderson, Michael; Kowal, Steve; Levesque, Michael; Sindiy, Oleg; Donahue, Kenneth; Barnes, Patrick

    2012-01-01

    The National Aeronautics and Space Administration (NASA) Space Communications and Navigation office (SCaN) has commissioned a series of trade studies to define a new architecture intended to integrate the three existing networks that it operates, the Deep Space Network (DSN), Space Network (SN), and Near Earth Network (NEN), into one integrated network that offers users a set of common, standardized services and interfaces. The integrated monitor and control architecture utilizes common software and common operator interfaces that can be deployed at all three network elements. This software uses state-of-the-art concepts such as a pool of re-programmable equipment that acts like a configurable software radio, distributed hierarchical control, and centralized management of the whole SCaN integrated network. For this trade space study a model-based approach using SysML was adopted to describe and analyze several possible options for the integrated network monitor and control architecture. This model was used to refine the design and to drive the costing of the four different software options. This trade study modeled the three existing self-standing network elements at the point of departure, and then described how to integrate them using variations of new and existing monitor and control system components for the different proposed deployments under consideration. This paper will describe the trade space explored, the selected system architecture, the modeling and trade study methods, and some observations on useful approaches to implementing such model based trade space representation and analysis.

  7. cFE/CFS (Core Flight Executive/Core Flight System)

    NASA Technical Reports Server (NTRS)

    Wildermann, Charles P.

    2008-01-01

    This viewgraph presentation describes in detail the requirements and goals of the Core Flight Executive (cFE) and the Core Flight System (CFS). The Core Flight Software System is a mission-independent, platform-independent Flight Software (FSW) environment integrating a reusable core flight executive (cFE). The CFS goals include: 1) Reduce time to deploy high quality flight software; 2) Reduce project schedule and cost uncertainty; 3) Directly facilitate formalized software reuse; 4) Enable collaboration across organizations; 5) Simplify sustaining engineering (a.k.a. FSW maintenance); 6) Scale from small instruments to System of Systems; 7) Platform for advanced concepts and prototyping; and 8) Common standards and tools across the branch and NASA-wide.

  8. Virtual Exercise Training Software System

    NASA Technical Reports Server (NTRS)

    Vu, L.; Kim, H.; Benson, E.; Amonette, W. E.; Barrera, J.; Perera, J.; Rajulu, S.; Hanson, A.

    2018-01-01

    The purpose of this study was to develop and evaluate a virtual exercise training software system (VETSS) capable of providing real-time instruction and exercise feedback during exploration missions. A resistive exercise instructional system was developed using a Microsoft Kinect depth-camera device, which provides markerless 3-D whole-body motion capture at a small form factor and minimal setup effort. It was hypothesized that subjects using the newly developed instructional software tool would perform the deadlift exercise with more optimal kinematics and consistent technique than those without the instructional software. Following a comprehensive evaluation in the laboratory, the system was deployed for testing and refinement in the NASA Extreme Environment Mission Operations (NEEMO) analog.

  9. NDEx - the Network Data Exchange, A Network Commons for Biologists | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    Network models of biology, whether curated or derived from large-scale data analysis, are critical tools in the understanding of cancer mechanisms and in the design and personalization of therapies. The NDEx Project (Network Data Exchange) will create, deploy, and maintain an open-source, web-based software platform and public website to enable scientists, organizations, and software applications to share, store, manipulate, and publish biological networks.

  10. Exploring the Cost and Functionality of MEDCOM Web Services

    DTIC Science & Technology

    2005-10-24

    What backend database software supports your intranet/Internet content? (check all that apply): Oracle, Microsoft SQL Server... Department of Defense (DoD) service branches, which funded and deployed an Internet portal, TRICARE Online, to serve as an information conduit between the...public website, the information contained on the intranet is traditionally limited to the members of the hosting command. The local information serves as

  11. Cloud Security: Issues and Research Directions

    DTIC Science & Technology

    2014-11-18

    4. Cloud Computing Security: What Changes with Software-Defined Networking? Maurício Tsugawa, Andréa Matsunaga, and José A. B. Fortes 5...machine's memory from an untrusted or malicious hypervisor. In Chapter 4, Tsugawa et al. discuss the security issues introduced when Software-Defined Networking (SDN) is deployed within and across clouds. Chapters 5-9 are focused on the protection of data stored in the cloud. In Chapter 5, Wang et

  12. Framework for ReSTful Web Services in OSGi

    NASA Technical Reports Server (NTRS)

    Shams, Khawaja S.; Norris, Jeffrey S.; Powell, Mark W.; Crockett, Thomas M.; Mittman, David S.; Fox, Jason M.; Joswig, Joseph C.; Wallick, Michael N.; Torres, Recaredo J.; Rabe, Kenneth

    2009-01-01

    Ensemble ReST is a software system that eases the development, deployment, and maintenance of server-side application programs to perform functions that would otherwise be performed by client software. Ensemble ReST takes advantage of the proven disciplines of ReST (Representational State Transfer). ReST leverages the standardized HTTP protocol to enable developers to offer services to a diverse variety of clients: from shell scripts to sophisticated Java application suites.
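
    As a flavor of the pattern, a minimal HTTP resource of the kind any ReST client can consume (a shell script with curl, a browser, or a Java suite) might look like this; the /status resource and its payload are invented for illustration and are not part of Ensemble ReST.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    """Expose a hypothetical read-only /status resource over plain HTTP."""
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"service": "demo", "state": "nominal"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Any HTTP client can now GET http://localhost:8000/status,
    # e.g. `curl http://localhost:8000/status` from a shell script.
    HTTPServer(("", 8000), StatusHandler).serve_forever()
```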

  13. The Use of Modeling for Flight Software Engineering on SMAP

    NASA Technical Reports Server (NTRS)

    Murray, Alexander; Jones, Chris G.; Reder, Leonard; Cheng, Shang-Wen

    2011-01-01

    The Soil Moisture Active Passive (SMAP) mission proposes to deploy an Earth-orbiting satellite with the goal of obtaining global maps of soil moisture content at regular intervals. Launch is currently planned in 2014. The spacecraft bus would be built at the Jet Propulsion Laboratory (JPL), incorporating both new avionics as well as hardware and software heritage from other JPL projects. [4] provides a comprehensive overview of the proposed mission.

  14. Cycle 24 HST+COS Target Acquisition Monitor Summary

    NASA Astrophysics Data System (ADS)

    Penton, Steven V.; White, James

    2018-06-01

    HST/COS calibration program 14847 (P14857) was designed to verify that all three COS Target Acquisition (TA) modes were performing nominally during Cycle 24. The program was designed not only to determine if any of the COS TA flight software (FSW) patchable constants need updating but also to determine the values of any required parameter updates. All TA modes were determined to be performing nominally during the Cycle 24 calendar period of October 1, 2016 - October 1, 2017. No COS SIAF, TA subarray, or FSW parameter updates were required as a result of this program.

  15. Forest sector and primary forest products industry contributions to the economies of the southern states: 2011 update

    Treesearch

    Consuelo Brandeis; Donald G. Hodges

    2015-01-01

    The analysis in this article provides an update on the southern forest sector economic activity after the downturn experienced in 2008–2009. The analysis was conducted using Impact Analysis for Planning (IMPLAN) software and data sets for 2009 and 2011 and results from the USDA Forest Service Timber Products Output latest survey of primary wood processing mills....

  16. INDEPENDENT EVALUATION OF THE GAM EX5ALN MINIATURE LINE-NARROWED KRF EXCIMER LASER

    DTIC Science & Technology

    2017-06-01

    software included the disabled tabs and buttons that clutter the panels. Information on these panels was not updated correctly (e.g., shots per fill and...total shots are not stored correctly and appear to contain random data, the lock function on the fill page does not update correctly, the time to...fill level after 7 M shots.

  17. Non-Grey Radiation Modeling using Thermal Desktop/Sindaworks TFAWS06-1009

    NASA Technical Reports Server (NTRS)

    Anderson, Kevin R.; Paine, Chris

    2006-01-01

    This paper provides an overview of the non-grey radiation modeling capabilities of Cullimore and Ring's Thermal Desktop(Registered TradeMark) Version 4.8 SindaWorks software. The non-grey radiation analysis theory implemented by SindaWorks and the methodology used by the software are outlined. Representative results from a parametric trade study of a radiation shield composed of a series of V-groove-shaped deployable panels illustrate the capabilities of the SindaWorks non-grey radiation thermal analysis software, using emissivities whose temperature and wavelength dependence is modeled via the Hagen-Rubens relation.
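
    For reference, the Hagen-Rubens relation invoked here approximates a conductor's normal spectral emissivity from its electrical resistivity rho(T) and the wavelength lambda. In the common engineering form below (SI units); this is the standard textbook relation, not SindaWorks documentation:

```latex
% Hagen-Rubens approximation for the normal spectral emissivity of a metal,
% valid at long wavelengths where conductivity dominates (rho in ohm*m,
% lambda in m; the 0.365 form absorbs the physical constants).
\varepsilon_{n,\lambda}(\lambda, T)
  \approx \sqrt{\frac{16 \pi \varepsilon_0 c \, \rho(T)}{\lambda}}
  \approx 0.365 \sqrt{\frac{\rho(T)}{\lambda}}
```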

  18. Software licensing policy for the Open Source Application Development Portal (OSADP).

    DOT National Transportation Integrated Search

    1998-07-01

    The purpose of the Commercial Vehicle Information Systems and Networks Model Deployment Initiative (CVISN MDI) is to demonstrate the technical and institutional feasibility, costs, and benefits of the primary Intelligent Transportation Systems (ITS) ...

  19. Scalable Deployment of Advanced Building Energy Management Systems

    DTIC Science & Technology

    2013-05-01

    Figure J.5 Sensor Schema; Figure J.6 Temperature Sensor Schema. ...augments an existing BMS with additional sensors/meters and uses a reduced-order model and diagnostic software to make performance deviations visible

  20. A Petri Net-Based Software Process Model for Developing Process-Oriented Information Systems

    NASA Astrophysics Data System (ADS)

    Li, Yu; Oberweis, Andreas

    Aiming at increasing flexibility, efficiency, effectiveness, and transparency of information processing and resource deployment in organizations to ensure customer satisfaction and high quality of products and services, process-oriented information systems (POIS) represent a promising realization form of computerized business information systems. Due to the complexity of POIS, explicit and specialized software process models are required to guide POIS development. In this chapter we characterize POIS with an architecture framework and present a Petri net-based software process model tailored for POIS development with consideration of organizational roles. As integrated parts of the software process model, we also introduce XML nets, a variant of high-level Petri nets as basic methodology for business processes modeling, and an XML net-based software toolset providing comprehensive functionalities for POIS development.
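
    The Petri-net mechanics underlying XML nets (a transition fires when each of its input places holds enough tokens, consuming them and producing tokens downstream) can be shown in a minimal sketch; the places and transitions below are invented, and real XML nets additionally attach XML documents and schema restrictions to places.

```python
# Minimal place/transition Petri net: a transition is enabled when all of its
# input places carry enough tokens; firing moves tokens downstream.

marking = {"order_received": 1, "order_checked": 0, "order_processed": 0}

transitions = {
    "check_order":   {"in": {"order_received": 1}, "out": {"order_checked": 1}},
    "process_order": {"in": {"order_checked": 1},  "out": {"order_processed": 1}},
}

def enabled(t):
    return all(marking[p] >= n for p, n in transitions[t]["in"].items())

def fire(t):
    assert enabled(t), f"{t} is not enabled"
    for p, n in transitions[t]["in"].items():
        marking[p] -= n
    for p, n in transitions[t]["out"].items():
        marking[p] += n

for t in ["check_order", "process_order"]:
    fire(t)
print(marking)   # {'order_received': 0, 'order_checked': 0, 'order_processed': 1}
```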

  1. REVEAL: Software Documentation and Platform Migration

    NASA Technical Reports Server (NTRS)

    Wilson, Michael A.; Veibell, Victoir T.

    2011-01-01

    The Research Environment for Vehicle Embedded Analysis on Linux (REVEAL) is reconfigurable data acquisition software designed for network-distributed test and measurement applications. In development since 2001, it has been successfully demonstrated in support of a number of actual missions within NASA's Suborbital Science Program. Improvements to software configuration control were needed to properly support both an ongoing transition to operational status and continued evolution of REVEAL capabilities. For this reason the project described in this report targets REVEAL software source documentation and deployment of the software on a small set of hardware platforms different from what is currently used in the baseline system implementation. This presentation specifically describes the actions taken over a ten week period by two undergraduate student interns and serves as an overview of the content of the final report for that internship.

  2. Overview of Hazard Assessment and Emergency Planning Software of Use to RN First Responders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waller, E; Millage, K; Blakely, W F

    2008-08-26

    There are numerous software tools available for field deployment, reach-back, training, and planning use in the event of a radiological or nuclear (RN) terrorist event. Specialized software tools used by CBRNe responders can increase the information available and the speed and accuracy of the response, thereby ensuring that radiation doses to responders, receivers, and the general public are kept as low as reasonably achievable. Software designed to provide health care providers with assistance in selecting appropriate countermeasures or therapeutic interventions in a timely fashion can improve the potential for positive patient outcomes. This paper reviews various software applications of relevance to radiological and nuclear (RN) events that are currently in use by first responders, emergency planners, medical receivers, and criminal investigators.

  3. Software architecture and engineering for patient records: current and future.

    PubMed

    Weng, Chunhua; Levine, Betty A; Mun, Seong K

    2009-05-01

    During the "The National Forum on the Future of the Defense Health Information System," a track focusing on "Systems Architecture and Software Engineering" included eight presenters. These presenters identified three key areas of interest in this field, which include the need for open enterprise architecture and a federated database design, net centrality based on service-oriented architecture, and the need for focus on software usability and reusability. The eight panelists provided recommendations related to the suitability of service-oriented architecture and the enabling technologies of grid computing and Web 2.0 for building health services research centers and federated data warehouses to facilitate large-scale collaborative health care and research. Finally, they discussed the need to leverage industry best practices for software engineering to facilitate rapid software development, testing, and deployment.

  4. The StratusLab cloud distribution: Use-cases and support for scientific applications

    NASA Astrophysics Data System (ADS)

    Floros, E.

    2012-04-01

    The StratusLab project is integrating an open cloud software distribution that enables organizations to set up and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. The StratusLab distribution capitalizes on popular infrastructure virtualization solutions like KVM, the OpenNebula virtual machine manager, the Claudia service manager, and the SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The StratusLab distribution covers the core aspects of a cloud IaaS architecture, namely computing (life-cycle management of virtual machines), storage, appliance management, and networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. The cloud computing infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project, and for this reason the initial priority has been support for the deployment and operation of fully virtualized production-level grid sites; a goal that has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-European grid infrastructure. In this area the project is currently working to provide non-trivial capabilities like elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. In this direction, we have developed and currently provide support for setting up general-purpose computing solutions like Hadoop, MPI, and Torque clusters. Regarding scientific applications, the project is collaborating closely with the bioinformatics community to prepare VM appliances and deploy optimized services for bioinformatics applications. In a similar manner, additional scientific disciplines like Earth Science can take advantage of StratusLab cloud solutions. Interested users are welcome to join StratusLab's user community by getting access to the reference cloud services deployed by the project and offered to the public.

  5. Develop a Model Component

    NASA Technical Reports Server (NTRS)

    Ensey, Tyler S.

    2013-01-01

    During my internship at NASA, I was a model developer for Ground Support Equipment (GSE). The purpose of a model developer is to develop and unit test model component libraries (fluid, electrical, gas, etc.). The models are designed to simulate software for GSE (Ground Special Power, Crew Access Arm, Cryo, Fire and Leak Detection System, Environmental Control System (ECS), etc.) before it is implemented in hardware. These models support verifying local control and remote software for End-Item Software Under Test (SUT). Each model simulates the physical behavior (function, state, limits, and I/O) of an end-item and its dependencies as defined in the Subsystem Interface Table, Software Requirements & Design Specification (SRDS), Ground Integrated Schematic (GIS), and System Mechanical Schematic (SMS). Each specific model component is simulated in MATLAB's Simulink. The model development life cycle is as follows: identify source documents; identify model scope; update the schedule; preliminary design review; develop model requirements; update model scope; update the schedule; detailed design review; create/modify library components; implement library component references; implement subsystem components; develop a test script; run the test script; develop a user's guide; send the model out for peer review; send the model out for verification/validation; if empirical data exist, generate a validation data package, otherwise generate a verification package; review the test results; and finally, the user requests accreditation and a statement of accreditation is prepared. Once each component model is reviewed and approved, the components are integrated into one model. This integrated model is then itself tested, through a test script and autotest, to confirm that all models work together for a single purpose. The component I was assigned was a fluid component, a discrete pressure switch. The switch takes a fluid pressure input, and if the pressure is greater than a designated cutoff pressure, the switch stops fluid flow.
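
    The assigned component reduces to a threshold comparison; the sketch below is a Python stand-in for that logic (illustrative names and units, not the actual Simulink library block).

      # Python stand-in for the discrete pressure switch described above:
      # compare the fluid pressure input against a designated cutoff and
      # stop flow when the cutoff is exceeded. Names and units are invented.

      class DiscretePressureSwitch:
          def __init__(self, cutoff_psi):
              self.cutoff_psi = cutoff_psi

          def update(self, pressure_psi):
              # Returns True while fluid flow is permitted.
              return pressure_psi <= self.cutoff_psi

      switch = DiscretePressureSwitch(cutoff_psi=150.0)
      for p in (100.0, 149.9, 150.1):
          print(p, "->", "flow" if switch.update(p) else "flow stopped")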

  6. DeepInfer: open-source deep learning deployment toolkit for image-guided therapy

    NASA Astrophysics Data System (ADS)

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-03-01

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.

  7. DeepInfer: Open-Source Deep Learning Deployment Toolkit for Image-Guided Therapy.

    PubMed

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A; Kapur, Tina; Wells, William M; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-02-11

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.

  8. DeepInfer: Open-Source Deep Learning Deployment Toolkit for Image-Guided Therapy

    PubMed Central

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-01-01

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose “DeepInfer” – an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections. PMID:28615794
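
    The deployment pattern the three records above describe, pulling a task-specific model from a public registry and applying it to new data, can be sketched as follows; this is an illustrative pattern only, not DeepInfer's actual API.

      # Illustrative pattern only, NOT DeepInfer's actual API: fetch a
      # task-specific model from a registry and apply it to new data, so
      # the clinician-facing tool needs no local model development.

      MODEL_REGISTRY = {
          # hypothetical entry: task name -> model factory; the "model"
          # here is a dummy threshold function standing in for a network
          "prostate-segmentation": lambda: (lambda vol: [v > 0.5 for v in vol]),
      }

      def infer(task, volume):
          model = MODEL_REGISTRY[task]()    # pull the registered model
          return model(volume)              # run it on the new data

      print(infer("prostate-segmentation", [0.2, 0.7, 0.9]))
      # [False, True, True]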

  9. Update on TAO moored ORG array

    NASA Technical Reports Server (NTRS)

    Freitag, H. Paul

    1994-01-01

    During the Coupled Ocean Atmosphere Response Experiment (COARE) six TAO moorings were equipped with optical rain gauges (ORG's). In late 1993 moorings deployed on the equator at 154E and 157.5E were recovered and not redeployed as they were augmentations to the TAO array for COARE only. In December 1993, four TAO moorings were equipped with ORG's: one each at 2N, 156E and 2S, 156E and ORG doublets on the equator at 0, 156E and 0, 165E. The 2N, 156E mooring has been lost. By the end of April all sites will have been serviced and six refurbished sensors will again be deployed in the same locations.

  10. LBT Distributed Archive: Status and Features

    NASA Astrophysics Data System (ADS)

    Knapic, C.; Smareglia, R.; Thompson, D.; Grede, G.

    2011-07-01

    After the first release of the LBT Distributed Archive, this successful collaboration is continuing within the LBT Corporation. The IA2 (Italian Center for Astronomical Archive) team has updated the LBT DA with new features to facilitate user data retrieval while abiding by VO standards. To facilitate the integration of data from any new instruments, we have migrated to a new database, developed new data distribution software, and enhanced features in the LBT User Interface. The DBMS engine has been changed to MySQL. Consequently, the data handling software now uses Java thread technology to update and synchronize the main storage archives on Mt. Graham and in Tucson, as well as archives in Trieste and Heidelberg, with all metadata and proprietary data. The LBT UI has been updated with additional features allowing users to search by instrument and by some of the more important characteristics of the images. Finally, instead of a simple cone search service over all LBT image data, new instrument-specific SIAP and cone search services have been developed. They will be published in the IVOA framework later this fall.
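
    The thread-per-mirror synchronization pattern mentioned above can be sketched in Python (the abstract's implementation is in Java; the site names follow the abstract, while the replication function is a placeholder).

      # Sketch of one-worker-per-mirror periodic synchronization; the
      # copy_new_records body is a stand-in for real DB replication.

      import threading

      MIRRORS = ["mt_graham", "tucson", "trieste", "heidelberg"]
      stop = threading.Event()

      def copy_new_records(site):
          print(f"syncing metadata to {site}")    # placeholder action

      def sync_loop(site, interval_s):
          while not stop.is_set():
              copy_new_records(site)
              stop.wait(interval_s)    # sleep, but wake early on shutdown

      threads = [threading.Thread(target=sync_loop, args=(s, 60.0))
                 for s in MIRRORS]
      for t in threads:
          t.start()
      stop.set()                       # request shutdown
      for t in threads:
          t.join()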

  11. The Effects of Word Processing Software on User Satisfaction: An Empirical Study of Micro, Mini, and Mainframe Computers Using an Interactive Artificial Intelligence Expert-System.

    ERIC Educational Resources Information Center

    Rushinek, Avi; Rushinek, Sara

    1984-01-01

    Describes results of a system rating study in which users responded to WPS (word processing software) questions. Study objectives were data collection and evaluation of variables; statistical quantification of WPS's contribution (along with other variables) to user satisfaction; design of an expert system to evaluate WPS; and database update and…

  12. Programmer's reference manual for the VAX-Gerber link software package. Revision 1. 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isobe, G.W.

    1985-10-01

    This guide provides the information necessary to edit, modify, and run the VAX-Gerber software link. Since the project is in the testing stage and still being modified, this guide discusses the final desired stage along with the current stage. The current stage is set up to allow the programmer to easily modify and update code as necessary.

  13. Steps towards Improving GNSS Systematic Errors and Biases

    NASA Astrophysics Data System (ADS)

    Herring, T.; Moore, M.

    2017-12-01

    Four general areas of analysis method improvement, three related to data analysis models and the fourth to calibration methods, were recommended at the recent unified analysis workshop (UAW), and we discuss aspects of these areas for improvement. The gravity fields used in the GNSS orbit integrations should be updated to match modern fields, making them consistent with the fields used by the other IAG services; the update would include the static part of the field and a time-variable component. The force models associated with radiation forces are the most uncertain, and modeling of these forces can be made more consistent with the exchange of attitude information. The International GNSS Service (IGS) will develop an attitude format and make attitude information available so that analysis centers can validate their models. The IGS has noted the appearance of the GPS draconitic period and harmonics of this period in time series of various geodetic products (e.g., positions and Earth orientation parameters). An updated short-period (diurnal and semidiurnal) model is needed, along with a method to determine the best model. The final area, not directly related to analysis models, is the recommendation that site-dependent calibration of GNSS antennas is needed, since these calibrations have a direct effect on the ITRF realization and on position offsets when antennas are changed. The effects of using antenna-specific phase center models will be evaluated for those sites where these values are available without disturbing an existing antenna installation. Potential development of an in-situ antenna calibration system is strongly encouraged. In-situ calibration would be deployed at core sites where GNSS sites are tied to other geodetic systems. With the recent expansion of the number of GPS satellites transmitting unencrypted codes on the GPS L2 frequency and the availability of software GNSS receivers, in-situ calibration between an existing installation and a movable directional antenna is now more likely to generate accurate results than earlier analog switching systems. With all of these improvements, there is the expectation of better agreement between the space geodetic methods, thus allowing more definitive assessment and modeling of the Earth's time-variable shape and gravity field.

  14. Revolution…Now The Future Arrives for Five Clean Energy Technologies – 2015 Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    In 2013, the U.S. Department of Energy (DOE) released the Revolution Now report, highlighting four transformational technologies: land-based wind power, silicon photovoltaic (PV) solar modules, light-emitting diodes (LEDs), and electric vehicles (EVs). That study and its 2014 update showed how dramatic reductions in cost are driving a surge in consumer, industrial, and commercial adoption for these clean energy technologies—as well as yearly progress. In addition to presenting the continued progress made over the last year in these areas, this year's update goes further. Two separate sections now cover large, central, utility-scale PV plants and smaller, rooftop, distributed PV systems to highlight how both have achieved significant deployment nationwide, and have done so through different innovations, such as easier access to capital for utility-scale PV and reductions of non-hardware costs and third-party ownership for distributed PV. Along with these core technologies…

  15. [Comparison among various software for LMS growth curve fitting methods].

    PubMed

    Han, Lin; Wu, Wenhong; Wei, Qiuxia

    2015-03-01

    To explore methods for fitting the skewness-median-coefficient of variation (LMS) growth curve in different software packages, and to identify the best statistical method for grass-roots child and adolescent health staff. Regular physical examination data of head circumference for normal infants aged 3, 6, 9, and 12 months in Baotou City were analyzed. The statistical packages SAS, R, STATA, and SPSS were used to fit the LMS growth curve, and the results were evaluated on ease of use, learning curve, user interface, presentation of results, and software updating and maintenance. All packages produced the same fitting results, and each had its own advantages and disadvantages. Taking all evaluation aspects into consideration, R excelled the others in LMS growth curve fitting and is the most suitable choice for grass-roots child and adolescent health staff.
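
    For reference, the LMS method converts a measurement X to a z-score from the age-specific L (skewness), M (median), and S (coefficient of variation) values via Cole's formula, sketched below with illustrative numbers (not the Baotou fit).

      # Cole's LMS z-score: Z = ((X/M)**L - 1) / (L*S) for L != 0,
      # and Z = ln(X/M) / S in the limit L -> 0.

      import math

      def lms_zscore(x, L, M, S):
          if L != 0:
              return ((x / M) ** L - 1) / (L * S)
          return math.log(x / M) / S

      # Illustrative values only, not the fitted head-circumference curve:
      print(round(lms_zscore(x=46.0, L=1.0, M=45.0, S=0.03), 2))   # ~0.74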

  16. NDAS Hardware Translation Layer Development

    NASA Technical Reports Server (NTRS)

    Nazaretian, Ryan N.; Holladay, Wendy T.

    2011-01-01

    The NASA Data Acquisition System (NDAS) project aims to replace all DAS software for NASA's rocket testing facilities. There must be a software-hardware translation layer so the software can properly talk to the hardware. Since the hardware at each test stand varies, drivers for each stand have to be made. These drivers act more like plugins for the software: if the software is being used at E3, it should point to the E3 driver package; if it is being used at B2, it should point to the B2 driver package. The driver packages should also include hardware drivers that are universal to the DAS system. For example, since A1, A2, and B2 all use the Preston 8300AU signal conditioners, the driver for those three stands should be the same and updated collectively.
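
    The per-stand driver-package lookup described above might look like the following sketch; the stand and conditioner names follow the abstract, but the registry itself is hypothetical.

      # Hypothetical registry mapping test stands to driver packages.
      # A1, A2 and B2 share the Preston 8300AU conditioner, so they
      # share one driver entry and are updated collectively.

      SIGNAL_CONDITIONER_DRIVERS = {
          "A1": "preston_8300au",
          "A2": "preston_8300au",
          "B2": "preston_8300au",
          "E3": "e3_conditioner",      # invented name for illustration
      }

      def load_driver_package(stand):
          """Return the driver package the DAS software should load."""
          try:
              return SIGNAL_CONDITIONER_DRIVERS[stand]
          except KeyError:
              raise ValueError(f"no driver package registered for {stand!r}")

      print(load_driver_package("B2"))   # preston_8300au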

  17. Open-source meteor detection software for low-cost single-board computers

    NASA Astrophysics Data System (ADS)

    Vida, D.; Zubović, D.; Šegon, D.; Gural, P.; Cupec, R.

    2016-01-01

    This work aims to lower the current price threshold of meteor stations, which can deter meteor enthusiasts from owning one. In recent years, small card-sized computers have become widely available and are used for numerous applications. To utilize such computers for meteor work, software that can run on them is needed. In this paper we present a detailed description of newly developed open-source software for fireball and meteor detection, optimized to run on low-cost single-board computers. Furthermore, an update is given on the development of automated open-source software that will handle video capture, fireball and meteor detection, astrometry, and photometry.

  18. 76 FR 40844 - Changes to Move Update Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-12

    ... accuracy standard: a. For computerized lists, Coding Accuracy Support System (CASS)- certified address matching software and current USPS City State Product, within a mailer's computer systems or through an...

  19. Static and Dynamic Verification of Critical Software for Space Applications

    NASA Astrophysics Data System (ADS)

    Moreira, F.; Maia, R.; Costa, D.; Duro, N.; Rodríguez-Dapena, P.; Hjortnaes, K.

    Space technology is no longer used only for highly specialised research activities or for sophisticated manned space missions. Modern society relies more and more on space technology and applications for everyday activities. Worldwide telecommunications, Earth observation, navigation, and remote sensing are only a few examples of space applications on which we rely daily. The European-driven global navigation system Galileo and its associated applications, e.g. air traffic management and vessel and car navigation, will significantly expand the already stringent safety requirements for space-based applications. Apart from its usefulness and practical applications, every single piece of onboard software deployed into space represents an enormous investment. With a long operational lifetime, and being extremely difficult to maintain and upgrade, at least compared with "mainstream" software development, the importance of ensuring its correctness before deployment is immense. Verification & Validation techniques and technologies have a key role in ensuring that onboard software is correct and error free, or at least free from errors that can potentially lead to catastrophic failures. Many RAMS techniques, including both static criticality analysis and dynamic verification techniques, have been used as a means to verify and validate critical software and to ensure its correctness, but traditionally these have been applied in isolation. One of the main reasons is the immaturity of this field as applied to the growing number of software products within space systems. This paper presents an innovative way of combining static and dynamic techniques, exploiting their synergy and complementarity for software fault removal. The proposed methodology is based on the combination of Software FMEA and FTA with fault-injection techniques. The case study herein described is implemented with support from two tools: the SoftCare tool for the SFMEA and SFTA, and the Xception tool for fault injection. Keywords: Verification & Validation, RAMS, Onboard software, SFMEA, SFTA, Fault-injection. (This work is being performed under the project STADY, Applied Static And Dynamic Verification Of Critical Software, ESA/ESTEC Contract Nr. 15751/02/NL/LvH.)
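
    As a toy illustration of the dynamic side of the methodology (the actual work uses the Xception tool), a minimal fault-injection harness corrupts one bit of an input and checks that the software under test fails safe; all names below are invented.

      # Toy fault-injection harness: flip a single bit of an input frame
      # and verify the (stand-in) software under test rejects it safely.

      def inject_bit_flip(data: bytes, byte_index: int, bit: int) -> bytes:
          corrupted = bytearray(data)
          corrupted[byte_index] ^= 1 << bit      # flip one bit
          return bytes(corrupted)

      def software_under_test(frame: bytes) -> str:
          # Stand-in for onboard code: reject frames with a bad checksum.
          payload, checksum = frame[:-1], frame[-1]
          return "ok" if sum(payload) % 256 == checksum else "safe-reject"

      frame = bytes([1, 2, 3]) + bytes([6])      # payload + valid checksum
      print(software_under_test(frame))                           # ok
      print(software_under_test(inject_bit_flip(frame, 0, 0)))    # safe-reject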

  20. DEVELOPING THE NATIONAL GEOTHERMAL DATA SYSTEM ADOPTION OF CKAN FOR DOMESTIC & INTERNATIONAL DATA DEPLOYMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Ryan J.; Kuhmuench, Christoph; Richard, Stephen M.

    2013-03-01

    The National Geothermal Data System (NGDS) Design and Testing Team is developing NGDS software currently referred to as the "NGDS Node-In-A-Box". The software targets organizations or individuals who wish to host at least one of the following: an online repository containing resources for the NGDS; an online site for creating metadata to register resources with the NGDS; NGDS-conformant Web APIs that enable access to NGDS data (e.g., WMS, WFS, WCS); NGDS-conformant Web APIs that support discovery of NGDS resources via catalog service (e.g., CSW); or a web site that supports discovery and understanding of NGDS resources. A number of different frameworks for development of this online application were reviewed. The NGDS Design and Testing Team decided to use CKAN (http://ckan.org/) because it provides the closest match between out-of-the-box functionality and NGDS Node-In-A-Box requirements. To achieve the NGDS vision and goals, this software development project has been initiated to provide NGDS data consumers with a highly functional interface to access the system, and to ease the burden on data providers who wish to publish data in the system. It is important to note that this software package constitutes a reference implementation. The NGDS software is based on open standards, which means other server software can make resources available and other client applications can utilize NGDS data. A number of international organizations have expressed interest in the NGDS approach to data access. The CKAN node implementation can provide a simple path for deploying this technology in other settings.

  1. Deploying a Route Optimization EFB Application for Commercial Airline Operational Trials

    NASA Technical Reports Server (NTRS)

    Roscoe, David A.; Vivona, Robert A.; Woods, Sharon E.; Karr, David A.; Wing, David J.

    2016-01-01

    The Traffic Aware Planner (TAP), developed for NASA Langley Research Center to support the Traffic Aware Strategic Aircrew Requests (TASAR) project, is a flight-efficiency software application developed for an Electronic Flight Bag (EFB). Tested in two flight trials and planned for operational testing by two commercial airlines, TAP is a real-time trajectory optimization application that leverages connectivity with onboard avionics and broadband Internet sources to compute and recommend route modifications to flight crews to improve fuel and time performance. The application utilizes a wide range of data, including Automatic Dependent Surveillance Broadcast (ADS-B) traffic, Flight Management System (FMS) guidance and intent, on-board sensors, published winds and weather, and Special Use Airspace (SUA) schedules. This paper discusses the challenges of developing and deploying TAP to various EFB platforms, our solutions to some of these challenges, and lessons learned, to assist commercial software developers and hardware manufacturers in their efforts to implement and extend TAP functionality in their environments. EFB applications (such as TAP) typically access avionics data via an ARINC 834 Simple Text Avionics Protocol (STAP) server hosted by an Aircraft Interface Device (AID) or other installed hardware. While the protocol is standardized, the data sources, content, and transmission rates can vary from aircraft to aircraft. Additionally, the method of communicating with the AID may vary depending on EFB hardware and/or the availability of onboard networking services, such as Ethernet, Wi-Fi, Bluetooth, or other mechanisms. EFBs with portable and installed components can be implemented using a variety of operating systems, and cockpits are increasingly incorporating tablet-based technologies, further expanding the number of platforms the application may need to support. Supporting multiple EFB platforms, AIDs, avionics datasets, and user interfaces presents a challenge for software developers and the management of their code baselines. Maintaining multiple baselines to support all deployment targets can be extremely cumbersome and expensive. Certification also needs to be considered when developing the application. Regardless of whether the software is itself destined to be certified, data requirements in support of the application and user interface elements may introduce certification requirements for EFB manufacturers and the airlines. The example of TAP, the challenges faced, solutions implemented, and lessons learned will give EFB application and hardware developers insight into future potential requirements in deploying TAP or similar flight-deck EFB applications.

  2. An Inquiry into the Cost of Post Deployment Software Support (PDSS)

    DTIC Science & Technology

    1989-09-01

    The increasing cost of software maintenance is taking a larger share of the military budget each year... increments as needed (3:59). The second page of the Form 75 starts with a section stating how the hours, and consequently the funds, will be allocated to... length of time required, the timeline can be in hourly, weekly, monthly, or quarterly increments. Some milestones included are formal approval, test…

  3. Agile: From Software to Mission Systems

    NASA Technical Reports Server (NTRS)

    Trimble, Jay; Shirley, Mark; Hobart, Sarah

    2017-01-01

    To maximize efficiency and flexibility in Mission Operations System (MOS) design, we are evolving principles from agile and lean methods for software, to the complete mission system. This allows for reduced operational risk at reduced cost, and achieves a more effective design through early integration of operations into mission system engineering and flight system design. The core principles are assessment of capability through demonstration, risk reduction through targeted experiments, early test and deployment, and maturation of processes and tools through use.

  4. Contracting for Agile Software Development in the Department of Defense: An Introduction

    DTIC Science & Technology

    2015-08-01

    Requirements are fixed at a more granular level; reviews of the work product happen more frequently and assess each individual increment rather than a “big bang”... boundaries than “big-bang” development. The implementation of incremental or progressive reviews enables just that—any issues identified at the time of the... the contract needs to support the delivery of deployable software at defined increments/intervals, rather than incentivizing “big-bang” efforts or…

  5. Bioenergy Technologies Office Multi-Year Program Plan: July 2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    2014-07-09

    This is the May 2014 Update to the Bioenergy Technologies Office Multi-Year Program Plan, which sets forth the goals and structure of the Office. It identifies the research, development, demonstration, and deployment activities the Office will focus on over the next five years and outlines why these activities are important to meeting the energy and sustainability challenges facing the nation.

  6. Reed-Solomon error-correction as a software patch mechanism.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pendley, Kevin D.

    This report explores how error-correction data generated by a Reed-Solomon code may be used as a mechanism to apply changes to an existing installed codebase. Using the Reed-Solomon code to generate error-correction data for a changed or updated codebase will allow the error-correction data to be applied to an existing codebase to both validate and introduce changes or updates from some upstream source to the existing installed codebase.
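
    A minimal sketch of the idea, assuming the pip-installable reedsolo package (whose decode, as of the 1.x releases, returns a (message, message+ecc, errata positions) tuple): parity bytes computed over the new codebase act as the patch, and Reed-Solomon decoding of the old codebase plus that parity corrects the differing bytes, provided the number of changed bytes stays within the code's correction capability.

      # Sketch with the third-party `reedsolo` package (assumed API per
      # its 1.x releases). nsym=32 parity bytes correct up to 16 changed
      # bytes; this naive sketch also assumes old and new are equal size.

      from reedsolo import RSCodec

      rsc = RSCodec(nsym=32)

      old = b"print('helo wrld')   # v1.0"
      new = b"print('hello world') # v1.1"
      assert len(old) == len(new)

      patch = bytes(rsc.encode(new))[len(new):]     # parity bytes only
      recovered, _, _ = rsc.decode(bytes(old) + patch)
      assert bytes(recovered) == new                # old + patch -> new
      print(bytes(recovered).decode())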

  7. On Open and Collaborative Software Development in the DoD

    DTIC Science & Technology

    2010-04-30

    of this community and the larger F/OSS communities to make changes (and commit those changes) to the artifact base. This churning effect... Succinctly, it is this churning and frequent updates (i.e., "release early, release often") to the artifacts that spark innovation through... the entire project. Artifacts are frequently updated and churned over by the F/OSS community, resulting in better quality and innovation. It is up…

  8. Second generation registry framework.

    PubMed

    Bellgard, Matthew I; Render, Lee; Radochonski, Maciej; Hunter, Adam

    2014-01-01

    Information management systems are essential to capture data, be it for public health and human disease, sustainable agriculture, or plant and animal biosecurity. In public health, the term patient registry is often used to describe information management systems that are used to record and track phenotypic data of patients. Appropriate design, implementation and deployment of patient registries enables rapid decision making and ongoing data mining, ultimately leading to improved patient outcomes. A major bottleneck encountered is the static nature of these registries. That is, software developers are required to work with stakeholders to determine requirements, design the system, and implement the required data fields and functionality for each patient registry. Additionally, software developer time is required for ongoing maintenance and customisation. It is desirable to deploy a sophisticated registry framework that allows scientists and registry curators possessing standard computing skills to dynamically construct a complete patient registry from scratch and customise it for their specific needs, with little or no need to engage a software developer at any stage. This paper introduces our second generation open source registry framework, which builds on our previous rare disease registry framework (RDRF). This second generation RDRF is a new approach as it empowers registry administrators to construct one or more patient registries without software developer effort. New data elements for a diverse range of phenotypic and genotypic measurements can be defined at any time. Defined data elements can then be utilised in any of the created registries. Fine-grained, multi-level user and workgroup access can be applied to each data element to ensure appropriate access and data privacy. We introduce the concept of derived data elements to assist the data element standards communities on how they might be best categorised. We introduce the second generation RDRF that enables the user-driven dynamic creation of patient registries. We believe this second generation RDRF is a novel approach to patient registry design, implementation and deployment and a significant advance on existing registry systems.
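
    The core idea, data elements defined at runtime as data rather than hard-coded fields, can be sketched as follows (illustrative Python, not the actual RDRF code).

      # Illustrative sketch of a dynamic registry: fields are defined at
      # runtime as data, so curators can extend a registry without code.

      class DataElement:
          def __init__(self, name, dtype, allowed=None):
              self.name, self.dtype, self.allowed = name, dtype, allowed

          def validate(self, value):
              if not isinstance(value, self.dtype):
                  raise TypeError(f"{self.name} expects {self.dtype.__name__}")
              if self.allowed is not None and value not in self.allowed:
                  raise ValueError(f"{self.name} must be one of {self.allowed}")

      class Registry:
          def __init__(self, name):
              self.name, self.elements, self.records = name, {}, []

          def define_element(self, element):    # add a field at any time
              self.elements[element.name] = element

          def add_record(self, **values):
              for key, value in values.items():
                  self.elements[key].validate(value)
              self.records.append(values)

      reg = Registry("rare-disease")            # invented example registry
      reg.define_element(DataElement("age", int))
      reg.define_element(DataElement("genotype", str,
                                     allowed={"A/A", "A/G", "G/G"}))
      reg.add_record(age=34, genotype="A/G")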

  9. Second generation registry framework

    PubMed Central

    2014-01-01

    Background: Information management systems are essential to capture data, be it for public health and human disease, sustainable agriculture, or plant and animal biosecurity. In public health, the term patient registry is often used to describe information management systems that are used to record and track phenotypic data of patients. Appropriate design, implementation and deployment of patient registries enables rapid decision making and ongoing data mining, ultimately leading to improved patient outcomes. A major bottleneck encountered is the static nature of these registries. That is, software developers are required to work with stakeholders to determine requirements, design the system, and implement the required data fields and functionality for each patient registry. Additionally, software developer time is required for ongoing maintenance and customisation. It is desirable to deploy a sophisticated registry framework that allows scientists and registry curators possessing standard computing skills to dynamically construct a complete patient registry from scratch and customise it for their specific needs, with little or no need to engage a software developer at any stage. Results: This paper introduces our second generation open source registry framework, which builds on our previous rare disease registry framework (RDRF). This second generation RDRF is a new approach as it empowers registry administrators to construct one or more patient registries without software developer effort. New data elements for a diverse range of phenotypic and genotypic measurements can be defined at any time. Defined data elements can then be utilised in any of the created registries. Fine-grained, multi-level user and workgroup access can be applied to each data element to ensure appropriate access and data privacy. We introduce the concept of derived data elements to assist the data element standards communities on how they might be best categorised. Conclusions: We introduce the second generation RDRF that enables the user-driven dynamic creation of patient registries. We believe this second generation RDRF is a novel approach to patient registry design, implementation and deployment and a significant advance on existing registry systems. PMID:24982690

  10. Waggle: A Framework for Intelligent Attentive Sensing and Actuation

    NASA Astrophysics Data System (ADS)

    Sankaran, R.; Jacob, R. L.; Beckman, P. H.; Catlett, C. E.; Keahey, K.

    2014-12-01

    Advances in sensor-driven computation and computationally steered sensing will greatly enable future research in fields including environmental and atmospheric sciences. We will present "Waggle," an open-source hardware and software infrastructure developed with two goals: (1) reducing the separation and latency between sensing and computing, and (2) improving the reliability and longevity of sensing-actuation platforms in challenging and costly deployments. Inspired by "deep-space probe" systems, the Waggle platform design includes features that can support longitudinal studies, deployments with varying communication links, and remote management capabilities. Waggle lowers the barrier for scientists to incorporate real-time data from their sensors into their computations and to manipulate the sensors or provide feedback through actuators. A standardized software and hardware design allows quick addition of new sensors/actuators and associated software in the nodes, and enables them to be coupled with computational codes both in situ and on external compute infrastructure. The Waggle framework currently drives the deployment of two observational systems: a portable and self-sufficient weather platform for the study of small-scale effects in Chicago's urban core, and an open-ended distributed instrument in Chicago that aims to support several research pursuits across a broad range of disciplines, including urban planning, microbiology, and computer science. Built around open-source software, hardware, and a Linux OS, the Waggle system comprises two components: the Waggle field-node and the Waggle cloud-computing infrastructure. The Waggle field-node affords a modular, scalable, fault-tolerant, secure, and extensible platform for hosting sensors and actuators in the field. It supports in situ computation and data storage, and integration with cloud-computing infrastructure. The Waggle cloud infrastructure is designed with the goal of scaling to several hundreds of thousands of Waggle nodes. It supports aggregating data from sensors hosted by the nodes, staging computation, relaying feedback to the nodes, and serving data to end-users. We will discuss the Waggle design principles and their applicability to various observational research pursuits, and demonstrate its capabilities.

  11. Intelligent Systems Technologies for Ops

    NASA Technical Reports Server (NTRS)

    Smith, Ernest E.; Korsmeyer, David J.

    2012-01-01

    As NASA supports International Space Station assembly-complete operations through 2020 (or later) and prepares for future human exploration programs, there is additional emphasis in the manned spaceflight program on finding more efficient and effective ways of providing ground-based mission support. Since 2006 this search for improvement has led to significant cross-fertilization between the NASA advanced software development community and the manned spaceflight operations community. A variety of mission operations systems and tools have been developed over the past decades as NASA has operated the Mars robotic missions, the Space Shuttle, and the International Space Station. NASA Ames Research Center has been developing and applying its advanced intelligent systems research to mission operations tools for unmanned Mars mission operations since 2001 and for manned operations with NASA Johnson Space Center since 2006. In particular, the fundamental advanced software development work under the Exploration Technology Program, and the experience and capabilities developed for the mission operations systems of the Mars surface missions (Spirit/Opportunity, Phoenix Lander, and MSL), have enhanced the development and application of advanced mission operations systems for the International Space Station and future spacecraft. This paper provides an update on the status of the development and deployment of a variety of intelligent systems technologies adopted for manned mission operations, and some discussion of the planned work for Autonomous Mission Operations in future human exploration. We discuss several specific projects between the Ames Research Center and the Johnson Space Center's Mission Operations Directorate, and how these technologies and projects are enhancing mission operations support for the International Space Station and supporting the current Autonomous Mission Operations Project for mission operations support of future human exploration programs.

  12. VIMOS Instrument Control Software Design: an Object Oriented Approach

    NASA Astrophysics Data System (ADS)

    Brau-Nogué, Sylvie; Lucuix, Christian

    2002-12-01

    The Franco-Italian VIMOS instrument is a VIsible imaging Multi-Object Spectrograph with outstanding multiplex capabilities, allowing spectra of more than 800 objects to be taken simultaneously, or integral field spectroscopy in a 54x54 arcsec area. VIMOS is being installed at the Nasmyth focus of the third Unit Telescope of the European Southern Observatory Very Large Telescope (VLT) at Mount Paranal in Chile. This paper describes the analysis, design, and implementation of the VIMOS Instrument Control System, using UML notation. Our control group followed an object-oriented software process while keeping in mind the ESO VLT standard control concepts. At the ESO VLT a complete software library is available. Rather than applying a waterfall lifecycle, the ICS project used iterative development, a lifecycle consisting of several iterations. Each iteration consisted of: capturing and evaluating the requirements, visual modeling for analysis and design, implementation, test, and deployment. Depending on the project phase, iterations focused more or less on specific activities. The result is an object model (the design model), including use-case realizations. An implementation view and a deployment view complement this product. An extract of the VIMOS ICS UML model will be presented and some implementation, integration, and test issues will be discussed.

  13. Software Defined Cyberinfrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, Ian; Blaiszik, Ben; Chard, Kyle

    Within and across thousands of science labs, researchers and students struggle to manage data produced in experiments, simulations, and analyses. Largely manual research data lifecycle management processes mean that much time is wasted, research results are often irreproducible, and data sharing and reuse remain rare. In response, we propose a new approach to data lifecycle management in which researchers are empowered to define the actions to be performed at individual storage systems when data are created or modified: actions such as analysis, transformation, copying, and publication. We term this approach software-defined cyberinfrastructure because users can implement powerful data management policies by deploying rules to local storage systems, much as software-defined networking allows users to configure networks by deploying rules to switches. We argue that this approach can enable a new class of responsive distributed storage infrastructure that will accelerate research innovation by allowing any researcher to associate data workflows with data sources, whether local or remote, for such purposes as data ingest, characterization, indexing, and sharing. We report on early experiments with this approach in the context of experimental science, in which a simple if-trigger-then-action (IFTA) notation is used to define rules.
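
    A toy reading of an if-trigger-then-action rule set might look as follows; the rule contents are invented for illustration.

      # Toy if-trigger-then-action (IFTA) rules: each rule pairs a
      # trigger predicate over a storage event with an action to run.
      # Rule contents are invented for illustration.

      rules = [
          (lambda e: e["type"] == "created" and e["path"].endswith(".h5"),
           lambda e: print(f"index and publish {e['path']}")),
          (lambda e: e["type"] == "modified",
           lambda e: print(f"re-run analysis on {e['path']}")),
      ]

      def on_storage_event(event):
          for trigger, action in rules:
              if trigger(event):        # IF the trigger matches...
                  action(event)         # ...THEN perform the action

      on_storage_event({"type": "created", "path": "/data/run42.h5"})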

  14. 10 CFR 602.19 - Records and data.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...

  15. 10 CFR 602.19 - Records and data.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...

  16. 10 CFR 602.19 - Records and data.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...

  17. 10 CFR 602.19 - Records and data.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...

  18. 10 CFR 602.19 - Records and data.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...

  19. Architecture of a framework for providing information services for public transport.

    PubMed

    García, Carmelo R; Pérez, Ricardo; Lorenzo, Alvaro; Quesada-Arencibia, Alexis; Alayón, Francisco; Padrón, Gabino

    2012-01-01

    This paper presents OnRoute, a framework for developing and running ubiquitous software that provides information services to passengers of public transportation, including payment systems and on-route guidance services. To achieve a high level of interoperability, accessibility and context awareness, OnRoute uses the ubiquitous computing paradigm. To guarantee the quality of the software produced, the reliable software principles used in critical contexts, such as automotive systems, are also considered by the framework. The main components of its architecture (run-time, system services, software components and development discipline) and how they are deployed in the transportation network (stations and vehicles) are described in this paper. Finally, to illustrate the use of OnRoute, the development of a guidance service for travellers is explained.

  20. NASA's Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Ramsay, Christopher M.

    2005-01-01

    NASA (National Aeronautics and Space Administration) relies more and more on software to control, monitor, and verify its safety-critical systems, facilities, and operations. Since the 1960s there has hardly been a spacecraft (manned or unmanned) launched that did not have a computer on board providing vital command and control services. Despite this growing dependence on software control and monitoring, there has been no consistent application of software safety practices and methodology to NASA's projects with safety-critical software. Led by the NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard (NASA-STD-8719.13B) has recently undergone a significant update in an attempt to provide that consistency. This paper discusses the key features of the new NASA Software Safety Standard. It starts with a brief history of the use and development of software in safety-critical applications at NASA, then gives a brief overview of the NASA Software Working Group and the approach it took to revise the software engineering process across the Agency.

  1. The Barriers to Acceptance of Plug-in Electric Vehicles: 2017 Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singer, Mark R.

    Vehicle manufacturers, government agencies, universities, private researchers, and organizations worldwide are pursuing advanced vehicle technologies that aim to reduce the consumption of petroleum in the forms of gasoline and diesel. Plug-in electric vehicles (PEVs) are one such technology. This report, an update to the previous version published in December 2016, details findings from a study in February 2017 of broad American public sentiments toward issues that surround PEVs. This report is supported by the U.S. Department of Energy's Vehicle Technologies Office in alignment with its mission to develop and deploy these technologies to improve energy security, enhance mobility flexibility, reduce transportation costs, and increase environmental sustainability.

  2. Web-Based Interface for Command and Control of Network Sensors

    NASA Technical Reports Server (NTRS)

    Wallick, Michael N.; Doubleday, Joshua R.; Shams, Khawaja S.

    2010-01-01

    This software allows for the visualization and control of a network of sensors through a Web browser interface. It is currently being deployed for a network of sensors monitoring Mount St. Helens volcano; however, the innovation is generic enough that it can be deployed for any type of sensor Web. From this interface, the user is able to fully control and monitor the sensor Web. This includes, but is not limited to, sending "test" commands to individual sensors in the network, monitoring for real-world events, and reacting to those events.

  3. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT, ENVIRONMENTAL DECISION SUPPORT SOFTWARE, DECISION FX, INC., GROUNDWATER FX

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has created the Environmental Technology Verification Program (ETV) to facilitate the deployment of innovative or improved environmental technologies through performance verification and dissemination of information. The goal of the...

  4. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT, ENVIRONMENTAL DECISION SUPPORT SOFTWARE, DECISION FX, INC. SAMPLING FX

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has created the Environmental Technology Verification Program (ETV) to facilitate the deployment of innovative or improved environmental technologies through performance verification and dissemination of information. The goal of the...

  5. The software architecture to control the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Oya, I.; Füßling, M.; Antonino, P. O.; Conforti, V.; Hagge, L.; Melkumyan, D.; Morgenstern, A.; Tosti, G.; Schwanke, U.; Schwarz, J.; Wegner, P.; Colomé, J.; Lyard, E.

    2016-07-01

    The Cherenkov Telescope Array (CTA) project is an initiative to build two large arrays of Cherenkov gamma-ray telescopes. CTA will be deployed as two installations, one in the northern and the other in the southern hemisphere, containing dozens of telescopes of different sizes. CTA is a big step forward in the field of ground-based gamma-ray astronomy, not only because of the expected scientific return, but also due to the order-of-magnitude larger scale of the instrument to be controlled. The performance requirements associated with such a large and distributed astronomical installation require a thoughtful analysis to determine the best software solutions. The array control and data acquisition (ACTL) work-package within the CTA initiative will deliver the software to control and acquire the data from the CTA instrumentation. In this contribution we present the current status of the formal ACTL system decomposition into software building blocks and the relationships among them. The system is modelled via the Systems Modelling Language (SysML) formalism. To cope with the complexity of the system, this architecture model is sub-divided into different perspectives. The relationships with the stakeholders and external systems are used to create the first perspective, the context of the ACTL software system. Use cases are employed to describe the interaction of those external elements with the ACTL system and are traced to a hierarchy of functionalities (abstract system functions) describing the internal structure of the ACTL system. These functions are then traced to fully specified logical elements (software components), whose deployment as technical elements is also described. This modelling approach allows us to decompose the ACTL software into elements to be created and the flow of information within the system, providing us with a clear way to identify sub-system interdependencies. This architectural approach allows us to build the ACTL system model, trace requirements to deliverables (source code, documentation, etc.), and implement a flexible use-case-driven software development approach thanks to the traceability from use cases to the logical software elements. The ALMA Common Software (ACS) container/component framework, used for the control of the Atacama Large Millimeter/submillimeter Array (ALMA), is the basis for the ACTL software and as such is considered an integral part of the software architecture.

  6. Wave data processing toolbox manual

    USGS Publications Warehouse

    Sullivan, Charlene M.; Warner, John C.; Martini, Marinna A.; Lightsom, Frances S.; Voulgaris, George; Work, Paul

    2006-01-01

    Researchers routinely deploy oceanographic equipment in estuaries, coastal nearshore environments, and shelf settings. These deployments usually include tripod-mounted instruments to measure a suite of physical parameters such as currents, waves, and pressure. Instruments such as the RD Instruments Acoustic Doppler Current Profiler (ADCP(tm)), the Sontek Argonaut, and the Nortek Aquadopp(tm) Profiler (AP) can measure these parameters. The data from these instruments must be processed using proprietary software unique to each instrument to convert measurements to real physical values. These processed files are then available for dissemination and scientific evaluation. For example, the proprietary program used to process data from the RD Instruments ADCP for wave information is called WavesMon. Depending on the length of the deployment, WavesMon will typically produce thousands of processed data files. These files are difficult to archive, and further analysis of the data becomes cumbersome. More importantly, these files alone do not include sufficient information pertinent to the deployment (metadata), which could hinder future scientific interpretation. This open-file report describes a toolbox developed to compile, archive, and disseminate the processed wave measurement data from an RD Instruments ADCP, a Sontek Argonaut, or a Nortek AP. This toolbox will be referred to as the Wave Data Processing Toolbox. The Wave Data Processing Toolbox consolidates the processed files output from the proprietary software into two NetCDF files: one file contains the statistics of the burst data and the other contains the raw burst data (additional details described below). One important advantage of this toolbox is that it converts the data into NetCDF format. Data in NetCDF format is easy to disseminate, is portable to any computer platform, and is viewable with public-domain, freely available software. Another important advantage is that a metadata structure is embedded with the data to document pertinent information regarding the deployment and the parameters used to process the data. Using this format ensures that the relevant information about how the data was collected and converted to physical units is maintained with the actual data. EPIC-standard variable names have been utilized where appropriate. These standards, developed by the NOAA Pacific Marine Environmental Laboratory (PMEL) (http://www.pmel.noaa.gov/epic/), provide a universal vernacular allowing researchers to share data without translation.
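
    The output pattern the toolbox uses, burst statistics plus embedded deployment metadata in one portable file, can be sketched with the netCDF4 package; the attribute and variable names below are placeholders, not the exact EPIC conventions.

      # Sketch of a self-describing NetCDF output file: global attributes
      # carry deployment metadata alongside the burst statistics. Names
      # are placeholders, not the toolbox's actual EPIC variable codes.

      from netCDF4 import Dataset
      import numpy as np

      with Dataset("wave_stats.nc", "w", format="NETCDF4") as ds:
          ds.instrument = "RDI ADCP"                 # deployment metadata
          ds.mooring_id = "example-001"
          ds.processing = "WavesMon output, converted"

          ds.createDimension("time", None)           # unlimited records
          t = ds.createVariable("time", "f8", ("time",))
          hs = ds.createVariable("sig_wave_height", "f4", ("time",))
          hs.units = "m"

          t[:] = np.arange(3) * 3600.0               # hourly bursts, seconds
          hs[:] = [1.2, 1.4, 1.1]                    # per-burst statistics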

  7. Towards automated traceability maintenance

    PubMed Central

    Mäder, Patrick; Gotel, Orlena

    2012-01-01

    Traceability relations support stakeholders in understanding the dependencies between artifacts created during the development of a software system and thus enable many development-related tasks. To ensure that the anticipated benefits of these tasks can be realized, it is necessary to have an up-to-date set of traceability relations between the established artifacts. This goal requires the creation of traceability relations during the initial development process. Furthermore, the goal also requires the maintenance of traceability relations over time as the software system evolves in order to prevent their decay. In this paper, an approach is discussed that supports the (semi-) automated update of traceability relations between requirements, analysis and design models of software systems expressed in the UML. This is made possible by analyzing change events that have been captured while working within a third-party UML modeling tool. Within the captured flow of events, development activities comprised of several events are recognized. These are matched with predefined rules that direct the update of impacted traceability relations. The overall approach is supported by a prototype tool and empirical results on the effectiveness of tool-supported traceability maintenance are provided. PMID:23471308
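
    As a rough, hypothetical illustration of the rule-matching idea (not the paper's actual rule set or tool), captured change events can be recognized as development activities whose rules then direct the update of impacted trace links:

        # Match captured change events against predefined activity rules;
        # a recognized activity directs the update of impacted trace links.
        RULES = [
            (("delete_class",), "remove"),    # element gone -> drop its links
            (("rename_class",), "retarget"),  # link endpoints must be updated
            (("add_class", "add_dependency"), "extend"),
        ]

        def maintain(trace_links, events):
            """events: list of (kind, element) captured in the modeling tool."""
            kinds = tuple(kind for kind, _ in events)
            for pattern, action in RULES:
                if kinds[:len(pattern)] != pattern:
                    continue
                touched = {elem for _, elem in events[:len(pattern)]}
                if action == "remove":
                    trace_links = [(s, t) for s, t in trace_links
                                   if s not in touched and t not in touched]
                # "retarget"/"extend" would rewrite or add links here.
                return action, trace_links
            return "manual-review", trace_links  # unrecognized activity

        links = [("REQ-1", "ClassA"), ("REQ-2", "ClassB")]
        print(maintain(links, [("delete_class", "ClassA")]))
        # -> ('remove', [('REQ-2', 'ClassB')])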

  8. ESSAA: Embedded system safety analysis assistant

    NASA Technical Reports Server (NTRS)

    Wallace, Peter; Holzer, Joseph; Guarro, Sergio; Hyatt, Larry

    1987-01-01

    The Embedded System Safety Analysis Assistant (ESSAA) is a knowledge-based tool that can assist in identifying disaster scenarios. Embedded software can issue hazardous control commands to the surrounding hardware. ESSAA is intended to work from outputs to inputs, as a complement to simulation and verification methods. Rather than treating the software in isolation, it examines the context in which the software is to be deployed. Given a specified disastrous outcome, ESSAA works from a qualitative, abstract model of the complete system to infer sets of environmental conditions and/or failures that could cause that outcome. The scenarios can then be examined in depth for plausibility using existing techniques.
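
    The outputs-to-inputs direction of reasoning can be pictured as a small backward search over a qualitative cause-effect model. The sketch below is a toy with an invented model, not ESSAA's knowledge base:

        # Work backward from a specified hazardous outcome to the sets of
        # environmental conditions and/or failures that could produce it.
        MODEL = {  # effect -> alternative sets of joint causes (all invented)
            "thruster_misfire": [{"bad_command", "valve_stuck_open"}],
            "bad_command": [{"software_fault"}, {"corrupted_sensor_input"}],
        }

        def cause_sets(outcome):
            if outcome not in MODEL:        # a primitive condition or failure
                return [{outcome}]
            results = []
            for alternative in MODEL[outcome]:
                combos = [set()]
                for cause in alternative:   # joint causes must all hold
                    combos = [c | s for c in combos for s in cause_sets(cause)]
                results.extend(combos)
            return results

        for scenario in cause_sets("thruster_misfire"):
            print(sorted(scenario))   # each set is one candidate scenario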

  9. SDO FlatSat Facility

    NASA Technical Reports Server (NTRS)

    Amason, David L.

    2008-01-01

    The goal of the Solar Dynamics Observatory (SDO) is to understand and, ideally, predict the solar variations that influence life and society. Its instruments will measure the properties of the Sun and will take high-definition images of the Sun every few seconds, all day every day. The FlatSat is a high-fidelity electrical and functional representation of the SDO spacecraft bus. It is a high-fidelity test bed for Integration & Test (I & T), flight software, and flight operations. For I & T purposes, FlatSat will be a driver to develop and dry-run electrical integration procedures, STOL test procedures, page displays, and the command and telemetry database. FlatSat will also serve as a platform for flight software acceptance and systems testing for the flight software system components, including the spacecraft main processors, power supply electronics, attitude control electronics, gimbal control electronics, and the S-band communications card. FlatSat will also benefit the flight operations team through post-launch flight software code and table update development and verification, and through verification of new and updated flight operations products. This document highlights the benefits of FlatSat; describes the building of FlatSat; provides FlatSat facility requirements, access roles, and responsibilities; and discusses FlatSat mechanical and electrical integration and functional testing.

  10. Desktop Publishing: The New Wave in Business Education.

    ERIC Educational Resources Information Center

    Huprich, Violet M.

    1989-01-01

    Discusses the challenges of teaching desktop publishing (DTP); the industry is in flux with the software packages constantly being updated. Indicates that the demand for those with DTP skills is great. (JOW)

  11. Software architecture of the III/FBI segment of the FBI's integrated automated identification system

    NASA Astrophysics Data System (ADS)

    Booker, Brian T.

    1997-02-01

    This paper will describe the software architecture of the Interstate Identification Index (III/FBI) Segment of the FBI's Integrated Automated Fingerprint Identification System (IAFIS). IAFIS is currently under development, with deployment to begin in 1998. III/FBI will provide the repository of criminal history and photographs for criminal subjects, as well as identification data for military and civilian federal employees. Services provided by III/FBI include maintenance of the criminal and civil data, subject search of the criminal and civil data, and response generation services for IAFIS. III/FBI software will be comprised of both COTS and an estimated 250,000 lines of developed C code. This paper will describe the following: (1) the high-level requirements of the III/FBI software; (2) the decomposition of the III/FBI software into Computer Software Configuration Items (CSCIs); (3) the top-level design of the III/FBI CSCIs; and (4) the relationships among the developed CSCIs and the COTS products that will comprise the III/FBI software.

  12. Effects of Medical Device Regulations on the Development of Stand-Alone Medical Software: A Pilot Study.

    PubMed

    Blagec, Kathrin; Jungwirth, David; Haluza, Daniela; Samwald, Matthias

    2018-01-01

    Medical device regulations, which aim to ensure safety standards, apply not only to hardware devices but also to stand-alone medical software, e.g., mobile apps. The aim of this pilot study was to explore the effects of these regulations on the development and distribution of medical stand-alone software. We invited a convenience sample of 130 domain experts to participate in an online survey about the impact of current regulations on the development and distribution of medical stand-alone software. Twenty-one respondents completed the questionnaire. Participants reported slight positive effects on usability, reliability, and data security of their products, whereas the ability to modify already deployed software and customization by end users were negatively impacted. The additional time and costs needed to go through the regulatory process were perceived as the greatest obstacles in developing and distributing medical software. Further research is needed to compare positive effects on software quality with negative impacts on market access and innovation. Strategies for avoiding over-regulation while still ensuring safety standards need to be devised.

  13. Final Report of the NASA Office of Safety and Mission Assurance Agile Benchmarking Team

    NASA Technical Reports Server (NTRS)

    Wetherholt, Martha

    2016-01-01

    With the software industry rapidly transitioning from waterfall to Agile processes, Terry Wilcutt, Chief, Safety and Mission Assurance, Office of Safety and Mission Assurance (OSMA), established the Agile Benchmarking Team (ABT) to ensure that the NASA Safety and Mission Assurance (SMA) community remains in a position to perform reliable Software Assurance (SA) on NASA's critical software (SW) systems. The Team's tasks were: 1. Research background literature on current Agile processes, 2. Perform benchmark activities with other organizations that are involved in software Agile processes to determine best practices, 3. Collect information on Agile-developed systems to enable improvements to the current NASA standards and processes to enhance their ability to perform reliable software assurance on NASA Agile-developed systems, and 4. Suggest additional guidance and recommendations for updates to those standards and processes, as needed. The ABT's findings and recommendations for software management, engineering, and software assurance are addressed herein.

  14. Performance verification of network function virtualization in software defined optical transport networks

    NASA Astrophysics Data System (ADS)

    Zhao, Yongli; Hu, Liyazhou; Wang, Wei; Li, Yajie; Zhang, Jie

    2017-01-01

    With the continuous opening of resource acquisition and application, a large variety of network hardware appliances are deployed as communication infrastructure. Launching a new network application typically implies replacing obsolete devices and providing the space and power to accommodate the new equipment, which increases energy and capital investment. Network function virtualization (NFV) aims to address these problems by consolidating many types of network equipment onto industry-standard elements such as servers, switches, and storage. Many types of IT resources have been deployed to run Virtual Network Functions (vNFs), such as virtual switches and routers. How to deploy NFV in optical transport networks is therefore a problem of great importance. This paper focuses on this problem and gives an implementation architecture for NFV-enabled optical transport networks based on Software Defined Optical Networking (SDON), with the procedure for vNF call and return. In particular, an implementation solution for an NFV-enabled optical transport node is designed, and a parallel processing method for NFV-enabled OTN nodes is proposed. To verify the performance of the NFV-enabled SDON, the protocol interaction procedures of control function virtualization and node function virtualization are demonstrated on an SDON testbed. Finally, the benefits and challenges of the parallel processing method for NFV-enabled OTN nodes are simulated and analyzed.

  15. ASKI: A modular toolbox for scattering-integral-based seismic full waveform inversion and sensitivity analysis utilizing external forward codes

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    Due to increasing computational resources, the development of new, numerically demanding methods and software for imaging Earth's interior remains of high interest in the Earth sciences. Here, we give a description, from a user's and programmer's perspective, of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are solved by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates, respectively, are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/weighting the inverted dataset, without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows the user to compose customized workflows in a consistent computational environment. ASKI is written in modern Fortran and Python, is well documented and is freely available under the terms of the GNU General Public License (http://www.rub.de/aski).
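
    Because the coupling is file-only, a whole iteration can be driven by a thin script, which also shows why the model update step is cheap to repeat. In this hedged sketch the program and file names are placeholders, not the actual ASKI executables:

        # Drive one inversion iteration as three independent programs that
        # exchange data via files only (program/file names are placeholders).
        import subprocess

        def run(cmd):
            subprocess.run(cmd, check=True)   # any external solver can slot in

        run(["forward_solver", "model.in", "-o", "wavefield.out"])      # step 1
        run(["compute_kernels", "wavefield.out", "-o", "kernels.out"])  # step 2
        # Step 3 can be re-run with different regularization without
        # re-solving the forward problem or re-computing the kernels:
        for reg in ("damping", "smoothing"):
            run(["derive_update", "kernels.out", "--regularization", reg,
                 "-o", f"model_update_{reg}.out"])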

  16. An Upgrade of the Aeroheating Software "MINIVER"

    NASA Technical Reports Server (NTRS)

    Louderback, Pierce

    2013-01-01

    Many software packages assist engineers with performing flight vehicle analysis, but some of these packages have gone many years without updates or significant improvements to their workflows. One such software package, known as MINIVER, is a powerful yet lightweight tool used for aeroheating analyses. However, it is an aging program that has not seen major improvements within the past decade. As part of a collaborative effort with the Florida Institute of Technology, MINIVER has received a major user interface overhaul, a change in program language, and will be continually receiving updates to improve its capabilities. The user interface update includes a migration from a command-line interface to that of a graphical user interface supported in the Windows operating system. The organizational structure of the pre-processor has been transformed to clearly defined categories to provide ease of data entry. Helpful tools have been incorporated, including the ability to copy sections of cases as well as a generalized importer which aids in bulk data entry. A visual trajectory editor has been included, as well as a CAD Editor which allows the user to input simplified geometries in order to generate MINIVER cases in bulk. To demonstrate its continued effectiveness, a case involving the JAXA OREX flight vehicle will be included, providing comparisons to captured flight data as well as other computational solutions. The most recent upgrade effort incorporated the use of the CAD Editor, and current efforts are investigating methods to link MINIVER projects with SINDA/Fluint and Thermal Desktop.

  17. CNA’s Integrated Ship Database, Fourth Quarter 2011 Update

    DTIC Science & Technology

    2012-09-01

    Gregory N. Suess, Lynette A. McClain, and Rhea Stone. CNA Interactive Software, DIS-2012-U...

  18. Safe and Effective Deployment of Personnel to Support the Ebola Response - West Africa.

    PubMed

    Rouse, Edward N; Zarecki, Shauna Mettee; Flowers, Donald; Robinson, Shawn T; Sheridan, Reed J; Goolsby, Gary D; Nemhauser, Jeffrey; Kuwabara, Sachiko

    2016-07-08

    From the initial task of getting "50 deployers within 30 days" into the field to support the 2014-2016 Ebola virus disease (Ebola) epidemic response in West Africa to maintaining well over 200 staff per day in the most affected countries (Guinea, Liberia, and Sierra Leone) during the peak of the response, ensuring the safe and effective deployment of international responders was an unprecedented accomplishment by CDC. Response experiences shared by CDC deployed staff returning from West Africa were quickly incorporated into lessons learned and resulted in new activities to better protect the health, safety, security, and resiliency of responding personnel. Enhanced screening of personnel to better match skill sets and experience with deployment needs was developed as a staffing strategy. The mandatory predeployment briefings were periodically updated with these lessons to ensure that staff were aware of what to expect before, during, and after their deployments. Medical clearance, security awareness, and resiliency programs became a standard part of both predeployment and postdeployment activities. Response experience also led to the identification and provision of more appropriate equipment for the environment. Supporting the social and emotional needs of deployed staff and their families also became an agency focus for care and communication. These enhancements set a precedent as a new standard for future CDC responses, regardless of size or complexity. The activities summarized in this report would not have been possible without collaboration with many U.S. and international partners (http://www.cdc.gov/vhf/ebola/outbreaks/2014-west-africa/partners.html).

  19. Automated Software Vulnerability Analysis

    NASA Astrophysics Data System (ADS)

    Sezer, Emre C.; Kil, Chongkyung; Ning, Peng

    Despite decades of research, software continues to have vulnerabilities. Successful exploitations of these vulnerabilities by attackers cost millions of dollars to businesses and individuals. Unfortunately, most effective defensive measures, such as patching and intrusion prevention systems, require an intimate knowledge of the vulnerabilities. Many systems for detecting attacks have been proposed. However, the analysis of the exploited vulnerabilities is left to security experts and programmers. Both the human effort involved and the slow analysis process are unfavorable for timely defensive measures to be deployed. The problem is exacerbated by zero-day attacks.

  1. Proposed Navy Software Acquisition Improvement Strategy

    DTIC Science & Technology

    2009-03-16

    Production and Deployment; Operations and Support; PRR; IOC; FOC; OTRR. DoD/ASN(RDA) policies call for Government SMEs to define system requirements and support milestone reviews ... of the SW, but with Government software SME oversight and insight ... verification at the Component and Segment levels is not sufficient to ensure and meet OA goals ...

  2. Joint Logistics Commanders’ Workshop on Post Deployment Software Support (PDSS) for Mission-Critical Computer Software. Volume 2. Workshop Proceedings.

    DTIC Science & Technology

    1984-06-01

    exist for the same item, as opposed to separate budget and fund codes for separate but related items. Multiple procedures and fund codes can be used...funds. If some funds are marked for multiple years and others must be obligated or outlaid within one year, contracting for PDSS tasks must be partitioned...Experience: PDSS requires both varied experience factors in multiple disciplines and the sustaining of a critical mass of experience factors and

  3. Rapidly Deployable Security System Final Report CRADA No. TC-2030-01

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kohlhepp, V.; Whiteman, B.; McKibben, M. T.

    The ultimate objective of the LEADER and LLNL strategic partnership was to develop and commercialize a security-based system product and platform for use in protecting the substantial physical and economic assets of the government and commerce of the United States. The primary goal of this project was to integrate video surveillance hardware developed by LLNL with a security software backbone developed by LEADER. Upon completion of the project, a prototype hardware/software security system that is highly scalable was to be demonstrated.

  4. Preparing Florida for deployment of SafetyAnalyst for all roads.

    DOT National Transportation Integrated Search

    2012-05-01

    SafetyAnalyst is an advanced software system designed to provide the state and local highway agencies with a comprehensive set of tools to enhance their programming of site-specific highway safety improvements. As one of the 27 states that sponsored ...

  5. Review of super Ni/Cd cell designs and performance

    NASA Technical Reports Server (NTRS)

    Abrams-Blakemore, Bruce

    1993-01-01

    Eagle-Picher Industries, Inc., in cooperation with Hughes Aircraft Company, began production of the Super Nickel-Cadmium cell in 1989. Since that time the Super Nickel-Cadmium cell has been deployed in a wide variety of satellites. This paper will review one of those programs and provide a performance update. We will discuss storage requirements and capacity histories for the various Super NiCad Cell designs.

  6. Scalable Data Management, Analysis, and Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Han-Wei

    This report is the entire final report for the SciDAC project, authored by the whole team; OSU is one of the contributors to the report. This report is organized into sections and subsections, each covering an area of development and deployment of technologies applied to scientific applications of interest to the Department of Energy. Each sub-section includes: 1) a summary description of the research, development, and deployment carried out, the results, and the extent to which the stated project objectives were met; 2) significant results, including major findings, developments, or conclusions; 3) products, such as publications and presentations, software developed, project website(s), technologies or techniques, inventions, awards, etc.; and 4) conclusions of the projects and future directions for research, development, and deployment in this technology area.

  7. I3Mote: An Open Development Platform for the Intelligent Industrial Internet

    PubMed Central

    Martinez, Borja; Vilajosana, Xavier; Kim, Il Han; Zhou, Jianwei; Tuset-Peiró, Pere; Xhafa, Ariton; Poissonnier, Dominique; Lu, Xiaolin

    2017-01-01

    In this article we present the Intelligent Industrial Internet (I3) Mote, an open hardware platform targeting industrial connectivity and sensing deployments. The I3Mote features the most advanced low-power components to tackle sensing, on-board computing and wireless/wired connectivity for demanding industrial applications. The platform has been designed to fill the gap in the industrial prototyping and early deployment market with a compact form factor, low cost and robust industrial design. I3Mote is an advanced and compact prototyping system integrating the components required for deployment as a product, reducing the need for adopting industries to build their own tailored solutions. This article describes the platform design, firmware and software ecosystem, and characterizes its performance in terms of energy consumption. PMID:28452945

  8. Update on Progress of Space Station Integrated Kinetic Launcher for Orbital Payload Systems (SSIKLOPS) - Cyclops

    NASA Technical Reports Server (NTRS)

    Newswander, Daniel; Smith, James P.; Lamb, Craig R.; Ballard, Perry G.

    2014-01-01

    The Space Station Integrated Kinetic Launcher for Orbital Payload Systems (SSIKLOPS), known as "Cyclops" to the International Space Station (ISS) community, was introduced last August (2013) during Technical Session V: From Earth to Orbit of the 27th Annual AIAA/USU Conference on Small Satellites. Cyclops is a collaboration between the NASA ISS Program, NASA Johnson Space Center Engineering, and Department of Defense (DoD) Space Test Program (STP) communities to develop a dedicated 50-100 kg class ISS small satellite deployment system. This paper will address the progress of Cyclops through its fabrication, assembly, flight certification, and on-orbit demonstration phases. It will also go into more detail regarding its anatomy, its satellite deployment concept of operations, and its satellite interfaces and requirements. Cyclops is manifested to fly on Space-X 4, which is currently scheduled for July 2014, with its initial satellite deployment demonstration of DoD STP's SpinSat and UT/TAMU's Lonestar satellites planned for late summer or fall of 2014.

  9. An Assessment of Integrated Health Management (IHM) Frameworks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    N. Lybeck; M. Tawfik; L. Bond

    In order to meet the ever-increasing demand for energy, the United States nuclear industry is turning to life extension of existing nuclear power plants (NPPs). Economically ensuring the safe, secure, and reliable operation of aging nuclear power plants presents many challenges. The 2009 Light Water Reactor Sustainability Workshop identified online monitoring of active and structural components as essential to the better understanding and management of the challenges posed by aging nuclear power plants. Additionally, there is increasing adoption of condition-based maintenance (CBM) for active components in NPPs. These techniques provide a foundation upon which a variety of advanced online surveillance, diagnostic, and prognostic techniques can be deployed to continuously monitor and assess the health of NPP systems and components. The next step in the development of advanced online monitoring is to move beyond CBM to estimating the remaining useful life of active components using prognostic tools. Deployment of prognostic health management (PHM) on the scale of an NPP requires the use of an integrated health management (IHM) framework - a software product (or suite of products) used to manage the elements needed for a complete implementation of online monitoring and prognostics. This paper provides a thoughtful look at the desirable functions and features of IHM architectures. A full PHM system involves several modules, including data acquisition, system modeling, fault detection, fault diagnostics, system prognostics, and advisory generation (operations and maintenance planning). The standards applicable to PHM applications are identified and summarized. A list of evaluation criteria for PHM software products, developed to ensure scalability of the toolset to an environment with the complexity of an NPP, is presented. Fourteen commercially available PHM software products are identified and classified into four groups: research tools, PHM system development tools, deployable architectures, and peripheral tools.

  10. Lowering the Barrier for Standards-Compliant and Discoverable Hydrological Data Publication

    NASA Astrophysics Data System (ADS)

    Kadlec, J.

    2013-12-01

    The growing need for sharing and integration of hydrological and climate data across multiple organizations has resulted in the development of distributed, services-based, standards-compliant hydrological data management and data hosting systems. The problem with these systems is complicated set-up and deployment. Many existing systems assume that the data publisher has remote-desktop access to a locally managed server and experience with computer network setup. For corporate websites, shared web hosting services with limited root access provide an inexpensive, dynamic web presence solution using the Linux, Apache, MySQL and PHP (LAMP) software stack. In this paper, we hypothesize that a webhosting service provides an optimal, low-cost solution for hydrological data hosting. We propose a software architecture of a standards-compliant, lightweight and easy-to-deploy hydrological data management system that can be deployed on the majority of existing shared internet webhosting services. The architecture and design is validated by developing Hydroserver Lite: a PHP and MySQL-based hydrological data hosting package that is fully standards-compliant and compatible with the Consortium of Universities for Advancement of Hydrologic Sciences (CUAHSI) hydrologic information system. It is already being used for management of field data collection by students of the McCall Outdoor Science School in Idaho. For testing, the Hydroserver Lite software has been installed on multiple different free and low-cost webhosting sites including Godaddy, Bluehost and 000webhost. The number of steps required to set-up the server is compared with the number of steps required to set-up other standards-compliant hydrologic data hosting systems including THREDDS, IstSOS and MapServer SOS.

  11. Technology for Libraries.

    ERIC Educational Resources Information Center

    Phenix, Katharine; And Others

    1990-01-01

    Five articles discuss information technology in libraries: (1) "Software for Libraries" (Katharine Phenix); (2) "Online Update: European Online Services" (Martin Kesselman); (3) "Connect Time: Online Pricing Breakthroughs" (Barbara Quint); (4) "Microcomputing: Micro Biology Computer Viruses" (James LaRue);…

  12. Proactive Security Testing and Fuzzing

    NASA Astrophysics Data System (ADS)

    Takanen, Ari

    Software is bound to have security-critical flaws, and no testing or code auditing can ensure that software is flawless. But software security testing requirements have improved radically during the past years, largely due to criticism from security-conscious consumers and enterprise customers. Whereas in the past security flaws were taken for granted (and patches were quietly and humbly installed), they are now probably one of the most common reasons why people switch vendors or software providers. The maintenance costs from security updates often add up to become one of the biggest cost items for large enterprise users. Fortunately, test automation techniques have also improved. Techniques like model-based testing (MBT) enable efficient generation of security tests that reach good confidence levels in discovering zero-day mistakes in software. This technique is called fuzzing.
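
    A bare-bones random-mutation fuzzer conveys the core loop; a model-based fuzzer would generate inputs from a protocol model instead of the blind byte flipping shown here (the target name and crash criterion are invented):

        # Mutate a known-good input and watch the target for crashes.
        import random
        import subprocess

        SEED = b"GET /index.html HTTP/1.0\r\n\r\n"   # a valid sample input

        def mutate(data: bytes) -> bytes:
            buf = bytearray(data)
            for _ in range(random.randint(1, 8)):
                buf[random.randrange(len(buf))] = random.randrange(256)
            return bytes(buf)

        for i in range(1000):
            case = mutate(SEED)
            result = subprocess.run(["./target_under_test"], input=case,
                                    capture_output=True)
            if result.returncode < 0:   # terminated by a signal: likely crash
                with open(f"crash_{i}.bin", "wb") as f:
                    f.write(case)       # keep the input that triggered it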

  13. Software for autonomous astronomical observatories: challenges and opportunities in the age of big data

    NASA Astrophysics Data System (ADS)

    Sybilski, Piotr W.; Pawłaszek, Rafał; Kozłowski, Stanisław K.; Konacki, Maciej; Ratajczak, Milena; Hełminiak, Krzysztof G.

    2014-07-01

    We present the software solution developed for a network of autonomous telescopes, deployed and tested in the Solaris Project. The software aims to fulfil the contemporary needs of distributed autonomous observatories housing medium-sized telescopes: ergonomics, availability, security and reusability. The datafication of such facilities seems inevitable, and we give a preliminary study of the challenges and opportunities waiting for software developers. Project Solaris is a global network of four 0.5 m autonomous telescopes conducting a survey of eclipsing binaries in the Southern Hemisphere. The Project's goal is to detect and characterise circumbinary planets using the eclipse timing method. The observatories are located on three continents, and the headquarters coordinating and monitoring the network is in Poland. All four are operational as of December 2013.

  14. Integrated Systems Health Management (ISHM) Toolkit

    NASA Technical Reports Server (NTRS)

    Venkatesh, Meera; Kapadia, Ravi; Walker, Mark; Wilkins, Kim

    2013-01-01

    A framework of software components has been implemented to facilitate the development of ISHM systems according to a methodology based on Reliability Centered Maintenance (RCM). This framework is collectively referred to as the Toolkit and was developed using General Atomics' Health MAP (TM) technology. The toolkit is intended to provide assistance to software developers of mission-critical system health monitoring applications in the specification, implementation, configuration, and deployment of such applications. In addition to software tools designed to facilitate these objectives, the toolkit also provides direction to software developers in accordance with an ISHM specification and development methodology. The development tools are based on an RCM approach for the development of ISHM systems. This approach focuses on defining, detecting, and predicting the likelihood of system functional failures and their undesirable consequences.
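
    In the RCM spirit of defining, detecting, and predicting functional failures, the detection and prognostic kernel of such a system can be reduced to checks like the toy sketch below (thresholds and the linear trend model are invented for illustration and are not the Toolkit's algorithms):

        # Detect a functional failure and project when a trending channel
        # will cross its limit (toy model; all numbers hypothetical).
        def detect(value, lo=10.0, hi=90.0):
            return not (lo <= value <= hi)

        def steps_to_limit(history, hi=90.0):
            """Linear extrapolation: steps until the signal exceeds hi."""
            rate = (history[-1] - history[0]) / (len(history) - 1)
            return float("inf") if rate <= 0 else (hi - history[-1]) / rate

        readings = [70.0, 74.0, 78.5, 83.0]   # monitored channel samples
        assert not detect(readings[-1])       # no failure yet
        print(steps_to_limit(readings))       # ~1.6 samples at this trend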

  15. Software Authority Transition through Multiple Distributors

    PubMed Central

    Han, Kyusunk; Shon, Taeshik

    2014-01-01

    The rapid growth in the use of smartphones and tablets has changed the software distribution ecosystem. The trend today is to purchase software through application stores rather than from traditional offline markets. Smartphone and tablet users can install applications easily by purchasing from the online store deployed in their device. Several systems, such as Android or PC-based OS units, allow users to install software from multiple sources. Such openness, however, can promote serious threats, including malware and illegal usage. In order to prevent such threats, several stores use online authentication techniques. These methods can, however, also present a problem whereby even licensed users cannot use their purchased application. In this paper, we discuss these issues and provide an authentication method that will make purchased applications available to the registered user at all times. PMID:25143971
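
    One generic way to keep a purchased application available to its registered user at all times is to verify a store-issued license locally rather than contacting the store at every launch. The sketch below is illustrative only and is not the scheme proposed in the paper; it uses a symmetric HMAC for brevity, where a real store would use public-key signatures so that the verification key on the device cannot forge licenses:

        # Issue a license bound to (user, app) at purchase time; the device
        # verifies it offline on every launch.
        import hashlib, hmac, json

        STORE_KEY = b"store-secret-key"   # placeholder signing key

        def issue_license(user_id: str, app_id: str) -> dict:
            payload = json.dumps({"user": user_id, "app": app_id},
                                 sort_keys=True)
            tag = hmac.new(STORE_KEY, payload.encode(),
                           hashlib.sha256).hexdigest()
            return {"payload": payload, "tag": tag}

        def verify_license(lic: dict) -> bool:
            expected = hmac.new(STORE_KEY, lic["payload"].encode(),
                                hashlib.sha256).hexdigest()
            return hmac.compare_digest(expected, lic["tag"])

        lic = issue_license("alice", "com.example.app")
        assert verify_license(lic)   # no network connection required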

  16. Architecture of a Framework for Providing Information Services for Public Transport

    PubMed Central

    García, Carmelo R.; Pérez, Ricardo; Lorenzo, Álvaro; Quesada-Arencibia, Alexis; Alayón, Francisco; Padrón, Gabino

    2012-01-01

    This paper presents OnRoute, a framework for developing and running ubiquitous software that provides information services to passengers of public transportation, including payment systems and on-route guidance services. To achieve a high level of interoperability, accessibility and context awareness, OnRoute uses the ubiquitous computing paradigm. To guarantee the quality of the software produced, the reliable software principles used in critical contexts, such as automotive systems, are also considered by the framework. The main components of its architecture (run-time, system services, software components and development discipline) and how they are deployed in the transportation network (stations and vehicles) are described in this paper. Finally, to illustrate the use of OnRoute, the development of a guidance service for travellers is explained. PMID:22778585

  17. A Generic Software Architecture For Prognostics

    NASA Technical Reports Server (NTRS)

    Teubert, Christopher; Daigle, Matthew J.; Sankararaman, Shankar; Goebel, Kai; Watkins, Jason

    2017-01-01

    Prognostics is a systems engineering discipline focused on predicting end-of-life of components and systems. As a relatively new and emerging technology, there are few fielded implementations of prognostics, due in part to practitioners perceiving a large hurdle in developing the models, algorithms, architecture, and integration pieces. As a result, no open software frameworks for applying prognostics currently exist. This paper introduces the Generic Software Architecture for Prognostics (GSAP), an open-source, cross-platform, object-oriented software framework and support library for creating prognostics applications. GSAP was designed to make prognostics more accessible and enable faster adoption and implementation by industry, by reducing the effort and investment required to develop, test, and deploy prognostics. This paper describes the requirements, design, and testing of GSAP. Additionally, a detailed case study involving battery prognostics demonstrates its use.

  18. CONRAD Software Architecture

    NASA Astrophysics Data System (ADS)

    Guzman, J. C.; Bennett, T.

    2008-08-01

    The Convergent Radio Astronomy Demonstrator (CONRAD) is a collaboration between the computing teams of two SKA pathfinder instruments, MeerKAT (South Africa) and ASKAP (Australia). Our goal is to produce the required common software to operate, process and store the data from the two instruments. Both instruments are synthesis arrays composed of a large number of antennas (40 - 100) operating at centimeter wavelengths with wide-field capabilities. Key challenges are the processing of high volumes of data in real time as well as the remote mode of operations. Here we present the software architecture for CONRAD. Our design approach is to maximize the use of open solutions and third-party software widely deployed in commercial applications, such as SNMP and LDAP, and to utilize modern web-based technologies for the user interfaces, such as AJAX.

  19. GeoGebra 3D from the Perspectives of Elementary Pre-Service Mathematics Teachers Who Are Familiar with a Number of Software Programs

    ERIC Educational Resources Information Center

    Baltaci, Serdal; Yildiz, Avni

    2015-01-01

    Each new version of the GeoGebra dynamic mathematics software goes through updates and innovations. One of these innovations is the GeoGebra 5.0 version. This version aims to facilitate 3D instruction by offering opportunities for students to analyze 3D objects. A review of previous studies of GeoGebra 3D shows that they mainly focus…

  1. Exe-Guard Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Rhett; Marshall, Tim; Chavez, Adrian

    The exe-Guard Project is an alliance between Dominion Virginia Power (DVP), Sandia National Laboratories (SNL), Dartmouth University, and Schweitzer Engineering Laboratories (SEL). SEL is the primary recipient on this project. The exe-Guard project was selected for award under DE-FOA-0000359 with CFDA number 81.122 to address Topic Area of Interest 4: Hardened Platforms and Systems. The exe-Guard project developed an antivirus solution for control system embedded devices to prevent the execution of unauthorized code and maintain settings and configuration integrity. This project created a white list antivirus solution for control systems capable of running on embedded Linux® operating systems. White list antivirus methods allow only credible programs to run through the use of digital signatures and hash functions. Once a system’s secure state is baselined, white list antivirus software denies deviations from that state caused by the installation of malicious code, as this changes hash results. Black list antivirus software has been effective in traditional IT environments but has negative implications for control systems. Black list antivirus uses pattern matching and behavioral analysis to identify system threats while relying on regular updates to the signature file and recurrent system scanning. Black list antivirus is vulnerable to zero-day exploits which have not yet been incorporated into a signature file update. System scans hamper the performance of high-availability applications, as revealed in NIST special publication 1058, which summarizes the impact of blacklist antivirus on control systems: Manual or “on-demand” scanning has a major effect on control processes in that they take CPU time needed by the control process (sometimes close to 100% of CPU time). Minimizing the antivirus software throttle setting will reduce but not eliminate this effect. Signature updates can also take up to 100% of CPU time, but for a much shorter period than a typical manual scanning process. Control systems are vulnerable to performance losses if off-the-shelf blacklist antivirus solutions aren’t implemented with care. This investment in configuration, in addition to constant decommissioning to perform manual signature file updates, is unprecedented and impractical. Additionally, control systems are often disconnected or islanded from the network, making the delivery of signature updates difficult. The exe-Guard project developed a white list antivirus solution that mitigated the above drawbacks and allows control systems to cost-effectively apply malware protection while maintaining high reliability. The application of security patches can also be minimized, since white listing maintains constant defense against unauthorized code execution. Security patches can instead be applied at less frequent intervals, for which system decommissioning can be scheduled and planned. Since control systems are less dynamic than IT environments, maintaining a secure baselined state is more practical. Because upgrades are performed in infrequent, calculated intervals, a new security baseline can be established before the system is returned to service. Exe-Guard built on the efforts of SNL under the Code Seal project. SNL demonstrated prototype Trust Anchors on that project, which are independent monitoring and control devices that can be integrated into untrustworthy components.
The exe-Guard team started with the lessons learned under this project and then designed a commercial solution for white list malware protection. Malware is a real threat, even on islanded or un-networked installations, since operators can unintentionally install infected files, plug in infected mass storage devices, or infect a piece of equipment on the islanded local area network that can then spread to other connected equipment. Protection at the device level is one of the last layers of defense in a security-in-depth defense model before an asset becomes compromised. This project provided a non-destructive intrusion, isolation, and automated response solution, achieving a goal of the Department of Energy (DOE) Roadmap to Secure Control Systems. It also addressed CIP-007-R4, which requires asset owners to employ malicious software prevention tools on assets within the electronic security perimeter. In addition, the CIP-007-R3 requirement for security patch management is minimized because white listing narrows the impact of vulnerabilities and patch releases. The exe-Guard Project completed all tasks identified in the statement of project objectives and identified additional tasks within scope that were performed and completed within the original budget. The cost share was met and all deliverables were successfully completed and submitted on time. Most importantly, the technology developed and commercialized under this project has been adopted by the energy sector, and thousands of devices with exe-Guard technology integrated in them have now been deployed and are protecting our power systems today.
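
    The execution-control idea reduces to a hash check against the baselined allow list, along the lines of this hypothetical sketch (exe-Guard itself is integrated into the embedded Linux platform and also uses digital signatures; the paths and digests below are placeholders):

        # Allow a binary to run only if its hash matches the baselined state.
        import hashlib, subprocess, sys

        ALLOW_LIST = {
            # path -> SHA-256 digest recorded when the system was baselined
            "/usr/local/bin/relay_ctl": "9f2b...e7a1",   # placeholder digest
        }

        def sha256(path):
            with open(path, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()

        def guarded_exec(path, *args):
            if ALLOW_LIST.get(path) != sha256(path):
                sys.exit(f"blocked: {path} deviates from baselined state")
            subprocess.run([path, *args], check=True)

    Because only deviations from the baseline are denied, no signature files or recurrent scans are needed, which is why the approach suits islanded control systems.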

  2. REVEAL: Software Documentation and Platform Migration

    NASA Technical Reports Server (NTRS)

    Wilson, Michael A.; Veibell, Victoir T.; Freudinger, Lawrence C.

    2008-01-01

    The Research Environment for Vehicle Embedded Analysis on Linux (REVEAL) is reconfigurable data acquisition software designed for network-distributed test and measurement applications. In development since 2001, it has been successfully demonstrated in support of a number of actual missions within NASA's Suborbital Science Program. Improvements to software configuration control were needed to properly support both an ongoing transition to operational status and continued evolution of REVEAL capabilities. For this reason the project described in this report targets REVEAL software source documentation and deployment of the software on a small set of hardware platforms different from what is currently used in the baseline system implementation. This report specifically describes the actions taken over a ten-week period by two undergraduate student interns and serves as a final report for that internship. The topics discussed include: the documentation of REVEAL source code; the migration of REVEAL to other platforms; and an end-to-end field test that successfully validates the efforts.

  3. LightForce Photon-Pressure Collision Avoidance: Updated Efficiency Analysis Utilizing a Highly Parallel Simulation Approach

    DTIC Science & Technology

    2014-09-01

    simulation time frame from 30 days to one year. This was enabled by porting the simulation to the Pleiades supercomputer at NASA Ames Research Center, a...including the motivation for changes to our past approach. We then present the software implementation (3) on the NASA Ames Pleiades supercomputer...significantly updated since last year’s paper [25]. The main incentive for that was the shift to a highly parallel approach in order to utilize the Pleiades

  4. Atmospheric release model for the E-area low-level waste facility: Updates and modifications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The atmospheric release model (ARM) utilizes GoldSim® Monte Carlo simulation software (GTG, 2017) to evaluate the flux of gaseous radionuclides as they volatilize from E-Area disposal facility waste zones, diffuse into the air-filled soil pores surrounding the waste, and emanate at the land surface. This report documents the updates and modifications to the ARM for the next planned E-Area PA considering recommendations from the 2015 PA strategic planning team outlined by Butcher and Phifer.

  5. Balancing exploration and exploitation in population-based sampling improves fragment-based de novo protein structure prediction.

    PubMed

    Simoncini, David; Schiex, Thomas; Zhang, Kam Y J

    2017-05-01

    Conformational search space exploration remains a major bottleneck for protein structure prediction methods. Population-based meta-heuristics typically make it possible to control the search dynamics and to tune the balance between local energy minimization and search space exploration. EdaFold is a fragment-based approach that can guide search by periodically updating the probability distribution over the fragment libraries used during model assembly. We implement the EdaFold algorithm as a Rosetta protocol and provide two different probability update policies: a cluster-based variation (EdaRose_c) and an energy-based one (EdaRose_en). We analyze the search dynamics of our new Rosetta protocols and show that EdaRose_c is able to provide predictions with lower Cα RMSD to the native structure than EdaRose_en and the Rosetta AbInitio Relax protocol. Our software is freely available as a C++ patch for the Rosetta suite and can be downloaded from http://www.riken.jp/zhangiru/software/. Our protocols can easily be extended in order to create alternative probability update policies and generate new search dynamics. Proteins 2017; 85:852-858. © 2017 Wiley Periodicals, Inc.
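
    A simplified picture of an energy-based update policy: fragments that contributed to low-energy models have their sampling probability boosted in the next assembly round. The sketch uses a plain Boltzmann weighting and invented names and is not the published EdaRose_en implementation:

        # Re-weight a fragment library by the energies of the models each
        # fragment contributed to (lower energy -> higher probability).
        import math, random

        def update_distribution(fragments, energies, temperature=1.0):
            weights = [math.exp(-e / temperature) for e in energies]
            total = sum(weights)
            return {f: w / total for f, w in zip(fragments, weights)}

        library = ["frag_A", "frag_B", "frag_C"]
        probs = update_distribution(library, energies=[-12.0, -9.5, -3.1])
        # The next assembly round samples from the updated distribution:
        pick = random.choices(library, weights=[probs[f] for f in library])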

  6. Web-Enabled Systems for Student Access.

    ERIC Educational Resources Information Center

    Harris, Chad S.; Herring, Tom

    1999-01-01

    California State University, Fullerton is developing a suite of server-based, Web-enabled applications that distribute the functionality of its student information system software to external customers without modifying the mainframe applications or databases. The cost-effective, secure, and rapidly deployable business solution involves using the…

  7. 2005 5th Annual CMMI Technology Conference and User Group. Volume 4: Thursday

    DTIC Science & Technology

    2005-11-17

    Identification and Involvement in the CMMI, Mr. James R. Armstrong, Systems and Software Consortium; Ensuring the Right Process is Deployed Right ...

  8. Asset Reuse of Images from a Repository

    ERIC Educational Resources Information Center

    Herman, Deirdre

    2014-01-01

    According to Markus's theory of reuse, when digital repositories are deployed to collect and distribute organizational assets, they supposedly help ensure accountability, extend information exchange, and improve productivity. Such repositories require a large investment due to the continuing costs of hardware, software, user licenses, training,…

  9. Lunar Applications in Reconfigurable Computing

    NASA Technical Reports Server (NTRS)

    Somervill, Kevin

    2008-01-01

    NASA's Constellation Program is developing a lunar surface outpost in which reconfigurable computing will play a significant role. Reconfigurable systems provide a number of benefits over conventional software-based implementations, including performance and power efficiency, while the use of standardized reconfigurable hardware provides opportunities to reduce logistical overhead. The current vision for the lunar surface architecture includes habitation, mobility, and communications systems, each of which greatly benefits from reconfigurable hardware in applications including video processing, natural feature recognition, data formatting, IP offload processing, and embedded control systems. In deploying reprogrammable hardware, considerations similar to those of software systems must be managed. There needs to be a mechanism for discovery enabling applications to locate and utilize the available resources. Also, application interfaces are needed to provide for both configuring the resources as well as transferring data between the application and the reconfigurable hardware. Each of these topics is explored in the context of deploying reconfigurable resources as an integral aspect of the lunar exploration architecture.
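
    The discovery and interface requirements sketched above amount to a small capability registry. The following is a purely illustrative sketch; the names and the configure callback are invented:

        # Applications locate reconfigurable resources by capability, then
        # configure them through a uniform interface.
        REGISTRY = {}

        def advertise(capability, device, configure):
            REGISTRY.setdefault(capability, []).append((device, configure))

        def acquire(capability, bitstream):
            device, configure = REGISTRY[capability][0]  # naive first match
            configure(bitstream)        # load the application logic
            return device

        advertise("video_processing", "fpga0",
                  lambda bs: print(f"fpga0 <- {bs}"))
        dev = acquire("video_processing", "feature_recognition.bit")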

  10. The GMOD Drupal bioinformatic server framework.

    PubMed

    Papanicolaou, Alexie; Heckel, David G

    2010-12-15

    Next-generation sequencing technologies have led to the widespread use of -omic applications. As a result, there is now a pronounced bioinformatic bottleneck. The general model organism database (GMOD) tool kit (http://gmod.org) has produced a number of resources aimed at addressing this issue. It lacks, however, a robust online solution that can deploy heterogeneous data and software within a Web content management system (CMS). We present a bioinformatic framework for the Drupal CMS. It consists of three modules. First, GMOD-DBSF is an application programming interface module for the Drupal CMS that simplifies the programming of bioinformatic Drupal modules. Second, the Drupal Bioinformatic Software Bench (biosoftware_bench) allows for a rapid and secure deployment of bioinformatic software. An innovative graphical user interface (GUI) guides both use and administration of the software, including the secure provision of pre-publication datasets. Third, we present genes4all_experiment, which exemplifies how our work supports the wider research community. Given the infrastructure presented here, the Drupal CMS may become a powerful new tool set for bioinformaticians. The GMOD-DBSF base module is an expandable community resource that decreases development time of Drupal modules for bioinformatics. The biosoftware_bench module can already enhance biologists' ability to mine their own data. The genes4all_experiment module has already been responsible for archiving of more than 150 studies of RNAi from Lepidoptera, which were previously unpublished. Implemented in PHP and Perl. Freely available under the GNU Public License 2 or later from http://gmod-dbsf.googlecode.com.

  11. NASA Tech Briefs, February 2003

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Topics covered include: Integrated Electrode Arrays for Neuro-Prosthetic Implants; Eroding Potentiometers; Common/Dependent-Pressure-Vessel Nickel-Hydrogen Batteries; 120-GHz HEMT Oscillator With Surface-Wave-Assisted Antenna; 80-GHz MMIC HEMT Voltage-Controlled Oscillator; High-Energy-Density Capacitors; Microscale Thermal-Transpiration Gas Pump; Instrument for Measuring Temperature of Water; Improved Measurement of Coherence in Presence of Instrument Noise; Compact Instruments Measure Helium-Leak Rates; Irreversible Entropy Production in Two-Phase Mixing Layers; Subsonic and Supersonic Effects in Bose-Einstein Condensate; Nanolaminate Mirrors With "Piston" Figure-Control Actuators; Mixed Conducting Electrodes for Better AMTEC Cells; Process for Encapsulating Protein Crystals; Lightweight, Self-Deployable Wheels; Grease-Resistant O Rings for Joints in Solid Rocket Motors; LabVIEW Serial Driver Software for an Electronic Load; Software Computes Tape-Casting Parameters; Software for Tracking Costs of Mars Projects; Software for Replicating Data Between X.500 and LDAP Directories; The Technical Work Plan Tracking Tool; Improved Multiple-DOF SAW Piezoelectric Motors; Propulsion Flight-Test Fixture; Mechanical Amplifier for a Piezoelectric Transducer; Swell Sleeves for Testing Explosive Devices; Linear Back-Drive Differentials; Miniature Inchworm Actuators Fabricated by Use of LIGA; Using ERF Devices to Control Deployments of Space Structures; High-Temperature Switched-Reluctance Electric Motor; System for Centering a Turbofan in a Nacelle During Tests; Fabricating Composite-Material Structures Containing SMA Ribbons; Optimal Feedback Control of Thermal Networks; Artifacts for Calibration of Submicron Width Measurements; Navigating a Mobile Robot Across Terrain Using Fuzzy Logic; Designing Facilities for Collaborative Operations; and Quantitating Iron in Serum Ferritin by Use of ICP-MS.

  12. The development of expertise using an intelligent computer-aided training system

    NASA Technical Reports Server (NTRS)

    Johnson, Debra Steele

    1991-01-01

    An initial examination was conducted of an Intelligent Tutoring System (ITS) developed for use in industry. The ITS, developed by NASA, simulated a satellite deployment task. More specifically, the PD (Payload Assist Module Deployment)/ICAT (Intelligent Computer Aided Training) System simulated a nominal Payload Assist Module (PAM) deployment. The development of expertise on this task was examined using three Flight Dynamics Officer (FDO) candidates who had no previous experience with this task. The results indicated that performance improved rapidly until Trial 5, followed by more gradual improvements through Trial 12. The performance dimensions measured included performance speed, actions completed, errors, help required, and display fields checked. Suggestions for further refining the software and for deciding when to expose trainees to more difficult task scenarios are discussed. Further, the results provide an initial demonstration of the effectiveness of the PD/ICAT system in training the nominal PAM deployment task and indicate the potential benefits of using ITS's for training other FDO tasks.

  13. The development of expertise on an intelligent tutoring system

    NASA Technical Reports Server (NTRS)

    Johnson, Debra Steele

    1989-01-01

    An initial examination was conducted of an Intelligent Tutoring System (ITS) developed for use in industry. The ITS, developed by NASA, simulated a satellite deployment task. More specifically, the PD (Payload Assist Module Deployment)/ICAT (Intelligent Computer Aided Training) System simulated a nominal Payload Assist Module (PAM) deployment. The development of expertise on this task was examined using three Flight Dynamics Officer (FDO) candidates who had no previous experience with this task. The results indicated that performance improved rapidly until Trial 5, followed by more gradual improvements through Trial 12. The performance dimensions measured included performance speed, actions completed, errors, help required, and display fields checked. Suggestions for further refining the software and for deciding when to expose trainees to more difficult task scenarios are discussed. Further, the results provide an initial demonstration of the effectiveness of the PD/ICAT system in training the nominal PAM deployment task and indicate the potential benefits of using ITS's for training other FDO tasks.

  14. A data management system for weight control and design-to-cost

    NASA Technical Reports Server (NTRS)

    Bryant, J. C.

    1978-01-01

    The definition of the mass properties data of an aircraft changes on a daily basis, as do the design details of the aircraft. This dynamic nature of the definition has generally encouraged those responsible for the data to update it only on a weekly or monthly basis. The by-product of these infrequent updates was the need for manual records to track daily activity. The development of WAVES changed the approach to management of mass properties data. WAVES provides the ability to update the data on a daily basis, thereby eliminating the need for manual records, and has demonstrated that a software product can support a data management system for engineering data.

  15. PAVECHECK: training material updated user's manual including GPS.

    DOT National Transportation Integrated Search

    2009-01-01

    PAVECHECK is a software package used to integrate nondestructive test data from various testing systems to provide the pavement engineer with a comprehensive evaluation of both surface and subsurface conditions. This User's Manual is intended to demo...

  16. Emsoft User's Guide and Modeling Software (2002 Update)

    EPA Science Inventory

    Chemicals that readily vaporize at relatively low temperatures can migrate from contaminated soils into the atmosphere via a process called volatilization. Volatilization represents a potentially significant exposure pathway because humans can come in contact with volatilized com...

  17. Methodology update for estimating volume to service flow ratio.

    DOT National Transportation Integrated Search

    2015-12-01

    Volume/service flow ratio (VSF) is calculated by the Highway Performance Monitoring System (HPMS) software as an indicator of peak hour congestion. It is an essential input to the Kentucky Transportation Cabinet's (KYTC) key planning applications, ...

  18. A Checklist for Submitting Your Risk Management Plan (RMP)

    EPA Pesticide Factsheets

    Important information about 2014 submissions and a checklist to consider in preparing and resubmitting a 5-year update, as required by 40 CFR part 68. Use the RMP*eSubmit software application, which replaced RMP*Submit.

  19. Electronic Flight Bag (EFB) 2015 Industry Survey.

    DOT National Transportation Integrated Search

    2015-10-01

    This document provides an overview of Electronic Flight Bag (EFB) hardware and software capabilities, including portable electronic devices (PEDs) used as EFBs, as of July 2015. This document updates and replaces the Volpe Center's previous EFB ind...

  20. New Software for Ensemble Creation in the Spitzer-Space-Telescope Operations Database

    NASA Technical Reports Server (NTRS)

    Laher, Russ; Rector, John

    2004-01-01

    Some of the computer pipelines used to process digital astronomical images from NASA's Spitzer Space Telescope require multiple input images, in order to generate high-level science and calibration products. The images are grouped into ensembles according to well documented ensemble-creation rules by making explicit associations in the operations Informix database at the Spitzer Science Center (SSC). The advantage of this approach is that a simple database query can retrieve the required ensemble of pipeline input images. New and improved software for ensemble creation has been developed. The new software is much faster than the existing software because it uses pre-compiled database stored-procedures written in Informix SPL (SQL programming language). The new software is also more flexible because the ensemble creation rules are now stored in and read from newly defined database tables. This table-driven approach was implemented so that ensemble rules can be inserted, updated, or deleted without modifying software.
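
    The table-driven design lends itself to a compact illustration. The sketch below mimics the idea in Python with SQLite standing in for Informix; all table and column names are hypothetical, and the SSC's actual schema and SPL stored procedures are not reproduced here.

    ```python
    # Conceptual sketch of table-driven ensemble creation, using SQLite in
    # place of the Informix database described above. Table and column names
    # are hypothetical, not the Spitzer Science Center schema.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE images (image_id INTEGER, channel TEXT, aor INTEGER, exptime REAL);
    CREATE TABLE ensemble_rules (rule_id INTEGER, group_by TEXT);  -- rules live in a table
    CREATE TABLE ensembles (ensemble_id INTEGER, image_id INTEGER);
    INSERT INTO images VALUES (1,'IRAC1',100,10.4),(2,'IRAC1',100,10.4),(3,'IRAC2',100,10.4);
    INSERT INTO ensemble_rules VALUES (1, 'channel, aor, exptime');
    """)

    def create_ensembles(rule_id):
        """Group images into ensembles according to a rule read from a table,
        so rules can be inserted, updated, or deleted without code changes."""
        (group_by,) = con.execute(
            "SELECT group_by FROM ensemble_rules WHERE rule_id = ?", (rule_id,)
        ).fetchone()
        groups = con.execute(
            f"SELECT group_concat(image_id) FROM images GROUP BY {group_by}"
        ).fetchall()
        for eid, (members,) in enumerate(groups, start=1):
            for image_id in members.split(","):
                con.execute("INSERT INTO ensembles VALUES (?, ?)", (eid, int(image_id)))

    create_ensembles(1)
    print(con.execute("SELECT * FROM ensembles").fetchall())  # [(1, 1), (1, 2), (2, 3)]
    ```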

  1. An Update on Design Tools for Optimization of CMC 3D Fiber Architectures

    NASA Technical Reports Server (NTRS)

    Lang, J.; DiCarlo, J.

    2012-01-01

    Objective: Describe and update progress on NASA's efforts to develop 3D architectural design tools for CMCs in general and for SiC/SiC composites in particular. Describe past and current sequential work efforts aimed at: understanding key fiber and tow physical characteristics in conventional 2D and 3D woven architectures, as revealed by microstructures in the literature; developing an Excel program for down-selecting and predicting key geometric properties and the resulting key fiber-controlled properties for various conventional 3D architectures; developing a software tool for accurately visualizing all the key geometric details of conventional 3D architectures; validating these tools by visualizing and predicting the internal geometry and key mechanical properties of a NASA SiC/SiC panel with a 3D orthogonal architecture; and applying the predictive and visualization tools toward advanced 3D orthogonal SiC/SiC composites, combining them into a user-friendly software program.

  2. Scalable software-defined optical networking with high-performance routing and wavelength assignment algorithms.

    PubMed

    Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin

    2015-10-19

    The feasibility of software-defined optical networking (SDON) for practical applications critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high network capacity with reduced computation cost, a significant attribute of a scalable centralized-control SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and computation complexity in the routing table update procedure through a simulation study.
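
    A minimal sketch of such a policy is given below, assuming a toy topology and a hypothetical hotness score (demand times shortest-path distance); the paper's exact scoring and routing table update procedures are not reproduced.

    ```python
    # Sketch of a hottest-request-first RWA pass: requests are ordered by a
    # "hotness" score combining demand intensity and end-to-end distance, then
    # each is routed on its shortest path with first-fit wavelength assignment.
    # The score and data are illustrative, not the paper's exact algorithm.
    import networkx as nx

    W = 4  # wavelengths per link
    G = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")])
    for u, v in G.edges:
        G[u][v]["free"] = set(range(W))  # free wavelengths on this link

    requests = [("A", "D", 3), ("B", "D", 5), ("A", "C", 1)]  # (src, dst, demand)

    def hotness(req):
        src, dst, demand = req
        return demand * nx.shortest_path_length(G, src, dst)

    for src, dst, demand in sorted(requests, key=hotness, reverse=True):
        path = nx.shortest_path(G, src, dst)
        links = list(zip(path, path[1:]))
        # first-fit: lowest-indexed wavelength free on every link of the path
        common = set.intersection(*(G[u][v]["free"] for u, v in links))
        if common:
            lam = min(common)
            for u, v in links:
                G[u][v]["free"].discard(lam)
            print(f"{src}->{dst}: path {path}, wavelength {lam}")
        else:
            print(f"{src}->{dst}: blocked")
    ```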

  3. gemcWeb: A Cloud Based Nuclear Physics Simulation Software

    NASA Astrophysics Data System (ADS)

    Markelon, Sam

    2017-09-01

    gemcWeb allows users to run nuclear physics simulations from the web. Because it is completely device agnostic, scientists can run simulations from anywhere with an Internet connection. Having a full user system, gemcWeb allows users to revisit and revise their projects, and to share configurations and results with collaborators. gemcWeb is based on the simulation software gemc, which is in turn based on standard Geant4. gemcWeb requires no C++, gemc, or Geant4 knowledge. A simple but powerful GUI allows users to configure their project from geometries and configurations stored on the deployment server. Simulations are then run on the server, with results posted to the user and then securely stored. Python-based and open-source, the main version of gemcWeb is hosted internally at Jefferson National Laboratory and used by the CLAS12 and Electron-Ion Collider Project groups. However, as the software is open-source and hosted as a GitHub repository, an instance can be deployed on the open web or on any institution's intranet. An instance can be configured to host experiments specific to an institution, and the code base can be modified by any individual or group. Special thanks to: Maurizio Ungaro, PhD, creator of gemc; Markus Diefenthaler, PhD, advisor; and Kyungseon Joo, PhD, advisor.

  4. The Particle Physics Data Grid. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron

    2002-08-16

    The main objective of the Particle Physics Data Grid (PPDG) project has been to implement and evaluate distributed (Grid-enabled) data access and management technology for current and future particle and nuclear physics experiments. The specific goals of PPDG have been to design, implement, and deploy a Grid-based software infrastructure capable of supporting the data generation, processing and analysis needs common to the physics experiments represented by the participants, and to adapt experiment-specific software to operate in the Grid environment and to exploit this infrastructure. To accomplish these goals, the PPDG focused on the implementation and deployment of several critical services: reliable and efficient file replication service, high-speed data transfer services, multisite file caching and staging service, and reliable and recoverable job management services. The focus of the activity was the job management services and the interplay between these services and distributed data access in a Grid environment. Software was developed to study the interaction between HENP applications and distributed data storage fabric. One key conclusion was the need for a reliable and recoverable tool for managing large collections of interdependent jobs. An attached document provides an overview of the current status of the Directed Acyclic Graph Manager (DAGMan) with its main features and capabilities.
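
    The core requirement named here, dependency-ordered execution of interdependent jobs with recovery, can be sketched briefly. The snippet below is a conceptual illustration in Python, not DAGMan itself; all job names, commands, and the retry count are hypothetical.

    ```python
    # Minimal sketch of the idea behind a DAG job manager such as DAGMan:
    # run interdependent jobs in dependency order and retry failed ones.
    # Commands here are POSIX placeholders, not DAGMan syntax.
    import subprocess
    from graphlib import TopologicalSorter  # Python 3.9+

    jobs = {"gen": [], "proc": ["gen"], "analyze": ["proc"]}  # job -> dependencies
    cmds = {"gen": ["true"], "proc": ["true"], "analyze": ["true"]}
    MAX_RETRIES = 3

    for job in TopologicalSorter(jobs).static_order():
        for attempt in range(1, MAX_RETRIES + 1):
            if subprocess.run(cmds[job]).returncode == 0:
                print(f"{job}: ok (attempt {attempt})")
                break
        else:
            raise RuntimeError(f"{job}: failed after {MAX_RETRIES} attempts")
    ```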

  5. Radiological Monitoring Equipment For Real-Time Quantification Of Area Contamination In Soils And Facility Decommissioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. V. Carpenter; Jay A. Roach; John R Giles

    2005-09-01

    The environmental restoration industry offers several systems that perform scan-type characterization of radiologically contaminated areas. The Idaho National Laboratory (INL) has developed and deployed a suite of field systems that rapidly scan, characterize, and analyse radiological contamination in surface soils. The base system consists of a detector, such as a sodium iodide (NaI) spectrometer, a global positioning system (GPS), and an integrated user-friendly computer interface. This mobile concept was initially developed to provide precertification analyses of soils contaminated with uranium, thorium, and radium at the Fernald Closure Project, near Cincinnati, Ohio. INL has expanded the functionality of this basic system to create a suite of integrated field-deployable analytical systems. Using its engineering and radiation measurement expertise, aided by computer hardware and software support, INL has streamlined the data acquisition and analysis process to provide real-time information presented on wireless screens and in the form of coverage maps immediately available to field technicians. In addition, custom software offers a user-friendly interface with user-selectable alarm levels and automated data quality monitoring functions that validate the data. This system is deployed from various platforms, depending on the nature of the survey. The deployment platforms include a small all-terrain vehicle used to survey large, relatively flat areas, a hand-pushed unit for areas where manoeuvrability is important, an excavator-mounted system used to scan pits and trenches where personnel access is restricted, and backpack-mounted systems to survey rocky shoreline features and other physical settings that preclude vehicle-based deployment. Variants of the base system include sealed proportional counters for measuring actinides (i.e., plutonium-238 and americium-241) in building demolitions, soil areas, roadbeds, and process line routes at the Miamisburg Closure Project near Dayton, Ohio. In addition, INL supports decontamination operations at the Oak Ridge National Laboratory.

  6. HashDist: Reproducible, Relocatable, Customizable, Cross-Platform Software Stacks for Open Hydrological Science

    NASA Astrophysics Data System (ADS)

    Ahmadia, A. J.; Kees, C. E.

    2014-12-01

    Developing scientific software is a continuous balance between not reinventing the wheel and getting fragile codes to interoperate with one another. Binary software distributions such as Anaconda provide a robust starting point for many scientific software packages, but this solution alone is insufficient for many scientific software developers. HashDist provides a critical component of the development workflow, enabling highly customizable, source-driven, and reproducible builds for scientific software stacks, available from both the IPython Notebook and the command line. To address these issues, the Coastal and Hydraulics Laboratory at the US Army Engineer Research and Development Center has funded the development of HashDist in collaboration with Simula Research Laboratories and the University of Texas at Austin. HashDist is motivated by a functional approach to package build management, and features intelligent caching of sources and builds, parametrized build specifications, and the ability to interoperate with system compilers and packages. HashDist enables the easy specification of "software stacks", which allow both the novice user to install a default environment and the advanced user to configure every aspect of their build in a modular fashion. As an advanced feature, HashDist builds can be made relocatable, allowing the easy redistribution of binaries on all three major operating systems as well as on cloud and supercomputing platforms. As a final benefit, all HashDist builds are reproducible, with a build hash specifying exactly how each component of the software stack was installed. This talk discusses the role of HashDist in the hydrological sciences, including its use by the Coastal and Hydraulics Laboratory in the development and deployment of the Proteus Toolkit as well as the Rapid Operational Access and Maneuver Support project. We demonstrate HashDist in action, and show how it can effectively support development, deployment, teaching, and reproducibility for scientists working in the hydrological sciences. The HashDist documentation is available from: http://hashdist.readthedocs.org/en/latest/ HashDist is currently hosted at: https://github.com/hashdist/hashdist
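
    The reproducibility claim rests on hash-identified builds. The sketch below illustrates the general idea, a build hash derived from the package's build specification plus the hashes of its dependencies, under the assumption of a simple JSON spec; it is a conceptual sketch only, not HashDist's actual specification format.

    ```python
    # Conceptual sketch of a hash-identified build: the build hash is derived
    # from the package's own build specification plus the hashes of its
    # dependencies, so identical hashes imply identically-built stacks.
    # The spec fields are hypothetical, not HashDist's real format.
    import hashlib
    import json

    def build_hash(spec, dep_hashes):
        payload = json.dumps({"spec": spec, "deps": sorted(dep_hashes)},
                             sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()[:12]

    zlib_h = build_hash({"name": "zlib", "version": "1.2.8", "flags": ["-O2"]}, [])
    hdf5_h = build_hash({"name": "hdf5", "version": "1.8.13"}, [zlib_h])
    print(zlib_h, hdf5_h)  # changing any flag or dependency changes the hash
    ```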

  7. Medical Surveillance Monthly Report (MSMR). Volume 15, Number 4, May 2008

    DTIC Science & Technology

    2008-05-01

    Topics include: incident diagnoses of sarcoidosis, active components, U.S. Armed Forces, 1999-2007 (annual numbers of incident diagnoses by clinical setting, and proportions of incident cases diagnosed during hospitalization); and an update on deployment health assessments, U.S. Armed Forces.

  8. Fabric Structures Team Technology Update

    DTIC Science & Technology

    2011-11-01

    Excerpts cover the Fabric Structures Team's work on command posts (team members include Julia McAdams, chemical engineer; Liz Swisher, electrical engineer; Chris Aall, mechanical engineer; and Clinton McAdams); a TEMPER design originally built for AMED through Force Provider (640 sq ft with a 20 ft long airlock), with the entire airlock made of textiles; the U.S. Army Medical Materiel Development Activity (USAMMDA); the Large Command Post Airbeam Shelter (NSRDEC deployment, September 2011); and airbeam and frame backpackable tents.

  9. Focused Logistics, Joint Vision 2010: A Joint Logistics Roadmap

    DTIC Science & Technology

    2010-01-01

    Excerpts cover automated information systems (AIS) and automatic identification technology (AIT) devices, including bar codes for individual items, optical memory cards for multipacks and containers, and radio frequency tags for containers; Fortezza Card and firewall technologies being developed to prevent unauthorized access; DISA infrastructure work; and the use of AIT devices, such as radio frequency tags and optical memory cards, to continuously update the JTAV database. By September 1998, DSS will be deployed in all wholesale...

  10. Orbital operations study. Appendix A: Interactivity analysis

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Supplemental analyses conducted to verify that safe, feasible design concepts exist for accomplishing the attendant interface activities of the orbital operations mission are presented. The data are primarily concerned with functions and concepts common to more than one of the interfacing activities or elements. Specific consideration is given to state vector update, payload deployment, communications links, jet plume impingement, attached element operations, docking and structural interface assessment, and propellant transfer.

  11. Movements and Habitat Use of Dwarf and Pygmy Sperm Whales using Remotely-deployed LIMPET Satellite Tags

    DTIC Science & Technology

    2015-09-30

    Excerpts describe field techniques with dwarf and pygmy sperm whales that facilitate relatively close approaches to the animals without obviously disturbing them; tagging work under a "Marine Mammal Electronic Tags" effort funded through a Science and Technology Transfer (STTR) program Phase II Option 1 contract, Office of Naval Research; and population assessment, including updating photo-identification catalogs for estimating abundance and assessing the nature and extent of fishery interactions with pantropical...

  12. Hinode/EIS science planning and operations tools

    NASA Astrophysics Data System (ADS)

    Rainnie, Jonn A.

    2016-07-01

    We present the design, implementation and maintenance of the suite of software enabling scientists to design and schedule Hinode/EIS1 operations. Together, this software forms the EIS Science Planning Tools (EISPT), written predominantly in IDL (Interactive Data Language) and coupled with SolarSoft (SSW), an IDL library developed for solar missions. Hinode is a multi-instrument, multi-wavelength mission designed to observe the Sun. It is a joint Japan/UK/US consortium (with ESA and Norwegian involvement). Launched in September 2006, its principal scientific goals are to study the Sun's variability and the causes of solar activity. Hinode operations are coordinated at ISAS (Tokyo, Japan). A daily Science Operations meeting is attended by the instrument teams and the spacecraft team. Nominally, science plan uploads cover periods of two or three days. When the forthcoming operations have been agreed, the necessary spacecraft operations parameters are created. These include scheduling for spacecraft pointing and ground stations. The Extreme UV Imaging Spectrometer (EIS) instrument, led by the UK (the PI institute is MSSL), is designed to observe the emission spectral lines of the solar atmosphere. Observations are composed of reusable, hierarchical components, including line lists (wavelengths of spectral lines), rasters (exposure times, line list, etc.) and studies (each defining one or more rasters). Studies are the basic unit of "timeline" scheduling. They are a useful construct for generating more complex sequences of observations, reducing the planning burden. Instrument observations must first be validated. An initial requirement was that operations be shared equally by the three main EIS teams (Japan, UK and US). Hence, a major design focus of the software was "Remote Operations", whereby any scientist in any location can run the software, schedule a science plan and send it to the spacecraft commanding team, where it is validated, combined with the science plans of the other instruments, and then uploaded to the spacecraft. As for any space mission, telemetry size and rate are important constraints. For each planning cycle the instruments are issued a maximum data allocation. EISPT interactively calculates the telemetry requirements of each observation and plan. Autonomous operations were a challenging concept designed to observe the early onset of various dynamic events, including solar flares. The planning cycle precluded observers responding to such short-term events. Hence, the instrument can be run in a (low-telemetry) "hunter" mode at a suitable target. Upon detecting an event, the current observation ceases and another automatically begins at the event location. This "response" observation involves a smaller field-of-view and higher cadence. It is impossible to predict whether this mechanism will be activated and, if so, how much telemetry will be acquired. The EISPT has operated successfully since it was deployed in November 2006. Nominally it is used six days a week. It has been maintained and updated as required to take account of changing mission operations. A large update was made in 2013/14 to develop the facility to coordinate observations with other solar missions (SDO/AIA and IRIS).
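
    The hierarchy of reusable components described above can be pictured with a short sketch. All field names and the telemetry estimate below are hypothetical illustrations of the line-list/raster/study pattern, not the EISPT implementation.

    ```python
    # Hypothetical sketch of EIS-style reusable observation components:
    # a line list names spectral windows, a raster combines a line list with
    # exposure settings, and a study schedules one or more rasters.
    from dataclasses import dataclass

    @dataclass
    class LineList:
        wavelengths_A: list   # spectral line wavelengths, in Angstroms
        window_px: int        # detector window width per line, in pixels

    @dataclass
    class Raster:
        line_list: LineList
        positions: int        # slit positions in the raster
        exposure_s: float

    @dataclass
    class Study:
        rasters: list

        def telemetry_bytes(self, bytes_per_px=2, rows=512):
            # crude estimate: windows x window width x rows x positions
            return sum(len(r.line_list.wavelengths_A) * r.line_list.window_px
                       * rows * r.positions * bytes_per_px
                       for r in self.rasters)

    fe12 = LineList(wavelengths_A=[195.12, 186.88], window_px=24)
    scan = Raster(line_list=fe12, positions=40, exposure_s=30.0)
    print(Study(rasters=[scan]).telemetry_bytes())  # bytes before compression
    ```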

  13. The geospatial modeling interface (GMI) framework for deploying and assessing environmental models

    USDA-ARS?s Scientific Manuscript database

    Geographical information systems (GIS) software packages have been used for close to three decades as analytical tools in environmental management for geospatial data assembly, processing, storage, and visualization of input data and model output. However, with increasing availability and use of ful...

  14. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT, ENVIRONMENTAL DECISION SUPPORT SOFTWARE, UNIVERSITY OF TENNESSEE RESEARCH CORPORATION, SPATIAL ANALYSIS AND DECISION ASSISTANCE (SADA)

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has created the Environmental Technology Verification Program (ETV) to facilitate the deployment of innovative or improved environmental technologies through performance verification and dissemination of information. The goal of the...

  15. PIV/HPIV Film Analysis Software Package

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    A PIV/HPIV film analysis software system was developed that calculates the 2-dimensional spatial autocorrelations of subregions of Particle Image Velocimetry (PIV) or Holographic Particle Image Velocimetry (HPIV) film recordings. The software controls three hardware subsystems: (1) a Kodak Megaplus 1.4 camera and EPIX 4MEG framegrabber subsystem, (2) an IEEE/Unidex 11 precision motion control subsystem, and (3) an Alacron I860 array processor subsystem. The software runs on an IBM PC/AT host computer running either the Microsoft Windows 3.1 or Windows 95 operating system. It is capable of processing five PIV or HPIV displacement vectors per second and is completely automated, with the exception of user input to a configuration file prior to analysis execution to update various system parameters.
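
    The central computation, the 2-D spatial autocorrelation of an interrogation subregion, is commonly done via the Wiener-Khinchin relation. A minimal NumPy sketch follows; the original system used dedicated array-processor hardware rather than Python, so this is only an illustration of the math.

    ```python
    # Sketch of the core PIV computation described above: the 2-D spatial
    # autocorrelation of an interrogation subregion, computed as the inverse
    # FFT of the power spectrum (Wiener-Khinchin). The location of the
    # displacement peak relative to zero lag gives the particle shift.
    import numpy as np

    def autocorrelation_2d(window):
        w = window - window.mean()            # remove the DC background
        spectrum = np.fft.fft2(w)
        acf = np.fft.ifft2(np.abs(spectrum) ** 2).real
        return np.fft.fftshift(acf)           # put zero lag at the center

    rng = np.random.default_rng(0)
    sub = rng.random((32, 32))                # stand-in interrogation window
    acf = autocorrelation_2d(sub)
    peak = np.unravel_index(acf.argmax(), acf.shape)
    print(peak)                               # zero-lag peak at the center, (16, 16)
    ```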

  16. General Purpose Fortran Program for Discrete-Ordinate-Method Radiative Transfer in Scattering and Emitting Layered Media: An Update of DISORT

    NASA Technical Reports Server (NTRS)

    Tsay, Si-Chee; Stamnes, Knut; Wiscombe, Warren; Laszlo, Istvan; Einaudi, Franco (Technical Monitor)

    2000-01-01

    This update reports a state-of-the-art discrete ordinate algorithm for monochromatic unpolarized radiative transfer in non-isothermal, vertically inhomogeneous, but horizontally homogeneous media. The physical processes included are Planckian thermal emission, scattering with arbitrary phase function, absorption, and surface bidirectional reflection. The system may be driven by parallel or isotropic diffuse radiation incident at the top boundary, as well as by internal thermal sources and thermal emission from the boundaries. Radiances, fluxes, and mean intensities are returned at user-specified angles and levels. DISORT has enjoyed considerable popularity in the atmospheric science and other communities since its introduction in 1988. Several new DISORT features are described in this update: intensity correction algorithms designed to compensate for the δ-M forward-peak scaling and obtain accurate intensities even in low orders of approximation; a more general surface bidirectional reflection option; and an exponential-linear approximation of the Planck function allowing more accurate solutions in the presence of large temperature gradients. DISORT has been designed to be an exemplar of good scientific software as well as a program of intrinsic utility. An extraordinary effort has been made to make it numerically well-conditioned, error-resistant, and user-friendly, and to take advantage of robust existing software tools. A thorough test suite is provided to verify the program both against published results and for consistency where there are no published results. This careful attention to software design has been just as important in DISORT's popularity as its powerful algorithmic content.
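
    For reference, the δ-M scaling that the intensity correction compensates for can be stated compactly. Writing χ_l for the Legendre moments of the phase function and taking f as the M-th moment (the truncated forward-peak fraction), the standard transformation of Wiscombe's δ-M method is, as a sketch:

    ```latex
    % Standard delta-M scaling, with f = \chi_M the M-th Legendre moment
    % of the phase function (the truncated forward-peak fraction):
    \begin{align*}
      f            &= \chi_M, \\
      \tau^{*}     &= (1 - \omega f)\,\tau,                 % scaled optical depth
      \\
      \omega^{*}   &= \frac{(1 - f)\,\omega}{1 - \omega f}, % scaled single-scattering albedo
      \\
      \chi_{l}^{*} &= \frac{\chi_{l} - f}{1 - f}, \qquad l = 0, \dots, M - 1.
    \end{align*}
    ```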

  17. OASIS: a data and software distribution service for Open Science Grid

    NASA Astrophysics Data System (ADS)

    Bockelman, B.; Caballero Bejar, J.; De Stefano, J.; Hover, J.; Quick, R.; Teige, S.

    2014-06-01

    The Open Science Grid encourages the concept of software portability: a user's scientific application should be able to run at as many sites as possible. It is necessary to provide a mechanism for OSG Virtual Organizations to install software at sites. Since its initial release, the OSG Compute Element has provided an application software installation directory to Virtual Organizations, where they can create their own sub-directory, install software into that sub-directory, and have the directory shared on the worker nodes at that site. The current model has shortcomings with regard to permissions, policies, versioning, and the lack of a unified, collective procedure or toolset for deploying software across all sites. Therefore, a new mechanism for data and software distribution is desirable. The architecture for the OSG Application Software Installation Service (OASIS) is a server-client model: the software and data are installed only once in a single place, and are automatically distributed to all client sites simultaneously. Central file distribution offers other advantages, including server-side authentication and authorization, activity records, quota management, data validation and inspection, and well-defined versioning and deletion policies. The architecture, as well as a complete analysis of the current implementation, will be described in this paper.

  18. ETICS: the international software engineering service for the grid

    NASA Astrophysics Data System (ADS)

    Meglio, A. D.; Bégin, M.-E.; Couvares, P.; Ronchieri, E.; Takacs, E.

    2008-07-01

    The ETICS system is a distributed software configuration, build and test system designed to fulfil the needs of improving the quality, reliability and interoperability of distributed software in general and grid software in particular. The ETICS project is a consortium of five partners (CERN, INFN, Engineering Ingegneria Informatica, 4D Soft and the University of Wisconsin-Madison). The ETICS service consists of a build and test job execution system based on the Metronome software and an integrated set of web services and software engineering tools to design, maintain and control build and test scenarios. The ETICS system makes it possible to take into account complex dependencies among applications and middleware components, and provides a rich environment to perform static and dynamic analysis of the software and to execute deployment, system and interoperability tests. This paper gives an overview of the system architecture and functionality set, and then describes how the EC-funded EGEE, DILIGENT and OMII-Europe projects are using the software engineering services to build, validate and distribute their software. Finally, a number of significant use and test cases will be described to show how ETICS can be used in particular to perform interoperability tests of grid middleware using the grid itself.

  19. Using Docker Containers to Extend Reproducibility Architecture for the NASA Earth Exchange (NEX)

    NASA Technical Reports Server (NTRS)

    Votava, Petr; Michaelis, Andrew; Spaulding, Ryan; Becker, Jeffrey C.

    2016-01-01

    NASA Earth Exchange (NEX) is a data, supercomputing and knowledge collaboratory that houses NASA satellite, climate and ancillary data where a focused community can come together to address large-scale challenges in Earth sciences. As NEX has been growing into a petabyte-size platform for analysis, experiments and data production, it has been increasingly important to enable users to easily retrace their steps, identify what datasets were produced by which process chains, and give them the ability to readily reproduce their results. This can be a tedious and difficult task even for a small project, but is almost impossible on large processing pipelines. We have developed an initial reproducibility and knowledge capture solution for the NEX; however, if users want to move the code to another system, whether it is their home institution cluster, laptop or the cloud, they have to find, build and install all the required dependencies that would run their code. This can be a very tedious and tricky process and is a big impediment to moving code to data and reproducibility outside the original system. The NEX team has tried to assist users who wanted to move their code into OpenNEX on Amazon cloud by creating custom virtual machines with all the software and dependencies installed, but this, while solving some of the issues, creates a new bottleneck that requires the NEX team to be involved with any new request, updates to virtual machines and general maintenance support. In this presentation, we will describe a solution that integrates NEX and Docker to bridge the gap in code-to-data migration. The core of the solution is the semi-automatic conversion of science codes, tools and services that are already tracked and described in the NEX provenance system to Docker, an open-source Linux container software. Docker is available on most computer platforms, easy to install and capable of seamlessly creating and/or executing any application packaged in the appropriate format. We believe this is an important step towards seamless process deployment in heterogeneous environments that will enhance community access to NASA data and tools in a scalable way, promote software reuse, and improve reproducibility of scientific results.
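
    One plausible shape for such a conversion is sketched below with the Docker SDK for Python. The provenance fields, file names, and image tag are entirely hypothetical; the NEX system's actual record format and conversion pipeline are not shown.

    ```python
    # Sketch of programmatically containerizing a tracked science code using
    # the Docker SDK for Python. Assumes a running Docker daemon and that the
    # (hypothetical) script fluxcalc.py exists in the current directory.
    import pathlib
    import docker

    provenance = {"code": "fluxcalc.py", "base": "python:3.10-slim",
                  "deps": ["numpy", "netCDF4"]}  # hypothetical provenance record

    dockerfile = f"""FROM {provenance['base']}
    RUN pip install {' '.join(provenance['deps'])}
    COPY {provenance['code']} /app/
    ENTRYPOINT ["python", "/app/{provenance['code']}"]
    """
    pathlib.Path("Dockerfile").write_text(dockerfile)

    client = docker.from_env()
    image, logs = client.images.build(path=".", tag="nex/fluxcalc:1.0")
    print(image.tags)  # the image can now run on any Docker-capable host
    ```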

  20. Using Docker Containers to Extend Reproducibility Architecture for the NASA Earth Exchange (NEX)

    NASA Astrophysics Data System (ADS)

    Votava, P.; Michaelis, A.; Spaulding, R.; Becker, J. C.

    2016-12-01

    NASA Earth Exchange (NEX) is a data, supercomputing and knowledge collaboratory that houses NASA satellite, climate and ancillary data where a focused community can come together to address large-scale challenges in Earth sciences. As NEX has been growing into a petabyte-size platform for analysis, experiments and data production, it has been increasingly important to enable users to easily retrace their steps, identify what datasets were produced by which process chains, and give them the ability to readily reproduce their results. This can be a tedious and difficult task even for a small project, but is almost impossible on large processing pipelines. We have developed an initial reproducibility and knowledge capture solution for the NEX; however, if users want to move the code to another system, whether it is their home institution cluster, laptop or the cloud, they have to find, build and install all the required dependencies that would run their code. This can be a very tedious and tricky process and is a big impediment to moving code to data and reproducibility outside the original system. The NEX team has tried to assist users who wanted to move their code into OpenNEX on Amazon cloud by creating custom virtual machines with all the software and dependencies installed, but this, while solving some of the issues, creates a new bottleneck that requires the NEX team to be involved with any new request, updates to virtual machines and general maintenance support. In this presentation, we will describe a solution that integrates NEX and Docker to bridge the gap in code-to-data migration. The core of the solution is the semi-automatic conversion of science codes, tools and services that are already tracked and described in the NEX provenance system to Docker, an open-source Linux container software. Docker is available on most computer platforms, easy to install and capable of seamlessly creating and/or executing any application packaged in the appropriate format. We believe this is an important step towards seamless process deployment in heterogeneous environments that will enhance community access to NASA data and tools in a scalable way, promote software reuse, and improve reproducibility of scientific results.

  1. Feasibility Study of Implementing a Mobile Collaborative Information Platform for International Safeguards Inspections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gastelum, Zoe N.; Gitau, Ernest T. N.; Doehle, Joel R.

    2014-09-01

    In response to the growing pervasiveness of mobile technologies such as tablets and smartphones, the International Atomic Energy Agency and the U.S. Department of Energy National Laboratories have been exploring the potential use of these platforms for international safeguards activities. Specifically of interest are information systems (software, and accompanying servers and architecture) deployed on mobile devices to increase the situational awareness and productivity of an IAEA safeguards inspector in the field, while simultaneously reducing paperwork and pack weight of safeguards equipment. Exploratory development in this area has been met with skepticism regarding the ability to overcome technology deployment challenges for IAEA safeguards equipment. This report documents research conducted to identify potential challenges for the deployment of a mobile collaborative information system to the IAEA, and proposes strategies to mitigate those challenges.

  2. Development and Deployment of the OpenMRS-Ebola Electronic Health Record System for an Ebola Treatment Center in Sierra Leone

    PubMed Central

    Jazayeri, Darius; Teich, Jonathan M; Ball, Ellen; Nankubuge, Patricia Alexandra; Rwebembera, Job; Wing, Kevin; Sesay, Alieu Amara; Kanter, Andrew S; Ramos, Glauber D; Walton, David; Cummings, Rachael; Checchi, Francesco; Fraser, Hamish S

    2017-01-01

    Background Stringent infection control requirements at Ebola treatment centers (ETCs), which are specialized facilities for isolating and treating Ebola patients, create substantial challenges for recording and reviewing patient information. During the 2014-2016 West African Ebola epidemic, paper-based data collection systems at ETCs compromised the quality, quantity, and confidentiality of patient data. Electronic health record (EHR) systems have the potential to address such problems, with benefits for patient care, surveillance, and research. However, no suitable software was available for deployment when large-scale ETCs opened as the epidemic escalated in 2014. Objective We present our work on rapidly developing and deploying OpenMRS-Ebola, an EHR system for the Kerry Town ETC in Sierra Leone. We describe our experience, lessons learned, and recommendations for future health emergencies. Methods We used the OpenMRS platform and Agile software development approaches to build OpenMRS-Ebola. Key features of our work included daily communications between the development team and ground-based operations team, iterative processes, and phased development and implementation. We made design decisions based on the restrictions of the ETC environment and regular user feedback. To evaluate the system, we conducted predeployment user questionnaires and compared the EHR records with duplicate paper records. Results We successfully built OpenMRS-Ebola, a modular stand-alone EHR system with a tablet-based application for infectious patient wards and a desktop-based application for noninfectious areas. OpenMRS-Ebola supports patient tracking (registration, bed allocation, and discharge); recording of vital signs and symptoms; medication and intravenous fluid ordering and monitoring; laboratory results; clinician notes; and data export. It displays relevant patient information to clinicians in infectious and noninfectious zones. We implemented phase 1 (patient tracking; drug ordering and monitoring) after 2.5 months of full-time development. OpenMRS-Ebola was used for 112 patient registrations, 569 prescription orders, and 971 medication administration recordings. We were unable to fully implement phases 2 and 3 as the ETC closed because of a decrease in new Ebola cases. The phase 1 evaluation suggested that OpenMRS-Ebola worked well in the context of the rollout, and the user feedback was positive. Conclusions To our knowledge, OpenMRS-Ebola is the most comprehensive adaptable clinical EHR built for a low-resource setting health emergency. It is designed to address the main challenges of data collection in highly infectious environments that require robust infection prevention and control measures and it is interoperable with other electronic health systems. Although we built and deployed OpenMRS-Ebola more rapidly than typical software, our work highlights the challenges of having to develop an appropriate system during an emergency rather than being able to rapidly adapt an existing one. Lessons learned from this and previous emergencies should be used to ensure that a set of well-designed, easy-to-use, pretested health software is ready for quick deployment in future. PMID:28827211

  3. The study on network security based on software engineering

    NASA Astrophysics Data System (ADS)

    Jia, Shande; Ao, Qian

    2012-04-01

    Developing a security policy (SP) is a sensitive task, because the SP itself can introduce security weaknesses if it does not conform to the required security properties. Hence, appropriate techniques are necessary to overcome such problems, and these techniques must accompany the policy throughout its deployment phases. The main contribution of this paper is the proposition of three such activities: validation, testing, and multi-SP conflict management. Our techniques are inspired by well-established software engineering techniques, in which we have found some similarities with the security domain.

  4. The Ensemble Canon

    NASA Technical Reports Server (NTRS)

    Mittman, David S.

    2011-01-01

    Ensemble is an open architecture for the development, integration, and deployment of mission operations software. Fundamentally, it is an adaptation of the Eclipse Rich Client Platform (RCP), a widespread, stable, and supported framework for component-based application development. By capitalizing on the maturity and availability of the Eclipse RCP, Ensemble offers a low-risk, politically neutral path towards a tighter integration of operations tools. The Ensemble project is a highly successful, ongoing collaboration among NASA Centers. Since 2004, the Ensemble project has supported the development of mission operations software for NASA's Exploration Systems, Science, and Space Operations Directorates.

  5. Lattice QCD Application Development within the US DOE Exascale Computing Project

    NASA Astrophysics Data System (ADS)

    Brower, Richard; Christ, Norman; DeTar, Carleton; Edwards, Robert; Mackenzie, Paul

    2018-03-01

    In October 2016, the US Department of Energy launched the Exascale Computing Project, which aims to deploy exascale computing resources for science and engineering in the early 2020s. The project brings together application teams, software developers, and hardware vendors in order to realize this goal. Lattice QCD is one of the applications. Members of the US lattice gauge theory community with significant collaborators abroad are developing algorithms and software for exascale lattice QCD calculations. We give a short description of the project, our activities, and our plans.

  6. The Collaborative Information Portal and NASA's Mars Exploration Rover Mission

    NASA Technical Reports Server (NTRS)

    Mak, Ronald; Walton, Joan

    2005-01-01

    The Collaborative Information Portal was enterprise software developed jointly by the NASA Ames Research Center and the Jet Propulsion Laboratory for NASA's Mars Exploration Rover mission. Mission managers, engineers, scientists, and researchers used this Internet application to view current staffing and event schedules, download data and image files generated by the rovers, receive broadcast messages, and get accurate times in various Mars and Earth time zones. This article describes the features, architecture, and implementation of this software, and concludes with lessons we learned from its deployment and a look towards future missions.

  7. Lattice QCD Application Development within the US DOE Exascale Computing Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brower, Richard; Christ, Norman; DeTar, Carleton

    In October 2016, the US Department of Energy launched the Exascale Computing Project, which aims to deploy exascale computing resources for science and engineering in the early 2020s. The project brings together application teams, software developers, and hardware vendors in order to realize this goal. Lattice QCD is one of the applications. Members of the US lattice gauge theory community with significant collaborators abroad are developing algorithms and software for exascale lattice QCD calculations. We give a short description of the project, our activities, and our plans.

  8. Hermes: Seamless delivery of containerized bioinformatics workflows in hybrid cloud (HTC) environments

    NASA Astrophysics Data System (ADS)

    Kintsakis, Athanassios M.; Psomopoulos, Fotis E.; Symeonidis, Andreas L.; Mitkas, Pericles A.

    Hermes introduces a new "describe once, run anywhere" paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.

  9. Highway User Benefit Analysis System Research Project #128

    DOT National Transportation Integrated Search

    2000-10-01

    In this research, a methodology for estimating road user costs of various competing alternatives was developed. Also, software was developed to calculate the road user cost, perform economic analysis and update cost tables. The methodology is based o...

  10. Development of an Automated Security Incident Reporting System (SIRS) for Bus Transit

    DOT National Transportation Integrated Search

    1986-12-01

    The Security Incident Reporting System (SIRS) is a microcomputer-based software program demonstrated at the Metropolitan Transit Commission (MTC) in Minneapolis, MN. SIRS is designed to provide convenient storage, update and retrieval of security inc...

  11. Text File Comparator

    NASA Technical Reports Server (NTRS)

    Kotler, R. S.

    1983-01-01

    The file comparator program IFCOMP is a text file comparator for IBM OS/VS-compatible systems. IFCOMP accepts as input two text files and produces a listing of differences in pseudo-update form. IFCOMP is very useful in monitoring changes made to software at the source code level.
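
    For readers on modern systems, Python's difflib produces a broadly comparable difference listing; the file names below are hypothetical, and the unified format shown is an analogue of, not identical to, IFCOMP's pseudo-update form.

    ```python
    # Rough modern analogue of a source-level difference listing, using
    # Python's standard difflib. Assumes the two (hypothetical) files exist.
    import difflib

    with open("old.src") as f_old, open("new.src") as f_new:
        old_lines, new_lines = f_old.readlines(), f_new.readlines()

    # unified_diff emits a compact listing of insertions and deletions,
    # useful for monitoring changes made at the source code level
    for line in difflib.unified_diff(old_lines, new_lines,
                                     fromfile="old.src", tofile="new.src"):
        print(line, end="")
    ```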

  12. Anatomy of an Extensible Open Source PACS.

    PubMed

    Valente, Frederico; Silva, Luís A Bastião; Godinho, Tiago Marques; Costa, Carlos

    2016-06-01

    The conception and deployment of cost-effective Picture Archiving and Communication Systems (PACS) is a concern for small to medium medical imaging facilities, research environments, and developing countries' healthcare institutions. Financial constraints and the specificity of these scenarios contribute to a low adoption rate of PACS in those environments. Furthermore, with the advent of ubiquitous computing and new initiatives to improve healthcare information technologies and data sharing, such as IHE and XDS-i, a PACS must adapt quickly to changes. This paper describes Dicoogle, a software framework that enables developers and researchers to quickly prototype and deploy new functionality, taking advantage of the embedded Digital Imaging and Communications in Medicine (DICOM) services. This full-fledged implementation of a PACS archive is very amenable to extension due to its plugin-based architecture and out-of-the-box functionality, which enables the exploration of large DICOM datasets and associated metadata. These characteristics make the proposed solution very interesting for prototyping, experimentation, and bridging functionality with deployed applications. Besides being an advanced mechanism for data discovery and retrieval based on DICOM object indexing, it enables the detection of inconsistencies in an institution's data and processes. Several use cases have benefited from this approach, such as radiation dosage monitoring, Content-Based Image Retrieval (CBIR), and the use of the framework as support for classes targeting software engineering for clinical contexts.
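
    The plugin-based pattern can be illustrated generically: the core exposes a registry, and indexing functionality arrives as plugins. The sketch below is a language-neutral toy in Python with invented names; Dicoogle's actual plugin API is Java-based and differs.

    ```python
    # Minimal generic illustration of a plugin-based archive core: index
    # functionality is registered at runtime rather than compiled in.
    # All names are hypothetical, not Dicoogle's real API.
    class PluginRegistry:
        def __init__(self):
            self._indexers = {}

        def register(self, name, indexer):
            self._indexers[name] = indexer      # plug in new functionality

        def index(self, dicom_object):
            for name, indexer in self._indexers.items():
                indexer(dicom_object)           # every plugin sees the object
                print(f"indexed by {name}")

    registry = PluginRegistry()
    registry.register("metadata", lambda obj: obj.setdefault("indexed", True))
    registry.index({"SOPInstanceUID": "1.2.3", "Modality": "CT"})
    ```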

  13. Ensemble: an Architecture for Mission-Operations Software

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Powell, Mark; Fox, Jason; Rabe, Kenneth; Shu, IHsiang; McCurdy, Michael; Vera, Alonso

    2008-01-01

    Ensemble is the name of an open architecture for, and a methodology for the development of, spacecraft mission operations software. Ensemble is also potentially applicable to the development of non-spacecraft mission-operations- type software. Ensemble capitalizes on the strengths of the open-source Eclipse software and its architecture to address several issues that have arisen repeatedly in the development of mission-operations software: Heretofore, mission-operations application programs have been developed in disparate programming environments and integrated during the final stages of development of missions. The programs have been poorly integrated, and it has been costly to develop, test, and deploy them. Users of each program have been forced to interact with several different graphical user interfaces (GUIs). Also, the strategy typically used in integrating the programs has yielded serial chains of operational software tools of such a nature that during use of a given tool, it has not been possible to gain access to the capabilities afforded by other tools. In contrast, the Ensemble approach offers a low-risk path towards tighter integration of mission-operations software tools.

  14. An Integrated Software Package to Enable Predictive Simulation Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Fitzhenry, Erin B.; Jin, Shuangshuang

    The power grid is increasing in complexity due to the deployment of smart grid technologies. Such technologies vastly increase the size and complexity of power grid systems for simulation and modeling. This increasing complexity necessitates not only the use of high-performance-computing (HPC) techniques, but also a smooth, well-integrated interplay between HPC applications. This paper presents a new integrated software package that integrates HPC applications and a web-based visualization tool based on a middleware framework. This framework can support the data communication between different applications. Case studies with a large power system demonstrate the predictive capability brought by the integrated software package, as well as the better situational awareness provided by the web-based visualization tool in a live mode. Test results validate the effectiveness and usability of the integrated software package.

  15. Developing a Cyberinfrastructure for integrated assessments of environmental contaminants.

    PubMed

    Kaur, Taranjit; Singh, Jatinder; Goodale, Wing M; Kramar, David; Nelson, Peter

    2005-03-01

    The objective of this study was to design and implement prototype software for capturing field data and automating the process for reporting and analyzing the distribution of mercury. The four phase process used to design, develop, deploy and evaluate the prototype software is described. Two different development strategies were used: (1) design of a mobile data collection application intended to capture field data in a meaningful format and automate transfer into user databases, followed by (2) a re-engineering of the original software to develop an integrated database environment with improved methods for aggregating and sharing data. Results demonstrated that innovative use of commercially available hardware and software components can lead to the development of an end-to-end digital cyberinfrastructure that captures, records, stores, transmits, compiles and integrates multi-source data as it relates to mercury.

  16. NASA Tech Briefs, August 2003

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Topics covered include: Stable, Thermally Conductive Fillers for Bolted Joints; Connecting to Thermocouples with Fewer Lead Wires; Zipper Connectors for Flexible Electronic Circuits; Safety Interlock for Angularly Misdirected Power Tool; Modular, Parallel Pulse-Shaping Filter Architectures; High-Fidelity Piezoelectric Audio Device; Photovoltaic Power Station with Ultracapacitors for Storage; Time Analyzer for Time Synchronization and Monitor of the Deep Space Network; Program for Computing Albedo; Integrated Software for Analyzing Designs of Launch Vehicles; Abstract-Reasoning Software for Coordinating Multiple Agents; Software Searches for Better Spacecraft-Navigation Models; Software for Partly Automated Recognition of Targets; Antistatic Polycarbonate/Copper Oxide Composite; Better VPS Fabrication of Crucibles and Furnace Cartridges; Burn-Resistant, Strong Metal-Matrix Composites; Self-Deployable Spring-Strip Booms; Explosion Welding for Hermetic Containerization; Improved Process for Fabricating Carbon Nanotube Probes; Automated Serial Sectioning for 3D Reconstruction; and Parallel Subconvolution Filtering Architectures.

  17. Social Learning Strategies: Bridge-Building between Fields.

    PubMed

    Kendal, Rachel L; Boogert, Neeltje J; Rendell, Luke; Laland, Kevin N; Webster, Mike; Jones, Patricia L

    2018-07-01

    While social learning is widespread, indiscriminate copying of others is rarely beneficial. Theory suggests that individuals should be selective in what, when, and whom they copy, by following 'social learning strategies' (SLSs). The SLS concept has stimulated extensive experimental work, integrated theory and empirical findings, and given impetus to the social learning and cultural evolution fields. However, the SLS concept needs updating to accommodate recent findings that individuals switch between strategies flexibly, that multiple strategies are deployed simultaneously, and that there is no one-to-one correspondence between the psychological heuristics deployed and the resulting population-level patterns. The field would also benefit from the simultaneous study of mechanism and function. SLSs provide a useful vehicle for bridge-building between cognitive psychology, neuroscience, and evolutionary biology. Copyright © 2018. Published by Elsevier Ltd.

  18. Software augmented buildings: Exploiting existing infrastructure to improve energy efficiency and comfort in commercial buildings

    NASA Astrophysics Data System (ADS)

    Balaji, Bharathan

    Commercial buildings consumed 19% of energy in the US as of 2010, and traditionally their energy use has been optimized through improved equipment efficiency and retrofits. Beyond improved hardware and infrastructure, there exists a tremendous potential for reducing energy use through better monitoring and operation. We present several applications that we developed and deployed to support our thesis that building energy use can be reduced through sensing, monitoring and optimization software that modulates the use of building subsystems, including HVAC. We focus on HVAC systems, as these constitute 48-55% of building energy use. Specifically, in the case of sensing, we describe an energy apportionment system that estimates real-time zonal HVAC power consumption by analyzing existing sensor information. With this energy breakdown, we can measure the effectiveness of optimization solutions and identify inefficiencies. Central to energy efficiency improvement is the determination of human occupancy in buildings, but this information is often unavailable or expensive to obtain through wide-scale sensor deployment. We present a system that infers room-level occupancy inexpensively by leveraging existing WiFi infrastructure. Occupancy information can be used not only to directly control HVAC but also to infer the state of the building for predictive control. Building energy use is strongly influenced by human behavior, and timely feedback mechanisms can encourage energy-saving behavior. Occupants interact with HVAC systems using thermostats, which have been shown to be inadequate for thermal comfort. Building managers are responsible for incorporating energy efficiency measures, but our interviews reveal that they struggle to maintain efficiency due to a lack of analytical tools and contextual information. We present software services that provide energy feedback to occupants and building managers, improve comfort with personalized control, and identify energy-wasting faults. For wide-scale deployment, such energy-saving software needs to be portable across multiple buildings. However, buildings consist of heterogeneous equipment and use inconsistent naming schemas, and developers need extensive domain knowledge to map sensor information to a standard format. To enable portability, we present an active learning algorithm that automates the mapping of building sensor metadata to a standard naming schema.

  19. Unidata Cyberinfrastructure in the Cloud

    NASA Astrophysics Data System (ADS)

    Ramamurthy, M. K.; Young, J. W.

    2016-12-01

    Data services, software, and user support are critical components of the geosciences cyberinfrastructure that help researchers advance science. With the maturity of and significant advances in cloud computing, it has emerged as a new paradigm for developing and delivering a broad array of services over the Internet. Cloud computing is now mature enough in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Given the enormous potential of cloud-based services, Unidata has been moving to align its software, services, and data delivery mechanisms with the cloud-computing paradigm. To realize the above vision, Unidata has worked toward: * Providing access to many types of data from the cloud (e.g., via the THREDDS Data Server, RAMADDA and EDEX servers); * Deploying data-proximate tools to easily process, analyze, and visualize those data in a cloud environment for consumption by anyone, on any device, from anywhere, at any time; * Developing and providing a range of pre-configured and well-integrated tools and services that can be deployed by any university in their own private or public cloud settings. Specifically, Unidata has developed Docker containers for its applications, making them easy to deploy. Docker helps to create "disposable" installs and eliminates many configuration challenges. Containerized applications include tools for data transport, access, analysis, and visualization: THREDDS Data Server, Integrated Data Viewer, GEMPAK, Local Data Manager, RAMADDA Data Server, and Python tools; * Leveraging Jupyter as a central platform and hub with its powerful set of interlinking tools to interactively connect data servers, Python scientific libraries, scripts, and workflows; * Exploring end-to-end modeling and prediction capabilities in the cloud; * Partnering with NOAA and public cloud vendors (e.g., Amazon and OCC) on the NOAA Big Data Project to harness their capabilities and resources for the benefit of the academic community.

  20. SparkMed: a framework for dynamic integration of multimedia medical data into distributed m-Health systems.

    PubMed

    Constantinescu, Liviu; Kim, Jinman; Feng, David Dagan

    2012-01-01

    With the advent of 4G and other long-term evolution (LTE) wireless networks, the traditional boundaries of patient record propagation are diminishing as networking technologies extend the reach of hospital infrastructure and provide on-demand mobile access to medical multimedia data. However, due to legacy and proprietary software, storage and decommissioning costs, and the price of centralization and redevelopment, it remains complex, expensive, and often unfeasible for hospitals to deploy their infrastructure for online and mobile use. This paper proposes the SparkMed data integration framework for mobile healthcare (m-Health), which significantly benefits from the enhanced network capabilities of LTE wireless technologies, by enabling a wide range of heterogeneous medical software and database systems (such as the picture archiving and communication systems, hospital information system, and reporting systems) to be dynamically integrated into a cloud-like peer-to-peer multimedia data store. Our framework allows medical data applications to share data with mobile hosts over a wireless network (such as WiFi and 3G), by binding to existing software systems and deploying them as m-Health applications. SparkMed integrates techniques from multimedia streaming, rich Internet applications (RIA), and remote procedure call (RPC) frameworks to construct a Self-managing, Pervasive Automated netwoRK for Medical Enterprise Data (SparkMed). Further, it is resilient to failure, and able to use mobile and handheld devices to maintain its network, even in the absence of dedicated server devices. We have developed a prototype of the SparkMed framework for evaluation on a radiological workflow simulation, which uses SparkMed to deploy a radiological image viewer as an m-Health application for telemedical use by radiologists and stakeholders. We have evaluated our prototype using ten devices over WiFi and 3G, verifying that our framework meets its two main objectives: 1) interactive delivery of medical multimedia data to mobile devices; and 2) attaching to non-networked medical software processes without significantly impacting their performance. Consistent response times of under 500 ms and graphical frame rates of over 5 frames per second were observed under intended usage conditions. Further, overhead measurements displayed linear scalability and low resource requirements.
