Sample records for back-end software sub-system

  1. Low-cost, high-speed back-end processing system for high-frequency ultrasound B-mode imaging.

    PubMed

    Chang, Jin Ho; Sun, Lei; Yen, Jesse T; Shung, K Kirk

    2009-07-01

    For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution.
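
    A quick arithmetic check of the figures quoted in this record (a sketch, not from the paper): 433 images per second corresponds to roughly 2.3 ms per frame, so even at the fastest quoted heart rate of 13 beats per second the system captures about 33 frames per heartbeat.

        # Back-of-the-envelope check using only the numbers quoted in the abstract.
        frame_rate_hz = 433.0      # measured back-end throughput, images per second
        heart_rate_hz = 13.0       # fastest mouse heart rate quoted, beats per second

        frame_period_ms = 1e3 / frame_rate_hz
        frames_per_beat = frame_rate_hz / heart_rate_hz

        print(f"frame period: {frame_period_ms:.2f} ms")       # ~2.31 ms
        print(f"frames per heartbeat: {frames_per_beat:.1f}")  # ~33.3 frames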

  2. Low-Cost, High-Speed Back-End Processing System for High-Frequency Ultrasound B-Mode Imaging

    PubMed Central

    Chang, Jin Ho; Sun, Lei; Yen, Jesse T.; Shung, K. Kirk

    2009-01-01

    For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution. PMID:19574160

  3. Storage system software solutions for high-end user needs

    NASA Technical Reports Server (NTRS)

    Hogan, Carole B.

    1992-01-01

    Today's high-end storage user is one that requires rapid access to a reliable terabyte-capacity storage system running in a distributed environment. This paper discusses conventional storage system software and concludes that this software, designed for other purposes, cannot meet high-end storage requirements. The paper also reviews the philosophy and design of evolving storage system software. It concludes that this new software, designed with high-end requirements in mind, provides the potential for solving not only the storage needs of today but those of the foreseeable future as well.

  4. VLBI2010 Receiver Back End Comparison

    NASA Technical Reports Server (NTRS)

    Petrachenko, Bill

    2013-01-01

    VLBI2010 requires a receiver back-end to convert analog RF signals from the receiver front end into channelized digital data streams to be recorded or transmitted electronically. The back end functions are typically performed in two steps: conversion of analog RF inputs into IF bands (see Table 2), and conversion of IF bands into channelized digital data streams (see Tables 1a, 1b and 1c). The latter IF systems are now completely digital and generically referred to as digital back ends (DBEs). In Table 2 two RF conversion systems are compared, and in Tables 1a, 1b, and 1c nine DBE systems are compared. Since DBE designs are advancing rapidly, the data in these tables are only guaranteed to be current near the update date of this document.

  5. A Failing Grade for the German End-of-Life Vehicles Take-Back System

    ERIC Educational Resources Information Center

    Nakajima, Nina; Vanderburg, Willem H.

    2005-01-01

    The German end-of-life vehicle take-back system is described and analyzed in terms of its impact on the environment and the car companies involved. It is concluded that although this system is often cited as an example of a successful take-back scheme, it is not one that maximizes the value recovered from end-of-life vehicles. As a result,…

  6. 22 CFR 121.8 - End-items, components, accessories, attachments, parts, firmware, software, and systems.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ..., parts, firmware, software, and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software, and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...

  7. 22 CFR 121.8 - End-items, components, accessories, attachments, parts, firmware, software and systems.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ..., parts, firmware, software and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...

  8. 22 CFR 121.8 - End-items, components, accessories, attachments, parts, firmware, software and systems.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ..., parts, firmware, software and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...

  9. 22 CFR 121.8 - End-items, components, accessories, attachments, parts, firmware, software and systems.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., parts, firmware, software and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...

  10. 22 CFR 121.8 - End-items, components, accessories, attachments, parts, firmware, software and systems.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., parts, firmware, software and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...

  11. Front-End/Gateway Software: Availability and Usefulness.

    ERIC Educational Resources Information Center

    Kesselman, Martin

    1985-01-01

    Reviews features of front-end software packages (interface between user and online system)--database selection, search strategy development, saving and downloading, hardware and software requirements, training and documentation, online systems and database accession, and costs--and discusses gateway services (user searches through intermediary…

  12. Engineering of Data Acquiring Mobile Software and Sustainable End-User Applications

    NASA Technical Reports Server (NTRS)

    Smith, Benton T.

    2013-01-01

    The criteria by which data-acquiring software and its supporting infrastructure should be designed should take the following two points into account: the reusability and organization of stored online and remote data and content, and an assessment of whether abandoning a platform-optimized design in favor of a multi-platform solution significantly reduces the performance of an end-user application. Furthermore, in-house applications that control or process instrument-acquired data for end users should be designed with a communication and control interface such that the application's modules can be reused as plug-in modular components in larger software systems. These points are applied using two loosely related projects: a mobile application, and a website containing live and simulated data. For the intelligent devices mobile application AIDM, the end-user interface has a platform- and data-type-optimized design, while the database and back-end applications store this information in an organized manner and restrict access to authorized end-user application(s). Finally, the content for the website was derived from a database so that it can be included, and kept uniform, across all applications accessing it. With these projects ongoing, I have concluded from my research that the methods presented are feasible for both projects, and that a multi-platform design for the mobile application only marginally reduces its performance.

  13. Experimental demonstration of software defined data center optical networks with Tbps end-to-end tunability

    NASA Astrophysics Data System (ADS)

    Zhao, Yongli; Zhang, Jie; Ji, Yuefeng; Li, Hui; Wang, Huitao; Ge, Chao

    2015-10-01

    End-to-end tunability is important for provisioning elastic channels for the bursty traffic of data center optical networks. How, then, can end-to-end tunability be achieved on top of elastic optical networks? A software defined networking (SDN) based end-to-end tunability solution is proposed for software defined data center optical networks, and the protocol extension and implementation procedure are designed accordingly. For the first time, a flexible-grid all-optical network with a Tbps end-to-end tunable transport and switch system has been demonstrated online for data center interconnection, controlled by an OpenDaylight (ODL) based controller. The performance of the end-to-end tunable transport and switch system has been evaluated with wavelength number tuning, bit rate tuning, and transmit power tuning procedures.

  14. The initial data products from the EUVE software - A photon's journey through the End-to-End System

    NASA Technical Reports Server (NTRS)

    Antia, Behram

    1993-01-01

    The End-to-End System (EES) is a unique collection of software modules created for use at the Center for EUV Astrophysics. The 'pipeline' is a shell script which executes selected EES modules and creates initial data products: skymaps, data sets for individual sources (called 'pigeonholes') and catalogs of sources. This article emphasizes the data from the all-sky survey, conducted between July 22, 1992 and January 21, 1993. A description of each of the major data products will be given and, as an example of how the pipeline works, the reader will follow a photon's path through the software pipeline into a pigeonhole. These data products are the primary goal of the EUVE all-sky survey mission, and so their relative importance for the follow-up science will also be discussed.
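
    As a rough illustration of the pipeline idea in this record, the sketch below (hypothetical module and event formats, not the actual EES code) routes photon events into per-source "pigeonholes":

        # Toy pipeline that bins photon events into per-source "pigeonholes".
        # The catalog entries and photon format are hypothetical.
        from collections import defaultdict

        def assign_source(photon, catalog):
            """Return the name of the catalog source whose aperture contains the photon."""
            for name, (sx, sy, radius) in catalog.items():
                if (photon["x"] - sx) ** 2 + (photon["y"] - sy) ** 2 <= radius ** 2:
                    return name
            return None

        def run_pipeline(photons, catalog):
            pigeonholes = defaultdict(list)            # one data set per source
            for p in photons:
                src = assign_source(p, catalog)
                if src is not None:
                    pigeonholes[src].append(p)
            return dict(pigeonholes)

        catalog = {"HZ 43": (10.0, 12.0, 1.5)}         # hypothetical source aperture
        photons = [{"x": 10.3, "y": 11.8, "t": 0.01}]  # hypothetical photon event
        print(run_pipeline(photons, catalog))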

  15. Status report of the end-to-end ASKAP software system: towards early science operations

    NASA Astrophysics Data System (ADS)

    Guzman, Juan Carlos; Chapman, Jessica; Marquarding, Malte; Whiting, Matthew

    2016-08-01

    300 MHz bandwidth for Array Release 1; followed by the deployment of the real-time data processing components. In addition to the Central Processor, the first production release of the CSIRO ASKAP Science Data Archive (CASDA) has also been deployed in one of the Pawsey Supercomputing Centre facilities and it is integrated to the end-to-end ASKAP data flow system. This paper describes the current status of the "end-to-end" data flow software system from preparing observations to data acquisition, processing and archiving; and the challenges of integrating an HPC facility as a key part of the instrument. It also shares some lessons learned since the start of integration activities and the challenges ahead in preparation for the start of the Early Science program.

  16. An open-source data storage and visualization back end for experimental data.

    PubMed

    Nielsen, Kenneth; Andersen, Thomas; Jensen, Robert; Nielsen, Jane H; Chorkendorff, Ib

    2014-04-01

    In this article, a flexible free and open-source software system for data logging and presentation will be described. The system is highly modular and adaptable and can be used in any laboratory in which continuous and/or ad hoc measurements require centralized storage. A presentation component for the data back end has furthermore been written that enables live visualization of data on any device capable of displaying Web pages. The system consists of three parts: data-logging clients, a data server, and a data presentation Web site. The logging of data from independent clients leads to high resilience to equipment failure, whereas the central storage of data dramatically eases backup and data exchange. The visualization front end allows direct monitoring of acquired data to see live progress of long-duration experiments. This enables the user to alter experimental conditions based on these data and to interfere with the experiment if needed. The data stored consist both of specific measurements and of continuously logged system parameters. The latter is crucial to a variety of automation and surveillance features, and three cases of such features are described: monitoring system health, getting status of long-duration experiments, and implementation of instant alarms in the event of failure.
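
    The three-part architecture described here (independent logging clients, a central data server, a web front end) can be sketched minimally as follows; the endpoint URL and payload format are assumptions for illustration, not the project's actual protocol:

        # Minimal data-logging client: push a measurement to a central server,
        # buffer locally if the push fails so equipment outages lose no data.
        import json, time, urllib.request

        SERVER = "http://dataserver.example.org/api/log"   # hypothetical endpoint
        local_buffer = []                                  # resilience to outages

        def log_point(codename, value):
            point = {"time": time.time(), "codename": codename, "value": value}
            try:
                req = urllib.request.Request(
                    SERVER, data=json.dumps(point).encode(),
                    headers={"Content-Type": "application/json"})
                urllib.request.urlopen(req, timeout=5)
            except OSError:
                local_buffer.append(point)   # keep the point and retry later

        log_point("chamber_pressure", 2.3e-9)   # hypothetical continuously logged value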

  17. Software Management System

    NASA Technical Reports Server (NTRS)

    1994-01-01

    A software management system, originally developed for Goddard Space Flight Center (GSFC) by Century Computing, Inc. has evolved from a menu and command oriented system to a state-of-the art user interface development system supporting high resolution graphics workstations. Transportable Applications Environment (TAE) was initially distributed through COSMIC and backed by a TAE support office at GSFC. In 1993, Century Computing assumed the support and distribution functions and began marketing TAE Plus, the system's latest version. The software is easy to use and does not require programming experience.

  18. Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models.

    PubMed

    Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A

    2014-01-01

    Multiple software programs are available for designing and running large-scale systems-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs, and so on. Therefore, it is desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specifics are preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients.
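
    The central idea, a model compiled into a self-contained executable whose parameters are plain inputs, can be sketched as below; the executable name and argument convention are hypothetical, not the ViSP interface:

        # Launch one simulation run of a pre-compiled, self-contained model
        # executable, passing every model parameter as a key=value argument.
        import subprocess

        def run_simulation(executable, parameters):
            args = [executable] + [f"{k}={v}" for k, v in parameters.items()]
            result = subprocess.run(args, capture_output=True, text=True, check=True)
            return result.stdout                      # e.g. simulated time courses

        # One hypothetical virtual-patient scenario:
        output = run_simulation("./metabolic_model",
                                {"metformin_dose_mg": 500, "fasiglifam_dose_mg": 25})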

  19. Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models

    PubMed Central

    Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A.

    2014-01-01

    Multiple software programs are available for designing and running large-scale systems-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs, and so on. Therefore, it is desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specifics are preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients. PMID:25374542

  20. An End-to-End System to Enable Quick, Easy and Inexpensive Deployment of Hydrometeorological Stations

    NASA Astrophysics Data System (ADS)

    Celicourt, P.; Piasecki, M.

    2014-12-01

    The high cost of hydro-meteorological data acquisition, communication and publication systems, along with limited qualified human resources, is considered the main reason why hydro-meteorological data collection remains a challenge, especially in developing countries. Despite significant advances in sensor network technologies, which in the last two decades gave birth to open hardware and software and to low-cost (less than $50), low-power (on the order of a few milliwatts) sensor platforms, sensor and sensor network deployment remains a labor-intensive, time-consuming, cumbersome, and thus expensive task. These factors give rise to the need for an affordable, simple-to-deploy, scalable and self-organizing end-to-end (from sensor to publication) system suitable for deployment in such countries. The envisioned system will consist of a few Sensed-And-Programmed Arduino-based sensor nodes with low-cost sensors measuring parameters relevant to hydrological processes, and a Raspberry Pi micro-computer hosting the in-the-field back-end data management. The latter comprises the Python/Django model of the CUAHSI Observations Data Model (ODM), namely DjangODM, backed by a PostgreSQL Database Server. We are also developing a Python-based data processing script which will be paired with the data autoloading capability of Django to populate the DjangODM database with the incoming data. To publish the data, we will use WOFpy (WaterOneFlow Web Services in Python), developed by the Texas Water Development Board for 'Water Data for Texas', which can produce WaterML web services from a variety of back-end database installations such as SQLite, MySQL, and PostgreSQL. A step further would be the development of an appealing online visualization tool, using Python statistics and analytics tools (Scipy, Numpy, Pandas), showing the spatial distribution of variables across an entire watershed as a time-variant layer on top of a basemap.
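
    As an illustration of the in-the-field back end described here, the sketch below stores ODM-style observations; the real system uses DjangODM on PostgreSQL, so this SQLite schema is a deliberate simplification:

        # Reduced, ODM-flavored observations table for a field data logger.
        import sqlite3, time

        conn = sqlite3.connect("station.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS observations (
                            value     REAL,
                            timestamp REAL,
                            variable  TEXT,   -- e.g. 'rainfall'
                            unit      TEXT,   -- e.g. 'mm'
                            site      TEXT)""")

        def store_observation(value, variable, unit, site):
            conn.execute("INSERT INTO observations VALUES (?, ?, ?, ?, ?)",
                         (value, time.time(), variable, unit, site))
            conn.commit()

        store_observation(1.2, "rainfall", "mm", "station-01")   # illustrative reading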

  1. Effectiveness of back-to-back testing

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.; Eckhardt, David E.; Caglayan, Alper; Kelly, John P. J.

    1987-01-01

    Three models of back-to-back testing processes are described. Two models treat the case where there is no intercomponent failure dependence. The third model describes the more realistic case where there is correlation among the failure probabilities of the functionally equivalent components. The theory indicates that back-to-back testing can, under the right conditions, provide a considerable gain in software reliability. The models are used to analyze the data obtained in a fault-tolerant software experiment. It is shown that the expected gain is indeed achieved, and exceeded, provided the intercomponent failure dependence is sufficiently small. However, even with the relatively high correlation the use of several functionally equivalent components coupled with back-to-back testing may provide a considerable reliability gain. Implications of this finding are that the multiversion software development is a feasible and cost effective approach to providing highly reliable software components intended for fault-tolerant software systems, on condition that special attention is directed at early detection and elimination of correlated faults.
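
    The essence of back-to-back testing, running functionally equivalent components on the same inputs and flagging any disagreement as a suspected fault, can be shown with a toy example (the two versions below are illustrative, not from the experiment):

        # Back-to-back testing of two functionally equivalent implementations.
        def version_a(x):
            return x * x

        def version_b(x):
            # independently written implementation; deliberately faulty at x == 3
            return 10 if x == 3 else x ** 2

        def back_to_back(inputs, versions):
            discrepancies = []
            for x in inputs:
                outputs = [v(x) for v in versions]
                if len(set(outputs)) > 1:        # versions disagree -> suspect fault
                    discrepancies.append((x, outputs))
            return discrepancies

        print(back_to_back(range(10), [version_a, version_b]))   # [(3, [9, 10])]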

  2. Back-end of the fuel cycle - Indian scenario

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wattal, P.K.

    Nuclear power has a key role in meeting the energy demands of India. This can be sustained by ensuring robust technology for the back end of the fuel cycle. Considering the modest indigenous resources of U and a huge Th reserve, India has adopted a three stage Nuclear Power Programme (NPP) based on 'closed fuel cycle' approach. This option on 'Recovery and Recycle' serves twin objectives of ensuring adequate supply of nuclear fuel and also reducing the long term radio-toxicity of the wastes. Reprocessing of the spent fuel by Purex process is currently employed. High Level Liquid Waste (HLW) generated during reprocessing is vitrified and undergoes interim storage. Back-end technologies are constantly modified to address waste volume minimization and radio-toxicity reduction. Long-term management of HLW in Indian context would involve partitioning of long lived minor actinides and recovery of valuable fission products specifically cesium. Recovery of minor actinides from HLW and its recycle is highly desirable for the sustained growth of India's NPPs. In this context, programme for developing and deploying partitioning technologies on industrial scale is pursued. The partitioned elements could be either transmuted in Fast Reactors (FRs)/Accelerated Driven Systems (ADS) as an integral part of sustainable Indian NPP. (authors)

  3. 40 CFR 63.493 - Back-end process provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.493 Back-end process provisions. Owners and operators of new and existing affected sources shall comply with the requirements in...

  4. Towards a cross-platform software framework to support end-to-end hydrometeorological sensor network deployment

    NASA Astrophysics Data System (ADS)

    Celicourt, P.; Sam, R.; Piasecki, M.

    2016-12-01

    Global phenomena such as climate change and large scale environmental degradation require the collection of accurate environmental data at detailed spatial and temporal scales from which knowledge and actionable insights can be derived using data science methods. Despite significant advances in sensor network technologies, sensors and sensor network deployment remains a labor-intensive, time consuming, cumbersome and expensive task. These factors demonstrate why environmental data collection remains a challenge especially in developing countries where technical infrastructure, expertise and pecuniary resources are scarce. In addition, they also demonstrate the reason why dense and long-term environmental data collection has been historically quite difficult. Moreover, hydrometeorological data collection efforts usually overlook the (critically important) inclusion of a standards-based system for storing, managing, organizing, indexing, documenting and sharing sensor data. We are developing a cross-platform software framework using the Python programming language that will allow us to develop a low cost end-to-end (from sensor to publication) system for hydrometeorological conditions monitoring. The software framework contains provision for sensor, sensor platforms, calibration and network protocols description, sensor programming, data storage, data publication and visualization and more importantly data retrieval in a desired unit system. It is being tested on the Raspberry Pi microcomputer as end node and a laptop PC as the base station in a wireless setting.

  5. Emission properties and back-bombardment for CeB{sub 6} compared to LaB{sub 6}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakr, Mahmoud, E-mail: m-a-bakr@iae.kyoto-u.ac.jp; Kawai, M.; Kii, T.

    The emission properties of CeB{sub 6} compared to LaB{sub 6} thermionic cathodes have been measured using an electrostatic DC gun. Obtaining knowledge of the emission properties is the first step in understanding the back-bombardment effect that limits wide usage of thermionic radio-frequency electron guns. The effect of back-bombardment electrons on CeB{sub 6} compared to LaB{sub 6} was studied using a numerical simulation model. The results show that for 6 μs pulse duration with input radio-frequency power of 8 MW, CeB{sub 6} should experience 14% lower temperature increase and 21% lower current density rise compared to LaB{sub 6}. We conclude that CeB{sub 6} has the potential to become the future replacement for LaB{sub 6} thermionic cathodes in radio-frequency electron guns.

  6. Source-Constrained Recall: Front-End and Back-End Control of Retrieval Quality

    ERIC Educational Resources Information Center

    Halamish, Vered; Goldsmith, Morris; Jacoby, Larry L.

    2012-01-01

    Research on the strategic regulation of memory accuracy has focused primarily on monitoring and control processes used to edit out incorrect information after it is retrieved (back-end control). Recent studies, however, suggest that rememberers also enhance accuracy by preventing the retrieval of incorrect information in the first place (front-end…

  7. Back-illuminate fiber system research for multi-object fiber spectroscopic telescope

    NASA Astrophysics Data System (ADS)

    Zhou, Zengxiang; Liu, Zhigang; Hu, Hongzhuan; Wang, Jianping; Zhai, Chao; Chu, Jiaru

    2016-07-01

    In telescope observations, the position of each fiber strongly influences how efficiently light is coupled into the fiber and delivered to the spectrograph. When the fibers are back-illuminated at the spectrograph end, they emit light at the positioner end, so CCD cameras can photograph the fiber tip positions across the focal plane, compute precise position information with a light-centroid method, and feed it back to the control system. Using parallel-controlled fiber positioners as the spectroscopic receiver is an efficient observation scheme for spectral surveys; it has been used in LAMOST recently and is proposed for CFHT and the rebuilt Mayall telescope. After many years of research, back-illuminated fiber measurement has proved the best method for acquiring precise fiber positions. In LAMOST, a fiber back-illumination system was developed and coupled to the low-resolution spectrograph instruments. It provides uniform light output to the fibers, meets the requirements of the CCD camera measurement, and is controlled by the high-level observation system. The paper introduces the design of the back-illumination system and tests of different light sources; after optimization, the illumination system performs comparably to an integrating sphere and satisfies the conditions for fiber position measurement.
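
    The light-centroid step mentioned in this record amounts to an intensity-weighted mean of pixel coordinates over each back-illuminated fiber spot; a minimal sketch with synthetic image data:

        # Intensity-weighted centroid of a fiber spot in a CCD image (synthetic data).
        import numpy as np

        def light_centroid(image):
            """Return the (x, y) centroid of a 2-D intensity image."""
            ys, xs = np.indices(image.shape)
            total = image.sum()
            return (xs * image).sum() / total, (ys * image).sum() / total

        spot = np.zeros((64, 64))
        spot[30:33, 40:43] = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]   # synthetic fiber spot
        print(light_centroid(spot))   # approximately (41.0, 31.0)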

  8. LWS/SET End-to-End Data System

    NASA Technical Reports Server (NTRS)

    Giffin, Geoff; Sherman, Barry; Colon, Gilberto (Technical Monitor)

    2002-01-01

    This paper describes the concept for the End-to-End Data System that will support NASA's Living With a Star Space Environment Testbed missions. NASA has initiated the Living With a Star (LWS) Program to develop a better scientific understanding to address the aspects of the connected Sun-Earth system that affect life and society. A principal goal of the program is to bridge the gap between science, engineering, and user application communities. The Space Environment Testbed (SET) Project is one element of LWS. The Project will enable future science, operational, and commercial objectives in space and atmospheric environments by improving engineering approaches to the accommodation and/or mitigation of the effects of solar variability on technological systems. The end-to-end data system allows investigators to access the SET control center, command their experiments, and receive data from their experiments back at their home facility, using the Internet. The logical functioning of major components of the end-to-end data system is described, including the GSFC Payload Operations Control Center (POCC), SET Payloads, the GSFC SET Simulation Lab, SET Experiment PI Facilities, and Host Systems. Host Spacecraft Operations Control Centers (SOCC) and the Host Spacecraft are essential links in the end-to-end data system, but are not directly under the control of the SET Project. Formal interfaces will be established between these entities and elements of the SET Project. The paper describes data flow through the system, from PI facilities connecting to the SET operations center via the Internet, communications to SET carriers and experiments via host systems, to telemetry returns to investigators from their flight experiments. It also outlines the techniques that will be used to meet mission requirements, while holding development and operational costs to a minimum. Additional information is included in the original extended abstract.

  9. Back-end and interface implementation of the STS-XYTER2 prototype ASIC for the CBM experiment

    NASA Astrophysics Data System (ADS)

    Kasinski, K.; Szczygiel, R.; Zabolotny, W.

    2016-11-01

    Each front-end readout ASIC for the High-Energy Physics experiments requires robust and effective hit data streaming and control mechanism. A new STS-XYTER2 full-size prototype chip for the Silicon Tracking System and Muon Chamber detectors in the Compressed Baryonic Matter experiment at Facility for Antiproton and Ion Research (FAIR, Germany) is a 128-channel time and amplitude measuring solution for silicon microstrip and gas detectors. It operates at 250 kHit/s/channel hit rate, each hit producing 27 bits of information (5-bit amplitude, 14-bit timestamp, position and diagnostics data). The chip back-end implements fast front-end channel read-out, timestamp-wise hit sorting, and data streaming via a scalable interface implementing the dedicated protocol (STS-HCTSP) for chip control and hit transfer with data bandwidth from 9.7 MHit/s up to 47 MHit/s. It also includes multiple options for link diagnostics, failure detection, and throttling features. The back-end is designed to operate with the data acquisition architecture based on the CERN GBTx transceivers. This paper presents the details of the back-end and interface design and its implementation in the UMC 180 nm CMOS process.

  10. NEWFIRM Software--System Integration Using OPC

    NASA Astrophysics Data System (ADS)

    Daly, P. N.

    2004-07-01

    The NOAO Extremely Wide-Field Infra-Red Mosaic (NEWFIRM) camera is being built to satisfy the survey science requirements on the KPNO Mayall and CTIO Blanco 4m telescopes in an era of 8m+ aperture telescopes. Rather than re-invent the wheel, the software system to control the instrument has taken existing software packages and re-used what is appropriate. The result is an end-to-end observation control system using technology components from DRAMA, ORAC, observing tools, GWC, existing in-house motor controllers and new developments like the MONSOON pixel server.

  11. The MeqTrees software system and its use for third-generation calibration of radio interferometers

    NASA Astrophysics Data System (ADS)

    Noordam, J. E.; Smirnov, O. M.

    2010-12-01

    Context. The formulation of the radio interferometer measurement equation (RIME) for a generic radio telescope by Hamaker et al. has provided us with an elegant mathematical apparatus for better understanding, simulation and calibration of existing and future instruments. The calibration of the new radio telescopes (LOFAR, SKA) would be unthinkable without the RIME formalism, and new software to exploit it. Aims: The MeqTrees software system is designed to implement numerical models, and to solve for arbitrary subsets of their parameters. It may be applied to many problems, but was originally geared towards implementing Measurement Equations in radio astronomy for the purposes of simulation and calibration. The technical goal of MeqTrees is to provide a tool for rapid implementation of such models, while offering performance comparable to hand-written code. We are also pursuing the wider goal of increasing the rate of evolution of radio astronomical software, by offering a tool that facilitates rapid experimentation, and exchange of ideas (and scripts). Methods: MeqTrees is implemented as a Python-based front-end called the meqbrowser, and an efficient (C++-based) computational back-end called the meqserver. Numerical models are defined on the front-end via a Python-based Tree Definition Language (TDL), then rapidly executed on the back-end. The use of TDL facilitates an extremely short turn-around time (hours rather than weeks or months) for experimentation with new ideas. This is also helped by unprecedented visualization capabilities for all final and intermediate results. A flexible data model and a number of important optimizations in the back-end ensures that the numerical performance is comparable to that of hand-written code. Results: MeqTrees is already widely used as the simulation tool for new instruments (LOFAR, SKA) and technologies (focal plane arrays). It has demonstrated that it can achieve a noise-limited dynamic range in excess of a million, on

  12. A new database sub-system for grain-size analysis

    NASA Astrophysics Data System (ADS)

    Suckow, Axel

    2013-04-01

    content, sand content, etc., which always only displays part of the available information at each depth. Alternatively, full spectra were displayed at one depth. The new software now allows to display the whole grain-size spectrum at each depth in a three dimensional display. LabData and the grain-size subsystem are based on MS Access as front-end and MS SQL Server as back-end database systems. The SQL code for the data model, SQL server procedures and triggers and the MS Access basic code for the front end are public domain code, published under the GNU GPL license agreement and are available free of charge. References: Novothny, Á., Frechen, M., Horváth, E., Wacha, L., Rolf, C., 2011. Investigating the penultimate and last glacial cycles of the Sütt dating, high-resolution grain size, and magnetic susceptibility data. Quaternary International 234, 75-85. Suckow, A., Dumke, I., 2001. A database system for geochemical, isotope hydrological and geochronological laboratories. Radiocarbon 43, 325-337.

  13. Magneto-transport study of top- and back-gated LaAlO{sub 3}/SrTiO{sub 3} heterostructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W., E-mail: W.Liu@unige.ch; Gariglio, S.; Fête, A.

    2015-06-01

    We report a detailed analysis of magneto-transport properties of top- and back-gated LaAlO{sub 3}/SrTiO{sub 3} heterostructures. Efficient modulation in magneto-resistance, carrier density, and mobility of the two-dimensional electron liquid present at the interface is achieved by sweeping top and back gate voltages. Analyzing those changes with respect to the carrier density tuning, we observe that the back gate strongly modifies the electron mobility while the top gate mainly varies the carrier density. The evolution of the spin-orbit interaction is also followed as a function of top and back gating.

  14. A Distributed Simulation Software System for Multi-Spacecraft Missions

    NASA Technical Reports Server (NTRS)

    Burns, Richard; Davis, George; Cary, Everett

    2003-01-01

    The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements, with functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival and distribution addressed.

  15. Putting Safety in the Software

    NASA Technical Reports Server (NTRS)

    Wetherholt, Martha S.; Berens, Kalynnda M.; Hardy, Sandra (Technical Monitor)

    2001-01-01

    Software is a vital component of nearly every piece of modern technology. It is not a 'sub-system', able to be separated out from the system as a whole, but a 'co-system' that controls, manipulates, or interacts with the hardware and with the end user. Software has its fingers into all the pieces of the pie. If that 'pie', the system, can lead to injury, death, loss of major equipment, or impact your business bottom line, then software safety becomes vitally important. Learning to think about software from a safety perspective is the focus of this paper. We want you to think of software as part of the safety critical system, a major part. This requires 'system thinking' - being able to grasp the whole picture. Software's contribution to modern technology is both good and potentially bad. Software allows more complex and useful devices to be built. It can also contribute to plane crashes and power outages. We want you to see software in a whole new light, see it as a contributor to system hazards, and also as a possible fix or mitigation to some of those hazards.

  16. Integration of Photo-Patternable Low-κ Material into Advanced Cu Back-End-Of-The-Line

    NASA Astrophysics Data System (ADS)

    Lin, Qinghuang; Nelson, Alshakim; Chen, Shyng-Tsong; Brock, Philip; Cohen, Stephan A.; Davis, Blake; Kaplan, Richard; Kwong, Ranee; Liniger, Eric; Neumayer, Debra; Patel, Jyotica; Shobha, Hosadurga; Sooriyakumaran, Ratnam; Purushothaman, Sampath; Miller, Robert; Spooner, Terry; Wisnieff, Robert

    2010-05-01

    We report herein the demonstration of a simple, low-cost Cu back-end-of-the-line (BEOL) dual-damascene integration using a novel photo-patternable low-κ dielectric material concept that dramatically reduces Cu BEOL integration complexity. This κ=2.7 photo-patternable low-κ material is based on the SiCOH-based material platform and has sub-200 nm resolution capability with 248 nm optical lithography. Cu/photo-patternable low-κ dual-damascene integration at 45 nm node BEOL fatwire levels has been demonstrated with very high electrical yields using the current manufacturing infrastructure. The photo-patternable low-κ concept is, therefore, a promising technology for highly efficient semiconductor Cu BEOL manufacturing.

  17. Towards a Software Framework to Support Deployment of Low Cost End-to-End Hydroclimatological Sensor Network

    NASA Astrophysics Data System (ADS)

    Celicourt, P.; Piasecki, M.

    2015-12-01

    Deployment of environmental sensors assemblies based on cheap platforms such as Raspberry Pi and Arduino have gained much attention over the past few years. While they are more attractive due to their ability to be controlled with a few programming language choices, the configuration task can become quite complex due to the need of having to learn several different proprietary data formats and protocols which constitute a bottleneck for the expansion of sensor network. In response to this rising complexity the Institute of Electrical and Electronics Engineers (IEEE) has sponsored the development of the IEEE 1451 standard in an attempt to introduce a common standard. The most innovative concept of the standard is the Transducer Electronic Data Sheet (TEDS) which enables transducers to self-identify, self-describe, self-calibrate, to exhibit plug-and-play functionality, etc. We used Python to develop an IEEE 1451.0 platform-independent graphical user interface to generate and provide sufficient information about almost ANY sensor and sensor platforms for sensor programming purposes, automatic calibration of sensors data, incorporation of back-end demands on data management in TEDS for automatic standard-based data storage, search and discovery purposes. These features are paramount to make data management much less onerous in large scale sensor network. Along with the TEDS Creator, we developed a tool namely HydroUnits for three specific purposes: encoding of physical units in the TEDS, dimensional analysis, and on-the-fly conversion of time series allowing users to retrieve data in a desired equivalent unit while accommodating unforeseen and user-defined units. In addition, our back-end data management comprises the Python/Django equivalent of the CUAHSI Observations Data Model (ODM) namely DjangODM that will be hosted by a MongoDB Database Server which offers more convenience for our application. We are also developing a data which will be paired with the data
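
    The on-the-fly unit conversion mentioned for HydroUnits can be illustrated with a small conversion-factor table (the actual tool encodes units in TEDS and performs full dimensional analysis; this is a toy subset):

        # Convert a retrieved time series into a desired equivalent unit.
        CONVERSIONS = {("mm", "in"): 1.0 / 25.4,     # (from_unit, to_unit) -> factor
                       ("m3/s", "L/s"): 1000.0}

        def convert_series(values, from_unit, to_unit):
            factor = 1.0 if from_unit == to_unit else CONVERSIONS[(from_unit, to_unit)]
            return [v * factor for v in values]

        print(convert_series([12.7, 25.4], "mm", "in"))   # [0.5, 1.0]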

  18. Applying Trustworthy Computing to End-to-End Electronic Voting

    ERIC Educational Resources Information Center

    Fink, Russell A.

    2010-01-01

    "End-to-End (E2E)" voting systems provide cryptographic proof that the voter's intention is captured, cast, and tallied correctly. While E2E systems guarantee integrity independent of software, most E2E systems rely on software to provide confidentiality, availability, authentication, and access control; thus, end-to-end integrity is not…

  19. Control Software for the VERITAS Cerenkov Telescope System

    NASA Astrophysics Data System (ADS)

    Krawczynski, H.; Olevitch, M.; Sembroski, G.; Gibbs, K.

    2003-07-01

    The VERITAS collaboration is developing a system of initially 4 and eventually 7 Cerenkov telescopes of the 12 m diameter class for high-sensitivity gamma-ray astronomy in the >50 GeV energy range. In this contribution we describe the software that controls and monitors the various VERITAS subsystems. The software uses an object-oriented approach to cope with the complexities that arise from using sub-groups of the 7 VERITAS telescopes to observe several sources at the same time. Inter-process communication is based on the CORBA Object Request Broker protocol, and watch-dog processes monitor the sub-system performance.

  20. Third-Party Software's Trust Quagmire.

    PubMed

    Voas, J; Hurlburt, G

    2015-12-01

    Current software development has trended toward the idea of integrating independent software sub-functions to create more complete software systems. Software sub-functions are often not homegrown - instead they are developed by unknown third-party organizations and reside in software marketplaces owned or controlled by others. Such software sub-functions carry plausible concerns about quality, origins, functionality, security, and interoperability, to name a few. This article surveys key technical difficulties in confidently building systems from acquired software sub-functions by calling out the principal software supply chain actors.

  1. 40 CFR 63.497 - Back-end process provisions-monitoring provisions for control and recovery devices.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 9 2010-07-01 2010-07-01 false Back-end process provisions-monitoring... Polymers and Resins § 63.497 Back-end process provisions—monitoring provisions for control and recovery devices. (a) An owner or operator complying with the residual organic HAP limitations in § 63.494(a) using...

  2. High-resolution physical and biogeochemical variability from a shallow back reef on Ofu, American Samoa: an end-member perspective

    NASA Astrophysics Data System (ADS)

    Koweek, David A.; Dunbar, Robert B.; Monismith, Stephen G.; Mucciarone, David A.; Woodson, C. Brock; Samuel, Lianna

    2015-09-01

    Shallow back reefs commonly experience greater thermal and biogeochemical variability owing to a combination of coral community metabolism, environmental forcing, flow regime, and water depth. We present results from a high-resolution (sub-hourly to sub-daily) hydrodynamic and biogeochemical study, along with a coupled long-term (several months) hydrodynamic study, conducted on the back reefs of Ofu, American Samoa. During the high-resolution study, mean temperature was 29.0 °C with maximum temperatures near 32 °C. Dissolved oxygen concentrations spanned 32-178 % saturation, and pHT spanned the range from 7.80 to 8.39 with diel ranges reaching 0.58 units. Empirical cumulative distribution functions reveal that pHT was between 8.0 and 8.2 during only 30 % of the observational period, with approximately even distribution of the remaining 70 % of the time between pHT values less than 8.0 and greater than 8.2. Thermal and biogeochemical variability in the back reefs is partially controlled by tidal modulation of wave-driven flow, which isolates the back reefs at low tide and brings offshore water into the back reefs at high tide. The ratio of net community calcification to net community production was 0.15 ± 0.01, indicating that metabolism on the back reef was dominated by primary production and respiration. Similar to other back reef systems, the back reefs of Ofu are carbon sinks during the daytime. Shallow back reefs like those in Ofu may provide insights for how coral communities respond to extreme temperatures and acidification and are deserving of continued attention.

  3. Front End Software for Online Database Searching. Part 2: The Marketplace.

    ERIC Educational Resources Information Center

    Levy, Louise R.; Hawkins, Donald T.

    1986-01-01

    This article analyzes the front end software marketplace and discusses some of the complex forces influencing it. Discussion covers intermediary market; end users (library customers, scientific and technical professionals, corporate business specialists, consumers); marketing strategies; a British front end development firm; competitive pressures;…

  4. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology.

    PubMed

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E; Troein, Carl; Millar, Andrew J; Goryanin, Igor; Gilmore, Stephen

    2013-03-01

    Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI's use of standard data formats. All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials.
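
    The parameter-fitting problem SBSI is built around can be reduced to a toy example: choose model parameters that minimize the squared error against experimental data. SBSINumerics does this with parallelized algorithms on back-end servers; the brute-force search below only illustrates the objective being solved, with made-up data:

        # Fit a one-parameter decay model to toy "experimental" data by
        # minimizing the sum of squared residuals.
        import math

        data = [(0.0, 1.00), (1.0, 0.61), (2.0, 0.37)]   # (time, measured value)

        def model(t, k):
            return math.exp(-k * t)

        def cost(k):
            return sum((model(t, k) - y) ** 2 for t, y in data)

        best_k = min((k / 1000.0 for k in range(1, 2000)), key=cost)
        print(f"fitted k = {best_k:.3f}")   # approximately 0.50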

  5. Implementing the concurrent operation of sub-arrays in the ALMA correlator

    NASA Astrophysics Data System (ADS)

    Amestica, Rodrigo; Perez, Jesus; Lacasse, Richard; Saez, Alejandro

    2016-07-01

    The ALMA correlator processes the digitized signals from 64 individual antennas to produce a grand total of 2016 correlated base-lines, with runtime selectable lags resolution and integration time. The on-line software system can process a maximum of 125M visibilities per second, producing an archiving data rate close to one sixteenth of the former (7.8M visibilities per second with a network transfer limit of 60 MB/sec). Mechanisms in the correlator hardware design make it possible to split the total number of antennas in the array into smaller subsets, or sub-arrays, such that they can share correlator resources while executing independent observations. The software part of the sub-system is responsible for configuring and scheduling correlator resources in such a way that observations among independent subarrays occur simultaneously while internally sharing correlator resources under a cooperative arrangement. Configuration of correlator modes through its CAN-bus interface and periodic geometric delay updates are the most relevant activities to schedule concurrently while observations happen at the same time among a number of sub-arrays. For that to work correctly, the software interface to sub-arrays schedules shared correlator resources sequentially before observations actually start on each sub-array. Start times for specific observations are optimized and reported back to the higher level observing software. After that initial sequential phase has taken place then simultaneous executions and recording of correlated data across different sub-arrays move forward concurrently, sharing the local network to broadcast results to other software sub-systems. The present paper presents an overview of the different hardware and software actors within the correlator sub-system that implement some degree of concurrency and synchronization needed for seamless and simultaneous operation of multiple sub-arrays, limitations stemming from the resource-sharing nature of the
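
    The scheduling pattern described, sequential configuration of shared correlator resources followed by concurrent sub-array observations, can be sketched with a lock and threads; names and durations below are illustrative only:

        # Shared resources are configured one sub-array at a time (serialized by
        # a lock), after which the sub-array observations run concurrently.
        import threading, time

        config_lock = threading.Lock()

        def observe(subarray, antennas):
            with config_lock:                       # sequential configuration phase
                print(f"{subarray}: configuring correlator for {len(antennas)} antennas")
                time.sleep(0.1)                     # stands in for CAN-bus mode setup
            print(f"{subarray}: observing...")      # concurrent observation phase
            time.sleep(0.5)
            print(f"{subarray}: done")

        threads = [threading.Thread(target=observe, args=(name, ants))
                   for name, ants in [("subarray-1", range(40)), ("subarray-2", range(24))]]
        for t in threads:
            t.start()
        for t in threads:
            t.join()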

  6. A system for automatic evaluation of simulation software

    NASA Technical Reports Server (NTRS)

    Ryan, J. P.; Hodges, B. C.

    1976-01-01

    Within the field of computer software, simulation and verification are complementary processes. Simulation methods can be used to verify software by performing variable range analysis. More general verification procedures, such as those described in this paper, can be implicitly viewed as attempts at modeling the end-product software. From software requirement methodology, each component of the verification system has some element of simulation to it. Conversely, general verification procedures can be used to analyze simulation software. A dynamic analyzer is described which can be used to obtain properly scaled variables for an analog simulation, which is first digitally simulated. In a similar way, it is thought that the other system components, and indeed the whole system itself, have the potential to be used effectively in a simulation environment.

  7. Information Extraction for System-Software Safety Analysis: Calendar Year 2007 Year-End Report

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2008-01-01

    This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis on the models to identify possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations; 4) perform discrete-time-based simulation on the models to investigate scenarios where these paths may play a role in failures and mishaps; and 5) identify resulting candidate scenarios for software integration testing. This paper describes new challenges in a NASA abort system case, and enhancements made to develop the integrated tool set.
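
    Task 3 in this record, finding paths from hazard sources to vulnerable entities in a model graph, can be illustrated with a small directed graph and a depth-first search; the graph content is hypothetical:

        # Enumerate all paths from a hazard source to a vulnerable function in a
        # directed graph (adjacency-list model; node names are hypothetical).
        def find_paths(graph, source, target, path=None):
            path = (path or []) + [source]
            if source == target:
                return [path]
            paths = []
            for nxt in graph.get(source, []):
                if nxt not in path:                 # avoid cycles
                    paths.extend(find_paths(graph, nxt, target, path))
            return paths

        model = {"valve_stuck_open": ["pressure_sensor"],
                 "pressure_sensor": ["flight_software"],
                 "flight_software": ["abort_decision"]}

        print(find_paths(model, "valve_stuck_open", "abort_decision"))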

  8. Building a Snow Data Management System using Open Source Software (and IDL)

    NASA Astrophysics Data System (ADS)

    Goodale, C. E.; Mattmann, C. A.; Ramirez, P.; Hart, A. F.; Painter, T.; Zimdars, P. A.; Bryant, A.; Brodzik, M.; Skiles, M.; Seidel, F. C.; Rittger, K. E.

    2012-12-01

    At NASA's Jet Propulsion Laboratory free and open source software is used everyday to support a wide range of projects, from planetary to climate to research and development. In this abstract I will discuss the key role that open source software has played in building a robust science data processing pipeline for snow hydrology research, and how the system is also able to leverage programs written in IDL, making JPL's Snow Data System a hybrid of open source and proprietary software. Main Points: - The Design of the Snow Data System (illustrate how the collection of sub-systems are combined to create a complete data processing pipeline) - Discuss the Challenges of moving from a single algorithm on a laptop, to running 100's of parallel algorithms on a cluster of servers (lesson's learned) - Code changes - Software license related challenges - Storage Requirements - System Evolution (from data archiving, to data processing, to data on a map, to near-real-time products and maps) - Road map for the next 6 months (including how easily we re-used the snowDS code base to support the Airborne Snow Observatory Mission) Software in Use and their Software Licenses: IDL - Used for pre and post processing of data. Licensed under a proprietary software license held by Excelis. Apache OODT - Used for data management and workflow processing. Licensed under the Apache License Version 2. GDAL - Geospatial Data processing library used for data re-projection currently. Licensed under the X/MIT license. GeoServer - WMS Server. Licensed under the General Public License Version 2.0 Leaflet.js - Javascript web mapping library. Licensed under the Berkeley Software Distribution License. Python - Glue code and miscellaneous data processing support. Licensed under the Python Software Foundation License. Perl - Script wrapper for running the SCAG algorithm. Licensed under the General Public License Version 3. PHP - Front-end web application programming. Licensed under the PHP License Version

  9. BAM/DASS: Data Analysis Software for Sub-Microarcsecond Astrometry Device

    NASA Astrophysics Data System (ADS)

    Gardiol, D.; Bonino, D.; Lattanzi, M. G.; Riva, A.; Russo, F.

    2010-12-01

    The INAF - Osservatorio Astronomico di Torino is part of the Data Processing and Analysis Consortium (DPAC) for Gaia, a cornerstone mission of the European Space Agency. Gaia will perform global astrometry by means of two telescopes looking at the sky along two different lines of sight oriented at a fixed angle, also called the basic angle. Knowledge of the basic angle fluctuations at the sub-microarcsecond level over periods on the order of a minute is crucial to reach the mission goals. A specific device, the Basic Angle Monitoring, will be dedicated to this purpose. We present here the software system we are developing to analyze the BAM data and recover the basic angle variations. This tool is integrated into the whole DPAC data analysis software.

  10. DSN G/T(sub op) and telecommunications system performance

    NASA Technical Reports Server (NTRS)

    Stelzried, C.; Clauss, R.; Rafferty, W.; Petty, S.

    1992-01-01

    Provided here is an intersystem comparison of present and evolving Deep Space Network (DSN) microwave receiving systems. Comparisons of the receiving systems are based on the widely used G/T sub op figure of merit, which is defined as antenna gain divided by operating system noise temperature. In 10 years, it is expected that the DSN 32 GHz microwave receiving system will improve the G/T sub op performance over the current 8.4 GHz system by 8.3 dB. To compare future telecommunications system end-to-end performance, both the receiving systems' G/T sub op and spacecraft transmit parameters are used. Improving the 32 GHz spacecraft transmitter system is shown to increase the end-to-end telecommunications system performance an additional 3.2 dB, for a net improvement of 11.5 dB. These values are without a planet in the field of view (FOV). A Saturn mission is used for an example calculation to indicate the degradation in performance with a planet in the field of view.
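
    As a quick cross-check of the figures quoted above, the sketch below simply adds the receiving-system and transmitter improvements in decibels and converts the net gain to a linear power ratio; it is an illustration of the arithmetic only, not material from the report.

      # Combining the link improvements quoted in the abstract (decibel gains add).
      rx_improvement_db = 8.3   # 32 GHz receiving-system G/T(sub op) gain over 8.4 GHz
      tx_improvement_db = 3.2   # additional gain from the 32 GHz spacecraft transmitter

      net_db = rx_improvement_db + tx_improvement_db   # 11.5 dB, as stated
      net_linear = 10 ** (net_db / 10)                 # roughly 14x in received power

      print(f"net improvement: {net_db:.1f} dB  (x{net_linear:.1f} linear)")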

  11. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology

    PubMed Central

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E.; Troein, Carl; Millar, Andrew J.; Goryanin, Igor; Gilmore, Stephen

    2013-01-01

    Summary: Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI’s use of standard data formats. Availability and implementation: All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials. Contact: stg@inf.ed.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23329415
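
    The core task SBSI supports, fitting model parameters to experimental data, is an optimization that minimizes the misfit between model output and measurements. The snippet below is a generic single-machine sketch of that idea using scipy on synthetic data with an invented two-parameter decay model; it does not represent SBSINumerics' own parallelized algorithms.

      # Generic parameter-fitting sketch (not SBSI code): fit a two-parameter
      # exponential decay to noisy synthetic "experimental" data.
      import numpy as np
      from scipy.optimize import curve_fit

      def model(t, amplitude, rate):
          return amplitude * np.exp(-rate * t)

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 10.0, 50)
      data = model(t, 2.0, 0.4) + rng.normal(scale=0.05, size=t.size)

      (amp_fit, rate_fit), _ = curve_fit(model, t, data, p0=[1.0, 1.0])
      print(f"fitted amplitude={amp_fit:.2f}, rate={rate_fit:.2f}")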

  12. End-to-End Data System Architecture for the Space Station Biological Research Project

    NASA Technical Reports Server (NTRS)

    Mian, Arshad; Scimemi, Sam; Adeni, Kaiser; Picinich, Lou; Ramos, Rubin (Technical Monitor)

    1998-01-01

    The Space Station Biological Research Project (SSBRP) is developing hardware referred to as the "facility" for providing life sciences research capability on the International Space Station. This hardware includes several biological specimen habitats, habitat holding racks, a centrifuge and a glovebox. An SSBRP end-to-end data system architecture has been developed to allow command and control of the facility from the ground, either with crew assistance or autonomously. The data system will be capable of handling commands, sensor data, and video from multiple cameras. The data will traverse several onboard and ground networks and processing entities including the SSBRP and Space Station onboard and ground data systems. A large number of onboard and ground entities of the data system are being developed by the Space Station Program, other NASA centers and the International Partners. The SSBRP part of the system, which includes the habitats, holding racks, and the ground operations center, the User Operations Facility (UOF), will be developed by a multitude of geographically distributed development organizations. The SSBRP has the responsibility to define the end-to-end data and communications systems to make the interfaces manageable and verifiable with multiple contractors with widely varying development constraints and schedules. This paper provides an overview of the SSBRP end-to-end data system. Specifically, it describes the hardware, software and functional interactions of individual systems, and interface requirements among various entities of the end-to-end system.

  13. Tailoring Software for Multiple Processor Systems

    DTIC Science & Technology

    1982-10-01

    resource management decisions. Despite the lack of programming support, the use of multiple processor systems has grown substantially. Software has...making resource management decisions. Specifically, programmers need not allocate specific hardware resources to individual program components...Instead, such allocation decisions are automatically made based on high-level resource directives stated by application programmers, where each directive

  14. LabData database sub-systems for post-processing and quality control of stable isotope and gas chromatography measurements

    NASA Astrophysics Data System (ADS)

    Suckow, A. O.

    2013-12-01

    Measurements need post-processing to obtain results that are comparable between laboratories. Raw data may need to be corrected for blank, memory, drift (change of reference values with time), linearity (dependence of reference on signal height) and normalized to international reference materials. Post-processing parameters need to be stored for traceability of results. State-of-the-art stable isotope correction schemes are available based on MS Excel (Geldern and Barth, 2012; Gröning, 2011) or MS Access (Coplen, 1998). These are specialized to stable isotope measurements only, often only to the post-processing of a special run. Embedding of algorithms into a multipurpose database system was missing. This is necessary to combine results of different tracers (3H, 3He, 2H, 18O, CFCs, SF6...) or geochronological tools (sediment dating, e.g. with 210Pb or 137Cs), to relate to attribute data (submitter, batch, project, geographical origin, depth in core, well information etc.) and for further interpretation tools (e.g. lumped parameter modelling). Database sub-systems to the LabData laboratory management system (Suckow and Dumke, 2001) are presented for stable isotopes and for gas chromatographic CFC and SF6 measurements. The sub-system for stable isotopes allows the following post-processing: 1. automated import from measurement software (Isodat, Picarro, LGR), 2. correction for sample-to-sample memory, linearity, drift, and renormalization of the raw data. The sub-system for gas chromatography covers: 1. storage of all raw data 2. storage of peak integration parameters 3. correction for blank, efficiency and linearity. The user interface allows interactive and graphical control of the post-processing and all corrections by export to and plotting in MS Excel and is a valuable tool for quality control. The sub-databases are integrated into LabData, a multi-user client server architecture using MS SQL server as back-end and an MS Access front-end and installed in four
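
    As an illustration of the kind of post-processing such a sub-system stores and documents, the sketch below applies a simple linear drift correction derived from bracketing runs of a reference material and shifts the results onto the reference's assigned value; the numbers and the one-point scheme are invented and far simpler than a production correction chain.

      # Invented example of a linear drift correction (not LabData code).
      import numpy as np

      ref_assigned = -10.0                     # assigned value of the reference material
      ref_start, ref_end = -9.6, -9.2          # reference measured at start/end of the run
      sample_pos = np.array([0.2, 0.5, 0.8])   # relative position of each sample in the run
      raw = np.array([-7.1, -12.4, -5.9])      # raw sample measurements

      # Offset of the reference from its assigned value, interpolated across the run,
      # then removed from each sample measurement.
      offset = (ref_start + (ref_end - ref_start) * sample_pos) - ref_assigned
      corrected = raw - offset
      print(corrected)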

  15. European Space Software Repository ESSR

    NASA Astrophysics Data System (ADS)

    Livschitz, Jakob; Blommestijn, Robert

    2016-08-01

    The paper and presentation describe the status of the ESSR (European Space Software Repository), see [1]. They cover the development phases, outline the web portal functionality and explain the process steps behind it. Not only the front-end but also the back-end is discussed. The ESSR web portal went live ESA-internally on May 15, 2015, and world-wide on September 19, 2015. Currently the ESSR is in operation.

  16. 40 CFR 63.497 - Back-end process provisions-monitoring provisions for control and recovery devices used to comply...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 9 2011-07-01 2011-07-01 false Back-end process provisions-monitoring... Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.497 Back-end process... limitations. (a) An owner or operator complying with the residual organic HAP limitations in § 63.494(a)(1...

  17. End-to-end observatory software modeling using domain specific languages

    NASA Astrophysics Data System (ADS)

    Filgueira, José M.; Bec, Matthieu; Liu, Ning; Peng, Chien; Soto, José

    2014-07-01

    The Giant Magellan Telescope (GMT) is a 25-meter extremely large telescope that is being built by an international consortium of universities and research institutions. Its software and control system is being developed using a set of Domain Specific Languages (DSL) that supports a model driven development methodology integrated with an Agile management process. This approach promotes the use of standardized models that capture the component architecture of the system, that facilitate the construction of technical specifications in a uniform way, that facilitate communication between developers and domain experts and that provide a framework to ensure the successful integration of the software subsystems developed by the GMT partner institutions.

  18. A multitasking, multisinked, multiprocessor data acquisition front end

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, R.; Au, R.; Molen, A.V.

    1989-10-01

    The authors have developed a generalized data acquisition front end system which is based on MC68020 processors running a commercial real time kernel (rhoSOS), and implemented primarily in a high level language (C). This system has been attached to the back end on-line computing system at NSCL via our high performance ETHERNET protocol. Data may be simultaneously sent to any number of back end systems. Fixed fraction sampling along links to back end computing is also supported. A nonprocedural program generator simplifies the development of experiment specific code.
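
    Fixed-fraction sampling of the kind mentioned above can be sketched in a few lines: each back-end sink is forwarded only a configured fraction of the event stream. The sink names and fractions below are invented; this is not the NSCL front-end code.

      # Generic fixed-fraction sampling sketch (not the NSCL front-end code).
      import random

      random.seed(1)
      sinks = {"online_analysis": 1.0, "remote_monitor": 0.1}   # fraction per back end

      def dispatch(event, sinks):
          """Return the list of back-end sinks this event is forwarded to."""
          return [name for name, fraction in sinks.items() if random.random() < fraction]

      for event_id in range(5):
          print(event_id, dispatch(event_id, sinks))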

  19. Information Extraction for System-Software Safety Analysis: Calendar Year 2008 Year-End Report

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2009-01-01

    This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  20. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Tai, K. C.; Vouk, Mladen A.

    1987-01-01

    The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued and formulation of new models was initiated.
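
    In its simplest form, the voting being evaluated is majority agreement over the outputs of independently developed versions; a minimal voter (illustrative only, not the study's test bed) looks like the following.

      # Minimal majority voter over N software versions (illustrative only).
      from collections import Counter

      def vote(outputs):
          """Return (winner, agreeing_count); ties are reported as disagreement."""
          counts = Counter(outputs).most_common()
          if len(counts) > 1 and counts[0][1] == counts[1][1]:
              return None, counts[0][1]          # no clear majority
          return counts[0]

      print(vote(["A", "A", "B"]))   # ('A', 2)
      print(vote(["A", "B", "C"]))   # (None, 1) -- versions disagree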

  1. Evaluation of NASA's end-to-end data systems using DSDS+

    NASA Technical Reports Server (NTRS)

    Rouff, Christopher; Davenport, William; Message, Philip

    1994-01-01

    The Data Systems Dynamic Simulator (DSDS+) is a software tool being developed by the authors to evaluate candidate architectures for NASA's end-to-end data systems. Via modeling and simulation, we are able to quickly predict the performance characteristics of each architecture, to evaluate 'what-if' scenarios, and to perform sensitivity analyses. As such, we are using modeling and simulation to help NASA select the optimal system configuration, and to quantify the performance characteristics of this system prior to its delivery. This paper is divided into the following six sections: (1) The role of modeling and simulation in the systems engineering process. In this section, we briefly describe the different types of results obtained by modeling each phase of the systems engineering life cycle, from concept definition through operations and maintenance; (2) Recent applications of DSDS+. In this section, we describe ongoing applications of DSDS+ in support of the Earth Observing System (EOS), and we present some of the simulation results generated for candidate system designs. So far, we have modeled individual EOS subsystems (e.g. the Solid State Recorders used onboard the spacecraft), and we have also developed an integrated model of the EOS end-to-end data processing and data communications systems (from the payloads onboard to the principal investigator facilities on the ground); (3) Overview of DSDS+. In this section we define what a discrete-event model is, and how it works. The discussion is presented relative to the DSDS+ simulation tool that we have developed, including its run-time optimization algorithms that enable DSDS+ to execute substantially faster than comparable discrete-event simulation tools; (4) Summary. In this section, we summarize our findings and 'lessons learned' during the development and application of DSDS+ to model NASA's data systems; (5) Further Information; and (6) Acknowledgements.
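
    Since section (3) of the abstract turns on what a discrete-event model is, the fragment below shows the general technique in miniature: a time-ordered event queue whose entries are processed in order, with the clock jumping directly from one event to the next. It is a generic illustration with invented event names, not DSDS+ or its optimization algorithms.

      # Bare-bones discrete-event simulation loop (illustrative; not DSDS+).
      import heapq

      events = []                                  # priority queue ordered by time
      heapq.heappush(events, (0.0, "packet arrives at recorder"))
      heapq.heappush(events, (2.5, "downlink pass begins"))
      heapq.heappush(events, (1.0, "recorder dumps to transmitter"))

      clock = 0.0
      while events:
          clock, what = heapq.heappop(events)      # advance directly to the next event
          print(f"t={clock:4.1f}  {what}")
          # A real model would schedule new future events here in response.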

  2. Software beamforming: comparison between a phased array and synthetic transmit aperture.

    PubMed

    Li, Yen-Feng; Li, Pai-Chi

    2011-04-01

    The data-transfer and computation requirements are compared between software-based beamforming using a phased array (PA) and a synthetic transmit aperture (STA). The advantages of a software-based architecture are reduced system complexity and lower hardware cost. Although this architecture can be implemented using commercial CPUs or GPUs, the high computation and data-transfer requirements limit its real-time beamforming performance. In particular, transferring the raw rf data from the front-end subsystem to the software back-end remains challenging with current state-of-the-art electronics technologies, which offsets the cost advantage of the software back end. This study investigated the tradeoff between the data-transfer and computation requirements. Two beamforming methods based on a PA and STA, respectively, were used: the former requires a higher data transfer rate and the latter requires more memory operations. The beamformers were implemented on an NVIDIA GeForce GTX 260 GPU and an Intel Core i7 920 CPU. The frame rate of PA beamforming was 42 fps with a 128-element array transducer, with 2048 samples per firing and 189 beams per image (with a 95 MB/frame data-transfer requirement). The frame rate of STA beamforming was 40 fps with 16 firings per image (with an 8 MB/frame data-transfer requirement). Both approaches achieved real-time beamforming performance but each had its own bottleneck. The required data-transfer speed was considerably reduced in STA beamforming, but this came at the cost of more memory operations, which limited the overall computation time. The advantages of the GPU approach over the CPU approach were clearly demonstrated.
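
    The per-frame raw-data volumes quoted above can be reproduced approximately from the stated acquisition parameters if one assumes 16-bit rf samples; the back-of-the-envelope check below is ours, not code from the study.

      # Back-of-the-envelope check of the per-frame raw-data volumes quoted above,
      # assuming 16-bit (2-byte) rf samples; MB taken as 2**20 bytes.
      CHANNELS, SAMPLES, BYTES = 128, 2048, 2

      pa_bytes = CHANNELS * SAMPLES * BYTES * 189   # phased array: one firing per beam
      sta_bytes = CHANNELS * SAMPLES * BYTES * 16   # synthetic aperture: 16 firings/image

      print(f"PA : {pa_bytes / 2**20:.1f} MB/frame")   # ~94.5 MB, close to the quoted 95
      print(f"STA: {sta_bytes / 2**20:.1f} MB/frame")  # 8.0 MB, matching the quoted 8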

  3. Proceedings of the Workshop on Software Engineering Foundations for End-User Programming (SEEUP 2009)

    DTIC Science & Technology

    2009-11-01

    interest of scientific and technical information exchange. This work is sponsored by the U.S. Department of Defense. The Software Engineering Institute is a...an interesting continuum between how many different requirements a program must satisfy: the more complex and diverse the requirements, the more... Gender differences in approaches to end-user software development have also been reported in debugging feature usage [1] and in end-user web programming

  4. A new system for measuring three-dimensional back shape in scoliosis

    PubMed Central

    Pynsent, Paul; Fairbank, Jeremy; Disney, Simon

    2008-01-01

    The aim of this work was to develop a low-cost automated system to measure the three-dimensional shape of the back in patients with scoliosis. The resulting system uses structured light to illuminate a patient’s back from an angle while a digital photograph is taken. The height of the surface is calculated using Fourier transform profilometry with an accuracy of ±1 mm. The surface is related to body axes using bony landmarks on the back that have been palpated and marked with small coloured stickers prior to photographing. Clinical parameters are calculated automatically and presented to the user on a monitor and as a printed report. All data are stored in a database. The database can be interrogated and successive measurements plotted for monitoring the deformity changes. The system developed uses inexpensive hardware and open source software. Accurate surface topography can help the clinician to measure spinal deformity at baseline and monitor changes over time. It can help the patients and their families to assess deformity. Above all it reduces the dependence on serial radiography and reduces radiation exposure when monitoring spinal deformity. PMID:18247064
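
    Fourier transform profilometry, the height-recovery step named above, isolates the fringe carrier in the spectrum of each image row and reads the surface as the phase modulation of that carrier. The sketch below is a one-dimensional, synthetic-data illustration of that principle only; it is not the clinical system's calibrated pipeline, and every number in it is invented.

      # 1-D Fourier-transform-profilometry sketch on synthetic data (illustrative only).
      import numpy as np

      n, f0 = 512, 16                      # samples per row, carrier fringes per row
      x = np.arange(n)
      phi_true = 1.5 * np.exp(-((x - 256) / 80.0) ** 2)    # invented surface phase bump
      fringe = 1.0 + 0.5 * np.cos(2 * np.pi * f0 * x / n + phi_true)

      spectrum = np.fft.fft(fringe)
      band = np.zeros_like(spectrum)
      band[f0 - 8 : f0 + 9] = spectrum[f0 - 8 : f0 + 9]    # keep only the +f0 lobe
      analytic = np.fft.ifft(band)

      phase = np.unwrap(np.angle(analytic)) - 2 * np.pi * f0 * x / n
      phase -= phase[0]                                     # remove constant offset
      print(f"recovered peak phase: {phase.max():.2f} rad (true {phi_true.max():.2f})")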

  5. A New Control System Software for SANS BATAN Spectrometer in Serpong, Indonesia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bharoto; Putra, Edy Giri Rachman

    2010-06-22

    The original main control system of the 36 meter small-angle neutron scattering (SANS) BATAN Spectrometer (SMARTer) has been replaced with a new one due to the malfunction of the main computer. For that reason, new control system software for handling all the control systems was also developed in order to put the spectrometer back in operation. The developed software is able to control subsystems such as the rotational movement of the six-pinhole system, the vertical movement of the four neutron guides with a total length of 16.5 m, the two-directional movement of a neutron beam stopper, and the forward-backward movement of a 2D position sensitive detector (2D-PSD) along 16.7 m. A Visual Basic program running on the Windows operating system was employed to develop the software, and it can be operated from other remote computers in the local area network. All device positions and command menus are displayed graphically in the main window, and each device control can be executed by clicking the corresponding control button. These features are required for user-friendly control system software. Finally, the new software has been tested for handling a complete SANS experiment and it works properly.

  7. End-to-End ASR-Free Keyword Search From Speech

    NASA Astrophysics Data System (ADS)

    Audhkhasi, Kartik; Rosenberg, Andrew; Sethy, Abhinav; Ramabhadran, Bhuvana; Kingsbury, Brian

    2017-12-01

    End-to-end (E2E) systems have achieved competitive results compared to conventional hybrid hidden Markov model (HMM)-deep neural network based automatic speech recognition (ASR) systems. Such E2E systems are attractive due to the lack of dependence on alignments between input acoustic and output grapheme or HMM state sequence during training. This paper explores the design of an ASR-free end-to-end system for text query-based keyword search (KWS) from speech trained with minimal supervision. Our E2E KWS system consists of three sub-systems. The first sub-system is a recurrent neural network (RNN)-based acoustic auto-encoder trained to reconstruct the audio through a finite-dimensional representation. The second sub-system is a character-level RNN language model using embeddings learned from a convolutional neural network. Since the acoustic and text query embeddings occupy different representation spaces, they are input to a third feed-forward neural network that predicts whether the query occurs in the acoustic utterance or not. This E2E ASR-free KWS system performs respectably despite lacking a conventional ASR system and trains much faster.
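
    The third sub-system described above is, at the shape level, a feed-forward network that maps an (acoustic embedding, query embedding) pair to an occurrence probability. The fragment below sketches only that forward pass with invented dimensions and untrained random weights; it is not the authors' model.

      # Shape-level sketch of the third sub-system: a feed-forward network mapping the
      # (acoustic, query) embedding pair to an occurrence probability.
      # Dimensions and weights are invented; a real system would learn them.
      import numpy as np

      rng = np.random.default_rng(0)
      acoustic_dim, query_dim, hidden = 64, 32, 48

      audio_vec = rng.normal(size=acoustic_dim)    # stand-in for the acoustic auto-encoder output
      query_vec = rng.normal(size=query_dim)       # stand-in for the character-level RNN LM output

      w1 = rng.normal(size=(acoustic_dim + query_dim, hidden)) * 0.1
      w2 = rng.normal(size=(hidden, 1)) * 0.1

      h = np.maximum(0.0, np.concatenate([audio_vec, query_vec]) @ w1)   # ReLU layer
      p = 1.0 / (1.0 + np.exp(-(h @ w2)))                                # sigmoid output
      print(f"P(query occurs in utterance) = {p.item():.3f}")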

  8. Support for Quality Assurance in End-User Systems.

    ERIC Educational Resources Information Center

    Klepper, Robert; McKenna, Edward G.

    1989-01-01

    Suggests an approach that organizations can take to provide centralized support services for quality assurance in end-user information systems, based on the experiences of a support group at Citicorp Mortgage, Inc. The functions of the support group include user education, software selection, and assistance in testing, implementation, and support…

  9. Front End Software for Online Database Searching Part 1: Definitions, System Features, and Evaluation.

    ERIC Educational Resources Information Center

    Hawkins, Donald T.; Levy, Louise R.

    1985-01-01

    This initial article in series of three discusses barriers inhibiting use of current online retrieval systems by novice users and notes reasons for front end and gateway online retrieval systems. Definitions, front end features, user interface, location (personal computer, host mainframe), evaluation, and strengths and weaknesses are covered. (16…

  10. Cost-effectiveness of a classification-based system for sub-acute and chronic low back pain.

    PubMed

    Apeldoorn, Adri T; Bosmans, Judith E; Ostelo, Raymond W; de Vet, Henrica C W; van Tulder, Maurits W

    2012-07-01

    Identifying relevant subgroups in patients with low back pain (LBP) is considered important to guide physical therapy practice and to improve outcomes. The aim of the present study was to assess the cost-effectiveness of a modified version of Delitto's classification-based treatment approach compared with usual physical therapy care in patients with sub-acute and chronic LBP with 1 year follow-up. All patients were classified using the modified version of Delitto's classification-based system and then randomly assigned to receive either classification-based treatment or usual physical therapy care. The main clinical outcomes measured were: global perceived effect, intensity of pain, functional disability and quality of life. Costs were measured from a societal perspective. Multiple imputations were used for missing data. Uncertainty surrounding cost differences and incremental cost-effectiveness ratios was estimated using bootstrapping. Cost-effectiveness planes and cost-effectiveness acceptability curves were estimated. In total, 156 patients were included. The outcome analyses showed a significantly better outcome on global perceived effect favoring the classification-based approach, and no differences between the groups on pain, disability and quality-adjusted life-years. Mean total societal costs for the classification-based group were 2,287, and for the usual physical therapy care group 2,020. The difference was 266 (95% CI -720 to 1,612) and not statistically significant. Cost-effectiveness analyses showed that the classification-based approach was not cost-effective in comparison with usual physical therapy care for any clinical outcome measure. The classification-based treatment approach as used in this study was not cost-effective in comparison with usual physical therapy care in a population of patients with sub-acute and chronic LBP.

  11. Back reflectors based on buried Al{sub 2}O{sub 3} for enhancement of photon recycling in monolithic, on-substrate III-V solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    García, I.; Instituto de Energía Solar, Universidad Politécnica de Madrid, Avda Complutense s/n, 28040 Madrid; Kearns-McCoy, C. F.

    Photon management has been shown to be a fruitful way to boost the open circuit voltage and efficiency of high quality solar cells. Metal or low-index dielectric-based back reflectors can be used to confine the reemitted photons and enhance photon recycling. Gaining access to the back of the solar cell for placing these reflectors implies having to remove the substrate, with the associated added complexity to the solar cell manufacturing. In this work, we analyze the effectiveness of a single-layer reflector placed at the back of on-substrate solar cells, and assess the photon recycling improvement as a function of the refractive index of this layer. Al{sub 2}O{sub 3}-based reflectors, created by lateral oxidation of an AlAs layer, are identified as a feasible choice for on-substrate solar cells, which can produce a V{sub oc} increase of around 65% of the maximum increase attainable with an ideal reflector. The experimental results obtained using prototype GaAs cell structures show a greater than two-fold increase in the external radiative efficiency and a V{sub oc} increase of ∼2% (∼18 mV), consistent with theoretical calculations. For GaAs cells with higher internal luminescence, this V{sub oc} boost is calculated to be up to 4% relative (36 mV), which directly translates into at least 4% higher relative efficiency.

  12. The Earth System Documentation (ES-DOC) Software Process

    NASA Astrophysics Data System (ADS)

    Greenslade, M. A.; Murphy, S.; Treshansky, A.; DeLuca, C.; Guilyardi, E.; Denvil, S.

    2013-12-01

    Earth System Documentation (ES-DOC) is an international project supplying high-quality tools & services in support of earth system documentation creation, analysis and dissemination. It is nurturing a sustainable standards-based documentation eco-system that aims to become an integral part of the next generation of exa-scale dataset archives. ES-DOC leverages open source software, and applies a software development methodology that places end-user narratives at the heart of all it does. ES-DOC has initially focused upon nurturing the Earth System Model (ESM) documentation eco-system and currently supports the following projects: * Coupled Model Inter-comparison Project Phase 5 (CMIP5); * Dynamical Core Model Inter-comparison Project (DCMIP); * National Climate Predictions and Projections Platforms Quantitative Evaluation of Downscaling Workshop. This talk will demonstrate that ES-DOC implements a relatively mature software development process. Taking a pragmatic Agile process as inspiration, ES-DOC: * Iteratively develops and releases working software; * Captures user requirements via a narrative-based approach; * Uses online collaboration tools (e.g. Earth System CoG) to manage progress; * Prototypes applications to validate their feasibility; * Leverages meta-programming techniques where appropriate; * Automates testing whenever sensibly feasible; * Streamlines complex deployments to a single command; * Extensively leverages GitHub and Pivotal Tracker; * Enforces strict separation of the UI from underlying APIs; * Conducts code reviews.

  13. Back pain's association with vertebral end-plate signal changes in sciatica.

    PubMed

    el Barzouhi, Abdelilah; Vleggeert-Lankamp, Carmen L A M; van der Kallen, Bas F; Lycklama à Nijeholt, Geert J; van den Hout, Wilbert B; Koes, Bart W; Peul, Wilco C

    2014-02-01

    Patients with sciatica frequently experience disabling back pain. One of the proposed causes for back pain is vertebral end-plate signal changes (VESC) as visualized by magnetic resonance imaging (MRI). To report on VESC findings, changes of VESC findings over time, and the correlation between VESC and disabling back pain in patients with sciatica. A randomized clinical trial with 1 year of follow-up. Patients with 6 to 12 weeks of sciatica who participated in a multicenter, randomized clinical trial comparing an early surgery strategy with prolonged conservative care with surgery if needed. Patients were assessed by means of the 100-mm visual analog scale (VAS) for back pain (with 0 representing no pain and 100 the worst pain ever experienced) at baseline and 1 year. Disabling back pain was defined as a VAS score of at least 40 mm. Patients underwent MRI both at baseline and after 1 year follow-up. Presence and change of VESC was correlated with disabling back pain using chi-square tests and logistic regression analysis. At baseline, 39% of patients had disabling back pain. Of the patients with VESC at baseline, 40% had disabling back pain compared with 38% of the patients with no VESC (p=.67). The prevalence of type 1 VESC increased from 1% at baseline to 35% 1 year later in the surgical group compared with an increase from 3% to 11% in the conservative group. The prevalence of type 2 VESC decreased from 40% to 29% in the surgical group while remaining almost stable in the conservative group at 41%. The prevalence of disabling back pain at 1 year was 12% in patients with no VESC at 1 year, 16% in patients with type 1 VESC, 11% in patients with type 2 VESC, and 3% in patients with both types 1 and 2 VESC (p=.36). Undergoing surgery was associated with increase in the extent of VESC (odds ratio [OR], 8.6; 95% confidence interval [CI], 4.7-15.7; p<.001). Patients who showed an increase in the extent of VESC after 1 year did not significantly report more disabling

  14. An Internet Protocol-Based Software System for Real-Time, Closed-Loop, Multi-Spacecraft Mission Simulation Applications

    NASA Technical Reports Server (NTRS)

    Davis, George; Cary, Everett; Higinbotham, John; Burns, Richard; Hogie, Keith; Hallahan, Francis

    2003-01-01

    The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements, with functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival and distribution addressed.

  15. Data management software concept for WEST plasma measurement system

    NASA Astrophysics Data System (ADS)

    Zienkiewicz, P.; Kasprowicz, G.; Byszuk, A.; Wojeński, A.; Kolasinski, P.; Cieszewski, R.; Czarski, T.; Chernyshova, M.; Pozniak, K.; Zabolotny, W.; Juszczyk, B.; Mazon, D.; Malard, P.

    2014-11-01

    This paper describes the concept of data management software for the multichannel readout system for the GEM detector used in the WEST plasma experiment. The proposed system consists of three separate communication channels: a fast data channel, a diagnostics channel and a slow data channel. The fast data channel is provided by an FPGA with integrated ARM cores, delivering direct readout data from the analog front ends over 10GbE within short, guaranteed intervals. The slow data channel is provided by multiple fast CPUs running GNU/Linux and appropriate software, and delivers detailed readout data after processing. The diagnostics channel provides detailed feedback for control purposes.

  16. Software system safety

    NASA Technical Reports Server (NTRS)

    Uber, James G.

    1988-01-01

    Software itself is not hazardous, but since software and hardware share common interfaces there is an opportunity for software to create hazards. Further, these software systems are complex, and proven methods for the design, analysis, and measurement of software safety are not yet available. Some past software failures, future NASA software trends, software engineering methods, and tools and techniques for various software safety analyses are reviewed. Recommendations to NASA are made based on this review.

  17. XPRESS: eXascale PRogramming Environment and System Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brightwell, Ron; Sterling, Thomas; Koniges, Alice

    The XPRESS Project is one of four major projects of the DOE Office of Science Advanced Scientific Computing Research X-stack Program initiated in September, 2012. The purpose of XPRESS is to devise an innovative system software stack to enable practical and useful exascale computing around the end of the decade with near-term contributions to efficient and scalable operation of trans-Petaflops performance systems in the next two to three years; both for DOE mission-critical applications. To this end, XPRESS directly addresses critical challenges in computing of efficiency, scalability, and programmability through introspective methods of dynamic adaptive resource management and task scheduling.

  18. Requirements for guidelines systems: implementation challenges and lessons from existing software-engineering efforts.

    PubMed

    Shah, Hemant; Allard, Raymond D; Enberg, Robert; Krishnan, Ganesh; Williams, Patricia; Nadkarni, Prakash M

    2012-03-09

    A large body of work in the clinical guidelines field has identified requirements for guideline systems, but there are formidable challenges in translating such requirements into production-quality systems that can be used in routine patient care. Detailed analysis of requirements from an implementation perspective can be useful in helping define sub-requirements to the point where they are implementable. Further, additional requirements emerge as a result of such analysis. During such an analysis, study of examples of existing, software-engineering efforts in non-biomedical fields can provide useful signposts to the implementer of a clinical guideline system. In addition to requirements described by guideline-system authors, comparative reviews of such systems, and publications discussing information needs for guideline systems and clinical decision support systems in general, we have incorporated additional requirements related to production-system robustness and functionality from publications in the business workflow domain, in addition to drawing on our own experience in the development of the Proteus guideline system (http://proteme.org). The sub-requirements are discussed by conveniently grouping them into the categories used by the review of Isern and Moreno 2008. We cite previous work under each category and then provide sub-requirements under each category, and provide example of similar work in software-engineering efforts that have addressed a similar problem in a non-biomedical context. When analyzing requirements from the implementation viewpoint, knowledge of successes and failures in related software-engineering efforts can guide implementers in the choice of effective design and development strategies.

  19. Requirements for guidelines systems: implementation challenges and lessons from existing software-engineering efforts

    PubMed Central

    2012-01-01

    Background A large body of work in the clinical guidelines field has identified requirements for guideline systems, but there are formidable challenges in translating such requirements into production-quality systems that can be used in routine patient care. Detailed analysis of requirements from an implementation perspective can be useful in helping define sub-requirements to the point where they are implementable. Further, additional requirements emerge as a result of such analysis. During such an analysis, study of examples of existing, software-engineering efforts in non-biomedical fields can provide useful signposts to the implementer of a clinical guideline system. Methods In addition to requirements described by guideline-system authors, comparative reviews of such systems, and publications discussing information needs for guideline systems and clinical decision support systems in general, we have incorporated additional requirements related to production-system robustness and functionality from publications in the business workflow domain, in addition to drawing on our own experience in the development of the Proteus guideline system (http://proteme.org). Results The sub-requirements are discussed by conveniently grouping them into the categories used by the review of Isern and Moreno 2008. We cite previous work under each category and then provide sub-requirements under each category, and provide example of similar work in software-engineering efforts that have addressed a similar problem in a non-biomedical context. Conclusions When analyzing requirements from the implementation viewpoint, knowledge of successes and failures in related software-engineering efforts can guide implementers in the choice of effective design and development strategies. PMID:22405400

  20. OISI dynamic end-to-end modeling tool

    NASA Astrophysics Data System (ADS)

    Kersten, Michael; Weidler, Alexander; Wilhelm, Rainer; Johann, Ulrich A.; Szerdahelyi, Laszlo

    2000-07-01

    The OISI Dynamic end-to-end modeling tool is tailored to end-to-end modeling and dynamic simulation of Earth- and space-based actively controlled optical instruments such as optical stellar interferometers. 'End-to-end modeling' is meant to denote the feature that the overall model comprises, besides optical sub-models, also structural, sensor, actuator, controller and disturbance sub-models influencing the optical transmission, so that the system-level instrument performance due to disturbances and active optics can be simulated. This tool has been developed to support performance analysis and prediction as well as control loop design and fine-tuning for OISI, Germany's preparatory program for optical/infrared spaceborne interferometry initiated in 1994 by Dornier Satellitensysteme GmbH in Friedrichshafen.
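
    In the spirit of the end-to-end loop described above, the toy simulation below couples a structural disturbance, an (ideal) sensor, an integral controller and an actuator acting on the optical path. All values are invented and the dynamics are deliberately trivial, so it illustrates the closed-loop structure rather than the OISI tool itself.

      # Toy closed-loop end-to-end model (illustrative only; not the OISI tool).
      import math

      dt, gain = 0.001, 0.05          # time step [s], integrator gain (invented values)
      command = 0.0
      residual_history = []

      for step in range(2000):
          t = step * dt
          disturbance = 50e-9 * math.sin(2 * math.pi * 12.0 * t)   # 50 nm jitter at 12 Hz
          opd = disturbance - command                              # optical path error [m]
          measurement = opd                                        # ideal sensor, no noise
          command += gain * measurement                            # integral control law
          residual_history.append(abs(opd))

      print(f"worst residual OPD in last 0.5 s: {max(residual_history[1500:]) * 1e9:.1f} nm")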

  1. Back-Arc Opening in the Western End of the Okinawa Trough Revealed From GNSS/Acoustic Measurements

    NASA Astrophysics Data System (ADS)

    Chen, Horng-Yue; Ikuta, Ryoya; Lin, Cheng-Horng; Hsu, Ya-Ju; Kohmi, Takeru; Wang, Chau-Chang; Yu, Shui-Beih; Tu, Yoko; Tsujii, Toshiaki; Ando, Masataka

    2018-01-01

    We measured seafloor movement using a Global Navigation Satellite Systems (GNSS)/Acoustic technique at the south of the rifting valley in the western end of the Okinawa Trough back-arc basin, 60 km east of northeastern corner of Taiwan. The horizontal position of the seafloor benchmark, measured eight times between July 2012 and May 2016, showed a southeastward movement suggesting a back-arc opening of the Okinawa Trough. The average velocity of the seafloor benchmark shows a block motion together with Yonaguni Island. The westernmost part of the Ryukyu Arc rotates clockwise and is pulled apart from the Taiwan Island, which should cause the expansion of the Yilan Plain, Taiwan. Comparing the motion of the seafloor benchmark with adjacent seismicity, we suggest a gentle episodic opening of the rifting valley accompanying a moderate seismic activation, which differs from the case in the segment north off-Yonaguni Island where a rapid dyke intrusion occurs with a significant seismic activity.

  2. Advanced end-to-end fiber optic sensing systems for demanding environments

    NASA Astrophysics Data System (ADS)

    Black, Richard J.; Moslehi, Behzad

    2010-09-01

    Optical fibers are small in diameter, light in weight, immune to electromagnetic interference, electrically passive, chemically inert, flexible, embeddable into different materials, and distributed-sensing enabling, and can be temperature and radiation tolerant. With appropriate processing and/or packaging, they can be very robust and well suited to demanding environments. In this paper, we review a range of complete end-to-end fiber optic sensor systems that IFOS has developed comprising not only (1) packaged sensors and mechanisms for integration with demanding environments, but (2) ruggedized sensor interrogators, and (3) intelligent decision-aid algorithm software systems. We examine the following examples: • Fiber Bragg Grating (FBG) optical sensor systems supporting arrays of environmentally conditioned multiplexed FBG point sensors on single or multiple optical fibers: In conjunction with advanced signal processing, decision aid algorithms and reasoners, FBG sensor based structural health monitoring (SHM) systems are expected to play an increasing role in extending the life and reducing costs of new generations of aerospace systems. Further, FBG based structural state sensing systems have the potential to considerably enhance the performance of dynamic structures interacting with their environment (including jet aircraft, unmanned aerial vehicles (UAVs), and medical or extravehicular space robots). • Raman based distributed temperature sensing systems: The complete length of optical fiber acts as a very long distributed sensor which may be placed down an oil well or wrapped around a cryogenic tank.

  3. Structure evolution upon chemical and physical pressure in (Sr{sub 1−x}Ba{sub x}){sub 2}FeSbO{sub 6}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiittanen, T.; Karppinen, M., E-mail: maarit.karppinen@aalto.fi

    Here we demonstrate the gradual structural transformation from the monoclinic I2/m to tetragonal I4/m, cubic Fm-3m and hexagonal P6{sub 3}/mmc structure upon the isovalent larger-for-smaller A-site cation substitution in the B-site ordered double-perovskite system (Sr{sub 1−x}Ba{sub x}){sub 2}FeSbO{sub 6}. This is the same transformation sequence previously observed up to Fm-3m upon heating the parent Sr{sub 2}FeSbO{sub 6} phase to high temperatures. High-pressure treatment, on the other hand, transforms the hexagonal P6{sub 3}/mmc structure of the other end member Ba{sub 2}FeSbO{sub 6} back to the cubic Fm-3m structure. Hence we may conclude that chemical pressure, physical pressure and decreasing temperature all work towards the same direction in the (Sr{sub 1−x}Ba{sub x}){sub 2}FeSbO{sub 6} system. Also shown is that with increasing Ba-for-Sr substitution level, i.e. with decreasing chemical pressure effect, the degree-of-order among the B-site cations, Fe and Sb, decreases. - Graphical abstract: In the (Sr{sub 1−x}Ba{sub x}){sub 2}FeSbO{sub 6} double-perovskite system the gradual structural transformation from the monoclinic I2/m to tetragonal I4/m, cubic Fm-3m and hexagonal P6{sub 3}/mmc structure is seen upon the isovalent larger-for-smaller A-site cation substitution. High-pressure treatment under 4 GPa extends stability of the cubic Fm-3m structure within a wider substitution range of x. - Highlights: • Gradual structural transitions upon A-cation substitution in (Sr{sub 1−x}Ba{sub x}){sub 2}FeSbO{sub 6.} • With increasing x structure changes from I2/m to I4/m, Fm-3m and P6{sub 3}/mmc. • Degree of B-site order decreases with increasing x and A-site cation radius. • High-pressure treatment extends cubic Fm-3m phase stability for wider x range. • High-pressure treatment affects bond lengths mostly around the A-cation.

  4. The relationships between software publications and software systems

    NASA Astrophysics Data System (ADS)

    Hogg, David W.

    2017-01-01

    When we build software systems or software tools for astronomy, we sometimes do and sometimes don't also write and publish standard scientific papers about those software systems. I will discuss the pros and cons of writing such publications. There are impacts of writing such papers immediately (they can affect the design and structure of the software project itself), in the short term (they can promote adoption and legitimize the software), in the medium term (they can provide a platform for all the literature's mechanisms for citation, criticism, and reuse), and in the long term (they can preserve ideas that are embodied in the software, possibly on timescales much longer than the lifetime of any software context). I will argue that as important as pure software contributions are to astronomy—and I am both a preacher and a practitioner—software contributions are even more valuable when they are associated with traditional scientific publications. There are exceptions and complexities of course, which I will discuss.

  5. Diagnostic system for measuring temperature, pressure, CO.sub.2 concentration and H.sub.2O concentration in a fluid stream

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Partridge, Jr., William P.; Jatana, Gurneesh Singh; Yoo, Ji Hyung

    A diagnostic system for measuring temperature, pressure, CO.sub.2 concentration and H.sub.2O concentration in a fluid stream is described. The system may include one or more probes that sample the fluid stream spatially, temporally and over ranges of pressure and temperature. Laser light sources are directed down pitch optical cables, through a lens and to a mirror, where the light sources are reflected back, through the lens to catch optical cables. The light travels through the catch optical cables to detectors, which provide electrical signals to a processor. The processor utilizes the signals to calculate CO.sub.2 concentration based on the temperatures derived from H.sub.2O vapor concentration. A probe for sampling CO.sub.2 and H.sub.2O vapor concentrations is also disclosed. Various mechanical features interact together to ensure the pitch and catch optical cables are properly aligned with the lens during assembly and use.

  6. Poly(ADP-ribose)polymerases are involved in microhomology mediated back-up non-homologous end joining in Arabidopsis thaliana.

    PubMed

    Jia, Qi; den Dulk-Ras, Amke; Shen, Hexi; Hooykaas, Paul J J; de Pater, Sylvia

    2013-07-01

    Besides the KU-dependent classical non-homologous end-joining (C-NHEJ) pathway, an alternative NHEJ pathway first identified in mammalian systems, which is often called the back-up NHEJ (B-NHEJ) pathway, was also found in plants. In mammalian systems PARP was found to be one of the essential components in B-NHEJ. Here we investigated whether PARP1 and PARP2 were also involved in B-NHEJ in Arabidopsis. To this end Arabidopsis parp1, parp2 and parp1parp2 (p1p2) mutants were isolated and functionally characterized. The p1p2 double mutant was crossed with the C-NHEJ ku80 mutant resulting in the parp1parp2ku80 (p1p2k80) triple mutant. As expected, because of their role in single strand break repair (SSBR) and base excision repair (BER), the p1p2 and p1p2k80 mutants were shown to be sensitive to treatment with the DNA damaging agent MMS. End-joining assays in cell-free leaf protein extracts of the different mutants using linear DNA substrates with different ends reflecting a variety of double strand breaks were performed. The results showed that compatible 5'-overhangs were accurately joined in all mutants, that KU80 protected the ends preventing the formation of large deletions and that PARP proteins were involved in microhomology mediated end joining (MMEJ), one of the characteristics of B-NHEJ.

  7. Supporting metabolomics with adaptable software: design architectures for the end-user.

    PubMed

    Sarpe, Vladimir; Schriemer, David C

    2017-02-01

    Large and disparate sets of LC-MS data are generated by modern metabolomics profiling initiatives, and while useful software tools are available to annotate and quantify compounds, the field requires continued software development in order to sustain methodological innovation. Advances in software development practices allow for a new paradigm in tool development for metabolomics, where increasingly the end-user can develop or redeploy utilities ranging from simple algorithms to complex workflows. Resources that provide an organized framework for development are described and illustrated with LC-MS processing packages that have leveraged their design tools. Full access to these resources depends in part on coding experience, but the emergence of workflow builders and pluggable frameworks strongly reduces the skill level required. Developers in the metabolomics community are encouraged to use these resources and design content for uptake and reuse. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Roll-Out and Turn-Off Display Software for Integrated Display System

    NASA Technical Reports Server (NTRS)

    Johnson, Edward J., Jr.; Hyer, Paul V.

    1999-01-01

    This report describes the software products, system architectures and operational procedures developed by Lockheed-Martin in support of the Roll-Out and Turn-Off (ROTO) sub-element of the Low Visibility Landing and Surface Operations (LVLASO) program at the NASA Langley Research Center. The ROTO portion of this program focuses on developing technologies that aid pilots in the task of managing the deceleration of an aircraft to a pre-selected exit taxiway. This report focuses on software that produces a system of redundant deceleration cues for a pilot during the landing roll-out, and presents these cues on a head up display (HUD). The software also produces symbology for aircraft operational phases involving cruise flight, approach, takeoff, and go-around. The algorithms and data sources used to compute the deceleration guidance and generate the displays are discussed. Examples of the display formats and symbology options are presented. Logic diagrams describing the design of the ROTO software module are also given.
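
    A basic quantity behind any such deceleration cue is the constant braking level needed to arrive at the selected exit at its allowed speed; the snippet below computes that textbook kinematic value for invented numbers and is not the ROTO guidance algorithm described in the report.

      # Constant deceleration needed to reach the chosen exit at its target speed
      # (textbook kinematics with invented numbers; not the ROTO algorithm itself).
      def required_deceleration(ground_speed, exit_speed, distance_to_exit):
          """All speeds in m/s, distance in m; returns deceleration in m/s^2."""
          return (ground_speed**2 - exit_speed**2) / (2.0 * distance_to_exit)

      v_now = 70.0          # m/s, roughly 136 knots at touchdown
      v_exit = 15.0         # m/s, speed allowed at the selected high-speed exit
      d_remaining = 1200.0  # m of runway left before the exit

      print(f"required deceleration: "
            f"{required_deceleration(v_now, v_exit, d_remaining):.2f} m/s^2")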

  9. Fiber-optic thermometry using thermal radiation from Tm end doped SiO{sub 2} fiber sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morita, Kentaro; Katsumata, Toru; Komuro, Shuji

    2014-04-15

    Fiber-optic thermometry based on the temperature dependence of thermal radiation from Tm{sup 3+} ions was studied using a Tm end doped SiO{sub 2} fiber sensor. Visible light radiation peaks due to f-f transitions of the Tm{sup 3+} ion were clearly observed at λ = 690 and 790 nm from the Tm end doped SiO{sub 2} fiber sensor at temperatures above 600 °C. Thermal radiation peaks are assigned to f-f transitions of the Tm{sup 3+} ion, {sup 1}D{sub 2}-{sup 3}H{sub 6} and {sup 1}G{sub 4}-{sup 3}H{sub 6}. Peak intensity of thermal radiation from the Tm{sup 3+} ion increases with temperature. The intensity ratio of the thermal radiation peak at λ = 690 nm against that at λ = 790 nm, I{sub 790/690}, is suitable for temperature measurement above 750 °C. The two-dimensional temperature distribution in a flame is successfully evaluated by the Tm end doped SiO{sub 2} fiber sensor.

  10. GiA Roots: software for the high throughput analysis of plant root system architecture.

    PubMed

    Galkovskyi, Taras; Mileyko, Yuriy; Bucksch, Alexander; Moore, Brad; Symonova, Olga; Price, Charles A; Topp, Christopher N; Iyer-Pascuzzi, Anjali S; Zurek, Paul R; Fang, Suqin; Harer, John; Benfey, Philip N; Weitz, Joshua S

    2012-07-26

    Characterizing root system architecture (RSA) is essential to understanding the development and function of vascular plants. Identifying RSA-associated genes also represents an underexplored opportunity for crop improvement. Software tools are needed to accelerate the pace at which quantitative traits of RSA are estimated from images of root networks. We have developed GiA Roots (General Image Analysis of Roots), a semi-automated software tool designed specifically for the high-throughput analysis of root system images. GiA Roots includes user-assisted algorithms to distinguish root from background and a fully automated pipeline that extracts dozens of root system phenotypes. Quantitative information on each phenotype, along with intermediate steps for full reproducibility, is returned to the end-user for downstream analysis. GiA Roots has a GUI front end and a command-line interface for interweaving the software into large-scale workflows. GiA Roots can also be extended to estimate novel phenotypes specified by the end-user. We demonstrate the use of GiA Roots on a set of 2393 images of rice roots representing 12 genotypes from the species Oryza sativa. We validate trait measurements against prior analyses of this image set that demonstrated that RSA traits are likely heritable and associated with genotypic differences. Moreover, we demonstrate that GiA Roots is extensible and an end-user can add functionality so that GiA Roots can estimate novel RSA traits. In summary, we show that the software can function as an efficient tool as part of a workflow to move from large numbers of root images to downstream analysis.

  11. A hybrid single-end-access MZI and Φ-OTDR vibration sensing system with high frequency response

    NASA Astrophysics Data System (ADS)

    Zhang, Yixin; Xia, Lan; Cao, Chunqi; Sun, Zhenhong; Li, Yanting; Zhang, Xuping

    2017-01-01

    A hybrid single-end-access Mach-Zehnder interferometer (MZI) and phase-sensitive OTDR (Φ-OTDR) vibration sensing system is proposed and demonstrated experimentally. In our system, narrow optical pulses and a continuous wave are injected into the fiber through its front end at the same time, and at the rear end of the fiber a frequency-shift mirror (FSM) is designed to back-propagate the continuous wave modulated by the external vibration. Thus the Rayleigh backscattering signals (RBS) and the back-propagated continuous wave interfere with the reference light at the same end of the sensing fiber, and a single-end-access configuration is achieved. The RBS can be separated from the interference signal (IS) through digital signal processing because of their different intermediate frequencies, based on the frequency-division multiplexing technique, and the two schemes do not interfere with each other. The experimental results show 10 m spatial resolution and up to 1.2 MHz frequency response along a 6.35 km long fiber. This newly designed single-end-access setup can locate vibration events and respond to high-frequency events, and can be widely used in health monitoring for civil infrastructures and transportation.
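
    As a purely conceptual illustration of the frequency-division demultiplexing step described above (not the authors' processing chain), the sketch below separates two signals that share one detector but sit at different intermediate frequencies; the sample rate, carrier frequencies, and filter settings are arbitrary assumptions.

```python
"""Conceptual sketch of frequency-division demultiplexing two sensing
signals (e.g. Rayleigh backscatter vs. interferometer output) that sit
at different intermediate frequencies on one detector. Sample rate,
carrier frequencies and filter orders are arbitrary illustrative choices."""
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500e6                               # sample rate (illustrative)
t = np.arange(0, 20e-6, 1 / fs)

# Two hypothetical intermediate-frequency carriers sharing one detector.
rbs = np.cos(2 * np.pi * 80e6 * t)       # "Rayleigh backscatter" channel
mzi = np.cos(2 * np.pi * 200e6 * t)      # "interferometer" channel
mixed = rbs + mzi + 0.05 * np.random.randn(t.size)

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

rbs_only = bandpass(mixed, 60e6, 100e6, fs)
mzi_only = bandpass(mixed, 180e6, 220e6, fs)
print(np.corrcoef(rbs_only, rbs)[0, 1], np.corrcoef(mzi_only, mzi)[0, 1])
```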

  12. Simulated moving bed system for CO.sub.2 separation, and method of same

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, Jeannine Elizabeth; Copeland, Robert James; Lind, Jeff

    A system and method for the separation and/or purification of CO.sub.2 gas from a CO.sub.2 feed stream is described. The system and method include a plurality of fixed sorbent beds, adsorption zones, and desorption zones, where the sorbent beds are connected via valves and lines to create a simulated moving bed system in which the sorbent beds move from one adsorption position to another adsorption position, then from one regeneration position to another regeneration position, and optionally back to an adsorption position. The system and method operate by concentration-swing adsorption/desorption and by adsorptive/desorptive displacement.

  13. Using MATLAB Software on the Peregrine System | High-Performance Computing

    Science.gov Websites

    Learn how to run MATLAB software in batch mode on the Peregrine system. Below is an example MATLAB job in batch (non-interactive) mode. To try the example out, create both matlabTest.sub and /$USER. In this example, it is also the directory into which MATLAB will write the output file x.dat.
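
    The page's own example is truncated above; the sketch below merely illustrates what a batch submit file wrapping a MATLAB run might look like, assuming a PBS-style scheduler. Every directive, module name, and file name is a hypothetical placeholder, not the actual Peregrine example.

```python
"""Hypothetical sketch of a PBS submit file wrapping a MATLAB batch job.
All resource directives, module and file names are illustrative
assumptions, not the actual Peregrine example referenced above."""

submit_script = """#!/bin/bash
#PBS -N matlabTest
#PBS -l nodes=1:ppn=1
#PBS -l walltime=00:10:00
cd $PBS_O_WORKDIR
module load matlab
matlab -nodisplay -nosplash -r "matlabTest; exit"
"""

with open("matlabTest.sub", "w") as f:
    f.write(submit_script)
print("wrote matlabTest.sub; submit with: qsub matlabTest.sub")
```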

  14. Silicon nitride back-end optics for biosensor applications

    NASA Astrophysics Data System (ADS)

    Romero-García, Sebastian; Merget, Florian; Zhong, Frank C.; Finkelstein, Hod; Witzens, Jeremy

    2013-05-01

    Silicon nitride (SiN) is a promising candidate material for becoming a standard high-performance solution for integrated biophotonics applications in the visible spectrum. As a key feature, its compatibility with complementary metal-oxide-semiconductor (CMOS) technology permits cost reduction at large manufacturing volumes, which is particularly advantageous for manufacturing consumables. In this work, we show that the back-end deposition of a thin SiN film enables the large light-cladding interaction desirable for biosensing applications, while the refractive index contrast of the technology (Δn ≈ 0.5) also enables a considerable level of integration with reduced waveguide bend radii. Design and experimental validation also show that several advantages are derived from the moderate SiN/SiO2 refractive index contrast, such as lower scattering losses in interconnection waveguides and relaxed tolerances to fabrication imperfections as compared to higher refractive index contrast material systems. As a drawback, a moderate refractive index contrast also makes the implementation of compact grating couplers more challenging, since only a relatively weak scattering strength can be achieved. The beam diffracted by the grating therefore tends to be rather large and consequently exhibits stringent angular alignment tolerances. Here, we experimentally demonstrate how a proper design of the bottom and top cladding oxide thicknesses allows reduction of the full width at half maximum (FWHM) and alleviates this problem. Additionally, the inclusion of a CMOS-compatible AlCu/TiN bottom reflector further decreases the FWHM and increases the coupling efficiency. Finally, we show that focusing grating designs greatly reduce the device footprint without penalizing the device metrics.

  15. HTS flywheel energy storage system with rotor shaft stabilized by feed-back control of armature currents of motor-generator

    NASA Astrophysics Data System (ADS)

    Tsukamoto, O.; Utsunomiya, A.

    2007-10-01

    We propose an HTS bulk bearing flywheel energy storage system (FWES) with a rotor shaft stabilization system using feed-back control of the armature currents of the motor-generator. In the proposed system the rotor shaft has a pivot bearing at one end of the shaft and an HTS bulk bearing (SMB) at the other end. The fluctuation of the rotor shaft at the SMB is damped by feed-back control of the armature currents of the motor-generator, sensing the position of the rotor shaft. The method has the merit that the fluctuations are damped without active control magnetic bearings or extra devices, which may deteriorate the energy storage efficiency and add cost. The principle of the method was demonstrated by an experiment using a model permanent magnet motor.

  16. End-to-end communication test on variable length packet structures utilizing AOS testbed

    NASA Technical Reports Server (NTRS)

    Miller, Warner H.; Sank, V.; Fong, Wai; Miko, J.; Powers, M.; Folk, John; Conaway, B.; Michael, K.; Yeh, Pen-Shu

    1994-01-01

    This paper describes a communication test that successfully demonstrated the transfer of losslessly compressed images in an end-to-end system. These compressed images were first formatted into variable-length Consultative Committee for Space Data Systems (CCSDS) packets in the Advanced Orbiting System Testbed (AOST). The CCSDS data structures were transferred from the AOST to the Radio Frequency Simulations Operations Center (RFSOC) via a fiber optic link, where the data were then transmitted through the Tracking and Data Relay Satellite System (TDRSS). The received data acquired at the White Sands Complex (WSC) were transferred back to the AOST, where the data were captured and decompressed back to the original images. This paper describes the compression algorithm, the AOST configuration, key flight components, data formats, and the communication link characteristics and test results.
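
    For readers unfamiliar with the packet format used in the test, the sketch below packs the standard 6-octet CCSDS space packet primary header for a variable-length packet (version, type, secondary-header flag, APID, sequence flags, sequence count, and a data-length field equal to the number of data octets minus one). The APID, sequence count, and payload are illustrative values, not those used in the AOST test.

```python
"""Sketch: pack the standard 6-octet CCSDS space packet primary header
for a variable-length packet. APID, sequence count and payload are
illustrative values only."""
import struct

def ccsds_primary_header(apid, seq_count, data_length_octets,
                         version=0, pkt_type=0, sec_hdr_flag=0, seq_flags=0b11):
    # Word 1: version (3 bits) | type (1) | secondary header flag (1) | APID (11)
    word1 = (version << 13) | (pkt_type << 12) | (sec_hdr_flag << 11) | (apid & 0x7FF)
    # Word 2: sequence flags (2 bits) | packet sequence count (14 bits)
    word2 = (seq_flags << 14) | (seq_count & 0x3FFF)
    # Word 3: packet data length field = number of data octets minus one
    word3 = data_length_octets - 1
    return struct.pack(">HHH", word1, word2, word3)

payload = bytes(range(32))                      # e.g. a compressed-image fragment
packet = ccsds_primary_header(apid=0x2A, seq_count=7,
                              data_length_octets=len(payload)) + payload
print(len(packet), packet[:6].hex())
```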

  17. Augmented Feedback System to Support Physical Therapy of Non-specific Low Back Pain

    NASA Astrophysics Data System (ADS)

    Brodbeck, Dominique; Degen, Markus; Stanimirov, Michael; Kool, Jan; Scheermesser, Mandy; Oesch, Peter; Neuhaus, Cornelia

    Low back pain is an important problem in industrialized countries. Two key factors limit the effectiveness of physiotherapy: low compliance of patients with repetitive movement exercises, and inadequate awareness of patients of their own posture. The Backtrainer system addresses these problems by real-time monitoring of the spine position, by providing a framework for the most common physiotherapy exercises for the low back, and by providing feedback to patients in a motivating way. A minimal sensor configuration was identified as two inertial sensors that measure the orientation of the lower back at two points with three degrees of freedom. The software was designed as a flexible platform to experiment with different hardware and with various feedback modalities. Basic exercises for two types of movements are provided: mobilizing and stabilizing. We developed visual feedback, abstract as well as in the form of a virtual reality game, and complemented the on-screen graphics with an ambient feedback device. The system was evaluated during five weeks in a rehabilitation clinic with 26 patients and 15 physiotherapists. Subjective satisfaction of subjects was good, and we interpret the results as an encouraging indication for the adoption of such a therapy support system by both patients and therapists.

  18. End-To-End Simulation of Launch Vehicle Trajectories Including Stage Separation Dynamics

    NASA Technical Reports Server (NTRS)

    Albertson, Cindy W.; Tartabini, Paul V.; Pamadi, Bandu N.

    2012-01-01

    The development of methodologies, techniques, and tools for analysis and simulation of stage separation dynamics is critically needed for successful design and operation of multistage reusable launch vehicles. As a part of this activity, the Constraint Force Equation (CFE) methodology was developed and implemented in the Program to Optimize Simulated Trajectories II (POST2). The objective of this paper is to demonstrate the capability of POST2/CFE to simulate a complete end-to-end mission. The vehicle configuration selected was the Two-Stage-To-Orbit (TSTO) Langley Glide Back Booster (LGBB) bimese configuration, an in-house concept consisting of a reusable booster and an orbiter having identical outer mold lines. The proximity and isolated aerodynamic databases used for the simulation were assembled using wind-tunnel test data for this vehicle. POST2/CFE simulation results are presented for the entire mission, from lift-off, through stage separation, orbiter ascent to orbit, and booster glide back to the launch site. Additionally, POST2/CFE stage separation simulation results are compared with results from industry standard commercial software used for solving dynamics problems involving multiple bodies connected by joints.

  19. A Comparison and Evaluation of Real-Time Software Systems Modeling Languages

    NASA Technical Reports Server (NTRS)

    Evensen, Kenneth D.; Weiss, Kathryn Anne

    2010-01-01

    A model-driven approach to real-time software systems development enables the conceptualization of software, fostering a more thorough understanding of its often complex architecture and behavior while promoting the documentation and analysis of concerns common to real-time embedded systems such as scheduling, resource allocation, and performance. Several modeling languages have been developed to assist in the model-driven software engineering effort for real-time systems, and these languages are beginning to gain traction with practitioners throughout the aerospace industry. This paper presents a survey of several real-time software system modeling languages, namely the Architectural Analysis and Design Language (AADL), the Unified Modeling Language (UML), Systems Modeling Language (SysML), the Modeling and Analysis of Real-Time Embedded Systems (MARTE) UML profile, and the AADL for UML profile. Each language has its advantages and disadvantages, and in order to adequately describe a real-time software system's architecture, a complementary use of multiple languages is almost certainly necessary. This paper aims to explore these languages in the context of understanding the value each brings to the model-driven software engineering effort and to determine if it is feasible and practical to combine aspects of the various modeling languages to achieve more complete coverage in architectural descriptions. To this end, each language is evaluated with respect to a set of criteria such as scope, formalisms, and architectural coverage. An example is used to help illustrate the capabilities of the various languages.

  20. Backed Bending Actuator

    NASA Technical Reports Server (NTRS)

    Costen, Robert C.; Su, Ji

    2004-01-01

    Bending actuators of a proposed type would partly resemble ordinary bending actuators, but would include simple additional components that would render them capable of exerting large forces at small displacements. Like an ordinary bending actuator, an actuator according to the proposal would include a thin rectangular strip that would comprise two bonded layers (possibly made of electroactive polymers with surface electrodes) and would be clamped at one end in the manner of a cantilever beam. Unlike an ordinary bending actuator, the proposed device would include a rigid flat backplate that would support part of the bending strip against backward displacement; because of this feature, the proposed device is called a backed bending actuator. When an ordinary bending actuator is inactive, the strip typically lies flat, the tip displacement is zero, and the force exerted by the tip is zero. During activation, the tip exerts a transverse force and undergoes a bending displacement that results from the expansion or contraction of one or more of the bonded layers. The tip force of an ordinary bending actuator is inversely proportional to its length; hence, a long actuator tends to be weak. The figure depicts an ordinary bending actuator and the corresponding backed bending actuator. The bending, the tip displacement (d_t), and the tip force (F) exerted by the ordinary bending actuator are well approximated by the conventional equations for the loading and deflection of a cantilever beam subject to a bending moment which, in this case, is applied by the differential expansion or contraction of the bonded layers. The bending, displacement, and tip force of the backed bending actuator are calculated similarly, except that it is necessary to account for the fact that the force F_b that resists the displacement of the tip could be sufficient to push part of the strip against the backplate; in such a condition, the cantilever beam would be effectively shortened
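
    The displacement/force trade described above follows from standard cantilever-beam relations, assuming the differential expansion of the bonded layers acts as a uniform internal moment M on a beam of length L and flexural rigidity EI (an idealization, not the full analysis referenced in the text):

```latex
% Idealized cantilever relations for a bending actuator driven by a
% uniform internal moment M (flexural rigidity EI, length L).
\[
  d_t \;=\; \frac{M L^{2}}{2\,EI}
  \qquad\text{(free tip displacement)}
\]
\[
  F_b \;=\; \frac{3\,EI\,d_t}{L^{3}} \;=\; \frac{3M}{2L}
  \qquad\text{(blocked tip force, hence } F_b \propto 1/L\text{)}
\]
```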

  1. Single-ended mid-infrared laser-absorption sensor for simultaneous in situ measurements of H2O, CO2, CO, and temperature in combustion flows.

    PubMed

    Peng, Wen Yu; Goldenstein, Christopher S; Mitchell Spearrin, R; Jeffries, Jay B; Hanson, Ronald K

    2016-11-20

    The development and demonstration of a four-color single-ended mid-infrared tunable laser-absorption sensor for simultaneous measurements of H2O, CO2, CO, and temperature in combustion flows is described. This sensor operates by transmitting laser light through a single optical port and measuring the backscattered radiation from within the combustion device. Scanned-wavelength-modulation spectroscopy with second-harmonic detection and first-harmonic normalization (scanned-WMS-2f/1f) was used to account for variable signal collection and nonabsorption losses in the harsh environment. Two tunable diode lasers operating near 2551 and 2482 nm were utilized to measure H2O concentration and temperature, while an interband cascade laser near 4176 nm and a quantum cascade laser near 4865 nm were used for measuring CO2 and CO, respectively. The lasers were modulated at either 90 or 112 kHz and scanned across the peaks of their respective absorption features at 1 kHz, leading to a measurement rate of 2 kHz. A hybrid demultiplexing strategy involving both spectral filtering and frequency-domain demodulation was used to decouple the backscattered radiation into its constituent signals. Demonstration measurements were made in the exhaust of a laboratory-scale laminar methane-air flat-flame burner at atmospheric pressure and equivalence ratios ranging from 0.7 to 1.2. A stainless steel reflective plate was placed 0.78 cm away from the sensor head within the combustion exhaust, leading to a total absorption path length of 1.56 cm. Detection limits of 1.4% H2O, 0.6% CO2, and 0.4% CO by mole were reported. To the best of the authors' knowledge, this work represents the first demonstration of a mid-infrared laser-absorption sensor using a single-ended architecture in combustion flows.
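
    As a conceptual sketch of the scanned-WMS-2f/1f idea (digital lock-in detection at the first and second harmonics of the modulation, with the ratio cancelling nonabsorption losses), the example below demodulates a toy detector signal; the modulation parameters and absorption model are arbitrary assumptions, not the calibrated sensor processing.

```python
"""Conceptual digital lock-in sketch for scanned-WMS-2f/1f: demodulate a
detector signal at 1f and 2f and form the 2f/1f magnitude ratio.
Frequencies, depths, and the toy absorption model are illustrative."""
import numpy as np
from scipy.signal import butter, filtfilt

fs, fm = 5e6, 90e3                      # sample rate, modulation frequency (illustrative)
t = np.arange(0, 2e-3, 1 / fs)
nu = 0.4 * np.sin(2 * np.pi * fm * t)   # modulated (relative) laser frequency
absorbance = 0.1 * np.exp(-(nu - 0.05) ** 2 / 0.08)            # toy absorption feature
detector = (1.0 + 0.2 * np.sin(2 * np.pi * fm * t)) * np.exp(-absorbance)

def lockin(sig, f, t, fs, cutoff=10e3):
    """Magnitude of the harmonic of 'sig' at frequency f (digital lock-in)."""
    b, a = butter(2, cutoff / (fs / 2))
    x = filtfilt(b, a, sig * np.cos(2 * np.pi * f * t))
    y = filtfilt(b, a, sig * np.sin(2 * np.pi * f * t))
    return np.sqrt(x**2 + y**2)

r1f = lockin(detector, fm, t, fs)
r2f = lockin(detector, 2 * fm, t, fs)
ratio_2f_1f = r2f / r1f                 # nonabsorption losses cancel in the ratio
print(float(ratio_2f_1f.mean()))
```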

  2. Back-Up/ Peak Shaving Fuel Cell System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staudt, Rhonda L.

    2008-05-28

    This Final Report covers the work executed by Plug Power from 8/11/03 to 10/31/07 under the statement of work for Topic 2: advancing the state of the art of fuel cell technology with the development of a new generation of commercially viable, stationary, back-up/peak-shaving fuel cell systems, the GenCore II. The program cost was $7.2M, with the Department of Energy share being $3.6M and Plug Power's share being $3.6M. The program started in August 2003 and was scheduled to end in January 2006; the actual program end date was October 2007, and a no-cost extension was granted. The Department of Energy barriers addressed as part of this program are:
    Technical barriers for distributed generation systems: durability; power electronics; start-up time.
    Technical barriers for fuel cell components: stack material and manufacturing cost; durability; thermal and water management.
    Background: The next generation GenCore backup fuel cell system to be designed, developed and tested by Plug Power under the program is the first mass-manufacturable design implementation of Plug Power's GenCore architected platform, targeted for battery and small generator replacement applications in the telecommunications, broadband and UPS markets. The next generation GenCore will be a standalone, H2-in/DC-out system. In designing the next generation GenCore specifically for the telecommunications market, Plug Power is teaming with BellSouth Telecommunications, Inc., a leading industry end user. The final next generation GenCore system is expected to represent a market-entry, mass-manufacturable and economically viable design. The technology will incorporate:
    • A cost-reduced, polymer electrolyte membrane (PEM) fuel cell stack tailored to hydrogen fuel use
    • An advanced electrical energy storage system
    • A modular, scalable power conditioning system tailored to market requirements
    • A scaled-down, cost-reduced balance of plant (BOP)
    • Network Equipment Building Standards

  3. Software Development for the Hobby-Eberly Telescope's Segment Alignment Maintenance System using LABView

    NASA Technical Reports Server (NTRS)

    Hall, Drew P.; Ly, William; Howard, Richard T.; Weir, John; Rakoczy, John; Roe, Fred (Technical Monitor)

    2002-01-01

    The software development for an upgrade to the Hobby-Eberly Telescope (HET) was done in LABView. In order to improve the performance of the HET at the McDonald Observatory, a closed-loop system had to be implemented to keep the mirror segments aligned during periods of observation. The control system, called the Segment Alignment Maintenance System (SAMs), utilized inductive sensors to measure the relative motions of the mirror segments. Software was developed in LABView to tie the sensors, operator interface, and mirror-control motors together. Developing the software in LABView allowed the system to be flexible, understandable, and able to be modified by the end users. Since LABView is built using block diagrams, the software naturally followed the designed control system's block and flow diagrams, and individual software blocks could be easily verified. LABView's many built-in display routines allowed easy visualization of diagnostic and health-monitoring data during testing. Also, since LABView is a multi-platform software package, different programmers could develop the code remotely on various types of machines. LABView's ease of use facilitated rapid prototyping and field testing. There were some unanticipated difficulties in the software development, but the use of LABView as the software "language" for the development of SAMs contributed to the overall success of the project.

  4. Complexity, Systems, and Software

    DTIC Science & Technology

    2014-08-14

    Complexity, Systems, and Software. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, © 2014 Carnegie Mellon University. Report date: 29 October 2014.

  5. TU-D-201-01: Definition and Purpose of End-Of-Life for Brachytherapy Devices and Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melhus, C.

    2015-06-15

    Brachytherapy devices and software are designed to last for a certain period of time. Due to a number of considerations, such as material factors, wear and tear, backwards compatibility, and others, they all reach a date when they are no longer supported by the manufacturer. Most of these products have a limited duration for their use, and this information is provided to the user at the time of purchase. Because of issues or concerns determined by the manufacturer, certain products are retired sooner than the anticipated date, and the user is immediately notified. In these situations, the institution faces some difficult choices: remove these products from the clinic, or perform tests and continue their usage. Both of these choices come with a financial burden: replacing the product or assuming a potential medicolegal liability. This session will provide attendees with the knowledge and tools to make better decisions when facing these issues. Learning objectives: understand the meaning of "end-of-life" or "life expectancy" for brachytherapy devices and software; review items (devices and software) affected by "end-of-life" restrictions; learn how to effectively formulate "end-of-life" policies at your institution; learn about possible implications of an "end-of-life" policy; and review other possible approaches to the "end-of-life" issue.

  6. Orion MPCV GN and C End-to-End Phasing Tests

    NASA Technical Reports Server (NTRS)

    Neumann, Brian C.

    2013-01-01

    End-to-end integration tests are critical risk reduction efforts for any complex vehicle. Phasing tests are an end-to-end integrated test that validates system directional phasing (polarity) from sensor measurement through software algorithms to end effector response. Phasing tests are typically performed on a fully integrated and assembled flight vehicle where sensors are stimulated by moving the vehicle and the effectors are observed for proper polarity. Orion Multi-Purpose Crew Vehicle (MPCV) Pad Abort 1 (PA-1) Phasing Test was conducted from inertial measurement to Launch Abort System (LAS). Orion Exploration Flight Test 1 (EFT-1) has two end-to-end phasing tests planned. The first test from inertial measurement to Crew Module (CM) reaction control system thrusters uses navigation and flight control system software algorithms to process commands. The second test from inertial measurement to CM S-Band Phased Array Antenna (PAA) uses navigation and communication system software algorithms to process commands. Future Orion flights include Ascent Abort Flight Test 2 (AA-2) and Exploration Mission 1 (EM-1). These flights will include additional or updated sensors, software algorithms and effectors. This paper will explore the implementation of end-to-end phasing tests on a flight vehicle which has many constraints, trade-offs and compromises. Orion PA-1 Phasing Test was conducted at White Sands Missile Range (WSMR) from March 4-6, 2010. This test decreased the risk of mission failure by demonstrating proper flight control system polarity. Demonstration was achieved by stimulating the primary navigation sensor, processing sensor data to commands and viewing propulsion response. PA-1 primary navigation sensor was a Space Integrated Inertial Navigation System (INS) and Global Positioning System (GPS) (SIGI) which has onboard processing, INS (3 accelerometers and 3 rate gyros) and no GPS receiver. SIGI data was processed by GN&C software into thrust magnitude and

  7. OPeNDAP Server4: Building a High-Performance Server for the DAP by Leveraging Existing Software

    NASA Astrophysics Data System (ADS)

    Potter, N.; West, P.; Gallagher, J.; Garcia, J.; Fox, P.

    2006-12-01

    OPeNDAP has been working in conjunction with NCAR/ESSL/HAO to develop a modular, high-performance data server that will be the successor to the current OPeNDAP data server. The new server, called Server4, is really two servers: a 'Back-End' data server which reads information from various types of data sources and packages the results in DAP objects, and a 'Front-End' which receives client DAP requests and then decides how to use features of the Back-End data server to build the correct responses. This architecture can be configured in several interesting ways: the Front- and Back-End components can be run on either the same or different machines, depending on security and performance needs; new Front-End software can be written to support other network data access protocols; and local applications can interact directly with the Back-End data server. This new server's Back-End component will use the server infrastructure developed by HAO for the Earth System Grid II project. The extensions needed to use it as part of the new OPeNDAP server were minimal. The HAO server was modified so that it loads 'data handlers' at run time. Each data handler module only needs to satisfy a simple interface, which both enables the existing data handlers written for the old OPeNDAP server to be used directly and simplifies writing new handlers from scratch. The Back-End server leverages high-performance features developed for the ESG II project, so applications that interact with it directly can read large volumes of data efficiently. The Front-End module of Server4 uses the Java Servlet system in place of the Common Gateway Interface (CGI) used in the past. New Front-End modules can be written to support different network data access protocols, so that the same server will ultimately be able to support more than the DAP/2.0 protocol. As an example, we will discuss a SOAP interface that is currently in development. In addition to support for DAP/2.0 and prototypical support for
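
    The 'data handlers loaded at run time behind a simple interface' idea can be illustrated with the sketch below; it shows only the pattern, in Python, and is not the actual Server4/HAO C++ handler interface. The module and class names are hypothetical.

```python
"""Conceptual sketch of run-time loadable data handlers behind a small
common interface -- the pattern described for Server4's Back-End, not its
actual (C++) interface. Module and class names are hypothetical."""
import importlib
from abc import ABC, abstractmethod
from typing import Optional

class DataHandler(ABC):
    """Minimal interface every format-specific handler must satisfy."""

    @abstractmethod
    def can_handle(self, path: str) -> bool: ...

    @abstractmethod
    def read(self, path: str, constraint: Optional[str] = None) -> dict: ...

def load_handler(module_name: str, class_name: str) -> DataHandler:
    """Import a handler module by name at run time and instantiate it."""
    module = importlib.import_module(module_name)   # e.g. "handlers.netcdf_handler"
    handler = getattr(module, class_name)()
    if not isinstance(handler, DataHandler):
        raise TypeError(f"{class_name} does not implement DataHandler")
    return handler
```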

  8. Product Engineering Class in the Software Safety Risk Taxonomy for Building Safety-Critical Systems

    NASA Technical Reports Server (NTRS)

    Hill, Janice; Victor, Daniel

    2008-01-01

    When software safety requirements are imposed on legacy safety-critical systems, retrospective safety cases need to be formulated as part of recertifying the systems for further use, and risks must be documented and managed to give confidence for reusing the systems. The SEI Software Development Risk Taxonomy [4] focuses on general software development issues; it does not, however, cover all the safety risks. The Software Safety Risk Taxonomy [8] was developed, which provides a construct for eliciting and categorizing software safety risks in a straightforward manner. In this paper, we present extended work on the taxonomy for safety that incorporates the additional issues inherent in the development and maintenance of safety-critical systems with software. An instrument called the Software Safety Risk Taxonomy Based Questionnaire (TBQ) is generated, containing questions addressing each safety attribute in the Software Safety Risk Taxonomy. Software safety risks are surfaced using the new TBQ and then analyzed. In this paper we give the definitions for the specialized Product Engineering Class within the Software Safety Risk Taxonomy. At the end of the paper, we present the tool known as the 'Legacy Systems Risk Database Tool', which is used to collect and analyze the data required to show traceability to a particular safety standard.

  9. Status report of the SRT radiotelescope control software: the DISCOS project

    NASA Astrophysics Data System (ADS)

    Orlati, A.; Bartolini, M.; Buttu, M.; Fara, A.; Migoni, C.; Poppi, S.; Righini, S.

    2016-08-01

    The Sardinia Radio Telescope (SRT) is a 64-m fully-steerable radio telescope. It is provided with an active surface to correct for gravitational deformations, allowing observations from 300 MHz to 100 GHz. At present, three receivers are available: a coaxial LP-band receiver (305-410 MHz and 1.5-1.8 GHz), a C-band receiver (5.7-7.7 GHz) and a 7-feed K-band receiver (18-26.5 GHz). Several back-ends are also available in order to perform the different data acquisition and analysis procedures requested by scientific projects. The design and development of the SRT control software started in 2004, and now belongs to a wider project called DISCOS (Development of the Italian Single-dish COntrol System), which provides a common infrastructure to the three Italian radio telescopes (Medicina, Noto and SRT dishes). DISCOS is based on the ALMA Common Software (ACS) framework, and currently consists of more than 500k lines of code. It is organized in a common core and three specific product lines, one for each telescope. Recent developments, carried out after the conclusion of the technical commissioning of the instrument (October 2013), consisted of the addition of several new features in many parts of the observing pipeline, spanning from the motion control to the digital back-ends for data acquisition and data formatting; we briefly describe such improvements. More importantly, in the last two years we have supported the astronomical validation of the SRT radio telescope, leading to the opening of the first public call for proposals in late 2015. During this period, while assisting both the engineering and the scientific staff, we massively employed the control software and were able to test all of its features: in this process we received our first feedback from the users and we could verify how the system performed in a real-life scenario, drawing the first conclusions about the overall system stability and performance. We examine how the system behaves in terms of network

  10. Characteristic of a Digital Correlation Radiometer Back End with Finite Wordlength

    NASA Technical Reports Server (NTRS)

    Biswas, Sayak K.; Hyde, David W.; James, Mark W.; Cecil, Daniel J.

    2017-01-01

    The performance characteristic of a digital correlation radiometer signal processing back end (DBE) is analyzed using a simulator. The particular design studied here corresponds to the airborne Hurricane Imaging radiometer which was jointly developed by the NASA Marshall Space Flight Center, University of Michigan, University of Central Florida and NOAA. Laboratory and flight test data is found to be in accord with the simulation results. Overall design seems to be optimum for the typical input signal dynamic range. It was found that the performance of the digital kurtosis could be improved by lowering the DBE input power level. An unusual scaling between digital correlation channels observed in the instrument data is confirmed to be a DBE characteristic.
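
    The effect of finite wordlength on a digital correlator can be illustrated by quantizing two correlated noise channels to a few bits before correlating them, as in the toy sketch below; the signal model, bit widths, and full-scale choice are assumptions and do not represent the instrument's DBE design.

```python
"""Toy illustration of a digital correlator with finite wordlength:
quantize two correlated Gaussian channels to n bits, then compare the
quantized correlation estimate with the full-precision one. The signal
model and bit widths are illustrative, not the actual DBE design."""
import numpy as np

rng = np.random.default_rng(0)
n_samples, rho = 1_000_000, 0.1          # true correlation between channels
common = rng.standard_normal(n_samples)
ch_a = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.standard_normal(n_samples)
ch_b = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.standard_normal(n_samples)

def quantize(x, bits, full_scale=4.0):
    """Uniform mid-tread quantizer with clipping at +/- full_scale."""
    levels = 2 ** (bits - 1)
    step = full_scale / levels
    return np.clip(np.round(x / step), -levels, levels - 1) * step

for bits in (2, 3, 8):
    qa, qb = quantize(ch_a, bits), quantize(ch_b, bits)
    est = np.mean(qa * qb) / np.sqrt(np.mean(qa**2) * np.mean(qb**2))
    print(bits, "bits:", round(est, 4), " ideal:", round(np.mean(ch_a * ch_b), 4))
```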

  11. Development of a data management front-end for use with a LANDSAT-based information system

    NASA Technical Reports Server (NTRS)

    Turner, B. J.

    1982-01-01

    The development and implementation of a data management front-end system for use with a LANDSAT-based information system that facilitates the processing of both LANDSAT and ancillary data was examined. The final tasks, reported on here, involved: (1) the implementation of the VICAR image processing software system at Penn State and the development of a user-friendly front-end for this system; (2) the implementation of JPL-developed software, based on VICAR, for mosaicking LANDSAT scenes; (3) the creation and storage of a mosaic of 1981 summer LANDSAT data for the entire state of Pennsylvania; (4) demonstrations of the defoliation assessment procedure for Perry and Centre Counties, and presentation of the results at the 1982 National Gypsy Moth Review Meeting; and (5) the training of Pennsylvania Bureau of Forestry personnel in the use of the defoliation analysis system.

  12. Modeling software systems by domains

    NASA Technical Reports Server (NTRS)

    Dippolito, Richard; Lee, Kenneth

    1992-01-01

    The Software Architectures Engineering (SAE) Project at the Software Engineering Institute (SEI) has developed engineering modeling techniques that both reduce the complexity of software for domain-specific computer systems and result in systems that are easier to build and maintain. These techniques allow maximum freedom for system developers to apply their domain expertise to software. We have applied these techniques to several types of applications, including training simulators operating in real time, engineering simulators operating in non-real time, and real-time embedded computer systems. Our modeling techniques result in software that mirrors both the complexity of the application and the domain knowledge requirements. We submit that the proper measure of software complexity reflects neither the number of software component units nor the code count, but the locus of and amount of domain knowledge. As a result of using these techniques, domain knowledge is isolated by fields of engineering expertise and removed from the concern of the software engineer. In this paper, we will describe kinds of domain expertise, describe engineering by domains, and provide relevant examples of software developed for simulator applications using the techniques.

  13. Using VirtualGL/TurboVNC Software on the Peregrine System | High-Performance Computing

    Science.gov Websites

    Learn how to use VirtualGL/TurboVNC software on the Peregrine system, allowing users to access and share large-memory visualization nodes with high-end graphics processing units. This may be better than just using X11 forwarding when connecting from a remote site with low bandwidth.

  14. A Probabilistic Software System Attribute Acceptance Paradigm for COTS Software Evaluation

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    2005-01-01

    Standard software requirement formats are written from top-down perspectives only, that is, from an ideal notion of a client's needs. Despite the exactness of the standard format, software and system errors in designed systems have abounded. Bad and inadequate requirements have resulted in cost overruns, schedule slips and lost profitability. Commercial off-the-shelf (COTS) software components are even more troublesome than designed systems because they are often provided 'as is' and subsequently delivered with unsubstantiated validation of described capabilities. For COTS software, there needs to be a way to express the client's software needs in a consistent and formal manner using software system attributes derived from software quality standards. Additionally, the format needs to be amenable to software evaluation processes that integrate observable evidence garnered from historical data. This paper presents a paradigm that effectively bridges the gap between what a client desires (top-down) and what has been demonstrated (bottom-up) for COTS software evaluation. The paradigm addresses the specification of needs before the software evaluation is performed and can be used to increase the shared understanding between clients and software evaluators about what is required and what is technically possible.

  15. The software-defined fast post-processing for GEM soft x-ray diagnostics in the Tungsten Environment in Steady-state Tokamak thermal fusion reactor

    NASA Astrophysics Data System (ADS)

    Krawczyk, Rafał Dominik; Czarski, Tomasz; Linczuk, Paweł; Wojeński, Andrzej; Kolasiński, Piotr; Gąska, Michał; Chernyshova, Maryna; Mazon, Didier; Jardin, Axel; Malard, Philippe; Poźniak, Krzysztof; Kasprowicz, Grzegorz; Zabołotny, Wojciech; Kowalska-Strzeciwilk, Ewa; Malinowski, Karol

    2018-06-01

    This article presents novel software-defined, server-based solutions that were introduced in the fast, real-time computation systems for soft x-ray diagnostics for the WEST (Tungsten Environment in Steady-state Tokamak) reactor in Cadarache, France. The objective of the research was to provide fast processing of data at high throughput and with low latencies for investigating the interplay between particle transport and magnetohydrodynamic activity. The long-term objective is to implement a fast feedback signal in the reactor control mechanisms to sustain the fusion reaction. The implemented electronic measurement device is anticipated to be deployed in the WEST. A standalone software-defined computation engine was designed to handle data collected at high rates in the server back-end of the system. Signals are obtained from the front-end field-programmable gate array mezzanine cards that acquire and perform a selection from the gas electron multiplier detector. A fast, custom-written library for plasma diagnostics was written in C++. It originated from reference offline MATLAB implementations, which were redesigned for runtime analysis during the experiment in the novel online modes of operation. The implementation allowed the benchmarking, evaluation, and optimization of plasma processing algorithms, with the possibility to check consistency with reference computations written in MATLAB. The back-end software and hardware architecture are presented with the data evaluation mechanisms. The online modes of operation for the WEST are discussed. The results concerning the performance of the processing and the introduced functionality are presented.

  16. Remote Software Application and Display Development

    NASA Technical Reports Server (NTRS)

    Sanders, Brandon T.

    2014-01-01

    The era of the shuttle program has come to an end, but only to give rise to newer and more exciting projects. Now is the time of the Orion spacecraft, a work of art designed to exceed all previous endeavors of man. NASA is exiting the time of exploration and is entering a new period, a period of pioneering. With this new mission, many of NASA's organizations must undergo a great deal of change and development to support the Orion missions. The Spaceport Command and Control System (SCCS) is the new system that will provide NASA the ability to launch rockets into orbit and thus control Orion and other spacecraft as the goal of populating Mars becomes increasingly tangible. Since the previous control system, the Launch Processing System (LPS), was primarily designed to launch the shuttles, SCCS was needed as Kennedy Space Center (KSC) reorganized into a multi-user spaceport for commercial flights, providing more versatile control over rockets. Within SCCS is the Launch Control System (LCS), which is the remote software behind the command and monitoring of flight and ground system hardware. This internship at KSC has involved two main components of LCS: remote software application and display development. The display environment provides a graphical user interface for an operator to view and see if any cautions are raised, while the remote applications are the backbone that communicates with hardware and then relays the data back to the displays. These elements go hand in hand as they provide monitoring and control over hardware and software alike from the safety of the Launch Control Center. The remote software applications are written in Application Control Language (ACL), which must undergo unit testing to ensure data integrity. This paper describes both the implementation and writing of unit tests in ACL code for remote software applications, as well as the building of remote displays to be used in the Launch Control Center (LCC).

  17. Safeguarding End-User Military Software

    DTIC Science & Technology

    2014-12-04

    ...product lines using compositional symbolic execution [17]. Software product lines are families of products defined by feature commonality and variability, with a well-managed asset base. Recent work in testing of software product lines has exploited similarities across development phases to reuse... feature dependence graph to extract the set of possible interaction trees in a product family. It composes these to incrementally and symbolically

  18. Security System Software

    NASA Technical Reports Server (NTRS)

    1993-01-01

    C Language Integration Production System (CLIPS), a NASA-developed expert systems program, has enabled a security systems manufacturer to design a new generation of hardware. C.CURESystem 1 Plus, manufactured by Software House, is a software-based system that is used with a variety of access control hardware at installations around the world. Users can manage large amounts of information, solve unique security problems and control entry and time scheduling. CLIPS acts as an information management tool when accessed by C.CURESystem 1 Plus. It asks questions about the hardware and, when given the answers, recommends possible quick solutions that can be applied by non-expert persons.

  19. The Implementation of Satellite Attitude Control System Software Using Object Oriented Design

    NASA Technical Reports Server (NTRS)

    Reid, W. Mark; Hansell, William; Phillips, Tom; Anderson, Mark O.; Drury, Derek

    1998-01-01

    NASA established the Small Explorer (SMEX) program in 1988 to provide frequent opportunities for highly focused and relatively inexpensive space science missions. The SMEX program has produced five satellites, three of which have been successfully launched. The remaining two spacecraft are scheduled for launch within the coming year. NASA has recently developed a prototype for the next-generation Small Explorer spacecraft (SMEX-Lite). This paper describes the object-oriented design (OOD) of the SMEX-Lite Attitude Control System (ACS) software. The SMEX-Lite ACS is three-axis controlled and is capable of performing sub-arc-minute pointing. This paper first describes the high-level requirements governing the SMEX-Lite ACS software architecture. Next, the context in which the software resides is explained. The paper describes the principles of encapsulation, inheritance, and polymorphism with respect to the implementation of an ACS software system. This paper will also discuss the design of several ACS software components. Specifically, object-oriented designs are presented for sensor data processing, attitude determination, attitude control, and failure detection. Finally, this paper will address the establishment of the ACS Foundation Class (AFC) Library. The AFC is a large software repository, requiring a minimal amount of code modification to produce ACS software for future projects.

  20. NASA's Core Trajectory Sub-System Project: Using JBoss Enterprise Middleware for Building Software Systems Used to Support Spacecraft Trajectory Operations

    NASA Technical Reports Server (NTRS)

    Stensrud, Kjell C.; Hamm, Dustin

    2007-01-01

    NASA's Johnson Space Center (JSC) / Flight Design and Dynamics Division (DM) has prototyped the use of Open Source middleware technology for building its next generation spacecraft mission support system. This is part of a larger initiative to use open standards and open source software as building blocks for future mission and safety critical systems. JSC is hoping to leverage standardized enterprise architectures, such as Java EE, so that its internal software development efforts can be focused on the core aspects of their problem domain. This presentation will outline the design and implementation of the Trajectory system and the lessons learned during the exercise.

  1. NASA software specification and evaluation system: Software verification/validation techniques

    NASA Technical Reports Server (NTRS)

    1977-01-01

    NASA software requirement specifications were used in the development of a system for validating and verifying computer programs. The software specification and evaluation system (SSES) provides for the effective and efficient specification, implementation, and testing of computer software programs. The system as implemented will produce structured FORTRAN or ANSI FORTRAN programs, but the principles upon which SSES is designed allow it to be easily adapted to other high order languages.

  2. Web-based DAQ systems: connecting the user and electronics front-ends

    NASA Astrophysics Data System (ADS)

    Lenzi, Thomas

    2016-12-01

    Web technologies are quickly evolving and are gaining in computational power and flexibility, allowing for a paradigm shift in the field of Data Acquisition (DAQ) systems design. Modern web browsers offer the possibility to create intricate user interfaces and are able to process and render complex data. Furthermore, new web standards such as WebSockets allow for fast real-time communication between the server and the user with minimal overhead. Those improvements make it possible to move the control and monitoring operations from the back-end servers directly to the user and to the front-end electronics, thus reducing the complexity of the data acquisition chain. Moreover, web-based DAQ systems offer greater flexibility, accessibility, and maintainability on the user side than traditional applications which often lack portability and ease of use. As proof of concept, we implemented a simplified DAQ system on a mid-range Spartan6 Field Programmable Gate Array (FPGA) development board coupled to a digital front-end readout chip. The system is connected to the Internet and can be accessed from any web browser. It is composed of custom code to control the front-end readout and of a dual soft-core Microblaze processor to communicate with the client.
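
    A minimal sketch of the server-to-browser push path is shown below, using the third-party Python 'websockets' package; the port, update rate, and payload fields are hypothetical, and this stands in for (rather than reproduces) the FPGA/Microblaze implementation described above.

```python
"""Minimal sketch of pushing monitoring data from a DAQ back-end to a
browser over a WebSocket (third-party 'websockets' package assumed).
The port, update rate and payload fields are hypothetical."""
import asyncio
import json
import random

import websockets

async def monitor(websocket, path=None):
    """Stream a small status record to the connected browser client.
    ('path' is kept for compatibility with older websockets versions.)"""
    while True:
        sample = {"channel": 0, "adc_counts": random.randint(0, 4095)}
        await websocket.send(json.dumps(sample))
        await asyncio.sleep(0.1)           # 10 Hz updates, for illustration

async def main():
    async with websockets.serve(monitor, "0.0.0.0", 8765):
        await asyncio.Future()             # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```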

  3. Ripple-aware optical proximity correction fragmentation for back-end-of-line designs

    NASA Astrophysics Data System (ADS)

    Wang, Jingyu; Wilkinson, William

    2018-01-01

    Accurate characterization of image rippling is critical in early detection of back-end-of-line (BEOL) patterning weakpoints, as most defects are strongly associated with excessive rippling that does not get effectively compensated by optical proximity correction (OPC). We correlate image contour with design shapes to account for design geometry-dependent rippling signature, and explore the best practice of OPC fragmentation for BEOL geometries. Specifically, we predict the optimum contour as allowed by the lithographic process and illumination conditions and locate ripple peaks, valleys, and inflection points. This allows us to identify potential process weakpoints and segment the mask accordingly to achieve the best correction results.
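
    As a conceptual illustration of locating ripple peaks, valleys, and inflection points along a contour (one ingredient of ripple-aware fragmentation), the sketch below analyzes a synthetic edge-placement-error signal using sign changes of its first and second differences; it is not the authors' OPC flow.

```python
"""Conceptual sketch: locate ripple peaks, valleys and inflection points
along a contour from sign changes of the first and second differences of
an edge-placement-error signal. The sampled signal is synthetic; this is
not the OPC fragmentation flow described in the paper."""
import numpy as np

s = np.linspace(0.0, 1.0, 400)                                     # position along the edge (a.u.)
epe = 2.0 * np.sin(6 * np.pi * s) + 0.5 * np.sin(17 * np.pi * s)   # synthetic ripple (nm)

d1 = np.gradient(epe, s)
d2 = np.gradient(d1, s)

peaks = np.where((np.sign(d1[:-1]) > 0) & (np.sign(d1[1:]) <= 0))[0]
valleys = np.where((np.sign(d1[:-1]) < 0) & (np.sign(d1[1:]) >= 0))[0]
inflections = np.where(np.diff(np.sign(d2)) != 0)[0]

print(f"{peaks.size} peaks, {valleys.size} valleys, {inflections.size} inflection points")
# Fragmentation could then place OPC segment breakpoints at these indices.
```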

  4. The ASTRI SST-2M telescope prototype for the Cherenkov Telescope Array: camera DAQ software architecture

    NASA Astrophysics Data System (ADS)

    Conforti, Vito; Trifoglio, Massimo; Bulgarelli, Andrea; Gianotti, Fulvio; Fioretti, Valentina; Tacchini, Alessandro; Zoli, Andrea; Malaguti, Giuseppe; Capalbi, Milvia; Catalano, Osvaldo

    2014-07-01

    ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a Flagship Project financed by the Italian Ministry of Education, University and Research, and led by INAF, the Italian National Institute of Astrophysics. Within this framework, INAF is currently developing an end-to-end prototype of a Small Size dual-mirror Telescope. In a second phase the ASTRI project foresees the installation of the first elements of the array at the CTA southern site, a mini-array of 7 telescopes. The ASTRI Camera DAQ Software is aimed at Camera data acquisition, storage and display during Camera development as well as during commissioning and operations on the ASTRI SST-2M telescope prototype that will operate at the INAF observing station located at Serra La Nave on Mount Etna (Sicily). The Camera DAQ configuration and operations will be sequenced either through local operator commands or through remote commands received from the Instrument Controller System that commands and controls the Camera. The Camera DAQ software will acquire data packets through a direct one-way socket connection with the Camera Back End Electronics. In near real time, the data will be stored in both raw and FITS format. The DAQ Quick Look component will allow the operator to display the Camera data packets in near real time. We are developing the DAQ software adopting the iterative and incremental model in order to maximize software reuse and to implement a system which is easily adaptable to changes. This contribution presents the Camera DAQ Software architecture with particular emphasis on its potential reuse for the ASTRI/CTA mini-array.
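
    A minimal sketch of one acquisition-and-storage step (read a packet from a one-way TCP socket, write it to FITS) is given below using numpy and astropy.io.fits; the packet layout, host, port, and frame shape are hypothetical and do not correspond to the ASTRI Back End Electronics format.

```python
"""Minimal sketch of one DAQ step: read a fixed-size data packet from a
one-way TCP socket and write it to a FITS file. The packet layout,
host/port, and array shape are hypothetical (not the ASTRI BEE format)."""
import socket

import numpy as np
from astropy.io import fits

PACKET_BYTES = 2 * 64 * 64          # hypothetical: 64x64 camera frame, uint16

def recv_exact(sock, nbytes):
    """Read exactly nbytes from the socket or raise if the peer closes."""
    buf = bytearray()
    while len(buf) < nbytes:
        chunk = sock.recv(nbytes - len(buf))
        if not chunk:
            raise ConnectionError("back-end closed the connection")
        buf.extend(chunk)
    return bytes(buf)

with socket.create_connection(("backend.local", 5000)) as sock:
    frame = np.frombuffer(recv_exact(sock, PACKET_BYTES), dtype=">u2").reshape(64, 64)
    fits.HDUList([fits.PrimaryHDU(data=frame)]).writeto("camera_frame.fits",
                                                        overwrite=True)
```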

  5. Sub-cooled liquid nitrogen cryogenic system with neon turbo-refrigerator for HTS power equipment

    NASA Astrophysics Data System (ADS)

    Yoshida, S.; Hirai, H.; Nara, N.; Ozaki, S.; Hirokawa, M.; Eguchi, T.; Hayashi, H.; Iwakuma, M.; Shiohara, Y.

    2014-01-01

    We developed a prototype sub-cooled liquid nitrogen (LN) circulation system for HTS power equipment. The system consists of a neon turbo-Brayton refrigerator with a LN sub-cooler and a LN circulation pump unit. The neon refrigerator has more than 2 kW of cooling power at 65 K. The LN sub-cooler is a plate-fin type heat exchanger and is installed in the refrigerator cold box. In order to carry out the system performance tests, a dummy cryostat with an electric heater was used in place of actual HTS power equipment. Sub-cooled LN is delivered into the sub-cooler by the LN circulation pump and cooled within it. After the sub-cooler, the sub-cooled LN goes out from the cold box to the dummy cryostat and comes back to the pump unit. The system can control the outlet sub-cooled LN temperature by adjusting the refrigerator cooling power, which is automatically controlled by the turbo-compressor rotational speed. In the performance tests, we abruptly increased the electric heater power from 200 W to 1300 W and confirmed that the temperature fluctuation was about ±1 K. The cryogenic system details and performance test results are shown in this paper.

  6. A randomized clinical trial of the effectiveness of mechanical traction for sub-groups of patients with low back pain: study methods and rationale

    PubMed Central

    2010-01-01

    Background Patients with signs of nerve root irritation represent a sub-group of those with low back pain who are at increased risk of persistent symptoms and progression to costly and invasive management strategies including surgery. A period of non-surgical management is recommended for most patients, but there is little evidence to guide non-surgical decision-making. We conducted a preliminary study examining the effectiveness of a treatment protocol of mechanical traction with extension-oriented activities for patients with low back pain and signs of nerve root irritation. The results suggested this approach may be effective, particularly in a more specific sub-group of patients. The aim of this study will be to examine the effectiveness of treatment that includes traction for patients with low back pain and signs of nerve root irritation, and within the pre-defined sub-group. Methods/Design The study will recruit 120 patients with low back pain and signs of nerve root irritation. Patients will be randomized to receive an extension-oriented treatment approach, with or without the addition of mechanical traction. Randomization will be stratified based on the presence of the pre-defined sub-grouping criteria. All patients will receive 12 physical therapy treatment sessions over 6 weeks. Follow-up assessments will occur after 6 weeks, 6 months, and 1 year. The primary outcome will be disability measured with a modified Oswestry questionnaire. Secondary outcomes will include self-reports of low back and leg pain intensity, quality of life, global rating of improvement, additional healthcare utilization, and work absence. Statistical analysis will be based on intention to treat principles and will use linear mixed model analysis to compare treatment groups, and examine the interaction between treatment and sub-grouping status. Discussion This trial will provide a methodologically rigorous evaluation of the effectiveness of using traction for patients with low back

  7. Comparison between massage and routine physical therapy in women with sub acute and chronic nonspecific low back pain.

    PubMed

    Kamali, Fahimeh; Panahi, Fatemeh; Ebrahimi, Samaneh; Abbasi, Leila

    2014-01-01

    The aim of this study was to compare massage therapy and routine physical therapy in patients with sub acute and chronic nonspecific low back pain. Thirty volunteer female subjects with sub acute or chronic nonspecific low back pain were randomly enrolled in two groups, massage therapy and routine physical therapy. In the massage group, stretching of the hamstring and paravertebral muscles and stabilizing exercises were prescribed after the massage application. In the routine physical therapy group, TENS, ultrasound (US) and a vibration device were used in addition to exercises. Pain intensity according to a Numerical Rating Scale, functional disability level according to the Oswestry Disability Index, and the modified Schober test for measurement of flexion range of motion were used, before and after ten sessions of treatment, to evaluate the effectiveness of the treatment. Pain intensity, the Oswestry Disability Index and flexion range of motion showed significant differences before and after the intervention in both groups (p<0.001). The statistical analysis revealed that massage therapy significantly improved pain intensity and the Oswestry Disability Index compared to routine physical therapy (p=0.015 and p=0.013, respectively), but the range-of-motion changes were not significantly different between the two groups (p=1.00). It can be concluded that both massage therapy and routine physical therapy are useful for the treatment of sub acute and chronic nonspecific low back pain, especially if accompanied by exercise; however, massage is more effective than the electrotherapy modalities, and it can be used alone or with electrotherapy for the treatment of patients with low back pain.

  8. The SIFT hardware/software systems. Volume 2: Software listings

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.

    1985-01-01

    This document contains software listings of the SIFT operating system and application software. The software is coded for the most part in a variant of the Pascal language, Pascal*. Pascal* is a cross-compiler running on the VAX and Eclipse computers. The output of Pascal* is BDX-390 assembler code. When necessary, modules are written directly in BDX-390 assembler code. The listings in this document supplement the description of the SIFT system found in Volume 1 of this report, A Detailed Description.

  9. Software And Systems Engineering Risk Management

    DTIC Science & Technology

    2010-04-01

    Software and Systems Engineering Risk Management (RSKM). John Walz, VP Technical and Conferences Activities, IEEE Computer Society; Vice-Chair Planning, Software & Systems Engineering Standards Committee, IEEE Computer Society; US TAG to ISO TMB Risk Management Working Group. Related standards timeline: 2004, COSO Enterprise Risk Management Framework; 2006, ISO/IEC 16085 Risk Management Process; 2008, ISO/IEC 12207 Software Lifecycle Processes; 2009, ISO/IEC ...

  10. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    NASA Technical Reports Server (NTRS)

    Post, J. V.

    1981-01-01

    Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  11. Software Architecture for Big Data Systems

    DTIC Science & Technology

    2014-03-27

    Software Architecture for Big Data Systems. Presented at Software Architecture: Trends and New Directions (#SEIswArch), © 2014 Carnegie Mellon University.

  12. The GOCE end-to-end system simulator

    NASA Astrophysics Data System (ADS)

    Catastini, G.; Cesare, S.; de Sanctis, S.; Detoma, E.; Dumontel, M.; Floberghagen, R.; Parisch, M.; Sechi, G.; Anselmi, A.

    2003-04-01

    The idea of an end-to-end simulator was conceived in the early stages of the GOCE programme, as an essential tool for assessing the satellite system performance, which cannot be fully tested on the ground. The simulator in its present form has been under development at Alenia Spazio for ESA since the beginning of Phase B and is being used for checking the consistency of the spacecraft and payload specifications with the overall system requirements, supporting trade-off, sensitivity and worst-case analyses, and preparing and testing the on-ground and in-flight calibration concepts. The software simulates the GOCE flight along an orbit resulting from the application of Earth's gravity field, non-conservative environmental disturbances (atmospheric drag, coupling with Earth's magnetic field, etc.) and control forces/torques. The drag-free control forces as well as the attitude control torques are generated by the current design of the dedicated algorithms. Realistic sensor models (star tracker, GPS receiver and gravity gradiometer) feed the control algorithms, and the commanded forces are applied through realistic thruster models. The output of this stage of the simulator is a time series of Level-0 data, namely the gradiometer raw measurements and spacecraft ancillary data. The next stage of the simulator transforms Level-0 data into Level-1b (gravity gradient tensor) data by implementing the following steps:
    - transformation of the raw measurements of each pair of accelerometers into common and differential accelerations;
    - calibration of the common and differential accelerations;
    - application of the post-facto algorithm to rectify the phase of the accelerations and to estimate the GOCE angular velocity and attitude;
    - computation of the Level-1b gravity gradient tensor from the calibrated accelerations and the estimated angular velocity in different reference frames (orbital, inertial, earth-fixed); computation of the spectral density of the error of the tensor diagonal
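
    The first of the Level-1b steps listed above, forming common- and differential-mode accelerations from an accelerometer pair, can be summarized (in simplified form, with calibration terms omitted and signs depending on convention) as:

```latex
% Simplified common/differential-mode relations for one gradiometer arm
% (accelerometers 1 and 2, baseline $\Delta\mathbf{r}=\mathbf{r}_1-\mathbf{r}_2$);
% calibration and coupling terms are omitted, and sign conventions vary.
\[
  \mathbf{a}_c = \tfrac{1}{2}\left(\mathbf{a}_1+\mathbf{a}_2\right),
  \qquad
  \mathbf{a}_d = \tfrac{1}{2}\left(\mathbf{a}_1-\mathbf{a}_2\right)
\]
\[
  \mathbf{a}_d \;\approx\; \tfrac{1}{2}
  \left(\dot{\boldsymbol{\Omega}} + \boldsymbol{\Omega}^{2} - \mathbf{V}\right)\Delta\mathbf{r},
\]
% where $\mathbf{V}$ is the gravity gradient tensor and $\boldsymbol{\Omega}$,
% $\dot{\boldsymbol{\Omega}}$ are the angular-rate and angular-acceleration
% matrices; for accelerometers mounted symmetrically about the centre of mass,
% the common mode $\mathbf{a}_c$ carries the non-gravitational (drag)
% accelerations used by the drag-free control.
```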

  13. End-to-end simulation and verification of GNC and robotic systems considering both space segment and ground segment

    NASA Astrophysics Data System (ADS)

    Benninghoff, Heike; Rems, Florian; Risse, Eicke; Brunner, Bernhard; Stelzer, Martin; Krenn, Rainer; Reiner, Matthias; Stangl, Christian; Gnat, Marcin

    2018-01-01

    In the framework of a project called on-orbit servicing end-to-end simulation, the final approach and capture of a tumbling client satellite in an on-orbit servicing mission are simulated. The necessary components are developed and the entire end-to-end chain is tested and verified. This involves both on-board and on-ground systems. The space segment comprises a passive client satellite and an active service satellite with its rendezvous and berthing payload. The space segment is simulated using a software satellite simulator and two robotic, hardware-in-the-loop test beds, the European Proximity Operations Simulator (EPOS) 2.0 and the OOS-Sim. The ground segment is established as for a real servicing mission, such that realistic operations can be performed from the different consoles in the control room. During the simulation of the telerobotic operation, it is important to provide a realistic communication environment with parameters like those that occur in the real world (realistic delay and jitter, for example).

  14. Differences in end-range lumbar flexion during slumped sitting and forward bending between low back pain subgroups and genders

    PubMed Central

    Hoffman, Shannon L.; Johnson, Molly B.; Zou, Dequan; Van Dillen, Linda R.

    2012-01-01

    Patterns of lumbar posture and motion are associated with low back pain (LBP). Research suggests LBP subgroups demonstrate different patterns during common tasks. This study assessed differences in end-range lumbar flexion during two tasks between two LBP subgroups classified according to the Movement System Impairment model. Additionally, the impact of gender differences on subgroup differences was assessed. Kinematic data were collected. Subjects in the Rotation (Rot) and Rotation with Extension (RotExt) LBP subgroups were asked to sit slumped and bend forward from standing. Lumbar end-range flexion was calculated. Subjects reported symptom behavior during each test. Compared to the RotExt subgroup, the Rot subgroup demonstrated greater end-range lumbar flexion during slumped sitting and a trend towards greater end-range lumbar flexion with forward bending. Compared to females, males demonstrated greater end-range lumbar flexion during slumped sitting and forward bending. A greater proportion of people in the Rot subgroup reported symptoms with each test compared to the RotExt subgroup. Males and females were equally likely to report symptoms with each test. Gender differences were not responsible for LBP subgroup differences. Subgrouping people with LBP provides insight into differences in lumbar motion within the LBP population. Results suggesting potential consistent differences across flexion-related tasks support the presence of stereotypical movement patterns that are related to LBP. PMID:22261650

  15. System Software Framework for System of Systems Avionics

    NASA Technical Reports Server (NTRS)

    Ferguson, Roscoe C.; Peterson, Benjamin L; Thompson, Hiram C.

    2005-01-01

    Project Constellation implements NASA's vision for space exploration to expand human presence in our solar system. The engineering focus of this project is developing a system of systems architecture. This architecture allows for the incremental development of the overall program. Systems can be built and connected in a "Lego style" manner to generate configurations supporting various mission objectives. The development of the avionics or control systems of such a massive project will result in concurrent engineering. Also, each system will have software and the need to communicate with other (possibly heterogeneous) systems. Fortunately, this design problem has already been solved during the creation and evolution of systems such as the Internet and the Department of Defense's successful effort to standardize distributed simulation (now IEEE 1516). The solution relies on the use of a standard layered software framework and a communication protocol. A standard framework and communication protocol are suggested for the development and maintenance of Project Constellation systems. The ARINC 653 standard is a great start for such a common software framework. This paper proposes a common system software framework that uses the Real Time Publish/Subscribe protocol for framework-to-framework communication to extend ARINC 653. It is highly recommended that such a framework be established before development, as this is important for the success of concurrent engineering. The framework provides an infrastructure for general system services and is designed for flexibility to support a spiral development effort.
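
    The framework-to-framework communication suggested above follows the publish/subscribe pattern. The snippet below is a minimal in-process sketch of that pattern only; it is neither ARINC 653 nor the Real Time Publish/Subscribe wire protocol, and the topic name, sample payload, and Bus class are invented for illustration.

```python
# Minimal publish/subscribe sketch: publishers and subscribers are decoupled
# by topic names, which is the property the proposed framework relies on.
from collections import defaultdict
from typing import Any, Callable, DefaultDict, List

class Bus:
    def __init__(self) -> None:
        self._subs: DefaultDict[str, List[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, sample: Any) -> None:
        for handler in self._subs[topic]:
            handler(sample)

if __name__ == "__main__":
    bus = Bus()
    bus.subscribe("vehicle/health", lambda s: print("health monitor received", s))
    bus.publish("vehicle/health", {"bus_A_voltage": 28.1})
```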

  16. Ease of adoption of clinical natural language processing software: An evaluation of five systems.

    PubMed

    Zheng, Kai; Vydiswaran, V G Vinod; Liu, Yang; Wang, Yue; Stubbs, Amber; Uzuner, Özlem; Gururaj, Anupama E; Bayer, Samuel; Aberdeen, John; Rumshisky, Anna; Pakhomov, Serguei; Liu, Hongfang; Xu, Hua

    2015-12-01

    In recognition of potential barriers that may inhibit the widespread adoption of biomedical software, the 2014 i2b2 Challenge introduced a special track, Track 3 - Software Usability Assessment, in order to develop a better understanding of the adoption issues that might be associated with the state-of-the-art clinical NLP systems. This paper reports the ease of adoption assessment methods we developed for this track, and the results of evaluating five clinical NLP system submissions. A team of human evaluators performed a series of scripted adoptability test tasks with each of the participating systems. The evaluation team consisted of four "expert evaluators" with training in computer science, and eight "end user evaluators" with mixed backgrounds in medicine, nursing, pharmacy, and health informatics. We assessed how easy it is to adopt the submitted systems along the following three dimensions: communication effectiveness (i.e., how effective a system is in communicating its designed objectives to intended audience), effort required to install, and effort required to use. We used a formal software usability testing tool, TURF, to record the evaluators' interactions with the systems and 'think-aloud' data revealing their thought processes when installing and using the systems and when resolving unexpected issues. Overall, the ease of adoption ratings that the five systems received are unsatisfactory. Installation of some of the systems proved to be rather difficult, and some systems failed to adequately communicate their designed objectives to intended adopters. Further, the average ratings provided by the end user evaluators on ease of use and ease of interpreting output are -0.35 and -0.53, respectively, indicating that this group of users generally deemed the systems extremely difficult to work with. While the ratings provided by the expert evaluators are higher, 0.6 and 0.45, respectively, these ratings are still low indicating that they also experienced

  17. Software Techniques for Balancing Computation & Communication in Parallel Systems

    DTIC Science & Technology

    1994-07-01

    [Figure residue: a processing-element layout screenshot; recoverable labels include Number of Tasks: 15, PE Load Variance: 0.0000, Inter-Task Comm: 116, Network Traffic, and PE Layout 1.] … confusion. Because past versions for all files were saved and documented within SCCS, software developers were able to roll back to various combinations of …

  18. Treatment delivery software for a new clinical grade ultrasound system for thermoradiotherapy.

    PubMed

    Novák, Petr; Moros, Eduardo G; Straube, William L; Myerson, Robert J

    2005-11-01

    A detailed description of a clinical grade Scanning Ultrasound Reflector Linear Array System (SURLAS) applicator was given in a previous paper [Med. Phys. 32, 230-240 (2005)]. In this paper we concentrate on the design, development, and testing of the personal computer (PC) based treatment delivery software that runs the therapy system. The SURLAS requires the coordinated interaction between the therapy applicator and several peripheral devices for its proper and safe operation. One of the most important tasks was the coordination of the input power sequences for the elements of two parallel opposed ultrasound arrays (eight 1.5 cm x 2 cm elements/array, array 1 and 2 operate at 1.9 and 4.9 MHz, respectively) in coordination with the position of a dual-face scanning acoustic reflector. To achieve this, the treatment delivery software can divide the applicator's treatment window in up to 64 sectors (minimum size of 2 cm x 2 cm), and control the power to each sector independently by adjusting the power output levels from the channels of a 16-channel radio-frequency generator. The software coordinates the generator outputs with the position of the reflector as it scans back and forth between the arrays. Individual sector control and dual frequency operation allows the SURLAS to adjust power deposition in three dimensions to superficial targets coupled to its treatment window. The treatment delivery software also monitors and logs several parameters such as temperatures acquired using a 16-channel thermocouple thermometry unit. Safety (in particular to patients) was the paramount concern and design criterion. Failure mode and effects analysis (FMEA) was applied to the applicator as well as to the entire therapy system in order to identify safety issues and rank their relative importance. This analysis led to the implementation of several safety mechanisms and a software structure where each device communicates with the controlling PC independently of the others. In case
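
    The sector scheduling described above (up to 64 independently powered sectors coordinated with the reflector position) can be sketched as a simple lookup from reflector position to the sector column currently being insonified. The grid size, the requested power table, the 16 cm window length, and the set_channel_power callback below are assumptions for illustration; this is not the SURLAS treatment delivery code.

```python
# Sketch of per-sector power control coordinated with a scanning reflector.
import numpy as np

N_COLS, N_ROWS = 8, 8                                  # up to 64 sectors
power_map_w = np.full((N_ROWS, N_COLS), 2.0)           # requested power per sector (assumed)

def active_column(reflector_pos_cm: float, window_len_cm: float) -> int:
    """Map the reflector position along the treatment window to a sector column."""
    col = int(reflector_pos_cm / window_len_cm * N_COLS)
    return min(max(col, 0), N_COLS - 1)

def command_generator(set_channel_power, reflector_pos_cm: float, window_len_cm: float = 16.0) -> None:
    """set_channel_power(channel, watts) stands in for the RF generator driver."""
    col = active_column(reflector_pos_cm, window_len_cm)
    for row in range(N_ROWS):
        set_channel_power(row, float(power_map_w[row, col]))

if __name__ == "__main__":
    command_generator(lambda ch, w: print(f"channel {ch}: {w:.1f} W"), reflector_pos_cm=5.2)
```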

  19. Adapting astronomical source detection software to help detect animals in thermal images obtained by unmanned aerial systems

    NASA Astrophysics Data System (ADS)

    Longmore, S. N.; Collins, R. P.; Pfeifer, S.; Fox, S. E.; Mulero-Pazmany, M.; Bezombes, F.; Goodwind, A.; de Juan Ovelar, M.; Knapen, J. H.; Wich, S. A.

    2017-02-01

    In this paper we describe an unmanned aerial system equipped with a thermal-infrared camera and software pipeline that we have developed to monitor animal populations for conservation purposes. Taking a multi-disciplinary approach to tackle this problem, we use freely available astronomical source detection software and the associated expertise of astronomers, to efficiently and reliably detect humans and animals in aerial thermal-infrared footage. Combining this astronomical detection software with existing machine learning algorithms into a single, automated, end-to-end pipeline, we test the software using aerial video footage taken in a controlled, field-like environment. We demonstrate that the pipeline works reliably and describe how it can be used to estimate the completeness of different observational datasets to objects of a given type as a function of height, observing conditions etc. - a crucial step in converting video footage to scientifically useful information such as the spatial distribution and density of different animal species. Finally, having demonstrated the potential utility of the system, we describe the steps we are taking to adapt the system for work in the field, in particular systematic monitoring of endangered species at National Parks around the world.
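
    The detection stage of such a pipeline boils down to finding pixels that are significantly warmer than the frame background and grouping them into candidate sources. The toy example below illustrates only that idea with a robust threshold and connected-component labelling; the 5-sigma threshold and the synthetic frame are assumptions, and the real pipeline uses dedicated astronomical source-detection software rather than this simplification.

```python
# Toy thermal "source detection": flag unusually warm pixels and return the
# centroids of the connected warm regions.
import numpy as np
from scipy import ndimage

def detect_warm_sources(frame: np.ndarray, nsigma: float = 5.0):
    background = np.median(frame)
    noise = 1.4826 * np.median(np.abs(frame - background))   # robust sigma via MAD
    mask = frame > background + nsigma * noise
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(frame - background, labels, range(1, n + 1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.normal(20.0, 0.5, size=(240, 320))   # synthetic frame, degrees C
    frame[100:104, 150:154] += 8.0                   # one warm "animal"
    print(detect_warm_sources(frame))                # roughly [(101.5, 151.5)]
```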

  20. Linear Back-Drive Differentials

    NASA Technical Reports Server (NTRS)

    Waydo, Peter

    2003-01-01

    Linear back-drive differentials have been proposed as alternatives to conventional gear differentials for applications in which there is only limited rotational motion (e.g., oscillation). The finite nature of the rotation makes it possible to optimize a linear back-drive differential in ways that would not be possible for gear differentials or other differentials that are required to be capable of unlimited rotation. As a result, relative to gear differentials, linear back-drive differentials could be more compact and less massive, could contain fewer complex parts, and could be less sensitive to variations in the viscosities of lubricants. Linear back-drive differentials would operate according to established principles of power ball screws and linear-motion drives, but would utilize these principles in an innovative way. One major characteristic of such mechanisms that would be exploited in linear back-drive differentials is the possibility of designing them to drive or back-drive with similar efficiency and energy input: in other words, such a mechanism can be designed so that a rotating screw can drive a nut linearly or the linear motion of the nut can cause the screw to rotate. A linear back-drive differential (see figure) would include two collinear shafts connected to two parts that are intended to engage in limited opposing rotations. The linear back-drive differential would also include a nut that would be free to translate along its axis but not to rotate. The inner surface of the nut would be right-hand threaded at one end and left-hand threaded at the opposite end to engage corresponding right- and left-handed threads on the shafts. A rotation and torque introduced into the system via one shaft would drive the nut in linear motion. The nut, in turn, would back-drive the other shaft, creating a reaction torque. Balls would reduce friction, making it possible for the shaft/nut coupling on each side to operate with 90 percent efficiency.
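
    As a rough illustration of the drive/back-drive symmetry described above, the textbook lead-screw relations below (a sketch for orientation, not taken from the NASA report) show how axial force and torque are exchanged through the screw lead and the efficiencies in each direction:

```latex
T_{\mathrm{drive}} \approx \frac{F\,\ell}{2\pi\,\eta_{\mathrm{fwd}}},
\qquad
T_{\mathrm{back}} \approx \frac{F\,\ell\,\eta_{\mathrm{bwd}}}{2\pi}
```

    Here F is the axial force on the nut, ℓ is the screw lead, and η_fwd and η_bwd are the forward and back-drive efficiencies. For ball screws both efficiencies can approach 0.9, consistent with the 90 percent figure quoted above, which is why the nut's linear motion can rotate the second shaft almost as readily as the first shaft drives the nut.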

  1. System and Software Reliability (C103)

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores

    2003-01-01

    Within the last decade, better reliability models (hardware, software, system) than those currently used have been theorized and developed but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g., OO) have appeared and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that the products meet NASA requirements for reliability measurement. For the new models of the software component developed over the last decade, there is a great need to bring them into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability modeling changes to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability. System reliability models could then be incorporated in a tool such as SMERFS'3. This tool, with better models, would greatly add value in assessing GSFC projects.

  2. Software Prototyping: Designing Systems for Users.

    ERIC Educational Resources Information Center

    Spies, Phyllis Bova

    1983-01-01

    Reports on major change in computer software development process--the prototype model, i.e., implementation of skeletal system that is enhanced during interaction with users. Expensive and unreliable software, software design errors, traditional development approach, resources required for prototyping, success stories, and systems designer's role…

  3. Virtual and flexible digital signal processing system based on software PnP and component works

    NASA Astrophysics Data System (ADS)

    He, Tao; Wu, Qinghua; Zhong, Fei; Li, Wei

    2005-05-01

    An idea of software PnP (Plug & Play) is put forward by analogy with hardware PnP, and based on this idea a flexible virtual digital signal processing system (FVDSPS) is developed. FVDSPS is composed of a main control center, many sub-function modules, and other hardware I/O modules. The main control center sends commands to the sub-function modules and manages the running order, parameters, and results of the sub-functions. The software kernel of FVDSPS is the DSP (Digital Signal Processing) module, which communicates with the main control center through a set of protocols to accept commands or send requests. Data sharing and exchange between the main control center and the DSP modules are carried out and managed through the file system of the Windows operating system via this communication. FVDSPS is oriented toward objects, engineers, and engineering problems. With FVDSPS, users can freely plug and play, and quickly reconfigure a signal processing system for a given engineering problem without programming: what you see is what you get. An engineer can therefore address engineering problems directly, pay more attention to the problems themselves, and improve the flexibility, reliability, and accuracy of the testing system. Because FVDSPS is based on the TCP/IP protocol, testing engineers and technology experts can be connected over the Internet regardless of location, so engineering problems can be resolved quickly and effectively. FVDSPS can be used in many fields such as instruments and meters, fault diagnosis, device maintenance, and quality control.
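
    The "software plug & play" idea above amounts to sub-function modules registering with a main control center, which then chains them into a processing system without recompilation. The sketch below illustrates only that registration-and-dispatch idea; the module names, the list-based signal type, and the ControlCenter interface are invented for illustration and are not the FVDSPS implementation.

```python
# Minimal plug-and-play sketch: modules are plugged into a control center by
# name and played in a user-chosen order over a shared signal.
from typing import Callable, Dict, List

Signal = List[float]

class ControlCenter:
    def __init__(self) -> None:
        self.modules: Dict[str, Callable[[Signal], Signal]] = {}

    def plug(self, name: str, module: Callable[[Signal], Signal]) -> None:
        self.modules[name] = module                    # register ("plug") a module

    def run(self, pipeline: List[str], signal: Signal) -> Signal:
        for name in pipeline:                          # execute ("play") in order
            signal = self.modules[name](signal)
        return signal

if __name__ == "__main__":
    cc = ControlCenter()
    cc.plug("remove_mean", lambda x: [v - sum(x) / len(x) for v in x])
    cc.plug("rectify", lambda x: [abs(v) for v in x])
    print(cc.run(["remove_mean", "rectify"], [1.0, 2.0, 3.0, 4.0]))   # [1.5, 0.5, 0.5, 1.5]
```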

  4. Spaceport Command and Control System - Support Software Development

    NASA Technical Reports Server (NTRS)

    Tremblay, Shayne

    2016-01-01

    The Information Architecture Support (IAS) Team, the component of the Spaceport Command and Control System (SCCS) that is in charge of all the pre-runtime data, was in need of some report features to be added to their internal web application, Information Architecture (IA). Development of these reports is crucial for the speed and productivity of the development team, as they are needed to quickly and efficiently make specific and complicated data requests against the massive IA database. These reports were being put on the back burner, as other development of IA was prioritized over them, but the need for them resulted in internships being created to fill this need. The creation of these reports required learning Ruby on Rails development, along with related web technologies, and they will continue to serve IAS and other support software teams and their IA data needs.

  5. Study of a micro-concentrated photovoltaic system based on Cu(In,Ga)Se2 microcells array.

    PubMed

    Jutteau, Sebastien; Guillemoles, Jean-François; Paire, Myriam

    2016-08-20

    We study a micro-concentrated photovoltaic (CPV) system based on micro solar cells made from a thin film technology, Cu(In,Ga)Se2. We designed, using the ray-tracing software Zemax OpticStudio 14, an optical system adapted and integrated to the microcells, with only spherical lenses. The designed architecture has a magnification factor of 100× for an optical efficiency of 85% and an acceptance angle of ±3.5°, without anti-reflective coating. An experimental study is realized to fabricate the first generation prototype on a 5 cm × 5 cm substrate. A mini-module achieved a concentration ratio of 72× under AM1.5G, and an absolute efficiency gain of 1.8% for a final aperture area efficiency of 12.6%.

  6. CLIMLAB: a Python-based software toolkit for interactive, process-oriented climate modeling

    NASA Astrophysics Data System (ADS)

    Rose, B. E. J.

    2015-12-01

    Global climate is a complex emergent property of the rich interactions between simpler components of the climate system. We build scientific understanding of this system by breaking it down into component process models (e.g. radiation, large-scale dynamics, boundary layer turbulence), understanding each component, and putting them back together. Hands-on experience and freedom to tinker with climate models (whether simple or complex) is invaluable for building physical understanding. CLIMLAB is an open-ended software engine for interactive, process-oriented climate modeling. With CLIMLAB you can interactively mix and match model components, or combine simpler process models together into a more comprehensive model. It was created primarily to support classroom activities, using hands-on modeling to teach fundamentals of climate science at both undergraduate and graduate levels. CLIMLAB is written in Python and ties in with the rich ecosystem of open-source scientific Python tools for numerics and graphics. The IPython notebook format provides an elegant medium for distributing interactive example code. I will give an overview of the current capabilities of CLIMLAB, the curriculum we have developed thus far, and plans for the future. Using CLIMLAB requires some basic Python coding skills. We consider this an educational asset, as we are targeting upper-level undergraduates and Python is an increasingly important language in STEM fields. However, CLIMLAB is well suited to be deployed as a computational back-end for a graphical gaming environment based on earth-system modeling.
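
    The process-oriented composition described above can be pictured as a set of small process objects that each nudge a shared model state. The toy example below shows only that composition idea; it is explicitly not the CLIMLAB API, and the two processes, their rates, and the single-temperature state are invented for illustration.

```python
# Toy process-oriented "climate model": each process updates a shared state,
# and a driver loop composes whichever processes the user mixes together.
class Process:
    def step(self, state: dict, dt_days: float) -> None:
        raise NotImplementedError

class SolarHeating(Process):
    def __init__(self, k_per_day: float) -> None:
        self.rate = k_per_day
    def step(self, state, dt_days):
        state["T"] += self.rate * dt_days

class LongwaveCooling(Process):
    def __init__(self, relax_days: float, t_eq_k: float) -> None:
        self.relax, self.t_eq = relax_days, t_eq_k
    def step(self, state, dt_days):
        state["T"] += (self.t_eq - state["T"]) / self.relax * dt_days

def integrate(processes, state, days, dt_days=1.0):
    for _ in range(int(days / dt_days)):
        for p in processes:
            p.step(state, dt_days)
    return state

if __name__ == "__main__":
    state = {"T": 288.0}   # global-mean temperature in kelvin (assumed initial value)
    print(integrate([SolarHeating(0.5), LongwaveCooling(20.0, 288.0)], state, days=100))
```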

  7. Software Design Methods for Real-Time Systems

    DTIC Science & Technology

    1989-12-01

    This module describes the concepts and methods used in the software design of real-time systems. It outlines the characteristics of real-time systems, describes the role of software design in real-time system development, surveys and compares some software design methods for real-time systems, and …

  8. A method of Modelling and Simulating the Back-to-Back Modular Multilevel Converter HVDC Transmission System

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Fan, Youping; Zhang, Dai; Ge, Mengxin; Zou, Xianbin; Li, Jingjiao

    2017-09-01

    This paper proposes a method to simulate a back-to-back modular multilevel converter (MMC) HVDC transmission system. We utilize equivalent networks to simulate the dynamic power system. Moreover, to account for the performance of the converter station, core components of the converter station model provide a basic simulation model. The proposed method is applied to an equivalent of a real power system.

  9. Computer Software for Intelligent Systems.

    ERIC Educational Resources Information Center

    Lenat, Douglas B.

    1984-01-01

    Discusses the development and nature of computer software for intelligent systems, indicating that the key to intelligent problem-solving lies in reducing the random search for solutions. Formal reasoning methods, expert systems, and sources of power in problem-solving are among the areas considered. Specific examples of such software are…

  10. Backing collisions: a study of drivers' eye and backing behaviour using combined rear-view camera and sensor systems.

    PubMed

    Hurwitz, David S; Pradhan, Anuj; Fisher, Donald L; Knodler, Michael A; Muttart, Jeffrey W; Menon, Rajiv; Meissner, Uwe

    2010-04-01

    Backing crash injuries can be severe; approximately 200 of the 2,500 reported injuries of this type per year to children under the age of 15 years result in death. Technology for assisting drivers when backing has limited success in preventing backing crashes. Two questions are addressed: Why is the reduction in backing crashes moderate when rear-view cameras are deployed? Could rear-view cameras augment sensor systems? Design: 46 drivers (36 experimental, 10 control) completed 16 parking trials over 2 days (eight trials per day). Experimental participants were provided with a sensor camera system, controls were not. Three crash scenarios were introduced. Setting: parking facility at UMass Amherst, USA. Subjects: 46 drivers (33 men, 13 women), average age 29 years, who were Massachusetts residents licensed within the USA for an average of 9.3 years. Interventions: vehicles equipped with a rear-view camera and sensor system-based parking aid. Main outcome measures: subjects' eye fixations while driving and researchers' observation of collisions with objects during backing. Results: only 20% of drivers looked at the rear-view camera before backing, and 88% of those did not crash. Of those who did not look at the rear-view camera before backing, 46% looked after the sensor warned the driver. This study indicates that drivers not only attend to an audible warning, but will look at a rear-view camera if available. Evidence suggests that when used appropriately, rear-view cameras can mitigate the occurrence of backing crashes, particularly when paired with an appropriate sensor system.

  11. Develop Direct Geo-referencing System Based on Open Source Software and Hardware Platform

    NASA Astrophysics Data System (ADS)

    Liu, H. S.; Liao, H. M.

    2015-08-01

    A direct geo-referencing system uses remote sensing technology to quickly capture images, GPS tracks, and camera positions. These data allow the construction of large volumes of images with geographic coordinates, so that users can take measurements directly on the images. In order to properly calculate positioning, all the sensor signals must be synchronized. Traditional aerial photography uses a Position and Orientation System (POS) to integrate imagery, coordinates, and camera position. However, it is very expensive, and users cannot use the results immediately because the position information is not embedded into the images. For reasons of economy and efficiency, this study aims to develop a direct geo-referencing system based on an open source software and hardware platform. After using an Arduino microcontroller board to integrate the signals, we can calculate positioning with the open source software OpenCV. In the end, we use the open source panorama browser Panini and integrate all of these into the open source GIS software Quantum GIS. In this way, a complete data collection and processing system can be constructed.
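
    One small but essential piece of such a system is associating each exposure with the GPS fix closest in time. The sketch below illustrates only that timestamp-matching step; the track, image names, and timestamps are invented, and the Arduino-based synchronization and OpenCV positioning mentioned above are not shown.

```python
# Tag each image with the GPS fix whose timestamp is nearest the exposure time.
from bisect import bisect_left

def nearest_fix(gps_track, t):
    """gps_track: list of (timestamp, lat, lon) tuples sorted by timestamp."""
    times = [fix[0] for fix in gps_track]
    i = bisect_left(times, t)
    candidates = gps_track[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda fix: abs(fix[0] - t))

if __name__ == "__main__":
    track = [(0.0, 24.78610, 120.99670), (1.0, 24.78622, 120.99691), (2.0, 24.78641, 120.99712)]
    images = [("IMG_0001.jpg", 0.4), ("IMG_0002.jpg", 1.7)]
    for name, t in images:
        _, lat, lon = nearest_fix(track, t)
        print(f"{name}: lat={lat:.5f}, lon={lon:.5f}")
```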

  12. Experimental research control software system

    NASA Astrophysics Data System (ADS)

    Cohn, I. A.; Kovalenko, A. G.; Vystavkin, A. N.

    2014-05-01

    A software system, intended for automation of small-scale research, has been developed. The software allows one to control equipment, and to acquire and process data, by means of simple scripts. The main purpose of the development is to make experiment automation easier, thus significantly reducing the effort of automating an experimental setup. In particular, minimal programming skills are required and supervisors can review the scripts without difficulty. Interactions between scripts and equipment are managed automatically, allowing multiple scripts to run simultaneously. Unlike well-known commercial data acquisition software systems, control is performed with an imperative scripting language. This approach eases the implementation of complex control and data acquisition algorithms. A modular interface library performs interaction with external interfaces. While the most widely used interfaces are already implemented, a simple framework is provided for fast implementation of new software and hardware interfaces. While the software is in continuous development with new features being implemented, it is already used in our laboratory for automation of helium-3 cryostat control and data acquisition. The software is open source and distributed under the GNU General Public License.
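
    The script-driven approach described above can be pictured as ordinary imperative code calling into a small instrument abstraction provided by the interface library. The sketch below shows only that idea; the Cryostat class, its readings, and the heater settings are invented stand-ins, not the laboratory's actual hardware interfaces.

```python
# An "experiment script" is plain imperative code driving a device object.
import time

class Cryostat:
    """Fake instrument standing in for a real hardware interface module."""
    def __init__(self) -> None:
        self._temperature_k = 4.2
    def set_heater(self, power_mw: float) -> None:
        self._temperature_k += 0.001 * power_mw      # toy thermal response
    def temperature_k(self) -> float:
        return self._temperature_k

def experiment_script(dev: Cryostat, log: list) -> None:
    for power_mw in (0.0, 5.0, 10.0):
        dev.set_heater(power_mw)
        time.sleep(0.01)                             # stand-in for a settling delay
        log.append((power_mw, dev.temperature_k()))

if __name__ == "__main__":
    readings: list = []
    experiment_script(Cryostat(), readings)
    print(readings)
```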

  13. Computer systems and software engineering

    NASA Technical Reports Server (NTRS)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  14. Automating software design system DESTA

    NASA Technical Reports Server (NTRS)

    Lovitsky, Vladimir A.; Pearce, Patricia D.

    1992-01-01

    'DESTA' is the acronym for the Dialogue Evolutionary Synthesizer of Turnkey Algorithms by means of a natural language (Russian or English) functional specification of algorithms or software being developed. DESTA represents the computer-aided and/or automatic artificial intelligence 'forgiving' system which provides users with software tools support for algorithm and/or structured program development. The DESTA system is intended to provide support for the higher levels and earlier stages of engineering design of software in contrast to conventional Computer Aided Design (CAD) systems which provide low level tools for use at a stage when the major planning and structuring decisions have already been taken. DESTA is a knowledge-intensive system. The main features of the knowledge are procedures, functions, modules, operating system commands, batch files, their natural language specifications, and their interlinks. The specific domain for the DESTA system is a high level programming language like Turbo Pascal 6.0. The DESTA system is operational and runs on an IBM PC computer.

  15. Virtual Exercise Training Software System

    NASA Technical Reports Server (NTRS)

    Vu, L.; Kim, H.; Benson, E.; Amonette, W. E.; Barrera, J.; Perera, J.; Rajulu, S.; Hanson, A.

    2018-01-01

    The purpose of this study was to develop and evaluate a virtual exercise training software system (VETSS) capable of providing real-time instruction and exercise feedback during exploration missions. A resistive exercise instructional system was developed using a Microsoft Kinect depth-camera device, which provides markerless 3-D whole-body motion capture at a small form factor and minimal setup effort. It was hypothesized that subjects using the newly developed instructional software tool would perform the deadlift exercise with more optimal kinematics and consistent technique than those without the instructional software. Following a comprehensive evaluation in the laboratory, the system was deployed for testing and refinement in the NASA Extreme Environment Mission Operations (NEEMO) analog.

  16. Designing Control System Application Software for Change

    NASA Technical Reports Server (NTRS)

    Boulanger, Richard

    2001-01-01

    The Unified Modeling Language (UML) was used to design the Environmental Systems Test Stand (ESTS) control system software. The UML was chosen for its ability to facilitate a clear dialog between software designer and customer, from which requirements are discovered and documented in a manner that transposes directly to program objects. Applying the UML to control system software design has resulted in a baseline set of documents from which change, and the effort of that change, can be accurately measured. As the Environmental Systems Test Stand evolves, accurate estimates of the time and effort required to change the control system software will be made. Accurate quantification of the cost of software change can be made before implementation, improving schedule and budget accuracy.

  17. Command and Control Software Development Memory Management

    NASA Technical Reports Server (NTRS)

    Joseph, Austin Pope

    2017-01-01

    This internship was initially meant to cover the implementation of unit test automation for a NASA ground control project. As is often the case with large development projects, the scope and breadth of the internship changed. Instead, the internship focused on finding and correcting memory leaks and errors as reported by a COTS software product meant to track such issues. Memory leaks come in many different flavors and some of them are more benign than others. On the extreme end a program might be dynamically allocating memory and not correctly deallocating it when it is no longer in use. This is called a direct memory leak and in the worst case can use all the available memory and crash the program. If the leaks are small they may simply slow the program down which, in a safety critical system (a system for which a failure or design error can cause a risk to human life), is still unacceptable. The ground control system is managed in smaller sub-teams, referred to as CSCIs. The CSCI that this internship focused on is responsible for monitoring the health and status of the system. This team's software had several methods/modules that were leaking significant amounts of memory. Since most of the code in this system is safety-critical, correcting memory leaks is a necessity.

  18. Proposal for hierarchical description of software systems

    NASA Technical Reports Server (NTRS)

    Thauboth, H.

    1973-01-01

    The programming of digital computers has developed into a new dimension full of difficulties, because the hardware of computers has become so powerful that more complex applications are entrusted to computers. The costs of software development, verification, and maintenance are outpacing those of the hardware, and the trend is toward further increases in the sophistication of computer applications and consequently of the software. To obtain better visibility into software systems and to improve the structure of software systems for better testing, verification, and maintenance, a clear but rigorous description and documentation of software is needed. The purpose of the report is to extend the present methods in order to obtain documentation that better reflects the interplay between the various components and functions of a software system at different levels of detail without losing precision of expression. This is done by the use of block diagrams, sequence diagrams, and cross-reference charts. In the appendices, examples from an actual large software system, i.e., the Marshall System for Aerospace Systems Simulation (MARSYAS), are presented. The proposed documentation structure is compatible with automation of updating significant portions of the documentation for better software change control.

  19. The Need for Integrating the Back End of the Nuclear Fuel Cycle in the United States of America

    DOE PAGES

    Bonano, Evaristo J.; Kalinina, Elena A.; Swift, Peter N.

    2018-02-26

    Current practice for commercial spent nuclear fuel management in the United States of America (US) includes storage of spent fuel in both pools and dry storage cask systems at nuclear power plants. Most storage pools are filled to their operational capacity, and management of the approximately 2,200 metric tons of spent fuel newly discharged each year requires transferring older and cooler fuel from pools into dry storage. In the absence of a repository that can accept spent fuel for permanent disposal, projections indicate that the US will have approximately 134,000 metric tons of spent fuel in dry storage by mid-century when the last plants in the current reactor fleet are decommissioned. Current designs for storage systems rely on large dual-purpose (storage and transportation) canisters that are not optimized for disposal. Various options exist in the US for improving integration of management practices across the entire back end of the nuclear fuel cycle.

  20. The Need for Integrating the Back End of the Nuclear Fuel Cycle in the United States of America

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonano, Evaristo J.; Kalinina, Elena A.; Swift, Peter N.

    Current practice for commercial spent nuclear fuel management in the United States of America (US) includes storage of spent fuel in both pools and dry storage cask systems at nuclear power plants. Most storage pools are filled to their operational capacity, and management of the approximately 2,200 metric tons of spent fuel newly discharged each year requires transferring older and cooler fuel from pools into dry storage. In the absence of a repository that can accept spent fuel for permanent disposal, projections indicate that the US will have approximately 134,000 metric tons of spent fuel in dry storage by mid-century when the last plants in the current reactor fleet are decommissioned. Current designs for storage systems rely on large dual-purpose (storage and transportation) canisters that are not optimized for disposal. Various options exist in the US for improving integration of management practices across the entire back end of the nuclear fuel cycle.

  1. Methodology for automating software systems. Task 1 of the foundations for automating software systems

    NASA Technical Reports Server (NTRS)

    Moseley, Warren

    1989-01-01

    The early stages of a research program designed to establish an experimental research platform for software engineering are described. Major emphasis is placed on Computer Assisted Software Engineering (CASE). The Poor Man's CASE Tool is based on the Apple Macintosh system, employing available software including Focal Point II, Hypercard, XRefText, and Macproject. These programs are functional in themselves, but through advanced linking are available for operation from within the tool being developed. The research platform is intended to merge software engineering technology with artificial intelligence (AI). In the first prototype of the PMCT, however, the sections of AI are not included. CASE tools assist the software engineer in planning goals, routes to those goals, and ways to measure progress. The method described allows software to be synthesized instead of being written or built.

  2. DSN system performance test software

    NASA Technical Reports Server (NTRS)

    Martin, M.

    1978-01-01

    The system performance test software is currently being modified to include additional capabilities and enhancements. Additional software programs are currently being developed for the Command Store and Forward System and the Automatic Total Recall System. The test executive is the main program. It controls the input and output of the individual test programs by routing data blocks and operator directives to those programs. It also processes data block dump requests from the operator.

  3. Backing collisions: a study of drivers’ eye and backing behaviour using combined rear-view camera and sensor systems

    PubMed Central

    Hurwitz, David S; Pradhan, Anuj; Fisher, Donald L; Knodler, Michael A; Muttart, Jeffrey W; Menon, Rajiv; Meissner, Uwe

    2012-01-01

    Context: Backing crash injuries can be severe; approximately 200 of the 2,500 reported injuries of this type per year to children under the age of 15 years result in death. Technology for assisting drivers when backing has limited success in preventing backing crashes. Objectives: Two questions are addressed: Why is the reduction in backing crashes moderate when rear-view cameras are deployed? Could rear-view cameras augment sensor systems? Design: 46 drivers (36 experimental, 10 control) completed 16 parking trials over 2 days (eight trials per day). Experimental participants were provided with a sensor camera system, controls were not. Three crash scenarios were introduced. Setting: Parking facility at UMass Amherst, USA. Subjects: 46 drivers (33 men, 13 women), average age 29 years, who were Massachusetts residents licensed within the USA for an average of 9.3 years. Interventions: Vehicles equipped with a rear-view camera and sensor system-based parking aid. Main Outcome Measures: Subjects' eye fixations while driving and researchers' observation of collisions with objects during backing. Results: Only 20% of drivers looked at the rear-view camera before backing, and 88% of those did not crash. Of those who did not look at the rear-view camera before backing, 46% looked after the sensor warned the driver. Conclusions: This study indicates that drivers not only attend to an audible warning, but will look at a rear-view camera if available. Evidence suggests that when used appropriately, rear-view cameras can mitigate the occurrence of backing crashes, particularly when paired with an appropriate sensor system. PMID:20363812

  4. 40 CFR Table 8 to Subpart U of... - Summary of Compliance Alternative Requirements for the Back-End Process Provisions

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Table 8 to Subpart U of Part 63: Summary of Compliance Alternative Requirements for the Back-End Process Provisions (Protection of Environment: Group I Polymers and Resins, Pt. 63, Subpt. U). The table lists the parameters to be monitored and the requirements for compliance using stripping technology, demonstrated through periodic …

  5. 40 CFR Table 8 to Subpart U of... - Summary of Compliance Alternative Requirements for the Back-End Process Provisions

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Table 8 to Subpart U of Part 63: Summary of Compliance Alternative Requirements for the Back-End Process Provisions (Protection of Environment: Group I Polymers and Resins, Pt. 63, Subpt. U). The table lists the parameters to be monitored and the requirements for compliance using stripping technology, demonstrated through periodic …

  6. 40 CFR Table 8 to Subpart U of... - Summary of Compliance Alternative Requirements for the Back-End Process Provisions

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Table 8 to Subpart U of Part 63: Summary of Compliance Alternative Requirements for the Back-End Process Provisions (Protection of Environment: Group I Polymers and Resins, Pt. 63, Subpt. U). The table lists the parameters to be monitored and the requirements for compliance using stripping technology, demonstrated through periodic …

  7. 40 CFR Table 8 to Subpart U of... - Summary of Compliance Alternative Requirements for the Back-End Process Provisions

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Table 8 to Subpart U of Part 63: Summary of Compliance Alternative Requirements for the Back-End Process Provisions (Protection of Environment: Group I Polymers and Resins, Pt. 63, Subpt. U). The table lists the parameters to be monitored and the requirements for compliance using stripping technology, demonstrated through periodic …

  8. 40 CFR Table 8 to Subpart U of... - Summary of Compliance Alternative Requirements for the Back-End Process Provisions

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Table 8 to Subpart U of Part 63: Summary of Compliance Alternative Requirements for the Back-End Process Provisions (Protection of Environment: Group I Polymers and Resins, Pt. 63, Subpt. U). The table lists the parameters to be monitored and the requirements for compliance using stripping technology, demonstrated through periodic …

  9. Flight software requirements and design support system

    NASA Technical Reports Server (NTRS)

    Riddle, W. E.; Edwards, B.

    1980-01-01

    The desirability and feasibility of computer-augmented support for the pre-implementation activities occurring during the development of flight control software were investigated. The specific topics to be investigated were the capabilities to be included in a pre-implementation support system for flight control software system development, and the specification of a preliminary design for such a system. Further, the pre-implementation support system was to be characterized and specified under the constraints that it: (1) support both description and assessment of flight control software requirements definitions and design specifications; (2) account for known software description and assessment techniques; (3) be compatible with existing and planned NASA flight control software development support systems; and (4) not impose, but possibly encourage, specific development technologies. An overview of the results is given.

  10. Network, system, and status software enhancements for the autonomously managed electrical power system breadboard. Volume 3: Commands specification

    NASA Technical Reports Server (NTRS)

    Mckee, James W.

    1990-01-01

    This volume (3 of 4) contains the specification of the command language for the AMPS system. The volume contains a requirements specification for the operating system and commands, and a design specification for the operating system and commands. The operating system and commands sit on top of the protocol. The commands are an extension of the present set of AMPS commands in that they are more compact, allow multiple sub-commands to be bundled into one command, and have provisions for identifying the sender and the intended receiver. The commands require no change to the software that actually implements them.
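
    The bundling and addressing features described above can be sketched as a command structure that carries sender and receiver identifiers plus a list of sub-commands. The field names, the JSON encoding, and the example sub-commands below are assumptions made for illustration; the actual AMPS command formats are defined in the specification itself.

```python
# Sketch of a bundled command with sender/receiver identification.
from dataclasses import dataclass, field
from typing import List
import json

@dataclass
class SubCommand:
    name: str
    args: dict = field(default_factory=dict)

@dataclass
class Command:
    sender: str
    receiver: str
    subcommands: List[SubCommand] = field(default_factory=list)

    def encode(self) -> bytes:
        payload = {
            "sender": self.sender,
            "receiver": self.receiver,
            "subcommands": [{"name": s.name, "args": s.args} for s in self.subcommands],
        }
        return json.dumps(payload).encode("ascii")

if __name__ == "__main__":
    cmd = Command("ground_station", "load_center_2",
                  [SubCommand("open_switch", {"id": 7}), SubCommand("report_status")])
    print(cmd.encode())
```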

  11. An Architecture, System Engineering, and Acquisition Approach for Space System Software Resiliency

    NASA Astrophysics Data System (ADS)

    Phillips, Dewanne Marie

    Software-intensive space systems can harbor defects and vulnerabilities that may enable external adversaries or malicious insiders to disrupt or disable system functions, risking mission compromise or loss. Mitigating this risk demands a sustained focus on the security and resiliency of the system architecture, including software, hardware, and other components. Robust software engineering practices contribute to the foundation of a resilient system so that the system "can take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time". Software resiliency must be a priority and addressed early in life cycle development to contribute to a secure and dependable space system. Those who develop, implement, and operate software-intensive space systems must determine the factors and systems engineering practices to address when investing in software resiliency. This dissertation offers methodical approaches for improving space system resiliency through software architecture design, systems engineering, and increased software security, thereby reducing the risk of latent software defects and vulnerabilities. By giving greater attention to the early life cycle phases of development, we can alter the engineering process to help detect, eliminate, and avoid vulnerabilities before space systems are delivered. To achieve this objective, this dissertation identifies knowledge, techniques, and tools that engineers and managers can utilize to help them recognize how vulnerabilities are produced and discovered so that they can learn to circumvent them in future efforts. We conducted a systematic review of existing architectural practices, standards, security and coding practices, and the various threats, defects, and vulnerabilities that impact space systems, drawing on hundreds of relevant publications and interviews of subject matter experts. We expanded on the system-level body of knowledge for resiliency and identified a new software …

  12. Crystal structures of the double perovskites Ba{sub 2}Sr{sub 1-x}Ca{sub x}WO{sub 6}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, W.T.; Akerboom, S.; IJdo, D.J.W.

    2007-05-15

    Structures of the double perovskites Ba{sub 2}Sr{sub 1-x}Ca{sub x}WO{sub 6} have been studied by profile analysis of X-ray diffraction data. The end members, Ba{sub 2}SrWO{sub 6} and Ba{sub 2}CaWO{sub 6}, have the space groups I2/m (tilt system a{sup 0}b{sup -}b{sup -}) and Fm-3m (tilt system a{sup 0}a{sup 0}a{sup 0}), respectively. By increasing the Ca concentration, the monoclinic structure transforms to the cubic one via the rhombohedral R-3 phase (tilt system a{sup -}a{sup -}a{sup -}) instead of the tetragonal I4/m phase (tilt system a{sup 0}a{sup 0}c{sup -}). This observation supports the idea that the rhombohedral structure is favoured by increasing the covalency of the octahedral cations in Ba{sub 2}MM'O{sub 6}-type double perovskites, and disagrees with a recent proposal that the formation of the {pi}-bonding, e.g., d{sup 0}-ion, determines the tetragonal symmetry in preference to the rhombohedral one. Graphical abstract: Enlarged sections showing the evolution of the basic (222) and (400) reflections in Ba{sub 2}Sr{sub 1-x}Ca{sub x}WO{sub 6}. Tick marks below are the positions of Bragg reflections calculated using the space groups I2/m (x=0), R-3 (x=0.25, 0.5 and 0.75) and Fm-3m (x=1), respectively.

  13. Subcutaneous Stimulation as an Additional Therapy to Spinal Cord Stimulation for the Treatment of Low Back Pain and Leg Pain in Failed Back Surgery Syndrome: Four-Year Follow-Up.

    PubMed

    Hamm-Faber, Tanja E; Aukes, Hans; van Gorp, Eric-Jan; Gültuna, Ismail

    2015-10-01

    The objective of this study is to investigate the efficacy of long-term follow-up of subcutaneous stimulation (SubQ) as an additional therapy for patients with failed back surgery syndrome (FBSS) with chronic refractory pain, for whom spinal cord stimulation (SCS) alone was unsuccessful in treating low back pain. Prospective case series. FBSS patients with leg and/or low back pain whose conventional therapies had failed received a combination of SCS (8-contact Octad lead, 3877-45 cm, Medtronic, Minneapolis, MN, USA) and/or SubQ (4-contact Quad Plus lead(s), 2888-28 cm, Medtronic). Initially, an Octad lead was placed in the epidural space for SCS for a trial stimulation to assess the suppression of leg and/or low back pain. Where SCS alone was insufficient in treating low back pain, lead(s) were placed superficially in the subcutaneous tissue of the lower back, exactly in the middle of the pain area. A pulse generator (Prime Advanced, 37702, Medtronic) was implanted if the patient reported more than 50% pain relief during the trial period. We investigated the long-term effect of neuromodulation on pain with the visual analog scale (VAS), and disability using the Quebec Pain Disability Scale. The results after 46 months are presented. Eleven patients, five men and six women (age 51 ± 8 years, mean ± SD), were included in the pilot study. In nine cases, SCS was used in combination with SubQ leads. Two patients received only SubQ leads. In one patient, the SCS + SubQ system was removed after nine months and these results were not taken into account for the analysis. Baseline scores for leg (N = 8) and low back pain (N = 10) were VASbl: 59 ± 15 and VASbl: 63 ± 14, respectively. The long-term follow-up period was 46 ± 4 months. SCS significantly reduced leg pain after 12 months (VAS12: 20 ± 11, p12 = 0.001) and 46 months (VAS46: 37 ± 17, p46 = 0.027). Similarly, SubQ significantly reduced back pain after 12 months (VAS12: 33 ± 16, p12 = 0.001) and 46 months

  14. Sleipner vest CO{sub 2} disposal, CO{sub 2} injection into a shallow underground aquifer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baklid, A.; Korbol, R.; Owren, G.

    1996-12-31

    This paper describes the problem of disposing of large amounts of CO{sub 2} into a shallow underground aquifer from an offshore location in the North Sea. The solution presented is an alternative for CO{sub 2}-emitting industries in addressing the growing concern about the environmental impact of such activities. The topside injection facilities and the well and reservoir aspects are discussed, as well as the considerations made while establishing the design basis and the solutions chosen. The CO{sub 2} injection issues in this project differ from industry practice in that the CO{sub 2} is wet and contaminated with methane and, further, because of the shallow depth, the total pressure resistance in the system is not sufficient for the CO{sub 2} to naturally stay in the dense phase region. To allow for safe and cost-effective handling of the CO{sub 2}, it was necessary to develop an injection system that gives a constant back pressure from the well corresponding to the output pressure from the compressor, independent of the injection rate. This is accomplished by selecting a high-injectivity sand formation, completing the well with a large bore, and regulating the dense phase CO{sub 2} temperature and thus the density of the fluid in order to account for the variations in back pressure from the well.

  15. Assessment Environment for Complex Systems Software Guide

    NASA Technical Reports Server (NTRS)

    2013-01-01

    This Software Guide (SG) describes the software developed to test the Assessment Environment for Complex Systems (AECS) by the West Virginia High Technology Consortium (WVHTC) Foundation's Mission Systems Group (MSG) for the National Aeronautics and Space Administration (NASA) Aeronautics Research Mission Directorate (ARMD). This software is referred to as the AECS Test Project throughout the remainder of this document. AECS provides a framework for developing, simulating, testing, and analyzing modern avionics systems within an Integrated Modular Avionics (IMA) architecture. The purpose of the AECS Test Project is twofold. First, it provides a means to test the AECS hardware and system developed by MSG. Second, it provides an example project upon which future AECS research may be based. This Software Guide fully describes building, installing, and executing the AECS Test Project as well as its architecture and design. The design of the AECS hardware is described in the AECS Hardware Guide. Instructions on how to configure, build and use the AECS are described in the User's Guide. Sample AECS software, developed by the WVHTC Foundation, is presented in the AECS Software Guide. The AECS Hardware Guide, AECS User's Guide, and AECS Software Guide are authored by MSG. The requirements set forth for AECS are presented in the Statement of Work for the Assessment Environment for Complex Systems authored by NASA Dryden Flight Research Center (DFRC). The intended audience for this document includes software engineers, hardware engineers, project managers, and quality assurance personnel from WVHTC Foundation (the suppliers of the software), NASA (the customer), and future researchers (users of the software). Readers are assumed to have general knowledge in the field of real-time, embedded computer software development.

  16. Tools for Embedded Computing Systems Software

    NASA Technical Reports Server (NTRS)

    1978-01-01

    A workshop was held to assess the state of tools for embedded systems software and to determine directions for tool development. A synopsis of the talk and the key figures of each workshop presentation, together with chairmen summaries, are presented. The presentations covered four major areas: (1) tools and the software environment (development and testing); (2) tools and software requirements, design, and specification; (3) tools and language processors; and (4) tools and verification and validation (analysis and testing). The utility and contribution of existing tools and research results for the development and testing of embedded computing systems software are described and assessed.

  17. [Sb{sub 4}Au{sub 4}Sb{sub 4}]{sup 2−}: A designer all-metal aromatic sandwich

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Wen-Juan; You, Xue-Rui; Guo, Jin-Chang

    We report on the computational design of an all-metal aromatic sandwich, [Sb{sub 4}Au{sub 4}Sb{sub 4}]{sup 2−}. The triple-layered, square-prismatic sandwich complex is the global minimum of the system from Coalescence Kick and Minima Hopping structural searches. Following a standard, qualitative chemical bonding analysis via canonical molecular orbitals, the sandwich complex can be formally described as [Sb{sub 4}]{sup +}[Au{sub 4}]{sup 4−}[Sb{sub 4}]{sup +}, showing ionic bonding characters with electron transfers in between the Sb{sub 4}/Au{sub 4}/Sb{sub 4} layers. For an in-depth understanding of the system, one needs to go beyond the above picture. Significant Sb → Au donation and Sb ← Au back-donation occur, redistributing electrons from the Sb{sub 4}/Au{sub 4}/Sb{sub 4} layers to the interlayer Sb–Au–Sb edges, which effectively lead to four Sb–Au–Sb three-center two-electron bonds. The complex is a system with 30 valence electrons, excluding the Sb 5s and Au 5d lone-pairs. The two [Sb{sub 4}]{sup +} ligands constitute an unusual three-fold (π and σ) aromatic system with all 22 electrons being delocalized. An energy gap of ∼1.6 eV is predicted for this all-metal sandwich. The complex is a rare example for rational design of cluster compounds and invites forthcoming synthetic efforts.

  18. 30 CFR 75.1101-9 - Back-up water system.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Back-up water system. 75.1101-9 Section 75.1101-9 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Fire Protection § 75.1101-9 Back-up water system...

  19. 30 CFR 75.1101-9 - Back-up water system.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Back-up water system. 75.1101-9 Section 75.1101-9 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Fire Protection § 75.1101-9 Back-up water system...

  20. 30 CFR 75.1101-9 - Back-up water system.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Back-up water system. 75.1101-9 Section 75.1101-9 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Fire Protection § 75.1101-9 Back-up water system...

  1. 30 CFR 75.1101-9 - Back-up water system.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Back-up water system. 75.1101-9 Section 75.1101-9 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Fire Protection § 75.1101-9 Back-up water system...

  2. 30 CFR 75.1101-9 - Back-up water system.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Back-up water system. 75.1101-9 Section 75.1101-9 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Fire Protection § 75.1101-9 Back-up water system...

  3. Advanced program development management software system. Software description and user's manual

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The objectives of this project were to apply emerging techniques and tools from the computer science discipline of paperless management to the activities of the Space Transportation and Exploration Office (PT01) in Marshall Space Flight Center (MSFC) Program Development, thereby enhancing the productivity of the workforce, the quality of the data products, and the collection, dissemination, and storage of information. The approach used to accomplish the objectives emphasized utilizing finished-form (off-the-shelf) software products to the greatest extent possible without impacting the performance of the end product, pursuing developments when necessary in a rapid prototyping environment to provide a mechanism for frequent feedback from the users, and providing a full range of user support functions during the development process to promote testing of the software.

  4. Equalization enhanced phase noise in Nyquist-spaced superchannel transmission systems using multi-channel digital back-propagation

    PubMed Central

    Xu, Tianhua; Liga, Gabriele; Lavery, Domaniç; Thomsen, Benn C.; Savory, Seb J.; Killey, Robert I.; Bayvel, Polina

    2015-01-01

    Superchannel transmission spaced at the symbol rate, known as Nyquist spacing, has been demonstrated for effectively maximizing the optical communication channel capacity and spectral efficiency. However, the achievable capacity and reach of transmission systems using advanced modulation formats are affected by fibre nonlinearities and equalization enhanced phase noise (EEPN). Fibre nonlinearities can be effectively compensated using digital back-propagation (DBP). However, EEPN, which arises from the interaction between laser phase noise and dispersion, cannot be efficiently mitigated and can significantly degrade the performance of transmission systems. Here we report the first investigation of the origin and the impact of EEPN in a Nyquist-spaced superchannel system, employing electronic dispersion compensation (EDC) and multi-channel DBP (MC-DBP). Analysis was carried out in a Nyquist-spaced 9-channel 32-Gbaud DP-64QAM transmission system. Results confirm that EEPN significantly degrades the performance of all sub-channels of the superchannel system and that the distortions are more severe for the outer sub-channels, both using EDC and MC-DBP. It is also found that the origin of EEPN depends on the relative position between the carrier phase recovery module and the EDC (or MC-DBP) module. Considering EEPN, diverse coding techniques and modulation formats have to be applied for optimizing different sub-channels in superchannel systems. PMID:26365422

  5. Software dependability in the Tandem GUARDIAN system

    NASA Technical Reports Server (NTRS)

    Lee, Inhwan; Iyer, Ravishankar K.

    1995-01-01

    Based on extensive field failure data for Tandem's GUARDIAN operating system this paper discusses evaluation of the dependability of operational software. Software faults considered are major defects that result in processor failures and invoke backup processes to take over. The paper categorizes the underlying causes of software failures and evaluates the effectiveness of the process pair technique in tolerating software faults. A model to describe the impact of software faults on the reliability of an overall system is proposed. The model is used to evaluate the significance of key factors that determine software dependability and to identify areas for improvement. An analysis of the data shows that about 77% of processor failures that are initially considered due to software are confirmed as software problems. The analysis shows that the use of process pairs to provide checkpointing and restart (originally intended for tolerating hardware faults) allows the system to tolerate about 75% of reported software faults that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events) being different from the original execution, is a major reason for the measured software fault tolerance. Over two-thirds (72%) of measured software failures are recurrences of previously reported faults. Modeling, based on the data, shows that, in addition to reducing the number of software faults, software dependability can be enhanced by reducing the recurrence rate.
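
    A minimal sketch of the process-pair pattern analyzed above may make the recovery path concrete: a primary process periodically checkpoints its state to a backup, and the backup resumes from the last checkpoint when the primary halts. The class names, checkpoint interval, and in-memory checkpoint store below are illustrative assumptions, not Tandem's GUARDIAN interfaces.

      # Illustrative process-pair sketch (hypothetical names, not GUARDIAN code):
      # the primary checkpoints its state to the backup, which takes over from
      # the last checkpoint when the primary fails.

      class Backup:
          def __init__(self):
              self.checkpoint = None            # last state received from the primary

          def receive_checkpoint(self, state):
              self.checkpoint = dict(state)     # copy so later primary updates don't leak in

          def take_over(self):
              # Resume from the last checkpoint; execution after this point can differ
              # from the primary's original execution, one reason the paper finds that
              # process pairs also mask many reported software faults.
              print(f"backup resuming from {self.checkpoint}")
              return self.checkpoint


      class Primary:
          def __init__(self, backup):
              self.backup = backup
              self.state = {"processed": 0}

          def run_step(self, item):
              self.state["processed"] += 1
              if self.state["processed"] % 10 == 0:          # checkpoint every 10 items
                  self.backup.receive_checkpoint(self.state)


      if __name__ == "__main__":
          backup = Backup()
          primary = Primary(backup)
          try:
              for i in range(25):
                  primary.run_step(i)
                  if i == 22:
                      raise RuntimeError("simulated halt of the primary processor")
          except RuntimeError:
              backup.take_over()                 # continues from the state saved at item 20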

  6. Electronic structure of monodentate-coordinated diphosphine complexes. Photoelectron spectra of Mo(CO)[sub 5](P(CH[sub 3])[sub 2]CH[sub 2]P(CH[sub 3])[sub 2]) and Mo(CO)[sub 5](P(CH[sub 3])[sub 2]CH[sub 2]CH[sub 2]P(CH[sub 3])[sub 2])

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lichtenberger, D.L.; Jatcko, M.E.

    1992-02-05

    Photoelectron spectroscopy is used to study the electronic structure of molybdenum carbonyl complexes that contain diphosphine ligands bound to the metal through only one of the two phosphorus atoms. Photoelectron spectra are reported for Mo(CO)[sub 5]DMPE and Mo(CO)[sub 5]DMPM and compared to the spectra of Mo(CO)[sub 5]PMe[sub 3] and the corresponding free phosphine and diphosphine ligands (PMe[sub 3] is trimethylphosphine, DMPE is 1,2-bis(dimethylphosphino)ethane, and DMPM is bis(dimethylphosphino)methane). The energy splittings between the d[sup 6] metal-based ionizations of these complexes indicate that the [pi]-back-bonding ability is the same for each of these phosphine ligands and is relatively small, about 25% that of carbon monoxide. The metal-based ionizations shift only slightly to lower binding energy from the PMe[sub 3] to the DMPE to the DMPM complex due to a slightly increasing negative charge potential at the metal along this series. This would normally be interpreted as slightly increasing [sigma]-donor strength in the order PMe[sub 3] < DMPE < DMPM. However, the difference between the ionization energy of the coordinated lone pair (CLP) of the phosphine and the ionization energy of the lone pair of the free ligand indicates an opposite trend in [sigma]-donor strength with PMe[sub 3] (1.28 eV) > DMPE (1.27 eV) > DMPM (1.23 eV). The shift of the uncoordinated phosphine lone-pair ionization (ULP) of the monocoordinated diphosphine complexes, which is affected primarily by charge potential effects, reveals that the important factor is a transfer of negative charge from the uncoordinated end of the phosphine through the alkyl linkage to the coordinated phosphine. Aside from these subtle details of charge distribution, the primary conclusion is that the diphosphine ligands, DMPE and DMPM, have [sigma]-donor and [pi]-acceptor strengths extremely similar to those of PMe[sub 3].

  7. Software Safety Risk in Legacy Safety-Critical Computer Systems

    NASA Technical Reports Server (NTRS)

    Hill, Janice; Baggs, Rhoda

    2007-01-01

    Safety-critical computer systems must be engineered to meet system and software safety requirements. For legacy safety-critical computer systems, software safety requirements may not have been formally specified during development. When process-oriented software safety requirements are levied on a legacy system after the fact, where software development artifacts don't exist or are incomplete, the question becomes 'how can this be done?' The risks associated with only meeting certain software safety requirements in a legacy safety-critical computer system must be addressed should such systems be selected as candidates for reuse. This paper proposes a method for formally ascertaining a software safety risk assessment that provides software safety measurements for legacy systems, which may or may not have the suite of software engineering documentation that is now normally required. It relies upon the NASA Software Safety Standard, risk assessment methods based upon the Taxonomy-Based Questionnaire, and the application of reverse engineering CASE tools to produce original design documents for legacy systems.

  8. Providing scalable system software for high-end simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenberg, D.

    1997-12-31

    Detailed, full-system, complex physics simulations have been shown to be feasible on systems containing thousands of processors. In order to manage these computer systems, it has been necessary to create scalable system services. In this talk Sandia's research on scalable systems will be described. The key concepts of low overhead data movement through portals and of flexible services through multi-partition architectures will be illustrated in detail. The talk will conclude with a discussion of how these techniques can be applied outside of the standard monolithic MPP system.

  9. Knowledge-based reusable software synthesis system

    NASA Technical Reports Server (NTRS)

    Donaldson, Cammie

    1989-01-01

    The Eli system, a knowledge-based reusable software synthesis system, is being developed for NASA Langley under a Phase 2 SBIR contract. Named after Eli Whitney, the inventor of interchangeable parts, Eli assists engineers of large-scale software systems in reusing components while they are composing their software specifications or designs. Eli will identify reuse potential, search for components, select component variants, and synthesize components into the developer's specifications. The Eli project began as a Phase 1 SBIR to define a reusable software synthesis methodology that integrates reusability into the top-down development process and to develop an approach for an expert system to promote and accomplish reuse. The objectives of the Eli Phase 2 work are to integrate advanced technologies to automate the development of reusable components within the context of large system developments, to integrate with user development methodologies without significant changes in method or learning of special languages, and to make reuse the easiest operation to perform. Eli will try to address a number of reuse problems including developing software with reusable components, managing reusable components, identifying reusable components, and transitioning reuse technology. Eli is both a library facility for classifying, storing, and retrieving reusable components and a design environment that emphasizes, encourages, and supports reuse.

  10. Observation-Driven Configuration of Complex Software Systems

    NASA Astrophysics Data System (ADS)

    Sage, Aled

    2010-06-01

    The ever-increasing complexity of software systems makes them hard to comprehend, predict and tune due to emergent properties and non-deterministic behaviour. Complexity arises from the size of software systems and the wide variety of possible operating environments: the increasing choice of platforms and communication policies leads to ever more complex performance characteristics. In addition, software systems exhibit different behaviour under different workloads. Many software systems are designed to be configurable so that policies can be chosen to meet the needs of various stakeholders. For complex software systems it can be difficult to accurately predict the effects of a change and to know which configuration is most appropriate. This thesis demonstrates that it is useful to run automated experiments that measure a selection of system configurations. Experiments can find configurations that meet the stakeholders' needs, find interesting behavioural characteristics, and help produce predictive models of the system's behaviour. The design and use of ACT (Automated Configuration Tool) for running such experiments is described, in combination with a number of search strategies for deciding on the configurations to measure. Design Of Experiments (DOE) is discussed, with emphasis on Taguchi Methods. These statistical methods have been used extensively in manufacturing, but have not previously been used for configuring software systems. The novel contribution here is an industrial case study, applying the combination of ACT and Taguchi Methods to DC-Directory, a product from Data Connection Ltd (DCL). The case study investigated the applicability of Taguchi Methods for configuring complex software systems. Taguchi Methods were found to be useful for modelling and configuring DC-Directory, making them a valuable addition to the techniques available to system administrators and developers.
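
    The Taguchi-style idea behind such experiments can be sketched in a few lines: measure a small orthogonal-array subset of configurations instead of the full factorial set, then keep the best-scoring configuration. The factors, levels, and the measure_throughput() stub below are invented for illustration; they do not reproduce ACT's interface or the DC-Directory study.

      # Sketch of orthogonal-array (Taguchi-style) configuration screening.
      # Factors, levels, and the measurement stub are hypothetical; a real
      # study would run the system under test and record observed performance.

      FACTORS = {                      # two levels per factor
          "cache_size_mb": [64, 256],
          "worker_threads": [4, 16],
          "sync_writes": [False, True],
      }

      # L4 orthogonal array for three two-level factors: 4 runs instead of 8,
      # with each pair of levels for any two factors appearing equally often.
      L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

      def measure_throughput(config):
          # Placeholder for "run the system with this configuration and measure it".
          return sum(hash(str(v)) % 100 for v in config.values())

      def run_l4_experiment():
          names = list(FACTORS)
          results = []
          for row in L4:
              config = {name: FACTORS[name][level] for name, level in zip(names, row)}
              results.append((measure_throughput(config), config))
          return max(results, key=lambda r: r[0])    # best-scoring measured configuration

      if __name__ == "__main__":
          score, best = run_l4_experiment()
          print("best of the 4 measured configurations:", best, "score:", score)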

  11. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  12. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  13. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  14. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  15. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  16. The Implementation of Satellite Control System Software Using Object Oriented Design

    NASA Technical Reports Server (NTRS)

    Anderson, Mark O.; Reid, Mark; Drury, Derek; Hansell, William; Phillips, Tom

    1998-01-01

    NASA established the Small Explorer (SMEX) program in 1988 to provide frequent opportunities for highly focused and relatively inexpensive space science missions that can be launched into low earth orbit by small expendable vehicles. The development schedule for each SMEX spacecraft was three years from start to launch. The SMEX program has produced five satellites; Solar Anomalous and Magnetospheric Particle Explorer (SAMPEX), Fast Auroral Snapshot Explorer (FAST), Submillimeter Wave Astronomy Satellite (SWAS), Transition Region and Coronal Explorer (TRACE) and Wide-Field Infrared Explorer (WIRE). SAMPEX and FAST are on-orbit, TRACE is scheduled to be launched in April of 1998, WIRE is scheduled to be launched in September of 1998, and SWAS is scheduled to be launched in January of 1999. In each of these missions, the Attitude Control System (ACS) software was written using a modular procedural design. Current program goals require complete spacecraft development within 18 months. This requirement has increased pressure to write reusable flight software. Object-Oriented Design (OOD) offers the constructs for developing an application that only needs modification for mission unique requirements. This paper describes the OOD that was used to develop the SMEX-Lite ACS software. The SMEX-Lite ACS is three-axis controlled, momentum stabilized, and is capable of performing sub-arc-minute pointing. The paper first describes the high level requirements which governed the architecture of the SMEX-Lite ACS software. Next, the context in which the software resides is explained. The paper describes the benefits of encapsulation, inheritance and polymorphism with respect to the implementation of an ACS software system. This paper will discuss the design of several software components that comprise the ACS software. Specifically, Object-Oriented designs are presented for sensor data processing, attitude control, attitude determination and failure detection. The paper addresses
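
    The object-oriented benefits the paper cites can be illustrated with a small, hypothetical sketch (not the SMEX-Lite flight code): a common sensor interface, concrete sensors that inherit from it, and a control loop that uses them polymorphically, so mission-unique sensors can be swapped in without changing the loop.

      # Illustrative sketch of encapsulation, inheritance, and polymorphism for
      # sensor data processing in an ACS-style loop. All classes and values are
      # invented for illustration.
      from abc import ABC, abstractmethod

      class Sensor(ABC):
          """Encapsulates raw hardware access behind a single processed-output call."""

          @abstractmethod
          def read_processed(self) -> dict:
              ...

      class SunSensor(Sensor):
          def read_processed(self) -> dict:
              # Hypothetical conversion from raw counts to a unit sun vector.
              return {"sun_vector": (1.0, 0.0, 0.0)}

      class Magnetometer(Sensor):
          def read_processed(self) -> dict:
              # Hypothetical conversion from raw counts to a field vector in nT.
              return {"b_field_nt": (21000.0, -3000.0, 44000.0)}

      class AttitudeController:
          def __init__(self, sensors):
              self.sensors = sensors           # any mix of Sensor subclasses

          def control_cycle(self):
              measurements = {}
              for sensor in self.sensors:      # polymorphic call, no type checks needed
                  measurements.update(sensor.read_processed())
              return measurements              # a real controller would compute torques here

      if __name__ == "__main__":
          acs = AttitudeController([SunSensor(), Magnetometer()])
          print(acs.control_cycle())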

  17. Software package for performing experiments about the convolutionally encoded Voyager 1 link

    NASA Technical Reports Server (NTRS)

    Cheng, U.

    1989-01-01

    A software package enabling engineers to conduct experiments to determine the actual performance of long constraint-length convolutional codes over the Voyager 1 communication link directly from the Jet Propulsion Laboratory (JPL) has been developed. Using this software, engineers are able to enter test data from the Laboratory in Pasadena, California. The software encodes the data and then sends the encoded data to a personal computer (PC) at the Goldstone Deep Space Complex (GDSC) over telephone lines. The encoded data are sent to the transmitter by the PC at GDSC. The received data, after being echoed back by Voyager 1, are first sent to the PC at GDSC, and then are sent back to the PC at the Laboratory over telephone lines for decoding and further analysis. All of these operations are fully integrated and are completely automatic. Engineers can control the entire software system from the Laboratory. The software encoder and the hardware decoder interface were developed for other applications, and have been modified appropriately for integration into the system so that their existence is transparent to the users. This software provides: (1) data entry facilities, (2) communication protocol for telephone links, (3) data displaying facilities, (4) integration with the software encoder and the hardware decoder, and (5) control functions.
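
    For readers unfamiliar with the encoding step this package automates, the sketch below shows how a software convolutional encoder works. It uses the widely published rate-1/2, constraint-length-7 code (generator polynomials 171/133 octal) as a stand-in; the Voyager experiments concerned longer constraint lengths, and the JPL encoder itself is not reproduced here.

      # Illustrative rate-1/2 convolutional encoder (generators 0o171/0o133,
      # constraint length 7). A sketch of the technique only, not the JPL code.

      G1, G2 = 0o171, 0o133      # generator polynomials
      K = 7                      # constraint length

      def parity(x: int) -> int:
          return bin(x).count("1") & 1

      def conv_encode(bits):
          """Encode a bit sequence; returns two coded bits per input bit."""
          state = 0                              # K-1 = 6 previous input bits
          out = []
          for b in bits:
              reg = (b << (K - 1)) | state       # current bit plus shift register
              out.append(parity(reg & G1))       # tap pattern for the first output
              out.append(parity(reg & G2))       # tap pattern for the second output
              state = reg >> 1                   # shift the register
          return out

      if __name__ == "__main__":
          data = [1, 0, 1, 1, 0, 0, 1]
          print(conv_encode(data))               # 14 coded bits for 7 data bits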

  18. Sighten Final Technical Report DEEE0006690 Deploying an integrated and comprehensive solar financing software platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Leary, Conlan

    Over the project, Sighten built a comprehensive software-as-a-service (SaaS) platform to automate and streamline the residential solar financing workflow. Before the project period, significant time and money were spent by companies on front-end tools related to system design and proposal creation, but comparatively few resources were available to support the many back-end calculations and data management processes that underpin third party financing. Without a tool like Sighten, the solar financing processes involved passing information from the homeowner prospect into separate tools for system design, financing, and then later to reporting tools including Microsoft Excel, CRM software, in-house software, outside software, and offline, manual processes. Passing data between tools and attempting to connect disparate systems results in inefficiency and inaccuracy for the industry. Sighten was built to consolidate all financial and solar-related calculations in a single software platform. It significantly improves upon the accuracy of these calculations and exposes sophisticated new analysis tools, resulting in a rigorous, efficient and cost-effective toolset for scaling residential solar. Widely deploying a platform like Sighten’s significantly and immediately impacts the residential solar space in several important ways: 1) standardizing and improving the quality of all quantitative calculations involved in the residential financing process, most notably project finance, system production and reporting calculations; 2) representing a true step change in terms of reporting and analysis capabilities by maintaining more accurate data and exposing sophisticated tools around simulation, tranching, and financial reporting, among others, to all stakeholders in the space; 3) allowing a broader group of developers/installers/finance companies to access the capital markets by providing an out-of-the-box toolset that handles the execution of running investor capital

  19. Reengineering legacy software to object-oriented systems

    NASA Technical Reports Server (NTRS)

    Pitman, C.; Braley, D.; Fridge, E.; Plumb, A.; Izygon, M.; Mears, B.

    1994-01-01

    NASA has a legacy of complex software systems that are becoming increasingly expensive to maintain. Reengineering is one approach to modernizing these systems. Object-oriented technology, other modern software engineering principles, and automated tools can be used to reengineer the systems and will help to keep maintenance costs of the modernized systems down. The Software Technology Branch at the NASA/Johnson Space Center has been developing and testing reengineering methods and tools for several years. The Software Technology Branch is currently providing training and consulting support to several large reengineering projects at JSC, including the Reusable Objects Software Environment (ROSE) project, which is reengineering the flight analysis and design system (over 2 million lines of FORTRAN code) into object-oriented C++. Many important lessons have been learned during the past years; one of these is that the design must never be allowed to diverge from the code during maintenance and enhancement. Future work on open, integrated environments to support reengineering is being actively planned.

  20. Study of fault tolerant software technology for dynamic systems

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Zacharias, G. L.

    1985-01-01

    The major aim of this study is to investigate the feasibility of using systems-based failure detection isolation and compensation (FDIC) techniques in building fault-tolerant software and extending them, whenever possible, to the domain of software fault tolerance. First, it is shown that systems-based FDIC methods can be extended to develop software error detection techniques by using system models for software modules. In particular, it is demonstrated that systems-based FDIC techniques can yield consistency checks that are easier to implement than acceptance tests based on software specifications. Next, it is shown that systems-based failure compensation techniques can be generalized to the domain of software fault tolerance in developing software error recovery procedures. Finally, the feasibility of using fault-tolerant software in flight software is investigated. In particular, possible system and version instabilities, and functional performance degradation that may occur in N-Version programming applications to flight software are illustrated. Finally, a comparative analysis of N-Version and recovery block techniques in the context of generic blocks in flight software is presented.
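
    The recovery block technique compared in the study can be summarized in a short sketch: run a primary routine, apply an acceptance test (here a simple consistency check of the kind the study derives from system models), and invoke an alternate routine if the test fails. The routines and the injected fault below are hypothetical.

      # Recovery-block sketch: primary routine, acceptance test, alternate routine.
      # The square-root routines and the deliberate fault are illustrative only.
      import math

      def acceptance_test(x, result):
          # Consistency check: the result squared should reproduce the input.
          return result >= 0 and math.isclose(result * result, x, rel_tol=1e-9)

      def primary_sqrt(x):
          return x ** 0.5 if x < 1e6 else -1.0   # deliberately faulty for large inputs

      def alternate_sqrt(x):
          return math.sqrt(x)                    # independently written alternate

      def recovery_block(x):
          result = primary_sqrt(x)
          if acceptance_test(x, result):
              return result
          # Primary failed its acceptance test: fall back to the alternate.
          result = alternate_sqrt(x)
          if acceptance_test(x, result):
              return result
          raise RuntimeError("all alternates failed the acceptance test")

      if __name__ == "__main__":
          print(recovery_block(2.0))        # primary succeeds
          print(recovery_block(4.0e6))      # primary fails, alternate takes over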

  1. Java RMI Software Technology for the Payload Planning System of the International Space Station

    NASA Technical Reports Server (NTRS)

    Bryant, Barrett R.

    1999-01-01

    The Payload Planning System is for experiment planning on the International Space Station. The planning process has a number of different aspects which need to be stored in a database which is then used to generate reports on the planning process in a variety of formats. This process is currently structured as a 3-tier client/server software architecture comprised of a Java applet at the front end, a Java server in the middle, and an Oracle database in the third tier. This system presently uses CGI, the Common Gateway Interface, to communicate between the user-interface and server tiers and Active Data Objects (ADO) to communicate between the server and database tiers. This project investigated other methods and tools for performing the communications between the three tiers of the current system so that both the system performance and software development time could be improved. We specifically found that for the hardware and software platforms that PPS is required to run on, the best solution is to use Java Remote Method Invocation (RMI) for communication between the client and server and SQLJ (Structured Query Language for Java) for server interaction with the database. Prototype implementations showed that RMI combined with SQLJ significantly improved performance and also greatly facilitated construction of the communication software.

  2. Resilience Engineering in Critical Long Term Aerospace Software Systems: A New Approach to Spacecraft Software Safety

    NASA Astrophysics Data System (ADS)

    Dulo, D. A.

    Safety critical software systems permeate spacecraft, and in a long term venture like a starship would be pervasive in every system of the spacecraft. Yet software failure today continues to plague both the systems and the organizations that develop them resulting in the loss of life, time, money, and valuable system platforms. A starship cannot afford this type of software failure in long journeys away from home. A single software failure could have catastrophic results for the spaceship and the crew onboard. This paper will offer a new approach to developing safe reliable software systems through focusing not on the traditional safety/reliability engineering paradigms but rather by focusing on a new paradigm: Resilience and Failure Obviation Engineering. The foremost objective of this approach is the obviation of failure, coupled with the ability of a software system to prevent or adapt to complex changing conditions in real time as a safety valve should failure occur to ensure safe system continuity. Through this approach, safety is ensured through foresight to anticipate failure and to adapt to risk in real time before failure occurs. In a starship, this type of software engineering is vital. Through software developed in a resilient manner, a starship would have reduced or eliminated software failure, and would have the ability to rapidly adapt should a software system become unstable or unsafe. As a result, long term software safety, reliability, and resilience would be present for a successful long term starship mission.

  3. ON THE REACTION OF COMPONENTS IN MeNO$sub 3$-UO$sub 2$(NO$sub 3$)$sub 2$-H$sub 2$O TYPE SYSTEMS (in Russian)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yakimov, M.A.; Nosova, N.F.; Degtyarev, A.Ya.

    1963-01-01

    Solubility in the ternary systems TlNO/sub 3/--UO/sub 2/(NO/sub 3/)/sub 2/--H/sub 2/O and CsNO/sub 3/--UO/sub 2/(NO/sub 3/)/sub 2/--H/sub 2/O at 0 to 25°C was studied by the isothermal method. The first system did not form solid-phase compounds; the second system formed two compounds, Cs/sub 2/UO/sub 2/(NO/sub 3/)/sub 4/ and CsUO/sub 2/(NO/sub 3/)/sub 3/, at 25°C. Measurements of water vapor pressure over the systems at 25°C showed that the water activity in the ternary systems at certain concentrations does not exceed the water activity in the binary uranyl nitrate-water system (at identical uranyl nitrate concentrations), confirming the observed complex formation in the solution. The mechanism of complex formation was analyzed and extended to alkali metal salt-complexing agent-water systems. (R.V.J.)

  4. Fault-tolerant software - Experiment with the sift operating system. [Software Implemented Fault Tolerance computer

    NASA Technical Reports Server (NTRS)

    Brunelle, J. E.; Eckhardt, D. E., Jr.

    1985-01-01

    Results are presented of an experiment conducted in the NASA Avionics Integrated Research Laboratory (AIRLAB) to investigate the implementation of fault-tolerant software techniques on fault-tolerant computer architectures, in particular the Software Implemented Fault Tolerance (SIFT) computer. The N-version programming and recovery block techniques were implemented on a portion of the SIFT operating system. The results indicate that, to effectively implement fault-tolerant software design techniques, system requirements will be impacted and suggest that retrofitting fault-tolerant software on existing designs will be inefficient and may require system modification.
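
    As a companion to the experiment above, the sketch below shows the bare mechanics of N-version programming: independently developed versions compute the same quantity and a majority voter masks a disagreeing version. The three toy versions are illustrative only and bear no relation to the SIFT operating system code.

      # N-version voting sketch: three toy "versions" of the same computation
      # and a majority voter. Version C is deliberately faulty for illustration.
      from collections import Counter

      def version_a(sensor_counts):
          return round(sum(sensor_counts) / len(sensor_counts), 3)

      def version_b(sensor_counts):
          return round(sum(c / len(sensor_counts) for c in sensor_counts), 3)

      def version_c(sensor_counts):
          return round(max(sensor_counts), 3)    # faulty version: wrong algorithm

      def majority_vote(results, min_agreement=2):
          value, votes = Counter(results).most_common(1)[0]
          if votes < min_agreement:
              raise RuntimeError("no majority among versions")
          return value

      if __name__ == "__main__":
          counts = [10.0, 12.0, 11.0]
          results = [v(counts) for v in (version_a, version_b, version_c)]
          print(results, "->", majority_vote(results))   # the two correct versions outvote the faulty one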

  5. PICASSO: an end-to-end image simulation tool for space and airborne imaging systems

    NASA Astrophysics Data System (ADS)

    Cota, Steve A.; Bell, Jabin T.; Boucher, Richard H.; Dutton, Tracy E.; Florio, Chris J.; Franz, Geoffrey A.; Grycewicz, Thomas J.; Kalman, Linda S.; Keller, Robert A.; Lomheim, Terrence S.; Paulson, Diane B.; Willkinson, Timothy S.

    2008-08-01

    The design of any modern imaging system is the end result of many trade studies, each seeking to optimize image quality within real world constraints such as cost, schedule and overall risk. Image chain analysis - the prediction of image quality from fundamental design parameters - is an important part of this design process. At The Aerospace Corporation we have been using a variety of image chain analysis tools for many years, the Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) among them. In this paper we describe our PICASSO tool, showing how, starting with a high quality input image and hypothetical design descriptions representative of the current state of the art in commercial imaging satellites, PICASSO can generate standard metrics of image quality in support of the decision processes of designers and program managers alike.

  6. PICASSO: an end-to-end image simulation tool for space and airborne imaging systems

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Bell, Jabin T.; Boucher, Richard H.; Dutton, Tracy E.; Florio, Christopher J.; Franz, Geoffrey A.; Grycewicz, Thomas J.; Kalman, Linda S.; Keller, Robert A.; Lomheim, Terrence S.; Paulson, Diane B.; Wilkinson, Timothy S.

    2010-06-01

    The design of any modern imaging system is the end result of many trade studies, each seeking to optimize image quality within real world constraints such as cost, schedule and overall risk. Image chain analysis - the prediction of image quality from fundamental design parameters - is an important part of this design process. At The Aerospace Corporation we have been using a variety of image chain analysis tools for many years, the Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) among them. In this paper we describe our PICASSO tool, showing how, starting with a high quality input image and hypothetical design descriptions representative of the current state of the art in commercial imaging satellites, PICASSO can generate standard metrics of image quality in support of the decision processes of designers and program managers alike.

  7. The TAME Project: Towards improvement-oriented software environments

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Rombach, H. Dieter

    1988-01-01

    Experience from a dozen years of analyzing software engineering processes and products is summarized as a set of software engineering and measurement principles that argue for software engineering process models that integrate sound planning and analysis into the construction process. In the TAME (Tailoring A Measurement Environment) project at the University of Maryland, such an improvement-oriented software engineering process model was developed that uses the goal/question/metric paradigm to integrate the constructive and analytic aspects of software development. The model provides a mechanism for formalizing the characterization and planning tasks, controlling and improving projects based on quantitative analysis, learning in a deeper and more systematic way about the software process and product, and feeding the appropriate experience back into the current and future projects. The TAME system is an instantiation of the TAME software engineering process model as an ISEE (integrated software engineering environment). The first in a series of TAME system prototypes has been developed. An assessment of experience with this first limited prototype is presented including a reassessment of its initial architecture.

  8. Electrophysiological assessment of piano players' back extensor muscles on a regular piano bench and chair with back rest.

    PubMed

    Honarmand, Kavan; Minaskanian, Rafael; Maboudi, Seyed Ebrahim; Oskouei, Ali E

    2018-01-01

    [Purpose] Sitting position is the dominant position for a professional pianist. There are many static and dynamic forces which affect musculoskeletal system during sitting. In prolonged sitting, these forces are harmful. The aim of this study was to compare pianists' back extensor muscles activity during playing piano while sitting on a regular piano bench and a chair with back rest. [Subjects and Methods] Ten professional piano players (mean age 25.4 ± 5.28, 60% male, 40% female) performed similar tasks for 5 hours in two sessions: one session sitting on a regular piano bench and the other sitting on a chair with back rest. In each session, muscular activity was assessed in 3 ways: 1) recording surface electromyography of the back-extensor muscles at the beginning and end of each session, 2) isometric back extension test, and 3) musculoskeletal discomfort questionnaire. [Results] There were significantly lesser muscular activity, more ability to perform isometric back extension and better personal comfort while sitting on a chair with back rest. [Conclusion] Decreased muscular activity and perhaps fatigue during prolonged piano playing on a chair with back rest may reduce acquired musculoskeletal disorders amongst professional pianists.

  9. Electrophysiological assessment of piano players’ back extensor muscles on a regular piano bench and chair with back rest

    PubMed Central

    Honarmand, Kavan; Minaskanian, Rafael; Maboudi, Seyed Ebrahim; Oskouei, Ali E.

    2018-01-01

    [Purpose] Sitting position is the dominant position for a professional pianist. There are many static and dynamic forces which affect musculoskeletal system during sitting. In prolonged sitting, these forces are harmful. The aim of this study was to compare pianists’ back extensor muscles activity during playing piano while sitting on a regular piano bench and a chair with back rest. [Subjects and Methods] Ten professional piano players (mean age 25.4 ± 5.28, 60% male, 40% female) performed similar tasks for 5 hours in two sessions: one session sitting on a regular piano bench and the other sitting on a chair with back rest. In each session, muscular activity was assessed in 3 ways: 1) recording surface electromyography of the back-extensor muscles at the beginning and end of each session, 2) isometric back extension test, and 3) musculoskeletal discomfort questionnaire. [Results] There were significantly lesser muscular activity, more ability to perform isometric back extension and better personal comfort while sitting on a chair with back rest. [Conclusion] Decreased muscular activity and perhaps fatigue during prolonged piano playing on a chair with back rest may reduce acquired musculoskeletal disorders amongst professional pianists. PMID:29410569

  10. A measurement system for large, complex software programs

    NASA Technical Reports Server (NTRS)

    Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.

    1994-01-01

    This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.

  11. Fluorinated tin oxide back contact for AZTSSe photovoltaic devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gershon, Talia S.; Gunawan, Oki; Haight, Richard A.

    A photovoltaic device includes a substrate, a back contact comprising a stable low-work function material, a photovoltaic absorber material layer comprising Ag.sub.2ZnSn(S,Se).sub.4 (AZTSSe) on a side of the back contact opposite the substrate, wherein the back contact forms an Ohmic contact with the photovoltaic absorber material layer, a buffer layer or Schottky contact layer on a side of the absorber layer opposite the back contact, and a top electrode on a side of the buffer layer opposite the absorber layer.

  12. Examination of the relationship between theory-driven policies and allowed lost-time back claims in workers' compensation: a system dynamics model.

    PubMed

    Wong, Jessica J; McGregor, Marion; Mior, Silvano A; Loisel, Patrick

    2014-01-01

    The purpose of this study was to develop a model that evaluates the impact of policy changes on the number of workers' compensation lost-time back claims in Ontario, Canada, over a 30-year timeframe. The model was used to test the hypothesis that a theory- and policy-driven model would be sufficient in reproducing historical claims data in a robust manner and that policy changes would have a major impact on modeled data. The model was developed using system dynamics methods in the Vensim simulation program. The theoretical effects of policies for compensation benefit levels and experience rating fees were modeled. The model was built and validated using historical claims data from 1980 to 2009. Sensitivity analysis was used to evaluate the modeled data at extreme end points of variable input and timeframes. The degree of predictive value of the modeled data was measured by the coefficient of determination, root mean square error, and Theil's inequality coefficients. Correlation between modeled data and actual data was found to be meaningful (R(2) = 0.934), and the modeled data were stable at extreme end points. Among the effects explored, policy changes were found to be relatively minor drivers of back claims data, accounting for a 13% improvement in error. Simulation results suggested that unemployment, number of no-lost-time claims, number of injuries per worker, and recovery rate from back injuries outside of claims management to be sensitive drivers of back claims data. A robust systems-based model was developed and tested for use in future policy research in Ontario's workers' compensation. The study findings suggest that certain areas within and outside the workers' compensation system need to be considered when evaluating and changing policies around back claims. © 2014. Published by National University of Health Sciences All rights reserved.
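
    The stock-and-flow structure of such a model can be illustrated outside Vensim with a few lines of code. The sketch below integrates a single "open lost-time back claims" stock with an inflow scaled by a policy multiplier; all rates and the policy effect are invented for illustration and are not the study's calibrated values.

      # Minimal stock-and-flow sketch in the spirit of the system dynamics model
      # described above. Every number here is a hypothetical placeholder.

      def simulate_claims(years=30, dt=0.25,
                          workforce=5_000_000,
                          injury_rate=0.004,          # new lost-time back claims per worker per year
                          recovery_rate=0.8,          # fraction of open claims closed per year
                          benefit_policy_multiplier=1.0):
          open_claims = 20_000.0                      # initial stock
          history = []
          for _ in range(int(years / dt)):
              inflow = workforce * injury_rate * benefit_policy_multiplier
              outflow = open_claims * recovery_rate
              open_claims += (inflow - outflow) * dt  # Euler integration of the stock
              history.append(open_claims)
          return history

      if __name__ == "__main__":
          baseline = simulate_claims()
          tightened = simulate_claims(benefit_policy_multiplier=0.9)   # hypothetical policy change
          print(f"final open claims: baseline {baseline[-1]:.0f}, with policy change {tightened[-1]:.0f}")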

  13. Advanced software development workstation project ACCESS user's guide

    NASA Technical Reports Server (NTRS)

    1990-01-01

    ACCESS is a knowledge based software information system designed to assist the user in modifying retrieved software to satisfy user specifications. A user's guide is presented for the knowledge engineer who wishes to create for ACCESS a knowledge base consisting of representations of objects in some software system. This knowledge is accessible to an end user who wishes to use the catalogued software objects to create a new application program or an input stream for an existing system. The application specific portion of an ACCESS knowledge base consists of a taxonomy of object classes, as well as instances of these classes. All objects in the knowledge base are stored in an associative memory. ACCESS provides a standard interface for the end user to browse and modify objects. In addition, the interface can be customized by the addition of application specific data entry forms and by specification of display order for the taxonomy and object attributes. These customization options are described.
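
    A toy sketch of the kind of knowledge base described above may help: a small taxonomy of object classes, instances held in an attribute-keyed (associative) store, and a retrieval call that filters on class and attribute values. The class names and attributes are invented; ACCESS's actual representation is not reproduced here.

      # Toy taxonomy plus associative retrieval. All names are hypothetical.

      TAXONOMY = {
          "software_object": ["filter_module", "report_generator"],   # class -> subclasses
      }

      INSTANCES = [
          {"class": "filter_module", "name": "lowpass_v2", "language": "Ada"},
          {"class": "filter_module", "name": "kalman_v1", "language": "Fortran"},
          {"class": "report_generator", "name": "daily_summary", "language": "Ada"},
      ]

      def retrieve(object_class=None, **attributes):
          """Associative lookup: match on class and/or any attribute values."""
          hits = []
          for inst in INSTANCES:
              if object_class and inst["class"] != object_class:
                  continue
              if all(inst.get(k) == v for k, v in attributes.items()):
                  hits.append(inst)
          return hits

      if __name__ == "__main__":
          print(retrieve(object_class="filter_module", language="Ada"))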

  14. Modeling Physical Systems Using Vensim PLE Systems Dynamics Software

    NASA Astrophysics Data System (ADS)

    Widmark, Stephen

    2012-02-01

    Many physical systems are described by time-dependent differential equations or systems of such equations. This makes it difficult for students in an introductory physics class to solve many real-world problems since these students typically have little or no experience with this kind of mathematics. In my high school physics classes, I address this problem by having my students use a variety of software solutions to model physical systems described by differential equations. These include spreadsheets, applets, software my students themselves create, and systems dynamics software. For the latter, cost is often the main issue in choosing a solution for use in a public school and so I researched no-cost software. I found Sphinx SD,2 OptiSim,3 Systems Dynamics,4 Simile (Trial Edition),5 and Vensim PLE.6 In evaluating each of these solutions, I looked for the fewest restrictions in the license for educational use, ease of use by students, power, and versatility. In my opinion, Vensim PLE best fulfills these criteria.7
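
    The same stock-and-flow view that systems dynamics tools provide can also be written out directly, which is useful for seeing what such software does under the hood. The sketch below steps a damped mass-spring system forward with the simple Euler rule; the parameter values are arbitrary.

      # Euler integration of a damped mass-spring system: the kind of
      # time-dependent differential equation discussed above, with
      # position and velocity treated as "stocks" fed by their rates.

      def simulate_damped_oscillator(m=1.0, k=4.0, c=0.4, x0=1.0, v0=0.0,
                                     dt=0.01, t_end=10.0):
          x, v = x0, v0
          trajectory = [(0.0, x)]
          t = 0.0
          while t < t_end:
              a = (-k * x - c * v) / m      # Newton's second law: ma = -kx - cv
              x += v * dt                   # position stock fed by the velocity flow
              v += a * dt                   # velocity stock fed by the acceleration flow
              t += dt
              trajectory.append((t, x))
          return trajectory

      if __name__ == "__main__":
          for t, x in simulate_damped_oscillator()[::200]:    # print roughly every 2 s
              print(f"t = {t:5.2f} s, x = {x:+.3f} m")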

  15. End-to-End Information System design at the NASA Jet Propulsion Laboratory

    NASA Technical Reports Server (NTRS)

    Hooke, A. J.

    1978-01-01

    Recognizing a pressing need of the 1980s to optimize the two-way flow of information between a ground-based user and a remote space-based sensor, an end-to-end approach to the design of information systems has been adopted at the Jet Propulsion Laboratory. The objectives of this effort are to ensure that all flight projects adequately cope with information flow problems at an early stage of system design, and that cost-effective, multi-mission capabilities are developed when capital investments are made in supporting elements. The paper reviews the End-to-End Information System (EEIS) activity at the Laboratory, and notes the ties to the NASA End-to-End Data System program.

  16. The software system for the Control and Data Acquisition for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Wegner, P.; FüBling, M.; Oya, I.; Hagge, L.; Schwanke, U.; Schwarz, J.; Tosti, G.; Conforti, V.; Lyard, E.; Walter, R.; Oliveira Antonino, P.; Morgenstern, A.

    2016-10-01

    The Cherenkov Telescope Array (CTA), as the next generation ground-based very high-energy gamma-ray observatory, is defining new areas beyond those related to physics. It is also creating new demands on the control and data acquisition system. CTA will consist of two installations, one in each hemisphere, containing tens of telescopes of different sizes. The ACTL (array control and data acquisition) system will consist of the hardware and software that is necessary to control and monitor the CTA array, as well as to time-stamp, read-out, filter and store the scientific data at aggregated rates of a few GB/s. The ACTL system must implement a flexible software architecture to permit the simultaneous automatic operation of multiple sub-arrays of telescopes with a minimum personnel effort on site. In addition ACTL must be able to modify the observation schedule on timescales of a few tens of seconds, to account for changing environmental conditions or to prioritize incoming scientific alerts from time-critical transient phenomena such as gamma-ray bursts. This contribution summarizes the status of the development of the software architecture and the main design choices and plans.

  17. Experimental demonstration of record high 19.125 Gb/s real-time end-to-end dual-band optical OFDM transmission over 25 km SMF in a simple EML-based IMDD system.

    PubMed

    Giddings, R P; Hugues-Salas, E; Tang, J M

    2012-08-27

    Record high 19.125 Gb/s real-time end-to-end dual-band optical OFDM (OOFDM) transmission is experimentally demonstrated, for the first time, in a simple electro-absorption modulated laser (EML)-based 25 km standard SMF system using intensity modulation and direct detection (IMDD). Adaptively modulated baseband (0-2GHz) and passband (6.125 ± 2GHz) OFDM RF sub-bands, supporting line rates of 10 Gb/s and 9.125 Gb/s respectively, are independently generated and detected with FPGA-based DSP clocked at only 100 MHz and DACs/ADCs operating at sampling speeds as low as 4GS/s. The two OFDM sub-bands are electrically frequency-division-multiplexed (FDM) for intensity modulation of a single optical carrier by an EML. To maximize and balance the signal transmission performance of each sub-band, on-line adaptive features and on-line performance monitoring is fully exploited to optimize key OOFDM transceiver and system parameters, which includes subcarrier characteristics within each individual OFDM sub-band, total and relative sub-band power as well as EML operating conditions. The achieved 19.125 Gb/s over 25 km SMF OOFDM transmission system has an optical power budget of 13.5 dB, and shows almost identical bit error rate (BER) performances for both the baseband and passband signals. In addition, experimental investigations also indicate that the maximum achievable transmission capacity of the present system is mainly determined by the EML frequency chirp-enhanced chromatic dispersion effect, and the passband BER performance is not affected by the two sub-band-induced intermixing effect, which, however, gives a 1.2dB optical power penalty to the baseband signal transmission.
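
    A much-simplified sketch of the dual-band idea is shown below: two independently generated OFDM sub-bands, one left at baseband and one mixed up to an RF carrier, are summed into a single drive waveform. The subcarrier count, sample rate, and carrier frequency are arbitrary toy values, far from the transceiver parameters reported in the paper.

      # Toy dual-band OFDM illustration, not the real-time FPGA transceiver above.
      import numpy as np

      rng = np.random.default_rng(0)

      def ofdm_symbol(n_subcarriers=16, oversample=4):
          """One OFDM symbol: random QPSK on each subcarrier, then an IFFT."""
          qpsk = (rng.choice([-1, 1], n_subcarriers)
                  + 1j * rng.choice([-1, 1], n_subcarriers)) / np.sqrt(2)
          spectrum = np.zeros(n_subcarriers * oversample, dtype=complex)
          spectrum[1:n_subcarriers + 1] = qpsk          # leave DC empty
          return np.fft.ifft(spectrum)

      fs = 4e9                                          # sample rate, arbitrary for the sketch
      t = np.arange(64) / fs
      baseband = ofdm_symbol()
      passband = ofdm_symbol() * np.exp(2j * np.pi * 1.0e9 * t)   # mix second band to ~1 GHz

      drive = np.real(baseband + passband)              # electrical FDM signal for the modulator
      print(drive[:8])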

  18. Taking advantage of ground data systems attributes to achieve quality results in testing software

    NASA Technical Reports Server (NTRS)

    Sigman, Clayton B.; Koslosky, John T.; Hageman, Barbara H.

    1994-01-01

    During the software development life cycle process, basic testing starts with the development team. At the end of the development process, an acceptance test is performed for the user to ensure that the deliverable is acceptable. Ideally, the delivery is an operational product with zero defects. However, the goal of zero defects is normally not achieved but is successful to various degrees. With the emphasis on building low cost ground support systems while maintaining a quality product, a key element in the test process is simulator capability. This paper reviews the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) test tool that is used in the acceptance test process for unmanned satellite operations control centers. The TASS is designed to support the development, test and operational environments of the Goddard Space Flight Center (GSFC) operations control centers. The TASS uses the same basic architecture as the operations control center. This architecture is characterized by its use of distributed processing, industry standards, commercial off-the-shelf (COTS) hardware and software components, and reusable software. The TASS uses much of the same TPOCC architecture and reusable software that the operations control center developer uses. The TASS also makes use of reusable simulator software in the mission specific versions of the TASS. Very little new software needs to be developed, mainly mission specific telemetry communication and command processing software. By taking advantage of the ground data system attributes, successful software reuse for operational systems provides the opportunity to extend the reuse concept into the test area. Consistency in test approach is a major step in achieving quality results.

  19. Secure Video Surveillance System Acquisition Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-12-04

    The SVSS Acquisition Software collects and displays video images from two cameras through a VPN, and stores the images on a collection controller. The software is configured to allow a user to enter a time window to display up to 2 1/2 hours of video review. The software collects images from the cameras at a rate of 1 image per second and automatically deletes images older than 3 hours. The software code operates in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates the different COTS software components to build the video review system.
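
    The acquisition behaviour described above reduces to a simple loop, sketched below: grab a frame from each camera about once per second, write it with a timestamped name, and purge files older than three hours. The directory name and the grab_frame() stub are hypothetical, not the Sandia implementation.

      # Sketch of a periodic capture-and-retention loop. Camera access is stubbed
      # out; the storage path and helper names are invented for illustration.
      import os
      import time

      STORE_DIR = "svss_images"             # hypothetical local storage directory
      RETENTION_SECONDS = 3 * 60 * 60
      PERIOD_SECONDS = 1.0

      def grab_frame(camera_id: int) -> bytes:
          return b"\xff\xd8...fake jpeg..."  # stand-in for a real camera read over the VPN

      def purge_old_images(directory: str) -> None:
          cutoff = time.time() - RETENTION_SECONDS
          for name in os.listdir(directory):
              path = os.path.join(directory, name)
              if os.path.getmtime(path) < cutoff:
                  os.remove(path)

      def acquisition_loop():
          os.makedirs(STORE_DIR, exist_ok=True)
          while True:
              stamp = int(time.time())
              for camera_id in (1, 2):
                  with open(os.path.join(STORE_DIR, f"cam{camera_id}_{stamp}.jpg"), "wb") as fh:
                      fh.write(grab_frame(camera_id))
              purge_old_images(STORE_DIR)
              time.sleep(PERIOD_SECONDS)

      if __name__ == "__main__":
          acquisition_loop()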

  20. 14 CFR 415.123 - Computing systems and software.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...

  1. 14 CFR 415.123 - Computing systems and software.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...

  2. 14 CFR 415.123 - Computing systems and software.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...

  3. 14 CFR 415.123 - Computing systems and software.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...

  4. 14 CFR 415.123 - Computing systems and software.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...

  5. Software Piracy, Ethics, and the Academician.

    ERIC Educational Resources Information Center

    Bassler, Richard A.

    The numerous software programs available for easy, low-cost copying raise ethical questions. The problem can be examined from the viewpoints of software users, teachers, authors, vendors, and distributors. Software users might hesitate to purchase or use software which prevents the making of back-up copies for program protection. Teachers in…

  6. The software product assurance metrics study: JPL's software systems quality and productivity

    NASA Technical Reports Server (NTRS)

    Bush, Marilyn W.

    1989-01-01

    The findings are reported of the Jet Propulsion Laboratory (JPL)/Software Product Assurance (SPA) Metrics Study, conducted as part of a larger JPL effort to improve software quality and productivity. Until recently, no comprehensive data had been assembled on how JPL manages and develops software-intensive systems. The first objective was to collect data on software development from as many projects and for as many years as possible. Results from five projects are discussed. These results reflect 15 years of JPL software development, representing over 100 data points (systems and subsystems), over a third of a billion dollars, over four million lines of code and 28,000 person months. Analysis of this data provides a benchmark for gauging the effectiveness of past, present and future software development work. In addition, the study is meant to encourage projects to record existing metrics data and to gather future data. The SPA long term goal is to integrate the collection of historical data and ongoing project data with future project estimations.

  7. Low Back Imaging When Not Indicated: A Descriptive Cross-System Analysis.

    PubMed

    Gold, Rachel; Esterberg, Elizabeth; Hollombe, Celine; Arkind, Jill; Vakarcs, Patricia A; Tran, Huong; Burdick, Tim; Devoe, Jennifer E; Horberg, Michael A

    2016-01-01

    Guideline-discordant imaging to evaluate incident low back pain is common. We compared rates of guideline-discordant imaging in patients with low back pain in two care delivery systems with differing abilities to track care through an electronic health record (EHR), and in their patients' insurance status, to measure the association between these factors and rates of ordered low back imaging. We used data from two Kaiser Permanente (KP) Regions and from OCHIN, a community health center network. We extracted data on imaging performed after index visits for low back pain from June 1, 2011, to May 31, 2012, in these systems. Adjusted logistic regression measured associations between system-level factors and imaging rates. Imaging rates for incident low back pain using 2 national quality metrics: Clinical Quality Measure 0052, a measure for assessing Meaningful Use of EHRs, and the Healthcare Effectiveness Data and Information Set measure "Use of Imaging Studies for Low Back Pain." Among 19,503 KP patients and 2694 OCHIN patients with incident low back pain, ordered imaging was higher among men and whites but did not differ across health care systems. OCHIN's publicly insured patients had higher rates of imaging compared with those with private or no insurance. Rates of ordered imaging to evaluate incident low back pain among uninsured OCHIN patients were lower than in KP overall; among insured OCHIN patients, rates were higher than in KP overall. Research is needed to establish causality and develop interventions.

  8. Data systems and computer science: Software Engineering Program

    NASA Technical Reports Server (NTRS)

    Zygielbaum, Arthur I.

    1991-01-01

    An external review of the Integrated Technology Plan for the Civil Space Program is presented. This review is specifically concerned with the Software Engineering Program. The goals of the Software Engineering Program are as follows: (1) improve NASA's ability to manage development, operation, and maintenance of complex software systems; (2) decrease NASA's cost and risk in engineering complex software systems; and (3) provide technology to assure safety and reliability of software in mission critical applications.

  9. Trend Monitoring System (TMS) graphics software

    NASA Technical Reports Server (NTRS)

    Brown, J. S.

    1979-01-01

    A prototype bus communications system, which is being used to support the Trend Monitoring System (TMS) and to evaluate the bus concept, is considered. A set of FORTRAN-callable graphics subroutines for the host MODCOMP computer, and an approach to splitting graphics work between the host and the system's intelligent graphics terminals, are described. The graphics software in the MODCOMP and the operating software package written for the graphics terminals are included.

  10. RT-Syn: A real-time software system generator

    NASA Technical Reports Server (NTRS)

    Setliff, Dorothy E.

    1992-01-01

    This paper presents research into providing highly reusable and maintainable components by using automatic software synthesis techniques. This proposal uses domain knowledge combined with automatic software synthesis techniques to engineer large-scale mission-critical real-time software. The hypothesis centers on a software synthesis architecture that specifically incorporates application-specific (in this case real-time) knowledge. This architecture synthesizes complex system software to meet a behavioral specification and external interaction design constraints. Some examples of these external constraints are communication protocols, precisions, timing, and space limitations. The incorporation of application-specific knowledge facilitates the generation of mathematical software metrics which are used to narrow the design space, thereby making software synthesis tractable. Success has the potential to dramatically reduce mission-critical system life-cycle costs, not only by reducing development time but, more importantly, by facilitating maintenance, modifications, and extensions of complex mission-critical software systems, which currently dominate life-cycle costs.

  11. 1.6 MW peak power, 90 ps all-solid-state laser from an aberration self-compensated double-passing end-pumped Nd:YVO4 rod amplifier.

    PubMed

    Wang, Chunhua; Liu, Chong; Shen, Lifeng; Zhao, Zhiliang; Liu, Bin; Jiang, Hongbo

    2016-03-20

    In this paper a delicately designed double-passing end-pumped Nd:YVO4 rod amplifier is reported that produces 10.2 W average laser output when seeded by a 6 mW Nd:YVO4 microchip laser at a repetition rate of 70 kHz with a pulse duration of 90 ps. A pulse peak power of ∼1.6 MW and a pulse energy of ∼143 μJ are achieved. The beam quality is well preserved by a double-passing configuration for spherical-aberration compensation. The laser-beam size in the amplifier is optimized to prevent unwanted damage from the high pulse peak-power density. This study provides a simple and robust picosecond all-solid-state master oscillator power amplifier system with both high peak power and high beam quality, which shows great potential in micromachining.

  12. Analyzing Software Requirements Errors in Safety-Critical, Embedded Systems

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn R.

    1993-01-01

    This paper analyzes the root causes of safety-related software errors in safety-critical, embedded systems. The results show that software errors identified as potentially hazardous to the system tend to be produced by different error mechanisms than non-safety-related software errors. Safety-related software errors are shown to arise most commonly from (1) discrepancies between the documented requirements specifications and the requirements needed for correct functioning of the system and (2) misunderstandings of the software's interface with the rest of the system. The paper uses these results to identify methods by which requirements errors can be prevented. The goal is to reduce safety-related software errors and to enhance the safety of complex, embedded systems.

  13. From Bridges and Rockets, Lessons for Software Systems

    NASA Technical Reports Server (NTRS)

    Holloway, C. Michael

    2004-01-01

    Although differences exist between building software systems and building physical structures such as bridges and rockets, enough similarities exist that software engineers can learn lessons from failures in traditional engineering disciplines. This paper draws lessons from two well-known failures, the collapse of the Tacoma Narrows Bridge in 1940 and the destruction of the space shuttle Challenger in 1986, and applies these lessons to software system development. The following specific applications are made: (1) the verification and validation of a software system should not be based on a single method, or a single style of methods; (2) the tendency to embrace the latest fad should be overcome; and (3) the introduction of software control into safety-critical systems should be done cautiously.

  14. Sub-grouping patients with non-specific low back pain based on cluster analysis of discriminatory clinical items.

    PubMed

    Billis, Evdokia; McCarthy, Christopher J; Roberts, Chris; Gliatis, John; Papandreou, Maria; Gioftsos, George; Oldham, Jacqueline A

    2013-02-01

    To identify potential subgroups amongst patients with non-specific low back pain based on a consensus list of potentially discriminatory examination items. Exploratory study. A convenience sample of 106 patients with non-specific low back pain (43 males, 63 females, mean age 36 years, standard deviation 15.9 years) and 7 physiotherapists. Based on 3 focus groups and a two-round Delphi involving 23 health professionals and a random stratified sample of 150 physiotherapists, respectively, a comprehensive examination list comprising the most "discriminatory" items was compiled. Following reliability analysis, the most reliable clinical items were assessed with a sample of patients with non-specific low back pain. K-means cluster analysis was conducted for 2-, 3- and 4-cluster options to explore for meaningful homogenous subgroups. The most clinically meaningful cluster was a two-subgroup option, comprising a small group (n = 24) with more severe clinical presentation (i.e. more widespread pain, functional and sleeping problems, other symptoms, increased investigations undertaken, more severe clinical signs, etc.) and a larger less dysfunctional group (n = 80). A number of potentially discriminatory clinical items were identified by health professionals and sub-classified, based on a sample of patients with non-specific low back pain, into two subgroups. However, further work is needed to validate this classification process.

  15. Modernizing Systems and Software: How Evolving Trends in Systems and Software Technology Bode Well for Advancing the Precision of Technology

    DTIC Science & Technology

    2009-04-23

    Briefing excerpt (recovered table-of-contents fragments): the need for increased functionality as a forcing function to bring the fields of software and systems engineering together; the complexity of software-intensive systems is increasing; how evolving trends in systems and software technologies bode well for advancing precision; systems and software engineering in continued partnership.

  16. The DEEP-South: Scheduling and Data Reduction Software System

    NASA Astrophysics Data System (ADS)

    Yim, Hong-Suh; Kim, Myung-Jin; Bae, Youngho; Moon, Hong-Kyu; Choi, Young-Jun; Roh, Dong-Goo; the DEEP-South Team

    2015-08-01

    The DEep Ecliptic Patrol of the Southern sky (DEEP-South), started in October 2012, is currently in test runs with the first Korea Microlensing Telescope Network (KMTNet) 1.6 m wide-field telescope located at CTIO in Chile. While the primary objective for the DEEP-South is physical characterization of small bodies in the Solar System, it is expected to discover a large number of such bodies, many of them previously unknown. An automatic observation planning and data reduction software subsystem called "The DEEP-South Scheduling and Data reduction System" (the DEEP-South SDS) is currently being designed and implemented for observation planning, data reduction, and analysis of huge amounts of data with minimum human interaction. The DEEP-South SDS consists of three software subsystems: the DEEP-South Scheduling System (DSS), the Local Data Reduction System (LDR), and the Main Data Reduction System (MDR). The DSS manages observation targets, makes decisions on target priority and observation methods, schedules nightly observations, and archives data using a Database Management System (DBMS). The LDR is designed to detect moving objects from CCD images, while the MDR conducts photometry and reconstructs lightcurves. Based on analysis made at the LDR and the MDR, the DSS schedules follow-up observations to be conducted at other KMTNet stations. By the end of 2015, we expect the DEEP-South SDS to achieve stable operation. We also plan to improve the SDS to accomplish a finely tuned observation strategy and more efficient data reduction in 2016.

  17. Low Back Imaging When Not Indicated: A Descriptive Cross-System Analysis

    PubMed Central

    Gold, Rachel; Esterberg, Elizabeth; Hollombe, Celine; Arkind, Jill; Vakarcs, Patricia A; Tran, Huong; Burdick, Tim; DeVoe, Jennifer E; Horberg, Michael A

    2016-01-01

    Context: Guideline-discordant imaging to evaluate incident low back pain is common. Objective: We compared rates of guideline-discordant imaging in patients with low back pain in two care delivery systems with differing abilities to track care through an electronic health record (EHR), and in their patients’ insurance status, to measure the association between these factors and rates of ordered low back imaging. Design: We used data from two Kaiser Permanente (KP) Regions and from OCHIN, a community health center network. We extracted data on imaging performed after index visits for low back pain from June 1, 2011, to May 31, 2012, in these systems. Adjusted logistic regression measured associations between system-level factors and imaging rates. Main Outcome Measures: Imaging rates for incident low back pain using 2 national quality metrics: Clinical Quality Measure 0052, a measure for assessing Meaningful Use of EHRs, and the Healthcare Effectiveness Data and Information Set measure “Use of Imaging Studies for Low Back Pain.” Results: Among 19,503 KP patients and 2694 OCHIN patients with incident low back pain, ordered imaging was higher among men and whites but did not differ across health care systems. OCHIN’s publicly insured patients had higher rates of imaging compared with those with private or no insurance. Conclusion: Rates of ordered imaging to evaluate incident low back pain among uninsured OCHIN patients were lower than in KP overall; among insured OCHIN patients, rates were higher than in KP overall. Research is needed to establish causality and develop interventions. PMID:26934626

  18. Information adaptive system of NEEDS. [of NASA End to End Data System

    NASA Technical Reports Server (NTRS)

    Howle, W. M., Jr.; Kelly, W. L.

    1979-01-01

    The NASA End-to-End Data System (NEEDS) program was initiated by NASA to improve significantly the state of the art in acquisition, processing, and distribution of space-acquired data for the mid-1980s and beyond. The information adaptive system (IAS) is a program element under NEEDS Phase II which addresses sensor-specific processing on board the spacecraft. The IAS program is a logical first step toward smart sensors, and IAS developments - particularly the system components and key technology improvements - are applicable to future smart sensor efforts. The paper describes the design goals and functional elements of the IAS. In addition, the schedule for IAS development and demonstration is discussed.

  19. The Software Architecture of the Upgraded ESA DRAMA Software Suite

    NASA Astrophysics Data System (ADS)

    Kebschull, Christopher; Flegel, Sven; Gelhaus, Johannes; Mockel, Marek; Braun, Vitali; Radtke, Jonas; Wiedemann, Carsten; Vorsmann, Peter; Sanchez-Ortiz, Noelia; Krag, Holger

    2013-08-01

    In the beginnings of man's space flight activities there was the belief that space is so big that everybody could use it without any repercussions. However, during the last six decades the increasing use of Earth's orbits has led to a rapid growth in the space debris environment, which has a big influence on current and future space missions. For this reason ESA issued the "Requirements on Space Debris Mitigation for ESA Projects" [1] in 2008, which apply to all ESA missions henceforth. The DRAMA (Debris Risk Assessment and Mitigation Analysis) software suite had been developed to support the planning of space missions to comply with these requirements. During the last year the DRAMA software suite has been upgraded under ESA contract by TUBS and DEIMOS to include additional tools and increase the performance of existing ones. This paper describes the overall software architecture of the ESA DRAMA software suite. Specifically, the new graphical user interface, which manages the five main tools ARES (Assessment of Risk Event Statistics), MIDAS (MASTER-based Impact Flux and Damage Assessment Software), OSCAR (Orbital Spacecraft Active Removal), CROC (Cross Section of Complex Bodies) and SARA (Re-entry Survival and Risk Analysis), is discussed. The advancements are highlighted as well as the challenges that arise from the integration of the five tool interfaces. A framework had been developed at the ILR and was used for MASTER-2009 and PROOF-2009. The Java-based GUI framework enables cross-platform deployment, and its underlying model-view-presenter (MVP) software pattern meets strict design requirements necessary to ensure a robust and reliable method of operation in an environment where the GUI is separated from the processing back-end. While the GUI framework evolved with each project, allowing an increasing degree of integration of services like validators for input fields, it has also increased in complexity. The paper will conclude with an outlook on

  20. Software Build and Delivery Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robey, Robert W.

    2016-07-10

    This presentation deals with the hierarchy of software build and delivery systems. One of the goals is to maximize the success rate of new users and developers when first trying your software. First impressions are important. Early successes are important. This also reduces critical documentation costs. This is a presentation focused on computer science and goes into detail about code documentation.

  1. An effective write policy for software coherence schemes

    NASA Technical Reports Server (NTRS)

    Chen, Yung-Chin; Veidenbaum, Alexander V.

    1992-01-01

    The authors study the write behavior and evaluate the performance of various write strategies and buffering techniques for a MIN-based multiprocessor system using the simple software coherence scheme. Hit ratios, memory latencies, total execution time, and total write traffic are used as the performance indices. The write-through write-allocate no-fetch cache using a write-back write buffer is shown to have a better performance than both write-through and write-back caches. This type of write buffer is effective in reducing the volume as well as bursts of write traffic. On average, the use of a write-back cache reduces by 60 percent the total write traffic generated by a write-through cache.
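
    To make the comparison above concrete, the following minimal sketch (Python; the cache geometry, buffer depth, and address stream are invented for illustration and this is not the paper's simulator) counts the memory write traffic produced by a write-through, write-allocate, no-fetch cache drained through a small coalescing write buffer.

```python
# Minimal sketch (not the paper's simulator): a write-through, write-allocate,
# no-fetch cache drained through a coalescing write buffer, counting the writes
# that actually reach memory. All sizes and the address stream are illustrative.
from collections import OrderedDict

LINE = 32          # bytes per cache line (assumed)
BUFFER_SLOTS = 4   # write-buffer depth (assumed)

class WriteBuffer:
    def __init__(self, slots=BUFFER_SLOTS):
        self.slots = slots
        self.pending = OrderedDict()   # line address -> coalesced dirty entry
        self.mem_writes = 0

    def write(self, line_addr):
        if line_addr in self.pending:          # coalesce with an earlier write
            self.pending.move_to_end(line_addr)
            return
        if len(self.pending) == self.slots:    # buffer full: retire oldest entry
            self.pending.popitem(last=False)
            self.mem_writes += 1
        self.pending[line_addr] = True

    def drain(self):
        self.mem_writes += len(self.pending)
        self.pending.clear()

class WriteThroughCache:
    """Write-through, write-allocate, no-fetch: a write miss installs the line
    tag without fetching the old contents from memory."""
    def __init__(self, lines=256):
        self.lines = lines
        self.tags = [None] * lines
        self.buffer = WriteBuffer()
        self.hits = self.misses = 0

    def write(self, addr):
        line_addr = addr // LINE
        index = line_addr % self.lines
        if self.tags[index] == line_addr:
            self.hits += 1
        else:
            self.misses += 1
            self.tags[index] = line_addr       # allocate, but do not fetch
        self.buffer.write(line_addr)           # every write is passed downstream

if __name__ == "__main__":
    cache = WriteThroughCache()
    for addr in [0, 4, 8, 12, 4096, 4100, 0, 4]:   # toy write stream
        cache.write(addr)
    cache.buffer.drain()
    print(cache.hits, cache.misses, cache.buffer.mem_writes)
```

    In this toy stream the buffer coalesces repeated writes to the same line, so only two writes reach memory, which is the traffic-reduction effect the study attributes to the write buffer.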

  2. Next Generation Cloud-based Science Data Systems and Their Implications on Data and Software Stewardship, Preservation, and Provenance

    NASA Astrophysics Data System (ADS)

    Hua, H.; Manipon, G.; Starch, M.

    2017-12-01

    NASA's upcoming missions are expected to generate data volumes at least an order of magnitude larger than current missions. A significant increase in data processing, data rates, data volumes, and long-term data archive capabilities is needed. Consequently, new challenges are emerging that impact traditional data and software management approaches. At large scales, next-generation science data systems are exploring the move onto cloud computing paradigms to support these increased needs. New implications such as costs, data movement, collocation of data systems and archives, and moving processing closer to the data may result in changes to the stewardship, preservation, and provenance of science data and software. With more science data systems being on-boarded onto cloud computing facilities, we can expect more Earth science data records to be both generated and kept in the cloud. But at large scales, the cost of processing and storing global data may impact architectural and system designs. Data systems will trade the cost of keeping data in the cloud against data life-cycle approaches that move "colder" data back to traditional on-premise facilities. How will this impact data citation and processing software stewardship? What are the impacts of cloud-based on-demand processing and its effect on reproducibility and provenance? Similarly, with more science processing software being moved onto cloud, virtual machine, and container-based approaches, more opportunities arise for improved stewardship and preservation. But will the science community trust data reprocessed years or decades later? We will also explore emerging questions of the stewardship of the science data system software that generates the science data records, both during and after the life of the mission.

  3. An Automated Solar Synoptic Analysis Software System

    NASA Astrophysics Data System (ADS)

    Hong, S.; Lee, S.; Oh, S.; Kim, J.; Lee, J.; Kim, Y.; Lee, J.; Moon, Y.; Lee, D.

    2012-12-01

    We have developed an automated software system for identifying solar active regions, filament channels, and coronal holes, which are three major solar sources of space weather. Space weather forecasters at the NOAA Space Weather Prediction Center produce solar synoptic drawings on a daily basis to predict solar activities, i.e., solar flares, filament eruptions, high-speed solar wind streams, and co-rotating interaction regions, as well as their possible effects on the Earth. In an attempt to emulate this process in a fully automated and consistent way, we developed a software system named ASSA (Automated Solar Synoptic Analysis). When identifying solar active regions, ASSA uses high-resolution SDO HMI intensitygrams and magnetograms as inputs and provides the McIntosh classification and Mt. Wilson magnetic classification of each active region by applying appropriate image processing techniques such as thresholding, morphology extraction, and region growing. At the same time, it also extracts morphological and physical properties of active regions in a quantitative way for the short-term prediction of flares and CMEs. When identifying filament channels and coronal holes, images from the global H-alpha network and SDO AIA 193 are used for morphological identification, and SDO HMI magnetograms for quantitative verification. The output results of ASSA are routinely checked and validated against NOAA's daily SRS (Solar Region Summary) and UCOHO (URSIgram code for coronal hole information). A couple of preliminary scientific results are presented using available output results. ASSA will be deployed at the Korean Space Weather Center and serve its customers in an operational status by the end of 2012.
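
    As a rough illustration of the generic image-processing steps named above (thresholding, morphological cleanup, and region identification), the following Python sketch applies them to a synthetic magnetogram-like array. It is not ASSA code; the threshold value, structuring element, and synthetic data are assumptions.

```python
# Illustrative sketch only (not the ASSA pipeline): thresholding, morphological
# cleanup, and connected-region labeling of a synthetic magnetogram-like image.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = rng.normal(0.0, 10.0, size=(256, 256))    # quiet-Sun background (toy)
image[100:120, 60:90] += 400.0                    # strong-field patch (toy "active region")
image[30:40, 200:230] -= 350.0                    # opposite-polarity patch (toy)

THRESHOLD = 150.0                                 # field-strength cut, assumed

# 1. Threshold on absolute field strength.
mask = np.abs(image) > THRESHOLD

# 2. Morphological opening removes isolated noisy pixels.
mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))

# 3. Label the surviving connected regions (a simple stand-in for region growing).
labels, n_regions = ndimage.label(mask)
for slc in ndimage.find_objects(labels):
    patch = image[slc]
    print("region:", slc, "area_px:", patch.size, "peak:", float(np.abs(patch).max()))

print("candidate regions found:", n_regions)
```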

  4. Indium-oxide nanoparticles for RRAM devices compatible with CMOS back-end-of-line

    NASA Astrophysics Data System (ADS)

    León Pérez, Edgar A. A.; Guenery, Pierre-Vincent; Abouzaid, Oumaïma; Ayadi, Khaled; Brottet, Solène; Moeyaert, Jérémy; Labau, Sébastien; Baron, Thierry; Blanchard, Nicholas; Baboux, Nicolas; Militaru, Liviu; Souifi, Abdelkader

    2018-05-01

    We report on the fabrication and characterization of Resistive Random Access Memory (RRAM) devices based on nanoparticles in MIM structures. Our approach is based on the use of indium oxide (In2O3) nanoparticles embedded in a dielectric matrix using CMOS-full-compatible fabrication processes in view of back-end-of-line integration for non-volatile memory (NVM) applications. A bipolar switching behavior has been observed using current-voltage (I-V) measurements for all devices. Very high ION/IOFF ratios of up to 10^8 have been obtained. Our results provide insights for further integration of In2O3 nanoparticle-based devices for NVM applications.

  5. RELAP-7 Software Verification and Validation Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Curtis L.; Choi, Yong-Joon; Zou, Ling

    This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on the INL’s modern scientific software development framework – MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5’s capability and extends the analysis capability for all reactor system simulation scenarios.

  6. Compiling software for a hierarchical distributed processing system

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-12-31

    Compiling software for a hierarchical distributed processing system including providing to one or more compiling nodes software to be compiled, wherein at least a portion of the software to be compiled is to be executed by one or more nodes; compiling, by the compiling node, the software; maintaining, by the compiling node, any compiled software to be executed on the compiling node; selecting, by the compiling node, one or more nodes in a next tier of the hierarchy of the distributed processing system in dependence upon whether any compiled software is for the selected node or the selected node's descendants; sending to the selected node only the compiled software to be executed by the selected node or the selected node's descendants.
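
    The selection-and-forwarding idea in the claim can be sketched as follows: each node keeps the compiled units addressed to it and passes a child only the units destined for that child's subtree. The node names, artifact mapping, and tree shape are invented for illustration; this is not the patented implementation.

```python
# Minimal sketch of tier-by-tier distribution: a node installs the units compiled
# for itself and forwards to each child only the units for that child's subtree.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    children: list = field(default_factory=list)
    installed: list = field(default_factory=list)

    def descendants(self):
        out = set()
        for child in self.children:
            out.add(child.name)
            out |= child.descendants()
        return out

def distribute(node, compiled_units):
    """compiled_units: mapping of target node name -> compiled artifact."""
    if node.name in compiled_units:                # keep anything compiled for this node
        node.installed.append(compiled_units[node.name])
    for child in node.children:                    # forward only what the subtree needs
        subtree = {child.name} | child.descendants()
        subset = {t: a for t, a in compiled_units.items() if t in subtree}
        if subset:
            distribute(child, subset)

if __name__ == "__main__":
    leaf1, leaf2 = Node("leaf1"), Node("leaf2")
    mid = Node("mid", children=[leaf1, leaf2])
    root = Node("root", children=[mid])
    distribute(root, {"root": "root.bin", "leaf1": "leaf1.bin", "leaf2": "leaf2.bin"})
    print(root.installed, mid.installed, leaf1.installed, leaf2.installed)
```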

  7. Software control and system configuration management: A systems-wide approach

    NASA Technical Reports Server (NTRS)

    Petersen, K. L.; Flores, C., Jr.

    1984-01-01

    A comprehensive software control and system configuration management process for flight-crucial digital control systems of advanced aircraft has been developed and refined to insure efficient flight system development and safe flight operations. Because of the highly complex interactions among the hardware, software, and system elements of state-of-the-art digital flight control system designs, a systems-wide approach to configuration control and management has been used. Specific procedures are implemented to govern discrepancy reporting and reconciliation, software and hardware change control, systems verification and validation testing, and formal documentation requirements. An active and knowledgeable configuration control board reviews and approves all flight system configuration modifications and revalidation tests. This flexible process has proved effective during the development and flight testing of several research aircraft and remotely piloted research vehicles with digital flight control systems that ranged from relatively simple to highly complex, integrated mechanizations.

  8. Software fault tolerance for real-time avionics systems

    NASA Technical Reports Server (NTRS)

    Anderson, T.; Knight, J. C.

    1983-01-01

    Avionics systems have very high reliability requirements and are therefore prime candidates for the inclusion of fault tolerance techniques. In order to provide tolerance to software faults, some form of state restoration is usually advocated as a means of recovery. State restoration can be very expensive for systems which utilize concurrent processes. The concurrency present in most avionics systems and the further difficulties introduced by timing constraints imply that providing tolerance for software faults may be inordinately expensive or complex. A straightforward pragmatic approach to software fault tolerance which is believed to be applicable to many real-time avionics systems is proposed. A classification system for software errors is presented together with approaches to recovery and continued service for each error type.
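
    As a generic illustration of acceptance-test-based continued service, one common ingredient of software fault tolerance (and not necessarily the specific pragmatic scheme the paper proposes), the sketch below guards a primary routine with a plausibility check and falls back to a deliberately simple alternate. The altitude computation and bounds are invented.

```python
# Generic illustration (not the paper's scheme): a primary routine guarded by an
# acceptance test, with a simple alternate providing degraded but continued service.

def acceptance_test(altitude_m):
    # Plausibility check on the computed value; the bound is illustrative.
    return isinstance(altitude_m, float) and 0.0 <= altitude_m < 30000.0

def primary_altitude(raw_pressure_pa):
    # Full-accuracy routine; may contain the residual design fault.
    return 44330.0 * (1.0 - (raw_pressure_pa / 101325.0) ** 0.1903)

def alternate_altitude(last_good):
    # Deliberately simple fallback: reuse the last accepted value.
    return last_good

def fault_tolerant_altitude(raw_pressure_pa, last_good=0.0):
    try:
        result = primary_altitude(raw_pressure_pa)
        if acceptance_test(result):
            return result
    except (ValueError, OverflowError, ZeroDivisionError):
        pass
    return alternate_altitude(last_good)

print(fault_tolerant_altitude(90000.0))           # normal case: primary result accepted
print(fault_tolerant_altitude(200000.0, 812.3))   # implausible result rejected; fallback used
```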

  9. Development of fuel oil management system software: Phase 1, Tank management module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lange, H.B.; Baker, J.P.; Allen, D.

    1992-01-01

    The Fuel Oil Management System (FOMS) is a micro-computer based software system being developed to assist electric utilities that use residual fuel oils with oil purchase and end-use decisions. The Tank Management Module (TMM) is the first FOMS module to be produced. TMM enables the user to follow the mixing status of oils contained in a number of oil storage tanks. The software contains a computational model of residual fuel oil mixing which addresses mixing that occurs as one oil is added to another in a storage tank and also purposeful mixing of the tank by propellers, recirculation or convection. The model also addresses the potential for sludge formation due to incompatibility of oils being mixed. Part 1 of the report presents a technical description of the mixing model and a description of its development. Steps followed in developing the mixing model included: (1) definition of ranges of oil properties and tank design factors used by utilities; (2) review and adaption of prior applicable work; (3) laboratory development; and (4) field verification. Also, a brief laboratory program was devoted to exploring the suitability of suggested methods for predicting viscosities, flash points and pour points of oil mixtures. Part 2 of the report presents a functional description of the TMM software and a description of its development. The software development program consisted of the following steps: (1) on-site interviews at utilities to prioritize needs and characterize user environments; (2) construction of the user interface; and (3) field testing the software.
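
    The report's actual mixing and compatibility model is not reproduced here, but the kind of bookkeeping a tank-management module performs when one oil is added to another can be sketched with a simple mass-weighted blend (log-linear for viscosity). All property names and values are illustrative placeholders, not the FOMS model.

```python
# Illustrative placeholder only (not the FOMS mixing model): mass-weighted sulfur
# blending and a rough log-linear viscosity blend for a tank receiving a delivery.
import math

def blend(tank, addition):
    """tank/addition: dicts with mass_t, sulfur_pct, viscosity_cst."""
    total = tank["mass_t"] + addition["mass_t"]
    w1, w2 = tank["mass_t"] / total, addition["mass_t"] / total
    return {
        "mass_t": total,
        "sulfur_pct": w1 * tank["sulfur_pct"] + w2 * addition["sulfur_pct"],
        # log-linear viscosity blending, a common rough approximation (assumed here)
        "viscosity_cst": math.exp(w1 * math.log(tank["viscosity_cst"])
                                  + w2 * math.log(addition["viscosity_cst"])),
    }

heel = {"mass_t": 500.0, "sulfur_pct": 2.8, "viscosity_cst": 380.0}
delivery = {"mass_t": 1500.0, "sulfur_pct": 1.0, "viscosity_cst": 180.0}
print(blend(heel, delivery))
```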

  10. Automated software configuration in the MONSOON system

    NASA Astrophysics Data System (ADS)

    Daly, Philip N.; Buchholz, Nick C.; Moore, Peter C.

    2004-09-01

    MONSOON is the next generation OUV-IR controller project being developed at NOAO. The design is flexible, emphasizing code re-use, maintainability and scalability as key factors. The software needs to support widely divergent detector systems ranging from multi-chip mosaics (for LSST, QUOTA, ODI and NEWFIRM) down to large single or multi-detector laboratory development systems. In order for this flexibility to be effective and safe, the software must be able to configure itself to the requirements of the attached detector system at startup. The basic building block of all MONSOON systems is the PAN-DHE pair which make up a single data acquisition node. In this paper we discuss the software solutions used in the automatic PAN configuration system.
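
    A hedged sketch of the startup self-configuration idea follows (not MONSOON code; the detector identifiers and configuration fields are invented): the acquisition node probes the attached detector head, loads a matching configuration, and refuses to run with an unknown detector.

```python
# Hedged sketch (not MONSOON): configure the acquisition node at startup based on
# the detector system that is actually attached. Identifiers/fields are invented.

DETECTOR_CONFIGS = {
    "mosaic-8x8": {"channels": 64, "pixel_clock_hz": 1_000_000, "readout": "parallel"},
    "lab-single": {"channels": 1,  "pixel_clock_hz": 250_000,   "readout": "serial"},
}

def probe_detector_id():
    # Stand-in for querying the detector head electronics over its control link.
    return "lab-single"

def configure_pan_node():
    detector_id = probe_detector_id()
    try:
        config = DETECTOR_CONFIGS[detector_id]
    except KeyError:
        raise SystemExit(f"no configuration for attached detector {detector_id!r}")
    # Apply the configuration to the acquisition software (placeholder print).
    print(f"configuring {detector_id}: {config}")
    return config

if __name__ == "__main__":
    configure_pan_node()
```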

  11. Ion-beam irradiation of lanthanum compounds in the systems La2O3-Al2O3 and La2O3-TiO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whittle, Karl R., E-mail: karl.whittle@ansto.gov.a; Lumpkin, Gregory R.; Blackford, Mark G.

    2010-10-15

    Thin crystals of La2O3, LaAlO3, La2/3TiO3, La2TiO5, and La2Ti2O7 have been irradiated in situ using 1 MeV Kr2+ ions at the Intermediate Voltage Electron Microscope-Tandem User Facility (IVEM-Tandem), Argonne National Laboratory (ANL). We observed that La2O3 remained crystalline to a fluence greater than 3.1x10^16 ions cm^-2 at a temperature of 50 K. The four binary oxide compounds in the two systems were observed through the crystalline-amorphous transition as a function of ion fluence and temperature. Results from the ion irradiations give critical temperatures for amorphisation (Tc) of 647 K for LaAlO3, 840 K for La2Ti2O7, 865 K for La2/3TiO3, and 1027 K for La2TiO5. The Tc values observed in this study, together with previous data for Al2O3 and TiO2, are discussed with reference to the melting points for the La2O3-Al2O3 and La2O3-TiO2 systems and the different local environments within the four crystal structures. Results suggest that there is an observable inverse correlation between Tc and melting temperature (Tm) in the two systems. More complex relationships exist between Tc and crystal structure, with the stoichiometric perovskite LaAlO3 being the most resistant to amorphisation. Graphical abstract: La2TiO5, with an atypical TiO5 coordination for Ti, is found to differ in radiation resistance from La2Ti2O7 and La2/3TiO3. Irradiation of La-Ti-O and La-Al-O based systems has found that radiation damage resistance is related to the ability of the system to disorder.

  12. The ASTRI/CTA mini-array software system

    NASA Astrophysics Data System (ADS)

    Tosti, Gino; Schwarz, Joseph; Antonelli, Lucio Angelo; Trifoglio, Massimo; Catalano, Osvaldo; Maccarone, Maria Concetta; Leto, Giuseppe; Gianotti, Fulvio; Canestrari, Rodolfo; Giro, Enrico; Fiorini, Mauro; La Palombara, Nicola; Pareschi, Giovanni; Stringhetti, Luca; Vercellone, Stefano; Conforti, Vito; Tanci, Claudio; Bruno, Pietro; Grillo, Alessandro; Testa, Vincenzo; di Paola, Andrea; Gallozzi, Stefano

    2014-07-01

    ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a Flagship Project financed by the Italian Ministry of Education, University and Research, and led by INAF, the Italian National Institute of Astrophysics. The main goals of the ASTRI project are the realization of an end-to-end prototype of a Small Size Telescope (SST) for the Cherenkov Telescope Array (CTA) in a dual-mirror configuration (SST-2M) and, subsequently, of a mini-array comprising seven SST-2M telescopes. The mini-array will be placed at the final CTA Southern Site, which will be part of the CTA seed array, around which the whole CTA observatory will be developed. The Mini-Array Software System (MASS) will provide a comprehensive set of tools to prepare an observing proposal, to perform the observations specified therein (monitoring and controlling all the hardware components of each telescope), to analyze the acquired data online and to store/retrieve all the data products to/from the archive. Here we present the main features of the MASS and its first version, to be tested on the ASTRI SST-2M prototype that will be installed at the INAF observing station located at Serra La Nave on Mount Etna in Sicily.

  13. Real-time animation software for customized training to use motor prosthetic systems.

    PubMed

    Davoodi, Rahman; Loeb, Gerald E

    2012-03-01

    Research on control of human movement and development of tools for restoration and rehabilitation of movement after spinal cord injury and amputation can benefit greatly from software tools for creating precisely timed animation sequences of human movement. Despite its ability to create sophisticated animation and high-quality rendering, existing animation software is not adapted for application to neural prostheses and rehabilitation of human movement. We have developed a software tool known as MSMS (MusculoSkeletal Modeling Software) that can be used to develop models of human or prosthetic limbs and the objects with which they interact and to animate their movement using motion data from a variety of offline and online sources. The motion data can be read from a motion file containing synthesized motion data or recordings from a motion capture system. Alternatively, motion data can be streamed online from a real-time motion capture system, a physics-based simulation program, or any program that can produce real-time motion data. Further, animation sequences of daily life activities can be constructed using the intuitive user interface of Microsoft's PowerPoint software. The latter allows expert and nonexpert users alike to assemble primitive movements into a complex motion sequence with precise timing by simply arranging the order of the slides and editing their properties in PowerPoint. The resulting motion sequence can be played back in an open-loop manner for demonstration and training or in closed-loop virtual reality environments where the timing and speed of animation depend on user inputs. These versatile animation utilities can be used in any application that requires precisely timed animations, but they are particularly suited for research and rehabilitation of movement disorders. MSMS's modeling and animation tools are routinely used in a number of research laboratories around the country to study the control of movement and to develop and test

  14. An End-To-End Test of A Simulated Nuclear Electric Propulsion System

    NASA Technical Reports Server (NTRS)

    VanDyke, Melissa; Hrbud, Ivana; Goddfellow, Keith; Rodgers, Stephen L. (Technical Monitor)

    2002-01-01

    The Safe Affordable Fission Engine (SAFE) test series addresses Phase I Space Fission Systems issues, in particular non-nuclear testing and system integration issues leading to the testing and non-nuclear demonstration of a 400-kW fully integrated flight unit. The first part of the SAFE 30 test series demonstrated operation of the simulated nuclear core and heat pipe system. Experimental data acquired in a number of different test scenarios will validate existing computational models, demonstrate system flexibility (fast start-ups, multiple start-ups/shut-downs), and simulate predictable failure modes and operating environments. The objective of the second part is to demonstrate an integrated propulsion system consisting of a core, conversion system and a thruster where the system converts thermal heat into jet power. This end-to-end system demonstration sets a precedent for ground testing of nuclear electric propulsion systems. The paper describes the SAFE 30 end-to-end system demonstration and its subsystems.

  15. Debugging and Performance Analysis Software Tools for Peregrine System |

    Science.gov Websites

    NREL High-Performance Computing website. Learn about the debugging and performance analysis software tools available for use with the Peregrine system, such as the Allinea tools.

  16. Modeling of Zircon (ZrSiO4) and Zirconia (ZrO2) using ADF-GUI Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lwin, Maung Tin Moe; Amin, Yusoff Mohd; Kassim, Hasan Abu

    2010-07-07

    Natural zircon (ZrSiO4) has a very high concentration of uranium and thorium, up to 5000 ppm. The alpha-particle decay of these impurities causes changes such as atomic displacements in the crystalline structure of zircon. The track density caused by this alpha decay in zircon can be decreased by annealing at temperatures from 700 deg. C to 980 deg. C. Recently zircon has been extensively studied as a possible candidate material for immobilization of fission products and actinides. Besides, zirconia (ZrO2), a product from natural zircon, is widely used in industrial fields because it has excellent chemical and mechanical properties at high temperature. The dielectric constants of monoclinic, cubic and tetragonal ZrO2 have been found to be about 22, 35 and 50, respectively, in computer simulation work. In recent years, atomistic simulations and modeling have been widely studied, because many computational techniques can offer atomic-level approaches with minimal errors in estimation. One widely used method is Density Functional Theory (DFT). In this study, the ADF-GUI software, based on DFT, will be used to calculate the frequencies and absorption intensities of zircon and zirconia molecules. The data from the calculations will be verified against experimental work such as Raman spectroscopy, AFM and XRD.

  17. Advanced software techniques for data management systems. Volume 1: Study of software aspects of the phase B space shuttle avionics system

    NASA Technical Reports Server (NTRS)

    Martin, F. H.

    1972-01-01

    An overview of the executive system design task is presented. The flight software executive system, software verification, phase B baseline avionics system review, higher order languages and compilers, and computer hardware features are also discussed.

  18. OSIRIX: open source multimodality image navigation software

    NASA Astrophysics Data System (ADS)

    Rosset, Antoine; Pysher, Lance; Spadola, Luca; Ratib, Osman

    2005-04-01

    The goal of our project is to develop a completely new software platform that will allow users to efficiently and conveniently navigate through large sets of multidimensional data without the need for expensive high-end hardware or software. We also elected to develop our system on new open-source software libraries, allowing other institutions and developers to contribute to this project. OsiriX is a free and open-source imaging software designed to manipulate and visualize large sets of medical images: http://homepage.mac.com/rossetantoine/osirix/

  19. An Internet Protocol-Based Software System for Real-Time, Closed-Loop, Multi-Spacecraft Mission Simulation Applications

    NASA Technical Reports Server (NTRS)

    Burns, Richard D.; Davis, George; Cary, Everett; Higinbotham, John; Hogie, Keith

    2003-01-01

    A mission simulation prototype for Distributed Space Systems has been constructed using existing developmental hardware and software testbeds at NASA s Goddard Space Flight Center. A locally distributed ensemble of testbeds, connected through the local area network, operates in real time and demonstrates the potential to assess the impact of subsystem level modifications on system level performance and, ultimately, on the quality and quantity of the end product science data.

  20. Synthesis and X-ray crystal structures of (Mo(CO)(Et2PC2H4PEt2)2)2(μ-N2) with an end-on bridging dinitrogen ligand and Mo(CO)(iBu2PC2H4PiBu2)2 containing an agostic Mo...H-C interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, X.L.; Kubas, G.J.; Burns, C.J.

    1995-12-20

    The compound formed by the reaction of trans-Mo(N2)(Et2PC2H4PEt2)2 with ethyl acetate in refluxing toluene under argon has been formulated as the bridging dinitrogen complex (Mo(CO)(Et2PC2H4PEt2)2)2(μ-N2) (1), in contrast with the previously proposed formulation of Mo(CO)(Et2PC2H4PEt2)2 (2). In refluxing p-xylene and under argon, compound 1 eliminates the bridging dinitrogen ligand to form the nitrogen-free compound 2. The corresponding reaction of trans-Mo(N2)(iBu2PC2H4PiBu2)2 gives Mo(CO)(iBu2PC2H4PiBu2)2 (3). The molecular structures of compounds 1 and 3 have been determined by single-crystal X-ray diffraction studies. Compound 1 contains an end-on bridging dinitrogen ligand. Compound 3 attains a formal 18-electron configuration by virtue of an agostic Mo...H-C interaction between the molybdenum atom and an aliphatic γ-C-H bond of the alkyldiphosphine ligand. On the basis of the agostic Mo...C and Mo...H distances, the agostic interaction in 3 appears to be stronger than that in the related compound Mo(CO)(Ph2PC2H4PPh2)2, which involves an ortho aromatic C-H bond of the diphosphine ligand. Crystallographic data for 1: monoclinic, space group C2/c, a = 24.270(2) Å, b = 44.233(4) Å, c = 20.378(2) Å, β = 90.725(9)°, V = 21875(3) Å^3, Z = 16, and R = 0.048. Crystallographic data for 3: orthorhombic, space group Pna2_1, a = 18.332(4) Å, b = 22.0664(4) Å, c = 10.589(2) Å, V = 4283(2) Å^3, Z = 4, and R = 0.034.

  1. Executive system software design and expert system implementation

    NASA Technical Reports Server (NTRS)

    Allen, Cheryl L.

    1992-01-01

    The topics are presented in viewgraph form and include: software requirements; design layout of the automated assembly system; menu display for automated composite command; expert system features; complete robot arm state diagram and logic; and expert system benefits.

  2. Software architecture for a distributed real-time system in Ada, with application to telerobotics

    NASA Technical Reports Server (NTRS)

    Olsen, Douglas R.; Messiora, Steve; Leake, Stephen

    1992-01-01

    The architecture and software design methodology presented here are described in the context of a telerobotic application in Ada, specifically the Engineering Test Bed (ETB), which was developed to support the Flight Telerobotic Servicer (FTS) Program at GSFC. However, the nature of the architecture is such that it has applications to any multiprocessor distributed real-time system. The ETB architecture, which is a derivation of the NASA/NBS Standard Reference Model (NASREM), defines a hierarchy for representing a telerobot system. Within this hierarchy, a module is a logical entity consisting of the software associated with a set of related hardware components in the robot system. A module is comprised of submodules, which are cyclically executing processes that each perform a specific set of functions. The submodules in a module can run on separate processors. The submodules in the system communicate via command/status (C/S) interface channels, which are used to send commands down and relay status back up the system hierarchy. Submodules also communicate via setpoint data links, which are used to transfer control data from one submodule to another. A submodule invokes submodule algorithms (SMAs) to perform algorithmic operations. Data that describe or model a physical component of the system are stored as objects in the World Model (WM). The WM is a system-wide distributed database that is accessible to submodules in all modules of the system for creating, reading, and writing objects.
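
    The command/status flow between a module and one of its cyclically executing submodules can be sketched as follows (in Python rather than Ada, with names and the cycle structure invented for illustration; in the ETB the submodule would run on a separate processor and would also exchange setpoint data).

```python
# Minimal sketch (not the ETB/NASREM code): a module and one cyclically executed
# submodule linked by a command/status channel; commands flow down, status flows up.
from queue import Queue, Empty

class Channel:
    """One command queue (downward) and one status queue (upward)."""
    def __init__(self):
        self.command = Queue()
        self.status = Queue()

class JointServoSubmodule:
    def __init__(self, channel):
        self.channel = channel
        self.setpoint = 0.0

    def cycle(self):
        try:
            cmd = self.channel.command.get_nowait()   # accept a new command if present
            self.setpoint = cmd["setpoint"]
        except Empty:
            pass
        # ... servo computation would run here on every cycle ...
        self.channel.status.put({"tracking": True, "setpoint": self.setpoint})

class ArmModule:
    def __init__(self):
        self.to_servo = Channel()
        self.servo = JointServoSubmodule(self.to_servo)

    def cycle(self, desired_setpoint):
        self.to_servo.command.put({"setpoint": desired_setpoint})
        self.servo.cycle()            # in the real system this runs on another processor
        return self.to_servo.status.get()

arm = ArmModule()
print(arm.cycle(0.35))
```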

  3. REVEAL: Software Documentation and Platform Migration

    NASA Technical Reports Server (NTRS)

    Wilson, Michael A.; Veibell, Victoir T.; Freudinger, Lawrence C.

    2008-01-01

    The Research Environment for Vehicle Embedded Analysis on Linux (REVEAL) is reconfigurable data acquisition software designed for network-distributed test and measurement applications. In development since 2001, it has been successfully demonstrated in support of a number of actual missions within NASA's Suborbital Science Program. Improvements to software configuration control were needed to properly support both an ongoing transition to operational status and continued evolution of REVEAL capabilities. For this reason the project described in this report targets REVEAL software source documentation and deployment of the software on a small set of hardware platforms different from what is currently used in the baseline system implementation. This report specifically describes the actions taken over a ten week period by two undergraduate student interns and serves as a final report for that internship. The topics discussed include: the documentation of REVEAL source code; the migration of REVEAL to other platforms; and an end-to-end field test that successfully validates the efforts.

  4. WinHPC System Software | High-Performance Computing | NREL

    Science.gov Websites

    NREL High-Performance Computing website. Learn about the software applications, tools, and toolchains available on the WinHPC system for industrial applications, including the Intel compilers and toolchain suite.

  5. The software architecture to control the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Oya, I.; Füßling, M.; Antonino, P. O.; Conforti, V.; Hagge, L.; Melkumyan, D.; Morgenstern, A.; Tosti, G.; Schwanke, U.; Schwarz, J.; Wegner, P.; Colomé, J.; Lyard, E.

    2016-07-01

    The Cherenkov Telescope Array (CTA) project is an initiative to build two large arrays of Cherenkov gamma-ray telescopes. CTA will be deployed as two installations, one in the northern and the other in the southern hemisphere, containing dozens of telescopes of different sizes. CTA is a big step forward in the field of ground-based gamma-ray astronomy, not only because of the expected scientific return, but also due to the order-of-magnitude larger scale of the instrument to be controlled. The performance requirements associated with such a large and distributed astronomical installation require a thoughtful analysis to determine the best software solutions. The array control and data acquisition (ACTL) work-package within the CTA initiative will deliver the software to control and acquire the data from the CTA instrumentation. In this contribution we present the current status of the formal ACTL system decomposition into software building blocks and the relationships among them. The system is modelled via the Systems Modelling Language (SysML) formalism. To cope with the complexity of the system, this architecture model is sub-divided into different perspectives. The relationships with the stakeholders and external systems are used to create the first perspective, the context of the ACTL software system. Use cases are employed to describe the interaction of those external elements with the ACTL system and are traced to a hierarchy of functionalities (abstract system functions) describing the internal structure of the ACTL system. These functions are then traced to fully specified logical elements (software components), the deployment of which as technical elements is also described. This modelling approach allows us to decompose the ACTL software into elements to be created and the flow of information within the system, providing us with a clear way to identify sub-system interdependencies. This architectural approach allows us to build the ACTL system model and

  6. Analysis of Software Systems for Specialized Computers,

    DTIC Science & Technology

    ... computer) with given computer hardware and software. The object of study is the software system of a computer, designed for solving a fixed complex of... The purpose of the analysis is to find parameters that characterize the system and its elements during operation, i.e., when servicing the given requirement flow. (Author)

  7. System of end-to-end symmetric database encryption

    NASA Astrophysics Data System (ADS)

    Galushka, V. V.; Aydinyan, A. R.; Tsvetkova, O. L.; Fathi, V. A.; Fathi, D. V.

    2018-05-01

    The article is devoted to the actual problem of protecting databases from information leakage that is performed while bypassing access control mechanisms. To solve this problem, it is proposed to use end-to-end data encryption, implemented at the end nodes of the interaction between the information system components using one of the symmetric cryptographic algorithms. For this purpose, a key management method designed for use in a multi-user system has been developed and described; it is based on a distributed key representation model in which part of the key is stored in the database and the other part is obtained by transforming the user's password. In this case, the key is calculated immediately before the cryptographic transformations and is not stored in memory after the completion of these transformations. Algorithms for registering and authorizing a user, as well as changing his password, have been described, and the methods for calculating parts of a key when performing these operations have been provided.
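
    A minimal sketch of the key-splitting idea follows, under stated assumptions: PBKDF2 for the password-derived part, XOR to combine it with the database-resident part, and the third-party cryptography package's Fernet standing in for "one of the symmetric cryptographic algorithms". The paper's exact construction may differ.

```python
# Sketch under stated assumptions (not the paper's exact scheme): half of the key
# material lives in the database, the other half is derived from the user's
# password; the working key is combined only for the duration of each call.
import base64
import hashlib
import os
from cryptography.fernet import Fernet  # stand-in symmetric cipher (assumption)

def derive_password_part(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000, dklen=32)

def combine(db_part: bytes, pwd_part: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(db_part, pwd_part))

# Registration: generate and store the database-resident key part and the salt.
salt = os.urandom(16)
db_key_part = os.urandom(32)          # this half is what the database stores
password = "correct horse battery staple"

def encrypt_field(plaintext: bytes) -> bytes:
    key = combine(db_key_part, derive_password_part(password, salt))
    token = Fernet(base64.urlsafe_b64encode(key)).encrypt(plaintext)
    del key                            # key material is not kept after use
    return token

def decrypt_field(token: bytes) -> bytes:
    key = combine(db_key_part, derive_password_part(password, salt))
    plaintext = Fernet(base64.urlsafe_b64encode(key)).decrypt(token)
    del key
    return plaintext

secret = encrypt_field(b"patient record 42")
print(decrypt_field(secret))
```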

  8. The High-Level Interface Definitions in the ASTRI/CTA Mini Array Software System (MASS)

    NASA Astrophysics Data System (ADS)

    Conforti, V.; Tosti, G.; Schwarz, J.; Bruno, P.; Cefal‘A, M.; Paola, A. D.; Gianotti, F.; Grillo, A.; Russo, F.; Tanci, C.; Testa, V.; Antonelli, L. A.; Canestrari, R.; Catalano, O.; Fiorini, M.; Gallozzi, S.; Giro, E.; Palombara, N. L.; Leto, G.; Maccarone, M. C.; Pareschi, G.; Stringhetti, L.; Trifoglio, M.; Vercellone, S.; Astri Collaboration; Cta Consortium

    2015-09-01

    ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a Flagship Project funded by the Italian Ministry of Education, University and Research, and led by INAF, the Italian National Institute of Astrophysics. Within this framework, INAF is currently developing an end-to-end prototype, named ASTRI SST-2M, of a Small Size Dual-Mirror Telescope for the Cherenkov Telescope Array, CTA. A second goal of the project is the realization of the ASTRI/CTA mini-array, which will be composed of seven SST-2M telescopes placed at the CTA Southern Site. The ASTRI Mini Array Software System (MASS) is designed to support the ASTRI/CTA mini-array operations. MASS is being built on top of the ALMA Common Software (ACS) framework, which provides support for the implementation of distributed data acquisition and control systems, and functionality for log and alarm management, message driven communication and hardware devices management. The first version of the MASS system, which will comply with the CTA requirements and guidelines, will be tested on the ASTRI SST-2M prototype. In this contribution we present the interface definitions of the MASS high level components in charge of the ASTRI SST-2M observation scheduling, telescope control and monitoring, and data taking. Particular emphasis is given to their potential reuse for the ASTRI/CTA mini-array.

  9. Enhancement of computer system for applications software branch

    NASA Technical Reports Server (NTRS)

    Bykat, Alex

    1987-01-01

    Presented is a compilation of the history of a two-month project concerned with a survey, evaluation, and specification of a new computer system for the Applications Software Branch of the Software and Data Management Division of Information and Electronic Systems Laboratory of Marshall Space Flight Center, NASA. Information gathering consisted of discussions and surveys of branch activities, evaluation of computer manufacturer literature, and presentations by vendors. Information gathering was followed by evaluation of their systems. The criteria of the latter were: the (tentative) architecture selected for the new system, type of network architecture supported, software tools, and to some extent the price. The information received from the vendors, as well as additional research, lead to detailed design of a suitable system. This design included considerations of hardware and software environments as well as personnel issues such as training. Design of the system culminated in a recommendation for a new computing system for the Branch.

  10. System software for the finite element machine

    NASA Technical Reports Server (NTRS)

    Crockett, T. W.; Knott, J. D.

    1985-01-01

    The Finite Element Machine is an experimental parallel computer developed at Langley Research Center to investigate the application of concurrent processing to structural engineering analysis. This report describes system-level software which has been developed to facilitate use of the machine by applications researchers. The overall software design is outlined, and several important parallel processing issues are discussed in detail, including processor management, communication, synchronization, and input/output. Based on experience using the system, the hardware architecture and software design are critiqued, and areas for further work are suggested.

  11. Diode-end-pumped Ho, Pr:LiLuF4 bulk laser at 2.95 μm.

    PubMed

    Nie, Hongkun; Zhang, Peixiong; Zhang, Baitao; Yang, Kejian; Zhang, Lianhan; Li, Tao; Zhang, Shuaiyi; Xu, Jianqiu; Hang, Yin; He, Jingliang

    2017-02-15

    A diode-end-pumped continuous-wave (CW) and passively Q-switched Ho, Pr:LiLuF4 (Ho, Pr:LLF) laser operating at 2.95 μm was demonstrated for the first time, to the best of our knowledge. The maximum CW output power was 172 mW. By using monolayer graphene as the saturable absorber, passively Q-switched operation was realized, with a highest average output power of 88 mW, a shortest pulse duration of 937.5 ns, and a maximum repetition rate of 55.7 kHz. The laser beam quality factor M2 at the maximum CW output power was measured to be Mx2 = 1.48 and My2 = 1.47.

  12. The Development of Design Tools for Fault Tolerant Quantum Dot Cellular Automata Based Logic

    NASA Technical Reports Server (NTRS)

    Armstrong, Curtis D.; Humphreys, William M.

    2003-01-01

    We are developing software to explore the fault tolerance of quantum dot cellular automata gate architectures in the presence of manufacturing variations and device defects. The Topology Optimization Methodology using Applied Statistics (TOMAS) framework extends the capabilities of AQUINAS (A Quantum Interconnected Network Array Simulator) by adding front-end and back-end software and creating an environment that integrates all of these components. The front-end tools establish all simulation parameters, configure the simulation system, automate the Monte Carlo generation of simulation files, and execute the simulation of these files. The back-end tools perform automated data parsing, statistical analysis and report generation.
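
    The front-end's Monte Carlo generation of simulation files can be illustrated with a short sketch (not TOMAS itself; the cell layout, placement tolerance, and file format are invented): perturb a nominal QCA layout by a manufacturing tolerance and write one input file per trial for the back-end to parse and analyze later.

```python
# Hedged sketch of the Monte Carlo front-end idea only (not TOMAS): generate a batch
# of simulation input files whose cell positions are perturbed by a placement
# tolerance, so statistics can later be collected over the batch.
import json
import random

NOMINAL_CELLS = [(0.0, 0.0), (20.0, 0.0), (40.0, 0.0)]   # nm, a toy 3-cell wire
POSITION_SIGMA_NM = 1.5                                  # assumed placement tolerance
N_TRIALS = 100

def perturbed_layout(rng):
    return [(x + rng.gauss(0.0, POSITION_SIGMA_NM),
             y + rng.gauss(0.0, POSITION_SIGMA_NM)) for x, y in NOMINAL_CELLS]

def write_simulation_files():
    rng = random.Random(1234)                            # seeded for reproducibility
    for trial in range(N_TRIALS):
        with open(f"trial_{trial:03d}.json", "w") as fh:
            json.dump({"trial": trial, "cells": perturbed_layout(rng)}, fh)

if __name__ == "__main__":
    write_simulation_files()
```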

  13. jade: An End-To-End Data Transfer and Catalog Tool

    NASA Astrophysics Data System (ADS)

    Meade, P.

    2017-10-01

    The IceCube Neutrino Observatory is a cubic kilometer neutrino telescope located at the Geographic South Pole. IceCube collects 1 TB of data every day. An online filtering farm processes this data in real time and selects 10% to be sent via satellite to the main data center at the University of Wisconsin-Madison. IceCube has two year-round on-site operators. New operators are hired every year, due to the hard conditions of wintering at the South Pole. These operators are tasked with the daily operations of running a complex detector in serious isolation conditions. One of the systems they operate is the data archiving and transfer system. Due to these challenging operational conditions, the data archive and transfer system must above all be simple and robust. It must also share the limited resource of satellite bandwidth, and collect and preserve useful metadata. The original data archive and transfer software for IceCube was written in 2005. After running in production for several years, the decision was taken to fully rewrite it, in order to address a number of structural drawbacks. The new data archive and transfer software (JADE2) has been in production for several months providing improved performance and resiliency. One of the main goals for JADE2 is to provide a unified system that handles the IceCube data end-to-end: from collection at the South Pole, all the way to long-term archive and preservation in dedicated repositories at the North. In this contribution, we describe our experiences and lessons learned from developing and operating the data archive and transfer software for a particle physics experiment in extreme operational conditions like IceCube.

  14. iSDS: a self-configurable software-defined storage system for enterprise

    NASA Astrophysics Data System (ADS)

    Chen, Wen-Shyen Eric; Huang, Chun-Fang; Huang, Ming-Jen

    2018-01-01

    Storage is one of the most important aspects of IT infrastructure for enterprises. But enterprises are interested in more than just data storage; they also want more reliable data protection, higher performance and reduced resource consumption. Traditional enterprise-grade storage satisfies these requirements at high cost, because it is usually designed and constructed around customised field-programmable gate arrays to achieve high-end functionality. However, in this ever-changing environment, enterprises demand storage with more flexible deployment at lower cost. Moreover, the rise of new application fields, such as social media, big data and video streaming services, makes operational tasks for administrators more complex. In this article, a new storage system called intelligent software-defined storage (iSDS), based on software-defined storage, is described. More specifically, this approach advocates using software to replace features provided by traditional customised chips. To alleviate the management burden, it also advocates applying machine learning to automatically configure storage to meet the dynamic requirements of the workloads running on it. This article focuses on the analysis feature of the iSDS cluster by detailing its architecture and design.
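
    As a toy illustration of workload-driven configuration (the general idea only; the actual iSDS models, metrics and policies are not described here), the sketch below classifies a workload from simple I/O statistics and picks a cache policy. A real system would learn such thresholds from data rather than hard-code them.

      # Toy workload-driven storage configuration: classify a workload from basic
      # I/O statistics and choose a cache/readahead policy. Thresholds, feature
      # names and policies are invented for illustration only.
      def classify_workload(stats):
          random_ratio = stats["random_ops"] / max(stats["total_ops"], 1)
          read_ratio = stats["read_ops"] / max(stats["total_ops"], 1)
          if random_ratio > 0.7 and read_ratio > 0.6:
              return "random-read-heavy"
          if random_ratio < 0.3:
              return "sequential"
          return "mixed"

      def recommend_config(workload_class):
          return {
              "random-read-heavy": {"ssd_cache": "on", "readahead_kb": 0},
              "sequential":        {"ssd_cache": "off", "readahead_kb": 1024},
              "mixed":             {"ssd_cache": "on", "readahead_kb": 128},
          }[workload_class]

      stats = {"total_ops": 10_000, "random_ops": 8_200, "read_ops": 7_500}
      cls = classify_workload(stats)
      print(cls, recommend_config(cls))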

  15. The KASE approach to domain-specific software systems

    NASA Technical Reports Server (NTRS)

    Bhansali, Sanjay; Nii, H. Penny

    1992-01-01

    Designing software systems, like all design activities, is a knowledge-intensive task. Several studies have found that the predominant cause of failures among system designers is lack of knowledge: knowledge about the application domain, knowledge about design schemes, knowledge about design processes, etc. The goal of domain-specific software design systems is to explicitly represent knowledge relevant to a class of applications and use it to partially or completely automate various aspects of designing systems within that domain. The hope is that this would reduce the intellectual burden on the human designers and lead to more efficient software development. In this paper, we present a domain-specific system built on top of KASE, a knowledge-assisted software engineering environment being developed at the Stanford Knowledge Systems Laboratory. We introduce the main ideas underlying the construction of domain-specific systems within KASE, illustrate the application of the idea in the synthesis of a system for tracking aircraft from radar signals, and discuss some of the issues in constructing domain-specific systems.

  16. Development, Validation and Integration of the ATLAS Trigger System Software in Run 2

    NASA Astrophysics Data System (ADS)

    Keyes, Robert; ATLAS Collaboration

    2017-10-01

    The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware, and software, associated with various sub-detectors that must seamlessly cooperate in order to select one collision of interest out of every 40,000 delivered by the LHC every millisecond. These proceedings discuss the challenges, organization and workflow of the ongoing trigger software development, validation, and deployment. The goal of this development is to ensure that the most up-to-date algorithms are used to optimize the performance of the experiment. The goal of the validation is to ensure the reliability and predictability of the software performance. Integration tests are carried out to ensure that the software deployed to the online trigger farm during data-taking runs as desired. Trigger software is validated by emulating online conditions using a benchmark run and mimicking the reconstruction that occurs during normal data-taking. This exercise is computationally demanding and thus runs on the ATLAS high performance computing grid with high priority. Performance metrics ranging from low-level memory and CPU requirements to distributions and efficiencies of high-level physics quantities are visualized and validated by a range of experts. This is a multifaceted critical task that ties together many aspects of the experimental effort and thus directly influences the overall performance of the ATLAS experiment.

  17. Software Template for Instruction in Mathematics

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O.; Moebes, Travis A.; Beall, Anna

    2005-01-01

    Intelligent Math Tutor (IMT) is a software system that serves as a template for creating software for teaching mathematics. IMT can be easily connected to artificial-intelligence software and other analysis software through input and output of files. IMT provides an easy-to-use interface for generating courses that include tests that contain both multiple-choice and fill-in-the-blank questions, and enables tracking of test scores. IMT makes it easy to generate software for Web-based courses or to manufacture compact disks containing executable course software. IMT also can function as a Web-based application program, with features that run quickly on the Web, while retaining the intelligence of a high-level language application program with many graphics. IMT can be used to write application programs in text, graphics, and/or sound, so that the programs can be tailored to the needs of most handicapped persons. The course software generated by IMT follows a "back to basics" approach of teaching mathematics by inducing the student to apply creative mathematical techniques in the process of learning. Students are thereby made to discover mathematical fundamentals and thereby come to understand mathematics more deeply than they could through simple memorization.

  18. DOEDEF Software System, Version 2. 2: Operational instructions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meirans, L.

    The DOEDEF (Department of Energy Data Exchange Format) Software System is a collection of software routines written to facilitate the manipulation of IGES (Initial Graphics Exchange Specification) data. Typically, the IGES data has been produced by the IGES processors for a Computer-Aided Design (CAD) system, and the data manipulations are user-defined "flavoring" operations. The DOEDEF Software System is used in conjunction with the RIM (Relational Information Management) DBMS from Boeing Computer Services (Version 7, UD18 or higher). The three major pieces of the software system are: the Parser, which reads an ASCII IGES file and converts it to the RIM database equivalent; the Kernel, which provides the user with IGES-oriented interface routines to the database; and the Filewriter, which writes the RIM database to an IGES file.

  19. Adaptation and development of software simulation methodologies for cardiovascular engineering: present and future challenges from an end-user perspective

    PubMed Central

    Díaz-Zuccarini, V.; Narracott, A.J.; Burriesci, G.; Zervides, C.; Rafiroiu, D.; Jones, D.; Hose, D.R.; Lawford, P.V.

    2009-01-01

    This paper describes the use of diverse software tools in cardiovascular applications. These tools were primarily developed in the field of engineering and the applications presented push the boundaries of the software to address events related to venous and arterial valve closure, exploration of dynamic boundary conditions or the inclusion of multi-scale boundary conditions from protein to organ levels. The future of cardiovascular research and the challenges that modellers and clinicians face from validation to clinical uptake are discussed from an end-user perspective. PMID:19487202

  20. Adaptation and development of software simulation methodologies for cardiovascular engineering: present and future challenges from an end-user perspective.

    PubMed

    Díaz-Zuccarini, V; Narracott, A J; Burriesci, G; Zervides, C; Rafiroiu, D; Jones, D; Hose, D R; Lawford, P V

    2009-07-13

    This paper describes the use of diverse software tools in cardiovascular applications. These tools were primarily developed in the field of engineering and the applications presented push the boundaries of the software to address events related to venous and arterial valve closure, exploration of dynamic boundary conditions or the inclusion of multi-scale boundary conditions from protein to organ levels. The future of cardiovascular research and the challenges that modellers and clinicians face from validation to clinical uptake are discussed from an end-user perspective.

  1. Efficient Software Systems for Cardio Surgical Departments

    NASA Astrophysics Data System (ADS)

    Fountoukis, S. G.; Diomidous, M. J.

    2009-08-01

    Herein, the design, implementation and deployment of an object-oriented software system, suitable for the monitoring of cardio surgical departments, is investigated. Distributed design architectures are applied and the implemented software system can be deployed on distributed infrastructures. The software is flexible and adaptable to any cardio surgical environment regardless of the department resources used. The system exploits the relations and interdependencies of the successive bed positions that patients occupy at the different health care units during their stay in a cardio surgical department, to determine bed availability and to perform patient scheduling and instant rescheduling whenever necessary. It also aims at efficient and successful monitoring of the workings of cardio surgical departments.

  2. Using MATLAB Software on the Peregrine System | High-Performance Computing

    Science.gov Websites

    Learn how to use MATLAB software on the NREL Peregrine high-performance computing system, including running MATLAB in batch mode, using MATLAB on a node, and understanding the available MATLAB software versions and licenses.

  3. Cosimulation of embedded system using RTOS software simulator

    NASA Astrophysics Data System (ADS)

    Wang, Shihao; Duan, Zhigang; Liu, Mingye

    2003-09-01

    Embedded system design often employs co-simulation to verify a system's function; one widely used software verification tool is the Instruction Set Simulator (ISS). As a fully functional model of the target CPU, an ISS interprets the instructions of the embedded software step by step, which is usually time-consuming since it simulates at a low level. Hence the ISS often becomes the bottleneck of co-simulation in a complicated system. In this paper, a new software verification tool, the RTOS software simulator (RSS), is presented and the mechanism of its operation is described in full detail. In the RSS method, the RTOS API is extended and a hardware simulator driver is adopted to handle data exchange and synchronization between the two simulators.
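
    A toy sketch of the data-exchange and synchronization idea (an illustration of lock-step co-simulation in general, not the RSS or ISS mechanism from this paper): two simulated sides advance one step at a time and exchange values through blocking queues.

      # Toy lock-step co-simulation: a "software" side and a "hardware" side advance
      # one step at a time and exchange values through queues. This only illustrates
      # the synchronization idea; it is not the RSS/ISS mechanism from the paper.
      import queue
      import threading

      to_hw, to_sw = queue.Queue(), queue.Queue()
      STEPS = 3

      def software_sim():
          for step in range(STEPS):
              to_hw.put(("write_reg", step))      # software issues a bus write
              ack = to_sw.get()                   # block until hardware side replies
              print(f"[sw] step {step}: got {ack}")

      def hardware_sim():
          for step in range(STEPS):
              op, value = to_hw.get()             # block until software side acts
              to_sw.put(("ack", op, value * 2))   # pretend hardware result

      hw = threading.Thread(target=hardware_sim)
      hw.start()
      software_sim()
      hw.join()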

  4. Phase relations in the system In{sub 2}O{sub 3}-TiO{sub 2}-Fe{sub 2}O{sub 3} at 1100 C in air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, F.; Flores, M.J.R.; Kimizuka, N.

    1999-04-01

    Phase relations in the system In{sub 2}O{sub 3}-TiO{sub 2}-Fe{sub 2}O{sub 3} at 1100 C in air are determined by means of a classic quenching method. There exist In{sub 2}TiO{sub 5}, Fe{sub 2}TiO{sub 5} having a pseudo-Brookite-type phase and a new phase, In{sub 3}Ti{sub 2}FeO{sub 10} having a solid solution range from In{sub 2}O{sub 3}:TiO{sub 2}:Fe{sub 2}O{sub 3} = 4:6:1 to In{sub 2}O{sub 3}:TiO{sub 2}:Fe{sub 2}O{sub 3} = 0.384:0.464:0.152 (mole ratio) on the line InFeO{sub 3}-In{sub 2}Ti{sub 2}O{sub 7}. The crystal structures of In{sub 3}Ti{sub 2}FeO{sub 10} are pyrochlore related with a{sub m} = 5.9171 (5) {angstrom}, b{sub m} = 3.3696 (3) {angstrom}, c{sub m} = 6.3885 (6) {angstrom}, and {beta} = 108.02 (1){degree} in a monoclinic crystal system at 1100 C, and a{sub 0} = 5.9089 (5) {angstrom}, b{sub 0} = 3.3679 (3) {angstrom}, and c{sub 0} = 12.130 (1) {angstrom} in an orthorhombic system at 1200 C. The relationship between the lattice constants of these phases and those of the cubic pyrochlore type are approximately as follows: a{sub m} = {minus}{1/4}a{sub p} + ({minus}{1/2})b{sub p} + ({minus}{1/4})c{sub p}, b{sub m} = {minus}{1/4}a{sub p} + (0)b{sub p} + ({1/4})c{sub p}, c{sub m} = {1/4}a{sub p} + ({minus}{1/2})b{sub p} + ({1/4})c{sub p} and {beta} = 109.47{degree} in the monoclinic system, and a{sub 0} = {minus}{1/4}a{sub p} + ({minus}{1/2})b{sub p} + ({minus}{1/4})c{sub p}, b{sub 0} = {minus}{1/4}a{sub p} + (0)b{sub p} + ({1/4})c{sub p}, and c{sub 0} = 2/3a{sub p} + ({minus}2/3)b{sub p} + (2/3)c{sub p} in the orthorhombic system, where a{sub p} = b{sub p} = c{sub p} = 9.90 ({angstrom}) are the lattice constants of In{sub 2}Ti{sub 2}O{sub 7} having the cubic pyrochlore type. All solid solutions of In{sub 3}Ti{sub 2}FeO{sub 10} have incommensurate structures with a periodicity of q {times} b{sup *} (q = 0.281-0.356) along the b{sup *} axis and the stoichiometric phase has q = 1/3. InFeO{sub 3}, having a layered structure type, is unstable between 750

  5. End-to-end System Performance Simulation: A Data-Centric Approach

    NASA Astrophysics Data System (ADS)

    Guillaume, Arnaud; Laffitte de Petit, Jean-Luc; Auberger, Xavier

    2013-08-01

    In the early days of the space industry, the feasibility of Earth observation missions was directly driven by what could be achieved by the satellite. It was clear to everyone that the ground segment would be able to deal with the small amount of data sent by the payload. Over the years, the amount of data processed by spacecraft has increased drastically, placing more and more constraints on ground segment performance, and in particular on timeliness. Nowadays, many space systems require high data throughputs and short response times, with information coming from multiple sources and involving complex algorithms. It has become necessary to perform thorough end-to-end analyses of the full system in order to optimise its cost and efficiency, and sometimes even to assess the feasibility of the mission. This paper presents a novel framework developed by Astrium Satellites to meet these needs of timeliness evaluation and optimisation. This framework, named ETOS (for “End-to-end Timeliness Optimisation of Space systems”), provides a modelling process with associated tools, models and GUIs. These are integrated through a common data model and suitable adapters, with the aim of building space system simulators of the full end-to-end chain. A big challenge of such an environment is to integrate heterogeneous tools (each one being well adapted to part of the chain) into a relevant timeliness simulation.

  6. Software Tool Integrating Data Flow Diagrams and Petri Nets

    NASA Technical Reports Server (NTRS)

    Thronesbery, Carroll; Tavana, Madjid

    2010-01-01

    Data Flow Diagram - Petri Net (DFPN) is a software tool for analyzing other software to be developed. The full name of this program reflects its design, which combines the benefit of data-flow diagrams (which are typically favored by software analysts) with the power and precision of Petri-net models, without requiring specialized Petri-net training. (A Petri net is a particular type of directed graph, a description of which would exceed the scope of this article.) DFPN assists a software analyst in drawing and specifying a data-flow diagram, then translates the diagram into a Petri net, then enables graphical tracing of execution paths through the Petri net for verification, by the end user, of the properties of the software to be developed. In comparison with prior means of verifying the properties of software to be developed, DFPN makes verification by the end user more nearly certain, thereby making it easier to identify and correct misconceptions earlier in the development process, when correction is less expensive. After the verification by the end user, DFPN generates a printable system specification in the form of descriptions of processes and data.
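
    As a rough illustration of the Petri-net side of such a tool (a generic sketch, not the DFPN implementation itself), the structure below stores places with token counts and transitions with input and output places, and traces an execution path by firing enabled transitions.

      # Minimal Petri-net sketch: places hold token counts; a transition fires when
      # every input place has at least one token. Names are illustrative only.
      class PetriNet:
          def __init__(self):
              self.marking = {}      # place name -> token count
              self.transitions = {}  # transition name -> (input places, output places)

          def add_place(self, name, tokens=0):
              self.marking[name] = tokens

          def add_transition(self, name, inputs, outputs):
              self.transitions[name] = (list(inputs), list(outputs))

          def enabled(self, name):
              inputs, _ = self.transitions[name]
              return all(self.marking[p] > 0 for p in inputs)

          def fire(self, name):
              if not self.enabled(name):
                  raise ValueError(f"transition {name!r} is not enabled")
              inputs, outputs = self.transitions[name]
              for p in inputs:
                  self.marking[p] -= 1
              for p in outputs:
                  self.marking[p] += 1

      # Trace one execution path: raw data is validated, then stored.
      net = PetriNet()
      net.add_place("raw_data", tokens=1)
      net.add_place("validated", tokens=0)
      net.add_place("stored", tokens=0)
      net.add_transition("validate", ["raw_data"], ["validated"])
      net.add_transition("store", ["validated"], ["stored"])
      for t in ["validate", "store"]:
          net.fire(t)
          print(t, net.marking)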

  7. Quasi-real-time end-to-end simulations of ELT-scale adaptive optics systems on GPUs

    NASA Astrophysics Data System (ADS)

    Gratadour, Damien

    2011-09-01

    Our team has started the development of a code dedicated to GPUs for the simulation of AO systems at the E-ELT scale. It uses the CUDA toolkit and an original binding to Yorick (an open-source interpreted language) to provide the user with a comprehensive interface. In this paper we present the first performance analysis of our simulation code, showing its ability to provide Shack-Hartmann (SH) images and measurements at the kHz scale for a VLT-sized AO system and in quasi-real-time (up to 70 Hz) for ELT-sized systems on a single top-end GPU. The simulation code includes multi-layer atmospheric turbulence generation, ray tracing through these layers, image formation at the focal plane of every sub-aperture of an SH sensor using either natural or laser guide stars, and centroiding on these images using various algorithms. Turbulence is generated on the fly, giving the ability to simulate hours of observations without the need to load extremely large phase screens into global memory. Because of its performance, this code additionally provides the unique ability to test real-time controllers for future AO systems under nominal conditions.
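
    The centroiding step mentioned above commonly takes a centre-of-gravity form; the NumPy sketch below (a CPU illustration with assumed array shapes, not the GPU code described in the paper) computes spot offsets for a stack of sub-aperture images.

      # Centre-of-gravity centroiding over a stack of Shack-Hartmann sub-aperture
      # images of shape (n_subaps, ny, nx). Illustrative CPU/NumPy sketch only;
      # the simulator described above performs this step on the GPU.
      import numpy as np

      def cog_centroids(subap_images):
          imgs = np.asarray(subap_images, dtype=float)
          n, ny, nx = imgs.shape
          ys, xs = np.mgrid[0:ny, 0:nx]
          flux = imgs.sum(axis=(1, 2))
          flux = np.where(flux > 0, flux, 1.0)          # avoid division by zero
          cx = (imgs * xs).sum(axis=(1, 2)) / flux - (nx - 1) / 2.0
          cy = (imgs * ys).sum(axis=(1, 2)) / flux - (ny - 1) / 2.0
          return np.stack([cx, cy], axis=1)             # offsets from subap centre

      spots = np.zeros((2, 8, 8))
      spots[0, 4, 5] = 1.0   # spot displaced +1.5 px in x, +0.5 px in y
      spots[1, 3, 3] = 1.0   # spot displaced -0.5 px in both axes
      print(cog_centroids(spots))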

  8. ACES: Space shuttle flight software analysis expert system

    NASA Technical Reports Server (NTRS)

    Satterwhite, R. Scott

    1990-01-01

    The Analysis Criteria Evaluation System (ACES) is a knowledge based expert system that automates the final certification of the Space Shuttle onboard flight software. Guidance, navigation and control of the Space Shuttle through all its flight phases are accomplished by a complex onboard flight software system. This software is reconfigured for each flight to allow thousands of mission-specific parameters to be introduced and must therefore be thoroughly certified prior to each flight. This certification is performed in ground simulations by executing the software in the flight computers. Flight trajectories from liftoff to landing, including abort scenarios, are simulated and the results are stored for analysis. The current methodology of performing this analysis is repetitive and requires many man-hours. The ultimate goals of ACES are to capture the knowledge of the current experts and improve the quality and reduce the manpower required to certify the Space Shuttle onboard flight software.

  9. MUST - An integrated system of support tools for research flight software engineering. [Multipurpose User-oriented Software Technology

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Foudriat, E. C.; Will, R. W.

    1977-01-01

    The objectives of NASA's MUST (Multipurpose User-oriented Software Technology) program at Langley Research Center are to cut the cost of producing software which effectively utilizes digital systems for flight research. These objectives will be accomplished by providing an integrated system of support software tools for use throughout the research flight software development process. A description of the overall MUST program and its progress toward the release of a first MUST system will be presented. This release includes: a special interactive user interface, a library of subroutines, assemblers, a compiler, automatic documentation tools, and a test and simulation system.

  10. Dielectric relaxation in 0-3 PVDF-Ba(Fe{sub 1/2}Nb{sub 1/2})O{sub 3} composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chandra, K. P., E-mail: kpchandra23@gmail.com; Singh, Rajan; Kulkarni, A. R., E-mail: ajit2957@gmail.com

    2016-05-06

    (1-x)PVDF-xBa(Fe{sub 1/2}Nb{sub 1/2})O{sub 3} ceramic-polymer composites with x = 0.025, 0.05, 0.10, 0.15 were prepared using a melt-mixing technique. The crystal symmetry, space group and unit cell dimensions were determined from the XRD data of Ba(Fe{sub 1/2}Nb{sub 1/2})O{sub 3} using FullProf software, whereas crystallite size and lattice strain were estimated using the Williamson-Hall approach. The distribution of Ba(Fe{sub 1/2}Nb{sub 1/2})O{sub 3} particles in the PVDF matrix was examined on the cryo-fractured surfaces using a scanning electron microscope. Cole-Cole and pseudo Cole-Cole analysis suggested the dielectric relaxation in this system to be of non-Debye type. The filler-concentration-dependent real and imaginary parts of the dielectric constant, as well as the ac conductivity data, followed definite exponential-growth-type trends of variation.

  11. 78 FR 47015 - Software Requirement Specifications for Digital Computer Software Used in Safety Systems of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-02

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Requirement Specifications for Digital Computer Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory Commission... issuing a revised regulatory guide (RG), revision 1 of RG 1.172, ``Software Requirement Specifications for...

  12. Availability of software services for a hospital information system.

    PubMed

    Sakamoto, N

    1998-03-01

    Hospital information systems (HISs) are becoming more important and covering more parts in daily hospital operations as order-entry systems become popular and electronic charts are introduced. Thus, HISs today need to be able to provide necessary services for hospital operations for a 24-h day, 365 days a year. The provision of services discussed here does not simply mean the availability of computers, in which all that matters is that the computer is functioning. It means the provision of necessary information for hospital operations by the computer software, and we will call it the availability of software services. HISs these days are mostly client-server systems. To increase availability of software services in these systems, it is not enough to just use system structures that are highly reliable in existing host-centred systems. Four main components which support availability of software services are network systems, client computers, server computers, and application software. In this paper, we suggest how to structure these four components to provide the minimum requested software services even if a part of the system stops to function. The network system should be double-protected in stratus using Asynchronous Transfer Mode (ATM) as its base network. Client computers should be fat clients with as much application logic as possible, and reference information which do not require frequent updates (master files, for example) should be replicated in clients. It would be best if all server computers could be double-protected. However, if that is physically impossible, one database file should be made accessible by several server computers. Still, at least the basic patients' information and the latest clinical records should be double-protected physically. Application software should be tested carefully before introduction. Different versions of the application software should always be kept and managed in case the new version has problems. If a hospital

  13. Clinical records anonymisation and text extraction (CRATE): an open-source software system.

    PubMed

    Cardinal, Rudolf N

    2017-04-26

    Electronic medical records contain information of value for research, but contain identifiable and often highly sensitive confidential information. Patient-identifiable information cannot in general be shared outside clinical care teams without explicit consent, but anonymisation/de-identification allows research uses of clinical data without explicit consent. This article presents CRATE (Clinical Records Anonymisation and Text Extraction), an open-source software system with separable functions: (1) it anonymises or de-identifies arbitrary relational databases, with sensitivity and precision similar to previous comparable systems; (2) it uses public secure cryptographic methods to map patient identifiers to research identifiers (pseudonyms); (3) it connects relational databases to external tools for natural language processing; (4) it provides a web front end for research and administrative functions; and (5) it supports a specific model through which patients may consent to be contacted about research. Creation and management of a research database from sensitive clinical records with secure pseudonym generation, full-text indexing, and a consent-to-contact process is possible and practical using entirely free and open-source software.
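
    As a rough illustration of item (2), mapping patient identifiers to research pseudonyms with a keyed cryptographic function, the snippet below applies an HMAC to the identifier. This is a generic sketch of the idea rather than CRATE's actual implementation, and the key handling shown is deliberately simplified.

      # Generic sketch of pseudonym generation via a keyed hash (HMAC-SHA256).
      # Not CRATE's actual code; key management here is deliberately simplified.
      import hmac
      import hashlib

      SECRET_KEY = b"replace-with-a-securely-stored-random-key"  # hypothetical

      def pseudonym(patient_id: str, key: bytes = SECRET_KEY) -> str:
          """Map a patient identifier to a stable research identifier.

          The mapping is deterministic (the same patient always gets the same
          pseudonym) but cannot be reversed without the secret key.
          """
          digest = hmac.new(key, patient_id.encode("utf-8"), hashlib.sha256)
          return digest.hexdigest()[:16]  # truncated for readability

      print(pseudonym("NHS1234567"))
      print(pseudonym("NHS1234567"))  # identical output: stable pseudonym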

  14. Phobos lander coding system: Software and analysis

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Pollara, F.

    1988-01-01

    The software developed for the decoding system used in the telemetry link of the Phobos Lander mission is described. Encoders and decoders are provided to cover the three possible telemetry configurations. The software can be used to decode actual data or to simulate the performance of the telemetry system. The theoretical properties of the codes chosen for this mission are analyzed and discussed.

  15. Computation of Ground-State Properties in Molecular Systems: Back-Propagation with Auxiliary-Field Quantum Monte Carlo.

    PubMed

    Motta, Mario; Zhang, Shiwei

    2017-11-14

    We address the computation of ground-state properties of chemical systems and realistic materials within the auxiliary-field quantum Monte Carlo method. The phase constraint to control the Fermion phase problem requires the random walks in Slater determinant space to be open-ended with branching. This in turn makes it necessary to use back-propagation (BP) to compute averages and correlation functions of operators that do not commute with the Hamiltonian. Several BP schemes are investigated, and their optimization with respect to the phaseless constraint is considered. We propose a modified BP method for the computation of observables in electronic systems, discuss its numerical stability and computational complexity, and assess its performance by computing ground-state properties in several molecular systems, including small organic molecules.

  16. Integrated seat frame and back support

    DOEpatents

    Martin, Leo

    1999-01-01

    An integrated seating device comprises a seat frame having a front end and a rear end. The seat frame has a double wall defining an exterior wall and an interior wall. The rear end of the seat frame has a slot cut through both the exterior wall and the interior wall. The front end of the seat frame has a slot cut through just the interior wall thereof. A back support of generally L shape has a horizontal member and a generally vertical member which is substantially perpendicular to the horizontal member. The horizontal member is sized to be threaded through the rear slot and is fitted into the front slot. Welded slat means secures the back support to the seat frame to result in an integrated seating device.

  17. Lithium isotopic systematics of submarine vent fluids from arc and back-arc hydrothermal systems in the western Pacific

    NASA Astrophysics Data System (ADS)

    Araoka, Daisuke; Nishio, Yoshiro; Gamo, Toshitaka; Yamaoka, Kyoko; Kawahata, Hodaka

    2016-10-01

    The Li concentration and isotopic composition (δ7Li) in submarine vent fluids are important for oceanic Li budget and potentially useful for investigating hydrothermal systems deep under the seafloor because hydrothermal vent fluids are highly enriched in Li relative to seawater. Although Li isotopic geochemistry has been studied at mid-ocean-ridge (MOR) hydrothermal sites, in arc and back-arc settings Li isotopic composition has not been systematically investigated. Here we determined the δ7Li and 87Sr/86Sr values of 11 end-member fluids from 5 arc and back-arc hydrothermal systems in the western Pacific and examined Li behavior during high-temperature water-rock interactions in different geological settings. In sediment-starved hydrothermal systems (Manus Basin, Izu-Bonin Arc, Mariana Trough, and North Fiji Basin), the Li concentrations (0.23-1.30 mmol/kg) and δ7Li values (+4.3‰ to +7.2‰) of the end-member fluids are explained mainly by dissolution-precipitation model during high-temperature seawater-rock interactions at steady state. Low Li concentrations are attributable to temperature-related apportioning of Li in rock into the fluid phase and phase separation process. Small variation in Li among MOR sites is probably caused by low-temperature alteration process by diffusive hydrothermal fluids under the seafloor. In contrast, the highest Li concentrations (3.40-5.98 mmol/kg) and lowest δ7Li values (+1.6‰ to +2.4‰) of end-member fluids from the Okinawa Trough demonstrate that the Li is predominantly derived from marine sediments. The variation of Li in sediment-hosted sites can be explained by the differences in degree of hydrothermal fluid-sediment interactions associated with the thickness of the marine sediment overlying these hydrothermal sites.

  18. FALL-BACK DISKS IN LONG AND SHORT GAMMA-RAY BURSTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannizzo, J. K.; Troja, E.; Gehrels, N., E-mail: John.K.Cannizzo@nasa.gov

    2011-06-10

    We present time-dependent numerical calculations for fall-back disks relevant to gamma-ray bursts (GRBs) in which the disk of material surrounding the black hole powering the GRB jet modulates the mass flow and hence the strength of the jet. Given the initial existence of a small mass {approx}< 10{sup -4} M{sub sun} near the progenitor with a circularization radius {approx}10{sup 10}-10{sup 11} cm, an unavoidable consequence will be the formation of an 'external disk' whose outer edge continually moves to larger radii due to angular momentum transport and lack of a confining torque. For long GRBs, if the mass distribution in the initial fall-back disk traces the progenitor envelope, then a radius {approx}10{sup 11} cm gives a timescale {approx}10{sup 4} s for the X-ray plateau. For late times t > 10{sup 7} s a steepening due to a cooling front in the disk may have observational support in GRB 060729. For short GRBs, one expects most of the mass initially to lie at small radii <10{sup 8} cm; however, the presence of even a trace amount {approx}10{sup -9} M{sub sun} of high angular momentum material can give a brief plateau in the light curve. By studying the plateaus in the X-ray decay of GRBs, which can last up to {approx}10{sup 4} s after the prompt emission, Dainotti et al. find an apparent inverse relation between the X-ray luminosity at the end of the plateau and the duration of the plateau. We show that this relation may simply represent the fact that one is biased against detecting faint plateaus and therefore preferentially sampling the more energetic GRBs. If, however, there were a standard reservoir in fall-back mass, our model could reproduce the inverse X-ray luminosity-duration relation. We emphasize that we do not address the very steep, initial decays immediately following the prompt emission, which have been modeled by Lindner et al. as fall back of the progenitor core, and may entail the accretion of {approx}> 1 M{sub sun}.

  19. Developing the E-Scape Software System

    ERIC Educational Resources Information Center

    Derrick, Karim

    2012-01-01

    Most innovations have contextual pre-cursors that prompt new ways of thinking and in their turn help to give form to the new reality. This was the case with the e-scape software development process. The origins of the system existed in software components and ideas that we had developed through previous projects, but the ultimate direction we took…

  20. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Beckman-Davies, C. S.; Benzinger, L.; Beshers, G.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.

    1986-01-01

    Research into software development is required to reduce its production cost and to improve its quality. Modern software systems, such as the embedded software required for NASA's space station initiative, stretch current software engineering techniques. The requirement to build large, reliable, and maintainable software systems increases with time. Much theoretical and practical research is in progress to improve software engineering techniques. One such approach is to build a software system or environment which directly supports the software engineering process; this is the aim of the SAGA project, which comprises the research necessary to design and build a software development environment that automates the software engineering process. Progress under SAGA is described.

  1. Automated Cryocooler Monitor and Control System Software

    NASA Technical Reports Server (NTRS)

    Britchcliffe, Michael J.; Conroy, Bruce L.; Anderson, Paul E.; Wilson, Ahmad

    2011-01-01

    This software is used in an automated cryogenic control system developed to monitor and control the operation of small-scale cryocoolers. The system was designed to automate the cryogenically cooled low-noise amplifier system described in "Automated Cryocooler Monitor and Control System" (NPO-47246), NASA Tech Briefs, Vol. 35, No. 5 (May 2011), page 7a. The software contains algorithms necessary to convert non-linear output voltages from the cryogenic diode-type thermometers and vacuum pressure and helium pressure sensors, to temperature and pressure units. The control function algorithms use the monitor data to control the cooler power, vacuum solenoid, vacuum pump, and electrical warm-up heaters. The control algorithms are based on a rule-based system that activates the required device based on the operating mode. The external interface is Web-based. It acts as a Web server, providing pages for monitor, control, and configuration. No client software from the external user is required.
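
    A toy sketch of the two steps described above: converting a non-linear diode voltage to a temperature by interpolating a calibration table, then applying a rule-based control decision. The calibration points, threshold and mode names below are invented for illustration and are not the system's actual values.

      # Toy monitor/control sketch: convert a non-linear diode voltage to a
      # temperature by interpolating a calibration table, then apply a simple rule.
      # Calibration points and thresholds below are invented for illustration.
      import bisect

      # (voltage V, temperature K) pairs, voltage increasing as temperature drops
      CAL = [(0.50, 300.0), (0.95, 80.0), (1.10, 20.0), (1.60, 4.2)]
      VOLTS = [v for v, _ in CAL]

      def diode_temperature(voltage):
          """Piecewise-linear interpolation of the calibration curve."""
          if voltage <= VOLTS[0]:
              return CAL[0][1]
          if voltage >= VOLTS[-1]:
              return CAL[-1][1]
          i = bisect.bisect_right(VOLTS, voltage)
          (v0, t0), (v1, t1) = CAL[i - 1], CAL[i]
          return t0 + (t1 - t0) * (voltage - v0) / (v1 - v0)

      def heater_command(temperature_k, mode):
          """Rule-based control: drive the warm-up heater only in warm-up mode
          and only while the cold stage is still below a target temperature."""
          return "HEATER_ON" if mode == "warmup" and temperature_k < 290.0 else "HEATER_OFF"

      t = diode_temperature(1.35)
      print(round(t, 1), heater_command(t, mode="warmup"))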

  2. POLYSHIFT Communications Software for the Connection Machine System CM-200

    DOE PAGES

    George, William; Brickner, Ralph G.; Johnsson, S. Lennart

    1994-01-01

    We describe the use and implementation of a polyshift function PSHIFT for circular shifts and end-off shifts. Polyshift is useful in many scientific codes using regular grids, such as finite difference codes in several dimensions, multigrid codes, molecular dynamics computations, and lattice gauge physics computations, such as quantum chromodynamics (QCD) calculations. Our implementation of the PSHIFT function on the Connection Machine systems CM-2 and CM-200 offers a speedup of up to a factor of 3–4 compared with CSHIFT when the local data motion within a node is small. The PSHIFT routine is included in the Connection Machine Scientific Software Library (CMSSL).
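
    To make the two shift modes concrete, the short sketch below applies them to a plain Python list: a circular shift wraps values around the grid boundary, while an end-off shift discards them and fills the vacated cells with a boundary value. This illustrates the concept only and is not the CMSSL interface.

      # Circular vs. end-off shifts of a 1-D grid, illustrating the two modes that
      # PSHIFT provides (concept sketch only; not the CMSSL/CM Fortran interface).
      def circular_shift(grid, offset):
          """Values pushed off one end re-enter at the other end."""
          n = len(grid)
          offset %= n
          return grid[-offset:] + grid[:-offset] if offset else list(grid)

      def end_off_shift(grid, offset, fill=0):
          """Values pushed off the end are discarded; vacated cells get `fill`."""
          n = len(grid)
          if offset >= 0:
              return [fill] * min(offset, n) + grid[:max(n - offset, 0)]
          offset = -offset
          return grid[min(offset, n):] + [fill] * min(offset, n)

      g = [1, 2, 3, 4, 5]
      print(circular_shift(g, 2))   # [4, 5, 1, 2, 3]
      print(end_off_shift(g, 2))    # [0, 0, 1, 2, 3]
      print(end_off_shift(g, -2))   # [3, 4, 5, 0, 0]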

  3. Software fault tolerance in computer operating systems

    NASA Technical Reports Server (NTRS)

    Iyer, Ravishankar K.; Lee, Inhwan

    1994-01-01

    This chapter provides data and analysis of the dependability and fault tolerance for three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, which was originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors which results in the backup execution (the processor state and the sequence of events occurring) being different from the original execution is a major reason for the measured software fault tolerance. The IBM/MVS system fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.

  4. Common Database Interface for Heterogeneous Software Engineering Tools.

    DTIC Science & Technology

    1987-12-01

    Master's thesis, Air Force Institute of Technology, Air University. Subject terms include database management systems, programming (computers), computer files, information transfer, and interfaces; the report discusses the System 690 configuration, database functions, software engineering environments, and a common data manager interface for heterogeneous software engineering tools.

  5. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex system engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort to further insure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team for addressing fault management early in the development lifecycle for the SLS initiative. As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that

  6. The La{sub 2}S{sub 3}-LaS{sub 2} system: Thermodynamic and kinetic study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasilyeva, I.G., E-mail: kamarz@niic.nsc.r; Nikolaev, R.E.

    2010-08-15

    A detailed thermodynamic study of the LaS{sub 2}-La{sub 2}S{sub 3} system in the temperature range 350-1000 {sup o}C was performed, starting from high quality crystals LaS{sub 2} as the highest polysulfide in the system, and using a sensitive static tensimetric method with a quartz Bourdon gauge and a membrane as a null-point instrument. The p{sub S}-T-x diagram obtained has shown that the phase region covering the composition between LaS{sub 2} and La{sub 2}S{sub 3}, which was previously described as a single grossly nonstoichiometric phase, consists of three discrete stoichiometric phases, LaS{sub 2.00}, LaS{sub 1.91}, and LaS{sub 1.76}, where compositions could be determined with an accuracy of {+-}0.01 f.u. The thermodynamic characteristics of evaporation of the polysulfides as well as standard heat of LaS{sub 2} formation were calculated. The role of kinetics in the formation of ordered superstructures of sulfur-poorer polysulfides with different formal concentration of vacancies is considered. - Graphical abstract: The p{sub S}-T stability fields for La-polysulfides in the concentration range between LaS{sub 2} and La{sub 2}S{sub 3}.

  7. Architected Agile Solutions for Software-Reliant Systems

    NASA Astrophysics Data System (ADS)

    Boehm, Barry; Lane, Jo Ann; Koolmanojwong, Supannika; Turner, Richard

    Systems are becoming increasingly reliant on software due to needs for rapid fielding of “70% capabilities,” interoperability, net-centricity, and rapid adaptation to change. The latter need has led to increased interest in agile methods of software development, in which teams rely on shared tacit interpersonal knowledge rather than explicit documented knowledge. However, such systems often need to be scaled up to higher levels of performance and assurance, requiring stronger architectural support. Several organizations have recently transformed themselves by developing successful combinations of agility and architecture that can scale to projects of up to 100 personnel. This chapter identifies a set of key principles for such architected agile solutions for software-reliant systems, provides guidance for how much architecting is enough, and illustrates the key principles with several case studies.

  8. GNOCIS an update of the generic NO{sub x} control intelligent system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holmes, R.; Mayes, I.; Irons, R.

    1996-01-01

    GNOCIS is an on-line enhancement to existing power plant Digital Control Systems (DCS) designed to reduce NO{sub x} emissions while meeting other operational constraints, such as heat rate and CO emissions. It can also be used to minimize unburned carbon while meeting a specified NO{sub x} limit, or any combination of emissions/performance variables that can be quantified by a common metric and are affected by DCS-adjustable parameters. The core of the system is an adaptive neural network model of the NO{sub x} generation characteristics of the boiler. The software applies an optimizing procedure to identify the best setpoints for the plant. The recommended setpoints can be either conveyed to the operator via the DCS in an advisory mode, or implemented automatically in a closed-loop mode. GNOCIS is designed to run on a stand-alone workstation connected to the DCS via the data highway. Sensor validation techniques have been incorporated. The goal for GNOCIS is to deliver 10-35% reductions in NO{sub x} from baseline conditions while maintaining or improving other operational constraints. Preliminary results are presented for demonstrations at two power plants: (1) a 500-MW T-fired boiler at PowerGen's Kingsnorth Station, and (2) a 250-MW opposed-fired boiler at Alabama Power Company's Gaston Station.

  9. Integrated dry NO{sub x}/SO{sub 2} emissions control system performance summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, T.; Muzio, L.J.; Smith, R.

    1997-12-31

    The Integrated Dry NO{sub x}/SO{sub 2} Emissions Control System was installed at Public Service Company of Colorado's Arapahoe 4 generating station in 1992 in cooperation with the US Department of Energy (DOE) and the Electric Power Research Institute (EPRI). This full-scale 100 MWe demonstration combines low-NO{sub x} burners, overfire air, and selective non-catalytic reduction (SNCR) for NO{sub x} control and dry sorbent injection (DSI) with or without humidification for SO{sub 2} control. Operation and testing of the Integrated Dry NO{sub x}/SO{sub 2} Emissions Control System began in August 1992 and will continue through 1996. Results of the NO{sub x} control technologies show that the original system goal of 70% NO{sub x} removal has been easily met and the combustion and SNCR systems can achieve NO{sub x} removals of up to 80% at full load. Duct injection of commercial calcium hydroxide has achieved a maximum SO{sub 2} removal of nearly 40% while humidifying the flue gas to a 20 F approach to saturation. Sodium-based dry sorbent injection has provided SO{sub 2} removal of over 70% without the occurrence of a visible NO{sub 2} plume. Recent test work has improved SNCR performance at low loads and has demonstrated that combined dry sodium injection and SNCR yields lower NO{sub 2} levels and lower NH{sub 3} slip than either technology alone.

  10. Experimental recovery of quantum correlations in absence of system-environment back-action.

    PubMed

    Xu, Jin-Shi; Sun, Kai; Li, Chuan-Feng; Xu, Xiao-Ye; Guo, Guang-Can; Andersson, Erika; Lo Franco, Rosario; Compagno, Giuseppe

    2013-01-01

    Revivals of quantum correlations in composite open quantum systems are a useful dynamical feature against detrimental effects of the environment. Their occurrence is attributed to flows of quantum information back and forth from systems to quantum environments. However, revivals also show up in models where the environment is classical, thus unable to store quantum correlations, and forbids system-environment back-action. This phenomenon opens basic issues about its interpretation involving the role of classical environments, memory effects, collective effects and system-environment correlations. Moreover, an experimental realization of back-action-free quantum revivals has applicative relevance as it leads to recover quantum resources without resorting to more demanding structured environments and correction procedures. Here we introduce a simple two-qubit model suitable to address these issues. We then report an all-optical experiment which simulates the model and permits us to recover and control, against decoherence, quantum correlations without back-action. We finally give an interpretation of the phenomenon by establishing the roles of the involved parties.

  11. SSR_pipeline--computer software for the identification of microsatellite sequences from paired-end Illumina high-throughput DNA sequence data

    USGS Publications Warehouse

    Miller, Mark P.; Knaus, Brian J.; Mullins, Thomas D.; Haig, Susan M.

    2013-01-01

    SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (SSRs; for example, microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains three analysis modules along with a fourth control module that can be used to automate analyses of large volumes of data. The modules are used to (1) identify the subset of paired-end sequences that pass quality standards, (2) align paired-end reads into a single composite DNA sequence, and (3) identify sequences that possess microsatellites conforming to user specified parameters. Each of the three separate analysis modules also can be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc). All modules are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, Windows). The program suite relies on a compiled Python extension module to perform paired-end alignments. Instructions for compiling the extension from source code are provided in the documentation. Users who do not have Python installed on their computers or who do not have the ability to compile software also may choose to download packaged executable files. These files include all Python scripts, a copy of the compiled extension module, and a minimal installation of Python in a single binary executable. See program documentation for more information.
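
    Module (3), finding repeats that conform to user-specified parameters, reduces to scanning each read for tandemly repeated motifs. The sketch below is a generic illustration (not SSR_pipeline's actual code or parameter names) that reports perfect 2-6 bp repeats above a minimum copy number using a backreference regular expression.

      # Generic microsatellite (SSR) scan over a DNA sequence: report perfect
      # tandem repeats of 2-6 bp motifs with at least `min_repeats` copies.
      # Illustrative sketch only; SSR_pipeline's real modules and options differ.
      import re

      def find_ssrs(sequence, min_motif=2, max_motif=6, min_repeats=4):
          seq = sequence.upper()
          hits = []
          # ([ACGT]{m,M}?) captures a candidate motif; \1{n,} requires tandem copies.
          pattern = re.compile(
              r"([ACGT]{%d,%d}?)\1{%d,}" % (min_motif, max_motif, min_repeats - 1))
          for m in pattern.finditer(seq):
              motif = m.group(1)
              copies = len(m.group(0)) // len(motif)
              hits.append((m.start(), motif, copies))
          return hits

      read = "TTACACACACACGGTAGCAGCAGCAGCAGTTT"
      for start, motif, copies in find_ssrs(read):
          print(f"pos={start} motif={motif} x{copies}")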

  12. Software Coherence in Multiprocessor Memory Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bolosky, William Joseph

    1993-01-01

    Processors are becoming faster and multiprocessor memory interconnection systems are not keeping up. Therefore, it is necessary to have threads and the memory they access as near one another as possible. Typically, this involves putting memory or caches with the processors, which gives rise to the problem of coherence: if one processor writes an address, any other processor reading that address must see the new value. This coherence can be maintained by the hardware or with software intervention. Systems of both types have been built in the past; the hardware-based systems tended to outperform the software ones. However, the ratio of processor to interconnect speed is now so high that the extra overhead of the software systems may no longer be significant. This issue is explored both by implementing a software maintained system and by introducing and using the technique of offline optimal analysis of memory reference traces. It finds that in properly built systems, software maintained coherence can perform comparably to or even better than hardware maintained coherence. The architectural features necessary for efficient software coherence to be profitable include a small page size, a fast trap mechanism, and the ability to execute instructions while remote memory references are outstanding.

  13. Architecture of the software for LAMOST fiber positioning subsystem

    NASA Astrophysics Data System (ADS)

    Peng, Xiaobo; Xing, Xiaozheng; Hu, Hongzhuan; Zhai, Chao; Li, Weimin

    2004-09-01

    The architecture of the software which controls the LAMOST fiber positioning sub-system is described. The software is composed of two parts: a main control program running on a computer and a unit controller program stored in the ROM of an MCS51 single-chip microcomputer. The functions of the software include Client/Server model establishment, observation planning, collision handling, data transmission, pulse generation, CCD control, image capture and processing, and data analysis. Particular attention is paid to the ways in which different parts of the software communicate. Software techniques for multithreading, socket programming, Microsoft Windows message handling, and serial communication are also discussed.
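
    A minimal sketch of the client/server command path mentioned above, with an invented message format (the real LAMOST protocol, message names and port are not described here): the server accepts a short positioning command over a TCP socket and acknowledges it.

      # Minimal TCP client/server exchange illustrating the Client/Server model and
      # socket programming mentioned above. The "MOVE" message format and the port
      # are invented for this sketch; the real LAMOST protocol differs.
      import socket
      import threading

      HOST, PORT = "127.0.0.1", 5050
      ready = threading.Event()

      def server():
          with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
              srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
              srv.bind((HOST, PORT))
              srv.listen(1)
              ready.set()                                # tell the client we are listening
              conn, _ = srv.accept()
              with conn:
                  cmd = conn.recv(1024).decode().strip()  # e.g. "MOVE 1023 12.5 3.1"
                  unit, r, theta = cmd.split()[1:]
                  conn.sendall(f"ACK unit={unit} r={r} theta={theta}\n".encode())

      t = threading.Thread(target=server)
      t.start()
      ready.wait()

      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
          cli.connect((HOST, PORT))
          cli.sendall(b"MOVE 1023 12.5 3.1\n")
          print(cli.recv(1024).decode().strip())
      t.join()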

  14. The ALMA common software: dispatch from the trenches

    NASA Astrophysics Data System (ADS)

    Schwarz, J.; Sommer, H.; Jeram, B.; Sekoranja, M.; Chiozzi, G.; Grimstrup, A.; Caproni, A.; Paredes, C.; Allaert, E.; Harrington, S.; Turolla, S.; Cirami, R.

    2008-07-01

    The ALMA Common Software (ACS) provides both an application framework and CORBA-based middleware for the distributed software system of the Atacama Large Millimeter Array. Building upon open-source tools such as the JacORB, TAO and OmniORB ORBs, ACS supports the development of component-based software in any of three languages: Java, C++ and Python. Now in its seventh major release, ACS has matured, both in its feature set as well as in its reliability and performance. However, it is only recently that the ALMA observatory's hardware and application software has reached a level at which it can exploit and challenge the infrastructure that ACS provides. In particular, the availability of an Antenna Test Facility(ATF) at the site of the Very Large Array in New Mexico has enabled us to exercise and test the still evolving end-to-end ALMA software under realistic conditions. The major focus of ACS, consequently, has shifted from the development of new features to consideration of how best to use those that already exist. Configuration details which could be neglected for the purpose of running unit tests or skeletal end-to-end simulations have turned out to be sensitive levers for achieving satisfactory performance in a real-world environment. Surprising behavior in some open-source tools has required us to choose between patching code that we did not write or addressing its deficiencies by implementing workarounds in our own software. We will discuss these and other aspects of our recent experience at the ATF and in simulation.

  15. Software use cases to elicit the software requirements analysis within the ASTRI project

    NASA Astrophysics Data System (ADS)

    Conforti, Vito; Antolini, Elisa; Bonnoli, Giacomo; Bruno, Pietro; Bulgarelli, Andrea; Capalbi, Milvia; Fioretti, Valentina; Fugazza, Dino; Gardiol, Daniele; Grillo, Alessandro; Leto, Giuseppe; Lombardi, Saverio; Lucarelli, Fabrizio; Maccarone, Maria Concetta; Malaguti, Giuseppe; Pareschi, Giovanni; Russo, Federico; Sangiorgi, Pierluca; Schwarz, Joseph; Scuderi, Salvatore; Tanci, Claudio; Tosti, Gino; Trifoglio, Massimo; Vercellone, Stefano; Zanmar Sanchez, Ricardo

    2016-07-01

    The Italian National Institute for Astrophysics (INAF) is leading the Astrofisica con Specchi a Tecnologia Replicante Italiana (ASTRI) project whose main purpose is the realization of small size telescopes (SST) for the Cherenkov Telescope Array (CTA). The first goal of the ASTRI project has been the development and operation of an innovative end-to-end telescope prototype using a dual-mirror optical configuration (SST-2M) equipped with a camera based on silicon photo-multipliers and very fast read-out electronics. The ASTRI SST-2M prototype has been installed in Italy at the INAF "M.G. Fracastoro" Astronomical Station located at Serra La Nave, on Mount Etna, Sicily. This prototype will be used to test several mechanical, optical, control hardware and software solutions which will be used in the ASTRI mini-array, comprising nine telescopes proposed to be placed at the CTA southern site. The ASTRI mini-array is a collaborative and international effort led by INAF and carried out by Italy, Brazil and South-Africa. We present here the use cases, through UML (Unified Modeling Language) diagrams and text details, that describe the functional requirements of the software that will manage the ASTRI SST-2M prototype, and the lessons learned thanks to these activities. We intend to adopt the same approach for the Mini Array Software System that will manage the ASTRI miniarray operations. Use cases are of importance for the whole software life cycle; in particular they provide valuable support to the validation and verification activities. Following the iterative development approach, which breaks down the software development into smaller chunks, we have analysed the requirements, developed, and then tested the code in repeated cycles. The use case technique allowed us to formalize the problem through user stories that describe how the user procedurally interacts with the software system. Through the use cases we improved the communication among team members, fostered

  16. Software For Design Of Life-Support Systems

    NASA Technical Reports Server (NTRS)

    Rudokas, Mary R.; Cantwell, Elizabeth R.; Robinson, Peter I.; Shenk, Timothy W.

    1991-01-01

    Design Assistant Workstation (DAWN) computer program is prototype of expert software system for analysis and design of regenerative, physical/chemical life-support systems that revitalize air, reclaim water, produce food, and treat waste. Incorporates both conventional software for quantitative mathematical modeling of physical, chemical, and biological processes and expert system offering user stored knowledge about materials and processes. Constructs task tree as it leads user through simulated process, offers alternatives, and indicates where alternative not feasible. Also enables user to jump from one design level to another.

  17. Reducing Risk in DoD Software-Intensive Systems Development

    DTIC Science & Technology

    2016-03-01

    ...intensive systems development risk. This research addresses the use of the Technical Readiness Assessment (TRA) using the nine-level software Technology...The software TRLs are ineffective in reducing technical risk for the software component development. Without the software TRLs, there is no...effective method to perform software TRA or reduce the technical development risk. The software component will behave as a new, untried technology in nearly...

  18. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Badger, W.; Beckman, C. S.; Beshers, G.; Hammerslag, D.; Kimball, J.; Kirslis, P. A.; Render, H.; Richards, P.; Terwilliger, R.

    1984-01-01

    The project to automate the management of software production systems is described. The SAGA system is a software environment that is designed to support most of the software development activities that occur in a software lifecycle. The system can be configured to support specific software development applications using given programming languages, tools, and methodologies. Meta-tools are provided to ease configuration. Several major components of the SAGA system are completed to prototype form. The construction methods are described.

  19. Improved interoceptive awareness in chronic low back pain: a comparison of Back school versus Feldenkrais method.

    PubMed

    Paolucci, Teresa; Zangrando, Federico; Iosa, Marco; De Angelis, Simona; Marzoli, Caterina; Piccinini, Giulia; Saraceni, Vincenzo Maria

    2017-05-01

    To determine the efficacy of the Feldenkrais method for relieving pain in patients with chronic low back pain (CLBP) and the improvement of interoceptive awareness. This study was designed as a single-blind randomized controlled trial. Fifty-three patients with a diagnosis of CLBP for at least 3 months were randomly allocated to the Feldenkrais (mean age 61.21 ± 11.53 years) or Back School group (mean age 60.70 ± 11.72 years). Pain was assessed using the visual analog scale (VAS) and McGill Pain Questionnaire (MPQ), disability was evaluated with the Waddell Disability Index, quality of life was measured with the Short Form-36 Health Survey (SF-36), and mind-body interactions were studied using the Multidimensional Assessment of Interoceptive Awareness Questionnaire (MAIA). Data were collected at baseline, at the end of treatment, and at the 3-month follow-up. The two groups were matched at baseline for all the computed parameters. At the end of treatment (Tend), there were no significant differences between groups regarding chronic pain reduction (p = 0.290); VAS and MAIA-N subscores correlated at Tend (R = 0.296, p = 0.037). By the Friedman analysis, both groups experienced significant changes in pain (p < 0.001) and disability (p < 0.001) over the investigated period. The Feldenkrais method has efficacy comparable to that of Back School in CLBP. Implications for rehabilitation: The Feldenkrais method is a mind-body therapy that is based on awareness through movement lessons, which are verbally guided explorations of movement that are conducted by a physiotherapist who is experienced and trained in this method. It aims to increase self-awareness, expand a person's repertoire of movements, and promote increased functioning in contexts in which the entire body cooperates in the execution of movements. Interoceptive awareness, which improves with rehabilitation, has a complex function in the perception of chronic pain and should be

  20. Software-Based Safety Systems in Space - Learning from other Domains

    NASA Astrophysics Data System (ADS)

    Klicker, M.; Putzer, H.

    2012-01-01

    Increasing complexity and new emerging capabilities for manned and unmanned missions have been the hallmark of the past decades of space exploration. One of the drivers of this process has been the ever increasing use of software and software-intensive systems to implement the system functions necessary for the capabilities needed. The course of technological evolution suggests that this development will continue well into the future, with a number of challenges for the safety community, some of which are discussed in this paper. The current state of the art reveals a number of problems with developing and assessing safety-critical software, which explains the reluctance of the space community to rely on software-based safety measures to mitigate hazards. Among others, the lack of trustworthy evidence of software integrity in all foreseeable situations and the difficulty of integrating software into the traditional safety analysis framework are usually cited. Experience from other domains and recent developments in modern software development methodologies and verification techniques are analysed for their suitability for space systems, and an avionics architectural framework (see STANAG 4626) for the implementation of safety-critical software is proposed. This is shown to create, among other features, the possibility of numerous degradation modes, enhancing overall system safety and the interoperability of computerized space systems. It also potentially simplifies international cooperation on a technical level by introducing a higher degree of compatibility. As software safety cannot be tested or argued into a system in hindsight, the development process and especially the architecture chosen are essential to establish safety properties for the software used to implement safety functions. The core of the safety argument revolves around the separation of different functions and software modules from each other by minimal coupling of functions and credible separation mechanisms in the

  1. Practical Methods for Estimating Software Systems Fault Content and Location

    NASA Technical Reports Server (NTRS)

    Nikora, A.; Schneidewind, N.; Munson, J.

    1999-01-01

    Over the past several years, we have developed techniques to discriminate between fault-prone software modules and those that are not, to estimate a software system's residual fault content, to identify those portions of a software system having the highest estimated number of faults, and to estimate the effects of requirements changes on software quality.

  2. Expert System Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    C Language Integrated Production System (CLIPS) is a software shell for developing expert systems, designed to allow research and development of artificial intelligence on conventional computers. Originally developed by Johnson Space Center, it enables highly efficient pattern matching. A collection of conditions and actions to be taken if the conditions are met is built into a rule network. Additional pertinent facts are matched to the rule network. Using the program, E.I. DuPont de Nemours & Co. is monitoring chemical production machines; California Polytechnic State University is investigating artificial intelligence in computer-aided design; Mentor Graphics has built a new circuit synthesis system; and Brooke and Brooke, a law firm, can determine which facts from a file are most important.

  3. Launch Control System Software Development System Automation Testing

    NASA Technical Reports Server (NTRS)

    Hwang, Andrew

    2017-01-01

    ) tool to Brandon Echols, a fellow intern, and me. The purpose of the OCR tool is to analyze an image and find the coordinates of any group of text. Some issues that arose while installing the OCR tool included the absence of certain libraries needed to train the tool and an outdated software version. We eventually resolved the issues and successfully installed the OCR tool. Training the tool required many images and different fonts and sizes, but in the end the tool learned to accurately decipher the text in the images and their coordinates. The OCR tool produced a file that contained significant metadata for each section of text, but only the text and the coordinates of the text were required for our purpose. The team made a script to parse the information we wanted from the OCR file into a different file that would be used by automation functions within the automated framework, as sketched below. Since a majority of the development and testing of the automated test cases for the GUI in question has been done using live simulated data on the workstations at the Launch Control Center (LCC), a large amount of progress has been made. As of this writing, about 60% of all automated testing has been implemented. Additionally, the OCR tool will help make our automated tests more robust because the tool's text recognition is highly scalable to different text fonts and sizes. Soon we will have the whole test system automated, allowing more full-time engineers to work on development projects.
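
    A minimal sketch of the kind of parsing script described above. The record layout (tab-separated text followed by bounding-box coordinates) and the file names are assumptions for illustration, not the actual tool's output format.

    # Hypothetical sketch: assumes the OCR tool writes one record per line as
    # "text<TAB>x<TAB>y<TAB>...extra metadata"; keeps only text and coordinates.
    import csv

    def parse_ocr_output(in_path: str, out_path: str) -> None:
        with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
            writer = csv.writer(dst)
            writer.writerow(["text", "x", "y"])          # header for the automation framework
            for row in csv.reader(src, delimiter="\t"):
                if len(row) < 3:
                    continue                             # skip malformed records
                text, x, y = row[0], int(row[1]), int(row[2])
                writer.writerow([text, x, y])            # drop all other metadata

    if __name__ == "__main__":
        parse_ocr_output("ocr_raw.tsv", "ocr_coords.csv")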

  4. Defect measurement and analysis of JPL ground software: a case study

    NASA Technical Reports Server (NTRS)

    Powell, John D.; Spagnuolo, John N., Jr.

    2004-01-01

    Ground software systems at JPL must meet high assurance standards while remaining on schedule due to relatively immovable launch dates for the spacecraft that will be controlled by such systems. Toward this end, the Software Quality Improvement (SQI) project's Measurement and Benchmarking (M&B) team is collecting and analyzing defect data from JPL ground system software projects to build software defect prediction models. The aim of these models is to improve predictability with regard to software quality activities. Predictive models will quantitatively define typical trends for JPL ground systems as well as Critical Discriminators (CDs) to provide explanations for atypical deviations from the norm at JPL. CDs are software characteristics that can be estimated or foreseen early in a software project's planning. Thus, these CDs will assist in planning for the predicted degree to which software quality activities for a project are likely to deviate from the norm for JPL ground systems, based on past experience across the lab.
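
    A toy sketch of the kind of defect-prediction model the abstract describes, not the SQI team's actual model. The feature names (KSLOC, requirements volatility, team size) stand in for early-known "critical discriminators", and the historical numbers are made-up illustrative values.

    import numpy as np

    # columns: estimated KSLOC, requirements volatility (0-1), team size (illustrative history)
    X_hist = np.array([[120, 0.2, 8],
                       [300, 0.5, 15],
                       [80,  0.1, 5],
                       [210, 0.4, 12]], dtype=float)
    defects_hist = np.array([95, 340, 40, 220], dtype=float)

    # ordinary least squares with an intercept column
    A = np.hstack([X_hist, np.ones((X_hist.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(A, defects_hist, rcond=None)

    def predict_defects(ksloc: float, volatility: float, team_size: float) -> float:
        return float(np.dot([ksloc, volatility, team_size, 1.0], coef))

    print(round(predict_defects(150, 0.3, 10)))  # predicted defect count for a new project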

  5. GUIdock-VNC: using a graphical desktop sharing system to provide a browser-based interface for containerized software

    PubMed Central

    Mittal, Varun; Hung, Ling-Hong; Keswani, Jayant; Kristiyanto, Daniel; Lee, Sung Bong

    2017-01-01

    Abstract Background: Software container technology such as Docker can be used to package and distribute bioinformatics workflows consisting of multiple software implementations and dependencies. However, Docker is a command line–based tool, and many bioinformatics pipelines consist of components that require a graphical user interface. Results: We present a container tool called GUIdock-VNC that uses a graphical desktop sharing system to provide a browser-based interface for containerized software. GUIdock-VNC uses the Virtual Network Computing protocol to render the graphics within most commonly used browsers. We also present a minimal image builder that can add our proposed graphical desktop sharing system to any Docker packages, with the end result that any Docker packages can be run using a graphical desktop within a browser. In addition, GUIdock-VNC uses the Oauth2 authentication protocols when deployed on the cloud. Conclusions: As a proof-of-concept, we demonstrated the utility of GUIdock-noVNC in gene network inference. We benchmarked our container implementation on various operating systems and showed that our solution creates minimal overhead. PMID:28327936

  6. GUIdock-VNC: using a graphical desktop sharing system to provide a browser-based interface for containerized software.

    PubMed

    Mittal, Varun; Hung, Ling-Hong; Keswani, Jayant; Kristiyanto, Daniel; Lee, Sung Bong; Yeung, Ka Yee

    2017-04-01

    Software container technology such as Docker can be used to package and distribute bioinformatics workflows consisting of multiple software implementations and dependencies. However, Docker is a command line-based tool, and many bioinformatics pipelines consist of components that require a graphical user interface. We present a container tool called GUIdock-VNC that uses a graphical desktop sharing system to provide a browser-based interface for containerized software. GUIdock-VNC uses the Virtual Network Computing protocol to render the graphics within most commonly used browsers. We also present a minimal image builder that can add our proposed graphical desktop sharing system to any Docker packages, with the end result that any Docker packages can be run using a graphical desktop within a browser. In addition, GUIdock-VNC uses the Oauth2 authentication protocols when deployed on the cloud. As a proof-of-concept, we demonstrated the utility of GUIdock-noVNC in gene network inference. We benchmarked our container implementation on various operating systems and showed that our solution creates minimal overhead. © The Authors 2017. Published by Oxford University Press.

  7. Towards Model-Driven End-User Development in CALL

    ERIC Educational Resources Information Center

    Farmer, Rod; Gruba, Paul

    2006-01-01

    The purpose of this article is to introduce end-user development (EUD) processes to the CALL software development community. EUD refers to the active participation of end-users, as non-professional developers, in the software development life cycle. Unlike formal software engineering approaches, the focus in EUD on means/ends development is…

  8. Evolvable Neural Software System

    NASA Technical Reports Server (NTRS)

    Curtis, Steven A.

    2009-01-01

    The Evolvable Neural Software System (ENSS) is composed of sets of Neural Basis Functions (NBFs), which can be totally autonomously created and removed according to the changing needs and requirements of the software system. The resulting structure is both hierarchical and self-similar in that a given set of NBFs may have a ruler NBF, which in turn communicates with other sets of NBFs. These sets of NBFs may function as nodes to a ruler node, which are also NBF constructs. In this manner, the synthetic neural system can exhibit the complexity, three-dimensional connectivity, and adaptability of biological neural systems. An added advantage of ENSS over a natural neural system is its ability to modify its core genetic code in response to environmental changes as reflected in needs and requirements. The neural system is fully adaptive and evolvable and is trainable before release. It continues to rewire itself while on the job. The NBF is a unique, bilevel intelligence neural system composed of a higher-level heuristic neural system (HNS) and a lower-level, autonomic neural system (ANS). Taken together, the HNS and the ANS give each NBF the complete capabilities of a biological neural system to match sensory inputs to actions. Another feature of the NBF is the Evolvable Neural Interface (ENI), which links the HNS and ANS. The ENI solves the interface problem between these two systems by actively adapting and evolving from a primitive initial state (a Neural Thread) to a complicated, operational ENI and successfully adapting to a training sequence of sensory input. This simulates the adaptation of a biological neural system in a developmental phase. Within the greater multi-NBF and multi-node ENSS, self-similar ENIs provide the basis for inter-NBF and inter-node connectivity.
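
    A structural sketch, under stated assumptions, of the hierarchy described above: each NBF pairs a heuristic layer (HNS) with an autonomic layer (ANS) through an evolving interface (ENI), and sets of NBFs report to a ruler NBF. This is an illustrative data-structure view only, not NASA's implementation.

    from dataclasses import dataclass, field
    from typing import Callable, List, Optional

    @dataclass
    class NBF:
        name: str
        hns: Callable[[dict], dict]          # higher-level heuristic mapping (assumed interface)
        ans: Callable[[dict], dict]          # lower-level autonomic mapping (assumed interface)
        eni_state: dict = field(default_factory=dict)   # crude stand-in for the evolving ENI
        ruler: Optional["NBF"] = None
        subordinates: List["NBF"] = field(default_factory=list)

        def add_subordinate(self, nbf: "NBF") -> None:
            nbf.ruler = self
            self.subordinates.append(nbf)

        def sense_to_act(self, sensory_input: dict) -> dict:
            # HNS interprets the input, the ENI state adapts, the ANS produces the action
            plan = self.hns(sensory_input)
            self.eni_state.update(plan)
            return self.ans(plan)

    # toy wiring: one ruler NBF with two subordinate NBFs
    identity = lambda d: d
    ruler = NBF("ruler", identity, identity)
    ruler.add_subordinate(NBF("sensor-nbf", identity, identity))
    ruler.add_subordinate(NBF("actuator-nbf", identity, identity))
    print([n.name for n in ruler.subordinates])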

  9. The Elements of an Effective Software Development Plan - Software Development Process Guidebook

    DTIC Science & Technology

    2011-11-11

    standards and practices required for all XMPL software development. This SDP implements the <corporate> Standard Software Process (SSP). as tailored...Developing and integrating reusable software products • Approach to managing COTS/Reuse software implementation • COTS/Reuse software selection...final selection and submit to change board for approval MAINTENANCE Monitor current products for obsolescence or end of support Track new

  10. Formal Verification of Large Software Systems

    NASA Technical Reports Server (NTRS)

    Yin, Xiang; Knight, John

    2010-01-01

    We introduce a scalable proof structure to facilitate formal verification of large software systems. In our approach, we mechanically synthesize an abstract specification from the software implementation, match its static operational structure to that of the original specification, and organize the proof as the conjunction of a series of lemmas about the specification structure. By setting up a different lemma for each distinct element and proving each lemma independently, we obtain the important benefit that the proof scales easily for large systems. We present details of the approach and an illustration of its application on a challenge problem from the security domain.

  11. 77 FR 50724 - Developing Software Life Cycle Processes for Digital Computer Software Used in Safety Systems of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Developing Software Life Cycle Processes for Digital... Software Life Cycle Processes for Digital Computer Software used in Safety Systems of Nuclear Power Plants... clarifications, the enhanced consensus practices for developing software life-cycle processes for digital...

  12. A software engineering approach to expert system design and verification

    NASA Technical Reports Server (NTRS)

    Bochsler, Daniel C.; Goodwin, Mary Ann

    1988-01-01

    Software engineering design and verification methods for developing expert systems are not yet well defined. Integration of expert system technology into software production environments will require effective software engineering methodologies to support the entire life cycle of expert systems. The software engineering methods used to design and verify an expert system, RENEX, are discussed. RENEX demonstrates autonomous rendezvous and proximity operations, including replanning trajectory events and subsystem fault detection, onboard a space vehicle during flight. The RENEX designers utilized a number of software engineering methodologies to deal with the complex problems inherent in this system. An overview is presented of the methods utilized. Details of the verification process receive special emphasis. The benefits and weaknesses of the methods for supporting the development life cycle of expert systems are evaluated, and recommendations are made based on the overall experiences with the methods.

  13. Large Scale Portability of Hospital Information System Software

    PubMed Central

    Munnecke, Thomas H.; Kuhn, Ingeborg M.

    1986-01-01

    As part of its Decentralized Hospital Computer Program (DHCP) the Veterans Administration installed new hospital information systems in 169 of its facilities during 1984 and 1985. The application software for these systems is based on the ANS MUMPS language, is public domain, and is designed to be operating system and hardware independent. The software, developed by VA employees, is built upon a layered approach, where application packages layer on a common data dictionary which is supported by a Kernel of software. Communications between facilities are based on public domain Department of Defense ARPA net standards for domain naming, mail transfer protocols, and message formats, layered on a variety of communications technologies.

  14. NASA Data Acquisitions System (NDAS) Software Architecture

    NASA Technical Reports Server (NTRS)

    Davis, Dawn; Duncan, Michael; Franzl, Richard; Holladay, Wendy; Marshall, Peggi; Morris, Jon; Turowski, Mark

    2012-01-01

    The NDAS Software Project is for the development of common low speed data acquisition system software to support NASA's rocket propulsion testing facilities at John C. Stennis Space Center (SSC), White Sands Test Facility (WSTF), Plum Brook Station (PBS), and Marshall Space Flight Center (MSFC).

  15. Phase equilibria in the quasiternary system Ag2S–Ga2S3–In2S3 and optical properties of (Ga55In45)2S300, (Ga54.59In44.66Er0.75)2S300 single crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivashchenko, I.A., E-mail: Ivashchenko.Inna@eenu.edu.ua; Danyliuk, I.V.; Olekseyuk, I.D.

    The quasiternary system Ag2S–Ga2S3–In2S3 was investigated by differential thermal and X-ray diffraction analyses. The phase diagram of the Ga2S3–In2S3 system, nine polythermal sections, the isothermal section at 820 K and the liquidus surface projection were constructed. The existence of large solid solution ranges of the binary and ternary compounds was established. The range of existence of the quaternary phase AgGaxIn5-xS8 (2.25 ≤ x ≤ 2.85) at 820 K was determined. The single crystals (Ga55In45)2S300 and (Ga54.59In44.66Er0.75)2S300 were grown by a directional crystallization method from solution-melt. Optical absorption spectra in the 500–1600 nm range were recorded. The luminescence of the (Ga54.59In44.66Er0.75)2S300 single crystal shows a maximum at 1530 nm for excitation wavelengths of 532 and 980 nm at 80 and 300 K. Graphical abstract: isothermal section of the quasiternary system Ag2S–Ga2S3–In2S3 at 820 K and normalized photoluminescence spectra of the (Ga54.59In44.66Er0.75)2S300 single crystal at 300 K. Highlights: • Isothermal section at 820 K and liquidus surface projection were built for Ag2S–Ga2S3–In2S3. • Optical properties of single crystals were studied.

  16. Chronic low back pain in patients with systemic lupus erythematosus: prevalence and predictors of back muscle strength and its correlation with disability.

    PubMed

    Cezarino, Raíssa Sudré; Cardoso, Jefferson Rosa; Rodrigues, Kedma Neves; Magalhães, Yasmin Santana; Souza, Talita Yokoy de; Mota, Lícia Maria Henrique da; Bonini-Rocha, Ana Clara; McVeigh, Joseph; Martins, Wagner Rodrigues

    To determine the prevalence of chronic low back pain and the predictors of back muscle strength in patients with systemic lupus erythematosus. Cross-sectional study. Ninety-six ambulatory patients with lupus were selected by non-probability sampling and interviewed and tested during medical consultation. The outcome measurements were: point prevalence of chronic low back pain, Oswestry Disability Index, Tampa Scale of Kinesiophobia, Fatigue Severity Scale and maximal voluntary isometric contractions of handgrip and of the back muscles. Correlation coefficients and multiple linear regression were used in the statistical analysis. Of the 96 individuals interviewed, 25 had chronic low back pain, indicating a point prevalence of 26% (92% women). The correlation between the Oswestry Index and maximal voluntary isometric contraction of the back muscles was r=-0.4, 95% CI [-0.68; -0.01], and between the maximal voluntary isometric contraction of handgrip and of the back muscles it was r=0.72, 95% CI [0.51; 0.88]. The highest value of R^2 in the regression model (63%) was observed when maximal voluntary isometric contraction of the back muscles was tested with five independent variables. In this model handgrip strength was the only predictive variable (β=0.61, p=0.001). The prevalence of chronic low back pain in individuals with systemic lupus erythematosus was 26%. The maximal voluntary isometric contraction of the back muscles was 63% predicted by five variables of interest; however, only handgrip strength was a statistically significant predictive variable. The maximal voluntary isometric contraction of the back muscles presented a linear relation directly proportional to handgrip strength and inversely proportional to the Oswestry Index, i.e. stronger back muscles are associated with lower disability scores. Copyright © 2017. Published by Elsevier Editora Ltda.

  17. Development of automation software for neutron activation analysis process in Malaysian nuclear agency

    NASA Astrophysics Data System (ADS)

    Yussup, N.; Rahman, N. A. A.; Ibrahim, M. M.; Mokhtar, M.; Salim, N. A. A.; Soh@Shaari, S. C.; Azman, A.

    2017-01-01

    The Neutron Activation Analysis (NAA) process has been established at the Malaysian Nuclear Agency (Nuclear Malaysia) since the 1980s. Most of the established procedures, especially from sample registration to sample analysis, are performed manually. These manual procedures carried out by the NAA laboratory personnel are time consuming and inefficient. Hence, software to support automation of the process was developed to provide an effective method for replacing redundant manual data entries and producing a faster sample analysis and calculation process. This paper describes the design and development of the automation software for the NAA process, which consists of three sub-programs: sample registration; hardware control and data acquisition; and sample analysis. The data flow and connections between the sub-programs are explained. The software was developed using the National Instruments LabVIEW development package.
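
    The actual system described above is built in LabVIEW; the Python pseudo-pipeline below is only a hedged illustration of the three sub-programs and the data handed between them (sample registration, then hardware control and data acquisition, then analysis). All record fields and the placeholder activity calculation are illustrative assumptions.

    def register_sample(sample_id: str, weight_mg: float) -> dict:
        return {"id": sample_id, "weight_mg": weight_mg, "status": "registered"}

    def acquire_spectrum(sample: dict, live_time_s: int) -> dict:
        # in the real system this step drives the detector hardware
        sample["spectrum"] = {"live_time_s": live_time_s, "counts": [0] * 4096}
        sample["status"] = "measured"
        return sample

    def analyse(sample: dict, peak_counts: float, efficiency: float) -> dict:
        # placeholder calculation: counts corrected for live time and detection efficiency
        live = sample["spectrum"]["live_time_s"]
        sample["activity_bq"] = peak_counts / (live * efficiency)
        sample["status"] = "analysed"
        return sample

    record = analyse(acquire_spectrum(register_sample("NAA-0001", 150.0), 3600), 1.2e5, 0.03)
    print(record["activity_bq"])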

  18. Microwave corrosion detection using open ended rectangular waveguide sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qaddoumi, N.; Handjojo, L.; Bigelow, T.

    The use of microwave and millimeter wave nondestructive testing methods utilizing open ended rectangular waveguide sensors has shown great potential for detecting minute thickness variations in laminate structures, in particular those backed by a conducting plate. Slight variations in the dielectric properties of materials may also be detected using a set of optimal parameters, which include the standoff distance and the frequency of operation. In a recent investigation on detecting rust under paint, the dielectric properties of rust were assumed to be similar to those of Fe2O3 powder. These values were used in an electromagnetic model that simulates the interaction of fields radiated by a rectangular waveguide aperture with layered structures to obtain optimal parameters. The dielectric properties of Fe2O3 were measured to be very similar to the properties of paint. Nevertheless, the presence of a simulated Fe2O3 layer under a paint layer was detected. In this paper the dielectric properties of several different rust samples from different environments are measured. The measurements indicate that the nature of real rust is quite diverse and is different from Fe2O3 and paint, indicating that the presence of rust under paint can be easily detected. The same electromagnetic model is also used (with the newly measured dielectric properties of real rust) to obtain an optimal standoff distance at a frequency of 24 GHz. The results indicate that variations in the magnitude as well as the phase of the reflection coefficient can be used to obtain information about the presence of rust. An experimental investigation on detecting the presence of very thin rust layers (2.5–5 × 10^-2 mm [0.9–2.0 × 10^-3 in.]) using an open ended rectangular waveguide probe is also conducted. Microwave images of rusted specimens, obtained at 24 GHz, are also presented.

  19. Design of EPON far-end equipment based on FTTH

    NASA Astrophysics Data System (ADS)

    Feng, Xiancheng; Yun, Xiang

    2008-12-01

    Nowadays, the most favored fiber access technology is the EPON fiber access system. Inheriting the low cost and usability of Ethernet and the bandwidth of optical networks, EPON is one of the best technologies for fiber access and is widely adopted by carriers all over the world. Based on an analysis of FTTH far-end equipment schemes, a hardware design for the ONU is proposed in this paper. The FTTH far-end equipment software design follows a modular design concept; it divides the software into five function modules: the low-layer driver module, the system management module, the master/slave communication module, the main/standby switch module and the command line module. The software flow of the host computer is also analyzed. Finally, tests are made of the Ethernet service performance of the FTTH far-end equipment, the E1 service performance, the optical path protection switching, and so on. The test results indicate that all items are in accordance with the technical requirements for far-end ONU equipment, possess good quality and fully reach the requirements of telecommunication-grade equipment. The far-end equipment of FTTH is divided into several parts based on function: the control module, the exchange module, the UNI interface module, the ONU module, the EPON interface module, the network management debugging module, the voice processing module, the circuit simulation module and the CATV module. In the downstream direction, to provide protection, we design two optical modules. The system can set one optical module group working and the other optical module group closed when it is initialized. When the optical fiber line is cut off, a LOS alarm is raised. This causes the MUX to switch to the other optical module group, simultaneously resets module 3701/3711 so that it re-measures the distance, and reports to the plug board MPC850 through the GPIO port. During normal operation, the downstream optical signal is transformed into the
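
    A hedged sketch of the protection-switching behaviour described above: two optical module groups, one active and one closed after initialisation, with a LOS alarm on the active fibre triggering a switch to the standby group and a re-ranging step. Class and method names, and the reporting hook, are illustrative assumptions, not the real firmware API.

    class OpticalProtectionSwitch:
        def __init__(self):
            self.active = 0          # group 0 working, group 1 closed after initialisation
            self.standby = 1

        def on_los_alarm(self, report):
            # swap the working and standby optical module groups
            self.active, self.standby = self.standby, self.active
            report(f"LOS detected: switched to optical module group {self.active}")
            self._reset_and_rerange()

        def _reset_and_rerange(self):
            # stand-in for resetting the EPON interface chip and re-running ranging
            pass

    switch = OpticalProtectionSwitch()
    switch.on_los_alarm(print)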

  20. LabVIEW interface with Tango control system for a multi-technique X-ray spectrometry IAEA beamline end-station at Elettra Sincrotrone Trieste

    NASA Astrophysics Data System (ADS)

    Wrobel, P. M.; Bogovac, M.; Sghaier, H.; Leani, J. J.; Migliori, A.; Padilla-Alvarez, R.; Czyzycki, M.; Osan, J.; Kaiser, R. B.; Karydas, A. G.

    2016-10-01

    A new synchrotron beamline end-station for multipurpose X-ray spectrometry applications has recently been commissioned and is currently accessible to end-users at the XRF beamline of Elettra Sincrotrone Trieste. The end-station consists of an ultra-high vacuum chamber that includes as its main instrument a seven-axis motorized manipulator for sample and detector positioning, along with different kinds of X-ray detectors and optical cameras. The beamline end-station allows measurements to be performed with different X-ray spectrometry techniques such as Microscopic X-Ray Fluorescence analysis (μXRF), Total Reflection X-Ray Fluorescence analysis (TXRF), Grazing Incidence/Exit X-Ray Fluorescence analysis (GI-XRF/GE-XRF), X-Ray Reflectometry (XRR), and X-Ray Absorption Spectroscopy (XAS). A LabVIEW Graphical User Interface (GUI), bound to the Tango control system and consisting of many custom-made software modules, is utilized as a user-friendly tool for controlling all of the end-station hardware components. The present work describes this advanced Tango and LabVIEW software platform, which utilizes in an optimal synergistic manner the merits and functionality of these well-established programming and equipment control tools.
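
    The end-station GUI described above is LabVIEW bound to Tango; the PyTango snippet below is only a hedged illustration of how a Tango-controlled manipulator axis might be driven from a script. The device name is a made-up placeholder, not an entry in the beamline's actual Tango database.

    import tango

    axis = tango.DeviceProxy("xrf/manipulator/axis_z")     # hypothetical device name
    print("state:", axis.state())
    current = axis.read_attribute("Position").value        # read the motor position attribute
    axis.write_attribute("Position", current + 1.0)        # request a small relative move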

  1. Evaluation of Visualization Software

    NASA Technical Reports Server (NTRS)

    Globus, Al; Uselton, Sam

    1995-01-01

    Visualization software is widely used in scientific and engineering research. But computed visualizations can be very misleading, and the errors are easy to miss. We feel that the software producing the visualizations must be thoroughly evaluated and the evaluation process as well as the results must be made available. Testing and evaluation of visualization software is not a trivial problem. Several methods used in testing other software are helpful, but these methods are (apparently) often not used. When they are used, the description and results are generally not available to the end user. Additional evaluation methods specific to visualization must also be developed. We present several useful approaches to evaluation, ranging from numerical analysis of mathematical portions of algorithms to measurement of human performance while using visualization systems. Along with this brief survey, we present arguments for the importance of evaluations and discussions of appropriate use of some methods.

  2. Development of fuel oil management system software: Phase 1, Tank management module. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lange, H.B.; Baker, J.P.; Allen, D.

    1992-01-01

    The Fuel Oil Management System (FOMS) is a micro-computer based software system being developed to assist electric utilities that use residual fuel oils with oil purchase and end-use decisions. The Tank Management Module (TMM) is the first FOMS module to be produced. TMM enables the user to follow the mixing status of oils contained in a number of oil storage tanks. The software contains a computational model of residual fuel oil mixing which addresses mixing that occurs as one oil is added to another in a storage tank and also purposeful mixing of the tank by propellers, recirculation or convection. The model also addresses the potential for sludge formation due to incompatibility of oils being mixed. Part 1 of the report presents a technical description of the mixing model and a description of its development. Steps followed in developing the mixing model included: (1) definition of ranges of oil properties and tank design factors used by utilities; (2) review and adaptation of prior applicable work; (3) laboratory development; and (4) field verification. Also, a brief laboratory program was devoted to exploring the suitability of suggested methods for predicting viscosities, flash points and pour points of oil mixtures. Part 2 of the report presents a functional description of the TMM software and a description of its development. The software development program consisted of the following steps: (1) on-site interviews at utilities to prioritize needs and characterize user environments; (2) construction of the user interface; and (3) field testing the software.
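
    The TMM mixing model itself is not reproduced here; as a stand-in, the sketch below uses the Refutas viscosity blending index, a commonly used method for estimating the kinematic viscosity of a blend of residual oils from the viscosities and mass fractions of its components. The example values are illustrative only.

    import math

    def refutas_vbn(viscosity_cst: float) -> float:
        # viscosity blending number for a component viscosity in cSt
        return 14.534 * math.log(math.log(viscosity_cst + 0.8)) + 10.975

    def blend_viscosity(components):
        # components: list of (mass_fraction, viscosity_cSt) pairs
        vbn = sum(frac * refutas_vbn(v) for frac, v in components)
        return math.exp(math.exp((vbn - 10.975) / 14.534)) - 0.8

    # e.g. 70% of a 380 cSt residual oil blended with 30% of a 12 cSt cutter stock
    print(round(blend_viscosity([(0.7, 380.0), (0.3, 12.0)]), 1), "cSt")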

  3. Systems, methods and apparatus for developing and maintaining evolving systems with software product lines

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor); Rash, James L. (Inventor); Pena, Joaquin (Inventor)

    2011-01-01

    Systems, methods and apparatus are provided through which an evolutionary system is managed and viewed as a software product line. In some embodiments, the core architecture is a relatively unchanging part of the system, and each version of the system is viewed as a product from the product line. Each software product is generated from the core architecture with some agent-based additions. The result may be a multi-agent system software product line.

  4. Experimental recovery of quantum correlations in absence of system-environment back-action

    PubMed Central

    Xu, Jin-Shi; Sun, Kai; Li, Chuan-Feng; Xu, Xiao-Ye; Guo, Guang-Can; Andersson, Erika; Lo Franco, Rosario; Compagno, Giuseppe

    2013-01-01

    Revivals of quantum correlations in composite open quantum systems are a useful dynamical feature against detrimental effects of the environment. Their occurrence is attributed to flows of quantum information back and forth from systems to quantum environments. However, revivals also show up in models where the environment is classical, thus unable to store quantum correlations, and forbids system-environment back-action. This phenomenon opens basic issues about its interpretation involving the role of classical environments, memory effects, collective effects and system-environment correlations. Moreover, an experimental realization of back-action-free quantum revivals has applicative relevance as it leads to recover quantum resources without resorting to more demanding structured environments and correction procedures. Here we introduce a simple two-qubit model suitable to address these issues. We then report an all-optical experiment which simulates the model and permits us to recover and control, against decoherence, quantum correlations without back-action. We finally give an interpretation of the phenomenon by establishing the roles of the involved parties. PMID:24287554

  5. Artificial intelligence and expert systems in-flight software testing

    NASA Technical Reports Server (NTRS)

    Demasie, M. P.; Muratore, J. F.

    1991-01-01

    The authors discuss the introduction of advanced information systems technologies such as artificial intelligence, expert systems, and advanced human-computer interfaces directly into Space Shuttle software engineering. The reconfiguration automation project (RAP) was initiated to coordinate this move towards 1990s software technology. The idea behind RAP is to automate several phases of the flight software testing procedure and to introduce AI and ES into space shuttle flight software testing. In the first phase of RAP, conventional tools to automate regression testing have already been developed or acquired. There are currently three tools in use.

  6. The CFHT MegaCam control system: new solutions based on PLCs, WorldFIP fieldbus and Java softwares

    NASA Astrophysics Data System (ADS)

    Rousse, Jean Y.; Boulade, Olivier; Charlot, Xavier; Abbon, P.; Aune, Stephan; Borgeaud, Pierre; Carton, Pierre-Henri; Carty, M.; Da Costa, J.; Deschamps, H.; Desforge, D.; Eppele, Dominique; Gallais, Pascal; Gosset, L.; Granelli, Remy; Gros, Michel; de Kat, Jean; Loiseau, Denis; Ritou, J. L.; Starzynski, Pierre; Vignal, Nicolas; Vigroux, Laurent G.

    2003-03-01

    MegaCam is a wide-field imaging camera built for the prime focus of the 3.6m Canada-France-Hawaii Telescope. This large detector has required new approaches from the hardware up to the instrument control system software. Safe control of the three sub-systems of the instrument (cryogenics, filters and shutter), measurement of the exposure time with an accuracy of 0.1%, identification of the filters and management of the internal calibration source are the major challenges taken up by the control system. Another challenge is to ensure all these functionalities with the minimum space available on the telescope structure for the electrical hardware and a minimum number of cables, to keep the highest reliability. All these requirements have been met with a control system whose different elements are linked by a WorldFIP fieldbus over optical fiber. Diagnosis and remote user support are ensured by an Engineering Control System station based on software developed with Java Internet technologies (applets, servlets) and connected to the fieldbus.

  7. Addressing Challenges in the Acquisition of Secure Software Systems With Open Architectures

    DTIC Science & Technology

    2012-04-30

    as a “broker” to market specific research topics identified by our sponsors to NPS graduate students. This three-pronged approach provides for a...breaks, and the day-ending socials. Many of our researchers use these occasions to establish new teaming arrangements for future research work. In the...software (CSS) and open source software (OSS). Federal government acquisition policy, as well as many leading enterprise IT centers, now encourage the use

  8. Comparing 2-nt 3' overhangs against blunt-ended siRNAs: a systems biology based study.

    PubMed

    Ghosh, Preetam; Dullea, Robert; Fischer, James E; Turi, Tom G; Sarver, Ronald W; Zhang, Chaoyang; Basu, Kalyan; Das, Sajal K; Poland, Bradley W

    2009-07-07

    In this study, we formulate a computational reaction model following a chemical kinetic theory approach to predict the binding rate constant for the siRNA-RISC complex formation reaction. The model allowed us to study the potency difference between 2-nt 3' overhangs against blunt-ended siRNA molecules in an RNA interference (RNAi) system. The rate constant predicted by this model was fed into a stochastic simulation of the RNAi system (using the Gillespie stochastic simulator) to study the overall potency effect. We observed that the stochasticity in the transcription/translation machinery has no observable effects in the RNAi pathway. Sustained gene silencing using siRNAs can be achieved only if there is a way to replenish the dsRNA molecules in the cell. Initial findings show about 1.5 times more blunt-ended molecules will be required to keep the mRNA at the same reduced level compared to the 2-nt overhang siRNAs. However, the mRNA levels jump back to saturation after a longer time when blunt-ended siRNAs are used. We found that the siRNA-RISC complex formation reaction rate was 2 times slower when blunt-ended molecules were used pointing to the fact that the presence of the 2-nt overhangs has a greater effect on the reaction in which the bound RISC complex cleaves the mRNA.
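
    A minimal Gillespie (stochastic simulation algorithm) sketch in the spirit of the study above: transcription, natural mRNA degradation, and siRNA-RISC-mediated cleavage treated as a single lumped reaction. The rate constants and species counts are illustrative assumptions, not the paper's fitted values.

    import math, random

    def gillespie(t_end=500.0, k_tx=2.0, k_deg=0.01, k_cleave=0.002, risc0=50, mrna0=100):
        t, mrna, risc = 0.0, mrna0, risc0
        trace = [(t, mrna)]
        while t < t_end:
            a = [k_tx, k_deg * mrna, k_cleave * mrna * risc]   # reaction propensities
            a0 = sum(a)
            if a0 == 0:
                break
            t += -math.log(1.0 - random.random()) / a0         # exponential waiting time
            r = random.random() * a0
            if r < a[0]:
                mrna += 1                                      # transcription
            elif r < a[0] + a[1]:
                mrna -= 1                                      # natural degradation
            else:
                mrna -= 1                                      # RISC-mediated cleavage
            trace.append((t, mrna))
        return trace

    print(gillespie()[-1])   # (time, mRNA copy number) at the end of the run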

  9. Comparing 2-nt 3' overhangs against blunt-ended siRNAs: a systems biology based study

    PubMed Central

    Ghosh, Preetam; Dullea, Robert; Fischer, James E; Turi, Tom G; Sarver, Ronald W; Zhang, Chaoyang; Basu, Kalyan; Das, Sajal K; Poland, Bradley W

    2009-01-01

    In this study, we formulate a computational reaction model following a chemical kinetic theory approach to predict the binding rate constant for the siRNA-RISC complex formation reaction. The model allowed us to study the potency difference between 2-nt 3' overhangs against blunt-ended siRNA molecules in an RNA interference (RNAi) system. The rate constant predicted by this model was fed into a stochastic simulation of the RNAi system (using the Gillespie stochastic simulator) to study the overall potency effect. We observed that the stochasticity in the transcription/translation machinery has no observable effects in the RNAi pathway. Sustained gene silencing using siRNAs can be achieved only if there is a way to replenish the dsRNA molecules in the cell. Initial findings show about 1.5 times more blunt-ended molecules will be required to keep the mRNA at the same reduced level compared to the 2-nt overhang siRNAs. However, the mRNA levels jump back to saturation after a longer time when blunt-ended siRNAs are used. We found that the siRNA-RISC complex formation reaction rate was 2 times slower when blunt-ended molecules were used pointing to the fact that the presence of the 2-nt overhangs has a greater effect on the reaction in which the bound RISC complex cleaves the mRNA. PMID:19594876

  10. The Bartlesville System; TGISS Software Documentation.

    ERIC Educational Resources Information Center

    Roberts, Tommy L.; And Others

    TGISS (Total Guidance Information Support System) is an information storage and retrieval system specifically designed to meet the needs and requirements of a counselor in the Bartlesville Public School environment. The system, which is a combination of man/machine capabilities, includes the hardware and software necessary to extend the…

  11. Phase equilibria in the quasi-ternary system Ag{sub 2}Se–Ga{sub 2}Se{sub 3}–In{sub 2}Se{sub 3} and physical properties of (Ga{sub 0.6}In{sub 0.4}){sub 2}Se{sub 3}, (Ga{sub 0.594}In{sub 0.396}Er{sub 0.01}){sub 2}Se{sub 3} single crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivashchenko, I.A., E-mail: inna.ivashchenko@mail.ru; Danyliuk, I.V.; Olekseyuk, I.D.

    2014-02-15

    The quasi-ternary system Ag2Se–Ga2Se3–In2Se3 was investigated by differential thermal, X-ray phase, X-ray structure and microstructure analyses and by microhardness measurements. Five quasi-binary phase diagrams, six polythermal sections, the isothermal section at 820 K and the liquidus surface projection were constructed. The character and temperature of the invariant processes were determined. The specific resistance of the single crystals (Ga0.6In0.4)2Se3 and (Ga0.594In0.396Er0.01)2Se3 was measured (7.5×10^5 and 3.15×10^5 Ω m, respectively), optical absorption spectra in the 600–1050 nm range were recorded at room temperature, and the band gap energy was estimated to be 1.95±0.01 eV for both samples. Graphical abstract: the article reports for the first time the liquidus surface projection of the Ag2Se–Ga2Se3–In2Se3 system and its isothermal section at 820 K. Five phase diagrams, six polythermal sections, the isothermal section at 820 K and the liquidus surface projection were built for the first time. The existence of a large region of solid solutions based on AgIn5Se8, Ga2Se3 and AgGa1-xInxSe2 was investigated. The existence of two ternary phases was established in the Ga2Se3–In2Se3 system. Two single crystals, (Ga0.6In0.4)2Se3 and (Ga0.594In0.396Er0.01)2Se3, were grown and some of their optical properties were studied for the first time. Highlights: • Liquidus surface projection was built for the Ag2Se–Ga2Se3–In2Se3 system. • Solid solution ranges of AgIn5Se8, Ga2Se3 and AgGa1-xInxSe2 were investigated. • Two single crystals (Ga0.6In0.4)2Se3, (Ga0.594In0

  12. Engine Structures Modeling Software System (ESMOSS)

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Engine Structures Modeling Software System (ESMOSS) is the development of a specialized software system for the construction of geometric descriptive and discrete analytical models of engine parts, components, and substructures which can be transferred to finite element analysis programs such as NASTRAN. The NASA Lewis Engine Structures Program is concerned with the development of technology for the rational structural design and analysis of advanced gas turbine engines with emphasis on advanced structural analysis, structural dynamics, structural aspects of aeroelasticity, and life prediction. Fundamental and common to all of these developments is the need for geometric and analytical model descriptions at various engine assembly levels which are generated using ESMOSS.

  13. Arcus end-to-end simulations

    NASA Astrophysics Data System (ADS)

    Wilms, Joern; Guenther, H. Moritz; Dauser, Thomas; Huenemoerder, David P.; Ptak, Andrew; Smith, Randall; Arcus Team

    2018-01-01

    We present an overview of the end-to-end simulation environment that we are implementing as part of the Arcus phase A study. With the Arcus simulator, we aim to model the imaging, detection, and event reconstruction properties of the spectrometer. The simulator uses a Monte Carlo ray-trace approach, projecting photons onto the Arcus focal plane from the silicon pore optic mirrors and critical-angle transmission gratings. We simulate the detection and read-out of the photons in the focal plane CCDs with software originally written for the eROSITA and Athena-WFI detectors; we include all relevant detector physics, such as charge splitting, and effects of the detector read-out, such as out-of-time events. The output of the simulation chain is an event list that closely resembles the data expected during flight. This event list is processed using a prototype event reconstruction chain for the order separation, wavelength calibration, and effective area calibration. The output is compatible with standard X-ray astronomical analysis software. During phase A, the end-to-end simulation approach is used to demonstrate the overall performance of the mission, including a full simulation of the calibration effort. Continued development during later phases of the mission will ensure that the simulator remains a faithful representation of the true mission capabilities, and will ultimately be used as the Arcus calibration model.
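
    A toy Monte Carlo sketch in the spirit of the end-to-end chain described above, not the actual Arcus simulator: draw photon wavelengths, disperse them with the grating equation, blur by a focal-plane PSF, and emit an event list. All numbers (grating period, focal length, PSF width, line energy) are illustrative assumptions.

    import math, random

    D_GRATING_NM = 200.0      # assumed grating period
    FOCAL_LEN_MM = 12000.0    # assumed focal length
    PSF_SIGMA_MM = 0.02       # assumed Gaussian blur at the focal plane

    def simulate_events(n_photons=1000, order=1, line_nm=2.5):
        events = []
        for _ in range(n_photons):
            wavelength = random.gauss(line_nm, 0.001)                # narrow emission line
            theta = math.asin(order * wavelength / D_GRATING_NM)     # grating equation
            x_mm = FOCAL_LEN_MM * math.tan(theta) + random.gauss(0.0, PSF_SIGMA_MM)
            events.append({"x_mm": round(x_mm, 4), "order": order, "wavelength_nm": wavelength})
        return events

    evt = simulate_events()
    print(len(evt), evt[0])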

  14. eXascale PRogramming Environment and System Software (XPRESS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, Barbara; Gabriel, Edgar

    Exascale systems, with a thousand times the compute capacity of today's leading edge petascale computers, are expected to emerge during the next decade. Their software systems will need to facilitate the exploitation of exceptional amounts of concurrency in applications, and ensure that jobs continue to run despite the occurrence of system failures and other kinds of hard and soft errors. Adapting computations at runtime to cope with changes in the execution environment, as well as to improve power and performance characteristics, is likely to become the norm. As a result, considerable innovation is required to develop system support to meet the needs of future computing platforms. The XPRESS project aims to develop and prototype a revolutionary software system for extreme-scale computing for both exascale and strong-scaled problems. The XPRESS collaborative research project will advance the state-of-the-art in high performance computing and enable exascale computing for current and future DOE mission-critical applications and supporting systems. The goals of the XPRESS research project are to: A. enable exascale performance capability for DOE applications, both current and future, B. develop and deliver a practical computing system software X-stack, OpenX, for future practical DOE exascale computing systems, and C. provide programming methods and environments for effective means of expressing application and system software for portable exascale system execution.

  15. Software Dependability and Safety Evaluations ESA's Initiative

    NASA Astrophysics Data System (ADS)

    Hernek, M.

    ESA has allocated funds for an initiative to evaluate dependability and safety methods for software. The objectives of this initiative are: (i) more extensive validation of safety and dependability techniques for software; and (ii) provision of valuable results to improve the quality of the software, thus promoting the application of dependability and safety methods and techniques. ESA space systems are being developed according to defined PA requirement specifications. These requirements may be implemented through various design concepts, e.g. redundancy, diversity, etc., varying from project to project. Analysis methods (FMECA, FTA, HA, etc.) are frequently used during requirements analysis and design activities to assure the correct implementation of system PA requirements. The criticality level of failures, functions and systems is determined, and by doing so the critical sub-systems are identified, on which dependability and safety techniques are to be applied during development. Proper performance of the software development requires the development of a technical specification for the products at the beginning of the life cycle. Such a technical specification comprises both functional and non-functional requirements. These non-functional requirements address characteristics of the product such as quality, dependability, safety and maintainability. Software in space systems is used more and more in critical functions. Also, the trend towards more frequent use of COTS and reusable components poses new difficulties in terms of assuring reliable and safe systems. Because of this, their dependability and safety must be carefully analysed. ESA identified and documented techniques, methods and procedures to ensure that software dependability and safety requirements are specified and taken into account during the design and development of a software system, and to verify/validate that the implemented software systems comply with these requirements [R1].

  16. Study of phase relationships in the Sr3(PO4)2–CePO4 system. Phase diagram and thermal characteristics of phases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matraszek, Aleksandra, E-mail: aleksandra.matraszek@ue.wroc.pl

    2013-07-15

    A diagram representing phase relationships in the Sr3(PO4)2–CePO4 phosphate system has been developed on the basis of results obtained by thermal analysis (DTA/DSC/TGA) and X-ray diffraction (XRD) methods. One intermediate compound with the formula Sr3Ce(PO4)3 occurs in the Sr3(PO4)2–CePO4 system at temperatures exceeding 1045 °C. The compound has a eulytite structure with the following structural parameters: a=b=c=10.1655(8) Å, α=β=γ=90.00°, V=1050.46(6) Å^3. Its melting point exceeds 1950 °C. A limited solid solution exists in the system, which possesses the structure of the low-temperature form of Sr3(PO4)2. At 1000 °C the maximal concentration of CePO4 in the solid solution is below 20 mol%. The solid solution phase field narrows with increasing temperature. There is a eutectic point in the (Sr3(PO4)2 + Sr3Ce(PO4)3) phase field at 1765 °C and 15 mol% of CePO4. The melting temperature of Sr3(PO4)2 is 1882±15 °C. Graphical abstract: the phase diagram of the Sr3(PO4)2–CePO4 system showing the stability ranges of the limited solid solution and Sr3Ce(PO4)3 phases. Highlights: • Sr3(PO4)2 melts at 1882 °C. • The phase diagram of the Sr3(PO4)2–CePO4 system has been proposed. • A limited solid solution of CePO4 in Sr3(PO4)2 forms in the system. • The Sr3Ce(PO4)3 phosphate is stable at temperatures above 1045 °C.

  17. Concept of software interface for BCI systems

    NASA Astrophysics Data System (ADS)

    Svejda, Jaromir; Zak, Roman; Jasek, Roman

    2016-06-01

    Brain Computer Interface (BCI) technology is intended to control an external system by brain activity. One of the main parts of such a system is the software interface, which takes care of clear communication between the brain and either the computer or additional devices connected to the computer. This paper is organized as follows. Firstly, current knowledge about the human brain is briefly summarized to point out its complexity. Secondly, a concept of a BCI system is described, which is then used to build an architecture for the proposed software interface. Finally, disadvantages of the sensing technology discovered during the sensing part of our research are mentioned.

  18. New crystals of the CsHSO4–CsH2PO4–H2O system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarova, I. P., E-mail: makarova@crys.ras.ru; Grebenev, V. V.; Komornikov, V. A.

    2016-11-15

    Cs6H(HSO4)3(H2PO4)4 crystals, grown for the first time based on an analysis of the phase diagram of the CsHSO4–CsH2PO4–H2O ternary system, have been investigated by structural analysis using synchrotron radiation. The atomic structure of the crystals is determined and its specific features are analyzed.

  19. A General Water Resources Regulation Software System in China

    NASA Astrophysics Data System (ADS)

    LEI, X.

    2017-12-01

    To avoid repeated development of core modules for normal and emergency water resources regulation, and to improve the maintainability and upgradability of regulation models and business logic, a general water resources regulation software framework was developed based on the collection and analysis of common demands for water resources regulation and emergency management. It provides a customizable and extensible software framework, open to secondary development, for the three-level platform "MWR-Basin-Province". Meanwhile, this general software system can realize business collaboration and information sharing of water resources regulation schemes among the three-level platforms, so as to improve the decision-making ability of national water resources regulation. There are four main modules involved in the general software system: 1) a complete set of general water resources regulation modules that allows secondary developers to custom-develop water resources regulation decision-making systems; 2) a complete set of model base and model computing software released in the form of cloud services; 3) a complete set of tools to build the concept map and model system of basin water resources regulation, as well as a model management system to calibrate and configure model parameters; 4) a database that satisfies the business functions and functional requirements of general water resources regulation software, ultimately providing technical support for building basin or regional water resources regulation models.

  20. Structural investigation of MO⋅P2O5⋅Li2O (MO = Fe2O3 or V2O5) glass systems by FTIR spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andronache, Constantin I., E-mail: androtin03@yahoo.com; Racolta, Dania, E-mail: androtin03@yahoo.com

    2014-11-24

    Glasses from the systems xMO⋅(100−x)[P2O5⋅Li2O] (MO = Fe2O3 or V2O5) with 0 ≤ x ≤ mol % were prepared under the same conditions and characterized by IR spectroscopy. The mode in which both Fe2O3 and V2O5 influence the local structure of these glasses was established. The iron ions generally modify the local structure of these glasses in a different way than the vanadium ions do. The results show that phosphate units are the main structural units of the glass system and that the iron and vanadium ions are located in the network.

  1. Advanced Software Development Workstation Project, phase 3

    NASA Technical Reports Server (NTRS)

    1991-01-01

    ACCESS provides a generic capability to develop software information system applications which are explicitly intended to facilitate software reuse. In addition, it provides the capability to retrofit existing large applications with a user friendly front end for preparation of input streams in a way that will reduce required training time, improve the productivity even of experienced users, and increase accuracy. Current and past work shows that ACCESS will be scalable to much larger object bases.

  2. Advanced transport operating system software upgrade: Flight management/flight controls software description

    NASA Technical Reports Server (NTRS)

    Clinedinst, Winston C.; Debure, Kelly R.; Dickson, Richard W.; Heaphy, William J.; Parks, Mark A.; Slominski, Christopher J.; Wolverton, David A.

    1988-01-01

    The Flight Management/Flight Controls (FM/FC) software for the Norden 2 (PDP-11/70M) computer installed on the NASA 737 aircraft is described. The software computes the navigation position estimates, guidance commands, those commands to be issued to the control surfaces to direct the aircraft in flight based on the modes selected on the Advanced Guidance Control System (AGSC) mode panel, and the flight path selected via the Navigation Control/Display Unit (NCDU).

  3. An expert system based software sizing tool, phase 2

    NASA Technical Reports Server (NTRS)

    Friedlander, David

    1990-01-01

    A software tool was developed for predicting the size of a future computer program at an early stage in its development. The system is intended to enable a user who is not expert in Software Engineering to estimate software size in lines of source code with an accuracy similar to that of an expert, based on the program's functional specifications. The project was planned as a knowledge based system with a field prototype as the goal of Phase 2 and a commercial system planned for Phase 3. The researchers used techniques from Artificial Intelligence and knowledge from human experts and existing software from NASA's COSMIC database. They devised a classification scheme for the software specifications, and a small set of generic software components that represent complexity and apply to large classes of programs. The specifications are converted to generic components by a set of rules and the generic components are input to a nonlinear sizing function which makes the final prediction. The system developed for this project predicted code sizes from the database with a bias factor of 1.06 and a fluctuation factor of 1.77, an accuracy similar to that of human experts but without their significant optimistic bias.
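
    The abstract outlines a pipeline in which specification text is mapped to generic components by rules and the components feed a nonlinear sizing function. A minimal sketch of that pipeline, with invented rules, weights, and exponent (none of them taken from the actual tool), might look like this:

      # Illustrative sketch of the rule-then-nonlinear-sizing idea (all numbers invented).

      # Hypothetical rule set mapping specification keywords to generic components.
      RULES = {
          "report": "output_formatting",
          "database": "data_management",
          "telemetry": "signal_processing",
      }

      # Hypothetical per-component weights used by the sizing function.
      WEIGHTS = {"output_formatting": 120, "data_management": 300, "signal_processing": 450}

      def classify(spec_items):
          """Convert free-text specification items into generic components."""
          return [RULES[k] for item in spec_items for k in RULES if k in item.lower()]

      def predict_sloc(components):
          """Nonlinear sizing: sub-linear growth models reuse among similar components."""
          total_weight = sum(WEIGHTS[c] for c in components)
          return int(total_weight ** 0.95)  # exponent < 1 is an assumed form, not the tool's

      spec = ["Generate summary report", "Store results in database", "Decode telemetry frames"]
      print(predict_sloc(classify(spec)))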

  4. Transform-Based Channel-Data Compression to Improve the Performance of a Real-Time GPU-Based Software Beamformer.

    PubMed

    Lok, U-Wai; Li, Pai-Chi

    2016-03-01

    Graphics processing unit (GPU)-based software beamforming has advantages over hardware-based beamforming of easier programmability and a faster design cycle, since complicated imaging algorithms can be efficiently programmed and modified. However, the need for a high data rate when transferring ultrasound radio-frequency (RF) data from the hardware front end to the software back end limits the real-time performance. Data compression methods can be applied to the hardware front end to mitigate the data transfer issue. Nevertheless, most decompression processes cannot be performed efficiently on a GPU, thus becoming another bottleneck of the real-time imaging. Moreover, lossless (or nearly lossless) compression is desirable to avoid image quality degradation. In a previous study, we proposed a real-time lossless compression-decompression algorithm and demonstrated that it can reduce the overall processing time because the reduction in data transfer time is greater than the computation time required for compression/decompression. This paper analyzes the lossless compression method in order to understand the factors limiting the compression efficiency. Based on the analytical results, a nearly lossless compression is proposed to further enhance the compression efficiency. The proposed method comprises a transformation coding method involving modified lossless compression that aims at suppressing amplitude data. The simulation results indicate that the compression ratio (CR) of the proposed approach can be enhanced from nearly 1.8 to 2.5, thus allowing a higher data acquisition rate at the front end. The spatial and contrast resolutions with and without compression were almost identical, and the process of decompressing the data of a single frame on a GPU took only several milliseconds. Moreover, the proposed method has been implemented in a 64-channel system that we built in-house to demonstrate the feasibility of the proposed algorithm in a real system. It was found
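
    To make the compression idea concrete, the toy sketch below losslessly round-trips a synthetic RF line by first-differencing the samples before entropy coding; it only illustrates why decorrelated channel data compresses better and is not the paper's transform-based algorithm or its amplitude-suppression step.

      # Minimal illustration of lossless channel-data compression by first-differencing;
      # the synthetic random-walk signal stands in for a real RF line.
      import numpy as np
      import zlib

      rng = np.random.default_rng(0)
      rf = np.cumsum(rng.integers(-50, 50, size=4096, dtype=np.int32)).astype(np.int32)

      def compress(samples: np.ndarray) -> bytes:
          residual = np.diff(samples, prepend=0)                # decorrelate
          return zlib.compress(residual.astype(np.int32).tobytes())

      def decompress(blob: bytes) -> np.ndarray:
          residual = np.frombuffer(zlib.decompress(blob), dtype=np.int32)
          return np.cumsum(residual).astype(np.int32)           # exact inverse

      blob = compress(rf)
      assert np.array_equal(decompress(blob), rf)               # lossless round trip
      print(len(blob) / rf.nbytes)                              # achieved compression ratio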

  5. Clustering of arc volcanoes caused by temperature perturbations in the back-arc mantle

    PubMed Central

    Lee, Changyeol; Wada, Ikuko

    2017-01-01

    Clustering of arc volcanoes in subduction zones indicates along-arc variation in the physical condition of the underlying mantle where the majority of arc magmas are generated. The sub-arc mantle is brought in from the back-arc largely by slab-driven mantle wedge flow. Dynamic processes in the back-arc, such as small-scale mantle convection, are likely to cause lateral variations in the back-arc mantle temperature. Here we use a simple three-dimensional numerical model to quantify the effects of back-arc temperature perturbations on the mantle wedge flow pattern and sub-arc mantle temperature. Our model calculations show that relatively small temperature perturbations in the back-arc result in vigorous inflow of hotter mantle and subdued inflow of colder mantle beneath the arc due to the temperature dependence of the mantle viscosity. This causes a three-dimensional mantle flow pattern that amplifies the along-arc variations in the sub-arc mantle temperature, providing a simple mechanism for volcano clustering. PMID:28660880

  6. Clustering of arc volcanoes caused by temperature perturbations in the back-arc mantle.

    PubMed

    Lee, Changyeol; Wada, Ikuko

    2017-06-29

    Clustering of arc volcanoes in subduction zones indicates along-arc variation in the physical condition of the underlying mantle where the majority of arc magmas are generated. The sub-arc mantle is brought in from the back-arc largely by slab-driven mantle wedge flow. Dynamic processes in the back-arc, such as small-scale mantle convection, are likely to cause lateral variations in the back-arc mantle temperature. Here we use a simple three-dimensional numerical model to quantify the effects of back-arc temperature perturbations on the mantle wedge flow pattern and sub-arc mantle temperature. Our model calculations show that relatively small temperature perturbations in the back-arc result in vigorous inflow of hotter mantle and subdued inflow of colder mantle beneath the arc due to the temperature dependence of the mantle viscosity. This causes a three-dimensional mantle flow pattern that amplifies the along-arc variations in the sub-arc mantle temperature, providing a simple mechanism for volcano clustering.

  7. Real-Time Multimission Event Notification System for Mars Relay

    NASA Technical Reports Server (NTRS)

    Wallick, Michael N.; Allard, Daniel A.; Gladden, Roy E.; Wang, Paul; Hy, Franklin H.

    2013-01-01

    As the Mars Relay Network is in constant flux (missions and teams going through their daily workflow), it is imperative that users are aware of such state changes. For example, a change by an orbiter team can affect operations on a lander team. This software provides an ambient view of the real-time status of the Mars network. The Mars Relay Operations Service (MaROS) comprises a number of tools to coordinate, plan, and visualize various aspects of the Mars Relay Network. As part of MaROS, a feature set was developed that operates on several levels of the software architecture. These levels include a Web-based user interface, a back-end "ReSTlet" built in Java, and databases that store the data as it is received from the network. The result is a real-time event notification and management system, so mission teams can track and act upon events on a moment-by-moment basis. This software retrieves events from MaROS and displays them to the end user. Updates happen in real time, i.e., messages are pushed to the user while logged into the system, and queued when the user is not online for later viewing. The software does not do away with the email notifications, but augments them with in-line notifications. Further, this software expands the events that can generate a notification, and allows user-generated notifications. Existing software sends a smaller subset of mission-generated notifications via email. A common complaint of users was that the system-generated e-mails often "get lost" with other e-mail that comes in. This software allows for an expanded set (including user-generated) of notifications displayed in-line of the program. By separating notifications, this can improve a user's workflow.
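
    The push-when-online, queue-when-offline behavior described above can be illustrated with a few lines of code; the class and method names here are invented for the sketch and are not the MaROS or ReSTlet API.

      # Hypothetical sketch of the push-when-online / queue-when-offline pattern.
      from collections import defaultdict, deque

      class EventNotifier:
          def __init__(self):
              self.online = {}                      # user -> callback for in-line display
              self.pending = defaultdict(deque)     # user -> queued events

          def login(self, user, callback):
              self.online[user] = callback
              while self.pending[user]:             # drain events missed while offline
                  callback(self.pending[user].popleft())

          def logout(self, user):
              self.online.pop(user, None)

          def publish(self, user, event):
              if user in self.online:
                  self.online[user](event)          # real-time push
              else:
                  self.pending[user].append(event)  # hold for later viewing

      notifier = EventNotifier()
      notifier.publish("lander_team", "Orbiter pass window changed")   # queued
      notifier.login("lander_team", lambda e: print("NOTIFY:", e))     # drained on login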

  8. Analyzing Software Errors in Safety-Critical Embedded Systems

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn R.

    1994-01-01

    This paper analyzes the root causes of safety-related software faults. Faults identified as potentially hazardous to the system are distributed somewhat differently over the set of possible error causes than non-safety-related software faults.

  9. Framework for End-User Programming of Cross-Smart Space Applications

    PubMed Central

    Palviainen, Marko; Kuusijärvi, Jarkko; Ovaska, Eila

    2012-01-01

    Cross-smart space applications are specific types of software services that enable users to share information, monitor the physical and logical surroundings and control it in a way that is meaningful for the user's situation. For developing cross-smart space applications, this paper makes two main contributions: it introduces (i) a component design and scripting method for end-user programming of cross-smart space applications and (ii) a backend framework of components that interwork to support the brunt of the RDFScript translation, and the use and execution of ontology models. Before end-user programming activities, the software professionals must develop easy-to-apply Driver components for the APIs of existing software systems. Thereafter, end-users are able to create applications from the commands of the Driver components with the help of the provided toolset. The paper also introduces the reference implementation of the framework, tools for the Driver component development and end-user programming of cross-smart space applications and the first evaluation results on their application. PMID:23202169
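
    A rough sketch of the Driver-component split described above: a software professional wraps an existing API as named commands, and an end-user script simply chains those command names. The one-command-per-line script format and all names here are assumptions for illustration, not the paper's RDFScript or toolset.

      # Illustrative-only sketch of the Driver-component idea.

      class LampDriver:
          """Driver component a software professional writes around an existing device API."""
          def commands(self):
              return {"lamp.on": self.turn_on, "lamp.off": self.turn_off}
          def turn_on(self):  print("lamp -> ON")
          def turn_off(self): print("lamp -> OFF")

      def run_end_user_script(script, drivers):
          registry = {}
          for d in drivers:
              registry.update(d.commands())
          for line in script.splitlines():
              cmd = line.strip()
              if cmd:
                  registry[cmd]()            # execute each named command in order

      run_end_user_script("lamp.on\nlamp.off", [LampDriver()])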

  10. Software Analyzes Complex Systems in Real Time

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Expert system software programs, also known as knowledge-based systems, are computer programs that emulate the knowledge and analytical skills of one or more human experts related to a specific subject. SHINE (Spacecraft Health Inference Engine) is one such program, a software inference engine (expert system) designed by NASA for the purpose of monitoring, analyzing, and diagnosing both real-time and non-real-time systems. It was developed to meet many of the Agency's demanding and rigorous artificial intelligence goals for current and future needs. NASA developed the sophisticated and reusable software based on the experience and requirements of its Jet Propulsion Laboratory's (JPL) Artificial Intelligence Research Group in developing expert systems for space flight operations, specifically the diagnosis of spacecraft health. It was designed to be efficient enough to operate in demanding real-time and limited hardware environments, and to be utilized by non-expert systems applications written in conventional programming languages. The technology is currently used in several ongoing NASA applications, including the Mars Exploration Rovers and the Spacecraft Health Automatic Reasoning Pilot (SHARP) program for the diagnosis of telecommunication anomalies during the Neptune Voyager Encounter. It is also finding applications outside of the Space Agency.
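
    The core of such an inference engine is forward chaining over a rule base. The sketch below is a generic, minimal illustration of that idea with invented facts and rules, not SHINE's actual implementation or rule language.

      # Minimal forward-chaining rule engine in the spirit of a knowledge-based monitor.

      def forward_chain(facts, rules):
          """Repeatedly fire rules whose conditions hold until nothing new is derived."""
          facts = set(facts)
          changed = True
          while changed:
              changed = False
              for conditions, conclusion in rules:
                  if conditions <= facts and conclusion not in facts:
                      facts.add(conclusion)
                      changed = True
          return facts

      rules = [
          ({"battery_temp_high", "charge_current_high"}, "battery_overcharge_suspected"),
          ({"battery_overcharge_suspected"}, "recommend_reduce_charge_rate"),
      ]
      print(forward_chain({"battery_temp_high", "charge_current_high"}, rules))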

  11. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Beckman, Carol S.; Benzinger, Leonora; Beshers, George; Hammerslag, David; Kimball, John; Kirslis, Peter A.; Render, Hal; Richards, Paul; Terwilliger, Robert

    1985-01-01

    The SAGA system is a software environment that is designed to support most of the software development activities that occur in a software lifecycle. The system can be configured to support specific software development applications using given programming languages, tools, and methodologies. Meta-tools are provided to ease configuration. The SAGA system consists of a small number of software components that are adapted by the meta-tools into specific tools for use in the software development application. The modules are design so that the meta-tools can construct an environment which is both integrated and flexible. The SAGA project is documented in several papers which are presented.

  12. Enhancing requirements engineering for patient registry software systems with evidence-based components.

    PubMed

    Lindoerfer, Doris; Mansmann, Ulrich

    2017-07-01

    Patient registries are instrumental for medical research. Often their structures are complex and their implementations use composite software systems to meet the wide spectrum of challenges. Commercial and open-source systems are available for registry implementation, but many research groups develop their own systems. Methodological approaches in the selection of software as well as the construction of proprietary systems are needed. We propose an evidence-based checklist, summarizing essential items for patient registry software systems (CIPROS), to accelerate the requirements engineering process. Requirements engineering activities for software systems follow traditional software requirements elicitation methods, general software requirements specification (SRS) templates, and standards. We performed a multistep procedure to develop a specific evidence-based CIPROS checklist: (1) A systematic literature review to build a comprehensive collection of technical concepts, (2) a qualitative content analysis to define a catalogue of relevant criteria, and (3) a checklist to construct a minimal appraisal standard. CIPROS is based on 64 publications and covers twelve sections with a total of 72 items. CIPROS also defines software requirements. Comparing CIPROS with traditional software requirements elicitation methods, SRS templates and standards show a broad consensus but differences in issues regarding registry-specific aspects. Using an evidence-based approach to requirements engineering for registry software adds aspects to the traditional methods and accelerates the software engineering process for registry software. The method we used to construct CIPROS serves as a potential template for creating evidence-based checklists in other fields. The CIPROS list supports developers in assessing requirements for existing systems and formulating requirements for their own systems, while strengthening the reporting of patient registry software system descriptions. It may be

  13. A Reference Model for Software and System Inspections. White Paper

    NASA Technical Reports Server (NTRS)

    He, Lulu; Shull, Forrest

    2009-01-01

    Software Quality Assurance (SQA) is an important component of the software development process. SQA processes provide assurance that the software products and processes in the project life cycle conform to their specified requirements by planning, enacting, and performing a set of activities to provide adequate confidence that quality is being built into the software. Typical techniques include: (1) Testing (2) Simulation (3) Model checking (4) Symbolic execution (5) Management reviews (6) Technical reviews (7) Inspections (8) Walk-throughs (9) Audits (10) Analysis (complexity analysis, control flow analysis, algorithmic analysis) (11) Formal methods. Our work over the last few years has resulted in substantial knowledge about SQA techniques, especially the areas of technical reviews and inspections. But can we apply the same QA techniques to the system development process? If yes, what kind of tailoring do we need before applying them in the system engineering context? If not, what types of QA techniques are actually used at the system level? And is there any room for improvement? After a brief examination of the system engineering literature (especially focused on NASA and DoD guidance) we found that: (1) System and software development processes interact with each other at different phases through the development life cycle. (2) Reviews are emphasized in both system and software development (Fig. 1.3). For some reviews (e.g., SRR, PDR, CDR), there are both system versions and software versions. (3) Analysis techniques are emphasized (e.g., Fault Tree Analysis, Preliminary Hazard Analysis) and some details are given about how to apply them. (4) Reviews are expected to use the outputs of the analysis techniques. In other words, these particular analyses are usually conducted in preparation for (before) reviews. The goal of our work is to explore the interaction between the Quality Assurance (QA) techniques at the system level and the software level.

  14. Oxygen Generation System Laptop Bus Controller Flight Software

    NASA Technical Reports Server (NTRS)

    Rowe, Chad; Panter, Donna

    2009-01-01

    The Oxygen Generation System Laptop Bus Controller Flight Software was developed to allow the International Space Station (ISS) program to activate specific components of the Oxygen Generation System (OGS) to perform a checkout of key hardware operation in a microgravity environment, as well as to perform preventative maintenance operations of system valves during a long period of what would otherwise be hardware dormancy. The software provides direct connectivity to the OGS Firmware Controller with pre-programmed tasks operated by on-orbit astronauts to exercise OGS valves and motors. The software is used to manipulate the pump, separator, and valves to alleviate the concerns of hardware problems due to long-term inactivity and to allow for operational verification of microgravity-sensitive components early enough so that, if problems are found, they can be addressed before the hardware is required for operation on-orbit. The decision was made to use existing on-orbit IBM ThinkPad A31p laptops and MIL-STD-1553B interface cards as the hardware configuration. The software at the time of this reporting was developed and tested for use under the Windows 2000 Professional operating system to ensure compatibility with the existing on-orbit computer systems.

  15. Use of software tools in the development of real time software systems

    NASA Technical Reports Server (NTRS)

    Garvey, R. C.

    1981-01-01

    The transformation of a preexisting software system into a larger and more versatile system with different mission requirements is discussed. The history of this transformation is used to illustrate the use of structured real-time programming techniques and tools to produce maintainable and somewhat transportable systems. The predecessor system is a single ground diagnostic system; its purpose is to exercise a computer-controlled hardware set prior to its deployment in its functional environment, as well as to test the equipment set by supplying certain well-known stimuli. The successor system (FTF) is required to perform certain testing and control functions while this hardware set is in its functional environment. Both systems must deal with heavy user input/output loads, and a new I/O requirement is included in the design of the FTF system. Human factors are enhanced by adding an improved console interface and special function keyboard handler. The additional features require the inclusion of much new software into the original set from which FTF was developed. As a result, it is necessary to split the system into a dual programming configuration with high rates of interground communications. A generalized information routing mechanism is used to support this configuration.

  16. Onboard shuttle on-line software requirements system: Prototype

    NASA Technical Reports Server (NTRS)

    Kolkhorst, Barbara; Ogletree, Barry

    1989-01-01

    The prototype discussed here was developed as proof of a concept for a system which could support high volumes of requirements documents with integrated text and graphics; the solution proposed here could be extended to other projects whose goal is to place paper documents in an electronic system for viewing and printing purposes. The technical problems (such as conversion of documentation between word processors, management of a variety of graphics file formats, and difficulties involved in scanning integrated text and graphics) would be very similar for other systems of this type. Indeed, technological advances in areas such as scanning hardware and software and display terminals ensure that some of the problems encountered here will be solved in the near term (less than five years). Examples of these solvable problems include automated input of integrated text and graphics, errors in the recognition process, and the loss of image information which results from the digitization process. The solution developed for the Online Software Requirements System is modular and allows hardware and software components to be upgraded or replaced as industry solutions mature. The extensive commercial software content allows the NASA customer to apply resources to solving the problem and maintaining documents.

  17. Visualization for Hyper-Heuristics: Back-End Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simon, Luke

    Modern society is faced with increasingly complex problems, many of which can be formulated as generate-and-test optimization problems. Yet, general-purpose optimization algorithms may sometimes require too much computational time. In these instances, hyper-heuristics may be used. Hyper-heuristics automate the design of algorithms to create a custom algorithm for a particular scenario, finding the solution significantly faster than its predecessor. However, it may be difficult to understand exactly how a design was derived and why it should be trusted. This project aims to address these issues by creating an easy-to-use graphical user interface (GUI) for hyper-heuristics and an easy-to-understand scientific visualization for the produced solutions. To support the development of this GUI, my portion of the research involved developing algorithms that would allow for parsing of the data produced by the hyper-heuristics. This data would then be sent to the front-end, where it would be displayed to the end user.
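
    The back-end parsing step described here can be pictured as turning raw solver output lines into structured records for the front-end; the line format and field names below are assumptions made for illustration only.

      # Hypothetical sketch of parsing hyper-heuristic run output for a front-end GUI.
      import json

      def parse_run_log(lines):
          records = []
          for line in lines:
              line = line.strip()
              if not line or line.startswith("#"):
                  continue                      # skip comments and blank lines
              gen, fitness, evals = line.split(",")
              records.append({"generation": int(gen),
                              "best_fitness": float(fitness),
                              "evaluations": int(evals)})
          return records

      raw = ["# generation,best_fitness,evaluations", "0,152.3,100", "1,117.8,200", "2,96.4,300"]
      print(json.dumps(parse_run_log(raw)))     # JSON payload handed to the front-end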

  18. Authoritative Authoring: Software That Makes Multimedia Happen.

    ERIC Educational Resources Information Center

    Florio, Chris; Murie, Michael

    1996-01-01

    Compares seven mid- to high-end multimedia authoring software systems that combine graphics, sound, animation, video, and text for Windows and Macintosh platforms. A run-time project was created with each program using video, animation, graphics, sound, formatted text, hypertext, and buttons. (LRW)

  19. 77 FR 50722 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems of Nuclear Power Plants.'' The DG-1208 is proposed...

  20. 78 FR 47011 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-02

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software... revised regulatory guide (RG), revision 1 of RG 1.171, ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This RG endorses American National Standards...

  1. End-to-end system of license plate localization and recognition

    NASA Astrophysics Data System (ADS)

    Zhu, Siyu; Dianat, Sohail; Mestha, Lalit K.

    2015-03-01

    An end-to-end license plate recognition system is proposed. It is composed of preprocessing, detection, segmentation, and character recognition stages to find and recognize plates from camera-based still images. The system utilizes connected component (CC) properties to quickly extract the license plate region. A two-stage CC filtering is utilized to address both shape and spatial relationship information, producing high precision and recall values for detection. Floating peaks and valleys of projection profiles are used to cut the license plates into individual characters. A turning-function-based method is proposed to quickly and accurately recognize each character. It is further accelerated using a curvature-histogram-based support vector machine. The INFTY dataset is used to train the recognition system, and the MediaLab license plate dataset is used for testing. The proposed system achieved an 89.45% F-measure for detection and 87.33% accuracy for the overall recognition rate, which is comparable to current state-of-the-art systems.
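
    The projection-profile segmentation step mentioned above cuts characters at columns where the vertical ink count drops to a valley. A minimal sketch on a synthetic binary "plate" (standing in for a real binarized image) is shown below; it covers only this one stage of the pipeline.

      # Sketch of projection-profile character segmentation on a synthetic binary plate.
      import numpy as np

      plate = np.zeros((12, 30), dtype=np.uint8)
      plate[2:10, 2:6] = 1     # character 1
      plate[2:10, 9:13] = 1    # character 2
      plate[2:10, 16:20] = 1   # character 3

      def segment_characters(binary):
          profile = binary.sum(axis=0)           # vertical projection profile
          ink = profile > 0
          segments, start = [], None
          for col, has_ink in enumerate(ink):
              if has_ink and start is None:
                  start = col
              elif not has_ink and start is not None:
                  segments.append((start, col))  # valley reached: close the character
                  start = None
          if start is not None:
              segments.append((start, len(ink)))
          return segments

      print(segment_characters(plate))   # -> [(2, 6), (9, 13), (16, 20)]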

  2. Development and implementation of software systems for imaging spectroscopy

    USGS Publications Warehouse

    Boardman, J.W.; Clark, R.N.; Mazer, A.S.; Biehl, L.L.; Kruse, F.A.; Torson, J.; Staenz, K.

    2006-01-01

    Specialized software systems have played a crucial role throughout the twenty-five year course of the development of the new technology of imaging spectroscopy, or hyperspectral remote sensing. By their very nature, hyperspectral data place unique and demanding requirements on the computer software used to visualize, analyze, process and interpret them. Often described as a marriage of the two technologies of reflectance spectroscopy and airborne/spaceborne remote sensing, imaging spectroscopy, in fact, produces data sets with unique qualities, unlike previous remote sensing or spectrometer data. Because of these unique spatial and spectral properties hyperspectral data are not readily processed or exploited with legacy software systems inherited from either of the two parent fields of study. This paper provides brief reviews of seven important software systems developed specifically for imaging spectroscopy.

  3. Software defined photon counting system for time resolved x-ray experiments.

    PubMed

    Acremann, Y; Chembrolu, V; Strachan, J P; Tyliszczak, T; Stöhr, J

    2007-01-01

    The time structure of synchrotron radiation allows time-resolved experiments with sub-100 ps temporal resolution using a pump-probe approach. However, the relaxation time of the samples may require a lower repetition rate of the pump pulse compared to the full repetition rate of the x-ray pulses from the synchrotron. Using only the x-ray pulse immediately following the pump pulse is not efficient and often requires special operation modes where only a few buckets of the storage ring are filled. We designed a novel software-defined photon counting system that allows a variety of pump-probe schemes to be implemented at the full repetition rate. The high number of photon counters allows the response of the sample to be detected at multiple time delays simultaneously, thus improving the efficiency of the experiment. The system has been successfully applied to time-resolved scanning transmission x-ray microscopy. However, this technique is applicable more generally.
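
    The scheme of counting photons into several pump-relative delay channels at the full repetition rate can be illustrated with a simple binning exercise; the pump period, bin count, and synthetic arrival times below are invented numbers, not the instrument's parameters.

      # Toy illustration of counting photons into pump-relative time-delay bins.
      import numpy as np

      pump_period = 1000.0      # ns between pump pulses (assumed)
      n_bins = 8                # simultaneous delay channels

      rng = np.random.default_rng(1)
      photon_times = np.sort(rng.uniform(0, 50 * pump_period, size=5000))  # arrival times, ns

      delays = photon_times % pump_period                 # delay relative to last pump pulse
      counts, edges = np.histogram(delays, bins=n_bins, range=(0, pump_period))
      for lo, hi, c in zip(edges[:-1], edges[1:], counts):
          print(f"delay {lo:6.1f}-{hi:6.1f} ns: {c} photons")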

  4. SEDS1 mission software verification using a signal simulator

    NASA Technical Reports Server (NTRS)

    Pierson, William E.

    1992-01-01

    The first flight of the Small Expendable Deployer System (SEDS1) is scheduled to fly as the secondary payload of a Delta 2 in March 1993. The objective of the SEDS1 mission is to collect data to validate the concept of tethered satellite systems and to verify computer simulations used to predict their behavior. SEDS1 will deploy a 50 lb. instrumented satellite as an end mass using a 20 km tether. Langley Research Center is providing the end mass instrumentation, while the Marshall Space Flight Center is designing and building the deployer. The objective of the experiment is to test the SEDS design concept by demonstrating that the system will satisfactorily deploy the full 20 km tether without stopping prematurely, come to a smooth stop on the application of a brake, and cut the tether at the proper time after it swings to the local vertical. Also, SEDS1 will collect data which will be used to test the accuracy of tether dynamics models used to simulate this type of deployment. The experiment will last about 1.5 hours and complete approximately 1.5 orbits. Radar tracking of the Delta II and end mass is planned. In addition, the SEDS1 on-board computer will continuously record, store, and transmit mission data over the Delta II S-band telemetry system. The Data System will count tether windings as the tether unwinds, log the times of each turn and other mission events, monitor tether tension, and record the temperature of system components. A summary of the measurements taken during the SEDS1 mission is shown. The Data System will also control the tether brake and cutter mechanisms. Preliminary versions of two major sections of the flight software, the data telemetry modules and the data collection modules, were developed and tested under the 1990 NASA/ASEE Summer Faculty Fellowship Program. To facilitate the debugging of these software modules, a prototype SEDS Data System was programmed to simulate turn count signals. During the 1991 summer program, the concept of

  5. THE EPA MULTIMEDIA INTEGRATED MODELING SYSTEM SOFTWARE SUITE

    EPA Science Inventory

    The U.S. EPA is developing a Multimedia Integrated Modeling System (MIMS) framework that will provide a software infrastructure or environment to support constructing, composing, executing, and evaluating complex modeling studies. The framework will include (1) common software ...

  6. Energy storage systems having an electrode comprising Li.sub.xS.sub.y

    DOEpatents

    Xiao, Jie; Zhang, Jiguang; Graff, Gordon L.; Liu, Jun; Wang, Wei; Zheng, Jianming; Xu, Wu; Shao, Yuyan; Yang, Zhenguo

    2016-08-02

    Improved lithium-sulfur energy storage systems can utilize Li.sub.xS.sub.y as a component in an electrode of the system. For example, the energy storage system can include a first electrode current collector, a second electrode current collector, and an ion-permeable separator separating the first and second electrode current collectors. A second electrode is arranged between the second electrode current collector and the separator. A first electrode is arranged between the first electrode current collector and the separator and comprises a first condensed-phase fluid comprising Li.sub.xS.sub.y. The energy storage system can be arranged such that the first electrode functions as a positive or a negative electrode.

  7. Software Safety Risk in Legacy Safety-Critical Computer Systems

    NASA Technical Reports Server (NTRS)

    Hill, Janice L.; Baggs, Rhoda

    2007-01-01

    Safety standards contain technical and process-oriented safety requirements. Technical requirements are those such as "must work" and "must not work" functions in the system. Process-oriented requirements are software engineering and safety management process requirements. Some standards address the system perspective and some cover just the software in the system; NASA-STD-8719.13B Software Safety Standard is the current standard of interest. NASA programs/projects will have their own set of safety requirements derived from the standard. Safety cases: a) a documented demonstration that a system complies with the specified safety requirements; b) evidence is gathered on the integrity of the system and put forward as an argued case [Gardener (ed.)]; c) problems occur when trying to meet safety standards, and thus make retrospective safety cases, in legacy safety-critical computer systems.

  8. Evaluation of a low-end architecture for collaborative software development, remote observing, and data analysis from multiple sites

    NASA Astrophysics Data System (ADS)

    Messerotti, Mauro; Otruba, Wolfgang; Hanslmeier, Arnold

    2000-06-01

    The Kanzelhoehe Solar Observatory is an observing facility located in Carinthia (Austria) and operated by the Institute of Geophysics, Astrophysics and Meteorology of the Karl-Franzens University Graz. A set of instruments for solar surveillance at different wavelength bands is continuously operated in automatic mode and is presently being upgraded to supply near-real-time solar activity indexes for space weather applications. In this frame, we tested a low-end software/hardware architecture running on the PC platform in a non-homogeneous, remotely distributed environment that allows efficient or moderately efficient application sharing at the Intranet and Extranet (i.e., wide area network) levels, respectively. Due to the geographical distribution of the participating teams (Trieste, Italy; Kanzelhoehe and Graz, Austria), we have been using such features for collaborative remote software development and testing, data analysis and calibration, and observing run emulation from multiple sites as well. In this work, we describe the architecture used and its performance, based on a series of application sharing tests we carried out to ascertain its effectiveness in real collaborative remote work, observations, and data exchange. The system proved to be reliable at the Intranet level for most distributed tasks, limited to less demanding ones at the Extranet level, but quite effective in remote instrument control when real-time response is not needed.

  9. 4. EXTERIOR OF SOUTH END OF BUILDING 108 SHOWING STORM ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. EXTERIOR OF SOUTH END OF BUILDING 108 SHOWING STORM PORCH ADDITION AND WINDOWS ALONG BACK (WEST SIDE) OF HOUSE. NOTE ORIGINAL SHORT CHIMNEY AT CREST OF ROOF. VIEW TO NORTH. - Rush Creek Hydroelectric System, Clubhouse Cottage, Rush Creek, June Lake, Mono County, CA

  10. Toward an integrated software platform for systems pharmacology

    PubMed Central

    Ghosh, Samik; Matsuoka, Yukiko; Asai, Yoshiyuki; Hsin, Kun-Yi; Kitano, Hiroaki

    2013-01-01

    Understanding complex biological systems requires the extensive support of computational tools. This is particularly true for systems pharmacology, which aims to understand the action of drugs and their interactions in a systems context. Computational models play an important role as they can be viewed as an explicit representation of biological hypotheses to be tested. A series of software and data resources are used for model development, verification and exploration of the possible behaviors of biological systems using the model that may not be possible or not cost effective by experiments. Software platforms play a dominant role in creativity and productivity support and have transformed many industries, techniques that can be applied to biology as well. Establishing an integrated software platform will be the next important step in the field. © 2013 The Authors. Biopharmaceutics & Drug Disposition published by John Wiley & Sons, Ltd. PMID:24150748

  11. Systems and methods for an integrated electrical sub-system powered by wind energy

    DOEpatents

    Liu, Yan [Ballston Lake, NY; Garces, Luis Jose [Niskayuna, NY

    2008-06-24

    Various embodiments relate to systems and methods related to an integrated electrically-powered sub-system and wind power system including a wind power source, an electrically-powered sub-system coupled to and at least partially powered by the wind power source, the electrically-powered sub-system being coupled to the wind power source through power converters, and a supervisory controller coupled to the wind power source and the electrically-powered sub-system to monitor and manage the integrated electrically-powered sub-system and wind power system.

  12. The Bi{sub 2}O{sub 3}–Fe{sub 2}O{sub 3}–Sb{sub 2}O{sub 5} system phase diagram refinement, Bi{sub 3}FeSb{sub 2}O{sub 11} structure peculiarities and magnetic properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egorysheva, A.V., E-mail: anna_egorysheva@rambler.ru; Ellert, O.G.; Gajtko, O.M.

    2015-05-15

    The refinement of the Bi{sub 2}O{sub 3}–Fe{sub 2}O{sub 3}–Sb{sub 2}O{sub 5} system phase diagram has been performed and the existence of the two ternary compounds has been confirmed. The first one with a pyrochlore-type structure (sp. gr. Fd 3-barm) exists in the wide solid solution region, (Bi{sub 2−x}Fe{sub x})Fe{sub 1+y}Sb{sub 1−y}O{sub 7±δ}, where x=0.1–0.4 and y=−0.13–0.11. The second one, Bi{sub 3}FeSb{sub 2}O{sub 11}, corresponds to the cubic KSbO{sub 3}-type structure (sp. gr. Pn 3-bar) with unit cell parameter a=9.51521(2) Å. The Rietveld structure refinement showed that this compound is characterized by a disordered structure. The Bi{sub 3}FeSb{sub 2}O{sub 11} factor group analysis has been carried out and a Raman spectrum has been investigated. According to magnetization measurements performed in the temperature range 2–300 K, it may be concluded that the Bi{sub 3}FeSb{sub 2}O{sub 11} magnetic properties can be substantially described as a superposition of strong short-range antiferromagnetic exchange interactions realized inside the [(FeSb{sub 2})O{sub 9}] 3D-framework via different pathways. - Graphical abstract: The refinement of the Bi{sub 2}O{sub 3}–Fe{sub 2}O{sub 3}–Sb{sub 2}O{sub 5} system phase diagram has been performed, and the existence of the solid solution with a pyrochlore-type structure (sp. gr. Fd 3-barm) and of Bi{sub 3}FeSb{sub 2}O{sub 11}, corresponding to the cubic KSbO{sub 3}-type structure (sp. gr. Pn 3-bar), has been confirmed. The structure refinement, Raman spectroscopy, and magnetic measurement data of Bi{sub 3}FeSb{sub 2}O{sub 11} are presented. - Highlights: • The Bi{sub 2}O{sub 3}–Fe{sub 2}O{sub 3}–Sb{sub 2}O{sub 5} system phase diagram refinement has been performed. • The Bi{sub 3}FeSb{sub 2}O{sub 11} existence along with the pyrochlore structure compound is shown. • It was determined that Bi{sub 3}FeSb{sub 2}O{sub 11} is of disordered cubic KSbO{sub 3}-type structure. • Factor group

  13. Sub-millimeter wave frequency heterodyne detector system

    NASA Technical Reports Server (NTRS)

    Siegel, Peter H. (Inventor); Dengler, Robert (Inventor); Mueller, Eric R. (Inventor)

    2009-01-01

    The present invention relates to sub-millimeter wave frequency heterodyne imaging systems. More specifically, the present invention relates to a sub-millimeter wave frequency heterodyne detector system for imaging the magnitude and phase of transmitted power through or reflected power off of mechanically scanned samples at sub-millimeter wave frequencies.

  14. Sub-millimeter wave frequency heterodyne detector system

    NASA Technical Reports Server (NTRS)

    Siegel, Peter H. (Inventor); Dengler, Robert (Inventor); Mueller, Eric R. (Inventor)

    2010-01-01

    The present invention relates to sub-millimeter wave frequency heterodyne imaging systems. More specifically, the present invention relates to a sub-millimeter wave frequency heterodyne detector system for imaging the magnitude and phase of transmitted power through or reflected power off of mechanically scanned samples at sub-millimeter wave frequencies.

  15. The complex spine: the multidimensional system of causal pathways for low-back disorders.

    PubMed

    Marras, William S

    2012-12-01

    The aim of this study was to examine the logic behind the knowledge of low-back problem causal pathways. Low-back pain and low-back disorders (LBDs) continue to represent the major musculoskeletal risk problem in the workplace, with the prevalence and costs of such disorders increasing over time. In recent years, there has been much criticism of the ability of ergonomics methods to control the risk of LBDs. Logical assessment of the systems logic associated with our understanding and prevention of LBDs. Current spine loading as well as spine tolerance research efforts are bringing the field to the point where there is a better systems understanding of the inextricable link between the musculoskeletal system and the cognitive system. Loading is influenced by both the physical environment factors as well as mental demands, whereas tolerances are defined by both physical tissue tolerance and biochemically based tissue sensitivities to pain. However, the logic used in many low-back risk assessment tools may be overly simplistic, given what is understood about causal pathways. Current tools typically assess only load or position in a very cursory manner. Efforts must work toward satisfying both the physical environment and the cognitive environment for the worker if one is to reliably lower the risk of low-back problems. This systems representation of LBD development may serve as a guide to identify gaps in our understanding of LBDs.

  16. A Generic Software Architecture For Prognostics

    NASA Technical Reports Server (NTRS)

    Teubert, Christopher; Daigle, Matthew J.; Sankararaman, Shankar; Goebel, Kai; Watkins, Jason

    2017-01-01

    Prognostics is a systems engineering discipline focused on predicting end-of-life of components and systems. As a relatively new and emerging technology, there are few fielded implementations of prognostics, due in part to practitioners perceiving a large hurdle in developing the models, algorithms, architecture, and integration pieces. As a result, no open software frameworks for applying prognostics currently exist. This paper introduces the Generic Software Architecture for Prognostics (GSAP), an open-source, cross-platform, object-oriented software framework and support library for creating prognostics applications. GSAP was designed to make prognostics more accessible and enable faster adoption and implementation by industry, by reducing the effort and investment required to develop, test, and deploy prognostics. This paper describes the requirements, design, and testing of GSAP. Additionally, a detailed case study involving battery prognostics demonstrates its use.
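
    The prognostics loop such a framework supports (estimate the current state, propagate it forward, report time to a threshold) can be caricatured in a few lines; the sketch below is a generic illustration under assumed names and a linear discharge model, not GSAP's actual API.

      # Generic illustration of a prognostics step: state estimate -> propagate -> end-of-life.

      def predict_end_of_discharge(soc_now, discharge_rate_per_hr, threshold=0.2):
          """Estimate remaining time (hours) until state of charge hits a threshold."""
          if discharge_rate_per_hr <= 0:
              return float("inf")
          return max((soc_now - threshold) / discharge_rate_per_hr, 0.0)

      history = [(0.0, 0.95), (1.0, 0.88), (2.0, 0.81)]          # (hours, state of charge)
      rate = (history[0][1] - history[-1][1]) / (history[-1][0] - history[0][0])
      print(f"predicted time to end of discharge: {predict_end_of_discharge(history[-1][1], rate):.1f} h")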

  17. Software System Safety and the NASA Aeronautics Blueprint

    NASA Technical Reports Server (NTRS)

    Holloway, C. Michael; Hayhurst, Kelly J.

    2002-01-01

    NASA's Aeronautics Blueprint lays out a research agenda for the Agency's aeronautics program. The word software appears only four times in this Blueprint, but the critical importance of safe and correct software to the fulfillment of the proposed research is evident on almost every page. Most of the technology solutions proposed to address challenges in aviation are software-dependent technologies. Of the fifty-two specific technology solutions described in the Blueprint, forty-one depend, at least in part, on software for success. For thirty-five of these forty-one, software is not only critical to success, but also to human safety. That is, implementing the technology solutions will require using software in such a way that it may, if not specified, designed, and implemented properly, lead to fatal accidents. These results have at least two implications for the research based on the Blueprint: (1) knowledge about the current state-of-the-art and state-of-the-practice in software engineering and software system safety is essential, and (2) research into current unsolved problems in these software disciplines is also essential.

  18. Advanced information processing system: Input/output network management software

    NASA Technical Reports Server (NTRS)

    Nagle, Gail; Alger, Linda; Kemp, Alexander

    1988-01-01

    The purpose of this document is to provide the software requirements and specifications for the Input/Output Network Management Services for the Advanced Information Processing System. This introduction and overview section is provided to briefly outline the overall architecture and software requirements of the AIPS system before discussing the details of the design requirements and specifications of the AIPS I/O Network Management software. A brief overview of the AIPS architecture is given, followed by a more detailed description of the network architecture.

  19. Advanced Transport Operating System (ATOPS) color displays software description microprocessor system

    NASA Technical Reports Server (NTRS)

    Slominski, Christopher J.; Plyler, Valerie E.; Dickson, Richard W.

    1992-01-01

    This document describes the software created for the Sperry Microprocessor Color Display System used for the Advanced Transport Operating Systems (ATOPS) project on the Transport Systems Research Vehicle (TSRV). The software delivery known as the 'baseline display system', is the one described in this document. Throughout this publication, module descriptions are presented in a standardized format which contains module purpose, calling sequence, detailed description, and global references. The global reference section includes procedures and common variables referenced by a particular module. The system described supports the Research Flight Deck (RFD) of the TSRV. The RFD contains eight cathode ray tubes (CRTs) which depict a Primary Flight Display, Navigation Display, System Warning Display, Takeoff Performance Monitoring System Display, and Engine Display.

  20. A Comprehensive Software and Database Management System for Glomerular Filtration Rate Estimation by Radionuclide Plasma Sampling and Serum Creatinine Methods.

    PubMed

    Jha, Ashish Kumar

    2015-01-01

    Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of the complex technique and cumbersome calculations, coupled with the lack of user-friendly software. The routinely used serum creatinine method (SrCrM) of GFR estimation also requires the use of online calculators which cannot be used without internet access. We have developed user-friendly software, "GFR estimation software", which gives the options to estimate GFR by the plasma sampling method as well as SrCrM. We have used Microsoft Windows(®) as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access(®) as the database tool to develop this software. We have used Russell's formula for GFR calculation by the plasma sampling method. GFR calculations using serum creatinine have been done using the MDRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs mathematical calculations correctly and is user-friendly. This software also enables storage and easy retrieval of the raw data, patient information, and calculated GFR for further processing and comparison. This is user-friendly software to calculate GFR by various plasma sampling methods and blood parameters. This software is also a good system for storing the raw and processed data for future analysis.
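
    Of the serum-creatinine methods listed, the Cockcroft-Gault estimate has a compact closed form; the snippet below implements that standard published formula (it is not code taken from the described software, and the example inputs are arbitrary).

      # Cockcroft-Gault creatinine-clearance estimate (standard published formula).

      def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
          """Creatinine clearance in mL/min."""
          crcl = ((140 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
          return crcl * 0.85 if female else crcl

      print(f"{cockcroft_gault(55, 70, 1.1, female=True):.1f} mL/min")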

  1. Emerging Technologies for Software-Reliant Systems of Systems

    DTIC Science & Technology

    2010-09-01

    conditions, such as temperature, sound, vibration, light intensity, motion, or proximity to objects [Raghavendra 2006]. Cognitive Network A cognitive...systems evolutionary development emergent behavior geographic distribution Maier also defines four types of SoS based on their management...by multinational teams. Many organizations use offshoring as a way to reduce costs of software development. Large web-based systems often use

  2. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Vouk, Mladen A.

    1989-01-01

    Twenty functionally equivalent programs were built and tested in a multiversion software experiment. Following unit testing, all programs were subjected to an extensive system test. In the process sixty-one distinct faults were identified among the versions. Less than 12 percent of the faults exhibited varying degrees of positive correlation. The common-cause (or similar) faults spanned as many as 14 components. However, a majority of these faults were trivial, and easily detected by proper unit and/or system testing. Only two of the seven similar faults were difficult faults, and both were caused by specification ambiguities. One of these faults exhibited variable identical-and-wrong response span, i.e. response span which varied with the testing conditions and input data. Techniques that could have been used to avoid the faults are discussed. For example, it was determined that back-to-back testing of 2-tuples could have been used to eliminate about 90 percent of the faults. In addition, four of the seven similar faults could have been detected by using back-to-back testing of 5-tuples. It is believed that most, if not all, similar faults could have been avoided had the specifications been written using more formal notation, the unit testing phase was subject to more stringent standards and controls, and better tools for measuring the quality and adequacy of the test data (e.g. coverage) were used.
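
    Back-to-back testing, credited above with the potential to catch roughly 90 percent of the faults, simply runs several independently written versions on the same inputs and flags disagreements; a minimal sketch (with a deliberately seeded fault in one toy version) follows.

      # Simple sketch of back-to-back testing across N program versions.

      def version_a(x): return x * x
      def version_b(x): return x ** 2
      def version_c(x): return x * x if x >= 0 else -(x * x)   # seeded fault for illustration

      def back_to_back(versions, inputs):
          disagreements = []
          for x in inputs:
              outputs = {v.__name__: v(x) for v in versions}
              if len(set(outputs.values())) > 1:        # any mismatch means a suspected fault
                  disagreements.append((x, outputs))
          return disagreements

      for x, outs in back_to_back([version_a, version_b, version_c], range(-3, 4)):
          print(x, outs)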

  3. Evaluation of Work-related Psychosocial and Ergonomics Factors in Relation to Low Back Discomfort in Emergency Unit Nurses

    PubMed Central

    Habibi, Ehsanollah; Pourabdian, Siamak; Atabaki, Azadeh Kianpour; Hoseini, Mohsen

    2012-01-01

    Background and Aim: High prevalence of low back pain is one of the most common problems among nurses. The aim of this study was to evaluate the relation of the intensity of low back discomfort to two low back pain contributing factors (ergonomics risk factors and psychosocial factors). Methods: This cross-sectional survey was conducted on 120 emergency unit nurses in Esfahan. The job content, ergonomics hazards, and Nordic questionnaires were used, in that order, for daily assessment of psychosocial and ergonomics factors and the intensity of low back discomfort. Nurses were questioned during a 5-week period, at the end of each shift work. The final results were analyzed with SPSS software 18/PASW by using the Spearman, Mann-Whitney, and Kolmogorov-Smirnov tests. Results: There was a significant relationship between work demand, job content, social support, and intensity of low back discomfort (P value <0.05). But there was not any link between intensity of low back discomfort and job control. Also, there was a significant relationship between intensity of low back discomfort and ergonomics risk factors. Conclusion: This study showed an indirect relationship between the intensity of low back discomfort and social support. This study also confirmed a direct relationship between the intensity of low back discomfort and work demand, job content, and ergonomics factors (awkward postures (rotating and bending), manual patient handling, repetitiveness, and standing continuously more than 30 min). So, to decrease work-related low back discomfort, psychosocial factors should be attended to in addition to ergonomics factors. PMID:22973487

  4. Crystal structures and electronic properties for the over-lithiated and Li–Ag substituted phases of Li{sub 9}V{sub 3}(P{sub 2}O{sub 7}){sub 3}(PO{sub 4}){sub 2} insertion electrode system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onoda, Masashige, E-mail: onoda.masashige.ft@u.tsukuba.ac.jp; Inagaki, Makoto; Saito, Hiroaki

    2014-11-15

    For the Li{sub 9}V{sub 3}(P{sub 2}O{sub 7}){sub 3}(PO{sub 4}){sub 2} insertion electrode system with a multiple-electron reaction, the over-lithiated phase Li{sub x}V{sub 3}(P{sub 2}O{sub 7}){sub 3}(PO{sub 4}){sub 2} with 99) and Li{sub 9−y}Ag{sub y}V{sub 3}(P{sub 2}O{sub 7}){sub 3}(PO{sub 4}){sub 2} (0

  5. Space Flight Software Development Software for Intelligent System Health Management

    NASA Technical Reports Server (NTRS)

    Trevino, Luis C.; Crumbley, Tim

    2004-01-01

    The slide presentation examines the Marshall Space Flight Center Flight Software Branch, including software development projects, mission critical space flight software development, software technical insight, advanced software development technologies, and continuous improvement in the software development processes and methods.

  6. Fissile material holdup measurement systems: an historical review of hardware and software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, Jeffrey Allen; Smith, Steven E; Rowe, Nathan C

    The measurement of fissile material holdup is accomplished by passively measuring the energy-dependent photon flux and/or passive neutron flux emitted from the fissile material deposited within an engineered process system. Both measurement modalities--photon and neutron--require the implementation of portable, battery-operated systems that are transported, by hand, from one measurement location to another. Because of this portability requirement, gamma-ray spectrometers are typically limited to inorganic scintillators, coupled to photomultiplier tubes, a small multi-channel analyzer, and a handheld computer for data logging. For neutron detection, polyethylene-moderated, cadmium-back-shielded He-3 thermal neutron detectors are used, coupled to nuclear electronics for supplying high voltage to the detector and amplifying the signal chain to the scaler for counting. Holdup measurement methods, including the concept of Generalized Geometry Holdup (GGH), are well presented by T. Douglas Reilly in LA-UR-07-5149 and P. Russo in LA-14206, yet both publications leave much of the evolutionary hardware and software to the imagination of the reader. This paper presents an historical review of systems that have been developed and implemented since the mid-1980s for the nondestructive assay of fissile material, in situ. Specifications for the next-generation holdup measurement systems are conjectured.

  7. Predicting Software Suitability Using a Bayesian Belief Network

    NASA Technical Reports Server (NTRS)

    Beaver, Justin M.; Schiavone, Guy A.; Berrios, Joseph S.

    2005-01-01

    The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian Belief Networks, a machine learning method. This research presents a Bayesian Network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.
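
    The kind of inference such a network performs can be reduced to looking up conditional probabilities and marginalizing over uncertain parents. The sketch below hand-rolls one such step with entirely invented probability values; it illustrates the idea only and is not the model or numbers from the study.

      # Hand-rolled illustration of a tiny belief-network lookup and marginalization.

      # P(suitable | skill, maturity, complexity); keys are (skill, maturity, complexity).
      CPT = {
          ("high", "high", "low"):  0.95,
          ("high", "high", "high"): 0.80,
          ("high", "low",  "high"): 0.60,
          ("low",  "high", "high"): 0.45,
          ("low",  "low",  "high"): 0.20,
      }

      def p_suitable(skill, maturity, complexity):
          return CPT.get((skill, maturity, complexity), 0.5)   # uninformative default

      def p_suitable_uncertain_skill(p_skill_high, maturity, complexity):
          """Marginalize over uncertain team skill: sum over s of P(suitable|s,...) P(s)."""
          return (p_skill_high * p_suitable("high", maturity, complexity)
                  + (1 - p_skill_high) * p_suitable("low", maturity, complexity))

      print(p_suitable("high", "low", "high"))
      print(p_suitable_uncertain_skill(0.7, "high", "high"))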

  8. GORGONA - the characteristic of the software system.

    NASA Astrophysics Data System (ADS)

    Artim, M.; Zejda, M.

    A description of the new software system is given. The GORGONA system was established for the processing, creation, and administration of archives of periodic variable star observations, observers, and observed variable stars.

  9. End-to-end Cyberinfrastructure and Data Services for Earth System Science Education and Research: Unidata's Plans and Directions

    NASA Astrophysics Data System (ADS)

    Ramamurthy, M.

    2005-12-01

    work together in a fundamentally different way. Likewise, the advent of digital libraries, grid computing platforms, interoperable frameworks, standards and protocols, open-source software, and community atmospheric models have been important drivers in shaping the use of a new generation of end-to-end cyberinfrastructure for solving some of the most challenging scientific and educational problems. In this talk, I will present an overview of the scientific, technological, and educational drivers and discuss recent developments in cyberinfrastructure and Unidata's role and directions in providing robust, end-to-end data services for solving geoscientific problems and advancing student learning.

  10. Glycation inhibitors extend yeast chronological lifespan by reducing advanced glycation end products and by back regulation of proteins involved in mitochondrial respiration.

    PubMed

    Kazi, Rubina S; Banarjee, Reema M; Deshmukh, Arati B; Patil, Gouri V; Jagadeeshaprasad, Mashanipalya G; Kulkarni, Mahesh J

    2017-03-06

    Advanced Glycation End products (AGEs) are implicated in the aging process. Thus, reducing AGEs by using glycation inhibitors may help to attenuate the aging process. In this study, using the Saccharomyces cerevisiae yeast system, we show that Aminoguanidine (AMG), a well-known glycation inhibitor, decreases the AGE modification of proteins under non-calorie restriction (NR) (2% glucose) and extends chronological lifespan (CLS) similar to that of the calorie restriction (CR) condition (0.5% glucose). Proteomic analysis revealed that AMG back regulates the expression of differentially expressed proteins, especially those involved in mitochondrial respiration, in the NR condition, suggesting that it switches metabolism from fermentation to respiration, mimicking CR. The AMG-induced back regulation of differentially expressed proteins could possibly be due to its chemical effect or, indirectly, to glycation inhibition. To delineate this, Metformin (MET), a structural analog of AMG and a mild glycation inhibitor, and Hydralazine (HYD), another potent glycation inhibitor but not a structural analog of AMG, were used. HYD was more effective than MET in mimicking AMG, suggesting that glycation inhibition was responsible for the restoration of differentially expressed proteins. Thus glycation inhibitors, particularly AMG, HYD and MET, extend yeast CLS by reducing AGEs, modulating the expression of proteins involved in mitochondrial respiration and possibly by scavenging glucose. This study reports the role of glycation in the aging process. In the non-caloric restriction condition, carbohydrates such as glucose promote protein glycation and reduce CLS, while the inhibitors of glycation such as AMG, HYD and MET mimic the caloric restriction condition by back regulating deregulated proteins involved in mitochondrial respiration, which could facilitate a shift of metabolism from fermentation to respiration and extend yeast CLS. These findings suggest that glycation inhibitors can be potential molecules that can be used

  11. Flank wears Simulation by using back propagation neural network when cutting hardened H-13 steel in CNC End Milling

    NASA Astrophysics Data System (ADS)

    Hazza, Muataz Hazza F. Al; Adesta, Erry Y. T.; Riza, Muhammad

    2013-12-01

    High speed milling has many advantages such as a higher removal rate and high productivity. However, a higher cutting speed increases the flank wear rate and thus reduces the cutting tool life. Therefore, estimating and predicting the flank wear length at early stages reduces the risk of unacceptable tooling cost. This research presents a neural network model for predicting and simulating the flank wear in the CNC end milling process. A set of sparse finish end milling experiments on AISI H13 at a hardness of 48 HRC was conducted to measure the flank wear length. The measured data were then used to train the developed neural network model. An artificial neural network (ANN) was applied to predict the flank wear length. The neural network contains twenty hidden layers with a feed-forward back-propagation hierarchy. The neural network was designed with the MATLAB Neural Network Toolbox. The results show a high correlation between the predicted and the observed flank wear, which indicates the validity of the model.
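
    A minimal NumPy sketch of a feed-forward network trained by back-propagation on cutting parameters is shown below; it stands in for the MATLAB Neural Network Toolbox model described above, and the synthetic data, single hidden layer of 20 units, and learning rate are placeholders rather than the paper's configuration.

        # Minimal feed-forward / back-propagation sketch in NumPy. The training
        # data, layer size, and learning rate are placeholder assumptions.
        import numpy as np

        rng = np.random.default_rng(0)

        # Placeholder samples: [cutting speed, feed rate, depth of cut] -> flank wear
        X = rng.uniform(0.0, 1.0, size=(40, 3))                        # normalised inputs
        y = (0.3 * X[:, 0] + 0.5 * X[:, 1] * X[:, 2]).reshape(-1, 1)   # synthetic target

        n_hidden = 20                                                  # hidden units (assumed)
        W1 = rng.normal(0, 0.5, (3, n_hidden)); b1 = np.zeros(n_hidden)
        W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        lr = 0.5
        for epoch in range(5000):
            # Forward pass.
            h = sigmoid(X @ W1 + b1)
            y_hat = h @ W2 + b2                      # linear output for regression
            err = y_hat - y
            # Backward pass (gradients of the mean squared error).
            gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
            dh = (err @ W2.T) * h * (1 - h)
            gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
            # Gradient-descent update.
            W2 -= lr * gW2; b2 -= lr * gb2
            W1 -= lr * gW1; b1 -= lr * gb1

        print("final MSE:", float((err ** 2).mean()))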

  12. Configuration management and software measurement in the Ground Systems Development Environment (GSDE)

    NASA Technical Reports Server (NTRS)

    Church, Victor E.; Long, D.; Hartenstein, Ray; Perez-Davila, Alfredo

    1992-01-01

    A set of functional requirements for software configuration management (CM) and metrics reporting for Space Station Freedom ground systems software are described. This report is one of a series from a study of the interfaces among the Ground Systems Development Environment (GSDE), the development systems for the Space Station Training Facility (SSTF) and the Space Station Control Center (SSCC), and the target systems for SSCC and SSTF. The focus is on the CM of the software following delivery to NASA and on the software metrics that relate to the quality and maintainability of the delivered software. The CM and metrics requirements address specific problems that occur in large-scale software development. Mechanisms to assist in the continuing improvement of mission operations software development are described.

  13. Sensor Open System Architecture (SOSA) evolution for collaborative standards development

    NASA Astrophysics Data System (ADS)

    Collier, Charles Patrick; Lipkin, Ilya; Davidson, Steven A.; Baldwin, Rusty; Orlovsky, Michael C.; Ibrahim, Tim

    2017-04-01

    The Sensor Open System Architecture (SOSA) is a C4ISR-focused technical and economic collaborative effort between the Air Force, Navy, Army, the Department of Defense (DoD), Industry, and other Governmental agencies to develop (and incorporate) a technical Open Systems Architecture standard in order to maximize C4ISR sub-system, system, and platform affordability, re-configurability, and hardware/software/firmware re-use. The SOSA effort will effectively create an operational and technical framework for the integration of disparate payloads into C4ISR systems, with a focus on the development of a modular decomposition (defining functions and behaviors) and associated key interfaces (physical and logical) for a common multi-purpose architecture for radar, EO/IR, SIGINT, EW, and Communications. SOSA addresses hardware, software, and mechanical/electrical interfaces. The modular decomposition will produce a set of re-usable components, interfaces, and sub-systems that engender reusable capabilities. This, in effect, creates a realistic and affordable ecosystem enabling mission effectiveness through systematic re-use of all available re-composed hardware, software, and electrical/mechanical base components and interfaces. To this end, SOSA will leverage existing standards as much as possible and evolve the SOSA architecture through modification, reuse, and enhancements to achieve C4ISR goals. This paper will present accomplishments over the first year of the SOSA initiative.

  14. Engaging New Software.

    ERIC Educational Resources Information Center

    Allen, Denise

    1994-01-01

    Reviews three educational computer software products: (1) a compact disc-read only memory (CD-ROM) bundle of five mathematics programs from the Apple Education Series; (2) "Sammy's Science House," with science activities for preschool through second grade (Edmark); and (3) "The Cat Came Back," an interactive CD-ROM game designed to build language…

  15. Acid Rain Data System: Progressive application of information technology for operation of a market-based environmental program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, D.A.

    1995-12-31

    Under the Acid Rain Program, by statute and regulation, affected utility units are allocated annual allowances. Each allowance permits a unit to emit one ton of SO{sub 2} during or after a specified year. At year end, utilities must hold allowances equal to or greater than the cumulative SO{sub 2} emissions throughout the year from their affected units. The program has been developing, on a staged basis, two major computer-based information systems: the Allowance Tracking System (ATS) for tracking creation, transfer, and ultimate use of allowances; and the Emissions Tracking System (ETS) for transmission, receipt, processing, and inventory of continuous emissions monitoring (CEM) data. The systems collectively form a logical Acid Rain Data System (ARDS). ARDS will be the largest information system ever used to operate and evaluate an environmental program. The paper describes the progressive software engineering approach the Acid Rain Program has been using to develop ARDS. Iterative software version releases, keyed to critical program deadlines, add the functionality required to support specific statutory and regulatory provisions. Each software release also incorporates continual improvements for efficiency, user-friendliness, and lower life-cycle costs. The program is migrating the independent ATS and ETS systems into a logically coordinated True-Up processing model, to support the end-of-year reconciliation for balancing allowance holdings against annual emissions and compliance plans for Phase 1 affected utility units. The paper provides specific examples and data to illustrate exciting applications of today's information technology in ARDS.

  16. Instructional Support Software System. Final Report.

    ERIC Educational Resources Information Center

    McDonnell Douglas Astronautics Co. - East, St. Louis, MO.

    This report describes the development of the Instructional Support System (ISS), a large-scale, computer-based training system that supports both computer-assisted instruction and computer-managed instruction. Written in the Ada programming language, the ISS software package is designed to be machine independent. It is also grouped into functional…

  17. Non-developmental item computer systems and the malicious software threat

    NASA Technical Reports Server (NTRS)

    Bown, Rodney L.

    1991-01-01

    The following subject areas are covered: a DOD development system - the Army Secure Operating System; non-development commercial computer systems; security, integrity, and assurance of service (SI and A); post delivery SI and A and malicious software; computer system unique attributes; positive feedback to commercial computer systems vendors; and NDI (Non-Development Item) computers and software safety.

  18. 30 CFR 75.1101-21 - Back-up water system.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Back-up water system. 75.1101-21 Section 75.1101-21 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY... water system. One fire hose outlet together with a length of hose capable of extending to the belt drive...

  19. 30 CFR 75.1101-21 - Back-up water system.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Back-up water system. 75.1101-21 Section 75.1101-21 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY... water system. One fire hose outlet together with a length of hose capable of extending to the belt drive...

  20. 30 CFR 75.1101-21 - Back-up water system.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Back-up water system. 75.1101-21 Section 75.1101-21 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY... water system. One fire hose outlet together with a length of hose capable of extending to the belt drive...

  1. 30 CFR 75.1101-21 - Back-up water system.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Back-up water system. 75.1101-21 Section 75.1101-21 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY... water system. One fire hose outlet together with a length of hose capable of extending to the belt drive...

  2. 30 CFR 75.1101-21 - Back-up water system.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Back-up water system. 75.1101-21 Section 75.1101-21 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY... water system. One fire hose outlet together with a length of hose capable of extending to the belt drive...

  3. Molybdenum oxide and molybdenum oxide-nitride back contacts for CdTe solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drayton, Jennifer A., E-mail: drjadrayton@yahoo.com; Geisthardt, Russell M., E-mail: Russell.Geisthardt@gmail.com; Sites, James R., E-mail: james.sites@colostate.edu

    2015-07-15

    Molybdenum oxide (MoO{sub x}) and molybdenum oxynitride (MoON) thin film back contacts were formed by a unique ion-beam sputtering and ion-beam-assisted deposition process onto CdTe solar cells and compared to back contacts made using carbon–nickel (C/Ni) paint. Glancing-incidence x-ray diffraction and x-ray photoelectron spectroscopy measurements show that partially crystalline MoO{sub x} films are created with a mixture of Mo, MoO{sub 2}, and MoO{sub 3} components. Lower crystallinity content is observed in the MoON films, with an additional component of molybdenum nitride present. Three different film thicknesses of MoO{sub x} and MoON were investigated that were capped in situ in Ni. Small area devices were delineated and characterized using current–voltage (J-V), capacitance–frequency, capacitance–voltage, electroluminescence, and light beam-induced current techniques. In addition, J-V data measured as a function of temperature (JVT) were used to estimate back barrier heights for each thickness of MoO{sub x} and MoON and for the C/Ni paint. Characterization prior to stressing indicated the devices were similar in performance. Characterization after stress testing indicated little change to cells with 120 and 180-nm thick MoO{sub x} and MoON films. However, moderate-to-large cell degradation was observed for 60-nm thick MoO{sub x} and MoON films and for C/Ni painted back contacts.

  4. Ground Systems Development Environment (GSDE) software configuration management

    NASA Technical Reports Server (NTRS)

    Church, Victor E.; Long, D.; Hartenstein, Ray; Perez-Davila, Alfredo

    1992-01-01

    This report presents a review of the software configuration management (CM) plans developed for the Space Station Training Facility (SSTF) and the Space Station Control Center. The scope of the CM assessed in this report is the Systems Integration and Testing Phase of the Ground Systems development life cycle. This is the period following coding and unit test and preceding delivery to operational use. This report is one of a series from a study of the interfaces among the Ground Systems Development Environment (GSDE), the development systems for the SSTF and the SSCC, and the target systems for SSCC and SSTF. This is the last report in the series. The focus of this report is on the CM plans developed by the contractors for the Mission Systems Contract (MSC) and the Training Systems Contract (TSC). CM requirements are summarized and described in terms of operational software development. The software workflows proposed in the TSC and MSC plans are reviewed in this context, and evaluated against the CM requirements defined in earlier study reports. Recommendations are made to improve the effectiveness of CM while minimizing its impact on the developers.

  5. Sustaining Software-Intensive Systems

    DTIC Science & Technology

    2006-05-01

    2.2 Multi-Service Operational Test and Evaluation; 2.3 Stable Software Baseline ... or equivalent document • completed Multi-Service Operational Test and Evaluation (MOT&E) for the potential production software package (or OT&E if not multi-service) • stable software production baseline • complete and current software documentation • Authority to Operate (ATO) for an

  6. Real time software for a heat recovery steam generator control system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valdes, R.; Delgadillo, M.A.; Chavez, R.

    1995-12-31

    This paper addresses the development and successful implementation of real-time software for the Heat Recovery Steam Generator (HRSG) control system of a Combined Cycle Power Plant. The real-time software for the HRSG control system physically resides in a Control and Acquisition System (SAC), which is a component of a distributed control system (DCS). The SAC is a programmable controller. The DCS installed at the Gomez Palacio power plant in Mexico accomplishes the functions of logic, analog and supervisory control. The DCS is based on microprocessors, and the architecture consists of workstations operating as a Man-Machine Interface (MMI), linked to SAC controllers by means of a communication system. The HRSG real-time software is composed of an operating system, drivers, dedicated computer programs and application computer programs. The operating system used for the development of this software was the MultiTasking Operating System (MTOS). The application software developed at IIE for the HRSG control system basically consisted of a set of digital algorithms for the regulation of the main process variables of the HRSG. By using the multitasking feature of MTOS, the algorithms are executed pseudo-concurrently. In this way, the application programs continuously use the resources of the operating system to perform their functions through a uniform service interface. The application software of the HRSG consists of three tasks, each of which has dedicated responsibilities. The drivers were developed for handling the hardware resources of the SAC controller, which in turn allows signal acquisition and data communication with an MMI. The dedicated programs were developed for hardware diagnostics, task initialization, access to the database and fault tolerance. The application software and the dedicated software for the HRSG control system were developed using the C programming language due to its compactness, portability and efficiency.

  7. Next Processor Module: A Hardware Accelerator of UT699 LEON3-FT System for On-Board Computer Software Simulation

    NASA Astrophysics Data System (ADS)

    Langlois, Serge; Fouquet, Olivier; Gouy, Yann; Riant, David

    2014-08-01

    On-Board Computers (OBC) increasingly use integrated systems-on-chip (SOC) that embed processors running from 50 MHz up to several hundred MHz, around which dedicated communication controllers and other Input/Output channels are plugged. For ground testing and On-Board SoftWare (OBSW) validation purposes, a representative simulation of these systems, faster than real time and with cycle-true timing of execution, is not achieved with current purely software simulators. In recent years some hybrid solutions were put in place ([1], [2]), including hardware in the loop, so as to add accuracy and performance to the computer software simulation. This paper presents the results of the work undertaken by Thales Alenia Space (TAS-F) at the end of 2010, which led to a validated HW simulator of the UT699 by mid-2012 that is now qualified and fully used in operational contexts.

  8. An evaluation of the physiological demands of elite rugby union using Global Positioning System tracking software.

    PubMed

    Cunniffe, Brian; Proctor, Wayne; Baker, Julien S; Davies, Bruce

    2009-07-01

    The current case study attempted to document the contemporary demands of elite rugby union. Players (n = 2) were tracked continuously during a competitive team selection game using Global Positioning System (GPS) software. Data revealed that players covered on average 6,953 m during play (83 minutes). Of this distance, 37% (2,800 m) was spent standing and walking, 27% (1,900 m) jogging, 10% (700 m) cruising, 14% (990 m) striding, 5% (320 m) high-intensity running, and 6% (420 m) sprinting. Greater running distances were observed for both players (6.7% back; 10% forward) in the second half of the game. Positional data revealed that the back performed a greater number of sprints (>20 km·h(-1)) than the forward (34 vs. 19) during the game. Conversely, the forward entered the lower speed zone (6-12 km·h(-1)) on a greater number of occasions than the back (315 vs. 229) but spent less time standing and walking (66.5 vs. 77.8%). Players were found to perform 87 moderate-intensity runs (>14 km·h(-1)) covering an average distance of 19.7 m (SD = 14.6). Average distances of 15.3 m (back) and 17.3 m (forward) were recorded for each sprint burst (>20 km·h(-1)), respectively. Players exercised at approximately 80 to 85% VO2max during the course of the game with a mean heart rate of 172 b·min(-1) (approximately 88% HRmax). This corresponded to an estimated energy expenditure of 6.9 and 8.2 MJ, back and forward, respectively. The current study provides insight into the intense and physical nature of elite rugby using "on the field" assessment of physical exertion. Future use of this technology may help practitioners in design and implementation of individual position-specific training programs with appropriate management of player exercise load.
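
    As an illustration of how such per-zone distances can be derived from raw GPS samples, the sketch below bins per-sample speeds into zones and accumulates distance; the 1 Hz sampling rate, the sample data, and the exact zone boundaries are assumptions for illustration, not the study's processing pipeline.

        # Illustrative post-processing of GPS speed samples into speed zones.
        # Sample data and the 1 Hz rate are invented; zone bands loosely follow
        # the speed ranges quoted in the abstract.
        SAMPLE_INTERVAL_S = 1.0   # assumed 1 Hz GPS output

        # (zone name, lower bound km/h, upper bound km/h)
        ZONES = [
            ("stand/walk", 0.0, 6.0),
            ("jog",        6.0, 12.0),
            ("cruise",     12.0, 14.0),
            ("stride",     14.0, 18.0),
            ("high-int.",  18.0, 20.0),
            ("sprint",     20.0, float("inf")),
        ]

        def distance_per_zone(speeds_kmh):
            """Sum distance (m) covered in each speed zone from per-sample speeds."""
            totals = {name: 0.0 for name, _, _ in ZONES}
            for v in speeds_kmh:
                metres = (v / 3.6) * SAMPLE_INTERVAL_S    # km/h -> m/s, times dt
                for name, lo, hi in ZONES:
                    if lo <= v < hi:
                        totals[name] += metres
                        break
            return totals

        if __name__ == "__main__":
            fake_speeds = [4.5, 8.0, 13.0, 21.5, 19.0, 2.0, 25.0]   # km/h samples
            for zone, metres in distance_per_zone(fake_speeds).items():
                print(f"{zone:>10}: {metres:6.1f} m")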

  9. The ASTRI mini-array software system (MASS) implementation: a proposal for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Tanci, Claudio; Tosti, Gino; Conforti, Vito; Schwarz, Joseph; Antolini, Elisa; Antonelli, L. A.; Bulgarelli, Andrea; Bigongiari, Ciro; Bruno, Pietro; Canestrari, Rodolfo; Capalbi, Milvia; Cascone, Enrico; Catalano, Osvaldo; Di Paola, Andrea; Di Pierro, Federico; Fioretti, Valentina; Gallozzi, Stefano; Gardiol, Daniele; Gianotti, Fulvio; Giro, Enrico; Grillo, Alessandro; La Palombara, Nicola; Leto, Giuseppe; Lombardi, Saverio; Maccarone, Maria C.; Pareschi, Giovanni; Russo, Federico; Sangiorgi, Pierluca; Scuderi, Salvo; Stringhetti, Luca; Testa, Vincenzo; Trifoglio, Massimo; Vercellone, Stefano; Zoli, Andrea

    2016-08-01

    The ASTRI mini-array, composed of nine small-size dual mirror (SST-2M) telescopes, has been proposed to be installed at the southern site of the Cherenkov Telescope Array (CTA), as a set of preproduction units of the CTA observatory. The ASTRI mini-array is a collaborative and international effort carried out by Italy, Brazil and South Africa and led by the Italian National Institute of Astrophysics, INAF. We present the main features of the current implementation of the Mini-Array Software System (MASS) now in use for the activities of the ASTRI SST-2M telescope prototype located at the INAF observing station on Mt. Etna, Italy and the characteristics that make it a prototype for the CTA control software system. CTA Data Management (CTADATA) and CTA Array Control and Data Acquisition (CTA-ACTL) requirements and guidelines as well as the ASTRI use cases were considered in the MASS design, most of its features are derived from the Atacama Large Millimeter/sub-millimeter Array Control software. The MASS will provide a set of tools to manage all onsite operations of the ASTRI mini-array in order to perform the observations specified in the short term schedule (including monitoring and controlling all the hardware components of each telescope and calibration device), to analyze the acquired data online and to store/retrieve all the data products to/from the onsite repository.

  10. Anomalous magnetoelastic behaviour near morphotropic phase boundary in ferromagnetic Tb{sub 1-x}Nd{sub x}Co{sub 2} system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murtaza, Adil; Yang, Sen, E-mail: yang.sen@mail.xjtu.edu.cn; Zhou, Chao

    2016-08-01

    In this work, we report a morphotropic phase boundary (MPB) involved ferromagnetic system Tb{sub 1-x}Nd{sub x}Co{sub 2} and reveal the corresponding structural and magnetoelastic properties of this system. With high resolution synchrotron X-ray diffractometry, the crystal structure of the TbCo{sub 2}-rich side is detected to be rhombohedral and that of NdCo{sub 2}-rich side is tetragonal below their respective Curie temperatures T{sub C}. The MPB composition Tb{sub 0.35}Nd{sub 0.65}Co{sub 2} corresponds to the coexistence of the rhombohedral phase (R-phase) and tetragonal phase (T-phase). Contrary to previously reported MPB involved ferromagnetic systems, the MPB composition of Tb{sub 0.35}Nd{sub 0.65}Co{sub 2} shows minimum magnetization which can be understood as compensation of sublattice moments between the R-phase and the T-phase. Furthermore, magnetostriction of Tb{sub 1-x}Nd{sub x}Co{sub 2} decreases with increasing Nd concentration until x = 0.8 and then increases in the negative direction with further increasing Nd concentration; the optimum point for magnetoelastic properties lies towards the rhombohedral phase. Our work not only shows an anomalous type of ferromagnetic MPB but also provides an effective way to design functional materials.

  11. Integrated testing and verification system for research flight software

    NASA Technical Reports Server (NTRS)

    Taylor, R. N.

    1979-01-01

    The MUST (Multipurpose User-oriented Software Technology) program is being developed to cut the cost of producing research flight software through a system of software support tools. An integrated verification and testing capability was designed as part of MUST. Documentation, verification and test options are provided with special attention on real-time, multiprocessing issues. The needs of the entire software production cycle were considered, with effective management and reduced lifecycle costs as foremost goals.

  12. Hardware-assisted software clock synchronization for homogeneous distributed systems

    NASA Technical Reports Server (NTRS)

    Ramanathan, P.; Kandlur, Dilip D.; Shin, Kang G.

    1990-01-01

    A clock synchronization scheme that strikes a balance between hardware and software solutions is proposed. The proposed scheme is a software algorithm that uses minimal additional hardware to achieve reasonably tight synchronization. Unlike other software solutions, the guaranteed worst-case skews can be made insensitive to the maximum variation of message transit delay in the system. The scheme is particularly suitable for large partially connected distributed systems with topologies that support simple point-to-point broadcast algorithms. Examples of such topologies include the hypercube and the mesh interconnection structures.
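
    For context, the sketch below shows a generic, purely software offset estimate over a single point-to-point exchange, whose error bound depends on the round-trip transit delay; it illustrates why transit-delay variation matters and is not the hardware-assisted algorithm of the paper. The read_remote_clock callable is a stand-in for whatever message exchange the system provides.

        # Generic software clock-offset estimation over a point-to-point exchange
        # (Cristian-style); an illustration only, not the paper's algorithm.
        import time

        def estimate_offset(read_remote_clock):
            """Estimate remote-minus-local clock offset and its uncertainty.

            read_remote_clock() is assumed to return the peer's clock value in
            seconds; here it is any callable the caller supplies, e.g. a network RPC.
            """
            t_send = time.monotonic()
            remote = read_remote_clock()
            t_recv = time.monotonic()
            round_trip = t_recv - t_send
            # Assume the reply was generated halfway through the round trip; the
            # residual uncertainty is bounded by half the transit delay.
            offset = remote - (t_send + round_trip / 2.0)
            return offset, round_trip / 2.0

        if __name__ == "__main__":
            # Simulated peer whose clock runs 0.25 s ahead of the local clock.
            skewed_peer = lambda: time.monotonic() + 0.25
            off, err = estimate_offset(skewed_peer)
            print(f"estimated offset {off:+.3f} s (+/- {err:.3f} s)")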

  13. A NOVEL CO{sub 2} SEPARATION SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert J. Copeland

    2000-03-01

    Because of concern over global climate change, new systems are needed that produce electricity from fossil fuels and emit less CO{sub 2}. The fundamental problem with current systems which recover and concentrate CO{sub 2} from flue gases is the need to separate dilute CO{sub 2} and pressurize it to roughly 35 atm for storage or sequestration. This is an energy intensive process that can reduce plant efficiency by 9-37% and double the cost of electricity. There are two fundamental reasons for the current high costs of power consumption, CO{sub 2} removal, and concentration systems: (1) most disposal, storage and sequestering systems require high pressure CO{sub 2} (at roughly 35 atm). Thus, assuming 90% removal of the CO{sub 2} from a typical atmospheric pressure flue gas that contains 10% CO{sub 2}, the CO{sub 2} is essentially being compressed from 0.01 atm to 35 atm (a pressure ratio of 3,500). This is a very energy intensive process. (2) The absorption-based (amine) separation processes that are used to remove the CO{sub 2} from the flue gas and compress it to 1 atm consume approximately 10 times as much energy as the theoretical work of compression because they are heat driven cycles working over a very low temperature difference. Thus, to avoid the problems of current systems, we need a power cycle in which the CO{sub 2} produced by the oxidation of the fuel is not diluted with a large excess of nitrogen, a power cycle which would allow us to eliminate the very inefficient thermally driven absorption/desorption step. In addition, we would want the CO{sub 2} to be naturally available at high pressure (approximately 3 to 6 atmospheres), which would allow us to greatly reduce the compression ratio between generation and storage (from roughly 3,500 to approximately 8).

  14. A NOVEL CO{sub 2} SEPARATION SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert J. Copeland

    2000-05-01

    Because of concern over global climate change, new systems are needed that produce electricity from fossil fuels and emit less CO{sub 2}. The fundamental problem with current systems which recover and concentrate CO{sub 2} from flue gases is the need to separate dilute CO{sub 2} and pressurize it to roughly 35 atm for storage or sequestration. This is an energy intensive process that can reduce plant efficiency by 9-37% and double the cost of electricity. There are two fundamental reasons for the current high costs of power consumption, CO{sub 2} removal, and concentration systems: (1) most disposal, storage and sequestering systems require high pressure CO{sub 2} (at roughly 35 atm). Thus, assuming 90% removal of the CO{sub 2} from a typical atmospheric pressure flue gas that contains 10% CO{sub 2}, the CO{sub 2} is essentially being compressed from 0.01 atm to 35 atm (a pressure ratio of 3,500). This is a very energy intensive process. (2) The absorption-based (amine) separation processes that are used to remove the CO{sub 2} from the flue gas and compress it to 1 atm consume approximately 10 times as much energy as the theoretical work of compression because they are heat driven cycles working over a very low temperature difference. Thus, to avoid the problems of current systems, we need a power cycle in which the CO{sub 2} produced by the oxidation of the fuel is not diluted with a large excess of nitrogen, a power cycle which would allow us to eliminate the very inefficient thermally driven absorption/desorption step. In addition, we would want the CO{sub 2} to be naturally available at high pressure (approximately 3 to 6 atmospheres), which would allow us to greatly reduce the compression ratio between generation and storage (from roughly 3,500 to approximately 8).

  15. A NOVEL CO{sub 2} SEPARATION SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert J. Copeland

    2000-08-01

    Because of concern over global climate change, new systems are needed that produce electricity from fossil fuels and emit less CO{sub 2}. The fundamental problem with current systems which recover and concentrate CO{sub 2} from flue gases is the need to separate dilute CO{sub 2} and pressurize it to roughly 35 atm for storage or sequestration. This is an energy intensive process that can reduce plant efficiency by 9-37% and double the cost of electricity. There are two fundamental reasons for the current high costs of power consumption, CO{sub 2} removal, and concentration systems: (1) most disposal, storage and sequestering systems require high pressure CO{sub 2} (at roughly 35 atm). Thus, assuming 90% removal of the CO{sub 2} from a typical atmospheric pressure flue gas that contains 10% CO{sub 2}, the CO{sub 2} is essentially being compressed from 0.01 atm to 35 atm (a pressure ratio of 3,500). This is a very energy intensive process. (2) The absorption-based (amine) separation processes that are used to remove the CO{sub 2} from the flue gas and compress it to 1 atm consume approximately 10 times as much energy as the theoretical work of compression because they are heat driven cycles working over a very low temperature difference. Thus, to avoid the problems of current systems, we need a power cycle in which the CO{sub 2} produced by the oxidation of the fuel is not diluted with a large excess of nitrogen, a power cycle which would allow us to eliminate the very inefficient thermally driven absorption/desorption step. In addition, we would want the CO{sub 2} to be naturally available at high pressure (approximately 3 to 6 atmospheres), which would allow us to greatly reduce the compression ratio between generation and storage (from roughly 3,500 to approximately 8).

  16. Standardization of End-to-End Performance of Digital Video Teleconferencing/Video Telephony Systems

    DTIC Science & Technology

    1991-12-01

    End-to-end video transmission system including both firmly specified and peripheral flexible functions. The format converter changes either ... which manifests itself in both subjective evaluations and objective tests. The relative importance of performance parameters is likely to change with ... conventional analog performance parameters to be largely independent of bit rate, and only slightly changed between different codec models. The

  17. SOFTWARE DESIGN FOR REAL-TIME SYSTEMS.

    DTIC Science & Technology

    Real-time computer systems and real-time computations are defined for the purposes of this report. The design of software for real-time systems is...discussed, employing the concept that all real-time systems belong to one of two types. The types are classified according to the type of control...program used; namely: Pre-assigned Iterative Cycle and Real-time Queueing. The two types of real-time systems are described in general, with supplemental

  18. Unified Engineering Software System

    NASA Technical Reports Server (NTRS)

    Purves, L. R.; Gordon, S.; Peltzman, A.; Dube, M.

    1989-01-01

    Collection of computer programs performs diverse functions in prototype engineering. NEXUS, NASA Engineering Extendible Unified Software system, is research set of computer programs designed to support full sequence of activities encountered in NASA engineering projects. Sequence spans preliminary design, design analysis, detailed design, manufacturing, assembly, and testing. Primarily addresses process of prototype engineering, task of getting single or small number of copies of product to work. Written in FORTRAN 77 and PROLOG.

  19. ARROWSMITH-P: A prototype expert system for software engineering management

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Ramsey, Connie Loggia

    1985-01-01

    Although the field of software engineering is relatively new, it can benefit from the use of expert systems. Two prototype expert systems were developed to aid in software engineering management. Given the values for certain metrics, these systems will provide interpretations which explain any abnormal patterns of these values during the development of a software project. The two systems, which solve the same problem, were built using different methods, rule-based deduction and frame-based abduction. A comparison was done to see which method was better suited to the needs of this field. It was found that both systems performed moderately well, but the rule-based deduction system using simple rules provided more complete solutions than did the frame-based abduction system.
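
    As a toy illustration of the rule-based deduction style (not ARROWSMITH-P's actual rule base), the sketch below fires simple rules over a snapshot of project metrics and returns the interpretations whose conditions hold; the metric names, thresholds, and interpretations are invented examples.

        # Toy rule-based deduction over software metrics; rules and thresholds
        # are invented placeholders, not the prototype's actual knowledge base.
        RULES = [
            # (condition over metrics, interpretation it deduces)
            (lambda m: m["defect_density"] > 5 and m["test_effort"] < 0.2,
             "abnormal defects likely due to insufficient testing"),
            (lambda m: m["churn"] > 0.5,
             "requirements may be unstable (high code churn)"),
            (lambda m: m["effort_per_kloc"] > 2 * m["baseline_effort_per_kloc"],
             "productivity anomaly: effort well above project baseline"),
        ]

        def interpret(metrics):
            """Return every interpretation whose rule fires on the given metrics."""
            return [finding for condition, finding in RULES if condition(metrics)]

        if __name__ == "__main__":
            snapshot = {
                "defect_density": 7.2,            # defects per KLOC
                "test_effort": 0.1,               # fraction of total effort
                "churn": 0.6,                     # fraction of lines changed
                "effort_per_kloc": 5.0,
                "baseline_effort_per_kloc": 2.0,
            }
            for finding in interpret(snapshot):
                print("-", finding)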

  20. A methodology based on openEHR archetypes and software agents for developing e-health applications reusing legacy systems.

    PubMed

    Cardoso de Moraes, João Luís; de Souza, Wanderley Lopes; Pires, Luís Ferreira; do Prado, Antonio Francisco

    2016-10-01

    In Pervasive Healthcare, novel information and communication technologies are applied to support the provision of health services anywhere, at any time, and to anyone. Since health systems may offer their health records in different electronic formats, the openEHR Foundation prescribes the use of archetypes for describing clinical knowledge in order to achieve semantic interoperability between these systems. Software agents have been applied to simulate human skills in some healthcare procedures. This paper presents a methodology, based on the use of openEHR archetypes and agent technology, which aims to overcome the weaknesses typically found in legacy healthcare systems, thereby adding value to the systems. This methodology was applied in the design of an agent-based system, which was used in a realistic healthcare scenario in which a medical staff meeting to prepare a cardiac surgery was supported. We conducted experiments with this system in a distributed environment composed of three cardiology clinics and a center of cardiac surgery, all located in the city of Marília (São Paulo, Brazil). We evaluated this system according to the Technology Acceptance Model. The case study confirmed the acceptance of our agent-based system by healthcare professionals and patients, who reacted positively with respect to the usefulness of this system in particular, and with respect to task delegation to software agents in general. The case study also showed that a software agent-based interface and a tools-based alternative must be provided to the end users, which should allow them to perform the tasks themselves or to delegate these tasks to other people. A Pervasive Healthcare model requires efficient and secure information exchange between healthcare providers. The proposed methodology allows designers to build communication systems for the message exchange among heterogeneous healthcare systems, and to shift from systems that rely on informal communication of actors to

  1. Development of management information system for land in mine area based on MapInfo

    NASA Astrophysics Data System (ADS)

    Wang, Shi-Dong; Liu, Chuang-Hua; Wang, Xin-Chuang; Pan, Yan-Yu

    2008-10-01

    MapInfo is currently a popular GIS software package. This paper introduces the characteristics of MapInfo and the GIS secondary development methods it offers, which include three approaches based on MapBasic, OLE automation, and MapX control usage, respectively. Taking the development of a land management information system for a mining area as an example, the paper discusses the method of developing GIS applications based on MapX and describes the development of the system in detail, including the development environment, overall design, design and realization of every function module, and simple application of the system. The system uses MapX 5.0 and Visual Basic 6.0 as the development platform, takes SQL Server 2005 as the back-end database, and adopts Matlab 6.5 for back-end numerical calculation. Based on an integrated design, the system comprises eight modules: start-up, layer control, spatial query, spatial analysis, data editing, application model, document management, and results output. The system can be used in a mining area for cadastral management, land use structure optimization, land reclamation, land evaluation, analysis and forecasting of mine-area land and environmental disruption, thematic mapping, and so on.

  2. Archiving Software Systems: Approaches to Preserve Computational Capabilities

    NASA Astrophysics Data System (ADS)

    King, T. A.

    2014-12-01

    A great deal of effort is made to preserve scientific data. Not only is data knowledge, but it is often costly to acquire and is sometimes collected under unique circumstances. Another part of the science enterprise is the development of software to process and analyze the data. Developed software is also a large investment and worthy of preservation. However, the long term preservation of software presents some challenges. Software often requires a specific technology stack to operate. This can include software, operating systems and hardware dependencies. One past approach to preserve computational capabilities is to maintain ancient hardware long past its typical viability. On an archive horizon of 100 years, this is not feasible. Another approach to preserve computational capabilities is to archive source code. While this can preserve details of the implementation and algorithms, it may not be possible to reproduce the technology stack needed to compile and run the resulting applications. This future forward dilemma has a solution. Technology used to create clouds and process big data can also be used to archive and preserve computational capabilities. We explore how basic hardware, virtual machines, containers and appropriate metadata can be used to preserve computational capabilities and to archive functional software systems. In conjunction with data archives, this provides scientists with both the data and the capability to reproduce the processing and analysis used to generate past scientific results.
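
    One small ingredient of such an approach is recording, alongside the archived image, machine-readable metadata about the technology stack it depends on. The sketch below captures a minimal manifest of this kind; the schema and field names are invented for illustration and are not a standard preservation format.

        # Sketch of capturing the technology stack of a software system as
        # preservation metadata. The manifest schema is an invented example.
        import json
        import platform
        import sys
        from datetime import datetime, timezone

        def build_preservation_manifest(image_name: str) -> dict:
            """Describe the environment a container/VM image was built from."""
            return {
                "archived_image": image_name,            # e.g. a container or VM image id
                "captured_at": datetime.now(timezone.utc).isoformat(),
                "operating_system": platform.platform(),
                "machine": platform.machine(),
                "python_version": sys.version.split()[0],
                # In a real archive this would also list compilers, libraries,
                # and the exact data products the software was validated against.
            }

        if __name__ == "__main__":
            print(json.dumps(build_preservation_manifest("analysis-pipeline:2014"), indent=2))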

  3. Software Tools for Development on the Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

    Tools to build and manage software at the source code level on the Peregrine system. Cross-Platform Make and SCons: the "Cross-Platform Make" (CMake) package is from Kitware, and SCons is a modern software build tool based on Python.

  4. International Space Station alpha remote manipulator system workstation controls test report

    NASA Technical Reports Server (NTRS)

    Ehrenstrom, William A.; Swaney, Colin; Forrester, Patrick

    1994-01-01

    Previous development testing for the space station remote manipulator system workstation controls determined the need for hardware controls for the emergency stop, brakes on/off, and some camera functions. This report documents the results of an evaluation to further determine control implementation requirements, requested by the Canadian Space Agency (CSA), to close outstanding review item discrepancies. This test was conducted at the Johnson Space Center's Space Station Mockup and Trainer Facility in Houston, Texas, with nine NASA astronauts and one CSA astronaut as operators. This test evaluated camera iris and focus, back-up drive, latching end effector release, and autosequence controls using several types of hardware and software implementations. Recommendations resulting from the testing included providing guarded hardware buttons to prevent accidental actuation, providing autosequence controls and back-up drive controls on a dedicated hardware control panel, and that 'latch on/latch off', or on-screen software, controls not be considered. Generally, the operators preferred hardware controls although other control implementations were acceptable. The results of this evaluation will be used along with further testing to define specific requirements for the workstation design.

  5. International Space Station alpha remote manipulator system workstation controls test report

    NASA Astrophysics Data System (ADS)

    Ehrenstrom, William A.; Swaney, Colin; Forrester, Patrick

    1994-05-01

    Previous development testing for the space station remote manipulator system workstation controls determined the need for hardware controls for the emergency stop, brakes on/off, and some camera functions. This report documents the results of an evaluation to further determine control implementation requirements, requested by the Canadian Space Agency (CSA), to close outstanding review item discrepancies. This test was conducted at the Johnson Space Center's Space Station Mockup and Trainer Facility in Houston, Texas, with nine NASA astronauts and one CSA astronaut as operators. This test evaluated camera iris and focus, back-up drive, latching end effector release, and autosequence controls using several types of hardware and software implementations. Recommendations resulting from the testing included providing guarded hardware buttons to prevent accidental actuation, providing autosequence controls and back-up drive controls on a dedicated hardware control panel, and that 'latch on/latch off', or on-screen software, controls not be considered. Generally, the operators preferred hardware controls although other control implementations were acceptable. The results of this evaluation will be used along with further testing to define specific requirements for the workstation design.

  6. NASA End-to-End Data System /NEEDS/ information adaptive system - Performing image processing onboard the spacecraft

    NASA Technical Reports Server (NTRS)

    Kelly, W. L.; Howle, W. M.; Meredith, B. D.

    1980-01-01

    The Information Adaptive System (IAS) is an element of the NASA End-to-End Data System (NEEDS) Phase II and is focused toward onboard image processing. Since the IAS is a data preprocessing system which is closely coupled to the sensor system, it serves as a first step in providing a 'Smart' imaging sensor. Some of the functions planned for the IAS include sensor response nonuniformity correction, geometric correction, data set selection, data formatting, packetization, and adaptive system control. The inclusion of these sensor data preprocessing functions onboard the spacecraft will significantly improve the extraction of information from the sensor data in a timely and cost effective manner and provide the opportunity to design sensor systems which can be reconfigured in near real time for optimum performance. The purpose of this paper is to present the preliminary design of the IAS and the plans for its development.
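
    As a ground-based illustration of one of the listed preprocessing functions, the sketch below applies a standard two-point (dark/flat-field) nonuniformity correction to a synthetic frame; it is not the IAS flight implementation, and the calibration frames are invented.

        # Standard two-point (dark/flat-field) nonuniformity correction, shown as
        # a generic illustration; the calibration data here are synthetic.
        import numpy as np

        def correct_nonuniformity(raw, dark, flat):
            """Per-pixel correction: subtract dark frame, normalise by flat-field gain."""
            gain = flat - dark
            gain[gain == 0] = 1.0            # guard against dead pixels
            corrected = (raw - dark) / gain
            return corrected * gain.mean()   # rescale to the mean sensor response

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            dark = rng.normal(10.0, 1.0, (4, 4))            # fixed-pattern offset
            flat = dark + rng.normal(100.0, 5.0, (4, 4))    # per-pixel gain variation
            scene = np.full((4, 4), 50.0)
            raw = dark + (flat - dark) * (scene / 100.0)    # simulated detector output
            print(np.round(correct_nonuniformity(raw, dark, flat), 2))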

  7. Solving the Software Legacy Problem with RISA

    NASA Astrophysics Data System (ADS)

    Ibarra, A.; Gabriel, C.

    2012-09-01

    Nowadays hardware and system infrastructure evolve on time scales much shorter than the typical duration of space astronomy missions. Data processing software capabilities have to evolve to preserve the scientific return during the entire experiment life time. Software preservation is a key issue that has to be tackled before the end of the project to keep the data usable over many years. We present RISA (Remote Interface to Science Analysis) as a solution to decouple data processing software and infrastructure life-cycles, using JAVA applications and web-services wrappers to existing software. This architecture employs embedded SAS in virtual machines assuring a homogeneous job execution environment. We will also present the first studies to reactivate the data processing software of the EXOSAT mission, the first ESA X-ray astronomy mission launched in 1983, using the generic RISA approach.

  8. Oxygen potentials and phase equilibria in the system Ca–Co–O and thermodynamic properties of Ca{sub 3}Co{sub 2}O{sub 6} and Ca{sub 3}Co{sub 4}O{sub 9.163}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob, K.T., E-mail: katob@materials.iisc.ernet.in; Gupta, Preeti

    2015-01-15

    Oxygen potentials established by the equilibrium between three condensed phases, CaO{sub ss}+CoO{sub ss}+Ca{sub 3}Co{sub 2}O{sub 6} and CoO{sub ss}+Ca{sub 3}Co{sub 2}O{sub 6}+Ca{sub 3}Co{sub 3.93+α}O{sub 9.36−δ}, are measured as a function of temperature using solid-state electrochemical cells incorporating yttria-stabilized zirconia as the electrolyte and pure oxygen as the reference electrode. Cation non-stoichiometry and oxygen non-stoichiometry in Ca{sub 3}Co{sub 3.93+α}O{sub 9.36−δ} are determined using different techniques under defined conditions. Decomposition temperatures and thermodynamic properties of Ca{sub 3}Co{sub 2}O{sub 6} and Ca{sub 3}Co{sub 4}O{sub 9.163} are calculated from the results. The standard entropy and enthalpy of formation of Ca{sub 3}Co{sub 2}O{sub 6} at 298.15 K are evaluated. Using thermodynamic data from this study and auxiliary information from the literature, the phase diagram for the ternary system Ca–Co–O is computed. Isothermal sections at representative temperatures are displayed to demonstrate the evolution of phase relations with temperature. - Graphical abstract: Isothermal section of the phase diagram of the system Ca–Co–O at 1250 K. - Highlights: • Improved definition of cation and oxygen nonstoichiometry of Ca{sub 3}Co{sub 3.93+α}O{sub 9.36−δ}. • Measurement of Δμ{sub O{sub 2}} associated with two 3-phase fields as a function of temperature. • Use of solid-state electrochemical cells for accurate measurement of Δμ{sub O{sub 2}}. • Decomposition temperatures and thermodynamic properties for ternary oxides. • Characterization of ternary phase diagram of the system Ca–Co–O.

  9. An adaptive software defined radio design based on a standard space telecommunication radio system API

    NASA Astrophysics Data System (ADS)

    Xiong, Wenhao; Tian, Xin; Chen, Genshe; Pham, Khanh; Blasch, Erik

    2017-05-01

    Software defined radio (SDR) has become a popular tool for the implementation and testing of communications performance. The advantages of the SDR approach include a re-configurable design, adaptive response to changing conditions, efficient development, and highly versatile implementation. To realize the benefits of SDR, the space telecommunication radio system (STRS) was proposed by NASA Glenn Research Center (GRC) along with a standard application program interface (API) structure. Each component of the system uses a well-defined API to communicate with other components. The benefit of a standard API is to relax the platform limitation of each component, allowing additional options. For example, the waveform generating process can run on a field programmable gate array (FPGA), a personal computer (PC), or an embedded system. As long as the API defines the requirements, the generated waveform selection will work with the complete system. In this paper, we demonstrate the design and development of an adaptive SDR following the STRS and standard API protocol. We introduce, step by step, the SDR testbed system, including the controlling graphical user interface (GUI), database, GNU Radio hardware control, and universal software radio peripheral (USRP) transceiving front end. In addition, a performance evaluation is shown on the effectiveness of the SDR approach for space telecommunication.
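
    The sketch below illustrates the general idea of components talking through a well-defined interface so that back ends (FPGA, PC, embedded) can be swapped behind it; the class and method names are invented for illustration and do not reproduce the actual STRS API.

        # Illustrative "common API between components" sketch. Method names are
        # invented and are NOT the STRS API; they only show interface-based
        # substitution of waveform back ends.
        from abc import ABC, abstractmethod

        class WaveformComponent(ABC):
            """Contract every waveform implementation must satisfy."""

            @abstractmethod
            def configure(self, **params) -> None: ...

            @abstractmethod
            def start(self) -> None: ...

            @abstractmethod
            def stop(self) -> None: ...

        class SimulatedBpskWaveform(WaveformComponent):
            """A PC-hosted stand-in; an FPGA-backed class would expose the same calls."""

            def configure(self, **params) -> None:
                self.params = params

            def start(self) -> None:
                print("transmitting BPSK with", self.params)

            def stop(self) -> None:
                print("stopped")

        def run(component: WaveformComponent) -> None:
            # The controller only depends on the interface, never on the back end.
            component.configure(carrier_hz=2.4e9, symbol_rate=1e6)
            component.start()
            component.stop()

        if __name__ == "__main__":
            run(SimulatedBpskWaveform())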

  10. Software Tools for Developing and Simulating the NASA LaRC CMF Motion Base

    NASA Technical Reports Server (NTRS)

    Bryant, Richard B., Jr.; Carrelli, David J.

    2006-01-01

    The NASA Langley Research Center (LaRC) Cockpit Motion Facility (CMF) motion base has provided many design and analysis challenges. In the process of addressing these challenges, a comprehensive suite of software tools was developed. The software tool development began with a detailed MATLAB/Simulink model of the motion base, which was used primarily for safety loads prediction, design of the closed-loop compensator, and development of the motion base safety systems [1]. A Simulink model of the digital control law, from which a portion of the embedded code is directly generated, was later added to this model to form a closed-loop system model. Concurrently, software that runs on a PC was created to display and record motion base parameters. It includes a user interface for controlling time history displays, strip chart displays, data storage, and initialization of function generators used during motion base testing. Finally, a software tool was developed for kinematic analysis and prediction of mechanical clearances for the motion system. These tools work together in an integrated package to support normal operations of the motion base, simulate the end-to-end operation of the motion base system providing facilities for software-in-the-loop testing, and provide mechanical geometry and sensor data visualizations and function generator setup and evaluation.

  11. The Li–Si–(O)–N system revisited: Structural characterization of Li{sub 21}Si{sub 3}N{sub 11} and Li{sub 7}SiN{sub 3}O

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casas-Cabanas, M.; Santner, H.; Palacín, M.R., E-mail: rosa.palacin@icmab.es

    2014-05-01

    A systematic study of the Li–Si–(O)–N system is presented. The synthetic conditions to prepare Li{sub 2}SiN{sub 2}, Li{sub 5}SiN{sub 3}, Li{sub 18}Si{sub 3}N{sub 10}, Li{sub 21}Si{sub 3}N{sub 11} and Li{sub 7}SiN{sub 3}O are described and the structure of the last two compounds has been solved for the first time. While Li{sub 21}Si{sub 3}N{sub 11} crystallizes as a superstructure of the anti-fluorite structure with Li and Si ordering, Li{sub 7}SiN{sub 3}O exhibits the anti-fluorite structure with both anion and cation disorder. - Graphical abstract: A systematic study of the Li–Si–(O)–N system is presented. Li{sub 21}Si{sub 3}N{sub 11} crystallizes as a superstructure of the anti-fluorite structure with Li and Si ordering, Li{sub 7}SiN{sub 3}O exhibits the anti-fluorite structure with both anion and cation disorder. - Highlights: • Li{sub 2}SiN{sub 2}, Li{sub 5}SiN{sub 3}, Li{sub 18}Si{sub 3}N{sub 10}, Li{sub 21}Si{sub 3}N{sub 11} and Li{sub 7}SiN{sub 3}O are prepared. • The structures of Li{sub 21}Si{sub 3}N{sub 11} and Li{sub 7}SiN{sub 3}O are presented. • Li{sub 21}Si{sub 3}N{sub 11} exhibits an anti-fluorite superstructure with Li and Si ordering.

  12. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.

    1987-01-01

    The Software Automation, Generation and Administration (SAGA) project is investigating the design and construction of practical software engineering environments for developing and maintaining aerospace systems and applications software. The research includes the practical organization of the software lifecycle, configuration management, software requirements specifications, executable specifications, design methodologies, programming, verification, validation and testing, version control, maintenance, the reuse of software, software libraries, documentation, and automated management.

  13. The ALICE Software Release Validation cluster

    NASA Astrophysics Data System (ADS)

    Berzano, D.; Krzewicki, M.

    2015-12-01

    One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automatic tests and manual reviews, and all of them must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service: in High Energy Physics, a full simulation and reconstruction chain on a distributed computing environment, backed with a sample "golden" dataset, is also necessary for the quality sign off. The ALICE experiment uses dedicated and virtualized computing infrastructures for the Release Validation in order not to taint the production environment (i.e. CVMFS and the Grid) with non-validated software and validation jobs: the ALICE Release Validation cluster is a disposable virtual cluster appliance based on CernVM and the Virtual Analysis Facility, capable of deploying on demand, and with a single command, a dedicated virtual HTCondor cluster with an automatically scalable number of virtual workers on any cloud supporting the standard EC2 interface. Input and output data are externally stored on EOS, and a dedicated CVMFS service is used to provide the software to be validated. We will show how the Release Validation Cluster deployment and disposal are completely transparent for the Release Manager, who simply triggers the validation from the ALICE build system's web interface. CernVM 3, based entirely on CVMFS, makes it possible to boot any snapshot of the operating system in time: we will show how this allows us to certify each ALICE software release for an exact CernVM snapshot, addressing the problem of Long Term Data Preservation by ensuring a consistent environment for software execution and data reprocessing in the future.

  14. Software to Manage the Unmanageable

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In 1995, NASA s Jet Propulsion Laboratory (JPL) contracted Redmond, Washington-based Lucidoc Corporation, to design a technology infrastructure to automate the intersection between policy management and operations management with advanced software that automates document workflow, document status, and uniformity of document layout. JPL had very specific parameters for the software. It expected to store and catalog over 8,000 technical and procedural documents integrated with hundreds of processes. The project ended in 2000, but NASA still uses the resulting highly secure document management system, and Lucidoc has managed to help other organizations, large and small, with integrating document flow and operations management to ensure a compliance-ready culture.

  15. Project W-211, initial tank retrieval systems, retrieval control system software configuration management plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    RIECK, C.A.

    1999-02-23

    This Software Configuration Management Plan (SCMP) provides the instructions for change control of the W-211 Project, Retrieval Control System (RCS) software after initial approval/release but prior to the transfer of custody to the waste tank operations contractor. This plan applies to the W-211 system software developed by the project, consisting of the computer human-machine interface (HMI) and programmable logic controller (PLC) software source and executable code, for production use by the waste tank operations contractor. The plan encompasses that portion of the W-211 RCS software represented on project-specific AUTOCAD drawings that are released as part of the C1 definitive design package (these drawings are identified on the drawing list associated with each C-1 package), and the associated software code. Implementation of the plan is required for formal acceptance testing and production release. The software configuration management plan does not apply to reports and data generated by the software except where specifically identified. Control of information produced by the software once it has been transferred for operation is the responsibility of the receiving organization.

  16. Survey of Software Assurance Techniques for Highly Reliable Systems

    NASA Technical Reports Server (NTRS)

    Nelson, Stacy

    2004-01-01

    This document provides a survey of software assurance techniques for highly reliable systems including a discussion of relevant safety standards for various industries in the United States and Europe, as well as examples of methods used during software development projects. It contains one section for each industry surveyed: Aerospace, Defense, Nuclear Power, Medical Devices and Transportation. Each section provides an overview of applicable standards and examples of a mission or software development project, software assurance techniques used and reliability achieved.

  17. Progressive retry for software error recovery in distributed systems

    NASA Technical Reports Server (NTRS)

    Wang, Yi-Min; Huang, Yennun; Fuchs, W. K.

    1993-01-01

    In this paper, we describe a method of execution retry for bypassing software errors based on checkpointing, rollback, message reordering and replaying. We demonstrate how rollback techniques, previously developed for transient hardware failure recovery, can also be used to recover from software faults by exploiting message reordering to bypass software errors. Our approach intentionally increases the degree of nondeterminism and the scope of rollback when a previous retry fails. Examples from our experience with telecommunications software systems illustrate the benefits of the scheme.
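
    As an illustration of the retry discipline described above (not the authors' code; the step/checkpoint structure and the use of a random shuffle for reordering are assumptions), a minimal sketch might look like this:

```python
# Minimal sketch of progressive retry: roll back to a checkpoint and replay
# logged messages; each further failure widens the rollback scope and reorders
# the replayed messages to increase nondeterminism and bypass the fault.
import copy
import random

def progressive_retry(step, checkpoints, max_attempts=4):
    """
    step(state, message) -> new state, raising an exception if the fault recurs.
    checkpoints: list of (saved_state, messages_logged_after_it), oldest first.
    """
    for attempt in range(1, max_attempts + 1):
        scope = min(attempt, len(checkpoints))            # widen rollback scope each retry
        state = copy.deepcopy(checkpoints[-scope][0])     # roll back to an earlier checkpoint
        replay = [m for _, msgs in checkpoints[-scope:] for m in msgs]
        if attempt > 1:
            random.shuffle(replay)                        # reorder messages on later retries
        try:
            for message in replay:
                state = step(state, message)
            return state                                  # replay succeeded; fault bypassed
        except Exception:
            continue                                      # escalate and retry
    raise RuntimeError("progressive retry exhausted")
```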

  18. Science Gateways, Scientific Workflows and Open Community Software

    NASA Astrophysics Data System (ADS)

    Pierce, M. E.; Marru, S.

    2014-12-01

    Science gateways and scientific workflows occupy different ends of the spectrum of user-focused cyberinfrastructure. Gateways, sometimes called science portals, provide a way for enabling large numbers of users to take advantage of advanced computing resources (supercomputers, advanced storage systems, science clouds) by providing Web and desktop interfaces and supporting services. Scientific workflows, at the other end of the spectrum, support advanced usage of cyberinfrastructure that enables "power users" to undertake computational experiments that are not easily done through the usual mechanisms (managing simulations across multiple sites, for example). Despite these different target communities, gateways and workflows share many similarities and can potentially be accommodated by the same software system. For example, pipelines to process InSAR imagery sets or to datamine GPS time series data are workflows. The results and the ability to make downstream products may be made available through a gateway, and power users may want to provide their own custom pipelines. In this abstract, we discuss our efforts to build an open source software system, Apache Airavata, that can accommodate both gateway and workflow use cases. Our approach is general, and we have applied the software to problems in a number of scientific domains. In this talk, we discuss our applications to usage scenarios specific to earth science, focusing on earthquake physics examples drawn from the QuakSim.org and GeoGateway.org efforts. We also examine the role of the Apache Software Foundation's open community model as a way to build up common community codes that do not depend upon a single "owner" to sustain them. Pushing beyond open source software, we also see the need to provide gateways and workflow systems as cloud services. These services centralize operations, provide well-defined programming interfaces, scale elastically, and have global-scale fault tolerance. We discuss our work providing

  19. Fault Tolerant Software Technology for Distributed Computer Systems

    DTIC Science & Technology

    1989-03-01

    Final technical report (1989) describing "Fault Tolerant Software Technology for Distributed Computing Systems," a two-year effort performed at Georgia Institute of Technology as part of the Clouds Project.

  20. Software systems for operation, control, and monitoring of the EBEX instrument

    NASA Astrophysics Data System (ADS)

    Milligan, Michael; Ade, Peter; Aubin, François; Baccigalupi, Carlo; Bao, Chaoyun; Borrill, Julian; Cantalupo, Christopher; Chapman, Daniel; Didier, Joy; Dobbs, Matt; Grainger, Will; Hanany, Shaul; Hillbrand, Seth; Hubmayr, Johannes; Hyland, Peter; Jaffe, Andrew; Johnson, Bradley; Kisner, Theodore; Klein, Jeff; Korotkov, Andrei; Leach, Sam; Lee, Adrian; Levinson, Lorne; Limon, Michele; MacDermid, Kevin; Matsumura, Tomotake; Miller, Amber; Pascale, Enzo; Polsgrove, Daniel; Ponthieu, Nicolas; Raach, Kate; Reichborn-Kjennerud, Britt; Sagiv, Ilan; Tran, Huan; Tucker, Gregory S.; Vinokurov, Yury; Yadav, Amit; Zaldarriaga, Matias; Zilic, Kyle

    2010-07-01

    We present the hardware and software systems implementing autonomous operation, distributed real-time monitoring, and control for the EBEX instrument. EBEX is a NASA-funded balloon-borne microwave polarimeter designed for a 14 day Antarctic flight that circumnavigates the pole. To meet its science goals the EBEX instrument autonomously executes several tasks in parallel: it collects attitude data and maintains pointing control in order to adhere to an observing schedule; tunes and operates up to 1920 TES bolometers and 120 SQUID amplifiers controlled by as many as 30 embedded computers; coordinates and dispatches jobs across an onboard computer network to manage this detector readout system; logs over 3 GiB/hour of science and housekeeping data to an onboard disk storage array; responds to a variety of commands and exogenous events; and downlinks multiple heterogeneous data streams representing a selected subset of the total logged data. Most of the systems implementing these functions have been tested during a recent engineering flight of the payload, and have proven to meet the target requirements. The EBEX ground segment couples uplink and downlink hardware to a client-server software stack, enabling real-time monitoring and command responsibility to be distributed across the public internet or other standard computer networks. Using the emerging dirfile standard as a uniform intermediate data format, a variety of front end programs provide access to different components and views of the downlinked data products. This distributed architecture was demonstrated operating across multiple widely dispersed sites prior to and during the EBEX engineering flight.

  1. Phase relations in the system Cu-Ho-O and stability of Cu{sub 2}Ho{sub 2}O{sub 5}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, T.; Jacob, K.T.

    1994-01-01

    The phase relations in the system Cu-Ho-O have been determined at 1300 K using X-ray diffraction, optical microscopy, and electron microprobe analysis of samples equilibrated in evacuated quartz ampules and in pure oxygen. Only one ternary compound, Cu{sub 2}Ho{sub 2}O{sub 5}, was found to be stable. The Gibbs free energy of formation of this compound has been measured. Since the formation is endothermic, Cu{sub 2}Ho{sub 2}O{sub 5} becomes thermodynamically unstable with respect to CuO and Ho{sub 2}O{sub 3} below 810 K. When the oxygen partial pressure over Cu{sub 2}Ho{sub 2}O{sub 5} is lowered, it decomposes. The decomposition temperature at an oxygen partial pressure of 1.52 X 10{sup 4} Pa was measured using a combined DTA-TGA apparatus. Based on these results, an oxygen potential diagram for the system Cu-Ho-O at 1300 K is presented.
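
    The stability statement above follows from standard Gibbs-energy reasoning, sketched below for the formation reaction from the binary oxides; this is an illustration, and only the 810 K figure is taken from the abstract.

```latex
% Formation of the ternary oxide from the binaries (illustrative):
%   2 CuO + Ho2O3 -> Cu2Ho2O5
% With an endothermic formation (\Delta H^{\circ} > 0), stability requires the
% entropy term to dominate:
\Delta G^{\circ}_{\mathrm{f}}(T) \;=\; \Delta H^{\circ} - T\,\Delta S^{\circ} \;<\; 0
\quad\Longleftrightarrow\quad
T \;>\; \frac{\Delta H^{\circ}}{\Delta S^{\circ}} \;\approx\; 810~\mathrm{K},
% so below 810 K the compound is unstable with respect to CuO and Ho2O3,
% consistent with the abstract.
```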

  2. The Effect of Back Pressure on the Operation of a Diesel Engine

    DTIC Science & Technology

    2011-02-01

    increased back pressure on a turbocharged diesel engine. Steady state and varying back pressure are considered. The results show that high back...a turbocharged diesel engine using the Ricardo Wave engine modelling software, to gain understanding of the problem and provide a good base for...higher pressure. The pressure ratios across the turbocharger compressor and turbine decrease, reducing the mass flow of air through these components

  3. The Effect of Back Pressure on the Operation of a Diesel Engine

    DTIC Science & Technology

    2011-02-01

    increased back pressure on a turbocharged diesel engine. Steady state and varying back pressure are considered. The results show that high back...a turbocharged diesel engine using the Ricardo Wave engine modelling software, to gain understanding of the problem and provide a good base for...higher pressure. The pressure ratios across the turbocharger compressor and turbine decrease, reducing the mass flow of air through these components

  4. Adaptable Computing Environment/Self-Assembling Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osbourn, Gordon C.; Bouchard, Ann M.; Bartholomew, John W.

    Complex software applications are difficult to learn to use and to remember how to use. Further, the user has no control over the functionality available in a given application. The software we use can be created and modified only by a relatively small group of elite, highly skilled artisans known as programmers. "Normal users" are powerless to create and modify software themselves, because the tools for software development, designed by and for programmers, are a barrier to entry. This software, when completed, will be a user-adaptable computing environment in which the user is really in control of his/her own software, able to adapt the system, make new parts of the system interactive, and even modify the behavior of the system itself. Some key features of the basic environment that have been implemented are (a) books in bookcases, where all data is stored, (b) context-sensitive compass menus (compass, because the buttons are located in compass directions relative to the mouse cursor position), (c) importing tabular data and displaying it in a book, (d) light-weight table querying/sorting, (e) a Reach&Get capability (sort of a "smart" copy/paste that prevents the user from copying invalid data), and (f) a LogBook that automatically logs all user actions that change data or the system itself. To bootstrap toward full end-user adaptability, we implemented a set of development tools. With the development tools, compass menus can be made and customized.

  5. End-to-End Modeling with the Heimdall Code to Scope High-Power Microwave Systems

    DTIC Science & Technology

    2007-06-01

    John A. Swegle, Savannah River National Laboratory...describe the expert-system code HEIMDALL, which is used to model full high-power microwave systems using over 60 systems-engineering models, developed in...of our calculations of the mass of a Supersystem producing 500-MW, 15-ns output pulses in the X band for bursts of 1 s, interspersed with 10-s...

  6. Integrating open-source software applications to build molecular dynamics systems.

    PubMed

    Allen, Bruce M; Predecki, Paul K; Kumosa, Maciej

    2014-04-05

    Three open-source applications, NanoEngineer-1, packmol, and mis2lmp, are integrated using an open-source file format to quickly create molecular dynamics (MD) cells for simulation. The three software applications collectively make up the open-source software (OSS) suite known as MD Studio (MDS). The software is validated through software engineering practices and is verified through simulation of the diglycidyl ether of bisphenol A and isophorone diamine (DGEBA/IPD) system. Multiple simulations are run using the MDS software to create MD cells, and the data generated are used to calculate density, bulk modulus, and glass transition temperature of the DGEBA/IPD system. Simulation results compare well with published experimental and numerical results. The MDS software prototype confirms that OSS applications can be analyzed against real-world research requirements and integrated to create a new capability. Copyright © 2014 Wiley Periodicals, Inc.
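
    A hedged sketch of how such a tool chain can be glued together is shown below; the file names and the mis2lmp arguments are placeholders rather than the MDS code, and only packmol's read-script-from-stdin convention is assumed.

```python
# Illustrative pipeline: pack molecules into a cell with packmol, then convert
# the packed structure into a LAMMPS data file for the MD run.
import subprocess
from pathlib import Path

def build_md_cell(packmol_script: Path, lammps_data: Path) -> None:
    # packmol reads its input script on standard input and writes the packed cell
    with open(packmol_script) as script:
        subprocess.run(["packmol"], stdin=script, check=True)
    # conversion step to a LAMMPS data file; executable name and flags are placeholders
    subprocess.run(["mis2lmp", "packed_cell", "-o", str(lammps_data)], check=True)

build_md_cell(Path("dgeba_ipd_pack.inp"), Path("dgeba_ipd_cell.data"))
```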

  7. Gesture Analysis for Astronomy Presentation Software

    NASA Astrophysics Data System (ADS)

    Robinson, Marc A.

    Astronomy presentation software in a planetarium setting provides a visually stimulating way to introduce varied scientific concepts, including computer science concepts, to a wide audience. However, the underlying computational complexity and opportunities for discussion are often overshadowed by the brilliance of the presentation itself. To bring this discussion back out into the open, a method needs to be developed to make the computer science applications more visible. This thesis introduces the GAAPS system, which endeavors to implement free-hand gesture-based control of astronomy presentation software, with the goal of providing that talking point to begin the discussion of computer science concepts in a planetarium setting. The GAAPS system incorporates gesture capture and analysis in a unique environment presenting unique challenges, and introduces a novel algorithm called a Bounding Box Tree to create and select features for this particular gesture data. This thesis also analyzes several different machine learning techniques to determine a well-suited technique for the classification of this particular data set, with an artificial neural network being chosen as the implemented algorithm. The results of this work will allow for the desired introduction of computer science discussion into the specific setting used, as well as provide for future work pertaining to gesture recognition with astronomy presentation software.

  8. Wake Turbulence Mitigation for Departures (WTMD) Prototype System - Software Design Document

    NASA Technical Reports Server (NTRS)

    Sturdy, James L.

    2008-01-01

    This document describes the software design of a prototype Wake Turbulence Mitigation for Departures (WTMD) system that was evaluated in shadow mode operation at the Saint Louis (KSTL) and Houston (KIAH) airports. This document describes the software that provides the system framework, communications, user displays, and hosts the Wind Forecasting Algorithm (WFA) software developed by the M.I.T. Lincoln Laboratory (MIT-LL). The WFA algorithms and software are described in a separate document produced by MIT-LL.

  9. Idea Paper: The Lifecycle of Software for Scientific Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubey, Anshu; McInnes, Lois C.

    The software lifecycle is a well-researched topic that has produced many models to meet the needs of different types of software projects. However, one class of projects, software development for scientific computing, has received relatively little attention from lifecycle researchers. In particular, software for end-to-end computations for obtaining scientific results has received few lifecycle proposals and no formalization of a development model. An examination of development approaches employed by the teams implementing large multicomponent codes reveals a great deal of similarity in their strategies. This idea paper formalizes these related approaches into a lifecycle model for end-to-end scientific application software, featuring loose coupling between submodels for development of infrastructure and scientific capability. We also invite input from stakeholders to converge on a model that captures the complexity of this development process and provides needed lifecycle guidance to the scientific software community.

  10. Modular Software for Spacecraft Navigation Using the Global Positioning System (GPS)

    NASA Technical Reports Server (NTRS)

    Truong, S. H.; Hartman, K. R.; Weidow, D. A.; Berry, D. L.; Oza, D. H.; Long, A. C.; Joyce, E.; Steger, W. L.

    1996-01-01

    The Goddard Space Flight Center Flight Dynamics and Mission Operations Divisions have jointly investigated the feasibility of engineering modular Global Positioning System (GPS) navigation software to support both real-time flight and ground postprocessing configurations. The goals of this effort are to define standard GPS data interfaces and to engineer standard, reusable navigation software components that can be used to build a broad range of GPS navigation support applications. The paper discusses the GPS modular software (GMOD) system and operations concepts, major requirements, candidate software architecture, feasibility assessment and recommended software interface standards. In addition, ongoing efforts to broaden the scope of the initial study and to develop modular software to support autonomous navigation using GPS are addressed.

  11. End-to-end operations at the National Radio Astronomy Observatory

    NASA Astrophysics Data System (ADS)

    Radziwill, Nicole M.

    2008-07-01

    In 2006 NRAO launched a formal organization, the Office of End to End Operations (OEO), to broaden access to its instruments (VLA/EVLA, VLBA, GBT and ALMA) in the most cost-effective ways possible. The VLA, VLBA and GBT are mature instruments, and the EVLA and ALMA are currently under construction, which presents unique challenges for integrating software across the Observatory. This article 1) provides a survey of the new developments over the past year, and those planned for the next year, 2) describes the business model used to deliver many of these services, and 3) discusses the management models being applied to ensure continuous innovation in operations, while preserving the flexibility and autonomy of telescope software development groups.

  12. Intelligent Software for System Design and Documentation

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In an effort to develop a real-time, on-line database system that tracks documentation changes in NASA's propulsion test facilities, engineers at Stennis Space Center teamed with ECT International of Brookfield, WI, through the NASA Dual-Use Development Program to create the External Data Program and Hyperlink Add-on Modules for the promis*e software. Promis*e is ECT's top-of-the-line intelligent software for control system design and documentation. With promis*e the user can make use of the automated design process to quickly generate control system schematics, panel layouts, bills of material, wire lists, terminal plans and more. NASA and its testing contractors currently use promis*e to create the drawings and schematics at the E2 Cell 2 test stand located at Stennis Space Center.

  13. Software Systems for High-performance Quantum Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; Britt, Keith A

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  14. Final Scientific/Technical Report for "Enabling Exascale Hardware and Software Design through Scalable System Virtualization"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dinda, Peter August

    2015-03-17

    This report describes the activities, findings, and products of the Northwestern University component of the "Enabling Exascale Hardware and Software Design through Scalable System Virtualization" project. The purpose of this project has been to extend the state of the art of systems software for high-end computing (HEC) platforms, and to use systems software to better enable the evaluation of potential future HEC platforms, for example exascale platforms. Such platforms, and their systems software, have the goal of providing scientific computation at new scales, thus enabling new research in the physical sciences and engineering. Over time, the innovations in systems software for such platforms also become applicable to more widely used computing clusters, data centers, and clouds. This was a five-institution project, centered on the Palacios virtual machine monitor (VMM) systems software, a project begun at Northwestern, and originally developed in a previous collaboration between Northwestern University and the University of New Mexico. In this project, Northwestern (including via our subcontract to the University of Pittsburgh) contributed to the continued development of Palacios, along with other team members. We took the leadership role in (1) continued extension of support for emerging Intel and AMD hardware, (2) integration and performance enhancement of overlay networking, (3) connectivity with architectural simulation, (4) binary translation, and (5) support for modern Non-Uniform Memory Access (NUMA) hosts and guests. We also took a supporting role in specialized hardware support for I/O virtualization, profiling, and configurability, and in integration with configuration tools. The efforts we led (1-5) were largely successful and executed as expected, with code and papers resulting from them. The project demonstrated the feasibility of a virtualization layer for HEC computing, similar to such layers for cloud or datacenter computing. For effort (3

  15. Certification of COTS Software in NASA Human Rated Flight Systems

    NASA Technical Reports Server (NTRS)

    Goforth, Andre

    2012-01-01

    Adoption of commercial off-the-shelf (COTS) products in safety critical systems has been seen as a promising acquisition strategy to improve mission affordability and, yet, has come with significant barriers and challenges. Attempts to integrate COTS software components into NASA human rated flight systems have been, for the most part, complicated by verification and validation (V&V) requirements necessary for flight certification per NASA's own standards. For software that is from COTS sources, and, in general, from 3rd party sources, whether commercial, government, modified or open source, the expectation is that it meets the same certification criteria as those used for in-house software, and that it does so as if it were built in-house. The latter is a critical and hidden issue. This paper examines the longstanding barriers and challenges in the use of 3rd party software in safety critical systems and covers recent efforts to use COTS software in NASA's Multi-Purpose Crew Vehicle (MPCV) project. It identifies some core artifacts without which the use of COTS and 3rd party software is, for all practical purposes, a nonstarter for affordable and timely insertion into flight critical systems. The paper covers the first use in a flight critical system by NASA of COTS software that has prior FAA certification heritage, which was shown to meet the RTCA-DO-178B standard, and how this certification may, in some cases, be leveraged to allow the use of analysis in lieu of testing. Finally, the paper proposes the establishment of an open source forum for development of safety critical 3rd party software.

  16. Software Prototyping

    PubMed Central

    Del Fiol, Guilherme; Hanseler, Haley; Crouch, Barbara Insley; Cummins, Mollie R.

    2016-01-01

    Background: Health information exchange (HIE) between Poison Control Centers (PCCs) and Emergency Departments (EDs) could improve care of poisoned patients. However, PCC information systems are not designed to facilitate HIE with EDs; therefore, we are developing specialized software to support HIE within the normal workflow of the PCC using user-centered design and rapid prototyping. Objective: To describe the design of an HIE dashboard and the refinement of user requirements through rapid prototyping. Methods: Using previously elicited user requirements, we designed low-fidelity sketches of designs on paper with iterative refinement. Next, we designed an interactive high-fidelity prototype and conducted scenario-based usability tests with end users. Users were asked to think aloud while accomplishing tasks related to a case vignette. After testing, the users provided feedback and evaluated the prototype using the System Usability Scale (SUS). Results: Survey results from three users provided useful feedback that was then incorporated into the design. After achieving a stable design, we used the prototype itself as the specification for development of the actual software. Benefits of prototyping included 1) having subject-matter experts heavily involved with the design; 2) flexibility to make rapid changes; 3) the ability to minimize software development effort early in the design stage; 4) rapid finalization of requirements; 5) early visualization of designs; and 6) a powerful vehicle for communicating the design to the programmers. Challenges included 1) time and effort to develop the prototypes and case scenarios; 2) no simulation of system performance; 3) not having all proposed functionality available in the final product; and 4) missing needed data elements in the PCC information system. PMID:27081404

  17. CheMentor Software System by H. A. Peoples

    NASA Astrophysics Data System (ADS)

    Reid, Brian P.

    1997-09-01

    CheMentor Software System H. A. Peoples. Computerized Learning Enhancements: http://www.ecis.com/~clehap; email: clehap@ecis.com; 1996 - 1997. CheMentor is a series of software packages for introductory-level chemistry, which includes Practice Items (I), Stoichiometry (I), Calculating Chemical Formulae, and the CheMentor Toolkit. The first three packages provide practice problems for students and various types of help to solve them; the Toolkit includes "calculators" for determining chemical quantities as well as the Practice Items (I) set of problems. The set of software packages is designed so that each individual product acts as a module of a common CheMentor program. As the name CheMentor implies, the software is designed as a "mentor" for students learning introductory chemistry concepts and problems. The typical use of the software would be by individual students (or perhaps small groups) as an adjunct to lectures. CheMentor is a HyperCard application and the modules are HyperCard stacks. The requirements to run the packages include a Macintosh computer with at least 1 MB of RAM, a hard drive with several MB of available space depending upon the packages selected (10 MB were required for all the packages reviewed here), and the Mac operating system 6.0.5 or later.

  18. Image-Processing Software For A Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    Concurrent Image Processing Executive (CIPE) is a software system intended for developing and using image-processing application programs in a concurrent computing environment. Designed to shield the programmer from the complexities of concurrent-system architecture, it provides an interactive image-processing environment for the end user. CIPE utilizes the architectural characteristics of a particular concurrent system to maximize efficiency while preserving architectural independence from the user and programmer. CIPE runs on a Mark-IIIfp 8-node hypercube computer and an associated SUN-4 host computer.

  19. Traceability of Software Safety Requirements in Legacy Safety Critical Systems

    NASA Technical Reports Server (NTRS)

    Hill, Janice L.

    2007-01-01

    How can traceability of software safety requirements be created for legacy safety critical systems? Requirements in safety standards are most often imposed during contract negotiations. On the other hand, there are instances where safety standards are levied on legacy safety critical systems, some of which may be considered for reuse in new applications. Safety standards often specify that software development documentation include process-oriented and technical safety requirements, and also require that system and software safety analyses be performed to support implementation of the technical safety requirements. So what can be done if the requisite documents for establishing and maintaining safety requirements traceability are not available?

  20. Improving a data-acquisition software system with abstract data type components

    NASA Technical Reports Server (NTRS)

    Howard, S. D.

    1990-01-01

    Abstract data types and object-oriented design are active research areas in computer science and software engineering. Much of the interest is aimed at new software development. Abstract data type packages developed for a discontinued software project were used to improve a real-time data-acquisition system under maintenance. The result saved effort and contributed to a significant improvement in the performance, maintainability, and reliability of the Goldstone Solar System Radar Data Acquisition System.
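
    For illustration only (this is not the Goldstone code), an abstract data type of the kind described hides its representation behind a small set of operations, so the acquisition code that uses it never depends on how the buffer is stored:

```python
# A small abstract data type: a bounded FIFO sample buffer. Client code sees
# only put/get/full; the ring-buffer representation can change freely.
class SampleBuffer:
    def __init__(self, capacity: int):
        self._items = [None] * capacity
        self._head = 0
        self._size = 0

    def full(self) -> bool:
        return self._size == len(self._items)

    def put(self, sample) -> None:
        if self.full():
            raise OverflowError("buffer full")
        self._items[(self._head + self._size) % len(self._items)] = sample
        self._size += 1

    def get(self):
        if self._size == 0:
            raise IndexError("buffer empty")
        sample = self._items[self._head]
        self._head = (self._head + 1) % len(self._items)
        self._size -= 1
        return sample
```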

  1. Process Based on SysML for New Launchers System and Software Developments

    NASA Astrophysics Data System (ADS)

    Hiron, Emmanuel; Miramont, Philippe

    2010-08-01

    The purpose of this paper is to present the Astrium-ST engineering process based on SysML. This process is currently being set up in the frame of common CNES/Astrium-ST R&T studies related to the Ariane 5 electrical system and flight software modelling. The tool used to set up this process is Rhapsody release 7.3 from IBM [1]. This process focuses on the system engineering phase dedicated to software, with the objective of generating both system documents (sequential system design and flight control) and software specifications.

  2. Agile: From Software to Mission Systems

    NASA Technical Reports Server (NTRS)

    Trimble, Jay; Shirley, Mark; Hobart, Sarah

    2017-01-01

    To maximize efficiency and flexibility in Mission Operations System (MOS) design, we are evolving principles from agile and lean software methods to the complete mission system. This allows for reduced operational risk at reduced cost, and achieves a more effective design through early integration of operations into mission system engineering and flight system design. The core principles are assessment of capability through demonstration, risk reduction through targeted experiments, early test and deployment, and maturation of processes and tools through use.

  3. Software for integrated manufacturing systems, part 2

    NASA Technical Reports Server (NTRS)

    Volz, R. A.; Naylor, A. W.

    1987-01-01

    Part 1 presented an overview of the unified approach to manufacturing software. The specific characteristics of the approach that allow it to realize the goals of reduced cost, increased reliability and increased flexibility are considered. Why the blending of a components view, distributed languages, generics and formal models is important, why each individual part of this approach is essential, and why each component will typically have each of these parts are examined. An example of a specification for a real material handling system is presented using the approach and compared with the standard interface specification given by the manufacturer. Use of the component in a distributed manufacturing system is then compared with use of the traditional specification with a more traditional approach to designing the system. An overview is also provided of the underlying mechanisms used for implementing distributed manufacturing systems using the unified software/hardware component approach.

  4. Software Reliability Issues Concerning Large and Safety Critical Software Systems

    NASA Technical Reports Server (NTRS)

    Kamel, Khaled; Brown, Barbara

    1996-01-01

    This research was undertaken to provide NASA with a survey of state-of-the-art techniques used in industry and academia to provide safe, reliable, and maintainable software to drive large systems. Such systems must match the complexity and strict safety requirements of NASA's shuttle system. In particular, the Launch Processing System (LPS) is being considered for replacement. The LPS is responsible for monitoring and commanding the shuttle during test, repair, and launch phases. NASA built this system in the 1970s using mostly hardware techniques to provide for increased reliability, but it did so often using custom-built equipment, which has not been able to keep up with current technologies. This report surveys the major techniques used in industry and academia to ensure reliability in large and critical computer systems.

  5. Software control and system configuration management - A process that works

    NASA Technical Reports Server (NTRS)

    Petersen, K. L.; Flores, C., Jr.

    1983-01-01

    A comprehensive software control and system configuration management process for flight-crucial digital control systems of advanced aircraft has been developed and refined to ensure efficient flight system development and safe flight operations. Because of the highly complex interactions among the hardware, software, and system elements of state-of-the-art digital flight control system designs, a systems-wide approach to configuration control and management has been used. Specific procedures are implemented to govern discrepancy reporting and reconciliation, software and hardware change control, systems verification and validation testing, and formal documentation requirements. An active and knowledgeable configuration control board reviews and approves all flight system configuration modifications and revalidation tests. This flexible process has proved effective during the development and flight testing of several research aircraft and remotely piloted research vehicles with digital flight control systems that ranged from relatively simple to highly complex, integrated mechanizations.

  6. T-LECS: The Control Software System for MOIRCS

    NASA Astrophysics Data System (ADS)

    Yoshikawa, T.; Omata, K.; Konishi, M.; Ichikawa, T.; Suzuki, R.; Tokoku, C.; Katsuno, Y.; Nishimura, T.

    2006-07-01

    MOIRCS (Multi-Object Infrared Camera and Spectrograph) is a new instrument for the Subaru Telescope. We present the system design of the control software system for MOIRCS, named T-LECS (Tohoku University - Layered Electronic Control System). T-LECS is a PC-Linux based, network-distributed system. Two PCs equipped with the focal plane array system operate the two HAWAII2 detectors, and another PC hosts the user interfaces and a database server. These PCs also control various devices for observations, distributed over a TCP/IP network. T-LECS has three interfaces: an interface to the devices and two user interfaces. One user interface connects to the integrated observation control system (Subaru Observation Software System) for observers, and the other gives system developers direct access to the devices of MOIRCS. To help the communication between these interfaces, we employ an SQL database system.
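
    As a hedged illustration of the last point, a relational table can serve as the shared channel through which one interface posts device commands and another picks them up; the table layout and field names below are hypothetical, not the T-LECS schema.

```python
# Sketch: an SQL table mediating commands between the observer-facing and
# engineering-facing interfaces (SQLite used here purely for illustration).
import sqlite3

db = sqlite3.connect("tlecs_demo.db")
db.execute("""CREATE TABLE IF NOT EXISTS commands (
                  id      INTEGER PRIMARY KEY AUTOINCREMENT,
                  device  TEXT NOT NULL,         -- e.g. 'grism_wheel' (hypothetical)
                  command TEXT NOT NULL,         -- e.g. 'MOVE 3'
                  status  TEXT DEFAULT 'queued'  -- queued / running / done
              )""")

def post_command(device: str, command: str) -> int:
    cur = db.execute("INSERT INTO commands (device, command) VALUES (?, ?)",
                     (device, command))
    db.commit()
    return cur.lastrowid

def pending_commands():
    return db.execute(
        "SELECT id, device, command FROM commands WHERE status = 'queued'").fetchall()
```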

  7. An open-architecture approach to defect analysis software for mask inspection systems

    NASA Astrophysics Data System (ADS)

    Pereira, Mark; Pai, Ravi R.; Reddy, Murali Mohan; Krishna, Ravi M.

    2009-04-01

    Industry data suggests that Mask Inspection represents the second biggest component of Mask Cost and Mask Turn Around Time (TAT). Ever-decreasing defect size targets lead to more sensitive mask inspection across the chip, generating very large defect counts. Hence, more operator time is being spent in the analysis and disposition of defects. The fact that multiple Mask Inspection Systems and Defect Analysis strategies are typically in use in a Mask Shop or a Wafer Foundry further complicates the situation. In this scenario, there is a need for versatile, user-friendly and extensible Defect Analysis software that reduces operator analysis time and enables correct classification and disposition of mask defects by providing intuitive visual and analysis aids. We propose a new vendor-neutral defect analysis software package, NxDAT, based on an open architecture. The open architecture of NxDAT makes it easily extensible to support defect analysis for mask inspection systems from different vendors. The capability to load results from different vendors' mask inspection systems, either directly or through a common interface, makes it possible to correlate inspections carried out on different systems. This capability enhances the effectiveness of defect analysis, as it directly addresses the real-life scenario where multiple types of mask inspection systems from different vendors co-exist in mask shops or wafer foundries. The open architecture also potentially enables loading wafer inspection results, as well as data from other related tools such as review tools, repair tools, and CD-SEM tools, and correlating them with the corresponding mask inspection results. A plug-in interface further enhances the openness of the NxDAT architecture by enabling end users to add their own proprietary defect analysis and image processing algorithms. The plug-in interface makes it
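
    A plug-in interface of the kind described can be sketched as a small base class that user code subclasses and registers; the names below are hypothetical, since the abstract does not give the actual NxDAT API.

```python
# Hypothetical plug-in interface: end users drop in proprietary classification
# algorithms by implementing classify() and registering the analyzer.
from abc import ABC, abstractmethod

class DefectAnalyzer(ABC):
    @abstractmethod
    def classify(self, defect_image, reference_image) -> str:
        """Return a defect class label, e.g. 'pinhole' or 'false'."""

_PLUGINS: dict[str, DefectAnalyzer] = {}

def register_plugin(name: str, analyzer: DefectAnalyzer) -> None:
    _PLUGINS[name] = analyzer

def classify_with(name: str, defect_image, reference_image) -> str:
    return _PLUGINS[name].classify(defect_image, reference_image)
```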

  8. Advanced Transport Operating System (ATOPS) utility library software description

    NASA Technical Reports Server (NTRS)

    Clinedinst, Winston C.; Slominski, Christopher J.; Dickson, Richard W.; Wolverton, David A.

    1993-01-01

    The individual software processes used in the flight computers on-board the Advanced Transport Operating System (ATOPS) aircraft have many common functional elements. A library of commonly used software modules was created for general uses among the processes. The library includes modules for mathematical computations, data formatting, system database interfacing, and condition handling. The modules available in the library and their associated calling requirements are described.

  9. Agile Software Development in Defense Acquisition: A Mission Assurance Perspective

    DTIC Science & Technology

    2012-03-23

    ...based information retrieval system, we might say that this program works like a hive of bees, going out for pollen and bringing it back to the hive...Capturing and evaluating quality metrics, identifying common problem areas...Despite its positive impact on quality, pair programming...

  10. Structure, microstructure and infrared studies of Ba{sub 0.06}(Na{sub 1/2}Bi{sub 1/2}){sub 0.94}TiO{sub 3}-NaNbO{sub 3} ceramics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, Sumit K., E-mail: sumit.sxc13@gmail.com; Singh, S. N., E-mail: snsphyru@gmail.com; Prasad, K., E-mail: k.prasad65@gmail.com

    2016-05-06

    Lead-free solid solutions (1-x)Ba{sub 0.06}(Na{sub 1/2}Bi{sub 1/2}){sub 0.94}TiO{sub 3}-xNaNbO{sub 3} (0 ≤ x ≤ 1.0) were prepared by conventional ceramic fabrication technique. X-ray diffraction and Rietveld refinement analyses of these ceramics were carried out using X’Pert HighScore Plus software to determine the crystal symmetry, space group and unit cell dimensions. Rietveld refinement revealed that NaNbO{sub 3} with orthorhombic structure was completely diffused into Ba{sub 0.06}(Na{sub 1/2}Bi{sub 1/2}){sub 0.94}TiO{sub 3} lattice having the rhombohedral-tetragonal symmetry. EDS and SEM studies were carried out in order to evaluate the quality and purity of the compounds. SEM images showed a change in grain shape with the increase of NaNbO{sub 3} content. FTIR spectra confirmed the formation of solid solution.

  11. Software Engineering Laboratory (SEL) data base reporting software user's guide and system description. Volume 1: Introduction and user's guide

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Reporting software programs provide formatted listings and summary reports of the Software Engineering Laboratory (SEL) data base contents. The operating procedures and system information for 18 different reporting software programs are described. Sample output reports from each program are provided.

  12. Proposed software system for atomic-structure calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, C.F.

    1981-07-01

    Atomic structure calculations are understood well enough that, at a routine level, an atomic structure software package can be developed. At the Atomic Physics Conference in Riga in 1978, L.V. Chernysheva and M.Y. Amusia of Leningrad University presented a paper on Software for Atomic Calculations. Their system, called ATOM, is based on the Hartree-Fock approximation, and correlation is included within the framework of the RPAE. Energy level calculations, transition probabilities, photo-ionization cross-sections, and electron scattering cross-sections are some of the physical properties that can be evaluated by their system. The MCHF method, together with CI techniques and the Breit-Pauli approximation, also provides a sound theoretical basis for atomic structure calculations.

  13. End-to-end automated microfluidic platform for synthetic biology: from design to functional analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linshiz, Gregory; Jensen, Erik; Stawski, Nina

    Synthetic biology aims to engineer biological systems for desired behaviors. The construction of these systems can be complex, often requiring genetic reprogramming, extensive de novo DNA synthesis, and functional screening. Here, we present a programmable, multipurpose microfluidic platform and associated software and apply the platform to major steps of the synthetic biology research cycle: design, construction, testing, and analysis. We show the platform’s capabilities for multiple automated DNA assembly methods, including a new method for Isothermal Hierarchical DNA Construction, and for Escherichia coli and Saccharomyces cerevisiae transformation. The platform enables the automated control of cellular growth, gene expression induction, and proteogenic and metabolic output analysis. Finally, taken together, we demonstrate the microfluidic platform’s potential to provide end-to-end solutions for synthetic biology research, from design to functional analysis.

  14. End-to-end automated microfluidic platform for synthetic biology: from design to functional analysis

    DOE PAGES

    Linshiz, Gregory; Jensen, Erik; Stawski, Nina; ...

    2016-02-02

    Synthetic biology aims to engineer biological systems for desired behaviors. The construction of these systems can be complex, often requiring genetic reprogramming, extensive de novo DNA synthesis, and functional screening. Here, we present a programmable, multipurpose microfluidic platform and associated software and apply the platform to major steps of the synthetic biology research cycle: design, construction, testing, and analysis. We show the platform’s capabilities for multiple automated DNA assembly methods, including a new method for Isothermal Hierarchical DNA Construction, and for Escherichia coli and Saccharomyces cerevisiae transformation. The platform enables the automated control of cellular growth, gene expression induction, and proteogenic and metabolic output analysis. Finally, taken together, we demonstrate the microfluidic platform’s potential to provide end-to-end solutions for synthetic biology research, from design to functional analysis.

  15. Phase diagrams of the sections As{sub 2}S{sub 3}-Tl{sub 3}AsS{sub 4}, Tl{sub 3}AsS{sub 4}-S, and Tl{sub 3}AsS{sub 4}-Tl{sub 2}S of the ternary system As-Tl-S

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vorob'ev, Yu.I.; Velikova, N.G.; Kirilenko, V.V.

    1987-12-01

    Using DTA and XPA methods, microstructural investigations, and microhardness measurements, phase diagrams of the quasibinary sections As{sub 2}S{sub 3}-Tl{sub 3}AsS{sub 4}, Tl{sub 3}AsS{sub 4}-S, and Tl{sub 3}AsS{sub 4}-Tl{sub 2}S are characterized by five ternary compounds Tl{sub 3}As{sub 5}S{sub 10}, Tl{sub 9}As{sub 5}S{sub 15}, Tl{sub 9}As{sub 3}S{sub 13}, Tl{sub 3}AsS{sub 6}, and Tl{sub 8}As{sub 2}S{sub 9}, which decompose by peritectic reactions at 198, 307, 408, 362, and 318 °C, respectively. Interplanar spacings and line intensities are given for the detected compounds. Glass formation is considered in the Tl-As-S system.

  16. Software Defined Radio with Parallelized Software Architecture

    NASA Technical Reports Server (NTRS)

    Heckler, Greg

    2013-01-01

    This software implements software-defined radio processing over multi-core, multi-CPU systems in a way that maximizes the use of CPU resources in the system. The software treats each processing step in either a communications or navigation modulator or demodulator system as an independent, threaded block. Each threaded block is defined with a programmable number of input or output buffers; these buffers are implemented using POSIX pipes. In addition, each threaded block is assigned a unique thread upon block installation. A modulator or demodulator system is built by assembling the threaded blocks into a flow graph that accomplishes the desired signal processing. This software architecture allows the software to scale effortlessly between single-CPU/single-core computers and multi-CPU/multi-core computers without recompilation. NASA spaceflight and ground communications systems currently rely exclusively on ASICs or FPGAs. This software allows low- and medium-bandwidth (100 bps to approx. 50 Mbps) software defined radios to be designed and implemented solely in C/C++ software, while lowering development costs and facilitating reuse and extensibility.
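
    A rough Python analogue of the threaded-block architecture follows (the flight software is C/C++; the block names and the 4-byte sample format are illustrative): each block runs in its own thread and exchanges samples with its neighbours through POSIX pipes, so the flow graph is just the set of pipes connecting them.

```python
# Three threaded blocks connected by POSIX pipes: source -> scale -> sink.
import os
import threading

def source(write_fd: int, n: int) -> None:
    for i in range(n):
        os.write(write_fd, i.to_bytes(4, "little"))   # produce raw samples
    os.close(write_fd)                                # EOF for the downstream block

def scale_block(read_fd: int, write_fd: int, gain: int) -> None:
    while chunk := os.read(read_fd, 4):               # one 4-byte sample per read
        value = int.from_bytes(chunk, "little") * gain
        os.write(write_fd, value.to_bytes(4, "little"))
    os.close(write_fd)

def sink(read_fd: int) -> None:
    while chunk := os.read(read_fd, 4):
        print(int.from_bytes(chunk, "little"))

r1, w1 = os.pipe()                                    # source -> scale
r2, w2 = os.pipe()                                    # scale  -> sink
threads = [
    threading.Thread(target=source, args=(w1, 5)),
    threading.Thread(target=scale_block, args=(r1, w2, 10)),
    threading.Thread(target=sink, args=(r2,)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```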

  17. Software Defined Radio with Parallelized Software Architecture

    NASA Technical Reports Server (NTRS)

    Heckler, Greg

    2013-01-01

    This software implements software-defined radio processing over multi-core, multi-CPU systems in a way that maximizes the use of CPU resources in the system. The software treats each processing step in either a communications or navigation modulator or demodulator system as an independent, threaded block. Each threaded block is defined with a programmable number of input or output buffers; these buffers are implemented using POSIX pipes. In addition, each threaded block is assigned a unique thread upon block installation. A modulator or demodulator system is built by assembling the threaded blocks into a flow graph that accomplishes the desired signal processing. This software architecture allows the software to scale effortlessly between single-CPU/single-core computers and multi-CPU/multi-core computers without recompilation. NASA spaceflight and ground communications systems currently rely exclusively on ASICs or FPGAs. This software allows low- and medium-bandwidth (100 bps to approx. 50 Mbps) software defined radios to be designed and implemented solely in C/C++ software, while lowering development costs and facilitating reuse and extensibility.

  18. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    DOE PAGES

    Claus, R.

    2015-10-23

    The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfiguration Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf through software waveform feature extraction to output 32 S-links. Furthermore, the full system was installed in Sept. 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.

  19. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    NASA Astrophysics Data System (ADS)

    Claus, R.; ATLAS Collaboration

    2016-07-01

    The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfiguration Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf through software waveform feature extraction to output 32 S-links. The full system was installed in Sept. 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.

  20. Concurrent simulation of a parallel jaw end effector

    NASA Technical Reports Server (NTRS)

    Bynum, Bill

    1985-01-01

    A system of programs developed to aid in the design and development of the command/response protocol between a parallel jaw end effector and the strategic planner program controlling it is presented. The system executes concurrently with the LISP controlling program to generate a graphical image of the end effector that moves in approximately real time in response to commands sent from the controlling program. Concurrent execution of the simulation program is useful for revealing flaws in the communication command structure arising from the asynchronous nature of the message traffic between the end effector and the strategic planner. Software simulation helps to minimize the number of hardware changes needed in the microprocessor driving the end effector when the communication protocol changes. The simulation of other actuator devices can be easily incorporated into the system of programs by using the underlying support that was developed for the concurrent execution of the simulation process and the communication between it and the controlling program.