Sample records for back-end software sub-system

  1. Low-cost, high-speed back-end processing system for high-frequency ultrasound B-mode imaging.

    PubMed

    Chang, Jin Ho; Sun, Lei; Yen, Jesse T; Shung, K Kirk

    2009-07-01

    For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution.
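
    The core back-end steps this record describes (turning echo data into displayable B-mode lines) reduce to envelope detection followed by log compression. The sketch below is a minimal NumPy/SciPy illustration of that classic pipeline, not the paper's FPGA/C++ implementation; the simulated RF line and the 50 dB dynamic range are assumptions.

```python
# Minimal B-mode back-end sketch: envelope detection + log compression.
# The simulated RF line and 50 dB display dynamic range are illustrative
# choices, not parameters taken from the paper.
import numpy as np
from scipy.signal import hilbert

def bmode_scanline(rf, dynamic_range_db=50.0):
    """Convert one RF scanline to log-compressed B-mode amplitudes in [0, 1]."""
    envelope = np.abs(hilbert(rf))            # demodulate to the echo envelope
    envelope /= envelope.max() + 1e-12        # normalize before log compression
    db = 20.0 * np.log10(envelope + 1e-12)    # amplitude in decibels
    return np.clip(db + dynamic_range_db, 0.0, dynamic_range_db) / dynamic_range_db

# Example: a decaying simulated echo line.
t = np.linspace(0.0, 1.0, 2048)
rf = np.exp(-3.0 * t) * np.sin(2.0 * np.pi * 40.0 * t)
image_line = bmode_scanline(rf)
print(image_line.shape, image_line.max())
```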

  2. Low-Cost, High-Speed Back-End Processing System for High-Frequency Ultrasound B-Mode Imaging

    PubMed Central

    Chang, Jin Ho; Sun, Lei; Yen, Jesse T.; Shung, K. Kirk

    2009-01-01

    For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution. PMID:19574160

  3. The Development of Design Tools for Fault Tolerant Quantum Dot Cellular Automata Based Logic

    NASA Technical Reports Server (NTRS)

    Armstrong, Curtis D.; Humphreys, William M.

    2003-01-01

    We are developing software to explore the fault tolerance of quantum dot cellular automata gate architectures in the presence of manufacturing variations and device defects. The Topology Optimization Methodology using Applied Statistics (TOMAS) framework extends the capabilities of A Quantum Interconnected Network Array Simulator (AQUINAS) by adding front-end and back-end software and creating an environment that integrates all of these components. The front-end tools establish all simulation parameters, configure the simulation system, automate the Monte Carlo generation of simulation files, and execute the simulation of these files. The back-end tools perform automated data parsing, statistical analysis and report generation.
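
    The Monte Carlo generation of simulation files that the front-end automates can be pictured as below: perturb nominal device parameters and write one input file per run. The parameter names, distributions, and JSON file format are hypothetical, for illustration only, and do not reproduce TOMAS or AQUINAS internals.

```python
# Sketch of a front-end Monte Carlo step: generate simulation input files
# with manufacturing variations applied to nominal device parameters.
# Parameter names and the file format are hypothetical.
import json
import random

NOMINAL = {"dot_spacing_nm": 20.0, "cell_width_nm": 60.0, "defect_rate": 0.0}

def generate_runs(n_runs, sigma=0.05, seed=42):
    rng = random.Random(seed)
    for i in range(n_runs):
        params = {k: v * rng.gauss(1.0, sigma) if v else v for k, v in NOMINAL.items()}
        with open(f"run_{i:04d}.json", "w") as f:
            json.dump(params, f, indent=2)

generate_runs(10)   # writes run_0000.json ... run_0009.json
```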

  4. Implementing the concurrent operation of sub-arrays in the ALMA correlator

    NASA Astrophysics Data System (ADS)

    Amestica, Rodrigo; Perez, Jesus; Lacasse, Richard; Saez, Alejandro

    2016-07-01

    The ALMA correlator processes the digitized signals from 64 individual antennas to produce a grand total of 2016 correlated baselines, with runtime-selectable lag resolution and integration time. The on-line software system can process a maximum of 125M visibilities per second, producing an archiving data rate close to one sixteenth of that figure (7.8M visibilities per second, with a network transfer limit of 60 MB/sec). Mechanisms in the correlator hardware design make it possible to split the total number of antennas in the array into smaller subsets, or sub-arrays, such that they can share correlator resources while executing independent observations. The software part of the sub-system is responsible for configuring and scheduling correlator resources in such a way that observations among independent sub-arrays occur simultaneously while internally sharing correlator resources under a cooperative arrangement. Configuring correlator modes through the CAN-bus interface and issuing periodic geometric delay updates are the most relevant activities to schedule while observations proceed at the same time across a number of sub-arrays. For that to work correctly, the software interface to sub-arrays schedules shared correlator resources sequentially before observations actually start on each sub-array. Start times for specific observations are optimized and reported back to the higher-level observing software. Once that initial sequential phase has taken place, simultaneous execution and recording of correlated data across different sub-arrays move forward concurrently, sharing the local network to broadcast results to other software sub-systems. This paper presents an overview of the different hardware and software actors within the correlator sub-system that implement the concurrency and synchronization needed for seamless, simultaneous operation of multiple sub-arrays; the limitations stemming from the resource-sharing nature of the correlator and from the digital technology available in the correlator hardware; and the milestones reached so far by this new ALMA feature.
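
    The scheduling pattern described here, a sequential configuration phase over shared resources followed by concurrent observations, is sketched below under illustrative assumptions; the sub-array counts, timings, and function names are made up and do not reflect the actual ALMA software.

```python
# Sketch of the pattern: shared correlator resources are configured
# sequentially (one sub-array at a time), then observations run concurrently.
import threading
import time

config_lock = threading.Lock()   # serializes access to shared correlator resources

def observe(subarray_id, antennas):
    with config_lock:            # sequential phase: mode configuration, delay setup
        print(f"sub-array {subarray_id}: configuring {len(antennas)} antennas")
        time.sleep(0.1)          # stand-in for hardware configuration latency
    print(f"sub-array {subarray_id}: observing concurrently")
    time.sleep(0.5)              # stand-in for the observation itself

threads = [threading.Thread(target=observe, args=(i, range(16))) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```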

  5. OPeNDAP Server4: Building a High-Performance Server for the DAP by Leveraging Existing Software

    NASA Astrophysics Data System (ADS)

    Potter, N.; West, P.; Gallagher, J.; Garcia, J.; Fox, P.

    2006-12-01

    OPeNDAP has been working in conjunction with NCAR/ESSL/HAO to develop a modular, high-performance data server that will be the successor to the current OPeNDAP data server. The new server, called Server4, is really two servers: a 'Back-End' data server, which reads information from various types of data sources and packages the results in DAP objects; and a 'Front-End', which receives client DAP requests and then decides how to use features of the Back-End data server to build the correct responses. This architecture can be configured in several interesting ways: the Front- and Back-End components can be run on either the same or different machines, depending on security and performance needs; new Front-End software can be written to support other network data access protocols; and local applications can interact directly with the Back-End data server. The new server's Back-End component will use the server infrastructure developed by HAO for the Earth System Grid II project. Extensions needed to use it as part of the new OPeNDAP server were minimal. The HAO server was modified so that it loads 'data handlers' at run time. Each data handler module only needs to satisfy a simple interface, which both enables the existing data handlers written for the old OPeNDAP server to be used directly and simplifies writing new handlers from scratch. The Back-End server leverages high-performance features developed for the ESG II project, so applications that interact with it directly can read large volumes of data efficiently. The Front-End module of Server4 uses the Java Servlet system in place of the Common Gateway Interface (CGI) used in the past. New front-end modules can be written to support different network data access protocols, so the same server will ultimately be able to support more than the DAP/2.0 protocol. As an example, we will discuss a SOAP interface that is currently in development. In addition to support for DAP/2.0 and prototypical support for a SOAP interface, the new server includes support for the THREDDS cataloging protocol. THREDDS is tightly integrated into the Front-End of Server4. The Server4 Front-End can make full use of advanced THREDDS features such as attribute specification and inheritance, custom catalogs that segue into automatically generated catalogs, and a default behavior that requires almost no catalog configuration.
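
    A minimal sketch of the run-time 'data handler' idea follows: each handler satisfies one small interface and is looked up by data-source type. The interface, registry, and CSV handler are assumptions for illustration and do not reproduce the actual Server4 API.

```python
# Sketch of run-time loadable data handlers behind one simple interface.
from abc import ABC, abstractmethod

class DataHandler(ABC):
    extensions: tuple = ()

    @abstractmethod
    def read(self, path: str) -> dict:
        """Read a data source and package it as a DAP-like object (here, a dict)."""

_REGISTRY = {}

def register(handler_cls):
    for ext in handler_cls.extensions:
        _REGISTRY[ext] = handler_cls()   # one shared instance per file type
    return handler_cls

@register
class CsvHandler(DataHandler):
    extensions = (".csv",)
    def read(self, path):
        with open(path) as f:
            rows = [line.rstrip("\n").split(",") for line in f]
        return {"source": path, "rows": rows}

def handle(path):
    """Dispatch to whichever handler registered the file's extension."""
    ext = path[path.rfind("."):]
    return _REGISTRY[ext].read(path)
```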

  6. Algorithm for fast event parameters estimation on GEM acquired data

    NASA Astrophysics Data System (ADS)

    Linczuk, Paweł; Krawczyk, Rafał D.; Poźniak, Krzysztof T.; Kasprowicz, Grzegorz; Wojeński, Andrzej; Chernyshova, Maryna; Czarski, Tomasz

    2016-09-01

    We present a study of a software-hardware environment for developing fast, high-throughput, low-latency computation methods that can serve as the back-end in High Energy Physics (HEP) and other High Performance Computing (HPC) systems fed by large volumes of input from electronic, sensor-based front-ends. The paper discusses and tests parallelization possibilities on Intel HPC solutions, with consideration of applications in Gas Electron Multiplier (GEM) measurement systems.

  7. The MeqTrees software system and its use for third-generation calibration of radio interferometers

    NASA Astrophysics Data System (ADS)

    Noordam, J. E.; Smirnov, O. M.

    2010-12-01

    Context. The formulation of the radio interferometer measurement equation (RIME) for a generic radio telescope by Hamaker et al. has provided us with an elegant mathematical apparatus for better understanding, simulation and calibration of existing and future instruments. The calibration of the new radio telescopes (LOFAR, SKA) would be unthinkable without the RIME formalism, and new software to exploit it. Aims: The MeqTrees software system is designed to implement numerical models, and to solve for arbitrary subsets of their parameters. It may be applied to many problems, but was originally geared towards implementing Measurement Equations in radio astronomy for the purposes of simulation and calibration. The technical goal of MeqTrees is to provide a tool for rapid implementation of such models, while offering performance comparable to hand-written code. We are also pursuing the wider goal of increasing the rate of evolution of radio astronomical software, by offering a tool that facilitates rapid experimentation, and exchange of ideas (and scripts). Methods: MeqTrees is implemented as a Python-based front-end called the meqbrowser, and an efficient (C++-based) computational back-end called the meqserver. Numerical models are defined on the front-end via a Python-based Tree Definition Language (TDL), then rapidly executed on the back-end. The use of TDL facilitates an extremely short turn-around time (hours rather than weeks or months) for experimentation with new ideas. This is also helped by unprecedented visualization capabilities for all final and intermediate results. A flexible data model and a number of important optimizations in the back-end ensure that the numerical performance is comparable to that of hand-written code. Results: MeqTrees is already widely used as the simulation tool for new instruments (LOFAR, SKA) and technologies (focal plane arrays). It has demonstrated that it can achieve a noise-limited dynamic range in excess of a million, on WSRT data. It is the only package that is specifically designed to handle what we propose to call third-generation calibration (3GC), which is needed for the new generation of giant radio telescopes, but can also improve the calibration of existing instruments.
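
    The tree idea at the heart of MeqTrees (a numerical model as a tree of nodes that the back-end evaluates, with some leaves being solvable parameters) can be illustrated with the toy evaluator below; this mimics the concept only and is neither TDL syntax nor the meqserver node set.

```python
# Toy expression-tree evaluator illustrating the "model as a tree" concept.
import numpy as np

class Node:
    def __init__(self, op, *children):
        self.op, self.children = op, children
    def evaluate(self, **inputs):
        # Children are either sub-trees, parameter names, or literal constants.
        vals = [c.evaluate(**inputs) if isinstance(c, Node)
                else inputs.get(c, c) for c in self.children]
        return self.op(*vals)

# Model: gain * exp(1j * phase) applied to a visibility; gain and phase would
# be the solvable parameters in a calibration run.
model = Node(lambda g, p, v: g * np.exp(1j * p) * v, "gain", "phase", "vis")
print(model.evaluate(gain=1.2, phase=0.3, vis=np.array([1 + 1j, 2 - 1j])))
```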

  8. Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models.

    PubMed

    Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A

    2014-01-01

    Multiple software programs are available for designing and running large-scale, system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs, and so on. Therefore, it is desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specifics are preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage, and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients.
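
    The execution pattern described (a compiled model as a self-contained executable whose full specifics are exposed as input parameters) might look like the sketch below, where the platform launches a run and collects results; the executable name, flag style, and JSON output convention are hypothetical.

```python
# Sketch of launching a compiled, tool-independent model executable with all
# model specifics passed as plain input parameters. Names/flags are made up.
import json
import subprocess

def run_simulation(executable, params):
    args = [executable] + [f"--{k}={v}" for k, v in params.items()]
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)   # assume the model prints JSON results

# Example launch for one "virtual patient" (hypothetical binary and parameters):
# outputs = run_simulation("./t2dm_model", {"metformin_mg": 500, "hba1c_0": 8.1})
```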

  9. Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models

    PubMed Central

    Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A.

    2014-01-01

    Multiple software programs are available for designing and running large scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools that could increase model development time, IT costs and so on. Therefore, it is desirable to have a single platform that allows setting up and running large-scale simulations for the models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time the full model specifics is preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model agnostic, therapeutic area agnostic and web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs and the back-end database has been implemented to store and manage all aspects of the systems, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients. PMID:25374542

  10. Brain Computer Interface on Track to Home.

    PubMed

    Miralles, Felip; Vargiu, Eloisa; Dauwalder, Stefan; Solà, Marc; Müller-Putz, Gernot; Wriessnegger, Selina C; Pinegger, Andreas; Kübler, Andrea; Halder, Sebastian; Käthner, Ivo; Martin, Suzanne; Daly, Jean; Armstrong, Elaine; Guger, Christoph; Hintermüller, Christoph; Lowish, Hannah

    2015-01-01

    The novel BackHome system offers individuals with disabilities a range of useful services available via brain-computer interfaces (BCIs), to help restore their independence. This is the first time such technology is ready to be deployed in the real world, that is, at the target end users' home. This has been achieved by the development of practical electrodes, easy-to-use software, and telemonitoring and home-support capabilities, which have been conceived, implemented, and tested within a user-centred design approach. The final BackHome system is the result of a 3-year-long process involving extensive user engagement to maximize effectiveness, reliability, robustness, and ease of use of a home-based BCI system. The system comprises ergonomic and hassle-free BCI equipment; one-click software services for Smart Home control, cognitive stimulation, and web browsing; and remote telemonitoring and home support tools to enable independent home use for nonexpert caregivers and users. BackHome aims to successfully bring BCIs to the home of people with limited mobility to restore their independence and ultimately improve their quality of life.

  11. Brain Computer Interface on Track to Home

    PubMed Central

    Miralles, Felip; Dauwalder, Stefan; Müller-Putz, Gernot; Wriessnegger, Selina C.; Pinegger, Andreas; Kübler, Andrea; Halder, Sebastian; Käthner, Ivo; Guger, Christoph; Lowish, Hannah

    2015-01-01

    The novel BackHome system offers individuals with disabilities a range of useful services available via brain-computer interfaces (BCIs), to help restore their independence. This is the first time such technology is ready to be deployed in the real world, that is, at the target end users' home. This has been achieved by the development of practical electrodes, easy-to-use software, and telemonitoring and home-support capabilities, which have been conceived, implemented, and tested within a user-centred design approach. The final BackHome system is the result of a 3-year-long process involving extensive user engagement to maximize effectiveness, reliability, robustness, and ease of use of a home-based BCI system. The system comprises ergonomic and hassle-free BCI equipment; one-click software services for Smart Home control, cognitive stimulation, and web browsing; and remote telemonitoring and home support tools to enable independent home use for nonexpert caregivers and users. BackHome aims to successfully bring BCIs to the home of people with limited mobility to restore their independence and ultimately improve their quality of life. PMID:26167530

  12. The software-defined fast post-processing for GEM soft x-ray diagnostics in the Tungsten Environment in Steady-state Tokamak thermal fusion reactor

    NASA Astrophysics Data System (ADS)

    Krawczyk, Rafał Dominik; Czarski, Tomasz; Linczuk, Paweł; Wojeński, Andrzej; Kolasiński, Piotr; Gąska, Michał; Chernyshova, Maryna; Mazon, Didier; Jardin, Axel; Malard, Philippe; Poźniak, Krzysztof; Kasprowicz, Grzegorz; Zabołotny, Wojciech; Kowalska-Strzeciwilk, Ewa; Malinowski, Karol

    2018-06-01

    This article presents novel software-defined, server-based solutions that were introduced in the fast, real-time computation systems for soft X-ray diagnostics for the WEST (Tungsten Environment in Steady-state Tokamak) reactor in Cadarache, France. The objective of the research was to provide fast processing of data at high throughput and with low latencies for investigating the interplay between particle transport and magnetohydrodynamic activity. The long-term objective is to implement a fast feedback signal in the reactor control mechanisms to sustain the fusion reaction. The implemented electronic measurement device is anticipated to be deployed in the WEST. A standalone software-defined computation engine was designed to handle data collected at high rates in the server back-end of the system. Signals are obtained from the front-end field-programmable gate array mezzanine cards, which acquire and perform a selection from the gas electron multiplier detector. A fast, custom library for plasma diagnostics was written in C++. It originated from reference offline MATLAB implementations, which were redesigned for runtime analysis during the experiment in the novel online modes of operation. The implementation allowed the benchmarking, evaluation, and optimization of plasma processing algorithms, with the ability to check consistency against the reference computations written in MATLAB. The back-end software and hardware architecture are presented together with the data evaluation mechanisms. The online modes of operation for the WEST are discussed, and results concerning the performance of the processing and the introduced functionality are presented.
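
    One concrete element described here is checking the online implementation for consistency with the reference MATLAB computations. A minimal sketch of such a check follows; the histogram stand-ins and tolerance are assumptions, not the project's actual algorithms.

```python
# Sketch of an online-vs-reference consistency check within a tolerance.
import numpy as np

def reference_histogram(energies, edges):
    return np.histogram(energies, bins=edges)[0]      # offline-style reference

def online_histogram(energies, edges):
    # Stand-in for the optimized online path; here it is deliberately identical.
    return np.histogram(energies, bins=edges)[0]

rng = np.random.default_rng(0)
energies = rng.exponential(3.0, size=100_000)
edges = np.linspace(0.0, 20.0, 65)
assert np.allclose(online_histogram(energies, edges),
                   reference_histogram(energies, edges), rtol=1e-9)
print("online implementation consistent with reference")
```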

  13. State Analysis Database Tool

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert; Bennett, Matthew

    2006-01-01

    The State Analysis Database Tool software establishes a productive environment for collaboration among software and system engineers engaged in the development of complex interacting systems. The tool embodies State Analysis, a model-based system engineering methodology founded on a state-based control architecture (see figure). A state represents a momentary condition of an evolving system, and a model may describe how a state evolves and is affected by other states. The State Analysis methodology is a process for capturing system and software requirements in the form of explicit models and states, and defining goal-based operational plans consistent with the models. Requirements, models, and operational concerns have traditionally been documented in a variety of system engineering artifacts that address different aspects of a mission's lifecycle. In State Analysis, requirements, models, and operations information are State Analysis artifacts that are consistent and stored in a State Analysis Database. The tool includes a back-end database, a multi-platform front-end client, and Web-based administrative functions. The tool is structured to prompt an engineer to follow the State Analysis methodology, to encourage state discovery and model description, and to make software requirements and operations plans consistent with model descriptions.

  14. Basin Assessment Spatial Planning Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The tool is intended to facilitate hydropower development and water resource planning by improving synthesis and interpretation of disparate spatial datasets that are considered in development actions (e.g., hydrological characteristics, environmentally and culturally sensitive areas, existing or proposed water power resources, climate-informed forecasts). The tool enables this capability by providing a unique framework for assimilating, relating, summarizing, and visualizing disparate spatial data through the use of spatial aggregation techniques, relational geodatabase platforms, and an interactive web-based Geographic Information Systems (GIS). Data are aggregated and related based on shared intersections with a common spatial unit; in this case, industry-standard hydrologic drainage areas for the U.S. (National Hydrography Dataset) are used as the spatial unit to associate planning data. This process is performed using all available scalar delineations of drainage areas (i.e., region, sub-region, basin, sub-basin, watershed, sub-watershed, catchment) to create spatially hierarchical relationships among planning data and drainages. These entity-relationships are stored in a relational geodatabase that provides back-end structure to the web GIS and its widgets. The full technology stack was built using all open-source software in modern programming languages. Interactive widgets that function within the viewport are also compatible with all modern browsers.
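
    The hierarchical aggregation described works because U.S. hydrologic unit codes nest by prefix (2-digit region, 4-digit sub-region, and so on), so data keyed to a fine unit can be rolled up to any coarser one. A minimal sketch, with made-up codes and values:

```python
# Roll site-level values up the hydrologic unit hierarchy by code prefix.
from collections import defaultdict

site_values = {"0207000403": 12.5, "0207000404": 8.0, "0208010101": 3.2}

def roll_up(values, level):
    """Aggregate values to a coarser HUC level (2, 4, 6, 8, or 10 digits)."""
    totals = defaultdict(float)
    for huc, v in values.items():
        totals[huc[:level]] += v
    return dict(totals)

print(roll_up(site_values, 4))   # {'0207': 20.5, '0208': 3.2}
```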

  15. Software beamforming: comparison between a phased array and synthetic transmit aperture.

    PubMed

    Li, Yen-Feng; Li, Pai-Chi

    2011-04-01

    The data-transfer and computation requirements are compared between software-based beamforming using a phased array (PA) and a synthetic transmit aperture (STA). The advantages of a software-based architecture are reduced system complexity and lower hardware cost. Although this architecture can be implemented using commercial CPUs or GPUs, the high computation and data-transfer requirements limit its real-time beamforming performance. In particular, transferring the raw rf data from the front-end subsystem to the software back-end remains challenging with current state-of-the-art electronics technologies, which offsets the cost advantage of the software back-end. This study investigated the tradeoff between the data-transfer and computation requirements. Two beamforming methods, based on a PA and an STA, respectively, were used: the former requires a higher data-transfer rate and the latter requires more memory operations. The beamformers were implemented on an NVIDIA GeForce GTX 260 GPU and an Intel Core i7 920 CPU. The frame rate of PA beamforming was 42 fps with a 128-element array transducer, with 2048 samples per firing and 189 beams per image (with a 95 MB/frame data-transfer requirement). The frame rate of STA beamforming was 40 fps with 16 firings per image (with an 8 MB/frame data-transfer requirement). Both approaches achieved real-time beamforming performance, but each had its own bottleneck: the required data-transfer speed was considerably reduced in STA beamforming, but at the cost of more memory operations, which limited the overall computation time. The advantages of the GPU approach over the CPU approach were clearly demonstrated.
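
    Both architectures share the software delay-and-sum step whose cost is being compared. A minimal receive-only sketch follows; the array geometry, sampling rate, sound speed, and one-way delay model are illustrative assumptions, not the paper's beamformers.

```python
# Minimal delay-and-sum beamforming sketch for one focal point.
import numpy as np

def delay_and_sum(rf, elem_x, focus, fs, c=1540.0):
    """rf: (n_elements, n_samples) received data; focus: (x, z) point in meters."""
    n_elem, n_samp = rf.shape
    out = 0.0
    for i in range(n_elem):
        dist = np.hypot(focus[0] - elem_x[i], focus[1])   # element-to-focus path
        idx = int(round(dist / c * fs))                   # one-way delay in samples
        if idx < n_samp:
            out += rf[i, idx]                             # coherent sum
    return out

rf = np.random.randn(128, 2048)
elem_x = np.linspace(-0.019, 0.019, 128)    # 128 elements over ~38 mm
print(delay_and_sum(rf, elem_x, focus=(0.0, 0.03), fs=40e6))
```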

  16. Perl Extension to the Bproc Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grunau, Daryl W.

    2004-06-07

    The Beowulf Distributed Process Space (Bproc) software stack comprises UNIX/Linux kernel modifications and a support library by which a cluster of machines, each running its own private kernel, can present itself as a unified process space to the user. A Bproc cluster contains a single front-end machine and many back-end nodes, which receive and run processes given to them by the front-end. Any process which is migrated to a back-end node is also visible as a ghost process on the front-end, and may be controlled there using traditional UNIX semantics (e.g. ps(1), kill(1), etc.). This software is a Perl extension to the Bproc library which enables the Perl programmer to make direct calls to functions within the Bproc library. See http://www.clustermatic.org, http://bproc.sourceforge.net, and http://www.perl.org

  17. Putting Safety in the Software

    NASA Technical Reports Server (NTRS)

    Wetherholt, Martha S.; Berens, Kalynnda M.; Hardy, Sandra (Technical Monitor)

    2001-01-01

    Software is a vital component of nearly every piece of modern technology. It is not a 'sub-system', able to be separated out from the system as a whole, but a 'co-system' that controls, manipulates, or interacts with the hardware and with the end user. Software has its fingers into all the pieces of the pie. If that 'pie', the system, can lead to injury, death, loss of major equipment, or impact your business bottom line, then software safety becomes vitally important. Learning to think about software from a safety perspective is the focus of this paper. We want you to think of software as part of the safety critical system, a major part. This requires 'system thinking' - being able to grasp the whole picture. Software's contribution to modern technology is both good and potentially bad. Software allows more complex and useful devices to be built. It can also contribute to plane crashes and power outages. We want you to see software in a whole new light, see it as a contributor to system hazards, and also as a possible fix or mitigation to some of those hazards.

  18. European Space Software Repository ESSR

    NASA Astrophysics Data System (ADS)

    Livschitz, Jakob; Blommestijn, Robert

    2016-08-01

    The paper and presentation describe the status of the ESSR (European Space Software Repository), see [1]. They describe the development phases, outline the web portal functionality, and explain the process steps behind it. Not only the front-end but also the back-end is discussed. The ESSR web portal went live ESA-internally on May 15, 2015, and worldwide on September 19, 2015. Currently the ESSR is in operation.

  19. DataSpread: Unifying Databases and Spreadsheets.

    PubMed

    Bendre, Mangesh; Sun, Bofan; Zhang, Ding; Zhou, Xinyan; Chang, Kevin ChenChuan; Parameswaran, Aditya

    2015-08-01

    Spreadsheet software is often the tool of choice for ad-hoc tabular data management, processing, and visualization, especially on tiny data sets. On the other hand, relational database systems offer significant power, expressivity, and efficiency over spreadsheet software for data management, while lacking in the ease of use and ad-hoc analysis capabilities. We demonstrate DataSpread, a data exploration tool that holistically unifies databases and spreadsheets. It continues to offer a Microsoft Excel-based spreadsheet front-end, while in parallel managing all the data in a back-end database, specifically, PostgreSQL. DataSpread retains all the advantages of spreadsheets, including ease of use, ad-hoc analysis and visualization capabilities, and a schema-free nature, while also adding the advantages of traditional relational databases, such as scalability and the ability to use arbitrary SQL to import, filter, or join external or internal tables and have the results appear in the spreadsheet. DataSpread needs to reason about and reconcile differences in the notions of schema, addressing of cells and tuples, and the current "pane" (which exists in spreadsheets but not in traditional databases), and support data modifications at both the front-end and the back-end. Our demonstration will center on our first and early prototype of the DataSpread, and will give the attendees a sense for the enormous data exploration capabilities offered by unifying spreadsheets and databases.
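
    One reconciliation DataSpread must perform is mapping spreadsheet cell addresses onto relational coordinates that a back-end table can store. A minimal sketch; the cells(sheet_id, row, col, value) schema mentioned in the comment is an assumption, not DataSpread's actual layout.

```python
# Map a spreadsheet cell address onto (row, column) relational coordinates.
import re

def cell_to_coords(addr):
    """'B7' -> (row=7, col=2); columns follow Excel's base-26 letter scheme."""
    m = re.fullmatch(r"([A-Z]+)(\d+)", addr.upper())
    letters, row = m.group(1), int(m.group(2))
    col = 0
    for ch in letters:
        col = col * 26 + (ord(ch) - ord("A") + 1)
    return row, col

# A back-end table like cells(sheet_id, row, col, value) could then be keyed
# by the coordinates returned here.
print(cell_to_coords("B7"), cell_to_coords("AA3"))   # (7, 2) (3, 27)
```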

  20. DataSpread: Unifying Databases and Spreadsheets

    PubMed Central

    Bendre, Mangesh; Sun, Bofan; Zhang, Ding; Zhou, Xinyan; Chang, Kevin ChenChuan; Parameswaran, Aditya

    2015-01-01

    Spreadsheet software is often the tool of choice for ad-hoc tabular data management, processing, and visualization, especially on tiny data sets. On the other hand, relational database systems offer significant power, expressivity, and efficiency over spreadsheet software for data management, while lacking in the ease of use and ad-hoc analysis capabilities. We demonstrate DataSpread, a data exploration tool that holistically unifies databases and spreadsheets. It continues to offer a Microsoft Excel-based spreadsheet front-end, while in parallel managing all the data in a back-end database, specifically, PostgreSQL. DataSpread retains all the advantages of spreadsheets, including ease of use, ad-hoc analysis and visualization capabilities, and a schema-free nature, while also adding the advantages of traditional relational databases, such as scalability and the ability to use arbitrary SQL to import, filter, or join external or internal tables and have the results appear in the spreadsheet. DataSpread needs to reason about and reconcile differences in the notions of schema, addressing of cells and tuples, and the current “pane” (which exists in spreadsheets but not in traditional databases), and support data modifications at both the front-end and the back-end. Our demonstration will center on our first and early prototype of the DataSpread, and will give the attendees a sense for the enormous data exploration capabilities offered by unifying spreadsheets and databases. PMID:26900487

  21. Third-Party Software's Trust Quagmire.

    PubMed

    Voas, J; Hurlburt, G

    2015-12-01

    Current software development has trended toward the idea of integrating independent software sub-functions to create more complete software systems. Software sub-functions are often not homegrown; instead they are developed by unknown third-party organizations and reside in software marketplaces owned or controlled by others. Such software sub-functions carry plausible concerns in terms of quality, origin, functionality, security, and interoperability, to name a few. This article surveys key technical difficulties in confidently building systems from acquired software sub-functions by calling out the principal software supply chain actors.

  22. A new database sub-system for grain-size analysis

    NASA Astrophysics Data System (ADS)

    Suckow, Axel

    2013-04-01

    Detailed grain-size analyses of large depth profiles for palaeoclimate studies create large amounts of data. For instance, Novothny et al. (2011) presented a depth profile of grain-size analyses with 2 cm resolution and a total depth of more than 15 m, where each sample was measured with 5 repetitions on a Beckman Coulter LS13320 with 116 channels. This adds up to a total of more than four million numbers. Such amounts of data are not easily post-processed by spreadsheets or standard software; MS Access databases would also face serious performance problems. The poster describes a database sub-system dedicated to grain-size analyses. It expands the LabData database and laboratory management system published by Suckow and Dumke (2001). Compatibility with this very flexible database system makes it easy to import the grain-size data, and provides the overall infrastructure for storing geographic context and organizing content, such as grouping several samples into one set or project. It also allows easy export and direct plot generation of final data in MS Excel. The sub-system allows automated import of raw data from the Beckman Coulter LS13320 Laser Diffraction Particle Size Analyzer. During post-processing, MS Excel is used as a data display, but no number crunching is implemented in Excel. Raw grain-size spectra can be exported and checked as number, surface, and volume fractions, while single spectra can be locked for further post-processing. From the spectra, the usual statistical values (e.g. mean, median) can be computed, as well as fractions larger than a given grain size, smaller than a given grain size, fractions between any two grain sizes, or any ratio of such values. These deduced values can be easily exported into Excel for one or more depth profiles. Such reprocessing of large amounts of data also opens new display possibilities: depth profiles of grain-size data are normally displayed only with summary parameters such as clay content or sand content, which show only part of the available information at each depth; alternatively, full spectra are displayed at a single depth. The new software now allows the whole grain-size spectrum to be displayed at each depth in a three-dimensional plot. LabData and the grain-size sub-system are based on MS Access as front-end and MS SQL Server as back-end database systems. The SQL code for the data model, the SQL Server procedures and triggers, and the MS Access basic code for the front-end are public domain, published under the GNU GPL license agreement and available free of charge. References: Novothny, Á., Frechen, M., Horváth, E., Wacha, L., Rolf, C., 2011. Investigating the penultimate and last glacial cycles of the Süttő loess section (Hungary) using luminescence dating, high-resolution grain size, and magnetic susceptibility data. Quaternary International 234, 75-85. Suckow, A., Dumke, I., 2001. A database system for geochemical, isotope hydrological and geochronological laboratories. Radiocarbon 43, 325-337.
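
    The 'deduced values' described (mean, median, fractions finer or coarser than a cutoff) are simple weighted computations over a channelized spectrum. A minimal sketch with an illustrative 4-channel spectrum (real instruments report 116 channels):

```python
# Deduced values from a grain-size spectrum: weighted mean and fine fraction.
import numpy as np

sizes_um = np.array([2.0, 20.0, 63.0, 200.0])       # channel centers, micrometers
volume_pct = np.array([15.0, 35.0, 30.0, 20.0])     # volume fraction per channel

def mean_size(sizes, fractions):
    w = fractions / fractions.sum()
    return float(np.sum(w * sizes))

def fraction_finer_than(sizes, fractions, cutoff_um):
    return float(fractions[sizes < cutoff_um].sum() / fractions.sum())

print(mean_size(sizes_um, volume_pct))                   # weighted mean size
print(fraction_finer_than(sizes_um, volume_pct, 63.0))   # e.g. clay + silt share
```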

  23. Automatic thermographic image defect detection of composites

    NASA Astrophysics Data System (ADS)

    Luo, Bin; Liebenberg, Bjorn; Raymont, Jeff; Santospirito, SP

    2011-05-01

    Detecting defects, and especially reliably measuring defect sizes, are critical objectives in automatic NDT defect detection applications. In this work, the Sentence software is proposed for the analysis of pulsed thermography and near-IR images of composite materials. The Sentence software delivers an end-to-end, user-friendly platform for engineers to perform complete manual inspections, as well as tools that allow senior engineers to develop inspection templates and profiles, reducing the requisite thermographic skill level of the operating engineer. The software can also make detection fully independent of operator decisions through the automated "Beep on Defect" functionality. The end-to-end automatic inspection system includes sub-systems for defining a panel profile, generating an inspection plan, controlling a robot arm, and capturing thermographic images to detect defects. A statistical model has been built to analyze the entire image, evaluate grey-scale ranges, import sentencing criteria, and automatically detect impact damage defects. A full-width-half-maximum (FWHM) algorithm has been used to quantify flaw sizes. The identified defects are imported into the sentencing engine, which then sentences the inspection (automatically compares analysis results against acceptance criteria) by comparing the most significant defect or group of defects against the inspection standards.
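
    A full-width-half-maximum measurement finds the width of a peak at half its maximum height. The sketch below applies the idea to a synthetic 1-D profile; the interpolation details are one reasonable choice, not necessarily the Sentence software's.

```python
# FWHM of a 1-D profile, with linear interpolation at the half-max crossings.
import numpy as np

def fwhm(x, y):
    """Width of the region where y exceeds half of its peak."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # interpolate both crossings for sub-sample precision
    left = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    right = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return right - left

x = np.linspace(-10, 10, 401)
y = np.exp(-x**2 / (2 * 2.0**2))   # Gaussian flaw profile, sigma = 2
print(fwhm(x, y))                  # ~4.71, i.e. 2*sqrt(2*ln 2)*sigma
```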

  24. Interferometric direction finding with a metamaterial detector

    NASA Astrophysics Data System (ADS)

    Venkatesh, Suresh; Shrekenhamer, David; Xu, Wangren; Sonkusale, Sameer; Padilla, Willie; Schurig, David

    2013-12-01

    We present measurements and analysis demonstrating useful direction finding of sources in the S band (2-4 GHz) using a metamaterial detector. An augmented metamaterial absorber that supports magnitude and phase measurement of the incident electric field, within each unit cell, is described. The metamaterial is implemented in a commercial printed circuit board process with off-board back-end electronics. We also discuss on-board back-end implementation strategies. Direction-finding performance is analyzed for the fabricated metamaterial detector using simulated data and the standard algorithm, MUltiple SIgnal Classification (MUSIC). The performance of this complete system is characterized by its angular resolution as a function of radiation density at the detector. Sources with power outputs typical of mobile communication devices can be resolved at kilometer distances with sub-degree resolution and high frame rates.
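
    The MUSIC algorithm named above estimates directions by scanning steering vectors against the noise subspace of the array covariance. A minimal uniform-linear-array sketch follows; the element count, spacing, SNR, and source angles are illustrative, and the number of sources is assumed known.

```python
# Minimal MUSIC direction-finding sketch for a uniform linear array.
import numpy as np

rng = np.random.default_rng(1)
M, d, snapshots = 8, 0.5, 200          # elements, spacing (wavelengths), samples
angles_true = np.deg2rad([-20.0, 25.0])

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

A = steering(angles_true)                                  # (M, 2)
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
X = A @ S + 0.1 * (rng.standard_normal((M, snapshots))
                   + 1j * rng.standard_normal((M, snapshots)))

R = X @ X.conj().T / snapshots          # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)    # eigenvalues in ascending order
En = eigvecs[:, : M - 2]                # noise subspace (2 sources assumed known)

scan = np.deg2rad(np.linspace(-90, 90, 721))
p_music = 1.0 / np.sum(np.abs(En.conj().T @ steering(scan)) ** 2, axis=0)
peaks = [i for i in range(1, len(p_music) - 1)
         if p_music[i] > p_music[i - 1] and p_music[i] > p_music[i + 1]]
best = sorted(peaks, key=lambda i: p_music[i])[-2:]
print(sorted(np.rad2deg(scan[best])))   # expect values near -20 and 25 degrees
```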

  25. Acid Rain Data System: Progressive application of information technology for operation of a market-based environmental program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, D.A.

    1995-12-31

    Under the Acid Rain Program, by statute and regulation, affected utility units are allocated annual allowances. Each allowance permits a unit to emit one ton of SO₂ during or after a specified year. At year end, utilities must hold allowances equal to or greater than the cumulative SO₂ emissions throughout the year from their affected units. The program has been developing, on a staged basis, two major computer-based information systems: the Allowance Tracking System (ATS) for tracking creation, transfer, and ultimate use of allowances; and the Emissions Tracking System (ETS) for transmission, receipt, processing, and inventory of continuous emissions monitoring (CEM) data. The systems collectively form a logical Acid Rain Data System (ARDS). ARDS will be the largest information system ever used to operate and evaluate an environmental program. The paper describes the progressive software engineering approach the Acid Rain Program has been using to develop ARDS. Iterative software version releases, keyed to critical program deadlines, add the functionality required to support specific statutory and regulatory provisions. Each software release also incorporates continual improvements for efficiency, user-friendliness, and lower life-cycle costs. The program is migrating the independent ATS and ETS systems into a logically coordinated True-Up processing model, to support the end-of-year reconciliation for balancing allowance holdings against annual emissions and compliance plans for Phase 1 affected utility units. The paper provides specific examples and data to illustrate exciting applications of today's information technology in ARDS.
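
    The year-end reconciliation rule stated here (holdings must cover cumulative SO₂ emissions) is simple to express; a minimal sketch with made-up units and figures:

```python
# Year-end "True-Up" check: a unit complies when its allowance holdings
# cover its cumulative SO2 tons emitted. Data values are illustrative.
def true_up(holdings_by_unit, emissions_tons_by_unit):
    """Return unit -> (compliant, surplus_or_deficit_in_allowances)."""
    report = {}
    for unit, emitted in emissions_tons_by_unit.items():
        held = holdings_by_unit.get(unit, 0)
        report[unit] = (held >= emitted, held - emitted)
    return report

print(true_up({"unit_1": 5000, "unit_2": 1200},
              {"unit_1": 4750, "unit_2": 1300}))
```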

  26. LabData database sub-systems for post-processing and quality control of stable isotope and gas chromatography measurements

    NASA Astrophysics Data System (ADS)

    Suckow, A. O.

    2013-12-01

    Measurements need post-processing to obtain results that are comparable between laboratories. Raw data may need to be corrected for blank, memory, drift (change of reference values with time), linearity (dependence of reference on signal height) and normalized to international reference materials. Post-processing parameters need to be stored for traceability of results. State-of-the-art stable isotope correction schemes are available based on MS Excel (Geldern and Barth, 2012; Gröning, 2011) or MS Access (Coplen, 1998). These are specialized to stable isotope measurements only, often only to the post-processing of a specific run. Embedding of the algorithms into a multipurpose database system was missing. This is necessary to combine results of different tracers (3H, 3He, 2H, 18O, CFCs, SF6...) or geochronological tools (sediment dating, e.g. with 210Pb or 137Cs), to relate to attribute data (submitter, batch, project, geographical origin, depth in core, well information etc.) and for further interpretation tools (e.g. lumped parameter modelling). Database sub-systems to the LabData laboratory management system (Suckow and Dumke, 2001) are presented for stable isotopes and for gas chromatographic CFC and SF6 measurements. The sub-system for stable isotopes allows the following post-processing: 1. automated import from measurement software (Isodat, Picarro, LGR), 2. correction for sample-to-sample memory, linearity, drift, and renormalization of the raw data. The sub-system for gas chromatography covers: 1. storage of all raw data, 2. storage of peak integration parameters, 3. correction for blank, efficiency and linearity. The user interface allows interactive and graphical control of the post-processing and all corrections by export to and plotting in MS Excel, and is a valuable tool for quality control. The sub-databases are integrated into LabData, a multi-user client-server architecture using MS SQL Server as back-end and an MS Access front-end, installed in four laboratories to date. Attribute data storage (unique ID for each subsample, origin, project context etc.) and laboratory management features are included. Export routines to Excel (depth profiles, time series, all possible tracer-versus-tracer plots...) and modelling capabilities are add-ons. The source code is public domain and available under the GNU general public licence agreement (GNU-GPL). References: Coplen, T.B., 1998. A manual for a laboratory information management system (LIMS) for light stable isotopes. Version 7.0. USGS open file report 98-284. Geldern, R.v., Barth, J.A.C., 2012. Optimization of instrument setup and post-run corrections for oxygen and hydrogen stable isotope measurements of water by isotope ratio infrared spectroscopy (IRIS). Limnology and Oceanography: Methods 10, 1024-1036. Gröning, M., 2011. Improved water δ2H and δ18O calibration and calculation of measurement uncertainty using a simple software tool. Rapid Communications in Mass Spectrometry 25, 2711-2720. Suckow, A., Dumke, I., 2001. A database system for geochemical, isotope hydrological and geochronological laboratories. Radiocarbon 43, 325-337.
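
    The final normalization step listed above maps measured delta values onto the international reference scale; with two reference materials this is a two-point linear correction. A sketch with illustrative numbers (the reference values are stand-ins, not certified values):

```python
# Two-point normalization of measured delta values onto a reference scale
# (e.g. VSMOW2/SLAP2 for water isotopes). All numbers are illustrative.
def normalize(measured, ref1_measured, ref1_true, ref2_measured, ref2_true):
    slope = (ref2_true - ref1_true) / (ref2_measured - ref1_measured)
    return ref1_true + slope * (measured - ref1_measured)

# Example: delta-18O of a sample bracketed by two in-run standards.
print(normalize(-12.1, ref1_measured=0.4, ref1_true=0.0,
                ref2_measured=-54.8, ref2_true=-55.5))
```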

  27. The Live Access Server Scientific Product Generation Through Workflow Orchestration

    NASA Astrophysics Data System (ADS)

    Hankin, S.; Calahan, J.; Li, J.; Manke, A.; O'Brien, K.; Schweitzer, R.

    2006-12-01

    The Live Access Server (LAS) is a well-established Web application for display and analysis of geo-science data sets. The software, which can be downloaded and installed by anyone, gives data providers an easy way to establish services for their on-line data holdings, so their users can make plots; create and download data sub-sets; compare (difference) fields; and perform simple analyses. Now at version 7.0, LAS has been in operation since 1994. The current "Armstrong" release of LAS V7 consists of three components in a tiered architecture: user interface, workflow orchestration, and Web Services. The LAS user interface (UI) communicates with the LAS Product Server via an XML protocol embedded in an HTTP "get" URL. Libraries (APIs) have been developed in Java, JavaScript, and Perl that can readily generate this URL. As a result of this flexibility it is common to find LAS user interfaces of radically different character, tailored to the nature of specific datasets or the mindset of specific users. When a request is received by the LAS Product Server (LPS, the workflow orchestration component), business logic converts this request into a series of Web Service requests invoked via SOAP. These "back-end" Web services perform data access and generate products (visualizations, data subsets, analyses, etc.). LPS then packages these outputs into final products (typically HTML pages) via Jakarta Velocity templates for delivery to the end user. "Fine-grained" data access is performed by back-end services that may utilize JDBC for database access; the OPeNDAP "DAPPER" protocol; or (in principle) the OGC WFS protocol. Back-end visualization services are commonly legacy science applications wrapped in Java or Python (or Perl) classes and deployed as Web Services accessible via SOAP. Ferret is the default visualization application used by LAS, though other applications such as Matlab, CDAT, and GrADS can also be used. Other back-end services may include generation of Google Earth layers using KML; generation of maps via WMS or ArcIMS protocols; and data manipulation with Unix utilities.

  28. An open-source data storage and visualization back end for experimental data.

    PubMed

    Nielsen, Kenneth; Andersen, Thomas; Jensen, Robert; Nielsen, Jane H; Chorkendorff, Ib

    2014-04-01

    In this article, a flexible free and open-source software system for data logging and presentation will be described. The system is highly modular and adaptable and can be used in any laboratory in which continuous and/or ad hoc measurements require centralized storage. A presentation component for the data back end has furthermore been written that enables live visualization of data on any device capable of displaying Web pages. The system consists of three parts: data-logging clients, a data server, and a data presentation Web site. The logging of data from independent clients leads to high resilience to equipment failure, whereas the central storage of data dramatically eases backup and data exchange. The visualization front end allows direct monitoring of acquired data to see live progress of long-duration experiments. This enables the user to alter experimental conditions based on these data and to interfere with the experiment if needed. The data stored consist both of specific measurements and of continuously logged system parameters. The latter is crucial to a variety of automation and surveillance features, and three cases of such features are described: monitoring system health, getting status of long-duration experiments, and implementation of instant alarms in the event of failure.

  29. Research interface on a programmable ultrasound scanner.

    PubMed

    Shamdasani, Vijay; Bae, Unmin; Sikdar, Siddhartha; Yoo, Yang Mo; Karadayi, Kerem; Managuli, Ravi; Kim, Yongmin

    2008-07-01

    Commercial ultrasound machines in the past did not provide the ultrasound researchers access to raw ultrasound data. Lack of this ability has impeded evaluation and clinical testing of novel ultrasound algorithms and applications. Recently, we developed a flexible ultrasound back-end where all the processing for the conventional ultrasound modes, such as B, M, color flow and spectral Doppler, was performed in software. The back-end has been incorporated into a commercial ultrasound machine, the Hitachi HiVision 5500. The goal of this work is to develop an ultrasound research interface on the back-end for acquiring raw ultrasound data from the machine. The research interface has been designed as a software module on the ultrasound back-end. To increase the amount of raw ultrasound data that can be spooled in the limited memory available on the back-end, we have developed a method that can losslessly compress the ultrasound data in real time. The raw ultrasound data could be obtained in any conventional ultrasound mode, including duplex and triplex modes. Furthermore, use of the research interface does not decrease the frame rate or otherwise affect the clinical usability of the machine. The lossless compression of the ultrasound data in real time can increase the amount of data spooled by approximately 2.3 times, thus allowing more than 6s of raw ultrasound data to be acquired in all the modes. The interface has been used not only for early testing of new ideas with in vitro data from phantoms, but also for acquiring in vivo data for fine-tuning ultrasound applications and conducting clinical studies. We present several examples of how newer ultrasound applications, such as elastography, vibration imaging and 3D imaging, have benefited from this research interface. Since the research interface is entirely implemented in software, it can be deployed on existing HiVision 5500 ultrasound machines and may be easily upgraded in the future. The developed research interface can aid researchers in the rapid testing and clinical evaluation of new ultrasound algorithms and applications. Additionally, we believe that our approach would be applicable to designing research interfaces on other ultrasound machines.

  30. LWAs computational platform for e-consultation using mobile devices: cases from developing nations.

    PubMed

    Olajubu, Emmanuel Ajayi; Odukoya, Oluwatoyin Helen; Akinboro, Solomon Adegbenro

    2014-01-01

    Mobile devices have been improving the standard of living in developing nations by providing timely and accurate information anywhere and anytime through wireless media. The shortage of experts in medical fields is evident throughout the whole world, but is more pronounced in developing nations. Thus, this study proposes a telemedicine platform for the vulnerable areas of developing nations. The vulnerable areas are the interior regions with little or no medical facilities, whose dwellers are very susceptible to sickness and disease. The framework uses mobile devices that can run LightWeight Agents (LWAs) to send consultation requests from the vulnerable interiors to a remote medical expert in an urban city. The feedback is conveyed to the requester through the same medium. The system architecture, which contains AgenRoller, LWAs, the front-end (mobile devices) and the back-end (the medical server), is presented. The algorithm for the software component of the architecture (AgenRoller) is also presented. The system is modeled as an M/M/1/c queuing system and simulated using SimEvents from the MATLAB Simulink environment. The simulation results presented show the average queue length, the number of entities in the queue, and the number of entity departures from the system. Together these characterize the rate of information processing in the system. A full-scale development of this system with proper implementation will help extend the few medical facilities available in the urban cities of developing nations to the interior, thereby reducing the number of casualties in the vulnerable areas of the developing world, especially in Sub-Saharan Africa.
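
    The M/M/1/c model mentioned (Poisson arrivals, exponential service, one server, capacity c) can be simulated with a small event-driven loop, as sketched below; the rates, capacity, and horizon are made-up parameters, and this is a stand-in for the SimEvents model, not a port of it.

```python
# Event-driven simulation of an M/M/1/c queue.
import random

def mm1c(lam=0.8, mu=1.0, c=10, horizon=100_000, seed=7):
    rng = random.Random(seed)
    t, n = 0.0, 0                               # current time, entities in system
    blocked = served = arrivals = 0
    next_arrival = rng.expovariate(lam)
    next_departure = float("inf")
    while t < horizon:
        if next_arrival <= next_departure:      # arrival event
            t, arrivals = next_arrival, arrivals + 1
            if n < c:
                n += 1
                if n == 1:
                    next_departure = t + rng.expovariate(mu)
            else:
                blocked += 1                    # full system: request is lost
            next_arrival = t + rng.expovariate(lam)
        else:                                   # departure event
            t, n, served = next_departure, n - 1, served + 1
            next_departure = t + rng.expovariate(mu) if n else float("inf")
    return {"arrivals": arrivals, "served": served, "blocked": blocked}

print(mm1c())
```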

  31. Development of management information system for land in mine area based on MapInfo

    NASA Astrophysics Data System (ADS)

    Wang, Shi-Dong; Liu, Chuang-Hua; Wang, Xin-Chuang; Pan, Yan-Yu

    2008-10-01

    MapInfo is currently a popular GIS package. This paper introduces the characteristics of MapInfo and the secondary GIS development methods it offers, namely those based on MapBasic, OLE Automation, and the MapX control. Taking the development of a land management information system for a mining area as an example, the paper discusses the method of developing GIS applications based on MapX and describes the development of the system in detail, including the development environment, overall design, design and implementation of every function module, and simple applications of the system. The system uses MapX 5.0 and Visual Basic 6.0 as the development platform, takes SQL Server 2005 as the back-end database, and adopts MATLAB 6.5 for back-end numerical computation. On the basis of an integrated design, the system provides eight modules: start-up, layer control, spatial query, spatial analysis, data editing, application models, document management, and results output. The system can be used in mining areas for cadastral management, land-use structure optimization, land reclamation, land evaluation, analysis and forecasting of land and environmental disruption, thematic mapping, and so on.

  32. Real-Time Multimission Event Notification System for Mars Relay

    NASA Technical Reports Server (NTRS)

    Wallick, Michael N.; Allard, Daniel A.; Gladden, Roy E.; Wang, Paul; Hy, Franklin H.

    2013-01-01

    As the Mars Relay Network is in constant flux (missions and teams going through their daily workflow), it is imperative that users are aware of such state changes. For example, a change by an orbiter team can affect operations on a lander team. This software provides an ambient view of the real-time status of the Mars network. The Mars Relay Operations Service (MaROS) comprises a number of tools to coordinate, plan, and visualize various aspects of the Mars Relay Network. As part of MaROS, a feature set was developed that operates on several levels of the software architecture. These levels include a Web-based user interface, a back-end "ReSTlet" built in Java, and databases that store the data as it is received from the network. The result is a real-time event notification and management system, so mission teams can track and act upon events on a moment-by-moment basis. This software retrieves events from MaROS and displays them to the end user. Updates happen in real time, i.e., messages are pushed to the user while logged into the system, and queued for later viewing when the user is not online. The software does not do away with the email notifications, but augments them with in-line notifications. Further, this software expands the set of events that can generate a notification, and allows user-generated notifications. Existing software sends a smaller subset of mission-generated notifications via email. A common complaint of users was that the system-generated e-mails often "get lost" among other incoming e-mail. This software displays an expanded set of notifications (including user-generated ones) in-line in the program. Separating notifications in this way can improve a user's workflow.

  33. An End-to-End System to Enable Quick, Easy and Inexpensive Deployment of Hydrometeorological Stations

    NASA Astrophysics Data System (ADS)

    Celicourt, P.; Piasecki, M.

    2014-12-01

    The high cost of hydro-meteorological data acquisition, communication, and publication systems, along with limited qualified human resources, is considered the main reason why hydro-meteorological data collection remains a challenge, especially in developing countries. Despite significant advances in sensor network technologies, which in the last two decades gave birth to open hardware and software and to low-cost (less than $50), low-power (on the order of a few milliwatts) sensor platforms, sensor and sensor network deployment remains a labor-intensive, time-consuming, cumbersome, and thus expensive task. These factors give rise to the need for an affordable, simple-to-deploy, scalable, and self-organizing end-to-end (from sensor to publication) system suitable for deployment in such countries. The envisioned system will consist of a few Sensed-And-Programmed Arduino-based sensor nodes with low-cost sensors measuring parameters relevant to hydrological processes, and a Raspberry Pi micro-computer hosting the in-the-field back-end data management. The latter comprises the Python/Django model of the CUAHSI Observations Data Model (ODM), namely DjangODM, backed by a PostgreSQL database server. We are also developing a Python-based data processing script which will be paired with the data autoloading capability of Django to populate the DjangODM database with the incoming data. To publish the data, we will use WOFpy (WaterOneFlow Web Services in Python), developed by the Texas Water Development Board for 'Water Data for Texas', which can produce WaterML web services from a variety of back-end database installations such as SQLite, MySQL, and PostgreSQL. A step further would be the development of an appealing online visualization tool using Python statistics and analytics tools (SciPy, NumPy, Pandas) showing the spatial distribution of variables across an entire watershed as a time-variant layer on top of a basemap.

  14. The ASTRI SST-2M telescope prototype for the Cherenkov Telescope Array: camera DAQ software architecture

    NASA Astrophysics Data System (ADS)

    Conforti, Vito; Trifoglio, Massimo; Bulgarelli, Andrea; Gianotti, Fulvio; Fioretti, Valentina; Tacchini, Alessandro; Zoli, Andrea; Malaguti, Giuseppe; Capalbi, Milvia; Catalano, Osvaldo

    2014-07-01

    ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a Flagship Project financed by the Italian Ministry of Education, University and Research, and led by INAF, the Italian National Institute of Astrophysics. Within this framework, INAF is currently developing an end-to-end prototype of a Small Size dual-mirror Telescope; in a second phase, the ASTRI project foresees the installation of the first elements of the array at the CTA southern site, a mini-array of seven telescopes. The ASTRI Camera DAQ Software handles Camera data acquisition, storage, and display during Camera development as well as during commissioning and operations on the ASTRI SST-2M telescope prototype, which will operate at the INAF observing station located at Serra La Nave on Mount Etna (Sicily). The Camera DAQ configuration and operations will be sequenced either through local operator commands or through remote commands received from the Instrument Controller System that commands and controls the Camera. The Camera DAQ software will acquire data packets through a direct one-way socket connection with the Camera Back End Electronics and will store the data, in near real time, in both raw and FITS formats. The DAQ Quick Look component will allow the operator to display the Camera data packets in near real time. We are developing the DAQ software following an iterative and incremental model in order to maximize software reuse and to implement a system that is easily adaptable to changes. This contribution presents the Camera DAQ Software architecture with particular emphasis on its potential reuse for the ASTRI/CTA mini-array.
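
    A minimal sketch of the acquisition loop described above: read fixed-size packets from the one-way socket and store each in both raw and FITS form (here via astropy). The packet size, sample type, and BEE address are assumptions for illustration.

        import socket

        import numpy as np
        from astropy.io import fits

        PACKET_BYTES = 4096  # assumed fixed packet size

        def recv_packet(sock):
            """Read exactly one packet from the Back End Electronics link."""
            buf = b""
            while len(buf) < PACKET_BYTES:
                chunk = sock.recv(PACKET_BYTES - len(buf))
                if not chunk:
                    raise ConnectionError("BEE link closed")
                buf += chunk
            return buf

        def store(packet, seq):
            with open(f"packet_{seq:06d}.raw", "wb") as f:  # raw copy
                f.write(packet)
            data = np.frombuffer(packet, dtype=np.uint16)   # assumed 16-bit samples
            fits.PrimaryHDU(data).writeto(f"packet_{seq:06d}.fits", overwrite=True)

        if __name__ == "__main__":
            sock = socket.create_connection(("192.168.1.100", 5000))  # assumed BEE address
            for seq in range(10):
                store(recv_packet(sock), seq)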

  15. Effectiveness of back-to-back testing

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.; Eckhardt, David E.; Caglayan, Alper; Kelly, John P. J.

    1987-01-01

    Three models of back-to-back testing processes are described. Two models treat the case in which there is no intercomponent failure dependence; the third describes the more realistic case in which the failure probabilities of the functionally equivalent components are correlated. The theory indicates that back-to-back testing can, under the right conditions, provide a considerable gain in software reliability. The models are used to analyze data obtained in a fault-tolerant software experiment, and it is shown that the expected gain is indeed achieved, and exceeded, provided the intercomponent failure dependence is sufficiently small. Even with relatively high correlation, however, the use of several functionally equivalent components coupled with back-to-back testing may provide a considerable reliability gain. The implication of this finding is that multiversion software development is a feasible and cost-effective approach to providing highly reliable software components intended for fault-tolerant software systems, on condition that special attention is directed at early detection and elimination of correlated faults.
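
    A minimal sketch of the back-to-back testing idea under the no-dependence assumption: run two independently written, functionally equivalent versions on the same random inputs and count disagreements, each of which signals a detected fault. The two toy versions here are stand-ins for real multiversion components.

        import random

        def version_a(x):
            return sorted(x)                      # reference implementation

        def version_b(x):
            return sorted(x, reverse=True)[::-1]  # independently written equivalent

        def back_to_back(n_cases=1000, seed=0):
            """Count disagreements between the two versions on random inputs."""
            rng = random.Random(seed)
            disagreements = 0
            for _ in range(n_cases):
                data = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
                if version_a(data) != version_b(data):
                    disagreements += 1
            return disagreements

        if __name__ == "__main__":
            print("disagreements:", back_to_back())  # 0 here; faulty versions differ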

  16. Sensor Open System Architecture (SOSA) evolution for collaborative standards development

    NASA Astrophysics Data System (ADS)

    Collier, Charles Patrick; Lipkin, Ilya; Davidson, Steven A.; Baldwin, Rusty; Orlovsky, Michael C.; Ibrahim, Tim

    2017-04-01

    The Sensor Open System Architecture (SOSA) is a C4ISR-focused technical and economic collaborative effort among the Air Force, Navy, Army, the Department of Defense (DoD), industry, and other governmental agencies to develop (and incorporate) a technical Open Systems Architecture standard that maximizes C4ISR sub-system, system, and platform affordability, re-configurability, and hardware/software/firmware re-use. The SOSA effort will create an operational and technical framework for the integration of disparate payloads into C4ISR systems, with a focus on the development of a modular decomposition (defining functions and behaviors) and associated key interfaces (physical and logical) for a common multi-purpose architecture for radar, EO/IR, SIGINT, EW, and communications. SOSA addresses hardware, software, and mechanical/electrical interfaces. The modular decomposition will produce a set of reusable components, interfaces, and sub-systems that engender reusable capabilities. This, in effect, creates a realistic and affordable ecosystem enabling mission effectiveness through systematic re-use of all available re-composed hardware, software, and electrical/mechanical base components and interfaces. To this end, SOSA will leverage existing standards as much as possible and evolve the SOSA architecture through modification, reuse, and enhancement to achieve C4ISR goals. This paper presents accomplishments over the first year of the SOSA initiative.

  17. New instrumentation for the 1.2m Southern Millimeter Wave Telescope (SMWT)

    NASA Astrophysics Data System (ADS)

    Vasquez, P.; Astudillo, P.; Rodriguez, R.; Monasterio, D.; Reyes, N.; Finger, R.; Mena, F. P.; Bronfman, L.

    2016-07-01

    Here we describe the status of the upgrade program being carried out to modernize the 1.2 m Southern Millimeter Wave Telescope. The telescope was built in the early 1980s to complete the first Galactic survey of molecular clouds in the CO(1-0) line. After fruitful operation at CTIO, the telescope was relocated to the Cerro Calán Observatory of the Universidad de Chile. The new site has an altitude of 850 m and allows observations in the millimeter range throughout the year. The telescope was upgraded with a new building to house operations, a new control system, and new receiver and back-end technologies. The new front end is a sideband-separating receiver based on a HEMT amplifier and sub-harmonic mixers; it is cooled with liquid nitrogen to reduce its noise temperature. The back end is a digital spectrometer based on the Reconfigurable Open Architecture Computing Hardware (ROACH). The new spectrometer includes IF hybridization capabilities to avoid analog hybrids and, therefore, improve the sideband rejection ratio of the receiver.

  18. International Space Station alpha remote manipulator system workstation controls test report

    NASA Astrophysics Data System (ADS)

    Ehrenstrom, William A.; Swaney, Colin; Forrester, Patrick

    1994-05-01

    Previous development testing for the space station remote manipulator system workstation controls determined the need for hardware controls for the emergency stop, brakes on/off, and some camera functions. This report documents the results of an evaluation to further determine control implementation requirements, requested by the Canadian Space Agency (CSA), to close outstanding review item discrepancies. This test was conducted at the Johnson Space Center's Space Station Mockup and Trainer Facility in Houston, Texas, with nine NASA astronauts and one CSA astronaut as operators. This test evaluated camera iris and focus, back-up drive, latching end effector release, and autosequence controls using several types of hardware and software implementations. Recommendations resulting from the testing included providing guarded hardware buttons to prevent accidental actuation, providing autosequence controls and back-up drive controls on a dedicated hardware control panel, and that 'latch on/latch off' or on-screen software controls not be considered. Generally, the operators preferred hardware controls, although other control implementations were acceptable. The results of this evaluation will be used along with further testing to define specific requirements for the workstation design.

  19. International Space Station alpha remote manipulator system workstation controls test report

    NASA Technical Reports Server (NTRS)

    Ehrenstrom, William A.; Swaney, Colin; Forrester, Patrick

    1994-01-01

    Previous development testing for the space station remote manipulator system workstation controls determined the need for hardware controls for the emergency stop, brakes on/off, and some camera functions. This report documents the results of an evaluation to further determine control implementation requirements, requested by the Canadian Space Agency (CSA), to close outstanding review item discrepancies. This test was conducted at the Johnson Space Center's Space Station Mockup and Trainer Facility in Houston, Texas, with nine NASA astronauts and one CSA astronaut as operators. This test evaluated camera iris and focus, back-up drive, latching end effector release, and autosequence controls using several types of hardware and software implementations. Recommendations resulting from the testing included providing guarded hardware buttons to prevent accidental actuation, providing autosequence controls and back-up drive controls on a dedicated hardware control panel, and that 'latch on/latch off' or on-screen software controls not be considered. Generally, the operators preferred hardware controls, although other control implementations were acceptable. The results of this evaluation will be used along with further testing to define specific requirements for the workstation design.

  20. CLIMLAB: a Python-based software toolkit for interactive, process-oriented climate modeling

    NASA Astrophysics Data System (ADS)

    Rose, B. E. J.

    2015-12-01

    Global climate is a complex emergent property of the rich interactions between simpler components of the climate system. We build scientific understanding of this system by breaking it down into component process models (e.g., radiation, large-scale dynamics, boundary layer turbulence), understanding each component, and putting them back together. Hands-on experience and the freedom to tinker with climate models (whether simple or complex) are invaluable for building physical understanding. CLIMLAB is an open-ended software engine for interactive, process-oriented climate modeling. With CLIMLAB you can interactively mix and match model components, or combine simpler process models into a more comprehensive model. It was created primarily to support classroom activities, using hands-on modeling to teach fundamentals of climate science at both undergraduate and graduate levels. CLIMLAB is written in Python and ties in with the rich ecosystem of open-source scientific Python tools for numerics and graphics; the IPython notebook format provides an elegant medium for distributing interactive example code. I will give an overview of the current capabilities of CLIMLAB, the curriculum we have developed thus far, and plans for the future. Using CLIMLAB requires some basic Python coding skills. We consider this an educational asset, as we are targeting upper-level undergraduates and Python is an increasingly important language in STEM fields. However, CLIMLAB is also well suited to be deployed as a computational back-end for a graphical gaming environment based on earth-system modeling.
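
    A short example of the mix-and-match workflow, following CLIMLAB's documented energy-balance-model interface (treat the specific names as assumptions about the package version):

        import climlab

        ebm = climlab.EBM()     # diffusive energy balance model with default processes
        print(ebm)              # lists the component subprocesses (radiation, etc.)
        ebm.integrate_years(5)  # step the coupled model forward in time
        print(ebm.Ts.mean())    # global-mean surface temperature after five years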

  1. A multitasking, multisinked, multiprocessor data acquisition front end

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, R.; Au, R.; Molen, A.V.

    1989-10-01

    The authors have developed a generalized data acquisition front end system which is based on MC68020 processors running a commercial real time kernel (rhoSOS), and implemented primarily in a high level language (C). This system has been attached to the back end on-line computing system at NSCL via our high performance ETHERNET protocol. Data may be simultaneously sent to any number of back end systems. Fixed fraction sampling along links to back end computing is also supported. A nonprocedural program generator simplifies the development of experiment specific code.
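
    One simple way to realize the fixed-fraction sampling described above is to give each back-end link a configured fraction of the event stream, as in the sketch below; the sink names and fractions are illustrative assumptions.

        import random

        SINKS = {"tape": 1.0, "monitor": 0.1, "scaler": 0.01}  # assumed fractions
        _rng = random.Random(0)

        def sinks_for(event):
            """Return the back-end sinks that should receive this event."""
            return [s for s, frac in SINKS.items() if _rng.random() < frac]

        if __name__ == "__main__":
            counts = {s: 0 for s in SINKS}
            for i in range(10000):
                for s in sinks_for(i):
                    counts[s] += 1
            print(counts)  # roughly 10000, 1000, and 100 events, respectively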

  2. Engineering of Data Acquiring Mobile Software and Sustainable End-User Applications

    NASA Technical Reports Server (NTRS)

    Smith, Benton T.

    2013-01-01

    The criteria by which data-acquiring software and its supporting infrastructure should be designed should take the following two points into account: the reusability and organization of stored online and remote data and content, and an assessment of whether abandoning a platform-optimized design in favor of a multi-platform solution significantly reduces the performance of an end-user application. Furthermore, in-house applications that control or process instrument-acquired data for end users should be designed with a communication and control interface such that the application's modules can be reused as plug-in modular components in larger software systems. These criteria are applied to two loosely related projects: a mobile application and a website containing live and simulated data. For the intelligent-devices mobile application AIDM, the end-user interface has a platform- and data-type-optimized design, while the database and back-end applications store this information in an organized manner and restrict access to authorized end-user applications. Finally, the content for the website was derived from a database so that the content can be included, and remain uniform, across all applications accessing it. With these projects ongoing, I have concluded from my research that the methods presented are feasible for both projects, and that a multi-platform design for the mobile application only marginally reduces its performance.

  3. World Wide Web Metaphors for Search Mission Data

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey S.; Wallick, Michael N.; Joswig, Joseph C.; Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Abramyan, Lucy; Crockett, Thomas M.; Shams, Khawaja S.; Fox, Jason M.; hide

    2010-01-01

    A software program that searches and browses mission data emulates a Web browser, containing standard metaphors for Web browsing. By taking advantage of back-end URLs, users may save and share search states. Also, since a Web interface is familiar to users, training time is reduced. Familiar back and forward buttons move through a local search history. A refresh/reload button regenerates a query and loads in any new data. URLs can be constructed to save search results. Adding context to the current search is also handled through a familiar Web metaphor. The query is constructed by clicking on hyperlinks that represent new components of the search query. The selection of a link appears to the user as a page change; the choice of links changes to represent the updated search, and the results are filtered by the new criteria. Selecting a navigation link changes the current query and also the URL that is associated with it. The back button can be used to return to the previous search state. This software is part of the MSLICE release, which was written in Java. It will run on any current Windows, Macintosh, or Linux system.
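
    The URL-as-search-state metaphor can be sketched in a few lines: serialize the complete query into a URL query string so that saving or sharing the URL restores the search. The base URL and parameter names below are hypothetical, not MSLICE's actual scheme.

        from urllib.parse import parse_qs, urlencode, urlparse

        def search_url(base, **state):
            """Serialize the full search state into a shareable URL."""
            return f"{base}?{urlencode(state, doseq=True)}"

        def restore_state(url):
            """Reconstruct the search state from a saved URL."""
            return parse_qs(urlparse(url).query)

        if __name__ == "__main__":
            url = search_url("https://mslice.example/search",
                             instrument="MAHLI", sol=[100, 101], text="drill")
            print(url)                 # bookmarkable, shareable search state
            print(restore_state(url))  # back/reload reconstructs the query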

  4. An integrated dexterous robotic testbed for space applications

    NASA Technical Reports Server (NTRS)

    Li, Larry C.; Nguyen, Hai; Sauer, Edward

    1992-01-01

    An integrated dexterous robotic system was developed as a testbed to evaluate various robotics technologies for advanced space applications. The system configuration consisted of a Utah/MIT Dexterous Hand, a PUMA 562 arm, a stereo vision system, and a multiprocessing computer control system. In addition to these major subsystems, a proximity sensing system was integrated with the Utah/MIT Hand to provide capability for non-contact sensing of a nearby object. A high-speed fiber-optic link was used to transmit digitized proximity sensor signals back to the multiprocessing control system. The hardware system was designed to satisfy the requirements for both teleoperated and autonomous operations. The software system was designed to exploit parallel processing capability, pursue functional modularity, incorporate artificial intelligence for robot control, allow high-level symbolic robot commands, maximize reusable code, minimize compilation requirements, and provide an interactive application development and debugging environment for the end users. An overview of the system hardware and software configurations is presented, and the implementation of subsystem functions is discussed.

  5. High-resolution physical and biogeochemical variability from a shallow back reef on Ofu, American Samoa: an end-member perspective

    NASA Astrophysics Data System (ADS)

    Koweek, David A.; Dunbar, Robert B.; Monismith, Stephen G.; Mucciarone, David A.; Woodson, C. Brock; Samuel, Lianna

    2015-09-01

    Shallow back reefs commonly experience greater thermal and biogeochemical variability owing to a combination of coral community metabolism, environmental forcing, flow regime, and water depth. We present results from a high-resolution (sub-hourly to sub-daily) hydrodynamic and biogeochemical study, along with a coupled long-term (several months) hydrodynamic study, conducted on the back reefs of Ofu, American Samoa. During the high-resolution study, mean temperature was 29.0 °C with maximum temperatures near 32 °C. Dissolved oxygen concentrations spanned 32-178 % saturation, and pHT spanned the range from 7.80 to 8.39 with diel ranges reaching 0.58 units. Empirical cumulative distribution functions reveal that pHT was between 8.0 and 8.2 during only 30 % of the observational period, with approximately even distribution of the remaining 70 % of the time between pHT values less than 8.0 and greater than 8.2. Thermal and biogeochemical variability in the back reefs is partially controlled by tidal modulation of wave-driven flow, which isolates the back reefs at low tide and brings offshore water into the back reefs at high tide. The ratio of net community calcification to net community production was 0.15 ± 0.01, indicating that metabolism on the back reef was dominated by primary production and respiration. Similar to other back reef systems, the back reefs of Ofu are carbon sinks during the daytime. Shallow back reefs like those in Ofu may provide insights for how coral communities respond to extreme temperatures and acidification and are deserving of continued attention.

  6. Status report of the SRT radiotelescope control software: the DISCOS project

    NASA Astrophysics Data System (ADS)

    Orlati, A.; Bartolini, M.; Buttu, M.; Fara, A.; Migoni, C.; Poppi, S.; Righini, S.

    2016-08-01

    The Sardinia Radio Telescope (SRT) is a 64-m fully-steerable radio telescope. It is provided with an active surface to correct for gravitational deformations, allowing observations from 300 MHz to 100 GHz. At present, three receivers are available: a coaxial LP-band receiver (305-410 MHz and 1.5-1.8 GHz), a C-band receiver (5.7-7.7 GHz) and a 7-feed K-band receiver (18-26.5 GHz). Several back-ends are also available in order to perform the different data acquisition and analysis procedures requested by scientific projects. The design and development of the SRT control software started in 2004, and now belongs to a wider project called DISCOS (Development of the Italian Single-dish COntrol System), which provides a common infrastructure to the three Italian radio telescopes (the Medicina, Noto and SRT dishes). DISCOS is based on the ALMA Common Software (ACS) framework, and currently consists of more than 500k lines of code. It is organized in a common core and three specific product lines, one for each telescope. Recent developments, carried out after the conclusion of the technical commissioning of the instrument (October 2013), consisted in the addition of several new features in many parts of the observing pipeline, spanning from motion control to the digital back-ends for data acquisition and data formatting; we briefly describe such improvements. More importantly, in the last two years we have supported the astronomical validation of the SRT radio telescope, leading to the opening of the first public call for proposals in late 2015. During this period, while assisting both the engineering and the scientific staff, we made heavy use of the control software and were able to test all of its features: in this process we received our first feedback from the users and could verify how the system performed in a real-life scenario, drawing the first conclusions about overall system stability and performance. We examine how the system behaves in terms of network load and system load, how it reacts to failures and errors, and which components and services seem to be the most critical parts of our architecture, showing how the ACS framework impacts these aspects. Moreover, exposure to public utilization has highlighted the major flaws in our development and software management process, which had to be tuned and improved in order to achieve faster release cycles in response to user feedback, and safer deployment operations. In this regard we show how the introduction of testing practices, along with continuous integration, helped us meet higher quality standards. Having identified the most critical aspects of our software, we conclude by showing our intentions for the future development of DISCOS, both in terms of software features and software infrastructure.

  7. A Failing Grade for the German End-of-Life Vehicles Take-Back System

    ERIC Educational Resources Information Center

    Nakajima, Nina; Vanderburg, Willem H.

    2005-01-01

    The German end-of-life vehicle take-back system is described and analyzed in terms of its impact on the environment and the car companies involved. It is concluded that although this system is often cited as an example of a successful take-back scheme, it is not one that maximizes the value recovered from end-of-life vehicles. As a result,…

  8. Storage system software solutions for high-end user needs

    NASA Technical Reports Server (NTRS)

    Hogan, Carole B.

    1992-01-01

    Today's high-end storage user is one that requires rapid access to a reliable terabyte-capacity storage system running in a distributed environment. This paper discusses conventional storage system software and concludes that this software, designed for other purposes, cannot meet high-end storage requirements. The paper also reviews the philosophy and design of evolving storage system software. It concludes that this new software, designed with high-end requirements in mind, provides the potential for solving not only the storage needs of today but those of the foreseeable future as well.

  9. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Tai, K. C.; Vouk, Mladen A.

    1987-01-01

    The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued and formulation of new models was initiated.
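
    For reference, the voting mechanism whose reliability the experiment evaluates can be illustrated with a minimal majority voter over N functionally equivalent versions (a sketch, not the experiment's actual voter):

        from collections import Counter

        def vote(outputs):
            """Return the majority output, or None when no majority exists."""
            value, count = Counter(outputs).most_common(1)[0]
            return value if count > len(outputs) / 2 else None

        if __name__ == "__main__":
            print(vote([1, 1, 2]))  # 1 -- two of three versions agree
            print(vote([1, 2, 3]))  # None -- no majority; flag for recovery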

  10. VLBI2010 Receiver Back End Comparison

    NASA Technical Reports Server (NTRS)

    Petrachenko, Bill

    2013-01-01

    VLBI2010 requires a receiver back-end to convert analog RF signals from the receiver front end into channelized digital data streams to be recorded or transmitted electronically. The back end functions are typically performed in two steps: conversion of analog RF inputs into IF bands (see Table 2), and conversion of IF bands into channelized digital data streams (see Tables 1a, 1b and 1c). The latter IF systems are now completely digital and generically referred to as digital back ends (DBEs). In Table 2 two RF conversion systems are compared, and in Tables 1a, 1b, and 1c nine DBE systems are compared. Since DBE designs are advancing rapidly, the data in these tables are only guaranteed to be current near the update date of this document.

  11. TELICS—A Telescope Instrument Control System for Small/Medium Sized Astronomical Observatories

    NASA Astrophysics Data System (ADS)

    Srivastava, Mudit K.; Ramaprakash, A. N.; Burse, Mahesh P.; Chordia, Pravin A.; Chillal, Kalpesh S.; Mestry, Vilas B.; Das, Hillol K.; Kohok, Abhay A.

    2009-10-01

    For any modern astronomical observatory, it is essential to have an efficient interface between the telescope and its back-end instruments. However, for small and medium-sized observatories, this requirement is often limited by tight financial constraints. Therefore a simple yet versatile and low-cost control system is required for such observatories to minimize cost and effort. Here we report the development of a modern, multipurpose instrument control system TELICS (Telescope Instrument Control System) to integrate the controls of various instruments and devices mounted on the telescope. TELICS consists of an embedded hardware unit known as a common control unit (CCU) in combination with Linux-based data acquisition and user interface. The hardware of the CCU is built around the ATmega 128 microcontroller (Atmel Corp.) and is designed with a backplane, master-slave architecture. A Qt-based graphical user interface (GUI) has been developed and the back-end application software is based on C/C++. TELICS provides feedback mechanisms that give the operator good visibility and a quick-look display of the status and modes of instruments as well as data. TELICS has been used for regular science observations since 2008 March on the 2 m, f/10 IUCAA Telescope located at Girawali in Pune, India.

  12. Next Generation Satellite Communications: Automated Doppler Shift Compensation of PSK-31 Via Software-Defined Radio

    DTIC Science & Technology

    2014-05-09

    Interfaces Configuration – Wired Network Connections before Editing. Move the cursor to the end of the line that ends with "eth0 inet dhcp" and type...X". This will delete text one character back from the cursor. Delete the word "dhcp". Once this is done, type "a" to begin inserting text and add

  13. On I/O Virtualization Management

    NASA Astrophysics Data System (ADS)

    Danciu, Vitalian A.; Metzker, Martin G.

    The quick adoption of virtualization technology in general and the advent of the Cloud business model entail new requirements on the structure and the configuration of back-end I/O systems. Several approaches to virtualization of I/O links are being introduced, which aim at implementing a more flexible I/O channel configuration without compromising performance. While previously the management of I/O devices could be limited to basic technical requirements (e.g., the establishment and termination of fixed-point links), the additional flexibility carries in its wake additional management requirements on the representation and control of I/O sub-systems.

  14. 22 CFR 121.8 - End-items, components, accessories, attachments, parts, firmware, software, and systems.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ..., parts, firmware, software, and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software, and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...

  15. 22 CFR 121.8 - End-items, components, accessories, attachments, parts, firmware, software and systems.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ..., parts, firmware, software and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...

  16. 22 CFR 121.8 - End-items, components, accessories, attachments, parts, firmware, software and systems.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ..., parts, firmware, software and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...

  17. 22 CFR 121.8 - End-items, components, accessories, attachments, parts, firmware, software and systems.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., parts, firmware, software and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...

  18. 22 CFR 121.8 - End-items, components, accessories, attachments, parts, firmware, software and systems.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., parts, firmware, software and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...

  19. Applying Trustworthy Computing to End-to-End Electronic Voting

    ERIC Educational Resources Information Center

    Fink, Russell A.

    2010-01-01

    "End-to-End (E2E)" voting systems provide cryptographic proof that the voter's intention is captured, cast, and tallied correctly. While E2E systems guarantee integrity independent of software, most E2E systems rely on software to provide confidentiality, availability, authentication, and access control; thus, end-to-end integrity is not…

  20. Speech to Text Translation for Malay Language

    NASA Astrophysics Data System (ADS)

    Al-khulaidi, Rami Ali; Akmeliawati, Rini

    2017-11-01

    A speech recognition system is a front-end and back-end process that receives an audio signal uttered by a speaker and converts it into a text transcription. Speech systems can be used in several fields, including therapeutic technology, education, social robotics, and computer entertainment. Our system is proposed for control tasks, where speed of performance and response matters because the system should integrate with other control platforms, such as voice-controlled robots. This creates a need for flexible platforms that can be easily edited to fit the functionality of the surroundings, unlike software programs that require recording audio and multiple training passes for every entry, such as MATLAB and Phoenix. In this paper, a speech recognition system for the Malay language is implemented using Microsoft Visual Studio C#. Ninety Malay phrases were tested by ten speakers of both genders in different contexts. The results show that the overall accuracy (calculated from the confusion matrix) is a satisfactory 92.69%.
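
    The reported metric can be made concrete with a small sketch that computes overall accuracy from a confusion matrix (correct recognitions lie on the diagonal); the 2x2 matrix below is a toy example, not the paper's data.

        import numpy as np

        def overall_accuracy(confusion):
            """Fraction of cases on the diagonal of the confusion matrix."""
            confusion = np.asarray(confusion)
            return confusion.trace() / confusion.sum()

        if __name__ == "__main__":
            cm = [[88, 2],   # rows: spoken phrase; columns: recognized phrase
                  [5, 85]]
            print(f"{overall_accuracy(cm):.2%}")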

  1. Distributed Operations Planning

    NASA Technical Reports Server (NTRS)

    Fox, Jason; Norris, Jeffrey; Powell, Mark; Rabe, Kenneth; Shams, Khawaja

    2007-01-01

    Maestro software provides a secure and distributed mission planning system for long-term missions in general, and the Mars Exploration Rover (MER) mission specifically. Maestro, the successor to the Science Activity Planner, has a heavy emphasis on portability and distributed operations, and requires no data replication or expensive hardware, instead relying on a set of services running on JPL institutional servers. Maestro works on most current computers with network connections, including laptops. When browsing downlink data from a spacecraft, Maestro functions much like a Web browser. After authenticating the user, it connects to a database server to query an index of data products. It then contacts a Web server to download and display the actual data products. The software also includes collaboration support based upon a highly reliable messaging system; modifications made to targets in one instance are quickly and securely transmitted to other instances of Maestro. The back end developed for Maestro could benefit many future missions by reducing the cost of a centralized operations system architecture.
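
    The browsing flow described above (authenticate, query the index database, then fetch products from the Web server) might look like the following sketch; the URLs, token handling, and response fields are assumptions for illustration, not Maestro's actual interfaces.

        import requests

        INDEX_URL = "https://ops.example.jpl.nasa.gov/index"  # hypothetical server

        def browse(sol, token):
            """Query the product index for one sol, then download each product."""
            hdrs = {"Authorization": f"Bearer {token}"}
            index = requests.get(INDEX_URL, params={"sol": sol},
                                 headers=hdrs, timeout=10).json()
            for product in index["products"]:  # assumed response shape
                data = requests.get(product["url"], headers=hdrs, timeout=30).content
                with open(product["name"], "wb") as f:
                    f.write(data)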

  2. Tele-healthcare for diabetes management: A low cost automatic approach.

    PubMed

    Benaissa, M; Malik, B; Kanakis, A; Wright, N P

    2012-01-01

    In this paper, a telemedicine system for managing diabetic patients with better care is presented. The system is an end-to-end solution that integrates a front end (patient unit) with a back-end web server. A key feature of the system is its very low-cost, automated approach. The front end is capable of reading glucose measurements from any glucose meter and sending them automatically, via existing networks, to the back-end server. The back end is designed and developed using an n-tier web client architecture based on the model-view-controller design pattern and open-source technology, a cost-effective solution. The back end helps the health-care provider with data analysis, data visualization, and decision support, and allows them to send feedback and therapeutic advice to patients from anywhere using a browser-enabled device. This system will be evaluated during trials to be conducted in collaboration with a local hospital in a phased manner.

  3. Design of an AdvancedTCA board management controller (IPMC)

    NASA Astrophysics Data System (ADS)

    Mendez, J.; Bobillier, V.; Haas, S.; Joos, M.; Mico, S.; Vasey, F.

    2017-03-01

    The AdvancedTCA (ATCA) standard has been selected as the hardware platform for the upgrade of the back-end electronics of the CMS and ATLAS experiments at the Large Hadron Collider (LHC). In this context, the Electronic Systems for Experiments group at CERN is running a project to evaluate, specify, design, and support xTCA equipment. As part of this project, an Intelligent Platform Management Controller (IPMC) for ATCA blades, based on a commercial solution, has been designed to be used on existing and future ATCA blades. This paper reports on the status of this project, presenting the hardware and software developments.

  4. Agent planning in AgScala

    NASA Astrophysics Data System (ADS)

    Tošić, Saša; Mitrović, Dejan; Ivanović, Mirjana

    2013-10-01

    Agent-oriented programming languages are designed to simplify the development of software agents, especially those that exhibit complex, intelligent behavior. This paper presents recent improvements of AgScala, an agent-oriented programming language based on Scala. AgScala includes declarative constructs for managing beliefs, actions and goals of intelligent agents. Combined with object-oriented and functional programming paradigms offered by Scala, it aims to be an efficient framework for developing both purely reactive, and more complex, deliberate agents. Instead of the Prolog back-end used initially, the new version of AgScala relies on Agent Planning Package, a more advanced system for automated planning and reasoning.

  5. A Flexible and Configurable Architecture for Automatic Control Remote Laboratories

    ERIC Educational Resources Information Center

    Kalúz, Martin; García-Zubía, Javier; Fikar, Miroslav; Cirka, Luboš

    2015-01-01

    In this paper, we propose a novel approach in hardware and software architecture design for implementation of remote laboratories for automatic control. In our contribution, we show the solution with flexible connectivity at back-end, providing features of multipurpose usage with different types of experimental devices, and fully configurable…

  6. Transform-Based Channel-Data Compression to Improve the Performance of a Real-Time GPU-Based Software Beamformer.

    PubMed

    Lok, U-Wai; Li, Pai-Chi

    2016-03-01

    Graphics processing unit (GPU)-based software beamforming has advantages over hardware-based beamforming of easier programmability and a faster design cycle, since complicated imaging algorithms can be efficiently programmed and modified. However, the need for a high data rate when transferring ultrasound radio-frequency (RF) data from the hardware front end to the software back end limits the real-time performance. Data compression methods can be applied to the hardware front end to mitigate the data transfer issue. Nevertheless, most decompression processes cannot be performed efficiently on a GPU, thus becoming another bottleneck of the real-time imaging. Moreover, lossless (or nearly lossless) compression is desirable to avoid image quality degradation. In a previous study, we proposed a real-time lossless compression-decompression algorithm and demonstrated that it can reduce the overall processing time because the reduction in data transfer time is greater than the computation time required for compression/decompression. This paper analyzes the lossless compression method in order to understand the factors limiting the compression efficiency. Based on the analytical results, a nearly lossless compression is proposed to further enhance the compression efficiency. The proposed method comprises a transformation coding method involving modified lossless compression that aims at suppressing amplitude data. The simulation results indicate that the compression ratio (CR) of the proposed approach can be enhanced from nearly 1.8 to 2.5, thus allowing a higher data acquisition rate at the front end. The spatial and contrast resolutions with and without compression were almost identical, and the process of decompressing the data of a single frame on a GPU took only several milliseconds. Moreover, the proposed method has been implemented in a 64-channel system that we built in-house to demonstrate the feasibility of the proposed algorithm in a real system. It was found that channel data from a 64-channel system can be transferred using the standard USB 3.0 interface in most practical imaging applications.
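
    To make the compression-ratio figure of merit concrete, the sketch below decorrelates channel-like samples with a first-difference transform and applies a generic lossless coder, then reports CR as raw size over compressed size. This is a stand-in under stated assumptions, not the paper's algorithm.

        import zlib

        import numpy as np

        def compression_ratio(samples):
            """CR = raw bytes / compressed bytes after a first-difference transform."""
            raw = samples.astype(np.int16).tobytes()
            diff = np.diff(samples, prepend=samples[:1]).astype(np.int16).tobytes()
            return len(raw) / len(zlib.compress(diff))

        if __name__ == "__main__":
            t = np.arange(4096)
            rf = 1000 * np.sin(0.05 * t) + np.random.default_rng(0).normal(0, 5, t.size)
            print(f"CR = {compression_ratio(rf):.2f}")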

  7. Cross-platform validation and analysis environment for particle physics

    NASA Astrophysics Data System (ADS)

    Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.

    2017-11-01

    A multi-platform validation and analysis framework for public Monte Carlo simulation for high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for online validation of Monte Carlo event samples through a web interface.

  8. Superconducting cable cooling system by helium gas at two pressures

    DOEpatents

    Dean, John W.

    1977-01-01

    Thermally contacting, oppositely streaming cryogenic fluid streams flow in the same enclosure in a closed cycle that changes the fluid from a cool, high-pressure helium gas to a cooler, reduced-pressure helium gas in an expander, so that the go and return legs operate at different temperature ranges and pressures. The two legs are in thermal contact with each other and with a longitudinally extending superconducting transmission line enclosed in the same cable enclosure, which insulates the line from the ambient at a temperature T1. The fluid first circulates from a refrigerator at one end of the line as a cool gas over the temperature range T2 to T3 in the go leg; it then passes through an expander at the other end of the line, where it becomes a cooler gas at reduced pressure and reduced temperature T4; finally, the cooler gas circulates back to the refrigerator in the return leg over the temperature range T4 to T5, in thermal contact with the gas in the go leg and in the same enclosure with it, for compression into a higher-pressure gas at T2 in a closed cycle, where T2 > T3 and T5 > T4. The fluid leaves the enclosure in the go leg as a gas at its coldest point in that leg, and the temperature distribution is such that the line temperature decreases along its length from the refrigerator owing to the cooling from the gas in the return leg.

  9. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology.

    PubMed

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E; Troein, Carl; Millar, Andrew J; Goryanin, Igor; Gilmore, Stephen

    2013-03-01

    Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI's use of standard data formats. All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials.

  10. A UNIX-based real-time data acquisition system for microprobe analysis using an advanced X11 window toolkit

    NASA Astrophysics Data System (ADS)

    Kramer, J. L. A. M.; Ullings, A. H.; Vis, R. D.

    1993-05-01

    A real-time data acquisition system for microprobe analysis has been developed at the Free University of Amsterdam. The system is composed of two parts: a real-time front end and a back-end monitoring system. The front end consists of a VMEbus-based system which reads out a CAMAC crate. The back end is implemented on a Sun workstation running the UNIX operating system. This separation allows the integration of a minimal, and consequently very fast, real-time executive with the sophisticated possibilities of advanced UNIX workstations.

  11. Sighten Final Technical Report DEEE0006690 Deploying an integrated and comprehensive solar financing software platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Leary, Conlan

    Over the project, Sighten built a comprehensive software-as-a-service (SaaS) platform to automate and streamline the residential solar financing workflow. Before the project period, significant time and money were spent by companies on front-end tools related to system design and proposal creation, but comparatively few resources were available to support the many back-end calculations and data management processes that underpin third-party financing. Without a tool like Sighten, the solar financing process involved passing information from the homeowner prospect into separate tools for system design and financing, and later into reporting tools including Microsoft Excel, CRM software, in-house software, outside software, and offline, manual processes. Passing data between tools and attempting to connect disparate systems results in inefficiency and inaccuracy for the industry. Sighten was built to consolidate all financial and solar-related calculations in a single software platform. It significantly improves the accuracy of these calculations and exposes sophisticated new analysis tools, resulting in a rigorous, efficient, and cost-effective toolset for scaling residential solar. Widely deploying a platform like Sighten's significantly and immediately impacts the residential solar space in several important ways: 1) standardizing and improving the quality of all quantitative calculations involved in the residential financing process, most notably project finance, system production, and reporting calculations; 2) representing a true step change in reporting and analysis capabilities by maintaining more accurate data and exposing sophisticated tools for simulation, tranching, and financial reporting, among others, to all stakeholders in the space; and 3) allowing a broader group of developers/installers/finance companies to access the capital markets by providing an out-of-the-box toolset that handles the execution of running investor capital through a rooftop solar financing program. Standardizing and improving all calculations, improving data quality, and exposing previously unavailable analysis tools affect investment in the residential space in several important ways: 1) lowering the cost of capital for existing capital providers by mitigating uncertainty and de-risking the solar asset class; 2) attracting new, lower-cost investors to the solar asset class as reporting and data quality come to resemble the standards of more mature asset classes; and 3) increasing the prevalence of liquidity options for investors through back leverage, securitization, or secondary sale by providing the tools necessary for lenders, ratings agencies, etc., to properly understand a portfolio of residential solar assets. During the project period, Sighten successfully built and scaled a commercially ready tool for the residential solar market. The software solution built by Sighten has been deployed with the key target customer segments identified in the award deliverables: solar installers, solar developers/channel managers, and solar financiers, including lenders. Each of these segments greatly benefits from the availability of the Sighten toolset.

  12. Back-illuminate fiber system research for multi-object fiber spectroscopic telescope

    NASA Astrophysics Data System (ADS)

    Zhou, Zengxiang; Liu, Zhigang; Hu, Hongzhuan; Wang, Jianping; Zhai, Chao; Chu, Jiaru

    2016-07-01

    Using parallel-controlled fiber positioners as the spectroscopic receiver is an efficient observation scheme for spectral surveys; it has been used in LAMOST and has been proposed for CFHT and the rebuilt Mayall telescope. During telescope observations, the position of each fiber strongly influences how efficiently light is coupled into the fiber and delivered to the spectrograph. When the fibers are back-illuminated at the spectrograph end, they emit light at the positioner end, so the CCD cameras can capture images of the fiber tips across the focal plane, calculate precise positions by the light-centroid method, and feed the results back to the control system. After many years of research, back-illuminated fiber measurement has proven the best method for acquiring precise fiber positions. A back-illumination system was developed and integrated with the low-resolution spectrograph instruments of LAMOST. It provides uniform light output to the fibers, meets the requirements of the CCD camera measurement, and is controlled by the high-level observation system, which can shut it down during telescope observation. This paper introduces the design of the back-illumination system and tests of different light sources. After optimization, the system's performance is comparable to that of an integrating sphere and meets the requirements of fiber position measurement.

  13. User Centric Job Monitoring - a redesign and novel approach in the STAR experiment

    NASA Astrophysics Data System (ADS)

    Arkhipkin, D.; Lauret, J.; Zulkarneeva, Y.

    2014-06-01

    User Centric Monitoring (UCM) has been a long-awaited feature in STAR, whereby programs, workflows, and system "events" can be logged, broadcast, and later analyzed. UCM collects and filters available job monitoring information from various resources and presents it in a user-centric rather than an administrative-centric view. The first attempt at and implementation of a UCM approach was made in STAR in 2004 using a log4cxx plug-in back-end; it then evolved with an attempt to push toward a scalable database back-end (2006) and finally a Web-Service approach (2010, CSW4DB SBIR). The latter proved incomplete and did not address the evolving needs of the experiment, where streamlined messages for online (data acquisition) purposes and continuous support for data mining and event analysis need to coexist in a seamless, unified approach; the code also proved hard to maintain. This paper presents the next evolutionary step of the UCM toolkit: a redesign and redirection of our latest attempt, acknowledging and integrating recent technologies in a simpler, maintainable, and yet scalable manner. The extended version of the job logging package is built upon a three-tier approach based on Task, Job, and Event, and features a Web-Service-based logging API, a responsive AJAX-powered user interface, and a database back-end relying on MongoDB, which is uniquely suited to STAR's needs. In addition, we present details of the integration of this logging package with the STAR offline and online software frameworks. Leveraging the reported experience of the ATLAS and CMS experiments with the ESPER engine, we discuss and show how such an approach has been implemented in STAR for meta-data event-triggering stream processing and filtering. An ESPER-based solution fits well into the online data acquisition system, where many systems are monitored.
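
    A hedged sketch of the three-tier Task/Job/Event logging with a MongoDB back-end is shown below; the collection and field names are assumptions for illustration, not the actual STAR schema.

        from datetime import datetime, timezone

        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")  # hypothetical server
        db = client["ucm"]

        def log_event(task_id, job_id, level, message):
            """Append one event to its job; jobs roll up to a task."""
            db.events.insert_one({
                "task": task_id,
                "job": job_id,
                "level": level,
                "message": message,
                "ts": datetime.now(timezone.utc),
            })

        log_event("prod2014", "job-0042", "INFO", "reconstruction started")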

  14. Front-End/Gateway Software: Availability and Usefulness.

    ERIC Educational Resources Information Center

    Kesselman, Martin

    1985-01-01

    Reviews features of front-end software packages (interface between user and online system)--database selection, search strategy development, saving and downloading, hardware and software requirements, training and documentation, online systems and database accession, and costs--and discusses gateway services (user searches through intermediary…

  15. Cross-platform validation and analysis environment for particle physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.

    A multi-platform validation and analysis framework for public Monte Carlo simulation for high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for online validation of Monte Carlo event samples through a web interface.

  16. Bigdata Driven Cloud Security: A Survey

    NASA Astrophysics Data System (ADS)

    Raja, K.; Hanifa, Sabibullah Mohamed

    2017-08-01

    Cloud Computing (CC) is a fast-growing technology for performing massive-scale and complex computing; it eliminates the need to maintain expensive computing hardware, dedicated space, and software. Recently, massive growth has been observed in the scale of data, or big data, generated through cloud computing. CC consists of a front end, which includes the users' computers and the software required to access the cloud network, and a back end, which consists of the various computers, servers, and database systems that create the cloud. The traditional cloud ecosystem delivers its services through SaaS (Software-as-a-Service: end users utilize outsourced software), PaaS (Platform-as-a-Service: a platform is provided), IaaS (Infrastructure-as-a-Service: the physical environment is outsourced), and DaaS (Database-as-a-Service: data can be housed within a cloud), and has become a powerful and popular architecture. Many challenges and issues arise in security, and threats are the most vital barrier for the cloud computing environment. The main barrier to the adoption of CC in health care relates to data security: when placing and transmitting data using public networks, cyber attacks in any form are anticipated in CC. Hence, cloud service users need to understand the risk of data breaches and the choice of service delivery model during deployment. This survey covers CC security issues in depth (including data security in health care) so that researchers can develop robust security application models using Big Data (BD) on CC that can be created and deployed easily, since BD evaluation is driven by fast-growing cloud-based applications developed using virtualized technologies. In this purview, MapReduce [12] is a good example of big data processing in a cloud environment, and a model for cloud providers.

  17. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    DOE PAGES

    Claus, R.

    2015-10-23

    The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfiguration Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf through software waveform feature extraction to output 32 S-links. The full system was installed in Sept. 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.

  18. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    NASA Astrophysics Data System (ADS)

    Claus, R.; ATLAS Collaboration

    2016-07-01

    The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfiguration Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf through software waveform feature extraction to output 32 S-links. The full system was installed in Sept. 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.

  19. Integration of Photo-Patternable Low-κ Material into Advanced Cu Back-End-Of-The-Line

    NASA Astrophysics Data System (ADS)

    Lin, Qinghuang; Nelson, Alshakim; Chen, Shyng-Tsong; Brock, Philip; Cohen, Stephan A.; Davis, Blake; Kaplan, Richard; Kwong, Ranee; Liniger, Eric; Neumayer, Debra; Patel, Jyotica; Shobha, Hosadurga; Sooriyakumaran, Ratnam; Purushothaman, Sampath; Miller, Robert; Spooner, Terry; Wisnieff, Robert

    2010-05-01

    We report herein the demonstration of a simple, low-cost Cu back-end-of-the-line (BEOL) dual-damascene integration using a novel photo-patternable low-κ dielectric material concept that dramatically reduces Cu BEOL integration complexity. This κ=2.7 photo-patternable low-κ material is based on the SiCOH-based material platform and has sub-200 nm resolution capability with 248 nm optical lithography. Cu/photo-patternable low-κ dual-damascene integration at 45 nm node BEOL fatwire levels has been demonstrated with very high electrical yields using the current manufacturing infrastructure. The photo-patternable low-κ concept is, therefore, a promising technology for highly efficient semiconductor Cu BEOL manufacturing.

  20. Experimental demonstration of software defined data center optical networks with Tbps end-to-end tunability

    NASA Astrophysics Data System (ADS)

    Zhao, Yongli; Zhang, Jie; Ji, Yuefeng; Li, Hui; Wang, Huitao; Ge, Chao

    2015-10-01

    The end-to-end tunability is important to provision elastic channel for the burst traffic of data center optical networks. Then, how to complete the end-to-end tunability based on elastic optical networks? Software defined networking (SDN) based end-to-end tunability solution is proposed for software defined data center optical networks, and the protocol extension and implementation procedure are designed accordingly. For the first time, the flexible grid all optical networks with Tbps end-to-end tunable transport and switch system have been online demonstrated for data center interconnection, which are controlled by OpenDayLight (ODL) based controller. The performance of the end-to-end tunable transport and switch system has been evaluated with wavelength number tuning, bit rate tuning, and transmit power tuning procedure.
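    As a sketch of the elastic-channel arithmetic behind end-to-end bit rate tuning, the snippet below computes how many flexible-grid frequency slots a requested rate needs, assuming the ITU-T 12.5 GHz slot granularity and an illustrative spectral-efficiency table of our own; it is not the authors' protocol extension.

      import math

      # Illustrative spectral efficiencies (bit/s/Hz) per modulation format.
      SPECTRAL_EFFICIENCY = {"QPSK": 2.0, "16QAM": 4.0, "64QAM": 6.0}
      SLOT_GHZ = 12.5  # ITU-T flexible-grid slot granularity

      def slots_needed(bit_rate_gbps, modulation):
          # Occupied bandwidth = bit rate / spectral efficiency, rounded up to slots.
          bandwidth_ghz = bit_rate_gbps / SPECTRAL_EFFICIENCY[modulation]
          return math.ceil(bandwidth_ghz / SLOT_GHZ)

      print(slots_needed(1000, "16QAM"))  # 1 Tbps over 16QAM -> 20 slots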

  1. Configurable memory system and method for providing atomic counting operations in a memory device

    DOEpatents

    Bellofatto, Ralph E.; Gara, Alan G.; Giampapa, Mark E.; Ohmacht, Martin

    2010-09-14

    A memory system and method for providing atomic memory-based counter operations to operating systems and applications that make most efficient use of counter-backing memory and virtual and physical address space, while simplifying operating system memory management, and enabling the counter-backing memory to be used for purposes other than counter-backing storage when desired. The encoding and address decoding enabled by the invention provides all this functionality through a combination of software and hardware.
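    The patent implements counter operations in the memory device itself; as a software analogy only, the sketch below emulates atomic fetch-and-add semantics with a lock so that concurrent increments lose no updates (all names here are ours, not the patent's).

      import threading

      class AtomicCounter:
          # Software stand-in for the fetch-and-add the hardware performs in memory.
          def __init__(self):
              self._value = 0
              self._lock = threading.Lock()

          def fetch_and_add(self, delta=1):
              # The read-modify-write must appear indivisible to all threads.
              with self._lock:
                  old = self._value
                  self._value += delta
                  return old

      counter = AtomicCounter()
      threads = [threading.Thread(target=lambda: [counter.fetch_and_add() for _ in range(1000)])
                 for _ in range(4)]
      for t in threads: t.start()
      for t in threads: t.join()
      assert counter.fetch_and_add(0) == 4000  # no lost updates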

  2. Real-Time Payload Control and Monitoring on the World Wide Web

    NASA Technical Reports Server (NTRS)

    Sun, Charles; Windrem, May; Givens, John J. (Technical Monitor)

    1998-01-01

    World Wide Web (W3) technologies such as the Hypertext Transfer Protocol (HTTP) and the Java object-oriented programming environment offer a powerful, yet relatively inexpensive, framework for distributed application software development. This paper describes the design of a real-time payload control and monitoring system that was developed with W3 technologies at NASA Ames Research Center. Based on Java Development Toolkit (JDK) 1.1, the system uses an event-driven "publish and subscribe" approach to inter-process communication and graphical user-interface construction. A C Language Integrated Production System (CLIPS) compatible inference engine provides the back-end intelligent data processing capability, while Oracle Relational Database Management System (RDBMS) provides the data management function. Preliminary evaluation shows acceptable performance for some classes of payloads, with Java's portability and multimedia support identified as the most significant benefit.
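    A minimal sketch of the event-driven "publish and subscribe" approach the paper describes, written in Python rather than the paper's Java, with a hypothetical telemetry topic name:

      from collections import defaultdict

      class EventBus:
          # Minimal broker: callbacks register per topic, publishers fan out to them.
          def __init__(self):
              self._subscribers = defaultdict(list)

          def subscribe(self, topic, callback):
              self._subscribers[topic].append(callback)

          def publish(self, topic, payload):
              for callback in self._subscribers[topic]:
                  callback(payload)

      bus = EventBus()
      # A display widget subscribes to telemetry updates (topic name hypothetical).
      bus.subscribe("payload/temperature", lambda v: print(f"gauge -> {v} K"))
      bus.publish("payload/temperature", 293.4)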

  3. Building a Snow Data Management System using Open Source Software (and IDL)

    NASA Astrophysics Data System (ADS)

    Goodale, C. E.; Mattmann, C. A.; Ramirez, P.; Hart, A. F.; Painter, T.; Zimdars, P. A.; Bryant, A.; Brodzik, M.; Skiles, M.; Seidel, F. C.; Rittger, K. E.

    2012-12-01

    At NASA's Jet Propulsion Laboratory, free and open source software is used every day to support a wide range of projects, from planetary to climate to research and development. In this abstract I will discuss the key role that open source software has played in building a robust science data processing pipeline for snow hydrology research, and how the system is also able to leverage programs written in IDL, making JPL's Snow Data System a hybrid of open source and proprietary software. Main points:
    - The design of the Snow Data System (illustrating how the collection of sub-systems is combined to create a complete data processing pipeline)
    - The challenges of moving from a single algorithm on a laptop to running hundreds of parallel algorithms on a cluster of servers (lessons learned), including code changes, software license related challenges, and storage requirements
    - System evolution (from data archiving, to data processing, to data on a map, to near-real-time products and maps)
    - Road map for the next 6 months (including how easily we re-used the SnowDS code base to support the Airborne Snow Observatory mission)
    Software in use and software licenses:
    - IDL - used for pre- and post-processing of data; licensed under a proprietary software license held by Exelis.
    - Apache OODT - used for data management and workflow processing; licensed under the Apache License Version 2.
    - GDAL - geospatial data processing library, currently used for data re-projection; licensed under the X/MIT license.
    - GeoServer - WMS server; licensed under the General Public License Version 2.0.
    - Leaflet.js - JavaScript web mapping library; licensed under the Berkeley Software Distribution License.
    - Python - glue code and miscellaneous data processing support; licensed under the Python Software Foundation License.
    - Perl - script wrapper for running the SCAG algorithm; licensed under the General Public License Version 3.
    - PHP - front-end web application programming; licensed under the PHP License Version 3.01.
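    To show how a mostly open source pipeline can still leverage IDL, here is a sketch of shelling out to an IDL routine from Python using IDL's batch-mode -e switch; the procedure name and file path are hypothetical, and this is our illustration rather than the Snow Data System's actual wrapper.

      import subprocess

      def run_idl_procedure(procedure, *args):
          # Build a single IDL statement, e.g.  scag_process, '/path/to/granule'
          command = f"{procedure}, {', '.join(repr(a) for a in args)}"
          # 'idl -e' executes one IDL statement and exits.
          result = subprocess.run(["idl", "-e", command],
                                  capture_output=True, text=True, check=True)
          return result.stdout

      # e.g. run a (hypothetical) SCAG snow-cover procedure on one granule:
      # run_idl_procedure("scag_process", "/data/granules/MOD09GA.hdf")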

  4. Asteroid Discovery and Characterization with the Large Synoptic Survey Telescope

    NASA Astrophysics Data System (ADS)

    Jones, R. Lynne; Jurić, Mario; Ivezić, Željko

    2016-01-01

    The Large Synoptic Survey Telescope (LSST) will be a ground-based, optical, all-sky, rapid cadence survey project with tremendous potential for discovering and characterizing asteroids. With LSST's large 6.5m diameter primary mirror, a wide-field 9.6 square degree, 3.2 Gigapixel camera, and rapid observational cadence, LSST will discover more than 5 million asteroids over its ten year survey lifetime. With a single visit limiting magnitude of 24.5 in r band, LSST will be able to detect asteroids in the Main Belt down to sub-kilometer sizes. The current strawman for the LSST survey strategy is to obtain two visits (each `visit' being a pair of back-to-back 15s exposures) per field, separated by about 30 minutes, covering the entire visible sky every 3-4 days throughout the observing season, for ten years. The catalogs generated by LSST will increase the known number of small bodies in the Solar System by a factor of 10-100 across all populations. The median number of observations for Main Belt asteroids will be on the order of 200-300, with Near Earth Objects receiving a median of 90 observations. These observations will be spread among ugrizy bandpasses, providing photometric colors and allowing sparse lightcurve inversion to determine rotation periods, spin axes, and shape information. These catalogs will be created using automated detection software, the LSST Moving Object Processing System (MOPS), that will take advantage of the carefully characterized LSST optical system, cosmetically clean camera, and recent improvements in difference imaging. Tests with the prototype MOPS software indicate that linking detections (and thus `discovery') will be possible at LSST depths with our working model for the survey strategy, but evaluation of MOPS and improvements in the survey strategy will continue. All data products and software created by LSST will be publicly available.

  5. GSFC Technology Thrusts and Partnership Opportunities

    NASA Technical Reports Server (NTRS)

    Le Moigne, Jacqueline

    2010-01-01

    This slide presentation reviews the technology thrusts and the opportunities to partner in developing software in support of the technological advances at the Goddard Space Flight Center (GSFC). There are thrusts in the development of end-to-end software systems for mission data systems in the areas of flight software, ground data systems, flight dynamics systems, and science data systems. The required technical expertise is reviewed, and the supported missions are shown for the various areas given.

  6. Software Management System

    NASA Technical Reports Server (NTRS)

    1994-01-01

    A software management system, originally developed for Goddard Space Flight Center (GSFC) by Century Computing, Inc. has evolved from a menu and command oriented system to a state-of-the art user interface development system supporting high resolution graphics workstations. Transportable Applications Environment (TAE) was initially distributed through COSMIC and backed by a TAE support office at GSFC. In 1993, Century Computing assumed the support and distribution functions and began marketing TAE Plus, the system's latest version. The software is easy to use and does not require programming experience.

  7. A real-time coherent dedispersion pipeline for the giant metrewave radio telescope

    NASA Astrophysics Data System (ADS)

    De, Kishalay; Gupta, Yashwant

    2016-02-01

    A fully real-time coherent dedispersion system has been developed for the pulsar back-end at the Giant Metrewave Radio Telescope (GMRT). The dedispersion pipeline uses the single phased array voltage beam produced by the existing GMRT software back-end (GSB) to produce coherently dedispersed intensity output in real time, for the currently operational bandwidths of 16 MHz and 32 MHz. Provision has also been made to coherently dedisperse voltage beam data from observations recorded on disk. We discuss the design and implementation of the real-time coherent dedispersion system, describing the steps carried out to optimise the performance of the pipeline. Presently functioning on an Intel Xeon X5550 CPU equipped with a NVIDIA Tesla C2075 GPU, the pipeline allows dispersion free, high time resolution data to be obtained in real-time. We illustrate the significant improvements over the existing incoherent dedispersion system at the GMRT, and present some preliminary results obtained from studies of pulsars using this system, demonstrating its potential as a useful tool for low frequency pulsar observations. We describe the salient features of our implementation, comparing it with other recently developed real-time coherent dedispersion systems. This implementation of a real-time coherent dedispersion pipeline for a large, low frequency array instrument like the GMRT, will enable long-term observing programs using coherent dedispersion to be carried out routinely at the observatory. We also outline the possible improvements for such a pipeline, including prospects for the upgraded GMRT which will have bandwidths about ten times larger than at present.
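    The core of coherent dedispersion is a frequency-domain deconvolution of the interstellar medium's transfer function. Below is a minimal NumPy sketch assuming the standard chirp of Hankins & Rickett applied to complex baseband input; the sign convention and all parameter values are illustrative, and this is not the GSB pipeline's code (which runs on a GPU).

      import numpy as np

      K_DM = 4.148808e9  # dispersion constant scaled for frequencies in MHz

      def coherently_dedisperse(voltages, dm, f0_mhz, bw_mhz):
          # Multiply the voltage spectrum by the inverse dispersion chirp.
          n = voltages.size
          spectrum = np.fft.fft(voltages)
          f = np.fft.fftfreq(n, d=1.0 / bw_mhz)   # offsets from band centre, MHz
          phase = 2.0 * np.pi * K_DM * dm * f**2 / (f0_mhz**2 * (f0_mhz + f))
          chirp = np.exp(1j * phase)               # sign depends on FFT convention
          return np.fft.ifft(spectrum * chirp)

      # e.g. a 32 MHz band centred at 400 MHz, DM = 26.76 pc/cm^3 (PSR B0329+54):
      dedispersed = coherently_dedisperse(np.random.randn(4096) + 0j, 26.76, 400.0, 32.0)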

  8. Requirements Document for Development of a Livermore Tomography Tools Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seetho, I. M.

    In this document, we outline an exercise performed at LLNL to evaluate the user interface deficits of an LLNL-developed CT reconstruction software package, Livermore Tomography Tools (LTT). We observe that a difficult-to-use command line interface and the lack of support functions combine to create a bottleneck in the CT reconstruction process when input parameters to key functions are not well known. Through the exercise of systems engineering best practices, we generate key performance parameters for an LTT interface refresh, and specify a combination of back-end (“test-mode” functions) and front-end (graphical user interface visualization and command scripting tools) solutions to LTT’s poor user interface that aim to mitigate issues and lower costs associated with CT reconstruction using LTT. Key functional and non-functional requirements and risk mitigation strategies for the solution are outlined and discussed.

  9. Changes in physical performance among construction workers during extended workweeks with 12-hour workdays.

    PubMed

    Faber, Anne; Strøyer, Jesper; Hjortskov, Nis; Schibye, Bente

    2010-01-01

    To investigate changes in physical performance during long working hours and extended workweeks among construction workers with temporary accommodation in camps. Nineteen construction workers with 12-h workdays and extended workweeks participated. Physical performance in the morning and evening of the second and eleventh workdays was tested by endurance, ability to react to a sudden load, flexibility of the back, handgrip strength, and sub-maximal HR during a bicycle test. HR was registered throughout two separate workdays; during each of them it corresponded to a relative workload of 25%. At the end of each test day, sub-maximal HR was lower, reaction time faster, and handgrip strength higher. At the end of the work period, sub-maximal HR was lower, reaction time faster, and sitting balance better. No trends of decreased physical performance were found after a workday or a work period.

  10. Gap Resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Labutti, Kurt; Foster, Brian; Lapidus, Alla

    Gap Resolution is a software package that was developed to improve Newbler genome assemblies by automating the closure of sequence gaps caused by repetitive regions in the DNA. This is done by performing the following steps: 1) identify and distribute the data for each gap into sub-projects; 2) assemble the data associated with each sub-project using a secondary assembler, such as Newbler or PGA; 3) determine whether any gaps are closed after reassembly, and either design fakes (consensus of a closed gap) for those that closed or lab experiments for those that require additional data. The software requires as input a genome assembly produced by the Newbler assembler provided by Roche and 454 data containing paired-end reads.
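    As a toy illustration of step 1 (locating gap regions to distribute into sub-projects), the sketch below finds runs of 'N' characters in a scaffold sequence; the string representation is our assumption, not Gap Resolution's internals.

      import re

      def find_gaps(scaffold, min_len=1):
          # Runs of 'N' mark captured gaps between contigs in a scaffold.
          return [(m.start(), m.end())
                  for m in re.finditer(r"N{%d,}" % min_len, scaffold)]

      print(find_gaps("ACGTNNNNNNACGTTGCANNNG"))  # [(4, 10), (18, 21)]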

  11. Impacts and Viability of Open Source Software on Earth Science Metadata Clearing House and Service Registry Applications

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Cechini, M. F.; Mitchell, A.

    2011-12-01

    Earth Science applications typically deal with large amounts of data and high throughput rates, if not also high transaction rates. While Open Source is frequently used for smaller scientific applications, large scale, highly available systems frequently fall back to "enterprise" class solutions like Oracle RAC or commercial grade JEE Application Servers. NASA's Earth Observing System Data and Information System (EOSDIS) provides end-to-end capabilities for managing NASA's Earth science data from multiple sources - satellites, aircraft, field measurements, and various other programs. A core capability of EOSDIS, the Earth Observing System (EOS) Clearinghouse (ECHO), is a highly available search and order clearinghouse of over 100 million pieces of science data that has evolved from its early R&D days to a fully operational system. Over the course of this maturity ECHO has largely transitioned from commercial frameworks, databases, and operating systems to Open Source solutions...and in some cases, back. In this talk we discuss the progression of our technological solutions and our lessons learned in the following areas:
    - High performance, large scale searching solutions
    - GeoSpatial search capabilities and dealing with multiple coordinate systems
    - Search and storage of variable format source (science) data
    - Highly available deployment solutions
    - Scalable (elastic) solutions to visual searching and image handling
    Throughout the evolution of the ECHO system we have had to evaluate solutions with respect to performance, cost, developer productivity, reliability, and maintainability in the context of supporting global science users. Open Source solutions have played a significant role in our architecture and development but several critical commercial components remain (or have been reinserted) to meet our operational demands.

  12. pysimm: A Python Package for Simulation of Molecular Systems

    NASA Astrophysics Data System (ADS)

    Fortunato, Michael; Colina, Coray

    pysimm, short for python simulation interface for molecular modeling, is a Python package designed to facilitate structure generation and simulation of molecular systems through convenient, programmatic access to object-oriented representations of molecular system data. This poster presents core features of pysimm and design philosophies that highlight a generalized methodology for incorporating third-party software packages through API interfaces. The integration with the LAMMPS simulation package is explained to demonstrate this methodology. pysimm began as a back-end Python library that powered a cloud-based application on nanohub.org for amorphous polymer simulation. The extension from a specific application library to a general-purpose simulation interface is explained. Additionally, this poster highlights the rapid development of new applications to construct polymer chains capable of controlling chain morphology, such as molecular weight distribution and monomer composition.

  13. Chronic administration of phenytoin and pleomorphic adenoma: A case report and review of literature.

    PubMed

    Maharshi, Vikas; Nagar, Pravesh

    2017-01-01

    Adverse drug effects that are uncommon or appear only on chronic administration of a drug may not be detected in clinical trials. This explains the need for strict post-marketing vigilance of drug use. Phenytoin administration has been shown in the literature to be associated with the development of neoplasia (benign/malignant). To our knowledge, the current work represents the first case of pleomorphic adenoma of the sub-mandibular salivary gland developing after chronic phenytoin use. A 40-year-old male with a history of head trauma twenty years earlier had been on tablet phenytoin 100 mg thrice daily since then. One year before presentation he noticed a small swelling in the left sub-mandibular region that gradually increased in size. FNAC and CECT established the diagnosis of pleomorphic adenoma of the sub-mandibular salivary gland; other causes were ruled out. Surgical excision was performed successfully, and follow-up is continuing with no recurrence at the end of 6 months. Histopathological examination of the tissue did not show any malignant changes.

  14. CosmoQuest: A Cyber-Infrastructure for Crowdsourcing Planetary Surface Mapping and More

    NASA Astrophysics Data System (ADS)

    Gay, P.; Lehan, C.; Moore, J.; Bracey, G.; Gugliucci, N.

    2014-04-01

    The design and implementation of programs to crowdsource science presents a unique set of challenges to system architects, programmers, and designers. The CosmoQuest Citizen Science Builder (CSB) is an open source platform designed to take advantage of crowd computing and open source platforms to solve crowdsourcing problems in Planetary Science. CSB combines a clean user interface with a powerful back end to allow the quick design and deployment of citizen science sites that meet the needs of both the random Joe Public, and the detail driven Albert Professional. In this talk, the software will be overviewed, and the results of usability testing and accuracy testing with both citizen and professional scientists will be discussed.

  15. CamBAfx: Workflow Design, Implementation and Application for Neuroimaging

    PubMed Central

    Ooi, Cinly; Bullmore, Edward T.; Wink, Alle-Meije; Sendur, Levent; Barnes, Anna; Achard, Sophie; Aspden, John; Abbott, Sanja; Yue, Shigang; Kitzbichler, Manfred; Meunier, David; Maxim, Voichita; Salvador, Raymond; Henty, Julian; Tait, Roger; Subramaniam, Naresh; Suckling, John

    2009-01-01

    CamBAfx is a workflow application designed for both researchers who use workflows to process data (consumers) and those who design them (designers). It provides a front-end (user interface) optimized for data processing designed in a way familiar to consumers. The back-end uses a pipeline model to represent workflows since this is a common and useful metaphor used by designers and is easy to manipulate compared to other representations like programming scripts. As an Eclipse Rich Client Platform application, CamBAfx's pipelines and functions can be bundled with the software or downloaded post-installation. The user interface contains all the workflow facilities expected by consumers. Using the Eclipse Extension Mechanism designers are encouraged to customize CamBAfx for their own pipelines. CamBAfx wraps a workflow facility around neuroinformatics software without modification. CamBAfx's design, licensing and Eclipse Branding Mechanism allow it to be used as the user interface for other software, facilitating exchange of innovative computational tools between originating labs. PMID:19826470
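    A minimal sketch of the pipeline metaphor the back-end uses, in Python rather than CamBAfx's Java/Eclipse stack: a workflow is an ordered chain of stages, each consuming the previous stage's output (the stage names below are hypothetical).

      class Pipeline:
          # A workflow represented as an ordered chain of processing stages.
          def __init__(self, *stages):
              self.stages = stages

          def run(self, data):
              # Each stage consumes the previous stage's output.
              for stage in self.stages:
                  data = stage(data)
              return data

      # Hypothetical neuroimaging stages, wired in consumer-friendly order:
      preprocess = lambda imgs: [i.strip() for i in imgs]
      analyze = lambda imgs: {i: len(i) for i in imgs}
      print(Pipeline(preprocess, analyze).run([" scan_01 ", " scan_02 "]))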

  16. A Mechanized Decision Support System for Academic Scheduling.

    DTIC Science & Technology

    1986-03-01

    This thesis describes the development of an operational decision support system for academic scheduling and examines software-design techniques to identify the most appropriate methodology for this problem. Subject terms: Scheduling, Decision Support System, Software Design.

  17. Command and Control Software Development Memory Management

    NASA Technical Reports Server (NTRS)

    Joseph, Austin Pope

    2017-01-01

    This internship was initially meant to cover the implementation of unit test automation for a NASA ground control project. As is often the case with large development projects, the scope and breadth of the internship changed. Instead, the internship focused on finding and correcting memory leaks and errors as reported by a COTS software product meant to track such issues. Memory leaks come in many different flavors and some of them are more benign than others. On the extreme end a program might be dynamically allocating memory and not correctly deallocating it when it is no longer in use. This is called a direct memory leak and in the worst case can use all the available memory and crash the program. If the leaks are small they may simply slow the program down which, in a safety critical system (a system for which a failure or design error can cause a risk to human life), is still unacceptable. The ground control system is managed in smaller sub-teams, referred to as CSCIs. The CSCI that this internship focused on is responsible for monitoring the health and status of the system. This team's software had several methods/modules that were leaking significant amounts of memory. Since most of the code in this system is safety-critical, correcting memory leaks is a necessity.
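    The internship hunted leaks with a COTS tracker in a C-style codebase; as an analogous, self-contained illustration in Python, the sketch below plants a direct leak (allocations retained forever in a global list) and uses the standard tracemalloc module to rank the allocation sites that grow.

      import tracemalloc

      leaky_cache = []

      def leaky_handler(message):
          # Bug: every message is retained forever, so memory grows without bound.
          leaky_cache.append(message * 1000)

      tracemalloc.start()
      before = tracemalloc.take_snapshot()
      for i in range(10000):
          leaky_handler(f"status-{i}")
      after = tracemalloc.take_snapshot()

      # Rank allocation sites by growth; the leaky line rises to the top.
      for stat in after.compare_to(before, "lineno")[:3]:
          print(stat)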

  18. Solid State Lighting Program (Falcon)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meeks, Steven

    2012-06-30

    Over the past two years, KLA-Tencor and partners successfully developed and deployed software and hardware tools that increase product yield for High Brightness LED (HBLED) manufacturing and reduce product development and factory ramp times. This report summarizes our development effort and details how the results of the Solid State Lighting Program (Falcon) have started to help HBLED manufacturers optimize process control by enabling them to flag and correct identified killer defect conditions at any point of origin in the manufacturing process flow. This constitutes a quantum leap in yield management over current practice, which consists of die dispositioning, the rejection of bad die at the end of the process based upon probe tests, loosely assisted by optical in-line monitoring for gross process deficiencies. For the first time, and as a result of our Solid State Lighting Program, our LED manufacturing partners have obtained software and hardware tools that optimize individual process steps to control killer defects at the point in the process where they originate. Products developed during our two-year program enable optimized inspection strategies for many product lines to minimize cost and maximize yield. The Solid State Lighting Program was structured in three phases: i) the development of advanced imaging modes that achieve clear separation between LED defect types, improve signal to noise and scan rates, and minimize nuisance defects for both front-end and back-end inspection tools; ii) the creation of defect source analysis (DSA) software that connects the defect maps from back-end and front-end HBLED manufacturing tools to permit the automatic overlay and traceability of defects between tools and process steps, suppress nuisance defects, and identify the origin of killer defects along with the process step and conditions; and iii) working with partners (Philips Lumileds) on product wafers to obtain a detailed statistical correlation of automated defect and DSA map overlays to failed die identified using end-product probe test results. Results from our two-year effort have led to automated end-to-end defect detection with full defect traceability and the ability to unambiguously correlate device killer defects to optically detected features and their point of origin within the process. The success of the program can be measured by yield improvements at our partners' facilities and new product orders.

  19. Remote Software Application and Display Development

    NASA Technical Reports Server (NTRS)

    Sanders, Brandon T.

    2014-01-01

    The era of the shuttle program has come to an end, but only to give rise to newer and more exciting projects. Now is the time of the Orion spacecraft, a work of art designed to exceed all previous endeavors of man. NASA is exiting the time of exploration and is entering a new period, a period of pioneering. With this new mission, many of NASA's organizations must undergo a great deal of change and development to support the Orion missions. The Spaceport Command and Control System (SCCS) is the new system that will provide NASA the ability to launch rockets into orbit and thus control Orion and other spacecraft as the goal of populating Mars becomes ever more tangible. Since the previous control system, the Launch Processing System (LPS), was primarily designed to launch the shuttles, SCCS was needed as Kennedy Space Center (KSC) reorganized into a multi-user spaceport for commercial flights, providing more versatile control over rockets. Within SCCS is the Launch Control System (LCS), the remote software behind the command and monitoring of flight and ground system hardware. This internship at KSC has involved two main components of LCS: remote software application and display development. The display environment provides a graphical user interface for an operator to view and see if any cautions are raised, while the remote applications are the backbone that communicate with hardware and then relay the data back to the displays. These elements go hand in hand as they provide monitoring and control over hardware and software alike from the safety of the Launch Control Center. The remote software applications are written in Application Control Language (ACL), which must undergo unit testing to ensure data integrity. This paper describes both the implementation and writing of unit tests in ACL code for remote software applications, as well as the building of remote displays to be used in the Launch Control Center (LCC).

  20. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology

    PubMed Central

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E.; Troein, Carl; Millar, Andrew J.; Goryanin, Igor; Gilmore, Stephen

    2013-01-01

    Summary: Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI’s use of standard data formats. Availability and implementation: All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials. Contact: stg@inf.ed.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23329415
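    For the parameter-fitting process SBSI supports, a toy example: fitting a first-order decay model to noisy synthetic data with SciPy's least_squares. This stands in for SBSINumerics' parallelized algorithms and uses made-up data and parameter names.

      import numpy as np
      from scipy.optimize import least_squares

      # Synthetic "experimental" data for a first-order decay x(t) = x0 * exp(-k t).
      t = np.linspace(0, 10, 50)
      observed = 5.0 * np.exp(-0.7 * t) + np.random.normal(0, 0.05, t.size)

      def residuals(params):
          x0, k = params
          return x0 * np.exp(-k * t) - observed

      fit = least_squares(residuals, x0=[1.0, 0.1])  # initial guess
      print("estimated x0, k:", fit.x)               # ~ [5.0, 0.7]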

  1. Development of autonomous gamma dose logger for environmental monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jisha, N. V.; Krishnakumar, D. N.; Surya Prakash, G.

    2012-03-15

    Continuous monitoring and archiving of background radiation levels in and around the nuclear installation is essential and the data would be of immense use during analysis of any untoward incidents. A portable Geiger Muller detector based autonomous gamma dose logger (AGDL) for environmental monitoring is indigenously designed and developed. The system operations are controlled by microcontroller (AT89S52) and the main features of the system are software data acquisition, real time LCD display of radiation level, data archiving at removable compact flash card. The complete system operates on 12 V battery backed up by solar panel and hence the system is totally portable and ideal for field use. The system has been calibrated with Co-60 source (8.1 MBq) at various source-detector distances. The system is field tested and performance evaluation is carried out. This paper covers the design considerations of the hardware, software architecture of the system along with details of the front-end operation of the autonomous gamma dose logger and the data file formats. The data gathered during field testing and inter comparison with GammaTRACER are also presented in the paper. AGDL has shown excellent correlation with energy fluence monitor tuned to identify {sup 41}Ar, proving its utility for real-time plume tracking and source term estimation.

  2. Development of autonomous gamma dose logger for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Jisha, N. V.; Krishnakumar, D. N.; Surya Prakash, G.; Kumari, Anju; Baskaran, R.; Venkatraman, B.

    2012-03-01

    Continuous monitoring and archiving of background radiation levels in and around the nuclear installation is essential and the data would be of immense use during analysis of any untoward incidents. A portable Geiger Muller detector based autonomous gamma dose logger (AGDL) for environmental monitoring is indigenously designed and developed. The system operations are controlled by microcontroller (AT89S52) and the main features of the system are software data acquisition, real time LCD display of radiation level, data archiving at removable compact flash card. The complete system operates on 12 V battery backed up by solar panel and hence the system is totally portable and ideal for field use. The system has been calibrated with Co-60 source (8.1 MBq) at various source-detector distances. The system is field tested and performance evaluation is carried out. This paper covers the design considerations of the hardware, software architecture of the system along with details of the front-end operation of the autonomous gamma dose logger and the data file formats. The data gathered during field testing and inter comparison with GammaTRACER are also presented in the paper. AGDL has shown excellent correlation with energy fluence monitor tuned to identify 41Ar, proving its utility for real-time plume tracking and source term estimation.
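    A sketch of the logger's basic bookkeeping: converting raw GM-tube counts to an indicative dose rate and appending timestamped readings to a file, much as the AGDL archives readings to its flash card. The calibration factor and file name are illustrative assumptions, not the instrument's values.

      import time

      CAL_FACTOR = 0.0057  # hypothetical uSv/h per CPM for a given GM tube

      def dose_rate(counts, interval_s):
          # Convert raw counts over an interval to counts per minute, then dose rate.
          cpm = counts * 60.0 / interval_s
          return cpm * CAL_FACTOR

      def log_reading(counts, interval_s, logfile="agdl.csv"):
          # Append a timestamped reading, mimicking archiving to the flash card.
          with open(logfile, "a") as f:
              f.write(f"{time.time():.0f},{dose_rate(counts, interval_s):.3f}\n")

      log_reading(counts=42, interval_s=60)  # 42 counts in one minute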

  3. A mobile trauma database with charge capture.

    PubMed

    Moulton, Steve; Myung, Dan; Chary, Aron; Chen, Joshua; Agarwal, Suresh; Emhoff, Tim; Burke, Peter; Hirsch, Erwin

    2005-11-01

    Charge capture plays an important role in every surgical practice. We have developed and merged a custom mobile database (DB) system with our trauma registry (TRACS), to better understand our billing methods, revenue generators, and areas for improved revenue capture. The mobile database runs on handheld devices using the Windows Compact Edition platform. The front end was written in C# and the back end is SQL. The mobile database operates as a thick client; it includes active and inactive patient lists, billing screens, hot pick lists, and Current Procedural Terminology and International Classification of Diseases, Ninth Revision code sets. Microsoft Internet Information Server provides secure data transaction services between the back ends stored on each device. Traditional, handwritten billing information for three of five adult trauma surgeons was averaged over a 5-month period. Electronic billing information was then collected over a 3-month period using handheld devices and the subject software application. One surgeon used the software for all 3 months, and two surgeons used it for the latter 2 months of the electronic data collection period. This electronic billing information was combined with TRACS data to determine the clinical characteristics of the trauma patients who were and were not captured using the mobile database. Total charges increased by 135%, 148%, and 228% for each of the three trauma surgeons who used the mobile DB application. The majority of additional charges were for evaluation and management services. Patients who were captured and billed at the point of care using the mobile DB had higher Injury Severity Scores, were more likely to undergo an operative procedure, and had longer lengths of stay compared with those who were not captured. Total charges more than doubled using a mobile database to bill at the point of care. A subsequent comparison of TRACS data with billing information revealed a large amount of uncaptured patient revenue. Greater familiarity and broader use of mobile database technology holds the potential for even greater revenue capture.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawamura, Yoshiyuki

    The radiative forcing of the greenhouse gases has been studied meteorologically, based on computational simulations or observations of the real atmosphere. In order to understand the greenhouse effect more deeply and to study it from various viewpoints, study at laboratory scale is important. We have developed a direct measurement system for the infrared back radiation from carbon dioxide (CO{sub 2}) gas. The system configuration is similar to that of the practical earth-atmosphere-space system. Using this system, the back radiation from the CO{sub 2} gas was directly measured at laboratory scale, and it roughly coincides with the meteorologically predicted value.

  5. An Autonomous Cryobot Synthetic Aperture Radar for Subsurface Exploration of Europa

    NASA Astrophysics Data System (ADS)

    Pradhan, O.; Gasiewski, A. J.

    2015-12-01

    We present the design and field testing of a forward-looking end-fire synthetic aperture radar (SAR) for the 'Very deep Autonomous Laser-powered Kilowatt-class Yo-yoing Robotic Ice Explorer' (VALKYRIE) ice-penetrating cryobot. This design demonstrates critical technologies that will support an eventual landing and ice penetrating mission to Jupiter's icy moon, Europa. Results proving the feasibility of an end-fire SAR system for vehicle guidance and obstacle avoidance in a sub-surface ice environment will be presented. Data collected by the SAR will also be used for constructing sub-surface images of the glacier which can be used for: (i) mapping of englacial features such as crevasses, moulins, and embedded liquid water and (ii) ice-depth and glacier bed analysis to construct digital elevation models (DEM) that can help in the selection of cryobot trajectories and future drill sites for extracting long-term climate records. The project consists of three parts: (i) design of an array of four conformal cavity-backed log-periodic folded slot dipole array (LPFSA) antennas that form agile radiating elements, (ii) design of a radar system that includes RF signal generation, 4x4 transmit-receive antenna switching and isolation, and digital SAR data processing, and (iii) field testing of the SAR in melt holes. The antennas have been designed, fabricated, and lab tested at the Center for Environmental Technology (CET) at CU-Boulder. The radar system was also designed and integrated at CET utilizing rugged RF components and FPGA based digital processing. Field testing was performed in conjunction with VALKYRIE tests by Stone Aerospace in June 2015 on Matanuska Glacier, Alaska. The antennas are designed to operate inside ice while being immersed in a thin layer of surrounding low-conductivity melt water. Small holes in the corners of the cavities allow flooding of these cavities with the same melt water, thus allowing for quarter-wavelength cavity-backed reflection. Testing of the antenna array was first carried out by characterizing their operation inside a large ice block at the Stone Aerospace facility in Austin, TX. The complete radar system was then tested on the Matanuska glacier in Alaska, which is an effective Earth analog to Europan sub-surface exploration.

  6. Real time computer data system for the 40 x 80 ft wind tunnel facility at Ames Research Center

    NASA Technical Reports Server (NTRS)

    Cambra, J. M.; Tolari, G. P.

    1974-01-01

    The wind tunnel realtime computer system is a distributed data gathering system that features a master computer subsystem, a high speed data gathering subsystem, a quick look dynamic analysis and vibration control subsystem, an analog recording back-up subsystem, a pulse code modulation (PCM) on-board subsystem, a communications subsystem, and a transducer excitation and calibration subsystem. The subsystems are married to the master computer through an executive software system and standard hardware and FORTRAN software interfaces. The executive software system has four basic software routines. These are the playback, setup, record, and monitor routines. The standard hardware interfaces along with the software interfaces provide the system with the capability of adapting to new environments.

  7. Coordinated Fault-Tolerance for High-Performance Computing Final Project Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panda, Dhabaleswar Kumar; Beckman, Pete

    2011-07-28

    With the Coordinated Infrastructure for Fault Tolerance Systems (CIFTS, as the original project came to be called) project, our aim has been to understand and tackle the following broad research questions, the answers to which will help the HEC community analyze and shape the direction of research in the field of fault tolerance and resiliency on future high-end leadership systems. Will availability of global fault information, obtained by fault information exchange between the different HEC software on a system, allow individual system software to better detect, diagnose, and adaptively respond to faults? If fault-awareness is raised throughout the system through fault information exchange, is it possible to get all system software working together to provide a more comprehensive end-to-end fault management on the system? What are the missing fault-tolerance features that widely used HEC system software lacks today that would inhibit such software from taking advantage of systemwide global fault information? What are the practical limitations of a systemwide approach for end-to-end fault management based on fault awareness and coordination? What mechanisms, tools, and technologies are needed to bring about fault awareness and coordination of responses on a leadership-class system? What standards, outreach, and community interaction are needed for adoption of the concept of fault awareness and coordination for fault management on future systems? Keeping our overall objectives in mind, the CIFTS team has taken a parallel fourfold approach. Our central goal was to design and implement a light-weight, scalable infrastructure with a simple, standardized interface to allow communication of fault-related information through the system and facilitate coordinated responses. This work led to the development of the Fault Tolerance Backplane (FTB) publish-subscribe API specification, together with a reference implementation and several experimental implementations on top of existing publish-subscribe tools. We enhanced the intrinsic fault tolerance capabilities of representative implementations of a variety of key HPC software subsystems and integrated them with the FTB. Targeted software subsystems included: MPI communication libraries, checkpoint/restart libraries, resource managers and job schedulers, and system monitoring tools. Leveraging the aforementioned infrastructure, as well as developing and utilizing additional tools, we have examined issues associated with expanded, end-to-end fault response from both system and application viewpoints. From the standpoint of system operations, we have investigated log and root cause analysis, anomaly detection and fault prediction, and generalized notification mechanisms. Our applications work has included libraries for fault-tolerant linear algebra, application frameworks for coupled multiphysics applications, and external frameworks to support the monitoring and response for general applications. Our final goal was to engage the high-end computing community to increase awareness of tools and issues around coordinated end-to-end fault management.

  8. A computerized system to measure and predict air quality for emission control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crooks, G.; Ciccone, A.; Frattolillo, P.

    1997-12-31

    A Supplementary Emission Control (SEC) system has been developed on behalf of the Association Industrielle de l'Est de Montreal (AIEM). The objective of the SEC is to avoid exceedences of the Montreal Urban Community (MUC) 24 hour ambient Air Quality Standard (AQS) for sulphur dioxide in the industrial East Montreal area. The SEC system comprises: 3 continuous SO{sub 2} monitoring stations with data loggers and remote communications; a meteorological tower with data logger and modem for acquiring local meteorology; communications with Environment Canada to download meteorological forecast data; a polling PC for data retrieval; and Windows NT based software running on the AIEM computer server. The SEC software utilizes relational databases to store and maintain measured SO{sub 2} concentration data, emission data, as well as observed and forecast meteorological data. The SEC system automatically executes a numerical dispersion model to forecast SO{sub 2} concentrations up to six hours in the future. Based on measured SO{sub 2} concentrations at the monitoring stations and the six hour forecast concentrations, the system determines if local sources should reduce their emission levels to avoid potential exceedences of the AQS. The SEC system also includes a Graphical User Interface (GUI) for user access to the system. The SEC system and software are described, and the accuracy of the system at forecasting SO{sub 2} concentrations is examined.
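    The heart of the SEC is a decision rule comparing measured and forecast concentrations against the standard. The sketch below shows one plausible form of that logic with an illustrative numeric threshold; the actual MUC standard value and the system's dispersion-model forecast are not reproduced here.

      AQS_24H_PPB = 110  # illustrative value, not the actual MUC standard

      def needs_emission_cut(hourly_so2_ppb, forecast_6h_ppb):
          # Flag a supplementary emission cut if the measured 24-h mean or any
          # forecast hour threatens the ambient standard.
          mean_24h = sum(hourly_so2_ppb[-24:]) / min(len(hourly_so2_ppb), 24)
          return mean_24h > AQS_24H_PPB or max(forecast_6h_ppb) > AQS_24H_PPB

      print(needs_emission_cut([40] * 23 + [300], [80, 95, 130, 90, 70, 60]))  # True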

  9. DSN G/T(sub op) and telecommunications system performance

    NASA Technical Reports Server (NTRS)

    Stelzried, C.; Clauss, R.; Rafferty, W.; Petty, S.

    1992-01-01

    Provided here is an intersystem comparison of present and evolving Deep Space Network (DSN) microwave receiving systems. Comparisons of the receiving systems are based on the widely used G/T sub op figure of merit, which is defined as antenna gain divided by operating system noise temperature. In 10 years, it is expected that the DSN 32 GHz microwave receiving system will improve the G/T sub op performance over the current 8.4 GHz system by 8.3 dB. To compare future telecommunications system end-to-end performance, both the receiving systems' G/T sub op and spacecraft transmit parameters are used. Improving the 32 GHz spacecraft transmitter system is shown to increase the end-to-end telecommunications system performance an additional 3.2 dB, for a net improvement of 11.5 dB. These values are without a planet in the field of view (FOV). A Saturn mission is used for an example calculation to indicate the degradation in performance with a planet in the field of view.
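    Since G/T sub op is a ratio expressed in decibels, independent improvements along the link budget simply add in dB, which is how the 8.3 dB receiving-system gain and the 3.2 dB transmitter gain combine to 11.5 dB; a short check in Python (the helper function is ours):

      from math import log10

      def g_over_t_db(gain_dbi, t_op_kelvin):
          # Figure of merit: antenna gain minus operating system noise
          # temperature, both in dB: G/T_op = G - 10*log10(T_op).
          return gain_dbi - 10 * log10(t_op_kelvin)

      receiving_gain_db = 8.3   # 32 GHz G/T_op over the current 8.4 GHz system
      transmit_gain_db = 3.2    # improved 32 GHz spacecraft transmitter
      print(receiving_gain_db + transmit_gain_db)  # 11.5 dB net improvement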

  10. Geographic Information Systems and Web Page Development

    NASA Technical Reports Server (NTRS)

    Reynolds, Justin

    2004-01-01

    The Facilities Engineering and Architectural Branch is responsible for the design and maintenance of buildings, laboratories, and civil structures. In order to improve efficiency and quality, the FEAB has dedicated itself to establishing a data infrastructure based on Geographic Information Systems, GIS. The value of GIS was explained in an article dating back to 1980 entitled "Need for a Multipurpose Cadastre" which stated, "There is a critical need for a better land-information system in the United States to improve land-conveyance procedures, furnish a basis for equitable taxation, and provide much-needed information for resource management and environmental planning." Scientists and engineers both point to GIS as the solution. What is GIS? According to most textbooks, Geographic Information Systems is a class of software that stores, manages, and analyzes mappable features on, above, or below the surface of the earth. GIS software is basically database management software applied to the management of spatial data and information. Simply put, Geographic Information Systems manage, analyze, chart, graph, and map spatial information. GIS can be broken down into two main categories, urban GIS and natural resource GIS. Further still, natural resource GIS can be broken down into six sub-categories: agriculture, forestry, wildlife, catchment management, archaeology, and geology/mining. Agriculture GIS has several applications, such as agricultural capability analysis, land conservation, market analysis, or whole farming planning. Forestry GIS can be used for timber assessment and management, harvest scheduling and planning, environmental impact assessment, and pest management. GIS, when used in wildlife applications, enables the user to assess and manage habitats, identify and track endangered and rare species, and monitor impact assessment.

  11. The control system of the 12-m medium-size telescope prototype: a test-ground for the CTA array control

    NASA Astrophysics Data System (ADS)

    Oya, I.; Anguner, E. A.; Behera, B.; Birsin, E.; Fuessling, M.; Lindemann, R.; Melkumyan, D.; Schlenstedt, S.; Schmidt, T.; Schwanke, U.; Sternberger, R.; Wegner, P.; Wiesand, S.

    2014-07-01

    The Cherenkov Telescope Array (CTA) will be the next generation ground-based very-high-energy gamma-ray observatory. CTA will consist of two arrays: one in the Northern hemisphere composed of about 20 telescopes, and the other one in the Southern hemisphere composed of about 100 telescopes, both arrays containing telescopes of different sizes and types and in addition numerous auxiliary devices. In order to provide a test-ground for the CTA array control, the steering software of the 12-m medium size telescope (MST) prototype deployed in Berlin has been implemented using the tools and design concepts under consideration to be used for the control of the CTA array. The prototype control system is implemented based on the Atacama Large Millimeter/submillimeter Array (ALMA) Common Software (ACS) control middleware, with components implemented in Java, C++ and Python. The interfacing to the hardware is standardized via the Object Linking and Embedding for Process Control Unified Architecture (OPC UA). In order to access the OPC UA servers from the ACS framework in a common way, a library has been developed that allows the OPC UA server nodes, methods and events to be tied to their equivalents in ACS components. The front-end of the archive system is able to identify the deployed components and to perform the sampling of the monitoring points of each component following time and value change triggers according to the selected configurations. The back-end of the archive system of the prototype is composed of two databases: MySQL and MongoDB. MySQL has been selected as the storage for system configurations, while MongoDB is used for efficient storage of device monitoring data, CCD images, logging and alarm information. In this contribution, the details and conclusions on the implementation of the control software of the MST prototype are presented.
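    As an illustration of the archive back-end's division of labor, the sketch below writes one monitoring-point sample to MongoDB with pymongo, assuming a reachable local server; the database, collection, and field names are our assumptions, not the MST prototype's schema.

      from datetime import datetime, timezone
      from pymongo import MongoClient

      client = MongoClient("mongodb://localhost:27017")  # archive back-end
      monitoring = client["mst_archive"]["monitoring"]   # names hypothetical

      def sample(component, point, value):
          # Store one monitoring-point sample, as the archive front-end would
          # after a time or value-change trigger fires.
          monitoring.insert_one({
              "component": component,
              "point": point,
              "value": value,
              "t": datetime.now(timezone.utc),
          })

      sample("DriveSystem", "azimuth_deg", 182.4)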

  12. An Internet Protocol-Based Software System for Real-Time, Closed-Loop, Multi-Spacecraft Mission Simulation Applications

    NASA Technical Reports Server (NTRS)

    Davis, George; Cary, Everett; Higinbotham, John; Burns, Richard; Hogie, Keith; Hallahan, Francis

    2003-01-01

    The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements, with functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival and distribution addressed.

  13. A Distributed Simulation Software System for Multi-Spacecraft Missions

    NASA Technical Reports Server (NTRS)

    Burns, Richard; Davis, George; Cary, Everett

    2003-01-01

    The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements, with functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival and distribution addressed.

  14. X-wing fly-by-wire vehicle management system

    NASA Technical Reports Server (NTRS)

    Fischer, Jr., William C. (Inventor)

    1990-01-01

    A complete, computer based, vehicle management system (VMS) for X-Wing aircraft using digital fly-by-wire technology controlling many subsystems and providing functions beyond the classical aircraft flight control system. The vehicle management system receives input signals from a multiplicity of sensors and provides commands to a large number of actuators controlling many subsystems. The VMS includes--segregating flight critical and mission critical factors and providing a greater level of back-up or redundancy for the former; centralizing the computation of functions utilized by several subsystems (e.g. air data, rotor speed, etc.); integrating the control of the flight control functions, the compressor control, the rotor conversion control, vibration alleviation by higher harmonic control, engine power anticipation and self-test, all in the same flight control computer (FCC) hardware units. The VMS uses equivalent redundancy techniques to attain quadruple equivalency levels; includes alternate modes of operation and recovery means to back-up any functions which fail; and uses back-up control software for software redundancy.

  15. Control Software for the VERITAS Cerenkov Telescope System

    NASA Astrophysics Data System (ADS)

    Krawczynski, H.; Olevitch, M.; Sembroski, G.; Gibbs, K.

    2003-07-01

    The VERITAS collaboration is developing a system of initially 4 and eventually 7 Čerenkov telescopes of the 12 m diameter class for high sensitivity gamma-ray astronomy in the >50 GeV energy range. In this contribution we describe the software that controls and monitors the various VERITAS subsystems. The software uses an object-oriented approach to cope with the complexities that arise from using sub-groups of the 7 VERITAS telescopes to observe several sources at the same time. Inter-process communication is based on the CORBA Object Request Broker protocol, and watch-dog processes monitor the sub-system performance.

  16. Integrating RFID technique to design mobile handheld inventory management system

    NASA Astrophysics Data System (ADS)

    Huang, Yo-Ping; Yen, Wei; Chen, Shih-Chung

    2008-04-01

    An RFID-based mobile handheld inventory management system is proposed in this paper. Differing from the manual inventory management method, the proposed system works on the personal digital assistant (PDA) with an RFID reader. The system identifies electronic tags on the properties and checks the property information in the back-end database server through a ubiquitous wireless network. The system also provides a set of functions to manage the back-end inventory database and assigns different levels of access privilege according to various user categories. In the back-end database server, to prevent improper or illegal accesses, the server not only stores the inventory database and user privilege information, but also keeps track of the user activities in the server including the login and logout time and location, the records of database accessing, and every modification of the tables. Some experimental results are presented to verify the applicability of the integrated RFID-based mobile handheld inventory management system.
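
    As an illustration only (not part of the original record), the back-end checks described above, privilege levels plus activity logging, could be sketched in Python as follows; the table and column names are hypothetical:

        # Hypothetical sketch of the back-end checks described above: look up a
        # scanned tag, enforce a per-user privilege level, and log the access.
        # Table and column names are illustrative assumptions, not the paper's schema.
        import sqlite3
        from datetime import datetime, timezone

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE properties (tag_id TEXT PRIMARY KEY, description TEXT);
            CREATE TABLE users (name TEXT PRIMARY KEY, privilege INTEGER);
            CREATE TABLE activity_log (ts TEXT, user TEXT, action TEXT);
        """)
        db.execute("INSERT INTO properties VALUES ('E2003412DC03011', 'PDA dock')")
        db.execute("INSERT INTO users VALUES ('auditor01', 2)")

        def lookup_property(tag_id, user, required_privilege=1):
            """Return property info if the user is privileged; log every attempt."""
            (priv,) = db.execute(
                "SELECT privilege FROM users WHERE name = ?", (user,)).fetchone()
            db.execute("INSERT INTO activity_log VALUES (?, ?, ?)",
                       (datetime.now(timezone.utc).isoformat(), user,
                        f"lookup {tag_id}"))
            if priv < required_privilege:
                raise PermissionError(f"{user} lacks privilege for tag lookup")
            return db.execute(
                "SELECT description FROM properties WHERE tag_id = ?",
                (tag_id,)).fetchone()

        print(lookup_property("E2003412DC03011", "auditor01"))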

  17. Supporting Ecological Research With a Flexible Satellite Sensornet Gateway

    NASA Astrophysics Data System (ADS)

    Silva, F.; Rundel, P. W.; Graham, E. A.; Falk, A.; Ye, W.; Pradkin, Y.; Deschon, A.; Bhatt, S.; McHenry, T.

    2007-12-01

    Wireless sensor networks are a promising technology for ecological research due to their capability to make continuous and in-situ measurements. However, there are some challenges to the wide adoption of this technology by scientists, who may have various research focuses. First, the observation system needs to be rapidly and easily deployable at different remote locations. Second, the system needs to be flexible enough to meet the requirements of different applications and easily reconfigurable by scientists, who may not always be technology experts. To address these challenges, we designed and implemented a flexible satellite gateway for sensor networks. Our first prototype is being deployed at Stunt Ranch in the Santa Monica Mountains to support biological research at UCLA. In this joint USC/ISI-UCLA deployment, scientists are interested in a long-term investigation of the influence of the 2006-07 southern California drought conditions on the water relations of important chaparral shrub and tree species that differ in their depth of rooting. Rainfall over this past hydrologic year in southern California has been less than 25% of normal, making it the driest year on record. In addition to core measurements of air temperature, relative humidity, wind speed, solar irradiance, rainfall, and soil moisture, we use constant-heating sap flow sensors to continuously monitor the flow of water through the xylem of replicated stems of four species to compare their access to soil moisture with plant water stress. Our gateway consists of a front-end data acquisition system and a back-end data storage system, connected by a long-haul satellite communication link. At the front-end, all environmental sensors are connected to a Compact RIO, a rugged data acquisition platform developed by National Instruments. Sap flow sensors are deployed in several locations that are 20 to 50 meters away from the Compact RIO. At each plant, a Hobo datalogger is used to collect sap flow sensor readings. A Crossbow mote interfaces with the Hobo datalogger to collect data from it and send the data to the Compact RIO through wireless communication. The Compact RIO relays the sensor data to the back-end system over the satellite link. The back-end system stores the data in a database and provides interfaces for easy data retrieval and system reconfiguration. We have developed data exchange and management protocols for reliable data transfer and storage. We have also developed tools to support remote operation, such as system health monitoring and user reconfiguration. Our design emphasizes a modular software architecture that is flexible, to support various scientific applications. This poster illustrates our system design and describes our first deployment at Stunt Ranch. Stunt Ranch is a 310-acre reserve in the Santa Monica Mountains, located within the Santa Monica Mountains National Recreation Area of the National Park Service. The reserve includes mixed communities of chaparral, live oak woodland, and riparian habitats. Stunt Ranch is managed by UCLA as part of the University of California Natural Reserve System.

  18. Characterization of Cloud Water-Content Distribution

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon

    2010-01-01

    The development of realistic cloud parameterizations for climate models requires accurate characterizations of subgrid distributions of thermodynamic variables. To this end, a software tool was developed to characterize cloud water-content distributions in climate-model sub-grid scales. This software characterizes distributions of cloud water content with respect to cloud phase, cloud type, precipitation occurrence, and geo-location using CloudSat radar measurements. It uses a statistical method called maximum likelihood estimation to estimate the probability density function of the cloud water content.
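
    As a hedged illustration of the statistical step named above, the following minimal Python sketch fits a density by maximum likelihood; the lognormal family and the synthetic data are assumptions for illustration, not the tool's actual choices:

        # Minimal sketch of estimating a probability density function of cloud
        # water content by maximum likelihood. The lognormal family and the
        # synthetic sample are assumptions; the actual tool works on CloudSat
        # radar retrievals.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        water_content = rng.lognormal(mean=-1.0, sigma=0.6, size=5000)  # g/m^3, synthetic

        # stats.lognorm.fit maximizes the likelihood of the observed sample.
        shape, loc, scale = stats.lognorm.fit(water_content, floc=0.0)
        pdf = stats.lognorm(shape, loc=loc, scale=scale).pdf

        print(f"MLE estimate: sigma={shape:.3f}, median={scale:.3f} g/m^3")
        print(f"density at 0.5 g/m^3: {pdf(0.5):.3f}")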

  19. Using S3 cloud storage with ROOT and CvmFS

    NASA Astrophysics Data System (ADS)

    Arsuaga-Ríos, María; Heikkilä, Seppo S.; Duellmann, Dirk; Meusel, René; Blomer, Jakob; Couturier, Ben

    2015-12-01

    Amazon S3 is a widely adopted web API for scalable cloud storage that could also fulfill storage requirements of the high-energy physics community. CERN has been evaluating this option using some key HEP applications, such as ROOT and the CernVM filesystem (CvmFS), with S3 back-ends. In this contribution we present an evaluation of two versions of the Huawei UDS storage system stressed with a large number of clients executing HEP software applications. The performance of concurrently storing individual objects is presented alongside more complex data access patterns as produced by the ROOT data analysis framework. Both Huawei UDS generations show successful scalability and support multiple byte-range requests, in contrast with Amazon S3 or Ceph, which do not support these commonly used HEP operations. We further report on the S3 integration with recent CvmFS versions and summarize the experience with CvmFS/S3 for publishing daily releases of the full LHCb experiment software stack.

  20. Evaluation of the Huawei UDS cloud storage system for CERN specific data

    NASA Astrophysics Data System (ADS)

    Zotes Resines, M.; Heikkila, S. S.; Duellmann, D.; Adde, G.; Toebbicke, R.; Hughes, J.; Wang, L.

    2014-06-01

    Cloud storage is an emerging architecture aiming to provide increased scalability and access performance, compared to more traditional solutions. CERN is evaluating this promise using Huawei UDS and OpenStack SWIFT storage deployments, focusing on the needs of high-energy physics. Both deployed setups implement S3, one of the protocols that are emerging as a standard in the cloud storage market. A set of client machines is used to generate I/O load patterns to evaluate the storage system performance. The presented read and write test results indicate scalability from both the metadata and data perspectives. Further, the Huawei UDS cloud storage is shown to be able to recover from a major failure, the loss of 16 disks. Both cloud storage systems are finally demonstrated to function as back-end storage for a filesystem, which is used to deliver high-energy physics software.

  1. An effective write policy for software coherence schemes

    NASA Technical Reports Server (NTRS)

    Chen, Yung-Chin; Veidenbaum, Alexander V.

    1992-01-01

    The authors study the write behavior and evaluate the performance of various write strategies and buffering techniques for a MIN-based multiprocessor system using the simple software coherence scheme. Hit ratios, memory latencies, total execution time, and total write traffic are used as the performance indices. The write-through write-allocate no-fetch cache using a write-back write buffer is shown to have a better performance than both write-through and write-back caches. This type of write buffer is effective in reducing the volume as well as bursts of write traffic. On average, the use of a write-back cache reduces by 60 percent the total write traffic generated by a write-through cache.
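
    Purely to illustrate the trade-off this record quantifies (not code from the paper), a toy trace-driven sketch in Python might count the write traffic of each policy; the trace and the direct-mapped cache organization are assumptions:

        # Toy trace-driven sketch: count the memory write traffic a write-through
        # cache vs. a write-back cache would generate on the same access stream.
        # The trace, cache size, and direct-mapped organization are illustrative.
        import random

        random.seed(1)
        trace = [random.randrange(256) for _ in range(10_000)]  # block addresses
        NSETS = 64  # direct-mapped, 64 blocks

        def write_traffic(write_back: bool) -> int:
            tags = [None] * NSETS
            dirty = [False] * NSETS
            traffic = 0
            for block in trace:
                s = block % NSETS
                if tags[s] != block:              # miss: maybe write back victim
                    if write_back and dirty[s]:
                        traffic += 1
                    tags[s], dirty[s] = block, False
                if write_back:
                    dirty[s] = True               # mark dirty, write out later
                else:
                    traffic += 1                  # write-through: every write goes out
            return traffic

        print("write-through traffic:", write_traffic(False))
        print("write-back traffic:   ", write_traffic(True))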

  2. Unmanned Systems Safety Guide for DoD Acquisition

    DTIC Science & Technology

    2007-06-27

    Weapons release authorization validation. • Weapons release verification. • Weapons release abort/back-out, including clean-up or reset of weapons...conditions, clean room, stress) and other environments (e.g. software engineering environment, electromagnetic) related to system utilization. Error 22 (1...A solid or liquid energetic substance (or a mixture of substances) which is in itself capable, OUSD (AT&L) Systems and Software Engineering

  3. Staged-Fault Testing of Distance Protection Relay Settings

    NASA Astrophysics Data System (ADS)

    Havelka, J.; Malarić, R.; Frlan, K.

    2012-01-01

    In order to analyze the operation of the protection system during induced fault testing in the Croatian power system, a simulation using the CAPE software has been performed. The CAPE software (Computer-Aided Protection Engineering) is expert software intended primarily for relay protection engineers, which calculates current and voltage values during faults in the power system, so that relay protection devices can be properly set up. Once the accuracy of the simulation model had been confirmed, a series of simulations were performed in order to obtain the optimal fault location to test the protection system. The simulation results were used to specify the test sequence definitions for the end-to-end relay testing using advanced testing equipment with GPS synchronization for secondary injection in protection schemes based on communication. The objective of the end-to-end testing was to perform field validation of the protection settings, including verification of the circuit breaker operation, telecommunication channel time and the effectiveness of the relay algorithms. Once the end-to-end secondary injection testing had been completed, the induced fault testing was performed with three-end lines loaded and in service. This paper describes and analyses the test procedure, consisting of CAPE simulations, end-to-end test with advanced secondary equipment and staged-fault test of a three-end power line in the Croatian transmission system.

  4. Software package for performing experiments about the convolutionally encoded Voyager 1 link

    NASA Technical Reports Server (NTRS)

    Cheng, U.

    1989-01-01

    A software package enabling engineers to conduct experiments to determine the actual performance of long constraint-length convolutional codes over the Voyager 1 communication link directly from the Jet Propulsion Laboratory (JPL) has been developed. Using this software, engineers are able to enter test data from the Laboratory in Pasadena, California. The software encodes the data and then sends the encoded data to a personal computer (PC) at the Goldstone Deep Space Complex (GDSC) over telephone lines. The encoded data are sent to the transmitter by the PC at GDSC. The received data, after being echoed back by Voyager 1, are first sent to the PC at GDSC, and then are sent back to the PC at the Laboratory over telephone lines for decoding and further analysis. All of these operations are fully integrated and are completely automatic. Engineers can control the entire software system from the Laboratory. The software encoder and the hardware decoder interface were developed for other applications, and have been modified appropriately for integration into the system so that their existence is transparent to the users. This software provides: (1) data entry facilities, (2) communication protocol for telephone links, (3) data displaying facilities, (4) integration with the software encoder and the hardware decoder, and (5) control functions.
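
    As an illustrative aside (not the paper's encoder), a software convolutional encoder can be sketched as below; the rate-1/2, constraint-length-7 code with generators 171/133 octal is a standard NASA code used here purely as a stand-in, whereas the experiments above targeted longer constraint lengths:

        # Sketch of a software convolutional encoder. The rate-1/2, K=7 code
        # (generators 171 and 133 octal) is a standard stand-in; the experiments
        # described in the record targeted long constraint-length codes.
        G1, G2, K = 0o171, 0o133, 7

        def encode(bits):
            """Yield two output bits per input bit from the shift-register state."""
            state = 0
            for b in bits:
                state = ((state << 1) | b) & ((1 << K) - 1)
                yield bin(state & G1).count("1") & 1
                yield bin(state & G2).count("1") & 1

        message = [1, 0, 1, 1, 0, 0, 1] + [0] * (K - 1)  # tail bits flush the encoder
        print(list(encode(message)))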

  5. Development of models and software for liquidus temperatures of glasses of HWVP products. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hrma, P.R.; Vienna, J.D.; Pelton, A.D.

    An earlier report [92 Pel] described the development of software and thermodynamic databases for the calculation of liquidus temperatures of glasses of HWVP products containing the components SiO{sub 2}-B{sub 2}O{sub 3}-Na{sub 2}O-Li{sub 2}O-CaO-MgO-Fe{sub 2}O{sub 3}-Al{sub 2}O{sub 3}-ZrO{sub 2}-"others". The software package developed at that time consisted of the EQUILIB program of the F*A*C*T computer system with special input/output routines. Since then, Battelle has purchased the entire F*A*C*T computer system, and this fully replaces the earlier package. With the entire F*A*C*T system, additional calculations can be performed, such as calculations at fixed O{sub 2}, SO{sub 2}, etc. pressures, or graphing of output. Furthermore, the public F*A*C*T database of over 5000 gaseous species and condensed phases is now accessible. The private databases for the glass and crystalline phases were developed for Battelle by optimization of thermodynamic and phase diagram data. That is, all available data for 2- and 3-component sub-systems of the 9-component oxide system were collected, and parameters of model equations for the thermodynamic properties were found which best reproduce all the data. For representing the thermodynamic properties of the glass as a function of composition and temperature, the modified quasichemical model was used. This model was described in the earlier report [92 Pel] along with all the optimizations. With the model, it was possible to predict the thermodynamic properties of the 9-component glass, and thereby to calculate liquidus temperatures. Liquidus temperatures measured by Battelle for 123 CVS glass compositions were used to test the model and to refine the model by the addition of further parameters.

  6. RRI-GBT MULTI-BAND RECEIVER: MOTIVATION, DESIGN, AND DEVELOPMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maan, Yogesh; Deshpande, Avinash A.; Chandrashekar, Vinutha

    2013-01-15

    We report the design and development of a self-contained multi-band receiver (MBR) system, intended for use with a single large aperture to facilitate sensitive and high time-resolution observations simultaneously in 10 discrete frequency bands sampling a wide spectral span (100-1500 MHz) in a nearly log-periodic fashion. The development of this system was primarily motivated by the need for tomographic studies of pulsar polar emission regions. Although the system design is optimized for the primary goal, it is also suited for several other interesting astronomical investigations. The system consists of a dual-polarization multi-band feed (with discrete responses corresponding to the 10 bands pre-selected as relatively radio frequency interference free), a common wide-band radio frequency front-end, and independent back-end receiver chains for the 10 individual sub-bands. The raw voltage time sequences corresponding to 16 MHz bandwidth each for the two linear polarization channels and the 10 bands are recorded at the Nyquist rate simultaneously. We present the preliminary results from the tests and pulsar observations carried out with the Robert C. Byrd Green Bank Telescope using this receiver. The system performance implied by these results and possible improvements are also briefly discussed.

  7. Maternity in Spanish elite sportswomen: a qualitative study.

    PubMed

    Martinez-Pascual, Beatriz; Alvarez-Harris, Sara; Fernández-De-Las-Peñas, César; Palacios-Ceña, Domingo

    2014-01-01

    The aim of this qualitative phenomenological study was to describe the experiences of maternity among Spanish elite sportswomen. Twenty (n = 20) Spanish elite sportswomen meeting the following criteria were included: (a) aged 18-65 years; (b) had been pregnant during their professional sporting career; and (c) after the end of their pregnancy they had returned to their professional sporting career for at least one year. A qualitative analysis was conducted. Data were collected using in-depth personal interviews, investigator's field notes, and extracts from the participants' personal letters. Identified themes included: (a) a new identity, with two sub-themes ("mother role" and "being visible"); (b) going back to sport, with three subthemes ("guilt appears," "justifying going back to sport," and "rediscovering sport"); and, (c) reaching a goal, with two subthemes ("balancing mother-sportswoman" and "the challenge of maternity"). Understanding the meaning of maternity for elite Spanish sportswomen might help gain deeper insight into their expectations and develop training systems focused on elite sportswomen after pregnancy.

  8. BrainIACS: a system for web-based medical image processing

    NASA Astrophysics Data System (ADS)

    Kishore, Bhaskar; Bazin, Pierre-Louis; Pham, Dzung L.

    2009-02-01

    We describe BrainIACS, a web-based medical image processing system that enables algorithm developers to quickly create extensible user interfaces for their algorithms. Designed to address the challenges faced by algorithm developers in providing user-friendly graphical interfaces, BrainIACS is implemented entirely using freely available, open-source software. The system, which is based on a client-server architecture, utilizes an AJAX front-end written using the Google Web Toolkit (GWT) and Java Servlets running on Apache Tomcat as its back-end. To enable developers to quickly and simply create user interfaces for configuring their algorithms, the interfaces are described using XML and are parsed by our system to create the corresponding user interface elements. Most of the commonly found elements, such as check boxes, drop-down lists, input boxes, radio buttons, tab panels and group boxes, are supported. Some elements, such as the input box, support input validation. Changes to the user interface, such as addition and deletion of elements, are performed by editing the XML file or by using the system's user interface creator. In addition to user interface generation, the system also provides its own interfaces for data transfer, previewing of input and output files, and algorithm queuing. As the system is programmed in Java (and ultimately JavaScript, after compilation of the front-end code), it is platform independent, with the only requirements being that a Servlet implementation be available and that the processing algorithms can execute on the server platform.

  9. Tag ID Subdivision Scheme for Efficient Authentication and Security-Enhancement of RFID System in USN

    NASA Astrophysics Data System (ADS)

    Lee, Kijeong; Park, Byungjoo; Park, Gil-Cheol

    Radio frequency identification (RFID) is a generic term used to describe a system that transmits the identity (in the form of a unique serial number) of an object or person wirelessly, using radio waves. However, there are security threats in the RFID system related to its technical components. For example, illegal RFID readers can read tag IDs and recognize most RFID readers, a security threat that needs in-depth attention. Previous studies offer some ideas on how to minimize these security threats, such as security protocols between the tag, the reader, and the back-end DB. In this research, the team proposes an RFID Tag ID Subdivision Scheme to authenticate only the permitted tags in a USN (Ubiquitous Sensor Network). Using the proposed scheme, the back-end DB authenticates selected tags only, minimizing security threats such as eavesdropping and decreasing traffic in the back-end DB.
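
    A minimal sketch of the subdivision idea, under the assumption that a tag ID splits into a group prefix and a serial; the field widths and the permitted set below are hypothetical, not the paper's scheme:

        # Hypothetical sketch: treat a tag ID as <group prefix><serial> and let
        # the back-end DB run the (expensive) full authentication only for
        # permitted group prefixes, reducing DB traffic for unwanted tags.
        PREFIX_LEN = 4                       # leading hex digits taken as the group
        PERMITTED_GROUPS = {"3005", "3006"}  # groups this reader may authenticate

        def subdivide(tag_id: str):
            return tag_id[:PREFIX_LEN], tag_id[PREFIX_LEN:]

        def authenticate(tag_id: str) -> bool:
            group, serial = subdivide(tag_id)
            if group not in PERMITTED_GROUPS:
                return False                 # rejected early: no back-end DB query
            return full_backend_check(group, serial)

        def full_backend_check(group: str, serial: str) -> bool:
            # Stand-in for the real protocol exchange with the back-end DB.
            return serial != ""

        for tag in ("3005A1B2C3", "9999DEADBEEF"):
            print(tag, "->", "accepted" if authenticate(tag) else "rejected")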

  10. End-To-End Simulation of Launch Vehicle Trajectories Including Stage Separation Dynamics

    NASA Technical Reports Server (NTRS)

    Albertson, Cindy W.; Tartabini, Paul V.; Pamadi, Bandu N.

    2012-01-01

    The development of methodologies, techniques, and tools for analysis and simulation of stage separation dynamics is critically needed for successful design and operation of multistage reusable launch vehicles. As a part of this activity, the Constraint Force Equation (CFE) methodology was developed and implemented in the Program to Optimize Simulated Trajectories II (POST2). The objective of this paper is to demonstrate the capability of POST2/CFE to simulate a complete end-to-end mission. The vehicle configuration selected was the Two-Stage-To-Orbit (TSTO) Langley Glide Back Booster (LGBB) bimese configuration, an in-house concept consisting of a reusable booster and an orbiter having identical outer mold lines. The proximity and isolated aerodynamic databases used for the simulation were assembled using wind-tunnel test data for this vehicle. POST2/CFE simulation results are presented for the entire mission, from lift-off, through stage separation, orbiter ascent to orbit, and booster glide back to the launch site. Additionally, POST2/CFE stage separation simulation results are compared with results from industry standard commercial software used for solving dynamics problems involving multiple bodies connected by joints.

  11. End-to-End ASR-Free Keyword Search From Speech

    NASA Astrophysics Data System (ADS)

    Audhkhasi, Kartik; Rosenberg, Andrew; Sethy, Abhinav; Ramabhadran, Bhuvana; Kingsbury, Brian

    2017-12-01

    End-to-end (E2E) systems have achieved competitive results compared to conventional hybrid hidden Markov model (HMM)-deep neural network based automatic speech recognition (ASR) systems. Such E2E systems are attractive due to the lack of dependence on alignments between input acoustic and output grapheme or HMM state sequence during training. This paper explores the design of an ASR-free end-to-end system for text query-based keyword search (KWS) from speech trained with minimal supervision. Our E2E KWS system consists of three sub-systems. The first sub-system is a recurrent neural network (RNN)-based acoustic auto-encoder trained to reconstruct the audio through a finite-dimensional representation. The second sub-system is a character-level RNN language model using embeddings learned from a convolutional neural network. Since the acoustic and text query embeddings occupy different representation spaces, they are input to a third feed-forward neural network that predicts whether the query occurs in the acoustic utterance or not. This E2E ASR-free KWS system performs respectably despite lacking a conventional ASR system and trains much faster.
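
    As a rough illustration of the third sub-system (not the authors' code), a feed-forward combiner over the two embedding spaces might look like the following numpy sketch; all dimensions and the single hidden layer are assumptions:

        # Minimal numpy sketch of a feed-forward network that takes an utterance
        # embedding and a query embedding from different spaces and predicts
        # whether the query occurs. Dimensions are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        D_AUDIO, D_QUERY, D_HIDDEN = 128, 64, 256

        W1 = rng.normal(0, 0.02, (D_AUDIO + D_QUERY, D_HIDDEN))
        b1 = np.zeros(D_HIDDEN)
        w2 = rng.normal(0, 0.02, D_HIDDEN)

        def occurs_probability(audio_emb, query_emb):
            x = np.concatenate([audio_emb, query_emb])     # join the two spaces
            h = np.maximum(x @ W1 + b1, 0.0)               # ReLU hidden layer
            return 1.0 / (1.0 + np.exp(-(h @ w2)))         # sigmoid score

        audio = rng.normal(size=D_AUDIO)   # stand-in for the RNN auto-encoder output
        query = rng.normal(size=D_QUERY)   # stand-in for the character-RNN embedding
        print(f"P(query in utterance) = {occurs_probability(audio, query):.3f}")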

  12. Structure evolution upon chemical and physical pressure in (Sr{sub 1−x}Ba{sub x}){sub 2}FeSbO{sub 6}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiittanen, T.; Karppinen, M., E-mail: maarit.karppinen@aalto.fi

    Here we demonstrate the gradual structural transformation from the monoclinic I2/m to tetragonal I4/m, cubic Fm-3m and hexagonal P6{sub 3}/mmc structure upon the isovalent larger-for-smaller A-site cation substitution in the B-site ordered double-perovskite system (Sr{sub 1−x}Ba{sub x}){sub 2}FeSbO{sub 6}. This is the same transformation sequence previously observed up to Fm-3m upon heating the parent Sr{sub 2}FeSbO{sub 6} phase to high temperatures. High-pressure treatment, on the other hand, transforms the hexagonal P6{sub 3}/mmc structure of the other end member Ba{sub 2}FeSbO{sub 6} back to the cubic Fm-3m structure. Hence we may conclude that chemical pressure, physical pressure and decreasing temperature all work towards the same direction in the (Sr{sub 1−x}Ba{sub x}){sub 2}FeSbO{sub 6} system. Also shown is that with increasing Ba-for-Sr substitution level, i.e. with decreasing chemical pressure effect, the degree-of-order among the B-site cations, Fe and Sb, decreases. - Graphical abstract: In the (Sr{sub 1−x}Ba{sub x}){sub 2}FeSbO{sub 6} double-perovskite system the gradual structural transformation from the monoclinic I2/m to tetragonal I4/m, cubic Fm-3m and hexagonal P6{sub 3}/mmc structure is seen upon the isovalent larger-for-smaller A-site cation substitution. High-pressure treatment under 4 GPa extends the stability of the cubic Fm-3m structure within a wider substitution range of x. - Highlights: • Gradual structural transitions upon A-cation substitution in (Sr{sub 1−x}Ba{sub x}){sub 2}FeSbO{sub 6}. • With increasing x the structure changes from I2/m to I4/m, Fm-3m and P6{sub 3}/mmc. • Degree of B-site order decreases with increasing x and A-site cation radius. • High-pressure treatment extends cubic Fm-3m phase stability over a wider x range. • High-pressure treatment affects bond lengths mostly around the A-cation.

  13. Embracing Open Source for NASA's Earth Science Data Systems

    NASA Technical Reports Server (NTRS)

    Baynes, Katie; Pilone, Dan; Boller, Ryan; Meyer, David; Murphy, Kevin

    2017-01-01

    The overarching purpose of NASA's Earth Science program is to develop a scientific understanding of Earth as a system. Scientific knowledge is most robust and actionable when resulting from transparent, traceable, and reproducible methods. Reproducibility includes open access to the data as well as the software used to arrive at results. Additionally, software that is custom-developed for NASA should be open to the greatest degree possible, to enable re-use across Federal agencies, reduce overall costs to the government, remove barriers to innovation, and promote consistency through the use of uniform standards. Finally, Open Source Software (OSS) practices facilitate collaboration between agencies and the private sector. To best meet these ends, NASA's Earth Science Division promotes the full and open sharing of not only all data, metadata, products, information, documentation, models, images, and research results but also the source code used to generate, manipulate and analyze them. This talk focuses on the challenges of open sourcing NASA-developed software within ESD and the growing pains associated with establishing policies running the gamut of tracking issues, properly documenting build processes, engaging the open source community, maintaining internal compliance, and accepting contributions from external sources. This talk also covers the adoption of existing open source technologies and standards to enhance our custom solutions and our contributions back to the community. Finally, we will be introducing the most recent OSS contributions from the NASA Earth Science program and promoting these projects for wider community review and adoption.

  14. End-to-end communication test on variable length packet structures utilizing AOS testbed

    NASA Technical Reports Server (NTRS)

    Miller, Warner H.; Sank, V.; Fong, Wai; Miko, J.; Powers, M.; Folk, John; Conaway, B.; Michael, K.; Yeh, Pen-Shu

    1994-01-01

    This paper describes a communication test, which successfully demonstrated the transfer of losslessly compressed images in an end-to-end system. These compressed images were first formatted into variable length Consultative Committee for Space Data Systems (CCSDS) packets in the Advanced Orbiting System Testbed (AOST). The CCSDS data structures were transferred from the AOST to the Radio Frequency Simulations Operations Center (RFSOC), via a fiber optic link, where data was then transmitted through the Tracking and Data Relay Satellite System (TDRSS). The received data acquired at the White Sands Complex (WSC) was transferred back to the AOST, where the data was captured and decompressed back to the original images. This paper describes the compression algorithm, the AOST configuration, key flight components, data formats, and the communication link characteristics and test results.
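
    For illustration (not from the testbed software), a variable-length CCSDS space packet can be assembled as below; the 6-byte primary header layout follows the CCSDS packet standard, while the APID and payload are example values:

        # Sketch of building a variable-length CCSDS space packet. The primary
        # header fields follow the CCSDS packet standard; APID and payload are
        # example values only.
        import struct

        def ccsds_packet(apid: int, seq_count: int, payload: bytes) -> bytes:
            version, pkt_type, sec_hdr = 0, 0, 0          # telemetry, no secondary header
            word1 = (version << 13) | (pkt_type << 12) | (sec_hdr << 11) | (apid & 0x7FF)
            word2 = (0b11 << 14) | (seq_count & 0x3FFF)   # '11' = unsegmented data
            length = len(payload) - 1                     # CCSDS: data byte count minus 1
            return struct.pack(">HHH", word1, word2, length) + payload

        pkt = ccsds_packet(apid=0x42, seq_count=7, payload=b"compressed image slice")
        print(pkt.hex(" "))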

  15. National Geothermal Data System: Open Access to Geoscience Data, Maps, and Documents

    NASA Astrophysics Data System (ADS)

    Caudill, C. M.; Richard, S. M.; Musil, L.; Sonnenschein, A.; Good, J.

    2014-12-01

    The U.S. National Geothermal Data System (NGDS) provides free open access to millions of geoscience data records, publications, maps, and reports via distributed web services to propel geothermal research, development, and production. NGDS is built on the US Geoscience Information Network (USGIN) data integration framework, which is a joint undertaking of the USGS and the Association of American State Geologists (AASG), and is compliant with international standards and protocols. NGDS currently serves geoscience information from 60+ data providers in all 50 states. Free and open source software is used in this federated system where data owners maintain control of their data. This interactive online system makes geoscience data easily discoverable, accessible, and interoperable at no cost to users. The dynamic project site http://geothermaldata.org serves as the information source and gateway to the system, allowing data and applications discovery and availability of the system's data feed. It also provides access to NGDS specifications and the free and open source code base (on GitHub), a map-centric and library style search interface, other software applications utilizing NGDS services, NGDS tutorials (via YouTube and USGIN site), and user-created tools and scripts. The user-friendly map-centric web-based application has been created to support finding, visualizing, mapping, and acquisition of data based on topic, location, time, provider, or key words. Geographic datasets visualized through the map interface also allow users to inspect the details of individual GIS data points (e.g. wells, geologic units, etc.). In addition, the interface provides the information necessary for users to access the GIS data from third party software applications such as Google Earth, UDig, and ArcGIS. A redistributable, free and open source software package called GINstack (USGIN software stack) was also created to give data providers a simple way to release data using interoperable and shareable standards, upload data and documents, and expose those data as a node in the NGDS or any larger data system through a CSW endpoint. The easy-to-use interface is supported by back-end software including Postgres, GeoServer, and custom CKAN extensions among others.
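
    As a hedged sketch of how a client might query such a CSW endpoint (the URL is a placeholder; the key/value parameters are standard OGC CSW 2.0.2, not NGDS-specific):

        # Sketch of querying a CSW endpoint such as the one GINstack exposes.
        # The endpoint URL is a placeholder; the parameters follow OGC CSW 2.0.2.
        import requests

        CSW_URL = "https://example.org/csw"  # placeholder for a deployed GINstack node

        resp = requests.get(CSW_URL, params={
            "service": "CSW",
            "version": "2.0.2",
            "request": "GetRecords",
            "typeNames": "csw:Record",
            "elementSetName": "brief",
            "resultType": "results",
            "constraintLanguage": "CQL_TEXT",
            "constraint": "AnyText LIKE '%geothermal%'",
        })
        resp.raise_for_status()
        print(resp.text[:500])  # brief Dublin Core records, returned as XML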

  16. Rapid Pneumatic Transport of Radioactive Samples - RaPToRS

    NASA Astrophysics Data System (ADS)

    Padalino, S.; Barrios, M.; Sangster, C.

    2005-10-01

    Some ICF neutron activation diagnostics require quick retrieval of the activated sample. Minimizing retrieval times is particularly important when the half-life of the activated material is on the order of the transport time or the degree of radioactivity is close to the background counting level. These restrictions exist in current experiments performed at the Laboratory for Laser Energetics, thus motivating the development of the RaPToRS system. The system has been designed to minimize transportation time while requiring no human intervention during transport or counting. These factors will be important if the system is to be used at the NIF, where radiological hazards will be present after activation. The sample carrier is pneumatically transported via a 4 inch ID PVC pipe to a remote location in excess of 100 meters from the activation site at a speed of approximately 7 m/s. It arrives at an end station where it is dismounted robotically from the carrier and removed from its hermetic package. The sample is then placed by the robot in a counting station. This system is currently being developed to measure back-to-back gamma rays produced by positron annihilation in activated graphite. Funded in part by the U.S. DOE under subcontract with LLE at the University of Rochester.

  17. IMAPS Device Packaging Conference 2017 - Engineered Micro Systems & Devices Track

    NASA Technical Reports Server (NTRS)

    Varnavas, Kosta

    2017-01-01

    NASA field center Marshall Space Flight Center (Huntsville, AL) has invested in advanced wireless sensor technology development. Developments for a wireless microcontroller back-end were primarily focused on the commercial Synapse Wireless family of devices. These devices have many features useful for NASA applications, good characteristics, and the ability to be programmed Over-The-Air (OTA). The effort has focused on two widely used sensor types: mechanical strain gauges and thermal sensors. Mechanical strain gauges are used extensively in NASA structural testing and even on vehicle instrumentation systems. Additionally, thermal monitoring with many types of sensors is used extensively. These thermal sensors include thermocouples of all types, resistive temperature devices (RTDs), diodes, and other thermal sensor types. The wireless thermal board will accommodate all of these sensor inputs to an analog front end. The analog front end on each of the sensors interfaces to the Synapse wireless microcontroller, based on the Atmel Atmega128 device. Once the analog sensor output is digitized by the onboard analog-to-digital converter (A/D), the data is available for analysis, computation, or transmission. Various hardware features allow custom embedded software to manage battery power to enhance battery life. This technology development fits nicely into using numerous additional sensor front ends, including some of the low-cost printed circuit board capacitive moisture content sensors currently being developed at Auburn University.

  18. lumpR 2.0.0: an R package facilitating landscape discretisation for hillslope-based hydrological models

    NASA Astrophysics Data System (ADS)

    Pilz, Tobias; Francke, Till; Bronstert, Axel

    2017-08-01

    The characteristics of a landscape pose essential factors for hydrological processes. Therefore, an adequate representation of the landscape of a catchment in hydrological models is vital. However, many such models exist, differing, amongst other things, in spatial concept and discretisation. The latter constitutes an essential pre-processing step, for which many different algorithms along with numerous software implementations exist. In that context, existing solutions are often model-specific, commercial, or dependent on commercial back-end software, and allow only limited workflow automation or none at all. Consequently, a new package for the scientific software and scripting environment R, called lumpR, was developed. lumpR employs an algorithm for hillslope-based landscape discretisation directed at large-scale application via a hierarchical multi-scale approach. The package addresses existing limitations as it is free and open source, easily extendible to other hydrological models, and the workflow can be fully automated. Moreover, it is user-friendly, as the direct coupling to a GIS allows for immediate visual inspection and manual adjustment. Sufficient control is furthermore retained via parameter specification and the option to include expert knowledge. Conversely, completely automatic operation also allows for extensive analysis of aspects related to landscape discretisation. In a case study, the application of the package is presented. A sensitivity analysis of the most important discretisation parameters demonstrates its efficient workflow automation. Considering multiple streamflow metrics, the employed model proved reasonably robust to the discretisation parameters. However, parameters determining the sizes of subbasins and hillslopes proved to be more important than the others, including the number of representative hillslopes, the number of attributes employed for the lumping algorithm, and the number of sub-discretisations of the representative hillslopes.

  19. NEWFIRM Software--System Integration Using OPC

    NASA Astrophysics Data System (ADS)

    Daly, P. N.

    2004-07-01

    The NOAO Extremely Wide-Field Infra-Red Mosaic (NEWFIRM) camera is being built to satisfy the survey science requirements on the KPNO Mayall and CTIO Blanco 4m telescopes in an era of 8m+ aperture telescopes. Rather than re-invent the wheel, the software system to control the instrument has taken existing software packages and re-used what is appropriate. The result is an end-to-end observation control system using technology components from DRAMA, ORAC, observing tools, GWC, existing in-house motor controllers and new developments like the MONSOON pixel server.

  20. LightWAVE: Waveform and Annotation Viewing and Editing in a Web Browser.

    PubMed

    Moody, George B

    2013-09-01

    This paper describes LightWAVE, recently-developed open-source software for viewing ECGs and other physiologic waveforms and associated annotations (event markers). It supports efficient interactive creation and modification of annotations, capabilities that are essential for building new collections of physiologic signals and time series for research. LightWAVE is constructed of components that interact in simple ways, making it straightforward to enhance or replace any of them. The back end (server) is a common gateway interface (CGI) application written in C for speed and efficiency. It retrieves data from its data repository (PhysioNet's open-access PhysioBank archives by default, or any set of files or web pages structured as in PhysioBank) and delivers them in response to requests generated by the front end. The front end (client) is a web application written in JavaScript. It runs within any modern web browser and does not require installation on the user's computer, tablet, or phone. Finally, LightWAVE's scribe is a tiny CGI application written in Perl, which records the user's edits in annotation files. LightWAVE's data repository, back end, and front end can be located on the same computer or on separate computers. The data repository may be split across multiple computers. For compatibility with the standard browser security model, the front end and the scribe must be loaded from the same domain.

  1. Internal monitoring of GBTx emulator using IPbus for CBM experiment

    NASA Astrophysics Data System (ADS)

    Mandal, Swagata; Zabolotny, Wojciech; Sau, Suman; Chkrabarti, Amlan; Saini, Jogender; Chattopadhyay, Subhasis; Pal, Sushanta Kumar

    2015-09-01

    The Compressed Baryonic Matter (CBM) experiment is a part of the Facility for Antiproton and Ion Research (FAIR) in Darmstadt at GSI. The CBM experiment requires precisely time-synchronized, fault-tolerant, self-triggered electronics for its Data Acquisition (DAQ) system, which must support high data rates (up to several TB/s). As part of the implementation of the DAQ system of the Muon Chamber (MUCH), one of the important detectors in the CBM experiment, an FPGA-based Gigabit Transceiver (GBTx) emulator has been implemented. The readout chain for MUCH consists of XYTER chips (front-end electronics) directly connected to the detector, the GBTx emulator, the Data Processing Board (DPB), and the First Level Event Selector board (FLIB) with a back-end software interface. The GBTx emulator is connected to the XYTER emulator through LVDS (Low Voltage Differential Signalling) lines at the front end, and at the back end it is connected to the DPB through a 4.8 Gbps optical link. IPbus over Ethernet is used for internal monitoring of the registers within the GBTx. In the IPbus implementation, the User Datagram Protocol (UDP) is used at the transport layer of the OSI model so that the GBTx can be controlled remotely. A Python script is used on the computer side to drive the IPbus controller.
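
    A minimal sketch of the computer-side control path described above, assuming a hypothetical emulator address; a real client must frame its payload per the IPbus 2.0 specification (e.g. via the uHAL library) rather than hand-building datagrams as here:

        # Sketch of a Python script sending a request datagram to the GBTx
        # emulator's IPbus-over-UDP endpoint and waiting for the status reply.
        # Host, port, and the raw payload are placeholders, not valid IPbus framing.
        import socket

        EMULATOR = ("192.168.0.10", 50001)   # hypothetical GBTx emulator endpoint
        request = bytes.fromhex("20000010")  # placeholder word, NOT a real IPbus header

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(2.0)
        try:
            sock.sendto(request, EMULATOR)
            reply, _ = sock.recvfrom(4096)   # register contents come back in the reply
            print("reply:", reply.hex(" "))
        except socket.timeout:
            print("no reply: check link or emulator state")
        finally:
            sock.close()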

  2. WMS Server 2.0

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian; Wood, James F.

    2012-01-01

    This software is a simple, yet flexible server of raster map products, compliant with the Open Geospatial Consortium (OGC) Web Map Service (WMS) 1.1.1 protocol. The server is a full implementation of the OGC WMS 1.1.1, running as a fastCGI client and using the Geospatial Data Abstraction Library (GDAL) for data access. The server can operate in a proxy mode, where all or part of the WMS requests are fulfilled by a back-end server. The server has explicit support for a colocated tiled WMS, including rapid response to black (no-data) requests. It generates JPEG and PNG images, including 16-bit PNG. The GDAL back-end support allows great flexibility in data access. The server is a port to a Linux/GDAL platform from the original IRIX/IL platform. It is simpler to configure and use and, depending on the storage format used, has better performance than other available implementations. WMS server 2.0 is a high-performance WMS implementation due to the fastCGI architecture. The use of the GDAL data back end allows for great flexibility. The configuration is relatively simple, based on a single XML file. The server provides scaling and cropping, as well as blending of multiple layers based on layer transparency.
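
    For illustration, a client request this server answers might look like the following; the URL and layer name are placeholders, while the key/value parameters are standard OGC WMS 1.1.1 GetMap:

        # Sketch of a standard WMS 1.1.1 GetMap request. The server URL and the
        # layer name are placeholder assumptions; the parameters follow the spec.
        import requests

        WMS_URL = "https://example.org/wms"   # placeholder server location

        resp = requests.get(WMS_URL, params={
            "service": "WMS",
            "version": "1.1.1",
            "request": "GetMap",
            "layers": "global_mosaic",        # hypothetical layer name
            "styles": "",
            "srs": "EPSG:4326",
            "bbox": "-180,-90,180,90",
            "width": "1024",
            "height": "512",
            "format": "image/jpeg",
        })
        resp.raise_for_status()
        with open("map.jpg", "wb") as f:
            f.write(resp.content)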

  3. Award ER25750: Coordinated Infrastructure for Fault Tolerance Systems Indiana University Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumsdaine, Andrew

    2013-03-08

    The main purpose of the Coordinated Infrastructure for Fault Tolerance in Systems initiative has been to conduct research with a goal of providing end-to-end fault tolerance on a systemwide basis for applications and other system software. While fault tolerance has been an integral part of most high-performance computing (HPC) system software developed over the past decade, it has been treated mostly as a collection of isolated stovepipes. Visibility and response to faults has typically been limited to the particular hardware and software subsystems in which they are initially observed. Little fault information is shared across subsystems, allowing little flexibility or control on a system-wide basis, making it practically impossible to provide cohesive end-to-end fault tolerance in support of scientific applications. As an example, consider faults such as communication link failures that can be seen by a network library but are not directly visible to the job scheduler, or consider faults related to node failures that can be detected by system monitoring software but are not inherently visible to the resource manager. If information about such faults could be shared by the network libraries or monitoring software, then other system software, such as a resource manager or job scheduler, could ensure that failed nodes or failed network links were excluded from further job allocations and that further diagnosis could be performed. As a founding member and one of the lead developers of the Open MPI project, our efforts over the course of this project have been focused on making Open MPI more robust to failures by supporting various fault tolerance techniques, and using fault information exchange and coordination between MPI and the HPC system software stack from the application, numeric libraries, and programming language runtime to other common system components such as job schedulers, resource managers, and monitoring tools.

  4. HTS flywheel energy storage system with rotor shaft stabilized by feed-back control of armature currents of motor-generator

    NASA Astrophysics Data System (ADS)

    Tsukamoto, O.; Utsunomiya, A.

    2007-10-01

    We propose an HTS bulk bearing flywheel energy storage system (FWES) with a rotor shaft stabilization system using feed-back control of the armature currents of the motor-generator. In the proposed system the rotor shaft has a pivot bearing at one end of the shaft and an HTS bulk bearing (SMB) at the other end. The fluctuation of the rotor shaft with the SMB is damped by feed-back control of the armature currents of the motor-generator, sensing the position of the rotor shaft. The method has the merit that the fluctuations are damped without active-control magnetic bearings or extra devices, which may deteriorate the energy storage efficiency and add costs. The principle of the method was demonstrated by an experiment using a model permanent magnet motor.

  5. Subcutaneous Stimulation as an Additional Therapy to Spinal Cord Stimulation for the Treatment of Low Back Pain and Leg Pain in Failed Back Surgery Syndrome: Four-Year Follow-Up.

    PubMed

    Hamm-Faber, Tanja E; Aukes, Hans; van Gorp, Eric-Jan; Gültuna, Ismail

    2015-10-01

    The objective of this study is to investigate the efficacy of long-term follow-up of subcutaneous stimulation (SubQ) as an additional therapy for patients with failed back surgery syndrome (FBSS) with chronic refractory pain, for whom spinal cord stimulation (SCS) alone was unsuccessful in treating low back pain. Prospective case series. FBSS patients with leg and/or low back pain whose conventional therapies had failed received a combination of SCS (8-contact Octad lead, 3877-45 cm, Medtronic, Minneapolis, MN, USA) and/or SubQ (4-contact Quad Plus lead(s), 2888-28 cm, Medtronic). Initially, an Octad lead was placed in the epidural space for SCS for a trial stimulation to assess the suppression of leg and/or low back pain. Where SCS alone was insufficient in treating low back pain, lead(s) were placed superficially in the subcutaneous tissue of the lower back, exactly in the middle of the pain area. A pulse generator (Prime Advanced, 37702, Medtronic) was implanted if the patient reported more than 50% pain relief during the trial period. We investigated the long-term effect of neuromodulation on pain with the visual analog scale (VAS), and disability using the Quebec Pain Disability Scale. The results after 46 months are presented. Eleven patients, five men and six women (age 51 ± 8 years, mean ± SD), were included in the pilot study. In nine cases, SCS was used in combination with SubQ leads. Two patients received only SubQ leads. In one patient, the SCS + SubQ system was removed after nine months and these results were not taken into account for the analysis. Baseline scores for leg (N = 8) and low back pain (N = 10) were VASbl: 59 ± 15 and VASbl: 63 ± 14, respectively. The long-term follow-up period was 46 ± 4 months. SCS significantly reduced leg pain after 12 months (VAS12: 20 ± 11, p12 = 0.001) and 46 months (VAS46: 37 ± 17, p46 = 0.027). Similarly, SubQ significantly reduced back pain after 12 months (VAS12: 33 ± 16, p12 = 0.001) and 46 months (VAS46: 40 ± 21, p46 = 0.013). At 12 months, the Quebec Pain Disability Scale (QPDS) was 49 ± 12 and after 46 months, 53 ± 15. Both at 12 and 46 months, the QPDS values were statistically significantly better (p12 = 0.001, p46 = 0.04) compared with baseline values (QPDSbl: 61 ± 15). In one patient, the pain-suppressive effect of SCS/SubQ had disappeared completely over time and the pain scores returned to prestimulation values. In four patients, back pain scores increased over time due to new issues (SI-joint problems, degenerative spine problems, disc problems, and hip pain) unrelated to FBSS and for which SCS/SubQ was not targeted or a reason for implantation at the start of the pilot study. This is the first prospective report on the combined use of SCS and SubQ with a follow-up period of four years. These data show that SCS and/or SubQ provide persistent long-term pain relief for leg and back pain in patients with FBSS. One should also take into account that new back/leg pain problems may evolve over time and increase the pain score, which impacts overall pain treatment. SCS combined with SubQ can be considered an effective long-term treatment for low back pain in patients with FBSS for whom SCS alone is insufficient in alleviating their pain symptoms. © 2015 International Neuromodulation Society.

  6. Automatic Debugging Support for UML Designs

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Swanson, Keith (Technical Monitor)

    2001-01-01

    Design of large software systems requires rigorous application of software engineering methods covering all phases of the software process. Debugging during the early design phases is extremely important, because late bug-fixes are expensive. In this paper, we describe an approach which facilitates debugging of UML requirements and designs. The Unified Modeling Language (UML) is a set of notations for object-oriented design of a software system. We have developed an algorithm which translates requirement specifications in the form of annotated sequence diagrams into structured statecharts. This algorithm detects conflicts between sequence diagrams and inconsistencies in the domain knowledge. After synthesizing statecharts from sequence diagrams, these statecharts usually are subject to manual modification and refinement. By using the "backward" direction of our synthesis algorithm, we are able to map modifications made to the statechart back into the requirements (sequence diagrams) and check for conflicts there. Fed back to the user, conflicts detected by our algorithm are the basis for deductive-based debugging of requirements and domain theory in very early development stages. Our approach allows us to generate explanations of why there is a conflict and which parts of the specifications are affected.

  7. Tools for Modeling & Simulation of Molecular and Nanoelectronics Devices

    DTIC Science & Technology

    2012-06-14

    implemented a prototype DFT simulation software using two different open source Finite Element (FE) libraries: deal.II and FEniCS. These two libraries have been...ATK. In the first part of this Phase I project we investigated two different candidate finite element libraries, deal.II and FEniCS. Although both...element libraries, deal.II and FEniCS/dolfin, for use as back-ends to a finite element DFT in ATK, Quantum Insight and QuantumWise A/S, October 2011.

  8. Operational Suitability Guide. Volume 2. Templates

    DTIC Science & Technology

    1990-05-01

    Intended mission, and the required technical and operational characteristics. The mission must be adequately defined and key hardware and software ...operational availability. With the use of fault-tolerant computer hardware and software, the system R&M will significantly improve end-to-end...should include both hardware and software elements, as appropriate. Unique characteristics or unique support concepts should be identified if they result

  9. Using Google Earth for Submarine Operations at Pavilion Lake

    NASA Astrophysics Data System (ADS)

    Deans, M. C.; Lees, D. S.; Fong, T.; Lim, D. S.

    2009-12-01

    During the July 2009 Pavilion Lake field test, we supported submarine "flight" operations using Google Earth. The Intelligent Robotics Group at NASA Ames has experience with ground data systems for NASA missions, earth analog field tests, disaster response, and the Gigapan camera system. Leveraging this expertise and existing software, we put together a set of tools to support sub tracking and mapping, called the "Surface Data System." This system supports flight planning, real time flight operations, and post-flight analysis. For planning, we make overlays of the regional bedrock geology, sonar bathymetry, and sonar backscatter maps that show geology, depth, and structure of the bottom. Placemarks show the mooring locations for start and end points. Flight plans are shown as polylines with icons for waypoints. Flight tracks and imagery from previous field seasons are embedded in the map for planning follow-on activities. These data provide context for flight planning. During flights, sub position is updated every 5 seconds from the nav computer on the chase boat. We periodically update tracking KML files and refresh them with network links. A sub icon shows current location of the sub. A compass rose shows bearings to indicate heading to the next waypoint. A "Science Stenographer" listens on the voice loop and transcribes significant observations in real time. Observations called up to the surface immediately appear on the map as icons with date, time, position, and what was said. After each flight, the science back room immediately has the flight track and georeferenced notes from the pilots. We add additional information in post-processing. The submarines record video continuously, with "event" timestamps marked by the pilot. We cross-correlate the event timestamps with position logs to geolocate events and put a preview image and compressed video clip into the map. Animated flight tracks are also generated, showing timestamped position and providing timelapse playback of the flight. Neogeography tools are increasing in popularity and offer an excellent platform for geoinformatics. The scientists on the team are already familiar with Google Earth, eliminating up-front training on new tools. The flight maps and archived data are available immediately and in a usable format. Google Earth provides lots of measurement tools, annotation tools, and other built-in functions that we can use to create and analyze the map. All of this information is saved to a shared filesystem so that everyone on the team has access to all of the same map data. After the field season, the map data will be used by the team to analyse and correlate information from across the lake and across different flights to support their research, and to plan next year's activities.
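
    A minimal sketch of the refresh mechanism described above (not the team's actual tooling): write the sub's position to a KML file and let a Google Earth NetworkLink re-fetch it on an interval; the file names and coordinates are placeholders, and the 5-second interval mirrors the update rate in the record:

        # Sketch: publish the sub's current position as a KML Placemark and point
        # Google Earth at a NetworkLink that re-fetches the file every 5 seconds.
        # File names and coordinates are illustrative placeholders.
        KML = """<?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2"><Document>
          <Placemark>
            <name>Sub (lat {lat:.5f}, lon {lon:.5f})</name>
            <Point><coordinates>{lon},{lat},0</coordinates></Point>
          </Placemark>
        </Document></kml>"""

        LINK = """<?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2">
          <NetworkLink>
            <name>Sub tracker</name>
            <Link><href>sub_position.kml</href>
              <refreshMode>onInterval</refreshMode>
              <refreshInterval>5</refreshInterval></Link>
          </NetworkLink>
        </kml>"""

        def update_position(lat: float, lon: float) -> None:
            with open("sub_position.kml", "w") as f:
                f.write(KML.format(lat=lat, lon=lon))

        with open("tracker_link.kml", "w") as f:   # loaded once in Google Earth
            f.write(LINK)
        update_position(50.86600, -121.74000)      # called from the nav data loop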

  10. Integrated circuits for volumetric ultrasound imaging with 2-D CMUT arrays.

    PubMed

    Bhuyan, Anshuman; Choe, Jung Woo; Lee, Byung Chul; Wygant, Ira O; Nikoozadeh, Amin; Oralkan, Ömer; Khuri-Yakub, Butrus T

    2013-12-01

    Real-time volumetric ultrasound imaging systems require transmit and receive circuitry to generate ultrasound beams and process received echo signals. The complexity of building such a system is high due to the requirement that the front-end electronics be very close to the transducer. A large number of elements also need to be interfaced to the back-end system, and image processing of a large dataset could affect the imaging volume rate. In this work, we present a 3-D imaging system using capacitive micromachined ultrasonic transducer (CMUT) technology that addresses many of the challenges in building such a system. We demonstrate two approaches to integrating the transducer and the front-end electronics. The transducer is a 5-MHz CMUT array with an 8 mm × 8 mm aperture size. The aperture consists of 1024 elements (32 × 32) with an element pitch of 250 μm. An integrated circuit (IC) consists of a transmit beamformer and receive circuitry to improve the noise performance of the overall system. The assembly was interfaced with an FPGA and a back-end system (comprising a data acquisition system and a PC). The FPGA provided the digital I/O signals for the IC, and the back-end system was used to process the received RF echo data (from the IC) and reconstruct the volume image using a phased array imaging approach. Imaging experiments were performed using wire and spring targets, a ventricle model, and a human prostate. Real-time volumetric images were captured at 5 volumes per second and are presented in this paper.

  11. Parallel, Real-Time and Pipeline Data Reduction for the ROVER Sub-mm Heterodyne Polarimeter on the JCMT with ACSIS and ORAC-DR

    NASA Astrophysics Data System (ADS)

    Leech, J.; Dewitt, S.; Jenness, T.; Greaves, J.; Lightfoot, J. F.

    2005-12-01

    ROVER is a rotating waveplate polarimeter for use with (sub)mm heterodyne instruments, particularly the 16 element focal plane Heterodyne Array Receiver HARP (Smit2003), due for commissioning on the JCMT in 2004. The ROVER/HARP back-end will be a digital auto-correlation spectrometer, known as ACSIS, designed specifically for the demanding data volumes from the HARP array receiver. ACSIS is being developed by DRAO, Penticton and UKATC. This paper will describe the data reduction of ROVER polarimetry data both in real time by ACSIS-DR, and through the ORAC-DR data reduction pipeline.

  12. Using CAD software to simulate PV energy yield - The case of product integrated photovoltaic operated under indoor solar irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reich, N.H.; van Sark, W.G.J.H.M.; Turkenburg, W.C.

    2010-08-15

    In this paper, we show that photovoltaic (PV) energy yields can be simulated using standard rendering and ray-tracing features of Computer Aided Design (CAD) software. To this end, three-dimensional (3-D) sceneries are ray-traced in CAD. The PV power output is then modeled by translating irradiance intensity data of rendered images back into numerical data. To ensure accurate results, the solar irradiation data used as input is compared to numerical data obtained from rendered images, showing excellent agreement. As expected, the ray-tracing precision of the CAD software also proves to be very high. To demonstrate PV energy yield simulations using this innovative concept, solar radiation time course data of a few days was modeled in 3-D to simulate distributions of irradiance incident on flat, single- and double-bend shapes and a PV-powered computer mouse located on a window sill. Comparisons of measured to simulated PV output of the mouse show that simulation accuracies can also be very high in practice. Theoretically, this concept has great potential, as it can be adapted to suit a wide range of solar energy applications, such as sun-tracking and concentrator systems, Building Integrated PV (BIPV) or Product Integrated PV (PIPV). However, graphical user interfaces of 'CAD-PV' software tools are not yet available.
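    In outline, the translation step reduces to calibrating rendered pixel values against a known irradiance and summing over the module surface. The helper below is a hedged sketch of that step; the calibration factor, patch area, and efficiency value are placeholders, not numbers from the paper.

    ```python
    import numpy as np

    def pv_power_from_render(pixels, w_per_count, patch_area_m2, efficiency=0.15):
        """Translate rendered irradiance intensities back into PV power.

        pixels        : 2-D array of rendered pixel values covering the module
        w_per_count   : W/m^2 per pixel count, from rendering a reference scene
                        with known irradiance (the comparison step in the paper)
        patch_area_m2 : module area represented by one pixel
        """
        irradiance = np.asarray(pixels, dtype=float) * w_per_count  # W/m^2
        return float((irradiance * patch_area_m2).sum() * efficiency)  # watts
    ```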

  13. Development of automation software for neutron activation analysis process in Malaysian nuclear agency

    NASA Astrophysics Data System (ADS)

    Yussup, N.; Rahman, N. A. A.; Ibrahim, M. M.; Mokhtar, M.; Salim, N. A. A.; Soh@Shaari, S. C.; Azman, A.

    2017-01-01

    The Neutron Activation Analysis (NAA) process has been established at the Malaysian Nuclear Agency (Nuclear Malaysia) since the 1980s. Most of the established procedures, especially from sample registration to sample analysis, are performed manually. These manual procedures carried out by the NAA laboratory personnel are time-consuming and inefficient. Hence, software was developed to automate the process, providing an effective way to replace redundant manual data entry and to speed up sample analysis and calculation. This paper describes the design and development of the automation software for the NAA process, which consists of three sub-programs: sample registration; hardware control and data acquisition; and sample analysis. The data flow and connections between the sub-programs are explained. The software is developed using the National Instruments LabVIEW development package.

  14. Back-end and interface implementation of the STS-XYTER2 prototype ASIC for the CBM experiment

    NASA Astrophysics Data System (ADS)

    Kasinski, K.; Szczygiel, R.; Zabolotny, W.

    2016-11-01

    Every front-end readout ASIC for High-Energy Physics experiments requires a robust and effective hit-data streaming and control mechanism. The new STS-XYTER2 full-size prototype chip for the Silicon Tracking System and Muon Chamber detectors in the Compressed Baryonic Matter experiment at the Facility for Antiproton and Ion Research (FAIR, Germany) is a 128-channel time and amplitude measuring solution for silicon microstrip and gas detectors. It operates at a 250 kHit/s/channel hit rate, each hit producing 27 bits of information (5-bit amplitude, 14-bit timestamp, position and diagnostics data). The chip back-end implements fast front-end channel read-out, timestamp-wise hit sorting, and data streaming via a scalable interface implementing a dedicated protocol (STS-HCTSP) for chip control and hit transfer, with data bandwidth from 9.7 MHit/s up to 47 MHit/s. It also includes multiple options for link diagnostics, failure detection, and throttling. The back-end is designed to operate with the data acquisition architecture based on the CERN GBTx transceivers. This paper presents the details of the back-end and interface design and its implementation in the UMC 180 nm CMOS process.
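    The 27-bit hit word invites a concrete illustration. Only 5 + 14 = 19 of the bits are fixed by the abstract; the split of the remaining 8 bits into a 7-bit channel address (128 channels) plus one diagnostic flag is an assumption for this example, since the real field layout is defined by the STS-HCTSP protocol.

    ```python
    # Illustrative (assumed) layout of the 27-bit hit word:
    # [26:20] channel (7 bits), [19:6] timestamp (14 bits),
    # [5:1] amplitude (5 bits), [0] diagnostic flag.

    def pack_hit(channel, timestamp, amplitude, diag=0):
        assert 0 <= channel < 128 and 0 <= timestamp < 1 << 14 and 0 <= amplitude < 32
        return (channel << 20) | (timestamp << 6) | (amplitude << 1) | (diag & 1)

    def unpack_hit(word):
        return {
            "channel":   (word >> 20) & 0x7F,
            "timestamp": (word >> 6) & 0x3FFF,
            "amplitude": (word >> 1) & 0x1F,
            "diag":      word & 0x1,
        }

    assert unpack_hit(pack_hit(42, 12345, 17, 1)) == {
        "channel": 42, "timestamp": 12345, "amplitude": 17, "diag": 1}
    ```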

  15. Missile signal processing common computer architecture for rapid technology upgrade

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidths increase and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain is comprised of two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to rapidly respond to new threats. A new design approach is made possible by three developments: improvement in computational throughput driven by Moore's Law; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements. This application may be programmed under existing real-time operating systems using parallel processing software libraries, resulting in highly portable code that can be rapidly migrated to new platforms as processor technology evolves. Standardized development tools and third-party software upgrades are enabled, as well as rapid upgrades of processing components as improved algorithms are developed. The resulting weapon system will have superior processing capability over a custom approach at the time of deployment, as a result of shorter development cycles and use of newer technology. The signal processing computer may be upgraded over the lifecycle of the weapon system, and can migrate between weapon system variants because modifications are simple. This paper presents a reference design using the new approach that utilizes an AltiVec PowerPC parallel COTS platform. It uses a VxWorks-based real-time operating system (RTOS) and application code developed using an efficient parallel vector library (PVL). A quantification of computing requirements and a demonstration of an interceptor algorithm operating on this real-time platform are provided.
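    Of the front-end steps listed, non-uniformity correction is the most self-contained, so it makes a concrete example. The sketch below is the textbook two-point NUC, shown only to illustrate the class of per-pixel operations the front end performs; it is not the algorithm of any particular interceptor.

    ```python
    import numpy as np

    def two_point_nuc(raw, low_ref, high_ref, t_low=0.0, t_high=1.0):
        """Textbook two-point non-uniformity correction.

        low_ref / high_ref : per-pixel responses recorded against uniform
        low- and high-flux calibration sources. Each pixel is mapped onto a
        common response line, so fixed-pattern (non-uniformity) noise cancels.
        """
        gain = (t_high - t_low) / np.maximum(high_ref - low_ref, 1e-12)
        offset = t_low - gain * low_ref
        return gain * raw + offset
    ```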

  16. An Overview of Starfish: A Table-Centric Tool for Interactive Synthesis

    NASA Technical Reports Server (NTRS)

    Tsow, Alex

    2008-01-01

    Engineering is an interactive process that requires intelligent interaction at many levels. My thesis [1] advances an engineering discipline for high-level synthesis and architectural decomposition that integrates perspicuous representation, designer interaction, and mathematical rigor. Starfish, the software prototype for the design method, implements a table-centric transformation system for reorganizing control-dominated system expressions into high-level architectures. Based on the digital design derivation (DDD) system, a designer-guided synthesis technique that applies correctness-preserving transformations to synchronous data flow specifications expressed as co-recursive stream equations, Starfish enhances user interaction and extends the reachable design space by incorporating four innovations: behavior tables, serialization tables, data refinement, and operator retiming. Behavior tables express systems of co-recursive stream equations as a table of guarded signal updates. Developers and users of the DDD system used manually constructed behavior tables to help them decide which transformations to apply and how to specify them. These design exercises produced several formally constructed hardware implementations: the FM9001 microprocessor, an SECD machine for evaluating LISP, and the SchemEngine, a garbage-collected machine for interpreting a byte-code representation of compiled Scheme programs. Bose and Tuna, two of DDD's developers, have subsequently commercialized the design derivation methodology at Derivation Systems, Inc. (DSI). DSI has formally derived and validated PCI bus interfaces and a Java byte-code processor; they further executed a contract to prototype SPIDER, NASA's ultra-reliable communications bus. To date, most derivations from DDD and DRS have targeted hardware due to its synchronous design paradigm. However, Starfish expressions are independent of the synchronization mechanism; there is no commitment to hardware or globally broadcast clocks. Though software back-ends for design derivation are limited to the DDD stream-interpreter, targeting synchronous or real-time software is not substantively different from targeting hardware.

  17. Reconfigurable radio receiver with fractional sample rate converter and multi-rate ADC based on LO-derived sampling clock

    NASA Astrophysics Data System (ADS)

    Park, Sungkyung; Park, Chester Sungchung

    2018-03-01

    A composite radio receiver back-end and digital front-end, made up of a delta-sigma analogue-to-digital converter (ADC) with a high-speed low-noise sampling clock generator and a fractional sample rate converter (FSRC), is proposed and designed for a multi-mode reconfigurable radio. The proposed radio receiver architecture contributes to saving chip area and thus lowering the design cost. To enable inter-radio access technology handover and, ultimately, software-defined radio reception, a reconfigurable radio receiver consisting of a multi-rate ADC with its sampling clock derived from a local oscillator, followed by a rate-adjustable FSRC for decimation, is designed. Clock phase noise and timing jitter are examined to support the effectiveness of the proposed radio receiver. An FSRC is modelled and simulated with a cubic polynomial interpolator based on the Lagrange method, and its spectral-domain view is examined in order to verify its effect on aliasing, nonlinearity and signal-to-noise ratio, giving insight into the design of the decimation chain. The sampling clock path and the radio receiver back-end data path are designed in a 90-nm CMOS process technology with a 1.2 V supply.
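    As a concrete illustration of the cubic Lagrange interpolator at the heart of the FSRC, the sketch below resamples a signal by an arbitrary ratio with a 4-tap Lagrange polynomial. It mirrors the structure the abstract describes, but as a software model under our own conventions, not the 90-nm hardware design.

    ```python
    import numpy as np

    def lagrange_cubic(w, mu):
        """Cubic Lagrange interpolation through four samples w[0..3], taken at
        integer times -1, 0, 1, 2, evaluated at fractional offset mu in [0, 1)."""
        t = mu
        return (w[0] * (-t * (t - 1) * (t - 2) / 6.0)
              + w[1] * ((t + 1) * (t - 1) * (t - 2) / 2.0)
              + w[2] * (-(t + 1) * t * (t - 2) / 2.0)
              + w[3] * ((t + 1) * t * (t - 1) / 6.0))

    def fsrc(x, ratio):
        """Fractional sample rate conversion by ratio = f_out / f_in."""
        out, t, step = [], 1.0, 1.0 / ratio
        while t < len(x) - 2:                  # keep a full 4-sample window
            n, mu = int(t), t - int(t)
            out.append(lagrange_cubic(x[n - 1:n + 3], mu))
            t += step
        return np.array(out)
    ```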

  18. Software Techniques for Balancing Computation & Communication in Parallel Systems

    DTIC Science & Technology

    1994-07-01

    [Garbled tool-display residue; recoverable fields: Number of Tasks, PE Load Variance, Inter-Task Comm, Network Traffic, PE Layout.] Because past versions for all files were saved and documented within SCCS, software developers were able to roll back to various combinations of

  19. Swept Frequency Laser Metrology System

    NASA Technical Reports Server (NTRS)

    Zhao, Feng (Inventor)

    2010-01-01

    A swept frequency laser ranging system having sub-micron accuracy that employs multiple common-path heterodyne interferometers, one coupled to a calibrated delay-line for use as an absolute reference for the ranging system. An exemplary embodiment uses two laser heterodyne interferometers to create two laser beams at two different frequencies to measure distance and motions of target(s). Heterodyne fringes generated from reflections off a reference fiducial X_R and a measurement (or target) fiducial X_M are reflected back and are then detected by photodiodes. The measured phase changes Δφ_R and Δφ_M resulting from the laser frequency sweep give the target position. The reference delay-line is the only absolute reference needed in the metrology system, and this provides an ultra-stable reference and a simple, economical system.
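    In our notation (not the patent's), the ranging principle can be written compactly: for a linear frequency sweep of span Δf, the heterodyne phase accumulated over a round-trip path scales linearly with the path length, so the calibrated reference ratios out all sweep parameters. A hedged sketch of the relation, assuming round-trip paths:

    ```latex
    \Delta\phi_R = \frac{4\pi\,\Delta f}{c}\,X_R,\qquad
    \Delta\phi_M = \frac{4\pi\,\Delta f}{c}\,X_M
    \quad\Longrightarrow\quad
    X_M = X_R\,\frac{\Delta\phi_M}{\Delta\phi_R}.
    ```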

  20. An extension of the OpenModelica compiler for using Modelica models in a discrete event simulation

    DOE PAGES

    Nutaro, James

    2014-11-03

    In this article, a new back-end and run-time system is described for the OpenModelica compiler. This new back-end transforms a Modelica model into a module for the adevs discrete event simulation package, thereby extending adevs to encompass complex, hybrid dynamical systems. The new run-time system that has been built within the adevs simulation package supports models with state-events and time-events and that comprise differential-algebraic systems with high index. Finally, although the procedure for effecting this transformation is based on adevs and the Discrete Event System Specification, it can be adapted to any discrete event simulation package.
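    To make the state-event support concrete, the toy loop below integrates a falling ball and fires a discrete event when the height crosses zero, which is the kind of hybrid (continuous-plus-discrete) behavior the new back-end must hand over to the event scheduler. It is our illustration, not OpenModelica or adevs code.

    ```python
    def bouncing_ball(h=10.0, v=0.0, g=9.81, dt=1e-4, t_end=10.0, e=0.8):
        """Detect state-events (zero crossings of h) inside a stepped simulation."""
        t, events = 0.0, []
        while t < t_end:
            h_next, v_next = h + v * dt, v - g * dt
            if h_next <= 0.0 < h:          # state-event: height crossed zero
                events.append(round(t, 4))
                v_next = -e * v_next       # discrete re-initialization at the event
                h_next = 0.0
            h, v, t = h_next, v_next, t + dt
        return events

    print(bouncing_ball(t_end=5.0))        # first event near t = 1.43 s
    ```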

  1. The initial data products from the EUVE software - A photon's journey through the End-to-End System

    NASA Technical Reports Server (NTRS)

    Antia, Behram

    1993-01-01

    The End-to-End System (EES) is a unique collection of software modules created for use at the Center for EUV Astrophysics. The 'pipeline' is a shell script which executes selected EES modules and creates initial data products: skymaps, data sets for individual sources (called 'pigeonholes') and catalogs of sources. This article emphasizes the data from the all-sky survey, conducted between July 22, 1992 and January 21, 1993. A description of each of the major data products will be given and, as an example of how the pipeline works, the reader will follow a photon's path through the software pipeline into a pigeonhole. These data products are the primary goal of the EUVE all-sky survey mission, and so their relative importance for the follow-up science will also be discussed.

  2. Sleipner vest CO{sub 2} disposal, CO{sub 2} injection into a shallow underground aquifer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baklid, A.; Korbol, R.; Owren, G.

    1996-12-31

    This paper describes the problem of disposing of large amounts of CO{sub 2} into a shallow underground aquifer from an offshore location in the North Sea. The solution presented is an alternative for CO{sub 2}-emitting industries in addressing the growing concern about the environmental impact of such activities. The topside injection facilities and the well and reservoir aspects are discussed, as well as the considerations made while establishing the design basis and the solutions chosen. The CO{sub 2} injection issues in this project differ from industry practice in that the CO{sub 2} is wet and contaminated with methane, and further, because of the shallow depth, the total pressure resistance in the system is not sufficient for the CO{sub 2} to naturally stay in the dense phase region. To allow for safe and cost-effective handling of the CO{sub 2}, it was necessary to develop an injection system that gave a constant back pressure from the well corresponding to the output pressure from the compressor, while being independent of the injection rate. This is accomplished by selecting a high-injectivity sand formation, completing the well with a large bore, and regulating the dense phase CO{sub 2} temperature, and thus the density of the fluid, in order to account for the variations in back pressure from the well.

  3. RELAP-7 Closure Correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Berry, R. A.; Martineau, R. C.

    The RELAP-7 code is the next generation nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework, MOOSE (Multi-Physics Object Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's and TRACE's capabilities and extends their analysis capabilities for all reactor system simulation scenarios. The RELAP-7 code utilizes the well-posed 7-equation two-phase flow model for compressible two-phase flow. Closure models used in the TRACE code have been reviewed and selected to reflect the progress made during the past decades and to provide a basis for the closure correlations implemented in the RELAP-7 code. This document provides a summary of the closure correlations that are currently implemented in the RELAP-7 code. The closure correlations include sub-grid models that describe interactions between the fluids and the flow channel, and interactions between the two phases.

  4. A study on spatial decision support systems for HIV/AIDS prevention based on COM GIS technology

    NASA Astrophysics Data System (ADS)

    Yang, Kun; Luo, Huasong; Peng, Shungyun; Xu, Quanli

    2007-06-01

    Based on an in-depth analysis of the current status and existing problems of GIS technology applications in epidemiology, this paper proposes a method and process for establishing a spatial decision support system for AIDS epidemic prevention by integrating COM GIS, spatial database, GPS, remote sensing, and communication technologies, as well as ASP and ActiveX software development technologies. One of the most important issues in constructing such a system is how to integrate AIDS spreading models with GIS. The capabilities of GIS applications in AIDS epidemic prevention are described first. Then some mature epidemic spreading models are discussed with a view to extracting their computation parameters. Furthermore, a technical schema is proposed for integrating the AIDS spreading models with GIS and relevant geospatial technologies, in which the GIS and model running platforms share a common spatial database and the computing results can be spatially visualized on desktop or Web GIS clients. Finally, a complete solution for establishing the decision support system for AIDS epidemic prevention is offered, based on the model integration methods and ESRI COM GIS software packages. The overall decision support system is composed of data acquisition sub-systems, network communication sub-systems, model integration sub-systems, AIDS epidemic information spatial database sub-systems, AIDS epidemic information querying and statistical analysis sub-systems, AIDS epidemic dynamic surveillance sub-systems, AIDS epidemic information spatial analysis and decision support sub-systems, as well as AIDS epidemic information publishing sub-systems based on Web GIS.

  5. Astro-WISE: Chaining to the Universe

    NASA Astrophysics Data System (ADS)

    Valentijn, E. A.; McFarland, J. P.; Snigula, J.; Begeman, K. G.; Boxhoorn, D. R.; Rengelink, R.; Helmich, E.; Heraudeau, P.; Verdoes Kleijn, G.; Vermeij, R.; Vriend, W.-J.; Tempelaar, M. J.; Deul, E.; Kuijken, K.; Capaccioli, M.; Silvotti, R.; Bender, R.; Neeser, M.; Saglia, R.; Bertin, E.; Mellier, Y.

    2007-10-01

    The recent explosion of recorded digital data and its processed derivatives threatens to overwhelm researchers when analysing their experimental data or looking up data items in archives and file systems. While current hardware developments allow the acquisition, processing and storage of hundreds of terabytes of data at the cost of a modern sports car, the software systems to handle these data are lagging behind. This problem is very general and is well recognized by various scientific communities; several large projects have been initiated, e.g., DATAGRID/EGEE (http://www.eu-egee.org/) federates compute and storage power across the high-energy physics community, while the international astronomical community is building an Internet-geared Virtual Observatory (http://www.euro-vo.org/pub/; Padovani 2006) connecting archival data. These large projects either focus on a specific distribution aspect or aim to connect many sub-communities, and have a relatively long trajectory for setting standards and a common layer. Here, we report first light of a very different solution (Valentijn & Kuijken 2004) to the problem, initiated by a smaller astronomical IT community. It provides an abstract scientific information layer which integrates distributed scientific analysis with distributed processing and federated archiving and publishing. By designing new abstractions and mixing in old ones, a Science Information System with fully scalable cornerstones has been achieved, transforming data systems into knowledge systems. This breakthrough is facilitated by the full end-to-end linking of all dependent data items, which allows full backward chaining from the observer/researcher to the experiment. Key is the notion that information is intrinsic in nature, and thus so is the data acquired by a scientific experiment. The new abstraction is that software systems guide the user to that intrinsic information by forcing full backward and forward chaining in the data modelling.

  6. Vortex Generators to Control Boundary Layer Interactions

    NASA Technical Reports Server (NTRS)

    Babinsky, Holger (Inventor); Loth, Eric (Inventor); Lee, Sang (Inventor)

    2014-01-01

    Devices for generating streamwise vorticity in a boundary layer include various forms of vortex generators. One form of a split-ramp vortex generator includes a first ramp element and a second ramp element with front ends and back ends, ramp surfaces extending between the front ends and the back ends, and vertical surfaces extending between the front ends and the back ends adjacent the ramp surfaces. A flow channel is between the first ramp element and the second ramp element. The back ends of the ramp elements have a height greater than a height of the front ends, and the front ends of the ramp elements have a width greater than a width of the back ends.

  7. Requirements for guidelines systems: implementation challenges and lessons from existing software-engineering efforts.

    PubMed

    Shah, Hemant; Allard, Raymond D; Enberg, Robert; Krishnan, Ganesh; Williams, Patricia; Nadkarni, Prakash M

    2012-03-09

    A large body of work in the clinical guidelines field has identified requirements for guideline systems, but there are formidable challenges in translating such requirements into production-quality systems that can be used in routine patient care. Detailed analysis of requirements from an implementation perspective can be useful in helping define sub-requirements to the point where they are implementable. Further, additional requirements emerge as a result of such analysis. During such an analysis, study of examples of existing software-engineering efforts in non-biomedical fields can provide useful signposts to the implementer of a clinical guideline system. In addition to requirements described by guideline-system authors, comparative reviews of such systems, and publications discussing information needs for guideline systems and clinical decision support systems in general, we have incorporated additional requirements related to production-system robustness and functionality from publications in the business workflow domain, as well as drawing on our own experience in the development of the Proteus guideline system (http://proteme.org). The sub-requirements are discussed by grouping them into the categories used by the review of Isern and Moreno (2008). We cite previous work under each category, then provide sub-requirements under each category, and give examples of similar work in software-engineering efforts that have addressed a similar problem in a non-biomedical context. When analyzing requirements from the implementation viewpoint, knowledge of successes and failures in related software-engineering efforts can guide implementers in the choice of effective design and development strategies.

  8. Requirements for guidelines systems: implementation challenges and lessons from existing software-engineering efforts

    PubMed Central

    2012-01-01

    Background A large body of work in the clinical guidelines field has identified requirements for guideline systems, but there are formidable challenges in translating such requirements into production-quality systems that can be used in routine patient care. Detailed analysis of requirements from an implementation perspective can be useful in helping define sub-requirements to the point where they are implementable. Further, additional requirements emerge as a result of such analysis. During such an analysis, study of examples of existing software-engineering efforts in non-biomedical fields can provide useful signposts to the implementer of a clinical guideline system. Methods In addition to requirements described by guideline-system authors, comparative reviews of such systems, and publications discussing information needs for guideline systems and clinical decision support systems in general, we have incorporated additional requirements related to production-system robustness and functionality from publications in the business workflow domain, as well as drawing on our own experience in the development of the Proteus guideline system (http://proteme.org). Results The sub-requirements are discussed by grouping them into the categories used by the review of Isern and Moreno (2008). We cite previous work under each category, then provide sub-requirements under each category, and give examples of similar work in software-engineering efforts that have addressed a similar problem in a non-biomedical context. Conclusions When analyzing requirements from the implementation viewpoint, knowledge of successes and failures in related software-engineering efforts can guide implementers in the choice of effective design and development strategies. PMID:22405400

  9. Magneto-transport study of top- and back-gated LaAlO{sub 3}/SrTiO{sub 3} heterostructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W., E-mail: W.Liu@unige.ch; Gariglio, S.; Fête, A.

    2015-06-01

    We report a detailed analysis of magneto-transport properties of top- and back-gated LaAlO{sub 3}/SrTiO{sub 3} heterostructures. Efficient modulation in magneto-resistance, carrier density, and mobility of the two-dimensional electron liquid present at the interface is achieved by sweeping top and back gate voltages. Analyzing those changes with respect to the carrier density tuning, we observe that the back gate strongly modifies the electron mobility while the top gate mainly varies the carrier density. The evolution of the spin-orbit interaction is also followed as a function of top and back gating.

  10. Program Helps Simulate Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.
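    Rumelhart's generalized delta rule is compact enough to show inline. The sketch below is the textbook two-layer sigmoid version in Python, illustrating the algorithm NNETS implements rather than its OCCAM/Transputer code.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def delta_rule_step(x, target, W1, W2, lr=0.5):
        """One back-propagation update (generalized delta rule), in place.

        x : input vector; target : desired output vector;
        W1 : (n_hidden, n_in) weights; W2 : (n_out, n_hidden) weights.
        """
        h = sigmoid(W1 @ x)                       # hidden layer activations
        y = sigmoid(W2 @ h)                       # output layer activations
        delta_o = (target - y) * y * (1 - y)      # output deltas
        delta_h = (W2.T @ delta_o) * h * (1 - h)  # deltas propagated back
        W2 += lr * np.outer(delta_o, h)
        W1 += lr * np.outer(delta_h, x)
        return y
    ```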

  11. Development of the Subaru-Mitaka-Okayama-Kiso Archive System

    NASA Astrophysics Data System (ADS)

    Baba, Hajime; Yasuda, Naoki; Ichikawa, Shin-Ichi; Yagi, Masafumi; Iwamoto, Nobuyuki; Takata, Tadafumi; Horaguchi, Toshihiro; Taga, Masatoshi; Watanabe, Masaru; Ozawa, Tomohiko; Hamabe, Masaru

    We have developed the Subaru-Mitaka-Okayama-Kiso-Archive (SMOKA) public science archive system, which provides access to the data of the Subaru Telescope, the 188 cm telescope at Okayama Astrophysical Observatory, and the 105 cm Schmidt telescope at Kiso Observatory/University of Tokyo. SMOKA is the successor of the MOKA3 system. The user can browse the Quick-Look Images, Header Information (HDI) and the ASCII Table Extension (ATE) of each frame from the search result table. A request for data can be submitted in a simple manner. The system is developed with Java Servlets for the back-end and Java Server Pages (JSP) for content display. The advantage of JSPs is the separation of the front-end presentation from the middle- and back-end tiers, which led to efficient development of the system. The SMOKA homepage is available at SMOKA

  12. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    PubMed

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties in getting locally installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools, using an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option to achieve this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS mediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on simple computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. Applications registered with BOWS can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
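    The division of labour in the abstract (a back-end service polled by cluster-side workers, a front-end service consumed by clients) reduces to a simple polling loop. The endpoint paths, JSON fields, and server URL below are hypothetical placeholders, not BOWS's actual web service contract.

    ```python
    import time
    import requests  # third-party HTTP client (pip install requests)

    BOWS = "https://bows.example.org"  # hypothetical server URL

    def run_tool(parameters):
        """Site-specific stub: launch the locally installed HPC application."""
        raise NotImplementedError

    def worker_loop(tool_id):
        """Cluster-side worker: poll the back-end service for new jobs,
        run them, and post results back (the pattern described above)."""
        while True:
            jobs = requests.get(f"{BOWS}/backend/jobs",
                                params={"tool": tool_id}).json()
            for job in jobs:
                result = run_tool(job["parameters"])
                requests.post(f"{BOWS}/backend/results",
                              json={"job_id": job["id"], "result": result})
            time.sleep(30)  # modest polling interval
    ```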

  13. Satellite-Tracking Millimeter-Wave Reflector Antenna System For Mobile Satellite-Tracking

    NASA Technical Reports Server (NTRS)

    Densmore, Arthur C. (Inventor); Jamnejad, Vahraz (Inventor); Woo, Kenneth E. (Inventor)

    2001-01-01

    A miniature dual-band two-way mobile satellite-tracking antenna system mounted on a movable vehicle includes a miniature parabolic reflector dish having an elliptical aperture with major and minor elliptical axes aligned horizontally and vertically, respectively, to maximize azimuthal directionality and minimize elevational directionality to an extent corresponding to expected pitch excursions of the movable ground vehicle. A feed-horn has a back end and an open front end facing the reflector dish and has vertical side walls opening out from the back end to the front end at a lesser horn angle and horizontal top and bottom walls opening out from the back end to the front end at a greater horn angle. An RF circuit couples two different signal bands between the feed-horn and the user. An antenna attitude controller maintains an antenna azimuth direction relative to the satellite by rotating it in azimuth in response to sensed yaw motions of the movable ground vehicle so as to compensate for the yaw motions to within a pointing error angle. The controller sinusoidally dithers the antenna through a small azimuth dither angle greater than the pointing error angle while sensing a signal from the satellite received at the reflector dish, and deduces the pointing angle error from dither-induced fluctuations in the received signal.

  14. A satellite-tracking millimeter-wave reflector antenna system for mobile satellite-tracking

    NASA Technical Reports Server (NTRS)

    Densmore, Arthur C. (Inventor); Jamnejad, Vahraz (Inventor); Woo, Kenneth E. (Inventor)

    1995-01-01

    A miniature dual-band two-way mobile satellite tracking antenna system mounted on a movable ground vehicle includes a miniature parabolic reflector dish having an elliptical aperture with major and minor elliptical axes aligned horizontally and vertically, respectively, to maximize azimuthal directionality and minimize elevational directionality to an extent corresponding to expected pitch excursions of the movable ground vehicle. A feed-horn has a back end and an open front end facing the reflector dish and has vertical side walls opening out from the back end to the front end at a lesser horn angle and horizontal top and bottom walls opening out from the back end to the front end at a greater horn angle. An RF circuit couples two different signal bands between the feed-horn and the user. An antenna attitude controller maintains an antenna azimuth direction relative to the satellite by rotating it in azimuth in response to sensed yaw motions of the movable ground vehicle so as to compensate for the yaw motions to within a pointing error angle. The controller sinusoidally dithers the antenna through a small azimuth dither angle greater than the pointing error angle while sensing a signal from the satellite received at the reflector dish, and deduces the pointing angle error from dither-induced fluctuations in the received signal.

  15. Common Database Interface for Heterogeneous Software Engineering Tools.

    DTIC Science & Technology

    1987-12-01

    [OCR residue of the report documentation page. Recoverable subject terms: Database Management Systems; Programming (Computers); Computer Files; Information Transfer; Interfaces. Thesis presented to the Air Force Institute of Technology, Air University, in partial fulfillment of the requirements for the degree of Master of Science in Information Systems. Contents fragments: Literature; System 690 Configuration; Database Functions; Software Engineering Environments; Data Manager.]

  16. Metrinome: Continuous Monitoring and Security Validation of Distributed Systems

    DTIC Science & Technology

    2014-03-01

    Integration into the SDLC (Software Development Life Cycle), Retrieved Nov 06 2013, https://www.owasp.org/images/f/f6/Integration_into_the_SDLC.ppt [2...assessment as part of the software development life cycle, current approaches suffer from a number of shortcomings that limit their application in...with assessing security and correct functionality. Second, integrated and end-to-end testing and experimentation is often postponed until software

  17. High-power microwave-induced TM{sub 01} plasma ring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schamiloglu, E.; Jordan, R.; Moreland, L.D.

    1996-02-01

    Open-shutter photography was used to capture the air breakdown pattern induced by a TM{sub 01} mode radiated by a high-power backward wave oscillator. The resultant plasma ring was formed in air adjacent to a conical horn antenna fitted with a membrane to keep the experiment under vacuum. This image was digitized and further processed using Khoros 2.0 software to obtain the dimensions of the plasma ring. This information was used in an air breakdown analysis to estimate the radiated power, and agrees within 10% with the power measured using field mapping with an open-ended WR-90 waveguide.

  18. Data Management for Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Snyder, Joseph F.; Smyth, David E.

    2004-01-01

    Data Management for the Mars Exploration Rovers (MER) project is a comprehensive system addressing the needs of development, test, and operations phases of the mission. During development of flight software, including the science software, the data management system can be simulated using any POSIX file system. During testing, the on-board file system can be bit compared with files on the ground to verify proper behavior and end-to-end data flows. During mission operations, end-to-end accountability of data products is supported, from science observation concept to data products within the permanent ground repository. Automated and human-in-the-loop ground tools allow decisions regarding retransmitting, re-prioritizing, and deleting data products to be made using higher level information than is available to a protocol-stack approach such as the CCSDS File Delivery Protocol (CFDP).

  19. Simulated moving bed system for CO.sub.2 separation, and method of same

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, Jeannine Elizabeth; Copeland, Robert James; Lind, Jeff

    A system and method for separating and/or purifying CO.sub.2 gas from a CO.sub.2 feed stream is described. The system and method include a plurality of fixed sorbent beds, adsorption zones and desorption zones, where the sorbent beds are connected via valves and lines to create a simulated moving bed system, in which the sorbent beds move from one adsorption position to another adsorption position, then from one regeneration position to another regeneration position, and optionally back to an adsorption position. The system and method operate by concentration swing adsorption/desorption and by adsorptive/desorptive displacement.

  20. Conducting Research on the International Space Station Using the EXPRESS Rack Facilities

    NASA Technical Reports Server (NTRS)

    Thompson, Sean W.; Lake, Robert E.

    2013-01-01

    Eight "Expedite the Processing of Experiments to Space Station" (EXPRESS) Rack facilities are located within the International Space Station (ISS) laboratories to provide standard resources and interfaces for the simultaneous and independent operation of multiple experiments within each rack. Each EXPRESS Rack provides eight Middeck Locker Equivalent locations and two drawer locations for powered experiment equipment, also referred to as sub-rack payloads. Payload developers may provide their own structure to occupy the equivalent volume of one, two, or four lockers as a single unit. Resources provided for each location include power (28 Vdc, 0-500 W), command and data handling (Ethernet, RS-422, 5 Vdc discrete, +/- 5 Vdc analog), video (NTSC/RS 170A), and air cooling (0-200 W). Each rack also provides water cooling (500 W) for two locations, one vacuum exhaust interface, and one gaseous nitrogen interface. Standard interfacing cables and hoses are provided on-orbit. One laptop computer is provided with each rack to control the rack and to accommodate payload application software. Four of the racks are equipped with the Active Rack Isolation System to reduce vibration between the ISS and the rack. EXPRESS Racks are operated by the Payload Operations Integration Center at Marshall Space Flight Center and the sub-rack experiments are operated remotely by the investigating organization. Payload Integration Managers serve as a focal to assist organizations developing payloads for an EXPRESS Rack. NASA provides EXPRESS Rack simulator software for payload developers to checkout payload command and data handling at the development site before integrating the payload with the EXPRESS Functional Checkout Unit for an end-to-end test before flight. EXPRESS Racks began supporting investigations onboard ISS on April 24, 2001 and will continue through the life of the ISS.

  1. Conducting Research on the International Space Station using the EXPRESS Rack Facilities

    NASA Technical Reports Server (NTRS)

    Thompson, Sean W.; Lake, Robert E.

    2016-01-01

    Eight "Expedite the Processing of Experiments to Space Station" (EXPRESS) Rack facilities are located within the International Space Station (ISS) laboratories to provide standard resources and interfaces for the simultaneous and independent operation of multiple experiments within each rack. Each EXPRESS Rack provides eight Middeck Locker Equivalent locations and two drawer locations for powered experiment equipment, also referred to as sub-rack payloads. Payload developers may provide their own structure to occupy the equivalent volume of one, two, or four lockers as a single unit. Resources provided for each location include power (28 Vdc, 0-500 W), command and data handling (Ethernet, RS-422, 5 Vdc discrete, +/- 5 Vdc analog), video (NTSC/RS 170A), and air cooling (0-200 W). Each rack also provides water cooling for two locations (500W ea.), one vacuum exhaust interface, and one gaseous nitrogen interface. Standard interfacing cables and hoses are provided on-orbit. One laptop computer is provided with each rack to control the rack and to accommodate payload application software. Four of the racks are equipped with the Active Rack Isolation System to reduce vibration between the ISS and the rack. EXPRESS Racks are operated by the Payload Operations Integration Center at Marshall Space Flight Center and the sub-rack experiments are operated remotely by the investigating organization. Payload Integration Managers serve as a focal to assist organizations developing payloads for an EXPRESS Rack. NASA provides EXPRESS Rack simulator software for payload developers to checkout payload command and data handling at the development site before integrating the payload with the EXPRESS Functional Checkout Unit for an end-to-end test before flight. EXPRESS Racks began supporting investigations onboard ISS on April 24, 2001 and will continue through the life of the ISS.

  2. The Chandra X-ray Center data system: supporting the mission of the Chandra X-ray Observatory

    NASA Astrophysics Data System (ADS)

    Evans, Janet D.; Cresitello-Dittmar, Mark; Doe, Stephen; Evans, Ian; Fabbiano, Giuseppina; Germain, Gregg; Glotfelty, Kenny; Hall, Diane; Plummer, David; Zografou, Panagoula

    2006-06-01

    The Chandra X-ray Center Data System provides end-to-end scientific software support for Chandra X-ray Observatory mission operations. The data system includes the following components: (1) observers' science proposal planning tools; (2) science mission planning tools; (3) science data processing, monitoring, and trending pipelines and tools; and (4) data archive and database management. A subset of the science data processing component is ported to multiple platforms and distributed to end-users as a portable data analysis package. Web-based user tools are also available for data archive search and retrieval. We describe the overall architecture of the data system and its component pieces, and consider the design choices and their impacts on maintainability. We discuss the many challenges involved in maintaining a large, mission-critical software system with limited resources. These challenges include managing continually changing software requirements and ensuring the integrity of the data system and resulting data products while being highly responsive to the needs of the project. We describe our use of COTS and OTS software at the subsystem and component levels, our methods for managing multiple release builds, and adapting a large code base to new hardware and software platforms. We review our experiences during the life of the mission so far, and our approaches for keeping a small, but highly talented, development team engaged during the maintenance phase of a mission.

  3. Microwave corrosion detection using open ended rectangular waveguide sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qaddoumi, N.; Handjojo, L.; Bigelow, T.

    The use of microwave and millimeter wave nondestructive testing methods utilizing open ended rectangular waveguide sensors has shown great potential for detecting minute thickness variations in laminate structures, in particular those backed by a conducting plate. Slight variations in the dielectric properties of materials may also be detected using a set of optimal parameters which include the standoff distance and the frequency of operation. In a recent investigation on detecting rust under paint, the dielectric properties of rust were assumed to be similar to those of Fe{sub 2}O{sub 3} powder. These values were used in an electromagnetic model that simulates the interaction of fields radiated by a rectangular waveguide aperture with layered structures to obtain optimal parameters. The dielectric properties of Fe{sub 2}O{sub 3} were measured to be very similar to the properties of paint. Nevertheless, the presence of a simulated Fe{sub 2}O{sub 3} layer under a paint layer was detected. In this paper the dielectric properties of several different rust samples from different environments are measured. The measurements indicate that the nature of real rust is quite diverse and is different from Fe{sub 2}O{sub 3} and paint, indicating that the presence of rust under paint can be easily detected. The same electromagnetic model is also used (with the newly measured dielectric properties of real rust) to obtain an optimal standoff distance at a frequency of 24 GHz. The results indicate that variations in the magnitude as well as the phase of the reflection coefficient can be used to obtain information about the presence of rust. An experimental investigation on detecting the presence of very thin rust layers (2.5--5 x 10{sup {minus}2} mm [0.9--2.0 x 10{sup {minus}3} in.]) using an open ended rectangular waveguide probe is also conducted. Microwave images of rusted specimens, obtained at 24 GHz, are also presented.

  4. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    NASA Astrophysics Data System (ADS)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide the interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and engineering an appropriate data storage solution. The present pilot version of the service implements noise source maps for Switzerland. Extension of the solution to Central Europe is planned for the next project phase.
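    The core measurement named above (logarithmic amplitude ratios in noise correlations) fits in a few lines. The sketch below cross-correlates two pre-processed station records and compares the energy of the causal and acausal branches; real processing would window around expected arrivals and stack over many segments, so this is a deliberate simplification.

    ```python
    import numpy as np

    def log_amplitude_ratio(u1, u2):
        """Log energy ratio of the causal vs. acausal branch of a noise
        cross-correlation. Asymmetry indicates more noise energy arriving
        from one side, which is what the source maps invert for."""
        cc = np.correlate(u1, u2, mode="full")
        mid = len(cc) // 2                      # zero-lag sample
        causal, acausal = cc[mid + 1:], cc[:mid][::-1]
        return float(np.log(np.sum(causal**2) / np.sum(acausal**2)))
    ```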

  5. Tevatron beam position monitor upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolbers, Stephen; Banerjee, B.; Barker, B.

    2005-05-01

    The Tevatron Beam Position Monitor (BPM) readout electronics and software have been upgraded to improve measurement precision, functionality and reliability. The original system, designed and built in the early 1980's, became inadequate for current and future operations of the Tevatron. The upgraded system consists of 960 channels of new electronics to process analog signals from 240 BPMs, new front-end software, new online and controls software, and modified applications to take advantage of the improved measurements and support the new functionality. The new system reads signals from both ends of the existing directional stripline pickups to provide simultaneous proton and antiproton position measurements. Measurements using the new system are presented that demonstrate its improved resolution and overall performance.

  6. A hybrid single-end-access MZI and Φ-OTDR vibration sensing system with high frequency response

    NASA Astrophysics Data System (ADS)

    Zhang, Yixin; Xia, Lan; Cao, Chunqi; Sun, Zhenhong; Li, Yanting; Zhang, Xuping

    2017-01-01

    A hybrid single-end-access Mach-Zehnder interferometer (MZI) and phase-sensitive OTDR (Φ-OTDR) vibration sensing system is proposed and demonstrated experimentally. In our system, narrow optical pulses and a continuous wave are injected into the fiber through its front end at the same time. At the rear end of the fiber, a frequency-shift mirror (FSM) is designed to back-propagate the continuous wave modulated by the external vibration. Thus the Rayleigh backscattering signals (RBS) and the back-propagated continuous wave interfere with the reference light at the same end of the sensing fiber, and a single-end-access configuration is achieved. The RBS can be successfully separated from the interference signal (IS) through digital signal processing due to their different intermediate frequencies, based on frequency division multiplexing; the two schemes do not influence each other. The experimental results show 10 m spatial resolution and up to 1.2 MHz frequency response along a 6.35 km long fiber. This newly designed single-end-access setup can locate vibration events and respond to high-frequency events, and can be widely used in health monitoring for civil infrastructure and transportation.

  7. An improved real time superresolution FPGA system

    NASA Astrophysics Data System (ADS)

    Lakshmi Narasimha, Pramod; Mudigoudar, Basavaraj; Yue, Zhanfeng; Topiwala, Pankaj

    2009-05-01

    In numerous computer vision applications, enhancing the quality and resolution of captured video can be critical. Acquired video is often grainy and low quality due to motion, transmission bottlenecks, etc.; postprocessing can enhance it. Superresolution greatly decreases camera jitter to deliver a smooth, stabilized, high-quality video. In this paper, we extend previous work on a real-time superresolution application implemented in ASIC/FPGA hardware. A gradient-based technique is used to register the frames at the sub-pixel level. Once we have the high-resolution grid, we use an improved regularization technique in which the image is iteratively modified by applying back-projection to get a sharp and undistorted image. The algorithm was first tested in software and then migrated to hardware, achieving 320x240 -> 1280x960 at about 30 fps, a 16X superresolution in total pixels. Various input parameters, such as the size of the input image, the enlarging factor and the number of nearest neighbors, can be tuned conveniently by the user. We use a maximum word size of 32 bits to implement the algorithm in Matlab Simulink as well as in FPGA hardware, which gives us a fine balance between the number of bits and performance. The proposed system is robust and highly efficient. We show the performance improvement of the hardware superresolution over the software version (C code).
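    The regularized back-projection step follows the classic iterative back-projection scheme, sketched below under the assumption that sub-pixel registration has already been done and that the caller supplies the resampling operators. It illustrates the idea, not the paper's FPGA implementation.

    ```python
    import numpy as np

    def iterative_back_projection(frames, downsample, upsample, n_iter=10, beta=0.2):
        """Classic iterative back-projection super-resolution.

        frames     : list of registered low-resolution frames
        downsample : function mapping a high-res image onto a low-res grid
        upsample   : function mapping a low-res residual onto the high-res grid
        beta       : step size controlling how strongly errors are fed back
        """
        hr = upsample(np.mean(frames, axis=0))        # initial high-res estimate
        for _ in range(n_iter):
            for lr in frames:
                residual = lr - downsample(hr)        # simulate LR frame, compare
                hr = hr + beta * upsample(residual)   # back-project the error
        return hr
    ```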

  8. 77 FR 65582 - Notice of Determinations Regarding Eligibility To Apply for Worker Adjustment Assistance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-29

    .... Div., Back Office Customer Support, Primary Services & Inceed. 81,972 Pharmetrics, An IMS Health... Constellation Homebuilder Redmond, WA September 14, 2011. Systems, Fast Division, Constellation Software, Inc...

  9. A computer-based time study system for timber harvesting operations

    Treesearch

    Jingxin Wang; Joe McNeel; John Baumgras

    2003-01-01

    A computer-based time study system was developed for timber harvesting operations. Object-oriented techniques were used to model and design the system. The front-end of the time study system resides on MS Windows CE and the back-end is supported by MS Access. The system consists of three major components: a handheld system, a data transfer interface, and data storage...

  10. Unified Approach to Modeling and Simulation of Space Communication Networks and Systems

    NASA Technical Reports Server (NTRS)

    Barritt, Brian; Bhasin, Kul; Eddy, Wesley; Matthews, Seth

    2010-01-01

    Network simulator software tools are often used to model the behaviors and interactions of applications, protocols, packets, and data links in terrestrial communication networks. Other software tools that model the physics, orbital dynamics, and RF characteristics of space systems have matured to allow for rapid, detailed analysis of space communication links. However, the absence of a unified toolset that integrates the two modeling approaches has encumbered the systems engineers tasked with the design, architecture, and analysis of complex space communication networks and systems. This paper presents the unified approach and describes the motivation, challenges, and our solution - the customization of the network simulator to integrate with astronautical analysis software tools for high-fidelity end-to-end simulation. Keywords: space; communication; systems; networking; simulation; modeling; QualNet; STK; integration; space networks.

  11. The AR Sandbox: Augmented Reality in Geoscience Education

    NASA Astrophysics Data System (ADS)

    Kreylos, O.; Kellogg, L. H.; Reed, S.; Hsi, S.; Yikilmaz, M. B.; Schladow, G.; Segale, H.; Chan, L.

    2016-12-01

    The AR Sandbox is a combination of a physical box full of sand, a 3D (depth) camera such as a Microsoft Kinect, a data projector, and a computer running open-source software, creating a responsive and interactive system to teach geoscience concepts in formal or informal contexts. As one or more users shape the sand surface to create planes, hills, or valleys, the 3D camera scans the surface in real time, the software creates a dynamic topographic map including elevation color maps and contour lines, and the projector projects that map back onto the sand surface such that real and projected features match exactly. In addition, users can add virtual water to the sandbox, which realistically flows over the real surface driven by a real-time fluid flow simulation. The AR Sandbox can teach basic geographic and hydrologic skills and concepts such as reading topographic maps, interpreting contour lines, formation of watersheds, flooding, or surface wave propagation in a hands-on and explorative manner. AR Sandbox installations in more than 150 institutions have shown high audience engagement and long dwell times, often 20 minutes or more. In a more formal context, the AR Sandbox can be used in field trip preparation, and can teach advanced geoscience skills such as extrapolating 3D sub-surface shapes from surface expression, via advanced software features such as the ability to load digital models of real landscapes and guiding users towards recreating them in the sandbox. Blueprints, installation instructions, and the open-source AR Sandbox software package are available at http://arsandbox.org.

  12. Characterization and Compensation of Network-Level Anomalies in Mixed-Signal Neuromorphic Modeling Platforms

    PubMed Central

    Petrovici, Mihai A.; Vogginger, Bernhard; Müller, Paul; Breitwieser, Oliver; Lundqvist, Mikael; Muller, Lyle; Ehrlich, Matthias; Destexhe, Alain; Lansner, Anders; Schüffny, René; Schemmel, Johannes; Meier, Karlheinz

    2014-01-01

    Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks. PMID:25303102

  13. Characterization and compensation of network-level anomalies in mixed-signal neuromorphic modeling platforms.

    PubMed

    Petrovici, Mihai A; Vogginger, Bernhard; Müller, Paul; Breitwieser, Oliver; Lundqvist, Mikael; Muller, Lyle; Ehrlich, Matthias; Destexhe, Alain; Lansner, Anders; Schüffny, René; Schemmel, Johannes; Meier, Karlheinz

    2014-01-01

    Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.

  14. Common MD-IS infrastructure for wireless data technologies

    NASA Astrophysics Data System (ADS)

    White, Malcolm E.

    1995-12-01

    The expansion of global networks, caused by growth and acquisition within the commercial sector, is forcing users to move away from proprietary systems in favor of standards-based, open systems architectures. The same is true in the wireless data communications arena, where operators of proprietary wireless data networks have endeavored to convince users that their particular implementation provides the best service. However, most of the vendors touting these solutions have failed to gain the critical mass that might have led to their technologies' adoption as a de facto standard, and have been held back by a lack of applications and the high cost of mobile devices. The advent of the cellular digital packet data (CDPD) specification and its support by much of the public cellular service industry has set the stage for the ubiquitous coverage of wireless packet data services across the United States. Although CDPD was developed for operation over the advanced mobile phone system (AMPS) cellular network, many of the defined protocols are industry standards that can be applied to the construction of a common infrastructure supporting multiple airlink standards. This approach offers overall cost savings and operational efficiency for service providers, hardware and software developers, and end-users alike, and could be equally advantageous for those service operators using proprietary end system protocols, should they wish to migrate towards an open standard.

  15. Flight code validation simulator

    NASA Astrophysics Data System (ADS)

    Sims, Brent A.

    1996-05-01

    An End-To-End Simulation capability for software development and validation of missile flight software on the actual embedded computer has been developed utilizing a 486 PC, i860 DSP coprocessor, embedded flight computer and custom dual port memory interface hardware. This system allows real-time interrupt driven embedded flight software development and checkout. The flight software runs in a Sandia Digital Airborne Computer and reads and writes actual hardware sensor locations in which Inertial Measurement Unit data resides. The simulator provides six degree of freedom real-time dynamic simulation, accurate real-time discrete sensor data and acts on commands and discretes from the flight computer. This system was utilized in the development and validation of the successful first flight of the Digital Miniature Attitude Reference System in January of 1995 at the White Sands Missile Range on a two stage attitude controlled sounding rocket.

  16. Evaluation of Work-related Psychosocial and Ergonomics Factors in Relation to Low Back Discomfort in Emergency Unit Nurses

    PubMed Central

    Habibi, Ehsanollah; Pourabdian, Siamak; Atabaki, Azadeh Kianpour; Hoseini, Mohsen

    2012-01-01

    Background and Aim: A high prevalence of low back pain is one of the most common problems among nurses. The aim of this study was to evaluate the relation of the intensity of low back discomfort to two groups of contributing factors (ergonomics risk factors and psychosocial factors). Methods: This cross-sectional survey was conducted on 120 emergency unit nurses in Esfahan. The job content, ergonomics hazards, and Nordic questionnaires were used, in that order, for daily assessment of psychosocial and ergonomics factors and the intensity of low back discomfort. Nurses were questioned during a 5-week period, at the end of each work shift. The final results were analyzed with SPSS 18/PASW software using the Spearman, Mann-Whitney, and Kolmogorov-Smirnov tests. Results: There was a significant relationship between work demand, job content, social support, and intensity of low back discomfort (P value <0.05), but no link between intensity of low back discomfort and job control. There was also a significant relationship between intensity of low back discomfort and ergonomics risk factors. Conclusion: This study showed an indirect relationship between the intensity of low back discomfort and social support, and confirmed a direct relationship between the intensity of low back discomfort and work demand, job content, and ergonomics factors (awkward postures (rotating and bending), manual patient handling, repetitiveness, and standing continuously for more than 30 min). Therefore, to decrease work-related low back discomfort, psychosocial factors should be addressed in addition to ergonomics factors. PMID:22973487

  17. Structural analyses of a rigid pavement overlaying a sub-surface void

    NASA Astrophysics Data System (ADS)

    Adam, Fatih Alperen

    Pavement failures are very hazardous for public safety and serviceability. These failures in pavements are mainly caused by subsurface voids, cracks, and undulation at the slab-base interface. On the other hand, current structural analysis procedures for rigid pavement assume that the slab-base interface is perfectly planar and no imperfections exist in the sub-surface soil. This assumption would be violated if severe erosion were to occur due to inadequate drainage, thermal movements, and/or mechanical loading. Until now, the effect of erosion was only considered in the faulting performance model, but not with regard to transverse cracking at the mid-slab edge. In this research, the bottom-up fatigue cracking potential, caused by the combined effects of wheel loading and a localized imperfection in the form of a void below the mid-slab edge, is studied. A robust stress and surface deflection analysis was also conducted to evaluate the influence of a sub-surface void on layer moduli back-calculation. Rehabilitative measures were considered, which included a study on overlay and fill remediation. A series of regression equations was proposed that provides a relationship between void size, layer moduli stiffness, and the overlay thickness required to reduce the stress to its original pre-void level. The effect of the void on 3D pavement crack propagation was also studied under a single axle load. The amplification of the stress intensity was shown to be high but could be mitigated substantially if stiff material is used to fill the void and impede crack growth. The pavement system was modeled using the commercial finite element modeling program Abaqus. More than 10,000 runs were executed to perform the following analyses: stress analysis of subsurface voids, E-moduli back-calculation of the base layer, pavement damage calculations for Beaumont, TX, overlay thickness estimations, and mode I crack analysis. The results indicate that the stress and stress intensity are, on average, amplified considerably (80% and 150%, respectively) by the presence of the void, and more severely in a bonded pavement system than in an un-bonded system. The sub-surface void also significantly affects the layer moduli back-calculation. The equivalent moduli of the layers are reduced considerably when a sub-surface void is present. However, the results indicate that the back-calculated moduli derived using surface deflection and longitudinal stress basins did not yield equivalent layer moduli under mechanical loading; the back-calculated deflection-based moduli were larger than the stress-based moduli, leading to stress calculations that were lower than those found in the real system.

  18. Web-Accessible Scientific Workflow System for Performance Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roelof Versteeg; Roelof Versteeg; Trevor Rowe

    2006-03-01

    We describe the design and implementation of a web-accessible scientific workflow system for environmental monitoring. This workflow environment integrates distributed, automated data acquisition with server-side data management and information visualization through flexible browser-based data access tools. Component technologies include a rich browser-based client (using dynamic Javascript and HTML/CSS) for data selection, a back-end server which uses PHP for data processing, user management, and result delivery, and third-party applications which are invoked by the back-end using webservices. This environment allows for reproducible, transparent result generation by a diverse user base. It has been implemented for several monitoring systems with different degrees of complexity.
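
    The paper's back-end is PHP; purely to illustrate the request/response pattern between such a browser client and back-end, here is a hedged Python sketch. The endpoint URL and payload fields are invented for illustration.

```python
import json
import urllib.request

# Hypothetical endpoint standing in for the paper's PHP back-end, which
# validates the request, invokes a third-party application over a web
# service, and returns a result the browser client can render.
BACKEND_URL = "https://monitoring.example.org/api/process"

def request_result(site: str, variable: str, start: str, end: str) -> dict:
    """Ask the back-end to process one monitored data series."""
    payload = json.dumps({"site": site, "variable": variable,
                          "start": start, "end": end}).encode()
    req = urllib.request.Request(BACKEND_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # parsed result for the browser to display
```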

  19. [Instruments in Brazilian Sign Language for assessing the quality of life of the deaf population].

    PubMed

    Chaveiro, Neuma; Duarte, Soraya Bianca Reis; Freitas, Adriana Ribeiro de; Barbosa, Maria Alves; Porto, Celmo Celeno; Fleck, Marcelo Pio de Almeida

    2013-06-01

    To construct versions of the WHOQOL-BREF and WHOQOL-DIS instruments in Brazilian sign language to evaluate the Brazilian deaf population's quality of life. The methodology proposed by the World Health Organization (WHOQOL-BREF and WHOQOL-DIS) was used to construct instruments adapted to the deaf community using Brazilian Sign Language (Libras). The research for constructing the instrument took place in 13 phases: 1) creating the QUALITY OF LIFE sign; 2) developing the answer scales in Libras; 3) translation by a bilingual group; 4) synthesized version; 5) first back translation; 6) production of the version in Libras to be provided to the focal groups; 7) carrying out the Focal Groups; 8) review by a monolingual group; 9) revision by the bilingual group; 10) semantic/syntactic analysis and second back translation; 11) re-evaluation of the back translation by the bilingual group; 12) recording the version into the software; 13) developing the WHOQOL-BREF and WHOQOL-DIS software in Libras. Characteristics peculiar to the culture of the deaf population indicated the necessity of adapting the application methodology of focal groups composed of deaf people. The writing conventions of sign languages have not yet been consolidated, leading to difficulties in graphically registering the translation phases. Linguistic structures that caused major problems in translation were those that included idiomatic Portuguese expressions, for many of which there are no equivalent concepts between Portuguese and Libras. In the end, it was possible to create WHOQOL-BREF and WHOQOL-DIS software in Libras. The WHOQOL-BREF and the WHOQOL-DIS in Libras will allow the deaf to express themselves about their quality of life in an autonomous way, making it possible to investigate these issues more accurately.

  20. Research in millimeter wave techniques

    NASA Technical Reports Server (NTRS)

    Mcmillan, R. W.

    1977-01-01

    The following are investigated: (1) the design of a 183 GHz single-ended fundamental mixer to serve as a backup mixer to the subharmonic mixer for airborne applications; (2) attainment of 6 dB single-sideband conversion loss with the 6 GHz subharmonic mixer model, together with initial tests to determine the feasibility of pumping the mixer at ω_s/4; (3) additional ground-based radiometric measurements; and (4) derivation of equations for power transmission of wire grid interferometers, and initial tests to verify these equations.

  1. Tailoring Software for Multiple Processor Systems

    DTIC Science & Technology

    1982-10-01

    resource management decisions. Despite the lack of programming support, the use of multiple processor systems has grown substantially. Software has... making resource management decisions. Specifically, programmers need not allocate specific hardware resources to individual program components... Instead, such allocation decisions are automatically made based on high-level resource directives stated by application programmers, where each directive

  2. An affordable modular vehicle radar for landmine and IED detection

    NASA Astrophysics Data System (ADS)

    Daniels, David; Curtis, Paul; Dittmer, Jon; Hunt, Nigel; Graham, Blair; Allan, Robert

    2009-05-01

    This paper describes a vehicle mounted 8-channel radar system suitable for buried landmine and IED detection. The system is designed to find Anti Tank (AT) landmines and buried Improvised Explosive Devices (IEDs). The radar uses field-proven ground penetrating radar sub-system modules and is scalable to 16, 32 or 64 channels, for covering greater swathe widths and for providing higher cross track resolution. This offers the capability of detecting smaller targets down to a minimum dimension of 100 mm. The current rate of advance of the technology demonstrator is 10 kph; this can be increased to 20 kph where required. The data output is triggered via shaft encoder or GPS and, for each forward increment, is variable from a single byte per channel through to 512 samples per channel. Trials using an autonomous vehicle, combined with a COFDM wireless link for data and telemetry back to a base station, have proven successful, and the system architecture is described in this paper. The GPR array can be used as a standalone sensor or can be integrated with off-the-shelf software and a metal detection array.

  3. Advanced Wireless Sensor Nodes - MSFC

    NASA Technical Reports Server (NTRS)

    Varnavas, Kosta; Richeson, Jeff

    2017-01-01

    NASA field center Marshall Space Flight Center (Huntsville, AL), has invested in advanced wireless sensor technology development. Developments for a wireless microcontroller back-end were primarily focused on the commercial Synapse Wireless family of devices. These devices have many useful features for NASA applications, good characteristics and the ability to be programmed Over-The-Air (OTA). The effort has focused on two widely used sensor types, mechanical strain gauges and thermal sensors. Mechanical strain gauges are used extensively in NASA structural testing and even on vehicle instrumentation systems. Additionally, thermal monitoring with many types of sensors is extensively used. These thermal sensors include thermocouples of all types, resistive temperature devices (RTDs), diodes and other thermal sensor types. The wireless thermal board will accommodate all of these types of sensor inputs to an analog front end. The analog front end on each of the sensors interfaces to the Synapse wireless microcontroller, based on the Atmel Atmega128 device. Once the analog sensor output data is digitized by the onboard analog to digital converter (A/D), the data is available for analysis, computation or transmission. Various hardware features allow custom embedded software to manage battery power to enhance battery life. This technology development fits nicely into using numerous additional sensor front ends, including some of the low-cost printed circuit board capacitive moisture content sensors currently being developed at Auburn University.
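
    As a back-of-the-envelope illustration of the strain-gauge signal path described above, the sketch below converts raw counts from the Atmega128's 10-bit ADC into a quarter-bridge strain estimate. The amplifier gain, bridge excitation voltage, and gauge factor are assumed values for illustration, not MSFC's design parameters.

```python
# Assumed signal-chain constants (only the ADC resolution and the 2.56 V
# internal reference are actual Atmega128 characteristics).
ADC_BITS = 10          # Atmega128 on-chip ADC resolution
V_REF = 2.56           # volts, Atmega128 internal reference option
GAIN = 100.0           # assumed front-end instrumentation-amplifier gain
V_EXC = 3.3            # assumed bridge excitation voltage
GAUGE_FACTOR = 2.0     # typical metal-foil strain gauge

def counts_to_strain(counts: int, counts_zero: int) -> float:
    """Convert an ADC reading to strain via the quarter-bridge equation."""
    v = (counts - counts_zero) * V_REF / (2 ** ADC_BITS) / GAIN
    vr = v / V_EXC                      # bridge output ratio
    return -4.0 * vr / (GAUGE_FACTOR * (1.0 + 2.0 * vr))

print(f"{counts_to_strain(520, 512) * 1e6:.1f} microstrain")
```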

  4. Integration and validation of a data grid software

    NASA Astrophysics Data System (ADS)

    Carenton-Madiec, Nicolas; Berger, Katharina; Cofino, Antonio

    2014-05-01

    The Earth System Grid Federation (ESGF) Peer-to-Peer (P2P) system is a software infrastructure for the management, dissemination, and analysis of model output and observational data. The ESGF grid is composed of several types of nodes which have different roles. About 40 data nodes host model outputs and datasets using THREDDS catalogs. About 25 compute nodes offer remote visualization and analysis tools. About 15 index nodes crawl data node catalogs and implement faceted and federated search in a web interface. About 15 identity provider nodes manage accounts, authentication, and authorization. Here we present a full-scale test federation spread across different institutes in different countries, together with a Python test suite, both started in December 2013. The first objective of the test suite is to provide a simple tool that helps to test and validate a single data node and its closest index, compute, and identity provider peers. The next objective will be to run this test suite on every data node of the federation and therefore test and validate every single node of the whole federation. The suite already uses the nosetests, requests, myproxy-logon, subprocess, selenium, and fabric Python libraries in order to test web front ends, back ends, and security services. The goal of this project is to improve the quality of deliverables in the context of a small development team. Developers are widely spread around the world, working collaboratively and without hierarchy. This working arrangement highlighted the need for a federated integration, test, and validation process.
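
    A hedged sketch of the kind of per-node check such a suite can express with the requests library; the node hostname below is hypothetical, and the real suite covers much more (security services via myproxy-logon, browser front ends via selenium, remote execution via fabric).

```python
import requests

# Hypothetical data-node hostname; real node URLs come from the
# federation's configuration. A data node publishes its datasets
# through THREDDS catalogs.
DATA_NODE = "https://esgf-data.example.org"

def test_thredds_catalog_reachable():
    """Simplest data-node check: the THREDDS catalog must respond."""
    resp = requests.get(DATA_NODE + "/thredds/catalog.xml", timeout=30)
    assert resp.status_code == 200
    assert "catalog" in resp.text

def test_search_api_answers():
    """Index-node check: the federated search endpoint answers a query."""
    resp = requests.get(DATA_NODE + "/esg-search/search",
                        params={"limit": 1}, timeout=30)
    assert resp.status_code == 200
```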

  5. Front End Software for Online Database Searching Part 1: Definitions, System Features, and Evaluation.

    ERIC Educational Resources Information Center

    Hawkins, Donald T.; Levy, Louise R.

    1985-01-01

    This initial article in series of three discusses barriers inhibiting use of current online retrieval systems by novice users and notes reasons for front end and gateway online retrieval systems. Definitions, front end features, user interface, location (personal computer, host mainframe), evaluation, and strengths and weaknesses are covered. (16…

  6. End-to-end observatory software modeling using domain specific languages

    NASA Astrophysics Data System (ADS)

    Filgueira, José M.; Bec, Matthieu; Liu, Ning; Peng, Chien; Soto, José

    2014-07-01

    The Giant Magellan Telescope (GMT) is a 25-meter extremely large telescope that is being built by an international consortium of universities and research institutions. Its software and control system is being developed using a set of Domain Specific Languages (DSL) that supports a model driven development methodology integrated with an Agile management process. This approach promotes the use of standardized models that capture the component architecture of the system, that facilitate the construction of technical specifications in a uniform way, that facilitate communication between developers and domain experts and that provide a framework to ensure the successful integration of the software subsystems developed by the GMT partner institutions.

  7. Magnetic refrigeration system with separated inlet and outlet flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auringer, Jon Jay; Boeder, Andre Michael; Chell, Jeremy Jonathan

    An active magnetic regenerative (AMR) refrigerator apparatus can include at least one AMR bed with a first end and a second end and a first heat exchanger (HEX) with a first end and a second end. The AMR refrigerator can also include a first pipe that fluidly connects the first end of the first HEX to the first end of the AMR bed and a second pipe that fluidly connects the second end of the first HEX to the first end of the AMR bed. The first pipe can divide into two or more sub-passages at the AMR bed. The second pipe can divide into two or more sub-passages at the AMR bed. The sub-passages of the first pipe and the second pipe can interleave at the AMR bed.

  8. GiA Roots: software for the high throughput analysis of plant root system architecture.

    PubMed

    Galkovskyi, Taras; Mileyko, Yuriy; Bucksch, Alexander; Moore, Brad; Symonova, Olga; Price, Charles A; Topp, Christopher N; Iyer-Pascuzzi, Anjali S; Zurek, Paul R; Fang, Suqin; Harer, John; Benfey, Philip N; Weitz, Joshua S

    2012-07-26

    Characterizing root system architecture (RSA) is essential to understanding the development and function of vascular plants. Identifying RSA-associated genes also represents an underexplored opportunity for crop improvement. Software tools are needed to accelerate the pace at which quantitative traits of RSA are estimated from images of root networks. We have developed GiA Roots (General Image Analysis of Roots), a semi-automated software tool designed specifically for the high-throughput analysis of root system images. GiA Roots includes user-assisted algorithms to distinguish root from background and a fully automated pipeline that extracts dozens of root system phenotypes. Quantitative information on each phenotype, along with intermediate steps for full reproducibility, is returned to the end-user for downstream analysis. GiA Roots has a GUI front end and a command-line interface for interweaving the software into large-scale workflows. GiA Roots can also be extended to estimate novel phenotypes specified by the end-user. We demonstrate the use of GiA Roots on a set of 2393 images of rice roots representing 12 genotypes from the species Oryza sativa. We validate trait measurements against prior analyses of this image set that demonstrated that RSA traits are likely heritable and associated with genotypic differences. Moreover, we demonstrate that GiA Roots is extensible and an end-user can add functionality so that GiA Roots can estimate novel RSA traits. In summary, we show that the software can function as an efficient tool as part of a workflow to move from large numbers of root images to downstream analysis.
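
    To make the notion of a whole-network root phenotype concrete, here is a small illustrative computation on a binary root/background mask. These are generic traits in the spirit of the output such tools produce, not GiA Roots' published algorithms or API.

```python
import numpy as np

def simple_rsa_traits(mask: np.ndarray) -> dict:
    """Illustrative root-system traits from a binary root/background mask.

    Not GiA Roots' own algorithms; just the kind of whole-network
    phenotype (area, extent, depth/width ratio) such tools report.
    """
    ys, xs = np.nonzero(mask)
    width = xs.max() - xs.min() + 1
    depth = ys.max() - ys.min() + 1
    return {
        "network_area_px": int(mask.sum()),        # foreground pixel count
        "depth_width_ratio": depth / width,        # rooting depth vs spread
        "solidity": mask.sum() / (width * depth),  # fill of the bounding box
    }

# Example on a toy 4x4 mask standing in for a segmented root image.
toy = np.array([[0, 1, 0, 0],
                [0, 1, 1, 0],
                [0, 1, 0, 0],
                [0, 1, 0, 0]], dtype=bool)
print(simple_rsa_traits(toy))
```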

  9. The Co-Creation of Information Systems

    ERIC Educational Resources Information Center

    Gomillion, David

    2013-01-01

    In information systems development, end-users have shifted in their role: from consumers of information to informants for requirements to developers of systems. This shift in the role of users has also changed how information systems are developed. Instead of systems developers creating specifications for software or end-users creating small…

  10. The software architecture to control the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Oya, I.; Füßling, M.; Antonino, P. O.; Conforti, V.; Hagge, L.; Melkumyan, D.; Morgenstern, A.; Tosti, G.; Schwanke, U.; Schwarz, J.; Wegner, P.; Colomé, J.; Lyard, E.

    2016-07-01

    The Cherenkov Telescope Array (CTA) project is an initiative to build two large arrays of Cherenkov gamma-ray telescopes. CTA will be deployed as two installations, one in the northern and the other in the southern hemisphere, containing dozens of telescopes of different sizes. CTA is a big step forward in the field of ground-based gamma-ray astronomy, not only because of the expected scientific return, but also due to the order-of-magnitude larger scale of the instrument to be controlled. The performance requirements associated with such a large and distributed astronomical installation require a thoughtful analysis to determine the best software solutions. The array control and data acquisition (ACTL) work-package within the CTA initiative will deliver the software to control and acquire the data from the CTA instrumentation. In this contribution we present the current status of the formal ACTL system decomposition into software building blocks and the relationships among them. The system is modelled via the Systems Modelling Language (SysML) formalism. To cope with the complexity of the system, this architecture model is sub-divided into different perspectives. The relationships with the stakeholders and external systems are used to create the first perspective, the context of the ACTL software system. Use cases are employed to describe the interaction of those external elements with the ACTL system and are traced to a hierarchy of functionalities (abstract system functions) describing the internal structure of the ACTL system. These functions are then traced to fully specified logical elements (software components), the deployment of which as technical elements, is also described. This modelling approach allows us to decompose the ACTL software into the elements to be created and the flow of information within the system, providing us with a clear way to identify sub-system interdependencies. This architectural approach allows us to build the ACTL system model and trace requirements to deliverables (source code, documentation, etc.), and permits the implementation of a flexible use-case driven software development approach thanks to the traceability from use cases to the logical software elements. The Alma Common Software (ACS) container/component framework, used for the control of the Atacama Large Millimeter/submillimeter Array (ALMA), is the basis for the ACTL software and as such it is considered an integral part of the software architecture.

  11. GeoMEx: Geographic Information System (GIS) Prototype for Mars Express Data

    NASA Astrophysics Data System (ADS)

    Manaud, N.; Frigeri, A.; Ivanov, A. B.

    2013-09-01

    As of today almost a decade of observational data have been returned by the multidisciplinary instruments on-board the ESA's Mars Express spacecraft. All data are archived into the ESA's Planetary Science Archive (PSA), which is the central repository for all ESA's Solar System missions [1]. Data users can perform advanced queries and retrieve data from the PSA using graphical and map-based search interfaces, or via direct FTP download [2]. However the PSA still offers limited geometrical search and visualisation capabilities that are essential for scientists to identify their data of interest. A former study has shown [3] that this limitation is mostly due to the fact that (1) only a subset of the instruments' observation geometry information has been modeled and ingested into the PSA, and (2) the access to that information from GIS software is impossible without going through a cumbersome and undocumented process. With the increasing number of Mars GIS data sets available to the community [4], GIS software has become an invaluable tool for researchers to capture, manage, visualise, and analyse data from various sources. Although Mars Express surface imaging data are natural candidates for use in a GIS environment, the integration of data from the non-imaging instruments (subsurface, atmosphere, plasma) is also being investigated [5]. The objective of this work is to develop a GIS prototype that will integrate the observation geometry information of all the Mars Express instruments into a spatial database that can be accessed from external GIS software using the standard WMS and WFS protocols. We will first focus on the integration of surface and subsurface instrument data (HRSC, OMEGA, MARSIS). In addition to the geometry information, base and context maps of Mars derived from surface mapping instrument data will also be ingested into the system. The system back-end architecture will be implemented using open-source GIS frameworks: PostgreSQL/PostGIS for the database, and MapServer for the web publishing module. Interfaces with existing GIS front-end software (such as QGIS, GRASS, ArcView, or OpenLayers) will be investigated and tested in a second phase. This prototype is primarily intended to be used by the Mars Express instrument teams in support of their scientific investigations. It will also be used by the mission Archive Scientist in support of the data validation and PSA interface requirements definition tasks. Depending on its success, this prototype might be used in the future to demonstrate the benefit of a GIS component integration to ESA's planetary science operations planning systems.
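
    Since the back-end publishes geometry through standard OGC protocols, any WFS client can query it. A minimal sketch follows; the service URL and layer name are hypothetical, while the request parameters themselves are standard OGC WFS keywords.

```python
import requests

# Hypothetical MapServer endpoint and layer name; the prototype's actual
# service URLs are internal to the Mars Express teams.
WFS_URL = "https://geomex.example.org/cgi-bin/mapserv"

params = {
    "service": "WFS",               # standard OGC WFS parameters
    "version": "1.1.0",
    "request": "GetFeature",
    "typename": "hrsc_footprints",  # e.g. HRSC observation footprints
    "maxfeatures": 10,
}
resp = requests.get(WFS_URL, params=params, timeout=60)
resp.raise_for_status()
print(resp.text[:500])  # GML describing observation geometry
```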

  12. Culture and Creativity: World of Warcraft Modding in China and the US

    NASA Astrophysics Data System (ADS)

    Kow, Yong Ming; Nardi, Bonnie

    Modding - end-user modification of commercial hardware and software - can be traced back at least to 1961 when Spacewar! was developed by a group of MIT students on a DEC PDP-1. Spacewar! evolved into arcade games including Space Wars produced in 1977 by Cinematronics (Sotamaa 2003). In 1992, players altering Wolfenstein 3-D (1992), a first person shooter game made by id Software, overwrote the graphics and sounds by editing the game files. Learning from this experience, id Software released Doom in 1993 with isolated media files and open source code for players to develop custom maps, images, sounds, and other utilities. Players were able to pass on their modifications to others. By 1996, with the release of Quake, end-user modifications had come to be known as "mods," and modding was an accepted part of the gaming community (Kucklich 2005; Postigo 2008a, b). Since late-2005, we have been studying World of Warcraft (WoW) in which the use of mods is an important aspect of player practice (Nardi and Harris 2006; Nardi et al. 2007). Technically minded players with an interest in extending the game write mods and make them available to players for free download on distribution sites. Most modders work for free, but the distribution sites are commercial enterprises with advertising.

  13. Advanced software development workstation project ACCESS user's guide

    NASA Technical Reports Server (NTRS)

    1990-01-01

    ACCESS is a knowledge based software information system designed to assist the user in modifying retrieved software to satisfy user specifications. A user's guide is presented for the knowledge engineer who wishes to create for ACCESS a knowledge base consisting of representations of objects in some software system. This knowledge is accessible to an end user who wishes to use the catalogued software objects to create a new application program or an input stream for an existing system. The application specific portion of an ACCESS knowledge base consists of a taxonomy of object classes, as well as instances of these classes. All objects in the knowledge base are stored in an associative memory. ACCESS provides a standard interface for the end user to browse and modify objects. In addition, the interface can be customized by the addition of application specific data entry forms and by specification of display order for the taxonomy and object attributes. These customization options are described.
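
    A minimal sketch, with invented class and attribute names, of the taxonomy-of-classes-plus-instances organization described above: a root object class, application-specific subclasses, and an associative store keyed for retrieval.

```python
# Invented names; illustrates the organization, not ACCESS's actual API.
class SoftwareObject:
    """Root of the application-specific taxonomy."""
    def __init__(self, name: str, **attributes):
        self.name = name
        self.attributes = attributes

class Subroutine(SoftwareObject):
    """A callable unit the end user can browse, modify, and reuse."""

class InputStream(SoftwareObject):
    """An input deck for an existing system, assembled from objects."""

# Instances live in an associative store keyed by name for retrieval.
memory: dict[str, SoftwareObject] = {}
obj = Subroutine("trajectory_update", language="Fortran", arguments=3)
memory[obj.name] = obj
print(memory["trajectory_update"].attributes)
```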

  14. Pattern recognition monitoring of PEM fuel cell

    DOEpatents

    Meltser, M.A.

    1999-08-31

    The CO concentration in the H₂ feed stream to a PEM fuel cell stack is monitored by measuring current and voltage behavior patterns from an auxiliary cell attached to the end of the stack. The auxiliary cell is connected to the same oxygen and hydrogen feed manifolds that supply the stack, and discharges through a constant load. Pattern recognition software compares the current and voltage patterns from the auxiliary cell to current and voltage signatures determined from a reference cell similar to the auxiliary cell and operated under controlled conditions over a wide range of CO concentrations in the H₂ fuel stream. 4 figs.
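
    A toy illustration of the signature-matching idea (not the patented method itself): compare a measured auxiliary-cell pattern against reference signatures recorded at known CO concentrations and report the nearest one. All numbers are invented.

```python
import numpy as np

# Invented reference data: CO ppm -> (current, voltage) signature vector
# recorded from a reference cell under controlled conditions.
reference = {
    0:   np.array([1.00, 0.70]),
    50:  np.array([0.85, 0.62]),
    100: np.array([0.70, 0.55]),
}

def estimate_co_ppm(measured: np.ndarray) -> int:
    """Return the reference CO concentration nearest the measured pattern."""
    return min(reference,
               key=lambda ppm: np.linalg.norm(measured - reference[ppm]))

print(estimate_co_ppm(np.array([0.86, 0.61])))  # -> 50
```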

  15. Pattern recognition monitoring of PEM fuel cell

    DOEpatents

    Meltser, Mark Alexander

    1999-01-01

    The CO concentration in the H₂ feed stream to a PEM fuel cell stack is monitored by measuring current and voltage behavior patterns from an auxiliary cell attached to the end of the stack. The auxiliary cell is connected to the same oxygen and hydrogen feed manifolds that supply the stack, and discharges through a constant load. Pattern recognition software compares the current and voltage patterns from the auxiliary cell to current and voltage signatures determined from a reference cell similar to the auxiliary cell and operated under controlled conditions over a wide range of CO concentrations in the H₂ fuel stream.

  16. LimsPortal and BonsaiLIMS: development of a lab information management system for translational medicine

    PubMed Central

    2011-01-01

    Background Laboratory Information Management Systems (LIMS) are an increasingly important part of modern laboratory infrastructure. As typically very sophisticated software products, LIMS often require considerable resources to select, deploy and maintain. Larger organisations may have access to specialist IT support to assist with requirements elicitation and software customisation; however, smaller groups will often have limited IT support to perform the kind of iterative development that can resolve the difficulties that biologists often have when specifying requirements. Translational medicine aims to accelerate the process of treatment discovery by bringing together multiple disciplines to discover new approaches to treating disease, or novel applications of existing treatments. The diverse set of disciplines and complexity of processing procedures involved, especially with the use of high throughput technologies, bring difficulties in customizing a generic LIMS to provide a single system for managing sample-related data within a translational medicine research setting, especially where limited IT support is available. Results We have designed and developed a LIMS, BonsaiLIMS, around a very simple data model that can be easily implemented using a variety of technologies, and can be easily extended as specific requirements dictate. A reference implementation using an Oracle 11g database and the Python web framework Django is presented. Conclusions By focusing on a minimal feature set and a modular design we have been able to deploy the BonsaiLIMS system very quickly. The benefits to our institute have been the avoidance of the prolonged implementation timescales, budget overruns, scope creep, off-specifications and user fatigue issues that typify many enterprise software implementations. The transition away from using local, uncontrolled records in spreadsheet and paper formats to a centrally held, secured and backed-up database brings the immediate benefits of improved data visibility, audit and overall data quality. The open-source availability of this software allows others to rapidly implement a LIMS which in itself might sufficiently address user requirements. In situations where this software does not meet requirements, it can serve to elicit more accurate specifications from end-users for a more heavyweight LIMS by acting as a demonstrable prototype. PMID:21569484
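
    For flavor, a minimal Django model sketch in the spirit of a deliberately simple LIMS data model; the model and field names here are assumptions for illustration, not BonsaiLIMS's published schema.

```python
from django.db import models

class Sample(models.Model):
    """A tracked laboratory sample (hypothetical fields)."""
    barcode = models.CharField(max_length=64, unique=True)
    sample_type = models.CharField(max_length=32)
    received = models.DateTimeField(auto_now_add=True)
    # Aliquots point back at their parent sample.
    parent = models.ForeignKey("self", null=True, blank=True,
                               on_delete=models.SET_NULL,
                               related_name="aliquots")

class Event(models.Model):
    """An audit-friendly record of what happened to a sample, and when."""
    sample = models.ForeignKey(Sample, on_delete=models.CASCADE,
                               related_name="events")
    description = models.CharField(max_length=255)
    timestamp = models.DateTimeField(auto_now_add=True)
```

    A two-table core like this keeps the audit trail central, which matches the paper's emphasis on data visibility and audit over feature breadth.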

  17. LimsPortal and BonsaiLIMS: development of a lab information management system for translational medicine.

    PubMed

    Bath, Timothy G; Bozdag, Selcuk; Afzal, Vackar; Crowther, Daniel

    2011-05-13

    Laboratory Information Management Systems (LIMS) are an increasingly important part of modern laboratory infrastructure. As typically very sophisticated software products, LIMS often require considerable resources to select, deploy and maintain. Larger organisations may have access to specialist IT support to assist with requirements elicitation and software customisation; however, smaller groups will often have limited IT support to perform the kind of iterative development that can resolve the difficulties that biologists often have when specifying requirements. Translational medicine aims to accelerate the process of treatment discovery by bringing together multiple disciplines to discover new approaches to treating disease, or novel applications of existing treatments. The diverse set of disciplines and complexity of processing procedures involved, especially with the use of high throughput technologies, bring difficulties in customizing a generic LIMS to provide a single system for managing sample-related data within a translational medicine research setting, especially where limited IT support is available. We have designed and developed a LIMS, BonsaiLIMS, around a very simple data model that can be easily implemented using a variety of technologies, and can be easily extended as specific requirements dictate. A reference implementation using an Oracle 11g database and the Python web framework Django is presented. By focusing on a minimal feature set and a modular design we have been able to deploy the BonsaiLIMS system very quickly. The benefits to our institute have been the avoidance of the prolonged implementation timescales, budget overruns, scope creep, off-specifications and user fatigue issues that typify many enterprise software implementations. The transition away from using local, uncontrolled records in spreadsheet and paper formats to a centrally held, secured and backed-up database brings the immediate benefits of improved data visibility, audit and overall data quality. The open-source availability of this software allows others to rapidly implement a LIMS which in itself might sufficiently address user requirements. In situations where this software does not meet requirements, it can serve to elicit more accurate specifications from end-users for a more heavyweight LIMS by acting as a demonstrable prototype.

  18. A Web-Based Information System for Field Data Management

    NASA Astrophysics Data System (ADS)

    Weng, Y. H.; Sun, F. S.

    2014-12-01

    A web-based field data management system has been designed and developed to allow field geologists to store, organize, manage, and share field data online. System requirements were first analyzed and clearly defined regarding what data are to be stored, who the potential users are, and what system functions are needed in order to deliver the right data in the right way to the right user. A 3-tiered architecture was adopted to create this secure, scalable system, which consists of a web browser at the front end, a database at the back end, and a functional logic server in the middle. Specifically, HTML, CSS, and JavaScript were used to implement the user interface in the front-end tier, the Apache web server runs PHP scripts, and a MySQL server is used for the back-end database. The system accepts various types of field information, including image, audio, video, numeric, and text. It allows users to select data and populate them on either Google Earth or Google Maps for the examination of spatial relations. It also makes the sharing of field data easy by converting them into XML format that is both human-readable and machine-readable, and thus ready for reuse.
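
    A short sketch of the XML export step using Python's standard library; the element names are assumptions for illustration rather than the system's actual schema.

```python
import xml.etree.ElementTree as ET

def record_to_xml(record: dict) -> str:
    """Serialize one field observation so it is human- and machine-readable.

    Tag names are hypothetical stand-ins for the system's schema.
    """
    root = ET.Element("observation", id=str(record["id"]))
    for key in ("latitude", "longitude", "lithology", "notes"):
        child = ET.SubElement(root, key)
        child.text = str(record[key])
    return ET.tostring(root, encoding="unicode")

print(record_to_xml({"id": 7, "latitude": 40.1, "longitude": -83.0,
                     "lithology": "shale", "notes": "bedding 030/15"}))
```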

  19. Development of a Multi-frequency Interferometer Telescope for Radio Astronomy (MITRA)

    NASA Astrophysics Data System (ADS)

    Ingala, Dominique Guelord Kumamputu

    2015-03-01

    This dissertation describes the development and construction of the Multi-frequency Interferometer Telescope for Radio Astronomy (MITRA) at the Durban University of Technology. The MITRA station consists of 2 antenna arrays separated by a baseline distance of 8 m. Each array consists of 8 Log-Periodic Dipole Antennas (LPDAs) operating from 200 MHz to 800 MHz. The design and construction of the LPDA antenna and receiver system is described. The receiver topology provides an equivalent noise temperature of 113.1 K and 55.1 dB of gain. The Intermediate Frequency (IF) stage was designed to produce a fixed IF frequency of 800 MHz. The digital back-end and correlator were implemented using a low-cost Software Defined Radio (SDR) platform and GNU Radio software. GNU Octave was used for data analysis to generate the relevant received-signal parameters, including total power, real, imaginary, magnitude, and phase components. Measured results show that interference fringes were successfully detected within the bandwidth of the receiver using a Radio Frequency (RF) generator as a simulated source. This research was presented at the IEEE Africon 2013 / URSI Session Mauritius, and published in the proceedings.
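
    The received-signal parameters named above can be derived from two channels of complex baseband samples roughly as follows; the sketch uses simulated data, not MITRA measurements, and simplifies the correlator to a single averaged product.

```python
import numpy as np

def fringe_parameters(x1: np.ndarray, x2: np.ndarray) -> dict:
    """Derive the received-signal quantities named in the text from two
    channels of complex baseband samples."""
    corr = np.mean(x1 * np.conj(x2))   # complex cross-correlation
    return {
        "total_power": float(np.mean(np.abs(x1) ** 2 + np.abs(x2) ** 2)),
        "real": float(corr.real),
        "imag": float(corr.imag),
        "magnitude": float(np.abs(corr)),
        "phase_deg": float(np.degrees(np.angle(corr))),
    }

# Simulated source with a 30-degree geometric phase delay between arrays.
n = 4096
s = np.random.randn(n) + 1j * np.random.randn(n)
print(fringe_parameters(s, s * np.exp(-1j * np.deg2rad(30))))
```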

  20. APBSmem: A Graphical Interface for Electrostatic Calculations at the Membrane

    PubMed Central

    Callenberg, Keith M.; Choudhary, Om P.; de Forest, Gabriel L.; Gohara, David W.; Baker, Nathan A.; Grabe, Michael

    2010-01-01

    Electrostatic forces are one of the primary determinants of molecular interactions. They help guide the folding of proteins, increase the binding of one protein to another and facilitate protein-DNA and protein-ligand binding. A popular method for computing the electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation, and there are several easy-to-use software packages available that solve the PB equation for soluble proteins. Here we present a freely available program, called APBSmem, for carrying out these calculations in the presence of a membrane. The Adaptive Poisson-Boltzmann Solver (APBS) is used as a back-end for solving the PB equation, and a Java-based graphical user interface (GUI) coordinates a set of routines that introduce the influence of the membrane, determine its placement relative to the protein, and set the membrane potential. The software Jmol is embedded in the GUI to visualize the protein inserted in the membrane before the calculation and the electrostatic potential after completing the computation. We expect that the ease with which the GUI allows one to carry out these calculations will make this software a useful resource for experimenters and computational researchers alike. Three examples of membrane protein electrostatic calculations are carried out to illustrate how to use APBSmem and to highlight the different quantities of interest that can be calculated. PMID:20949122
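
    For reference, the equation at the heart of such calculations is the Poisson-Boltzmann equation, shown below in a common schematic form; the symbols follow general usage (with unit-dependent constants absorbed into the coefficients) and are not notation taken from the paper.

```latex
% Nonlinear Poisson-Boltzmann equation in reduced (dimensionless-potential)
% form, as solved numerically by APBS-style continuum solvers:
%   \epsilon(\mathbf{r})      position-dependent dielectric coefficient
%   \bar{\kappa}(\mathbf{r})  ion-accessibility-modified screening coefficient
%   \rho_f(\mathbf{r})        fixed (solute) charge density
\nabla \cdot \left[ \epsilon(\mathbf{r}) \, \nabla \phi(\mathbf{r}) \right]
  - \bar{\kappa}^{2}(\mathbf{r}) \sinh\!\left( \phi(\mathbf{r}) \right)
  = -4\pi \rho_f(\mathbf{r})
```

    The membrane enters this picture as an additional low-dielectric, ion-excluding slab in the coefficient maps, which is what the APBSmem routines set up before handing the problem to APBS.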

  1. APBSmem: a graphical interface for electrostatic calculations at the membrane.

    PubMed

    Callenberg, Keith M; Choudhary, Om P; de Forest, Gabriel L; Gohara, David W; Baker, Nathan A; Grabe, Michael

    2010-09-29

    Electrostatic forces are one of the primary determinants of molecular interactions. They help guide the folding of proteins, increase the binding of one protein to another and facilitate protein-DNA and protein-ligand binding. A popular method for computing the electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation, and there are several easy-to-use software packages available that solve the PB equation for soluble proteins. Here we present a freely available program, called APBSmem, for carrying out these calculations in the presence of a membrane. The Adaptive Poisson-Boltzmann Solver (APBS) is used as a back-end for solving the PB equation, and a Java-based graphical user interface (GUI) coordinates a set of routines that introduce the influence of the membrane, determine its placement relative to the protein, and set the membrane potential. The software Jmol is embedded in the GUI to visualize the protein inserted in the membrane before the calculation and the electrostatic potential after completing the computation. We expect that the ease with which the GUI allows one to carry out these calculations will make this software a useful resource for experimenters and computational researchers alike. Three examples of membrane protein electrostatic calculations are carried out to illustrate how to use APBSmem and to highlight the different quantities of interest that can be calculated.

  2. High Fidelity CFD Analysis and Validation of Rotorcraft Gearbox Aerodynamics Under Operational and Oil-Out Conditions

    NASA Technical Reports Server (NTRS)

    Kunz, Robert F.

    2014-01-01

    This document represents the evolving formal documentation of the NPHASE-PSU computer code. Version 3.15 is being delivered along with the software to NASA in 2013. Significant upgrades to NPHASE-PSU have been made since the first delivery of draft documentation to DARPA and USNRC in 2006. These include a much lighter, faster, and more memory-efficient face-based front end; support for arbitrary polyhedra in the front end, flow solver, and back end; a generalized homogeneous multiphase capability; and several two-fluid modelling and algorithmic elements. Specific capabilities installed for the NASA Gearbox Windage Aerodynamics NRA are included in this version: the Hybrid Immersed Overset Boundary Method (HOIBM) [Noack et al. (2009)]; periodic boundary conditions for multiple frames of reference; a fully generalized immersed boundary method; fully generalized conjugate heat transfer; droplet deposition, bouncing, and splashing models; and film transport and breakup.

  3. Goddard Space Flight Center's Structural Dynamics Data Acquisition System

    NASA Technical Reports Server (NTRS)

    McLeod, Christopher

    2004-01-01

    Turnkey Commercial Off The Shelf (COTS) data acquisition systems typically perform well and meet most of the objectives of the manufacturer. The problem is that they seldom meet most of the objectives of the end user. The analysis software, if any, is unlikely to be tailored to the end users specific application; and there is seldom the chance of incorporating preferred algorithms to solve unique problems. Purchasing a customized system allows the end user to get a system tailored to the actual application, but the cost can be prohibitive. Once the system has been accepted, future changes come with a cost and response time that's often not workable. When it came time to replace the primary digital data acquisition system used in the Goddard Space Flight Center's Structural Dynamics Test Section, the decision was made to use a combination of COTS hardware and in-house developed software. The COTS hardware used is the DataMAX II Instrumentation Recorder built by R.C. Electronics Inc. and a desktop Pentium 4 computer system. The in-house software was developed using MATLAB from The MathWorks. This paper will describe the design and development of the new data acquisition and analysis system.

  4. Goddard Space Flight Center's Structural Dynamics Data Acquisition System

    NASA Technical Reports Server (NTRS)

    McLeod, Christopher

    2004-01-01

    Turnkey Commercial Off The Shelf (COTS) data acquisition systems typically perform well and meet most of the objectives of the manufacturer. The problem is that they seldom meet most of the objectives of the end user. The analysis software, if any, is unlikely to be tailored to the end users specific application; and there is seldom the chance of incorporating preferred algorithms to solve unique problems. Purchasing a customized system allows the end user to get a system tailored to the actual application, but the cost can be prohibitive. Once the system has been accepted, future changes come with a cost and response time that's often not workable. When it came time to replace the primary digital data acquisition system used in the Goddard Space Flight Center's Structural Dynamics Test Section, the decision was made to use a combination of COTS hardware and in-house developed software. The COTS hardware used is the DataMAX II Instrumentation Recorder built by R.C. Electronics Inc. and a desktop Pentium 4 computer system. The in-house software was developed using MATLAB from The MathWorks. This paper will describe the design and development of the new data acquisition and analysis system.

  5. Sub-cooled liquid nitrogen cryogenic system with neon turbo-refrigerator for HTS power equipment

    NASA Astrophysics Data System (ADS)

    Yoshida, S.; Hirai, H.; Nara, N.; Ozaki, S.; Hirokawa, M.; Eguchi, T.; Hayashi, H.; Iwakuma, M.; Shiohara, Y.

    2014-01-01

    We developed a prototype sub-cooled liquid nitrogen (LN) circulation system for HTS power equipment. The system consists of a neon turbo-Brayton refrigerator with a LN sub-cooler and a LN circulation pump unit. The neon refrigerator has more than 2 kW of cooling power at 65 K. The LN sub-cooler is a plate-fin type heat exchanger and is installed in a refrigerator cold box. In order to carry out the system performance tests, a dummy cryostat with an electric heater was installed in place of actual HTS power equipment. Sub-cooled LN is delivered into the sub-cooler by the LN circulation pump and cooled within it. After the sub-cooler, sub-cooled LN flows out from the cold box to the dummy cryostat, and comes back to the pump unit. The system can control the outlet sub-cooled LN temperature by adjusting the refrigerator cooling power, which is automatically controlled via the turbo-compressor rotational speed. In the performance tests, we abruptly increased the electric heater power from 200 W to 1300 W and confirmed that the temperature fluctuation was about ±1 K. We present the cryogenic system details and performance test results in this paper.

  6. Secure it now or secure it later: the benefits of addressing cyber-security from the outset

    NASA Astrophysics Data System (ADS)

    Olama, Mohammed M.; Nutaro, James

    2013-05-01

    The majority of funding for research and development (R&D) in cyber-security is focused on the end of the software lifecycle, where systems have been deployed or are nearing deployment. Recruiting of cyber-security personnel is similarly focused on end-of-life expertise. By emphasizing cyber-security at these late stages, security problems are found and corrected when it is most expensive to do so, thus increasing the cost of owning and operating complex software systems. Worse, expenditures on expensive security measures often mean less money for innovative developments. These unwanted increases in cost and potential slowing of innovation are unavoidable consequences of an approach to security that finds and remediates faults after software has been implemented. We argue that software security can be improved and the total cost of a software system can be substantially reduced by an appropriate allocation of resources to the early stages of a software project. By adopting a similar allocation of R&D funds to the early stages of the software lifecycle, we propose that the costs of cyber-security can be better controlled and, consequently, the positive effects of this R&D on industry will be much more pronounced.

  7. Talking Back: Weapons, Warfare, and Feedback

    DTIC Science & Technology

    2010-04-01

    realize that these laws are not laws of physics. They don't allow for performance or effectiveness comparisons either, as they don't have a common... the weapon's next software update. Software updates are done by physical connections, like most legacy systems, as well as by secure data link... Generally the land-based Air Force squadrons use physical connections due to the increased reliability, while sea-based squadrons use the wireless

  8. Using MATLAB Software on the Peregrine System | High-Performance Computing

    Science.gov Websites

    Learn how to run MATLAB software in batch mode on the Peregrine system. Below is an example MATLAB job in batch (non-interactive) mode. To try the example out, create both matlabTest.sub and ... /$USER. In this example, it is also the directory into which MATLAB will write the output file x.dat

  9. Production of Reliable Flight Crucial Software: Validation Methods Research for Fault Tolerant Avionics and Control Systems Sub-Working Group Meeting

    NASA Technical Reports Server (NTRS)

    Dunham, J. R. (Editor); Knight, J. C. (Editor)

    1982-01-01

    The state of the art in the production of crucial software for flight control applications was addressed. The association between reliability metrics and software is considered. Thirteen software development projects are discussed. A short-term need for research in the areas of tool development and software fault tolerance was indicated. For the long term, research in formal verification or proof methods was recommended. Formal specification and software reliability modeling were recommended as topics for both short- and long-term research.

  10. Emission properties and back-bombardment for CeB₆ compared to LaB₆

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakr, Mahmoud, E-mail: m-a-bakr@iae.kyoto-u.ac.jp; Kawai, M.; Kii, T.

    The emission properties of CeB₆ compared to LaB₆ thermionic cathodes have been measured using an electrostatic DC gun. Obtaining knowledge of the emission properties is the first step in understanding the back-bombardment effect that limits wide usage of thermionic radio-frequency electron guns. The effect of back-bombardment electrons on CeB₆ compared to LaB₆ was studied using a numerical simulation model. The results show that for 6 μs pulse duration with input radio-frequency power of 8 MW, CeB₆ should experience a 14% lower temperature increase and a 21% lower current density rise compared to LaB₆. We conclude that CeB₆ has the potential to become the future replacement for LaB₆ thermionic cathodes in radio-frequency electron guns.

  11. Diagnostic system for measuring temperature, pressure, CO.sub.2 concentration and H.sub.2O concentration in a fluid stream

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Partridge, Jr., William P.; Jatana, Gurneesh Singh; Yoo, Ji Hyung

    A diagnostic system for measuring temperature, pressure, CO.sub.2 concentration and H.sub.2O concentration in a fluid stream is described. The system may include one or more probes that sample the fluid stream spatially, temporally and over ranges of pressure and temperature. Laser light sources are directed down pitch optical cables, through a lens and to a mirror, where the light sources are reflected back, through the lens to catch optical cables. The light travels through the catch optical cables to detectors, which provide electrical signals to a processor. The processor utilizes the signals to calculate CO.sub.2 concentration based on the temperatures derived from H.sub.2O vapor concentration. A probe for sampling CO.sub.2 and H.sub.2O vapor concentrations is also disclosed. Various mechanical features interact together to ensure the pitch and catch optical cables are properly aligned with the lens during assembly and use.

  12. 77 FR 72368 - Privacy Act of 1974; Notice of a New System of Records, Enterprise Wide Operations Data Store

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    ...-end repository to manage various reporting, pooling, and risk management activities associated with... records is to serve as a central back-end repository to house loan origination and servicing, security...

  13. Experimental demonstration of record high 19.125 Gb/s real-time end-to-end dual-band optical OFDM transmission over 25 km SMF in a simple EML-based IMDD system.

    PubMed

    Giddings, R P; Hugues-Salas, E; Tang, J M

    2012-08-27

    Record high 19.125 Gb/s real-time end-to-end dual-band optical OFDM (OOFDM) transmission is experimentally demonstrated, for the first time, in a simple electro-absorption modulated laser (EML)-based 25 km standard SMF system using intensity modulation and direct detection (IMDD). Adaptively modulated baseband (0-2 GHz) and passband (6.125 ± 2 GHz) OFDM RF sub-bands, supporting line rates of 10 Gb/s and 9.125 Gb/s respectively, are independently generated and detected with FPGA-based DSP clocked at only 100 MHz and DACs/ADCs operating at sampling speeds as low as 4 GS/s. The two OFDM sub-bands are electrically frequency-division-multiplexed (FDM) for intensity modulation of a single optical carrier by an EML. To maximize and balance the signal transmission performance of each sub-band, on-line adaptive features and on-line performance monitoring are fully exploited to optimize key OOFDM transceiver and system parameters, including subcarrier characteristics within each individual OFDM sub-band, total and relative sub-band powers, and EML operating conditions. The achieved 19.125 Gb/s over 25 km SMF OOFDM transmission system has an optical power budget of 13.5 dB, and shows almost identical bit error rate (BER) performances for both the baseband and passband signals. In addition, experimental investigations also indicate that the maximum achievable transmission capacity of the present system is mainly determined by the EML frequency chirp-enhanced chromatic dispersion effect, and that the passband BER performance is not affected by the intermixing effect induced between the two sub-bands, which, however, gives a 1.2 dB optical power penalty to the baseband signal transmission.
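
    A minimal numpy sketch of the dual-band idea under scaled-down, illustrative parameters (16 QPSK subcarriers per band, normalized sample rate; the real system's 4 GS/s converters and 6.125 GHz RF carrier are not modeled): two independently generated OFDM basebands are frequency-division-multiplexed into one real-valued drive signal for the intensity modulator.

        import numpy as np

        rng = np.random.default_rng(0)
        n_sc, n_sym = 16, 100                    # subcarriers and OFDM symbols per band
        f_rf = 0.25                              # normalized passband centre frequency

        def ofdm_baseband(n_sc, n_sym):
            """QPSK-loaded OFDM via IFFT, with Hermitian symmetry for a real output."""
            qpsk = np.exp(1j * (np.pi / 2 * rng.integers(0, 4, (n_sym, n_sc)) + np.pi / 4))
            spec = np.zeros((n_sym, 2 * (n_sc + 1)), dtype=complex)
            spec[:, 1:n_sc + 1] = qpsk                 # positive-frequency bins
            spec[:, -n_sc:] = np.conj(qpsk[:, ::-1])   # mirrored conjugate bins
            return np.fft.ifft(spec, axis=1).real.ravel()

        base = ofdm_baseband(n_sc, n_sym)              # stand-in for the 0-2 GHz band
        pass_band = ofdm_baseband(n_sc, n_sym)         # second, independent sub-band
        t = np.arange(base.size)
        drive = base + pass_band * np.cos(2 * np.pi * f_rf * t)   # electrical FDM
        print(drive.shape)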

  14. Study on Network Error Analysis and Locating based on Integrated Information Decision System

    NASA Astrophysics Data System (ADS)

    Yang, F.; Dong, Z. H.

    2017-10-01

    Integrated information decision system (IIDS) integrates multiple sub-systems developed by many facilities, comprising nearly a hundred kinds of software that provide various services such as email, short messages, drawing, and sharing. Because the underlying protocols differ and user standards are not unified, many errors occur during setup, configuration, and operation, which seriously affect usage. Because these errors are varied and may occur in different operation phases and stages, in different TCP/IP protocol layers, and in different sub-system software, it is necessary to design a network error analysis and locating tool for IIDS to solve the above problems. This paper studies network error analysis and locating for IIDS, providing theoretical and technical support for the operation and communication of IIDS.

  15. Designing for Change: Minimizing the Impact of Changing Requirements in the Later Stages of a Spaceflight Software Project

    NASA Technical Reports Server (NTRS)

    Allen, B. Danette

    1998-01-01

    In the traditional 'waterfall' model of the software project life cycle, the Requirements Phase ends and flows into the Design Phase, which ends and flows into the Development Phase. Unfortunately, the process rarely, if ever, works so smoothly in practice. Instead, software developers often receive new requirements, or modifications to the original requirements, well after the earlier project phases have been completed. In particular, projects with shorter than ideal schedules are highly susceptible to frequent requirements changes, as the software requirements analysis phase is often forced to begin before the overall system requirements and top-level design are complete. This results in later modifications to the software requirements, even though the software design and development phases may be complete. Requirements changes received in the later stages of a software project inevitably lead to modification of existing developed software. Presented here is a series of software design techniques that can greatly reduce the impact of last-minute requirements changes. These techniques were successfully used to add built-in flexibility to two complex software systems in which the requirements were expected to (and did) change frequently. These large, real-time systems were developed at NASA Langley Research Center (LaRC) to test and control the Lidar In-Space Technology Experiment (LITE) instrument which flew aboard the space shuttle Discovery as the primary payload on the STS-64 mission.

  16. A New Control System Software for SANS BATAN Spectrometer in Serpong, Indonesia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bharoto; Putra, Edy Giri Rachman

    2010-06-22

    The original main control system of the 36 meter small-angle neutron scattering (SANS) BATAN Spectrometer (SMARTer) has been replaced with a new one due to the malfunction of the main computer. For that reason, new control system software for handling all the control systems was also developed in order to put the spectrometer back in operation. The developed software is able to control systems such as the rotational movement of the six-pinhole system, the vertical movement of the four neutron guides with a total length of 16.5 m, the two-directional movement of a neutron beam stopper, and the forward-backward movement of a 2D position sensitive detector (2D-PSD) along 16.7 m. A Visual Basic program running on the Windows operating system was employed to develop the software, and it can be operated from other remote computers in the local area network. All device positions and command menus are displayed graphically in the main window, and each device can be controlled by clicking its control button. These features are necessary for a user-friendly control system. Finally, the new software has been tested for handling a complete SANS experiment and it works properly.

  17. Software Tool Integrating Data Flow Diagrams and Petri Nets

    NASA Technical Reports Server (NTRS)

    Thronesbery, Carroll; Tavana, Madjid

    2010-01-01

    Data Flow Diagram - Petri Net (DFPN) is a software tool for analyzing other software to be developed. The full name of this program reflects its design, which combines the benefit of data-flow diagrams (which are typically favored by software analysts) with the power and precision of Petri-net models, without requiring specialized Petri-net training. (A Petri net is a particular type of directed graph, a description of which would exceed the scope of this article.) DFPN assists a software analyst in drawing and specifying a data-flow diagram, then translates the diagram into a Petri net, then enables graphical tracing of execution paths through the Petri net for verification, by the end user, of the properties of the software to be developed. In comparison with prior means of verifying the properties of software to be developed, DFPN makes verification by the end user more nearly certain, thereby making it easier to identify and correct misconceptions earlier in the development process, when correction is less expensive. After the verification by the end user, DFPN generates a printable system specification in the form of descriptions of processes and data.
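
    A minimal sketch of the Petri-net execution semantics that such tracing relies on (places holding tokens, transitions with input and output places, and the firing rule); the data-flow-diagram-to-net translation itself is specific to DFPN and not reproduced here. The two-step net below is a hypothetical example.

        from dataclasses import dataclass, field

        @dataclass
        class PetriNet:
            marking: dict = field(default_factory=dict)      # tokens per place
            transitions: dict = field(default_factory=dict)  # name -> (inputs, outputs)

            def enabled(self, t):
                ins, _ = self.transitions[t]
                return all(self.marking.get(p, 0) > 0 for p in ins)

            def fire(self, t):
                ins, outs = self.transitions[t]
                assert self.enabled(t), f"transition {t!r} is not enabled"
                for p in ins:
                    self.marking[p] -= 1                          # consume tokens
                for p in outs:
                    self.marking[p] = self.marking.get(p, 0) + 1  # produce tokens

        # Net mirroring a two-process data-flow diagram: raw -> validate -> report -> done.
        net = PetriNet(marking={"raw": 1},
                       transitions={"validate": (["raw"], ["clean"]),
                                    "report": (["clean"], ["done"])})
        net.fire("validate"); net.fire("report")
        print(net.marking)   # {'raw': 0, 'clean': 0, 'done': 1}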

  18. The VLBA correlator: Real-time in the distributed era

    NASA Technical Reports Server (NTRS)

    Wells, D. C.

    1992-01-01

    The correlator is the signal processing engine of the Very Long Baseline Array (VLBA). Radio signals are recorded on special wideband (128 Mb/s) digital recorders at the 10 telescopes, with sampling times controlled by hydrogen maser clocks. The magnetic tapes are shipped to the Array Operations Center in Socorro, New Mexico, where they are played back simultaneously into the correlator. Real-time software and firmware control the playback drives to achieve synchronization, compute models of the wavefront delay, control the numerous modules of the correlator, and record FITS files of the fringe visibilities at the back-end of the correlator. In addition to the more than 3000 custom VLSI chips which handle the massive data flow of the signal processing, the correlator contains a total of more than 100 programmable computers: 8-, 16- and 32-bit CPUs. Code is downloaded into front-end CPUs depending on operating mode. Low-level code is assembly language; high-level code is C running under an RT OS. We use VxWorks on Motorola MVME147 CPUs. Code development is on a complex of SPARC workstations connected to the RT CPUs by Ethernet. The overall management of the correlation process is dependent on a database management system. We use Ingres running on a Sparcstation-2. We transfer logging information from the database of the VLBA Monitor and Control System to our database using Ingres/NET. Job scripts are computed and transferred to the real-time computers using NFS, and correlation job execution logs and status flow back by the same route. Operator status and control displays use windows on workstations, interfaced to the real-time processes by network protocols. The extensive network protocol support provided by VxWorks is invaluable. The VLBA Correlator's dependence on network protocols is an example of the radical transformation of the real-time world over the past five years. Real-time is becoming more like conventional computing. Paradoxically, 'conventional' computing is also adopting practices from the real-time world: semaphores, shared memory, light-weight threads, and concurrency. This appears to be a convergence of thinking.

  19. The TAME Project: Towards improvement-oriented software environments

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Rombach, H. Dieter

    1988-01-01

    Experience from a dozen years of analyzing software engineering processes and products is summarized as a set of software engineering and measurement principles that argue for software engineering process models that integrate sound planning and analysis into the construction process. In the TAME (Tailoring A Measurement Environment) project at the University of Maryland, such an improvement-oriented software engineering process model was developed that uses the goal/question/metric paradigm to integrate the constructive and analytic aspects of software development. The model provides a mechanism for formalizing the characterization and planning tasks, controlling and improving projects based on quantitative analysis, learning in a deeper and more systematic way about the software process and product, and feeding the appropriate experience back into the current and future projects. The TAME system is an instantiation of the TAME software engineering process model as an ISEE (integrated software engineering environment). The first in a series of TAME system prototypes has been developed. An assessment of experience with this first limited prototype is presented including a reassessment of its initial architecture.
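
    A minimal illustration of the goal/question/metric structure that the model formalizes; the goal, questions, and metrics below are hypothetical examples, not taken from the TAME project.

        # Goal/Question/Metric: a measurement goal is refined into questions,
        # and each question is answered by concrete, collectable metrics.
        gqm_plan = {
            "goal": "Improve reliability of the flight software build",
            "questions": {
                "Where are faults introduced?": ["faults per module", "faults per phase"],
                "Is testing adequate?": ["statement coverage", "faults found per test hour"],
            },
        }

        for question, metrics in gqm_plan["questions"].items():
            print(f"{question} -> collect: {', '.join(metrics)}")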

  1. FALL-BACK DISKS IN LONG AND SHORT GAMMA-RAY BURSTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannizzo, J. K.; Troja, E.; Gehrels, N., E-mail: John.K.Cannizzo@nasa.gov

    2011-06-10

    We present time-dependent numerical calculations for fall-back disks relevant to gamma-ray bursts (GRBs) in which the disk of material surrounding the black hole powering the GRB jet modulates the mass flow and hence the strength of the jet. Given the initial existence of a small mass {approx}< 10{sup -4} M{sub sun} near the progenitor with a circularization radius {approx}10{sup 10}-10{sup 11} cm, an unavoidable consequence will be the formation of an 'external disk' whose outer edge continually moves to larger radii due to angular momentum transport and lack of a confining torque. For long GRBs, if the mass distribution in the initial fall-back disk traces the progenitor envelope, then a radius {approx}10{sup 11} cm gives a timescale {approx}10{sup 4} s for the X-ray plateau. For late times t > 10{sup 7} s a steepening due to a cooling front in the disk may have observational support in GRB 060729. For short GRBs, one expects most of the mass initially to lie at small radii <10{sup 8} cm; however, the presence of even a trace amount {approx}10{sup -9} M{sub sun} of high angular momentum material can give a brief plateau in the light curve. By studying the plateaus in the X-ray decay of GRBs, which can last up to {approx}10{sup 4} s after the prompt emission, Dainotti et al. find an apparent inverse relation between the X-ray luminosity at the end of the plateau and the duration of the plateau. We show that this relation may simply represent the fact that one is biased against detecting faint plateaus and therefore preferentially sampling the more energetic GRBs. If, however, there were a standard reservoir in fall-back mass, our model could reproduce the inverse X-ray luminosity-duration relation. We emphasize that we do not address the very steep, initial decays immediately following the prompt emission, which have been modeled by Lindner et al. as fall back of the progenitor core, and may entail the accretion of {approx}> 1 M{sub sun}.

  2. Subcutaneous stimulation as an additional therapy to spinal cord stimulation for the treatment of lower limb pain and/or back pain: a feasibility study.

    PubMed

    Hamm-Faber, Tanja E; Aukes, Hans A; de Loos, Frank; Gültuna, Ismail

    2012-01-01

    The objective of this study was to demonstrate the efficacy of subcutaneous stimulation (SubQ) as an additional therapy in patients with failed back surgery syndrome (FBSS) with chronic refractory pain, for whom spinal cord stimulation (SCS) was unsuccessful in treating low back pain. Case series. FBSS patients with chronic limb and/or low back pain whose conventional therapies had failed received a combination of SCS (8-contact Octad lead) and/or SubQ (4-contact Quad Plus lead(s)). Initially leads were placed in the epidural space for SCS for a trial stimulation to assess response to suppression of limb and low back pain. Where SCS alone was insufficient in treating lower back pain, leads were placed superficially in the subcutaneous tissue of the lower back, directly in the middle of the pain area. A pulse generator was implanted if patients reported more than 50% pain relief during the trial period. Pain intensity for limb and lower back pain was scored separately, using visual analog scale (VAS). Pain and Quebec Back Pain Disability Scale (QBPDS) after 12-month treatment were compared with pain and QBPDS at baseline. Eleven FBSS patients, five male and six female (age: 51 ± 8 years; mean ± SD), in whom SCS alone was insufficient in treating lower back pain, were included. In nine cases, SubQ was used in combination with SCS to treat chronic lower back and lower extremity pain. In two cases only SubQ was used to treat lower back pain. SCS significantly reduced limb pain after 12 months (VAS(bl): 62 ± 14 vs. VAS(12m): 20 ± 11; p = 0.001, N = 8). SubQ stimulation significantly reduced low back pain after 12 months (VAS(bl): 62 ± 13.0 vs. VAS(12m): 32 ± 16; p = 0.0002, N = 10). Overall pain medication was reduced by more than 70%. QBPDS improved from 61 ± 15 to 49 ± 12 (p = 0.046, N = 10). Furthermore, we observed that two patients returned to work. SubQ may be an effective additional treatment for chronic low back pain in patients with FBSS for whom SCS alone is insufficient in alleviating their pain symptoms. © 2011 International Neuromodulation Society.

  3. Orion MPCV GN and C End-to-End Phasing Tests

    NASA Technical Reports Server (NTRS)

    Neumann, Brian C.

    2013-01-01

    End-to-end integration tests are critical risk reduction efforts for any complex vehicle. Phasing tests are an end-to-end integrated test that validates system directional phasing (polarity) from sensor measurement through software algorithms to end effector response. Phasing tests are typically performed on a fully integrated and assembled flight vehicle where sensors are stimulated by moving the vehicle and the effectors are observed for proper polarity. Orion Multi-Purpose Crew Vehicle (MPCV) Pad Abort 1 (PA-1) Phasing Test was conducted from inertial measurement to Launch Abort System (LAS). Orion Exploration Flight Test 1 (EFT-1) has two end-to-end phasing tests planned. The first test from inertial measurement to Crew Module (CM) reaction control system thrusters uses navigation and flight control system software algorithms to process commands. The second test from inertial measurement to CM S-Band Phased Array Antenna (PAA) uses navigation and communication system software algorithms to process commands. Future Orion flights include Ascent Abort Flight Test 2 (AA-2) and Exploration Mission 1 (EM-1). These flights will include additional or updated sensors, software algorithms and effectors. This paper will explore the implementation of end-to-end phasing tests on a flight vehicle which has many constraints, trade-offs and compromises. Orion PA-1 Phasing Test was conducted at White Sands Missile Range (WSMR) from March 4-6, 2010. This test decreased the risk of mission failure by demonstrating proper flight control system polarity. Demonstration was achieved by stimulating the primary navigation sensor, processing sensor data to commands and viewing propulsion response. PA-1 primary navigation sensor was a Space Integrated Inertial Navigation System (INS) and Global Positioning System (GPS) (SIGI) which has onboard processing, INS (3 accelerometers and 3 rate gyros) and no GPS receiver. SIGI data was processed by GN&C software into thrust magnitude and direction commands. The processing changes through three phases of powered flight: pitchover, downrange and reorientation. The primary inputs to GN&C are attitude position, attitude rates, angle of attack (AOA) and angle of sideslip (AOS). Pitch and yaw attitude and attitude rate responses were verified by using a flight spare SIGI mounted to a 2-axis rate table. AOA and AOS responses were verified by using data recorded from SIGI movements on a robotic arm located at NASA Johnson Space Center. The data was consolidated and used as an open-loop data input to the SIGI. Propulsion was the Launch Abort System (LAS) Attitude Control Motor (ACM) which consisted of a solid motor with 8 nozzles. Each nozzle has active thrust control by varying throat area with a pintle. LAS ACM pintles are observable through optically transparent nozzle covers. SIGI movements on robot arm, SIGI rate table movements and LAS ACM pintle responses were video recorded as test artifacts for analysis and evaluation. The PA-1 Phasing Test design was determined based on test performance requirements, operational restrictions and EGSE capabilities. This development progressed during different stages. For convenience, these development stages are initial, working group, tiger team, Engineering Review Team (ERT) and final.

  4. Development of a data management front-end for use with a LANDSAT-based information system

    NASA Technical Reports Server (NTRS)

    Turner, B. J.

    1982-01-01

    The development and implementation of a data management front-end system for use with a LANDSAT-based information system that facilitates the processing of both LANDSAT and ancillary data was examined. The final tasks, reported on here, involved: (1) the implementation of the VICAR image processing software system at Penn State and the development of a user-friendly front-end for this system; (2) the implementation of JPL-developed software based on VICAR, for mosaicking LANDSAT scenes; (3) the creation and storage of a mosaic of 1981 summer LANDSAT data for the entire state of Pennsylvania; (4) demonstrations of the defoliation assessment procedure for Perry and Centre Counties, and presentation of the results at the 1982 National Gypsy Moth Review Meeting; and (5) the training of Pennsylvania Bureau of Forestry personnel in the use of the defoliation analysis system.

  5. UCMS - A new signal parameter measurement system using digital signal processing techniques. [User Constraint Measurement System

    NASA Technical Reports Server (NTRS)

    Choi, H. J.; Su, Y. T.

    1986-01-01

    The User Constraint Measurement System (UCMS) is a hardware/software package developed by NASA Goddard to measure the signal parameter constraints of the user transponder in the TDRSS environment by means of an all-digital signal sampling technique. An account is presently given of the features of UCMS design and of its performance capabilities and applications; attention is given to such important aspects of the system as RF interface parameter definitions, hardware minimization, the emphasis on offline software signal processing, and end-to-end link performance. Applications to the measurement of other signal parameters are also discussed.

  6. Coordinated Fault Tolerance for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, Jack; Bosilca, George; et al.

    2013-04-08

    Our work to meet our goal of end-to-end fault tolerance has focused on two areas: (1) improving fault tolerance in various software currently available and widely used throughout the HEC domain and (2) using fault information exchange and coordination to achieve holistic, systemwide fault tolerance, and understanding how to design and implement interfaces for integrating fault tolerance features across multiple layers of the software stack—from the application, math libraries, and programming language runtime to other common system software such as job schedulers, resource managers, and monitoring tools.

  7. Virtual Ultrasound Guidance for Inexperienced Operators

    NASA Technical Reports Server (NTRS)

    Caine, Timothy; Martin, David

    2012-01-01

    Medical ultrasound or echocardiographic studies are highly operator-dependent and generally require lengthy training and internship to perfect. To obtain quality echocardiographic images in remote environments, such as on-orbit, remote guidance of studies has been employed. This technique involves minimal training for the user, coupled with remote guidance from an expert. When real-time communication or expert guidance is not available, a more autonomous system of guiding an inexperienced operator through an ultrasound study is needed. One example would be missions beyond low Earth orbit in which the time delay inherent with communication will make remote guidance impractical. The Virtual Ultrasound Guidance system is a combination of hardware and software. The hardware portion includes, but is not limited to, video glasses that allow hands-free, full-screen viewing. The glasses also allow the operator a substantial field of view below the glasses to view and operate the ultrasound system. The software is a comprehensive video program designed to guide an inexperienced operator through a detailed ultrasound or echocardiographic study without extensive training or guidance from the ground. The program contains a detailed description using video and audio to demonstrate equipment controls, ergonomics of scanning, study protocol, and scanning guidance, including recovery from sub-optimal images. The components used in the initial validation of the system include an Apple iPod Classic third-generation as the video source, and Myvue video glasses. Initially, the program prompts the operator to power-up the ultrasound and position the patient. The operator would put on the video glasses and attach them to the video source. After turning on both devices and the ultrasound system, the audio-video guidance would then instruct on patient positioning and scanning techniques. A detailed scanning protocol follows with descriptions and reference video of each view along with advice on technique. The program also instructs the operator regarding the types of images to store and how to overcome pitfalls in scanning. Images can be forwarded to the ground or other site when convenient. Following study completion, the video glasses, video source, and ultrasound system are powered down and stored. Virtually any equipment that can play back video can be used to play back the program. This includes a DVD player, personal computer, and some MP3 players.

  8. Agile Software Development in Defense Acquisition: A Mission Assurance Perspective

    DTIC Science & Technology

    2012-03-23

    based information retrieval system, we might say that this program works like a hive of bees, going out for pollen and bringing it back to the hive...developers...Major Areas in a Typical Software...requirements - Capturing and evaluating quality metrics, identifying common problem areas...Despite its positive impact on quality, pair programming

  9. Analysis of fractionation in corn-to-ethanol plants

    NASA Astrophysics Data System (ADS)

    Nelson, Camille

    As the dry grind ethanol industry has grown, the research and technology surrounding ethanol production and co-product value have increased, including the use of back-end oil extraction and front-end fractionation. Front-end fractionation is pre-fermentation separation of the corn kernel into three fractions: endosperm, bran, and germ. The endosperm fraction enters the existing ethanol plant, and a high-protein DDGS product remains after fermentation. High-value oil is extracted from the germ fraction, leaving corn germ meal and bran as co-products from the other two streams. These three co-products have a very different composition than traditional corn DDGS. Installing this technology allows ethanol plants to tap into more diverse markets and ultimately could increase profitability. An ethanol plant model was developed to evaluate both back-end oil extraction and front-end fractionation technology and predict the change in co-products based on the technology installed. The model runs in Microsoft Excel and requires inputs of whole corn composition (proximate analysis), amino acid content, and weight to predict co-product quantity and quality. User inputs include saccharification and fermentation efficiencies, plant capacity, and plant process specifications, including front-end fractionation and back-end oil extraction, if applicable. This model provides plants a way to assess and monitor variability in co-product composition due to the variation in whole corn composition. Additionally, the co-products predicted in this model are entered into the US Pork Center of Excellence's National Swine Nutrition Guide feed formulation software. This allows the plant user and animal nutritionists to evaluate the value of new co-products in existing animal diets.
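
    A minimal sketch of the kind of mass balance such a model performs, with illustrative coefficients only (the thesis model's actual inputs, coefficients, and outputs are far more detailed): fermented starch splits stoichiometrically into ethanol and CO2, extracted oil is removed, and the remainder reports to DDGS.

        # Hypothetical, simplified dry-grind mass balance (dry basis, per kg corn).
        # Coefficients are illustrative placeholders, not values from the thesis.
        def coproduct_split(corn_kg, starch_frac=0.62, oil_frac=0.04,
                            oil_extracted=0.60, ferm_eff=0.90):
            starch = corn_kg * starch_frac * ferm_eff    # starch actually fermented
            ethanol = starch * 0.511                     # glucose -> 2 EtOH (mass fraction)
            co2 = starch * 0.489                         # glucose -> 2 CO2 (mass fraction)
            oil = corn_kg * oil_frac * oil_extracted     # back-end extracted oil
            ddgs = corn_kg - starch - oil                # remainder reports to DDGS
            return {"ethanol": ethanol, "co2": co2, "oil": oil, "ddgs": ddgs}

        print(coproduct_split(1000.0))   # per 1000 kg of corn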

  10. Backed Bending Actuator

    NASA Technical Reports Server (NTRS)

    Costen, Robert C.; Su, Ji

    2004-01-01

    Bending actuators of a proposed type would partly resemble ordinary bending actuators, but would include simple additional components that would render them capable of exerting large forces at small displacements. Like an ordinary bending actuator, an actuator according to the proposal would include a thin rectangular strip that would comprise two bonded layers (possibly made of electroactive polymers with surface electrodes) and would be clamped at one end in the manner of a cantilever beam. Unlike an ordinary bending actuator, the proposed device would include a rigid flat backplate that would support part of the bending strip against backward displacement; because of this feature, the proposed device is called a backed bending actuator. When an ordinary bending actuator is inactive, the strip typically lies flat, the tip displacement is zero, and the force exerted by the tip is zero. During activation, the tip exerts a transverse force and undergoes a bending displacement that results from the expansion or contraction of one or more of the bonded layers. The tip force of an ordinary bending actuator is inversely proportional to its length; hence, a long actuator tends to be weak. The figure depicts an ordinary bending actuator and the corresponding backed bending actuator. The bending, the tip displacement (d), and the tip force (F) exerted by the ordinary bending actuator are well approximated by the conventional equations for the loading and deflection of a cantilever beam subject to a bending moment which, in this case, is applied by the differential expansion or contraction of the bonded layers. The bending, displacement, and tip force of the backed bending actuator are calculated similarly, except that it is necessary to account for the fact that the force F(sub b) that resists the displacement of the tip could be sufficient to push part of the strip against the backplate; in such a condition, the cantilever beam would be effectively shortened (length L*) and thereby stiffened and, hence, made capable of exerting a greater tip force for a given degree of differential expansion or contraction of the bonded layers. Taking all of these effects into account, the cantilever-beam equations show that F(sub b) would be approximately inversely proportional to d(sup 1/2) for d less than a calculable amount, denoted the transition displacement (d(sub t)). For d less than d(sub t), part of the strip would be pressed against the backplate. Therefore, the force F(sub b) would be very large for d at or near zero and would decrease as d increases toward d(sub t). At d greater than d(sub t), none of the strip would be pressed against the backplate and F(sub b) would equal the tip force F of the corresponding ordinary bending actuator. The advantage of the proposal is that a backed bending actuator could be made long to obtain large displacement when it encountered little resistance but it could also exert a large zero-displacement force, so that it could more easily start the movement of a large mass, throw a mechanical switch, or release a stuck mechanism.
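
    A short reconstruction of the stated scaling, assuming the standard cantilever-bimorph relations for a given induced bending moment (free tip deflection growing as the square of the free length, blocked tip force inversely proportional to it), with L* the effective free length:

        d \propto (L^*)^2 \;\Rightarrow\; L^* \propto d^{1/2},
        \qquad
        F_b \propto \frac{1}{L^*} \propto d^{-1/2} \quad \text{for } 0 < d < d_t .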

  11. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Vouk, Mladen A.

    1989-01-01

    Twenty functionally equivalent programs were built and tested in a multiversion software experiment. Following unit testing, all programs were subjected to an extensive system test. In the process, sixty-one distinct faults were identified among the versions. Less than 12 percent of the faults exhibited varying degrees of positive correlation. The common-cause (or similar) faults spanned as many as 14 components. However, a majority of these faults were trivial, and easily detected by proper unit and/or system testing. Only two of the seven similar faults were difficult faults, and both were caused by specification ambiguities. One of these faults exhibited a variable identical-and-wrong response span, i.e., a response span which varied with the testing conditions and input data. Techniques that could have been used to avoid the faults are discussed. For example, it was determined that back-to-back testing of 2-tuples could have been used to eliminate about 90 percent of the faults. In addition, four of the seven similar faults could have been detected by using back-to-back testing of 5-tuples. It is believed that most, if not all, similar faults could have been avoided had the specifications been written using more formal notation, had the unit testing phase been subject to more stringent standards and controls, and had better tools for measuring the quality and adequacy of the test data (e.g., coverage) been used.
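
    A minimal sketch of back-to-back testing of version tuples: run independently developed versions of the same specification on shared inputs and flag any disagreement for inspection. The three version functions are hypothetical stand-ins, with a fault seeded into the third.

        import itertools

        def back_to_back(versions, test_inputs, tuple_size=2):
            """Run every tuple of versions on shared inputs; report disagreements."""
            disagreements = []
            for group in itertools.combinations(versions, tuple_size):
                for x in test_inputs:
                    outputs = {f.__name__: f(x) for f in group}
                    if len(set(outputs.values())) > 1:     # versions disagree on x
                        disagreements.append((x, outputs))
            return disagreements

        def v1(x): return x * x
        def v2(x): return x ** 2
        def v3(x): return x * x if x >= 0 else -(x * x)    # seeded fault

        print(back_to_back([v1, v2, v3], test_inputs=[-2, 0, 3]))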

  12. The Strategic Academic Enterprise: Why ERPs Will No Longer Be Adequate

    ERIC Educational Resources Information Center

    Jones, Mary

    2009-01-01

    In the 1970s and '80s, manufacturing firms began purchasing centralized administrative software--"Enterprise Resource Planning (ERP) systems"--to support their infrastructure needs. In the 1990s, higher education adopted the term ERP to define the back-office systems used by institutions to meet their most pressing business needs--typically those…

  13. How Do I Start a Property Records System?

    ERIC Educational Resources Information Center

    Whyman, Wynne

    2003-01-01

    A property records system organizes data to be utilized by a camp's facilities department and integrated into other areas. Start by deciding what records to keep and allotting the time. Then develop consistent procedures, including organizing data, creating a catalog, making back-up copies, and integrating procedures. Use software tools. A good…

  14. Going with Best-of-Breed

    ERIC Educational Resources Information Center

    Ramaswami, Rama

    2010-01-01

    Back in the 1990s, enterprise resource planning (ERP) systems may not have been user-friendly, but what they tried to do was totally reasonable: replace stand-alone systems in various departments--such as finance, logistics, and human resources--with a single integrated software program. The idea was that although each department would still have…

  15. Motion Tracker: Camera-Based Monitoring of Bodily Movements Using Motion Silhouettes

    PubMed Central

    Westlund, Jacqueline Kory; D’Mello, Sidney K.; Olney, Andrew M.

    2015-01-01

    Researchers in the cognitive and affective sciences investigate how thoughts and feelings are reflected in the bodily response systems including peripheral physiology, facial features, and body movements. One specific question along this line of research is how cognition and affect are manifested in the dynamics of general body movements. Progress in this area can be accelerated by inexpensive, non-intrusive, portable, scalable, and easy to calibrate movement tracking systems. Towards this end, this paper presents and validates Motion Tracker, a simple yet effective software program that uses established computer vision techniques to estimate the amount a person moves from a video of the person engaged in a task (available for download from http://jakory.com/motion-tracker/). The system works with any commercially available camera and with existing videos, thereby affording inexpensive, non-intrusive, and potentially portable and scalable estimation of body movement. Strong between-subject correlations were obtained between Motion Tracker’s estimates of movement and body movements recorded from the seat (r = .720) and back (r = .695 for participants with higher back movement) of a chair affixed with pressure-sensors while completing a 32-minute computerized task (Study 1). Within-subject cross-correlations were also strong for both the seat (r = .606) and back (r = .507). In Study 2, between-subject correlations between Motion Tracker’s movement estimates and movements recorded from an accelerometer worn on the wrist were also strong (rs = .801, .679, and .681) while people performed three brief actions (e.g., waving). Finally, in Study 3 the within-subject cross-correlation was high (r = .855) when Motion Tracker’s estimates were correlated with the movement of a person’s head as tracked with a Kinect while the person was seated at a desk. Best-practice recommendations, limitations, and planned extensions of the system are discussed. PMID:26086771
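
    A minimal frame-differencing sketch in the spirit of motion silhouettes, using OpenCV; the video path and threshold are placeholders, and this is an illustration of the general technique rather than Motion Tracker's actual implementation.

        import cv2

        def movement_series(video_path, thresh=25):
            """Per-frame motion proxy: fraction of pixels whose gray level changed."""
            cap = cv2.VideoCapture(video_path)
            ok, frame = cap.read()
            prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            series = []
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                diff = cv2.absdiff(gray, prev)                      # motion silhouette
                _, sil = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
                series.append(cv2.countNonZero(sil) / sil.size)     # movement estimate
                prev = gray
            cap.release()
            return series

        print(movement_series("session.mp4")[:10])   # hypothetical recording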

  16. Pressurized water nuclear reactor system with hot leg vortex mitigator

    DOEpatents

    Lau, Louis K. S.

    1990-01-01

    A pressurized water nuclear reactor system includes a vortex mitigator in the form of a cylindrical conduit between the hot leg conduit and a first section of residual heat removal conduit, which conduit leads to a pump and a second section of residual heat removal conduit leading back to the reactor pressure vessel. The cylindrical conduit is of such a size that where the hot leg has an inner diameter D.sub.1, the first section has an inner diameter D.sub.2, and the cylindrical conduit or step nozzle has a length L and an inner diameter D.sub.3; D.sub.3/D.sub.1 is at least 0.55, D.sub.3/D.sub.2 is at least 1.9, and L/D.sub.3 is at least 1.44, whereby cavitation of the pump by a vortex formed in the hot leg is prevented.

  17. Architecture of PAU survey camera readout electronics

    NASA Astrophysics Data System (ADS)

    Castilla, Javier; Cardiel-Sas, Laia; De Vicente, Juan; Illa, Joseph; Jimenez, Jorge; Maiorino, Marino; Martinez, Gustavo

    2012-07-01

    PAUCam is a new camera for studying the physics of the accelerating universe. The camera will consist of eighteen 2Kx4K HPK CCDs: sixteen for science and two for guiding. The camera will be installed at the prime focus of the WHT (William Herschel Telescope). In this contribution, the architecture of the readout electronics system is presented, and the Back-End and Front-End electronics are described. The Back-End consists of clock, bias and video processing boards, mounted on Monsoon crates. The Front-End is based on patch panel boards. These boards are plugged outside the camera feed-through panel for signal distribution. Inside the camera, individual preamplifier boards plus kapton cable complete the path to each CCD. The overall signal distribution and grounding scheme is shown in this paper.

  18. Software Tools for Developing and Simulating the NASA LaRC CMF Motion Base

    NASA Technical Reports Server (NTRS)

    Bryant, Richard B., Jr.; Carrelli, David J.

    2006-01-01

    The NASA Langley Research Center (LaRC) Cockpit Motion Facility (CMF) motion base has provided many design and analysis challenges. In the process of addressing these challenges, a comprehensive suite of software tools was developed. The software tools development began with a detailed MATLAB/Simulink model of the motion base, which was used primarily for safety loads prediction, design of the closed-loop compensator, and development of the motion base safety systems [1]. A Simulink model of the digital control law, from which a portion of the embedded code is directly generated, was later added to this model to form a closed-loop system model. Concurrently, software that runs on a PC was created to display and record motion base parameters. It includes a user interface for controlling time history displays, strip chart displays, data storage, and initialization of function generators used during motion base testing. Finally, a software tool was developed for kinematic analysis and prediction of mechanical clearances for the motion system. These tools work together in an integrated package to support normal operations of the motion base and to simulate the end-to-end operation of the motion base system, providing facilities for software-in-the-loop testing, mechanical geometry and sensor data visualizations, and function generator setup and evaluation.

  19. Playbook Data Analysis Tool: Collecting Interaction Data from Extremely Remote Users

    NASA Technical Reports Server (NTRS)

    Kanefsky, Bob; Zheng, Jimin; Deliz, Ivonne; Marquez, Jessica J.; Hillenius, Steven

    2017-01-01

    Typically, user tests for software tools are conducted in person. At NASA, the users may be located at the bottom of the ocean in a pressurized habitat, above the atmosphere in the International Space Station, or in an isolated capsule on a simulated asteroid mission. The Playbook Data Analysis Tool (P-DAT) is a human-computer interaction (HCI) evaluation tool that the NASA Ames HCI Group has developed to record user interactions with Playbook, the group's existing planning-and-execution software application. Once the remotely collected user interaction data makes its way back to Earth, researchers can use P-DAT for in-depth analysis. Since a critical component of the Playbook project is to understand how to develop more intuitive software tools for astronauts to plan in space, P-DAT helps guide us in the development of additional easy-to-use features for Playbook, informing the design of future crew autonomy tools. P-DAT has demonstrated the capability of discreetly capturing usability data in a manner that is transparent to Playbook’s end-users. In our experience, P-DAT data has already shown its utility, revealing potential usability patterns, helping diagnose software bugs, and identifying metrics and events that are pertinent to Playbook usage as well as spaceflight operations. As we continue to develop this analysis tool, P-DAT may yet provide a method for long-duration, unobtrusive human performance collection and evaluation for mission controllers back on Earth and researchers investigating the effects and mitigations related to future human spaceflight performance.

  20. Direct-to-Earth Communications with Mars Science Laboratory During Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Soriano, Melissa; Finley, Susan; Fort, David; Schratz, Brian; Ilott, Peter; Mukai, Ryan; Estabrook, Polly; Oudrhiri, Kamal; Kahan, Daniel; Satorius, Edgar

    2013-01-01

    Mars Science Laboratory (MSL) undergoes extreme heating and acceleration during Entry, Descent, and Landing (EDL) on Mars. Unknown dynamics lead to large Doppler shifts, making communication challenging. During EDL, a special form of Multiple Frequency Shift Keying (MFSK) communication is used for Direct-To-Earth (DTE) communication. The X-band signal is received by the Deep Space Network (DSN) at the Canberra Deep Space Communication Complex, then down-converted, digitized, and recorded by open-loop Radio Science Receivers (RSR), and decoded in real-time by the EDL Data Analysis (EDA) System. The EDA uses lock states with configurable Fast Fourier Transforms to acquire and track the signal. RSR configuration and channel allocation are shown. Testing prior to EDL is discussed, including software simulations, test bed runs with MSL flight hardware, and the in-flight end-to-end test. EDA configuration parameters and signal dynamics during pre-entry, entry, and parachute deployment are analyzed. RSR and EDA performance during MSL EDL is evaluated, including performance using a single 70-meter DSN antenna and an array of two 34-meter DSN antennas as a backup to the 70-meter antenna.
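
    A minimal numpy sketch of the FFT-based tone detection at the heart of non-coherent MFSK reception: take an FFT over each symbol window and pick the strongest of the candidate tone bins. All parameters are illustrative, not the MSL EDL configuration.

        import numpy as np

        rng = np.random.default_rng(1)
        fs, sym_len, n_tones = 8192.0, 1024, 8        # sample rate, samples/symbol, alphabet
        tone_bins = 64 + 16 * np.arange(n_tones)      # FFT bins carrying the 8 tones

        def mfsk_detect(x, sym_len, tone_bins):
            """Largest-magnitude tone bin per symbol window."""
            syms = x[: len(x) // sym_len * sym_len].reshape(-1, sym_len)
            spec = np.abs(np.fft.rfft(syms, axis=1))
            return np.argmax(spec[:, tone_bins], axis=1)

        # Synthesize a noisy signal carrying known symbols, then detect them.
        tx = rng.integers(0, n_tones, 20)
        t = np.arange(sym_len) / fs
        x = np.concatenate([np.cos(2 * np.pi * (tone_bins[s] * fs / sym_len) * t) for s in tx])
        x += 0.5 * rng.standard_normal(x.size)
        print(np.array_equal(mfsk_detect(x, sym_len, tone_bins), tx))   # True with these parameters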

  1. Molybdenum oxide and molybdenum oxide-nitride back contacts for CdTe solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drayton, Jennifer A., E-mail: drjadrayton@yahoo.com; Geisthardt, Russell M., E-mail: Russell.Geisthardt@gmail.com; Sites, James R., E-mail: james.sites@colostate.edu

    2015-07-15

    Molybdenum oxide (MoO{sub x}) and molybdenum oxynitride (MoON) thin film back contacts were formed by a unique ion-beam sputtering and ion-beam-assisted deposition process onto CdTe solar cells and compared to back contacts made using carbon–nickel (C/Ni) paint. Glancing-incidence x-ray diffraction and x-ray photoelectron spectroscopy measurements show that partially crystalline MoO{sub x} films are created with a mixture of Mo, MoO{sub 2}, and MoO{sub 3} components. Lower crystallinity content is observed in the MoON films, with an additional component of molybdenum nitride present. Three different film thicknesses of MoO{sub x} and MoON, each capped in situ with Ni, were investigated. Small area devices were delineated and characterized using current–voltage (J-V), capacitance–frequency, capacitance–voltage, electroluminescence, and light beam-induced current techniques. In addition, J-V data measured as a function of temperature (JVT) were used to estimate back barrier heights for each thickness of MoO{sub x} and MoON and for the C/Ni paint. Characterization prior to stressing indicated the devices were similar in performance. Characterization after stress testing indicated little change to cells with 120 and 180-nm thick MoO{sub x} and MoON films. However, moderate-to-large cell degradation was observed for 60-nm thick MoO{sub x} and MoON films and for C/Ni painted back contacts.

  2. Equalization enhanced phase noise in Nyquist-spaced superchannel transmission systems using multi-channel digital back-propagation

    PubMed Central

    Xu, Tianhua; Liga, Gabriele; Lavery, Domaniç; Thomsen, Benn C.; Savory, Seb J.; Killey, Robert I.; Bayvel, Polina

    2015-01-01

    Superchannel transmission spaced at the symbol rate, known as Nyquist spacing, has been demonstrated for effectively maximizing the optical communication channel capacity and spectral efficiency. However, the achievable capacity and reach of transmission systems using advanced modulation formats are affected by fibre nonlinearities and equalization enhanced phase noise (EEPN). Fibre nonlinearities can be effectively compensated using digital back-propagation (DBP). However EEPN which arises from the interaction between laser phase noise and dispersion cannot be efficiently mitigated, and can significantly degrade the performance of transmission systems. Here we report the first investigation of the origin and the impact of EEPN in Nyquist-spaced superchannel system, employing electronic dispersion compensation (EDC) and multi-channel DBP (MC-DBP). Analysis was carried out in a Nyquist-spaced 9-channel 32-Gbaud DP-64QAM transmission system. Results confirm that EEPN significantly degrades the performance of all sub-channels of the superchannel system and that the distortions are more severe for the outer sub-channels, both using EDC and MC-DBP. It is also found that the origin of EEPN depends on the relative position between the carrier phase recovery module and the EDC (or MC-DBP) module. Considering EEPN, diverse coding techniques and modulation formats have to be applied for optimizing different sub-channels in superchannel systems. PMID:26365422
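
    A compact numpy sketch of the EEPN mechanism under toy parameters: receiver (LO) phase noise is imprinted after the fibre, so static EDC effectively anti-disperses it, and even a carrier phase rotation that would perfectly cancel the LO phase noise in a dispersion-free link leaves residual distortion. Values are illustrative, not those of the 9-channel experiment.

        import numpy as np

        rng = np.random.default_rng(2)
        n, fs = 2**14, 32e9                        # samples, sample rate (1 sample/symbol)
        beta2L = -21.7e-27 * 2000e3                # GVD x length: roughly 2000 km of SMF
        linewidth = 5e6                            # exaggerated LO linewidth (Hz)

        w = 2 * np.pi * np.fft.fftfreq(n, 1 / fs)
        H = np.exp(0.5j * beta2L * w**2)           # fibre chromatic dispersion (all-pass)

        sym = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)  # QPSK
        rx = np.fft.ifft(np.fft.fft(sym) * H)                    # after the fibre
        phi = np.cumsum(rng.standard_normal(n) * np.sqrt(2 * np.pi * linewidth / fs))
        rx = rx * np.exp(1j * phi)                               # LO phase noise at the receiver
        rx = np.fft.ifft(np.fft.fft(rx) * np.conj(H))            # electronic dispersion compensation
        rx = rx * np.exp(-1j * phi)                              # 'perfect' carrier phase removal

        evm = np.sqrt(np.mean(np.abs(rx - sym) ** 2))            # residual error is the EEPN
        print(f"residual EVM with dispersion: {evm:.3f}")        # ~0 when beta2L = 0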

  3. Virtual and flexible digital signal processing system based on software PnP and component works

    NASA Astrophysics Data System (ADS)

    He, Tao; Wu, Qinghua; Zhong, Fei; Li, Wei

    2005-05-01

    An idea of software PnP (Plug & Play), analogous to hardware PnP, is put forward, and based on this idea a virtual and flexible digital signal processing system (FVDSPS) is implemented. FVDSPS is composed of a main control center, many sub-function modules, and other hardware I/O modules. The main control center sends commands to the sub-function modules and manages the running order, parameters, and results of the sub-functions. The software kernel of FVDSPS is the DSP (digital signal processing) module, which communicates with the main control center through defined protocols to accept commands and send requests. Data sharing and exchange between the main control center and the DSP modules are managed through the file system of the Windows operating system. FVDSPS is oriented to objects, to engineers, and to engineering problems. With FVDSPS, users can freely plug and play, and can quickly reconfigure a signal processing system for an engineering problem without programming: what you see is what you get. An engineer can therefore address engineering problems directly, pay more attention to the problems themselves, and improve the flexibility, reliability, and accuracy of the testing system. Because FVDSPS is based on the TCP/IP protocol, testing engineers and technology experts can collaborate over the Internet regardless of location, so engineering problems can be resolved quickly and effectively. FVDSPS can be used in many fields such as instrumentation, fault diagnosis, device maintenance, and quality control.
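
    A minimal sketch of the software plug-and-play idea: sub-function modules register themselves under a name, and the control center dispatches to them by configuration rather than by hard-coded calls. The registry mechanics and module names are hypothetical illustrations, not the paper's protocol.

        # Registry-based "software PnP": a module plugs in by registering a callable.
        MODULES = {}

        def register(name):
            def deco(fn):
                MODULES[name] = fn            # plug the module into the system
                return fn
            return deco

        @register("mean")
        def mean_module(data):
            return sum(data) / len(data)

        @register("peak")
        def peak_module(data):
            return max(abs(x) for x in data)

        def control_center(pipeline, data):
            """Run sub-function modules in the order configured by the user."""
            return {name: MODULES[name](data) for name in pipeline}

        print(control_center(["mean", "peak"], [1.0, -4.0, 3.0]))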

  4. Interchangeable whole-body and nose-only exposure system

    DOEpatents

    Cannon, W.C.; Allemann, R.T.; Moss, O.R.; Decker, J.R. Jr.

    1992-03-31

    An exposure system for experimental animals includes a container for a single animal which has a double wall. The animal is confined within the inner wall. Gaseous material enters a first end, flows over the entire animal, then back between the walls and out the first end. The system also includes an arrangement of valve-controlled manifolds for supplying gaseous material to, and exhausting it from, the containers. 6 figs.

  5. Adapting astronomical source detection software to help detect animals in thermal images obtained by unmanned aerial systems

    NASA Astrophysics Data System (ADS)

    Longmore, S. N.; Collins, R. P.; Pfeifer, S.; Fox, S. E.; Mulero-Pazmany, M.; Bezombes, F.; Goodwind, A.; de Juan Ovelar, M.; Knapen, J. H.; Wich, S. A.

    2017-02-01

    In this paper we describe an unmanned aerial system equipped with a thermal-infrared camera and software pipeline that we have developed to monitor animal populations for conservation purposes. Taking a multi-disciplinary approach to tackle this problem, we use freely available astronomical source detection software and the associated expertise of astronomers, to efficiently and reliably detect humans and animals in aerial thermal-infrared footage. Combining this astronomical detection software with existing machine learning algorithms into a single, automated, end-to-end pipeline, we test the software using aerial video footage taken in a controlled, field-like environment. We demonstrate that the pipeline works reliably and describe how it can be used to estimate the completeness of different observational datasets to objects of a given type as a function of height, observing conditions etc. - a crucial step in converting video footage to scientifically useful information such as the spatial distribution and density of different animal species. Finally, having demonstrated the potential utility of the system, we describe the steps we are taking to adapt the system for work in the field, in particular systematic monitoring of endangered species at National Parks around the world.
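
    A minimal sketch of the astronomy-style detection step applied to a single thermal frame: estimate the background with sigma clipping, threshold, and label connected components. This stands in for the dedicated astronomical source detection software; the frame below is synthetic.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(3)
        frame = rng.normal(20.0, 1.0, (128, 128))    # synthetic thermal background
        frame[60:64, 40:44] += 8.0                   # one warm "animal"

        # One-pass sigma-clipped background statistics (real pipelines iterate).
        med, std = np.median(frame), frame.std()
        clipped = frame[np.abs(frame - med) < 3 * std]
        bg, noise = clipped.mean(), clipped.std()

        labels, n = ndimage.label(frame > bg + 5 * noise)       # 5-sigma threshold
        centroids = ndimage.center_of_mass(frame - bg, labels, range(1, n + 1))
        print(n, centroids)    # expect one source near (61.5, 41.5)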

  6. Automatic centring and bonding of lenses

    NASA Astrophysics Data System (ADS)

    Krey, Stefan; Heinisch, J.; Dumitrescu, E.

    2007-05-01

    We present an automatic bonding station which is able to center and bond individual lenses or doublets to a barrel with sub-micron centring accuracy. The complete manufacturing cycle includes glue dispensing and UV curing. During the process, the state of centring is continuously monitored by the vision software, and the final result is recorded to a file for process statistics. Simple pass or fail results are displayed to the operator at the end of the process.

  7. E-cigarettes, Hookah Pens and Vapes: Adolescent and Young Adult Perceptions of Electronic Nicotine Delivery Systems.

    PubMed

    Wagoner, Kimberly G; Cornacchione, Jennifer; Wiseman, Kimberly D; Teal, Randall; Moracco, Kathryn E; Sutfin, Erin L

    2016-10-01

    Most studies have assessed use of "e-cigarettes" or "electronic cigarettes," potentially excluding new electronic nicotine delivery systems (ENDS), such as e-hookahs and vape pens. Little is known about how adolescents and young adults perceive ENDS and if their perceptions vary by sub-type. We explored ENDS perceptions among these populations. Ten focus groups with 77 adolescents and young adults, ages 13-25, were conducted in spring 2014. Participants were users or susceptible nonusers of novel tobacco products. Focus group transcripts were coded for emergent themes. Participants reported positive ENDS attributes, including flavor variety; user control of nicotine content; and smoke trick facilitation. Negative attributes included different feel compared to combustible cigarettes, nicotine addiction potential, and no cue to stop use. Participants perceived less harm from ENDS compared to combustible cigarettes, perhaps due to marketing and lack of product regulation, but noted the uncertainty of ingredients in ENDS. Numerous terms were used to describe ENDS, including "e-cigarette," "e-hookah," "hookah pens," "tanks," and "vapes." Although no clear classification system emerged, participants used product characteristics like nicotine content and chargeability to attempt classification. Perceptions differed by product used. E-hookah users were perceived as young and trendy while e-cigarette users were perceived as old and addicted to nicotine. Young adults and adolescents report distinct ENDS sub-types with varying characteristics and social perceptions of users. Although they had more positive than negative perceptions of ENDS, prevention efforts should consider highlighting negative attributes as they may discourage use and product trial among young nonusers. Our study underscores the need for a standardized measurement system for ENDS sub-types and additional research on how ENDS sub-types are perceived among adolescents and young adults. In addition, our findings highlight negative product attributes reported by participants that may be useful in prevention and regulatory efforts to offset favorable marketing messages. © The Author 2016. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. E-cigarettes, Hookah Pens and Vapes: Adolescent and Young Adult Perceptions of Electronic Nicotine Delivery Systems

    PubMed Central

    Cornacchione, Jennifer; Wiseman, Kimberly D.; Teal, Randall; Moracco, Kathryn E.; Sutfin, Erin L.

    2016-01-01

    Introduction: Most studies have assessed use of “e-cigarettes” or “electronic cigarettes,” potentially excluding new electronic nicotine delivery systems (ENDS), such as e-hookahs and vape pens. Little is known about how adolescents and young adults perceive ENDS and if their perceptions vary by sub-type. We explored ENDS perceptions among these populations. Methods: Ten focus groups with 77 adolescents and young adults, ages 13–25, were conducted in spring 2014. Participants were users or susceptible nonusers of novel tobacco products. Focus group transcripts were coded for emergent themes. Results: Participants reported positive ENDS attributes, including flavor variety; user control of nicotine content; and smoke trick facilitation. Negative attributes included different feel compared to combustible cigarettes, nicotine addiction potential, and no cue to stop use. Participants perceived less harm from ENDS compared to combustible cigarettes, perhaps due to marketing and lack of product regulation, but noted the uncertainty of ingredients in ENDS. Numerous terms were used to describe ENDS, including “e-cigarette,” “e-hookah,” “hookah pens,” “tanks,” and “vapes.” Although no clear classification system emerged, participants used product characteristics like nicotine content and chargeability to attempt classification. Perceptions differed by product used. E-hookah users were perceived as young and trendy while e-cigarette users were perceived as old and addicted to nicotine. Conclusions: Young adults and adolescents report distinct ENDS sub-types with varying characteristics and social perceptions of users. Although they had more positive than negative perceptions of ENDS, prevention efforts should consider highlighting negative attributes as they may discourage use and product trial among young nonusers. Implications: Our study underscores the need for a standardized measurement system for ENDS sub-types and additional research on how ENDS sub-types are perceived among adolescents and young adults. In addition, our findings highlight negative product attributes reported by participants that may be useful in prevention and regulatory efforts to offset favorable marketing messages. PMID:27029821

  10. Software defined radio (SDR) architecture for concurrent multi-satellite communications

    NASA Astrophysics Data System (ADS)

    Maheshwarappa, Mamatha R.

SDRs have emerged as a viable approach for space communications over the last decade by delivering low-cost hardware and flexible software solutions. The flexibility introduced by the SDR concept not only allows the realisation of multiple concurrent standards on one platform, but also promises to ease the implementation of one communication standard on differing SDR platforms by signal porting. This technology would facilitate implementing reconfigurable nodes for parallel satellite reception in Mobile/Deployable Ground Segments and Distributed Satellite Systems (DSS) for amateur radio/university satellite operations. This work outlines the recent advances in embedded technologies that can enable new communication architectures for concurrent multi-satellite or satellite-to-ground missions where multi-link challenges arise. This research proposes a novel concept: running advanced parallelised SDR back-end technologies on a Commercial-Off-The-Shelf (COTS) embedded system that can process multiple signals for multi-satellite scenarios simultaneously. The initial SDR implementation could support only one receiver chain due to system saturation. The design was therefore optimised to accommodate multiple signals within the limited resources available on an embedded system at any given time. This was achieved by providing a VHDL solution alongside the existing Python and C/C++ programming languages, together with parallelisation, so as to accelerate performance whilst maintaining flexibility. The improvement in performance was validated at every stage through profiling. Various cases of concurrent multiple signals with differing parameters, such as carrier frequency (including Doppler shift) and symbol rate, were simulated in order to validate the novel architecture proposed in this research. The architecture also allows the system to be reconfigured in soft real-time by changing the communication standard on the fly. The chosen COTS solution provides a generic software methodology for both ground and space applications that will remain unaltered despite new evolutions in hardware, and supports concurrent multi-standard, multi-channel and multi-rate telemetry signals.
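
    The per-signal processing idea can be sketched in a few lines. The following is a toy numpy illustration, not the thesis design: two Doppler-shifted carriers share one digitised band, and each is brought to baseband by its own mixing and filtering chain, which is the part that can be parallelised. All rates and frequencies are invented.

        # Toy sketch of concurrent multi-signal down-conversion.
        import numpy as np

        fs = 1.0e6                       # sample rate of the shared band, Hz
        t = np.arange(0, 0.01, 1 / fs)   # 10 ms of samples

        # Two satellite downlinks with different Doppler-shifted carriers (Hz).
        channels = {"sat_A": 150e3 + 2.3e3, "sat_B": 320e3 - 1.1e3}
        band = sum(np.cos(2 * np.pi * f * t) for f in channels.values())
        band += 0.1 * np.random.default_rng(1).normal(size=t.size)  # receiver noise

        # One mixing + filtering chain per signal, each independent of the others.
        for name, f in channels.items():
            bb = band * np.exp(-2j * np.pi * f * t)              # mix carrier to 0 Hz
            bb = np.convolve(bb, np.ones(64) / 64, mode="same")  # crude low-pass
            print(name, "baseband power:", np.mean(np.abs(bb) ** 2))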

  11. Implementing the Gaia Astrometric Global Iterative Solution (AGIS) in Java

    NASA Astrophysics Data System (ADS)

    O'Mullane, William; Lammers, Uwe; Lindegren, Lennart; Hernandez, Jose; Hobbs, David

    2011-10-01

This paper describes the Java software framework constructed to run the Astrometric Global Iterative Solution for the Gaia mission. This is the mathematical framework that provides the rigid reference frame for Gaia observations from the Gaia data itself, making Gaia a self-calibrated, input-catalogue-independent mission. The framework is highly distributed, typically running on a cluster of machines with a database back-end. All code is written in the Java language. We describe the overall architecture and some details of the implementation.

  12. NASA's Core Trajectory Sub-System Project: Using JBoss Enterprise Middleware for Building Software Systems Used to Support Spacecraft Trajectory Operations

    NASA Technical Reports Server (NTRS)

    Stensrud, Kjell C.; Hamm, Dustin

    2007-01-01

NASA's Johnson Space Center (JSC) / Flight Design and Dynamics Division (DM) has prototyped the use of open source middleware technology for building its next-generation spacecraft mission support system. This is part of a larger initiative to use open standards and open source software as building blocks for future mission- and safety-critical systems. JSC hopes to leverage standardized enterprise architectures, such as Java EE, so that its internal software development efforts can be focused on the core aspects of its problem domain. This presentation will outline the design and implementation of the Trajectory system and the lessons learned during the exercise.

  13. Defect measurement and analysis of JPL ground software: a case study

    NASA Technical Reports Server (NTRS)

    Powell, John D.; Spagnuolo, John N., Jr.

    2004-01-01

Ground software systems at JPL must meet high assurance standards while remaining on schedule, due to the relatively immovable launch dates of the spacecraft such systems will control. Toward this end, the Software Quality Improvement (SQI) project's Measurement and Benchmarking (M&B) team is collecting and analyzing defect data from JPL ground system software projects to build software defect prediction models. The aim of these models is to improve predictability with regard to software quality activities. Predictive models will quantitatively define typical trends for JPL ground systems, as well as Critical Discriminators (CDs) that explain atypical deviations from the norm at JPL. CDs are software characteristics that can be estimated or foreseen early in a software project's planning. Thus, these CDs will assist in planning for the degree to which software quality activities for a project are likely to deviate from the JPL ground-system norm, based on past experience across the laboratory.
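
    The modeling idea, regressing observed defect measures on project characteristics known early in planning, can be sketched as follows. The features and numbers are invented placeholders, not JPL data; a real model would be fit to the M&B team's collected defect records.

        # Sketch of a defect-prediction model over early-known project features.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Columns: size (KSLOC), team experience (years), requirements volatility (0-1).
        X = np.array([[120, 5, 0.2], [80, 8, 0.1], [200, 3, 0.4], [150, 6, 0.3]])
        y = np.array([1.8, 0.9, 3.5, 2.2])  # defects per KSLOC found in test

        model = LinearRegression().fit(X, y)
        new_project = np.array([[100, 4, 0.25]])
        print("predicted defect density:", model.predict(new_project)[0])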

  14. Improved Load Alleviation Capability for the KC-135

    DTIC Science & Technology

    1997-09-01

software, such as Matlab, Mathematica, Simulink, and Robotica Front End for Mathematica, available in the simulation laboratory. Overview: This thesis report is...outlined in Spong's text in order to utilize the Robotica system development software, which automates the process of calculating the kinematic and...kinematic and dynamic equations can be accomplished using a computer tool called Robotica Front End (RFE) [15], developed by Doctor Spong.

  15. A system for automatic evaluation of simulation software

    NASA Technical Reports Server (NTRS)

    Ryan, J. P.; Hodges, B. C.

    1976-01-01

Within the field of computer software, simulation and verification are complementary processes. Simulation methods can be used to verify software by performing variable range analysis. More general verification procedures, such as those described in this paper, can be implicitly viewed as attempts at modeling the end-product software. From the standpoint of software requirements methodology, each component of the verification system has some element of simulation in it. Conversely, general verification procedures can be used to analyze simulation software. A dynamic analyzer is described which can be used to obtain properly scaled variables for an analog simulation, which is first digitally simulated. In a similar way, it is thought that the other system components, and indeed the whole system itself, could be used effectively in a simulation environment.

  16. BAM/DASS: Data Analysis Software for Sub-Microarcsecond Astrometry Device

    NASA Astrophysics Data System (ADS)

    Gardiol, D.; Bonino, D.; Lattanzi, M. G.; Riva, A.; Russo, F.

    2010-12-01

The INAF - Osservatorio Astronomico di Torino is part of the Data Processing and Analysis Consortium (DPAC) for Gaia, a cornerstone mission of the European Space Agency. Gaia will perform global astrometry by means of two telescopes looking at the sky along two different lines of sight oriented at a fixed angle, called the basic angle. Knowledge of the basic angle fluctuations at the sub-microarcsecond level, over periods on the order of a minute, is crucial to reach the mission goals. A specific device, the Basic Angle Monitoring (BAM), will be dedicated to this purpose. We present here the software system we are developing to analyze the BAM data and recover the basic angle variations. This tool is integrated into the whole DPAC data analysis software.

  17. Lock It Up! Computer Security.

    ERIC Educational Resources Information Center

    Wodarz, Nan

    1997-01-01

    The data contained on desktop computer systems and networks pose security issues for virtually every district. Sensitive information can be protected by educating users, altering the physical layout, using password protection, designating access levels, backing up data, reformatting floppy disks, using antivirus software, and installing encryption…

  18. ILP-based co-optimization of cut mask layout, dummy fill, and timing for sub-14nm BEOL technology

    NASA Astrophysics Data System (ADS)

    Han, Kwangsoo; Kahng, Andrew B.; Lee, Hyein; Wang, Lutong

    2015-10-01

Self-aligned multiple patterning (SAMP), due to its low overlay error, has emerged as the leading option for 1D gridded back-end-of-line (BEOL) in sub-14nm nodes. To form actual routing patterns from a uniform "sea of wires", a cut mask is needed for line-end cutting or realization of space between routing segments. Constraints on cut shapes and minimum cut spacing result in end-of-line (EOL) extensions and non-functional (i.e., dummy fill) patterns; the resulting capacitance and timing changes must be consistent with signoff performance analyses, and their impacts should be minimized. In this work, we address the co-optimization of cut mask layout, dummy fill, and design timing for sub-14nm BEOL design. Our central contribution is an optimizer based on integer linear programming (ILP) to minimize the timing impact due to EOL extensions, considering (i) minimum cut spacing arising in sub-14nm nodes; (ii) cut assignment to different cut masks (color assignment); and (iii) the eligibility to merge two unit-size cuts into a bigger cut. We also propose a heuristic approach to remove dummy fill after the ILP-based optimization by extending the usage of cut masks. Our heuristic can improve critical path performance under minimum metal density and mask density constraints. In our experiments, we study the impact of the number of cut masks, minimum cut spacing, and metal density under various constraints. Our studies of optimized cut mask solutions in these varying contexts give new insight into the tradeoff between performance and cost afforded by cut mask patterning technology options.
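
    The flavor of the ILP can be conveyed with a toy instance. The sketch below, assuming the PuLP package, assigns each cut to one of two masks so that cuts violating the same-mask spacing rule are separated, while minimizing a per-assignment EOL-extension cost; it omits cut merging and the timing model, and all costs and conflicts are invented, not the paper's formulation.

        # Toy cut-mask assignment ILP (invented costs and conflicts).
        import pulp

        cuts = ["c1", "c2", "c3", "c4"]
        masks = [0, 1]
        # EOL-extension cost (nm) if a cut is assigned to a given mask.
        cost = {"c1": [0, 5], "c2": [3, 0], "c3": [0, 4], "c4": [2, 0]}
        # Pairs closer than the minimum same-mask cut spacing.
        conflicts = [("c1", "c2"), ("c2", "c3")]

        prob = pulp.LpProblem("cut_mask_assignment", pulp.LpMinimize)
        y = pulp.LpVariable.dicts("y", (cuts, masks), cat="Binary")

        prob += pulp.lpSum(cost[c][m] * y[c][m] for c in cuts for m in masks)
        for c in cuts:                        # every cut goes on exactly one mask
            prob += pulp.lpSum(y[c][m] for m in masks) == 1
        for a, b in conflicts:                # spacing violators cannot share a mask
            for m in masks:
                prob += y[a][m] + y[b][m] <= 1

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        for c in cuts:
            print(c, "-> mask", next(m for m in masks if y[c][m].value() == 1))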

  19. 4. EXTERIOR OF SOUTH END OF BUILDING 108 SHOWING STORM ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

4. EXTERIOR OF SOUTH END OF BUILDING 108 SHOWING STORM PORCH ADDITION AND WINDOWS ALONG BACK (WEST SIDE) OF HOUSE. NOTE ORIGINAL SHORT CHIMNEY AT CREST OF ROOF. VIEW TO NORTH. - Rush Creek Hydroelectric System, Clubhouse Cottage, Rush Creek, June Lake, Mono County, CA

  20. Sensor mount assemblies and sensor assemblies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, David H

    2012-04-10

Sensor mount assemblies and sensor assemblies are provided. In an embodiment, by way of example only, a sensor mount assembly includes a busbar, a main body, a backing surface, and a first finger. The busbar has a first end and a second end. The main body is overmolded onto the busbar. The backing surface extends radially outwardly relative to the main body. The first finger extends axially from the backing surface, and the first finger has a first end, a second end, and a tooth. The first end of the first finger is disposed on the backing surface, and the tooth is formed on the second end of the first finger.

  1. Design and Development of the SMAP Microwave Radiometer Electronics

    NASA Technical Reports Server (NTRS)

    Piepmeier, Jeffrey R.; Medeiros, James J.; Horgan, Kevin A.; Brambora, Clifford K.; Estep, Robert H.

    2014-01-01

The SMAP microwave radiometer will measure land surface brightness temperature at L-band (1413 MHz) in the presence of radio frequency interference (RFI) for soil moisture remote sensing. The radiometer design was driven by the requirements to incorporate internal calibration, to operate synchronously with the SMAP radar, and to mitigate the deleterious effects of RFI. The system design includes a highly linear super-heterodyne microwave receiver with internal reference loads and noise sources for calibration, and an innovative digital signal processor and detection system. The front-end comprises a coaxial cable-based feed network, with a pair of diplexers and a coupled noise source, and a radiometer front-end (RFE) box. Internal calibration is provided by reference switches and a common noise source inside the RFE. The RF back-end (RBE) downconverts the 1413 MHz channel to an intermediate frequency (IF) of 120 MHz. The IF signals are then sampled and quantized by high-speed analog-to-digital converters in the radiometer digital electronics (RDE) box. The RBE local oscillator and RDE sampling clocks are phase-locked to a common reference to ensure coherency between the signals. The RDE performs additional filtering, sub-band channelization, cross-correlation for measuring the third and fourth Stokes parameters, and detection and integration of the first four raw moments of the signals. These data are packetized and sent to the ground for calibration and further processing. Here we discuss the novel features of the radiometer hardware, particularly those influenced by the need to mitigate RFI.
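
    The "first four raw moments" give the back-end what it needs for kurtosis-style RFI flagging: Gaussian thermal noise has a kurtosis of 3, and pulsed interference pulls the statistic away from that value. The sketch below illustrates the computation in numpy; the signal, pulse model, and values are illustrative, not the SMAP processing chain.

        # Sketch: kurtosis-type RFI check from the first four raw moments.
        import numpy as np

        rng = np.random.default_rng(2)
        clean = rng.normal(0.0, 1.0, 100_000)              # thermal-noise-like samples
        rfi = clean + (rng.random(100_000) < 1e-3) * 30.0  # sparse pulsed interference

        for name, x in [("clean", clean), ("with RFI", rfi)]:
            m1, m2, m3, m4 = (np.mean(x ** k) for k in (1, 2, 3, 4))  # raw moments
            var = m2 - m1 ** 2
            kurt = (m4 - 4 * m1 * m3 + 6 * m1 ** 2 * m2 - 3 * m1 ** 4) / var ** 2
            print(f"{name}: kurtosis = {kurt:.2f} (about 3 for Gaussian noise)")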

  2. Fluorinated tin oxide back contact for AZTSSe photovoltaic devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gershon, Talia S.; Gunawan, Oki; Haight, Richard A.

A photovoltaic device includes a substrate, a back contact comprising a stable low-work-function material, a photovoltaic absorber material layer comprising Ag₂ZnSn(S,Se)₄ (AZTSSe) on a side of the back contact opposite the substrate, wherein the back contact forms an Ohmic contact with the photovoltaic absorber material layer, a buffer layer or Schottky contact layer on a side of the absorber layer opposite the back contact, and a top electrode on a side of the buffer layer opposite the absorber layer.

  3. DAS: A Data Management System for Instrument Tests and Operations

    NASA Astrophysics Data System (ADS)

    Frailis, M.; Sartor, S.; Zacchei, A.; Lodi, M.; Cirami, R.; Pasian, F.; Trifoglio, M.; Bulgarelli, A.; Gianotti, F.; Franceschi, E.; Nicastro, L.; Conforti, V.; Zoli, A.; Smart, R.; Morbidelli, R.; Dadina, M.

    2014-05-01

The Data Access System (DAS) is a metadata and data management software system, providing a reusable solution for the storage of data acquired both from telescopes and from auxiliary data sources during instrument development phases and operations. It is part of the Customizable Instrument WorkStation system (CIWS-FW), a framework for the storage, processing, and quick-look analysis of data acquired from scientific instruments. The DAS provides a data access layer mainly targeted at software applications: quick-look displays, pre-processing pipelines, and scientific workflows. It is logically organized in three main components: an intuitive and compact Data Definition Language (DAS DDL) in XML format, aimed at user-defined data types; an Application Programming Interface (DAS API), automatically adding classes and methods supporting the DDL data types and providing an object-oriented query language; and a data management component, which maps the metadata of the DDL data types onto a relational Data Base Management System (DBMS) and stores the data in a shared (network) file system. With the DAS DDL, developers define the data model for a particular project, specifying for each data type the metadata attributes, the data format and layout (if applicable), and named references to related or aggregated data types. Together with the DDL user-defined data types, the DAS API acts as the only interface to store, query, and retrieve the metadata and data in the DAS system, providing both an abstract interface and a data-model-specific one in C, C++, and Python. The mapping of metadata onto the back-end database is automatic and supports several relational DBMSs, including MySQL, Oracle, and PostgreSQL.
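
    The metadata-mapping idea can be illustrated generically. The sketch below shows how a user-defined type description, of the kind a DDL might express, can be turned into a relational table definition; the type, attribute names, and mapping are hypothetical and do not reproduce the actual DAS DDL or API.

        # Generic sketch: mapping a user-defined data type onto a relational table.
        ddl_type = {
            "name": "HousekeepingFrame",
            "attributes": [("obs_time", "float64"), ("sensor_id", "int32"),
                           ("temperature", "float32")],
        }

        SQL_TYPES = {"float64": "DOUBLE PRECISION", "float32": "REAL", "int32": "INTEGER"}

        def create_table_sql(t):
            cols = ", ".join(f"{name} {SQL_TYPES[kind]}" for name, kind in t["attributes"])
            return f"CREATE TABLE {t['name']} (id SERIAL PRIMARY KEY, {cols});"

        print(create_table_sql(ddl_type))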

  4. Large Format, Background Limited Arrays of Kinetic Inductance Detectors for Sub-mm Astronomy

    NASA Astrophysics Data System (ADS)

    Baselmans, Jochem

    2018-01-01

We present the development of large-format imaging arrays for sub-mm astronomy based upon microwave Kinetic Inductance Detectors (MKIDs) and their readout. In particular we focus on the arrays developed for the A-MKID instrument for the APEX telescope. A-MKID contains two focal plane arrays, covering a field of view of 15' x 15'. One array is optimized for the 350 GHz telluric window, the other for the 850 GHz window. Both arrays are constructed from four 61 x 61 mm detector chips, each of which contains up to 3400 detectors and up to 880 detectors per readout line. The detectors are lens-antenna-coupled MKIDs made from NbTiN and aluminium that reach photon-noise-limited sensitivity in combination with high optical coupling. The lens-antenna radiation coupling enables the use of 4 K optics and a Lyot stop, due to the intrinsic directivity of the detector beam, allowing a simple cryogenic architecture. We discuss the pixel design and verification, detector packaging, and the array performance. We also discuss the readout system, which is a combination of a digital and analog back-end that can read out up to 4000 pixels simultaneously using frequency division multiplexing.
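
    The principle of frequency-division-multiplexed readout is easy to demonstrate: each detector is monitored by its own probe tone on a shared line, and a Fourier transform channelises them apart. The numpy sketch below uses three tones with invented frequencies and amplitudes; a real system reads hundreds of detectors per line this way.

        # Sketch of FDM readout: many pixels, one line, one FFT.
        import numpy as np

        fs = 2.0e6
        t = np.arange(0, 0.02, 1 / fs)
        tones = [100e3, 250e3, 400e3]   # probe frequencies, illustrative
        amps = [1.0, 0.8, 1.2]          # tone response encodes each pixel's loading

        line = sum(a * np.cos(2 * np.pi * f * t) for a, f in zip(amps, tones))
        spec = np.abs(np.fft.rfft(line)) / (t.size / 2)
        freqs = np.fft.rfftfreq(t.size, 1 / fs)

        for f in tones:                 # read each pixel back out of the spectrum
            k = np.argmin(np.abs(freqs - f))
            print(f"{f / 1e3:.0f} kHz tone amplitude ~ {spec[k]:.2f}")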

  5. An off-the-shelf guider for the Palomar 200-inch telescope: interfacing amateur astronomy software with professional telescopes for an easy life

    NASA Astrophysics Data System (ADS)

    Clarke, Fraser; Lynn, James; Thatte, Niranjan; Tecza, Matthias

    2014-08-01

We have developed a simple but effective guider for use with the Oxford-SWIFT integral field spectrograph on the Palomar 200-inch telescope. The guider uses mainly off-the-shelf components, including commercial amateur astronomy software, to interface with the CCD camera, calculate guiding corrections, and send guide commands to the telescope. The only custom piece of software is a driver that provides an interface between the Palomar telescope control system and the industry-standard 'ASCOM' system. Using existing commercial software produced a very cheap guider (<$5000) with minimal (<15 minutes) commissioning time. The final system provides sub-arcsecond guiding and could easily be adapted to any other professional telescope.

  6. A Study of Quality of Service Communication for High-Speed Packet-Switching Computer Sub-Networks

    NASA Technical Reports Server (NTRS)

    Cui, Zhenqian

    1999-01-01

In this thesis, we analyze various factors that affect quality of service (QoS) communication in high-speed, packet-switching sub-networks. We hypothesize that sub-network-wide bandwidth reservation and guaranteed CPU processing power at endpoint systems for handling data traffic are indispensable to achieving hard end-to-end quality of service. Different bandwidth reservation strategies, traffic characterization schemes, and scheduling algorithms affect network resource and CPU usage, as well as the extent to which QoS can be achieved. In order to analyze those factors, we design and implement a communication layer. Our experimental analysis supports our research hypothesis. The Resource ReSerVation Protocol (RSVP) is designed to realize resource reservation. Our analysis of RSVP shows that using RSVP alone is insufficient to provide hard end-to-end quality of service in a high-speed sub-network. Analysis of the IEEE 802.1p protocol also supports the research hypothesis.

  7. XPRESS: eXascale PRogramming Environment and System Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brightwell, Ron; Sterling, Thomas; Koniges, Alice

The XPRESS Project is one of four major projects of the DOE Office of Science Advanced Scientific Computing Research X-stack Program initiated in September 2012. The purpose of XPRESS is to devise an innovative system software stack to enable practical and useful exascale computing around the end of the decade, with near-term contributions to the efficient and scalable operation of trans-petaflops performance systems in the next two to three years, both for DOE mission-critical applications. To this end, XPRESS directly addresses critical challenges in computing efficiency, scalability, and programmability through introspective methods of dynamic adaptive resource management and task scheduling.

  8. Diagnostic system for measuring temperature, pressure, CO2 concentration and H2O concentration in a fluid stream

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Partridge, Jr., William P.; Jatana, Gurneesh Singh; Yoo, Ji-Hyung

A diagnostic system for measuring temperature, pressure, CO₂ concentration, and H₂O concentration in a fluid stream is described. The system may include one or more probes that sample the fluid stream spatially, temporally, and over ranges of pressure and temperature. Laser light sources are directed down pitch optical cables, through a lens, and to a mirror, where the light is reflected back through the lens to catch optical cables. The light travels through the catch optical cables to detectors, which provide electrical signals to a processor. The processor utilizes the signals to calculate CO₂ concentration based on the temperatures derived from the H₂O vapor concentration. A probe for sampling CO₂ and H₂O vapor concentrations is also disclosed. Various mechanical features interact to ensure the pitch and catch optical cables are properly aligned with the lens during assembly and use.

  9. SPENVIS Implementation of End-of-Life Solar Cell Calculations Using the Displacement Damage Dose Methodology

    NASA Technical Reports Server (NTRS)

Walters, Robert; Summers, Geoffrey P.; Warner, Jeffrey H.; Messenger, Scott; Lorentzen, Justin R.; Morton, Thomas; Taylor, Stephen J.; Evans, Hugh; Heynderickx, Daniel; Lei, Fan

    2007-01-01

This paper presents a method for using the SPENVIS on-line computational suite to implement the displacement damage dose (Dd) methodology for calculating end-of-life (EOL) solar cell performance for a specific space mission. This paper builds on our previous work that has validated the Dd methodology against both measured space data [1,2] and calculations performed using the equivalent fluence methodology developed by NASA JPL [3]. For several years, the space solar community has considered general implementation of the Dd method, but no computer program exists to enable this implementation. In a collaborative effort, NRL, NASA and OAI have produced the Solar Array Verification and Analysis Tool (SAVANT) under NASA funding, but this program has not progressed beyond the beta stage [4]. The SPENVIS suite with the Multi Layered Shielding Simulation Software (MULASSIS) contains all of the necessary components to implement the Dd methodology in a format complementary to that of SAVANT [5]. NRL is currently working with ESA and BIRA to include the Dd method of solar cell EOL calculations as an integral part of SPENVIS. This paper describes how this can be accomplished.
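
    The core of the Dd methodology is a single degradation characteristic: once the mission's displacement damage dose is known, the end-of-life performance follows from a curve of the form P/P0 = 1 - C*log10(1 + Dd/Dx), where C and Dx are technology-specific fitted constants. The sketch below evaluates that curve with placeholder constants, not measured cell parameters.

        # Sketch of the Dd degradation characteristic used in EOL calculations.
        import math

        def remaining_factor(dd, C=0.2, Dx=1.0e9):
            """Normalized end-of-life performance P/P0 for a given Dd (MeV/g)."""
            return 1.0 - C * math.log10(1.0 + dd / Dx)

        for dd in (1e8, 1e9, 1e10, 1e11):
            print(f"Dd = {dd:.0e} MeV/g -> P/P0 = {remaining_factor(dd):.3f}")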

  10. Conducting Research on the International Space Station Using the EXPRESS Rack Facilities

    NASA Technical Reports Server (NTRS)

    Thompson, Sean W.; Lake, Robert E.

    2013-01-01

Eight "Expedite the Processing of Experiments to Space Station" (EXPRESS) Rack facilities are located within the International Space Station (ISS) laboratories to provide standard resources and interfaces for the simultaneous and independent operation of multiple experiments within each rack. Each EXPRESS Rack provides eight Middeck Locker Equivalent locations and two drawer locations for powered experiment equipment, also referred to as sub-rack payloads. Payload developers may provide their own structure to occupy the equivalent volume of one, two, or four lockers as a single unit. Resources provided for each location include power (28 Vdc, 0-500 W), command and data handling (Ethernet, RS-422, 5 Vdc discrete, +/- 5 Vdc analog), video (NTSC/RS-170A), and air cooling (0-200 W). Each rack also provides water cooling (500 W) for two locations, one vacuum exhaust interface, and one gaseous nitrogen interface. Standard interfacing cables and hoses are provided on-orbit. One laptop computer is provided with each rack to control the rack and to accommodate payload application software. Four of the racks are equipped with the Active Rack Isolation System to reduce vibration between the ISS and the rack. EXPRESS Racks are operated by the Payload Operations Integration Center at Marshall Space Flight Center, and the sub-rack experiments are operated remotely by the investigating organizations. Payload Integration Managers serve as focal points to assist organizations developing payloads for an EXPRESS Rack. NASA provides EXPRESS Rack simulator software for payload developers to check out payload command and data handling at the development site, before integrating the payload with the EXPRESS Functional Checkout Unit for an end-to-end test before flight. EXPRESS Racks began supporting investigations onboard the ISS on April 24, 2001 and will continue through the life of the ISS.

  11. EEG acquisition system based on active electrodes with common-mode interference suppression by Driving Right Leg circuit.

    PubMed

    Guermandi, Marco; Bigucci, Alessandro; Franchi Scarselli, Eleonora; Guerrieri, Roberto

    2015-01-01

We present a system for the acquisition of EEG signals based on active electrodes and implementing a Driving Right Leg circuit (DgRL). The DgRL allows for single-ended amplification and analog-to-digital conversion while still guaranteeing common-mode rejection in excess of 110 dB. This allows the system to acquire high-quality EEG signals, essentially removing power-line interference, for both wet and dry-contact electrodes. The front-end amplification stage is integrated on the electrode, minimizing the system's sensitivity to electrode contact quality, cable movement, and common-mode interference. The A/D conversion stage can be either integrated in the remote back-end or placed on the head as well, allowing for all-digital communication to the back-end. Noise integrated over the band from 0.5 to 100 Hz is between 0.62 and 1.3 μV, depending on the configuration. Current consumption for the amplification and A/D conversion of one channel is 390 μA. Thanks to its low noise, high level of interference suppression, and quick setup, the system is particularly suitable for use outside clinical environments, such as in home care, brain-computer interfaces, or consumer-oriented applications.

  12. Towards a cross-platform software framework to support end-to-end hydrometeorological sensor network deployment

    NASA Astrophysics Data System (ADS)

    Celicourt, P.; Sam, R.; Piasecki, M.

    2016-12-01

Global phenomena such as climate change and large-scale environmental degradation require the collection of accurate environmental data at detailed spatial and temporal scales, from which knowledge and actionable insights can be derived using data science methods. Despite significant advances in sensor network technologies, sensor and sensor network deployment remains a labor-intensive, time-consuming, cumbersome, and expensive task. These factors explain why environmental data collection remains a challenge, especially in developing countries where technical infrastructure, expertise, and pecuniary resources are scarce, and why dense, long-term environmental data collection has historically been quite difficult. Moreover, hydrometeorological data collection efforts usually overlook the critically important inclusion of a standards-based system for storing, managing, organizing, indexing, documenting, and sharing sensor data. We are developing a cross-platform software framework, using the Python programming language, that will allow us to build a low-cost end-to-end (from sensor to publication) system for monitoring hydrometeorological conditions. The software framework contains provisions for describing sensors, sensor platforms, calibration, and network protocols, as well as for sensor programming, data storage, data publication and visualization, and, more importantly, data retrieval in a desired unit system. It is being tested on the Raspberry Pi microcomputer as the end node and a laptop PC as the base station in a wireless setting.
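
    On the end node, the core loop is simple: read a sensor, timestamp the value, and store it with its variable name and unit. The sketch below stubs out the hardware read; the file layout and names are illustrative, not the framework's actual storage schema.

        # Sketch of an end-node observation loop with a stubbed sensor driver.
        import csv
        import random
        import time
        from datetime import datetime, timezone

        def read_sensor():          # stand-in for a real GPIO/serial driver call
            return round(random.uniform(15.0, 30.0), 2)  # e.g., air temperature, degC

        with open("observations.csv", "a", newline="") as f:
            writer = csv.writer(f)
            for _ in range(3):      # a deployed node would loop indefinitely
                writer.writerow([datetime.now(timezone.utc).isoformat(),
                                 "air_temperature", read_sensor(), "degC"])
                time.sleep(1)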

  13. OISI dynamic end-to-end modeling tool

    NASA Astrophysics Data System (ADS)

    Kersten, Michael; Weidler, Alexander; Wilhelm, Rainer; Johann, Ulrich A.; Szerdahelyi, Laszlo

    2000-07-01

The OISI dynamic end-to-end modeling tool is tailored to end-to-end modeling and dynamic simulation of Earth- and space-based actively controlled optical instruments, such as optical stellar interferometers. 'End-to-end modeling' here denotes the feature that the overall model comprises, besides optical sub-models, also structural, sensor, actuator, controller, and disturbance sub-models influencing the optical transmission, so that system-level instrument performance under disturbances and active optics can be simulated. This tool has been developed to support performance analysis and prediction as well as control loop design and fine-tuning for OISI, Germany's preparatory program for optical/infrared spaceborne interferometry initiated in 1994 by Dornier Satellitensysteme GmbH in Friedrichshafen.

  14. Orbital friction stir weld system

    NASA Technical Reports Server (NTRS)

    Ding, R. Jeffrey (Inventor); Carter, Robert W. (Inventor)

    2001-01-01

    This invention is an apparatus for joining the ends of two cylindrical (i.e., pipe-shaped) sections together with a friction stir weld. The apparatus holds the two cylindrical sections together and provides back-side weld support as it makes a friction stir weld around the circumference of the joined ends.

  15. 12 CFR 7.5004 - Sale of excess electronic capacity and by-products.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... bank's needs for banking purposes include: (1) Data processing services; (2) Production and distribution of non-financial software; (3) Providing periodic back-up call answering services; (4) Providing full Internet access; (5) Providing electronic security system support services; (6) Providing long...

  16. Framework for End-User Programming of Cross-Smart Space Applications

    PubMed Central

    Palviainen, Marko; Kuusijärvi, Jarkko; Ovaska, Eila

    2012-01-01

    Cross-smart space applications are specific types of software services that enable users to share information, monitor the physical and logical surroundings and control it in a way that is meaningful for the user's situation. For developing cross-smart space applications, this paper makes two main contributions: it introduces (i) a component design and scripting method for end-user programming of cross-smart space applications and (ii) a backend framework of components that interwork to support the brunt of the RDFScript translation, and the use and execution of ontology models. Before end-user programming activities, the software professionals must develop easy-to-apply Driver components for the APIs of existing software systems. Thereafter, end-users are able to create applications from the commands of the Driver components with the help of the provided toolset. The paper also introduces the reference implementation of the framework, tools for the Driver component development and end-user programming of cross-smart space applications and the first evaluation results on their application. PMID:23202169

  17. System for stabilizing cable phase delay utilizing a coaxial cable under pressure

    NASA Technical Reports Server (NTRS)

    Clements, P. A. (Inventor)

    1974-01-01

    Stabilizing the phase delay of signals passing through a pressurizable coaxial cable is disclosed. Signals from an appropriate source at a selected frequency, e.g., 100 MHz, are sent through the controlled cable from a first cable end to a second cable end which, electrically, is open or heavily mismatched at 100 MHz, thereby reflecting 100 MHz signals back to the first cable end. Thereat, the phase difference between the reflected-back signals and the signals from the source is detected by a phase detector. The output of the latter is used to control the flow of gas to or from the cable, thereby controlling the cable pressure, which in turn affects the cable phase delay.

  18. CHIPSat spacecraft design: significant science on a low budget

    NASA Astrophysics Data System (ADS)

    Janicik, Jeffrey; Wolff, Jonathan

    2003-12-01

The Cosmic Hot Interstellar Plasma Spectrometer satellite (CHIPSat) was launched on January 12, 2003 and is successfully accomplishing its mission. CHIPS is NASA's first-ever University-Class Explorer (UNEX) project, and is performed through a grant to the University of California at Berkeley (UCB) Space Sciences Laboratory (SSL). As a small start-up aerospace company, SpaceDev was awarded responsibility for a low-cost spacecraft and mission design, build, integration and test, and mission operations. The company leveraged past small satellite mission experience to help design a robust small spacecraft system architecture. In addition, it utilized common industry hardware and software standards to facilitate design implementation, integration, and test of the bus, including the use of TCP/IP protocols and the Internet for end-to-end satellite communications. The approach called for a single-string design except in critical areas, the use of COTS parts to incorporate the latest proven technologies in commercial electronics, and the establishment of a working system as quickly as possible in order to maximize test hours prior to launch. Furthermore, automated ground systems were combined with table-configured onboard software to allow for "hands-off" mission operations. During nominal operations, the CHIPSat spacecraft uses a 3-axis stabilized zero-momentum-bias "Nominal" mode. The secondary mode is a "Safehold" mode, in which fixed "keep-alive" arrays maintain enough power to operate the essential spacecraft bus in any attitude and spin condition, and no a priori attitude knowledge is required to recover. Due to the omnidirectional antenna design, communications are robust in "Safehold" mode, including the transmission of basic housekeeping data at a duty cycle that is adjusted based on available solar power. This design enables the entire mission to be spent in "Observation Mode", with timed pointing files mapping the sky as desired, unless an anomalous event upsets the health of the bus such that the spacecraft system toggles back to "Safehold". In all conditions, spacecraft operations do not require any time-critical operator involvement. This paper will examine the results of the first six months of CHIPSat on-orbit operations and measure them against the expectations of the aforementioned design architecture. The end result will be a "lessons learned" account of a 3-axis sun-pointing small spacecraft design architecture that will be useful for future science missions.

  19. Support for Quality Assurance in End-User Systems.

    ERIC Educational Resources Information Center

    Klepper, Robert; McKenna, Edward G.

    1989-01-01

    Suggests an approach that organizations can take to provide centralized support services for quality assurance in end-user information systems, based on the experiences of a support group at Citicorp Mortgage, Inc. The functions of the support group include user education, software selection, and assistance in testing, implementation, and support…

  20. Towards a Software Framework to Support Deployment of Low Cost End-to-End Hydroclimatological Sensor Network

    NASA Astrophysics Data System (ADS)

    Celicourt, P.; Piasecki, M.

    2015-12-01

Deployment of environmental sensor assemblies based on cheap platforms such as the Raspberry Pi and Arduino has gained much attention over the past few years. While these platforms are attractive because they can be controlled with any of several programming languages, the configuration task can become quite complex because of the need to learn several different proprietary data formats and protocols, which constitutes a bottleneck for the expansion of sensor networks. In response to this rising complexity, the Institute of Electrical and Electronics Engineers (IEEE) has sponsored the development of the IEEE 1451 standard in an attempt to introduce a common standard. The most innovative concept of the standard is the Transducer Electronic Data Sheet (TEDS), which enables transducers to self-identify, self-describe, self-calibrate, exhibit plug-and-play functionality, and so on. We used Python to develop an IEEE 1451.0 platform-independent graphical user interface to generate and provide sufficient information about almost any sensor and sensor platform for sensor programming purposes, automatic calibration of sensor data, and the incorporation of back-end demands on data management in TEDS for automatic standards-based data storage, search, and discovery purposes. These features are paramount to making data management much less onerous in large-scale sensor networks. Along with the TEDS Creator, we developed a tool named HydroUnits for three specific purposes: encoding of physical units in the TEDS, dimensional analysis, and on-the-fly conversion of time series, allowing users to retrieve data in a desired equivalent unit while accommodating unforeseen and user-defined units. In addition, our back-end data management comprises the Python/Django equivalent of the CUAHSI Observations Data Model (ODM), namely DjangODM, which will be hosted by a MongoDB database server that offers more convenience for our application. We are also developing a data-ingestion component that will be paired with the data autoloading capability of Django and a TEDS processing script to populate the database with the incoming data. The Python WaterOneFlow Web Services developed by the Texas Water Development Board will be used to publish the data. The software suite is being tested on the Raspberry Pi as the end node and a laptop PC as the base station in a wireless setting.
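
    The on-the-fly conversion idea behind HydroUnits can be sketched with a small factor table: linear conversions multiply by a stored factor, while affine ones such as temperature get special handling. The function name and the tiny set of units below are hypothetical, not the actual HydroUnits interface.

        # Sketch of unit conversion with a lookup table plus an affine special case.
        FACTORS = {
            ("mm", "in"): 1.0 / 25.4,
            ("m3/s", "L/s"): 1000.0,
        }

        def convert(value, src, dst):
            if src == dst:
                return value
            if (src, dst) == ("degC", "degF"):     # affine, not a pure factor
                return value * 9.0 / 5.0 + 32.0
            return value * FACTORS[(src, dst)]

        print(convert(12.7, "mm", "in"))      # 0.5
        print(convert(25.0, "degC", "degF"))  # 77.0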

  1. Spinoff 2005

    NASA Technical Reports Server (NTRS)

    2005-01-01

Topics covered include: Lighting the Way for Quicker, Safer Healing; Discovering New Drugs on the Cellular Level; Hydrogen Sensors Boost Hybrids; Today's Models Losing Gas?; 3-D Highway in the Sky; Popping a Hole in High-Speed Pursuits; Monitoring Wake Vortices for More Efficient Airports; From Rockets to Racecars; All-Terrain Intelligent Robot Braves Battlefront to Save Lives; Keeping the Air Clean and Safe--An Anthrax Smoke Detector; Lightning Often Strikes Twice; Technology That's Ready and Able to Inspect Those Cables; Secure Networks for First Responders and Special Forces; Space Suit Spins; Cooking Dinner at Home--From the Office; Nanoscale Materials Make for Large-Scale Applications; NASA's Growing Commitment: The Space Garden; Bringing Thunder and Lightning Indoors; Forty-Year-Old Foam Springs Back With New Benefits; Experiments With Small Animals Rarely Go This Well; NASA, the Fisherman's Friend; Crystal-Clear Communication a Sweet-Sounding Success; Inertial Motion-Tracking Technology for Virtual 3-D; Then Why Do They Call Earth the Blue Planet?; Valiant 'Zero-Valent' Effort Restores Contaminated Grounds; Harnessing the Power of the Sun; Water and Air Measures That Make 'PureSense'; Remote Sensing for Farmers and Flood Watching; Pesticide-Free Device a Fatal Attraction for Mosquitoes; Making the Most of Waste Energy; Washing Away the Worries About Germs; Celestial Software Scratches More Than the Surface; A Search Engine That's Aware of Your Needs; Fault-Detection Tool Has Companies 'Mining' Own Business; Software to Manage the Unmanageable; Tracking Electromagnetic Energy With SQUIDs; Taking the Risk Out of Risk Assessment; Satellite and Ground System Solutions at Your Fingertips; Structural Analysis Made 'NESSUSary'; Software of Seismic Proportions Promotes Enjoyable Learning; Making a Reliable Actuator Faster and More Affordable; Cost-Cutting Powdered Lubricant; NASA's Radio Frequency Bolt Monitor: A Lifetime of Spinoffs; Going End to End to Deliver High-Speed Data; Advanced Joining Technology: Simple, Strong, and Secure; Big Results From a Smaller Gearbox; Low-Pressure Generator Makes Cleanrooms Cleaner; and The Space Laser Business Model.

  2. An FPGA-Based High-Speed Error Resilient Data Aggregation and Control for High Energy Physics Experiment

    NASA Astrophysics Data System (ADS)

    Mandal, Swagata; Saini, Jogender; Zabołotny, Wojciech M.; Sau, Suman; Chakrabarti, Amlan; Chattopadhyay, Subhasis

    2017-03-01

Due to the dramatic increase of data volume in modern high energy physics (HEP) experiments, a robust high-speed data acquisition (DAQ) system is very much needed to gather the data generated during different nuclear interactions. As the DAQ works in a harsh radiation environment, there is a fair chance of data corruption caused by various energetic particles, such as alpha and beta particles or neutrons. Hence, a major challenge in the development of a DAQ for an HEP experiment is to establish an error-resilient communication system between the front-end sensors or detectors and the back-end data processing computing nodes. Here, we have implemented the DAQ using a field-programmable gate array (FPGA), owing to some of its inherent advantages over the application-specific integrated circuit. A novel orthogonal concatenated code and a cyclic redundancy check (CRC) have been used to mitigate the effects of data corruption in the user data. Scrubbing with a 32-bit CRC has been used against errors in the configuration memory of the FPGA. Data from the front-end sensors reach the back-end processing nodes through multiple stages that may add an uncertain amount of delay to different data packets. We have also proposed a novel memory management algorithm that helps to process the data at the back-end computing nodes, removing the added path delays. To the best of our knowledge, the proposed FPGA-based DAQ, utilizing an optical link with channel coding and efficient memory management modules, can be considered the first of its kind. Performance estimation of the implemented DAQ system is done based on resource utilization, bit error rate, efficiency, and robustness to radiation.
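
    The CRC half of the error-resilience scheme is straightforward to illustrate: the sender appends a 32-bit CRC to each payload, and the receiver recomputes it, so any single corrupted bit is caught. The sketch below uses Python's zlib for the CRC; it does not attempt the orthogonal concatenated coding, which corrects rather than merely detects errors.

        # Sketch of CRC-based error detection on a data packet.
        import zlib

        def frame(payload: bytes) -> bytes:
            return payload + zlib.crc32(payload).to_bytes(4, "big")

        def check(framed: bytes) -> bool:
            payload, crc = framed[:-4], int.from_bytes(framed[-4:], "big")
            return zlib.crc32(payload) == crc

        pkt = frame(b"detector hit, channel 42")
        print(check(pkt))                             # True: intact packet
        corrupted = bytes([pkt[0] ^ 0x01]) + pkt[1:]  # flip one bit
        print(check(corrupted))                       # False: corruption detected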

  3. Architecture of the software for LAMOST fiber positioning subsystem

    NASA Astrophysics Data System (ADS)

    Peng, Xiaobo; Xing, Xiaozheng; Hu, Hongzhuan; Zhai, Chao; Li, Weimin

    2004-09-01

The architecture of the software that controls the LAMOST fiber positioning sub-system is described. The software is composed of two parts: a main control program running on a computer, and a unit controller program in the ROM of an MCS51 single-chip microcomputer. The functions of the software include client/server model establishment, observation planning, collision handling, data transmission, pulse generation, CCD control, image capture and processing, and data analysis. Particular attention is paid to the ways in which the different parts of the software communicate. Software techniques for multithreading, socket programming, Microsoft Windows message response, and serial communication are also discussed.
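
    The client/server split between the main control program and the unit controllers can be illustrated with a minimal socket exchange. The sketch below is Python rather than the code an MCS51 would actually run, and the one-line command protocol is invented for illustration.

        # Minimal client/server sketch: send a positioning command, await an ACK.
        import socket
        import threading
        import time

        def unit_controller(port=5050):      # stands in for the controller side
            with socket.socket() as srv:
                srv.bind(("127.0.0.1", port))
                srv.listen(1)
                conn, _ = srv.accept()
                with conn:
                    cmd = conn.recv(1024).decode()
                    conn.sendall(("ACK " + cmd).encode())

        threading.Thread(target=unit_controller, daemon=True).start()
        time.sleep(0.2)                      # let the server start listening

        with socket.socket() as cli:         # main control program side
            cli.connect(("127.0.0.1", 5050))
            cli.sendall(b"MOVE fiber=17 r=1.25 theta=0.80")
            print(cli.recv(1024).decode())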

  4. Communication Problems in Requirements Engineering: A Field Study

    NASA Technical Reports Server (NTRS)

    Al-Rawas, Amer; Easterbrook, Steve

    1996-01-01

    The requirements engineering phase of software development projects is characterized by the intensity and importance of communication activities. During this phase, the various stakeholders must be able to communicate their requirements to the analysts, and the analysts need to be able to communicate the specifications they generate back to the stakeholders for validation. This paper describes a field investigation into the problems of communication between disparate communities involved in the requirements specification activities. The results of this study are discussed in terms of their relation to three major communication barriers: (1) ineffectiveness of the current communication channels; (2) restrictions on expressiveness imposed by notations; and (3) social and organizational barriers. The results confirm that organizational and social issues have great influence on the effectiveness of communication. They also show that in general, end-users find the notations used by software practitioners to model their requirements difficult to understand and validate.

  5. VGOS Operations and Geodetic Results

    NASA Astrophysics Data System (ADS)

    Niell, Arthur E.; Beaudoin, Christopher J.; Bolotin, Sergei; Cappallo, Roger J.; Corey, Brian E.; Gipson, John; Gordon, David; McWhirter, Russell; Ruszczyk, Chester A.; SooHoo, Jason

    2014-12-01

    Over the past two years the first VGOS geodetic results were obtained using the GGAO12M and Westford broadband systems that have been developed under NASA sponsorship and funding. These observations demonstrated full broadband operation, from data acquisition through correlation, delay extraction, and baseline estimation. The May 2013 24-hour session proceeded almost without human intervention in anticipation of the goal of unattended operation. A recent test observation successfully demonstrated the use of what is expected to be the operational version of the RDBE digital back end and the Mark 6 system on which the outputs of four RDBEs, each processing one RF band, were recorded on a single module at eight gigabits per second. The complex-sample VDIF data from GGAO12M and Westford were cross-correlated on the Haystack DiFX software correlator, and the instrumental delay was calculated from all of the phase calibration tones in each channel. A minimum redundancy frequency sequence (1, 2, 4, 6, 9, 13, 14, 15) was utilized to minimize the first sidelobes of the multiband delay resolution function.

  6. DAILY SIMULATION OF OZONE AND FINE PARTICULATES OVER NEW YORK STATE: FINDINGS AND CHALLENGES

    EPA Science Inventory

    This study investigates the potential utility of the application of a photochemical modeling system in providing simultaneous forecasts of ozone (O3) and fine particulate matter (PM2.5) over New York State. To this end, daily simulations from the Community M...

  7. Miniaturized Airborne Imaging Central Server System

    NASA Technical Reports Server (NTRS)

    Sun, Xiuhong

    2011-01-01

In recent years, some remote-sensing applications have required advanced airborne multi-sensor systems to provide high-performance reflective and emissive spectral imaging measurements rapidly over large areas. The key characteristic of such systems is a black-box back-end system that operates a suite of cutting-edge imaging sensors to simultaneously collect high-throughput reflective and emissive spectral imaging data with precision georeference. This back-end system needs to be portable, easy to use, and reliable, with advanced onboard processing. The innovation of the black-box back-end is a miniaturized airborne imaging central server system (MAICSS). MAICSS integrates a complex embedded system of systems, with dedicated power and signal electronic circuits inside, to serve a suite of configurable cutting-edge electro-optical (EO), long-wave infrared (LWIR), and medium-wave infrared (MWIR) cameras, a hyperspectral imaging scanner, and a GPS and inertial measurement unit (IMU) for atmospheric and surface remote sensing. Its compatible sensor packages include NASA's 1,024 x 1,024 pixel LWIR quantum well infrared photodetector (QWIP) imager; a 60.5-megapixel BuckEye EO camera; and a fast (e.g., 200+ scanlines/s), wide-swath (e.g., 1,920+ pixels) CCD/InGaAs imager-based visible/near-infrared reflectance (VNIR) and shortwave infrared (SWIR) imaging spectrometer. MAICSS records continuous precision-georeferenced and time-tagged multisensor throughputs to mass storage devices at a high aggregate rate, typically 60 MB/s for its LWIR/EO payload. MAICSS is a complete stand-alone imaging server instrument with an easy-to-use software package for either autonomous data collection or interactive airborne operation. Advanced multisensor data acquisition and onboard processing software features have been implemented for MAICSS. With onboard processing for real-time image development, correction, histogram equalization, compression, georeferencing, and data organization, fast aerial imaging applications, including a real-time LWIR image mosaic for Google Earth, have been realized for NASA's LWIR QWIP instrument. MAICSS is a significant improvement on and miniaturization of current multisensor technologies. Structurally, it has a completely modular and solid-state design. Without rotating hard drives or other moving parts, it is operational at high altitudes and survivable in high-vibration environments. It is assembled from a suite of miniaturized, precision-machined, standardized, and stackable interchangeable embedded instrument modules. These stackable modules can be bolted together, with the interconnection wires inside, for maximal simplicity and portability. Multiple modules are electronically interconnected as stacked. Alternatively, the dedicated modules can be flexibly distributed to fit the space constraints of a flying vehicle. As a flexibly configurable system, MAICSS can be tailored to interface with a variety of multisensor packages. For example, with a 1,024 x 1,024 pixel LWIR and an 8,984 x 6,732 pixel EO payload, the complete MAICSS volume is approximately 7 x 9 x 11 in. (about 18 x 23 x 28 cm), with a weight of 25 lb (about 11.4 kg).

  8. Archive of digital chirp subbottom profile data collected during USGS cruise 12BIM03 offshore of the Chandeleur Islands, Louisiana, July 2012

    USGS Publications Warehouse

    Forde, Arnell S.; Miselis, Jennifer L.; Wiese, Dana S.

    2014-01-01

    From July 23 - 31, 2012, the U.S. Geological Survey conducted geophysical surveys to investigate the geologic controls on barrier island framework and long-term sediment transport along the oil spill mitigation sand berm constructed at the north end and just offshore of the Chandeleur Islands, La. (figure 1). This effort is part of a broader USGS study, which seeks to better understand barrier island evolution over medium time scales (months to years). This report serves as an archive of unprocessed digital chirp subbottom data, trackline maps, navigation files, Geographic Information System (GIS) files, Field Activity Collection System (FACS) logs, and formal Federal Geographic Data Committee (FGDC) metadata. Gained (showing a relative increase in signal amplitude) digital images of the seismic profiles are also provided. Refer to the Abbreviations page for expansions of acronyms and abbreviations used in this report. The USGS St. Petersburg Coastal and Marine Science Center (SPCMSC) assigns a unique identifier to each cruise or field activity. For example, 12BIM03 tells us the data were collected in 2012 during the third field activity for that project in that calendar year and BIM is a generic code, which represents efforts related to Barrier Island Mapping. Refer to http://walrus.wr.usgs.gov/infobank/programs/html/definition/activity.html for a detailed description of the method used to assign the field activity ID. All chirp systems use a signal of continuously varying frequency; the EdgeTech SB-424 system used during this survey produces high-resolution, shallow-penetration (typically less than 50 milliseconds (ms)) profile images of sub-seafloor stratigraphy. The towfish contains a transducer that transmits and receives acoustic energy and is typically towed 1 - 2 m below the sea surface. As transmitted acoustic energy intersects density boundaries, such as the seafloor or sub-surface sediment layers, energy is reflected back toward the transducer, received, and recorded by a PC-based seismic acquisition system. This process is repeated at regular time intervals (for example, 0.125 seconds (s)) and returned energy is recorded for a specific duration (for example, 50 ms). In this way, a two-dimensional (2-D) vertical image of the shallow geologic structure beneath the ship track is produced. Figure 2 displays the acquisition geometry. Refer to table 1 for a summary of acquisition parameters and table 2 for trackline statistics. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG Y rev. 0 format (Barry and others, 1975); the first 3,200 bytes of the card image header are in ASCII format instead of EBCDIC format. The SEG Y files may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU) (Cohen and Stockwell, 2010). See the How To Download SEG Y Data page for download instructions. The web version of this archive does not contain the SEG Y trace files. These files are very large and would require extremely long download times. To obtain the complete DVD archive, contact USGS Information Services at 1-888-ASK-USGS or infoservices@usgs.gov. The printable profiles provided here are GIF images that were processed and gained using SU software and can be viewed from the Profiles page or from links located on the trackline maps; refer to the Software page for links to example SU processing scripts. 
The SEG Y files are available on the DVD version of this report or on the Web, downloadable via the USGS Coastal and Marine Geoscience Data System (http://cmgds.marine.usgs.gov). The data are also available for viewing using GeoMapApp (http://www.geomapapp.org) and Virtual Ocean (http://www.virtualocean.org) multi-platform open source software. Detailed information about the navigation system used can be found in table 1 and the Field Activity Collection System (FACS) logs. To view the trackline maps and navigation files, and for more information about these items, see the Navigation page.

  9. 75 FR 71625 - System Restoration Reliability Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-24

    ... processing software should be filed in native applications or print-to-PDF format, and not in a scanned... (2006), aff'd sub nom. Alcoa, Inc. v. FERC, 564 F.3d 1342 (D.C. Cir. 2009). 6. On March 16, 2007, the... electronically using word processing software should be filed in native applications or print-to-PDF format, and...

  10. RELAP-7 Software Verification and Validation Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Curtis L.; Choi, Yong-Joon; Zou, Ling

    This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process, that is, a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework, MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's capability and extends the analysis capability for all reactor system simulation scenarios.

  11. Optimization of Water Management of Cranberry Fields under Current and Future Climate Conditions

    NASA Astrophysics Data System (ADS)

    Létourneau, G.; Gumiere, S.; Mailhot, E.; Rousseau, A. N.

    2016-12-01

    In North America, cranberry production is on the rise. Since 2005, the land area dedicated to cranberry has doubled, principally in Canada. Recent studies have shown that sub-irrigation could lead to improvements in yield, water use efficiency and pumping energy requirements compared to conventional sprinkler irrigation. However, the experimental determination of the optimal water table level for each production site may be expensive and time-consuming. The primary objective of this study is to optimize the water table level as a function of typical soil properties and climatic conditions observed in major production areas using a numerical modeling approach. The second objective is to evaluate the impacts of projected climatic conditions on the water management of cranberry fields. To that end, cranberry-specific management operations such as harvest flooding, rapid drainage following heavy rainfall, or hydric stress management during dry weather conditions were simulated with the HYDRUS 2D software. Results have shown that maintaining the water table at a depth of approximately 60 cm provides optimal results for most of the studied soils. However, under certain extreme climatic conditions, the drainage system design may not allow maintaining optimal hydric conditions for cranberry growth. The long-term benefit of this study is its potential to advance the design of drainage/sub-irrigation systems.

  12. The Next Generation in Subsidence and Aquifer-System Compaction Modeling within the MODFLOW Software Family: A New Package for MODFLOW-2005 and MODFLOW-OWHM

    NASA Astrophysics Data System (ADS)

    Boyce, S. E.; Leake, S. A.; Hanson, R. T.; Galloway, D. L.

    2015-12-01

    The Subsidence and Aquifer-System Compaction Packages, SUB and SUB-WT, are the two currently supported subsidence packages within the MODFLOW family of software. The SUB package allows the calculation of instantaneous and delayed releases of water from distributed interbeds (relatively more compressible fine-grained sediments) within a saturated aquifer system or discrete confining beds. The SUB-WT package does not include delayed releases, but does perform a more rigorous calculation of the vertical stresses that can vary the effective stress that causes compaction. This calculation of instantaneous compaction can include the effect of water-table fluctuations for unconfined aquifers on effective stress, and can optionally adjust the elastic and inelastic storage properties based on the changes in effective stress. The next generation of subsidence modeling in MODFLOW is under development, and will merge and enhance the capabilities of the SUB and SUB-WT Packages for MODFLOW-2005 and MODFLOW-OWHM. This new version will also provide additional features such as stress-dependent vertical hydraulic conductivity of interbeds, time-varying geostatic loads, and additional attributes related to aquifer-system compaction and subsidence that will broaden the class of problems that can be simulated. The new version will include a redesigned source code, a new user-friendly input file structure, more output options, and new subsidence solution options. This presentation will discuss progress in developing the new package, the new features being implemented, and their potential applications.
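
    For readers unfamiliar with the stress bookkeeping these packages perform, the underlying relation is Terzaghi's effective-stress principle; the sketch below gives the generic textbook form (symbols are ours, not the packages' input variables), with interbed compaction switching between elastic and inelastic skeletal storage depending on whether the effective stress exceeds its past maximum:

```latex
\sigma' = \sigma_g - u
\qquad
\Delta b =
\begin{cases}
  S_{ske}\, b_0\, \Delta h, & \sigma' < \sigma'_{\max} \text{ (elastic, recoverable)}\\[4pt]
  S_{skv}\, b_0\, \Delta h, & \sigma' \ge \sigma'_{\max} \text{ (inelastic, permanent)}
\end{cases}
```

    Here σ′ is the effective stress, σ_g the geostatic stress, u the pore pressure, b_0 the interbed thickness, Δh the head change, and S_ske, S_skv the elastic and inelastic skeletal specific storage.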

  13. Using VirtualGL/TurboVNC Software on the Peregrine System

    Science.gov Websites

    High-Performance Computing, NREL. Describes using VirtualGL/TurboVNC software on the Peregrine system, allowing users to access and share large-memory visualization nodes with high-end graphics processing units. This may be better than just using X11 forwarding when connecting from a remote site with low bandwidth.

  14. Clustering of arc volcanoes caused by temperature perturbations in the back-arc mantle

    PubMed Central

    Lee, Changyeol; Wada, Ikuko

    2017-01-01

    Clustering of arc volcanoes in subduction zones indicates along-arc variation in the physical condition of the underlying mantle where the majority of arc magmas are generated. The sub-arc mantle is brought in from the back-arc largely by slab-driven mantle wedge flow. Dynamic processes in the back-arc, such as small-scale mantle convection, are likely to cause lateral variations in the back-arc mantle temperature. Here we use a simple three-dimensional numerical model to quantify the effects of back-arc temperature perturbations on the mantle wedge flow pattern and sub-arc mantle temperature. Our model calculations show that relatively small temperature perturbations in the back-arc result in vigorous inflow of hotter mantle and subdued inflow of colder mantle beneath the arc due to the temperature dependence of the mantle viscosity. This causes a three-dimensional mantle flow pattern that amplifies the along-arc variations in the sub-arc mantle temperature, providing a simple mechanism for volcano clustering. PMID:28660880

  15. Clustering of arc volcanoes caused by temperature perturbations in the back-arc mantle.

    PubMed

    Lee, Changyeol; Wada, Ikuko

    2017-06-29

    Clustering of arc volcanoes in subduction zones indicates along-arc variation in the physical condition of the underlying mantle where the majority of arc magmas are generated. The sub-arc mantle is brought in from the back-arc largely by slab-driven mantle wedge flow. Dynamic processes in the back-arc, such as small-scale mantle convection, are likely to cause lateral variations in the back-arc mantle temperature. Here we use a simple three-dimensional numerical model to quantify the effects of back-arc temperature perturbations on the mantle wedge flow pattern and sub-arc mantle temperature. Our model calculations show that relatively small temperature perturbations in the back-arc result in vigorous inflow of hotter mantle and subdued inflow of colder mantle beneath the arc due to the temperature dependence of the mantle viscosity. This causes a three-dimensional mantle flow pattern that amplifies the along-arc variations in the sub-arc mantle temperature, providing a simple mechanism for volcano clustering.

  16. [Sb{sub 4}Au{sub 4}Sb{sub 4}]{sup 2−}: A designer all-metal aromatic sandwich

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Wen-Juan; You, Xue-Rui; Guo, Jin-Chang

    We report on the computational design of an all-metal aromatic sandwich, [Sb{sub 4}Au{sub 4}Sb{sub 4}]{sup 2−}. The triple-layered, square-prismatic sandwich complex is the global minimum of the system from Coalescence Kick and Minima Hopping structural searches. Following a standard, qualitative chemical bonding analysis via canonical molecular orbitals, the sandwich complex can be formally described as [Sb{sub 4}]{sup +}[Au{sub 4}]{sup 4−}[Sb{sub 4}]{sup +}, showing ionic bonding character with electron transfer between the Sb{sub 4}/Au{sub 4}/Sb{sub 4} layers. For an in-depth understanding of the system, one needs to go beyond the above picture. Significant Sb → Au donation and Sb ← Au back-donation occur, redistributing electrons from the Sb{sub 4}/Au{sub 4}/Sb{sub 4} layers to the interlayer Sb–Au–Sb edges, which effectively leads to four Sb–Au–Sb three-center two-electron bonds. The complex is a system with 30 valence electrons, excluding the Sb 5s and Au 5d lone pairs. The two [Sb{sub 4}]{sup +} ligands constitute an unusual three-fold (π and σ) aromatic system with all 22 electrons being delocalized. An energy gap of ∼1.6 eV is predicted for this all-metal sandwich. The complex is a rare example of the rational design of cluster compounds and invites forthcoming synthetic efforts.

  17. Catalytic Ignition and Upstream Reaction Propagation in Monolith Reactors

    NASA Technical Reports Server (NTRS)

    Struk, Peter M.; Dietrich, Daniel L.; Miller, Fletcher J.; T'ien, James S.

    2007-01-01

    Using numerical simulations, this work demonstrates a concept called back-end ignition for lighting-off and pre-heating a catalytic monolith in a power generation system. In this concept, a downstream heat source (e.g. a flame) or resistive heating in the downstream portion of the monolith initiates a localized catalytic reaction which subsequently propagates upstream and heats the entire monolith. The simulations used a transient numerical model of a single catalytic channel which characterizes the behavior of the entire monolith. The model treats both the gas and solid phases and includes detailed homogeneous and heterogeneous reactions. An important parameter in the model for back-end ignition is upstream heat conduction along the solid. The simulations used both dry and wet CO chemistry as a model fuel for the proof-of-concept calculations; the presence of water vapor can trigger homogeneous reactions, provided that gas-phase temperatures are adequately high and there is sufficient fuel remaining after surface reactions. With a sufficiently high inlet equivalence ratio, back-end ignition occurs using the thermophysical properties of both a ceramic and a metal monolith (coated with platinum in both cases), with the heat-up times significantly faster for the metal monolith. For lower equivalence ratios, back-end ignition occurs without upstream propagation. Once light-off and propagation occur, the inlet equivalence ratio can be reduced significantly while still maintaining an ignited monolith, as demonstrated by calculations using complete monolith heating.

  18. 40 CFR 63.497 - Back-end process provisions-monitoring provisions for control and recovery devices.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 9 2010-07-01 2010-07-01 false Back-end process provisions-monitoring... Polymers and Resins § 63.497 Back-end process provisions—monitoring provisions for control and recovery devices. (a) An owner or operator complying with the residual organic HAP limitations in § 63.494(a) using...

  19. What is Clinical Safety in Electronic Health Care Record Systems?

    NASA Astrophysics Data System (ADS)

    Davies, George

    There is mounting public awareness of an increasing number of adverse clinical incidents within the National Health Service (NHS), but at the same time, large health care projects like the National Programme for IT (NPFIT) are claiming that safer care is one of the benefits of the project and that health software systems in particular have the potential to reduce the likelihood of accidental or unintentional harm to patients. This paper outlines the approach to clinical safety management taken by CSC, a major supplier to NPFIT; discusses acceptable levels of risk and clinical safety as an end-to-end concept; and touches on the future for clinical safety in health systems software.

  20. A Concept for the One Degree Imager (ODI) Data Reduction Pipeline and Archiving System

    NASA Astrophysics Data System (ADS)

    Knezek, Patricia; Stobie, B.; Michael, S.; Valdes, F.; Marru, S.; Henschel, R.; Pierce, M.

    2010-05-01

    The One Degree Imager (ODI), currently being built by the WIYN Observatory, will provide tremendous possibilities for conducting diverse scientific programs. ODI will be a complex instrument, using non-conventional Orthogonal Transfer Array (OTA) detectors. Due to its large field of view, small pixel size, use of OTA technology, and expected frequent use, ODI will produce vast amounts of astronomical data. If ODI is to achieve its full potential, a data reduction pipeline must be developed. Long-term archiving must also be incorporated into the pipeline system to ensure the continued value of ODI data. This paper presents a concept for an ODI data reduction pipeline and archiving system. To limit costs and development time, our plan leverages existing software and hardware, including existing pipeline software, Science Gateways, Computational Grid & Cloud Technology, Indiana University's Data Capacitor and Massive Data Storage System, and TeraGrid compute resources. Existing pipeline software will be augmented to add functionality required to meet challenges specific to ODI, enhance end-user control, and enable the execution of the pipeline on grid resources including national grid resources such as the TeraGrid and Open Science Grid. The planned system offers consistent standard reductions and end-user flexibility when working with images beyond the initial instrument signature removal. It also gives end-users access to computational and storage resources far beyond what are typically available at most institutions. Overall, the proposed system provides a wide array of software tools and the necessary hardware resources to use them effectively.

  1. The ALMA common software: dispatch from the trenches

    NASA Astrophysics Data System (ADS)

    Schwarz, J.; Sommer, H.; Jeram, B.; Sekoranja, M.; Chiozzi, G.; Grimstrup, A.; Caproni, A.; Paredes, C.; Allaert, E.; Harrington, S.; Turolla, S.; Cirami, R.

    2008-07-01

    The ALMA Common Software (ACS) provides both an application framework and CORBA-based middleware for the distributed software system of the Atacama Large Millimeter Array. Building upon open-source tools such as the JacORB, TAO and OmniORB ORBs, ACS supports the development of component-based software in any of three languages: Java, C++ and Python. Now in its seventh major release, ACS has matured, both in its feature set as well as in its reliability and performance. However, it is only recently that the ALMA observatory's hardware and application software have reached a level at which they can exploit and challenge the infrastructure that ACS provides. In particular, the availability of an Antenna Test Facility (ATF) at the site of the Very Large Array in New Mexico has enabled us to exercise and test the still-evolving end-to-end ALMA software under realistic conditions. The major focus of ACS, consequently, has shifted from the development of new features to consideration of how best to use those that already exist. Configuration details which could be neglected for the purpose of running unit tests or skeletal end-to-end simulations have turned out to be sensitive levers for achieving satisfactory performance in a real-world environment. Surprising behavior in some open-source tools has required us to choose between patching code that we did not write or addressing its deficiencies by implementing workarounds in our own software. We will discuss these and other aspects of our recent experience at the ATF and in simulation.

  2. VO-KOREL: A Fourier Disentangling Service of the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Škoda, Petr; Hadrava, Petr; Fuchs, Jan

    2012-04-01

    VO-KOREL is a web service that exploits Virtual Observatory technology to provide astronomers with an intuitive graphical front-end and a distributed computing back-end running the most recent version of the Fourier disentangling code KOREL. The system integrates the ideas of the e-shop basket, preserving the privacy of every user through transfer encryption and access authentication, with the features of a laboratory notebook, allowing easy housekeeping of both input parameters and final results, and it also explores the newly emerging technology of cloud computing. The web-based front-end allows the user to submit data and parameter files, edit parameters, manage a job list, resubmit or cancel running jobs, and, above all, watch the text and graphical results of a disentangling process; the main part of the back-end is a simple job-queue submission system executing multiple instances of the FORTRAN code KOREL in parallel. This may easily be extended for grid-based deployment on massively parallel computing clusters. A short introduction to the underlying technologies is given, briefly mentioning advantages as well as bottlenecks of the design used.
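
    The back-end described, a queue that fans batch runs out to worker processes, can be sketched in a few lines. The sketch below uses Python's standard library under stated assumptions: the "./korel" executable name, directory layout, and two-worker limit are illustrative placeholders, not the actual VO-KOREL configuration:

```python
# Minimal job-queue sketch: run several batch disentangling jobs in
# parallel, each in its own working directory with its own log file.
# The "./korel" executable name and directory layout are hypothetical.
import subprocess
from concurrent.futures import ProcessPoolExecutor

def run_job(job_dir):
    """Run one job in its own directory, capturing stdout/stderr to a log."""
    with open(f"{job_dir}/job.log", "w") as log:
        return subprocess.call(["./korel"], cwd=job_dir,
                               stdout=log, stderr=subprocess.STDOUT)

if __name__ == "__main__":
    job_dirs = ["jobs/0001", "jobs/0002", "jobs/0003"]   # queued submissions
    with ProcessPoolExecutor(max_workers=2) as pool:     # parallel instances
        for job, rc in zip(job_dirs, pool.map(run_job, job_dirs)):
            print(job, "exited with code", rc)
```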

  3. A practical approach for inexpensive searches of radiology report databases.

    PubMed

    Desjardins, Benoit; Hamilton, R Curtis

    2007-06-01

    We present a method to perform full-text searches of radiology reports for the large number of departments that do not have this ability as part of their radiology or hospital information system. A tool written in Microsoft Access (front-end) has been designed to search a server (back-end) containing the indexed weekly backup copy of the full relational database extracted from a radiology information system (RIS). This front-end/back-end approach has been implemented in a large academic radiology department, and is used for teaching, research and administrative purposes. The weekly second backup of the 80 GB, 4 million record RIS database takes 2 hours. Further indexing of the exported radiology reports takes 6 hours. Individual searches typically take less than 1 minute on the indexed database and 30-60 minutes on the nonindexed database. Guidelines to properly address privacy and institutional review board issues are closely followed by all users. This method has the potential to improve teaching, research, and administrative programs within radiology departments that cannot afford more expensive technology.
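
    The core idea, a periodically refreshed copy of the reports indexed for full-text search, can be illustrated with any database that supports text indexing. The sketch below uses SQLite's FTS5 extension purely as a stand-in for the Access/server setup described in the paper; table and column names are hypothetical, and FTS5 must be compiled into your SQLite build (it is in most Python distributions):

```python
# Illustrative sketch: index report text once, then search it in sub-second
# time, mirroring the indexed back-end idea (SQLite FTS5 as a stand-in).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE reports USING fts5(accession, body)")
con.executemany("INSERT INTO reports VALUES (?, ?)", [
    ("RAD-0001", "No evidence of pulmonary embolism."),
    ("RAD-0002", "Findings consistent with acute pulmonary embolism."),
])
# The MATCH operator hits the inverted index instead of scanning every row.
for (accession,) in con.execute(
        "SELECT accession FROM reports WHERE reports MATCH 'pulmonary embolism'"):
    print(accession)
```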

  4. Shallow Subsurface Structures of Volcanic Fissures

    NASA Astrophysics Data System (ADS)

    Parcheta, C. E.; Nash, J.; Mitchell, K. L.; Parness, A.

    2015-12-01

    Volcanic fissure vents are a difficult geologic feature to quantify. They are often too thin to document in detail with seismology or remote geophysical methods. Additionally, lava flows, lava drain-back, or collapsed rampart blocks typically conceal a fissure's surface expression. For exposed fissures, quantifying the surface (let alone sub-surface) geometric expression can become an overwhelming and time-consuming task given the non-uniform distribution of wall irregularities, drain-back textures, and the larger scale sinuosity of the whole fissure system. We developed (and previously presented) VolcanoBot to acquire robust characteristic data of fissure geometries by going inside accessible fissures after an eruption ends and the fissure cools to less than 50 °C. VolcanoBot documents the fissure conduit geometry with a near-IR structured light sensor and reproduces the 3-D structures to cm-scale accuracy. Here we present a comparison of shallow subsurface structures (<30 m depth) within the Mauna Ulu fissure system and their counterpart features at the vent-to-ground-surface interface. While we have not mapped enough length of the fissure to document sinuosity at depth, we see a self-similar pattern of irregularities on the fissure walls throughout the entire shallow subsurface, implying a fracture mechanical origin similar to faults. These irregularities are, on average, 1 m across and protrude 30 cm into the drained fissure. This is significantly larger than the 10% wall roughness addressed in the engineering literature on fluid dynamics, and implies that magma fluid dynamics during fissure eruptions are probably neither as passive nor as simple as previously thought. In some locations it is possible to match piercing points across the fissure walls, where the dike broke the wall rock in order to propagate upwards; in other locations there are erosional cavities, again implying complex fluid dynamics in the shallow sub-surface during fissure eruptions.

  5. Authoritative Authoring: Software That Makes Multimedia Happen.

    ERIC Educational Resources Information Center

    Florio, Chris; Murie, Michael

    1996-01-01

    Compares seven mid- to high-end multimedia authoring software systems that combine graphics, sound, animation, video, and text for Windows and Macintosh platforms. A run-time project was created with each program using video, animation, graphics, sound, formatted text, hypertext, and buttons. (LRW)

  6. Desktop Publishing Choices: Making an Appropriate Decision.

    ERIC Educational Resources Information Center

    Crawford, Walt

    1991-01-01

    Discusses various choices available for desktop publishing systems. Four categories of software are described, including advanced word processing, graphics software, low-end desktop publishing, and mainstream desktop publishing; appropriate hardware is considered; and selection guidelines are offered, including current and future publishing needs,…

  7. OSIRIX: open source multimodality image navigation software

    NASA Astrophysics Data System (ADS)

    Rosset, Antoine; Pysher, Lance; Spadola, Luca; Ratib, Osman

    2005-04-01

    The goal of our project is to develop a completely new software platform that will allow users to efficiently and conveniently navigate through large sets of multidimensional data without the need for high-end expensive hardware or software. We also elected to develop our system on new open-source software libraries, allowing other institutions and developers to contribute to this project. OsiriX is a free and open-source imaging software designed to manipulate and visualize large sets of medical images: http://homepage.mac.com/rossetantoine/osirix/

  8. Feeding People's Curiosity: Leveraging the Cloud for Automatic Dissemination of Mars Images

    NASA Technical Reports Server (NTRS)

    Knight, David; Powell, Mark

    2013-01-01

    Smartphones and tablets have made wireless computing ubiquitous, and users expect instant, on-demand access to information. The Mars Science Laboratory (MSL) operations software suite, MSL InterfaCE (MSLICE), employs a different back-end image processing architecture compared to that of the Mars Exploration Rovers (MER) in order to better satisfy modern consumer-driven usage patterns and to offer greater server-side flexibility. Cloud services are a centerpiece of the server-side architecture that allows new image data to be delivered automatically to both scientists using MSLICE and the general public through the MSL website (http://mars.jpl.nasa.gov/msl/).

  9. Sub-component modeling for face image reconstruction in video communications

    NASA Astrophysics Data System (ADS)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired systems, such as cable or ethernet, and wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel, and requires concealment on the receiving end. We demonstrate a generative model based transmission scheme to compress human face images in video, which has the advantages of a potentially higher compression ratio, while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM modeling the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using a weighted and non-weighted version of the sub-component AAM.
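
    As a reference point for the reconstruction step, the sketch below gives the generic active appearance model synthesis equations in textbook form (notation is ours, not the paper's); a sub-component model simply fits one such mean-plus-modes pair per facial region rather than one for the whole face:

```latex
s = \bar{s} + \Phi_{s}\, b_{s},
\qquad
A(\mathbf{x}) = \bar{A}(\mathbf{x}) + \Phi_{a}\, b_{a}
```

    Here s is the shape, A the appearance image, \bar{s} and \bar{A} the learned means, \Phi_{s} and \Phi_{a} the mode matrices, and only the low-dimensional coefficient vectors b_{s}, b_{a} need to survive the channel for each component.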

  10. Advanced Intelligent System Application to Load Forecasting and Control for Hybrid Electric Bus

    NASA Technical Reports Server (NTRS)

    Momoh, James; Chattopadhyay, Deb; Elfayoumy, Mahmoud

    1996-01-01

    The primary motivation for this research emanates from providing a decision support system to the electric bus operators in municipal and urban localities which will guide the operators to maintain an optimal compromise among the noise level, pollution level, fuel usage, etc. This study is backed up by our previous studies of battery characteristics, permanent magnet DC motors, and electric traction motor sizing completed in the first year. The operator of the hybrid electric bus must determine an optimal power management schedule to meet a given load demand for different weather and road conditions. The decision support system for the bus operator comprises three sub-tasks: forecasting the electrical load for the route to be traversed, divided into specified time periods (a few minutes); deriving an optimal 'plan' or 'preschedule' based on the load forecast for the entire time horizon (that is, for all time periods) ahead of time; and finally employing corrective control action to monitor and modify the optimal plan in real time. A fully connected artificial neural network (ANN) model is developed for forecasting the kW requirement for the hybrid electric bus based on inputs like climatic conditions, passenger load, road inclination, etc. The ANN model is trained using the back-propagation algorithm employing improved optimization techniques like the projected Lagrangian technique. The pre-scheduler is based on a Goal Programming (GP) optimization model with noise, pollution and fuel usage as the three objectives. GP has the capability of analyzing the trade-off among the conflicting objectives and arriving at the optimal activity levels, e.g., throttle settings. The corrective control action, the third sub-task, is formulated as an optimal control model with inputs from the real-time database as well as the GP model to minimize the error (or deviation) from the optimal plan. These three activities are linked, with the ANN forecaster providing the output to the GP model, which in turn produces the pre-schedule for the optimal control model. Some preliminary results based on a hypothetical test case will be presented for the load forecasting module. The computer codes for the three modules will be made available for adoption by bus operating agencies. Sample results will be provided using these models. The software will be a useful tool for supporting the control systems for the Electric Bus project of NASA.
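
    As an illustration of the forecasting sub-task, the sketch below trains a small feed-forward network on synthetic data; the feature set, the data, and the use of scikit-learn's default solver (rather than the projected Lagrangian technique the abstract mentions) are all assumptions for the example:

```python
# Hedged sketch: a fully connected network mapping (temperature, passenger
# load, road grade) to a kW demand forecast. All data here are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform([0.0, 0.0, -0.06], [35.0, 60.0, 0.06], size=(500, 3))
kw = 40 + 0.4 * X[:, 0] + 0.5 * X[:, 1] + 600 * X[:, 2] \
     + rng.normal(0.0, 2.0, 500)                     # synthetic demand

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, kw)
print(model.predict([[25.0, 40.0, 0.02]]))  # forecast one time period ahead
```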

  11. 40 CFR 63.497 - Back-end process provisions-monitoring provisions for control and recovery devices used to comply...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 9 2011-07-01 2011-07-01 false Back-end process provisions-monitoring... Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.497 Back-end process... limitations. (a) An owner or operator complying with the residual organic HAP limitations in § 63.494(a)(1...

  12. SEDS1 mission software verification using a signal simulator

    NASA Technical Reports Server (NTRS)

    Pierson, William E.

    1992-01-01

    The first flight of the Small Expendable Deployer System (SEDS1) is scheduled to fly as the secondary payload of a Delta II in March 1993. The objective of the SEDS1 mission is to collect data to validate the concept of tethered satellite systems and to verify computer simulations used to predict their behavior. SEDS1 will deploy a 50 lb. instrumented satellite as an end mass using a 20 km tether. Langley Research Center is providing the end mass instrumentation, while the Marshall Space Flight Center is designing and building the deployer. The objective of the experiment is to test the SEDS design concept by demonstrating that the system will satisfactorily deploy the full 20 km tether without stopping prematurely, come to a smooth stop on the application of a brake, and cut the tether at the proper time after it swings to the local vertical. Also, SEDS1 will collect data which will be used to test the accuracy of tether dynamics models used to simulate this type of deployment. The experiment will last about 1.5 hours and complete approximately 1.5 orbits. Radar tracking of the Delta II and end mass is planned. In addition, the SEDS1 on-board computer will continuously record, store, and transmit mission data over the Delta II S-band telemetry system. The Data System will count tether windings as the tether unwinds, log the times of each turn and other mission events, monitor tether tension, and record the temperature of system components. A summary of the measurements taken during SEDS1 is shown. The Data System will also control the tether brake and cutter mechanisms. Preliminary versions of two major sections of the flight software, the data telemetry modules and the data collection modules, were developed and tested under the 1990 NASA/ASEE Summer Faculty Fellowship Program. To facilitate the debugging of these software modules, a prototype SEDS Data System was programmed to simulate turn count signals. During the 1991 summer program, the concept of simulating signals produced by the SEDS electronics systems and circuits was expanded and more precisely defined. During the 1992 summer program, the SEDS signal simulator was programmed to test the requirements of the SEDS Mission Software, and this simulator will be used in the formal verification of the SEDS Mission Software. A formal test procedures specification was written which incorporates the use of the signal simulator to test the SEDS Mission Software and which incorporates procedures for testing the other major component of the SEDS software, the Monitor Software.

  13. Augmented Feedback System to Support Physical Therapy of Non-specific Low Back Pain

    NASA Astrophysics Data System (ADS)

    Brodbeck, Dominique; Degen, Markus; Stanimirov, Michael; Kool, Jan; Scheermesser, Mandy; Oesch, Peter; Neuhaus, Cornelia

    Low back pain is an important problem in industrialized countries. Two key factors limit the effectiveness of physiotherapy: low compliance of patients with repetitive movement exercises, and inadequate awareness of patients of their own posture. The Backtrainer system addresses these problems by real-time monitoring of the spine position, by providing a framework for the most common physiotherapy exercises for the low back, and by providing feedback to patients in a motivating way. A minimal sensor configuration was identified as two inertial sensors that measure the orientation of the lower back at two points with three degrees of freedom. The software was designed as a flexible platform to experiment with different hardware, and with various feedback modalities. Basic exercises for two types of movements are provided: mobilizing and stabilizing. We developed visual feedback - abstract as well as in the form of a virtual reality game - and complemented the on-screen graphics with an ambient feedback device. The system was evaluated during five weeks in a rehabilitation clinic with 26 patients and 15 physiotherapists. Subjective satisfaction of subjects was good, and we interpret the results as an encouraging indication for the adoption of such a therapy support system by both patients and therapists.
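
    The two-sensor minimum configuration lends itself to a compact computation: the lumbar bending angle is just the relative rotation between the two orientation estimates. The sketch below assumes the sensors output unit quaternions (w, x, y, z); the values and the single-angle summary are illustrative, not the Backtrainer's actual processing:

```python
# Hedged sketch: summarize lower-back bending as the rotation of the upper
# sensor relative to the lower one. Quaternion values are illustrative.
import numpy as np

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def quat_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

q_upper = np.array([0.996, 0.087, 0.0, 0.0])   # sensor near T12, ~10 deg flexed
q_lower = np.array([1.0, 0.0, 0.0, 0.0])       # sensor near S1, neutral
q_rel = quat_mul(quat_conj(q_lower), q_upper)  # upper relative to lower
angle = 2.0 * np.degrees(np.arccos(np.clip(abs(q_rel[0]), 0.0, 1.0)))
print(f"lumbar bending: {angle:.1f} deg")
```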

  14. A new system for measuring three-dimensional back shape in scoliosis

    PubMed Central

    Pynsent, Paul; Fairbank, Jeremy; Disney, Simon

    2008-01-01

    The aim of this work was to develop a low-cost automated system to measure the three-dimensional shape of the back in patients with scoliosis. The resulting system uses structured light to illuminate a patient’s back from an angle while a digital photograph is taken. The height of the surface is calculated using Fourier transform profilometry with an accuracy of ±1 mm. The surface is related to body axes using bony landmarks on the back that have been palpated and marked with small coloured stickers prior to photographing. Clinical parameters are calculated automatically and presented to the user on a monitor and as a printed report. All data are stored in a database. The database can be interrogated and successive measurements plotted for monitoring the deformity changes. The system developed uses inexpensive hardware and open source software. Accurate surface topography can help the clinician to measure spinal deformity at baseline and monitor changes over time. It can help the patients and their families to assess deformity. Above all it reduces the dependence on serial radiography and reduces radiation exposure when monitoring spinal deformity. PMID:18247064
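
    Fourier transform profilometry itself is compact enough to sketch: the fringe image is Fourier transformed, the lobe at the carrier frequency is isolated and shifted back to baseband, and the unwrapped phase of the result is proportional to height. The demonstration below runs on one synthetic image row; the carrier frequency, filter width, and phase-to-height factor are illustrative, not the calibrated values of the clinical system:

```python
# Hedged sketch of Fourier transform profilometry on one synthetic row.
import numpy as np

N, f0 = 1024, 32                         # samples per row, carrier fringes/row
x = np.arange(N)
height = 5.0 * np.exp(-((x - 512) / 150.0) ** 2)     # synthetic back profile
row = 128 + 100 * np.cos(2 * np.pi * f0 * x / N + 0.5 * height)

F = np.fft.fft(row)
mask = np.zeros(N)
mask[f0 - 10:f0 + 11] = 1.0              # keep only the +f0 spectral lobe
analytic = np.fft.ifft(F * mask)         # complex single-sideband fringe signal
phase = np.unwrap(np.angle(analytic)) - 2 * np.pi * f0 * x / N
recovered = phase / 0.5                  # invert the assumed phase-height factor
print("max height error:", np.abs(recovered - height).max())
```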

  15. Demonstration of large field effect in topological insulator films via a high-κ back gate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, C. Y.; Lin, H. Y.; Yang, S. R.

    2016-05-16

    The spintronics applications long anticipated for topological insulators (TIs) have been hampered by the presence of high-density intrinsic defects in the bulk states. In this work we demonstrate the back-gating effect on TIs by integrating Bi{sub 2}Se{sub 3} films 6-10 quintuple layers (QL) thick with amorphous high-κ oxides of Al{sub 2}O{sub 3} and Y{sub 2}O{sub 3}. A large gating effect of tuning the Fermi level E{sub F} to very close to the band gap was observed, with an applied bias an order of magnitude smaller than those of the SiO{sub 2} back gate, and the modulation of film resistance can reach as high as 1200%. The dependence of the gating effect on the TI film thickness was investigated, and ΔN{sub 2D}/ΔV{sub g} varies with TI film thickness as ∼t{sup −0.75}. To enhance the gating effect, a 4-nm-thick Y{sub 2}O{sub 3} layer was inserted into the Al{sub 2}O{sub 3} gate stack to increase the total κ value to 13.2. A 1.4 times stronger gating effect is observed, and the increment of induced carrier numbers is in good agreement with the additional charges accumulated in the higher-κ oxides. Moreover, we have reduced the intrinsic carrier concentration in the TI film by doping Te into Bi{sub 2}Se{sub 3} to form Bi{sub 2}Te{sub x}Se{sub 1−x}. The observation of a mixed ambipolar state, in which both electrons and holes are present, indicates that we have tuned the E{sub F} very close to the Dirac point. These results demonstrate our capability of gating TIs with a high-κ back gate, paving the way to spin devices of tunable E{sub F} for dissipationless spintronics based on well-established semiconductor technology.

  16. SQL is Dead; Long-live SQL: Relational Database Technology in Science Contexts

    NASA Astrophysics Data System (ADS)

    Howe, B.; Halperin, D.

    2014-12-01

    Relational databases are often perceived as a poor fit in science contexts: rigid schemas, poor support for complex analytics, unpredictable performance, and significant maintenance and tuning requirements often make databases unattractive in settings characterized by heterogeneous data sources, complex analysis tasks, rapidly changing requirements, and limited IT budgets. In this talk, I'll argue that although the value proposition of typical relational database systems is weak in science, the core ideas that power relational databases have become incredibly prolific in open source science software, and are emerging as a universal abstraction for both big data and small data. In addition, I'll talk about two open source systems we are building to "jailbreak" the core technology of relational databases and adapt it for use in science. The first is SQLShare, a Database-as-a-Service system supporting collaborative data analysis and exchange by reducing database use to an Upload-Query-Share workflow with no installation, schema design, or configuration required. The second is Myria, a service that supports much larger-scale data and complex analytics, and supports multiple back-end systems. Finally, I'll describe some of the ways our collaborators in oceanography, astronomy, biology, fisheries science, and more are using these systems to replace script-based workflows for reasons of performance, flexibility, and convenience.

  17. Bellerophon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lingerfelt, Eric J; Messer, II, Otis E

    2017-01-02

    The Bellerophon software system supports CHIMERA, a production-level HPC application that simulates the evolution of core-collapse supernovae. Bellerophon enables CHIMERA's geographically dispersed team of collaborators to perform job monitoring and real-time data analysis from multiple supercomputing resources, including platforms at OLCF, NERSC, and NICS. Its multi-tier architecture provides an encapsulated, end-to-end software solution that enables the CHIMERA team to quickly and easily access highly customizable animated and static views of results from anywhere in the world via a cross-platform desktop application.

  18. Ultrasound phase rotation beamforming on multi-core DSP.

    PubMed

    Ma, Jieming; Karadayi, Kerem; Ali, Murtaza; Kim, Yongmin

    2014-01-01

    Phase rotation beamforming (PRBF) is a commonly-used digital receive beamforming technique. However, due to its high computational requirement, it has traditionally been supported by hardwired architectures, e.g., application-specific integrated circuits (ASICs) or more recently field-programmable gate arrays (FPGAs). In this study, we investigated the feasibility of supporting software-based PRBF on a multi-core DSP. To alleviate the high computing requirement, the analog front-end (AFE) chips integrating quadrature demodulation in addition to analog-to-digital conversion were defined and used. With these new AFE chips, only delay alignment and phase rotation need to be performed by DSP, substantially reducing the computational load. We implemented the delay alignment and phase rotation modules on a Texas Instruments C6678 DSP with 8 cores. We found it takes 200 μs to beamform 2048 samples from 64 channels using 2 cores. With 4 cores, 20 million samples can be beamformed in one second. Therefore, ADC frequencies up to 40 MHz with 2:1 decimation in AFE chips or up to 20 MHz with no decimation can be supported as long as the ADC-to-DSP I/O requirement can be met. The remaining 4 cores can work on back-end processing tasks and applications, e.g., color Doppler or ultrasound elastography. One DSP being able to handle both beamforming and back-end processing could lead to low-power and low-cost ultrasound machines, benefiting ultrasound imaging in general, particularly portable ultrasound machines. Copyright © 2013 Elsevier B.V. All rights reserved.
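
    The arithmetic at the heart of PRBF is brief: a coarse integer-sample delay aligns each channel, and the residual sub-sample delay is applied as a phase rotation of the quadrature-demodulated samples at the demodulation frequency. The numpy sketch below shows that structure only; the array geometry, frequencies, and random stand-in data are assumptions, and it makes no attempt to mirror the C6678 partitioning:

```python
# Hedged numpy sketch of phase rotation beamforming on IQ channel data:
# coarse integer-sample delay alignment plus fine phase rotation, then sum.
import numpy as np

fs, f_demod, c = 40e6, 5e6, 1540.0           # sample rate, center freq, m/s
elements = np.arange(64) * 0.3e-3            # 64-element array, 0.3 mm pitch
focus_x, focus_z = 0.0, 0.03                 # focal point 30 mm deep

rng = np.random.default_rng(0)
iq = rng.standard_normal((64, 2048)) + 1j * rng.standard_normal((64, 2048))

delays = (np.hypot(elements - focus_x, focus_z) - focus_z) / c   # seconds
coarse = np.round(delays * fs).astype(int)   # integer-sample part
residual = delays - coarse / fs              # sub-sample remainder

beam = np.zeros(2048, dtype=complex)
for ch in range(64):
    aligned = np.roll(iq[ch], -coarse[ch])                          # delay alignment
    beam += aligned * np.exp(-2j * np.pi * f_demod * residual[ch])  # phase rotation
print(beam[:4])
```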

  19. A miniature bidirectional telemetry system for in-vivo gastric slow wave recordings

    PubMed Central

    Farajidavar, Aydin; O’Grady, Gregory; Rao, Smitha M.N.; Cheng, Leo K; Abell, Thomas; Chiao, J.-C.

    2012-01-01

    Stomach contractions are initiated and coordinated by an underlying electrical activity (slow waves), and electrical dysrhythmias accompany motility diseases. Electrical recordings taken directly from the stomach provide the most valuable data, but face technical constraints. Serosal or mucosal electrodes have cables that traverse the abdominal wall, or a natural orifice, causing discomfort and possible infection, and restricting mobility. These problems motivated the development of a wireless system. The bidirectional telemetric system comprises a front-end transponder, a back-end receiver, and a graphical user interface. The front-end module conditions the analog signals, then digitizes and loads the data into a radio for transmission. Data receipt at the back-end is acknowledged via a transceiver function. The system was validated first in a bench-top study, then in-vivo using serosal electrodes connected simultaneously to a commercial wired system. The front-end module was 35×35×27 mm3 and weighed 20 g. Bench-top tests demonstrated reliable communication within a distance range of 30 m, power consumption of 13.5 mW, and 124-hour operation when utilizing a 560-mAh, 3-V battery. In-vivo, slow wave frequencies were recorded identically by the wireless and wired reference systems (2.4 cycles/min), automated activation time detection was modestly better for the wireless system (5% vs 14% false positive rate), and signal amplitudes were modestly higher via the wireless system (462 vs 386 μV; p<0.001). This telemetric system for slow wave acquisition is reliable, power efficient, readily portable and potentially implantable. The device will enable chronic monitoring and evaluation of slow wave patterns in animals and patients. PMID:22635054

  20. Product Engineering Class in the Software Safety Risk Taxonomy for Building Safety-Critical Systems

    NASA Technical Reports Server (NTRS)

    Hill, Janice; Victor, Daniel

    2008-01-01

    When software safety requirements are imposed on legacy safety-critical systems, retrospective safety cases need to be formulated as part of recertifying the systems for further use, and risks must be documented and managed to give confidence for reusing the systems. The SEI Software Development Risk Taxonomy [4] focuses on general software development issues; it does not, however, cover all the safety risks. The Software Safety Risk Taxonomy [8] was developed to provide a construct for eliciting and categorizing software safety risks in a straightforward manner. In this paper, we present extended work on the taxonomy for safety that incorporates the additional issues inherent in the development and maintenance of safety-critical systems with software. An instrument called the Software Safety Risk Taxonomy Based Questionnaire (TBQ) is generated, containing questions addressing each safety attribute in the Software Safety Risk Taxonomy. Software safety risks are surfaced using the new TBQ and then analyzed. In this paper we give the definitions for the specialized Product Engineering Class within the Software Safety Risk Taxonomy. At the end of the paper, we present the tool known as the 'Legacy Systems Risk Database Tool' that is used to collect and analyze the data required to show traceability to a particular safety standard.

  1. CosmoQuest: A Glance at Citizen Science Building

    NASA Astrophysics Data System (ADS)

    Richardson, Matthew; Grier, Jennifer; Gay, Pamela; Lehan, Cory; Buxner, Sanlyn; CosmoQuest Team

    2018-01-01

    CosmoQuest is a virtual research facility focused on engaging people - citizen scientists - from across the world in authentic research projects designed to enhance our knowledge of the cosmos around us. Using image data acquired by NASA missions, our citizen scientists are first trained to identify specific features within the data and then asked to identify those features across large datasets. Responses submitted by the citizen scientists are then stored in our database, where they await analysis and eventual publication by CosmoQuest staff and collaborating professional research scientists. While it is clear that the driving power behind our projects is the eyes and minds of our citizen scientists, it is CosmoQuest's custom software, Citizen Science Builder (CSB), that enables citizen science to be accomplished. On the front end, CosmoQuest's CSB software allows for the creation of web interfaces that users can access to perform image annotation through both drawing tools and questions that can accompany images. These tools include: using geometric shapes to identify regions within an image, tracing image attributes using freeform line tools, and flagging features within images. Additionally, checkboxes, dropdowns, and free response boxes may be used to collect information. On the back end, this software is responsible for the proper storage of all data, which allows project staff to perform periodic data quality checks and track the progress of each project. In this poster we present these available tools and resources and seek potential collaborations.

  2. High pressure common rail injection system modeling and control.

    PubMed

    Wang, H P; Zheng, D; Tian, Y

    2016-07-01

    In this paper, modeling and common-rail pressure control of a high pressure common rail injection system (HPCRIS) is presented. The proposed mathematical model of the high pressure common rail injection system, which contains three sub-systems (a high pressure pump sub-model, a common rail sub-model and an injector sub-model), is a relatively complicated nonlinear system. The mathematical model is validated using Matlab and a detailed virtual simulation environment. For the considered HPCRIS, an effective model-free controller, called the Extended State Observer-based intelligent Proportional Integral (ESO-based iPI) controller, is designed. The proposed method is composed mainly of the ESO and a time delay estimation based iPI controller. Finally, to demonstrate its performance, the proposed ESO-based iPI controller is compared with a conventional PID controller and ADRC. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
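
    The control law itself is short. Under the ultra-local model dy/dt = F + αu used by intelligent PI controllers, the lumped term F is estimated from the latest measurement and input, cancelled, and a PI term closes the loop. The simulation below is a toy stand-in: the plant, gains, and α are illustrative assumptions, not the paper's model or tuning:

```python
# Hedged sketch of a model-free "intelligent PI" loop with a crude
# one-step estimator for the lumped dynamics F. All numbers illustrative.
import numpy as np

dt, alpha, kp, ki = 1e-3, 50.0, 8.0, 40.0
target = 100.0                        # desired rail pressure, arbitrary units
y_prev = u_prev = integ = 0.0

for k in range(3000):
    # stand-in nonlinear plant (unknown to the controller)
    y = y_prev + dt * (-2.0 * y_prev + 60.0 * u_prev + 5.0 * np.sin(0.01 * k))
    F_hat = (y - y_prev) / dt - alpha * u_prev   # estimate lumped term F
    e = target - y
    integ += e * dt
    u = (-F_hat + kp * e + ki * integ) / alpha   # iPI control law
    y_prev, u_prev = y, u

print(round(y_prev, 2))   # settles near the 100.0 target
```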

  3. REVEAL: Software Documentation and Platform Migration

    NASA Technical Reports Server (NTRS)

    Wilson, Michael A.; Veibell, Victoir T.; Freudinger, Lawrence C.

    2008-01-01

    The Research Environment for Vehicle Embedded Analysis on Linux (REVEAL) is reconfigurable data acquisition software designed for network-distributed test and measurement applications. In development since 2001, it has been successfully demonstrated in support of a number of actual missions within NASA's Suborbital Science Program. Improvements to software configuration control were needed to properly support both an ongoing transition to operational status and continued evolution of REVEAL capabilities. For this reason the project described in this report targets REVEAL software source documentation and deployment of the software on a small set of hardware platforms different from what is currently used in the baseline system implementation. This report specifically describes the actions taken over a ten-week period by two undergraduate student interns and serves as a final report for that internship. The topics discussed include: the documentation of REVEAL source code; the migration of REVEAL to other platforms; and an end-to-end field test that successfully validated the efforts.

  4. Advanced Query and Data Mining Capabilities for MaROS

    NASA Technical Reports Server (NTRS)

    Wang, Paul; Wallick, Michael N.; Allard, Daniel A.; Gladden, Roy E.; Hy, Franklin H.

    2013-01-01

    The Mars Relay Operational Service (MaROS) comprises a number of tools to coordinate, plan, and visualize various aspects of the Mars Relay network. These tools span several architectural levels, including a Web-based user interface, a back-end "ReSTlet" built in Java, and databases that store the data as it is received from the network. As part of MaROS, the innovators have developed and implemented a feature set that operates on several levels of the software architecture. This new feature is an advanced querying capability, available through either the Web-based user interface or a back-end REST interface, to access all of the data gathered from the network. This software is not meant to replace the REST interface, but to augment and expand the range of available data. The current REST interface provides specific data that is used by the MaROS Web application to display and visualize the information; however, the returned information from the REST interface has typically been pre-processed to return only a subset of the entire information within the repository, particularly only the information that is of interest to the GUI (graphical user interface). The new, advanced query and data mining capabilities allow users to retrieve the raw data and/or to perform their own data processing. The query language used to access the repository is a restricted subset of the structured query language (SQL) that can be built safely from the Web user interface, or entered as freeform SQL by a user. The results are returned in a CSV (Comma Separated Values) format for easy exporting to third-party tools and applications that can be used for data mining or user-defined visualization and interpretation. This is the first time that a service is capable of providing access to all cross-project relay data from a single Web resource. Because MaROS contains the data for a variety of missions from the Mars network, which span both NASA and ESA, the software also establishes an access control list (ACL) on each data record in the database repository to enforce user access permissions through a multilayered approach.
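
    The "restricted SQL subset" plus CSV export pattern is straightforward to sketch. The snippet below illustrates the concept only, using SQLite and hypothetical table names rather than the MaROS schema or its actual validation rules (a production validator would parse the SQL properly instead of using regular expressions):

```python
# Hedged sketch: accept only single read-only SELECTs over whitelisted
# tables, then return the result set as CSV. Table names are hypothetical.
import csv, io, re, sqlite3

ALLOWED_TABLES = {"overflights", "relay_sessions"}

def run_restricted_query(con, sql):
    if not re.fullmatch(r"\s*SELECT\b[^;]*", sql, re.IGNORECASE):
        raise ValueError("only a single SELECT statement is allowed")
    referenced = set(re.findall(r"\bFROM\s+(\w+)", sql, re.IGNORECASE))
    if not referenced or not referenced <= ALLOWED_TABLES:
        raise ValueError("query references non-whitelisted tables")
    cur = con.execute(sql)
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow([col[0] for col in cur.description])  # CSV header row
    writer.writerows(cur)
    return out.getvalue()

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE overflights (orbiter TEXT, start_utc TEXT)")
con.execute("INSERT INTO overflights VALUES ('MRO', '2013-01-01T00:00:00')")
print(run_restricted_query(con, "SELECT * FROM overflights"))
```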

  5. E-DECIDER Decision Support Gateway For Earthquake Disaster Response

    NASA Astrophysics Data System (ADS)

    Glasscoe, M. T.; Stough, T. M.; Parker, J. W.; Burl, M. C.; Donnellan, A.; Blom, R. G.; Pierce, M. E.; Wang, J.; Ma, Y.; Rundle, J. B.; Yoder, M. R.

    2013-12-01

    Earthquake Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) is a NASA-funded project developing capabilities for decision-making utilizing remote sensing data and modeling software in order to provide decision support for earthquake disaster management and response. E-DECIDER incorporates earthquake forecasting methodology and geophysical modeling tools developed through NASA's QuakeSim project in order to produce standards-compliant map data products to aid in decision-making following an earthquake. Remote sensing and geodetic data, in conjunction with modeling and forecasting tools, help provide both long-term planning information for disaster management decision makers as well as short-term information following earthquake events (i.e. identifying areas where the greatest deformation and damage has occurred and emergency services may need to be focused). E-DECIDER utilizes a service-based GIS model for its cyber-infrastructure in order to produce standards-compliant products for different user types with multiple service protocols (such as KML, WMS, WFS, and WCS). The goal is to make complex GIS processing and domain-specific analysis tools more accessible to general users through software services as well as provide system sustainability through infrastructure services. The system comprises several components, which include: a GeoServer for thematic mapping and data distribution, a geospatial database for storage and spatial analysis, web service APIs, including simple-to-use REST APIs for complex GIS functionalities, and geoprocessing tools including python scripts to produce standards-compliant data products. These are then served to the E-DECIDER decision support gateway (http://e-decider.org), the E-DECIDER mobile interface, and to the Department of Homeland Security decision support middleware UICDS (Unified Incident Command and Decision Support). The E-DECIDER decision support gateway features a web interface that delivers map data products including deformation modeling results (slope change and strain magnitude) and aftershock forecasts, with remote sensing change detection results under development. These products are event triggered (from the USGS earthquake feed) and will be posted to event feeds on the E-DECIDER webpage and accessible via the mobile interface and UICDS. E-DECIDER also features a KML service that provides infrastructure information from the FEMA HAZUS database through UICDS and the mobile interface. The back-end GIS service architecture and front-end gateway components form a decision support system that is designed for ease-of-use and extensibility for end-users.

  6. Object-Oriented Dynamic Bayesian Network-Templates for Modelling Mechatronic Systems

    DTIC Science & Technology

    2002-05-04

    daimlerchrysler.com ... are widespread. The object-oriented paradigm is a new but proven technology ... For modelling mechanical systems ADAMS [3] or ... thermal flow or hydraulics, see Figure 1. It also contains a ... hardware (sub-)systems. On the software side the object-oriented paradigm is by now (at ...

  7. Front-End and Back-End Database Design and Development: Scholar's Academy Case Study

    ERIC Educational Resources Information Center

    Parks, Rachida F.; Hall, Chelsea A.

    2016-01-01

    This case study consists of a real database project for a charter school--Scholar's Academy--and provides background information on the school and its cafeteria processing system. Also included are functional requirements and some illustrative data. Students are tasked with the design and development of a database for the purpose of improving the…

  8. 77 FR 60651 - Airworthiness Directives; BAE Systems (Operations) Limited Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-04

    ... of the wing leading edge. This proposed AD would require a detailed inspection of the end caps on the... tube, and ice accretion on the wing leading edge or run-back ice, which could lead to a reduction in... leading edge anti- icing piccolo tube end caps on two aircraft. This was discovered during routine zonal...

  9. 4. EXTERIOR OF SOUTH END OF BUILDING 104 SHOWING 1LIGHT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. EXTERIOR OF SOUTH END OF BUILDING 104 SHOWING 1-LIGHT SIDE EXIT DOOR AND ORIGINAL WOOD-FRAMED SLIDING GLASS KITCHEN WINDOWS AT PHOTO CENTER, AND TALL RUSTIC STYLE CHIMNEY WITH GABLE FRAME ON BACK WALL OF HOUSE. VIEW TO NORTHEAST. - Rush Creek Hydroelectric System, Worker Cottage, Rush Creek, June Lake, Mono County, CA

  10. 78 FR 7259 - Airworthiness Directives; BAE SYSTEMS (OPERATIONS) LIMITED Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-01

    ... wing leading edge. This AD requires a detailed inspection of the end caps on the anti-icing piccolo... on the wing leading edge or run-back ice, which could lead to a reduction in the stall margin on... the loss of the wing leading edge anti- icing piccolo tube end caps on two aircraft. This was...

  11. Software Piracy, Ethics, and the Academician.

    ERIC Educational Resources Information Center

    Bassler, Richard A.

    The numerous software programs available for easy, low-cost copying raise ethical questions. The problem can be examined from the viewpoints of software users, teachers, authors, vendors, and distributors. Software users might hesitate to purchase or use software which prevents the making of back-up copies for program protection. Teachers in…

  12. Exact free vibration of multi-step Timoshenko beam system with several attachments

    NASA Astrophysics Data System (ADS)

    Farghaly, S. H.; El-Sayed, T. A.

    2016-05-01

    This paper deals with the analysis of the natural frequencies and mode shapes of an axially loaded multi-step Timoshenko beam combined system carrying several attachments. The influence of the system design and the proposed sub-system non-dimensional parameters on the combined system characteristics is the major part of this investigation. The effects of material properties, rotary inertia and shear deformation of the beam system for each span are included. The end masses are elastically supported against rotation and translation at an offset point from the point of attachment. A sub-system having two degrees of freedom is located at the beam ends and at any of the intermediate stations and acts as a support and/or a suspension. The boundary conditions of the ordinary differential equation governing the lateral deflections and slope due to bending of the beam system, including the shear force term due to the sub-system, have been formulated. Exact global coefficient matrices for the combined modal frequencies, the modal shape and for the discrete sub-system have been derived. Based on these formulae, detailed parametric studies of the combined system are carried out. The applied mathematical model is valid for a wide range of applications, especially in the mechanical, naval and structural engineering fields.
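
    For orientation, the governing equations referred to above are, in their generic textbook form for an axially loaded Timoshenko span (notation is ours, not the paper's):

```latex
kGA\left(\frac{\partial^{2} w}{\partial x^{2}}
        - \frac{\partial \phi}{\partial x}\right)
  - P\,\frac{\partial^{2} w}{\partial x^{2}}
  = \rho A\,\frac{\partial^{2} w}{\partial t^{2}},
\qquad
EI\,\frac{\partial^{2} \phi}{\partial x^{2}}
  + kGA\left(\frac{\partial w}{\partial x} - \phi\right)
  = \rho I\,\frac{\partial^{2} \phi}{\partial t^{2}}
```

    where w is the lateral deflection, φ the bending slope, P the axial load, k the shear correction factor, and the sub-system enters through the shear-force and moment boundary conditions at the attachment stations.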

  13. Data management software concept for WEST plasma measurement system

    NASA Astrophysics Data System (ADS)

    Zienkiewicz, P.; Kasprowicz, G.; Byszuk, A.; Wojeński, A.; Kolasinski, P.; Cieszewski, R.; Czarski, T.; Chernyshova, M.; Pozniak, K.; Zabolotny, W.; Juszczyk, B.; Mazon, D.; Malard, P.

    2014-11-01

    This paper describes the concept of data management software for the multichannel readout system for the GEM detector used in the WEST plasma experiment. The proposed system consists of three separate communication channels: a fast data channel, a diagnostics channel and a slow data channel. The fast data channel is provided by an FPGA with integrated ARM cores, delivering direct readout data from the analog front ends over 10GbE at short, guaranteed intervals. The slow data channel is provided by multiple fast CPUs, delivering detailed readout data after processing, using the GNU/Linux OS and appropriate software. The diagnostics channel provides detailed feedback for control purposes.
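
    As a rough illustration of this three-channel split, a minimal routing sketch is shown below; the queue-based dispatch and all names are hypothetical and not taken from the WEST software:

        from dataclasses import dataclass
        from queue import Queue

        @dataclass
        class Frame:
            timestamp: float
            payload: bytes
            channel: str                      # "fast", "slow" or "diag"

        # One queue per communication channel described in the abstract.
        fast_q, slow_q, diag_q = Queue(), Queue(), Queue()
        ROUTES = {"fast": fast_q, "slow": slow_q, "diag": diag_q}

        def dispatch(frame: Frame) -> None:
            """Route a readout frame to the queue of its channel."""
            ROUTES[frame.channel].put(frame)

        dispatch(Frame(0.001, b"\x01\x02", "fast"))   # e.g. raw FPGA/ARM readout
        dispatch(Frame(0.250, b"\x03", "slow"))       # e.g. processed CPU data
        print(fast_q.get().payload, slow_q.get().payload)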

  14. A multi-GPU real-time dose simulation software framework for lung radiotherapy.

    PubMed

    Santhanam, A P; Min, Y; Neelakkantan, H; Papp, N; Meeks, S L; Kupelian, P A

    2012-09-01

    Medical simulation frameworks facilitate both the preoperative and postoperative analysis of the patient's pathophysical condition. Of particular importance is the simulation of radiation dose delivery for real-time radiotherapy monitoring and retrospective analyses of the patient's treatment. In this paper, a software framework tailored for the development of simulation-based real-time radiation dose monitoring medical applications is discussed. A multi-GPU-based computational framework coupled with inter-process communication methods is introduced for simulating the radiation dose delivery on a deformable 3D volumetric lung model and its real-time visualization. The model deformation and the corresponding dose calculation are allocated among the GPUs in a task-specific manner and executed as a pipeline. Radiation dose calculations are computed on two different GPU hardware architectures. The integration of this computational framework with a front-end software layer and a back-end patient database repository is also discussed. Real-time simulation of the delivered dose is achieved once every 120 ms using the proposed framework. With a linear increase in the number of GPU cores, the computational time of the simulation decreased linearly. The inter-process communication time also improved with an increase in the hardware memory. Variations in the delivered dose and computational speedup for variations in the data dimensions are investigated using D70 and D90 as well as gEUD as metrics for a set of 14 patients. Computational speed-up increased with an increase in the beam dimensions when compared with CPU-based commercial software, while the error in the dose calculation was <1%. Our analyses show that the framework applied to deformable lung model-based radiotherapy is an effective tool for performing both real-time and retrospective analyses.
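
    The task-specific, pipelined allocation can be pictured as below. This is a minimal sketch using threads and queues in place of GPUs and inter-process channels; all names are invented:

        import threading, queue

        deform_q: queue.Queue = queue.Queue()
        dose_q: queue.Queue = queue.Queue()

        def deform_stage():
            # Stand-in for GPU 1: deform the lung model for each frame.
            while (frame := deform_q.get()) is not None:
                dose_q.put(f"deformed-{frame}")
            dose_q.put(None)                      # propagate shutdown downstream

        def dose_stage():
            # Stand-in for GPU 2: compute dose on the deformed model while
            # stage 1 works on the next frame -- the stages overlap, which is
            # the point of pipelining.
            while (model := dose_q.get()) is not None:
                print("dose computed on", model)

        workers = [threading.Thread(target=deform_stage),
                   threading.Thread(target=dose_stage)]
        for w in workers: w.start()
        for frame in range(3): deform_q.put(frame)
        deform_q.put(None)
        for w in workers: w.join()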

  15. Development of RT-components for the M-3 Strawberry Harvesting Robot

    NASA Astrophysics Data System (ADS)

    Yamashita, Tomoki; Tanaka, Motomasa; Yamamoto, Satoshi; Hayashi, Shigehiko; Saito, Sadafumi; Sugano, Shigeki

    We are developing a strawberry harvesting robot, the “M-3” prototype robot system, under the 4th urgent project of MAFF. In order to develop the control software of the M-3 robot more efficiently, we adopted the RT-middleware “OpenRTM-aist” software platform. In this system, we developed nine kinds of RT-Components (RTCs): a robot task sequence player RTC, a proxy RTC for the image processing software, a DC motor controller RTC, an arm kinematics RTC, and so on. In this paper, we discuss the advantages of developing with RT-middleware and the problems of operating the RTC-based robotic system by end-users.

  16. Redundancy, Self-Motion, and Motor Control

    PubMed Central

    Martin, V.; Scholz, J. P.; Schöner, G.

    2011-01-01

    Outside the laboratory, human movement typically involves redundant effector systems. How the nervous system selects among the task-equivalent solutions may provide insights into how movement is controlled. We propose a process model of movement generation that accounts for the kinematics of goal-directed pointing movements performed with a redundant arm. The key element is a neuronal dynamics that generates a virtual joint trajectory. This dynamics receives input from a neuronal timer that paces end-effector motion along its path. Within this dynamics, virtual joint velocity vectors that move the end effector are dynamically decoupled from velocity vectors that do not. Moreover, the sensed real joint configuration is coupled back into this neuronal dynamics, updating the virtual trajectory so that it yields to task-equivalent deviations from the dynamic movement plan. Experimental data from participants who perform in the same task setting as the model are compared in detail to the model predictions. We discover that joint velocities contain a substantial amount of self-motion that does not move the end effector. This is caused by the low impedance of muscle joint systems and by coupling among muscle joint systems due to multiarticulatory muscles. Back-coupling amplifies the induced control errors. We establish a link between the amount of self-motion and how curved the end-effector path is. We show that models in which an inverse dynamics cancels interaction torques predict too little self-motion and too straight end-effector paths. PMID:19718817
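
    The self-motion discussed here is, in kinematic terms, the null-space component of the joint velocity. A small numerical sketch of the decomposition (illustrative numbers, not the study's data):

        import numpy as np

        J = np.array([[1.0, 0.5, 0.2]])      # 1x3 task Jacobian of a redundant arm
        dq = np.array([0.3, -0.1, 0.4])      # measured joint velocity

        J_pinv = np.linalg.pinv(J)
        dq_task = J_pinv @ (J @ dq)          # range-space part: moves the end effector
        dq_self = dq - dq_task               # null-space part: pure self-motion

        assert np.allclose(J @ dq_self, 0.0) # self-motion leaves the task untouched
        print(dq_task, dq_self)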

  17. Planning in Dynamic and Uncertain Environments

    DTIC Science & Technology

    1994-05-01

    particular, General Electric’s (GE) Tachyon system [2]), and uses the communication software provided in the CPE (in particular, the Cronus and Knet...and gets back information about the world and replanning requests. We extended SIPE-2 to interact with GE’s Tachyon system in a loosely coupled...manner. Tachyon is able to process extended temporal constraints for SIPE-2 during planning. They communicate by using the Cronus system in the CPE

  18. Mobile Care (Moca) for Remote Diagnosis and Screening

    PubMed Central

    Celi, Leo Anthony; Sarmenta, Luis; Rotberg, Jhonathan; Marcelo, Alvin; Clifford, Gari

    2010-01-01

    Moca is a cell phone-facilitated clinical information system to improve diagnostic, screening and therapeutic capabilities in remote resource-poor settings. The software allows transmission of any medical file, whether a photo, x-ray, audio or video file, through a cell phone to (1) a central server for archiving and incorporation into an electronic medical record (to facilitate longitudinal care, quality control, and data mining), and (2) a remote specialist for real-time decision support (to leverage expertise). The open source software is designed as an end-to-end clinical information system that seamlessly connects health care workers to medical professionals. It is integrated with OpenMRS, an existing open source medical records system commonly used in developing countries. PMID:21822397

  19. Image-Processing Software For A Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    The Concurrent Image Processing Executive (CIPE) is a software system intended for developing and using image-processing application programs in a concurrent computing environment. Designed to shield the programmer from the complexities of concurrent-system architecture, it provides an interactive image-processing environment for the end user. CIPE utilizes the architectural characteristics of a particular concurrent system to maximize efficiency while preserving architectural independence for the user and programmer. CIPE runs on a Mark-IIIfp 8-node hypercube computer and an associated SUN-4 host computer.

  20. Optimization of digitization procedures in cultural heritage preservation

    NASA Astrophysics Data System (ADS)

    Martínez, Bea; Mitjà, Carles; Escofet, Jaume

    2013-11-01

    The digitization of both volumetric and flat objects is nowadays the preferred method for preserving cultural heritage items. High-quality digital files obtained from photographic plates, films and prints, paintings, drawings, gravures, fabrics and sculptures allow not only a wider diffusion and online transmission, but also the preservation of the original items from future handling. Early digitization procedures used scanners for flat opaque or translucent objects and cameras only for volumetric or flat highly texturized materials. The technical obsolescence of high-end scanners and the improvement achieved by professional cameras have resulted in a wide use of cameras with digital backs to digitize any kind of cultural heritage item. Since the lens, the digital back, the software controlling the camera and the digital image processing provide a wide range of possibilities, it is necessary to standardize the methods used in the reproduction work so as to preserve the original item properties as faithfully as possible. This work presents an overview of methods used for camera system characterization, as well as the best procedures to identify and counteract the effects of residual lens aberrations, sensor aliasing, image illumination, color management and image optimization by means of parametric image processing. As a corollary, the work shows some examples of reproduction workflows applied to the digitization of valuable art pieces and glass-plate photographic black-and-white negatives.

  1. Integrating High-Throughput Parallel Processing Framework and Storage Area Network Concepts Into a Prototype Interactive Scientific Visualization Environment for Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Smuga-Otto, M. J.; Garcia, R. K.; Knuteson, R. O.; Martin, G. D.; Flynn, B. M.; Hackel, D.

    2006-12-01

    The University of Wisconsin-Madison Space Science and Engineering Center (UW-SSEC) is developing tools to help scientists realize the potential of high spectral resolution instruments for atmospheric science. Upcoming satellite spectrometers like the Cross-track Infrared Sounder (CrIS), experimental instruments like the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) and proposed instruments like the Hyperspectral Environmental Suite (HES) within the GOES-R project will present a challenge in the form of overwhelmingly large amounts of continuously generated data. Current and near-future workstations will have neither the storage space nor the computational capacity to cope with raw spectral data spanning more than a few minutes of observations from these instruments. Schemes exist for processing raw data from hyperspectral instruments currently in testing that involve distributed computation across clusters. Data, which for an instrument like GIFTS can amount to over 1.5 terabytes per day, are carefully managed on Storage Area Networks (SANs), with attention paid to proper maintenance of associated metadata. The UW-SSEC is preparing a demonstration integrating these back-end capabilities as part of a larger visualization framework, to assist scientists in developing new products from high spectral resolution data, sourcing data volumes they could not otherwise manage. This demonstration focuses on managing storage so that only the data specifically needed for the desired product are pulled from the SAN, and on running computationally expensive intermediate processing on a back-end cluster, with the final product being sent to a visualization system on the scientist's workstation. Where possible, existing software and solutions are used to reduce the cost of development. The heart of the computing component is the GIFTS Information Processing System (GIPS), developed at the UW-SSEC to allow distribution of processing tasks such as conversion of raw GIFTS interferograms into calibrated radiance spectra, and retrieval of atmospheric temperature and water vapor profiles from these spectra. The hope is that by demonstrating the capabilities afforded by a composite system like the one described here, scientists can be convinced to contribute further algorithms in support of this model of computing and visualization.

  2. Teaching with a Dual-Channel Classroom Feedback System in the Digital Classroom Environment

    ERIC Educational Resources Information Center

    Yu, Yuan-Chih

    2017-01-01

    Teaching with a classroom feedback system can benefit both teaching and learning practices of interactivity. In this paper, we propose a dual-channel classroom feedback system integrated with a back-end e-Learning system. The system consists of learning agents running on the students' computers and a teaching agent running on the instructor's…

  3. Time cycle analysis and simulation of material flow in MOX process layout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, S.; Saraswat, A.; Danny, K.M.

    The (U,Pu)O{sub 2} MOX fuel is the driver fuel for the upcoming PFBR (Prototype Fast Breeder Reactor). The fuel has around 30% PuO{sub 2}. The presence of high percentages of reprocessed PuO{sub 2} necessitates the design of an optimized fuel fabrication process line which addresses both production needs and regulatory norms regarding radiological safety criteria. The powder-pellet route has a highly unbalanced time cycle. This difficulty can be overcome by optimizing the process layout in terms of equipment redundancy and the scheduling of input powder batches. Different schemes are tested before being implemented in the process line with the help of software that simulates material movement through the optimized process layout. Different material processing schemes have been devised and the validity of the schemes is tested with the software. Schemes in which production batches meet at any glove box location are considered invalid. A valid scheme ensures adequate spacing between the production batches and at the same time meets the production target. The software can be further improved by accurately calculating material movement time through the glove box train. One important factor is considering material handling time with automation systems in place.
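
    The rule that batches must never meet at a glove box lends itself to a discrete-event check. A hedged sketch (not the authors' software; the station count, process times and release schedule are invented) using the SimPy library:

        import simpy

        PROCESS_TIMES = [2.0, 5.0, 3.0]          # hours per glove-box station

        def batch(env, name, stations, release_at):
            yield env.timeout(release_at)        # staggered release of the powder batch
            for i, station in enumerate(stations):
                arrived = env.now
                with station.request() as req:   # each glove box holds one batch
                    yield req
                    if env.now > arrived:        # it queued, i.e. two batches met here
                        print(f"INVALID scheme: {name} waited at glove box {i}")
                    yield env.timeout(PROCESS_TIMES[i])
            print(f"{env.now:5.1f} h: {name} done")

        env = simpy.Environment()
        stations = [simpy.Resource(env, capacity=1) for _ in PROCESS_TIMES]
        for k, t0 in enumerate([0.0, 5.0, 10.0]):    # candidate release schedule
            env.process(batch(env, f"batch-{k}", stations, t0))
        env.run()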

  4. Examination of the relationship between theory-driven policies and allowed lost-time back claims in workers' compensation: a system dynamics model.

    PubMed

    Wong, Jessica J; McGregor, Marion; Mior, Silvano A; Loisel, Patrick

    2014-01-01

    The purpose of this study was to develop a model that evaluates the impact of policy changes on the number of workers' compensation lost-time back claims in Ontario, Canada, over a 30-year timeframe. The model was used to test the hypothesis that a theory- and policy-driven model would be sufficient in reproducing historical claims data in a robust manner and that policy changes would have a major impact on modeled data. The model was developed using system dynamics methods in the Vensim simulation program. The theoretical effects of policies for compensation benefit levels and experience rating fees were modeled. The model was built and validated using historical claims data from 1980 to 2009. Sensitivity analysis was used to evaluate the modeled data at extreme end points of variable input and timeframes. The degree of predictive value of the modeled data was measured by the coefficient of determination, root mean square error, and Theil's inequality coefficients. Correlation between modeled data and actual data was found to be meaningful (R(2) = 0.934), and the modeled data were stable at extreme end points. Among the effects explored, policy changes were found to be relatively minor drivers of back claims data, accounting for a 13% improvement in error. Simulation results suggested that unemployment, number of no-lost-time claims, number of injuries per worker, and recovery rate from back injuries outside of claims management to be sensitive drivers of back claims data. A robust systems-based model was developed and tested for use in future policy research in Ontario's workers' compensation. The study findings suggest that certain areas within and outside the workers' compensation system need to be considered when evaluating and changing policies around back claims. © 2014. Published by National University of Health Sciences All rights reserved.
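
    To make the stock-and-flow mechanics concrete, a deliberately crude sketch of such a model is shown below; the published Vensim model has many more feedback loops, and every rate and initial value here is invented:

        def simulate(years=30, dt=0.25, benefit_effect=1.0):
            claims = 40_000.0                       # stock: open lost-time back claims
            history = []
            for _ in range(int(years / dt)):
                inflow = 12_000.0 * benefit_effect  # new claims/yr, scaled by policy
                recovery = 0.35 * claims            # claims resolved per year
                claims += (inflow - recovery) * dt  # Euler integration of the stock
                history.append(claims)
            return history

        baseline = simulate()
        policy = simulate(benefit_effect=0.9)       # e.g. lower benefit levels
        print(round(baseline[-1]), round(policy[-1]))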

  5. Limited Connected Speech Experiment.

    DTIC Science & Technology

    1983-03-01

    male and twenty-five female speakers. This report describes the real-time laboratory CSR system, the data base and training software...sounding word(s). The end point detection class contains those errors in which the CSR system did not properly detect the beginning or end of the phrase...processing continues following end-of-file on the initialization file. -s Operate in silent, rather than verbose, mode. Normally, each CSR command

  6. Multimedia consultation session recording and playback using Java-based browser in global PACS

    NASA Astrophysics Data System (ADS)

    Martinez, Ralph; Shah, Pinkesh J.; Yu, Yuan-Pin

    1998-07-01

    The current version of the Global PACS software system uses a Java-based implementation of the Remote Consultation and Diagnosis (RCD) system. The Java RCD includes a multimedia consultation session between physicians that includes text, static image, image annotation, and audio data. The Java RCD allows 2-4 physicians to collaborate on a patient case. It allows physicians to join the session via WWW Java-enabled browsers or a stand-alone RCD application. The RCD system includes a distributed database archive system for archiving and retrieving patient and session data. The RCD system can be used for store-and-forward scenarios, case reviews, and interactive RCD multimedia sessions. The RCD system operates over the Internet, telephone lines, or a private Intranet. A multimedia consultation session can be recorded and then played back at a later time for review, comments, and education. A session can be played back using Java-enabled WWW browsers on any operating system platform. The Java RCD system shows that a case diagnosis can be captured digitally and played back with the original real-time temporal relationships between data streams. In this paper, we describe the design and implementation of the RCD session playback.

  7. Online data handling and storage at the CMS experiment

    NASA Astrophysics Data System (ADS)

    Andre, J.-M.; Andronidis, A.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gómez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, RK; Morovic, S.; Nuñez-Barranco-Fernández, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.

    2015-12-01

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ∼62 sources produced with an aggregate rate of ∼2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.
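
    A file-based design of this kind makes the merger conceptually simple: each source drops a data file plus a small JSON bookkeeping document, and the merger folds them together. A hedged sketch follows; the file layout and key names are invented, not the CMS schema:

        import json, pathlib

        def merge_lumisection(in_dir: pathlib.Path, out_file: pathlib.Path) -> dict:
            """Concatenate the data files listed by per-source JSON documents
            and sum their bookkeeping counts (hypothetical schema)."""
            totals = {"events": 0, "sources": 0}
            with out_file.open("wb") as merged:
                for meta_path in sorted(in_dir.glob("*.jsn")):
                    meta = json.loads(meta_path.read_text())
                    merged.write((in_dir / meta["data_file"]).read_bytes())
                    totals["events"] += meta["events"]   # aggregate bookkeeping
                    totals["sources"] += 1
            return totals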

  8. Online Data Handling and Storage at the CMS Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andre, J. M.; et al.

    2015-12-23

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ~62 sources produced with an aggregate rate of ~2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.

  9. Development of intelligent instruments with embedded HTTP servers for control and data acquisition in a cryogenic setup--The hardware, firmware, and software implementation.

    PubMed

    Antony, Joby; Mathuria, D S; Datta, T S; Maity, Tanmoy

    2015-12-01

    The power of Ethernet for control and automation technology is now largely understood by the automation industry. Ethernet with HTTP (Hypertext Transfer Protocol) is one of the most widely accepted communication standards today, and it is best known for enabling control over the Internet from anywhere in the globe. An Ethernet interface with built-in on-chip embedded servers ensures global connectivity for a crate-less model of control and data acquisition systems, which has several advantages over traditional crate-based control architectures for slow applications. This architecture completely eliminates the use of any extra PLC (Programmable Logic Controller) or similar control hardware in the automation network, as the control functions are firmware-coded inside the intelligent meters themselves. Here, we describe the indigenously built cryogenic control system for the linear accelerator at the Inter University Accelerator Centre, known as "CADS," which stands for "Complete Automation of Distribution System." CADS covers the complete hardware, firmware, and software implementation of the automated linac cryogenic distribution system using many Ethernet-based embedded cryogenic instruments developed in-house. Each instrument works as an intelligent meter called a device-server, which has the control functions and control loops built inside the firmware itself. Dedicated meters with built-in servers were designed out of ARM (Acorn RISC (Reduced Instruction Set Computer) Machine) and ATMEL processors and COTS (Commercially Off-the-Shelf) SMD (Surface Mount Device) components, with an analog sensor front-end and a digital back-end web server implementing remote procedure call over HTTP for digital control and readout functions. At present, 24 instruments running 58 embedded servers, each specific to a particular type of sensor-actuator combination for closed-loop operations, are deployed and distributed across the control LAN (Local Area Network). Six categories of such instruments, covering all cryogenic applications required for linac operation, were identified and designed to build this medium-scale cryogenic automation setup. These devices have special features like remote rebooters, daughter boards for PIDs (Proportional Integral Derivative controllers), etc., to operate them remotely in radiation areas, and also have emergency switches by which each device can temporarily be put into an emergency mode. Finally, all the data are monitored, logged, controlled, and analyzed online at a central control room, which has a user-friendly control interface developed using LabVIEW(®). This paper discusses the overall hardware, firmware, and software design and implementation for the cryogenics setup.
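
    The device-server pattern (control loop in firmware, readout and control over plain HTTP) can be illustrated with a minimal sketch. This is a CPython stand-in for what the paper implements in ARM/ATMEL firmware; the endpoint name and the sensor model are invented:

        import json, random
        from http.server import BaseHTTPRequestHandler, HTTPServer

        def read_temperature_k() -> float:
            return 4.2 + random.uniform(-0.05, 0.05)   # stand-in for a real sensor

        class DeviceServer(BaseHTTPRequestHandler):
            def do_GET(self):
                if self.path == "/temperature":        # hypothetical endpoint
                    body = json.dumps({"T_K": read_temperature_k()}).encode()
                    self.send_response(200)
                    self.send_header("Content-Type", "application/json")
                    self.end_headers()
                    self.wfile.write(body)
                else:
                    self.send_response(404)
                    self.end_headers()

        if __name__ == "__main__":
            # Each instrument would run a server like this on the control LAN.
            HTTPServer(("0.0.0.0", 8080), DeviceServer).serve_forever()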

  10. Development of intelligent instruments with embedded HTTP servers for control and data acquisition in a cryogenic setup—The hardware, firmware, and software implementation

    NASA Astrophysics Data System (ADS)

    Antony, Joby; Mathuria, D. S.; Datta, T. S.; Maity, Tanmoy

    2015-12-01

    The power of Ethernet for control and automation technology is now largely understood by the automation industry. Ethernet with HTTP (Hypertext Transfer Protocol) is one of the most widely accepted communication standards today, and it is best known for enabling control over the Internet from anywhere in the globe. An Ethernet interface with built-in on-chip embedded servers ensures global connectivity for a crate-less model of control and data acquisition systems, which has several advantages over traditional crate-based control architectures for slow applications. This architecture completely eliminates the use of any extra PLC (Programmable Logic Controller) or similar control hardware in the automation network, as the control functions are firmware-coded inside the intelligent meters themselves. Here, we describe the indigenously built cryogenic control system for the linear accelerator at the Inter University Accelerator Centre, known as "CADS," which stands for "Complete Automation of Distribution System." CADS covers the complete hardware, firmware, and software implementation of the automated linac cryogenic distribution system using many Ethernet-based embedded cryogenic instruments developed in-house. Each instrument works as an intelligent meter called a device-server, which has the control functions and control loops built inside the firmware itself. Dedicated meters with built-in servers were designed out of ARM (Acorn RISC (Reduced Instruction Set Computer) Machine) and ATMEL processors and COTS (Commercially Off-the-Shelf) SMD (Surface Mount Device) components, with an analog sensor front-end and a digital back-end web server implementing remote procedure call over HTTP for digital control and readout functions. At present, 24 instruments running 58 embedded servers, each specific to a particular type of sensor-actuator combination for closed-loop operations, are deployed and distributed across the control LAN (Local Area Network). Six categories of such instruments, covering all cryogenic applications required for linac operation, were identified and designed to build this medium-scale cryogenic automation setup. These devices have special features like remote rebooters, daughter boards for PIDs (Proportional Integral Derivative controllers), etc., to operate them remotely in radiation areas, and also have emergency switches by which each device can temporarily be put into an emergency mode. Finally, all the data are monitored, logged, controlled, and analyzed online at a central control room, which has a user-friendly control interface developed using LabVIEW®. This paper discusses the overall hardware, firmware, and software design and implementation for the cryogenics setup.

  11. Development of intelligent instruments with embedded HTTP servers for control and data acquisition in a cryogenic setup—The hardware, firmware, and software implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antony, Joby; Mathuria, D. S.; Datta, T. S.

    The power of Ethernet for control and automation technology is now largely understood by the automation industry. Ethernet with HTTP (Hypertext Transfer Protocol) is one of the most widely accepted communication standards today, and it is best known for enabling control over the Internet from anywhere in the globe. An Ethernet interface with built-in on-chip embedded servers ensures global connectivity for a crate-less model of control and data acquisition systems, which has several advantages over traditional crate-based control architectures for slow applications. This architecture completely eliminates the use of any extra PLC (Programmable Logic Controller) or similar control hardware in the automation network, as the control functions are firmware-coded inside the intelligent meters themselves. Here, we describe the indigenously built cryogenic control system for the linear accelerator at the Inter University Accelerator Centre, known as “CADS,” which stands for “Complete Automation of Distribution System.” CADS covers the complete hardware, firmware, and software implementation of the automated linac cryogenic distribution system using many Ethernet-based embedded cryogenic instruments developed in-house. Each instrument works as an intelligent meter called a device-server, which has the control functions and control loops built inside the firmware itself. Dedicated meters with built-in servers were designed out of ARM (Acorn RISC (Reduced Instruction Set Computer) Machine) and ATMEL processors and COTS (Commercially Off-the-Shelf) SMD (Surface Mount Device) components, with an analog sensor front-end and a digital back-end web server implementing remote procedure call over HTTP for digital control and readout functions. At present, 24 instruments running 58 embedded servers, each specific to a particular type of sensor-actuator combination for closed-loop operations, are deployed and distributed across the control LAN (Local Area Network). Six categories of such instruments, covering all cryogenic applications required for linac operation, were identified and designed to build this medium-scale cryogenic automation setup. These devices have special features like remote rebooters, daughter boards for PIDs (Proportional Integral Derivative controllers), etc., to operate them remotely in radiation areas, and also have emergency switches by which each device can temporarily be put into an emergency mode. Finally, all the data are monitored, logged, controlled, and analyzed online at a central control room, which has a user-friendly control interface developed using LabVIEW{sup ®}. This paper discusses the overall hardware, firmware, and software design and implementation for the cryogenics setup.

  12. Specification and simulation of behavior of the Continuous Infusion Insulin Pump system.

    PubMed

    Babamir, Seyed Morteza; Dehkordi, Mehdi Borhani

    2014-01-01

    The Continuous Infusion Insulin Pump (CIIP) system is responsible for monitoring a diabetic's blood sugar. In this paper, we aim to specify and simulate the behavior of the CIIP software. To this end, we first (1) present a model of the CIIP system behavior in response to the behavior of its environment (the diabetic), and (2) formally define the safety requirements of the system environment in the Z formal modeling language. Such requirements should be satisfied by the CIIP software. Finally, we program the model and the requirements.
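
    The flavor of such an environment safety requirement, recast here as an executable check rather than a Z schema, can be sketched as follows; the glucose band, units and rate rules are invented for illustration:

        SAFE_LOW, SAFE_HIGH = 4.0, 10.0      # mmol/L, hypothetical safe band

        def infusion_rate(glucose: float, basal: float = 0.5) -> float:
            """Decide a pump rate and enforce a safety invariant on the decision."""
            if glucose < SAFE_LOW:
                rate = 0.0                   # never infuse into hypoglycaemia
            elif glucose > SAFE_HIGH:
                rate = basal * 2.0           # correct hyperglycaemia
            else:
                rate = basal
            # The invariant a Z specification would state declaratively:
            assert not (glucose < SAFE_LOW and rate > 0.0), "safety requirement violated"
            return rate

        print(infusion_rate(3.5), infusion_rate(12.0))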

  13. Design of EPON far-end equipment based on FTTH

    NASA Astrophysics Data System (ADS)

    Feng, Xiancheng; Yun, Xiang

    2008-12-01

    Nowadays, the most favored fiber access system is the EPON fiber access system. Inheriting the low cost of Ethernet and the usability and bandwidth of optical networks, EPON technology is one of the best technologies for fiber access and is widely adopted by carriers all over the world. Based on a scheme analysis of FTTH far-end equipment, a hardware design of the ONU is proposed in this paper. The software design of the FTTH far-end equipment follows a modular design concept and is divided into five function modules: a low-layer driver module, a system management module, a master/slave communication module, a main/standby switch module and a command-line module. The software flow of the host computer is also analyzed. Finally, the Ethernet service performance, E1 service performance and optical path protection switching of the FTTH far-end equipment are tested. The test results indicate that all items accord with the technical requirements for far-end ONU equipment, possess good quality and fully reach the requirements for telecommunication-grade equipment. The far-end equipment of the FTTH system is divided into several parts based on function: the control module, the exchange module, the UNI interface module, the ONU module, the EPON interface module, the network management debugging module, the voice processing module, the circuit simulation module and the CATV module. In the downstream direction, for protection, two optical modules are designed; the system can set one optical module working and the other closed when it is initialized. When the optical fiber line is cut, a LOS alarm is raised, which causes the MUX to switch to the other optical module, resets the 3701/3711 module so that it performs ranging again, and reports to the MPC850 plug board through the GPIO port. In normal mode, the downstream optical signal is transformed into an electrical signal by the optical module. In the upstream direction, the upstream Ethernet data are retransmitted through the exchange chip BCM5380 to the GMII/MII of the 3701/3711 module and then transmitted to the EPON port. The 2 Mb/s (E1) data are transformed into Ethernet data packets in the TDM plug board and then transmitted to the MII interface of the 3701/3711 module.

  14. An Investigation to Manufacturing Analytical Services Composition using the Analytical Target Cascading Method.

    PubMed

    Tien, Kai-Wen; Kulvatunyou, Boonserm; Jung, Kiwook; Prabhu, Vittaldas

    2017-01-01

    As cloud computing is increasingly adopted, the trend is to offer software functions as modular services and compose them into larger, more meaningful ones. This trend is attractive for analytical problems in the manufacturing system design and performance improvement domain because (1) finding a global optimization for the system is a complex problem and (2) sub-problems are typically compartmentalized by the organizational structure. However, solving sub-problems by independent services can result in a sub-optimal solution at the system level. This paper investigates the technique called Analytical Target Cascading (ATC) to coordinate the optimization of loosely coupled sub-problems, each of which may be modularly formulated by different departments and solved by modular analytical services. The results demonstrate that ATC is a promising method in that it offers system-level optimal solutions that can scale up by exploiting distributed and modular execution while allowing easier management of the problem formulation.
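
    In outline, ATC lets each sub-problem optimize locally against targets cascaded from a coordinator, which then reconciles the responses. A toy two-department sketch (invented objectives and weights, with a fixed penalty and simple averaging standing in for ATC's full weight-update schemes):

        from scipy.optimize import minimize_scalar

        def sub1(target, w=10.0):   # department 1: locally prefers x near 2
            return minimize_scalar(lambda x: (x - 2.0) ** 2 + w * (x - target) ** 2).x

        def sub2(target, w=10.0):   # department 2: locally prefers x near 6
            return minimize_scalar(lambda x: (x - 6.0) ** 2 + w * (x - target) ** 2).x

        target = 0.0
        for _ in range(20):
            r1, r2 = sub1(target), sub2(target)   # independent (parallel) services
            target = 0.5 * (r1 + r2)              # coordinator reconciles responses
        print(round(target, 3))                   # converges to the compromise, 4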

  15. MassCascade: Visual Programming for LC-MS Data Processing in Metabolomics.

    PubMed

    Beisken, Stephan; Earll, Mark; Portwood, David; Seymour, Mark; Steinbeck, Christoph

    2014-04-01

    Liquid chromatography coupled to mass spectrometry (LC-MS) is commonly applied to investigate the small molecule complement of organisms. Several software tools are typically joined in custom pipelines to semi-automatically process and analyse the resulting data. General workflow environments like the Konstanz Information Miner (KNIME) offer the potential of an all-in-one solution for processing LC-MS data by allowing easy integration of different tools and scripts. We describe MassCascade and its workflow plug-in for processing LC-MS data. The Java library integrates frequently used algorithms in a modular fashion, enabling it to serve as a back-end for graphical front-ends. The functions available in MassCascade have been encapsulated in a plug-in for the workflow environment KNIME, allowing combined use with, for example, statistical workflow nodes from other providers and making the tool intuitive to use without knowledge of programming. The design of the software guarantees a high level of modularity, where processing functions can be quickly replaced or concatenated. MassCascade is an open-source library for LC-MS data processing in metabolomics. It embraces the concept of visual programming through its KNIME plug-in, simplifying the process of building complex workflows. The library was validated using open data.

  16. Simulating optoelectronic systems for remote sensing with SENSOR

    NASA Astrophysics Data System (ADS)

    Boerner, Anko

    2003-04-01

    The consistent end-to-end simulation of airborne and spaceborne remote sensing systems is an important task and sometimes the only way for the adaptation and optimization of a sensor and its observation conditions, the choice and test of algorithms for data processing, error estimation and the evaluation of the capabilities of the whole sensor system. The presented software simulator SENSOR (Software ENvironment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. It allows the simulation of a wide range of optoelectronic systems for remote sensing. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray tracing algorithm. The second part of the simulation environment considers the radiometry. It calculates the at-sensor radiance using a pre-calculated multidimensional lookup-table taking the atmospheric influence on the radiation into account. Part three consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimization requires the additional application of task-specific data processing algorithms. The principle of the end-to-end-simulation approach is explained, all relevant concepts of SENSOR are discussed, and examples of its use are given. The verification of SENSOR is demonstrated.

  17. Experimental land observing data system feasibility study

    NASA Technical Reports Server (NTRS)

    Buckley, J. L.; Kraiman, H.

    1982-01-01

    An end-to-end data system to support a Shuttle-based Multispectral Linear Array (MLA) mission in the mid-1980s was defined, and the Experimental Land Observing System (ELOS) is discussed. A ground system that exploits extensive assets from the LANDSAT-D Program to effectively meet the objectives of the ELOS mission was defined. The goal of 10 meter pixel precision, the variety of data acquisition capabilities, and the use of Shuttle are key to the mission requirements. Ground mission management functions are met through the use of GSFC's Multi-Satellite Operations Control Center (MSOCC). The MLA Image Generation Facility (MIGF) combines major hardware elements from the Applications Development Data System (ADDS) facility and the LANDSAT Assessment System (LAS) with a special-purpose MLA interface unit. LANDSAT-D image processing techniques, adapted to MLA characteristics, form the basis for the use of existing software and the definition of the new software required.

  18. Fast, High-Resolution Terahertz Radar Imaging at 25 Meters

    NASA Technical Reports Server (NTRS)

    Cooper, Ken B.; Dengler, Robert J.; Llombart, Nuria; Talukder, Ashit; Panangadan, Anand V.; Peay, Chris S.; Siegel, Peter H.

    2010-01-01

    We report improvements in the scanning speed and standoff range of an ultra-wide bandwidth terahertz (THz) imaging radar for person-borne concealed object detection. Fast beam scanning of the single-transceiver radar is accomplished by rapidly deflecting a flat, light-weight subreflector in a confocal Gregorian optical geometry. With RF back-end improvements also implemented, the radar imaging rate has increased by a factor of about 30 compared to that achieved previously in a 4 m standoff prototype instrument. In addition, a new 100 cm diameter ellipsoidal aluminum reflector yields beam spot diameters of approximately 1 cm over a 50x50 cm field of view at a range of 25 m, although some aberrations are observed that probably arise from misaligned optics. Through-clothes images of a concealed threat at 25 m range, acquired in 5 seconds, are presented, and the impact of reduced signal-to-noise from an even faster frame rate is analyzed. These results inform the system requirements for eventually achieving sub-second or video-rate THz radar imaging.

  19. Software Development for the Hobby-Eberly Telescope's Segment Alignment Maintenance System using LABView

    NASA Technical Reports Server (NTRS)

    Hall, Drew P.; Ly, William; Howard, Richard T.; Weir, John; Rakoczy, John; Roe, Fred (Technical Monitor)

    2002-01-01

    The software development for an upgrade to the Hobby-Eberly Telescope (HET) was done in LABView. In order to improve the performance of the HET at the McDonald Observatory, a closed-loop system had to be implemented to keep the mirror segments aligned during periods of observation. The control system, called the Segment Alignment Maintenance System (SAMs), utilized inductive sensors to measure the relative motions of the mirror segments. Software was developed in LABView to tie the sensors, operator interface, and mirror-control motors together. Developing the software in LABView allowed the system to be flexible, understandable, and able to be modified by the end users. Since LABView is built using block diagrams, the software naturally followed the designed control system's block and flow diagrams, and individual software blocks could be easily verified. LABView's many built-in display routines allowed easy visualization of diagnostic and health-monitoring data during testing. Also, since LABView is a multi-platform software package, different programmers could develop the code remotely on various types of machines. LABView's ease of use facilitated rapid prototyping and field testing. There were some unanticipated difficulties in the software development, but the use of LABView as the software "language" for the development of SAMs contributed to the overall success of the project.

  20. An Internet Protocol-Based Software System for Real-Time, Closed-Loop, Multi-Spacecraft Mission Simulation Applications

    NASA Technical Reports Server (NTRS)

    Burns, Richard D.; Davis, George; Cary, Everett; Higinbotham, John; Hogie, Keith

    2003-01-01

    A mission simulation prototype for Distributed Space Systems has been constructed using existing developmental hardware and software testbeds at NASA s Goddard Space Flight Center. A locally distributed ensemble of testbeds, connected through the local area network, operates in real time and demonstrates the potential to assess the impact of subsystem level modifications on system level performance and, ultimately, on the quality and quantity of the end product science data.

  1. Back reflectors based on buried Al{sub 2}O{sub 3} for enhancement of photon recycling in monolithic, on-substrate III-V solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    García, I.; Instituto de Energía Solar, Universidad Politécnica de Madrid, Avda Complutense s/n, 28040 Madrid; Kearns-McCoy, C. F.

    Photon management has been shown to be a fruitful way to boost the open circuit voltage and efficiency of high quality solar cells. Metal or low-index dielectric-based back reflectors can be used to confine the reemitted photons and enhance photon recycling. Gaining access to the back of the solar cell for placing these reflectors implies having to remove the substrate, with the associated added complexity to the solar cell manufacturing. In this work, we analyze the effectiveness of a single-layer reflector placed at the back of on-substrate solar cells, and assess the photon recycling improvement as a function of the refractive index of this layer. Al{sub 2}O{sub 3}-based reflectors, created by lateral oxidation of an AlAs layer, are identified as a feasible choice for on-substrate solar cells, which can produce a V{sub oc} increase of around 65% of the maximum increase attainable with an ideal reflector. The experimental results obtained using prototype GaAs cell structures show a greater than two-fold increase in the external radiative efficiency and a V{sub oc} increase of ∼2% (∼18 mV), consistent with theoretical calculations. For GaAs cells with higher internal luminescence, this V{sub oc} boost is calculated to be up to 4% relative (36 mV), which directly translates into at least 4% higher relative efficiency.

  2. Energy reconstruction of hadrons in highly granular combined ECAL and HCAL systems

    NASA Astrophysics Data System (ADS)

    Israeli, Y.

    2018-05-01

    This paper discusses the hadronic energy reconstruction of two combined electromagnetic and hadronic calorimeter systems using physics prototypes of the CALICE collaboration: the silicon-tungsten electromagnetic calorimeter (Si-W ECAL) and the scintillator-SiPM based analog hadron calorimeter (AHCAL); and the scintillator-tungsten electromagnetic calorimeter (ScECAL) and the AHCAL. These systems were operated in hadron beams at CERN and FNAL, permitting the study of the performance of combined ECAL and HCAL systems. Two techniques for the energy reconstruction are used: a standard reconstruction based on calibrated sub-detector energy sums, and one based on a software compensation algorithm making use of the local energy density information provided by the high granularity of the detectors. The software compensation-based algorithm improves the hadronic energy resolution by up to 30% compared to the standard reconstruction. The combined system data show energy resolutions comparable to those achieved for data with showers starting only in the AHCAL and therefore demonstrate the success of the inter-calibration of the different sub-systems, despite their different geometries and different readout technologies.
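
    Schematically, software compensation reweights each hit by its local energy density: dense, electromagnetic-like deposits are weighted down and sparse, hadron-like ones up. A toy version follows (the thresholds and weights below are invented; CALICE derives its parametrization from data):

        import numpy as np

        def reconstruct_energy(hit_energies, cell_volume=1.0):
            e = np.asarray(hit_energies, dtype=float)
            rho = e / cell_volume                        # local energy density
            weights = np.where(rho > 5.0, 0.8,           # dense hits: weight down
                       np.where(rho > 1.0, 1.0, 1.3))    # sparse hits: weight up
            return float(np.sum(weights * e))

        print(reconstruct_energy([0.2, 0.4, 7.5, 1.2]))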

  3. Roll-Out and Turn-Off Display Software for Integrated Display System

    NASA Technical Reports Server (NTRS)

    Johnson, Edward J., Jr.; Hyer, Paul V.

    1999-01-01

    This report describes the software products, system architectures and operational procedures developed by Lockheed-Martin in support of the Roll-Out and Turn-Off (ROTO) sub-element of the Low Visibility Landing and Surface Operations (LVLASO) program at the NASA Langley Research Center. The ROTO portion of this program focuses on developing technologies that aid pilots in the task of managing the deceleration of an aircraft to a pre-selected exit taxiway. This report focuses on software that produces a system of redundant deceleration cues for a pilot during the landing roll-out, and presents these cues on a head up display (HUD). The software also produces symbology for aircraft operational phases involving cruise flight, approach, takeoff, and go-around. The algorithms and data sources used to compute the deceleration guidance and generate the displays are discussed. Examples of the display formats and symbology options are presented. Logic diagrams describing the design of the ROTO software module are also given.
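
    One of the redundant cues such a system can present is the constant deceleration still needed to reach the selected exit at taxi speed, which is a direct kinematic computation (a hedged sketch only; the actual ROTO algorithms also account for runway condition and other factors):

        def required_deceleration(v_now, v_exit, dist_to_exit):
            """Constant deceleration (positive, m/s^2) from kinematics:
            v_exit^2 = v_now^2 - 2*a*d  ->  a = (v_now^2 - v_exit^2) / (2*d)."""
            return (v_now ** 2 - v_exit ** 2) / (2.0 * dist_to_exit)

        # Illustrative numbers: 70 m/s ground speed, 15 m/s exit speed, 1200 m to go.
        print(round(required_deceleration(70.0, 15.0, 1200.0), 2))  # ~1.95 m/s^2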

  4. Aerospace Software Engineering for Advanced Systems Architectures (L’Ingenierie des Logiciels Pour les Architectures des Systemes Aerospatiaux)

    DTIC Science & Technology

    1993-11-01

    Eliezer N. Solomon, Steve Sedrel, Westinghouse Electronic Systems Group, P.O. Box 746, MS 432, Baltimore, Maryland 21203-0746, USA. SUMMARY: The United States...subset of the Joint Integrated Avionics Working Group (JIAWG)...NewAgentCollection, which has four performance parameters: Acceptor, of type Task._D...Published November 1993. Distribution and Availability on Back Cover. AGARD-CP54, ADVISORY GROUP FOR AEROSPACE RESEARCH & DEVELOPMENT, 7 RUE ANCELLE, 92200

  5. Common Board Design for the OBC I/O Unit and The OBC CCSDS Unit of The Stuttgart University Satellite "Flying Laptop"

    NASA Astrophysics Data System (ADS)

    Eickhoff, Jens; Cook, Barry; Walker, Paul; Habinc, Sadi; Witt, Rouven; Roser, Hans-Peter

    2011-08-01

    As already published in another paper at DASIA 2010 in Budapest [1], the University of Stuttgart, Germany, is developing an advanced 3-axis stabilized small satellite applying industry standards for command/control techniques, onboard software design and onboard computer components. The satellite has a launch mass of approx. 120 kg and is foreseen to be launched at the end of 2013 as a piggy-back payload on an Indian PSLV launcher. During phase C the main challenge was the conceptual design of an ultra-compact and performant onboard computer (OBC) able to support an industry-standard operating system, a PUS-standard-based onboard software (OBSW) and CCSDS-standard-based ground/space communication. The developed architecture is based on four main elements (see [1] and Figure 4): the OBC core board (a single-board computer based on the LEON3 FT architecture), an I/O board for all OBC digital interfaces to S/C equipment, a CCSDS TC/TM pre-processor board, and the CPDU, which is embedded in the PCDU. The EM for the OBC core has meanwhile been shipped to the University by the supplier Aeroflex Colorado Springs, USA, and has been in use in Stuttgart since January 2011 (Figure 2 and Figure 3 provide brief impressions). This paper concentrates on the common design of the I/O board and the CCSDS processor boards.

  6. Improvement of bias-stability in amorphous-indium-gallium-zinc-oxide thin-film transistors by using solution-processed Y{sub 2}O{sub 3} passivation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Sungjin; Mativenga, Mallory; Kim, Youngoo

    2014-08-04

    We demonstrate back channel improvement of back-channel-etch amorphous-indium-gallium-zinc-oxide (a-IGZO) thin-film transistors by using solution-processed yttrium oxide (Y{sub 2}O{sub 3}) passivation. Two different solvents, which are acetonitrile (35%) + ethylene glycol (65%), solvent A, and deionized water, solvent B, are investigated for the spin-on process of the Y{sub 2}O{sub 3} passivation—performed after patterning source/drain (S/D) Mo electrodes by a conventional HNO{sub 3}-based wet-etch process. Both solvents yield devices with good performance but those passivated by using solvent B exhibit better light and bias stability. Presence of yttrium at the a-IGZO back interface, where it occupies metal vacancy sites, is confirmed by X-ray photoelectron spectroscopy. The passivation effect of yttrium is more significant when solvent A is used because of the existence of more metal vacancies, given that the alcohol (65% ethylene glycol) in solvent A may dissolve the metal oxide (a-IGZO) through the formation of alkoxides and water.

  7. LabVIEW interface with Tango control system for a multi-technique X-ray spectrometry IAEA beamline end-station at Elettra Sincrotrone Trieste

    NASA Astrophysics Data System (ADS)

    Wrobel, P. M.; Bogovac, M.; Sghaier, H.; Leani, J. J.; Migliori, A.; Padilla-Alvarez, R.; Czyzycki, M.; Osan, J.; Kaiser, R. B.; Karydas, A. G.

    2016-10-01

    A new synchrotron beamline end-station for multipurpose X-ray spectrometry applications has been recently commissioned and is currently accessible by end-users at the XRF beamline of Elettra Sincrotrone Trieste. The end-station consists of an ultra-high-vacuum chamber whose main instrument is a seven-axis motorized manipulator for sample and detector positioning, together with different kinds of X-ray detectors and optical cameras. The beamline end-station allows performing measurements with different X-ray spectrometry techniques such as Microscopic X-Ray Fluorescence analysis (μXRF), Total Reflection X-Ray Fluorescence analysis (TXRF), Grazing Incidence/Exit X-Ray Fluorescence analysis (GI-XRF/GE-XRF), X-Ray Reflectometry (XRR), and X-Ray Absorption Spectroscopy (XAS). A LabVIEW Graphical User Interface (GUI), bound to the Tango control system and consisting of many custom-made software modules, is utilized as a user-friendly tool for controlling all of the end-station hardware components. The present work describes this advanced Tango and LabVIEW software platform, which exploits in an optimal, synergistic manner the merits and functionality of these well-established programming and equipment-control tools.
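
    The kind of control call such a LabVIEW GUI issues through Tango can be sketched with the standard Tango Python binding, PyTango. This is a hedged illustration: the device name, attribute and setpoint below are invented, and the snippet assumes a running Tango device server:

        import tango

        # One manipulator axis, addressed by a hypothetical Tango device name.
        motor = tango.DeviceProxy("elettra/xrf/manipulator-axis1")
        print(motor.state())                           # e.g. ON, MOVING, FAULT
        motor.write_attribute("position", 12.5)        # request a move (mm, invented)
        print(motor.read_attribute("position").value)  # read back the current position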

  8. Poly(ADP-ribose)polymerases are involved in microhomology mediated back-up non-homologous end joining in Arabidopsis thaliana.

    PubMed

    Jia, Qi; den Dulk-Ras, Amke; Shen, Hexi; Hooykaas, Paul J J; de Pater, Sylvia

    2013-07-01

    Besides the KU-dependent classical non-homologous end-joining (C-NHEJ) pathway, an alternative NHEJ pathway first identified in mammalian systems, which is often called the back-up NHEJ (B-NHEJ) pathway, was also found in plants. In mammalian systems PARP was found to be one of the essential components in B-NHEJ. Here we investigated whether PARP1 and PARP2 were also involved in B-NHEJ in Arabidopsis. To this end Arabidopsis parp1, parp2 and parp1parp2 (p1p2) mutants were isolated and functionally characterized. The p1p2 double mutant was crossed with the C-NHEJ ku80 mutant resulting in the parp1parp2ku80 (p1p2k80) triple mutant. As expected, because of their role in single strand break repair (SSBR) and base excision repair (BER), the p1p2 and p1p2k80 mutants were shown to be sensitive to treatment with the DNA damaging agent MMS. End-joining assays in cell-free leaf protein extracts of the different mutants using linear DNA substrates with different ends reflecting a variety of double strand breaks were performed. The results showed that compatible 5'-overhangs were accurately joined in all mutants, that KU80 protected the ends preventing the formation of large deletions and that PARP proteins were involved in microhomology mediated end joining (MMEJ), one of the characteristics of B-NHEJ.

  9. A randomized clinical trial of the effectiveness of mechanical traction for sub-groups of patients with low back pain: study methods and rationale

    PubMed Central

    2010-01-01

    Background Patients with signs of nerve root irritation represent a sub-group of those with low back pain who are at increased risk of persistent symptoms and progression to costly and invasive management strategies including surgery. A period of non-surgical management is recommended for most patients, but there is little evidence to guide non-surgical decision-making. We conducted a preliminary study examining the effectiveness of a treatment protocol of mechanical traction with extension-oriented activities for patients with low back pain and signs of nerve root irritation. The results suggested this approach may be effective, particularly in a more specific sub-group of patients. The aim of this study will be to examine the effectiveness of treatment that includes traction for patients with low back pain and signs of nerve root irritation, and within the pre-defined sub-group. Methods/Design The study will recruit 120 patients with low back pain and signs of nerve root irritation. Patients will be randomized to receive an extension-oriented treatment approach, with or without the addition of mechanical traction. Randomization will be stratified based on the presence of the pre-defined sub-grouping criteria. All patients will receive 12 physical therapy treatment sessions over 6 weeks. Follow-up assessments will occur after 6 weeks, 6 months, and 1 year. The primary outcome will be disability measured with a modified Oswestry questionnaire. Secondary outcomes will include self-reports of low back and leg pain intensity, quality of life, global rating of improvement, additional healthcare utilization, and work absence. Statistical analysis will be based on intention to treat principles and will use linear mixed model analysis to compare treatment groups, and examine the interaction between treatment and sub-grouping status. Discussion This trial will provide a methodologically rigorous evaluation of the effectiveness of using traction for patients with low back pain and signs of nerve root irritation, and will examine the validity of a pre-defined sub-grouping hypothesis. The results will provide evidence to inform non-surgical decision-making for these patients. Trial Registration This trial has been registered with http://ClinicalTrials.gov: NCT00942227 PMID:20433733

  10. Computer-implemented system and method for automated and highly accurate plaque analysis, reporting, and visualization

    NASA Technical Reports Server (NTRS)

    Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)

    2008-01-01

    A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.

  11. Integration of Modelling and Graphics to Create an Infrared Signal Processing Test Bed

    NASA Astrophysics Data System (ADS)

    Sethi, H. R.; Ralph, John E.

    1989-03-01

    The work reported in this paper was carried out as part of a contract with MoD (PE) UK. It considers the problems associated with realistic modelling of a passive infrared system in an operational environment. Ideally all aspects of the system and environment should be integrated into a complete end-to-end simulation, but in the past limited computing power has prevented this. Recent developments in workstation technology and the increasing availability of parallel processing techniques make end-to-end simulation possible. However, the complexity and speed of such simulations create difficulties for the operator in controlling the software and understanding the results. These difficulties can be greatly reduced by providing an extremely user-friendly interface and a very flexible, high power, high resolution colour graphics capability. Most system modelling is based on separate software simulation of the individual components of the system itself and its environment. These component models may have their own characteristic inbuilt assumptions and approximations, may be written in the language favoured by the originator and may have a wide variety of input and output conventions and requirements. The models and their limitations need to be matched to the range of conditions appropriate to the operational scenario. A comprehensive set of data bases needs to be generated by the component models and these data bases must be made readily available to the investigator. Performance measures need to be defined and displayed in some convenient graphics form. Some options are presented for combining available hardware and software to create an environment within which the models can be integrated, and which provides the required man-machine interface, graphics and computing power. The impact of massively parallel processing and artificial intelligence is discussed. Parallel processing will make real time end-to-end simulation possible and will greatly improve the graphical visualisation of the model output data. Artificial intelligence should help to enhance the man-machine interface.

  12. Computational System For Rapid CFD Analysis In Engineering

    NASA Technical Reports Server (NTRS)

    Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.

    1995-01-01

    Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.

  13. Readout, first- and second-level triggers of the new Belle silicon vertex detector

    NASA Astrophysics Data System (ADS)

    Friedl, M.; Abe, R.; Abe, T.; Aihara, H.; Asano, Y.; Aso, T.; Bakich, A.; Browder, T.; Chang, M. C.; Chao, Y.; Chen, K. F.; Chidzik, S.; Dalseno, J.; Dowd, R.; Dragic, J.; Everton, C. W.; Fernholz, R.; Fujii, H.; Gao, Z. W.; Gordon, A.; Guo, Y. N.; Haba, J.; Hara, K.; Hara, T.; Harada, Y.; Haruyama, T.; Hasuko, K.; Hayashi, K.; Hazumi, M.; Heenan, E. M.; Higuchi, T.; Hirai, H.; Hitomi, N.; Igarashi, A.; Igarashi, Y.; Ikeda, H.; Ishino, H.; Itoh, K.; Iwaida, S.; Kaneko, J.; Kapusta, P.; Karawatzki, R.; Kasami, K.; Kawai, H.; Kawasaki, T.; Kibayashi, A.; Koike, S.; Korpar, S.; Križan, P.; Kurashiro, H.; Kusaka, A.; Lesiak, T.; Limosani, A.; Lin, W. C.; Marlow, D.; Matsumoto, H.; Mikami, Y.; Miyake, H.; Moloney, G. R.; Mori, T.; Nakadaira, T.; Nakano, Y.; Natkaniec, Z.; Nozaki, S.; Ohkubo, R.; Ohno, F.; Okuno, S.; Onuki, Y.; Ostrowicz, W.; Ozaki, H.; Peak, L.; Pernicka, M.; Rosen, M.; Rozanska, M.; Sato, N.; Schmid, S.; Shibata, T.; Stamen, R.; Stanič, S.; Steininger, H.; Sumisawa, K.; Suzuki, J.; Tajima, H.; Tajima, O.; Takahashi, K.; Takasaki, F.; Tamura, N.; Tanaka, M.; Taylor, G. N.; Terazaki, H.; Tomura, T.; Trabelsi, K.; Trischuk, W.; Tsuboyama, T.; Uchida, K.; Ueno, K.; Ueno, K.; Uozaki, N.; Ushiroda, Y.; Vahsen, S.; Varner, G.; Varvell, K.; Velikzhanin, Y. S.; Wang, C. C.; Wang, M. Z.; Watanabe, M.; Watanabe, Y.; Yamada, Y.; Yamamoto, H.; Yamashita, Y.; Yamashita, Y.; Yamauchi, M.; Yanai, H.; Yang, R.; Yasu, Y.; Yokoyama, M.; Ziegler, T.; Žontar, D.

    2004-12-01

    A major upgrade of the Silicon Vertex Detector (SVD 2.0) of the Belle experiment at the KEKB factory was installed along with new front-end and back-end electronics systems during the summer shutdown period in 2003 to cope with higher particle rates, improve the track resolution and meet the increasing requirements of radiation tolerance. The SVD 2.0 detector modules are read out by VA1TA chips which provide "fast or" (hit) signals that are combined by the back-end FADCTF modules into coarse but immediate level-0 track trigger signals at rates of several tens of kHz. Moreover, the digitized detector signals are compared to threshold lookup tables in the FADCTFs to pass on hit information on a single strip basis to the subsequent level 1.5 trigger system, which reduces the rate below the kHz range. Both FADCTF and level 1.5 electronics make use of parallel real-time processing in Field Programmable Gate Arrays (FPGAs), while further data acquisition and event building is done by PC farms running Linux. The new readout system hardware is described and the first results obtained with cosmics are shown.
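
    The per-strip threshold comparison described for the FADCTF modules lends itself to a compact software model. The sketch below, with invented ADC and threshold values, mimics in NumPy what the FPGA logic does in parallel hardware; it is an illustration of the idea, not the Belle firmware.

        # Minimal software model of per-strip threshold lookup: each strip's
        # digitized ADC value is compared against its own threshold entry,
        # yielding single-strip hit bits for the next trigger level.
        # All values here are invented for illustration.
        import numpy as np

        adc = np.array([12, 87, 15, 60, 11, 14, 95, 13])           # digitized strip signals
        threshold_lut = np.array([20, 25, 20, 75, 22, 20, 30, 21])  # per-strip thresholds

        hits = adc > threshold_lut       # one comparator per strip, in parallel
        print(np.flatnonzero(hits))      # strips 1 and 6 fire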

  14. Network, system, and status software enhancements for the autonomously managed electrical power system breadboard. Volume 3: Commands specification

    NASA Technical Reports Server (NTRS)

    Mckee, James W.

    1990-01-01

    This volume (3 of 4) contains the specification for the command language for the AMPS system. The volume contains a requirements specification for the operating system and commands and a design specification for the operating system and commands. The operating system and commands sit on top of the protocol. The commands are an extension of the present set of AMPS commands in that they are more compact, allow multiple sub-commands to be bundled into one command, and have provisions for identifying the sender and the intended receiver. The commands make no change to the actual software that implements them.

  15. A web-based solution to visualize operational monitoring data in the Trigger and Data Acquisition system of the ATLAS experiment at the LHC

    NASA Astrophysics Data System (ADS)

    Avolio, G.; D'Ascanio, M.; Lehmann-Miotto, G.; Soloviev, I.

    2017-10-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider at CERN is composed of a large number of distributed hardware and software components (about 3000 computers and more than 25000 applications) which, in a coordinated manner, provide the data-taking functionality of the overall system. During data taking runs, a huge flow of operational data is produced in order to constantly monitor the system and allow proper detection of anomalies or misbehaviours. In the ATLAS trigger and data acquisition system, operational data are archived and made available to applications by the P-BEAST (Persistent Back-End for the Atlas Information System of TDAQ) service, implementing a custom time-series database. The possibility to efficiently visualize both real-time and historical operational data is a great asset facilitating both online identification of problems and post-mortem analysis. This paper will present a web-based solution developed to achieve such a goal: the solution leverages the flexibility of the P-BEAST archiver to retrieve data, and exploits the versatility of the Grafana dashboard builder to offer a very rich user experience. Additionally, particular attention will be given to the way some technical challenges (like the efficient visualization of a huge amount of data and the integration of the P-BEAST data source in Grafana) have been faced and solved.
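
    As a hedged sketch of how a custom archive such as P-BEAST can be surfaced in Grafana, the snippet below implements a minimal HTTP shim (requires Flask) in the style of Grafana's generic "SimpleJSON" data source; the fetch_series() helper, its canned numbers, and the simplified range handling are placeholders, not the real P-BEAST API or the plugin the paper describes.

        # Tiny HTTP shim answering Grafana "SimpleJSON"-style /query calls
        # with time series pulled from an archive back-end (stubbed here).
        from flask import Flask, request, jsonify
        import time

        app = Flask(__name__)

        def fetch_series(metric, start_ms, end_ms):
            # Placeholder: a real shim would query the archive back-end here.
            return [[42.0, start_ms], [43.5, (start_ms + end_ms) // 2], [41.2, end_ms]]

        @app.route("/query", methods=["POST"])
        def query():
            req = request.get_json()
            # Simplified: serve the last hour instead of parsing Grafana's
            # ISO8601 time range from the request body.
            end_ms = int(time.time() * 1000)
            out = [{"target": t["target"],
                    "datapoints": fetch_series(t["target"], end_ms - 3_600_000, end_ms)}
                   for t in req.get("targets", [])]
            return jsonify(out)

        if __name__ == "__main__":
            app.run(port=8080)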

  16. Qualification and Reliability for MEMS and IC Packages

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    2004-01-01

    Advanced IC electronic packages are moving toward miniaturization from two different directions, the front-end and back-end processes, each with its own challenges. Moving more of the back-end process to the front end, e.g. microelectromechanical systems (MEMS) wafer-level packaging (WLP), enables reductions in size and cost. Use of direct flip-chip die is the most efficient approach if and when the issues of known good die and board assembly are resolved. Wafer-level packaging solves the known-good-die issue by enabling package-level test, but it has its own limitations, e.g. I/O count, additional cost, and reliability. From the back-end side, system-in-a-package (SIAP/SIP) development is a response to an increasing demand for package- and die-level integration of different functions into one unit to reduce size and cost and improve functionality. MEMS add another challenging dimension to electronic packaging since they include moving mechanical elements. Conventional qualification and reliability approaches need to be modified and expanded in most cases in order to detect new, unknown failure modes. This paper reviews four standards, already released or under development, that specifically address qualification and reliability of assembled packages. Exposure to thermal cycles, monotonic bend testing, mechanical shock, and drop testing are covered in these specifications. Finally, mechanical and thermal cycle qualification data generated for a MEMS accelerometer are presented. The MEMS device was an element of an inertial measurement unit (IMU) qualified for the NASA Mars Exploration Rovers (MER), Spirit and Opportunity, which are currently roving the Martian surface.

  17. SYRMEP Tomo Project: a graphical user interface for customizing CT reconstruction workflows.

    PubMed

    Brun, Francesco; Massimi, Lorenzo; Fratini, Michela; Dreossi, Diego; Billé, Fulvio; Accardo, Agostino; Pugliese, Roberto; Cedola, Alessia

    2017-01-01

    When considering the acquisition of experimental synchrotron radiation (SR) X-ray CT data, the reconstruction workflow cannot be limited to the essential computational steps of flat fielding and filtered back projection (FBP). More refined image processing is often required, usually to compensate for artifacts and enhance the quality of the reconstructed images. In principle, it would be desirable to optimize the reconstruction workflow at the facility during the experiment (beamtime). However, several practical factors affect the image reconstruction part of the experiment and users are likely to conclude the beamtime with sub-optimal reconstructed images. Through an example of application, this article presents SYRMEP Tomo Project (STP), an open-source software tool conceived to let users design custom CT reconstruction workflows. STP has been designed for post-beamtime (off-line) use and for re-reconstruction of archived data at the user's home institution, where simple computing resources are available. Releases of the software can be downloaded at the Elettra Scientific Computing group GitHub repository https://github.com/ElettraSciComp/STP-Gui.
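
    A minimal sketch of the two essential steps the abstract names, flat fielding followed by filtered back projection, is shown below using NumPy and scikit-image on synthetic data; it is not STP code, and the projection, flat, and dark values are fabricated for illustration.

        # Flat fielding normalizes out beam and detector response, then the
        # -log converts to absorption; FBP inverts the sinogram.
        import numpy as np
        from skimage.transform import iradon

        # Synthetic sinogram: (n_detector_pixels, n_angles), plus flat/dark fields.
        proj = np.random.poisson(900.0, size=(128, 180)).astype(float)
        flat = np.full((128, 180), 1000.0)   # beam profile without sample
        dark = np.full((128, 180), 10.0)     # detector offset

        norm = (proj - dark) / (flat - dark)
        sino = -np.log(np.clip(norm, 1e-6, None))

        # Filtered back projection over evenly spaced angles.
        theta = np.linspace(0.0, 180.0, sino.shape[1], endpoint=False)
        recon = iradon(sino, theta=theta, filter_name="ramp")
        print(recon.shape)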

  18. Decision generation tools and Bayesian inference

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz; Wang, Wenjian; Forrester, Thomas; Kostrzewski, Andrew; Veeris, Christian; Nielsen, Thomas

    2014-05-01

    Digital Decision Generation (DDG) tools are important software sub-systems of Command and Control (C2) systems and technologies. In this paper, we present a special type of DDG based on Bayesian inference, related to adverse (hostile) networks, with such important applications as terrorism-related and organized crime networks.
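
    A toy example of the kind of Bayesian update such a DDG tool might perform is sketched below: the probability that a node belongs to a hostile network is revised as independent evidence items arrive. The priors and likelihoods are invented numbers, not values from the paper.

        # Sequential Bayesian updating under an independence assumption.
        def bayes_update(prior, p_e_given_h, p_e_given_not_h):
            """Posterior P(H|E) from prior P(H) and the two likelihoods."""
            num = p_e_given_h * prior
            return num / (num + p_e_given_not_h * (1.0 - prior))

        p = 0.05                                # prior: node is hostile
        for lik_h, lik_not_h in [(0.7, 0.1),    # intercepted contact
                                 (0.6, 0.2),    # suspicious transaction
                                 (0.8, 0.3)]:   # co-location report
            p = bayes_update(p, lik_h, lik_not_h)
        print(round(p, 3))  # ~0.747 after three pieces of evidence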

  19. Towards Model-Driven End-User Development in CALL

    ERIC Educational Resources Information Center

    Farmer, Rod; Gruba, Paul

    2006-01-01

    The purpose of this article is to introduce end-user development (EUD) processes to the CALL software development community. EUD refers to the active participation of end-users, as non-professional developers, in the software development life cycle. Unlike formal software engineering approaches, the focus in EUD on means/ends development is…

  20. Addressing Challenges in the Acquisition of Secure Software Systems With Open Architectures

    DTIC Science & Technology

    2012-04-30

    as a “broker” to market specific research topics identified by our sponsors to NPS graduate students. This three-pronged approach provides for a...breaks, and the day-ending socials. Many of our researchers use these occasions to establish new teaming arrangements for future research work. In the...software (CSS) and open source software (OSS). Federal government acquisition policy, as well as many leading enterprise IT centers, now encourage the use

  1. Advanced Software Development Workstation Project, phase 3

    NASA Technical Reports Server (NTRS)

    1991-01-01

    ACCESS provides a generic capability to develop software information system applications which are explicitly intended to facilitate software reuse. In addition, it provides the capability to retrofit existing large applications with a user-friendly front end for preparation of input streams in a way that will reduce required training time, improve the productivity of even experienced users, and increase accuracy. Current and past work shows that ACCESS will be scalable to much larger object bases.

  2. Can ICTs contribute to the efficiency and provide equitable access to the health care system in Sub-Saharan Africa? The Mali experience.

    PubMed

    Bagayoko, C O; Anne, A; Fieschi, M; Geissbuhler, A

    2011-01-01

    The aim of this study is to demonstrate from actual projects that ICT can contribute to the balance of health systems in developing countries and to equitable access to human resources and quality health care services. Our study is focused on two essential elements: i) capacity building and support of health professionals, especially those in isolated areas, using telemedicine tools; ii) strengthening of hospital information systems by taking advantage of the full potential offered by open-source software. Our research was performed on the activities carried out in Mali and in part through the RAFT (Réseau en Afrique Francophone pour la Télémédecine) Network. We focused mainly on the activities of e-learning, telemedicine, and hospital information systems. These include the use of platforms that work with low Internet connection bandwidth. With regard to information systems, our strategy is mainly focused on the improvement and implementation of open-source tools. Several telemedicine application projects were reviewed, including continuing online medical education and the support of isolated health professionals through the use of innovative tools. This review covers the RAFT project for continuing medical education in French-speaking Africa, the tele-radiology project in Mali, the "EQUI-ResHuS" project for equal access to health over ICT in Mali, and the "Pact-e.Santé" project for community health workers in Mali. We also detailed a large-scale experience of an open-source hospital information system implemented in Mali: "Cinz@n". We report on successful experiences in the field of telemedicine and on the evaluation by the end-users of the Cinz@n project, a pilot hospital information system in Mali. These reflect the potential of healthcare ICT for Sub-Saharan African countries.

  3. A multi-tracer approach coupled to numerical models to improve understanding of mountain block processes in a high elevation, semi-humid catchment

    NASA Astrophysics Data System (ADS)

    Dwivedi, R.; McIntosh, J. C.; Meixner, T.; Ferré, T. P. A.; Chorover, J.

    2016-12-01

    Mountain systems are critical sources of recharge to adjacent alluvial basins in dryland regions. Yet, mountain systems face poorly defined threats due to climate change in terms of reduced snowpack, precipitation changes, and increased temperatures. Fundamentally, the climate risks to mountain systems are uncertain due to our limited understanding of natural recharge processes. Our goal is to combine measurements and models to provide improved spatial and temporal descriptions of groundwater flow paths and transit times in a headwater catchment located in a sub-humid region. This information is important to quantifying groundwater age and, thereby, to providing more accurate assessments of the vulnerability of these systems to climate change. We are using a combination of geochemical composition, along with 2H/18O and 3H isotopes, to improve an existing conceptual model for mountain block recharge (MBR) for the Marshall Gulch Catchment (MGC) located within the Santa Catalina Mountains. The current model only focuses on shallow flow paths through the upper unconfined aquifer, with no representation of the catchment's fractured-bedrock aquifer. Groundwater flow, solute transport, and groundwater age will be modeled throughout MGC using COMSOL Multiphysics® software. Competing models in terms of the spatial distribution of required hydrologic parameters, e.g. hydraulic conductivity and porosity, will be proposed, and these models will be used to design discriminatory data collection efforts based on multi-tracer methods. Initial end-member mixing results indicate that baseflow in MGC, if considered the same as the streamflow during dry periods, is not represented by the chemistry of deep groundwater in the mountain system. In the ternary mixing space, most of the samples plot outside the mixing curve. Therefore, to further constrain the contributions of water from various reservoirs, we are collecting stable water isotopes, tritium, and solute chemistry of precipitation, shallow groundwater, local spring water, MGC streamflow, and at a drainage location much lower than the MGC outlet, to better define and characterize each end-member of the ternary mixing model. Consequently, the end-member mixing results are expected to help us better understand MBR processes in and beyond MGC.
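
    For readers unfamiliar with ternary end-member mixing, the sketch below shows the underlying computation: with two conservative tracers plus the constraint that fractions sum to one, the three end-member contributions to a stream sample solve a 3x3 linear system. The tracer values are hypothetical, not Marshall Gulch data.

        # Ternary end-member mixing as a linear system.
        import numpy as np

        # Columns: precipitation, shallow groundwater, deep groundwater.
        # Rows: delta-18O (permil), Cl (mg/L), and the sum-to-one constraint.
        A = np.array([[-11.0, -9.0, -7.5],
                      [  0.5,  3.0, 12.0],
                      [  1.0,  1.0,  1.0]])
        stream = np.array([-9.2, 3.4, 1.0])   # observed streamflow sample

        fractions = np.linalg.solve(A, stream)
        print(fractions.round(3))             # end-member contributions
        # If any fraction falls outside [0, 1], the sample plots outside the
        # mixing triangle -- the situation the abstract reports.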

  4. 77 FR 26734 - Notice of Intent To Extend a Currently Approved Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-07

    ... American Samoa, Guam, Micronesia, Northern Marianas, Puerto Rico, and the Virgin Islands. The objectives of... practices; (2) youth group participants; and (3) staff. NEERS consists of separate software sub-systems for...

  5. On an LAS-integrated soft PLC system based on WorldFIP fieldbus.

    PubMed

    Liang, Geng; Li, Zhijun; Li, Wen; Bai, Yan

    2012-01-01

    In discrete control based on traditional WorldFIP field intelligent nodes, communication efficiency is lowered and real-time performance is not good enough when the scale of field control is large. A soft PLC system based on the WorldFIP fieldbus was therefore designed and implemented. A Link Activity Scheduler (LAS) was integrated into the system, and field intelligent I/O modules acted as networked basic nodes. Discrete control logic was implemented with the LAS-integrated soft PLC system. The proposed system was composed of a configuration and supervisory sub-system and running sub-systems. The configuration and supervisory sub-system was implemented with a personal computer or an industrial personal computer; the running sub-systems were designed and implemented on embedded hardware and software. Communication and scheduling in the running sub-system were implemented with one embedded sub-module; discrete control and system self-diagnosis were implemented with another. The structure of the proposed system is presented and the methodology for the design of the sub-systems is expounded. Experiments were carried out to evaluate the performance of the proposed system in both discrete and process control, by investigating the effect on control performance of the network data transmission delay induced by the soft PLC in the WorldFIP network and of CPU workload. The experimental observations indicated that the proposed system is practically applicable. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
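
    The scan cycle such a soft PLC runtime executes can be sketched compactly: read the field inputs, evaluate the discrete control logic, write the outputs, and repeat at a fixed period. The Python sketch below illustrates the pattern with a start/stop seal-in rung; the I/O functions are stand-ins for the WorldFIP exchanges handled by the embedded sub-modules, not the paper's implementation.

        # Classic soft-PLC scan cycle: input image -> logic -> output image.
        import time

        def read_inputs():
            # Placeholder for a WorldFIP buffer read from field I/O modules.
            return {"start_pb": True, "stop_pb": False, "overload": False}

        def write_outputs(outputs):
            # Placeholder for a WorldFIP buffer write to field I/O modules.
            pass

        motor_on = False
        SCAN_PERIOD_S = 0.010                       # 10 ms scan target

        for _ in range(100):                        # bounded loop for the sketch
            t0 = time.monotonic()
            inputs = read_inputs()                  # 1) input image
            # 2) control logic: start/stop seal-in with overload trip
            motor_on = (inputs["start_pb"] or motor_on) \
                       and not inputs["stop_pb"] and not inputs["overload"]
            write_outputs({"motor": motor_on})      # 3) output image
            time.sleep(max(0.0, SCAN_PERIOD_S - (time.monotonic() - t0)))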

  6. Evolution of Software-Only-Simulation at NASA IV and V

    NASA Technical Reports Server (NTRS)

    McCarty, Justin; Morris, Justin; Zemerick, Scott

    2014-01-01

    Software-only simulations are an emerging but quickly developing field of study throughout NASA. The NASA Independent Verification and Validation (IV&V) Independent Test Capability (ITC) team has been rapidly building a collection of simulators for a wide range of NASA missions. ITC specializes in full end-to-end simulations that enable developers, V&V personnel, and operators to test-as-you-fly. In four years, the team has delivered a wide variety of spacecraft simulations, ranging from lower-complexity science missions such as the Global Precipitation Measurement (GPM) satellite and the Deep Space Climate Observatory (DSCOVR) to extremely complex missions such as the James Webb Space Telescope (JWST) and the Space Launch System (SLS). This paper describes the evolution of ITC's technologies and processes that have been utilized to design, implement, and deploy end-to-end simulation environments for various NASA missions. A comparison of mission simulators is presented, with focus on technology and lessons learned in complexity, hardware modeling, and continuous integration. The paper also describes the methods for executing the missions' unmodified flight software binaries (not cross-compiled) for verification and validation activities.

  7. Administrative Issues in Planning a Library End User Searching Program. ERIC Digest.

    ERIC Educational Resources Information Center

    Machovec, George S.

    This digest presents a reprint of an article which examines management principles that should be considered when implementing library end user searching programs. A brief discussion of specific implementation issues includes needs assessment, hardware, software, training, budgeting, what systems to offer, publicity and marketing, policies and…

  8. Transfers and Enhancements of the Teleconferencing System and Support of the Special Operations Planning Aids

    DTIC Science & Technology

    1984-10-31

    five colors, page forward, page back, erase, clear the page, store previously annotated material, and later retrieve it. From this developed a four...system to secure sites. These enhancements are discussed below. ... 2.1 Enhancements to the...and large cache memory of the Winchester drive allows the SGWS software to run much faster when doing file access or direct memory access (DMA) than

  9. Modeling Tools Predict Flow in Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    2010-01-01

    "Because rocket engines operate under extreme temperature and pressure, they present a unique challenge to designers who must test and simulate the technology. To this end, CRAFT Tech Inc., of Pipersville, Pennsylvania, won Small Business Innovation Research (SBIR) contracts from Marshall Space Flight Center to develop software to simulate cryogenic fluid flows and related phenomena. CRAFT Tech enhanced its CRUNCH CFD (computational fluid dynamics) software to simulate phenomena in various liquid propulsion components and systems. Today, both government and industry clients in the aerospace, utilities, and petrochemical industries use the software for analyzing existing systems as well as designing new ones."

  10. Self-conscious robotic system design process--from analysis to implementation.

    PubMed

    Chella, Antonio; Cossentino, Massimo; Seidita, Valeria

    2011-01-01

    Developing robotic systems endowed with self-conscious capabilities means realizing complex sub-systems that need ad-hoc software engineering techniques for their modelling, analysis and implementation. In this chapter the whole process (from analysis to implementation) for modelling the development of self-conscious robotic systems is presented, and the newly created design process, PASSIC, which supports each part of it, is fully illustrated.

  11. Alternatives for jet engine control

    NASA Technical Reports Server (NTRS)

    Sain, M. K.; Yurkovich, S.; Hill, J. P.; Kingler, T. A.

    1983-01-01

    The development of models of tensor type for a digital simulation of the quiet, clean safe engine (QCSE) gas turbine engine; the extension, to nonlinear multivariate control system design, of the concepts of total synthesis which trace their roots back to certain early investigations under this grant; the role of series descriptions as they relate to questions of scheduling in the control of gas turbine engines; the development of computer-aided design software for tensor modeling calculations; further enhancement of the software for linear total synthesis, mentioned above; and calculation of the first known examples using tensors for nonlinear feedback control are discussed.

  12. An evaluation of the physiological demands of elite rugby union using Global Positioning System tracking software.

    PubMed

    Cunniffe, Brian; Proctor, Wayne; Baker, Julien S; Davies, Bruce

    2009-07-01

    The current case study attempted to document the contemporary demands of elite rugby union. Players (n = 2) were tracked continuously during a competitive team selection game using Global Positioning System (GPS) software. Data revealed that players covered on average 6,953 m during play (83 minutes). Of this distance, 37% (2,800 m) was spent standing and walking, 27% (1,900 m) jogging, 10% (700 m) cruising, 14% (990 m) striding, 5% (320 m) high-intensity running, and 6% (420 m) sprinting. Greater running distances were observed for both players (6.7% back; 10% forward) in the second half of the game. Positional data revealed that the back performed a greater number of sprints (>20 km·h⁻¹) than the forward (34 vs. 19) during the game. Conversely, the forward entered the lower speed zone (6-12 km·h⁻¹) on a greater number of occasions than the back (315 vs. 229) but spent less time standing and walking (66.5 vs. 77.8%). Players were found to perform 87 moderate-intensity runs (>14 km·h⁻¹) covering an average distance of 19.7 m (SD = 14.6). Average distances of 15.3 m (back) and 17.3 m (forward) were recorded for each sprint burst (>20 km·h⁻¹), respectively. Players exercised at approximately 80 to 85% VO2max during the course of the game with a mean heart rate of 172 beats·min⁻¹ (approximately 88% HRmax). This corresponded to an estimated energy expenditure of 6.9 and 8.2 MJ, back and forward, respectively. The current study provides insight into the intense and physical nature of elite rugby using "on the field" assessment of physical exertion. Future use of this technology may help practitioners in design and implementation of individual position-specific training programs with appropriate management of player exercise load.
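
    The speed-zone bookkeeping behind such GPS analyses reduces to classifying each speed sample into a band and accumulating distance. The sketch below illustrates this with assumed zone boundaries loosely following the abstract's bands and a fabricated 1 Hz sample stream; it is not the tracking vendor's software.

        # Classify 1 Hz speed samples into zones and accumulate distance.
        # Intermediate boundaries (12-14, 14-18, 18-20 km/h) are assumptions.
        ZONES = [(0, 6, "stand/walk"), (6, 12, "jog"), (12, 14, "cruise"),
                 (14, 18, "stride"), (18, 20, "high-intensity"), (20, 99, "sprint")]

        def zone_of(speed_kmh):
            for lo, hi, name in ZONES:
                if lo <= speed_kmh < hi:
                    return name

        speeds = [4.1, 8.7, 13.2, 21.5, 22.0, 15.4, 5.0, 19.2]  # km/h, one per second
        dist = {}
        for v in speeds:
            dist[zone_of(v)] = dist.get(zone_of(v), 0.0) + v / 3.6  # m covered in 1 s
        print({z: round(m, 1) for z, m in dist.items()})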

  13. Beyond formalism

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1991-01-01

    The ongoing debate over the role of formalism and formal specifications in software features many speakers with diverse positions. Yet, in the end, they share the conviction that the requirements of a software system can be unambiguously specified, that acceptable software is a product demonstrably meeting the specifications, and that the design process can be carried out with little interaction between designers and users once the specification has been agreed to. This conviction is part of a larger paradigm prevalent in American management thinking, which holds that organizations are systems that can be precisely specified and optimized. This paradigm, which traces historically to the works of Frederick Taylor in the early 1900s, is no longer sufficient for organizations and software systems today. In the domain of software, a new paradigm, called user-centered design, overcomes the limitations of pure formalism. Pioneered in Scandinavia, user-centered design is spreading through Europe and is beginning to make its way into the U.S.

  14. Scientific & Intelligence Exascale Visualization Analysis System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Money, James H.

    SIEVAS provides an immersive visualization framework for connecting multiple systems in real time for data science. It provides the ability to connect multiple COTS and GOTS products in a seamless fashion for data fusion, data analysis, and viewing. It provides this capability by using a combination of microservices, real-time messaging, and a web-service-compliant back-end system.

  15. The battle between Unix and Windows NT.

    PubMed

    Anderson, H J

    1997-02-01

    For more than a decade, Unix has been the dominant back-end operating system in health care. But that prominent position is being challenged by Windows NT, touted by its developer, Microsoft Corp., as the operating system of the future. CIOs and others are attempting to figure out which system is the best choice in the long run.

  16. Study of a micro-concentrated photovoltaic system based on Cu(In,Ga)Se2 microcells array.

    PubMed

    Jutteau, Sebastien; Guillemoles, Jean-François; Paire, Myriam

    2016-08-20

    We study a micro-concentrated photovoltaic (CPV) system based on micro solar cells made from a thin film technology, Cu(In,Ga)Se2. We designed, using the ray-tracing software Zemax OpticStudio 14, an optical system adapted and integrated to the microcells, with only spherical lenses. The designed architecture has a magnification factor of 100× for an optical efficiency of 85% and an acceptance angle of ±3.5°, without anti-reflective coating. An experimental study is realized to fabricate the first generation prototype on a 5 cm × 5 cm substrate. A mini-module achieved a concentration ratio of 72× under AM1.5G, and an absolute efficiency gain of 1.8% for a final aperture area efficiency of 12.6%.

  17. Development of an Oceanographic Data Archiving and Service System for the Korean Researchers

    NASA Astrophysics Data System (ADS)

    Kim, Sung Dae; Park, Hyuk Min; Baek, Sang Ho

    2014-05-01

    The Oceanographic Data and Information Center of the Korea Institute of Ocean Science and Technology (KIOST) started to develop an oceanographic data archiving and service system in 2010 to support Korean ocean researchers by providing quality-controlled data continuously. Many physical oceanographic data available in the public domain, along with Korean domestic data, were collected periodically, quality controlled, manipulated, and provided to ocean modelers who need ocean data continuously and to marine biologists who are less familiar with physical data but need it. The northern and southern limits of the spatial coverage are 20°N and 55°N, and the western and eastern limits are 110°E and 150°E, respectively. To archive TS (Temperature and Salinity) profile data, ARGO data were gathered from the ARGO GDACs (France and USA) and many historical TS profile data observed by CTD, OSD and BT were retrieved from World Ocean Database 2009. The quality control software for TS profile data, which meets the QC criteria suggested by the ARGO program and the GTSPP (Global Temperature-Salinity Profile Program), was programmed and applied to the collected data. By the end of 2013, the total number of vertical profile data from the ARGO GDACs was 59,642 and the total number of station data from WOD 2009 was 1,604,422. We also collected the global satellite SST data produced by NCDC and global SSH data from AVISO every day. An automatic program was coded to collect the satellite data, extract sub-data sets for the North West Pacific area and produce distribution maps. The total number of collected satellite data sets was 3,613 by the end of 2013. We use three different data services to provide the archived data to Korean researchers. An FTP service was prepared to allow data users to download data in the original format. We developed a TS database system using Oracle RDBMS to contain all collected temperature and salinity data and support SQL data retrieval with various conditions. The KIOST ocean data portal was used as the data retrieval service for the TS DB, using a GIS interface built with open-source GIS software. We also installed the Live Access Server developed by US PMEL to serve the satellite netCDF data files, which supports on-the-fly visualization and OPeNDAP (Open-source Project for a Network Data Access Protocol) access for remote connection and sub-setting of large data sets.
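
    One representative automatic check in an ARGO/GTSPP-style QC chain is the gross range test, sketched below with illustrative limits and a fabricated profile; operational limits are region-dependent, and the flag convention used here (1 = good, 4 = bad) simply follows common Argo practice.

        # Gross range test: flag T/S values outside broad physical limits.
        T_RANGE = (-2.5, 40.0)     # deg C, broad ocean limits (illustrative)
        S_RANGE = (2.0, 41.0)      # PSU (illustrative)

        def gross_range_flags(profile):
            """Return (depth, flag) per level: 1 = good, 4 = bad."""
            flags = []
            for depth, temp, sal in profile:
                ok = (T_RANGE[0] <= temp <= T_RANGE[1]
                      and S_RANGE[0] <= sal <= S_RANGE[1])
                flags.append((depth, 1 if ok else 4))
            return flags

        profile = [(0, 18.2, 34.5), (50, 12.1, 34.6), (100, 99.9, 34.7)]  # last T is a spike
        print(gross_range_flags(profile))   # [(0, 1), (50, 1), (100, 4)]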

  18. Charter for Systems Engineer Working Group

    NASA Technical Reports Server (NTRS)

    Suffredini, Michael T.; Grissom, Larry

    2015-01-01

    This charter establishes the International Space Station Program (ISSP) Mobile Servicing System (MSS) Systems Engineering Working Group (SEWG). The MSS SEWG is established to provide a mechanism for Systems Engineering for the end-to-end MSS function. The MSS end-to-end function includes the Space Station Remote Manipulator System (SSRMS), the Mobile Remote Servicer (MRS) Base System (MBS), Robotic Work Station (RWS), Special Purpose Dexterous Manipulator (SPDM), Video Signal Converters (VSC), Operations Control Software (OCS), and the Mobile Transporter (MT), as well as the interfaces between and among these elements, United States On-Orbit Segment (USOS) distributed systems, and other International Space Station elements and payloads (including the Power Data Grapple Fixtures (PDGFs), MSS Capture Attach System (MCAS) and the Mobile Transporter Capture Latch (MTCL)). This end-to-end function will be supported by the ISS and MSS ground segment facilities. This charter defines the scope and limits of the program authority and document control that is delegated to the SEWG, and it also identifies the panel core membership and specific operating policies.

  19. Development and evaluation of SOA-based AAL services in real-life environments: a case study and lessons learned.

    PubMed

    Stav, Erlend; Walderhaug, Ståle; Mikalsen, Marius; Hanke, Sten; Benc, Ivan

    2013-11-01

    The proper use of ICT services can support seniors in living independently longer. While such services are starting to emerge, current proprietary solutions are often expensive, covering only isolated parts of seniors' needs, and lack support for sharing information between services and between users. For developers, the challenge is that it is complex and time consuming to develop high quality, interoperable services, and new techniques are needed to simplify the development and reduce the development costs. This paper provides the complete view of the experiences gained in the MPOWER project with respect to using model-driven development (MDD) techniques for Service Oriented Architecture (SOA) system development in the Ambient Assisted Living (AAL) domain. To address this challenge, the approach of the European research project MPOWER (2006-2009) was to investigate and record the user needs, define a set of reusable software services based on these needs, and then implement pilot systems using these services. Further, a model-driven toolchain covering key development phases was developed to support software developers through this process. Evaluations were conducted both on the technical artefacts (methodology and tools), and on end user experience from using the pilot systems in trial sites. The outcome of the work on the user needs is a knowledge base recorded as a Unified Modeling Language (UML) model. This comprehensive model describes actors, use cases, and features derived from these. The model further includes the design of a set of software services, including full trace information back to the features and use cases motivating their design. Based on the model, the services were implemented for use in Service Oriented Architecture (SOA) systems, and are publicly available as open source software. The services were successfully used in the realization of two pilot applications. There is therefore a direct and traceable link from the user needs of the elderly, through the service design knowledge base, to the service and pilot implementations. The evaluation of the SOA approach with the developers in the project revealed that SOA is useful with respect to job performance and quality. Furthermore, they think SOA is easy to use and supports development of AAL applications. An important finding is that the developers clearly report that they intend to use SOA in the future, but not for all types of projects. With respect to using model-driven development in web services design and implementation, the developers reported that it was useful. However, it is important that the code generated from the models is correct if the full potential of MDD is to be achieved. The pilots and their evaluation in the trial sites showed that the services of the platform are sufficient to create suitable systems for end users in the domain. An SOA platform with a set of reusable domain services is a suitable foundation for more rapid development and tailoring of assisted living systems covering recurring needs among elderly users. It is feasible to realize a tool-chain for model-driven development of SOA applications in the AAL domain, and such a tool-chain can be accepted and found useful by software developers. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  20. Web-based reactive transport modeling using PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Karra, S.; Lichtner, P. C.; Versteeg, R.; Zhang, Y.

    2017-12-01

    Actionable understanding of system behavior in the subsurface is required for a wide spectrum of societal and engineering needs by commercial firms, government entities, and academia. These needs include, for example, water resource management, precision agriculture, contaminant remediation, unconventional energy production, CO2 sequestration monitoring, and climate studies. Such understanding requires the ability to numerically model various coupled processes that occur across different temporal and spatial scales as well as multiple physical domains (reservoirs - overburden, surface-subsurface, groundwater-surface water, saturated-unsaturated zone). Currently, this ability is typically met through an in-house approach where computational resources, model expertise, and data for model parameterization are brought together to meet modeling needs. However, such an approach has multiple drawbacks which limit the application of high-end reactive transport codes such as the Department of Energy-funded PFLOTRAN code. In addition, while many end users have a need for the capabilities provided by high-end reactive transport codes, they do not have the expertise - nor the time required to obtain the expertise - to effectively use these codes. We have developed and are actively enhancing a cloud-based software platform through which diverse users are able to easily configure, execute, visualize, share, and interpret PFLOTRAN models. This platform consists of a web application and on-demand HPC computational infrastructure. It comprises (1) a browser-based graphical user interface which allows users to configure models and visualize results interactively, (2) a central server with back-end relational databases which hold configuration, data, modeling results, and Python scripts for model configuration, and (3) an HPC environment for on-demand model execution. We will discuss lessons learned in the development of this platform, the rationale for different interfaces, implementation choices, as well as the planned path forward.
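
    A hypothetical client-side view of the workflow such a platform enables is sketched below: submit a PFLOTRAN configuration to a web API, poll the on-demand HPC job, then fetch results. The endpoint paths and JSON fields are invented for illustration and are not a published API of this platform.

        # Submit-poll-fetch pattern against an invented model-execution API.
        import time
        import requests

        BASE = "https://example-platform.invalid/api"   # placeholder URL

        config = {"simulation": "tracer_1d",
                  "grid": {"nx": 100, "dx_m": 1.0},
                  "end_time_days": 365}

        job = requests.post(f"{BASE}/models", json=config, timeout=30).json()
        while True:
            status = requests.get(f"{BASE}/models/{job['id']}", timeout=30).json()
            if status["state"] in ("finished", "failed"):
                break
            time.sleep(5)                               # poll the on-demand HPC job

        if status["state"] == "finished":
            results = requests.get(f"{BASE}/models/{job['id']}/results", timeout=30).json()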

  1. Lithium isotopic systematics of submarine vent fluids from arc and back-arc hydrothermal systems in the western Pacific

    NASA Astrophysics Data System (ADS)

    Araoka, Daisuke; Nishio, Yoshiro; Gamo, Toshitaka; Yamaoka, Kyoko; Kawahata, Hodaka

    2016-10-01

    The Li concentration and isotopic composition (δ7Li) of submarine vent fluids are important for the oceanic Li budget and potentially useful for investigating hydrothermal systems deep under the seafloor, because hydrothermal vent fluids are highly enriched in Li relative to seawater. Although Li isotopic geochemistry has been studied at mid-ocean-ridge (MOR) hydrothermal sites, the Li isotopic composition of arc and back-arc settings has not been systematically investigated. Here we determined the δ7Li and 87Sr/86Sr values of 11 end-member fluids from 5 arc and back-arc hydrothermal systems in the western Pacific and examined Li behavior during high-temperature water-rock interactions in different geological settings. In sediment-starved hydrothermal systems (Manus Basin, Izu-Bonin Arc, Mariana Trough, and North Fiji Basin), the Li concentrations (0.23-1.30 mmol/kg) and δ7Li values (+4.3‰ to +7.2‰) of the end-member fluids are explained mainly by a dissolution-precipitation model of high-temperature seawater-rock interactions at steady state. Low Li concentrations are attributable to temperature-related apportioning of Li from rock into the fluid phase and to the phase separation process. The small variation in Li among MOR sites is probably caused by low-temperature alteration by diffusive hydrothermal fluids under the seafloor. In contrast, the highest Li concentrations (3.40-5.98 mmol/kg) and lowest δ7Li values (+1.6‰ to +2.4‰), in end-member fluids from the Okinawa Trough, demonstrate that the Li there is predominantly derived from marine sediments. The variation of Li among sediment-hosted sites can be explained by differences in the degree of hydrothermal fluid-sediment interaction, associated with the thickness of the marine sediment overlying these hydrothermal sites.
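
    The interpretation above rests on concentration-weighted isotope mixing: because sediment-derived fluids carry far more Li, even modest sediment contributions pull the mixture's δ7Li toward the sediment value. The sketch below illustrates this arithmetic with end-member values loosely patterned on, but not taken from, the fluids described.

        # Two-component mixing: d7Li of a mixture is the Li-concentration-
        # weighted mean of the end-members. All numbers are illustrative.
        def mix_d7li(f_sed, c_sed, d_sed, c_rock, d_rock):
            """f_sed = mass fraction of sediment-derived fluid in the mixture."""
            li = f_sed * c_sed + (1 - f_sed) * c_rock          # mmol/kg Li
            d7 = (f_sed * c_sed * d_sed + (1 - f_sed) * c_rock * d_rock) / li
            return li, d7

        for f in (0.0, 0.5, 1.0):
            li, d7 = mix_d7li(f, c_sed=5.0, d_sed=+2.0, c_rock=0.8, d_rock=+6.5)
            print(f"{f:.1f} sediment fraction: {li:.2f} mmol/kg, d7Li = {d7:+.1f} permil")
        # Even at 50% sediment fluid, d7Li is already ~+2.6 permil: the
        # Li-rich sediment end-member dominates the isotopic signal.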

  2. An alternative method of closed silicone intubation of the lacrimal system.

    PubMed

    Henderson, P N; McNab, A A

    1996-05-01

    An alternative method of closed lacrimal intubation is described, the basis of which is to place the end of a piece of silicone tubing over the end of a small-diameter metal introducer, stretch the silicone tubing back along the introducer, and then pass the introducer together with the tubing through the lacrimal system into the nasal cavity. The tubing is visualized in the inferior meatus, from where it is retrieved, and then the introducer is withdrawn. The other end of the tubing is passed in a similar fashion. The technique is easily mastered, inexpensive, and less traumatic than other described techniques.

  3. ZnO/Cu(InGa)Se2 solar cells prepared by vapor phase Zn doping

    DOEpatents

    Ramanathan, Kannan; Hasoon, Falah S.; Asher, Sarah E.; Dolan, James; Keane, James C.

    2007-02-20

    A process for making a thin film ZnO/Cu(InGa)Se2 solar cell without depositing a buffer layer and by Zn doping from a vapor phase, comprising: depositing a Cu(InGa)Se2 layer on a metal back contact deposited on a glass substrate; heating the Cu(InGa)Se2 layer on the metal back contact on the glass substrate to a temperature range between about 100°C and about 250°C; subjecting the heated layer of Cu(InGa)Se2 to an evaporant species from a Zn compound; and sputter depositing ZnO on the Zn-compound-evaporant-treated layer of Cu(InGa)Se2.

  4. NOAA Climate Program Office Contributions to National ESPC

    NASA Astrophysics Data System (ADS)

    Higgins, W.; Huang, J.; Mariotti, A.; Archambault, H. M.; Barrie, D.; Lucas, S. E.; Mathis, J. T.; Legler, D. M.; Pulwarty, R. S.; Nierenberg, C.; Jones, H.; Cortinas, J. V., Jr.; Carman, J.

    2016-12-01

    NOAA is one of five federal agencies (DOD, DOE, NASA, NOAA, and NSF) which signed an updated charter in 2016 to partner on the National Earth System Prediction Capability (ESPC). Situated within NOAA's Office of Oceanic and Atmospheric Research (OAR), NOAA Climate Program Office (CPO) programs contribute significantly to the National ESPC goals and activities. This presentation will provide an overview of CPO contributions to National ESPC. First, we will discuss selected CPO research and transition activities that directly benefit the ESPC coupled model prediction capability, including: the North American Multi-Model Ensemble (NMME) seasonal prediction system; the Subseasonal Experiment (SubX) project to test real-time subseasonal ensemble prediction systems; and improvements to the NOAA operational Climate Forecast System (CFS), including software infrastructure and data assimilation. Next, we will show how CPO's foundational research activities are advancing future ESPC capabilities. Highlights will include: the Tropical Pacific Observing System (TPOS), to provide the basis for predicting climate on subseasonal to decadal timescales; Subseasonal-to-Seasonal (S2S) processes and predictability studies to improve understanding, modeling and prediction of the MJO; an Arctic Research Program to address urgent needs for advancing monitoring and prediction capabilities in this major area of concern; and advances towards building an experimental multi-decadal prediction system through studies on the Atlantic Meridional Overturning Circulation (AMOC). Finally, CPO has embraced Integrated Information Systems (IISs) that build on the innovation of programs such as the National Integrated Drought Information System (NIDIS) to develop and deliver end-to-end environmental information for key societal challenges (e.g. extreme heat; coastal flooding). These contributions will help the National ESPC better understand and address societal needs and decision support requirements.

  5. Towards Context-Aware and User-Centered Analysis in Assistive Environments: A Methodology and a Software Tool.

    PubMed

    Fontecha, Jesús; Hervás, Ramón; Mondéjar, Tania; González, Iván; Bravo, José

    2015-10-01

    One of the main challenges in Ambient Assisted Living (AAL) is to reach an appropriate acceptance level of the assistive systems, as well as to analyze and monitor end user tasks in a feasible and efficient way. The development and evaluation of AAL solutions from a user-centered perspective help to achieve these goals. In this work, we have designed a methodology to develop and integrate user-centered analytics tools into assistive systems. An analysis software tool gathers information about end users from adapted psychological questionnaires and naturalistic observation of their own context. The aim is to enable an in-depth analysis focused on improving the quality of life of elderly people and their caregivers.

  6. Building Safer Systems With SpecTRM

    NASA Technical Reports Server (NTRS)

    2003-01-01

    System safety, an integral component in software development, often poses a challenge to engineers designing computer-based systems. While the relaxed constraints on software design allow for increased power and flexibility, this flexibility introduces more possibilities for error. As a result, system engineers must identify the design constraints necessary to maintain safety and ensure that the system and software design enforces them. Safeware Engineering Corporation, of Seattle, Washington, provides the information, tools, and techniques to accomplish this task with its Specification Tools and Requirements Methodology (SpecTRM). NASA assisted in developing this engineering toolset by awarding the company several Small Business Innovation Research (SBIR) contracts with Ames Research Center and Langley Research Center. The technology benefits NASA through its applications for Space Station rendezvous and docking. SpecTRM aids system and software engineers in developing specifications for large, complex safety critical systems. The product enables engineers to find errors early in development so that they can be fixed with the lowest cost and impact on the system design. SpecTRM traces both the requirements and design rationale (including safety constraints) throughout the system design and documentation, allowing engineers to build required system properties into the design from the beginning, rather than emphasizing assessment at the end of the development process when changes are limited and costly.

  7. Design of Soil Salinity Policies with Tinamit, a Flexible and Rapid Tool to Couple Stakeholder-Built System Dynamics Models with Physically-Based Models

    NASA Astrophysics Data System (ADS)

    Malard, J. J.; Baig, A. I.; Hassanzadeh, E.; Adamowski, J. F.; Tuy, H.; Melgar-Quiñonez, H.

    2016-12-01

    Model coupling is a crucial step in constructing many environmental models, as it allows for the integration of independently-built models representing different system sub-components to simulate the entire system. Model coupling has been of particular interest in combining socioeconomic System Dynamics (SD) models, whose visual interface facilitates their direct use by stakeholders, with more complex physically-based models of the environmental system. However, model coupling processes are often cumbersome and inflexible and require extensive programming knowledge, limiting their potential for continued use by stakeholders in policy design and analysis after the end of the project. Here, we present Tinamit, a flexible Python-based model-coupling software tool whose easy-to-use API and graphical user interface make the coupling of stakeholder-built SD models with physically-based models rapid, flexible and simple for users with limited to no coding knowledge. The flexibility of the system allows end users to modify the SD model as well as the linking variables between the two models themselves, with no need for recoding. We use Tinamit to couple a stakeholder-built socioeconomic model of soil salinization in Pakistan with the physically-based soil salinity model SAHYSMOD. As climate extremes increase in the region, policies to slow or reverse soil salinity buildup are increasing in urgency and must take both socioeconomic and biophysical spheres into account. We use the Tinamit-coupled model to test the impact of integrated policy options (economic and regulatory incentives to farmers) on soil salinity in the region in the face of future climate change scenarios. Use of Tinamit allowed rapid and flexible coupling of the two models while letting the end user continue to make model-structure and policy changes. In addition, the clear interface (in contrast to most model coupling code) makes the final coupled model easily accessible to stakeholders with limited technical background.
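
    The coupling pattern that Tinamit automates can be illustrated with a toy exchange loop: step the socioeconomic SD model and the physical salinity model in turn, passing each one's linked variable to the other every timestep. The two classes below are invented stand-ins, not Tinamit's actual API or SAHYSMOD.

        # Toy coupled simulation: exchange linked variables each timestep.
        class SDModel:                       # stakeholder-built system dynamics
            def __init__(self):
                self.irrigation = 1.0        # policy-driven water application
            def step(self, soil_salinity):
                # farmers irrigate less as salinity rises (toy rule)
                self.irrigation = max(0.2, 1.0 - 0.05 * soil_salinity)
                return self.irrigation

        class SalinityModel:                 # stand-in for a physical model
            def __init__(self):
                self.salinity = 4.0          # dS/m
            def step(self, irrigation):
                self.salinity += 0.3 * irrigation - 0.1   # toy water/salt balance
                return self.salinity

        sd, phys = SDModel(), SalinityModel()
        for year in range(10):               # coupled simulation loop
            irr = sd.step(phys.salinity)
            sal = phys.step(irr)
            print(year, round(irr, 2), round(sal, 2))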

  8. Preventing Chaos.

    ERIC Educational Resources Information Center

    Pineda, Ernest M.

    1999-01-01

    Discusses ways to help resolve the Y2K problem and avoid disruptions in school security and safety. Discusses computer software testing and validation to determine its functionality after year's end, and explores system remediation of non-compliant fire and security systems. (GR)

  9. Uranium oxide fuel cycle analysis in VVER-1000 with VISTA simulation code

    NASA Astrophysics Data System (ADS)

    Mirekhtiary, Seyedeh Fatemeh; Abbasi, Akbar

    2018-02-01

    The VVER-1000 nuclear power plant generates about 20-25 tons of spent fuel per year. In this research, the fuel transmutation of Uranium Oxide (UOX) fuel was calculated using the nuclear fuel cycle simulation system (VISTA) code. In this simulation, we evaluated the back-end components of the fuel cycle. The back-end components calculated are Spent Fuel (SF), Actinide Inventory (AI), and Fission Product (FP) radioisotopes. The SF, AI, and FP values were obtained as 23.792178 ton/y, 22.811139 ton/y, and 0.981039 ton/y, respectively. The rounded values for spent fuel, major actinides, minor actinides, and fission products were 23.8 ton/year, 22.795 ton/year, 0.024 ton/year, and 0.981 ton/year, respectively.
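
    The reported back-end figures can be checked for internal consistency: the spent-fuel mass should equal the actinide inventory plus the fission products. A minimal arithmetic check:

    ```python
    # Consistency check of the back-end inventory reported above: spent fuel
    # should equal the sum of the actinide inventory and the fission products.
    actinide_inventory = 22.811139   # ton/y (AI)
    fission_products = 0.981039      # ton/y (FP)
    spent_fuel = 23.792178           # ton/y (SF)

    assert abs((actinide_inventory + fission_products) - spent_fuel) < 1e-6

    # Rounded figures: major actinides + minor actinides + fission products
    # should likewise reproduce the ~23.8 ton/year of spent fuel.
    print(round(22.795 + 0.024 + 0.981, 3))  # -> 23.8
    ```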

  10. FAME, a microprocessor based front-end analysis and modeling environment

    NASA Technical Reports Server (NTRS)

    Rosenbaum, J. D.; Kutin, E. B.

    1980-01-01

    Higher order software (HOS) is a methodology for the specification and verification of large scale, complex, real time systems. The HOS methodology was implemented as FAME (front end analysis and modeling environment), a microprocessor based system for interactively developing, analyzing, and displaying system models in a low cost user-friendly environment. The nature of the model is such that when completed it can be the basis for projection to a variety of forms such as structured design diagrams, Petri-nets, data flow diagrams, and PSL/PSA source code. The user's interface with the analyzer is easily recognized by any current user of a structured modeling approach; therefore extensive training is unnecessary. Furthermore, when all the system capabilities are used one can check on proper usage of data types, functions, and control structures thereby adding a new dimension to the design process that will lead to better and more easily verified software designs.

  11. 40 CFR 63.493 - Back-end process provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.493 Back-end process provisions. Owners and operators of new and existing affected sources shall comply with the requirements in...

  12. Software Dependability and Safety Evaluations ESA's Initiative

    NASA Astrophysics Data System (ADS)

    Hernek, M.

    ESA has allocated funds for an initiative to evaluate Dependability and Safety methods of Software. The objectives of this initiative are: · more extensive validation of Safety and Dependability techniques for Software; · provision of valuable results to improve the quality of the Software, thus promoting the application of Dependability and Safety methods and techniques. ESA space systems are being developed according to defined PA requirement specifications. These requirements may be implemented through various design concepts, e.g., redundancy, diversity, etc., varying from project to project. Analysis methods (FMECA, FTA, HA, etc.) are frequently used during requirements analysis and design activities to assure the correct implementation of system PA requirements. The criticality level of failures, functions and systems is determined, and by doing so the critical sub-systems are identified, on which dependability and safety techniques are to be applied during development. Proper performance of the software development requires the development of a technical specification for the products at the beginning of the life cycle. Such a technical specification comprises both functional and non-functional requirements. These non-functional requirements address characteristics of the product such as quality, dependability, safety and maintainability. Software in space systems is increasingly used in critical functions. The trend towards more frequent use of COTS and reusable components also poses new difficulties in terms of assuring reliable and safe systems. Because of this, software dependability and safety must be carefully analysed. ESA identified and documented techniques, methods and procedures to ensure that software dependability and safety requirements are specified and taken into account during the design and development of a software system, and to verify/validate that the implemented software systems comply with these requirements [R1].

  13. The Implementation of Satellite Attitude Control System Software Using Object Oriented Design

    NASA Technical Reports Server (NTRS)

    Reid, W. Mark; Hansell, William; Phillips, Tom; Anderson, Mark O.; Drury, Derek

    1998-01-01

    NASA established the Small Explorer (SMEX) program in 1988 to provide frequent opportunities for highly focused and relatively inexpensive space science missions. The SMEX program has produced five satellites, three of which have been successfully launched. The remaining two spacecraft are scheduled for launch within the coming year. NASA has recently developed a prototype for the next-generation Small Explorer spacecraft (SMEX-Lite). This paper describes the object-oriented design (OOD) of the SMEX-Lite Attitude Control System (ACS) software. The SMEX-Lite ACS is three-axis controlled and is capable of performing sub-arc-minute pointing. This paper first describes the high-level requirements governing the SMEX-Lite ACS software architecture. Next, the context in which the software resides is explained. The paper describes the principles of encapsulation, inheritance, and polymorphism with respect to the implementation of an ACS software system. This paper also discusses the design of several ACS software components. Specifically, object-oriented designs are presented for sensor data processing, attitude determination, attitude control, and failure detection. Finally, this paper addresses the establishment of the ACS Foundation Class (AFC) Library. The AFC is a large software repository, requiring a minimal amount of code modification to produce ACS software for future projects.

  14. Spark gap switch system with condensable dielectric gas

    DOEpatents

    Thayer, III, William J.

    1991-01-01

    A spark gap switch system is disclosed which is capable of operating at a high pulse rate, comprising an insulated switch housing having a purging gas entrance port and a gas exit port, a pair of spaced-apart electrodes each having one end thereof within the housing and defining a spark gap therebetween, an easily condensable and preferably low-molecular-weight insulating gas flowing through the switch housing, a heat exchanger/condenser for condensing the insulating gas after it exits from the housing, a pump for recirculating the condensed insulating gas as a liquid back to the housing, and a heat exchanger/evaporator to vaporize at least a portion of the condensed insulating gas back into a vapor prior to flowing the insulating gas back into the housing.

  15. Towards easing the configuration and new team member accommodation for open source software based portals

    NASA Astrophysics Data System (ADS)

    Fu, L.; West, P.; Zednik, S.; Fox, P. A.

    2013-12-01

    For simple portals such as vocabulary-based services, which contain small amounts of data and require only hyper-textual representation, it is often overkill to adopt the whole software stack of database, middleware and front end, or to use a general Web development framework as the starting point of development. Directly combining open source software is a much more favorable approach. However, our experience with the Coastal and Marine Spatial Planning Vocabulary (CMSPV) service portal shows that there are still issues, such as system configuration and accommodating a new team member, that need to be handled carefully. In this contribution, we share our experience in the context of the CMSPV portal and focus on the tools and mechanisms we've developed to ease the configuration job and the incorporation process of new project members. We discuss the configuration issues that arise when we don't have complete control over how the software in use is configured and need to follow existing configuration styles that may not be well documented, especially when multiple pieces of such software need to work together as a combined system. As for the CMSPV portal, it is built on two pieces of open source software that are still under rapid development: a Fuseki data server and an Epimorphics Linked Data API (ELDA) front end. Both lack mature documentation and tutorials. We developed comparison and labeling tools to ease the problem of system configuration. Another problem that slowed down the project is that project members came and went during the development process, so new members needed to start with a partially configured system and incomplete documentation left by old members. We developed documentation/tutorial maintenance mechanisms based on our comparison and labeling tools to make it easier for new members to be incorporated into the project. These tools and mechanisms also provided benefit to other projects that reused software components from the CMSPV system.

  16. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    NASA Technical Reports Server (NTRS)

    Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas

    2003-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  17. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    NASA Technical Reports Server (NTRS)

    Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.

    2000-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration frame-work for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) man- age QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benson, D.K.; Tracy, C.E.

    The real and perceived risks of hydrogen fuel use, particularly in passenger vehicles, will require extensive safety precautions, including hydrogen leak detection. Conventional hydrogen gas sensors require electrical wiring and may be too expensive for deployment in multiple locations within a vehicle. In this recently initiated project, we are attempting to develop a reversible, thin-film, chemochromic sensor that can be applied to the end of a polymer optical fiber. The presence of hydrogen gas causes the film to become darker. A light beam transmitted from a central instrument in the vehicle along the sensor fibers will be reflected from the ends of the fibers back to individual light detectors. A decrease in the reflected light signal will indicate the presence and concentration of hydrogen in the vicinity of the fiber sensor. The typical thin-film sensor consists of a layer of transparent, amorphous tungsten oxide covered by a very thin reflective layer of palladium. When the sensor is exposed to hydrogen, a portion of the hydrogen is dissociated, diffuses through the palladium, and reacts with the tungsten oxide to form a blue insertion compound, HxWO3. When the hydrogen gas is no longer present, the hydrogen will diffuse out of the HxWO3 and oxidize at the palladium/air interface, restoring the tungsten oxide film and the light signal to normal. The principle of this detection scheme has already been demonstrated by scientists in Japan. However, the design of the sensor has not been optimized for speed of response nor tested for its hydrogen selectivity in the presence of hydrocarbon gases. The challenge of this project is to modify the basic sensor design to achieve the required rapid response and assure sufficient selectivity to avoid false readings.

  19. Opportunities for leveraging OS virtualization in high-end supercomputing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke

    2010-11-01

    This paper examines potential motivations for incorporating virtualization support in the system software stacks of high-end capability supercomputers. We advocate that this will increase the flexibility of these platforms significantly and enable new capabilities that are not possible with current fixed software stacks. Our results indicate that compute, virtual memory, and I/O virtualization overheads are low and can be further mitigated by utilizing well-known techniques such as large paging and VMM bypass. Furthermore, since the addition of virtualization support does not affect the performance of applications using the traditional native environment, there is essentially no disadvantage to its addition.

  20. The MOLGENIS toolkit: rapid prototyping of biosoftware at the push of a button.

    PubMed

    Swertz, Morris A; Dijkstra, Martijn; Adamusiak, Tomasz; van der Velde, Joeri K; Kanterakis, Alexandros; Roos, Erik T; Lops, Joris; Thorisson, Gudmundur A; Arends, Danny; Byelas, George; Muilu, Juha; Brookes, Anthony J; de Brock, Engbert O; Jansen, Ritsert C; Parkinson, Helen

    2010-12-21

    There is a huge demand on bioinformaticians to provide their biologists with user friendly and scalable software infrastructures to capture, exchange, and exploit the unprecedented amounts of new *omics data. We here present MOLGENIS, a generic, open source, software toolkit to quickly produce the bespoke MOLecular GENetics Information Systems needed. The MOLGENIS toolkit provides bioinformaticians with a simple language to model biological data structures and user interfaces. At the push of a button, MOLGENIS' generator suite automatically translates these models into a feature-rich, ready-to-use web application including database, user interfaces, exchange formats, and scriptable interfaces. Each generator is a template of SQL, JAVA, R, or HTML code that would require much effort to write by hand. This 'model-driven' method ensures reuse of best practices and improves quality because the modeling language and generators are shared between all MOLGENIS applications, so that errors are found quickly and improvements are shared easily by a re-generation. A plug-in mechanism ensures that both the generator suite and generated product can be customized just as much as hand-written software. In recent years we have successfully evaluated the MOLGENIS toolkit for the rapid prototyping of many types of biomedical applications, including next-generation sequencing, GWAS, QTL, proteomics and biobanking. Writing 500 lines of model XML typically replaces 15,000 lines of hand-written programming code, which allows for quick adaptation if the information system is not yet to the biologist's satisfaction. Each application generated with MOLGENIS comes with an optimized database back-end, user interfaces for biologists to manage and exploit their data, programming interfaces for bioinformaticians to script analysis tools in R, Java, SOAP, REST/JSON and RDF, a tab-delimited file format to ease upload and exchange of data, and detailed technical documentation. Existing databases can be quickly enhanced with MOLGENIS generated interfaces using the 'ExtractModel' procedure. The MOLGENIS toolkit provides bioinformaticians with a simple model to quickly generate flexible web platforms for all possible genomic, molecular and phenotypic experiments with a richness of interfaces not provided by other tools. All the software and manuals are available free as LGPLv3 open source at http://www.molgenis.org.
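
    The 'model-driven' generation step can be made concrete with a toy sketch: a small XML entity model is translated into SQL DDL. The XML layout, the type mapping, and the generated SQL below are assumptions for illustration only, not MOLGENIS' actual modeling language or generator templates.

    ```python
    # Toy model-driven generator: translate a tiny XML entity model into SQL.
    # The XML schema and type mapping are invented, not MOLGENIS' real ones.
    import xml.etree.ElementTree as ET

    MODEL_XML = """
    <entity name="sample">
      <field name="id" type="int"/>
      <field name="label" type="string"/>
      <field name="qtl_score" type="decimal"/>
    </entity>
    """

    SQL_TYPES = {"int": "INTEGER", "string": "VARCHAR(255)", "decimal": "DECIMAL"}

    def generate_sql(model_xml):
        """Emit a CREATE TABLE statement for one modeled entity."""
        entity = ET.fromstring(model_xml)
        cols = [f"  {f.get('name')} {SQL_TYPES[f.get('type')]}"
                for f in entity.findall("field")]
        return f"CREATE TABLE {entity.get('name')} (\n" + ",\n".join(cols) + "\n);"

    print(generate_sql(MODEL_XML))
    ```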

  1. Hardening Software Defined Networks

    DTIC Science & Technology

    2014-07-01

    networks [2, 35] and electrical systems [28, 37, 25]. Effects of cascading have also been modeled in the study of communication networks such as the AS...is necessary to examine both potential failures of the system, and the risks inherent in success. A true end-to-end perspective includes the complete... potential herd immunity) or only local benefit. Club goods theories provide a strong theoretical foundation for determining the importance and risks of

  2. Source-Constrained Recall: Front-End and Back-End Control of Retrieval Quality

    ERIC Educational Resources Information Center

    Halamish, Vered; Goldsmith, Morris; Jacoby, Larry L.

    2012-01-01

    Research on the strategic regulation of memory accuracy has focused primarily on monitoring and control processes used to edit out incorrect information after it is retrieved (back-end control). Recent studies, however, suggest that rememberers also enhance accuracy by preventing the retrieval of incorrect information in the first place (front-end…

  3. Rover Attitude and Pointing System Simulation Testbed

    NASA Technical Reports Server (NTRS)

    Vanelli, Charles A.; Grinblat, Jonathan F.; Sirlin, Samuel W.; Pfister, Sam

    2009-01-01

    The MER (Mars Exploration Rover) Attitude and Pointing System Simulation Testbed Environment (RAPSSTER) provides a simulation platform used for the development and test of GNC (guidance, navigation, and control) flight algorithm designs for the Mars rovers. It was specifically tailored to the MERs, but has since been used in the development of rover algorithms for the Mars Science Laboratory (MSL) as well. The software provides an integrated simulation and software testbed environment for the development of Mars rover attitude and pointing flight software. It provides an environment that is able to run the MER GNC flight software directly (as opposed to running an algorithmic model of the MER GNC flight code). This improves simulation fidelity and confidence in the results. Furthermore, the simulation environment allows the user to single-step through its execution, pausing and restarting at will. The system also provides for the introduction of simulated faults specific to Mars rover environments that cannot be replicated in other testbed platforms, to stress-test the GNC flight algorithms under examination. The software provides facilities to run these stress tests in ways that cannot be done in the real-time flight system testbeds, such as time-jumping (both forwards and backwards) and the introduction of simulated actuator faults that would be difficult, expensive, and/or destructive to implement in the real-time testbeds. Actual flight-quality code can be incorporated back into the development-test suite of GNC developers, closing the loop between the GNC developers and the flight software developers. The software provides fully automated scripting, allowing multiple tests to be run with varying parameters without human supervision.
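
    The two testbed facilities called out above, scripted fault injection and time-jumping, can be sketched with a toy stepped simulation. The class, method, and fault names are invented for illustration and do not reflect RAPSSTER's actual interfaces.

    ```python
    # Toy stepped simulation with fault injection and time-jumping, sketching
    # the stress-test facilities described above. All names are hypothetical.
    class RoverSim:
        def __init__(self):
            self.t = 0.0           # simulation clock, seconds
            self.faults = set()    # active simulated actuator faults
            self.heading = 0.0     # degrees

        def step(self, dt=1.0):
            if "stuck_actuator" not in self.faults:
                self.heading = (self.heading + 1.0) % 360.0  # nominal slew
            self.t += dt

        def jump_time(self, new_t):
            """Jump the clock forward or backward without stepping dynamics."""
            self.t = new_t

        def inject_fault(self, name):
            self.faults.add(name)

    sim = RoverSim()
    for _ in range(10):
        sim.step()
    sim.inject_fault("stuck_actuator")   # heading now frozen by the fault
    sim.jump_time(500.0)                 # jump ahead, then keep stepping
    sim.step()
    print(sim.t, sim.heading)            # 501.0 10.0
    ```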

  4. The software architecture of the camera for the ASTRI SST-2M prototype for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Sangiorgi, Pierluca; Capalbi, Milvia; Gimenes, Renato; La Rosa, Giovanni; Russo, Francesco; Segreto, Alberto; Sottile, Giuseppe; Catalano, Osvaldo

    2016-07-01

    The purpose of this contribution is to present the current status of the software architecture of the ASTRI SST-2M Cherenkov Camera. The ASTRI SST-2M telescope is an end-to-end prototype for the Small Size Telescope of the Cherenkov Telescope Array. The ASTRI camera is an innovative instrument based on SiPM detectors and has several internal hardware components. In this contribution we will give a brief description of the hardware components of the camera of the ASTRI SST-2M prototype and of their interconnections. Then we will present the outcome of the software architectural design process that we carried out in order to identify the main structural components of the camera software system and the relationships among them. We will analyze the architectural model that describes how the camera software is organized as a set of communicating blocks. Finally, we will show where these blocks are deployed in the hardware components and how they interact. We will describe in some detail the physical communication ports and external ancillary device management, the high-precision time-tag management, the fast data collection and the fast data exchange between different camera subsystems, and the interfacing with the external systems.

  5. AIRSAR Web-Based Data Processing

    NASA Technical Reports Server (NTRS)

    Chu, Anhua; Van Zyl, Jakob; Kim, Yunjin; Hensley, Scott; Lou, Yunling; Madsen, Soren; Chapman, Bruce; Imel, David; Durden, Stephen; Tung, Wayne

    2007-01-01

    The AIRSAR automated, Web-based data processing and distribution system is an integrated, end-to-end synthetic aperture radar (SAR) processing system. Designed to function under limited resources and rigorous demands, AIRSAR eliminates operational errors and provides for paperless archiving. It also provides a yearly tune-up of the processor on flight missions, as well as quality assurance with new radar modes and anomalous-data compensation. The software fully integrates a Web-based SAR data-user request subsystem, a data processing system to automatically generate co-registered multi-frequency images from both polarimetric and interferometric data collection modes in 80/40/20 MHz bandwidth, an automated verification quality assurance subsystem, and an automatic data distribution system for use in the remote-sensing community. Features include Survey Automation Processing, in which the software can automatically generate a quick-look image overnight, without operator intervention, from an entire 90-GB tape (32 MB/s) of SAR raw data. The software also allows product ordering and distribution via a Web-based user request system. To make AIRSAR more user-friendly, it has been designed to let users search by entering the desired mission flight line (Missions Searching) or to search for any mission flight line by entering the desired latitude and longitude (Map Searching). For precision image automation processing, the software generates the products according to each data processing request stored in the database via a queue management system. Users are able to have co-registered multi-frequency images generated automatically, as the software performs polarimetric and/or interferometric SAR data processing in ground and/or slant projection according to user processing requests for one of the 12 radar modes.

  6. An Autonomic Framework for Integrating Security and Quality of Service Support in Databases

    ERIC Educational Resources Information Center

    Alomari, Firas

    2013-01-01

    The back-end databases of multi-tiered applications are a major data security concern for enterprises. The abundance of these systems and the emergence of new and different threats require multiple and overlapping security mechanisms. Therefore, providing multiple and diverse database intrusion detection and prevention systems (IDPS) is a critical…

  7. Working with Pedagogical Agents: Understanding the "Back End" of an Intelligent Tutoring System

    ERIC Educational Resources Information Center

    Wolfe, Christopher; Widmer, Colin L.; Weil, Audrey M.; Cedillos-Whynott, Elizabeth M.

    2015-01-01

    Students in an undergraduate psychology course on Learning and Cognition used SKO (formerly AutoTutor Lite), an Intelligent Tutoring System, to create interactive lessons in which a pedagogic agent (animated avatar) engages users in a tutorial dialogue. After briefly describing the technology and underlying psychological theory, data from an…

  8. NASA Tech Briefs, October 2003

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Topics covered include: Cryogenic Temperature-Gradient Foam/Substrate Tensile Tester; Flight Test of an Intelligent Flight-Control System; Slat Heater Boxes for Thermal Vacuum Testing; System for Testing Thermal Insulation of Pipes; Electrical-Impedance-Based Ice-Thickness Gauges; Simulation System for Training in Laparoscopic Surgery; Flasher Powered by Photovoltaic Cells and Ultracapacitors; Improved Autoassociative Neural Networks; Toroidal-Core Microinductors Biased by Permanent Magnets; Using Correlated Photons to Suppress Background Noise; Atmospheric-Fade-Tolerant Tracking and Pointing in Wireless Optical Communication; Curved Focal-Plane Arrays Using Back-Illuminated High-Purity Photodetectors; Software for Displaying Data from Planetary Rovers; Software for Refining or Coarsening Computational Grids; Software for Diagnosis of Multiple Coordinated Spacecraft; Software Helps Retrieve Information Relevant to the User; Software for Simulating a Complex Robot; Software for Planning Scientific Activities on Mars; Software for Training in Pre-College Mathematics; Switching and Rectification in Carbon-Nanotube Junctions; Scandia-and-Yttria-Stabilized Zirconia for Thermal Barriers; Environmentally Safer, Less Toxic Fire-Extinguishing Agents; Multiaxial Temperature- and Time-Dependent Failure Model; Cloverleaf Vibratory Microgyroscope with Integrated Post; Single-Vector Calibration of Wind-Tunnel Force Balances; Microgyroscope with Vibrating Post as Rotation Transducer; Continuous Tuning and Calibration of Vibratory Gyroscopes; Compact, Pneumatically Actuated Filter Shuttle; Improved Bearingless Switched-Reluctance Motor; Fluorescent Quantum Dots for Biological Labeling; Growing Three-Dimensional Corneal Tissue in a Bioreactor; Scanning Tunneling Optical Resonance Microscopy; The Micro-Arcsecond Metrology Testbed; Detecting Moving Targets by Use of Soliton Resonances; and Finite-Element Methods for Real-Time Simulation of Surgery.

  9. Back-Arc Opening in the Western End of the Okinawa Trough Revealed From GNSS/Acoustic Measurements

    NASA Astrophysics Data System (ADS)

    Chen, Horng-Yue; Ikuta, Ryoya; Lin, Cheng-Horng; Hsu, Ya-Ju; Kohmi, Takeru; Wang, Chau-Chang; Yu, Shui-Beih; Tu, Yoko; Tsujii, Toshiaki; Ando, Masataka

    2018-01-01

    We measured seafloor movement using a Global Navigation Satellite Systems (GNSS)/Acoustic technique south of the rifting valley in the western end of the Okinawa Trough back-arc basin, 60 km east of the northeastern corner of Taiwan. The horizontal position of the seafloor benchmark, measured eight times between July 2012 and May 2016, showed a southeastward movement suggesting a back-arc opening of the Okinawa Trough. The average velocity of the seafloor benchmark shows a block motion together with Yonaguni Island. The westernmost part of the Ryukyu Arc rotates clockwise and is pulled apart from Taiwan Island, which should cause the expansion of the Yilan Plain, Taiwan. Comparing the motion of the seafloor benchmark with adjacent seismicity, we suggest a gentle episodic opening of the rifting valley accompanying moderate seismic activation, which differs from the case in the segment north of Yonaguni Island, where rapid dyke intrusion occurs with significant seismic activity.

  10. Information Extraction for System-Software Safety Analysis: Calendar Year 2007 Year-End Report

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2008-01-01

    This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis on the models to identify possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations; 4) perform discrete-time-based simulation on the models to investigate scenarios where these paths may play a role in failures and mishaps; and 5) identify resulting candidate scenarios for software integration testing. This paper describes new challenges in a NASA abort system case, and enhancements made to develop the integrated tool set.
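
    Step 3 above, finding paths from hazard sources to vulnerable entities and functions, is essentially a path search over the architecture model. A minimal sketch over an invented system-software graph:

    ```python
    # Enumerate simple paths from a hazard source to a vulnerable function in
    # a directed graph. The example graph is invented for illustration.
    from collections import deque

    edges = {
        "thruster_valve_fault": ["propulsion_ctrl_sw"],
        "propulsion_ctrl_sw": ["abort_sequencer", "telemetry"],
        "abort_sequencer": ["crew_escape_function"],
        "telemetry": [],
    }

    def hazard_paths(graph, source, target):
        """Breadth-first enumeration of all simple source -> target paths."""
        paths, queue = [], deque([[source]])
        while queue:
            path = queue.popleft()
            if path[-1] == target:
                paths.append(path)
                continue
            for nxt in graph.get(path[-1], []):
                if nxt not in path:      # keep paths simple (no cycles)
                    queue.append(path + [nxt])
        return paths

    for p in hazard_paths(edges, "thruster_valve_fault", "crew_escape_function"):
        print(" -> ".join(p))
    ```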

  11. FET commutated current-FED inverter

    NASA Technical Reports Server (NTRS)

    Rippel, Wally E. (Inventor); Edwards, Dean B. (Inventor)

    1983-01-01

    A shunt switch comprised of a field-effect transistor (Q.sub.1) is employed to commutate a current-fed inverter (10) using thyristors (SCR1, SCR2) or bijunction transistors (Q.sub.2, Q.sub.3) in a full-bridge (1, 2, 3, 4) or half-bridge (5, 6) and transformer (T.sub.1) configuration. In the case of thyristors, a tapped inductor (12) is employed to couple the inverter to a dc source to back-bias the thyristors during commutation. Alternatively, a commutation power supply (20) may be employed for that purpose. Diodes (D.sub.1, D.sub.2) in series with some voltage-dropping element (resistor R.sub.12, or resistors R.sub.1, R.sub.2, or Zener diodes D.sub.4, D.sub.5) are connected in parallel with the thyristors in the half-bridge and transformer configuration to assure sharing of the back-bias voltage. A clamp circuit comprised of a winding (18) negatively coupled to the inductor and a diode (D.sub.3) returns stored energy from the inductor to the power supply for efficient operation in buck or boost mode.

  12. Driving out errors through tight integration between software and automation.

    PubMed

    Reifsteck, Mark; Swanson, Thomas; Dallas, Mary

    2006-01-01

    A clear case has been made for using clinical IT to improve medication safety, particularly bar-code point-of-care medication administration and computerized practitioner order entry (CPOE) with clinical decision support. The equally important role of automation has been overlooked. When the two are tightly integrated, with pharmacy information serving as a hub, the distinctions between software and automation become blurred. A true end-to-end medication management system drives out errors from the dockside to the bedside. Presbyterian Healthcare Services in Albuquerque has been building such a system since 1999, beginning by automating pharmacy operations to support bar-coded medication administration. Encouraged by those results, it then began layering on software to further support clinician workflow and improve communication, culminating with the deployment of CPOE and clinical decision support. This combination, plus a hard-wired culture of safety, has resulted in a dramatically lower mortality and harm rate that could not have been achieved with a partial solution.

  13. Basic to Advanced InSAR Processing: GMTSAR

    NASA Astrophysics Data System (ADS)

    Sandwell, D. T.; Xu, X.; Baker, S.; Hogrelius, A.; Mellors, R. J.; Tong, X.; Wei, M.; Wessel, P.

    2017-12-01

    Monitoring crustal deformation using InSAR is becoming a standard technique for the science and application communities. Optimal use of the new data streams from Sentinel-1 and NISAR will require open software tools as well as education on the strengths and limitations of the InSAR methods. Over the past decade we have developed freely available, open-source software for processing InSAR data. The software relies on the Generic Mapping Tools (GMT) for the back-end data analysis and display and is thus called GMTSAR. With startup funding from NSF, we accelerated the development of GMTSAR to include more satellite data sources and provide better integration and distribution with GMT. In addition, with support from UNAVCO we have offered 6 GMTSAR short courses to educate mostly novice InSAR users. Currently, the software is used by hundreds of scientists and engineers around the world to study deformation at more than 4300 different sites. The most challenging aspect of the recent software development was the transition from image alignment using the cross-correlation method to a completely new alignment algorithm that uses only the precise orbital information to geometrically align images to an accuracy of better than 7 cm. This development was needed to process a new data type that is being acquired by the Sentinel-1A/B satellites. This combination of software and open data is transforming radar interferometry from a research tool into a fully operational time series analysis tool. Over the next 5 years we are planning to continue to broaden the user base through: improved software delivery methods; code hardening; better integration with data archives; support for high level products being developed for NISAR; and continued education and outreach.

  14. Genome Annotation Generator: a simple tool for generating and correcting WGS annotation tables for NCBI submission.

    PubMed

    Geib, Scott M; Hall, Brian; Derego, Theodore; Bremer, Forest T; Cannoles, Kyle; Sim, Sheina B

    2018-04-01

    One of the most overlooked, yet critical, components of a whole genome sequencing (WGS) project is the submission and curation of the data to a genomic repository, most commonly the National Center for Biotechnology Information (NCBI). While large genome centers or genome groups have developed software tools for post-annotation assembly filtering, annotation, and conversion into NCBI's annotation table format, these tools typically require back-end setup and connection to a Structured Query Language (SQL) database and/or some knowledge of programming (Perl, Python) to implement. With WGS becoming commonplace, genome sequencing projects are moving away from the genome centers and into the ecology or biology lab, where fewer resources are present to support the process of genome assembly curation. To fill this gap, we developed software to assess, filter, and transfer annotation, and to convert a draft genome assembly and annotation set into the NCBI annotation table (.tbl) format, facilitating submission to the NCBI Genome Assembly database. This software has no dependencies, is compatible across platforms, and utilizes a simple command to perform a variety of simple and complex post-analysis, pre-NCBI-submission WGS project tasks. The Genome Annotation Generator is a consistent and user-friendly bioinformatics tool that can be used to generate a .tbl file that is consistent with the NCBI submission pipeline. The Genome Annotation Generator achieves the goal of providing a publicly available tool that will facilitate the submission of annotated genome assemblies to the NCBI. It is useful for any individual researcher or research group that wishes to submit a genome assembly of their study system to the NCBI.
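
    The core of the conversion can be sketched as follows: annotation records are rendered in NCBI's feature-table (.tbl) layout, with a ">Feature" header per sequence, coordinate/type lines, and tab-indented qualifiers. The input records are invented, and the real Genome Annotation Generator handles many more cases (assembly filtering, partial features, and so on).

    ```python
    # Render a few invented annotation records in NCBI .tbl layout, sketching
    # the conversion the Genome Annotation Generator automates.
    features = [
        # (seq_id, start, stop, feature_type, qualifiers)
        ("scaffold_1", 101, 1840, "gene", {"locus_tag": "ABC_0001"}),
        ("scaffold_1", 101, 1840, "mRNA", {"product": "hypothetical protein"}),
    ]

    def to_tbl(feats):
        lines, current_seq = [], None
        for seq_id, start, stop, ftype, quals in feats:
            if seq_id != current_seq:            # new sequence: emit a header
                lines.append(f">Feature {seq_id}")
                current_seq = seq_id
            lines.append(f"{start}\t{stop}\t{ftype}")
            for key, value in quals.items():     # qualifiers are tab-indented
                lines.append(f"\t\t\t{key}\t{value}")
        return "\n".join(lines)

    print(to_tbl(features))
    ```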

  15. Genome Annotation Generator: a simple tool for generating and correcting WGS annotation tables for NCBI submission

    PubMed Central

    Hall, Brian; Derego, Theodore; Bremer, Forest T; Cannoles, Kyle

    2018-01-01

    Abstract. Background: One of the most overlooked, yet critical, components of a whole genome sequencing (WGS) project is the submission and curation of the data to a genomic repository, most commonly the National Center for Biotechnology Information (NCBI). While large genome centers or genome groups have developed software tools for post-annotation assembly filtering, annotation, and conversion into NCBI's annotation table format, these tools typically require back-end setup and connection to a Structured Query Language (SQL) database and/or some knowledge of programming (Perl, Python) to implement. With WGS becoming commonplace, genome sequencing projects are moving away from the genome centers and into the ecology or biology lab, where fewer resources are present to support the process of genome assembly curation. To fill this gap, we developed software to assess, filter, and transfer annotation, and to convert a draft genome assembly and annotation set into the NCBI annotation table (.tbl) format, facilitating submission to the NCBI Genome Assembly database. This software has no dependencies, is compatible across platforms, and utilizes a simple command to perform a variety of simple and complex post-analysis, pre-NCBI-submission WGS project tasks. Findings: The Genome Annotation Generator is a consistent and user-friendly bioinformatics tool that can be used to generate a .tbl file that is consistent with the NCBI submission pipeline. Conclusions: The Genome Annotation Generator achieves the goal of providing a publicly available tool that will facilitate the submission of annotated genome assemblies to the NCBI. It is useful for any individual researcher or research group that wishes to submit a genome assembly of their study system to the NCBI. PMID:29635297

  16. An online operational rainfall-monitoring resource for epidemic malaria early warning systems in Africa

    USGS Publications Warehouse

    Grover-Kopec, Emily; Kawano, Mika; Klaver, Robert W.; Blumenthal, Benno; Ceccato, Pietro; Connor, Stephen J.

    2005-01-01

    Periodic epidemics of malaria are a major public health problem for many sub-Saharan African countries. Populations in epidemic-prone areas have a poorly developed immunity to malaria and the disease remains life threatening to all age groups. The impact of epidemics could be minimized by prediction and improved prevention through timely vector control and deployment of appropriate drugs. Malaria Early Warning Systems are advocated as a means of improving the opportunity for preparedness and timely response. Rainfall is one of the major factors triggering epidemics in warm semi-arid and desert-fringe areas. Explosive epidemics often occur in these regions after excessive rains and, where these follow periods of drought and poor food security, can be especially severe. Consequently, rainfall monitoring forms one of the essential elements for the development of integrated Malaria Early Warning Systems for sub-Saharan Africa, as outlined by the World Health Organization. The Roll Back Malaria Technical Resource Network on Prevention and Control of Epidemics recommended that a simple indicator of changes in epidemic risk in regions of marginal transmission, consisting primarily of rainfall anomaly maps, could provide immediate benefit to early warning efforts. In response to these recommendations, the Famine Early Warning Systems Network produced maps that combine information about dekadal rainfall anomalies and epidemic malaria risk, available via their Africa Data Dissemination Service. These maps were later made available in a format that is directly compatible with HealthMapper, the mapping and surveillance software developed by the WHO's Communicable Disease Surveillance and Response Department. A new monitoring interface has recently been developed at the International Research Institute for Climate Prediction (IRI) that enables the user to gain a more contextual perspective of the current rainfall estimates by comparing them to previous seasons and climatological averages. These resources are available at no cost to the user and are updated on a routine basis.
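
    The rainfall-anomaly indicator described above reduces to comparing each dekad's (10-day period's) rainfall estimate against its climatological average and flagging large excesses. A minimal sketch with invented values:

    ```python
    # Flag dekads whose rainfall is well above the climatological average,
    # the simple epidemic-risk indicator described above. Values are invented.
    climatology_mm = {("Jul", 1): 42.0, ("Jul", 2): 38.5, ("Jul", 3): 40.2}
    observed_mm = {("Jul", 1): 95.0, ("Jul", 2): 61.0, ("Jul", 3): 12.0}

    for dekad, normal in climatology_mm.items():
        obs = observed_mm[dekad]
        anomaly = obs - normal                  # mm above/below normal
        pct = 100.0 * obs / normal              # percent of normal
        flag = "EXCESS: elevated risk" if pct > 150 else "normal/deficit"
        print(f"{dekad[0]} dekad {dekad[1]}: {anomaly:+.1f} mm "
              f"({pct:.0f}% of normal) {flag}")
    ```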

  17. An efficient approach to the deployment of complex open source information systems

    PubMed Central

    Cong, Truong Van Chi; Groeneveld, Eildert

    2011-01-01

    Complex open source information systems are usually implemented as component-based software to inherit the available functionality of existing software packages developed by third parties. Consequently, the deployment of these systems not only requires the installation of an operating system, an application framework and the configuration of services, but also needs to resolve the dependencies among components. The problem becomes more challenging when the application must be installed and used on different platforms such as Linux and Windows. To address this, an efficient approach using virtualization technology is suggested and discussed in this paper. The approach has been applied in our project to deploy a web-based integrated information system in molecular genetics labs. It is a low-cost solution that benefits both software developers and end-users. PMID:22102770

  18. Artificial Intelligence and Information Retrieval.

    ERIC Educational Resources Information Center

    Teodorescu, Ioana

    1987-01-01

    Compares artificial intelligence and information retrieval paradigms for natural language understanding, reviews progress to date, and outlines the applicability of artificial intelligence to question answering systems. A list of principal artificial intelligence software for database front end systems is appended. (CLB)

  19. jade: An End-To-End Data Transfer and Catalog Tool

    NASA Astrophysics Data System (ADS)

    Meade, P.

    2017-10-01

    The IceCube Neutrino Observatory is a cubic kilometer neutrino telescope located at the Geographic South Pole. IceCube collects 1 TB of data every day. An online filtering farm processes this data in real time and selects 10% to be sent via satellite to the main data center at the University of Wisconsin-Madison. IceCube has two year-round on-site operators. New operators are hired every year, due to the hard conditions of wintering at the South Pole. These operators are tasked with the daily operations of running a complex detector in serious isolation conditions. One of the systems they operate is the data archiving and transfer system. Due to these challenging operational conditions, the data archive and transfer system must above all be simple and robust. It must also share the limited resource of satellite bandwidth, and collect and preserve useful metadata. The original data archive and transfer software for IceCube was written in 2005. After running in production for several years, the decision was taken to fully rewrite it, in order to address a number of structural drawbacks. The new data archive and transfer software (JADE2) has been in production for several months providing improved performance and resiliency. One of the main goals for JADE2 is to provide a unified system that handles the IceCube data end-to-end: from collection at the South Pole, all the way to long-term archive and preservation in dedicated repositories at the North. In this contribution, we describe our experiences and lessons learned from developing and operating the data archive and transfer software for a particle physics experiment in extreme operational conditions like IceCube.
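
    The end-to-end pattern described, checksum at the source, record metadata in a catalog, and verify after transfer, can be sketched as follows. File names and the catalog layout are illustrative and are not JADE2's actual schema.

    ```python
    # Sketch of a checksum-verified transfer with a metadata catalog, in the
    # spirit of the archive/transfer workflow described above.
    import hashlib, json, os, shutil, tempfile

    def sha512sum(path):
        h = hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def archive(src, dst_dir, catalog):
        entry = {"file": os.path.basename(src),
                 "bytes": os.path.getsize(src),
                 "sha512": sha512sum(src)}
        dst = shutil.copy(src, dst_dir)          # stand-in for the satellite link
        assert sha512sum(dst) == entry["sha512"], "corruption in transit"
        catalog.append(entry)                    # preserve useful metadata
        return dst

    with tempfile.TemporaryDirectory() as pole, tempfile.TemporaryDirectory() as north:
        raw = os.path.join(pole, "run_001.dat")
        with open(raw, "wb") as f:
            f.write(os.urandom(4096))            # fake detector data
        catalog = []
        archive(raw, north, catalog)
        print(json.dumps(catalog, indent=2))
    ```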

  20. Space Communications Artificial Intelligence for Link Evaluation Terminal (SCAILET)

    NASA Technical Reports Server (NTRS)

    Shahidi, Anoosh

    1991-01-01

    A software application to assist end-users of the Link Evaluation Terminal (LET) for satellite communication is being developed. This software application incorporates artificial intelligence (AI) techniques and will be deployed as an interface to LET. The high burst rate (HBR) LET provides 30 GHz transmitting/20 GHz receiving, 220/110 Mbps capability for wideband communications technology experiments with the Advanced Communications Technology Satellite (ACTS). The HBR LET and ACTS are being developed at the NASA Lewis Research Center. The HBR LET can monitor and evaluate the integrity of the HBR communications uplink and downlink to the ACTS satellite. The uplink HBR transmission is performed by bursting the bit pattern as a modulated signal to the satellite. By comparing the transmitted bit pattern with the received bit pattern, HBR LET can determine the bit error rate (BER) under various atmospheric conditions. An algorithm for power augmentation is applied to enhance the system's BER performance at reduced signal strength caused by adverse conditions. Programming scripts, defined by the design engineer, set up the HBR LET terminal by programming subsystem devices through IEEE-488 interfaces. However, the scripts are difficult to use, require a steep learning curve, are cryptic, and are hard to maintain. The combination of the learning curve and the complexities involved in editing the script files may discourage end-users from utilizing the full capabilities of the HBR LET system. An intelligent assistant component of SCAILET that addresses critical end-user needs in the programming of the HBR LET system, as anticipated by its developers, is described. A close look is taken at the various steps involved in writing ECM software for a C&P computer and at how the intelligent assistant improves the HBR LET system and enhances the end-user's ability to perform experiments.
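
    The BER measurement described above amounts to comparing the transmitted bit pattern with the received pattern and counting mismatches. A minimal sketch with a simulated noisy link:

    ```python
    # Estimate bit error rate (BER) by comparing sent and received patterns.
    # The link here is simulated; real measurements come from the HBR LET.
    import random

    def bit_error_rate(sent, received):
        errors = sum(s != r for s, r in zip(sent, received))
        return errors / len(sent)

    random.seed(42)
    sent = [random.randint(0, 1) for _ in range(100_000)]
    # Simulate an uplink/downlink that flips each bit with probability 1e-3.
    received = [b ^ (random.random() < 1e-3) for b in sent]
    print(f"Measured BER: {bit_error_rate(sent, received):.2e}")
    ```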

  1. Software Tools For Building Decision-support Models For Flood Emergency Situations

    NASA Astrophysics Data System (ADS)

    Garrote, L.; Molina, M.; Ruiz, J. M.; Mosquera, J. C.

    The SAIDA decision-support system was developed by the Spanish Ministry of the Environment to provide assistance to decision-makers during flood situations. SAIDA has been tentatively implemented in two test basins, Jucar and Guadalhorce, and the Ministry is currently planning to have it implemented in all major Spanish basins in a few years' time. During the development cycle of SAIDA, the need for providing assistance to end-users in model definition and calibration was clearly identified. System developers usually emphasise abstraction and generality with the goal of providing a versatile software environment. End users, on the other hand, require concretion and specificity to adapt the general model to their local basins. As decision-support models become more complex, the gap between model developers and users gets wider: who takes care of model definition, calibration and validation? Initially, model developers perform these tasks, but the scope is usually limited to a few small test basins. Before the model enters the operational stage, end users must get involved in model construction and calibration, in order to gain confidence in the model recommendations. However, getting the users involved in these activities is a difficult task. The goal of this research is to develop representation techniques for simulation and management models in order to define, develop and validate a mechanism, supported by a software environment, oriented to provide assistance to the end-user in building decision models for the prediction and management of river floods in real time. The system is based on three main building blocks: a library of simulators of the physical system, an editor to assist the user in building simulation models, and a machine learning method to calibrate decision models based on the simulation models provided by the user.

  2. Sign: large-scale gene network estimation environment for high performance computing.

    PubMed

    Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer", which is planned to achieve 10 petaflops in 2012, and for other high-performance computing environments, including the Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and are therefore designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/.

  3. Satellite freeze forecast system

    NASA Technical Reports Server (NTRS)

    Martsolf, J. D. (Principal Investigator)

    1983-01-01

    Provisions for back-up operations for the satellite freeze forecast system are discussed, including software and hardware maintenance and DS/1000-1V linkage; troubleshooting; and digitized radar usage. The documentation developed; dissemination of data products via television and the IFAS computer network; data base management; predictive models; the installation of and progress towards the operational status of key stations; and digital data acquisition are also considered. The addition of dew-point temperature to the P-model is outlined.

  4. Next Generation Cloud-based Science Data Systems and Their Implications on Data and Software Stewardship, Preservation, and Provenance

    NASA Astrophysics Data System (ADS)

    Hua, H.; Manipon, G.; Starch, M.

    2017-12-01

    NASA's upcoming missions are expected to generate data volumes at least an order of magnitude larger than those of current missions. A significant increase in data processing, data rates, data volumes, and long-term data archive capabilities is needed. Consequently, new challenges are emerging that impact traditional data and software management approaches. At large scales, next-generation science data systems are exploring the move onto cloud computing paradigms to support these increased needs. New implications, such as costs, data movement, collocation of data systems and archives, and moving processing closer to the data, may result in changes to the stewardship, preservation, and provenance of science data and software. With more science data systems being on-boarded onto cloud computing facilities, we can expect more Earth science data records to be both generated and kept in the cloud. But at large scales, the cost of processing and storing global data may impact architectural and system designs. Data systems will trade the cost of keeping data in the cloud against data life-cycle approaches that move "colder" data back to traditional on-premise facilities. How will this impact data citation and processing-software stewardship? What are the impacts of cloud-based on-demand processing, and what is its effect on reproducibility and provenance? Similarly, with more science processing software being moved onto cloud, virtual machine, and container-based approaches, more opportunities arise for improved stewardship and preservation. But will the science community trust data reprocessed years or decades later? We will also explore emerging questions about the stewardship of the science data system software that generates the science data records, both during and after the life of the mission.

  5. Ease of adoption of clinical natural language processing software: An evaluation of five systems.

    PubMed

    Zheng, Kai; Vydiswaran, V G Vinod; Liu, Yang; Wang, Yue; Stubbs, Amber; Uzuner, Özlem; Gururaj, Anupama E; Bayer, Samuel; Aberdeen, John; Rumshisky, Anna; Pakhomov, Serguei; Liu, Hongfang; Xu, Hua

    2015-12-01

    In recognition of potential barriers that may inhibit the widespread adoption of biomedical software, the 2014 i2b2 Challenge introduced a special track, Track 3 - Software Usability Assessment, in order to develop a better understanding of the adoption issues that might be associated with state-of-the-art clinical NLP systems. This paper reports the ease of adoption assessment methods we developed for this track, and the results of evaluating five clinical NLP system submissions. A team of human evaluators performed a series of scripted adoptability test tasks with each of the participating systems. The evaluation team consisted of four "expert evaluators" with training in computer science, and eight "end user evaluators" with mixed backgrounds in medicine, nursing, pharmacy, and health informatics. We assessed how easy it is to adopt the submitted systems along the following three dimensions: communication effectiveness (i.e., how effective a system is in communicating its designed objectives to the intended audience), effort required to install, and effort required to use. We used a formal software usability testing tool, TURF, to record the evaluators' interactions with the systems and 'think-aloud' data revealing their thought processes when installing and using the systems and when resolving unexpected issues. Overall, the ease of adoption ratings that the five systems received are unsatisfactory. Installation of some of the systems proved to be rather difficult, and some systems failed to adequately communicate their designed objectives to intended adopters. Further, the average ratings provided by the end user evaluators on ease of use and ease of interpreting output are -0.35 and -0.53, respectively, indicating that this group of users generally deemed the systems extremely difficult to work with. While the ratings provided by the expert evaluators are higher, 0.6 and 0.45, respectively, these ratings are still low, indicating that they also experienced considerable struggles. The results of the Track 3 evaluation show that the adoptability of the five participating clinical NLP systems has a great margin for improvement. Remedy strategies suggested by the evaluators included (1) more detailed and operating-system-specific use instructions; (2) provision of more pertinent onscreen feedback for easier diagnosis of problems; (3) including screen walk-throughs in use instructions so users know what to expect and what might have gone wrong; (4) avoiding jargon and acronyms in materials intended for end users; and (5) packaging required prerequisites within software distributions so that prospective adopters of the software do not have to obtain each of the third-party components on their own.

  6. Software Architecture to Support the Evolution of the ISRU RESOLVE Engineering Breadboard Unit 2 (EBU2)

    NASA Technical Reports Server (NTRS)

    Moss, Thomas; Nurge, Mark; Perusich, Stephen

    2011-01-01

    The In-Situ Resource Utilization (ISRU) Regolith & Environmental Science and Oxygen & Lunar Volatiles Extraction (RESOLVE) software provides operation of the physical plant from a remote location with a high-level interface that can access and control the data from external software applications of other subsystems. This software allows autonomous control over the entire system with manual computer control of individual system/process components. It gives non-programmer operators the capability to easily modify the high-level autonomous sequencing while the software is in operation, as well as the ability to modify the low-level, file-based sequences prior to the system operation. Local automated control in a distributed system is also enabled where component control is maintained during the loss of network connectivity with the remote workstation. This innovation also minimizes network traffic. The software architecture commands and controls the latest generation of RESOLVE processes used to obtain, process, and quantify lunar regolith. The system is grouped into six sub-processes: Drill, Crush, Reactor, Lunar Water Resource Demonstration (LWRD), Regolith Volatiles Characterization (RVC), and Regolith Oxygen Extraction (ROE). Some processes are independent, some are dependent on other processes, and some are independent but run concurrently with other processes. The first goal is to analyze the volatiles emanating from lunar regolith, such as water, carbon monoxide, carbon dioxide, ammonia, hydrogen, and others. This is done by heating the soil and analyzing and capturing the volatilized product. The second goal is to produce water by reducing the soil at high temperatures with hydrogen. This is done by raising the reactor temperature in the range of 800 to 900 C, causing the reaction to progress by adding hydrogen, and then capturing the water product in a desiccant bed. The software needs to run the entire unit and all sub-processes; however, throughout testing, many variables and parameters need to be changed as more is learned about the system operation. The Master Events Controller (MEC) is run on a standard laptop PC using Windows XP. This PC runs in parallel to another laptop that monitors the GC, and a third PC that monitors the drilling/ crushing operation. These three PCs interface to the process through a CompactRIO, OPC Servers, and modems.

  7. Non Contacting Evaluation of Strains and Cracking Using Optical and Infrared Imaging Techniques

    DTIC Science & Technology

    1988-08-22

    Compatible Zenith Z-386 microcomputer with plotter II. 3-D Motion Measuring System 1. Complete OPTOTRAK three dimensional digitizing system. System includes...acquisition unit - 16 single ended analog input channels 3. Data Analysis Package software (KINEPLOT) 4. Extra OPTOTRAK Camera (max 224 per system

  8. Multi-version software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1989-01-01

    A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and the coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between the coverage provided by some higher metrics and the elimination of faults in the code. Back-to-back testing continued to prove an efficient mechanism for the removal of uncorrelated faults and of common-cause faults of variable span. Work also continued on software reliability estimation methods based on non-random sampling and on the relationship between software reliability and the code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were completed, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the acceptance testing scheme.
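
    The Acceptance Voting scheme evaluated above can be illustrated with a short sketch: each version's output must first pass an acceptance test, and a majority vote is then taken over the surviving outputs. This is a reconstruction from the abstract's description, not the authors' simulation code.

        from collections import Counter

        def acceptance_vote(outputs, acceptance_test):
            """Acceptance Voting, sketched: filter the N versions' outputs
            through an acceptance test, then majority-vote the survivors."""
            accepted = [o for o in outputs if acceptance_test(o)]
            if not accepted:
                raise RuntimeError("no version passed the acceptance test")
            value, count = Counter(accepted).most_common(1)[0]
            if count <= len(accepted) // 2:
                raise RuntimeError("no majority among accepted outputs")
            return value

        # Example: three versions compute a square root of 4; one is faulty.
        results = [2.0, 2.0, 1.5]
        print(acceptance_vote(results, lambda r: abs(r * r - 4.0) < 0.1))  # -> 2.0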

  9. Electrochemical variational study of donor/acceptor orbital mixing and electronic coupling in cyanide-bridged mixed-valence complexes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Yuhuua; Hupp, J.T.

    1992-07-08

    Cyanide-bridged mixed-valence complexes are interesting examples of strongly covalently linked redox systems which, nevertheless, exist in valence-localized form. As mixed-valence species, they display fairly intense intervalence (or metal-to-metal) charge-transfer transitions (ε ≈ 3000 M⁻¹ cm⁻¹), which tend to be shifted toward the visible region from the near-infrared on account of substantial redox asymmetry. The authors have recently succeeded in obtaining (by femtosecond transient absorbance spectroscopy) a direct measure of the thermal kinetics (k_ET) of the highly exothermic back-electron-transfer reaction which follows intervalence excitation in one of these complexes, (H₃N)₅Ru-NC-Fe(CN)₅⁻.

  10. 1.6  MW peak power, 90  ps all-solid-state laser from an aberration self-compensated double-passing end-pumped Nd:YVO4 rod amplifier.

    PubMed

    Wang, Chunhua; Liu, Chong; Shen, Lifeng; Zhao, Zhiliang; Liu, Bin; Jiang, Hongbo

    2016-03-20

    In this paper, a delicately designed double-passing end-pumped Nd:YVO4 rod amplifier is reported that produces 10.2 W average laser output when seeded by a 6 mW Nd:YVO4 microchip laser at a repetition rate of 70 kHz with a pulse duration of 90 ps. A pulse peak power of ∼1.6 MW and a pulse energy of ∼143 μJ are achieved. The beam quality is well preserved by a double-passing configuration for spherical-aberration compensation. The laser-beam size in the amplifier is optimized to prevent unwanted damage from the high pulse peak-power density. This study provides a simple and robust picosecond all-solid-state master oscillator power amplifier system with both high peak power and high beam quality, which shows great potential in micromachining.
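
    The reported figures are mutually consistent under the standard definitions (pulse energy as average power divided by repetition rate, peak power as energy divided by pulse duration). The following back-of-the-envelope check, not taken from the paper, reproduces them:

        # Consistency check using standard definitions: E = P_avg / f_rep,
        # P_peak ~ E / tau (rectangular-pulse estimate).
        P_avg = 10.2        # W, average output power
        f_rep = 70e3        # Hz, repetition rate
        tau = 90e-12        # s, pulse duration

        E = P_avg / f_rep   # pulse energy
        P_peak = E / tau    # approximate peak power
        print(f"pulse energy ~ {E*1e6:.0f} uJ")      # ~146 uJ (paper: ~143 uJ)
        print(f"peak power  ~ {P_peak/1e6:.1f} MW")  # ~1.6 MW (paper: ~1.6 MW)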

  11. Sn-based Ge/Ge{sub 0.975}Sn{sub 0.025}/Ge p-i-n photodetector operated with back-side illumination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, C.; Li, H.; Huang, S. H.

    2016-04-11

    We report an investigation of a GeSn-based p-i-n photodetector grown on a Ge wafer that collects the light signal from the back of the wafer. Temperature dependent absorption measurements performed over a wide temperature range (300 K down to 25 K) show that (a) absorption starts at the indirect bandgap of the active GeSn layer and continues up to the direct bandgap of the Ge wafer, and (b) the peak responsivity increases rapidly at first with decreasing temperature, then increases more slowly, followed by a decrease at the lower temperatures. The maximum responsivity occurs at 125 K, which can easily be reached with the use of liquid nitrogen. The temperature dependence of the photocurrent is analyzed by taking into consideration the temperature dependence of the electron and hole mobility in the active layer, and the analysis result is in reasonable agreement with the data in the temperature regime where the rapid increase occurs. This investigation demonstrates the feasibility of a GeSn-based photodiode that can be operated with back-side illumination for applications in image sensing systems.

  12. Sequence similarity is more relevant than species specificity in probabilistic backtranslation.

    PubMed

    Ferro, Alfredo; Giugno, Rosalba; Pigola, Giuseppe; Pulvirenti, Alfredo; Di Pietro, Cinzia; Purrello, Michele; Ragusa, Marco

    2007-02-21

    Backtranslation is the process of decoding a sequence of amino acids into the corresponding codons. All synthetic gene design systems include a backtranslation module. The degeneracy of the genetic code makes backtranslation potentially ambiguous, since most amino acids are encoded by multiple codons. The common approach to overcoming this difficulty is based on imitation of codon usage within the target species. This paper describes EasyBack, a new parameter-free, fully automated software for backtranslation using Hidden Markov Models. EasyBack is not based on imitation of codon usage within the target species, but instead uses a sequence-similarity criterion. The model is trained with a set of proteins with known cDNA coding sequences, constructed from the input protein by querying the NCBI databases with BLAST. Unlike existing software, the proposed method allows the quality of prediction to be estimated. When tested on a group of proteins that show different degrees of sequence conservation, EasyBack outperforms other published methods in terms of precision. The prediction quality of a protein backtranslation method is markedly increased by replacing the most-used-codon-in-the-same-species criterion with a Hidden Markov Model trained on a set of the most similar sequences from all species. Moreover, the proposed method allows the quality of prediction to be estimated probabilistically.
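
    For contrast with EasyBack's similarity-trained HMM, the conventional codon-usage baseline it is compared against can be sketched in a few lines. The usage table below is a tiny invented fragment, not real codon-usage statistics for any organism.

        # Baseline backtranslation by most-frequent codon, the approach the
        # paper argues EasyBack improves on. Table entries are illustrative.
        MOST_USED_CODON = {
            "M": "ATG", "W": "TGG",      # unambiguous amino acids
            "K": "AAA", "N": "AAC",      # chosen by (hypothetical) usage counts
            "F": "TTC", "L": "CTG",
        }

        def backtranslate(protein):
            """Decode an amino-acid string into one candidate coding sequence."""
            return "".join(MOST_USED_CODON[aa] for aa in protein)

        print(backtranslate("MKNLFW"))  # -> ATGAAAAACCTGTTCTGG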

  13. Objective measurement of human tolerance to +G sub z acceleration stress. Ph.D. Thesis - Univ. of N. Indiana

    NASA Technical Reports Server (NTRS)

    Rositano, S. A.

    1980-01-01

    The efficacy of a new objective technique using a transcutaneous Doppler flowmeter to monitor superficial temporal artery blood flow velocity during acceleration was investigated. The results were correlated with current objective and subjective G tolerance end points. In over 1300 centrifuge runs, retrograde eye-level blood flow leading to total flow cessation was consistently recorded and preceded the visual field deterioration leading to blackout by 3 to 23 seconds. The new method was successfully applied as an objective indication of tolerance in a variety of test situations, including evaluation of g-suits, straining maneuvers, and 13 deg, 45 deg, and 65 deg seat back angles.

  14. IBM techexplorer and MathML: Interactive Multimodal Scientific Documents

    NASA Astrophysics Data System (ADS)

    Diaz, Angel

    2001-06-01

    The World Wide Web provides a standard publishing platform for disseminating scientific and technical articles, books, journals, courseware, or even homework on the internet; the transition from paper to the web has brought new opportunities for creating interactive content. Students, scientists, and engineers are now faced with the task of rendering the 2D presentational structure of mathematics, harnessing the wealth of scientific and technical software, and creating truly accessible scientific portals across international boundaries and markets. The recent emergence of World Wide Web Consortium (W3C) standards such as the Mathematical Markup Language (MathML), the Extensible Stylesheet Language (XSL), and Aural CSS (ACSS) provides a foundation whereby mathematics can be displayed, enlivened, computed, and audio formatted. With interoperability ensured by standards, software applications can be easily brought together to create extensible and interactive scientific content. In this presentation we will provide an overview of the IBM techexplorer Hypermedia Browser, a web browser plug-in and ActiveX control aimed at bringing interactive mathematics to the masses across platforms and applications. We will demonstrate "live" mathematics, where documents that contain MathML expressions can be edited and computed right inside your favorite web browser. This demonstration will be generalized as we show how MathML can be used to enliven even PowerPoint presentations. Finally, we will close the loop by demonstrating a novel approach to spoken mathematics based on MathML, DOM, XSL, ACSS, techexplorer, and IBM ViaVoice. By making use of techexplorer as the glue that binds the rendered content to the web browser, the back-end computation software, the Java applets that augment the exposition, and voice-rendering systems such as ViaVoice, authors can indeed create truly extensible and interactive scientific content. For more information see: [http://www.software.ibm.com/techexplorer], [http://www.alphaworks.ibm.com], [http://www.w3.org]

  15. An Overview of Advanced Data Acquisition System (ADAS)

    NASA Technical Reports Server (NTRS)

    Mata, Carlos T.; Steinrock, T. (Technical Monitor)

    2001-01-01

    The paper discusses the following: 1. Historical background. 2. What is ADAS? 3. R and D status. 4. Reliability/cost examples (1, 2, and 3). 5. What's new? 6. Technical advantages. 7. NASA relevance. 8. NASA plans/options. 9. Remaining R and D. 10. Applications. 11. Product benefits. 12. Commercial advantages. 13. Intellectual property. Aerospace industry requires highly reliable data acquisition systems. Traditional acquisition systems employ end-to-end hardware and software redundancy. Typically, redundancy adds weight, cost, power consumption, and complexity.

  16. SENSOR: a tool for the simulation of hyperspectral remote sensing systems

    NASA Astrophysics Data System (ADS)

    Börner, Anko; Wiest, Lorenz; Keller, Peter; Reulke, Ralf; Richter, Rolf; Schaepman, Michael; Schläpfer, Daniel

    The consistent end-to-end simulation of airborne and spaceborne earth remote sensing systems is an important task, and sometimes the only way for the adaptation and optimisation of a sensor and its observation conditions, the choice and test of algorithms for data processing, error estimation and the evaluation of the capabilities of the whole sensor system. The presented software simulator SENSOR (Software Environment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray-tracing algorithm. The second part of the simulation environment considers the radiometry. It calculates the at-sensor radiance using a pre-calculated multidimensional lookup-table taking the atmospheric influence on the radiation into account. The third part consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimisation requires the additional application of task-specific data processing algorithms. The principle of the end-to-end-simulation approach is explained, all relevant concepts of SENSOR are discussed, and first examples of its use are given. The verification of SENSOR is demonstrated. This work is closely related to the Airborne PRISM Experiment (APEX), an airborne imaging spectrometer funded by the European Space Agency.
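
    The three-part structure described above (geometry, lookup-table radiometry, and an optical/electronic sensor model) can be caricatured per pixel as follows. Every function name, parameter, and number here is an illustrative assumption, not part of SENSOR itself.

        import numpy as np

        def simulate_pixel(geometry, reflectance, lut, gain=1.0, noise_dn=2.0):
            """Toy end-to-end simulation in the three-stage structure the
            abstract describes; all details are illustrative assumptions.
            1) geometry: sun/view angles from a (here, precomputed) ray trace
            2) radiometry: at-sensor radiance interpolated from a lookup table
            3) sensor: optics/electronics turn radiance into a digital number
            """
            sun_zenith, view_zenith = geometry
            # Stage 2: real LUTs are multidimensional; this one is 1-D for brevity.
            radiance = np.interp(sun_zenith, lut["sun_zenith"], lut["radiance"])
            radiance *= reflectance * np.cos(np.radians(view_zenith))
            # Stage 3: simple gain plus additive electronic noise.
            dn = gain * radiance + np.random.normal(0.0, noise_dn)
            return max(0.0, dn)

        lut = {"sun_zenith": np.array([0, 30, 60]), "radiance": np.array([100, 80, 40])}
        print(simulate_pixel((25.0, 10.0), reflectance=0.3, lut=lut))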

  17. Strategies for distant speech recognition in reverberant environments

    NASA Astrophysics Data System (ADS)

    Delcroix, Marc; Yoshioka, Takuya; Ogawa, Atsunori; Kubo, Yotaro; Fujimoto, Masakiyo; Ito, Nobutaka; Kinoshita, Keisuke; Espi, Miquel; Araki, Shoko; Hori, Takaaki; Nakatani, Tomohiro

    2015-12-01

    Reverberation and noise are known to severely affect the automatic speech recognition (ASR) performance of speech recorded by distant microphones. Therefore, we must deal with reverberation if we are to realize high-performance hands-free speech recognition. In this paper, we review a recognition system that we developed at our laboratory to deal with reverberant speech. The system consists of a speech enhancement (SE) front-end that employs long-term linear prediction-based dereverberation followed by noise reduction. We combine our SE front-end with an ASR back-end that uses neural networks for acoustic and language modeling. The proposed system achieved top scores on the ASR task of the REVERB challenge. This paper describes the different technologies used in our system and presents detailed experimental results that justify our implementation choices and may provide hints for designing distant ASR systems.
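
    The core of the long-term linear-prediction dereverberation in the SE front-end can be sketched in the time domain for a single channel. Practical systems of this kind (e.g., the WPE family) operate per STFT subband and iterate, so this is only the underlying idea, with hypothetical parameter values.

        import numpy as np

        def delayed_linear_prediction(x, order=30, delay=40):
            """Single-channel sketch of long-term linear-prediction
            dereverberation: predict the late reverberant tail from samples
            at least `delay` steps in the past and subtract it."""
            n = len(x)
            X = np.zeros((n, order))
            for k in range(order):
                shift = delay + k
                X[shift:, k] = x[: n - shift]
            coeffs, *_ = np.linalg.lstsq(X, x, rcond=None)  # least-squares fit
            late_reverb = X @ coeffs
            return x - late_reverb

        # Toy usage: a clean impulse plus a decaying artificial "tail".
        clean = np.zeros(400); clean[10] = 1.0
        tail = np.concatenate([np.zeros(60), 0.5 * clean[:-60]])
        print(np.abs(delayed_linear_prediction(clean + tail)).argmax())  # expected: 10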

  18. Fiji: an open-source platform for biological-image analysis.

    PubMed

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2012-06-28

    Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.
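
    Rapid prototyping with Fiji's scripting support looks roughly like the following Jython (Python-syntax) script, runnable from Fiji's script editor; the specific image type and plugin call are an arbitrary example.

        # Minimal Fiji/ImageJ script in Jython, one of the scripting
        # languages Fiji bundles. Run inside Fiji with language set to Python.
        from ij import IJ

        # Create a test image, apply the Gaussian Blur plugin, show the result.
        imp = IJ.createImage("demo", "8-bit ramp", 256, 256, 1)
        IJ.run(imp, "Gaussian Blur...", "sigma=2")
        imp.show()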

  19. [Three-dimensional finite element analysis of maxillary anterior teeth retraction force system in light wire technique].

    PubMed

    Zhang, Xiangfeng; Wang, Chao; Xia, Xi; Deng, Feng; Zhang, Yi

    2015-06-01

    This study aims to construct a three-dimensional finite element model of a maxillary anterior teeth retraction force system in the light wire technique, and to investigate the differences in hydrostatic pressure and initial displacement of the upper anterior teeth under different torque values of the tip-back bend. A geometric three-dimensional model of the maxillary bone, including all the upper teeth, was obtained via CT scan. To construct the force system model, lingual brackets and wire were built using the SolidWorks software, and the brackets and wire were assembled onto the teeth. ANSYS was used to calculate the hydrostatic pressure and the initial displacement of the maxillary anterior teeth under tip-back bend moments of 15, 30, 45, 60, and 75 N·mm when the class II elastic force was 0.556 N. Hydrostatic pressure was concentrated in the root apices and cervical margin of the upper anterior teeth. Distal tipping and relative intrusive displacement were observed. The hydrostatic pressure and initial displacement of the upper canine were greater than those of the central and lateral incisors. The hydrostatic pressure and initial intrusive displacement increased with an increase in tip-back bend moment. The lingual retraction force system of the maxillary anterior teeth in the light wire technique can be applied safely and controllably. The type and quantity of tooth movement can be controlled by altering the tip-back bend moment.

  20. Progress towards an Optimization Methodology for Combustion-Driven Portable Thermoelectric Power Generation Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Shankar; Karri, Naveen K.; Gogna, Pawan K.

    2012-03-13

    Enormous military and commercial interest exists in developing quiet, lightweight, and compact thermoelectric (TE) power generation systems. This paper investigates design integration and analysis of an advanced TE power generation system implementing JP-8 fueled combustion and thermal recuperation. Design and development of a portable TE power system using a JP-8 combustor as a high-temperature heat source, with process flows that depend on efficient heat generation, transfer, and recovery within the system, are explored. Design optimization of the system required considering the combustion system efficiency and TE conversion efficiency simultaneously. The combustor performance and TE sub-system performance were coupled directly through exhaust temperatures, fuel and air mass flow rates, heat exchanger performance, subsequent hot-side temperatures, and cold-side cooling techniques and temperatures. Systematic investigation of this system relied on accurate thermodynamic modeling of complex, high-temperature combustion processes concomitantly with detailed thermoelectric converter thermal/mechanical modeling. To this end, this work reports on design integration of system-level process flow simulations using the commercial software CHEMCAD with in-house thermoelectric converter and module optimization, and heat exchanger analyses using the COMSOL software. High-performance, high-temperature TE materials and segmented TE element designs are incorporated in coupled design analyses to achieve predicted TE subsystem-level conversion efficiencies exceeding 10%. These TE advances are integrated with a high-performance microtechnology combustion reactor based on recent advances at the Pacific Northwest National Laboratory (PNNL). Predictions from this coupled simulation established a basis for optimal selection of fuel and air flow rates, thermoelectric module design and operating conditions, and microtechnology heat-exchanger design criteria. This paper will discuss this simulation process, which leads directly to system efficiency power maps defining potentially available optimal system operating conditions and regimes. This coupled simulation approach enables pathways for integrated use of high-performance combustor components, high-performance TE devices, and microtechnologies to produce a compact, lightweight, combustion-driven TE power system prototype that operates on common fuels.
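
    For orientation, the textbook upper bound on thermoelectric conversion efficiency for a device figure of merit ZT, the kind of quantity such coupled simulations optimize, can be computed as follows. The operating point in the example is assumed, not taken from the paper.

        import math

        def te_max_efficiency(T_hot, T_cold, ZT):
            """Standard maximum TE conversion efficiency (textbook formula,
            not from this paper):
                eta = (1 - Tc/Th) * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + Tc/Th)
            """
            carnot = 1.0 - T_cold / T_hot
            root = math.sqrt(1.0 + ZT)
            return carnot * (root - 1.0) / (root + T_cold / T_hot)

        # Illustrative operating point (temperatures and ZT assumed):
        print(f"{te_max_efficiency(773.0, 323.0, ZT=1.0):.1%}")  # -> 13.2%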

  1. BioMon: A Google Earth Based Continuous Biomass Monitoring System (Demo Paper)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vatsavai, Raju

    2009-01-01

    We demonstrate a novel Google Earth based visualization system for continuous monitoring of biomass at regional and global scales. This system is integrated with a back-end spatiotemporal data mining system that continuously detects changes using high temporal resolution MODIS images. In addition to the visualization, we demonstrate novel query features of the system that provide insights into the current conditions of the landscape.

  2. Dielectric relaxation in 0-3 PVDF-Ba(Fe{sub 1/2}Nb{sub 1/2})O{sub 3} composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chandra, K. P., E-mail: kpchandra23@gmail.com; Singh, Rajan; Kulkarni, A. R., E-mail: ajit2957@gmail.com

    2016-05-06

    (1-x)PVDF-xBa(Fe{sub 1/2}Nb{sub 1/2})O{sub 3} ceramic-polymer composites with x = 0.025, 0.05, 0.10, 0.15 were prepared using a melt-mixing technique. The crystal symmetry, space group, and unit cell dimensions were determined from the XRD data of Ba(Fe{sub 1/2}Nb{sub 1/2})O{sub 3} using the FullProf software, whereas crystallite size and lattice strain were estimated using the Williamson-Hall approach. The distribution of Ba(Fe{sub 1/2}Nb{sub 1/2})O{sub 3} particles in the PVDF matrix was examined on the cryo-fractured surfaces using a scanning electron microscope. Cole-Cole and pseudo Cole-Cole analysis suggested the dielectric relaxation in this system to be of non-Debye type. The filler-concentration-dependent real and imaginary parts of the dielectric constant, as well as the ac conductivity data, followed definite exponential-growth-type trends of variation.
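
    The Cole-Cole analysis mentioned above fits the complex permittivity to the standard Cole-Cole expression. A minimal evaluation of that model, with illustrative parameter values rather than the paper's fitted ones, is:

        import numpy as np

        def cole_cole(omega, eps_inf, delta_eps, tau, alpha):
            """Cole-Cole relaxation model (standard form):
                eps*(w) = eps_inf + delta_eps / (1 + (1j*w*tau)**(1-alpha))
            alpha = 0 recovers Debye relaxation; alpha > 0 broadens the
            distribution of relaxation times (the non-Debye behaviour the
            abstract reports for these composites)."""
            return eps_inf + delta_eps / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

        omega = np.logspace(2, 8, 7)                  # rad/s, illustrative sweep
        eps = cole_cole(omega, eps_inf=4.0, delta_eps=12.0, tau=1e-5, alpha=0.3)
        print(eps.real.round(2), (-eps.imag).round(2))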

  3. Idea Paper: The Lifecycle of Software for Scientific Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubey, Anshu; McInnes, Lois C.

    The software lifecycle is a well-researched topic that has produced many models to meet the needs of different types of software projects. However, one class of projects, software development for scientific computing, has received relatively little attention from lifecycle researchers. In particular, software for end-to-end computations for obtaining scientific results has received few lifecycle proposals and no formalization of a development model. An examination of development approaches employed by the teams implementing large multicomponent codes reveals a great deal of similarity in their strategies. This idea paper formalizes these related approaches into a lifecycle model for end-to-end scientific application software, featuring loose coupling between submodels for development of infrastructure and scientific capability. We also invite input from stakeholders to converge on a model that captures the complexity of these development processes and provides needed lifecycle guidance to the scientific software community.

  4. The Stickybear Reading Comprehension Series: Science. School Version with Lesson Plans for Ages 7-10. Volume 1. Stickybear Software.

    ERIC Educational Resources Information Center

    1996

    This software product presents multi-level stories to capture the interest of children in grades two through five, while teaching them crucial reading comprehension skills. With stories touching on everything from the invention of velcro to the journey of food through the digestive system, the open-ended reading comprehension program is versatile…

  5. Status of the photomultiplier-based FlashCam camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Pühlhofer, G.; Bauer, C.; Eisenkolb, F.; Florin, D.; Föhr, C.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Koziol, J.; Lahmann, R.; Manalaysay, A.; Marszalek, A.; Rajda, P. J.; Reimer, O.; Romaszkan, W.; Rupinski, M.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Weitzel, Q.; Winiarski, K.; Zietara, K.

    2014-07-01

    The FlashCam project is preparing a camera prototype around a fully digital FADC-based readout system, for the medium sized telescopes (MST) of the Cherenkov Telescope Array (CTA). The FlashCam design is the first fully digital readout system for Cherenkov cameras, based on commercial FADCs and FPGAs as key components for digitization and triggering, and a high-performance camera server as the back end. It provides the option to easily implement different types of trigger algorithms as well as digitization and readout scenarios using identical hardware, by simply changing the firmware on the FPGAs. The readout of the front-end modules into the camera server is Ethernet-based, using standard Ethernet switches and a custom, raw Ethernet protocol. In the current implementation of the system, data transfer and back-end processing rates of 3.8 GB/s and 2.4 GB/s have been achieved, respectively. Together with the dead-time-free front-end event buffering on the FPGAs, this permits the cameras to operate at trigger rates of up to several tens of kHz. In the horizontal architecture of FlashCam, the photon detector plane (PDP), consisting of photon detectors, preamplifiers, high-voltage, control, and monitoring systems, is a self-contained unit, mechanically detached from the front-end modules. It interfaces to the digital readout system via analogue signal transmission. The horizontal integration of FlashCam is expected not only to be more cost efficient, it also allows PDPs with different types of photon detectors to be adapted to the FlashCam readout system. By now, a 144-pixel "mini-camera" setup, fully equipped with photomultipliers, PDP electronics, and digitization/trigger electronics, has been realized and extensively tested. Preparations for a full-scale, 1764-pixel camera mechanics and a cooling system are ongoing. The paper describes the status of the project.

  6. Predicting Software Suitability Using a Bayesian Belief Network

    NASA Technical Reports Server (NTRS)

    Beaver, Justin M.; Schiavone, Guy A.; Berrios, Joseph S.

    2005-01-01

    The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian Belief Networks, a machine learning method. This research presents a Bayesian Network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.
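
    The hypothesized cause-effect structure (team skill, process maturity, and problem complexity driving suitability) can be miniaturized into a hand-rolled conditional probability table with marginalization over uncertain parents. All probabilities below are invented for illustration and are not the paper's calibrated network.

        # Tiny hand-rolled Bayesian-network sketch of the abstract's hypothesis:
        # P(suitable | skill, maturity, complexity). All numbers are invented.
        P_SUITABLE = {
            # (team_skill, process_maturity, problem_complexity) -> P(suitable)
            ("high", "high", "low"):  0.95,
            ("high", "high", "high"): 0.80,
            ("high", "low",  "low"):  0.70,
            ("high", "low",  "high"): 0.50,
            ("low",  "high", "low"):  0.60,
            ("low",  "high", "high"): 0.35,
            ("low",  "low",  "low"):  0.40,
            ("low",  "low",  "high"): 0.15,
        }

        def predict_suitability(p_skill_hi, p_maturity_hi, p_complexity_hi):
            """Marginalize the CPT over uncertain parent variables."""
            total = 0.0
            for skill in ("high", "low"):
                for maturity in ("high", "low"):
                    for cx in ("high", "low"):
                        w = ((p_skill_hi if skill == "high" else 1 - p_skill_hi)
                             * (p_maturity_hi if maturity == "high" else 1 - p_maturity_hi)
                             * (p_complexity_hi if cx == "high" else 1 - p_complexity_hi))
                        total += w * P_SUITABLE[(skill, maturity, cx)]
            return total

        print(f"P(suitable) = {predict_suitability(0.8, 0.6, 0.7):.2f}")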

  7. Radiation and scattering from printed antennas on cylindrically conformal platforms

    NASA Technical Reports Server (NTRS)

    Kempel, Leo C.; Volakis, John L.; Bindiganavale, Sunil

    1994-01-01

    The goal was to develop suitable methods and software for the analysis of antennas on cylindrical coated and uncoated platforms. Specifically, the finite element boundary integral and finite element ABC methods were employed successfully, and associated software was developed for the analysis and design of wraparound and discrete cavity-backed arrays situated on cylindrical platforms. This work led to the successful implementation of analysis software for such antennas. Developments which played a role in this respect are the efficient implementation of the 3D Green's function for a metallic cylinder, the incorporation of the fast Fourier transform in computing the matrix-vector products executed in the solver of the finite element-boundary integral system, and the development of a new absorbing boundary condition for terminating the finite element mesh on cylindrical surfaces.

  8. [Portable Epileptic Seizure Monitoring Intelligent System Based on Android System].

    PubMed

    Liang, Zhenhu; Wu, Shufeng; Yang, Chunlin; Jiang, Zhenzhou; Yu, Tao; Lu, Chengbiao; Li, Xiaoli

    2016-02-01

    Clinical electroencephalogram (EEG) monitoring systems based on personal computers cannot meet the requirements of portability and home usage. Epilepsy patients have to be monitored in hospital for an extended period of time, which imposes a heavy burden on hospitals. In the present study, we designed a portable 16-lead networked monitoring system based on the Android smart phone. The system uses several technologies, including active electrodes, WiFi wireless transmission, the multi-scale permutation entropy (MPE) algorithm, and the back-propagation (BP) neural network algorithm. Moreover, the Android mobile application software realizes the processing and analysis of EEG data, the display of EEG waveforms, and the alarm on epileptic seizure. The system has been tested on mobile phones with the Android 2.3 operating system or higher versions, and the results showed that this software ran accurately and steadily in the detection of epileptic seizures. In conclusion, this paper provides a portable and reliable solution for epileptic seizure monitoring in clinical and home applications.
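
    The multi-scale permutation entropy (MPE) feature used for seizure detection combines coarse-graining with Bandt-Pompe permutation entropy. A compact sketch, with an arbitrary test signal and parameter values, is:

        import math
        from itertools import permutations

        def permutation_entropy(signal, order=3, delay=1):
            """Permutation entropy (Bandt-Pompe): normalized Shannon entropy
            of ordinal patterns of length `order`."""
            counts = {p: 0 for p in permutations(range(order))}
            n = len(signal) - (order - 1) * delay
            for i in range(n):
                window = [signal[i + j * delay] for j in range(order)]
                pattern = tuple(sorted(range(order), key=window.__getitem__))
                counts[pattern] += 1
            probs = [c / n for c in counts.values() if c > 0]
            return -sum(p * math.log(p) for p in probs) / math.log(math.factorial(order))

        def coarse_grain(signal, scale):
            """Non-overlapping averaging used to build the multi-scale curve."""
            return [sum(signal[i:i + scale]) / scale
                    for i in range(0, len(signal) - scale + 1, scale)]

        # Deterministic toy signal: a sine with small pseudo-random jitter.
        x = [math.sin(0.3 * i) + 0.05 * ((i * 7919) % 13 - 6) for i in range(500)]
        print([round(permutation_entropy(coarse_grain(x, s)), 3) for s in (1, 2, 4)])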

  9. The Effect of Back Pressure on the Operation of a Diesel Engine

    DTIC Science & Technology

    2011-02-01

    increased back pressure on a turbocharged diesel engine. Steady state and varying back pressure are considered. The results show that high back...a turbocharged diesel engine using the Ricardo Wave engine modelling software, to gain understanding of the problem and provide a good base for...higher pressure. The pressure ratios across the turbocharger compressor and turbine decrease, reducing the mass flow of air through these components

  11. The Precision Formation Flying Integrated Analysis Tool (PFFIAT)

    NASA Technical Reports Server (NTRS)

    Stoneking, Eric; Lyon, Richard G.; Sears, Edie; Lu, Victor

    2004-01-01

    Several space missions presently in the concept phase (e.g., Stellar Imager, Submillimeter Probe of Evolutionary Cosmic Structure, Terrestrial Planet Finder) plan to use multiple spacecraft flying in precise formation to synthesize unprecedentedly large aperture optical systems. These architectures present challenges to the attitude and position determination and control system; optical performance is directly coupled to spacecraft pointing, with typical control requirements being on the scale of milliarcseconds and nanometers. To investigate control strategies, rejection of environmental disturbances, and sensor and actuator requirements, a capability is needed to model both the dynamical and optical behavior of such a distributed telescope system. This paper describes work ongoing at NASA Goddard Space Flight Center toward the integration of a set of optical analysis tools (Optical System Characterization and Analysis Research software, or OSCAR) with the Formation Flying Test Bed (FFTB). The resulting system is called the Precision Formation Flying Integrated Analysis Tool (PFFIAT), and it provides the capability to simulate closed-loop control of optical systems composed of elements mounted on multiple spacecraft. The attitude and translation spacecraft dynamics are simulated in the FFTB, including effects of the space environment (e.g., solar radiation pressure, differential orbital motion). The resulting optical configuration is then processed by OSCAR to determine an optical image. From this image, wavefront sensing (e.g., phase retrieval) techniques are being developed to derive attitude and position errors. These error signals will be fed back to the spacecraft control systems, completing the control loop. A simple case study is presented to demonstrate the present capabilities of the tool.

  12. Software Template for Instruction in Mathematics

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O.; Moebes, Travis A.; Beall, Anna

    2005-01-01

    Intelligent Math Tutor (IMT) is a software system that serves as a template for creating software for teaching mathematics. IMT can be easily connected to artificial-intelligence software and other analysis software through input and output of files. IMT provides an easy-to-use interface for generating courses that include tests that contain both multiple-choice and fill-in-the-blank questions, and enables tracking of test scores. IMT makes it easy to generate software for Web-based courses or to manufacture compact disks containing executable course software. IMT also can function as a Web-based application program, with features that run quickly on the Web, while retaining the intelligence of a high-level language application program with many graphics. IMT can be used to write application programs in text, graphics, and/or sound, so that the programs can be tailored to the needs of most handicapped persons. The course software generated by IMT follows a "back to basics" approach of teaching mathematics by inducing the student to apply creative mathematical techniques in the process of learning. Students are thereby made to discover mathematical fundamentals and thereby come to understand mathematics more deeply than they could through simple memorization.

  13. Enhanced autocompensating quantum cryptography system.

    PubMed

    Bethune, Donald S; Navarro, Martha; Risk, William P

    2002-03-20

    We have improved the hardware and software of our autocompensating system for quantum key distribution by replacing bulk optical components at the end stations with fiber-optic equivalents and implementing software that synchronizes end-station activities, communicates basis choices, corrects errors, and performs privacy amplification over a local area network. The all-fiber-optic arrangement provides stable, efficient, and high-contrast routing of the photons. The low bit-error rate leads to high error-correction efficiency and minimizes data sacrifice during privacy amplification. Characterization measurements made on a number of commercial avalanche photodiodes are presented that highlight the need for improved devices tailored specifically for quantum information applications. A scheme for frequency shifting the photons returning from Alice's station, to allow them to be distinguished from backscattered noise photons, is also described.
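
    The basis-choice communication step mentioned above is the sifting stage of BB84-style key distribution. The following idealized sketch (lossless, noise-free channel, hypothetical parameters; the actual autocompensating system's protocol details are not reproduced) shows why roughly half the raw bits survive sifting:

        import secrets

        def bb84_sift(n_bits):
            """Basis sifting in a BB84-style exchange: bits measured in
            mismatched bases are discarded after public basis comparison."""
            alice_bits  = [secrets.randbelow(2) for _ in range(n_bits)]
            alice_basis = [secrets.randbelow(2) for _ in range(n_bits)]
            bob_basis   = [secrets.randbelow(2) for _ in range(n_bits)]
            # Bob reads correctly when bases match (ideal, noise-free channel).
            bob_bits = [bit if ab == bb else secrets.randbelow(2)
                        for bit, ab, bb in zip(alice_bits, alice_basis, bob_basis)]
            # Publicly compare bases; keep only matching positions.
            return [(a, b) for a, b, ab, bb
                    in zip(alice_bits, bob_bits, alice_basis, bob_basis) if ab == bb]

        key = bb84_sift(1000)
        print(len(key), all(a == b for a, b in key))  # ~500 True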

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Byungsu; Samsung Display Co. Ltd., Tangjeong, Chungcheongnam-Do 336-741; Choi, Yonghyuk

    We demonstrate enhanced electrical stability through a Ti oxide (TiO{sub x}) layer on the amorphous InGaZnO (a-IGZO) back-channel; this layer acts as a surface polarity modifier. Ultrathin Ti deposited on the a-IGZO existed as a TiO{sub x} thin film, resulting in oxygen cross-binding with the a-IGZO surface. The electrical properties of a-IGZO thin film transistors (TFTs) with TiO{sub x} depend on the surface polarity change and the evolution of the electronic band structure. This result indicates that TiO{sub x} on the back-channel serves not only as a passivation layer protecting the channel from ambient molecules or process variables but also as a control layer of TFT device parameters.

  15. Electrophysiological assessment of piano players' back extensor muscles on a regular piano bench and chair with back rest.

    PubMed

    Honarmand, Kavan; Minaskanian, Rafael; Maboudi, Seyed Ebrahim; Oskouei, Ali E

    2018-01-01

    [Purpose] Sitting is the dominant position for a professional pianist. Many static and dynamic forces affect the musculoskeletal system during sitting, and in prolonged sitting these forces are harmful. The aim of this study was to compare pianists' back extensor muscle activity during piano playing while sitting on a regular piano bench and on a chair with a back rest. [Subjects and Methods] Ten professional piano players (mean age 25.4 ± 5.28, 60% male, 40% female) performed similar tasks for 5 hours in two sessions: one session sitting on a regular piano bench and the other sitting on a chair with a back rest. In each session, muscular activity was assessed in 3 ways: 1) recording surface electromyography of the back extensor muscles at the beginning and end of each session, 2) an isometric back extension test, and 3) a musculoskeletal discomfort questionnaire. [Results] There was significantly less muscular activity, greater ability to perform isometric back extension, and better personal comfort while sitting on the chair with a back rest. [Conclusion] Decreased muscular activity, and perhaps fatigue, during prolonged piano playing on a chair with a back rest may reduce acquired musculoskeletal disorders among professional pianists.

  16. Electrophysiological assessment of piano players’ back extensor muscles on a regular piano bench and chair with back rest

    PubMed Central

    Honarmand, Kavan; Minaskanian, Rafael; Maboudi, Seyed Ebrahim; Oskouei, Ali E.

    2018-01-01

    [Purpose] Sitting is the dominant position for a professional pianist. Many static and dynamic forces affect the musculoskeletal system during sitting, and in prolonged sitting these forces are harmful. The aim of this study was to compare pianists’ back extensor muscle activity during piano playing while sitting on a regular piano bench and on a chair with a back rest. [Subjects and Methods] Ten professional piano players (mean age 25.4 ± 5.28, 60% male, 40% female) performed similar tasks for 5 hours in two sessions: one session sitting on a regular piano bench and the other sitting on a chair with a back rest. In each session, muscular activity was assessed in 3 ways: 1) recording surface electromyography of the back extensor muscles at the beginning and end of each session, 2) an isometric back extension test, and 3) a musculoskeletal discomfort questionnaire. [Results] There was significantly less muscular activity, greater ability to perform isometric back extension, and better personal comfort while sitting on the chair with a back rest. [Conclusion] Decreased muscular activity, and perhaps fatigue, during prolonged piano playing on a chair with a back rest may reduce acquired musculoskeletal disorders among professional pianists. PMID:29410569

  17. 25 CFR 543.7 - What are the minimum internal control standards for bingo?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... software upgrades, data storage media replacement, etc.). The information recorded must be used when...., draw objects and back-up draw objects); and (ii) Random number generator software. (Additional information technology security standards can be found in § 543.16 of this part.) (2) The game software...

  18. 25 CFR 543.7 - What are the minimum internal control standards for bingo?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... software upgrades, data storage media replacement, etc.). The information recorded must be used when...., draw objects and back-up draw objects); and (ii) Random number generator software. (Additional information technology security standards can be found in § 543.16 of this part.) (2) The game software...

  19. Virus Alert: Ten Steps to Safe Computing.

    ERIC Educational Resources Information Center

    Gunter, Glenda A.

    1997-01-01

    Discusses computer viruses and explains how to detect them; discusses virus protection and the need to update antivirus software; and offers 10 safe computing tips, including scanning floppy disks and commercial software, how to safely download files from the Internet, avoiding pirated software copies, and backing up files. (LRW)

  20. The readout system for the ArTeMis camera

    NASA Astrophysics Data System (ADS)

    Doumayrou, E.; Lortholary, M.; Dumaye, L.; Hamon, G.

    2014-07-01

    During ArTeMiS observations at the APEX telescope (Chajnantor, Chile), 5760 bolometric pixels from 20 arrays at 300 mK, corresponding to 3 submillimeter focal planes at 450μm, 350μm and 200μm, have to be read out simultaneously at 40 Hz. The readout system, made of electronics and software, is the full chain from the cryostat to the telescope. The readout electronics consists of cryogenic buffers at 4 K (NABU), based on CMOS technology, and of warm electronic acquisition systems called BOLERO. The bolometric signal given by each pixel has to be amplified, sampled, converted, time stamped, and formatted into data packets by the BOLERO electronics. The time stamping is obtained by decoding an IRIG-B signal provided by APEX and is key to ensuring the synchronization of the data with the telescope. Specifically developed for ArTeMiS, BOLERO is an assembly of analogue and digital FPGA boards connected directly on top of the cryostat. Two detector arrays (18×16 pixels each), one NABU, and one BOLERO, interconnected by ribbon cables, constitute the unit of the electronic architecture of ArTeMiS. In total, the 20 detectors for the three focal planes are read by 10 BOLEROs. The software runs on a Linux operating system, on 2 back-end computers (called BEAR), which are small and robust PCs with solid-state disks. They gather the 10 BOLERO data streams and reconstruct the focal plane images. When the telescope scans the sky, the acquisitions are triggered through a specific network protocol. This interface with APEX makes it possible to synchronize the acquisition with the observations on the sky: the time-stamped data packets are sent during the scans to the APEX software that builds the observation FITS files. A graphical user interface enables the setting of the camera and the real-time display of the focal plane images, which is essential in the laboratory and commissioning phases. The software is a set of C++, LabVIEW and Python, whose respective strengths are speed, powerful graphical interfacing, and scripting. The commands to the camera can be sequenced in Python scripts. The paper describes the whole electronic and software readout chain designed to meet the specific requirements of ArTeMiS, and its performance. The specific options used are explained; for example, the limited room in the Cassegrain cabin of APEX led us to a quite compact design. This system was successfully used in summer 2013 for the commissioning and the first scientific observations with a preliminary set of 4 detectors at 350μm.
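
    A time-stamped data packet of the kind described (a counter, an IRIG-B-derived timestamp, and one 18x16-pixel frame) might be laid out as below. The field order and sizes are assumptions for illustration only, since the actual BOLERO packet format is not given in the abstract.

        import struct, time

        # Hypothetical packet layout for one time-stamped bolometer frame:
        # uint32 frame counter | float64 UTC timestamp | 288 x float32 pixels
        FRAME_FMT = "<Id288f"   # 18*16 = 288 pixels per detector array

        def pack_frame(counter, samples):
            assert len(samples) == 288
            return struct.pack(FRAME_FMT, counter, time.time(), *samples)

        def unpack_frame(payload):
            counter, ts, *pixels = struct.unpack(FRAME_FMT, payload)
            return counter, ts, pixels

        blob = pack_frame(1, [0.0] * 288)
        print(len(blob), unpack_frame(blob)[0])  # 1164 bytes, counter 1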
