Sample records for offline software framework

  1. Advanced functionality for radio analysis in the Offline software framework of the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Abreu, P.; Aglietta, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Antičić, T.; Aramo, C.; Arganda, E.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Bäcker, T.; Balzer, M.; Barber, K. B.; Barbosa, A. F.; Bardenet, R.; Barroso, S. L. C.; Baughman, B.; Beatty, J. J.; Becker, B. R.; Becker, K. H.; Bellido, J. A.; Benzvi, S.; Berat, C.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanco, F.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brogueira, P.; Brown, W. C.; Bruijn, R.; Buchholz, P.; Bueno, A.; Burton, R. E.; Caballero-Mora, K. S.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Cester, R.; Chauvin, J.; Chiavassa, A.; Chinellato, J. A.; Chou, A.; Chudoba, J.; Clay, R. W.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cook, H.; Cooper, M. J.; Coppens, J.; Cordier, A.; Cotti, U.; Coutu, S.; Covault, C. E.; Creusot, A.; Criss, A.; Cronin, J.; Curutiu, A.; Dagoret-Campagne, S.; Dallier, R.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Domenico, M.; de Donato, C.; de Jong, S. J.; de La Vega, G.; de Mello Junior, W. J. M.; de Mello Neto, J. R. T.; de Mitri, I.; de Souza, V.; de Vries, K. D.; Decerprit, G.; Del Peral, L.; Deligny, O.; Dembinski, H.; Denkiewicz, A.; di Giulio, C.; Diaz, J. C.; Díaz Castro, M. L.; Diep, P. N.; Dobrigkeit, C.; D'Olivo, J. C.; Dong, P. N.; Dorofeev, A.; Dos Anjos, J. C.; Dova, M. T.; D'Urso, D.; Dutan, I.; Ebr, J.; Engel, R.; Erdmann, M.; Escobar, C. O.; Etchegoyen, A.; Facal San Luis, P.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Ferrero, A.; Fick, B.; Filevich, A.; Filipčič, A.; Fliescher, S.; Fracchiolla, C. E.; Fraenkel, E. D.; Fröhlich, U.; Fuchs, B.; Gamarra, R. F.; Gambetta, S.; García, B.; García Gámez, D.; Garcia-Pinto, D.; Gascon, A.; Gemmeke, H.; Gesterling, K.; Ghia, P. L.; Giaccari, U.; Giller, M.; Glass, H.; Gold, M. S.; Golup, G.; Gomez Albarracin, F.; Gómez Berisso, M.; Gonçalves, P.; Gonzalez, D.; Gonzalez, J. G.; Gookin, B.; Góra, D.; Gorgi, A.; Gouffon, P.; Gozzini, S. R.; Grashorn, E.; Grebe, S.; Griffith, N.; Grigat, M.; Grillo, A. F.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Hague, J. D.; Hansen, P.; Harari, D.; Harmsma, S.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Herve, A. E.; Hojvat, C.; Holmes, V. C.; Homola, P.; Hörandel, J. R.; Horneffer, A.; Hrabovský, M.; Huege, T.; Insolia, A.; Ionita, F.; Italiano, A.; Jiraskova, S.; Kadija, K.; Kampert, K. H.; Karhan, P.; Karova, T.; Kasper, P.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kelley, J. L.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Knapp, J.; Koang, D.-H.; Kotera, K.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuehn, F.; Kuempel, D.; Kulbartz, J. K.; Kunka, N.; La Rosa, G.; Lachaud, C.; Lautridou, P.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Lemiere, A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Lopez Agüera, A.; Louedec, K.; Lozano Bahilo, J.; Lucero, A.; Ludwig, M.; Lyberis, H.; Macolino, C.; Maldera, S.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, V.; Maris, I. C.; Marquez Falcon, H. R.; Marsella, G.; Martello, D.; Martin, L.; Martínez Bravo, O.; Mathes, H. J.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurizio, D.; Mazur, P. 
O.; Medina-Tanco, G.; Melissas, M.; Melo, D.; Menichetti, E.; Menshikov, A.; Mertsch, P.; Meurer, C.; Mićanović, S.; Micheletti, M. I.; Miller, W.; Miramonti, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morales, B.; Morello, C.; Moreno, E.; Moreno, J. C.; Morris, C.; Mostafá, M.; Moura, C. A.; Mueller, S.; Muller, M. A.; Müller, G.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navarro, J. L.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Nhung, P. T.; Nierstenhoefer, N.; Nitz, D.; Nosek, D.; Nožka, L.; Nyklicek, M.; Oehlschläger, J.; Olinto, A.; Oliva, P.; Olmos-Gilbaja, V. M.; Ortiz, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Parente, G.; Parizot, E.; Parra, A.; Parrisius, J.; Parsons, R. D.; Pastor, S.; Paul, T.; Pech, M.; PeĶala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Pesce, R.; Petermann, E.; Petrera, S.; Petrinca, P.; Petrolini, A.; Petrov, Y.; Petrovic, J.; Pfendner, C.; Phan, N.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Ponce, V. H.; Pontz, M.; Privitera, P.; Prouza, M.; Quel, E. J.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Risse, M.; Ristori, P.; Rivera, H.; Riviére, C.; Rizi, V.; Robledo, C.; Rodrigues de Carvalho, W.; Rodriguez, G.; Rodriguez Martino, J.; Rodriguez Rojo, J.; Rodriguez-Cabo, I.; Rodríguez-Frías, M. D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Rouillé-D'Orfeuil, B.; Roulet, E.; Rovero, A. C.; Rühle, C.; Salamida, F.; Salazar, H.; Salina, G.; Sánchez, F.; Santander, M.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, S.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, A.; Schmidt, F.; Schmidt, T.; Scholten, O.; Schoorlemmer, H.; Schovancova, J.; Schovánek, P.; Schroeder, F.; Schulte, S.; Schuster, D.; Sciutto, S. J.; Scuderi, M.; Segreto, A.; Semikoz, D.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Spinka, H.; Squartini, R.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Šuša, T.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Tamashiro, A.; Tapia, A.; Taşcău, O.; Tcaciuc, R.; Tegolo, D.; Thao, N. T.; Thomas, D.; Tiffenberg, J.; Timmermans, C.; Tiwari, D. K.; Tkaczyk, W.; Todero Peixoto, C. J.; Tomé, B.; Tonachini, A.; Travnicek, P.; Tridapalli, D. B.; Tristram, G.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van den Berg, A. M.; Vargas Cárdenas, B.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Videla, M.; Villaseñor, L.; Wahlberg, H.; Wahrlich, P.; Wainberg, O.; Warner, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Westerhoff, S.; Whelan, B. J.; Wieczorek, G.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Will, M.; Williams, C.; Winchen, T.; Winders, L.; Winnick, M. G.; Wommer, M.; Wundheiler, B.; Yamamoto, T.; Younk, P.; Yuan, G.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Ziolkowski, M.

    2011-04-01

    The advent of the Auger Engineering Radio Array (AERA) necessitates the development of a powerful framework for the analysis of radio measurements of cosmic ray air showers. As AERA performs “radio-hybrid” measurements of air shower radio emission in coincidence with the surface particle detectors and fluorescence telescopes of the Pierre Auger Observatory, the radio analysis functionality had to be incorporated in the existing hybrid analysis solutions for fluorescence and surface detector data. This goal has been achieved in a natural way by extending the existing Auger Offline software framework with radio functionality. In this article, we lay out the design, highlights and features of the radio extension implemented in the Auger Offline framework. Its functionality has achieved a high degree of sophistication and offers advanced features such as vectorial reconstruction of the electric field, advanced signal processing algorithms, a transparent and efficient handling of FFTs, a very detailed simulation of detector effects, and the read-in of multiple data formats including data from various radio simulation codes. The source code of this radio functionality can be made available to interested parties on request.

  2. The Muon Ionization Cooling Experiment User Software

    NASA Astrophysics Data System (ADS)

    Dobbs, A.; Rajaram, D.; MICE Collaboration

    2017-10-01

    The Muon Ionization Cooling Experiment (MICE) is a proof-of-principle experiment designed to demonstrate muon ionization cooling for the first time. MICE is currently on Step IV of its data taking programme, where transverse emittance reduction will be demonstrated. The MICE Analysis User Software (MAUS) is the reconstruction, simulation and analysis framework for the MICE experiment. MAUS is used for both offline data analysis and fast online data reconstruction and visualization to serve MICE data taking. This paper provides an introduction to MAUS, describing the central Python and C++ based framework, the data structure, and the code management and testing procedures.
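
    As a rough illustration of how a Python-steered processing chain of this kind can be organised, the sketch below pushes JSON event documents through a map-style and a reduce-style module. The module names and the toy "reconstruction" are hypothetical, not the actual MAUS API.

      import json

      class MapToyRecon:
          """Map step: transform one spill (event) document at a time."""
          def process(self, spill):
              spill["tracks"] = len(spill.get("digits", []))  # toy reconstruction
              return spill

      class ReduceTrackCount:
          """Reduce step: accumulate statistics across all spills."""
          def __init__(self):
              self.total = 0
          def process(self, spill):
              self.total += spill["tracks"]
              return spill

      def run_chain(raw_spills, mapper, reducer):
          for raw in raw_spills:
              reducer.process(mapper.process(json.loads(raw)))

      reducer = ReduceTrackCount()
      run_chain([json.dumps({"digits": [1, 2, 3]}), json.dumps({"digits": [4]})],
                MapToyRecon(), reducer)
      print("tracks seen:", reducer.total)  # -> 4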

  3. A Framework for the Comparative Assessment of Neuronal Spike Sorting Algorithms towards More Accurate Off-Line and On-Line Microelectrode Arrays Data Analysis.

    PubMed

    Regalia, Giulia; Coelli, Stefania; Biffi, Emilia; Ferrigno, Giancarlo; Pedrocchi, Alessandra

    2016-01-01

    Neuronal spike sorting algorithms are designed to retrieve neuronal network activity on a single-cell level from extracellular multiunit recordings with Microelectrode Arrays (MEAs). In typical analysis of MEA data, one spike sorting algorithm is applied indiscriminately to all electrode signals. However, this approach neglects the dependence of an algorithm's performance on the properties of the neuronal signals at each channel, which calls for data-centric methods. Moreover, sorting is commonly performed off-line, which is time and memory consuming and prevents researchers from having an immediate glance at ongoing experiments. The aim of this work is to provide a versatile framework to support the evaluation and comparison of different spike classification algorithms suitable for both off-line and on-line analysis. We incorporated different spike sorting "building blocks" into Matlab-based software, including 4 feature extraction methods, 3 feature clustering methods, and 1 template matching classifier. The framework was validated by applying different algorithms to simulated and real signals from neuronal cultures coupled to MEAs. Moreover, the system has proven effective in running on-line analysis on a standard desktop computer, after selection of the most suitable sorting methods. This work provides a useful and versatile instrument for a supported comparison of different options for spike sorting towards more accurate off-line and on-line MEA data analysis.
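
    To make the "building blocks" concrete, the following Python sketch strings together the three stages named above (detection, feature extraction, clustering) on a surrogate trace. The original framework is MATLAB-based; the threshold factor, window size and two-unit assumption here are arbitrary illustrative choices.

      import numpy as np
      from scipy.cluster.vq import kmeans2

      rng = np.random.default_rng(0)
      signal = rng.normal(0.0, 1.0, 30_000)          # surrogate electrode trace
      signal[::1000] += rng.choice([8.0, -8.0], 30)  # inject 30 fake spikes

      # 1) Detection: amplitude threshold at 4x a robust noise estimate.
      thr = 4.0 * np.median(np.abs(signal)) / 0.6745
      peaks = np.flatnonzero(np.abs(signal) > thr)

      # 2) Feature extraction: first 2 principal components of the waveforms.
      win = 16
      waves = np.array([signal[p - win:p + win] for p in peaks
                        if win <= p < len(signal) - win])
      waves = waves - waves.mean(axis=0)
      _, _, vt = np.linalg.svd(waves, full_matrices=False)
      features = waves @ vt[:2].T

      # 3) Clustering: assign each spike to one of two putative units.
      _, labels = kmeans2(features, 2, minit="++")
      print(f"{len(waves)} spikes sorted into {len(set(labels))} clusters")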

  4. A Framework for the Comparative Assessment of Neuronal Spike Sorting Algorithms towards More Accurate Off-Line and On-Line Microelectrode Arrays Data Analysis

    PubMed Central

    Pedrocchi, Alessandra

    2016-01-01

    Neuronal spike sorting algorithms are designed to retrieve neuronal network activity on a single-cell level from extracellular multiunit recordings with Microelectrode Arrays (MEAs). In typical analysis of MEA data, one spike sorting algorithm is applied indiscriminately to all electrode signals. However, this approach neglects the dependence of an algorithm's performance on the properties of the neuronal signals at each channel, which calls for data-centric methods. Moreover, sorting is commonly performed off-line, which is time and memory consuming and prevents researchers from having an immediate glance at ongoing experiments. The aim of this work is to provide a versatile framework to support the evaluation and comparison of different spike classification algorithms suitable for both off-line and on-line analysis. We incorporated different spike sorting “building blocks” into Matlab-based software, including 4 feature extraction methods, 3 feature clustering methods, and 1 template matching classifier. The framework was validated by applying different algorithms to simulated and real signals from neuronal cultures coupled to MEAs. Moreover, the system has proven effective in running on-line analysis on a standard desktop computer, after selection of the most suitable sorting methods. This work provides a useful and versatile instrument for a supported comparison of different options for spike sorting towards more accurate off-line and on-line MEA data analysis. PMID:27239191

  5. ATLAS offline software performance monitoring and optimization

    NASA Astrophysics Data System (ADS)

    Chauhan, N.; Kabra, G.; Kittelmann, T.; Langenberg, R.; Mandrysch, R.; Salzburger, A.; Seuster, R.; Ritsch, E.; Stewart, G.; van Eldik, N.; Vitillo, R.; Atlas Collaboration

    2014-06-01

    In a complex multi-developer, multi-package software environment, such as the ATLAS offline framework Athena, tracking the performance of the code can be a non-trivial task in itself. In this paper we describe improvements in the instrumentation of ATLAS offline software that have given considerable insight into the performance of the code and helped to guide the optimization work. The first tool we used to instrument the code is PAPI, which is a programming interface for accessing hardware performance counters. PAPI events can count floating point operations, cycles, instructions and cache accesses. Triggering PAPI to start/stop counting for each algorithm and processed event results in a good understanding of the algorithm level performance of ATLAS code. Further data can be obtained using Pin, a dynamic binary instrumentation tool. Pin tools can be used to obtain similar statistics to PAPI, but advantageously without requiring recompilation of the code. Fine grained routine and instruction level instrumentation is also possible. Pin tools can additionally interrogate the arguments to functions, like those in linear algebra libraries, so that a detailed usage profile can be obtained. These tools have characterized the extensive use of vector and matrix operations in ATLAS tracking. Currently, CLHEP is used here, which is not an optimal choice. To help evaluate replacement libraries a testbed has been set up allowing comparison of the performance of different linear algebra libraries (including CLHEP, Eigen and SMatrix/SVector). Results are then presented via the ATLAS Performance Management Board framework, which runs daily with the current development branch of the code and monitors reconstruction and Monte-Carlo jobs. This framework analyses the CPU and memory performance of algorithms and an overview of the results is presented on a web page. These tools have provided the insight necessary to plan and implement performance enhancements in ATLAS code by identifying the most common operations, with the call parameters well understood, and allowing improvements to be quantified in detail.
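
    The start/stop-per-algorithm pattern described above can be illustrated with a few lines of Python. Wall-clock timers from the standard library stand in for PAPI's hardware counters, and the algorithm is a placeholder.

      import time
      from collections import defaultdict
      from contextlib import contextmanager

      stats = defaultdict(lambda: {"calls": 0, "seconds": 0.0})

      @contextmanager
      def instrumented(algorithm_name):
          """Start/stop a counter around one algorithm execution per event."""
          t0 = time.perf_counter()
          try:
              yield
          finally:
              rec = stats[algorithm_name]
              rec["calls"] += 1
              rec["seconds"] += time.perf_counter() - t0

      def tracking(event):  # stand-in for a reconstruction algorithm
          sum(i * i for i in range(10_000 + event))

      for event in range(100):  # counters triggered per algorithm and event
          with instrumented("Tracking"):
              tracking(event)

      for name, rec in stats.items():
          print(f"{name}: {rec['calls']} calls, {rec['seconds']:.3f} s")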

  6. ALFA: The new ALICE-FAIR software framework

    NASA Astrophysics Data System (ADS)

    Al-Turany, M.; Buncic, P.; Hristov, P.; Kollegger, T.; Kouzinopoulos, C.; Lebedev, A.; Lindenstruth, V.; Manafov, A.; Richter, M.; Rybalchenko, A.; Vande Vyvre, P.; Winckler, N.

    2015-12-01

    The commonalities between the ALICE and FAIR experiments and their computing requirements led to the development of large parts of a common software framework in an experiment-independent way. The FairRoot project has already shown the feasibility of such an approach for the FAIR experiments and of extending it beyond FAIR to experiments at other facilities [1, 2]. The ALFA framework is a joint development between the ALICE Online-Offline (O2) and FairRoot teams. ALFA is designed as a flexible, elastic system, which balances reliability and ease of development with performance using multi-processing and multi-threading. A message-based approach has been adopted; such an approach will support the use of the software on different hardware platforms, including heterogeneous systems. Each process in ALFA assumes limited communication and reliance on other processes. Such a design will add horizontal scaling (multiple processes) to the vertical scaling provided by multiple threads to meet computing and throughput demands. ALFA does not dictate any application protocols. Potentially, any content-based processor or any source can change the application protocol. The framework supports different serialization standards for data exchange between different hardware and software languages.
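
    A minimal sketch of the message-based multi-process pattern, written with pyzmq. ALFA abstracts the transport layer; the PUSH/PULL pipeline, the port number and the message fields below are illustrative assumptions, not ALFA's API.

      import multiprocessing
      import zmq

      def producer(endpoint, n):
          sock = zmq.Context().socket(zmq.PUSH)
          sock.bind(endpoint)
          for i in range(n):
              sock.send_json({"event": i, "adc": [i, i + 1]})  # one message per event
          sock.send_json({"event": None})                      # end-of-stream token

      def processor(endpoint):
          sock = zmq.Context().socket(zmq.PULL)
          sock.connect(endpoint)
          while True:
              msg = sock.recv_json()
              if msg["event"] is None:
                  break
              print("processed event", msg["event"])

      if __name__ == "__main__":
          ep = "tcp://127.0.0.1:5555"
          worker = multiprocessing.Process(target=processor, args=(ep,))
          worker.start()
          producer(ep, 5)
          worker.join()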

  7. SILHIL Replication of Electric Aircraft Powertrain Dynamics and Inner-Loop Control for V&V of System Health Management Routines

    NASA Technical Reports Server (NTRS)

    Bole, Brian; Teubert, Christopher Allen; Quach, Cuong Chi; Hogge, Edward; Vazquez, Sixto; Goebel, Kai; Vachtsevanos, George

    2013-01-01

    Software-in-the-loop and Hardware-in-the-loop testing of failure prognostics and decision making tools for aircraft systems will facilitate more comprehensive and cost-effective testing than what is practical to conduct with flight tests. A framework is described for the offline recreation of dynamic loads on simulated or physical aircraft powertrain components based on a real-time simulation of airframe dynamics running on a flight simulator, an inner-loop flight control policy executed by either an autopilot routine or a human pilot, and a supervisory fault management control policy. The creation of an offline framework for verifying and validating supervisory failure prognostics and decision making routines is described for the example of battery charge depletion failure scenarios onboard a prototype electric unmanned aerial vehicle.

  8. Software to Promote Young Children's Growth in Literacy: A Comparison of Online and Offline Formats

    ERIC Educational Resources Information Center

    Wood, Eileen; Grant, Amy K.; Gottardo, Alexandra; Savage, Robert; Evans, Mary Ann

    2017-01-01

    The primary goal of this research was to extend our understanding of the strengths and weaknesses inherent in online and offline early literacy software programs designed for young learners. A taxonomy of reading skills was used to contrast online software with offline closed system (compact disc) based programs with respect to number of skills…

  9. Does the Intel Xeon Phi processor fit HEP workloads?

    NASA Astrophysics Data System (ADS)

    Nowak, A.; Bitzes, G.; Dotti, A.; Lazzaro, A.; Jarp, S.; Szostek, P.; Valsan, L.; Botezatu, M.; Leduc, J.

    2014-06-01

    This paper summarizes the five years of CERN openlab's efforts focused on the Intel Xeon Phi co-processor, from the time of its inception to public release. We consider the architecture of the device vis-à-vis the characteristics of HEP software and identify key opportunities for HEP processing, as well as scaling limitations. We report on improvements and speedups linked to parallelization and vectorization on benchmarks involving software frameworks such as Geant4 and ROOT. Finally, we extrapolate current software and hardware trends and project them onto accelerators of the future, with the specifics of offline and online HEP processing in mind.

  10. Implementation of the ATLAS trigger within the multi-threaded software framework AthenaMT

    NASA Astrophysics Data System (ADS)

    Wynne, Ben; ATLAS Collaboration

    2017-10-01

    We present an implementation of the ATLAS High Level Trigger, HLT, that provides parallel execution of trigger algorithms within the ATLAS multithreaded software framework, AthenaMT. This development will enable the ATLAS HLT to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider, LHC, and ATLAS Detector. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further, to up to 7.5 times the design value, in 2026 following LHC and ATLAS upgrades. This includes an upgrade of the ATLAS trigger architecture that will result in an increase in the HLT input rate by a factor of 4 to 10 compared to the current maximum rate of 100 kHz. The current ATLAS multiprocess framework, AthenaMP, manages a number of processes that each execute algorithms sequentially for different events. AthenaMT will provide a fully multi-threaded environment that will additionally enable concurrent execution of algorithms within an event. This has the potential to significantly reduce the memory footprint on future manycore devices. An additional benefit of the HLT implementation within AthenaMT is that it facilitates the integration of offline code into the HLT. The trigger must retain high rejection in the face of increasing numbers of pileup collisions. This will be achieved by greater use of offline algorithms that are designed to maximize the discrimination of signal from background. Therefore a unification of the HLT and offline reconstruction software environment is required. This has been achieved while at the same time retaining important HLT-specific optimisations that minimize the computation performed to reach a trigger decision. Such optimizations include early event rejection and reconstruction within restricted geometrical regions. We report on an HLT prototype in which the need for HLT-specific components has been reduced to a minimum. Promising results have been obtained with a prototype that includes the key elements of trigger functionality including regional reconstruction and early event rejection. We report on the first experience of migrating trigger selections to this new framework and present the next steps towards a full implementation of the ATLAS trigger.
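
    The step from one event per process (AthenaMP) to concurrent algorithms within one event (AthenaMT) can be caricatured in a few lines of Python: algorithms with no mutual dependence are submitted together, while a dependent algorithm waits only for its input. The algorithm names and the single dependency are invented for illustration.

      from concurrent.futures import ThreadPoolExecutor

      def calo_clustering(event):
          return f"clusters({event})"

      def inner_tracking(event):
          return f"tracks({event})"

      def vertexing(tracks):  # depends on the tracking output
          return f"vertices({tracks})"

      def process_event(event, pool):
          clusters_f = pool.submit(calo_clustering, event)  # independent, so these
          tracks_f = pool.submit(inner_tracking, event)     # two run concurrently
          vertices = vertexing(tracks_f.result())           # scheduled after tracking
          return clusters_f.result(), vertices

      with ThreadPoolExecutor(max_workers=4) as pool:
          # events are handled in turn here for simplicity; the point is the
          # concurrency between algorithms inside each event
          print([process_event(e, pool) for e in range(3)])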

  11. Using the CMS threaded framework in a production environment

    DOE PAGES

    Jones, C. D.; Contreras, L.; Gartung, P.; ...

    2015-12-23

    During 2014, the CMS Offline and Computing Organization completed the necessary changes to use the CMS threaded framework in the full production environment. We will briefly discuss the design of the CMS Threaded Framework, in particular how the design affects scaling performance. We will then cover the effort involved in getting both the CMSSW application software and the workflow management system ready for using multiple threads for production. Finally, we will present metrics on the performance of the application and workflow system as well as the difficulties which were uncovered. As a result, we will end with CMS' plans for using the threaded framework to do production for LHC Run 2.

  12. The CMS High Level Trigger System: Experience and Future Development

    NASA Astrophysics Data System (ADS)

    Bauer, G.; Behrens, U.; Bowen, M.; Branson, J.; Bukowiec, S.; Cittolin, S.; Coarasa, J. A.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Flossdorf, A.; Gigi, D.; Glege, F.; Gomez-Reino, R.; Hartl, C.; Hegeman, J.; Holzner, A.; Hwong, Y. L.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Polese, G.; Racz, A.; Raginel, O.; Sakulin, H.; Sani, M.; Schwick, C.; Shpakov, D.; Simon, S.; Spataru, A. C.; Sumorok, K.

    2012-12-01

    The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ), and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of order a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the collider run 2010/2011 is reported. The current architecture of the CMS HLT, its integration with the CMS reconstruction framework and the CMS DAQ, are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, are discussed.

  13. The Kepler Science Operations Center Pipeline Framework Extensions

    NASA Technical Reports Server (NTRS)

    Klaus, Todd C.; Cote, Miles T.; McCauliff, Sean; Girouard, Forrest R.; Wohler, Bill; Allen, Christopher; Chandrasekaran, Hema; Bryson, Stephen T.; Middour, Christopher; Caldwell, Douglas A.; et al.

    2010-01-01

    The Kepler Science Operations Center (SOC) is responsible for several aspects of the Kepler Mission, including managing targets, generating on-board data compression tables, monitoring photometer health and status, processing the science data, and exporting the pipeline products to the mission archive. We describe how the generic pipeline framework software developed for Kepler is extended to achieve these goals, including pipeline configurations for processing science data and other support roles, and custom unit of work generators that control how the Kepler data are partitioned and distributed across the computing cluster. We describe the interface between the Java software that manages the retrieval and storage of the data for a given unit of work and the MATLAB algorithms that process these data. The data for each unit of work are packaged into a single file that contains everything needed by the science algorithms, allowing these files to be used to debug and evolve the algorithms offline.
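
    The "single file per unit of work" idea, bundling everything one task needs so the algorithm can be re-run offline, can be sketched as below. The file layout and field names are hypothetical, not the Kepler SOC format.

      import numpy as np

      def package_unit_of_work(path, target_ids, flux, metadata):
          """Bundle all inputs for one science-algorithm task into one file."""
          np.savez_compressed(path, target_ids=np.asarray(target_ids),
                              flux=np.asarray(flux),
                              metadata=np.asarray([str(metadata)]))

      def run_algorithm_offline(path):
          """Reload the bundle later, e.g. to debug the algorithm on a laptop."""
          with np.load(path) as uow:
              return uow["flux"].mean(axis=1)  # stand-in for the real algorithm

      package_unit_of_work("uow_0001.npz", [101, 102],
                           [[1.0, 1.1, 0.9], [5.0, 5.2, 4.9]], {"quarter": 2})
      print(run_algorithm_offline("uow_0001.npz"))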

  14. Conic section function neural network circuitry for offline signature recognition.

    PubMed

    Erkmen, Burcu; Kahraman, Nihan; Vural, Revna A; Yildirim, Tulay

    2010-04-01

    In this brief, conic section function neural network (CSFNN) circuitry was designed for offline signature recognition. CSFNN is a unified framework for multilayer perceptron (MLP) and radial basis function (RBF) networks that makes simultaneous use of the advantages of both. The CSFNN circuitry architecture was developed using a mixed mode circuit implementation. The designed circuit system is problem independent. Hence, the general purpose neural network circuit system could be applied to various pattern recognition problems with different network sizes, up to a maximum network size of 16-16-8. In this brief, the CSFNN circuitry system has been applied to two different signature recognition problems. The CSFNN circuitry was trained with the chip-in-the-loop learning technique in order to compensate for typical analog process variations. The CSFNN hardware achieved computational performance highly comparable with the CSFNN software for nonlinear signature recognition problems.

  15. ALICE HLT Run 2 performance overview.

    NASA Astrophysics Data System (ADS)

    Krzewicki, Mikolaj; Lindenstruth, Volker; ALICE Collaboration

    2017-10-01

    For the LHC Run 2 the ALICE HLT architecture was consolidated to comply with the upgraded ALICE detector readout technology. The software framework was optimized and extended to cope with the increased data load. Online calibration of the TPC using online tracking capabilities of the ALICE HLT was deployed. Offline calibration code was adapted to run both online and offline and the HLT framework was extended to support that. The performance of this scheme is important for Run 3 related developments. An additional data transport approach was developed using the ZeroMQ library, forming at the same time a test bed for the new data flow model of the O2 system, where further development of this concept is ongoing. This messaging technology was used to implement the calibration feedback loop augmenting the existing, graph oriented HLT transport framework. Utilising the online reconstruction of many detectors, a new asynchronous monitoring scheme was developed to allow real-time monitoring of the physics performance of the ALICE detector, on top of the new messaging scheme for both internal and external communication. Spare computing resources comprising the production and development clusters are run as a tier-2 GRID site using an OpenStack-based setup. The development cluster is running continuously, while the production cluster contributes resources opportunistically during periods of LHC inactivity.

  16. The LHCb software and computing upgrade for Run 3: opportunities and challenges

    NASA Astrophysics Data System (ADS)

    Bozzi, C.; Roiser, S.; LHCb Collaboration

    2017-10-01

    The LHCb detector will be upgraded for the LHC Run 3 and will be read out at 30 MHz, corresponding to the full inelastic collision rate, with major implications for the full software trigger and offline computing. If the current computing model and software framework are kept, the data storage capacity and computing power required to process data at this rate, and to generate and reconstruct equivalent samples of simulated events, will exceed the current capacity by at least one order of magnitude. A redesign of the software framework, including scheduling, the event model, the detector description and the conditions database, is needed to fully exploit the computing power of multi-, many-core architectures, and coprocessors. Data processing and the analysis model will also change towards an early streaming of different data types, in order to limit storage resources, with further implications for the data analysis workflows. Fast simulation options will make it possible to obtain a reasonable parameterization of the detector response in considerably less computing time. Finally, the upgrade of LHCb will be a good opportunity to review and implement changes in the domains of software design, test and review, and analysis workflow and preservation. In this contribution, activities and recent results in all the above areas are presented.

  17. The Offline Software Framework of the NA61/SHINE Experiment

    NASA Astrophysics Data System (ADS)

    Sipos, Roland; Laszlo, Andras; Marcinek, Antoni; Paul, Tom; Szuba, Marek; Unger, Michael; Veberic, Darko; Wyszynski, Oskar

    2012-12-01

    NA61/SHINE (SHINE = SPS Heavy Ion and Neutrino Experiment) is an experiment at the CERN SPS using the upgraded NA49 hadron spectrometer. Among its physics goals are precise hadron production measurements for improving calculations of the neutrino beam flux in the T2K neutrino oscillation experiment as well as for more reliable simulations of cosmic-ray air showers. Moreover, p+p, p+Pb and nucleus+nucleus collisions will be studied extensively to allow for a study of properties of the onset of deconfinement and search for the critical point of strongly interacting matter. Currently NA61/SHINE uses the old NA49 software framework for reconstruction, simulation and data analysis. The core of this legacy framework was developed in the early 1990s. It is written in different programming and scripting languages (C, pgi-Fortran, shell) and provides several concurrent data formats for the event data model, which also includes obsolete parts. In this contribution we will introduce the new software framework, called Shine, that is written in C++ and designed to comprise three principal parts: a collection of processing modules which can be assembled and sequenced by the user via XML files, an event data model which contains all simulation and reconstruction information based on STL and ROOT streaming, and a detector description which provides data on the configuration and state of the experiment. To assure a quick migration to the Shine framework, wrappers were introduced that allow legacy code parts to run as modules in the new framework, and we will present first results on the cross-validation of the two frameworks.
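
    The sketch below illustrates the idea of assembling and sequencing processing modules from an XML file, as Shine lets users do; the module names, XML tags and toy event dictionary are invented for illustration.

      import xml.etree.ElementTree as ET

      XML = """<sequence>
        <module name="EventReader"/>
        <module name="TrackFinder"/>
        <module name="Writer"/>
      </sequence>"""

      class EventReader:
          def run(self, event):
              event["raw"] = [1, 2, 3]
              return event

      class TrackFinder:
          def run(self, event):
              event["tracks"] = len(event["raw"])
              return event

      class Writer:
          def run(self, event):
              print("event:", event)
              return event

      REGISTRY = {cls.__name__: cls for cls in (EventReader, TrackFinder, Writer)}

      modules = [REGISTRY[m.attrib["name"]]()
                 for m in ET.fromstring(XML).iter("module")]
      event = {}
      for module in modules:  # sequenced exactly as the XML prescribes
          event = module.run(event)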

  18. A dedicated software application for treatment verification with off-line PET/CT imaging at the Heidelberg Ion Beam Therapy Center

    NASA Astrophysics Data System (ADS)

    Chen, W.; Bauer, J.; Kurz, C.; Tessonnier, T.; Handrack, J.; Haberer, T.; Debus, J.; Parodi, K.

    2017-01-01

    We present the workflow of the offline-PET based range verification method used at the Heidelberg Ion Beam Therapy Center, detailing the functionalities of an in-house developed software application, SimInterface14, with which range analysis is performed. Moreover, we introduce the design of a decision support system that assesses uncertainties and supports physicians in decision making for plan adaptation.

  19. A Data Analytical Framework for Improving Real-Time, Decision Support Systems in Healthcare

    ERIC Educational Resources Information Center

    Yahav, Inbal

    2010-01-01

    In this dissertation we develop a framework that combines data mining, statistics and operations research methods for improving real-time decision support systems in healthcare. Our approach consists of three main concepts: data gathering and preprocessing, modeling, and deployment. We introduce the notion of offline and semi-offline modeling to…

  20. artdaq: DAQ software development made simple

    NASA Astrophysics Data System (ADS)

    Biery, Kurt; Flumerfelt, Eric; Freeman, John; Ketchum, Wesley; Lukhanin, Gennadiy; Rechenmacher, Ron

    2017-10-01

    For a few years now, the artdaq data acquisition software toolkit has provided numerous experiments with ready-to-use components which allow for rapid development and deployment of DAQ systems. Developed within the Fermilab Scientific Computing Division, artdaq provides data transfer, event building, run control, and event analysis functionality. This latter feature includes built-in support for the art event analysis framework, allowing experiments to run art modules for real-time filtering, compression, disk writing and online monitoring. As art, also developed at Fermilab, is used for offline analysis as well, a major advantage of artdaq is that it allows developers to easily switch between developing online and offline software. artdaq continues to be improved. Support for an alternate mode of running whereby data from some subdetector components are only streamed if requested has been added; this option will reduce unnecessary DAQ throughput. Real-time reporting of DAQ metrics has been implemented, along with the flexibility to choose the format through which experiments receive the reports; these formats include the Ganglia, Graphite and syslog software packages, along with flat ASCII files. Additionally, work has been performed investigating more flexible modes of online monitoring, including the capability to run multiple online monitoring processes on different hosts, each running its own set of art modules. Finally, a web-based GUI interface through which users can configure details of their DAQ system has been implemented, increasing the ease of use of the system. Already successfully deployed on the LArIAT, DarkSide-50, DUNE 35ton and Mu2e experiments, artdaq will be employed for SBND and is a strong candidate for use on ICARUS and protoDUNE. With each experiment comes new ideas for how artdaq can be made more flexible and powerful. The above improvements will be described, along with potential ideas for the future.
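
    Of the reporting formats listed, Graphite has a particularly simple wire format: one "path value timestamp" line per metric. A minimal Python sender is sketched below; the metric name is invented and a Carbon listener on the default plaintext port 2003 is assumed.

      import socket
      import time

      def report_metric(path, value, host="127.0.0.1", port=2003):
          """Send one sample in the Graphite plaintext protocol."""
          line = f"{path} {value} {int(time.time())}\n"
          with socket.create_connection((host, port), timeout=1.0) as sock:
              sock.sendall(line.encode("ascii"))

      # e.g. a fragment rate reported once per second by an event builder
      report_metric("daq.eventbuilder.fragment_rate_hz", 4200.0)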

  1. Offline signature verification using convolution Siamese network

    NASA Astrophysics Data System (ADS)

    Xing, Zi-Jian; Yin, Fei; Wu, Yi-Chao; Liu, Cheng-Lin

    2018-04-01

    This paper presents an offline signature verification approach using a convolutional Siamese neural network. Unlike the existing methods, which consider feature extraction and metric learning as two independent stages, we adopt a deep-learning based framework which combines the two stages and can be trained end-to-end. The experimental results on two public offline databases (GPDSsynthetic and CEDAR) demonstrate the superiority of our method on the offline signature verification problem.
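
    A minimal PyTorch sketch of the general approach, a convolutional Siamese embedder trained end-to-end with a contrastive loss, is given below. The layer sizes, margin and 64x64 input resolution are arbitrary choices, not the authors' architecture.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class SignatureEmbedder(nn.Module):
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2))
              self.fc = nn.Linear(32 * 13 * 13, 64)

          def forward(self, x):  # x: (N, 1, 64, 64) grayscale signatures
              return self.fc(self.features(x).flatten(1))

      def contrastive_loss(e1, e2, same, margin=1.0):
          """same=1 pulls a pair together; same=0 pushes it past the margin."""
          d = F.pairwise_distance(e1, e2)
          return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

      net = SignatureEmbedder()
      a = torch.randn(8, 1, 64, 64)  # reference signatures (random stand-ins)
      b = torch.randn(8, 1, 64, 64)  # questioned signatures
      same = torch.randint(0, 2, (8,)).float()
      loss = contrastive_loss(net(a), net(b), same)
      loss.backward()                # one end-to-end training step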

  2. A Browser-Based Multi-User Working Environment for Physicists

    NASA Astrophysics Data System (ADS)

    Erdmann, M.; Fischer, R.; Glaser, C.; Klingebiel, D.; Komm, M.; Müller, G.; Rieger, M.; Steggemann, J.; Urban, M.; Winchen, T.

    2014-06-01

    Many programs in experimental particle physics do not yet have a graphical interface, or impose demanding platform and software requirements. With the most recent development of the VISPA project, we provide graphical interfaces to existing software programs and access to multiple computing clusters through standard web browsers. The scalable client-server system allows analyses to be performed in sizable teams, and disburdens the individual physicist from installing and maintaining a software environment. The VISPA graphical interfaces are implemented in HTML, JavaScript and extensions to the Python webserver. The webserver uses SSH and RPC to access user data, code and processes on remote sites. As example applications we present graphical interfaces for steering the reconstruction framework OFFLINE of the Pierre Auger experiment, and the analysis development toolkit PXL. The browser-based VISPA system was field-tested in biweekly homework of a third-year physics course by more than 100 students. We discuss the system deployment and the evaluation by the students.

  3. Experimental demonstration of OpenFlow-enabled media ecosystem architecture for high-end applications over metro and core networks.

    PubMed

    Ntofon, Okung-Dike; Channegowda, Mayur P; Efstathiou, Nikolaos; Rashidi Fard, Mehdi; Nejabati, Reza; Hunter, David K; Simeonidou, Dimitra

    2013-02-25

    In this paper, a novel Software-Defined Networking (SDN) architecture is proposed for high-end Ultra High Definition (UHD) media applications. UHD media applications have huge bandwidth demands that can only be met by high-capacity optical networks. In addition, there are requirements for control frameworks capable of delivering effective application performance with efficient network utilization. A novel SDN-based Controller that tightly integrates application-awareness with network control and management is proposed for such applications. An OpenFlow-enabled test-bed demonstrator is reported with performance evaluations of advanced online and offline media- and network-aware schedulers.

  4. A Framework for Collaborative Review of Candidate Events in High Data Rate Streams: the V-Fastr Experiment as a Case Study

    NASA Astrophysics Data System (ADS)

    Hart, Andrew F.; Cinquini, Luca; Khudikyan, Shakeh E.; Thompson, David R.; Mattmann, Chris A.; Wagstaff, Kiri; Lazio, Joseph; Jones, Dayton

    2015-01-01

    “Fast radio transients” are defined here as bright millisecond pulses of radio-frequency energy. These short-duration pulses can be produced by known objects such as pulsars or potentially by more exotic objects such as evaporating black holes. The identification and verification of such an event would be of great scientific value. This is one major goal of the Very Long Baseline Array (VLBA) Fast Transient Experiment (V-FASTR), a software-based detection system installed at the VLBA. V-FASTR uses a “commensal” (piggy-back) approach, analyzing all array data continually during routine VLBA observations and identifying candidate fast transient events. Raw data can be stored from a buffer memory, which enables a comprehensive off-line analysis. This is invaluable for validating the astrophysical origin of any detection. Candidates discovered by the automatic system must be reviewed each day by analysts to identify any promising signals that warrant a more in-depth investigation. To support the timely analysis of fast transient detection candidates by V-FASTR scientists, we have developed a metadata-driven, collaborative candidate review framework. The framework consists of a software pipeline for metadata processing composed of both open source software components and project-specific code written expressly to extract and catalog metadata from the incoming V-FASTR data products, and a web-based data portal that facilitates browsing and inspection of the available metadata for candidate events extracted from the VLBA radio data.
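
    The metadata-extraction-and-catalog step can be sketched as parsing identifying fields from incoming candidate products and recording them in a small database that a web portal could query. The filename convention below is invented, not the V-FASTR product format.

      import re
      import sqlite3

      def extract_metadata(filename):
          # hypothetical convention: candidate_<timestamp>_beam<n>_dm<value>.png
          m = re.match(r"candidate_(\w+?)_beam(\d+)_dm([\d.]+)\.png", filename)
          return {"timestamp": m.group(1), "beam": int(m.group(2)),
                  "dm": float(m.group(3))}

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE candidates (timestamp TEXT, beam INT, dm REAL)")
      for name in ["candidate_20150101T120000_beam3_dm56.7.png"]:
          meta = extract_metadata(name)
          db.execute("INSERT INTO candidates VALUES (?, ?, ?)",
                     (meta["timestamp"], meta["beam"], meta["dm"]))
      print(db.execute("SELECT * FROM candidates").fetchall())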

  5. LHCb detector and trigger performance in Run II

    NASA Astrophysics Data System (ADS)

    Dordei, Francesca

    2017-12-01

    The LHCb detector is a forward spectrometer at the LHC, designed to perform high precision studies of b- and c-hadrons. In Run II of the LHC, a new scheme for the software trigger at LHCb allows splitting the triggering of events into two stages, giving room to perform the alignment and calibration in real time. In the novel detector alignment and calibration strategy for Run II, data collected at the start of the fill are processed in a few minutes and used to update the alignment, while the calibration constants are evaluated for each run. This allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline selected events. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. The larger timing budget available in the trigger makes it possible to perform the same track reconstruction online and offline. This enables LHCb to achieve the best reconstruction performance already in the trigger, and allows physics analyses to be performed directly on the data produced by the trigger reconstruction. The novel real-time processing strategy at LHCb is discussed from both the technical and operational point of view. The overall performance of the LHCb detector on the data of Run II is presented as well.

  6. How to review 4 million lines of ATLAS code

    NASA Astrophysics Data System (ADS)

    Stewart, Graeme A.; Lampl, Walter; ATLAS Collaboration

    2017-10-01

    As the ATLAS Experiment prepares to move to a multi-threaded framework (AthenaMT) for Run 3, we are faced with the problem of how to migrate 4 million lines of C++ source code. This code has been written over the past 15 years and has often been adapted, re-written or extended to the changing requirements and circumstances of LHC data taking. The code was developed by different authors, many of whom are no longer active, and under the deep assumption that processing ATLAS data would be done in a serial fashion. In order to understand the scale of the problem faced by the ATLAS software community, and to plan appropriately the significant efforts posed by the new AthenaMT framework, ATLAS embarked on a wide ranging review of our offline code, covering all areas of activity: event generation, simulation, trigger, reconstruction. We discuss the difficulties of even logistically organising such reviews in an already busy community, of examining each area in sufficient depth to identify the key places in need of upgrade, and of finishing the reviews in a timely fashion. We show how the reviews were organised and how the outputs were captured in a way that the sub-system communities could then tackle the problems uncovered on a realistic timeline. Further, we discuss how the review has influenced the overall planning for the Run 3 ATLAS offline code.

  7. Offline software for the DAMPE experiment

    NASA Astrophysics Data System (ADS)

    Wang, Chi; Liu, Dong; Wei, Yifeng; Zhang, Zhiyong; Zhang, Yunlong; Wang, Xiaolian; Xu, Zizong; Huang, Guangshun; Tykhonov, Andrii; Wu, Xin; Zang, Jingjing; Liu, Yang; Jiang, Wei; Wen, Sicheng; Wu, Jian; Chang, Jin

    2017-10-01

    A software system has been developed for the DArk Matter Particle Explorer (DAMPE) mission, a satellite-based experiment. The DAMPE software is mainly written in C++ and steered using a Python script. This article presents an overview of the DAMPE offline software, including the major architecture design and the specific implementations for simulation, calibration and reconstruction. The whole system has been successfully applied to DAMPE data analysis. Some results obtained using the system, from simulation and beam test experiments, are presented. Supported by the Chinese 973 Program (2010CB833002), the Strategic Priority Research Program on Space Science of the Chinese Academy of Sciences (CAS) (XDA04040202-4), the Joint Research Fund in Astronomy under the cooperative agreement between the National Natural Science Foundation of China (NSFC) and CAS (U1531126), and the 100 Talents Program of the Chinese Academy of Sciences.

  8. microMS: A Python Platform for Image-Guided Mass Spectrometry Profiling

    NASA Astrophysics Data System (ADS)

    Comi, Troy J.; Neumann, Elizabeth K.; Do, Thanh D.; Sweedler, Jonathan V.

    2017-09-01

    Image-guided mass spectrometry (MS) profiling provides a facile framework for analyzing samples ranging from single cells to tissue sections. The fundamental workflow utilizes a whole-slide microscopy image to select targets of interest, determine their spatial locations, and subsequently perform MS analysis at those locations. Improving upon prior reported methodology, a software package was developed for working with microscopy images. microMS, for microscopy-guided mass spectrometry, allows the user to select and profile diverse samples using a variety of target patterns and mass analyzers. Written in Python, the program provides an intuitive graphical user interface to simplify image-guided MS for novice users. The class hierarchy of instrument interactions permits integration of new MS systems while retaining the feature-rich image analysis framework. microMS is a versatile platform for performing targeted profiling experiments using a series of mass spectrometers. The flexibility in mass analyzers greatly simplifies serial analyses of the same targets by different instruments. The current capabilities of microMS are presented, and its application for off-line analysis of single cells on three distinct instruments is demonstrated. The software has been made freely available for research purposes.
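
    The core image-guided step, mapping pixel positions of selected targets into instrument stage coordinates, amounts to fitting a transform to a few fiducial points. A least-squares affine fit in Python is sketched below; the coordinates are made up and this is not microMS code.

      import numpy as np

      # fiducials: (pixel_x, pixel_y) -> (stage_x_um, stage_y_um)
      pix = np.array([[100, 120], [1900, 150], [1000, 1800]], float)
      stage = np.array([[5000, 5200], [23000, 5500], [14000, 22000]], float)

      A = np.hstack([pix, np.ones((3, 1))])          # augmented [x y 1] rows
      T, *_ = np.linalg.lstsq(A, stage, rcond=None)  # 3x2 affine parameters

      def to_stage(pixel_xy):
          """Map one selected target from image pixels to stage micrometres."""
          return np.append(np.asarray(pixel_xy, float), 1.0) @ T

      print(to_stage([500, 500]))  # stage position for a selected cell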

  9. microMS: A Python Platform for Image-Guided Mass Spectrometry Profiling.

    PubMed

    Comi, Troy J; Neumann, Elizabeth K; Do, Thanh D; Sweedler, Jonathan V

    2017-09-01

    Image-guided mass spectrometry (MS) profiling provides a facile framework for analyzing samples ranging from single cells to tissue sections. The fundamental workflow utilizes a whole-slide microscopy image to select targets of interest, determine their spatial locations, and subsequently perform MS analysis at those locations. Improving upon prior reported methodology, a software package was developed for working with microscopy images. microMS, for microscopy-guided mass spectrometry, allows the user to select and profile diverse samples using a variety of target patterns and mass analyzers. Written in Python, the program provides an intuitive graphical user interface to simplify image-guided MS for novice users. The class hierarchy of instrument interactions permits integration of new MS systems while retaining the feature-rich image analysis framework. microMS is a versatile platform for performing targeted profiling experiments using a series of mass spectrometers. The flexibility in mass analyzers greatly simplifies serial analyses of the same targets by different instruments. The current capabilities of microMS are presented, and its application for off-line analysis of single cells on three distinct instruments is demonstrated. The software has been made freely available for research purposes.

  10. A framework for collaborative review of candidate events in high data rate streams: The V-FASTR experiment as a case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Andrew F.; Cinquini, Luca; Khudikyan, Shakeh E.

    2015-01-01

    “Fast radio transients” are defined here as bright millisecond pulses of radio-frequency energy. These short-duration pulses can be produced by known objects such as pulsars or potentially by more exotic objects such as evaporating black holes. The identification and verification of such an event would be of great scientific value. This is one major goal of the Very Long Baseline Array (VLBA) Fast Transient Experiment (V-FASTR), a software-based detection system installed at the VLBA. V-FASTR uses a “commensal” (piggy-back) approach, analyzing all array data continually during routine VLBA observations and identifying candidate fast transient events. Raw data can be stored from a buffer memory, which enables a comprehensive off-line analysis. This is invaluable for validating the astrophysical origin of any detection. Candidates discovered by the automatic system must be reviewed each day by analysts to identify any promising signals that warrant a more in-depth investigation. To support the timely analysis of fast transient detection candidates by V-FASTR scientists, we have developed a metadata-driven, collaborative candidate review framework. The framework consists of a software pipeline for metadata processing composed of both open source software components and project-specific code written expressly to extract and catalog metadata from the incoming V-FASTR data products, and a web-based data portal that facilitates browsing and inspection of the available metadata for candidate events extracted from the VLBA radio data.

  11. Analysis of several Boolean operation based trajectory generation strategies for automotive spray applications

    NASA Astrophysics Data System (ADS)

    Gao, Guoyou; Jiang, Chunsheng; Chen, Tao; Hui, Chun

    2018-05-01

    Industrial robots are widely used in various surface manufacturing processes, such as thermal spraying. Established robot programming methods are highly time-consuming and not accurate enough to fulfil the demands of the actual market. Many off-line programming methods have been developed to reduce the robot programming effort. This work introduces the principles of several Boolean operation based robot trajectory generation strategies for planar and curved surfaces. Since off-line programming software is widely used, facilitates robot programming and improves the accuracy of the robot trajectory, the analysis in this work is based on secondary development of the off-line programming software RobotStudio™. To meet the requirements of the automotive paint industry, this kind of software extension provides special functions according to user-defined operation parameters. The presented planning strategy generates the robot trajectory by moving an orthogonal surface according to the information of the coating surface; a series of intersection curves are then employed to generate the trajectory points. The simulation results show that the path curve created with this method is continuous and smooth, which corresponds to the requirements of automotive spray industrial applications.

  12. Open source tools for standardized privacy protection of medical images

    NASA Astrophysics Data System (ADS)

    Lien, Chung-Yueh; Onken, Michael; Eichelberg, Marco; Kao, Tsair; Hein, Andreas

    2011-03-01

    In addition to the primary care context, medical images are often useful for research projects and community healthcare networks, so-called "secondary use". Patient privacy becomes an issue in such scenarios since the disclosure of personal health information (PHI) has to be prevented in a sharing environment. In general, most PHIs should be completely removed from the images according to the respective privacy regulations, but some basic and alleviated data is usually required for accurate image interpretation. Our objective is to utilize and enhance these specifications in order to provide reliable software implementations for de- and re-identification of medical images suitable for online and offline delivery. DICOM (Digital Imaging and Communications in Medicine) images are de-identified by replacing PHI-specific information with values still being reasonable for imaging diagnosis and patient indexing. In this paper, this approach is evaluated based on a prototype implementation built on top of the open source framework DCMTK (DICOM Toolkit) utilizing standardized de- and re-identification mechanisms. A set of tools has been developed for DICOM de-identification that meets privacy requirements of an offline and online sharing environment and fully relies on standard-based methods.
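
    The replace-rather-than-strip idea, removing PHI while keeping values that still support interpretation and indexing, can be sketched with pydicom (the paper's prototype builds on the DCMTK C++ toolkit instead; the tag choices below are illustrative, not a full DICOM de-identification profile, and a file study.dcm is assumed to exist).

      import pydicom

      def deidentify(path_in, path_out, pseudonym):
          ds = pydicom.dcmread(path_in)
          ds.PatientName = pseudonym   # replaced, keeps a usable index key
          ds.PatientID = pseudonym
          ds.PatientBirthDate = ""     # cleared, not needed for diagnosis
          ds.remove_private_tags()     # vendor-private tags often carry PHI
          ds.save_as(path_out)

      deidentify("study.dcm", "study_deid.dcm", "CASE0001")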

  13. Graphical simulation for aerospace manufacturing

    NASA Technical Reports Server (NTRS)

    Babai, Majid; Bien, Christopher

    1994-01-01

    Simulation software has become a key technological enabler for integrating flexible manufacturing systems and streamlining the overall aerospace manufacturing process. In particular, robot simulation and offline programming software is credited with reducing downtime and labor cost, while boosting quality and significantly increasing productivity.

  14. Strategies for a Creative Future with Computer Science, Quality Design and Communicability

    NASA Astrophysics Data System (ADS)

    Cipolla Ficarra, Francisco V.; Villarreal, Maria

    This work presents the importance of the two-way triad between computer science, design and communicability. It demonstrates that the quality principles of software engineering are not universal, since they are disappearing from university training. In addition, a short analysis of the term "creativity" makes apparent the existence of plagiarism as a human factor that damages the future of communicability applied to the on-line and off-line contents of open software. A set of measures and guidelines is presented so that the triad again works correctly in the coming years and fosters the qualitative design of interactive systems on-line and/or off-line.

  15. Mapping modern software process engineering techniques onto an HEP development environment

    NASA Astrophysics Data System (ADS)

    Wellisch, J. P.

    2003-04-01

    One of the most challenging issues faced in HEP in recent years is the question of how to capitalise on software development and maintenance experience in a continuous manner. To capitalise means, in our context, to evaluate and apply new process technologies as they arise, and to further evolve technologies already widely in use. It also implies the definition and adoption of standards. The CMS off-line software improvement effort aims at continual software quality improvement and continual improvement in the efficiency of the working environment, with the goal of making it easier to do great new physics. To achieve this, we followed a process improvement program based on ISO-15504 and the Rational Unified Process. This experiment in software process improvement in HEP has now been progressing for a period of 3 years. Taking previous experience from ATLAS and SPIDER into account, we used a soft approach of continuous change within the limits of the current culture to create de facto software process standards within the CMS off-line community, as the only viable route to a successful software process improvement program in HEP. We present the CMS approach to software process improvement in this process R&D, describe lessons learned and mistakes made, demonstrate the benefits gained, and report the current status of the software processes established in CMS off-line software.

  16. The (Im)Materiality of Literacy: The Significance of Subjectivity to New Literacies Research

    ERIC Educational Resources Information Center

    Burnett, Cathy; Merchant, Guy; Pahl, Kate; Rowsell, Jennifer

    2014-01-01

    This article deconstructs the online and offline experience to show its complexities and idiosyncratic nature. It proposes a theoretical framework designed to conceptualise aspects of meaning-making across on- and offline contexts. In arguing for the "(im)materiality" of literacy, it makes four propositions which highlight the complex…

  17. Large Scale Software Building with CMake in ATLAS

    NASA Astrophysics Data System (ADS)

    Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration

    2017-10-01

    The offline software of the ATLAS experiment at the Large Hadron Collider (LHC) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector’s trigger system to select LHC collision events during data taking. The ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows, many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications also require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the above mentioned software packages. This also makes it possible to develop and test new and modified packages on top of existing releases. The system also allows one to detect and execute partial rebuilds of the release based on single package changes. The build system makes use of CPack for building RPM packages out of the software releases, and CTest for running unit and integration tests. We report on the migration and integration of the ATLAS software to CMake and show working examples of this large scale project in production.

  18. A Roadmap to Continuous Integration for ATLAS Software Development

    NASA Astrophysics Data System (ADS)

    Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration

    2017-10-01

    The ATLAS software infrastructure facilitates the efforts of more than 1000 developers working on a code base of 2200 packages with 4 million lines of C++ and 1.4 million lines of Python code. The ATLAS offline code management system is a powerful, flexible framework for processing new package version requests, probing code changes in the Nightly Build System, migrating to new platforms and compilers, deploying production releases for worldwide access, and supporting physicists with tools and interfaces for efficient software use. It maintains a multi-stream, parallel development environment with about 70 multi-platform branches of nightly releases and provides vast opportunities for testing new packages, for verifying patches to existing software, and for migrating to new platforms and compilers. The system evolution is currently aimed at the adoption of modern continuous integration (CI) practices focused on building nightly releases early and often, with rigorous unit and integration testing. This paper describes the CI incorporation program for the ATLAS software infrastructure. It brings modern open source tools such as Jenkins and GitLab into the ATLAS Nightly System, rationalizes hardware resource allocation and administrative operations, and provides developers with improved feedback and the means to fix broken builds promptly. Once adopted, ATLAS CI practices will improve and accelerate innovation cycles and result in increased confidence in new software deployments. The paper reports the status of Jenkins integration with the ATLAS Nightly System as well as short and long term plans for the incorporation of CI practices.
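
    As a flavour of the glue such a CI system needs, the sketch below queues a parameterized Jenkins build through Jenkins' standard remote-access endpoint using the requests library. The server URL, job name, parameter names, and credentials are placeholders, not ATLAS infrastructure.

        import requests

        JENKINS_URL = "https://jenkins.example.org"     # placeholder server
        JOB = "nightly-release-build"                   # hypothetical job name

        def trigger_nightly(branch, platform, user, api_token):
            """Queue a parameterized build via Jenkins' remote-access API."""
            resp = requests.post(
                f"{JENKINS_URL}/job/{JOB}/buildWithParameters",
                params={"BRANCH": branch, "PLATFORM": platform},
                auth=(user, api_token),
                timeout=30,
            )
            resp.raise_for_status()
            # Jenkins answers with the queue location of the newly queued build.
            return resp.headers.get("Location")

        # queue_url = trigger_nightly("master", "x86_64-slc6-gcc62-opt", "bot", "token")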

  19. Integration of PGD-virtual charts into an engineering design process

    NASA Astrophysics Data System (ADS)

    Courard, Amaury; Néron, David; Ladevèze, Pierre; Ballere, Ludovic

    2016-04-01

    This article deals with the efficient construction of approximations of fields and quantities of interest used in the geometric optimisation of complex shapes that can be encountered in engineering structures. The strategy developed herein is based on the construction of virtual charts that, once computed offline, allow the structure to be optimised at negligible online CPU cost. These virtual charts can be used as a powerful numerical decision support tool during the design of industrial structures. They are built using the proper generalized decomposition (PGD), which offers a very convenient framework for solving parametrised problems. In this paper, particular attention has been paid to the integration of the procedure into a genuine engineering design process. In particular, a dedicated methodology is proposed to interface the PGD approach with commercial software.
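
    The offline/online split behind the virtual-chart idea can be sketched with a separated representation u(x, mu) ~ sum_i F_i(x) G_i(mu): space and parameter modes are computed once offline, and online evaluation at a new parameter value reduces to a cheap sum. The Python sketch below uses a plain truncated SVD on a toy field as a stand-in for the PGD construction described in the paper; all values are invented.

        import numpy as np

        # Offline stage: build snapshots u(x, mu) on a grid of parameter values.
        x = np.linspace(0.0, 1.0, 200)          # spatial grid
        mus = np.linspace(1.0, 5.0, 40)         # design-parameter samples
        snapshots = np.array([np.sin(mu * np.pi * x) / mu for mu in mus]).T  # toy field

        # Separated representation u(x, mu) ~ sum_i F_i(x) * G_i(mu), here via SVD
        # (a stand-in for the PGD construction described in the paper).
        U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
        rank = 5
        F = U[:, :rank] * s[:rank]              # spatial modes, scaled
        G = Vt[:rank, :]                        # parametric modes sampled at `mus`

        def evaluate_online(mu):
            """Negligible-cost online evaluation: interpolate each parametric mode."""
            g = np.array([np.interp(mu, mus, G[i]) for i in range(rank)])
            return F @ g

        u_approx = evaluate_online(2.3)          # field at an unseen parameter value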

  20. Fast object reconstruction in block-based compressive low-light-level imaging

    NASA Astrophysics Data System (ADS)

    Ke, Jun; Sui, Dong; Wei, Ping

    2014-11-01

    In this paper we propose a simple yet effective and efficient method for long-term object tracking. Unlike traditional visual tracking methods, which mainly depend on frame-to-frame correspondence, we combine high-level semantic information with low-level correspondences. Our approach is formulated in a confidence selection framework, which allows the system to recover from drift and partly deal with the occlusion problem. The algorithm can be roughly decomposed into an initialization stage and a tracking stage. In the initialization stage, an offline classifier is trained to capture the object's appearance at the category level. As the video stream arrives, the pre-trained offline classifier detects the potential target and initializes the tracking stage. The tracking stage consists of three parts: an online tracking part, an offline detection part, and a confidence judgment part. The online tracking part captures the specific target's appearance, while the detection part localizes the object using the pre-trained offline classifier. Since there is no data dependence between online tracking and offline detection, the two parts run in parallel, significantly improving processing speed. A confidence selection mechanism is proposed to optimize the object location. We also propose a simple mechanism to judge the absence of the object: if the target is lost, the pre-trained offline classifier re-initializes the whole algorithm once the target is re-located. In experiments, we evaluate our method on several challenging video sequences and demonstrate competitive results.
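
    The confidence-selection loop described above can be summarized in a few lines of Python. The `tracker` and `detector` objects (with their update/detect/reset methods) and the loss threshold are hypothetical stand-ins for the paper's components, not its actual interfaces.

        # Online tracker and pre-trained offline detector each propose a box with
        # a confidence score; the more confident proposal wins, and the detector
        # re-initializes the tracker when the target is judged lost.

        LOST_THRESHOLD = 0.3  # assumed confidence below which the target is "lost"

        def track(frames, tracker, detector):
            results = []
            for frame in frames:
                box_t, conf_t = tracker.update(frame)    # specific-target appearance
                box_d, conf_d = detector.detect(frame)   # category-level appearance
                if max(conf_t, conf_d) < LOST_THRESHOLD:
                    results.append(None)                 # target absent this frame
                    continue
                box = box_t if conf_t >= conf_d else box_d
                if conf_t < LOST_THRESHOLD <= conf_d:
                    tracker.reset(frame, box_d)          # recover from drift
                results.append(box)
            return results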

  1. g-PRIME: A Free, Windows Based Data Acquisition and Event Analysis Software Package for Physiology in Classrooms and Research Labs.

    PubMed

    Lott, Gus K; Johnson, Bruce R; Bonow, Robert H; Land, Bruce R; Hoy, Ronald R

    2009-01-01

    We present g-PRIME, a software-based tool for physiology data acquisition, analysis, and stimulus generation in education and research. This software was developed in an undergraduate neurophysiology course and strongly influenced by instructor and student feedback. g-PRIME is a free, stand-alone Windows application coded and "compiled" in Matlab (it does not require a Matlab license). g-PRIME supports many data acquisition interfaces, from the PC sound card to expensive high-throughput calibrated equipment. The program is designed as a software oscilloscope with standard trigger modes, multi-channel visualization controls, and data logging features. Extensive analysis options allow real-time and offline filtering of signals, multi-parameter threshold-and-window based event detection, and two-dimensional display of a variety of parameters including event time, energy density, maximum FFT frequency component, max/min amplitudes, and inter-event rate and intervals. The software also correlates detected events with another simultaneously acquired source (event-triggered average) in real time or offline. g-PRIME supports parameter histogram production and a variety of elegant publication-quality graphics outputs. A major goal of this software is to merge powerful engineering acquisition and analysis tools with a biological approach to studies of nervous system function.
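
    Threshold-and-window event detection of the kind described above is a common technique; the following numpy sketch is a generic illustration, not the g-PRIME implementation, and all parameter values are assumptions.

        import numpy as np

        def detect_events(signal, fs, threshold, window=(None, None), refractory=0.002):
            """Indices of events: threshold crossings whose peak lies in `window`.

            fs         -- sampling rate in Hz
            threshold  -- crossing level
            window     -- (lo, hi) accepted peak-amplitude range; None disables a bound
            refractory -- dead time in seconds after each accepted event
            """
            lo, hi = window
            dead = int(refractory * fs)
            events, i = [], 1
            while i < len(signal):
                if signal[i - 1] < threshold <= signal[i]:       # upward crossing
                    peak = signal[i:i + dead].max() if dead else signal[i]
                    if (lo is None or peak >= lo) and (hi is None or peak <= hi):
                        events.append(i)
                    i += max(dead, 1)                            # skip refractory span
                else:
                    i += 1
            return np.array(events)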

  2. The Muon Conditions Data Management:. Database Architecture and Software Infrastructure

    NASA Astrophysics Data System (ADS)

    Verducci, Monica

    2010-04-01

    The management of the Muon Conditions Database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates and in terms of the variety of the data stored and their analysis. The Muon conditions database is responsible for storing almost all of the 'non-event' data and detector quality flags needed for debugging detector operations and for performing reconstruction and analysis. In particular for the early data, knowledge of the detector performance and of the corrections in terms of efficiency and calibration will be extremely important for the correct reconstruction of the events. In this work, an overview of the entire Muon conditions database architecture is given, covering the different sources of the data and the storage model used, including the associated database technology. Particular emphasis is given to the Data Quality chain: the flow of the data, the analysis, and the final results are described. In addition, the software interfaces used to access the conditions data are described, in particular within ATHENA, the ATLAS Offline Reconstruction framework.
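
    Conditions data of this kind are typically keyed by an interval of validity (IoV): a payload applies from one run (or timestamp) until it is superseded by the next. The Python sketch below shows that lookup pattern generically, with invented payloads; it is an illustration of the idea, not the ATLAS/COOL interface.

        import bisect

        class ConditionsFolder:
            """Toy interval-of-validity store: a payload is valid from `since`
            (a run number) until the next entry. A stand-in for a COOL-style
            conditions folder, not the actual ATLAS interface."""

            def __init__(self):
                self._since = []    # sorted run numbers
                self._payload = []  # calibration payloads, parallel to _since

            def store(self, since_run, payload):
                i = bisect.bisect_left(self._since, since_run)
                self._since.insert(i, since_run)
                self._payload.insert(i, payload)

            def retrieve(self, run):
                i = bisect.bisect_right(self._since, run) - 1
                if i < 0:
                    raise KeyError(f"no conditions valid for run {run}")
                return self._payload[i]

        folder = ConditionsFolder()
        folder.store(1000, {"t0_offset_ns": 4.2})   # invented calibration payloads
        folder.store(1500, {"t0_offset_ns": 3.9})
        print(folder.retrieve(1234))                 # -> {'t0_offset_ns': 4.2}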

  3. Implementation of an object oriented track reconstruction model into multiple LHC experiments*

    NASA Astrophysics Data System (ADS)

    Gaines, Irwin; Gonzalez, Saul; Qian, Sijin

    2001-10-01

    An Object Oriented (OO) model (Gaines et al., 1996; 1997; Gaines and Qian, 1998; 1999) for track reconstruction by the Kalman filtering method has been designed for high energy physics experiments at high luminosity hadron colliders. The model has been coded in the C++ programming language and has been successfully implemented into the OO computing environments of both the CMS (1994) and ATLAS (1994) experiments at the future Large Hadron Collider (LHC) at CERN. We shall report: how the OO model was adapted, with largely the same code, to different scenarios and serves different reconstruction aims in different experiments (i.e. the level-2 trigger software for ATLAS and the offline software for CMS); how the OO model has been incorporated into different OO environments with a similar integration structure (demonstrating the ease of re-use of OO programs); what the OO model's performance is, including execution time, memory usage, track-finding efficiency, and ghost rate; and the additional physics performance obtained with the OO tracking model. We shall also mention the experience and lessons learned from the implementation of the OO model into the general OO software frameworks of the experiments. In summary, our experience shows that OO technology makes software development and integration straightforward and convenient; this may be particularly beneficial for non-computer-professional physicists.
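
    The Kalman filtering method at the core of the model alternates prediction through a propagation matrix with a measurement update at each detector plane. Below is a minimal numpy sketch for a straight-line track in one projection (state = position and slope); the plane spacing and noise figures are invented, and this is an illustration of the method, not the experiments' code.

        import numpy as np

        # State: [position, slope]; the track is measured at equally spaced planes.
        dz = 10.0                                  # plane spacing (cm), assumed
        F = np.array([[1.0, dz], [0.0, 1.0]])      # propagation between planes
        H = np.array([[1.0, 0.0]])                 # only position is measured
        Q = np.diag([1e-4, 1e-5])                  # process noise (scattering), toy
        R = np.array([[0.01]])                     # measurement variance, toy

        def kalman_fit(hits):
            x = np.array([hits[0], 0.0])           # initial state from first hit
            P = np.diag([1.0, 1.0])                # loose initial covariance
            for z in hits[1:]:
                x, P = F @ x, F @ P @ F.T + Q                      # predict
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)                     # gain
                x = x + (K @ (np.array([z]) - H @ x))              # update
                P = (np.eye(2) - K @ H) @ P
            return x, P

        state, cov = kalman_fit([0.02, 0.98, 2.03, 3.01])          # toy hits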

  4. Defense Travel System (DTS) Airline Ticket Price Analysis: Do DTS Ticket Prices Differ From Other Online Tickets Available for Naval Postgraduate School Travelers

    DTIC Science & Technology

    2007-12-01

    price dispersion at least as large as dispersion for traditional retailers for books, music CDs, and software offered through 52 Internet and...dispersion differences. For instance, for 22 old-hit albums, average price percentage differences are 31% on-line, compared to 11% off-line. But for 21...current-hit albums, differences are smaller at 18% on-line and 19% off-line. This suggests price dispersion levels are related to product

  5. Preliminary Studies for a CBCT Imaging Protocol for Offline Organ Motion Analysis: Registration Software Validation and CTDI Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falco, Maria Daniela, E-mail: mdanielafalco@hotmail.co; Fontanarosa, Davide; Miceli, Roberto

    2011-04-01

    Cone-beam X-ray volumetric imaging in the treatment room allows online correction of set-up errors and offline assessment of residual set-up errors and organ motion. In this study the registration algorithm of the X-ray volume imaging software (XVI, Elekta, Crawley, United Kingdom), which manages a commercial cone-beam computed tomography (CBCT)-based positioning system, has been tested using a homemade and an anthropomorphic phantom to: (1) assess its performance in detecting known translational and rotational set-up errors and (2) transfer the transformation matrix of its registrations into a commercial treatment planning system (TPS) for offline organ motion analysis. Furthermore, the CBCT dose index has been measured for a particular site (prostate: 120 kV, 1028.8 mAs, approximately 640 frames) using a standard Perspex cylindrical body phantom (diameter 32 cm, length 15 cm) and a 10-cm-long pencil ionization chamber. We have found that known displacements were correctly calculated by the registration software to within 1.3 mm and 0.4°. For the anthropomorphic phantom, only translational displacements have been considered. Both studies have shown errors within the intrinsic uncertainty of our system for translational displacements (estimated as 0.87 mm) and rotational displacements (estimated as 0.22°). The resulting table translations proposed by the system to correct the displacements were also checked with portal images and found to place the isocenter of the plan on the linac isocenter within an error of 1 mm, which is the dimension of the spherical lead marker inserted at the center of the homemade phantom. The registration matrix translated into the TPS image fusion module correctly reproduced the alignment between planning CT scans and CBCT scans. Finally, measurements of the CBCT dose index indicate that CBCT acquisition delivers less dose than conventional CT scans and electronic portal imaging device portals. The registration software was found to be accurate, its registration matrix can be easily transferred into the TPS, and a low dose is delivered to the patient during image acquisition. These results can help in designing imaging protocols for offline evaluations.

  6. Intellectual Property, Digital Technology and the Developing World

    NASA Astrophysics Data System (ADS)

    Pupillo, Lorenzo Maria

    This chapter provides an overview of how the converging ICTs are challenging the traditional off-line copyright doctrine and suggests how developing countries should approach issues such as copyright in the digital world, software (Protection, Open Source, Reverse Engineering), and database protection. The balance of the chapter is organized into three sections. After the introduction, the second section explains how digital technology is dramatically changing the entertainment industry, what the major challenges to the industry are, and which approaches the economic literature suggests for facing the structural changes that the digital revolution is bringing forward. Starting from the assumption that IPR frameworks need to be customized to countries’ development needs, the third section makes recommendations on how developing countries should use copyright to support access to information and to creative industries.

  7. ATLAS fast physics monitoring: TADA

    NASA Astrophysics Data System (ADS)

    Sabato, G.; Elsing, M.; Gumpert, C.; Kamioka, S.; Moyse, E.; Nairz, A.; Eifert, T.; ATLAS Collaboration

    2017-10-01

    The ATLAS experiment at the LHC has been recording data from proton-proton collisions at 13 TeV center-of-mass energy since spring 2015. The collaboration uses a fast physics monitoring framework (TADA) to automatically perform a broad range of fast searches for early signs of new physics and to monitor data quality across the year, with the full analysis-level calibrations applied to the rapidly growing data. TADA is designed to provide fast feedback directly after the collected data have been fully calibrated and processed at the Tier-0. The system can monitor a large range of physics channels, offline data quality, and physics performance quantities. TADA output is available on a website accessible to the whole collaboration and is updated twice a day with the data from newly processed runs. Hints of potentially interesting physics signals or performance issues identified in this way are reported and followed up by physics or combined-performance groups. The note also reports on the technical aspects of TADA: the software structure used to obtain the input TAG files, the framework workflow and structure, and the webpage and its implementation.

  8. The VISPA internet platform for outreach, education and scientific research in various experiments

    NASA Astrophysics Data System (ADS)

    van Asseldonk, D.; Erdmann, M.; Fischer, B.; Fischer, R.; Glaser, C.; Heidemann, F.; Müller, G.; Quast, T.; Rieger, M.; Urban, M.; Welling, C.

    2015-12-01

    VISPA provides a graphical front-end to computing infrastructures, giving its users all the functionality needed for working conditions comparable to a personal computer. It is a framework that can be extended with custom applications to support individual needs, e.g. graphical interfaces for experiment-specific software. By design, VISPA serves as a multipurpose platform for many disciplines and experiments, as demonstrated by the following use-cases: a GUI for the OFFLINE analysis framework of the Pierre Auger collaboration, submission and monitoring of computing jobs, university teaching of hundreds of students, and outreach activity, especially in CERN's open data initiative. Serving heterogeneous user groups and applications has given us a great deal of experience. This helps us mature the system, i.e. improve its robustness and responsiveness and the interplay of its components. Among the lessons learned are the choice of a file system, the implementation of websockets, efficient load balancing, and the fine-tuning of existing technologies like RPC over SSH. We present the improved server setup in detail and report on the performance, the user acceptance, and the realized applications of the system.

  9. The Application of SNiPER to the JUNO Simulation

    NASA Astrophysics Data System (ADS)

    Lin, Tao; Zou, Jiaheng; Li, Weidong; Deng, Ziyan; Fang, Xiao; Cao, Guofu; Huang, Xingtao; You, Zhengyun; JUNO Collaboration

    2017-10-01

    The JUNO (Jiangmen Underground Neutrino Observatory) is a multipurpose neutrino experiment designed to determine the neutrino mass hierarchy and precisely measure oscillation parameters. As one of the important systems, the JUNO offline software is being developed using the SNiPER framework. In this proceeding, we focus on the requirements of the JUNO simulation and present the working solution based on SNiPER. The JUNO simulation framework is in charge of managing event data, detector geometries and materials, physics processes, simulation truth information, etc. It glues the physics generator, detector simulation, and electronics simulation modules together to achieve a full simulation chain. In the implementation of the framework, many attractive characteristics of SNiPER have been used, such as dynamic loading, flexible flow control, multiple event management, and Python binding. Furthermore, additional efforts have been made to make both the detector and electronics simulation flexible enough to accommodate and optimize different detector designs. For the Geant4-based detector simulation, each sub-detector component is implemented as a SNiPER tool, which is a dynamically loadable and configurable plugin, so it is possible to select the detector configuration at runtime. The framework provides the event loop to drive the detector simulation and interacts with Geant4, which is implemented as a passive service. All levels of user actions are wrapped into different customizable tools, so that user functions can be easily extended by adding new tools. The electronics simulation has been implemented following an event-driven scheme. The SNiPER task component is used to simulate the data processing steps in the electronics modules. The electronics and trigger are synchronized by triggered events containing possible physics signals. The JUNO simulation software has been released and is being used by the JUNO collaboration for detector design optimization, event reconstruction algorithm development, and physics sensitivity studies.
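
    The dynamically loadable, runtime-configurable tools described above follow a common plugin pattern. The Python sketch below shows that pattern generically with importlib; it is an illustration of the design, not the SNiPER API, and the module and tool names in the usage comment are hypothetical.

        import importlib

        class ToolRegistry:
            """Generic runtime-pluggable tool registry, in the spirit of the
            dynamically loadable sub-detector tools described above (this is an
            illustration, not the SNiPER API)."""

            def __init__(self):
                self._tools = {}

            def load(self, name, module_path, class_name, **config):
                """Import `class_name` from `module_path` and instantiate it."""
                cls = getattr(importlib.import_module(module_path), class_name)
                self._tools[name] = cls(**config)
                return self._tools[name]

            def get(self, name):
                return self._tools[name]

        # Hypothetical usage: pick the detector configuration at run time.
        # registry = ToolRegistry()
        # registry.load("cd_geom", "mysim.tools", "CentralDetector", radius_m=17.7)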

  10. Adaptive cyber-attack modeling system

    NASA Astrophysics Data System (ADS)

    Gonsalves, Paul G.; Dougherty, Edward T.

    2006-05-01

    The pervasiveness of software and networked information systems is evident across a broad spectrum of business and government sectors. Such reliance provides ample opportunity not only for the nefarious exploits of lone-wolf computer hackers, but also for more systematic software attacks from organized entities. Much effort and focus have been placed on preventing and ameliorating network and OS attacks; a concomitant emphasis is required to address the protection of mission-critical software. Typical evaluation and verification and validation (V&V) of software protection techniques and methodologies involves a team of subject matter experts (SMEs) mimicking potential attackers or hackers. This manpower-intensive, time-consuming, and potentially cost-prohibitive approach is not amenable to performing the multiple non-subjective analyses required to quantify software protection levels. To facilitate the evaluation and V&V of software protection solutions, we have designed and developed a prototype adaptive cyber-attack modeling system. Our approach integrates an off-line mechanism for rapid construction of Bayesian belief network (BN) attack models with an on-line model instantiation, adaptation, and knowledge acquisition scheme. Off-line model construction is supported via a knowledge elicitation approach for identifying key domain requirements and a process for translating these requirements into a library of BN-based cyber-attack models. On-line attack modeling and knowledge acquisition are supported via BN evidence propagation and model parameter learning.
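
    The on-line evidence propagation step can be illustrated with the smallest possible BN fragment: one hidden "attack" node and one observed "alert" node, with Bayes' rule propagating the evidence. All probabilities in this Python sketch are invented for illustration.

        # A minimal Bayesian-network fragment in the spirit of the BN attack
        # models described above; probabilities are invented.

        P_attack = 0.01                      # prior that an attack is under way
        P_alert_given_attack = 0.9           # sensor fires during an attack
        P_alert_given_no_attack = 0.05       # false-alarm rate

        def posterior_attack(alert_observed: bool) -> float:
            """Propagate a single piece of evidence with Bayes' rule."""
            like_a = P_alert_given_attack if alert_observed else 1 - P_alert_given_attack
            like_n = P_alert_given_no_attack if alert_observed else 1 - P_alert_given_no_attack
            joint_a = like_a * P_attack
            joint_n = like_n * (1 - P_attack)
            return joint_a / (joint_a + joint_n)

        print(posterior_attack(True))        # evidence raises P(attack) from 1% to ~15%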

  11. A mathematical model for Vertical Attitude Takeoff and Landing (VATOL) aircraft simulation. Volume 3: User's manual for VATOL simulation program

    NASA Technical Reports Server (NTRS)

    Fortenbaugh, R. L.

    1980-01-01

    Instructions are given for using the Vertical Attitude Takeoff and Landing Aircraft Simulation (VATLAS), the digital simulation program for vertical attitude takeoff and landing (VATOL) aircraft developed for installation on the NASA Ames CDC 7600 computer system. The framework for VATLAS is the Off-Line Simulation (OLSIM) routine. The OLSIM routine provides a flexible framework and standardized modules which facilitate the development of off-line aircraft simulations. OLSIM runs under the control of VTOLTH, the main program, which calls the proper modules for executing user-specified options. These options include trim, stability derivative calculation, time history generation, and various input-output options.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flores, Alejandra Parra; Bravo, Oscar Martinez; Ibargueen, Humberto Salazar

    The purpose of this work is to present the results of the analysis of a library of synthetic data corresponding to Very Inclined Showers (i.e., those with a zenith angle between 60 and 80 degrees and energies from 50 EeV to 80 EeV). Simulations were performed using the Aires software and then analyzed to narrow down the arrival angles that allow efficient shower reconstruction with the Offline software.

  13. Development of an Online and Offline Integration Hypothesis for Healthy Internet Use: Theory and Preliminary Evidence

    PubMed Central

    Lin, Xiaoyan; Su, Wenliang; Potenza, Marc N.

    2018-01-01

    The Internet has become an integral part of our daily life, and how to make the best use of the Internet is important to both individuals and society. Based on previous studies, an Online and Offline Integration Hypothesis is proposed to suggest a framework for considering harmonious and balanced Internet use. The Integration Hypothesis proposes that healthier patterns of Internet usage may be achieved through harmonious integration of people’s online and offline worlds. An online/offline integration is proposed to unite self-identity, interpersonal relationships, and social functioning with both cognitive and behavioral aspects by following the principles of communication, transfer, consistency, and “offline-first” priorities. To begin to test the hypothesis regarding the relationship between integration level and psychological outcomes, data for the present study were collected from 626 undergraduate students (41.5% males). Participants completed scales for online and offline integration, Internet addiction, pros and cons of Internet use, loneliness, extraversion, and life satisfaction. The findings revealed that subjects with higher levels of online/offline integration have higher life satisfaction, greater extraversion, more positive perceptions of the Internet, less loneliness, lower Internet addiction, and fewer negative perceptions of the Internet. Integration mediates the link between extraversion and psychological outcomes, and it may be the mechanism underlying the difference between the “rich get richer” and social compensation hypotheses. The implications of the online and offline integration hypothesis are discussed. PMID:29706910

  14. Development of an Online and Offline Integration Hypothesis for Healthy Internet Use: Theory and Preliminary Evidence.

    PubMed

    Lin, Xiaoyan; Su, Wenliang; Potenza, Marc N

    2018-01-01

    The Internet has become an integral part of our daily life, and how to make the best use of the Internet is important to both individuals and society. Based on previous studies, an Online and Offline Integration Hypothesis is proposed to suggest a framework for considering harmonious and balanced Internet use. The Integration Hypothesis proposes that healthier patterns of Internet usage may be achieved through harmonious integration of people's online and offline worlds. An online/offline integration is proposed to unite self-identity, interpersonal relationships, and social functioning with both cognitive and behavioral aspects by following the principles of communication, transfer, consistency, and "offline-first" priorities. To begin to test the hypothesis regarding the relationship between integration level and psychological outcomes, data for the present study were collected from 626 undergraduate students (41.5% males). Participants completed scales for online and offline integration, Internet addiction, pros and cons of Internet use, loneliness, extraversion, and life satisfaction. The findings revealed that subjects with higher levels of online/offline integration have higher life satisfaction, greater extraversion, more positive perceptions of the Internet, less loneliness, lower Internet addiction, and fewer negative perceptions of the Internet. Integration mediates the link between extraversion and psychological outcomes, and it may be the mechanism underlying the difference between the "rich get richer" and social compensation hypotheses. The implications of the online and offline integration hypothesis are discussed.

  15. Software and hardware infrastructure for research in electrophysiology

    PubMed Central

    Mouček, Roman; Ježek, Petr; Vařeka, Lukáš; Řondík, Tomáš; Brůha, Petr; Papež, Václav; Mautner, Pavel; Novotný, Jiří; Prokop, Tomáš; Štěbeták, Jan

    2014-01-01

    As in other areas of experimental science, the operation of an electrophysiological laboratory, the design and performance of electrophysiological experiments, the collection, storage and sharing of experimental data and metadata, the analysis and interpretation of these data, and the publication of results are time-consuming activities. If these activities are well organized and supported by a suitable infrastructure, the work efficiency of researchers increases significantly. This article deals with the main concepts, design, and development of a software and hardware infrastructure for research in electrophysiology. The described infrastructure has been primarily developed for the needs of the neuroinformatics laboratory at the University of West Bohemia, Czech Republic. However, from the beginning it has also been designed and developed to be open and applicable in laboratories that do similar research. After introducing the laboratory and the overall architectural concept, the individual parts of the infrastructure are described. The central element of the software infrastructure is a web-based portal that enables community researchers to store, share, download, and search data and metadata from electrophysiological experiments. The data model, domain ontology, and usage of semantic web languages and technologies are described. The current data publication policy used in the portal is briefly introduced. The registration of the portal within the Neuroscience Information Framework is described. Then the methods used for processing electrophysiological signals are presented. The specific modifications of these methods introduced by laboratory researchers are summarized; the methods are organized into a laboratory workflow. Other parts of the software infrastructure include mobile and offline solutions for data/metadata storage and a hardware stimulator communicating with an EEG amplifier and recording software. PMID:24639646

  16. Software and hardware infrastructure for research in electrophysiology.

    PubMed

    Mouček, Roman; Ježek, Petr; Vařeka, Lukáš; Rondík, Tomáš; Brůha, Petr; Papež, Václav; Mautner, Pavel; Novotný, Jiří; Prokop, Tomáš; Stěbeták, Jan

    2014-01-01

    As in other areas of experimental science, the operation of an electrophysiological laboratory, the design and performance of electrophysiological experiments, the collection, storage and sharing of experimental data and metadata, the analysis and interpretation of these data, and the publication of results are time-consuming activities. If these activities are well organized and supported by a suitable infrastructure, the work efficiency of researchers increases significantly. This article deals with the main concepts, design, and development of a software and hardware infrastructure for research in electrophysiology. The described infrastructure has been primarily developed for the needs of the neuroinformatics laboratory at the University of West Bohemia, Czech Republic. However, from the beginning it has also been designed and developed to be open and applicable in laboratories that do similar research. After introducing the laboratory and the overall architectural concept, the individual parts of the infrastructure are described. The central element of the software infrastructure is a web-based portal that enables community researchers to store, share, download, and search data and metadata from electrophysiological experiments. The data model, domain ontology, and usage of semantic web languages and technologies are described. The current data publication policy used in the portal is briefly introduced. The registration of the portal within the Neuroscience Information Framework is described. Then the methods used for processing electrophysiological signals are presented. The specific modifications of these methods introduced by laboratory researchers are summarized; the methods are organized into a laboratory workflow. Other parts of the software infrastructure include mobile and offline solutions for data/metadata storage and a hardware stimulator communicating with an EEG amplifier and recording software.

  17. Optimizing CMS build infrastructure via Apache Mesos

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Degano, Alessandro; Elmer, Peter; Eulisse, Giulio; Mendez, David; Muzaffar, Shahzad

    2015-12-01

    The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general-use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. We present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos-enabled cluster and how this resulted in better resource usage, higher peak performance, and lower latency thanks to the dynamic scheduling capabilities of Mesos.

  18. Quantifying discrepancies in opinion spectra from online and offline networks.

    PubMed

    Lee, Deokjae; Hahn, Kyu S; Yook, Soon-Hyung; Park, Juyong

    2015-01-01

    Online social media such as Twitter are widely used for mining public opinions and sentiments on various issues and topics. The sheer volume of the data generated and the eager adoption by the online-savvy public are helping to raise the profile of online media as a convenient source of news and public opinions on social and political issues as well. Due to the uncontrollable biases in the population who heavily use the media, however, it is often difficult to measure how accurately the online sphere reflects the offline world at large, undermining the usefulness of online media. One way of identifying and overcoming the online-offline discrepancies is to apply a common analytical and modeling framework to comparable data sets from online and offline sources and cross-analyzing the patterns found therein. In this paper we study the political spectra constructed from Twitter and from legislators' voting records as an example to demonstrate the potential limits of online media as the source for accurate public opinion mining, and how to overcome the limits by using offline data simultaneously.

  19. Quantifying Discrepancies in Opinion Spectra from Online and Offline Networks

    PubMed Central

    Lee, Deokjae; Hahn, Kyu S.; Yook, Soon-Hyung; Park, Juyong

    2015-01-01

    Online social media such as Twitter are widely used for mining public opinions and sentiments on various issues and topics. The sheer volume of the data generated and the eager adoption by the online-savvy public are helping to raise the profile of online media as a convenient source of news and public opinions on social and political issues as well. Due to the uncontrollable biases in the population who heavily use the media, however, it is often difficult to measure how accurately the online sphere reflects the offline world at large, undermining the usefulness of online media. One way of identifying and overcoming the online–offline discrepancies is to apply a common analytical and modeling framework to comparable data sets from online and offline sources and cross-analyzing the patterns found therein. In this paper we study the political spectra constructed from Twitter and from legislators' voting records as an example to demonstrate the potential limits of online media as the source for accurate public opinion mining, and how to overcome the limits by using offline data simultaneously. PMID:25915931

  20. SU-F-J-194: Development of Dose-Based Image Guided Proton Therapy Workflow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pham, R; Sun, B; Zhao, T

    Purpose: To implement image-guided proton therapy (IGPT) based on daily proton dose distributions. Methods: Unlike x-ray therapy, simple alignment based on anatomy cannot ensure proper dose coverage in proton therapy. Anatomy changes along the beam path may lead to underdosing the target or overdosing the organ-at-risk (OAR). With an in-room mobile computed tomography (CT) system, we are developing a dose-based IGPT software tool that allows patient positioning and treatment adaptation based on daily dose distributions. During an IGPT treatment, daily CT images are acquired in treatment position. After initial positioning based on rigid image registration, the proton dose distribution is calculated on the daily CT images. The target and OARs are automatically delineated via deformable image registration. Dose distributions are evaluated to decide whether repositioning or plan adaptation is necessary in order to achieve proper coverage of the target and sparing of the OARs. Besides online dose-based image guidance, the software tool can also map daily treatment doses to the treatment planning CT images for offline adaptive treatment. Results: An in-room helical CT system is commissioned for IGPT purposes. It produces accurate CT numbers that allow proton dose calculation. GPU-based deformable image registration algorithms are developed and evaluated for automatic ROI delineation and dose mapping. The online and offline IGPT functionalities are evaluated with daily CT images of proton patients. Conclusion: The online and offline IGPT software tool may improve the safety and quality of proton treatment by allowing dose-based IGPT and adaptive proton treatments. Research is partially supported by Mevion Medical Systems.

  1. Association of Maltreatment With High-Risk Internet Behaviors and Offline Encounters

    PubMed Central

    Shenk, Chad E.; Barnes, Jaclyn E.; Haralson, Katherine J.

    2013-01-01

    OBJECTIVE: High-risk Internet behaviors, including viewing sexually explicit content, provocative social networking profiles, and entertaining online sexual solicitations, were examined in a sample of maltreated and nonmaltreated adolescent girls aged 14 to 17 years. The impact of Internet behaviors on subsequent offline meetings was observed over 12 to 16 months. This study tested 2 main hypotheses: (1) maltreatment would be a unique contributor to high-risk Internet behaviors and (2) high-quality parenting would dampen adolescents’ propensity to engage in high-risk Internet behaviors and to participate in offline meetings. METHODS: Online and offline behaviors and parenting quality were gleaned from 251 adolescent girls, 130 of whom experienced substantiated maltreatment and 121 of whom were demographically matched comparison girls. Parents reported on adolescent behaviors and on the level of Internet monitoring in the home. Social networking profiles were objectively coded for provocative self-presentations. Offline meetings with persons first met online were assessed 12 to 16 months later. RESULTS: Thirty percent of adolescents reported having offline meetings. Maltreatment, adolescent behavioral problems, and low cognitive ability were uniquely associated with high-risk Internet behaviors. Exposure to sexual content, creating high-risk social networking profiles, and receiving online sexual solicitations were independent predictors of subsequent offline meetings. High-quality parenting and parental monitoring moderated the associations between adolescent risk factors and Internet behaviors, whereas use of parental control software did not. CONCLUSIONS: Treatment modalities for maltreated adolescents should be enhanced to include Internet safety literacy. Adolescents and parents should be aware of how online self-presentations and other Internet behaviors can increase vulnerability for Internet-initiated victimization. PMID:23319522

  2. Association of maltreatment with high-risk internet behaviors and offline encounters.

    PubMed

    Noll, Jennie G; Shenk, Chad E; Barnes, Jaclyn E; Haralson, Katherine J

    2013-02-01

    High-risk Internet behaviors, including viewing sexually explicit content, provocative social networking profiles, and entertaining online sexual solicitations, were examined in a sample of maltreated and nonmaltreated adolescent girls aged 14 to 17 years. The impact of Internet behaviors on subsequent offline meetings was observed over 12 to 16 months. This study tested 2 main hypotheses: (1) maltreatment would be a unique contributor to high-risk Internet behaviors and (2) high-quality parenting would dampen adolescents' propensity to engage in high-risk Internet behaviors and to participate in offline meetings. Online and offline behaviors and parenting quality were gleaned from 251 adolescent girls, 130 of whom experienced substantiated maltreatment and 121 of whom were demographically matched comparison girls. Parents reported on adolescent behaviors and on the level of Internet monitoring in the home. Social networking profiles were objectively coded for provocative self-presentations. Offline meetings with persons first met online were assessed 12 to 16 months later. Thirty percent of adolescents reported having offline meetings. Maltreatment, adolescent behavioral problems, and low cognitive ability were uniquely associated with high-risk Internet behaviors. Exposure to sexual content, creating high-risk social networking profiles, and receiving online sexual solicitations were independent predictors of subsequent offline meetings. High-quality parenting and parental monitoring moderated the associations between adolescent risk factors and Internet behaviors, whereas use of parental control software did not. Treatment modalities for maltreated adolescents should be enhanced to include Internet safety literacy. Adolescents and parents should be aware of how online self-presentations and other Internet behaviors can increase vulnerability for Internet-initiated victimization.

  3. Attacking Software Crisis: A Macro Approach.

    DTIC Science & Technology

    1985-03-01

    Advisor X0774R.. Dyns, Second Reader W.R. Greer r. armn, Department of Administrative Sciences Kneale rf. mrh- Dean of Information and Policy Sciences ...was at least originally intended to have practical value, that is, to satisfy some real need. Even the recent wave of game software for microcomputer...Comparing Online and Offline Programming Performance, Communications of the ACM, January, 1968. 31. Schwartz, J. "Analyzing Large-Scale System

  4. JANIS 4: An Improved Version of the NEA Java-based Nuclear Data Information System

    NASA Astrophysics Data System (ADS)

    Soppera, N.; Bossant, M.; Dupont, E.

    2014-06-01

    JANIS is software developed to facilitate the visualization and manipulation of nuclear data, giving access to evaluated data libraries, and to the EXFOR and CINDA databases. It is stand-alone Java software, downloadable from the web and distributed on DVD. Used offline, the system also makes use of an internet connection to access the NEA Data Bank database. It is now also offered as a full web application, only requiring a browser. The features added in the latest version of the software and this new web interface are described.

  5. JANIS 4: An Improved Version of the NEA Java-based Nuclear Data Information System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soppera, N., E-mail: nicolas.soppera@oecd.org; Bossant, M.; Dupont, E.

    JANIS is software developed to facilitate the visualization and manipulation of nuclear data, giving access to evaluated data libraries, and to the EXFOR and CINDA databases. It is stand-alone Java software, downloadable from the web and distributed on DVD. Used offline, the system also makes use of an internet connection to access the NEA Data Bank database. It is now also offered as a full web application, only requiring a browser. The features added in the latest version of the software and this new web interface are described.

  6. Reactivation, Replay, and Preplay: How It Might All Fit Together

    PubMed Central

    Buhry, Laure; Azizi, Amir H.; Cheng, Sen

    2011-01-01

    Sequential activation of neurons that occurs during “offline” states, such as sleep or awake rest, is correlated with neural sequences recorded during preceding exploration phases. This so-called reactivation, or replay, has been observed in a number of different brain regions such as the striatum, prefrontal cortex, primary visual cortex and, most prominently, the hippocampus. Reactivation largely co-occurs together with hippocampal sharp-waves/ripples, brief high-frequency bursts in the local field potential. Here, we first review the mounting evidence for the hypothesis that reactivation is the neural mechanism for memory consolidation during sleep. We then discuss recent results that suggest that offline sequential activity in the waking state might not be simple repetitions of previously experienced sequences. Some offline sequential activity occurs before animals are exposed to a novel environment for the first time, and some sequences activated offline correspond to trajectories never experienced by the animal. We propose a conceptual framework for the dynamics of offline sequential activity that can parsimoniously describe a broad spectrum of experimental results. These results point to a potentially broader role of offline sequential activity in cognitive functions such as maintenance of spatial representation, learning, or planning. PMID:21918724

  7. The ALICE data quality monitoring system

    NASA Astrophysics Data System (ADS)

    von Haller, B.; Telesca, A.; Chapeland, S.; Carena, F.; Carena, W.; Chibante Barroso, V.; Costa, F.; Denes, E.; Divià, R.; Fuchs, U.; Simonetti, G.; Soós, C.; Vande Vyvre, P.; ALICE Collaboration

    2011-12-01

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). The online Data Quality Monitoring (DQM) is a key element of the Data Acquisition's software chain. It provides shifters with precise and complete information to quickly identify and overcome problems, and as a consequence to ensure acquisition of high-quality data. DQM typically involves the online gathering of monitored data, their analysis by user-defined algorithms, and the visualization of the results. This paper describes the final design of ALICE's DQM framework, called AMORE (Automatic MOnitoRing Environment), as well as its latest and upcoming features, such as the integration with the offline analysis and reconstruction framework, better use of multi-core processors through a parallelization effort, and its interface with the eLogBook. The concurrent collection and analysis of data in an online environment requires the framework to be highly efficient, robust, and scalable. We describe what has been implemented to achieve these goals and the procedures we follow to ensure appropriate robustness and performance. We then review the wide range of uses of this framework, from the basic monitoring of a single sub-detector to the most complex ones within the High Level Trigger farm or using the Prompt Reconstruction, and we describe the various ways of accessing the monitoring results. We conclude with our experience, before and after the LHC startup, of monitoring data quality in a challenging environment.
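
    A user-defined DQM module of the kind plugged into such a framework often reduces to histogramming a monitored quantity and comparing it with a reference shape. The numpy sketch below shows that pattern; the names, the chi-square-like statistic, and the threshold are illustrative, not the AMORE API.

        import numpy as np

        def dqm_check(samples, reference_hist, bins, threshold=3.0):
            """Toy data-quality check in the spirit of a user-defined monitoring
            module: histogram the monitored quantity and flag large deviations
            from a reference shape (reference_hist must match the binning)."""
            hist, _ = np.histogram(samples, bins=bins)
            expected = reference_hist * hist.sum() / max(reference_hist.sum(), 1)
            mask = expected > 0
            chi2 = ((hist[mask] - expected[mask]) ** 2 / expected[mask]).sum()
            ndf = max(mask.sum() - 1, 1)
            ok = chi2 / ndf < threshold
            return ok, chi2 / ndf              # shifter alarm if not ok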

  8. A reliable control system for measurement on film thickness in copper chemical mechanical planarization system

    NASA Astrophysics Data System (ADS)

    Li, Hongkai; Qu, Zilian; Zhao, Qian; Tian, Fangxin; Zhao, Dewen; Meng, Yonggang; Lu, Xinchun

    2013-12-01

    In recent years, a variety of film thickness measurement techniques for copper chemical mechanical planarization (CMP) have been proposed. In this paper, the eddy-current technique is used. In the control system of the CMP tool developed in the State Key Laboratory of Tribology, the measurement subsystem has an in situ module and an off-line module. The in situ module obtains the thickness of the copper film on the wafer surface in real time and accurately judges when the CMP process should stop; this is called end-point detection. The off-line module is used for multi-point measurement after the CMP process, in order to determine the thickness of the remaining copper film. The whole control system is structured in two levels, and the physical connection between the upper and the lower level is achieved by industrial Ethernet. The process flow includes calibration and measurement, and the two modules use different algorithms. In the software development, C++ was chosen as the programming language, in combination with Qt OpenSource to design the two modules' GUIs and OPC technology to implement the communication between the two levels. In addition, a drawing function was developed relying on Matlab, enriching the software functions of the off-line module. The results show that the control system runs stably after repeated tests and extended practical operation.
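
    End-point detection as performed by the in situ module can be sketched as smoothing the eddy-current thickness readings and stopping when the film reaches its target. In the Python sketch below the target value, the window length, and the read_thickness callable are all assumptions for illustration, not the tool's actual parameters.

        from collections import deque

        TARGET_NM = 50.0        # assumed target copper thickness
        WINDOW = 8              # moving-average length in samples

        def reached_endpoint(read_thickness, max_samples=10000):
            """Poll in situ readings; True once the smoothed thickness hits target."""
            window = deque(maxlen=WINDOW)
            for _ in range(max_samples):
                window.append(read_thickness())      # one in situ measurement
                if len(window) == WINDOW and sum(window) / WINDOW <= TARGET_NM:
                    return True                      # signal the tool to stop polishing
            return False                             # end point not seen in time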

  9. A reliable control system for measurement on film thickness in copper chemical mechanical planarization system.

    PubMed

    Li, Hongkai; Qu, Zilian; Zhao, Qian; Tian, Fangxin; Zhao, Dewen; Meng, Yonggang; Lu, Xinchun

    2013-12-01

    In recent years, a variety of film thickness measurement techniques for copper chemical mechanical planarization (CMP) have been proposed. In this paper, the eddy-current technique is used. In the control system of the CMP tool developed in the State Key Laboratory of Tribology, the measurement subsystem has an in situ module and an off-line module. The in situ module obtains the thickness of the copper film on the wafer surface in real time and accurately judges when the CMP process should stop; this is called end-point detection. The off-line module is used for multi-point measurement after the CMP process, in order to determine the thickness of the remaining copper film. The whole control system is structured in two levels, and the physical connection between the upper and the lower level is achieved by industrial Ethernet. The process flow includes calibration and measurement, and the two modules use different algorithms. In the software development, C++ was chosen as the programming language, in combination with Qt OpenSource to design the two modules' GUIs and OPC technology to implement the communication between the two levels. In addition, a drawing function was developed relying on Matlab, enriching the software functions of the off-line module. The results show that the control system runs stably after repeated tests and extended practical operation.

  10. A reliable control system for measurement on film thickness in copper chemical mechanical planarization system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Hongkai; Qu, Zilian; Zhao, Qian

    In recent years, a variety of film thickness measurement techniques for copper chemical mechanical planarization (CMP) have been proposed. In this paper, the eddy-current technique is used. In the control system of the CMP tool developed in the State Key Laboratory of Tribology, the measurement subsystem has an in situ module and an off-line module. The in situ module obtains the thickness of the copper film on the wafer surface in real time and accurately judges when the CMP process should stop; this is called end-point detection. The off-line module is used for multi-point measurement after the CMP process, in order to determine the thickness of the remaining copper film. The whole control system is structured in two levels, and the physical connection between the upper and the lower level is achieved by industrial Ethernet. The process flow includes calibration and measurement, and the two modules use different algorithms. In the software development, C++ was chosen as the programming language, in combination with Qt OpenSource to design the two modules' GUIs and OPC technology to implement the communication between the two levels. In addition, a drawing function was developed relying on Matlab, enriching the software functions of the off-line module. The results show that the control system runs stably after repeated tests and extended practical operation.

  11. A Metadata Management Framework for Collaborative Review of Science Data Products

    NASA Astrophysics Data System (ADS)

    Hart, A. F.; Cinquini, L.; Mattmann, C. A.; Thompson, D. R.; Wagstaff, K.; Zimdars, P. A.; Jones, D. L.; Lazio, J.; Preston, R. A.

    2012-12-01

    Data volumes generated by modern scientific instruments often preclude archiving the complete observational record. To compensate, science teams have developed a variety of "triage" techniques for identifying data of potential scientific interest and marking it for prioritized processing or permanent storage. This may involve multiple stages of filtering with both automated and manual components operating at different timescales. A promising approach exploits a fast, fully automated first stage followed by a more reliable offline manual review of candidate events. This hybrid approach permits a 24-hour rapid real-time response while also preserving the high accuracy of manual review. To support this type of second-level validation effort, we have developed a metadata-driven framework for the collaborative review of candidate data products. The framework consists of a metadata processing pipeline and a browser-based user interface that together provide a configurable mechanism for reviewing data products via the web, and capturing the full stack of associated metadata in a robust, searchable archive. Our system heavily leverages software from the Apache Object Oriented Data Technology (OODT) project, an open source data integration framework that facilitates the construction of scalable data systems and places a heavy emphasis on the utilization of metadata to coordinate processing activities. OODT provides a suite of core data management components for file management and metadata cataloging that form the foundation for this effort. The system has been deployed at JPL in support of the V-FASTR experiment [1], a software-based radio transient detection experiment that operates commensally at the Very Long Baseline Array (VLBA), and has a science team that is geographically distributed across several countries. Daily review of automatically flagged data is a shared responsibility for the team, and is essential to keep the project within its resource constraints. We describe the development of the platform using open source software, and discuss our experience deploying the system operationally.

    [1] R. B. Wayth, W. F. Brisken, A. T. Deller, W. A. Majid, D. R. Thompson, S. J. Tingay, and K. L. Wagstaff, "V-FASTR: The VLBA Fast Radio Transients Experiment," The Astrophysical Journal, vol. 735, no. 2, p. 97, 2011.

    Acknowledgement: This effort was supported by the Jet Propulsion Laboratory, managed by the California Institute of Technology under a contract with the National Aeronautics and Space Administration.
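
    The extract-then-catalog flow described above can be pictured with a small sketch: each ingested file yields a metadata record, and the review interface is a query over the catalog. This is a generic Python illustration with an in-memory catalog, not the Apache OODT components themselves; the metadata keys are invented.

        import hashlib
        import os
        import time

        def extract_metadata(path):
            """Crawl-time metadata extraction (generic, not the OODT CAS API)."""
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            return {
                "Filename": os.path.basename(path),
                "FileSize": os.path.getsize(path),
                "SHA256": digest,
                "IngestTime": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                "ReviewStatus": "unreviewed",   # drives the second-level review
            }

        catalog = []                             # stand-in for a searchable archive

        def ingest(path):
            record = extract_metadata(path)
            catalog.append(record)
            return record

        def pending_review():
            """What a browser-based UI would list for the distributed team."""
            return [r for r in catalog if r["ReviewStatus"] == "unreviewed"]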

  12. IceCube

    Science.gov Websites

    written the portions of the offline software and simulations that involve the electronics and calibrations ... responsible for the pieces of the detector calibration and simulation that are connected to the electronics ... electronics that process and capture the signal produced by Cerenkov light in the photomultiplier tubes.

  13. Software-based high-level synthesis design of FPGA beamformers for synthetic aperture imaging.

    PubMed

    Amaro, Joao; Yiu, Billy Y S; Falcao, Gabriel; Gomes, Marco A C; Yu, Alfred C H

    2015-05-01

    Field-programmable gate arrays (FPGAs) can potentially be configured as beamforming platforms for ultrasound imaging, but a long design time and skilled expertise in hardware programming are typically required. In this article, we present a novel approach to the efficient design of FPGA beamformers for synthetic aperture (SA) imaging via the use of software-based high-level synthesis techniques. Software kernels (coded in OpenCL) were first developed to handle SA beamforming operations stage-wise, and their corresponding FPGA logic circuitry was emulated through a high-level synthesis framework. After design space analysis, the fine-tuned OpenCL kernels were compiled into register transfer level descriptions to configure an FPGA as a beamformer module. The processing performance of this beamformer was assessed through a series of offline emulation experiments that sought to derive beamformed images from SA channel-domain raw data (40-MHz sampling rate, 12-bit resolution). With 128 channels, our FPGA-based SA beamformer can achieve a processing throughput of 41 frames per second (fps), or 3.44 × 10^8 pixels per second for a frame size of 256 × 256 pixels, at 31.5 W power consumption (1.30 fps/W power efficiency). It utilized 86.9% of the FPGA fabric and operated at a 196.5 MHz clock frequency (after optimization). Based on these findings, we anticipate that FPGA and high-level synthesis can together foster rapid prototyping of real-time ultrasound processor modules at low power consumption budgets.
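
    The beamforming operations those kernels implement boil down to delay-and-sum: for every image pixel, sum each channel's sample at the round-trip propagation delay. A reference numpy version is sketched below; the geometry and constants are illustrative, and an FPGA implementation would parallelize the pixel loops.

        import numpy as np

        C = 1540.0            # speed of sound in tissue, m/s (assumed)
        FS = 40e6             # sampling rate, Hz (as in the raw data above)

        def delay_and_sum(rf, elem_x, src_x, src_z, px, pz):
            """Reference delay-and-sum for one synthetic-aperture emission.

            rf      -- (n_channels, n_samples) raw channel data
            elem_x  -- x positions of the receive elements (m)
            src_x, src_z -- virtual source position of this emission (m)
            px, pz  -- image pixel coordinates (m), 1-D arrays
            """
            image = np.zeros((pz.size, px.size))
            for iz, z in enumerate(pz):
                for ix, x in enumerate(px):
                    tx = np.hypot(x - src_x, z - src_z)     # emission path
                    rx = np.hypot(x - elem_x, z)            # per-channel return
                    idx = np.round((tx + rx) / C * FS).astype(int)
                    valid = idx < rf.shape[1]
                    image[iz, ix] = rf[np.flatnonzero(valid), idx[valid]].sum()
            return image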

  14. Tutoring and Multi-Agent Systems: Modeling from Experiences

    ERIC Educational Resources Information Center

    Bennane, Abdellah

    2010-01-01

    Tutoring systems are becoming complex and offer a variety of pedagogical software: course modules, exercises, simulators, online or offline systems, for single or multiple users. This complexity motivates new forms of and approaches to design and modelling. Studies and research in this field introduce emergent concepts that allow the…

  15. Optimizing CMS build infrastructure via Apache Mesos

    DOE PAGES

    Abdurachmanov, David; Degano, Alessandro; Elmer, Peter; ...

    2015-12-23

    The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general-use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. We present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos-enabled cluster and how this resulted in better resource usage, higher peak performance, and lower latency thanks to the dynamic scheduling capabilities of Mesos.

  16. Optimizing CMS build infrastructure via Apache Mesos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdurachmanov, David; Degano, Alessandro; Elmer, Peter

    The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general-use open-source code. A critical ingredient in the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. Lastly, we present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster, and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.

  17. Electrons and photons at High Level Trigger in CMS for Run II

    NASA Astrophysics Data System (ADS)

    Anuar, Afiq A.

    2015-12-01

    The CMS experiment has been designed with a two-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increase in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. New approaches have been studied to keep the HLT output rate manageable while maintaining thresholds low enough to cover physics analyses. The strategy mainly relies on porting online the ingredients that have been successfully applied in the offline reconstruction, thus allowing the HLT selection to move closer to the offline cuts. Improvements in the HLT electron and photon definitions will be presented, focusing in particular on the updated clustering algorithm and energy calibration procedure, the new particle-flow-based isolation approach and pileup mitigation techniques, and the electron-dedicated track-fitting algorithm based on the Gaussian Sum Filter.

  18. ATLAS@AWS

    NASA Astrophysics Data System (ADS)

    Gehrcke, Jan-Philip; Kluth, Stefan; Stonjek, Stefan

    2010-04-01

    We show how the ATLAS offline software is ported to the Amazon Elastic Compute Cloud (EC2). We prepare an Amazon Machine Image (AMI) on the basis of the standard ATLAS platform, Scientific Linux 4 (SL4). An instance of the SL4 AMI is then started on EC2, and we install and validate a recent release of the ATLAS offline software distribution kit. The installed software is archived as an image on the Amazon Simple Storage Service (S3) and can be quickly retrieved and connected to new SL4 AMI instances using the Amazon Elastic Block Store (EBS). ATLAS jobs can then configure against the release kit using the ATLAS configuration management tool (cmt) in the standard way. The output of jobs is exported to S3 before the SL4 AMI is terminated. Job status information is transferred to the Amazon SimpleDB service. The whole process of launching instances of our AMI, starting, monitoring and stopping jobs, and retrieving job output from S3 is controlled from a client machine using Python scripts that implement the Amazon EC2/S3 API via the boto library, working together with small scripts embedded in the SL4 AMI. We report our experience with setting up and operating the system using standard ATLAS job transforms.
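
    The paper's scripts used the original boto library; as a minimal sketch of the same control flow with today's boto3, where the region, AMI ID, instance type, bucket and key names are placeholder assumptions:

```python
import boto3

# Placeholder region, AMI ID, instance type, bucket and key names:
# assumptions for illustration, not values from the paper.
ec2 = boto3.client("ec2", region_name="us-east-1")
s3 = boto3.client("s3")

# Launch one worker from the prepared machine image.
resp = ec2.run_instances(ImageId="ami-xxxxxxxx", InstanceType="m5.large",
                         MinCount=1, MaxCount=1)
instance_id = resp["Instances"][0]["InstanceId"]

# ... run and monitor the job on the instance ...

# Export the job output to S3, then terminate the instance.
s3.upload_file("job_output.root", "my-results-bucket", "jobs/job_output.root")
ec2.terminate_instances(InstanceIds=[instance_id])
```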

  19. Open source hardware and software platform for robotics and artificial intelligence applications

    NASA Astrophysics Data System (ADS)

    Liang, S. Ng; Tan, K. O.; Lai Clement, T. H.; Ng, S. K.; Mohammed, A. H. Ali; Mailah, Musa; Azhar Yussof, Wan; Hamedon, Zamzuri; Yussof, Zulkifli

    2016-02-01

    Recent developments in open source hardware and software platforms (Android, Arduino, Linux, OpenCV, etc.) have enabled rapid development of previously expensive and sophisticated systems within a lower budget and with flatter learning curves for developers. Using these platforms, we designed and developed a Java-based 3D robotic simulation system with a graph database, integrated in online and offline modes with an Android-Arduino based rubbish-picking remote-control car. The combination of open source hardware and software created a flexible and expandable platform for further developments in the future, both in software and hardware, in particular in combination with the graph database for artificial intelligence, as well as more sophisticated hardware such as legged or humanoid robots.

  20. pySPACE—a signal processing and classification environment in Python

    PubMed Central

    Krell, Mario M.; Straube, Sirko; Seeland, Anett; Wöhrle, Hendrik; Teiwes, Johannes; Metzen, Jan H.; Kirchner, Elsa A.; Kirchner, Frank

    2013-01-01

    In neuroscience, large amounts of data are recorded to provide insights into cerebral information processing and function. The successful extraction of the relevant signals becomes more and more challenging due to increasing complexities in acquisition techniques and in the questions addressed. Here, automated signal processing and machine learning tools can help to process the data, e.g., to separate signal and noise. With the presented software pySPACE (http://pyspace.github.io/pyspace), signal processing algorithms can be compared and applied automatically to time series data, either with the aim of finding a suitable preprocessing, or of training supervised algorithms to classify the data. pySPACE was originally built to process multi-sensor windowed time series data, like event-related potentials from the electroencephalogram (EEG). The software provides automated data handling, distributed processing, modular build-up of signal processing chains, and tools for visualization and performance evaluation. Included in the software are various algorithms like temporal and spatial filters, feature generation and selection, classification algorithms, and evaluation schemes. Further, interfaces to other signal processing tools are provided and, since pySPACE is a modular framework, it can be extended with new algorithms according to individual needs. In the presented work, the structural hierarchies are described. It is illustrated how users and developers can interface the software and execute it in offline and online modes. Configuration of pySPACE is realized with the YAML format, so that programming skills are not mandatory for usage. The concept of pySPACE is to have one comprehensive tool that can be used to perform complete signal processing and classification tasks. It further allows users to define their own algorithms, or to integrate and use existing libraries. PMID:24399965
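
    The idea of declaring a processing chain as YAML data can be sketched in a few lines. The node names and keys below are invented for illustration (not pySPACE's documented schema); the point is that the chain is configuration, instantiated without programming.

```python
import yaml  # PyYAML

# Hypothetical node-chain specification; node names and keys are invented
# for illustration and are not pySPACE's documented schema.
SPEC = """
- node: Detrend
- node: BandPass
  params: {low_hz: 0.4, high_hz: 30.0}
"""

class Detrend:
    def apply(self, x):
        mean = sum(x) / len(x)
        return [v - mean for v in x]           # remove the mean

class BandPass:
    def __init__(self, low_hz, high_hz):
        self.low_hz, self.high_hz = low_hz, high_hz
    def apply(self, x):
        return x                               # placeholder for a real filter

REGISTRY = {"Detrend": Detrend, "BandPass": BandPass}

def build_chain(spec_text):
    """Instantiate processing nodes from a YAML list of node descriptions."""
    return [REGISTRY[e["node"]](**e.get("params", {}))
            for e in yaml.safe_load(spec_text)]

data = [1.0, 2.0, 3.0]
for node in build_chain(SPEC):
    data = node.apply(data)
print(data)
```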

  1. pySPACE-a signal processing and classification environment in Python.

    PubMed

    Krell, Mario M; Straube, Sirko; Seeland, Anett; Wöhrle, Hendrik; Teiwes, Johannes; Metzen, Jan H; Kirchner, Elsa A; Kirchner, Frank

    2013-01-01

    In neuroscience, large amounts of data are recorded to provide insights into cerebral information processing and function. The successful extraction of the relevant signals becomes more and more challenging due to increasing complexities in acquisition techniques and in the questions addressed. Here, automated signal processing and machine learning tools can help to process the data, e.g., to separate signal and noise. With the presented software pySPACE (http://pyspace.github.io/pyspace), signal processing algorithms can be compared and applied automatically to time series data, either with the aim of finding a suitable preprocessing, or of training supervised algorithms to classify the data. pySPACE was originally built to process multi-sensor windowed time series data, like event-related potentials from the electroencephalogram (EEG). The software provides automated data handling, distributed processing, modular build-up of signal processing chains, and tools for visualization and performance evaluation. Included in the software are various algorithms like temporal and spatial filters, feature generation and selection, classification algorithms, and evaluation schemes. Further, interfaces to other signal processing tools are provided and, since pySPACE is a modular framework, it can be extended with new algorithms according to individual needs. In the presented work, the structural hierarchies are described. It is illustrated how users and developers can interface the software and execute it in offline and online modes. Configuration of pySPACE is realized with the YAML format, so that programming skills are not mandatory for usage. The concept of pySPACE is to have one comprehensive tool that can be used to perform complete signal processing and classification tasks. It further allows users to define their own algorithms, or to integrate and use existing libraries.

  2. Development of High Level Trigger Software for Belle II at SuperKEKB

    NASA Astrophysics Data System (ADS)

    Lee, S.; Itoh, R.; Katayama, N.; Mineo, S.

    2011-12-01

    The Belle collaboration has been trying for 10 years to reveal the mystery of the current matter-dominated universe. However, much greater statistics are required to search for New Physics through quantum loops in decays of B mesons. In order to increase the experimental sensitivity, the next-generation B-factory, SuperKEKB, is planned. The design luminosity of SuperKEKB is 8 × 10^35 cm⁻²s⁻¹, a factor 40 above KEKB's peak luminosity. At this high luminosity, the level 1 trigger of the Belle II experiment will stream events of 300 kB size at a 30 kHz rate. To reduce the data flow to a manageable level, a high-level trigger (HLT) is needed, which will be implemented using the full offline reconstruction on a large-scale PC farm. There, physics-level event selection is performed, reducing the event rate by a factor of ~10 to a few kHz. To execute the reconstruction, the HLT uses the offline event processing framework basf2, which has parallel processing capabilities used for multi-core processing and PC clusters. The event data handling in the HLT is fully object-oriented, utilizing ROOT I/O with a new method of object passing over UNIX socket connections. Also under consideration is the use of the HLT output to reduce the pixel detector event size by saving only hits associated with a track, resulting in an additional data reduction of ~100 for the pixel detector. In this contribution, the design and implementation of the Belle II HLT are presented together with a report of preliminary testing results.
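
    As a language-neutral illustration of the object-passing pattern (not basf2's actual ROOT I/O implementation), the same idea can be shown with Python's socketpair and pickle: serialized event objects travel length-prefixed over a UNIX-domain socket between an input process and a worker.

```python
import os, pickle, socket, struct

def send_obj(sock, obj):
    """Serialize an object and send it length-prefixed over the socket."""
    payload = pickle.dumps(obj)
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_obj(sock):
    """Receive one length-prefixed serialized object from the socket."""
    size = struct.unpack("!I", sock.recv(4))[0]
    buf = b""
    while len(buf) < size:
        buf += sock.recv(size - len(buf))
    return pickle.loads(buf)

parent, child = socket.socketpair()            # UNIX-domain socket pair
if os.fork() == 0:                             # worker: process one "event"
    event = recv_obj(child)
    event["selected"] = True
    send_obj(child, event)
    os._exit(0)
else:                                          # input process
    send_obj(parent, {"event_id": 1, "hits": [0.1, 0.2]})
    print(recv_obj(parent))                    # the processed event object
```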

  3. A wellness software platform with smart wearable devices and the demonstration report for personal wellness management

    NASA Astrophysics Data System (ADS)

    Kang, Won-Seok; Son, Chang-Sik; Lee, Sangho; Choi, Rock-Hyun; Ha, Yeong-Mi

    2017-07-01

    In this paper, we introduce WellnessHumanCare, a semi-automatic wellness management software platform that provides complex wellness data acquisition (mental, physical and environmental) with smart wearable devices, complex wellness condition analysis, privacy-aware online/offline recommendation, and real-time monitoring apps (smartphone- and web-based). To show the efficiency of WellnessHumanCare, we demonstrated a wellness management service over 3 months with 79 participants, an experimental group of 39 at H Corp. and a control group of 40 at K Corp., Korea.

  4. Software Authority Transition through Multiple Distributors

    PubMed Central

    Han, Kyusunk; Shon, Taeshik

    2014-01-01

    The rapid growth in the use of smartphones and tablets has changed the software distribution ecosystem. The trend today is to purchase software through application stores rather than from traditional offline markets. Smartphone and tablet users can install applications easily by purchasing from the online store deployed in their device. Several systems, such as Android or PC-based OS units, allow users to install software from multiple sources. Such openness, however, can promote serious threats, including malware and illegal usage. In order to prevent such threats, several stores use online authentication techniques. These methods can, however, also present a problem whereby even licensed users cannot use their purchased application. In this paper, we discuss these issues and provide an authentication method that will make purchased applications available to the registered user at all times. PMID:25143971
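
    As a generic illustration of store-issued license checking (not the authors' scheme), a token signed at purchase time can be verified on the device so a purchased application stays usable without re-contacting the store; the key and identifiers below are hypothetical.

```python
import hmac, hashlib

STORE_KEY = b"store-secret"  # hypothetical key held by the license issuer

def issue_license(user_id: str, app_id: str) -> str:
    """Store side: bind a purchase to a user with an HMAC tag."""
    msg = f"{user_id}:{app_id}".encode()
    tag = hmac.new(STORE_KEY, msg, hashlib.sha256).hexdigest()
    return msg.hex() + "." + tag

def verify_license(token: str, user_id: str, app_id: str) -> bool:
    """Device side: check the tag without contacting the store again."""
    msg_hex, tag = token.split(".")
    expected = hmac.new(STORE_KEY, bytes.fromhex(msg_hex),
                        hashlib.sha256).hexdigest()
    return (bytes.fromhex(msg_hex) == f"{user_id}:{app_id}".encode()
            and hmac.compare_digest(tag, expected))

token = issue_license("alice", "com.example.app")
print(verify_license(token, "alice", "com.example.app"))   # True
```

    A production scheme would rather use asymmetric signatures so the device holds no shared secret; the sketch only shows why a locally verifiable token keeps a purchased application usable at all times.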

  5. Software authority transition through multiple distributors.

    PubMed

    Han, Kyusunk; Shon, Taeshik

    2014-01-01

    The rapid growth in the use of smartphones and tablets has changed the software distribution ecosystem. The trend today is to purchase software through application stores rather than from traditional offline markets. Smartphone and tablet users can install applications easily by purchasing from the online store deployed in their device. Several systems, such as Android or PC-based OS units, allow users to install software from multiple sources. Such openness, however, can promote serious threats, including malware and illegal usage. In order to prevent such threats, several stores use online authentication techniques. These methods can, however, also present a problem whereby even licensed users cannot use their purchased application. In this paper, we discuss these issues and provide an authentication method that will make purchased applications available to the registered user at all times.

  6. Next-Generation Bibliographic Manager: An Interview with Trevor Owens

    ERIC Educational Resources Information Center

    Morrison, James L.; Owens, Trevor

    2008-01-01

    James Morrison's interview with Trevor Owens explores Zotero, a free, open-source bibliographic tool that works as a Firefox plug-in. Previous bibliographic software, such as EndNote or Refworks, worked either online or offline to collect references and citations. Zotero leverages the power of the browser to allow users to work either online or…

  7. An Unsupervised Online Spike-Sorting Framework.

    PubMed

    Knieling, Simeon; Sridharan, Kousik S; Belardinelli, Paolo; Naros, Georgios; Weiss, Daniel; Mormann, Florian; Gharabaghi, Alireza

    2016-08-01

    Extracellular neuronal microelectrode recordings can include action potentials from multiple neurons. To separate spikes from different neurons, they can be sorted according to their shape, a procedure referred to as spike-sorting. Several algorithms have been reported to solve this task. However, when clustering outcomes are unsatisfactory, most of them are difficult to adjust to achieve the desired results. We present an online spike-sorting framework that uses feature normalization and weighting to maximize the distinctiveness between different spike shapes. Furthermore, multiple criteria are applied to either facilitate or prevent cluster fusion, thereby enabling experimenters to fine-tune the sorting process. We compare our method to established unsupervised offline (Wave_Clus (WC)) and online (OSort (OS)) algorithms by examining their performance in sorting various test datasets using two different scoring systems (AMI and the Adamos metric). Furthermore, we evaluate sorting capabilities on intra-operative recordings using established quality metrics. Compared to WC and OS, our algorithm achieved comparable or higher scores on average and produced more convincing sorting results for intra-operative datasets. Thus, the presented framework is suitable for both online and offline analysis and could substantially improve the quality of microelectrode-based data evaluation for research and clinical application.
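
    The normalization-and-weighting idea can be sketched briefly; the features and weight values below are illustrative, not the framework's actual parameters.

```python
import numpy as np

def normalize_and_weight(features, weights):
    """Z-score each feature column, then scale by its weight so that the
    most discriminative features dominate the clustering distance."""
    mu = features.mean(axis=0)
    sd = features.std(axis=0) + 1e-12          # guard against division by zero
    return (features - mu) / sd * weights

rng = np.random.default_rng(0)
spikes = rng.normal(size=(500, 3))             # e.g. amplitude, width, energy
weighted = normalize_and_weight(spikes, np.array([2.0, 1.0, 0.5]))
# Distances in this space now emphasize amplitude over energy:
print(np.linalg.norm(weighted[0] - weighted[1]))
```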

  8. CAT 2 - An improved version of Cryogenic Analysis Tools for online and offline monitoring and analysis of large size cryostats

    NASA Astrophysics Data System (ADS)

    Pagliarone, C. E.; Uttaro, S.; Cappelli, L.; Fallone, M.; Kartal, S.

    2017-02-01

    CAT, Cryogenic Analysis Tools, is a software package developed using the LabVIEW and ROOT environments to analyze the performance of large size cryostats, where many parameters, inputs, and control variables need to be acquired and studied at the same time. The present paper describes how CAT works and what the main improvements in the new version, CAT 2, are. New Graphical User Interfaces have been developed to make the full package more user-friendly, and a process of resource optimization has been carried out. The offline analysis of the full cryostat performance is available both through the ROOT command-line interface and by using the new graphical interfaces.

  9. An automatic speech recognition system with speaker-independent identification support

    NASA Astrophysics Data System (ADS)

    Caranica, Alexandru; Burileanu, Corneliu

    2015-02-01

    The novelty of this work lies in the application of an open source research software toolkit (CMU Sphinx) to train, build and evaluate a speech recognition system, with speaker-independent support, for voice-controlled hardware applications. Moreover, we propose to use the trained acoustic model to successfully decode offline voice commands on embedded hardware, such as an ARMv6 low-cost SoC, the Raspberry Pi. This type of single-board computer, mainly used for educational and research activities, can serve as a proof-of-concept software and hardware stack for low-cost voice automation systems.
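
    One common route to such offline decoding (an assumption here, not a detail from the paper) is driving CMU's pocketsphinx_continuous tool from a script with the trained models; the model and file paths below are placeholders.

```python
import subprocess

# Decode one WAV file offline with CMU's pocketsphinx_continuous tool;
# the model and file paths are placeholders for a trained setup.
cmd = ["pocketsphinx_continuous",
       "-infile", "command.wav",
       "-hmm", "model/acoustic",
       "-lm", "model/language.lm",
       "-dict", "model/words.dic"]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)   # the recognized command hypothesis
```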

  10. OPC model data collection for 45-nm technology node using automatic CD-SEM offline recipe creation

    NASA Astrophysics Data System (ADS)

    Fischer, Daniel; Talbi, Mohamed; Wei, Alex; Menadeva, Ovadya; Cornell, Roger

    2007-03-01

    Optical and Process Correction at the 45nm node requires an ever higher level of characterization. The greater complexity drives a need for automation of the metrology process, allowing more efficient, accurate and effective use of engineering resources and metrology tool time in the fab, and helping to satisfy what seems an insatiable appetite for data by lithographers and modelers charged with the development of 45nm and 32nm processes. The scope of the work referenced here is a 45nm design cycle "full-loop automation", starting with the gds-formatted target design layout and ending with the necessary feedback of one- and two-dimensional printed wafer metrology. In this paper the authors consider the key elements of software, algorithmic framework and Critical Dimension Scanning Electron Microscope (CD-SEM) functionality necessary to automate its recipe creation. We evaluate specific problems with the methodology of the former art, "on-tool on-wafer" recipe construction, and discuss how the implementation of design-based recipe generation improves upon the overall metrology process. Individual target-by-target construction, the use of a one-pattern-recognition-template-fits-all approach, blind navigation to the desired measurement feature, lengthy on-tool recipe-construction sessions, and the limited ability to determine measurement quality in the resultant data set are each discussed with respect to how the state-of-the-art Design Based Metrology (DBM) approach addresses them. The offline-created recipes have shown pattern recognition success rates of up to 100% and measurement success rates of up to 93% for line/space as well as for 2D minimum/maximum measurements, without manual assists during measurement.

  11. Online, offline, realtime: recent developments in industrial photogrammetry

    NASA Astrophysics Data System (ADS)

    Boesemann, Werner

    2003-01-01

    In recent years industrial photogrammetry has emerged from a highly specialized niche technology to a well established tool in industrial coordinate measurement applications, with numerous installations in a significantly growing market of flexible and portable optical measurement systems. This is due to the development of powerful but affordable video and computer technology. The increasing industrial requirements for accuracy, speed, robustness and ease of use of these systems, together with a demand for the highest possible degree of automation, have forced universities and system manufacturers to develop hard- and software solutions to meet these requirements. The presentation will show the latest trends in hardware development, especially new-generation digital and/or intelligent cameras, aspects of image engineering like the use of controlled illumination or projection technologies, and algorithmic and software aspects like automation strategies or new camera models. The basic qualities of digital photogrammetry (portability and flexibility on the one hand, fully automated quality control on the other) sometimes lead to certain conflicts in the design of measurement systems for different online, offline, or real-time solutions. The presentation will further show how these tools and methods are combined in different configurations to cover the still growing demands of industrial end-users.

  12. LogScope

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Smith, Margaret H.; Barringer, Howard; Groce, Alex

    2012-01-01

    LogScope is a software package for analyzing log files. The intended use is offline post-processing of such logs after the execution of the system under test. LogScope can, however, in principle also be used to monitor systems online during their execution. Logs are checked against requirements formulated as monitors expressed in a rule-based specification language. This language has similarities to a state machine language, but is more expressive, for example in its handling of data parameters. The specification language is user-friendly, simple, and yet expressive enough for many practical scenarios. The LogScope software was initially developed specifically to assist in testing JPL's Mars Science Laboratory (MSL) flight software, but it is very generic in nature and can be applied to any application that produces some form of logging information (which almost any software does).
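
    The flavor of such a data-parameterized, state-machine-like monitor can be shown with a toy rule in plain Python; the syntax and event names are invented for illustration and are not LogScope's specification language.

```python
# Toy monitor: every COMMAND(name) must eventually be followed by a
# SUCCESS(name) -- a state machine parameterized by the 'name' data field.
def check(log):
    pending = set()                        # commands still awaiting success
    for event, name in log:
        if event == "COMMAND":
            pending.add(name)
        elif event == "SUCCESS":
            pending.discard(name)
    return [f"no SUCCESS for COMMAND {n}" for n in sorted(pending)]

log = [("COMMAND", "deploy"), ("SUCCESS", "deploy"), ("COMMAND", "arm")]
print(check(log))                          # ['no SUCCESS for COMMAND arm']
```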

  13. Framework Programmable Platform for the advanced software development workstation: Framework processor design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, Wes; Sanders, Les

    1991-01-01

    The design of the Framework Processor (FP) component of the Framework Programmable Software Development Platform (FPP) is described. The FPP is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by the model, this Framework Processor will take advantage of an integrated operating environment to provide automated support for the management and control of the software development process so that costly mistakes during the development phase can be eliminated.

  14. Framework for Development of Object-Oriented Software

    NASA Technical Reports Server (NTRS)

    Perez-Poveda, Gus; Ciavarella, Tony; Nieten, Dan

    2004-01-01

    The Real-Time Control (RTC) Application Framework is a high-level software framework written in C++ that supports the rapid design and implementation of object-oriented application programs. This framework provides built-in functionality that solves common software development problems within distributed client-server, multi-threaded, and embedded programming environments. When using the RTC Framework to develop software for a specific domain, designers and implementers can focus entirely on the details of the domain-specific software rather than on creating custom solutions, utilities, and frameworks for the complexities of the programming environment. The RTC Framework was originally developed as part of a Space Shuttle Launch Processing System (LPS) replacement project called Checkout and Launch Control System (CLCS). As a result of the framework's development, CLCS software development time was reduced by 66 percent. The framework is generic enough for developing applications outside of the launch-processing-system domain. Other applicable high-level domains include command and control systems and simulation/training systems.

  15. ENGAGE: A Game Based Learning and Problem Solving Framework

    DTIC Science & Technology

    2012-08-15

    The multiplayer card game Creature Capture now supports an offline multiplayer mode (sharing a single computer), in response to feedback from teachers that a… The Planetopia overworld will be ready for use by a number of physical schools as well as integrated into multiple online teaching resources. The games will be…

  16. Framework Programmable Platform for the Advanced Software Development Workstation (FPP/ASDW). Demonstration framework document. Volume 1: Concepts and activity descriptions

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Dewitte, Paul S.; Crump, John W.; Ackley, Keith A.

    1992-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at effectively combining tool and data integration mechanisms with a model of the software development process to provide an intelligent integrated software development environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The Advanced Software Development Workstation (ASDW) program is conducting research into development of advanced technologies for Computer Aided Software Engineering (CASE).

  17. A portable platform to collect and review behavioral data simultaneously with neurophysiological signals.

    PubMed

    Jiang, Tianxiao; Siddiqui, Hasan; Ray, Shruti; Asman, Priscella; Ozturk, Musa; Ince, Nuri F

    2017-07-01

    This paper presents a portable platform to collect and review behavioral data simultaneously with neurophysiological signals. The whole system comprises four parts: a sensor data acquisition interface, a socket server for real-time data streaming, a Simulink system for real-time processing, and an offline data review and analysis toolbox. A low-cost microcontroller is used to acquire data from external sensors such as an accelerometer and a hand dynamometer. The microcontroller transfers the data either directly through USB or wirelessly through a Bluetooth module to a data server written in C++ for the MS Windows OS. The data server also interfaces with a digital glove and captures HD video from a webcam. The acquired sensor data are streamed under the User Datagram Protocol (UDP) to other applications such as Simulink/Matlab for real-time analysis and recording. Neurophysiological signals such as electroencephalography (EEG), electrocorticography (ECoG) and local field potential (LFP) recordings can be collected simultaneously in Simulink and fused with the behavioral data. In addition, we developed a customized Matlab Graphical User Interface (GUI) software to review, annotate and analyze the data offline. The software provides a fast, user-friendly data visualization environment with a synchronized video playback feature. The software is also capable of reviewing long-term neural recordings. Other featured functions such as fast preprocessing with multithreaded filters, annotation, montage selection, power spectral density (PSD) estimation, time-frequency maps and spatial spectral maps are also implemented.
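
    A minimal sketch of the sending side of such UDP streaming; the port number and packet layout are assumptions, not the platform's actual protocol.

```python
import socket, struct, time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
DEST = ("127.0.0.1", 25000)    # placeholder address of the receiving UDP block

def stream_sample(t, ax, ay, az, force):
    """Pack one timestamped sensor sample and send it as a UDP datagram."""
    sock.sendto(struct.pack("<5d", t, ax, ay, az, force), DEST)

for _ in range(100):           # e.g. accelerometer + grip force at 100 Hz
    stream_sample(time.time(), 0.0, 0.0, 9.81, 12.5)
    time.sleep(0.01)
```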

  18. Real-Time Analysis of Electrocardiographic Data for Heart Rate Turbulence

    NASA Technical Reports Server (NTRS)

    Greco, E. Carl, Jr.

    2005-01-01

    Episodes of ventricular ectopy (premature ventricular contractions, PVCs) have been reported in several astronauts and cosmonauts during space flight. Indeed, the "Occurrence of Serious Cardiac Dysrhythmias" is now NASA's #1 priority critical path risk factor in the cardiovascular area that could jeopardize a mission as well as the health and welfare of the astronaut. Epidemiological, experimental and clinical observations suggest that severe autonomic dysfunction and/or transient cardiac ischemia can initiate potentially lethal ventricular arrhythmias. On earth, Heart Rate Turbulence (HRT) in response to PVCs has been shown to provide not only an index of baroreflex sensitivity (BRS), but also more importantly, an index of the propensity for lethal ventricular arrhythmia. An HRT procedure integrated into the existing advanced electrocardiographic system under development in JSC's Human Adaptation and Countermeasures Office was developed to provide a system for assessment of PVCs in a real-time monitoring or offline (play-back) scenario. The offline heart rate turbulence software program that was designed in the summer of 2003 was refined and modified for "close to" real-time results. In addition, assistance was provided with the continued development of the real-time heart rate variability software program. These programs should prove useful in evaluating the risk for arrhythmias in astronauts who do and who do not have premature ventricular contractions, respectively. The software developed for these projects has not been included in this report. Please contact Dr. Todd Schlegel for information on acquiring a specific program.

  19. Operating System Support for Shared Hardware Data Structures

    DTIC Science & Technology

    2013-01-31

    Carbon [73] uses hardware queues to improve fine-grained multitasking for Recognition, Mining, and Synthesis. Compared to software approaches…web transaction processing, data mining, and multimedia. Early work in database processors [114, 96, 79, 111] reduces the costs of relational database…assignment can be solved statically or dynamically. Static assignment determines offline which data structures are assigned to use HWDS resources and at

  20. Improving Memory Error Handling Using Linux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlton, Michael Andrew; Blanchard, Sean P.; Debardeleben, Nathan A.

    As supercomputers continue to get faster and more powerful, they will also have more nodes. If nothing is done, the amount of memory in supercomputer clusters will soon grow large enough that memory failures will be unmanageable to deal with by manually replacing memory DIMMs. "Improving Memory Error Handling Using Linux" is a process-oriented method to solve this problem by using the Linux kernel to disable (offline) faulty memory pages containing bad addresses, preventing them from being used again by a process. The process of offlining memory pages simplifies error handling and reduces both the hardware and manpower costs required to run Los Alamos National Laboratory (LANL) clusters. This process will be necessary for the future of supercomputing to allow the development of exascale computers: without memory error handling, it will not be feasible to manually replace the number of DIMMs that will fail daily on a machine consisting of 32-128 petabytes of memory. Testing reveals that the process of offlining memory pages works and is relatively simple to use. As more testing is conducted, the entire process will be automated within the high-performance computing (HPC) monitoring software, Zenoss, at LANL.
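
    The kernel exposes this capability per memory block through the memory-hotplug sysfs interface; a sketch of offlining one block follows (the block number is illustrative, and which block to offline depends on the faulty address).

```python
from pathlib import Path

def offline_memory_block(block: int) -> None:
    """Ask the kernel to offline one memory block through the memory-hotplug
    sysfs interface (requires root and a kernel built with memory hotplug)."""
    Path(f"/sys/devices/system/memory/memory{block}/state").write_text("offline")

# Example: take memory block 42 out of use after repeated errors there.
# offline_memory_block(42)
```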

  1. BESIII physical offline data analysis on virtualization platform

    NASA Astrophysics Data System (ADS)

    Huang, Q.; Li, H.; Kan, B.; Shi, J.; Lei, X.

    2015-12-01

    In this contribution we present ongoing work that aims to benefit the BESIII computing system with the higher resource utilization and more efficient job operations brought by cloud and virtualization technology, using OpenStack and KVM. We begin with the architecture of the BESIII offline software to understand how it works. We mainly report on the KVM performance evaluation and optimization across various hardware and kernel factors. Experimental results show that the CPU performance penalty of KVM can be decreased to approximately 3%. In addition, a performance comparison between KVM and physical machines in terms of CPU, disk I/O and network I/O is also presented. Finally, we present our development work, an adaptive cloud scheduler, which allocates and reclaims VMs dynamically according to the status of the TORQUE queue and the size of the resource pool, to improve resource utilization and job processing efficiency.
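
    A minimal sketch of such an adaptive scheduling loop; the queue-inspection and cloud calls are hypothetical stand-ins for the TORQUE and OpenStack APIs, not the authors' scheduler.

```python
import time

# Hypothetical stand-ins for TORQUE queue inspection and OpenStack calls.
def queued_jobs() -> int: return 0
def running_vms() -> list: return []
def idle_vms() -> list: return []
def boot_vm() -> None: print("boot VM")
def delete_vm(vm) -> None: print("delete", vm)

MAX_POOL = 50   # upper bound of the resource pool

def schedule(poll_s=60):
    """Grow the VM pool while jobs are queued; reclaim VMs when they idle."""
    while True:
        if queued_jobs() > 0 and len(running_vms()) < MAX_POOL:
            boot_vm()                       # allocate: demand exceeds supply
        else:
            for vm in idle_vms():
                delete_vm(vm)               # reclaim: return capacity to pool
        time.sleep(poll_s)

# schedule()  # run the loop
```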

  2. The third level trigger and output event unit of the UA1 data-acquisition system

    NASA Astrophysics Data System (ADS)

    Cittolin, S.; Demoulin, M.; Fucci, A.; Haynes, W.; Martin, B.; Porte, J. P.; Sphicas, P.

    1989-12-01

    The upgraded UA1 experiment utilizes twelve 3081/E emulators for its third-level trigger system. The system is interfaced to VME and is controlled by 68000-microprocessor VME boards on the input and output. The output controller communicates with an IBM 9375 mainframe via the CERN-IBM developed VICI interface. The events selected by the emulators are output on IBM 3480 cassettes. The user interface to this system is based on a series of Macintosh personal computers connected to the VME bus. These Macs are also used for developing software for the emulators and for monitoring the entire system. The same configuration has also been used for offline event reconstruction. A description of the system, together with details of both the online and offline modes of operation and an evaluation of its performance, is presented.

  3. Improved Software to Browse the Serial Medical Images for Learning

    PubMed Central

    2017-01-01

    The thousands of serial images used for medical pedagogy cannot be included in a printed book; they also cannot be efficiently handled by ordinary image viewer software. The purpose of this study was to provide browsing software to grasp serial medical images efficiently. The primary function of the newly programmed software was to select images using 3 types of interfaces: buttons or a horizontal scroll bar, a vertical scroll bar, and a checkbox. The secondary function was to show the names of the structures that had been outlined on the images. To confirm the functions of the software, 3 different types of image data of cadavers (sectioned and outlined images, volume models of the stomach, and photos of the dissected knees) were inputted. The browsing software was downloadable for free from the homepage (anatomy.co.kr) and available off-line. The data sets provided could be replaced by any developers for their educational achievements. We anticipate that the software will contribute to medical education by allowing users to browse a variety of images. PMID:28581279

  4. Improved Software to Browse the Serial Medical Images for Learning.

    PubMed

    Kwon, Koojoo; Chung, Min Suk; Park, Jin Seo; Shin, Byeong Seok; Chung, Beom Sun

    2017-07-01

    The thousands of serial images used for medical pedagogy cannot be included in a printed book; they also cannot be efficiently handled by ordinary image viewer software. The purpose of this study was to provide browsing software to grasp serial medical images efficiently. The primary function of the newly programmed software was to select images using 3 types of interfaces: buttons or a horizontal scroll bar, a vertical scroll bar, and a checkbox. The secondary function was to show the names of the structures that had been outlined on the images. To confirm the functions of the software, 3 different types of image data of cadavers (sectioned and outlined images, volume models of the stomach, and photos of the dissected knees) were inputted. The browsing software was downloadable for free from the homepage (anatomy.co.kr) and available off-line. The data sets provided could be replaced by any developers for their educational achievements. We anticipate that the software will contribute to medical education by allowing users to browse a variety of images. © 2017 The Korean Academy of Medical Sciences.

  5. UCMS - A new signal parameter measurement system using digital signal processing techniques. [User Constraint Measurement System

    NASA Technical Reports Server (NTRS)

    Choi, H. J.; Su, Y. T.

    1986-01-01

    The User Constraint Measurement System (UCMS) is a hardware/software package developed by NASA Goddard to measure the signal parameter constraints of the user transponder in the TDRSS environment by means of an all-digital signal sampling technique. An account is presently given of the features of UCMS design and of its performance capabilities and applications; attention is given to such important aspects of the system as RF interface parameter definitions, hardware minimization, the emphasis on offline software signal processing, and end-to-end link performance. Applications to the measurement of other signal parameters are also discussed.

  6. Simulation and animation of sensor-driven robots.

    PubMed

    Chen, C; Trivedi, M M; Bidlack, C R

    1994-10-01

    Most simulation and animation systems utilized in robotics are concerned with simulation of the robot and its environment without simulation of sensors. These systems have difficulty in handling robots that utilize sensory feedback in their operation. In this paper, a new design of an environment for simulation, animation, and visualization of sensor-driven robots is presented. As sensor technology advances, increasing numbers of robots are equipped with various types of sophisticated sensors. The main goal of creating the visualization environment is to aid the automatic robot programming and off-line programming capabilities of sensor-driven robots. The software system will help users visualize the motion and reaction of the sensor-driven robot under their control program. Therefore, the efficiency of software development is increased, the reliability of the software and the operational safety of the robot are ensured, and the cost of new software development is reduced. Conventional computer-graphics-based robot simulation and animation software packages lack capabilities for robot sensing simulation. This paper describes a system designed to overcome this deficiency.

  7. Neural mechanisms of vocal imitation: The role of sleep replay in shaping mirror neurons.

    PubMed

    Giret, Nicolas; Edeline, Jean-Marc; Del Negro, Catherine

    2017-06-01

    Learning by imitation involves not only perceiving another individual's action to copy it, but also the formation of a memory trace in order to gradually establish a correspondence between the sensory and motor codes, which represent this action through sensorimotor experience. Memory and sensorimotor processes are closely intertwined. Mirror neurons, which fire both when the same action is performed or perceived, have received considerable attention in the context of imitation. An influential view of memory processes considers that the consolidation of newly acquired information or skills involves an active offline reprocessing of memories during sleep within the neuronal networks that were initially used for encoding. Here, we review the recent advances in the field of mirror neurons and offline processes in the songbird. We further propose a theoretical framework that could establish the neurobiological foundations of sensorimotor learning by imitation. We propose that the reactivation of neuronal assemblies during offline periods contributes to the integration of sensory feedback information and the establishment of sensorimotor mirroring activity at the neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Offline handwritten word recognition using MQDF-HMMs

    NASA Astrophysics Data System (ADS)

    Ramachandrula, Sitaram; Hambarde, Mangesh; Patial, Ajay; Sahoo, Dushyant; Kochar, Shaivi

    2015-01-01

    We propose an improved HMM formulation for offline handwriting recognition (HWR). The main contribution of this work is the use of the modified quadratic discriminant function (MQDF) [1] within the HMM framework. In an MQDF-HMM the state observation likelihood is calculated by a weighted combination of the MQDF likelihoods of the individual Gaussians of a GMM (Gaussian Mixture Model). The quadratic discriminant function (QDF) of a multivariate Gaussian can be rewritten to avoid the inverse of the covariance matrix by using its eigenvalues and eigenvectors. The MQDF is derived from the QDF by substituting a few of the badly estimated smallest eigenvalues with an appropriate constant. This approach controls the estimation errors of the non-dominant eigenvectors and eigenvalues of the covariance matrix, for which the training data are insufficient. MQDF has been successfully shown to improve character recognition performance [1]. The use of MQDF in an HMM improves the computation, storage and modeling power of the HMM when training data are limited. We obtained encouraging results on offline handwritten character (NIST database) and word recognition in English using MQDF-HMMs.
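
    For reference, a widely used form of the MQDF (following Kimura et al., on which [1] builds) replaces the d − k smallest eigenvalues of the class covariance by a constant δ. Here μ is the class mean, λ_i and φ_i are the leading eigenvalues and eigenvectors, d is the feature dimension, and k the number of retained eigenvectors:

```latex
g_\omega(x) = \sum_{i=1}^{k} \frac{\left[\phi_i^{\top}(x-\mu)\right]^{2}}{\lambda_i}
 + \frac{1}{\delta}\left( \lVert x-\mu \rVert^{2}
 - \sum_{i=1}^{k} \left[\phi_i^{\top}(x-\mu)\right]^{2} \right)
 + \sum_{i=1}^{k} \log \lambda_i + (d-k)\log \delta
```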

  9. Robust visual tracking via multiscale deep sparse networks

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Hou, Zhiqiang; Yu, Wangsheng; Xue, Yang; Jin, Zefenfen; Dai, Bo

    2017-04-01

    In visual tracking, deep learning with offline pretraining can extract more intrinsic and robust features. It has had significant success in addressing tracking drift in complicated environments. However, offline pretraining requires numerous auxiliary training datasets and is considerably time-consuming for tracking tasks. To solve these problems, a multiscale sparse-networks-based tracker (MSNT) under the particle filter framework is proposed. Based on stacked sparse autoencoders and the rectified linear unit, the tracker has a flexible and adjustable architecture without the offline pretraining process, and it effectively exploits robust and powerful features through online training of limited labeled data alone. Meanwhile, the tracker builds four deep sparse networks of different scales according to the target's profile type. During tracking, the tracker adaptively selects the matching tracking network in accordance with the initial target's profile type. It preserves the inherent structural information more efficiently than single-scale networks. Additionally, a corresponding update strategy is proposed to improve the robustness of the tracker. Extensive experimental results on a large-scale benchmark dataset show that the proposed method performs favorably against state-of-the-art methods in challenging environments.

  10. NoSQL technologies for the CMS Conditions Database

    NASA Astrophysics Data System (ADS)

    Sipos, Roland

    2015-12-01

    With the restart of the LHC in 2015, the growth of the CMS Conditions dataset will continue; the need for consistent and highly available access to the Conditions is therefore a strong motivation to revisit different aspects of the current data storage solutions. We present a study of alternative data storage backends for the Conditions Databases, evaluating some of the most popular NoSQL databases to support a key-value representation of the CMS Conditions. The definition of the database infrastructure is based on the need to store the conditions as BLOBs. Because of this, each condition can reach a size that may require special treatment (splitting) in these NoSQL databases. As big binary objects may be problematic in several database systems, and also to give an accurate baseline, a testing-framework extension was implemented to measure the characteristics of the handling of arbitrary binary data in these databases. Based on the evaluation, prototypes using a document store, a column-oriented store, and a plain key-value store were deployed. An adaptation layer to access the backends in the CMS Offline software was developed to provide transparent support for these NoSQL databases in the CMS context. Additional data modelling approaches and considerations in the software layer, and the deployment and automation of the databases, are also covered in the research. In this paper we present the results of the evaluation as well as a performance comparison of the prototypes studied.
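
    A minimal sketch of the splitting idea for a key-value representation; the chunk size, key scheme, and payload name are illustrative assumptions, not the CMS implementation.

```python
CHUNK = 1 << 20    # 1 MiB chunks; many key-value stores limit value sizes

def store_blob(kv: dict, key: str, blob: bytes) -> None:
    """Split one conditions payload into chunk entries plus an index entry."""
    n = (len(blob) + CHUNK - 1) // CHUNK
    kv[key] = str(n)                           # index: number of chunks
    for i in range(n):
        kv[f"{key}#{i}"] = blob[i * CHUNK:(i + 1) * CHUNK]

def load_blob(kv: dict, key: str) -> bytes:
    """Reassemble the payload from its chunk entries."""
    return b"".join(kv[f"{key}#{i}"] for i in range(int(kv[key])))

db = {}                                        # stand-in for the NoSQL store
store_blob(db, "EcalPedestals_v3", b"\x00" * (3 * CHUNK + 17))
assert load_blob(db, "EcalPedestals_v3") == b"\x00" * (3 * CHUNK + 17)
```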

  11. GERICOS: A Generic Framework for the Development of On-Board Software

    NASA Astrophysics Data System (ADS)

    Plasson, P.; Cuomo, C.; Gabriel, G.; Gauthier, N.; Gueguen, L.; Malac-Allain, L.

    2016-08-01

    This paper presents an overview of the GERICOS framework (GEneRIC Onboard Software), its architecture, its various layers and its future evolutions. The GERICOS framework, developed and qualified by LESIA, offers a set of generic, reusable and customizable software components for the rapid development of payload flight software. The GERICOS framework has a layered structure. The first layer (GERICOS::CORE) implements the concept of active objects and forms an abstraction layer on top of real-time kernels. The second layer (GERICOS::BLOCKS) offers a set of reusable software components for building flight software based on generic solutions to recurrent functionalities. The third layer (GERICOS::DRIVERS) implements software drivers for several COTS IP cores of the LEON processor ecosystem.
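
    The active-object concept can be illustrated generically: method requests are queued and executed one at a time on the object's private thread. This Python sketch shows only the pattern; GERICOS itself is C++ targeting LEON processors.

```python
import queue, threading, time

class ActiveObject:
    """Executes submitted calls one at a time on a private thread, so callers
    never block each other and the object's state needs no locking."""
    def __init__(self):
        self._mailbox = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            func, args = self._mailbox.get()
            func(*args)                        # handle one request at a time

    def send(self, func, *args):
        self._mailbox.put((func, args))        # asynchronous method request

ao = ActiveObject()
ao.send(print, "handled on the active object's thread")
time.sleep(0.1)                                # let the daemon thread run
```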

  12. Integrating Online and Offline Three-Dimensional Deep Learning for Automated Polyp Detection in Colonoscopy Videos.

    PubMed

    Yu, Lequan; Chen, Hao; Dou, Qi; Qin, Jing; Heng, Pheng Ann

    2017-01-01

    Automated polyp detection in colonoscopy videos has been demonstrated to be a promising way to aid colorectal cancer prevention and diagnosis. Traditional manual screening is time-consuming, operator-dependent, and error-prone; hence, an automated detection approach is in high demand in clinical practice. However, automated polyp detection is very challenging due to high intraclass variations in polyp size, color, shape, and texture, and low interclass variations between polyps and hard mimics. In this paper, we propose a novel offline and online three-dimensional (3-D) deep learning integration framework that leverages a 3-D fully convolutional network (3D-FCN) to tackle this challenging problem. Compared with previous methods employing hand-crafted features or a 2-D convolutional neural network, the 3D-FCN is capable of learning more representative spatio-temporal features from colonoscopy videos, and hence has more powerful discrimination capability. More importantly, we propose a novel online learning scheme to deal with the problem of limited training data by harnessing the specific information of an input video in the learning process. We integrate offline and online learning to effectively reduce the number of false positives generated by the offline network and further improve the detection performance. Extensive experiments on the dataset of the MICCAI 2015 Challenge on Polyp Detection demonstrated the better performance of our method when compared with other competitors.

  13. A dual mode breath sampler for the collection of the end-tidal and dead space fractions.

    PubMed

    Salvo, P; Ferrari, C; Persia, R; Ghimenti, S; Lomonaco, T; Bellagambi, F; Di Francesco, F

    2015-06-01

    This work presents a breath sampler prototype that automatically collects end-tidal (single and multiple breaths) or dead-space air fractions (multiple breaths). This result is achieved by real-time measurements of the CO2 partial pressure and airflow during the expiratory and inspiratory phases. Suitable algorithms, used to control a solenoid valve, guarantee that a Nalophan® bag is filled with the selected breath fraction even if the subject under test hyperventilates. The breath sampler has a low pressure drop (<0.5 kPa) and uses inert or disposable components to avoid bacteriological risk for the patients and contamination of the breath samples. A fully customisable software interface allows real-time control of the hardware and software status. The performance of the breath sampler was evaluated by comparing (a) the CO2 partial pressure calculated during the sampling with the CO2 pressure measured offline within the Nalophan® bag, and (b) the concentrations of four selected volatile organic compounds in the dead-space, end-tidal and mixed breath fractions. Results showed negligible deviations between the calculated and offline CO2 pressure values, and the distributions of the selected compounds in the dead-space, end-tidal and mixed breath fractions were in agreement with their chemical-physical properties. Copyright © 2015. Published by Elsevier Ltd.
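
    A sketch of the selection logic for the end-tidal fraction; the threshold value and the sensor/valve functions are illustrative assumptions, not the prototype's firmware.

```python
CO2_END_TIDAL = 4.0    # kPa: CO2 partial pressure typical of alveolar air

def read_co2() -> float: return 5.0      # stand-in for the CO2 sensor
def read_flow() -> float: return -0.3    # stand-in: negative = expiration
def set_valve(open_: bool) -> None:
    print("valve", "open" if open_ else "closed")

def control_step():
    """Open the solenoid valve only while the expired air looks end-tidal."""
    expiring = read_flow() < 0
    end_tidal = read_co2() >= CO2_END_TIDAL
    set_valve(expiring and end_tidal)

control_step()   # -> valve open
```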

  14. The high-rate data challenge: computing for the CBM experiment

    NASA Astrophysics Data System (ADS)

    Friese, V.; CBM Collaboration

    2017-10-01

    The Compressed Baryonic Matter experiment (CBM) is a next-generation heavy-ion experiment to be operated at the FAIR facility, currently under construction in Darmstadt, Germany. A key feature of CBM is its very high interaction rate, exceeding that of contemporary nuclear collision experiments by several orders of magnitude. Such interaction rates forbid a conventional, hardware-triggered readout; instead, experiment data will stream freely from self-triggered front-end electronics. In order to reduce the huge raw data volume to a recordable rate, data will be selected exclusively on CPU, which necessitates partial event reconstruction in real time. Consequently, the traditional segregation of online and offline software vanishes; an integrated on- and offline data processing concept is called for. In this paper, we report on concepts and developments in computing for CBM as well as on the status of preparations for its first physics run.

  15. PDB@: an offline toolkit for exploration and analysis of PDB files.

    PubMed

    Mani, Udayakumar; Ravisankar, Sadhana; Ramakrishnan, Sai Mukund

    2013-12-01

    The Protein Data Bank (PDB) is a freely accessible archive of the 3-D structural data of biological molecules. Structure-based studies offer a unique vantage point for inferring the properties of a protein molecule from structural data. This is too big a task to be done manually, and there is no single tool, software package or server that comprehensively analyses all structure-based properties. The objective of the present work is to develop an offline computational toolkit, PDB@, containing built-in algorithms that help categorize the structural properties of a protein molecule. The user has the facility to view and edit the PDB file as needed. Some features of the present work are unique in themselves, and others are improvements over existing tools. Also, the representation of protein properties in both graphical and textual formats helps in predicting all the necessary details of a protein molecule on a single platform.

  16. Framework programmable platform for the advanced software development workstation. Integration mechanism design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Reddy, Uday; Ackley, Keith; Futrell, Mike

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by this model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated.

  17. Professional Ethics of Software Engineers: An Ethical Framework.

    PubMed

    Lurie, Yotam; Mark, Shlomo

    2016-04-01

    The purpose of this article is to propose an ethical framework for software engineers that connects software developers' ethical responsibilities directly to their professional standards. The implementation of such an ethical framework can overcome the traditional dichotomy between professional skills and ethical skills, which plagues the engineering professions, by proposing an approach to the fundamental task of the practitioner, i.e., software development, in which the professional standards are intrinsically connected to the ethical responsibilities. In so doing, the ethical framework improves the practitioner's professionalism and ethics. We call this approach to software development Ethical-Driven Software Development (EDSD). EDSD manifests the advantages of an ethical framework as an alternative to the all-too-familiar approach in professional ethics that advocates "stand-alone codes of ethics". We believe that one outcome of this synergy between professional and ethical skills is simply better engineers. Moreover, since there are often different software solutions that the engineer can provide for an issue at stake, the ethical framework provides a guiding principle, within the process of software development, that helps the engineer evaluate the advantages and disadvantages of the different solutions. It does not and cannot affect the end product in and of itself. However, it can and should make the software engineer more conscious and aware of the ethical ramifications of certain engineering decisions within the process.

  18. Latino Adolescents' Perceived Discrimination in Online and Offline Settings: An Examination of Cultural Risk and Protective Factors

    ERIC Educational Resources Information Center

    Umaña-Taylor, Adriana J.; Tynes, Brendesha M.; Toomey, Russell B.; Williams, David R.; Mitchell, Kimberly J.

    2015-01-01

    Guided by a risk and resilience framework, the current study examined the associations between Latino adolescents' ("n" = 219; "M" [subscript age] = 14.35; "SD" = 1.75) perceptions of ethnic discrimination in multiple settings (e.g., online, school) and several domains of adjustment (e.g., mental health, academic),…

  19. Interactive 3D-PDF Presentations for the Simulation and Quantification of Extended Endoscopic Endonasal Surgical Approaches.

    PubMed

    Mavar-Haramija, Marija; Prats-Galino, Alberto; Méndez, Juan A Juanes; Puigdelívoll-Sánchez, Anna; de Notaris, Matteo

    2015-10-01

    A three-dimensional (3D) model of the skull base was reconstructed from the pre- and post-dissection head CT images and embedded in a Portable Document Format (PDF) file, which can be opened by freely available software and used offline. The CT images were segmented using a specific 3D software platform for biomedical data, and the resulting 3D geometrical models of anatomical structures were used for dual purpose: to simulate the extended endoscopic endonasal transsphenoidal approaches and to perform the quantitative analysis of the procedures. The analysis consisted of bone removal quantification and the calculation of quantitative parameters (surgical freedom and exposure area) of each procedure. The results are presented in three PDF documents containing JavaScript-based functions. The 3D-PDF files include reconstructions of the nasal structures (nasal septum, vomer, middle turbinates), the bony structures of the anterior skull base and maxillofacial region and partial reconstructions of the optic nerve, the hypoglossal and vidian canals and the internal carotid arteries. Alongside the anatomical model, axial, sagittal and coronal CT images are shown. Interactive 3D presentations were created to explain the surgery and the associated quantification methods step-by-step. The resulting 3D-PDF files allow the user to interact with the model through easily available software, free of charge and in an intuitive manner. The files are available for offline use on a personal computer and no previous specialized knowledge in informatics is required. The documents can be downloaded at http://hdl.handle.net/2445/55224 .

  20. ROSAT Science Data Center

    NASA Technical Reports Server (NTRS)

    Murray, Stephen; Pisarski, Ryszard L. (Technical Monitor)

    2001-01-01

    This report provides a summary of the Smithsonian Astrophysical Observatory (SAO) ROSAT Science Data Center (RSDC) activities for the recent years of our contract. Details have already been reported in the monthly reports. SAO was responsible for the High Resolution Imager (HRI) detector on ROSAT. We also provided and supported the HRI standard analysis software used in the pipeline processing (SASS). Working with our colleagues at the Max Planck Institute in Garching, Germany (MPE), we fixed bugs and provided enhancements. The last major effort in this area was the port from the VMS/VAX to the VMS/ALPHA architecture. In 1998, a timing bug was found in the HRI standard processing system which degraded the positional accuracy because events accessed incorrect aspect solutions. The bug was fixed, and we developed offline correction routines and provided them to the community. The Post Reduction Off-line Software (PROS) package was developed by SAO and runs in the IRAF environment. Although in recent years PROS was not a contractual responsibility of the RSDC, we continued to maintain the system and provided new capabilities, such as the ability to deal with simulated AXAF data in preparation for the NASA call for proposals for Chandra. Our most recent activities in this area included the debugging necessary for newer versions of IRAF, which broke some of our software. At SAO we have an operating version of PROS and hope to release a patch, even though almost all functionality that was lost was subsequently recovered via an IRAF patch (i.e., most of our problems were caused by an IRAF bug).

  1. Software Engineering Support of the Third Round of Scientific Grand Challenge Investigations: Earth System Modeling Software Framework Survey

    NASA Technical Reports Server (NTRS)

    Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn; Zukor, Dorothy (Technical Monitor)

    2002-01-01

One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document surveys numerous software frameworks for potential use in Earth science modeling. Several frameworks are evaluated in depth, including Parallel Object-Oriented Methods and Applications (POOMA), Cactus (from the relativistic physics community), Overture, Goddard Earth Modeling System (GEMS), the National Center for Atmospheric Research Flux Coupler, and UCLA/UCB Distributed Data Broker (DDB). Frameworks evaluated in less detail include ROOT, Parallel Application Workspace (PAWS), and Advanced Large-Scale Integrated Computational Environment (ALICE). A host of other frameworks and related tools are referenced in this context. The frameworks are evaluated individually and also compared with each other.

  2. Data handling with SAM and art at the NOvA experiment

    DOE PAGES

    Aurisano, A.; Backhouse, C.; Davies, G. S.; ...

    2015-12-23

During operations, NOvA produces between 5,000 and 7,000 raw files per day with peaks in excess of 12,000. These files must be processed in several stages to produce fully calibrated and reconstructed analysis files. In addition, many simulated neutrino interactions must be produced and processed through the same stages as data. To accommodate the large volume of data and Monte Carlo, production must be possible both on the Fermilab grid and on off-site farms, such as the ones accessible through the Open Science Grid. To handle the challenge of cataloging these files and to facilitate their off-line processing, we have adopted the SAM system developed at Fermilab. SAM indexes files according to metadata, keeps track of each file's physical locations, provides dataset management facilities, and facilitates data transfer to off-site grids. To integrate SAM with Fermilab's art software framework and the NOvA production workflow, we have developed methods to embed metadata into our configuration files, art files, and standalone ROOT files. A module in the art framework propagates the embedded information from configuration files into art files, and from input art files to output art files, allowing us to maintain a complete processing history within our files. Embedding metadata in configuration files also allows configuration files indexed in SAM to be used as inputs to Monte Carlo production jobs. Further, SAM keeps track of the input files used to create each output file. Parentage information enables the construction of self-draining datasets, which have become the primary production paradigm used at NOvA. In this study we will present an overview of SAM at NOvA and how it has transformed the file production framework used by the experiment.
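
    As a rough illustration of the metadata and parentage bookkeeping described above, the toy Python sketch below registers files with metadata and parent lists and checks whether a dataset has drained. All names and fields here are hypothetical stand-ins, not the actual SAM or art APIs.

        # Hypothetical stand-in for SAM-style cataloging; not the SAM client API.
        def declare_file(catalog, name, metadata, parents=()):
            """Register a file together with its metadata and its parent files."""
            catalog[name] = {"metadata": metadata, "parents": list(parents)}

        def is_drained(dataset, consumed):
            """A self-draining dataset is exhausted once every file is consumed."""
            return all(f in consumed for f in dataset)

        catalog = {}
        declare_file(catalog, "raw_001.root", {"tier": "raw", "run": 12345})
        declare_file(catalog, "reco_001.root", {"tier": "reco", "run": 12345},
                     parents=["raw_001.root"])  # parentage recorded per output file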

  3. Open Source Tools for Seismicity Analysis

    NASA Astrophysics Data System (ADS)

    Powers, P.

    2010-12-01

The spatio-temporal analysis of seismicity plays an important role in earthquake forecasting and is integral to research on earthquake interactions and triggering. For instance, the third version of the Uniform California Earthquake Rupture Forecast (UCERF), currently under development, will use Epidemic Type Aftershock Sequences (ETAS) as a model for earthquake triggering. UCERF will be a "living" model and therefore requires robust, tested, and well-documented ETAS algorithms to ensure transparency and reproducibility. Likewise, as earthquake aftershock sequences unfold, real-time access to high quality hypocenter data makes it possible to monitor the temporal variability of statistical properties such as the parameters of the Omori law and the Gutenberg-Richter b-value. Such statistical properties are valuable as they provide a measure of how much a particular sequence deviates from expected behavior and can be used when assigning probabilities of aftershock occurrence. To address these demands and provide public access to standard methods employed in statistical seismology, we present well-documented, open-source JavaScript and Java software libraries for the on- and off-line analysis of seismicity. The JavaScript classes facilitate web-based asynchronous access to earthquake catalog data and provide a framework for in-browser display, analysis, and manipulation of catalog statistics; implementations of this framework will be made available on the USGS Earthquake Hazards website. The Java classes, in addition to providing tools for seismicity analysis, provide tools for modeling seismicity and generating synthetic catalogs. These tools are extensible and will be released as part of the open-source OpenSHA Commons library.
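
    Since the abstract centers on catalog statistics such as the Gutenberg-Richter b-value, a short sketch of the standard maximum-likelihood estimator may be useful. The libraries described above are JavaScript and Java; this Python version is purely illustrative.

        import math

        def b_value(magnitudes, mc, dm=0.1):
            # Aki (1965) maximum-likelihood estimator, with the usual dm/2
            # correction for magnitudes binned to the nearest dm.
            m = [x for x in magnitudes if x >= mc]
            return math.log10(math.e) / (sum(m) / len(m) - (mc - dm / 2.0))

        print(b_value([1.2, 1.5, 1.1, 2.0, 1.3, 1.8, 1.4], mc=1.0))  # ~0.83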

  4. Motivations for Social Media Use and Impact on Political Participation in China: A Cognitive and Communication Mediation Approach.

    PubMed

    Chen, Zhuo; Chan, Michael

    2017-02-01

Integrating uses and gratifications theory and the cognitive/communication mediation model, this study examines Chinese students' use of social media and its subsequent impact on political participation. An integrative framework is proposed in which media use, political expression, and political cognitions (efficacy and knowledge) play important mediating roles between audience motivations and participation. Structural equation analyses showed support for the integrated model. Guidance and social utility motivations exhibited different indirect effects on online and offline participation through social media news, discussion, and political efficacy. Entertainment motivations exhibited no direct or indirect effects. Contrary to expectations and previous literature, surveillance motivations exhibited negative direct and indirect effects on offline participation, which may be attributed to the particular Chinese social and political context. Implications of the findings are discussed.

  5. Bridging online and offline social networks: Multiplex analysis

    NASA Astrophysics Data System (ADS)

    Filiposka, Sonja; Gajduk, Andrej; Dimitrova, Tamara; Kocarev, Ljupco

    2017-04-01

We show that three basic actor characteristics, namely normalized reciprocity, three cycles, and triplets, can be expressed using a unified framework that is based on computing the similarity index between two sets associated with the actor: the set of her/his friends and the set of those considering her/him as a friend. These metrics are extended to multiplex networks and then computed for two friendship networks generated by collecting data from two groups of undergraduate students. We found that in offline communication strong and weak ties are (almost) equally present, while in online communication weak ties are dominant. Moreover, weak ties are much less reciprocal than strong ties. However, across different layers of the multiplex network reciprocities are preserved, while triads (measured with normalized three cycles and triplets) are not significant.
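
    The similarity index the authors describe compares, for each actor, the set of people the actor names as friends with the set of people naming the actor as a friend. A Jaccard-style reading of that idea is sketched below; the paper's exact normalization may differ.

        def reciprocity(named_by_actor, naming_actor):
            """Similarity between the out-friends and in-friends of one actor."""
            union = named_by_actor | naming_actor
            return len(named_by_actor & naming_actor) / len(union) if union else 0.0

        print(reciprocity({"ana", "bo", "cy"}, {"bo", "cy", "dee"}))  # 0.5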

  6. Optic Nerve Head Measurements With Optical Coherence Tomography: A Phantom-Based Study Reveals Differences Among Clinical Devices

    PubMed Central

    Agrawal, Anant; Baxi, Jigesh; Calhoun, William; Chen, Chieh-Li; Ishikawa, Hiroshi; Schuman, Joel S.; Wollstein, Gadi; Hammer, Daniel X.

    2016-01-01

Purpose: Optical coherence tomography (OCT) can monitor for glaucoma by measuring dimensions of the optic nerve head (ONH) cup and disc. Multiple clinical studies have shown that different OCT devices yield different estimates of retinal dimensions. We developed phantoms mimicking ONH morphology as a new way to compare ONH measurements from different clinical OCT devices. Methods: Three phantoms were fabricated to model the ONH: one normal and two with glaucomatous anatomies. Phantoms were scanned with Stratus, RTVue, and Cirrus clinical devices, and with a laboratory OCT system as a reference. We analyzed device-reported ONH measurements of cup-to-disc ratio (CDR) and cup volume and compared them with offline measurements done manually and with a custom software algorithm, respectively. Results: The mean absolute difference between clinical devices with device-reported measurements versus offline measurements was 0.082 vs. 0.013 for CDR and 0.044 mm3 vs. 0.019 mm3 for cup volume. Statistically significant differences between devices were present for 16 of 18 comparisons of device-reported measurements from the phantoms. Offline Cirrus measurements tended to be significantly different from those from Stratus and RTVue. Conclusions: The interdevice differences in CDR and cup volume are primarily caused by the devices' proprietary ONH analysis algorithms. The three devices yield more similar ONH measurements when a consistent offline analysis technique is applied. Scan pattern on the ONH also may be a factor in the measurement differences. This phantom-based study has provided unique insights into characteristics of OCT measurements of the ONH. PMID:27409500

  7. Software Coherence in Multiprocessor Memory Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bolosky, William Joseph

    1993-01-01

Processors are becoming faster and multiprocessor memory interconnection systems are not keeping up. Therefore, it is necessary to have threads and the memory they access as near one another as possible. Typically, this involves putting memory or caches with the processors, which gives rise to the problem of coherence: if one processor writes an address, any other processor reading that address must see the new value. This coherence can be maintained by the hardware or with software intervention. Systems of both types have been built in the past; the hardware-based systems tended to outperform the software ones. However, the ratio of processor to interconnect speed is now so high that the extra overhead of the software systems may no longer be significant. This issue is explored both by implementing a software-maintained system and by introducing and using the technique of offline optimal analysis of memory reference traces. The analysis finds that in properly built systems, software-maintained coherence can perform comparably to or even better than hardware-maintained coherence. The architectural features necessary for efficient software coherence to be profitable include a small page size, a fast trap mechanism, and the ability to execute instructions while remote memory references are outstanding.
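
    A minimal sketch of the kind of trace-driven offline analysis mentioned above: replaying a reference trace to count the invalidations that any write-invalidate scheme must perform. The trace format (cpu, page, is_write) is hypothetical.

        def mandatory_invalidations(trace):
            sharers, events = {}, 0
            for cpu, page, is_write in trace:
                holders = sharers.setdefault(page, set())
                if is_write:
                    events += len(holders - {cpu})  # other cached copies must die
                    sharers[page] = {cpu}
                else:
                    holders.add(cpu)
            return events

        trace = [(0, 7, False), (1, 7, False), (0, 7, True), (1, 7, False)]
        print(mandatory_invalidations(trace))  # 1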

  8. Noncoherent sampling technique for communications parameter estimations

    NASA Technical Reports Server (NTRS)

    Su, Y. T.; Choi, H. J.

    1985-01-01

This paper presents a method of noncoherent demodulation of the PSK signal for signal distortion analysis at the RF interface. The received RF signal is downconverted and noncoherently sampled for further off-line processing. Any mismatch in phase and frequency is then compensated for in software using estimation techniques to extract the baseband waveform, which is needed in measuring various signal parameters. In this way, various kinds of modulated signals can be treated uniformly, independent of modulation format, and additional distortions introduced by the receiver or the hardware measurement instruments can thus be eliminated. Quantization errors incurred by digital sampling and the ensuing software manipulations are analyzed, and related numerical results are also presented.
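
    One textbook way to estimate a frequency mismatch off-line, in the spirit of the estimation step described above, is the M-th power method: raising an M-PSK record to the M-th power strips the modulation and leaves a tone at M times the carrier offset. This is a generic sketch, not necessarily the paper's estimator.

        import numpy as np

        def estimate_freq_offset(samples, fs, m=4):
            """Estimate the carrier frequency offset of an M-PSK baseband record."""
            spectrum = np.abs(np.fft.fft(samples ** m))   # modulation stripped
            freqs = np.fft.fftfreq(len(samples), 1.0 / fs)
            return freqs[np.argmax(spectrum)] / m

        # compensation: samples * np.exp(-2j * np.pi * f_hat * np.arange(n) / fs)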

  9. Manipulation and handling processes off-line programming and optimization with use of K-Roset

    NASA Astrophysics Data System (ADS)

    Gołda, G.; Kampa, A.

    2017-08-01

Contemporary trends in the development of efficient, flexible manufacturing systems require the practical implementation of modern "Lean production" concepts, maximizing customer value by minimizing waste in manufacturing and logistics processes. Every FMS is built from automated and robotized production cells. Alongside flexible CNC machine tools and other equipment, industrial robots are the primary elements of such a system. In this study, the authors look for wasted time and cost in real robot manipulation tasks. For optimizing handling and manipulation processes performed by robots, the application of modern off-line programming methods and computer simulation is the most effective way to minimize unnecessary movements and other instructions. The modelling of a robotized production cell and the offline programming of Kawasaki robots in AS-Language are described, and the robotized workstation is simulated with the K-Roset virtual reality software. The authors show how industrial robot programs are improved and optimized to minimize the number of useless manipulator movements and unnecessary instructions, shortening production cycle times and reducing the costs of handling, manipulation, and the technological process.

  10. The influence of computer-generated path on the robot’s effector stability of motion

    NASA Astrophysics Data System (ADS)

    Foit, K.; Banaś, W.; Gwiazda, A.; Ćwikła, G.

    2017-08-01

Off-line trajectory planning is often carried out for economic and practical reasons: the robot is not excluded from the production process, and the operator benefits from testing programs in a virtual environment. On the other hand, dedicated off-line programming and simulation software is often limited in features and is intended only for rough checks of the program. The arm of the real manipulator should be expected to realize the trajectory in a different manner: the acceleration and deceleration phases may trigger vibrations of the kinematic chain that affect the precision of effector positioning and degrade the quality of the process realized by the robot. The purpose of this work is to analyze selected cases in which the robot's effector is moved along a programmed path. The off-line generated test trajectories have different arrangements of points; this approach allows evaluating the time needed to complete each task, as well as measuring the vibration level of the robot's wrist. All tests were performed without load. The conclusions of the experiment may be useful during trajectory planning in order to avoid critical configurations of points.

  11. Evaluation of Microcomputer-Based Operation and Maintenance Management Systems for Army Water/Wastewater Treatment Plant Operation.

    DTIC Science & Technology

    1986-07-01

Only fragments of this report's table of contents survive in the record; they indicate coverage of the functions and applications of an off-line computer-aided operation management system, its hardware components, plant visits, the computer-aided operation management systems reviewed for analysis of basic functions, and the progress of software system installation.

  12. Finite-Fault and Other New Capabilities of CISN ShakeAlert

    NASA Astrophysics Data System (ADS)

    Boese, M.; Felizardo, C.; Heaton, T. H.; Hudnut, K. W.; Hauksson, E.

    2013-12-01

Over the past 6 years, scientists at Caltech, UC Berkeley, the Univ. of Southern California, the Univ. of Washington, the US Geological Survey, and ETH Zurich (Switzerland) have developed the 'ShakeAlert' earthquake early warning demonstration system for California and the Pacific Northwest. We have now started to transform this system into a stable end-to-end production system that will be integrated into the daily routine operations of the CISN and PNSN networks. To quickly determine the earthquake magnitude and location, ShakeAlert currently processes and interprets real-time data streams from several hundred seismic stations within the California Integrated Seismic Network (CISN) and the Pacific Northwest Seismic Network (PNSN). Based on these parameters, the 'UserDisplay' software predicts and displays the arrival and intensity of shaking at a given user site. Real-time ShakeAlert feeds are currently being shared with around 160 individuals, companies, and emergency response organizations to gather feedback about the system performance, to educate potential users about EEW, and to identify needs and applications of EEW in a future operational warning system. To improve the performance during large earthquakes (M>6.5), we have started to develop, implement, and test a number of new algorithms for the ShakeAlert system: the 'FinDer' (Finite Fault Rupture Detector) algorithm provides real-time estimates of locations and extents of finite-fault ruptures from high-frequency seismic data; the 'GPSlip' algorithm estimates the fault slip along these ruptures using high-rate real-time GPS data; and, third, a new type of ground-motion prediction model derived from over 415,000 rupture simulations along active faults in southern California improves MMI intensity predictions for large earthquakes with consideration of finite-fault, rupture directivity, and basin response effects. FinDer and GPSlip are currently being tested, both in real time and offline, in a separate internal ShakeAlert installation at Caltech. Real-time position and displacement time series from around 100 GPS sensors are obtained in JSON format from RTK/PPP(AR) solutions using the RTNet software at USGS Pasadena. However, we have also started to investigate the use of onsite (in-receiver) processing using NetR9 with RTX and tracebuf2 output format. A number of changes to the ShakeAlert processing, XML message format, and the usage of this information in the UserDisplay software were necessary to handle the new finite-fault and slip information from the FinDer and GPSlip algorithms. In addition, we have developed a framework for end-to-end off-line testing with archived and simulated waveform data using the Earthworm tankplayer. Detailed background information about the algorithms, processing, and results from these test runs will be presented.

  13. Towards a Comprehensive Catalog of Volcanic Seismicity

    NASA Astrophysics Data System (ADS)

    Thompson, G.

    2014-12-01

Catalogs of earthquakes located using differential travel-time techniques are a core product of volcano observatories, and while vital, they represent an incomplete perspective of volcanic seismicity. Many (often most) earthquakes are too small to locate accurately, and are omitted from available catalogs. Low frequency events, tremor and signals related to rockfalls, pyroclastic flows and lahars are not systematically catalogued, and yet from a hazard management perspective are exceedingly important. Because STA/LTA detection schemes break down in the presence of high amplitude tremor, swarms or dome collapses, catalogs may suggest low seismicity when seismicity peaks. We propose to develop a workflow and underlying software toolbox that can be applied to near-real-time and offline waveform data to produce comprehensive catalogs of volcanic seismicity. Existing tools to detect and locate phaseless signals will be adapted to fit within this framework. For this proof of concept the toolbox will be developed in MATLAB, extending the existing GISMO toolbox (an object-oriented MATLAB toolbox for seismic data analysis). Existing database schemas such as CSS 3.0 will need to be extended to describe this wider range of volcano-seismic signals. WOVOdat may already incorporate many of the additional tables needed. Thus our framework may act as an interface between volcano observatories (or campaign-style research projects) and WOVOdat. We aim to take the further step of reducing volcano-seismic catalogs to sets of continuous metrics that are useful for recognizing data trends, and for feeding alarm systems and forecasting techniques. Previous experience has shown that frequency index, peak frequency, mean frequency, mean event rate, median event rate, and cumulative magnitude (or energy) are potentially useful metrics to generate for all catalogs at a 1-minute sample rate (directly comparable with RSAM and similar metrics derived from continuous data). Our framework includes tools to plot these metrics in a consistent manner. We work with data from unrest at Redoubt volcano and Soufriere Hills volcano to develop our framework.
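
    As an illustration of the catalog-reduction step, the sketch below bins a catalog into 1-minute windows and computes two of the proposed metrics. The cumulative-magnitude convention used here (summing an energy proxy 10^(1.5M)) is one common choice, not necessarily the authors' definition.

        import math
        from collections import defaultdict

        def per_minute_metrics(catalog):
            """catalog: [(epoch_seconds, magnitude), ...] -> per-minute metrics."""
            bins = defaultdict(list)
            for t, mag in catalog:
                bins[int(t // 60)].append(mag)
            return {minute: {"rate": len(mags),
                             "cum_mag": (2.0 / 3.0) * math.log10(
                                 sum(10 ** (1.5 * m) for m in mags))}
                    for minute, mags in bins.items()}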

  14. Offline and online civic engagement among adolescents and young adults from three ethnic groups.

    PubMed

    Jugert, Philipp; Eckstein, Katharina; Noack, Peter; Kuhn, Alexandra; Benbow, Alison

    2013-01-01

Levels of civic engagement are assumed to vary according to numerous social and psychological characteristics, but not much is known about online civic engagement. This study aimed to investigate differences and similarities in young people's offline and online civic engagement and to clarify, based on Ajzen's theory of planned behavior (TPB), associations between motivation for civic engagement, peer and parental norms, collective efficacy, and civic engagement. The sample consisted of 755 youth (native German, ethnic German Diaspora, and Turkish migrants) from two age groups (16-18 and 19-26; mean age 20.5 years; 52% female). Results showed that ethnic group membership and age moderated the frequency of engagement behavior, with Turkish migrants taking part more than native Germans, who were followed by ethnic German Diaspora migrants. Analyses based on TPB showed good fit for a model relating intention for offline and online civic engagement to motivation for civic engagement, peer and parental norms, and collective efficacy. Ethnic group moderated the findings for offline civic engagement and questioned the universality of some model parameters (e.g., peer and parental norms). This study showed the utility of the TPB framework for studying civic engagement but also revealed that the predictive utility of peer and parental norms seems to vary depending on the group and the behavior under study. This study highlights the importance of including minority samples in the study of civic engagement in order to identify between-group similarities and differences.

  15. An Approach for Automatic Generation of Adaptive Hypermedia in Education with Multilingual Knowledge Discovery Techniques

    ERIC Educational Resources Information Center

    Alfonseca, Enrique; Rodriguez, Pilar; Perez, Diana

    2007-01-01

    This work describes a framework that combines techniques from Adaptive Hypermedia and Natural Language processing in order to create, in a fully automated way, on-line information systems from linear texts in electronic format, such as textbooks. The process is divided into two steps: an "off-line" processing step, which analyses the source text,…

  16. Inferring social ties from geographic coincidences.

    PubMed

    Crandall, David J; Backstrom, Lars; Cosley, Dan; Suri, Siddharth; Huttenlocher, Daniel; Kleinberg, Jon

    2010-12-28

    We investigate the extent to which social ties between people can be inferred from co-occurrence in time and space: Given that two people have been in approximately the same geographic locale at approximately the same time, on multiple occasions, how likely are they to know each other? Furthermore, how does this likelihood depend on the spatial and temporal proximity of the co-occurrences? Such issues arise in data originating in both online and offline domains as well as settings that capture interfaces between online and offline behavior. Here we develop a framework for quantifying the answers to such questions, and we apply this framework to publicly available data from a social media site, finding that even a very small number of co-occurrences can result in a high empirical likelihood of a social tie. We then present probabilistic models showing how such large probabilities can arise from a natural model of proximity and co-occurrence in the presence of social ties. In addition to providing a method for establishing some of the first quantifiable estimates of these measures, our findings have potential privacy implications, particularly for the ways in which social structures can be inferred from public online records that capture individuals' physical locations over time.
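
    The core quantity in such a framework is a count of spatiotemporal co-occurrences between two people. A toy version, with an illustrative grid cell and time window rather than the paper's parameters, might look like this:

        def co_occurrences(visits_a, visits_b, cell_deg=0.1, window_s=3600):
            """Count shared (time window, lat/lon cell) bins between two users;
            visits_* are (epoch_seconds, lat, lon) tuples."""
            def key(t, lat, lon):
                return (int(t // window_s),
                        round(lat / cell_deg), round(lon / cell_deg))
            return len({key(*v) for v in visits_a} & {key(*v) for v in visits_b})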

  17. Development, deployment and operations of ATLAS databases

    NASA Astrophysics Data System (ADS)

    Vaniachine, A. V.; Schmitt, J. G. v. d.

    2008-07-01

    In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services.

  18. Benchmarking the ATLAS software through the Kit Validation engine

    NASA Astrophysics Data System (ADS)

    De Salvo, Alessandro; Brasolin, Franco

    2010-04-01

The measurement of the experiment software performance is a very important metric in order to choose the most effective resources to be used and to discover the bottlenecks of the code implementation. In this work we present the benchmark techniques used to measure the ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit Validation. The performance measurements, the data collection, the online analysis and display of the results will be presented. The results of the measurement on different platforms and architectures will be shown, giving a full report on the CPU power and memory consumption of the Monte Carlo generation, simulation, digitization and reconstruction of the most CPU-intensive channels. The impact of multi-core computing on the ATLAS software performance will also be presented, comparing the behavior of different architectures when increasing the number of concurrent processes. The benchmark techniques described in this paper have been used in the HEPiX group since the beginning of 2008 to help define the performance metrics for High Energy Physics applications, based on the real experiment software.

  19. AOFlagger: RFI Software

    NASA Astrophysics Data System (ADS)

    Offringa, A. R.

    2010-10-01

    The RFI software presented here can automatically flag data and can be used to analyze the data in a measurement. The purpose of flagging is to mark samples that are affected by interfering sources such as radio stations, airplanes, electrical fences or other transmitting interferers. The tools in the package are meant for offline use. The software package contains a graphical interface ("rfigui") that can be used to visualize a measurement set and analyze mitigation techniques. It also contains a console flagger ("rficonsole") that can execute a script of mitigation functions without the overhead of a graphical environment. All tools were written in C++. The software has been tested extensively on low radio frequencies (150 MHz or lower) produced by the WSRT and LOFAR telescopes. LOFAR is the Low Frequency Array that is built in and around the Netherlands. Higher frequencies should work as well. Some of the methods implemented are the SumThreshold, the VarThreshold and the singular value decomposition (SVD) method. Included also are several surface fitting algorithms. The software is published under the GNU General Public License version 3.
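
    For orientation, a much-simplified one-dimensional rendition of the SumThreshold idea is sketched below; AOFlagger's actual implementation works on two-dimensional time-frequency data with an optimized recursion, and the threshold constants here are illustrative only.

        import numpy as np

        def sum_threshold(power, chi1=6.0, rho=1.5, passes=6):
            """Flag runs of M samples whose mean exceeds chi1 / rho**log2(M)."""
            data = np.asarray(power, dtype=float).copy()
            flags = np.zeros(len(data), dtype=bool)
            m = 1
            for _ in range(passes):
                thr = chi1 / rho ** np.log2(m) if m > 1 else chi1
                for i in range(len(data) - m + 1):
                    if data[i:i + m].mean() > thr:
                        flags[i:i + m] = True
                data[flags] = thr  # damp flagged samples before wider windows
                m *= 2
            return flags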

  20. Simulation and animation of sensor-driven robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, C.; Trivedi, M.M.; Bidlack, C.R.

    1994-10-01

Most simulation and animation systems utilized in robotics are concerned with simulation of the robot and its environment without simulation of sensors. These systems have difficulty in handling robots that utilize sensory feedback in their operation. In this paper, a new design of an environment for simulation, animation, and visualization of sensor-driven robots is presented. As sensor technology advances, increasing numbers of robots are equipped with various types of sophisticated sensors. The main goal of creating the visualization environment is to aid the automatic robot programming and off-line programming capabilities of sensor-driven robots. The software system will help the users visualize the motion and reaction of the sensor-driven robot under their control program. Therefore, the efficiency of the software development is increased, the reliability of the software and the operation safety of the robot are ensured, and the cost of new software development is reduced. Conventional computer-graphics-based robot simulation and animation software packages lack capabilities for robot sensing simulation. This paper describes a system designed to overcome this deficiency.

  1. Software Quality Control at Belle II

    NASA Astrophysics Data System (ADS)

Ritter, M.; Kuhr, T.; Hauth, T.; Gebard, T.; Kristof, M.; Pulvermacher, C.; Belle II Software Group

    2017-10-01

Over the last seven years the software stack of the next generation B factory experiment Belle II has grown to over one million lines of C++ and Python code, counting only the part included in offline software releases. There are several thousand commits to the central repository by about 100 individual developers per year. Keeping the software stack coherent and of sufficiently high quality that it can be sustained and used efficiently for data acquisition, simulation, reconstruction, and analysis over the lifetime of the Belle II experiment is a challenge. A set of tools is employed to monitor the quality of the software and provide fast feedback to the developers. They are integrated in a machinery that is controlled by a buildbot master and automates the quality checks. The tools include different compilers, cppcheck, the clang static analyzer, valgrind memcheck, doxygen, a geometry overlap checker, a check for missing or extra library links, unit tests, steering file level tests, a sophisticated high-level validation suite, and an issue tracker. The technological development infrastructure is complemented by organizational means to coordinate the development.

  2. Three Software Tools for Viewing Sectional Planes, Volume Models, and Surface Models of a Cadaver Hand.

    PubMed

    Chung, Beom Sun; Chung, Min Suk; Shin, Byeong Seok; Kwon, Koojoo

    2018-02-19

    The hand anatomy, including the complicated hand muscles, can be grasped by using computer-assisted learning tools with high quality two-dimensional images and three-dimensional models. The purpose of this study was to present up-to-date software tools that promote learning of stereoscopic morphology of the hand. On the basis of horizontal sectioned images and outlined images of a male cadaver, vertical planes, volume models, and surface models were elaborated. Software to browse pairs of the sectioned and outlined images in orthogonal planes and software to peel and rotate the volume models, as well as a portable document format (PDF) file to select and rotate the surface models, were produced. All of the software tools were downloadable free of charge and usable off-line. The three types of tools for viewing multiple aspects of the hand could be adequately employed according to individual needs. These new tools involving the realistic images of a cadaver and the diverse functions are expected to improve comprehensive knowledge of the hand shape. © 2018 The Korean Academy of Medical Sciences.

  3. Three Software Tools for Viewing Sectional Planes, Volume Models, and Surface Models of a Cadaver Hand

    PubMed Central

    2018-01-01

Background: The hand anatomy, including the complicated hand muscles, can be grasped by using computer-assisted learning tools with high quality two-dimensional images and three-dimensional models. The purpose of this study was to present up-to-date software tools that promote learning of stereoscopic morphology of the hand. Methods: On the basis of horizontal sectioned images and outlined images of a male cadaver, vertical planes, volume models, and surface models were elaborated. Software to browse pairs of the sectioned and outlined images in orthogonal planes and software to peel and rotate the volume models, as well as a portable document format (PDF) file to select and rotate the surface models, were produced. Results: All of the software tools were downloadable free of charge and usable off-line. The three types of tools for viewing multiple aspects of the hand could be adequately employed according to individual needs. Conclusion: These new tools involving the realistic images of a cadaver and the diverse functions are expected to improve comprehensive knowledge of the hand shape. PMID:29441756

  4. Data-Driven Software Framework for Web-Based ISS Telescience

    NASA Technical Reports Server (NTRS)

    Tso, Kam S.

    2005-01-01

    Software that enables authorized users to monitor and control scientific payloads aboard the International Space Station (ISS) from diverse terrestrial locations equipped with Internet connections is undergoing development. This software reflects a data-driven approach to distributed operations. A Web-based software framework leverages prior developments in Java and Extensible Markup Language (XML) to create portable code and portable data, to which one can gain access via Web-browser software on almost any common computer. Open-source software is used extensively to minimize cost; the framework also accommodates enterprise-class server software to satisfy needs for high performance and security. To accommodate the diversity of ISS experiments and users, the framework emphasizes openness and extensibility. Users can take advantage of available viewer software to create their own client programs according to their particular preferences, and can upload these programs for custom processing of data, generation of views, and planning of experiments. The same software system, possibly augmented with a subset of data and additional software tools, could be used for public outreach by enabling public users to replay telescience experiments, conduct their experiments with simulated payloads, and create their own client programs and other custom software.

  5. Framework Programmable Platform for the Advanced Software Development Workstation: Preliminary system design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, John W., IV; Henderson, Richard; Futrell, Michael T.

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The focus here is on the design of components that make up the FPP. These components serve as supporting systems for the Integration Mechanism and the Framework Processor and provide the 'glue' that ties the FPP together. Also discussed are the components that allow the platform to operate in a distributed, heterogeneous environment and to manage the development and evolution of software system artifacts.

  6. A Model Independent S/W Framework for Search-Based Software Testing

    PubMed Central

    Baik, Jongmoon

    2014-01-01

In the Model-Based Testing (MBT) area, Search-Based Software Testing (SBST) has been employed to generate test cases from the model of a system under test. However, many types of models have been used in MBT. If the type of model changes from one to another, all functions of a search technique must be reimplemented, because the model types are different even when the same search technique is applied. It requires too much time and effort to implement the same algorithm over and over again. We propose a model-independent software framework for SBST, which can reduce such redundant work. The framework provides a reusable common software platform to reduce time and effort. The software framework not only presents design patterns to find test cases for a target model but also reduces development time by using common functions provided in the framework. We show the effectiveness and efficiency of the proposed framework with two case studies. The framework improves productivity by about 50% when changing the type of a model. PMID:25302314

  7. An Interoperability Framework and Capability Profiling for Manufacturing Software

    NASA Astrophysics Data System (ADS)

    Matsuda, M.; Arai, E.; Nakano, N.; Wakai, H.; Takeda, H.; Takata, M.; Sasaki, H.

ISO/TC184/SC5/WG4 is working on ISO 16100: Manufacturing software capability profiling for interoperability. This paper reports on a manufacturing software interoperability framework and a capability profiling methodology which were proposed and developed through this international standardization activity. Within the context of a manufacturing application, a manufacturing software unit is considered to be capable of performing a specific set of functions defined by a manufacturing software system architecture. A manufacturing software interoperability framework consists of a set of elements and rules for describing the capability of software units to support the requirements of a manufacturing application. The capability profiling methodology makes use of the domain-specific attributes and methods associated with each specific software unit to describe capability profiles in terms of unit name, manufacturing functions, and other needed class properties. In this methodology, manufacturing software requirements are expressed in terms of software unit capability profiles.

  8. Computing at H1 - Experience and Future

    NASA Astrophysics Data System (ADS)

Eckerlin, G.; Gerhards, R.; Kleinwort, C.; Krüner-Marquis, U.; Egli, S.; Niebergall, F.

The H1 experiment has now been successfully operating at the electron proton collider HERA at DESY for three years. During this time the computing environment has gradually shifted from a mainframe oriented environment to the distributed server/client Unix world. This transition is now almost complete. Computing needs are largely determined by the present amount of 1.5 TB of reconstructed data per year (1994), corresponding to 1.2 × 10^7 accepted events. All data are centrally available at DESY. In addition to data analysis, which is done in all collaborating institutes, most of the centrally organized Monte Carlo production is performed outside of DESY. New software tools to cope with offline computing needs include CENTIPEDE, a tool for the use of distributed batch and interactive resources for Monte Carlo production, and H1 UNIX, a software package for automatic updates of H1 software on all UNIX platforms.

  9. Proteopedia: Exciting Advances in the 3D Encyclopedia of Biomolecular Structure

    NASA Astrophysics Data System (ADS)

    Prilusky, Jaime; Hodis, Eran; Sussman, Joel L.

    Proteopedia is a collaborative, 3D web-encyclopedia of protein, nucleic acid and other structures. Proteopedia ( http://www.proteopedia.org ) presents 3D biomolecule structures in a broadly accessible manner to a diverse scientific audience through easy-to-use molecular visualization tools integrated into a wiki environment that anyone with a user account can edit. We describe recent advances in the web resource in the areas of content and software. In terms of content, we describe a large growth in user-added content as well as improvements in automatically-generated content for all PDB entry pages in the resource. In terms of software, we describe new features ranging from the capability to create pages hidden from public view to the capability to export pages for offline viewing. New software features also include an improved file-handling system and availability of biological assemblies of protein structures alongside their asymmetric units.

  10. Positron lifetime setup based on DRS4 evaluation board

    NASA Astrophysics Data System (ADS)

    Petriska, M.; Sojak, S.; Slugeň, V.

    2014-04-01

A digital positron lifetime setup based on the DRS4 evaluation board designed at the Paul Scherrer Institute has been constructed and tested in the positron annihilation laboratory of the Slovak University of Technology in Bratislava. The high bandwidth, low power consumption, and short readout time make the DRS4 chip attractive for a positron annihilation lifetime spectroscopy (PALS) setup, replacing traditional ADCs and TDCs. Software for online and offline pulse analysis in the PALS setup was developed with the Qt, Qwt, and ALGLIB libraries.

  11. The ATLAS conditions database architecture for the Muon spectrometer

    NASA Astrophysics Data System (ADS)

    Verducci, Monica; ATLAS Muon Collaboration

    2010-04-01

The Muon System, facing challenging requirements for conditions data storage, has started to use the conditions database project 'COOL' extensively as the basis for all its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates, but also in terms of the variety of data stored. The Muon conditions database is responsible for almost all of the 'non event' data and detector quality flags storage needed for debugging of the detector operations and for performing reconstruction and analysis. The COOL database allows database applications to be written independently of the underlying database technology and ensures long term compatibility with the entire ATLAS software. COOL implements an interval-of-validity database, i.e., objects stored or referenced in COOL have an associated start and end time between which they are valid. The data are stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and mainly optimized to store and retrieve objects associated with a particular time. In this work, an overview of the entire Muon conditions database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the conditions data are described, with emphasis on the offline reconstruction framework ATHENA and the services developed to provide the conditions data to the reconstruction.
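
    The interval-of-validity lookup that COOL provides can be illustrated in a few lines of Python; this is a sketch of the concept only, not the COOL API.

        import bisect

        class IOVFolder:
            """Each payload is valid from its 'since' time until the next one."""
            def __init__(self):
                self._since, self._payloads = [], []

            def store(self, since, payload):
                i = bisect.bisect(self._since, since)
                self._since.insert(i, since)
                self._payloads.insert(i, payload)

            def retrieve(self, t):
                i = bisect.bisect_right(self._since, t) - 1
                return self._payloads[i] if i >= 0 else None

        folder = IOVFolder()
        folder.store(0, "alignment-v1")
        folder.store(1000, "alignment-v2")
        print(folder.retrieve(1500))  # alignment-v2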

  12. Fostering Multirepresentational Levels of Chemical Concepts: A Framework to Develop Educational Software

    ERIC Educational Resources Information Center

    Marson, Guilherme A.; Torres, Bayardo B.

    2011-01-01

    This work presents a convenient framework for developing interactive chemical education software to facilitate the integration of macroscopic, microscopic, and symbolic dimensions of chemical concepts--specifically, via the development of software for gel permeation chromatography. The instructional role of the software was evaluated in a study…

  13. Software Engineering Frameworks: Textbooks vs. Student Perceptions

    ERIC Educational Resources Information Center

    McMaster, Kirby; Hadfield, Steven; Wolthuis, Stuart; Sambasivam, Samuel

    2012-01-01

    This research examines the frameworks used by Computer Science and Information Systems students at the conclusion of their first semester of study of Software Engineering. A questionnaire listing 64 Software Engineering concepts was given to students upon completion of their first Software Engineering course. This survey was given to samples of…

  14. Software framework for the upcoming MMT Observatory primary mirror re-aluminization

    NASA Astrophysics Data System (ADS)

    Gibson, J. Duane; Clark, Dusty; Porter, Dallan

    2014-07-01

Details of the software framework for the upcoming in-situ re-aluminization of the 6.5m MMT Observatory (MMTO) primary mirror are presented. This framework includes: 1) a centralized key-value store and data structure server for data exchange between software modules, 2) a newly developed hardware-software interface for faster data sampling and better hardware control, 3) automated control algorithms that are based upon empirical testing, modeling, and simulation of the aluminization process, 4) re-engineered graphical user interfaces (GUIs) that use state-of-the-art web technologies, and 5) redundant relational databases for data logging. Redesign of the software framework has several objectives: 1) automated process control to provide more consistent and uniform mirror coatings, 2) optional manual control of the aluminization process, 3) modular design to allow flexibility in process control and software implementation, 4) faster data sampling and logging rates to better characterize the approximately 100-second aluminization event, and 5) synchronized "real-time" web application GUIs to provide all users with exactly the same data. The framework has been implemented as four modules interconnected by a data store/server. The four modules are integrated into two Linux system services that start automatically at boot-time and remain running at all times. Performance of the software framework is assessed through extensive testing within 2.0 meter and smaller coating chambers at the Sunnyside Test Facility. The redesigned software framework helps ensure that a better performing and longer lasting coating will be achieved during the re-aluminization of the MMTO primary mirror.
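
    If the centralized key-value store and data structure server is Redis-like, module data exchange might look like the hypothetical sketch below; the key and channel names are invented for illustration and are not the MMTO's actual schema.

        import json, time
        import redis  # assumes a Redis-style store, as the description suggests

        r = redis.Redis(host="localhost", port=6379)

        def log_sample(name, value):
            """Store the latest sample and publish it so every GUI sees the same data."""
            sample = {"t": time.time(), "name": name, "value": value}
            r.set("alum:" + name, json.dumps(sample))
            r.publish("alum:updates", json.dumps(sample))

        log_sample("chamber_pressure_torr", 2.1e-6)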

  15. Framework Support For Knowledge-Based Software Development

    NASA Astrophysics Data System (ADS)

    Huseth, Steve

    1988-03-01

    The advent of personal engineering workstations has brought substantial information processing power to the individual programmer. Advanced tools and environment capabilities supporting the software lifecycle are just beginning to become generally available. However, many of these tools are addressing only part of the software development problem by focusing on rapid construction of self-contained programs by a small group of talented engineers. Additional capabilities are required to support the development of large programming systems where a high degree of coordination and communication is required among large numbers of software engineers, hardware engineers, and managers. A major player in realizing these capabilities is the framework supporting the software development environment. In this paper we discuss our research toward a Knowledge-Based Software Assistant (KBSA) framework. We propose the development of an advanced framework containing a distributed knowledge base that can support the data representation needs of tools, provide environmental support for the formalization and control of the software development process, and offer a highly interactive and consistent user interface.

  16. A general observatory control software framework design for existing small and mid-size telescopes

    NASA Astrophysics Data System (ADS)

    Ge, Liang; Lu, Xiao-Meng; Jiang, Xiao-Jun

    2015-07-01

    A general framework for observatory control software would help to improve the efficiency of observation and operation of telescopes, and would also be advantageous for remote and joint observations. We describe a general framework for observatory control software, which considers principles of flexibility and inheritance to meet the expectations from observers and technical personnel. This framework includes observation scheduling, device control and data storage. The design is based on a finite state machine that controls the whole process.
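
    The finite-state-machine core of such a design fits in a few lines; the states and events below are illustrative, not the authors' actual control table.

        # Toy observation FSM: (state, event) -> next state.
        TRANSITIONS = {
            ("IDLE", "start"): "SLEWING",
            ("SLEWING", "on_target"): "TRACKING",
            ("TRACKING", "expose"): "EXPOSING",
            ("EXPOSING", "readout_done"): "TRACKING",
            ("TRACKING", "stop"): "IDLE",
        }

        class ObservatoryFSM:
            def __init__(self):
                self.state = "IDLE"

            def handle(self, event):
                # unknown events leave the state unchanged
                self.state = TRANSITIONS.get((self.state, event), self.state)
                return self.state

        fsm = ObservatoryFSM()
        for ev in ("start", "on_target", "expose", "readout_done", "stop"):
            print(ev, "->", fsm.handle(ev))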

  17. THE EPA MULTIMEDIA INTEGRATED MODELING SYSTEM SOFTWARE SUITE

    EPA Science Inventory

    The U.S. EPA is developing a Multimedia Integrated Modeling System (MIMS) framework that will provide a software infrastructure or environment to support constructing, composing, executing, and evaluating complex modeling studies. The framework will include (1) common software ...

  18. HCI∧2 framework: a software framework for multimodal human-computer interaction systems.

    PubMed

    Shen, Jie; Pantic, Maja

    2013-12-01

    This paper presents a novel software framework for the development and research in the area of multimodal human-computer interface (MHCI) systems. The proposed software framework, which is called the HCI∧2 Framework, is built upon publish/subscribe (P/S) architecture. It implements a shared-memory-based data transport protocol for message delivery and a TCP-based system management protocol. The latter ensures that the integrity of system structure is maintained at runtime. With the inclusion of bridging modules, the HCI∧2 Framework is interoperable with other software frameworks including Psyclone and ActiveMQ. In addition to the core communication middleware, we also present the integrated development environment (IDE) of the HCI∧2 Framework. It provides a complete graphical environment to support every step in a typical MHCI system development process, including module development, debugging, packaging, and management, as well as the whole system management and testing. The quantitative evaluation indicates that our framework outperforms other similar tools in terms of average message latency and maximum data throughput under a typical single PC scenario. To demonstrate HCI∧2 Framework's capabilities in integrating heterogeneous modules, we present several example modules working with a variety of hardware and software. We also present an example of a full system developed using the proposed HCI∧2 Framework, which is called the CamGame system and represents a computer game based on hand-held marker(s) and low-cost camera(s).
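
    The publish/subscribe pattern at the heart of the framework can be reduced to a toy broker; the real HCI∧2 Framework adds shared-memory transport, TCP-based system management, and bridging modules, none of which are shown here.

        from collections import defaultdict

        class Broker:
            """Minimal in-process publish/subscribe illustration."""
            def __init__(self):
                self._subs = defaultdict(list)

            def subscribe(self, topic, callback):
                self._subs[topic].append(callback)

            def publish(self, topic, message):
                for cb in self._subs[topic]:
                    cb(message)

        broker = Broker()
        broker.subscribe("face.expression", lambda m: print("fusion got:", m))
        broker.publish("face.expression", {"label": "smile", "confidence": 0.93})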

  19. General Aviation Data Framework

    NASA Technical Reports Server (NTRS)

    Blount, Elaine M.; Chung, Victoria I.

    2006-01-01

    The Flight Research Services Directorate at the NASA Langley Research Center (LaRC) provides development and operations services associated with three general aviation (GA) aircraft used for research experiments. The GA aircraft includes a Cessna 206X Stationair, a Lancair Colombia 300X, and a Cirrus SR22X. Since 2004, the GA Data Framework software was designed and implemented to gather data from a varying set of hardware and software sources as well as enable transfer of the data to other computers or devices. The key requirements for the GA Data Framework software include platform independence, the ability to reuse the framework for different projects without changing the framework code, graphics display capabilities, and the ability to vary the interfaces and their performance. Data received from the various devices is stored in shared memory. This paper concentrates on the object oriented software design patterns within the General Aviation Data Framework, and how they enable the construction of project specific software without changing the base classes. The issues of platform independence and multi-threading which enable interfaces to run at different frame rates are also discussed in this paper.

  20. Support for Online Calibration in the ALICE HLT Framework

    NASA Astrophysics Data System (ADS)

    Krzewicki, Mikolaj; Rohr, David; Zampolli, Chiara; Wiechula, Jens; Gorbunov, Sergey; Chauvin, Alex; Vorobyev, Ivan; Weber, Steffen; Schweda, Kai; Shahoyan, Ruben; Lindenstruth, Volker; ALICE Collaboration

    2017-10-01

The ALICE detector employs subdetectors sensitive to environmental conditions such as pressure and temperature, e.g., the time projection chamber (TPC). A precise reconstruction of particle trajectories requires precise calibration of these detectors. Performing the calibration in real time in the HLT improves the online reconstruction and potentially renders certain offline calibration steps obsolete, speeding up offline physics analysis. For LHC Run 3, starting in 2020 when data reduction will rely on reconstructed data, online calibration becomes a necessity. In order to run the calibration online, the HLT now supports the processing of tasks that typically run offline. These tasks run massively in parallel on all HLT compute nodes and their output is gathered and merged periodically. The calibration results are both stored offline for later use and fed back into the HLT chain via a feedback loop in order to apply calibration information to the online track reconstruction. Online calibration and the feedback loop are subject to certain time constraints in order to provide up-to-date calibration information, and they must not interfere with ALICE data taking. Our approach of running these tasks in asynchronous processes enables us to separate them from normal data taking in a way that makes it failure-resilient. We performed a first test of online TPC drift time calibration under real conditions during the heavy-ion run in December 2015. We present an analysis and conclusions of this first test, new improvements and developments based on this, as well as our current scheme to commission this for production use.
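
    The asynchronous pattern described above, offline-style calibration tasks running in parallel with a periodic merge, can be caricatured in a few lines; the calibration payload here is a trivial mean, standing in for the real TPC drift-time calibration.

        import multiprocessing as mp

        def calibrate(chunk):
            """Reduce one data chunk to a mergeable partial result."""
            return (sum(chunk), len(chunk))

        if __name__ == "__main__":
            chunks = [[1.0, 1.2], [0.9, 1.1, 1.0], [1.3]]
            with mp.Pool(3) as pool:              # asynchronous worker processes
                partials = pool.map(calibrate, chunks)
            total, n = map(sum, zip(*partials))   # periodic merge step
            drift = total / n                     # value fed back via the loop
            print(drift)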

  1. openECA Platform and Analytics Alpha Test Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Russell

    The objective of the Open and Extensible Control and Analytics (openECA) Platform for Phasor Data project is to develop an open source software platform that significantly accelerates the production, use, and ongoing development of real-time decision support tools, automated control systems, and off-line planning systems that (1) incorporate high-fidelity synchrophasor data and (2) enhance system reliability while enabling the North American Electric Reliability Corporation (NERC) operating functions of reliability coordinator, transmission operator, and/or balancing authority to be executed more effectively.

  2. openECA Platform and Analytics Beta Demonstration Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Russell

    The objective of the Open and Extensible Control and Analytics (openECA) Platform for Phasor Data project is to develop an open source software platform that significantly accelerates the production, use, and ongoing development of real-time decision support tools, automated control systems, and off-line planning systems that (1) incorporate high-fidelity synchrophasor data and (2) enhance system reliability while enabling the North American Electric Reliability Corporation (NERC) operating functions of reliability coordinator, transmission operator, and/or balancing authority to be executed more effectively.

  3. System Software Framework for System of Systems Avionics

    NASA Technical Reports Server (NTRS)

    Ferguson, Roscoe C.; Peterson, Benjamin L; Thompson, Hiram C.

    2005-01-01

    Project Constellation implements NASA's vision for space exploration to expand human presence in our solar system. The engineering focus of this project is developing a system of systems architecture. This architecture allows for the incremental development of the overall program. Systems can be built and connected in a "Lego style" manner to generate configurations supporting various mission objectives. The development of the avionics or control systems of such a massive project will result in concurrent engineering. Also, each system will have software and the need to communicate with other (possibly heterogeneous) systems. Fortunately, this design problem has already been solved during the creation and evolution of systems such as the Internet and the Department of Defense's successful effort to standardize distributed simulation (now IEEE 1516). The solution relies on the use of a standard layered software framework and a communication protocol. A standard framework and communication protocol are suggested for the development and maintenance of Project Constellation systems. The ARINC 653 standard is a strong starting point for such a common software framework. This paper proposes a common system software framework that uses the Real Time Publish/Subscribe protocol for framework-to-framework communication to extend ARINC 653. It is highly recommended that such a framework be established before development, as this is important for the success of concurrent engineering. The framework provides an infrastructure for general system services and is designed for flexibility to support a spiral development effort.
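
    The publish/subscribe style of framework-to-framework communication recommended above can be illustrated with a minimal sketch; this shows only the pattern, not the Real Time Publish/Subscribe protocol or ARINC 653 APIs, and all names are hypothetical.

      from collections import defaultdict

      class Bus:
          """Topic-based bus decoupling publishers from subscribers."""
          def __init__(self):
              self._subscribers = defaultdict(list)

          def subscribe(self, topic, callback):
              self._subscribers[topic].append(callback)

          def publish(self, topic, message):
              for callback in self._subscribers[topic]:
                  callback(message)

      bus = Bus()
      bus.subscribe("nav.state", lambda m: print("GN&C received:", m))
      bus.publish("nav.state", {"attitude": (0.0, 0.1, 0.0)})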

  4. Experimentation in software engineering

    NASA Technical Reports Server (NTRS)

    Basili, V. R.; Selby, R. W.; Hutchens, D. H.

    1986-01-01

    Experimentation in software engineering supports the advancement of the field through an iterative learning process. In this paper, a framework for analyzing most of the experimental work performed in software engineering over the past several years is presented. A variety of experiments within the framework are described, and their contributions to the software engineering discipline are discussed. Some useful recommendations for the application of the experimental process in software engineering are included.

  5. A Prototype for the Support of Integrated Software Process Development and Improvement

    NASA Astrophysics Data System (ADS)

    Porrawatpreyakorn, Nalinpat; Quirchmayr, Gerald; Chutimaskul, Wichian

    An efficient software development process is one of the key success factors for quality software. Both the appropriate establishment and the continuous improvement of integrated project management and of the software development process lead to this efficiency. This paper hence proposes a software process maintenance framework which consists of two core components: an integrated PMBOK-Scrum model describing how to establish a comprehensive set of project management and software engineering processes, and a software development maturity model advocating software process improvement. In addition, a prototype tool to support the framework is introduced.

  6. A Catchment-Based Land Surface Model for GCMs and the Framework for its Evaluation

    NASA Technical Reports Server (NTRS)

    Ducharne, A.; Koster, R. D.; Suarez, M. J.; Kumar, P.

    1998-01-01

    A new GCM-scale land surface modeling strategy that explicitly accounts for subgrid soil moisture variability and its effects on evaporation and runoff is now being explored. In a break from traditional modeling strategies, the continental surface is disaggregated into a mosaic of hydrological catchments, with boundaries that are not dictated by a regular grid but by topography. Within each catchment, the variability of soil moisture is deduced from TOPMODEL equations with a special treatment of the unsaturated zone. This paper gives an overview of this new approach and presents the general framework for its off-line evaluation over North America.

  7. Framework Based Guidance Navigation and Control Flight Software Development

    NASA Technical Reports Server (NTRS)

    McComas, David

    2007-01-01

    This viewgraph presentation describes NASA's guidance navigation and control flight software development background. The contents include: 1) NASA/Goddard Guidance Navigation and Control (GN&C) Flight Software (FSW) Development Background; 2) GN&C FSW Development Improvement Concepts; and 3) GN&C FSW Application Framework.

  8. Long-term object tracking combined offline with online learning

    NASA Astrophysics Data System (ADS)

    Hu, Mengjie; Wei, Zhenzhong; Zhang, Guangjun

    2016-04-01

    We propose a simple yet effective method for long-term object tracking. Different from traditional visual tracking methods, which mainly depend on frame-to-frame correspondence, we combine high-level semantic information with low-level correspondences. Our approach is formulated in a confidence selection framework, which allows our system to recover from drift and partly deal with occlusion. Our algorithm can be roughly decomposed into an initialization stage and a tracking stage. In the initialization stage, an offline detector is trained to capture the object appearance information at the category level, which is used for detecting the potential target and initializing the tracking stage. The tracking stage consists of three modules: the online tracking module, the detection module, and the decision module. The pretrained detector is used to correct drift of the online tracker, while the online tracker is used to filter out false positive detections. A confidence selection mechanism is proposed to optimize the object location based on the online tracker and the detector. If the target is lost, the pretrained detector is utilized to reinitialize the whole algorithm once the target is relocated. In experiments, we evaluate our method on several challenging video sequences, and it demonstrates substantial improvement compared with detection-only and online-tracking-only baselines.

  9. Coupling of metal-organic frameworks-containing monolithic capillary-based selective enrichment with matrix-assisted laser desorption ionization-time-of-flight mass spectrometry for efficient analysis of protein phosphorylation.

    PubMed

    Li, Daojin; Yin, Danyang; Chen, Yang; Liu, Zhen

    2017-05-19

    Protein phosphorylation is a major post-translational modification, which plays a vital role in the cellular signaling of numerous biological processes. Mass spectrometry (MS) has been an essential tool for the analysis of protein phosphorylation, for which it is a key step to selectively enrich phosphopeptides from complex biological samples. In this study, a metal-organic frameworks (MOFs)-based monolithic capillary has been successfully prepared as an effective sorbent for the selective enrichment of phosphopeptides and has been off-line coupled with matrix-assisted laser desorption ionization-time-of-flight mass spectrometry (MALDI-TOF MS) for efficient analysis of phosphopeptides. Using β-casein as a representative phosphoprotein, efficient phosphorylation analysis by this off-line platform was verified. Phosphorylation analysis of a nonfat milk sample was also demonstrated. Through introducing the large surface areas and highly ordered pores of MOFs into a monolithic column, the MOFs-based monolithic capillary exhibited several significant advantages, such as excellent selectivity toward phosphopeptides, superb tolerance to interference, and a simple operation procedure. Because of these highly desirable properties, the MOFs-based monolithic capillary could be a useful tool for protein phosphorylation analysis. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Novel Real-time Alignment and Calibration of the LHCb detector in Run2

    NASA Astrophysics Data System (ADS)

    Martinelli, Maurizio; LHCb Collaboration

    2017-10-01

    LHCb has introduced a novel real-time detector alignment and calibration strategy for LHC Run2. Data collected at the start of the fill are processed in a few minutes and used to update the alignment parameters, while the calibration constants are evaluated for each run. This procedure improves the quality of the online reconstruction. For example, the vertex locator is retracted and reinserted for stable beam conditions in each fill to be centred on the primary vertex position in the transverse plane. Consequently its position changes on a fill-by-fill basis. Critically, this new real-time alignment and calibration procedure allows identical constants to be used in the online and offline reconstruction, thus improving the correlation between triggered and offline-selected events. This offers the opportunity to optimise the event selection in the trigger by applying stronger constraints. The required computing time constraints are met thanks to a new dedicated framework using the multi-core farm infrastructure for the trigger. The motivation for a real-time alignment and calibration of the LHCb detector is discussed from both the operational and physics performance points of view. Specific challenges of this novel configuration are discussed, as well as the working procedures of the framework and its performance.

  11. Element Load Data Processor (ELDAP) Users Manual

    NASA Technical Reports Server (NTRS)

    Ramsey, John K., Jr.; Ramsey, John K., Sr.

    2015-01-01

    Often, the shear and tensile forces and moments are extracted from finite element analyses to be used in off-line calculations for evaluating the integrity of structural connections involving bolts, rivets, and welds. Usually the maximum forces and moments are desired for use in the calculations. In situations where there are numerous structural connections of interest for numerous load cases, the effort in finding the true maximum force and/or moment combinations among all fasteners and welds and load cases becomes difficult. The Element Load Data Processor (ELDAP) software described herein makes this effort manageable. This software eliminates the possibility of overlooking the worst-case forces and moments that could result in erroneous positive margins of safety and/or selecting inconsistent combinations of forces and moments resulting in false negative margins of safety. In addition to forces and moments, any scalar quantity output in a PATRAN report file may be evaluated with this software. This software was originally written to fill an urgent need during the structural analysis of the Ares I-X Interstage segment. As such, this software was coded in a straightforward manner with no effort made to optimize or minimize code or to develop a graphical user interface.
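
    The core bookkeeping task described above (scanning all fasteners, welds and load cases for the true worst case while keeping force/moment combinations consistent) can be sketched as below; this is not the ELDAP code, and the CSV column names are assumptions.

      import csv

      def worst_case(report_csv):
          """Scan a PATRAN-style report and keep the single worst row.

          Selecting whole rows keeps shear/tension combinations consistent,
          which is the pitfall the abstract warns about."""
          worst = None
          with open(report_csv, newline="") as f:
              for row in csv.DictReader(f):
                  resultant = (float(row["shear"]) ** 2
                               + float(row["tension"]) ** 2) ** 0.5
                  if worst is None or resultant > worst[0]:
                      worst = (resultant, row["element_id"], row["load_case"])
          return worst

      # e.g. worst_case("interstage_bolts.csv") -> (resultant, element, case)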

  12. Net-VISA used as a complement to standard software at the CTBTO: initial operational experience with next-generation software.

    NASA Astrophysics Data System (ADS)

    Le Bras, R. J.; Arora, N. S.; Kushida, N.; Kebede, F.; Feitio, P.; Tomuta, E.

    2017-12-01

    The International Monitoring System of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has reached out to the broader scientific community through a series of conferences, the latest of which took place in June 2017 in Vienna, Austria. Stemming from this outreach effort, and following the inception of research and development in 2009, the NET-VISA software, which follows a Bayesian modelling approach, has been developed to improve the key step of automatic association of joint seismic, hydro-acoustic, and infrasound detections. When compared with the current operational system, it has consistently been shown in off-line tests to improve the overlap with the analyst-reviewed Reviewed Event Bulletin (REB) by ten percent, for an average of 85% overlap, while the inconsistency rate remains essentially the same at about 50%. Testing by analysts in realistic conditions on a few days of data has also demonstrated the software's ability to find additional events which qualify for publication in the REB. Starting in August 2017, the automatic events produced by the software will be reviewed by analysts at the CTBTO, and we report on the initial evaluation of this introduction into operations.

  13. Software Engineering Support of the Third Round of Scientific Grand Challenge Investigations: An Earth Modeling System Software Framework Strawman Design that Integrates Cactus and UCLA/UCB Distributed Data Broker

    NASA Technical Reports Server (NTRS)

    Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn

    2002-01-01

    One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document proposes a strawman framework design for the climate community based on the integration of Cactus, from the relativistic physics community, and the UCLA/UCB Distributed Data Broker (DDB) from the climate community. This design is the result of an extensive survey of climate models and frameworks in the climate community as well as frameworks from many other scientific communities. The design addresses fundamental development and runtime needs using Cactus, a framework with interfaces for FORTRAN and C-based languages, and high-performance model communication needs using DDB. This document also specifically explores object-oriented design issues in the context of climate modeling as well as climate modeling issues in terms of object-oriented design.

  14. A Validation Framework for the Long Term Preservation of High Energy Physics Data

    NASA Astrophysics Data System (ADS)

    Ozerov, Dmitri; South, David M.

    2014-06-01

    The study group on data preservation in high energy physics, DPHEP, is moving to a new collaboration structure, which will focus on the implementation of preservation projects, such as those described in the group's large-scale report published in 2012. One such project is the development of a validation framework, which checks the compatibility of evolving computing environments and technologies with the experiments' software for as long as possible, with the aim of substantially extending the lifetime of the analysis software, and hence of the usability of the data. The framework is designed to automatically test and validate the software and data of an experiment against changes and upgrades to the computing environment, as well as changes to the experiment software itself. Technically, this is realised using a framework capable of hosting a number of virtual machine images, built with different configurations of operating systems and the relevant software, including any necessary external dependencies.

  15. Distributed Computing Framework for Synthetic Radar Application

    NASA Technical Reports Server (NTRS)

    Gurrola, Eric M.; Rosen, Paul A.; Aivazis, Michael

    2006-01-01

    We are developing an extensible software framework in response to Air Force and NASA needs for distributed computing facilities for a variety of radar applications. The objective of this work is to develop a Python-based software framework, that is, the framework elements of the middleware that allow developers to control processing flow on a grid in a distributed computing environment. Framework architectures to date allow developers to connect processing functions together as interchangeable objects, thereby allowing a data flow graph to be devised for a specific problem to be solved. The Pyre framework, developed at the California Institute of Technology (Caltech) and now being used as the basis for next-generation radar processing at JPL, is such a Python-based framework. We have extended the Pyre framework to include new facilities to deploy processing components as services, including components that monitor and assess the state of the distributed network for eventual real-time control of grid resources.
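
    The data-flow-graph idea (processing functions connected as interchangeable objects) might be sketched as follows; this mimics the pattern only and is not the actual Pyre API.

      class Component:
          """A processing stage that pushes its output downstream."""
          def __init__(self, func):
              self.func = func
              self.downstream = []

          def connect(self, other):
              self.downstream.append(other)
              return other                      # allows chained wiring

          def push(self, data):
              result = self.func(data)
              for node in self.downstream:
                  node.push(result)

      # raw echoes -> range compression -> azimuth focusing
      ingest = Component(lambda d: d)
      range_comp = Component(lambda d: f"range-compressed({d})")
      focus = Component(lambda d: print("focused:", d))
      ingest.connect(range_comp).connect(focus)
      ingest.push("raw-echoes")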

  16. Data acquisition and processing system for the HT-6M tokamak fusion experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shu, Y.T.; Liu, G.C.; Pang, J.Q.

    1987-08-01

    This paper describes a high-speed data acquisition and processing system which has been successfully operated on the HT-6M tokamak fusion experimental device. The system collects, archives and analyzes up to 512 kilobytes of data from each shot of the experiment. A shot lasts 50-150 milliseconds and occurs every 5-10 minutes. The system consists of two PDP-11/24 computer systems. One PDP-11/24 is used for real-time data taking and on-line data analysis. It is based upon five CAMAC crates organized into a parallel branch. Another PDP-11/24 is used for off-line data processing. Both data acquisition software RSX-DAS and data processing software RSX-DAP have modular, multi-tasking and concurrent processing features.

  17. dCache, towards Federated Identities & Anonymized Delegation

    NASA Astrophysics Data System (ADS)

    Ashish, A.; Millar, AP; Mkrtchyan, T.; Fuhrmann, P.; Behrmann, G.; Sahakyan, M.; Adeyemi, O. S.; Starek, J.; Litvintsev, D.; Rossi, A.

    2017-10-01

    For over a decade, dCache has relied on the authentication and authorization infrastructure (AAI) offered by VOMS, Kerberos, Xrootd, etc. Although the established infrastructure has worked well and provided sufficient security, the implementation of procedures and the underlying software are often seen as a burden, especially by smaller communities trying to adopt existing HEP software stacks [1]. Moreover, scientists are increasingly dependent on service portals for data access [2]. In this paper, we describe how federated identity management systems can facilitate the transition from traditional AAI infrastructure to novel solutions like OpenID Connect. We investigate the advantages offered by OpenID Connect with regard to ‘delegation of authentication’ and ‘credential delegation for offline access’. Additionally, we demonstrate how macaroons can provide a finer-grained authorization mechanism that supports anonymized delegation.
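
    As an illustration of the macaroon mechanism mentioned above, a small example using the pymacaroons library is sketched below: a service mints a token and attenuates it with caveats so that a delegated copy authorizes only a narrow scope, without carrying an identity. The location, identifier and caveat strings are invented for the example and are not dCache's.

      from pymacaroons import Macaroon, Verifier

      secret = "server-side signing key"
      m = Macaroon(location="dcache.example.org",
                   identifier="token-42", key=secret)
      m.add_first_party_caveat("activity = DOWNLOAD")
      m.add_first_party_caveat("path = /pnfs/experiment/run7")

      v = Verifier()
      v.satisfy_exact("activity = DOWNLOAD")
      v.satisfy_exact("path = /pnfs/experiment/run7")
      assert v.verify(m, secret)   # holder is authorized, yet stays anonymous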

  18. Tracking at High Level Trigger in CMS

    NASA Astrophysics Data System (ADS)

    Tosi, M.

    2016-04-01

    The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of the experiments. A reduction of several orders of magnitude in the event rate is needed to reach values compatible with detector readout, offline storage and analysis capability. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking, the maximum reconstruction time at the HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Track reconstruction algorithms are widely used in the HLT, both for the reconstruction of physics objects and in the identification of b-jets and lepton isolation. Reconstructed tracks are also used to distinguish the primary vertex, which identifies the hard interaction process, from pileup vertices. This task is particularly important in the LHC environment given the large number of interactions per bunch crossing: on average 25 in 2012, and expected to be around 40 in Run II. We present the performance of the HLT tracking algorithms, discussing their impact on the CMS physics program, as well as new developments towards the next data taking in 2015.

  19. New prospective 4D-CT for mitigating the effects of irregular respiratory motion

    NASA Astrophysics Data System (ADS)

    Pan, Tinsu; Martin, Rachael M.; Luo, Dershan

    2017-08-01

    Artifact caused by irregular respiration is a major source of error in 4D-CT imaging. We propose a new prospective 4D-CT to mitigate this source of error without new hardware, software or off-line data-processing on the GE CT scanner. We utilize the cine CT scan in the design of the new prospective 4D-CT. The cine CT scan at each position can be stopped by the operator when an irregular respiration occurs, and resumed when the respiration becomes regular. This process can be repeated at one or multiple scan positions. After the scan, a retrospective reconstruction is initiated on the CT console to reconstruct only the images corresponding to the regular respiratory cycles. The end result is a 4D-CT free of irregular respiration. To prove feasibility, we conducted a phantom study and six patient studies. The artifacts associated with the irregular respiratory cycles could be removed from both the phantom and patient studies. A new prospective 4D-CT scanning and processing technique to mitigate the impact of irregular respiration in 4D-CT has been demonstrated. This technique can save radiation dose because the repeat scans occur only at the scan positions where an irregular respiration occurs, whereas current practice is to repeat the scans at all positions. There is no cost to apply this technique because it is applicable on the GE CT scanner without new hardware, software or off-line data-processing.

  20. A software framework for real-time multi-modal detection of microsleeps.

    PubMed

    Knopp, Simon J; Bones, Philip J; Weddell, Stephen J; Jones, Richard D

    2017-09-01

    A software framework is described which was designed to process EEG, video of one eye, and head movement in real time, towards achieving early detection of microsleeps for prevention of fatal accidents, particularly in transport sectors. The framework is based around a pipeline structure with user-replaceable signal processing modules. This structure can encapsulate a wide variety of feature extraction and classification techniques and can be applied to detecting a variety of aspects of cognitive state. Users of the framework can implement signal processing plugins in C++ or Python. The framework also provides a graphical user interface and the ability to save and load data to and from arbitrary file formats. Two small studies are reported which demonstrate the capabilities of the framework in typical applications: monitoring eye closure and detecting simulated microsleeps. While specifically designed for microsleep detection/prediction, the software framework can be just as appropriately applied to (i) other measures of cognitive state and (ii) development of biomedical instruments for multi-modal real-time physiological monitoring and event detection in intensive care, anaesthesiology, cardiology, neurosurgery, etc. The software framework has been made freely available for researchers to use and modify under an open source licence.
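
    The pipeline-of-replaceable-modules structure might look roughly like the following sketch; the real framework is C++ with C++/Python plugins, so this Python version and its module names are illustrative assumptions only.

      class Module:
          """Base class for a user-replaceable pipeline stage."""
          def process(self, sample):
              raise NotImplementedError

      class BandpassFilter(Module):
          def process(self, sample):
              # stand-in for a real EEG filter
              return {**sample, "eeg": [x * 0.5 for x in sample["eeg"]]}

      class ThresholdClassifier(Module):
          def process(self, sample):
              return {**sample, "microsleep": sum(sample["eeg"]) < 0.01}

      def run_pipeline(modules, sample):
          for module in modules:           # stages run in a fixed order
              sample = module.process(sample)
          return sample

      print(run_pipeline([BandpassFilter(), ThresholdClassifier()],
                         {"eeg": [0.05, -0.02, 0.01]}))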

  1. Implementation of a multi-threaded framework for large-scale scientific applications

    DOE PAGES

    Sexton-Kennedy, E.; Gartung, Patrick; Jones, C. D.; ...

    2015-05-22

    The CMS experiment has recently completed the development of a multi-threaded capable application framework. In this paper, we discuss the design, implementation and application of this framework to production applications in CMS. For the 2015 LHC run, this functionality is particularly critical for both our online and offline production applications, which depend on faster turn-around times and a reduced memory footprint relative to before. These applications are complex codes, each including a large number of physics-driven algorithms. While the framework is capable of running a mix of thread-safe and 'legacy' modules, algorithms running in our production applications need to be thread-safe for optimal use of this multi-threaded framework at a large scale. Towards this end, we discuss the types of changes which were necessary for our algorithms to achieve good performance of our multi-threaded applications in a full-scale application. Lastly, performance numbers for what has been achieved for the 2015 run are presented.
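
    To illustrate what "thread-safe module" means in such a framework, here is a toy sketch: events are processed concurrently, so a module must keep all state local to the call. The names are hypothetical; the CMS framework itself is C++-based.

      from concurrent.futures import ThreadPoolExecutor

      def thread_safe_module(event):
          """All state is local to the call, so concurrent event
          processing cannot corrupt it."""
          return sum(hit["energy"] for hit in event["hits"])

      events = [{"hits": [{"energy": e} for e in range(n)]}
                for n in range(1, 5)]
      with ThreadPoolExecutor(max_workers=4) as pool:
          print(list(pool.map(thread_safe_module, events)))   # [0, 1, 3, 6]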

  2. The Five 'R's' for Developing Trusted Software Frameworks to increase confidence in, and maximise reuse of, Open Source Software.

    NASA Astrophysics Data System (ADS)

    Fraser, Ryan; Gross, Lutz; Wyborn, Lesley; Evans, Ben; Klump, Jens

    2015-04-01

    Recent investments in HPC, cloud and Petascale data stores have dramatically increased the scale and resolution at which earth science challenges can now be tackled. These new infrastructures are highly parallelised, and to fully utilise them and access the large volumes of earth science data now available, a new approach to software stack engineering needs to be developed. The size, complexity and cost of the new infrastructures mean any software deployed has to be reliable, trusted and reusable. Increasingly software is available via open source repositories, but these usually only enable code to be discovered and downloaded. It is hard for a scientist to judge the suitability and quality of individual codes: rarely is there information on how and where codes can be run, what the critical dependencies are, and in particular, on the version requirements and licensing of the underlying software stack. A trusted software framework is proposed to enable reliable software to be discovered, accessed and then deployed on multiple hardware environments. More specifically, this framework will enable those who generate the software, and those who fund the development of software, to gain credit for the effort, IP, time and dollars spent, and facilitate quantification of the impact of individual codes. For scientific users, the framework delivers reviewed and benchmarked scientific software with mechanisms to reproduce results. The trusted framework will have five separate, but connected components: Register, Review, Reference, Run, and Repeat. 1) The Register component will facilitate discovery of relevant software from multiple open source code repositories. The registration process should include information about licensing and the hardware environments the code can be run on, define appropriate validation (testing) procedures, and list the critical dependencies. 2) The Review component targets the verification of the software, typically against a set of benchmark cases. This will be achieved by linking the code in the software framework to peer review forums such as Mozilla Science or appropriate journals (e.g. Geoscientific Model Development) to help users know which codes to trust. 3) Referencing will be accomplished by linking the software framework to groups such as Figshare or ImpactStory that help disseminate and measure the impact of scientific research, including program code. 4) The Run component will draw on information supplied in the registration process, benchmark cases described in the review, and other relevant information to instantiate the scientific code in the selected environment. 5) The Repeat component will tap into existing provenance workflow engines that automatically capture information that relates to a particular run of that software, including identification of all input and output artefacts, and all elements and transactions within that workflow. The proposed trusted software framework will enable users to rapidly discover and access reliable code, reduce the time to deploy it, and greatly facilitate sharing, reuse and reinstallation of code. Properly designed, it could scale out to massively parallel systems and be accessed nationally and internationally for multiple use cases, including supercomputer centres, cloud facilities, and local computers.
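
    As a sketch of the metadata the Register step would capture (license, platforms, dependencies, validation procedure), consider the following hypothetical record; the field set and the example values are assumptions, not a published schema.

      from dataclasses import dataclass, field

      @dataclass
      class SoftwareRecord:
          """Metadata the Register step would capture for one code."""
          name: str
          repository: str
          license: str
          platforms: list = field(default_factory=list)     # where it can run
          dependencies: dict = field(default_factory=dict)  # name -> version
          validation_suite: str = ""   # entry point for the Review benchmarks

      entry = SoftwareRecord(
          name="mycode", repository="https://github.com/example/mycode",
          license="Apache-2.0", platforms=["linux-x86_64", "cray-xc40"],
          dependencies={"python": ">=3.8", "mpi": "openmpi>=4"},
          validation_suite="benchmarks/run_all.py")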

  3. BioContainers: an open-source and community-driven framework for software standardization.

    PubMed

    da Veiga Leprevost, Felipe; Grüning, Björn A; Alves Aflitos, Saulo; Röst, Hannes L; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I; Perez-Riverol, Yasset

    2017-08-15

    BioContainers (biocontainers.pro) is an open-source and community-driven framework which provides platform-independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software and combine tools into powerful analysis pipelines. BioContainers is based on the popular open-source projects Docker and rkt, which allow software to be installed and executed in an isolated and controlled environment. It also provides infrastructure and basic guidelines to create, manage and distribute bioinformatics containers with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). The software is freely available at github.com/BioContainers/. yperez@ebi.ac.uk. © The Author(s) 2017. Published by Oxford University Press.

  4. BioContainers: an open-source and community-driven framework for software standardization

    PubMed Central

    da Veiga Leprevost, Felipe; Grüning, Björn A.; Alves Aflitos, Saulo; Röst, Hannes L.; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C.; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I.; Perez-Riverol, Yasset

    2017-01-01

    Abstract Motivation BioContainers (biocontainers.pro) is an open-source and community-driven framework which provides platform-independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software and combine tools into powerful analysis pipelines. BioContainers is based on the popular open-source projects Docker and rkt, which allow software to be installed and executed in an isolated and controlled environment. It also provides infrastructure and basic guidelines to create, manage and distribute bioinformatics containers with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). Availability and Implementation The software is freely available at github.com/BioContainers/. Contact yperez@ebi.ac.uk PMID:28379341

  5. Surgical model-view-controller simulation software framework for local and collaborative applications

    PubMed Central

    Sankaranarayanan, Ganesh; Halic, Tansel; Arikatla, Venkata Sreekanth; Lu, Zhonghua; De, Suvranu

    2010-01-01

    Purpose Surgical simulations require haptic interactions and collaboration in a shared virtual environment. A software framework for decoupled surgical simulation based on a multi-controller and multi-viewer model-view-controller (MVC) pattern was developed and tested. Methods A software framework for multimodal virtual environments was designed, supporting both visual interactions and haptic feedback while providing developers with an integration tool for heterogeneous architectures maintaining high performance, simplicity of implementation, and straightforward extension. The framework uses decoupled simulation with updates of over 1,000 Hz for haptics and accommodates networked simulation with delays of over 1,000 ms without performance penalty. Results The simulation software framework was implemented and was used to support the design of virtual reality-based surgery simulation systems. The framework supports the high level of complexity of such applications and the fast response required for interaction with haptics. The efficacy of the framework was tested by implementation of a minimally invasive surgery simulator. Conclusion A decoupled simulation approach can be implemented as a framework to handle simultaneous processes of the system at the various frame rates each process requires. The framework was successfully used to develop collaborative virtual environments (VEs) involving geographically distributed users connected through a network, with the results comparable to VEs for local users. PMID:20714933

  6. Surgical model-view-controller simulation software framework for local and collaborative applications.

    PubMed

    Maciel, Anderson; Sankaranarayanan, Ganesh; Halic, Tansel; Arikatla, Venkata Sreekanth; Lu, Zhonghua; De, Suvranu

    2011-07-01

    Surgical simulations require haptic interactions and collaboration in a shared virtual environment. A software framework for decoupled surgical simulation based on a multi-controller and multi-viewer model-view-controller (MVC) pattern was developed and tested. A software framework for multimodal virtual environments was designed, supporting both visual interactions and haptic feedback while providing developers with an integration tool for heterogeneous architectures maintaining high performance, simplicity of implementation, and straightforward extension. The framework uses decoupled simulation with updates of over 1,000 Hz for haptics and accommodates networked simulation with delays of over 1,000 ms without performance penalty. The simulation software framework was implemented and was used to support the design of virtual reality-based surgery simulation systems. The framework supports the high level of complexity of such applications and the fast response required for interaction with haptics. The efficacy of the framework was tested by implementation of a minimally invasive surgery simulator. A decoupled simulation approach can be implemented as a framework to handle simultaneous processes of the system at the various frame rates each process requires. The framework was successfully used to develop collaborative virtual environments (VEs) involving geographically distributed users connected through a network, with the results comparable to VEs for local users.

  7. Exploiting Early Intent Recognition for Competitive Advantage

    DTIC Science & Technology

    2009-01-01

    basketball [Bhandari et al., 1997; Jug et al., 2003], and Robocup soccer simulations [Riley and Veloso, 2000; 2002; Kuhlmann et al., 2006] and non...actions (e.g. before, after, around). Jug et al. [2003] used a similar framework for offline basketball game analysis. More recently, Hess et al...and K. Ramanujam. Advanced Scout: Data mining and knowledge discovery in NBA data. Data Mining and Knowledge Discovery, 1(1):121–125, 1997. [Chang

  8. A Theoretical Analysis: Physical Unclonable Functions and The Software Protection Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nithyanand, Rishab; Solis, John H.

    2011-09-01

    Physical Unclonable Functions (PUFs) or Physical One Way Functions (P-OWFs) are physical systems whose responses to input stimuli (i.e., challenges) are easy to measure (within reasonable error bounds) but hard to clone. This property of unclonability is due to the accepted hardness of replicating the multitude of uncontrollable manufacturing characteristics and makes PUFs useful in solving problems such as device authentication, software protection, licensing, and certified execution. In this paper, we focus on the effectiveness of PUFs for software protection and show that traditional non-computational (black-box) PUFs cannot solve the problem against real world adversaries in offline settings. Our contributions are the following: We provide two real world adversary models (weak and strong variants) and present definitions for security against the adversaries. We continue by proposing schemes secure against the weak adversary and show that no scheme is secure against a strong adversary without the use of trusted hardware. Finally, we present a protection scheme secure against strong adversaries based on trusted hardware.

  9. HPC Software Stack Testing Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garvey, Cormac

    The HPC Software Stack Testing Framework (hpcswtest) is used in the INL Scientific Computing Department to test the basic sanity and integrity of the HPC software stack (compilers, MPI, numerical libraries and applications) and to quickly discover hard failures; as a by-product, it indirectly checks the HPC infrastructure (network, PBS and licensing servers).
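
    A toy harness in the spirit of such a stack test might look like the following; the probe commands are examples only and not hpcswtest's actual checks.

      import subprocess

      def probe(name, cmd):
          """Run one cheap sanity check and report pass/fail."""
          try:
              subprocess.run(cmd, check=True, capture_output=True, timeout=60)
              return name, "ok"
          except Exception as exc:
              return name, f"FAIL ({exc})"

      checks = [("compiler", ["gcc", "--version"]),
                ("mpi", ["mpirun", "--version"])]
      for name, status in (probe(n, c) for n, c in checks):
          print(f"{name}: {status}")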

  10. Visual Object Recognition and Tracking of Tools

    NASA Technical Reports Server (NTRS)

    English, James; Chang, Chu-Yin; Tardella, Neil

    2011-01-01

    A method has been created to automatically build an algorithm off-line, using computer-aided design (CAD) models, and to apply this at runtime. The object type is discriminated, and the position and orientation are identified. This system can work with a single image and can provide improved performance using multiple images provided from videos. The spatial processing unit uses three stages: (1) segmentation; (2) initial type, pose, and geometry (ITPG) estimation; and (3) refined type, pose, and geometry (RTPG) calculation. The image segmentation module finds all the tools in an image and isolates them from the background. For this, the system uses edge-detection and thresholding to find the pixels that are part of a tool. After the pixels are identified, nearby pixels are grouped into blobs. These blobs represent the potential tools in the image and are the product of the segmentation algorithm. The second module uses matched filtering (or template matching). This approach is used for condensing synthetic images using an image subspace that captures key information. Three degrees of orientation, three degrees of position, and any number of degrees of freedom in geometry change are included. To do this, a template-matching framework is applied. This framework uses an off-line system for calculating template images, measurement images, and the measurements of the template images. These results are used online to match segmented tools against the templates. The final module is the RTPG processor. Its role is to find the exact states of the tools given initial conditions provided by the ITPG module. The requirement that the initial conditions exist allows this module to make use of a local search (whereas the ITPG module had global scope). To perform the local search, 3D model matching is used, where a synthetic image of the object is created and compared to the sensed data. The availability of low-cost PC graphics hardware allows rapid creation of synthetic images. In this approach, a function of orientation, distance, and articulation is defined as a metric on the difference between the captured image and a synthetic image with an object in the given orientation, distance, and articulation. The synthetic image is created using a model that is looked up in an object-model database. A composable software architecture is used for implementation. Video is first preprocessed to remove sensor anomalies (like dead pixels), and then is processed sequentially by a prioritized list of tracker-identifiers.
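
    The matched-filtering stage can be illustrated with a small sketch that scores a segmented blob against a bank of off-line rendered templates via normalized cross-correlation; this is purely illustrative of the technique, not the flight code.

      import numpy as np

      def best_template(blob, templates):
          """Return (index, score) of the template most correlated
          with the segmented blob (normalized cross-correlation)."""
          b = (blob - blob.mean()) / (blob.std() + 1e-9)
          scores = []
          for t in templates:
              tt = (t - t.mean()) / (t.std() + 1e-9)
              scores.append(float((b * tt).mean()))
          i = int(np.argmax(scores))
          return i, scores[i]

      templates = [np.random.rand(32, 32) for _ in range(3)]
      blob = templates[1] + 0.05 * np.random.rand(32, 32)  # noisy view of #1
      print(best_template(blob, templates))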

  11. Enhanced Aircraft Platform Availability Through Advanced Maintenance Concepts and Technologies (Amelioration de la Disponibilite des Plateformes D’Aeronefs par L’Utilisation des Technologies et des Concepts Evolues de Maintenance)

    DTIC Science & Technology

    2011-06-01

    DeLong, W., Yepez, S., Reedy, D. and White, S., “Use of Composite Materials, Health Monitoring and Self Healing Concepts to Refurbish Our Civil and Military Infrastructure”, Sandia National Laboratories Report SAND2007-5547...failure without the need for the system to go off-line. Recovery Blocks and Self-Healing (Software): The backwards

  12. Tesla: An application for real-time data analysis in High Energy Physics

    NASA Astrophysics Data System (ADS)

    Aaij, R.; Amato, S.; Anderlini, L.; Benson, S.; Cattaneo, M.; Clemencic, M.; Couturier, B.; Frank, M.; Gligorov, V. V.; Head, T.; Jones, C.; Komarov, I.; Lupton, O.; Matev, R.; Raven, G.; Sciascia, B.; Skwarnicki, T.; Spradlin, P.; Stahl, S.; Storaci, B.; Vesterinen, M.

    2016-11-01

    Upgrades to the LHCb computing infrastructure in the first long shutdown of the LHC have allowed for high quality decay information to be calculated by the software trigger making a separate offline event reconstruction unnecessary. Furthermore, the storage space of the triggered candidate is an order of magnitude smaller than the entire raw event that would otherwise need to be persisted. Tesla is an application designed to process the information calculated by the trigger, with the resulting output used to directly perform physics measurements.

  13. High Resolution Gamma Ray Spectroscopy at MHz Counting Rates With LaBr3 Scintillators for Fusion Plasma Applications

    NASA Astrophysics Data System (ADS)

    Nocente, M.; Tardocchi, M.; Olariu, A.; Olariu, S.; Pereira, R. C.; Chugunov, I. N.; Fernandes, A.; Gin, D. B.; Grosso, G.; Kiptily, V. G.; Neto, A.; Shevelev, A. E.; Silva, M.; Sousa, J.; Gorini, G.

    2013-04-01

    High resolution γ-ray spectroscopy measurements at MHz counting rates were carried out at nuclear accelerators, combining a LaBr3(Ce) detector with dedicated hardware and software solutions based on digitization and off-line analysis. Spectra were measured at counting rates up to 4 MHz, with little or no degradation of the energy resolution, adopting a pile-up rejection algorithm. The reported results represent a step forward towards the final goal of high resolution γ-ray spectroscopy measurements on a burning plasma device.

  14. Graphics modelling of non-contact thickness measuring robotics work cell

    NASA Technical Reports Server (NTRS)

    Warren, Charles W.

    1990-01-01

    A system was developed for measuring, in real time, the thickness of a sprayable insulation during its application. The system was graphically modelled, off-line, using a state-of-the-art graphics workstation and associated software. The model contained a 3D color model of a workcell comprising a robot and an air-bearing turntable. A communication link was established between the graphics workstation and the robot's controller. Sequences of robot motion generated by the computer simulation were transmitted to the robot for execution.

  15. Psana: Photon Science Analysis software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damiani, D.; Dubrovin, M.; Gaponenko, I.

    Psana (Photon Science Analysis) is a software package used to analyze data produced by the Linac Coherent Light Source X-ray free-electron laser at the SLAC National Accelerator Laboratory. The project began in 2011; it is written primarily in C++ with some Python, and provides user interfaces in both C++ and Python. Most users use the Python interface. The same code can be run in real time while data are being taken as well as offline, executing on many nodes/cores using MPI for parallelization. It is publicly available and installable on the RHEL5/6/7 operating systems.

  16. Flight Software Development for the CHEOPS Instrument with the CORDET Framework

    NASA Astrophysics Data System (ADS)

    Cechticky, V.; Ottensamer, R.; Pasetti, A.

    2015-09-01

    CHEOPS is an ESA S-class mission dedicated to the precise measurement of radii of already known exoplanets using ultra-high precision photometry. The instrument flight software controlling the instrument and handling the science data is developed by the University of Vienna using the CORDET Framework offered by P&P Software GmbH. The CORDET Framework provides a generic software infrastructure for PUS-based applications. This paper describes how the framework is used for the CHEOPS application software to provide a consistent solution for the communication and control services, event handling and FDIR procedures. This approach is innovative in four respects: (a) it is a true third-party reuse; (b) reuse is done at specification, validation and code level; (c) the reusable assets and their qualification data package are entirely open-source; (d) reuse is based on call-backs, with the application developer providing functions which are called by the reusable architecture.
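
    The call-back style of reuse described in point (d) can be sketched as follows: the reusable framework owns the control loop and invokes developer-supplied functions. The names are illustrative, not the CORDET API.

      def framework_loop(handlers, packets):
          """The reusable framework owns the loop and calls back into
          application-supplied handlers, one per PUS-like service."""
          for packet in packets:
              handler = handlers.get(packet["service"], lambda p: None)
              handler(packet)

      def on_housekeeping(packet):
          print("housekeeping report:", packet["data"])

      framework_loop({"housekeeping": on_housekeeping},
                     [{"service": "housekeeping", "data": {"temp": 21.4}}])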

  17. Craniux: A LabVIEW-Based Modular Software Framework for Brain-Machine Interface Research

    PubMed Central

    Degenhart, Alan D.; Kelly, John W.; Ashmore, Robin C.; Collinger, Jennifer L.; Tyler-Kabara, Elizabeth C.; Weber, Douglas J.; Wang, Wei

    2011-01-01

    This paper presents “Craniux,” an open-access, open-source software framework for brain-machine interface (BMI) research. Developed in LabVIEW, a high-level graphical programming environment, Craniux offers both out-of-the-box functionality and a modular BMI software framework that is easily extendable. Specifically, it allows researchers to take advantage of multiple features inherent to the LabVIEW environment for on-the-fly data visualization, parallel processing, multithreading, and data saving. This paper introduces the basic features and system architecture of Craniux and describes the validation of the system under real-time BMI operation using simulated and real electrocorticographic (ECoG) signals. Our results indicate that Craniux is able to operate consistently in real time, enabling a seamless work flow to achieve brain control of cursor movement. The Craniux software framework is made available to the scientific research community to provide a LabVIEW-based BMI software platform for future BMI research and development. PMID:21687575

  18. Craniux: a LabVIEW-based modular software framework for brain-machine interface research.

    PubMed

    Degenhart, Alan D; Kelly, John W; Ashmore, Robin C; Collinger, Jennifer L; Tyler-Kabara, Elizabeth C; Weber, Douglas J; Wang, Wei

    2011-01-01

    This paper presents "Craniux," an open-access, open-source software framework for brain-machine interface (BMI) research. Developed in LabVIEW, a high-level graphical programming environment, Craniux offers both out-of-the-box functionality and a modular BMI software framework that is easily extendable. Specifically, it allows researchers to take advantage of multiple features inherent to the LabVIEW environment for on-the-fly data visualization, parallel processing, multithreading, and data saving. This paper introduces the basic features and system architecture of Craniux and describes the validation of the system under real-time BMI operation using simulated and real electrocorticographic (ECoG) signals. Our results indicate that Craniux is able to operate consistently in real time, enabling a seamless work flow to achieve brain control of cursor movement. The Craniux software framework is made available to the scientific research community to provide a LabVIEW-based BMI software platform for future BMI research and development.

  19. Problem Solving Frameworks for Mathematics and Software Development

    ERIC Educational Resources Information Center

    McMaster, Kirby; Sambasivam, Samuel; Blake, Ashley

    2012-01-01

    In this research, we examine how problem solving frameworks differ between Mathematics and Software Development. Our methodology is based on the assumption that the words used frequently in a book indicate the mental framework of the author. We compared word frequencies in a sample of 139 books that discuss problem solving. The books were grouped…

  20. Abstraction and Assume-Guarantee Reasoning for Automated Software Verification

    NASA Technical Reports Server (NTRS)

    Chaki, S.; Clarke, E.; Giannakopoulou, D.; Pasareanu, C. S.

    2004-01-01

    Compositional verification and abstraction are the key techniques to address the state explosion problem associated with model checking of concurrent software. A promising compositional approach is to prove properties of a system by checking properties of its components in an assume-guarantee style. This article proposes a framework for performing abstraction and assume-guarantee reasoning of concurrent C code in an incremental and fully automated fashion. The framework uses predicate abstraction to extract and refine finite state models of software, and it uses an automata learning algorithm to incrementally construct assumptions for the compositional verification of the abstract models. The framework can be instantiated with different assume-guarantee rules. We have implemented our approach in the COMFORT reasoning framework and we show how COMFORT outperforms several previous software model checking approaches when checking safety properties of non-trivial concurrent programs.
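
    The abstract notes that the framework can be instantiated with different assume-guarantee rules; for orientation, the simplest and most widely used non-circular rule can be stated as follows (standard notation, not taken from the article itself).

      % Assume-guarantee triple <A> M <P>: under environment assumption A,
      % component M guarantees property P.
      \[
      \frac{\langle A \rangle\, M_1 \,\langle P \rangle
            \qquad
            \langle \mathit{true} \rangle\, M_2 \,\langle A \rangle}
           {\langle \mathit{true} \rangle\, M_1 \parallel M_2 \,\langle P \rangle}
      \]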

  1. Parallels in Computer-Aided Design Framework and Software Development Environment Efforts.

    DTIC Science & Technology

    1992-05-01

    design kits, and tool and design management frameworks. Also, books about software engineering environments [Long 91] and electronic design...tool integration [Zarrella 90], and agreement upon a universal design automation framework, such as the CAD Framework Initiative (CFI) [Malasky 91...ments: identification, control, status accounting, and audit and review. The paper by Dart extracts 15 CM concepts from existing SDEs and tools

  2. Applying a Framework to Evaluate Assignment Marking Software: A Case Study on Lightwork

    ERIC Educational Resources Information Center

    Heinrich, Eva; Milne, John

    2012-01-01

    This article presents the findings of a qualitative evaluation on the effect of a specialised software tool on the efficiency and quality of assignment marking. The software, Lightwork, combines with the Moodle learning management system and provides support through marking rubrics and marker allocations. To enable the evaluation a framework has…

  3. An Offline-Online Android Application for Hazard Event Mapping Using WebGIS Open Source Technologies

    NASA Astrophysics Data System (ADS)

    Olyazadeh, Roya; Jaboyedoff, Michel; Sudmeier-Rieux, Karen; Derron, Marc-Henri; Devkota, Sanjaya

    2016-04-01

    Nowadays, Free and Open Source Software (FOSS) plays an important role in better understanding and managing disaster risk reduction around the world. National and local governments, NGOs and other stakeholders are increasingly seeking and producing data on hazards. Most hazard event inventories and land use mapping are based on remote sensing data, with little ground truthing, which creates difficulties depending on the terrain and accessibility. Open Source WebGIS tools offer an opportunity for quicker and easier ground truthing of critical areas in order to analyse hazard patterns and triggering factors. This study presents a secure mobile-map application for hazard event mapping using Open Source WebGIS technologies such as the PostgreSQL database, PostGIS, Leaflet, Cordova and PhoneGap. The objectives of this prototype are: 1. an offline-online Android mobile application with advanced geospatial visualisation; 2. easy collection and storage of event information; 3. centralized data storage accessible by all services (smartphone, standard web browser); 4. improved data management through active participation in hazard event mapping and storage. This application has been implemented as a low-cost, rapid and participatory method for recording impacts from hazard events; it includes geolocation (GPS and Internet), visualizing maps with overlays of satellite images, viewing uploaded images and events as cluster points, and drawing and adding event information. The data can be recorded offline (Android device) or online (all browsers) and subsequently uploaded to the server whenever internet is available. All events and records can be visualized by an administrator and made public after approval. Different user levels can be defined to access the data for communicating the information. This application was tested for landslides in post-earthquake Nepal but can be used for any other type of hazard, such as floods or avalanches. Keywords: Offline, Online, WebGIS Open source, Android, Hazard Event Mapping
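
    The offline-first pattern at the heart of the application (record locally, flush to the central server when a connection exists) might be sketched as below; the endpoint URL and event fields are hypothetical.

      import json, os, urllib.request

      QUEUE = "pending_events.json"

      def record_event(event):
          """Append an event to the local offline queue."""
          queue = []
          if os.path.exists(QUEUE):
              with open(QUEUE) as f:
                  queue = json.load(f)
          queue.append(event)
          with open(QUEUE, "w") as f:
              json.dump(queue, f)

      def sync(server="https://example.org/api/events"):  # hypothetical URL
          """Flush queued events to the central server once connected."""
          if not os.path.exists(QUEUE):
              return
          with open(QUEUE) as f:
              for event in json.load(f):
                  req = urllib.request.Request(
                      server, json.dumps(event).encode(),
                      {"Content-Type": "application/json"})
                  urllib.request.urlopen(req)         # raises while offline
          os.remove(QUEUE)

      record_event({"type": "landslide", "lat": 27.7, "lon": 85.3})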

  4. Achieving Agility and Stability in Large-Scale Software Development

    DTIC Science & Technology

    2013-01-16

    temporary team is assigned to prepare layers and frameworks for future feature teams. Presentation Layer / Domain Layer / Data Access Layer / Framework...http://www.sei.cmu.edu/training/elearning

  5. Sequencing the oligosaccharide pool in the low molecular weight heparin dalteparin with offline HPLC and ESI-MS/MS.

    PubMed

    Wang, Zhangjie; Zhang, Tianji; Xie, Shaoshuai; Liu, Xinyue; Li, Hongmei; Linhardt, Robert J; Chi, Lianli

    2018-03-01

    Low molecular weight heparins (LMWHs) are widely used anticoagulant drugs. The composition and sequence of LMWH oligosaccharides determine their safety and efficacy. The short oligosaccharide pool in LMWHs undergoes more depolymerization reactions than the longer chains and is the most sensitive indicator of the manufacturing process. Electrospray ionization tandem mass spectrometry (ESI-MS/MS) has been demonstrated as a powerful tool to sequence synthetic heparin oligosaccharides but has never been applied to the analysis of complicated mixtures like LMWHs. We established an offline strong anion exchange (SAX) high performance liquid chromatography (HPLC) and ESI-MS/MS approach to sequence the short oligosaccharides of dalteparin sodium. With the help of in-house developed MS/MS interpretation software, the sequences of 18 representative species ranging from tetrasaccharide to octasaccharide were obtained. Interestingly, we found a novel 2,3-disulfated hexuronic acid structure and reconfirmed it by complementary heparinase digestion and LC-MS/MS analysis. This approach provides straightforward and in-depth insight into the structure of LMWHs and the reaction mechanism of heparin depolymerization. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Connected: Recommendations and Techniques in Order to Employ Internet Tools for the Enhancement of Online Therapeutic Relationships. Experiences from Italy.

    PubMed

    Manfrida, Gianmarco; Albertini, Valentina; Eisenberg, Erica

    2017-01-01

    The article explores the different types of therapeutic relationship that can evolve both on- and offline thanks to tools, such as software and applications, which enable therapist-patient contact outside of the traditional setting. Given the premise that it is practically impossible today to maintain a relationship without the use of the internet and telephones, it becomes necessary to question the ways in which the online space can become a useful extension of the therapeutic setting. The authors, starting from a consideration of the specificity of the online therapeutic relationship, analyze the best ways to use text and email messaging with patients. Furthermore, specific interactions via group chats are presented, for example, to coordinate a therapeutic team involving several professionals. Video chat settings are then discussed through a clinical case presentation. Lastly, the therapist's management of social networks is discussed, underscoring the importance for therapists that their online identity be consistent with the offline image which patients are introduced to in the traditional setting of the therapy room.

  7. Size characterization of airborne SiO2 nanoparticles with on-line and off-line measurement techniques: an interlaboratory comparison study

    NASA Astrophysics Data System (ADS)

    Motzkus, C.; Macé, T.; Gaie-Levrel, F.; Ducourtieux, S.; Delvallee, A.; Dirscherl, K.; Hodoroaba, V.-D.; Popov, I.; Popov, O.; Kuselman, I.; Takahata, K.; Ehara, K.; Ausset, P.; Maillé, M.; Michielsen, N.; Bondiguel, S.; Gensdarmes, F.; Morawska, L.; Johnson, G. R.; Faghihi, E. M.; Kim, C. S.; Kim, Y. H.; Chu, M. C.; Guardado, J. A.; Salas, A.; Capannelli, G.; Costa, C.; Bostrom, T.; Jämting, Å. K.; Lawn, M. A.; Adlem, L.; Vaslin-Reimann, S.

    2013-10-01

    Results of an interlaboratory comparison on the size characterization of airborne SiO2 nanoparticles using on-line and off-line measurement techniques are discussed. This study was performed in the framework of Technical Working Area (TWA) 34—"Properties of Nanoparticle Populations" of the Versailles Project on Advanced Materials and Standards (VAMAS) in project no. 3, "Techniques for characterizing size distribution of airborne nanoparticles". Two types of nano-aerosols, consisting of (1) one population of nanoparticles with a mean diameter between 30.3 and 39.0 nm and (2) two populations of non-agglomerated nanoparticles with mean diameters in the ranges 36.2-46.6 nm and 80.2-89.8 nm, respectively, were generated for characterization measurements. Scanning mobility particle size spectrometers (SMPS) were used for on-line measurements of the size distributions of the produced nano-aerosols. Transmission electron microscopy, scanning electron microscopy, and atomic force microscopy were used as off-line measurement techniques for nanoparticle characterization. Samples were deposited on appropriate supports such as grids, filters, and mica plates by electrostatic precipitation and a filtration technique using SMPS-controlled generation upstream. The results for the main size distribution parameters (mean and mode diameters), obtained from several laboratories, were compared based on metrological approaches including metrological traceability, calibration, and evaluation of the measurement uncertainty. Internationally harmonized measurement procedures for the characterization of airborne SiO2 nanoparticles are proposed.
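
    The two headline statistics compared in such studies can be computed directly from binned counts. A minimal Python sketch, with purely hypothetical bin values standing in for real SMPS output:

        import numpy as np

        # Hypothetical SMPS output: bin midpoint diameters (nm) and
        # number concentrations per bin (cm^-3).
        diameters = np.array([20.5, 25.9, 32.7, 41.2, 52.0, 65.6, 82.7])
        counts = np.array([120.0, 480.0, 950.0, 700.0, 310.0, 90.0, 15.0])

        # Count-weighted arithmetic mean diameter of the distribution.
        mean_d = np.sum(diameters * counts) / np.sum(counts)

        # Mode diameter: midpoint of the most populated bin.
        mode_d = diameters[np.argmax(counts)]

        print(f"mean = {mean_d:.1f} nm, mode = {mode_d:.1f} nm")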

  8. Off-line programming motion and process commands for robotic welding of Space Shuttle main engines

    NASA Technical Reports Server (NTRS)

    Ruokangas, C. C.; Guthmiller, W. A.; Pierson, B. L.; Sliwinski, K. E.; Lee, J. M. F.

    1987-01-01

    The off-line-programming software and hardware being developed for robotic welding of the Space Shuttle main engine are described and illustrated with diagrams, drawings, graphs, and photographs. The menu-driven workstation-based interactive programming system is designed to permit generation of both motion and process commands for the robotic workcell by weld engineers (with only limited knowledge of programming or CAD systems) on the production floor. Consideration is given to the user interface, geometric-sources interfaces, overall menu structure, weld-parameter data base, and displays of run time and archived data. Ongoing efforts to address limitations related to automatic-downhand-configuration coordinated motion, a lack of source codes for the motion-control software, CAD data incompatibility, interfacing with the robotic workcell, and definition of the welding data base are discussed.

  9. TU-C-BRE-11: 3D EPID-Based in Vivo Dosimetry: A Major Step Forward Towards Optimal Quality and Safety in Radiation Oncology Practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mijnheer, B; Mans, A; Olaciregui-Ruiz, I

    Purpose: To develop a 3D in vivo dosimetry method that is able to substitute pre-treatment verification in an efficient way, and to terminate treatment delivery if the online measured 3D dose distribution deviates too much from the predicted dose distribution. Methods: A back-projection algorithm has been further developed and implemented to enable automatic 3D in vivo dose verification of IMRT/VMAT treatments using a-Si EPIDs. New software tools were clinically introduced to allow automated image acquisition, to periodically inspect the record-and-verify database, and to automatically run the EPID dosimetry software. The comparison of the EPID-reconstructed and planned dose distributions is done offline to automatically raise alerts and to schedule actions when deviations are detected. Furthermore, a software package for online dose reconstruction was also developed. The RMS of the difference between the cumulative planned and reconstructed 3D dose distributions was used for triggering a halt of the linac. Results: The implementation of fully automated 3D EPID-based in vivo dosimetry was able to replace pre-treatment verification for more than 90% of the patient treatments. The process has been fully automated and integrated into our clinical workflow, where over 3,500 IMRT/VMAT treatments are verified each year. By optimizing the dose reconstruction algorithm and the I/O performance, the delivered 3D dose distribution is verified in less than 200 ms per portal image, which includes the comparison between the reconstructed and planned dose distribution. In this way it was possible to generate a trigger that can stop the irradiation at less than 20 cGy after introducing large delivery errors. Conclusion: The automatic offline solution facilitated the large-scale clinical implementation of 3D EPID-based in vivo dose verification of IMRT/VMAT treatments; the online approach has been successfully tested for various severe delivery errors.
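
    The halt criterion described above is essentially a running RMS check between two dose grids. A minimal sketch, with an assumed tolerance value (the abstract does not state the actual threshold used clinically):

        import numpy as np

        def rms_difference(planned, reconstructed):
            """Root-mean-square of the voxel-wise dose difference."""
            diff = reconstructed - planned
            return np.sqrt(np.mean(diff ** 2))

        # Hypothetical cumulative 3D dose grids after the latest portal image.
        planned = np.random.rand(64, 64, 40)
        reconstructed = planned + np.random.normal(0.0, 0.005, planned.shape)

        RMS_THRESHOLD = 0.02  # assumed tolerance, in Gy

        if rms_difference(planned, reconstructed) > RMS_THRESHOLD:
            print("deviation too large: request beam hold")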

  10. A Software Rejuvenation Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chau, Savio

    2009-01-01

    A performability-oriented conceptual framework for software rejuvenation has been constructed as a means of increasing levels of reliability and performance in distributed stateful computing. As used here, performability-oriented signifies that the construction of the framework is guided by the concept of analyzing the ability of a given computing system to deliver services with gracefully degradable performance. The framework is especially intended to support applications that involve stateful replicas of server computers.

  11. A software framework for developing measurement applications under variable requirements.

    PubMed

    Arpaia, Pasquale; Buzio, Marco; Fiscarelli, Lucio; Inglese, Vitaliano

    2012-11-01

    A framework for easily developing software for measurement and test applications under highly variable and fast-changing requirements is proposed. The framework allows software quality, in terms of flexibility, usability, and maintainability, to be maximized. Furthermore, the development effort is reduced and better targeted by relieving the test engineer of development details. The framework can be configured to satisfy a large set of measurement applications in a generic field for an industrial test division, a test laboratory, or a research center. As an experimental case study, the design, implementation, and assessment of the framework in a magnet-testing measurement scenario at the European Organization for Nuclear Research (CERN) are reported.

  12. A Methodological Framework for Enterprise Information System Requirements Derivation

    NASA Astrophysics Data System (ADS)

    Caplinskas, Albertas; Paškevičiūtė, Lina

    Current information systems (IS) are enterprise-wide systems supporting the strategic goals of the enterprise and meeting its operational business needs. They are supported by information and communication technologies (ICT) and other software that should be fully integrated. To develop software responding to real business needs, we need a requirements engineering (RE) methodology that ensures the alignment of requirements across all levels of the enterprise system. The main contribution of this chapter is a requirement-oriented methodological framework that allows business requirements to be transformed, level by level, into software requirements. The structure of the proposed framework reflects the structure of Zachman's framework. However, it has different intentions and is meant to support not design but requirements engineering.

  13. On-line estimation and detection of abnormal substrate concentrations in WWTPs using a software sensor: a benchmark study.

    PubMed

    Benazzi, F; Gernaey, K V; Jeppsson, U; Katebi, R

    2007-08-01

    In this paper, a new approach for on-line monitoring and detection of abnormal readily biodegradable substrate (S(s)) and slowly biodegradable substrate (X(s)) concentrations, for example due to input of toxic loads from the sewer or an influent substrate shock load, is proposed. Considering that measurements of S(s) and X(s) concentrations are not available in real wastewater treatment plants, the S(s)/X(s) software sensor can activate an alarm with a response time of about 60 and 90 minutes, respectively, based on the dissolved oxygen measurement. The software sensor implementation is based on an extended Kalman filter observer, and disturbances are modelled using fast Fourier transform and spectrum analyses. Three case studies are described. The first illustrates the fast and accurate convergence of the extended Kalman filter algorithm, which is achieved in less than 2 hours. Furthermore, the difficulties of estimating X(s) when off-line analysis is not available are depicted, and the performance of the S(s)/X(s) software sensor when no measurements of S(s) and X(s) are available is illustrated. Estimation problems related to the death-regeneration concept of activated sludge model no. 1 and possible applications of the software sensor in wastewater monitoring are discussed.
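
    The core of such a software sensor is the predict/update cycle of the extended Kalman filter. A generic, self-contained sketch (the study's actual activated-sludge model and its Jacobians are not reproduced here):

        import numpy as np

        def ekf_step(x, P, z, f, F, h, H, Q, R):
            """One predict/update cycle of an extended Kalman filter.

            x, P : state estimate and covariance
            z    : new measurement (e.g., dissolved oxygen)
            f, F : state-transition function and its Jacobian
            h, H : measurement function and its Jacobian
            Q, R : process and measurement noise covariances
            """
            x_pred = f(x)                          # predict state
            F_k = F(x)
            P_pred = F_k @ P @ F_k.T + Q           # predict covariance
            H_k = H(x_pred)
            S = H_k @ P_pred @ H_k.T + R           # innovation covariance
            K = P_pred @ H_k.T @ np.linalg.inv(S)  # Kalman gain
            x_new = x_pred + K @ (z - h(x_pred))   # update state
            P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
            return x_new, P_new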

  14. A Model-Driven Co-Design Framework for Fusing Control and Scheduling Viewpoints.

    PubMed

    Sundharam, Sakthivel Manikandan; Navet, Nicolas; Altmeyer, Sebastian; Havet, Lionel

    2018-02-20

    Model-Driven Engineering (MDE) is widely applied in industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS). The design of a software component involves designers from various viewpoints such as control theory, software engineering, safety, etc. In practice, while a designer from one discipline focuses on the core aspects of his field (for instance, a control engineer concentrates on designing a stable controller), he neglects or gives less weight to other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some of the functional and non-functional requirements not to be met satisfactorily. In this work, we present a co-design framework based on a timing tolerance contract to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter-margin analysis along with co-simulation; software design, verified by a novel schedulability analysis; and run-time verification, by monitoring the execution of the models on the target. This framework builds on CPAL (Cyber-Physical Action Language), an MDE design environment based on model interpretation, which enforces timing-realistic behavior in simulation through timing and scheduling annotations. The application of our framework is exemplified in the design of an automotive cruise control system.
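
    To make the schedulability step concrete: the classic Liu and Layland utilization bound is the simplest test of this kind, shown here only as an illustration; the paper proposes its own, more refined analysis:

        def rm_utilization_test(tasks):
            """Sufficient schedulability test for rate-monotonic
            scheduling; tasks is a list of (wcet, period) pairs."""
            n = len(tasks)
            utilization = sum(c / t for c, t in tasks)
            return utilization <= n * (2 ** (1.0 / n) - 1)

        # Example: three periodic control tasks (times in ms).
        print(rm_utilization_test([(2, 10), (4, 20), (6, 50)]))  # -> True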

  15. A Model-Driven Co-Design Framework for Fusing Control and Scheduling Viewpoints

    PubMed Central

    Navet, Nicolas; Havet, Lionel

    2018-01-01

    Model-Driven Engineering (MDE) is widely applied in industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS). The design of a software component involves designers from various viewpoints such as control theory, software engineering, safety, etc. In practice, while a designer from one discipline focuses on the core aspects of his field (for instance, a control engineer concentrates on designing a stable controller), he neglects or gives less weight to other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some of the functional and non-functional requirements not to be met satisfactorily. In this work, we present a co-design framework based on a timing tolerance contract to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter-margin analysis along with co-simulation; software design, verified by a novel schedulability analysis; and run-time verification, by monitoring the execution of the models on the target. This framework builds on CPAL (Cyber-Physical Action Language), an MDE design environment based on model interpretation, which enforces timing-realistic behavior in simulation through timing and scheduling annotations. The application of our framework is exemplified in the design of an automotive cruise control system. PMID:29461489

  16. A Buyer Behaviour Framework for the Development and Design of Software Agents in E-Commerce.

    ERIC Educational Resources Information Center

    Sproule, Susan; Archer, Norm

    2000-01-01

    Software agents are computer programs that run in the background and perform tasks autonomously as delegated by the user. This paper blends models from marketing research and findings from the field of decision support systems to build a framework for the design of software agents that support e-commerce buying applications. (Contains 35…

  17. Paleo Data Assimilation of Pseudo-Tree-Ring-Width Chronologies in a Climate Model

    NASA Astrophysics Data System (ADS)

    Fallah Hassanabadi, B.; Acevedo, W.; Reich, S.; Cubasch, U.

    2016-12-01

    Using the Time-Averaged Ensemble Kalman Filter (EnKF) and a forward model, we assimilate pseudo Tree-Ring-Width (TRW) chronologies into an atmospheric general circulation model. This study investigates several aspects of Paleo-Data Assimilation (PDA) within a perfect-model set-up: (i) we test the performance of several forward operators in the framework of a PDA-based climate reconstruction, (ii) compare the skill of the PDA-based simulations against free ensemble runs, and (iii) investigate the skill of the "online" (with cycling) DA versus the "off-line" (no-cycling) DA. In our experiments, the "online" (with cycling) PDA approach did not outperform the "off-line" (no-cycling) one, despite its considerable additional implementation complexity. On the other hand, it was observed that the error reduction achieved by assimilating a particular pseudo-TRW chronology is modulated by the strength of the yearly internal variability of the model at the chronology site. This result might help the dendrochronology community to optimize their sampling efforts.
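
    A stochastic (perturbed-observation) EnKF update against a single pseudo-proxy value can be written compactly. The sketch below assumes a user-supplied forward operator mapping a model state to TRW space; it is a generic textbook update, not the exact scheme of the study:

        import numpy as np

        def enkf_update(ensemble, y_obs, obs_operator, obs_var):
            """Update an (n_members, n_state) ensemble with one scalar
            pseudo-proxy observation y_obs of error variance obs_var."""
            n = ensemble.shape[0]
            Hx = np.array([obs_operator(m) for m in ensemble])
            x_mean = ensemble.mean(axis=0)
            # Sample covariance between state and predicted observation.
            cov_xy = (ensemble - x_mean).T @ (Hx - Hx.mean()) / (n - 1)
            K = cov_xy / (np.var(Hx, ddof=1) + obs_var)   # Kalman gain
            # Perturbed-observation update, member by member.
            perturbed = y_obs + np.random.normal(0.0, np.sqrt(obs_var), n)
            return ensemble + np.outer(perturbed - Hx, K)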

  18. Online Calibration of the TPC Drift Time in the ALICE High Level Trigger

    NASA Astrophysics Data System (ADS)

    Rohr, David; Krzewicki, Mikolaj; Zampolli, Chiara; Wiechula, Jens; Gorbunov, Sergey; Chauvin, Alex; Vorobyev, Ivan; Weber, Steffen; Schweda, Kai; Lindenstruth, Volker

    2017-06-01

    A Large Ion Collider Experiment (ALICE) is one of the four major experiments at the Large Hadron Collider (LHC) at CERN. The high level trigger (HLT) is a compute cluster which reconstructs collisions as recorded by the ALICE detector in real time. It employs a custom online data-transport framework to distribute data and workload among the compute nodes. ALICE employs subdetectors that are sensitive to environmental conditions such as pressure and temperature, e.g., the time projection chamber (TPC). A precise reconstruction of particle trajectories requires calibration of these detectors. Performing calibration in real time in the HLT improves the online reconstruction and renders certain offline calibration steps obsolete, speeding up offline physics analysis. For LHC Run 3, starting in 2020, when data reduction will rely on reconstructed data, online calibration becomes a necessity. Reconstructed particle trajectories build the basis for the calibration, making fast online tracking mandatory. The main detectors used for this purpose are the TPC and the Inner Tracking System. Reconstructing the trajectories in the TPC is the most compute-intensive step. We present several improvements to the ALICE HLT developed to facilitate online calibration. The main new development for online calibration is a wrapper that can run ALICE offline analysis and calibration tasks inside the HLT. In addition, we have added asynchronous processing capabilities to support long-running calibration tasks in the HLT framework, which otherwise runs event-synchronously. In order to improve resiliency, an isolated process performs the asynchronous operations such that even a fatal error does not disturb data taking. We have complemented the original loop-free HLT chain with ZeroMQ data-transfer components. The ZeroMQ components facilitate a feedback loop that inserts the calibration result created at the end of the chain back into tracking components at the beginning of the chain, after a short delay. All these new features are implemented in a general way, such that they have use cases aside from online calibration. In order to gather sufficient statistics for the calibration, the asynchronous calibration component must process enough events per time interval. Since the calibration is valid only for a certain time period, the delay until the feedback loop provides updated calibration data must not be too long. A first full-scale test of the online calibration functionality was performed during the 2015 heavy-ion run under real conditions. Since then, online calibration has been enabled and benchmarked during 2016 proton-proton data taking. We present a timing analysis of this first online-calibration test, which concludes that the HLT is capable of online TPC drift-time calibration fast enough to calibrate the tracking via the feedback loop. We compare the calibration results with the offline calibration and present a comparison of the residuals of the TPC cluster coordinates with respect to the offline reconstruction.
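
    The ZeroMQ feedback loop can be pictured as a publisher at the end of the chain and a non-blocking subscriber at its beginning. A minimal pyzmq sketch; the endpoint and topic name are invented, and in the HLT the two sides run in separate components:

        import zmq

        ctx = zmq.Context()

        # End of the chain: publish each new drift-time calibration object.
        pub = ctx.socket(zmq.PUB)
        pub.bind("tcp://*:5556")                  # assumed endpoint
        pub.send_multipart([b"tpc_drift", b"<serialized calibration>"])

        # Start of the chain: pick up the latest calibration when one is
        # available, without ever blocking the event-synchronous flow.
        sub = ctx.socket(zmq.SUB)
        sub.connect("tcp://localhost:5556")
        sub.setsockopt(zmq.SUBSCRIBE, b"tpc_drift")
        if sub.poll(timeout=0):                   # non-blocking check
            topic, payload = sub.recv_multipart()
            # ... apply updated calibration to subsequent tracking ...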

  19. A tool for selective inline quantification of co-eluting proteins in chromatography using spectral analysis and partial least squares regression.

    PubMed

    Brestrich, Nina; Briskot, Till; Osberghaus, Anna; Hubbuch, Jürgen

    2014-07-01

    Selective quantification of co-eluting proteins in chromatography is usually performed by offline analytics. This is time-consuming and can lead to late detection of irregularities in chromatography processes. To overcome this analytical bottleneck, a methodology for selective protein quantification in multicomponent mixtures by means of spectral data and partial least squares regression was presented in two previous studies. In this paper, a powerful integration of software and chromatography hardware will be introduced that enables the applicability of this methodology for a selective inline quantification of co-eluting proteins in chromatography. A specific setup consisting of a conventional liquid chromatography system, a diode array detector, and a software interface to Matlab® was developed. The established tool for selective inline quantification was successfully applied for a peak deconvolution of a co-eluting ternary protein mixture consisting of lysozyme, ribonuclease A, and cytochrome c on SP Sepharose FF. Compared to common offline analytics based on collected fractions, no loss of information regarding the retention volumes and peak flanks was observed. A comparison between the mass balances of both analytical methods showed that the inline quantification tool can be applied for a rapid determination of pool yields. Finally, the achieved inline peak deconvolution was successfully applied to make product purity-based real-time pooling decisions. This makes the established tool for selective inline quantification a valuable approach for inline monitoring and control of chromatographic purification steps and just-in-time reaction to process irregularities. © 2014 Wiley Periodicals, Inc.
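
    In scikit-learn terms, the calibration and inline prediction steps reduce to a few lines. The arrays below are random placeholders for real calibration spectra and concentrations, and the number of latent components is an assumption:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # Calibration: spectra of mixtures with known concentrations
        # of the three proteins.
        X_cal = np.random.rand(40, 120)   # 40 spectra x 120 wavelengths
        Y_cal = np.random.rand(40, 3)     # lysozyme, RNase A, cytochrome c

        pls = PLSRegression(n_components=6).fit(X_cal, Y_cal)

        # Inline use: each new detector spectrum is mapped to three
        # selective concentration signals, deconvolving the co-eluting peak.
        x_new = np.random.rand(1, 120)
        c_lys, c_rna, c_cyt = pls.predict(x_new)[0]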

  20. An Imaging And Graphics Workstation For Image Sequence Analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missile, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.
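
    The centroid tracking mentioned in feature 3 boils down to an intensity-weighted average over thresholded pixels, repeated for every field. A minimal sketch on a synthetic frame:

        import numpy as np

        def centroid(frame, threshold):
            """Intensity-weighted centroid of pixels above threshold."""
            ys, xs = np.nonzero(frame > threshold)
            w = frame[ys, xs].astype(float)
            return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

        frame = np.zeros((480, 640))
        frame[200:210, 300:315] = 255.0     # synthetic bright object
        print(centroid(frame, 128))         # -> (307.0, 204.5)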

  1. ControlShell - A real-time software framework

    NASA Technical Reports Server (NTRS)

    Schneider, Stanley A.; Ullman, Marc A.; Chen, Vincent W.

    1991-01-01

    ControlShell is designed to enable modular design and implementation of real-time software. It is an object-oriented tool set for real-time software system programming. It provides a series of execution and data-interchange mechanisms that form a framework for building real-time applications. These mechanisms allow a component-based approach to real-time software generation and management. By defining a set of interface specifications for intermodule interaction, ControlShell provides a common platform that is the basis for real-time code development and exchange.

  2. Software Framework for Peer Data-Management Services

    NASA Technical Reports Server (NTRS)

    Hughes, John; Hardman, Sean; Crichton, Daniel; Hyon, Jason; Kelly, Sean; Tran, Thuy

    2007-01-01

    Object Oriented Data Technology (OODT) is a software framework for creating a Web-based system for exchange of scientific data that are stored in diverse formats on computers at different sites under the management of scientific peers. OODT software consists of a set of cooperating, distributed peer components that provide distributed peer-to-peer (P2P) services that enable one peer to search and retrieve data managed by another peer. In effect, computers running OODT software at different locations become parts of an integrated data-management system.

  3. A comparison of item response models for accuracy and speed of item responses with applications to adaptive testing.

    PubMed

    van Rijn, Peter W; Ali, Usama S

    2017-05-01

    We compare three modelling frameworks for accuracy and speed of item responses in the context of adaptive testing. The first framework is based on modelling scores that result from a scoring rule that incorporates both accuracy and speed. The second framework is the hierarchical modelling approach developed by van der Linden (2007, Psychometrika, 72, 287) in which a regular item response model is specified for accuracy and a log-normal model for speed. The third framework is the diffusion framework in which the response is assumed to be the result of a Wiener process. Although the three frameworks differ in the relation between accuracy and speed, one commonality is that the marginal model for accuracy can be simplified to the two-parameter logistic model. We discuss both conditional and marginal estimation of model parameters. Models from all three frameworks were fitted to data from a mathematics and spelling test. Furthermore, we applied a linear and adaptive testing mode to the data off-line in order to determine differences between modelling frameworks. It was found that a model from the scoring rule framework outperformed a hierarchical model in terms of model-based reliability, but the results were mixed with respect to correlations with external measures. © 2017 The British Psychological Society.
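
    For reference, the marginal accuracy model that all three frameworks share is the two-parameter logistic (2PL) model, and in adaptive testing the next item is typically the one with maximal Fisher information at the current ability estimate. A small sketch of both quantities:

        import numpy as np

        def p_correct(theta, a, b):
            """2PL model: probability of a correct response given
            ability theta, discrimination a and difficulty b."""
            return 1.0 / (1.0 + np.exp(-a * (theta - b)))

        def item_information(theta, a, b):
            """Fisher information of a 2PL item at ability theta."""
            p = p_correct(theta, a, b)
            return a ** 2 * p * (1.0 - p)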

  4. Architecture of a framework for providing information services for public transport.

    PubMed

    García, Carmelo R; Pérez, Ricardo; Lorenzo, Alvaro; Quesada-Arencibia, Alexis; Alayón, Francisco; Padrón, Gabino

    2012-01-01

    This paper presents OnRoute, a framework for developing and running ubiquitous software that provides information services to passengers of public transportation, including payment systems and on-route guidance services. To achieve a high level of interoperability, accessibility and context awareness, OnRoute uses the ubiquitous computing paradigm. To guarantee the quality of the software produced, the reliable software principles used in critical contexts, such as automotive systems, are also considered by the framework. The main components of its architecture (run-time, system services, software components and development discipline) and how they are deployed in the transportation network (stations and vehicles) are described in this paper. Finally, to illustrate the use of OnRoute, the development of a guidance service for travellers is explained.

  5. A Software Framework for Aircraft Simulation

    NASA Technical Reports Server (NTRS)

    Curlett, Brian P.

    2008-01-01

    The National Aeronautics and Space Administration Dryden Flight Research Center has a long history in developing simulations of experimental fixed-wing aircraft from gliders to suborbital vehicles on platforms ranging from desktop simulators to pilot-in-the-loop/aircraft-in-the-loop simulators. Regardless of the aircraft or simulator hardware, much of the software framework is common to all NASA Dryden simulators. Some of this software has withstood the test of time, but in recent years the push toward high-fidelity user-friendly simulations has resulted in some significant changes. This report presents an overview of the current NASA Dryden simulation software framework and capabilities with an emphasis on the new features that have permitted NASA to develop more capable simulations while maintaining the same staffing levels.

  6. Property-Based Software Engineering Measurement

    NASA Technical Reports Server (NTRS)

    Briand, Lionel; Morasca, Sandro; Basili, Victor R.

    1995-01-01

    Little theory exists in the field of software system measurement. Concepts such as complexity, coupling, cohesion or even size are very often subject to interpretation and appear to have inconsistent definitions in the literature. As a consequence, there is little guidance provided to the analyst attempting to define proper measures for specific problems. Many controversies in the literature are simply misunderstandings and stem from the fact that some people talk about different measurement concepts under the same label (complexity is the most common case). There is a need to define unambiguously the most important measurement concepts used in the measurement of software products. One way of doing so is to define precisely what mathematical properties characterize these concepts regardless of the specific software artifacts to which these concepts are applied. Such a mathematical framework could generate a consensus in the software engineering community and provide a means for better communication among researchers, better guidelines for analysis, and better evaluation methods for commercial static analyzers for practitioners. In this paper, we propose a mathematical framework which is generic, because it is not specific to any particular software artifact, and rigorous, because it is based on precise mathematical concepts. This framework defines several important measurement concepts (size, length, complexity, cohesion, coupling). It is not intended to be complete or fully objective; other frameworks could have been proposed and different choices could have been made. However, we believe that the formalism and properties we introduce are convenient and intuitive. In addition, we have reviewed the literature on this subject and compared it with our work. This framework contributes constructively to a firmer theoretical ground of software measurement.

  7. Property-Based Software Engineering Measurement

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Morasca, Sandro; Basili, Victor R.

    1997-01-01

    Little theory exists in the field of software system measurement. Concepts such as complexity, coupling, cohesion or even size are very often subject to interpretation and appear to have inconsistent definitions in the literature. As a consequence, there is little guidance provided to the analyst attempting to define proper measures for specific problems. Many controversies in the literature are simply misunderstandings and stem from the fact that some people talk about different measurement concepts under the same label (complexity is the most common case). There is a need to define unambiguously the most important measurement concepts used in the measurement of software products. One way of doing so is to define precisely what mathematical properties characterize these concepts, regardless of the specific software artifacts to which these concepts are applied. Such a mathematical framework could generate a consensus in the software engineering community and provide a means for better communication among researchers, better guidelines for analysts, and better evaluation methods for commercial static analyzers for practitioners. In this paper, we propose a mathematical framework which is generic, because it is not specific to any particular software artifact and rigorous, because it is based on precise mathematical concepts. We use this framework to propose definitions of several important measurement concepts (size, length, complexity, cohesion, coupling). It does not intend to be complete or fully objective; other frameworks could have been proposed and different choices could have been made. However, we believe that the formalisms and properties we introduce are convenient and intuitive. This framework contributes constructively to a firmer theoretical ground of software measurement.

  8. ESPC Common Model Architecture Earth System Modeling Framework (ESMF) Software and Application Development

    DTIC Science & Technology

    2015-09-30

    … originate from NASA, NOAA, and community modeling efforts, and support for creation of the suite was shared by sponsors from other agencies. … The National Unified Operational Prediction Capability (NUOPC) was established between NOAA and the Navy to develop a common software architecture for easy and efficient interoperability. …

  9. Diagnosis and Prognosis of Weapon Systems

    NASA Technical Reports Server (NTRS)

    Nolan, Mary; Catania, Rebecca; deMare, Gregory

    2005-01-01

    The Prognostics Framework is a set of software tools with an open architecture that affords a capability to integrate various prognostic software mechanisms and to provide information for operational and battlefield decision-making and logistical planning pertaining to weapon systems. The Prognostics Framework is also a system-level health-management software system that (1) receives data from performance-monitoring and built-in-test sensors and from other prognostic software and (2) processes the received data to derive a diagnosis and a prognosis for a weapon system. This software relates the diagnostic and prognostic information to the overall health of the system, to the ability of the system to perform specific missions, and to needed maintenance actions and maintenance resources. In the development of the Prognostics Framework, effort was focused primarily on extending previously developed model-based diagnostic-reasoning software to add prognostic reasoning capabilities, including capabilities to perform statistical analyses and to utilize information pertaining to deterioration of parts, failure modes, time sensitivity of measured values, mission criticality, historical data, and trends in measurement data. As thus extended, the software offers an overall health-monitoring capability.

  10. Open-source framework for documentation of scientific software written on MATLAB-compatible programming languages

    NASA Astrophysics Data System (ADS)

    Konnik, Mikhail V.; Welsh, James

    2012-09-01

    Numerical simulators for adaptive optics systems have become an essential tool for the research and development of future advanced astronomical instruments. However, the growing software code of a numerical simulator makes it difficult to continue to support the code itself. The problem of adequate documentation of astronomical software for adaptive optics simulators may complicate development, since the documentation must contain up-to-date schemes and mathematical descriptions implemented in the software code. Although most modern programming environments like MATLAB or Octave have built-in documentation abilities, they are often insufficient for the description of a typical adaptive optics simulator code. This paper describes a general cross-platform framework for the documentation of scientific software using open-source tools such as LATEX, Mercurial, Doxygen, and Perl. Using a Perl script that translates the MATLAB comments of M-files into C-like ones, one can use Doxygen to generate and update the documentation for the scientific source code. The documentation generated by this framework contains the current code description with mathematical formulas, images, and bibliographical references. A detailed description of the framework components is presented, as well as guidelines for the framework deployment. Examples of the code documentation for the scripts and functions of a MATLAB-based adaptive optics simulator are provided.
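
    The comment-translation idea is straightforward to reproduce. Below is a minimal Python stand-in for the Perl filter described above (the original script is not shown in this record); the script name is hypothetical, but FILTER_PATTERNS is a real Doxygen configuration option:

        import sys

        def filter_mfile(path):
            """Turn MATLAB '%' comments into '///' lines so Doxygen
            parses them as documentation; pass other lines through."""
            for line in open(path):
                stripped = line.lstrip()
                if stripped.startswith('%'):
                    indent = line[:len(line) - len(stripped)]
                    sys.stdout.write(indent + '///' + stripped[1:])
                else:
                    sys.stdout.write(line)

        if __name__ == '__main__':
            filter_mfile(sys.argv[1])

    The corresponding Doxyfile line would then read: FILTER_PATTERNS = *.m="python mfilter.py".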

  11. Eighteen-Month Outcomes of Titanium Frameworks Using Computer-Aided Design and Computer-Aided Manufacturing Method.

    PubMed

    Turkyilmaz, Ilser; Asar, Neset Volkan

    2017-06-01

    The aim of this report is to introduce new software and a new scanner with a noncontact laser probe, and to present outcomes of computer-aided design and computer-aided manufacturing of titanium frameworks using this new software and scanner. Seven patients received 40 implants placed using a 1-stage protocol. After all implants were planned using implant planning software (NobelClinician), either 5 or 6 implants were placed in each edentulous arch. Each edentulous arch was treated with a fixed dental prosthesis using an implant-supported complete-arch milled-titanium framework produced with the software (NobelProcera) and the scanner. All patients were followed up for 18 ± 3 months. Implant survival, prosthesis survival, framework fit, marginal bone levels, and maintenance requirements were evaluated. One implant was lost during the follow-up period, giving an implant survival rate of 97.5%; a marginal bone loss of 0.4 ± 0.2 mm was noted for all implants after 18 ± 3 months. None of the prostheses needed a replacement, indicating a prosthesis success rate of 100%. The results of this clinical study suggest that titanium frameworks fabricated using the software and scanner presented in this study fit accurately and may be a viable option to restore edentulous arches.

  12. [Construction of educational software about personality disorders].

    PubMed

    Botti, Nadja Cristiane Lappann; Carneiro, Ana Luíza Marques; Almeida, Camila Souza; Pereira, Cíntia Braga Silva

    2011-01-01

    The study describes the experience of building educational software in the area of mental health. The software was developed to enable nursing students to identify personality disorders. In this process, we applied the pedagogical framework of Vygotsky and the theoretical framework of the diagnostic criteria defined by the DSM-IV. From these references, characters exhibiting personality disorders were identified in stories and/or children's movies. The software's data bank was built from multimedia material: graphics, sound, and explanatory content. The software was developed as an educational game, with questions of increasing difficulty, and was implemented with Microsoft Office PowerPoint 2007. The authors believe this teaching-learning strategy is valid for the area of mental health nursing.

  13. CASS—CFEL-ASG software suite

    NASA Astrophysics Data System (ADS)

    Foucar, Lutz; Barty, Anton; Coppola, Nicola; Hartmann, Robert; Holl, Peter; Hoppe, Uwe; Kassemeyer, Stephan; Kimmel, Nils; Küpper, Jochen; Scholz, Mirko; Techert, Simone; White, Thomas A.; Strüder, Lothar; Ullrich, Joachim

    2012-10-01

    The Max Planck Advanced Study Group (ASG) at the Center for Free Electron Laser Science (CFEL) has created the CFEL-ASG Software Suite CASS to view, process and analyse multi-parameter experimental data acquired at Free Electron Lasers (FELs) using the CFEL-ASG Multi Purpose (CAMP) instrument Strüder et al. (2010) [6]. The software is based on a modular design so that it can be adjusted to accommodate the needs of all the various experiments that are conducted with the CAMP instrument. In fact, this allows the use of the software in all experiments where multiple detectors are involved. One of the key aspects of CASS is that it can be used either 'on-line', using a live data stream from the free-electron laser facility's data acquisition system to guide the experiment, or 'off-line', on data acquired from a previous experiment which has been saved to file.
    Program summary
    Program title: CASS
    Catalogue identifier: AEMP_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMP_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public Licence, version 3
    No. of lines in distributed program, including test data, etc.: 167073
    No. of bytes in distributed program, including test data, etc.: 1065056
    Distribution format: tar.gz
    Programming language: C++
    Computer: Intel x86-64
    Operating system: GNU/Linux (for information about restrictions see outlook)
    RAM: >8 GB
    Classification: 2.3, 3, 15, 16.4
    External routines: Qt-Framework [1], SOAP [2], (optional HDF5 [3], VIGRA [4], ROOT [5], QWT [6])
    Nature of problem: Analysis and visualisation of scientific data acquired at Free Electron Lasers.
    Solution method: Generalise data access and storage so that a variety of small programming pieces can be linked to form a complex analysis chain.
    Unusual features: Complex analysis chains can be built without recompiling the program.
    Additional comments: An updated extensive documentation of CASS is available at [7].
    Running time: Depending on the data size and complexity of analysis algorithms.
    References:
    [1] http://qt.nokia.com
    [2] http://www.cs.fsu.edu/~engelen/soap.html
    [3] http://www.hdfgroup.org/HDF5/
    [4] http://hci.iwr.uni-heidelberg.de/vigra/
    [5] http://root.cern.ch
    [6] http://qwt.sourceforge.net/
    [7] http://www.mpi-hd.mpg.de/personalhomes/gitasg/cass

  14. NASA Software Documentation Standard

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The NASA Software Documentation Standard (hereinafter referred to as "Standard") is designed to support the documentation of all software developed for NASA; its goal is to provide a framework and model for recording the essential information needed throughout the development life cycle and maintenance of a software system. The NASA Software Documentation Standard can be applied to the documentation of all NASA software. The Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. The basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.

  15. The IVTANTHERMO-Online database for thermodynamic properties of individual substances with web interface

    NASA Astrophysics Data System (ADS)

    Belov, G. V.; Dyachkov, S. A.; Levashov, P. R.; Lomonosov, I. V.; Minakov, D. V.; Morozov, I. V.; Sineva, M. A.; Smirnov, V. N.

    2018-01-01

    The database structure, main features and user interface of the IVTANTHERMO-Online system are reviewed. This system continues the series of IVTANTHERMO packages developed at JIHT RAS. It includes a database of thermodynamic properties of individual substances and related software for the analysis of experimental results, data fitting, and the calculation and estimation of thermodynamic functions and thermochemical quantities. In contrast to previous IVTANTHERMO versions, it has a new extensible database design, a client-server architecture, and a user-friendly web interface with a number of new features for online and offline data processing.

  16. Surface Operations Systems Improve Airport Efficiency

    NASA Technical Reports Server (NTRS)

    2009-01-01

    With Small Business Innovation Research (SBIR) contracts from Ames Research Center, Mosaic ATM of Leesburg, Virginia created software to analyze surface operations at airports. Surface surveillance systems, which report locations every second for thousands of air and ground vehicles, generate massive amounts of data, making gathering and analyzing this information difficult. Mosaic's Surface Operations Data Analysis and Adaptation (SODAA) tool is an off-line support tool that can analyze how well the airport surface operation is working and can help redesign procedures to improve operations. SODAA helps researchers pinpoint trends and correlations in vast amounts of recorded airport operations data.

  17. Land-surface parameter optimisation using data assimilation techniques: the adJULES system V1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raoult, Nina M.; Jupp, Tim E.; Cox, Peter M.

    Land-surface models (LSMs) are crucial components of the Earth system models (ESMs) that are used to make coupled climate–carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. JULES is also extensively used offline as a land-surface impacts tool, forced with climatologies into the future. In this study, JULES is automatically differentiated with respect to JULES parameters using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameters by calibrating against observations. This paper describes adJULES in a data assimilation framework and demonstrates its ability to improve the model–data fit using eddy-covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 85 % of the sites used in the study, at both the calibration and evaluation stages. Furthermore, the new improved parameters for JULES are presented along with the associated uncertainties for each parameter.

  18. Implementation of the ground level enhancement alert software at NMDB database

    NASA Astrophysics Data System (ADS)

    Mavromichalaki, Helen; Souvatzoglou, George; Sarlanis, Christos; Mariatos, George; Papaioannou, Athanasios; Belov, Anatoly; Eroshenko, Eugenia; Yanke, Victor; NMDB Team

    2010-11-01

    The European Commission is supporting the real-time database for high-resolution neutron monitor measurements (NMDB) as an e-Infrastructures project in the Seventh Framework Programme in the Capacities section. The realization of the NMDB will provide the opportunity for several applications, most of which will be implemented in real time. An important application will be the establishment of an Alert signal when dangerous solar particle events are heading toward the Earth, resulting in a ground level enhancement (GLE) registered by neutron monitors (NMs). The cosmic ray community has been occupied with the question of establishing such an Alert for many years, and recently several groups succeeded in creating a proper algorithm capable of detecting space weather threats in an off-line mode. A lot of original work has been done in this direction, and every group working in this field performed routine runs for all GLE cases, resulting in statistical analyses of GLE events. The next step was to make this algorithm as accurate as possible and, most importantly, working in real time. This was achieved when, during the last GLE observed so far, a real-time GLE Alert signal was produced. In this work, the steps of this procedure as well as the functionality of this algorithm for both the scientific community and users are discussed.
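
    Stripped of station-specific details, a GLE alert of this kind is a coincidence test over per-station count-rate excesses. A schematic sketch; the thresholds and the multi-station criterion below are assumptions for illustration, not the published NMDB algorithm:

        def station_trigger(rate, baseline, sigma, k=3.0):
            """Flag one neutron monitor when its count rate exceeds the
            quiet-time baseline by k standard deviations."""
            return rate > baseline + k * sigma

        def gle_alert(flags, min_stations=3):
            """Issue a GLE alert only when several stations trigger at
            once, suppressing single-station fluctuations."""
            return sum(flags) >= min_stations

        print(gle_alert([True, True, False, True]))  # -> True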

  19. Land-surface parameter optimisation using data assimilation techniques: the adJULES system V1.0

    NASA Astrophysics Data System (ADS)

    Raoult, Nina M.; Jupp, Tim E.; Cox, Peter M.; Luke, Catherine M.

    2016-08-01

    Land-surface models (LSMs) are crucial components of the Earth system models (ESMs) that are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. JULES is also extensively used offline as a land-surface impacts tool, forced with climatologies into the future. In this study, JULES is automatically differentiated with respect to JULES parameters using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameters by calibrating against observations. This paper describes adJULES in a data assimilation framework and demonstrates its ability to improve the model-data fit using eddy-covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 85 % of the sites used in the study, at both the calibration and evaluation stages. The new improved parameters for JULES are presented along with the associated uncertainties for each parameter.
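
    Conceptually, adJULES minimizes a model-data misfit whose gradient is supplied by the adjoint. A toy sketch of the same calibration loop, with an invented two-parameter "model" standing in for JULES and scipy approximating the gradient that the adjoint would provide exactly:

        import numpy as np
        from scipy.optimize import minimize

        def model(p):
            """Toy stand-in for JULES: two parameters -> a flux series."""
            t = np.linspace(0.0, 1.0, 50)
            return p[0] * np.sin(2 * np.pi * t) + p[1]

        obs = model(np.array([1.3, 0.4])) + np.random.normal(0, 0.05, 50)
        p_prior = np.array([1.0, 0.0])

        def cost(p):
            """Misfit to observations plus a prior (background) term."""
            r = model(p) - obs
            return r @ r + 0.1 * np.sum((p - p_prior) ** 2)

        opt = minimize(cost, p_prior)
        print(opt.x)   # locally optimum parameters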

  20. Space station data system analysis/architecture study. Task 2: Options development DR-5. Volume 1: Technology options

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The second task in the Space Station Data System (SSDS) Analysis/Architecture Study is the development of an information base that will support the conduct of trade studies and provide sufficient data to make key design/programmatic decisions. This volume identifies the preferred options in the technology category and characterizes these options with respect to performance attributes, constraints, cost, and risk. The technology category includes advanced materials, processes, and techniques that can be used to enhance the implementation of SSDS design structures. The specific areas discussed are mass storage, including space and ground on-line storage and off-line storage; man/machine interface; data processing hardware, including flight computers and advanced/fault-tolerant computer architectures; and software, including data compression algorithms, on-board high level languages, and software tools. Also discussed are artificial intelligence applications and hard-wire communications.

  1. Unresolved Galaxy Classifier for ESA/Gaia mission: Support Vector Machines approach

    NASA Astrophysics Data System (ADS)

    Bellas-Velidis, Ioannis; Kontizas, Mary; Dapergolas, Anastasios; Livanou, Evdokia; Kontizas, Evangelos; Karampelas, Antonios

    A software package, the Unresolved Galaxy Classifier (UGC), is being developed for the ground-based pipeline of ESA's Gaia mission. It aims to provide automated taxonomic classification and estimation of specific parameters by analyzing low-dispersion spectra of unresolved galaxies from the Gaia BP/RP instrument. The UGC algorithm is based on a supervised learning technique, Support Vector Machines (SVM). The software is implemented in Java as two separate modules. An offline learning module provides functions for SVM model training. Once trained, the set of models can be repeatedly applied to unknown galaxy spectra by the pipeline's application module. A library of synthetic spectra of galaxy models, simulated for the BP/RP instrument, is used to train and test the modules. Science tests show very good classification performance for UGC and relatively good regression performance, except for some of the parameters. Possible approaches to improve the performance are discussed.
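
    The two-module split maps naturally onto a train-once, apply-many pattern. A scikit-learn sketch with random placeholder arrays in place of the BP/RP spectral library (the real UGC is implemented in Java, so this is only an illustration of the pattern):

        import numpy as np
        from sklearn.svm import SVC, SVR

        # Offline learning module: train on simulated spectra.
        spectra = np.random.rand(500, 120)        # low-dispersion spectra
        gal_class = np.random.randint(0, 4, 500)  # taxonomic labels
        parameter = np.random.rand(500)           # one regression target

        classifier = SVC(kernel="rbf").fit(spectra, gal_class)
        regressor = SVR(kernel="rbf").fit(spectra, parameter)

        # Application module: repeatedly apply the trained models.
        new_spectrum = np.random.rand(1, 120)
        print(classifier.predict(new_spectrum), regressor.predict(new_spectrum))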

  2. Architecture of a Framework for Providing Information Services for Public Transport

    PubMed Central

    García, Carmelo R.; Pérez, Ricardo; Lorenzo, Álvaro; Quesada-Arencibia, Alexis; Alayón, Francisco; Padrón, Gabino

    2012-01-01

    This paper presents OnRoute, a framework for developing and running ubiquitous software that provides information services to passengers of public transportation, including payment systems and on-route guidance services. To achieve a high level of interoperability, accessibility and context awareness, OnRoute uses the ubiquitous computing paradigm. To guarantee the quality of the software produced, the reliable software principles used in critical contexts, such as automotive systems, are also considered by the framework. The main components of its architecture (run-time, system services, software components and development discipline) and how they are deployed in the transportation network (stations and vehicles) are described in this paper. Finally, to illustrate the use of OnRoute, the development of a guidance service for travellers is explained. PMID:22778585

  3. AGATE: Adversarial Game Analysis for Tactical Evaluation

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance L.

    2013-01-01

    AGATE generates a set of ranked strategies that enables an autonomous vehicle to track/trail another vehicle that is trying to break the contact using evasive tactics. The software is efficient (can be run on a laptop), scales well with environmental complexity, and is suitable for use onboard an autonomous vehicle. The software will run in near-real-time (2 Hz) on most commercial laptops. Existing software is usually run offline in a planning mode, and is not used to control an unmanned vehicle actively. JPL has developed a system for AGATE that uses adversarial game theory (AGT) methods (in particular, leader-follower and pursuit-evasion) to enable an autonomous vehicle (AV) to maintain tracking/trailing operations on a target that is employing evasive tactics. The AV trailing, tracking, and reacquisition operations are characterized by imperfect information, and are an example of a non-zero sum game (a positive payoff for the AV is not necessarily an equal loss for the target being tracked and, potentially, additional adversarial boats). Previously, JPL successfully applied the Nash equilibrium method for onboard control of an autonomous ground vehicle (AGV) travelling over hazardous terrain.
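
    One simple way to produce ranked strategies from a payoff matrix is a maximin (security-strategy) heuristic, shown below with invented payoffs; JPL's actual leader-follower and pursuit-evasion solvers are more sophisticated than this sketch:

        import numpy as np

        # Hypothetical tracker payoffs (rows: tracker maneuvers,
        # columns: target evasive maneuvers).
        payoff = np.array([[0.8, 0.2, 0.5],
                           [0.4, 0.7, 0.3],
                           [0.6, 0.5, 0.6]])

        worst_case = payoff.min(axis=1)       # evader responds worst for us
        ranking = np.argsort(-worst_case)     # best maneuver first
        print("ranked maneuvers:", ranking)   # -> [2 1 0]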

  4. Integrated computer-aided design using minicomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.

    1980-01-01

    Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM) software, which is highly interactive, has been implemented on minicomputers at the NASA Langley Research Center. CAD/CAM software integrates many formerly fragmented programs and procedures into one cohesive system; it also includes finite element modeling and analysis, and has been interfaced via a computer network to a relational data base management system and offline plotting devices on mainframe computers. The CAD/CAM software system requires interactive graphics terminals operating at a minimum of 4800 bits/sec transfer rate to a computer. The system is portable and introduces 'interactive graphics', which permits the creation and modification of models interactively. The CAD/CAM system has already produced designs for a large-area space platform, a National Transonic Facility fan blade, and a laminar flow control wind tunnel model. Besides the design/drafting and finite element analysis capability, CAD/CAM provides options to produce an automatic program tooling code to drive a numerically controlled (N/C) machine. Reductions in time for design, engineering, drawing, finite element modeling, and N/C machining will benefit productivity through reduced costs, fewer errors, and a wider range of configurations.

  5. Models and Frameworks: A Synergistic Association for Developing Component-Based Applications

    PubMed Central

    Sánchez-Ledesma, Francisco; Sánchez, Pedro; Pastor, Juan A.; Álvarez, Bárbara

    2014-01-01

    The use of frameworks and components has been shown to be effective in improving software productivity and quality. However, the results in terms of reuse and standardization show a dearth of portability either of designs or of component-based implementations. This paper, which is based on the model driven software development paradigm, presents an approach that separates the description of component-based applications from their possible implementations for different platforms. This separation is supported by automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. Thus, the approach combines the benefits of modeling applications from a higher level of abstraction than objects, with the higher levels of code reuse provided by frameworks. In order to illustrate the benefits of the proposed approach, two representative case studies, which use both an existing framework and an ad hoc framework, are described. Finally, our approach is compared with other alternatives in terms of the cost of software development. PMID:25147858

  6. Models and frameworks: a synergistic association for developing component-based applications.

    PubMed

    Alonso, Diego; Sánchez-Ledesma, Francisco; Sánchez, Pedro; Pastor, Juan A; Álvarez, Bárbara

    2014-01-01

    The use of frameworks and components has been shown to be effective in improving software productivity and quality. However, the results in terms of reuse and standardization show a dearth of portability either of designs or of component-based implementations. This paper, which is based on the model driven software development paradigm, presents an approach that separates the description of component-based applications from their possible implementations for different platforms. This separation is supported by automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. Thus, the approach combines the benefits of modeling applications from a higher level of abstraction than objects, with the higher levels of code reuse provided by frameworks. In order to illustrate the benefits of the proposed approach, two representative case studies, which use both an existing framework and an ad hoc framework, are described. Finally, our approach is compared with other alternatives in terms of the cost of software development.

  7. Generic Software Architecture for Prognostics (GSAP) User Guide

    NASA Technical Reports Server (NTRS)

    Teubert, Christopher Allen; Daigle, Matthew John; Watkins, Jason; Sankararaman, Shankar; Goebel, Kai

    2016-01-01

    The Generic Software Architecture for Prognostics (GSAP) is a framework for applying prognostics. It makes applying prognostics easier by implementing many of the common elements across prognostic applications. The standard interface enables reuse of prognostic algorithms and models across systems using the GSAP framework.
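
    GSAP itself is a C++ framework; the following Python sketch is only a hedged illustration of the standard-interface idea the abstract describes, with all names (Prognoser, BatteryEolPrognoser, run) invented for this example rather than taken from GSAP's actual API.

        # Hedged sketch (not GSAP's actual C++ API): a common prognoser
        # interface that lets models and algorithms be reused across systems.
        from abc import ABC, abstractmethod

        class Prognoser(ABC):
            """Standard interface every prognostic application implements."""
            @abstractmethod
            def step(self, t: float, sensors: dict) -> dict:
                """Consume one sensor sample, return prognostic results."""

        class BatteryEolPrognoser(Prognoser):
            def __init__(self, capacity_ah: float, eol_fraction: float = 0.8):
                self.capacity = capacity_ah
                self.eol = eol_fraction * capacity_ah

            def step(self, t, sensors):
                # Toy degradation model: capacity fades with delivered charge.
                self.capacity -= abs(sensors.get("current", 0.0)) * 1e-4
                return {"time": t, "capacity": self.capacity,
                        "eol_reached": self.capacity <= self.eol}

        def run(prognoser: Prognoser, samples):
            """Framework loop: the common element reused across applications."""
            return [prognoser.step(t, s) for t, s in samples]

        print(run(BatteryEolPrognoser(2.2), [(0.0, {"current": 1.5}), (1.0, {"current": 1.2})]))

    Because every application implements the same step() contract, the surrounding loop, and any algorithm written against it, can be reused unchanged across systems.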

  8. Awake, Offline Processing during Associative Learning

    PubMed Central

    Nestor, Adrian; Tarr, Michael J.; Creswell, J. David

    2016-01-01

    Offline processing has been shown to strengthen memory traces and enhance learning in the absence of conscious rehearsal or awareness. Here we evaluate whether a brief, two-minute offline processing period can boost associative learning and test a memory reactivation account for these offline processing effects. After encoding paired associates, subjects either completed a distractor task for two minutes or were immediately tested for memory of the pairs in a counterbalanced, within-subjects functional magnetic resonance imaging study. Results showed that brief, awake, offline processing improves memory for associate pairs. Moreover, multi-voxel pattern analysis of the neuroimaging data suggested reactivation of encoded memory representations in dorsolateral prefrontal cortex during offline processing. These results signify the first demonstration of awake, active, offline enhancement of associative memory and suggest that such enhancement is accompanied by the offline reactivation of encoded memory representations. PMID:27119345

  9. Awake, Offline Processing during Associative Learning.

    PubMed

    Bursley, James K; Nestor, Adrian; Tarr, Michael J; Creswell, J David

    2016-01-01

    Offline processing has been shown to strengthen memory traces and enhance learning in the absence of conscious rehearsal or awareness. Here we evaluate whether a brief, two-minute offline processing period can boost associative learning and test a memory reactivation account for these offline processing effects. After encoding paired associates, subjects either completed a distractor task for two minutes or were immediately tested for memory of the pairs in a counterbalanced, within-subjects functional magnetic resonance imaging study. Results showed that brief, awake, offline processing improves memory for associate pairs. Moreover, multi-voxel pattern analysis of the neuroimaging data suggested reactivation of encoded memory representations in dorsolateral prefrontal cortex during offline processing. These results signify the first demonstration of awake, active, offline enhancement of associative memory and suggest that such enhancement is accompanied by the offline reactivation of encoded memory representations.

  10. The NOvA software testing framework

    NASA Astrophysics Data System (ADS)

    Tamsett, M.; C Group

    2015-12-01

    The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. NOvA has already produced more than one million Monte Carlo and detector-generated files amounting to more than 1 PB in size. This data is divided between a number of parallel streams, such as far and near detector beam spills, cosmic ray backgrounds, a number of data-driven triggers, and over 20 different Monte Carlo configurations. Each of these data streams must be processed through the appropriate steps of the rapidly evolving, multi-tiered, interdependent NOvA software framework. In total there are more than 12 individual software tiers, each of which performs a different function and can be configured differently depending on the input stream. In order to regularly test and validate that all of these software stages are working correctly, NOvA has designed a powerful, modular testing framework that enables detailed validation and benchmarking to be performed in a fast, efficient and accessible way with minimal expert knowledge. The core of this system is a novel series of Python modules which wrap, monitor and handle the underlying C++ software framework and then report the results to a web-based front-end interface. This interface utilises modern, cross-platform visualisation libraries to render the test results in a meaningful way. The system is fast and flexible, allowing for the easy addition of new tests and datasets. In total, upwards of 14 individual streams are regularly tested, amounting to over 70 individual software processes and producing over 25 GB of output files. The rigour enforced through this flexible testing framework enables NOvA to rapidly verify configurations, results and software, and thus ensure that data is available for physics analysis in a timely and robust manner.
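
    A hedged sketch of the wrap-monitor-report pattern described above follows; the tier names and commands are placeholders, not NOvA's actual configuration, and the real framework adds benchmarking and a web front end on top of this structure.

        # Hedged sketch: Python wrapping, monitoring and reporting on
        # underlying C++ software tiers. Tier names/commands are invented.
        import json, subprocess, time

        def run_tier(name: str, command: list, timeout_s: int = 3600) -> dict:
            """Wrap one software tier, monitor it, and record the outcome."""
            start = time.time()
            try:
                proc = subprocess.run(command, capture_output=True, timeout=timeout_s)
                status = "pass" if proc.returncode == 0 else "fail"
            except subprocess.TimeoutExpired:
                status = "timeout"
            return {"tier": name, "status": status,
                    "seconds": round(time.time() - start, 1)}

        def run_stream(stream: str, tiers: list) -> list:
            """Run every tier of one data stream, stopping at the first failure."""
            results = []
            for name, command in tiers:
                result = run_tier(name, command)
                results.append(result)
                if result["status"] != "pass":
                    break
            return results

        # Results are serialised for a web front end to render.
        report = run_stream("fd_beam_spill", [("calibration", ["true"]), ("reco", ["true"])])
        print(json.dumps(report, indent=2))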

  11. A Framework of the Use of Information in Software Testing

    ERIC Educational Resources Information Center

    Kaveh, Payman

    2010-01-01

    With the increasing role that software systems play in our daily lives, software quality has become extremely important. Software quality is impacted by the efficiency of the software testing process. There are a growing number of software testing methodologies, models, and initiatives to satisfy the need to improve software quality. The main…

  12. ICW eHealth Framework.

    PubMed

    Klein, Karsten; Wolff, Astrid C; Ziebold, Oliver; Liebscher, Thomas

    2008-01-01

    The ICW eHealth Framework (eHF) is a powerful infrastructure and platform for the development of service-oriented solutions in the health care business. It is the culmination of many years of experience of ICW in the development and use of in-house health care solutions and represents the foundation of ICW product developments based on the Java Enterprise Edition (Java EE). The ICW eHealth Framework has been leveraged to allow development by external partners - enabling adopters a straightforward integration into ICW solutions. The ICW eHealth Framework consists of reusable software components, development tools, architectural guidelines and conventions defining a full software-development and product lifecycle. From the perspective of a partner, the framework provides services and infrastructure capabilities for integrating applications within an eHF-based solution. This article introduces the ICW eHealth Framework's basic architectural concepts and technologies. It provides an overview of its module and component model, describes the development platform that supports the complete software development lifecycle of health care applications and outlines technological aspects, mainly focusing on application development frameworks and open standards.

  13. Evolution of the ATLAS Software Framework towards Concurrency

    NASA Astrophysics Data System (ADS)

    Jones, R. W. L.; Stewart, G. A.; Leggett, C.; Wynne, B. M.

    2015-05-01

    The ATLAS experiment has successfully used its Gaudi/Athena software framework for data taking and analysis during the first LHC run, with billions of events successfully processed. However, the design of Gaudi/Athena dates from early 2000, and the framework and the physics code have been written using a single-threaded, serial design. This programming model has increasing difficulty in exploiting the potential of current CPUs, which offer their best performance only when full advantage is taken of multiple cores and wide vector registers. Future CPU evolution will intensify this trend, with core counts increasing and memory per core falling. Maximising performance per watt will be a key metric, so all of these cores must be used as efficiently as possible. In order to address the deficiencies of the current framework, ATLAS has embarked upon two projects: first, a practical demonstration of the use of multi-threading in our reconstruction software, using the GaudiHive framework; second, an exercise to gather requirements for an updated framework, going back to first principles of how event processing occurs. In this paper we report on both aspects of this work. For the Hive-based demonstrators, we discuss what changes were necessary, both to the framework and to the tools and algorithms used, in order to allow the serially designed ATLAS code to run. We report on the general lessons learned about the code patterns that had been employed in the software and on which patterns were identified as particularly problematic for multi-threading. These lessons were fed into our considerations of a new framework, and we present preliminary conclusions from this work. In particular, we identify areas where the framework can be simplified in order to aid the implementation of a concurrent event processing scheme. Finally, we discuss the practical difficulties involved in migrating a large established code base to a multi-threaded framework and how this can be achieved for LHC Run 3.
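
    As a rough illustration of the concurrency goal (not of GaudiHive's actual scheduler), the sketch below processes events in parallel with a thread pool. The algorithms are toy stand-ins; the key code-pattern lesson, avoiding mutable shared state, is what makes them safe to run concurrently.

        # Illustrative sketch only (not GaudiHive): events are processed
        # concurrently by a thread pool, the core idea behind the demonstrators.
        from concurrent.futures import ThreadPoolExecutor

        def tracking(event):     return {"tracks": event["hits"] // 3}
        def calorimetry(event):  return {"clusters": event["cells"] // 10}

        def reconstruct(event):
            # Algorithms that once ran serially per event; thread-safe here
            # because nothing mutable is shared between events.
            out = dict(event)
            out.update(tracking(event))
            out.update(calorimetry(event))
            return out

        events = [{"id": i, "hits": 30 + i, "cells": 200 + i} for i in range(8)]
        with ThreadPoolExecutor(max_workers=4) as pool:
            reconstructed = list(pool.map(reconstruct, events))
        print(reconstructed[0])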

  14. Things online social networking can take away: Reminders of social networking sites undermine the desirability of offline socializing and pleasures.

    PubMed

    Li, Shiang-Shiang; Chang, Yevvon Yi-Chi; Chiou, Wen-Bin

    2017-04-01

    People are beginning to develop symbiotic relationships with social networking sites (SNSs), which provide users with abundant opportunities for social interaction. We contend that if people perceive SNSs as sources of social connection, the idea of SNSs may reduce the desire to pursue offline social activities and offline pleasures. Experiment 1 demonstrated that priming with SNSs was associated with a weakened desirability of offline social activities and an increased inclination to work alone. Felt relatedness mediated the link between SNS primes and reduced desire to engage in offline social activities. Experiment 2 showed that exposure to SNS primes reduced the desirability of offline socializing and lowered the desire for offline pleasurable experiences as well. Moreover, heavy users were more susceptible to this detrimental effect. We provide the first experimental evidence that the idea of online social networking may modulate users' engagement in offline social activities and offline pleasures. Hence, online social networking may satisfy the need for relatedness but undercut the likelihood of reaping enjoyment from offline social life. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  15. Towards Archetypes-Based Software Development

    NASA Astrophysics Data System (ADS)

    Piho, Gunnar; Roost, Mart; Perkins, David; Tepandi, Jaak

    We present a framework for the archetypes-based engineering of domains, requirements and software (Archetypes-Based Software Development, ABD). An archetype is defined as a primordial object that occurs consistently and universally in business domains and in business software systems. An archetype pattern is a collaboration of archetypes. Archetypes and archetype patterns are used to capture conceptual information into domain-specific models that are utilized by ABD. The focus of ABD is on software factories: family-based development artefacts (domain-specific languages, patterns, frameworks, tools, micro processes, and others) that can be used to build the family members. We demonstrate the usage of ABD for developing laboratory information management system (LIMS) software for the Clinical and Biomedical Proteomics Group at the Leeds Institute of Molecular Medicine, University of Leeds.

  16. Software Reviews Since Acquisition Reform - The Artifact Perspective

    DTIC Science & Technology

    2004-01-01

    (Fragmentary text extracted from briefing slides: "Acquisition of Software Intensive Systems", Peter Hantos, 2004. Recoverable topics: old vs. new risk management; moving beyond a single, basic software paradigm on a single processor; performing software risk mitigation trade-offs together; integral software engineering activities; process maturity and quality frameworks.)

  17. A Framework for Testing Scientific Software: A Case Study of Testing Amsterdam Discrete Dipole Approximation Software

    NASA Astrophysics Data System (ADS)

    Shao, Hongbing

    Software testing of scientific software systems often suffers from the test oracle problem, i.e., a lack of test oracles. The Amsterdam discrete dipole approximation code (ADDA) is a scientific software system that can be used to simulate light scattering by scatterers of various types, and its testing suffers from this problem. In this thesis work, I established a framework for testing scientific software systems and evaluated it using ADDA as a case study. To test ADDA, I first used the CMMIE code as a pseudo-oracle for simulating light scattering by a homogeneous sphere scatterer; comparable results were obtained between ADDA and the CMMIE code, validating ADDA for homogeneous sphere scatterers. I then compared ADDA against an experimental result for light scattering by a homogeneous sphere; ADDA produced a simulation comparable to the measured result, further validating its use for sphere scatterers. Next, I used metamorphic testing to generate test cases covering scatterers of various geometries, orientations, and homogeneous or non-homogeneous compositions; ADDA was tested under each of these test cases and all tests passed. The use of statistical analysis together with metamorphic testing is discussed as a future direction. In short, using ADDA as a case study, I established a testing framework combining pseudo-oracles, experimental results and metamorphic testing techniques for scientific software systems that suffer from the test oracle problem. Each of these techniques is necessary and contributes to the testing of the software under test.
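
    The metamorphic idea can be shown in a few lines. In the hedged Python sketch below, simulate() is a toy stand-in for a scattering code such as ADDA; the point is that the test checks a relation between runs (rotating a homogeneous sphere must not change the result) rather than comparing any single output against an oracle.

        # Hedged sketch of metamorphic testing; 'simulate' is a toy
        # stand-in for a real scattering code, not ADDA itself.
        import math

        def simulate(radius: float, orientation_deg: float) -> float:
            """Stand-in light-scattering 'simulator' (toy cross-section model)."""
            return math.pi * radius ** 2 * 2.0   # spheres are orientation-invariant

        def metamorphic_rotation_test(radius: float) -> bool:
            """MR: rotating a homogeneous sphere must not change the result."""
            base = simulate(radius, orientation_deg=0.0)
            return all(math.isclose(base, simulate(radius, a), rel_tol=1e-9)
                       for a in (30.0, 90.0, 180.0))

        assert metamorphic_rotation_test(0.5), "metamorphic relation violated"
        print("rotation-invariance MR passed")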

  18. Coupled dam safety analysis using WinDAM

    USDA-ARS?s Scientific Manuscript database

    Windows® Dam Analysis Modules (WinDAM) is a set of modular software components that can be used to analyze overtopping and internal erosion of embankment dams. Dakota is an extensive software framework for design exploration and simulation. These tools can be coupled to create a powerful framework...

  19. Development and use of mathematical models and software frameworks for integrated analysis of agricultural systems and associated water use impacts

    USGS Publications Warehouse

    Fowler, K. R.; Jenkins, E.W.; Parno, M.; Chrispell, J.C.; Colón, A. I.; Hanson, Randall T.

    2016-01-01

    The development of appropriate water management strategies requires, in part, a methodology for quantifying and evaluating the impact of water policy decisions on regional stakeholders. In this work, we describe the framework we are developing to enhance the body of resources available to policy makers, farmers, and other community members in their efforts to understand, quantify, and assess the often competing objectives water consumers have with respect to usage. The foundation of the framework is the construction of a simulation-based optimization software tool using two existing software packages. In particular, we couple a robust optimization software suite (DAKOTA) with the USGS MF-OWHM water management simulation tool to provide a flexible software environment that enables the evaluation of one or multiple (possibly competing) user-defined (or stakeholder) objectives. We introduce the individual software components and outline the communication strategy we defined for the coupled development. We present numerical results for case studies related to crop portfolio management with several defined objectives. The objectives are not optimally satisfied for any single user class, demonstrating the capability of the software tool to aid in the evaluation of a variety of competing interests.
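
    The coupling pattern is essentially an optimisation loop around a black-box simulator. The sketch below is a hypothetical miniature: a toy water_model() stands in for MF-OWHM, a random search stands in for DAKOTA's far richer optimisers, and the weights and economics are invented.

        # Hedged sketch of simulation-based optimisation coupling; all
        # models, numbers and weights are illustrative only.
        import random

        def water_model(crop_fractions):
            """Black-box simulation: returns (farmer profit, aquifer drawdown)."""
            corn, alfalfa = crop_fractions
            profit = 3.0 * corn + 5.0 * alfalfa          # toy economics
            drawdown = 1.0 * corn + 4.0 * alfalfa        # toy water use
            return profit, drawdown

        def objective(x, w_profit=1.0, w_drawdown=0.8):
            profit, drawdown = water_model(x)
            return -w_profit * profit + w_drawdown * drawdown   # minimise

        random.seed(1)
        best_x, best_f = None, float("inf")
        for _ in range(2000):                            # optimiser side of the loop
            corn = random.random()
            x = (corn, 1.0 - corn)                       # fractions sum to 1
            f = objective(x)
            if f < best_f:
                best_x, best_f = x, f
        print("best crop split:", [round(v, 3) for v in best_x])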

  20. Relationships of online exhaled, offline exhaled, and ambient nitric oxide in an epidemiologic survey of schoolchildren.

    PubMed

    Linn, William S; Berhane, Kiros T; Rappaport, Edward B; Bastain, Tracy M; Avol, Edward L; Gilliland, Frank D

    2009-11-01

    Field measurements of exhaled nitric oxide (FeNO) and ambient nitric oxide (NO) are useful to assess both respiratory health and short-term air pollution exposure. Online real-time measurement maximizes data quality and comparability with clinical studies, but offline delayed measurement may be more practical for large epidemiological studies. To facilitate cross-comparison in larger studies, we measured FeNO and concurrent ambient NO both online and offline in 362 children at 14 schools in 8 Southern California communities. Offline breath samples were collected in bags at 100 ml/s expiratory flow with deadspace discard; online FeNO was measured at 50 ml/s. Scrubbing of ambient NO from inhaled air appeared to be nearly 100% effective online, but 50-75% effective offline. Offline samples were stored at 2-8 °C and analyzed 2-26 h later at a central laboratory. Offline and online FeNO showed a nearly (but not completely) linear relationship (R² = 0.90); unadjusted means (ranges) were 10 (4-94) and 15 (3-181) p.p.b., respectively. The ambient NO concentration range was 0-212 p.p.b. Offline FeNO was positively related to ambient NO (r = 0.30, P < 0.0001), unlike online FeNO (r = 0.09, P = 0.08), indicating that ambient NO artifactually influenced offline measurements. Offline FeNO differed between schools (P < 0.001); online FeNO did not (P = 0.26), suggesting artifacts related to offline bag storage and transport. Artifact effects were small in comparison with the between-subject variance of FeNO. An empirical statistical model predicting individual online FeNO from offline FeNO, ambient NO, and lag time before offline analysis gave R² = 0.94. Analyses of school or age differences yielded similar results from measured or model-predicted online FeNO. Either online or offline measurement of exhaled NO and concurrent ambient NO can be useful in field epidemiology. The influence of ambient NO on exhaled NO should be examined carefully, particularly for offline measurements.
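
    The empirical model has the form of a multiple linear regression. The sketch below fits such a model to synthetic data only (the paper's coefficients are not reproduced here), to show how online FeNO would be predicted from offline FeNO, ambient NO, and lag time.

        # Hedged sketch of the kind of empirical prediction model described
        # above. All data and coefficients below are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        offline_feno = rng.uniform(4, 94, n)          # p.p.b.
        ambient_no   = rng.uniform(0, 212, n)         # p.p.b.
        lag_h        = rng.uniform(2, 26, n)          # hours before analysis
        online_feno  = (1.4 * offline_feno - 0.05 * ambient_no
                        - 0.1 * lag_h + rng.normal(0, 2, n))

        X = np.column_stack([np.ones(n), offline_feno, ambient_no, lag_h])
        coef, *_ = np.linalg.lstsq(X, online_feno, rcond=None)
        pred = X @ coef
        r2 = 1 - np.sum((online_feno - pred) ** 2) / np.sum((online_feno - online_feno.mean()) ** 2)
        print("fitted coefficients:", np.round(coef, 3), " R^2 =", round(r2, 3))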

  1. Towards a comprehensive framework for reuse: A reuse-enabling software evolution environment

    NASA Technical Reports Server (NTRS)

    Basili, V. R.; Rombach, H. D.

    1988-01-01

    Reuse of products, processes and knowledge will be the key to enable the software industry to achieve the dramatic improvement in productivity and quality required to satisfy the anticipated growing demand. Although experience shows that certain kinds of reuse can be successful, general success has been elusive. A software life-cycle technology which allows broad and extensive reuse could provide the means to achieving the desired order-of-magnitude improvements. The scope of a comprehensive framework for understanding, planning, evaluating and motivating reuse practices and the necessary research activities is outlined. As a first step towards such a framework, a reuse-enabling software evolution environment model is introduced which provides a basis for the effective recording of experience, the generalization and tailoring of experience, the formalization of experience, and the (re-)use of experience.

  2. A Framework for Teaching Software Development Methods

    ERIC Educational Resources Information Center

    Dubinsky, Yael; Hazzan, Orit

    2005-01-01

    This article presents a study that aims at constructing a teaching framework for software development methods in higher education. The research field is a capstone project-based course, offered by the Technion's Department of Computer Science, in which Extreme Programming is introduced. The research paradigm is an Action Research that involves…

  3. Frameworks Coordinate Scientific Data Management

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Jet Propulsion Laboratory computer scientists developed a unique software framework to help NASA manage its massive amounts of science data. Through a partnership with the Apache Software Foundation of Forest Hill, Maryland, the technology is now available as an open-source solution and is in use by cancer researchers and pediatric hospitals.

  4. A tailored 200 parameter VME based data acquisition system for IBA at the Lund Ion Beam Analysis Facility - Hardware and software

    NASA Astrophysics Data System (ADS)

    Elfman, Mikael; Ros, Linus; Kristiansson, Per; Nilsson, E. J. Charlotta; Pallon, Jan

    2016-03-01

    With the recent advances towards modern Ion Beam Analysis (IBA), going from one- or few-parameter detector systems to multi-parameter systems, it has been necessary to expand and replace the more than twenty-year-old CAMAC-based system. A new VME multi-parameter (presently up to 200 channels) data acquisition and control system has been developed and implemented at the Lund Ion Beam Analysis Facility (LIBAF). The system is based on the VX-511 Single Board Computer (SBC), acting as master with arbiter functionality, and consists of standard VME modules such as Analog-to-Digital Converters (ADCs), Charge-to-Digital Converters (QDCs), Time-to-Digital Converters (TDCs), scalers, IO cards, and high-voltage and waveform units. The modules have been specially selected to support all of the present detector systems in the laboratory, with the option of future expansion. Typically, the detector systems consist of silicon strip detectors, silicon drift detectors and scintillator detectors, for detection of charged particles, X-rays and γ-rays. The flow of raw data buffers from the VME bus to their final storage location on a 16-terabyte network-attached storage disc (NAS disc) is described. The acquisition process, remotely controlled over one of the SBC's Ethernet channels, is also discussed. The user interface is written in the Kmax software package and is used to control the acquisition process as well as for advanced online and offline data analysis through a user-friendly graphical user interface (GUI). In this work the system implementation, layout and performance are presented. The user interface and possibilities for advanced offline analysis are also discussed and illustrated.

  5. Development of a software framework for data assimilation and its applications for streamflow forecasting in Japan

    NASA Astrophysics Data System (ADS)

    Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Yorozu, K.; Kim, S.

    2012-04-01

    Data assimilation methods have received increased attention as a means of uncertainty assessment and of enhancing forecasting capability in various areas. Despite their potential, software frameworks applicable to probabilistic approaches and data assimilation are still limited, because most hydrologic modeling software is based on a deterministic approach. In this study, we developed a hydrological modeling framework for sequential data assimilation, called MPI-OHyMoS. MPI-OHyMoS allows users to develop their own element models and to easily build a total simulation system model for hydrological simulations. Unlike process-based modeling frameworks, this software framework benefits from its object-oriented design, which flexibly represents hydrological processes without any change to the main library. Sequential data assimilation based on particle filters is available for any hydrologic model built on MPI-OHyMoS, considering various sources of uncertainty originating from input forcing, parameters and observations. The particle filters are a Bayesian learning process in which the propagation of all uncertainties is carried out by a suitable selection of randomly generated particles, without any assumptions about the nature of the distributions. In MPI-OHyMoS, ensemble simulations are parallelized and can take advantage of high-performance computing (HPC) systems. We applied this software framework to short-term streamflow forecasting for several catchments in Japan using a distributed hydrologic model. Uncertainty in model parameters and in remotely sensed rainfall data, such as X-band or C-band radar, is estimated and mitigated through sequential data assimilation.
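
    A bootstrap particle filter of the kind described is small enough to sketch. The toy one-bucket rainfall-runoff model and all numbers below are illustrative, not MPI-OHyMoS code; forcing uncertainty enters through perturbed rainfall and observation uncertainty through the Gaussian likelihood.

        # Minimal bootstrap particle filter sketch; the one-state 'bucket'
        # model and all constants are illustrative only.
        import numpy as np

        rng = np.random.default_rng(42)
        n_particles, k = 500, 0.3                 # storage recession coefficient

        storage = rng.uniform(0, 10, n_particles)         # particle states (mm)
        rain = [5.0, 0.0, 12.0, 3.0]                      # uncertain forcing (mm)
        obs_q = [1.6, 1.1, 4.2, 2.4]                      # observed discharge (mm)

        for r, q_obs in zip(rain, obs_q):
            # Propagate: perturb forcing to represent input uncertainty.
            storage = np.maximum(storage + r * rng.normal(1.0, 0.2, n_particles), 0)
            q_sim = k * storage                           # simulated discharge
            storage -= q_sim
            # Weight by observation likelihood (Gaussian error), then resample.
            w = np.exp(-0.5 * ((q_sim - q_obs) / 0.5) ** 2)
            w /= w.sum()
            storage = storage[rng.choice(n_particles, n_particles, p=w)]

        print("analysed storage mean: %.2f mm" % storage.mean())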

  6. Primal-dual techniques for online algorithms and mechanisms

    NASA Astrophysics Data System (ADS)

    Liaghat, Vahid

    An offline algorithm is one that knows the entire input in advance. An online algorithm, however, processes its input in a serial fashion. In contrast to offline algorithms, an online algorithm works in a local fashion and has to make irrevocable decisions without having the entire input. Online algorithms are often not optimal, since their irrevocable decisions may turn out to be inefficient after receiving the rest of the input. For a given online problem, the goal is to design algorithms which are competitive against the offline optimal solutions. In a classical offline scenario, it is common to see a dual analysis of problems that can be formulated as a linear or convex program. Primal-dual and dual-fitting techniques have been successfully applied to many such problems. Unfortunately, the usual tricks fall short in an online setting, since an online algorithm must make decisions without knowing even the whole program. In this thesis, we study the competitive analysis of fundamental problems in the literature, such as different variants of online matching and online Steiner connectivity, via online dual techniques. Although there are many generic tools for solving an optimization problem in the offline paradigm, much less is known about tackling online problems. The main focus of this work is to design generic techniques for solving integral linear optimization problems where the solution space is restricted via a set of linear constraints. A general family of these problems is online packing/covering problems. Our work shows that for several seemingly unrelated problems, primal-dual techniques can be successfully applied as a unifying approach for analyzing them. We believe this leads to generic algorithmic frameworks for solving online problems. In the first part of the thesis, we show the effectiveness of our techniques in stochastic settings and their applications in Bayesian mechanism design. In particular, we introduce new techniques for solving a fundamental linear optimization problem, namely, the stochastic generalized assignment problem (GAP). This packing problem generalizes various problems such as online matching, ad allocation, and bin packing. We furthermore show applications of these results in mechanism design by introducing Prophet Secretary, a novel Bayesian model for online auctions. In the second part of the thesis, we focus on covering problems. We develop the framework of "Disk Painting" for a general class of network design problems that can be characterized by proper functions. This class generalizes the node-weighted and edge-weighted variants of several well-known Steiner connectivity problems. We furthermore design a generic technique for solving the prize-collecting variants of these problems when there exists a dual analysis for the non-prize-collecting counterparts. Hence, we solve the online prize-collecting variants of several network design problems for the first time. Finally, we focus on designing techniques for online problems with mixed packing/covering constraints. We initiate the study of degree-bounded graph optimization problems in the online setting by designing an online algorithm with a tight competitive ratio for the degree-bounded Steiner forest problem. We hope these techniques establish a starting point for the analysis of the important class of online degree-bounded optimization problems on graphs.
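
    One of the simplest instances of the online primal-dual method discussed above is online fractional set cover. The sketch below uses a textbook doubling update in the spirit of Buchbinder and Naor, not the thesis's own algorithms: when a new element's covering constraint arrives unsatisfied, the primal variables are increased multiplicatively, and the implicit dual grows in lockstep to certify competitiveness.

        # Textbook-style online fractional set cover via multiplicative
        # (primal-dual) updates; constants are the simple doubling variant.
        def online_fractional_set_cover(element_stream, sets):
            """Elements arrive online; maintain a fractional cover x."""
            x = {s: 0.0 for s in sets}
            for element in element_stream:
                covering = [s for s in sets if element in sets[s]]
                # Augment until the newly arrived constraint is satisfied.
                while sum(x[s] for s in covering) < 1.0:
                    for s in covering:
                        x[s] = min(1.0, 2.0 * x[s] + 1.0 / len(covering))
            return x

        sets = {"A": {1, 2}, "B": {2, 3}, "C": {3, 4}}
        print(online_fractional_set_cover([1, 3, 4], sets))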

  7. ACS from development to operations

    NASA Astrophysics Data System (ADS)

    Caproni, Alessandro; Colomer, Pau; Jeram, Bogdan; Sommer, Heiko; Chiozzi, Gianluca; Mañas, Miguel M.

    2016-08-01

    The ALMA Common Software (ACS) provides the infrastructure of the distributed software system of ALMA and other projects. ACS, built on top of CORBA and Data Distribution Service (DDS) middleware, is based on a Component-Container paradigm and hides the complexity of the middleware, allowing the developer to focus on domain-specific issues. With the transition of the ALMA observatory from construction to operations, ACS effort now focuses primarily on scalability, stability and robustness rather than on new features. The transition came together with a shorter release cycle and more extensive testing. For scalability, the most problematic area has been the CORBA notification service, used to implement the publisher-subscriber pattern; because of the asynchronous nature of the paradigm, a lot of effort has been spent to improve its stability and recovery from run-time errors. The original bulk data mechanism, implemented using the CORBA Audio/Video Streaming Service, showed its limitations and has been replaced with a more performant and scalable DDS implementation. Operational needs soon showed the difference between release cycles for online software (i.e. used during observations) and offline software, which requires much more frequent releases. This paper describes the impact the transition from construction to operations had on ACS, the solutions adopted so far, and a look at future evolution.

  8. Co-design of software and hardware to implement remote sensing algorithms

    NASA Astrophysics Data System (ADS)

    Theiler, James P.; Frigo, Janette R.; Gokhale, Maya; Szymanski, John J.

    2002-01-01

    Both for offline searches through large data archives and for onboard computation at the sensor head, there is a growing need for ever-more rapid processing of remote sensing data. For many algorithms of use in remote sensing, the bulk of the processing takes place in an "inner loop" with a large number of simple operations. For these algorithms, dramatic speedups can often be obtained with specialized hardware. The difficulty and expense of digital design continue to limit the applicability of this approach, but the development of new design tools is making it more feasible, and some notable successes have been reported. On the other hand, it is often the case that processing can also be accelerated by adopting a more sophisticated algorithm design. Unfortunately, a more sophisticated algorithm is much harder to implement in hardware, so these approaches are often at odds with each other. With careful planning, however, it is sometimes possible to combine software and hardware design in such a way that each complements the other, and the final implementation achieves a speedup that would not have been possible with a hardware-only or a software-only solution. We will in particular discuss the co-design of software and hardware to achieve substantial speedup of algorithms for multispectral image segmentation and for endmember identification.
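
    The "simple inner loop" observation can be made concrete on the software side alone: the same multispectral kernel written per-pixel and as one fused vector pass. The spectral-angle example below is illustrative and is not taken from the paper.

        # Sketch: the hot inner loop as a per-pixel loop vs. a vectorised
        # kernel, the form one would also map onto specialised hardware.
        import numpy as np

        rng = np.random.default_rng(0)
        cube = rng.random((512, 6))        # 512 pixels x 6 spectral bands
        ref = rng.random(6)                # reference spectrum (e.g., endmember)

        def spectral_angle_loop(cube, ref):
            out = np.empty(len(cube))
            for i, pix in enumerate(cube):                    # the hot inner loop
                cos = pix @ ref / (np.linalg.norm(pix) * np.linalg.norm(ref))
                out[i] = np.arccos(np.clip(cos, -1.0, 1.0))
            return out

        def spectral_angle_vec(cube, ref):
            cos = cube @ ref / (np.linalg.norm(cube, axis=1) * np.linalg.norm(ref))
            return np.arccos(np.clip(cos, -1.0, 1.0))         # one fused pass

        assert np.allclose(spectral_angle_loop(cube, ref), spectral_angle_vec(cube, ref))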

  9. Kameleon Live: An Interactive Cloud Based Analysis and Visualization Platform for Space Weather Researchers

    NASA Astrophysics Data System (ADS)

    Pembroke, A. D.; Colbert, J. A.

    2015-12-01

    The Community Coordinated Modeling Center (CCMC) provides hosting for many of the simulations used by the space weather community of scientists, educators, and forecasters. CCMC users may submit model runs through the Runs on Request system, which produces static visualizations of model output in the browser, while further analysis may be performed off-line via Kameleon, CCMC's cross-language access and interpolation library. Off-line analysis may be suitable for power-users, but storage and coding requirements present a barrier to entry for non-experts. Moreover, a lack of a consistent framework for analysis hinders reproducibility of scientific findings. To that end, we have developed Kameleon Live, a cloud based interactive analysis and visualization platform. Kameleon Live allows users to create scientific studies built around selected runs from the Runs on Request database, perform analysis on those runs, collaborate with other users, and disseminate their findings among the space weather community. In addition to showcasing these novel collaborative analysis features, we invite feedback from CCMC users as we seek to advance and improve on the new platform.

  10. Software design and implementation concepts for an interoperable medical communication framework.

    PubMed

    Besting, Andreas; Bürger, Sebastian; Kasparick, Martin; Strathen, Benjamin; Portheine, Frank

    2018-02-23

    The new IEEE 11073 service-oriented device connectivity (SDC) standard proposals for networked point-of-care and surgical devices constitute the basis for improved interoperability due to their vendor independence. To accelerate the adoption of the standard, a reference implementation is indispensable. However, the implementation of such a framework has to overcome several non-trivial challenges. First, the high level of complexity of the underlying standard must be reflected in the software design. Second, an efficient implementation has to consider the limited resources of the underlying hardware. Moreover, the framework's purpose of realizing a distributed system demands a high degree of reliability of the framework itself and of its internal mechanisms. Additionally, a framework must provide an easy-to-use and fail-safe application programming interface (API). In this work, we address these challenges by discussing suitable software engineering principles and practical coding guidelines. A descriptive model is developed that identifies key strategies. General feasibility is shown by outlining environments in which our implementation has been utilized.

  11. Continuous integration for concurrent MOOSE framework and application development on GitHub

    DOE PAGES

    Slaughter, Andrew E.; Peterson, John W.; Gaston, Derek R.; ...

    2015-11-20

    For the past several years, Idaho National Laboratory’s MOOSE framework team has employed modern software engineering techniques (continuous integration, joint application/framework source code repositories, automated regression testing, etc.) in developing closed-source multiphysics simulation software (Gaston et al., Journal of Open Research Software vol. 2, article e10, 2014). In March 2014, the MOOSE framework was released under an open source license on GitHub, significantly expanding and diversifying the pool of current active and potential future contributors on the project. Despite this recent growth, the same philosophy of concurrent framework and application development continues to guide the project’s development roadmap. Several specific practices, including techniques for managing multiple repositories, conducting automated regression testing, and implementing a cascading build process, are discussed in this short paper. Furthermore, special attention is given to describing the manner in which these practices naturally synergize with the GitHub API and GitHub-specific features such as issue tracking, Pull Requests, and project forks.
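
    As a hedged illustration of the GitHub API synergy mentioned above (this is not MOOSE's actual CI code), the sketch below posts a regression-test result as a commit status; the repository, SHA and token are placeholders.

        # Hedged sketch: a CI step reporting a test result as a GitHub
        # commit status. Repo, SHA and token values are placeholders.
        import json, urllib.request

        def post_commit_status(owner, repo, sha, state, token, description=""):
            """POST /repos/{owner}/{repo}/statuses/{sha} (GitHub REST API)."""
            url = f"https://api.github.com/repos/{owner}/{repo}/statuses/{sha}"
            body = json.dumps({"state": state,             # success | failure | ...
                               "context": "regression-tests",
                               "description": description}).encode()
            req = urllib.request.Request(url, data=body, method="POST", headers={
                "Authorization": f"token {token}",
                "Accept": "application/vnd.github+json"})
            with urllib.request.urlopen(req) as resp:
                return resp.status

        # post_commit_status("idaholab", "moose", "<sha>", "success", "<token>")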

  12. Continuous integration for concurrent MOOSE framework and application development on GitHub

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaughter, Andrew E.; Peterson, John W.; Gaston, Derek R.

    For the past several years, Idaho National Laboratory’s MOOSE framework team has employed modern software engineering techniques (continuous integration, joint application/framework source code repositories, automated regression testing, etc.) in developing closed-source multiphysics simulation software (Gaston et al., Journal of Open Research Software vol. 2, article e10, 2014). In March 2014, the MOOSE framework was released under an open source license on GitHub, significantly expanding and diversifying the pool of current active and potential future contributors on the project. Despite this recent growth, the same philosophy of concurrent framework and application development continues to guide the project’s development roadmap. Several specific practices, including techniques for managing multiple repositories, conducting automated regression testing, and implementing a cascading build process, are discussed in this short paper. Furthermore, special attention is given to describing the manner in which these practices naturally synergize with the GitHub API and GitHub-specific features such as issue tracking, Pull Requests, and project forks.

  13. Design and implementation of new control room system in Damavand tokamak

    NASA Astrophysics Data System (ADS)

    Rasouli, H.; Zamanian, H.; Gheidi, M.; Kheiri-Fard, M.; Kouhi, A.

    2017-07-01

    The aim of this paper is the design and implementation of an up-to-date control room. The previous control room had many constraints and was not suited to sophisticated diagnostic systems or to modern multivariable control systems. Although it performed well, among all similar plants, for the experiments considered and for implementing offline algorithms, it needed to be extended to support more complex algorithmic mechanisms, and this work introduces our efforts in this area. Accordingly, four leading systems were designed and implemented: a real-time control system, an online Data Acquisition System (DAS), an offline DAS, and a monitoring and data transmission system. In the control system, three real-time control modules were established based on Digital Signal Processors (DSPs), making it possible to implement classical linear and nonlinear intelligent controllers for the plasma position and its elongation. The online DAS was constructed in two modules, which measured and monitored the charging voltages and currents of the capacitor banks and the pressure in different parts of the vacuum vessel. Real-time processing of the online data also implemented the safety protocol for plant performance. In addition, the offline DAS was organized in 13 modules based on Field Programmable Gate Arrays (FPGAs); this system can be used for gathering all diagnostic, control, and performance data in 156 channels. Data transmission and storage on the server were provided by the data transmission network and the standard MDSplus protocol. Moreover, monitoring software was designed to display the required plots for physics analyses. Taking everything into account, this new platform can improve the quality and quantity of research activities in plasma physics for the Damavand tokamak.

  14. An Ontology and a Software Framework for Competency Modeling and Management

    ERIC Educational Resources Information Center

    Paquette, Gilbert

    2007-01-01

    The importance given to competency management is well justified. Acquiring new competencies is the central goal of any education or knowledge management process. Thus, it must be embedded in any software framework as an instructional engineering tool, to inform the runtime environment of the knowledge that is processed by actors, and their…

  15. Software cost/resource modeling: Software quality tradeoff measurement

    NASA Technical Reports Server (NTRS)

    Lawler, R. W.

    1980-01-01

    A conceptual framework for treating software quality from a total system perspective is developed. Examples are given to show how system quality objectives may be allocated to hardware and software; to illustrate trades among quality factors, both hardware and software, to achieve system performance objectives; and to illustrate the impact of certain design choices on software functionality.

  16. A Framework for Simulation of Aircraft Flyover Noise Through a Non-Standard Atmosphere

    NASA Technical Reports Server (NTRS)

    Arntzen, Michael; Rizzi, Stephen A.; Visser, Hendrikus G.; Simons, Dick G.

    2012-01-01

    This paper describes a new framework for the simulation of aircraft flyover noise through a non-standard atmosphere. Central to the framework is a ray-tracing algorithm which defines multiple curved propagation paths, if the atmosphere allows, between the moving source and listener. Because each path has a different emission angle, synthesis of the sound at the source must be performed independently for each path. The time delay, spreading loss and absorption (ground and atmosphere) are integrated along each path, and applied to each synthesized aircraft noise source to simulate a flyover. A final step assigns each resulting signal to its corresponding receiver angle for the simulation of a flyover in a virtual reality environment. Spectrograms of the results from a straight path and a curved path modeling assumption are shown. When the aircraft is at close range, the straight path results are valid. Differences appear especially when the source is relatively far away at shallow elevation angles. These differences, however, are not significant in common sound metrics. While the framework used in this work performs off-line processing, it is conducive to real-time implementation.
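
    For the straight-path case, the per-path bookkeeping described above reduces to a few lines. The sketch below, with an illustrative speed of sound and atmospheric absorption omitted, computes the range, time delay and spherical (1/r) spreading loss from the source-listener geometry.

        # Hedged sketch of straight-path propagation bookkeeping; constants
        # are illustrative and absorption (ground, atmosphere) is omitted.
        import math

        C_SOUND = 340.0   # m/s, nominal speed of sound

        def straight_path(source_xyz, listener_xyz):
            r = math.dist(source_xyz, listener_xyz)          # path length (m)
            delay_s = r / C_SOUND                            # emission-to-reception
            spreading_db = -20.0 * math.log10(max(r, 1.0))   # 1/r amplitude law
            return r, delay_s, spreading_db

        # Aircraft at 300 m altitude, 1 km ahead of a ground listener.
        r, delay, loss = straight_path((0.0, 0.0, 300.0), (1000.0, 0.0, 0.0))
        print(f"range {r:.0f} m, delay {delay:.2f} s, spreading {loss:.1f} dB")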

  17. Connections between online harassment and offline violence among youth in Central Thailand.

    PubMed

    Ojanen, Timo Tapani; Boonmongkon, Pimpawun; Samakkeekarom, Ronnapoom; Samoh, Nattharat; Cholratana, Mudjalin; Guadamuz, Thomas Ebanan

    2015-06-01

    Increasing evidence indicates that face-to-face (offline) youth violence and online harassment are closely interlinked, but evidence from Asian countries remains limited. This study was conducted to quantitatively assess the associations between offline violence and online harassment among youth in Central Thailand. Students and out-of-school youth (n=1,234, age: 15-24 years) residing, studying, and/or working in a district in Central Thailand were surveyed. Participants were asked about their involvement in online harassment and in verbal, physical, sexual, and domestic types of offline violence, as perpetrators, victims, and witnesses within a 1-year period. Multivariable logistic regression was used to assess independent associations between different kinds of involvement in offline violence and online harassment. Perpetration and victimization within the past year were both reported by roughly half of the youth both online and offline. Over three quarters had witnessed violence or harassment. Perpetrating online harassment was independently associated with being a victim online (adjusted odds ratio [AOR]=10.1; 95% CI [7.5, 13.6]), and perpetrating offline violence was independently associated with being a victim offline (AOR=11.1; 95% CI [8.1, 15.0]). Perpetrating online harassment was independently associated with perpetrating offline violence (AOR=2.7; 95% CI [1.9, 3.8]), and being a victim online was likewise independently associated with being a victim offline (AOR=2.6; 95% CI [1.9, 3.6]). Online harassment and offline violence are interlinked among Thai youth, as in other countries studied so far. Interventions to reduce either might best address both together. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Connections Between Online Harassment and Offline Violence among Youth in Central Thailand

    PubMed Central

    Ojanen, Timo Tapani; Boonmongkon, Pimpawun; Samakkeekarom, Ronnapoom; Samoh, Nattharat; Cholratana, Mudjalin

    2015-01-01

    Increasing evidence indicates that face-to-face (offline) youth violence and online harassment are closely interlinked, but evidence from Asian countries remains limited. This study was conducted to quantitatively assess the associations between offline violence and online harassment among youth in Central Thailand. Students and out-of-school youth (n = 1,234, age: 15-24 years) residing, studying, and/or working in a district in Central Thailand were surveyed. Participants were asked about their involvement in online harassment and in verbal, physical, sexual, and domestic types of offline violence, as perpetrators, victims, and witnesses within a 1-year period. Multivariable logistic regression was used to assess independent associations between different kinds of involvement in offline violence and online harassment. Perpetration and victimization within the past year were both reported by roughly half of the youth both online and offline. Over three quarters had witnessed violence or harassment. Perpetrating online harassment was independently associated with being a victim online (adjusted odds ratio [AOR] = 10.1; 95% CI [7.5, 13.6]), and perpetrating offline violence was independently associated with being a victim offline (AOR = 11.1; 95% CI [8.1, 15.0]). Perpetrating online harassment was independently associated with perpetrating offline violence (AOR = 2.7; 95% CI [1.9, 3.8]), and being a victim online was likewise independently associated with being a victim offline (AOR = 2.6; 95% CI [1.9, 3.6]). Online harassment and offline violence are interlinked among Thai youth, as in other countries studied so far. Interventions to reduce either might best address both together. PMID:25913812

  19. Flexible and Low-Cost Measurements for Space Software Development- The Measurements Exploration Framework

    NASA Astrophysics Data System (ADS)

    Marculescu, Bogdan; Feldt, Robert; Torkar, Richard; Green, Lars-Goran; Liljegren, Thomas; Hult, Erika

    2011-08-01

    Verification and validation is an important part of software development and accounts for a significant share of the costs of such a project. For developers of life- or mission-critical systems, such as software developed for space applications, a balance must be reached between ensuring the quality of the system through extensive and rigorous testing and reducing costs so that the company can compete. Ensuring the quality of any system starts with a quality development process. To evaluate both the software development process and the product itself, measurements are needed. A balance must then be struck between ensuring the best possible quality of both process and product on the one hand, and reducing the cost of performing measurements on the other. A number of measurements have already been defined and are in use. For some of these, data collection can be automated as well, further lowering the costs of implementing them. In practice, however, there may be situations where existing measurements are unsuitable for a variety of reasons. This paper describes a framework for creating low-cost, flexible measurements in areas where initial information is scarce. The framework, called the Measurements Exploration Framework, is aimed in particular at the space software development industry and was developed in such an environment.

  20. Architecture for autonomy

    NASA Astrophysics Data System (ADS)

    Broten, Gregory S.; Monckton, Simon P.; Collier, Jack; Giesbrecht, Jared

    2006-05-01

    In 2002 Defence R&D Canada changed research direction from pure tele-operated land vehicles to general autonomy for land, air, and sea craft. The unique constraints of the military environment, coupled with the complexity of autonomous systems, drove DRDC to carefully plan a research and development infrastructure that would provide state-of-the-art tools without restricting research scope. DRDC's long-term objectives for its autonomy program address disparate unmanned ground vehicle (UGV), unattended ground sensor (UGS), air (UAV), and subsea and surface (UUV and USV) vehicles operating together with minimal human oversight. Individually, these systems will range in complexity from simple reconnaissance mini-UAVs streaming video to sophisticated autonomous combat UGVs exploiting embedded and remote sensing. Together, these systems can provide low-risk, long-endurance battlefield services, assuming they can communicate and cooperate with manned and unmanned systems. A key enabling technology for this new research is a software architecture capable of meeting both DRDC's current and future requirements. DRDC built upon recent advances in the computing science field while developing its software architecture, known as the Architecture for Autonomy (AFA). Although a well-established practice in computing science, frameworks have only recently entered common use in unmanned vehicles. For industry and government, the complexity, cost, and time to re-implement stable systems often exceed the perceived benefits of adopting a modern software infrastructure. Thus, most persevere with legacy software, adapting and modifying it when and wherever possible or necessary, and adopting strategic software frameworks only when no justifiable legacy exists. Conversely, academic programs with short one- or two-year projects frequently exploit strategic software frameworks, but with little enduring impact. The open-source movement radically changes this picture. Academic frameworks, open to public scrutiny and modification, now rival commercial frameworks in both quality and economic impact. Further, industry now realizes that open-source frameworks can reduce the cost and risk of systems engineering. This paper describes the Architecture for Autonomy implemented by DRDC and how this architecture meets DRDC's current needs. It also presents an argument for why this architecture should satisfy DRDC's future requirements as well.

  1. Development of software for computing forming information using a component based approach

    NASA Astrophysics Data System (ADS)

    Ko, Kwang Hee; Park, Jiing Seo; Kim, Jung; Kim, Young Bum; Shin, Jong Gye

    2009-12-01

    In the shipbuilding industry, manufacturing technology has advanced at an unprecedented pace over the last decade. As a result, many automatic systems for cutting, welding, etc. have been developed and employed in the manufacturing process, and productivity has accordingly increased drastically. Despite such improvement in manufacturing technology, however, the development of an automatic system for fabricating curved hull plates remains at an early stage, since the hardware and software for automating the curved hull fabrication process must be developed differently depending on the dimensions of the plates and on the forming methods and manufacturing processes of each shipyard. To deal with this problem, it is necessary to create a "plug-in" framework which can adopt various kinds of hardware and software to construct a fully automatic fabrication system. In this paper, a framework for the automatic fabrication of curved hull plates is proposed, which consists of four components and related software. In particular, the software module for computing fabrication information is developed using the ooCBD development methodology, which can interface with other hardware and software with minimum effort. Examples of the proposed framework applied to medium and large shipyards are presented.

  2. Endnote Web tutorial for BJCVS/RBCCV

    PubMed Central

    de Oliveira, Marcos Aurélio Barboza; dos Santos, Carlos Alberto; Brandi, Antônio Carlos; Botelho, Paulo Henrique Husseini; Sciarra, Adília Maria Pires; Braile, Domingo Marcolino

    2015-01-01

    At present, many useful tools for reference management are available. They can be either offline software or websites accessible to all users on the internet. Their aim is to facilitate the production of scientific text; to accomplish that, the required bibliographic style should be available in the tool, and the program has to be free. In this tutorial, we present EndNote Web®, a bibliographic reference management program meeting these two requirements: it contains the Brazilian Journal of Cardiovascular Surgery reference format, and its use is free of charge after signing in from an IP-registered terminal in Web of Science®. PMID:26107457

  3. Development of the Data Acquisition and Processing System for a Pulsed 2-Micron Coherent Doppler Lidar System

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Koch, Grady J.; Kavaya, Michael J.

    2010-01-01

    A general overview of the development of a data acquisition and processing system is presented for a pulsed, 2-micron coherent Doppler lidar system located at NASA Langley Research Center in Hampton, Virginia, USA. It is a comprehensive system that performs high-speed data acquisition, analysis, and data display both in real time and offline. The first flight missions are scheduled for the summer of 2010 as part of the NASA Genesis and Rapid Intensification Processes (GRIP) campaign for the study of hurricanes. The system, as well as its control software, is reviewed, and its requirements and unique features are discussed.

  4. Robot welding process control

    NASA Technical Reports Server (NTRS)

    Romine, Peter L.

    1991-01-01

    This final report documents the development and installation of software and hardware for Robotic Welding Process Control. Primary emphasis is on serial communications between the CYRO 750 robotic welder, Heurikon minicomputer running Hunter & Ready VRTX, and an IBM PC/AT, for offline programming and control and closed-loop welding control. The requirements for completion of the implementation of the Rocketdyne weld tracking control are discussed. The procedure for downloading programs from the Intergraph, over the network, is discussed. Conclusions are made on the results of this task, and recommendations are made for efficient implementation of communications, weld process control development, and advanced process control procedures using the Heurikon.

  5. Real-time calibration and alignment of the LHCb RICH detectors

    NASA Astrophysics Data System (ADS)

    HE, Jibo

    2017-12-01

    In 2015, the LHCb experiment established a new and unique software trigger strategy with the purpose of increasing the purity of the signal events by applying the same algorithms online and offline. To achieve this, real-time calibration and alignment of all LHCb sub-systems is needed to provide vertexing, tracking, and particle identification of the best possible quality. The calibration of the refractive index of the RICH radiators, the calibration of the Hybrid Photon Detector image, and the alignment of the RICH mirror system, are reported in this contribution. The stability of the RICH performance and the particle identification performance are also discussed.

  6. A Framework for Performing Verification and Validation in Reuse Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1997-01-01

    Verification and Validation (V&V) is currently performed during application development for many systems, especially safety-critical and mission- critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.

  7. Research and Design of the Three-tier Distributed Network Management System Based on COM / COM + and DNA

    NASA Astrophysics Data System (ADS)

    Liang, Likai; Bi, Yushen

    Considering the distributed network management system's demands for a high degree of distribution, extensibility and reusability, a framework model for a three-tier distributed network management system based on COM/COM+ and DNA is proposed, which adopts software component technology and N-tier application framework design ideas. We also give a concrete design plan for each layer of this model. Finally, we discuss the internal runtime behaviour of each layer in the distributed network management system's framework model.

  8. Use of social network sites and instant messaging does not lead to increased offline social network size, or to emotionally closer relationships with offline network members.

    PubMed

    Pollet, Thomas V; Roberts, Sam G B; Dunbar, Robin I M

    2011-04-01

    The effect of Internet use on social relationships is still a matter of intense debate. This study examined the relationships between use of social media (instant messaging and social network sites), network size, and emotional closeness in a sample of 117 individuals aged 18 to 63 years old. Time spent using social media was associated with a larger number of online social network "friends." However, time spent using social media was not associated with larger offline networks, or feeling emotionally closer to offline network members. Further, those that used social media, as compared to non-users of social media, did not have larger offline networks, and were not emotionally closer to offline network members. These results highlight the importance of considering potential time and cognitive constraints on offline social networks when examining the impact of social media use on social relationships.

  9. On-Line and Off-Line Assessment of Metacognition

    ERIC Educational Resources Information Center

    Saraç, Seda; Karakelle, Sema

    2012-01-01

    The study investigates the interrelationships between different on-line and off-line measures for assessing metacognition. The participants were 47 fifth grade elementary students. Metacognition was assessed through two off-line and two on-line measures. The off-line measures consisted of a teacher rating scale and a self-report questionnaire. The…

  10. Modeling and Detecting Feature Interactions among Integrated Services of Home Network Systems

    NASA Astrophysics Data System (ADS)

    Igaki, Hiroshi; Nakamura, Masahide

    This paper presents a framework for formalizing and detecting feature interactions (FIs) in the emerging smart home domain. We first establish a model of a home network system (HNS), where every networked appliance (or the HNS environment) is characterized as an object consisting of properties and methods. Then, every HNS service is defined as a sequence of method invocations on the appliances. Within the model, we next formalize two kinds of FIs: (a) appliance interactions and (b) environment interactions. An appliance interaction occurs when two method invocations conflict on the same appliance, whereas an environment interaction arises when two method invocations conflict indirectly via the environment. Finally, we propose offline and online methods that detect FIs before service deployment and during execution, respectively. Through a case study with seven practical services, it is shown that the proposed framework is generic enough to capture feature interactions in HNS integrated services. We also discuss several FI resolution schemes within the proposed framework.
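
    A minimal sketch of the appliance-interaction check described above, with invented appliance and method names (the paper's actual model is richer and also covers indirect conflicts via environment properties). A service is represented here as a list of (appliance, method, value) invocations:

```python
# Hedged sketch (invented names, not the paper's implementation): an
# appliance interaction is two services writing conflicting values through
# the same method on the same appliance.

def appliance_interactions(service_a, service_b):
    conflicts = []
    for dev_a, meth_a, val_a in service_a:
        for dev_b, meth_b, val_b in service_b:
            if dev_a == dev_b and meth_a == meth_b and val_a != val_b:
                conflicts.append((dev_a, meth_a, val_a, val_b))
    return conflicts

# Two services fight over the same air conditioner mode:
air_cleaning = [("aircon", "set_mode", "ventilate")]
hvac_comfort = [("aircon", "set_mode", "cool")]
print(appliance_interactions(air_cleaning, hvac_comfort))
# -> [('aircon', 'set_mode', 'ventilate', 'cool')]
```

    An offline detector would run such a check over all service pairs before deployment; an online detector would apply it to the invocations actually issued at run time.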

  11. A framework for learning and planning against switching strategies in repeated games

    NASA Astrophysics Data System (ADS)

    Hernandez-Leal, Pablo; Munoz de Cote, Enrique; Sucar, L. Enrique

    2014-04-01

    Intelligent agents, human or artificial, often change their behaviour as they interact with other agents. For an agent to optimise its performance when interacting with such agents, it must be capable of detecting such changes and adapting to them. This work presents an approach for dealing effectively with non-stationary switching opponents in a repeated game context. Our main contribution is a framework for online learning and planning against opponents that switch strategies. We present how two opponent modelling techniques work within the framework and demonstrate the usefulness of the approach experimentally in the iterated prisoner's dilemma, when the opponent is modelled as an agent that switches between different strategies (e.g. TFT, Pavlov and Bully). The results of both models were compared against each other and against a state-of-the-art non-stationary reinforcement learning technique. The results show that our approach obtains competitive performance without needing an offline training phase, in contrast to the state-of-the-art techniques.
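
    A hedged sketch of one way to detect a strategy switch (this is an illustration of the general idea, not the paper's algorithm): model the opponent with a candidate strategy and treat a drop in the model's recent predictive accuracy as evidence of a switch. The strategy set is reduced to TFT and Bully for brevity; Pavlov is omitted. Moves are "C" (cooperate) and "D" (defect):

```python
def tft(my_hist, opp_hist):
    """Tit-for-tat: cooperate first, then copy the opponent's last move."""
    return opp_hist[-1] if opp_hist else "C"

def bully(my_hist, opp_hist):
    """Bully: always defect."""
    return "D"

STRATEGIES = {"TFT": tft, "Bully": bully}

def switched(model, my_hist, opp_hist, window=5, threshold=0.6):
    """True if `model` predicted fewer than `threshold` of the opponent's
    last `window` moves (suggesting the opponent changed strategy)."""
    n = min(window, len(opp_hist) - 1)
    if n <= 0:
        return False
    hits = 0
    for i in range(1, n + 1):
        # Predict move opp_hist[-i] from the histories strictly before it;
        # from the opponent's viewpoint, our history is their "opponent".
        predicted = STRATEGIES[model](opp_hist[:-i], my_hist[:-i])
        hits += predicted == opp_hist[-i]
    return hits / n < threshold

# Opponent plays TFT for 6 rounds, then switches to Bully.
mine = list("CCDCDC") + list("CCCC")
theirs = list("CCCDCD") + list("DDDD")
print(switched("TFT", mine, theirs))   # True: TFT no longer explains the moves
```

    Once a switch is flagged, the agent can re-estimate the opponent model and re-plan its best response against the new strategy.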

  12. Online and offline video game use in adolescents: measurement invariance and problem severity.

    PubMed

    Smohai, Máté; Urbán, Róbert; Griffiths, Mark D; Király, Orsolya; Mirnics, Zsuzsanna; Vargha, András; Demetrovics, Zsolt

    2017-01-01

    Despite the increasing popularity of video game playing, little is known about the similarities and differences between online and offline video game players. The aims of this study were (i) to test the applicability and the measurement invariance of the previously developed Problematic Online Gaming Questionnaire (POGQ) in both online and offline gamers and (ii) to examine the differences between these groups. Video game use habits and POGQ were assessed in a sample of 1,964 (71% male) adolescent videogame players. Those gamers who played at least sometimes in an online context were considered "online gamers," while those who played videogames exclusively offline were considered "offline gamers." Confirmatory factor analysis supported the measurement invariance across online and offline videogame players. According to the multiple indicators multiple causes (MIMIC) model, online gamers were more likely to score higher on the overuse, interpersonal conflict, and social isolation subscales of the POGQ. The results of the present study suggest that online and offline gaming can be assessed using the same psychometric instrument. These findings open the possibility for future research studies concerning problematic video gaming to include participants who exclusively play online or offline games, or both. However, the study also identified important structural features about how online and offline gaming might contribute differently to problematic use. These results provide important information that could be utilized in parental education and prevention programs about the possible detrimental consequences of online vs. offline video gaming.

  13. ActiveTutor: Towards More Adaptive Features in an E-Learning Framework

    ERIC Educational Resources Information Center

    Fournier, Jean-Pierre; Sansonnet, Jean-Paul

    2008-01-01

    Purpose: This paper aims to sketch the emerging notion of auto-adaptive software when applied to e-learning software. Design/methodology/approach: The study and the implementation of the auto-adaptive architecture are based on the operational framework "ActiveTutor" that is used for teaching the topic of computer science programming in first-grade…

  14. Developing a Pedagogical-Technical Framework to Improve Creative Writing

    ERIC Educational Resources Information Center

    Chong, Stefanie Xinyi; Lee, Chien-Sing

    2012-01-01

    There are many evidences of motivational and educational benefits from the use of learning software. However, there is a lack of study with regards to the teaching of creative writing. This paper aims to bridge the following gaps: first, the need for a proper framework for scaffolding creative writing through learning software; second, the lack of…

  15. PyPWA: A partial-wave/amplitude analysis software framework

    NASA Astrophysics Data System (ADS)

    Salgado, Carlos

    2016-05-01

    The PyPWA project aims to develop a software framework for Partial Wave and Amplitude Analysis of data, providing the user with software tools to identify resonances from multi-particle final states in photoproduction. Most of the code is written in Python. The software is divided into two main branches: one general shell in which amplitude parameters (or any parametric model) are estimated from the data. This branch also includes software to produce simulated data sets using the fitted amplitudes. A second branch contains a specific realization of the isobar model (with room to include Deck-type and other isobar-model extensions) to perform PWA with an interface into the computer resources at Jefferson Lab. We are currently implementing parallelism and vectorization using Intel's Xeon Phi family of coprocessors.
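
    To illustrate the "estimate parameters of any parametric model from event data" idea in the general-shell branch, here is a hedged toy example (this is not the PyPWA interface): a Gaussian stands in for a physics intensity model and is fitted to simulated events by unbinned maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
events = rng.normal(loc=0.2, scale=1.0, size=5000)  # toy "measured" events

def neg_log_likelihood(params, data):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    # Gaussian stands in for an amplitude-intensity model
    pdf = np.exp(-0.5 * ((data - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return -np.sum(np.log(pdf + 1e-300))

result = minimize(neg_log_likelihood, x0=[0.0, 0.5], args=(events,),
                  method="Nelder-Mead")
print(result.x)   # fitted (mu, sigma), close to (0.2, 1.0)
```

    The fitted parameters could then be used to generate simulated data sets, mirroring the workflow the abstract describes.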

  16. EMMA: a new paradigm in configurable software

    DOE PAGES

    Nogiec, J. M.; Trombly-Freytag, K.

    2017-11-23

    EMMA is a framework designed to create a family of configurable software systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event driven architecture. The EMMA framework has been built upon the premise of composing software systems from independent components. It opens up opportunities for reuse of components and their functionality and composing them together in many different ways. As a result, it provides the developer of test and measurement applications with a lightweight alternative to microservices, while sharing their various advantages, including composability, loose coupling, encapsulation, and reuse.
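
    A minimal event-bus sketch of the loosely coupled, event-driven composition style described above (component and event names are hypothetical, not EMMA's API): components never call each other directly; they interact only through published events.

```python
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

# Independent components are composed by wiring them to topics:
bus = EventBus()
bus.subscribe("measurement.done", lambda m: print("logger:", m))
bus.subscribe("measurement.done", lambda m: print("plotter:", m["value"]))
bus.publish("measurement.done", {"channel": 3, "value": 1.27})
```

    Because components share only the event contract, they can be recomposed into different test and measurement applications, which is the reuse property the abstract emphasizes.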

  17. EMMA: A New Paradigm in Configurable Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nogiec, J. M.; Trombly-Freytag, K.

    EMMA is a framework designed to create a family of configurable software systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event driven architecture. The EMMA framework has been built upon the premise of composing software systems from independent components. It opens up opportunities for reuse of components and their functionality and composing them together in many different ways. It provides the developer of test and measurement applications with a lightweight alternative to microservices, while sharing their various advantages, including composability, loose coupling, encapsulation, and reuse.

  18. EMMA: a new paradigm in configurable software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nogiec, J. M.; Trombly-Freytag, K.

    EMMA is a framework designed to create a family of configurable software systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event driven architecture. The EMMA framework has been built upon the premise of composing software systems from independent components. It opens up opportunities for reuse of components and their functionality and composing them together in many different ways. As a result, it provides the developer of test and measurement applications with a lightweight alternative to microservices, while sharing their various advantages, including composability, loose coupling, encapsulation, and reuse.

  19. EMMA: a new paradigm in configurable software

    NASA Astrophysics Data System (ADS)

    Nogiec, J. M.; Trombly-Freytag, K.

    2017-10-01

    EMMA is a framework designed to create a family of configurable software systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event driven architecture. The EMMA framework has been built upon the premise of composing software systems from independent components. It opens up opportunities for reuse of components and their functionality and composing them together in many different ways. It provides the developer of test and measurement applications with a lightweight alternative to microservices, while sharing their various advantages, including composability, loose coupling, encapsulation, and reuse.

  20. A Cloud-based, Open-Source, Command-and-Control Software Paradigm for Space Situational Awareness (SSA)

    NASA Astrophysics Data System (ADS)

    Melton, R.; Thomas, J.

    With the rapid growth in the number of space actors, there has been a marked increase in the complexity and diversity of software systems utilized to support SSA target tracking, indication, warning, and collision avoidance. Historically, most SSA software has been constructed with "closed" proprietary code, which limits interoperability, inhibits the code transparency that some SSA customers need to develop domain expertise, and prevents the rapid injection of innovative concepts into these systems. Open-source aerospace software, a rapidly emerging, alternative trend in code development, is based on open collaboration, which has the potential to bring greater transparency, interoperability, flexibility, and reduced development costs. Open-source software is easily adaptable, geared to rapidly changing mission needs, and can generally be delivered at lower costs to meet mission requirements. This paper outlines Ball's COSMOS C2 system, a fully open-source, web-enabled, command-and-control software architecture which provides several unique capabilities to move the current legacy SSA software paradigm to an open-source model that effectively enables pre- and post-launch asset command and control. Among the unique characteristics of COSMOS is the ease with which it can integrate with diverse hardware. This characteristic enables COSMOS to serve as the command-and-control platform for the full life-cycle development of SSA assets, from board test, to box test, to system integration and test, to on-orbit operations. The use of a modern scripting language, Ruby, also permits automated procedures to provide highly complex decision making for the tasking of SSA assets based on both telemetry data and data received from outside sources. Detailed logging enables quick anomaly detection and resolution. Integrated real-time and offline data graphing renders the visualization of both ground and on-orbit assets simple and straightforward.
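
    The abstract notes that automated procedures make tasking decisions from telemetry plus external data; COSMOS procedures are actually written in Ruby, so the Python sketch below only mirrors that decision logic for illustration. All function names, telemetry items, and thresholds are hypothetical:

```python
def task_asset(telemetry, external_tracks):
    """Decide whether to re-task an SSA sensor from telemetry and
    externally supplied conjunction tracks (invented mnemonics)."""
    if telemetry["battery_volts"] < 11.5:
        return "safe_mode"              # protect the asset first
    close = [t for t in external_tracks if t["miss_km"] < 5.0]
    if close:
        target = min(close, key=lambda t: t["miss_km"])
        return f"track_object_{target['id']}"
    return "survey_pattern"

print(task_asset({"battery_volts": 12.1},
                 [{"id": 42, "miss_km": 3.2}, {"id": 7, "miss_km": 9.9}]))
# -> track_object_42
```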

  1. Recent Survey and Application of the simSUNDT Software

    NASA Astrophysics Data System (ADS)

    Persson, G.; Wirdelius, H.

    2010-02-01

    The simSUNDT software is based on a previously developed program (SUNDT). The latest version has been customized to generate realistic synthetic data (including a grain noise model) compatible with a number of off-line analysis software packages. The software consists of a Windows®-based preprocessor and postprocessor together with a mathematical kernel (UTDefect) that deals with the actual mathematical modeling. The model employs various integral transforms and integral equations and enables simulation of the entire ultrasonic testing situation. The model is completely three-dimensional, though the simulated component is two-dimensional, bounded by the scanning surface and, optionally, a planar back surface. It is of great importance that the inspection methods applied are properly validated and that their capability to detect cracks and defects is quantified. To achieve this, statistical methods such as Probability of Detection (POD) are often applied, with the ambition of estimating detectability as a function of defect size. The proposed procedure, which relies on test pieces, is not only very expensive; it also tends to introduce a number of possible misalignments between the actual NDT situation to be performed and the proposed experimental simulation. The presentation will describe the developed model, which enables simulation of a phased-array NDT inspection, and the ambition to use this simulation software to generate POD information. The paper also includes the most recent developments of the model, including some initial experimental validation of the phased-array probe model.
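
    A sketch of the POD idea mentioned above: fit a logistic probability-of-detection curve to detection rates versus defect size. The data here are synthetic and invented for illustration, not simSUNDT output:

```python
import numpy as np
from scipy.optimize import curve_fit

def pod(size_mm, a, b):
    """Logistic POD model: detection probability as a function of size."""
    return 1.0 / (1.0 + np.exp(-(size_mm - a) / b))

sizes = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0])     # defect sizes
hit_rate = np.array([0.05, 0.10, 0.35, 0.60, 0.80, 0.90, 0.97, 0.99])

(a_fit, b_fit), _ = curve_fit(pod, sizes, hit_rate, p0=[2.0, 0.5])
# a_fit is the size detected with 50% probability; a90 (90% POD) is a
# commonly quoted acceptance metric.
a90 = a_fit + b_fit * np.log(0.9 / 0.1)
print(f"a50 = {a_fit:.2f} mm, a90 = {a90:.2f} mm")
```

    Replacing physical test pieces with simulated inspections, as the abstract proposes, means such curves can be generated without the cost and misalignment risks of experimental campaigns.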

  2. A Unified Framework for Periodic, On-Demand, and User-Specified Software Information

    NASA Technical Reports Server (NTRS)

    Kolano, Paul Z.

    2004-01-01

    Although grid computing can increase the number of resources available to a user, not all resources on the grid may have a software environment suitable for running a given application. To provide users with the necessary assistance for selecting resources with compatible software environments and/or for automatically establishing such environments, it is necessary to have an accurate source of information about the software installed across the grid. This paper presents a new OGSI-compliant software information service that has been implemented as part of NASA's Information Power Grid project. This service is built on top of a general framework for reconciling information from periodic, on-demand, and user-specified sources. Information is retrieved using standard XPath queries over a single unified namespace independent of the information's source. Two consumers of the provided software information, the IPG Resource Broker and the IPG Neutralization Service, are briefly described.
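
    Since the abstract says information is retrieved via standard XPath queries over a unified namespace, here is a sketch of that access pattern. The XML layout is invented for illustration; the actual IPG schema is not given in the abstract:

```python
import xml.etree.ElementTree as ET

catalog = ET.fromstring("""
<software>
  <host name="node1.example.gov">
    <package name="python" version="2.2"/>
    <package name="mpich" version="1.2.5"/>
  </host>
  <host name="node2.example.gov">
    <package name="python" version="2.3"/>
  </host>
</software>
""")

# XPath-style query: which hosts have python installed, and which version?
for host in catalog.findall(".//host[package]"):
    pkgs = {p.get("name"): p.get("version") for p in host.findall("package")}
    if "python" in pkgs:
        print(host.get("name"), "->", pkgs["python"])
```

    A broker could run such queries to select only those resources whose installed software matches an application's requirements.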

  3. Dynamic Weather Routes Architecture Overview

    NASA Technical Reports Server (NTRS)

    Eslami, Hassan; Eshow, Michelle

    2014-01-01

    Dynamic Weather Routes Architecture Overview presents the high-level software architecture of DWR, based on the CTAS software framework and the Direct-To automation tool. The document also covers external and internal data flows, required datasets, changes to the Direct-To software for DWR, collection of software statistics, and the code structure.

  4. Land-surface parameter optimisation using data assimilation techniques: the adJULES system V1.0

    DOE PAGES

    Raoult, Nina M.; Jupp, Tim E.; Cox, Peter M.; ...

    2016-08-25

    Land-surface models (LSMs) are crucial components of the Earth system models (ESMs) that are used to make coupled climate–carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. JULES is also extensively used offline as a land-surface impacts tool, forced with climatologies into the future. In this study, JULES is automatically differentiated with respect to JULES parameters using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameters by calibrating against observations. This paper describes adJULES in a data assimilation framework and demonstrates its ability to improve the model–data fit using eddy-covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 85 % of the sites used in the study, at both the calibration and evaluation stages. Furthermore, the new improved parameters for JULES are presented along with the associated uncertainties for each parameter.

  5. Grid Computing at GSI for ALICE and FAIR - present and future

    NASA Astrophysics Data System (ADS)

    Schwarz, Kilian; Uhlig, Florian; Karabowicz, Radoslaw; Montiel-Gonzalez, Almudena; Zynovyev, Mykhaylo; Preuss, Carsten

    2012-12-01

    The future FAIR experiments CBM and PANDA have computing requirements that fall in a category that could currently not be satisfied by a single computing centre. One needs a larger, distributed computing infrastructure to cope with the amount of data to be simulated and analysed. Since 2002, GSI has operated a tier2 centre for ALICE@CERN. The central component of the GSI computing facility, and hence the core of the ALICE tier2 centre, is an LSF/SGE batch farm, currently split into three subclusters with a total of 15000 CPU cores shared by the participating experiments, and accessible both locally and soon also completely via Grid. In terms of data storage, a 5.5 PB Lustre file system, directly accessible from all worker nodes, is maintained, as well as a 300 TB xrootd-based Grid storage element. Based on this existing expertise, and utilising ALICE's middleware ‘AliEn’, the Grid infrastructure for PANDA and CBM is being built. Besides a tier0 centre at GSI, the computing Grids of the two FAIR collaborations now encompass more than 17 sites in 11 countries and are constantly expanding. The operation of the distributed FAIR computing infrastructure benefits significantly from the experience gained with the ALICE tier2 centre. A close collaboration between ALICE Offline and FAIR provides mutual advantages. The employment of a common Grid middleware as well as compatible simulation and analysis software frameworks ensures significant synergy effects.

  6. DoD Application Store: Enabling C2 Agility?

    DTIC Science & Technology

    2014-06-01

    The envisioned DoD ... Marketplace within the Ozone Widget Framework will include automated delivery of software patches, web applications, widgets, and mobile application packages ... current needs. DoD has started to make inroads within this environment with several Programs of Record (PoR) embracing widgets and other mobile ...

  7. An application framework for computer-aided patient positioning in radiation therapy.

    PubMed

    Liebler, T; Hub, M; Sanner, C; Schlegel, W

    2003-09-01

    The importance of exact patient positioning in radiation therapy increases with the ongoing improvements in irradiation planning and treatment. Therefore, new ways to overcome precision limitations of current positioning methods in fractionated treatment have to be found. The Department of Medical Physics at the German Cancer Research Centre (DKFZ) follows different video-based approaches to increase repositioning precision. In this context, the modular software framework FIVE (Fast Integrated Video-based Environment) has been designed and implemented. It is both hardware- and platform-independent and supports merging position data by integrating various computer-aided patient positioning methods. A highly precise optical tracking system and several subtraction imaging techniques have been realized as modules to supply basic video-based repositioning techniques. This paper describes the common framework architecture, the main software modules and their interfaces. An object-oriented software engineering process has been applied using the UML, C++ and the Qt library. The significance of the current framework prototype for the application in patient positioning as well as the extension to further application areas will be discussed. Particularly in experimental research, where special system adjustments are often necessary, the open design of the software allows problem-oriented extensions and adaptations.

  8. Fast data transmission in dynamic data acquisition system for plasma diagnostics

    NASA Astrophysics Data System (ADS)

    Byszuk, Adrian; Poźniak, Krzysztof; Zabołotny, Wojciech M.; Kasprowicz, Grzegorz; Wojeński, Andrzej; Cieszewski, Radosław; Juszczyk, Bartłomiej; Kolasiński, Piotr; Zienkiewicz, Paweł; Chernyshova, Maryna; Czarski, Tomasz

    2014-11-01

    This paper describes the architecture of a new data acquisition system (DAQ) targeted mainly at plasma diagnostic experiments. Its modular architecture, in combination with selected hardware components, allows for straightforward reconfiguration of the whole system, both offline and online. The main emphasis is put on the implementation of the data transmission subsystem of this system. One of the biggest advantages of the described system is its modular architecture, with well-defined boundaries between the main components: analog frontend (AFE), digital backplane, and acquisition/control software. The use of FPGA chips allows for high flexibility in the design of analog frontends, including the ADC <--> FPGA interface. Data transmission between backplane boards and user software is accomplished with industry-standard PCI Express (PCIe) technology. The PCIe implementation includes both FPGA firmware and a Linux device driver. High flexibility of the PCIe connections is achieved through the use of a configurable PCIe switch. Wherever possible, the described DAQ system makes use of standard off-the-shelf (OTS) components, including a typical x86 CPU and motherboard (acting as the PCIe controller) and cabling.

  9. Realtime Multichannel System for Beat to Beat QT Interval Variability

    NASA Technical Reports Server (NTRS)

    Starc, Vito; Schlegel, Todd T.

    2006-01-01

    The measurement of beat-to-beat QT interval variability (QTV) shows clinical promise for identifying several types of cardiac pathology. However, until now, there has been no device capable of displaying, in real time on a beat-to-beat basis, changes in QTV in all 12 conventional leads in a continuously monitored patient. While several software programs have been designed to analyze QTV, heretofore, such programs have all involved only a few channels (at most) and/or have required laborious user interaction or offline calculations and postprocessing, limiting their clinical utility. This paper describes a PC-based ECG software program that, in real time, acquires, analyzes and displays QTV and also PQ interval variability (PQV) in each of the eight independent channels that constitute the 12-lead conventional ECG. The system also processes certain related signals that are derived from singular value decomposition and that help to reduce the overall effects of noise on the real-time QTV and PQV results.

  10. GammaLib and ctools. A software framework for the analysis of astronomical gamma-ray data

    NASA Astrophysics Data System (ADS)

    Knödlseder, J.; Mayer, M.; Deil, C.; Cayrou, J.-B.; Owen, E.; Kelley-Hoskins, N.; Lu, C.-C.; Buehler, R.; Forest, F.; Louge, T.; Siejkowski, H.; Kosack, K.; Gerard, L.; Schulz, A.; Martin, P.; Sanchez, D.; Ohm, S.; Hassan, T.; Brau-Nogué, S.

    2016-08-01

    The field of gamma-ray astronomy has seen important progress during the last decade, yet to date no common software framework has been developed for the scientific analysis of gamma-ray telescope data. We propose to fill this gap by means of the GammaLib software, a generic library that we have developed to support the analysis of gamma-ray event data. GammaLib was written in C++ and all functionality is available in Python through an extension module. Based on this framework we have developed the ctools software package, a suite of software tools that enables flexible workflows to be built for the analysis of Imaging Air Cherenkov Telescope event data. The ctools are inspired by science analysis software available for existing high-energy astronomy instruments, and they follow the modular ftools model developed by the High Energy Astrophysics Science Archive Research Center. The ctools were written in Python and C++, and can be either used from the command line via shell scripts or directly from Python. In this paper we present the GammaLib and ctools software versions 1.0 that were released at the end of 2015. GammaLib and ctools are ready for the science analysis of Imaging Air Cherenkov Telescope event data, and also support the analysis of Fermi-LAT data and the exploitation of the COMPTEL legacy data archive. We propose using ctools as the science tools software for the Cherenkov Telescope Array Observatory.

  11. Risk-Informed Safety Assurance and Probabilistic Assessment of Mission-Critical Software-Intensive Systems

    NASA Technical Reports Server (NTRS)

    Guarro, Sergio B.

    2010-01-01

    This report validates and documents the detailed features and practical application of the framework for software intensive digital systems risk assessment and risk-informed safety assurance presented in the NASA PRA Procedures Guide for Managers and Practitioner. This framework, called herein the "Context-based Software Risk Model" (CSRM), enables the assessment of the contribution of software and software-intensive digital systems to overall system risk, in a manner which is entirely compatible and integrated with the format of a "standard" Probabilistic Risk Assessment (PRA), as currently documented and applied for NASA missions and applications. The CSRM also provides a risk-informed path and criteria for conducting organized and systematic digital system and software testing so that, within this risk-informed paradigm, the achievement of a quantitatively defined level of safety and mission success assurance may be targeted and demonstrated. The framework is based on the concept of context-dependent software risk scenarios and on the modeling of such scenarios via the use of traditional PRA techniques - i.e., event trees and fault trees - in combination with more advanced modeling devices such as the Dynamic Flowgraph Methodology (DFM) or other dynamic logic-modeling representations. The scenarios can be synthesized and quantified in a conditional logic and probabilistic formulation. The application of the CSRM method documented in this report refers to the MiniAERCam system designed and developed by the NASA Johnson Space Center.
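
    The CSRM quantifies context-dependent software risk scenarios with event trees and fault trees. The toy calculation below shows how AND/OR gate probabilities combine into a top-event probability; the numbers and the scenario are invented, independence of events is assumed, and the more advanced dynamic modeling (e.g., DFM) mentioned above is not sketched:

```python
def gate_or(*p):
    """P(at least one event), assuming independence."""
    prob = 1.0
    for x in p:
        prob *= (1.0 - x)
    return 1.0 - prob

def gate_and(*p):
    """P(all events), assuming independence."""
    prob = 1.0
    for x in p:
        prob *= x
    return prob

p_sw_fault = 1e-4        # software fails in a given operational context
p_watchdog_miss = 1e-2   # watchdog fails to catch the fault
p_sensor_bad = 5e-4      # sensor feeds bad data to the software

# Top event: loss of function = (software fault AND watchdog miss) OR bad sensor
p_top = gate_or(gate_and(p_sw_fault, p_watchdog_miss), p_sensor_bad)
print(f"P(top event) = {p_top:.2e}")   # ~5.0e-04, dominated by the sensor branch
```

    In the risk-informed testing paradigm the report describes, such a quantified scenario identifies which conditional software behaviours must be exercised by test to demonstrate a target assurance level.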

  12. Abstracted Workflow Framework with a Structure from Motion Application

    NASA Astrophysics Data System (ADS)

    Rossi, Adam J.

    In scientific and engineering disciplines, from academia to industry, there is an increasing need for the development of custom software to perform experiments, construct systems, and develop products. The natural mindset initially is to shortcut and bypass all overhead and process rigor in order to obtain an immediate result for the problem at hand, with the misconception that the software will simply be thrown away at the end. In a majority of the cases, it turns out the software persists for many years, and likely ends up in production systems for which it was not initially intended. In the current study, a framework that can be used in both industry and academic applications mitigates underlying problems associated with developing scientific and engineering software. This results in software that is much more maintainable, documented, and usable by others, specifically allowing new users to extend capabilities of components already implemented in the framework. There is a multi-disciplinary need in the fields of imaging science, computer science, and software engineering for a unified implementation model, which motivates the development of an abstracted software framework. Structure from motion (SfM) has been identified as one use case where the abstracted workflow framework can improve research efficiencies and eliminate implementation redundancies in scientific fields. The SfM process begins by obtaining 2D images of a scene from different perspectives. Features from the images are extracted and correspondences are established. This provides a sufficient amount of information to initialize the problem for fully automated processing. Transformations are established between views, and 3D points are established via triangulation algorithms. The parameters for the camera models for all views / images are solved through bundle adjustment, establishing a highly consistent point cloud. The initial sparse point cloud and camera matrices are used to generate a dense point cloud through patch based techniques or densification algorithms such as Semi-Global Matching (SGM). The point cloud can be visualized or exploited by both humans and automated techniques. In some cases the point cloud is "draped" with original imagery in order to enhance the 3D model for a human viewer. The SfM workflow can be implemented in the abstracted framework, making it easily leverageable and extensible by multiple users. Like many processes in scientific and engineering domains, the workflow described for SfM is complex and requires many disparate components to form a functional system, often utilizing algorithms implemented by many users in different languages / environments and without knowledge of how the component fits into the larger system. In practice, this generally leads to issues interfacing the components, building the software for desired platforms, understanding its concept of operations, and how it can be manipulated in order to fit the desired function for a particular application. In addition, other scientists and engineers instinctively wish to analyze the performance of the system, establish new algorithms, optimize existing processes, and establish new functionality based on current research. This requires a framework whereby new components can be easily plugged in without affecting the current implemented functionality. The need for a universal programming environment establishes the motivation for the development of the abstracted workflow framework. 
This software implementation, named Catena, provides base classes from which new components must derive in order to operate within the framework. The derivation mandates requirements be satisfied in order to provide a complete implementation. Additionally, the developer must provide documentation of the component in terms of its overall function and inputs. The interface input and output values corresponding to the component must be defined in terms of their respective data types, and the implementation uses mechanisms within the framework to retrieve and send the values. This process requires the developer to componentize their algorithm rather than implement it monolithically. Although the requirements of the developer are slightly greater, the benefits realized from using Catena far outweigh the overhead and result in extensible software. This thesis provides a basis for the abstracted workflow framework concept and the Catena software implementation. The benefits are also illustrated using a detailed examination of the SfM process as an example application.
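
    For concreteness, here is a hedged two-view sketch of the SfM steps described above (feature extraction, correspondence, pose estimation, triangulation) using the standard OpenCV pipeline; this is an illustration of the technique, not Catena's code. It assumes two overlapping images on disk and a known camera intrinsic matrix K:

```python
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])  # assumed intrinsics

# Extract features and establish correspondences (Lowe's ratio test)
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Relative pose from the essential matrix, then sparse triangulation
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T   # sparse 3D point cloud
print(cloud.shape)
```

    In a framework like Catena, each of these stages (feature extraction, matching, pose estimation, triangulation, bundle adjustment, densification) would be a separate component with typed inputs and outputs, so any stage can be swapped or extended independently.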

  13. NASA software documentation standard software engineering program

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The NASA Software Documentation Standard (hereinafter referred to as Standard) can be applied to the documentation of all NASA software. This Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. This basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.

  14. Recent developments in the CCP-EM software suite.

    PubMed

    Burnley, Tom; Palmer, Colin M; Winn, Martyn

    2017-06-01

    As part of its remit to provide computational support to the cryo-EM community, the Collaborative Computational Project for Electron cryo-Microscopy (CCP-EM) has produced a software framework which enables easy access to a range of programs and utilities. The resulting software suite incorporates contributions from different collaborators by encapsulating them in Python task wrappers, which are then made accessible via a user-friendly graphical user interface as well as a command-line interface suitable for scripting. The framework includes tools for project and data management. An overview of the design of the framework is given, together with a survey of the functionality at different levels. The current CCP-EM suite has particular strength in the building and refinement of atomic models into cryo-EM reconstructions, which is described in detail.
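
    A sketch of the task-wrapper pattern described above, as a generic illustration (these are not the actual CCP-EM classes): each external program is encapsulated in a Python class so that a GUI and a command-line script share the same entry point. The class and executable names are invented:

```python
import subprocess

class TaskWrapper:
    program = None            # external executable to encapsulate

    def args(self, **params):
        raise NotImplementedError

    def run(self, **params):
        cmd = [self.program] + self.args(**params)
        return subprocess.run(cmd, capture_output=True, text=True, check=True)

class MapSharpenTask(TaskWrapper):
    """Hypothetical wrapper around a map-sharpening utility."""
    program = "sharpen_map"   # assumed executable name

    def args(self, mapfile, bfactor):
        return ["--map", mapfile, "--bfactor", str(bfactor)]

# Same call whether driven by a GUI button or a pipeline script:
# MapSharpenTask().run(mapfile="emd_1234.map", bfactor=-80)
```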

  15. Recent developments in the CCP-EM software suite

    PubMed Central

    Burnley, Tom

    2017-01-01

    As part of its remit to provide computational support to the cryo-EM community, the Collaborative Computational Project for Electron cryo-Microscopy (CCP-EM) has produced a software framework which enables easy access to a range of programs and utilities. The resulting software suite incorporates contributions from different collaborators by encapsulating them in Python task wrappers, which are then made accessible via a user-friendly graphical user interface as well as a command-line interface suitable for scripting. The framework includes tools for project and data management. An overview of the design of the framework is given, together with a survey of the functionality at different levels. The current CCP-EM suite has particular strength in the building and refinement of atomic models into cryo-EM reconstructions, which is described in detail. PMID:28580908

  16. Distributed software framework and continuous integration in hydroinformatics systems

    NASA Astrophysics Data System (ADS)

    Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao

    2017-08-01

    When encountering multiple complicated models, multisource structured and unstructured data, and complex requirements analysis, the platform design and integration of hydroinformatics systems become a challenge. To properly solve these problems, we describe a distributed software framework and its continuous integration process in hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node, and clients. Based on it, a GIS-based decision support system for the joint regulation of water quantity and water quality of a group of lakes in Wuhan, China, is established.

  17. A unified software framework for deriving, visualizing, and exploring abstraction networks for ontologies

    PubMed Central

    Ochs, Christopher; Geller, James; Perl, Yehoshua; Musen, Mark A.

    2016-01-01

    Software tools play a critical role in the development and maintenance of biomedical ontologies. One important task that is difficult without software tools is ontology quality assurance. In previous work, we have introduced different kinds of abstraction networks to provide a theoretical foundation for ontology quality assurance tools. Abstraction networks summarize the structure and content of ontologies. One kind of abstraction network that we have used repeatedly to support ontology quality assurance is the partial-area taxonomy. It summarizes structurally and semantically similar concepts within an ontology. However, the use of partial-area taxonomies was ad hoc and not generalizable. In this paper, we describe the Ontology Abstraction Framework (OAF), a unified framework and software system for deriving, visualizing, and exploring partial-area taxonomy abstraction networks. The OAF includes support for various ontology representations (e.g., OWL and SNOMED CT's relational format). A Protégé plugin for deriving “live partial-area taxonomies” is demonstrated. PMID:27345947

  18. The Diamond Beamline Controls and Data Acquisition Software Architecture

    NASA Astrophysics Data System (ADS)

    Rees, N.

    2010-06-01

    The software for the Diamond Light Source beamlines[1] is based on two complementary software frameworks: low level control is provided by the Experimental Physics and Industrial Control System (EPICS) framework[2][3] and the high level user interface is provided by the Java based Generic Data Acquisition or GDA[4][5]. EPICS provides a widely used, robust, generic interface across a wide range of hardware where the user interfaces are focused on serving the needs of engineers and beamline scientists to obtain detailed low level views of all aspects of the beamline control systems. The GDA system provides a high-level system that combines an understanding of scientific concepts, such as reciprocal lattice coordinates, a flexible python syntax scripting interface for the scientific user to control their data acquisition, and graphical user interfaces where necessary. This paper describes the beamline software architecture in more detail, highlighting how these complementary frameworks provide a flexible system that can accommodate a wide range of requirements.

  19. A unified software framework for deriving, visualizing, and exploring abstraction networks for ontologies.

    PubMed

    Ochs, Christopher; Geller, James; Perl, Yehoshua; Musen, Mark A

    2016-08-01

    Software tools play a critical role in the development and maintenance of biomedical ontologies. One important task that is difficult without software tools is ontology quality assurance. In previous work, we have introduced different kinds of abstraction networks to provide a theoretical foundation for ontology quality assurance tools. Abstraction networks summarize the structure and content of ontologies. One kind of abstraction network that we have used repeatedly to support ontology quality assurance is the partial-area taxonomy. It summarizes structurally and semantically similar concepts within an ontology. However, the use of partial-area taxonomies was ad hoc and not generalizable. In this paper, we describe the Ontology Abstraction Framework (OAF), a unified framework and software system for deriving, visualizing, and exploring partial-area taxonomy abstraction networks. The OAF includes support for various ontology representations (e.g., OWL and SNOMED CT's relational format). A Protégé plugin for deriving "live partial-area taxonomies" is demonstrated. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Towards an Open, Distributed Software Architecture for UxS Operations

    NASA Technical Reports Server (NTRS)

    Cross, Charles D.; Motter, Mark A.; Neilan, James H.; Qualls, Garry D.; Rothhaar, Paul M.; Tran, Loc; Trujillo, Anna C.; Allen, B. Danette

    2015-01-01

    To address the growing need to evaluate, test, and certify an ever expanding ecosystem of UxS platforms in preparation of cultural integration, NASA Langley Research Center's Autonomy Incubator (AI) has taken on the challenge of developing a software framework in which UxS platforms developed by third parties can be integrated into a single system which provides evaluation and testing, mission planning and operation, and out-of-the-box autonomy and data fusion capabilities. This software framework, named AEON (Autonomous Entity Operations Network), has two main goals. The first goal is the development of a cross-platform, extensible, onboard software system that provides autonomy at the mission execution and course-planning level, a highly configurable data fusion framework sensitive to the platform's available sensor hardware, and plug-and-play compatibility with a wide array of computer systems, sensors, software, and controls hardware. The second goal is the development of a ground control system that acts as a test-bed for integration of the proposed heterogeneous fleet, and allows for complex mission planning, tracking, and debugging capabilities. The ground control system should also be highly extensible and allow plug-and-play interoperability with third party software systems. In order to achieve these goals, this paper proposes an open, distributed software architecture which utilizes at its core the Data Distribution Service (DDS) standards, established by the Object Management Group (OMG), for inter-process communication and data flow. The design decisions proposed herein leverage the advantages of existing robotics software architectures and the DDS standards to develop software that is scalable, high-performance, fault tolerant, modular, and readily interoperable with external platforms and software.

  1. Software And Systems Engineering Risk Management

    DTIC Science & Technology

    2010-04-01

    ... RSKM; 2004 COSO Enterprise RSKM Framework; 2006 ISO/IEC 16085 Risk Management Process; 2008 ISO/IEC 12207 Software Lifecycle Processes; 2009 ISO/IEC ... Software and Systems Engineering Risk Management. John Walz, VP Technical and Conferences Activities, IEEE Computer Society; Vice-Chair Planning, Software & Systems Engineering Standards Committee, IEEE Computer Society; US TAG to ISO TMB Risk Management Working Group, Systems and Software

  2. The Software Architecture of the Upgraded ESA DRAMA Software Suite

    NASA Astrophysics Data System (ADS)

    Kebschull, Christopher; Flegel, Sven; Gelhaus, Johannes; Mockel, Marek; Braun, Vitali; Radtke, Jonas; Wiedemann, Carsten; Vorsmann, Peter; Sanchez-Ortiz, Noelia; Krag, Holger

    2013-08-01

    In the beginnings of man's space flight activities there was the belief that space is so big that everybody could use it without any repercussions. However, during the last six decades the increasing use of Earth's orbits has led to a rapid growth in the space debris environment, which has a big influence on current and future space missions. For this reason ESA issued the "Requirements on Space Debris Mitigation for ESA Projects" [1] in 2008, which apply to all ESA missions henceforth. The DRAMA (Debris Risk Assessment and Mitigation Analysis) software suite was developed to support the planning of space missions in compliance with these requirements. During the last year the DRAMA software suite has been upgraded under ESA contract by TUBS and DEIMOS to include additional tools and increase the performance of existing ones. This paper describes the overall software architecture of the ESA DRAMA software suite. Specifically, the new graphical user interface, which manages the five main tools ARES (Assessment of Risk Event Statistics), MIDAS (MASTER-based Impact Flux and Damage Assessment Software), OSCAR (Orbital Spacecraft Active Removal), CROC (Cross Section of Complex Bodies) and SARA (Re-entry Survival and Risk Analysis), is discussed. The advancements are highlighted, as well as the challenges that arise from the integration of the five tool interfaces. A framework developed at the ILR, previously used for MASTER-2009 and PROOF-2009, was employed. The Java-based GUI framework enables cross-platform deployment, and its underlying model-view-presenter (MVP) software pattern meets the strict design requirements necessary to ensure a robust and reliable method of operation in an environment where the GUI is separated from the processing back-end. While the GUI framework has evolved with each project, allowing an increasing degree of integration of services such as validators for input fields, it has also increased in complexity. The paper concludes with an outlook on the future development of the GUI framework, showing the potential for further advancements.

  3. A system and method for online high-resolution mapping of gastric slow-wave activity.

    PubMed

    Bull, Simon H; O'Grady, Gregory; Du, Peng; Cheng, Leo K

    2014-11-01

    High-resolution (HR) mapping employs multielectrode arrays to achieve spatially detailed analyses of propagating bioelectrical events. A major current limitation is that spatial analyses must currently be performed "off-line" (after experiments), compromising timely recording feedback and restricting experimental interventions. These problems motivated the development of a system and method for "online" HR mapping. HR gastric recordings were acquired and streamed to a novel software client. Algorithms were devised to filter data, identify slow-wave events, eliminate corrupt channels, and cluster activation events. A graphical user interface animated data and plotted electrograms and maps. Results were compared against off-line methods. The online system analyzed 256-channel serosal recordings with no unexpected system terminations and a mean delay of 18 s. Activation time marking sensitivity was 0.92; positive predictive value was 0.93. Abnormal slow-wave patterns including conduction blocks, ectopic pacemaking, and colliding wave fronts were reliably identified. Compared to traditional analysis methods, online mapping gave comparable results, with equivalent coverage of 90% of electrodes, average RMS errors of less than 1 s, and correlation coefficients of 0.99 for activation maps. Accurate slow-wave mapping was achieved in near real time, enabling monitoring of recording quality and experimental interventions targeted to dysrhythmic onset. This work also advances the translation of HR mapping toward real-time clinical application.
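
    A simplified sketch of the per-channel event-identification step described above (threshold crossing on an already-filtered trace with a refractory period); the real system additionally rejects corrupt channels and clusters activations spatially across the 256-electrode array, and the threshold and rates here are invented:

```python
import numpy as np

def mark_events(signal, fs, threshold, refractory_s=5.0):
    """Return sample indices where the filtered signal crosses downward
    through `threshold`, enforcing a refractory period between slow waves."""
    events, last = [], -np.inf
    for i in range(1, len(signal)):
        if signal[i - 1] >= threshold > signal[i]:        # downward crossing
            if (i - last) / fs >= refractory_s:
                events.append(i)
                last = i
    return events

fs = 30.0                                   # Hz, assumed sampling rate
t = np.arange(0, 120, 1 / fs)
trace = np.sin(2 * np.pi * t / 20.0)        # toy 3 cpm slow-wave rhythm
print(len(mark_events(trace, fs, threshold=-0.5)))   # ~6 events in 2 minutes
```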

  4. Offline detection of broken rotor bars in AC induction motors

    NASA Astrophysics Data System (ADS)

    Powers, Craig Stephen

    The detection of the broken rotor bar defect in medium- and large-sized AC induction machines (ACIMs) is currently one of the most difficult tasks for the motor condition and monitoring industry. If a broken rotor bar defect goes undetected, it can cause a catastrophic failure of an expensive machine. If a broken rotor bar defect is falsely determined, it wastes time and money to physically tear down and inspect the machine only to find an incorrect diagnosis. Previous work in 2009 at Baker/SKF-USA in collaboration with Korea University developed a prototype instrument that has been highly successful in correctly detecting the broken rotor bar defect in ACIMs where other methods have failed. Dr. Sang Bin and his students at Korea University have been using this prototype instrument to help the industry save money through the successful detection of the BRB defect. A review of the current state of motor conditioning and monitoring technology for detecting the broken rotor bar defect in ACIMs shows that improved detection of this fault is still relevant. An analysis of previous work in the creation of this prototype instrument leads into the refactoring of the software and hardware into something more deployable, cost-effective, and commercially viable.

  5. A multi-GPU real-time dose simulation software framework for lung radiotherapy.

    PubMed

    Santhanam, A P; Min, Y; Neelakkantan, H; Papp, N; Meeks, S L; Kupelian, P A

    2012-09-01

    Medical simulation frameworks facilitate both the preoperative and postoperative analysis of the patient's pathophysical condition. Of particular importance is the simulation of radiation dose delivery for real-time radiotherapy monitoring and retrospective analyses of the patient's treatment. In this paper, a software framework tailored for the development of simulation-based real-time radiation dose monitoring medical applications is discussed. A multi-GPU-based computational framework coupled with inter-process communication methods is introduced for simulating the radiation dose delivery on a deformable 3D volumetric lung model and its real-time visualization. The model deformation and the corresponding dose calculation are allocated among the GPUs in a task-specific manner and performed in a pipelined manner. Radiation dose calculations are computed on two different GPU hardware architectures. The integration of this computational framework with a front-end software layer and back-end patient database repository is also discussed. Real-time simulation of the dose delivered is achieved once every 120 ms using the proposed framework. With a linear increase in the number of GPU cores, the computational time of the simulation decreased linearly. The inter-process communication time also improved with an increase in the hardware memory. Variations in the delivered dose and computational speedup for variations in the data dimensions are investigated using D70 and D90 as well as gEUD as metrics for a set of 14 patients. Computational speed-up increased with an increase in the beam dimensions when compared with CPU-based commercial software, while the error in the dose calculation was <1%. Our analyses show that the framework applied to deformable lung-model-based radiotherapy is an effective tool for performing both real-time and retrospective analyses.

  6. Normal values of offline exhaled and nasal nitric oxide in healthy children and teens using chemiluminescence.

    PubMed

    Menou, A; Babeanu, D; Paruit, H N; Ordureau, A; Guillard, S; Chambellan, A

    2017-08-21

    Nitric oxide (NO) can be used to detect respiratory or ciliary diseases. Fractional exhaled nitric oxide (FeNO) measurement can reflect ongoing eosinophilic airway inflammation and has diagnostic utility as a test for asthma screening and follow-up, while nasal nitric oxide (nNO) is a valuable screening tool for the diagnosis of primary ciliary dyskinesia. The possibility of collecting airway gas samples in an offline manner offers the advantage of extending these measures and improving the screening and management of these diseases, but normal values from healthy children and teens remain sparse. Samples were consecutively collected using the offline method for eNO and nNO chemiluminescence measurement in 88 and 31 healthy children and teens, respectively. Offline eNO measurement was also performed in 30 consecutive children with naïve asthma and/or respiratory allergy. The normal offline eNO value was determined by the regression equation eNO = -8.206 + 0.176 × height. The upper limit of the norm for the offline eNO value was 27.4 parts per billion (ppb). A separate analysis was performed in children, pre-teens and teens, for which offline eNO was 13.6 ± 4.7 ppb, 16.3 ± 13.7 ppb and 20.0 ± 7.2 ppb, respectively. The optimal cut-off value of offline eNO to predict asthma or respiratory allergies was 23.3 ppb, with a sensitivity and specificity of 77% and 91%, respectively. Mean offline nNO was determined to be 660 ppb, with the lower limit of the norm at 197 ppb. The use of offline eNO and nNO normal values should favour the widespread screening of respiratory diseases in children of school age in their usual environment.
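
    A worked example of the quoted regression, assuming height is measured in centimetres and eNO in ppb (the abstract does not state the units explicitly):

```python
def offline_eno_norm(height_cm):
    """Predicted normal offline eNO (ppb) from the reported regression."""
    return -8.206 + 0.176 * height_cm

for h in (120, 150, 180):
    print(h, "cm ->", round(offline_eno_norm(h), 1), "ppb")
# 120 cm -> 12.9 ppb; 150 cm -> 18.2 ppb; 180 cm -> 23.5 ppb
```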

  7. Upper Secondary and Vocational Level Teachers at Social Software

    ERIC Educational Resources Information Center

    Valtonen, Teemu; Kontkanen, Sini; Dillon, Patrick; Kukkonen, Jari; Väisänen, Pertti

    2014-01-01

    This study focuses on upper secondary and vocational level teachers as users of social software i.e. what software they use during their leisure and work and for what purposes they use software in teaching. The study is theorised within a technological pedagogical content knowledge framework, the emphasis is especially on technological knowledge…

  8. Software Requirements Specification for Lunar IceCube

    NASA Astrophysics Data System (ADS)

    Glaser-Garbrick, Michael R.

    Lunar IceCube is a 6U satellite that will orbit the Moon to measure water volatiles as a function of position, altitude, and time, and to measure them in their various phases. Lunar IceCube is a collaboration between Morehead State University, Vermont Technical University, Busek, and NASA. The Software Requirements Specification will serve as a contract between the overall team and the developers of the flight software. It will provide a systems overview of the software that will be developed for Lunar IceCube, detailing all of the interconnects and protocols for each subsystem that Lunar IceCube will utilize. The flight software will be written in SPARK to the fullest extent possible, owing to SPARK's support for formally verifying the absence of certain classes of errors. The LIC flight software does make use of a general-purpose, reusable application framework called CubedOS. This framework imposes some structuring requirements on the architecture and design of the flight software, but it does not impose any high-level requirements. The document will also detail the tools to be used for Lunar IceCube, such as why VxWorks will be utilized.

  9. GeoFramework: A Modeling Framework for Solid Earth Geophysics

    NASA Astrophysics Data System (ADS)

    Gurnis, M.; Aivazis, M.; Tromp, J.; Tan, E.; Thoutireddy, P.; Liu, Q.; Choi, E.; Dicaprio, C.; Chen, M.; Simons, M.; Quenette, S.; Appelbe, B.; Aagaard, B.; Williams, C.; Lavier, L.; Moresi, L.; Law, H.

    2003-12-01

    As data sets in geophysics become larger and of greater relevance to other earth science disciplines, and as earth science becomes more interdisciplinary in general, modeling tools are being driven in new directions. There is now a greater need to link modeling codes to one another, link modeling codes to multiple datasets, and to make modeling software available to non-modeling specialists. Coupled with rapid progress in computer hardware (including the computational speed afforded by massively parallel computers), progress in numerical algorithms, and the introduction of software frameworks, these lofty goals of merging software in geophysics are now possible. The GeoFramework project, a collaboration between computer scientists and geoscientists, is a response to these needs and opportunities. GeoFramework is based on and extends Pyre, a Python-based modeling framework recently developed to link solid (Lagrangian) and fluid (Eulerian) models, as well as mesh generators, visualization packages, and databases, with one another for engineering applications. The utility and generality of Pyre as a general-purpose framework in science is now being recognized. Besides its use in engineering and geophysics, it is also being used in particle physics and astronomy. Geology and geophysics impose their own unique requirements on software frameworks which are not generally available in existing frameworks, and so there is a need for research in this area. One of the special requirements is the way Lagrangian and Eulerian codes will need to be linked in time and space within a plate tectonics context. GeoFramework has grown beyond its initial goal of linking a limited number of existing codes together. The following codes are now being reengineered within the context of Pyre: Tecton, a 3-D FE visco-elastic code for lithospheric relaxation; CitComS, a code for spherical mantle convection; SpecFEM3D, a SEM code for global and regional seismic waves; eqsim, a FE code for dynamic earthquake rupture; SNAC, a developing 3-D code based on the FLAC method for visco-elastoplastic deformation; SNARK, a 3-D FE-PIC method for viscoplastic deformation; and gPLATES, an open-source paleogeographic/plate tectonics modeling package. We will demonstrate how codes can be linked with themselves, such as a regional and global model of mantle convection, and a visco-elastoplastic representation of the crust within viscous mantle flow. Finally, we will describe how http://GeoFramework.org has become a distribution site for a suite of modeling software in geophysics.

  10. Frequency of Victimization Experiences and Well-Being Among Online, Offline, and Combined Victims on Social Online Network Sites of German Children and Adolescents.

    PubMed

    Glüer, Michael; Lohaus, Arnold

    2015-01-01

    Victimization is associated with negative developmental outcomes in childhood and adolescence. However, previous studies have provided mixed results regarding the association between offline and online victimization and indicators of social, psychological, and somatic well-being. In this study, we investigated 1,890 German children and adolescents (grades 5-10, mean age = 13.9; SD = 2.1) with and without offline or online victimization experiences who participated in a social online network (SNS). Online questionnaires were used to assess previous victimization (offline, online, combined, and without), somatic and psychological symptoms, self-esteem, and social self-concept (social competence, resistance to peer influence, esteem by others). In total, 1,362 (72.1%) children and adolescents reported being a member of at least one SNS, and 377 students (28.8%) reported previous victimization. Most children and adolescents had offline victimization experiences (17.5%), whereas 2.7% reported online victimization, and 8.6% reported combined experiences. Girls reported more online and combined victimization, and boys reported more offline victimization. The type of victimization (offline, online, combined) was associated with increased reports of psychological and somatic symptoms, lower self-esteem and esteem by others, and lower resistance to peer influences. The effects were comparable for the groups with offline and online victimization. They were, however, increased in the combined group in comparison to victims with offline experiences alone.

  11. Frequency of Victimization Experiences and Well-Being Among Online, Offline, and Combined Victims on Social Online Network Sites of German Children and Adolescents

    PubMed Central

    Glüer, Michael; Lohaus, Arnold

    2015-01-01

    Victimization is associated with negative developmental outcomes in childhood and adolescence. However, previous studies have provided mixed results regarding the association between offline and online victimization and indicators of social, psychological, and somatic well-being. In this study, we investigated 1,890 German children and adolescents (grades 5–10, mean age = 13.9; SD = 2.1) with and without offline or online victimization experiences who participated in a social online network (SNS). Online questionnaires were used to assess previous victimization (offline, online, combined, and without), somatic and psychological symptoms, self-esteem, and social self-concept (social competence, resistance to peer influence, esteem by others). In total, 1,362 (72.1%) children and adolescents reported being a member of at least one SNS, and 377 students (28.8%) reported previous victimization. Most children and adolescents had offline victimization experiences (17.5%), whereas 2.7% reported online victimization, and 8.6% reported combined experiences. Girls reported more online and combined victimization, and boys reported more offline victimization. The type of victimization (offline, online, combined) was associated with increased reports of psychological and somatic symptoms, lower self-esteem and esteem by others, and lower resistance to peer influences. The effects were comparable for the groups with offline and online victimization. They were, however, increased in the combined group in comparison to victims with offline experiences alone. PMID:26734598

  12. Onyx-Advanced Aeropropulsion Simulation Framework Created

    NASA Technical Reports Server (NTRS)

    Reed, John A.

    2001-01-01

    The Numerical Propulsion System Simulation (NPSS) project at the NASA Glenn Research Center is developing a new software environment for analyzing and designing aircraft engines and, eventually, space transportation systems. Its purpose is to dramatically reduce the time, effort, and expense necessary to design and test jet engines by creating sophisticated computer simulations of an aerospace object or system (refs. 1 and 2). Through a university grant as part of that effort, researchers at the University of Toledo have developed Onyx, an extensible Java-based (Sun Microsystems, Inc.), object-oriented simulation framework, to investigate how advanced software design techniques can be successfully applied to aeropropulsion system simulation (refs. 3 and 4). The design of Onyx's architecture enables users to customize and extend the framework to add new functionality or adapt simulation behavior as required. It exploits object-oriented technologies, such as design patterns, domain frameworks, and software components, to develop a modular system in which users can dynamically replace components with others having different functionality.
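
    The component-replacement idea can be sketched as follows (Onyx itself is Java-based; the names below are hypothetical, not the Onyx API): any object satisfying a shared interface can be dropped into the simulation without changing client code.

```python
# Sketch of interchangeable simulation components behind one interface.
from typing import Protocol

class Compressor(Protocol):
    def pressure_ratio(self, mass_flow: float) -> float: ...

class MapBasedCompressor:
    def pressure_ratio(self, mass_flow: float) -> float:
        return 1.5 + 0.01 * mass_flow      # table-lookup stand-in

class MeanLineCompressor:
    def pressure_ratio(self, mass_flow: float) -> float:
        return 1.4 + 0.02 * mass_flow      # physics-based stand-in

def run_cycle(comp: Compressor, mass_flow: float = 50.0) -> float:
    # client code depends only on the interface, not the implementation
    return comp.pressure_ratio(mass_flow)

print(run_cycle(MapBasedCompressor()))     # components can be swapped
print(run_cycle(MeanLineCompressor()))     # without touching run_cycle
```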

  13. Structure simulation with calculated NMR parameters - integrating COSMOS into the CCPN framework.

    PubMed

    Schneider, Olaf; Fogh, Rasmus H; Sternberg, Ulrich; Klenin, Konstantin; Kondov, Ivan

    2012-01-01

    The Collaborative Computing Project for NMR (CCPN) has built a software framework consisting of the CCPN data model (with APIs) for NMR-related data, the CcpNmr Analysis program, and additional tools such as CcpNmr FormatConverter. The open architecture allows for the integration of external software to extend the abilities of the CCPN framework with additional calculation methods. Recently, we have carried out the first steps of integrating our software, Computer Simulation of Molecular Structures (COSMOS), into the CCPN framework. The COSMOS-NMR force field unites quantum chemical routines for the calculation of molecular properties with a molecular mechanics force field yielding the relative molecular energies. COSMOS-NMR allows NMR parameters to be introduced as constraints into molecular mechanics calculations. The resulting infrastructure will be made available to the NMR community. As a first application we have tested the evaluation of calculated protein structures using COSMOS-derived 13C Cα and Cβ chemical shifts. In this paper we give an overview of the methodology and a roadmap for future developments and applications.
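
    The evaluation step described above amounts to comparing calculated and observed chemical shifts; a minimal sketch (all shift values below are invented for illustration):

```python
# Score a candidate structure by the RMSD between calculated and
# observed CA chemical shifts (ppm). Residue keys and values are made up.
import math

def shift_rmsd(calculated: dict, observed: dict) -> float:
    """Root-mean-square deviation over residues present in both sets."""
    common = calculated.keys() & observed.keys()
    sq = [(calculated[r] - observed[r]) ** 2 for r in common]
    return math.sqrt(sum(sq) / len(sq))

calc_ca = {"ALA5": 52.1, "GLY6": 45.0, "LEU7": 55.8}   # hypothetical
obs_ca  = {"ALA5": 52.6, "GLY6": 44.7, "LEU7": 56.3}

print(f"CA shift RMSD: {shift_rmsd(calc_ca, obs_ca):.2f} ppm")
```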

  14. Software Framework for Development of Web-GIS Systems for Analysis of Georeferenced Geophysical Data

    NASA Astrophysics Data System (ADS)

    Okladnikov, I.; Gordov, E. P.; Titov, A. G.

    2011-12-01

    Georeferenced datasets (meteorological databases, modeling and reanalysis results, remote sensing products, etc.) are currently actively used in numerous applications, including modeling, interpretation, and forecasting of climatic and ecosystem changes on various spatial and temporal scales. Because of the inherent heterogeneity of environmental datasets, as well as their size, which can reach tens of terabytes for a single dataset, present-day studies of climate and environmental change require special software support. A dedicated software framework has been created for rapid development of information-computational systems, based on Web-GIS technologies, that provide such support. The software framework consists of 3 basic parts: a computational kernel developed using ITTVIS Interactive Data Language (IDL); a set of PHP controllers run within a specialized web portal; and a JavaScript class library for development of typical components of a web-mapping application graphical user interface (GUI) based on AJAX technology. The computational kernel comprises a number of modules for dataset access, mathematical and statistical data analysis, and visualization of results. The specialized web portal consists of the Apache web server; GeoServer, an OGC-standards-compliant package used as the base for presenting cartographic information over the Web; and a set of PHP controllers implementing the web-mapping application logic and governing the computational kernel. The JavaScript library for graphical user interface development is based on the GeoExt library, which combines the ExtJS framework and OpenLayers. Based on the software framework, an information-computational system for complex analysis of large georeferenced data archives was developed. Structured environmental datasets available for processing now include two editions of the NCEP/NCAR Reanalysis, the JMA/CRIEPI JRA-25 Reanalysis, the ECMWF ERA-40 Reanalysis, the ECMWF ERA Interim Reanalysis, the MRI/JMA APHRODITE's Water Resources Project Reanalysis, meteorological observational data for the territory of the former USSR for the 20th century, and others. The current version of the system is already in use in scientific research; recently it was successfully applied to the analysis of climate change in Siberia and its regional impact. The software framework presented allows rapid development of Web-GIS systems for geophysical data analysis, thus providing specialists involved in multidisciplinary research projects with reliable and practical instruments for complex analysis of climate and ecosystem changes on global and regional scales. This work is partially supported by RFBR grants #10-07-00547, #11-05-01190, and SB RAS projects 4.31.1.5, 4.31.2.7, 4, 8, 9, 50 and 66.

  15. Model and Interoperability using Meta Data Annotations

    NASA Astrophysics Data System (ADS)

    David, O.

    2011-12-01

    Software frameworks and architectures are in need of metadata to efficiently support model integration. Modelers have to know the context of a model, often stepping into modeling semantics and auxiliary information usually not provided in a concise structure and universal format consumable by a range of (modeling) tools. XML often seems the obvious solution for capturing metadata, but its wide adoption to facilitate model interoperability is limited by XML schema fragmentation, complexity, and verbosity outside of a data-automation process. Ontologies seem to overcome those shortcomings; however, the practical significance of their use remains to be demonstrated. OMS version 3 took a different approach to metadata representation. The fundamental building block of a modular model in OMS is a software component representing a single physical process, calibration method, or data access approach. Here, programming language features known as annotations or attributes were adopted. Within other (non-modeling) frameworks it has been observed that annotations lead to cleaner and leaner application code. Framework-supported model integration, traditionally accomplished using Application Programming Interface (API) calls, is now achieved using descriptive code annotations. Fully annotated components for various hydrological and Ag-system models now provide information directly for (i) model assembly and building; (ii) data flow analysis for implicit multi-threading or visualization; (iii) automated and comprehensive model documentation of component dependencies and physical data properties; (iv) automated model and component testing, calibration, and optimization; and (v) automated audit-traceability to account for all model resources leading to a particular simulation result. Such a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework but a strong reference to the originating code. Since models and modeling components are not directly bound to the framework by specific APIs and/or data types, they can more easily be reused both within the framework and outside of it. While providing all those capabilities, a significant reduction in the size of the model source code was achieved. To demonstrate the benefit of annotations to a modeler, studies were conducted to compare the effectiveness of an annotation-based framework approach with other modeling frameworks and libraries; a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A typical hydrological model was implemented across several modeling frameworks and several software metrics were collected. The metrics selected were measures of non-invasive design methods for modeling frameworks from a software engineering perspective. It appears that the use of annotations positively impacts several software quality measures. Experience to date has demonstrated the multi-purpose value of using annotations. Annotations are also a feasible and practical method to enable interoperability among models and modeling frameworks.
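
    The annotation approach can be sketched in Python, with decorators standing in for Java-style annotations (OMS3 itself is Java; the names below are illustrative, not the OMS3 API): metadata attached to a component lets the framework wire data flow without explicit API calls inside the model code.

```python
# Decorators attach input/output metadata to a model component; the
# framework can then discover what the component needs and produces.

def inputs(**meta):
    def wrap(cls):
        cls._inputs = meta
        return cls
    return wrap

def outputs(**meta):
    def wrap(cls):
        cls._outputs = meta
        return cls
    return wrap

@inputs(precip="mm/day", temp="degC")
@outputs(runoff="mm/day")
class SnowMelt:
    def execute(self, precip, temp):
        # toy process: melt contributes runoff only above freezing
        return {"runoff": max(0.0, 0.5 * precip * (temp > 0))}

print(SnowMelt._inputs, SnowMelt._outputs)   # framework-visible metadata
print(SnowMelt().execute(precip=12.0, temp=3.0))
```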

  16. MMORPG escapism predicts decreased well-being: examination of gaming time, game realism beliefs, and online social support for offline problems.

    PubMed

    Kaczmarek, Lukasz D; Drążkowski, Dariusz

    2014-05-01

    Massively multiplayer online role-playing game (MMORPG) escapists are individuals who indulge in the MMORPG environment to avoid real world problems. Though a relationship between escapism and deteriorated well-being has been established, little is known about particular pathways that mediate this relationship. In the current study, we examined this topic by testing an integrative model of MMORPG escapism, which includes game realism beliefs, gaming time, offline social support, and online social support for offline problems. MMORPG players (N=1,056) completed measures of escapist motivation, game realism beliefs, social support, well-being, and reported gaming time. The tested structural equation model had a good fit to the data. We found that individuals with escapist motivation endorsed stronger game realism beliefs and spent more time playing MMORPGs, which, in turn, increased online support but decreased offline social support. Well-being was favorably affected by both online and offline social support, although offline social support had a stronger effect. The higher availability of online social support for offline problems did not compensate for the lower availability of offline support among MMORPG escapists. Understanding the psychological factors related to depletion of social resources in MMORPG players can help optimize MMORPGs as leisure activities.

  17. Moving health promotion communities online: a review of the literature.

    PubMed

    Sunderland, Naomi; Beekhuyzen, Jenine; Kendall, Elizabeth; Wolski, Malcom

    There is a need to enhance the effectiveness and reach of complex health promotion initiatives by providing opportunities for diverse health promotion practitioners and others to interact in online settings. This paper reviews the existing literature on how to take health promotion communities and networks into online settings. A scoping review of relevant bodies of literature and empirical evidence was undertaken to provide an interpretive synthesis of existing knowledge on the topic. Sixteen studies were identified between 1986 and 2007. Relatively little research has been conducted on the process of taking existing offline communities and networks into online settings. However, more research has focused on offline (i.e. not mediated via computer networks); 'virtual' (purely online with no offline interpersonal contact); and 'multiplex' communities (i.e. those that interact across both online and offline settings). Results are summarised under three themes: characteristics of communities in online and offline settings; issues in moving offline communities online, and designing online communities to match community needs. Existing health promotion initiatives can benefit from online platforms that promote community building and knowledge sharing. Online e-health promotion settings and communities can successfully integrate with existing offline settings and communities to form 'multiplex' communities (i.e. communities that operate fluently across both online and offline settings).

  18. Software Geometry in Simulations

    NASA Astrophysics Data System (ADS)

    Alion, Tyler; Viren, Brett; Junk, Tom

    2015-04-01

    The Long Baseline Neutrino Experiment (LBNE) involves many detectors. The experiment's near detector (ND) facility may ultimately involve several detectors. The far detector (FD) will be significantly larger than any other Liquid Argon (LAr) detector yet constructed; many prototype detectors are being constructed and studied to motivate a plethora of proposed FD designs. Whether it be a constructed prototype or a proposed ND/FD design, every design must be simulated and analyzed. This presents a considerable challenge to LBNE software experts; each detector geometry must be described to the simulation software in an efficient way which allows multiple authors to collaborate easily. Furthermore, different geometry versions must be tracked throughout their use. We present a framework called the General Geometry Description (GGD), written and developed by LBNE software collaborators for managing software to generate geometries. Though GGD is flexible enough to be used by any experiment working with detectors, we present its first use in generating Geometry Description Markup Language (GDML) files to interface with LArSoft, a framework of detector simulations, event reconstruction, and data analyses written for all LAr technology users at Fermilab. Brett Viren is the author of the framework discussed here, the General Geometry Description (GGD).
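
    A minimal illustration of programmatic GDML generation of the kind a geometry framework like GGD automates (element names follow the public GDML schema; the geometry, material, and dimensions are invented):

```python
# Emit a tiny GDML fragment: one box solid, one logical volume, one setup.
import xml.etree.ElementTree as ET

gdml = ET.Element("gdml")

solids = ET.SubElement(gdml, "solids")
ET.SubElement(solids, "box", name="cryostat",
              x="1000", y="1000", z="2000", lunit="cm")

structure = ET.SubElement(gdml, "structure")
vol = ET.SubElement(structure, "volume", name="volCryostat")
ET.SubElement(vol, "materialref", ref="LAr")     # material assumed defined
ET.SubElement(vol, "solidref", ref="cryostat")

setup = ET.SubElement(gdml, "setup", name="Default", version="1.0")
ET.SubElement(setup, "world", ref="volCryostat")

ET.ElementTree(gdml).write("cryostat.gdml")       # file consumable by Geant4
```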

  19. Exploring the Use of a Test Automation Framework

    NASA Technical Reports Server (NTRS)

    Cervantes, Alex

    2009-01-01

    It is known that software testers, more often than not, lack the time needed to fully test the delivered software product within the time period allotted to them. When problems in the implementation phase of a development project occur, it normally causes the software delivery date to slide. As a result, testers either need to work longer hours, or supplementary resources need to be added to the test team in order to meet aggressive test deadlines. One solution to this problem is to provide testers with a test automation framework to facilitate the development of automated test solutions.

  20. Methodology for Software Reliability Prediction. Volume 1.

    DTIC Science & Technology

    1987-11-01

    [OCR-garbled table of system categories: manned and unmanned spacecraft, batch systems, airborne avionics, event control, real-time closed-loop operations.] ... software reliability. A Software Reliability Measurement Framework was established which spans the life cycle of a software system and includes the specification, prediction, estimation, and assessment of software reliability. Data from 59 systems, representing over 5 million lines of code, were ...

  1. DAQ for commissioning and calibration of a multichannel analyzer of scintillation counters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tortorici, F.; Jones, M.; Bellini, V.

    We report the status of the Data Acquisition (DAQ) system for the Coordinate Detector (CDET) module of the Super BigBite Spectrometer facility at Hall A of the Thomas Jefferson National Accelerator Facility. Presently, the DAQ is fully assembled and tested with one CDET module. The commissioning of the CDET module, which is the goal of the tests presented here, consists essentially of measuring the amplitude and time-over-threshold of signals from cosmic rays. Hardware checks and the development of DAQ control and off-line analysis software are ongoing; the module currently appears to work roughly according to expectations. Data presented in this note are still preliminary.

  2. Real-time inverse kinematics and inverse dynamics for lower limb applications using OpenSim

    PubMed Central

    Modenese, L.; Lloyd, D.G.

    2017-01-01

    Real-time estimation of joint angles and moments can be used for rapid evaluation in clinical, sport, and rehabilitation contexts. However, real-time calculation of kinematics and kinetics is currently based on approximate solutions or generic anatomical models. We present a real-time system based on OpenSim solving inverse kinematics and dynamics without simplifications at 2000 frames per second with less than 31.5 ms of delay. We describe the software architecture, sensitivity analyses to minimise delays and errors, and compare offline and real-time results. This system has the potential to strongly impact current rehabilitation practices by enabling the use of personalised musculoskeletal models in real-time. PMID:27723992

  3. Biosphere-Atmosphere Transfer Scheme (BATS) version 1e as coupled to the NCAR community climate model. Technical note. [NCAR (National Center for Atmospheric Research)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickinson, R.E.; Henderson-Sellers, A.; Kennedy, P.J.

    A comprehensive model of land-surface processes suitable for use with various National Center for Atmospheric Research (NCAR) General Circulation Models (GCMs) has been under development. Special emphasis has been given to properly describing the role of vegetation in modifying the surface moisture and energy budgets. The result of these efforts has been incorporated into a boundary package, referred to as the Biosphere-Atmosphere Transfer Scheme (BATS). The current frozen version, BATS1e, is a piece of software of about four thousand lines of code that runs either as an offline version or coupled to the Community Climate Model (CCM).

  4. Real-time inverse kinematics and inverse dynamics for lower limb applications using OpenSim.

    PubMed

    Pizzolato, C; Reggiani, M; Modenese, L; Lloyd, D G

    2017-03-01

    Real-time estimation of joint angles and moments can be used for rapid evaluation in clinical, sport, and rehabilitation contexts. However, real-time calculation of kinematics and kinetics is currently based on approximate solutions or generic anatomical models. We present a real-time system based on OpenSim solving inverse kinematics and dynamics without simplifications at 2000 frames per second with less than 31.5 ms of delay. We describe the software architecture, sensitivity analyses to minimise delays and errors, and compare offline and real-time results. This system has the potential to strongly impact current rehabilitation practices by enabling the use of personalised musculoskeletal models in real-time.

  5. Building Energy Monitoring and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Feng, Wei; Lu, Alison

    This project aimed to develop a standard methodology for building energy data definition, collection, presentation, and analysis; to apply the developed methods to a standardized energy monitoring platform, including hardware and software, to collect and analyze building energy use data; and to compile offline statistical data and online real-time data in both countries for a full understanding of the current status of building energy use. This helps decode the driving forces behind the discrepancy in building energy use between the two countries; identify gaps and deficiencies in current building energy monitoring, data collection, and analysis; and create knowledge and tools to collect and analyze good building energy data to provide valuable and actionable information for key stakeholders.

  6. Data-driven prognosis: a multi-physics approach verified via balloon burst experiment.

    PubMed

    Chandra, Abhijit; Kar, Oliva

    2015-04-08

    A multi-physics formulation for data-driven prognosis (DDP) is developed. Unlike traditional predictive strategies that require controlled offline measurements or 'training' for determination of constitutive parameters to derive the transitional statistics, the proposed DDP algorithm relies solely on in situ measurements. It uses a deterministic mechanics framework, but the stochastic nature of the solution arises naturally from the underlying assumptions regarding the order of the conservation potential as well as the number of dimensions involved. The proposed DDP scheme is capable of predicting onset of instabilities. Because the need for offline testing (or training) is obviated, it can be easily implemented for systems where such a priori testing is difficult or even impossible to conduct. The prognosis capability is demonstrated here via a balloon burst experiment where the instability is predicted using only online visual observations. The DDP scheme never failed to predict the incipient failure, and no false-positives were issued. The DDP algorithm is applicable to other types of datasets. Time horizons of DDP predictions can be adjusted by using memory over different time windows. Thus, a big dataset can be parsed in time to make a range of predictions over varying time horizons.

  7. Using Social Media for Social Comparison and Feedback-Seeking: Gender and Popularity Moderate Associations with Depressive Symptoms.

    PubMed

    Nesi, Jacqueline; Prinstein, Mitchell J

    2015-11-01

    This study examined specific technology-based behaviors (social comparison and interpersonal feedback-seeking) that may interact with offline individual characteristics to predict concurrent depressive symptoms among adolescents. A total of 619 students (57 % female; mean age 14.6) completed self-report questionnaires at 2 time points. Adolescents reported on levels of depressive symptoms at baseline, and 1 year later on depressive symptoms, frequency of technology use (cell phones, Facebook, and Instagram), excessive reassurance-seeking, and technology-based social comparison and feedback-seeking. Adolescents also completed sociometric nominations of popularity. Consistent with hypotheses, technology-based social comparison and feedback-seeking were associated with depressive symptoms. Popularity and gender served as moderators of this effect, such that the association was particularly strong among females and adolescents low in popularity. Associations were found above and beyond the effects of overall frequency of technology use, offline excessive reassurance-seeking, and prior depressive symptoms. Findings highlight the utility of examining the psychological implications of adolescents' technology use within the framework of existing interpersonal models of adolescent depression and suggest the importance of more nuanced approaches to the study of adolescents' media use.

  8. Using Social Media for Social Comparison and Feedback-Seeking: Gender and Popularity Moderate Associations with Depressive Symptoms

    PubMed Central

    2018-01-01

    This study examined specific technology-based behaviors (social comparison and interpersonal feedback-seeking) that may interact with offline individual characteristics to predict concurrent depressive symptoms among adolescents. A total of 619 students (57 % female; mean age 14.6) completed self-report questionnaires at 2 time points. Adolescents reported on levels of depressive symptoms at baseline, and 1 year later on depressive symptoms, frequency of technology use (cell phones, Facebook, and Instagram), excessive reassurance-seeking, and technology-based social comparison and feedback-seeking. Adolescents also completed sociometric nominations of popularity. Consistent with hypotheses, technology-based social comparison and feedback-seeking were associated with depressive symptoms. Popularity and gender served as moderators of this effect, such that the association was particularly strong among females and adolescents low in popularity. Associations were found above and beyond the effects of overall frequency of technology use, offline excessive reassurance-seeking, and prior depressive symptoms. Findings highlight the utility of examining the psychological implications of adolescents’ technology use within the framework of existing interpersonal models of adolescent depression and suggest the importance of more nuanced approaches to the study of adolescents’ media use. PMID:25899879

  9. Living Design Memory: Framework, Implementation, Lessons Learned.

    ERIC Educational Resources Information Center

    Terveen, Loren G.; And Others

    1995-01-01

    Discusses large-scale software development and describes the development of the Designer Assistant to improve software development effectiveness. Highlights include the knowledge management problem; related work, including artificial intelligence and expert systems, software process modeling research, and other approaches to organizational memory;…

  10. Engagement in social media environments for individuals who use augmentative and alternative communication.

    PubMed

    Caron, Jessica

    2016-10-14

    Communicative interactions, whatever the mode (e.g., face-to-face, online), rely on the communication skills of each participating individual. Some individuals have significant speech and language impairments and require the use of augmentative and alternative communication (AAC) (e.g., signs, speech-generating devices) to maximize their communication participation across a variety of online and offline contexts. The use of social media has brought about changes to communication environments, contributing new contexts for engagement. To provide a framework for considering the application of engagement theory to interventions around social media use by individuals who use AAC. The author has applied examples from qualitative social media and AAC research to a framework of engagement. No formal data collection was used. Social media use has become a conventional form of communication, yet recognition of the value of social media (and other electronic modalities) for individuals who use AAC has not been fully translated into practice. The examples used illustrate how the proposed framework can assist in clinical practice and future research directions. Engagement, including the proposed framework for considering social media engagement activities, can provide a systematic way to approach social media use for individuals who use AAC.

  11. The relative importance of online victimization in understanding depression, delinquency, and substance use.

    PubMed

    Mitchell, Kimberly J; Ybarra, Michele; Finkelhor, David

    2007-11-01

    This article explores the relationship between online and offline forms of interpersonal victimization, with depressive symptomatology, delinquency, and substance use. In a national sample of 1,501 youth Internet users (ages 10-17 years), 57% reported some form of offline interpersonal victimization (e.g., bullying, sexual abuse), and 23% reported an online interpersonal victimization (i.e., sexual solicitation and harassment) in the past year. Nearly three fourths (73%) of youth reporting an online victimization also reported an offline victimization. Virtually all types of online and offline victimization were independently related to depressive symptomatology, delinquent behavior, and substance use. Even after adjusting for the total number of different offline victimizations, youth with online sexual solicitation were still almost 2 times more likely to report depressive symptomatology and high substance use. Findings reiterate the importance of screening for a variety of different types of victimization in mental health settings, including both online and offline forms.

  12. The Need for V&V in Reuse-Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1997-01-01

    V&V is currently performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors as early as possible during the development process, including errors that pertain to an entire domain or product line rather than to a single application. The system application provides the context under which the software artifacts are validated. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V, and extends these activities into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.

  13. Hybrid Optimization Parallel Search PACKage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-11-10

    HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
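
    For a flavor of the solver family mentioned above, here is a textbook compass search, the simplest member of the Generating Set Search family; this is a serial sketch, not HOPSPACK's parallel implementation.

```python
# Compass search: poll +/- each coordinate direction, contract on failure.
def compass_search(f, x, step=1.0, tol=1e-6):
    n = len(x)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(n):                 # the generating set: +/- e_i
            for s in (+step, -step):
                y = list(x)
                y[i] += s
                fy = f(y)
                if fy < fx:                # accept the first improvement
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                    # no direction improved: contract
    return x, fx

# Example: minimise a smooth quadratic with optimum at (3, -1).
print(compass_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0]))
```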

  14. HelioScan: a software framework for controlling in vivo microscopy setups with high hardware flexibility, functional diversity and extendibility.

    PubMed

    Langer, Dominik; van 't Hoff, Marcel; Keller, Andreas J; Nagaraja, Chetan; Pfäffli, Oliver A; Göldi, Maurice; Kasper, Hansjörg; Helmchen, Fritjof

    2013-04-30

    Intravital microscopy such as in vivo imaging of brain dynamics is often performed with custom-built microscope setups controlled by custom-written software to meet specific requirements. Continuous technological advancement in the field has created a need for new control software that is flexible enough to support the biological researcher with innovative imaging techniques and provide the developer with a solid platform for quickly and easily implementing new extensions. Here, we introduce HelioScan, a software package written in LabVIEW, as a platform serving this dual role. HelioScan is designed as a collection of components that can be flexibly assembled into microscope control software tailored to the particular hardware and functionality requirements. Moreover, HelioScan provides a software framework within which new functionality can be implemented in a quick and structured manner. A specific HelioScan application is assembled at run time from individual software components, based on user-definable configuration files. Due to its component-based architecture, HelioScan can exploit synergies of multiple developers working in parallel on different components in a community effort. We exemplify the capabilities and versatility of HelioScan by demonstrating several in vivo brain imaging modes, including camera-based intrinsic optical signal imaging for functional mapping of cortical areas, standard two-photon laser-scanning microscopy using galvanometric mirrors, and high-speed in vivo two-photon calcium imaging using either acousto-optic deflectors or a resonant scanner. We recommend HelioScan as a convenient software framework for the in vivo imaging community. Copyright © 2013 Elsevier B.V. All rights reserved.
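
    The run-time assembly idea can be sketched outside LabVIEW as follows; the component names and the JSON configuration are invented for illustration and are not part of HelioScan.

```python
# Assemble a "scanner" component at run time from a configuration file.
import json

class GalvoScanner:
    def __init__(self, rate): self.rate = rate
    def acquire(self): return f"galvo frame @ {self.rate} Hz"

class ResonantScanner:
    def __init__(self, rate): self.rate = rate
    def acquire(self): return f"resonant frame @ {self.rate} Hz"

REGISTRY = {"galvo": GalvoScanner, "resonant": ResonantScanner}

# In practice this JSON would live in a user-editable configuration file.
config = json.loads('{"scanner": "resonant", "rate": 8000}')
scanner = REGISTRY[config["scanner"]](config["rate"])  # assembled at run time
print(scanner.acquire())
```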

  15. Fusion of P300 and eye-tracker data for spelling using BCI2000

    NASA Astrophysics Data System (ADS)

    Kalika, Dmitry; Collins, Leslie; Caves, Kevin; Throckmorton, Chandra

    2017-10-01

    Objective. Various augmentative and alternative communication (AAC) devices have been developed in order to aid communication for individuals with communication disorders. Recently, there has been interest in combining EEG data and eye-gaze data with the goal of developing a hybrid (or 'fused') BCI (hBCI) AAC system. This work explores the effectiveness of a speller that fuses data from an eye-tracker and the P300 speller in order to create a hybrid P300 speller. Approach. This hybrid speller collects both eye-tracking and EEG data in parallel, and the user spells characters on the screen in the same way as with the P300 speller alone. Online and offline experiments were performed. The online experiments measured the performance of the speller for sixteen non-disabled participants, while the offline simulations were used to assess the robustness of the hybrid system. Main results. Online results showed that for fifteen non-disabled participants, using eye-gaze in a Bayesian framework with EEG data from the P300 speller improved accuracy (0.0163 ± 2.72, 0.085 ± 0.111, 0.080 ± 0.106 for the estimated, medium, and high variance configurations) and reduced the average number of flashes required to spell a character compared to the standard P300 speller that relies solely on EEG data (-53.27 ± 25.87, -36.15 ± 19.3, -18.85 ± 12.43 for the estimated, medium, and high variance configurations). Offline simulations indicate that the system provides more robust performance than a standalone eye-gaze system. Significance. The results of this work on non-disabled participants show the potential efficacy of the hybrid P300 and eye-tracker speller. Further validation in the amyotrophic lateral sclerosis population is needed to assess the benefit of this hybrid system.
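
    The fusion can be pictured as Bayesian updating: the eye-gaze estimate supplies a prior over candidate characters and the P300 classifier supplies a likelihood. The sketch below uses an invented Gaussian gaze model and made-up scores; it is not the paper's exact model.

```python
# Fuse a gaze-derived prior with P300 classifier scores (all values invented).
import math

chars = ["A", "B", "C", "D"]
positions = {"A": 0.0, "B": 1.0, "C": 2.0, "D": 3.0}   # screen coordinates

def gaze_prior(fixation, sigma=0.8):
    # Gaussian weighting of characters by distance from the fixation point
    w = {c: math.exp(-(positions[c] - fixation) ** 2 / (2 * sigma ** 2))
         for c in chars}
    z = sum(w.values())
    return {c: w[c] / z for c in chars}

def fuse(eeg_likelihood, fixation):
    prior = gaze_prior(fixation)
    post = {c: eeg_likelihood[c] * prior[c] for c in chars}
    z = sum(post.values())
    return {c: post[c] / z for c in chars}

eeg = {"A": 0.2, "B": 0.5, "C": 0.2, "D": 0.1}   # hypothetical P300 scores
posterior = fuse(eeg, fixation=1.2)
print(max(posterior, key=posterior.get), posterior)
```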

  16. Citizen Observatories: A Standards Based Architecture

    NASA Astrophysics Data System (ADS)

    Simonis, Ingo

    2015-04-01

    A number of large-scale research projects are currently under way exploring the various components of citizen observatories, e.g. CITI-SENSE (http://www.citi-sense.eu), Citclops (http://citclops.eu), COBWEB (http://cobwebproject.eu), OMNISCIENTIS (http://www.omniscientis.eu), and WeSenseIt (http://www.wesenseit.eu). Common to all projects is the motivation to develop a platform enabling effective participation by citizens in environmental projects, while considering important aspects such as security, privacy, long-term storage and availability, accessibility of raw and processed data, and its proper integration into catalogues and international exchange and collaboration systems such as GEOSS or INSPIRE. This paper describes the software architecture implemented for setting up crowdsourcing campaigns using standardized components, interfaces, security features, and distribution capabilities. It illustrates the Citizen Observatory Toolkit, a software suite for defining crowdsourcing campaigns, inviting registered and unregistered participants to take part in them, and analyzing, processing, and visualizing raw and quality-enhanced crowdsourcing data and derived products. The Citizen Observatory Toolkit is not a single software product. Instead, it is a framework of components that are built using internationally adopted standards wherever possible (e.g. OGC standards from Sensor Web Enablement, GeoPackage, and Web Mapping and Processing Services, as well as security and metadata/cataloguing standards), defines profiles of those standards where necessary (e.g. SWE O&M profile, SensorML profile), and implements design decisions based on the motivation to maximize interoperability and reusability of all components. The toolkit contains tools to set up, manage, and maintain crowdsourcing campaigns, allows building on-demand apps optimized for the specific sampling focus, supports offline and online sampling modes using modern cell phones with built-in sensing technologies, automates the upload of the raw data, and handles conflation services to match quality requirements and analysis challenges. The strict implementation of all components using internationally adopted standards ensures maximal interoperability and reusability. The Citizen Observatory Toolkit is currently developed as part of the COBWEB research project. COBWEB is partially funded by the European Programme FP7/2007-2013 under grant agreement n° 308513, part of the topic ENV.2012.6.5-1 "Developing community based environmental monitoring and information systems using innovative and novel earth observation applications."

  17. Rapid Development of Custom Software Architecture Design Environments

    DTIC Science & Technology

    1999-08-01

    the tools themselves. This dissertation describes a new approach to capturing and using architectural design expertise in software architecture design environments...A language and tools are presented for capturing and encapsulating software architecture design expertise within a conceptual framework...of architectural styles and design rules. The design expertise thus captured is supported with an incrementally configurable software architecture

  18. A comparison of the use of bony anatomy and internal markers for offline verification and an evaluation of the potential benefit of online and offline verification protocols for prostate radiotherapy.

    PubMed

    McNair, Helen A; Hansen, Vibeke N; Parker, Christopher C; Evans, Phil M; Norman, Andrew; Miles, Elizabeth; Harris, Emma J; Del-Acroix, Louise; Smith, Elizabeth; Keane, Richard; Khoo, Vincent S; Thompson, Alan C; Dearnaley, David P

    2008-05-01

    To evaluate the utility of intraprostatic markers in the treatment verification of prostate cancer radiotherapy. Specific aims were: to compare the effectiveness of offline correction protocols, either using gold markers or bony anatomy; to estimate the potential benefit of online correction protocols using gold markers; to determine the presence and effect of intrafraction motion. Thirty patients with three gold markers inserted had pretreatment and posttreatment images acquired and were treated using an offline correction protocol and gold markers. Retrospectively, an offline protocol was applied using bony anatomy and an online protocol using gold markers. The systematic errors were reduced from 1.3, 1.9, and 2.5 mm to 1.1, 1.1, and 1.5 mm in the right-left (RL), superoinferior (SI), and anteroposterior (AP) directions, respectively, using the offline correction protocol and gold markers instead of bony anatomy. The subsequent decrease in margins was 1.7, 3.3, and 4 mm in the RL, SI, and AP directions, respectively. An offline correction protocol combined with an online correction protocol in the first four fractions reduced random errors further to 0.9, 1.1, and 1.0 mm in the RL, SI, and AP directions, respectively. A daily online protocol reduced all errors to <1 mm. Intrafraction motion had greater impact on the effectiveness of the online protocol than the offline protocols. An offline protocol using gold markers is effective in reducing the systematic error. The value of online protocols is reduced by intrafraction motion.
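
    For context, setup margins in verification studies of this kind are commonly derived from recipes combining the systematic (Σ) and random (σ) error components; a widely used example is the van Herk formula shown below. The abstract does not state which recipe these margins come from, so this is illustrative only.

```latex
% Common CTV-to-PTV margin recipe (van Herk et al.); \Sigma and \sigma are
% the standard deviations of the systematic and random setup errors per axis.
M = 2.5\,\Sigma + 0.7\,\sigma
```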

  19. Online and offline awareness deficits: Anosognosia for spatial neglect.

    PubMed

    Chen, Peii; Toglia, Joan

    2018-04-12

    Anosognosia for spatial neglect (ASN) can be offline or online. Offline ASN is general unawareness of having experienced spatial deficits. Online ASN is an awareness deficit consisting of underestimating spatial difficulties that are likely to occur in an upcoming task (anticipatory ASN) or have just occurred during the task (emergent ASN). We explored the relationships among spatial neglect, offline ASN, anticipatory ASN, and emergent ASN. Research Method/Design: Forty-four survivors of stroke answered questionnaires assessing offline and online self-awareness of spatial problems. The online questionnaire was administered immediately before and after each of 4 tests for spatial neglect: shape cancellation, address and sentence copying, telephone dialing, and indented paragraph reading. Participants were certain they had difficulties in daily spatial tasks (offline awareness), in the task they were about to perform (anticipatory awareness), and in the task they had just performed (emergent awareness). Nonetheless, they consistently overestimated their spatial abilities, indicating ASN. Offline and online ASN appeared independent. Online ASN improved after task execution. Neglect severity was not positively correlated with offline ASN. Greater neglect severity correlated with both greater anticipatory and emergent ASN. Regardless of neglect severity, we found task-specific differences in emergent ASN but not in anticipatory ASN. Individuals with spatial neglect acknowledge their spatial difficulty (certainty of error occurrence) but may not necessarily recognize the extent of their difficulty (accuracy of error estimation). Our findings suggest that offline and online ASN are independent. A potential implication of the study is that familiar and challenging tasks may facilitate the emergence of self-awareness. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pachuilo, Andrew R; Ragan, Eric; Goodall, John R

    Visualization tools can take advantage of multiple coordinated views to support analysis of large, multidimensional data sets. Effective design of such views and layouts can be challenging, but understanding users' analysis strategies can inform design improvements. We outline an approach for intelligent design configuration of visualization tools with multiple coordinated views, and we discuss a proposed software framework to support the approach. The proposed software framework could capture and learn from user interaction data to automate new compositions of views and widgets. Such a framework could reduce the time needed for meta-analysis of visualization use and lead to more effective visualization design.

  1. Software For Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steve E.

    1992-01-01

    The SPLICER computer program is a genetic-algorithm software tool used to solve search and optimization problems. It provides the underlying framework and structure for building genetic-algorithm application programs. Written in Think C.
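
    A generic skeleton of the kind of search a tool like SPLICER provides a framework for (this is a textbook sketch in Python, not SPLICER's API): maximise a fitness function over fixed-length bit strings.

```python
# Minimal genetic algorithm: truncation selection, one-point crossover,
# bit-flip mutation. Parameters are illustrative defaults.
import random

def evolve(f, n_bits=16, pop_size=30, generations=50, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=f, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=f)

# Example: maximise the number of ones in the bit string.
best = evolve(sum)
print(best, sum(best))
```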

  2. Prospective comparison of speckle tracking longitudinal bidimensional strain between two vendors.

    PubMed

    Castel, Anne-Laure; Szymanski, Catherine; Delelis, François; Levy, Franck; Menet, Aymeric; Mailliet, Amandine; Marotte, Nathalie; Graux, Pierre; Tribouilloy, Christophe; Maréchaux, Sylvestre

    2014-02-01

    Speckle tracking is a relatively new, largely angle-independent technique used for the evaluation of myocardial longitudinal strain (LS). However, significant differences have been reported between LS values obtained by speckle tracking with the first generation of software products. To compare LS values obtained with the most recently released equipment from two manufacturers. Systematic scanning with head-to-head acquisition with no modification of the patient's position was performed in 64 patients with equipment from two different manufacturers, with subsequent off-line post-processing for speckle tracking LS assessment (Philips QLAB 9.0 and General Electric [GE] EchoPAC BT12). The interobserver variability of each software product was tested on a randomly selected set of 20 echocardiograms from the study population. GE and Philips interobserver coefficients of variation (CVs) for global LS (GLS) were 6.63% and 5.87%, respectively, indicating good reproducibility. Reproducibility was very variable for regional and segmental LS values, with CVs ranging from 7.58% to 49.21% with both software products. The concordance correlation coefficient (CCC) between GLS values was high at 0.95, indicating substantial agreement between the two methods. While good agreement was observed between midwall and apical regional strains with the two software products, basal regional strains were poorly correlated. The agreement between the two software products at a segmental level was very variable; the highest correlation was obtained for the apical cap (CCC 0.90) and the poorest for basal segments (CCC range 0.31-0.56). A high level of agreement and reproducibility for global but not for basal regional or segmental LS was found with two vendor-dependent software products. This finding may help to reinforce clinical acceptance of GLS in everyday clinical practice. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
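
    As a reference for the two statistics quoted above, a small self-contained sketch (all strain values invented) computing a coefficient of variation (CV) for reproducibility and Lin's concordance correlation coefficient (CCC) for inter-vendor agreement:

```python
import statistics as st

def coefficient_of_variation(values):
    """CV (%) = standard deviation / |mean|, for repeated measurements."""
    return 100.0 * st.stdev(values) / abs(st.mean(values))

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters/vendors."""
    mx, my = st.mean(x), st.mean(y)
    vx = sum((a - mx) ** 2 for a in x) / len(x)
    vy = sum((b - my) ** 2 for b in y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return 2 * cov / (vx + vy + (mx - my) ** 2)

gls_vendor1 = [-18.2, -20.1, -16.5, -19.3, -21.0]   # % strain, made up
gls_vendor2 = [-17.9, -19.8, -16.9, -19.0, -20.6]
repeats = [-18.2, -18.6, -18.0]                     # one observer, repeated

print(f"CV  = {coefficient_of_variation(repeats):.1f}%")
print(f"CCC = {lin_ccc(gls_vendor1, gls_vendor2):.3f}")
```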

  3. Upgrading Custom Simulink Library Components for Use in Newer Versions of Matlab

    NASA Technical Reports Server (NTRS)

    Stewart, Camiren L.

    2014-01-01

    The Spaceport Command and Control System (SCCS) at Kennedy Space Center (KSC) is a control system for monitoring and launching manned launch vehicles. Simulations of ground support equipment (GSE) and launch vehicle systems are required throughout the life cycle of SCCS to test software, hardware, and procedures and to train the launch team. The simulations of the GSE at the launch site, in conjunction with off-line processing locations, are developed using Simulink, a piece of Commercial Off-The-Shelf (COTS) software. The simulations are then converted into code and run in a simulation engine called Trick, a Government Off-The-Shelf (GOTS) piece of software developed by NASA. In the world of hardware and software, it is not uncommon for products to be upgraded and patched, or eventually to become obsolete. In the case of SCCS simulation software, MathWorks has released a number of stable versions of Simulink since the software was deployed on the Development Work Stations in the Linux environment (DWLs). The upgraded versions of Simulink have introduced a number of new tools and resources that, if utilized fully and correctly, will save time and resources during the overall development of the GSE simulation and its correlating documentation. Unfortunately, simply importing the already-built simulations into the new Matlab environment will not suffice, as it may produce results that differ from those of the version currently in use. Thus, an upgrade execution plan was developed and executed to fully upgrade the simulation environment to one of the latest versions of Matlab.

  4. Integrating software architectures for distributed simulations and simulation analysis communities.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsby, Michael E.; Fellig, Daniel; Linebarger, John Michael

    2005-10-01

    The one-year Software Architecture LDRD (No. 79819) was a cross-site effort between Sandia California and Sandia New Mexico. The purpose of this research was to further develop and demonstrate integrating software architecture frameworks for distributed simulation and distributed collaboration in the homeland security domain. The integrated frameworks were initially developed through the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC), sited at SNL/CA, and the National Infrastructure Simulation & Analysis Center (NISAC), sited at SNL/NM. The primary deliverable was a demonstration of both a federation of distributed simulations and a federation of distributed collaborative simulation analysis communities in the context of the same integrated scenario, which was the release of smallpox in San Diego, California. To our knowledge this was the first time such a combination of federations under a single scenario has ever been demonstrated. A secondary deliverable was the creation of the standalone GroupMeld™ collaboration client, which uses the GroupMeld™ synchronous collaboration framework. In addition, a small pilot experiment that used both integrating frameworks allowed a greater range of crisis management options to be performed and evaluated than would have been possible without the use of the frameworks.

  5. Analyzing Members' Motivations to Participate in Role-Playing and Self-Expression Based Virtual Communities

    NASA Astrophysics Data System (ADS)

    Lee, Young Eun; Saharia, Aditya

    With the rapid growth of computer-mediated communication technologies in the last two decades, various types of virtual communities have emerged. Some communities provide a role-playing arena, enabled by avatars, while others provide an arena for expressing and promoting detailed personal profiles to enhance members' offline social networks. Because of the different focus of these virtual communities, different factors motivate members to participate in them. In this study, we examine differences in members' motivations to participate in role-playing versus self-expression based virtual communities. To achieve this goal, we apply the Wang and Fesenmaier (2004) framework, which explains members' participation in terms of their functional, social, psychological, and hedonic needs. The primary contributions of this study are twofold: first, it demonstrates differences between role-playing and self-expression based communities; second, it provides a comprehensive framework describing members' motivation to participate in virtual communities.

  6. DEVELOP MULTI-STRESSOR, OPEN ARCHITECTURE MODELING FRAMEWORK FOR ECOLOGICAL EXPOSURE FROM SITE TO WATERSHED SCALE

    EPA Science Inventory

    A number of multimedia modeling frameworks are currently being developed. The Multimedia Integrated Modeling System (MIMS) is one of these frameworks. A framework should be seen as more of a multimedia modeling infrastructure than a single software system. This infrastructure do...

  7. UAF: a generic OPC unified architecture framework

    NASA Astrophysics Data System (ADS)

    Pessemier, Wim; Deconinck, Geert; Raskin, Gert; Saey, Philippe; Van Winckel, Hans

    2012-09-01

    As an emerging Service Oriented Architecture (SOA) specifically designed for industrial automation and process control, the OPC Unified Architecture specification should be regarded as an attractive candidate for controlling scientific instrumentation. Even though an industry-backed standard such as OPC UA can offer substantial added value to these projects, its inherent complexity poses an important obstacle for adopting the technology. Building OPC UA applications requires considerable effort, even when taking advantage of a COTS Software Development Kit (SDK). The OPC Unified Architecture Framework (UAF) attempts to reduce this burden by introducing an abstraction layer between the SDK and the application code in order to achieve a better separation of the technical and the functional concerns. True to its industrial origin, the primary requirement of the framework is to maintain interoperability by staying close to the standard specifications, and by expecting the minimum compliance from other OPC UA servers and clients. UAF can therefore be regarded as a software framework to quickly and comfortably develop and deploy OPC UA-based applications, while remaining compatible with third party OPC UA-compliant toolkits, servers (such as PLCs) and clients (such as SCADA software). In the first phase, as covered by this paper, only the client-side of UAF has been tackled in order to transparently handle discovery, session management, subscriptions, monitored items, etc. We describe the design principles and internal architecture of our open-source software project, the first results of the framework running at the Mercator Telescope, and we give a preview of the planned server-side implementation.
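
    To make the "considerable effort" concrete, this is roughly what a bare OPC UA client interaction looks like using the community python-opcua package rather than UAF (the endpoint URL and node id are placeholders); UAF's client side aims to hide exactly this kind of session management behind a higher-level layer.

```python
# Raw OPC UA client read using python-opcua; endpoint and node id are
# hypothetical. UAF-style frameworks wrap connect/read/disconnect for you.
from opcua import Client

client = Client("opc.tcp://localhost:4840")   # placeholder server endpoint
try:
    client.connect()                          # session management, by hand
    node = client.get_node("ns=2;i=1001")     # placeholder telescope node
    print("value:", node.get_value())
finally:
    client.disconnect()
```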

  8. The Double Meaning of Online Social Space: Three-Way Interactions Among Social Anxiety, Online Social Behavior, and Offline Social Behavior.

    PubMed

    Koo, Hoon Jung; Woo, Sungbum; Yang, Eunjoo; Kwon, Jung Hye

    2015-09-01

    The present study aimed to investigate how online and offline social behavior interact with each other ultimately to affect the well-being of socially anxious adolescents. Based on previous studies, it was assumed that there might be three-way interactive effects among online social behavior, offline social behavior, and social anxiety regarding the relationship with well-being. To measure social anxiety, online and offline social behavior, and mental well-being, self-report questionnaires such as the Korean-Social Avoidance and Distress Scale, Korean version of the Relational Maintenance Behavior Questionnaire, and Korean version of Mental Health Continuum Short Form were administered to 656 Korean adolescents. Hierarchical regression analysis revealed that the three-way interaction of online social behavior, offline social behavior, and social anxiety was indeed significant. First, online social behavior was associated with lower well-being of adolescents with higher social anxiety under conditions of low engagement in offline social behavior. In contrast, a higher level of online social behavior predicted greater well-being for individuals with high social anxiety under conditions of more engagement in offline social behavior. Second, online social behavior was not significantly related to well-being in youths with low social anxiety under conditions of both high and low engagement in offline social behavior. Implications and limitations of this study were discussed.

  9. Automating Software Design Metrics.

    DTIC Science & Technology

    1984-02-01

    INTRODUCTION / HISTORICAL PERSPECTIVE: High quality software is of interest to both the software engineering community and its users. ... contributions of many other software engineering efforts, most notably [MCC 77] and [Boe 83b], which have defined and refined a framework for quantifying ... AUTOMATION OF DESIGN METRICS: Software metrics can be useful within the context of an integrated software engineering environment. The purpose of this

  10. MOPEX: a software package for astronomical image processing and visualization

    NASA Astrophysics Data System (ADS)

    Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley

    2006-06-01

    We present MOPEX - a software package for astronomical image processing and display. The package is a combination of command-line driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, and point source extraction, as well as a number of minor image processing tasks. The combination of the image processing and display capabilities allows for a much more intuitive and efficient way of performing image processing. The GUI allows the control over image processing and display to be closely intertwined. Parameter setting, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing prompt feedback on the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Even though it was originally designed for the Spitzer Space Telescope mission, many of its functionalities are of general usefulness and can be used for working with existing astronomical data and for new missions. The software used in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part of the tool Spot. The visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years. The image processing capabilities have also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities. The software package has been developed by a small group of software developers and scientists at the Spitzer Science Center. It is available for distribution at the Spitzer Science Center web page.

  11. Progressive sample processing of band selection for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Liu, Keng-Hao; Chien, Hung-Chang; Chen, Shih-Yu

    2017-10-01

    Band selection (BS) is one of the most important topics in hyperspectral image (HSI) processing. The objective of BS is to find a set of representative bands that can represent the whole image with low inter-band redundancy. Many types of BS algorithms have been proposed in the past. However, most of them can only be carried out in an off-line manner, meaning they can only be implemented on pre-collected data. Such off-line methods are of limited use for time-critical applications, particularly disaster prevention and target detection. To tackle this issue, a new concept, called progressive sample processing (PSP), was proposed recently. PSP is an "on-line" framework in which an algorithm can process the currently collected data during data transmission under the band-interleaved-by-sample/pixel (BIS/BIP) protocol. This paper proposes an online BS method that integrates sparse-based BS into the PSP framework, called PSP-BS. In PSP-BS, BS is carried out by updating the BS result recursively pixel by pixel, in the same way that a Kalman filter updates data information in a recursive fashion. The sparse regression is solved by the orthogonal matching pursuit (OMP) algorithm, and the recursive equations of PSP-BS are derived using matrix decomposition. Experiments conducted on a real hyperspectral image show that PSP-BS can progressively output the BS status with very low computing time. Convergence of the BS results during transmission can be achieved quickly by using a rearranged pixel transmission sequence. This significant advantage allows BS to be implemented in a real-time manner as the HSI data are transmitted pixel by pixel.
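
    The recursive PSP-BS equations are not reproduced in the abstract. As a rough illustration of the underlying sparse-regression step only, the sketch below selects bands offline with scikit-learn's orthogonal matching pursuit, treating each band image as a dictionary atom and the mean band image as the reconstruction target; this toy formulation and the random data are assumptions, not the paper's derivation.

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(0)
      X = rng.random((500, 60))          # toy HSI cube flattened to pixels x bands

      y = X.mean(axis=1)                 # target: the average band image
      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5)
      omp.fit(X, y)                      # dictionary atoms: individual band images
      selected = np.flatnonzero(omp.coef_)
      print("selected bands:", selected)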

  12. PARTONS: PARtonic Tomography Of Nucleon Software. A computing framework for the phenomenology of Generalized Parton Distributions

    NASA Astrophysics Data System (ADS)

    Berthou, B.; Binosi, D.; Chouika, N.; Colaneri, L.; Guidal, M.; Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.; Sabatié, F.; Sznajder, P.; Wagner, J.

    2018-06-01

    We describe the architecture and functionalities of a C++ software framework, coined PARTONS, dedicated to the phenomenology of Generalized Parton Distributions. These distributions describe the three-dimensional structure of hadrons in terms of quarks and gluons, and can be accessed in deeply exclusive lepto- or photo-production of mesons or photons. PARTONS provides a necessary bridge between models of Generalized Parton Distributions and experimental data collected in various exclusive production channels. We outline the specification of the PARTONS framework in terms of practical needs, physical content and numerical capacity. This framework will be useful for physicists - theorists or experimentalists - not only to develop new models, but also to interpret existing measurements and even design new experiments.

  13. Gender differences in online and offline self-disclosure in pre-adolescence and adolescence.

    PubMed

    Valkenburg, Patti M; Sumter, Sindy R; Peter, Jochen

    2011-06-01

    Although there is developmental research on the prevalence of offline self-disclosure in pre-adolescence and adolescence, it is still unknown (a) how boys' and girls' online self-disclosure develops in this period and (b) how online and offline self-disclosure interact with each other. We formulated three hypotheses to explain the possible interaction between online and offline self-disclosure: the displacement, the rich-get-richer, and the rehearsal hypothesis. We surveyed 690 pre-adolescents and adolescents (10-17 years) at three time points with half-year intervals in between. We found significant gender differences in the developmental trajectories of self-disclosure. For girls, both online and offline self-disclosure increased sharply during pre- (10-11 years) and early adolescence (12-13 years), and then stabilized in middle and late adolescence. For boys, the same trajectory was found although the increase in self-disclosure started 2 years later. We found most support for the rehearsal hypothesis: Both boys and girls seemed to use online self-disclosure to rehearse offline self-disclosure skills. This particularly held for boys in early adolescence who typically have difficulty disclosing themselves offline.

  14. FRAMES-2.0 Software System: Frames 2.0 Pest Integration (F2PEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castleton, Karl J.; Meyer, Philip D.

    2009-06-17

    The implementation of the FRAMES 2.0 F2PEST module is described, including requirements, design, and specifications of the software. This module integrates the PEST parameter estimation software within the FRAMES 2.0 environmental modeling framework. A test case is presented.

  15. Framework for Risk Analysis in Multimedia Environmental Systems: Modeling Individual Steps of a Risk Assessment Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Anuj; Castleton, Karl J.; Hoopes, Bonnie L.

    2004-06-01

    The study of the release and effects of chemicals in the environment and their associated risks to humans is central to public and private decision making. FRAMES 1.X, the Framework for Risk Analysis in Multimedia Environmental Systems, is a systems modeling software platform developed by Pacific Northwest National Laboratory (PNNL) that helps scientists study the release and effects of chemicals on a source-to-outcome basis and create environmental models for similar risk assessment and management problems. The unique aspect of FRAMES is the ability to dynamically introduce software modules representing individual components of a risk assessment (e.g., source release of contaminants; fate and transport in various environmental media; exposure) within a software framework, manipulate their attributes, and run simulations to obtain results. This paper outlines the fundamental constituents of FRAMES 2.X, an enhanced version of FRAMES 1.X that greatly improves the ability of module developers to "plug" their self-developed software modules into the system. The basic design, the underlying principles, and a discussion of the guidelines for module developers are presented.

  16. Integrated Systems Health Management (ISHM) Toolkit

    NASA Technical Reports Server (NTRS)

    Venkatesh, Meera; Kapadia, Ravi; Walker, Mark; Wilkins, Kim

    2013-01-01

    A framework of software components has been implemented to facilitate the development of ISHM systems according to a methodology based on Reliability Centered Maintenance (RCM). This framework is collectively referred to as the Toolkit and was developed using General Atomics' Health MAP (TM) technology. The toolkit is intended to provide assistance to software developers of mission-critical system health monitoring applications in the specification, implementation, configuration, and deployment of such applications. In addition to software tools designed to facilitate these objectives, the toolkit also provides direction to software developers in accordance with an ISHM specification and development methodology. The development tools are based on an RCM approach for the development of ISHM systems. This approach focuses on defining, detecting, and predicting the likelihood of system functional failures and their undesirable consequences.

  17. A Generic Software Architecture For Prognostics

    NASA Technical Reports Server (NTRS)

    Teubert, Christopher; Daigle, Matthew J.; Sankararaman, Shankar; Goebel, Kai; Watkins, Jason

    2017-01-01

    Prognostics is a systems engineering discipline focused on predicting end-of-life of components and systems. As a relatively new and emerging technology, there are few fielded implementations of prognostics, due in part to practitioners perceiving a large hurdle in developing the models, algorithms, architecture, and integration pieces. As a result, no open software frameworks for applying prognostics currently exist. This paper introduces the Generic Software Architecture for Prognostics (GSAP), an open-source, cross-platform, object-oriented software framework and support library for creating prognostics applications. GSAP was designed to make prognostics more accessible and enable faster adoption and implementation by industry, by reducing the effort and investment required to develop, test, and deploy prognostics. This paper describes the requirements, design, and testing of GSAP. Additionally, a detailed case study involving battery prognostics demonstrates its use.

  18. Basic test framework for the evaluation of text line segmentation and text parameter extraction.

    PubMed

    Brodić, Darko; Milivojević, Dragan R; Milivojević, Zoran

    2010-01-01

    Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems. It is key because inaccurately segmented text lines will lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting. Hence, text line segmentation is a leading challenge in handwritten document image processing. Due to inconsistencies in the measurement and evaluation of text segmentation algorithm quality, a basic set of measurement methods is required. Currently, there is no commonly accepted one, and all algorithm evaluation is custom oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. This test framework consists of a few experiments primarily linked to text line segmentation, skew rate, and reference text line evaluation. Although they are mutually independent, the results obtained are strongly cross-linked. In the end, its suitability for different types of letters and languages, as well as its adaptability, are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms.

  19. Basic Test Framework for the Evaluation of Text Line Segmentation and Text Parameter Extraction

    PubMed Central

    Brodić, Darko; Milivojević, Dragan R.; Milivojević, Zoran

    2010-01-01

    Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems. It is key because inaccurately segmented text lines will lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting. Hence, text line segmentation is a leading challenge in handwritten document image processing. Due to inconsistencies in the measurement and evaluation of text segmentation algorithm quality, a basic set of measurement methods is required. Currently, there is no commonly accepted one, and all algorithm evaluation is custom oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. This test framework consists of a few experiments primarily linked to text line segmentation, skew rate, and reference text line evaluation. Although they are mutually independent, the results obtained are strongly cross-linked. In the end, its suitability for different types of letters and languages, as well as its adaptability, are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms. PMID:22399932

  20. Conformal Prediction Based on K-Nearest Neighbors for Discrimination of Ginsengs by a Home-Made Electronic Nose

    PubMed Central

    Sun, Xiyang; Miao, Jiacheng; Wang, You; Luo, Zhiyuan; Li, Guang

    2017-01-01

    An estimate of the reliability of predictions is essential in electronic nose applications, but this issue has not received enough attention. An algorithm framework called conformal prediction is introduced in this work for discriminating different kinds of ginsengs with a home-made electronic nose instrument. A nonconformity measure based on k-nearest neighbors (KNN) is implemented as the underlying algorithm of conformal prediction. In offline mode, the conformal predictor achieves a classification rate of 84.44% based on 1NN and 80.63% based on 3NN, which is better than that of simple KNN. In addition, it provides an estimate of reliability for each prediction. In online mode, the validity of predictions is guaranteed, which means that the error rate of region predictions never exceeds the significance level set by a user. The potential of this framework for detecting borderline examples and outliers in E-nose applications is also investigated. The results show that conformal prediction is a promising framework for enabling electronic nose applications to make predictions with reliability and validity. PMID:28805721
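
    A minimal version of the KNN-based conformal machinery described here fits in a few lines. The sketch below uses the common 1NN nonconformity score (nearest same-label distance divided by nearest other-label distance) and the standard conformal p-value construction; the Gaussian toy data stands in for the E-nose measurements and is an assumption.

      import numpy as np

      def nonconformity(x, X, y, label):
          # 1NN score: distance to nearest same-label point divided by
          # distance to nearest other-label point (larger = stranger)
          d = np.linalg.norm(X - x, axis=1)
          return d[y == label].min() / (d[y != label].min() + 1e-12)

      def region(x_new, X_cal, y_cal, labels, eps=0.2):
          # keep every label whose conformal p-value exceeds eps; the
          # long-run error rate of such regions is bounded by eps (validity)
          out = []
          for lab in labels:
              a_new = nonconformity(x_new, X_cal, y_cal, lab)
              a_cal = np.array([nonconformity(X_cal[i], np.delete(X_cal, i, 0),
                                              np.delete(y_cal, i), y_cal[i])
                                for i in range(len(y_cal))])
              p = (np.sum(a_cal >= a_new) + 1) / (len(a_cal) + 1)
              if p > eps:
                  out.append(lab)
          return out

      rng = np.random.default_rng(0)
      X_cal = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(3, 1, (20, 4))])
      y_cal = np.array([0] * 20 + [1] * 20)
      print(region(rng.normal(0, 1, 4), X_cal, y_cal, labels=[0, 1]))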

  1. A structural model decomposition framework for systems health management

    NASA Astrophysics Data System (ADS)

    Roychoudhury, I.; Daigle, M.; Bregon, A.; Pulido, B.

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  2. A Structural Model Decomposition Framework for Systems Health Management

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino

    2013-01-01

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  3. Interrater Reliability and Diagnostic Performance of Subjective Evaluation of Sublingual Microcirculation Images by Physicians and Nurses: A Multicenter Observational Study.

    PubMed

    Lima, Alexandre; López, Alejandra; van Genderen, Michel E; Hurtado, Francisco Javier; Angulo, Martin; Grignola, Juan C; Shono, Atsuko; van Bommel, Jasper

    2015-09-01

    This was a cross-sectional multicenter study to investigate the ability of physicians and nurses from three different countries to subjectively evaluate sublingual microcirculation images and thereby discriminate normal from abnormal sublingual microcirculation based on flow and density abnormalities. Forty-five physicians and 61 nurses (mean age, 36 ± 10 years; 44 males) from three different centers in The Netherlands (n = 61), Uruguay (n = 12), and Japan (n = 33) were asked to subjectively evaluate a sample of 15 microcirculation videos randomly selected from an experimental model of endotoxic shock in pigs. All videos were first analyzed offline using the A.V.A. software by an independent, experienced investigator and were categorized as good, bad, or very bad microcirculation based on the microvascular flow index, perfused capillary density, and proportion of perfused capillaries. Then, the videos were randomly assigned to the examiners, who were instructed to subjectively categorize each image as good, bad, or very bad. An interrater analysis was performed, and sensitivity and specificity tests were calculated to evaluate the proportion of A.V.A. score abnormalities that the examiners correctly identified. The κ statistics indicated moderate agreement in the evaluation of microcirculation abnormalities using three categories, i.e., good, bad, or very bad (κ = 0.48), and substantial agreement using two categories, i.e., normal (good) and abnormal (bad or very bad) (κ = 0.66). There was no significant difference between the three-category and two-category κ statistics. We found that the examiners' subjective evaluations had good diagnostic performance and were highly sensitive (84%; 95% confidence interval, 81%-86%) and specific (87%; 95% confidence interval, 84%-90%) for sublingual microcirculatory abnormalities as assessed using the A.V.A. software. The subjective evaluations of sublingual microcirculation by physicians and nurses agreed well with a conventional offline analysis and were highly sensitive and specific for sublingual microcirculatory abnormalities.

  4. Development of Two Analytical Methods Based on Reverse Phase Chromatographic and SDS-PAGE Gel for Assessment of Deglycosylation Yield in N-Glycan Mapping.

    PubMed

    Eckard, Anahita D; Dupont, David R; Young, Johnie K

    2018-01-01

    N-linked glycosylation is one of the critical quality attributes (CQA) for biotherapeutics, impacting the safety and activity of the drug product. Changes in the pattern and level of glycosylation can significantly alter the intrinsic properties of the product and, therefore, have to be monitored throughout its lifecycle. A fast, precise, and unbiased N-glycan mapping assay is therefore desired. To ensure these qualities, using analytical methods that evaluate the completeness of deglycosylation is necessary. For quantification of deglycosylation yield, methods such as reduced liquid chromatography-mass spectrometry (LC-MS) and reduced capillary gel electrophoresis (CGE) have been commonly used. Here we present the development of two additional methods to evaluate deglycosylation yield: one based on LC using a reverse phase (RP) column and one based on reduced sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE gel) with offline software (GelAnalyzer). With the advent of rapid deglycosylation workflows in the market for N-glycan profiling replacing overnight incubation, we have aimed to quantify the level of deglycosylation in a selected rapid deglycosylation workflow. Our results have shown well-resolved peaks of glycosylated and deglycosylated protein species with the RP-LC method, allowing simple quantification of the deglycosylation yield of a protein with high confidence. Additionally, a good correlation, ≥0.94, was found between deglycosylation yields estimated by the RP-LC method and those of the reduced SDS-PAGE gel method with offline software. Evaluation of the rapid deglycosylation protocol from the GlycanAssure™ HyPerformance assay kit performed on fetuin and RNase B has shown complete deglycosylation within the recommended protocol time when evaluated with these techniques. Using this kit, N-glycans from the NIST mAb were prepared in 1.4 hr and analyzed by hydrophilic interaction chromatography (HILIC) ultrahigh performance LC (UHPLC) equipped with a fluorescence detector (FLD). Thirty-seven peaks were resolved with good resolution. Excellent sample preparation repeatability was found, with a relative standard deviation (RSD) of <5% for peaks with >0.5% relative area.

  5. Composable Framework Support for Software-FMEA Through Model Execution

    NASA Astrophysics Data System (ADS)

    Kocsis, Imre; Patricia, Andras; Brancati, Francesco; Rossi, Francesco

    2016-08-01

    Performing Failure Modes and Effect Analysis (FMEA) during software architecture design is becoming a basic requirement in an increasing number of domains; however, due to the lack of standardized early design phase model execution, classic SW-FMEA approaches carry significant risks and are human effort-intensive even in processes that use Model-Driven Engineering. Recently, modelling languages with standardized executable semantics have emerged. Building on earlier results, this paper describes framework support for generating executable error propagation models from such models during software architecture design. The approach carries the promise of increased precision, decreased risk and more automated execution for SW-FMEA during dependability-critical system development.

  6. A Comparison of the Use of Bony Anatomy and Internal Markers for Offline Verification and an Evaluation of the Potential Benefit of Online and Offline Verification Protocols for Prostate Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McNair, Helen A.; Hansen, Vibeke N.; Parker, Christopher

    2008-05-01

    Purpose: To evaluate the utility of intraprostatic markers in the treatment verification of prostate cancer radiotherapy. Specific aims were: to compare the effectiveness of offline correction protocols, either using gold markers or bony anatomy; to estimate the potential benefit of online correction protocols using gold markers; and to determine the presence and effect of intrafraction motion. Methods and Materials: Thirty patients with three gold markers inserted had pretreatment and posttreatment images acquired and were treated using an offline correction protocol and gold markers. Retrospectively, an offline protocol was applied using bony anatomy and an online protocol using gold markers. Results: The systematic errors were reduced from 1.3, 1.9, and 2.5 mm to 1.1, 1.1, and 1.5 mm in the right-left (RL), superoinferior (SI), and anteroposterior (AP) directions, respectively, using the offline correction protocol and gold markers instead of bony anatomy. The subsequent decrease in margins was 1.7, 3.3, and 4 mm in the RL, SI, and AP directions, respectively. An offline correction protocol combined with an online correction protocol in the first four fractions reduced random errors further to 0.9, 1.1, and 1.0 mm in the RL, SI, and AP directions, respectively. A daily online protocol reduced all errors to <1 mm. Intrafraction motion had a greater impact on the effectiveness of the online protocol than the offline protocols. Conclusions: An offline protocol using gold markers is effective in reducing the systematic error. The value of online protocols is reduced by intrafraction motion.
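
    The margin arithmetic behind numbers like these is compact. The sketch below computes the population systematic error (SD of per-patient means) and random error (RMS of per-patient SDs) for one axis, and combines them with the widely used van Herk recipe M = 2.5Σ + 0.7σ; the abstract does not state which margin formula was actually applied, so that recipe and the toy numbers are assumptions.

      import numpy as np

      def setup_margin(errors_mm):
          # errors_mm: patients x fractions setup errors (mm) along one axis.
          # Sigma = SD of per-patient means (systematic error);
          # sigma = RMS of per-patient SDs (random error).
          per_patient_mean = errors_mm.mean(axis=1)
          per_patient_sd = errors_mm.std(axis=1, ddof=1)
          Sigma = per_patient_mean.std(ddof=1)
          sigma = np.sqrt(np.mean(per_patient_sd ** 2))
          return 2.5 * Sigma + 0.7 * sigma        # van Herk margin (assumed recipe)

      errors = np.array([[1.2, 0.8, 1.5, 0.9],    # toy measurements, mm
                         [-0.5, 0.1, -0.2, 0.3],
                         [2.0, 1.7, 2.4, 1.9]])
      print(round(setup_margin(errors), 2), "mm")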

  7. Layout Study and Application of Mobile App Recommendation Approach Based On Spark Streaming Framework

    NASA Astrophysics Data System (ADS)

    Wang, H. T.; Chen, T. T.; Yan, C.; Pan, H.

    2018-05-01

    For the domain of mobile App recommendation, an item-based collaborative filtering algorithm is combined with a weighted Slope One algorithm to address shortcomings of traditional collaborative filtering, such as cold start and data-matrix sparseness. The recommendation algorithm is parallelized on the Spark platform, and the Spark Streaming real-time computing framework is introduced so that App recommendations can be produced in real time.
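
    The weighted Slope One predictor at the core of this approach is simple to state. Below is a minimal in-memory sketch (dictionary-of-dictionaries ratings, no Spark parallelism or streaming), where the weight is the number of users who co-rated each item pair; the rating data is invented for illustration.

      def weighted_slope_one(ratings, user, target):
          # ratings: {user: {item: rating}}. Predict `user`'s rating of
          # `target` from average rating deviations to co-rated items,
          # weighted by how many users co-rated each pair.
          num = den = 0.0
          for item, r in ratings[user].items():
              if item == target:
                  continue
              devs = [u[target] - u[item] for u in ratings.values()
                      if target in u and item in u]
              if devs:
                  num += (r + sum(devs) / len(devs)) * len(devs)
                  den += len(devs)
          return num / den if den else None

      ratings = {"a": {"app1": 5, "app2": 3, "app3": 2},
                 "b": {"app1": 3, "app2": 4},
                 "c": {"app1": 2, "app3": 5}}
      print(weighted_slope_one(ratings, "b", "app3"))   # -> 3.0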

  8. A loosely coupled framework for terminology controlled distributed EHR search for patient cohort identification in clinical research.

    PubMed

    Zhao, Lei; Lim Choi Keung, Sarah N; Taweel, Adel; Tyler, Edward; Ogunsina, Ire; Rossiter, James; Delaney, Brendan C; Peterson, Kevin A; Hobbs, F D Richard; Arvanitis, Theodoros N

    2012-01-01

    Heterogeneous data models and coding schemes for electronic health records present challenges for automated search across distributed data sources. This paper describes a loosely coupled software framework based on the terminology controlled approach to enable the interoperation between the search interface and heterogeneous data sources. Software components interoperate via common terminology service and abstract criteria model so as to promote component reuse and incremental system evolution.

  9. Integrating Visualization Applications, such as ParaView, into HEP Software Frameworks for In-situ Event Displays

    NASA Astrophysics Data System (ADS)

    Lyon, A. L.; Kowalkowski, J. B.; Jones, C. D.

    2017-10-01

    ParaView is a high performance visualization application not widely used in High Energy Physics (HEP). It is a long standing open source project led by Kitware and involves several Department of Energy (DOE) and Department of Defense (DOD) laboratories. Furthermore, it has been adopted by many DOE supercomputing centers and other sites. ParaView is unique in speed and efficiency by using state-of-the-art techniques developed by the academic visualization community that are often not found in applications written by the HEP community. In-situ visualization of events, where event details are visualized during processing/analysis, is a common task for experiment software frameworks. Kitware supplies Catalyst, a library that enables scientific software to serve visualization objects to client ParaView viewers yielding a real-time event display. Connecting ParaView to the Fermilab art framework will be described and the capabilities it brings discussed.

  10. (Quickly) Testing the Tester via Path Coverage

    NASA Technical Reports Server (NTRS)

    Groce, Alex

    2009-01-01

    The configuration complexity and code size of an automated testing framework may grow to a point that the tester itself becomes a significant software artifact, prone to poor configuration and implementation errors. Unfortunately, testing the tester by using old versions of the software under test (SUT) may be impractical or impossible: test framework changes may have been motivated by interface changes in the tested system, or fault detection may become too expensive in terms of computing time to justify running until errors are detected on older versions of the software. We propose the use of path coverage measures as a "quick and dirty" method for detecting many faults in complex test frameworks. We also note the possibility of using techniques developed to diversify state-space searches in model checking to diversify test focus, and an associated classification of tester changes into focus-changing and non-focus-changing modifications.

  11. The SeaHorn Verification Framework

    NASA Technical Reports Server (NTRS)

    Gurfinkel, Arie; Kahsai, Temesghen; Komuravelli, Anvesh; Navas, Jorge A.

    2015-01-01

    In this paper, we present SeaHorn, a software verification framework. The key distinguishing feature of SeaHorn is its modular design that separates the concerns of the syntax of the programming language, its operational semantics, and the verification semantics. SeaHorn encompasses several novelties: it (a) encodes verification conditions using an efficient yet precise inter-procedural technique, (b) provides flexibility in the verification semantics to allow different levels of precision, (c) leverages the state-of-the-art in software model checking and abstract interpretation for verification, and (d) uses Horn-clauses as an intermediate language to represent verification conditions which simplifies interfacing with multiple verification tools based on Horn-clauses. SeaHorn provides users with a powerful verification tool and researchers with an extensible and customizable framework for experimenting with new software verification techniques. The effectiveness and scalability of SeaHorn are demonstrated by an extensive experimental evaluation using benchmarks from SV-COMP 2015 and real avionics code.

  12. A streamlined Python framework for AT-TPC data analysis

    NASA Astrophysics Data System (ADS)

    Taylor, J. Z.; Bradt, J.; Bazin, D.; Kuchera, M. P.

    2017-09-01

    User-friendly data analysis software has been developed for the Active-Target Time Projection Chamber (AT-TPC) experiment at the National Superconducting Cyclotron Laboratory at Michigan State University. The AT-TPC, commissioned in 2014, is a gas-filled detector that acts as both the detector and target for high-efficiency detection of low-intensity, exotic nuclear reactions. The pytpc framework is a Python package for analyzing AT-TPC data. The package was developed for the analysis of 46Ar(p, p) data. The existing software was used to analyze data produced by the 40Ar(p, p) experiment that ran in August 2015. Usage of the package was documented in an analysis manual both to improve analysis steps and aid in the work of future AT-TPC users. Software features and analysis methods in the pytpc framework will be presented along with the 40Ar results.

  13. Implementation of Dynamic Extensible Adaptive Locally Exchangeable Measures (IDEALEM) v 0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sim, Alex; Lee, Dongeun; Wu, K. John

    2016-03-04

    Handling large streaming data is essential for various applications such as network traffic analysis, social networks, energy cost trends, and environment modeling. However, it is in general intractable to store, compute, search, and retrieve large streaming data. This software addresses a fundamental issue, which is to reduce the size of large streaming data while still obtaining accurate statistical analysis. As an example, when a high-speed network such as a 100 Gbps network is monitored, the collected measurement data rapidly grow so that polynomial time algorithms (e.g., Gaussian processes) become intractable. One possible solution to reduce the storage of vast amounts of measured data is to store a random sample, such as one out of every 1000 network packets. However, such static sampling methods (linear sampling) have drawbacks: (1) they are not scalable for high-rate streaming data, and (2) there is no guarantee of reflecting the underlying distribution. In this software, we implemented a dynamic sampling algorithm, based on recent work on relational dynamic Bayesian online locally exchangeable measures, that reduces the storage of data records at a large scale and still provides accurate analysis of large streaming data. The software can be used for both online and offline data records.
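
    IDEALEM's exchangeable-measures algorithm is not spelled out in the abstract. For contrast with the static baseline it improves on, the sketch below implements plain reservoir sampling, a fixed-size linear sampling scheme; it is offered only as the simple baseline, not as IDEALEM's dynamic, distribution-aware reduction.

      import random

      def reservoir_sample(stream, k, seed=0):
          # Keep a uniform random sample of k records from an unbounded
          # stream in O(k) memory (Vitter's Algorithm R).
          rng = random.Random(seed)
          sample = []
          for i, rec in enumerate(stream):
              if i < k:
                  sample.append(rec)
              else:
                  j = rng.randint(0, i)
                  if j < k:
                      sample[j] = rec
          return sample

      print(reservoir_sample(range(10**6), k=5))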

  14. Qualis-SIS: automated standard curve generation and quality assessment for multiplexed targeted quantitative proteomic experiments with labeled standards.

    PubMed

    Mohammed, Yassene; Percy, Andrew J; Chambers, Andrew G; Borchers, Christoph H

    2015-02-06

    Multiplexed targeted quantitative proteomics typically utilizes multiple reaction monitoring and allows the optimized quantification of a large number of proteins. One challenge, however, is the large amount of data that needs to be reviewed, analyzed, and interpreted. Different vendors provide software for their instruments, which determine the recorded responses of the heavy and endogenous peptides and perform the response-curve integration. Bringing multiplexed data together and generating standard curves is often an off-line step accomplished, for example, with spreadsheet software. This can be laborious, as it requires determining the concentration levels that meet the required accuracy and precision criteria in an iterative process. We present here a computer program, Qualis-SIS, that generates standard curves from multiplexed MRM experiments and determines analyte concentrations in biological samples. Multiple level-removal algorithms and acceptance criteria for concentration levels are implemented. When used to apply the standard curve to new samples, the software flags each measurement according to its quality. From the user's perspective, the data processing is instantaneous due to the reactivity paradigm used, and the user can download the results of the stepwise calculations for further processing, if necessary. This allows for more consistent data analysis and can dramatically accelerate the downstream data analysis.
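
    The iterative level-removal loop the paper automates looks roughly like the sketch below: fit the line, back-calculate each standard, drop the worst failing level, and refit. The 20% accuracy tolerance, the 1/x weighting, and the drop limit are illustrative assumptions, not Qualis-SIS's documented criteria, and the response data is invented.

      import numpy as np

      def fit_standard_curve(conc, resp, acc_tol=0.20, max_drop=2):
          # Fit resp = a*conc + b with 1/x weighting (a common choice for
          # bioanalytical curves), back-calculate every level, and drop
          # the worst level whose accuracy error exceeds acc_tol.
          conc, resp = np.asarray(conc, float), np.asarray(resp, float)
          keep = np.ones(len(conc), bool)
          for _ in range(max_drop + 1):
              a, b = np.polyfit(conc[keep], resp[keep], 1, w=1.0 / conc[keep])
              err = np.abs((resp - b) / a - conc) / conc   # back-calc accuracy
              bad = np.flatnonzero(keep & (err > acc_tol))
              if bad.size == 0:
                  break
              keep[bad[np.argmax(err[bad])]] = False
          return a, b, keep

      conc = [1, 2, 5, 10, 20, 50]
      resp = [1.0, 2.0, 5.1, 9.9, 20.4, 70.0]   # top level reads high
      a, b, keep = fit_standard_curve(conc, resp)
      print("slope %.3f, kept levels: %s" % (a, np.asarray(conc)[keep]))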

  15. BeeSpace Navigator: exploratory analysis of gene function using semantic indexing of biological literature.

    PubMed

    Sen Sarma, Moushumi; Arcoleo, David; Khetani, Radhika S; Chee, Brant; Ling, Xu; He, Xin; Jiang, Jing; Mei, Qiaozhu; Zhai, ChengXiang; Schatz, Bruce

    2011-07-01

    With the rapid decrease in cost of genome sequencing, the classification of gene function is becoming a primary problem. Such classification has been performed by human curators who read biological literature to extract evidence. BeeSpace Navigator is a prototype software for exploratory analysis of gene function using biological literature. The software supports an automatic analogue of the curator process to extract functions, with a simple interface intended for all biologists. Since extraction is done on selected collections that are semantically indexed into conceptual spaces, the curation can be task specific. Biological literature containing references to gene lists from expression experiments can be analyzed to extract concepts that are computational equivalents of a classification such as Gene Ontology, yielding discriminating concepts that differentiate gene mentions from other mentions. The functions of individual genes can be summarized from sentences in biological literature, to produce results resembling a model organism database entry that is automatically computed. Statistical frequency analysis based on literature phrase extraction generates offline semantic indexes to support these gene function services. The website with BeeSpace Navigator is free and open to all; there is no login requirement at www.beespace.illinois.edu for version 4. Materials from the 2010 BeeSpace Software Training Workshop are available at www.beespace.illinois.edu/bstwmaterials.php.

  16. ExpertEyes: open-source, high-definition eyetracking.

    PubMed

    Parada, Francisco J; Wyatte, Dean; Yu, Chen; Akavipat, Ruj; Emerick, Brandi; Busey, Thomas

    2015-03-01

    ExpertEyes is a low-cost, open-source package of hardware and software that is designed to provide portable high-definition eyetracking. The project involves several technological innovations, including portability, high-definition video recording, and multiplatform software support. It was designed for challenging recording environments, and all processing is done offline to allow for optimization of parameter estimation. The pupil and corneal reflection are estimated using a novel forward eye model that simultaneously fits both the pupil and the corneal reflection with full ellipses, addressing a common situation in which the corneal reflection sits at the edge of the pupil and therefore breaks the contour of the ellipse. The accuracy and precision of the system are comparable to or better than what is available in commercial eyetracking systems, with a typical accuracy of less than 0.4° and best accuracy below 0.3°, and with a typical precision (SD method) around 0.3° and best precision below 0.2°. Part of the success of the system comes from a high-resolution eye image. The high image quality results from uncasing common digital camcorders and recording directly to SD cards, which avoids the limitations of the analog NTSC format. The software is freely downloadable, and complete hardware plans are available, along with sources for custom parts.
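
    Fitting the pupil with a full ellipse, so the model can bridge the gap where the corneal reflection breaks the pupil contour, can be illustrated with generic OpenCV calls. The sketch below is not ExpertEyes' forward eye model; the threshold value and the single-contour assumption are simplifications for illustration.

      import cv2

      def fit_pupil_ellipse(eye_gray, thresh=40):
          # Threshold the dark pupil, take the largest contour, and fit
          # a full ellipse (cv2.fitEllipse needs at least 5 points).
          _, mask = cv2.threshold(eye_gray, thresh, 255, cv2.THRESH_BINARY_INV)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_NONE)   # OpenCV 4 API
          pupil = max(contours, key=cv2.contourArea)
          return cv2.fitEllipse(pupil)   # ((cx, cy), (major, minor), angle)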

  17. SACA: Software Assisted Call Analysis--an interactive tool supporting content exploration, online guidance and quality improvement of counseling dialogues.

    PubMed

    Trinkaus, Hans L; Gaisser, Andrea E

    2010-09-01

    Nearly 30,000 individual inquiries are answered annually by the telephone cancer information service (CIS, KID) of the German Cancer Research Center (DKFZ). The aim was to develop a tool for evaluating these calls, and to support the complete counseling process interactively. A novel software tool is introduced, based on a structure similar to a music score. Treating the interaction as a "duet", guided by the CIS counselor, the essential contents of the dialogue are extracted automatically. For this, "trained speech recognition" is applied to the (known) counselor's part, and "keyword spotting" is used on the (unknown) client's part to pick out specific items from the "word streams". The outcomes fill an abstract score representing the dialogue. Pilot tests performed on a prototype of SACA (Software Assisted Call Analysis) resulted in a basic proof of concept: Demographic data as well as information regarding the situation of the caller could be identified. The study encourages following up on the vision of an integrated SACA tool for supporting calls online and performing statistics on its knowledge database offline. Further research perspectives are to check SACA's potential in comparison with established interaction analysis systems like RIAS. Copyright (c) 2010 Elsevier Ireland Ltd. All rights reserved.

  18. Factors influencing HIV serodisclosure among men who have sex with men in the US: An examination of online versus offline meeting environments and risk behaviors

    PubMed Central

    Noor, Syed WB; Rampalli, Krystal; Rosser, B.R. Simon

    2014-01-01

    One key component in HIV prevention is serostatus disclosure. Until recently, many studies have focused on interpersonal factors and minimally considered meeting venues as they pertain to disclosure. Using data (N=3309) from an online survey conducted across 16 US metropolitan statistical areas, we examined whether HIV serodisclosure varies by online/offline meeting venues in both protected and unprotected anal intercourse encounters. Most of the sample (76.9%) reported meeting men for sex (last 90 days) both online and offline, versus 12.7% offline only and 10.4% online only. After controlling for other variables, we found that men who met partners both online and offline were 20-30% more likely to report disclosing their HIV status prior to sex than men who met their partners exclusively either offline or online. While previous studies have identified the Internet as a risk environment, our findings suggest bi-environmental partner seeking may also have beneficial effects. PMID:24743960

  19. Control software and electronics architecture design in the framework of the E-ELT instrumentation

    NASA Astrophysics Data System (ADS)

    Di Marcantonio, P.; Coretti, I.; Cirami, R.; Comari, M.; Santin, P.; Pucillo, M.

    2010-07-01

    During the last years the European Southern Observatory (ESO), in collaboration with other European astronomical institutes, has started several feasibility studies for the E-ELT (European-Extremely Large Telescope) instrumentation and post-focal adaptive optics. The goal is to create a flexible suite of instruments to deal with the wide variety of scientific questions astronomers would like to see solved in the coming decades. In this framework INAF-Astronomical Observatory of Trieste (INAF-AOTs) is currently responsible for carrying out the analysis and the preliminary study of the architecture of the electronics and control software of three instruments: CODEX (control software and electronics) and OPTIMOS-EVE/OPTIMOS-DIORAMAS (control software). To cope with the increased complexity and new requirements for stability, precision, real-time latency and communications among sub-systems imposed by these instruments, new solutions have been investigated by our group. In this paper we present the proposed software and electronics architecture based on a distributed common framework centered on the Component/Container model that uses OPC Unified Architecture as a standard layer to communicate with COTS components of three different vendors. We describe three working prototypes that have been set-up in our laboratory and discuss their performances, integration complexity and ease of deployment.

  20. Understanding the process of social network evolution: Online-offline integrated analysis of social tie formation

    PubMed Central

    Kwak, Doyeon

    2017-01-01

    It is important to consider the interweaving nature of online and offline social networks when we examine social network evolution. However, it is difficult to find any research that examines the process of social tie formation from an integrated perspective. In our study, we quantitatively measure offline interactions and examine the corresponding evolution of online social network in order to understand the significance of interrelationship between online and offline social factors in generating social ties. We analyze the radio signal strength indicator sensor data from a series of social events to understand offline interactions among the participants and measure the structural attributes of their existing online Facebook social networks. By monitoring the changes in their online social networks before and after offline interactions in a series of social events, we verify that the ability to develop an offline interaction into an online friendship is tied to the number of social connections that participants previously had, while the presence of shared mutual friends between a pair of participants disrupts potential new connections within the pre-designed offline social events. Thus, while our integrative approach enables us to confirm the theory of preferential attachment in the process of network formation, the common neighbor theory is not supported. Our dual-dimensional network analysis allows us to observe the actual process of social network evolution rather than to make predictions based on the assumption of self-organizing networks. PMID:28542367

  1. Understanding the process of social network evolution: Online-offline integrated analysis of social tie formation.

    PubMed

    Kwak, Doyeon; Kim, Wonjoon

    2017-01-01

    It is important to consider the interweaving nature of online and offline social networks when we examine social network evolution. However, it is difficult to find any research that examines the process of social tie formation from an integrated perspective. In our study, we quantitatively measure offline interactions and examine the corresponding evolution of online social network in order to understand the significance of interrelationship between online and offline social factors in generating social ties. We analyze the radio signal strength indicator sensor data from a series of social events to understand offline interactions among the participants and measure the structural attributes of their existing online Facebook social networks. By monitoring the changes in their online social networks before and after offline interactions in a series of social events, we verify that the ability to develop an offline interaction into an online friendship is tied to the number of social connections that participants previously had, while the presence of shared mutual friends between a pair of participants disrupts potential new connections within the pre-designed offline social events. Thus, while our integrative approach enables us to confirm the theory of preferential attachment in the process of network formation, the common neighbor theory is not supported. Our dual-dimensional network analysis allows us to observe the actual process of social network evolution rather than to make predictions based on the assumption of self-organizing networks.
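
    The study's two headline checks, preferential attachment (prior degree predicts new ties) and the common-neighbor effect (mutual friends predict new ties), reduce to correlating pairwise attributes with a binary tie-formation outcome. The sketch below uses a point-biserial correlation on invented toy values standing in for the RSSI-derived interaction pairs and Facebook snapshots; it illustrates the style of analysis only, not the paper's actual method.

      import numpy as np
      from scipy.stats import pointbiserialr

      rng = np.random.default_rng(2)
      degree_sum = rng.integers(10, 500, size=300)   # pair's combined prior degree
      mutuals = rng.integers(0, 20, size=300)        # shared mutual friends
      became_tie = (rng.random(300) < 0.2).astype(int)  # did an online tie form?

      # Preferential attachment predicts a positive correlation with prior
      # degree; the study reports that mutual friends suppressed new ties.
      print(pointbiserialr(became_tie, degree_sum))
      print(pointbiserialr(became_tie, mutuals))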

  2. Multidisciplinary Optimization Branch Experience Using iSIGHT Software

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Korte, J. J.; Dunn, H. J.; Salas, A. O.

    1999-01-01

    The Multidisciplinary Optimization (MDO) Branch at NASA Langley Research Center is investigating frameworks for supporting multidisciplinary analysis and optimization research. An optimization framework can improve the design process while reducing time and costs. A framework provides software and system services to integrate computational tasks and allows the researcher to concentrate more on the application and less on the programming details. A framework also provides a common working environment and a full range of optimization tools, and so increases the productivity of multidisciplinary research teams. Finally, a framework enables staff members to develop applications for use by disciplinary experts in other organizations. Since the release of version 4.0, the MDO Branch has gained experience with the iSIGHT framework developed by Engineous Software, Inc. This paper describes experiences with four aerospace applications: (1) reusable launch vehicle sizing, (2) aerospike nozzle design, (3) low-noise rotorcraft trajectories, and (4) acoustic liner design. All applications have been successfully tested using the iSIGHT framework, except for the aerospike nozzle problem, which is in progress. Brief overviews of each problem are provided. The problem descriptions include the number and type of disciplinary codes, as well as an estimate of the multidisciplinary analysis execution time. In addition, the optimization methods, objective functions, design variables, and design constraints are described for each problem. Discussions on the experience gained and lessons learned are provided for each problem. These discussions include the advantages and disadvantages of using the iSIGHT framework for each case as well as the ease of use of various advanced features. Potential areas of improvement are identified.

  3. Model Based Analysis and Test Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  4. Software for Data Analysis with Graphical Models

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.; Roy, H. Scott

    1994-01-01

    Probabilistic graphical models are being used widely in artificial intelligence and statistics, for instance, in diagnosis and expert systems, as a framework for representing and reasoning with probabilities and independencies. They come with corresponding algorithms for performing statistical inference. This offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper illustrates the framework with an example and then presents some basic techniques for the task: problem decomposition and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.

  5. The Clinical Utilisation of Respiratory Elastance Software (CURE Soft): a bedside software for real-time respiratory mechanics monitoring and mechanical ventilation management.

    PubMed

    Szlavecz, Akos; Chiew, Yeong Shiong; Redmond, Daniel; Beatson, Alex; Glassenbury, Daniel; Corbett, Simon; Major, Vincent; Pretty, Christopher; Shaw, Geoffrey M; Benyo, Balazs; Desaive, Thomas; Chase, J Geoffrey

    2014-09-30

    Real-time patient respiratory mechanics estimation can be used to guide mechanical ventilation settings, particularly positive end-expiratory pressure (PEEP). This work presents a software application, Clinical Utilisation of Respiratory Elastance (CURE Soft), that uses a time-varying respiratory elastance model to offer this ability and aid mechanical ventilation treatment. CURE Soft is a desktop application developed in JAVA. It has two modes of operation: (1) online, for real-time monitoring and decision support, and (2) offline, for user education, auditing, or reviewing patient care. CURE Soft has been tested in mechanically ventilated patients with respiratory failure. The clinical protocol, software testing and use of the data were approved by the New Zealand Southern Regional Ethics Committee. Using CURE Soft, patients' respiratory mechanics responses to treatment and clinical protocol were monitored. Results showed that a patient's respiratory elastance (stiffness) changed with the use of muscle relaxants and responded differently to ventilator settings. This information can be used to guide mechanical ventilation therapy and titrate the optimal ventilator PEEP. CURE Soft enables real-time calculation of model-based respiratory mechanics for mechanically ventilated patients. Results showed that the system is able to provide detailed, previously unavailable information on patient-specific respiratory mechanics and response to therapy in real time. The additional insight available to clinicians provides the potential for improved decision-making, and thus improved patient care and outcomes.
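
    The estimation behind such monitoring can be illustrated with the standard single-compartment lung model, Paw = E·V + R·Q + P0, fitted per breath by linear least squares. This is a minimal sketch of that textbook model with a synthetic breath, not CURE Soft's time-varying elastance formulation.

      import numpy as np

      def fit_respiratory_mechanics(paw, flow, vol):
          # Fit Paw = E*V + R*Q + P0 for one breath; E is the respiratory
          # elastance (stiffness) that elastance-based monitoring tracks.
          A = np.column_stack([vol, flow, np.ones_like(vol)])
          (E, R, P0), *_ = np.linalg.lstsq(A, paw, rcond=None)
          return E, R, P0

      # synthetic breath with E=25 cmH2O/L, R=10 cmH2O.s/L, P0=5 cmH2O
      t = np.linspace(0, 1, 50)
      flow = np.sin(np.pi * t)                 # L/s
      vol = np.cumsum(flow) * (t[1] - t[0])    # L
      paw = 25 * vol + 10 * flow + 5           # cmH2O
      print(fit_respiratory_mechanics(paw, flow, vol))   # ~ (25, 10, 5)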

  6. Model-based software process improvement

    NASA Technical Reports Server (NTRS)

    Zettervall, Brenda T.

    1994-01-01

    The activities of a field test site for the Software Engineering Institute's software process definition project are discussed. Products tested included the improvement model itself, descriptive modeling techniques, the CMM level 2 framework document, and the use of process definition guidelines and templates. The software process improvement model represents a five stage cyclic approach for organizational process improvement. The cycles consist of the initiating, diagnosing, establishing, acting, and leveraging phases.

  7. 31 CFR 363.21 - When may you require offline authentication and documentary evidence?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... authentication and documentary evidence? 363.21 Section 363.21 Money and Finance: Treasury Regulations Relating... TreasuryDirect § 363.21 When may you require offline authentication and documentary evidence? We may require offline authentication and documentary evidence at our option. [74 FR 19419, Apr. 29, 2009] ...

  8. The interplay between online and offline explorations of identity, relationships, and sex: a mixed-methods study with LGBT youth.

    PubMed

    DeHaan, Samantha; Kuper, Laura E; Magee, Joshua C; Bigelow, Lou; Mustanski, Brian S

    2013-01-01

    Although the Internet is commonly used by lesbian, gay, bisexual, and transgender (LGBT) youth to explore aspects of sexual health, little is known about how this usage relates to offline explorations and experiences. This study used a mixed-methods approach to investigate the interplay between online and offline explorations of multiple dimensions of sexual health, which include sexually transmitted infections, sexual identities, romantic relationships, and sexual behaviors. A diverse community sample of 32 LGBT youth (ages 16-24) completed semi-structured interviews, which were transcribed and then qualitatively coded to identify themes. Results indicated that, although many participants evaluated online sexual health resources with caution, they frequently used the Internet to compensate for perceived limitations in offline resources and relationships. Some participants turned to the Internet to find friends and romantic partners, citing the relative difficulty of establishing offline contact with LGBT peers. Further, participants perceived the Internet as an efficient way to discover offline LGBT events and services relevant to sexual health. These results suggest that LGBT youth are motivated to fill gaps in their offline sexual health resources (e.g., books and personal communications) with online information. The Internet is a setting that can be harnessed to provide support for the successful development of sexual health.

  9. SU-C-303-04: Evaluation of On- and Off-Line Bioluminescence Tomography System for Focal Irradiation Guidance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B; Wang, K; Reyes, J

    Purpose: We have developed offline and on-board bioluminescence tomography (BLT) systems for the small animal radiation research platform (SARRP) for radiation guidance of soft-tissue targets. We investigated the effectiveness of offline BLT guidance. Methods: CBCT is equipped on both the offline BLT system and the SARRP, which are 10 ft apart. To evaluate the setup error during animal transport between the two systems, we implanted a luminescence source in the abdomen of anesthetized mice. Five mice were studied. After CBCT was acquired on both systems, source centers and correlation coefficients were calculated. CBCT was also used to generate the object mesh for BLT reconstruction. To assess target localization, we compared the localization of the luminescence source based on (1) on-board SARRP BLT and CBCT, (2) offline BLT and CBCT, and (3) offline BLT and SARRP CBCT. The third comparison examines whether an offline BLT system can be used to guide radiation when there is minimal target contrast in CBCT. Results: Our CBCT results show that the offset of the light source center can be maintained within 0.2 mm during animal transport. The center of mass (CoM) of the light source reconstructed by the offline BLT has an offset of 1.0 ± 0.4 mm from the 'true' CoM as derived from the SARRP CBCT. The results compare well with the offset of 1.0 ± 0.2 mm using on-line BLT. Conclusion: With CBCT information provided by the SARRP and effective animal immobilization during transport, these findings support the use of offline BLT in close vicinity for accurate soft-tissue target localization for irradiation. However, the disadvantage of the offline system is reduced efficiency, as care is required to maintain stable animal transport. We envisage a dual-use system where the on-board arrangement allows convenient access to CBCT and avoids disturbance of the animal setup, while the offline capability would support standalone longitudinal imaging studies. The work is supported by NIH R01CA158100 and Xstrahl Ltd. Drs. John Wong and Iulian Iordachita receive royalty payment from a licensing agreement between Xstrahl Ltd and Johns Hopkins University. John Wong also has a consultant agreement with Xstrahl Ltd.
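
    The center-of-mass offsets reported above come down to an intensity-weighted centroid in physical units. The sketch below is a generic calculation of that quantity for two 3-D reconstructions on the same grid; it is not the SARRP/BLT reconstruction code, and the voxel spacing is an assumed example.

      import numpy as np

      def center_of_mass(volume, spacing_mm):
          # Intensity-weighted center of mass of a 3-D reconstruction, in mm.
          idx = np.indices(volume.shape).reshape(3, -1)
          w = volume.ravel()
          return (idx * w).sum(axis=1) / w.sum() * np.asarray(spacing_mm)

      # offset between, e.g., an offline-BLT source and a CBCT-derived source:
      rng = np.random.default_rng(0)
      blt, cbct = rng.random((32, 32, 32)), rng.random((32, 32, 32))
      spacing = (0.5, 0.5, 0.5)   # assumed isotropic 0.5 mm voxels
      print(np.linalg.norm(center_of_mass(blt, spacing)
                           - center_of_mass(cbct, spacing)), "mm")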

  10. A Generic Ground Framework for Image Expertise Centres and Small-Sized Production Centres

    NASA Astrophysics Data System (ADS)

    Sellé, A.

    2009-05-01

    Initiated by the Pléiades Earth Observation Program, the CNES (French Space Agency) has developed a generic collaborative framework for its image quality centre, highly customisable for any upcoming expertise centre. This collaborative framework has been designed to be used by a group of experts or scientists who want to share data and processing and manage interfaces with external entities. Its flexible and scalable architecture complies with the core requirements: defining a user data model with no impact on the software (generic data access), integrating user processing with a GUI builder and built-in APIs, and offering a scalable architecture to fit any performance requirement and accompany growing projects. The CNES has granted licenses to two software companies that will be able to redistribute this framework to any customer.

  11. Achieving Agility and Stability in Large-Scale Software Development

    DTIC Science & Technology

    2013-01-16

    A temporary team is assigned to prepare layers and frameworks for future feature teams (e.g., a presentation layer, a domain layer, and a data access layer).

  12. Interim Open Source Software (OSS) Policy

    EPA Pesticide Factsheets

    This interim Policy establishes a framework to implement the requirements of the Office of Management and Budget's (OMB) Federal Source Code Policy to achieve efficiency, transparency and innovation through reusable and open source software.

  13. An Integrated Software Package to Enable Predictive Simulation Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Fitzhenry, Erin B.; Jin, Shuangshuang

    The power grid is increasing in complexity due to the deployment of smart grid technologies. Such technologies vastly increase the size and complexity of power grid systems for simulation and modeling. This increasing complexity necessitates not only the use of high-performance-computing (HPC) techniques, but a smooth, well-integrated interplay between HPC applications. This paper presents a new integrated software package that integrates HPC applications and a web-based visualization tool based on a middleware framework. This framework can support the data communication between different applications. Case studies with a large power system demonstrate the predictive capability brought by the integrated software package, as well as the better situational awareness provided by the web-based visualization tool in a live mode. Test results validate the effectiveness and usability of the integrated software package.
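
    The middleware's role of passing data between HPC applications and a visualization client can be pictured as publish/subscribe decoupling. The following Python fragment is a toy in-process sketch of that pattern only; the topic name and payload are invented, not the paper's API.

        from collections import defaultdict

        class Middleware:
            """Toy message broker decoupling producers from consumers."""
            def __init__(self):
                self._subs = defaultdict(list)

            def subscribe(self, topic, callback):
                self._subs[topic].append(callback)

            def publish(self, topic, payload):
                for cb in self._subs[topic]:
                    cb(payload)

        bus = Middleware()
        # A visualization client subscribes to live state estimates (assumed topic name)
        bus.subscribe("grid/state", lambda msg: print("render:", msg))
        # An HPC application publishes results as they are computed
        bus.publish("grid/state", {"bus_voltages": [1.02, 0.98, 1.01]})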

  14. Power plant fault detection using artificial neural network

    NASA Astrophysics Data System (ADS)

    Thanakodi, Suresh; Nazar, Nazatul Shiema Moh; Joini, Nur Fazriana; Hidzir, Hidzrin Dayana Mohd; Awira, Mohammad Zulfikar Khairul

    2018-02-01

    Faults in power plants arise from various factors and can lead to system outages. There are many types of faults in power plants, such as single line-to-ground faults, double line-to-ground faults, and line-to-line faults. The primary aim of this paper is to diagnose faults in a 14-bus power plant by using an Artificial Neural Network (ANN). A Multilayer Perceptron (MLP) network was trained for fault detection using offline training methods, namely Gradient Descent Backpropagation (GDBP), Levenberg-Marquardt (LM), and Bayesian Regularization (BR). The best-performing method was then used to build the Graphical User Interface (GUI). The modelling of the 14-bus power plant, the network training, and the GUI were all implemented in MATLAB.
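
    Offline MLP training of the kind named above (here plain gradient-descent backpropagation, the simplest of the three methods) can be sketched in a few lines of numpy. The data, layer sizes, and learning rate below are invented for illustration; the paper's actual model was built in MATLAB.

        import numpy as np

        rng = np.random.default_rng(0)
        # Hypothetical data: 14 per-bus measurements -> fault / no-fault label
        X = rng.normal(size=(200, 14))
        y = (X[:, 0] + X[:, 3] > 0).astype(float).reshape(-1, 1)

        # One-hidden-layer MLP trained offline with gradient-descent backprop
        W1 = rng.normal(scale=0.1, size=(14, 8)); b1 = np.zeros(8)
        W2 = rng.normal(scale=0.1, size=(8, 1));  b2 = np.zeros(1)
        sig = lambda z: 1.0 / (1.0 + np.exp(-z))

        for _ in range(500):
            h = sig(X @ W1 + b1)              # forward pass
            p = sig(h @ W2 + b2)
            d2 = (p - y) / len(X)             # cross-entropy gradient at the output
            d1 = (d2 @ W2.T) * h * (1 - h)    # backpropagated hidden-layer gradient
            W2 -= 0.5 * h.T @ d2; b2 -= 0.5 * d2.sum(0)
            W1 -= 0.5 * X.T @ d1; b1 -= 0.5 * d1.sum(0)

        print("training accuracy:", ((p > 0.5) == y).mean())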

  15. The SCUBA map reduction cookbook

    NASA Astrophysics Data System (ADS)

    Sandell, G.; Jessop, N.; Jenness, T.

    This cookbook tells you how to reduce and analyze maps obtained with SCUBA using the off-line SCUBA reduction package, SURF, and the Starlink KAPPA, Figaro, GAIA and CONVERT applications. The easiest way of using these packages is to run ORAC-DR, a general-purpose pipeline for reducing data from any telescope. A set of data reduction recipes is available to ORAC-DR for use when working with SCUBA maps; these recipes utilize the SURF and KAPPA packages. This cookbook makes no attempt to explain the why and how; for that there is the comprehensive Starlink User Note 216, which properly documents all the software tasks in SURF and should be consulted by those who need to know the details of a task or how it really works.

  16. A novel PMT test system based on waveform sampling

    NASA Astrophysics Data System (ADS)

    Yin, S.; Ma, L.; Ning, Z.; Qian, S.; Wang, Y.; Jiang, X.; Wang, Z.; Yu, B.; Gao, F.; Zhu, Y.; Wang, Z.

    2018-01-01

    Compared with a traditional test system based on a QDC, a TDC and a scaler, a test system based on waveform sampling has been constructed to sample signals of the 8-inch R5912 and the 20-inch R12860 Hamamatsu PMTs at different light levels, from single to multiple photoelectrons. In order to achieve high throughput and to reduce the dead time in data processing, data acquisition software based on LabVIEW was developed and runs with a parallel mechanism. The analysis algorithm is implemented in LabVIEW, and the spectra of charge, amplitude, signal width and rise time are analyzed offline. The results from the Charge-to-Digital Converter, the Time-to-Digital Converter and waveform sampling are compared in detail.
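
    The offline pulse parameters named above (charge, amplitude, width, rise time) can all be derived from one sampled waveform. The Python sketch below is a minimal illustration under assumed conventions (negative-going pulse, 1 ns sampling, fixed pre-trigger baseline window), not the LabVIEW analysis itself.

        import numpy as np

        def pulse_parameters(wf, dt_ns=1.0, n_baseline=20):
            """Charge, amplitude, FWHM-style width, 10%-90% rise time."""
            base = wf[:n_baseline].mean()          # baseline from pre-trigger samples
            sig = base - wf                        # invert so the pulse is positive
            amp = sig.max()
            peak = sig.argmax()
            charge = sig.sum() * dt_ns             # integral in arbitrary V*ns
            above = np.where(sig > 0.5 * amp)[0]
            width = (above[-1] - above[0]) * dt_ns
            lead = sig[:peak + 1]                  # leading edge only
            t10 = np.argmax(lead > 0.1 * amp)
            t90 = np.argmax(lead > 0.9 * amp)
            return charge, amp, width, (t90 - t10) * dt_ns

        # Synthetic single-photoelectron-like pulse
        wf = np.concatenate([np.zeros(30),
                             -np.exp(-((np.arange(50) - 10) ** 2) / 30.0),
                             np.zeros(20)])
        print(pulse_parameters(wf))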

  17. The ARAMIS project: a concept robot and technical design.

    PubMed

    Colizzi, Lucio; Lidonnici, Antonio; Pignolo, Loris

    2009-11-01

    To describe the ARAMIS (Automatic Recovery Arm Motility Integrated System) project, a concept robot applicable in the neuro-rehabilitation of the paretic upper limb after stroke. Methods, results and conclusion: The rationale and engineering of a state-of-the-art, hardware/software integrated robot system, its mechanics, ergonomics, electric/electronics features providing control, safety and suitability of use are described. An ARAMIS prototype has been built and is now available for clinical tests. It allows the therapist to design neuro-rehabilitative (synchronous or asynchronous) training protocols in which sample exercises are generated by a single exoskeleton (operated by the patient's unaffected arm or by the therapist's arm) and mirrored in real-time or offline by the exoskeleton supporting the paretic arm.

  18. Online and Offline Pattern Recognition in PANDA

    NASA Astrophysics Data System (ADS)

    Boca, Gianluigi

    2016-11-01

    PANDA is one of the four experiments that will run at the new FAIR facility being built in Darmstadt, Germany. It is a fixed-target experiment: a beam of antiprotons collides with a proton jet target (the maximum center-of-mass energy is 5.46 GeV). The interaction rate at startup will be 2 MHz, with the goal of reaching 20 MHz at full luminosity. The beam of antiprotons will be essentially continuous. PANDA will have no hardware trigger, only a software trigger, to allow for maximum flexibility in the physics program. All of these characteristics are severe challenges for the reconstruction code, which (1) must be fast, since it has to be validated at up to a 20 MHz interaction rate, and (2) must be able to reject fake tracks caused by remnant hits belonging to previous or later events in some slow detectors, for example the straw tubes in the central region. The Pattern Recognition (PR) of PANDA will have to run both online, to achieve a first fast selection, and offline, at lower rate, for a more refined selection. In PANDA the PR code is continuously evolving; this contribution shows the present status. I will give an overview of three examples of PR following different strategies and/or implemented on different hardware (FPGAs, GPUs, CPUs) and, where available, I will report their performance.
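
    One ingredient of rejecting remnant hits from slow detectors is a time-window cut around the event time. The fragment below is a deliberately simplified Python sketch of that idea only; the window values and hit encoding are assumptions, not PANDA's PR code.

        def select_in_time_hits(hits, t_event, drift_window_ns=(0.0, 250.0)):
            """Keep only hits compatible with an event at t_event.
            'hits' are (time_ns, detector_id) pairs; the window stands in
            for a real drift-time acceptance."""
            lo, hi = drift_window_ns
            return [h for h in hits if lo <= h[0] - t_event <= hi]

        hits = [(1005.0, "straw_12"), (1130.0, "straw_40"), (1900.0, "straw_7")]
        print(select_in_time_hits(hits, t_event=1000.0))
        # -> the 1900 ns hit is dropped as a remnant of a neighbouring event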

  19. Real Time Coincidence Detection Engine for High Count Rate Timestamp Based PET

    NASA Astrophysics Data System (ADS)

    Tetrault, M.-A.; Oliver, J. F.; Bergeron, M.; Lecomte, R.; Fontaine, R.

    2010-02-01

    Coincidence engines follow two main implementation approaches: timestamp-based systems and AND-gate-based systems. The latter have been more widespread in recent years because of their lower cost and high efficiency. However, they are highly dependent on the selected electronic components, they have limited flexibility once assembled, and they are customized to fit a specific scanner's geometry. Timestamp-based systems are gathering more attention lately, especially with high-channel-count, fully digital systems. These new systems must however cope with substantial singles count rates. One option is to record every detected event and defer coincidence detection to offline processing. For daily-use systems, a real-time engine is preferable because it dramatically reduces data volume and hence image preprocessing time and raw data management. This paper presents the timestamp-based coincidence engine for the LabPET™, a small animal PET scanner with up to 4608 individual readout avalanche photodiode channels. The engine can handle up to 100 million single events per second and has extensive flexibility because it resides in programmable logic devices. It can be adapted to any detector geometry or channel count, can be ported to newer, faster programmable devices, and can have extra modules added to take advantage of scanner-specific features. Finally, the user can select between a full processing mode for imaging protocols and a minimum processing mode to study different approaches for coincidence detection with offline software.
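
    The pairing step of a timestamp-based engine can be sketched in a few lines: scan the time-ordered singles stream and pair neighbours that fall within the coincidence window. The Python fragment below is an illustrative simplification (window value, channel tuples and the skip-ahead policy are assumptions, not the LabPET firmware logic).

        def find_coincidences(singles, window_ns=10.0):
            """Pair time-ordered single events whose timestamps differ by
            less than the coincidence window. 'singles' is a sorted list of
            (timestamp_ns, channel) tuples; geometry checks are omitted."""
            pairs, i = [], 0
            while i < len(singles) - 1:
                t0, ch0 = singles[i]
                t1, ch1 = singles[i + 1]
                if t1 - t0 <= window_ns and ch0 != ch1:
                    pairs.append(((t0, ch0), (t1, ch1)))
                    i += 2                  # consume both events
                else:
                    i += 1
            return pairs

        singles = [(0.0, 3), (4.0, 97), (50.0, 12), (300.0, 40), (304.0, 8)]
        print(find_coincidences(singles))   # two coincidence pairs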

  20. SYRMEP Tomo Project: a graphical user interface for customizing CT reconstruction workflows.

    PubMed

    Brun, Francesco; Massimi, Lorenzo; Fratini, Michela; Dreossi, Diego; Billé, Fulvio; Accardo, Agostino; Pugliese, Roberto; Cedola, Alessia

    2017-01-01

    When considering the acquisition of experimental synchrotron radiation (SR) X-ray CT data, the reconstruction workflow cannot be limited to the essential computational steps of flat fielding and filtered back projection (FBP). More refined image processing is often required, usually to compensate for artifacts and enhance the quality of the reconstructed images. In principle, it would be desirable to optimize the reconstruction workflow at the facility during the experiment (beamtime). However, several practical factors affect the image reconstruction part of the experiment, and users are likely to conclude the beamtime with sub-optimal reconstructed images. Through an example of application, this article presents SYRMEP Tomo Project (STP), an open-source software tool conceived to let users design custom CT reconstruction workflows. STP has been designed for post-beamtime (off-line) use and for new reconstructions of past archived data at the user's home institution, where simple computing resources are available. Releases of the software can be downloaded at the Elettra Scientific Computing group GitHub repository https://github.com/ElettraSciComp/STP-Gui.
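
    The two essential steps named above, flat fielding and FBP, can be illustrated with scikit-image (assumed available); everything below is synthetic stand-in data, not STP code, and the Beer-Lambert forward simulation exists only to give the correction something to correct.

        import numpy as np
        from skimage.transform import radon, iradon

        def flat_field(raw, flat, dark):
            """Conventional flat-field correction followed by the -log transform."""
            trans = (raw - dark) / (flat - dark)
            return -np.log(np.clip(trans, 1e-6, None))

        # Synthetic stand-in for measured transmission data
        phantom = np.zeros((128, 128)); phantom[40:90, 50:80] = 0.02
        theta = np.linspace(0.0, 180.0, 180, endpoint=False)
        line_integrals = radon(phantom, theta=theta)
        flat = np.full_like(line_integrals, 1000.0)            # flat (no sample) counts
        dark = np.full_like(line_integrals, 10.0)              # dark counts
        raw = dark + (flat - dark) * np.exp(-line_integrals)   # Beer-Lambert model

        sinogram = flat_field(raw, flat, dark)
        reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")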

  1. Satellite-Based Stratospheric and Tropospheric Measurements: Determination of Global Ozone and Other Trace Species

    NASA Technical Reports Server (NTRS)

    Chance, Kelly

    2003-01-01

    This grant is an extension to our previous NASA Grant NAG5-3461, providing incremental funding to continue GOME (Global Ozone Monitoring Experiment) and SCIAMACHY (SCanning Imaging Absorption SpectroMeter for Atmospheric CHartographY) studies. This report summarizes research done under these grants through December 31, 2002. The research performed during this reporting period includes development and maintenance of scientific software for the GOME retrieval algorithms, consultation on operational software development for GOME, consultation and development for SCIAMACHY near-real-time (NRT) and off-line (OL) data products, and participation in initial SCIAMACHY validation studies. The Global Ozone Monitoring Experiment was successfully launched on the ERS-2 satellite on April 20, 1995, and continues to operate normally. SCIAMACHY was launched March 1, 2002 on the ESA Envisat satellite. Three GOME-2 instruments are now scheduled to fly on the Metop series of operational meteorological satellites (Eumetsat). K. Chance is a member of the reconstituted GOME Scientific Advisory Group, which will guide the GOME-2 program as well as the continuing ERS-2 GOME program.

  2. Satellite-Based Stratospheric and Tropospheric Measurements: Determination of Global Ozone and Other Trace Species

    NASA Astrophysics Data System (ADS)

    Chance, Kelly

    2003-02-01

    This grant is an extension to our previous NASA Grant NAG5-3461, providing incremental funding to continue GOME (Global Ozone Monitoring Experiment) and SCIAMACHY (SCanning Imaging Absorption SpectroMeter for Atmospheric CHartographY) studies. This report summarizes research done under these grants through December 31, 2002. The research performed during this reporting period includes development and maintenance of scientific software for the GOME retrieval algorithms, consultation on operational software development for GOME, consultation and development for SCIAMACHY near-real-time (NRT) and off-line (OL) data products, and participation in initial SCIAMACHY validation studies. The Global Ozone Monitoring Experiment was successfully launched on the ERS-2 satellite on April 20, 1995, and continues to operate normally. SCIAMACHY was launched March 1, 2002 on the ESA Envisat satellite. Three GOME-2 instruments are now scheduled to fly on the Metop series of operational meteorological satellites (Eumetsat). K. Chance is a member of the reconstituted GOME Scientific Advisory Group, which will guide the GOME-2 program as well as the continuing ERS-2 GOME program.

  3. U.S. Participation in the GOME and SCIAMACHY Projects

    NASA Technical Reports Server (NTRS)

    Chance, K. V.

    1996-01-01

    This report summarizes research done under NASA Grant NAGW-2541 from April 1, 1996 through March 31, 1997. The research performed during this reporting period includes development and maintenance of scientific software for the GOME retrieval algorithms, consultation on operational software development for GOME, consultation and development for SCIAMACHY near-real-time (NRT) and off-line (OL) data products, and development of infrared line-by-line atmospheric modeling and retrieval capability for SCIAMACHY. SAO also continues to participate in GOME validation studies, to the limit that can be accomplished at the present level of funding. The Global Ozone Monitoring Experiment was successfully launched on the ERS-2 satellite on April 20, 1995, and continues to operate normally. SCIAMACHY is currently in instrument characterization. The first two European ozone monitoring instruments (OMI), to fly on the Metop series of operational meteorological satellites being planned by Eumetsat, have been selected to be GOME-type instruments (the first, in fact, will be the refurbished GOME flight spare). K. Chance is the U.S. member of the OMI Users Advisory Group.

  4. Identification of visual evoked response parameters sensitive to pilot mental state

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.

    1988-01-01

    Systems analysis techniques were developed and demonstrated for modeling the electroencephalographic (EEG) steady state visual evoked response (ssVER), for use in EEG data compression and as an indicator of mental workload. The study focused on steady state frequency domain stimulation and response analysis, implemented with a sum-of-sines (SOS) stimulus generator and an off-line describing function response analyzer. Three major tasks were conducted: (1) VER related systems identification material was reviewed; (2) Software for experiment control and data analysis was developed and implemented; and (3) ssVER identification and modeling was demonstrated, via a mental loading experiment. It was found that a systems approach to ssVER functional modeling can serve as the basis for eventual development of a mental workload indicator. The review showed how transient visual evoked response (tVER) and ssVER research are related at the functional level, the software development showed how systems techniques can be used for ssVER characterization, and the pilot experiment showed how a simple model can be used to capture the basic dynamic response of the ssVER, under varying loads.
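
    An off-line describing-function analysis of a sum-of-sines experiment reduces to taking the ratio of response to stimulus spectra at the stimulated frequencies. The Python sketch below illustrates that step on synthetic data; the sampling rate, SOS frequencies, gain and delay are all invented.

        import numpy as np

        fs = 256.0                                     # assumed sampling rate (Hz)
        t = np.arange(0, 8.0, 1.0 / fs)
        sos_freqs = np.array([6.0, 8.0, 10.0])         # assumed SOS stimulus lines

        stimulus = sum(np.sin(2 * np.pi * f * t) for f in sos_freqs)
        # Toy "ssVER": attenuated, delayed stimulus plus background EEG noise
        response = 0.3 * sum(np.sin(2 * np.pi * f * (t - 0.05)) for f in sos_freqs)
        response += 0.1 * np.random.default_rng(1).normal(size=t.size)

        S, R = np.fft.rfft(stimulus), np.fft.rfft(response)
        freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
        for f in sos_freqs:
            k = np.argmin(np.abs(freqs - f))           # bin of each stimulus line
            H = R[k] / S[k]                            # describing function estimate
            print(f"{f:4.1f} Hz: gain={abs(H):.2f}, "
                  f"phase={np.degrees(np.angle(H)):.1f} deg")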

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Zhang, Zhao

    With each CMOS technology generation, leakage energy consumption has been dramatically increasing and hence, managing leakage power consumption of large last-level caches (LLCs) has become a critical issue in modern processor design. In this paper, we present EnCache, a novel software-based technique which uses dynamic profiling-based cache reconfiguration for saving cache leakage energy. EnCache uses a simple hardware component called profiling cache, which dynamically predicts energy efficiency of an application for 32 possible cache configurations. Using these estimates, system software reconfigures the cache to the most energy efficient configuration. EnCache uses dynamic cache reconfiguration and hence, it does not require offline profiling or tuning the parameter for each application. Furthermore, EnCache optimizes directly for the overall memory subsystem (LLC and main memory) energy efficiency instead of the LLC energy efficiency alone. The experiments performed with an x86-64 simulator and workloads from SPEC2006 suite confirm that EnCache provides larger energy saving than a conventional energy saving scheme. For single core and dual-core system configurations, the average savings in memory subsystem energy over a shared baseline configuration are 30.0% and 27.3%, respectively.
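
    The selection step described above, picking the most energy-efficient of the predicted configurations, amounts to an argmin over the profiling cache's estimates. The Python sketch below shows only that step; the configuration keys and energy values are invented.

        # Minimal sketch of the selection step, assuming the profiling cache has
        # already produced one memory-subsystem energy estimate per configuration.
        def pick_configuration(energy_estimates):
            """energy_estimates: dict mapping configuration -> predicted energy."""
            return min(energy_estimates, key=energy_estimates.get)

        estimates = {("8-way", "full"): 1.00, ("4-way", "full"): 0.82,
                     ("8-way", "half"): 0.78, ("4-way", "half"): 0.85}  # invented
        print("reconfigure LLC to:", pick_configuration(estimates))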

  6. The Profiles in Practice School Reporting Software.

    ERIC Educational Resources Information Center

    Griffin, Patrick

    "The Profiles in Practice: School Reporting Software" provides a framework for reports on different aspects of performance in an assessment program. This booklet is the installation guide and user manual for the Profiles in Practice software, which is included as a CD-ROM. The chapters of the guide are: (1) "Installation"; (2) "Starting the…

  7. Checklists for the Evaluation of Educational Software: Critical Review and Prospects.

    ERIC Educational Resources Information Center

    Tergan, Sigmar-Olaf

    1998-01-01

    Reviews strengths and weaknesses of check lists for the evaluation of computer software and outlines consequences for their practical application. Suggests an approach based on an instructional design model and a comprehensive framework to cope with problems of validity and predictive power of software evaluation. Discusses prospects of the…

  8. Application Development Methodology Appropriateness: An Exploratory Case Study Bridging the Gap between Framework Characteristics and Selection

    ERIC Educational Resources Information Center

    Williams, Lawrence H., Jr.

    2013-01-01

    This qualitative study analyzed experiences of twenty software developers. The research showed that all software development methodologies are distinct from each other. While some, such as waterfall, focus on traditional, plan-driven approaches that allow software requirements and design to evolve; others facilitate ambiguity and uncertainty by…

  9. Probabilistic Motor Sequence Yields Greater Offline and Less Online Learning than Fixed Sequence

    PubMed Central

    Du, Yue; Prashad, Shikha; Schoenbrun, Ilana; Clark, Jane E.

    2016-01-01

    It is well acknowledged that motor sequences can be learned quickly through online learning. Subsequently, the initial acquisition of a motor sequence is boosted or consolidated by offline learning. However, little is known about whether offline learning can drive the fast learning of motor sequences (i.e., initial sequence learning in the first training session). To examine offline learning in the fast learning stage, we asked four groups of young adults to perform the serial reaction time (SRT) task with either a fixed or probabilistic sequence and with or without preliminary knowledge (PK) of the presence of a sequence. The sequence and PK were manipulated to emphasize either procedural (probabilistic sequence; no preliminary knowledge (NPK)) or declarative (fixed sequence; with PK) memory, which were found to either facilitate or inhibit offline learning. In the SRT task, there were six learning blocks with a 2 min break between each consecutive block. Throughout the session, stimuli followed the same fixed or probabilistic pattern except in Block 5, in which stimuli appeared in a random order. We found that PK facilitated the learning of a fixed sequence, but not a probabilistic sequence. In addition to overall learning measured by the mean reaction time (RT), we examined the progressive changes in RT within and between blocks (i.e., online and offline learning, respectively). It was found that the two groups who performed the fixed sequence, regardless of PK, showed greater online learning than the other two groups who performed the probabilistic sequence. The groups who performed the probabilistic sequence, regardless of PK, did not display online learning, as indicated by a decline in performance within the learning blocks. However, they did demonstrate remarkably greater offline improvement in RT, which suggests that they were learning the probabilistic sequence offline. These results suggest that in the SRT task, the fast acquisition of a motor sequence is driven by concurrent online and offline learning. In addition, as the acquisition of a probabilistic sequence requires greater procedural memory compared to the acquisition of a fixed sequence, our results suggest that offline learning is more likely to take place in a procedural sequence learning task. PMID:26973502

  10. Probabilistic Motor Sequence Yields Greater Offline and Less Online Learning than Fixed Sequence.

    PubMed

    Du, Yue; Prashad, Shikha; Schoenbrun, Ilana; Clark, Jane E

    2016-01-01

    It is well acknowledged that motor sequences can be learned quickly through online learning. Subsequently, the initial acquisition of a motor sequence is boosted or consolidated by offline learning. However, little is known about whether offline learning can drive the fast learning of motor sequences (i.e., initial sequence learning in the first training session). To examine offline learning in the fast learning stage, we asked four groups of young adults to perform the serial reaction time (SRT) task with either a fixed or probabilistic sequence and with or without preliminary knowledge (PK) of the presence of a sequence. The sequence and PK were manipulated to emphasize either procedural (probabilistic sequence; no preliminary knowledge (NPK)) or declarative (fixed sequence; with PK) memory, which were found to either facilitate or inhibit offline learning. In the SRT task, there were six learning blocks with a 2 min break between each consecutive block. Throughout the session, stimuli followed the same fixed or probabilistic pattern except in Block 5, in which stimuli appeared in a random order. We found that PK facilitated the learning of a fixed sequence, but not a probabilistic sequence. In addition to overall learning measured by the mean reaction time (RT), we examined the progressive changes in RT within and between blocks (i.e., online and offline learning, respectively). It was found that the two groups who performed the fixed sequence, regardless of PK, showed greater online learning than the other two groups who performed the probabilistic sequence. The groups who performed the probabilistic sequence, regardless of PK, did not display online learning, as indicated by a decline in performance within the learning blocks. However, they did demonstrate remarkably greater offline improvement in RT, which suggests that they were learning the probabilistic sequence offline. These results suggest that in the SRT task, the fast acquisition of a motor sequence is driven by concurrent online and offline learning. In addition, as the acquisition of a probabilistic sequence requires greater procedural memory compared to the acquisition of a fixed sequence, our results suggest that offline learning is more likely to take place in a procedural sequence learning task.

  11. Off-Line Multidimensional Liquid Chromatography and Auto Sampling Result in Sample Loss in LC/LC–MS/MS

    PubMed Central

    2015-01-01

    Large-scale proteomics often employs two orthogonal separation methods to fractionate complex peptide mixtures. Fractionation can involve ion exchange separation coupled to reversed-phase separation or, more recently, two reversed-phase separations performed at different pH values. When multidimensional separations are combined with tandem mass spectrometry for protein identification, the strategy is often referred to as multidimensional protein identification technology (MudPIT). MudPIT has been used in either an automated (online) or manual (offline) format. In this study, we evaluated the performance of different MudPIT strategies by both label-free and tandem mass tag (TMT) isobaric tagging. Our findings revealed that online MudPIT provided more peptide/protein identifications and higher sequence coverage than offline platforms. When employing an off-line fractionation method with direct loading of samples onto the column from an Eppendorf tube via a high-pressure device, a 5.3% loss in protein identifications is observed. When off-line fractionated samples are loaded via an autosampler, a 44.5% loss in protein identifications is observed compared with direct loading of samples onto a triphasic capillary column. Moreover, peptide recovery was significantly lower after offline fractionation than in online fractionation. Signal-to-noise (S/N) ratio, however, was not significantly altered between experimental groups. It is likely that offline sample collection results in stochastic peptide loss due to noncovalent adsorption to solid surfaces. Therefore, the use of offline approaches should be considered carefully when processing minute quantities of valuable samples. PMID:25040086

  12. ATLAS particle detector CSC ROD software design and implementation, and, Addition of K physics to chi-squared analysis of FDQM

    NASA Astrophysics Data System (ADS)

    Hawkins, Donovan Lee

    In this thesis I present a software framework for use on the ATLAS muon CSC readout driver. This C++ framework uses plug-in Decoders incorporating hand-optimized assembly language routines to perform sparsification and data formatting. The software is designed with both flexibility and performance in mind, and runs on a custom 9U VME board using Texas Instruments TMS320C6203 digital signal processors. I describe the requirements of the software, the methods used in its design, and the results of testing the software with simulated data. I also present modifications to a chi-squared analysis of the Standard Model and Four Down Quark Model (FDQM) originally done by Dr. Dennis Silverman. The addition of four new experiments to the analysis has little effect on the Standard Model but provides important new restrictions on the FDQM. The method used to incorporate these new experiments is presented, and the consequences of their addition are reviewed.
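
    The plug-in Decoder idea, a registry of interchangeable decoders, one of which performs sparsification (zero suppression), can be sketched in Python even though the real framework is C++ on DSPs. The class names, registry, and threshold below are illustrative assumptions only.

        class Decoder:
            """Interface implemented by each pluggable decoder."""
            def decode(self, samples):
                raise NotImplementedError

        class SparsifyingDecoder(Decoder):
            """Zero-suppress channels below threshold and keep the survivors."""
            def __init__(self, threshold):
                self.threshold = threshold
            def decode(self, samples):
                return [(ch, v) for ch, v in enumerate(samples) if v > self.threshold]

        DECODERS = {"sparsify": SparsifyingDecoder}      # plug-in registry (assumed)

        decoder = DECODERS["sparsify"](threshold=12)
        print(decoder.decode([3, 40, 0, 15, 7]))         # -> [(1, 40), (3, 15)]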

  13. Ethnography, the Internet, and Youth Culture: Strategies for Examining Social Resistance and "Online-Offline" Relationships

    ERIC Educational Resources Information Center

    Wilson, Brian

    2006-01-01

    The integration of traditional (offline and face-to-face) and virtual ethnographic methods can aid researchers interested in developing understandings of relationships between online and offline cultural life, and examining the diffuse and sometimes global character of youth resistance. In constructing this argument, I have used insights from…

  14. Applying a Model of Communicative Influence in Education in Closed Online and Offline Courses

    ERIC Educational Resources Information Center

    Carr, Caleb T.

    2014-01-01

    This research explores communicative influences on cognitive learning and educational affect in online and offline courses limited to only enrolled students. A survey was conducted of students (N = 147) enrolled in online and offline courses within a single department during Summer, 2013. Respondents were asked about their classroom communication…

  15. Non-Markovian character in human mobility: Online and offline.

    PubMed

    Zhao, Zhi-Dan; Cai, Shi-Min; Lu, Yang

    2015-06-01

    The dynamics of human mobility characterizes the trajectories that humans follow during their daily activities and is the foundation of processes from epidemic spreading to traffic prediction and information recommendation. In this paper, we investigate a massive data set of human activity, including both online behavior (browsing websites) and offline behavior (visiting cell towers, as recorded by mobile terminals). The non-Markovian character observed in both the online and offline cases is suggested by the scaling law in the distribution of dwelling time at the individual and collective levels, respectively. Furthermore, we argue that the lower entropy and higher predictability in human mobility for both online and offline cases may originate from this non-Markovian character. However, the distributions of individual entropy and predictability show different degrees of non-Markovian character between the online and offline cases. To account for the non-Markovian character in human mobility, we apply a prototype model with three basic ingredients, namely preferential return, inertial effect, and exploration, to reproduce the dynamic process of online and offline human mobility. The simulations show that the model is able to reproduce characteristics much closer to the empirical observations.
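
    A toy version of a three-ingredient model of this kind can be simulated in a few lines. The Python sketch below is an assumed interpretation (parameter values and the way inertia is applied are invented), not the authors' model specification.

        import random
        from collections import Counter

        def simulate_mobility(steps, p_explore=0.1, p_inertia=0.3, seed=7):
            """Toy three-ingredient mobility model: inertia (repeat last site),
            exploration (visit a brand-new site), and preferential return
            (revisit a site with probability proportional to its visit count)."""
            rng = random.Random(seed)
            visits, trajectory, next_new = Counter({0: 1}), [0], 1
            for _ in range(steps):
                r = rng.random()
                if r < p_inertia:
                    loc = trajectory[-1]                        # inertial effect
                elif r < p_inertia + p_explore:
                    loc, next_new = next_new, next_new + 1      # exploration
                else:
                    sites, counts = zip(*visits.items())        # preferential return
                    loc = rng.choices(sites, weights=counts)[0]
                visits[loc] += 1
                trajectory.append(loc)
            return trajectory

        traj = simulate_mobility(10000)
        print("distinct sites visited:", len(set(traj)))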

  16. Solidarity.com: is there a link between offline behavior and online donations?

    PubMed

    Eller, Anja

    2008-10-01

    Solidarity websites, such as The Hunger Site, where people can donate food at no financial cost and minimal effort, have become immensely popular and effective since 1999. These new forms of philanthropy are characterized by wide participation and direct assistance and feedback. The present longitudinal, quasi-experimental study aimed to examine whether online solidarity can be predicted by offline contact with, attitudes about, and altruistic behavior tendencies towards a population in need, asylum seekers. Fifty-seven university students completed two surveys, separated by 1 year. Prior to T1, only 9% of respondents had visited solidarity websites, while at T2 47% reported clicking. Multiple regression analysis showed that T2 visits to solidarity websites were (negatively) predicted by T1 quantity of contact, and marginally, by T1 general evaluation of asylum seekers. These long-term, offline-to-online effects are intriguing, although there were no effects of offline contact quality and altruistic behavior tendencies. Future research should further investigate the causal direction between offline and online behavior and the factors that might influence the link between offline and online attitudes and behavior.

  17. Factors influencing HIV serodisclosure among men who have sex with men in the US: an examination of online versus offline meeting environments and risk behaviors.

    PubMed

    Noor, Syed W B; Rampalli, Krystal; Rosser, B R Simon

    2014-09-01

    One key component in HIV prevention is serostatus disclosure. Until recently, many studies have focused on interpersonal factors and minimally considered meeting venues as they pertain to disclosure. Using data (N = 3,309) from an online survey conducted across 16 U.S. metropolitan statistical areas, we examined whether HIV serodisclosure varies by online/offline meeting venues in both protected and unprotected anal intercourse encounters. Most of the sample (76.9 %) reported meeting men for sex (last 90 days) both online and offline, versus 12.7 % offline only and 10.4 % online only. After controlling for other variables, we found that men who met partners both online and offline were 20-30 % more likely to report disclosing their HIV status prior to sex than men who met their partners exclusively either offline or online. While previous studies have identified the Internet as a risk environment, our findings suggest bi-environmental partner seeking may also have beneficial effects.

  18. Elements of strategic capability for software outsourcing enterprises based on the resource

    NASA Astrophysics Data System (ADS)

    Shi, Wengeng

    2011-10-01

    Software outsourcing enterprises are an emerging type of high-tech enterprise, and both their number and their rate of growth have been remarkable. Beyond the preferential policies China grants to software outsourcing, the software outsourcing business has its own capability to upgrade, a characteristic that software companies in general have not had. Viewed from resource-based theory, we analyze the rare, valuable, and hard-to-imitate capabilities and resources that software outsourcing companies possess, and on this basis we attempt to give an initial framework for theoretical analysis.

  19. Incorporating cost-benefit analyses into software assurance planning

    NASA Technical Reports Server (NTRS)

    Feather, M. S.; Sigal, B.; Cornford, S. L.; Hutchinson, P.

    2001-01-01

    The objective is to use cost-benefit analyses to identify, for a given project, optimal sets of software assurance activities. Towards this end we have incorporated cost-benefit calculations into a risk management framework.

  20. Design, implementation and validation of a novel open framework for agile development of mobile health applications.

    PubMed

    Banos, Oresti; Villalonga, Claudia; Garcia, Rafael; Saez, Alejandro; Damas, Miguel; Holgado-Terriza, Juan A; Lee, Sungyong; Pomares, Hector; Rojas, Ignacio

    2015-01-01

    The delivery of healthcare services has experienced tremendous changes during the last years. Mobile health or mHealth is a key engine of advance at the forefront of this revolution. Although there exists a growing development of mobile health applications, there is a lack of tools specifically devised for their implementation. This work presents mHealthDroid, an open source Android implementation of a mHealth Framework designed to facilitate the rapid and easy development of mHealth and biomedical apps. The framework is particularly planned to leverage the potential of mobile devices such as smartphones or tablets, wearable sensors and portable biomedical systems. These devices are increasingly used for the monitoring and delivery of personal health care and wellbeing. The framework implements several functionalities to support resource and communication abstraction, biomedical data acquisition, health knowledge extraction, persistent data storage, adaptive visualization, system management and value-added services such as intelligent alerts, recommendations and guidelines. An exemplary application is also presented in this work to demonstrate the potential of mHealthDroid. This app is used to investigate the analysis of human behavior, which is considered to be one of the most prominent areas in mHealth. An accurate activity recognition model is developed and successfully validated in both offline and online conditions.

  1. Final Technical Report - Center for Technology for Advanced Scientific Component Software (TASCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sussman, Alan

    2014-10-21

    This is a final technical report for the University of Maryland work in the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS). The Maryland work focused on software tools for coupling parallel software components built using the Common Component Architecture (CCA) APIs. Those tools are based on the Maryland InterComm software framework that has been used in multiple computational science applications to build large-scale simulations of complex physical systems that employ multiple separately developed codes.

  2. Wake Turbulence Mitigation for Departures (WTMD) Prototype System - Software Design Document

    NASA Technical Reports Server (NTRS)

    Sturdy, James L.

    2008-01-01

    This document describes the software design of a prototype Wake Turbulence Mitigation for Departures (WTMD) system that was evaluated in shadow mode operation at the Saint Louis (KSTL) and Houston (KIAH) airports. This document describes the software that provides the system framework, communications, user displays, and hosts the Wind Forecasting Algorithm (WFA) software developed by the M.I.T. Lincoln Laboratory (MIT-LL). The WFA algorithms and software are described in a separate document produced by MIT-LL.

  3. Neutron imaging data processing using the Mantid framework

    NASA Astrophysics Data System (ADS)

    Pouzols, Federico M.; Draper, Nicholas; Nagella, Sri; Yang, Erica; Sajid, Ahmed; Ross, Derek; Ritchie, Brian; Hill, John; Burca, Genoveva; Minniti, Triestino; Moreton-Smith, Christopher; Kockelmann, Winfried

    2016-09-01

    Several imaging instruments are currently being constructed at neutron sources around the world. The Mantid software project provides an extensible framework that supports high-performance computing for data manipulation, analysis and visualisation of scientific data. At ISIS, IMAT (Imaging and Materials Science & Engineering) will offer unique time-of-flight neutron imaging techniques which impose several software requirements to control the data reduction and analysis. Here we outline the extensions currently being added to Mantid to provide specific support for neutron imaging requirements.

  4. Open source libraries and frameworks for biological data visualisation: a guide for developers.

    PubMed

    Wang, Rui; Perez-Riverol, Yasset; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-04-01

    Recent advances in high-throughput experimental techniques have led to an exponential increase in both the size and the complexity of the data sets commonly studied in biology. Data visualisation is increasingly used as the key to unlock this data, going from hypothesis generation to model evaluation and tool implementation. It is becoming more and more the heart of bioinformatics workflows, enabling scientists to reason and communicate more effectively. In parallel, there has been a corresponding trend towards the development of related software, which has triggered the maturation of different visualisation libraries and frameworks. For bioinformaticians, scientific programmers and software developers, the main challenge is to pick out the most fitting one(s) to create clear, meaningful and integrated data visualisation for their particular use cases. In this review, we introduce a collection of open source or free to use libraries and frameworks for creating data visualisation, covering the generation of a wide variety of charts and graphs. We will focus on software written in Java, JavaScript or Python. We truly believe this software offers the potential to turn tedious data into exciting visual stories. © 2014 The Authors. PROTEOMICS published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Open source libraries and frameworks for biological data visualisation: A guide for developers

    PubMed Central

    Wang, Rui; Perez-Riverol, Yasset; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-01-01

    Recent advances in high-throughput experimental techniques have led to an exponential increase in both the size and the complexity of the data sets commonly studied in biology. Data visualisation is increasingly used as the key to unlock this data, going from hypothesis generation to model evaluation and tool implementation. It is becoming more and more the heart of bioinformatics workflows, enabling scientists to reason and communicate more effectively. In parallel, there has been a corresponding trend towards the development of related software, which has triggered the maturation of different visualisation libraries and frameworks. For bioinformaticians, scientific programmers and software developers, the main challenge is to pick out the most fitting one(s) to create clear, meaningful and integrated data visualisation for their particular use cases. In this review, we introduce a collection of open source or free to use libraries and frameworks for creating data visualisation, covering the generation of a wide variety of charts and graphs. We will focus on software written in Java, JavaScript or Python. We truly believe this software offers the potential to turn tedious data into exciting visual stories. PMID:25475079

  6. Contour Tracking in Echocardiographic Sequences via Sparse Representation and Dictionary Learning

    PubMed Central

    Huang, Xiaojie; Dione, Donald P.; Compas, Colin B.; Papademetris, Xenophon; Lin, Ben A.; Bregasi, Alda; Sinusas, Albert J.; Staib, Lawrence H.; Duncan, James S.

    2013-01-01

    This paper presents a dynamical appearance model based on sparse representation and dictionary learning for tracking both endocardial and epicardial contours of the left ventricle in echocardiographic sequences. Instead of learning offline spatiotemporal priors from databases, we exploit the inherent spatiotemporal coherence of individual data to constrain cardiac contour estimation. The contour tracker is initialized with a manual tracing of the first frame. It employs multiscale sparse representation of local image appearance and learns online multiscale appearance dictionaries in a boosting framework as the image sequence is segmented frame-by-frame sequentially. The weights of multiscale appearance dictionaries are optimized automatically. Our region-based level set segmentation integrates a spectrum of complementary multilevel information including intensity, multiscale local appearance, and dynamical shape prediction. The approach is validated on twenty-six 4D canine echocardiographic images acquired from both healthy and post-infarct canines. The segmentation results agree well with expert manual tracings. The ejection fraction estimates also show good agreement with manual results. Advantages of our approach are demonstrated by comparisons with a conventional pure intensity model, a registration-based contour tracker, and a state-of-the-art database-dependent offline dynamical shape model. We also demonstrate the feasibility of clinical application by applying the method to four 4D human data sets. PMID:24292554
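
    The core of any sparse-representation appearance model is coding a patch over a dictionary. The Python sketch below shows one standard way to do this, orthogonal matching pursuit; it is a generic stand-in under assumed conventions (unit-norm atoms, fixed sparsity), not the paper's boosting-based tracker.

        import numpy as np

        def omp(D, x, n_nonzero):
            """Orthogonal matching pursuit: sparse code x over dictionary D
            (columns assumed unit-norm). Minimal illustration only."""
            residual, support = x.copy(), []
            for _ in range(n_nonzero):
                support.append(int(np.argmax(np.abs(D.T @ residual))))
                coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
                residual = x - D[:, support] @ coef
            code = np.zeros(D.shape[1])
            code[support] = coef
            return code

        rng = np.random.default_rng(0)
        D = rng.normal(size=(16, 40)); D /= np.linalg.norm(D, axis=0)
        x = 2.0 * D[:, 5] - 1.0 * D[:, 17]          # a 2-sparse patch appearance
        print(np.nonzero(omp(D, x, 2))[0])          # should recover atoms 5 and 17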

  7. Associations between online friendship and Internet addiction among adolescents and emerging adults.

    PubMed

    Smahel, David; Brown, B Bradford; Blinka, Lukas

    2012-03-01

    The past decades have witnessed a dramatic increase in the number of youths using the Internet, especially for communicating with peers. Online activity can widen and strengthen the social networks of adolescents and emerging adults (Subrahmanyam & Smahel, 2011), but it also increases the risk of Internet addiction. Using a framework derived from Griffiths (2000a), this study examined associations between online friendship and Internet addiction in a representative sample (n = 394) of Czech youths ages 12-26 years (M = 18.58). Three different approaches to friendship were identified: exclusively offline, face-to-face oriented, Internet oriented, on the basis of the relative percentages of online and offline associates in participants' friendship networks. The rate of Internet addiction did not differ by age or gender but was associated with communication styles, hours spent online, and friendship approaches. The study revealed that effects between Internet addiction and approaches to friendship may be reciprocal: Being oriented toward having more online friends, preferring online communication, and spending more time online were related to increased risk of Internet addiction; on the other hand, there is an alternative causal explanation that Internet addiction and preference for online communication conditions young people's tendency to seek friendship from people met online. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  8. Cyberfaking: I can, so I will? Intentions to fake in online psychological testing.

    PubMed

    Grieve, Rachel; Elliott, Jade

    2013-05-01

    The aim of this study was to investigate whether intentions to fake online (cyberfaking) or in pencil-and-paper psychological testing differ. Participants (N=154) completed online questionnaires measuring attitudes toward faking, perceived behavioral control over faking, subjective norms regarding faking, and intentions to fake in future psychological assessment, with online and pencil-and-paper test administration scenarios compared. Participants showed similar intentions toward cyberfaking and faking in pencil-and-paper testing. However, participants held more positive attitudes toward cyberfaking than faking offline, greater perceived behavioral control over cyberfaking than offline faking, and more favorable subjective norms toward cyberfaking compared to offline faking. Analysis via multiple regression revealed that more positive attitudes toward cyberfaking, greater perceived behavioral control over cyberfaking, and more favorable subjective norms regarding cyberfaking were significantly related to the intention to cyberfake. In addition, more positive attitudes toward faking offline and greater perceived behavioral control over faking offline were significantly related to the intention to fake in offline tests. Overall, results indicated a similar pattern of relationship in the prediction of intentions to engage in faking regardless of the test administration modality scenario. Subjective norm, however, was not a significant predictor for faking offline. Future research could aim to include a behavioral faking outcome measure, as well as examine intentions to cyberfake in specific scenarios (for example, faking good or faking bad).

  9. NASA Data Acquisition System Software Development for Rocket Propulsion Test Facilities

    NASA Technical Reports Server (NTRS)

    Herbert, Phillip W., Sr.; Elliot, Alex C.; Graves, Andrew R.

    2015-01-01

    Current NASA propulsion test facilities include Stennis Space Center in Mississippi, Marshall Space Flight Center in Alabama, Plum Brook Station in Ohio, and White Sands Test Facility in New Mexico. Within and across these centers, a diverse set of data acquisition systems exist with different hardware and software platforms. The NASA Data Acquisition System (NDAS) is a software suite designed to operate and control many critical aspects of rocket engine testing. The software suite combines real-time data visualization, data recording to a variety of formats, short-term and long-term acquisition system calibration capabilities, test stand configuration control, and a variety of data post-processing capabilities. Additionally, data stream conversion functions exist to translate test facility data streams to and from downstream systems, including engine customer systems. The primary design goals for NDAS are flexibility, extensibility, and modularity. Providing a common user interface for a variety of hardware platforms helps drive consistency and error reduction during testing. In addition, with an understanding that test facilities have different requirements and setups, the software is designed to be modular. One engine program may require real-time displays and data recording; others may require more complex data stream conversion, measurement filtering, or test stand configuration management. The NDAS suite allows test facilities to choose which components to use based on their specific needs. The NDAS code is primarily written in LabVIEW, a graphical, data-flow driven language. Although LabVIEW is a general-purpose programming language, large-scale software development in it is relatively rare compared to more commonly used languages. The NDAS software suite also makes extensive use of a new, advanced development framework called the Actor Framework. The Actor Framework provides a level of code reuse and extensibility that has previously been difficult to achieve using LabVIEW.

  10. JRTF: A Flexible Software Framework for Real-Time Control in Magnetic Confinement Nuclear Fusion Experiments

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Zheng, G. Z.; Zheng, W.; Chen, Z.; Yuan, T.; Yang, C.

    2016-04-01

    Magnetic confinement nuclear fusion experiments require various real-time control applications, such as plasma control. ITER has designed the Fast Plant System Controller (FPSC) for this task, and has provided hardware and software standards and guidelines for building an FPSC. In order to develop various real-time FPSC applications efficiently, a flexible real-time software framework called the J-TEXT real-time framework (JRTF) has been developed by the J-TEXT tokamak team. JRTF allows developers to implement different functions as independent and reusable modules called Application Blocks (ABs). AB developers only need to focus on implementing the control tasks or the algorithms; the timing, scheduling, data sharing and eventing are handled by the JRTF pipelines. JRTF provides great flexibility in developing ABs: unit tests against ABs can be developed easily, and ABs can even be used in non-JRTF applications. JRTF also provides interfaces allowing JRTF applications to be configured and monitored at runtime. JRTF is compatible with ITER standard FPSC hardware and the ITER CODAC (Control, Data Access and Communication) Core software. It can be configured and monitored using EPICS (Experimental Physics and Industrial Control System). Moreover, JRTF can be ported to different platforms and be integrated with supervisory control software other than EPICS. The paper presents the design and implementation of JRTF as well as brief test results.
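
    The Application Block idea, independent modules that the framework schedules and wires together through shared data, can be pictured with a short Python sketch. Everything here (class names, the shared dictionary, the fixed-rate loop, the control law) is an invented illustration of the pattern, not the JRTF API.

        class ApplicationBlock:
            """An AB implements one control task; scheduling and data sharing
            would be the framework's job in the real system."""
            def run(self, shared):
                raise NotImplementedError

        class AcquireDensity(ApplicationBlock):
            def run(self, shared):
                shared["n_e"] = 3.1e19      # stand-in for a real-time measurement

        class DensityFeedback(ApplicationBlock):
            def __init__(self, target):
                self.target = target
            def run(self, shared):
                shared["gas_valve"] = 0.01 * (self.target - shared["n_e"]) / 1e19

        pipeline = [AcquireDensity(), DensityFeedback(target=3.5e19)]
        shared = {}
        for cycle in range(3):              # the framework would run this at a fixed rate
            for ab in pipeline:
                ab.run(shared)
        print(shared)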

  11. Tools and Approaches for the Construction of Knowledge Models from the Neuroscientific Literature

    PubMed Central

    Burns, Gully A. P. C.; Khan, Arshad M.; Ghandeharizadeh, Shahram; O’Neill, Mark A.; Chen, Yi-Shin

    2015-01-01

    Within this paper, we describe a neuroinformatics project (called “NeuroScholar,” http://www.neuroscholar.org/) that enables researchers to examine, manage, manipulate, and use the information contained within the published neuroscientific literature. The project is built within a multi-level, multi-component framework constructed with the use of software engineering methods that themselves provide code-building functionality for neuroinformaticians. We describe the different software layers of the system. First, we present a hypothetical usage scenario illustrating how NeuroScholar permits users to address large-scale questions in a way that would otherwise be impossible. We do this by applying NeuroScholar to a “real-world” neuroscience question: How is stress-related information processed in the brain? We then explain how the overall design of NeuroScholar enables the system to work and illustrate different components of the user interface. We then describe the knowledge management strategy we use to store interpretations. Finally, we describe the software engineering framework we have devised (called the “View-Primitive-Data Model framework,” [VPDMf]) to provide an open-source, accelerated software development environment for the project. We believe that NeuroScholar will be useful to experimental neuroscientists by helping them interact with the primary neuroscientific literature in a meaningful way, and to neuroinformaticians by providing them with useful, affordable software engineering tools. PMID:15055395

  12. Involuntary eye motion correction in retinal optical coherence tomography: Hardware or software solution?

    PubMed

    Baghaie, Ahmadreza; Yu, Zeyun; D'Souza, Roshan M

    2017-04-01

    In this paper, we review state-of-the-art techniques to correct eye motion artifacts in Optical Coherence Tomography (OCT) imaging. The methods for eye motion artifact reduction can be categorized into two major classes: (1) hardware-based techniques and (2) software-based techniques. In the first class, additional hardware is mounted onto the OCT scanner to gather information about the eye motion patterns during OCT data acquisition. This information is later processed and applied to the OCT data for creating an anatomically correct representation of the retina, either in an offline or online manner. In software based techniques, the motion patterns are approximated either by comparing the acquired data to a reference image, or by considering some prior assumptions about the nature of the eye motion. Careful investigations done on the most common methods in the field provides invaluable insight regarding future directions of the research in this area. The challenge in hardware-based techniques lies in the implementation aspects of particular devices. However, the results of these techniques are superior to those obtained from software-based techniques because they are capable of capturing secondary data related to eye motion during OCT acquisition. Software-based techniques on the other hand, achieve moderate success and their performance is highly dependent on the quality of the OCT data in terms of the amount of motion artifacts contained in them. However, they are still relevant to the field since they are the sole class of techniques with the ability to be applied to legacy data acquired using systems that do not have extra hardware to track eye motion. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Verification of Java Programs using Symbolic Execution and Invariant Generation

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina; Visser, Willem

    2004-01-01

    Software verification is recognized as an important and difficult problem. We present a novel framework, based on symbolic execution, for the automated verification of software. The framework uses annotations in the form of method specifications and loop invariants. We present a novel iterative technique that uses invariant strengthening and approximation for discovering these loop invariants automatically. The technique handles different types of data (e.g. boolean and numeric constraints, dynamically allocated structures and arrays) and it allows for checking universally quantified formulas. Our framework is built on top of the Java PathFinder model checking toolset and it was used for the verification of several non-trivial Java programs.

  14. A Unified Algebraic and Logic-Based Framework Towards Safe Routing Implementations

    DTIC Science & Technology

    2015-08-13

    Software-defined Networks (SDN). We developed a declarative platform for implementing SDN protocols using declarative networking, and used it for building and debugging several SDN applications. Example-based SDN synthesis: the recent emergence of software-defined networks offers an opportunity to design...

  15. Framework for multi-resolution analyses of advanced traffic management strategies [summary].

    DOT National Transportation Integrated Search

    2017-01-01

    Transportation planning relies extensively on software that can simulate and predict travel behavior in response to alternative transportation networks. However, different software packages view traffic at different scales. Some programs are based on...

  16. A pluggable framework for parallel pairwise sequence search.

    PubMed

    Archuleta, Jeremy; Feng, Wu-chun; Tilevich, Eli

    2007-01-01

    The current and near future of the computing industry is one of multi-core and multi-processor technology. Most existing sequence-search tools have been designed with a focus on single-core, single-processor systems. This discrepancy between software design and hardware architecture substantially hinders sequence-search performance by not allowing full utilization of the hardware. This paper presents a novel framework that will aid the conversion of serial sequence-search tools into a parallel version that can take full advantage of the available hardware. The framework, which is based on a software architecture called mixin layers with refined roles, enables modules to be plugged into the framework with minimal effort. The inherent modular design improves maintenance and extensibility, thus opening up a plethora of opportunities for advanced algorithmic features to be developed and incorporated while routine maintenance of the codebase persists.
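
    The mixin-layers idea, a layer that refines a role of the class beneath it while reusing its behavior, can be sketched in Python even though the paper's framework targets C++-style composition. The scoring function, the thread pool, and the four-way chunking below are invented for illustration.

        class SequenceSearch:
            """Base role: score one query against each subject (toy scoring)."""
            def search(self, query, subjects):
                return [(s, sum(a == b for a, b in zip(query, s))) for s in subjects]

        class ParallelLayer(SequenceSearch):
            """Mixin layer refining the search role with data parallelism."""
            def search(self, query, subjects):
                from multiprocessing.dummy import Pool   # thread pool for the sketch
                with Pool(4) as pool:
                    chunks = [subjects[i::4] for i in range(4)]
                    parts = pool.map(
                        lambda c: super(ParallelLayer, self).search(query, c), chunks)
                return [hit for part in parts for hit in part]

        searcher = ParallelLayer()
        print(searcher.search("GATTACA", ["GATTACA", "GATCACA", "TTTTTTT"]))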

  17. A GIS-based generic real-time risk assessment framework and decision tools for chemical spills in the river basin.

    PubMed

    Jiang, Jiping; Wang, Peng; Lung, Wu-seng; Guo, Liang; Li, Mei

    2012-08-15

    This paper presents a generic framework and decision tools for real-time risk assessment in an Emergency Environmental Decision Support System for response to chemical spills in a river basin. The generic "4-step-3-model" framework is able to delineate the warning area and the impact on vulnerable receptors, considering four types of hazards referring to functional area, societal impact, human health, and the ecological system. Decision tools, including a stand-alone system and software components, were implemented on a GIS platform. A detailed case study of the Songhua River nitrobenzene spill illustrated the effectiveness of the framework and tools. Spill first responders and decision makers in catchment management will benefit from the rich, visual, and dynamic hazard information output by the software. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. GiPSi: a framework for open source/open architecture software development for organ-level surgical simulation.

    PubMed

    Cavuşoğlu, M Cenk; Göktekin, Tolga G; Tendick, Frank

    2006-04-01

    This paper presents the architectural details of an evolving open source/open architecture software framework for developing organ-level surgical simulations. Our goal is to facilitate shared development of reusable models, to accommodate heterogeneous models of computation, and to provide a framework for interfacing multiple heterogeneous models. The framework provides an application programming interface for interfacing dynamic models defined over spatial domains. It is specifically designed to be independent of the specifics of the modeling methods used, and therefore facilitates seamless integration of heterogeneous models and processes. Furthermore, each model has separate geometries for visualization, simulation, and interfacing, allowing the model developer to choose the most natural geometric representation for each case. Input/output interfaces for visualization and haptics for real-time interactive applications have also been provided.

  19. Supporting metabolomics with adaptable software: design architectures for the end-user.

    PubMed

    Sarpe, Vladimir; Schriemer, David C

    2017-02-01

    Large and disparate sets of LC-MS data are generated by modern metabolomics profiling initiatives, and while useful software tools are available to annotate and quantify compounds, the field requires continued software development in order to sustain methodological innovation. Advances in software development practices allow for a new paradigm in tool development for metabolomics, where increasingly the end-user can develop or redeploy utilities ranging from simple algorithms to complex workflows. Resources that provide an organized framework for development are described and illustrated with LC-MS processing packages that have leveraged their design tools. Full access to these resources depends in part on coding experience, but the emergence of workflow builders and pluggable frameworks strongly reduces the skill level required. Developers in the metabolomics community are encouraged to use these resources and design content for uptake and reuse. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Orthographic Software Modelling: A Novel Approach to View-Based Software Engineering

    NASA Astrophysics Data System (ADS)

    Atkinson, Colin

    The need to support multiple views of complex software architectures, each capturing a different aspect of the system under development, has been recognized for a long time. Even the very first object-oriented analysis/design methods such as the Booch method and OMT supported a number of different diagram types (e.g. structural, behavioral, operational), and subsequent methods such as Fusion, Kruchten's 4+1 views and the Rational Unified Process (RUP) have added many more views over time. Today's leading modeling languages, such as UML and SysML, are also oriented towards supporting different views (i.e. diagram types), each able to portray a different facet of a system's architecture. More recently, so-called enterprise architecture frameworks such as the Zachman Framework, TOGAF and RM-ODP have become popular. These add a whole set of new non-functional views to the views typically emphasized in traditional software engineering environments.

  1. A general framework for parametric survival analysis.

    PubMed

    Crowther, Michael J; Lambert, Paul C

    2014-12-30

    Parametric survival models are being increasingly used as an alternative to the Cox model in biomedical research. Through direct modelling of the baseline hazard function, we can gain greater understanding of the risk profile of patients over time, obtaining absolute measures of risk. Commonly used parametric survival models, such as the Weibull, make restrictive assumptions about the baseline hazard function, such as monotonicity, which are often violated in clinical datasets. In this article, we extend the general framework of parametric survival models proposed by Crowther and Lambert (Journal of Statistical Software 53:12, 2013) to incorporate relative survival, and robust and cluster robust standard errors. We describe the general framework through three applications to clinical datasets, in particular illustrating the use of restricted cubic splines, modelled on the log hazard scale, to provide a highly flexible survival modelling framework. Through the use of restricted cubic splines, we can derive the cumulative hazard function analytically beyond the boundary knots, resulting in a combined analytic/numerical approach, which substantially improves the estimation process compared with only using numerical integration. User-friendly Stata software is provided, which significantly extends parametric survival models available in standard software. Copyright © 2014 John Wiley & Sons, Ltd.
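
    The quantity being modelled can be sketched numerically: with hazard h(t) and cumulative hazard H(t) (the integral of h from 0 to t), survival is S(t) = exp(-H(t)). A minimal Python illustration of ours (not the authors' Stata implementation; the polynomial log-hazard is a stand-in for their restricted cubic spline basis):

      import numpy as np
      from scipy.integrate import quad

      def log_hazard(t, beta=(-1.0, 0.5, -0.05)):
          # Smooth function on the log-hazard scale (spline stand-in).
          return beta[0] + beta[1] * t + beta[2] * t ** 2

      def hazard(t):
          return np.exp(log_hazard(t))

      def survival(t):
          H, _ = quad(hazard, 0.0, t)   # cumulative hazard by numerical integration
          return np.exp(-H)

      for t in (0.5, 1.0, 2.0, 5.0):
          print(f"S({t}) = {survival(t):.4f}")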

  2. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  3. A Framework for Performing V&V within Reuse-Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1996-01-01

    Verification and validation (V&V) is performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. Early discovery is important in order to minimize the cost and other impacts of correcting these errors. In order to provide early detection of errors, V&V is conducted in parallel with system development, often beginning with the concept phase. In reuse-based software engineering, however, decisions on the requirements, design and even implementation of domain assets can be made prior to beginning development of a specific system. In this case, V&V must be performed during domain engineering in order to have an impact on system development. This paper describes a framework for performing V&V within architecture-centric, reuse-based software engineering. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.

  4. Online written consultation, telephone consultation and offline appointment: An examination of the channel effect in online health communities.

    PubMed

    Wu, Hong; Lu, Naiji

    2017-11-01

    The emergence of online health communities broadens and diversifies channels for patient-doctor interaction. Given limited medical resources, online health communities aim to provide better treatment by decreasing medical costs, making full use of available resources and providing more diverse channels for patients. This research examines how online channel usage affects offline channels, i.e., "Online Booking, Service in Hospitals" (OBSH), and how the channel effects change with doctors' online and offline reputation. The study uses data on 4254 doctors from a Chinese online health community. Our findings demonstrate a strong relationship between online health communities and offline hospital communication, with an important moderating role for reputation. First, there are significant channel effects, wherein written consultation complements OBSH (β = 3.320, p < 0.10), but telephone consultation can be a ready substitute for OBSH (β = -9.854, p < 0.001). Second, doctors with higher online and offline reputations attract more patients to the OBSH channel (β_online = 0.433, p < 0.001; β_offline = 2.318 and 2.123, p < 0.001). Third, channel effects vary with doctors' online and offline reputations: doctors with higher online reputations mitigate the substitution effect between telephone consultation and OBSH (β = 0.064, p < 0.01), and doctors with higher offline reputations mitigate the complementary effect between written consultation and OBSH (β = -1.586 and -1.417, p < 0.001). This study contributes to both knowledge and practice: it shows that there is a channel effect in health care, and website managers can encourage physicians to provide online services, especially those physicians who do not yet have enough patients. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Framework for End-User Programming of Cross-Smart Space Applications

    PubMed Central

    Palviainen, Marko; Kuusijärvi, Jarkko; Ovaska, Eila

    2012-01-01

    Cross-smart space applications are specific types of software services that enable users to share information, monitor the physical and logical surroundings and control it in a way that is meaningful for the user's situation. For developing cross-smart space applications, this paper makes two main contributions: it introduces (i) a component design and scripting method for end-user programming of cross-smart space applications and (ii) a backend framework of components that interwork to support the brunt of the RDFScript translation, and the use and execution of ontology models. Before end-user programming activities, the software professionals must develop easy-to-apply Driver components for the APIs of existing software systems. Thereafter, end-users are able to create applications from the commands of the Driver components with the help of the provided toolset. The paper also introduces the reference implementation of the framework, tools for the Driver component development and end-user programming of cross-smart space applications and the first evaluation results on their application. PMID:23202169

  6. EarthCube's Assessment Framework: Ensuring Return on Investment

    NASA Astrophysics Data System (ADS)

    Lehnert, K.

    2016-12-01

    EarthCube is a community-governed, NSF-funded initiative to transform geoscience research by developing cyberinfrastructure that improves access, sharing, visualization, and analysis of all forms of geosciences data and related resources. EarthCube's goal is to enable geoscientists to tackle the challenges of understanding and predicting the complex and evolving solid Earth, hydrosphere, atmosphere, and space environment systems. EarthCube's infrastructure needs capabilities around data, software, and systems. It is essential for EarthCube to determine the value of new capabilities for the community and the progress of the overall effort, in order to demonstrate its value to the science community and its return on investment for the NSF. EarthCube is therefore developing an assessment framework for research proposals, projects funded by EarthCube, and the overall EarthCube program. As a first step, a software assessment framework has been developed that addresses the EarthCube Strategic Vision by promoting best practices in software development, complete and useful documentation, interoperability, standards adherence, open science, and education and training opportunities for research developers.

  7. Integrating Visualization Applications, such as ParaView, into HEP Software Frameworks for In-situ Event Displays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyon, A. L.; Kowalkowski, J. B.; Jones, C. D.

    ParaView is a high performance visualization application not widely used in High Energy Physics (HEP). It is a long-standing open source project led by Kitware that involves several Department of Energy (DOE) and Department of Defense (DOD) laboratories. Furthermore, it has been adopted by many DOE supercomputing centers and other sites. ParaView is unique in speed and efficiency by using state-of-the-art techniques developed by the academic visualization community that are often not found in applications written by the HEP community. In-situ visualization of events, where event details are visualized during processing/analysis, is a common task for experiment software frameworks. Kitware supplies Catalyst, a library that enables scientific software to serve visualization objects to client ParaView viewers, yielding a real-time event display. Connecting ParaView to the Fermilab art framework is described and the capabilities it brings are discussed.

  8. Comparing online and offline self-disclosure: a systematic review.

    PubMed

    Nguyen, Melanie; Bin, Yu Sun; Campbell, Andrew

    2012-02-01

    Disclosure of personal information is believed to be more frequent in online compared to offline communication. However, this assumption is both theoretically and empirically contested. This systematic review examined existing research comparing online and offline self-disclosure to ascertain the evidence for current theories of online communication. Studies that compared online and offline disclosures in dyadic interactions were included for review. Contrary to expectations, disclosure was not consistently found to be greater in online contexts. Factors such as the relationship between the communicators, the specific mode of communication, and the context of the interaction appear to moderate the degree of disclosure. In relation to the theories of online communication, there is support for each theory. It is argued that the overlapping predictions of each theory and the current state of empirical research highlight a need for an overarching theory of communication that can account for disclosure in both online and offline interactions.

  9. MYRaf: A new Approach with IRAF for Astronomical Photometric Reduction

    NASA Astrophysics Data System (ADS)

    Kilic, Y.; Shameoni Niaei, M.; Özeren, F. F.; Yesilyaprak, C.

    2016-12-01

    In this study, the design and development of the MYRaf software for astronomical photometric reduction are presented. MYRaf is an easy-to-use, reliable, and fast GUI tool for IRAF aperture photometry. MYRaf is an important step in the automated software process of robotic telescopes; it uses IRAF, PyRAF, matplotlib, ginga, alipy, and SExtractor, and is written in the general-purpose, high-level programming language Python using the Qt framework.

  10. Development of a Next Generation Concurrent Framework for the ATLAS Experiment

    NASA Astrophysics Data System (ADS)

    Calafiura, P.; Lampl, W.; Leggett, C.; Malon, D.; Stewart, G.; Wynne, B.

    2015-12-01

    The ATLAS experiment has successfully used its Gaudi/Athena software framework for data taking and analysis during the first LHC run, with billions of events successfully processed. However, the design of Gaudi/Athena dates from early 2000, and the software and the physics code have been written using a single-threaded, serial design. This programming model has increasing difficulty in exploiting the potential of current CPUs, which offer their best performance only through taking full advantage of multiple cores and wide vector registers. Future CPU evolution will intensify this trend, with core counts increasing and memory per core falling. With current memory consumption for 64 bit ATLAS reconstruction in a high luminosity environment approaching 4GB, it will become impossible to fully occupy all cores in a machine without exhausting available memory. However, since maximizing performance per watt will be a key metric, a mechanism must be found to use all cores as efficiently as possible. In this paper we report on our progress with a practical demonstration of the use of multithreading in the ATLAS reconstruction software, using the GaudiHive framework. We have expanded support to Calorimeter, Inner Detector, and Tracking code, discussing what changes were necessary, both to the framework and to the tools and algorithms used, in order to allow the serially designed ATLAS code to run. We report both on the performance gains and on the general lessons learned about the code patterns that had been employed in the software and which patterns were identified as particularly problematic for multi-threading. We also present our findings on implementing a hybrid multi-threaded / multi-process framework, to take advantage of the strengths of each type of concurrency, while avoiding some of their corresponding limitations.
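
    The intra-event concurrency being described can be pictured with a toy scheduler. The sketch below is purely illustrative (the algorithm names and dependencies are invented, and the real framework is not Python): each algorithm declares its data dependencies, and every algorithm whose inputs are available runs concurrently on a thread pool.

      import time
      from concurrent.futures import ThreadPoolExecutor

      # name -> (required inputs, produced data product)
      ALGORITHMS = {
          "unpack":   ((), "raw"),
          "calo":     (("raw",), "clusters"),
          "inner":    (("raw",), "hits"),
          "tracking": (("hits",), "tracks"),
          "combine":  (("clusters", "tracks"), "event"),
      }

      def execute(name):
          time.sleep(0.01)              # stand-in for real reconstruction work
          return ALGORITHMS[name][1]

      def run_event():
          products, done = set(), set()
          with ThreadPoolExecutor(max_workers=4) as pool:
              while len(done) < len(ALGORITHMS):
                  ready = [n for n, (ins, _) in ALGORITHMS.items()
                           if n not in done and all(i in products for i in ins)]
                  # "calo" and "inner" run in parallel once "raw" exists.
                  for name, product in zip(ready, pool.map(execute, ready)):
                      done.add(name)
                      products.add(product)
          return done

      print(sorted(run_event()))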

  11. Analysis of key technologies for virtual instruments metrology

    NASA Astrophysics Data System (ADS)

    Liu, Guixiong; Xu, Qingui; Gao, Furong; Guan, Qiuju; Fang, Qiang

    2008-12-01

    Virtual instruments (VIs) require metrological verification when applied as measuring instruments. Owing to their software-centered architecture, metrological evaluation of VIs includes two aspects: measurement functions and software characteristics. The complexity of software imposes difficulties on metrological testing of VIs. Key approaches and technologies for metrological evaluation of virtual instruments are investigated and analyzed in this paper. The principal issue is the evaluation of measurement uncertainty. The nature and regularity of measurement uncertainty caused by software and algorithms can be evaluated by modeling, simulation, analysis, testing and statistics with the support of the powerful computing capability of a PC. Another concern is the evaluation of software features such as the correctness, reliability, stability, security and real-time behavior of VIs. Technologies from the software engineering, software testing and computer security domains can be used for these purposes. For example, a variety of black-box testing, white-box testing and modeling approaches can be used to evaluate the reliability of modules, components, applications and the whole VI software. The security of a VI can be assessed by methods like vulnerability scanning and penetration analysis. In order to enable metrology institutions to perform metrological verification of VIs efficiently, an automatic metrological tool for the above validation is essential. Based on technologies of numerical simulation, software testing and system benchmarking, a framework for such an automatic tool is proposed in this paper. Investigation of the implementation of existing automatic tools that perform calculation of measurement uncertainty, software testing and security assessment demonstrates the feasibility of the proposed framework.
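
    The simulation-based evaluation of uncertainty mentioned above can be sketched with a generic Monte Carlo loop (ours; the processing routine and noise level are invented): feed the VI's algorithm many noisy replicas of a known signal and read the spread of the outputs.

      import numpy as np

      rng = np.random.default_rng(1)

      def vi_algorithm(samples):
          # Stand-in for the instrument software: RMS of the sampled waveform.
          return np.sqrt(np.mean(samples ** 2))

      true_signal = np.sin(np.linspace(0, 2 * np.pi, 1000))
      results = [vi_algorithm(true_signal + rng.normal(0, 0.01, true_signal.size))
                 for _ in range(10_000)]
      print(f"mean = {np.mean(results):.5f}, spread = {np.std(results):.2e}")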

  12. [Software as medical devices/medical apps : Tasks, requirements, and experiences from the point of view of a competent authority].

    PubMed

    Terhechte, Arno

    2018-03-01

    Software can be classified as a medical device according to the Medical Device Directive 93/42/EEC. The number of software products and medical apps is continuously increasing, and so too is their use in health institutions (e.g., in hospitals and doctors' surgeries) for diagnosis and therapy. Different aspects of standalone software and medical apps from the perspective of the responsible authority are presented. The quality system implemented to establish a risk-based systematic inspection and supervision of manufacturers is discussed. The legal framework, as well as additional standards that are the basis for inspection, are outlined. The article highlights special aspects that occur during inspection, like the verification of software and interfaces, and the clinical evaluation of software. The Bezirksregierung, as the local government authority responsible in North Rhine-Westphalia, is also in charge of the inspection of health institutions. Therefore this article is not limited to the manufacturers placing the software on the market; in addition, it describes the management and use of software as a medical device in hospitals. The future legal framework, the Medical Device Regulation, will strengthen the requirements and engage notified bodies more than today in the conformity assessment of software as a medical device. Manufacturers, health institutions, notified bodies and the responsible authorities are in charge of intensifying their efforts towards software as a medical device. Mutual information, improvement of skills, and inspections will lead to compliance with regulatory requirements.

  13. Use of Annotations for Component and Framework Interoperability

    NASA Astrophysics Data System (ADS)

    David, O.; Lloyd, W.; Carlson, J.; Leavesley, G. H.; Geter, F.

    2009-12-01

    The popular programming languages Java and C# provide annotations, a form of meta-data construct. Software frameworks for web integration, web services, database access, and unit testing now take advantage of annotations to reduce the complexity of APIs and the quantity of integration code between the application and framework infrastructure. Adopting annotation features in frameworks has been observed to lead to cleaner and leaner application code. The USDA Object Modeling System (OMS) version 3.0 fully embraces the annotation approach and additionally defines a meta-data standard for components and models. In version 3.0 framework/model integration previously accomplished using API calls is now achieved using descriptive annotations. This enables the framework to provide additional functionality non-invasively such as implicit multithreading, and auto-documenting capabilities while achieving a significant reduction in the size of the model source code. Using a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework. Since models and modeling components are not directly bound to framework by the use of specific APIs and/or data types they can more easily be reused both within the framework as well as outside of it. To study the effectiveness of an annotation based framework approach with other modeling frameworks, a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A monthly water balance model was implemented across several modeling frameworks and several software metrics were collected. The metrics selected were measures of non-invasive design methods for modeling frameworks from a software engineering perspective. It appears that the use of annotations positively impacts several software quality measures. In a next step, the PRMS model was implemented in OMS 3.0 and is currently being implemented for water supply forecasting in the western United States at the USDA NRCS National Water and Climate Center. PRMS is a component based modular precipitation-runoff model developed to evaluate the impacts of various combinations of precipitation, climate, and land use on streamflow and general basin hydrology. The new OMS 3.0 PRMS model source code is more concise and flexible as a result of using the new framework’s annotation based approach. The fully annotated components are now providing information directly for (i) model assembly and building, (ii) dataflow analysis for implicit multithreading, (iii) automated and comprehensive model documentation of component dependencies, physical data properties, (iv) automated model and component testing, and (v) automated audit-traceability to account for all model resources leading to a particular simulation result. Experience to date has demonstrated the multi-purpose value of using annotations. Annotations are also a feasible and practical method to enable interoperability among models and modeling frameworks. As a prototype example, model code annotations were used to generate binding and mediation code to allow the use of OMS 3.0 model components within the OpenMI context.
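
    A loose Python-decorator analogue of the annotation idea (hypothetical; OMS 3.0 itself uses Java annotations): metadata is attached declaratively to the model code, and the framework discovers it by reflection instead of requiring API calls inside the model.

      def role(name, **meta):
          def wrap(fn):
              fn._role, fn._meta = name, meta   # metadata lives on the function
              return fn
          return wrap

      class MonthlyWaterBalance:
          @role("input", unit="mm")
          def precipitation(self):
              ...

          @role("output", unit="mm")
          def runoff(self):
              ...

      # The "framework" discovers annotated components non-invasively:
      for attr in vars(MonthlyWaterBalance).values():
          if hasattr(attr, "_role"):
              print(attr.__name__, attr._role, attr._meta)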

  14. Test Driven Development: Lessons from a Simple Scientific Model

    NASA Astrophysics Data System (ADS)

    Clune, T. L.; Kuo, K.

    2010-12-01

    In the commercial software industry, unit testing frameworks have emerged as a disruptive technology that has permanently altered the process by which software is developed. Unit testing frameworks significantly reduce traditional barriers, both practical and psychological, to creating and executing tests that verify software implementations. A new development paradigm, known as test driven development (TDD), has emerged from unit testing practices, in which low-level tests (i.e. unit tests) are created by developers prior to implementing new pieces of code. Although somewhat counter-intuitive, this approach actually improves developer productivity. In addition to reducing the average time for detecting software defects (bugs), the requirement to provide procedure interfaces that enable testing frequently leads to superior design decisions. Although TDD is widely accepted in many software domains, its applicability to scientific modeling still warrants reasonable skepticism. While the technique is clearly relevant for infrastructure layers of scientific models such as the Earth System Modeling Framework (ESMF), numerical and scientific components pose a number of challenges to TDD that are not often encountered in commercial software. Nonetheless, our experience leads us to believe that the technique has great potential not only for developer productivity, but also as a tool for understanding and documenting the basic scientific assumptions upon which our models are implemented. We will provide a brief introduction to test driven development and then discuss our experience in using TDD to implement a relatively simple numerical model that simulates the growth of snowflakes. Many of the lessons learned are directly applicable to larger scientific models.
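
    A flavor of the test-first cycle in a minimal Python sketch of ours (the growth rule is invented, not the authors' snowflake model): the tests are written first and drive the implementation.

      def grow(mass, supersaturation, dt):
          """Toy deposition step: mass grows in proportion to supersaturation."""
          return mass + 0.1 * supersaturation * dt

      def test_growth_is_monotonic():
          assert grow(1.0, 0.5, 1.0) > 1.0

      def test_no_growth_without_supersaturation():
          assert grow(1.0, 0.0, 1.0) == 1.0

      test_growth_is_monotonic()
      test_no_growth_without_supersaturation()
      print("all tests pass")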

  15. A Software Framework for Remote Patient Monitoring by Using Multi-Agent Systems Support.

    PubMed

    Fernandes, Chrystinne Oliveira; Lucena, Carlos José Pereira De

    2017-03-27

    Although there have been significant advances in network, hardware, and software technologies, the health care environment has not taken advantage of these developments to solve many of its inherent problems. Research activities in these 3 areas make it possible to apply advanced technologies to address many of these issues such as real-time monitoring of a large number of patients, particularly where a timely response is critical. The objective of this research was to design and develop innovative technological solutions to offer a more proactive and reliable medical care environment. The short-term and primary goal was to construct IoT4Health, a flexible software framework to generate a range of Internet of things (IoT) applications, containing components such as multi-agent systems that are designed to perform Remote Patient Monitoring (RPM) activities autonomously. An investigation into its full potential to conduct such patient monitoring activities in a more proactive way is an expected future step. A framework methodology was selected to evaluate whether the RPM domain had the potential to generate customized applications that could achieve the stated goal of being responsive and flexible within the RPM domain. As a proof of concept of the software framework's flexibility, 3 applications were developed with different implementations for each framework hot spot to demonstrate potential. Agents4Health was selected to illustrate the instantiation process and IoT4Health's operation. To develop more concrete indicators of the responsiveness of the simulated care environment, an experiment was conducted while Agents4Health was operating, to measure the number of delays incurred in monitoring the tasks performed by agents. IoT4Health's construction can be highlighted as our contribution to the development of eHealth solutions. As a software framework, IoT4Health offers extensibility points for the generation of applications. Applications can extend the framework in the following ways: identification, collection, storage, recovery, visualization, monitoring, anomalies detection, resource notification, and dynamic reconfiguration. Based on other outcomes involving observation of the resulting applications, it was noted that its design contributed toward more proactive patient monitoring. Through these experimental systems, anomalies were detected in real time, with agents sending notifications instantly to the health providers. We conclude that the cost-benefit of the construction of a more generic and complex system instead of a custom-made software system demonstrated the worth of the approach, making it possible to generate applications in this domain in a more timely fashion. ©Chrystinne Oliveira Fernandes, Carlos José Pereira De Lucena. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 27.03.2017.

  16. LCG Persistency Framework (CORAL, COOL, POOL): Status and Outlook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valassi, A.; Clemencic, M.

    2012-04-19

    The Persistency Framework consists of three software packages (CORAL, COOL and POOL) addressing the data access requirements of the LHC experiments in different areas. It is the result of the collaboration between the CERN IT Department and the three experiments (ATLAS, CMS and LHCb) that use this software to access their data. POOL is a hybrid technology store for C++ objects, metadata catalogs and collections. CORAL is a relational database abstraction layer with an SQL-free API. COOL provides specific software tools and components for the handling of conditions data. This paper reports on the status and outlook of the project and reviews in detail the usage of each package in the three experiments.

  17. Costs and Efficiency of Online and Offline Recruitment Methods: A Web-Based Cohort Study

    PubMed Central

    Riis, Anders H; Hatch, Elizabeth E; Wise, Lauren A; Nielsen, Marie G; Rothman, Kenneth J; Toft Sørensen, Henrik; Mikkelsen, Ellen M

    2017-01-01

    Background The Internet is widely used to conduct research studies on health issues. Many different methods are used to recruit participants for such studies, but little is known about how various recruitment methods compare in terms of efficiency and costs. Objective The aim of our study was to compare online and offline recruitment methods for Internet-based studies in terms of efficiency (number of recruited participants) and costs per participant. Methods We employed several online and offline recruitment methods to enroll 18- to 45-year-old women in an Internet-based Danish prospective cohort study on fertility. Offline methods included press releases, posters, and flyers. Online methods comprised advertisements placed on five different websites, including Facebook and Netdoktor.dk. We defined seven categories of mutually exclusive recruitment methods and used electronic tracking via unique Uniform Resource Locator (URL) and self-reported data to identify the recruitment method for each participant. For each method, we calculated the average cost per participant and efficiency, that is, the total number of recruited participants. Results We recruited 8252 study participants. Of these, 534 were excluded as they could not be assigned to a specific recruitment method. The final study population included 7724 participants, of whom 803 (10.4%) were recruited by offline methods, 3985 (51.6%) by online methods, 2382 (30.8%) by online methods not initiated by us, and 554 (7.2%) by other methods. Overall, the average cost per participant was €6.22 for online methods initiated by us versus €9.06 for offline methods. Costs per participant ranged from €2.74 to €105.53 for online methods and from €0 to €67.50 for offline methods. Lowest average costs per participant were for those recruited from Netdoktor.dk (€2.99) and from Facebook (€3.44). Conclusions In our Internet-based cohort study, online recruitment methods were superior to offline methods in terms of efficiency (total number of participants enrolled). The average cost per recruited participant was also lower for online than for offline methods, although costs varied greatly among both online and offline recruitment methods. We observed a decrease in the efficiency of some online recruitment methods over time, suggesting that it may be optimal to adopt multiple online methods. PMID:28249833

  18. A Software Framework for Remote Patient Monitoring by Using Multi-Agent Systems Support

    PubMed Central

    2017-01-01

    Background Although there have been significant advances in network, hardware, and software technologies, the health care environment has not taken advantage of these developments to solve many of its inherent problems. Research activities in these 3 areas make it possible to apply advanced technologies to address many of these issues such as real-time monitoring of a large number of patients, particularly where a timely response is critical. Objective The objective of this research was to design and develop innovative technological solutions to offer a more proactive and reliable medical care environment. The short-term and primary goal was to construct IoT4Health, a flexible software framework to generate a range of Internet of things (IoT) applications, containing components such as multi-agent systems that are designed to perform Remote Patient Monitoring (RPM) activities autonomously. An investigation into its full potential to conduct such patient monitoring activities in a more proactive way is an expected future step. Methods A framework methodology was selected to evaluate whether the RPM domain had the potential to generate customized applications that could achieve the stated goal of being responsive and flexible within the RPM domain. As a proof of concept of the software framework’s flexibility, 3 applications were developed with different implementations for each framework hot spot to demonstrate potential. Agents4Health was selected to illustrate the instantiation process and IoT4Health’s operation. To develop more concrete indicators of the responsiveness of the simulated care environment, an experiment was conducted while Agents4Health was operating, to measure the number of delays incurred in monitoring the tasks performed by agents. Results IoT4Health’s construction can be highlighted as our contribution to the development of eHealth solutions. As a software framework, IoT4Health offers extensibility points for the generation of applications. Applications can extend the framework in the following ways: identification, collection, storage, recovery, visualization, monitoring, anomalies detection, resource notification, and dynamic reconfiguration. Based on other outcomes involving observation of the resulting applications, it was noted that its design contributed toward more proactive patient monitoring. Through these experimental systems, anomalies were detected in real time, with agents sending notifications instantly to the health providers. Conclusions We conclude that the cost-benefit of the construction of a more generic and complex system instead of a custom-made software system demonstrated the worth of the approach, making it possible to generate applications in this domain in a more timely fashion. PMID:28347973

  19. Lessons Learned From Developing A Streaming Data Framework for Scientific Analysis

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Allan, Mark; Curry, Charles

    2003-01-01

    We describe the development and usage of a streaming data analysis software framework. The framework is used for three different applications: Earth science hyper-spectral imaging analysis, Electromyograph pattern detection, and Electroencephalogram state determination. In each application the framework was used to answer a series of science questions which evolved with each subsequent answer. This evolution is summarized in the form of lessons learned.

  20. A Framework for the Evaluation of CASE Tool Learnability in Educational Environments

    ERIC Educational Resources Information Center

    Senapathi, Mali

    2005-01-01

    The aim of the research is to derive a framework for the evaluation of Computer Aided Software Engineering (CASE) tool learnability in educational environments. Drawing from the literature of Human Computer Interaction and educational research, a framework for evaluating CASE tool learnability in educational environments is derived. The two main…

  1. Improving component interoperability and reusability with the java connection framework (JCF): overview and application to the ages-w environmental model

    USDA-ARS?s Scientific Manuscript database

    Environmental modeling framework (EMF) design goals are multi-dimensional and often include many aspects of general software framework development. Many functional capabilities offered by current EMFs are closely related to interoperability and reuse aspects. For example, an EMF needs to support dev...

  2. Carpet: Adaptive Mesh Refinement for the Cactus Framework

    NASA Astrophysics Data System (ADS)

    Schnetter, Erik; Hawley, Scott; Hawke, Ian

    2016-11-01

    Carpet is an adaptive mesh refinement and multi-patch driver for the Cactus Framework (ascl:1102.013). Cactus is a software framework for solving time-dependent partial differential equations on block-structured grids, and Carpet acts as a driver layer, providing adaptive mesh refinement, multi-patch capability, as well as parallelization and efficient I/O.

  3. Online to offline teaching model in optics education: resource sharing course and flipped class

    NASA Astrophysics Data System (ADS)

    Li, Xiaotong; Cen, Zhaofeng; Liu, Xiangdong; Zheng, Zhenrong

    2016-09-01

    Since the platform Coursera was created by the Stanford University professors Andrew Ng and Daphne Koller, more and more universities have joined it. From the very beginning, online education has not been only about education itself, but has also been connected with social equality. This is especially significant for the economic transformation in China. In this paper the research and practice on the informatization of optical education are described. Online to offline (O2O) education activities, such as online learning and offline meetings, online homework and online to offline discussion, online tests and online to offline evaluation, are combined into our teaching model in the course of Applied Optics. These various O2O strategies were implemented respectively in the autumn-winter small class and the spring-summer middle-sized class according to constructivism and the idea of open education. We have developed optical education resources such as videos of lectures, light transmission and ray-trace animations, online tests, etc. We also divide the learning procedure into 4 steps: First, instead of being given a course offline, students learn the course online; Second, once every week or two weeks, students have a discussion in their study groups; Third, students submit their homework and study reports; Fourth, they take online and offline tests. The online optical education resources have been shared with some universities in China, bringing with them new challenges for teachers and students facing the coming e-learning revolution.

  4. SpecPad: device-independent NMR data visualization and processing based on the novel DART programming language and Html5 Web technology.

    PubMed

    Guigas, Bruno

    2017-09-01

    SpecPad is a new device-independent software program for the visualization and processing of one-dimensional and two-dimensional nuclear magnetic resonance (NMR) time domain (FID) and frequency domain (spectrum) data. It is the result of a project to investigate whether the novel programming language DART, in combination with Html5 Web technology, forms a suitable base to write an NMR data evaluation software which runs on modern computing devices such as Android, iOS, and Windows tablets as well as on Windows, Linux, and Mac OS X desktop PCs and notebooks. Another topic of interest is whether this technique also effectively supports the required sophisticated graphical and computational algorithms. SpecPad is device-independent because DART's compiled executable code is JavaScript and can, therefore, be run by the browsers of PCs and tablets. Because of Html5 browser cache technology, SpecPad may be operated off-line. Network access is only required during data import or export, e.g. via a Cloud service, or for software updates. A professional and easy to use graphical user interface consistent across all hardware platforms supports touch screen features on mobile devices for zooming and panning and for NMR-related interactive operations such as phasing, integration, peak picking, or atom assignment. Copyright © 2017 John Wiley & Sons, Ltd.

  5. An open-source, FireWire camera-based, Labview-controlled image acquisition system for automated, dynamic pupillometry and blink detection.

    PubMed

    de Souza, John Kennedy Schettino; Pinto, Marcos Antonio da Silva; Vieira, Pedro Gabrielle; Baron, Jerome; Tierra-Criollo, Carlos Julio

    2013-12-01

    The dynamic, accurate measurement of pupil size is extremely valuable for studying a large number of neuronal functions and dysfunctions. Despite tremendous and well-documented progress in image processing techniques for estimating pupil parameters, comparatively little work has been reported on the practical hardware issues involved in designing image acquisition systems for pupil analysis. Here, we describe and validate the basic features of such a system, which is based on a relatively compact, off-the-shelf, low-cost FireWire digital camera. We successfully implemented two configurable modes of video recording: a continuous mode and an event-triggered mode. The interoperability of the whole system is guaranteed by a set of modular software components hosted on a personal computer and written in LabVIEW. An offline analysis suite of image processing algorithms for automatically estimating pupillary and eyelid parameters was assessed using data obtained in human subjects. Our benchmark results show that such measurements can be done in a temporally precise way at a sampling frequency of up to 120 Hz and with an estimated maximum spatial resolution of 0.03 mm. Our software is made available free of charge to the scientific community, allowing end users to either use the software as is or modify it to suit their own needs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. Real-Time 12-Lead High-Frequency QRS Electrocardiography for Enhanced Detection of Myocardial Ischemia and Coronary Artery Disease

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Kulecz, Walter B.; DePalma, Jude L.; Feiveson, Alan H.; Wilson, John S.; Rahman, M. Atiar; Bungo, Michael W.

    2004-01-01

    Several studies have shown that diminution of the high-frequency (HF; 150-250 Hz) components present within the central portion of the QRS complex of an electrocardiogram (ECG) is a more sensitive indicator for the presence of myocardial ischemia than are changes in the ST segments of the conventional low-frequency ECG. However, until now, no device has been capable of displaying, in real time on a beat-to-beat basis, changes in these HF QRS ECG components in a continuously monitored patient. Although several software programs have been designed to acquire the HF components over the entire QRS interval, such programs have involved laborious off-line calculations and postprocessing, limiting their clinical utility. We describe a personal computer-based ECG software program developed recently at the National Aeronautics and Space Administration (NASA) that acquires, analyzes, and displays HF QRS components in each of the 12 conventional ECG leads in real time. The system also updates these signals and their related derived parameters in real time on a beat-to-beat basis for any chosen monitoring period and simultaneously displays the diagnostic information from the conventional (low-frequency) 12-lead ECG. The real-time NASA HF QRS ECG software is being evaluated currently in multiple clinical settings in North America. We describe its potential usefulness in the diagnosis of myocardial ischemia and coronary artery disease.
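
    The 150-250 Hz band named above is straightforward to isolate digitally. A hedged sketch, not NASA's software (the sampling rate and filter order are assumptions), using a zero-phase Butterworth bandpass:

      import numpy as np
      from scipy.signal import butter, filtfilt

      fs = 1000.0                              # assumed sampling rate in Hz
      b, a = butter(4, [150 / (fs / 2), 250 / (fs / 2)], btype="band")

      t = np.arange(0, 1, 1 / fs)
      ecg = np.sin(2 * np.pi * 10 * t) + 0.05 * np.sin(2 * np.pi * 200 * t)
      hf_qrs = filtfilt(b, a, ecg)             # zero-phase filtering keeps timing
      print(f"HF-band RMS: {np.sqrt(np.mean(hf_qrs ** 2)):.4f}")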

  7. A General Water Resources Regulation Software System in China

    NASA Astrophysics Data System (ADS)

    LEI, X.

    2017-12-01

    To avoid iterative development of core modules for normal and emergency water resources regulation, and to improve the maintainability and ease of upgrading of regulation models and business logic, a general water resources regulation software framework was developed based on the collection and analysis of common demands for water resources regulation and emergency management. It provides a customizable, extensible software framework, open to secondary development, for the three-level platform "MWR-Basin-Province". Meanwhile, this general software system can realize business collaboration and information sharing of water resources regulation schemes among the three-level platforms, so as to improve the decision-making ability of national water resources regulation. There are four main modules involved in the general software system: 1) a complete set of general water resources regulation modules that allows secondary developers to custom-develop water resources regulation decision-making systems; 2) a complete set of model bases and model computing software released in the form of cloud services; 3) a complete set of tools to build the concept map and model system of basin water resources regulation, as well as a model management system to calibrate and configure model parameters; 4) a database which satisfies the business functions and functional requirements of general water resources regulation software. Together these provide technical support for building basin or regional water resources regulation models.

  8. The Effects of the Use of Activity-Based Costing Software in the Learning Process: An Empirical Analysis

    ERIC Educational Resources Information Center

    Tan, Andrea; Ferreira, Aldónio

    2012-01-01

    This study investigates the influence of the use of accounting software in teaching activity-based costing (ABC) on the learning process. It draws upon the Theory of Planned Behaviour and uses the end-user computer satisfaction (EUCS) framework to examine students' satisfaction with the ABC software. The study examines students' satisfaction with…

  9. Software Framework for Controlling Unsupervised Scientific Instruments.

    PubMed

    Schmid, Benjamin; Jahr, Wiebke; Weber, Michael; Huisken, Jan

    2016-01-01

    Science outreach and communication are gaining more and more importance for conveying the meaning of today's research to the general public. Public exhibitions of scientific instruments can provide hands-on experience with technical advances and their applications in the life sciences. The software of such devices, however, is oftentimes not appropriate for this purpose. In this study, we describe a software framework and the necessary computer configuration that is well suited for exposing a complex self-built and software-controlled instrument such as a microscope to laymen under limited supervision, e.g. in museums or schools. We identify several aspects that must be met by such software, and we describe a design that can simultaneously be used to control either (i) a fully functional instrument in a robust and fail-safe manner, (ii) an instrument that has low-cost or only partially working hardware attached for illustration purposes or (iii) a completely virtual instrument without hardware attached. We describe how to assess the educational success of such a device, how to monitor its operation and how to facilitate its maintenance. The introduced concepts are illustrated using our software to control eduSPIM, a fluorescent light sheet microscope that we are currently exhibiting in a technical museum.
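
    The three operating modes described above map naturally onto a hardware-abstraction layer. A minimal sketch of ours (all names are illustrative, not from the eduSPIM sources): the same control code drives a real, a partial demo, or a purely virtual stage.

      from abc import ABC, abstractmethod

      class Stage(ABC):
          @abstractmethod
          def move_to(self, position_um: float) -> None: ...

      class RealStage(Stage):
          def move_to(self, position_um):
              # Would talk to the motor controller here.
              print(f"hardware: moving to {position_um} um")

      class DemoStage(Stage):
          def move_to(self, position_um):
              print(f"demo hardware: pretending to move to {position_um} um")

      class VirtualStage(Stage):
          def __init__(self):
              self.position = 0.0
          def move_to(self, position_um):
              self.position = position_um       # simulated, nothing attached

      def acquire_stack(stage, planes):
          for z in planes:
              stage.move_to(z)                  # identical flow in all modes

      acquire_stack(VirtualStage(), [0, 5, 10])
      acquire_stack(DemoStage(), [0, 5, 10])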

  10. Software for MR image overlay guided needle insertions: the clinical translation process

    NASA Astrophysics Data System (ADS)

    Ungi, Tamas; U-Thainual, Paweena; Fritz, Jan; Iordachita, Iulian I.; Flammang, Aaron J.; Carrino, John A.; Fichtinger, Gabor

    2013-03-01

    PURPOSE: Needle guidance software using augmented reality image overlay was translated from the experimental phase to support preclinical and clinical studies. Major functional and structural changes were needed to meet clinical requirements. We present the process applied to fulfill these requirements, and selected features that may be applied in the translational phase of other image-guided surgical navigation systems. METHODS: We used an agile software development process for rapid adaptation to unforeseen clinical requests. The process is based on iterations of operating room test sessions, feedback discussions, and software development sprints. The open-source application framework of 3D Slicer and the NA-MIC kit provided sufficient flexibility and stable software foundations for this work. RESULTS: All requirements were addressed in a process with 19 operating room test iterations. Most features developed in this phase were related to workflow simplification and operator feedback. CONCLUSION: Efficient and affordable modifications were facilitated by an open source application framework and frequent clinical feedback sessions. Results of cadaver experiments show that software requirements were successfully solved after a limited number of operating room tests.

  11. Effective real-time vehicle tracking using discriminative sparse coding on local patches

    NASA Astrophysics Data System (ADS)

    Chen, XiangJun; Ye, Feiyue; Ruan, Yaduan; Chen, Qimei

    2016-01-01

    A visual tracking framework that provides an object detector and tracker, which focuses on effective and efficient visual tracking in surveillance of real-world intelligent transport system applications, is proposed. The framework casts the tracking task as problems of object detection, feature representation, and classification, which is different from appearance model-matching approaches. Through a feature representation of discriminative sparse coding on local patches called DSCLP, which trains a dictionary on local clustered patches sampled from both positive and negative datasets, the discriminative power and robustness have been improved remarkably, which makes our method more robust to a complex realistic setting with all kinds of degraded image quality. Moreover, by catching objects through one-time background subtraction, along with offline dictionary training, computation time is dramatically reduced, which enables our framework to achieve real-time tracking performance even in a high-definition sequence with heavy traffic. Experimental results show that our work outperforms some state-of-the-art methods in terms of speed, accuracy, and robustness and exhibits increased robustness in a complex real-world scenario with degraded image quality caused by vehicle occlusion, image blur of rain or fog, and change in viewpoint or scale.
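
    The patch-level sparse coding at the core of the feature representation can be approximated with off-the-shelf tools. A rough sketch of ours using scikit-learn (random stand-in data; the paper's dictionary training and classifier details differ):

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning

      rng = np.random.default_rng(0)
      pos_patches = rng.normal(0.0, 1.0, (200, 64))  # vectorized 8x8 vehicle patches
      neg_patches = rng.normal(0.5, 1.0, (200, 64))  # background patches
      patches = np.vstack([pos_patches, neg_patches])

      # Train one dictionary on patches from both classes, then use the
      # sparse codes as discriminative features for a downstream classifier.
      dico = MiniBatchDictionaryLearning(n_components=32, random_state=0)
      codes = dico.fit(patches).transform(patches)
      print(codes.shape)                             # (400, 32) feature vectors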

  12. The New BaBar Data Reconstruction Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceseracciu, Antonio

    2003-06-02

    The BaBar experiment is characterized by extremely high luminosity, a complex detector, and a huge data volume, with increasing requirements each year. To fulfill these requirements a new control system has been designed and developed for the offline data reconstruction system. The new control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full benefit of OO design. The infrastructure is well isolated from the processing layer, it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is actively distributed, enforces the separation between different processing tiers by using different naming domains, and glues them together by dedicated brokers. It provides a powerful Finite State Machine framework to describe custom processing models in a simple regular language. This paper describes this new control system, currently in use at SLAC and Padova on ~450 CPUs organized in 12 farms.

  13. The BaBar Data Reconstruction Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceseracciu, A

    2005-04-20

    The BaBar experiment is characterized by extremely high luminosity and very large volume of data produced and stored, with increasing computing requirements each year. To fulfill these requirements a Control System has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full benefit of OO design. The infrastructure is well isolated from the processing layer, it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed in a hierarchical way: the top-level system is organized in farms, farms in services, and services in subservices or code modules. It provides a powerful Finite State Machine framework to describe custom processing models in a simple regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ~450 CPUs organized in 9 farms.

  14. Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messer, Bronson; Sewell, Christopher; Heitmann, Katrin

    2015-01-01

    Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.

  15. The BaBar Data Reconstruction Control System

    NASA Astrophysics Data System (ADS)

    Ceseracciu, A.; Piemontese, M.; Tehrani, F. S.; Pulliam, T. M.; Galeazzi, F.

    2005-08-01

    The BaBar experiment is characterized by extremely high luminosity and a very large volume of data produced and stored, with increasing computing requirements each year. To fulfill these requirements a control system has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full benefit of object-oriented (OO) design. The infrastructure is well isolated from the processing layer; it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed in a hierarchical way: the top-level system is organized in farms, farms in services, and services in subservices or code modules. It provides a powerful finite state machine framework to describe custom processing models in a simple regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ~450 CPUs organized in nine farms.
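
    A minimal sketch of how a finite state machine framework can let a processing model be declared as a simple, regular transition table (entirely illustrative; the paper's actual FSM language and class names are not given in the abstract):

        # Hypothetical FSM-driven processing model; the states, events, and
        # table syntax are illustrative, not BaBar's actual language.
        class FSM:
            def __init__(self, table, state):
                self.table = table  # maps (state, event) -> next state
                self.state = state

            def handle(self, event):
                key = (self.state, event)
                if key not in self.table:
                    raise ValueError(f"no transition for {key}")
                self.state = self.table[key]
                return self.state

        # A custom processing model written as a regular transition table:
        FARM_MODEL = {
            ("idle",       "start"): "staging",
            ("staging",    "ready"): "processing",
            ("processing", "done"):  "merging",
            ("processing", "error"): "recovering",
            ("recovering", "ready"): "processing",
            ("merging",    "done"):  "idle",
        }

        farm = FSM(FARM_MODEL, "idle")
        for event in ("start", "ready", "error", "ready", "done", "done"):
            print(event, "->", farm.handle(event))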

  16. Thinking as the control of imagination: a conceptual framework for goal-directed systems.

    PubMed

    Pezzulo, Giovanni; Castelfranchi, Cristiano

    2009-07-01

    This paper offers a conceptual framework which (re)integrates goal-directed control, motivational processes, and executive functions, and suggests a developmental pathway from situated action to higher-level cognition. We first illustrate a basic computational (control-theoretic) model of goal-directed action that makes use of internal modeling. We then show that, once the problem of selecting among multiple action alternatives is added, motivation enters the scene, and that the basic mechanisms of executive functions, such as inhibition, the monitoring of progress, and working memory, are required for this system to work. Further, we elaborate on the idea that the off-line re-enactment of anticipatory mechanisms used for action control gives rise to (embodied) mental simulations, and propose that thinking consists essentially in controlling mental simulations rather than directly controlling behavior and perceptions. We conclude by sketching an evolutionary perspective of this process, proposing that anticipation leveraged cognition, and by highlighting specific predictions of our model.
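
    As a toy illustration of the central idea (the paper presents a conceptual framework, not code; this sketch and its numbers are invented), the same forward model used online for action control can be re-enacted off-line to compare imagined outcomes before acting:

        # Toy forward model: one predictor serves both online control and
        # off-line "mental simulation"; the plant and goal are made up.
        def forward_model(state, action):
            return state + action  # predict the next 1-D state

        def simulate(state, action, steps=3):
            # Off-line re-enactment: roll the model forward without acting.
            for _ in range(steps):
                state = forward_model(state, action)
            return state

        goal, state = 10.0, 0.0
        # "Thinking" as controlled simulation: choose the action whose
        # imagined outcome lands closest to the goal, then act on it.
        best = min((-1.0, 0.5, 1.0, 2.0),
                   key=lambda a: abs(simulate(state, a) - goal))
        print("chosen action:", best)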

  17. A Framework for Software Reuse in Safety-Critical System of Systems

    DTIC Science & Technology

    2008-03-01

    environment.8 Pressman, on the other hand, defines a software component as a unit of composition with contractually specified and explicit context...2005, p. 654. 9 R.S. Pressman, Software Engineering: A Practitioner's Approach, Sixth Edition, New York, NY: McGraw-Hill, 2005, p. 817. 10 W.C. Lim...index.php. 79 Pressman, R.S., Software Engineering: A Practitioner's Approach, Sixth Edition, New York, NY: McGraw-Hill, 2005. Radio Technical

  18. Craniux: A LabVIEW-Based Modular Software Framework for Brain-Machine Interface Research

    DTIC Science & Technology

    2011-01-01

    open-source BMI software solutions are currently available, we feel that the Craniux software package fills a specific need in the realm of BMI...data, such as cortical source imaging using EEG or MEG recordings. It is with these characteristics in mind that we feel the Craniux software package...S. Adee, "Dean Kamen's 'luke arm' prosthesis readies for clinical trials," IEEE Spectrum, February 2008, http://spectrum.ieee.org/biomedical

  19. The software-defined fast post-processing for GEM soft x-ray diagnostics in the Tungsten Environment in Steady-state Tokamak thermal fusion reactor

    NASA Astrophysics Data System (ADS)

    Krawczyk, Rafał Dominik; Czarski, Tomasz; Linczuk, Paweł; Wojeński, Andrzej; Kolasiński, Piotr; Gąska, Michał; Chernyshova, Maryna; Mazon, Didier; Jardin, Axel; Malard, Philippe; Poźniak, Krzysztof; Kasprowicz, Grzegorz; Zabołotny, Wojciech; Kowalska-Strzeciwilk, Ewa; Malinowski, Karol

    2018-06-01

    This article presents novel software-defined, server-based solutions introduced in the fast, real-time computation systems for soft X-ray diagnostics at the WEST (Tungsten Environment in Steady-state Tokamak) reactor in Cadarache, France. The objective of the research was to provide fast, high-throughput, low-latency data processing for investigating the interplay between particle transport and magnetohydrodynamic activity. The long-term objective is to implement a fast feedback signal in the reactor control mechanisms to sustain the fusion reaction. The implemented electronic measurement device is anticipated to be deployed at WEST. A standalone software-defined computation engine was designed to handle data collected at high rates in the server back-end of the system. Signals are obtained from the front-end field-programmable gate array mezzanine cards, which acquire and perform a selection from the gas electron multiplier detector data. A fast, custom C++ library for plasma diagnostics was written, derived from reference offline MATLAB implementations that were redesigned for runtime analysis during the experiment in the novel online modes of operation. The implementation allowed the benchmarking, evaluation, and optimization of plasma-processing algorithms, with the possibility to check consistency against the reference MATLAB computations. The back-end software and hardware architecture are presented together with the data evaluation mechanisms. The online modes of operation for WEST are discussed. Results concerning the processing performance and the introduced functionality are presented.
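
    A small sketch of the consistency-checking pattern mentioned above (hypothetical functions; the real pipeline compares an optimized C++ library against MATLAB references, whereas this only shows the shape of the check):

        # Illustrative check: a fast online implementation must agree with a
        # trusted offline reference to within a numerical tolerance.
        import math

        def reference_energy(samples):   # stands in for the offline reference
            return sum(s * s for s in samples)

        def online_energy(samples):      # stands in for the optimized path
            acc = 0.0
            for s in samples:
                acc += s * s
            return acc

        samples = [0.1 * k for k in range(1000)]
        ref, fast = reference_energy(samples), online_energy(samples)
        assert math.isclose(ref, fast, rel_tol=1e-9), (ref, fast)
        print("online result consistent with offline reference")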

  20. Modelling of surface fluxes and Urban Boundary Layer over an old Mediterranean city core

    NASA Astrophysics Data System (ADS)

    Lemonsu, A.; Masson, V.; Grimmond, Cs. B.

    2003-04-01

    In the framework of the UBL (Urban Boundary Layer)-ESCOMPTE campaign, the Town Energy Balance (TEB) model was run in off-line mode for Marseille. TEB's performance is evaluated with observations of surface temperatures and surface energy balance fluxes collected during the campaign. Parameterization improvements allow better representation of the energy exchanges between the air inside the canyon and the atmosphere above roof level. Then, high-resolution Méso-NH simulations are performed to study the 3-D structure and evolution of the Urban Boundary Layer (UBL) over Marseille. We give special attention to the impact of seaboard effects (sea-breeze circulation) on the UBL.
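
    For orientation, the balance such a model closes is the standard urban surface energy budget, schematically Q* = Q_H + Q_E + ΔQ_S when the anthropogenic heat term is neglected; a minimal residual computation (illustrative values, not ESCOMPTE measurements) looks like:

        # Schematic urban surface energy balance; the values in W m^-2 are
        # invented for illustration, not campaign data.
        q_star = 450.0  # net all-wave radiation
        q_h = 220.0     # turbulent sensible heat flux
        q_e = 80.0      # turbulent latent heat flux
        delta_q_s = q_star - q_h - q_e  # storage heat flux as the residual
        print(f"storage heat flux: {delta_q_s:.0f} W m^-2")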
