Sample records for an automatic coding system

  1. Shaping electromagnetic waves using software-automatically-designed metasurfaces.

    PubMed

    Zhang, Qian; Wan, Xiang; Liu, Shuo; Yin, Jia Yuan; Zhang, Lei; Cui, Tie Jun

    2017-06-15

    We present a fully digital procedure of designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of macro coding unit is formed by a discrete, random arrangement of micro coding units. By combining an optimization algorithm and commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess constant phase difference for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units with a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface that generates the required four-beam radiation in specific directions. Two complicated functional metasurfaces with circularly- and elliptically-shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing excellent performance of the automatic designs by software. The proposed method provides a smart tool to realize various functional devices and systems automatically.
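    A minimal sketch of the 1-bit design idea described above, assuming a placeholder function in place of the commercial electromagnetic solver: micro-unit patterns for two macro coding units are searched randomly until their simulated reflection phases differ by roughly 180 degrees. All names and the toy phase model are invented for illustration.

    ```python
    # Sketch only: random search for two 1-bit macro coding units whose
    # reflection phases differ by ~180 deg. The EM solver is a placeholder.
    import random

    def simulate_reflection_phase(pattern):
        """Placeholder for the commercial EM solver (assumption, not the paper's code)."""
        # Toy model: reflection phase grows with the fraction of filled micro units.
        return (sum(pattern) / len(pattern)) * 360.0

    def random_macro_unit(n_micro=16):
        """A macro coding unit as a random 0/1 arrangement of micro coding units."""
        return [random.randint(0, 1) for _ in range(n_micro)]

    def design_1bit_units(target_diff=180.0, tol=10.0, max_iter=10000):
        for _ in range(max_iter):
            unit0, unit1 = random_macro_unit(), random_macro_unit()
            diff = abs(simulate_reflection_phase(unit0) - simulate_reflection_phase(unit1))
            if abs(diff - target_diff) < tol:
                return unit0, unit1, diff
        raise RuntimeError("no pair found within tolerance")

    if __name__ == "__main__":
        u0, u1, d = design_1bit_units()
        print("unit '0':", u0)
        print("unit '1':", u1)
        print("phase difference (deg): %.1f" % d)
    ```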

  2. Diagnosis - Using automatic test equipment and artificial intelligence expert systems

    NASA Astrophysics Data System (ADS)

    Ramsey, J. E., Jr.

    Three expert systems (ATEOPS, ATEFEXPERS, and ATEFATLAS), which were created to direct automatic test equipment (ATE), are reviewed. The purpose of the project was to develop an expert system to troubleshoot the converter-programmer power supply card for the F-15 aircraft and have that expert system direct the automatic test equipment. Each expert system uses a different knowledge base or inference engine, basing the testing on the circuit schematic, test requirements document, or ATLAS code. Implementing generalized modules allows the expert systems to be used for any unit under test. Converting ATLAS code to LISP allows the expert system to direct any ATE that uses ATLAS. The constraint propagated frame system allows for the expansion of control by creating the ATLAS code, checking the code for good software engineering techniques, directing the ATE, and changing the test sequence as needed (planning).

  3. Automatic NEPHIS Coding of Descriptive Titles for Permuted Index Generation.

    ERIC Educational Resources Information Center

    Craven, Timothy C.

    1982-01-01

    Describes a system for the automatic coding of most descriptive titles which generates Nested Phrase Indexing System (NEPHIS) input strings of sufficient quality for permuted index production. A series of examples and an 11-item reference list accompany the text. (JL)

  4. Automatic Implementation of TTEthernet-Based Time-Triggered Avionics Applications

    NASA Astrophysics Data System (ADS)

    Gorcitz, Raul Adrian; Carle, Thomas; Lesens, David; Monchaux, David; Potop-Butucaru, Dumitru; Sorel, Yves

    2015-09-01

    The design of safety-critical embedded systems such as those used in avionics still involves largely manual phases. But in avionics the definition of standard interfaces embodied in standards such as ARINC 653 or TTEthernet should allow the definition of fully automatic code generation flows that reduce the costs while improving the quality of the generated code, much like compilers have done when replacing manual assembly coding. In this paper, we briefly present such a fully automatic implementation tool, called Lopht, for ARINC653-based time-triggered systems, and then explain how it is currently extended to include support for TTEthernet networks.

  5. Automatic mathematical modeling for space application

    NASA Technical Reports Server (NTRS)

    Wang, Caroline K.

    1987-01-01

    A methodology for automatic mathematical modeling is described. The major objective is to create a user-friendly environment in which engineers can design, maintain and verify their model and also automatically convert the mathematical model into FORTRAN code for conventional computation. A demonstration program, called Propulsion System Automatic Modeling (PSAM), was designed for the Space Shuttle Main Engine simulation mathematical model. PSAM provides a friendly and well organized environment in which engineers can build a knowledge base for base equations and general information. PSAM contains an initial set of component process elements for the Space Shuttle Main Engine simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. PSAM is then able to automatically generate the model and the FORTRAN code. A future goal is to download the FORTRAN code to the VAX/VMS system for conventional computation.
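    As an illustration of the final code-generation step only (a hedged sketch, not PSAM itself), the fragment below turns a small symbolic model specification into a Fortran subroutine; the equation, variable names, and dictionary layout are invented for the example.

    ```python
    # Sketch: emit a Fortran subroutine from a tiny model specification.
    # The model (a made-up thrust equation) stands in for a knowledge-base entry.
    MODEL = {
        "name": "thrust_model",
        "inputs": ["mdot", "ve", "pe", "pa", "ae"],
        "output": "thrust",
        "equation": "mdot * ve + (pe - pa) * ae",
    }

    def generate_fortran(model):
        args = ", ".join(model["inputs"] + [model["output"]])
        lines = [
            f"      SUBROUTINE {model['name'].upper()}({args.upper()})",
            "      IMPLICIT NONE",
            f"      REAL {args.upper()}",
            f"      {model['output'].upper()} = {model['equation'].upper()}",
            "      RETURN",
            "      END",
        ]
        return "\n".join(lines)

    if __name__ == "__main__":
        print(generate_fortran(MODEL))   # prints a fixed-form Fortran subroutine
    ```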

  6. Translating expert system rules into Ada code with validation and verification

    NASA Technical Reports Server (NTRS)

    Becker, Lee; Duckworth, R. James; Green, Peter; Michalson, Bill; Gosselin, Dave; Nainani, Krishan; Pease, Adam

    1991-01-01

    The purpose of this ongoing research and development program is to develop software tools which enable the rapid development, upgrading, and maintenance of embedded real-time artificial intelligence systems. The goals of this phase of the research were to investigate the feasibility of developing software tools which automatically translate expert system rules into Ada code and to develop methods for performing validation and verification testing of the resultant expert system. A prototype system was demonstrated which automatically translated rules from an Air Force expert system into Ada code and detected errors in the execution of the resultant system. The method and prototype tools for converting AI representations into Ada code, by converting the rules into Ada code modules and then linking them with an Activation Framework based run-time environment to form an executable load module, are discussed. This method is based upon the use of Evidence Flow Graphs, which are a data flow representation for intelligent systems. The development of prototype test generation and evaluation software which was used to test the resultant code is also discussed. This testing was performed automatically using Monte Carlo techniques based upon a constraint-based description of the required performance for the system.

  7. Design and implementation of online automatic judging system

    NASA Astrophysics Data System (ADS)

    Liang, Haohui; Chen, Chaojie; Zhong, Xiuyu; Chen, Yuefeng

    2017-06-01

    To address the low efficiency and poor reliability of manual judging in programming training and competitions, an Online Automatic Judging (OAJ) system was designed. The OAJ system, consisting of a sandboxed judging side and a Web side, automatically compiles and runs submitted code and generates evaluation scores and corresponding reports. To prevent malicious code from damaging the system, the OAJ system runs submissions in a sandbox, ensuring the safety of the system. The OAJ system uses thread pools to run tests in parallel and adopts database optimizations, such as horizontal table splitting, to improve system performance and resource utilization. The test results show that the system has high performance, high reliability, high stability and excellent extensibility.
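    A minimal sketch of the judging step described above, assuming submissions are plain Python scripts; the real OAJ system adds sandboxing, thread pools, and database bookkeeping that are out of scope here. The file name and test case are invented.

    ```python
    # Sketch: compile/run a submitted program against one test case and score it.
    # Real judges add resource limits and an isolation sandbox; omitted here.
    import subprocess
    import sys

    def judge(source_path, input_data, expected_output, time_limit=2.0):
        try:
            result = subprocess.run(
                [sys.executable, source_path],     # assumption: submissions are Python
                input=input_data,
                capture_output=True,
                text=True,
                timeout=time_limit,
            )
        except subprocess.TimeoutExpired:
            return "Time Limit Exceeded"
        if result.returncode != 0:
            return "Runtime Error"
        if result.stdout.strip() == expected_output.strip():
            return "Accepted"
        return "Wrong Answer"

    if __name__ == "__main__":
        with open("sum.py", "w") as f:                  # hypothetical submission
            f.write("a, b = map(int, input().split()); print(a + b)\n")
        print(judge("sum.py", "2 3\n", "5\n"))          # -> Accepted
    ```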

  8. A review of automatic patient identification options for public health care centers with restricted budgets.

    PubMed

    García-Betances, Rebeca I; Huerta, Mónica K

    2012-01-01

    A comparative review is presented of available technologies suitable for automatic reading of patient identification bracelet tags. Existing technologies' backgrounds, characteristics, advantages and disadvantages are described in relation to their possible use by public health care centers with budgetary limitations. A comparative assessment is presented of suitable automatic identification systems based on graphic codes, both one-dimensional (1D) and two-dimensional (2D), printed on labels, as well as those based on radio frequency identification (RFID) tags. The analysis looks at the tradeoffs of these technologies to provide guidance to hospital administrators looking to deploy patient identification technology. The results suggest that affordable automatic patient identification systems can be easily and inexpensively implemented using 2D codes printed on low-cost bracelet labels, which can then be read and automatically decoded by ordinary mobile smart phones. Because of mobile smart phones' present versatility and ubiquity, the implementation and operation of 2D code, and especially Quick Response® (QR) Code, technology emerges as a very attractive alternative for automating patient identification processes in low-budget situations.
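    As a sketch of the low-budget approach the review describes, the snippet below encodes a patient identifier into a QR code image for a bracelet label. It assumes the third-party qrcode package (with Pillow) is installed; the ID format and file name are invented for the example.

    ```python
    # Sketch: print a QR bracelet label for a patient identifier.
    # Assumes the third-party "qrcode" package (pip install qrcode[pil]).
    import json
    import qrcode

    patient = {"id": "HOSP-000123", "name": "DOE, JANE", "dob": "1970-01-01"}  # example data

    img = qrcode.make(json.dumps(patient))   # build the 2D code image
    img.save("bracelet_000123.png")          # this label would be printed on the wristband
    print("QR label written to bracelet_000123.png")
    ```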

  9. A Review of Automatic Patient Identification Options for Public Health Care Centers with Restricted Budgets

    PubMed Central

    García-Betances, Rebeca I.; Huerta, Mónica K.

    2012-01-01

    A comparative review is presented of available technologies suitable for automatic reading of patient identification bracelet tags. Existing technologies' backgrounds, characteristics, advantages and disadvantages are described in relation to their possible use by public health care centers with budgetary limitations. A comparative assessment is presented of suitable automatic identification systems based on graphic codes, both one-dimensional (1D) and two-dimensional (2D), printed on labels, as well as those based on radio frequency identification (RFID) tags. The analysis looks at the tradeoffs of these technologies to provide guidance to hospital administrators looking to deploy patient identification technology. The results suggest that affordable automatic patient identification systems can be easily and inexpensively implemented using 2D codes printed on low-cost bracelet labels, which can then be read and automatically decoded by ordinary mobile smart phones. Because of mobile smart phones' present versatility and ubiquity, the implementation and operation of 2D code, and especially Quick Response® (QR) Code, technology emerges as a very attractive alternative for automating patient identification processes in low-budget situations. PMID:23569629

  10. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
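    The stop-and-wait flavour of ARQ surveyed here can be sketched in a few lines; the code below pairs a CRC-32 check (standing in for a linear block detection code) with retransmission over a deliberately noisy channel. Everything in it is illustrative rather than taken from the report.

    ```python
    # Sketch: stop-and-wait ARQ with CRC-32 error detection over a lossy channel.
    import random
    import zlib

    def send_over_noisy_channel(frame, error_rate=0.3):
        """Flip one byte with some probability to emulate channel errors."""
        frame = bytearray(frame)
        if random.random() < error_rate:
            i = random.randrange(len(frame))
            frame[i] ^= 0xFF
        return bytes(frame)

    def transmit(payload, max_retries=10):
        frame = payload + zlib.crc32(payload).to_bytes(4, "big")   # append CRC
        for attempt in range(1, max_retries + 1):
            received = send_over_noisy_channel(frame)
            data, crc = received[:-4], int.from_bytes(received[-4:], "big")
            if zlib.crc32(data) == crc:
                return data, attempt          # receiver ACKs; done
            # receiver NAKs (or sender times out) -> automatic repeat request
        raise RuntimeError("retry limit exceeded")

    if __name__ == "__main__":
        data, attempts = transmit(b"error control by ARQ")
        print(data, "delivered after", attempts, "attempt(s)")
    ```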

  11. Automated encoding of clinical documents based on natural language processing.

    PubMed

    Friedman, Carol; Shagina, Lyudmila; Lussier, Yves; Hripcsak, George

    2004-01-01

    The aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers and to quantitatively evaluate the method. An existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching of structured output generated by MedLEE consisting of findings and modifiers to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts. Recall of the system for UMLS coding of all terms was .77 (95% CI .72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91. Extraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval.
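    A short sketch of the evaluation arithmetic used in the study (recall and precision of automatically assigned codes against an expert reference standard); the sample code sets below are hypothetical.

    ```python
    # Sketch: recall/precision of system-assigned UMLS codes vs. an expert reference.
    def recall_precision(system_codes, reference_codes):
        system, reference = set(system_codes), set(reference_codes)
        true_positives = len(system & reference)
        recall = true_positives / len(reference) if reference else 0.0
        precision = true_positives / len(system) if system else 0.0
        return recall, precision

    if __name__ == "__main__":
        system = {"C0011847", "C0020538", "C0032285"}        # hypothetical UMLS CUIs
        reference = {"C0011847", "C0020538", "C0004096"}
        r, p = recall_precision(system, reference)
        print(f"recall={r:.2f} precision={p:.2f}")           # recall=0.67 precision=0.67
    ```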

  12. Automatic HDL firmware generation for FPGA-based reconfigurable measurement and control systems with mezzanines in FMC standard

    NASA Astrophysics Data System (ADS)

    Wojenski, Andrzej; Kasprowicz, Grzegorz; Pozniak, Krzysztof T.; Romaniuk, Ryszard

    2013-10-01

    The paper describes a concept of automatic firmware generation for reconfigurable measurement systems which use FPGA devices and measurement cards in the FMC standard. The following topics are described in detail: automatic HDL code generation for FPGA devices, automatic implementation of communication interfaces, HDL drivers for measurement cards, automatic serial connection between multiple measurement backplane boards, automatic building of the memory map (address space), and management of the automatically generated firmware. The presented solutions are required in many advanced measurement systems, such as Beam Position Monitors or GEM detectors. This work is part of a wider project for automatic firmware generation and management of reconfigurable systems. The solutions presented in this paper build on a previous SPIE publication.
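    To make the code-generation idea concrete, here is a hedged sketch (not the authors' tool) that emits a Verilog read-back register block and a matching memory-map listing from a small measurement-card description; the card name, register names, and addresses are invented, and the referenced *_value signals are assumed to be wired elsewhere.

    ```python
    # Sketch: generate a Verilog register-readback block plus a memory map
    # from a toy description of an FMC measurement card.
    CARD = {
        "name": "fmc_adc_card",                      # hypothetical card
        "registers": [("ID", 0x00), ("STATUS", 0x04), ("SAMPLE_COUNT", 0x08)],
    }

    def generate_verilog(card):
        lines = [f"module {card['name']}_regs (",
                 "  input  wire        clk,",
                 "  input  wire [7:0]  addr,",
                 "  output reg  [31:0] rdata",
                 ");",
                 "  always @(posedge clk) begin",
                 "    case (addr)"]
        for name, offset in card["registers"]:
            # NAME_value signals are assumed to be driven by the card logic
            lines.append(f"      8'h{offset:02X}: rdata <= {name}_value;")
        lines += ["      default: rdata <= 32'h0;",
                  "    endcase",
                  "  end",
                  "endmodule"]
        return "\n".join(lines)

    def memory_map(card):
        return "\n".join(f"0x{off:02X}  {name}" for name, off in card["registers"])

    if __name__ == "__main__":
        print(generate_verilog(CARD))
        print("\nMemory map:\n" + memory_map(CARD))
    ```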

  13. Automatic programming of simulation models

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1990-01-01

    The concepts of software engineering were used to improve the simulation modeling environment. Emphasis was placed on the application of an element of rapid prototyping, or automatic programming, to assist the modeler in defining the problem specification. Then, once the problem specification has been defined, an automatic code generator is used to write the simulation code. The following two domains were selected for evaluating the concepts of software engineering for discrete event simulation: a manufacturing domain and a spacecraft countdown network sequence. The specific tasks were to: (1) define the software requirements for a graphical user interface to the Automatic Manufacturing Programming System (AMPS); (2) develop a graphical user interface for AMPS; and (3) compare the AMPS graphical interface with the AMPS interactive user interface.

  14. A system for classifying wood-using industries and recording statistics for automatic data processing.

    Treesearch

    E.W. Fobes; R.W. Rowe

    1968-01-01

    A system for classifying wood-using industries and recording pertinent statistics for automatic data processing is described. Forms and coding instructions for recording data of primary processing plants are included.

  15. Automatic finite element generators

    NASA Technical Reports Server (NTRS)

    Wang, P. S.

    1984-01-01

    The design and implementation of a software system for generating finite elements and related computations are described. Exact symbolic computational techniques are employed to derive strain-displacement matrices and element stiffness matrices. Methods for dealing with the excessive growth of symbolic expressions are discussed. Automatic FORTRAN code generation is described with emphasis on improving the efficiency of the resultant code.
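    The derive-then-generate flow can be illustrated with SymPy (a sketch under the assumption that SymPy stands in for the symbolic system described): a 2-node bar element's strain-displacement vector and stiffness matrix are derived exactly and then printed as Fortran assignments. The element choice and variable names are illustrative only.

    ```python
    # Sketch: symbolic derivation of a 2-node bar element stiffness matrix,
    # followed by Fortran emission (SymPy stands in for the original system).
    import sympy as sp

    x, L, E, A = sp.symbols("x L E A", positive=True)
    N = sp.Matrix([[1 - x / L, x / L]])    # linear shape functions
    B = N.diff(x)                          # strain-displacement (1x2) matrix
    K = (E * A * B.T * B).applyfunc(lambda e: sp.integrate(e, (x, 0, L)))
    print(K)                               # -> Matrix([[A*E/L, -A*E/L], [-A*E/L, A*E/L]])

    # Emit one Fortran assignment per stiffness entry.
    for i in range(2):
        for j in range(2):
            print(f"      K({i+1},{j+1}) = {sp.fcode(K[i, j]).strip()}")
    ```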

  16. An Expert System for the Development of Efficient Parallel Code

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Chun, Robert; Jin, Hao-Qiang; Labarta, Jesus; Gimenez, Judit

    2004-01-01

    We have built the prototype of an expert system to assist the user in the development of efficient parallel code. The system was integrated into the parallel programming environment that is currently being developed at NASA Ames. The expert system interfaces to tools for automatic parallelization and performance analysis. It uses static program structure information and performance data in order to automatically determine causes of poor performance and to make suggestions for improvements. In this paper we give an overview of our programming environment, describe the prototype implementation of our expert system, and demonstrate its usefulness with several case studies.

  17. Formally specifying the logic of an automatic guidance controller

    NASA Technical Reports Server (NTRS)

    Guaspari, David

    1990-01-01

    The following topics are covered in viewgraph form: (1) the Penelope Project; (2) the logic of an experimental automatic guidance control system for a 737; (3) Larch/Ada specification; (4) some failures of informal description; (5) description of mode changes caused by switches; (6) intuitive description of window status (chosen vs. current); (7) design of the code; (8) and specifying the code.

  18. Automatic Testcase Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.

    2008-01-01

    The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) A blackbox approach that views the system as a blackbox, and uses a grammar-based specification of the system's inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system. 2) A whitebox approach that performs analysis and testcase generation directly on a representation of the internal behaviour of the system under test. The enabling technologies for both these approaches are model checking and symbolic execution, as implemented in the Ames' Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing which check only some of the system executions and therefore may miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems. TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammars. Running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive. ICS's in-house coverage tools will be run to measure code coverage. Because the scripts exercise all parts of the grammar, we expect them to provide high code coverage. This blackbox approach is suitable for systems for which we do not have access to the source code. We are applying whitebox test generation to the Spacecraft Health INference Engine (SHINE) that is part of the ISHM system. In TacSat3, SHINE will execute an on-board knowledge base for fault detection and diagnosis. SHINE converts its knowledge base into optimized C code which runs onboard TacSat3. SHINE can translate its rules into an intermediate representation (Java) suitable for analysis with JPF. JPF will analyze SHINE's Java output using symbolic execution, producing testcases that can provide either complete or directed coverage of the code. Automatically generated test suites can provide full code coverage and be quickly regenerated when code changes. Because our tools analyze executable code, they fully cover the delivered code, not just models of the code. This approach also provides a way to generate tests that exercise specific sections of code under specific preconditions. This capability gives us more focused testing of specific sections of code.
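    The blackbox idea of enumerating every input the grammar allows (up to a bound) can be sketched without JPF; the toy command grammar below is invented and simply stands in for the SCL grammar, and the bound is a token count rather than the project's prespecified size limit.

    ```python
    # Sketch: exhaustively enumerate all sentences of a tiny grammar up to a
    # length bound, in the spirit of the blackbox script generation above.
    GRAMMAR = {                      # hypothetical toy command grammar
        "<script>": [["<cmd>"], ["<cmd>", ";", "<script>"]],
        "<cmd>": [["set", "<id>", "<num>"], ["get", "<id>"]],
        "<id>": [["heater"], ["valve"]],
        "<num>": [["0"], ["1"]],
    }

    def expand(symbols, max_tokens):
        """Yield all fully expanded token lists no longer than max_tokens."""
        if len(symbols) > max_tokens:
            return
        for i, sym in enumerate(symbols):
            if sym in GRAMMAR:                       # expand the leftmost nonterminal
                for production in GRAMMAR[sym]:
                    yield from expand(symbols[:i] + production + symbols[i + 1:],
                                      max_tokens)
                return
        yield symbols                                # all terminals: a legal script

    if __name__ == "__main__":
        scripts = sorted(" ".join(s) for s in expand(["<script>"], max_tokens=7))
        print(len(scripts), "scripts generated, e.g.:", scripts[0])
    ```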

  19. DoD Is Not Properly Monitoring the Initiation of Maintenance for Facilities at Kandahar Airfield, Afghanistan (REDACTED)

    DTIC Science & Technology

    2013-09-30

    fire sprinkler system during the initial construction of the RSOI facilities. The construction contract to build the RSOI... International Building Code. Compliant manual and automatic fire alarm and notification systems, portable fire extinguishers, fire sprinkler systems... automatic fire sprinkler system that was not operational, a fire department connection that was obstructed, and a fire detection system

  20. A graphically oriented specification language for automatic code generation. GRASP/Ada: A Graphical Representation of Algorithms, Structure, and Processes for Ada, phase 1

    NASA Technical Reports Server (NTRS)

    Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.

    1989-01-01

    The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.

  1. Support for Debugging Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Jost, Gabriele; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation provides information on the technical aspects of debugging computer code that has been automatically converted for use in a parallel computing system. Shared memory parallelization and distributed memory parallelization entail separate and distinct challenges for a debugging program. A prototype system has been developed which integrates various tools for the debugging of automatically parallelized programs including the CAPTools Database which provides variable definition information across subroutines as well as array distribution information.

  2. Automatic mathematical modeling for real time simulation system

    NASA Technical Reports Server (NTRS)

    Wang, Caroline; Purinton, Steve

    1988-01-01

    A methodology for automatic mathematical modeling and generation of simulation models is described. The models will be verified by running them in a test environment using standard profiles, with the results compared against known results. The major objective is to create a user-friendly environment in which engineers can design, maintain, and verify their model and also automatically convert the mathematical model into conventional code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine Simulation. It is written in LISP and MACSYMA and runs on a Symbolics 3670 Lisp machine. The program provides a friendly and well organized environment in which engineers can build a knowledge base for base equations and general information. It contains an initial set of component process elements for the Space Shuttle Main Engine Simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. The system is then able to automatically generate the model and FORTRAN code. A future goal, currently under development, is to download the FORTRAN code to a VAX/VMS system for conventional computation. The SSME mathematical model will be verified in a test environment and the solution compared with the real data profile. The use of artificial intelligence techniques has shown that the process of simulation modeling can be simplified.

  3. Design of efficient and simple interface testing equipment for opto-electric tracking system

    NASA Astrophysics Data System (ADS)

    Liu, Qiong; Deng, Chao; Tian, Jing; Mao, Yao

    2016-10-01

    Interface testing for an opto-electric tracking system is an important task for assuring system performance; its aim is to verify, at different levels, whether the design of every electronic interface matches the communication protocols. Opto-electric tracking systems are now complex and composed of many functional units. Usually, interface testing is executed between completely manufactured units, so it depends heavily on unit design and manufacturing progress as well as on the people involved, and typically takes days or weeks. To solve this problem, this paper proposes efficient and simple interface testing equipment for opto-electric tracking systems, consisting of optional interface circuit cards, a processor, and a test program. The hardware cards provide the matched hardware interface(s) and are easily supplied by a hardware engineer. Automatic code generation is used to adapt to new communication protocols: automatic acquisition of protocol items, automatic construction of the code architecture, and automatic encoding quickly form a new, adapted test program. After a few simple steps, standard customized interface testing equipment with a matching test program and interface(s) is ready for a system awaiting test within minutes. The equipment has been used on many opto-electric tracking systems to test all or part of their interfaces, reducing test time from days to hours and greatly improving test efficiency, with high software quality and stability and without manual coding. Used as a common tool, the interface testing equipment proposed in this paper has changed the traditional interface testing method and achieved much higher efficiency.

  4. Model-Driven Engineering: Automatic Code Generation and Beyond

    DTIC Science & Technology

    2015-03-01

    and Weblogic as well as cloud environments such as Microsoft Azure and Amazon Web Services®. Finally, while the generated code has dependencies on... code generation in the context of the full system lifecycle from development to sustainment. Acquisition programs in government or large commercial... Acquirers are concerned with the full system lifecycle, and they need confidence that the development methods will enable the system to meet the functional

  5. Development of an Automatic Differentiation Version of the FPX Rotor Code

    NASA Technical Reports Server (NTRS)

    Hu, Hong

    1996-01-01

    The ADIFOR 2.0 automatic differentiator is applied to the FPX rotor code along with the grid generator GRGN3. FPX is an eXtended Full-Potential CFD code for rotor calculations. The automatic differentiation version of the code is obtained, which provides both non-geometry and geometry sensitivity derivatives. The sensitivity derivatives obtained via automatic differentiation are presented and compared with divided-difference generated derivatives. The study shows that the automatic differentiation method gives accurate derivative values in an efficient manner.
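    The comparison the study makes (automatic differentiation versus divided differences) can be illustrated with a tiny forward-mode AD class; the sample function below is invented and unrelated to the FPX rotor code.

    ```python
    # Sketch: forward-mode automatic differentiation via dual numbers,
    # compared against a divided-difference (finite difference) derivative.
    import math

    class Dual:
        """Value plus derivative, propagated exactly through arithmetic."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.der + other.der)
        __radd__ = __add__
        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)
        __rmul__ = __mul__
        def sin(self):
            return Dual(math.sin(self.val), math.cos(self.val) * self.der)

    def f(x):
        # example function f(x) = x^2 + 3 sin(x)
        return x * x + 3.0 * x.sin() if isinstance(x, Dual) else x * x + 3.0 * math.sin(x)

    x0 = 1.2
    ad = f(Dual(x0, 1.0)).der                          # automatic differentiation
    h = 1e-6
    fd = (f(x0 + h) - f(x0 - h)) / (2 * h)             # divided difference
    print(f"AD: {ad:.10f}  FD: {fd:.10f}  exact: {2*x0 + 3*math.cos(x0):.10f}")
    ```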

  6. AMPS/PC - AUTOMATIC MANUFACTURING PROGRAMMING SYSTEM

    NASA Technical Reports Server (NTRS)

    Schroer, B. J.

    1994-01-01

    The AMPS/PC system is a simulation tool designed to aid the user in defining the specifications of a manufacturing environment and then automatically writing code for the target simulation language, GPSS/PC. The domain of problems that AMPS/PC can simulate is manufacturing assembly lines with subassembly lines and manufacturing cells. The user defines the problem domain by responding to questions from the interface program. Based on the responses, the interface program creates an internal problem specification file. This file includes the manufacturing process network flow and the attributes for all stations, cells, and stock points. AMPS then uses the problem specification file as input for the automatic code generator program to produce a simulation program in the target language, GPSS. The output of the generator program is the source code of the corresponding GPSS/PC simulation program. The system runs entirely on an IBM PC running PC DOS Version 2.0 or higher and is written in Turbo Pascal Version 4, requiring 640K of memory and one 360K disk drive. To execute the GPSS program, the PC must have the GPSS/PC System Version 2.0 from Minuteman Software resident. The AMPS/PC program was developed in 1988.
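    A hedged sketch of the generator step (not the AMPS/PC source): a single workstation description is turned into the corresponding GPSS block sequence as text. The station parameters are invented, and the block layout follows common GPSS usage rather than AMPS/PC's actual output format.

    ```python
    # Sketch: emit a GPSS block sequence for one workstation from a small
    # problem-specification dictionary.
    SPEC = {
        "station": "DRILL",                # hypothetical station name
        "interarrival": (10, 4),           # mean +/- half-range, time units
        "service": (8, 3),
        "transactions": 100,
    }

    def generate_gpss(spec):
        mean_a, half_a = spec["interarrival"]
        mean_s, half_s = spec["service"]
        name = spec["station"]
        return "\n".join([
            f"          GENERATE  {mean_a},{half_a}",
            f"          QUEUE     {name}Q",
            f"          SEIZE     {name}",
            f"          DEPART    {name}Q",
            f"          ADVANCE   {mean_s},{half_s}",
            f"          RELEASE   {name}",
            "          TERMINATE 1",
            f"          START     {spec['transactions']}",
        ])

    if __name__ == "__main__":
        print(generate_gpss(SPEC))
    ```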

  7. Automatic vehicle location system

    NASA Technical Reports Server (NTRS)

    Hansen, G. R., Jr. (Inventor)

    1973-01-01

    An automatic vehicle detection system is disclosed, in which each vehicle whose location is to be detected carries active means which interact with passive elements at each location to be identified. The passive elements comprise a plurality of passive loops arranged in a sequence along the travel direction. Each of the loops is tuned to a chosen frequency, so that the sequence of frequencies defines the location code. As the vehicle traverses the sequence of loops, passing over each loop in turn, only signals at the frequency of the loop being passed over are coupled from a vehicle transmitter to a vehicle receiver. The frequencies of the signals received by the receiver produce outputs which together represent the code of the traversed location. The location code is defined by a painted pattern which reflects light to a vehicle-carried detector whose output is used to derive the code defined by the pattern.

  8. Verification and Validation in a Rapid Software Development Process

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Easterbrook, Steve M.

    1997-01-01

    The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing on the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.

  9. Specifications and programs for computer software validation

    NASA Technical Reports Server (NTRS)

    Browne, J. C.; Kleir, R.; Davis, T.; Henneman, M.; Haller, A.; Lasseter, G. L.

    1973-01-01

    Three software products developed during the study are reported and include: (1) FORTRAN Automatic Code Evaluation System, (2) the Specification Language System, and (3) the Array Index Validation System.

  10. Secure web-based invocation of large-scale plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Dimitrov, D. A.; Busby, R.; Exby, J.; Bruhwiler, D. L.; Cary, J. R.

    2004-12-01

    We present our design and initial implementation of a web-based system for running, both in parallel and serial, Particle-In-Cell (PIC) codes for plasma simulations with automatic post processing and generation of visual diagnostics.

  11. Modelling Metamorphism by Abstract Interpretation

    NASA Astrophysics Data System (ADS)

    Dalla Preda, Mila; Giacobazzi, Roberto; Debray, Saumya; Coogan, Kevin; Townsend, Gregg M.

    Metamorphic malware apply semantics-preserving transformations to their own code in order to foil detection systems based on signature matching. In this paper we consider the problem of automatically extracting metamorphic signatures from such malware. We introduce a semantics for self-modifying code, which we call phase semantics, and prove its correctness by showing that it is an abstract interpretation of the standard trace semantics. Phase semantics precisely models the metamorphic code behavior by providing a set of traces of programs which correspond to the possible evolutions of the metamorphic code during execution. We show that metamorphic signatures can be automatically extracted by abstract interpretation of the phase semantics, and that regular metamorphism can be modelled as a finite state automata abstraction of the phase semantics.

  12. The UPSF code: a metaprogramming-based high-performance automatically parallelized plasma simulation framework

    NASA Astrophysics Data System (ADS)

    Gao, Xiatian; Wang, Xiaogang; Jiang, Binhao

    2017-10-01

    UPSF (Universal Plasma Simulation Framework) is a new plasma simulation code designed for maximum flexibility by using cutting-edge techniques supported by the C++17 standard. Through the use of metaprogramming techniques, UPSF provides arbitrary-dimensional data structures and methods to support various kinds of plasma simulation models, such as Vlasov, particle-in-cell (PIC), fluid, and Fokker-Planck models, as well as their variants and hybrid methods. Through C++ metaprogramming, a single code base can be applied to systems of arbitrary dimension with no loss of performance. UPSF can also automatically parallelize the distributed data structure and accelerate matrix and tensor operations with BLAS. A three-dimensional particle-in-cell code has been developed based on UPSF. Two test cases, Landau damping and the Weibel instability for the electrostatic and electromagnetic cases respectively, are presented to show the validity and performance of the UPSF code.

  13. A procedure for automating CFD simulations of an inlet-bleed problem

    NASA Technical Reports Server (NTRS)

    Chyu, Wei J.; Rimlinger, Mark J.; Shih, Tom I.-P.

    1995-01-01

    A procedure was developed to improve the turn-around time for computational fluid dynamics (CFD) simulations of an inlet-bleed problem involving oblique shock-wave/boundary-layer interactions on a flat plate with bleed into a plenum through one or more circular holes. This procedure is embodied in a preprocessor called AUTOMAT. With AUTOMAT, once data for the geometry and flow conditions have been specified (either interactively or via a namelist), it will automatically generate all input files needed to perform a three-dimensional Navier-Stokes simulation of the prescribed inlet-bleed problem by using the PEGASUS and OVERFLOW codes. The input files automatically generated by AUTOMAT include those for the grid system and those for the initial and boundary conditions. The grid systems automatically generated by AUTOMAT are multi-block structured grids of the overlapping type. Results obtained by using AUTOMAT are presented to illustrate its capability.

  14. A versatile calibration procedure for portable coded aperture gamma cameras and RGB-D sensors

    NASA Astrophysics Data System (ADS)

    Paradiso, V.; Crivellaro, A.; Amgarou, K.; de Lanaute, N. Blanc; Fua, P.; Liénard, E.

    2018-04-01

    The present paper proposes a versatile procedure for the geometrical calibration of coded aperture gamma cameras and RGB-D depth sensors, using only one radioactive point source and a simple experimental set-up. Calibration data is then used for accurately aligning radiation images retrieved by means of the γ-camera with the respective depth images computed with the RGB-D sensor. The system resulting from such a combination is thus able to retrieve, automatically, the distance of radioactive hotspots by means of pixel-wise mapping between gamma and depth images. This procedure is of great interest for a wide number of applications, ranging from precise automatic estimation of the shape and distance of radioactive objects to Augmented Reality systems. Incidentally, the corresponding results validated the choice of a perspective design model for a coded aperture γ-camera.

  15. Automatic Certification of Kalman Filters for Reliable Code Generation

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd; Schumann, Johann; Richardson, Julian

    2005-01-01

    AUTOFILTER is a tool for automatically deriving Kalman filter code from high-level declarative specifications of state estimation problems. It can generate code with a range of algorithmic characteristics and for several target platforms. The tool has been designed with reliability of the generated code in mind and is able to automatically certify that the code it generates is free from various error classes. Since documentation is an important part of software assurance, AUTOFILTER can also automatically generate various human-readable documents, containing both design and safety related information. We discuss how these features address software assurance standards such as DO-178B.
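    For readers unfamiliar with what the generated estimators compute, here is a minimal scalar Kalman filter written by hand (a sketch for orientation only, not AUTOFILTER output; the noise parameters are invented).

    ```python
    # Sketch: scalar Kalman filter (random-walk state, noisy measurements),
    # i.e. the kind of state estimator such a synthesis tool targets.
    import random

    def kalman_1d(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
        x, p = x0, p0
        estimates = []
        for z in measurements:
            p = p + q                      # predict (random-walk model)
            k = p / (p + r)                # Kalman gain
            x = x + k * (z - x)            # update with measurement
            p = (1.0 - k) * p
            estimates.append(x)
        return estimates

    if __name__ == "__main__":
        truth = 5.0
        zs = [truth + random.gauss(0.0, 0.5) for _ in range(50)]
        est = kalman_1d(zs)
        print(f"final estimate {est[-1]:.3f} (truth {truth})")
    ```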

  16. Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis

    PubMed Central

    Girard, Jeffrey M.; Cohn, Jeffrey F.; Mahoor, Mohammad H.; Mavadati, Seyedmohammad; Rosenwald, Dean P.

    2014-01-01

    Investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units, and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that occurred were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the “social risk hypothesis” of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science. PMID:24598859

  17. AutoBayes Program Synthesis System Users Manual

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Jafari, Hamed; Pressburger, Tom; Denney, Ewen; Buntine, Wray; Fischer, Bernd

    2008-01-01

    Program synthesis is the systematic, automatic construction of efficient executable code from high-level declarative specifications. AutoBayes is a fully automatic program synthesis system for the statistical data analysis domain; in particular, it solves parameter estimation problems. It has seen many successful applications at NASA and is currently being used, for example, to analyze simulation results for Orion. The input to AutoBayes is a concise description of a data analysis problem composed of a parameterized statistical model and a goal that is a probability term involving parameters and input data. The output is optimized and fully documented C/C++ code computing the values for those parameters that maximize the probability term. AutoBayes can solve many subproblems symbolically rather than having to rely on numeric approximation algorithms, thus yielding effective, efficient, and compact code. Statistical analysis is faster and more reliable, because effort can be focused on model development and validation rather than manual development of solution algorithms and code.

  18. Container-code recognition system based on computer vision and deep neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Li, Tianjian; Jiang, Li; Liang, Xiaoyao

    2018-04-01

    An automatic container-code recognition system has become a crucial requirement for the ship transportation industry in recent years. In this paper, an automatic container-code recognition system based on computer vision and deep neural networks is proposed. The system consists of two modules, a detection module and a recognition module. The detection module applies both algorithms based on computer vision and neural networks, and generates a better detection result by combining the two to avoid the drawbacks of either method alone. The combined detection results are also collected for online training of the neural networks. The recognition module exploits both character segmentation and end-to-end recognition, and outputs the recognition result that passes verification. When the recognition module produces an incorrect recognition, the result is corrected and collected for online training of the end-to-end recognition sub-module. By combining several algorithms, the system is able to deal with more situations, and the online training mechanism can improve the performance of the neural networks at runtime. The proposed system achieves 93% overall recognition accuracy.

  19. Automated Translation of Safety Critical Application Software Specifications into PLC Ladder Logic

    NASA Technical Reports Server (NTRS)

    Leucht, Kurt W.; Semmel, Glenn S.

    2008-01-01

    The numerous benefits of automatic application code generation are widely accepted within the software engineering community. A few of these benefits include raising the abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at the NASA Kennedy Space Center (KSC) recognized the need for PLC code generation while developing their new ground checkout and launch processing system. They developed a process and a prototype software tool that automatically translates a high-level representation or specification of safety critical application software into ladder logic that executes on a PLC. This process and tool are expected to increase the reliability of the PLC code over that which is written manually, and may even lower life-cycle costs and shorten the development schedule of the new control system at KSC. This paper examines the problem domain and discusses the process and software tool that were prototyped by the KSC software engineers.

  20. First- and Second-Order Sensitivity Analysis of a P-Version Finite Element Equation Via Automatic Differentiation

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    1998-01-01

    Sensitivity analysis is a technique for determining derivatives of system responses with respect to design parameters. Among the many methods available for sensitivity analysis, automatic differentiation has been proven through many applications in fluid dynamics and structural mechanics to be an accurate and easy method for obtaining derivatives. Nevertheless, the method can be computationally expensive and can require a large amount of memory. This project will apply an automatic differentiation tool, ADIFOR, to a p-version finite element code to obtain first- and second-order derivatives. The focus of the study is on the implementation process and the performance of the ADIFOR-enhanced codes for sensitivity analysis in terms of memory requirements, computational efficiency, and accuracy.

  1. Automatic Coding of Short Text Responses via Clustering in Educational Assessment

    ERIC Educational Resources Information Center

    Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank

    2016-01-01

    Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…
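    A hedged sketch of a clustering-based pipeline of this kind (the study's actual components and data are not reproduced here), assuming scikit-learn is available: responses are vectorized with TF-IDF and grouped with k-means, and each cluster would then be mapped to a scoring code. The toy responses are invented.

    ```python
    # Sketch: cluster short text responses so each cluster can be assigned a code.
    # Assumes scikit-learn; the toy responses below are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    responses = [
        "plants need sunlight to make food",
        "photosynthesis uses light energy",
        "the moon orbits the earth",
        "earth goes around the sun",
        "i do not know",
        "no idea",
    ]

    X = TfidfVectorizer().fit_transform(responses)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    for cluster, text in sorted(zip(labels, responses)):
        print(cluster, text)   # a human (or trained model) then assigns a code per cluster
    ```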

  2. 48 CFR 25.401 - Exceptions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Disabled; and (5) Other acquisitions not using full and open competition, if authorized by Subpart 6.2 or 6... table: The service (Federal Service Codes from the Federal Procurement Data System Product/Service Code... military services overseas. X X X X (2) (i) Automatic data processing (ADP) telecommunications and...

  3. The Fortran-P Translator: Towards Automatic Translation of Fortran 77 Programs for Massively Parallel Processors

    DOE PAGES

    O'Keefe, Matthew; Parr, Terence; Edgar, B. Kevin; ...

    1995-01-01

    Massively parallel processors (MPPs) hold the promise of extremely high performance that, if realized, could be used to study problems of unprecedented size and complexity. One of the primary stumbling blocks to this promise has been the lack of tools to translate application codes to MPP form. In this article we show how applications codes written in a subset of Fortran 77, called Fortran-P, can be translated to achieve good performance on several massively parallel machines. This subset can express codes that are self-similar, where the algorithm applied to the global data domain is also applied to each subdomain. We have found many codes that match the Fortran-P programming style and have converted them using our tools. We believe a self-similar coding style will accomplish what a vectorizable style has accomplished for vector machines by allowing the construction of robust, user-friendly, automatic translation systems that increase programmer productivity and generate fast, efficient code for MPPs.

  4. The use of automatic programming techniques for fault tolerant computing systems

    NASA Technical Reports Server (NTRS)

    Wild, C.

    1985-01-01

    It is conjectured that the production of software for ultra-reliable computing systems such as those required by the Space Station, aircraft, nuclear power plants and the like will require a high degree of automation as well as fault tolerance. In this paper, the relationship between automatic programming techniques and fault tolerant computing systems is explored. Initial efforts in the automatic synthesis of code from assertions to be used for error detection, as well as the automatic generation of assertions and test cases from abstract data type specifications, are outlined. Speculation on the ability to generate truly diverse designs capable of recovery from errors by exploring alternate paths in the program synthesis tree is discussed. Some initial thoughts are given on the use of knowledge based systems for the global detection of abnormal behavior using expectations and on the goal-directed reconfiguration of resources to meet critical mission objectives. One of the sources of information for these systems would be the knowledge captured during the automatic programming process.

  5. [Coding Causes of Death with IRIS Software. Impact on the Navarre Mortality Statistics].

    PubMed

    Floristán Floristán, Yugo; Delfrade Osinaga, Josu; Carrillo Prieto, Jesus; Aguirre Perez, Jesus; Moreno-Iribas, Conchi

    2016-08-02

    There are few studies that analyze the changes in mortality statistics derived from the use of IRIS software, an automatic system for coding multiple causes of death and for selecting the underlying cause of death, compared to manual coding. This study evaluated the impact of the use of IRIS on the Navarre mortality statistics. We double-coded 5,060 death certificates corresponding to residents of Navarre in 2014. We calculated the agreement between the two codings for ICD-10 chapters and for the list of causes of the Spanish National Statistics Institute (INE-102), and we estimated the change in mortality rates. IRIS automatically coded 90% of the death certificates. Agreement at the 4-character level and at the chapter level of the ICD-10 was 79.1% and 92.0%, respectively. Furthermore, agreement with the short INE-102 list was 88.3%. Higher agreement was found for death certificates of people under 65 years. In comparison with manual coding there was an increase in deaths from endocrine diseases (31%), mental disorders (19%) and diseases of the nervous system (9%), while a decrease in genitourinary system diseases was observed (21%). Agreement at the level of ICD-10 chapters between IRIS and manual coding was 9 out of 10 deaths, similar to what is observed in other studies. The implementation of IRIS has led to an increase in endocrine diseases, especially diabetes and hyperlipidaemia, and mental disorders, especially dementias.
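    The chapter-level agreement figure reported above is simple to compute once both codings exist; a sketch with invented certificate data follows (the chapter lookup is deliberately simplified and is not the real ICD-10 chapter mapping).

    ```python
    # Sketch: percent agreement between automatic and manual coding at ICD-10
    # chapter level. The certificate pairs below are invented for illustration.
    def chapter(icd10_code):
        """Very rough chapter bucket: here, just the first letter of the code."""
        return icd10_code[0]

    def chapter_agreement(pairs):
        same = sum(1 for auto, manual in pairs if chapter(auto) == chapter(manual))
        return same / len(pairs)

    if __name__ == "__main__":
        coded = [("I21.9", "I25.1"),   # both circulatory -> agree at chapter level
                 ("E11.9", "E11.9"),
                 ("J18.9", "A41.9")]   # respiratory vs. infectious -> disagree
        print(f"chapter-level agreement: {chapter_agreement(coded):.0%}")
    ```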

  6. PACS quality control and automatic problem notifier

    NASA Astrophysics Data System (ADS)

    Honeyman-Buck, Janice C.; Jones, Douglas; Frost, Meryll M.; Staab, Edward V.

    1997-05-01

    One side effect of installing a clinical PACS is that users become dependent upon the technology, and in some cases it can be very difficult to revert back to a film-based system if components fail. The nature of system failures ranges from slow deterioration of function, as seen in the loss of monitor luminance, to sudden catastrophic loss of the entire PACS network. This paper describes the quality control procedures in place at the University of Florida and the automatic notification system that alerts PACS personnel when a failure has happened or is anticipated. The goal is to recover from a failure with a minimum of downtime and no data loss. Routine quality control is practiced on all aspects of PACS, from acquisition, through network routing, through display, and including archiving. Whenever possible, the system components perform self-checks and cross-platform checks for active processes, file system status, errors in log files, and system uptime. When an error is detected or an exception occurs, an automatic page is sent to a pager with a diagnostic code. Documentation on each code, troubleshooting procedures, and repairs is kept on an intranet server accessible only to people involved in maintaining the PACS. In addition to the automatic paging system for error conditions, acquisition is assured by an automatic fax report sent on a daily basis to all technologists acquiring PACS images, to be used as a cross-check that all studies are archived prior to being removed from the acquisition systems. Daily quality control is performed to assure that studies can be moved from each acquisition system and that contrast adjustment is correct. The results of selected quality control reports will be presented. The intranet documentation server will be described along with the automatic pager system. Monitor quality control reports will be described and the cost of quality control will be quantified. As PACS is accepted as a clinical tool, the same standards of quality control must be established as are expected of other equipment used in the diagnostic process.

  7. From Verified Models to Verifiable Code

    NASA Technical Reports Server (NTRS)

    Lensink, Leonard; Munoz, Cesar A.; Goodloe, Alwyn E.

    2009-01-01

    Declarative specifications of digital systems often contain parts that can be automatically translated into executable code. Automated code generation may reduce or eliminate the kinds of errors typically introduced through manual code writing. For this approach to be effective, the generated code should be reasonably efficient and, more importantly, verifiable. This paper presents a prototype code generator for the Prototype Verification System (PVS) that translates a subset of PVS functional specifications into an intermediate language and subsequently to multiple target programming languages. Several case studies are presented to illustrate the tool's functionality. The generated code can be analyzed by software verification tools such as verification condition generators, static analyzers, and software model-checkers to increase the confidence that the generated code is correct.

  8. Method and apparatus for data decoding and processing

    DOEpatents

    Hunter, Timothy M.; Levy, Arthur J.

    1992-01-01

    A system and technique are disclosed for automatically controlling the decoding and digitization of an analog tape. The system includes the use of a tape data format which includes a plurality of digital codes recorded on the analog tape in a predetermined proximity to a period of recorded analog data. The codes associated with each period of analog data include digital identification codes prior to the analog data, a start-of-data code coincident with the analog data recording, and an end-of-data code subsequent to the associated period of recorded analog data. The formatted tape is decoded in a processing and digitization system which includes an analog tape player coupled to a digitizer to transmit analog information from the recorded tape over at least one channel to the digitizer. At the same time, the tape player is coupled to a decoder and interface system which detects and decodes the digital codes on the tape corresponding to each period of recorded analog data and controls tape movement and digitizer initiation in response to preprogrammed modes. A host computer is also coupled to the decoder and interface system and the digitizer and programmed to initiate specific modes of data decoding through the decoder and interface system, including the automatic compilation and storage of digital identification information and digitized data for the period of recorded analog data corresponding to the digital identification data, compilation and storage of selected digitized data representing periods of recorded analog data, and compilation of digital identification information related to each of the periods of recorded analog data.

  9. Return Difference Feedback Design for Robust Uncertainty Tolerance in Stochastic Multivariable Control Systems.

    DTIC Science & Technology

    1982-11-01

    [Report documentation page excerpt; no abstract text recovered. Performing organization: University of Southern California, Los Angeles, Department of Electrical Engineering. Subject terms: systems theory; control; feedback; automatic control.]

  10. Management of natural resources through automatic cartographic inventory

    NASA Technical Reports Server (NTRS)

    Rey, P.; Gourinard, Y.; Cambou, F. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Significant results of the ARNICA program from August 1972 - January 1973 have been: (1) establishment of image to object correspondence codes for all types of soil use and forestry in northern Spain; (2) establishment of a transfer procedure between qualitative (remote identification and remote interpretation) and quantitative (numerization, storage, automatic statistical cartography) use of images; (3) organization of microdensitometric data processing and automatic cartography software; and (4) development of a system for measuring reflectance simultaneous with imagery.

  11. Automatic Residential/Commercial Classification of Parcels with Solar Panel Detections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morton, April M; Omitaomu, Olufemi A; Kotikot, Susan

    This software provides a computational method to automatically detect solar panels on rooftops in order to aid policy and financial assessment of solar distributed generation. The code automatically classifies parcels containing solar panels in the U.S. as residential or commercial. The code allows the user to specify an input dataset containing parcels and detected solar panels, and then uses information about the parcels and solar panels to automatically classify the rooftops as residential or commercial using machine learning techniques. The zip file containing the code includes sample input and output datasets for the Boston and DC areas.
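
    The record describes a supervised classifier that labels parcels with detected solar panels as residential or commercial. A minimal sketch of that kind of pipeline is shown below; the feature names and the random-forest choice are assumptions made for illustration, not details taken from the OSTI record.

      import pandas as pd
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      # Hypothetical parcel/panel features; the real input schema is not described here.
      parcels = pd.DataFrame({
          "parcel_area_m2":   [400, 5200, 350, 8000, 450, 6100],
          "building_area_m2": [150, 2500, 120, 3900, 160, 2800],
          "panel_area_m2":    [20, 300, 15, 450, 25, 350],
          "num_panels":       [8, 120, 6, 180, 10, 140],
          "label":            ["residential", "commercial", "residential",
                               "commercial", "residential", "commercial"],
      })

      X, y = parcels.drop(columns="label"), parcels["label"]
      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
      print(clf.predict(X_test))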

  12. Faunus: An object oriented framework for molecular simulation

    PubMed Central

    Lund, Mikael; Trulsson, Martin; Persson, Björn

    2008-01-01

    Background We present a C++ class library for Monte Carlo simulation of molecular systems, including proteins in solution. The design is generic and highly modular, enabling multiple developers to easily implement additional features. The statistical mechanical methods are documented by extensive use of code comments that – subsequently – are collected to automatically build a web-based manual. Results We show how an object oriented design can be used to create an intuitively appealing coding framework for molecular simulation. This is exemplified in a minimalistic C++ program that can calculate protein protonation states. We further discuss performance issues related to high level coding abstraction. Conclusion C++ and the Standard Template Library (STL) provide a high-performance platform for generic molecular modeling. Automatic generation of code documentation from inline comments has proven particularly useful in that no separate manual needs to be maintained. PMID:18241331

  13. Automatic programming of simulation models

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1988-01-01

    The objective of automatic programming is to improve the overall environment for describing the program. This improved environment is realized by a reduction in the amount of detail that the programmer needs to know and is exposed to. Furthermore, this improved environment is achieved by a specification language that is more natural to the user's problem domain and to the user's way of thinking and looking at the problem. The goal of this research is to apply the concepts of automatic programming (AP) to modeling discrete event simulation systems. Specific emphasis is on the design and development of simulation tools to assist the modeler in defining or constructing a model of the system and then automatically writing the corresponding simulation code in the target simulation language, GPSS/PC. A related goal is to evaluate the feasibility of various languages for constructing automatic programming simulation tools.

  14. Automatic Rock Detection and Mapping from HiRISE Imagery

    NASA Technical Reports Server (NTRS)

    Huertas, Andres; Adams, Douglas S.; Cheng, Yang

    2008-01-01

    This system includes a C-code software program and a set of MATLAB software tools for statistical analysis and rock distribution mapping. The major functions include rock detection and rock detection validation. The rock detection code has evolved into a production tool that can be used by engineers and geologists with minor training.

  15. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
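
    The Basic Compressor described above picks, for every block of 21 pixels, whichever of three candidate codes is cheapest for the current statistics. The sketch below illustrates that selection idea with Rice/Golomb-style codes of three different parameters; it is a simplified stand-in, not the actual Rice-Plaunt algorithm.

      def rice_length(value, k):
          """Bit length of a non-negative value under a Rice code with parameter k."""
          return (value >> k) + 1 + k      # unary quotient + stop bit + k remainder bits

      def choose_code(block, candidates=(0, 1, 2)):
          """Pick the cheapest of three candidate codes for one 21-pixel block."""
          costs = {k: sum(rice_length(v, k) for v in block) for k in candidates}
          return min(costs, key=costs.get), costs

      # Prediction residuals (sample-to-sample differences), mapped to non-negative ints.
      block = [0, 1, 0, 3, 2, 0, 1, 5, 0, 0, 2, 1, 0, 4, 1, 0, 0, 3, 1, 2, 0]
      k, costs = choose_code(block)
      print(f"selected code k={k}, bits per option: {costs}")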

  16. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nataf, J.M.; Winkelmann, F.

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.

  18. Vaccine Hesitancy in Discussion Forums: Computer-Assisted Argument Mining with Topic Models.

    PubMed

    Skeppstedt, Maria; Kerren, Andreas; Stede, Manfred

    2018-01-01

    Arguments used when vaccination is debated on Internet discussion forums might give us valuable insights into reasons behind vaccine hesitancy. In this study, we applied automatic topic modelling to a collection of 943 discussion posts in which vaccines were debated, and six distinct discussion topics were detected by the algorithm. When the posts ranked as most typical of these six topics were manually coded, a set of semantically coherent arguments was identified for each extracted topic. This indicates that topic modelling is a useful method for automatically identifying vaccine-related discussion topics and for identifying debate posts where these topics are discussed. This functionality could facilitate manual coding of salient arguments, and thereby form an important component in a system for computer-assisted coding of vaccine-related discussions.
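
    As an illustration of the kind of automatic topic modelling the study applied, the sketch below fits a small latent Dirichlet allocation model with scikit-learn. The toy posts are invented and the preprocessing is an assumption; only the six-topic setting mirrors the abstract.

      from sklearn.decomposition import LatentDirichletAllocation
      from sklearn.feature_extraction.text import CountVectorizer

      posts = [
          "vaccines cause side effects in children",
          "herd immunity protects people who cannot be vaccinated",
          "pharmaceutical companies profit from mandatory vaccination",
          "measles outbreaks follow falling vaccination rates",
          # ... the real corpus contained 943 forum posts
      ]

      vectorizer = CountVectorizer(stop_words="english")
      X = vectorizer.fit_transform(posts)

      lda = LatentDirichletAllocation(n_components=6, random_state=0).fit(X)
      terms = vectorizer.get_feature_names_out()
      for i, topic in enumerate(lda.components_):
          top = [terms[j] for j in topic.argsort()[-3:][::-1]]
          print(f"topic {i}: {top}")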

  19. Automating Traceability for Generated Software Artifacts

    NASA Technical Reports Server (NTRS)

    Richardson, Julian; Green, Jeffrey

    2004-01-01

    Program synthesis automatically derives programs from specifications of their behavior. One advantage of program synthesis, as opposed to manual coding, is that there is a direct link between the specification and the derived program. This link is, however, not very fine-grained: it can be best characterized as Program is-derived-from Specification. When the generated program needs to be understood or modified, more fine-grained linking is useful. In this paper, we present a novel technique for automatically deriving traceability relations between parts of a specification and parts of the synthesized program. The technique is very lightweight and works, with varying degrees of success, for any process in which one artifact is automatically derived from another. We illustrate the generality of the technique by applying it to two kinds of automatic generation: synthesis of Kalman filter programs from specifications using the AutoFilter program synthesis system, and generation of assembly language programs from C source code using the GCC C compiler. We evaluate the effectiveness of the technique in the latter application.

  20. Interface Control Document for the EMPACT Module that Estimates Electric Power Transmission System Response to EMP-Caused Damage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werley, Kenneth Alan; Mccown, Andrew William

    The EPREP code is designed to evaluate the effects of an Electro-Magnetic Pulse (EMP) on the electric power transmission system. The EPREP code embodies an umbrella framework that allows a user to set up analysis conditions and to examine analysis results. The code links to three major physics/engineering modules. The first module describes the EM wave in space and time. The second module evaluates the damage caused by the wave on specific electric power (EP) transmission system components. The third module evaluates the consequence of the damaged network on its (reduced) ability to provide electric power to meet demand. This third module is the focus of the present paper. The EMPACT code serves as the third module. The EMPACT name denotes EMP effects on Alternating Current Transmission systems. The EMPACT algorithms compute electric power transmission network flow solutions under severely damaged network conditions. Initial solutions are often characterized by unacceptable network conditions, including line overloads and bad voltages. The EMPACT code contains algorithms to optimally adjust network parameters to eliminate network problems while minimizing outages. System adjustments include automatically adjusting control equipment (generator V control, variable transformers, and variable shunts), as well as non-automatic control of generator power settings and minimal load shedding. The goal is to evaluate the minimal loss of customer load under equilibrium (steady-state) conditions during peak demand.

  1. Automated apparatus and method of generating native code for a stitching machine

    NASA Technical Reports Server (NTRS)

    Miller, Jeffrey L. (Inventor)

    2000-01-01

    A computer system automatically generates CNC code for a stitching machine. The computer determines the locations of a present stitching point and a next stitching point. If a constraint is not found between the present stitching point and the next stitching point, the computer generates code for making a stitch at the next stitching point. If a constraint is found, the computer generates code for changing a condition (e.g., direction) of the stitching machine's stitching head.
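
    The decision logic described in the patent is simple: emit a stitch command unless a constraint lies between the current and next stitching points, in which case emit a head-condition change first. A toy sketch of that branch is shown below; the command mnemonics and the constraint test are invented for illustration, not the machine's real CNC codes.

      def generate_stitch_code(points, constraint_between):
          """Emit pseudo-CNC commands for a list of stitching points.

          constraint_between(p, q) -> True if a constraint lies between p and q.
          """
          program = []
          for present, nxt in zip(points, points[1:]):
              if constraint_between(present, nxt):
                  program.append(f"CHANGE_DIRECTION AT X{present[0]} Y{present[1]}")
              program.append(f"STITCH X{nxt[0]} Y{nxt[1]}")
          return program

      points = [(0, 0), (5, 0), (5, 5), (0, 5)]
      no_cross = lambda p, q: p[1] != q[1]    # pretend vertical moves cross a constraint
      print("\n".join(generate_stitch_code(points, no_cross)))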

  2. Automated Concurrent Blackboard System Generation in C++

    NASA Technical Reports Server (NTRS)

    Kaplan, J. A.; McManus, J. W.; Bynum, W. L.

    1999-01-01

    In his 1992 Ph.D. thesis, "Design and Analysis Techniques for Concurrent Blackboard Systems", John McManus defined several performance metrics for concurrent blackboard systems and developed a suite of tools for creating and analyzing such systems. These tools allow a user to analyze a concurrent blackboard system design and predict the performance of the system before any code is written. The design can be modified until simulated performance is satisfactory. Then, the code generator can be invoked to generate automatically all of the code required for the concurrent blackboard system except for the code implementing the functionality of each knowledge source. We have completed the port of the source code generator and a simulator for a concurrent blackboard system. The source code generator generates the necessary C++ source code to implement the concurrent blackboard system using Parallel Virtual Machine (PVM) running on a heterogeneous network of UNIX(trademark) workstations. The concurrent blackboard simulator uses the blackboard specification file to predict the performance of the concurrent blackboard design. The only part of the source code for the concurrent blackboard system that the user must supply is the code implementing the functionality of the knowledge sources.

  3. The Use of a Code-generating System for the Derivation of the Equations for Wind Turbine Dynamics

    NASA Astrophysics Data System (ADS)

    Ganander, Hans

    2003-10-01

    For many reasons the size of wind turbines on the rapidly growing wind energy market is increasing. Relations between aeroelastic properties of these new large turbines change. Modifications of turbine designs and control concepts are also influenced by growing size. All these trends require development of computer codes for design and certification. Moreover, there is a strong desire for design optimization procedures, which require fast codes. General codes, e.g. finite element codes, normally allow such modifications and improvements of existing wind turbine models, and this is done relatively easily. However, the calculation times of such codes are unfavourably long, certainly for optimization use. The use of an automatic code-generating system is an alternative that addresses both key issues, the code and the design optimization. This technique can be used for rapid generation of codes for particular wind turbine simulation models. These ideas have been followed in the development of new versions of the wind turbine simulation code VIDYN. The equations of the simulation model were derived according to the Lagrange equation and using Mathematica®, which was directed to output the results in Fortran code format. In this way the simulation code is automatically adapted to an actual turbine model, in terms of subroutines containing the equations of motion, definitions of parameters and degrees of freedom. Since the start in 1997, these methods, constituting a systematic way of working, have been used to develop specific efficient calculation codes. The experience with this technique has been very encouraging, inspiring the continued development of new versions of the simulation code as the need has arisen, and the interest in design optimization is growing.
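
    The workflow described here (derive the equations of motion symbolically, then emit Fortran) can be mimicked in miniature with SymPy in place of Mathematica. The pendulum Lagrangian below is purely illustrative; the actual turbine model derived for VIDYN is far larger.

      import sympy as sp
      from sympy.calculus.euler import euler_equations

      t = sp.Symbol('t')
      m, g, l = sp.symbols('m g l', positive=True)
      q = sp.Function('q')(t)

      # Lagrangian of a simple pendulum (kinetic minus potential energy)
      L = sp.Rational(1, 2) * m * l**2 * q.diff(t)**2 + m * g * l * sp.cos(q)

      # Euler-Lagrange equation of motion, derived symbolically
      eq = euler_equations(L, [q], t)[0]
      qdd = sp.Symbol('qdd')
      sol = sp.solve(eq.subs(q.diff(t, 2), qdd), qdd)[0]   # -> -g*sin(q)/l

      # Emit Fortran source for the angular acceleration, mirroring the
      # Mathematica-to-Fortran flow used for the simulation subroutines.
      print(sp.fcode(sol.subs(q, sp.Symbol('q')), assign_to='qdd', source_format='free'))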

  4. Five-way smoking status classification using text hot-spot identification and error-correcting output codes.

    PubMed

    Cohen, Aaron M

    2008-01-01

    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
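
    Error-correcting output codes, one of the techniques the submission combined, are available off the shelf; the sketch below wires one around a linear classifier with scikit-learn. The toy sentences and labels are placeholders, not i2b2 data, and the feature and estimator choices are assumptions.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.multiclass import OutputCodeClassifier
      from sklearn.pipeline import make_pipeline

      docs = [
          "patient quit smoking two years ago",
          "denies any history of tobacco use",
          "smokes one pack per day",
          "former smoker, quit in 1998",
          "current every day smoker",
          "tobacco history unknown",
      ]
      labels = ["past", "never", "current", "past", "current", "unknown"]

      # Each class gets a binary code word; one binary classifier is trained per
      # code bit, and prediction picks the class with the closest code word.
      model = make_pipeline(
          TfidfVectorizer(),
          OutputCodeClassifier(LogisticRegression(max_iter=1000), code_size=4, random_state=0),
      )
      model.fit(docs, labels)
      print(model.predict(["no longer smokes cigarettes"]))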

  5. Analysis of automatic repeat request methods for deep-space downlinks

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Ekroot, L.

    1995-01-01

    Automatic repeat request (ARQ) methods cannot increase the capacity of a memoryless channel. However, they can be used to decrease the complexity of the channel-coding system to achieve essentially error-free transmission and to reduce link margins when the channel characteristics are poorly predictable. This article considers ARQ methods on a power-limited channel (e.g., the deep-space channel), where it is important to minimize the total power needed to transmit the data, as opposed to a bandwidth-limited channel (e.g., terrestrial data links), where the spectral efficiency or the total required transmission time is the most relevant performance measure. In the analysis, we compare the performance of three reference concatenated coded systems used in actual deep-space missions to that obtainable by ARQ methods using the same codes, in terms of required power, time to transmit with a given number of retransmissions, and achievable probability of word error. The ultimate limits of ARQ with an arbitrary number of retransmissions are also derived.
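
    One quantity such a comparison turns on is the expected number of transmissions per codeword for a given word error probability, and the average energy that implies. The bookkeeping below is a tiny sketch assuming independent attempts and an unlimited retry budget, which simplifies the article's actual setting.

      import math

      def expected_transmissions(p_word_error: float) -> float:
          """Mean number of transmissions per codeword with unlimited retries."""
          return 1.0 / (1.0 - p_word_error)       # mean of a geometric distribution

      def average_energy_penalty_db(p_word_error: float) -> float:
          """Extra average transmit energy, in dB, paid for the retransmissions."""
          return 10.0 * math.log10(expected_transmissions(p_word_error))

      for p in (0.01, 0.05, 0.2):
          print(f"P(word error)={p}: {expected_transmissions(p):.3f} transmissions, "
                f"{average_energy_penalty_db(p):.2f} dB energy penalty")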

  6. GSE, data management system programmers/User' manual

    NASA Technical Reports Server (NTRS)

    Schlagheck, R. A.; Dolerhie, B. D., Jr.; Ghiglieri, F. J.

    1974-01-01

    The GSE data management system is a computerized program which provides for a central storage source for key data associated with the mechanical ground support equipment (MGSE). Eight major sort modes can be requested by the user. Attributes that are printed automatically with each sort include the GSE end item number, description, class code, functional code, fluid media, use location, design responsibility, weight, cost, quantity, dimensions, and applicable documents. Multiple subsorts are available for the class code, functional code, fluid media, use location, design responsibility, and applicable document categories. These sorts and how to use them are described. The program and GSE data bank may be easily updated and expanded.

  7. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1994-01-01

    The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly with accuracy). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems are solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two for which additional systems are solved, the equations and solution procedures are analogous to those for the first order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.
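
    Forward-mode automatic differentiation, the building block behind tools like ADIFOR, propagates derivative values through each arithmetic operation via the chain rule. The dual-number sketch below shows the idea in a few lines; it is a conceptual illustration only, not the hybrid incremental iterative scheme of the paper.

      import math

      class Dual:
          """Value plus derivative, propagated operation by operation (forward mode)."""
          def __init__(self, val, dot=0.0):
              self.val, self.dot = val, dot
          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val + o.val, self.dot + o.dot)
          __radd__ = __add__
          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
          __rmul__ = __mul__

      def sin(x):
          return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

      # d/dx [x*sin(x) + 3x] at x = 2, seeded with dx/dx = 1
      x = Dual(2.0, 1.0)
      y = x * sin(x) + 3 * x
      print(y.val, y.dot)   # derivative equals sin(2) + 2*cos(2) + 3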

  8. Coding hazardous tree failures for a data management system

    Treesearch

    Lee A. Paine

    1978-01-01

    Codes for automatic data processing (ADP) are provided for hazardous tree failure data submitted on Report of Tree Failure forms. Definitions of data items and suggestions for interpreting ambiguously worded reports are also included. The manual is intended to ensure the production of accurate and consistent punched ADP cards which are used in transfer of the data to...

  9. An Analysis of Elliptic Grid Generation Techniques Using an Implicit Euler Solver.

    DTIC Science & Technology

    1986-06-09

    Only fragments of the scanned abstract are legible. They concern automatic determination of the control functions (elements of the covariant metric tensor) in the elliptic grid generation system of a computational fluid dynamics code; the code includes a three-dimensional algebraic generation system based on transfinite ... used to start the iterative solution of the flow, heat transfer, and combustion problems by the elliptic generation system.

  10. Generating Customized Verifiers for Automatically Generated Code

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd

    2008-01-01

    Program verification using Hoare-style techniques requires many logical annotations. We have previously developed a generic annotation inference algorithm that weaves in all annotations required to certify safety properties for automatically generated code. It uses patterns to capture generator- and property-specific code idioms and property-specific meta-program fragments to construct the annotations. The algorithm is customized by specifying the code patterns and integrating them with the meta-program fragments for annotation construction. However, this is difficult since it involves tedious and error-prone low-level term manipulations. Here, we describe an annotation schema compiler that largely automates this customization task using generative techniques. It takes a collection of high-level declarative annotation schemas tailored towards a specific code generator and safety property, and generates all customized analysis functions and glue code required for interfacing with the generic algorithm core, thus effectively creating a customized annotation inference algorithm. The compiler raises the level of abstraction and simplifies schema development and maintenance. It also takes care of some more routine aspects of formulating patterns and schemas, in particular handling of irrelevant program fragments and irrelevant variance in the program structure, which reduces the size, complexity, and number of different patterns and annotation schemas that are required. The improvements described here make it easier and faster to customize the system to a new safety property or a new generator, and we demonstrate this by customizing it to certify frame safety of space flight navigation code that was automatically generated from Simulink models by MathWorks' Real-Time Workshop.

  11. Modular Expression Language for Ordinary Differential Equation Editing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blake, Robert C.

    MELODEE is a system for describing systems of initial value problem ordinary differential equations, and a compiler for the language that produces optimized code to integrate the differential equations. Features include rational polynomial approximation for expensive functions and automatic differentiation for symbolic Jacobians.

  12. A System to Automatically Classify and Name Any Individual Genome-Sequenced Organism Independently of Current Biological Classification and Nomenclature

    PubMed Central

    Song, Yuhyun; Leman, Scotland; Monteil, Caroline L.; Heath, Lenwood S.; Vinatzer, Boris A.

    2014-01-01

    A broadly accepted and stable biological classification system is a prerequisite for biological sciences. It provides the means to describe and communicate about life without ambiguity. Current biological classification and nomenclature use the species as the basic unit and require lengthy and laborious species descriptions before newly discovered organisms can be assigned to a species and be named. The current system is thus inadequate to classify and name the immense genetic diversity within species that is now being revealed by genome sequencing on a daily basis. To address this lack of a general intra-species classification and naming system adequate for today’s speed of discovery of new diversity, we propose a classification and naming system that is exclusively based on genome similarity and that is suitable for automatic assignment of codes to any genome-sequenced organism without requiring any phenotypic or phylogenetic analysis. We provide examples demonstrating that genome similarity-based codes largely align with current taxonomic groups at many different levels in bacteria, animals, humans, plants, and viruses. Importantly, the proposed approach is only slightly affected by the order of code assignment and can thus provide codes that reflect similarity between organisms and that do not need to be revised upon discovery of new diversity. We envision genome similarity-based codes to complement current biological nomenclature and to provide a universal means to communicate unambiguously about any genome-sequenced organism in fields as diverse as biodiversity research, infectious disease control, human and microbial forensics, animal breed and plant cultivar certification, and human ancestry research. PMID:24586551

  13. HOMAR: A computer code for generating homotopic grids using algebraic relations: User's manual

    NASA Technical Reports Server (NTRS)

    Moitra, Anutosh

    1989-01-01

    A computer code for fast automatic generation of quasi-three-dimensional grid systems for aerospace configurations is described. The code employs a homotopic method to algebraically generate two-dimensional grids in cross-sectional planes, which are stacked to produce a three-dimensional grid system. Implementation of the algebraic equivalents of the homotopic relations for generating body geometries and grids are explained. Procedures for controlling grid orthogonality and distortion are described. Test cases with description and specification of inputs are presented in detail. The FORTRAN computer program and notes on implementation and use are included.

  14. Development of the FHR advanced natural circulation analysis code and application to FHR safety analysis

    DOE PAGES

    Guo, Z.; Zweibaum, N.; Shao, M.; ...

    2016-04-19

    The University of California, Berkeley (UCB) is performing thermal hydraulics safety analysis to develop the technical basis for design and licensing of fluoride-salt-cooled, high-temperature reactors (FHRs). FHR designs investigated by UCB use natural circulation for emergency, passive decay heat removal when normal decay heat removal systems fail. The FHR advanced natural circulation analysis (FANCY) code has been developed for assessment of passive decay heat removal capability and safety analysis of these innovative system designs. The FANCY code uses a one-dimensional, semi-implicit scheme to solve for pressure-linked mass, momentum and energy conservation equations. Graph theory is used to automatically generate a staggered mesh for complicated pipe network systems. Heat structure models have been implemented for three types of boundary conditions (Dirichlet, Neumann and Robin boundary conditions). Heat structures can be composed of several layers of different materials, and are used for simulation of heat structure temperature distribution and heat transfer rate. Control models are used to simulate sequences of events or trips of safety systems. A proportional-integral controller is also used to automatically make thermal hydraulic systems reach desired steady state conditions. A point kinetics model is used to model reactor kinetics behavior with temperature reactivity feedback. The underlying large sparse linear systems in these models are efficiently solved by using direct and iterative solvers provided by the SuperLU code on high performance machines. Input interfaces are designed to increase the flexibility of simulation for complicated thermal hydraulic systems. In conclusion, this paper mainly focuses on the methodology used to develop the FANCY code, and safety analysis of the Mark 1 pebble-bed FHR under development at UCB is performed.
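
    The abstract notes that the large sparse linear systems arising from the semi-implicit discretization are handed to SuperLU. The same library is exposed through SciPy, so an unrelated, minimal sketch of a sparse LU factor-and-solve looks like this (the small matrix stands in for the pressure-linked conservation equations only as an illustration):

      import numpy as np
      from scipy.sparse import csc_matrix
      from scipy.sparse.linalg import splu   # SciPy's interface to SuperLU

      A = csc_matrix(np.array([
          [ 4.0, -1.0,  0.0,  0.0],
          [-1.0,  4.0, -1.0,  0.0],
          [ 0.0, -1.0,  4.0, -1.0],
          [ 0.0,  0.0, -1.0,  4.0],
      ]))
      b = np.array([1.0, 2.0, 2.0, 1.0])

      lu = splu(A)            # factor once ...
      x = lu.solve(b)         # ... then reuse the factors for each new right-hand side
      print(x, np.allclose(A @ x, b))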

  15. Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, S.

    2002-07-01

    As the result of the advancing TCP/IP based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches. Therefore, the execution of the coupled codes usually requires significant CPU time when a complicated system is analyzed. The Leap Frog scheme has been used to reduce the run time. The extent of the decoupling is usually determined based on a trial and error process for a specific analysis. It is the intent of this paper to develop a set of general criteria which can be used to invoke the automatic Leap Frog algorithm. The algorithm will not only provide the run time reduction but also preserve the accuracy. The criteria will also serve as the base of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced and the thermal-hydraulic code is marching with a relatively large time step. (authors)

  16. Automatic Ammunition Identification Technology Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weil, B.

    1993-01-01

    The Automatic Ammunition Identification Technology (AAIT) Project is an activity of the Robotics & Process Systems Division at the Oak Ridge National Laboratory (ORNL) for the US Army's Project Manager-Ammunition Logistics (PM-AMMOLOG) at the Picatinny Arsenal in Picatinny, New Jersey. The project objective is to evaluate new two-dimensional bar code symbologies for potential use in ammunition logistics systems and automated reloading equipment. These new symbologies are a significant improvement over typical linear bar codes since machine-readable alphanumeric messages up to 2000 characters long are achievable. These compressed data symbologies are expected to significantly improve logistics and inventory management tasks and permit automated feeding and handling of ammunition to weapon systems. The results will be increased throughput capability, better inventory control, reduction of human error, lower operation and support costs, and a more timely re-supply of various weapon systems. This paper will describe the capabilities of existing compressed data symbologies and the symbol testing activities being conducted at ORNL for the AAIT Project.

  17. Automatic Ammunition Identification Technology Project. Ammunition Logistics Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weil, B.

    1993-03-01

    The Automatic Ammunition Identification Technology (AAIT) Project is an activity of the Robotics & Process Systems Division at the Oak Ridge National Laboratory (ORNL) for the US Army's Project Manager-Ammunition Logistics (PM-AMMOLOG) at the Picatinny Arsenal in Picatinny, New Jersey. The project objective is to evaluate new two-dimensional bar code symbologies for potential use in ammunition logistics systems and automated reloading equipment. These new symbologies are a significant improvement over typical linear bar codes since machine-readable alphanumeric messages up to 2000 characters long are achievable. These compressed data symbologies are expected to significantly improve logistics and inventory management tasks and permit automated feeding and handling of ammunition to weapon systems. The results will be increased throughput capability, better inventory control, reduction of human error, lower operation and support costs, and a more timely re-supply of various weapon systems. This paper will describe the capabilities of existing compressed data symbologies and the symbol testing activities being conducted at ORNL for the AAIT Project.

  18. Tracking multiple surgical instruments in a near-infrared optical system.

    PubMed

    Cai, Ken; Yang, Rongqian; Lin, Qinyong; Wang, Zhigang

    2016-12-01

    Surgical navigation systems can assist doctors in performing more precise and more efficient surgical procedures to avoid various accidents. The near-infrared optical system (NOS) is an important component of surgical navigation systems. However, several surgical instruments are used during surgery, and effectively tracking all of them is challenging. A stereo matching algorithm using two intersecting lines and surgical instrument codes is proposed in this paper. In our NOS, the markers on the surgical instruments can be captured by two near-infrared cameras. After automatically searching and extracting their subpixel coordinates in the left and right images, the coordinates of the real and pseudo markers are determined by the two intersecting lines. Finally, the pseudo markers are removed to achieve accurate stereo matching by summing the codes for the distances between a specific marker and the other two markers on the surgical instrument. Experimental results show that the markers on the different surgical instruments can be automatically and accurately recognized. The NOS can accurately track multiple surgical instruments.
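
    The matching step described here identifies an instrument by summing the distances from one marker to the other two and comparing that sum against the known code for each instrument. A simplified sketch of that distance-code test follows; the tolerance and marker layouts are invented for illustration.

      import numpy as np

      def distance_code(markers):
          """Sum of distances from the first marker to the other two."""
          a, b, c = (np.asarray(m, dtype=float) for m in markers)
          return np.linalg.norm(a - b) + np.linalg.norm(a - c)

      # Known instruments and their pre-measured codes (hypothetical geometry, in mm).
      catalog = {"probe": 135.0, "pointer": 182.0}
      TOL = 2.0

      def identify(markers):
          code = distance_code(markers)
          for name, ref in catalog.items():
              if abs(code - ref) < TOL:
                  return name, code
          return None, code     # pseudo markers / unknown tools fail the code test

      print(identify([(0, 0, 0), (60, 0, 0), (0, 75, 0)]))   # code = 135 -> "probe"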

  19. Mining Software Usage with the Automatic Library Tracking Database (ALTD)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadri, Bilel; Fahey, Mark R

    2013-01-01

    Tracking software usage is important for HPC centers, computer vendors, code developers and funding agencies to provide more efficient and targeted software support, and to forecast needs and guide HPC software effort towards the Exascale era. However, accurately tracking software usage on HPC systems has been a challenging task. In this paper, we present a tool called Automatic Library Tracking Database (ALTD) that has been developed and put in production on several Cray systems. The ALTD infrastructure prototype automatically and transparently stores information about libraries linked into an application at compilation time and also the executables launched in a batch job. We will illustrate the usage of libraries, compilers and third party software applications on a system managed by the National Institute for Computational Sciences.

  20. Support for Debugging Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Jost, Gabriele

    2001-01-01

    This viewgraph presentation provides information on support sources available for the automatic parallelization of computer program. CAPTools, a support tool developed at the University of Greenwich, transforms, with user guidance, existing sequential Fortran code into parallel message passing code. Comparison routines are then run for debugging purposes, in essence, ensuring that the code transformation was accurate.

  1. Automatic Coding of Dialogue Acts in Collaboration Protocols

    ERIC Educational Resources Information Center

    Erkens, Gijsbert; Janssen, Jeroen

    2008-01-01

    Although protocol analysis can be an important tool for researchers to investigate the process of collaboration and communication, the use of this method of analysis can be time consuming. Hence, an automatic coding procedure for coding dialogue acts was developed. This procedure helps to determine the communicative function of messages in online…

  2. Building Automatic Grading Tools for Basic of Programming Lab in an Academic Institution

    NASA Astrophysics Data System (ADS)

    Harimurti, Rina; Iwan Nurhidayat, Andi; Asmunin

    2018-04-01

    The skill of computer programming is a core competency that must be mastered by students majoring in computer science. The best way to improve this skill is through the practice of writing many programs to solve various problems, from simple to complex. It takes hard work and a long time to check and evaluate the results of student labs one by one, especially if the number of students is large. Based on these constraints, we propose Automatic Grading Tools (AGT), an application that can evaluate and deeply check source code written in C and C++. The application architecture consists of students, a web-based application, compilers, and the operating system. Automatic Grading Tools (AGT) is implemented with an MVC architecture and uses open source software such as the Laravel framework version 5.4, PostgreSQL 9.6, Bootstrap 3.3.7, and the jQuery library. Automatic Grading Tools has also been tested on real problems by submitting source code in C/C++ and then compiling it. The test results show that the AGT application runs well.
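
    At its core, a checker of this kind compiles a submission, runs it against test cases, and compares the output with the expected answer. The sketch below shows that loop using gcc via subprocess; the paths, time limit, and expected-output format are assumptions, not details from the paper, and gcc is assumed to be installed.

      import subprocess
      import tempfile
      from pathlib import Path

      def grade_submission(source: str, tests: list[tuple[str, str]]) -> float:
          """Compile a C submission and return the fraction of test cases passed."""
          with tempfile.TemporaryDirectory() as tmp:
              src = Path(tmp) / "main.c"
              exe = Path(tmp) / "main"
              src.write_text(source)
              build = subprocess.run(["gcc", src, "-o", exe], capture_output=True)
              if build.returncode != 0:
                  return 0.0                          # compilation error -> no credit
              passed = 0
              for stdin_data, expected in tests:
                  run = subprocess.run([exe], input=stdin_data, text=True,
                                       capture_output=True, timeout=2)
                  passed += run.stdout.strip() == expected.strip()
              return passed / len(tests)

      code = '#include <stdio.h>\nint main(){int a,b;scanf("%d %d",&a,&b);printf("%d\\n",a+b);}'
      print(grade_submission(code, [("1 2", "3"), ("10 5", "15")]))   # -> 1.0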

  3. ASA24 enables multiple automatically coded self-administered 24-hour recalls and food records

    Cancer.gov

    A freely available web-based tool for epidemiologic, interventional, behavioral, or clinical research from NCI that enables multiple automatically coded self-administered 24-hour recalls and food records.

  4. 40 CFR 51.362 - Motorist compliance enforcement program oversight.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... collection through the use of automatic data capture systems such as bar-code scanners or optical character... determination of compliance through parking lot surveys, road-side pull-overs, or other in-use vehicle...

  5. 40 CFR 51.362 - Motorist compliance enforcement program oversight.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... collection through the use of automatic data capture systems such as bar-code scanners or optical character... determination of compliance through parking lot surveys, road-side pull-overs, or other in-use vehicle...

  6. Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder

    NASA Technical Reports Server (NTRS)

    Staats, Matt

    2009-01-01

    We present work on a prototype tool based on the JavaPathfinder (JPF) model checker for automatically generating tests satisfying the MC/DC code coverage criterion. Using the Eclipse IDE, developers and testers can quickly instrument Java source code with JPF annotations covering all MC/DC coverage obligations, and JPF can then be used to automatically generate tests that satisfy these obligations. The prototype extension to JPF enables various tasks useful in automatic test generation to be performed, such as test suite reduction and execution of generated tests.

  7. Integration of QR codes into an anesthesia information management system for resident case log management.

    PubMed

    Avidan, Alexander; Weissman, Charles; Levin, Phillip D

    2015-04-01

    Quick response (QR) codes containing anesthesia syllabus data were introduced into an anesthesia information management system. The code was generated automatically at the conclusion of each case and was available for resident case logging using a smartphone or tablet. The goal of this study was to evaluate the use and usability/user-friendliness of such a system. Resident case logging practices were assessed prior to introducing the QR codes. QR code use and satisfaction amongst residents were reassessed at three and six months. Before QR code introduction only 12/23 (52.2%) residents maintained a case log. Most of the remaining residents (9/23, 39.1%) expected to receive a case list from the anesthesia information management system database at the end of their residency. At three months and six months, 17/26 (65.4%) and 15/25 (60.0%) residents, respectively, were using the QR codes. Satisfaction was rated as very good or good. QR codes for residents' case logging with smartphones or tablets were successfully introduced in an anesthesia information management system and used by most residents. QR codes can be successfully implemented into medical practice to support data transfer. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
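
    Generating such a code at the end of a case is straightforward with an off-the-shelf library; the sketch below uses the Python qrcode package with an invented payload layout, since the study does not describe its actual data format.

      import json
      import qrcode

      # Hypothetical case-log payload; the real AIMS record format is not specified.
      case = {
          "date": "2015-01-20",
          "procedure": "laparoscopic cholecystectomy",
          "anesthesia": "general",
          "asa_class": 2,
      }

      img = qrcode.make(json.dumps(case))   # returns an image of the QR code
      img.save("case_log_qr.png")           # displayed or printed for scanning with a phone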

  8. Automated real-time software development

    NASA Technical Reports Server (NTRS)

    Jones, Denise R.; Walker, Carrie K.; Turkovich, John J.

    1993-01-01

    A Computer-Aided Software Engineering (CASE) system has been developed at the Charles Stark Draper Laboratory (CSDL) under the direction of the NASA Langley Research Center. The CSDL CASE tool provides an automated method of generating source code and hard copy documentation from functional application engineering specifications. The goal is to significantly reduce the cost of developing and maintaining real-time scientific and engineering software while increasing system reliability. This paper describes CSDL CASE and discusses demonstrations that used the tool to automatically generate real-time application code.

  9. How to differentiate collective variables in free energy codes: Computer-algebra code generation and automatic differentiation

    NASA Astrophysics Data System (ADS)

    Giorgino, Toni

    2018-07-01

    The proper choice of collective variables (CVs) is central to biased-sampling free energy reconstruction methods in molecular dynamics simulations. The PLUMED 2 library, for instance, provides several sophisticated CV choices, implemented in a C++ framework; however, developing new CVs is still time consuming due to the need to provide code for the analytical derivatives of all functions with respect to atomic coordinates. We present two solutions to this problem, namely (a) symbolic differentiation and code generation, and (b) automatic code differentiation, in both cases leveraging open-source libraries (SymPy and Stan Math, respectively). The two approaches are demonstrated and discussed in detail implementing a realistic example CV, the local radius of curvature of a polymer. Users may use the code as a template to streamline the implementation of their own CVs using high-level constructs and automatic gradient computation.
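
    Option (a) above, symbolic differentiation plus code generation, can be previewed with a much simpler collective variable than the local radius of curvature. The sketch below differentiates an interatomic-distance CV with SymPy and prints C expressions for the gradient components; it only mirrors the general workflow, not the paper's actual templates.

      import sympy as sp

      # Cartesian coordinates of two atoms (the CV's only inputs in this toy example).
      x1, y1, z1, x2, y2, z2 = sp.symbols('x1 y1 z1 x2 y2 z2', real=True)
      cv = sp.sqrt((x1 - x2)**2 + (y1 - y2)**2 + (z1 - z2)**2)   # distance CV

      # Analytical derivatives with respect to every atomic coordinate.
      coords = [x1, y1, z1, x2, y2, z2]
      grads = [sp.simplify(sp.diff(cv, c)) for c in coords]

      # Emit C expressions that could be pasted into (or generated for) a CV implementation.
      print("value =", sp.ccode(cv), ";")
      for c, g in zip(coords, grads):
          print(f"d_{c} =", sp.ccode(g), ";")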

  10. The study on dynamic cadastral coding rules based on kinship relationship

    NASA Astrophysics Data System (ADS)

    Xu, Huan; Liu, Nan; Liu, Renyi; Lu, Jingfeng

    2007-06-01

    Cadastral coding rules are an important supplement to the existing national and local standard specifications for building cadastral database. After analyzing the course of cadastral change, especially the parcel change with the method of object-oriented analysis, a set of dynamic cadastral coding rules based on kinship relationship corresponding to the cadastral change is put forward and a coding format composed of street code, block code, father parcel code, child parcel code and grandchild parcel code is worked out within the county administrative area. The coding rule has been applied to the development of an urban cadastral information system called "ReGIS", which is not only able to figure out the cadastral code automatically according to both the type of parcel change and the coding rules, but also capable of checking out whether the code is spatiotemporally unique before the parcel is stored in the database. The system has been used in several cities of Zhejiang Province and got a favorable response. This verifies the feasibility and effectiveness of the coding rules to some extent.
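
    The record spells out the code layout: street code, block code, father parcel code, child parcel code, and grandchild parcel code concatenated within the county administrative area. A small sketch of composing and splitting such codes is given below; the field widths are invented, since the paper's exact format is not reproduced here.

      from dataclasses import dataclass

      # Hypothetical fixed field widths; the published rules define their own.
      WIDTHS = {"street": 3, "block": 3, "father": 4, "child": 3, "grandchild": 3}

      @dataclass
      class CadastralCode:
          street: int
          block: int
          father: int
          child: int = 0
          grandchild: int = 0

          def encode(self) -> str:
              return "".join(str(getattr(self, f)).zfill(w) for f, w in WIDTHS.items())

          @classmethod
          def decode(cls, code: str) -> "CadastralCode":
              parts, pos = {}, 0
              for field, width in WIDTHS.items():
                  parts[field] = int(code[pos:pos + width])
                  pos += width
              return cls(**parts)

      child = CadastralCode(street=12, block=7, father=208, child=3)  # parcel split from 208
      print(child.encode())                        # -> '0120070208003000'
      print(CadastralCode.decode(child.encode()))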

  11. An integrated radiation physics computer code system.

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Harris, D. W.

    1972-01-01

    An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.

  12. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop limit is reached, or no further design improvement is possible due to active design variable bounds and/or constraints. The resulting shape parameters are then used by the grid generation code to define a new wing surface and computational grid. The lift-to-drag ratio and its gradient are computed for the new design by the automatically-generated adjoint codes. Several optimization iterations may be required to find an optimum wing shape. Results from two sample cases will be discussed. The reader should note that this work primarily represents a demonstration of use of automatically-generated adjoint code within an aerodynamic shape optimization. As such, little significance is placed upon the actual optimization results, relative to the method for obtaining the results.

  13. Design and realization of an automatic weather station at island

    NASA Astrophysics Data System (ADS)

    Chen, Yong-hua; Li, Si-ren

    2011-10-01

    In this paper, the design and development of an automatic weather station monitoring system is described. The proposed system consists of a set of sensors for measuring meteorological parameters (temperature, wind speed and direction, rainfall, visibility, etc.). To increase the reliability of the system, wind speed and direction are measured redundantly with duplicate sensors. The sensor signals are collected by the data logger CR1000 at several analog and digital inputs. The CR1000 and the sensors form a completely autonomous system which works with the other systems installed in the container. Communication with the master PC is accomplished via Code Division Multiple Access (CDMA) using the Compact Caimore6550P CDMA DTU. The data are finally stored in tables on the CPU as well as on the CF card. The weather station was built as an efficient autonomous system which operates with the other systems to provide the required data for a fully automatic measurement system.

  14. Cost Reporting Elements and Activity Cost Tradeoffs for Defense System Software. Volume I. Study Results.

    DTIC Science & Technology

    1977-05-01

    Only fragments of the scanned report are legible: ... (C3I) programs; (4) simulator/trainer programs; and (5) automatic test equipment software. Each of these five types of software represents a problem ... coded in the same source language, say JOVIAL, then source-language statements would be a better measure, since that would automatically compensate ... whether done at no (visible) cost or by renegotiation of the contract. Fig. 2.3 illustrates these with solid lines. It is conjectured that the change ...

  15. From assessment to improvement of elderly care in general practice using decision support to increase adherence to ACOVE quality indicators: study protocol for randomized control trial

    PubMed Central

    2014-01-01

    Background Previous efforts such as Assessing Care of Vulnerable Elders (ACOVE) provide quality indicators for assessing the care of elderly patients, but thus far little has been done to leverage this knowledge to improve care for these patients. We describe a clinical decision support system to improve general practitioner (GP) adherence to ACOVE quality indicators and a protocol for investigating impact on GPs’ adherence to the rules. Design We propose two randomized controlled trials among a group of Dutch GP teams on adherence to ACOVE quality indicators. In both trials a clinical decision support system provides un-intrusive feedback appearing as a color-coded, dynamically updated, list of items needing attention. The first trial pertains to real-time automatically verifiable rules. The second trial concerns non-automatically verifiable rules (adherence cannot be established by the clinical decision support system itself, but the GPs report whether they will adhere to the rules). In both trials we will randomize teams of GPs caring for the same patients into two groups, A and B. For the automatically verifiable rules, group A GPs receive support only for a specific inter-related subset of rules, and group B GPs receive support only for the remainder of the rules. For non-automatically verifiable rules, group A GPs receive feedback framed as actions with positive consequences, and group B GPs receive feedback framed as inaction with negative consequences. GPs indicate whether they adhere to non-automatically verifiable rules. In both trials, the main outcome measure is mean adherence, automatically derived or self-reported, to the rules. Discussion We relied on active end-user involvement in selecting the rules to support, and on a model for providing feedback displayed as color-coded real-time messages concerning the patient visiting the GP at that time, without interrupting the GP’s workflow with pop-ups. While these aspects are believed to increase clinical decision support system acceptance and its impact on adherence to the selected clinical rules, systems with these properties have not yet been evaluated. Trial registration Controlled Trials NTR3566 PMID:24642339

  16. Code query by example

    NASA Astrophysics Data System (ADS)

    Vaucouleur, Sebastien

    2011-02-01

    We introduce code query by example for customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrade of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis, to detect automatically potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: it is often the case that programmers working in this field are not computer science specialists but more of domain experts. Hence, they require a simple language to express custom rules.

  17. A new user-assisted segmentation and tracking technique for an object-based video editing system

    NASA Astrophysics Data System (ADS)

    Yu, Hong Y.; Hong, Sung-Hoon; Lee, Mike M.; Choi, Jae-Gark

    2004-03-01

    This paper presents a semi-automatic segmentation method which can be used to generate video object planes (VOPs) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique. A user initially marks objects of interest around the object boundaries, and the selected objects are then continuously separated from the unselected areas through time evolution in the image sequences. The proposed segmentation method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the complete, meaningful visual object of interest to be segmented and determines a precise object boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object based on the object boundary information of the previous frame. The proposed method shows stable, efficient results that could be suitable for many digital video applications such as multimedia content authoring, content-based coding and indexing. Based on these results, we have developed an object-based video editing system with several convenient editing functions.

  18. Systems, methods and apparatus for generation and verification of policies in autonomic computing systems

    NASA Technical Reports Server (NTRS)

    Rouff, Christopher A. (Inventor); Sterritt, Roy (Inventor); Truszkowski, Walter F. (Inventor); Hinchey, Michael G. (Inventor); Gracanin, Denis (Inventor); Rash, James L. (Inventor)

    2011-01-01

    Described herein is a method that produces fully (mathematically) tractable development of policies for autonomic systems from requirements through to code generation. This method is illustrated through an example showing how user-formulated policies can be translated into a formal model which can then be converted to code. The requirements-based programming method described provides faster, higher quality development and maintenance of autonomic systems based on user formulation of policies. Further, the systems, methods and apparatus described herein provide a way of analyzing policies for autonomic systems and facilitate the generation of provably correct implementations automatically, which in turn provides reduced development time, reduced testing requirements, guarantees of correctness of the implementation with respect to the policies specified at the outset, and a higher degree of confidence that the policies are both complete and reasonable. The ability to specify the policy for the management of a system and then automatically generate an equivalent implementation greatly improves the quality of software and the survivability of future missions, in particular when the system will operate untended in very remote environments, and greatly reduces development lead times and costs.

  19. A plug-in to Eclipse for VHDL source codes: functionalities

    NASA Astrophysics Data System (ADS)

    Niton, B.; Poźniak, K. T.; Romaniuk, R. S.

    The paper presents an original application, written by the authors, which supports the writing and editing of source code in the VHDL language. It is a step towards fully automatic, augmented code writing for photonic and electronic systems, including systems based on FPGAs and/or DSP processors. The implementation is based on VEditor, a free-licence program; the work presented in this paper thus supplements and extends that free tool. The introduction briefly characterizes the tools available on the market for aiding the design of electronic systems in VHDL, with particular attention paid to plug-ins for the Eclipse environment and the Emacs program. Detailed properties of the plug-in are then presented, including the design of the programming extension and the results produced by the formatter, refactorer, code hider, and other new additions to the VEditor program.

  20. Automatic generation of user material subroutines for biomechanical growth analysis.

    PubMed

    Young, Jonathan M; Yao, Jiang; Ramasubramanian, Ashok; Taber, Larry A; Perucchio, Renato

    2010-10-01

    The analysis of the biomechanics of growth and remodeling in soft tissues requires the formulation of specialized pseudoelastic constitutive relations. The nonlinear finite element analysis package ABAQUS allows the user to implement such specialized material responses through the coding of a user material subroutine called UMAT. However, hand coding UMAT subroutines is a challenge even for simple pseudoelastic materials and requires substantial time to debug and test the code. To resolve this issue, we develop an automatic UMAT code generation procedure for pseudoelastic materials using the symbolic mathematics package MATHEMATICA and extend the UMAT generator to include continuum growth. The performance of the automatically coded UMAT is tested by simulating the stress-stretch response of a material defined by a Fung-orthotropic strain energy function, subject to uniaxial stretching, equibiaxial stretching, and simple shear in ABAQUS. The MATHEMATICA UMAT generator is then extended to include continuum growth by adding a growth subroutine to the automatically generated UMAT. The MATHEMATICA UMAT generator correctly derives the variables required in the UMAT code, quickly providing a ready-to-use UMAT. In turn, the UMAT accurately simulates the pseudoelastic response. In order to test the growth UMAT, we simulate the growth-based bending of a bilayered bar with differing fiber directions in a nongrowing passive layer. The anisotropic passive layer, being topologically tied to the growing isotropic layer, causes the bending bar to twist laterally. The results of simulations demonstrate the validity of the automatically coded UMAT, used in both standardized tests of hyperelastic materials and for a biomechanical growth analysis.
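
    The MATHEMATICA-based generator itself is not reproduced in the record. As a hedged, one-dimensional analogue in Python/SymPy, the sketch below symbolically differentiates a toy strain energy function and emits C code for the stress and its tangent, the kind of derivation such a UMAT generator automates (the real tool handles full three-dimensional tensors and growth).

      # Illustrative only: symbolic derivation of a 1-D stress expression from a
      # toy strain energy function, emitted as C code. The material law here is
      # an invented Fung-like example, not the study's constitutive model.
      import sympy as sp

      lam, c, b = sp.symbols("lam c b", positive=True)   # stretch and material constants
      W = c * (sp.exp(b * (lam**2 - 1)) - 1)             # toy exponential strain energy
      P = sp.diff(W, lam)                                # 1st Piola-Kirchhoff stress
      dPdlam = sp.diff(P, lam)                           # tangent needed by implicit FE codes

      print(sp.ccode(sp.simplify(P), assign_to="stress"))
      print(sp.ccode(sp.simplify(dPdlam), assign_to="tangent"))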

  1. Gene-Auto: Automatic Software Code Generation for Real-Time Embedded Systems

    NASA Astrophysics Data System (ADS)

    Rugina, A.-E.; Thomas, D.; Olive, X.; Veran, G.

    2008-08-01

    This paper gives an overview of the Gene-Auto ITEA European project, which aims at building a qualified C code generator from mathematical models under Matlab-Simulink and Scilab-Scicos. The project is driven by major European industry partners active in the real-time embedded systems domain. The Gene-Auto code generator will significantly improve current development processes in such domains by shortening the time to market and by guaranteeing the quality of the generated code through the use of formal methods. The first version of the Gene-Auto code generator has already been released and has gone through a validation phase on real-life case studies defined by each project partner. The validation results are being taken into account in the implementation of the second version of the code generator. The partners aim at introducing the Gene-Auto results into industrial development by 2010.

  2. SYMBOD - A computer program for the automatic generation of symbolic equations of motion for systems of hinge-connected rigid bodies

    NASA Technical Reports Server (NTRS)

    Macala, G. A.

    1983-01-01

    A computer program is described that can automatically generate symbolic equations of motion for systems of hinge-connected rigid bodies with tree topologies. The dynamical formulation underlying the program is outlined, and examples are given to show how a symbolic language is used to code the formulation. The program is applied to generate the equations of motion for a four-body model of the Galileo spacecraft. The resulting equations are shown to be a factor of three faster in execution time than conventional numerical subroutines.
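
    SYMBOD itself is not available in the record. As a hedged, minimal analogue, the Python/SymPy sketch below derives the equation of motion of a single hinged body (a planar pendulum) from its Lagrangian, the kind of symbolic manipulation the program performs for whole trees of hinge-connected bodies.

      # Minimal analogue of symbolic equation-of-motion generation: one hinged
      # rigid body (a planar pendulum) derived from its Lagrangian.
      import sympy as sp

      t = sp.symbols("t")
      m, l, g = sp.symbols("m l g", positive=True)
      theta = sp.Function("theta")(t)

      # Kinetic and potential energy of a point mass on a hinge
      T = sp.Rational(1, 2) * m * (l * theta.diff(t))**2
      V = -m * g * l * sp.cos(theta)
      L = T - V

      # Euler-Lagrange equation: d/dt(dL/dtheta_dot) - dL/dtheta = 0
      eom = sp.diff(sp.diff(L, theta.diff(t)), t) - sp.diff(L, theta)
      print(sp.simplify(eom))   # m*l**2*theta'' + m*g*l*sin(theta)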

  3. Continuous integration and quality control for scientific software

    NASA Astrophysics Data System (ADS)

    Neidhardt, A.; Ettl, M.; Brisken, W.; Dassing, R.

    2013-08-01

    Modern software has to be stable, portable, fast and reliable. This is becoming more and more important for scientific software as well, and it requires a sophisticated way to inspect, check and evaluate the quality of source code with a suitable, automated infrastructure. A centralized server with a software repository and a version control system is one essential part, used to manage the code base and to control the different development versions. While each project can be compiled separately, the whole code base can also be compiled with one central “Makefile”. This is used to create automated, nightly builds. Additionally, all sources are inspected automatically with static code analysis and inspection tools, which check for well-known error situations, memory and resource leaks, performance issues, and style issues. In combination with an automatic documentation generator it is possible to create the developer documentation directly from the code and its inline comments. All reports and generated information are presented as HTML pages on a web server. Because this environment has increased the stability and quality of the software of the Geodetic Observatory Wettzell tremendously, it is now also available to scientific communities. The developer group of the DiFX software correlator project is already a regular user.

  4. Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process

    NASA Technical Reports Server (NTRS)

    McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.

    1999-01-01

    This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Z.; Zweibaum, N.; Shao, M.

    The University of California, Berkeley (UCB) is performing thermal hydraulics safety analysis to develop the technical basis for design and licensing of fluoride-salt-cooled, high-temperature reactors (FHRs). FHR designs investigated by UCB use natural circulation for emergency, passive decay heat removal when normal decay heat removal systems fail. The FHR advanced natural circulation analysis (FANCY) code has been developed for assessment of passive decay heat removal capability and safety analysis of these innovative system designs. The FANCY code uses a one-dimensional, semi-implicit scheme to solve for pressure-linked mass, momentum and energy conservation equations. Graph theory is used to automatically generate a staggered mesh for complicated pipe network systems. Heat structure models have been implemented for three types of boundary conditions (Dirichlet, Neumann and Robin boundary conditions). Heat structures can be composed of several layers of different materials, and are used for simulation of heat structure temperature distribution and heat transfer rate. Control models are used to simulate sequences of events or trips of safety systems. A proportional-integral controller is also used to automatically make thermal hydraulic systems reach desired steady state conditions. A point kinetics model is used to model reactor kinetics behavior with temperature reactivity feedback. The underlying large sparse linear systems in these models are efficiently solved by using direct and iterative solvers provided by the SuperLU code on high performance machines. Input interfaces are designed to increase the flexibility of simulation for complicated thermal hydraulic systems. In conclusion, this paper mainly focuses on the methodology used to develop the FANCY code, and safety analysis of the Mark 1 pebble-bed FHR under development at UCB is performed.
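
    The FANCY source is not part of the record. As a hedged illustration of one ingredient listed above, the Python sketch below shows a basic proportional-integral controller driving a toy first-order plant toward a desired steady state; the plant model and gains are invented placeholders.

      # Sketch of a proportional-integral controller driving a first-order plant
      # toward a setpoint, analogous to the steady-state conditioning described above.
      def simulate_pi(setpoint=900.0, kp=0.5, ki=0.05, dt=0.1, steps=2000):
          temperature = 600.0      # arbitrary initial state
          integral = 0.0
          for _ in range(steps):
              error = setpoint - temperature
              integral += error * dt
              heating = kp * error + ki * integral          # controller output
              # toy first-order plant: heating input minus losses to ambient
              temperature += dt * (heating - 0.1 * (temperature - 300.0))
          return temperature

      print(round(simulate_pi(), 1))   # approaches the 900.0 setpoint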

  6. LogiKit - assisting complex logic specification and implementation for embedded control systems

    NASA Astrophysics Data System (ADS)

    Diglio, A.; Nicolodi, B.

    2002-07-01

    LogiKit provides an overall lifecycle solution. LogiKit is a powerful software engineering CASE toolkit for requirements specification, simulation and documentation. LogiKit also provides an automatic Ada software design, code and unit-test generator.

  7. The integration of system specifications and program coding

    NASA Technical Reports Server (NTRS)

    Luebke, W. R.

    1970-01-01

    Experience in maintaining up-to-date documentation for one module of the large-scale Medical Literature Analysis and Retrieval System 2 (MEDLARS 2) is described. Several innovative techniques were explored in the development of this system's data management environment, particularly those that use PL/I as an automatic documenter. The PL/I data description section can provide automatic documentation by means of a master description of data elements that has long and highly meaningful mnemonic names and a formalized technique for the production of descriptive commentary. The techniques discussed are practical methods that employ the computer during system development in a manner that assists system implementation, provides interim documentation for customer review, and satisfies some of the deliverable documentation requirements.

  8. Software Considerations for Subscale Flight Testing of Experimental Control Laws

    NASA Technical Reports Server (NTRS)

    Murch, Austin M.; Cox, David E.; Cunningham, Kevin

    2009-01-01

    The NASA AirSTAR system has been designed to address the challenges associated with safe and efficient subscale flight testing of research control laws in adverse flight conditions. In this paper, software elements of this system are described, with an emphasis on components which allow for rapid prototyping and deployment of aircraft control laws. Through model-based design and automatic coding a common code-base is used for desktop analysis, piloted simulation and real-time flight control. The flight control system provides the ability to rapidly integrate and test multiple research control laws and to emulate component or sensor failures. Integrated integrity monitoring systems provide aircraft structural load protection, isolate the system from control algorithm failures, and monitor the health of telemetry streams. Finally, issues associated with software configuration management and code modularity are briefly discussed.

  9. A comparison of different methods to implement higher order derivatives of density functionals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Dam, Hubertus J.J.

    Density functional theory is the dominant approach in electronic structure methods today. To calculate properties, higher order derivatives of the density functionals are required. These derivatives might be implemented manually, by automatic differentiation, or by symbolic algebra programs. Different authors have cited different reasons for using the particular method of their choice. This paper presents work where all three approaches were used, and the strengths and weaknesses of each approach are considered. It is found that all three methods produce code that is sufficiently performant for practical applications, despite the fact that our symbolic algebra generated code and our automatic differentiation code still have scope for significant optimization. The automatic differentiation approach is the best option for producing readable and maintainable code.
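
    As a hedged, toy illustration of the trade-off discussed above (not the authors' code), the Python sketch below obtains the same derivative of a simple density-like term once by symbolic algebra with SymPy and once by forward-mode automatic differentiation with a hand-rolled dual number.

      # Toy comparison: symbolic vs. forward-mode automatic differentiation of
      # f(rho) = rho**(4/3), a term with the flavour of an exchange functional.
      import sympy as sp

      # Symbolic route
      rho = sp.symbols("rho", positive=True)
      f_sym = rho**sp.Rational(4, 3)
      df_sym = sp.diff(f_sym, rho)                 # (4/3)*rho**(1/3)

      # Automatic-differentiation route with dual numbers
      class Dual:
          def __init__(self, val, der=0.0):
              self.val, self.der = val, der
          def __pow__(self, p):                    # d(x**p) = p*x**(p-1) * dx
              return Dual(self.val**p, p * self.val**(p - 1) * self.der)

      x = 0.7
      print(float(df_sym.subs(rho, x)))            # symbolic, evaluated at 0.7
      print((Dual(x, 1.0) ** (4.0 / 3.0)).der)     # AD gives the same value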

  10. Automatic Processing of Reactive Polymers

    NASA Technical Reports Server (NTRS)

    Roylance, D.

    1985-01-01

    A series of process modeling computer codes were examined. The codes use finite element techniques to determine the time-dependent process parameters operative during nonisothermal reactive flows such as can occur in reaction injection molding or composites fabrication. The use of these analytical codes to perform experimental control functions is examined; since the models can determine the state of all variables everywhere in the system, they can be used in a manner similar to currently available experimental probes. A small but well instrumented reaction vessel in which fiber-reinforced plaques are cured using computer control and data acquisition was used. The finite element codes were also extended to treat this particular process.

  11. 75 FR 6252 - Notice of Application for Approval of Discontinuance or Modification of a Railroad Signal System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-08

    ... Discontinuance or Modification of a Railroad Signal System or Relief From the Requirements of Title 49 Code of... approval for the discontinuance or modification of the signal system or relief from the requirements of 49... the conversion of dispatcher controlled holdout signals, 96L and 96R, to automatic signals, 8221 and...

  12. Automatic translation of MPI source into a latency-tolerant, data-driven form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Tan; Cicotti, Pietro; Bylaska, Eric

    Hiding communication behind useful computation is an important performance programming technique but remains an inscrutable programming exercise even for the expert. We present Bamboo, a code transformation framework that can realize communication overlap in applications written in MPI without the need to intrusively modify the source code. We reformulate MPI source into a task dependency graph representation, which partially orders the tasks, enabling the program to execute in a data-driven fashion under the control of an external runtime system. Experimental results demonstrate that Bamboo significantly reduces communication delays while requiring only modest amounts of programmer annotation for a variety of applications and platforms, including those employing co-processors and accelerators. Moreover, Bamboo’s performance meets or exceeds that of labor-intensive hand coding. As a result, the translator is more than a means of hiding communication costs automatically; it demonstrates the utility of semantic level optimization against a well-known library.
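
    Bamboo's generated code is not shown in the record. The hedged mpi4py sketch below writes by hand the overlap pattern the translator is reported to produce automatically: post non-blocking sends and receives, do useful local work, and only then wait for the messages; it assumes mpi4py is installed and exactly two ranks are launched.

      # Hand-written illustration of communication/computation overlap, the pattern
      # Bamboo is reported to generate automatically from plain MPI code.
      # Run with: mpiexec -n 2 python overlap_sketch.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      other = 1 - rank                       # sketch assumes exactly two ranks

      send_buf = np.full(1_000_000, rank, dtype=np.float64)
      recv_buf = np.empty_like(send_buf)

      # Post non-blocking communication first...
      reqs = [comm.Isend(send_buf, dest=other),
              comm.Irecv(recv_buf, source=other)]

      # ...overlap it with useful local work...
      local = np.sin(send_buf).sum()

      # ...and only wait when the remote data is actually needed.
      MPI.Request.Waitall(reqs)
      print(rank, local, recv_buf[0])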

  13. Automatic translation of MPI source into a latency-tolerant, data-driven form

    DOE PAGES

    Nguyen, Tan; Cicotti, Pietro; Bylaska, Eric; ...

    2017-03-06

    Hiding communication behind useful computation is an important performance programming technique but remains an inscrutable programming exercise even for the expert. We present Bamboo, a code transformation framework that can realize communication overlap in applications written in MPI without the need to intrusively modify the source code. We reformulate MPI source into a task dependency graph representation, which partially orders the tasks, enabling the program to execute in a data-driven fashion under the control of an external runtime system. Experimental results demonstrate that Bamboo significantly reduces communication delays while requiring only modest amounts of programmer annotation for a variety of applications and platforms, including those employing co-processors and accelerators. Moreover, Bamboo’s performance meets or exceeds that of labor-intensive hand coding. As a result, the translator is more than a means of hiding communication costs automatically; it demonstrates the utility of semantic level optimization against a well-known library.

  14. Composing Data Parallel Code for a SPARQL Graph Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellana, Vito G.; Tumeo, Antonino; Villa, Oreste

    Big data analytics processes large amounts of data to extract knowledge from them. Semantic databases are big data applications that adopt the Resource Description Framework (RDF) to structure metadata through a graph-based representation. The graph-based representation provides several benefits, such as the possibility to perform in-memory processing with large amounts of parallelism. SPARQL is a language used to perform queries on RDF-structured data through graph matching. In this paper we present a tool that automatically translates SPARQL queries to parallel graph crawling and graph matching operations. The tool also supports complex SPARQL constructs, which require more than basic graph matching for their implementation. The tool generates parallel code annotated with OpenMP pragmas for x86 shared-memory multiprocessors (SMPs). With respect to commercial database systems such as Virtuoso, our approach reduces the memory occupation due to join operations and provides higher performance. We show the scaling of the automatically generated graph-matching code on a 48-core SMP.

  15. Automatic translation of MPI source into a latency-tolerant, data-driven form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Tan; Cicotti, Pietro; Bylaska, Eric

    Hiding communication behind useful computation is an important performance programming technique but remains an inscrutable programming exercise even for the expert. We present Bamboo, a code transformation framework that can realize communication overlap in applications written in MPI without the need to intrusively modify the source code. Bamboo reformulates MPI source into the form of a task dependency graph that expresses a partial ordering among tasks, enabling the program to execute in a data-driven fashion under the control of an external runtime system. Experimental results demonstrate that Bamboo significantly reduces communication delays while requiring only modest amounts of programmer annotation for a variety of applications and platforms, including those employing co-processors and accelerators. Moreover, Bamboo's performance meets or exceeds that of labor-intensive hand coding. The translator is more than a means of hiding communication costs automatically; it demonstrates the utility of semantic level optimization against a well-known library.

  16. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2004-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  17. Techniques for Developing an Acquisition Strategy by Profiling Software Risks

    DTIC Science & Technology

    2006-08-01

    Figure 8: BMW 745Li Software. The BMW 745Li is cited as a good illustration of the increasing software control of hardware systems in automobiles; among its many features are roll stabilization, dynamic brake control, coded drive-away protection, an adaptive automatic transmission, and iDrive systems.

  18. A Direct TeX-to-Braille Transcribing Method

    ERIC Educational Resources Information Center

    Papasalouros, Andreas; Tsolomitis, Antonis

    2017-01-01

    The TeX/LaTeX typesetting system is the most widespread system for creating documents in Mathematics and Science. However, no reliable tool exists to this day for automatically transcribing documents from the above formats into Braille/Nemeth code. Thus, visually impaired students of related fields do not have access to the bulk of study material…

  19. Cross-terminology mapping challenges: a demonstration using medication terminological systems.

    PubMed

    Saitwal, Himali; Qing, David; Jones, Stephen; Bernstam, Elmer V; Chute, Christopher G; Johnson, Todd R

    2012-08-01

    Standardized terminological systems for biomedical information have provided considerable benefits to biomedical applications and research. However, practical use of this information often requires mapping across terminological systems-a complex and time-consuming process. This paper demonstrates the complexity and challenges of mapping across terminological systems in the context of medication information. It provides a review of medication terminological systems and their linkages, then describes a case study in which we mapped proprietary medication codes from an electronic health record to SNOMED CT and the UMLS Metathesaurus. The goal was to create a polyhierarchical classification system for querying an i2b2 clinical data warehouse. We found that three methods were required to accurately map the majority of actively prescribed medications. Only 62.5% of source medication codes could be mapped automatically. The remaining codes were mapped using a combination of semi-automated string comparison with expert selection, and a completely manual approach. Compound drugs were especially difficult to map: only 7.5% could be mapped using the automatic method. General challenges to mapping across terminological systems include (1) the availability of up-to-date information to assess the suitability of a given terminological system for a particular use case, and to assess the quality and completeness of cross-terminology links; (2) the difficulty of correctly using complex, rapidly evolving, modern terminologies; (3) the time and effort required to complete and evaluate the mapping; (4) the need to address differences in granularity between the source and target terminologies; and (5) the need to continuously update the mapping as terminological systems evolve. Copyright © 2012 Elsevier Inc. All rights reserved.
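
    The study's mapping pipeline is not published in the record. A minimal, hedged sketch of the semi-automated string-comparison step, using only Python's standard library and invented drug names and codes, is shown below; an expert would review the ranked candidates before accepting a mapping.

      # Sketch of semi-automated mapping: propose close string matches from a
      # target terminology and leave the final choice to an expert reviewer.
      # The drug names and codes are illustrative, not from the study.
      import difflib

      target_terms = {
          "acetaminophen 325 mg oral tablet": "C-0001",
          "amoxicillin 500 mg oral capsule": "C-0002",
          "atorvastatin 20 mg oral tablet": "C-0003",
      }

      def propose_matches(source_term, n=3, cutoff=0.5):
          candidates = difflib.get_close_matches(
              source_term.lower(), target_terms, n=n, cutoff=cutoff)
          return [(c, target_terms[c]) for c in candidates]

      # An expert reviews this ranked list and accepts or rejects a mapping.
      print(propose_matches("paracetamol (acetaminophen) 325mg tablet"))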

  20. Near-line Archive Data Mining at the Goddard Distributed Active Archive Center

    NASA Astrophysics Data System (ADS)

    Pham, L.; Mack, R.; Eng, E.; Lynnes, C.

    2002-12-01

    NASA's Earth Observing System (EOS) is generating immense volumes of data, in some cases too much to provide to users with data-intensive needs. As an alternative to moving the data to the user and his/her research algorithms, we are providing a means to move the algorithms to the data. The Near-line Archive Data Mining (NADM) system is the Goddard Earth Sciences Distributed Active Archive Center's (GES DAAC) web data mining portal to the EOS Data and Information System (EOSDIS) data pool, a 50-TB online disk cache. The NADM web portal enables registered users to submit and execute data mining algorithm codes on the data in the EOSDIS data pool. A web interface allows the user to access the NADM system. Users first develop personalized data mining code on their home platforms and then upload it to the NADM system. The C, FORTRAN and IDL languages are currently supported. The user-developed code is automatically audited for any potential security problems before it is installed within the NADM system and made available to the user. Once the code has been installed, the user is provided a test environment where he/she can test the execution of the software against data sets of the user's choosing. When the user is satisfied with the results, he/she can promote the code to the "operational" environment. From here the user can interactively run his/her code on the data available in the EOSDIS data pool. The user can also set up a processing subscription, which will automatically process new data as it becomes available in the EOSDIS data pool. The generated mined data products are then made available for FTP pickup. The NADM system uses the GES DAAC-developed Simple Scalable Script-based Science Processor (S4P) to automate tasks and perform the actual data processing. Users also have the option of selecting a DAAC-provided data mining algorithm and using it to process the data of their choice.

  1. Use of emergency department electronic medical records for automated epidemiological surveillance of suicide attempts: a French pilot study.

    PubMed

    Metzger, Marie-Hélène; Tvardik, Nastassia; Gicquel, Quentin; Bouvry, Côme; Poulet, Emmanuel; Potinet-Pagliaroli, Véronique

    2017-06-01

    The aim of this study was to determine whether an expert system based on automated processing of electronic health records (EHRs) could provide a more accurate estimate of the annual rate of emergency department (ED) visits for suicide attempts in France, as compared to the current national surveillance system based on manual coding by emergency practitioners. A feasibility study was conducted at Lyon University Hospital, using data for all ED patient visits in 2012. After automatic data extraction and pre-processing, including automatic coding of medical free-text through use of the Unified Medical Language System, seven different machine-learning methods were used to classify the reasons for ED visits into "suicide attempts" versus "other reasons". The performance of these different methods was compared by using the F-measure. In a test sample of 444 patients admitted to the ED in 2012 (98 suicide attempts, 48 cases of suicidal ideation, and 292 controls with no recorded non-fatal suicidal behaviour), the F-measure for automatic detection of suicide attempts ranged from 70.4% to 95.3%. The random forest and naïve Bayes methods performed best. This study demonstrates that machine-learning methods can improve the quality of epidemiological indicators as compared to current national surveillance of suicide attempts. Copyright © 2016 John Wiley & Sons, Ltd.
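
    No code accompanies the record. The hedged scikit-learn sketch below shows the general shape of such a pipeline (bag-of-words features feeding a random forest, evaluated with the F-measure) on obviously synthetic stand-in notes, not the study's EHR data.

      # Schematic pipeline only: synthetic stand-in notes, not the study's data.
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics import f1_score
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline

      notes = [
          "voluntary drug overdose, patient states wish to die",
          "deep cuts to forearm, self inflicted",
          "fall from ladder while cleaning gutters",
          "chest pain radiating to left arm",
          "ankle sprain after football match",
          "ingestion of large quantity of sleeping pills",
      ] * 10                                   # repeated so the toy split is non-trivial
      labels = [1, 1, 0, 0, 0, 1] * 10         # 1 = suicide attempt, 0 = other reason

      x_train, x_test, y_train, y_test = train_test_split(
          notes, labels, test_size=0.3, random_state=0, stratify=labels)

      model = make_pipeline(TfidfVectorizer(), RandomForestClassifier(random_state=0))
      model.fit(x_train, y_train)
      print("F-measure:", f1_score(y_test, model.predict(x_test)))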

  2. CELCAP: A Computer Model for Cogeneration System Analysis

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A description of the CELCAP cogeneration analysis program is presented, including a detailed description of the methodology used by the Naval Civil Engineering Laboratory in developing the CELCAP code and the procedures for analyzing cogeneration systems for a given user. The four engines modeled in CELCAP are: gas turbine with exhaust heat boiler, diesel engine with waste heat boiler, single automatic-extraction steam turbine, and back-pressure steam turbine. Both design-point and part-load performance are taken into account in the engine models. The load model describes how the hourly electric and steam demand of the user is represented by 24 hourly profiles. The economic model describes how the annual and life-cycle operating costs, which include the costs of fuel, purchased electricity, and operation and maintenance of engines and boilers, are calculated. The CELCAP code structure and the principal functions of the code are described to show how the various components of the code are related to each other. Three examples of the application of the CELCAP code are given to illustrate the versatility of the code; they represent cases of system selection, system modification, and system optimization.

  3. Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd

    2004-01-01

    Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time with the development and refinement of the data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. The AutoBayes schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems which use numerical approximations even in cases where closed-form solutions exist. AutoBayes is implemented in Prolog and comprises approximately 75,000 lines of code. In this paper, we take one typical scientific data analysis problem, analyzing planetary nebulae images taken by the Hubble Space Telescope, and show how AutoBayes can be used to automate the implementation of the necessary analysis programs. We initially follow the analysis described by Knuth and Hajian [KH02] and use AutoBayes to derive code for the published models. We show the details of the code derivation process, including the symbolic computations and automatic integration of library procedures, and compare the results of the automatically generated and manually implemented code. We then go beyond the original analysis and use AutoBayes to derive code for a simple image segmentation procedure based on a mixture model, which can be used to automate a manual preprocessing step. Finally, we combine the original approach with the simple segmentation, which yields a more detailed analysis. This also demonstrates that AutoBayes makes it easy to combine different aspects of data analysis.

  4. A technology prototype system for rating therapist empathy from audio recordings in addiction counseling.

    PubMed

    Xiao, Bo; Huang, Chewei; Imel, Zac E; Atkins, David C; Georgiou, Panayiotis; Narayanan, Shrikanth S

    2016-04-01

    Scaling up psychotherapy services such as for addiction counseling is a critical societal need. One challenge is ensuring quality of therapy, due to the heavy cost of manual observational assessment. This work proposes a speech technology-based system to automate the assessment of therapist empathy-a key therapy quality index-from audio recordings of the psychotherapy interactions. We designed a speech processing system that includes voice activity detection and diarization modules, and an automatic speech recognizer plus a speaker role matching module to extract the therapist's language cues. We employed Maximum Entropy models, Maximum Likelihood language models, and a Lattice Rescoring method to characterize high vs. low empathic language. We estimated therapy-session level empathy codes using utterance level evidence obtained from these models. Our experiments showed that the fully automated system achieved a correlation of 0.643 between expert annotated empathy codes and machine-derived estimations, and an accuracy of 81% in classifying high vs. low empathy, in comparison to a 0.721 correlation and 86% accuracy in the oracle setting using manual transcripts. The results show that the system provides useful information that can contribute to automatic quality insurance and therapist training.

  5. A technology prototype system for rating therapist empathy from audio recordings in addiction counseling

    PubMed Central

    Xiao, Bo; Huang, Chewei; Imel, Zac E.; Atkins, David C.; Georgiou, Panayiotis; Narayanan, Shrikanth S.

    2016-01-01

    Scaling up psychotherapy services such as for addiction counseling is a critical societal need. One challenge is ensuring quality of therapy, due to the heavy cost of manual observational assessment. This work proposes a speech technology-based system to automate the assessment of therapist empathy—a key therapy quality index—from audio recordings of the psychotherapy interactions. We designed a speech processing system that includes voice activity detection and diarization modules, and an automatic speech recognizer plus a speaker role matching module to extract the therapist's language cues. We employed Maximum Entropy models, Maximum Likelihood language models, and a Lattice Rescoring method to characterize high vs. low empathic language. We estimated therapy-session level empathy codes using utterance level evidence obtained from these models. Our experiments showed that the fully automated system achieved a correlation of 0.643 between expert annotated empathy codes and machine-derived estimations, and an accuracy of 81% in classifying high vs. low empathy, in comparison to a 0.721 correlation and 86% accuracy in the oracle setting using manual transcripts. The results show that the system provides useful information that can contribute to automatic quality insurance and therapist training. PMID:28286867

  6. Columbia Switches to Automatic Fire Detection

    ERIC Educational Resources Information Center

    Gardner, John C.

    1978-01-01

    Columbia University has started a project that, in the first two phases, will provide an internal fire alarm system to residence halls and academic buildings. The third phase will be major structural changes to bring older academic buildings up to meet new life safety codes. (Author/MLF)

  7. Towards objective and reproducible study of patient-doctor interaction: Automatic text analysis based VR-CoDES annotation of consultation transcripts.

    PubMed

    Birkett, Charlotte; Arandjelovic, Ognjen; Humphris, Gerald

    2017-07-01

    While increasingly appreciated for its importance, the interaction between health care professionals (HCPs) and patients is notoriously difficult to study, with both methodological and practical challenges. The former has been addressed by the so-called Verona coding definitions of emotional sequences (VR-CoDES) - a system for identifying and coding patient emotions and the corresponding HCP responses - shown to be reliable and informative in a number of independent studies in different health care delivery contexts. In the present work we focus on the practical challenge of the scalability of this coding system, namely on making it easily usable more widely and on applying it to larger patient cohorts. In particular, VR-CoDES is inherently complex and training is required to ensure consistent annotation of audio recordings or textual transcripts of consultations. Following up on our previous pilot investigation, in the present paper we describe the first automatic, computer-based algorithm capable of providing coarse-level coding of textual transcripts. We investigate different representations of patient utterances and classification methodologies, and label each utterance as either containing an explicit expression of emotional distress (a 'concern'), an implicit one (a 'cue'), or neither. Using a data corpus comprising 200 consultations between radiotherapists and adult female breast cancer patients, we demonstrate excellent labelling performance.

  8. Cross-terminology mapping challenges: A demonstration using medication terminological systems

    PubMed Central

    Saitwal, Himali; Qing, David; Jones, Stephen; Bernstam, Elmer; Chute, Christopher G.; Johnson, Todd R.

    2015-01-01

    Standardized terminological systems for biomedical information have provided considerable benefits to biomedical applications and research. However, practical use of this information often requires mapping across terminological systems—a complex and time-consuming process. This paper demonstrates the complexity and challenges of mapping across terminological systems in the context of medication information. It provides a review of medication terminological systems and their linkages, then describes a case study in which we mapped proprietary medication codes from an electronic health record to SNOMED-CT and the UMLS Metathesaurus. The goal was to create a polyhierarchical classification system for querying an i2b2 clinical data warehouse. We found that three methods were required to accurately map the majority of actively prescribed medications. Only 62.5% of source medication codes could be mapped automatically. The remaining codes were mapped using a combination of semi-automated string comparison with expert selection, and a completely manual approach. Compound drugs were especially difficult to map: only 7.5% could be mapped using the automatic method. General challenges to mapping across terminological systems include (1) the availability of up-to-date information to assess the suitability of a given terminological system for a particular use case, and to assess the quality and completeness of cross-terminology links; (2) the difficulty of correctly using complex, rapidly evolving, modern terminologies; (3) the time and effort required to complete and evaluate the mapping; (4) the need to address differences in granularity between the source and target terminologies; and (5) the need to continuously update the mapping as terminological systems evolve. PMID:22750536

  9. Occupational self-coding and automatic recording (OSCAR): a novel web-based tool to collect and code lifetime job histories in large population-based studies.

    PubMed

    De Matteis, Sara; Jarvis, Deborah; Young, Heather; Young, Alan; Allen, Naomi; Potts, James; Darnton, Andrew; Rushton, Lesley; Cullinan, Paul

    2017-03-01

    Objectives The standard approach to the assessment of occupational exposures is through the manual collection and coding of job histories. This method is time-consuming and costly and makes it potentially unfeasible to perform high quality analyses on occupational exposures in large population-based studies. Our aim was to develop a novel, efficient web-based tool to collect and code lifetime job histories in the UK Biobank, a population-based cohort of over 500 000 participants. Methods We developed OSCAR (occupations self-coding automatic recording) based on the hierarchical structure of the UK Standard Occupational Classification (SOC) 2000, which allows individuals to collect and automatically code their lifetime job histories via a simple decision-tree model. Participants were asked to find each of their jobs by selecting appropriate job categories until they identified their job title, which was linked to a hidden 4-digit SOC code. For each occupation a job title in free text was also collected to estimate Cohen's kappa (κ) inter-rater agreement between SOC codes assigned by OSCAR and an expert manual coder. Results OSCAR was administered to 324 653 UK Biobank participants with an existing email address between June and September 2015. Complete 4-digit SOC-coded lifetime job histories were collected for 108 784 participants (response rate: 34%). Agreement between the 4-digit SOC codes assigned by OSCAR and the manual coder for a random sample of 400 job titles was moderately good [κ=0.45, 95% confidence interval (95% CI) 0.42-0.49], and improved when broader job categories were considered (κ=0.64, 95% CI 0.61-0.69 at a 1-digit SOC-code level). Conclusions OSCAR is a novel, efficient, and reasonably reliable web-based tool for collecting and automatically coding lifetime job histories in large population-based studies. Further application in other research projects for external validation purposes is warranted.
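
    As a hedged illustration of the agreement statistic reported above, computed on invented labels rather than the UK Biobank data, Cohen's kappa between an automatic coder and an expert can be obtained as follows.

      # Cohen's kappa between an automatic coder and an expert, on invented labels;
      # the SOC-like codes below are placeholders, not real study assignments.
      from collections import Counter

      def cohens_kappa(codes_a, codes_b):
          n = len(codes_a)
          observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
          pa, pb = Counter(codes_a), Counter(codes_b)
          expected = sum(pa[c] * pb[c] for c in set(codes_a) | set(codes_b)) / n**2
          return (observed - expected) / (1 - expected)

      auto_codes   = ["2315", "2315", "5241", "9233", "2211", "5241", "9233", "2211"]
      expert_codes = ["2315", "2314", "5241", "9233", "2211", "5249", "9233", "2211"]
      print(round(cohens_kappa(auto_codes, expert_codes), 2))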

  10. A Survey of Automatic Code Generating Software

    DTIC Science & Technology

    1988-09-01

  11. Automatic mathematical modeling for real time simulation program (AI application)

    NASA Technical Reports Server (NTRS)

    Wang, Caroline; Purinton, Steve

    1989-01-01

    A methodology is described for automatic mathematical modeling and generating simulation models. The major objective was to create a user friendly environment for engineers to design, maintain, and verify their models; to automatically convert the mathematical models into conventional code for computation; and finally, to document the model automatically.

  12. Interconnecting smartphone, image analysis server, and case report forms for automatic skin lesion tracking in clinical trials

    NASA Astrophysics Data System (ADS)

    Haak, Daniel; Doma, Aliaa; Gombert, Alexander; Deserno, Thomas M.

    2016-03-01

    Today, subjects' medical data in controlled clinical trials are captured digitally in electronic case report forms (eCRFs). However, eCRFs provide only insufficient support for integrating subjects' image data, although medical imaging looms large in studies today. For bed-side image integration, we present a mobile application (App) that utilizes the smartphone-integrated camera. To ensure high image quality with this inexpensive consumer hardware, color reference cards are placed in the camera's field of view next to the lesion. The cards are used for automatic calibration of geometry, color, and contrast. In addition, a personalized code is read from the cards that allows subject identification. For data integration, the App is connected to a communication and image analysis server that also holds the code-study-subject relation. In a second system interconnection, web services are used to connect the smartphone with OpenClinica, an open-source, Food and Drug Administration (FDA)-approved electronic data capture (EDC) system for clinical trials. Once the photographs have been securely stored on the server, they are released automatically from the mobile device. The workflow of the system is demonstrated by an ongoing clinical trial, in which photographic documentation is frequently performed to measure the effect of wound incision management systems. All 205 images, which have been collected in the study so far, have been correctly identified and successfully integrated into the corresponding subject's eCRF. Using this system, manual steps for the study personnel are reduced and, therefore, errors, latency and costs decrease. Our approach also increases data security and privacy.
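
    The App's calibration routines are not published in the record. The hedged NumPy sketch below illustrates the underlying idea of colour correction against a reference card: fit an affine transform mapping measured patch colours to their known reference values, then apply it to image pixels; all patch values here are invented.

      # Sketch of colour calibration against a reference card: fit an affine map
      # from measured RGB patch values to their known reference values.
      import numpy as np

      measured = np.array([[60, 40, 35], [200, 190, 180], [90, 150, 70], [30, 60, 160]], float)
      reference = np.array([[50, 50, 50], [220, 220, 220], [80, 170, 60], [20, 50, 180]], float)

      # Augment with a constant column so the fit includes an offset (affine, not linear).
      A = np.hstack([measured, np.ones((len(measured), 1))])
      coeffs, *_ = np.linalg.lstsq(A, reference, rcond=None)

      def calibrate(pixels):
          """Apply the fitted affine correction to an (N, 3) array of RGB values."""
          return np.hstack([pixels, np.ones((len(pixels), 1))]) @ coeffs

      print(np.round(calibrate(measured)))   # close to the reference patch values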

  13. ONR Far East Scientific Bulletin, Volume 7, Number 2, April-June 1982,

    DTIC Science & Technology

    1982-01-01

    … contained source code. PAL (Program Automation Language) is a system design language that automatically generates an executable program from a … Such tools exist at ECL in prototype forms. Like most major computer manufacturers, they have also extended high-level languages such as FORTRAN and COBOL.

  14. Evaluation of the efficiency and fault density of software generated by code generators

    NASA Technical Reports Server (NTRS)

    Schreur, Barbara

    1993-01-01

    Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. The software development requires the generation of a considerable amount of code. The engineers who generate the code make mistakes, and generating a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs given through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking: some check only the finished product, while others allow checking of individual modules and combined sets of modules as well. Considering NASA's requirement for reliability, in-house manually generated code is needed for comparison. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed; in-house verification is warranted.

  15. Advanced information processing system: Hosting of advanced guidance, navigation and control algorithms on AIPS using ASTER

    NASA Technical Reports Server (NTRS)

    Brenner, Richard; Lala, Jaynarayan H.; Nagle, Gail A.; Schor, Andrei; Turkovich, John

    1994-01-01

    This program demonstrated the integration of a number of technologies that can increase the availability and reliability of launch vehicles while lowering costs. Availability is increased with an advanced guidance algorithm that adapts trajectories in real-time. Reliability is increased with fault-tolerant computers and communication protocols. Costs are reduced by automatically generating code and documentation. This program was realized through the cooperative efforts of academia, industry, and government. The NASA-LaRC coordinated the effort, while Draper performed the integration. Georgia Institute of Technology supplied a weak Hamiltonian finite element method for optimal control problems. Martin Marietta used MATLAB to apply this method to a launch vehicle (FENOC). Draper supplied the fault-tolerant computing and software automation technology. The fault-tolerant technology includes sequential and parallel fault-tolerant processors (FTP & FTPP) and authentication protocols (AP) for communication. Fault-tolerant technology was incrementally incorporated. Development culminated with a heterogeneous network of workstations and fault-tolerant computers using AP. Draper's software automation system, ASTER, was used to specify a static guidance system based on FENOC, navigation, flight control (GN&C), models, and the interface to a user interface for mission control. ASTER generated Ada code for GN&C and C code for models. An algebraic transform engine (ATE) was developed to automatically translate MATLAB scripts into ASTER.

  16. Automatic Web-based Calibration of Network-Capable Shipboard Sensors

    DTIC Science & Technology

    2007-09-01

    Keywords: Server, Java, Applet, and Servlet.

  17. Fire Protection System for an Atrium Satisfies Code Intent

    ERIC Educational Resources Information Center

    Boehmer, Donald J.; Jensen, Rolf

    1975-01-01

    The Civic Center in Scarborough, Ontario, has an open interior design that incorporates an atrium. Fire protection elements include automatic sprinklers, provisions for efficient exiting of building occupants, and smoke evacuation by gravity exhaust. (Available from 1221 Avenue of the Americas, New York, NY 10020, $15.00 annually.) (Author/MLF)

  18. A programming environment for distributed complex computing. An overview of the Framework for Interdisciplinary Design Optimization (FIDO) project. NASA Langley TOPS exhibit H120b

    NASA Technical Reports Server (NTRS)

    Townsend, James C.; Weston, Robert P.; Eidson, Thomas M.

    1993-01-01

    The Framework for Interdisciplinary Design Optimization (FIDO) is a general programming environment for automating the distribution of complex computing tasks over a networked system of heterogeneous computers. For example, instead of manually passing a complex design problem between its diverse specialty disciplines, the FIDO system provides for automatic interactions between the discipline tasks and facilitates their communications. The FIDO system networks all the computers involved into a distributed heterogeneous computing system, so they have access to centralized data and can work on their parts of the total computation simultaneously in parallel whenever possible. Thus, each computational task can be done by the most appropriate computer. Results can be viewed as they are produced and variables changed manually for steering the process. The software is modular in order to ease migration to new problems: different codes can be substituted for each of the current code modules with little or no effect on the others. The potential for commercial use of FIDO rests in the capability it provides for automatically coordinating diverse computations on a networked system of workstations and computers. For example, FIDO could provide the coordination required for the design of vehicles or electronics or for modeling complex systems.

  19. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including distributed software systems, sensor networks, robot operation, complex scripts for spacecraft integration and testing, and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  20. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including distributed software systems, sensor networks, robot operation, complex scripts for spacecraft integration and testing, and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  1. StochKit2: software for discrete stochastic simulation of biochemical systems with events.

    PubMed

    Sanft, Kevin R; Wu, Sheng; Roh, Min; Fu, Jin; Lim, Rone Kwei; Petzold, Linda R

    2011-09-01

    StochKit2 is the first major upgrade of the popular StochKit stochastic simulation software package. StochKit2 provides highly efficient implementations of several variants of Gillespie's stochastic simulation algorithm (SSA), and tau-leaping with automatic step size selection. StochKit2 features include automatic selection of the optimal SSA method based on model properties, event handling, and automatic parallelism on multicore architectures. The underlying structure of the code has been completely updated to provide a flexible framework for extending its functionality. StochKit2 runs on Linux/Unix, Mac OS X and Windows. It is freely available under GPL version 3 and can be downloaded from http://sourceforge.net/projects/stochkit/. petzold@engineering.ucsb.edu.
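
    StochKit2 itself is a C++ package and is not reproduced here. As a hedged, minimal illustration of the direct-method SSA it implements (among other variants), the Python sketch below simulates a simple birth-death process.

      # Minimal Gillespie direct-method SSA for a birth-death process,
      # illustrating the algorithm family StochKit2 implements efficiently.
      import random

      def ssa_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=100.0, seed=1):
          random.seed(seed)
          t, x = 0.0, x0
          while t < t_end:
              a_birth, a_death = k_birth, k_death * x     # reaction propensities
              a_total = a_birth + a_death
              t += random.expovariate(a_total)            # time to next reaction
              if random.random() * a_total < a_birth:     # choose which reaction fires
                  x += 1
              else:
                  x -= 1
          return x

      print(ssa_birth_death())    # fluctuates around k_birth / k_death = 100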

  2. PFMCal : Photonic force microscopy calibration extended for its application in high-frequency microrheology

    NASA Astrophysics Data System (ADS)

    Butykai, A.; Domínguez-García, P.; Mor, F. M.; Gaál, R.; Forró, L.; Jeney, S.

    2017-11-01

    The present document is an update of the previously published MatLab code for the calibration of optical tweezers in the high-resolution detection of the Brownian motion of non-spherical probes [1]. In this instance, an alternative version of the original code, based on the same physical theory [2], but focused on the automation of the calibration of measurements using spherical probes, is outlined. The new added code is useful for high-frequency microrheology studies, where the probe radius is known but the viscosity of the surrounding fluid maybe not. This extended calibration methodology is automatic, without the need of a user's interface. A code for calibration by means of thermal noise analysis [3] is also included; this is a method that can be applied when using viscoelastic fluids if the trap stiffness is previously estimated [4]. The new code can be executed in MatLab and using GNU Octave. Program Files doi:http://dx.doi.org/10.17632/s59f3gz729.1 Licensing provisions: GPLv3 Programming language: MatLab 2016a (MathWorks Inc.) and GNU Octave 4.0 Operating system: Linux and Windows. Supplementary material: A new document README.pdf includes basic running instructions for the new code. Journal reference of previous version: Computer Physics Communications, 196 (2015) 599 Does the new version supersede the previous version?: No. It adds alternative but compatible code while providing similar calibration factors. Nature of problem (approx. 50-250 words): The original code uses a MatLab-provided user's interface, which is not available in GNU Octave, and cannot be used outside of a proprietary software as MatLab. Besides, the process of calibration when using spherical probes needs an automatic method when calibrating big amounts of different data focused to microrheology. Solution method (approx. 50-250 words): The new code can be executed in the latest version of MatLab and using GNU Octave, a free and open-source alternative to MatLab. This code generates an automatic calibration process which requires only to write the input data in the main script. Additionally, we include a calibration method based on thermal noise statistics, which can be used with viscoelastic fluids if the trap stiffness is previously estimated. Reasons for the new version: This version extends the functionality of PFMCal for the particular case of spherical probes and unknown fluid viscosities. The extended code is automatic, works in different operating systems and it is compatible with GNU Octave. Summary of revisions: The original MatLab program in the previous version, which is executed by PFMCal.m, is not changed. Here, we have added two additional main archives named PFMCal_auto.m and PFMCal_histo.m, which implement automatic calculations of the calibration process and calibration through Boltzmann statistics, respectively. The process of calibration using this code for spherical beads is described in the README.pdf file provided in the new code submission. Here, we obtain different calibration factors, β (given in μm/V), according to [2], related to two statistical quantities: the mean-squared displacement (MSD), βMSD, and the velocity autocorrelation function (VAF), βVAF. Using that methodology, the trap stiffness, k, and the zero-shear viscosity of the fluid, η, can be calculated if the value of the particle's radius, a, is previously known. 
For comparison, we include in the extended code the method of calibration using the corner frequency of the power-spectral density (PSD) [5], providing a calibration factor βPSD. In addition, with a prior estimate of the trap stiffness, along with the known value of the particle's radius, we can use thermal noise statistics to obtain calibration factors, β, according to the quadratic form of the optical potential, βE, and related to the Gaussian distribution of the bead's positions, βσ2. This method has been demonstrated to be applicable to the calibration of optical tweezers when using non-Newtonian viscoelastic polymeric liquids [4]. An example of the results using this calibration process is summarized in Table 1. Using the data provided in the new code submission, for water and acetone fluids, we calculate all the calibration factors by using the original PFMCal.m and by the new non-GUI code PFMCal_auto.m and PFMCal_histo.m. Regarding the new code, PFMCal_auto.m returns η, k, βMSD, βVAF and βPSD, while PFMCal_histo.m provides βσ2 and βE. Table 1 shows how we obtain the expected viscosity of the two fluids at this temperature and how the different methods provide good agreement between trap stiffnesses and calibration factors. Additional comments including Restrictions and Unusual features: The original code, PFMCal.m, runs under MatLab using the Statistics Toolbox. The extended code, PFMCal_auto.m and PFMCal_histo.m, can be executed without modification using MatLab or GNU Octave. The code has been tested on Linux and Windows operating systems.
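
    The thermal-noise (Boltzmann) calibration described above reduces, for a harmonic trap, to the equipartition relation between trap stiffness and position variance. The following minimal Python sketch illustrates that relation only; it is not the PFMCal MatLab/Octave code, and the numerical values are invented for the toy example.

    ```python
    import numpy as np

    kB = 1.380649e-23  # Boltzmann constant, J/K

    def beta_sigma2(x_volts, k_trap, T=295.0):
        """Thermal-noise calibration factor beta (m/V).

        Assumes a harmonic trap, so by equipartition k_trap * var(x) = kB * T
        with x = beta * x_volts, hence beta = sqrt(kB*T / (k_trap * var(x_volts))).
        """
        return np.sqrt(kB * T / (k_trap * np.var(x_volts)))

    # Toy usage with simulated detector readings (in volts)
    rng = np.random.default_rng(0)
    k_trap = 5e-6             # N/m, assumed known from a previous estimate
    beta_true = 2.0e-6        # m/V, the "unknown" factor we try to recover
    x_true = rng.normal(0.0, np.sqrt(kB * 295.0 / k_trap), 100_000)  # positions in m
    x_volts = x_true / beta_true                                     # detector output in V

    print(beta_sigma2(x_volts, k_trap))  # close to beta_true
    ```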

  3. Bidirectional automatic release of reserve for low voltage network made with low capacity PLCs

    NASA Astrophysics Data System (ADS)

    Popa, I.; Popa, G. N.; Diniş, C. M.; Deaconu, S. I.

    2018-01-01

    The article presents the design of a bidirectional automatic release of reserve built with two types of low-capacity programmable logic controllers: the PS-3 from Klöckner-Moeller and the Zelio from Schneider. It analyses the electronic timing circuits that can be used to implement the bidirectional automatic release of reserve: a time-on delay circuit and two types of time-off delay circuit. The paper presents the timing code sequences for the PS-3 PLC, the logical functions for the bidirectional automatic release of reserve, the classical control electrical diagram (with contacts, relays, and time relays), the electronic control diagram (with logic gates and timing circuits), the code (in IL language) written for the PS-3 PLC, and the code (in FBD language) written for the Zelio PLC. A comparative analysis of the two types of PLC is carried out, and the advantages of using PLCs are presented.
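
    The record does not reproduce the IL/FBD listings, so as a rough orientation only, here is a conceptual Python sketch of the time-on delay behaviour such a circuit provides (output asserted only after the input has been continuously true for a preset time). All names and timings are illustrative, not the paper's PLC code.

    ```python
    class OnDelayTimer:
        """Time-on delay: output goes true only after the input has been
        continuously true for `preset_s` seconds; it resets when the input drops."""

        def __init__(self, preset_s):
            self.preset = preset_s
            self.elapsed = 0.0
            self.output = False

        def update(self, input_on, dt):
            if input_on:
                self.elapsed = min(self.elapsed + dt, self.preset)
            else:
                self.elapsed = 0.0
            self.output = self.elapsed >= self.preset
            return self.output

    # Example scan loop: the main supply fails and the reserve is switched in
    # only after a 2 s confirmation delay, to ride through short voltage dips.
    timer = OnDelayTimer(preset_s=2.0)
    for _ in range(40):
        main_supply_lost = True                       # hypothetical input signal
        switch_to_reserve = timer.update(main_supply_lost, dt=0.1)
    print(switch_to_reserve)                          # True after 2 s of sustained loss
    ```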

  4. Automated Simplification of Full Chemical Mechanisms

    NASA Technical Reports Server (NTRS)

    Norris, A. T.

    1997-01-01

    A code has been developed to automatically simplify full chemical mechanisms. The method employed is based on the Intrinsic Low Dimensional Manifold (ILDM) method of Maas and Pope. The ILDM method is a dynamical systems approach to the simplification of large chemical kinetic mechanisms. By identifying low-dimensional attracting manifolds, the method allows complex full mechanisms to be parameterized by just a few variables; in effect, generating reduced chemical mechanisms by an automatic procedure. These resulting mechanisms, however, still retain all the species used in the full mechanism. Full and skeletal mechanisms for various fuels are simplified to a two-dimensional manifold, and the resulting mechanisms are found to compare well with the full mechanisms, and show significant improvement over global one-step mechanisms, such as those by Westbrook and Dryer. In addition, by using an ILDM reaction mechanism in a CFD code, a considerable improvement in turn-around time can be achieved.
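
    The core of the ILDM idea is an eigen-decomposition of the chemical Jacobian to separate fast relaxing modes from the slow directions that parameterize the manifold. The toy numpy sketch below shows only that separation step on an invented linear three-"species" system; it is not the Maas-Pope algorithm or the code described above.

    ```python
    import numpy as np

    # Toy linearized chemical source term dY/dt = J @ Y for three "species".
    # Widely separated eigenvalues mimic fast and slow chemical time scales.
    J = np.array([[-1.0e4, 5.0e2, 0.0],
                  [0.0, -2.0e2, 1.0e1],
                  [0.0, 0.0, -1.0]])

    eigvals, eigvecs = np.linalg.eig(J)
    order = np.argsort(eigvals.real)            # most negative (fastest) first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    n_slow = 1                                   # keep a 1-D manifold in this toy case
    slow_basis = eigvecs[:, -n_slow:].real       # directions that parameterize the manifold
    fast_basis = eigvecs[:, :-n_slow].real       # directions assumed already relaxed

    print("time scales (s):", 1.0 / np.abs(eigvals.real))
    print("slow subspace basis:\n", slow_basis)
    ```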

  5. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    NASA Technical Reports Server (NTRS)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  6. Thread concept for automatic task parallelization in image analysis

    NASA Astrophysics Data System (ADS)

    Lueckenhaus, Maximilian; Eckstein, Wolfgang

    1998-09-01

    Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when changing the hardware. Therefore it is highly desirable to do the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context but may follow different threads of execution and work on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs by taking into account the available hardware. The tests made with our system prototype show that the thread concept combined with the agent paradigm is suitable to speed up image processing by an automatic parallelization of image analysis tasks.
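
    As a minimal illustration of the underlying idea (threads that share objects but operate on different data), the sketch below splits an image into strips and processes them with a Python thread pool. It is a generic data-parallel example, not the authors' agent-based system.

    ```python
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def process_tile(tile):
        # Stand-in image-analysis operation (a simple threshold).
        return (tile > tile.mean()).astype(np.uint8)

    def process_image(image, n_tiles=4, n_workers=4):
        """Split an image into horizontal strips and process them in parallel
        threads that share the same address space (no data copies needed)."""
        strips = np.array_split(image, n_tiles, axis=0)
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            results = list(pool.map(process_tile, strips))
        return np.concatenate(results, axis=0)

    image = np.random.rand(512, 512)
    mask = process_image(image)
    print(mask.shape)  # (512, 512)
    ```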

  7. Generating Safety-Critical PLC Code From a High-Level Application Software Specification

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The benefits of automatic-application code generation are widely accepted within the software engineering community. These benefits include raised abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at Kennedy Space Center recognized the need for PLC code generation while developing the new ground checkout and launch processing system, called the Launch Control System (LCS). Engineers developed a process and a prototype software tool that automatically translates a high-level representation or specification of application software into ladder logic that executes on a PLC. All the computer hardware in the LCS is planned to be commercial off the shelf (COTS), including industrial controllers or PLCs that are connected to the sensors and end items out in the field. Most of the software in LCS is also planned to be COTS, with only small adapter software modules that must be developed in order to interface between the various COTS software products. A domain-specific language (DSL) is a programming language designed to perform tasks and to solve problems in a particular domain, such as ground processing of launch vehicles. The LCS engineers created a DSL for developing test sequences of ground checkout and launch operations of future launch vehicle and spacecraft elements, and they are developing a tabular specification format that uses the DSL keywords and functions familiar to the ground and flight system users. The tabular specification format, or tabular spec, allows most ground and flight system users to document how the application software is intended to function and requires little or no software programming knowledge or experience. A small sample from a prototype tabular spec application is shown.
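
    The LCS tabular spec format and its DSL keywords are not given in the record, so the sketch below is purely illustrative: invented field names, and a structured-text-like snippet standing in for the ladder logic the real generator emits. It only conveys the flavor of translating one spec row into controller code.

    ```python
    # Hypothetical tabular-spec row (illustrative field names only).
    spec_rows = [
        {"step": 1, "command": "OPEN_VALVE", "end_item": "LO2_FILL_VLV",
         "verify": "LO2_FILL_VLV_OPEN_IND", "timeout_s": 5},
    ]

    def row_to_structured_text(row):
        """Translate one spec row into a structured-text-like snippet,
        a stand-in for the ladder logic produced by the real generator."""
        return (
            f"(* Step {row['step']}: {row['command']} *)\n"
            f"{row['end_item']}_CMD := TRUE;\n"
            f"TON_{row['step']}(IN := NOT {row['verify']}, PT := T#{row['timeout_s']}s);\n"
            f"IF TON_{row['step']}.Q THEN STEP_{row['step']}_FAIL := TRUE; END_IF;"
        )

    for row in spec_rows:
        print(row_to_structured_text(row))
    ```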

  8. Auto identification technology and its impact on patient safety in the Operating Room of the Future.

    PubMed

    Egan, Marie T; Sandberg, Warren S

    2007-03-01

    Automatic identification technologies, such as bar coding and radio frequency identification, are ubiquitous in everyday life but virtually nonexistent in the operating room. User expectations, based on everyday experience with automatic identification technologies, have generated much anticipation that these systems will improve readiness, workflow, and safety in the operating room, with minimal training requirements. We report, in narrative form, a multi-year experience with various automatic identification technologies in the Operating Room of the Future Project at Massachusetts General Hospital. In each case, the additional human labor required to make these 'labor-saving' technologies function in the medical environment has proved to be their undoing. We conclude that while automatic identification technologies show promise, significant barriers to realizing their potential still exist. Nevertheless, overcoming these obstacles is necessary if the vision of an operating room of the future in which all processes are monitored, controlled, and optimized is to be achieved.

  9. A New Design Method of Automotive Electronic Real-time Control System

    NASA Astrophysics Data System (ADS)

    Zuo, Wenying; Li, Yinguo; Wang, Fengjuan; Hou, Xiaobo

    The structure and functionality of automotive electronic control systems are becoming more and more complex. The traditional manual programming approach to implementing automotive electronic control systems can no longer satisfy development needs. Therefore, to meet the diversity and speed requirements of real-time control system development, this paper proposes a new design method for automotive electronic control systems based on Simulink/RTW that combines the model-based design approach with automatic code generation. First, the control algorithms are designed and a control system model is built in Matlab/Simulink. Embedded code is then generated automatically by RTW, and the automotive real-time control system is developed in an OSEK/VDX operating system environment. The new development mode can significantly shorten the development cycle of automotive electronic control systems, improve program portability, reusability and scalability, and has practical value for the development of real-time control systems.

  10. ROSSTEP v1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allevato, Adam

    2016-07-21

    ROSSTEP is a system for sequentially running roslaunch, rosnode, and bash scripts automatically, for use in Robot Operating System (ROS) applications. The system consists of YAML files which define actions and conditions. A Python file parses the code and runs actions sequentially using the sys and subprocess Python modules. Between actions, it uses various ROS-based code to check conditions required to proceed, and only moves on to the next action when all the necessary conditions have been met. Included is rosstep-creator, a Qt application designed to create the YAML files required for ROSSTEP. It has a nearly one-to-one mapping from interface elements to YAML output, and serves as a convenient GUI for working with the ROSSTEP system.
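
    The actual ROSSTEP YAML schema is not shown in the record, so the keys below are assumptions. The sketch only illustrates the general pattern of reading a YAML action list and running each command with subprocess, waiting on a placeholder condition check (ROSSTEP uses ROS-based checks) before proceeding.

    ```python
    # Assumed YAML layout (hypothetical keys, for illustration only):
    #   actions:
    #     - name: bringup
    #       command: "roslaunch my_robot bringup.launch"
    #       wait_for: "/robot_ready"
    import shlex
    import subprocess
    import time

    import yaml  # PyYAML

    def condition_met(condition):
        """Placeholder condition check; the real system queries ROS here."""
        time.sleep(0.5)
        return True

    def run_sequence(path):
        with open(path) as f:
            config = yaml.safe_load(f)
        for action in config["actions"]:
            print("running:", action["name"])
            subprocess.Popen(shlex.split(action["command"]))
            while not condition_met(action.get("wait_for")):
                time.sleep(0.1)
    ```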

  11. Research on Secure Systems and Automatic Programming. Volume I

    DTIC Science & Technology

    1977-10-14

    for the enforcement of adherence to authorization; they include physical limitations, legal codes, social pressures, and the psychological makeup of...systems job statistics and possibly indications of an support instructions. The criteria for their abnormal termination. * inclusion were high execution...interrupt processes, for the output data page. Jobs may also terminate however, use the standard SWI TCH PROCESS instruc- abnormally by executing an

  12. Development of Automated Procedures to Generate Reference Building Models for ASHRAE Standard 90.1 and India’s Building Energy Code and Implementation in OpenStudio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Andrew; Haves, Philip; Jegi, Subhash

    This paper describes a software system for automatically generating a reference (baseline) building energy model from the proposed (as-designed) building energy model. This system is built using the OpenStudio Software Development Kit (SDK) and is designed to operate on building energy models in the OpenStudio file format.

  13. Anomaly-Based Intrusion Detection Systems Utilizing System Call Data

    DTIC Science & Technology

    2012-03-01

    Functionality Description Persistence mechanism Mimicry technique Camouflage malware image: • renaming its image • appending its image to victim...particular industrial plant . Exactly which one was targeted still remains unknown, however a majority of the attacks took place in Iran [24]. Due... plant to unstable phase and eventually physical damage. It is interesting to note that a particular block of code - block DB8061 is automatically

  14. Object-oriented controlled-vocabulary translator using TRANSOFT + HyperPAD.

    PubMed

    Moore, G W; Berman, J J

    1991-01-01

    Automated coding of surgical pathology reports is demonstrated. This public-domain translation software operates on surgical pathology files, extracting diagnoses and assigning codes in a controlled medical vocabulary, such as SNOMED. Context-sensitive translation algorithms are employed, and syntactically correct diagnostic items are produced that are matched with controlled vocabulary. English-language surgical pathology reports, accessioned over one year at the Baltimore Veterans Affairs Medical Center, were translated. With an interface to a larger hospital information system, all natural language pathology reports are automatically rendered as topography and morphology codes. This translator frees the pathologist from the time-intensive task of personally coding each report, and may be used to flag certain diagnostic categories that require specific quality assurance actions.

  15. Object-oriented controlled-vocabulary translator using TRANSOFT + HyperPAD.

    PubMed Central

    Moore, G. W.; Berman, J. J.

    1991-01-01

    Automated coding of surgical pathology reports is demonstrated. This public-domain translation software operates on surgical pathology files, extracting diagnoses and assigning codes in a controlled medical vocabulary, such as SNOMED. Context-sensitive translation algorithms are employed, and syntactically correct diagnostic items are produced that are matched with controlled vocabulary. English-language surgical pathology reports, accessioned over one year at the Baltimore Veterans Affairs Medical Center, were translated. With an interface to a larger hospital information system, all natural language pathology reports are automatically rendered as topography and morphology codes. This translator frees the pathologist from the time-intensive task of personally coding each report, and may be used to flag certain diagnostic categories that require specific quality assurance actions. PMID:1807773

  16. Preparing a collection of radiology examinations for distribution and retrieval.

    PubMed

    Demner-Fushman, Dina; Kohli, Marc D; Rosenman, Marc B; Shooshan, Sonya E; Rodriguez, Laritza; Antani, Sameer; Thoma, George R; McDonald, Clement J

    2016-03-01

    Clinical documents made available for secondary use play an increasingly important role in discovery of clinical knowledge, development of research methods, and education. An important step in facilitating secondary use of clinical document collections is easy access to descriptions and samples that represent the content of the collections. This paper presents an approach to developing a collection of radiology examinations, including both the images and radiologist narrative reports, and making them publicly available in a searchable database. The authors collected 3996 radiology reports from the Indiana Network for Patient Care and 8121 associated images from the hospitals' picture archiving systems. The images and reports were de-identified automatically and then the automatic de-identification was manually verified. The authors coded the key findings of the reports and empirically assessed the benefits of manual coding on retrieval. The automatic de-identification of the narrative was aggressive and achieved 100% precision at the cost of rendering a few findings uninterpretable. Automatic de-identification of the images was not as complete: images for two of 3996 patients (0.05%) showed protected health information. Manual encoding of findings improved retrieval precision. Stringent de-identification methods can remove all identifiers from text radiology reports. DICOM de-identification of images does not remove all identifying information and needs special attention to images scanned from film. Adding manual coding to the radiologist narrative reports significantly improved the relevance of the retrieved clinical documents. The de-identified Indiana chest X-ray collection is available for searching and downloading from the National Library of Medicine (http://openi.nlm.nih.gov/). Published by Oxford University Press on behalf of the American Medical Informatics Association 2015. This work is written by US Government employees and is in the public domain in the US.

  17. Flexible Generation of Kalman Filter Code

    NASA Technical Reports Server (NTRS)

    Richardson, Julian; Wilson, Edward

    2006-01-01

    Domain-specific program synthesis can automatically generate high quality code in complex domains from succinct specifications, but the range of programs which can be generated by a given synthesis system is typically narrow. Obtaining code which falls outside this narrow scope necessitates either 1) extension of the code generator, which is usually very expensive, or 2) manual modification of the generated code, which is often difficult and which must be redone whenever changes are made to the program specification. In this paper, we describe adaptations and extensions of the AUTOFILTER Kalman filter synthesis system which greatly extend the range of programs which can be generated. Users augment the input specification with a specification of code fragments and how those fragments should interleave with or replace parts of the synthesized filter. This allows users to generate a much wider range of programs without their needing to modify the synthesis system or edit generated code. We demonstrate the usefulness of the approach by applying it to the synthesis of a complex state estimator which combines code from several Kalman filters with user-specified code. The work described in this paper allows the complex design decisions necessary for real-world applications to be reflected in the synthesized code. When executed on simulated input data, the generated state estimator was found to produce estimates comparable to those produced by a hand-coded estimator.
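
    For readers unfamiliar with the kind of code such a synthesis system emits, here is a generic one-dimensional constant-velocity Kalman filter in numpy. It is a textbook illustration, not AUTOFILTER output, and all model matrices and measurements are invented.

    ```python
    import numpy as np

    # 1-D constant-velocity model: state x = [position, velocity]
    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q = 1e-4 * np.eye(2)                    # process noise covariance
    R = np.array([[1e-2]])                  # measurement noise covariance

    x = np.zeros(2)
    P = np.eye(2)

    def kalman_step(x, P, z):
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        y = z - H @ x                        # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        return x, P

    for z in [0.11, 0.22, 0.28, 0.41]:       # noisy position measurements
        x, P = kalman_step(x, P, np.array([z]))
    print(x)                                  # estimated [position, velocity]
    ```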

  18. SU-D-BRD-03: A Gateway for GPU Computing in Cancer Radiotherapy Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jia, X; Folkerts, M; Shi, F

    Purpose: The Graphics Processing Unit (GPU) has become increasingly important in radiotherapy. However, it is still difficult for general clinical researchers to access GPU codes developed by other researchers, and for developers to objectively benchmark their codes. Moreover, it is quite common to see repeated efforts spent on developing low-quality GPU codes. The goal of this project is to establish an infrastructure for testing GPU codes, cross comparing them, and facilitating code distribution in the radiotherapy community. Methods: We developed a system called Gateway for GPU Computing in Cancer Radiotherapy Research (GCR2). A number of GPU codes developed by our group and other developers can be accessed via a web interface. To use the services, researchers first upload their test data or use the standard data provided by our system. Then they can select the GPU device on which the code will be executed. Our system offers all mainstream GPU hardware for code benchmarking purposes. After the code run is complete, the system automatically summarizes and displays the computing results. We also released an SDK to allow the developers to build their own algorithm implementation and submit their binary codes to the system. The submitted code is then systematically benchmarked using a variety of GPU hardware and representative data provided by our system. The developers can also compare their codes with others and generate benchmarking reports. Results: It is found that the developed system is fully functional. Through a user-friendly web interface, researchers are able to test various GPU codes. Developers also benefit from this platform by comprehensively benchmarking their codes on various GPU platforms and representative clinical data sets. Conclusion: We have developed an open platform allowing clinical researchers and developers to access the GPUs and GPU codes. This development will facilitate the utilization of GPUs in the radiation therapy field.

  19. Compiler-Driven Performance Optimization and Tuning for Multicore Architectures

    DTIC Science & Technology

    2015-04-10

    develop a powerful system for auto-tuning of library routines and compute-intensive kernels, driven by the Pluto system for multicores that we are developing. The work here is motivated by recent advances in two major areas of...automatic C-to-CUDA code generator using a polyhedral compiler transformation framework. We have used and adapted PLUTO (our state-of-the-art tool

  20. Data Intensive Systems (DIS) Benchmark Performance Summary

    DTIC Science & Technology

    2003-08-01

    models assumed by today’s conventional architectures. Such applications include model-based Automatic Target Recognition (ATR), synthetic aperture...radar (SAR) codes, large scale dynamic databases/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high speed...distributed interactive and data intensive simulations, data-oriented problems characterized by pointer-based and other highly irregular data structures

  1. Data-Driven Hint Generation in Vast Solution Spaces: A Self-Improving Python Programming Tutor

    ERIC Educational Resources Information Center

    Rivers, Kelly; Koedinger, Kenneth R.

    2017-01-01

    To provide personalized help to students who are working on code-writing problems, we introduce a data-driven tutoring system, ITAP (Intelligent Teaching Assistant for Programming). ITAP uses state abstraction, path construction, and state reification to automatically generate personalized hints for students, even when given states that have not…

  2. A New Internet Tool for Automatic Evaluation in Control Systems and Programming

    ERIC Educational Resources Information Center

    Munoz de la Pena, D.; Gomez-Estern, F.; Dormido, S.

    2012-01-01

    In this paper we present a web-based innovative education tool designed for automating the collection, evaluation and error detection in practical exercises assigned to computer programming and control engineering students. By using a student/instructor code-fusion architecture, the conceptual limits of multiple-choice tests are overcome by far.…

  3. Multiple Views, Contexts, and Symbol Systems in Learning with Hypertext/Hypermedia: A Critical Review of Research.

    ERIC Educational Resources Information Center

    Tergan, Sigmar-Olaf

    1997-01-01

    Reviews research on the effectiveness of hypertext/hypermedia-based learning and concludes that presenting subject matter from different perspectives, in multiple contexts, and in multiple codes does not automatically contribute to higher performance but may when instructional scaffolding is provided. The additional cognitive load may actually…

  4. The KATE shell: An implementation of model-based control, monitor and diagnosis

    NASA Technical Reports Server (NTRS)

    Cornell, Matthew

    1987-01-01

    The conventional control and monitor software currently used by the Space Center for Space Shuttle processing has many limitations, such as high maintenance costs and limited diagnostic and simulation capabilities. These limitations have motivated the development of a knowledge-based (or model-based) shell to generically control and monitor electro-mechanical systems. The knowledge base describes the system's structure and function and is used by a software shell to perform real-time constraint checking, low-level control of components, diagnosis of detected faults, sensor validation, automatic generation of schematic diagrams and automatic recovery from failures. This approach is more versatile and more powerful than the conventional hard-coded approach, although, for systems that require high-speed reaction times or are not well understood, knowledge-based control and monitor systems may not be appropriate.

  5. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  6. Experiences with a Requirements-Based Programming Approach to the Development of a NASA Autonomous Ground Control System

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.; Gracanin, Denis; Erickson, John

    2005-01-01

    Requirements-to-Design-to-Code (R2D2C) is an approach to the engineering of computer-based systems that embodies the idea of requirements-based programming in system development. It goes further, however, in that the approach offers not only an underlying formalism, but full formal development from requirements capture through to the automatic generation of provably-correct code. As such, the approach has direct application to the development of systems requiring autonomic properties. We describe a prototype tool to support the method, and illustrate its applicability to the development of LOGOS, a NASA autonomous ground control system, which exhibits autonomic behavior. Finally, we briefly discuss other areas where the approach and prototype tool are being considered for application.

  7. Resource allocation for error resilient video coding over AWGN using optimization approach.

    PubMed

    An, Cheolhong; Nguyen, Truong Q

    2008-12-01

    The number of slices for error resilient video coding is jointly optimized with 802.11a-like media access control and the physical layers with automatic repeat request and rate compatible punctured convolutional code over additive white gaussian noise channel as well as channel times allocation for time division multiple access. For error resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. It is applied for the joint optimization problem, and the problem is solved by a convex optimization method such as the primal-dual decomposition method. We compare the performance of a video communication system which uses the optimal number of slices with one that codes a picture as one slice. From numerical examples, end-to-end distortion of utility functions can be significantly reduced with the optimal slices of a picture especially at low signal-to-noise ratio.

  8. Formal Safety Certification of Aerospace Software

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd

    2005-01-01

    In principle, formal methods offer many advantages for aerospace software development: they can help to achieve ultra-high reliability, and they can be used to provide evidence of the reliability claims which can then be subjected to external scrutiny. However, despite years of research and many advances in the underlying formalisms of specification, semantics, and logic, formal methods are not much used in practice. In our opinion this is related to three major shortcomings. First, the application of formal methods is still expensive because they are labor- and knowledge-intensive. Second, they are difficult to scale up to complex systems because they are based on deep mathematical insights about the behavior of the systems (i.e., they rely on the "heroic proof"). Third, the proofs can be difficult to interpret, and typically stand in isolation from the original code. In this paper, we describe a tool for formally demonstrating safety-relevant aspects of aerospace software, which largely circumvents these problems. We focus on safety properties because it has been observed that safety violations such as out-of-bounds memory accesses or use of uninitialized variables constitute the majority of the errors found in the aerospace domain. In our approach, safety means that the program will not violate a set of rules that can range from simple memory access rules to high-level flight rules. These different safety properties are formalized as different safety policies in Hoare logic, which are then used by a verification condition generator along with the code and logical annotations in order to derive formal safety conditions; these are then proven using an automated theorem prover. Our certification system is currently integrated into a model-based code generation toolset that generates the annotations together with the code. However, this automated formal certification technology is not exclusively constrained to our code generator and could, in principle, also be integrated with other code generators such as RealTime Workshop or even applied to legacy code. Our approach circumvents the historical problems with formal methods by increasing the degree of automation on all levels. The restriction to safety policies (as opposed to arbitrary functional behavior) results in simpler proof problems that can generally be solved by fully automatic theorem provers. An automated linking mechanism between the safety conditions and the code provides some of the traceability mandated by process standards such as DO-178B. An automated explanation mechanism uses semantic markup added by the verification condition generator to produce natural-language explanations of the safety conditions and thus supports their interpretation in relation to the code. The tool provides an automatically generated certification browser that lets users inspect the (generated) code along with the safety conditions (including textual explanations), and uses hyperlinks to automate tracing between the two levels. Here, the explanations reflect the logical structure of the safety obligation but the mechanism can in principle be customized using different sets of domain concepts. The interface also provides some limited control over the certification process itself.
Our long-term goal is a seamless integration of certification, code generation, and manual coding that results in a "certified pipeline" in which specifications are automatically transformed into executable code, together with the supporting artifacts necessary for achieving and demonstrating the high level of assurance needed in the aerospace domain.
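
    To make the notion of a safety obligation concrete, here is a toy Python illustration of the kind of array-bounds condition a verification condition generator would emit. It is not the paper's Hoare-logic machinery; the function and variable names are invented, and the assert simply encodes the obligation that an automated prover would discharge statically.

    ```python
    def read_element(a, i, n):
        """Toy illustration of a memory-safety obligation.

        For the access a[i] below, an array-bounds safety policy would emit the
        verification condition:  precondition  =>  0 <= i < len(a).
        Here the precondition 0 <= i < n together with n <= len(a) discharges it,
        which is what an automated theorem prover would check offline; the assert
        merely restates the condition at run time for illustration.
        """
        assert 0 <= i < n <= len(a)   # formalized safety condition
        return a[i]
    ```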

  9. Computer memory management system

    DOEpatents

    Kirk, III, Whitson John

    2002-01-01

    A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior through a coding protocol that describes when relationships should be maintained and when they should be broken. In one aspect, the system allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives that define deletion of associated objects. In another aspect, the invention includes simple-to-use infinite undo/redo functionality: through a simple function call, it can undo all of the changes made to a data model since the previous 'valid state' was noted.
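
    The patented "intelligent pointer" scheme is not reproduced here, but a rough Python analogue of the strong-versus-breakable-link idea is shown below: strong references keep objects alive, while weak references let the collector reclaim them once the strong links are gone. All class and attribute names are illustrative.

    ```python
    import gc
    import weakref

    class Node:
        def __init__(self, name):
            self.name = name
            self.children = []          # strong links: keep children alive
            self._parent = None         # weak link: breakable, avoids cycles

        @property
        def parent(self):
            return self._parent() if self._parent is not None else None

        def add_child(self, child):
            self.children.append(child)
            child._parent = weakref.ref(self)   # relationship with "weak" behavior

    root = Node("root")
    root.add_child(Node("leaf"))
    leaf = root.children[0]
    print(leaf.parent.name)   # "root"

    del root                  # breaking the last strong link ...
    gc.collect()
    print(leaf.parent)        # ... lets the parent be collected: None
    ```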

  10. Automatic Calibration of Stereo-Cameras Using Ordinary Chess-Board Patterns

    NASA Astrophysics Data System (ADS)

    Prokos, A.; Kalisperakis, I.; Petsa, E.; Karras, G.

    2012-07-01

    Automation of camera calibration is facilitated by recording coded 2D patterns. Our toolbox for automatic camera calibration using images of simple chess-board patterns is freely available on the Internet. But it is unsuitable for stereo-cameras whose calibration implies recovering camera geometry and their true-to-scale relative orientation. In contrast to all reported methods requiring additional specific coding to establish an object space coordinate system, a toolbox for automatic stereo-camera calibration relying on ordinary chess-board patterns is presented here. First, the camera calibration algorithm is applied to all image pairs of the pattern to extract nodes of known spacing, order them in rows and columns, and estimate two independent camera parameter sets. The actual node correspondences on stereo-pairs remain unknown. Image pairs of a textured 3D scene are exploited for finding the fundamental matrix of the stereo-camera by applying RANSAC to point matches established with the SIFT algorithm. A node is then selected near the centre of the left image; its match on the right image is assumed as the node closest to the corresponding epipolar line. This yields matches for all nodes (since these have already been ordered), which should also satisfy the 2D epipolar geometry. Measures for avoiding mismatching are taken. With automatically estimated initial orientation values, a bundle adjustment is performed constraining all pairs on a common (scaled) relative orientation. Ambiguities regarding the actual exterior orientations of the stereo-camera with respect to the pattern are irrelevant. Results from this automatic method show typical precisions not above 1/4 pixels for 640×480 web cameras.
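
    As a compressed orientation to the main ingredients named above (chessboard detection, per-camera calibration, SIFT matching and a RANSAC fundamental matrix), here is an abbreviated OpenCV sketch. It assumes OpenCV 4.4 or later (for SIFT) and a hypothetical 9x6 inner-corner board; it is not the authors' toolbox and omits the node ordering, epipolar matching of nodes, and the constrained bundle adjustment.

    ```python
    import cv2
    import numpy as np

    pattern = (9, 6)                       # inner chessboard corners (assumed)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # unit spacing

    def calibrate_one_camera(gray_images):
        obj_pts, img_pts = [], []
        for gray in gray_images:
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)
        size = gray_images[0].shape[::-1]
        rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
        return K, dist

    def fundamental_from_scene(gray_left, gray_right):
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(gray_left, None)
        kp2, des2 = sift.detectAndCompute(gray_right, None)
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
        return F
    ```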

  11. Adaptive transmission based on multi-relay selection and rate-compatible LDPC codes

    NASA Astrophysics Data System (ADS)

    Su, Hualing; He, Yucheng; Zhou, Lin

    2017-08-01

    In order to adapt to dynamically changing channel conditions and improve the transmission reliability of the system, a cooperation system combining rate-compatible low-density parity-check (RC-LDPC) codes with a multi-relay selection protocol is proposed. Traditional relay selection protocols consider only the channel state information (CSI) of the source-relay and relay-destination links. The multi-relay selection protocol proposed in this paper additionally takes the CSI between relays into account in order to obtain more opportunities for cooperation. Furthermore, the ideas of hybrid automatic repeat request (HARQ) and rate compatibility are introduced. Simulation results show that the transmission reliability of the system can be significantly improved by the proposed protocol.

  12. The Environment-Power System Analysis Tool development program. [for spacecraft power supplies

    NASA Technical Reports Server (NTRS)

    Jongeward, Gary A.; Kuharski, Robert A.; Kennedy, Eric M.; Wilcox, Katherine G.; Stevens, N. John; Putnam, Rand M.; Roche, James C.

    1989-01-01

    The Environment Power System Analysis Tool (EPSAT) is being developed to provide engineers with the ability to assess the effects of a broad range of environmental interactions on space power systems. A unique user-interface-data-dictionary code architecture oversees a collection of existing and future environmental modeling codes (e.g., neutral density) and physical interaction models (e.g., sheath ionization). The user-interface presents the engineer with tables, graphs, and plots which, under supervision of the data dictionary, are automatically updated in response to parameter change. EPSAT thus provides the engineer with a comprehensive and responsive environmental assessment tool and the scientist with a framework into which new environmental or physical models can be easily incorporated.

  13. Constructing graph models for software system development and analysis

    NASA Astrophysics Data System (ADS)

    Pogrebnoy, Andrey V.

    2017-01-01

    We propose a concept for creating instrumentation that supports the rationale of functional and structural decisions during software system (SS) development. The SS is developed simultaneously on two models - a functional model (FM) and a structural model (SM). The FM is the source code of the SS. An adequate representation of the FM in the form of a graph model (GM) is generated automatically and is called the SM. The problem of creating and visualizing the GM is considered from the point of view of applying it as a uniform platform for adequately representing the SS source code. We propose three levels of GM detail: GM1 - for visual analysis of the source code and for SS version control, GM2 - for resource optimization and analysis of connections between SS components, and GM3 - for analysis of the dynamic behavior of the SS. The paper includes examples of constructing all levels of the GM.

  14. Speech input system for meat inspection and pathological coding used thereby

    NASA Astrophysics Data System (ADS)

    Abe, Shozo

    Meat inspection is one of the exclusive and important jobs of veterinarians, though it is not well known in general. As the inspection must be conducted skillfully during a series of continuous operations in a slaughterhouse, the development of automatic inspection systems has long been required. We employed a hands-free speech input system to record the inspection data because inspectors have to use both hands to handle the internal organs of cattle and check their health condition by the naked eye. The data collected by the inspectors are transferred to a speech recognizer and then stored as controllable data for each animal inspected. Control of the terms to be input, such as pathological conditions, and their coding are also important in this speech input system, and practical examples are shown.

  15. Integration of the Remote Agent for the NASA Deep Space One Autonomy Experiment

    NASA Technical Reports Server (NTRS)

    Dorais, Gregory A.; Bernard, Douglas E.; Gamble, Edward B., Jr.; Kanefsky, Bob; Kurien, James; Muscettola, Nicola; Nayak, P. Pandurang; Rajan, Kanna; Lau, Sonie (Technical Monitor)

    1998-01-01

    This paper describes the integration of the Remote Agent (RA), a spacecraft autonomy system which is scheduled to control the Deep Space 1 spacecraft during a flight experiment in 1999. The RA is a reusable, model-based autonomy system that is quite different from software typically used to control an aerospace system. We describe the integration challenges we faced, how we addressed them, and the lessons learned. We focus on those aspects of integrating the RA that were either easier or more difficult than integrating a more traditional large software application because the RA is a model-based autonomous system. A number of characteristics of the RA made the integration process easier. One example is the model-based nature of the RA. Since the RA is model-based, most of its behavior is not hard coded into procedural program code. Instead, engineers specify high level models of the spacecraft's components from which the Remote Agent automatically derives correct system-wide behavior on the fly. This high level, modular, and declarative software description allowed some interfaces between RA components and between the RA and the flight software to be automatically generated and tested for completeness against the Remote Agent's models. In addition, the Remote Agent's model-based diagnosis system automatically diagnoses when the RA models are not consistent with the behavior of the spacecraft. In flight, this feature is used to diagnose failures in the spacecraft hardware. During integration, it proved valuable in finding problems in the spacecraft simulator or flight software. In addition, when modifications are made to the spacecraft hardware or flight software, the RA models are easily changed because they only capture a description of the spacecraft. One does not have to maintain procedural code that implements the correct behavior for every expected situation. On the other hand, several features of the RA made it more difficult to integrate than typical flight software. For example, the definition of correct behavior is more difficult to specify for a system that is expected to reason about and flexibly react to its environment than for a traditional flight software system. Consequently, whenever a change is made to the RA it is more time consuming to determine if the resulting behavior is correct. We conclude the paper with a discussion of future work on the Remote Agent as well as recommendations to ease integration of similar autonomy projects.

  16. ACSYNT - A standards-based system for parametric, computer aided conceptual design of aircraft

    NASA Technical Reports Server (NTRS)

    Jayaram, S.; Myklebust, A.; Gelhausen, P.

    1992-01-01

    A group of eight US aerospace companies together with several NASA and Navy centers, led by NASA Ames Systems Analysis Branch, and Virginia Tech's CAD Laboratory agreed, through the assistance of the American Technology Initiative, in 1990 to form the ACSYNT (Aircraft Synthesis) Institute. The Institute is supported by a Joint Sponsored Research Agreement to continue the research and development in computer aided conceptual design of aircraft initiated by NASA Ames Research Center and Virginia Tech's CAD Laboratory. The result of this collaboration, a feature-based, parametric computer aided aircraft conceptual design code called ACSYNT, is described. The code is based on analysis routines begun at NASA Ames in the early 1970's. ACSYNT's CAD system is based entirely on the ISO standard Programmer's Hierarchical Interactive Graphics System and is graphics-device independent. The code includes a highly interactive graphical user interface, automatically generated Hermite and B-Spline surface models, and shaded image displays. Numerous features to enhance aircraft conceptual design are described.

  17. Using a business rule management system to improve disposition of traumatized patients.

    PubMed

    Neuhaus, Philipp; Noack, Oliver; Majchrzak, Tim; Uckert, Frank

    2010-01-01

    We propose a business rule management system that is used to optimize dispatch in a mass casualty incident. Using geospatial information from available ambulances and rescue helicopters, a business rule engine calculates an optimized transportation plan for injured persons. It automatically considers special needs like ambulances equipped for baby transportation or special decontamination equipment, e.g. to deal with an accident in a chemical factory. The rules used in the system are not hardcoded; thus, it is possible to redefine them on the fly without changing the program's source code. It is possible to load and save a rule set in case of a catastrophe. Furthermore, it is possible to automatically recalculate an already planned operation if it becomes clear that the rescue vehicles assigned are needed by a person with life-threatening injuries.
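
    The key point, rules as data rather than code, can be illustrated with a drastically simplified Python sketch. It is not the actual business rule engine; rule and vehicle field names are invented, and a real BRMS would evaluate far richer rule sets.

    ```python
    # Rules are plain data, so they can be edited and reloaded at run time
    # without touching program code (field names here are illustrative).
    rules = [
        {"name": "baby transport", "needs": "incubator",
         "accept": lambda v: v.get("incubator", False)},
        {"name": "chemical incident", "needs": "decontamination",
         "accept": lambda v: v.get("decon_equipment", False)},
    ]

    def eligible_vehicles(patient_needs, vehicles):
        """Return vehicles that satisfy every rule triggered by the patient's needs."""
        active = [r for r in rules if r["needs"] in patient_needs]
        return [v for v in vehicles if all(r["accept"](v) for r in active)]

    vehicles = [
        {"id": "RTW-1", "incubator": False, "decon_equipment": True},
        {"id": "RTW-2", "incubator": True,  "decon_equipment": False},
    ]
    print([v["id"] for v in eligible_vehicles({"incubator"}, vehicles)])  # ['RTW-2']
    ```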

  18. Program Code Generator for Cardiac Electrophysiology Simulation with Automatic PDE Boundary Condition Handling

    PubMed Central

    Punzalan, Florencio Rusty; Kunieda, Yoshitoshi; Amano, Akira

    2015-01-01

    Clinical and experimental studies involving human hearts can have certain limitations. Methods such as computer simulations can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors can tailor PDE solutions and their corresponding program code to specific problems. Boundary condition and parameter changes in the customized code are usually error-prone and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication. The program generator can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated to undergo dependency analysis. The result of the dependency analysis is then used to generate the program code. The resulting program code is in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations. One of the simulations is based on a published experiment, and the simulation results are compared with the experimental data. We conclude that the proposed program code generator can be used to generate code for physiological simulations and provides a tool for studying cardiac electrophysiology. PMID:26356082
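
    To show the kind of discretized update, with boundary terms substituted into the stencil, that such a generator would emit, here is a hand-written numpy sketch for 1-D diffusion with zero-flux (Neumann) boundaries. It is not produced by the authors' tool and the parameters are arbitrary; the mirrored ghost values stand in for the "replacement" of out-of-domain neighbors.

    ```python
    import numpy as np

    # 1-D diffusion u_t = D u_xx, explicit finite differences, zero-flux
    # (Neumann) boundaries handled by replacing the out-of-domain neighbor
    # with its mirror value -- the replacement-scheme idea in miniature.
    D, dx, dt, nx, steps = 1e-3, 0.01, 0.02, 101, 500
    u = np.zeros(nx)
    u[45:55] = 1.0                      # initial stimulus in the middle

    for _ in range(steps):
        left = np.empty_like(u)
        right = np.empty_like(u)
        left[1:], left[0] = u[:-1], u[1]        # mirrored ghost value at x = 0
        right[:-1], right[-1] = u[1:], u[-2]    # mirrored ghost value at x = L
        u = u + D * dt / dx**2 * (left - 2 * u + right)

    print(u.max(), u.min())   # stimulus has spread; values remain within [0, 1]
    ```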

  19. SU-E-T-493: Accelerated Monte Carlo Methods for Photon Dosimetry Using a Dual-GPU System and CUDA.

    PubMed

    Liu, T; Ding, A; Xu, X

    2012-06-01

    To develop a Graphics Processing Unit (GPU) based Monte Carlo (MC) code that accelerates dose calculations on a dual-GPU system. We simulated a clinical case of prostate cancer treatment. A voxelized abdomen phantom derived from 120 CT slices was used containing 218×126×60 voxels, and a GE LightSpeed 16-MDCT scanner was modeled. A CPU version of the MC code was first developed in C++ and tested on an Intel Xeon X5660 2.8 GHz CPU; it was then translated into a GPU version using CUDA C 4.1 and run on a dual Tesla M2090 GPU system. The code featured automatic assignment of simulation tasks to multiple GPUs, as well as accurate calculation of energy- and material-dependent cross-sections. Double-precision floating point format was used for accuracy. Doses to the rectum, prostate, bladder and femoral heads were calculated. When running on a single GPU, the MC GPU code was found to be 19 times faster than the CPU code and 42 times faster than MCNPX. These speedup factors were doubled on the dual-GPU system. The dose results were benchmarked against MCNPX, and a maximum difference of 1% was observed when the relative error was kept below 0.1%. A GPU-based MC code was developed for dose calculations using detailed patient and CT scanner models. Efficiency and accuracy were both guaranteed in this code. Scalability of the code was confirmed on the dual-GPU system. © 2012 American Association of Physicists in Medicine.

  20. GridMan: A grid manipulation system

    NASA Technical Reports Server (NTRS)

    Eiseman, Peter R.; Wang, Zhu

    1992-01-01

    GridMan is an interactive grid manipulation system. It operates on grids to produce new grids which conform to user demands. The input grids are not constrained to come from any particular source. They may be generated by algebraic methods, elliptic methods, hyperbolic methods, parabolic methods, or some combination of methods. The methods are included in the various available structured grid generation codes. These codes perform the basic assembly function for the various elements of the initial grid. For block structured grids, the assembly can be quite complex due to a large number of block corners, edges, and faces for which various connections and orientations must be properly identified. The grid generation codes are distinguished among themselves by their balance between interactive and automatic actions and by their modest variations in control. The basic form of GridMan provides a much more substantial level of grid control and will take its input from any of the structured grid generation codes. The communication link to the outside codes is a data file which contains the grid or section of grid.

  1. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies which each suffer from their own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation process as a sequence of discrete equations which are assembled and solved. It is the coupling of the respective abstractions employed by libadjoint and the FEniCS project which produces the adjoint model automatically, without further intervention from the model developer. This presentation will demonstrate this new technology through linear and non-linear shallow water test cases. The exceptionally simple model syntax will be highlighted and the correctness of the resulting adjoint simulations will be demonstrated using rigorous convergence tests.
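
    The sketch below follows the style of the dolfin-adjoint tutorials, which implement the approach described above (a high-level UFL specification from which forward and adjoint models are derived automatically). It assumes FEniCS and dolfin-adjoint are installed, uses a Poisson problem rather than shallow water for brevity, and exact function names may differ between versions.

    ```python
    from fenics import *          # high-level UFL-based problem description
    from fenics_adjoint import *  # overloads solve/assemble to record the tape

    mesh = UnitSquareMesh(32, 32)
    V = FunctionSpace(mesh, "P", 1)

    f = Constant(1.0)             # control parameter (source strength)
    u = TrialFunction(V)
    v = TestFunction(V)
    a = inner(grad(u), grad(v)) * dx       # the forward model, written in UFL
    L = f * v * dx
    bc = DirichletBC(V, Constant(0.0), "on_boundary")

    uh = Function(V)
    solve(a == L, uh, bc)                  # forward solve (recorded automatically)

    J = assemble(uh * uh * dx)             # scalar functional of interest
    dJdf = compute_gradient(J, Control(f)) # adjoint solve generated automatically
    print(float(dJdf))
    ```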

  2. Exogean: a framework for annotating protein-coding genes in eukaryotic genomic DNA

    PubMed Central

    Djebali, Sarah; Delaplace, Franck; Crollius, Hugues Roest

    2006-01-01

    Background: Accurate and automatic gene identification in eukaryotic genomic DNA is more than ever of crucial importance to efficiently exploit the large volume of assembled genome sequences available to the community. Automatic methods have always been considered less reliable than human expertise. This is illustrated in the EGASP project, where reference annotations against which all automatic methods are measured are generated by human annotators and experimentally verified. We hypothesized that replicating the accuracy of human annotators in an automatic method could be achieved by formalizing the rules and decisions that they use, in a mathematical formalism. Results: We have developed Exogean, a flexible framework based on directed acyclic colored multigraphs (DACMs) that can represent biological objects (for example, mRNA, ESTs, protein alignments, exons) and relationships between them. Graphs are analyzed to process the information according to rules that replicate those used by human annotators. Simple individual starting objects given as input to Exogean are thus combined and synthesized into complex objects such as protein coding transcripts. Conclusion: We show here, in the context of the EGASP project, that Exogean is currently the method that best reproduces protein coding gene annotations from human experts, in terms of identifying at least one exact coding sequence per gene. We discuss current limitations of the method and several avenues for improvement. PMID:16925841
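
    As a toy illustration of the DACM idea, objects as nodes and typed ("colored") evidence relationships as parallel edges, here is a small networkx sketch with an annotator-style acceptance rule. It is not Exogean's actual data model; all identifiers and the two-evidence rule are invented for the example.

    ```python
    import networkx as nx

    # Nodes are biological objects; parallel edges carry a "color" recording
    # the evidence type that links them (illustrative labels only).
    g = nx.MultiDiGraph()
    g.add_node("exon_1", kind="exon", start=100, end=250)
    g.add_node("exon_2", kind="exon", start=400, end=520)
    g.add_edge("exon_1", "exon_2", color="mRNA", source="mRNA_AB1234")
    g.add_edge("exon_1", "exon_2", color="protein", source="SwissProt_P999")

    # A simple "rule": keep exon-exon links supported by at least two
    # independent evidence colors, as a human annotator might require.
    colors = {d["color"] for _, _, d in g.edges(data=True)}
    if len(colors) >= 2:
        print("exon_1 -> exon_2 accepted with evidence:", sorted(colors))
    ```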

  3. General Framework for Animal Food Safety Traceability Using GS1 and RFID

    NASA Astrophysics Data System (ADS)

    Cao, Weizhu; Zheng, Limin; Zhu, Hong; Wu, Ping

    GS1 is a global traceability standard composed of an encoding system (EAN/UCC, EPC), automatically identified data carriers (bar codes, RFID), and electronic data interchange standards (EDI, XML). RFID is a non-contact, multi-object automatic identification technique. Tracing food to its source, standardizing RFID tags, and sharing dynamic data are urgent problems for current traceability systems. This paper designs a general framework for animal food safety traceability using GS1 and RFID. The framework uses RFID tags encoded according to the EPCglobal tag data standards. Each information server has an access tier, a business tier and a resource tier. These servers are heterogeneous and distributed, providing user access interfaces over SOAP or HTTP protocols. For sharing dynamic data, a discovery service and an object name service are used to locate the dynamic, distributed information servers.

  4. Cross-Layer Design for Space-Time coded MIMO Systems over Rice Fading Channel

    NASA Astrophysics Data System (ADS)

    Yu, Xiangbin; Zhou, Tingting; Liu, Xiaoshuai; Yin, Xin

    A cross-layer design (CLD) scheme for space-time coded MIMO systems over the Rice fading channel is presented by combining adaptive modulation and automatic repeat request, and the corresponding system performance is investigated in detail. The fading gain switching thresholds subject to a target packet error rate (PER) and a fixed power constraint are derived. From these results, and using the generalized Marcum Q-function, calculation formulae for the average spectrum efficiency (SE) and PER of the system with CLD are derived. As a result, closed-form expressions for the average SE and PER are obtained. These expressions include some existing expressions for the Rayleigh channel as special cases. With these expressions, the system performance in the Rice fading channel is evaluated effectively. Numerical results verify the validity of the theoretical analysis. They show that the system performance in the Rice channel improves as the Rice factor increases and outperforms that in the Rayleigh channel.

  5. Putting Priors in Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernels described here outperform tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
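
    As a rough illustration of the kernel construction (not the AUTOBAYES-generated implementation used in the paper), the sketch below builds a Mercer kernel from the posterior component memberships of a fitted mixture model, so that two points are similar when the density model assigns them to the same components; scikit-learn's GaussianMixture stands in for the Bayesian mixture-density estimate, and the data are synthetic.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])

      # Fit a mixture density model to the (unlabeled) data.
      gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

      def mixture_density_kernel(A, B, model):
          """K(x, y) = sum_k P(k|x) P(k|y): points are similar when the mixture
          model assigns them to the same components.  The result is a Gram
          matrix of posterior vectors, hence symmetric positive semi-definite."""
          Pa = model.predict_proba(A)       # shape (n_a, n_components)
          Pb = model.predict_proba(B)
          return Pa @ Pb.T

      K = mixture_density_kernel(X, X, gmm)
      print(K.shape, K[0, 0], K[0, -1])     # within-cluster entries near 1, across clusters near 0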

  6. A unified framework of unsupervised subjective optimized bit allocation for multiple video object coding

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi

    2005-10-01

    MPEG-4 treats a scene as a composition of several objects or so-called video object planes (VOPs) that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects at different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties, and psycho-visual characteristics, so that the bit budget can be distributed properly among the video objects to improve the perceptual quality of the compressed video. This paper provides an automatic video object priority definition method based on an object-level visual attention model and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of a video object is obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with the traditional verification model bit allocation and with optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of the objects with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.
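
    The optimization framework itself is not reproduced here, but the role of the automatically derived priorities can be sketched in a few lines: a frame's bit budget is split across video objects in proportion to their attention-derived weights. The function, the floor value, and the numbers below are illustrative stand-ins, not the paper's rate-distortion optimal allocation.

      def allocate_bits(total_bits, priorities, min_bits=200):
          """Split a frame's bit budget across video objects in proportion to
          their (attention-derived) priority weights, with a small floor so no
          object is starved.  Purely illustrative, not the MPEG-4 VM algorithm."""
          n = len(priorities)
          remaining = total_bits - n * min_bits
          total_w = sum(priorities)
          return [min_bits + remaining * w / total_w for w in priorities]

      # Three VOPs: background, a moving person, an on-screen logo.
      budget = 24000                       # bits available for this frame (illustrative)
      weights = [0.2, 0.7, 0.1]            # visual-attention priorities (illustrative)
      print([round(b) for b in allocate_bits(budget, weights)])   # [4880, 16580, 2540]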

  7. Automated Testcase Generation for Numerical Support Functions in Embedded Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Schnieder, Stefan-Alexander

    2014-01-01

    We present a tool for the automatic generation of test stimuli for small numerical support functions, e.g., code for trigonometric functions, quaternions, filters, or table lookup. Our tool is based on KLEE to produce a set of test stimuli for full path coverage. We use a method of iterative deepening over abstractions to deal with floating-point values. During actual testing the stimuli exercise the code against a reference implementation. We illustrate our approach with results of experiments with low-level trigonometric functions, interpolation routines, and mathematical support functions from an open source UAS autopilot.
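
    The KLEE-based stimulus generation is not reproduced here, but the final step of exercising the unit under test against a reference implementation can be sketched as follows. The table-lookup sine, the hand-picked stimuli, and the tolerance are all illustrative; in the tool the stimuli come from symbolic execution with iterative deepening over abstractions.

      import math

      # Unit under test: a coarse table-lookup sine with linear interpolation,
      # typical of a numerical support function in embedded code.
      TABLE = [math.sin(2 * math.pi * i / 64) for i in range(65)]

      def table_sin(x):
          t = (x % (2 * math.pi)) / (2 * math.pi) * 64
          i = int(t)
          frac = t - i
          return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])

      # Test stimuli: in the tool these come from symbolic execution; here a few
      # hand-picked values, including boundary cases, stand in.
      stimuli = [0.0, 1e-9, math.pi / 2, math.pi, 3.9, 2 * math.pi - 1e-9]

      for x in stimuli:
          err = abs(table_sin(x) - math.sin(x))
          assert err < 5e-3, f"deviation {err:.2e} at x={x}"
      print("all stimuli within tolerance of the reference implementation")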

  8. A Model-Driven Architecture Approach for Modeling, Specifying and Deploying Policies in Autonomous and Autonomic Systems

    NASA Technical Reports Server (NTRS)

    Pena, Joaquin; Hinchey, Michael G.; Sterritt, Roy; Ruiz-Cortes, Antonio; Resinas, Manuel

    2006-01-01

    Autonomic Computing (AC), self-management based on high level guidance from humans, is increasingly gaining momentum as the way forward in designing reliable systems that hide complexity and conquer IT management costs. Effectively, AC may be viewed as Policy-Based Self-Management. The Model Driven Architecture (MDA) approach focuses on building models that can be transformed into code in an automatic manner. In this paper, we look at ways to implement Policy-Based Self-Management by means of models that can be converted to code using transformations that follow the MDA philosophy. We propose a set of UML-based models to specify autonomic and autonomous features along with the necessary procedures, based on modification and composition of models, to deploy a policy as an executing system.

  9. Automatic Multilevel Parallelization Using OpenMP

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this paper we describe the extension of the CAPO parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report first results for several benchmark codes and one full application that have been parallelized using our system.

  10. Visualization of semantic indexing similarity over MeSH.

    PubMed

    Du, Haixia; Yoo, Terry S

    2007-10-11

    We present an interactive visualization system for the evaluation of indexing results of the MEDLINE database over the Medical Subject Headings (MeSH) structure in a graphical radial-tree layout. It displays indexing similarity measurements with 2D color coding and a 3D height field permitting the evaluation of the automatic Medical Text Indexer (MTI), compared with human indexers.

  11. Electronic surveillance and using administrative data to identify healthcare associated infections.

    PubMed

    Gastmeier, Petra; Behnke, Michael

    2016-08-01

    Traditional surveillance of healthcare associated infections (HCAI) is time consuming and error-prone. We have analysed the literature of the past year to look at new developments in this field. It is divided into three parts: new algorithms for electronic surveillance, the use of administrative data for surveillance of HCAI, and the definition of new endpoints of surveillance, in accordance with an automatic surveillance approach. Most studies investigating electronic surveillance of HCAI have concentrated on bloodstream infection or surgical site infection. However, the lack of important parameters in hospital databases can lead to misleading results. The accuracy of administrative coding data was poor at identifying HCAI. New endpoints should be defined for automatic detection, with the most crucial step being to win clinicians' acceptance. Electronic surveillance with conventional endpoints is a successful method when hospital information systems implement key changes and enhancements; one requirement is access to hospital administration systems and clinical databases. Although administrative coding data are not the primary source of data for HCAI surveillance, they are important components of a hospital-wide programme of automated surveillance. The implementation of new endpoints for surveillance is an approach which needs to be discussed further.

  12. Design of monitoring system for mail-sorting based on the Profibus S7 series PLC

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Jia, S. H.; Wang, Y. H.; Liu, H.; Tang, G. C.

    2017-01-01

    With the rapid development of postal express services, the workload of mail sorting is increasing, but automatic mail-sorting technology is not yet mature. In view of this, the system uses a Siemens S7-300 PLC as the master station controller and Siemens S7-200/400 PLCs as slave station controllers; through the MCGS man-machine interface configuration software, PROFIBUS-DP communication, RFID technology, and a mechanical sorting manipulator, it monitors mail classification and sorting. Mail is distinguished for sorting by scanning the RFID electronic bar code (fixed code) attached to each item; the corresponding controller processes the acquired information and transmits it to the sorting manipulator over PROFIBUS-DP. The system can realize accurate and efficient mail sorting, which will promote the development of mail-sorting technology.

  13. The Contributions of Vocabulary and Letter Writing Automaticity to Word Reading and Spelling for Kindergartners

    ERIC Educational Resources Information Center

    Kim, Young-Suk; Al Otaiba, Stephanie; Puranik, Cynthia; Folsom, Jessica Sidler; Gruelich, Luana

    2014-01-01

    In the present study we examined the relation between alphabet knowledge fluency (letter names and sounds) and letter writing automaticity, and unique relations of letter writing automaticity and semantic knowledge (i.e., vocabulary) to word reading and spelling over and above code-related skills such as phonological awareness and alphabet…

  14. Neural Coding for Effective Rehabilitation

    PubMed Central

    2014-01-01

    Successful neurological rehabilitation depends on accurate diagnosis, effective treatment, and quantitative evaluation. Neural coding, a technology for interpretation of functional and structural information of the nervous system, has contributed to the advancements in neuroimaging, brain-machine interface (BMI), and design of training devices for rehabilitation purposes. In this review, we summarized the latest breakthroughs in neuroimaging from microscale to macroscale levels with potential diagnostic applications for rehabilitation. We also reviewed the achievements in electrocorticography (ECoG) coding with both animal models and human beings for BMI design, electromyography (EMG) interpretation for interaction with external robotic systems, and robot-assisted quantitative evaluation on the progress of rehabilitation programs. Future rehabilitation will be more home-based, automatic, and self-administered by patients. Further investigations and breakthroughs are mainly needed in improving the computational efficiency of neuroimaging and multichannel ECoG by selection of localized neuroinformatics, validating the effectiveness of BMI-guided rehabilitation programs, and simplifying the system operation of training devices. PMID:25258708

  15. QR Codes: Outlook for Food Science and Nutrition.

    PubMed

    Sanz-Valero, Javier; Álvarez Sabucedo, Luis M; Wanden-Berghe, Carmina; Santos Gago, Juan M

    2016-01-01

    QR codes open up the possibility of developing simple-to-use, cost-effective, and functional systems based on the optical recognition of inexpensive tags attached to physical objects. Combined with Web platforms, these systems can provide advanced services that are already broadly used in many contexts of everyday life. Because the approach relies on the automatic recognition of messages embedded in simple graphics by common devices such as mobile phones, QR codes are very convenient for the average user. Regrettably, their potential has not yet been fully exploited in the domains of food science and nutrition. This paper points out some applications that make the most of this technology for these domains in a straightforward manner. Given its characteristics, we address systems with low barriers to entry and high scalability of deployment, so their launch among professional and final users is quite simple. The paper also provides high-level indications for the evaluation of the technological framework required to implement the identified possibilities of use.

  16. A computer-controlled instrumentation system for third octave analysis

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.; Monteith, J. H.

    1978-01-01

    An instrumentation system is described which employs a minicomputer, a one-third octave band analyzer, and a time code/tape search unit for the automatic control and analysis of third-octave data. With this system the information necessary for data adjustment is formatted in such a way as to eliminate much of the operator interaction, thereby substantially reducing the probability of error. A description of a program for the calculation of effective perceived noise level from aircraft noise data is included as an example of how this system can be used.

  17. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision-based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.

  18. Recognizing Action Units for Facial Expression Analysis

    PubMed Central

    Tian, Ying-li; Kanade, Takeo; Cohn, Jeffrey F.

    2010-01-01

    Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams. PMID:25210210

  19. Subsumption principles underlying medical concept systems and their formal reconstruction.

    PubMed Central

    Bernauer, J.

    1994-01-01

    Conventional medical concept systems represent generic concept relations by hierarchical coding principles. Often, these coding principles constrain the concept system and reduce the potential for automatic derivation of subsumption. Formal reconstruction of medical concept systems is an approach that is based on the conceptual representation of meanings and that allows for the application of formal criteria for subsumption. Those criteria must reflect the intuitive principles of subordination which underlie conventional medical concept systems. In particular, these are: the subordinate concept results (1) from adding a specializing criterion to the superordinate concept, (2) from refining the primary category, or a criterion of the superordinate concept, by a concept that is less general, (3) from adding a partitive criterion to a criterion of the superordinate, (4) from refining a criterion by a concept that is less comprehensive, and finally (5) from coordinating the superordinate concept, or one of its criteria. This paper introduces a formalism called BERNWARD that aims at the formal reconstruction of medical concept systems according to these intuitive principles. The automatic derivation of hierarchical relations is supported primarily by explicit generic and explicit partitive hierarchies of concepts and, secondly, by two formal criteria that are based on the structure of concept descriptions and explicit hierarchical relations between their elements, namely formal subsumption and part-sensitive subsumption. Formal subsumption takes only generic relations into account; part-sensitive subsumption additionally regards partitive relations between criteria. This approach seems to be flexible enough to cope with unforeseeable effects of partitive criteria on subsumption. PMID:7949907

  20. FAMA: An automatic code for stellar parameter and abundance determination

    NASA Astrophysics Data System (ADS)

    Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella

    2013-10-01

    Context. The large amount of spectra obtained during the epoch of extensive spectroscopic surveys of Galactic stars requires the development of automatic procedures to derive their atmospheric parameters and individual element abundances. Aims: Starting from the widely used code MOOG by C. Sneden, we have developed a new procedure to determine atmospheric parameters and abundances in a fully automatic way. The code FAMA (Fast Automatic MOOG Analysis) is presented, describing its approach to deriving atmospheric stellar parameters and element abundances. The code, freely distributed, is written in Perl and can be used on different platforms. Methods: The aim of FAMA is to render the computation of the atmospheric parameters and abundances of a large number of stars, using measurements of equivalent widths (EWs), as automatic and as independent of any subjective approach as possible. It is based on the simultaneous search for three equilibria: excitation equilibrium, ionization balance, and the relationship between log n(Fe i) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and the errors due to the uncertainties in the stellar parameters. The convergence criteria are not fixed "a priori" but are based on the quality of the spectra. Results: In this paper we present tests performed on the solar spectrum EWs that assess the method's dependence on the initial parameters, and we analyze a sample of stars observed in Galactic open and globular clusters. The current version of FAMA is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/558/A38
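
    FAMA itself drives MOOG and is written in Perl; as a language-neutral illustration of one of its three equilibria, the sketch below nudges the effective temperature until the slope of Fe I abundance against excitation potential vanishes. The toy fake_moog function, the step size, and the tolerance are invented for the example and only mimic the qualitative behaviour of a real spectral analysis.

      import numpy as np

      def excitation_slope(excitation_potential, abundances):
          """Slope of A(Fe I) versus excitation potential; excitation equilibrium
          requires this slope to be ~0 for the adopted Teff."""
          slope, _intercept = np.polyfit(excitation_potential, abundances, 1)
          return slope

      def tune_teff(teff, abund_from_teff, chi, step=50.0, tol=1e-3, max_iter=100):
          """Nudge Teff against the sign of the slope until excitation equilibrium
          is reached.  'abund_from_teff' is a stand-in for a call to a spectral
          analysis code such as MOOG; a real tuner would refine the step size."""
          for _ in range(max_iter):
              slope = excitation_slope(chi, abund_from_teff(teff))
              if abs(slope) < tol:
                  break
              teff += step if slope > 0 else -step
          return teff

      # Toy model: abundances trend upward with chi when Teff is too low.
      chi = np.linspace(0.5, 5.0, 40)
      true_teff = 5750.0
      fake_moog = lambda teff: 7.50 + 0.0002 * (true_teff - teff) * chi
      print(round(tune_teff(5400.0, fake_moog, chi)))   # converges near 5750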

  1. An automatic gore panel mapping system

    NASA Technical Reports Server (NTRS)

    Shiver, John D.; Phelps, Norman N.

    1990-01-01

    The Automatic Gore Mapping System is being developed to reduce the time and labor costs associated with manufacturing the External Tank. The present chem-milling processes and procedures are discussed. Downloading of the system simulation has to be performed to verify that the simulation package will translate the simulation code into robot code. A simulation of this system also has to be programmed for a gantry robot instead of the articulating robot that is presently in the system. It was discovered using the simulation package that the articulating robot cannot reach all the points on some of the panels; therefore, when the system is ready for production, a gantry robot will be used. A hydrosensor system is also being developed to replace the point-to-point contact probe. The hydrosensor will allow the robot to perform a non-contact continuous scan of the panel. It will also provide a faster scan of the panel because it eliminates the in-and-out movement required by the present end effector. The system software is currently being modified so that the hydrosensor will work with the system. The hydrosensor consists of a Krautkramer-Branson transducer encased in a plexiglass nozzle. The water stream pumped through the nozzle is the couplant for the probe. Software is also being written so that the robot will have the ability to draw contour lines on the panel displaying the out-of-tolerance regions. Presently the contour lines can only be displayed on the computer screens. Research is also being performed on improving and automating the method of scribing the panels. Presently the panels are manually scribed with a sharp knife. The use of a low-power laser or water jet is being studied as a method of scribing the panels. The contour drawing pen will be replaced with a scribing tool and the robot will then move along the contour lines. With these developments the Automatic Gore Mapping System will provide a reduction in the time and labor costs associated with manufacturing the External Tank. The system also has the potential of inspecting other manufactured parts.

  2. A control system based on field programmable gate array for papermaking sewage treatment

    NASA Astrophysics Data System (ADS)

    Zhang, Zi Sheng; Xie, Chang; Qing Xiong, Yan; Liu, Zhi Qiang; Li, Qing

    2013-03-01

    A sewage treatment control system is designed to improve the efficiency of a papermaking wastewater treatment system. The automation control system is based on a Field Programmable Gate Array (FPGA), coded in the Very-High-Speed Integrated Circuit Hardware Description Language (VHDL), and compiled and simulated with Quartus. In order to ensure the stability of the data used in the FPGA, the data are collected through temperature sensors, a water level sensor, and an online pH measurement system. The automatic control system is more sensitive, and both the treatment efficiency and the processing power are increased. This work provides a new method for sewage treatment control.

  3. Synthesizing Certified Code

    NASA Technical Reports Server (NTRS)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
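
    The AUTOBAYES/E-SETHEO pipeline is not reproduced here, but the kind of annotation it generates can be illustrated with a small loop whose invariant is stated explicitly. In the sketch below the invariant is merely checked at run time with assertions; a certification system would instead emit it as a proof obligation and discharge it with a theorem prover. The example function is invented for illustration.

      def mean_online(xs):
          """Running mean with the kind of loop invariant a certification
          pipeline would attach: after processing i items, 'total' equals the
          sum of xs[:i].  Here the invariant is checked at run time; a code
          certifier would instead discharge it as a proof obligation."""
          total, i = 0.0, 0
          # invariant: total == sum(xs[:i]) and 0 <= i <= len(xs)
          while i < len(xs):
              assert abs(total - sum(xs[:i])) < 1e-9 and 0 <= i <= len(xs)
              total += xs[i]
              i += 1
          assert abs(total - sum(xs)) < 1e-9          # invariant at loop exit
          return total / len(xs) if xs else 0.0

      print(mean_online([1.0, 2.0, 3.0, 4.0]))        # 2.5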

  4. Methods for Ensuring High Quality of Coding of Cause of Death. The Mortality Register to Follow Southern Urals Populations Exposed to Radiation.

    PubMed

    Startsev, N; Dimov, P; Grosche, B; Tretyakov, F; Schüz, J; Akleyev, A

    2015-01-01

    To follow up populations exposed to several radiation accidents in the Southern Urals, a cause-of-death registry was established at the Urals Center capturing deaths in the Chelyabinsk, Kurgan and Sverdlovsk regions since 1950. When registering deaths over such a long time period, measures need to be in place to maintain coding quality and to reduce the impact of individual coders as well as of quality changes in death certificates. To ensure the uniformity of coding, a method for semi-automatic coding was developed, which is described here. Briefly, the method is based on a dynamic thesaurus, database-supported coding, and parallel coding by two different individuals. A comparison of the proposed method for organizing the coding process with the common coding procedure showed good agreement, with 70-90% agreement for the three-digit ICD-9 rubrics at the end of the coding process. The semi-automatic method ensures a sufficiently high quality of coding while at the same time providing an opportunity to reduce the labor intensity inherent in the creation of large-volume cause-of-death registries.
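
    As an illustration of the parallel-coding step (not the registry's actual software), the sketch below compares two coders' cause-of-death assignments at the three-digit ICD-9 rubric level and flags the records that need adjudication; the record IDs and codes are invented.

      def three_digit(icd9_code):
          """Reduce an ICD-9 code such as '410.9' to its three-digit rubric '410'."""
          return icd9_code.split(".")[0][:3]

      def compare_coders(coder_a, coder_b):
          """Return the rubric-level agreement rate and the record IDs coded
          differently by the two coders (candidates for adjudication)."""
          disagreements = [rec for rec in coder_a
                           if three_digit(coder_a[rec]) != three_digit(coder_b[rec])]
          agreement = 1 - len(disagreements) / len(coder_a)
          return agreement, disagreements

      # Illustrative double-coded batch (record id -> cause-of-death code).
      coder_a = {"r1": "410.9", "r2": "162.3", "r3": "436"}
      coder_b = {"r1": "410.1", "r2": "162.9", "r3": "431"}
      rate, todo = compare_coders(coder_a, coder_b)
      print(f"rubric agreement {rate:.0%}, to adjudicate: {todo}")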

  5. Social priming of hemispatial neglect affects spatial coding: Evidence from the Simon task.

    PubMed

    Arend, Isabel; Aisenberg, Daniela; Henik, Avishai

    2016-10-01

    In the Simon effect (SE), choice reactions are fast if the location of the stimulus and the response correspond when stimulus location is task-irrelevant; therefore, the SE reflects the automatic processing of space. Priming of social concepts was found to affect automatic processing in the Stroop effect. We investigated whether spatial coding measured by the SE can be affected by the observer's mental state. We used two social priming manipulations of impairments: one involving spatial processing - hemispatial neglect (HN) and another involving color perception - achromatopsia (ACHM). In two experiments the SE was reduced in the "neglected" visual field (VF) under the HN, but not under the ACHM manipulation. Our results show that spatial coding is sensitive to spatial representations that are not derived from task-relevant parameters, but from the observer's cognitive state. These findings dispute stimulus-response interference models grounded on the idea of the automaticity of spatial processing. Copyright © 2016. Published by Elsevier Inc.

  6. Android platform based smartphones for a logistical remote association repair framework.

    PubMed

    Lien, Shao-Fan; Wang, Chun-Chieh; Su, Juhng-Perng; Chen, Hong-Ming; Wu, Chein-Hsing

    2014-06-25

    The maintenance of large-scale systems is an important issue for logistics support planning. In this paper, we developed a Logistical Remote Association Repair Framework (LRARF) to aid repairmen in keeping the system available. LRARF includes four subsystems: smart mobile phones, a Database Management System (DBMS), a Maintenance Support Center (MSC) and wireless networks. The repairman uses smart mobile phones to capture QR-codes and the images of faulty circuit boards. The captured QR-codes and images are transmitted to the DBMS so the invalid modules can be recognized via the proposed algorithm. In this paper, the Linear Projective Transform (LPT) is employed for fast QR-code calibration. Moreover, the ANFIS-based data mining system is used for module identification and searching automatically for the maintenance manual corresponding to the invalid modules. The inputs of the ANFIS-based data mining system are the QR-codes and image features; the output is the module ID. DBMS also transmits the maintenance manual back to the maintenance staff. If modules are not recognizable, the repairmen and center engineers can obtain the relevant information about the invalid modules through live video. The experimental results validate the applicability of the Android-based platform in the recognition of invalid modules. In addition, the live video can also be recorded synchronously on the MSC for later use.

  7. Simulation and visualization of fundamental optics phenomenon by LabVIEW

    NASA Astrophysics Data System (ADS)

    Lyu, Bohan

    2017-08-01

    Most instructors teach complex phenomena using equations and static illustrations, without interactive multimedia, and students usually memorize the phenomena by taking notes. However, notes and complex formulas alone cannot help users visualize the behavior of a photonic system. LabVIEW is a good tool for automatic measurement; moreover, the simplicity of coding in LabVIEW makes it suitable not only for automatic measurement but also for the simulation and visualization of fundamental optics phenomena. In this paper, five simple optics phenomena are discussed and simulated with LabVIEW: Snell's law, Hermite-Gaussian beam transverse modes, square and circular aperture diffraction, polarized waves and the Poincare sphere, and finally the Fabry-Perot etalon in the spectral domain.

  8. Cryptanalysis of the Sodark Family of Cipher Algorithms

    DTIC Science & Technology

    2017-09-01

    A software project for building three-bit LUT circuit representations of S-boxes is available as a GitHub repository [40]. It contains several improvements... The... second- and third-generation automatic link establishment (ALE) systems for high frequency radios. Radios utilizing ALE technology are in use by a

  9. An Ensemble Approach to Building Mercer Kernels with Prior Information

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using pre-defined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.

  10. Galaxy morphology - An unsupervised machine learning approach

    NASA Astrophysics Data System (ADS)

    Schutter, A.; Shamir, L.

    2015-09-01

    Structural properties provide valuable information about the formation and evolution of galaxies, and are important for understanding the past, present, and future universe. Here we use unsupervised machine learning methodology to analyze a network of similarities between galaxy morphological types, and automatically deduce a morphological sequence of galaxies. Application of the method to the EFIGI catalog shows that the morphological scheme produced by the algorithm is largely in agreement with the De Vaucouleurs system, demonstrating the ability of computer vision and machine learning methods to automatically profile galaxy morphological sequences. The unsupervised analysis method is based on comprehensive computer vision techniques that compute the visual similarities between the different morphological types. Rather than relying on human cognition, the proposed system deduces the similarities between sets of galaxy images in an automatic manner, and is therefore not limited by the number of galaxies being analyzed. The source code of the method is publicly available, and the protocol of the experiment is included in the paper so that the experiment can be replicated, and the method can be used to analyze user-defined datasets of galaxy images.

  11. The Development of Bimodal Bilingualism: Implications for Linguistic Theory.

    PubMed

    Lillo-Martin, Diane; de Quadros, Ronice Müller; Pichler, Deborah Chen

    2016-01-01

    A wide range of linguistic phenomena contribute to our understanding of the architecture of the human linguistic system. In this paper we present a proposal dubbed Language Synthesis to capture bilingual phenomena including code-switching and 'transfer' as automatic consequences of the addition of a second language, using basic concepts of Minimalism and Distributed Morphology. Bimodal bilinguals, who use a sign language and a spoken language, provide a new type of evidence regarding possible bilingual phenomena, namely code-blending, the simultaneous production of (aspects of) a message in both speech and sign. We argue that code-blending also follows naturally once a second articulatory interface is added to the model. Several different types of code-blending are discussed in connection to the predictions of the Synthesis model. Our primary data come from children developing as bimodal bilinguals, but our proposal is intended to capture a wide range of bilingual effects across any language pair.

  12. Fuzzy support vector machines for adaptive Morse code recognition.

    PubMed

    Yang, Cheng-Hong; Jin, Li-Cheng; Chuang, Li-Yeh

    2006-11-01

    Morse code is now being harnessed for use in rehabilitation applications of augmentative-alternative communication and assistive technology, facilitating mobility, environmental control and adapted worksite access. In this paper, Morse code is selected as a communication adaptive device for persons who suffer from muscle atrophy, cerebral palsy or other severe handicaps. A stable typing rate is strictly required for Morse code to be effective as a communication tool. Therefore, an adaptive automatic recognition method with a high recognition rate is needed. The proposed system uses both fuzzy support vector machines and the variable-degree variable-step-size least-mean-square algorithm to achieve these objectives. We apply fuzzy memberships to each point, and provide different contributions to the decision learning function for support vector machines. Statistical analyses demonstrated that the proposed method elicited a higher recognition rate than other algorithms in the literature.
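
    The paper's fuzzy support vector machine and variable-degree variable-step-size LMS adaptation are not reproduced here; the sketch below only approximates the central idea by passing fuzzy memberships as per-sample weights to a standard scikit-learn SVM, down-weighting key-press durations near the ambiguous dot/dash boundary. All durations, thresholds, and membership values are illustrative.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)

      # Toy Morse elements: feature = key-press duration in seconds,
      # label 0 = dot, 1 = dash.  Durations drift, so consistent samples get
      # higher fuzzy membership than borderline ones.
      durations = np.concatenate([rng.normal(0.10, 0.02, 40),     # dots
                                  rng.normal(0.30, 0.05, 40)])    # dashes
      labels = np.concatenate([np.zeros(40), np.ones(40)])

      # Fuzzy membership: confidence shrinks near the dot/dash boundary (~0.2 s).
      membership = np.clip(np.abs(durations - 0.20) / 0.10, 0.1, 1.0)

      # Approximating a fuzzy SVM by passing memberships as per-sample weights;
      # the original method folds them into the SVM objective directly.
      clf = SVC(kernel="rbf", C=10.0)
      clf.fit(durations.reshape(-1, 1), labels, sample_weight=membership)

      print(clf.predict(np.array([[0.12], [0.27]])))   # expected: [0., 1.]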

  13. The Development of Bimodal Bilingualism: Implications for Linguistic Theory

    PubMed Central

    Lillo-Martin, Diane; de Quadros, Ronice Müller; Pichler, Deborah Chen

    2017-01-01

    A wide range of linguistic phenomena contribute to our understanding of the architecture of the human linguistic system. In this paper we present a proposal dubbed Language Synthesis to capture bilingual phenomena including code-switching and ‘transfer’ as automatic consequences of the addition of a second language, using basic concepts of Minimalism and Distributed Morphology. Bimodal bilinguals, who use a sign language and a spoken language, provide a new type of evidence regarding possible bilingual phenomena, namely code-blending, the simultaneous production of (aspects of) a message in both speech and sign. We argue that code-blending also follows naturally once a second articulatory interface is added to the model. Several different types of code-blending are discussed in connection to the predictions of the Synthesis model. Our primary data come from children developing as bimodal bilinguals, but our proposal is intended to capture a wide range of bilingual effects across any language pair. PMID:28603576

  14. Navy’s Advanced Aircraft Armament System Program Concept Objectives

    DTIC Science & Technology

    1983-10-01

    T. M. Leese and J. F. Haney, Naval Weapons Center, Code 31403, China... (The remainder of this record is OCR residue from a briefing figure; only the caption "Figure 1. Carrier aircraft" is recoverable.)

  15. Linear-Force Actuators for Use on Shipboard Weapons and Cargo Elevators.

    DTIC Science & Technology

    1984-01-09

    ...lock units: an electro-mechanical brake is furnished so that when the unit stops at any position, its brake locks automatically, preventing any drift... Naval Sea Systems Command (Code 56W4), Washington, DC; report date January 9, 1984. Keywords: hydraulic systems; weapons elevators. Abstract: Reports of hydraulic problems in

  16. Data engineering systems: Computerized modeling and data bank capabilities for engineering analysis

    NASA Technical Reports Server (NTRS)

    Kopp, H.; Trettau, R.; Zolotar, B.

    1984-01-01

    The Data Engineering System (DES) is a computer-based system that organizes technical data and provides automated mechanisms for storage, retrieval, and engineering analysis. The DES combines the benefits of a structured data base system with automated links to large-scale analysis codes. While the DES provides the user with many of the capabilities of a computer-aided design (CAD) system, the systems are actually quite different in several respects. A typical CAD system emphasizes interactive graphics capabilities and organizes data in a manner that optimizes these graphics. On the other hand, the DES is a computer-aided engineering system intended for the engineer who must operationally understand an existing or planned design or who desires to carry out additional technical analysis based on a particular design. The DES emphasizes data retrieval in a form that not only provides the engineer access to search and display the data but also links the data automatically with the computer analysis codes.

  17. Traceability Through Automatic Program Generation

    NASA Technical Reports Server (NTRS)

    Richardson, Julian; Green, Jeff

    2003-01-01

    Program synthesis is a technique for automatically deriving programs from specifications of their behavior. One of the arguments made in favour of program synthesis is that it allows one to trace from the specification to the program. One way in which traceability information can be derived is to augment the program synthesis system so that manipulations and calculations it carries out during the synthesis process are annotated with information on what the manipulations and calculations were and why they were made. This information is then accumulated throughout the synthesis process, at the end of which, every artifact produced by the synthesis is annotated with a complete history relating it to every other artifact (including the source specification) which influenced its construction. This approach requires modification of the entire synthesis system - which is labor-intensive and hard to do without influencing its behavior. In this paper, we introduce a novel, lightweight technique for deriving traceability from a program specification to the corresponding synthesized code. Once a program has been successfully synthesized from a specification, small changes are systematically made to the specification and the effects on the synthesized program observed. We have partially automated the technique and applied it in an experiment to one of our program synthesis systems, AUTOFILTER, and to the GNU C compiler, GCC. The results are promising: 1. Manual inspection of the results indicates that most of the connections derived from the source (a specification in the case of AUTOFILTER, C source code in the case of GCC) to its generated target (C source code in the case of AUTOFILTER, assembly language code in the case of GCC) are correct. 2. Around half of the lines in the target can be traced to at least one line of the source. 3. Small changes in the source often induce only small changes in the target.

  18. Use Them ... or Lose Them? The Case for and against Using QR Codes

    ERIC Educational Resources Information Center

    Cunningham, Chuck; Dull, Cassie

    2011-01-01

    A quick-response (QR) code is a two-dimensional, black-and-white square barcode and links directly to a URL of one's choice. When the code is scanned with a smartphone, it will automatically redirect the user to the designated URL. QR codes are popping up everywhere--billboards, magazines, posters, shop windows, TVs, computer screens, and more.…

  19. Deductive Evaluation: Formal Code Analysis With Low User Burden

    NASA Technical Reports Server (NTRS)

    Di Vito, Ben L.

    2016-01-01

    We describe a framework for symbolically evaluating iterative C code using a deductive approach that automatically discovers and proves program properties. Although verification is not performed, the method can infer detailed program behavior. Software engineering work flows could be enhanced by this type of analysis. Floyd-Hoare verification principles are applied to synthesize loop invariants, using a library of iteration-specific deductive knowledge. When needed, theorem proving is interleaved with evaluation and performed on the fly. Evaluation results take the form of inferred expressions and type constraints for values of program variables. An implementation using PVS (Prototype Verification System) is presented along with results for sample C functions.

  20. Robust automatic control system of vessel descent-rise device for plant with distributed parameters “cable – towed underwater vehicle”

    NASA Astrophysics Data System (ADS)

    Chupina, K. V.; Kataev, E. V.; Khannanov, A. M.; Korshunov, V. N.; Sennikov, I. A.

    2018-05-01

    The paper is devoted to the problem of synthesizing a robust control system for a distributed-parameter plant. The vessel descent-rise device has a heave compensation function for stabilizing the towed underwater vehicle at a set depth. The sea state code and the parameters of the underwater vehicle and cable vary during underwater operations, and the vessel heave is a stochastic process; this means that both the plant and the external disturbances are uncertain. It is therefore necessary to use robust theory for the synthesis of the automatic control system, but without the traditional optimization methods, because the cable has distributed parameters. The proposed technique makes it possible to design an effective control system for stabilizing the immersion depth of the towed underwater vehicle for various degrees of sea roughness and to ensure its robustness to deviations of the vehicle's parameters and the cable's length.

  1. Automatically Preparing Safe SQL Queries

    NASA Astrophysics Data System (ADS)

    Bisht, Prithvi; Sistla, A. Prasad; Venkatakrishnan, V. N.

    We present the first sound program source transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications.
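
    The source-to-source transformation itself operates on legacy web application code; the before/after effect it aims for can be illustrated in Python's sqlite3 module, where a string-concatenated query is replaced by a parameterized (prepared-style) statement so that attacker input is treated as data. The table, payload, and helper names below are illustrative.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
      conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

      def lookup_unsafe(name):
          # BEFORE: user input concatenated into the query -- injectable.
          return conn.execute(
              "SELECT role FROM users WHERE name = '" + name + "'").fetchall()

      def lookup_safe(name):
          # AFTER: placeholder plus bound parameter; the driver keeps the input
          # as data, which is the effect the automatic transformation targets.
          return conn.execute(
              "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

      payload = "x' OR '1'='1"
      print(lookup_unsafe(payload))   # returns every row: injection succeeds
      print(lookup_safe(payload))     # returns []: treated as a literal name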

  2. Automatic identification of IASLC-defined mediastinal lymph node stations on CT scans using multi-atlas organ segmentation

    NASA Astrophysics Data System (ADS)

    Hoffman, Joanne; Liu, Jiamin; Turkbey, Evrim; Kim, Lauren; Summers, Ronald M.

    2015-03-01

    Station-labeling of mediastinal lymph nodes is typically performed to identify the location of enlarged nodes for cancer staging. Stations are usually assigned in clinical radiology practice manually by qualitative visual assessment on CT scans, which is time consuming and highly variable. In this paper, we developed a method that automatically recognizes the lymph node stations in thoracic CT scans based on the anatomical organs in the mediastinum. First, the trachea, lungs, and spine are automatically segmented to locate the mediastinum region. Then, eight more anatomical organs are simultaneously identified by multi-atlas segmentation. Finally, with the segmentation of those anatomical organs, we convert the text definitions of the International Association for the Study of Lung Cancer (IASLC) lymph node map into patient-specific color-coded CT image maps. Thus, a lymph node station is automatically assigned to each lymph node. We applied this system to CT scans of 86 patients with 336 mediastinal lymph nodes measuring 10 mm or greater. 84.8% of the mediastinal lymph nodes were correctly mapped to their stations.

  3. Automatic indexing in a drug information portal.

    PubMed

    Sakji, Saoussen; Letord, Catherine; Dahamna, Badisse; Kergourlay, Ivan; Pereira, Suzanne; Joubert, Michel; Darmoni, Stéfan

    2009-01-01

    The objective of this work is to create a bilingual (French/English) Drug Information Portal (DIP) in a multi-terminological context, and to exploit automatic ATC indexing to provide more pertinent information about the substances, the organs or systems on which drugs act, and their therapeutic and chemical characteristics. The development of the DIP was based on the CISMeF portal, which catalogues and indexes the most important quality-controlled sources of institutional health information in French. The DIP adds specific functionalities and uses specific drug terminologies, such as the ATC classification, which is used to automatically index the DIP resources. The DIP is the result of a collaboration between the CISMeF team and the VIDAL Company, which specializes in drug information, and is designed to facilitate user information retrieval. The ATC automatic indexing provided relevant results in 76% of cases. In a multi-terminological context, and within the drug field, indexing drugs with the appropriate codes and/or terms proved to be very important for appropriate information storage and retrieval. The main challenge in the coming year is to increase the accuracy of the approach.

  4. Effectiveness of Global Features for Automatic Medical Image Classification and Retrieval – the experiences of OHSU at ImageCLEFmed

    PubMed Central

    Kalpathy-Cramer, Jayashree; Hersh, William

    2008-01-01

    In 2006 and 2007, Oregon Health & Science University (OHSU) participated in the automatic image annotation task for medical images at ImageCLEF, an annual international benchmarking event that is part of the Cross Language Evaluation Forum (CLEF). The goal of the automatic annotation task was to classify 1000 test images based on the Image Retrieval in Medical Applications (IRMA) code, given a set of 10,000 training images. There were 116 distinct classes in 2006 and 2007. We evaluated the efficacy of a variety of primarily global features for this classification task. These included features based on histograms, gray level correlation matrices and the gist technique. A multitude of classifiers including k-nearest neighbors, two-level neural networks, support vector machines, and maximum likelihood classifiers were evaluated. Our official error rate for the 1000 test images was 26% in 2006 using the flat classification structure. The error count in 2007 was 67.8 using the hierarchical classification error computation based on the IRMA code. Confusion matrices as well as clustering experiments were used to identify visually similar classes. The use of the IRMA code did not help us in the classification task, as the semantic hierarchy of the IRMA classes did not correspond well with the hierarchy based on clustering of image features that we used. Our most frequent misclassification errors were along the view axis. Subsequent experiments based on a two-stage classification system decreased our error rate to 19.8% for the 2006 dataset and our error count to 55.4 for the 2007 data. PMID:19884953

  5. Improving the quality of self-monitoring blood glucose measurement: a study in reducing calibration errors.

    PubMed

    Baum, John M; Monhaut, Nanette M; Parker, Donald R; Price, Christopher P

    2006-06-01

    Two independent studies reported that 16% of people who self-monitor blood glucose used incorrectly coded meters. The degree of analytical error, however, was not characterized. Our study objectives were to demonstrate that miscoding can cause analytical errors and to characterize the potential amount of bias that can occur. The impact of calibration error with three blood glucose monitoring systems (BGMSs) used for self-monitoring, one of which has an autocoding feature, is reported. Fresh capillary fingerstick blood from 50 subjects, 18 men and 32 women ranging in age from 23 to 82 years, was used to measure glucose with the three BGMSs. Two BGMSs required manual coding and were purposely miscoded using numbers different from the one recommended for the reagent lot used. Two properly coded meters of each BGMS were included to assess within-system variability. Different reagent lots were used to challenge a third system that had autocoding capability and could not be miscoded. Some within-system comparisons showed deviations of greater than +/-30% when results obtained with miscoded meters were compared with data obtained with meters programmed using the correct code number. Similarly erroneous results were found when the miscoded meter results were compared with those obtained with a glucose analyzer. For some miscoded meter and test strip combinations, error grid analysis showed that 90% of results fell into zones indicating altered clinical action. Such inaccuracies were not found with the BGMS having the autocoding feature. When certain meter code number settings of two BGMSs were used in conjunction with test strips having code numbers that did not match, statistically and clinically inaccurate results were obtained. Coding errors resulted in analytical errors of greater than +/-30% (-31.6 to +60.9%). These results confirm the value of a BGMS with an automatic coding feature.
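
    The study's measurements are not reproduced here, but the bias calculation it relies on is simple: the percent deviation of a meter reading from the laboratory reference, judged against the +/-30% band reported above. The paired readings below are invented solely to show the computation.

      def percent_bias(meter_value, reference_value):
          """Percent deviation of a meter reading from the laboratory reference."""
          return 100.0 * (meter_value - reference_value) / reference_value

      # Illustrative paired readings (mg/dL): reference analyzer vs. a meter
      # whose code number does not match the test-strip lot.
      pairs = [(100, 132), (180, 120), (250, 240)]   # (reference, miscoded meter)
      for ref, meter in pairs:
          bias = percent_bias(meter, ref)
          flag = "outside +/-30% band" if abs(bias) > 30 else "within +/-30%"
          print(f"ref {ref:3d} -> meter {meter:3d}: {bias:+.1f}% ({flag})")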

  6. An improved lateral control wheel steering law for the Transport Systems Research Vehicle (TSRV)

    NASA Technical Reports Server (NTRS)

    Ragsdale, W. A.

    1992-01-01

    A lateral control wheel steering law with improved performance was developed for the Transport Systems Research Vehicle (TSRV) simulation and used in the Microwave Landing System research project. The control law converted rotational hand controller inputs into roll rate commands and manipulated the ailerons, spoilers, and rudder to achieve the desired roll rates. The system included automatic turn coordination, track angle hold, and autopilot/autoland modes. The resulting control law produced faster roll rates (15 degrees/sec), quicker response to command reversals, and safer bank angle limits, while using more concise program code.

  7. Automatic frame-centered object representation and integration revealed by iconic memory, visual priming, and backward masking.

    PubMed

    Lin, Zhicheng; He, Sheng

    2012-10-25

    Object identities ("what") and their spatial locations ("where") are processed in distinct pathways in the visual system, raising the question of how the what and where information is integrated. Because of object motions and eye movements, the retina-based representations are unstable, necessitating nonretinotopic representation and integration. A potential mechanism is to code and update objects according to their reference frames (i.e., frame-centered representation and integration). To isolate frame-centered processes, in a frame-to-frame apparent motion configuration, we (a) presented two preceding or trailing objects on the same frame, equidistant from the target on the other frame, to control for object-based (frame-based) and space-based effects, and (b) manipulated the target's relative location within its frame to probe the frame-centered effect. We show that iconic memory, visual priming, and backward masking depend on objects' relative frame locations, orthogonal to the retinotopic coordinate. These findings not only reveal that iconic memory, visual priming, and backward masking can be nonretinotopic but also demonstrate that these processes are automatically constrained by contextual frames through a frame-centered mechanism. Thus, object representation is robustly and automatically coupled to its reference frame and continuously updated through a frame-centered, location-specific mechanism. These findings lead to an object cabinet framework, in which objects ("files") within the reference frame ("cabinet") are orderly coded relative to the frame.

  8. SOL - SIZING AND OPTIMIZATION LANGUAGE COMPILER

    NASA Technical Reports Server (NTRS)

    Scotti, S. J.

    1994-01-01

    SOL is a computer language which is geared to solving design problems. SOL includes the mathematical modeling and logical capabilities of a computer language like FORTRAN but also includes the additional power of non-linear mathematical programming methods (i.e. numerical optimization) at the language level (as opposed to the subroutine level). The language-level use of optimization has several advantages over the traditional, subroutine-calling method of using an optimizer: first, the optimization problem is described in a concise and clear manner which closely parallels the mathematical description of optimization; second, a seamless interface is automatically established between the optimizer subroutines and the mathematical model of the system being optimized; third, the results of an optimization (objective, design variables, constraints, termination criteria, and some or all of the optimization history) are output in a form directly related to the optimization description; and finally, automatic error checking and recovery from an ill-defined system model or optimization description is facilitated by the language-level specification of the optimization problem. Thus, SOL enables rapid generation of models and solutions for optimum design problems with greater confidence that the problem is posed correctly. The SOL compiler takes SOL-language statements and generates the equivalent FORTRAN code and system calls. Because of this approach, the modeling capabilities of SOL are extended by the ability to incorporate existing FORTRAN code into a SOL program. In addition, SOL has a powerful MACRO capability. The MACRO capability of the SOL compiler effectively gives the user the ability to extend the SOL language and can be used to develop easy-to-use shorthand methods of generating complex models and solution strategies. The SOL compiler provides syntactic and semantic error-checking, error recovery, and detailed reports containing cross-references to show where each variable was used. The listings summarize all optimizations, listing the objective functions, design variables, and constraints. The compiler offers error-checking specific to optimization problems, so that simple mistakes will not cost hours of debugging time. The optimization engine used by and included with the SOL compiler is a version of Vanderplatt's ADS system (Version 1.1) modified specifically to work with the SOL compiler. SOL allows the use of the over 100 ADS optimization choices such as Sequential Quadratic Programming, Modified Feasible Directions, interior and exterior penalty function and variable metric methods. Default choices of the many control parameters of ADS are made for the user, however, the user can override any of the ADS control parameters desired for each individual optimization. The SOL language and compiler were developed with an advanced compiler-generation system to ensure correctness and simplify program maintenance. Thus, SOL's syntax was defined precisely by a LALR(1) grammar and the SOL compiler's parser was generated automatically from the LALR(1) grammar with a parser-generator. Hence unlike ad hoc, manually coded interfaces, the SOL compiler's lexical analysis insures that the SOL compiler recognizes all legal SOL programs, can recover from and correct for many errors and report the location of errors to the user. This version of the SOL compiler has been implemented on VAX/VMS computer systems and requires 204 KB of virtual memory to execute. 
Since the SOL compiler produces FORTRAN code, it requires the VAX FORTRAN compiler to produce an executable program. The SOL compiler consists of 13,000 lines of Pascal code. It was developed in 1986 and last updated in 1988. The ADS and other utility subroutines amount to 14,000 lines of FORTRAN code and were also updated in 1988.
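
    To illustrate the contrast the abstract draws between language-level and subroutine-level optimization, the sketch below shows the traditional subroutine-calling pattern that SOL's language-level constructs are meant to hide. It uses Python and scipy's SLSQP solver (a sequential quadratic programming method) purely as a stand-in for the ADS optimizer; the toy objective, constraint, and bounds are hypothetical and not taken from SOL or ADS.

        # Illustrative only: the subroutine-level pattern that a language-level
        # OPTIMIZE construct would replace. scipy stands in for the ADS optimizer.
        from scipy.optimize import minimize

        def objective(x):
            # toy design objective: minimize a "weight" w = x0**2 + x1**2
            return x[0]**2 + x[1]**2

        def stress_constraint(x):
            # inequality constraint g(x) >= 0, i.e. x0 + x1 >= 1
            return x[0] + x[1] - 1.0

        result = minimize(
            objective,
            x0=[1.0, 1.0],                       # initial design variables
            method="SLSQP",                      # sequential quadratic programming
            constraints=[{"type": "ineq", "fun": stress_constraint}],
            bounds=[(0.0, 10.0), (0.0, 10.0)],   # design-variable side constraints
        )
        print(result.x, result.fun, result.message)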

  9. Introduction of the ASGARD code (Automated Selection and Grouping of events in AIA Regional Data)

    NASA Astrophysics Data System (ADS)

    Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv K.; Fayock, Brian

    2017-08-01

    We have developed the ASGARD code to automatically detect and group brightenings ("events") in AIA data. The event selection and grouping can be optimized to the respective dataset with a multitude of control parameters. The code was initially written for IRIS data, but has since been optimized for AIA. However, the underlying algorithm is not limited to either and could be used for other data as well. Results from datasets in various AIA channels show that brightenings are reliably detected and that coherent coronal structures can be isolated by using the obtained information about the start, peak, and end times of events. We are presently working on a follow-up algorithm to automatically determine the heating and cooling timescales of coronal structures. This will be done by correlating the information from different AIA channels with different temperature responses. We will present the code and preliminary results.
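
    As a rough illustration of the kind of event selection described above (and not the actual ASGARD algorithm, which uses many control parameters and multi-channel grouping), the following Python sketch flags brightenings in a single pixel's light curve by thresholding above a median background and records the start, peak, and end indices of each excursion. The threshold rule and the injected test event are assumptions made for the example.

        import numpy as np

        def detect_events(light_curve, n_sigma=3.0):
            """Flag 'events' where intensity exceeds a simple background threshold
            and return (start, peak, end) indices for each contiguous excursion."""
            background = np.median(light_curve)
            noise = np.std(light_curve)
            above = light_curve > background + n_sigma * noise
            events = []
            i = 0
            while i < len(above):
                if above[i]:
                    start = i
                    while i < len(above) and above[i]:
                        i += 1
                    end = i - 1
                    peak = start + int(np.argmax(light_curve[start:end + 1]))
                    events.append((start, peak, end))
                else:
                    i += 1
            return events

        # toy pixel light curve with one injected brightening
        lc = np.random.normal(100.0, 2.0, 300)
        lc[120:140] += np.linspace(0.0, 30.0, 20)
        print(detect_events(lc))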

  10. The Definition and Implementation of a Computer Programming Language Based on Constraints.

    DTIC Science & Technology

    1980-08-01

    though not quite reached, is a complete programming system which will implicitly support the constraint paradigm to the same extent that LISP, say... and detecting and resolving conflicts, just as LISP provides certain services such as automatic storage management, which records given data in a... defined - it permits the statement of equalities and some simple arithmetic relationships. An implementation representation is chosen, and LISP code for a

  11. Automatic Multilevel Parallelization Using OpenMP

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this paper we describe the extension of the CAPO (CAPtools (Computer Aided Parallelization Toolkit) OpenMP) parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report some results for several benchmark codes and one full application that have been parallelized using our system.

  12. ExcelAutomat: a tool for systematic processing of files as applied to quantum chemical calculations

    NASA Astrophysics Data System (ADS)

    Laloo, Jalal Z. A.; Laloo, Nassirah; Rhyman, Lydia; Ramasami, Ponnadurai

    2017-07-01

    The processing of the input and output files of quantum chemical calculations often necessitates a spreadsheet as a key component of the workflow. Spreadsheet packages with a built-in programming language editor can automate the steps involved and thus provide a direct link between processing files and the spreadsheet. This helps to reduce user interventions as well as the need to switch between different programs to carry out each step. The ExcelAutomat tool is the implementation of this method in Microsoft Excel (MS Excel) using the default Visual Basic for Applications (VBA) programming language. The code in ExcelAutomat was adapted to work with the platform-independent open-source LibreOffice Calc, which also supports VBA. ExcelAutomat provides an interface through the spreadsheet to automate repetitive tasks such as merging input files, splitting, parsing and compiling data from output files, and generation of unique filenames. Selected extracted parameters can be retrieved as variables which can be included in custom codes for a tailored approach. ExcelAutomat works with Gaussian files and is adapted for use with other computational packages including the non-commercial GAMESS. ExcelAutomat is available as a downloadable MS Excel workbook or as a LibreOffice workbook.

  13. ExcelAutomat: a tool for systematic processing of files as applied to quantum chemical calculations.

    PubMed

    Laloo, Jalal Z A; Laloo, Nassirah; Rhyman, Lydia; Ramasami, Ponnadurai

    2017-07-01

    The processing of the input and output files of quantum chemical calculations often necessitates a spreadsheet as a key component of the workflow. Spreadsheet packages with a built-in programming language editor can automate the steps involved and thus provide a direct link between processing files and the spreadsheet. This helps to reduce user interventions as well as the need to switch between different programs to carry out each step. The ExcelAutomat tool is the implementation of this method in Microsoft Excel (MS Excel) using the default Visual Basic for Applications (VBA) programming language. The code in ExcelAutomat was adapted to work with the platform-independent open-source LibreOffice Calc, which also supports VBA. ExcelAutomat provides an interface through the spreadsheet to automate repetitive tasks such as merging input files, splitting, parsing and compiling data from output files, and generation of unique filenames. Selected extracted parameters can be retrieved as variables which can be included in custom codes for a tailored approach. ExcelAutomat works with Gaussian files and is adapted for use with other computational packages including the non-commercial GAMESS. ExcelAutomat is available as a downloadable MS Excel workbook or as a LibreOffice workbook.

  14. Evolution of the ATLAS Nightly Build System

    NASA Astrophysics Data System (ADS)

    Undrus, A.

    2012-12-01

    The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. For over 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing software, and migration to new platforms and compilers for ATLAS code that currently contains 2200 packages with 4 million C++ and 1.4 million python scripting lines written by about 1000 developers. Recent development was focused on the integration of ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides the fully automated framework for the release builds, testing, and creation of distribution kits. The ATN testing framework of the Nightly System runs unit and integration tests in parallel suites, fully utilizing the resources of multi-core machines, and provides the first results even before compilations complete. The NICOS error detection system is based on several techniques and classifies the compilation and test errors according to their severity. It is periodically tuned to place greater emphasis on certain software defects by highlighting the problems on NICOS web pages and sending automatic e-mail notifications to responsible developers. These and other recent developments will be presented and future plans will be described.

  15. Interactive specification acquisition via scenarios: A proposal

    NASA Technical Reports Server (NTRS)

    Hall, Robert J.

    1992-01-01

    Some reactive systems are most naturally specified by giving large collections of behavior scenarios. These collections not only specify the behavior of the system, but also provide good test suites for validating the implemented system. Due to the complexity of the systems and the number of scenarios, however, it appears that automated assistance is necessary to make this software development process workable. Interactive Specification Acquisition Tool (ISAT) is a proposed interactive system for supporting the acquisition and maintenance of a formal system specification from scenarios, as well as automatic synthesis of control code and automated test generation. This paper discusses the background, motivation, proposed functions, and implementation status of ISAT.

  16. Distributed Learning, Recognition, and Prediction by ART and ARTMAP Neural Networks.

    PubMed

    Carpenter, Gail A.

    1997-11-01

    A class of adaptive resonance theory (ART) models for learning, recognition, and prediction with arbitrarily distributed code representations is introduced. Distributed ART neural networks combine the stable fast learning capabilities of winner-take-all ART systems with the noise tolerance and code compression capabilities of multilayer perceptrons. With a winner-take-all code, the unsupervised model dART reduces to fuzzy ART and the supervised model dARTMAP reduces to fuzzy ARTMAP. With a distributed code, these networks automatically apportion learned changes according to the degree of activation of each coding node, which permits fast as well as slow learning without catastrophic forgetting. Distributed ART models replace the traditional neural network path weight with a dynamic weight equal to the rectified difference between coding node activation and an adaptive threshold. Thresholds increase monotonically during learning according to a principle of atrophy due to disuse. However, monotonic change at the synaptic level manifests itself as bidirectional change at the dynamic level, where the result of adaptation resembles long-term potentiation (LTP) for single-pulse or low frequency test inputs but can resemble long-term depression (LTD) for higher frequency test inputs. This paradoxical behavior is traced to dual computational properties of phasic and tonic coding signal components. A parallel distributed match-reset-search process also helps stabilize memory. Without the match-reset-search system, dART becomes a type of distributed competitive learning network.
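
    The dynamic-weight idea described above can be sketched in a few lines: the effective weight is the rectified difference between a coding node's activation and an adaptive threshold, and thresholds only increase during learning. The update rule below is a toy illustration under those assumptions, not the published dART/dARTMAP equations.

        import numpy as np

        def dynamic_weights(y, tau):
            """Distributed ART 'dynamic weight': rectified difference between
            coding-node activation y_j and adaptive threshold tau_ij."""
            return np.maximum(y[np.newaxis, :] - tau, 0.0)

        def learn(tau, y, rate=0.1):
            """Thresholds increase monotonically ('atrophy due to disuse'); here
            they grow in proportion to each node's activation. Toy rule only."""
            return tau + rate * y[np.newaxis, :] * (1.0 - tau)

        n_inputs, n_nodes = 4, 3
        tau = np.zeros((n_inputs, n_nodes))   # adaptive thresholds
        y = np.array([0.7, 0.2, 0.1])         # distributed code (activations)
        print(dynamic_weights(y, tau))
        tau = learn(tau, y)
        print(tau)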

  17. Aozan: an automated post-sequencing data-processing pipeline.

    PubMed

    Perrin, Sandrine; Firmo, Cyril; Lemoine, Sophie; Le Crom, Stéphane; Jourdren, Laurent

    2017-07-15

    Data management and quality control of output from Illumina sequencers is a disk space- and time-consuming task. Thus, we developed Aozan to automatically handle data transfer, demultiplexing, conversion and quality control once a run has finished. This software greatly improves run data management and the monitoring of run statistics via automatic emails and HTML web reports. Aozan is implemented in Java and Python, supported on Linux systems, and distributed under the GPLv3 License at: http://www.outils.genomique.biologie.ens.fr/aozan/ . Aozan source code is available on GitHub: https://github.com/GenomicParisCentre/aozan . aozan@biologie.ens.fr. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  18. MetaQuant: a tool for the automatic quantification of GC/MS-based metabolome data.

    PubMed

    Bunk, Boyke; Kucklick, Martin; Jonas, Rochus; Münch, Richard; Schobert, Max; Jahn, Dieter; Hiller, Karsten

    2006-12-01

    MetaQuant is a Java-based program for the automatic and accurate quantification of GC/MS-based metabolome data. In contrast to other programs MetaQuant is able to quantify hundreds of substances simultaneously with minimal manual intervention. The integration of a self-acting calibration function allows the parallel and fast calibration for several metabolites simultaneously. Finally, MetaQuant is able to import GC/MS data in the common NetCDF format and to export the results of the quantification into Systems Biology Markup Language (SBML), Comma Separated Values (CSV) or Microsoft Excel (XLS) format. MetaQuant is written in Java and is available under an open source license. Precompiled packages for the installation on Windows or Linux operating systems are freely available for download. The source code as well as the installation packages are available at http://bioinformatics.org/metaquant

  19. Scheduling Operations for Massive Heterogeneous Clusters

    NASA Technical Reports Server (NTRS)

    Humphrey, John; Spagnoli, Kyle

    2013-01-01

    High-performance computing (HPC) programming has become increasingly difficult with the advent of hybrid supercomputers consisting of multicore CPUs and accelerator boards such as the GPU. Manual tuning of software to achieve high performance on this type of machine has been performed by programmers. This is needlessly difficult and prone to being invalidated by new hardware, new software, or changes in the underlying code. A system was developed for task-based representation of programs, which when coupled with a scheduler and runtime system, allows for many benefits, including higher performance and utilization of computational resources, easier programming and porting, and adaptations of code during runtime. The system consists of a method of representing computer algorithms as a series of data-dependent tasks. The series forms a graph, which can be scheduled for execution on many nodes of a supercomputer efficiently by a computer algorithm. The schedule is executed by a dispatch component, which is tailored to understand all of the hardware types that may be available within the system. The scheduler is informed by a cluster mapping tool, which generates a topology of available resources and their strengths and communication costs. Software is decoupled from its hardware, which aids in porting to future architectures. A computer algorithm schedules all operations, which for systems of high complexity (i.e., most NASA codes), cannot be performed optimally by a human. The system aids in reducing repetitive code, such as communication code, and aids in the reduction of redundant code across projects. It adds new features to code automatically, such as recovering from a lost node or the ability to modify the code while running. In this project, the innovators at the time of this reporting intend to develop two distinct technologies that build upon each other and both of which serve as building blocks for more efficient HPC usage. First is the scheduling and dynamic execution framework, and the second is scalable linear algebra libraries that are built directly on the former.
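
    A minimal sketch of the task-graph idea described above: tasks with per-resource costs and data dependencies are dispatched greedily to whichever resource (here a hypothetical CPU and GPU) finishes them earliest. The task names, costs, and two-resource model are illustrative assumptions, not the system's actual scheduler.

        from collections import deque

        # Hypothetical task graph: task -> (cost on CPU, cost on GPU); deps = data dependencies.
        tasks = {"load": (2, 2), "fft": (8, 1), "filter": (4, 2), "store": (1, 1)}
        deps = {"fft": ["load"], "filter": ["load"], "store": ["fft", "filter"]}

        def schedule(tasks, deps):
            """Greedy list scheduler: dispatch each ready task to the resource
            that finishes it earliest, respecting dependency finish times."""
            indeg = {t: len(deps.get(t, [])) for t in tasks}
            children = {t: [] for t in tasks}
            for t, ds in deps.items():
                for d in ds:
                    children[d].append(t)
            ready = deque(t for t, n in indeg.items() if n == 0)
            free_at = {"cpu": 0, "gpu": 0}
            done_at, placement = {}, {}
            while ready:
                t = ready.popleft()
                deps_done = max((done_at[d] for d in deps.get(t, [])), default=0)
                cpu_finish = max(free_at["cpu"], deps_done) + tasks[t][0]
                gpu_finish = max(free_at["gpu"], deps_done) + tasks[t][1]
                res, finish = ("cpu", cpu_finish) if cpu_finish <= gpu_finish else ("gpu", gpu_finish)
                placement[t], done_at[t] = res, finish
                free_at[res] = finish
                for c in children[t]:
                    indeg[c] -= 1
                    if indeg[c] == 0:
                        ready.append(c)
            return placement, done_at

        print(schedule(tasks, deps))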

  20. Android Platform Based Smartphones for a Logistical Remote Association Repair Framework

    PubMed Central

    Lien, Shao-Fan; Wang, Chun-Chieh; Su, Juhng-Perng; Chen, Hong-Ming; Wu, Chein-Hsing

    2014-01-01

    The maintenance of large-scale systems is an important issue for logistics support planning. In this paper, we developed a Logistical Remote Association Repair Framework (LRARF) to aid repairmen in keeping the system available. LRARF includes four subsystems: smart mobile phones, a Database Management System (DBMS), a Maintenance Support Center (MSC) and wireless networks. The repairman uses smart mobile phones to capture QR-codes and the images of faulty circuit boards. The captured QR-codes and images are transmitted to the DBMS so the invalid modules can be recognized via the proposed algorithm. In this paper, the Linear Projective Transform (LPT) is employed for fast QR-code calibration. Moreover, the ANFIS-based data mining system is used for module identification and searching automatically for the maintenance manual corresponding to the invalid modules. The inputs of the ANFIS-based data mining system are the QR-codes and image features; the output is the module ID. DBMS also transmits the maintenance manual back to the maintenance staff. If modules are not recognizable, the repairmen and center engineers can obtain the relevant information about the invalid modules through live video. The experimental results validate the applicability of the Android-based platform in the recognition of invalid modules. In addition, the live video can also be recorded synchronously on the MSC for later use. PMID:24967603
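
    The QR-code calibration step can be illustrated with a projective (perspective) transform, the general family to which the paper's Linear Projective Transform belongs. The sketch below uses OpenCV as a stand-in rather than the authors' implementation; the corner points are assumed to come from a detector, and the commented usage lines (file name included) are hypothetical.

        import cv2
        import numpy as np

        def rectify_qr(image, corners, size=300):
            """Warp a skewed QR-code region to a fronto-parallel square using a
            projective (perspective) transform. 'corners' are the four detected
            QR corner points in image coordinates."""
            src = np.asarray(corners, dtype=np.float32)
            dst = np.float32([[0, 0], [size, 0], [size, size], [0, size]])
            M = cv2.getPerspectiveTransform(src, dst)   # 3x3 homography
            return cv2.warpPerspective(image, M, (size, size))

        # Usage sketch: detect, rectify, then decode the calibrated code.
        # img = cv2.imread("board_photo.jpg")
        # ok, corners = cv2.QRCodeDetector().detect(img)
        # if ok:
        #     flat = rectify_qr(img, corners.reshape(4, 2))
        #     text, _, _ = cv2.QRCodeDetector().detectAndDecode(flat)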

  1. Research on Automatic Programming

    DTIC Science & Technology

    1975-12-31

    Sequential processes, deadlocks, and semaphore primitives, Ph.D. Thesis, Harvard University, November 1974; Center for Research in Computing... verified. Code generated to effect the synchronization makes use of the ECL control extension facility (Prenner’s CI, see [Prenner]). The... semaphore operations [Dijkstra] is being developed. Initial results for this code generator are very encouraging; in many cases generated code is

  2. A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture

    NASA Technical Reports Server (NTRS)

    Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.

    2005-01-01

    Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of a system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently, it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.

  3. Multiblock grid generation with automatic zoning

    NASA Technical Reports Server (NTRS)

    Eiseman, Peter R.

    1995-01-01

    An overview will be given for multiblock grid generation with automatic zoning. We shall explore the many advantages and benefits of this exciting technology and will also see how to apply it to a number of interesting cases. The technology is available in the form of a commercial code, GridPro(registered trademark)/az3000. This code takes surface geometry definitions and patterns of points as its primary input and produces high quality grids as its output. Before we embark upon our exploration, we shall first give a brief background of the environment in which this technology fits.

  4. Posteriori error determination and grid adaptation for AMR and ALE computational fluid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapenta, G. M.

    2002-01-01

    We discuss grid adaptation for application to AMR and ALE codes. Two new contributions are presented. First, a new method to locate the regions where the truncation error is being created due to insufficient accuracy: the operator recovery error origin (OREO) detector. The OREO detector is automatic, reliable, easy to implement, and extremely inexpensive. Second, a new grid motion technique is presented for application to ALE codes. The method is based on the Brackbill-Saltzman approach, but it is directly linked to the OREO detector and moves the grid automatically to minimize the error.

  5. Introduction of the ASGARD Code

    NASA Technical Reports Server (NTRS)

    Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv; Fayock, Brian

    2017-01-01

    ASGARD stands for 'Automated Selection and Grouping of events in AIA Regional Data'. The code is a refinement of the event detection method in Ugarte-Urra & Warren (2014). It is intended to automatically detect and group brightenings ('events') in the AIA EUV channels, to record event parameters, and to find related events over multiple channels. Ultimately, the goal is to automatically determine heating and cooling timescales in the corona and to significantly increase statistics in this respect. The code is written in IDL and requires the SolarSoft library. It is parallelized and can run with multiple CPUs. Input files are regions of interest (ROIs) in time series of AIA images from the JSOC cutout service (http://jsoc.stanford.edu/ajax/exportdata.html). The ROIs need to be tracked, co-registered, and limited in time (typically 12 hours).

  6. SORTA: a system for ontology-based re-coding and technical annotation of biomedical phenotype data.

    PubMed

    Pang, Chao; Sollie, Annet; Sijtsma, Anna; Hendriksen, Dennis; Charbon, Bart; de Haan, Mark; de Boer, Tommy; Kelpin, Fleur; Jetten, Jonathan; van der Velde, Joeri K; Smidt, Nynke; Sijmons, Rolf; Hillege, Hans; Swertz, Morris A

    2015-01-01

    There is an urgent need to standardize the semantics of biomedical data values, such as phenotypes, to enable comparative and integrative analyses. However, it is unlikely that all studies will use the same data collection protocols. As a result, retrospective standardization is often required, which involves matching of original (unstructured or locally coded) data to widely used coding or ontology systems such as SNOMED CT (clinical terms), ICD-10 (International Classification of Disease) and HPO (Human Phenotype Ontology). This data curation is usually a time-consuming process performed by a human expert. To help mechanize this process, we have developed SORTA, a computer-aided system for rapidly encoding free text or locally coded values to a formal coding system or ontology. SORTA matches original data values (uploaded in semicolon delimited format) to a target coding system (uploaded in Excel spreadsheet, OWL ontology web language or OBO open biomedical ontologies format). It then semi-automatically shortlists candidate codes for each data value using Lucene and n-gram based matching algorithms, and can also learn from matches chosen by human experts. We evaluated SORTA's applicability in two use cases. For the LifeLines biobank, we used SORTA to recode 90 000 free text values (including 5211 unique values) about physical exercise to MET (Metabolic Equivalent of Task) codes. For the CINEAS clinical symptom coding system, we used SORTA to map to HPO, enriching HPO when necessary (315 terms matched so far). Out of the shortlists at rank 1, we found a precision/recall of 0.97/0.98 in LifeLines and of 0.58/0.45 in CINEAS. More importantly, users found the tool both a major time saver and a quality improvement because SORTA reduced the chances of human mistakes. Thus, SORTA can dramatically ease data (re)coding tasks and we believe it will prove useful for many more projects. Database URL: http://molgenis.org/sorta or as an open source download from http://www.molgenis.org/wiki/SORTA. © The Author(s) 2015. Published by Oxford University Press.
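
    A minimal sketch of n-gram based shortlisting in the spirit of the matching step described above. The real SORTA uses Lucene plus learning from expert choices; the Dice-coefficient scorer and the example coding system below are illustrative assumptions only.

        def ngrams(text, n=3):
            text = " " + text.lower().strip() + " "
            return {text[i:i + n] for i in range(len(text) - n + 1)}

        def similarity(a, b, n=3):
            """Dice coefficient over character n-grams (a simple stand-in for
            the Lucene/n-gram scoring used by SORTA)."""
            ga, gb = ngrams(a, n), ngrams(b, n)
            return 2 * len(ga & gb) / (len(ga) + len(gb)) if ga and gb else 0.0

        def shortlist(value, ontology_terms, k=5):
            """Rank candidate codes for one free-text data value."""
            scored = [(similarity(value, term), code, term)
                      for code, term in ontology_terms.items()]
            return sorted(scored, reverse=True)[:k]

        # Hypothetical target coding system: code -> preferred label
        terms = {"MET:8.0": "running, general", "MET:3.5": "walking the dog",
                 "MET:7.0": "jogging"}
        print(shortlist("ran 5 km", terms))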

  7. SORTA: a system for ontology-based re-coding and technical annotation of biomedical phenotype data

    PubMed Central

    Pang, Chao; Sollie, Annet; Sijtsma, Anna; Hendriksen, Dennis; Charbon, Bart; de Haan, Mark; de Boer, Tommy; Kelpin, Fleur; Jetten, Jonathan; van der Velde, Joeri K.; Smidt, Nynke; Sijmons, Rolf; Hillege, Hans; Swertz, Morris A.

    2015-01-01

    There is an urgent need to standardize the semantics of biomedical data values, such as phenotypes, to enable comparative and integrative analyses. However, it is unlikely that all studies will use the same data collection protocols. As a result, retrospective standardization is often required, which involves matching of original (unstructured or locally coded) data to widely used coding or ontology systems such as SNOMED CT (clinical terms), ICD-10 (International Classification of Disease) and HPO (Human Phenotype Ontology). This data curation process is usually a time-consuming process performed by a human expert. To help mechanize this process, we have developed SORTA, a computer-aided system for rapidly encoding free text or locally coded values to a formal coding system or ontology. SORTA matches original data values (uploaded in semicolon delimited format) to a target coding system (uploaded in Excel spreadsheet, OWL ontology web language or OBO open biomedical ontologies format). It then semi- automatically shortlists candidate codes for each data value using Lucene and n-gram based matching algorithms, and can also learn from matches chosen by human experts. We evaluated SORTA’s applicability in two use cases. For the LifeLines biobank, we used SORTA to recode 90 000 free text values (including 5211 unique values) about physical exercise to MET (Metabolic Equivalent of Task) codes. For the CINEAS clinical symptom coding system, we used SORTA to map to HPO, enriching HPO when necessary (315 terms matched so far). Out of the shortlists at rank 1, we found a precision/recall of 0.97/0.98 in LifeLines and of 0.58/0.45 in CINEAS. More importantly, users found the tool both a major time saver and a quality improvement because SORTA reduced the chances of human mistakes. Thus, SORTA can dramatically ease data (re)coding tasks and we believe it will prove useful for many more projects. Database URL: http://molgenis.org/sorta or as an open source download from http://www.molgenis.org/wiki/SORTA PMID:26385205

  8. A Scalable and Dynamic Testbed for Conducting Penetration-Test Training in a Laboratory Environment

    DTIC Science & Technology

    2015-03-01

    entry point through which to execute a payload to accomplish a higher-level goal: executing arbitrary code, escalating privileges, pivoting... Mobile Ad Hoc Network Emulator (EMANE) can emulate the entire network stack (physical- to application-layer protocols). 2. Methodology: To build a... to host Windows, Linux, MacOS, Android, and other operating systems without much effort. E. A simple and automatic “restore” function: Many

  9. A System for Mailpiece ZIP Code Assignment through Contextual Analysis. Phase 2

    DTIC Science & Technology

    1991-03-01

    Segmentation, Address Block Interpretation, Automatic Feature Generation, Word Recognition, Feature Detection, Word Verification, Optical Character Recognition, Directory... in the Phase III effort. 1.1 Motivation: The United States Postal Service (USPS) deploys large numbers of optical character recognition (OCR) machines... 4):208-218, November 1986. [2] Gronmeyer, L. K., Ruffin, B. W., Lybanon, M. A., Neely, P. L., and Pierce, S. E. An Overview of Optical Character Recognition (OCR

  10. Compilation of Abstracts of Theses Submitted by Candidates for Degrees: October 1988 to September 1989

    DTIC Science & Technology

    1989-09-30

    to accommodate peripherally non-uniform flow modelling free of experimental uncertainties. It was effects (blockage) in the throughflow code... combines that experimental control functions with a detail in this thesis, and the results of a computer menu-driven, diagnostic subsystem to ensure... equations and design a complete (DSL) for both linear and non-linear models and automatic control system for the three dimensional compared. Cross

  11. An Evolving Ecosystem for Natural Language Processing in Department of Veterans Affairs.

    PubMed

    Garvin, Jennifer H; Kalsy, Megha; Brandt, Cynthia; Luther, Stephen L; Divita, Guy; Coronado, Gregory; Redd, Doug; Christensen, Carrie; Hill, Brent; Kelly, Natalie; Treitler, Qing Zeng

    2017-02-01

    In an ideal clinical Natural Language Processing (NLP) ecosystem, researchers and developers would be able to collaborate with others, undertake validation of NLP systems, components, and related resources, and disseminate them. We captured requirements and formative evaluation data from the Veterans Affairs (VA) Clinical NLP Ecosystem stakeholders using semi-structured interviews and meeting discussions. We developed a coding rubric to code interviews. We assessed inter-coder reliability using percent agreement and the kappa statistic. We undertook 15 interviews and held two workshop discussions. The main areas of requirements related to design and functionality, resources, and information. Stakeholders also confirmed the vision of the second generation of the Ecosystem, and recommendations included adding mechanisms to better understand terms, measuring collaboration to demonstrate value, and datasets/tools to navigate spelling errors with consumer language, among others. Stakeholders also recommended the capability to communicate with developers working on the next version of the VA electronic health record (VistA Evolution), to provide a mechanism to automatically monitor downloads of tools, and to automatically provide a summary of the downloads to Ecosystem contributors and funders. After three rounds of coding and discussion, we determined the percent agreement of two coders to be 97.2% and the kappa to be 0.7851. The vision of the VA Clinical NLP Ecosystem met stakeholder needs. Interviews and discussion provided key requirements that inform the design of the VA Clinical NLP Ecosystem.

  12. Wide tracking range, auto ranging, low jitter phase lock loop for swept and fixed frequency systems

    DOEpatents

    Kerner, Thomas M.

    2001-01-01

    The present invention provides a wide tracking range phase locked loop (PLL) circuit that achieves minimal jitter in a recovered clock signal, regardless of the source of the jitter (i.e. whether it is in the source or the transmission media). The present invention PLL has automatic harmonic lockout detection circuitry via a novel lock and seek control logic in electrical communication with a programmable frequency discriminator and a code balance detector. (The frequency discriminator enables preset of a frequency window of upper and lower frequency limits to derive a programmable range within which signal acquisition is effected. The discriminator works in combination with the code balance detector circuit to minimize the sensitivity of the PLL circuit to random data in the data stream). In addition, the combination of a differential loop integrator with the lock and seek control logic obviates a code preamble and guarantees signal acquisition without harmonic lockup. An adaptive cable equalizer is desirably used in combination with the present invention PLL to recover encoded transmissions containing a clock and/or data. The equalizer automatically adapts to equalize short haul cable lengths of coaxial and twisted pair cables or wires and provides superior jitter performance itself. The combination of the equalizer with the present invention PLL is desirable in that such combination permits the use of short haul wires without significant jitter.

  13. Automatic Generation of Cycle-Approximate TLMs with Timed RTOS Model Support

    NASA Astrophysics Data System (ADS)

    Hwang, Yonghyun; Schirner, Gunar; Abdi, Samar

    This paper presents a technique for automatically generating cycle-approximate transaction level models (TLMs) for multi-process applications mapped to embedded platforms. It incorporates three key features: (a) basic block level timing annotation, (b) RTOS model integration, and (c) RTOS overhead delay modeling. The inputs to TLM generation are application C processes and their mapping to processors in the platform. A processor data model, including pipelined datapath, memory hierarchy and branch delay model is used to estimate basic block execution delays. The delays are annotated to the C code, which is then integrated with a generated SystemC RTOS model. Our abstract RTOS provides dynamic scheduling and inter-process communication (IPC) with processor- and RTOS-specific pre-characterized timing. Our experiments using a MP3 decoder and a JPEG encoder show that timed TLMs, with integrated RTOS models, can be automatically generated in less than a minute. Our generated TLMs simulated three times faster than real-time and showed less than 10% timing error compared to board measurements.

  14. Vadose zone monitoring strategies to control water flux dynamics and changes in soil hydraulic properties.

    NASA Astrophysics Data System (ADS)

    Valdes-Abellan, Javier; Jiménez-Martínez, Joaquin; Candela, Lucila

    2013-04-01

    For monitoring the vadose zone, different strategies can be chosen, depending on the objectives and scale of observation. The effects of non-conventional water use on the vadose zone might produce impacts in porous media which could lead to changes in soil hydraulic properties, among others. Controlling these possible effects requires an accurate monitoring strategy that controls the volumetric water content, θ, and soil pressure, h, along the studied profile. According to the available literature, different monitoring systems have been carried out independently; however, comparative studies between different techniques have received less attention. An experimental plot of 9x5 m2 was set with automatic and non-automatic sensors to control θ and h up to 1.5m depth. The non-automatic system consisted of ten Jet Fill tensiometers at 30, 45, 60, 90 and 120 cm (Soil Moisture®) and a polycarbonate access tube of 44 mm (i.d) for soil moisture measurements with a TRIME FM TDR portable probe (IMKO®). Vertical installation was carefully performed; measurements with this system were manual, twice a week for θ and three times per week for h. The automatic system was composed of five 5TE sensors (Decagon Devices®) installed at 20, 40, 60, 90 and 120 cm for θ measurements and one MPS1 sensor (Decagon Devices®) at 60 cm depth for h. Installation took place laterally in a 40-50 cm length hole bored in a side of a trench that was excavated. All automatic sensor readings were recorded hourly and stored in a data-logger. Boundary conditions were controlled with a volume-meter and with a meteorological station. ET was modelled with the Penman-Monteith equation. Soil characterization included bulk density, gravimetric water content, grain size distribution, saturated hydraulic conductivity and soil water retention curves determined following laboratory standards. Soil mineralogy was determined by X-ray diffractometry. Unsaturated soil hydraulic parameters were model-fitted through the SWRC-fit code and ROSETTA based on soil textural fractions. Simulation of water flow using automatic and non-automatic data was carried out by HYDRUS-1D independently. Good agreement between the collected automatic and non-automatic data and the modelled results can be recognized. The general trend was captured, except for the outlier values as expected. Slight differences were found between hydraulic properties obtained from laboratory determinations and from inverse modelling from the two approaches. Differences of up to 14% of flux through the lower boundary were detected between the two strategies. According to the results, automatic sensors have higher resolution and are therefore more appropriate for detecting subtle changes of soil hydraulic properties. Nevertheless, if the aim of the research is to control the general trend of water dynamics, no significant differences were observed between the two systems.

  15. Virtual Engineering and Science Team - Reusable Autonomy for Spacecraft Subsystems

    NASA Technical Reports Server (NTRS)

    Bailin, Sidney C.; Johnson, Michael A.; Rilee, Michael L.; Truszkowski, Walt; Thompson, Bryan; Day, John H. (Technical Monitor)

    2002-01-01

    In this paper we address the design, development, and evaluation of the Virtual Engineering and Science Team (VEST) tool - a revolutionary way to achieve onboard subsystem/instrument autonomy. VEST directly addresses the technology needed for advanced autonomy enablers for spacecraft subsystems. It will significantly support the efficient and cost effective realization of on-board autonomy and contribute directly to realizing the concept of an intelligent autonomous spacecraft. VEST will support the evolution of a subsystem/instrument model that is provably correct and, from that model, the automatic generation of the code needed to support the autonomous operation of what was modeled. VEST will directly support the integration of the efforts of engineers, scientists, and software technologists. This integration of efforts will be a significant advancement over the way things are currently accomplished. The model, developed through the use of VEST, will be the basis for the physical construction of the subsystem/instrument and the generated code will support its autonomous operation once in space. The close coupling between the model and the code, in the same tool environment, will help ensure that correct and reliable operational control of the subsystem/instrument is achieved. VEST will provide a thoroughly modern interface that will allow users to easily and intuitively input subsystem/instrument requirements and visually get back the system's reaction to the correctness and compatibility of the inputs as the model evolves. User interface/interaction, logic, theorem proving, rule-based and model-based reasoning, and automatic code generation are some of the basic technologies that will be brought into play in realizing VEST.

  16. Proceedings of the Workshop on software tools for distributed intelligent control systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herget, C.J.

    1990-09-01

    The Workshop on Software Tools for Distributed Intelligent Control Systems was organized by Lawrence Livermore National Laboratory for the United States Army Headquarters Training and Doctrine Command and the Defense Advanced Research Projects Agency. The goals of the workshop were to identify the current state of the art in tools which support control systems engineering design and implementation, identify research issues associated with writing software tools which would provide a design environment to assist engineers in multidisciplinary control design and implementation, formulate a potential investment strategy to resolve the research issues and develop public domain code which can form the core of more powerful engineering design tools, and recommend test cases to focus the software development process and test associated performance metrics. Recognizing that the development of software tools for distributed intelligent control systems will require a multidisciplinary effort, experts in systems engineering, control systems engineering, and computer science were invited to participate in the workshop. In particular, experts who could address the following topics were selected: operating systems, engineering data representation and manipulation, emerging standards for manufacturing data, mathematical foundations, coupling of symbolic and numerical computation, user interface, system identification, system representation at different levels of abstraction, system specification, system design, verification and validation, automatic code generation, and integration of modular, reusable code.

  17. Automatic scheduling of outages of nuclear power plants with time windows. Final report, January-December 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gomes, C.

    This report describes a successful project for the transfer of advanced AI technology into the domain of planning of outages of nuclear power plants as part of DOD's dual-use program. ROMAN (Rome Lab Outage Manager) is the prototype system that was developed as a result of this project. ROMAN's main innovation compared to the current state-of-the-art of outage management tools is its capability to automatically enforce safety constraints during the planning and scheduling phase. Another innovative aspect of ROMAN is the generation of more robust schedules that are feasible over time windows. In other words, ROMAN generates a family of schedules by assigning time intervals as start times to activities rather than single start times, without affecting the overall duration of the project. ROMAN uses a constraint satisfaction paradigm combining a global search tactic with constraint propagation. The derivation of very specialized representations for the constraints to perform efficient propagation is a key aspect for the generation of very fast schedules - constraints are compiled into the code, which is a novel aspect of our work using an automatic programming system, KIDS.

  18. A simple system for detection of EEG artifacts in polysomnographic recordings.

    PubMed

    Durka, P J; Klekowicz, H; Blinowska, K J; Szelenberger, W; Niemcewicz, Sz

    2003-04-01

    We present an efficient parametric system for automatic detection of electroencephalogram (EEG) artifacts in polysomnographic recordings. For each of the selected types of artifacts, a relevant parameter was calculated for a given epoch. If any of these parameters exceeded a threshold, the epoch was marked as an artifact. Performance of the system, evaluated on 18 overnight polysomnographic recordings, revealed concordance with decisions of human experts close to the interexpert agreement and the repeatability of expert's decisions, assessed via a double-blind test. Complete software (Matlab source code) for the presented system is freely available from the Internet at http://brain.fuw.edu.pl/artifacts.

  19. Motion-adaptive model-assisted compatible coding with spatiotemporal scalability

    NASA Astrophysics Data System (ADS)

    Lee, JaeBeom; Eleftheriadis, Alexandros

    1997-01-01

    We introduce the concept of motion adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye in terms of space and time in moving images with the consideration of object motion. The previous STMAC approach was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas including the eyes and lips need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.

  20. 77 FR 66601 - Electronic Tariff Filings; Notice of Change to eTariff Type of Filing Codes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-06

    ... Tariff Filings; Notice of Change to eTariff Type of Filing Codes Take notice that, effective November 18, 2012, the list of available eTariff Type of Filing Codes (TOFC) will be modified to include a new TOFC... Energy's regulations. Tariff records included in such filings will be automatically accepted to be...

  1. Validation of the "HAMP" mapping algorithm: a tool for long-term trauma research studies in the conversion of AIS 2005 to AIS 98.

    PubMed

    Adams, Derk; Schreuder, Astrid B; Salottolo, Kristin; Settell, April; Goss, J Richard

    2011-07-01

    There are significant changes in the abbreviated injury scale (AIS) 2005 system, which make it impractical to compare patients coded in AIS version 98 with patients coded in AIS version 2005. Harborview Medical Center created a computer algorithm "Harborview AIS Mapping Program (HAMP)" to automatically convert AIS 2005 to AIS 98 injury codes. The mapping was validated using 6 months of double-coded patient injury records from a Level I Trauma Center. HAMP was used to determine how closely individual AIS and injury severity scores (ISS) were converted from AIS 2005 to AIS 98 versions. The kappa statistic was used to measure the agreement between manually determined codes and HAMP-derived codes. Seven hundred forty-nine patient records were used for validation. For the conversion of AIS codes, the measure of agreement between HAMP and manually determined codes was κ = 0.84 (95% confidence interval, 0.82-0.86). The algorithm errors were smaller in magnitude than the manually determined coding errors. For the conversion of ISS, the agreement between HAMP versus manually determined ISS was κ = 0.81 (95% confidence interval, 0.78-0.84). The HAMP algorithm successfully converted injuries coded in AIS 2005 to AIS 98. This algorithm will be useful when comparing trauma patient clinical data across populations coded in different versions, especially for longitudinal studies.
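
    The conversion idea can be sketched as a simple lookup table from AIS 2005 codes to AIS 98 codes, followed by recomputation of the ISS. The two mapping entries below are placeholders, not rows from the actual HAMP map, and the injury example is invented for illustration.

        # Minimal sketch of a code-mapping step plus ISS recomputation.
        AIS2005_TO_AIS98 = {
            "450203.3": "450210.3",   # hypothetical mapping entry
            "853151.2": "851810.2",   # hypothetical mapping entry
        }

        def convert(codes_2005):
            """Map each AIS 2005 code to its AIS 98 equivalent where one exists."""
            return [AIS2005_TO_AIS98.get(c, c) for c in codes_2005]

        def iss(injuries):
            """ISS = sum of squares of the highest AIS severity in each of the
            three most severely injured body regions.
            'injuries' maps body region -> list of AIS severities."""
            worst = sorted((max(s) for s in injuries.values()), reverse=True)[:3]
            return sum(v * v for v in worst)

        print(convert(["450203.3", "999999.1"]))
        print(iss({"head": [3, 2], "chest": [4], "extremities": [2]}))  # 9 + 16 + 4 = 29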

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    The systems resilience research community has developed methods to manually insert additional source-program level assertions to trap errors, and also devised tools to conduct fault injection studies for scalar program codes. In this work, we contribute the first vector oriented LLVM-level fault injector VULFI to help study the effects of faults in vector architectures that are of growing importance, especially for vectorizing loops. Using VULFI, we conduct a resiliency study of nine real-world vector benchmarks using Intel’s AVX and SSE extensions as the target vector instruction sets, and offer the first reported understanding of how faults affect vector instruction sets.more » We take this work further toward automating the insertion of resilience assertions during compilation. This is based on our observation that during intermediate (e.g., LLVM-level) code generation to handle full and partial vectorization, modern compilers exploit (and explicate in their code-documentation) critical invariants. These invariants are turned into error-checking code. We confirm the efficacy of these automatically inserted low-overhead error detectors for vectorized for-loops.« less

  3. EvoluCode: Evolutionary Barcodes as a Unifying Framework for Multilevel Evolutionary Data.

    PubMed

    Linard, Benjamin; Nguyen, Ngoc Hoan; Prosdocimi, Francisco; Poch, Olivier; Thompson, Julie D

    2012-01-01

    Evolutionary systems biology aims to uncover the general trends and principles governing the evolution of biological networks. An essential part of this process is the reconstruction and analysis of the evolutionary histories of these complex, dynamic networks. Unfortunately, the methodologies for representing and exploiting such complex evolutionary histories in large scale studies are currently limited. Here, we propose a new formalism, called EvoluCode (Evolutionary barCode), which allows the integration of different evolutionary parameters (eg, sequence conservation, orthology, synteny …) in a unifying format and facilitates the multilevel analysis and visualization of complex evolutionary histories at the genome scale. The advantages of the approach are demonstrated by constructing barcodes representing the evolution of the complete human proteome. Two large-scale studies are then described: (i) the mapping and visualization of the barcodes on the human chromosomes and (ii) automatic clustering of the barcodes to highlight protein subsets sharing similar evolutionary histories and their functional analysis. The methodologies developed here open the way to the efficient application of other data mining and knowledge extraction techniques in evolutionary systems biology studies. A database containing all EvoluCode data is available at: http://lbgi.igbmc.fr/barcodes.

  4. Major Electrocardiographic Abnormalities According to the Minnesota Coding System Among Brazilian Adults (from the ELSA-Brasil Cohort Study).

    PubMed

    Pinto-Filho, Marcelo M; Brant, Luisa C C; Foppa, Murilo; Garcia-Silva, Kaiser B; Mendes de Oliveira, Rackel Aguiar; de Jesus Mendes da Fonseca, Maria; Alvim, Sheila; Lotufo, Paulo A; Mill, José G; Barreto, Sandhi M; Macfarlane, Peter W; Ribeiro, Antonio L P

    2017-06-15

    The electrocardiogram is a simple and useful clinical tool; nevertheless, few studies have evaluated the prevalence of electrocardiographic abnormalities in the Latin American population. This study aims to evaluate the major electrocardiographic abnormalities according to the Minnesota coding system in Brazilian adults, stratified by gender, age, race, and cardiovascular risk factors. Data from 14,424 adults (45.8% men, age 35 to 74 years) were obtained at baseline of the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil), according to a standardized protocol. The electrocardiograms were obtained with the Burdick Atria 6100 machine, stored on the Pyramis System, automatically coded according to the Minnesota coding system by the Glasgow University software, and then manually revised. Major abnormalities were more prevalent in men than women (11.3% and 7.9%, p <0.001). These differences were consistent across the different age groups, races, and numbers of cardiovascular risk factors. Major electrocardiographic abnormalities were more prevalent in black participants for both men (black: 15.1%, mixed: 10.4%, white: 11.1%, p = 0.001) and women (black: 10%, mixed: 7.6%, white: 7.2%, p = 0.004). In conclusion, in this large sample of Brazilian adults, the prevalence of major electrocardiographic abnormalities was higher among men, the elderly, black participants, and people with more cardiovascular risk factors. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. MIAQuant, a novel system for automatic segmentation, measurement, and localization comparison of different biomarkers from serialized histological slices.

    PubMed

    Casiraghi, Elena; Cossa, Mara; Huber, Veronica; Rivoltini, Licia; Tozzi, Matteo; Villa, Antonello; Vergani, Barbara

    2017-11-02

    In clinical practice, automatic image analysis methods that quickly quantify histological results in an objective and replicable way are becoming increasingly necessary and widespread. Although several commercial software products are available for this task, they offer little flexibility and are provided as black boxes without modifiable source code. To overcome the aforementioned problems, we employed the commonly used MATLAB platform to develop an automatic method, MIAQuant, for the analysis of histochemical and immunohistochemical images, stained with various methods and acquired by different tools. It automatically extracts and quantifies markers characterized by various colors and shapes; furthermore, it aligns contiguous tissue slices stained by different markers and overlaps them with differing colors for visual comparison of their localization. Application of MIAQuant for clinical research fields, such as oncology and cardiovascular disease studies, has proven its efficacy, robustness and flexibility with respect to various problems; we highlight that the flexibility of MIAQuant makes it an important tool to be exploited for basic research, where needs are constantly changing. The MIAQuant software and its user manual are freely available for clinical studies, pathological research, and diagnosis.

  6. "Rate My Therapist": Automated Detection of Empathy in Drug and Alcohol Counseling via Speech and Language Processing.

    PubMed

    Xiao, Bo; Imel, Zac E; Georgiou, Panayiotis G; Atkins, David C; Narayanan, Shrikanth S

    2015-01-01

    The technology for evaluating patient-provider interactions in psychotherapy (observational coding) has not changed in 70 years. It is labor-intensive, error-prone, and expensive, limiting its use in evaluating psychotherapy in the real world. Engineering solutions from speech and language processing provide new methods for the automatic evaluation of provider ratings from session recordings. The primary data are 200 Motivational Interviewing (MI) sessions from a study on MI training methods with observer ratings of counselor empathy. Automatic Speech Recognition (ASR) was used to transcribe sessions, and the resulting words were used in a text-based predictive model of empathy. Two supporting datasets trained the speech processing tasks, including ASR (1200 transcripts from heterogeneous psychotherapy sessions and 153 transcripts and session recordings from 5 MI clinical trials). The accuracy of computationally-derived empathy ratings was evaluated against human ratings for each provider. Computationally-derived empathy scores and classifications (high vs. low) were highly accurate against human-based codes and classifications, with a correlation of 0.65 and F-score (a weighted average of sensitivity and specificity) of 0.86, respectively. Empathy prediction using human transcription as input (as opposed to ASR) resulted in a slight increase in prediction accuracies, suggesting that the fully automatic system with ASR is relatively robust. Using speech and language processing methods, it is possible to generate accurate predictions of provider performance in psychotherapy from audio recordings alone. This technology can support large-scale evaluation of psychotherapy for dissemination and process studies.
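
    A generic stand-in for the text-based predictive model described above (not the authors' actual features or model): TF-IDF word and bigram features fed into a ridge regression, using scikit-learn. The toy transcripts and ratings are invented for illustration.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import Ridge
        from sklearn.pipeline import make_pipeline

        # Toy stand-in data: transcripts paired with observer empathy ratings.
        transcripts = ["it sounds like that was really hard for you",
                       "you just need to stop drinking",
                       "tell me more about what that was like"]
        empathy = [6.0, 2.0, 5.5]

        # Bag-of-words regression: unigrams and bigrams, L2-regularized fit.
        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
        model.fit(transcripts, empathy)
        print(model.predict(["that must have been difficult"]))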

  7. Automatic frame-centered object representation and integration revealed by iconic memory, visual priming, and backward masking

    PubMed Central

    Lin, Zhicheng; He, Sheng

    2012-01-01

    Object identities (“what”) and their spatial locations (“where”) are processed in distinct pathways in the visual system, raising the question of how the what and where information is integrated. Because of object motions and eye movements, the retina-based representations are unstable, necessitating nonretinotopic representation and integration. A potential mechanism is to code and update objects according to their reference frames (i.e., frame-centered representation and integration). To isolate frame-centered processes, in a frame-to-frame apparent motion configuration, we (a) presented two preceding or trailing objects on the same frame, equidistant from the target on the other frame, to control for object-based (frame-based) effect and space-based effect, and (b) manipulated the target's relative location within its frame to probe frame-centered effect. We show that iconic memory, visual priming, and backward masking depend on objects' relative frame locations, orthogonal of the retinotopic coordinate. These findings not only reveal that iconic memory, visual priming, and backward masking can be nonretinotopic but also demonstrate that these processes are automatically constrained by contextual frames through a frame-centered mechanism. Thus, object representation is robustly and automatically coupled to its reference frame and continuously being updated through a frame-centered, location-specific mechanism. These findings lead to an object cabinet framework, in which objects (“files”) within the reference frame (“cabinet”) are orderly coded relative to the frame. PMID:23104817

  8. Accuracy of automatic syndromic classification of coded emergency department diagnoses in identifying mental health-related presentations for public health surveillance.

    PubMed

    Liljeqvist, Henning T G; Muscatello, David; Sara, Grant; Dinh, Michael; Lawrence, Glenda L

    2014-09-23

    Syndromic surveillance in emergency departments (EDs) may be used to deliver early warnings of increases in disease activity, to provide situational awareness during events of public health significance, to supplement other information on trends in acute disease and injury, and to support the development and monitoring of prevention or response strategies. Changes in mental health related ED presentations may be relevant to these goals, provided they can be identified accurately and efficiently. This study aimed to measure the accuracy of using diagnostic codes in electronic ED presentation records to identify mental health-related visits. We selected a random sample of 500 records from a total of 1,815,588 ED electronic presentation records from 59 NSW public hospitals during 2010. ED diagnoses were recorded using any of ICD-9, ICD-10 or SNOMED CT classifications. Three clinicians, blinded to the automatically generated syndromic grouping and each other's classification, reviewed the triage notes and classified each of the 500 visits as mental health-related or not. A "mental health problem presentation" for the purposes of this study was defined as any ED presentation where either a mental disorder or a mental health problem was the reason for the ED visit. The combined clinicians' assessment of the records was used as reference standard to measure the sensitivity, specificity, and positive and negative predictive values of the automatic classification of coded emergency department diagnoses. Agreement between the reference standard and the automated coded classification was estimated using the Kappa statistic. Agreement between clinician's classification and automated coded classification was substantial (Kappa = 0.73. 95% CI: 0.58 - 0.87). The automatic syndromic grouping of coded ED diagnoses for mental health-related visits was found to be moderately sensitive (68% 95% CI: 46%-84%) and highly specific at 99% (95% CI: 98%-99.7%) when compared with the reference standard in identifying mental health related ED visits. Positive predictive value was 81% (95% CI: 0.57 - 0.94) and negative predictive value was 98% (95% CI: 0.97-0.99). Mental health presentations identified using diagnoses coded with various classifications in electronic ED presentation records offers sufficient accuracy for application in near real-time syndromic surveillance.
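
    The reported accuracy measures can all be derived from a 2x2 table comparing the automated classification with the reference standard. The sketch below computes sensitivity, specificity, predictive values, and Cohen's kappa; the counts in the example call are hypothetical, not the study's actual table.

        def accuracy_measures(tp, fp, fn, tn):
            """Sensitivity, specificity, predictive values, and Cohen's kappa for
            a 2x2 comparison of automated classification vs. a reference standard."""
            n = tp + fp + fn + tn
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            ppv = tp / (tp + fp)
            npv = tn / (tn + fn)
            po = (tp + tn) / n                                            # observed agreement
            pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
            kappa = (po - pe) / (1 - pe)
            return sens, spec, ppv, npv, kappa

        # Hypothetical counts for a 500-record review (illustrative only).
        print(accuracy_measures(tp=21, fp=5, fn=10, tn=464))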

  9. Flexible Automatic Discretization for Finite Differences: Eliminating the Human Factor

    NASA Astrophysics Data System (ADS)

    Pranger, Casper

    2017-04-01

    In the geophysical numerical modelling community, finite differences are (in part due to their small footprint) a popular spatial discretization method for PDEs in the regular-shaped continuum that is the earth. However, they rapidly become prone to programming mistakes as the physics increases in complexity. To eliminate opportunities for human error, we have designed an automatic discretization algorithm using Wolfram Mathematica, in which the user supplies symbolic PDEs, the number of spatial dimensions, and a choice of symbolic boundary conditions, and the script transforms this information into matrix- and right-hand-side rules ready for use in a C++ code that will accept them. The symbolic PDEs are further used to automatically develop and perform manufactured-solution benchmarks, ensuring physical fidelity at all stages while providing pragmatic targets for numerical accuracy. We find that this procedure greatly accelerates code development and provides a great deal of flexibility in one's choice of physics.
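    As a rough illustration of the core idea (symbolic PDE terms in, discrete stencils and compilable expressions out), the sketch below uses Python's SymPy rather than the authors' Mathematica tooling; the symbol names and the emitted C expression are illustrative only.

```python
# Hedged sketch: derive a finite-difference stencil for a symbolic PDE term
# and print a C expression for it (SymPy stands in for the Mathematica script).
import sympy as sp

x, h = sp.symbols("x h")
u = sp.Function("u")

# Second-derivative term of a symbolic PDE, discretized on a 3-point stencil.
stencil = u(x).diff(x, 2).as_finite_difference([x - h, x, x + h])

# Rename the sampled values to plain symbols so a C expression can be emitted.
u_im1, u_i, u_ip1 = sp.symbols("u_im1 u_i u_ip1")
expr = sp.simplify(stencil.subs({u(x - h): u_im1, u(x): u_i, u(x + h): u_ip1}))

print(expr)            # central difference: (u_im1 - 2*u_i + u_ip1)/h**2
print(sp.ccode(expr))  # C expression ready to drop into a solver kernel
```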

  10. Mesoscopic-microscopic spatial stochastic simulation with automatic system partitioning.

    PubMed

    Hellander, Stefan; Hellander, Andreas; Petzold, Linda

    2017-12-21

    The reaction-diffusion master equation (RDME) is a model that allows for efficient on-lattice simulation of spatially resolved stochastic chemical kinetics. Compared to off-lattice hard-sphere simulations with Brownian dynamics or Green's function reaction dynamics, the RDME can be orders of magnitude faster if the lattice spacing can be chosen coarse enough. However, strongly diffusion-controlled reactions mandate a very fine mesh resolution for acceptable accuracy. It is common that reactions in the same model differ in their degree of diffusion control and therefore require different degrees of mesh resolution. This renders mesoscopic simulation inefficient for systems with multiscale properties. Mesoscopic-microscopic hybrid methods address this problem by resolving the most challenging reactions with a microscale, off-lattice simulation. However, all methods to date require manual partitioning of a system, effectively limiting their usefulness as "black-box" simulation codes. In this paper, we propose a hybrid simulation algorithm with automatic system partitioning based on indirect a priori error estimates. We demonstrate the accuracy and efficiency of the method on models of diffusion-controlled networks in 3D.

  11. Miniaturized Water Flow and Level Monitoring System for Flood Disaster Early Warning

    NASA Astrophysics Data System (ADS)

    Ifedapo Abdullahi, Salami; Hadi Habaebi, Mohamed; Surya Gunawan, Teddy; Rafiqul Islam, MD

    2017-11-01

    This study presents the performance of a prototype miniaturised water flow and water level monitoring sensor designed to support flood disaster early warning systems. The design involved the selection of sensors, coding to control the system mechanism, and automatic data logging and storage. During the design phase, the apparatus was constructed and all components were assembled from locally sourced items. Subsequently, under a controlled laboratory environment, the system was tested by running water through the inlet, during which the flow rate and rising water levels were automatically recorded and stored in a database via Microsoft Excel using Coolterm software. The system is simulated such that water level readings measured in centimeters are output in meters using a multiplicative factor of 10. A total of 80 readings were analyzed to evaluate the performance of the system. The results show that the system is sensitive to water level rise and yields accurate measurements of water level. However, the flow rate fluctuated because the manual water supply produced an inconsistent flow. It was also observed that the flow sensor has a duty cycle of 50% of operating time under normal conditions, which implies that the performance of the flow sensor is optimal.

  12. Applications of automatic differentiation in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Carle, A.; Bischof, C.; Haigler, Kara J.; Newman, Perry A.

    1994-01-01

    Automatic differentiation (AD) is a powerful computational method for computing exact sensitivity derivatives (SD) from existing computer programs for multidisciplinary design optimization (MDO) or sensitivity analysis. A pre-compiler AD tool for FORTRAN programs called ADIFOR has been developed. The ADIFOR tool has been easily and quickly applied by NASA Langley researchers to assess the feasibility and computational impact of AD in MDO with several different FORTRAN programs. These include a state-of-the-art three-dimensional multigrid Navier-Stokes flow solver for wings or aircraft configurations in transonic turbulent flow. With ADIFOR, the user specifies sets of independent and dependent variables within an existing computer code. ADIFOR then traces the dependency path throughout the code, applies the chain rule to formulate derivative expressions, and generates new code to compute the required SD matrix. The resulting codes have been verified to compute exact non-geometric and geometric SD for a variety of cases, in less time than is required to compute the SD matrix using centered divided differences.
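    The chain-rule propagation that ADIFOR performs by source transformation on FORTRAN can be illustrated with a toy forward-mode example in Python using dual numbers; this shows only the principle, not the ADIFOR tool or its generated code.

```python
# Hedged illustration of forward-mode automatic differentiation: every
# arithmetic operation also propagates a derivative via the chain rule.
import math

class Dual:
    """A value together with its derivative with respect to one chosen input."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule applied automatically at every operation.
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sin(d):
    return Dual(math.sin(d.val), math.cos(d.val) * d.der)

# Sensitivity derivative of f(x) = x*sin(x) + 3x at x = 2, seeded with dx/dx = 1.
x = Dual(2.0, 1.0)
f = x * sin(x) + 3 * x
print(f.val, f.der)   # f(2) and f'(2) = sin(2) + 2*cos(2) + 3
```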

  13. Automatic choroid cells segmentation and counting based on approximate convexity and concavity of chain code in fluorescence microscopic image

    NASA Astrophysics Data System (ADS)

    Lu, Weihua; Chen, Xinjian; Zhu, Weifang; Yang, Lei; Cao, Zhaoyuan; Chen, Haoyu

    2015-03-01

    In this paper, we propose a method based on the Freeman chain code to automatically segment and count rhesus choroid-retinal vascular endothelial cells (RF/6A) in fluorescence microscopy images. The proposed method consists of four main steps. First, a threshold filter and a morphological transform were applied to reduce the noise. Second, the boundary information was used to generate the Freeman chain codes. Third, the concave points were found based on the relationship between the difference of the chain code and the curvature. Finally, cells were segmented and counted based on the number of concave points and the area and shape of the cells. The proposed method was tested on 100 fluorescence microscopic cell images; the average true positive rate (TPR) and false positive rate (FPR) were 98.13% and 4.47%, respectively. These preliminary results show the feasibility and efficiency of the proposed method.
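    The idea of finding concave points from chain-code differences can be sketched as follows (a simplified Python illustration, not the authors' implementation; the turn-direction rule and the toy boundary are assumptions for the example).

```python
# Direction -> (dx, dy) for Freeman codes 0..7 in image coordinates (y down):
# 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE.
MOVES = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def chain_code(boundary):
    """Freeman codes for a closed boundary given as ordered 8-connected pixels."""
    codes = []
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:] + boundary[:1]):
        codes.append(MOVES.index((x1 - x0, y1 - y0)))
    return codes

def concave_points(boundary):
    """Indices of boundary pixels at candidate concave corners (simplified rule)."""
    codes = chain_code(boundary)
    concave = []
    for i in range(len(codes)):
        # Turn at vertex i: difference between the leaving and arriving codes.
        turn = (codes[i] - codes[i - 1]) % 8
        # For the clockwise trace used below, differences of 1..3 run against the
        # dominant turning direction and mark concave corners (5..7 would apply
        # to a counter-clockwise trace).
        if 1 <= turn <= 3:
            concave.append(i)
    return concave

# Rectangle outline with a rectangular notch cut into one edge, traced clockwise;
# the two inner corners of the notch (indices 7 and 8) are the concave ones.
outline = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2),
           (2, 2), (2, 1), (1, 1), (1, 2), (0, 2), (0, 1)]
print(concave_points(outline))   # [7, 8]
```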

  14. An automatic editing algorithm for GPS data

    NASA Technical Reports Server (NTRS)

    Blewitt, Geoffrey

    1990-01-01

    An algorithm has been developed to automatically edit Global Positioning System data such that outlier deletion, cycle slip identification, and correction are independent of clock instability, selective availability, receiver-satellite kinematics, and tropospheric conditions. This algorithm, called TurboEdit, operates on undifferenced, dual-frequency carrier phase data, and requires the use of P-code pseudorange data and a smoothly varying ionospheric electron content. TurboEdit was tested on the large data set from the CASA Uno experiment, which contained over 2500 cycle slips. Analyst intervention was required on 1 percent of the station-satellite passes, almost all of these problems being due to difficulties in extrapolating variations in the ionospheric delay. The algorithm is presently being adapted for real-time data editing in the Rogue receiver for continuous monitoring applications.
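    Editors of this kind screen combinations of the dual-frequency phase and P-code observables that are insensitive to geometry, clocks and (to first order) the ionosphere. The Python sketch below computes one such quantity, the wide-lane (Melbourne-Wübbena) bias, and flags epochs where it jumps; the threshold and flagging rule are simplified placeholders rather than the published TurboEdit tests.

```python
# Hedged sketch of wide-lane cycle-slip screening (illustrative only).
import numpy as np

C = 299_792_458.0               # speed of light, m/s
F1, F2 = 1575.42e6, 1227.60e6   # GPS L1/L2 carrier frequencies, Hz
LAMBDA_WL = C / (F1 - F2)       # wide-lane wavelength, ~0.86 m

def widelane_bias(L1, L2, P1, P2):
    """Melbourne-Wübbena combination in wide-lane cycles.

    L1, L2: carrier phase in metres; P1, P2: P-code pseudoranges in metres.
    """
    phase_wl = (F1 * L1 - F2 * L2) / (F1 - F2)   # wide-lane phase
    code_nl = (F1 * P1 + F2 * P2) / (F1 + F2)    # narrow-lane code
    return (phase_wl - code_nl) / LAMBDA_WL

def flag_cycle_slips(L1, L2, P1, P2, threshold=4.0):
    """Epoch indices where the wide-lane bias jumps by more than `threshold` cycles."""
    bias = widelane_bias(np.asarray(L1), np.asarray(L2),
                         np.asarray(P1), np.asarray(P2))
    jumps = np.abs(np.diff(bias))
    return np.where(jumps > threshold)[0] + 1
```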

  15. HEPMath 1.4: A mathematica package for semi-automatic computations in high energy physics

    NASA Astrophysics Data System (ADS)

    Wiebusch, Martin

    2015-10-01

    This article introduces the Mathematica package HEPMath which provides a number of utilities and algorithms for High Energy Physics computations in Mathematica. Its functionality is similar to packages like FormCalc or FeynCalc, but it takes a more complete and extensible approach to implementing common High Energy Physics notations in the Mathematica language, in particular those related to tensors and index contractions. It also provides a more flexible method for the generation of numerical code which is based on new features for C code generation in Mathematica. In particular it can automatically generate Python extension modules which make the compiled functions callable from Python, thus eliminating the need to write any code in a low-level language like C or Fortran. It also contains seamless interfaces to LHAPDF, FeynArts, and LoopTools.

  16. Combining Recurrence Analysis and Automatic Movement Extraction from Video Recordings to Study Behavioral Coupling in Face-to-Face Parent-Child Interactions.

    PubMed

    López Pérez, David; Leonardi, Giuseppe; Niedźwiecka, Alicja; Radkowska, Alicja; Rączaszek-Leonardi, Joanna; Tomalski, Przemysław

    2017-01-01

    The analysis of parent-child interactions is crucial for the understanding of early human development. Manual coding of interactions is a time-consuming task, which is a limitation in many projects. This becomes especially demanding if a frame-by-frame categorization of movement needs to be achieved. To overcome this, we present a computational approach for studying movement coupling in natural settings, which combines a state-of-the-art automatic tracker, Tracking-Learning-Detection (TLD), with nonlinear time-series analysis, Cross-Recurrence Quantification Analysis (CRQA). We investigated the use of TLD to extract and automatically classify movement of each partner from 21 video recordings of interactions, where 5.5-month-old infants and mothers engaged in free play in laboratory settings. As a proof of concept, we focused on those face-to-face episodes where the mother animated an object in front of the infant, in order to measure the coordination between the infants' head movement and the mothers' hand movement. We also tested the feasibility of using such movement data to study behavioral coupling between partners with CRQA. We demonstrate that movement can be extracted automatically from standard-definition video recordings and used in subsequent CRQA to quantify the coupling between the movement of the parent and the infant. Finally, we assess the quality of this coupling using an extension of CRQA called anisotropic CRQA and show asymmetric dynamics between the movement of the parent and the infant. When combined, these methods allow automatic coding and classification of behaviors, which makes the analysis of movement more efficient than manual coding.
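    At its core, cross-recurrence quantification compares every state of one movement series with every state of the other and summarizes the resulting binary matrix. A minimal Python sketch is shown below; the radius and the toy series are illustrative assumptions, and the study's full (anisotropic) CRQA involves more than this.

```python
# Minimal sketch of cross-recurrence between two movement time series.
import numpy as np

def cross_recurrence(x, y, radius):
    """Binary cross-recurrence matrix: 1 where the two series are within `radius`."""
    x = np.asarray(x, dtype=float)[:, None]   # column of x states
    y = np.asarray(y, dtype=float)[None, :]   # row of y states
    return (np.abs(x - y) <= radius).astype(int)

def recurrence_rate(cr):
    """Fraction of recurrent points, a simple index of coupling strength."""
    return cr.mean()

rng = np.random.default_rng(0)
infant_head = np.cumsum(rng.normal(size=200))                 # toy head-movement series
mother_hand = infant_head + rng.normal(scale=0.5, size=200)   # loosely coupled series
cr = cross_recurrence(infant_head, mother_hand, radius=1.0)
print(recurrence_rate(cr))
```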

  17. Combining Recurrence Analysis and Automatic Movement Extraction from Video Recordings to Study Behavioral Coupling in Face-to-Face Parent-Child Interactions

    PubMed Central

    López Pérez, David; Leonardi, Giuseppe; Niedźwiecka, Alicja; Radkowska, Alicja; Rączaszek-Leonardi, Joanna; Tomalski, Przemysław

    2017-01-01

    The analysis of parent-child interactions is crucial for the understanding of early human development. Manual coding of interactions is a time-consuming task, which is a limitation in many projects. This becomes especially demanding if a frame-by-frame categorization of movement needs to be achieved. To overcome this, we present a computational approach for studying movement coupling in natural settings, which combines a state-of-the-art automatic tracker, Tracking-Learning-Detection (TLD), with nonlinear time-series analysis, Cross-Recurrence Quantification Analysis (CRQA). We investigated the use of TLD to extract and automatically classify movement of each partner from 21 video recordings of interactions, where 5.5-month-old infants and mothers engaged in free play in laboratory settings. As a proof of concept, we focused on those face-to-face episodes where the mother animated an object in front of the infant, in order to measure the coordination between the infants' head movement and the mothers' hand movement. We also tested the feasibility of using such movement data to study behavioral coupling between partners with CRQA. We demonstrate that movement can be extracted automatically from standard-definition video recordings and used in subsequent CRQA to quantify the coupling between the movement of the parent and the infant. Finally, we assess the quality of this coupling using an extension of CRQA called anisotropic CRQA and show asymmetric dynamics between the movement of the parent and the infant. When combined, these methods allow automatic coding and classification of behaviors, which makes the analysis of movement more efficient than manual coding. PMID:29312075

  18. GOAL - A test engineer oriented language. [Ground Operations Aerospace Language for coding automatic test

    NASA Technical Reports Server (NTRS)

    Mitchell, T. R.

    1974-01-01

    The development of a test engineer oriented language has been under way at the Kennedy Space Center for several years. The result of this effort is the Ground Operations Aerospace Language, GOAL, a self-documenting, high-order language suitable for coding automatic test, checkout and launch procedures. GOAL is a highly readable, writable, retainable language that is easily learned by nonprogramming oriented engineers. It is sufficiently powerful for use at all levels of Space Shuttle ground processing, from line replaceable unit checkout to integrated launch day operations. This paper will relate the language development, and describe GOAL and its applications.

  19. Counter-propagation network with variable degree variable step size LMS for single switch typing recognition.

    PubMed

    Yang, Cheng-Huei; Luo, Ching-Hsing; Yang, Cheng-Hong; Chuang, Li-Yeh

    2004-01-01

    Morse code is now being harnessed for use in rehabilitation applications of augmentative-alternative communication and assistive technology, including mobility, environmental control and adapted worksite access. In this paper, Morse code is selected as an adaptive communication device for disabled persons who suffer from muscle atrophy, cerebral palsy or other severe handicaps. A stable typing rate is strictly required for Morse code to be effective as a communication tool, and this restriction is a major hindrance. Therefore, a switch-adaptive automatic recognition method with a high recognition rate is needed. The proposed system combines counter-propagation networks with a variable degree variable step size LMS algorithm. It is divided into five stages: space recognition, tone recognition, learning process, adaptive processing, and character recognition. Statistical analyses demonstrated that the proposed method achieved a better recognition rate than alternative methods in the literature.
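    As a rough sketch of the adaptive ingredient, the Python snippet below implements a generic variable-step-size LMS update (in the spirit of common VSS-LMS rules, not the paper's exact "variable degree variable step size" formulation) and uses it to track a drifting tone-duration baseline so that dots and dashes remain separable as the typing rate changes; all constants and the decision threshold are illustrative.

```python
def vss_lms(samples, mu0=0.05, alpha=0.97, gamma=1e-6, mu_min=1e-3, mu_max=0.3):
    """Track a drifting quantity with an LMS update whose step size adapts to the error."""
    est, mu, history = samples[0], mu0, []
    for s in samples:
        err = s - est
        est += mu * err                       # LMS update of the running estimate
        mu = alpha * mu + gamma * err * err   # larger errors -> faster adaptation
        mu = min(max(mu, mu_min), mu_max)     # keep the step size bounded
        history.append(est)
    return history

# Toy stream of Morse tone durations (ms) from a user whose typing rate slows down.
tones = [100, 105, 300, 110, 320, 130, 360, 140, 380, 150, 400]
baseline = vss_lms(tones)
# Durations well above the tracked baseline are taken as dashes, the rest as dots.
print(["dash" if t > 1.5 * b else "dot" for t, b in zip(tones, baseline)])
```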

  20. Patient safety with blood products administration using wireless and bar-code technology.

    PubMed

    Porcella, Aleta; Walker, Kristy

    2005-01-01

    Supported by a grant from the Agency for Healthcare Research and Quality, a University of Iowa Hospitals and Clinics interdisciplinary research team created an online data-capture-response tool utilizing wireless mobile devices and bar-code technology to track and improve the blood products administration process. The tool captures 1) sample collection, 2) sample arrival in the blood bank, 3) blood product dispense from the blood bank, and 4) administration. At each step, the scanned patient wristband ID bar code is automatically compared to the scanned identification bar code on the requisition, sample, and/or product, and the system presents either a confirmation or an error message to the user. Following an eight-month, five-unit, staged pilot, a 'big bang,' hospital-wide implementation occurred on February 7, 2005. Preliminary results from pilot data indicate that the new bar-code process captures errors 3 to 10 times better than the old manual process.

  1. Small passenger car transmission test: Mercury Lynx ATX transmission

    NASA Technical Reports Server (NTRS)

    Bujold, M. P.

    1981-01-01

    The testing of a Mercury Lynx automatic transmission is reported. The transmission was tested in accordance with a passenger car automatic transmission test code (SAE J651b), which required drive performance, coast performance, and no-load test conditions. Under these conditions, the transmission attained maximum efficiencies in the mid-ninety percent range for both the drive performance and coast performance tests. The torque, speed, and efficiency curves are presented, providing the complete performance characteristics for the Mercury Lynx automatic transmission.
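    The efficiencies quoted are simply the ratio of output to input shaft power, each being the product of torque and angular speed; a back-of-the-envelope Python illustration with made-up torque and speed values follows.

```python
# Illustrative only: transmission efficiency from input/output torque and speed.
import math

def efficiency(t_in_nm, rpm_in, t_out_nm, rpm_out):
    """Mechanical efficiency from shaft torque (N*m) and speed (rpm)."""
    p_in = t_in_nm * rpm_in * 2 * math.pi / 60.0     # input power, watts
    p_out = t_out_nm * rpm_out * 2 * math.pi / 60.0  # output power, watts
    return p_out / p_in

# Hypothetical drive-performance point: torque multiplied, speed reduced.
print(f"{efficiency(t_in_nm=120, rpm_in=2000, t_out_nm=250, rpm_out=915):.1%}")
```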

  2. LOINC, a universal standard for identifying laboratory observations: a 5-year update.

    PubMed

    McDonald, Clement J; Huff, Stanley M; Suico, Jeffrey G; Hill, Gilbert; Leavelle, Dennis; Aller, Raymond; Forrey, Arden; Mercer, Kathy; DeMoor, Georges; Hook, John; Williams, Warren; Case, James; Maloney, Pat

    2003-04-01

    The Logical Observation Identifier Names and Codes (LOINC) database provides a universal code system for reporting laboratory and other clinical observations. Its purpose is to identify observations in electronic messages such as Health Level Seven (HL7) observation messages, so that when hospitals, health maintenance organizations, pharmaceutical manufacturers, researchers, and public health departments receive such messages from multiple sources, they can automatically file the results in the right slots of their medical records, research, and/or public health systems. For each observation, the database includes a code (of which 25,000 are laboratory test observations), a long formal name, a "short" 30-character name, and synonyms. The database comes with a mapping program called Regenstrief LOINC Mapping Assistant (RELMA(TM)) to assist the mapping of local test codes to LOINC codes and to facilitate browsing of the LOINC results. Both LOINC and RELMA are available at no cost from http://www.regenstrief.org/loinc/. The LOINC medical database carries records for >30,000 different observations. LOINC codes are being used by large reference laboratories and federal agencies, e.g., the CDC and the Department of Veterans Affairs, and are part of the Health Insurance Portability and Accountability Act (HIPAA) attachment proposal. Internationally, they have been adopted in Switzerland, Hong Kong, Australia, and Canada, and by the German national standards organization, the Deutsches Institut für Normung. Laboratories should include LOINC codes in their outbound HL7 messages so that clinical and research clients can easily integrate these results into their clinical and research repositories. Laboratories should also encourage instrument vendors to deliver LOINC codes in their instrument outputs and demand LOINC codes in HL7 messages they get from reference laboratories to avoid the need to lump so many referral tests under the "send out lab" code.

  3. Priority coding for control room alarms

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1994-01-01

    The priority of a spatially fixed, activated alarm tile on an alarm tile array is indicated by shape coding at the tile, preferably using the same shape coding wherever the same alarm condition is indicated elsewhere in the control room. The status of an alarm tile can change automatically or by operator acknowledgement, but tones and/or flashing cues continue to provide status information to the operator.

  4. Motor automaticity in Parkinson’s disease

    PubMed Central

    Wu, Tao; Hallett, Mark; Chan, Piu

    2017-01-01

    Bradykinesia is the most important feature contributing to motor difficulties in Parkinson’s disease (PD). However, the pathophysiology underlying bradykinesia is not fully understood. One important aspect is that PD patients have difficulty in performing learned motor skills automatically, but this problem has been generally overlooked. Here we review motor automaticity associated motor deficits in PD, such as reduced arm swing, decreased stride length, freezing of gait, micrographia and reduced facial expression. Recent neuroimaging studies have revealed some neural mechanisms underlying impaired motor automaticity in PD, including less efficient neural coding of movement, failure to shift automated motor skills to the sensorimotor striatum, instability of the automatic mode within the striatum, and use of attentional control and/or compensatory efforts to execute movements usually performed automatically in healthy people. PD patients lose previously acquired automatic skills due to their impaired sensorimotor striatum, and have difficulty in acquiring new automatic skills or restoring lost motor skills. More investigations on the pathophysiology of motor automaticity, the effect of L-dopa or surgical treatments on automaticity, and the potential role of using measures of automaticity in early diagnosis of PD would be valuable. PMID:26102020

  5. Automatic de-identification of French clinical records: comparison of rule-based and machine-learning approaches.

    PubMed

    Grouin, Cyril; Zweigenbaum, Pierre

    2013-01-01

    In this paper, we present a comparison of two approaches to automatically de-identify medical records written in French: a rule-based system and a machine-learning system based on a conditional random fields (CRF) formalism. Both systems have been designed to process nine identifiers in a corpus of medical records in cardiology. We performed two evaluations: first, on 62 documents in cardiology, and second, on 10 documents in foetopathology, produced by optical character recognition (OCR), to evaluate the robustness of our systems. We achieved a 0.843 (rule-based) and 0.883 (machine-learning) exact-match overall F-measure in cardiology. While the rule-based system allowed us to achieve good results on nominative (first and last names) and numerical data (dates, phone numbers, and zip codes), the machine-learning approach performed best on more complex categories (postal addresses, hospital names, medical devices, and towns). On the foetopathology corpus, although our systems were not designed for this corpus and despite OCR character recognition errors, we obtained promising results: a 0.681 (rule-based) and 0.638 (machine-learning) exact-match overall F-measure. This demonstrates that existing tools can be applied to process new documents of lower quality.
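    To give a flavor of the rule-based side, the toy Python sketch below uses regular expressions for the well-structured categories on which rules performed best (dates, phone numbers, zip codes); the patterns and placeholder labels are illustrative assumptions, not the authors' actual rules.

```python
# Toy rule-based de-identification: replace structured identifiers with placeholders.
import re

RULES = {
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b0\d(?:[ .]\d{2}){4}\b"),   # e.g. 01 23 45 67 89
    "ZIP":   re.compile(r"\b\d{5}\b"),
}

def deidentify(text):
    """Replace every match of each rule with its category placeholder."""
    for label, pattern in RULES.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(deidentify("Patient vu le 12/03/2011, rappeler au 01 23 45 67 89, 75013 Paris."))
```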

  6. An automatic target recognition system based on SAR image

    NASA Astrophysics Data System (ADS)

    Li, Qinfu; Wang, Jinquan; Zhao, Bo; Luo, Furen; Xu, Xiaojian

    2009-10-01

    In this paper, an automatic target recognition (ATR) system based on synthetic aperture radar (SAR) imagery is proposed. This ATR system can play an important role in the simulation of up-to-date battlefield environments and can be used in ATR research. To establish a complete and usable system, the processing of SAR images was divided into four main stages: de-noising, detection, cluster-discrimination, and segmentation-recognition. The first three stages are used to search for regions of interest (ROIs). Once the ROIs are extracted, the recognition stage computes the similarity between the ROIs and the templates produced with the electromagnetic simulation software National Electromagnetic Scattering Code (NESC). Due to the lack of SAR raw data, the electromagnetically simulated images are added to the measured SAR background to simulate the battlefield environment. The purpose of the system is to find ROIs that may correspond to artificial military targets such as tanks and armored cars, and to categorize the ROIs into the correct classes according to the existing templates. The results show that the proposed system achieves satisfactory performance.

  7. Information quality measurement of medical encoding support based on usability.

    PubMed

    Puentes, John; Montagner, Julien; Lecornu, Laurent; Cauvin, Jean-Michel

    2013-12-01

    Medical encoding support systems for diagnoses and medical procedures are an emerging technology that is beginning to play a key role in billing, reimbursement, and health policy decisions. A significant problem in exploiting these systems is how to measure the appropriateness of any automatically generated list of codes in terms of fitness for use, i.e. their quality. Until now, only information retrieval performance measurements have been applied to estimate the accuracy of codes lists as a quality indicator. Such measurements do not give the value of codes lists for practical medical encoding, and cannot be used to globally compare the quality of multiple codes lists. This paper defines and validates a new encoding information quality measure that addresses the problem of measuring the quality of medical codes lists. It is based on a usability study of how expert coders and physicians apply computer-assisted medical encoding. The proposed measure, named ADN, evaluates codes Accuracy, Dispersion and Noise, and is adapted to the variable length and content of generated codes lists, coping with limitations of previous measures. According to the ADN measure, the information quality of a codes list is fully represented by a single point within a suitably constrained feature space. Using a single scheme, our approach reliably measures and compares the information quality of hundreds of codes lists, showing their practical value for medical encoding. Its pertinence is demonstrated by simulation and by application to real data corresponding to 502 inpatient stays in four clinic departments. Results are compared to the consensus of three expert coders who also coded this anonymized database of discharge summaries, and to five information retrieval measures. Information quality assessment applying the ADN measure showed the degree of encoding-support system variability from one clinic department to another, providing a global evaluation of quality measurement trends. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  8. Assume-Guarantee Verification of Source Code with Design-Level Assumptions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.

    2004-01-01

    Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.

  9. Tools for Rapid Understanding of Malware Code

    DTIC Science & Technology

    2015-05-07

    cloaking techniques. We used three malware detectors, covering a wide spectrum of detection technologies, for our experiments: VirusTotal, an online ... Analysis and Manipulation (SCAM), 2014. [9] Babak Yadegari, Brian Johannesmeyer, Benjamin Whitely, and Saumya Debray. A generic approach to automatic ...

  10. Real-time range acquisition by adaptive structured light.

    PubMed

    Koninckx, Thomas P; Van Gool, Luc

    2006-03-01

    The goal of this paper is to provide a "self-adaptive" system for real-time range acquisition. Reconstructions are based on a single-frame structured light illumination. Instead of using generic, static coding that is supposed to work under all circumstances, system adaptation is proposed. This occurs on the fly, renders the system more robust against instant scene variability, and creates suitable patterns at startup. A continuous trade-off between speed and quality is made. A weighted combination of different coding cues, based upon pattern color, geometry, and tracking, yields a robust way to solve the correspondence problem. The individual coding cues are automatically adapted within a considered family of patterns. The weights to combine them are based on the average consistency with the result within a small time window. The integration itself is done by reformulating the problem as a graph cut. Also, the camera-projector configuration is taken into account for generating the projection patterns. The correctness of the range maps is not guaranteed, but an estimation of the uncertainty is provided for each part of the reconstruction. Our prototype is implemented using unmodified consumer hardware only and, therefore, is cheap. Frame rates vary between 10 and 25 fps, depending on scene complexity.

  11. Retrofitting the AutoBayes Program Synthesis System with Concrete Syntax

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Visser, Eelco

    2004-01-01

    AutoBayes is a fully automatic, schema-based program synthesis system for statistical data analysis applications. Its core component is a schema library, i.e., a collection of generic code templates with associated applicability constraints which are instantiated in a problem-specific way during synthesis. Currently, AutoBayes is implemented in Prolog; the schemas thus use abstract syntax (i.e., Prolog terms) to formulate the templates. However, the conceptual distance between this abstract representation and the concrete syntax of the generated programs makes the schemas hard to create and maintain. In this paper we describe how AutoBayes is retrofitted with concrete syntax. We show how it is integrated into Prolog and describe how the seamless interaction of concrete syntax fragments with AutoBayes's remaining legacy meta-programming kernel based on abstract syntax is achieved. We apply the approach to gradually migrate individual schemas without forcing a disruptive migration of the entire system to a different meta-programming language. First experiences show that a smooth migration can be achieved. Moreover, it can result in a considerable reduction of the code size and improved readability of the code. In particular, abstracting out fresh-variable generation and second-order term construction allows the formulation of larger continuous fragments.

  12. StandFood: Standardization of Foods Using a Semi-Automatic System for Classifying and Describing Foods According to FoodEx2

    PubMed Central

    Eftimov, Tome; Korošec, Peter; Koroušić Seljak, Barbara

    2017-01-01

    The European Food Safety Authority has developed a standardized food classification and description system called FoodEx2. It uses facets to describe food properties and aspects from various perspectives, making it easier to compare food consumption data from different sources and perform more detailed data analyses. However, both food composition data and food consumption data, which need to be linked, are lacking in FoodEx2 because the process of classification and description has to be manually performed—a process that is laborious and requires good knowledge of the system and also good knowledge of food (composition, processing, marketing, etc.). In this paper, we introduce a semi-automatic system for classifying and describing foods according to FoodEx2, which consists of three parts. The first involves a machine learning approach and classifies foods into four FoodEx2 categories, with two for single foods: raw (r) and derivatives (d), and two for composite foods: simple (s) and aggregated (c). The second uses a natural language processing approach and probability theory to describe foods. The third combines the result from the first and the second part by defining post-processing rules in order to improve the result for the classification part. We tested the system using a set of food items (from Slovenia) manually-coded according to FoodEx2. The new semi-automatic system obtained an accuracy of 89% for the classification part and 79% for the description part, or an overall result of 79% for the whole system. PMID:28587103

  13. StandFood: Standardization of Foods Using a Semi-Automatic System for Classifying and Describing Foods According to FoodEx2.

    PubMed

    Eftimov, Tome; Korošec, Peter; Koroušić Seljak, Barbara

    2017-05-26

    The European Food Safety Authority has developed a standardized food classification and description system called FoodEx2. It uses facets to describe food properties and aspects from various perspectives, making it easier to compare food consumption data from different sources and perform more detailed data analyses. However, both food composition data and food consumption data, which need to be linked, are lacking in FoodEx2 because the process of classification and description has to be manually performed-a process that is laborious and requires good knowledge of the system and also good knowledge of food (composition, processing, marketing, etc.). In this paper, we introduce a semi-automatic system for classifying and describing foods according to FoodEx2, which consists of three parts. The first involves a machine learning approach and classifies foods into four FoodEx2 categories, with two for single foods: raw (r) and derivatives (d), and two for composite foods: simple (s) and aggregated (c). The second uses a natural language processing approach and probability theory to describe foods. The third combines the result from the first and the second part by defining post-processing rules in order to improve the result for the classification part. We tested the system using a set of food items (from Slovenia) manually-coded according to FoodEx2. The new semi-automatic system obtained an accuracy of 89% for the classification part and 79% for the description part, or an overall result of 79% for the whole system.

  14. Certifying Auto-Generated Flight Code

    NASA Technical Reports Server (NTRS)

    Denney, Ewen

    2008-01-01

    Model-based design and automated code generation are being used increasingly at NASA. Many NASA projects now use MathWorks Simulink and Real-Time Workshop for at least some of their modeling and code development. However, there are substantial obstacles to more widespread adoption of code generators in safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. Moreover, the regeneration of code can require complete recertification, which offsets many of the advantages of using a generator. Indeed, manual review of autocode can be more challenging than for hand-written code. Since the direct V&V of code generators is too laborious and complicated due to their complex (and often proprietary) nature, we have developed a generator plug-in to support the certification of the auto-generated code. Specifically, the AutoCert tool supports certification by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews. The generated documentation also contains substantial tracing information, allowing users to trace between model, code, documentation, and V&V artifacts. This enables missions to obtain assurance about the safety and reliability of the code without excessive manual V&V effort and, as a consequence, eases the acceptance of code generators in safety-critical contexts. The generation of explicit certificates and textual reports is particularly well-suited to supporting independent V&V. The primary contribution of this approach is the combination of human-friendly documentation with formal analysis. The key technical idea is to exploit the idiomatic nature of auto-generated code in order to automatically infer logical annotations. The annotation inference algorithm itself is generic, and parametrized with respect to a library of coding patterns that depend on the safety policies and the code generator. The patterns characterize the notions of definitions and uses that are specific to the given safety property. For example, for initialization safety, definitions correspond to variable initializations while uses are statements which read a variable, whereas for array bounds safety, definitions are the array declarations, while uses are statements which access an array variable. The inferred annotations are thus highly dependent on the actual program and the properties being proven. The annotations, themselves, need not be trusted, but are crucial to obtain the automatic formal verification of the safety properties without requiring access to the internals of the code generator. The approach has been applied to both in-house and commercial code generators, but is independent of the particular generator used. It is currently being adapted to flight code generated using MathWorks Real-Time Workshop, an automatic code generator that translates from Simulink/Stateflow models into embedded C code.

  15. Time coded distribution via broadcasting stations

    NASA Technical Reports Server (NTRS)

    Leschiutta, S.; Pettiti, V.; Detoma, E.

    1979-01-01

    The distribution of standard time signals via AM and FM broadcasting stations offers the distinct advantages of wide-area coverage and the use of inexpensive receivers, but the signals are radiated only a limited number of times per day, are not usually available during the night, and do not allow full, automatic synchronization of a remote clock. As an attempt to overcome some of these problems, a time-coded signal with complete date information is broadcast by the IEN via the national broadcasting networks in Italy. These signals are radiated by some 120 AM and about 3000 FM and TV transmitters around the country. In this way, a time-ordered system with an accuracy of a couple of milliseconds is easily achieved.

  16. Automating FEA programming

    NASA Technical Reports Server (NTRS)

    Sharma, Naveen

    1992-01-01

    In this paper we briefly describe a combined symbolic and numeric approach for solving mathematical models on parallel computers. An experimental software system, PIER, is being developed in Common Lisp to synthesize computationally intensive and domain formulation dependent phases of finite element analysis (FEA) solution methods. Quantities for domain formulation like shape functions, element stiffness matrices, etc., are automatically derived using symbolic mathematical computations. The problem specific information and derived formulae are then used to generate (parallel) numerical code for FEA solution steps. A constructive approach to specify a numerical program design is taken. The code generator compiles application oriented input specifications into (parallel) FORTRAN77 routines with the help of built-in knowledge of the particular problem, numerical solution methods and the target computer.
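    As a flavor of the symbolic step (shown here with Python/SymPy rather than the Common Lisp system described above), the element stiffness matrix of a linear one-dimensional element can be derived from its shape functions and then printed as Fortran-style source; the element and its shape functions are an illustrative assumption.

```python
# Hedged sketch: symbolic derivation of a 1-D linear element stiffness matrix.
import sympy as sp

x, L = sp.symbols("x L", positive=True)

# Linear shape functions on an element of length L.
N = sp.Matrix([1 - x / L, x / L])

# Stiffness entries K_ij = integral over the element of dNi/dx * dNj/dx.
B = N.diff(x)
K = (B * B.T).integrate((x, 0, L))

print(K)                  # Matrix([[1/L, -1/L], [-1/L, 1/L]])
print(sp.fcode(K[0, 0]))  # Fortran expression, as the paper emits FORTRAN77 routines
```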

  17. Demonstration of fully coupled simplified extended station black-out accident simulation with RELAP-7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Haihua; Zhang, Hongbin; Zou, Ling

    2014-10-01

    The RELAP-7 code is the next generation nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL). The RELAP-7 code development effort started in October of 2011 and by the end of the second development year, a number of physical components with simplified two phase flow capability have been developed to support the simplified boiling water reactor (BWR) extended station blackout (SBO) analyses. The demonstration case includes the major components for the primary system of a BWR, as well as the safety system components for the safety relief valve (SRV), the reactor core isolation cooling (RCIC) system, and the wet well. Three scenarios for the SBO simulations have been considered. Since RELAP-7 is not a severe accident analysis code, the simulation stops when fuel clad temperature reaches damage point. Scenario I represents an extreme station blackout accident without any external cooling and cooling water injection. The system pressure is controlled by automatically releasing steam through SRVs. Scenario II includes the RCIC system but without SRV. The RCIC system is fully coupled with the reactor primary system and all the major components are dynamically simulated. The third scenario includes both the RCIC system and the SRV to provide a more realistic simulation. This paper will describe the major models and discuss the results for the three scenarios. The RELAP-7 simulations for the three simplified SBO scenarios show the importance of dynamically simulating the SRVs, the RCIC system, and the wet well system to the reactor safety during extended SBO accidents.

  18. Optimizing Tensor Contraction Expressions for Hybrid CPU-GPU Execution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Wenjing; Krishnamoorthy, Sriram; Villa, Oreste

    2013-03-01

    Tensor contractions are generalized multidimensional matrix multiplication operations that widely occur in quantum chemistry. Efficient execution of tensor contractions on Graphics Processing Units (GPUs) requires several challenges to be addressed, including index permutation and small dimension-sizes reducing thread block utilization. Moreover, to apply the same optimizations to various expressions, we need a code generation tool. In this paper, we present our approach to automatically generate CUDA code to execute tensor contractions on GPUs, including management of data movement between CPU and GPU. To evaluate our tool, GPU-enabled code is generated for the most expensive contractions in CCSD(T), a key coupled cluster method, and incorporated into NWChem, a popular computational chemistry suite. For this method, we demonstrate speedup over a factor of 8.4 using one GPU (instead of one core per node) and over 2.6 when utilizing the entire system using hybrid CPU+GPU solution with 2 GPUs and 5 cores (instead of 7 cores per node). Finally, we analyze the implementation behavior on future GPU systems.
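    To make the kind of operation concrete, the sketch below expresses one CCSD-like two-electron contraction with NumPy's einsum; the tensor names, dimensions and index pattern are illustrative assumptions, and the actual tool generates CUDA kernels rather than calling a library.

```python
# Illustrative tensor contraction of the kind generated for GPUs.
import numpy as np

rng = np.random.default_rng(0)
o, v = 8, 20                              # toy occupied / virtual dimension sizes
T2 = rng.normal(size=(o, o, v, v))        # t_{ij}^{cd} amplitudes
V = rng.normal(size=(v, v, v, v))         # two-electron integrals V_{cdab}

# R_{ij}^{ab} = sum_{cd} t_{ij}^{cd} * V_{cdab}; einsum handles the index
# permutation bookkeeping that generated GPU kernels must implement explicitly.
R = np.einsum("ijcd,cdab->ijab", T2, V)
print(R.shape)                            # (8, 8, 20, 20)
```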

  19. CATS, continuous automated testing of seismological, hydroacoustic, and infrasound (SHI) processing software.

    NASA Astrophysics Data System (ADS)

    Brouwer, Albert; Brown, David; Tomuta, Elena

    2017-04-01

    To detect nuclear explosions, waveform data from over 240 SHI stations world-wide flows into the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), located in Vienna, Austria. A complex pipeline of software applications processes this data in numerous ways to form event hypotheses. The software codebase comprises over 2 million lines of code, reflects decades of development, and is subject to frequent enhancement and revision. Since processing must run continuously and reliably, software changes are subjected to thorough testing before being put into production. To overcome the limitations and cost of manual testing, the Continuous Automated Testing System (CATS) has been created. CATS provides an isolated replica of the IDC processing environment, and is able to build and test different versions of the pipeline software directly from code repositories that are placed under strict configuration control. Test jobs are scheduled automatically when code repository commits are made. Regressions are reported. We present the CATS design choices and test methods. Particular attention is paid to how the system accommodates the individual testing of strongly interacting software components that lack test instrumentation.

  20. System and method for integrating and accessing multiple data sources within a data warehouse architecture

    DOEpatents

    Musick, Charles R [Castro Valley, CA; Critchlow, Terence [Livermore, CA; Ganesh, Madhaven [San Jose, CA; Slezak, Tom [Livermore, CA; Fidelis, Krzysztof [Brentwood, CA

    2006-12-19

    A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.

  1. Coding of navigational affordances in the human visual system

    PubMed Central

    Epstein, Russell A.

    2017-01-01

    A central component of spatial navigation is determining where one can and cannot go in the immediate environment. We used fMRI to test the hypothesis that the human visual system solves this problem by automatically identifying the navigational affordances of the local scene. Multivoxel pattern analyses showed that a scene-selective region of dorsal occipitoparietal cortex, known as the occipital place area, represents pathways for movement in scenes in a manner that is tolerant to variability in other visual features. These effects were found in two experiments: One using tightly controlled artificial environments as stimuli, the other using a diverse set of complex, natural scenes. A reconstruction analysis demonstrated that the population codes of the occipital place area could be used to predict the affordances of novel scenes. Taken together, these results reveal a previously unknown mechanism for perceiving the affordance structure of navigable space. PMID:28416669

  2. BGen: A UML Behavior Network Generator Tool

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry; Reder, Leonard J.; Balian, Harry

    2010-01-01

    BGen software was designed for autogeneration of code based on a graphical representation of a behavior network used for controlling automatic vehicles. A common format used for describing a behavior network, such as that used in the JPL-developed behavior-based control system, CARACaS ["Control Architecture for Robotic Agent Command and Sensing" (NPO-43635), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 40] includes a graph with sensory inputs flowing through the behaviors in order to generate the signals for the actuators that drive and steer the vehicle. A computer program to translate Unified Modeling Language (UML) Freeform Implementation Diagrams into a legacy C implementation of Behavior Network has been developed in order to simplify the development of C-code for behavior-based control systems. UML is a popular standard developed by the Object Management Group (OMG) to model software architectures graphically. The C implementation of a Behavior Network is functioning as a decision tree.

  3. Applang - A DSL for specification of mobile applications for android platform based on textX

    NASA Astrophysics Data System (ADS)

    Kosanović, Milan; Dejanović, Igor; Milosavljević, Gordana

    2016-06-01

    Mobile platforms have become a ubiquitous part of our daily lives, putting ever more pressure on software developers to deliver more applications faster and with support for different mobile operating systems. To foster faster development of mobile services and applications and to support various mobile operating systems, new software development approaches must be adopted. Domain-specific languages (DSLs) are a viable approach that promises to address the problem of target-platform diversity as well as to facilitate rapid application development and shorter time-to-market. This paper presents Applang, a DSL for the specification of mobile applications for the Android platform, based on the textX meta-language. The application is described using the Applang DSL, and the source code for a target platform is automatically generated by the provided code generator. The same application defined in a single Applang source can be transformed to various targets with little or no manual modification.
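    For context, textX turns a grammar written as a string directly into a meta-model whose parsed models a code generator can walk. The sketch below shows that workflow in Python with a made-up toy grammar and model; it is not the actual Applang grammar or its Android generator.

```python
# Hedged sketch of the textX workflow behind a DSL such as Applang.
from textx import metamodel_from_str

GRAMMAR = """
App:        'app' name=ID screens+=Screen;
Screen:     'screen' name=ID '{' widgets+=Widget '}';
Widget:     kind=WidgetKind name=ID;
WidgetKind: 'button' | 'label' | 'input';
"""

MODEL = """
app Notes
screen Main {
    label  title
    input  noteText
    button save
}
"""

mm = metamodel_from_str(GRAMMAR)     # build the meta-model from the grammar
app = mm.model_from_str(MODEL)       # parse a concrete application model

# A real generator would emit platform code here; we just print a stub per widget.
for screen in app.screens:
    for w in screen.widgets:
        print(f"generate Android view for {w.kind} '{w.name}' on screen {screen.name}")
```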

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, J.P.; Bangs, A.L.; Butler, P.L.

    Hetero Helix is a programming environment which simulates shared memory on a heterogeneous network of distributed-memory computers. The machines in the network may vary with respect to their native operating systems and internal representation of numbers. Hetero Helix presents a simple programming model to developers, and also considers the needs of designers, system integrators, and maintainers. The key software technology underlying Hetero Helix is the use of a "compiler" which analyzes the data structures in shared memory and automatically generates code which translates data representations from the format native to each machine into a common format, and vice versa. The design of Hetero Helix was motivated in particular by the requirements of robotics applications. Hetero Helix has been used successfully in an integration effort involving 27 CPUs in a heterogeneous network and a body of software totaling roughly 100,000 lines of code. 25 refs., 6 figs.

  5. Automatic multi-banking of memory for microprocessors

    NASA Technical Reports Server (NTRS)

    Wiker, G. A. (Inventor)

    1984-01-01

    A microprocessor system is provided with added memories to expand its address spaces beyond its address word length capacity by using indirect addressing instructions of a type having a detectable operations code and dedicating designated address spaces of memory to each of the added memories, one space to a memory. By decoding each operations code of instructions read from main memory into a decoder to identify indirect addressing instructions of the specified type, and then decoding the address that follows in a decoder to determine which added memory is associated therewith, the associated added memory is selectively enabled through a unit while the main memory is disabled to permit the instruction to be executed on the location to which the effective address of the indirect address instruction points, either before the indirect address is read from main memory or afterwards, depending on how the system is arranged by a switch.

  6. Binary translation using peephole translation rules

    DOEpatents

    Bansal, Sorav; Aiken, Alex

    2010-05-04

    An efficient binary translator uses peephole translation rules to directly translate executable code from one instruction set to another. In a preferred embodiment, the translation rules are generated using superoptimization techniques that enable the translator to automatically learn translation rules for translating code from the source to target instruction set architecture.

  7. 75 FR 80677 - The Low-Income Definition

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-23

    ... original regulatory text so it is consistent with the geo-coding software the agency uses to make the low... Union Act (Act) authorizes the NCUA Board (Board) to define "low-income members" so that credit unions... process of implementing geo-coding software to make the calculation automatically for credit unions...

  8. Energy efficient rateless codes for high speed data transfer over free space optical channels

    NASA Astrophysics Data System (ADS)

    Prakash, Geetha; Kulkarni, Muralidhar; Acharya, U. S.

    2015-03-01

    Terrestrial Free Space Optical (FSO) links transmit information by using the atmosphere (free space) as a medium. In this paper, we have investigated the use of Luby Transform (LT) codes as a means to mitigate the effects of data corruption induced by an imperfect channel, which usually takes the form of lost or corrupted packets. LT codes, which are a class of Fountain codes, can be used independently of the channel rate, and as many code words as required can be generated to recover all the message bits irrespective of the channel performance. Achieving error-free high data rates with limited energy resources is possible in FSO systems if error correction codes with minimal power overhead can be used. We also employ Binary Phase Shift Keying (BPSK) with provision for threshold modification, together with optimized LT codes decoded by belief propagation. These techniques provide additional protection even under strong turbulence regimes. Automatic Repeat Request (ARQ) is another method of improving link reliability, but its performance is limited by the number of retransmissions and the corresponding time delay. We show through theoretical computations and simulations that LT codes consume less energy per bit. We validate the feasibility of using energy-efficient LT codes rather than ARQ for FSO links in optical wireless sensor networks within eye-safety limits.
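    The rateless idea is that each encoded packet is the XOR of a randomly chosen subset of source blocks, and a receiver needs only slightly more packets than there are source blocks to decode. The Python sketch below shows the encoding side with a deliberately naive uniform degree distribution; a practical system would use the robust soliton distribution and belief-propagation decoding, as discussed in the abstract.

```python
# Toy LT (fountain) encoder: each output is the XOR of a random set of blocks.
import random

def lt_encode(source_blocks, n_encoded, seed=42):
    """Return (indices, xor_payload) pairs for `n_encoded` fountain-coded packets."""
    rng = random.Random(seed)
    k = len(source_blocks)
    encoded = []
    for _ in range(n_encoded):
        degree = rng.randint(1, k)             # naive degree distribution (toy)
        idx = rng.sample(range(k), degree)     # distinct source blocks to combine
        payload = 0
        for i in idx:
            payload ^= source_blocks[i]        # XOR of the selected blocks
        encoded.append((idx, payload))
    return encoded

# Toy example: 6 source blocks (integers standing in for packets).
blocks = [0x3A, 0x11, 0xF0, 0x5C, 0x07, 0x9E]
for idx, payload in lt_encode(blocks, n_encoded=8):
    print(sorted(idx), hex(payload))
```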

  9. Towards Time Automata and Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Hutzler, G.; Klaudel, H.; Wang, D. Y.

    2004-01-01

    The design of reactive systems must comply with logical correctness criteria (the system does what it is supposed to do) and timeliness criteria (the system has to satisfy a set of temporal constraints). In this paper, we propose a global approach for the design of adaptive reactive systems, i.e., systems that dynamically adapt their architecture depending on the context. We use the timed automata formalism for the design of the agents' behavior. This allows the properties of the system (regarding logical correctness and timeliness) to be evaluated beforehand, thanks to model-checking and simulation techniques. This model is enhanced with tools that we developed for the automatic generation of code, allowing a running multi-agent prototype satisfying the properties of the model to be produced very quickly.

  10. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  11. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE PAGES

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin; ...

    2017-01-01

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  12. Using National Drug Codes and drug knowledge bases to organize prescription records from multiple sources.

    PubMed

    Simonaitis, Linas; McDonald, Clement J

    2009-10-01

    The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the inpatient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources for which invented codes comprised 1.7-7.4% of their prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.
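
    The coverage figures above reduce to a volume-weighted membership test of product codes against a DKB mapping table. The sketch below shows the shape of that computation on toy data; the codes, counts, and field layout are hypothetical, not the study's actual data or software.

        # Toy prescription records: (product_code, prescription_count).
        prescriptions = [("00093-0058-01", 500), ("12345-6789-00", 40), ("LOCAL-XYZ", 10)]
        # Hypothetical DKB mapping table: the set of national product codes it covers.
        dkb_codes = {"00093-0058-01", "12345-6789-00"}

        total = sum(n for _, n in prescriptions)
        covered = sum(n for code, n in prescriptions if code in dkb_codes)
        print(f"coverage by prescription volume: {100.0 * covered / total:.1f}%")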

  13. A visual programming environment for the Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Tomboulian, Sherryl; Crockett, Thomas W.; Middleton, David

    1988-01-01

    The Navier-Stokes computer is a high-performance, reconfigurable, pipelined machine designed to solve large computational fluid dynamics problems. Due to the complexity of the architecture, development of effective, high-level language compilers for the system appears to be a very difficult task. Consequently, a visual programming methodology has been developed which allows users to program the system at an architectural level by constructing diagrams of the pipeline configuration. These schematic program representations can then be checked for validity and automatically translated into machine code. The visual environment is illustrated by using a prototype graphical editor to program an example problem.

  14. Standardized mappings--a framework to combine different semantic mappers into a standardized web-API.

    PubMed

    Neuhaus, Philipp; Doods, Justin; Dugas, Martin

    2015-01-01

    Automatic coding of medical terms is an important, but highly complicated and laborious task. To compare and evaluate different strategies a framework with a standardized web-interface was created. Two UMLS mapping strategies are compared to demonstrate the interface. The framework is a Java Spring application running on a Tomcat application server. It accepts different parameters and returns results in JSON format. To demonstrate the framework, a list of medical data items was mapped by two different methods: similarity search in a large table of terminology codes versus search in a manually curated repository. These mappings were reviewed by a specialist. The evaluation shows that the framework is flexible (due to standardized interfaces like HTTP and JSON), performant and reliable. Accuracy of automatically assigned codes is limited (up to 40%). Combining different semantic mappers into a standardized Web-API is feasible. This framework can be easily enhanced due to its modular design.
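
    Because the framework exposes a plain HTTP interface returning JSON, a client in any language only needs a URL and a query string. The snippet below is a hypothetical illustration of such a call; the endpoint path, parameter names, and response fields are assumptions made for the example, not the framework's documented API.

        import json
        import urllib.parse
        import urllib.request

        # Hypothetical endpoint and parameters; consult the framework's documentation
        # for the actual URL, parameter names, and response schema.
        base_url = "http://localhost:8080/mapper/map"
        params = {"term": "myocardial infarction", "strategy": "similarity"}

        with urllib.request.urlopen(base_url + "?" + urllib.parse.urlencode(params)) as resp:
            result = json.loads(resp.read().decode("utf-8"))
        print(result)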

  15. Development of Advanced Verification and Validation Procedures and Tools for the Certification of Learning Systems in Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen; Schumann, Johann; Gupta, Pramod; Richard, Michael; Guenther, Kurt; Soares, Fola

    2005-01-01

    Adaptive control technologies that incorporate learning algorithms have been proposed to enable automatic flight control and vehicle recovery, autonomous flight, and to maintain vehicle performance in the face of unknown, changing, or poorly defined operating environments. In order for adaptive control systems to be used in safety-critical aerospace applications, they must be proven to be highly safe and reliable. Rigorous methods for adaptive software verification and validation must be developed to ensure that control system software failures will not occur. Of central importance in this regard is the need to establish reliable methods that guarantee convergent learning, rapid convergence (learning) rate, and algorithm stability. This paper presents the major problems of adaptive control systems that use learning to improve performance. The paper then presents the major procedures and tools presently developed or currently being developed to enable the verification, validation, and ultimate certification of these adaptive control systems. These technologies include the application of automated program analysis methods, techniques to improve the learning process, analytical methods to verify stability, methods to automatically synthesize code, simulation and test methods, and tools to provide on-line software assurance.

  16. Unsupervised Extraction of Diagnosis Codes from EMRs Using Knowledge-Based and Extractive Text Summarization Techniques

    PubMed Central

    Kavuluru, Ramakanth; Han, Sifei; Harris, Daniel

    2017-01-01

    Diagnosis codes are extracted from medical records for billing and reimbursement and for secondary uses such as quality control and cohort identification. In the US, these codes come from the standard terminology ICD-9-CM derived from the international classification of diseases (ICD). ICD-9 codes are generally extracted by trained human coders by reading all artifacts available in a patient’s medical record following specific coding guidelines. To assist coders in this manual process, this paper proposes an unsupervised ensemble approach to automatically extract ICD-9 diagnosis codes from textual narratives included in electronic medical records (EMRs). Earlier attempts on automatic extraction focused on individual documents such as radiology reports and discharge summaries. Here we use a more realistic dataset and extract ICD-9 codes from EMRs of 1000 inpatient visits at the University of Kentucky Medical Center. Using named entity recognition (NER), graph-based concept-mapping of medical concepts, and extractive text summarization techniques, we achieve an example based average recall of 0.42 with average precision 0.47; compared with a baseline of using only NER, we notice a 12% improvement in recall with the graph-based approach and a 7% improvement in precision using the extractive text summarization approach. Although diagnosis codes are complex concepts often expressed in text with significant long range non-local dependencies, our present work shows the potential of unsupervised methods in extracting a portion of codes. As such, our findings are especially relevant for code extraction tasks where obtaining large amounts of training data is difficult. PMID:28748227
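
    The example-based metrics reported above average the per-visit overlap between predicted and gold-standard code sets. A minimal sketch of that computation, on made-up code sets, is shown below.

        def example_based_scores(predicted, gold):
            """Average per-record recall and precision over sets of ICD-9 codes."""
            recalls, precisions = [], []
            for pred, true in zip(predicted, gold):
                pred, true = set(pred), set(true)
                hits = len(pred & true)
                recalls.append(hits / len(true) if true else 1.0)
                precisions.append(hits / len(pred) if pred else 1.0)
            n = len(recalls)
            return sum(recalls) / n, sum(precisions) / n

        # Two toy visits with predicted vs. gold-standard code sets.
        pred = [{"428.0", "401.9"}, {"250.00"}]
        gold = [{"428.0", "584.9"}, {"250.00", "401.9"}]
        print(example_based_scores(pred, gold))  # (average recall, average precision)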

  17. Classifying Facial Actions

    PubMed Central

    Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.

    2010-01-01

    The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284

  18. Mise en Scene: Conversion of Scenarios to CSP Traces for the Requirements-to-Design-to-Code Project

    NASA Technical Reports Server (NTRS)

    Carter, John D.; Gardner, William B.; Rash, James L.; Hinchey, Michael G.

    2007-01-01

    The "Requirements-to-Design-to-Code" (R2D2C) project at NASA's Goddard Space Flight Center is based on deriving a formal specification expressed in Communicating Sequential Processes (CSP) notation from system requirements supplied in the form of CSP traces. The traces, in turn, are to be extracted from scenarios, a user-friendly medium often used to describe the required behavior of computer systems under development. This work, called Mise en Scene, defines a new scenario medium (Scenario Notation Language, SNL) suitable for control-dominated systems, coupled with a two-stage process for automatic translation of scenarios to a new trace medium (Trace Notation Language, TNL) that encompasses CSP traces. Mise en Scene is offered as an initial solution to the problem of the scenarios-to-traces "D2" phase of R2D2C. A survey of the "scenario" concept and some case studies are also provided.

  19. Software design and implementation of ship heave motion monitoring system based on MBD method

    NASA Astrophysics Data System (ADS)

    Yu, Yan; Li, Yuhan; Zhang, Chunwei; Kang, Won-Hee; Ou, Jinping

    2015-03-01

    Marine transportation plays a significant role in the modern transport sector because of its low cost and large capacity, and it receives considerable attention worldwide; the development of related monitoring products has become an active area. DSP signal processors feature small size, low cost, high precision, and fast processing speed, and have been widely used in all kinds of monitoring systems. However, the traditional DSP code development process is time-consuming, inefficient, costly, and difficult. The MathWorks company proposed Model-Based Design (MBD) to overcome these defects: the target board modules in the Simulink library are called to compile and generate the corresponding code for the target processor, and the DSP integrated development environment CCS is then invoked automatically for algorithm validation on the target processor. This paper uses MBD to design the algorithm for the ship heave motion monitoring system and demonstrates the effectiveness of MBD by running the generated code successfully on the processor.

  20. An Expert Assistant for Computer Aided Parallelization

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Chun, Robert; Jin, Haoqiang; Labarta, Jesus; Gimenez, Judit

    2004-01-01

    The prototype implementation of an expert system was developed to assist the user in the computer aided parallelization process. The system interfaces to tools for automatic parallelization and performance analysis. By fusing static program structure information and dynamic performance analysis data the expert system can help the user to filter, correlate, and interpret the data gathered by the existing tools. Sections of the code that show poor performance and require further attention are rapidly identified and suggestions for improvements are presented to the user. In this paper we describe the components of the expert system and discuss its interface to the existing tools. We present a case study to demonstrate the successful use in full scale scientific applications.

  1. Design of Provider-Provisioned Website Protection Scheme against Malware Distribution

    NASA Astrophysics Data System (ADS)

    Yagi, Takeshi; Tanimoto, Naoto; Hariu, Takeo; Itoh, Mitsutaka

    Vulnerabilities in web applications expose computer networks to security threats, and many websites are used by attackers as hopping sites to attack other websites and user terminals. These incidents prevent service providers from constructing secure networking environments. To protect websites from attacks exploiting vulnerabilities in web applications, service providers use web application firewalls (WAFs). WAFs filter accesses from attackers by using signatures, which are generated based on the exploit codes of previous attacks. However, WAFs cannot filter unknown attacks because the signatures cannot reflect new types of attacks. In service provider environments, the number of exploit codes has recently increased rapidly because of the spread of vulnerable web applications that have been developed through cloud computing. Thus, generating signatures for all exploit codes is difficult. To solve these problems, our proposed scheme detects and filters malware downloads that are sent from websites which have already received exploit codes. In addition, to collect information for detecting malware downloads, web honeypots, which automatically extract the communication records of exploit codes, are used. According to the results of experiments using a prototype, our scheme can filter attacks automatically so that service providers can provide secure and cost-effective network environments.

  2. Proceedings Papers of the AFSC (Air Force Systems Command) Avionics Standardization Conference (2nd) Held at Dayton, Ohio on 30 November-2 December 1982. Volume 2

    DTIC Science & Technology

    1982-11-01

    groups. The Air Force is concerned with such issues as resource allocation to foster and promote standards, transitioning from current to future...perform automatic resource allocation, generate MATE Intermediate code, and provide formatted output listings. d. MATE Test Executive (MTE). The MTE...AFFECTED BY THESE STANDARDS TO KNOW JUST WHAT IS AVAILABLE TO SUPPORT THEM: THE HARDWARE; THE COMPLIANCE TESTING; THE TOOLS NECESSARY TO FACILITATE DESIGN

  3. Information Technology Innovation in the U.S. Army: The Case of the Adoption, Adaptation, and Utilization of the Strategic Crisis Exercise Intranet.

    DTIC Science & Technology

    1999-01-01

    the system using widely available Microsoft Visual and Access Basic programming languages. For SCE, SWAMI was upgraded to automatically update...into pseudo-code and pass it on to contractors to program, usually using a complex programming language like FORTRAN. Army operations research...easier to use than programming languages like FORTRAN or C, there was still very little expertise in HTML among the instructors and controllers who were

  4. HAL/S - The programming language for Shuttle

    NASA Technical Reports Server (NTRS)

    Martin, F. H.

    1974-01-01

    HAL/S is a higher order language and system, now operational, adopted by NASA for programming Space Shuttle on-board software. Program reliability is enhanced through language clarity and readability, modularity through program structure, and protection of code and data. Salient features of HAL/S include output orientation, automatic checking (with strictly enforced compiler rules), the availability of linear algebra, real-time control, a statement-level simulator, and compiler transferability (for applying HAL/S to additional object and host computers). The compiler is described briefly.

  5. An algorithm for automatic target recognition using passive radar and an EKF for estimating aircraft orientation

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.

    2005-07-01

    Rather than emitting pulses, passive radar systems rely on "illuminators of opportunity," such as TV and FM radio, to illuminate potential targets. These systems are attractive since they allow receivers to operate without emitting energy, rendering them covert. Until recently, most of the research regarding passive radar has focused on detecting and tracking targets. This dissertation focuses on extending the capabilities of passive radar systems to include automatic target recognition. The target recognition algorithm described in this dissertation uses the radar cross section (RCS) of potential targets, collected over a short period of time, as the key information for target recognition. To make the simulated RCS as accurate as possible, the received signal model accounts for aircraft position and orientation, propagation losses, and antenna gain patterns. An extended Kalman filter (EKF) estimates the target's orientation (and uncertainty in the estimate) from velocity measurements obtained from the passive radar tracker. Coupling the aircraft orientation and state with the known antenna locations permits computation of the incident and observed azimuth and elevation angles. The Fast Illinois Solver Code (FISC) simulates the RCS of potential target classes as a function of these angles. Thus, the approximated incident and observed angles allow the appropriate RCS to be extracted from a database of FISC results. Using this process, the RCS of each aircraft in the target class is simulated as though each is executing the same maneuver as the target detected by the system. Two additional scaling processes are required to transform the RCS into a power profile (magnitude only) simulating the signal in the receiver. First, the RCS is scaled by the Advanced Refractive Effects Prediction System (AREPS) code to account for propagation losses that occur as functions of altitude and range. Then, the Numerical Electromagnetic Code (NEC2) computes the antenna gain pattern, further scaling the RCS. A Rician likelihood model compares the scaled RCS of the illuminated aircraft with those of the potential targets. To improve the robustness of the result, the algorithm jointly optimizes over feasible orientation profiles and target types via dynamic programming.

  6. Unstructured Mesh Methods for the Simulation of Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Peraire, Jaime; Bibb, K. L. (Technical Monitor)

    2001-01-01

    This report describes the research work undertaken at the Massachusetts Institute of Technology. The aim of this research is to identify effective algorithms and methodologies for the efficient and routine solution of hypersonic viscous flows about re-entry vehicles. For over ten years we have received support from NASA to develop unstructured mesh methods for Computational Fluid Dynamics. As a result of this effort, a methodology based on the use of unstructured adapted meshes of tetrahedra and finite volume flow solvers has been developed. A number of gridding algorithms, flow solvers, and adaptive strategies have been proposed. The most successful algorithms developed form the basis of the unstructured mesh system FELISA. The FELISA system has been used extensively for the analysis of transonic and hypersonic flows about complete vehicle configurations. The system is highly automatic and allows for the routine aerodynamic analysis of complex configurations starting from CAD data. The code has been parallelized and utilizes efficient solution algorithms. For hypersonic flows, a version of the code which incorporates real gas effects has been produced. One of the latest developments before the start of this grant was to extend the system to include viscous effects. This required the development of viscous generators, capable of generating the anisotropic grids required to represent boundary layers, and viscous flow solvers. Figures 1 and 2 show some sample hypersonic viscous computations using the developed viscous generators and solvers. Although these initial results were encouraging, it became apparent that in order to develop a fully functional capability for viscous flows, several advances in gridding, solution accuracy, robustness, and efficiency were required. As part of this research we have developed: 1) automatic meshing techniques and the corresponding computer codes, which have been delivered to NASA and implemented into the GridEx system; 2) a finite element algorithm for the solution of the viscous compressible flow equations which can solve flows all the way down to the incompressible limit and that can use higher order (quadratic) approximations leading to highly accurate answers; and 3) iterative algebraic multigrid solution techniques.

  7. Automatic Adaptation to Fast Input Changes in a Time-Invariant Neural Circuit

    PubMed Central

    Bharioke, Arjun; Chklovskii, Dmitri B.

    2015-01-01

    Neurons must faithfully encode signals that can vary over many orders of magnitude despite having only limited dynamic ranges. For a correlated signal, this dynamic range constraint can be relieved by subtracting away components of the signal that can be predicted from the past, a strategy known as predictive coding, that relies on learning the input statistics. However, the statistics of input natural signals can also vary over very short time scales e.g., following saccades across a visual scene. To maintain a reduced transmission cost to signals with rapidly varying statistics, neuronal circuits implementing predictive coding must also rapidly adapt their properties. Experimentally, in different sensory modalities, sensory neurons have shown such adaptations within 100 ms of an input change. Here, we show first that linear neurons connected in a feedback inhibitory circuit can implement predictive coding. We then show that adding a rectification nonlinearity to such a feedback inhibitory circuit allows it to automatically adapt and approximate the performance of an optimal linear predictive coding network, over a wide range of inputs, while keeping its underlying temporal and synaptic properties unchanged. We demonstrate that the resulting changes to the linearized temporal filters of this nonlinear network match the fast adaptations observed experimentally in different sensory modalities, in different vertebrate species. Therefore, the nonlinear feedback inhibitory network can provide automatic adaptation to fast varying signals, maintaining the dynamic range necessary for accurate neuronal transmission of natural inputs. PMID:26247884
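
    The first building block, linear neurons in a feedback-inhibitory loop implementing predictive coding, can be illustrated with a few lines of discrete-time simulation in which the circuit transmits only the part of the input it failed to predict. The sketch below uses an arbitrary leaky-integrator predictor and toy parameters; it is a schematic illustration only and does not reproduce the rectified, automatically adapting circuit analyzed in the paper.

        import numpy as np

        def predictive_coding_loop(signal, leak=0.9):
            """Feedback inhibition: transmit residual = input - prediction, then update the prediction."""
            prediction = 0.0
            residuals = []
            for x in signal:
                residual = x - prediction              # the inhibitory feedback subtracts the prediction
                residuals.append(residual)
                prediction += (1.0 - leak) * residual  # leaky-integrator update toward the current input
            return np.array(residuals)

        # A strongly correlated input: a slowly drifting baseline plus small noise.
        rng = np.random.default_rng(1)
        t = np.arange(1000)
        x = 5.0 * np.sin(2 * np.pi * t / 500) + 0.1 * rng.standard_normal(t.size)
        y = predictive_coding_loop(x)
        print(f"input variance {x.var():.3f}, transmitted variance {y.var():.3f}")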

  8. Automated software configuration in the MONSOON system

    NASA Astrophysics Data System (ADS)

    Daly, Philip N.; Buchholz, Nick C.; Moore, Peter C.

    2004-09-01

    MONSOON is the next generation OUV-IR controller project being developed at NOAO. The design is flexible, emphasizing code re-use, maintainability and scalability as key factors. The software needs to support widely divergent detector systems ranging from multi-chip mosaics (for LSST, QUOTA, ODI and NEWFIRM) down to large single or multi-detector laboratory development systems. In order for this flexibility to be effective and safe, the software must be able to configure itself to the requirements of the attached detector system at startup. The basic building block of all MONSOON systems is the PAN-DHE pair which make up a single data acquisition node. In this paper we discuss the software solutions used in the automatic PAN configuration system.

  9. Automated target recognition using passive radar and coordinated flight models

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.; Lanterman, Aaron D.

    2003-09-01

    Rather than emitting pulses, passive radar systems rely on illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. These systems are particularly attractive since they allow receivers to operate without emitting energy, rendering them covert. Many existing passive radar systems estimate the locations and velocities of targets. This paper focuses on adding an automatic target recognition (ATR) component to such systems. Our approach to ATR compares the Radar Cross Section (RCS) of targets detected by a passive radar system to the simulated RCS of known targets. To make the comparison as accurate as possible, the received signal model accounts for aircraft position and orientation, propagation losses, and antenna gain patterns. The estimated positions become inputs for an algorithm that uses a coordinated flight model to compute probable aircraft orientation angles. The Fast Illinois Solver Code (FISC) simulates the RCS of several potential target classes as they execute the estimated maneuvers. The RCS is then scaled by the Advanced Refractive Effects Prediction System (AREPS) code to account for propagation losses that occur as functions of altitude and range. The Numerical Electromagnetic Code (NEC2) computes the antenna gain pattern, so that the RCS can be further scaled. The Rician model compares the RCS of the illuminated aircraft with those of the potential targets. This comparison results in target identification.

  10. Evidence Arguments for Using Formal Methods in Software Certification

    NASA Technical Reports Server (NTRS)

    Denney, Ewen W.; Pai, Ganesh

    2013-01-01

    We describe a generic approach for automatically integrating the output generated from a formal method/tool into a software safety assurance case, as an evidence argument, by (a) encoding the underlying reasoning as a safety case pattern, and (b) instantiating it using the data produced from the method/tool. We believe this approach not only improves the trustworthiness of the evidence generated from a formal method/tool, by explicitly presenting the reasoning and mechanisms underlying its genesis, but also provides a way to gauge the suitability of the evidence in the context of the wider assurance case. We illustrate our work by application to a real example, an unmanned aircraft system, where we invoke a formal code analysis tool from its autopilot software safety case, automatically transform the verification output into an evidence argument, and then integrate it into the former.

  11. Automatic Activation of Phonological Code during Visual Word Recognition in Children: A Masked Priming Study in Grades 3 and 5

    ERIC Educational Resources Information Center

    Sauval, Karinne; Perre, Laetitia; Casalis, Séverine

    2017-01-01

    The present study aimed to investigate the development of automatic phonological processes involved in visual word recognition during reading acquisition in French. A visual masked priming lexical decision experiment was carried out with third, fifth graders and adult skilled readers. Three different types of partial overlap between the prime and…

  12. Reading Aloud Is Not Automatic: Processing Capacity Is Required to Generate a Phonological Code from Print

    ERIC Educational Resources Information Center

    Reynolds, Michael; Besner, Derek

    2006-01-01

    The present experiments tested the claim that phonological recoding occurs "automatically" by assessing whether it uses central attention in the context of the psychological refractory period paradigm. Task 1 was a tone discrimination task and Task 2 was reading aloud. The joint effects of long-lag word repetition priming and stimulus onset…

  13. Oregon Washington Coastal Ocean Forecast System: Real-time Modeling and Data Assimilation

    NASA Astrophysics Data System (ADS)

    Erofeeva, S.; Kurapov, A. L.; Pasmans, I.

    2016-02-01

    Three-day forecasts of ocean currents, temperature and salinity along the Oregon and Washington coasts are produced daily by a numerical ROMS-based ocean circulation model. NAM is used to derive atmospheric forcing for the model. Fresh water discharges from the Columbia River, the Fraser River, and small rivers in Puget Sound are included. The forecast is constrained by open boundary conditions derived from the global Navy HYCOM model and by assimilation, once every 3 days, of recent data, including HF radar surface currents, sea surface temperature from the GOES satellite, and SSH from several satellite altimetry missions. 4-dimensional variational data assimilation is implemented in 3-day time windows using the tangent linear and adjoint codes developed at OSU. The system is semi-autonomous: all the data, including the NAM and HYCOM fields, are automatically updated, and the daily operational forecast is automatically initiated. The pre-assimilation data quality control and post-assimilation forecast quality control require the operator's involvement. The daily forecast and 60 days of hindcast fields are available to the public on opendap. As part of the system, model validation plots against various satellite data and SEAGLIDER are also automatically updated and available on the web (http://ingria.coas.oregonstate.edu/rtdavow/). Lessons learned in this pilot real-time coastal ocean forecasting project help develop and test metrics for forecast skill assessment for the West Coast Operational Forecast System (WCOFS), currently at the testing and development phase at the National Oceanic and Atmospheric Administration (NOAA).

  14. FAMA: Fast Automatic MOOG Analysis

    NASA Astrophysics Data System (ADS)

    Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella

    2014-02-01

    FAMA (Fast Automatic MOOG Analysis), written in Perl, computes the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs) automatically and independently of any subjective approach. Based on the widely-used MOOG code, it simultaneously searches for three equilibria, excitation equilibrium, ionization balance, and the relationship between logn(FeI) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and errors due to the uncertainties in the stellar parameters. Convergence criteria are not fixed "a priori" but instead are based on the quality of the spectra.

  15. Rapid algorithm prototyping and implementation for power quality measurement

    NASA Astrophysics Data System (ADS)

    Kołek, Krzysztof; Piątek, Krzysztof

    2015-12-01

    This article presents a Model-Based Design (MBD) approach to rapidly implement power quality (PQ) metering algorithms. Power supply quality is a very important aspect of modern power systems and will become even more important in future smart grids. In this case, maintaining the PQ parameters at the desired level will require efficient implementation methods of the metering algorithms. Currently, the development of new, advanced PQ metering algorithms requires new hardware with adequate computational capability and time-intensive, cost-ineffective manual implementations. An alternative, considered here, is an MBD approach. The MBD approach focuses on the modelling and validation of the model by simulation, which is well supported by Computer-Aided Engineering (CAE) packages. This paper presents two algorithms utilized in modern PQ meters: a phase-locked loop based on an Enhanced Phase Locked Loop (EPLL), and the flicker measurement according to the IEC 61000-4-15 standard. The algorithms were chosen because of their complexity and non-trivial development. They were first modelled in the MATLAB/Simulink package, then tested and validated in a simulation environment. The models, in the form of Simulink diagrams, were next used to automatically generate C code. The code was compiled and executed in real time on the Zynq Xilinx platform that combines a reconfigurable Field Programmable Gate Array (FPGA) with a dual-core processor. The MBD development of PQ algorithms, automatic code generation, and compilation form a rapid algorithm prototyping and implementation path for PQ measurements. The main advantage of this approach is the ability to focus on the design, validation, and testing stages while skipping over implementation issues. The code generation process renders production-ready code that can be easily used on the target hardware. This is especially important when standards for PQ measurement are in constant development, and the PQ issues in emerging smart grids will require tools for rapid development and implementation of such algorithms.

  16. a New Approach for the Semi-Automatic Texture Generation of the Buildings Facades, from Terrestrial Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    Oniga, E.

    2012-07-01

    The result of terrestrial laser scanning is an impressive number of spatial points, each characterized by its position (the X, Y and Z coordinates), by the value of the laser reflectance, and by its real color expressed as RGB (Red, Green, Blue) values. The color code for each LIDAR point is taken from the georeferenced digital images acquired with a high-resolution panoramic camera incorporated in the scanner system. In this article I propose a new algorithm for semi-automatic texture generation, using the color information (the RGB values) of every point acquired by terrestrial laser scanning technology and the 3D surfaces defining the building facades, generated with the Leica Cyclone software. In the first step, the operator defines the limiting value, i.e. the minimum distance between a point and the closest surface. In the second step, the distances, i.e. the perpendiculars drawn from each point to the closest surface, are calculated. In the third step, the points whose 3D coordinates are known are associated with each surface, depending on the limiting value. In the fourth step, the Voronoi diagram is computed for the points that belong to a surface. In the final step, the RGB color value is automatically associated with the corresponding polygon of the Voronoi diagram. The advantage of this algorithm is that a photorealistic 3D model of the building can be obtained in a semi-automatic manner.
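
    Steps one through three of the algorithm amount to a point-to-plane distance test with a user-defined limiting value. The sketch below shows one possible way to organize that association, representing each facade surface by a point on the plane and a unit normal; the data and function names are invented for illustration and this is not the article's original implementation.

        import numpy as np

        def assign_points_to_surfaces(points, surfaces, limit):
            """Return, for each surface, the indices of points whose perpendicular distance is within `limit`."""
            assignment = {k: [] for k in range(len(surfaces))}
            for i, p in enumerate(points):
                # Perpendicular distance from point p to each plane (origin point + unit normal).
                dists = [abs(np.dot(p - origin, normal)) for origin, normal in surfaces]
                k = int(np.argmin(dists))
                if dists[k] <= limit:
                    assignment[k].append(i)
            return assignment

        # Two toy facade planes and a handful of scanned (X, Y, Z) points.
        surfaces = [(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])),
                    (np.array([10.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))]
        cloud = np.array([[0.02, 2.0, 1.0], [9.8, 0.01, 2.5], [5.0, 5.0, 5.0]])
        print(assign_points_to_surfaces(cloud, surfaces, limit=0.05))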

  17. Progress in The Semantic Analysis of Scientific Code

    NASA Technical Reports Server (NTRS)

    Stewart, Mark

    2000-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.

  18. 14 CFR 91.215 - ATC transponder and altitude reporting equipment and use.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... interrogations with the code specified by ATC, or a Mode S capability, replying to Mode 3/A interrogations with the code specified by ATC and intermode and Mode S interrogations in accordance with the applicable... equipment having a Mode C capability that automatically replies to Mode C interrogations by transmitting...

  19. 14 CFR 91.215 - ATC transponder and altitude reporting equipment and use.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... interrogations with the code specified by ATC, or a Mode S capability, replying to Mode 3/A interrogations with the code specified by ATC and intermode and Mode S interrogations in accordance with the applicable... equipment having a Mode C capability that automatically replies to Mode C interrogations by transmitting...

  20. 14 CFR 91.215 - ATC transponder and altitude reporting equipment and use.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... interrogations with the code specified by ATC, or a Mode S capability, replying to Mode 3/A interrogations with the code specified by ATC and intermode and Mode S interrogations in accordance with the applicable... equipment having a Mode C capability that automatically replies to Mode C interrogations by transmitting...

  1. 14 CFR 91.215 - ATC transponder and altitude reporting equipment and use.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... interrogations with the code specified by ATC, or a Mode S capability, replying to Mode 3/A interrogations with the code specified by ATC and intermode and Mode S interrogations in accordance with the applicable... equipment having a Mode C capability that automatically replies to Mode C interrogations by transmitting...

  2. 14 CFR 91.215 - ATC transponder and altitude reporting equipment and use.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... interrogations with the code specified by ATC, or a Mode S capability, replying to Mode 3/A interrogations with the code specified by ATC and intermode and Mode S interrogations in accordance with the applicable... equipment having a Mode C capability that automatically replies to Mode C interrogations by transmitting...

  3. Automatic contact in DYNA3D for vehicle crashworthiness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whirley, R.G.; Engelmann, B.E.

    1993-07-15

    This paper presents a new formulation for the automatic definition and treatment of mechanical contact in explicit nonlinear finite element analysis. Automatic contact offers the benefits of significantly reduced model construction time and fewer opportunities for user error, but faces significant challenges in reliability and computational costs. This paper discusses in detail a new four-step automatic contact algorithm. Key aspects of the proposed method include automatic identification of adjacent and opposite surfaces in the global search phase, and the use of a smoothly varying surface normal which allows a consistent treatment of shell intersection and corner contact conditions without ad-hoc rules. The paper concludes with three examples which illustrate the performance of the newly proposed algorithm in the public DYNA3D code.

  4. Automatic energy calibration algorithm for an RBS setup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva, Tiago F.; Moro, Marcos V.; Added, Nemitala

    2013-05-06

    This work describes a computer algorithm for automatic extraction of the energy calibration parameters from a Rutherford Back-Scattering Spectroscopy (RBS) spectrum. Parameters like the electronic gain, electronic offset and detection resolution (FWHM) of an RBS setup are usually determined using a standard sample. In our case, the standard sample comprises a multi-elemental thin film made of a mixture of Ti-Al-Ta that is analyzed at the beginning of each run at a defined beam energy. A computer program has been developed to extract the calibration parameters automatically from the spectrum of the standard sample. The code evaluates the first derivative of the energy spectrum, locates the trailing edges of the Al, Ti and Ta peaks, and fits a first-order polynomial for the energy-channel relation. The detection resolution is determined by fitting the convolution of a pre-calculated theoretical spectrum. To test the code, data from two years were analyzed and the results compared with the manual calculations done previously, obtaining good agreement.
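
    The calibration itself is a linear fit between the channel numbers of the located Al, Ti and Ta trailing edges and their known back-scattered energies. The sketch below illustrates the derivative-based edge location and the first-order polynomial fit on a synthetic staircase spectrum; the window positions and reference energies are placeholders, not values from the published setup.

        import numpy as np

        def find_trailing_edge(spectrum, lo, hi):
            """Locate the steepest falling channel of a peak inside a channel window [lo, hi)."""
            deriv = np.gradient(spectrum[lo:hi].astype(float))
            return lo + int(np.argmin(deriv))   # most negative derivative = trailing edge

        def energy_calibration(spectrum, windows, edge_energies_keV):
            """Fit E = gain * channel + offset from the located edges."""
            channels = [find_trailing_edge(spectrum, lo, hi) for lo, hi in windows]
            gain, offset = np.polyfit(channels, edge_energies_keV, 1)
            return gain, offset

        # Synthetic spectrum with three step-like peaks standing in for Al, Ti and Ta.
        spectrum = np.zeros(1024)
        for edge, height in [(300, 200.0), (520, 150.0), (880, 400.0)]:
            spectrum[:edge] += height
        gain, offset = energy_calibration(spectrum,
                                          windows=[(250, 350), (470, 570), (830, 930)],
                                          edge_energies_keV=[1050.0, 1450.0, 1900.0])
        print(f"gain = {gain:.3f} keV/channel, offset = {offset:.1f} keV")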

  5. Optimizing a liquid propellant rocket engine with an automated combustor design code (AUTOCOM)

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Reichel, R. H.; Jones, R. T.; Glatt, C. R.

    1972-01-01

    A procedure for automatically designing a liquid propellant rocket engine combustion chamber in an optimal fashion is outlined. The procedure is contained in a digital computer code, AUTOCOM. The code is applied to an existing engine, and design modifications are generated which provide a substantial potential payload improvement over the existing design. Computer time requirements for this payload improvement were small, approximately four minutes on the CDC 6600 computer.

  6. A scalable, self-analyzing digital locking system for use on quantum optics experiments.

    PubMed

    Sparkes, B M; Chrzanowski, H M; Parrain, D P; Buchler, B C; Lam, P K; Symul, T

    2011-07-01

    Digital control of optics experiments has many advantages over analog control systems, specifically in terms of the scalability, cost, flexibility, and the integration of system information into one location. We present a digital control system, freely available for download online, specifically designed for quantum optics experiments that allows for automatic and sequential re-locking of optical components. We show how the inbuilt locking analysis tools, including a white-noise network analyzer, can be used to help optimize individual locks, and verify the long term stability of the digital system. Finally, we present an example of the benefits of digital locking for quantum optics by applying the code to a specific experiment used to characterize optical Schrödinger cat states.

  7. Near-real time 3D probabilistic earthquakes locations at Mt. Etna volcano

    NASA Astrophysics Data System (ADS)

    Barberi, G.; D'Agostino, M.; Mostaccio, A.; Patane', D.; Tuve', T.

    2012-04-01

    An automatic procedure for locating earthquakes in quasi-real time must provide a good estimate of the earthquake location within a few seconds after the event is first detected, and is strongly needed for seismic warning systems. The reliability of an automatic location algorithm is influenced by several factors such as errors in picking seismic phases, network geometry, and velocity model uncertainties. On Mt. Etna, the seismic network is managed by INGV and the quasi-real-time earthquake locations are performed by using an automatic picking algorithm based on short-term-average to long-term-average ratios (STA/LTA), calculated from an approximate squared envelope function of the seismogram, which furnishes a list of P-wave arrival times, together with the location algorithm Hypoellipse and a 1D velocity model. The main purpose of this work is to investigate the performance of a different automatic procedure to improve the quasi-real-time earthquake locations. In fact, as the automatic data processing may be affected by outliers (wrong picks), the use of traditional earthquake location techniques based on a least-squares misfit function (L2-norm) often yields unstable and unreliable solutions. Moreover, on Mt. Etna the 1D model is often unable to represent the complex structure of the volcano (in particular the strong lateral heterogeneities), whereas the increasing accuracy of 3D velocity models at Mt. Etna during recent years allows their use today in routine earthquake locations. Therefore, we selected as reference locations all the events that occurred on Mt. Etna in the last year (2011) and were automatically detected and located by means of the Hypoellipse code. Using this dataset (more than 300 events), we applied a nonlinear probabilistic earthquake location algorithm using the Equal Differential Time (EDT) likelihood function (Font et al., 2004; Lomax, 2005), which is much more robust in the presence of outliers in the data. Subsequently, by using a probabilistic nonlinear method (NonLinLoc, Lomax, 2001) and the 3D velocity model derived from the one developed by Patanè et al. (2006), integrated with that obtained by Chiarabba et al. (2004), we obtained the best possible constraint on the location of the foci, expressed as a probability density function (PDF) for the hypocenter location in 3D space. As expected, the obtained results, compared with the reference ones, show that the NonLinLoc software (applied to a 3D velocity model) is more reliable than the Hypoellipse code (applied to layered 1D velocity models), leading to more reliable automatic locations even when outliers are present.
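
    The robustness of the EDT approach comes from scoring candidate hypocenters with differences of arrival times between station pairs, so a single bad pick only contaminates the pairs it participates in. The sketch below evaluates an EDT-style objective over a coarse grid with a constant-velocity travel-time model; the velocity, station geometry, and grid are invented for illustration and are far simpler than the 3D NonLinLoc setup used in this work.

        import itertools
        import numpy as np

        def travel_time(source, station, v=3.5):
            """Straight-ray travel time (s) in a homogeneous medium of velocity v (km/s)."""
            return np.linalg.norm(np.asarray(source) - np.asarray(station)) / v

        def edt_score(source, stations, picks, sigma=0.2):
            """Sum over station pairs of a Gaussian-like EDT term on differential times."""
            total = 0.0
            for (sa, ta), (sb, tb) in itertools.combinations(zip(stations, picks), 2):
                resid = (ta - tb) - (travel_time(source, sa) - travel_time(source, sb))
                total += np.exp(-(resid / sigma) ** 2)
            return total   # larger = better supported hypocenter

        # Toy network (km coordinates) and picks generated from a "true" source at (5, 5, -3).
        stations = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (10, 10, 0), (5, 0, 0)]
        true_src = (5, 5, -3)
        picks = [travel_time(true_src, s) + 10.0 for s in stations]   # origin time cancels in EDT
        picks[2] += 1.5                                               # one outlier pick

        grid = [(x, y, z) for x in range(0, 11) for y in range(0, 11) for z in range(-8, 1)]
        best = max(grid, key=lambda g: edt_score(g, stations, picks))
        print("best grid node:", best)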

  8. Aviation Safety Modeling and Simulation (ASMM) Propulsion Fleet Modeling: A Tool for Semi-Automatic Construction of CORBA-based Applications from Legacy Fortran Programs

    NASA Technical Reports Server (NTRS)

    Sang, Janche

    2003-01-01

    Within NASA's Aviation Safety Program, NASA GRC participates in the Modeling and Simulation Project called ASMM. NASA GRC's focus is to characterize propulsion system performance from a fleet management and maintenance perspective by modeling and, through simulation, to predict the characteristics of two classes of commercial engines (CFM56 and GE90). In prior years, the High Performance Computing and Communication (HPCC) program funded NASA Glenn to develop large-scale, detailed simulations for the analysis and design of aircraft engines, called the Numerical Propulsion System Simulation (NPSS). Three major aspects of this modeling, namely the integration of different engine components, the coupling of multiple disciplines, and engine component zooming at appropriate levels of fidelity, require relatively tight coupling of different analysis codes. Most of these codes in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase their reusability. Aviation Safety's use of modeling and simulation in characterizing fleet management has similar needs. The modeling and simulation of these propulsion systems use existing Fortran and C codes that are instrumental in determining the performance of the fleet. The research centers on building a CORBA-based development environment for programmers to easily wrap and couple legacy Fortran codes. This environment consists of a C++ wrapper library to hide the details of CORBA and an efficient remote variable scheme to facilitate data exchange between the client and the server model. Additionally, a Web Service model should also be constructed for evaluation of this technology's use over the next two to three years.

  9. Natural Language Interface for Safety Certification of Safety-Critical Software

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd

    2011-01-01

    Model-based design and automated code generation are being used increasingly at NASA. The trend is to move beyond simulation and prototyping to actual flight code, particularly in the guidance, navigation, and control domain. However, there are substantial obstacles to more widespread adoption of code generators in such safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. The AutoCert generator plug-in supports the certification of automatically generated code by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews.

  10. Distribution of a Generic Mission Planning and Scheduling Toolkit for Astronomical Spacecraft

    NASA Technical Reports Server (NTRS)

    Kleiner, Steven C.

    1996-01-01

    Work is progressing as outlined in the proposal for this contract. A working planning and scheduling system has been documented and packaged and made available to the WIRE Small Explorer group at JPL, the FUSE group at JHU, the NASA/GSFC Laboratory for Astronomy and Solar Physics and the Advanced Planning and Scheduling Branch at STScI. The package is running successfully on the WIRE computer system. It is expected that the WIRE will reuse significant portions of the SWAS code in its system. This scheduling system itself was tested successfully against the spacecraft hardware in December 1995. A fully automatic scheduling module has been developed and is being added to the toolkit. In order to maximize reuse, the code is being reorganized during the current build into object-oriented class libraries. A paper describing the toolkit has been written and is included in the software distribution. We have experienced interference between the export and production versions of the toolkit. We will be requesting permission to reprogram funds in order to purchase a standalone PC onto which to offload the export version.

  11. Proteomics to go: Proteomatic enables the user-friendly creation of versatile MS/MS data evaluation workflows.

    PubMed

    Specht, Michael; Kuhlgert, Sebastian; Fufezan, Christian; Hippler, Michael

    2011-04-15

    We present Proteomatic, an operating-system-independent and user-friendly platform that enables the construction and execution of MS/MS data evaluation pipelines using free and commercial software. Required external programs, such as those for peptide identification, are downloaded automatically in the case of free software. Due to a strict separation of functionality and presentation, and support for multiple scripting languages, new processing steps can be added easily. Proteomatic is implemented in C++/Qt; scripts are implemented in Ruby, Python and PHP. All source code is released under the LGPL. Source code and installers for Windows, Mac OS X, and Linux are freely available at http://www.proteomatic.org. Contact: michael.specht@uni-muenster.de. Supplementary data are available at Bioinformatics online.

  12. Countermeasures for Time-Cheat Detection in Multiplayer Online Games

    NASA Astrophysics Data System (ADS)

    Ferretti, Stefano

    Cheating is an important issue in games. Depending on the system over which the game is deployed, several types of malicious actions may be accomplished so as to take an unfair and unexpected advantage over the game and over the (digital, human) adversaries. When the game is a standalone application, cheats typically just relate to the specific software code being developed to build the application. It is not a surprise to find (in the Web and in specialized magazines) people that explain cheats on specific games stating, for instance, which configuration files can be altered (and how to do it) to automatically gain some bonus during the game. To avoid this, game developers are hence motivated to build stable code, with related data that should be securely managed and made difficult to alter.

  13. How Does Reading Performance Modulate the Impact of Orthographic Knowledge on Speech Processing? A Comparison of Normal Readers and Dyslexic Adults

    ERIC Educational Resources Information Center

    Pattamadilok, Chotiga; Nelis, Aubéline; Kolinsky, Régine

    2014-01-01

    Studies on proficient readers showed that speech processing is affected by knowledge of the orthographic code. Yet, the automaticity of the orthographic influence depends on task demand. Here, we addressed this automaticity issue in normal and dyslexic adult readers by comparing the orthographic effects obtained in two speech processing tasks that…

  14. MO-F-CAMPUS-I-01: A System for Automatically Calculating Organ and Effective Dose for Fluoroscopically-Guided Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiong, Z; Vijayan, S; Rana, V

    2015-06-15

    Purpose: A system was developed that automatically calculates the organ and effective dose for individual fluoroscopically-guided procedures using a log of the clinical exposure parameters. Methods: We have previously developed a dose tracking system (DTS) to provide a real-time color-coded 3D mapping of skin dose. This software produces a log file of all geometry and exposure parameters for every x-ray pulse during a procedure. The data in the log files is input into PCXMC, a Monte Carlo program that calculates organ and effective dose for projections and exposure parameters set by the user. We developed a MATLAB program to read data from the log files produced by the DTS and to automatically generate the definition files in the format used by PCXMC. The processing is done at the end of a procedure after all exposures are completed. Since there are thousands of exposure pulses with various parameters for fluoroscopy, DA and DSA and at various projections, the data for exposures with similar parameters is grouped prior to entry into PCXMC to reduce the number of Monte Carlo calculations that need to be performed. Results: The software developed automatically transfers data from the DTS log file to PCXMC and runs the program for each grouping of exposure pulses. When the doses from all exposure events are calculated, the doses for each organ and all effective doses are summed to obtain procedure totals. For a complicated interventional procedure, the calculations can be completed on a PC without manual intervention in less than 30 minutes, depending on the level of data grouping. Conclusion: This system allows organ dose to be calculated for individual procedures for every patient without tedious calculations or data entry, so that estimates of stochastic risk can be obtained in addition to the deterministic risk estimate provided by the DTS. Partial support from NIH grant R01EB002873 and Toshiba Medical Systems Corp.
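
    The grouping step can be thought of as a dictionary keyed on rounded technique factors, with the exposure accumulated per key so that each group requires only one Monte Carlo run. The sketch below illustrates that pre-processing on toy log entries; the field names, binning widths, and log format are assumptions made for the example and do not reflect the actual DTS log structure.

        from collections import defaultdict

        def group_exposures(log_rows, angle_bin=5.0, kvp_bin=2.0):
            """Group x-ray pulses with similar technique factors and sum their mAs."""
            groups = defaultdict(float)
            for row in log_rows:
                key = (round(row["kvp"] / kvp_bin) * kvp_bin,
                       round(row["primary_angle"] / angle_bin) * angle_bin,
                       round(row["secondary_angle"] / angle_bin) * angle_bin,
                       row["filter_mm_cu"])
                groups[key] += row["mas"]
            return dict(groups)

        # Toy log entries standing in for thousands of fluoroscopy pulses.
        log = [
            {"kvp": 80.3, "primary_angle": 1.0, "secondary_angle": 0.0, "filter_mm_cu": 0.2, "mas": 0.5},
            {"kvp": 80.9, "primary_angle": 2.0, "secondary_angle": 0.0, "filter_mm_cu": 0.2, "mas": 0.4},
            {"kvp": 95.0, "primary_angle": 30.0, "secondary_angle": 5.0, "filter_mm_cu": 0.1, "mas": 2.0},
        ]
        for key, mas in group_exposures(log).items():
            print(key, "total mAs:", mas)   # one Monte Carlo run per key, weighted by total mAs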

  15. Automatic removal of cosmic ray signatures in Deep Impact images

    NASA Astrophysics Data System (ADS)

    Ipatov, S. I.; A'Hearn, M. F.; Klaasen, K. P.

    The results of recognition of cosmic ray (CR) signatures on single images made during the Deep Impact mission were analyzed for several codes written by several authors. For automatic removal of CR signatures on many images, we suggest using the code imgclean (http://pdssbn.astro.umd.edu/volume/didoc_0001/document/calibration_software/dical_v5/) written by E. Deutsch, as the other codes considered do not work properly automatically with a large number of images and do not run to completion for some images; however, other codes can be better for analysis of certain specific images. Sometimes imgclean detects false CR signatures near the edge of a comet nucleus, and it often does not recognize all pixels of long CR signatures. Our code rmcr is the only code among those considered that allows one to work with raw images. For most visual images made during low solar activity at exposure time t > 4 s, the number of clusters of bright pixels on an image per second per sq. cm of CCD was about 2-4, both for dark and normal sky images. At high solar activity, it sometimes exceeded 10. The ratio of the number of CR signatures consisting of n pixels obtained at high solar activity to that at low solar activity was greater for greater n. The number of clusters detected as CR signatures on a single infrared image is at least several times greater than the actual number of CR signatures; the number of clusters based on analysis of two successive dark infrared frames is in agreement with the expected number of CR signatures. Some glitches falsely detected as CR signatures include bright pixels repeatedly present on different infrared images. Our interactive code imr allows a user to choose the regions on a considered image where glitches detected by imgclean as CR signatures are ignored. In other regions chosen by the user, the brightness of some pixels is replaced by the local median brightness if the brightness of these pixels is greater by some factor than the median brightness. The interactive code allows one to delete long CR signatures and prevents removal of false CR signatures near the edge of the nucleus of the comet. The interactive code can be applied to editing any digital images. The results obtained can be used for other missions to comets.
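
    The replacement rule used in the interactive code, substituting the local median for pixels that exceed it by some factor, is a standard despiking operation. The sketch below applies that rule to a synthetic frame with two injected hits; the window size and threshold factor are arbitrary choices for illustration, and the interactive region selection described above is not modeled.

        import numpy as np
        from scipy.ndimage import median_filter

        def remove_cr_spikes(image, factor=3.0, size=5):
            """Replace pixels brighter than `factor` times the local median with that median."""
            local_median = median_filter(image.astype(float), size=size)
            spikes = image > factor * np.maximum(local_median, 1e-6)   # avoid issues near zero
            cleaned = image.astype(float).copy()
            cleaned[spikes] = local_median[spikes]
            return cleaned, spikes

        # Smooth synthetic frame with two injected cosmic-ray hits.
        rng = np.random.default_rng(0)
        frame = 100.0 + rng.normal(0.0, 1.0, (64, 64))
        frame[10, 12] = 5000.0
        frame[40, 41] = 2500.0
        cleaned, spikes = remove_cr_spikes(frame)
        print("pixels replaced:", int(spikes.sum()))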

  16. The laboratory demonstration and signal processing of the inverse synthetic aperture imaging ladar

    NASA Astrophysics Data System (ADS)

    Gao, Si; Zhang, ZengHui; Xu, XianWen; Yu, WenXian

    2017-10-01

    This paper presents a coherent inverse synthetic-aperture imaging ladar (ISAL) system to obtain high resolution images. A balanced coherent optics system is built in the laboratory with a binary phase-coded modulation transmit waveform, which differs from the conventional chirp. A complete digital signal processing solution is proposed, including both the quality phase gradient autofocus (QPGA) algorithm and the cubic phase function (CPF) algorithm. Some high-resolution, well-focused ISAL images of retro-reflecting targets are shown to validate the concepts. It is shown that high resolution images can be achieved and that the influence of platform vibrations, involving both targets and radar, can be automatically compensated by the distinctive laboratory system and digital signal processing.

  17. Practical gigahertz quantum key distribution robust against channel disturbance.

    PubMed

    Wang, Shuang; Chen, Wei; Yin, Zhen-Qiang; He, De-Yong; Hui, Cong; Hao, Peng-Lei; Fan-Yuan, Guan-Jie; Wang, Chao; Zhang, Li-Jun; Kuang, Jie; Liu, Shu-Feng; Zhou, Zheng; Wang, Yong-Gang; Guo, Guang-Can; Han, Zheng-Fu

    2018-05-01

    Quantum key distribution (QKD) provides an attractive solution for secure communication. However, channel disturbance severely limits its application when a QKD system is transferred from the laboratory to the field. Here a high-speed Faraday-Sagnac-Michelson QKD system is proposed that can automatically compensate for the channel polarization disturbance, which largely avoids the intermittency caused by environmental changes. Over a 50 km fiber channel with 30 Hz polarization scrambling, the practicality of this phase-coding QKD system was characterized with an interference fringe visibility of 99.35% over 24 h and a stable secure key rate of 306 kbit/s over seven days without active polarization alignment.

  18. Some User's Insights Into ADIFOR 2.0D

    NASA Technical Reports Server (NTRS)

    Giesy, Daniel P.

    2002-01-01

    Some insights are given which were gained by one user through experience with the use of the ADIFOR 2.0D software for automatic differentiation of Fortran code. These insights are generally in the area of the user interface with the generated derivative code - particularly the actual form of the interface and the use of derivative objects, including "seed" matrices. Some remarks are given as to how to iterate application of ADIFOR in order to generate second derivative code.

  19. Frequency-Accommodating Manchester Decoder

    NASA Technical Reports Server (NTRS)

    Vasquez, Mario J.

    1988-01-01

    No adjustment necessary to cover a 10:1 frequency range. Decoding circuit converts biphase-level pulse-code modulation to nonreturn-to-zero (NRZ)-level pulse-code modulation plus clock signal. Circuit accommodates input data rate of 50 to 500 kb/s. Tracks gradual changes in rate automatically, eliminating need for extra circuits and manual switching to adjust to different rates.

  20. Automatic detection of white-light flare kernels in SDO/HMI intensitygrams

    NASA Astrophysics Data System (ADS)

    Mravcová, Lucia; Švanda, Michal

    2017-11-01

    Solar flares with broadband emission in the white-light range of the electromagnetic spectrum belong to the most enigmatic phenomena on the Sun. The origin of the white-light emission is not entirely understood. We aim to systematically study the visible-light emission connected to solar flares in SDO/HMI observations. We developed a code for automatic detection of kernels of flares with HMI intensity brightenings and study the properties of the detected candidates. The code was tuned and tested, and with little effort it could be applied to any suitable data set. By studying a few flare examples, we found indications that the HMI intensity brightening might be an artefact of the simplified procedure used to compute the HMI observables.

  1. The role of standardized data and terminological systems in computerized clinical decision support systems: literature review and survey.

    PubMed

    Ahmadian, Leila; van Engen-Verheul, Mariette; Bakhshi-Raiez, Ferishta; Peek, Niels; Cornet, Ronald; de Keizer, Nicolette F

    2011-02-01

    Clinical decision support systems (CDSSs) should be seamlessly integrated with existing clinical information systems to enable automatic provision of advice at the time and place where decisions are made. It has been suggested that a lack of agreed data standards frequently hampers this integration. We performed a literature review to investigate whether CDSSs used standardized (i.e. coded or numerical) data and which terminological systems have been used to code data. We also investigated whether a lack of standardized data was considered an impediment for CDSS implementation. Articles reporting an evaluation of a CDSS that provided computerized advice based on patient-specific data items were identified based on a former literature review on CDSSs and on CDSS studies identified in AMIA's 'Year in Review'. Authors of these articles were contacted to check and complete the extracted data. A questionnaire among the authors of included studies was used to determine the obstacles in CDSS implementation. We identified 77 articles published between 1995 and 2008. Twenty-two percent of the evaluated CDSSs used only numerical data. Fifty-one percent of the CDSSs that used coded data applied an international terminology. The most frequently used international terminologies were the ICD (International Classification of Diseases), used in 68% of the cases, and LOINC (Logical Observation Identifiers Names and Codes), used in 12% of the cases. More than half of the authors experienced barriers in CDSS implementation. In most cases these barriers were related to the lack of electronically available standardized data required to invoke or activate the CDSS. Many CDSSs applied different terminological systems to code data. This diversity hampers the possibility of sharing and reasoning with data within different systems. The results of the survey confirm the hypothesis that data standardization is a critical success factor for CDSS development. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  2. The procedure execution manager and its application to Advanced Photon Source operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borland, M.

    1997-06-01

    The Procedure Execution Manager (PEM) combines a complete scripting environment for coding accelerator operation procedures with a manager application for executing and monitoring the procedures. PEM is based on Tcl/Tk, a supporting widget library, and the dp-tcl extension for distributed processing. The scripting environment provides support for distributed, parallel execution of procedures along with join and abort operations. Nesting of procedures is supported, permitting the same code to run as a top-level procedure under operator control or as a subroutine under control of another procedure. The manager application allows an operator to execute one or more procedures in automatic, semi-automatic, or manual modes. It also provides a standard way for operators to interact with procedures. A number of successful applications of PEM to accelerator operations have been made to date. These include start-up, shutdown, and other control of the positron accumulator ring (PAR), low-energy transport (LET) lines, and the booster rf systems. The PAR/LET procedures make nested use of PEM's ability to run parallel procedures. There are also a number of procedures to guide and assist tune-up operations, to make accelerator physics measurements, and to diagnose equipment. Because of the success of the existing procedures, expanded use of PEM is planned.
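
    The parallel execution, join and abort operations can be pictured with the short Python sketch below; it is only an analogy for the Tcl/Tk-based PEM environment, and the procedure names are hypothetical:

        import threading, time

        abort = threading.Event()

        def procedure(name, steps, delay=0.1):
            for i in range(steps):
                if abort.is_set():             # cooperative abort, as an operator might request
                    print(name, "aborted at step", i)
                    return
                time.sleep(delay)
            print(name, "completed")

        threads = [threading.Thread(target=procedure, args=("fill_PAR", 5)),
                   threading.Thread(target=procedure, args=("ramp_booster_rf", 5))]
        for t in threads:
            t.start()
        for t in threads:                      # join: wait for the parallel procedures
            t.join()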

  3. ALDAS user's manual

    NASA Technical Reports Server (NTRS)

    Watts, Michael E.

    1991-01-01

    The Acoustic Laboratory Data Acquisition System (ALDAS) is an inexpensive, transportable means to digitize and analyze data. The system is based on the Macintosh 2 family of computers, with internal analog-to-digital boards providing four channels of simultaneous data acquisition at rates up to 50,000 samples/sec. The ALDAS software package, written for use with rotorcraft acoustics, performs automatic acoustic calibration of channels, data display, two types of cycle averaging, and spectral amplitude analysis. The program can use data obtained from internal analog-to-digital conversion, or discrete external data imported in ASCII format. All aspects of ALDAS can be improved as new hardware becomes available and new features are introduced into the code.

  4. Secure Video Surveillance System Acquisition Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-12-04

    The SVSS Acquisition Software collects and displays video images from two cameras through a VPN and stores the images on a collection controller. The software is configured to allow a user to enter a time window to display up to 2 1/2 hours of video for review. The software collects images from the cameras at a rate of 1 image per second and automatically deletes images older than 3 hours. The software operates in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates the different COTS software components to build the video review system.
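
    A minimal sketch of the acquisition-and-retention behaviour described above (one frame per second per camera, frames older than three hours deleted); the camera access is stubbed out, and the paths and file naming are assumptions rather than the SVSS implementation:

        import glob, os, time

        STORE = "frames"
        RETENTION_S = 3 * 3600

        def capture(camera_id):
            return b"jpeg-bytes"               # placeholder for a real grab over the VPN

        def prune(store, retention_s):
            now = time.time()
            for path in glob.glob(os.path.join(store, "*.jpg")):
                if now - os.path.getmtime(path) > retention_s:
                    os.remove(path)

        def collect_once():
            os.makedirs(STORE, exist_ok=True)
            for cam in (1, 2):
                name = os.path.join(STORE, "cam%d_%d.jpg" % (cam, int(time.time())))
                with open(name, "wb") as f:
                    f.write(capture(cam))
            prune(STORE, RETENTION_S)

        if __name__ == "__main__":
            collect_once()                     # a real collector would loop once per second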

  5. Retargeting of existing FORTRAN program and development of parallel compilers

    NASA Technical Reports Server (NTRS)

    Agrawal, Dharma P.

    1988-01-01

    The software models used in implementing the parallelizing compiler for the B-HIVE multiprocessor system are described. The various models and strategies used in the compiler development are: the flexible granularity model, which allows a compromise between two extreme granularity models; the communication model, which is capable of precisely describing interprocessor communication timings and patterns; the loop type detection strategy, which identifies different types of loops; the critical path with coloring scheme, which is a versatile scheduling strategy for any multicomputer with associated communication costs; and the loop allocation strategy, which realizes optimum overlapped operation between computation and communication in the system. Using these models, several sample routines of the AIR3D package are examined and tested. The automatically generated codes are highly parallelized to provide the maximum degree of parallelism, obtaining speedups on systems of up to 28 to 32 processors. A comparison of parallel codes for both the existing and proposed communication models is performed, and the corresponding expected speedup factors are obtained. The experimentation shows that the B-HIVE compiler produces more efficient codes than existing techniques. Work is progressing well in completing the final phase of the compiler. Numerous enhancements are needed to improve the capabilities of the parallelizing compiler.

  6. JDFTx: Software for joint density-functional theory

    DOE PAGES

    Sundararaman, Ravishankar; Letchworth-Weaver, Kendra; Schwarz, Kathleen A.; ...

    2017-11-14

    Density-functional theory (DFT) has revolutionized computational prediction of atomic-scale properties from first principles in physics, chemistry and materials science. Continuing development of new methods is necessary for accurate predictions of new classes of materials and properties, and for connecting to nano- and mesoscale properties using coarse-grained theories. JDFTx is a fully-featured open-source electronic DFT software designed specifically to facilitate rapid development of new theories, models and algorithms. Using an algebraic formulation as an abstraction layer, compact C++11 code automatically performs well on diverse hardware including GPUs (Graphics Processing Units). This code hosts the development of joint density-functional theory (JDFT) that combines electronic DFT with classical DFT and continuum models of liquids for first-principles calculations of solvated and electrochemical systems. In addition, the modular nature of the code makes it easy to extend and interface with, facilitating the development of multi-scale toolkits that connect to ab initio calculations, e.g. photo-excited carrier dynamics combining electron and phonon calculations with electromagnetic simulations.

  7. JDFTx: Software for joint density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundararaman, Ravishankar; Letchworth-Weaver, Kendra; Schwarz, Kathleen A.

    Density-functional theory (DFT) has revolutionized computational prediction of atomic-scale properties from first principles in physics, chemistry and materials science. Continuing development of new methods is necessary for accurate predictions of new classes of materials and properties, and for connecting to nano- and mesoscale properties using coarse-grained theories. JDFTx is a fully-featured open-source electronic DFT software designed specifically to facilitate rapid development of new theories, models and algorithms. Using an algebraic formulation as an abstraction layer, compact C++11 code automatically performs well on diverse hardware including GPUs (Graphics Processing Units). This code hosts the development of joint density-functional theory (JDFT) that combines electronic DFT with classical DFT and continuum models of liquids for first-principles calculations of solvated and electrochemical systems. In addition, the modular nature of the code makes it easy to extend and interface with, facilitating the development of multi-scale toolkits that connect to ab initio calculations, e.g. photo-excited carrier dynamics combining electron and phonon calculations with electromagnetic simulations.

  8. Automated synthesis and composition of taskblocks for control of manufacturing systems.

    PubMed

    Holloway, L E; Guan, X; Sundaravadivelu, R; Ashley, J R

    2000-01-01

    Automated control synthesis methods for discrete-event systems promise to reduce the time required to develop, debug, and modify control software. Such methods must be able to translate high-level control goals into detailed sequences of actuation and sensing signals. In this paper, we present such a technique. It relies on analysis of a system model, defined as a set of interacting components, each represented as a form of condition system Petri net. Control logic modules, called taskblocks, are synthesized from these individual models. These then interact hierarchically and sequentially to drive the system through specified control goals. The resulting controller is automatically converted to executable control code. The paper concludes with a discussion of a set of software tools developed to demonstrate the techniques on a small manufacturing system.

  9. Automated reduction of sub-millimetre single-dish heterodyne data from the James Clerk Maxwell Telescope using ORAC-DR

    NASA Astrophysics Data System (ADS)

    Jenness, Tim; Currie, Malcolm J.; Tilanus, Remo P. J.; Cavanagh, Brad; Berry, David S.; Leech, Jamie; Rizzi, Luca

    2015-10-01

    With the advent of modern multidetector heterodyne instruments that can result in observations generating thousands of spectra per minute it is no longer feasible to reduce these data as individual spectra. We describe the automated data reduction procedure used to generate baselined data cubes from heterodyne data obtained at the James Clerk Maxwell Telescope (JCMT). The system can automatically detect baseline regions in spectra and automatically determine regridding parameters, all without input from a user. Additionally, it can detect and remove spectra suffering from transient interference effects or anomalous baselines. The pipeline is written as a set of recipes using the ORAC-DR pipeline environment with the algorithmic code using Starlink software packages and infrastructure. The algorithms presented here can be applied to other heterodyne array instruments and have been applied to data from historical JCMT heterodyne instrumentation.
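
    The automatic baseline detection can be illustrated for a single spectrum with the Python sketch below; the real pipeline works on whole cubes with Starlink tools, so the polynomial order and clipping threshold here are simply assumptions for illustration:

        import numpy as np

        def subtract_baseline(spectrum, order=1, clip_sigma=3.0, iterations=3):
            """Fit a low-order polynomial to line-free channels and subtract it."""
            x = np.arange(spectrum.size)
            mask = np.ones(spectrum.size, dtype=bool)
            for _ in range(iterations):
                coeffs = np.polyfit(x[mask], spectrum[mask], order)
                resid = spectrum - np.polyval(coeffs, x)
                sigma = resid[mask].std()
                mask = np.abs(resid) < clip_sigma * sigma   # drop channels with emission
            return spectrum - np.polyval(coeffs, x), mask

        if __name__ == "__main__":
            x = np.arange(512)
            spec = 0.01 * x + np.random.default_rng(1).normal(0, 0.2, 512)
            spec[250:260] += 5.0                            # synthetic emission line
            cleaned, baseline_mask = subtract_baseline(spec)
            print("baseline channels used:", int(baseline_mask.sum()))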

  10. An Experiment in Scientific Code Semantic Analysis

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.

    1998-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, distributed expert parsers. These semantic parsers are designed to recognize formulae in different disciplines, including physical and mathematical formulae and geometrical position in a numerical scheme. The parsers will automatically recognize and document some static, semantic concepts and locate some program semantic errors. Results are shown for a subroutine test case and a collection of combustion code routines. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.

  11. ogs6 - a new concept for porous-fractured media simulations

    NASA Astrophysics Data System (ADS)

    Naumov, Dmitri; Bilke, Lars; Fischer, Thomas; Rink, Karsten; Wang, Wenqing; Watanabe, Norihiro; Kolditz, Olaf

    2015-04-01

    OpenGeoSys (OGS) is a scientific open-source initiative for numerical simulation of thermo-hydro-mechanical/chemical (THMC) processes in porous and fractured media, continuously developed since the mid-eighties. The basic concept is to provide a flexible numerical framework for solving coupled multi-field problems. OGS mainly targets applications in environmental geoscience, e.g. in the fields of contaminant hydrology, water resources management, waste deposits, or geothermal energy systems, but it has recently also been successfully applied to new topics in energy storage. OGS actively participates in several international benchmarking initiatives, e.g. DECOVALEX (waste management), CO2BENCH (CO2 storage and sequestration), SeSBENCH (reactive transport processes) and HM-Intercomp (coupled hydrosystems). Despite the broad applicability of OGS in geo-, hydro- and energy-sciences, several shortcomings became obvious: the computational efficiency was limited, and the code structure had become too convoluted for further efficient development. OGS-5 was designed for object-oriented FEM applications. However, in many multi-field problems a certain flexibility of tailored numerical schemes is essential. Therefore, a new concept was designed to overcome the existing bottlenecks. The paradigms for ogs6 are: flexibility of numerical schemes (FEM, FVM, FDM), computational efficiency (petascale-ready), and developer- and user-friendliness. ogs6 has a module-oriented architecture based on thematic libraries (e.g. MeshLib, NumLib) on the large scale and uses an object-oriented approach for the small-scale interfaces. Use of a linear algebra library (Eigen3) for the mathematical operations, together with the ISO C++11 standard, increases the expressiveness of the code and makes it more developer-friendly. The new C++ standard also makes the template meta-programming code used for compile-time optimizations more compact. We have transitioned the main code development to the GitHub code hosting system (https://github.com/ufz/ogs). The very flexible revision control system Git, in combination with issue tracking, developer feedback and code review, improves the code quality and the development process in general. The continuous testing of the benchmarks, as established for OGS-5, is maintained. Additionally, unit testing, automatically triggered by any code change, is executed by two continuous integration frameworks (Jenkins CI, Travis CI) which build and test the code on different operating systems (Windows, Linux, Mac OS), in multiple configurations and with different compilers (GCC, Clang, Visual Studio). To improve the testing possibilities further, XML-based file input formats are introduced, which help with automatic validation of the user-contributed benchmarks. The first ogs6 prototype, version 6.0.1, has been implemented for solving generic elliptic problems. Next steps are extensions to transient, non-linear and coupled problems. Literature: [1] Kolditz O, Shao H, Wang W, Bauer S (eds) (2014): Thermo-Hydro-Mechanical-Chemical Processes in Fractured Porous Media: Modelling and Benchmarking - Closed Form Solutions. In: Terrestrial Environmental Sciences, Vol. 1, Springer, Heidelberg, ISBN 978-3-319-11893-2, 315pp.
http://www.springer.com/earth+sciences+and+geography/geology/book/978-3-319-11893-2 [2] Naumov D (2015): Computational Fluid Dynamics in Unconsolidated Sediments: Model Generation and Discrete Flow Simulations, PhD thesis, Technische Universität Dresden.

  12. Fission Meter Information Barrier Attribute Measurement System: Task 1 Report: Document existing Fission Meter neutron IB system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerr, P. L.

    An SNM attribute Information Barrier (IB) system was developed for a 2011 US/UK Exercise. The system was modified and extensively tested in a 2013-2014 US-UK Measurement Campaign. This work demonstrated rapid deployment of an IB system for potential treaty use. The system utilizes an Ortec Fission Meter neutron multiplicity counter and custom computer code. The system demonstrates a proof-of-principle automated Pu-240 mass determination with an information barrier. After a software start command is issued, the system automatically acquires and downloads data, performs an analysis, and displays the results. This system conveys the results of a Pu mass threshold measurement in a way that does not reveal sensitive information. In full IB mode, only red/green 'lights' are displayed in the software. In test mode, more detailed information is displayed. The code can also read in, analyze, and display results from previously acquired or simulated data. Because the equipment is commercial-off-the-shelf (COTS), the system demonstrates a low-cost, short-lead-time technology for treaty SNM attribute measurements. A deployed system will likely require integration of additional authentication and tamper-indicating technologies. This will be discussed for the project in this and future progress reports.
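
    The information-barrier display logic amounts to a threshold comparison whose detailed result is hidden in full IB mode; a minimal sketch follows, in which the threshold value, the direction of the comparison and the field names are all assumptions:

        def ib_display(pu240_mass_g, threshold_g, test_mode=False):
            # Assumed convention: GREEN means the attribute threshold is met.
            passed = pu240_mass_g >= threshold_g
            if test_mode:
                # Test mode may show more detail than a fielded IB system would.
                return {"result": "GREEN" if passed else "RED",
                        "mass_g": round(pu240_mass_g, 1)}
            return {"result": "GREEN" if passed else "RED"}   # no sensitive values shown

        print(ib_display(512.3, 500.0))                 # {'result': 'GREEN'}
        print(ib_display(512.3, 500.0, test_mode=True))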

  13. Simulation of Targets Feeding Pipe Rupture in Wendelstein 7-X Facility Using RELAP5 and COCOSYS Codes

    NASA Astrophysics Data System (ADS)

    Kaliatka, T.; Povilaitis, M.; Kaliatka, A.; Urbonavicius, E.

    2012-10-01

    The Wendelstein 7-X (W7-X) nuclear fusion device is a stellarator-type experimental device developed by the Max Planck Institute of Plasma Physics. Rupture of one of the 40 mm inner-diameter coolant pipes providing water for the divertor targets during the "baking" regime of facility operation is considered to be the most severe accident in terms of plasma vessel pressurization. The "baking" regime is the regime of facility operation during which the plasma vessel structures are heated to a temperature acceptable for plasma ignition in the vessel. This paper presents the model of the W7-X cooling system (pumps, valves, pipes, hydro-accumulators, and heat exchangers), developed using the state-of-the-art thermal-hydraulic code RELAP5 Mod3.3, and the model of the plasma vessel, developed by employing the lumped-parameter code COCOSYS. Using both models, a numerical simulation of processes in the W7-X cooling system and plasma vessel has been performed. The results of the simulation showed that an automatic valve closure time of 1 s is the most acceptable (no water hammer effect occurs) and that the selected area of the burst disk is sufficient to prevent overpressure in the plasma vessel.

  14. The Automated Instrumentation and Monitoring System (AIMS): Design and Architecture. 3.2

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Schmidt, Melisa; Schulbach, Cathy; Bailey, David (Technical Monitor)

    1997-01-01

    Whether a researcher is designing the 'next parallel programming paradigm', another 'scalable multiprocessor', or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of such information can help computer and software architects to capture, and therefore exploit, behavioral variations among/within various parallel programs to take advantage of specific hardware characteristics. A software tool-set that facilitates performance evaluation of parallel applications on multiprocessors has been put together at NASA Ames Research Center under the sponsorship of NASA's High Performance Computing and Communications Program over the past five years. The Automated Instrumentation and Monitoring System (AIMS) has three major software components: a source code instrumentor, which automatically inserts active event recorders into program source code before compilation; a run-time performance monitoring library, which collects performance data; and a visualization tool-set, which reconstructs program execution based on the data collected. Besides being used as a prototype for developing new techniques for instrumenting, monitoring and presenting parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Currently, the execution of FORTRAN and C programs on the Intel Paragon and PALM workstations can be automatically instrumented and monitored. Performance data thus collected can be displayed graphically on various workstations. The process of performance tuning with AIMS will be illustrated using various NAS Parallel Benchmarks. This report includes a description of the internal architecture of AIMS and a listing of the source code.

  15. Effect of normal aging and of Alzheimer's disease on episodic memory.

    PubMed

    Le Moal, S; Reymann, J M; Thomas, V; Cattenoz, C; Lieury, A; Allain, H

    1997-01-01

    Performances of 12 patients with Alzheimer's disease (AD), 15 healthy elderly subjects and 20 young healthy volunteers were compared on two episodic memory tests. The first, a learning test of semantically related words, enabled an assessment of the effect of semantic relationships on word learning by controlling the encoding and retrieval processes. The second, a dual coding test, assessed the automatic processes operating during the encoding of drawings. The results obtained demonstrated quantitative and qualitative differences between the populations. Manifestations of episodic memory deficit in AD patients were shown not only by lower performance scores than in elderly controls, but also by the lack of any effect of semantic cues and the production of a large number of extra-list intrusions. Automatic processes underlying dual coding appear to be spared in AD, although more time is needed to process information than in young or elderly subjects. These findings confirm previous data and emphasize the preservation of certain memory processes (dual coding) in AD, which could be used in future therapeutic approaches.

  16. Defining datasets and creating data dictionaries for quality improvement and research in chronic disease using routinely collected data: an ontology-driven approach.

    PubMed

    de Lusignan, Simon; Liaw, Siaw-Teng; Michalakidis, Georgios; Jones, Simon

    2011-01-01

    The burden of chronic disease is increasing, and research and quality improvement will be less effective if case finding strategies are suboptimal. To describe an ontology-driven approach to case finding in chronic disease and how this approach can be used to create a data dictionary and make the codes used in case finding transparent. A five-step process: (1) identifying a reference coding system or terminology; (2) using an ontology-driven approach to identify cases; (3) developing metadata that can be used to identify the extracted data; (4) mapping the extracted data to the reference terminology; and (5) creating the data dictionary. Hypertension is presented as an exemplar. A patient with hypertension can be represented by a range of codes including diagnostic, history and administrative. Metadata can link the coding system and data extraction queries to the correct data mapping and translation tool, which then maps it to the equivalent code in the reference terminology. The code extracted, the term, its domain and subdomain, and the name of the data extraction query can then be automatically grouped and published online as a readily searchable data dictionary. An exemplar online is: www.clininf.eu/qickd-data-dictionary.html Adopting an ontology-driven approach to case finding could improve the quality of disease registers and of research based on routine data. It would offer considerable advantages over using limited datasets to define cases. This approach should be considered by those involved in research and quality improvement projects which utilise routine data.
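
    Steps 3-5 (metadata, mapping to the reference terminology and publishing the dictionary) can be sketched as below; every code, query name and mapping shown is an invented placeholder, not real terminology content:

        local_to_reference = {
            "LOCAL:HTN1": "REF:HYPERTENSION-DX",    # diagnostic code
            "LOCAL:HTN2": "REF:HYPERTENSION-HX",    # history code
        }

        extracted = [
            {"query": "htn_case_finding", "code": "LOCAL:HTN1",
             "term": "Hypertension", "domain": "diagnosis"},
            {"query": "htn_case_finding", "code": "LOCAL:HTN2",
             "term": "History of hypertension", "domain": "history"},
        ]

        def build_dictionary(rows, mapping):
            out = []
            for r in rows:
                out.append({**r, "reference_code": mapping.get(r["code"], "UNMAPPED")})
            return sorted(out, key=lambda r: (r["domain"], r["term"]))

        for entry in build_dictionary(extracted, local_to_reference):
            print(entry["domain"], "|", entry["term"], "|", entry["reference_code"])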

  17. Automating annotation of information-giving for analysis of clinical conversation.

    PubMed

    Mayfield, Elijah; Laws, M Barton; Wilson, Ira B; Penstein Rosé, Carolyn

    2014-02-01

    Coding of clinical communication for fine-grained features such as speech acts has produced a substantial literature. However, annotation by humans is laborious and expensive, limiting application of these methods. We aimed to show that through machine learning, computers could code certain categories of speech acts with sufficient reliability to make useful distinctions among clinical encounters. The data were transcripts of 415 routine outpatient visits of HIV patients which had previously been coded for speech acts using the Generalized Medical Interaction Analysis System (GMIAS); 50 had also been coded for larger scale features using the Comprehensive Analysis of the Structure of Encounters System (CASES). We aggregated selected speech acts into information-giving and requesting, then trained the machine to automatically annotate using logistic regression classification. We evaluated reliability by per-speech act accuracy. We used multiple regression to predict patient reports of communication quality from post-visit surveys using the patient and provider information-giving to information-requesting ratio (briefly, information-giving ratio) and patient gender. Automated coding produces moderate reliability with human coding (accuracy 71.2%, κ=0.57), with high correlation between machine and human prediction of the information-giving ratio (r=0.96). The regression significantly predicted four of five patient-reported measures of communication quality (r=0.263-0.344). The information-giving ratio is a useful and intuitive measure for predicting patient perception of provider-patient communication quality. These predictions can be made with automated annotation, which is a practical option for studying large collections of clinical encounters with objectivity, consistency, and low cost, providing greater opportunity for training and reflection for care providers.
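
    The two quantitative steps, classifying utterances and computing the information-giving ratio, might look like the following Python sketch (it requires scikit-learn); the tiny training set is invented and does not reproduce the GMIAS/CASES coding:

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression

        train_utts = ["your viral load is undetectable",
                      "the dose is one tablet daily",
                      "what side effects have you had",
                      "did you miss any doses"]
        train_y = [1, 1, 0, 0]      # 1 = information-giving, 0 = information-requesting

        vec = CountVectorizer()
        clf = LogisticRegression().fit(vec.fit_transform(train_utts), train_y)

        def information_giving_ratio(utterances):
            pred = clf.predict(vec.transform(utterances))
            giving, requesting = (pred == 1).sum(), (pred == 0).sum()
            return giving / max(requesting, 1)

        print(information_giving_ratio(["the test came back normal",
                                        "how are you sleeping"]))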

  18. Management of natural resources through automatic cartographic inventory

    NASA Technical Reports Server (NTRS)

    Rey, P. A.; Gourinard, Y.; Cambou, F. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. Significant correspondence codes relating ERTS imagery to ground truth from vegetation and geology maps have been established. The use of color equidensity and color composite methods for selecting zones of equal densitometric value on ERTS imagery was perfected. Primary interest of temporal color composite is stressed. A chain of transfer operations from ERTS imagery to the automatic mapping of natural resources was developed.

  19. Automated UMLS-Based Comparison of Medical Forms

    PubMed Central

    Dugas, Martin; Fritz, Fleur; Krumm, Rainer; Breil, Bernhard

    2013-01-01

    Medical forms are very heterogeneous: on a European scale there are thousands of data items in several hundred different systems. To enable data exchange for clinical care and research purposes there is a need to develop interoperable documentation systems with harmonized forms for data capture. A prerequisite in this harmonization process is comparison of forms. So far – to our knowledge – an automated method for comparison of medical forms is not available. A form contains a list of data items with corresponding medical concepts. An automatic comparison needs data types, item names and, especially, items with unique concept codes from medical terminologies. The scope of the proposed method is a comparison of these items by comparing their concept codes (coded in UMLS). Each data item is represented by item name, concept code and value domain. Two items are called identical if item name, concept code and value domain are the same. Two items are called matching if only concept code and value domain are the same. Two items are called similar if their concept codes are the same, but the value domains are different. Based on these definitions, an open-source implementation for automated comparison of medical forms in ODM format with UMLS-based semantic annotations was developed. It is available as the package compareODM from http://cran.r-project.org. To evaluate this method, it was applied to a set of 7 real medical forms with 285 data items from a large public ODM repository with forms for different medical purposes (research, quality management, routine care). Comparison results were visualized with grid images and dendrograms. Automated comparison of semantically annotated medical forms is feasible. Dendrograms allow a view of clustered similar forms. The approach is scalable for a large set of real medical forms. PMID:23861827
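
    The comparison rules stated above reduce to a small decision function; the sketch below is an illustration in Python rather than the compareODM R package, and the concept code shown is a placeholder:

        def compare_items(a, b):
            if a["concept"] != b["concept"]:
                return "different"
            if a["value_domain"] != b["value_domain"]:
                return "similar"        # same concept, different value domain
            if a["name"] != b["name"]:
                return "matching"       # same concept and value domain, different name
            return "identical"

        item1 = {"name": "Body weight", "concept": "C0005910", "value_domain": "float,kg"}
        item2 = {"name": "Weight",      "concept": "C0005910", "value_domain": "float,kg"}
        item3 = {"name": "Weight",      "concept": "C0005910", "value_domain": "integer,lb"}

        print(compare_items(item1, item2))   # matching
        print(compare_items(item1, item3))   # similar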

  20. Automatic detection and decoding of honey bee waggle dances.

    PubMed

    Wario, Fernando; Wild, Benjamin; Rojas, Raúl; Landgraf, Tim

    2017-01-01

    The waggle dance is one of the most popular examples of animal communication. Forager bees direct their nestmates to profitable resources via a complex motor display. Essentially, the dance encodes the polar coordinates to the resource in the field. Unemployed foragers follow the dancer's movements and then search for the advertised spots in the field. Throughout the last decades, biologists have employed different techniques to measure key characteristics of the waggle dance and decode the information it conveys. Early techniques involved the use of protractors and stopwatches to measure the dance orientation and duration directly from the observation hive. Recent approaches employ digital video recordings and manual measurements on screen. However, manual approaches are very time-consuming. Most studies, therefore, regard only small numbers of animals in short periods of time. We have developed a system capable of automatically detecting, decoding and mapping communication dances in real-time. In this paper, we describe our recording setup, the image processing steps performed for dance detection and decoding and an algorithm to map dances to the field. The proposed system performs with a detection accuracy of 90.07%. The decoded waggle orientation has an average error of -2.92° (± 7.37°), well within the range of human error. To evaluate and exemplify the system's performance, a group of bees was trained to an artificial feeder, and all dances in the colony were automatically detected, decoded and mapped. The system presented here is the first of this kind made publicly available, including source code and hardware specifications. We hope this will foster quantitative analyses of the honey bee waggle dance.
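
    Decoding reduces to converting the waggle-run angle and duration into a bearing and distance from the hive; the sketch below assumes a simple linear duration-to-distance calibration, which in reality varies between colonies and studies:

        import math

        def dance_to_field(hive_xy, waggle_angle_deg, waggle_duration_s,
                           solar_azimuth_deg, metres_per_second=1000.0):
            distance = metres_per_second * waggle_duration_s   # assumed calibration
            bearing = math.radians(solar_azimuth_deg + waggle_angle_deg)
            x = hive_xy[0] + distance * math.sin(bearing)      # east
            y = hive_xy[1] + distance * math.cos(bearing)      # north
            return x, y

        # Example: 0.4 s waggle run, 30 degrees right of vertical, sun at azimuth 135 degrees.
        print(dance_to_field((0.0, 0.0), 30.0, 0.4, 135.0))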

  1. An Internal Data Non-hiding Type Real-time Kernel and its Application to the Mechatronics Controller

    NASA Astrophysics Data System (ADS)

    Yoshida, Toshio

    For the mechatronics equipment controller that controls robots and machine tools, high-speed motion control processing is essential. The software system of the controller, like that of other embedded systems, is composed of three software layers (real-time kernel, middleware, and application software) on dedicated hardware. The application layer at the top is composed of many tasks, and the application function of the system is realized by cooperation between these tasks. In this paper we propose an internal data non-hiding type real-time kernel in which customizing the task control is possible only by changing the program code on the task side, without any changes in the program code of the real-time kernel. It is necessary to reduce the overhead caused by real-time kernel task control to speed up the motion control of the mechatronics equipment, and for this, customizing the task control function is needed. We developed the internal data non-hiding type real-time kernel ZRK to evaluate this method and applied it to the control of a multi-system automatic lathe. The speed-up of task cooperation processing was confirmed by combining task control processing in the task-side program code using the internal data non-hiding type real-time kernel ZRK.

  2. Proceedings of the U.S. Army Symposium on Gun Dynamics (5th) Held in Rensselaerville, New York on 23-25 September 1987

    DTIC Science & Technology

    1987-09-01

    have shown that gun barrel heating, and hence thermal expansion, is both axially and circumferentially asymmetric. Circumferential, or cross-barrel...element code, which ended in the selection of ABAQUS. The code will perform static, dynamic, and thermal analysis on a broad range of structures...analysis may be performed by a user-supplied FORTRAN subroutine which is automatically linked to the code and supplements the standard ABAQUS

  3. Volume estimation using food specific shape templates in mobile image-based dietary assessment

    NASA Astrophysics Data System (ADS)

    Chae, Junghoon; Woo, Insoo; Kim, SungYe; Maciejewski, Ross; Zhu, Fengqing; Delp, Edward J.; Boushey, Carol J.; Ebert, David S.

    2011-03-01

    As obesity concerns mount, dietary assessment methods for prevention and intervention are being developed. These methods include recording, cataloging and analyzing daily dietary records to monitor energy and nutrient intakes. Given the ubiquity of mobile devices with built-in cameras, one possible means of improving dietary assessment is through photographing foods and inputting these images into a system that can determine the nutrient content of foods in the images. One of the critical issues in such an image-based dietary assessment tool is the accurate and consistent estimation of food portion sizes. The objective of our study is to automatically estimate food volumes through the use of food-specific shape templates. In our system, users capture food images using a mobile phone camera. Based on information (i.e., food name and code) determined through food segmentation and classification of the food images, our system chooses a particular template shape corresponding to each segmented food. Finally, our system reconstructs the three-dimensional properties of the food shape from a single image by extracting feature points in order to size the food shape template. By employing this template-based approach, our system automatically estimates food portion size, providing a consistent method for estimating food volume.

  4. Northwest range-plant symbols adapted to automatic data processing.

    Treesearch

    George A. Garrison; Jon M. Skovlin

    1960-01-01

    Many range technicians, agronomists, foresters, biologists, and botanists of various educational institutions and government agencies in the Northwest have been using a four-letter symbol list or code compiled 12 years ago from records of plants collected by the U.S. Forest Service in Oregon and Washington. This code has served well as a means of entering plant names...

  5. Finite Macro-Element Mesh Deformation in a Structured Multi-Block Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2005-01-01

    A mesh deformation scheme is developed for a structured multi-block Navier-Stokes code, consisting of two steps. The first step is a finite element solution of either user-defined or automatically generated macro-elements. Macro-elements are hexagonal finite elements created from a subset of points from the full mesh. When assembled, the finite element system spans the complete flow domain. Macro-element moduli vary according to the distance to the nearest surface, resulting in extremely stiff elements near a moving surface and very pliable elements away from boundaries. Solution of the finite element system for the imposed boundary deflections generally produces smoothly varying nodal deflections. The distance to the nearest surface has been found to critically influence the quality of the element deformation. The second step is a transfinite interpolation which distributes the macro-element nodal deflections to the remaining fluid mesh points. The scheme is demonstrated for several two-dimensional applications.
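
    The distance-dependent stiffness can be pictured with the small sketch below; the exponential decay law and its constants are assumptions chosen only to illustrate the idea of stiff near-surface elements and pliable far-field elements:

        import math

        def macro_element_modulus(distance_to_surface, e_surface=1.0e6,
                                  e_far=1.0, decay_length=0.5):
            # Stiff next to a moving surface, pliable far away.
            return e_far + (e_surface - e_far) * math.exp(-distance_to_surface / decay_length)

        for d in (0.0, 0.25, 1.0, 5.0):
            print("d = %.2f  E = %.3g" % (d, macro_element_modulus(d)))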

  6. A Model-Based Approach for Bridging Virtual and Physical Sensor Nodes in a Hybrid Simulation Framework

    PubMed Central

    Mozumdar, Mohammad; Song, Zhen Yu; Lavagno, Luciano; Sangiovanni-Vincentelli, Alberto L.

    2014-01-01

    The Model Based Design (MBD) approach is a popular trend to speed up application development of embedded systems, which uses high-level abstractions to capture functional requirements in an executable manner, and which automates implementation code generation. Wireless Sensor Networks (WSNs) are an emerging very promising application area for embedded systems. However, there is a lack of tools in this area, which would allow an application developer to model a WSN application by using high level abstractions, simulate it mapped to a multi-node scenario for functional analysis, and finally use the refined model to automatically generate code for different WSN platforms. Motivated by this idea, in this paper we present a hybrid simulation framework that not only follows the MBD approach for WSN application development, but also interconnects a simulated sub-network with a physical sub-network and then allows one to co-simulate them, which is also known as Hardware-In-the-Loop (HIL) simulation. PMID:24960083

  7. Kinetic modelling of the oxidation of large aliphatic hydrocarbons using an automatic mechanism generation.

    PubMed

    Muharam, Yuswan; Warnatz, Jürgen

    2007-08-21

    A mechanism generator code to automatically generate mechanisms for the oxidation of large hydrocarbons has been successfully modified and considerably expanded in this work. The modification involved (1) improvement of existing rules such as cyclic-ether reactions and aldehyde reactions, (2) inclusion of some additional rules in the code, such as ketone reactions, hydroperoxy cyclic-ether formations and additional reactions of alkenes, and (3) inclusion of small oxygenates, produced by the code but not yet included in the handwritten C(1)-C(4) sub-mechanism, in the handwritten C(1)-C(4) sub-mechanism. In order to evaluate mechanisms generated by the code, simulations of observed results in different experimental environments have been carried out. Experimentally derived and numerically predicted ignition delays of n-heptane-air and n-decane-air mixtures in high-pressure shock tubes over a wide range of temperatures, pressures and equivalence ratios agree very well. Concentration profiles of the main products and intermediates of n-heptane and n-decane oxidation in jet-stirred reactors over a wide range of temperatures and equivalence ratios are generally well reproduced. In addition, the ignition delay times of different normal alkanes were numerically studied.

  8. Automated and objective action coding of facial expressions in patients with acute facial palsy.

    PubMed

    Haase, Daniel; Minnigerode, Laura; Volk, Gerd Fabian; Denzler, Joachim; Guntinas-Lichius, Orlando

    2015-05-01

    The aim of the present observational single-center study was to objectively assess facial function in patients with idiopathic facial palsy with a new computer-based system that automatically recognizes action units (AUs) defined by the Facial Action Coding System (FACS). Still photographs using posed facial expressions of 28 healthy subjects and of 299 patients with acute facial palsy were automatically analyzed for bilateral AU expression profiles. All palsies were graded with the House-Brackmann (HB) grading system and with the Stennert Index (SI). Changes of the AU profiles during follow-up were analyzed for 77 patients. The initial HB grading of all patients was 3.3 ± 1.2. The SI at rest was 1.86 ± 1.3 and during motion 3.79 ± 4.3. Healthy subjects showed a significant AU asymmetry score of 21 ± 11 %, and there was no significant difference from patients (p = 0.128). At the initial examination of patients, the number of activated AUs was significantly lower on the paralyzed side than on the healthy side (p < 0.0001). The final examination for patients took place 4 ± 6 months post baseline. The number of activated AUs and the ratio between affected and healthy side increased significantly between baseline and final examination (both p < 0.0001). The asymmetry score decreased between baseline and final examination (p < 0.0001). The number of activated AUs on the healthy side did not change significantly (p = 0.779). Radical rethinking in facial grading is worthwhile: automated FACS delivers fast and objective global and regional data on facial motor function for use in clinical routine and clinical trials.

  9. Intelligent Data Visualization for Cross-Checking Spacecraft System Diagnosis

    NASA Technical Reports Server (NTRS)

    Ong, James C.; Remolina, Emilio; Breeden, David; Stroozas, Brett A.; Mohammed, John L.

    2012-01-01

    Any reasoning system is fallible, so crew members and flight controllers must be able to cross-check automated diagnoses of spacecraft or habitat problems by considering alternate diagnoses and analyzing related evidence. Cross-checking improves diagnostic accuracy because people can apply information processing heuristics, pattern recognition techniques, and reasoning methods that the automated diagnostic system may not possess. Over time, cross-checking also enables crew members to become comfortable with how the diagnostic reasoning system performs, so the system can earn the crew s trust. We developed intelligent data visualization software that helps users cross-check automated diagnoses of system faults more effectively. The user interface displays scrollable arrays of timelines and time-series graphs, which are tightly integrated with an interactive, color-coded system schematic to show important spatial-temporal data patterns. Signal processing and rule-based diagnostic reasoning automatically identify alternate hypotheses and data patterns that support or rebut the original and alternate diagnoses. A color-coded matrix display summarizes the supporting or rebutting evidence for each diagnosis, and a drill-down capability enables crew members to quickly view graphs and timelines of the underlying data. This system demonstrates that modest amounts of diagnostic reasoning, combined with interactive, information-dense data visualizations, can accelerate system diagnosis and cross-checking.

  10. Mechanisms of Habitual Approach

    PubMed Central

    Anderson, Brian A.; Folk, Charles L.; Garrison, Rebecca; Rogers, Leeland

    2016-01-01

    Reward learning has a powerful influence on the attention system, causing previously reward-associated stimuli to automatically capture attention. Difficulty ignoring stimuli associated with drug reward has been linked to addiction relapse, and the attention system of drug-dependent patients seems especially influenced by reward history. This and other evidence suggests that value-driven attention has consequences for behavior and decision-making, facilitating a bias to approach and consume the previously reward-associated stimulus even when doing so runs counter to current goals and priorities. Yet, a mechanism linking value-driven attention to behavioral responding and a general approach bias is lacking. Here we show that previously reward-associated stimuli escape inhibitory processing in a go/no-go task. Control experiments confirmed that this value-dependent failure of goal-directed inhibition could not be explained by search history or residual motivation, but depended specifically on the learned association between particular stimuli and reward outcome. When a previously high-value stimulus is encountered, the response codes generated by that stimulus are automatically afforded high priority, bypassing goal-directed cognitive processes involved in suppressing task-irrelevant behavior. PMID:27054684

  11. MOLGENIS/connect: a system for semi-automatic integration of heterogeneous phenotype data with applications in biobanks.

    PubMed

    Pang, Chao; van Enckevort, David; de Haan, Mark; Kelpin, Fleur; Jetten, Jonathan; Hendriksen, Dennis; de Boer, Tommy; Charbon, Bart; Winder, Erwin; van der Velde, K Joeri; Doiron, Dany; Fortier, Isabel; Hillege, Hans; Swertz, Morris A

    2016-07-15

    While the size and number of biobanks, patient registries and other data collections are increasing, biomedical researchers still often need to pool data for statistical power, a task that requires time-intensive retrospective integration. To address this challenge, we developed MOLGENIS/connect, a semi-automatic system to find, match and pool data from different sources. The system shortlists relevant source attributes from thousands of candidates using ontology-based query expansion to overcome variations in terminology. Then it generates algorithms that transform source attributes to a common target DataSchema. These include unit conversion, categorical value matching and complex conversion patterns (e.g. calculation of BMI). In comparison to human-experts, MOLGENIS/connect was able to auto-generate 27% of the algorithms perfectly, with an additional 46% needing only minor editing, representing a reduction in the human effort and expertise needed to pool data. Source code, binaries and documentation are available as open-source under LGPLv3 from http://github.com/molgenis/molgenis and www.molgenis.org/connect : m.a.swertz@rug.nl Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
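
    The generated transformation algorithms are of the kind sketched below (unit conversion, categorical value matching, and derived values such as BMI); the source and target attribute names here are invented, not taken from any real DataSchema:

        def lb_to_kg(weight_lb):
            return weight_lb * 0.45359237

        def map_smoking(source_value):
            # categorical value matching between source and target code lists
            return {"Y": "current", "N": "never", "EX": "former"}.get(source_value, "unknown")

        def bmi(weight_kg, height_cm):
            h = height_cm / 100.0
            return weight_kg / (h * h)

        source_record = {"wt_lb": 165.0, "ht_cm": 178.0, "smoker": "EX"}
        target_record = {
            "weight_kg": round(lb_to_kg(source_record["wt_lb"]), 1),
            "smoking_status": map_smoking(source_record["smoker"]),
            "bmi": round(bmi(lb_to_kg(source_record["wt_lb"]), source_record["ht_cm"]), 1),
        }
        print(target_record)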

  12. MOLGENIS/connect: a system for semi-automatic integration of heterogeneous phenotype data with applications in biobanks

    PubMed Central

    Pang, Chao; van Enckevort, David; de Haan, Mark; Kelpin, Fleur; Jetten, Jonathan; Hendriksen, Dennis; de Boer, Tommy; Charbon, Bart; Winder, Erwin; van der Velde, K. Joeri; Doiron, Dany; Fortier, Isabel; Hillege, Hans

    2016-01-01

    Motivation: While the size and number of biobanks, patient registries and other data collections are increasing, biomedical researchers still often need to pool data for statistical power, a task that requires time-intensive retrospective integration. Results: To address this challenge, we developed MOLGENIS/connect, a semi-automatic system to find, match and pool data from different sources. The system shortlists relevant source attributes from thousands of candidates using ontology-based query expansion to overcome variations in terminology. Then it generates algorithms that transform source attributes to a common target DataSchema. These include unit conversion, categorical value matching and complex conversion patterns (e.g. calculation of BMI). In comparison to human-experts, MOLGENIS/connect was able to auto-generate 27% of the algorithms perfectly, with an additional 46% needing only minor editing, representing a reduction in the human effort and expertise needed to pool data. Availability and Implementation: Source code, binaries and documentation are available as open-source under LGPLv3 from http://github.com/molgenis/molgenis and www.molgenis.org/connect. Contact: m.a.swertz@rug.nl Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153686

  13. Helping the police with their inquiries

    NASA Astrophysics Data System (ADS)

    Kitson, Anthony J.

    1995-09-01

    The UK Home Office has held a long-term interest in facial recognition. Work has concentrated upon providing the UK police with facilities to improve the use that can be made of the memory of victims and witnesses, rather than on automatically matching images. During the 1970s a psychological coding scheme and a search method were developed by Aberdeen University and the Home Office. These have been incorporated into systems for searching prisoner photographs, both experimentally and operationally. The coding scheme has also been incorporated in a facial likeness composition system. The Home Office is currently implementing a national criminal record system (Phoenix), and work has been conducted to define and demonstrate standards for image-enabled terminals for this application. Users have been consulted to establish suitable picture quality for the purpose, and a study of compression methods is in hand. Recently there has been increased use made by UK courts of expert testimony based upon the measurement of facial images. We are currently working with a group of practitioners to examine and improve the quality of such evidence and to develop a national standard.

  14. Software engineering and automatic continuous verification of scientific software

    NASA Astrophysics Data System (ADS)

    Piggott, M. D.; Hill, J.; Farrell, P. E.; Kramer, S. C.; Wilson, C. R.; Ham, D.; Gorman, G. J.; Bond, T.

    2011-12-01

    Software engineering of scientific code is challenging for a number of reasons, including pressure to publish and a lack of awareness of the pitfalls of software engineering by scientists. The Applied Modelling and Computation Group at Imperial College is a diverse group of researchers that employ best-practice software engineering methods whilst developing open source scientific software. Our main code is Fluidity - a multi-purpose computational fluid dynamics (CFD) code that can be used for a wide range of scientific applications from earth-scale mantle convection, through basin-scale ocean dynamics, to laboratory-scale classic CFD problems, and is coupled to a number of other codes including nuclear radiation and solid modelling. Our software development infrastructure consists of a number of free tools that could be employed by any group that develops scientific code, and it has been developed over a number of years with many lessons learnt. A single code base is developed by over 30 people, for which we use Bazaar for revision control, making good use of its strong branching and merging capabilities. Using features of Canonical's Launchpad platform, such as code review, blueprints for designing features and bug reporting, gives the group, partners and other Fluidity users an easy-to-use platform to collaborate and allows the induction of new members of the group into an environment where software development forms a central part of their work. The code repository is coupled to an automated test and verification system which performs over 20,000 tests, including unit tests, short regression tests, code verification and large parallel tests. Included in these tests are build tests on HPC systems, including local and UK National HPC services. The testing of code in this manner leads to a continuous verification process, not a discrete event performed once development has ceased. Much of the code verification is done against the "gold standard" of analytical solutions via the method of manufactured solutions. By developing and verifying code in tandem we avoid a number of pitfalls in scientific software development and advocate similar procedures for other scientific code applications.
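
    A typical "method of manufactured solutions" check of the kind run by such an automated verification suite is sketched below in Python; it is generic (a 1D Poisson problem), not Fluidity code, and the grid sizes are arbitrary:

        import numpy as np

        def poisson_error(n):
            """Solve -u'' = f on (0,1) with u(0)=u(1)=0 for a manufactured u(x)=sin(pi x)."""
            h = 1.0 / (n + 1)
            x = np.linspace(h, 1.0 - h, n)
            f = np.pi**2 * np.sin(np.pi * x)                  # source term derived from u
            A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
                 - np.diag(np.ones(n - 1), -1)) / h**2
            u = np.linalg.solve(A, f)
            return np.max(np.abs(u - np.sin(np.pi * x)))      # error against the manufactured solution

        e_coarse, e_fine = poisson_error(32), poisson_error(64)
        print("observed convergence order:", np.log2(e_coarse / e_fine))   # close to 2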

  15. Palm: Easing the Burden of Analytical Performance Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tallent, Nathan R.; Hoisie, Adolfy

    2014-06-01

    Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are `first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.

  16. Automatic computer procedure for generating exact and analytical kinetic energy operators based on the polyspherical approach: General formulation and removal of singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ndong, Mamadou; Lauvergnat, David; Nauts, André

    2013-11-28

    We present new techniques for an automatic computation of the kinetic energy operator in analytical form. These techniques are based on the use of the polyspherical approach and are extended to take into account Cartesian coordinates as well. An automatic procedure is developed where analytical expressions are obtained by symbolic calculations. This procedure is a full generalization of the one presented in Ndong et al. [J. Chem. Phys. 136, 034107 (2012)]. The correctness of the new implementation is analyzed by comparison with results obtained from the TNUM program. We give several illustrations that could be useful for users of the code. In particular, we discuss some cyclic compounds which are important in photochemistry. Among others, we show that choosing a well-adapted parameterization and decomposition into subsystems can allow one to avoid singularities in the kinetic energy operator. We also discuss a relation between polyspherical and Z-matrix coordinates: this comparison could be helpful for building an interface between the new code and a quantum chemistry package.

  17. Automatic Data Distribution for CFD Applications on Structured Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Yan, Jerry

    2000-01-01

    Data distribution is an important step in the implementation of any parallel algorithm. The data distribution determines data traffic and utilization of the interconnection network, and affects the overall code efficiency. In recent years a number of data distribution methods have been developed and used in real programs for improving data traffic. We use some of these methods for translating data dependence and affinity relations into data distribution directives. We describe an automatic data alignment and placement tool (ADAPT) which implements these methods and show its results for some CFD codes (NPB and ARC3D). The algorithms for program analysis and derivation of data distribution implemented in ADAPT are efficient three-pass algorithms. Most algorithms have linear complexity, with the exception of some graph algorithms having complexity O(n(sup 4)) in the worst case.

  18. Association rule mining on grid monitoring data to detect error sources

    NASA Astrophysics Data System (ADS)

    Maier, Gerhild; Schiffers, Michael; Kranzlmueller, Dieter; Gaidioz, Benjamin

    2010-04-01

    Error handling is a crucial task in an infrastructure as complex as a grid. There are several monitoring tools in place, which report failing grid jobs including exit codes. However, the exit codes do not always denote the actual fault which caused the job failure. Human time and knowledge are required to manually trace an error back to its underlying fault. We perform association rule mining on grid job monitoring data to automatically retrieve knowledge about the grid components' behavior by taking dependencies between grid job characteristics into account. In this way, problematic grid components are located automatically and this information - expressed as association rules - is visualized in a web interface. This work reduces the time needed for fault recovery and improves the grid's reliability.
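    A toy sketch of the underlying idea, mining rules of the form "job attribute => failure" from monitoring records (the job records, attribute names, and thresholds below are hypothetical, and real association rule miners such as Apriori also consider multi-attribute antecedents):

```python
# Toy "attribute => failure" association rule mining over grid job monitoring
# records.  The records, sites, and thresholds are hypothetical; real miners
# (e.g. Apriori) also build multi-attribute antecedents.
from collections import Counter

jobs = [  # hypothetical monitoring records
    {"site": "CE_A", "se": "SE_1", "failed": True},
    {"site": "CE_A", "se": "SE_2", "failed": True},
    {"site": "CE_B", "se": "SE_1", "failed": False},
    {"site": "CE_A", "se": "SE_1", "failed": True},
    {"site": "CE_B", "se": "SE_2", "failed": False},
]
min_support, min_confidence = 0.2, 0.8

n = len(jobs)
item_count, fail_count = Counter(), Counter()
for job in jobs:
    for item in (("site", job["site"]), ("se", job["se"])):
        item_count[item] += 1
        if job["failed"]:
            fail_count[item] += 1

for item, cnt in item_count.items():
    support = fail_count[item] / n        # fraction of all jobs covered by the rule
    confidence = fail_count[item] / cnt   # P(failure | attribute)
    if support >= min_support and confidence >= min_confidence:
        print(f"{item} => failure (support={support:.2f}, confidence={confidence:.2f})")
```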

  19. Automatic Data Distribution for CFD Applications on Structured Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Yan, Jerry

    1999-01-01

    Data distribution is an important step in the implementation of any parallel algorithm. The data distribution determines data traffic and utilization of the interconnection network, and affects the overall code efficiency. In recent years a number of data distribution methods have been developed and used in real programs for improving data traffic. We use some of these methods for translating data dependence and affinity relations into data distribution directives. We describe an automatic data alignment and placement tool (ADAPT) which implements these methods and show its results for some CFD codes (NPB and ARC3D). The algorithms for program analysis and derivation of data distribution implemented in ADAPT are efficient three-pass algorithms. Most algorithms have linear complexity, with the exception of some graph algorithms having complexity O(n(sup 4)) in the worst case.

  20. An Experiment in Scientific Program Understanding

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.; Owen, Karl (Technical Monitor)

    2000-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. Results are shown for three intensively studied codes and seven blind test cases; all test cases are state of the art scientific codes. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.

  1. Modeling Guidelines for Code Generation in the Railway Signaling Context

    NASA Technical Reports Server (NTRS)

    Ferrari, Alessio; Bacherini, Stefano; Fantechi, Alessandro; Zingoni, Niccolo

    2009-01-01

    Modeling guidelines constitute one of the fundamental cornerstones of Model-Based Development. Their relevance is essential when dealing with code generation in the safety-critical domain. This article presents the experience of a railway signaling systems manufacturer on this issue. The introduction of Model-Based Development (MBD) and code generation in the industrial safety-critical sector created a crucial paradigm shift in the development process of dependable systems. While traditional software development focuses on the code, with MBD practices the focus shifts to model abstractions. The change has fundamental implications for safety-critical systems, which still need to guarantee a high degree of confidence also at code level. Usage of the Simulink/Stateflow platform for modeling, which is a de facto standard in control software development, does not by itself ensure production of high-quality, dependable code. This issue has been addressed by companies through the definition of modeling rules imposing restrictions on the usage of design tool components, in order to enable production of qualified code. The MAAB Control Algorithm Modeling Guidelines (MathWorks Automotive Advisory Board) [3] are a well-established set of publicly available rules for modeling with Simulink/Stateflow. This set of recommendations has been developed by a group of OEMs and suppliers of the automotive sector with the objective of enforcing and easing the usage of the MathWorks tools within the automotive industry. The guidelines were published in 2001 and later revised in 2007 in order to integrate additional rules developed by the Japanese division of MAAB [5]. The scope of the current edition of the guidelines ranges from model maintainability and readability to code generation issues. The rules are conceived as a reference baseline and therefore need to be tailored to comply with the characteristics of each industrial context. Customization of these recommendations has been performed for the automotive control systems domain in order to enable code generation [7]. The MAAB guidelines have also been found profitable in the aerospace/avionics sector [1] and they have been adopted by the MathWorks Aerospace Leadership Council (MALC). General Electric Transportation Systems (GETS) is a well-known railway signaling systems manufacturer leading in Automatic Train Protection (ATP) systems technology. As part of an effort to adopt formal methods within its own development process, GETS decided to introduce system modeling by means of the MathWorks tools [2], and in 2008 chose to move to code generation. This article reports the experience of GETS in developing its own modeling standard by customizing the MAAB rules for the railway signaling domain, and shows the result of this experience with a successful product development story.

  2. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    PubMed

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JNDs were modeled by adding white Gaussian noise or specific signal patterns to the original images, which is not appropriate for finding JND thresholds under distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameters for JNQD from extracted handcrafted features. The other JNQD model, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, this is the first approach to automatically adjust JND levels according to quantization step sizes when preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.
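    A rough sketch of the linear-regression flavour of this idea: fit a linear model that maps handcrafted features plus the quantization step size to a JND level, so the threshold scales automatically with the step size (features, coefficients, and training data below are synthetic placeholders, not the paper's):

```python
# Toy linear-regression JNQD: predict a JND level from handcrafted features plus
# the quantization step, so the threshold scales automatically with the step.
# Features, coefficients, and "ground truth" levels are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 200
features = rng.uniform(0, 1, size=(n, 2))      # e.g. local luminance, texture energy
qstep = rng.uniform(8, 64, size=(n, 1))        # encoder quantization step size
X = np.hstack([features, qstep, np.ones((n, 1))])
y = (2.0 * features[:, 0] + 0.5 * features[:, 1]
     + 0.1 * qstep[:, 0] + rng.normal(0, 0.05, n))  # synthetic target JND levels

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit of the linear model

def jnqd_level(luminance, texture, q):
    """Predicted JND level for a block at quantization step q."""
    return coef @ np.array([luminance, texture, q, 1.0])

# A larger quantization step yields a larger predicted JND level.
print(jnqd_level(0.5, 0.3, 22), jnqd_level(0.5, 0.3, 44))
```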

  3. Apply network coding for H.264/SVC multicasting

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Kuo, C.-C. Jay

    2008-08-01

    In a packet erasure network environment, video streaming benefits from error control in two ways to achieve graceful degradation. The first approach is application-level (or link-level) forward error correction (FEC) to provide erasure protection. The second error control approach is error concealment at the decoder end to compensate for lost packets. A large amount of research has been done in the above two areas. More recently, network coding (NC) techniques have been proposed for efficient data multicast over networks. It was shown in our previous work that multicast video streaming benefits from NC in terms of throughput improvement. In this work, an algebraic model is given to analyze the performance. By exploiting the linear combination of video packets along nodes in a network and the SVC video format, the system achieves path diversity automatically and enables efficient video delivery to heterogeneous receivers over packet erasure channels. The application of network coding can protect video packets against the erasure network environment. However, the rank deficiency problem of random linear network coding makes error concealment inefficient. It is shown by computer simulation that the proposed NC video multicast scheme enables heterogeneous receivers to receive video according to their capacity constraints, but careful design is needed to improve video transmission performance when applying network coding.
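    A minimal sketch of random linear network coding and the rank-deficiency issue mentioned above: the receiver can decode only once the collected coding vectors form a full-rank matrix (GF(2) and the packet sizes are simplifying assumptions; practical schemes usually work over GF(2^8)):

```python
# Random linear network coding over GF(2): the receiver can decode only when
# the collected coding vectors have full rank; rank < k is exactly the rank
# deficiency failure mode.
import numpy as np

rng = np.random.default_rng(1)
k, packet_len = 4, 8
packets = rng.integers(0, 2, size=(k, packet_len))   # original packets (bit vectors)

def receive(n_coded):
    """Simulate receiving n_coded random linear combinations of the packets."""
    coeffs = rng.integers(0, 2, size=(n_coded, k))    # random coding vectors
    coded = coeffs @ packets % 2
    return coeffs, coded

def gf2_rank(m):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    m = m.copy() % 2
    rank = 0
    for col in range(m.shape[1]):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]
        rank += 1
    return rank

coeffs, coded = receive(6)                            # a few redundant packets arrive
print("decodable:", gf2_rank(coeffs) == k)
```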

  4. Implementation of Rosenbrock methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shampine, L. F.

    1980-11-01

    Rosenbrock formulas have shown promise in research codes for the solution of initial-value problems for stiff systems of ordinary differential equations (ODEs). To help assess their practical value, the author wrote an item of mathematical software based on such a formula. This required a variety of algorithmic and software developments. Those of general interest are reported in this paper. Among them is a way to select automatically, at every step, an explicit Runge-Kutta formula or a Rosenbrock formula according to the stiffness of the problem. Solving linear systems is important to methods for stiff ODEs, and is rather special for Rosenbrock methods. A cheap, effective estimate of the condition of the linear systems is derived. Some numerical results are presented to illustrate the developments.
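    For illustration, a generic second-order Rosenbrock scheme (the textbook ROS2 formula, not the software described here) applied to a stiff linear test problem; the linearly implicit stages let it take steps far beyond the stability limit of an explicit Runge-Kutta formula:

```python
# Generic second-order Rosenbrock scheme (the classical ROS2 formula with
# gamma = 1 + 1/sqrt(2)), applied to a stiff linear test system.  This is a
# textbook illustration, not the software described in the abstract.
import numpy as np

def ros2_step(f, jac, t, y, h, gamma=1.0 + 1.0 / np.sqrt(2.0)):
    """One linearly implicit ROS2 step for y' = f(t, y); jac returns df/dy."""
    W = np.eye(len(y)) - gamma * h * jac(t, y)
    k1 = np.linalg.solve(W, h * f(t, y))
    k2 = np.linalg.solve(W, h * f(t + h, y + k1) - 2.0 * k1)
    return y + 1.5 * k1 + 0.5 * k2

A = np.array([[-1001.0, 999.0], [999.0, -1001.0]])    # eigenvalues -2 and -2000 (stiff)
f = lambda t, y: A @ y
jac = lambda t, y: A

t, y, h = 0.0, np.array([1.0, 0.0]), 0.05             # h far beyond the explicit limit
while t < 1.0 - 1e-12:
    y = ros2_step(f, jac, t, y, h)
    t += h

exact = (0.5 * np.exp(-2.0 * t) * np.array([1.0, 1.0])
         + 0.5 * np.exp(-2000.0 * t) * np.array([1.0, -1.0]))
print("ROS2:", y, " exact:", exact)
```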

  5. Higher-order automatic differentiation of mathematical functions

    NASA Astrophysics Data System (ADS)

    Charpentier, Isabelle; Dal Cappello, Claude

    2015-04-01

    Functions of mathematical physics such as the Bessel functions, the Chebyshev polynomials, the Gauss hypergeometric function and so forth have practical applications in many scientific domains. On the one hand, differentiation formulas provided in reference books apply to real or complex variables and do not account for the chain rule. On the other hand, automatic differentiation, which is based on the chain rule, has become a natural tool in numerical modeling. Nevertheless, automatic differentiation tools do not handle many of these mathematical functions. This paper describes formulas and provides codes for the higher-order automatic differentiation of mathematical functions. The first method is based on Faà di Bruno's formula, which generalizes the chain rule. The second one makes use of the second-order differential equations these functions satisfy. Both methods are exemplified with the aforementioned functions.
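    The third-order instance of Faà di Bruno's formula can be checked symbolically in a few lines (a generic illustration with f = exp and g = sin, not the paper's code):

```python
# Check of the third-order Faa di Bruno formula,
#   (f o g)''' = f'''(g) g'^3 + 3 f''(g) g' g'' + f'(g) g''',
# for the concrete pair f = exp, g = sin.
import sympy as sp

x = sp.symbols('x')
f_expr = sp.exp(x)                                  # outer function f(u) = exp(u)
g = sp.sin(x)                                       # inner function

fd = lambda n: sp.diff(f_expr, x, n).subs(x, g)     # f^(n) evaluated at g(x)
gd = lambda n: sp.diff(g, x, n)                     # g^(n)(x)

direct = sp.diff(f_expr.subs(x, g), x, 3)           # d^3/dx^3 f(g(x)) via the CAS
faa_di_bruno = fd(3) * gd(1)**3 + 3 * fd(2) * gd(1) * gd(2) + fd(1) * gd(3)

print(sp.simplify(direct - faa_di_bruno))           # prints 0
```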

  6. Automatic Data Traffic Control on DSM Architecture

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Jin, Hao-Qiang; Yan, Jerry; Kwak, Dochan (Technical Monitor)

    2000-01-01

    We study data traffic on distributed shared memory machines and conclude that data placement and grouping improve the performance of scientific codes. We present several methods that users can employ to improve data traffic in their code. We report on the implementation of a tool which detects the code fragments causing data congestion and advises the user on improving data routing in these fragments. The capabilities of the tool include deduction of data alignment and affinity from the source code; detection of code constructs having abnormally high cache or TLB misses; and generation of data placement constructs. We demonstrate the capabilities of the tool in experiments with the NAS Parallel Benchmarks and with a simple computational fluid dynamics application, ARC3D.

  7. Short non-coding RNAs as bacteria species identifiers detected by surface plasmon resonance enhanced common path interferometry

    NASA Astrophysics Data System (ADS)

    Greef, Charles; Petropavlovskikh, Viatcheslav; Nilsen, Oyvind; Khattatov, Boris; Plam, Mikhail; Gardner, Patrick; Hall, John

    2008-04-01

    Small non-coding RNA sequences have recently been discovered as unique identifiers of certain bacterial species, raising the possibility that they can be used as highly specific Biowarfare Agent detection markers in automated field deployable integrated detection systems. Because they are present in high abundance they could allow genomic based bacterial species identification without the need for pre-assay amplification. Further, a direct detection method would obviate the need for chemical labeling, enabling a rapid, efficient, high sensitivity mechanism for bacterial detection. Surface Plasmon Resonance enhanced Common Path Interferometry (SPR-CPI) is a potentially market disruptive, high sensitivity dual technology that allows real-time direct multiplex measurement of biomolecule interactions, including small molecules, nucleic acids, proteins, and microbes. SPR-CPI measures differences in phase shift of reflected S and P polarized light under Total Internal Reflection (TIR) conditions at a surface, caused by changes in refractive index induced by biomolecular interactions within the evanescent field at the TIR interface. The measurement is performed on a microarray of discrete 2-dimensional areas functionalized with biomolecule capture reagents, allowing simultaneous measurement of up to 100 separate analytes. The optical beam encompasses the entire microarray, allowing a solid state detector system with no scanning requirement. Output consists of simultaneous voltage measurements proportional to the phase differences resulting from the refractive index changes from each microarray feature, and is automatically processed and displayed graphically or delivered to a decision making algorithm, enabling a fully automatic detection system capable of rapid detection and quantification of small nucleic acids at extremely sensitive levels. Proof-of-concept experiments on model systems and cell culture samples have demonstrated utility of the system, and efforts are in progress for full development and deployment of the device. The technology has broad applicability as a universal detection platform for BWA detection, medical diagnostics, and drug discovery research, and represents a new class of instrumentation as a rapid, high sensitivity, label-free methodology.

  8. Automatic Between-Pulse Analysis of DIII-D Experimental Data Performed Remotely on a Supercomputer at Argonne Leadership Computing Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostuk, M.; Uram, T. D.; Evans, T.

    For the first time, an automatically triggered, between-pulse fusion science analysis code was run on-demand at a remotely located supercomputer at the Argonne Leadership Computing Facility (ALCF, Lemont, IL) in support of in-process experiments being performed at DIII-D (San Diego, CA). This represents a new paradigm for combining geographically distant experimental and high performance computing (HPC) facilities to provide enhanced data analysis that is quickly available to researchers. Enhanced analysis improves the understanding of the current pulse, translating into a more efficient use of experimental resources and higher quality of the resultant science. The analysis code used here, called SURFMN, calculates the magnetic structure of the plasma using a Fourier transform. Increasing the number of Fourier components provides a more accurate determination of the stochastic boundary layer near the plasma edge by better resolving magnetic islands, but requires 26 minutes to complete using local DIII-D resources, putting it well outside the useful time range for between-pulse analysis. These islands relate to confinement and edge localized mode (ELM) suppression, and may be controlled by adjusting coil currents for the next pulse. Argonne has ensured on-demand execution of SURFMN by providing a reserved queue, a specialized service that launches the code after receiving an automatic trigger, and network access from the worker nodes for data transfer. Runs are executed on 252 cores of ALCF's Cooley cluster and the data is available locally at DIII-D within three minutes of triggering. The original SURFMN design limits additional improvements with more cores; however, our work shows a path forward where codes that benefit from thousands of processors can run between pulses.

  9. Automatic Between-Pulse Analysis of DIII-D Experimental Data Performed Remotely on a Supercomputer at Argonne Leadership Computing Facility

    DOE PAGES

    Kostuk, M.; Uram, T. D.; Evans, T.; ...

    2018-02-01

    For the first time, an automatically triggered, between-pulse fusion science analysis code was run on-demand at a remotely located supercomputer at the Argonne Leadership Computing Facility (ALCF, Lemont, IL) in support of in-process experiments being performed at DIII-D (San Diego, CA). This represents a new paradigm for combining geographically distant experimental and high performance computing (HPC) facilities to provide enhanced data analysis that is quickly available to researchers. Enhanced analysis improves the understanding of the current pulse, translating into a more efficient use of experimental resources and higher quality of the resultant science. The analysis code used here, called SURFMN, calculates the magnetic structure of the plasma using a Fourier transform. Increasing the number of Fourier components provides a more accurate determination of the stochastic boundary layer near the plasma edge by better resolving magnetic islands, but requires 26 minutes to complete using local DIII-D resources, putting it well outside the useful time range for between-pulse analysis. These islands relate to confinement and edge localized mode (ELM) suppression, and may be controlled by adjusting coil currents for the next pulse. Argonne has ensured on-demand execution of SURFMN by providing a reserved queue, a specialized service that launches the code after receiving an automatic trigger, and network access from the worker nodes for data transfer. Runs are executed on 252 cores of ALCF's Cooley cluster and the data is available locally at DIII-D within three minutes of triggering. The original SURFMN design limits additional improvements with more cores; however, our work shows a path forward where codes that benefit from thousands of processors can run between pulses.

  10. Crustal Fracturing Field and Presence of Fluid as Revealed by Seismic Anisotropy

    NASA Astrophysics Data System (ADS)

    Pastori, M.; Piccinini, D.; de Gori, P.; Margheriti, L.; Barchi, M. R.; di Bucci, D.

    2010-12-01

    In the last three years we have developed, tested and improved an automatic analysis code (Anisomat+) to calculate the shear wave splitting parameters: fast polarization direction (φ) and delay time (δt). The code is a set of MATLAB scripts able to retrieve crustal anisotropy parameters from three-component seismic recordings of local earthquakes using the horizontal-component cross-correlation method. The analysis procedure consists of choosing an appropriate frequency range that best highlights the signal containing the shear waves, and a length of the time window on the seismogram centered on the S arrival (the temporal window contains at least one cycle of the S wave). The code was compared to two other automatic analysis codes (SPY and SHEBA) and tested on three Italian areas (Val d'Agri, the Tiber Valley and the area surrounding L'Aquila) along the Apennine mountains. For each region we used the anisotropic parameters resulting from the automatic computation as a tool to determine the fracture field geometries connected with the active stress field. We compare the temporal variations of the anisotropic parameters to the evolution of the vp/vs ratio for the same seismicity. The anisotropic fast directions are used to define the active stress field (EDA model), finding a general consistency between fast directions and main stress indicators (focal mechanisms and borehole break-outs). The magnitude of the delay time is used to define the fracture field intensity, finding higher values in the volume where micro-seismicity occurs. Furthermore, we studied temporal variations of the anisotropic parameters and the vp/vs ratio in order to assess whether fluids play an important role in the earthquake generation process. The close association of anisotropic and vp/vs parameter variations with seismicity rate changes supports the hypothesis that the background seismicity is influenced by the fluctuation of pore fluid pressure in the rocks.
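    A compact sketch of the horizontal-component cross-correlation measurement on a synthetic two-component record (the waveform, grid spacing, and window handling are simplified assumptions, not Anisomat+ itself):

```python
# Grid search for shear wave splitting parameters by horizontal-component
# cross-correlation on a synthetic record (simplified illustration).
import numpy as np

dt_sample = 0.01                                   # sampling interval [s]
t = np.arange(0.0, 2.0, dt_sample)
wavelet = np.exp(-((t - 0.5) / 0.05)**2)           # synthetic S pulse

true_phi, true_delay = 30.0, 0.12                  # fast azimuth [deg], delay time [s]
shift = round(true_delay / dt_sample)
fast, slow = wavelet, np.roll(wavelet, shift)
a0 = np.radians(true_phi)
north = fast * np.cos(a0) - slow * np.sin(a0)      # project fast/slow back to N/E
east = fast * np.sin(a0) + slow * np.cos(a0)

best = (-1.0, None, None)
for phi in range(0, 180, 2):                       # trial fast directions
    a = np.radians(phi)
    r = north * np.cos(a) + east * np.sin(a)       # candidate fast component
    q = -north * np.sin(a) + east * np.cos(a)      # candidate slow component
    for lag in range(1, 40):                       # trial delay times (in samples)
        c = abs(np.corrcoef(r[:-lag], q[lag:])[0, 1])
        if c > best[0]:
            best = (c, phi, lag * dt_sample)

print("estimated fast direction [deg], delay time [s]:", best[1], best[2])
```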

  11. Compiled MPI: Cost-Effective Exascale Applications Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G; Quinlan, D; Lumsdaine, A

    2012-04-10

    The complexity of petascale and exascale machines makes it increasingly difficult to develop applications that can take advantage of them. Future systems are expected to feature billion-way parallelism, complex heterogeneous compute nodes and poor availability of memory (Peter Kogge, 2008). This new challenge for application development is motivating a significant amount of research and development on new programming models and runtime systems designed to simplify large-scale application development. Unfortunately, DoE has significant multi-decadal investment in a large family of mission-critical scientific applications. Scaling these applications to exascale machines will require a significant investment that will dwarf the costs of hardware procurement. A key reason for the difficulty in transitioning today's applications to exascale hardware is their reliance on explicit programming techniques, such as the Message Passing Interface (MPI) programming model to enable parallelism. MPI provides a portable and high performance message-passing system that enables scalable performance on a wide variety of platforms. However, it also forces developers to lock the details of parallelization together with application logic, making it very difficult to adapt the application to significant changes in the underlying system. Further, MPI's explicit interface makes it difficult to separate the application's synchronization and communication structure, reducing the amount of support that can be provided by compiler and run-time tools. This is in contrast to the recent research on more implicit parallel programming models such as Chapel, OpenMP and OpenCL, which promise to provide significantly more flexibility at the cost of reimplementing significant portions of the application. We are developing CoMPI, a novel compiler-driven approach to enable existing MPI applications to scale to exascale systems with minimal modifications that can be made incrementally over the application's lifetime. It includes: (1) New set of source code annotations, inserted either manually or automatically, that will clarify the application's use of MPI to the compiler infrastructure, enabling greater accuracy where needed; (2) A compiler transformation framework that leverages these annotations to transform the original MPI source code to improve its performance and scalability; (3) Novel MPI runtime implementation techniques that will provide a rich set of functionality extensions to be used by applications that have been transformed by our compiler; and (4) A novel compiler analysis that leverages simple user annotations to automatically extract the application's communication structure and synthesize most complex code annotations.

  12. Measurement of myocardial perfusion and infarction size using computer-aided diagnosis system for myocardial contrast echocardiography.

    PubMed

    Du, Guo-Qing; Xue, Jing-Yi; Guo, Yanhui; Chen, Shuang; Du, Pei; Wu, Yan; Wang, Yu-Hang; Zong, Li-Qiu; Tian, Jia-Wei

    2015-09-01

    Proper evaluation of myocardial microvascular perfusion and assessment of infarct size is critical for clinicians. We have developed a novel computer-aided diagnosis (CAD) approach for myocardial contrast echocardiography (MCE) to measure myocardial perfusion and infarct size. Rabbits underwent 15 min of coronary occlusion followed by reperfusion (group I, n = 15) or 60 min of coronary occlusion followed by reperfusion (group II, n = 15). Myocardial contrast echocardiography was performed before and 7 d after ischemia/reperfusion, and images were analyzed with the CAD system on the basis of eliminating particle swarm optimization clustering analysis. The myocardium was quickly and accurately detected using contrast-enhanced images, myocardial perfusion was quantitatively calibrated and a color-coded map calibrated by contrast intensity and automatically produced by the CAD system was used to outline the infarction region. Calibrated contrast intensity was significantly lower in infarct regions than in non-infarct regions, allowing differentiation of abnormal and normal myocardial perfusion. Receiver operating characteristic curve analysis documented that -54-pixel contrast intensity was an optimal cutoff point for the identification of infarcted myocardium with a sensitivity of 95.45% and specificity of 87.50%. Infarct sizes obtained using myocardial perfusion defect analysis of original contrast images and the contrast intensity-based color-coded map in computerized images were compared with infarct sizes measured using triphenyltetrazolium chloride staining. Use of the proposed CAD approach provided observers with more information. The infarct sizes obtained with myocardial perfusion defect analysis, the contrast intensity-based color-coded map and triphenyltetrazolium chloride staining were 23.72 ± 8.41%, 21.77 ± 7.8% and 18.21 ± 4.40% (% left ventricle) respectively (p > 0.05), indicating that computerized myocardial contrast echocardiography can accurately measure infarct size. On the basis of the results, we believe the CAD method can quickly and automatically measure myocardial perfusion and infarct size and will, it is hoped, be very helpful in clinical therapeutics. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
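    The reported cutoff selection can be illustrated generically with an ROC curve and Youden's J statistic (synthetic intensity values below; the study's actual data and threshold are not reproduced):

```python
# Choosing an intensity cutoff from an ROC curve via Youden's J statistic,
# analogous in spirit to the -54-pixel threshold reported above.  The intensity
# values and labels are synthetic, not the study's data.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
infarct = rng.normal(-70, 15, 60)        # calibrated contrast intensity, infarct regions
normal = rng.normal(-35, 15, 60)         # non-infarct regions
intensity = np.concatenate([infarct, normal])
label = np.concatenate([np.ones(60), np.zeros(60)])   # 1 = infarcted

fpr, tpr, thr = roc_curve(label, -intensity)   # lower intensity => infarct, so score = -intensity
best = np.argmax(tpr - fpr)                    # Youden's J = sensitivity + specificity - 1
print(f"cutoff: intensity < {-thr[best]:.1f}, "
      f"sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}")
```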

  13. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    NASA Technical Reports Server (NTRS)

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

    With the introduction of new parallel architectures like the Cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single process) C or C++, and an order of magnitude less costly than development of comparable parallel code. Moreover, SequenceL not only automatically parallelizes the code, but since it is based on CSP-NT, it is provably race free, thus eliminating the largest quality challenge the parallelized software developer faces.

  14. A graph-based approach for designing extensible pipelines

    PubMed Central

    2012-01-01

    Background In bioinformatics, it is important to build extensible and low-maintenance systems that are able to deal with the new tools and data formats that are constantly being developed. The traditional and simplest implementation of pipelines involves hardcoding the execution steps into programs or scripts. This approach can lead to problems when a pipeline is expanding because the incorporation of new tools is often error prone and time consuming. Current approaches to pipeline development such as workflow management systems focus on analysis tasks that are systematically repeated without significant changes in their course of execution, such as genome annotation. However, more dynamism on the pipeline composition is necessary when each execution requires a different combination of steps. Results We propose a graph-based approach to implement extensible and low-maintenance pipelines that is suitable for pipeline applications with multiple functionalities that require different combinations of steps in each execution. Here pipelines are composed automatically by compiling a specialised set of tools on demand, depending on the functionality required, instead of specifying every sequence of tools in advance. We represent the connectivity of pipeline components with a directed graph in which components are the graph edges, their inputs and outputs are the graph nodes, and the paths through the graph are pipelines. To that end, we developed special data structures and a pipeline system algorithm. We demonstrate the applicability of our approach by implementing a format conversion pipeline for the fields of population genetics and genetic epidemiology, but our approach is also helpful in other fields where the use of multiple software is necessary to perform comprehensive analyses, such as gene expression and proteomics analyses. The project code, documentation and the Java executables are available under an open source license at http://code.google.com/p/dynamic-pipeline. The system has been tested on Linux and Windows platforms. Conclusions Our graph-based approach enables the automatic creation of pipelines by compiling a specialised set of tools on demand, depending on the functionality required. It also allows the implementation of extensible and low-maintenance pipelines and contributes towards consolidating openness and collaboration in bioinformatics systems. It is targeted at pipeline developers and is suited for implementing applications with sequential execution steps and combined functionalities. In the format conversion application, the automatic combination of conversion tools increased both the number of possible conversions available to the user and the extensibility of the system to allow for future updates with new file formats. PMID:22788675
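    A minimal sketch of the graph idea described above, with file formats as nodes, tools as edges, and a pipeline as a path found by breadth-first search (the formats and tool names are hypothetical, not the project's actual components):

```python
# File formats as graph nodes, conversion tools as edges, a pipeline as a path.
# Formats and tool names are hypothetical, not the project's actual components.
from collections import deque

tools = {                       # (input format, output format) -> tool
    ("ped", "vcf"): "ped2vcf",
    ("vcf", "plink"): "vcf2plink",
    ("plink", "phylip"): "plink2phylip",
    ("vcf", "phylip"): "vcf2phylip",
}

def compose_pipeline(src, dst):
    """Breadth-first search for the shortest chain of tools converting src -> dst."""
    graph = {}
    for (a, b), tool in tools.items():
        graph.setdefault(a, []).append((b, tool))
    queue, seen = deque([(src, [])]), {src}
    while queue:
        fmt, path = queue.popleft()
        if fmt == dst:
            return path
        for nxt, tool in graph.get(fmt, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [tool]))
    return None                 # no pipeline exists for this conversion

print(compose_pipeline("ped", "phylip"))   # ['ped2vcf', 'vcf2phylip']
```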

  15. GPU-accelerated atmospheric chemical kinetics in the ECHAM/MESSy (EMAC) Earth system model (version 2.52)

    NASA Astrophysics Data System (ADS)

    Alvanos, Michail; Christoudias, Theodoros

    2017-10-01

    This paper presents an application of GPU accelerators in Earth system modeling. We focus on atmospheric chemical kinetics, one of the most computationally intensive tasks in climate-chemistry model simulations. We developed a software package that automatically generates CUDA kernels to numerically integrate atmospheric chemical kinetics in the global climate model ECHAM/MESSy Atmospheric Chemistry (EMAC), used to study climate change and air quality scenarios. A source-to-source compiler outputs a CUDA-compatible kernel by parsing the FORTRAN code generated by the Kinetic PreProcessor (KPP) general analysis tool. All Rosenbrock methods that are available in the KPP numerical library are supported. Performance evaluation, using Fermi and Pascal CUDA-enabled GPU accelerators, shows achieved speed-ups of 4.5× and 20.4×, respectively, of the kernel execution time. A node-to-node real-world production performance comparison shows a 1.75× speed-up over the non-accelerated application using the KPP three-stage Rosenbrock solver. We provide a detailed description of the code optimizations used to improve the performance, including memory optimizations, control code simplification, and reduction of idle time. The accuracy and correctness of the accelerated implementation are evaluated by comparing its output to that of the CPU-only code of the application. The median relative difference is found to be less than 0.000000001% when comparing the output of the accelerated kernel with the CPU-only code. The approach followed, including the computational workload division, and the developed GPU solver code can potentially be used as the basis for hardware acceleration of numerous geoscientific models that rely on KPP for atmospheric chemical kinetics applications.

  16. Validation of the Operating and Support Cost Model for Avionics Automatic Test Equipment (OSCATE).

    DTIC Science & Technology

    1980-06-01


  17. Strengths and limitations of the NATALI code for aerosol typing from multiwavelength Raman lidar observations

    NASA Astrophysics Data System (ADS)

    Nicolae, Doina; Talianu, Camelia; Vasilescu, Jeni; Nicolae, Victor; Stachlewska, Iwona S.

    2018-04-01

    A Python code was developed to automatically retrieve the aerosol type (and its predominant component in the mixture) from EARLINET's 3 backscatter and 2 extinction data. The typing relies on Artificial Neural Networks which are trained to identify the most probable aerosol type from a set of mean-layer intensive optical parameters. This paper presents the use and limitations of the code with respect to the quality of the input lidar profiles, as well as to the assumptions made in the aerosol model.

  18. Toward a standard reference database for computer-aided mammography

    NASA Astrophysics Data System (ADS)

    Oliveira, Júlia E. E.; Gueld, Mark O.; de A. Araújo, Arnaldo; Ott, Bastian; Deserno, Thomas M.

    2008-03-01

    Because of the lack of mammography databases with a large amount of codified images and identified characteristics like pathology, type of breast tissue, and abnormality, there is a problem for the development of robust systems for computer-aided diagnosis. Integrated to the Image Retrieval in Medical Applications (IRMA) project, we present an available mammography database developed from the union of: The Mammographic Image Analysis Society Digital Mammogram Database (MIAS), The Digital Database for Screening Mammography (DDSM), the Lawrence Livermore National Laboratory (LLNL), and routine images from the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen. Using the IRMA code, standardized coding of tissue type, tumor staging, and lesion description was developed according to the American College of Radiology (ACR) tissue codes and the ACR breast imaging reporting and data system (BI-RADS). The import was done automatically using scripts for image download, file format conversion, file name, web page and information file browsing. Disregarding the resolution, this resulted in a total of 10,509 reference images, and 6,767 images are associated with an IRMA contour information feature file. In accordance to the respective license agreements, the database will be made freely available for research purposes, and may be used for image based evaluation campaigns such as the Cross Language Evaluation Forum (CLEF). We have also shown that it can be extended easily with further cases imported from a picture archiving and communication system (PACS).

  19. Classifying Chinese Questions Related to Health Care Posted by Consumers Via the Internet.

    PubMed

    Guo, Haihong; Na, Xu; Hou, Li; Li, Jiao

    2017-06-20

    In question answering (QA) system development, question classification is crucial for identifying information needs and improving the accuracy of returned answers. Although the questions are domain-specific, they are asked by non-professionals, making the question classification task more challenging. This study aimed to classify health care-related questions posted by the general public (Chinese speakers) on the Internet. A topic-based classification schema for health-related questions was built by manually annotating randomly selected questions. The Kappa statistic was used to measure the interrater reliability of multiple annotation results. Using the above corpus, we developed a machine-learning method to automatically classify these questions into one of the following six classes: Condition Management, Healthy Lifestyle, Diagnosis, Health Provider Choice, Treatment, and Epidemiology. The consumer health question schema was developed with four hierarchical levels of specificity, comprising 48 quaternary categories and 35 annotation rules. The 2000 sample questions were coded with 2000 major codes and 607 minor codes. Using natural language processing techniques, we expressed the Chinese questions as a set of lexical, grammatical, and semantic features. Furthermore, effective features were selected to improve the question classification performance. For the 6-category classification, we achieved an average precision of 91.41%, recall of 89.62%, and F1 score of 90.24%. In this study, we developed an automatic method to classify Chinese health care-related questions posted by the general public. It enables Artificial Intelligence (AI) agents to understand Internet users' information needs on health care. ©Haihong Guo, Xu Na, Li Hou, Jiao Li. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 20.06.2017.

  20. A Clustering-Based Approach to Enriching Code Foraging Environment.

    PubMed

    Niu, Nan; Jin, Xiaoyu; Niu, Zhendong; Cheng, Jing-Ru C; Li, Ling; Kataev, Mikhail Yu

    2016-09-01

    Developers often spend valuable time navigating and seeking relevant code during software maintenance. Currently, there is a lack of theoretical foundations to guide tool design and evaluation to best shape the code base for developers. This paper contributes a unified code navigation theory in light of optimal food-foraging principles. We further develop a novel framework for automatically assessing the foraging mechanisms in the context of program investigation. We use the framework to examine to what extent the clustering of software entities affects code foraging. Our quantitative analysis of long-lived open-source projects suggests that clustering enriches the software environment and improves foraging efficiency. Our qualitative inquiry reveals concrete insights into real developers' behavior. Our research opens the avenue toward building a new set of ecologically valid code navigation tools.

  1. A Comprehensive C++ Controller for a Magnetically Supported Vertical Rotor. 1.0

    NASA Technical Reports Server (NTRS)

    Morrison, Carlos R.

    2001-01-01

    This manual describes the new FATMaCC (Five-Axis, Three-Magnetic-Bearing Control Code). The FATMaCC (pronounced "fat mak") is a versatile control code that possesses many desirable features that were not available in previous in-house controllers. The ultimate goal in designing this code was to achieve full rotor levitation and control at a loop time of 50 microsec. Using a 1-GHz processor, the code will control a five-axis system in either a decentralized or a more elegant centralized (modal control) mode at a loop time of 56 microsec. In addition, it will levitate and control (with only minor modification to the input/output wiring) a two-axis and/or a four-axis system. Stable rotor levitation and control of any of the systems mentioned above are accomplished through appropriate key presses to modify parameters, such as stiffness, damping, and bias. A signal generation block provides 11 excitation signals. An excitation signal is then superimposed on the radial bearing x- and y-control signals, thus producing a resultant force vector. By modulating the signals on the bearing x- and y-axes with a cosine and a sine function, respectively, a radial excitation force vector is made to rotate 360 deg. about the bearing geometric center. The rotation of the force vector is achieved manually by using key press or automatically by engaging the "one-per-revolution" feature. Rotor rigid body modes can be excited by using the excitation module. Depending on the polarities of the excitation signal in each radial bearing, the bounce or tilt mode will be excited.

  2. Source Lines Counter (SLiC) Version 4.0

    NASA Technical Reports Server (NTRS)

    Monson, Erik W.; Smith, Kevin A.; Newport, Brian J.; Gostelow, Roli D.; Hihn, Jairus M.; Kandt, Ronald K.

    2011-01-01

    Source Lines Counter (SLiC) is a software utility designed to measure software source code size using logical source statements and other common measures for 22 of the programming languages commonly used at NASA and in the aerospace industry. Such metrics can be used in a wide variety of applications, from parametric cost estimation to software defect analysis. SLiC has a variety of unique features such as automatic code search, automatic file detection, hierarchical directory totals, and spreadsheet-compatible output. SLiC was written for extensibility; new programming language support can be added with minimal effort in a short amount of time. SLiC runs on a variety of platforms including UNIX, Windows, and Mac OS X. Its straightforward command-line interface allows for customization and incorporation into the software build process for tracking development metrics.
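    A toy sketch of logical source-line counting (not SLiC's actual algorithm; it handles only Python-style comments and semicolon-separated statements):

```python
# Naive logical source-line counting (not SLiC's actual algorithm): strip
# comments and blank lines, then count semicolon-separated statements.
# Only Python-style '#' comments are handled here.
import io

def count_logical_lines(source: str) -> int:
    count = 0
    for line in io.StringIO(source):
        stripped = line.split("#", 1)[0].strip()   # drop comments and whitespace
        if stripped:
            count += len([s for s in stripped.split(";") if s.strip()])
    return count

example = """
# configuration
x = 1; y = 2
if x < y:
    print(x)
"""
print(count_logical_lines(example))   # 4 logical statements
```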

  3. Segmentation, dynamic storage, and variable loading on CDC equipment

    NASA Technical Reports Server (NTRS)

    Tiffany, S. H.

    1980-01-01

    Techniques for varying the segmented load structure of a program and for varying the dynamic storage allocation, depending upon whether a batch type or interactive type run is desired, are explained and demonstrated. All changes are based on a single data input to the program. The techniques involve: code within the program to suppress scratch pad input/output (I/O) for a batch run or translate the in-core data storage area from blank common to the end-of-code+1 address of a particular segment for an interactive run; automatic editing of the segload directives prior to loading, based upon data input to the program, to vary the structure of the load for interactive and batch runs; and automatic editing of the load map to determine the initial addresses for in core data storage for an interactive run.

  4. Automatic Generation of OpenMP Directives and Its Application to Computational Fluid Dynamics Codes

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Jin, Haoqiang; Frumkin, Michael; Yan, Jerry (Technical Monitor)

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs using compiler directives has improved significantly. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline techniques used in the implementation of the tool and discuss the application of this tool to the NAS Parallel Benchmarks and several computational fluid dynamics codes. This work demonstrates the great potential of using the tool to quickly port parallel programs and also achieve good performance that exceeds that of some commercial tools.

  5. Development of an ease-of-use remote healthcare system architecture using RFID and networking technologies.

    PubMed

    Lin, Shih-Sung; Hung, Min-Hsiung; Tsai, Chang-Lung; Chou, Li-Ping

    2012-12-01

    The study aims to provide an ease-of-use approach for senior patients to utilize remote healthcare systems. An ease-of-use remote healthcare system (RHS) architecture using RFID (Radio Frequency Identification) and networking technologies is developed. Specifically, the codes in RFID tags are used for authenticating the patients' ID to secure and ease the login process. The patient needs only to take one action, i.e. placing a RFID tag onto the reader, to automatically login and start the RHS and then acquire automatic medical services. An ease-of-use emergency monitoring and reporting mechanism is developed as well to monitor and protect the safety of the senior patients who have to be left alone at home. By just pressing a single button, the RHS can automatically report the patient's emergency information to the clinic side so that the responsible medical personnel can take proper urgent actions for the patient. Besides, Web services technology is used to build the Internet communication scheme of the RHS so that the interoperability and data transmission security between the home server and the clinical server can be enhanced. A prototype RHS is constructed to validate the effectiveness of our designs. Testing results show that the proposed RHS architecture possesses the characteristics of ease to use, simplicity to operate, promptness in login, and no need to preserve identity information. The proposed RHS architecture can effectively increase the willingness of senior patients who act slowly or are unfamiliar with computer operations to use the RHS. The research results can be used as an add-on for developing future remote healthcare systems.

  6. Joint three-dimensional inversion of coupled groundwater flow and heat transfer based on automatic differentiation: sensitivity calculation, verification, and synthetic examples

    NASA Astrophysics Data System (ADS)

    Rath, V.; Wolf, A.; Bücker, H. M.

    2006-10-01

    Inverse methods are useful tools not only for deriving estimates of unknown parameters of the subsurface, but also for the appraisal of the models thus obtained. While being neither the most general nor the most efficient methods, Bayesian inversion based on the calculation of the Jacobian of a given forward model can be used to evaluate many quantities useful in this process. The calculation of the Jacobian, however, is computationally expensive and, if done by divided differences, prone to truncation error. Here, automatic differentiation can be used to produce derivative code by source transformation of an existing forward model. We describe this process for a coupled fluid flow and heat transport finite difference code, which is used in a Bayesian inverse scheme to estimate thermal and hydraulic properties and boundary conditions from measured hydraulic potentials and temperatures. The resulting derivative code was validated by comparison to simple analytical solutions and divided differences. Synthetic examples from different flow regimes demonstrate the use of the inverse scheme and its behaviour in different configurations.
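    The sensitivity of divided differences to the step size, versus a derivative technique that is exact to round-off, can be illustrated generically (the complex-step trick below stands in for the source-transformation AD used in the paper; the test function is arbitrary):

```python
# Divided differences suffer truncation/cancellation error as the step shrinks,
# while an exact technique (here the complex-step trick, standing in for the
# source-transformation AD used in the paper) stays at round-off level.
import numpy as np

def f(x):                        # stand-in for a forward-model response
    return np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)

def fprime(x):                   # analytic derivative, used as the reference
    s = np.sin(x)**3 + np.cos(x)**3
    ds = 3.0 * np.sin(x) * np.cos(x) * (np.sin(x) - np.cos(x))
    return np.exp(x) * s**-0.5 - 0.5 * np.exp(x) * s**-1.5 * ds

x0 = 1.5
exact = fprime(x0)
for h in (1e-2, 1e-6, 1e-10):
    divided = (f(x0 + h) - f(x0 - h)) / (2.0 * h)   # central divided difference
    cstep = np.imag(f(x0 + 1j * h)) / h             # exact to machine precision
    print(f"h={h:.0e}  divided-diff error={abs(divided - exact):.1e}  "
          f"complex-step error={abs(cstep - exact):.1e}")
```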

  7. Automatic morphological classification of galaxy images

    PubMed Central

    Shamir, Lior

    2009-01-01

    We describe an image analysis supervised learning algorithm that can automatically classify galaxy images. The algorithm is first trained using manually classified images of elliptical, spiral, and edge-on galaxies. A large set of image features is extracted from each image, and the most informative features are selected using Fisher scores. Test images can then be classified using a simple Weighted Nearest Neighbor rule such that the Fisher scores are used as the feature weights. Experimental results show that galaxy images from Galaxy Zoo can be classified automatically into spiral, elliptical and edge-on galaxies with an accuracy of ~90% compared to classifications carried out by the author. Full compilable source code of the algorithm is available for free download, and its general-purpose nature makes it suitable for other uses that involve automatic image analysis of celestial objects. PMID:20161594
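    A compact sketch of the two ingredients described above, Fisher scores used as feature weights inside a weighted nearest-neighbour rule (the "image features" are random synthetic vectors, not real galaxy features):

```python
# Fisher scores as feature weights in a weighted nearest-neighbour classifier.
# The "image features" are random synthetic vectors, not real galaxy features.
import numpy as np

rng = np.random.default_rng(0)
n_per_class, n_features = 50, 10
X = rng.normal(0, 1, (3 * n_per_class, n_features))   # 3 classes, 10 features
y = np.repeat([0, 1, 2], n_per_class)
for c in range(3):
    X[y == c, c] += 3.0                                # only features 0-2 are informative

def fisher_scores(X, y):
    """Between-class variance over within-class variance, per feature."""
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / within

weights = fisher_scores(X, y)

def classify(sample):
    """Weighted nearest neighbour: Fisher scores weight each feature's distance."""
    d = np.sqrt((((X - sample) ** 2) * weights).sum(axis=1))
    return y[np.argmin(d)]

test = rng.normal(0, 1, n_features)
test[1] += 3.0                                         # should resemble class 1
print("predicted class:", classify(test))
```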

  8. 49 CFR 236.504 - Operation interconnected with automatic block-signal system.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    § 236.504 Operation interconnected with automatic block-signal system. (a) A continuous inductive automatic train stop or train control system shall operate in connection with an automatic block signal system and shall...

  9. Real-time Shakemap implementation in Austria

    NASA Astrophysics Data System (ADS)

    Weginger, Stefan; Jia, Yan; Papi Isaba, Maria; Horn, Nikolaus

    2017-04-01

    ShakeMaps provide near-real-time maps of ground motion and shaking intensity following significant earthquakes. They are automatically generated within a few minutes of the occurrence of an earthquake. We tested and integrated the USGS ShakeMap 4.0 (experimental code), which is based on Python, into the Antelope real-time system, with locally modified GMPEs and site effects based on the conditions in Austria. The ShakeMaps are provided in terms of Intensity, PGA, PGV and PSA. Future presentation of ShakeMap contour lines and ground motion parameters with interactive maps, and data exchange via web services, is also shown.

  10. A Program Certification Assistant Based on Fully Automated Theorem Provers

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd

    2005-01-01

    We describe a certification assistant to support formal safety proofs for programs. It is based on a graphical user interface that hides the low-level details of first-order automated theorem provers while supporting limited interactivity: it allows users to customize and control the proof process on a high level, manages the auxiliary artifacts produced during this process, and provides traceability between the proof obligations and the relevant parts of the program. The certification assistant is part of a larger program synthesis system and is intended to support the deployment of automatically generated code in safety-critical applications.

  11. ALI: A CSSL/multiprocessor software interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makoui, A.; Karplus, W.J.

    ALI (A Language Interface) is a software package which translates simulation models expressed in one of the higher-level languages, CSSL-IV or ACSL, into sequences of instructions for each processor of a network of microprocessors. The partitioning of the source program among the processors is automatically accomplished. The code is converted into a data flow graph, analyzed and divided among the processors to minimize the overall execution time in the presence of interprocessor communication delays. This paper describes ALI from the user's point of view and includes a detailed example of the application of ALI to a specific dynamic system simulation.

  12. A high performance scientific cloud computing environment for materials simulations

    NASA Astrophysics Data System (ADS)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  13. New Developments in Modeling MHD Systems on High Performance Computing Architectures

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Raeder, J.; Larson, D. J.; Bhattacharjee, A.

    2009-04-01

    Modeling the wide range of time and length scales present even in fluid models of plasmas like MHD and X-MHD (Extended MHD including two-fluid effects like the Hall term, electron inertia, and the electron pressure gradient) is challenging even on state-of-the-art supercomputers. In recent years, HPC capacity has continued to grow exponentially, but at the expense of making computer systems more and more difficult to program for maximum performance. In this paper, we present a new approach to managing the complexity caused by the need to write efficient codes: separating the numerical description of the problem, in our case a discretized right-hand side (r.h.s.), from the actual implementation of efficiently evaluating it. The r.h.s. is described in a quasi-symbolic form, and an automatic code generator translates it into efficient, parallelized code. We implemented this approach for OpenGGCM (Open General Geospace Circulation Model), a model of the Earth's magnetosphere, which was accelerated by a factor of three on a regular x86 architecture and a factor of 25 on the Cell BE architecture (commonly known for its deployment in Sony's PlayStation 3).
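
    A minimal sketch of the underlying idea, describing a discretized r.h.s. quasi-symbolically and generating low-level code from it, is given below using SymPy's C-code printer on an invented 1-D continuity-equation stencil; it is not the OpenGGCM generator.

      # Toy "symbolic r.h.s. -> generated C" example (invented stencil and variable names).
      import sympy as sp

      # Neighbouring cell values enter as plain symbols; dx is the cell size.
      rho_m, rho_p, vx_m, vx_p, dx = sp.symbols("rho_m rho_p vx_m vx_p dx")

      # Quasi-symbolic r.h.s.: 1-D continuity equation, centred difference for d(rho*vx)/dx.
      rhs = -(rho_p * vx_p - rho_m * vx_m) / (2 * dx)

      # "Compile" the symbolic expression into a C statement for the solver's inner loop.
      print(sp.ccode(rhs, assign_to="drho_dt"))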

  14. Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)

    1993-01-01

    Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a Hamming neural network, an edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, an automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, and the effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, an optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, the wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on the WT, wavelet analysis of global warming, use of the WT for signal detection, perfect-reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, and number-theoretic coding for iconic systems.

  15. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  16. Floating-to-Fixed-Point Conversion for Digital Signal Processors

    NASA Astrophysics Data System (ADS)

    Menard, Daniel; Chillet, Daniel; Sentieys, Olivier

    2006-12-01

    Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.
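
    As a toy illustration of the quantisation step that such methodologies reason about (not the paper's DSP-aware word-length optimisation itself), the sketch below converts a value into a few assumed Q-formats and reports the resulting error.

      # Fixed-point (Q-format) quantisation sketch with saturation on overflow.
      def to_fixed(x, frac_bits, word_bits=16):
          """Quantise a float to a two's-complement integer with `frac_bits` fractional bits."""
          scaled = int(round(x * (1 << frac_bits)))
          lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
          return max(lo, min(hi, scaled))

      def to_float(q, frac_bits):
          return q / float(1 << frac_bits)

      x = 0.718281828
      for frac_bits in (7, 11, 15):            # e.g. Q8.7, Q4.11, Q0.15 style formats
          q = to_fixed(x, frac_bits)
          err = abs(x - to_float(q, frac_bits))
          print(f"frac_bits={frac_bits:2d}  int={q:6d}  error={err:.2e}")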

  17. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  18. Establishment and assessment of code scaling capability

    NASA Astrophysics Data System (ADS)

    Lim, Jaehyok

    In this thesis, a method for using RELAP5/MOD3.3 (Patch03) code models is described to establish and assess the code scaling capability and to corroborate the scaling methodology that has been used in the design of the Purdue University Multi-Dimensional Integral Test Assembly for ESBWR applications (PUMA-E) facility. It was sponsored by the United States Nuclear Regulatory Commission (USNRC) under the program "PUMA ESBWR Tests". The PUMA-E facility was built for the USNRC to obtain data on the performance of the passive safety systems of the General Electric (GE) Nuclear Energy Economic Simplified Boiling Water Reactor (ESBWR). Similarities between the prototype plant and the scaled-down test facility were investigated for a Gravity-Driven Cooling System (GDCS) Drain Line Break (GDLB). This thesis presents the results of the GDLB test, i.e., the GDLB test with one Isolation Condenser System (ICS) unit disabled. The test is a hypothetical multi-failure small-break loss-of-coolant accident (SB LOCA) scenario in the ESBWR. The test results indicated that the blow-down phase, Automatic Depressurization System (ADS) actuation, and GDCS injection processes occurred as expected. The GDCS, as an emergency core cooling system, provided an adequate supply of water to keep the Reactor Pressure Vessel (RPV) coolant level well above the Top of Active Fuel (TAF) during the entire GDLB transient. The long-term cooling phase, which is governed by the Passive Containment Cooling System (PCCS) condensation, kept the reactor containment system, which is composed of the Drywell (DW) and Wetwell (WW), below the design pressure of 414 kPa (60 psia). In addition, the ICS continued participating in heat removal during the long-term cooling phase. A general Code Scaling, Applicability, and Uncertainty (CSAU) evaluation approach was discussed in detail relative to safety analyses of Light Water Reactors (LWRs). The highlighted components of the CSAU methodology focused particularly on the scaling issues of experiments and models and their applicability to nuclear power plant transients and accidents. The major thermal-hydraulic phenomena to be analyzed were identified and the predictive models adopted in the RELAP5/MOD3.3 (Patch03) code were briefly reviewed.

  19. Small passenger car transmission test; Ford C4 transmission

    NASA Technical Reports Server (NTRS)

    Bujold, M. P.

    1980-01-01

    A 1979 Ford C4 automatic transmission was tested per a passenger car automatic transmission test code (SAE J651b) which required drive performance, coast performance, and no load test conditions. Under these test conditions, the transmission attained maximum efficiencies in the mid-eighty percent range for both drive performance tests and coast performance tests. The major results of this test (torque, speed, and efficiency curves) are presented. Graphs map the complete performance characteristics for the Ford C4 transmission.

  20. Modeling of photon migration in the human lung using a finite volume solver

    NASA Astrophysics Data System (ADS)

    Sikorski, Zbigniew; Furmanczyk, Michal; Przekwas, Andrzej J.

    2006-02-01

    The application of the frequency domain and steady-state diffusive optical spectroscopy (DOS) and steady-state near infrared spectroscopy (NIRS) to diagnosis of the human lung injury challenges many elements of these techniques. These include the DOS/NIRS instrument performance and accurate models of light transport in heterogeneous thorax tissue. The thorax tissue not only consists of different media (e.g. chest wall with ribs, lungs) but its optical properties also vary with time due to respiration and changes in thorax geometry with contusion (e.g. pneumothorax or hemothorax). This paper presents a finite volume solver developed to model photon migration in the diffusion approximation in heterogeneous complex 3D tissues. The code applies boundary conditions that account for Fresnel reflections. We propose an effective diffusion coefficient for the void volumes (pneumothorax) based on the assumption of the Lambertian diffusion of photons entering the pleural cavity and accounting for the local pleural cavity thickness. The code has been validated using the MCML Monte Carlo code as a benchmark. The code environment enables a semi-automatic preparation of 3D computational geometry from medical images and its rapid automatic meshing. We present the application of the code to analysis/optimization of the hybrid DOS/NIRS/ultrasound technique in which ultrasound provides data on the localization of thorax tissue boundaries. The code effectiveness (3D complex case computation takes 1 second) enables its use to quantitatively relate detected light signal to absorption and reduced scattering coefficients that are indicators of the pulmonary physiologic state (hemoglobin concentration and oxygenation).
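
    A hedged one-dimensional sketch of a finite-volume diffusion update is given below; the optical properties, source and absorbing boundary are invented, whereas the solver described above is 3-D and applies Fresnel-reflection boundary conditions.

      # Explicit 1-D finite-volume march of the photon diffusion approximation (illustrative only).
      import numpy as np

      n, length = 200, 4.0                   # number of cells and slab thickness [cm]
      dx = length / n
      mu_a = np.full(n, 0.05)                # absorption coefficient [1/cm] (assumed)
      mu_s_prime = np.full(n, 10.0)          # reduced scattering coefficient [1/cm] (assumed)
      D = 1.0 / (3.0 * (mu_a + mu_s_prime))  # diffusion coefficient

      phi = np.zeros(n)                      # fluence rate
      src = np.zeros(n); src[0] = 1.0        # isotropic source near the surface

      c = 3e10 / 1.4                         # speed of light in tissue [cm/s]
      dt = 0.4 * dx * dx / (c * D.max())     # stable explicit time step
      for _ in range(20000):                 # pseudo-time march towards steady state
          D_face = 0.5 * (D[1:] + D[:-1])                  # face-centred diffusion coefficient
          flux = -D_face * (phi[1:] - phi[:-1]) / dx       # Fick's law at interior faces
          div = np.zeros(n)
          div[:-1] += flux / dx
          div[1:] -= flux / dx
          phi += c * dt * (-div - mu_a * phi + src)
          phi[-1] = 0.0                                    # crude absorbing boundary
      print("fluence at 1 cm depth:", phi[int(1.0 / dx)])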

  1. Global Location-Based Access to Web Applications Using Atom-Based Automatic Update

    NASA Astrophysics Data System (ADS)

    Singh, Kulwinder; Park, Dong-Won

    We propose an architecture which enables people to enquire about information available in directory services by voice using regular phones. We implement a Virtual User Agent (VUA) which mediates between the human user and a business directory service. The system enables the user to search for the nearest clinic, gas station by price, motel by price, food/coffee, banks/ATMs, etc., and fix an appointment, or automatically establish a call between the user and the business party if the user prefers. The user also has an option to receive appointment confirmation by phone, SMS, or e-mail. The VUA is accessible to anyone, anywhere, at any time through a toll-free DID (Direct Inward Dialing) number. We use the Euclidean formula for distance measurement, since shorter geodesic distances (on the Earth's surface) correspond to shorter Euclidean distances (measured by a straight line through the Earth). Our proposed architecture uses the Atom XML syndication format for data integration, VoiceXML for creating the voice user interface (VUI), and CCXML for controlling the call components. We also provide an efficient algorithm for parsing the Atom feeds which provide data to the system. Moreover, we describe a cost-effective way of providing global access to the VUA based on Asterisk (an open source IP-PBX). We also describe how our system can be integrated with GPS to locate the user's coordinates and thereby enhance the system response. Additionally, the system has a mechanism for validating the phone numbers in its database, and it updates the numbers and other information, such as daily gas and motel prices, automatically using an Atom-based feed. Currently, commercial directory services (e.g., 411) do not have facilities to update their listings automatically, which is why callers often receive out-of-date phone numbers or other information. Our system can be integrated very easily with an existing web infrastructure, thereby making the wealth of Web information easily available to the user by phone. This kind of system can be deployed as an extension to 911 and 411 services to share the workload with human operators. This paper presents all the underlying principles, architecture, features, and an example of a real-world deployment of our proposed system. The source code and documentation are available for commercial production.
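
    A small sketch of that chord-distance ranking follows; the user position and business coordinates are invented.

      # Rank businesses by straight-line (chord) distance, which preserves geodesic ordering.
      from math import radians, sin, cos, sqrt

      EARTH_RADIUS_KM = 6371.0

      def to_cartesian(lat_deg, lon_deg):
          lat, lon = radians(lat_deg), radians(lon_deg)
          return (EARTH_RADIUS_KM * cos(lat) * cos(lon),
                  EARTH_RADIUS_KM * cos(lat) * sin(lon),
                  EARTH_RADIUS_KM * sin(lat))

      def chord_distance(a, b):
          ax, ay, az = to_cartesian(*a)
          bx, by, bz = to_cartesian(*b)
          return sqrt((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2)

      user = (30.2672, -97.7431)
      businesses = {"clinic A": (30.3005, -97.7522), "clinic B": (30.1900, -97.6700)}
      print("nearest:", min(businesses, key=lambda name: chord_distance(user, businesses[name])))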

  2. 41 CFR 102-80.100 - What performance objective should an automatic sprinkler system be capable of meeting?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... objective should an automatic sprinkler system be capable of meeting? 102-80.100 Section 102-80.100 Public... Automatic Sprinkler Systems § 102-80.100 What performance objective should an automatic sprinkler system be capable of meeting? The performance objective of the automatic sprinkler system is that it must be capable...

  3. 41 CFR 102-80.100 - What performance objective should an automatic sprinkler system be capable of meeting?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... objective should an automatic sprinkler system be capable of meeting? 102-80.100 Section 102-80.100 Public... Automatic Sprinkler Systems § 102-80.100 What performance objective should an automatic sprinkler system be capable of meeting? The performance objective of the automatic sprinkler system is that it must be capable...

  4. 41 CFR 102-80.100 - What performance objective should an automatic sprinkler system be capable of meeting?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... objective should an automatic sprinkler system be capable of meeting? 102-80.100 Section 102-80.100 Public... Automatic Sprinkler Systems § 102-80.100 What performance objective should an automatic sprinkler system be capable of meeting? The performance objective of the automatic sprinkler system is that it must be capable...

  5. 41 CFR 102-80.100 - What performance objective should an automatic sprinkler system be capable of meeting?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... objective should an automatic sprinkler system be capable of meeting? 102-80.100 Section 102-80.100 Public... Automatic Sprinkler Systems § 102-80.100 What performance objective should an automatic sprinkler system be capable of meeting? The performance objective of the automatic sprinkler system is that it must be capable...

  6. 41 CFR 102-80.100 - What performance objective should an automatic sprinkler system be capable of meeting?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... objective should an automatic sprinkler system be capable of meeting? 102-80.100 Section 102-80.100 Public... Automatic Sprinkler Systems § 102-80.100 What performance objective should an automatic sprinkler system be capable of meeting? The performance objective of the automatic sprinkler system is that it must be capable...

  7. A Standard-Driven Data Dictionary for Data Harmonization of Heterogeneous Datasets in Urban Geological Information Systems

    NASA Astrophysics Data System (ADS)

    Liu, G.; Wu, C.; Li, X.; Song, P.

    2013-12-01

    The 3D urban geological information system has been a major part of the national urban geological survey project of China Geological Survey in recent years. Large amounts of multi-source and multi-subject data are stored in the urban geological databases. Various models and vocabularies have been drafted and applied to urban geological data by industrial companies. Issues such as duplicate and ambiguous definitions of terms and differing coding structures increase the difficulty of information sharing and data integration. To solve this problem, we proposed a national standard-driven information classification and coding method to effectively store and integrate urban geological data, and applied data dictionary technology to achieve structured and standardized data storage. The overall purpose of this work is to set up a common data platform to provide an information sharing service. Research progresses are as follows: (1) A unified classification and coding method for multi-source data based on national standards. Underlying national standards include GB 9649-88 for geology and GB/T 13923-2006 for geography. Current industrial models are compared with national standards to build a mapping table. The attributes of various urban geological data entity models are reduced to several categories according to their application phases and domains. Then a logical data model is set up as a standard format to design data file structures for a relational database. (2) A multi-level data dictionary for data standardization constraints. Three levels of data dictionary are designed: the model data dictionary manages system database files and eases maintenance of the whole database system; the attribute dictionary organizes the fields used in database tables; and the term and code dictionary provides a standard for the urban information system by adopting appropriate classification and coding methods; in addition, a comprehensive data dictionary manages system operation and security. (3) An extension to the system's data management functions based on the data dictionary. The data item constraint input function uses the standard term and code dictionary to obtain standardized input. The attribute dictionary organizes all the fields of an urban geological information database to ensure consistent term use for fields. The model dictionary is used to generate a database operation interface automatically, with standard semantic content supplied via the term and code dictionary. The above method and technology have been applied to the construction of the Fuzhou Urban Geological Information System in South-East China, with satisfactory results.
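
    As a minimal sketch (not the Fuzhou system) of how a term and code dictionary can constrain data entry, the snippet below stores standard terms and codes in SQLite and rejects non-standard input; the table layout, field and codes are invented.

      # Term-and-code dictionary used as an input constraint (illustrative schema only).
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE term_code_dict (
              field TEXT NOT NULL,     -- attribute the term belongs to
              code  TEXT NOT NULL,     -- standard code (e.g. derived from a national standard)
              term  TEXT NOT NULL,     -- standard term
              PRIMARY KEY (field, code)
          );
          CREATE TABLE borehole (
              borehole_id    TEXT PRIMARY KEY,
              lithology_code TEXT NOT NULL
          );
      """)
      conn.executemany("INSERT INTO term_code_dict VALUES (?, ?, ?)",
                       [("lithology", "L01", "sandstone"), ("lithology", "L02", "mudstone")])

      def insert_borehole(borehole_id, lithology_term):
          row = conn.execute("SELECT code FROM term_code_dict "
                             "WHERE field = 'lithology' AND term = ?",
                             (lithology_term,)).fetchone()
          if row is None:
              raise ValueError(f"'{lithology_term}' is not a standard term")
          conn.execute("INSERT INTO borehole VALUES (?, ?)", (borehole_id, row[0]))

      insert_borehole("ZK001", "sandstone")       # accepted and stored as its standard code
      print(conn.execute("SELECT * FROM borehole").fetchall())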

  8. A Procedure for Extending Input Selection Algorithms to Low Quality Data in Modelling Problems with Application to the Automatic Grading of Uploaded Assignments

    PubMed Central

    Otero, José; Palacios, Ana; Suárez, Rosario; Junco, Luis

    2014-01-01

    When selecting relevant inputs in modeling problems with low quality data, the ranking of the most informative inputs is also uncertain. In this paper, this issue is addressed through a new procedure that allows different crisp feature selection algorithms to be extended to vague data. The partial knowledge about the ordinal rank of each feature is modelled by means of a possibility distribution, and a ranking is then applied to sort these distributions. It will be shown that this technique makes the most use of the available information in some vague datasets. The approach is demonstrated in a real-world application. In the context of massive online computer science courses, methods are sought for automatically providing the student with a qualification based on code metrics. Feature selection methods are used to find the metrics involved in the most meaningful predictions. In this study, 800 source code files, collected and revised by the authors in classroom Computer Science lectures taught between 2013 and 2014, are analyzed with the proposed technique, and the most relevant metrics for the automatic grading task are discussed. PMID:25114967

  9. Parallel processing approach to transform-based image coding

    NASA Astrophysics Data System (ADS)

    Normile, James O.; Wright, Dan; Chu, Ken; Yeh, Chia L.

    1991-06-01

    This paper describes a flexible parallel processing architecture designed for use in real-time video processing. The system consists of floating point DSP processors connected to each other via fast serial links; each processor has access to a globally shared memory. A multiple bus architecture in combination with a dual-ported memory allows communication with a host control processor. The system has been applied to prototyping of video compression and decompression algorithms. The decomposition of transform-based algorithms for decompression into a form suitable for parallel processing is described. A technique for automatic load balancing among the processors is developed and discussed, and results are presented with image statistics and data rates. Finally, techniques for accelerating the system throughput are analyzed and results from the application of one such modification are described.

  10. A computer program for analyzing the energy consumption of automatically controlled lighting systems

    NASA Astrophysics Data System (ADS)

    1982-01-01

    A computer code to predict the performance of controlled lighting systems with respect to their energy saving capabilities is presented. The computer program provides a mathematical model from which comparisons of control schemes can be made on an economic basis only. The program does not calculate daylighting, but uses daylighting values as input. The program can analyze any of three power input versus light output relationships, continuous dimming with a linear response, continuous dimming with a nonlinear response, or discrete stepped response. Any of these options can be used with or without daylighting, making six distinct modes of control system operation. These relationships are described in detail. The major components of the program are discussed and examples are included to explain how to run the program.
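
    A hedged sketch of evaluating the three power-versus-light-output relationships against hourly daylighting input follows; the response curves, daylight fractions, and installed load are invented and are not the program's models.

      # Compare control schemes given externally supplied daylighting values (per the abstract).
      def linear_dimming(fraction_light):
          """Continuous dimming, linear response (assumed 10% minimum standby power)."""
          return 0.1 + 0.9 * fraction_light

      def nonlinear_dimming(fraction_light):
          """Continuous dimming, nonlinear response (assumed square-root shape)."""
          return 0.1 + 0.9 * fraction_light ** 0.5

      def stepped(fraction_light, steps=(0.0, 0.5, 1.0)):
          """Discrete stepped response: power follows the smallest step meeting the demand."""
          return min(s for s in steps if s >= fraction_light)

      daylight_fraction = [0.0, 0.2, 0.6, 0.9, 0.7, 0.3]   # hourly daylight contribution (input)
      installed_kw = 5.0
      for name, response in [("linear", linear_dimming), ("nonlinear", nonlinear_dimming),
                             ("stepped", stepped)]:
          kwh = sum(installed_kw * response(1.0 - d) for d in daylight_fraction)
          print(f"{name:9s} control: {kwh:.2f} kWh over {len(daylight_fraction)} h")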

  11. A numerical similarity approach for using retired Current Procedural Terminology (CPT) codes for electronic phenotyping in the Scalable Collaborative Infrastructure for a Learning Health System (SCILHS).

    PubMed

    Klann, Jeffrey G; Phillips, Lori C; Turchin, Alexander; Weiler, Sarah; Mandl, Kenneth D; Murphy, Shawn N

    2015-12-11

    Interoperable phenotyping algorithms, needed to identify patient cohorts meeting eligibility criteria for observational studies or clinical trials, require medical data in a consistent structured, coded format. Data heterogeneity limits such algorithms' applicability. Existing approaches are often: not widely interoperable; or, have low sensitivity due to reliance on the lowest common denominator (ICD-9 diagnoses). In the Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS) we endeavor to use the widely-available Current Procedural Terminology (CPT) procedure codes with ICD-9. Unfortunately, CPT changes drastically year-to-year - codes are retired/replaced. Longitudinal analysis requires grouping retired and current codes. BioPortal provides a navigable CPT hierarchy, which we imported into the Informatics for Integrating Biology and the Bedside (i2b2) data warehouse and analytics platform. However, this hierarchy does not include retired codes. We compared BioPortal's 2014AA CPT hierarchy with Partners Healthcare's SCILHS datamart, comprising three-million patients' data over 15 years. 573 CPT codes were not present in 2014AA (6.5 million occurrences). No existing terminology provided hierarchical linkages for these missing codes, so we developed a method that automatically places missing codes in the most specific "grouper" category, using the numerical similarity of CPT codes. Two informaticians reviewed the results. We incorporated the final table into our i2b2 SCILHS/PCORnet ontology, deployed it at seven sites, and performed a gap analysis and an evaluation against several phenotyping algorithms. The reviewers found the method placed the code correctly with 97 % precision when considering only miscategorizations ("correctness precision") and 52 % precision using a gold-standard of optimal placement ("optimality precision"). High correctness precision meant that codes were placed in a reasonable hierarchal position that a reviewer can quickly validate. Lower optimality precision meant that codes were not often placed in the optimal hierarchical subfolder. The seven sites encountered few occurrences of codes outside our ontology, 93 % of which comprised just four codes. Our hierarchical approach correctly grouped retired and non-retired codes in most cases and extended the temporal reach of several important phenotyping algorithms. We developed a simple, easily-validated, automated method to place retired CPT codes into the BioPortal CPT hierarchy. This complements existing hierarchical terminologies, which do not include retired codes. The approach's utility is confirmed by the high correctness precision and successful grouping of retired with non-retired codes.
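
    A simplified sketch of the numerical-similarity placement idea follows; the grouper table is invented and is neither the authors' implementation nor the BioPortal 2014AA hierarchy.

      # Place a retired CPT code into the grouper whose numeric range contains it or is nearest.
      GROUPERS = {
          "Surgery - integumentary system": (10021, 19499),
          "Surgery - musculoskeletal system": (20005, 29999),
          "Radiology - diagnostic imaging": (70010, 76499),
      }

      def place_retired_code(cpt_code):
          best, best_dist = None, float("inf")
          for grouper, (lo, hi) in GROUPERS.items():
              if lo <= cpt_code <= hi:
                  return grouper, 0                      # code falls inside this range
              dist = min(abs(cpt_code - lo), abs(cpt_code - hi))
              if dist < best_dist:
                  best, best_dist = grouper, dist
          return best, best_dist

      for retired in (19030, 69990):                     # hypothetical retired codes
          print(retired, "->", place_retired_code(retired))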

  12. LexValueSets: An Approach for Context-Driven Value Sets Extraction

    PubMed Central

    Pathak, Jyotishman; Jiang, Guoqian; Dwarkanath, Sridhar O.; Buntrock, James D.; Chute, Christopher G.

    2008-01-01

    The ability to model, share and re-use value sets across multiple medical information systems is an important requirement. However, generating value sets semi-automatically from a terminology service is still an unresolved issue, in part due to the lack of linkage to clinical context patterns that provide the constraints in defining a concept domain and invocation of value sets extraction. Towards this goal, we develop and evaluate an approach for context-driven automatic value sets extraction based on a formal terminology model. The crux of the technique is to identify and define the context patterns from various domains of discourse and leverage them for value set extraction using two complementary ideas based on (i) local terms provided by the Subject Matter Experts (extensional) and (ii) semantic definition of the concepts in coding schemes (intensional). A prototype was implemented based on SNOMED CT rendered in the LexGrid terminology model and a preliminary evaluation is presented. PMID:18998955

  13. Applications of digital image acquisition in anthropometry

    NASA Technical Reports Server (NTRS)

    Woolford, B.; Lewis, J. L.

    1981-01-01

    A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.

  14. 49 CFR 236.552 - Insulation resistance; requirement.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., INSPECTION, MAINTENANCE, AND REPAIR OF SIGNAL AND TRAIN CONTROL SYSTEMS, DEVICES, AND APPLIANCES Automatic... control system, or automatic train stop system shall be not less than one megohm, and that of an... system, automatic train control system, or automatic train stop system, and 20,000 ohms for an...

  15. Visualization Co-Processing of a CFD Simulation

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi

    1999-01-01

    OVERFLOW, a widely used CFD simulation code, is combined with a visualization system, pV3, to experiment with an environment for simulation/visualization co-processing on a SGI Origin 2000 computer(O2K) system. The shared memory version of the solver is used with the O2K 'pfa' preprocessor invoked to automatically discover parallelism in the source code. No other explicit parallelism is enabled. In order to study the scaling and performance of the visualization co-processing system, sample runs are made with different processor groups in the range of 1 to 254 processors. The data exchange between the visualization system and the simulation system is rapid enough for user interactivity when the problem size is small. This shared memory version of OVERFLOW, with minimal parallelization, does not scale well to an increasing number of available processors. The visualization task takes about 18 to 30% of the total processing time and does not appear to be a major contributor to the poor scaling. Improper load balancing and inter-processor communication overhead are contributors to this poor performance. Work is in progress which is aimed at obtaining improved parallel performance of the solver and removing the limitations of serial data transfer to pV3 by examining various parallelization/communication strategies, including the use of the explicit message passing.

  16. The fidelity of Kepler eclipsing binary parameters inferred by the neural network

    NASA Astrophysics Data System (ADS)

    Holanda, N.; da Silva, J. R. P.

    2018-04-01

    This work aims to test the fidelity and efficiency of obtaining automatic orbital elements of eclipsing binary systems from light curves using neural network models. We selected a random sample of 78 systems from over 1400 detached eclipsing binaries obtained from the Kepler Eclipsing Binaries Catalog and processed using the neural network approach. The orbital parameters of the sample systems were measured applying the traditional method of light curve adjustment with uncertainties calculated by the bootstrap method, employing the JKTEBOP code. These estimated parameters were compared with those obtained by the neural network approach for the same systems. The results reveal a good agreement between techniques for the sum of the fractional radii and moderate agreement for e cos ω and e sin ω, but orbital inclination is clearly underestimated in neural network tests.

  17. The fidelity of Kepler eclipsing binary parameters inferred by the neural network

    NASA Astrophysics Data System (ADS)

    Holanda, N.; da Silva, J. R. P.

    2018-07-01

    This work aims to test the fidelity and efficiency of obtaining automatic orbital elements of eclipsing binary systems, from light curves using neural network models. We selected a random sample with 78 systems, from over 1400 detached eclipsing binaries obtained from the Kepler Eclipsing Binaries Catalog, processed using the neural network approach. The orbital parameters of the sample systems were measured applying the traditional method of light-curve adjustment with uncertainties calculated by the bootstrap method, employing the JKTEBOP code. These estimated parameters were compared with those obtained by the neural network approach for the same systems. The results reveal a good agreement between techniques for the sum of the fractional radii and moderate agreement for e cosω and e sinω, but orbital inclination is clearly underestimated in neural network tests.

  18. A Semantic Analysis Method for Scientific and Engineering Code

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.

    1998-01-01

    This paper develops a procedure to statically analyze aspects of the meaning or semantics of scientific and engineering code. The analysis involves adding semantic declarations to a user's code and parsing this semantic knowledge with the original code using multiple expert parsers. These semantic parsers are designed to recognize formulae in different disciplines including physical and mathematical formulae and geometrical position in a numerical scheme. In practice, a user would submit code with semantic declarations of primitive variables to the analysis procedure, and its semantic parsers would automatically recognize and document some static, semantic concepts and locate some program semantic errors. A prototype implementation of this analysis procedure is demonstrated. Further, the relationship between the fundamental algebraic manipulations of equations and the parsing of expressions is explained. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.
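
    As a loose analogue of the approach (the real system parses scientific Fortran/C code with multiple expert parsers), the sketch below attaches assumed dimensional declarations to primitive variables and checks small Python expressions for dimensional consistency.

      # Semantic declarations plus a tiny dimensional-consistency checker over '+', '-', '*', '/'.
      import ast

      DECLS = {"rho": "kg/m^3", "u": "m/s", "p": "Pa", "dt": "s"}    # assumed declarations
      DIMS = {"kg/m^3": {"kg": 1, "m": -3}, "m/s": {"m": 1, "s": -1},
              "Pa": {"kg": 1, "m": -1, "s": -2}, "s": {"s": 1}}

      def combine(a, b, sign):
          return {k: a.get(k, 0) + sign * b.get(k, 0) for k in set(a) | set(b)}

      def dim_of(node):
          if isinstance(node, ast.Name):
              return DIMS[DECLS[node.id]]
          if isinstance(node, ast.BinOp):
              left, right = dim_of(node.left), dim_of(node.right)
              if isinstance(node.op, (ast.Add, ast.Sub)):
                  if {k: v for k, v in left.items() if v} != {k: v for k, v in right.items() if v}:
                      raise TypeError("adding quantities with different dimensions")
                  return left
              return combine(left, right, +1 if isinstance(node.op, ast.Mult) else -1)
          raise NotImplementedError(ast.dump(node))

      print(dim_of(ast.parse("rho * u * u", mode="eval").body))   # same dimensions as pressure
      try:
          dim_of(ast.parse("p + u", mode="eval").body)
      except TypeError as err:
          print("semantic error:", err)                           # p + u is dimensionally invalid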

  19. Use of the QR Reader to Provide Real-Time Evaluation of Residents' Skills Following Surgical Procedures.

    PubMed

    Reynolds, Kellin; Barnhill, Danny; Sias, Jamie; Young, Amy; Polite, Florencia Greer

    2014-12-01

    A portable electronic method of providing instructional feedback and recording an evaluation of resident competency immediately following surgical procedures has not previously been documented in obstetrics and gynecology. This report presents a unique electronic format that documents resident competency and encourages verbal communication between faculty and residents immediately following operative procedures. The Microsoft Tag system and SurveyMonkey platform were linked by a 2-D QR code using Microsoft QR code generator. Each resident was given a unique code (TAG) embedded onto an ID card. An evaluation form was attached to each resident's file in SurveyMonkey. Postoperatively, supervising faculty scanned the resident's TAG with a smartphone and completed the brief evaluation using the phone's screen. The evaluation was reviewed with the resident and automatically submitted to the resident's educational file. The evaluation system was quickly accepted by residents and faculty. Of 43 residents and faculty in the study, 38 (88%) responded to a survey 8 weeks after institution of the electronic evaluation system. Thirty (79%) of the 38 indicated it was superior to the previously used handwritten format. The electronic system demonstrated improved utilization compared with paper evaluations, with a mean of 23 electronic evaluations submitted per resident during a 6-month period versus 14 paper assessments per resident during an earlier period of 6 months. This streamlined portable electronic evaluation is an effective tool for direct, formative feedback for residents, and it creates a longitudinal record of resident progress. Satisfaction with, and use of, this evaluation system was high.

  20. Use of the QR Reader to Provide Real-Time Evaluation of Residents' Skills Following Surgical Procedures

    PubMed Central

    Reynolds, Kellin; Barnhill, Danny; Sias, Jamie; Young, Amy; Polite, Florencia Greer

    2014-01-01

    Background A portable electronic method of providing instructional feedback and recording an evaluation of resident competency immediately following surgical procedures has not previously been documented in obstetrics and gynecology. Objective This report presents a unique electronic format that documents resident competency and encourages verbal communication between faculty and residents immediately following operative procedures. Methods The Microsoft Tag system and SurveyMonkey platform were linked by a 2-D QR code using Microsoft QR code generator. Each resident was given a unique code (TAG) embedded onto an ID card. An evaluation form was attached to each resident's file in SurveyMonkey. Postoperatively, supervising faculty scanned the resident's TAG with a smartphone and completed the brief evaluation using the phone's screen. The evaluation was reviewed with the resident and automatically submitted to the resident's educational file. Results The evaluation system was quickly accepted by residents and faculty. Of 43 residents and faculty in the study, 38 (88%) responded to a survey 8 weeks after institution of the electronic evaluation system. Thirty (79%) of the 38 indicated it was superior to the previously used handwritten format. The electronic system demonstrated improved utilization compared with paper evaluations, with a mean of 23 electronic evaluations submitted per resident during a 6-month period versus 14 paper assessments per resident during an earlier period of 6 months. Conclusions This streamlined portable electronic evaluation is an effective tool for direct, formative feedback for residents, and it creates a longitudinal record of resident progress. Satisfaction with, and use of, this evaluation system was high. PMID:26140128

  1. The software for automatic creation of the formal grammars used by speech recognition, computer vision, editable text conversion systems, and some new functions

    NASA Astrophysics Data System (ADS)

    Kardava, Irakli; Tadyszak, Krzysztof; Gulua, Nana; Jurga, Stefan

    2017-02-01

    To make environmental perception by artificial intelligence more flexible, supporting software modules are needed that can automate the creation of language-specific syntax and perform further analysis for relevant decisions based on semantic functions. Under our proposed approach, pairs of formal rules can be created for given sentences (in the case of natural languages) or statements (in the case of special languages) with the help of a computer vision, speech recognition, or editable text conversion system, and then automatically improved. In other words, we have developed an approach that significantly improves the automation of the training process of artificial intelligence, and as a result gives it a higher level of self-developing skills, independent of the user. Based on this approach, we have developed a demo version of the software, which includes the algorithm and code implementing all of the above-mentioned components (computer vision, speech recognition, and editable text conversion). The program can work in a multi-stream mode and simultaneously create a syntax based on information received from several sources.

  2. Automatically generated code for relativistic inhomogeneous cosmologies

    NASA Astrophysics Data System (ADS)

    Bentivegna, Eloisa

    2017-02-01

    The applications of numerical relativity to cosmology are on the rise, contributing insight into such cosmological problems as structure formation, primordial phase transitions, gravitational-wave generation, and inflation. In this paper, I present the infrastructure for the computation of inhomogeneous dust cosmologies which was used recently to measure the effect of nonlinear inhomogeneity on the cosmic expansion rate. I illustrate the code's architecture, provide evidence for its correctness in a number of familiar cosmological settings, and evaluate its parallel performance for grids of up to several billion points. The code, which is available as free software, is based on the Einstein Toolkit infrastructure, and in particular leverages the automated code generation capabilities provided by its component Kranc.

  3. 49 CFR 236.825 - System, automatic train control.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false System, automatic train control. 236.825 Section..., INSPECTION, MAINTENANCE, AND REPAIR OF SIGNAL AND TRAIN CONTROL SYSTEMS, DEVICES, AND APPLIANCES Definitions § 236.825 System, automatic train control. A system so arranged that its operation will automatically...

  4. 49 CFR 236.825 - System, automatic train control.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 4 2011-10-01 2011-10-01 false System, automatic train control. 236.825 Section..., INSPECTION, MAINTENANCE, AND REPAIR OF SIGNAL AND TRAIN CONTROL SYSTEMS, DEVICES, AND APPLIANCES Definitions § 236.825 System, automatic train control. A system so arranged that its operation will automatically...

  5. Research in Parallel Algorithms and Software for Computational Aerosciences

    DOT National Transportation Integrated Search

    1996-04-01

    Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed...

  6. 28-Bit serial word simulator/monitor

    NASA Technical Reports Server (NTRS)

    Durbin, J. W.

    1979-01-01

    Modular interface unit transfers data at high speeds along four channels. Device expedites variable-word-length communication between computers. Operation eases exchange of bit information by automatically reformatting coded input data and status information to match requirements of output.

  7. Do perceived context pictures automatically activate their phonological code?

    PubMed

    Jescheniak, Jörg D; Oppermann, Frank; Hantsch, Ansgar; Wagner, Valentin; Mädebach, Andreas; Schriefers, Herbert

    2009-01-01

    Morsella and Miozzo (Morsella, E., & Miozzo, M. (2002). Evidence for a cascade model of lexical access in speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 555-563) have reported that the to-be-ignored context pictures become phonologically activated when participants name a target picture, and took this finding as support for cascaded models of lexical retrieval in speech production. In a replication and extension of their experiment in German, we failed to obtain priming effects from context pictures phonologically related to a to-be-named target picture. By contrast, corresponding context words (i.e., the names of the respective pictures) and the same context pictures, when used in an identity condition, did reliably facilitate the naming process. This pattern calls into question the generality of the claim advanced by Morsella and Miozzo that perceptual processing of pictures in the context of a naming task automatically leads to the activation of corresponding lexical-phonological codes.

  8. Predictive assimilation framework to support contaminated site understanding and remediation

    NASA Astrophysics Data System (ADS)

    Versteeg, R. J.; Bianchi, M.; Hubbard, S. S.

    2014-12-01

    Subsurface system behavior at contaminated sites is driven and controlled by the interplay of physical, chemical, and biological processes occurring at multiple temporal and spatial scales. Effective remediation and monitoring planning requires an understanding of this complexity that is current, predictive (with some level of confidence) and actionable. We present and demonstrate a predictive assimilation framework (PAF). This framework automatically ingests, quality controls and stores near real-time environmental data and processes these data using different inversion and modeling codes to provide information on the current state and evolution of the subsurface system. PAF is implemented as a cloud-based software application which has five components: (1) data acquisition, (2) data management, (3) data assimilation and processing, (4) visualization and result delivery, and (5) orchestration. Access to and interaction with PAF is done through a standard browser. PAF is designed to be modular so that it can ingest and process different data streams dependent on the site. We will present an implementation of PAF which uses data from a highly instrumented site (the DOE Rifle Subsurface Biogeochemistry Field Observatory in Rifle, Colorado), for which PAF automatically ingests hydrological data and forward models groundwater flow in the saturated zone.

  9. Consolidated List of Debarred, Suspended, and Ineligible Contractors as of April 10, 1985.

    DTIC Science & Technology

    1985-04-01

    Scanned fragment of the directory listing: caller contact information, including FTS (Federal Telecommunications System) and AUTOVON (Automatic Voice Network) numbers for the General Services Administration, Washington, DC 20405, followed by cause and treatment codes, among them debarment for violation of the Buy American Act (41 U.S.C. 10b(b)).

  10. Computer-Aided Software Engineering - An approach to real-time software development

    NASA Technical Reports Server (NTRS)

    Walker, Carrie K.; Turkovich, John J.

    1989-01-01

    A new software engineering discipline is Computer-Aided Software Engineering (CASE), a technology aimed at automating the software development process. This paper explores the development of CASE technology, particularly in the area of real-time/scientific/engineering software, and a history of CASE is given. The proposed software development environment for the Advanced Launch System (ALS CASE) is described as an example of an advanced software development system for real-time/scientific/engineering (RT/SE) software. The Automated Programming Subsystem of ALS CASE automatically generates executable code and corresponding documentation from a suitably formatted specification of the software requirements. Software requirements are interactively specified in the form of engineering block diagrams. Several demonstrations of the Automated Programming Subsystem are discussed.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Epiney, A.; Canepa, S.; Zerkak, O.

    The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role to achieve a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to e.g. detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of specific analysis aspects, including e.g. code version, transient-specific simulation methodology and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6. The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: First, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension. In this case, imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs is investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.

  12. fMRat: an extension of SPM for a fully automatic analysis of rodent brain functional magnetic resonance series.

    PubMed

    Chavarrías, Cristina; García-Vázquez, Verónica; Alemán-Gómez, Yasser; Montesinos, Paula; Pascau, Javier; Desco, Manuel

    2016-05-01

    The purpose of this study was to develop a multi-platform automatic software tool for full processing of fMRI rodent studies. Existing tools require the usage of several different plug-ins, a significant user interaction and/or programming skills. Based on a user-friendly interface, the tool provides statistical parametric brain maps (t and Z) and percentage of signal change for user-provided regions of interest. The tool is coded in MATLAB (MathWorks(®)) and implemented as a plug-in for SPM (Statistical Parametric Mapping, the Wellcome Trust Centre for Neuroimaging). The automatic pipeline loads default parameters that are appropriate for preclinical studies and processes multiple subjects in batch mode (from images in either Nifti or raw Bruker format). In advanced mode, all processing steps can be selected or deselected and executed independently. Processing parameters and workflow were optimized for rat studies and assessed using 460 male-rat fMRI series on which we tested five smoothing kernel sizes and three different hemodynamic models. A smoothing kernel of FWHM = 1.2 mm (four times the voxel size) yielded the highest t values at the somatosensorial primary cortex, and a boxcar response function provided the lowest residual variance after fitting. fMRat offers the features of a thorough SPM-based analysis combined with the functionality of several SPM extensions in a single automatic pipeline with a user-friendly interface. The code and sample images can be downloaded from https://github.com/HGGM-LIM/fmrat .

  13. 49 CFR 236.826 - System, automatic train stop.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 4 2011-10-01 2011-10-01 false System, automatic train stop. 236.826 Section 236..., INSPECTION, MAINTENANCE, AND REPAIR OF SIGNAL AND TRAIN CONTROL SYSTEMS, DEVICES, AND APPLIANCES Definitions § 236.826 System, automatic train stop. A system so arranged that its operation will automatically...

  14. 49 CFR 236.826 - System, automatic train stop.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false System, automatic train stop. 236.826 Section 236..., INSPECTION, MAINTENANCE, AND REPAIR OF SIGNAL AND TRAIN CONTROL SYSTEMS, DEVICES, AND APPLIANCES Definitions § 236.826 System, automatic train stop. A system so arranged that its operation will automatically...

  15. Fast Computation of the Two-Point Correlation Function in the Age of Big Data

    NASA Astrophysics Data System (ADS)

    Pellegrino, Andrew; Timlin, John

    2018-01-01

    We present a new code which quickly computes the two-point correlation function for large sets of astronomical data. This code combines the ease of use of Python with the speed of parallel shared libraries written in C. We include the capability to compute the auto- and cross-correlation statistics, and allow the user to calculate the three-dimensional and angular correlation functions. Additionally, the code automatically divides the user-provided sky masks into contiguous subsamples of similar size, using the HEALPix pixelization scheme, for the purpose of resampling. Errors are computed using jackknife and bootstrap resampling in a way that adds negligible extra runtime, even with many subsamples. We demonstrate comparable speed with other clustering codes, and code accuracy compared to known and analytic results.
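
    A small, self-contained sketch of the basic machinery (KD-tree pair counting, the natural DD/RR - 1 estimator, and jackknife errors from spatial subsamples) follows; it is not the code described above, and since the points are uniform random the measured correlation should be close to zero.

      # Pair counting and jackknife resampling on random points (illustrative only).
      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(0)
      data = rng.uniform(0.0, 100.0, size=(5000, 3))
      rand = rng.uniform(0.0, 100.0, size=(20000, 3))
      r = 5.0                                            # separation scale [arbitrary units]

      def xi_natural(d, rnd, r):
          """Natural estimator DD/RR - 1 for pairs closer than r (self pairs removed)."""
          dd = cKDTree(d).count_neighbors(cKDTree(d), r) - len(d)
          rr = cKDTree(rnd).count_neighbors(cKDTree(rnd), r) - len(rnd)
          norm = (len(rnd) * (len(rnd) - 1)) / (len(d) * (len(d) - 1))
          return norm * dd / rr - 1.0

      # Jackknife: split the volume into slabs along x and drop one slab at a time.
      edges = np.linspace(0.0, 100.0, 9)
      labels_d = np.digitize(data[:, 0], edges[1:-1])
      labels_r = np.digitize(rand[:, 0], edges[1:-1])
      samples = [xi_natural(data[labels_d != k], rand[labels_r != k], r) for k in range(8)]
      err = np.std(samples) * np.sqrt(len(samples) - 1)  # jackknife error estimate
      print(f"xi(<{r}) = {xi_natural(data, rand, r):.4f} +/- {err:.4f}")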

  16. Computer-aided system design

    NASA Technical Reports Server (NTRS)

    Walker, Carrie K.

    1991-01-01

    A technique has been developed for combining features of a systems architecture design and assessment tool and a software development tool. This technique reduces simulation development time and expands simulation detail. The Architecture Design and Assessment System (ADAS), developed at the Research Triangle Institute, is a set of computer-assisted engineering tools for the design and analysis of computer systems. The ADAS system is based on directed graph concepts and supports the synthesis and analysis of software algorithms mapped to candidate hardware implementations. Greater simulation detail is provided by the ADAS functional simulator. With the functional simulator, programs written in either Ada or C can be used to provide a detailed description of graph nodes. A Computer-Aided Software Engineering tool developed at the Charles Stark Draper Laboratory (CSDL CASE) automatically generates Ada or C code from engineering block diagram specifications designed with an interactive graphical interface. A technique to use the tools together has been developed, which further automates the design process.

  17. 49 CFR 236.824 - System, automatic block signal.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false System, automatic block signal. 236.824 Section... § 236.824 System, automatic block signal. A block signal system wherein the use of each block is governed by an automatic block signal, cab signal, or both. ...

  18. Benchmarking and Evaluating Unified Memory for OpenMP GPU Offloading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Alok; Li, Lingda; Kong, Martin

    The latest OpenMP standard offers automatic device offloading capabilities which facilitate GPU programming. Despite this, there remain many challenges. One of these is the unified memory feature introduced in recent GPUs. GPUs in current and future HPC systems have enhanced support for unified memory space. In such systems, CPU and GPU can access each other's memory transparently, that is, the data movement is managed automatically by the underlying system software and hardware. Memory oversubscription is also possible in these systems. However, there is a significant lack of knowledge about how this mechanism will perform, and how programmers should use it. We have modified several benchmark codes in the Rodinia benchmark suite to study the behavior of OpenMP accelerator extensions and have used them to explore the impact of unified memory in an OpenMP context. We moreover modified the open source LLVM compiler to allow OpenMP programs to exploit unified memory. The results of our evaluation reveal that, while the performance of unified memory is comparable with that of normal GPU offloading for benchmarks with little data reuse, it suffers from significant overhead when GPU memory is oversubscribed for benchmarks with a large amount of data reuse. Based on these results, we provide several guidelines for programmers to achieve better performance with unified memory.

  19. Reconstructing past occupational exposures: how reliable are women's reports of their partner's occupation?

    PubMed

    Tagiyeva, Nara; Semple, Sean; Devereux, Graham; Sherriff, Andrea; Henderson, John; Elias, Peter; Ayres, Jon G

    2011-06-01

    Most of the evidence on agreement between self- and proxy-reported occupational data comes from interview-based studies. The authors aimed to examine agreement between women's reports of their partner's occupation and their partner's own description using questionnaire-based data collected as part of the prospective, population-based Avon Longitudinal Study of Parents and Children. Information on present occupation was self-reported by women's partners and proxy-reported by women through questionnaires administered at 8 and 21 months after the birth of a child. Job titles were coded to the Standard Occupational Classification (SOC2000) using software developed by the University of Warwick (Computer-Assisted Structured Coding Tool). The accuracy of proxy reports was expressed as percentage agreement and kappa coefficients for four-, three- and two-digit SOC2000 codes obtained in automatic and semiautomatic (manually improved) coding modes. Data from 6016 couples at 8 months and 5232 couples at 21 months postnatally were included in the analyses. The agreement between men's self-reported occupation and women's report of their partner's occupation in fully automatic coding mode at the four-, three- and two-digit code level was 65%, 71% and 77% at 8 months and 68%, 73% and 76% at 21 months. Agreement was slightly improved by semiautomatic coding of occupations: 73%/73%, 78%/77% and 83%/80% at 8/21 months respectively. These results suggest that women's descriptions of their partner's occupation can be a valuable tool in epidemiological research where data from partners are not available; nevertheless, the women's and partners' reports disagreed at the two-digit level of SOC2000 coding in approximately one in five cases. Proxy reporting of occupation introduces a statistically significant degree of classification error. The effects of occupational misclassification by proxy reporting in retrospective occupational epidemiological studies based on questionnaire data should be considered.
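
    To illustrate the agreement statistics reported above (percentage agreement and kappa coefficients), here is a minimal sketch using scikit-learn; the occupation codes shown are hypothetical examples, not the study data.

    ```python
    # Percentage agreement and Cohen's kappa for self- vs proxy-reported occupation codes.
    # The example codes below are hypothetical, not data from the study.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    self_report  = np.array(["11", "21", "53", "92", "21", "11"])   # partner's own 2-digit code
    proxy_report = np.array(["11", "24", "53", "92", "21", "12"])   # woman's report of partner's code

    agreement = np.mean(self_report == proxy_report) * 100
    kappa = cohen_kappa_score(self_report, proxy_report)
    print(f"Agreement: {agreement:.0f}%  kappa: {kappa:.2f}")
    ```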

  20. Automatic three-dimensional measurement of large-scale structure based on vision metrology.

    PubMed

    Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng

    2014-01-01

    All relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structures are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. A three-stage strategy starting with view clustering is then proposed to achieve automatic network orientation. For matching of noncoded targets, the concept of a matching path is proposed, and matches for each noncoded target are found by selecting the optimal matching path among all possible ones, using a novel voting strategy. Experiments on the fixed keel of an airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods.
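
    A minimal sketch of the blob-detection-and-clustering step for retroreflective coded targets, assuming OpenCV and scikit-learn; the circularity threshold, grouping radius and minimum disc count are illustrative assumptions, not the paper's parameters.

    ```python
    # Detect bright circular blobs (candidate retroreflective discs) and group nearby
    # blobs into candidate coded targets. Parameters are illustrative assumptions.
    import cv2
    import numpy as np
    from sklearn.cluster import DBSCAN

    def find_coded_targets(gray_image, group_radius=40.0):
        params = cv2.SimpleBlobDetector_Params()
        params.filterByCircularity = True
        params.minCircularity = 0.8          # keep near-circular blobs only
        params.filterByColor = True
        params.blobColor = 255               # retroreflective discs appear bright
        detector = cv2.SimpleBlobDetector_create(params)
        keypoints = detector.detect(gray_image)
        if not keypoints:
            return []
        centers = np.array([kp.pt for kp in keypoints])
        # Discs belonging to one coded target lie close together: cluster by distance.
        labels = DBSCAN(eps=group_radius, min_samples=3).fit_predict(centers)
        return [centers[labels == k] for k in set(labels) if k != -1]
    ```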

  1. An automated, broad-based, near real-time public health surveillance system using presentations to hospital Emergency Departments in New South Wales, Australia.

    PubMed

    Muscatello, David J; Churches, Tim; Kaldor, Jill; Zheng, Wei; Chiu, Clayton; Correll, Patricia; Jorm, Louisa

    2005-12-22

    In a climate of concern over bioterrorism threats and emergent diseases, public health authorities are trialling more timely surveillance systems. The 2003 Rugby World Cup (RWC) provided an opportunity to test the viability of a near real-time syndromic surveillance system in metropolitan Sydney, Australia. We describe the development and early results of this largely automated system that used data routinely collected in Emergency Departments (EDs). Twelve of 49 EDs in the Sydney metropolitan area automatically transmitted surveillance data from their existing information systems to a central database in near real-time. Information captured for each ED visit included patient demographic details, presenting problem and nursing assessment entered as free-text at triage time, physician-assigned provisional diagnosis codes, and status at departure from the ED. Both diagnoses from the EDs and triage text were used to assign syndrome categories. The text information was automatically classified into one or more of 26 syndrome categories using automated "naïve Bayes" text categorisation techniques. Automated processes were used to analyse both diagnosis and free text-based syndrome data and to produce web-based statistical summaries for daily review. An adjusted cumulative sum (cusum) was used to assess the statistical significance of trends. During the RWC the system did not identify any major public health threats associated with the tournament, mass gatherings or the influx of visitors. This was consistent with evidence from other sources, although two known outbreaks were already in progress before the tournament. Limited baseline in early monitoring prevented the system from automatically identifying these ongoing outbreaks. Data capture was invisible to clinical staff in EDs and did not add to their workload. We have demonstrated the feasibility and potential utility of syndromic surveillance using routinely collected data from ED information systems. Key features of our system are its nil impact on clinical staff, and its use of statistical methods to assign syndrome categories based on clinical free text information. The system is ongoing, and has expanded to cover 30 EDs. Results of formal evaluations of both the technical efficiency and the public health impacts of the system will be described subsequently.
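
    A minimal sketch of the two analytic pieces described above: naïve Bayes classification of triage free text into syndrome categories and a simple one-sided cusum on daily counts. It assumes scikit-learn; the syndrome categories, training snippets and cusum thresholds are hypothetical, not the system's.

    ```python
    # Naive Bayes classification of triage free text into syndrome categories,
    # plus a simple one-sided cusum on daily counts. Training data are hypothetical.
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    train_text = ["fever and cough for two days", "vomiting and diarrhoea overnight",
                  "shortness of breath, wheeze", "profuse watery diarrhoea"]
    train_label = ["influenza-like", "gastroenteritis", "respiratory", "gastroenteritis"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(train_text, train_label)
    print(model.predict(["coughing, feverish since yesterday"]))

    def cusum(daily_counts, expected, k=0.5, h=4.0):
        """Flag days where the standardized cumulative excess over 'expected' exceeds h."""
        s, flags = 0.0, []
        sd = max(np.std(daily_counts), 1e-9)
        for day, count in enumerate(daily_counts):
            s = max(0.0, s + (count - expected) / sd - k)
            if s > h:
                flags.append(day)
        return flags
    ```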

  2. The Use of Barker Coded Signal on the Measurement of Wave Velocity of Rock

    NASA Astrophysics Data System (ADS)

    Zhu, W.; Wu, H.

    2016-12-01

    The wave velocity of rock is an important petrophysical parameter; it can be used to calculate elastic parameters and to monitor variations in the stress applied to rock, and its anisotropy reflects the rock's anisotropy. Furthermore, since the coda wave is more sensitive to changes in rock properties, its velocity variation has been applied to monitor variations in rock structure caused by varying temperature, stress, water saturation and other factors. However, velocity measurements depend heavily on the signal-to-noise ratio (SNR) of the signals, because a low SNR makes it difficult to identify the relevant arrivals. Coded excitation, a technique widely used in radar and medical ultrasound systems, can solve this problem. Although this technique effectively improves the SNR and resolution of the received signal, very high sidelobes remain after traditional matched filtering, so a pseudo-inverse filter was applied to suppress them. After comparing different coded signals, a Barker coded signal was selected to measure the P-wave velocity of Plexiglas, sandstone, granite and marble with an automatic measurement method, and the results were compared with single-pulse measurements; the coded-signal measurements agreed more closely with the manual measurements. Moreover, coda wave measurements of granite under loading were also made with the Barker coded signal, and the detection results of the coded signals were again better than those of the single pulse. In conclusion, the experiments verify the effectiveness and reliability of coded signals for the measurement of the wave velocity of rock.
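
    A minimal numpy sketch of the coded-excitation idea: transmit a Barker-13 coded pulse and recover the arrival time by matched filtering (cross-correlation with the code). The sample rate, delay and noise level are illustrative assumptions, and the paper's pseudo-inverse sidelobe suppression is not shown.

    ```python
    # Barker-13 coded excitation and matched filtering to pick an arrival time in noise.
    # Pulse/noise parameters are illustrative; pseudo-inverse sidelobe suppression omitted.
    import numpy as np

    barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)
    samples_per_chip = 10
    tx = np.repeat(barker13, samples_per_chip)          # transmitted coded pulse

    fs = 1.0e6                                          # sample rate (Hz), assumed
    true_delay = 2500                                   # arrival delay in samples, assumed
    rx = np.zeros(8000)
    rx[true_delay:true_delay + tx.size] += 0.2 * tx     # weak received copy
    rx += np.random.normal(0.0, 0.5, rx.size)           # strong noise

    matched = np.correlate(rx, tx, mode="valid")        # matched filter = correlate with the code
    estimated_delay = int(np.argmax(matched))
    travel_time = estimated_delay / fs                  # seconds; velocity = sample length / travel_time
    print(estimated_delay, true_delay)
    ```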

  3. Vulnerabilities in Bytecode Removed by Analysis, Nuanced Confinement and Diversification (VIBRANCE)

    DTIC Science & Technology

    2015-06-01

    The VIBRANCE tool starts with a vulnerable Java application and automatically hardens it against SQL injection, OS command injection, file path traversal...

  4. Adaptive pseudolinear compensators of dynamic characteristics of automatic control systems

    NASA Astrophysics Data System (ADS)

    Skorospeshkin, M. V.; Sukhodoev, M. S.; Timoshenko, E. A.; Lenskiy, F. V.

    2016-04-01

    Adaptive pseudolinear gain and phase compensators of dynamic characteristics of automatic control systems are suggested. The automatic control system performance with adaptive compensators has been explored. The efficiency of pseudolinear adaptive compensators in the automatic control systems with time-varying parameters has been demonstrated.

  5. Automatic detection and decoding of honey bee waggle dances

    PubMed Central

    Wild, Benjamin; Rojas, Raúl; Landgraf, Tim

    2017-01-01

    The waggle dance is one of the most popular examples of animal communication. Forager bees direct their nestmates to profitable resources via a complex motor display. Essentially, the dance encodes the polar coordinates to the resource in the field. Unemployed foragers follow the dancer’s movements and then search for the advertised spots in the field. Throughout the last decades, biologists have employed different techniques to measure key characteristics of the waggle dance and decode the information it conveys. Early techniques involved the use of protractors and stopwatches to measure the dance orientation and duration directly from the observation hive. Recent approaches employ digital video recordings and manual measurements on screen. However, manual approaches are very time-consuming. Most studies, therefore, regard only small numbers of animals in short periods of time. We have developed a system capable of automatically detecting, decoding and mapping communication dances in real-time. In this paper, we describe our recording setup, the image processing steps performed for dance detection and decoding and an algorithm to map dances to the field. The proposed system performs with a detection accuracy of 90.07%. The decoded waggle orientation has an average error of -2.92° (± 7.37°), well within the range of human error. To evaluate and exemplify the system’s performance, a group of bees was trained to an artificial feeder, and all dances in the colony were automatically detected, decoded and mapped. The system presented here is the first of this kind made publicly available, including source code and hardware specifications. We hope this will foster quantitative analyses of the honey bee waggle dance. PMID:29236712
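
    A minimal sketch of the final mapping step described above: converting a decoded waggle orientation and waggle-run duration into a field location relative to the hive. It assumes a linear duration-to-distance calibration and a known solar azimuth; the calibration constant is a hypothetical value, not the paper's.

    ```python
    # Map a decoded waggle dance (orientation on the comb, waggle-run duration) to a
    # field location. The duration-to-distance calibration is a hypothetical assumption.
    import math

    def dance_to_field(waggle_angle_deg, waggle_duration_s,
                       sun_azimuth_deg, meters_per_second=750.0):
        # The waggle-run angle relative to vertical encodes the bearing relative to the sun.
        bearing = (sun_azimuth_deg + waggle_angle_deg) % 360.0
        # Waggle-run duration scales roughly linearly with distance (calibration assumed).
        distance = waggle_duration_s * meters_per_second
        # Convert polar (bearing, distance) to east/north offsets from the hive.
        east = distance * math.sin(math.radians(bearing))
        north = distance * math.cos(math.radians(bearing))
        return east, north

    print(dance_to_field(waggle_angle_deg=30.0, waggle_duration_s=2.0, sun_azimuth_deg=180.0))
    ```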

  6. The Pan-STARRS PS1 Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Magnier, E.

    The Pan-STARRS PS1 Image Processing Pipeline (IPP) performs the image processing and data analysis tasks needed to enable the scientific use of the images obtained by the Pan-STARRS PS1 prototype telescope. The primary goals of the IPP are to process the science images from the Pan-STARRS telescopes and make the results available to other systems within Pan-STARRS. It is also responsible for combining all of the science images in a given filter into a single representation of the non-variable component of the night sky, defined as the "Static Sky". To achieve these goals, the IPP also performs other analysis functions to generate the calibrations needed in the science image processing, and to occasionally use the derived data to generate improved astrometric and photometric reference catalogs. It also provides the infrastructure needed to store the incoming data and the resulting data products. The IPP inherits lessons learned, and in some cases code and prototype code, from several other astronomy image analysis systems, including Imcat (Kaiser), the Sloan Digital Sky Survey (REF), the Elixir system (Magnier & Cuillandre), and Vista (Tonry). Imcat and Vista have a large number of robust image processing functions. SDSS has demonstrated a working analysis pipeline and large-scale database system for a dedicated project. The Elixir system has demonstrated an automatic image processing system and an object database system for operational usage. This talk presents an overview of the IPP architecture, functional flow, code development structure, and selected analysis algorithms. Also discussed is the highly parallel hardware configuration necessary to support PS1 operational requirements. Finally, results are presented of the processing of images collected during PS1 early commissioning tasks using the Pan-STARRS Test Camera #3.

  7. An Implementation of Privacy Protection for a Surveillance Camera Using ROI Coding of JPEG2000 with Face Detection

    NASA Astrophysics Data System (ADS)

    Muneyasu, Mitsuji; Odani, Shuhei; Kitaura, Yoshihiro; Namba, Hitoshi

    When surveillance cameras are used, there are cases where privacy protection must be considered. This paper proposes a new privacy protection method that automatically degrades the face region in surveillance images. The proposed method combines ROI coding of JPEG2000 with a face detection method based on template matching. Experimental results show that the face region can be detected and hidden correctly.
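
    A minimal OpenCV sketch in the spirit of the pipeline above: locate a face region by template matching and degrade it before the frame is encoded. Heavy blurring stands in for JPEG2000 ROI coding, which OpenCV does not expose; the file names and match threshold are assumptions.

    ```python
    # Find a face region by template matching and degrade it before the frame is encoded.
    # Blurring stands in for JPEG2000 ROI coding; file names and threshold are assumptions.
    import cv2

    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)          # surveillance frame (assumed file)
    template = cv2.imread("face_template.png", cv2.IMREAD_GRAYSCALE)

    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)

    if max_val > 0.7:                                              # assumed detection threshold
        x, y = max_loc
        h, w = template.shape
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (31, 31), 0)   # degrade the face region

    cv2.imwrite("frame_protected.png", frame)
    ```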

  8. NESSUS/NASTRAN Interface

    NASA Technical Reports Server (NTRS)

    Millwater, Harry; Riha, David

    1996-01-01

    The NESSUS and NASTRAN computer codes were successfully integrated. The enhanced NESSUS code uses NASTRAN for the structural analysis and NESSUS for the probabilistic analysis. Any quantities in the NASTRAN bulk data input can be random variables, and any NASTRAN result that is written to the OUTPUT2 file can be returned to NESSUS as the finite element result. The interfacing between NESSUS and NASTRAN is handled automatically by NESSUS. NESSUS and NASTRAN can be run on different machines using the remote host option.

  9. Automatic Identification Technology (AIT): The Development of Functional Capability and Card Application Matrices

    DTIC Science & Technology

    1994-09-01

    650 B.C. in Asia Minor, coins were developed and used in acquiring goods and services. In France, during the eighteenth century, paper money made its... counterfeited. [INFO94, p. 23] Other weaknesses of bar code technology include limited data storage capability based on the bar code symbology used when... extremely accurate, with calculated error rates as low as 1 in 100 trillion, and are difficult to counterfeit. Strong magnetic fields cannot erase RF

  10. The PlusCal Algorithm Language

    NASA Astrophysics Data System (ADS)

    Lamport, Leslie

    Algorithms are different from programs and should not be described with programming languages. The only simple alternative to programming languages has been pseudo-code. PlusCal is an algorithm language that can be used right now to replace pseudo-code, for both sequential and concurrent algorithms. It is based on the TLA+ specification language, and a PlusCal algorithm is automatically translated to a TLA+ specification that can be checked with the TLC model checker and reasoned about formally.

  11. 14 CFR 25.904 - Automatic takeoff thrust control system (ATTCS).

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    § 25.904 Automatic takeoff thrust control system (ATTCS). Each applicant seeking approval for installation of an engine power control system that automatically resets the power or thrust on the operating engine(s) when...

  12. 14 CFR 25.904 - Automatic takeoff thrust control system (ATTCS).

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    § 25.904 Automatic takeoff thrust control system (ATTCS). Each applicant seeking approval for installation of an engine power control system that automatically resets the power or thrust on the operating engine(s) when...

  13. The Development of the Ducted Fan Noise Propagation and Radiation Code CDUCT-LaRC

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Farassat, F.; Pope, D. Stuart; Vatsa, Veer

    2003-01-01

    The development of the ducted fan noise propagation and radiation code CDUCT-LaRC at NASA Langley Research Center is described. This code calculates the propagation and radiation of given acoustic modes ahead of the fan face or aft of the exhaust guide vanes in the inlet or exhaust ducts, respectively. This paper gives a description of the modules comprising CDUCT-LaRC. The grid generation module provides automatic creation of numerical grids for complex (non-axisymmetric) geometries that include single or multiple pylons. Files for performing automatic inviscid mean flow calculations are also generated within this module. The duct propagation is based on the parabolic approximation theory of R. P. Dougherty. This theory allows the handling of complex internal geometries and the ability to study the effect of non-uniform (i.e. circumferentially and axially segmented) liners. Finally, the duct radiation module is based on the Ffowcs Williams-Hawkings (FW-H) equation with a penetrable data surface. Refraction of sound through the shear layer between the external flow and bypass duct flow is included. Results for benchmark annular ducts, as well as other geometries with pylons, are presented and compared with available analytical data.

  14. Computer-Aided Parallelizer and Optimizer

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  15. Burner liner thermal/structural load modeling: TRANCITS program user's manual

    NASA Technical Reports Server (NTRS)

    Maffeo, R.

    1985-01-01

    The Transfer Analysis Code to Interface Thermal/Structural Problems (TRANCITS) is discussed. TRANCITS satisfies all the objectives for transferring thermal data between heat transfer and structural models of combustor liners, and it can be used as a generic thermal translator between heat transfer and stress models of any component, regardless of the geometry. TRANCITS can accurately and efficiently convert the temperature distributions predicted by the heat transfer programs to those required by the stress codes. It can be used with both linear and nonlinear structural codes and can produce nodal temperatures, elemental centroid temperatures, or elemental Gauss point temperatures. The thermal output of both the MARC and SINDA heat transfer codes can be interfaced directly with TRANCITS, and it will automatically produce stress model input formatted for NASTRAN and MARC. Any thermal program and structural program can be interfaced by using the neutral input and output formats supported by TRANCITS.
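
    A minimal sketch of the generic idea of a thermal translator, mapping nodal temperatures from a thermal mesh onto structural node locations by interpolation, assuming scipy and using randomly generated hypothetical node sets; this illustrates the concept only, not the TRANCITS algorithm.

    ```python
    # Interpolate nodal temperatures from a thermal mesh onto structural node locations.
    # Illustrates the generic thermal-to-structural transfer idea, not TRANCITS itself.
    import numpy as np
    from scipy.interpolate import griddata

    # Hypothetical thermal-model nodes (x, y, z) and their temperatures.
    thermal_nodes = np.random.rand(200, 3)
    thermal_temps = 300.0 + 50.0 * thermal_nodes[:, 0]

    # Hypothetical structural-model node locations that need temperatures.
    structural_nodes = np.random.rand(50, 3)

    temps_on_structure = griddata(thermal_nodes, thermal_temps, structural_nodes,
                                  method="linear")
    # Fall back to nearest-neighbour where structural nodes lie outside the convex hull.
    nearest = griddata(thermal_nodes, thermal_temps, structural_nodes, method="nearest")
    temps_on_structure = np.where(np.isnan(temps_on_structure), nearest, temps_on_structure)
    ```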

  16. Expert system validation in prolog

    NASA Technical Reports Server (NTRS)

    Stock, Todd; Stachowitz, Rolf; Chang, Chin-Liang; Combs, Jacqueline

    1988-01-01

    An overview is given of the Expert System Validation Assistant (EVA), which is being implemented in Prolog at the Lockheed AI Center. Prolog was chosen to facilitate rapid prototyping of the structure and logic checkers; since February 1987, we have implemented code to check for irrelevance, subsumption, duplication, dead ends, unreachability, and cycles. The architecture chosen is extremely flexible and expansible, yet concise and complementary with the normal interactive style of Prolog. The foundation of the system is the connection graph representation. Rules and facts are modeled as nodes in the graph, and arcs indicate common patterns between rules. The basic activity of the validation system is then a traversal of the connection graph, searching for various patterns the system recognizes as erroneous. To aid in specifying these patterns, a metalanguage is developed, providing the user with the basic facilities required to reason about the expert system. Using the metalanguage, the user can, for example, give the Prolog inference engine the goal of finding inconsistent conclusions among the rules, and Prolog will search the graph for instantiations which match the definition of inconsistency. Examples of code for some of the checkers are provided and the algorithms explained. Technical highlights include automatic construction of a connection graph, demonstration of the use of the metalanguage, the A* algorithm modified to detect all unique cycles, general-purpose stacks in Prolog, and a general-purpose database browser with pattern completion.
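
    A minimal Python sketch of the connection-graph idea behind the cycle checker: rules become nodes, an arc links a rule whose conclusion matches another rule's premise, and cycles among rules are then enumerated. It assumes networkx, and the rules shown are hypothetical examples, not EVA's representation.

    ```python
    # Build a connection graph over rules (conclusion of one matches a premise of another)
    # and report cycles among them. The rules below are hypothetical examples.
    import networkx as nx

    rules = {
        "r1": {"if": ["a"], "then": "b"},
        "r2": {"if": ["b"], "then": "c"},
        "r3": {"if": ["c"], "then": "a"},   # closes a cycle r1 -> r2 -> r3 -> r1
        "r4": {"if": ["c"], "then": "d"},
    }

    graph = nx.DiGraph()
    graph.add_nodes_from(rules)
    for name, rule in rules.items():
        for other_name, other in rules.items():
            if rule["then"] in other["if"]:
                graph.add_edge(name, other_name)   # arc: this rule feeds the other rule's premise

    print(list(nx.simple_cycles(graph)))            # e.g. [['r1', 'r2', 'r3']]
    ```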

  17. Molecular Genetics Information System (MOLGENIS): alternatives in developing local experimental genomics databases.

    PubMed

    Swertz, Morris A; De Brock, E O; Van Hijum, Sacha A F T; De Jong, Anne; Buist, Girbe; Baerends, Richard J S; Kok, Jan; Kuipers, Oscar P; Jansen, Ritsert C

    2004-09-01

    Genomic research laboratories need adequate infrastructure to support management of their data production and research workflow. But what makes infrastructure adequate? A lack of appropriate criteria makes any decision on buying or developing a system difficult. Here, we report on the decision process for the case of a molecular genetics group establishing a microarray laboratory. Five typical requirements for experimental genomics database systems were identified: (i) the ability to evolve with the fast-developing genomics field; (ii) a suitable data model to deal with local diversity; (iii) suitable storage of data files in the system; (iv) easy exchange with other software; and (v) low maintenance costs. The computer scientists and the researchers of the local microarray laboratory considered alternative solutions for these five requirements and chose the following options: (i) use of automatic code generation; (ii) a customized data model based on standards; (iii) storage of datasets as black boxes instead of decomposing them into database tables; (iv) loose linking to other programs for improved flexibility; and (v) a low-maintenance web-based user interface. Our team evaluated existing microarray databases and then decided to build a new system, the Molecular Genetics Information System (MOLGENIS), implemented using code generation in a period of three months. This case can provide valuable insights and lessons to both software developers and user communities embarking on large-scale genomic projects. http://www.molgenis.nl
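
    A toy sketch of the automatic code generation idea (option i above), emitting a Python class from a small declarative data model; the model and template are hypothetical illustrations, not the MOLGENIS generator.

    ```python
    # Toy code generator: emit a Python class from a small declarative data model.
    # The model and template are hypothetical illustrations, not the MOLGENIS generator.
    model = {"name": "Microarray", "fields": [("sample_id", "str"), ("intensity", "float")]}

    def generate_class(model):
        args = ", ".join(f"{f}: {t}" for f, t in model["fields"])
        body = "\n".join(f"        self.{f} = {f}" for f, _ in model["fields"])
        return f"class {model['name']}:\n    def __init__(self, {args}):\n{body}\n"

    source = generate_class(model)
    print(source)
    exec(source)                       # defines the generated class in this namespace
    record = Microarray("s001", 1.23)  # class created by the generator above
    ```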

  18. Table-driven image transformation engine algorithm

    NASA Astrophysics Data System (ADS)

    Shichman, Marc

    1993-04-01

    A high-speed image transformation engine (ITE) was designed, and a prototype was built for use in a generic electronic light table and image perspective transformation application code. The ITE takes any linear transformation, breaks it into two passes, and resamples the image appropriately for each pass. System performance is achieved by driving the engine with a set of lookup tables, computed at startup, that determine each output pixel's contributions. Anti-aliasing is done automatically in the image resampling process. Operations such as multiplications and trigonometric functions are minimized. This algorithm can be used for texture mapping, image perspective transformation, electronic light tables, and virtual reality.
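
    A minimal numpy sketch of the table-driven, two-pass idea: per-axis source-index lookup tables are computed once, then applied row-wise in one pass and column-wise in the next. Nearest-neighbour indexing and a plain scale transform are simplifying assumptions; the engine described above handles general linear transforms with anti-aliased pixel contributions.

    ```python
    # Two-pass, table-driven resampling: LUTs of source indices are computed once,
    # then applied along each axis in turn. Nearest-neighbour and a pure scale
    # transform are simplifying assumptions for illustration.
    import numpy as np

    def scale_image_two_pass(image, out_h, out_w):
        in_h, in_w = image.shape
        # Lookup tables computed once at startup time.
        col_lut = (np.arange(out_w) * in_w) // out_w
        row_lut = (np.arange(out_h) * in_h) // out_h
        # Pass 1: resample along the horizontal axis using the column LUT.
        pass1 = image[:, col_lut]
        # Pass 2: resample along the vertical axis using the row LUT.
        return pass1[row_lut, :]

    img = np.arange(64, dtype=np.uint8).reshape(8, 8)
    print(scale_image_two_pass(img, 16, 12).shape)   # (16, 12)
    ```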

  19. A Software Engineering Approach based on WebML and BPMN to the Mediation Scenario of the SWS Challenge

    NASA Astrophysics Data System (ADS)

    Brambilla, Marco; Ceri, Stefano; Valle, Emanuele Della; Facca, Federico M.; Tziviskou, Christina

    Although Semantic Web Services are expected to produce a revolution in the development of Web-based systems, very few enterprise-wide design experiences are available; one of the main reasons is the lack of sound Software Engineering methods and tools for the deployment of Semantic Web applications. In this chapter, we present an approach to software development for the Semantic Web based on classical Software Engineering methods (i.e., formal business process development, computer-aided and component-based software design, and automatic code generation) and on semantic methods and tools (i.e., ontology engineering, semantic service annotation and discovery).

  20. Crowdsourcing the Measurement of Interstate Conflict

    PubMed Central

    2016-01-01

    Much of the data used to measure conflict is extracted from news reports. This is typically accomplished using either expert coders to quantify the relevant information or machine coders to automatically extract data from documents. Although expert coding is costly, it produces quality data. Machine coding is fast and inexpensive, but the data are noisy. To diminish the severity of this tradeoff, we introduce a method for analyzing news documents that uses crowdsourcing, supplemented with computational approaches. The new method is tested on documents about Militarized Interstate Disputes, and its accuracy ranges between about 68 and 76 percent. This is shown to be a considerable improvement over automated coding, and to cost less and be much faster than expert coding. PMID:27310427
