Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele; Biegel, Bryan (Technical Monitor)
2001-01-01
This viewgraph presentation provides information on the technical aspects of debugging computer code that has been automatically converted for use in a parallel computing system. Shared memory parallelization and distributed memory parallelization entail separate and distinct challenges for a debugging program. A prototype system has been developed which integrates various tools for the debugging of automatically parallelized programs including the CAPTools Database which provides variable definition information across subroutines as well as array distribution information.
Debugging Fortran on a shared memory machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, T.R.; Padua, D.A.
1987-01-01
Debugging on a parallel processor is more difficult than debugging on a serial machine because errors in a parallel program may introduce nondeterminism. The approach to parallel debugging presented here attempts to reduce the problem of debugging on a parallel machine to that of debugging on a serial machine by automatically detecting nondeterminism. 20 refs., 6 figs.
Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele
2001-01-01
This viewgraph presentation provides information on support sources available for the automatic parallelization of computer programs. CAPTools, a support tool developed at the University of Greenwich, transforms, with user guidance, existing sequential Fortran code into parallel message passing code. Comparison routines are then run for debugging purposes, in essence ensuring that the code transformation was accurate.
Backtracking and Re-execution in the Automatic Debugging of Parallelized Programs
NASA Technical Reports Server (NTRS)
Matthews, Gregory; Hood, Robert; Johnson, Stephen; Leggett, Peter; Biegel, Bryan (Technical Monitor)
2002-01-01
In this work we describe a new approach using relative debugging to find differences in computation between a serial program and a parallel version of that program. We use a combination of re-execution and backtracking in order to find the first difference in computation that may ultimately lead to an incorrect value that the user has indicated. In our prototype implementation we use static analysis information from a parallelization tool in order to perform the backtracking as well as the mapping required between serial and parallel computations.
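To make the backtracking idea concrete, here is a minimal Python sketch; all variable names, definition sites, and the def-use map are hypothetical and not taken from the prototype. Starting from the value the user flagged, it walks backwards through recorded definitions and reports the earliest one whose serial and parallel values diverge.

```python
# Hypothetical sketch of backtracking-based relative debugging: starting from a
# value the user flagged as wrong, walk backwards through the definitions that
# produced it (static def-use information) and compare the serial and parallel
# values recorded at each definition until the first divergence is found.

def find_first_divergence(flagged_var, def_use, serial_vals, parallel_vals, tol=1e-9):
    """Return the earliest (variable@site) in the backward chain whose serial
    and parallel values differ, searching backwards from `flagged_var`."""
    worklist = [flagged_var]
    visited = set()
    first = None
    while worklist:
        var = worklist.pop()
        if var in visited:
            continue
        visited.add(var)
        s, p = serial_vals.get(var), parallel_vals.get(var)
        if s is not None and p is not None and abs(s - p) > tol:
            first = var                            # this definition already differs...
            worklist.extend(def_use.get(var, []))  # ...keep backtracking to its inputs
    return first

# Toy example: c = a + b; the parallel run mangled b, which then corrupted c.
def_use = {"c@L30": ["a@L10", "b@L20"], "b@L20": [], "a@L10": []}
serial   = {"a@L10": 1.0, "b@L20": 2.0, "c@L30": 3.0}
parallel = {"a@L10": 1.0, "b@L20": 2.5, "c@L30": 3.5}
print(find_first_divergence("c@L30", def_use, serial, parallel))  # -> "b@L20"
```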
Monitoring Data-Structure Evolution in Distributed Message-Passing Programs
NASA Technical Reports Server (NTRS)
Sarukkai, Sekhar R.; Beers, Andrew; Woodrow, Thomas S. (Technical Monitor)
1996-01-01
Monitoring the evolution of data structures in parallel and distributed programs is critical for debugging their semantics and performance. However, the current state of the art in tracking and presenting data-structure information on parallel and distributed environments is cumbersome and does not scale. In this paper we present a methodology that automatically tracks memory bindings (not the actual contents) of static and dynamic data structures of message-passing C programs using PVM. With the help of a number of examples we show that in addition to determining the impact of memory allocation overheads on program performance, graphical views can help in debugging the semantics of program execution. Scalable animations of virtual address bindings of source-level data structures are used for debugging the semantics of parallel programs across all processors. In conjunction with light-weight core-files, this technique can be used to complement traditional debuggers on single processors. Detailed information (such as data-structure contents), on specific nodes, can be determined using traditional debuggers after the data structure evolution leading to the semantic error is observed graphically.
Debugging expert systems using a dynamically created hypertext network
NASA Technical Reports Server (NTRS)
Boyle, Craig D. B.; Schuette, John F.
1991-01-01
The labor intensive nature of expert system writing and debugging motivated this study. The hypothesis is that a hypertext based debugging tool is easier and faster than one traditional tool, the graphical execution trace. HESDE (Hypertext Expert System Debugging Environment) uses Hypertext nodes and links to represent the objects and their relationships created during the execution of a rule based expert system. HESDE operates transparently on top of the CLIPS (C Language Integrated Production System) rule based system environment and is used during the knowledge base debugging process. During the execution process HESDE builds an execution trace. Use of facts, rules, and their values are automatically stored in a Hypertext network for each execution cycle. After the execution process, the knowledge engineer may access the Hypertext network and browse the network created. The network may be viewed in terms of rules, facts, and values. An experiment was conducted to compare HESDE with a graphical debugging environment. Subjects were given representative tasks. For speed and accuracy, in eight of the eleven tasks given to subjects, HESDE was significantly better.
Debugging and Performance Analysis Software Tools for Peregrine System
High-Performance Computing | NREL
Learn about debugging and performance analysis software tools available to use with the Peregrine system, including Allinea.
Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)
2001-01-01
We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.
Relative Debugging of Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)
2002-01-01
We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular, the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.
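As an illustration of the comparison step, the following sketch (hypothetical data layout, not the actual system) uses the kind of block-distribution information a parallelization tool could supply to reassemble a global view of a distributed array and diff it against the serial run at the same program point.

```python
# Minimal sketch (hypothetical names) of the comparison step in relative
# debugging of an automatically parallelized code: the parallelization tool
# reports how an array is block-distributed across processes, so the debugger
# can reassemble a global image of the parallel array and diff it against the
# serial run at the same program point.

def gather_block_distributed(chunks, distribution):
    """chunks[rank] holds that rank's local slice; distribution[rank] = (lo, hi)."""
    n = max(hi for _, hi in distribution.values())
    global_array = [None] * n
    for rank, (lo, hi) in distribution.items():
        global_array[lo:hi] = chunks[rank]
    return global_array

def first_difference(serial, parallel, tol=1e-9):
    for i, (s, p) in enumerate(zip(serial, parallel)):
        if abs(s - p) > tol:
            return i, s, p
    return None

serial = [float(i) for i in range(8)]
chunks = {0: [0.0, 1.0, 2.0, 3.0], 1: [4.0, 5.0, 99.0, 7.0]}   # rank 1 holds a bad value
dist   = {0: (0, 4), 1: (4, 8)}
print(first_difference(serial, gather_block_distributed(chunks, dist)))  # -> (6, 6.0, 99.0)
```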
Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai
2009-01-01
Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, computer networks, etc., to facilitate algorithmic researchers in testing their ideas, demonstrating new findings, and teaching algorithm design in the classroom. Within the broad applications of algorithm visualization, there still remain performance issues that deserve further research, e.g., system portability, collaboration capability, and animation effect in 3D environments. Using modern technologies of Java programming, we develop an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's promising portability, engagement of collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications.
Automated knowledge-base refinement
NASA Technical Reports Server (NTRS)
Mooney, Raymond J.
1994-01-01
Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.
NASA Technical Reports Server (NTRS)
Feller, A.
1978-01-01
The entire complement of standard cells and components, except for the set-reset flip-flop, was completed. Two levels of checking were performed on each device. Logic cells and topological layout are described. All the related computer programs were coded and one level of debugging was completed. The logic for the test chip was modified and updated. This test chip served as the first test vehicle to exercise the standard cell complementary MOS(C-MOS) automatic artwork generation capability.
Tracking Students' Cognitive Processes during Program Debugging--An Eye-Movement Approach
ERIC Educational Resources Information Center
Lin, Yu-Tzu; Wu, Cheng-Chih; Hou, Ting-Yun; Lin, Yu-Chih; Yang, Fang-Ying; Chang, Chia-Hu
2016-01-01
This study explores students' cognitive processes while debugging programs by using an eye tracker. Students' eye movements during debugging were recorded by an eye tracker to investigate whether and how high- and low-performance students act differently during debugging. Thirty-eight computer science undergraduates were asked to debug two C…
Automatic Debugging Support for UML Designs
NASA Technical Reports Server (NTRS)
Schumann, Johann; Swanson, Keith (Technical Monitor)
2001-01-01
Design of large software systems requires rigorous application of software engineering methods covering all phases of the software process. Debugging during the early design phases is extremely important, because late bug-fixes are expensive. In this paper, we describe an approach which facilitates debugging of UML requirements and designs. The Unified Modeling Language (UML) is a set of notations for object-oriented design of a software system. We have developed an algorithm which translates requirement specifications in the form of annotated sequence diagrams into structured statecharts. This algorithm detects conflicts between sequence diagrams and inconsistencies in the domain knowledge. After synthesizing statecharts from sequence diagrams, these statecharts usually are subject to manual modification and refinement. By using the "backward" direction of our synthesis algorithm, we are able to map modifications made to the statechart back into the requirements (sequence diagrams) and check for conflicts there. Fed back to the user, the conflicts detected by our algorithm are the basis for deductive-based debugging of requirements and domain theory in very early development stages. Our approach allows us to generate explanations of why there is a conflict and which parts of the specifications are affected.
Instrumentation, performance visualization, and debugging tools for multiprocessors
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Fineman, Charles E.; Hontalas, Philip J.
1991-01-01
The need for computing power has forced a migration from serial computation on a single processor to parallel processing on multiprocessor architectures. However, without effective means to monitor (and visualize) program execution, debugging and tuning parallel programs become intractably difficult as program complexity increases with the number of processors. Research on performance evaluation tools for multiprocessors is being carried out at ARC. Besides investigating new techniques for instrumenting, monitoring, and presenting the state of parallel program execution in a coherent and user-friendly manner, prototypes of software tools are being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Our current tool set, the Ames Instrumentation Systems (AIMS), incorporates features from various software systems developed in academia and industry. The execution of FORTRAN programs on the Intel iPSC/860 can be automatically instrumented and monitored. Performance data collected in this manner can be displayed graphically on workstations supporting X-Windows. We have successfully compared various parallel algorithms for computational fluid dynamics (CFD) applications in collaboration with scientists from the Numerical Aerodynamic Simulation Systems Division. By performing these comparisons, we show that performance monitors and debuggers such as AIMS are practical and can illuminate the complex dynamics that occur within parallel programs.
Using PAFEC as a preprocessor for COSMIC/NASTRAN
NASA Technical Reports Server (NTRS)
Gray, W. H.; Baudry, T. V.
1983-01-01
Programs for Automatic Finite Element Calculations (PAFEC) is a general purpose, three dimensional linear and nonlinear finite element program (ref. 1). PAFEC's features include free format input utilizing engineering keywords, powerful mesh generating facilities, sophisticated data base management procedures, and extensive data validation checks. Presented here is a description of a software interface that permits PAFEC to be used as a preprocessor for COSMIC/NASTRAN. This user friendly software, called PAFCOS, frees the stress analyst from the laborious and error prone procedure of creating and debugging a rigid format COSMIC/NASTRAN bulk data deck. By interactively creating and debugging a finite element model with PAFEC, thus taking full advantage of the free format engineering keyword oriented data structure of PAFEC, the amount of time spent during model generation can be drastically reduced. The PAFCOS software will automatically convert a PAFEC data structure into a COSMIC/NASTRAN bulk data deck. The capabilities and limitations of the PAFCOS software are fully discussed in the following report.
Towards an Intelligent Planning Knowledge Base Development Environment
NASA Technical Reports Server (NTRS)
Chien, S.
1994-01-01
This abstract describes work in developing knowledge base editing and debugging tools for the Multimission VICAR Planner (MVP) system. MVP uses artificial intelligence planning techniques to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing steps) in response to image processing requests made to the JPL Multimission Image Processing Laboratory.
Overview of a Linguistic Theory of Design. AI Memo 383A.
ERIC Educational Resources Information Center
Miller, Mark L.; Goldstein, Ira P.
The SPADE theory, which uses linguistic formalisms to model the planning and debugging processes of computer programming, was simultaneously developed and tested in three separate contexts--computer uses in education, automatic programming (a traditional artificial intelligence arena), and protocol analysis (the domain of information processing…
Debugging a high performance computing program
Gooding, Thomas M.
2014-08-19
Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
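A rough Python sketch of the grouping idea in this abstract, with made-up addresses: threads whose lists of calling-instruction addresses match are assigned to the same group, so a small outlier group points at likely defective threads.

```python
# A sketch of the grouping idea described in the abstract: collect the
# call-site (return) addresses of each thread, then bucket threads whose
# address lists are identical, so outlier groups stand out as likely defective.
from collections import defaultdict

def group_threads_by_call_addresses(thread_stacks):
    """thread_stacks: {thread_id: [addr, addr, ...]} -> {stack_tuple: [thread_ids]}"""
    groups = defaultdict(list)
    for tid, addrs in thread_stacks.items():
        groups[tuple(addrs)].append(tid)
    return groups

stacks = {
    0: [0x4005D0, 0x400812, 0x400AA0],
    1: [0x4005D0, 0x400812, 0x400AA0],
    2: [0x4005D0, 0x400812, 0x400EE4],   # the odd one out: probably stuck elsewhere
}
for stack, tids in group_threads_by_call_addresses(stacks).items():
    print([hex(a) for a in stack], "->", tids)
```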
Debugging a high performance computing program
Gooding, Thomas M.
2013-08-20
Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
Space Fabrication Demonstration System
NASA Technical Reports Server (NTRS)
1978-01-01
The completion of assembly of the beam builder and its first automatic production of truss are discussed. A four-bay, hand-assembled truss of roll-formed members was built and tested to ultimate load. Detailed design of the fabrication facility (beam builder) was completed, and designs for subsystem debugging are discussed. Many one-bay truss specimens were produced to demonstrate subsystem operation and to detect problem areas.
Experiences Building an Object-Oriented System in C++
NASA Technical Reports Server (NTRS)
Madany, Peter W.; Campbell, Roy H.; Kougiouris, Panagiotis
1991-01-01
This paper describes tools that we built to support the construction of an object-oriented operating system in C++. The tools provide the automatic deletion of unwanted objects, first-class classes, dynamically loadable classes, and class-oriented debugging. As a consequence of our experience building Choices, we advocate these features as useful, simplifying and unifying many aspects of system programming.
NASA Technical Reports Server (NTRS)
Hoppa, Mary Ann; Wilson, Larry W.
1994-01-01
There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.
A debugging method of the Quadrotor UAV based on infrared thermal imaging
NASA Astrophysics Data System (ADS)
Cui, Guangjie; Hao, Qian; Yang, Jianguo; Chen, Lizhi; Hu, Hongkang; Zhang, Lijun
2018-01-01
High-performance UAVs have been popular and in great demand in recent years. The paper introduces a new method for debugging quadrotor UAVs. Based on infrared thermal technology and heat transfer theory, a UAV under debugging hovers above a hot-wire grid composed of 14 heated nichrome wires. The air flow propelled by the rotating rotors influences the temperature distribution of the hot-wire grid. An infrared thermal imager below observes the distribution and acquires thermal images of the hot-wire grid. With the assistance of a mathematical model and several experiments, the paper discusses the relationship between the thermal images and the rotor speed. By testing already-debugged UAVs, reference information and thermal images can be acquired. The paper demonstrates that, by comparison against these reference thermal images, critical data for a UAV under debugging in the same test can be obtained directly or after interpolation. The results are shown in the paper and the advantages are discussed.
2014-05-01
... developed techniques for building better IP geolocation systems. Geolocation has many applications, ranging from presenting advertisements for local business establishments on web pages, to debugging network performance issues, to attributing attack traffic.
BigDebug: Debugging Primitives for Interactive Big Data Processing in Spark.
Gulzar, Muhammad Ali; Interlandi, Matteo; Yoo, Seunghyun; Tetali, Sai Deep; Condie, Tyson; Millstein, Todd; Kim, Miryung
2016-05-01
Developers use cloud computing platforms to process a large quantity of data in parallel when developing big data analytics. Debugging the massive parallel computations that run in today's data-centers is time consuming and error-prone. To address this challenge, we design a set of interactive, real-time debugging primitives for big data processing in Apache Spark, the next generation data-intensive scalable cloud computing platform. This requires re-thinking the notion of step-through debugging in a traditional debugger such as gdb, because pausing the entire computation across distributed worker nodes causes significant delay and naively inspecting millions of records using a watchpoint is too time consuming for an end user. First, BIGDEBUG's simulated breakpoints and on-demand watchpoints allow users to selectively examine distributed, intermediate data on the cloud with little overhead. Second, a user can also pinpoint a crash-inducing record and selectively resume relevant sub-computations after a quick fix. Third, a user can determine the root causes of errors (or delays) at the level of individual records through a fine-grained data provenance capability. Our evaluation shows that BIGDEBUG scales to terabytes and its record-level tracing incurs less than 25% overhead on average. It determines crash culprits orders of magnitude more accurately and provides up to 100% time saving compared to the baseline replay debugger. The results show that BIGDEBUG supports debugging at interactive speeds with minimal performance impact.
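The following plain-Python sketch is only an analogy for two of these primitives and does not use the Spark or BigDebug APIs: a guarded watchpoint that reports matching records as they stream past, and a wrapper that captures crash-inducing records so the rest of the computation can proceed.

```python
# Illustrative sketch only (plain Python, not the actual BigDebug/Spark API):
# a guarded "watchpoint" reports the records that match a predicate as they
# flow through a stage, and a crash-culprit wrapper records which input record
# made a UDF raise, so the offending record can be skipped or fixed and the
# computation resumed.

def watchpoint(records, guard):
    """Yield records unchanged, but report those matching the guard predicate."""
    for r in records:
        if guard(r):
            print("watchpoint hit:", r)
        yield r

def run_with_culprit_isolation(records, udf):
    results, culprits = [], []
    for r in records:
        try:
            results.append(udf(r))
        except Exception as exc:          # record the crash-inducing input instead of dying
            culprits.append((r, exc))
    return results, culprits

data = ["10", "20", "oops", "40"]
guarded = watchpoint(data, guard=lambda r: not r.isdigit())
results, culprits = run_with_culprit_isolation(guarded, udf=int)
print(results)    # [10, 20, 40]
print(culprits)   # one culprit: ('oops', ValueError(...))
```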
Titian: Data Provenance Support in Spark
Interlandi, Matteo; Shah, Kshitij; Tetali, Sai Deep; Gulzar, Muhammad Ali; Yoo, Seunghyun; Kim, Miryung; Millstein, Todd; Condie, Tyson
2015-01-01
Debugging data processing logic in Data-Intensive Scalable Computing (DISC) systems is a difficult and time consuming effort. Today’s DISC systems offer very little tooling for debugging programs, and as a result programmers spend countless hours collecting evidence (e.g., from log files) and performing trial and error debugging. To aid this effort, we built Titian, a library that enables data provenance—tracking data through transformations—in Apache Spark. Data scientists using the Titian Spark extension will be able to quickly identify the input data at the root cause of a potential bug or outlier result. Titian is built directly into the Spark platform and offers data provenance support at interactive speeds—orders-of-magnitude faster than alternative solutions—while minimally impacting Spark job performance; observed overheads for capturing data lineage rarely exceed 30% above the baseline job execution time. PMID:26726305
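A toy sketch of the provenance idea, not Titian's implementation: each record carries the identifiers of the inputs it was derived from, so an outlier result can be traced back to the raw records at its root.

```python
# A toy illustration of data provenance: every record carries the set of input
# identifiers it was derived from, so an anomalous output can be traced back
# to the exact inputs that produced it.

def tag_inputs(values):
    return [(v, {i}) for i, v in enumerate(values)]            # (value, lineage ids)

def map_with_lineage(records, f):
    return [(f(v), lineage) for v, lineage in records]

def reduce_by_key_with_lineage(records, key):
    out = {}
    for v, lineage in records:
        k = key(v)
        total, ids = out.get(k, (0, set()))
        out[k] = (total + v, ids | lineage)                    # union the lineage sets
    return out

inputs = [3, 5, -999, 7]                                       # -999 is a bad reading
records = map_with_lineage(tag_inputs(inputs), lambda v: v * 2)
grouped = reduce_by_key_with_lineage(records, key=lambda v: v > 0)
print(grouped[False])   # the negative bucket traces back to input id 2
```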
Debugging: Finding, Fixing and Flailing, a Multi-Institutional Study of Novice Debuggers
ERIC Educational Resources Information Center
Fitzgerald, Sue; Lewandowski, Gary; McCauley, Renee; Murphy, Laurie; Simon, Beth; Thomas, Lynda; Zander, Carol
2008-01-01
Debugging is often difficult and frustrating for novices. Yet because students typically debug outside the classroom and often in isolation, instructors rarely have the opportunity to closely observe students while they debug. This paper describes the details of an exploratory study of the debugging skills and behaviors of contemporary novice Java…
Software reliability perspectives
NASA Technical Reports Server (NTRS)
Wilson, Larry; Shen, Wenhui
1987-01-01
Software which is used in life critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering nor fault tolerant methods can guarantee perfection. Prior to final testing, software goes through a debugging period, and many models have been developed to try to estimate reliability from the debugging data. However, the existing models are poorly validated and often give poor performance. This paper emphasizes the fact that part of their failures can be attributed to the random nature of the debugging data given to these models as input, and it poses the problem of correcting this defect as an area of future research.
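For readers unfamiliar with this class of models, the sketch below fits one classical example, the Jelinski-Moranda model, to made-up interfailure times by profile likelihood; it illustrates the kind of prediction the paper critiques rather than any method proposed by the authors.

```python
# A sketch of the kind of model the paper critiques: the Jelinski-Moranda
# reliability model fitted to interfailure times by maximizing the profile
# likelihood over the (integer) number of initial faults N.  Interfailure
# times below are invented for illustration.
import math

def jm_fit(times, max_extra_faults=200):
    n = len(times)
    best = None
    for N in range(n, n + max_extra_faults + 1):
        # hazard before the (i+1)-th failure is phi * (N - i), i counted from 0
        weighted = sum((N - i) * t for i, t in enumerate(times))
        phi = n / weighted                       # closed-form MLE of phi given N
        loglik = (n * math.log(phi)
                  + sum(math.log(N - i) for i in range(n))
                  - phi * weighted)
        if best is None or loglik > best[0]:
            best = (loglik, N, phi)
    _, N, phi = best
    return N, phi

# Interfailure times that stretch out as debugging proceeds (made-up data).
times = [5, 7, 9, 14, 20, 31, 45, 70]
N, phi = jm_fit(times)
print(f"estimated initial faults N={N}, per-fault hazard phi={phi:.4f}")
print("predicted next interfailure time:",
      1 / (phi * (N - len(times))) if N > len(times) else float("inf"))
```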
BigDebug: Debugging Primitives for Interactive Big Data Processing in Spark
Gulzar, Muhammad Ali; Interlandi, Matteo; Yoo, Seunghyun; Tetali, Sai Deep; Condie, Tyson; Millstein, Todd; Kim, Miryung
2016-01-01
Developers use cloud computing platforms to process a large quantity of data in parallel when developing big data analytics. Debugging the massive parallel computations that run in today’s data-centers is time consuming and error-prone. To address this challenge, we design a set of interactive, real-time debugging primitives for big data processing in Apache Spark, the next generation data-intensive scalable cloud computing platform. This requires re-thinking the notion of step-through debugging in a traditional debugger such as gdb, because pausing the entire computation across distributed worker nodes causes significant delay and naively inspecting millions of records using a watchpoint is too time consuming for an end user. First, BIGDEBUG’s simulated breakpoints and on-demand watchpoints allow users to selectively examine distributed, intermediate data on the cloud with little overhead. Second, a user can also pinpoint a crash-inducing record and selectively resume relevant sub-computations after a quick fix. Third, a user can determine the root causes of errors (or delays) at the level of individual records through a fine-grained data provenance capability. Our evaluation shows that BIGDEBUG scales to terabytes and its record-level tracing incurs less than 25% overhead on average. It determines crash culprits orders of magnitude more accurately and provides up to 100% time saving compared to the baseline replay debugger. The results show that BIGDEBUG supports debugging at interactive speeds with minimal performance impact. PMID:27390389
Automatic generation of user material subroutines for biomechanical growth analysis.
Young, Jonathan M; Yao, Jiang; Ramasubramanian, Ashok; Taber, Larry A; Perucchio, Renato
2010-10-01
The analysis of the biomechanics of growth and remodeling in soft tissues requires the formulation of specialized pseudoelastic constitutive relations. The nonlinear finite element analysis package ABAQUS allows the user to implement such specialized material responses through the coding of a user material subroutine called UMAT. However, hand coding UMAT subroutines is a challenge even for simple pseudoelastic materials and requires substantial time to debug and test the code. To resolve this issue, we develop an automatic UMAT code generation procedure for pseudoelastic materials using the symbolic mathematics package MATHEMATICA and extend the UMAT generator to include continuum growth. The performance of the automatically coded UMAT is tested by simulating the stress-stretch response of a material defined by a Fung-orthotropic strain energy function, subject to uniaxial stretching, equibiaxial stretching, and simple shear in ABAQUS. The MATHEMATICA UMAT generator is then extended to include continuum growth by adding a growth subroutine to the automatically generated UMAT. The MATHEMATICA UMAT generator correctly derives the variables required in the UMAT code, quickly providing a ready-to-use UMAT. In turn, the UMAT accurately simulates the pseudoelastic response. In order to test the growth UMAT, we simulate the growth-based bending of a bilayered bar with differing fiber directions in a nongrowing passive layer. The anisotropic passive layer, being topologically tied to the growing isotropic layer, causes the bending bar to twist laterally. The results of simulations demonstrate the validity of the automatically coded UMAT, used in both standardized tests of hyperelastic materials and for a biomechanical growth analysis.
Understanding Problem Solving Behavior of 6-8 Graders in a Debugging Game
ERIC Educational Resources Information Center
Liu, Zhongxiu; Zhi, Rui; Hicks, Andrew; Barnes, Tiffany
2017-01-01
Debugging is an overlooked component in K-12 computational thinking education. Few K-12 programming environments are designed to teach debugging, and most debugging research was conducted on college-aged students. In this paper, we presented debugging exercises to 6th-8th grade students and analyzed their problem solving behaviors in a…
Debugging classification and anti-debugging strategies
NASA Astrophysics Data System (ADS)
Gao, Shang; Lin, Qian; Xia, Mingyuan; Yu, Miao; Qi, Zhengwei; Guan, Haibing
2011-12-01
Debugging, albeit useful for software development, is also a double-edged sword, since it can be exploited by malicious attackers. This paper analyzes the prevailing debuggers and classifies them into 4 categories based on the debugging mechanism. Furthermore, on the opposing side, we list 13 typical anti-debugging strategies adopted in Windows. These methods intercept specific execution points which expose the diagnostic behavior of debuggers.
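The 13 strategies surveyed are Windows-specific; as a loose, language-level analogy only, the sketch below shows a program noticing that a Python trace hook (the mechanism debuggers such as pdb rely on) is installed.

```python
# Analogy only, not one of the paper's Windows strategies: a program can
# notice that a Python-level trace hook (used by debuggers such as pdb) is
# installed and change its behaviour accordingly.
import sys

def being_traced():
    """Return True if a Python-level trace function (e.g. a debugger) is active."""
    return sys.gettrace() is not None

if being_traced():
    print("debugger detected - refusing to run the sensitive section")
else:
    print("no debugger detected - continuing normally")
```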
NASA Astrophysics Data System (ADS)
Tian, Changbin; Chang, Jun; Wang, Qiang; Wei, Wei; Zhu, Cunguang
2015-03-01
An optical fiber gas sensor mainly consists of two parts: an optical part and a detection circuit. When debugging the detection circuit, the optical part usually serves as the signal source. However, under debugging conditions the optical part is easily influenced by many factors: fluctuation of the ambient temperature or driving current results in instability of the laser wavelength and intensity; for a dual-beam sensor, differing bends and stresses of the optical fiber lead to fluctuations in intensity and phase; and intensity noise from the collimator, coupler, and other optical devices in the system further degrades the optical-part-based signal source. In order to dramatically improve the debugging efficiency of the detection circuit and shorten the research and development period, this paper describes an analog signal source consisting of a single chip microcomputer (SCM), an amplifier circuit, and a voltage-to-current conversion circuit. It can be used for rapid debugging of the detection circuit of the optical fiber gas sensor in place of the optical-part-based signal source. The analog signal source performs well and offers other advantages, such as simple operation, small size, and light weight.
Lightweight and Statistical Techniques for Petascale Debugging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Barton
2014-06-30
This project investigated novel techniques for debugging scientific applications on petascale architectures. In particular, we developed lightweight tools that narrow the problem space when bugs are encountered. We also developed techniques that either limit the number of tasks and the code regions to which a developer must apply a traditional debugger or that apply statistical techniques to provide direct suggestions of the location and type of error. We extend previous work on the Stack Trace Analysis Tool (STAT), that has already demonstrated scalability to over one hundred thousand MPI tasks. We also extended statistical techniques developed to isolate programming errors in widely used sequential or threaded applications in the Cooperative Bug Isolation (CBI) project to large scale parallel applications. Overall, our research substantially improved productivity on petascale platforms through a tool set for debugging that complements existing commercial tools. Previously, Office Of Science application developers relied either on primitive manual debugging techniques based on printf or they use tools, such as TotalView, that do not scale beyond a few thousand processors. However, bugs often arise at scale and substantial effort and computation cycles are wasted in either reproducing the problem in a smaller run that can be analyzed with the traditional tools or in repeated runs at scale that use the primitive techniques. New techniques that work at scale and automate the process of identifying the root cause of errors were needed. These techniques significantly reduced the time spent debugging petascale applications, thus leading to a greater overall amount of time for application scientists to pursue the scientific objectives for which the systems are purchased. We developed a new paradigm for debugging at scale: techniques that reduced the debugging scenario to a scale suitable for traditional debuggers, e.g., by narrowing the search for the root-cause analysis to a small set of nodes or by identifying equivalence classes of nodes and sampling our debug targets from them. We implemented these techniques as lightweight tools that efficiently work on the full scale of the target machine. We explored four lightweight debugging refinements: generic classification parameters, such as stack traces, application-specific classification parameters, such as global variables, statistical data acquisition techniques and machine learning based approaches to perform root cause analysis. Work done under this project can be divided into two categories, new algorithms and techniques for scalable debugging, and foundation infrastructure work on our MRNet multicast-reduction framework for scalability, and Dyninst binary analysis and instrumentation toolkits.
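A simplified sketch of the stack-trace merging that underlies STAT-style tools (illustrative only, not the STAT implementation): per-rank stack traces are folded into a prefix tree whose nodes record which ranks reached each frame, so behavioral equivalence classes and stragglers are visible at a glance.

```python
# A sketch (not the real STAT implementation) of merging per-task stack traces
# into a prefix tree: each node records which MPI ranks reached that frame, so
# large equivalence classes collapse and divergent ranks become easy to spot.
from collections import defaultdict

def merge_stack_traces(traces):
    """traces: {rank: [frame, frame, ...]} -> {prefix_tuple: set(ranks)}"""
    tree = defaultdict(set)
    for rank, frames in traces.items():
        for depth in range(1, len(frames) + 1):
            tree[tuple(frames[:depth])].add(rank)
    return tree

traces = {
    0: ["main", "solve", "MPI_Allreduce"],
    1: ["main", "solve", "MPI_Allreduce"],
    2: ["main", "solve", "MPI_Allreduce"],
    3: ["main", "io_dump", "MPI_Barrier"],   # the straggler
}
for prefix, ranks in sorted(merge_stack_traces(traces).items()):
    print(" > ".join(prefix), "->", sorted(ranks))
```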
Debugging and Logging Services for Defence Service Oriented Architectures
2012-02-01
Service: a software component and callable end point that provides a logically related set of operations, each of which performs a logical step in a ... It is important to note that in some cases, when the fault is identified to lie in uneditable code such as program libraries or outsourced software services, debugging is limited to characterisation of the fault, reporting it to the software or service provider, and development of work-arounds and management ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Supinski, B R; Miller, B P; Liblit, B
2011-09-13
Petascale platforms with O(10^5) and O(10^6) processing cores are driving advancements in a wide range of scientific disciplines. These large systems create unprecedented application development challenges. Scalable correctness tools are critical to shorten the time-to-solution on these systems. Currently, many DOE application developers use primitive manual debugging based on printf or traditional debuggers such as TotalView or DDT. This paradigm breaks down beyond a few thousand cores, yet bugs often arise above that scale. Programmers must reproduce problems in smaller runs to analyze them with traditional tools, or else perform repeated runs at scale using only primitive techniques. Even when traditional tools run at scale, the approach wastes substantial effort and computation cycles. Continued scientific progress demands new paradigms for debugging large-scale applications. The Correctness on Petascale Systems (CoPS) project is developing a revolutionary debugging scheme that will reduce the debugging problem to a scale that human developers can comprehend. The scheme can provide precise diagnoses of the root causes of failure, including suggestions of the location and the type of errors down to the level of code regions or even a single execution point. Our fundamentally new strategy combines and expands three relatively new complementary debugging approaches. The Stack Trace Analysis Tool (STAT), a 2011 R&D 100 Award Winner, identifies behavior equivalence classes in MPI jobs and highlights behavior when elements of the class demonstrate divergent behavior, often the first indicator of an error. The Cooperative Bug Isolation (CBI) project has developed statistical techniques for isolating programming errors in widely deployed code that we will adapt to large-scale parallel applications. Finally, we are developing a new approach to parallelizing expensive correctness analyses, such as analysis of memory usage in the Memgrind tool. In the first two years of the project, we have successfully extended STAT to determine the relative progress of different MPI processes. We have shown that the STAT, which is now included in the debugging tools distributed by Cray with their large-scale systems, substantially reduces the scale at which traditional debugging techniques are applied. We have extended CBI to large-scale systems and developed new compiler based analyses that reduce its instrumentation overhead. Our results demonstrate that CBI can identify the source of errors in large-scale applications. Finally, we have developed MPIecho, a new technique that will reduce the time required to perform key correctness analyses, such as the detection of writes to unallocated memory. Overall, our research results are the foundations for new debugging paradigms that will improve application scientist productivity by reducing the time to determine which package or module contains the root cause of a problem that arises at all scales of our high end systems. While we have made substantial progress in the first two years of CoPS research, significant work remains. While STAT provides scalable debugging assistance for incorrect application runs, we could apply its techniques to assertions in order to observe deviations from expected behavior. Further, we must continue to refine STAT's techniques to represent behavioral equivalence classes efficiently as we expect systems with millions of threads in the next year.
We are exploring new CBI techniques that can assess the likelihood that execution deviations from past behavior are the source of erroneous execution. Finally, we must develop usable correctness analyses that apply the MPIecho parallelization strategy in order to locate coding errors. We expect to make substantial progress on these directions in the next year but anticipate that significant work will remain to provide usable, scalable debugging paradigms.
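To make the CBI part concrete, here is a sketch of its well-known predicate ranking (all counters are invented for illustration): each predicate is scored by how much more often runs fail when the predicate is observed true than when it is merely observed.

```python
# A sketch of the scoring at the heart of Cooperative Bug Isolation (CBI):
# for each instrumented predicate, compare how often runs fail when the
# predicate is observed true versus merely observed, and rank predicates by
# the increase.  Counters below are made up for illustration.

def cbi_scores(counts):
    """counts[pred] = dict with s_true, f_true (succeed/fail runs where pred was true)
    and s_obs, f_obs (succeed/fail runs where pred was observed at all)."""
    scores = {}
    for pred, c in counts.items():
        failure = c["f_true"] / (c["s_true"] + c["f_true"])
        context = c["f_obs"] / (c["s_obs"] + c["f_obs"])
        scores[pred] = failure - context          # Increase(P) in CBI terminology
    return scores

counts = {
    "ptr == NULL at foo.c:120": {"s_true": 1, "f_true": 40, "s_obs": 500, "f_obs": 42},
    "i > n at bar.c:77":        {"s_true": 30, "f_true": 3, "s_obs": 480, "f_obs": 42},
}
for pred, score in sorted(cbi_scores(counts).items(), key=lambda kv: -kv[1]):
    print(f"{score:+.2f}  {pred}")
```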
Debugging embedded computer programs. [tactical missile computers
NASA Technical Reports Server (NTRS)
Kemp, G. H.
1980-01-01
Every embedded computer program must complete its debugging cycle using some system that will allow real time debugging. Many of the common items addressed during debugging are listed. Seven approaches to debugging are analyzed to evaluate how well they treat those items. Cost evaluations are also included in the comparison. The results indicate that the best collection of capabilities to cover the common items present in the debugging task occurs in the approach where a minicomputer handles the environment simulation with an emulation of some kind representing the embedded computer. This approach can be taken at a reasonable cost. The case study chosen is an embedded computer in a tactical missile. Several choices of computer for the environment simulation are discussed as well as different approaches to the embedded emulator.
Surrogate oracles, generalized dependency and simpler models
NASA Technical Reports Server (NTRS)
Wilson, Larry
1990-01-01
Software reliability models require the sequence of interfailure times from the debugging process as input. It was previously illustrated that using data from replicated debugging could greatly improve reliability predictions. However, inexpensive replication of the debugging process requires the existence of a cheap, fast error detector. Laboratory experiments can be designed around a gold version which is used as an oracle or around an n-version error detector. Unfortunately, software developers cannot be expected to have an oracle or to bear the expense of n-versions. A generic technique is being investigated for approximating replicated data by using the partially debugged software as a difference detector. It is believed that the failure rate of each fault has significant dependence on the presence or absence of other faults. Thus, in order to discuss a failure rate for a known fault, the presence or absence of each of the other known faults needs to be specified. Also, simpler models which use shorter input sequences without sacrificing accuracy are of interest. In fact, a possible gain in performance is conjectured. To investigate these propositions, NASA computers running LIC (RTI) versions are used to generate data. This data will be used to label the debugging graph associated with each version. These labeled graphs will be used to test the utility of a surrogate oracle, to analyze the dependent nature of fault failure rates and to explore the feasibility of reliability models which use the data of only the most recent failures.
Debugging Techniques Used by Experienced Programmers to Debug Their Own Code.
1990-09-01
Keywords: code debugging; computer programmers; debugging; programming. Davis, and Schultz (1987) also compared experts and novices, but focused on the way a computer program is represented cognitively and how that ... of theories in the emerging computer programming domain (Fisher, 1987). In protocol analysis, subjects are asked to talk/think aloud as they solve ...
Automatic Single Event Effects Sensitivity Analysis of a 13-Bit Successive Approximation ADC
NASA Astrophysics Data System (ADS)
Márquez, F.; Muñoz, F.; Palomo, F. R.; Sanz, L.; López-Morillo, E.; Aguirre, M. A.; Jiménez, A.
2015-08-01
This paper presents the Analog Fault Tolerant University of Seville Debugging System (AFTU), a tool to evaluate the Single-Event Effect (SEE) sensitivity of analog/mixed signal microelectronic circuits at transistor level. As analog cells can behave in an unpredictable way when critical areas interact with a particle hit, designers need a software tool that allows automatic and exhaustive analysis of Single-Event Effect influence. AFTU takes the test-bench SPECTRE design, emulates radiation conditions and automatically evaluates vulnerabilities using user-defined heuristics. To illustrate the utility of the tool, the SEE sensitivity of a 13-bit Successive Approximation Analog-to-Digital Converter (ADC) has been analysed. This circuit was selected not only because it was designed for space applications, but also due to the fact that a manual SEE sensitivity analysis would be too time-consuming. After a user-defined test campaign, it was detected that some voltage transients were propagated to a node where a parasitic diode was activated, affecting the offset cancellation, and therefore the whole resolution of the ADC. A simple modification of the scheme solved the problem, as was verified with another automatic SEE sensitivity analysis.
Property-driven functional verification technique for high-speed vision system-on-chip processor
NASA Astrophysics Data System (ADS)
Nshunguyimfura, Victor; Yang, Jie; Liu, Liyuan; Wu, Nanjian
2017-04-01
The implementation of functional verification in a fast, reliable, and effective manner is a challenging task in a vision chip verification process. The main reason for this challenge is the stepwise nature of existing functional verification techniques. This vision chip verification complexity is also related to the fact that in most vision chip design cycles, extensive efforts are focused on how to optimize chip metrics such as performance, power, and area. Design functional verification is not explicitly considered at an earlier stage at which the most sound decisions are made. In this paper, we propose a semi-automatic property-driven verification technique. The implementation of all verification components is based on design properties. We introduce a low-dimension property space between the specification space and the implementation space. The aim of this technique is to speed up the verification process for high-performance parallel processing vision chips. Our experimental results show that the proposed technique can reduce the verification effort by up to 20% for a complex vision chip design while also reducing simulation and debugging overheads.
Purple L1 Milestone Review Panel TotalView Debugger Functionality and Performance for ASC Purple
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, M
2006-12-12
ASC code teams require a robust software debugging tool to help developers quickly find bugs in their codes and get their codes running. Development debugging commonly runs up to 512 processes. Production jobs run up to full ASC Purple scale, and at times require introspection while running. Developers want a debugger that runs on all their development and production platforms and that works with all compilers and runtimes used with ASC codes. The TotalView Multiprocess Debugger made by Etnus was specified for ASC Purple to address this needed capability. The ASC Purple environment builds on the environment seen by TotalView on ASCI White. The debugger must now operate with the Power5 CPU, Federation switch, AIX 5.3 operating system including large pages, IBM compilers 7 and 9, POE 4.2 parallel environment, and rs6000 SLURM resource manager. Users require robust, basic debugger functionality with acceptable performance at development debugging scale. A TotalView installation must be provided at the beginning of the early user access period that meets these requirements. A functional enhancement, fast conditional data watchpoints, and a scalability enhancement, capability up to 8192 processes, are to be demonstrated.
NASA Technical Reports Server (NTRS)
Wilson, Larry
1991-01-01
There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in lab type situations and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost effective in the real world, thus the research centered on verification of the need for replication and on methodologies for generating replicated data in a cost effective manner. The context of the debugging graph was pursued by simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model. Reasonable values of the parameters were assigned and used to generate simulated data which is then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens which are in AIR-LAB to measure the performance of reliability models.
Performance Metrics for Monitoring Parallel Program Executions
NASA Technical Reports Server (NTRS)
Sarukkai, Sekkar R.; Gotwais, Jacob K.; Yan, Jerry; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
Existing tools for debugging performance of parallel programs either provide graphical representations of program execution or profiles of program executions. However, for performance debugging tools to be useful, such information has to be augmented with information that highlights the cause of poor program performance. Identifying the cause of poor performance necessitates not only determining the significance of various performance problems on the execution time of the program, but also considering the effect of interprocessor communications of individual source-level data structures. In this paper, we present a suite of normalized indices which provide a convenient mechanism for focusing on a region of code with poor performance and highlight the cause of the problem in terms of processors, procedures and data structure interactions. All the indices are generated from trace files augmented with data structure information. Further, we show with the help of examples from the NAS benchmark suite that the indices help in detecting potential causes of poor performance, based on augmented execution traces obtained by monitoring the program.
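The paper's specific indices are not reproduced here; the sketch below is an assumed example of the general recipe, rolling trace events up by procedure and data structure and normalizing communication time by total execution time.

```python
# Assumed example of the general idea (not the paper's actual indices): roll
# trace events up by procedure and data structure, and normalize communication
# time by total execution time so the worst offenders can be ranked directly.
from collections import defaultdict

def normalized_comm_indices(events):
    """events: (procedure, data_structure, kind, seconds), kind in {'comp','comm'}."""
    total = sum(sec for *_, sec in events) or 1.0
    comm = defaultdict(float)
    for proc, ds, kind, sec in events:
        if kind == "comm":
            comm[(proc, ds)] += sec
    return {key: t / total for key, t in comm.items()}

trace = [
    ("solver",   "A",    "comp", 8.0),
    ("solver",   "A",    "comm", 3.0),
    ("exchange", "halo", "comm", 6.0),
    ("io",       "grid", "comp", 1.0),
]
for (proc, ds), idx in sorted(normalized_comm_indices(trace).items(), key=lambda kv: -kv[1]):
    print(f"{proc:>8}/{ds:<5} communication index = {idx:.2f}")
```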
Debugging from the Student Perspective
ERIC Educational Resources Information Center
Fitzgerald, S.; McCauley, R.; Hanks, B.; Murphy, L.; Simon, B.; Zander, C.
2010-01-01
Learning to debug is a difficult, yet essential, aspect of learning to program. Students in this multi-institutional study report that finding bugs is harder than fixing them. They use a wide variety of debugging strategies, some of them unexpected. Time spent on understanding the problem can be effective. Pattern matching, particularly at the…
MIRO: A debugging tool for CLIPS incorporating historical Rete networks
NASA Technical Reports Server (NTRS)
Tuttle, Sharon M.; Eick, Christoph F.
1994-01-01
At the last CLIPS conference, we discussed our ideas for adding a temporal dimension to the Rete network used to implement CLIPS. The resulting historical Rete network could then be used to store 'historical' information about a run of a CLIPS program, to aid in debugging. MIRO, a debugging tool for CLIPS built on top of CLIPS, incorporates such a historical Rete network and uses it to support its prototype question-answering capability. By enabling CLIPS users to directly ask debugging-related questions about the history of a program run, we hope to reduce the amount of single-stepping and program tracing required to debug a CLIPS program. In this paper, we briefly describe MIRO's architecture and implementation, and the current question-types that MIRO supports. These question-types are further illustrated using an example, and the benefits of the debugging tool are discussed. We also present empirical results that measure the run-time and partial storage overhead of MIRO, and discuss how MIRO may also be used to study various efficiency aspects of CLIPS programs.
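A minimal sketch of the kind of history such a tool records (names and API are illustrative, not MIRO's): time-stamped assertions, retractions, and the rules responsible, which is enough to answer questions like "which rule asserted this fact, and when?".

```python
# A small sketch of a "historical" record of rule-based execution: time-stamped
# fact assertions/retractions and the rules that caused them, enough to answer
# debugging questions such as "which rule asserted this fact, and when?".
# Names are illustrative only, not MIRO's implementation.

class History:
    def __init__(self):
        self.events = []          # (cycle, kind, fact, rule)

    def assert_fact(self, cycle, fact, rule=None):
        self.events.append((cycle, "assert", fact, rule))

    def retract_fact(self, cycle, fact, rule=None):
        self.events.append((cycle, "retract", fact, rule))

    def who_asserted(self, fact):
        return [(c, r) for c, kind, f, r in self.events if kind == "assert" and f == fact]

h = History()
h.assert_fact(0, "(temperature high)")                     # initial fact
h.assert_fact(1, "(valve open)", rule="cool-down")         # asserted by rule cool-down
h.retract_fact(3, "(temperature high)", rule="cool-down")
print(h.who_asserted("(valve open)"))                      # -> [(1, 'cool-down')]
```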
Representing and Teaching Knowledge for Troubleshooting/Debugging. Technical Report No. 292.
ERIC Educational Resources Information Center
Wescourt, Keith T.; Hemphill, Linda
The goal of the present project was to identify the types of knowledge necessary and useful for competent troubleshooting/debugging and to examine how new approaches to formal instruction might influence the attainment of competence by students. The research focused on the role of general strategies in troubleshooting/debugging, and how they might…
Telemetry and Science Data Software System
NASA Technical Reports Server (NTRS)
Bates, Lakesha; Hong, Liang
2011-01-01
The Telemetry and Science Data Software System (TSDSS) was designed to validate the operational health of a spacecraft, ease test verification, assist in debugging system anomalies, and provide trending data and advanced science analysis. In doing so, the system parses, processes, and organizes raw data from the Aquarius instrument both on the ground and while in space. In addition, it provides a user-friendly telemetry viewer and an instant pushbutton test report generator. Existing ground data systems can parse and provide simple data processing, but have limitations in advanced science analysis and instant report generation. The TSDSS functions as an offline data analysis system during I&T (integration and test) and mission operations phases. After raw data are downloaded from an instrument, TSDSS ingests the data files, parses, converts telemetry to engineering units, and applies advanced algorithms to produce science level 0, 1, and 2 data products. Meanwhile, it automatically schedules upload of the raw data to a remote server and archives all intermediate and final values in a MySQL database in time order. All data saved in the system can be straightforwardly retrieved, exported, and migrated. Using TSDSS's interactive data visualization tool, a user can conveniently choose any combination and mathematical computation of interesting telemetry points from a large range of time periods (life cycle of mission ground data and mission operations testing), and display a graphical and statistical view of the data. With this graphical user interface (GUI), the queried data graphs can be exported and saved in multiple formats. This GUI is especially useful in trending data analysis, debugging anomalies, and advanced data analysis. At the request of the user, mission-specific instrument performance assessment reports can be generated with a simple click of a button on the GUI. From instrument level to observatory level, the TSDSS has been operating in sync with the Aquarius/SAC-D spacecraft, supporting functional and performance tests and refining system calibration algorithms and coefficients. At the time of this reporting, it was prepared and set up to perform anomaly investigation for mission operations preceding the Aquarius/SAC-D spacecraft launch on June 10, 2011.
A mechanism for efficient debugging of parallel programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, B.P.; Choi, J.D.
1988-01-01
This paper addresses the design and implementation of an integrated debugging system for parallel programs running on shared memory multi-processors (SMMP). The authors describe the use of flowback analysis to provide information on causal relationships between events in a program's execution without re-executing the program for debugging. The authors introduce a mechanism called incremental tracing that, by using semantic analyses of the debugged program, makes the flowback analysis practical with only a small amount of trace generated during execution. They extend flowback analysis to apply to parallel programs and describe a method to detect race conditions in the interactions of the co-operating processes.
NASA Astrophysics Data System (ADS)
Cavallari, Francesca; de Gruttola, Michele; Di Guida, Salvatore; Govi, Giacomo; Innocente, Vincenzo; Pfeiffer, Andreas; Pierro, Antonio
2011-12-01
Automatic, synchronous and reliable population of the condition databases is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data. In this complex infrastructure, monitoring and fast detection of errors is a very challenging task. In this paper, we describe the CMS experiment system to process and populate the Condition Databases and make condition data promptly available both online for the high-level trigger and offline for reconstruction. The data are automatically collected using centralized jobs or are "dropped" by the users in dedicated services (offline and online drop-box), which synchronize them and take care of writing them into the online database. Then they are automatically streamed to the offline database, and thus are immediately accessible offline worldwide. The condition data are managed by different users using a wide range of applications. In normal operation, the database monitor is used to provide simple timing information and the history of all transactions for all database accounts, and in the case of faults it is used to return simple error messages and more complete debugging information.
NASA Technical Reports Server (NTRS)
Svalbonas, V.; Ogilvie, P.
1973-01-01
The user and programming information necessary for the application of the SATELLITE programs for the STARS system are presented. The individual program functions are: (1) data debugging for the STARS-2S program, (2) Fourier series conversion program, (3) data debugging for the STARS-2B program, and (4) data debugging for the STARS-2V program.
Mindtagger: A Demonstration of Data Labeling in Knowledge Base Construction.
Shin, Jaeho; Ré, Christopher; Cafarella, Michael
2015-08-01
End-to-end knowledge base construction systems using statistical inference are enabling more people to automatically extract high-quality domain-specific information from unstructured data. As a result of deploying the DeepDive framework across several domains, we found new challenges in debugging and improving such end-to-end systems to construct high-quality knowledge bases. DeepDive has an iterative development cycle in which users improve the data. To help our users, we needed to develop principles for analyzing the system's errors as well as provide tooling for inspecting and labeling various data products of the system. We created guidelines for error analysis modeled after our colleagues' best practices, in which data labeling plays a critical role in every step of the analysis. To enable more productive and systematic data labeling, we created Mindtagger, a versatile tool that can be configured to support a wide range of tasks. In this demonstration, we show in detail what data labeling tasks are modeled in our error analysis guidelines and how each of them is performed using Mindtagger.
Assessment of NDE reliability data
NASA Technical Reports Server (NTRS)
Yee, B. G. W.; Couchman, J. C.; Chang, F. H.; Packman, D. F.
1975-01-01
Twenty sets of relevant nondestructive test (NDT) reliability data were identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations was formulated, and a model to grade the quality and validity of the data sets was developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, were formulated for each NDE method. A comprehensive computer program was written and debugged to calculate the probability of flaw detection at several confidence limits by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. An example of the calculated reliability of crack detection in bolt holes by an automatic eddy current method is presented.
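To make the binomial treatment concrete, the sketch below computes a probability-of-detection point estimate and a one-sided lower confidence bound via the Clopper-Pearson relation between the binomial and beta distributions. It is not the report's original program, and the hit/trial counts are invented for illustration.

```python
# Hypothetical sketch: probability of detection (POD) from hit/miss data
# with a one-sided lower confidence bound via the Clopper-Pearson method.
from scipy.stats import beta

hits, trials = 28, 30          # invented example counts
confidence = 0.95

pod_estimate = hits / trials
# Exact (Clopper-Pearson) one-sided lower bound on the binomial proportion.
lower_bound = beta.ppf(1 - confidence, hits, trials - hits + 1) if hits > 0 else 0.0

print(f"POD estimate: {pod_estimate:.3f}")
print(f"{confidence:.0%} one-sided lower confidence bound: {lower_bound:.3f}")
```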
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaines, Sherry
Intentionally simple buggy code created for use in a debugging demonstration as part of recruiting tech talks. Code exemplifies a buffer overflow, leading to return address corruption. Code also demonstrates unused return value.
A unified approach for debugging is-a structure and mappings in networked taxonomies
2013-01-01
Background With the increased use of ontologies and ontology mappings in semantically-enabled applications such as ontology-based search and data integration, the issue of detecting and repairing defects in ontologies and ontology mappings has become increasingly important. These defects can lead to wrong or incomplete results for the applications. Results We propose a unified framework for debugging the is-a structure of and mappings between taxonomies, the most used kind of ontologies. We present theory and algorithms, as well as an implemented system, RepOSE, that supports a domain expert in detecting and repairing missing and wrong is-a relations and mappings. We also discuss two experiments performed by domain experts: an experiment on the Anatomy ontologies from the Ontology Alignment Evaluation Initiative, and a debugging session for the Swedish National Food Agency. Conclusions Semantically-enabled applications need high quality ontologies and ontology mappings. One key aspect is the detection and removal of defects in the ontologies and ontology mappings. Our system RepOSE provides an environment that supports domain experts to deal with this issue. We have shown the usefulness of the approach in two experiments by detecting and repairing circa 200 and 30 defects, respectively. PMID:23548155
Trace-Driven Debugging of Message Passing Programs
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Hood, Robert; Lopez, Louis; Bailey, David (Technical Monitor)
1998-01-01
In this paper we report on features added to a parallel debugger to simplify the debugging of parallel message passing programs. These features include replay, setting consistent breakpoints based on interprocess event causality, a parallel undo operation, and communication supervision. These features all use trace information collected during the execution of the program being debugged. We used a number of different instrumentation techniques to collect traces. We also implemented trace displays using two different trace visualization systems. The implementation was tested on an SGI Power Challenge cluster and a network of SGI workstations.
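Consistent breakpoints based on interprocess event causality typically rest on a happens-before ordering of traced events. The sketch below is not the debugger described in the abstract; it is a generic illustration, using Lamport logical clocks, of how traced send/receive events can be ordered so that a breakpoint on one process can be related to causally consistent points on the others.

```python
# Generic illustration of Lamport logical clocks for ordering traced
# message-passing events; not the instrumentation used in the paper.
class Process:
    def __init__(self, pid):
        self.pid = pid
        self.clock = 0
        self.trace = []          # (logical time, event description)

    def local_event(self, label):
        self.clock += 1
        self.trace.append((self.clock, f"{label} on P{self.pid}"))

    def send(self, msg):
        self.clock += 1
        self.trace.append((self.clock, f"send {msg!r} from P{self.pid}"))
        return (self.clock, msg)

    def receive(self, stamped_msg):
        ts, msg = stamped_msg
        self.clock = max(self.clock, ts) + 1
        self.trace.append((self.clock, f"recv {msg!r} on P{self.pid}"))


p0, p1 = Process(0), Process(1)
p0.local_event("compute")
m = p0.send("boundary row")
p1.local_event("compute")
p1.receive(m)

# Merging traces by logical time yields a causally consistent ordering,
# which is the kind of information a consistent-breakpoint mechanism needs.
for ts, ev in sorted(p0.trace + p1.trace):
    print(ts, ev)
```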
Control system of mobile radiographic complex to study equations of state of substances
NASA Astrophysics Data System (ADS)
Belov, O. V.; Valekzhanin, R. V.; Kustov, D. V.; Shamro, O. A.; Sharov, T. V.
2017-05-01
A source of x-ray radiation is one of the tools used to study equations of state of substances in dynamics. The mobile radiographic bench based on BIM-1500 [1] was developed in RFNC-VNIIEF to increase the output parameters of the x-ray radiation source. From the automated control system side, BIM-1500 is a set of six high-voltage generators based on capacitive energy storage, technological equipment, and elements of a blocking system. This paper considers the automated control system of the mobile radiographic bench, MCA BIM 1500. It consists of six high-voltage generator control circuits, a synchronization subsystem, and a blocking subsystem. The object of control has some peculiarities: a high level of electromagnetic noise and the remoteness of the control panel from the object of control. For this reason, the coupling devices are arranged closer to the object of control and are implemented as a set of galvanically isolated control units combined into a network. The operator runs MCA BIM using operator screens on a PC or by means of manual controls on the equipment in debugging mode. The control software performs the experiment automatically in accordance with preset settings. The operator can stop the experiment at the stage of charging the capacitive storage.
[General-purpose microcomputer for medical laboratory instruments].
Vil'ner, G A; Dudareva, I E; Kurochkin, V E; Opalev, A A; Polek, A M
1984-01-01
Presented in the paper is the microcomputer based on the KP580 microprocessor set. Debugging of the hardware and the software by using the unique debugging stand developed on the basis of microcomputer "Electronica-60" is discussed.
ArrayBridge: Interweaving declarative array processing with high-performance computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xing, Haoyuan; Floratos, Sofoklis; Blanas, Spyros
Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits statistically indistinguishable performance and I/O scalability to the native SciDB storage engine.
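The in situ access that ArrayBridge targets can be pictured with a plain HDF5 read. The fragment below is a generic h5py sketch of slicing an external HDF5 array without a separate loading step; the file name and dataset path are assumptions, and it does not use ArrayBridge or SciDB themselves.

```python
# Generic sketch of in situ access to an HDF5 array with h5py;
# the file and dataset names are invented for illustration.
import h5py

with h5py.File("simulation_output.h5", "r") as f:
    dset = f["/fields/temperature"]        # handle only; no full read yet
    print("shape:", dset.shape, "dtype:", dset.dtype)

    # Read just one hyperslab instead of loading the whole array.
    block = dset[0:64, 0:64]
    print("block mean:", block.mean())
```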
Data Provenance as a Tool for Debugging Hydrological Models based on Python
NASA Astrophysics Data System (ADS)
Wombacher, A.; Huq, M.; Wada, Y.; Van Beek, R.
2012-12-01
There is an increase in data volume used in hydrological modeling. The increasing data volume requires additional effort in debugging models, since a single output value is influenced by a multitude of input values. Thus, it is difficult to keep an overview of the data dependencies. Further, even knowing these dependencies, it is a tedious job to infer all the relevant data values. The aforementioned data dependencies are also known as data provenance, i.e. the determination of how a particular value has been created and processed. The proposed tool infers the data provenance automatically from a python script and visualizes the dependencies as a graph without executing the script. To debug the model, the user specifies the value of interest in space and time. The tool infers all related data values and displays them in the graph. The tool has been evaluated by hydrologists developing a model for estimating the global water demand [1]. The model uses multiple different data sources. The script we analysed has 120 lines of code and used more than 3000 individual files, each of them representing a raster map of 360*720 cells. After importing the data of the files into a SQLite database, the data consumes around 40 GB of memory. Using the proposed tool, a modeler is able to select individual values and infer which values have been used to calculate them. Especially in cases of outliers or missing values, it is a beneficial tool that provides the modeler with efficient information to investigate the unexpected behavior of the model. The proposed tool can be applied to many python scripts and has been tested with other scripts in different contexts. In case a python code contains an unknown function or class, the tool requests additional information about the used function or class to enable the inference. This information has to be entered only once and can be shared with colleagues or in the community. Reference [1] Y. Wada, L. P. H. van Beek, D. Viviroli, H. H. Dürr, R. Weingartner, and M. F. P. Bierkens, "Global monthly water stress: II. Water demand and severity of water stress," Water Resources Research, vol. 47, 2011.
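The dependency graph that such a provenance tool extracts can be approximated statically with Python's own ast module. The sketch below is not the tool from the abstract; it merely illustrates, under the assumption of simple assignment statements, how variable-level data dependencies can be collected from a script without executing it. The variable names in the sample script are invented.

```python
# Minimal static sketch of variable-level data dependencies in a Python
# script, using the standard-library ast module (not the tool in the paper).
import ast

source = """
demand = population * per_capita_use
supply = runoff + groundwater
stress = demand / supply
"""

deps = {}
for node in ast.parse(source).body:
    if isinstance(node, ast.Assign) and isinstance(node.targets[0], ast.Name):
        target = node.targets[0].id
        inputs = {n.id for n in ast.walk(node.value) if isinstance(n, ast.Name)}
        deps[target] = inputs

for var, inputs in deps.items():
    print(f"{var} <- {sorted(inputs)}")
```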
Automated synthesis and composition of taskblocks for control of manufacturing systems.
Holloway, L E; Guan, X; Sundaravadivelu, R; Ashley, J R
2000-01-01
Automated control synthesis methods for discrete-event systems promise to reduce the time required to develop, debug, and modify control software. Such methods must be able to translate high-level control goals into detailed sequences of actuation and sensing signals. In this paper, we present such a technique. It relies on analysis of a system model, defined as a set of interacting components, each represented as a form of condition system Petri net. Control logic modules, called taskblocks, are synthesized from these individual models. These then interact hierarchically and sequentially to drive the system through specified control goals. The resulting controller is automatically converted to executable control code. The paper concludes with a discussion of a set of software tools developed to demonstrate the techniques on a small manufacturing system.
Preventing Run-Time Bugs at Compile-Time Using Advanced C++
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neswold, Richard
When writing software, we develop algorithms that tell the computer what to do at run-time. Our solutions are easier to understand and debug when they are properly modeled using class hierarchies, enumerations, and a well-factored API. Unfortunately, even with these design tools, we end up having to debug our programs at run-time. Worse still, debugging an embedded system changes its dynamics, making it tough to find and fix concurrency issues. This paper describes techniques using C++ to detect run-time bugs *at compile time*. A concurrency library, developed at Fermilab, is used for examples in illustrating these techniques.
NASA Technical Reports Server (NTRS)
Kole, R. E.; Helmers, P. H.; Hotz, R. L.
1974-01-01
This is a reference document to be used in the process of getting HAL/S programs compiled and debugged on the IBM 360 computer. Topics ranging from operating system communication to the interpretation of debugging aids are discussed. Features of the HAL programming system that have specific System/360 dependencies are presented.
Development of a complex experimental system for controlled ecological life support technique
NASA Astrophysics Data System (ADS)
Guo, S.; Tang, Y.; Zhu, J.; Wang, X.; Feng, H.; Ai, W.; Qin, L.; Deng, Y.
A complex experimental system for controlled ecological life support technique can be used as a test platform for plant-man integrated experiments and material closed-loop experiments of the controlled ecological life support system (CELSS). Based on extensive plan investigation, plan design, and drawing design, the system was built through the steps of processing, installation, and joint debugging. The system contains a volume of about 40.0 m3; its interior atmospheric parameters, such as temperature, relative humidity, oxygen concentration, carbon dioxide concentration, total pressure, lighting intensity, photoperiod, water content in the growing matrix, and ethylene concentration, are all monitored and controlled automatically and effectively. Its growing system consists of two rows of racks along its left and right sides, each of which holds two layers, upper and lower; the eight growing beds provide a total area of about 8.4 m2, and their vertical distance can be adjusted automatically and independently; lighting sources consist of both red and blue light-emitting diodes. Successful development of the test platform will necessarily create an essential condition for the next large-scale integrated study of controlled ecological life support technique.
Performance management system enhancement and maintenance
NASA Technical Reports Server (NTRS)
Cleaver, T. G.; Ahour, R.; Johnson, B. R.
1984-01-01
The research described in this report concludes a two-year effort to develop a Performance Management System (PMS) for the NCC computers. PMS provides semi-automated monthly reports to NASA and contractor management on the status and performance of the NCC computers in the TDRSS program. Throughout 1984, PMS was tested, debugged, extended, and enhanced. Regular PMS monthly reports were produced and distributed. PMS continues to operate at the NCC under control of Bendix Corp. personnel.
A design of camera simulator for photoelectric image acquisition system
NASA Astrophysics Data System (ADS)
Cai, Guanghui; Liu, Wen; Zhang, Xin
2015-02-01
In the process of developing photoelectric image acquisition equipment, its function and performance need to be verified. In order to let the photoelectric device replay previously acquired image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, the image data are saved in NAND flash through a USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the requirements of the system, the pipeline technique and a high-bandwidth bus technique are applied in the design to improve the storage rate. The image data are read out from flash by the control logic of the FPGA and output separately through three different interfaces, Camera Link, LVDS, and PAL, which can provide image data for the debugging and algorithm validation of photoelectric image acquisition equipment. However, because the standard PAL image resolution is 720x576, the resolution differs between the PAL image and the input image, so the image is output after resolution conversion. The experimental results demonstrate that the camera simulator outputs the three image-format sequences correctly, and they can be captured and displayed by a frame grabber. The three-format image data can meet the test requirements of most equipment, shorten debugging time, and improve test efficiency.
Aspects of a Theory of Simplification, Debugging, and Coaching.
ERIC Educational Resources Information Center
Fischer, Gerhard; And Others
This paper analyses new methods of teaching skiing in terms of a computational paradigm for learning called increasingly complex microworlds (ICM). Examining the factors that underlie the dramatic enhancement of the learning of skiing led to the focus on the processes of simplification, debugging, and coaching. These three processes are studied in…
A debugger-interpreter with setup facilities for assembly programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolinskii, I.S.; Zisel`man, I.M.; Belotskii, S.L.
1995-11-01
In this paper, a software program is considered that allows one to introduce and debug descriptions of von Neumann architecture processors and their assemblers, efficiently debug assembly programs, and investigate the instruction sets of the described processors. For the description of processor semantics and assembler syntax, a meta-assembly language is suggested.
A Framework for Debugging Geoscience Projects in a High Performance Computing Environment
NASA Astrophysics Data System (ADS)
Baxter, C.; Matott, L.
2012-12-01
High performance computing (HPC) infrastructure has become ubiquitous in today's world with the emergence of commercial cloud computing and academic supercomputing centers. Teams of geoscientists, hydrologists and engineers can take advantage of this infrastructure to undertake large research projects - for example, linking one or more site-specific environmental models with soft computing algorithms, such as heuristic global search procedures, to perform parameter estimation and predictive uncertainty analysis, and/or design least-cost remediation systems. However, the size, complexity and distributed nature of these projects can make identifying failures in the associated numerical experiments using conventional ad-hoc approaches both time-consuming and ineffective. To address these problems a multi-tiered debugging framework has been developed. The framework allows for quickly isolating and remedying a number of potential experimental failures, including: failures in the HPC scheduler; bugs in the soft computing code; bugs in the modeling code; and permissions and access control errors. The utility of the framework is demonstrated via application to a series of over 200,000 numerical experiments involving a suite of 5 heuristic global search algorithms and 15 mathematical test functions serving as cheap analogues for the simulation-based optimization of pump-and-treat subsurface remediation systems.
Parsing Protocols Using Problem Solving Grammars. AI Memo 385.
ERIC Educational Resources Information Center
Miller, Mark L.; Goldstein, Ira P.
A theory of the planning and debugging of computer programs is formalized as a context free grammar, which is used to reveal the constituent structure of problem solving episodes by parsing protocols in which programs are written, tested, and debugged. This is illustrated by the detailed analysis of an actual session with a beginning student…
SABRINA: an interactive solid geometry modeling program for Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T.
SABRINA is a fully interactive three-dimensional geometry modeling program for MCNP. In SABRINA, a user interactively constructs either body geometry, or surface geometry models, and interactively debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces the effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo Analysis.
Allinea Parallel Profiling and Debugging Tools on the Peregrine System
Web-page fragment: install the Allinea client for your platform (Mac/Windows/Linux), connect to Peregrine with X11 forwarding enabled, and run 'map' to open the profiling and debugging GUI.
Making statistical inferences about software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1988-01-01
Failure times of software undergoing random debugging can be modelled as order statistics of independent but nonidentically distributed exponential random variables. Using this model inferences can be made about current reliability and, if debugging continues, future reliability. This model also shows the difficulty inherent in statistical verification of very highly reliable software such as that used by digital avionics in commercial aircraft.
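The order-statistics view can be made concrete with a small simulation: each latent fault has its own exponential detection time, and the observed failure times are simply those times sorted. The rates and observation window below are invented, and the sketch illustrates only the model class, not the paper's inference procedure.

```python
# Simulation sketch of failure times as order statistics of independent,
# nonidentically distributed exponential variables; rates are invented.
import random

random.seed(1)

rates = [0.5, 0.4, 0.3, 0.2, 0.1]          # one detection rate per latent fault
detection_times = [random.expovariate(lam) for lam in rates]

failure_times = sorted(detection_times)     # the order statistics actually observed
print("observed failure times:", [round(t, 2) for t in failure_times])

# Residual failure intensity after a debugging period: only the faults not
# yet detected by time t_obs still contribute.
t_obs = 5.0
remaining = [lam for lam, t in zip(rates, detection_times) if t > t_obs]
print("residual failure rate:", round(sum(remaining), 3))
```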
Server-Side JavaScript Debugging: Viewing the Contents of an Object
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, J.; Simons, R.
1999-04-21
JavaScript allows the definition and use of large, complex objects. Unlike some other object-oriented languages, it also allows run-time modifications not only of the values of object components, but also of the very structure of the object itself. This feature is powerful and sometimes very convenient, but it can be difficult to keep track of the object's structure and values throughout program execution. What's needed is a simple way to view the current state of an object at any point during execution. There is a debug function that is included in the Netscape server-side JavaScript environment. The function outputs the value(s) of the expression given as the argument to the function in the JavaScript Application Manager's debug window [SSJS].
Parallel program debugging with flowback analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Jongdeok.
1989-01-01
This thesis describes the design and implementation of an integrated debugging system for parallel programs running on shared memory multi-processors. The goal of the debugging system is to present to the programmer a graphical view of the dynamic program dependences while keeping the execution-time overhead low. The author first describes the use of flowback analysis to provide information on causal relationships between events in a program's execution without re-executing the program for debugging. Execution time overhead is kept low by recording only a small amount of trace during a program's execution. He uses semantic analysis and a technique called incremental tracing to keep the time and space overhead low. As part of the semantic analysis, he uses a static program dependence graph structure that reduces the amount of work done at compile time and takes advantage of the dynamic information produced during execution time. The cornerstone of the incremental tracing concept is to generate a coarse trace during execution and fill incrementally, during the interactive portion of the debugging session, the gap between the information gathered in the coarse trace and the information needed to do the flowback analysis using the coarse trace. Then, he describes how to extend the flowback analysis to parallel programs. The flowback analysis can span process boundaries; i.e., the most recent modification to a shared variable might be traced to a different process than the one that contains the current reference. The static and dynamic program dependence graphs of the individual processes are tied together with synchronization and data dependence information to form complete graphs that represent the entire program.
C Language Integrated Production System, Ada Version
NASA Technical Reports Server (NTRS)
Culbert, Chris; Riley, Gary; Savely, Robert T.; Melebeck, Clovis J.; White, Wesley A.; Mcgregor, Terry L.; Ferguson, Melisa; Razavipour, Reza
1992-01-01
CLIPS/Ada provides capabilities of CLIPS v4.3 but uses Ada as source language for CLIPS executable code. Implements forward-chaining rule-based language. Program contains inference engine and language syntax providing framework for construction of expert-system program. Also includes features for debugging application program. Based on Rete algorithm which provides efficient method for performing repeated matching of patterns. Written in Ada.
SABRINA: an interactive three-dimensional geometry-modeling program for MCNP
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T. III
SABRINA is a fully interactive three-dimensional geometry-modeling program for MCNP, a Los Alamos Monte Carlo code for neutron and photon transport. In SABRINA, a user constructs either body geometry or surface geometry models and debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis. 2 refs., 33 figs.
Interactive debug program for evaluation and modification of assembly-language software
NASA Technical Reports Server (NTRS)
Arpasi, D. J.
1979-01-01
An assembly-language debug program written for the Honeywell HDC-601 and DDP-516/316 computers is described. Names and relative addressing to improve operator-machine interaction are used. Features include versatile display, on-line assembly, and improved program execution and analysis. The program is discussed from both a programmer's and an operator's standpoint. Functional diagrams are included to describe the program, and each command is illustrated.
Simple debugging techniques for embedded subsystems
NASA Astrophysics Data System (ADS)
MacPherson, Matthew S.; Martin, Kevin S.
1990-08-01
This paper describes some of the tools and methods used for developing and debugging embedded subsystems at Fermilab. Specifically, these tools have been used for the Flying Wire project and are currently being employed for the New TECAR upgrade. The Flying Wire is a subsystem that swings a wire through the beam in order to measure luminosity and beam density distribution, and TECAR (Tevatron excitation controller and regulator) controls the power-supply ramp generation for the superconducting Tevatron accelerator at Fermilab. In both instances the subsystem hardware consists of a VME crate with one or more processors, shared memory, and a network connection to the accelerator control system. Two real-time operating systems are currently being used: VRTX for the Flying Wire system and MTOS for New TECAR. The code which runs in these subsystems is a combination of C and assembler and is developed using the Microtec cross-development tools on a VAX 8650 running VMS. This paper explains how multiple debuggers are used to give the greatest possible flexibility from assembly to high-level debugging. Also discussed is how network debugging and network downloading can provide a very effective and efficient means of finding bugs in the subsystem environment. The debuggers used are PROBE1, TRACER and the MTOS debugger.
Development and application of structural dynamics analysis capabilities
NASA Technical Reports Server (NTRS)
Heinemann, Klaus W.; Hozaki, Shig
1994-01-01
Extensive research activities were performed in the area of multidisciplinary modeling and simulation of aerospace vehicles that are relevant to NASA Dryden Flight Research Facility. The efforts involved theoretical development, computer coding, and debugging of the STARS code. New solution procedures were developed in such areas as structures, CFD, and graphics, among others. Furthermore, systems-oriented codes were developed for rendering the code truly multidisciplinary and rather automated in nature. Also, work was performed in pre- and post-processing of engineering analysis data.
Approaches to Debugging at Scale on the Peregrine System
Web-page fragment: request an interactive job through the scheduler (for example, 100 nodes for one day), run an interactive debugger such as TotalView once the nodes are available, and detach from the session by typing control-A then d.
Flow-Centric, Back-in-Time Debugging
NASA Astrophysics Data System (ADS)
Lienhard, Adrian; Fierz, Julien; Nierstrasz, Oscar
Conventional debugging tools present developers with means to explore the run-time context in which an error has occurred. In many cases this is enough to help the developer discover the faulty source code and correct it. However, rather often errors occur due to code that has executed in the past, leaving certain objects in an inconsistent state. The actual run-time error only occurs when these inconsistent objects are used later in the program. So-called back-in-time debuggers help developers step back through earlier states of the program and explore execution contexts not available to conventional debuggers. Nevertheless, even Back-in-Time Debuggers do not help answer the question, “Where did this object come from?” The Object-Flow Virtual Machine, which we have proposed in previous work, tracks the flow of objects to answer precisely such questions, but this VM does not provide dedicated debugging support to explore faulty programs. In this paper we present a novel debugger, called Compass, to navigate between conventional run-time stack-oriented control flow views and object flows. Compass enables a developer to effectively navigate from an object contributing to an error back-in-time through all the code that has touched the object. We present the design and implementation of Compass, and we demonstrate how flow-centric, back-in-time debugging can be used to effectively locate the source of hard-to-find bugs.
Flexible Decision Support in Device-Saturated Environments
2003-10-01
...also output tuples to a remote MySQL or Postgres database. The GUI allows the user to pose queries using SQL and to display query...DatabaseConnection.java handles connections to an external database (such as MySQL or Postgres). Debug.java contains the code for printing out debug messages...also provided. It is possible to output the results of queries to a MySQL or Postgres database for archival, and the GUI can query those results.
The 3DGRAPE book: Theory, users' manual, examples
NASA Technical Reports Server (NTRS)
Sorenson, Reese L.
1989-01-01
A users' manual for a new three-dimensional grid generator called 3DGRAPE is presented. The program, written in FORTRAN, is capable of making zonal (blocked) computational grids in or about almost any shape. Grids are generated by the solution of Poisson's differential equations in three dimensions. The program automatically finds its own values for inhomogeneous terms which give near-orthogonality and controlled grid cell height at boundaries. Grids generated by 3DGRAPE have been applied to both viscous and inviscid aerodynamic problems, and to problems in other fluid-dynamic areas. The smoothness for which elliptic methods are known is seen here, including smoothness across zonal boundaries. An introduction giving the history, motivation, capabilities, and philosophy of 3DGRAPE is presented first. Then follows a chapter on the program itself. The input is then described in detail. A chapter on reading the output and debugging follows. Three examples are then described, including sample input data and plots of output. Last is a chapter on the theoretical development of the method.
Project UNITY: Cross Domain Visualization Collaboration
2015-10-18
...location is at the Space Operations Coordination Center (UK-SPOCC) in High Wycombe, UK. Identical AFRL-developed ErgoWorkstations were...installed in both locations. The AFRL ErgoWorkstation is made up of a high-performance Windows-based PC with three displays, two of them 30-inch Dell Cinema displays...The intent of using identical hardware is to minimize complexity, to simplify debugging, and to provide an opportunity...
Iterative Authoring Using Story Generation Feedback: Debugging or Co-creation?
NASA Astrophysics Data System (ADS)
Swartjes, Ivo; Theune, Mariët
We explore the role that story generation feedback may play within the creative process of interactive story authoring. While such feedback is often used as 'debugging' information, we explore here a 'co-creation' view, in which the outcome of the story generator influences authorial intent. We illustrate an iterative authoring approach in which each iteration consists of idea generation, implementation and simulation. We find that the tension between authorial intent and the partially uncontrollable story generation outcome may be relieved by taking such a co-creation approach.
Insertion of coherence requests for debugging a multiprocessor
Blumrich, Matthias A.; Salapura, Valentina
2010-02-23
A method and system are disclosed to insert coherence events in a multiprocessor computer system, and to present those coherence events to the processors of the multiprocessor computer system for analysis and debugging purposes. The coherence events are inserted in the computer system by adding one or more special insert registers. By writing into the insert registers, coherence events are inserted in the multiprocessor system as if they were generated by the normal coherence protocol. Once these coherence events are processed, the processing of coherence events can continue in the normal operation mode.
1993-03-01
II. NON-COHERENT REFLECTOMETRY. The design of sources of steady-state intensive noise signals of the mm-wave band with sufficiently wide and homogeneous...structures exhibit non-reciprocity effects, as well as magnetically controlled resonances, which are observable in reflection, absorption, and...performance of the oscillator. Accordingly, we designed a 3mm electronically tuned harmonic oscillator in which it is easy to debug and control...
An Evaluation of a Management Wargame and the Factors Affecting Game Performance.
1987-09-01
...in residence. This is not a criticism of the author, but rather a systematic flaw in game development in general. Therefore, TEMPO-AI is an excellent...establish the test procedure used in this thesis. This stage of game development is absolutely vital if the game is intended for serious academic use...Unfortunately, this important step is sadly neglected in nearly all military game development. While TEMPO-AI was extensively debugged as a computer...
Tethered Forth system for FPGA applications
NASA Astrophysics Data System (ADS)
Goździkowski, Paweł; Zabołotny, Wojciech M.
2013-10-01
This paper presents a tethered Forth system dedicated to the testing and debugging of FPGA-based electronic systems. Use of the Forth language allows complex testing or debugging routines to be developed and run interactively. The solution is based on a small, 16-bit soft-core CPU used to implement the Forth Virtual Machine. Thanks to the use of the tethered Forth model, it is possible to minimize usage of the internal RAM in the FPGA. The function of the intelligent terminal, which is an essential part of the tethered Forth system, may be fulfilled by a standard PC or a smartphone. The system is implemented in Python (the software for the intelligent terminal) and in VHDL (the IP core for the FPGA), so it can be easily ported to different hardware platforms. The connection between the terminal and the FPGA may be established and disconnected many times without disturbing the state of the FPGA-based system. The presented system has been verified in hardware and may be used as a tool for debugging, testing, and even implementing control algorithms for FPGA-based systems.
Multitasking kernel for the C and Fortran programming languages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brooks, E.D. III
1984-09-01
A multitasking kernel for the C and Fortran programming languages which runs on the Unix operating system is presented. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the coding, debugging and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessors. The performance evaluation features require no changes in the source code of the application and are implemented as a set of compile and run time options in the kernel.
A study of fault prediction and reliability assessment in the SEL environment
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Patnaik, Debabrata
1986-01-01
An empirical study on the estimation and prediction of faults, the prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory (SEL) environment is presented. Fault estimation using empirical relationships and fault prediction using a curve-fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided in order to make an early estimate of future debugging effort. This study concludes with a fault analysis, the application of a reliability model, and the analysis of a normalized metric for reliability assessment and reliability monitoring during software development.
High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering
NASA Technical Reports Server (NTRS)
Maly, K.
1998-01-01
Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during its execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and could be distributed in various locations in the applications environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve the application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, which is a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance learning application for obtaining debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work represents a major contribution by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain key characteristics of each technique. In addition, we discuss limitations of existing event filtering mechanisms and outline how our architecture will improve key aspects of event filtering.
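The core idea of reducing event traffic by filtering close to the sources can be illustrated with a minimal predicate-based filter. This is only a toy sketch under assumed event fields and thresholds, not the architecture described above.

```python
# Toy sketch of predicate-based event filtering near the event source;
# event fields and thresholds are assumptions for illustration only.
from typing import Callable, Dict, List

Event = Dict[str, object]
Predicate = Callable[[Event], bool]

class EventFilter:
    def __init__(self):
        self.subscriptions: List[Predicate] = []

    def subscribe(self, predicate: Predicate):
        self.subscriptions.append(predicate)

    def forward(self, events: List[Event]) -> List[Event]:
        # Only events matching at least one subscription leave the node,
        # which is how filtering reduces monitoring traffic.
        return [e for e in events if any(p(e) for p in self.subscriptions)]


f = EventFilter()
f.subscribe(lambda e: e["type"] == "error")
f.subscribe(lambda e: e["type"] == "latency" and e["value"] > 200)

raw = [
    {"type": "heartbeat", "value": 1},
    {"type": "latency", "value": 350},
    {"type": "error", "value": "timeout"},
]
print(f.forward(raw))
```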
Automated Weather Observing System (AWOS) Demonstration Program.
1984-09-01
month "bur:-in" r "debugging" period and a 10-month ’usefu I life " period. Fhe butrn- in pr i ,J was i sed to establish the Data Acquisition System...Histograms. Histograms provide a graphical means of showing how well the probability distribution of residu : , approaches a normal or Gaussian distribution...Organization Report No. 7- Author’s) Paul .J. O t Brien et al. DOT/FAA/CT-84/20 9. Performing Organlzation Name and Address 10. Work Unit No. (TRAIS
Experience of the ARGO autonomous vehicle
NASA Astrophysics Data System (ADS)
Bertozzi, Massimo; Broggi, Alberto; Conte, Gianni; Fascioli, Alessandra
1998-07-01
This paper presents and discusses the first results obtained with the GOLD (Generic Obstacle and Lane Detection) system as an automatic driver of ARGO. ARGO is a Lancia Thema passenger car equipped with a vision-based system that extracts road and environmental information from the acquired scene. By means of stereo vision, obstacles on the road are detected and localized, while the processing of a single monocular image extracts the road geometry in front of the vehicle. The generality of the underlying approach allows the detection of generic obstacles (without constraints on shape, color, or symmetry) and of lane markings even in dark and strong-shadow conditions. The hardware system consists of a 200 MHz Pentium PC with MMX technology and a frame-grabber board able to acquire three b/w images simultaneously; the result of the processing (position of obstacles and geometry of the road) is used to drive an actuator on the steering wheel, while debug information is presented to the user on an on-board monitor and a LED-based control panel.
FPGA Flash Memory High Speed Data Acquisition
NASA Technical Reports Server (NTRS)
Gonzalez, April
2013-01-01
The purpose of this research is to design and implement a VHDL ONFI controller module for a Modular Instrumentation System. The goal of the Modular Instrumentation System will be to have a low-power device that will store data and send the data at a low speed to a processor. The benefit of such a system will be an advantage over other purchased binary IP due to the capability of allowing NASA to re-use and modify the memory controller module. To accomplish the performance criteria of a low-power system, an in-house auxiliary board (Flash/ADC board), an FPGA development kit, a debug board, and a modular instrumentation board will be used jointly for the data acquisition. The Flash/ADC board contains four 1-MSPS input channels and an Open NAND Flash memory module with an analog-to-digital converter. The ADC, data bits, and control line signals from the board are sent to a Microsemi/Actel FPGA development kit for VHDL programming of the flash memory WRITE, READ, READ STATUS, ERASE, and RESET operation waveforms using Libero software. The debug board will be used for verification of the analog input signal and will communicate via a serial interface with the modular instrumentation. The scope of the new controller module was to find and develop an ONFI controller, with the debug board layout designed and completed for manufacture. Successful flash memory operation waveform test routines were completed, simulated, and tested to work on the FPGA board. Through connection of the Flash/ADC board with the FPGA, it was found that the device specifications were not being met, with Vdd reaching half of its voltage. Further testing showed that the manufactured Flash/ADC board contained a misalignment in the ONFI memory module traces. The errors proved to be too great to fix in the time limit set for the project.
INFORM: An interactive data collection and display program with debugging capability
NASA Technical Reports Server (NTRS)
Cwynar, D. S.
1980-01-01
A computer program was developed to aid assembly-language programmers of mini- and microcomputers in solving the man-machine communication problems that exist when scaled integers are involved. In addition to producing displays of quasi-steady-state values, INFORM provides an interactive mode for debugging programs, making program patches, and modifying the displays. Auxiliary routines SAMPLE and DATAO add dynamic data acquisition and high-speed dynamic display capability to the program. Programming information and flow charts to aid in implementing INFORM on various machines, together with descriptions of all supportive software, are provided. Program modifications to satisfy individual users' needs are considered.
A practice course to cultivate students' comprehensive ability of photoelectricity
NASA Astrophysics Data System (ADS)
Lv, Yong; Liu, Yang; Niu, Chunhui; Liu, Lishuang
2017-08-01
After studying many theoretical courses, it is important and urgent for students in the specialty of optoelectronic information science and engineering to cultivate their comprehensive ability in photoelectricity. We set up a comprehensive practice course named "Integrated Design of Optoelectronic Information System" (IDOIS) so that students can integrate their knowledge of optics, electronics and computer programming to design, install and debug an optoelectronic system with independent functions. Eight years of practice show that this course can train students' abilities in the analysis, design/development and debugging of photoelectric systems, and improve their abilities in document retrieval, design proposal and summary report writing, teamwork, innovation consciousness and skill.
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) it is a common phenomenon that the fault detection rate changes during the testing phase; 2) as a result of imperfect debugging, fault removal has been related to a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In the practical software development process, fault removal efficiency cannot always be perfect: the detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced in the meantime, which is referred to as the imperfect debugging phenomenon. In this study, a model is developed that incorporates the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation, using testing coverage to express the fault detection rate and fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data and five criteria. The results show that the model gives better fitting and predictive performance.
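For readers unfamiliar with NHPP growth models, the mechanics of fitting a mean value function can be pictured with the classical Goel-Okumoto form m(t) = a(1 - exp(-b t)) rather than the more elaborate model proposed here; the weekly failure counts below are invented, and only the generic curve-fitting step is shown.

```python
# Illustration of fitting a simple NHPP mean value function (Goel-Okumoto),
# not the testing-coverage model proposed in the paper; data are invented.
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    """Expected cumulative number of failures by time t."""
    return a * (1.0 - np.exp(-b * t))

weeks = np.arange(1, 11, dtype=float)
cumulative_failures = np.array([5, 9, 14, 17, 20, 22, 24, 25, 26, 27], dtype=float)

(a_hat, b_hat), _ = curve_fit(mean_value, weeks, cumulative_failures, p0=(30.0, 0.3))
print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
print(f"expected remaining faults after week 10: {a_hat - mean_value(10, a_hat, b_hat):.1f}")
```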
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandwisch, D W
1995-11-01
This report describes work performed by Solar Cells, Inc. (SCI), under a 3-year subcontract to advance SCI's PV manufacturing technologies, reduce module production costs, increase module performance, and provide the groundwork for SCI to expand its commercial production capacities. SCI will meet these objectives in three phases by designing, debugging, and operating a 20-MW/year, automated, continuous PV manufacturing line that produces 60-cm x 120-cm thin-film CdTe PV modules. This report describes tasks completed under Phase 1 of the US Department of Energy's PV Manufacturing Technology program.
Symphony: A Framework for Accurate and Holistic WSN Simulation
Riliskis, Laurynas; Osipov, Evgeny
2015-01-01
Research on wireless sensor networks has progressed rapidly over the last decade, and these technologies have been widely adopted for both industrial and domestic uses. Several operating systems have been developed, along with a multitude of network protocols for all layers of the communication stack. Industrial Wireless Sensor Network (WSN) systems must satisfy strict criteria and are typically more complex and larger in scale than domestic systems. Together with the non-deterministic behavior of network hardware in real settings, this greatly complicates the debugging and testing of WSN functionality. To facilitate the testing, validation, and debugging of large-scale WSN systems, we have developed a simulation framework that accurately reproduces the processes that occur inside real equipment, including both hardware- and software-induced delays. The core of the framework consists of a virtualized operating system and an emulated hardware platform that is integrated with the general purpose network simulator ns-3. Our framework enables the user to adjust the real code base as would be done in real deployments and also to test the boundary effects of different hardware components on the performance of distributed applications and protocols. Additionally we have developed a clock emulator with several different skew models and a component that handles sensory data feeds. The new framework should substantially shorten WSN application development cycles. PMID:25723144
A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process
NASA Technical Reports Server (NTRS)
Wang, Yi; Tamai, Tetsuo
2009-01-01
Since the complexity of software systems continues to grow, most engineers face two serious problems: the state space explosion problem and the problem of how to debug systems. In this paper, we propose a game-theoretic approach to full branching time model checking on three-valued semantics. The three-valued models and logics provide successful abstraction that overcomes the state space explosion problem. The game style model checking that generates counter-examples can guide refinement or identify validated formulas, which solves the system debugging problem. Furthermore, output of our game style method will give significant information to engineers in detecting where errors have occurred and what the causes of the errors are.
The Priority Inversion Problem and Real-Time Symbolic Model Checking
1993-04-23
Priority inversion can make real-time systems unpredictable in subtle ways. This makes it more difficult to implement and debug such systems. Our work discusses this problem and presents one possible solution. The solution is formalized and verified using temporal logic model checking techniques. In order to perform the verification, the BDD-based symbolic model checking algorithm given in previous works was extended to handle real-time properties using the bounded until operator. We believe that this algorithm, which is based on discrete time, is able to handle many real-time properties.
NASA Technical Reports Server (NTRS)
Brooks, David E.; Gassman, Holly; Beering, Dave R.; Welch, Arun; Hoder, Douglas J.; Ivancic, William D.
1999-01-01
Transmission Control Protocol (TCP) is the underlying protocol used within the Internet for reliable information transfer. As such, there is great interest in having all implementations of TCP interoperate efficiently. This is particularly important for links exhibiting long bandwidth-delay products. The tools exist to perform TCP analysis at low rates and low delays. However, for extremely high-rate and long-delay links such as 622 Mbps over geosynchronous satellites, new tools and testing techniques are required. This paper describes the tools and techniques used to analyze and debug various TCP implementations over high-speed, long-delay links.
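The scale of the problem follows from the bandwidth-delay product, i.e., the amount of data that must be in flight to keep the pipe full. A back-of-the-envelope calculation for a 622 Mbps geosynchronous link, assuming a round-trip time of roughly 550 ms (an assumed typical value, not a figure from the paper), runs as follows.

```python
# Back-of-the-envelope bandwidth-delay product for a 622 Mbps GEO satellite
# link; the ~550 ms round-trip time is an assumed typical value.
rate_bps = 622e6          # link rate in bits per second
rtt_s = 0.550             # assumed geosynchronous round-trip time in seconds

bdp_bits = rate_bps * rtt_s
bdp_bytes = bdp_bits / 8

print(f"bandwidth-delay product: {bdp_bytes / 1e6:.1f} MB")
# Roughly 43 MB must be in flight, far beyond TCP's default 64 KB window,
# so window scaling and careful buffer tuning are required.
```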
A language comparison for scientific computing on MIMD architectures
NASA Technical Reports Server (NTRS)
Jones, Mark T.; Patrick, Merrell L.; Voigt, Robert G.
1989-01-01
Choleski's method for solving banded, symmetric, positive definite systems is implemented on a multiprocessor computer using three FORTRAN-based parallel programming languages: the Force, PISCES, and Concurrent FORTRAN. The capabilities of the languages for expressing parallelism and their user-friendliness are discussed, including readability of the code, debugging assistance offered, and expressiveness of the languages. The performance of the different implementations is compared. It is argued that PISCES, using the Force for medium-grained parallelism, is the appropriate choice for programming Choleski's method on the multiprocessor computer Flex/32.
Automating the generation of finite element dynamical cores with Firedrake
NASA Astrophysics Data System (ADS)
Ham, David; Mitchell, Lawrence; Homolya, Miklós; Luporini, Fabio; Gibson, Thomas; Kelly, Paul; Cotter, Colin; Lange, Michael; Kramer, Stephan; Shipton, Jemma; Yamazaki, Hiroe; Paganini, Alberto; Kärnä, Tuomas
2017-04-01
The development of a dynamical core is an increasingly complex software engineering undertaking. As the equations become more complete, the discretisations more sophisticated and the hardware acquires ever more fine-grained parallelism and deeper memory hierarchies, the problem of building, testing and modifying dynamical cores becomes increasingly complex. Here we present Firedrake, a code generation system for the finite element method with specialist features designed to support the creation of geoscientific models. Using Firedrake, the dynamical core developer writes the partial differential equations in weak form in a high-level mathematical notation. Appropriate function spaces are chosen and time stepping loops written at the same high level. When the program is run, Firedrake generates high-performance C code for the resulting numerics, which is executed in parallel. Models in Firedrake typically take a tiny fraction of the lines of code required by traditional hand-coding techniques. They support more sophisticated numerics than are easily achieved by hand, and the resulting code is frequently higher performance. Critically, debugging, modifying and extending a model written in Firedrake is vastly easier than by traditional methods due to the small, highly mathematical code base. Firedrake supports a wide range of key features for dynamical core creation: a vast range of discretisations, including both continuous and discontinuous spaces and mimetic (C-grid-like) elements which optimally represent force balances in geophysical flows; high-aspect-ratio layered meshes suitable for ocean and atmosphere domains; curved elements for high-accuracy representations of the sphere; support for non-finite-element operators, such as parametrisations; access to PETSc, a world-leading library of programmable linear and nonlinear solvers; and high-performance adjoint models generated automatically by symbolically reasoning about the forward model. This poster will present the key features of the Firedrake system, as well as those of Gusto, an atmospheric dynamical core, and Thetis, a coastal ocean model, both of which are written in Firedrake.
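To give a flavour of the high-level weak-form notation, the fragment below solves a Poisson problem in Firedrake. It is a generic textbook-style example written against the library's public interface under the assumption that Firedrake is installed; it is not code from Gusto or Thetis.

```python
# Minimal Poisson example in Firedrake's high-level weak-form notation;
# a generic illustration, not a dynamical-core excerpt.
from firedrake import (UnitSquareMesh, FunctionSpace, TrialFunction,
                       TestFunction, Function, DirichletBC, Constant,
                       dot, grad, dx, solve)

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)

u = TrialFunction(V)
v = TestFunction(V)
f = Constant(1.0)

a = dot(grad(u), grad(v)) * dx      # bilinear form of the weak problem
L = f * v * dx                      # right-hand side
bc = DirichletBC(V, 0.0, "on_boundary")

uh = Function(V, name="solution")
solve(a == L, uh, bcs=bc)           # Firedrake generates and runs the C kernels
print("max of solution:", uh.dat.data.max())
```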
Automatic Synthesis of UML Designs from Requirements in an Iterative Process
NASA Technical Reports Server (NTRS)
Schumann, Johann; Whittle, Jon; Clancy, Daniel (Technical Monitor)
2001-01-01
The Unified Modeling Language (UML) is gaining wide popularity for the design of object-oriented systems. UML combines various object-oriented graphical design notations under one common framework. A major factor for the broad acceptance of UML is that it can be conveniently used in a highly iterative, Use Case (or scenario-based) process (although the process is not a part of UML). Here, the (pre-) requirements for the software are specified rather informally as Use Cases and a set of scenarios. A scenario can be seen as an individual trace of a software artifact. Besides first sketches of a class diagram to illustrate the static system breakdown, scenarios are a favorite way of communication with the customer, because scenarios describe concrete interactions between entities and are thus easy to understand. Scenarios with a high level of detail are often expressed as sequence diagrams. Later in the design and implementation stage (elaboration and implementation phases), a design of the system's behavior is often developed as a set of statecharts. From there (and the full-fledged class diagram), actual code development is started. Current commercial UML tools support this phase by providing code generators for class diagrams and statecharts. In practice, it can be observed that the transition from requirements to design to code is a highly iterative process. In this talk, a set of algorithms is presented which perform reasonable synthesis and transformations between different UML notations (sequence diagrams, Object Constraint Language (OCL) constraints, statecharts). More specifically, we will discuss the following transformations: Statechart synthesis, introduction of hierarchy, consistency of modifications, and "design-debugging".
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) the fault detection rate commonly changes during the testing phase; 2) as a result of imperfect debugging, fault removal is associated with a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In the practical software development process, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might remain, and new faults might be introduced in the meantime, a situation referred to as the imperfect debugging phenomenon. In this study, a model is developed that incorporates the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation, using testing coverage to express the fault detection rate and fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data and five criteria. The results show that the model gives better fitting and predictive performance. PMID:28750091
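To illustrate the quantities involved, a generic NHPP sketch with a Goel-Okumoto-type mean value function and a fault-removal-efficiency parameter folded in; the parameter values are invented and this is not the specific model proposed in the paper:

```python
import numpy as np

# Generic NHPP software-reliability sketch (Goel-Okumoto-type mean value
# function) with a fault-removal-efficiency parameter included as an
# illustration -- NOT the specific model proposed in the paper above.
a = 100.0    # expected total faults (assumed)
b = 0.05     # fault detection rate per unit test time (assumed)
p = 0.9      # fraction of detected faults actually removed (removal efficiency)

def mean_faults(t):
    """Expected cumulative faults removed by time t."""
    return a * p * (1.0 - np.exp(-b * t))

def reliability(x, t):
    """P(no failure in (t, t+x]) under the NHPP assumption."""
    return np.exp(-(mean_faults(t + x) - mean_faults(t)))

for t in (10, 50, 100):
    print(f"t={t:4d}  removed={mean_faults(t):6.1f}  R(5|t)={reliability(5, t):.3f}")
```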
Automation of the 1.3-meter Robotically Controlled Telescope (RCT)
NASA Astrophysics Data System (ADS)
Gelderman, Richard; Treffers, Richard R.
2011-03-01
This poster describes the automation, for the Robotically Controlled Telescope (RCT) Consortium, of the 50-inch telescope at Kitt Peak National Observatory. Building upon the work of the previous contractor, the telescope, dome, and instrument were wired for totally autonomous (robotic) observations. The existing motors, encoders, limit switches, and cables were connected to an open industrial panel that allows easy interconnection, troubleshooting, and modification. A sixteen-axis Delta Tau Turbo PMAC controller is used to control all motors, encoders, flat-field lights, and many of the digital functions of the telescope. ADAM industrial I/O bricks are used for additional digital and analog I/O functions. Complex relay logic problems, such as the mirror cover opening sequence and the slit control, are managed using Allen Bradley Pico PLDs. Most of the low-level software is written in C using the GNU compiler. The basic functionality uses an ASCII protocol communicating over Berkeley sockets. Early versions of this software were developed at U.C. Berkeley, for what was to become the Katzman Automatic Imaging Telescope (KAIT) at Lick Observatory. ASCII communications are useful for control and testing, and are easy to debug by looking at the log files; C-shell scripts are written to form more complex orchestrations.
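The ASCII-over-sockets style described above is easy to sketch; the host, port, and command strings below are invented for illustration and are not the RCT or KAIT command set:

```python
import socket

# Hypothetical sketch of ASCII-protocol control over Berkeley sockets.
# The host, port, and command strings are invented; real systems define
# their own command sets.
HOST, PORT = "rct-control.example.edu", 5001

def send_command(cmd: str) -> str:
    """Send one ASCII command line and return the server's reply line."""
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        sock.sendall((cmd + "\n").encode("ascii"))
        reply = sock.makefile("r", encoding="ascii").readline()
    return reply.strip()

if __name__ == "__main__":
    print(send_command("DOME STATUS"))                    # easy to eyeball in log files
    print(send_command("TELESCOPE SLEW 10.684 41.269"))
```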
ScaMo: Realisation of an OO-functional DSL for cross platform mobile applications development
NASA Astrophysics Data System (ADS)
Macos, Dragan; Solymosi, Andreas
2013-10-01
The software market is changing dynamically: the Internet is going mobile, and software applications are shifting from desktop hardware onto mobile devices. The largest markets are mobile applications for iOS, Android, and Windows Phone, for which the typical programming languages are Objective-C, Java, and C#. Realizing native applications requires integrating the developed software into the environments of these mobile operating systems to enable access to the different hardware components of the devices: GPS module, display, GSM module, etc. This paper deals with the definition and possible implementation of an environment for automatic application generation for multiple mobile platforms. It is based on a DSL for mobile application development, comprising the programming language Scala and a DSL defined in Scala. As part of a multi-stage cross-compiling algorithm, this language is translated into the language of the target mobile platform. The advantage of our method lies in the expressiveness of the defined language and the transparent source code translation between the different languages, which benefits, for example, the debugging and development of the generated code.
Combining Static Model Checking with Dynamic Enforcement Using the Statecall Policy Language
NASA Astrophysics Data System (ADS)
Madhavapeddy, Anil
Internet protocols encapsulate a significant amount of state, making implementing the host software complex. In this paper, we define the Statecall Policy Language (SPL) which provides a usable middle ground between ad-hoc coding and formal reasoning. It enables programmers to embed automata in their code which can be statically model-checked using SPIN and dynamically enforced. The performance overheads are minimal, and the automata also provide higher-level debugging capabilities. We also describe some practical uses of SPL by describing the automata used in an SSH server written entirely in OCaml/SPL.
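A language-agnostic sketch of the dynamic-enforcement half of this idea, with a toy SSH-like automaton checked on every event (this is not SPL's generated OCaml code):

```python
# Generic sketch of dynamic automaton enforcement in the spirit of SPL's
# runtime checks (not SPL's actual generated code).  The state names and
# events model a toy SSH-like handshake.
ALLOWED = {
    ("start",          "version_exchange"): "negotiating",
    ("negotiating",    "key_exchange"):     "keyed",
    ("keyed",          "auth_request"):     "authenticating",
    ("authenticating", "auth_success"):     "session",
}

class PolicyViolation(Exception):
    pass

class Monitor:
    def __init__(self):
        self.state = "start"

    def tick(self, event: str) -> None:
        """Advance the automaton, or raise if the protocol code misbehaves."""
        key = (self.state, event)
        if key not in ALLOWED:
            raise PolicyViolation(f"event {event!r} not allowed in state {self.state!r}")
        self.state = ALLOWED[key]

m = Monitor()
for ev in ["version_exchange", "key_exchange", "auth_request", "auth_success"]:
    m.tick(ev)
print("final state:", m.state)
```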
NASA Technical Reports Server (NTRS)
Tennille, Geoffrey M.; Howser, Lona M.
1993-01-01
This document briefly describes the use of the CRAY supercomputers that are an integral part of the Supercomputing Network Subsystem of the Central Scientific Computing Complex at LaRC. Features of the CRAY supercomputers are covered, including: FORTRAN, C, PASCAL, architectures of the CRAY-2 and CRAY Y-MP, the CRAY UNICOS environment, batch job submittal, debugging, performance analysis, parallel processing, utilities unique to CRAY, and documentation. The document is intended for all CRAY users as a ready reference to frequently asked questions and to more detailed information contained in the vendor manuals. It is appropriate for both the novice and the experienced user.
The Modular Design and Production of an Intelligent Robot Based on a Closed-Loop Control Strategy.
Zhang, Libo; Zhu, Junjie; Ren, Hao; Liu, Dongdong; Meng, Dan; Wu, Yanjun; Luo, Tiejian
2017-10-14
Intelligent robots are part of a new generation of robots that are able to sense the surrounding environment, plan their own actions, and eventually reach their targets. In recent years, reliance upon robots in both daily life and industry has increased. The protocol proposed in this paper describes the design and production of a handling robot with an intelligent search algorithm and an autonomous identification function. First, the various working modules are mechanically assembled to complete the construction of the work platform and the installation of the robotic manipulator. Then, we design a closed-loop control system and a four-quadrant motor control strategy with the aid of debugging software, and set the steering gear identity (ID), baud rate, and other working parameters to ensure that the robot achieves the desired dynamic performance and low energy consumption. Next, we debug the sensors to achieve multi-sensor fusion and accurately acquire environmental information. Finally, we implement the relevant algorithm, which verifies that the robot performs its intended function for a given application. The advantage of this approach is its reliability and flexibility, as users can develop a variety of hardware construction programs and use the comprehensive debugger to implement an intelligent control strategy. This allows users to set personalized requirements based on their needs with high efficiency and robustness.
Permanent magnet synchronous motor servo system control based on μC/OS
NASA Astrophysics Data System (ADS)
Shi, Chongyang; Chen, Kele; Chen, Xinglong
2015-10-01
When an Opto-Electronic Tracking system operates in complex environments, every subsystem must operate efficiently and stably. As an important part of the Opto-Electronic Tracking system, the performance of the PMSM (Permanent Magnet Synchronous Motor) servo system greatly affects the tracking system's accuracy and speed [1,2]. This paper applies the embedded real-time operating system μC/OS to the control of the PMSM servo system, implements the SVPWM (Space Vector Pulse Width Modulation) algorithm in the PMSM servo system, and optimizes the stability of the PMSM servo system. In view of the characteristics of the Opto-Electronic Tracking system, this paper extends μC/OS with software redundancy processes, remote debugging, and upgrading. As a result, the Opto-Electronic Tracking system performs efficiently and stably.
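The SVPWM step mentioned above reduces to locating the sector of the reference voltage vector and computing dwell times; a textbook formulation is sketched below (symbols and values are generic, not taken from the paper):

```python
import math

def svpwm_dwell_times(v_ref, theta, v_dc, t_s):
    """Textbook SVPWM: sector and dwell times for a reference vector.

    v_ref : magnitude of the reference voltage vector [V]
    theta : its electrical angle [rad], 0 <= theta < 2*pi
    v_dc  : DC bus voltage [V]
    t_s   : switching period [s]
    """
    sector = int(theta // (math.pi / 3)) + 1          # sectors 1..6
    alpha = theta - (sector - 1) * math.pi / 3        # angle inside the sector
    m = math.sqrt(3) * v_ref / v_dc                   # modulation index
    t1 = t_s * m * math.sin(math.pi / 3 - alpha)      # first active vector
    t2 = t_s * m * math.sin(alpha)                    # second active vector
    t0 = t_s - t1 - t2                                # zero vectors
    return sector, t1, t2, t0

print(svpwm_dwell_times(v_ref=150.0, theta=math.radians(75), v_dc=311.0, t_s=1e-4))
```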
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Edwin S.
Under the CRADA, NREL will provide assistance to NRGsim to debug and convert the EnergyPlus Hysteresis Phase Change Material ('PCM') model to C++ for adoption into the main code package of the EnergyPlus simulation engine.
2008-03-01
…in all parts of the program except the predicates. B. PRELIMINARY EXPERIMENTATION: working with the hand-written program initially to get a feel… [remainder is table-of-contents residue: Problem Statement and Motivation; II. Related Work; …Isolation; III. Preliminary Work]
Fast Whole-Engine Stirling Analysis
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako
2006-01-01
This presentation discusses the simulation approach to whole-engine analysis for physical consistency, REV regenerator modeling, grid layering for smoothness and quality, conjugate heat transfer method adjustment, a high-speed low-cost parallel cluster, and debugging.
Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS
NASA Technical Reports Server (NTRS)
Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
We present views and analysis of the execution of several PVM (Parallel Virtual Machine) codes for Computational Fluid Dynamics on a network of Sparcstations, including: (1) NAS Parallel Benchmarks CG and MG; (2) a multi-partitioning algorithm for NAS Parallel Benchmark SP; and (3) an overset grid flowsolver. These views and analysis were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We will describe the architecture, operation and application of AIMS. The AIMS toolkit contains: (1) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (2) Monitor, a library of runtime trace-collection routines; (3) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (4) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran 77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses X11R5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (1) the impact of long message latencies; (2) the impact of multiprogramming overheads and associated load imbalance; (3) cache and virtual-memory effects; and (4) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs. Some of the features planned for the near future include: (1) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (2) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.
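The skew-compensation idea can be sketched as a constraint on message timestamps: shift the child's clock so that no message appears to arrive before it was sent. The offsets below are invented and this is a simplification, not AIMS' actual algorithm:

```python
# Simplified constant-skew compensation in the spirit of AIMS: shift a child
# workstation's clock so that no message appears to arrive before it was sent.
# The timestamps are invented; this is not AIMS' actual algorithm.

# (send_time_on_parent, recv_time_on_child) pairs for parent -> child messages,
# and (send_time_on_child, recv_time_on_parent) for the reverse direction.
parent_to_child = [(10.000, 9.990), (12.000, 11.995)]   # child clock runs early
child_to_parent = [(11.000, 11.030), (13.000, 13.028)]

# Offsets that make each direction causally consistent (recv >= send).
lower = max(send - recv for send, recv in parent_to_child)   # add at least this to child clock
upper = min(recv - send for send, recv in child_to_parent)   # cannot add more than this

offset = (lower + upper) / 2 if lower <= upper else lower
print(f"apply offset {offset * 1e3:.1f} ms to child timestamps")
```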
Who watches the watchers?: preventing fault in a fault tolerance library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stanavige, C. D.
The Scalable Checkpoint/Restart library (SCR) was developed and is used by researchers at Lawrence Livermore National Laboratory to provide a fast and efficient method of saving and recovering large applications during runtime on high-performance computing (HPC) systems. Though SCR protects other programs, up until June 2017, nothing was actively protecting SCR. The goal of this project was to automate the building and testing of this library on the varying HPC architectures on which it is used. Our methods centered around the use of a continuous integration tool called Bamboo that allowed for automation agents to be installed on the HPC systems themselves. These agents provided a way for us to establish a new and unique way to automate and customize the allocation of resources and running of tests with CMake’s unit testing framework, CTest, as well as integration testing scripts through an HPC package manager called Spack. These methods provided a parallel environment in which to test the more complex features of SCR. As a result, SCR is now automatically built and tested on several HPC architectures any time changes are made by developers to the library’s source code. The results of these tests are then communicated back to the developers for immediate feedback, allowing them to fix functionality of SCR that may have broken. Hours of developers’ time are now being saved from the tedious process of manually testing and debugging, which saves money and allows the SCR project team to focus their efforts towards development. Thus, HPC system users can use SCR in conjunction with their own applications to efficiently and effectively checkpoint and restart as needed with the assurance that SCR itself is functioning properly.
Mahmoudi, Morteza
2018-03-17
Despite considerable efforts in the field of nanomedicine that have been made by researchers, funding agencies, entrepreneurs, and the media, fewer nanoparticle (NP) technologies than expected have made it to clinical trials. The wide gap between the efforts and effective clinical translation is, at least in part, due to multiple overlooked factors in both in vitro and in vivo environments, a poor understanding of the nano-bio interface, and misinterpretation of the data collected in vitro, all of which reduce the accuracy of predictions regarding the NPs' fate and safety in humans. To minimize this bench-to-clinic gap, which may accelerate successful clinical translation of NPs, this opinion paper aims to introduce strategies for systematic debugging of nano-bio interfaces in the current literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gharibyan, N.
In order to fully characterize the NIF neutron spectrum, SAND-II-SNL software was requested and received from the Radiation Safety Information Computational Center. The software is designed to determine the neutron energy spectrum through analysis of experimental activation data. However, given that the source code was developed on a Sparcstation 10, it is not compatible with current versions of FORTRAN. Accounts have been established through the Lawrence Livermore National Laboratory’s High Performance Computing in order to access different compilers for FORTRAN (e.g. pgf77, pgf90). Additionally, several of the subroutines included in the SAND-II-SNL package have required debugging efforts to allow for proper compiling of the code.
ProjectQ: Compiling quantum programs for various backends
NASA Astrophysics Data System (ADS)
Haener, Thomas; Steiger, Damian S.; Troyer, Matthias
In order to control quantum computers beyond the current generation, a high level quantum programming language and optimizing compilers will be essential. Therefore, we have developed ProjectQ - an open source software framework to facilitate implementing and running quantum algorithms both in software and on actual quantum hardware. Here, we introduce the backends available in ProjectQ. This includes a high-performance simulator and emulator to test and debug quantum algorithms, tools for resource estimation, and interfaces to several small-scale quantum devices. We demonstrate the workings of the framework and show how easily it can be further extended to control upcoming quantum hardware.
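A minimal ProjectQ program compiled to the default simulator backend, following the framework's documented hello-world example (API names are taken from its public documentation and should be treated as assumptions about the current release):

```python
# Minimal ProjectQ program run on the default simulator backend, following the
# framework's documented "hello world"; API names are taken from its docs.
from projectq import MainEngine
from projectq.ops import H, Measure

eng = MainEngine()                 # default backend: the high-performance simulator
qubit = eng.allocate_qubit()

H | qubit                          # put the qubit in superposition
Measure | qubit                    # collapse it to 0 or 1

eng.flush()                        # push the circuit through the compiler to the backend
print("measured:", int(qubit))
```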
The TOTEM T1 read out card motherboard
NASA Astrophysics Data System (ADS)
Minutoli, S.; Lo Vetere, M.; Robutti, E.
2010-12-01
This article describes the Read Out Card (ROC) motherboard, which is the main component of the T1 forward telescope front-end electronic system. The ROC's main objectives are to acquire tracking data and trigger information from the detector. It performs data conversion from electrical to optical format and transfers the data streams to the next level of the system, and it implements Slow Control modules that receive, decode, and distribute the LHC machine low-jitter clock and fast commands. The ROC also provides a spy mezzanine connection, based on a programmable FPGA and USB 2.0, for laboratory and portable DAQ debugging systems.
Risk management technique for liquefied natural gas facilities
NASA Technical Reports Server (NTRS)
Fedor, O. H.; Parsons, W. N.
1975-01-01
Checklists have been compiled for planning, design, construction, startup and debugging, and operation of liquefied natural gas facilities. Lists include references to pertinent safety regulations. Methods described are applicable to handling of other hazardous materials.
NASA Astrophysics Data System (ADS)
Egeland, R.; Huang, C.-H.; Rossman, P.; Sundarrajan, P.; Wildish, T.
2012-12-01
PhEDEx is the data-transfer management solution written by CMS. It consists of agents running at each site, a website for presentation of information, and a web-based data-service for scripted access to information. The website allows users to monitor the progress of data-transfers, the status of site agents and links between sites, and the overall status and behaviour of everything about PhEDEx. It also allows users to make and approve requests for data-transfers and for deletion of data. It is the main point-of-entry for all users wishing to interact with PhEDEx. For several years, the website has consisted of a single perl program with about 10K SLOC. This program has limited capabilities for exploring the data, with only coarse filtering capabilities and no context-sensitive awareness. Graphical information is presented as static images, generated on the server, with no interactivity. It is also not well connected to the rest of the PhEDEx codebase, since much of it was written before the data-service was developed. All this makes it hard to maintain and extend. We are re-implementing the website to address these issues. The UI is being rewritten in Javascript, replacing most of the server-side code. We are using the YUI toolkit to provide advanced features and context-sensitive interaction, and will adopt a Javascript charting library for generating graphical representations client-side. This relieves the server of much of its load, and automatically improves server-side security. The Javascript components can be re-used in many ways, allowing custom pages to be developed for specific uses. In particular, standalone test-cases using small numbers of components make it easier to debug the Javascript than it is to debug a large server program. Information about PhEDEx is accessed through the PhEDEx data-service, since direct SQL is not available from the clients’ browser. This provides consistent semantics with other, externally written monitoring tools, which already use the data-service. It also reduces redundancy in the code, yielding a simpler, consolidated codebase. In this talk we describe our experience of re-factoring this monolithic server-side program into a lighter client-side framework. We describe some of the techniques that worked well for us, and some of the mistakes we made along the way. We present the current state of the project, and its future direction.
Wang, Zhong-Xu; Qin, Ru-Li; Li, Yu-Zhen; Zhang, Xue-Yan; Jia, Ning; Zhang, Qiu-Ling; Li, Gang; Zhao, Jie; Li, Huan-Huan; Jiang, Hai-Qiang
2011-08-01
To investigate work-related musculoskeletal disorders (WMSDs) among automobile assembly workers and to discuss the related risk factors and their relationships. A total of 1508 automobile assembly workers from a northern car manufacturing company were selected as study subjects. The hazard zone jobs checklist, the Nordic musculoskeletal symptom questionnaire (NMQ), and a pain questionnaire were used in an epidemiological cross-sectional and retrospective survey of the workers' general status, awkward ergonomic factors and related influencing factors, and musculoskeletal disorders. The body sites where WMSDs predominantly occurred among automobile assembly workers were the low back, wrist, neck, and shoulders; the workshop sections where WMSDs predominantly occurred were the engine compartment, interior trim, door cover, chassis, and debugging sections. The predominant site of WMSDs was the low back for engine compartment and chassis section workers, the low back and wrist for interior trim workers, the wrist for door cover workers, and the neck and low back for debugging workers. Neck musculoskeletal disorders tended to increase with body height, and smoking may increase the occurrence of musculoskeletal disorders. WMSDs appear to be a serious ergonomic problem among automobile assembly workers; the predominant sites of WMSDs differ by workshop section, with quite distinct characteristics, probably related to the awkward work postures or activities involved. Worker height and smoking habits may be important factors affecting the occurrence of musculoskeletal disorders.
Workstations take over conceptual design
NASA Technical Reports Server (NTRS)
Kidwell, George H.
1987-01-01
Workstations provide sufficient computing memory and speed for early evaluations of aircraft design alternatives to identify those worthy of further study. It is recommended that the programming of such machines permit integrated calculations of the configuration and performance analysis of new concepts, along with the capability of changing up to 100 variables at a time and swiftly viewing the results. Computations can be augmented through links to mainframes and supercomputers. Programming, particularly debugging operations, are enhanced by the capability of working with one program line at a time and having available on-screen error indices. Workstation networks permit on-line communication among users and with persons and computers outside the facility. Application of the capabilities is illustrated through a description of NASA-Ames design efforts for an oblique wing for a jet performed on a MicroVAX network.
Agent Architecture for Aviation Data Integration System
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak; Wang, Yao; Windrem, May; Patel, Hemil; Wei, Mei
2004-01-01
This paper describes the proposed agent-based architecture of the Aviation Data Integration System (ADIS). ADIS is a software system that provides integrated heterogeneous data to support aviation problem-solving activities. Examples of aviation problem-solving activities include engineering troubleshooting, incident and accident investigation, routine flight operations monitoring, safety assessment, maintenance procedure debugging, and training assessment. A wide variety of information is typically referenced when engaging in these activities. Some of this information includes flight recorder data, Automatic Terminal Information Service (ATIS) reports, Jeppesen charts, weather data, air traffic control information, safety reports, and runway visual range data. Such wide-ranging information cannot be found in any single unified information source. Therefore, this information must be actively collected, assembled, and presented in a manner that supports the user's problem-solving activities. This information integration task is non-trivial and presents a variety of technical challenges. ADIS has been developed to perform this task, and it permits integration of weather, RVR, radar data, and Jeppesen charts with flight data. ADIS has been implemented and used by several airlines' FOQA teams. The initial feedback from airlines is that such a system is very useful in FOQA analysis. Based on the feedback from the initial deployment, we are developing a new version of the system that would make further progress in achieving the following goals of our project.
Code of Federal Regulations, 2010 CFR
2010-07-01
... objective should an automatic sprinkler system be capable of meeting? 102-80.100 Section 102-80.100 Public... Automatic Sprinkler Systems § 102-80.100 What performance objective should an automatic sprinkler system be capable of meeting? The performance objective of the automatic sprinkler system is that it must be capable...
Code of Federal Regulations, 2013 CFR
2013-07-01
... objective should an automatic sprinkler system be capable of meeting? 102-80.100 Section 102-80.100 Public... Automatic Sprinkler Systems § 102-80.100 What performance objective should an automatic sprinkler system be capable of meeting? The performance objective of the automatic sprinkler system is that it must be capable...
Code of Federal Regulations, 2014 CFR
2014-01-01
... objective should an automatic sprinkler system be capable of meeting? 102-80.100 Section 102-80.100 Public... Automatic Sprinkler Systems § 102-80.100 What performance objective should an automatic sprinkler system be capable of meeting? The performance objective of the automatic sprinkler system is that it must be capable...
Code of Federal Regulations, 2012 CFR
2012-01-01
... objective should an automatic sprinkler system be capable of meeting? 102-80.100 Section 102-80.100 Public... Automatic Sprinkler Systems § 102-80.100 What performance objective should an automatic sprinkler system be capable of meeting? The performance objective of the automatic sprinkler system is that it must be capable...
Code of Federal Regulations, 2011 CFR
2011-01-01
... objective should an automatic sprinkler system be capable of meeting? 102-80.100 Section 102-80.100 Public... Automatic Sprinkler Systems § 102-80.100 What performance objective should an automatic sprinkler system be capable of meeting? The performance objective of the automatic sprinkler system is that it must be capable...
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.; Stueber, Thomas J.
2013-01-01
A dual flow-path inlet system is being tested to evaluate methodologies for a Turbine Based Combined Cycle (TBCC) propulsion system to perform a controlled inlet mode transition. Prior to experimental testing, simulation models are used to test, debug, and validate potential control algorithms. One simulation package being used for testing is the High Mach Transient Engine Cycle Code simulation, known as HiTECC. This paper discusses the closed-loop control system, which utilizes a shock location sensor to improve inlet performance and operability. Even though the shock location feedback has coarse resolution, the feedback allows for a reduction in steady-state error and, in some cases, better performance than with previously proposed pressure-ratio-based methods. This paper demonstrates the design and benefits of implementing a proportional-integral controller, an H-infinity-based controller, and a disturbance-observer-based controller.
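A generic sketch of a discrete proportional-integral loop driven by coarsely quantized feedback illustrates why integral action still shrinks the steady-state error to within the sensor resolution; the plant, gains, and quantization step below are invented, not the HiTECC inlet model:

```python
# Generic discrete PI loop with coarsely quantized feedback: integral action
# drives the measured (quantized) error toward zero, leaving a residual error
# bounded by the sensor resolution.  The first-order "plant", gains, and
# quantization step are invented values, not the HiTECC inlet model.
kp, ki, dt = 0.8, 2.0, 0.01
setpoint = 0.50                      # desired (normalized) shock location
quant = 0.05                         # coarse sensor resolution

def sensor(x):
    return round(x / quant) * quant  # quantized measurement

x, integ = 0.0, 0.0
for step in range(800):
    err = setpoint - sensor(x)
    integ += err * dt
    u = kp * err + ki * integ        # PI control law
    x += dt * (-x + u)               # toy first-order plant response

print(f"final shock location ~ {x:.3f} (target {setpoint})")
```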
Sketchcode: A Documentation Technique for Computer Hobbyists and Programmers
ERIC Educational Resources Information Center
Voros, Todd L.
1978-01-01
Sketchcode is a metaprogramming pseudo-language documentation technique intended to simplify the process of program writing and debugging for both high- and low-level users. Helpful hints and examples for the use of the technique are included. (CMV)
Scientific computation systems quality branch manual
NASA Technical Reports Server (NTRS)
1972-01-01
A manual is presented which is designed to familiarize the GE 635 user with the configuration and operation of the overall system. Work submission, programming standards, restrictions, testing and debugging, and related general information is provided for GE 635 programmer.
Mission and data operations IBM 360 user's guide
NASA Technical Reports Server (NTRS)
Balakirsky, J.
1973-01-01
The M and DO computer systems are introduced and supplemented. The hardware and software status is discussed, along with standard processors and user libraries. Data management techniques are presented, as well as machine independence, debugging facilities, and overlay considerations.
Second CLIPS Conference Proceedings, volume 1
NASA Technical Reports Server (NTRS)
Giarratano, Joseph (Editor); Culbert, Christopher J. (Editor)
1991-01-01
Topics covered at the 2nd CLIPS Conference held at the Johnson Space Center, September 23-25, 1991 are given. Topics include rule groupings, fault detection using expert systems, decision making using expert systems, knowledge representation, computer aided design and debugging expert systems.
NASA Technical Reports Server (NTRS)
Tennille, Geoffrey M.; Howser, Lona M.
1993-01-01
The use of the CONVEX computers that are an integral part of the Supercomputing Network Subsystems (SNS) of the Central Scientific Computing Complex of LaRC is briefly described. Features of the CONVEX computers that are significantly different from the CRAY supercomputers are covered, including: FORTRAN, C, architecture of the CONVEX computers, the CONVEX environment, batch job submittal, debugging, performance analysis, utilities unique to CONVEX, and documentation. This revision reflects the addition of the Applications Compiler and the X-based debugger, CXdb. The document is intended for all CONVEX users as a ready reference to frequently asked questions and to more detailed information contained within the vendor manuals. It is appropriate for both the novice and the experienced user.
Statistical modeling of software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1992-01-01
This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.
Experiences with Cray multi-tasking
NASA Technical Reports Server (NTRS)
Miya, E. N.
1985-01-01
The issues involved in modifying an existing code for multitasking are explored. They include Cray extensions to FORTRAN, an examination of the application code under study, the design of workable modifications, specific code modifications to the VAX and Cray versions, and performance and efficiency results. The finished product is a faster, fully synchronous, parallel version of the original program. A production program is partitioned by hand to run on two CPUs. Loop splitting multitasks three key subroutines. Simply dividing subroutine data and control structure down the middle of a subroutine is not safe; simple division produces results that are inconsistent with uniprocessor runs. The safest way to partition the code is to transfer one block of loops at a time and check the results of each on a test case. Other issues include debugging and performance. Task startup and maintenance (e.g., synchronization) are potentially expensive.
PlanWorks: A Debugging Environment for Constraint Based Planning Systems
NASA Technical Reports Server (NTRS)
Daley, Patrick; Frank, Jeremy; Iatauro, Michael; McGann, Conor; Taylor, Will
2005-01-01
Numerous planning and scheduling systems employ underlying constraint reasoning systems. Debugging such systems involves the search for errors in model rules, constraint reasoning algorithms, search heuristics, and the problem instance (initial state and goals). In order to effectively find such problems, users must see why each state or action is in a plan by tracking causal chains back to part of the initial problem instance. They must be able to visualize complex relationships among many different entities and distinguish between those entities easily. For example, a variable can be in the scope of several constraints, as well as part of a state or activity in a plan; the activity can arise as a consequence of another activity and a model rule. Finally, they must be able to track each logical inference made during planning. We have developed PlanWorks, a comprehensive system for debugging constraint-based planning and scheduling systems. PlanWorks assumes a strong transaction model of the entire planning process, including adding and removing parts of the constraint network, variable assignment, and constraint propagation. A planner logs all transactions to a relational database that is tailored to support queries from specialized views that display different forms of data (e.g. constraints, activities, resources, and causal links). PlanWorks was specifically developed for the Extensible Universal Remote Operations Planning Architecture (EUROPA2) developed at NASA, but the underlying principles behind PlanWorks make it useful for many constraint-based planning systems. The paper is organized as follows. We first describe some fundamentals of EUROPA2. We then describe PlanWorks' principal components. We then discuss each component in detail, and then describe inter-component navigation features. We close with a discussion of how PlanWorks is used to find model flaws.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-30
... Performance Requirements To Support Air Traffic Control (ATC) Service; Correction AGENCY: Federal Aviation... performance standards for Automatic Dependent Surveillance--Broadcast (ADS-B) Out avionics on aircraft... entitled, ``Automatic Dependent Surveillance--Broadcast (ADS-B) Out Performance Requirements To Support Air...
Microsupercomputers: Design and Implementation
1991-03-01
been ported to the DASH hardware. Hardware problems and software problems with DPV itself prevented its use as a debugging tool until recently. Both the… (MP3D) [21], an LU-decomposition program (LU), and a digital logic simulation program (PTHOR) [28]. The applications are typical of those…
Data Acquisition Unit for SATCOM Signal Analyzer
1980-01-01
[Fragment of a glossary and reference list: APSIM, simulator program; APDEBUG, debugging program; APTEST, diagnostic and test program; MATH Library; IOP-16, 16-bit I/O port. References include a SYNTEST Corporation data sheet for the Syntest SM-101 Frequency Synthesizer Module (not dated) and DATEL Systems Inc…]
In-situ FPGA debug driven by on-board microcontroller
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Zachary Kent
2009-01-01
Often we are faced with the situation that the behavior of a circuit changes in an unpredictable way when the chassis cover is attached or the system is not easily accessible. For instance, in a deployed environment, such as space, hardware can malfunction in unpredictable ways. What can a designer do to ascertain the cause of the problem? Register interrogations only go so far, and sometimes the problem being debugged is the register transactions themselves, or the problem lies in the FPGA programming. This work provides a solution to this; namely, the ability to drive a JTAG chain via an on-board microcontroller and use a simple clone of the Xilinx Chipscope core without a Xilinx JTAG cable or any external interfaces required. We have demonstrated the functionality of the prototype system using a Xilinx Spartan 3E FPGA and a Microchip PIC18j2550 microcontroller. This paper will discuss the implementation details as well as present case studies describing how the tools have aided satellite hardware development.
DI: An interactive debugging interpreter for applicative languages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skedzielewski, S.K.; Yates, R.K.; Oldehoeft, R.R.
1987-03-12
The DI interpreter is both a debugger and interpreter of SISAL programs. Its use as a program interpreter is only a small part of its role; it is designed to be a tool for studying compilation techniques for applicative languages. DI interprets dataflow graphs expressed in the IF1 and IF2 languages, and is heavily instrumented to report dynamic storage activity, reference counting, and copying and updating of structured data values. It also aids the SISAL language evaluation by providing an interim execution vehicle for SISAL programs. DI provides determinate, sequential interpretation of graph nodes for sequential and parallel operations in a canonical order. As a debugging aid, DI allows tracing, breakpointing, and interactive display of program data values. DI handles creation of SISAL and IF1 error values for each data type and propagates them according to a well-defined algebra. We have begun to implement IF1 optimizers and have measured the improvements with DI.
Dynamic visualization techniques for high consequence software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pollock, G.M.
1998-02-01
This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification. The prototype tool is described along with the requirements constraint language after a brief literature review is presented. Examples of how the tool can be used are also presented. In conclusion, the most significant advantage of this tool is to provide a first step in evaluating specification completeness, and to provide a more productive method for program comprehension and debugging. The expected payoff is increased software surety confidence, increased program comprehension, and reduced development and debugging time.
NASA Astrophysics Data System (ADS)
Polyakov, S. P.; Kryukov, A. P.; Demichev, A. P.
2018-01-01
We present a simple set of command line interface tools called Docker Container Manager (DCM) that allow users to create and manage Docker containers with preconfigured SSH access while keeping the users isolated from each other and restricting their access to the Docker features that could potentially disrupt the work of the server. Users can access the DCM server via SSH and are automatically redirected to the DCM interface tool. From there, they can create new containers; stop, restart, pause, unpause, and remove containers; and view the status of the existing containers. By default, the containers are also accessible via SSH using the same private key(s) but through different server ports. Additional publicly available ports can be mapped to the respective ports of a container, allowing for some network services to be run within it. The containers are started from read-only filesystem images. Some initial images must be provided by the DCM server administrators, and after containers are configured to meet one’s needs, the changes can be saved as new images. Users can see the available images and remove their own images. DCM server administrators are provided with commands to create and delete users. All commands were implemented as Python scripts. The tools make it possible to deploy and debug medium-sized distributed systems for simulation in different fields on one or several local computers.
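The operations DCM wraps correspond to standard Docker CLI commands; a sketch of that style using subprocess is shown below (image name, container name, and host port are invented; this is not the DCM code itself):

```python
import subprocess

# Sketch of the kind of Docker operations a DCM-style wrapper drives, using the
# stock docker CLI.  Image name, container name, and host port are invented
# for illustration; this is not the DCM implementation itself.

def docker(*args):
    return subprocess.run(["docker", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

# Start a container from a read-only image, mapping host port 2201 to its sshd.
cid = docker("run", "-d", "--name", "alice-env", "-p", "2201:22", "sshd-base:latest")
print("started container", cid[:12])

print(docker("ps", "--filter", "name=alice-env"))        # status of the container

# After the user configures the container, save the changes as a new image ...
docker("commit", "alice-env", "alice-env-configured:v1")

# ... and clean up.
docker("rm", "-f", "alice-env")
```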
Multi-source Geospatial Data Analysis with Google Earth Engine
NASA Astrophysics Data System (ADS)
Erickson, T.
2014-12-01
The Google Earth Engine platform is a cloud computing environment for data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog is a multi-petabyte archive of georeferenced datasets that include images from Earth observing satellite and airborne sensors (examples: USGS Landsat, NASA MODIS, USDA NAIP), weather and climate datasets, and digital elevation models. Earth Engine supports both a just-in-time computation model that enables real-time preview and debugging during algorithm development for open-ended data exploration, and a batch computation mode for applying algorithms over large spatial and temporal extents. The platform automatically handles many traditionally-onerous data management tasks, such as data format conversion, reprojection, and resampling, which facilitates writing algorithms that combine data from multiple sensors and/or models. Although the primary use of Earth Engine, to date, has been the analysis of large Earth observing satellite datasets, the computational platform is generally applicable to a wide variety of use cases that require large-scale geospatial data analyses. This presentation will focus on how Earth Engine facilitates the analysis of geospatial data streams that originate from multiple separate sources (and often communities) and how it enables collaboration during algorithm development and data exploration. The talk will highlight current projects/analyses that are enabled by this functionality. https://earthengine.google.org
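A small Earth Engine Python API sketch in the spirit described above, combining two catalog datasets in a single server-side analysis; the asset IDs and band names are assumptions based on the public catalog, and ee.Authenticate()/ee.Initialize() must already be configured:

```python
import ee

# Combine two Earth Engine catalog datasets server-side.  Asset IDs and band
# names are assumptions based on the public data catalog; authentication and
# initialization are assumed to be configured already.
ee.Initialize()

dem = ee.Image("USGS/SRTMGL1_003")                        # SRTM elevation
landsat = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
           .filterDate("2020-06-01", "2020-09-01")
           .median())

ndvi = landsat.normalizedDifference(["SR_B5", "SR_B4"]).rename("NDVI")

# Mean NDVI over terrain above 1500 m in a small region (server-side reduction).
region = ee.Geometry.Rectangle([-122.6, 37.0, -121.8, 37.8])
high_ground = ndvi.updateMask(dem.gt(1500))
stats = high_ground.reduceRegion(reducer=ee.Reducer.mean(),
                                 geometry=region, scale=500)
print(stats.getInfo())                                     # pulls the result client-side
```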
Synthetic biology projects in vitro.
Forster, Anthony C; Church, George M
2007-01-01
Advances in the in vitro synthesis and evolution of DNA, RNA, and polypeptides are accelerating the construction of biopolymers, pathways, and organisms with novel functions. Known functions are being integrated and debugged with the aim of synthesizing life-like systems. The goals are knowledge, tools, smart materials, and therapies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Barton
2014-06-30
Peta-scale computing environments pose significant challenges for both system and application developers, and addressing them required more than simply scaling up existing tera-scale solutions. Performance analysis tools play an important role in gaining this understanding, but previous monolithic tools with fixed feature sets have not sufficed. Instead, this project worked on the design, implementation, and evaluation of a general, flexible tool infrastructure supporting the construction of performance tools as “pipelines” of high-quality tool building blocks. These tool building blocks provide common performance tool functionality, and are designed for scalability, lightweight data acquisition and analysis, and interoperability. For this project, we built on Open|SpeedShop, a modular and extensible open source performance analysis tool set. The design and implementation of such a general and reusable infrastructure targeted for petascale systems required us to address several challenging research issues. All components needed to be designed for scale, a task made more difficult by the need to provide general modules. The infrastructure needed to support online data aggregation to cope with the large amounts of performance and debugging data. We needed to be able to map any combination of tool components to each target architecture. And we needed to design interoperable tool APIs and workflows that were concrete enough to support the required functionality, yet provide the necessary flexibility to address a wide range of tools. A major result of this project is the ability to use this scalable infrastructure to quickly create tools that match with a machine architecture and a performance problem that needs to be understood. Another benefit is the ability for application engineers to use the highly scalable, interoperable version of Open|SpeedShop, which is reassembled from the tool building blocks into a flexible, multi-user interface set of tools. This set of tools is targeted at Office of Science Leadership Class computer systems and selected Office of Science application codes. We describe the contributions made by the team at the University of Wisconsin. The project built on the efforts in Open|SpeedShop funded by DOE/NNSA and the DOE/NNSA Tri-Lab community, extended Open|SpeedShop to the Office of Science Leadership Class Computing Facilities, and addressed new challenges found on these cutting edge systems. Work done under this project at Wisconsin can be divided into two categories: new algorithms and techniques for debugging, and foundation infrastructure work on our Dyninst binary analysis and instrumentation toolkits and MRNet scalability infrastructure.
NASA Technical Reports Server (NTRS)
Church, Victor E.; Long, D.; Hartenstein, Ray; Perez-Davila, Alfredo
1992-01-01
This report is one of a series discussing configuration management (CM) topics for Space Station ground systems software development. It provides a description of the Software Support Environment (SSE)-developed Software Test Management (STM) capability, and discusses the possible use of this capability for management of developed software during testing performed on target platforms. This is intended to supplement the formal documentation of STM provided by the SSE Project. How STM can be used to integrate contractor CM and formal CM for software before delivery to operations is described. STM provides a level of control that is flexible enough to support integration and debugging, but sufficiently rigorous to ensure the integrity of the testing process.
Scaling up digital circuit computation with DNA strand displacement cascades.
Qian, Lulu; Winfree, Erik
2011-06-03
To construct sophisticated biochemical circuits from scratch, one needs to understand how simple the building blocks can be and how robustly such circuits can scale up. Using a simple DNA reaction mechanism based on a reversible strand displacement process, we experimentally demonstrated several digital logic circuits, culminating in a four-bit square-root circuit that comprises 130 DNA strands. These multilayer circuits include thresholding and catalysis within every logical operation to perform digital signal restoration, which enables fast and reliable function in large circuits with roughly constant switching time and linear signal propagation delays. The design naturally incorporates other crucial elements for large-scale circuitry, such as general debugging tools, parallel circuit preparation, and an abstraction hierarchy supported by an automated circuit compiler.
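The target function of the four-bit square-root circuit is simply the floor of the square root of a 4-bit input, delivered as a 2-bit output; the sketch below tabulates that function (the DNA implementation realizes it with dual-rail seesaw gates and thresholding, which are not modeled here):

```python
import math

# The logical function implemented by the four-bit square-root DNA circuit:
# floor(sqrt(n)) of a 4-bit input, a 2-bit output.  This is only the target
# truth table; the actual circuit realizes it with dual-rail seesaw gates and
# thresholding, which are not modeled here.
for n in range(16):
    out = math.isqrt(n)                      # 0..3, fits in two bits
    in_bits = format(n, "04b")
    out_bits = format(out, "02b")
    print(f"input {in_bits} -> sqrt {out_bits}")
```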
On the Information Content of Program Traces
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Hood, Robert; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Program traces are used for analysis of program performance, memory utilization, and communications as well as for program debugging. The trace contains records of execution events generated by monitoring units inserted into the program. The trace size limits the resolution of execution events and restricts the user's ability to analyze the program execution. We present a study of the information content of program traces and develop a coding scheme which reduces the trace size to the limit given by the trace entropy. We apply the coding to the traces of AIMS-instrumented programs executed on the IBM SP2 and the SGI Power Challenge and compare it with other coding methods. Our technique shows that the size of the trace can be reduced by more than a factor of 5.
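The entropy limit referred to above can be estimated from event-type frequencies; a zeroth-order sketch is shown below (the paper's coding scheme exploits more trace structure than this simple per-event model):

```python
from collections import Counter
from math import log2

# Zeroth-order estimate of the information content of an event trace: the
# Shannon entropy of the event-type distribution.  The paper's coding scheme
# exploits more structure than this simple per-event model.
trace = ["send", "recv", "compute", "send", "recv", "compute",
         "send", "recv", "barrier", "compute"]

counts = Counter(trace)
n = len(trace)
entropy = -sum((c / n) * log2(c / n) for c in counts.values())

print(f"entropy ~ {entropy:.2f} bits/event")
print(f"lower bound on coded size ~ {entropy * n / 8:.1f} bytes for {n} events")
```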
Automobile inspection system based on wireless communication
NASA Astrophysics Data System (ADS)
Miao, Changyun; Ye, Chunqing
2010-07-01
This paper investigates an automobile inspection system based on wireless communication and proposes an overall design scheme that uses GPS for speed detection and Bluetooth and GPRS for communication. The communication between PDA and PC was realized by means of GPRS and TCP/IP, and the hardware circuit and software for the detection terminal were designed around the JINOU-3264 Bluetooth module after analyzing Bluetooth and its communication protocol. According to the debugging test results, the system accomplished GPRS-based data communication and management as well as real-time detection of auto safety performance parameters in crash tests via PC, whereby the need for mobility and reliability was met and the efficiency and level of detection were improved.
A Bayesian modification to the Jelinski-Moranda software reliability growth model
NASA Technical Reports Server (NTRS)
Littlewood, B.; Sofer, A.
1983-01-01
The Jelinski-Moranda (JM) model for software reliability was examined. It is suggested that a major reason for the poor results given by this model is the poor performance of the maximum likelihood (ML) method of parameter estimation. A reparameterization and Bayesian analysis, involving a slight modelling change, are proposed. It is shown that this new Bayesian Jelinski-Moranda model (BJM) is mathematically quite tractable, and several metrics of interest to practitioners are obtained. The BJM and JM models are compared using several sets of real software failure data, and in all cases the BJM model gives superior reliability predictions. A change in the assumption underlying both models, to represent the debugging process more accurately, is discussed.
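For context, the JM model assumes the i-th inter-failure time is exponential with rate proportional to the number of remaining faults; the sketch below evaluates that log-likelihood on invented data (the Bayesian reparameterization itself is not reproduced here):

```python
import math

# Jelinski-Moranda model: the i-th inter-failure time (i = 1..n) is exponential
# with rate phi * (N - i + 1), where N is the initial number of faults and phi
# the per-fault hazard.  This sketch evaluates the log-likelihood whose ML
# maximization the paper criticizes; the failure data below are invented.
times = [7.0, 11.0, 8.0, 10.0, 15.0, 22.0, 20.0, 25.0, 40.0, 55.0]

def jm_log_likelihood(N, phi, t):
    if N < len(t) or phi <= 0:
        return float("-inf")
    ll = 0.0
    for i, ti in enumerate(t, start=1):
        rate = phi * (N - i + 1)
        ll += math.log(rate) - rate * ti
    return ll

# Crude grid evaluation (real ML fits solve the likelihood equations instead).
best = max(((N, phi) for N in range(10, 40) for phi in (0.001, 0.002, 0.005, 0.01)),
           key=lambda p: jm_log_likelihood(p[0], p[1], times))
print("grid ML estimate (N, phi):", best)
```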
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egeland, R.; Huang, C. H.; Rossman, P.
PhEDEx is the data-transfer management solution written by CMS. It consists of agents running at each site, a website for presentation of information, and a web-based data-service for scripted access to information. The website allows users to monitor the progress of data-transfers, the status of site agents and links between sites, and the overall status and behaviour of everything about PhEDEx. It also allows users to make and approve requests for data-transfers and for deletion of data. It is the main point-of-entry for all users wishing to interact with PhEDEx. For several years, the website has consisted of a single perl program with about 10K SLOC. This program has limited capabilities for exploring the data, with only coarse filtering capabilities and no context-sensitive awareness. Graphical information is presented as static images, generated on the server, with no interactivity. It is also not well connected to the rest of the PhEDEx codebase, since much of it was written before the data-service was developed. All this makes it hard to maintain and extend. We are re-implementing the website to address these issues. The UI is being rewritten in Javascript, replacing most of the server-side code. We are using the YUI toolkit to provide advanced features and context-sensitive interaction, and will adopt a Javascript charting library for generating graphical representations client-side. This relieves the server of much of its load, and automatically improves server-side security. The Javascript components can be re-used in many ways, allowing custom pages to be developed for specific uses. In particular, standalone test-cases using small numbers of components make it easier to debug the Javascript than it is to debug a large server program. Information about PhEDEx is accessed through the PhEDEx data-service, since direct SQL is not available from the clients’ browser. This provides consistent semantics with other, externally written monitoring tools, which already use the data-service. It also reduces redundancy in the code, yielding a simpler, consolidated codebase. In this talk we describe our experience of re-factoring this monolithic server-side program into a lighter client-side framework. We describe some of the techniques that worked well for us, and some of the mistakes we made along the way. We present the current state of the project, and its future direction.
An investigation of error characteristics and coding performance
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1992-01-01
The performance of forward error correcting coding schemes on errors anticipated for the Earth Observation System (EOS) Ku-band downlink is studied. The EOS transmits picture frame data to the ground via the Tracking and Data Relay Satellite System (TDRSS) to a ground-based receiver at White Sands. Due to unintentional RF interference from other systems operating in the Ku band, the noise at the receiver is non-Gaussian, which may result in non-random errors at the demodulator output. That is, the downlink channel cannot be modeled by a simple memoryless Gaussian-noise channel. From previous experience, it is believed that these errors are bursty. The research proceeded by developing a computer-based simulation, called Communication Link Error ANalysis (CLEAN), to model the downlink errors, forward error correcting schemes, and interleavers used with TDRSS. To date, the bulk of CLEAN has been written, documented, debugged, and verified. The procedures for utilizing CLEAN to investigate code performance were established and are discussed.
Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS
NASA Technical Reports Server (NTRS)
Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Tucker, Deanne (Technical Monitor)
1994-01-01
We present views and analysis of the execution of several PVM codes for Computational Fluid Dynamics on a network of Sparcstations, including (a) NAS Parallel benchmarks CG and MG (White, Alund and Sunderam 1993); (b) a multi-partitioning algorithm for NAS Parallel Benchmark SP (Wijngaart 1993); and (c) an overset grid flowsolver (Smith 1993). These views and analysis were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We will describe the architecture, operation and application of AIMS. The AIMS toolkit contains (a) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (b) Monitor, a library of run-time trace-collection routines; (c) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (d) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses X11R5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (a) the impact of long message latencies; (b) the impact of multiprogramming overheads and associated load imbalance; (c) cache and virtual-memory effects; and (d) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs. Some of the features planned for the near future include: (a) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (b) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.
Real-time automatic registration in optical surgical navigation
NASA Astrophysics Data System (ADS)
Lin, Qinyong; Yang, Rongqian; Cai, Ken; Si, Xuan; Chen, Xiuwen; Wu, Xiaoming
2016-05-01
An image-guided surgical navigation system requires the improvement of the patient-to-image registration time to enhance the convenience of the registration procedure. A critical step in achieving this aim is performing a fully automatic patient-to-image registration. This study reports on a design of custom fiducial markers and the performance of a real-time automatic patient-to-image registration method using these markers on the basis of an optical tracking system for rigid anatomy. The custom fiducial markers are designed to be automatically localized in both patient and image spaces. An automatic localization method is performed by registering a point cloud sampled from the three dimensional (3D) pedestal model surface of a fiducial marker to each pedestal of fiducial markers searched in image space. A head phantom is constructed to estimate the performance of the real-time automatic registration method under four fiducial configurations. The head phantom experimental results demonstrate that the real-time automatic registration method is more convenient, rapid, and accurate than the manual method. The time required for each registration is approximately 0.1 s. The automatic localization method precisely localizes the fiducial markers in image space. The averaged target registration error for the four configurations is approximately 0.7 mm. The automatic registration performance is independent of the positions relative to the tracking system and the movement of the patient during the operation.
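The patient-to-image step described here is, at its core, a rigid registration of corresponding fiducial points. A minimal sketch of such a registration using the standard Kabsch/SVD solution follows; the sample coordinates are invented, and the paper's own localization and registration pipeline may differ.

```python
# A minimal sketch of rigid point-set registration (Kabsch/SVD) of the kind used to
# align corresponding fiducial points in patient space and image space. The sample
# points are made up; real use would pair automatically localized marker centers.
import numpy as np

def rigid_register(P, Q):
    """Return rotation R and translation t such that R @ P_i + t ~= Q_i."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

P = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)   # patient space (mm)
true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
Q = P @ true_R.T + np.array([5.0, -2.0, 3.0])                          # image space (mm)
R, t = rigid_register(P, Q)
print(np.allclose(R @ P.T + t[:, None], Q.T))                          # True
```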
Epistemic Gameplay and Discovery in Computational Model-Based Inquiry Activities
ERIC Educational Resources Information Center
Wilkerson, Michelle Hoda; Shareff, Rebecca; Laina, Vasiliki; Gravel, Brian
2018-01-01
In computational modeling activities, learners are expected to discover the inner workings of scientific and mathematical systems: First elaborating their understandings of a given system through constructing a computer model, then "debugging" that knowledge by testing and refining the model. While such activities have been shown to…
Debugging Geographers: Teaching Programming to Non-Computer Scientists
ERIC Educational Resources Information Center
Muller, Catherine L.; Kidd, Chris
2014-01-01
The steep learning curve associated with computer programming can be a daunting prospect, particularly for those not well aligned with this way of logical thinking. However, programming is a skill that is becoming increasingly important. Geography graduates entering careers in atmospheric science are one example of a particularly diverse group who…
Sight Application Analysis Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G.
2014-09-17
The scale and complexity of scientific applications makes it very difficult to optimize, debug and extend them to support new capabilities. We have developed a tool that supports developers’ efforts to understand the logical flow of their applications and interactions between application components and hardware in a way that scales with application complexity and parallelism.
Knowledge Acquisition, Knowledge Programming, and Knowledge Refinement.
ERIC Educational Resources Information Center
Hayes-Roth, Frederick; And Others
This report describes the principal findings and recommendations of a 2-year Rand research project on machine-aided knowledge acquisition and discusses the transfer of expertise from humans to machines, as well as the functions of planning, debugging, knowledge refinement, and autonomous machine learning. The relative advantages of humans and…
ERIC Educational Resources Information Center
Gerhold, George; And Others
This paper describes an effective microprocessor-based CAI system which has been repeatedly tested by a large number of students and edited accordingly. Tasks not suitable for microprocessor based systems (authoring, testing, and debugging) were handled on larger multi-terminal systems. This approach requires that the CAI language used on the…
When "Less is More": The Optimal Design of Language Laboratory Hardware.
ERIC Educational Resources Information Center
Kershaw, Gary; Boyd, Gary
1980-01-01
The results of a process of designing, building, and "de-bugging" two replacement language laboratory hardware systems at Concordia University (Montreal) are described. Because commercially available systems did not meet specifications within budgetary constraints, the systems were built by the university technical department. The systems replaced…
Where Is Logo Taking Our Kids?
ERIC Educational Resources Information Center
Mace, Scott
1984-01-01
Discusses various aspects, features, and uses of the Logo programing language. A comparison (in chart format) of several Logo languages is also included, providing comments on the language as well as producer, current price, number of sprites and turtles, computer needed, and whether debugging aids and list operations are included. (JN)
ERIC Educational Resources Information Center
Gandolfi, Enrico
2018-01-01
This article investigates the phenomenon of open and participative development (e.g. beta testing, Kickstarter projects)--i.e. extended prototyping--in digital entertainment as a potential source of insights for instructional interventions. Despite the increasing popularity of this practice and the potential implications for educators and…
Describing the What and Why of Students' Difficulties in Boolean Logic
ERIC Educational Resources Information Center
Herman, Geoffrey L.; Loui, Michael C.; Kaczmarczyk, Lisa; Zilles, Craig
2012-01-01
The ability to reason with formal logic is a foundational skill for computer scientists and computer engineers that scaffolds the abilities to design, debug, and optimize. By interviewing students about their understanding of propositional logic and their ability to translate from English specifications to Boolean expressions, we characterized…
MARTe: A Multiplatform Real-Time Framework
NASA Astrophysics Data System (ADS)
Neto, André C.; Sartori, Filippo; Piccolo, Fabio; Vitelli, Riccardo; De Tommasi, Gianmaria; Zabeo, Luca; Barbalace, Antonio; Fernandes, Horacio; Valcarcel, Daniel F.; Batista, Antonio J. N.
2010-04-01
Development of real-time applications is usually associated with nonportable code targeted at specific real-time operating systems. The boundary between hardware drivers, system services, and user code is commonly not well defined, making the development in the target host significantly difficult. The Multithreaded Application Real-Time executor (MARTe) is a framework built over a multiplatform library that allows the execution of the same code in different operating systems. The framework provides the high-level interfaces with hardware, external configuration programs, and user interfaces, assuring at the same time hard real-time performances. End-users of the framework are required to define and implement algorithms inside a well-defined block of software, named Generic Application Module (GAM), that is executed by the real-time scheduler. Each GAM is reconfigurable with a set of predefined configuration meta-parameters and interchanges information using a set of data pipes that are provided as inputs and required as output. Using these connections, different GAMs can be chained either in series or parallel. GAMs can be developed and debugged in a non-real-time system and, only once the robustness of the code and correctness of the algorithm are verified, deployed to the real-time system. The software also supplies a large set of utilities that greatly ease the interaction and debugging of a running system. Among the most useful are a highly efficient real-time logger, HTTP introspection of real-time objects, and HTTP remote configuration. MARTe is currently being used to successfully drive the plasma vertical stabilization controller on the largest magnetic confinement fusion device in the world, with a control loop cycle of 50 μs and a jitter under 1 μs. In this particular project, MARTe is used with the Real-Time Application Interface (RTAI)/Linux operating system exploiting the new x86 multicore processor technology.
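The GAM concept can be illustrated with a toy chain of reconfigurable processing modules exchanging data through named pipes. This is a conceptual Python sketch only, not the MARTe C++ API; all class, pipe, and parameter names are assumptions.

```python
# Conceptual sketch (not the MARTe API) of the GAM idea: reconfigurable modules that
# read named input pipes, write named output pipes, and can be chained so the same
# algorithm code runs unchanged under a non-real-time or real-time scheduler.
class GAM:
    def __init__(self, name, inputs, outputs, params=None):
        self.name, self.inputs, self.outputs = name, inputs, outputs
        self.params = params or {}

    def execute(self, pipes):
        raise NotImplementedError

class Gain(GAM):
    def execute(self, pipes):
        pipes[self.outputs[0]] = self.params.get("k", 1.0) * pipes[self.inputs[0]]

class Offset(GAM):
    def execute(self, pipes):
        pipes[self.outputs[0]] = pipes[self.inputs[0]] + self.params.get("bias", 0.0)

def run_cycle(gams, pipes):
    for gam in gams:        # series chaining; a real scheduler could run groups in parallel
        gam.execute(pipes)
    return pipes

chain = [Gain("g", ["raw"], ["scaled"], {"k": 2.0}),
         Offset("o", ["scaled"], ["out"], {"bias": -1.0})]
print(run_cycle(chain, {"raw": 3.0}))    # {'raw': 3.0, 'scaled': 6.0, 'out': 5.0}
```

Because the chain is just ordinary code wired to named pipes, it can be exercised and debugged off-line before being handed to a hard real-time scheduler, which is the workflow the abstract describes.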
On-Die Sensors for Transient Events
NASA Astrophysics Data System (ADS)
Suchak, Mihir Vimal
Failures caused by transient electromagnetic events like Electrostatic Discharge (ESD) are a major concern for embedded systems. The component that fails is often an integrated circuit (IC). Determining which IC is affected in a multi-device system is a challenging task. Debugging such failures often requires sophisticated lab setups that involve intentionally disturbing and probing parts of the system that might not be easily accessible. Opening the system and adding probes may change its response to the transient event, which further compounds the problem. On-die transient event sensors were developed that require relatively little area on die, making them inexpensive; they consume negligible static current and do not interfere with normal operation of the IC. These circuits can be used to determine the pin involved and the level of the event when a transient event affects the IC, thus allowing the user to debug system-level transient events without modifying the system. The circuit and detection scheme design has been completed and verified in simulations with the Cadence Virtuoso environment. Simulations accounted for the impact of the ESD protection circuits, parasitics from the I/O pin, package and I/O ring, and included a model of an ESD gun to test the circuit's response to an ESD pulse as specified in IEC 61000-4-2. Multiple detection schemes are proposed. The final detection scheme consists of an event detector and a level sensor. The event detector latches on the presence of an event at a pad, to determine on which pin an event occurred. The level sensor generates a current proportional to the level of the event. This current is converted to a voltage and digitized at the A/D converter to be read by the microprocessor. The detection scheme shows good performance in simulations when checked against process variations and different kinds of events.
ASC-AD penetration modeling FY05 status report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kistler, Bruce L.; Ostien, Jakob T.; Chiesa, Michael L.
2006-04-01
Sandia currently lacks a high fidelity method for predicting loads on and subsequent structural response of earth penetrating weapons. This project seeks to test, debug, improve and validate methodologies for modeling earth penetration. Results of this project will allow us to optimize and certify designs for the B61-11, Robust Nuclear Earth Penetrator (RNEP), PEN-X and future nuclear and conventional penetrator systems. Since this is an ASC Advanced Deployment project, the primary goal of the work is to test, debug, verify and validate new Sierra (and Nevada) tools. Also, since this project is part of the V&V program within ASC, uncertainty quantification (UQ), optimization using DAKOTA [1] and sensitivity analysis are an integral part of the work. This project evaluates, verifies and validates new constitutive models, penetration methodologies and Sierra/Nevada codes. In FY05 the project focused mostly on PRESTO [2] using the Spherical Cavity Expansion (SCE) [3,4] and PRESTO Lagrangian analysis with a preformed hole (Pen-X) methodologies. Modeling penetration tests using PRESTO with a pilot hole was also attempted to evaluate constitutive models. Future years' work would include the Alegra/SHISM [5] and Alegra/EP (Earth Penetration) methodologies when they are ready for validation testing. Constitutive models such as Soil-and-Foam, the Sandia Geomodel [6], and the K&C Concrete model [7] were also tested and evaluated. This report is submitted to satisfy annual documentation requirements for the ASC Advanced Deployment program. This report summarizes FY05 work performed in the Penetration Mechanical Response (ASC-APPS) and Penetration Mechanics (ASC-V&V) projects. A single report is written to document the two projects because of the significant amount of technical overlap.
Planetary-Scale Geospatial Data Analysis Techniques in Google's Earth Engine Platform (Invited)
NASA Astrophysics Data System (ADS)
Hancher, M.
2013-12-01
Geoscientists have ever-increasing access to new tools for large-scale computing. With any tool, some tasks are easy and other tasks hard. It is natural to look to new computing platforms to increase the scale and efficiency of existing techniques, but there is a more exciting opportunity to discover and develop a new vocabulary of fundamental analysis idioms that are made easy and effective by these new tools. Google's Earth Engine platform is a cloud computing environment for earth data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog includes a nearly complete archive of scenes from Landsat 4, 5, 7, and 8 that have been processed by the USGS, as well as a wide variety of other remotely-sensed and ancillary data products. Earth Engine supports a just-in-time computation model that enables real-time preview during algorithm development and debugging as well as during experimental data analysis and open-ended data exploration. Data processing operations are performed in parallel across many computers in Google's datacenters. The platform automatically handles many traditionally-onerous data management tasks, such as data format conversion, reprojection, resampling, and associating image metadata with pixel data. Early applications of Earth Engine have included the development of Google's global cloud-free fifteen-meter base map and global multi-decadal time-lapse animations, as well as numerous large and small experimental analyses by scientists from a range of academic, government, and non-governmental institutions, working in a wide variety of application areas including forestry, agriculture, urban mapping, and species habitat modeling. Patterns in the successes and failures of these early efforts have begun to emerge, sketching the outlines of a new set of simple and effective approaches to geospatial data analysis.
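The just-in-time, server-side computation model can be sketched with the Earth Engine Python client. The collection ID, band, dates, and location below are illustrative assumptions, and authentication/project setup is environment specific.

```python
# A minimal sketch of lazy, server-side computation with the Earth Engine Python
# client. The collection ID, band name, dates, and point are assumptions.
import ee

ee.Initialize()  # assumes credentials and project are already configured

point = ee.Geometry.Point(-122.26, 37.87)
collection = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")   # assumed collection ID
              .filterBounds(point)
              .filterDate("2020-01-01", "2020-12-31"))

# Nothing is computed yet: the composite below is a lazy description of work that
# the platform executes in parallel only when results are requested.
composite = collection.median()
stats = composite.select("SR_B4").reduceRegion(
    reducer=ee.Reducer.mean(), geometry=point.buffer(1000), scale=30)
print(stats.getInfo())   # triggers the server-side computation
```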
Rubus: A compiler for seamless and extensible parallelism.
Adnan, Muhammad; Aslam, Faisal; Nawaz, Zubair; Sarwar, Syed Mansoor
2017-01-01
Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called the Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, the traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of code in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus compared to Java on a basic GPU having only 96 cores, and for a matrix multiplication benchmark an average execution speedup of 84 times has been achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program.
NASA Astrophysics Data System (ADS)
Houchin, J. S.
2014-09-01
A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm is automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.
The transition to increased automaticity during finger sequence learning in adult males who stutter.
Smits-Bandstra, Sarah; De Nil, Luc; Rochon, Elizabeth
2006-01-01
The present study compared the automaticity levels of persons who stutter (PWS) and persons who do not stutter (PNS) on a practiced finger sequencing task under dual task conditions. Automaticity was defined as the amount of attention required for task performance. Twelve PWS and 12 control subjects practiced finger tapping sequences under single and then dual task conditions. Control subjects performed the sequencing task significantly faster and less variably under single versus dual task conditions while PWS' performance was consistently slow and variable (comparable to the dual task performance of control subjects) under both conditions. Control subjects were significantly more accurate on a colour recognition distracter task than PWS under dual task conditions. These results suggested that control subjects transitioned to quick, accurate and increasingly automatic performance on the sequencing task after practice, while PWS did not. Because most stuttering treatment programs for adults include practice and automatization of new motor speech skills, findings of this finger sequencing study and future studies of speech sequence learning may have important implications for how to maximize stuttering treatment effectiveness. As a result of this activity, the participant will be able to: (1) Define automaticity and explain the importance of dual task paradigms to investigate automaticity; (2) Relate the proposed relationship between motor learning and automaticity as stated by the authors; (3) Summarize the reviewed literature concerning the performance of PWS on dual tasks; and (4) Explain why the ability to transition to automaticity during motor learning may have important clinical implications for stuttering treatment effectiveness.
33 CFR 164.03 - Incorporation by reference.
Code of Federal Regulations, 2014 CFR
2014-07-01
... radiocommunication equipment and systems—Automatic identification systems (AIS)—part 2: Class A shipborne equipment of the universal automatic identification system (AIS)—Operational and performance requirements..., Recommendation on Performance Standards for a Universal Shipborne Automatic Identification System (AIS), adopted...
33 CFR 164.03 - Incorporation by reference.
Code of Federal Regulations, 2012 CFR
2012-07-01
... radiocommunication equipment and systems—Automatic identification systems (AIS)—part 2: Class A shipborne equipment of the universal automatic identification system (AIS)—Operational and performance requirements..., Recommendation on Performance Standards for a Universal Shipborne Automatic Identification System (AIS), adopted...
33 CFR 164.03 - Incorporation by reference.
Code of Federal Regulations, 2013 CFR
2013-07-01
... radiocommunication equipment and systems—Automatic identification systems (AIS)—part 2: Class A shipborne equipment of the universal automatic identification system (AIS)—Operational and performance requirements..., Recommendation on Performance Standards for a Universal Shipborne Automatic Identification System (AIS), adopted...
PATHA: Performance Analysis Tool for HPC Applications
Yoo, Wucherl; Koo, Michelle; Cao, Yi; ...
2016-02-18
Large science projects rely on complex workflows to analyze terabytes or petabytes of data. These jobs often run over thousands of CPU cores and simultaneously perform data accesses, data movements, and computation. It is difficult to identify bottlenecks or to debug the performance issues in these large workflows. In order to address these challenges, we have developed the Performance Analysis Tool for HPC Applications (PATHA) using state-of-the-art open source big data processing tools. Our framework can ingest system logs to extract key performance measures, and apply sophisticated statistical tools and data mining methods on the performance data. Furthermore, it utilizes an efficient data processing engine to allow users to interactively analyze large amounts of different types of logs and measurements. To illustrate the functionality of PATHA, we conduct a case study on the workflows from an astronomy project known as the Palomar Transient Factory (PTF). This study processed 1.6 TB of system logs collected on the NERSC supercomputer Edison. Using PATHA, we were able to identify performance bottlenecks, which reside in three tasks of the PTF workflow and depend on the density of celestial objects.
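The kind of analysis described can be approximated in a few lines of pandas: ingest task logs, aggregate runtimes per task, and check a suspected dependency. This is a generic sketch with assumed column names and toy numbers, not PATHA's actual implementation.

```python
# Not PATHA itself: a small pandas sketch of the same idea, ingesting task logs and
# ranking workflow tasks by where the time goes. Column names and values are assumptions.
import pandas as pd

logs = pd.DataFrame({
    "task":    ["source_extract", "source_extract", "photometry", "photometry", "db_load"],
    "runtime": [120.0, 480.0, 60.0, 65.0, 30.0],        # seconds
    "objects": [1000, 9000, 1000, 1100, 1000],          # density of celestial objects
})

summary = (logs.groupby("task")["runtime"]
               .agg(["count", "mean", "max", "sum"])
               .sort_values("sum", ascending=False))
print(summary)                                           # candidate bottlenecks first

# Quick check of the dependency on object density for one task:
se = logs[logs["task"] == "source_extract"]
print(se["runtime"].corr(se["objects"]))
```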
A Computer Approach to Mathematics Curriculum Developments Debugging
ERIC Educational Resources Information Center
Martínez-Zarzuelo, Angélica; Roanes-Lozano, Eugenio; Fernández-Díaz, José
2016-01-01
Sequencing contents is of great importance for instructional design within the teaching planning processes. We believe that it is key to meaningful learning. Therefore, we propose to formally establish a partial order relation among the contents. We have chosen the binary relation "to be a prerequisite" for that purpose. We have…
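The prerequisite relation described here is a partial order, and any admissible sequencing of contents is a topological order of it. A minimal sketch follows, using Python's standard-library graphlib and invented toy contents; it is only an illustration of the idea, not the authors' method.

```python
# Sketch: treat "is a prerequisite of" as a partial order and derive an admissible
# sequencing of contents. The toy contents and dependencies are assumptions.
from graphlib import TopologicalSorter   # Python 3.9+

# prerequisites[x] = set of contents that must come before x
prerequisites = {
    "natural numbers": set(),
    "fractions":       {"natural numbers"},
    "decimal numbers": {"fractions"},
    "percentages":     {"fractions", "decimal numbers"},
}

order = list(TopologicalSorter(prerequisites).static_order())
print(order)   # e.g. ['natural numbers', 'fractions', 'decimal numbers', 'percentages']
```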
NASA Technical Reports Server (NTRS)
Svalbonas, V.; Ogilvie, P.
1975-01-01
A special data debugging package called SAT-1P created for the STARS-2P computer program is described. The program was written exclusively in FORTRAN 4 for the IBM 370-165 computer, and then converted to the UNIVAC 1108.
01010000 01001100 01000001 01011001: Play Elements in Computer Programming
ERIC Educational Resources Information Center
Breslin, Samantha
2013-01-01
This article explores the role of play in human interaction with computers in the context of computer programming. The author considers many facets of programming including the literary practice of coding, the abstract design of programs, and more mundane activities such as testing, debugging, and hacking. She discusses how these incorporate the…
ERIC Educational Resources Information Center
Deek, Fadi; Espinosa, Idania
2005-01-01
Traditionally, novice programmers have had difficulties in three distinct areas: breaking down a given problem, designing a workable solution, and debugging the resulting program. Many programming environments, software applications, and teaching tools have been developed to address the difficulties faced by these novices. Along with advancements…
An Introduction to Fortran Programming: An IPI Approach.
ERIC Educational Resources Information Center
Fisher, D. D.; And Others
This text is designed to give individually paced instruction in Fortran Programing. The text contains fifteen units. Unit titles include: Flowcharts, Input and Output, Loops, and Debugging. Also included is an extensive set of appendices. These were designed to contain a great deal of practical information necessary to the course. These appendices…
Debugging and Analysis of Large-Scale Parallel Programs
1989-09-01
Inquiry-Based Learning Case Studies for Computing and Computing Forensic Students
ERIC Educational Resources Information Center
Campbell, Jackie
2012-01-01
Purpose: The purpose of this paper is to describe and discuss the use of specifically-developed, inquiry-based learning materials for Computing and Forensic Computing students. Small applications have been developed which require investigation in order to de-bug code, analyse data issues and discover "illegal" behaviour. The applications…
Predicting the Readability of FORTRAN Programs.
ERIC Educational Resources Information Center
Domangue, J. C.; Karbowski, S. A.
This paper reports the results of two studies of the readability of FORTRAN programs, i.e., the ease with which a programmer can read and analyze programs already written, particularly in the processes of maintenance and debugging. In the first study, low-level characteristics of 202 FORTRAN programs stored on the general-use UNIX systems at Bell…
Don't Gamble with Y2K Compliance.
ERIC Educational Resources Information Center
Sturgeon, Julie
1999-01-01
Examines one school district's (Clark County, Nevada) response to the Y2K computer problem and provides tips on time-saving Y2K preventive measures other school districts can use. Explains how the district de-bugged its computer system including mainframe considerations and client-server applications. Highlights office equipment and teaching…
A Support System for Error Correction Questions in Programming Education
ERIC Educational Resources Information Center
Hachisu, Yoshinari; Yoshida, Atsushi
2014-01-01
For supporting the education of debugging skills, we propose a system for generating error correction questions of programs and checking the correctness. The system generates HTML files for answering questions and CGI programs for checking answers. Learners read and answer questions on Web browsers. For management of error injection, we have…
Development and operations of the astrophysics data system
NASA Technical Reports Server (NTRS)
Murray, Stephen S.; Oliversen, Ronald (Technical Monitor)
2005-01-01
Abstract service - Continued regular updates of abstracts in the databases, both at SAO and at all mirror sites. - Modified loading scripts to accommodate changes in data format (PhyS) - Discussed data deliveries with providers to clear up problems with format or other errors (EGU) - Continued inclusion of large numbers of historical literature volumes and physics conference volumes xeroxed from the library. - Performed systematic fixes on some data sets in the database to account for changes in article numbering (AGU journals) - Implemented linking of ADS bibliographic records with multimedia files - Debugged and fixed obscure connection problems with the ADS Korean mirror site which were preventing successful updates of the data holdings. - Wrote procedure to parse citation data and characterize an ADS record based on its citation ratios within each database.
Engineering scalable biological systems
2010-01-01
Synthetic biology is focused on engineering biological organisms to study natural systems and to provide new solutions for pressing medical, industrial and environmental problems. At the core of engineered organisms are synthetic biological circuits that execute the tasks of sensing inputs, processing logic and performing output functions. In the last decade, significant progress has been made in developing basic designs for a wide range of biological circuits in bacteria, yeast and mammalian systems. However, significant challenges in the construction, probing, modulation and debugging of synthetic biological systems must be addressed in order to achieve scalable higher-complexity biological circuits. Furthermore, concomitant efforts to evaluate the safety and biocontainment of engineered organisms and address public and regulatory concerns will be necessary to ensure that technological advances are translated into real-world solutions. PMID:21468204
Small passenger car transmission test: Mercury Lynx ATX transmission
NASA Technical Reports Server (NTRS)
Bujold, M. P.
1981-01-01
The testing of a Mercury Lynx automatic transmission is reported. The transmission was tested in accordance with a passenger car automatic transmission test code (SAE J651b), which required drive performance, coast performance, and no-load test conditions. Under these conditions, the transmission attained maximum efficiencies in the mid-ninety percent range for both the drive performance and coast performance tests. The torque, speed, and efficiency curves are presented, which provide the complete performance characteristics for the Mercury Lynx automatic transmission.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 2 2013-01-01 2013-01-01 false Automatic Dependent Surveillance-Broadcast... Dependent Surveillance-Broadcast (ADS-B) Out equipment performance requirements. (a) Definitions. For the..., Extended Squitter Automatic Dependent Surveillance-Broadcast (ADS-B) and Traffic Information Service...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 2 2012-01-01 2012-01-01 false Automatic Dependent Surveillance-Broadcast... Dependent Surveillance-Broadcast (ADS-B) Out equipment performance requirements. (a) Definitions. For the..., Extended Squitter Automatic Dependent Surveillance-Broadcast (ADS-B) and Traffic Information Service...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 2 2014-01-01 2014-01-01 false Automatic Dependent Surveillance-Broadcast... Dependent Surveillance-Broadcast (ADS-B) Out equipment performance requirements. (a) Definitions. For the..., Extended Squitter Automatic Dependent Surveillance-Broadcast (ADS-B) and Traffic Information Service...
Motor automaticity in Parkinson’s disease
Wu, Tao; Hallett, Mark; Chan, Piu
2017-01-01
Bradykinesia is the most important feature contributing to motor difficulties in Parkinson's disease (PD). However, the pathophysiology underlying bradykinesia is not fully understood. One important aspect is that PD patients have difficulty in performing learned motor skills automatically, but this problem has been generally overlooked. Here we review motor deficits in PD associated with impaired automaticity, such as reduced arm swing, decreased stride length, freezing of gait, micrographia and reduced facial expression. Recent neuroimaging studies have revealed some neural mechanisms underlying impaired motor automaticity in PD, including less efficient neural coding of movement, failure to shift automated motor skills to the sensorimotor striatum, instability of the automatic mode within the striatum, and use of attentional control and/or compensatory efforts to execute movements usually performed automatically in healthy people. PD patients lose previously acquired automatic skills due to their impaired sensorimotor striatum, and have difficulty in acquiring new automatic skills or restoring lost motor skills. More investigations on the pathophysiology of motor automaticity, the effect of L-dopa or surgical treatments on automaticity, and the potential role of using measures of automaticity in early diagnosis of PD would be valuable. PMID:26102020
A Unified Algebraic and Logic-Based Framework Towards Safe Routing Implementations
2015-08-13
Software-defined Networks (SDN). We developed a declarative platform for implementing SDN protocols using declarative networking... and debugging several SDN applications. Example-based SDN synthesis: the recent emergence of software-defined networks offers an opportunity to design...
Data-Driven Hint Generation from Peer Debugging Solutions
ERIC Educational Resources Information Center
Liu, Zhongxiu
2015-01-01
Data-driven methods have been a successful approach to generating hints for programming problems. However, the majority of previous studies are focused on procedural hints that aim at moving students to the next closest state to the solution. In this paper, I propose a data-driven method to generate remedy hints for BOTS, a game that teaches…
ERIC Educational Resources Information Center
Taylor, Karen A.
This review of the literature and annotated bibliography summarizes the available research relating to teaching programming to high school students. It is noted that, while the process of programming a computer could be broken down into five steps--problem definition, algorithm design, code writing, debugging, and documentation--current research…
Engineering High Assurance Distributed Cyber Physical Systems
2015-01-15
decisions: number of interacting agents and co-dependent decisions made in real-time without causing interference. To engineer a high assurance DART... environment specification, architecture definition, domain-specific languages, design patterns, code generation, analysis, test-generation, and simulation... include synchronization between the models and source code, debugging at the model level, expression of the design intent, and quality of service
Young Children and Turtle Graphics Programming: Generating and Debugging Simple Turtle Programs.
ERIC Educational Resources Information Center
Cuneo, Diane O.
Turtle graphics is a popular vehicle for introducing children to computer programming. Children combine simple graphic commands to get a display screen cursor (called a turtle) to draw designs on the screen. The purpose of this study was to examine young children's abilities to function in a simple computer programming environment. Four- and…
Visual Debugging of Object-Oriented Systems With the Unified Modeling Language
2004-03-01
Teaching Conversations with the XDS Sigma 7. System Users Manual.
ERIC Educational Resources Information Center
Mosmann, Charles; Bork, Alfred M.
This manual is intended as a reference handbook for use in writing instructional dialogs on the Sigma-7 computer. The concern is to give concise information which one would need to write and debug dialogs on this system. Metasymbol, the macro-assembly program for the Sigma-7, is described. Definitions of terminology, legal forms, descriptions of…
DOT National Transportation Integrated Search
1997-02-01
This report contains a summary of the work performed during the development of a minimum performance standard for lavatory trash receptacle automatic fire extinguishers. The developmental work was performed under the direction of the International Ha...
Comparison of two paradigms for distributed shared memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levelt, W.G.; Kaashoek, M.F.; Bal, H.E.
1990-08-01
The paper compares two paradigms for Distributed Shared Memory on loosely coupled computing systems: the shared data-object model as used in Orca, a programming language specially designed for loosely coupled computing systems, and the Shared Virtual Memory model. For both paradigms the authors have implemented two systems, one using only point-to-point messages, the other using broadcasting as well. They briefly describe these two paradigms and their implementations. Then they compare their performance on four applications: the traveling salesman problem, alpha-beta search, matrix multiplication and the all pairs shortest paths problem. The measurements show that both paradigms can be used efficiently for programming large-grain parallel applications. Significant speedups were obtained on all applications. The unstructured Shared Virtual Memory paradigm achieves the best absolute performance, although this is largely due to the preliminary nature of the Orca compiler used. The structured shared data-object model achieves the highest speedups and is much easier to program and to debug.
Programming Tools: Status, Evaluation, and Comparison
NASA Technical Reports Server (NTRS)
Cheng, Doreen Y.; Cooper, D. M. (Technical Monitor)
1994-01-01
In this tutorial I will first describe the characteristics of scientific applications and their developers, and describe the computing environment in a typical high-performance computing center. I will define the user requirements for tools that support application portability and present the difficulties in satisfying them. These form the basis of the evaluation and comparison of the tools. I will then describe the tools available in the market and the tools available in the public domain. Specifically, I will describe the tools for converting sequential programs, tools for developing portable new programs, tools for debugging and performance tuning, tools for partitioning and mapping, and tools for managing networks of resources. I will introduce the main goals and approaches of the tools, and show the main features of a few tools in each category. Meanwhile, I will compare tool usability for real-world application development and compare their different technological approaches. Finally, I will indicate the future directions of the tools in each category.
What Physicists Should Know About High Performance Computing - Circa 2002
NASA Astrophysics Data System (ADS)
Frederick, Donald
2002-08-01
High Performance Computing (HPC) is a dynamic, cross-disciplinary field that traditionally has involved applied mathematicians, computer scientists, and others primarily from the various disciplines that have been major users of HPC resources - physics, chemistry, engineering, with increasing use by those in the life sciences. There is a technological dynamic that is powered by economic as well as by technical innovations and developments. This talk will discuss practical ideas to be considered when developing numerical applications for research purposes. Even with the rapid pace of development in the field, the author believes that these concepts will not become obsolete for a while, and will be of use to scientists who either are considering, or who have already started down the HPC path. These principles will be applied in particular to current parallel HPC systems, but there will also be references of value to desktop users. The talk will cover such topics as: computing hardware basics, single-cpu optimization, compilers, timing, numerical libraries, debugging and profiling tools and the emergence of Computational Grids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilke, Jeremiah J; Kenny, Joseph P.
2015-02-01
Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e. to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
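The discrete event core can be illustrated with a toy simulator driven by a priority queue of timestamped events. This sketch is not the SST API; the names and example traffic are assumptions.

```python
# A toy discrete event core (not SST): events are (time, action) pairs pulled from a
# priority queue, which is what decouples simulated time from wall-clock time.
import heapq

class Simulator:
    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0                        # tie-breaker for equal timestamps

    def schedule(self, delay, action, *args):
        heapq.heappush(self._queue, (self.now + delay, self._seq, action, args))
        self._seq += 1

    def run(self, until=float("inf")):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, action, args = heapq.heappop(self._queue)
            action(*args)

sim = Simulator()

def send(src, dst, latency):
    print(f"t={sim.now:6.2f}  {src} -> {dst} (latency {latency})")
    sim.schedule(latency, recv, dst)

def recv(node):
    print(f"t={sim.now:6.2f}  {node} received")

sim.schedule(0.0, send, "rank0", "rank1", 2.5)
sim.run()
```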
Rapid Diagnostics of Onboard Sequences
NASA Technical Reports Server (NTRS)
Starbird, Thomas W.; Morris, John R.; Shams, Khawaja S.; Maimone, Mark W.
2012-01-01
Keeping track of sequences onboard a spacecraft is challenging. When reviewing Event Verification Records (EVRs) of sequence executions on the Mars Exploration Rover (MER), operators often found themselves wondering which version of a named sequence the EVR corresponded to. The lack of this information drastically impacts the operators' diagnostic capabilities as well as their situational awareness with respect to the commands the spacecraft has executed, since the EVRs do not provide argument values or explanatory comments. Having this information immediately available can be instrumental in diagnosing critical events and can significantly enhance the overall safety of the spacecraft. This software provides an auditing capability that can eliminate that uncertainty while diagnosing critical conditions. Furthermore, the RESTful interface provides a simple way for sequencing tools to automatically retrieve binary compiled sequence SCMFs (Space Command Message Files) on demand. It also enables developers to change the underlying database, while maintaining the same interface to the existing applications. The logging capabilities are also beneficial to operators when they are trying to recall how they solved a similar problem many days ago: this software enables automatic recovery of SCMF and RML (Robot Markup Language) sequence files directly from the command EVRs, eliminating the need for people to find and validate the corresponding sequences. To address the lack of auditing capability for sequences onboard a spacecraft during earlier missions, extensive logging support was added on the Mars Science Laboratory (MSL) sequencing server. This server is responsible for generating all MSL binary SCMFs from RML input sequences. The sequencing server logs every SCMF it generates into a MySQL database, as well as the high-level RML file and dictionary name inputs used to create the SCMF. The SCMF is then indexed by a hash value that is automatically included in all command EVRs by the onboard flight software. In addition, both the binary SCMF result and the RML input file can be retrieved simply by specifying the hash to a RESTful web interface. This interface enables command line tools as well as large sophisticated programs to download the SCMF and RMLs on demand from the database, enabling a vast array of tools to be built on top of it. One such command line tool can retrieve and display RML files, or annotate a list of EVRs by interleaving them with the original sequence commands. This software has been integrated with the MSL sequencing pipeline, where it will serve sequences useful in diagnostics, debugging, and situational awareness throughout the mission.
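The auditing scheme, indexing each generated binary sequence product by a hash that flight software can echo in EVRs and then looking the file back up by that hash, can be sketched as follows. The hash choice, table layout, and file names are assumptions for illustration, not the MSL implementation.

```python
# Illustrative sketch of the auditing idea only: index each generated binary sequence
# product by a hash, then retrieve the product and its source name from that hash.
import hashlib
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE scmf (hash TEXT PRIMARY KEY, rml_name TEXT, scmf_blob BLOB)")

def log_scmf(rml_name, scmf_bytes):
    h = hashlib.sha1(scmf_bytes).hexdigest()[:16]     # short hash echoed in EVRs (assumed)
    db.execute("INSERT OR REPLACE INTO scmf VALUES (?, ?, ?)",
               (h, rml_name, scmf_bytes))
    return h

def lookup(evr_hash):
    row = db.execute("SELECT rml_name, scmf_blob FROM scmf WHERE hash = ?",
                     (evr_hash,)).fetchone()
    return row    # (rml_name, scmf_bytes) or None

h = log_scmf("drive_sol_102.rml", b"\x01\x02\x03binary-scmf-content")
print(lookup(h)[0])    # drive_sol_102.rml
```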
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-05
... Automatic Power Reserve (APR), an Automatic Takeoff Thrust Control System (ATTCS), for Go-Around Performance... airplane will have novel or unusual design features associated with utilizing go-around performance credit...: Federal eRegulations Portal: Go to http://www.regulations.gov/ and follow the online instructions for...
NASA Astrophysics Data System (ADS)
Ferreira da Silva, R.; Filgueira, R.; Deelman, E.; Atkinson, M.
2016-12-01
We present Asterism, an open source data-intensive framework, which combines the Pegasus and dispel4py workflow systems. Asterism aims to simplify the effort required to develop data-intensive applications that run across multiple heterogeneous resources, without users having to: re-formulate their methods according to different enactment systems; manage the data distribution across systems; parallelize their methods; co-place and schedule their methods with computing resources; and store and transfer large/small volumes of data. Asterism's key element is to leverage the strengths of each workflow system: dispel4py allows developing scientific applications locally and then automatically parallelize and scale them on a wide range of HPC infrastructures with no changes to the application's code; Pegasus orchestrates the distributed execution of applications while providing portability, automated data management, recovery, debugging, and monitoring, without users needing to worry about the particulars of the target execution systems. Asterism leverages the level of abstractions provided by each workflow system to describe hybrid workflows where no information about the underlying infrastructure is required beforehand. The feasibility of Asterism has been evaluated using the seismic ambient noise cross-correlation application, a common data-intensive analysis pattern used by many seismologists. The application preprocesses (Phase1) and cross-correlates (Phase2) traces from several seismic stations. The Asterism workflow is implemented as a Pegasus workflow composed of two tasks (Phase1 and Phase2), where each phase represents a dispel4py workflow. Pegasus tasks describe the in/output data at a logical level, the data dependency between tasks, and the e-Infrastructures and the execution engine to run each dispel4py workflow. We have instantiated the workflow using data from 1000 stations from the IRIS services, and run it across two heterogeneous resources described as Docker containers: MPI (Container2) and Storm (Container3) clusters (Figure 1). Each dispel4py workflow is mapped to a particular execution engine, and data transfers between resources are automatically handled by Pegasus. Asterism is freely available online at http://github.com/dispel4py/pegasus_dispel4py.
Capacity enhancement of indigenous expansion engine based helium liquefier
NASA Astrophysics Data System (ADS)
Doohan, R. S.; Kush, P. K.; Maheshwari, G.
2017-02-01
Development of technology and understanding for large capacity helium refrigeration and liquefaction at helium temperature is indispensable for upcoming projects. A new version of the helium liquefier was designed and built to provide approximately 35 liters of liquid helium per hour. The refrigeration capacity of this reciprocating expansion engine machine has been increased over its predecessor version through continuous improvement and deficiency debugging. The helium liquefier has been built using components from local industries, including cryogenic aluminum plate-fin heat exchangers. Two compressors with nearly identical capacity have been deployed for the operation of the system. Together they consume about 110 kW of electric power. The system employs liquid nitrogen precooling to enhance the liquid helium yield. This paper describes details of the cryogenic expander design improvements, reconfiguration of the heat exchangers, performance simulation, and their experimental validation.
NASA Technical Reports Server (NTRS)
Neuman, Frank; Erzberger, Heinz; Schueller, Michael S.
1994-01-01
The analysis program (AN) is specifically designed to produce graphic and tabular information to aid in the design and checkout of the Center TRACON Automation System (CTAS). To best reveal CTAS operation and possible problems, data are plotted in many different ways, in both detailed and summary form. AN has been designed to analyze both radar surveillance data and output data from CTAS. AN has been extensively used to debug and refine CTAS. It is also being used in the field to monitor and assess CTAS performance. AN is continuously refined to keep up with changing needs. The present version of AN grew out of analysis of Denver Center data. However, the AN software has been written to be adaptable to any other Center or TRACON facility. Presently, one can select Denver Stapleton, Denver International, Dallas/Fort Worth International Airport, and Dallas Love Field.
Apple (LCSI) LOGO vs. MIT (Terrapin/Krell) LOGO: A Comparison for Grades 2 thru 4.
ERIC Educational Resources Information Center
Wappler, Reinhold D.
Two LOGO dialects are compared for appropriateness for use with second, third, and fourth grade students on the basis of 18 months of experience teaching the LOGO programming language at this level in a four-machine laboratory setting. Benefits and drawbacks of the dialects are evaluated in the areas of editing, screen modes, debugging,…
Information Processing Approaches to Cognitive Development
1989-08-04
Simulation Testing of Embedded Flight Software
NASA Technical Reports Server (NTRS)
Shahabuddin, Mohammad; Reinholtz, William
2004-01-01
Virtual Real Time (VRT) is a computer program for testing embedded flight software by computational simulation in a workstation, in contradistinction to testing it in its target central processing unit (CPU). The disadvantages of testing in the target CPU include the need for an expensive test bed, the necessity for testers and programmers to take turns using the test bed, and the lack of software tools for debugging in a real-time environment. By virtue of its architecture, most of the flight software of the type in question is amenable to development and testing on workstations, for which there is an abundance of commercially available debugging and analysis software tools. Unfortunately, the timing of a workstation differs from that of a target CPU in a test bed. VRT, in conjunction with closed-loop simulation software, provides a capability for executing embedded flight software on a workstation in a close-to-real-time environment. A scale factor is used to convert between execution time in VRT on a workstation and execution on a target CPU. VRT includes high-resolution operating- system timers that enable the synchronization of flight software with simulation software and ground software, all running on different workstations.
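The scale-factor conversion between workstation time and target-CPU time can be sketched with a simple virtual clock. The 4x factor below is an assumed example, not a VRT parameter.

```python
# Sketch of the scale-factor idea: map elapsed workstation time to estimated
# target-CPU time so simulation, ground software, and flight software stay roughly
# synchronized. The scale value is an assumption for illustration.
import time

class VirtualClock:
    def __init__(self, scale=4.0):           # assumed: workstation ~4x faster than target CPU
        self.scale = scale
        self._start = time.perf_counter()

    def target_elapsed(self):
        """Elapsed time as the target CPU would experience it."""
        return (time.perf_counter() - self._start) * self.scale

    def sleep_target(self, target_seconds):
        """Block for the workstation-time equivalent of target_seconds of target time."""
        time.sleep(target_seconds / self.scale)

clock = VirtualClock()
clock.sleep_target(0.2)                       # 0.2 s of "target" time costs 0.05 s of wall time
print(round(clock.target_elapsed(), 1))       # roughly 0.2
```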
Adaptive pseudolinear compensators of dynamic characteristics of automatic control systems
NASA Astrophysics Data System (ADS)
Skorospeshkin, M. V.; Sukhodoev, M. S.; Timoshenko, E. A.; Lenskiy, F. V.
2016-04-01
Adaptive pseudolinear gain and phase compensators of dynamic characteristics of automatic control systems are suggested. The automatic control system performance with adaptive compensators has been explored. The efficiency of pseudolinear adaptive compensators in the automatic control systems with time-varying parameters has been demonstrated.
Shock Position Control for Mode Transition in a Turbine Based Combined Cycle Engine Inlet Model
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.; Stueber, Thomas J.
2013-01-01
A dual flow-path inlet for a turbine based combined cycle (TBCC) propulsion system is to be tested in order to evaluate methodologies for performing a controlled inlet mode transition. Prior to experimental testing, simulation models are used to test, debug, and validate potential control algorithms that are designed to maintain shock position during inlet disturbances. One simulation package being used for testing is the High Mach Transient Engine Cycle Code simulation, known as HiTECC. This paper discusses the development of a mode transition schedule for the HiTECC simulation that is analogous to the development of inlet performance maps. Inlet performance maps, derived through experimental means, describe the performance and operability of the inlet as the splitter closes, switching power production from the turbine engine to the Dual Mode Scram Jet. With knowledge of the operability and performance tradeoffs, a closed loop system can be designed to optimize the performance of the inlet. This paper demonstrates the design of the closed loop control system and the benefit of implementing a Proportional-Integral controller, an H-Infinity based controller, and a disturbance observer based controller, all of which avoid inlet unstart during a mode transition with a simulated disturbance that would otherwise lead to inlet unstart without closed loop control.
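The closed-loop idea can be illustrated with a discrete proportional-integral loop holding a setpoint through a step disturbance. The gains, sample time, and first-order stand-in for the inlet dynamics below are assumptions for illustration, not HiTECC values.

```python
# A minimal discrete PI loop of the kind described for holding shock position during
# a disturbance. Plant model, gains, and timing are assumptions, not HiTECC values.
def simulate_pi(kp=0.8, ki=2.0, dt=0.01, steps=300, setpoint=1.0, tau=0.05):
    shock_pos, integral = 0.0, 0.0
    history = []
    for k in range(steps):
        disturbance = 0.3 if k > 150 else 0.0           # simulated inlet disturbance
        error = setpoint - shock_pos
        integral += error * dt
        u = kp * error + ki * integral                  # PI control effort
        # toy first-order response of shock position to actuation and disturbance
        shock_pos += dt * (-shock_pos + u - disturbance) / tau
        history.append(shock_pos)
    return history

traj = simulate_pi()
print(round(traj[149], 3), round(traj[-1], 3))           # before and after the disturbance
```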
The roots of stereotype threat: when automatic associations disrupt girls' math performance.
Galdi, Silvia; Cadinu, Mara; Tomasetto, Carlo
2014-01-01
Although stereotype awareness is a prerequisite for stereotype threat effects (Steele & Aronson, 1995), research showed girls' deficit under stereotype threat before the emergence of math-gender stereotype awareness, and in the absence of stereotype endorsement. In a study including 240 six-year-old children, this paradox was addressed by testing whether automatic associations trigger stereotype threat in young girls. Whereas no indicators were found that children endorsed the math-gender stereotype, girls, but not boys, showed automatic associations consistent with the stereotype. Moreover, results showed that girls' automatic associations varied as a function of a manipulation regarding the stereotype content. Importantly, girls' math performance decreased in a stereotype-consistent, relative to a stereotype-inconsistent, condition and automatic associations mediated the relation between stereotype threat and performance. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.
ERIC Educational Resources Information Center
Connelly, E. M.; And Others
A new approach to deriving human performance measures and criteria for use in automatically evaluating trainee performance is described. Ultimately, this approach will allow automatic measurement of pilot performance in a flight simulator or from recorded in-flight data. An efficient method of representing performance data within a computer is…
Dynamic Analyses of Result Quality in Energy-Aware Approximate Programs
NASA Astrophysics Data System (ADS)
Ringenburg, Michael F.
Energy efficiency is a key concern in the design of modern computer systems. One promising approach to energy-efficient computation, approximate computing, trades off output precision for energy efficiency. However, this tradeoff can have unexpected effects on computation quality. This thesis presents dynamic analysis tools to study, debug, and monitor the quality and energy efficiency of approximate computations. We propose three styles of tools: prototyping tools that allow developers to experiment with approximation in their applications, online tools that instrument code to determine the key sources of error, and online tools that monitor the quality of deployed applications in real time. Our prototyping tool is based on an extension to the functional language OCaml. We add approximation constructs to the language, an approximation simulator to the runtime, and profiling and auto-tuning tools for studying and experimenting with energy-quality tradeoffs. We also present two online debugging tools and three online monitoring tools. The first debugging tool identifies correlations between output quality and the total number of executions of, and errors in, individual approximate operations. The second tracks the number of approximate operations that flow into a particular value. Our online monitoring tools comprise three low-cost approaches to dynamic quality monitoring. They are designed to monitor quality in deployed applications without spending more energy than is saved by approximation. Online monitors can be used to perform real time adjustments to energy usage in order to meet specific quality goals. We present prototype implementations of all of these tools and describe their usage with several applications. Our prototyping, profiling, and auto-tuning tools allow us to experiment with approximation strategies and identify new strategies, our online tools succeed in providing new insights into the effects of approximation on output quality, and our monitors succeed in controlling output quality while still maintaining significant energy efficiency gains.
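One low-cost monitoring idea consistent with the abstract above is to occasionally re-run a precise version of a computation on a small sample of inputs and track the observed error of the approximate version. The Python sketch below illustrates that idea only; it is not the thesis's tooling, and all names are invented.

```python
import random

# Illustrative sketch of sampling-based online quality monitoring: a small
# fraction of calls is re-executed precisely and the observed error tracked.
class QualityMonitor:
    def __init__(self, precise_fn, sample_rate=0.01):
        self.precise_fn = precise_fn
        self.sample_rate = sample_rate   # fraction of calls that are checked
        self.samples = 0
        self.total_abs_err = 0.0

    def observe(self, inputs, approx_output):
        if random.random() < self.sample_rate:
            exact = self.precise_fn(*inputs)
            self.samples += 1
            self.total_abs_err += abs(exact - approx_output)

    @property
    def mean_abs_error(self):
        return self.total_abs_err / self.samples if self.samples else 0.0

# usage: a precise inverse square root versus a crude first-order approximation
def precise_inv_sqrt(x):
    return x ** -0.5

def approx_inv_sqrt(x):
    return 1.0 - 0.5 * (x - 1.0)      # cheap expansion around x = 1

monitor = QualityMonitor(precise_inv_sqrt, sample_rate=0.05)
for _ in range(10_000):
    x = random.uniform(0.5, 1.5)
    monitor.observe((x,), approx_inv_sqrt(x))
print(f"sampled calls: {monitor.samples}, mean |error|: {monitor.mean_abs_error:.4f}")
```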
A Change Impact Analysis to Characterize Evolving Program Behaviors
NASA Technical Reports Server (NTRS)
Rungta, Neha Shyam; Person, Suzette; Branchaud, Joshua
2012-01-01
Change impact analysis techniques estimate the potential effects of changes made to software. Directed Incremental Symbolic Execution (DiSE) is an intraprocedural technique for characterizing the impact of software changes on program behaviors. DiSE first estimates the impact of the changes on the source code using program slicing techniques, and then uses the impact sets to guide symbolic execution to generate path conditions that characterize impacted program behaviors. DiSE, however, cannot reason about the flow of impact between methods and will fail to generate path conditions for certain impacted program behaviors. In this work, we present iDiSE, an extension to DiSE that performs an interprocedural analysis. iDiSE combines static and dynamic calling context information to efficiently generate impacted program behaviors across calling contexts. Information about impacted program behaviors is useful for testing, verification, and debugging of evolving programs. We present a case-study of our implementation of the iDiSE algorithm to demonstrate its efficiency at computing impacted program behaviors. Traditional notions of coverage are insufficient for characterizing the testing efforts used to validate evolving program behaviors because they do not take into account the impact of changes to the code. In this work we present novel definitions of impacted coverage metrics that are useful for evaluating the testing effort required to test evolving programs. We then describe how the notions of impacted coverage can be used to configure techniques such as DiSE and iDiSE in order to support regression testing related tasks. We also discuss how DiSE and iDiSE can be configured for debugging, i.e., finding the root cause of errors introduced by changes made to the code. In our empirical evaluation we demonstrate that the configurations of DiSE and iDiSE can be used to support various software maintenance tasks.
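As a much-simplified illustration of propagating change impact across calling contexts (this is not the DiSE/iDiSE algorithm), the Python sketch below flags every transitive caller of a changed method as a candidate for re-analysis, given a static call graph.

```python
from collections import deque

# Illustrative sketch only: transitive-caller impact propagation over a static
# call graph. call_graph maps caller -> set of callees.
def impacted_callers(call_graph, changed):
    callers = {}                                  # reverse edges: callee -> callers
    for caller, callees in call_graph.items():
        for callee in callees:
            callers.setdefault(callee, set()).add(caller)

    impacted, work = set(changed), deque(changed)
    while work:
        m = work.popleft()
        for caller in callers.get(m, ()):
            if caller not in impacted:
                impacted.add(caller)
                work.append(caller)
    return impacted

graph = {"main": {"parse", "solve"}, "solve": {"step"}, "parse": set(), "step": set()}
print(sorted(impacted_callers(graph, {"step"})))   # ['main', 'solve', 'step']
```

Real impact analyses such as DiSE refine this coarse reachability with slicing and symbolic execution so that only genuinely affected path conditions are regenerated.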
A consideration of the operation of automatic production machines.
Hoshi, Toshiro; Sugimoto, Noboru
2015-01-01
At worksites, various automatic production machines are in use to release workers from muscular labor or labor in detrimental environments. On the other hand, a large number of industrial accidents have been caused by automatic production machines. In view of this, this paper considers the operation of automatic production machines from the viewpoint of accident prevention, and points out two types of machine operation: operation for which quick performance is required (operation that must not be delayed), and operation for which composed performance is required (operation that must not be performed in haste). These operations are distinguished by operation buttons of suitable colors and shapes. This paper shows that these characteristics are evaluated as "asymmetric on the time-axis". Here, in order for workers to accept the risk of automatic production machines, it is generally a precondition that harm be sufficiently small or easy to avoid. In this connection, this paper shows the possibility of facilitating the acceptance of the risk of automatic production machines by enhancing this asymmetry on the time-axis.
The LHEA PDP 11/70 graphics processing facility users guide
NASA Technical Reports Server (NTRS)
1978-01-01
A compilation of all necessary and useful information needed to allow the inexperienced user to program on the PDP 11/70. Information regarding the use of editing and file manipulation utilities as well as operational procedures is included. The inexperienced user is taken through the process of creating, editing, compiling, task building and debugging his/her FORTRAN program. Also, documentation on additional software is included.
Hierarchical Task Network Prototyping In Unity3d
2016-06-01
visually debug. Here we present a solution for prototyping HTNs by extending an existing commercial implementation of Behavior Trees within the Unity3D game ... HTN, dynamic behaviors, behavior prototyping, agent-based simulation, entity-level combat model, game engine, discrete event simulation, virtual ... commercial implementation of Behavior Trees within the Unity3D game engine prior to building the HTN in COMBATXXI. Existing HTNs were emulated within
Embracing Statistical Challenges in the Information Technology Age
2006-01-01
computation and feature selection. Moreover, two research projects on network tomography and arctic cloud detection are used throughout the paper to bring ... prominent Network Tomography problem, origin-destination (OD) traffic estimation. It demonstrates well how the two modes of data collection interact ... software debugging (Biblit et al, 2005 [2]), and network tomography for computer network management. Computer system problems exist long before the IT
Techniques for the Detection of Faulty Packet Header Modifications
2014-03-12
layer approaches to check if packets are being altered by middleboxes and were primarily developed as network neutrality analysis tools. Switzerland works ... local and metropolitan area networks – specific requirements part 11: Wireless LAN medium access control (MAC) and physical layer (PHY) specifications ... policy or position of the Department of Defense or the U.S. Government. Understanding, measuring, and debugging IP networks, particularly across
Design and Evaluation for the End-to-End Detection of TCP/IP Header Manipulation
2014-06-01
Cooperative Association for Internet Data Analysis CDN content delivery network CE congestion encountered CRC cyclic redundancy check CWR congestion ... Switzerland was primarily developed as a network neutrality analysis tool to detect when internet service providers (ISPs) were interfering with ... Understanding, measuring, and debugging IP networks, particularly across administrative domains, is challenging. One aspect of the
The QCDSP project —a status report
NASA Astrophysics Data System (ADS)
Chen, Dong; Chen, Ping; Christ, Norman; Edwards, Robert; Fleming, George; Gara, Alan; Hansen, Sten; Jung, Chulwoo; Kaehler, Adrian; Kasow, Steven; Kennedy, Anthony; Kilcup, Gregory; Luo, Yubin; Malureanu, Catalin; Mawhinney, Robert; Parsons, John; Sexton, James; Sui, Chengzhong; Vranas, Pavlos
1998-01-01
We give a brief overview of the massively parallel computer project underway for nearly the past four years, centered at Columbia University. A 6 Gflops and a 50 Gflops machine are presently being debugged for installation at OSU and SCRI respectively, while a 0.4 Tflops machine is under construction for Columbia and a 0.6 Tflops machine is planned for the new RIKEN Brookhaven Research Center.
Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz
2017-01-01
To calculate root canal volume and surface area in microCT images, an image segmentation by selecting threshold values is required, which can be determined by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is done entirely by computer algorithms. The aim was to compare visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between visual and automatic segmentation methods regarding root canal volume measurements (p=0.93) and root canal surface area (p=0.79). Although visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface area, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
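The Python sketch below shows an automatic, Otsu-style threshold on a voxel volume followed by voxel-counting estimates of volume and surface area. It is only an illustration of the approach; the scanner vendor's "Automatic Threshold Tool" is not public here, and the assumption that canal voxels form the darker class is ours.

```python
import numpy as np

# Illustrative sketch: histogram-based (Otsu-style) automatic threshold, then
# volume and a crude surface-area estimate from the resulting binary mask.
def otsu_threshold(volume, bins=256):
    hist, edges = np.histogram(volume, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:i] * centers[:i]).sum() / w0
        m1 = (p[i:] * centers[i:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

def canal_metrics(volume, voxel_size_mm):
    t = otsu_threshold(volume)
    mask = volume < t                      # assumption: canal voxels are darker
    volume_mm3 = mask.sum() * voxel_size_mm ** 3
    faces = 0                              # canal/non-canal face count along each axis
    for axis in range(3):
        faces += np.abs(np.diff(mask.astype(np.int8), axis=axis)).sum()
    surface_mm2 = faces * voxel_size_mm ** 2
    return t, volume_mm3, surface_mm2

rng = np.random.default_rng(0)
toy = rng.normal(100, 10, (40, 40, 40))
toy[10:30, 10:30, 10:30] -= 60             # darker synthetic "canal" region
print(canal_metrics(toy, voxel_size_mm=0.02))
```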
Very Large Scale Integrated Circuits for Military Systems.
1981-01-01
ABBREVIATIONS: A/D Analog-to-digital; AGC Automatic Gain Control; A/J Anti-jam; ASP Advanced Signal Processor; AU Arithmetic Units; CAD Computer-Aided ... ESM) equipments (Ref. 23); in lieu of an adequate automatic processing capability, the function is now performed manually (Ref. 24), which involves ... a human operator, displays, etc., and a sacrifice in performance (acquisition speed, saturation signal density). Various automatic processing
On the automaticity of response inhibition in individuals with alcoholism.
Noël, Xavier; Brevers, Damien; Hanak, Catherine; Kornreich, Charles; Verbanck, Paul; Verbruggen, Frederick
2016-06-01
Response inhibition is usually considered a hallmark of executive control. However, recent work indicates that stop performance can become associatively mediated ('automatic') over practice. This study investigated automatic response inhibition in sober and recently detoxified individuals with alcoholism. We administered to forty recently detoxified alcoholics and forty healthy participants a modified stop-signal task that consisted of a training phase in which a subset of the stimuli was consistently associated with stopping or going, and a test phase in which this mapping was reversed. In the training phase, stop performance improved for the consistent stop stimuli, compared with control stimuli that were not associated with going or stopping. In the test phase, go performance tended to be impaired for old stop stimuli. Combined, these findings support the automatic inhibition hypothesis. Importantly, performance was similar in both groups, which indicates that automatic inhibitory control develops normally in individuals with alcoholism. This finding is specific to individuals with alcoholism without other psychiatric disorders, which is rather atypical and prevents generalization. Personalized stimuli with a stronger affective content should be used in future studies. These results advance our understanding of behavioral inhibition in individuals with alcoholism. Furthermore, intact automatic inhibitory control may be an important element of successful cognitive remediation of addictive behaviors. Copyright © 2016 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Connelly, Edward A.; And Others
A new approach to deriving human performance measures and criteria for use in automatically evaluating trainee performance is documented in this report. The ultimate application of the research is to provide methods for automatically measuring pilot performance in a flight simulator or from recorded in-flight data. An efficient method of…
Detecting cheaters without thinking: testing the automaticity of the cheater detection module.
Van Lier, Jens; Revlin, Russell; De Neys, Wim
2013-01-01
Evolutionary psychologists have suggested that our brain is composed of evolved mechanisms. One extensively studied mechanism is the cheater detection module. This module would make people very good at detecting cheaters in a social exchange. A vast amount of research has illustrated performance facilitation on social contract selection tasks. This facilitation is attributed to the alleged automatic and isolated operation of the module (i.e., independent of general cognitive capacity). This study, using the selection task, tested the critical automaticity assumption in three experiments. Experiments 1 and 2 established that performance on social contract versions did not depend on cognitive capacity or age. Experiment 3 showed that experimentally burdening cognitive resources with a secondary task had no impact on performance on the social contract version. However, in all experiments, performance on a non-social contract version did depend on available cognitive capacity. Overall, findings validate the automatic and effortless nature of social exchange reasoning.
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.; Stueber, Thomas J.
2012-01-01
An inlet system is being tested to evaluate methodologies for a turbine based combined cycle propulsion system to perform a controlled inlet mode transition. Prior to wind tunnel based hardware testing of controlled mode transitions, simulation models are used to test, debug, and validate potential control algorithms. One candidate simulation package for this purpose is the High Mach Transient Engine Cycle Code (HiTECC). The HiTECC simulation package models the inlet system, propulsion systems, thermal energy, geometry, nozzle, and fuel systems. This paper discusses the modification and redesign of the simulation package and control system to represent the NASA large-scale inlet model for Combined Cycle Engine mode transition studies, mounted in NASA Glenn's 10- by 10-Foot Supersonic Wind Tunnel. This model will be used for designing and testing candidate control algorithms before implementation.
Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow.
Wongsuphasawat, Kanit; Smilkov, Daniel; Wexler, James; Wilson, Jimbo; Mane, Dandelion; Fritz, Doug; Krishnan, Dilip; Viegas, Fernanda B; Wattenberg, Martin
2018-01-01
We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.
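One of the transformations described above, building a clustered overview graph from the hierarchical structure in node names, can be illustrated in a few lines of Python. This sketch is not TensorFlow's implementation; it simply groups dataflow nodes by the first component of their '/'-separated names and collapses the edges between groups.

```python
from collections import defaultdict

# Illustrative sketch: cluster a dataflow graph by top-level name scope and
# collapse cross-scope edges into cluster-level edges.
def cluster_graph(edges):
    """edges: iterable of (src_node, dst_node) full node names."""
    def group(name):
        return name.split("/", 1)[0]
    members = defaultdict(set)
    cluster_edges = set()
    for src, dst in edges:
        gs, gd = group(src), group(dst)
        members[gs].add(src)
        members[gd].add(dst)
        if gs != gd:
            cluster_edges.add((gs, gd))
    return dict(members), sorted(cluster_edges)

edges = [
    ("input/reshape", "conv1/weights"),
    ("conv1/weights", "conv1/bias_add"),
    ("conv1/bias_add", "pool1/maxpool"),
    ("pool1/maxpool", "fc/matmul"),
]
groups, top_level = cluster_graph(edges)
print(top_level)   # [('conv1', 'pool1'), ('input', 'conv1'), ('pool1', 'fc')]
```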
A Framework to Debug Diagnostic Matrices
NASA Technical Reports Server (NTRS)
Kodal, Anuradha; Robinson, Peter; Patterson-Hine, Ann
2013-01-01
Diagnostics is an important concept in system health and monitoring of space operations. Many of the existing diagnostic algorithms utilize system knowledge in the form of a diagnostic matrix (D-matrix, also popularly known as diagnostic dictionary, fault signature matrix or reachability matrix) gleaned from physical models. Sometimes, however, this matrix is not accurate enough to obtain high diagnostic performance. In such a case, it is important to modify the D-matrix based on knowledge obtained from other sources, such as time-series data streams (simulated or maintenance data), within the context of a framework that includes the diagnostic/inference algorithm. A systematic and sequential update procedure, the diagnostic modeling evaluator (DME), is proposed to modify the D-matrix and wrapper logic, considering the least expensive solution first. This iterative procedure includes modifications ranging from flipping 0s and 1s in the matrix to adding or removing rows (failure sources) and columns (tests). We experiment with this framework on datasets from the DX Challenge 2009.
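To give a feel for the cheapest class of edits mentioned above, single 0/1 flips, the Python sketch below scores each D-matrix entry against labeled pass/fail observations and flags entries that the data contradict most often. It illustrates the idea only and is not the DME procedure.

```python
import numpy as np

# Illustrative sketch: rows are failure sources, columns are tests, and each
# observation pairs an active failure source with the observed outcome vector
# (1 = test failed). Entries contradicted by most of the data become flip
# candidates, cheapest edits first.
def candidate_flips(d_matrix, observations, min_support=3):
    d = np.asarray(d_matrix)
    disagree = np.zeros_like(d, dtype=int)
    support = np.zeros_like(d, dtype=int)
    for fault, outcomes in observations:
        support[fault, :] += 1
        disagree[fault, :] += (np.asarray(outcomes) != d[fault, :]).astype(int)
    flips = []
    for i, j in zip(*np.nonzero(support >= min_support)):
        rate = disagree[i, j] / support[i, j]
        if rate > 0.5:                      # majority of data contradicts this entry
            flips.append(((int(i), int(j)), int(1 - d[i, j]), round(float(rate), 2)))
    return sorted(flips, key=lambda f: -f[2])

D = [[1, 0, 1],
     [0, 1, 0]]
obs = [(0, [1, 1, 1]), (0, [1, 1, 1]), (0, [1, 1, 0]), (0, [1, 1, 1])]
print(candidate_flips(D, obs))   # suggests flipping D[0][1] from 0 to 1
```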
Firdaus, Ahmad; Anuar, Nor Badrul; Razak, Mohd Faizal Ab; Hashem, Ibrahim Abaker Targio; Bachok, Syafiq; Sangaiah, Arun Kumar
2018-05-04
The increasing demand for Android mobile devices and blockchain has motivated malware creators to develop mobile malware to compromise the blockchain. Although the blockchain is secure, attackers have managed to gain access into the blockchain as legal users, thereby compromising important and crucial information. Examples of mobile malware include root exploit, botnets, and Trojans, and root exploit is one of the most dangerous malware. It compromises the operating system kernel in order to gain root privileges which are then used by attackers to bypass the security mechanisms, to gain complete control of the operating system, to install other possible types of malware to the devices, and finally, to steal victims' private keys linked to the blockchain. For the purpose of maximizing the security of the blockchain-based medical data management (BMDM), it is crucial to investigate the novel features and approaches contained in root exploit malware. This study proposes to use the bio-inspired method of particle swarm optimization (PSO), which automatically selects the exclusive features that contain the novel Android Debug Bridge (ADB). This study also adopts boosting (AdaBoost, Real AdaBoost, LogitBoost, and MultiBoost) to enhance the machine learning prediction that detects unknown root exploits, and scrutinizes three categories of features: (1) system commands, (2) directory paths, and (3) code-based features. The evaluation gathered from this study suggests a marked accuracy value of 93% with LogitBoost in the simulation. LogitBoost also helped to predict all the root exploit samples in our developed system, the root exploit detection system (RODS).
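The sketch below illustrates the general flavor of swarm-based feature selection combined with a boosted classifier: a small binary particle swarm searches over feature masks, scoring each mask by the cross-validated accuracy of an AdaBoost model on the selected features. It uses synthetic data and is not the paper's RODS system; all parameters are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Illustrative binary PSO over feature masks; fitness = CV accuracy of AdaBoost.
rng = np.random.default_rng(1)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=1)

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = AdaBoostClassifier(n_estimators=50, random_state=1)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

n_particles, n_feat, iters = 8, X.shape[1], 10
pos = rng.random((n_particles, n_feat)) < 0.5            # boolean feature masks
vel = rng.normal(0, 1, (n_particles, n_feat))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
    vel = (0.7 * vel + 1.5 * r1 * (pbest.astype(float) - pos)
           + 1.5 * r2 * (gbest.astype(float) - pos))
    pos = rng.random(vel.shape) < 1.0 / (1.0 + np.exp(-vel))   # sigmoid sampling
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", np.flatnonzero(gbest),
      "CV accuracy:", round(float(pbest_fit.max()), 3))
```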
The automaticity of face perception is influenced by familiarity.
Yan, Xiaoqian; Young, Andrew W; Andrews, Timothy J
2017-10-01
In this study, we explore the automaticity of encoding for different facial characteristics and ask whether it is influenced by face familiarity. We used a matching task in which participants had to report whether the gender, identity, race, or expression of two briefly presented faces was the same or different. The task was made challenging by allowing nonrelevant dimensions to vary across trials. To test for automaticity, we compared performance on trials in which the task instruction was given at the beginning of the trial, with trials in which the task instruction was given at the end of the trial. As a strong criterion for automatic processing, we reasoned that if perception of a given characteristic (gender, race, identity, or emotion) is fully automatic, the timing of the instruction should not influence performance. We compared automaticity for the perception of familiar and unfamiliar faces. Performance with unfamiliar faces was higher for all tasks when the instruction was given at the beginning of the trial. However, we found a significant interaction between instruction and task with familiar faces. Accuracy of gender and identity judgments to familiar faces was the same regardless of whether the instruction was given before or after the trial, suggesting automatic processing of these properties. In contrast, there was an effect of instruction for judgments of expression and race to familiar faces. These results show that familiarity enhances the automatic processing of some types of facial information more than others.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guan, Qiang
At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future proofing. We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.
A New Source Biasing Approach in ADVANTG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevill, Aaron M; Mosher, Scott W
2012-01-01
The ADVANTG code has been developed at Oak Ridge National Laboratory to generate biased sources and weight window maps for MCNP using the CADIS and FW-CADIS methods. In preparation for an upcoming RSICC release, a new approach for generating a biased source has been developed. This improvement streamlines user input and improves reliability. Previous versions of ADVANTG generated the biased source from ADVANTG input, writing an entirely new general fixed-source definition (SDEF). Because volumetric sources were translated into SDEF-format as a finite set of points, the user had to perform a convergence study to determine whether the number of source points used accurately represented the source region. Further, the large number of points that must be written in SDEF-format made the MCNP input and output files excessively long and difficult to debug. ADVANTG now reads SDEF-format distributions and generates corresponding source biasing cards, eliminating the need for a convergence study. Many problems of interest use complicated source regions that are defined using cell rejection. In cell rejection, the source distribution in space is defined using an arbitrarily complex cell and a simple bounding region. Source positions are sampled within the bounding region but accepted only if they fall within the cell; otherwise, the position is resampled entirely. When biasing in space is applied to sources that use rejection sampling, current versions of MCNP do not account for the rejection in setting the source weight of histories, resulting in an 'unfair game'. This problem was circumvented in previous versions of ADVANTG by translating volumetric sources into a finite set of points, which does not alter the mean history weight (w̄). To use biasing parameters without otherwise modifying the original cell-rejection SDEF-format source, ADVANTG users now apply a correction factor for w̄ in post-processing. A stratified-random sampling approach in ADVANTG is under development to automatically report the correction factor with estimated uncertainty. This study demonstrates the use of ADVANTG's new source biasing method, including the application of w̄.
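The quantity at the heart of the w̄ correction is the acceptance fraction of the cell-rejection source. The Python sketch below estimates that fraction (with a simple binomial uncertainty) by plain Monte Carlo sampling of the bounding region; it is an illustration only, not ADVANTG's stratified-random implementation, and the full correction in practice also accounts for the spatial biasing.

```python
import numpy as np

# Illustrative sketch: estimate the fraction of bounding-region samples that
# fall inside the rejection cell, which enters the w-bar correction applied
# in post-processing.
def acceptance_fraction(inside_cell, bounds, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    pts = rng.uniform(lo, hi, size=(n, lo.size))
    accepted = np.fromiter((inside_cell(p) for p in pts), dtype=bool, count=n)
    f = accepted.mean()
    sigma = np.sqrt(f * (1.0 - f) / n)        # binomial uncertainty estimate
    return f, sigma

# toy "cell": a unit sphere inside a [-1, 1]^3 bounding box (true f ~ 0.5236)
inside = lambda p: np.dot(p, p) <= 1.0
f, sigma = acceptance_fraction(inside, bounds=([-1, -1, -1], [1, 1, 1]))
print(f"acceptance fraction: {f:.4f} +/- {sigma:.4f}")
```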
Consistent detection of global predicates
NASA Technical Reports Server (NTRS)
Cooper, Robert; Marzullo, Keith
1991-01-01
A fundamental problem in debugging and monitoring is detecting whether the state of a system satisfies some predicate. If the system is distributed, then the resulting uncertainty in the state of the system makes such detection, in general, ill-defined. Three algorithms are presented for detecting global predicates in a well-defined way. These algorithms do so by interpreting predicates with respect to the communication that has occurred in the system.
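A key building block behind such well-defined detection is deciding, from vector timestamps, whether two local states are concurrent and hence could co-exist in a consistent global state. The Python sketch below shows that check and a naive "possibly phi"-style search over two processes; it is an illustration of the idea, not the paper's algorithms.

```python
# Illustrative sketch: vector-clock comparison and a naive "possibly" check
# over the local states of two processes.
def happened_before(vc_a, vc_b):
    """True if the state stamped vc_a causally precedes the one stamped vc_b."""
    return (all(a <= b for a, b in zip(vc_a, vc_b))
            and any(a < b for a, b in zip(vc_a, vc_b)))

def concurrent(vc_a, vc_b):
    return not happened_before(vc_a, vc_b) and not happened_before(vc_b, vc_a)

def possibly(predicate, states):
    """states: per-process lists of (vector_clock, local_state) pairs (two processes).
    True if some pair of mutually concurrent local states satisfies the predicate."""
    for vc0, s0 in states[0]:
        for vc1, s1 in states[1]:
            if concurrent(vc0, vc1) and predicate(s0, s1):
                return True
    return False

# toy run: could x0 + x1 > 10 have held in some consistent global state?
p0 = [((1, 0), {"x0": 2}), ((2, 1), {"x0": 9})]
p1 = [((0, 1), {"x1": 3}), ((1, 2), {"x1": 4})]
print(possibly(lambda a, b: a["x0"] + b["x1"] > 10, [p0, p1]))   # True
```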
Viewer: a User Interface for Failure Region Analysis
1990-12-01
another possible area of continued research. The program could detect whether the user is a beginner, intermediate, or expert and provide different ... interfaces for each level. The beginner level would provide detailed help functions, and prompt the user with detailed explanations of what the program ... June 1990. Brooke, J.B. and Duncan, K.D., "Experimental Studies of Flowchart Use at Different Stages of Program Debugging" (Ergonomics, Vol 23, No
Assessing GPS Constellation Resiliency in an Urban Canyon Environment
2015-03-26
Taipei, Taiwan as his area of interest. His GPS constellation is modeled in the Satellite Toolkit (STK) where augmentation satellites can be added and ... interaction. SEAS also provides a visual display of the simulation which is useful for verification and debugging portions of the analysis. Furthermore ... entire system. Interpreting the model is aided by the visual display of the agents moving in the region of interest. Furthermore, SEAS collects
Electronic and software subsystems for an autonomous roving vehicle. M.S. Thesis
NASA Technical Reports Server (NTRS)
Doig, G. A.
1980-01-01
The complete electronics packaging which controls the Mars roving vehicle is described in order to provide a broad overview of the systems that are part of that package. Some software debugging tools are also discussed. Particular emphasis is given to those systems that are controlled by the microprocessor. These include the laser mast, the telemetry system, the command link prime interface board, and the prime software.
Characterizing and Implementing Efficient Primitives for Privacy-Preserving Computation
2015-07-01
the mobile device. From this, the mobile will detect any tampering from the malicious party by a discrepancy in these returned values, eliminating ... the need for an output MAC. If no tampering is detected, the mobile device then decrypts the output of computation. APPROVED FOR PUBLIC RELEASE ... useful error messages when the compiler detects a problem with an application, making debugging the application significantly easier than with other
Observation sand Results Gained from the Jade Project
2002-05-04
different dependency-based models have been created that vary in their levels of ... the Java programming language. Currently, exception handling and ... in the debugging of software to reduce the problem of structural faults in ... error diagnosis in logic programs. In Proceedings 13th
NASA Technical Reports Server (NTRS)
Jaworski, Allan; Lavallee, David; Zoch, David
1987-01-01
The prototype demonstrates the feasibility of using Ada for expert systems and the implementation of an expert-friendly interface which supports knowledge entry. In the Ford LISP-Ada Connection (FLAC) system LISP and Ada are used in ways which complement their respective capabilities. Future investigation will concentrate on the enhancement of the expert knowledge entry/debugging interface and on the issues associated with multitasking and real-time expert systems implementation in Ada.
6 DOF Nonlinear AUV Simulation Toolbox
1997-01-01
is to supply a flexible 3D-simulation platform for motion visualization, in-lab debugging and testing of mission-specific strategies as well as those ... Explorer are modularly designed [Smith] in order to cut time and cost for vehicle reconfiguration. A flexible 3D-simulation platform is desired to ... 3D models. Currently implemented modules include a nonlinear dynamic model for the OEX, shared memory and semaphore manager tools, shared memory monitor
A distributed data acquisition software scheme for the Laboratory Telerobotic Manipulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, P.L.; Glassell, R.L.; Rowe, J.C.
1990-01-01
A custom software architecture was developed for use in the Laboratory Telerobotic Manipulator (LTM) to provide support for the distributed data acquisition electronics. This architecture was designed to provide a comprehensive development environment that proved to be useful for both hardware and software debugging. This paper describes the development environment and the operational characteristics of the real-time data acquisition software. 8 refs., 5 figs.
Plan Debugging Using Approximate Domain Theories.
1995-03-01
compelling suggestion that generative planning systems solving large problems will need to exploit the control information implicit in uncertain ... control information implicit in uncertain information may well lead the planner to expand one portion of a plan at one point, and a separate portion of ... solutions that have been proposed are to abandon declarativism (as suggested in the work on situated automata theory and its variants [1, 16, 56, 72
Solutions and debugging for data consistency in multiprocessors with noncoherent caches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstein, D.; Mendelson, B.; Breternitz, M. Jr.
1995-02-01
We analyze two important problems that arise in shared-memory multiprocessor systems. The stale data problem involves ensuring that data items in local memory of individual processors are current, independent of writes done by other processors. False sharing occurs when two processors have copies of the same shared data block but update different portions of the block. The false sharing problem involves guaranteeing that subsequent writes are properly combined. In modern architectures these problems are usually solved in hardware, by exploiting mechanisms for hardware controlled cache consistency. This leads to more expensive and nonscalable designs. Therefore, we are concentrating on software methods for ensuring cache consistency that would allow for affordable and scalable multiprocessing systems. Unfortunately, providing software control is nontrivial, both for the compiler writer and for the application programmer. For this reason we are developing a debugging environment that will facilitate the development of compiler-based techniques and will help the programmer to tune his or her application using explicit cache management mechanisms. We extend the notion of a race condition for IBM Shared Memory System POWER/4, taking into consideration its noncoherent caches, and propose techniques for detection of false sharing problems. Identification of the stale data problem is discussed as well, and solutions are suggested.
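The kind of check such a debugging environment can run is easy to illustrate: scan a trace of writes issued between two synchronization points and flag cache blocks written by more than one processor at non-overlapping offsets. The Python sketch below shows that pattern detector; the block size and trace format are assumptions, and this is not the paper's tool.

```python
from collections import defaultdict

# Illustrative sketch: detect potential false sharing from a write trace.
BLOCK = 64  # assumed cache-block size in bytes

def false_sharing_candidates(writes):
    """writes: iterable of (processor_id, byte_address) between two sync points."""
    by_block = defaultdict(lambda: defaultdict(set))   # block -> proc -> offsets
    for proc, addr in writes:
        by_block[addr // BLOCK][proc].add(addr % BLOCK)
    report = []
    for block, procs in by_block.items():
        if len(procs) < 2:
            continue
        overlap = set.intersection(*procs.values())
        if not overlap:                     # distinct offsets -> false sharing pattern
            report.append((block, sorted(procs)))
    return report

trace = [(0, 4096), (0, 4100), (1, 4128), (1, 4132), (0, 8192), (1, 8192)]
print(false_sharing_candidates(trace))   # block covering 4096-4159 is flagged
```

Overlapping offsets written by different processors (addresses 8192 here) are instead the stale-data/race case and would be handled by the race-condition analysis the abstract describes.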
Collette, Fabienne; Van der Linden, Martial; Salmon, Eric
2010-01-01
A decline of cognitive functioning affecting several cognitive domains was frequently reported in patients with frontotemporal dementia. We were interested in determining if these deficits can be interpreted as reflecting an impairment of controlled cognitive processes by using an assessment tool specifically developed to explore the distinction between automatic and controlled processes, namely the process dissociation procedure (PDP) developed by Jacoby. The PDP was applied to a word stem completion task to determine the contribution of automatic and controlled processes to episodic memory performance and was administered to a group of 12 patients with the behavioral variant of frontotemporal dementia (bv-FTD) and 20 control subjects (CS). Bv-FTD patients obtained a lower performance than CS for the estimates of controlled processes, but no group difference was observed for estimates of automatic processes. The between-groups comparison of the estimates of controlled and automatic processes showed a larger contribution of automatic processes to performance in bv-FTD, while a slightly more important contribution of controlled processes was observed in control subjects. These results are clearly indicative of an alteration of controlled memory processes in bv-FTD.
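For readers unfamiliar with the PDP, the standard Jacoby estimates are simple to compute: with inclusion-condition completion rate I and exclusion-condition rate E, the controlled estimate is C = I - E and the automatic estimate is A = E / (1 - C). The snippet below works through made-up example numbers only.

```python
# Standard process-dissociation estimates (Jacoby): C = I - E, A = E / (1 - C).
# The rates below are invented purely for illustration.
def pdp_estimates(inclusion_rate, exclusion_rate):
    controlled = inclusion_rate - exclusion_rate
    automatic = exclusion_rate / (1.0 - controlled) if controlled < 1.0 else float("nan")
    return controlled, automatic

c, a = pdp_estimates(inclusion_rate=0.55, exclusion_rate=0.30)
print(f"controlled estimate: {c:.2f}, automatic estimate: {a:.2f}")  # 0.25 and 0.40
```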
GPAW - massively parallel electronic structure calculations with Python-based software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enkovaara, J.; Romero, N.; Shende, S.
2011-01-01
Electronic structure calculations are a widely used tool in materials science and a large consumer of supercomputing resources. Traditionally, the software packages for these kinds of simulations have been implemented in compiled languages, where Fortran in its different versions has been the most popular choice. While dynamic, interpreted languages, such as Python, can increase the efficiency of the programmer, they cannot compete directly with the raw performance of compiled languages. However, by using an interpreted language together with a compiled language, it is possible to have most of the productivity enhancing features together with a good numerical performance. We have used this approach in implementing an electronic structure simulation software GPAW using the combination of Python and C programming languages. While the chosen approach works well in standard workstations and Unix environments, massively parallel supercomputing systems can present some challenges in porting, debugging and profiling the software. In this paper we describe some details of the implementation and discuss the advantages and challenges of the combined Python/C approach. We show that despite the challenges it is possible to obtain good numerical performance and good parallel scalability with Python-based software.
A parallel strategy for implementing real-time expert systems using CLIPS
NASA Technical Reports Server (NTRS)
Ilyes, Laszlo A.; Villaseca, F. Eugenio; Delaat, John
1994-01-01
As evidenced by current literature, there appears to be a continued interest in the study of real-time expert systems. It is generally recognized that speed of execution is only one consideration when designing an effective real-time expert system. Some other features one must consider are the expert system's ability to perform temporal reasoning, handle interrupts, prioritize data, contend with data uncertainty, and perform context focusing as dictated by the incoming data to the expert system. This paper presents a strategy for implementing a real time expert system on the iPSC/860 hypercube parallel computer using CLIPS. The strategy takes into consideration not only the execution time of the software, but also those features which define a true real-time expert system. The methodology is then demonstrated using a practical implementation of an expert system which performs diagnostics on the Space Shuttle Main Engine (SSME). This particular implementation uses an eight node hypercube to process ten sensor measurements in order to simultaneously diagnose five different failure modes within the SSME. The main program is written in ANSI C and embeds CLIPS to better facilitate and debug the rule based expert system.
20 CFR 404.285 - Recomputations performed automatically.
Code of Federal Regulations, 2010 CFR
2010-04-01
Title 20, Employees' Benefits, Section 404.285: Recomputations performed automatically. Social Security Administration, Federal Old-Age, Survivors and Disability Insurance (1950- ), Computing Primary Insurance Amounts, Recomputing Your Primary Insurance Amount.
Trust, control strategies and allocation of function in human-machine systems.
Lee, J; Moray, N
1992-10-01
As automated controllers supplant human intervention in controlling complex systems, the operators' role often changes from that of an active controller to that of a supervisory controller. Acting as supervisors, operators can choose between automatic and manual control. Improperly allocating function between automatic and manual control can have negative consequences for the performance of a system. Previous research suggests that the decision to perform the job manually or automatically depends, in part, upon the trust the operators invest in the automatic controllers. This paper reports an experiment to characterize the changes in operators' trust during an interaction with a semi-automatic pasteurization plant, and investigates the relationship between changes in operators' control strategies and trust. A regression model identifies the causes of changes in trust, and a 'trust transfer function' is developed using time series analysis to describe the dynamics of trust. Based on a detailed analysis of operators' strategies in response to system faults we suggest a model for the choice between manual and automatic control, based on trust in automatic controllers and self-confidence in the ability to control the system manually.
The Muon Conditions Data Management: Database Architecture and Software Infrastructure
NASA Astrophysics Data System (ADS)
Verducci, Monica
2010-04-01
The management of the Muon Conditions Database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates and in terms of the variety of data stored and their analysis. The Muon conditions database is responsible for storing almost all of the 'non-event' data and detector quality flags needed for debugging the detector operations and for performing the reconstruction and the analysis. In particular for the early data, knowledge of the detector performance and of the corrections in terms of efficiency and calibration will be extremely important for the correct reconstruction of the events. In this work, an overview of the entire Muon conditions database architecture is given, covering in particular the different sources of the data and the storage model used, including the associated database technology. Particular emphasis is given to the Data Quality chain: the flow of the data, the analysis and the final results are described. In addition, the software interfaces used to access the conditions data are described, in particular within the ATLAS offline reconstruction framework, ATHENA.
Action Centered Contextual Bandits.
Greenewald, Kristjan; Tewari, Ambuj; Klasnja, Predrag; Murphy, Susan
2017-12-01
Contextual bandits have become popular as they offer a middle ground between very simple approaches based on multi-armed bandits and very complex approaches using the full power of reinforcement learning. They have demonstrated success in web applications and have a rich body of associated theoretical guarantees. Linear models are well understood theoretically and preferred by practitioners because they are not only easily interpretable but also simple to implement and debug. Furthermore, if the linear model is true, we get very strong performance guarantees. Unfortunately, in emerging applications in mobile health, the time-invariant linear model assumption is untenable. We provide an extension of the linear model for contextual bandits that has two parts: baseline reward and treatment effect. We allow the former to be complex but keep the latter simple. We argue that this model is plausible for mobile health applications. At the same time, it leads to algorithms with strong performance guarantees as in the linear model setting, while still allowing for complex nonlinear baseline modeling. Our theory is supported by experiments on data gathered in a recently concluded mobile health study.
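The split between a complex baseline and a simple linear treatment effect can be sketched in a few lines. The example below is only in the spirit of the model described above, not the paper's algorithm: actions are centered around their selection probability, which makes the regressor orthogonal to any baseline, so an online ridge estimate recovers the treatment-effect parameters even though the baseline is nonlinear and time-varying. All numbers are synthetic.

```python
import numpy as np

# Hedged sketch: action-centered online ridge estimate of a linear treatment
# effect on top of an arbitrary, unmodeled baseline reward.
rng = np.random.default_rng(1)
dim = 2
true_theta = np.array([1.0, -0.5])      # treatment effect, unknown to the learner
A = np.eye(dim)                          # ridge-regularized Gram matrix
b = np.zeros(dim)

for t in range(5000):
    s = rng.normal(size=dim)                             # context
    theta_hat = np.linalg.solve(A, b)
    # treatment probability from the current estimate, kept away from 0 and 1
    pi = float(np.clip(1.0 / (1.0 + np.exp(-s @ theta_hat)), 0.2, 0.8))
    a = rng.random() < pi                                # randomized action
    baseline = 2.0 + np.sin(0.01 * t) + 0.3 * s[0] ** 2  # complex, unmodeled baseline
    reward = baseline + a * (s @ true_theta) + rng.normal(scale=0.1)
    x = (a - pi) * s                                     # action-centered regressor
    A += np.outer(x, x)
    b += reward * x

print("estimated treatment effect:", np.round(np.linalg.solve(A, b), 2))
```

Because E[a - pi | s] = 0 by construction, the baseline contributes only zero-mean noise to the update, which is the point of keeping the treatment-effect model simple while leaving the baseline unrestricted.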
Global synchronization algorithms for the Intel iPSC/860
NASA Technical Reports Server (NTRS)
Seidel, Steven R.; Davis, Mark A.
1992-01-01
In a distributed memory multicomputer that has no global clock, global processor synchronization can only be achieved through software. Global synchronization algorithms are used in tridiagonal systems solvers, CFD codes, sequence comparison algorithms, and sorting algorithms. They are also useful for event simulation, debugging, and for solving mutual exclusion problems. For the Intel iPSC/860 in particular, global synchronization can be used to ensure the most effective use of the communication network for operations such as the shift, where each processor in a one-dimensional array or ring concurrently sends a message to its right (or left) neighbor. Three global synchronization algorithms are considered for the iPSC/860: the gsync() primitive provided by Intel, the PICL primitive sync0(), and a new recursive doubling synchronization (RDS) algorithm. The performance of these algorithms is compared to the performance predicted by communication models of both the long and forced message protocols. Measurements of the cost of shift operations preceded by global synchronization show that the RDS algorithm always synchronizes the nodes more precisely and costs only slightly more than the other two algorithms.
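The recursive doubling pattern is easy to sketch with modern message passing. The example below uses mpi4py rather than the iPSC/860 calls and assumes a power-of-two number of ranks: in log2(P) stages each rank exchanges a token with the partner whose rank differs in one bit, so no rank can leave the barrier before every rank has entered it. This is an illustration of the RDS idea, not the paper's measured implementation.

```python
from mpi4py import MPI

# Illustrative recursive doubling barrier (assumes a power-of-two rank count).
def recursive_doubling_barrier(comm):
    rank, size = comm.Get_rank(), comm.Get_size()
    stage = 1
    while stage < size:
        partner = rank ^ stage            # partner differs in exactly one bit
        comm.sendrecv("sync", dest=partner, sendtag=stage,
                      source=partner, recvtag=stage)
        stage <<= 1

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    recursive_doubling_barrier(comm)
    print(f"rank {comm.Get_rank()} passed the RDS barrier")
```

Saved as, say, rds_barrier.py, a run such as "mpiexec -n 8 python rds_barrier.py" prints one line per rank only after all ranks have reached the barrier.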
Performance of a scintillation detector array operated with LHAASO-KM2A electronics
NASA Astrophysics Data System (ADS)
Wang, Zhen; Guo, Yiqing; Cai, Hui; Chang, Jinfan; Chen, Tianlu; Danzengluobu; Feng, Youliang; Gao, Qi; Gou, Quanbu; Guo, Yingying; Hou, Chao; Hu, Hongbo; Labaciren; Liu, Cheng; Li, Haijin; Liu, Jia; Liu, Maoyuan; Qiao, Bingqiang; Qian, Xiangli; Sheng, Xiangdong; Tian, Zhen; Wang, Qun; Xue, Liang; Yao, Yuhua; Zhang, Shaoru; Zhang, Xueyao; Zhang, Yi
2018-04-01
A scintillation detector array composed of 115 detectors and covering an area of about 20000 m2 was installed at the end of 2016 at the Yangbajing international cosmic ray observatory and has been taking data since then. The array is equipped with electronics from Large High Altitude Air Shower Observatory Square Kilometer Complex Array (LHAASO-KM2A) and, in turn, currently serves as the largest debugging and testing platform for the LHAASO-KM2A. Furthermore, the array was used to study the performance of a wide field-of-view air Cherenkov telescope by providing accurate information on the shower core, direction and energy, etc. This work mainly deals with the scintillation detector array. The experimental setup and the offline calibration are described in detail. Then, a thorough comparison between the data and Monte Carlo (MC) simulations is presented and a good agreement is obtained. With the even-odd method, the resolutions of the shower direction and core are measured. Finally, successful observations of the expected Moon's and Sun's shadows of cosmic rays (CRs) verify the measured angular resolution.
Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Wucherl; Koo, Michelle; Cao, Yu
Big data is prevalent in HPC computing. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance involving terabytes or petabytes of workflow data or measurement data of the executions, from complex workflows over a large number of nodes and multiple parallel task executions. To help identify performance bottlenecks or debug the performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework, using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply the most sophisticated statistical tools and data mining methods on the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from the genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and Big Data workflows.
Equations for Automotive-Transmission Performance
NASA Technical Reports Server (NTRS)
Chazanoff, S.; Aston, M. B.; Chapman, C. P.
1984-01-01
Curve-fitting procedure ensures high confidence levels. Three-dimensional plot represents performance of small automatic transmission coasting in second gear. In equation for plot, PL is power loss, S is speed, and T is torque. Equations applicable to manual and automatic transmissions over wide range of speed, torque, and efficiency.
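A minimal sketch of the curve-fitting step is shown below: power loss PL is fit as a low-order polynomial surface in speed S and torque T by linear least squares. The data points and the choice of basis are illustrative, not the published transmission equations.

```python
import numpy as np

# Illustrative least-squares fit of PL(S, T); all numbers are made up.
S = np.array([500, 1000, 1500, 2000, 2500, 3000, 1000, 2000], dtype=float)   # rpm
T = np.array([ 20,   20,   40,   40,   60,   60,   60,   20], dtype=float)   # N*m
PL = np.array([0.3, 0.7, 1.6, 2.4, 4.1, 5.6, 3.0, 1.3])                      # kW

# basis: 1, S, T, S*T, S^2, T^2
X = np.column_stack([np.ones_like(S), S, T, S * T, S**2, T**2])
coef, *_ = np.linalg.lstsq(X, PL, rcond=None)
pred = X @ coef
print("coefficients:", np.round(coef, 6))
print("max residual: %.3f kW" % np.abs(PL - pred).max())
```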
Online automatic tuning and control for fed-batch cultivation
van Straten, Gerrit; van der Pol, Leo A.; van Boxtel, Anton J. B.
2007-01-01
Performance of controllers applied in biotechnological production is often below expectation. Online automatic tuning has the capability to improve control performance by adjusting control parameters. This work presents automatic tuning approaches for model reference specific growth rate control during fed-batch cultivation. The approaches are direct methods that use the error between observed specific growth rate and its set point; systematic perturbations of the cultivation are not necessary. Two automatic tuning methods proved to be efficient, in which the adaptation rate is based on a combination of the error, squared error and integral error. These methods are relatively simple and robust against disturbances, parameter uncertainties, and initialization errors. Application of the specific growth rate controller yields a stable system. The controller and automatic tuning methods are qualified by simulations and laboratory experiments with Bordetella pertussis. PMID:18157554
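The flavor of such online tuning can be sketched in a few lines. The example below is not the paper's control law: a proportional controller regulates the specific growth rate of a toy first-order culture model, and its gain is adapted at a rate built from the error, the squared error, and the integral error, so a persistent offset keeps nudging the gain up and shrinking the tracking error. All constants are assumed.

```python
# Illustrative online gain adaptation for specific growth rate control.
dt, t_end = 0.01, 20.0          # hours
tau, b = 2.0, 0.5               # toy dynamics: tau * dmu/dt = -mu + b * u
mu_sp = 0.10                    # specific growth rate setpoint (1/h)
mu, Kp, ie = 0.0, 5.0, 0.0
g1, g2, g3 = 2.0, 5.0, 0.5      # assumed weights on |e|, e^2, |integral of e|

for _ in range(int(t_end / dt)):
    e = mu_sp - mu
    ie += e * dt
    u = Kp * e                                   # feed-rate command
    mu += dt / tau * (-mu + b * u)               # culture response (Euler step)
    Kp += dt * (g1 * abs(e) + g2 * e ** 2 + g3 * abs(ie))   # gain adaptation
print(f"final gain Kp = {Kp:.2f}, remaining tracking error = {mu_sp - mu:.4f}")
```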
Automatization of hardware configuration for plasma diagnostic system
NASA Astrophysics Data System (ADS)
Wojenski, A.; Pozniak, K. T.; Kasprowicz, G.; Kolasinski, P.; Krawczyk, R. D.; Zabolotny, W.; Linczuk, P.; Chernyshova, M.; Czarski, T.; Malinowski, K.
2016-09-01
Soft X-ray plasma measurement systems are mostly multi-channel, high performance systems. In the case of a modular construction, it is necessary to perform sophisticated system discovery in parallel with automatic system configuration. In the paper the structure of the modular system designed for tokamak plasma soft X-ray measurements is described. The concept of the system discovery and further automatic configuration is also presented. The FCS application (FMC/FPGA Configuration Software) is used for running sophisticated system setup with automatic verification of proper configuration. In order to provide flexibility for further system configurations (e.g. user setup), a common communication interface is also described. The approach presented here is related to the automatic system firmware building presented in previous papers. Modular construction and multichannel measurements are key requirements for SXR diagnostics with GEM detectors.
ARES v2: new features and improved performance
NASA Astrophysics Data System (ADS)
Sousa, S. G.; Santos, N. C.; Adibekyan, V.; Delgado-Mena, E.; Israelian, G.
2015-05-01
Aims: We present a new upgraded version of ARES. The new version includes a series of interesting new features such as automatic radial velocity correction, a fully automatic continuum determination, and an estimation of the errors for the equivalent widths. Methods: The automatic correction of the radial velocity is achieved with a simple cross-correlation function, and the automatic continuum determination, as well as the estimation of the errors, relies on a new approach to evaluating the spectral noise at the continuum level. Results: ARES v2 is totally compatible with its predecessor. We show that the fully automatic continuum determination is consistent with the previous methods applied for this task. It also presents a significant improvement in its performance thanks to the implementation of parallel computation using the OpenMP library. Automatic Routine for line Equivalent widths in stellar Spectra - ARES webpage: http://www.astro.up.pt/~sousasag/ares/ Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 075.D-0800(A).
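The cross-correlation idea behind the automatic radial velocity correction is easy to illustrate: correlate the observed spectrum against a rest-frame template, locate the peak lag, and convert that lag into a velocity. The Python sketch below does this on a synthetic spectrum; it is an illustration only, not ARES source code.

```python
import numpy as np

# Illustrative cross-correlation RV estimate on a synthetic absorption spectrum.
c_kms = 299792.458
wave = np.linspace(5000.0, 5050.0, 2000)               # Angstrom, uniform grid
dlam = wave[1] - wave[0]

def gaussian_lines(w, centers, depth=0.6, width=0.08):
    flux = np.ones_like(w)
    for c0 in centers:
        flux -= depth * np.exp(-0.5 * ((w - c0) / width) ** 2)
    return flux

lines = np.array([5005.0, 5017.3, 5032.8, 5041.1])
template = gaussian_lines(wave, lines)
true_rv = 12.0                                          # km/s, injected shift
observed = gaussian_lines(wave, lines * (1.0 + true_rv / c_kms))

xc = np.correlate(observed - observed.mean(), template - template.mean(), mode="full")
lag = np.argmax(xc) - (len(template) - 1)               # peak position in pixels
rv_est = lag * dlam / wave.mean() * c_kms
print(f"recovered RV shift: {rv_est:.1f} km/s (injected {true_rv} km/s)")
```

The recovered value is quantized to the pixel grid; in practice the peak would be refined with a sub-pixel fit before correcting the line positions.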
Gong, Xuepeng; Lu, Qipeng
2015-01-01
A new monochromator is designed to develop a high performance soft X-ray microscopy beamline at Shanghai Synchrotron Radiation Facility (SSRF). But owing to its high resolving power and highly accurate spectrum output, there exist many technical difficulties. In the paper presented, theoretical energy resolution and photon flux of the beamline, the two primary design targets for the monochromator, are calculated. For the wavelength scanning mechanism, the primary factors affecting the rotary angle errors are presented, and the measured results are 0.15'' and 0.17'' for the plane mirror and plane grating, which means that it is possible to provide sufficient scanning precision for a specific wavelength. For the plane grating switching mechanism, the repeatabilities of the roll, yaw and pitch angles are 0.08'', 0.12'' and 0.05'', which can guarantee the highly accurate switching of the plane grating effectively. After debugging, the repeatability of the light spot drift reaches 0.7'', which further improves the performance of the monochromator. The commissioning results show that the energy resolving power is higher than 10000 at the Ar L-edge, the photon flux is higher than 1 × 10^8 photons/sec/200 mA, and the spatial resolution is better than 30 nm, demonstrating that the monochromator performs very well and reaches theoretical predictions.
Stefanidis, Dimitrios; Scerbo, Mark W; Montero, Paul N; Acker, Christina E; Smith, Warren D
2012-01-01
We hypothesized that novices will perform better in the operating room after simulator training to automaticity compared with traditional proficiency based training (current standard training paradigm). Simulator-acquired skill translates to the operating room, but the skill transfer is incomplete. Secondary task metrics reflect the ability of trainees to multitask (automaticity) and may improve performance assessment on simulators and skill transfer by indicating when learning is complete. Novices (N = 30) were enrolled in an IRB-approved, blinded, randomized, controlled trial. Participants were randomized into an intervention (n = 20) and a control (n = 10) group. The intervention group practiced on the FLS suturing task until they achieved expert levels of time and errors (proficiency), were tested on a live porcine fundoplication model, continued simulator training until they achieved expert levels on a visual spatial secondary task (automaticity) and were retested on the operating room (OR) model. The control group participated only during testing sessions. Performance scores were compared within and between groups during testing sessions. Intervention group participants achieved proficiency after 54 ± 14 repetitions and automaticity after an additional 109 ± 57 repetitions. Participants achieved better scores in the OR after automaticity training [345 (range, 0-537)] compared with after proficiency-based training [220 (range, 0-452); P < 0.001]. Simulator training to automaticity takes more time but is superior to proficiency-based training, as it leads to improved skill acquisition and transfer. Secondary task metrics that reflect trainee automaticity should be implemented during simulator training to improve learning and skill transfer.
Zhang, Jing; Lipp, Ottmar V; Hu, Ping
2017-01-01
The current study investigated the interactive effects of individual differences in automatic emotion regulation (AER) and primed emotion regulation strategy on skin conductance level (SCL) and heart rate during provoked anger. The study used a 2 × 2 [AER tendency (expression vs. control) × priming (expression vs. control)] between-subjects design. Participants were assigned to two groups according to their performance on an emotion regulation IAT (differentiating automatic emotion control tendency from automatic emotion expression tendency). Participants in each group were then randomly assigned to one of two emotion regulation priming conditions (emotion control priming or emotion expression priming). Anger was provoked by blaming participants for slow performance during a subsequent backward subtraction task. During anger provocation, the SCL of individuals with automatic emotion control tendencies in the control priming condition was lower than that of those with automatic emotion control tendencies in the expression priming condition. However, the SCL of individuals with automatic emotion expression tendencies did not differ between the automatic emotion control priming and the automatic emotion expression priming conditions. Heart rate during anger provocation was higher in individuals with automatic emotion expression tendencies than in individuals with automatic emotion control tendencies, regardless of priming condition. This pattern indicates an interactive effect of individual differences in AER and emotion regulation priming on SCL, which is an index of emotional arousal. Heart rate was sensitive only to individual differences in AER and did not reflect this interaction. This finding has implications for clinical studies of emotion regulation strategy training, suggesting that different practices are optimal for individuals who differ in AER tendencies.
Detecting Cheaters without Thinking: Testing the Automaticity of the Cheater Detection Module
Van Lier, Jens; Revlin, Russell; De Neys, Wim
2013-01-01
Evolutionary psychologists have suggested that our brain is composed of evolved mechanisms. One extensively studied mechanism is the cheater detection module. This module would make people very good at detecting cheaters in a social exchange. A vast amount of research has illustrated performance facilitation on social contract selection tasks. This facilitation is attributed to the alleged automatic and isolated operation of the module (i.e., independent of general cognitive capacity). This study, using the selection task, tested the critical automaticity assumption in three experiments. Experiments 1 and 2 established that performance on social contract versions did not depend on cognitive capacity or age. Experiment 3 showed that experimentally burdening cognitive resources with a secondary task had no impact on performance on the social contract version. However, in all experiments, performance on a non-social contract version did depend on available cognitive capacity. Overall, findings validate the automatic and effortless nature of social exchange reasoning. PMID:23342012
ERIC Educational Resources Information Center
Servant, Mathieu; Cassey, Peter; Woodman, Geoffrey F.; Logan, Gordon D.
2018-01-01
Automaticity allows us to perform tasks in a fast, efficient, and effortless manner after sufficient practice. Theories of automaticity propose that across practice processing transitions from being controlled by working memory to being controlled by long-term memory retrieval. Recent event-related potential (ERP) studies have sought to test this…
Van Weyenberg, Stephanie; Van Nuffel, Annelies; Lauwers, Ludwig; Vangeyte, Jürgen
2017-01-01
Simple Summary: Most prototypes of systems to automatically detect lameness in dairy cattle are still not available on the market. Estimating their potential adoption rate could support developers in defining development goals towards commercially viable and well-adopted systems. We simulated the potential market shares of such prototypes to assess the effect of altering the system cost and detection performance on the potential adoption rate. We found that system cost and lameness detection performance indeed substantially influence the potential adoption rate. In order for farmers to prefer automatic detection over current visual detection, the usefulness that farmers attach to a system with specific characteristics should be higher than that of visual detection. As such, we concluded that low system costs and high detection performances are required before automatic lameness detection systems become applicable in practice.
Abstract: Most automatic lameness detection system prototypes have not yet been commercialized, and are hence not yet adopted in practice. Therefore, the objective of this study was to simulate the effect of detection performance (percentage missed lame cows and percentage false alarms) and system cost on the potential market share of three automatic lameness detection systems relative to visual detection: a system attached to the cow, a walkover system, and a camera system. Simulations were done using a utility model derived from survey responses obtained from dairy farmers in Flanders, Belgium. Overall, systems attached to the cow had the largest market potential, but were still not competitive with visual detection. Increasing the detection performance or lowering the system cost led to higher market shares for automatic systems at the expense of visual detection. The willingness to pay for extra performance was €2.57 per % fewer missed lame cows, €1.65 per % fewer false alerts, and €12.7 for lame leg indication. The presented results could be exploited by system designers to determine the effect of adjustments to the technology on a system's potential adoption rate. PMID:28991188
Rock Deformation at High Confining Pressure and Temperature.
debugged, delivered and installed at the contracting agency. Clay specimens of illite, kaolinite and montmorillonite were deformed in triaxial compression...at 25 and 300°C at a constant confining pressure of 2 kb and a constant strain rate of 0.0001/sec. The illite and kaolinite are stronger under these...conditions than montmorillonite. Cores from dolomite single crystals were deformed at a confining pressure of 7 kb and temperatures of 300 and 500°C
REDIR: Automated Static Detection of Obfuscated Anti-Debugging Techniques
2014-03-27
analyzing code samples that resist other forms of analysis. 2.5.6 RODS and HASTI: Software Engineering Cognitive Support Software Engineering (SE) is another...and (c) this method is resistant to common obfuscation techniques. To achieve this goal, the Data/Frame sensemaking theory guides the process of...No Starch Press, 2012. [46] C.-W. Hsu, S. W. Shieh et al., “Divergence Detector: A Fine-Grained Approach to Detecting VM-Awareness Malware,” in
Monitoring and tracing of critical software systems: State of the work and project definition
2008-12-01
analysis, troubleshooting and debugging. Some of these subsystems already come with ad hoc tracers for events like wireless connections or SCSI disk... SQLite). Additional synthetic events (e.g. states) are added to the database. The database thus consists of contexts (process, CPU, state), event...capability on a [operating] system-by-system basis. Additionally, the mechanics of querying the data in an ad hoc manner outside the boundaries of the
Sample Batch Scripts for Running Jobs on the Peregrine System
Sample batch script fragments for a serial job in the debug queue and a multi-node job in the short queue:
    #!/bin/bash
    #PBS -l nodes=1:ppn=1,walltime=500   # one node, one process
    #PBS -N test1                        # Name of job
    #PBS -A CSC001                       # project handle
    #PBS -q short                        # short queue
    #PBS -l nodes=4:ppn=24               # Number of nodes, put 24 processes on each
The Design and Implementation of a Data Flow Multiprocessor.
1981-12-01
to thank Captain Charles Papp who taught me how to use the logic analyzer and the storage oscilloscope. Without these tools, I could never have...debugged and repaired the microprocessors. Finally, I wish to thank my thesis readers, Major Charles Lillie and Major Walt Seward, for taking valuable time...Neumann/Babbage architecture with a data flow architecture. The next section describes the benefits of data flow computing. The following section
1980-01-15
Code B364078464 V99QAXNH30303 H2590D. Key words: Strategic Targeting; Copper Industry; INDATAK. ...develop, debug and test an industrial simulation model (INDATAK) using the LOGATAK model as a point of departure. The copper processing industry is...significant processes in the copper industry, including the transportation network connecting the processing elements, have been formatted for use in
Programming Environments Based on Structured Editors: The MENTOR Experience,
1980-07-01
ambitious plan has been actually implemented in MENTOR-PASCAL. There are mostly two reasons for this, which are actually complementary aspects of...languages. As might be expected, these design criteria are closely related to those based on semantic considerations. We have good hope that the...d) it has reasonably good user interaction facilities: there are various debugging aids such as a trace package, an interrupt facility, and the user
Generating a 2D Representation of a Complex Data Structure
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
A computer program, designed to assist in the development and debugging of other software, generates a two-dimensional (2D) representation of a possibly complex n-dimensional (where n is an integer >2) data structure or abstract rank-n object in that other software. The nature of the 2D representation is such that it can be displayed on a non-graphical output device and distributed by non-graphical means.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holcomb, F.; Kroes, J.; Jessen, T.
1973-10-18
EZQUERY is a generalized information retrieval and reporting system developed by the Data Processing Services Department to provide a method of accessing and displaying information from common types of data-base files. It produces simple reports while eliminating the costs and delays associated with coding and debugging special-purpose programs. It was designed with the user in mind, and may be used by programmers and nonprogrammers alike to access data-base files and obtain reports in a reasonably brief period of time. (auth)
Recent advances in automatic alignment system for the National Ignition Facility
NASA Astrophysics Data System (ADS)
Wilhelmsen, Karl; Awwal, Abdul A. S.; Kalantar, Dan; Leach, Richard; Lowe-Webb, Roger; McGuigan, David; Miller Kamm, Vicki
2011-03-01
The automatic alignment system for the National Ignition Facility (NIF) is a large-scale parallel system that directs all 192 laser beams along the 300-m optical path to a 50-micron focus at the target chamber in less than 50 minutes. The system automatically commands 9,000 stepping motors to adjust mirrors and other optics based upon images acquired from high-resolution digital cameras viewing the beams at various locations. Forty-five control loops per beamline request image processing services running on a Linux cluster to analyze these images of the beams and references, and automatically steer the beams toward the target. This paper discusses upgrades to the NIF automatic alignment system to handle new alignment needs and evolving requirements related to the various types of experiments performed. As NIF becomes a continuously operated system and more experiments are performed, performance monitoring is increasingly important for maintenance and commissioning work. Data collected during operations are analyzed to tune the laser and to target maintenance work. Handling evolving alignment and maintenance needs is expected over the planned 30-year operational life of NIF.
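Each alignment loop reduces a camera image to a beam or reference position before commanding the motors. A generic centroiding step of that kind can be sketched as follows; this is an illustrative sketch, not NIF's actual image-processing code, and the threshold fraction is an assumption:

    import numpy as np
    from scipy import ndimage

    def beam_centroid(image, threshold_frac=0.2):
        """Locate a beam spot as the intensity-weighted centroid of pixels
        brighter than a fraction of the peak, after crude background removal."""
        img = image.astype(float) - np.median(image)    # subtract a flat background estimate
        img[img < threshold_frac * img.max()] = 0.0     # suppress dim pixels and noise
        cy, cx = ndimage.center_of_mass(img)            # centroid in (row, column) order
        return cx, cy                                   # (x, y) pixel coordinates

The offset between such a measured position and its reference is what a control loop converts into stepping-motor moves.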
12 CFR 19.244 - Automatic removal, suspension, and debarment.
Code of Federal Regulations, 2014 CFR
2014-01-01
... OF PRACTICE AND PROCEDURE Removal, Suspension, and Debarment of Accountants From Performing Audit Services § 19.244 Automatic removal, suspension, and debarment. (a) An independent public accountant or accounting firm may not perform audit services for insured national banks if the accountant or firm: (1) Is...
12 CFR 19.244 - Automatic removal, suspension, and debarment.
Code of Federal Regulations, 2011 CFR
2011-01-01
... OF PRACTICE AND PROCEDURE Removal, Suspension, and Debarment of Accountants From Performing Audit Services § 19.244 Automatic removal, suspension, and debarment. (a) An independent public accountant or accounting firm may not perform audit services for insured national banks if the accountant or firm: (1) Is...
12 CFR 19.244 - Automatic removal, suspension, and debarment.
Code of Federal Regulations, 2012 CFR
2012-01-01
... OF PRACTICE AND PROCEDURE Removal, Suspension, and Debarment of Accountants From Performing Audit Services § 19.244 Automatic removal, suspension, and debarment. (a) An independent public accountant or accounting firm may not perform audit services for insured national banks if the accountant or firm: (1) Is...
12 CFR 19.244 - Automatic removal, suspension, and debarment.
Code of Federal Regulations, 2013 CFR
2013-01-01
... OF PRACTICE AND PROCEDURE Removal, Suspension, and Debarment of Accountants From Performing Audit Services § 19.244 Automatic removal, suspension, and debarment. (a) An independent public accountant or accounting firm may not perform audit services for insured national banks if the accountant or firm: (1) Is...
10 CFR 431.133 - Materials incorporated by reference.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AND INDUSTRIAL EQUIPMENT Automatic Commercial Ice Makers Test Procedures § 431.133 Materials..., (“AHRI 810”), Performance Rating of Automatic Commercial Ice-Makers, March 2011; IBR approved for §§ 431... Automatic Ice Makers, (including Errata Sheets issued April 8, 2010 and April 21, 2010), approved January 28...
10 CFR 431.133 - Materials incorporated by reference.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AND INDUSTRIAL EQUIPMENT Automatic Commercial Ice Makers Test Procedures § 431.133 Materials..., (“AHRI 810”), Performance Rating of Automatic Commercial Ice-Makers, March 2011; IBR approved for §§ 431... Automatic Ice Makers, (including Errata Sheets issued April 8, 2010 and April 21, 2010), approved January 28...
Enhancing Automaticity through Task-Based Language Learning
ERIC Educational Resources Information Center
De Ridder, Isabelle; Vangehuchten, Lieve; Gomez, Marta Sesena
2007-01-01
In general terms automaticity could be defined as the subconscious condition wherein "we perform a complex series of tasks very quickly and efficiently, without having to think about the various components and subcomponents of action involved" (DeKeyser 2001: 125). For language learning, Segalowitz (2003) characterised automaticity as a…
Shaping electromagnetic waves using software-automatically-designed metasurfaces.
Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie
2017-06-15
We present a fully digital procedure for designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic and controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of macro coding unit is formed by a discretely random arrangement of micro coding units. By combining an optimization algorithm with commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess a constant phase difference for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units in a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface that generates the required four-beam radiation in specific directions. Two complicated functional metasurfaces with circularly and elliptically shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing the excellent performance of the automatic software designs. The proposed method provides a smart tool to realize various functional devices and systems automatically.
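The phase-coding idea can be illustrated with a few lines of numpy: a matrix of 0/1 bits maps to reflection phases of 0/pi, and the far-field pattern along one cut is the corresponding array factor. This is a rough sketch under simplifying assumptions (isotropic elements, no mutual coupling, made-up frequency and cell size), not the commercial-software optimization described above:

    import numpy as np

    def array_factor_phi0(bits, freq_hz=10e9, cell_m=0.015):
        """Array factor of a 1-bit coding aperture in the phi = 0 plane.
        Each macro coding unit reflects with phase 0 (bit 0) or pi (bit 1)."""
        theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
        k = 2 * np.pi * freq_hz / 3e8                    # free-space wavenumber
        row_sum = np.exp(1j * np.pi * bits).sum(axis=1)  # units sharing an x position add directly at phi = 0
        m = np.arange(bits.shape[0])
        geom = np.exp(1j * k * cell_m * np.outer(m, np.sin(theta)))  # geometric phase per x position
        return theta, np.abs(row_sum @ geom)             # |AF(theta)| along the cut

    theta, af = array_factor_phi0(np.random.randint(0, 2, (16, 16)))  # randomly coded 16 x 16 aperture

Changing the coding sequence (for example, alternating stripes versus a chessboard) redirects or splits the reflected beams, which is the degree of freedom the automatic design procedure optimizes.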
Handwriting Automaticity: The Search for Performance Thresholds
ERIC Educational Resources Information Center
Medwell, Jane; Wray, David
2014-01-01
Evidence is accumulating that handwriting has an important role in written composition. In particular, handwriting automaticity appears to relate to success in composition. This relationship has been little explored in British contexts and we currently have little idea of what threshold performance levels might be. In this paper, we report on two…
12 CFR 263.403 - Automatic removal, suspension, and debarment.
Code of Federal Regulations, 2010 CFR
2010-01-01
... independent public accountant or accounting firm may not perform audit services for banking organizations if... permission to such accountant or firm to perform audit services for banking organizations. The request shall...
Automaticity and Attentional Processes in Aging.
ERIC Educational Resources Information Center
Madden, David J.; Mitchell, David B.
In recent research, two qualitatively different classes of mental operations have been identified. The performance of one type of cognitive task requires attention, in the sense of mental effort, for its execution, while the second type can be performed automatically, independent of attentional control. Further research has shown that automatic…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-11
...-29305; Amdt. No. 91-314] RIN 2120-AI92 Automatic Dependent Surveillance-Broadcast (ADS-B) Out... Surveillance- Broadcast (ADS-B) Out Performance Requirements To Support Air Traffic Control (ATC) Service..., Surveillance and Broadcast Services, AJE-6, Air Traffic Organization, Federal Aviation Administration, 800...
12 CFR 263.403 - Automatic removal, suspension, and debarment.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Accountants From Performing Audit Services § 263.403 Automatic removal, suspension, and debarment. (a) An independent public accountant or accounting firm may not perform audit services for banking organizations if the accountant or firm: (1) Is subject to a final order of removal, suspension, or debarment (other...
12 CFR 263.403 - Automatic removal, suspension, and debarment.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Accountants From Performing Audit Services § 263.403 Automatic removal, suspension, and debarment. (a) An independent public accountant or accounting firm may not perform audit services for banking organizations if the accountant or firm: (1) Is subject to a final order of removal, suspension, or debarment (other...
12 CFR 263.403 - Automatic removal, suspension, and debarment.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Accountants From Performing Audit Services § 263.403 Automatic removal, suspension, and debarment. (a) An independent public accountant or accounting firm may not perform audit services for banking organizations if the accountant or firm: (1) Is subject to a final order of removal, suspension, or debarment (other...
Scholtz, Jan-Erik; Wichmann, Julian L; Kaup, Moritz; Fischer, Sebastian; Kerl, J Matthias; Lehnert, Thomas; Vogl, Thomas J; Bauer, Ralf W
2015-03-01
To evaluate software for automatic segmentation, labeling and reformation of anatomically aligned axial images of the thoracolumbar spine on CT in terms of accuracy, potential for time savings and workflow improvement. 77 patients (28 women, 49 men, mean age 65.3±14.4 years) with known or suspected spinal disorders (degenerative spine disease n=32; disc herniation n=36; traumatic vertebral fractures n=9) underwent 64-slice MDCT with thin-slab reconstruction. The time for automatic labeling of the thoracolumbar spine and reconstruction of double-angulated axial images of the pathological vertebrae was compared with manually performed reconstruction of anatomically aligned axial images. Reformatted images from both reconstruction methods were assessed by two observers regarding the accuracy of symmetric depiction of anatomical structures. In 33 cases double-angulated axial images were created for 1 vertebra, in 28 cases for 2 vertebrae and in 16 cases for 3 vertebrae. Correct automatic labeling was achieved in 72 of 77 patients (93.5%). Errors could be manually corrected in 4 cases. Automatic labeling required 1 min on average. In cases where anatomically aligned axial images of 1 vertebra were created, reconstructions made by hand were significantly faster (p<0.05). Automatic reconstruction was time-saving in cases of 2 or more vertebrae (p<0.05). Both reconstruction methods showed good image quality with excellent inter-observer agreement. The evaluated software for automatic labeling and anatomically aligned, double-angulated axial image reconstruction of the thoracolumbar spine on CT is time-saving when reconstructions of 2 or more vertebrae are performed. Checking the results of automatic labeling is necessary to prevent labeling errors. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Performance Engineering Research Institute SciDAC-2 Enabling Technologies Institute Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Robert
2013-04-20
Enhancing the performance of SciDAC applications on petascale systems had high priority within DOE SC at the start of the second phase of the SciDAC program, SciDAC-2, as it continues to do so today. Achieving expected levels of performance on high-end computing (HEC) systems is growing ever more challenging due to enormous scale, increasing architectural complexity, and increasing application complexity. To address these challenges, the University of Southern California's Information Sciences Institute organized the Performance Engineering Research Institute (PERI). PERI implemented a unified, tripartite research plan encompassing: (1) performance modeling and prediction; (2) automatic performance tuning; and (3) performance engineering of high profile applications. Within PERI, USC's primary research activity was automatic tuning (autotuning) of scientific software. This activity was spurred by the strong user preference for automatic tools and was based on previous successful activities such as ATLAS, which automatically tuned components of the LAPACK linear algebra library, and other recent work on autotuning domain-specific libraries. Our other major component was application engagement, to which we devoted approximately 30% of our effort to work directly with SciDAC-2 applications. This report is a summary of the overall results of the USC PERI effort.
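Empirical autotuning of the ATLAS kind amounts to generating candidate code variants, timing them on the target machine, and keeping the fastest. A toy sketch of that search loop follows; the blocked kernel and the candidate block sizes are illustrative only, not PERI's or ATLAS's actual code generators:

    import time
    import numpy as np

    def blocked_matmul(A, B, bs):
        """Naive blocked matrix multiply; the block size bs is the tuning parameter."""
        n = A.shape[0]
        C = np.zeros((n, n))
        for i in range(0, n, bs):
            for j in range(0, n, bs):
                for k in range(0, n, bs):
                    C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
        return C

    def autotune(n=512, candidates=(16, 32, 64, 128, 256)):
        """Time each candidate block size on this machine and return the fastest."""
        A, B = np.random.rand(n, n), np.random.rand(n, n)
        timings = {}
        for bs in candidates:
            t0 = time.perf_counter()
            blocked_matmul(A, B, bs)
            timings[bs] = time.perf_counter() - t0
        return min(timings, key=timings.get), timings

A production autotuner searches a much larger variant space (unrolling, vectorization, data layout) and caches the winning configuration per platform.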
Performance of automatic scanning microscope for nuclear emulsion experiments
NASA Astrophysics Data System (ADS)
Güler, A. Murat; Altınok, Özgür
2015-12-01
The impressive improvements in scanning technology and methods allow nuclear emulsion to be used as a target in recent large experiments. We report the performance of an automatic scanning microscope for nuclear emulsion experiments. After successful calibration and alignment of the system, we have reached 99% tracking efficiency for minimum ionizing tracks that penetrate through the emulsion films. The automatic scanning system has been used successfully for the scanning of emulsion films in the OPERA experiment and is planned to be used for the next generation of nuclear emulsion experiments.
Performance of automatic scanning microscope for nuclear emulsion experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Güler, A. Murat, E-mail: mguler@newton.physics.metu.edu.tr; Altınok, Özgür; Tufts University, Medford, MA 02155
The impressive improvements in scanning technology and methods allow nuclear emulsion to be used as a target in recent large experiments. We report the performance of an automatic scanning microscope for nuclear emulsion experiments. After successful calibration and alignment of the system, we have reached 99% tracking efficiency for minimum ionizing tracks that penetrate through the emulsion films. The automatic scanning system has been used successfully for the scanning of emulsion films in the OPERA experiment and is planned to be used for the next generation of nuclear emulsion experiments.
Small passenger car transmission test; Ford C4 transmission
NASA Technical Reports Server (NTRS)
Bujold, M. P.
1980-01-01
A 1979 Ford C4 automatic transmission was tested per a passenger car automatic transmission test code (SAE J651b) which required drive performance, coast performance, and no load test conditions. Under these test conditions, the transmission attained maximum efficiencies in the mid-eighty percent range for both drive performance tests and coast performance tests. The major results of this test (torque, speed, and efficiency curves) are presented. Graphs map the complete performance characteristics for the Ford C4 transmission.
IPACS Electronics: Comments on the Original Design and Current Efforts at Langley Research Center
NASA Technical Reports Server (NTRS)
Gowdey, J. C.
1983-01-01
The development of the integrated power and attitude control system (IPACS) is described. The power bridge was fabricated, and all major parts are in hand. The bridge was tested with a 1/4 HP motor for another program. The PWM, control logic, and upper bridge driver power supply have been breadboarded and debugged prior to the start of testing on a passive load. The Hall sensor circuit for detecting rotor position is in design.
Causality-Preserving Timestamps in Distributed Programs
1993-06-01
Keywords: monitoring, debugging, tachyon, causality. Abstract: A tachyon is an improperly ordered event in a distributed program. Tachyons are most often...that tachyons do in fact occur commonly in distributed programs on our Ethernet at Carnegie Mellon University, and we discuss some ways of...before it is sent) is called a tachyon. Clearly it is very disconcerting to try to debug a parallel program that contains tachyons. Of course, in "real
Color graphics, interactive processing, and the supercomputer
NASA Technical Reports Server (NTRS)
Smith-Taylor, Rudeen
1987-01-01
The development of a common graphics environment for the NASA Langley Research Center user community and the integration of a supercomputer into this environment is examined. The initial computer hardware, the software graphics packages, and their configurations are described. The addition of improved computer graphics capability to the supercomputer, and the utilization of the graphic software and hardware are discussed. Consideration is given to the interactive processing system which supports the computer in an interactive debugging, processing, and graphics environment.
NASA Technical Reports Server (NTRS)
Butler, C.; Kindle, E. C.
1984-01-01
The capabilities of the DIAL data acquisition system (DAS) for the remote measurement of atmospheric trace gas concentrations from ground and aircraft platforms were extended through the purchase and integration of other hardware and the implementation of improved software. An operational manual for the current system is presented. Hardware and peripheral device registers are outlined only as an aid in debugging any DAS problems which may arise.
2009-11-01
interest of scientific and technical information exchange. This work is sponsored by the U.S. Department of Defense. The Software Engineering Institute is a...an interesting continuum between how many different requirements a program must satisfy: the more complex and diverse the requirements, the more... Gender differences in approaches to end-user software development have also been reported in debugging feature usage [1] and in end-user web programming
Parallel-Processing Test Bed For Simulation Software
NASA Technical Reports Server (NTRS)
Blech, Richard; Cole, Gary; Townsend, Scott
1996-01-01
Second-generation Hypercluster computing system is multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).
Volume Sensor Canadian Demonstrator Prototype User’s Guide
2011-03-23
The “VSCS” checkbox controls whether or not all network communications traffic is logged locally for debugging purposes. All of the shown settings...given in the VSCS.Bridge application. On the “Clusters” tab, as shown in Figure 10-4, the information used to form the VSCS string ID (SID) is shown...for VSCS communication that is specified on the “Destinations” tab. Changes are committed by pressing the [+] button. On the “Destinations” tab, as
The remote controlling technique based on the serial port for SR-620 universal counter
NASA Astrophysics Data System (ADS)
Su, Jian-Feng; Chen, Shu-Fang; Li, Xiao-Hui; Wu, Hai-Tao; Bian, Yu-Jing
2004-12-01
The function of SR-620 universal counter and the remote work mode are introduced, and the remote controlling technique for the counter is analysed. A method to realize the remote controlling via the serial port for the counter is demonstrated, in which an ActiveX control is used. Besides, some points for attention in debugging are discussed based on the experience, and a case of program running for measuring time-delay is presented.
United States Air Force College Science and Engineering Program. Volume 1
1988-12-01
with debugging and testing Potfit and AtmBis and for explaining the chemical concepts necessary to understand these two programs. Dr. Phil Christiansen ...work interesting, and in general, making the summer an extremely informative experience. Mr. Russ Leighton gave me invaluable assistance in programming...help and guidance in all phases of my work. My gratitude also extends to Russ Leighton for his technical advice; to Les Tepe for his support; to my
Quantification of regional fat volume in rat MRI
NASA Astrophysics Data System (ADS)
Sacha, Jaroslaw P.; Cockman, Michael D.; Dufresne, Thomas E.; Trokhan, Darren
2003-05-01
Multiple initiatives in the pharmaceutical and beauty care industries are directed at identifying therapies for weight management. Body composition measurements are critical for such initiatives. Imaging technologies that can be used to measure body composition noninvasively include DXA (dual energy x-ray absorptiometry) and MRI (magnetic resonance imaging). Unlike other approaches, MRI provides the ability to perform localized measurements of fat distribution. Several factors complicate the automatic delineation of fat regions and quantification of fat volumes. These include motion artifacts, field non-uniformity, brightness and contrast variations, chemical shift misregistration, and ambiguity in delineating anatomical structures. We have developed an approach to deal practically with those challenges. The approach is implemented in a package, the Fat Volume Tool, for automatic detection of fat tissue in MR images of the rat abdomen, including automatic discrimination between abdominal and subcutaneous regions. We suppress motion artifacts using masking based on detection of implicit landmarks in the images. Adaptive object extraction is used to compensate for intensity variations. This approach enables us to perform fat tissue detection and quantification in a fully automated manner. The package can also operate in manual mode, which can be used for verification of the automatic analysis or for performing supervised segmentation. In supervised segmentation, the operator has the ability to interact with the automatic segmentation procedures to touch-up or completely overwrite intermediate segmentation steps. The operator's interventions steer the automatic segmentation steps that follow. This improves the efficiency and quality of the final segmentation. Semi-automatic segmentation tools (interactive region growing, live-wire, etc.) improve both the accuracy and throughput of the operator when working in manual mode. The quality of automatic segmentation has been evaluated by comparing the results of fully automated analysis to manual analysis of the same images. The comparison shows a high degree of correlation that validates the quality of the automatic segmentation approach.
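The adaptive thresholding and object-extraction steps described above can be caricatured in a few lines of Python; this is a generic sketch, not the Fat Volume Tool itself, and the threshold fraction and minimum component size are invented for illustration:

    import numpy as np
    from scipy import ndimage

    def fat_mask(slice_img, thresh_frac=0.6, min_voxels=50):
        """Segment bright (fat-like) tissue in one MR slice by thresholding relative
        to the slice maximum and discarding small connected components."""
        mask = slice_img > thresh_frac * slice_img.max()
        labels, n = ndimage.label(mask)                        # connected components
        sizes = ndimage.sum(mask, labels, range(1, n + 1))     # voxel count per component
        keep_labels = 1 + np.flatnonzero(sizes >= min_voxels)  # labels of large-enough components
        return np.isin(labels, keep_labels)

Summing the kept voxels over all slices and multiplying by the voxel volume gives a regional fat volume; separating abdominal from subcutaneous fat would additionally require the landmark-based masking mentioned above.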
Investigating the Relationship between Stable Personality Characteristics and Automatic Imitation
Butler, Emily E.; Ward, Robert; Ramsey, Richard
2015-01-01
Automatic imitation is a cornerstone of nonverbal communication that fosters rapport between interaction partners. Recent research has suggested that stable dimensions of personality are antecedents to automatic imitation, but the empirical evidence linking imitation with personality traits is restricted to a few studies with modest sample sizes. Additionally, atypical imitation has been documented in autism spectrum disorders and schizophrenia, but the mechanisms underpinning these behavioural profiles remain unclear. Using a larger sample than prior studies (N=243), the current study tested whether performance on a computer-based automatic imitation task could be predicted by personality traits associated with social behaviour (extraversion and agreeableness) and with disorders of social cognition (autistic-like and schizotypal traits). Further personality traits (narcissism and empathy) were assessed in a subsample of participants (N=57). Multiple regression analyses showed that personality measures did not predict automatic imitation. In addition, using a similar analytical approach to prior studies, no differences in imitation performance emerged when only the highest and lowest 20 participants on each trait variable were compared. These data weaken support for the view that stable personality traits are antecedents to automatic imitation and that neural mechanisms thought to support automatic imitation, such as the mirror neuron system, are dysfunctional in autism spectrum disorders or schizophrenia. In sum, the impact that personality variables have on automatic imitation is less universal than initial reports suggest. PMID:26079137
Investigating the Relationship between Stable Personality Characteristics and Automatic Imitation.
Butler, Emily E; Ward, Robert; Ramsey, Richard
2015-01-01
Automatic imitation is a cornerstone of nonverbal communication that fosters rapport between interaction partners. Recent research has suggested that stable dimensions of personality are antecedents to automatic imitation, but the empirical evidence linking imitation with personality traits is restricted to a few studies with modest sample sizes. Additionally, atypical imitation has been documented in autism spectrum disorders and schizophrenia, but the mechanisms underpinning these behavioural profiles remain unclear. Using a larger sample than prior studies (N=243), the current study tested whether performance on a computer-based automatic imitation task could be predicted by personality traits associated with social behaviour (extraversion and agreeableness) and with disorders of social cognition (autistic-like and schizotypal traits). Further personality traits (narcissism and empathy) were assessed in a subsample of participants (N=57). Multiple regression analyses showed that personality measures did not predict automatic imitation. In addition, using a similar analytical approach to prior studies, no differences in imitation performance emerged when only the highest and lowest 20 participants on each trait variable were compared. These data weaken support for the view that stable personality traits are antecedents to automatic imitation and that neural mechanisms thought to support automatic imitation, such as the mirror neuron system, are dysfunctional in autism spectrum disorders or schizophrenia. In sum, the impact that personality variables have on automatic imitation is less universal than initial reports suggest.
To do it or to let an automatic tool do it? The priority of control over effort.
Osiurak, François; Wagner, Clara; Djerbi, Sara; Navarro, Jordan
2013-01-01
The aim of the present study is to provide experimental data relevant to the question of what leads humans to use automatic tools. Two answers can be offered. The first is that humans strive to minimize physical and/or cognitive effort (the principle of least effort). The second is that humans tend to keep their perceived control over the environment (the principle of more control). These two factors certainly play a role, but the question raised here is what people give priority to in situations wherein both manual and automatic actions take the same time: minimizing effort or keeping perceived control? To answer that question, we designed four experiments in which participants were confronted with a recurring choice between performing a task manually (physical effort) or in a semi-automatic way (cognitive effort) and using an automatic tool that completes the task for them (no effort). In the latter condition, participants were required to follow the progression of the automatic tool step by step. Our results showed that participants favored the manual or semi-automatic condition over the automatic condition. However, when they were offered the opportunity to perform recreational tasks in parallel, the preference for the manual condition disappeared. The findings support the idea that people give priority to keeping control over minimizing effort.
Stewart, Brandon D; Payne, B Keith
2008-10-01
The evidence for whether intentional control strategies can reduce automatic stereotyping is mixed. Therefore, the authors tested the utility of implementation intentions--specific plans linking a behavioral opportunity to a specific response--in reducing automatic bias. In three experiments, automatic stereotyping was reduced when participants made an intention to think specific counterstereotypical thoughts whenever they encountered a Black individual. The authors used two implicit tasks and process dissociation analysis, which allowed them to separate contributions of automatic and controlled thinking to task performance. Of importance, the reduction in stereotyping was driven by a change in automatic stereotyping and not controlled thinking. This benefit was acquired with little practice and generalized to novel faces. Thus, implementation intentions may be an effective and efficient means for controlling automatic aspects of thought.
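Process dissociation analysis separates the automatic (A) and controlled (C) contributions to performance from accuracy on congruent and incongruent trials. A small sketch of the standard estimating equations, stated generically rather than as the authors' actual analysis code:

    def process_dissociation(p_correct_congruent, p_error_incongruent):
        """Standard process-dissociation estimates:
        P(correct | congruent)   = C + A * (1 - C)
        P(error   | incongruent) =     A * (1 - C)
        so C is the difference of the two rates and A is the incongruent
        error rate rescaled by the probability that control fails."""
        C = p_correct_congruent - p_error_incongruent
        A = p_error_incongruent / (1.0 - C) if C < 1.0 else float("nan")
        return A, C

    # e.g. process_dissociation(0.90, 0.20) -> A ~ 0.67, C = 0.70

A manipulation that changes only the automatic component, as reported above for implementation intentions, shows up as a shift in A while C stays constant.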
The Influence of Inattention on Rapid Automatized Naming and Reading Skills
ERIC Educational Resources Information Center
Pham, Andy V.
2010-01-01
The purpose of this study is to determine how behavioral symptoms of inattention predict rapid automatized naming (RAN) performance and reading skills in typically developing children. Participants included 104 third- and fourth-grade children from different elementary schools in mid-Michigan. RAN performance was assessed using the four Rapid…
Cognitive tasks promote automatization of postural control in young and older adults.
Potvin-Desrochers, Alexandra; Richer, Natalie; Lajoie, Yves
2017-09-01
Researchers examining the effects of performing a concurrent cognitive task on postural control in young and older adults using traditional center-of-pressure measures and complexity measures have found discordant results. Experiments showing improvements in stability have suggested the use of strategies such as automatization of postural control or a stiffening strategy. This experiment aimed to confirm, in healthy young and older adults, that performing a cognitive task while standing leads to improvements that are due to automaticity of sway, by using sample entropy. Twenty-one young adults and twenty-five older adults were asked to stand on a force platform while performing a cognitive task. There were four cognitive tasks: simple reaction time, go/no-go reaction time, equation solving, and detecting the occurrence of a digit in a number sequence. Results demonstrated decreased sway area and variability as well as increased sample entropy for both groups when performing a cognitive task. These results suggest that performing a concurrent cognitive task promotes the adoption of automatic postural control in young and older adults, as evidenced by increased postural stability and postural sway complexity. Copyright © 2017 Elsevier B.V. All rights reserved.
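Sample entropy, the complexity measure used here, is the negative log of the conditional probability that center-of-pressure patterns matching for m points continue to match for m+1 points within a tolerance r. A compact, unoptimized sketch; the parameter choices are common defaults, not necessarily those of the study:

    import numpy as np

    def sample_entropy(x, m=2, r_frac=0.2):
        """Sample entropy of a 1D signal: -log(A/B), where B counts template matches
        of length m and A counts matches of length m+1 (Chebyshev distance, tolerance r)."""
        x = np.asarray(x, dtype=float)
        r = r_frac * x.std()
        def match_count(mm):
            templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
            return (np.sum(dist <= r) - len(templates)) / 2.0   # exclude self-matches, count pairs once
        B, A = match_count(m), match_count(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

Higher values indicate less regular, more complex sway, which is the signature interpreted above as more automatic postural control.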
A procedure for automating CFD simulations of an inlet-bleed problem
NASA Technical Reports Server (NTRS)
Chyu, Wei J.; Rimlinger, Mark J.; Shih, Tom I.-P.
1995-01-01
A procedure was developed to improve the turn-around time for computational fluid dynamics (CFD) simulations of an inlet-bleed problem involving oblique shock-wave/boundary-layer interactions on a flat plate with bleed into a plenum through one or more circular holes. This procedure is embodied in a preprocessor called AUTOMAT. With AUTOMAT, once data for the geometry and flow conditions have been specified (either interactively or via a namelist), it will automatically generate all input files needed to perform a three-dimensional Navier-Stokes simulation of the prescribed inlet-bleed problem by using the PEGASUS and OVERFLOW codes. The input files automatically generated by AUTOMAT include those for the grid system and those for the initial and boundary conditions. The grid systems automatically generated by AUTOMAT are multi-block structured grids of the overlapping type. Results obtained by using AUTOMAT are presented to illustrate its capability.
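The kind of automation AUTOMAT performs, turning a handful of geometry and flow parameters into ready-to-run solver input files, can be sketched generically. The namelist group and variable names below are invented for illustration and are not the actual PEGASUS/OVERFLOW inputs:

    def write_bleed_namelist(path, mach, reynolds, hole_diameter_m, plenum_pressure_ratio):
        """Write a toy Fortran-style namelist describing an inlet-bleed case
        (all group and variable names are hypothetical)."""
        lines = [
            " $FLOWINP",
            f"   FSMACH = {mach:.3f},",
            f"   REY    = {reynolds:.3e},",
            " $END",
            " $BLEED",
            f"   DHOLE  = {hole_diameter_m:.5f},",
            f"   PRATIO = {plenum_pressure_ratio:.3f},",
            " $END",
        ]
        with open(path, "w") as f:
            f.write("\n".join(lines) + "\n")

    write_bleed_namelist("bleed_case.inp", mach=2.46, reynolds=1.0e7,
                         hole_diameter_m=0.00635, plenum_pressure_ratio=0.7)

A real preprocessor would emit the overlapping grid system and the initial- and boundary-condition files in the same pass, which is precisely what AUTOMAT automates for the PEGASUS and OVERFLOW codes.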
Automatic Extraction of Metadata from Scientific Publications for CRIS Systems
ERIC Educational Resources Information Center
Kovacevic, Aleksandar; Ivanovic, Dragan; Milosavljevic, Branko; Konjovic, Zora; Surla, Dusan
2011-01-01
Purpose: The aim of this paper is to develop a system for automatic extraction of metadata from scientific papers in PDF format for the information system for monitoring the scientific research activity of the University of Novi Sad (CRIS UNS). Design/methodology/approach: The system is based on machine learning and performs automatic extraction…
ERIC Educational Resources Information Center
Young, Victoria; Mihailidis, Alex
2010-01-01
Despite their growing presence in home computer applications and various telephony services, commercial automatic speech recognition technologies are still not easily employed by everyone; especially individuals with speech disorders. In addition, relatively little research has been conducted on automatic speech recognition performance with older…
Design of a real-time tax-data monitoring intelligent card system
NASA Astrophysics Data System (ADS)
Gu, Yajun; Bi, Guotang; Chen, Liwei; Wang, Zhiyuan
2009-07-01
To address the low efficiency of information management at domestic oil stations, a real-time tax-data monitoring system has been developed that automatically accesses the tax data of oil pumping machines, providing real-time automatic data collection, display, and storage. The monitoring system uses contactless smart cards or the network to collect data directly; because the data cannot be modified manually, loopholes are closed and the level of automation in tax collection is improved. The system performs real-time collection and management of oil station information, detects problems promptly, and automates management of the entire process covering oil sales, accounting, and reporting. It also supports remote queries of an oil station's operating data. The system has broad application prospects and economic value.
Automated vehicle counting using image processing and machine learning
NASA Astrophysics Data System (ADS)
Meany, Sean; Eskew, Edward; Martinez-Castro, Rosana; Jang, Shinae
2017-04-01
Vehicle counting is used by the government to improve roadways and the flow of traffic, and by private businesses for purposes such as determining the value of locating a new store in an area. A vehicle count can be performed manually or automatically. Manual counting requires an individual to be on-site and tally the traffic electronically or by hand; however, this can lead to miscounts due to factors such as human error. A common form of automatic counting involves pneumatic tubes, but pneumatic tubes disrupt traffic during installation and removal, and can be damaged by passing vehicles. Vehicle counting can also be performed via the use of a camera at the count site recording video of the traffic, with counting being performed manually post-recording or using automatic algorithms. This paper presents a low-cost procedure to perform automatic vehicle counting using remote video cameras with an automatic counting algorithm. The procedure would utilize a Raspberry Pi micro-computer to detect when a car is in a lane and generate an accurate count of vehicle movements. The method utilized in this paper uses background subtraction to process the images and a machine learning algorithm to provide the count. This method avoids the fatigue issues encountered in manual video counting and prevents the disruption of roadways that occurs when installing pneumatic tubes.
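A minimal version of the background-subtraction stage can be written with OpenCV's MOG2 model. This is an illustrative sketch only: the lane region, thresholds, and the simple rising-edge counter are assumptions, and the paper's machine-learning counting step is not reproduced here:

    import cv2

    def count_lane_activations(video_path, lane_roi, area_thresh=1500):
        """Count rising edges of lane occupancy in a video using background subtraction.
        lane_roi = (x, y, w, h); a real counter would also track blobs between frames."""
        cap = cv2.VideoCapture(video_path)
        subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
        x, y, w, h = lane_roi
        events, active = 0, False
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            fg = subtractor.apply(frame)                # foreground mask for this frame
            lane = cv2.threshold(fg[y:y+h, x:x+w], 200, 255, cv2.THRESH_BINARY)[1]  # drop shadow pixels (127)
            occupied = cv2.countNonZero(lane) > area_thresh
            if occupied and not active:                 # a vehicle has just entered the lane
                events += 1
            active = occupied
        cap.release()
        return events

In the full procedure, per-frame occupancy features of this kind would be passed to the machine learning algorithm that produces the final count.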
Attention and reach-to-grasp movements in Parkinson's disease.
Lu, Cathy; Bharmal, Aamir; Kiss, Zelma H; Suchowersky, Oksana; Haffenden, Angela M
2010-08-01
The role of attention in grasping movements directed at common objects has not been examined in Parkinson's disease (PD), though these movements are critical to activities of daily living. Our primary objective was to determine whether patients with PD demonstrate automaticity in grasping movements directed toward common objects. Automaticity is assumed when tasks can be performed with little or no interference from concurrent tasks. Grasping performance in three patient groups (newly diagnosed, moderate, and advanced/surgically treated PD) on and off of their medication or deep brain stimulation was compared to performance in an age-matched control group. Automaticity was demonstrated by the absence of a decrement in grasping performance when attention was consumed by a concurrent spatial-visualization task. Only the control group and newly diagnosed PD group demonstrated automaticity in their grasping movements. The moderate and advanced PD groups did not demonstrate automaticity. Furthermore, the well-known effects of pharmacotherapy and surgical intervention on movement speed and muscle activation patterns did not appear to reduce the impact of attention-demanding tasks on grasping movements in those with moderate to advanced PD. By the moderate stage of PD, grasping is an attention-demanding process; this change is not ameliorated by dopaminergic or surgical treatments. These findings have important implications for activities of daily living, as devoting attention to the simplest of daily tasks would interfere with complex activities and potentially exacerbate fatigue.
Neural networks: Alternatives to conventional techniques for automatic docking
NASA Technical Reports Server (NTRS)
Vinz, Bradley L.
1994-01-01
Automatic docking of orbiting spacecraft is a crucial operation involving the identification of vehicle orientation as well as complex approach dynamics. The chaser spacecraft must be able to recognize the target spacecraft within a scene and achieve accurate closing maneuvers. In a video-based system, a target scene must be captured and transformed into a pattern of pixels. Successful recognition lies in the interpretation of this pattern. Due to their powerful pattern recognition capabilities, artificial neural networks offer a potential role in interpretation and automatic docking processes. Neural networks can reduce the computational time required by existing image processing and control software. In addition, neural networks are capable of recognizing and adapting to changes in their dynamic environment, enabling enhanced performance, redundancy, and fault tolerance. Most neural networks are robust to failure, capable of continued operation with a slight degradation in performance after minor failures. This paper discusses the particular automatic docking tasks neural networks can perform as viable alternatives to conventional techniques.
Searchfield, Grant D; Linford, Tania; Kobayashi, Kei; Crowhen, David; Latzel, Matthias
2018-03-01
To compare preference for and performance of manually selected programmes against an automatic sound classifier, the Phonak AutoSense OS. A single-blind repeated measures study. Participants were fitted with Phonak Virto V90 ITE aids; preferences for different listening programmes were compared across four different sound scenarios (speech in: quiet, noise, loud noise, and a car). Following a 4-week trial, preferences were reassessed, and each user's preferred programme was compared with the automatic classifier for sound quality and hearing in noise (HINT test) using a 12-loudspeaker array. Twenty-five participants with symmetrical moderate-severe sensorineural hearing loss. Participant preferences for manual programmes varied considerably between and within sessions across scenarios. A HINT Speech Reception Threshold (SRT) advantage was observed for the automatic classifier over participants' manual selections for speech in quiet, loud noise and car noise. Sound quality ratings were similar for both manual and automatic selections. The use of a sound classifier is a viable alternative to manual programme selection.
Training and subjective workload in a category search task
NASA Technical Reports Server (NTRS)
Vidulich, Michael A.; Pandit, Parimal
1986-01-01
This study examined automaticity as a means by which training influences mental workload. Two groups were trained in a category search task. One group received a training paradigm designed to promote the development of automaticity; the other group received a training paradigm designed to prohibit it. Resultant performance data showed the expected improvement as a result of the development of automaticity. Subjective workload assessments mirrored the performance results in most respects. The results supported the position that subjective mental workload assessments may be sensitive to the effect of training when it produces a lower level of cognitive load.
Performance Engineering Research Institute SciDAC-2 Enabling Technologies Institute Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Mary
2014-09-19
Enhancing the performance of SciDAC applications on petascale systems has high priority within DOE SC. As we look to the future, achieving expected levels of performance on high-end computing (HEC) systems is growing ever more challenging due to enormous scale, increasing architectural complexity, and increasing application complexity. To address these challenges, PERI has implemented a unified, tripartite research plan encompassing: (1) performance modeling and prediction; (2) automatic performance tuning; and (3) performance engineering of high profile applications. The PERI performance modeling and prediction activity is developing and refining performance models, significantly reducing the cost of collecting the data upon which the models are based, and increasing model fidelity, speed and generality. Our primary research activity is automatic tuning (autotuning) of scientific software. This activity is spurred by the strong user preference for automatic tools and is based on previous successful activities such as ATLAS, which has automatically tuned components of the LAPACK linear algebra library, and other recent work on autotuning domain-specific libraries. Our third major component is application engagement, to which we are devoting approximately 30% of our effort to work directly with SciDAC-2 applications. This last activity not only helps DOE scientists meet their near-term performance goals, but also helps keep PERI research focused on the real challenges facing DOE computational scientists as they enter the Petascale Era.
Automatic Assessment of Complex Task Performance in Games and Simulations. CRESST Report 775
ERIC Educational Resources Information Center
Iseli, Markus R.; Koenig, Alan D.; Lee, John J.; Wainess, Richard
2010-01-01
Assessment of complex task performance is crucial to evaluating personnel in critical job functions such as Navy damage control operations aboard ships. Games and simulations can be instrumental in this process, as they can present a broad range of complex scenarios without involving harm to people or property. However, "automatic"…
Automatic analysis and classification of surface electromyography.
Abou-Chadi, F E; Nashar, A; Saad, M
2001-01-01
In this paper, parametric modeling of surface electromyography (SEMG), which facilitates automatic SEMG feature extraction, is combined with artificial neural networks (ANN) to provide an integrated system for the automatic analysis and diagnosis of myopathic disorders. Three ANN paradigms were investigated: the multilayer backpropagation algorithm, the self-organizing feature map algorithm, and a probabilistic neural network model. The performance of the three classifiers was compared with that of the classical Fisher linear discriminant (FLD) classifier. The results show that the three ANN models give higher performance, with the percentage of correct classification reaching 90%. Poorer diagnostic performance was obtained from the FLD classifier. The system presented here indicates that surface EMG, when properly processed, can be used to provide the physician with a diagnostic assist device.
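A present-day analogue of this comparison (a multilayer perceptron versus a Fisher linear discriminant) is easy to set up with scikit-learn; the random feature matrix below merely stands in for the parametric features extracted from real SEMG records:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    # X: one row of parametric (e.g. AR-model) features per SEMG record, y: diagnostic label
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 8))                    # placeholder features
    y = rng.integers(0, 2, size=120)                 # placeholder labels (normal / myopathic)

    for name, clf in [("FLD", LinearDiscriminantAnalysis()),
                      ("MLP", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000))]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: mean cross-validated accuracy = {acc:.2f}")

With real, informative features the same loop reproduces the kind of linear-versus-neural comparison reported above.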
Programming methodology for a general purpose automation controller
NASA Technical Reports Server (NTRS)
Sturzenbecker, M. C.; Korein, J. U.; Taylor, R. H.
1987-01-01
The General Purpose Automation Controller is a multi-processor architecture for automation programming. A methodology has been developed whose aim is to simplify the task of programming distributed real-time systems for users in research or manufacturing. Programs are built by configuring function blocks (low-level computations) into processes using data flow principles. These processes are activated through the verb mechanism. Verbs are divided into two classes: those which support devices, such as robot joint servos, and those which perform actions on devices, such as motion control. This programming methodology was developed in order to achieve the following goals: (1) specifications for real-time programs which are to a high degree independent of hardware considerations such as processor, bus, and interconnect technology; (2) a component approach to software, so that software required to support new devices and technologies can be integrated by reconfiguring existing building blocks; (3) resistance to error and ease of debugging; and (4) a powerful command language interface.
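The configure-function-blocks-into-processes idea can be mimicked in a few lines: each function block is a small computation and a process is a data-flow chain of blocks. This is a conceptual sketch only, not the controller's actual runtime; the joint-servo blocks and gains are invented:

    from functools import reduce

    def compose(*blocks):
        """Wire function blocks into a process: each block's output feeds the next."""
        return lambda value: reduce(lambda acc, block: block(acc), blocks, value)

    # hypothetical low-level function blocks for a joint-servo process
    read_encoder   = lambda counts: counts * 0.001                 # encoder counts -> radians
    servo_error    = lambda angle: 1.57 - angle                    # desired setpoint minus measured angle
    command_torque = lambda err: max(-5.0, min(5.0, 20.0 * err))   # clamped proportional command

    joint_servo = compose(read_encoder, servo_error, command_torque)
    print(joint_servo(1500))                                       # run the process on one encoder sample

In the controller itself such a process would be activated through the verb mechanism and scheduled on one of the system's processors rather than called directly.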
Beam position reconstruction for the g2p experiment in Hall A at Jefferson lab
NASA Astrophysics Data System (ADS)
Zhu, Pengjia; Allada, Kalyan; Allison, Trent; Badman, Toby; Camsonne, Alexandre; Chen, Jian-ping; Cummings, Melissa; Gu, Chao; Huang, Min; Liu, Jie; Musson, John; Slifer, Karl; Sulkosky, Vincent; Ye, Yunxiu; Zhang, Jixie; Zielinski, Ryan
2016-02-01
Beam-line equipment was upgraded for experiment E08-027 (g2p) in Hall A at Jefferson Lab. Two beam position monitors (BPMs) were necessary to measure the beam position and angle at the target. A new BPM receiver was designed and built to handle the low beam currents (50-100 nA) used for this experiment. Two new super-harps were installed for calibrating the BPMs. In addition to the existing fast raster system, a slow raster system was installed. Before and during the experiment, these new devices were tested and debugged, and their performance was evaluated. In order to achieve the required accuracy (1-2 mm in position and 1-2 mrad in angle at the target location), the data from the BPMs and harps were carefully analyzed, and the beam position and angle were reconstructed event by event at the target location. The calculated beam position will be used in the data analysis to accurately determine the kinematics for each event.
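Before any harp calibration or event-by-event transport to the target, stripline BPM antenna signals are commonly converted to a position with a difference-over-sum relation. A generic sketch of that first step; the sensitivity constant and offset are placeholders, not the g2p calibration values:

    def bpm_position_mm(v_plus, v_minus, k_mm=18.0, offset_mm=0.0):
        """Difference-over-sum estimate of one beam coordinate from a pair of
        BPM antenna amplitudes; k_mm is the device-specific sensitivity and
        offset_mm absorbs the survey/harp calibration."""
        return k_mm * (v_plus - v_minus) / (v_plus + v_minus) + offset_mm

    # one coordinate per antenna pair: x from the horizontal pair, y from the vertical pair

Positions measured at the two BPMs can then be projected along the beamline to estimate the event-by-event position and angle at the target.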
Veta, Mitko; van Diest, Paul J.; Jiwa, Mehdi; Al-Janabi, Shaimaa; Pluim, Josien P. W.
2016-01-01
Background Tumor proliferation speed, most commonly assessed by counting of mitotic figures in histological slide preparations, is an important biomarker for breast cancer. Although mitosis counting is routinely performed by pathologists, it is a tedious and subjective task with poor reproducibility, particularly among non-experts. Inter- and intraobserver reproducibility of mitosis counting can be improved when a strict protocol is defined and followed. Previous studies have examined only the agreement in terms of the mitotic count or the mitotic activity score. Studies of the observer agreement at the level of individual objects, which can provide more insight into the procedure, have not been performed thus far. Methods The development of automatic mitosis detection methods has received considerable interest in recent years. Automatic image analysis is viewed as a solution for the problem of subjectivity of mitosis counting by pathologists. In this paper we describe the results from an interobserver agreement study between three human observers and an automatic method, and make two unique contributions. For the first time, we present an analysis of the object-level interobserver agreement on mitosis counting. Furthermore, we train an automatic mitosis detection method that is robust with respect to staining appearance variability and compare it with the performance of expert observers on an “external” dataset, i.e. on histopathology images that originate from pathology labs other than the pathology lab that provided the training data for the automatic method. Results The object-level interobserver study revealed that pathologists often do not agree on individual objects, even if this is not reflected in the mitotic count. The disagreement is larger for objects of smaller size, which suggests that adding a size constraint in the mitosis counting protocol can improve reproducibility. The automatic mitosis detection method can perform mitosis counting in an unbiased way, with substantial agreement with human experts. PMID:27529701
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Y; Huang, H; Su, T
Purpose: Texture-based quantification of image heterogeneity has been a popular topic for imaging studies in recent years. As previous studies mainly focus on oncological applications, we report our recent efforts to apply such techniques to cardiac perfusion imaging. A fully automated procedure has been developed to perform texture analysis for measuring image heterogeneity. Clinical data were used to evaluate the preliminary performance of such methods. Methods: Myocardial perfusion images of Thallium-201 scans were collected from 293 patients with suspected coronary artery disease. Each subject underwent a Tl-201 scan and a percutaneous coronary intervention (PCI) within three months. The PCI result was used as the gold standard for coronary ischemia, defined as more than 70% stenosis. Each Tl-201 scan was spatially normalized to an image template for fully automatic segmentation of the LV. The segmented voxel intensities were then carried into the texture analysis with our open-source software Chang Gung Image Texture Analysis toolbox (CGITA). To evaluate the clinical performance of the image heterogeneity for detecting coronary stenosis, receiver operating characteristic (ROC) analysis was used to compute the overall accuracy, sensitivity and specificity as well as the area under the curve (AUC). Those indices were compared to those obtained from the commercially available semi-automatic software QPS. Results: With the fully automatic procedure to quantify heterogeneity from Tl-201 scans, we were able to achieve good discrimination, with good accuracy (74%), sensitivity (73%), specificity (77%) and an AUC of 0.82. Such performance is similar to that obtained from the semi-automatic QPS software, which gives a sensitivity of 71% and specificity of 77%. Conclusion: Based on fully automatic procedures of data processing, our preliminary data indicate that the image heterogeneity of myocardial perfusion imaging can provide useful information for automatic determination of myocardial ischemia.
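A minimal sketch of the evaluation step described above: given per-patient heterogeneity indices and PCI-confirmed labels, compute the ROC AUC and a sensitivity/specificity pair at one operating point. The arrays are synthetic placeholders, and the Youden-index operating point is an assumption, not the paper's criterion.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=293)                     # 1 = ischemia per PCI (placeholder)
scores = labels * 0.8 + rng.normal(scale=0.5, size=293)   # texture heterogeneity index (placeholder)

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)                               # Youden's J operating point
print(f"AUC = {auc:.2f}, sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```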
Automatic Clustering Using FSDE-Forced Strategy Differential Evolution
NASA Astrophysics Data System (ADS)
Yasid, A.
2018-01-01
Clustering analysis is important in data mining for unsupervised data, because no adequate prior knowledge is available. One of the important tasks is defining the number of clusters without user involvement, which is known as automatic clustering. This study aims to acquire the cluster number automatically using forced strategy differential evolution (AC-FSDE). Two mutation parameters, namely a constant parameter and a variable parameter, are employed to boost differential evolution performance. Four well-known benchmark datasets were used to evaluate the algorithm. Moreover, the results are compared with other state-of-the-art automatic clustering methods. The experimental results show that AC-FSDE is better than, or competitive with, other existing automatic clustering algorithms.
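A schematic sketch of the underlying idea, not of the AC-FSDE algorithm itself: each candidate solution encodes activation flags plus centroids, the active centroids define the clustering, and an internal validity index is minimised. Here scipy's stock differential evolution and the Davies-Bouldin index are stand-ins for the paper's forced-strategy variant and its fitness function.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
K_MAX, DIM = 8, X.shape[1]
lo, hi = X.min(axis=0), X.max(axis=0)

def decode(vec):
    flags = vec[:K_MAX] > 0.5                       # which candidate centroids are active
    return vec[K_MAX:].reshape(K_MAX, DIM)[flags]

def fitness(vec):
    centers = decode(vec)
    if len(centers) < 2:
        return 1e6                                  # penalise degenerate solutions
    labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
    if len(np.unique(labels)) < 2:
        return 1e6
    return davies_bouldin_score(X, labels)          # lower is better

bounds = [(0, 1)] * K_MAX + [(l, h) for _ in range(K_MAX) for l, h in zip(lo, hi)]
result = differential_evolution(fitness, bounds, maxiter=50, seed=0, polish=False)
print("estimated number of clusters:", len(decode(result.x)))
```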
The Ruggedized STD Bus Microcomputer - A low cost computer suitable for Space Shuttle experiments
NASA Technical Reports Server (NTRS)
Budney, T. J.; Stone, R. W.
1982-01-01
Previous space flight computers have been costly in terms of both hardware and software. The Ruggedized STD Bus Microcomputer is based on the commercial Mostek/Pro-Log STD Bus. Ruggedized PC cards can be based on commercial cards from more than 60 manufacturers, reducing hardware cost and design time. Software costs are minimized by using standard 8-bit microprocessors and by debugging code using commercial versions of the ruggedized flight boards while the flight hardware is being fabricated.
Estimation in a discrete tail rate family of recapture sampling models
NASA Technical Reports Server (NTRS)
Gupta, Rajan; Lee, Larry D.
1990-01-01
In the context of recapture sampling design for debugging experiments the problem of estimating the error or hitting rate of the faults remaining in a system is considered. Moment estimators are derived for a family of models in which the rate parameters are assumed proportional to the tail probabilities of a discrete distribution on the positive integers. The estimators are shown to be asymptotically normal and fully efficient. Their fixed sample properties are compared, through simulation, with those of the conditional maximum likelihood estimators.
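Schematically, the modeling assumption described above can be written as follows; the notation is illustrative and not taken from the paper:

\lambda_i \;=\; \theta\, P(X \ge i) \;=\; \theta \sum_{j \ge i} p_j, \qquad i = 1, 2, \ldots

where \lambda_i is the rate parameter of the i-th fault class, \{p_j\} is a discrete distribution on the positive integers, and \theta > 0 is an unknown proportionality constant targeted by the moment estimators.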
VLSI (Very Large Scale Integrated Circuits) Design with the MacPitts Silicon Compiler.
1985-09-01
the background. If the algorithm is not fully debugged, then issue instead macpitts basename herald so MacPitts diagnostics and Liszt diagnostics both...command interpreter. Upon compilation, however, the following LISP compiler (Liszt) diagnostic results: Error: Non-number to minus nil, where the first...language used in the MacPitts source code. The more instructive solution is to write the Franz LISP code to decide if a jumper wire is needed, and if so, to
The Design of the Digital Multiplexer based on Power Carrier Communication on Sports Venues
NASA Astrophysics Data System (ADS)
Lu, Ming-jing; Liang, Li; Yu, Xiao-yan
In this paper, a dual-CPU, low-power, low-cost digital multiplexer is designed on the basis of a thorough study of power-line carrier communication. It satisfies the needs of electric power communication transmission systems, especially in sports venues. The hardware and software design principles of the digital multiplexer are elaborated in detail, simulation is carried out with a single-chip microcontroller simulator, and satisfactory results were achieved through debugging.
Comprehensive analysis of helicopters with bearingless rotors
NASA Technical Reports Server (NTRS)
Murthy, V. R.
1988-01-01
A modified Galerkin method is developed to analyze the dynamic problems of multiple-load-path bearingless rotor blades. The development and selection of functions closely parallel CAMRAD procedures, greatly facilitating the implementation of the method into the CAMRAD program. Software implementing the modified Galerkin method is developed to determine the free vibration characteristics of multiple-load-path rotor blades undergoing coupled flapwise bending, chordwise bending, twisting, and extensional motions. Results are currently being obtained as the software is debugged.
How to avoid the ten most frequent EMS pitfalls
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, W.
1982-04-19
It pays to do your homework before investing in an energy management system if you want to avoid the 10 most common pitfalls listed by users, consultants, and manufacturers as: oversimplification, improper maintenance, failure to involve operating personnel, inaccurate savings estimates, failure to include monitoring capability, incompetent or fraudulent firms, improper load control, not allowing for a debugging period, failure to include manual override, and software problems. The article describes how each of these pitfalls can lead to poor decisions and poor results. (DCK)
1994-01-20
Category 2 - Investigation/Debug Required; Table 3-1, Field Test Report Status/Corrective Action...in Table 3-1 in section 3.1. The Field Test Reports and SP/CRs are listed below for the two categories: Table 3.0-1, Category 1 - LADS PMO Direction...symbology, consisting of the laser code A - H plus the four digit data field, shall be displayed for 10 seconds, after which time only
Survey and Recommendations for the Use of Microcomputers in the Naval Audit Service.
1987-03-01
capital investment * Higher maintenance costs * Longer design time * Troublesome debugging during the start-up period * Serious compounding of downtime...traditional reviews have often failed to see the "total picture." This problem has been further compounded by the fact that conventional reviews are frequently...the auditor
NASA Technical Reports Server (NTRS)
Friend, J.
1971-01-01
A manual, designed both as an instructional manual for beginning coders and as a reference manual for the coding language INSTRUCT, is presented. The manual includes the major programs necessary to implement the teaching system and lists the limitations of the current implementation. A detailed description is given of how to code a lesson, which buttons to push, and which utility programs to use. Suggestions for debugging coded lessons are given, along with the error messages that may be received during assembly or while running a lesson.
A Process Elaboration Formalism for Writing and Analyzing Programs
1975-10-01
program is to be proved, a description of its (i) See [MANN 73] for a survey of these debugging tools, (ii) See [ELSPAS 72] for a complete review of this...by the instructions which might be found on a shampoo bottle: 1) Wet hair 2) Lather 3) Rinse 4) Repeat. Statement 4, the source of the problem,...for this simple algorithm is shown in Figure 52: SHAMPOO: WET-HAIR -> LATHER -> RINSE -> REPEAT
NASA Astrophysics Data System (ADS)
Irshad, Mehreen; Muhammad, Nazeer; Sharif, Muhammad; Yasmeen, Mussarat
2018-04-01
Conventionally, cardiac MR image analysis is done manually. Automatic examination can replace the monotonous task of analyzing massive amounts of data to assess the global and regional functions of the cardiac left ventricle (LV). This task is performed using MR images to calculate analytic cardiac parameters such as end-systolic volume, end-diastolic volume, ejection fraction, and myocardial mass. These analytic parameters depend upon genuine delineation of the epicardial, endocardial, papillary muscle, and trabeculation contours. In this paper, we propose an automatic segmentation method using the sum of absolute differences technique to localize the left ventricle. Blind morphological operations are proposed to automatically segment and detect the LV contours of the epicardium and endocardium. We test the method on the benchmark Sunnybrook dataset. Contours of the epicardium and endocardium are compared quantitatively to determine contour accuracy, and high matching values are observed. The overlap between the automatic examination and the ground truth analysis given by an expert is high, with an index value of 91.30%. The proposed method for automatic segmentation gives better performance relative to existing techniques in terms of accuracy.
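A minimal sketch of sum-of-absolute-differences (SAD) template matching, the kind of step that could localise an LV region in an MR slice. The image and template are synthetic, and the subsequent morphological contour extraction is omitted.

```python
import numpy as np

def sad_localize(image, template):
    """Return the top-left corner (row, col) minimising the SAD with the template."""
    H, W = image.shape
    h, w = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            sad = np.abs(image[r:r + h, c:c + w] - template).sum()
            if sad < best:
                best, best_pos = sad, (r, c)
    return best_pos

rng = np.random.default_rng(0)
image = rng.random((64, 64))
template = image[20:36, 28:44].copy()     # known patch, for demonstration only
print(sad_localize(image, template))      # -> (20, 28)
```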
NASA Astrophysics Data System (ADS)
Le Bras, Ronan; Kushida, Noriyuki; Mialle, Pierrick; Tomuta, Elena; Arora, Nimar
2017-04-01
The Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has been developing a Bayesian method and software to perform the key step of automatic association of seismological, hydroacoustic, and infrasound (SHI) parametric data. In our preliminary testing at the CTBTO, NET-VISA shows much better performance than the currently operating automatic association module, with the rate of automatic events matching analyst-reviewed events increased by 10%, meaning that the percentage of missed events is lowered by 40%. Initial tests involving analysts also showed that the new software will complete the automatic bulletins of the CTBTO by adding previously missed events. Because CTBTO products are widely distributed to its member States as well as throughout the seismological community, the introduction of a new technology must be carried out carefully, and the first step of operational integration is to use NET-VISA results within the interactive analysts' software so that the analysts can check the robustness of the Bayesian approach. We report on the latest results, both on progress in automatic processing and on the initial introduction of NET-VISA results into the analyst review process.
Nguyen, Thanh; Bui, Vy; Lam, Van; Raub, Christopher B; Chang, Lin-Ching; Nehmetallah, George
2017-06-26
We propose a fully automatic technique to obtain aberration free quantitative phase imaging in digital holographic microscopy (DHM) based on deep learning. The traditional DHM solves the phase aberration compensation problem by manually detecting the background for quantitative measurement. This would be a drawback in real-time implementation and for dynamic processes such as cell migration phenomena. A recent automatic aberration compensation approach using principal component analysis (PCA) in DHM avoids human intervention regardless of the cells' motion. However, it corrects spherical/elliptical aberration only and disregards the higher order aberrations. Traditional image segmentation techniques can be employed to spatially detect cell locations. Ideally, automatic image segmentation techniques make real time measurement possible. However, existing automatic unsupervised segmentation techniques have poor performance when applied to DHM phase images because of aberrations and speckle noise. In this paper, we propose a novel method that combines a supervised deep learning technique with a convolutional neural network (CNN) and Zernike polynomial fitting (ZPF). The deep learning CNN is implemented to perform automatic background region detection that allows for ZPF to compute the self-conjugated phase to compensate for most aberrations.
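A simplified sketch of the compensation step: given a background mask (in the paper produced by the CNN) and a phase map, fit a smooth surface to the background pixels and subtract it. A quadratic polynomial surface is used here as a stand-in for full Zernike polynomial fitting, and the data are synthetic.

```python
import numpy as np

def compensate(phase, background_mask):
    """Fit a low-order polynomial surface to background pixels and remove it."""
    H, W = phase.shape
    yy, xx = np.mgrid[0:H, 0:W]
    x, y = xx / W, yy / H                               # normalised coordinates
    # design matrix with terms 1, x, y, x^2, xy, y^2 (quadratic stand-in for Zernike basis)
    A = np.stack([np.ones_like(x), x, y, x**2, x * y, y**2], axis=-1)
    coeffs, *_ = np.linalg.lstsq(A[background_mask], phase[background_mask], rcond=None)
    return phase - A @ coeffs                           # aberration-compensated phase

rng = np.random.default_rng(0)
phase = rng.normal(scale=0.05, size=(128, 128))         # placeholder phase map
mask = np.ones((128, 128), dtype=bool)                  # placeholder background mask
flat = compensate(phase, mask)
print(flat.std() <= phase.std())
```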
A quality score for coronary artery tree extraction results
NASA Astrophysics Data System (ADS)
Cao, Qing; Broersen, Alexander; Kitslaar, Pieter H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke
2018-02-01
Coronary artery trees (CATs) are often extracted to aid the fully automatic analysis of coronary artery disease on coronary computed tomography angiography (CCTA) images. Automatically extracted CATs often miss some arteries or include wrong extractions which require manual corrections before performing successive steps. For analyzing a large number of datasets, a manual quality check of the extraction results is time-consuming. This paper presents a method to automatically calculate quality scores for extracted CATs in terms of clinical significance of the extracted arteries and the completeness of the extracted CAT. Both right dominant (RD) and left dominant (LD) anatomical statistical models are generated and exploited in developing the quality score. To automatically determine which model should be used, a dominance type detection method is also designed. Experiments are performed on the automatically extracted and manually refined CATs from 42 datasets to evaluate the proposed quality score. In 39 (92.9%) cases, the proposed method is able to measure the quality of the manually refined CATs with higher scores than the automatically extracted CATs. In a 100-point scale system, the average scores for automatically and manually refined CATs are 82.0 (+/-15.8) and 88.9 (+/-5.4) respectively. The proposed quality score will assist the automatic processing of the CAT extractions for large cohorts which contain both RD and LD cases. To the best of our knowledge, this is the first time that a general quality score for an extracted CAT is presented.
Speeding response, saving lives : automatic vehicle location capabilities for emergency services.
DOT National Transportation Integrated Search
1999-01-01
Information from automatic vehicle location systems, when combined with computer-aided dispatch software, can provide a rich source of data for analyzing emergency vehicle operations and evaluating agency performance.
Höink, Anna Janina; Schülke, Christoph; Koch, Raphael; Löhnert, Annika; Kammerer, Sara; Fortkamp, Rasmus; Heindel, Walter; Buerke, Boris
2017-11-01
Purpose To compare measurement precision and interobserver variability in the evaluation of hepatocellular carcinoma (HCC) and liver metastases in MSCT before and after transarterial local ablative therapies. Materials and Methods Retrospective study of 72 patients with malignant liver lesions (42 metastases; 30 HCCs) before and after therapy (43 SIRT procedures; 29 TACE procedures). Established (LAD; SAD; WHO) and vitality-based parameters (mRECIST; mLAD; mSAD; EASL) were assessed manually and semi-automatically by two readers. The relative interobserver difference (RID) and intraclass correlation coefficient (ICC) were calculated. Results The median RID for vitality-based parameters was lower from semi-automatic than from manual measurement of mLAD (manual 12.5 %; semi-automatic 3.4 %), mSAD (manual 12.7 %; semi-automatic 5.7 %) and EASL (manual 10.4 %; semi-automatic 1.8 %). The difference in established parameters was not statistically noticeable (p > 0.05). The ICCs of LAD (manual 0.984; semi-automatic 0.982), SAD (manual 0.975; semi-automatic 0.958) and WHO (manual 0.984; semi-automatic 0.978) are high, both in manual and semi-automatic measurements. The ICCs of manual measurements of mLAD (0.897), mSAD (0.844) and EASL (0.875) are lower. This decrease cannot be found in semi-automatic measurements of mLAD (0.997), mSAD (0.992) and EASL (0.998). Conclusion Vitality-based tumor measurements of HCC and metastases after transarterial local therapies should be performed semi-automatically due to greater measurement precision, thus increasing the reproducibility and in turn the reliability of therapeutic decisions. Key points · Liver lesion measurements according to EASL and mRECIST are more precise when performed semi-automatically.. · The higher reproducibility may facilitate a more reliable classification of therapy response.. · Measurements according to RECIST and WHO offer equivalent precision semi-automatically and manually.. Citation Format · Höink AJ, Schülke C, Koch R et al. Response Evaluation of Malignant Liver Lesions After TACE/SIRT: Comparison of Manual and Semi-Automatic Measurement of Different Response Criteria in Multislice CT. Fortschr Röntgenstr 2017; 189: 1067 - 1075. © Georg Thieme Verlag KG Stuttgart · New York.
Comparison of Acceleration Techniques for Selected Low-Level Bioinformatics Operations
Langenkämper, Daniel; Jakobi, Tobias; Feld, Dustin; Jelonek, Lukas; Goesmann, Alexander; Nattkemper, Tim W.
2016-01-01
In recent years, clock rates of modern processors have stagnated while the demand for computing power continues to grow. This applies particularly to the fields of life sciences and bioinformatics, where new technologies keep creating rapidly growing piles of raw data with increasing speed. The number of cores per processor increased in an attempt to compensate for slight increments of clock rates. This technological shift demands changes in software development, especially in the field of high performance computing, where parallelization techniques are gaining in importance due to the pressing issue of large datasets generated by, e.g., modern genomics. This paper presents an overview of state-of-the-art manual and automatic acceleration techniques and lists some applications employing these in different areas of sequence informatics. Furthermore, we provide examples of automatic acceleration of two use cases to show typical problems and gains of transforming a serial application to a parallel one. The paper should aid the reader in deciding on a suitable technique for the problem at hand. We compare four different state-of-the-art automatic acceleration approaches (OpenMP, PluTo-SICA, PPCG, and OpenACC). Their performance as well as their applicability for selected use cases is discussed. While optimizations targeting the CPU worked better in the complex k-mer use case, optimizers for Graphics Processing Units (GPUs) performed better in the matrix multiplication example. However, performance is only superior above a certain problem size, due to data migration overhead. We show that automatic code parallelization is feasible with current compiler software and yields significant increases in execution speed. Automatic optimizers for the CPU are mature and usually no additional manual adjustment is required. In contrast, some automatic parallelizers targeting GPUs still lack maturity and are limited to simple statements and structures. PMID:26904094
Papageorgiou, Eirini; Nieuwenhuys, Angela; Desloovere, Kaat
2017-01-01
Background This study aimed to improve the automatic probabilistic classification of joint motion gait patterns in children with cerebral palsy by using the expert knowledge available via a recently developed Delphi-consensus study. To this end, this study applied both Naïve Bayes and Logistic Regression classification with varying degrees of usage of the expert knowledge (expert-defined and discretized features). A database of 356 patients and 1719 gait trials was used to validate the classification performance of eleven joint motions. Hypotheses Two main hypotheses stated that: (1) Joint motion patterns in children with CP, obtained through a Delphi-consensus study, can be automatically classified following a probabilistic approach, with an accuracy similar to clinical expert classification, and (2) The inclusion of clinical expert knowledge in the selection of relevant gait features and the discretization of continuous features increases the performance of automatic probabilistic joint motion classification. Findings This study provided objective evidence supporting the first hypothesis. Automatic probabilistic gait classification using the expert knowledge available from the Delphi-consensus study resulted in accuracy (91%) similar to that obtained with two expert raters (90%), and higher accuracy than that obtained with non-expert raters (78%). Regarding the second hypothesis, this study demonstrated that the use of more advanced machine learning techniques such as automatic feature selection and discretization instead of expert-defined and discretized features can result in slightly higher joint motion classification performance. However, the increase in performance is limited and does not outweigh the additional computational cost and the higher risk of loss of clinical interpretability, which threatens the clinical acceptance and applicability. PMID:28570616
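The second hypothesis above contrasts expert-defined, discretized features with continuous ones. A minimal sketch of that contrast, using synthetic stand-in data and off-the-shelf scikit-learn components rather than the study's actual pipeline; the quantile binning is only a crude stand-in for expert-defined thresholds.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB, CategoricalNB
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))          # continuous gait features per trial (placeholder)
y = rng.integers(0, 4, size=400)        # joint-motion pattern classes (placeholder)

# continuous features -> Gaussian Naive Bayes
print("continuous:", cross_val_score(GaussianNB(), X, y, cv=5).mean())

# discretized features (stand-in for expert-defined thresholds) -> categorical Naive Bayes
Xd = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="quantile").fit_transform(X).astype(int)
print("discretized:", cross_val_score(CategoricalNB(), Xd, y, cv=5).mean())
```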
Gerth, Sabrina; Klassert, Annegret; Dolk, Thomas; Fliesser, Michael; Fischer, Martin H.; Nottbusch, Guido; Festman, Julia
2016-01-01
Due to their multifunctionality, tablets offer tremendous advantages for research on handwriting dynamics or for interactive use of learning apps in schools. Further, the widespread use of tablet computers has had a great impact on handwriting in the current generation. But, is it advisable to teach how to write and to assess handwriting in pre- and primary schoolchildren on tablets rather than on paper? Since handwriting is not automatized before the age of 10 years, children's handwriting movements require graphomotor and visual feedback as well as permanent control of movement execution during handwriting. Modifications in writing conditions, for instance the smoother writing surface of a tablet, might influence handwriting performance in general and in particular those of non-automatized beginning writers. In order to investigate how handwriting performance is affected by a difference in friction of the writing surface, we recruited three groups with varying levels of handwriting automaticity: 25 preschoolers, 27 second graders, and 25 adults. We administered three tasks measuring graphomotor abilities, visuomotor abilities, and handwriting performance (only second graders and adults). We evaluated two aspects of handwriting performance: the handwriting quality with a visual score and the handwriting dynamics using online handwriting measures [e.g., writing duration, writing velocity, strokes and number of inversions in velocity (NIV)]. In particular, NIVs which describe the number of velocity peaks during handwriting are directly related to the level of handwriting automaticity. In general, we found differences between writing on paper compared to the tablet. These differences were partly task-dependent. The comparison between tablet and paper revealed a faster writing velocity for all groups and all tasks on the tablet which indicates that all participants—even the experienced writers—were influenced by the lower friction of the tablet surface. Our results for the group-comparison show advancing levels in handwriting automaticity from preschoolers to second graders to adults, which confirms that our method depicts handwriting performance in groups with varying degrees of handwriting automaticity. We conclude that the smoother tablet surface requires additional control of handwriting movements and therefore might present an additional challenge for learners of handwriting. PMID:27672372
Closed circuit TV system automatically guides welding arc
NASA Technical Reports Server (NTRS)
Stephans, D. L.; Wall, W. A., Jr.
1968-01-01
A closed circuit television (CCTV) system automatically guides a welding torch to position the welding arc accurately along weld seams. Digital counting and logic techniques incorporated in the control circuitry ensure performance reliability.
Optimization of the High-speed On-off Valve of an Automatic Transmission
NASA Astrophysics Data System (ADS)
Li-mei, ZHAO; Huai-chao, WU; Lei, ZHAO; Yun-xiang, LONG; Guo-qiao, LI; Shi-hao, TANG
2018-03-01
The response time of the high-speed on-off solenoid valve has a great influence on the performance of the automatic transmission. In order to reduce the response time of the high-speed on-off valve, a simulation model of the valve was built using the AMESim and Ansoft Maxwell software packages. An objective function based on the ITAE criterion was constructed, and a genetic algorithm was used to optimize five parameters, including the number of coil turns and the working air gap. The comparison between experiment and simulation validates the model. After optimization, the response time of the valve is reduced by 38.16%, and the valve meets the demands of the automatic transmission well. The results can provide a theoretical reference for the improvement of automatic transmission performance.
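For reference, the ITAE criterion on which the objective function is based is the standard time-weighted integral of absolute error; treating e(t) as the deviation of the simulated valve response from its commanded value over the horizon T is an assumption here, not a detail taken from the paper:

\mathrm{ITAE} = \int_{0}^{T} t \,\lvert e(t) \rvert \, dt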
NASA Astrophysics Data System (ADS)
Wesemann, Johannes; Burgholzer, Reinhard; Herrnegger, Mathew; Schulz, Karsten
2017-04-01
In recent years, a lot of research in hydrological modelling has been invested to improve the automatic calibration of rainfall-runoff models. This includes for example (1) the implementation of new optimisation methods, (2) the incorporation of new and different objective criteria and signatures in the optimisation and (3) the usage of auxiliary data sets apart from runoff. Nevertheless, in many applications manual calibration is still justifiable and frequently applied. The hydrologist performing the manual calibration, with his expert knowledge, is able to judge the hydrographs simultaneously concerning details but also in a holistic view. This integrated eye-ball verification procedure available to man can be difficult to formulate in objective criteria, even when using a multi-criteria approach. Comparing the results of automatic and manual calibration is not straightforward. Automatic calibration often solely involves objective criteria such as Nash-Sutcliffe Efficiency Coefficient or the Kling-Gupta-Efficiency as a benchmark during the calibration. Consequently, a comparison based on such measures is intrinsically biased towards automatic calibration. Additionally, objective criteria do not cover all aspects of a hydrograph leaving questions concerning the quality of a simulation open. This contribution therefore seeks to examine the quality of manually and automatically calibrated hydrographs by interactively involving expert knowledge in the evaluation. Simulations have been performed for the Mur catchment in Austria with the rainfall-runoff model COSERO using two parameter sets evolved from a manual and an automatic calibration. A subset of resulting hydrographs for observation and simulation, representing the typical flow conditions and events, will be evaluated in this study. In an interactive crowdsourcing approach experts attending the session can vote for their preferred simulated hydrograph without having information on the calibration method that produced the respective hydrograph. Therefore, the result of the poll can be seen as an additional quality criterion for the comparison of the two different approaches and help in the evaluation of the automatic calibration method.
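For reference, the two objective criteria named above, the Nash-Sutcliffe efficiency (NSE) and the Kling-Gupta efficiency (KGE), are commonly defined as

\mathrm{NSE} = 1 - \frac{\sum_{t} \left(Q_{s,t} - Q_{o,t}\right)^{2}}{\sum_{t} \left(Q_{o,t} - \overline{Q}_{o}\right)^{2}}, \qquad \mathrm{KGE} = 1 - \sqrt{(r-1)^{2} + (\alpha-1)^{2} + (\beta-1)^{2}}

where Q_{s,t} and Q_{o,t} are the simulated and observed discharge, \overline{Q}_{o} is the mean observed discharge, r is the linear correlation between simulation and observation, \alpha is the ratio of their standard deviations, and \beta is the ratio of their means.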
Validation of Computerized Automatic Calculation of the Sequential Organ Failure Assessment Score
Harrison, Andrew M.; Pickering, Brian W.; Herasevich, Vitaly
2013-01-01
Purpose. To validate the use of a computer program for the automatic calculation of the sequential organ failure assessment (SOFA) score, as compared to the gold standard of manual chart review. Materials and Methods. Adult admissions (age > 18 years) to the medical ICU with a length of stay greater than 24 hours were studied in the setting of an academic tertiary referral center. A retrospective cross-sectional analysis was performed using a derivation cohort to compare automatic calculation of the SOFA score to the gold standard of manual chart review. After critical appraisal of sources of disagreement, another analysis was performed using an independent validation cohort. Then, a prospective observational analysis was performed using an implementation of this computer program in AWARE Dashboard, which is an existing real-time patient EMR system for use in the ICU. Results. Good agreement between the manual and automatic SOFA calculations was observed for both the derivation (N=94) and validation (N=268) cohorts: 0.02 ± 2.33 and 0.29 ± 1.75 points, respectively. These results were validated in AWARE (N=60). Conclusion. This EMR-based automatic tool accurately calculates SOFA scores and can facilitate ICU decisions without the need for manual data collection. This tool can also be employed in a real-time electronic environment. PMID:23936639
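A minimal sketch of the agreement statistic quoted above (mean difference plus/minus its standard deviation between automatic and manual SOFA scores), in the spirit of a Bland-Altman comparison; the paired scores are synthetic placeholders sized like the derivation cohort.

```python
import numpy as np

rng = np.random.default_rng(0)
manual = rng.integers(0, 20, size=94)                  # chart-review SOFA scores (placeholder)
automatic = manual + rng.integers(-2, 3, size=94)      # computer-calculated scores (placeholder)

diff = automatic - manual
print(f"bias = {diff.mean():+.2f} +/- {diff.std(ddof=1):.2f} points")
```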
Van De Gucht, Tim; Van Weyenberg, Stephanie; Van Nuffel, Annelies; Lauwers, Ludwig; Vangeyte, Jürgen; Saeys, Wouter
2017-10-08
Most automatic lameness detection system prototypes have not yet been commercialized, and are hence not yet adopted in practice. Therefore, the objective of this study was to simulate the effect of detection performance (percentage missed lame cows and percentage false alarms) and system cost on the potential market share of three automatic lameness detection systems relative to visual detection: a system attached to the cow, a walkover system, and a camera system. Simulations were done using a utility model derived from survey responses obtained from dairy farmers in Flanders, Belgium. Overall, systems attached to the cow had the largest market potential, but were still not competitive with visual detection. Increasing the detection performance or lowering the system cost led to higher market shares for automatic systems at the expense of visual detection. The willingness to pay for extra performance was €2.57 per % less missed lame cows, €1.65 per % less false alerts, and €12.7 for lame leg indication, respectively. The presented results could be exploited by system designers to determine the effect of adjustments to the technology on a system's potential adoption rate.
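An illustrative euro-equivalent utility comparison using the reported willingness-to-pay trade-offs as coefficients; the per-system attribute values are hypothetical placeholders, and a fitted scale parameter and the survey-based model itself would be needed to reproduce actual market shares.

```python
def utility_eur(cost_eur, pct_missed, pct_false_alerts, leg_indication):
    # willingness to pay from the abstract: 2.57 EUR per % missed lame cows,
    # 1.65 EUR per % false alerts, 12.7 EUR for lame-leg indication
    return -cost_eur - 2.57 * pct_missed - 1.65 * pct_false_alerts + 12.7 * leg_indication

systems = {  # all attribute values below are hypothetical
    "attached system":  utility_eur(cost_eur=400, pct_missed=20, pct_false_alerts=10, leg_indication=1),
    "walkover system":  utility_eur(cost_eur=500, pct_missed=25, pct_false_alerts=15, leg_indication=1),
    "camera system":    utility_eur(cost_eur=600, pct_missed=30, pct_false_alerts=20, leg_indication=0),
    "visual detection": utility_eur(cost_eur=0,   pct_missed=35, pct_false_alerts=5,  leg_indication=1),
}
for name, u in sorted(systems.items(), key=lambda kv: -kv[1]):
    print(f"{name}: utility = {u:.1f} EUR-equivalents")
```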
Presentation video retrieval using automatically recovered slide and spoken text
NASA Astrophysics Data System (ADS)
Cooper, Matthew
2013-03-01
Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.
NASA Technical Reports Server (NTRS)
Coggeshall, M. E.; Hoffer, R. M.
1973-01-01
Remote sensing equipment and automatic data processing techniques were employed as aids in the institution of improved forest resource management methods. On the basis of automatically calculated statistics derived from manually selected training samples, the feature selection processor of LARSYS selected, upon consideration of various groups of the four available spectral regions, a series of channel combinations whose automatic classification performances (for six cover types, including both deciduous and coniferous forest) were tested, analyzed, and further compared with automatic classification results obtained from digitized color infrared photography.
Hantke, Simone; Weninger, Felix; Kurle, Richard; Ringeval, Fabien; Batliner, Anton; Mousa, Amr El-Desoky; Schuller, Björn
2016-01-01
We propose a new recognition task in the area of computational paralinguistics: automatic recognition of eating conditions in speech, i. e., whether people are eating while speaking, and what they are eating. To this end, we introduce the audio-visual iHEARu-EAT database featuring 1.6 k utterances of 30 subjects (mean age: 26.1 years, standard deviation: 2.66 years, gender balanced, German speakers), six types of food (Apple, Nectarine, Banana, Haribo Smurfs, Biscuit, and Crisps), and read as well as spontaneous speech, which is made publicly available for research purposes. We start with demonstrating that for automatic speech recognition (ASR), it pays off to know whether speakers are eating or not. We also propose automatic classification both by brute-forcing of low-level acoustic features as well as higher-level features related to intelligibility, obtained from an Automatic Speech Recogniser. Prediction of the eating condition was performed with a Support Vector Machine (SVM) classifier employed in a leave-one-speaker-out evaluation framework. Results show that the binary prediction of eating condition (i. e., eating or not eating) can be easily solved independently of the speaking condition; the obtained average recalls are all above 90%. Low-level acoustic features provide the best performance on spontaneous speech, which reaches up to 62.3% average recall for multi-way classification of the eating condition, i. e., discriminating the six types of food, as well as not eating. The early fusion of features related to intelligibility with the brute-forced acoustic feature set improves the performance on read speech, reaching a 66.4% average recall for the multi-way classification task. Analysing features and classifier errors leads to a suitable ordinal scale for eating conditions, on which automatic regression can be performed with up to 56.2% determination coefficient. PMID:27176486
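A minimal sketch of the leave-one-speaker-out SVM evaluation described above, with synthetic acoustic features and the speaker ID as the grouping variable; the feature set and kernel are placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))            # low-level acoustic features (placeholder)
y = rng.integers(0, 7, size=300)          # 6 food types + "not eating" (placeholder)
speakers = rng.integers(0, 30, size=300)  # 30 subjects

scores = cross_val_score(SVC(kernel="linear"), X, y,
                         groups=speakers, cv=LeaveOneGroupOut())
print("mean per-speaker accuracy:", scores.mean())
```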
High-speed, multi-channel detector readout electronics for fast radiation detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hennig, Wolfgang
2012-06-22
In this project, we are developing a high speed digital spectrometer that a) captures detector waveforms at rates up to 500 MSPS, b) has upgraded event data acquisition with additional data buffers for zero dead time operation, c) moves energy calculations to the FPGA to increase spectrometer throughput in fast scintillator applications, and d) uses a streamlined architecture and high speed data interface for even faster readout to the host PC. These features are in addition to the standard functions in our existing spectrometers such as digitization, programmable trigger and energy filters, pileup inspection, data acquisition with energy and time stamps, MCA histograms, and run statistics. In Phase I, we upgraded one of our existing spectrometer designs to demonstrate the key principle of fast waveform capture using a 500 MSPS, 12 bit ADC and a Xilinx Virtex-4 FPGA. This upgraded spectrometer, named P500, performed well in initial tests of energy resolution, pulse shape analysis, and timing measurements, thus achieving item (a) above. In Phase II, we are revising the P500 to build a commercial prototype with the improvements listed in items (b)-(d). As described in the previous report, two devices were built to pursue this goal, named the Pixie-500 and the Pixie-500 Express. The Pixie-500 has only minor improvements from the Phase I prototype and is intended as an early commercial product (its production and part of its development were funded outside the SBIR). It also allows testing of the ADC performance in real applications. The Pixie-500 Express (or Pixie-500e) includes all of the improvements (b)-(d). At the end of Phase II of the project, we have tested and debugged the hardware, firmware and software of the Pixie-500 Express prototype boards delivered 12/3/2010. This proved substantially more complex than anticipated. At the time of writing, all hardware bugs have been fixed, the PCI Express interface is working, the SDRAM has been successfully tested and the SHARC DSP has been booted with preliminary code. All new ICs and circuitry on the prototype are working properly, however some of the planned firmware and software functions have not yet been completely implemented and debugged. Overall, due to the unanticipated complexity of the PCI Express interface, some aspects of the project could not be completed with the time and funds available in Phase II. These aspects will be completed in self-funded Phase III.
Automatic emotional expression analysis from eye area
NASA Astrophysics Data System (ADS)
Akkoç, Betül; Arslan, Ahmet
2015-02-01
Eyes play an important role in expressing emotions in nonverbal communication. In the present study, emotional expression classification was performed based on features that were automatically extracted from the eye area. First, the face area and the eye area were automatically extracted from the captured image. Afterwards, the parameters to be used for the analysis were obtained from the eye area through discrete wavelet transformation. Using these parameters, emotional expression analysis was performed with artificial intelligence techniques. As a result of the experimental studies, six universal emotions consisting of expressions of happiness, sadness, surprise, disgust, anger and fear were classified at a success rate of 84% using artificial neural networks.
Volumetric breast density affects performance of digital screening mammography.
Wanders, Johanna O P; Holland, Katharina; Veldhuis, Wouter B; Mann, Ritse M; Pijnappel, Ruud M; Peeters, Petra H M; van Gils, Carla H; Karssemeijer, Nico
2017-02-01
To determine to what extent automatically measured volumetric mammographic density influences screening performance when using digital mammography (DM). We collected a consecutive series of 111,898 DM examinations (2003-2011) from one screening unit of the Dutch biennial screening program (age 50-75 years). Volumetric mammographic density was automatically assessed using Volpara. We determined screening performance measures for four density categories comparable to the American College of Radiology (ACR) breast density categories. Of all the examinations, 21.6% were categorized as density category 1 ('almost entirely fatty') and 41.5, 28.9, and 8.0% as category 2-4 ('extremely dense'), respectively. We identified 667 screen-detected and 234 interval cancers. Interval cancer rates were 0.7, 1.9, 2.9, and 4.4‰ and false positive rates were 11.2, 15.1, 18.2, and 23.8‰ for categories 1-4, respectively (both p-trend < 0.001). The screening sensitivity, calculated as the proportion of screen-detected among the total of screen-detected and interval tumors, was lower in higher density categories: 85.7, 77.6, 69.5, and 61.0% for categories 1-4, respectively (p-trend < 0.001). Volumetric mammographic density, automatically measured on digital mammograms, impacts screening performance measures along the same patterns as established with ACR breast density categories. Since measuring breast density fully automatically has much higher reproducibility than visual assessment, this automatic method could help with implementing density-based supplemental screening.
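The screening sensitivity reported per density category follows the definition stated above; applied to the overall counts given in the abstract it works out to, for example,

\text{sensitivity} = \frac{\text{screen-detected cancers}}{\text{screen-detected cancers} + \text{interval cancers}} = \frac{667}{667 + 234} \approx 74\%.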
Wolterink, Jelmer M; Leiner, Tim; de Vos, Bob D; Coatrieux, Jean-Louis; Kelm, B Michael; Kondo, Satoshi; Salgado, Rodrigo A; Shahzad, Rahil; Shu, Huazhong; Snoeren, Miranda; Takx, Richard A P; van Vliet, Lucas J; van Walsum, Theo; Willems, Tineke P; Yang, Guanyu; Zheng, Yefeng; Viergever, Max A; Išgum, Ivana
2016-05-01
The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular disease (CVD) events. In clinical practice, CAC is manually identified and automatically quantified in cardiac CT using commercially available software. This is a tedious and time-consuming process in large-scale studies. Therefore, a number of automatic methods that require no interaction and semiautomatic methods that require very limited interaction for the identification of CAC in cardiac CT have been proposed. Thus far, a comparison of their performance has been lacking. The objective of this study was to perform an independent evaluation of (semi)automatic methods for CAC scoring in cardiac CT using a publicly available standardized framework. Cardiac CT exams of 72 patients distributed over four CVD risk categories were provided for (semi)automatic CAC scoring. Each exam consisted of a noncontrast-enhanced calcium scoring CT (CSCT) and a corresponding coronary CT angiography (CCTA) scan. The exams were acquired in four different hospitals using state-of-the-art equipment from four major CT scanner vendors. The data were divided into 32 training exams and 40 test exams. A reference standard for CAC in CSCT was defined by consensus of two experts following a clinical protocol. The framework organizers evaluated the performance of (semi)automatic methods on test CSCT scans, per lesion, artery, and patient. Five (semi)automatic methods were evaluated. Four methods used both CSCT and CCTA to identify CAC, and one method used only CSCT. The evaluated methods correctly detected between 52% and 94% of CAC lesions with positive predictive values between 65% and 96%. Lesions in distal coronary arteries were most commonly missed and aortic calcifications close to the coronary ostia were the most common false positive errors. The majority (between 88% and 98%) of correctly identified CAC lesions were assigned to the correct artery. Linearly weighted Cohen's kappa for patient CVD risk categorization by the evaluated methods ranged from 0.80 to 1.00. A publicly available standardized framework for the evaluation of (semi)automatic methods for CAC identification in cardiac CT is described. An evaluation of five (semi)automatic methods within this framework shows that automatic per patient CVD risk categorization is feasible. CAC lesions at ambiguous locations such as the coronary ostia remain challenging, but their detection had limited impact on CVD risk determination.
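A minimal sketch of the per-patient agreement metric used above: linearly weighted Cohen's kappa between reference and automatically assigned CVD risk categories. The labels are synthetic placeholders for the four risk categories over a 40-exam test set.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
reference = rng.integers(0, 4, size=40)                              # expert risk categories (placeholder)
automatic = np.clip(reference + rng.integers(-1, 2, size=40), 0, 3)  # method's categories (placeholder)

print(cohen_kappa_score(reference, automatic, weights="linear"))
```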
NASA Astrophysics Data System (ADS)
Letts, J.; Magini, N.
2011-12-01
Tier-2 to Tier-2 data transfers have been identified as a necessary extension of the CMS computing model. The Debugging Data Transfers (DDT) Task Force in CMS was charged with commissioning Tier-2 to Tier-2 PhEDEx transfer links beginning in late 2009, originally to serve the needs of physics analysis groups for the transfer of their results between the storage elements of the Tier-2 sites associated with the groups. PhEDEx is the data transfer middleware of the CMS experiment. For analysis jobs using CRAB, the CMS Remote Analysis Builder, the challenges of remote stage out of job output at the end of the analysis jobs led to the introduction of a local fallback stage out, and will eventually require the asynchronous transfer of user data over essentially all of the Tier-2 to Tier-2 network using the same PhEDEx infrastructure. In addition, direct file sharing of physics and Monte Carlo simulated data between Tier-2 sites can relieve the operational load of the Tier-1 sites in the original CMS Computing Model, and already represents an important component of CMS PhEDEx data transfer volume. The experience, challenges and methods used to debug and commission the thousands of data transfers links between CMS Tier-2 sites world-wide are explained and summarized. The resulting operational experience with Tier-2 to Tier-2 transfers is also presented.
Research and design of portable photoelectric rotary table data-acquisition and analysis system
NASA Astrophysics Data System (ADS)
Yang, Dawei; Yang, Xiufang; Han, Junfeng; Yan, Xiaoxu
2015-02-01
The photoelectric rotary table is the main test and tracking measurement platform, widely used at shooting ranges and in the aerospace field. To meet the laboratory and field application demands of photoelectric testing instruments and equipment in photoelectric tracking measurement systems, a portable photoelectric rotary table data acquisition and analysis system was researched and designed. The hardware design, based on an FPGA of the Xilinx Virtex-4 series and its peripheral modules, and the host-computer software design, developed on the VC++ 6.0 programming platform with MFC class libraries, are elaborated in detail. The system integrates data acquisition, display and storage, commissioning control, analysis, laboratory waveform playback, transmission, and fault diagnosis into an organic whole, and has the advantages of small volume, embeddability, high speed, portability, and simple operation. Using a photoelectric tracking turntable as the experimental object, the system hardware and software were aligned and tested; the experimental results show that the system can acquire, analyze, and process data from photoelectric tracking equipment, supports turntable debugging well, and delivers measurement results that are accurate and reliable, with good maintainability and extensibility. This research and design is of great significance for advancing photoelectric tracking measurement equipment debugging, diagnosis, condition monitoring, and fault analysis, as well as for the standardization and normalization of interfaces and the improvement of equipment maintainability, and it has practical and innovative value.
Focus of attention and automaticity in handwriting.
MacMahon, Clare; Charness, Neil
2014-04-01
This study investigated the nature of automaticity in everyday tasks by testing handwriting performance under single and dual-task conditions. Item familiarity and hand dominance were also manipulated to understand both cognitive and motor components of the task. In line with previous literature, performance was superior in an extraneous focus of attention condition compared to two different skill focus conditions. This effect was found only when writing with the dominant hand. In addition, performance was superior for high familiarity compared to low familiarity items. These findings indicate that motor and cognitive familiarity are related to the degree of automaticity of motor skills and can be manipulated to produce different performance outcomes. The findings also imply that the progression of skill acquisition from novel to novice to expert levels can be traced using different dual-task conditions. The separation of motor and cognitive familiarity is a new approach in the handwriting domain, and provides insight into the nature of attentional demands during performance. Copyright © 2013 Elsevier B.V. All rights reserved.
Does the use of automated fetal biometry improve clinical work flow efficiency?
Espinoza, Jimmy; Good, Sara; Russell, Evie; Lee, Wesley
2013-05-01
This study was designed to compare the work flow efficiency of manual measurements of 5 fetal parameters with a novel technique that automatically measures these parameters from 2-dimensional sonograms. This prospective study included 200 singleton pregnancies between 15 and 40 weeks' gestation. Patients were randomly allocated to either manual (n = 100) or automatic (n = 100) fetal biometry. The automatic measurement was performed using a commercially available software application. A digital video recorder captured all on-screen activity associated with the sonographic examination. The examination time and number of steps required to obtain fetal measurements were compared between manual and automatic methods. The mean time required to obtain the biometric measurements was significantly shorter using the automated technique than the manual approach (P < .001 for all comparisons). Similarly, the mean number of steps required to perform these measurements was significantly fewer with automatic measurements compared to the manual technique (P < .001). In summary, automated biometry reduced the examination time required for standard fetal measurements. This approach may improve work flow efficiency in busy obstetric sonography practices.
Automatic intraaortic balloon pump timing using an intrabeat dicrotic notch prediction algorithm.
Schreuder, Jan J; Castiglioni, Alessandro; Donelli, Andrea; Maisano, Francesco; Jansen, Jos R C; Hanania, Ramzi; Hanlon, Pat; Bovelander, Jan; Alfieri, Ottavio
2005-03-01
The efficacy of intraaortic balloon counterpulsation (IABP) during arrhythmic episodes is questionable. A novel algorithm for intrabeat prediction of the dicrotic notch was used for real time IABP inflation timing control. A windkessel model algorithm was used to calculate real-time aortic flow from aortic pressure. The dicrotic notch was predicted using a percentage of calculated peak flow. Automatic inflation timing was set at intrabeat predicted dicrotic notch and was combined with automatic IAB deflation. Prophylactic IABP was applied in 27 patients with low ejection fraction (< 35%) undergoing cardiac surgery. Analysis of IABP at a 1:4 ratio revealed that IAB inflation occurred at a mean of 0.6 +/- 5 ms from the dicrotic notch. In all patients accurate automatic timing at a 1:1 assist ratio was performed. Seventeen patients had episodes of severe arrhythmia, the novel IABP inflation algorithm accurately assisted 318 of 320 arrhythmic beats at a 1:1 ratio. The novel real-time intrabeat IABP inflation timing algorithm performed accurately in all patients during both regular rhythms and severe arrhythmia, allowing fully automatic intrabeat IABP timing.
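As a rough illustration of the timing idea described above (estimate aortic flow from pressure with a windkessel model, then flag the dicrotic notch when the calculated flow falls to a fraction of its peak), here is a minimal sketch. The two-element windkessel parameters, the 40% threshold, and the synthetic pressure beat are illustrative assumptions, not values from the paper.

```python
import numpy as np

def predict_dicrotic_notch(p_ao, fs, R=1.0, C=1.5, frac=0.4):
    """Return the sample index where windkessel-derived flow first falls
    below `frac` * peak flow after the flow peak (crude notch predictor).

    p_ao : aortic pressure for one beat (mmHg); fs : sampling rate (Hz).
    R, C : two-element windkessel resistance/compliance (assumed values).
    """
    dpdt = np.gradient(p_ao, 1.0 / fs)      # dP/dt
    q = C * dpdt + p_ao / R                 # Q(t) = C*dP/dt + P/R
    i_peak = int(np.argmax(q))
    below = np.nonzero(q[i_peak:] < frac * q[i_peak])[0]
    return i_peak + below[0] if below.size else None

if __name__ == "__main__":
    fs = 500.0
    t = np.arange(0, 0.8, 1.0 / fs)
    # crude synthetic beat: systolic bump on a decaying diastolic baseline
    p = 80 + 40 * np.sin(np.pi * t / 0.35) ** 2 * (t < 0.35) + 10 * np.exp(-t / 0.5)
    i = predict_dicrotic_notch(p, fs)
    print("predicted notch at t = %.3f s" % (i / fs))
```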
Optical Automatic Car Identification (OACI) : Volume 1. Advanced System Specification.
DOT National Transportation Integrated Search
1978-12-01
A performance specification is provided in this report for an Optical Automatic Car Identification (OACI) scanner system which features 6% improved readability over existing industry scanner systems. It also includes the analysis and rationale which ...
Evaluation of the Monitor-CTA Automatic Vehicle Monitoring System
DOT National Transportation Integrated Search
1974-03-01
In June 1972 the Urban Mass Transportation Administration requested that the Transportation System Center of DOT perform an evaluation of the CTA (Chicago Transit Authority) Monitor-Automatic Vehicle Monitor (AVM) system. TSC planned the overall eval...
Assessment of WMATA's Automatic Fare Collection Equipment Performance
DOT National Transportation Integrated Search
1981-01-01
The Washington Metropolitan Area Transit Authority (WMATA) has had an Automatic Fare Collection (AFC) system in operation since June 1977. The AFC system, comprised of entry/exit gates, farecard vendors, and addfare machines, initially encountered ma...
Automatic analysis of microscopic images of red blood cell aggregates
NASA Astrophysics Data System (ADS)
Menichini, Pablo A.; Larese, Mónica G.; Riquelme, Bibiana D.
2015-06-01
Red blood cell aggregation is one of the most important factors in blood viscosity at stasis or at very low flow rates. The basic structure of an aggregate is a linear array of cells commonly termed a rouleau. Enhanced or abnormal aggregation is seen in clinical conditions such as diabetes and hypertension, producing alterations in the microcirculation, some of which can be analyzed through the characterization of aggregated cells. Image processing and analysis for the characterization of RBC aggregation have frequently been done manually or semi-automatically using interactive tools. We propose a system that processes images of RBC aggregation and automatically characterizes and quantifies the different types of RBC aggregates. The technique could be adapted as a routine in hemorheological and clinical biochemistry laboratories, since this automatic method is rapid, efficient, and economical, and at the same time independent of the user performing the analysis (ensuring repeatability).
Improved automatic adjustment of density and contrast in FCR system using neural network
NASA Astrophysics Data System (ADS)
Takeo, Hideya; Nakajima, Nobuyoshi; Ishida, Masamitsu; Kato, Hisatoyo
1994-05-01
The FCR system automatically adjusts image density and contrast by analyzing the histogram of the image data within the radiation field. The advanced image-recognition methods proposed in this paper, based on neural network technology, improve this automatic adjustment. Two methods are presented, both using a three-layer neural network trained with back-propagation: in one, the image data are fed directly to the input layer; in the other, the histogram data are used as input. The former is effective for imaging menus such as the shoulder joint, where the position of the region of interest within the histogram changes with positioning, while the latter is effective for menus such as pediatric chest imaging, where the histogram shape changes with positioning. We experimentally confirm the validity of these methods for automatic adjustment in comparison with conventional histogram-analysis methods.
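A minimal sketch of the two input strategies described above, using scikit-learn's MLPRegressor as a stand-in for the three-layer back-propagation network: one model is fed flattened image data, the other the grey-level histogram. The image size, histogram binning, network widths, and synthetic adjustment targets are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy stand-ins: 200 "radiographs" (32x32) and two adjustment targets
# per image (e.g. a density shift and a contrast gain) -- synthetic data.
images = rng.random((200, 32, 32))
targets = rng.random((200, 2))

# Method 1: feed the (flattened) image data directly to the network.
X_img = images.reshape(len(images), -1)
net_img = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net_img.fit(X_img, targets)

# Method 2: feed the grey-level histogram instead of the raw pixels.
X_hist = np.stack([np.histogram(im, bins=64, range=(0, 1))[0] for im in images])
net_hist = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net_hist.fit(X_hist, targets)

print("image-input fit R^2:", net_img.score(X_img, targets))
print("histogram-input fit R^2:", net_hist.score(X_hist, targets))
```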
Automatic system for ionization chamber current measurements.
Brancaccio, Franco; Dias, Mauro S; Koskinas, Marina F
2004-12-01
The present work describes an automatic system developed for current integration measurements at the Laboratório de Metrologia Nuclear of Instituto de Pesquisas Energéticas e Nucleares. This system includes software (graphic user interface and control) and a module connected to a microcomputer, by means of a commercial data acquisition card. Measurements were performed in order to check the performance and for validating the proposed design.
Computational Modeling of Emotions and Affect in Social-Cultural Interaction
2013-10-02
acoustic and textual information sources. Second, a cross-lingual study was performed that shed light on how human perception and automatic recognition...speech is produced, a speaker’s pitch and intonational pattern, and word usage. Better feature representation and advanced approaches were used to...recognition performance, and improved our understanding of language/cultural impact on human perception of emotion and automatic classification. • Units
ERIC Educational Resources Information Center
Shih, Ching-Hsiang; Huang, Hsun-Chin; Liao, Yung-Kun; Shih, Ching-Tien; Chiang, Ming-Shan
2010-01-01
The latest researches adopted software technology to improve pointing performance; however, Drag-and-Drop (DnD) operation is also commonly used in modern GUI programming. This study evaluated whether two children with developmental disabilities would be able to improve their DnD performance, through an Automatic DnD Assistive Program (ADnDAP). At…
NASA Technical Reports Server (NTRS)
White, W. F.; Clark, L.
1980-01-01
The flight performance of the Terminal Configured Vehicle airplane is summarized. Demonstration automatic approaches and landings utilizing time reference scanning beam microwave landing system (TRSB/MLS) guidance are presented. The TRSB/MLS was shown to provide the terminal area guidance necessary for flying curved automatic approaches with final legs as short as 2 km.
Run-Time Support for Rapid Prototyping
1988-12-01
prototyping. One such system is the Computer-Aided Proto- typing System (CAPS). It combines rapid prototypng with automatic program generation. Some of the...a design database, and a design management system [Ref. 3:p. 66. By using both rapid prototyping and automatic program genera- tion. CAPS will be...Most proto- typing systems perform these functions. CAPS is different in that it combines rapid prototyping with a variant of automatic program
Riley, Gerard A; Venn, Paul
2015-01-01
Thirty-four participants with acquired brain injury learned word lists under two forms of vanishing cues - one in which the learning trial instructions encouraged intentional retrieval (i.e., explicit memory) and one in which they encouraged automatic retrieval (which encompasses implicit memory). The automatic instructions represented a novel approach in which the cooperation of participants was actively sought to avoid intentional retrieval. Intentional instructions resulted in fewer errors during the learning trials and better performance on immediate and delayed retrieval tests. The advantage of intentional over automatic instructions was generally less for those who had more severe memory and/or executive impairments. Most participants performed better under intentional instructions on both the immediate and the delayed tests. Although those who were more severely impaired in both memory and executive function also did better with intentional instructions on the immediate retrieval test, they were significantly more likely to show an advantage for automatic instructions on the delayed test. It is suggested that this pattern of results may reflect impairments in the consolidation of intentional memories in this group. When using vanishing cues, automatic instructions may be better for those with severe consolidation impairments, but otherwise intentional instructions may be better.
Kal, E. C.; van der Kamp, J.; Houdijk, H.; Groet, E.; van Bennekom, C. A. M.; Scherder, E. J. A.
2015-01-01
Dual-task performance is often impaired after stroke. This may be resolved by enhancing patients’ automaticity of movement. This study sets out to test the constrained action hypothesis, which holds that automaticity of movement is enhanced by triggering an external focus (on movement effects), rather than an internal focus (on movement execution). Thirty-nine individuals with chronic, unilateral stroke performed a one-leg-stepping task with both legs in single- and dual-task conditions. Attentional focus was manipulated with instructions. Motor performance (movement speed), movement automaticity (fluency of movement), and dual-task performance (dual-task costs) were assessed. The effects of focus on movement speed, single- and dual-task movement fluency, and dual-task costs were analysed with generalized estimating equations. Results showed that, overall, single-task performance was unaffected by focus (p = .341). Regarding movement fluency, no main effects of focus were found in single- or dual-task conditions (p’s ≥ .13). However, focus by leg interactions suggested that an external focus reduced movement fluency of the paretic leg compared to an internal focus (single-task conditions: p = .068; dual-task conditions: p = .084). An external focus also tended to result in inferior dual-task performance (β = -2.38, p = .065). Finally, a near-significant interaction (β = 2.36, p = .055) suggested that dual-task performance was more constrained by patients’ attentional capacity in external focus conditions. We conclude that, compared to an internal focus, an external focus did not result in more automated movements in chronic stroke patients. Contrary to expectations, trends were found for enhanced automaticity with an internal focus. These findings might be due to patients’ strong preference to use an internal focus in daily life. Future work needs to establish the more permanent effects of learning with different attentional foci on re-automating motor control after stroke. PMID:26317437
Evaluation of Prototype Automatic Truck Rollover Warning Systems
DOT National Transportation Integrated Search
1998-01-01
Three operating prototype Automatic Truck Rollover Warning Systems (ATRWS) installed on the Capital Beltway in Maryland and Virginia were evaluated for 3 years. The general objectives of this evaluation were to assess how the ATRWS performed and to d...
Testing & Evaluation of Close-Range SAR for Monitoring & Automatically Detecting Pavement Conditions
DOT National Transportation Integrated Search
2012-01-01
This report summarizes activities in support of the DOT contract on Testing & Evaluating Close-Range SAR for Monitoring & Automatically Detecting Pavement Conditions & Improve Visual Inspection Procedures. The work of this project was performed by Dr...
Rail Transit System Maintenance Practices for Automatic Fare Collection Equipment
DOT National Transportation Integrated Search
1984-05-01
A review of rail transit system maintenance practices for automatic fare collection (AFC) equipment was performed. This study supports an UMTA sponsored program to improve the reliability of AFC equipment. The maintenance practices of the transit sys...
Assessment of Automatic Fare Collection Equipment at Three European Transit Properties
DOT National Transportation Integrated Search
1982-12-01
This report is an assessment of automatic fare collection (AFC) equipment performance conducted at three European properties in accordance with procedures defined in the Property Evaluation Plan (PEP) developed by Input Output Computer Services, Inc....
A Declarative Design Approach to Modeling Traditional and Non-Traditional Space Systems
NASA Astrophysics Data System (ADS)
Hoag, Lucy M.
The space system design process is known to be laborious, complex, and computationally demanding. It is highly multi-disciplinary, involving several interdependent subsystems that must be both highly optimized and reliable due to the high cost of launch. Satellites must also be capable of operating in harsh and unpredictable environments, so integrating high-fidelity analysis is important. To address each of these concerns, a holistic design approach is necessary. However, while the sophistication of space systems has evolved significantly in the last 60 years, improvements in the design process have been comparatively stagnant. Space systems continue to be designed using a procedural, subsystem-by-subsystem approach. This method is inadequate since it generally requires extensive iteration and limited or heuristic-based search, which can be slow, labor-intensive, and inaccurate. The use of a declarative design approach can potentially address these inadequacies. In the declarative programming style, the focus of a problem is placed on what the objective is, and not necessarily how it should be achieved. In the context of design, this entails knowledge expressed as a declaration of statements that are true about the desired artifact instead of explicit instructions on how to implement it. A well-known technique is through constraint-based reasoning, where a design problem is represented as a network of rules and constraints that are reasoned across by a solver to dynamically discover the optimal candidate(s). This enables implicit instantiation of the tradespace and allows for automatic generation of all feasible design candidates. As such, this approach also appears to be well-suited to modeling adaptable space systems, which generally have large tradespaces and possess configurations that are not well-known a priori. This research applied a declarative design approach to holistic satellite design and to tradespace exploration for adaptable space systems. The approach was tested during the design of USC's Aeneas nanosatellite project, and a case study was performed to assess the advantages of the new approach over past procedural approaches. It was found that use of the declarative approach improved design accuracy through exhaustive tradespace search and provable optimality; decreased design time through improved model generation, faster run time, and reduction in time and number of iteration cycles; and enabled modular and extensible code. Observed weaknesses included non-intuitive model abstraction; increased debugging time; and difficulty of data extrapolation and analysis.
MatchGUI: A Graphical MATLAB-Based Tool for Automatic Image Co-Registration
NASA Technical Reports Server (NTRS)
Ansar, Adnan I.
2011-01-01
MatchGUI software, based on MATLAB, automatically matches two images and displays the match result by superimposing one image on the other. A slider bar allows focus to shift between the two images. There are tools for zoom, auto-crop to overlap region, and basic image markup. Given a pair of ortho-rectified images (focused primarily on Mars orbital imagery for now), this software automatically co-registers the imagery so that corresponding image pixels are aligned. MatchGUI requires minimal user input, and performs a registration over scale and inplane rotation fully automatically
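One common way to register a pair of images over scale and in-plane rotation (not necessarily MatchGUI's internal method, which is MATLAB-based) is feature matching followed by a RANSAC similarity fit. The sketch below uses OpenCV and assumes grayscale uint8 inputs; the feature count and match cap are arbitrary choices.

```python
import cv2
import numpy as np

def coregister(fixed_gray, moving_gray):
    """Estimate a scale + in-plane rotation + translation that aligns
    `moving_gray` onto `fixed_gray` using ORB feature matches."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(fixed_gray, None)
    k2, d2 = orb.detectAndCompute(moving_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
    src = np.float32([k2[m.queryIdx].pt for m in matches])   # moving points
    dst = np.float32([k1[m.trainIdx].pt for m in matches])   # fixed points
    # Partial affine = rotation, uniform scale, translation, with RANSAC.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    warped = cv2.warpAffine(moving_gray, M,
                            (fixed_gray.shape[1], fixed_gray.shape[0]))
    return M, warped

# Usage (hypothetical file names):
# M, aligned = coregister(cv2.imread("fixed.png", 0), cv2.imread("moving.png", 0))
```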
NASA Technical Reports Server (NTRS)
Vidulich, M. A.; Wickens, C. D.
1985-01-01
Dissociations between subjective workload assessments and performance were investigated. The difficulty of a Sternberg memory search task was manipulated by varying stimulus presentation rate, stimulus discernibility, value of good performance, and automaticity of performance. All Sternberg task conditions were performed both alone and concurrently with a tracking task. Bipolar subjective workload assessments were collected. Dissociations between workload and performance were found related to automaticity, presentation rate, and motivation level. The results were interpreted as supporting the hypothesis that the specific cognitive processes responsible for subjective assessments can differ from those responsible for performance. The potential contamination these dissociations could inflict on operational workload assessments is discussed.
Design and implementation of online automatic judging system
NASA Astrophysics Data System (ADS)
Liang, Haohui; Chen, Chaojie; Zhong, Xiuyu; Chen, Yuefeng
2017-06-01
To address the low efficiency and poor reliability of manual judging in programming training and competitions, an Online Automatic Judging (OAJ) system was designed. The OAJ system, consisting of a sandboxed judging side and a Web side, automatically compiles and runs submitted code and generates evaluation scores and corresponding reports. To prevent malicious code from damaging the system, submissions run inside a sandbox, ensuring system safety. The OAJ system uses thread pools to run tests in parallel and adopts database optimization mechanisms, such as horizontal table splitting, to improve performance and resource utilization. Test results show that the system offers high performance, reliability, and stability, and is easily extensible.
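The core judging loop described above (compile a submission, run each test case under a time limit, grade cases in parallel with a thread pool) can be sketched as follows. The compiler command, time limit, and verdict strings are illustrative assumptions, not the paper's implementation, and real judges add memory limits and stronger isolation than shown here.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

TIME_LIMIT = 2  # seconds per test case (assumed limit)

def run_case(exe, case):
    """Run one test case against the compiled submission; return a verdict."""
    stdin, expected = case
    try:
        proc = subprocess.run([exe], input=stdin, capture_output=True,
                              text=True, timeout=TIME_LIMIT)
    except subprocess.TimeoutExpired:
        return "Time Limit Exceeded"
    if proc.returncode != 0:
        return "Runtime Error"
    return "Accepted" if proc.stdout.strip() == expected.strip() else "Wrong Answer"

def judge(source, cases, workdir="."):
    """Compile a C submission and grade all test cases in parallel."""
    exe = f"{workdir}/a.out"
    cc = subprocess.run(["gcc", source, "-O2", "-o", exe],
                        capture_output=True, text=True)
    if cc.returncode != 0:
        return ["Compilation Error"] * len(cases)
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(lambda c: run_case(exe, c), cases))

# Usage (hypothetical submission and test data):
# verdicts = judge("solution.c", [("1 2\n", "3"), ("5 7\n", "12")])
```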
NASA Technical Reports Server (NTRS)
Halyo, N.
1979-01-01
The development of a digital automatic control law for a small jet transport to perform a steep final approach in automatic landings is reported, along with the development of a steady-state Kalman filter used to provide smooth estimates to the control law. The control law performs the functions of localizer and glideslope capture, localizer and glideslope track, decrab, and flare. It uses microwave landing system position data together with aircraft body-mounted accelerometer, attitude, and attitude-rate information. Results obtained from a digital simulation of the aircraft dynamics, wind conditions, and sensor noise, using the control law and filter developed, are described.
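A steady-state Kalman filter uses a constant, precomputed gain rather than propagating covariances online, which keeps the flight code simple. The toy 1-D position/velocity smoother below illustrates the idea; the sample period, model, and gain values are arbitrary assumptions, not the filter designed in the report.

```python
import numpy as np

# 1-D position/velocity model with a fixed (steady-state) Kalman gain:
# the gain is computed offline and never updated in the flight code.
dt = 0.05                                  # sample period (s), assumed
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
H = np.array([[1.0, 0.0]])                 # only position is measured
K = np.array([[0.35], [1.10]])             # steady-state gain (illustrative)

def smooth(measurements):
    x = np.zeros((2, 1))                   # state = [position, velocity]
    estimates = []
    for z in measurements:
        x = F @ x                          # predict
        innovation = z - (H @ x).item()    # measurement residual
        x = x + K * innovation             # correct with the constant gain
        estimates.append(x[0, 0])
    return estimates

noisy = 100.0 - 2.0 * np.arange(200) * dt + np.random.randn(200) * 1.5
print(["%.1f" % v for v in smooth(noisy)[:5]])
```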
Automatic segmentation of vessels in in-vivo ultrasound scans
NASA Astrophysics Data System (ADS)
Tamimi-Sarnikowski, Philip; Brink-Kjær, Andreas; Moshavegh, Ramin; Arendt Jensen, Jørgen
2017-03-01
Ultrasound has become highly popular for monitoring atherosclerosis by scanning the carotid artery. The screening involves measuring the thickness of the vessel wall and the diameter of the lumen. An automatic segmentation of the vessel lumen can enable the determination of lumen diameter. This paper presents a fully automatic algorithm for robustly segmenting the vessel lumen in longitudinal B-mode ultrasound images. The automatic segmentation is performed using a combination of B-mode and power Doppler images. The proposed algorithm includes a series of preprocessing steps and segments the vessel by means of the marker-controlled watershed transform. The ultrasound images used in the study were acquired using the bk3000 ultrasound scanner (BK Ultrasound, Herlev, Denmark) with two transducers, "8L2 Linear" and "10L2w Wide Linear" (BK Ultrasound, Herlev, Denmark). The algorithm was evaluated empirically on a dataset of 1770 in-vivo images recorded from 8 healthy subjects. The segmentation results were compared to manual delineations performed by two experienced users. The results showed a sensitivity and specificity of 90.41+/-11.2% and 97.93+/-5.7% (mean+/-standard deviation), respectively. The overlap between the automatic and manual segmentations, measured by the Dice similarity coefficient, was 91.25+/-11.6%. The empirical results demonstrated the feasibility of segmenting the vessel lumen in ultrasound scans using a fully automatic algorithm.
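The central step above is a marker-controlled watershed. A minimal sketch with scikit-image is shown below; the simple intensity-threshold markers and the synthetic test image stand in for the paper's preprocessing chain and real B-mode/power Doppler data.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, sobel
from skimage.segmentation import watershed

def segment_lumen(bmode):
    """Rough marker-controlled watershed segmentation of a dark lumen
    in a grey-level B-mode image with values in [0, 1]."""
    smoothed = gaussian(bmode, sigma=2)        # speckle suppression
    gradient = sobel(smoothed)                 # edges guide the watershed
    markers = np.zeros_like(bmode, dtype=int)
    markers[smoothed < 0.15] = 1               # lumen seed (dark region), assumed threshold
    markers[smoothed > 0.55] = 2               # tissue seed (bright region), assumed threshold
    labels = watershed(gradient, markers)
    return ndi.binary_fill_holes(labels == 1)  # lumen mask

# Demo on a synthetic "vessel": a dark horizontal band in bright tissue.
img = np.ones((128, 256)) * 0.8
img[50:80, :] = 0.05
img += np.random.rand(128, 256) * 0.1
mask = segment_lumen(img)
print("lumen pixels:", int(mask.sum()))
```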
Automatic Collision Avoidance Technology (ACAT)
NASA Technical Reports Server (NTRS)
Swihart, Donald E.; Skoog, Mark A.
2007-01-01
This document presents two views of the Automatic Collision Avoidance Technology (ACAT). The first viewgraph presentation reviews the development and system design of ACAT. Two types of ACAT exist: Automatic Ground Collision Avoidance (AGCAS) and Automatic Air Collision Avoidance (AACAS). The AGCAS uses Digital Terrain Elevation Data (DTED) for mapping functions and uses navigation data to place the aircraft on the map. It then scans the DTED in front of and around the aircraft and uses the future aircraft trajectory (5g) to command an automatic fly-up maneuver when required. The AACAS uses a data link to determine position and closing rate. It contains several canned maneuvers to avoid collision. Automatic maneuvers can occur at the last instant, and both aircraft maneuver when using the data link. The system can use a sensor in place of the data link. The second viewgraph presentation reviews the development of a flight test and its evaluation, including a review of the operation of the AGCAS and a comparison with a pilot's performance; the same review is given for the AACAS.
Flight performance of the TCV B-737 airplane at Kennedy Airport using TRSB/MLS guidance
NASA Technical Reports Server (NTRS)
White, W. F.; Clark, L. V.
1979-01-01
The terminal configured vehicle (TCV) B 737 was flown in demonstration of the time reference scanning beam/microwave landing system (TRSB/MLS). The flight performance of the TCV airplane during the demonstration automatic approaches and landings while utilizing TRSB/MLS guidance is reported. The TRSB/MLS is shown to provide the terminal area guidance necessary for flying curved automatic approaches with short finals.
Attention to Automatic Movements in Parkinson's Disease: Modified Automatic Mode in the Striatum
Wu, Tao; Liu, Jun; Zhang, Hejia; Hallett, Mark; Zheng, Zheng; Chan, Piu
2015-01-01
We investigated neural correlates when attending to a movement that could be made automatically in healthy subjects and Parkinson's disease (PD) patients. Subjects practiced a visuomotor association task until they could perform it automatically, and then directed their attention back to the automated task. Functional MRI was obtained during the early-learning, automatic stage, and when re-attending. In controls, attention to automatic movement induced more activation in the dorsolateral prefrontal cortex (DLPFC), anterior cingulate cortex, and rostral supplementary motor area. The motor cortex received more influence from the cortical motor association regions. In contrast, the pattern of the activity and connectivity of the striatum remained at the level of the automatic stage. In PD patients, attention enhanced activity in the DLPFC, premotor cortex, and cerebellum, but the connectivity from the putamen to the motor cortex decreased. Our findings demonstrate that, in controls, when a movement achieves the automatic stage, attention can influence the attentional networks and cortical motor association areas, but has no apparent effect on the striatum. In PD patients, attention induces a shift from the automatic mode back to the controlled pattern within the striatum. The shifting between controlled and automatic behaviors relies in part on striatal function. PMID:24925772
Application of nonlinear transformations to automatic flight control
NASA Technical Reports Server (NTRS)
Meyer, G.; Su, R.; Hunt, L. R.
1984-01-01
The theory of transformations of nonlinear systems to linear ones is applied to the design of an automatic flight controller for the UH-1H helicopter. The helicopter mathematical model is described and it is shown to satisfy the necessary and sufficient conditions for transformability. The mapping is constructed, taking the nonlinear model to canonical form. The performance of the automatic control system in a detailed simulation on the flight computer is summarized.
Data visualization as a tool for improved decision making within transit agencies
DOT National Transportation Integrated Search
2007-02-01
TriMet, the regional transit provider in the Portland, OR, area has been a leader in bus transit performance monitoring using data collected via automatic vehicle location and automatic passenger counter technologies. This information is collected an...
Automatic Indexing Using Term Discrimination and Term Precision Measurements
ERIC Educational Resources Information Center
Salton, G.; And Others
1976-01-01
These two indexing systems are briefly described and experimental evidence is cited showing that a combination of both theories produces better retrieval performance than either one alone. Appropriate conclusions are reached concerning viable automatic indexing procedures usable in practice. (Author)
NASA Astrophysics Data System (ADS)
Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.
2016-03-01
Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed to assess the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using a leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
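The two failure-detection strategies described above (a univariate box-whisker rule and supervised classifiers evaluated with leave-one-out cross-validation) can be sketched with scikit-learn as follows. The synthetic feature matrix and labels are stand-ins for the segmentation-derived features and the manual success/failure categorization in the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))                       # stand-in segmentation features
y = (X[:, 0] + rng.normal(scale=0.5, size=100) > 1.0).astype(int)  # 1 = failure

# 1) Univariate box-whisker (IQR) rule on a single feature.
q1, q3 = np.percentile(X[:, 0], [25, 75])
iqr = q3 - q1
flagged = (X[:, 0] < q1 - 1.5 * iqr) | (X[:, 0] > q3 + 1.5 * iqr)
print("box-whisker flags:", int(flagged.sum()), "cases")

# 2) Supervised classifiers with leave-one-out cross-validation.
classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "RFC": RandomForestClassifier(n_estimators=50, random_state=0),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(f"{name}: LOO accuracy = {acc:.3f}")
```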
Braem, Senne; Liefooghe, Baptist; De Houwer, Jan; Brass, Marcel; Abrahamse, Elger L
2017-03-01
Unlike other animals, humans have the unique ability to share and use verbal instructions to prepare for upcoming tasks. Recent research showed that instructions are sufficient for the automatic, reflex-like activation of responses. However, systematic studies into the limits of these automatic effects of task instructions remain relatively scarce. In this study, the authors set out to investigate whether this instruction-based automatic activation of responses can be context-dependent. Specifically, participants performed a task of which the stimulus-response rules and context (location on the screen) could either coincide or not with those of an instructed to-be-performed task (whose instructions changed every run). In 2 experiments, the authors showed that the instructed task rules had an automatic impact on performance-performance was slowed down when the merely instructed task rules did not coincide, but, importantly, this effect was not context-dependent. Interestingly, a third and fourth experiment suggests that context dependency can actually be observed, but only when practicing the task in its appropriate context for over 60 trials or after a sufficient amount of practice on a fixed context (the context was the same for all instructed tasks). Together, these findings seem to suggest that instructions can establish stimulus-response representations that have a reflexive impact on behavior but are insensitive to the context in which the task is known to be valid. Instead, context-specific task representations seem to require practice. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Optimizing Input/Output Using Adaptive File System Policies
NASA Technical Reports Server (NTRS)
Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.
1996-01-01
Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.
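The idea above is to classify the observed access pattern and pick a caching/prefetching policy accordingly, then tune it with performance feedback. The toy sketch below shows only the classification-and-selection step; the pattern classes, thresholds, and policy table are assumptions, not the paper's framework.

```python
def classify_accesses(offsets):
    """Crudely classify a sequence of file offsets as sequential,
    strided, or random based on the step between consecutive accesses."""
    steps = [b - a for a, b in zip(offsets, offsets[1:])]
    if not steps:
        return "unknown"
    if all(s == steps[0] for s in steps):
        # small constant positive step -> sequential; larger step -> strided
        return "sequential" if 0 < steps[0] <= 1 << 16 else "strided"
    return "random"

# Policy table: access pattern -> (caching policy, prefetch depth), assumed.
POLICIES = {
    "sequential": ("read-ahead", 8),
    "strided":    ("strided-prefetch", 4),
    "random":     ("demand-only", 0),
    "unknown":    ("demand-only", 0),
}

def choose_policy(offsets):
    pattern = classify_accesses(offsets)
    return pattern, POLICIES[pattern]

print(choose_policy([0, 4096, 8192, 12288]))        # sequential reads
print(choose_policy([0, 65536 * 4, 65536 * 8]))     # large fixed stride
print(choose_policy([8192, 0, 40960, 4096]))        # random accesses
```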
1982-11-01
Avionic Systems Integration Facilities, Mark van den Broek and Paul M. Vicen, AFLC/LOE. Planning of Operational Software Implementation Tool ... classified as software tools, including: Operating System; Language Processors (compilers, assemblers, link editors); Source Editors; Debug Systems; ... Data Base Systems; Utilities; etc. This talk addresses itself to the current set of tools provided to JOVIAL J73 1750A application programmers by ...
User's guide to the Octopus computer network (the SHOC manual)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, C.; Thompson, D.; Whitten, G.
1977-07-18
This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; ''quick'' methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers, two CDC STAR computers, and a broad array of peripheral equipment, from any of 800 or so remote terminals. 16 figures, 7 tables.
User's guide to the Octopus computer network (the SHOC manual)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, C.; Thompson, D.; Whitten, G.
1976-10-07
This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; ''quick'' methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers, two CDC STAR computers, and a broad array of peripheral equipment, from any of 800 or so remote terminals. 8 figures, 4 tables.
User's guide to the Octopus computer network (the SHOC manual)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, C.; Thompson, D.; Whitten, G.
1975-06-02
This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; ''quick'' methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers and a broad array of peripheral equipment, from any of 800 remote terminals. Octopus will soon include the Laboratory's STAR-100 computers. 9 figures, 5 tables. (auth)
NASA Technical Reports Server (NTRS)
Smith, W. W.
1973-01-01
A Langley Research Center version of NASTRAN Level 15.1.0 designed to provide the analyst with an added tool for debugging massive NASTRAN input data is described. The program checks all NASTRAN input data cards and displays on a CRT the graphic representation of the undeformed structure. In addition, the program permits the display and alteration of input data and allows reexecution without physically resubmitting the job. Core requirements on the CDC 6000 computer are approximately 77,000 octal words of central memory.
Automated solar panel assembly line
NASA Technical Reports Server (NTRS)
Somberg, H.
1981-01-01
The initial stage of the automated solar panel assembly line program was devoted to concept development and proof of approach through simple experimental verification. In this phase, laboratory bench models were built to demonstrate and verify concepts. Following this phase was machine design and integration of the various machine elements. The third phase was machine assembly and debugging. In this phase, the various elements were operated as a unit and modifications were made as required. The final stage of development was the demonstration of the equipment in a pilot production operation.
Jdpd: an open java simulation kernel for molecular fragment dissipative particle dynamics.
van den Broek, Karina; Kuhn, Hubert; Zielesny, Achim
2018-05-21
Jdpd is an open Java simulation kernel for Molecular Fragment Dissipative Particle Dynamics with parallelizable force calculation, efficient caching options and fast property calculations. It is characterized by an interface and factory-pattern driven design for simple code changes and may help to avoid problems of polyglot programming. Detailed input/output communication, parallelization and process control as well as internal logging capabilities for debugging purposes are supported. The new kernel may be utilized in different simulation environments ranging from flexible scripting solutions up to fully integrated "all-in-one" simulation systems.
A monitoring system based on electric vehicle three-stage wireless charging
NASA Astrophysics Data System (ADS)
Hei, T.; Liu, Z. Z.; Yang, Y.; Hongxing, CHEN; Zhou, B.; Zeng, H.
2016-08-01
A monitoring system for three-stage wireless charging of electric vehicles was designed. The vehicle terminal contains a core board for battery information collection and charging control, while a power-measurement and charging-control core board at the transmitting terminal communicates with the receiver via Bluetooth. A touch-screen display unit based on MCGS (Monitor and Control Generated System) was designed to simulate charging behavior and to ease system debugging. Practical application showed that the system is stable and reliable and has favorable application prospects.
ACCELERATORS: Preliminary application of turn-by-turn data analysis to the SSRF storage ring
NASA Astrophysics Data System (ADS)
Chen, Jian-Hui; Zhao, Zhen-Tang
2009-07-01
There is growing interest in utilizing the beam position monitor turn-by-turn (TBT) data to debug accelerators. TBT data can be used to determine the linear optics, coupled optics and nonlinear behaviors of the storage ring lattice. This is not only a useful complement to other methods of determining the linear optics such as LOCO, but also provides a possibility to uncover more hidden phenomena. In this paper, a preliminary application of a β function measurement to the SSRF storage ring is presented.
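As a rough illustration of what TBT analysis extracts, the fractional betatron tune can be read off the FFT of one BPM's turn-by-turn signal, and the free-oscillation amplitude at each BPM scales with the square root of the local beta function. The sketch below uses synthetic data and is not the SSRF analysis code.

```python
import numpy as np

def betatron_tune(tbt):
    """Fractional betatron tune from one BPM's turn-by-turn reading,
    taken as the dominant FFT peak after removing the DC component."""
    x = tbt - np.mean(tbt)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0)     # in units of 1/turn
    return freqs[np.argmax(spectrum[1:]) + 1]

# Synthetic example: 1024 turns, fractional tune 0.31, small noise.
n, nu = 1024, 0.31
turns = np.arange(n)
tbt = 0.8 * np.cos(2 * np.pi * nu * turns + 0.4) + 0.02 * np.random.randn(n)
print("measured tune ~", round(betatron_tune(tbt), 3))

# Ratios of the fitted oscillation amplitudes between BPMs give relative
# beta values, since amplitude_i is proportional to sqrt(beta_i).
```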
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Erik; Blume-Kohout, Robin; Rudinger, Kenneth
PyGSTi is an implementation of Gate Set Tomography in the python programming language. Gate Set Tomography (GST) is a theory and protocol for simultaneously estimating the state preparation, gate operations, and measurement effects of a physical system of one or many quantum bits (qubits). These estimates are based entirely on the statistics of experimental measurements, and their interpretation and analysis can provide a detailed understanding of the types of errors/imperfections in the physical system. In this way, GST provides not only a means of certifying the "goodness" of qubits but also a means of debugging (i.e. improving) them.
The RAVE/VERTIGO vertex reconstruction toolkit and framework
NASA Astrophysics Data System (ADS)
Waltenberger, W.; Mitaroff, W.; Moser, F.; Pflugfelder, B.; Riedel, H. V.
2008-07-01
A detector-independent toolkit for vertex reconstruction (RAVE) is being developed, along with a standalone framework (VERTIGO) for testing, analyzing and debugging. The core algorithms represent the state of the art for geometric vertex finding and fitting by both linear (Kalman filter) and robust estimation methods. Main design goals are ease of use, flexibility for embedding into existing software frameworks, extensibility, and openness. The implementation is based on modern object-oriented techniques, is coded in C++ with interfaces for Java and Python, and follows an open-source approach. A beta release is available.
Off-line robot programming and graphical verification of path planning
NASA Technical Reports Server (NTRS)
Tonkay, Gregory L.
1989-01-01
The objective of this project was to develop or specify an integrated environment for off-line programming, graphical path verification, and debugging for robotic systems. Two alternatives were compared. The first was the integration of the ASEA Off-line Programming package with ROBSIM, a robotic simulation program. The second alternative was the purchase of the commercial product IGRIP. The needs of the RADL (Robotics Applications Development Laboratory) were explored and the alternatives were evaluated based on these needs. As a result, IGRIP was proposed as the best solution to the problem.
Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Sarukkai, Sekhar R.; Mehra, Pankaj; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
This paper presents a methodology for debugging the performance of message-passing programs on both tightly coupled and loosely coupled distributed-memory machines. The AIMS (Automated Instrumentation and Monitoring System) toolkit, a suite of software tools for measurement and analysis of performance, is introduced and its application illustrated using several benchmark programs drawn from the field of computational fluid dynamics. AIMS includes (i) Xinstrument, a powerful source-code instrumentor, which supports both Fortran77 and C as well as a number of different message-passing libraries including Intel's NX, Thinking Machines' CMMD, and PVM; (ii) Monitor, a library of timestamping and trace-collection routines that run on supercomputers (such as Intel's iPSC/860, Delta, and Paragon, and Thinking Machines' CM5) as well as on networks of workstations (including Convex Cluster and SparcStations connected by a LAN); (iii) Visualization Kernel, a trace-animation facility that supports source-code clickback, simultaneous visualization of computation and communication patterns, as well as analysis of data movements; (iv) Statistics Kernel, an advanced profiling facility that associates a variety of performance data with various syntactic components of a parallel program; (v) Index Kernel, a diagnostic tool that helps pinpoint performance bottlenecks through the use of abstract indices; (vi) Modeling Kernel, a facility for automated modeling of message-passing programs that supports both simulation-based and analytical approaches to performance prediction and scalability analysis; (vii) Intrusion Compensator, a utility for recovering true performance from observed performance by removing the overheads of monitoring and their effects on the communication pattern of the program; and (viii) Compatibility Tools, which convert AIMS-generated traces into formats used by other performance-visualization tools, such as ParaGraph, Pablo, and certain AVS/Explorer modules.
Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool
NASA Astrophysics Data System (ADS)
Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.
1997-12-01
Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to the other, and performance often comes short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool which enables application programmers to specify at a high-level of abstraction the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP enables combining efficiently parallel storage access routines and image processing sequential operations. This paper shows how processing and I/O intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.
Automatic Management of Parallel and Distributed System Resources
NASA Technical Reports Server (NTRS)
Yan, Jerry; Ngai, Tin Fook; Lundstrom, Stephen F.
1990-01-01
Viewgraphs on automatic management of parallel and distributed system resources are presented. Topics covered include: parallel applications; intelligent management of multiprocessing systems; performance evaluation of parallel architecture; dynamic concurrent programs; compiler-directed system approach; lattice gaseous cellular automata; and sparse matrix Cholesky factorization.
Automatic performance budget: towards a risk reduction
NASA Astrophysics Data System (ADS)
Laporte, Philippe; Blake, Simon; Schmoll, Jürgen; Rulten, Cameron; Savoie, Denis
2014-08-01
In this paper, we discuss the performance matrix of the SST-GATE telescope, developed to allow us to partition and allocate the important characteristics to the various subsystems and to describe the process used to verify that the current design will deliver the required performance. Due to the integrated nature of the telescope, a large number of parameters have to be controlled, and effective calculation tools such as an automatic performance budget must be developed. Its main advantages are that it alleviates the work of the system engineer when the design changes, avoids errors during any re-allocation process, and automatically recalculates the scientific performance of the instrument. We explain the method used to convert the ensquared energy (EE) and the signal-to-noise ratio (SNR) required by the science cases into requirements on the "as designed" instrument. To ensure successful design, integration, and verification of the next generation of instruments, it is of the utmost importance to have methods to control and manage the instrument's critical performance characteristics from the very early design steps, in order to limit technical and cost risks in the project development. Such a performance budget is a tool towards this goal.
Automatic sentence extraction for the detection of scientific paper relations
NASA Astrophysics Data System (ADS)
Sibaroni, Y.; Prasetiyowati, S. S.; Miftachudin, M.
2018-03-01
The relations between scientific papers are very useful for researchers to see the interconnections between papers quickly. By observing inter-article relationships, researchers can identify, among other things, the weaknesses of existing research, the performance improvements achieved to date, and the tools or data typically used in research in specific fields. Methods developed so far to detect paper relations include machine learning and rule-based methods. However, a problem remains in the sentence-extraction step, which is still done manually; this manual process makes the detection of paper relations slow and inefficient. To overcome this problem, this study performs automatic sentence extraction, with paper relations identified from the citation sentences. The performance of the resulting system is then compared with that of manual extraction. The analysis suggests that automatic sentence extraction yields a very high level of performance in the detection of paper relations, close to that of manual sentence extraction.
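The step being automated is pulling citation sentences out of a paper body so that relations can be derived from them. A toy regex-based extractor is sketched below; the citation-marker patterns and the sentence splitter are simplifying assumptions, not the method evaluated in the paper.

```python
import re

# Very simple citation markers: "[12]", "(Smith et al., 2016)", "(Lee, 2010)".
CITATION = re.compile(r"\[\d+\]|\([A-Z][A-Za-z]+(?: et al\.)?,\s*\d{4}\)")

def citation_sentences(text):
    """Split text into sentences and keep those containing a citation marker."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s.strip() for s in sentences if CITATION.search(s)]

sample = ("Prior work used rule-based methods [3]. Our approach differs. "
          "Deep models were explored by (Kim et al., 2017) with mixed results.")
for s in citation_sentences(sample):
    print("-", s)
```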
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi
1994-01-01
An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
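ADIFOR itself works by transforming Fortran source, but the underlying idea of automatic differentiation for design sensitivities can be illustrated with a tiny forward-mode sketch using dual numbers; the cantilever deflection formula and parameter values below are illustrative, not from the paper.

```python
class Dual:
    """Minimal forward-mode AD value: carries a value and its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__
    def __truediv__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val / o.val,
                    (self.dot * o.val - self.val * o.dot) / o.val ** 2)

def tip_deflection(P, L, E, I):
    """Cantilever tip deflection: delta = P * L^3 / (3 * E * I)."""
    return P * L * L * L / (3.0 * E * I)

# Sensitivity of the deflection with respect to the beam length L.
L = Dual(2.0, 1.0)                       # seed dL/dL = 1
d = tip_deflection(1000.0, L, 200e9, 1e-6)
print("delta =", d.val, " d(delta)/dL =", d.dot)
```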
Investigation of an automatic trim algorithm for restructurable aircraft control
NASA Technical Reports Server (NTRS)
Weiss, J.; Eterno, J.; Grunberg, D.; Looze, D.; Ostroff, A.
1986-01-01
This paper develops and solves an automatic trim problem for restructurable aircraft control. The trim solution is applied as a feed-forward control to reject measurable disturbances following control element failures. Disturbance rejection and command following performances are recovered through the automatic feedback control redesign procedure described by Looze et al. (1985). For this project the existence of a failure detection mechanism is assumed, and methods to cope with potential detection and identification inaccuracies are addressed.
NASA MSFC hardware in the loop simulations of automatic rendezvous and capture systems
NASA Technical Reports Server (NTRS)
Tobbe, Patrick A.; Naumann, Charles B.; Sutton, William; Bryan, Thomas C.
1991-01-01
Two complementary hardware-in-the-loop simulation facilities for automatic rendezvous and capture systems at MSFC are described. One, the Flight Robotics Laboratory, uses an 8 DOF overhead manipulator with a work volume of 160 by 40 by 23 feet to evaluate automatic rendezvous algorithms and range/rate sensing systems. The other, the Space Station/Station Operations Mechanism Test Bed, uses a 6 DOF hydraulic table to perform docking and berthing dynamics simulations.
Recent Research on the Automated Mass Measuring System
NASA Astrophysics Data System (ADS)
Yao, Hong; Ren, Xiao-Ping; Wang, Jian; Zhong, Rui-Lin; Ding, Jing-An
The development of robotic mass-measurement systems and representative automatic systems are reviewed, and a sub-multiple calibration scheme adopted on a fully automatic CCR10 system is then discussed. The automatic robot system can perform the dissemination of the mass scale without any manual intervention, as well as fast calibration of weight samples against a reference weight. Finally, an evaluation of the expanded uncertainty is given.
Sasanguie, Delphine; Reynvoet, Bert
2014-01-01
Several studies have shown that performance on symbolic number tasks is related to individual differences in arithmetic. However, it is not clear which process is responsible for this association, i.e. fast, automatic processing of symbols per se or access to the underlying non-symbolic representation of the symbols. To dissociate between both options, adult participants performed an audiovisual matching paradigm. Auditory presented number words needed to be matched with either Arabic digits or dot patterns. The results revealed that a distance effect was present in the dots-number word matching task and absent in the digit-number word matching task. Crucially, only performance in the digit task contributed to the variance in arithmetical abilities. This led us to conclude that adults' arithmetic builds on the ability to quickly and automatically process Arabic digits, without the underlying non-symbolic magnitude representation being activated. PMID:24505308
POPCORN: a Supervisory Control Simulation for Workload and Performance Research
NASA Technical Reports Server (NTRS)
Hart, S. G.; Battiste, V.; Lester, P. T.
1984-01-01
A multi-task simulation of a semi-automatic supervisory control system was developed to provide an environment in which training, operator strategy development, failure detection and resolution, levels of automation, and operator workload can be investigated. The goal was to develop a well-defined, but realistically complex, task that would lend itself to model-based analysis. The name of the task (POPCORN) reflects the visual display, which depicts different task elements milling around, waiting to be released and "pop" out to be performed. The operator's task was to complete each of 100 task elements, which were represented by different symbols, by selecting a target task and entering the desired command. The simulated automatic system then completed the selected function automatically. Highly significant differences in performance, strategy, and rated workload were found as a function of all experimental manipulations (except reward/penalty).
Pina, Violeta; Castillo, Alejandro; Cohen Kadosh, Roi; Fuentes, Luis J.
2015-01-01
Previous studies have suggested that numerical processing relates to mathematical performance, but it seems that such relationship is more evident for intentional than for automatic numerical processing. In the present study we assessed the relationship between the two types of numerical processing and specific mathematical abilities in a sample of 109 children in grades 1–6. Participants were tested in an ample range of mathematical tests and also performed both a numerical and a size comparison task. The results showed that numerical processing related to mathematical performance only when inhibitory control was involved in the comparison tasks. Concretely, we found that intentional numerical processing, as indexed by the numerical distance effect in the numerical comparison task, was related to mathematical reasoning skills only when the task-irrelevant dimension (the physical size) was incongruent; whereas automatic numerical processing, indexed by the congruency effect in the size comparison task, was related to mathematical calculation skills only when digits were separated by small distance. The observed double dissociation highlights the relevance of both intentional and automatic numerical processing in mathematical skills, but when inhibitory control is also involved. PMID:25873909
Control Law for Automatic Landing Using Fuzzy-Logic Control
NASA Astrophysics Data System (ADS)
Kato, Akio; Inagaki, Yoshiki
The effectiveness of a fuzzy-logic control law for automatic landing was investigated. The law handles both the control needed to lead an aircraft from horizontal flight at an altitude of 500 meters to flight along the glide-path course near the runway, and the control needed to land the aircraft smoothly on the runway. The control law was designed to meet the goals of directing an aircraft from horizontal flight to flight along a glide-path course quickly and smoothly, and of landing smoothly on a runway. The design of the control law and the evaluation of its performance were carried out considering the ground effect at landing. As a result, it was confirmed that the design goals were achieved. Even if the characteristics of the aircraft change greatly, the proposed control law is able to maintain its performance. Moreover, it was confirmed that the aircraft can be landed safely in air turbulence. The present paper indicates that fuzzy-logic control is an effective and flexible method when applied to automatic landing, and a design method for the control law using fuzzy-logic control was obtained.
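A minimal Mamdani-style sketch of the kind of fuzzy mapping involved: glideslope deviation and its rate are fuzzified with triangular membership functions, a few rules are evaluated, and a pitch command is defuzzified by a weighted average. The membership ranges, rules, and output centers are illustrative assumptions, not the paper's control-law design.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_pitch_command(dev, dev_rate):
    """Map glideslope deviation (deg) and its rate (deg/s) to a pitch
    command (deg) using three rules and centroid-style defuzzification."""
    # Fuzzify the deviation (negative / near zero / positive).
    neg, zero, pos = tri(dev, -2, -1, 0), tri(dev, -1, 0, 1), tri(dev, 0, 1, 2)
    settling = tri(dev_rate, -0.5, 0.0, 0.5)
    # Rule strengths mapped to output singletons (pitch up / hold / pitch down).
    w = np.array([neg, np.minimum(zero, settling), pos])
    u = np.array([2.0, 0.0, -2.0])              # output centers (deg), assumed
    return float((w * u).sum() / max(w.sum(), 1e-9))

for dev in (-1.5, 0.0, 1.2):
    print(dev, "->", round(fuzzy_pitch_command(dev, 0.1), 2))
```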
NASA Technical Reports Server (NTRS)
Freeman, Frederick
1995-01-01
A biocybernetic system for use in adaptive automation was evaluated using EEG indices based on the beta, alpha, and theta bandwidths. Subjects performed a compensatory tracking task while their EEG was recorded and one of three engagement indices was derived: beta/(alpha + theta), beta/alpha, or 1/alpha. The task was switched between manual and automatic modes as a function of the subjects' level of engagement and whether they were under a positive or negative feedback condition. It was hypothesized that negative feedback would produce more switches between manual and automatic modes, and that the beta/(alpha + theta) index would produce the strongest effect. The results confirmed these hypotheses. There were no systematic changes in these effects over three 16-minute trials. Tracking performance was found to be better under negative feedback. An analysis of the different EEG bands under positive and negative feedback in manual and automatic modes found more beta power in the positive feedback/manual condition and less in the positive feedback/automatic condition. The opposite effect was observed for alpha and theta power. The implications of biocybernetic systems for adaptive automation are discussed.
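The switching logic above hinges on an engagement index computed from EEG band powers. The sketch below estimates theta, alpha, and beta power with a Welch periodogram, forms beta/(alpha + theta), and toggles the task mode; the band edges, window length, and threshold are assumed values, not those of the study.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 22)}  # Hz, assumed edges

def band_power(eeg, fs, lo, hi):
    f, pxx = welch(eeg, fs=fs, nperseg=2 * fs)
    mask = (f >= lo) & (f < hi)
    return pxx[mask].sum() * (f[1] - f[0])      # approximate band power

def engagement_index(eeg, fs):
    """beta / (alpha + theta) computed over one EEG window."""
    p = {name: band_power(eeg, fs, lo, hi) for name, (lo, hi) in BANDS.items()}
    return p["beta"] / (p["alpha"] + p["theta"])

def next_mode(index, baseline, feedback="negative"):
    """Negative feedback: low engagement hands the task back to the operator
    (manual); high engagement automates it. Positive feedback reverses this."""
    if feedback == "negative":
        return "manual" if index < baseline else "automatic"
    return "automatic" if index < baseline else "manual"

fs = 256
eeg = np.random.randn(20 * fs)                  # stand-in EEG window
idx = engagement_index(eeg, fs)
print("index:", round(idx, 3), "->", next_mode(idx, baseline=0.5))
```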
NASA Technical Reports Server (NTRS)
Brown, S. C.; Hardy, G. H.; Hindson, W. S.
1984-01-01
As part of a comprehensive flight-test investigation of short takeoff and landing (STOL) operating systems for the terminal systems for the terminal area, an automatic landing system has been developed and evaluated for a light wing-loading turboprop-powered aircraft. An advanced digital avionics system performed display, navigation, guidance, and control functions for the test aircraft. Control signals were generated in order to command powered actuators for all conventional controls and for a set of symmetrically driven wing spoilers. This report describes effects of the spoiler control on longitudinal autoland (automatic landing) performance. Flight-test results, with and without spoiler control, are presented and compared with available (basically, conventional takeoff and landing) performance criteria. These comparisons are augmented by results from a comprehensive simulation of the controlled aircraft that included representations of the microwave landing system navigation errors that were encountered in flight as well as expected variations in atmospheric turbulence and wind shear. Flight-test results show that the addition of spoiler control improves the touchdown performance of the automatic landing system. Spoilers improve longitudinal touchdown and landing pitch-attitude performance, particularly in tailwind conditions. Furthermore, simulation results indicate that performance would probably be satisfactory for a wider range of atmospheric disturbances than those encountered in flight. Flight results also indicate that the addition of spoiler control during the final approach does not result in any measurable change in glidepath track performance, and results in a very small deterioration in airspeed tracking. This difference contrasts with simulations results, which indicate some improvement in glidepath tracking and no appreciable change in airspeed tracking. The modeling problem in the simulation that contributed to this discrepancy with flight was not resolved.
Automatic Layout Design for Power Module
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ning, Puqi; Wang, Fei; Ngo, Khai
The layout of power modules is one of the most important elements in power module design, especially for high power densities, where couplings are increased. This paper presents an automatic layout design process for high-power-density modules based on a genetic algorithm (GA), with practical considerations introduced into the layout optimization. Detailed GA implementations are described for both the outer loop and the inner loop. As verified by a design example, the results of the automatic design process presented here are better than those from manual design and also better than the results from a popular design software package. This automatic design procedure could be a major step toward improving the overall performance of future layout designs.
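The abstract does not give the GA encoding, fitness function, or loop structure in detail; the following is a generic genetic-algorithm skeleton over an invented permutation encoding and placeholder cost, meant only to illustrate the kind of search loop such an automatic layout tool could run (it is not the paper's nested outer/inner-loop method).

```python
import random

def layout_cost(order):
    """Placeholder cost for a component ordering (assumption, not the paper's model)."""
    return sum(abs(component - position) for position, component in enumerate(order))

def mutate(order):
    """Swap two components, a simple mutation operator."""
    a, b = random.sample(range(len(order)), 2)
    child = order[:]
    child[a], child[b] = child[b], child[a]
    return child

def genetic_layout_search(n_components=8, pop_size=30, generations=200):
    population = [random.sample(range(n_components), n_components)
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=layout_cost)
        survivors = population[: pop_size // 2]                 # truncation selection
        children = [mutate(random.choice(survivors))            # mutation-only offspring
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return min(population, key=layout_cost)

print(genetic_layout_search())
```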
Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information.
Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Yan, Bin; Li, Jianxin
2015-01-01
Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistically significant improvements in classification accuracy in both experiments, namely motor imagery and emotion recognition.
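A rough sketch of the wavelet-ICA idea described above, using PyWavelets and scikit-learn; the wavelet family, decomposition level, and the kurtosis threshold used here to flag artifact components (standing in for the paper's a priori artifact information) are illustrative assumptions rather than the published method.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def wavelet_ica_clean(eeg, wavelet="db4", level=4, kurt_thresh=5.0):
    """eeg: array of shape (n_channels, n_samples). Returns a cleaned copy."""
    # 1. Wavelet decomposition per channel; keep the coefficient layout for reconstruction.
    coeffs = [pywt.wavedec(ch, wavelet, level=level) for ch in eeg]
    flat = np.vstack([np.concatenate(c) for c in coeffs])

    # 2. ICA separation of the wavelet-domain signals.
    ica = FastICA(n_components=flat.shape[0], random_state=0)
    sources = ica.fit_transform(flat.T).T

    # 3. Flag artifact components (placeholder rule: high kurtosis), then zero them.
    keep = kurtosis(sources, axis=1) < kurt_thresh
    sources[~keep] = 0.0

    # 4. Reconstruct: inverse ICA, then inverse wavelet transform per channel.
    cleaned_flat = ica.inverse_transform(sources.T).T
    cleaned = []
    for ch_flat, ch_coeffs in zip(cleaned_flat, coeffs):
        sizes = [len(c) for c in ch_coeffs]
        splits = np.split(ch_flat, np.cumsum(sizes)[:-1])
        cleaned.append(pywt.waverec(list(splits), wavelet)[: eeg.shape[1]])
    return np.array(cleaned)
```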
Automatic contact in DYNA3D for vehicle crashworthiness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whirley, R.G.; Engelmann, B.E.
1993-07-15
This paper presents a new formulation for the automatic definition and treatment of mechanical contact in explicit nonlinear finite element analysis. Automatic contact offers the benefits of significantly reduced model construction time and fewer opportunities for user error, but faces significant challenges in reliability and computational costs. This paper discusses in detail a new four-step automatic contact algorithm. Key aspects of the proposed method include automatic identification of adjacent and opposite surfaces in the global search phase, and the use of a smoothly varying surface normal which allows a consistent treatment of shell intersection and corner contact conditions without ad-hoc rules. The paper concludes with three examples which illustrate the performance of the newly proposed algorithm in the public DYNA3D code.
Research and Development of Fully Automatic Alien Smoke Stack and Packaging System
NASA Astrophysics Data System (ADS)
Yang, Xudong; Ge, Qingkuan; Peng, Tao; Zuo, Ping; Dong, Weifu
2017-12-01
To address the low efficiency of manual sorting and packaging at current tobacco distribution centers, a safe, efficient, fully automatic alien (irregularly shaped) smoke stack and packaging system was developed. The system adopts PLC control technology, servo control technology, robot technology, image recognition technology and human-computer interaction technology. The characteristics, principles, control process and key technologies of the system are discussed in detail. Installation and commissioning showed that the fully automatic alien smoke stack and packaging system performs well and meets the requirements for handling shaped cigarettes.
Automatic Operation For A Robot Lawn Mower
NASA Astrophysics Data System (ADS)
Huang, Y. Y.; Cao, Z. L.; Oh, S. J.; Kattan, E. U.; Hall, E. L.
1987-02-01
A domestic mobile robot, a lawn mower that performs in an automatic operation mode, has been built in the Center of Robotics Research, University of Cincinnati. The robot lawn mower automatically completes its work with the region-filling operation, a new kind of path planning for mobile robots. Some strategies for region-filling path planning have been developed for a partly known or an unknown environment. An advanced omnidirectional navigation system and a multisensor-based control system are also used in the automatic operation. Research on the robot lawn mower, especially on region-filling path planning, is significant for industrial and agricultural applications.
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Strong, J.; Woodward, R. H.; Pierce, H.
1991-01-01
Results are presented on an automatic stereo analysis of cloud-top heights from nearly simultaneous satellite image pairs from the GOES and NOAA satellites, using a massively parallel processor computer. Comparisons of computer-derived height fields and manually analyzed fields indicate that the automatic analysis technique shows promise for performing routine stereo analysis in a real-time environment, providing a useful forecasting tool by augmenting observational data sets of severe thunderstorms and hurricanes. Simulations using synthetic stereo data show that it is possible to automatically resolve small-scale features, such as clouds 4000 m in diameter, to about 1500 m in the vertical.
Small passenger car transmission test-Chevrolet 200 transmission
NASA Technical Reports Server (NTRS)
Bujold, M. P.
1980-01-01
The small passenger car transmission was tested to supply electric vehicle manufacturers with technical information regarding the performance of commercially available transmissions which would enable them to design a more energy efficient vehicle. With this information the manufacturers could estimate vehicle driving range as well as speed and torque requirements for specific road load performance characteristics. A 1979 Chevrolet Model 200 automatic transmission was tested per a passenger car automatic transmission test code (SAE J651b) which required drive performance, coast performance, and no load test conditions. The transmission attained maximum efficiencies in the mid-eighty percent range for both drive performance tests and coast performance tests. Torque, speed and efficiency curves map the complete performance characteristics for the Chevrolet Model 200 transmission.
Jung, Jaehoon; Yoon, Inhye; Paik, Joonki
2016-01-01
This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978
Automatic Condensation of Electronic Publications by Sentence Selection.
ERIC Educational Resources Information Center
Brandow, Ronald; And Others
1995-01-01
Describes a system that performs automatic summaries of news from a large commercial news service encompassing 41 different publications. This system was compared to a system that used only the lead sentences of the texts. Lead-based summaries significantly outperformed the sentence-selection summaries. (AEF)
The Infrared Automatic Mass Screening (IRAMS) System For Printed Circuit Board Fault Detection
NASA Astrophysics Data System (ADS)
Hugo, Perry W.
1987-05-01
The Office of the Program Manager for TMDE (OPM TMDE) has initiated a program to develop techniques for evaluating the performance of printed circuit boards (PCBs) using infrared thermal imaging. It is OPM TMDE's expectation that the standard thermal profile (STP) will become the basis for the future rapid automatic detection and isolation of gross failure mechanisms on units under test (UUTs). To accomplish this, OPM TMDE has purchased two Infrared Automatic Mass Screening (IRAMS) systems which are scheduled for delivery in 1987. The IRAMS system combines a high resolution infrared thermal imager with a test bench and diagnostic computer hardware and software. Its purpose is to rapidly and automatically compare the thermal profiles of a UUT with the STP of that unit, recalled from memory, in order to detect thermally responsive failure mechanisms in PCBs. This paper will review the IRAMS performance requirements, outline the plan for implementing the two systems and report on progress to date.
Bergmeister, Konstantin D; Gröger, Marion; Aman, Martin; Willensdorfer, Anna; Manzano-Szalai, Krisztina; Salminger, Stefan; Aszmann, Oskar C
2016-08-01
Skeletal muscle consists of different fiber types which adapt to exercise, aging, disease, or trauma. Here we present a protocol for fast staining, automatic acquisition, and quantification of fiber populations with ImageJ. Biceps and lumbrical muscles were harvested from Sprague-Dawley rats. Quadruple immunohistochemical staining was performed on single sections using antibodies against myosin heavy chains and secondary fluorescent antibodies. Slides were scanned automatically with a slide scanner. Manual and automatic analyses were performed and compared statistically. The protocol provided rapid and reliable staining for automated image acquisition. Analyses between manual and automatic data indicated Pearson correlation coefficients for biceps of 0.645-0.841 and 0.564-0.673 for lumbrical muscles. Relative fiber populations were accurate to a degree of ± 4%. This protocol provides a reliable tool for quantification of muscle fiber populations. Using freely available software, it decreases the required time to analyze whole muscle sections. Muscle Nerve 54: 292-299, 2016. © 2016 Wiley Periodicals, Inc.
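As a small illustration of the manual-versus-automatic agreement reported above, a Pearson correlation can be computed per muscle from paired fiber counts; the counts below are invented, not the study's data.

```python
from scipy.stats import pearsonr

# Hypothetical per-section fiber counts for one fiber type (illustrative only).
manual_counts = [112, 98, 141, 87, 123, 105]
automatic_counts = [108, 101, 137, 90, 119, 110]

r, p_value = pearsonr(manual_counts, automatic_counts)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
```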
A Machine Learning-based Method for Question Type Classification in Biomedical Question Answering.
Sarrouti, Mourad; Ouatik El Alaoui, Said
2017-05-18
Biomedical question type classification is one of the important components of an automatic biomedical question answering system. The performance of the latter depends directly on the performance of its biomedical question type classification system, which consists of assigning a category to each question in order to determine the appropriate answer extraction algorithm. This study aims to automatically classify biomedical questions into one of four categories: (1) yes/no, (2) factoid, (3) list, and (4) summary. In this paper, we propose a biomedical question type classification method based on machine learning approaches to automatically assign a category to a biomedical question. First, we extract features from biomedical questions using the proposed handcrafted lexico-syntactic patterns. Then, we feed these features to machine-learning algorithms. Finally, the class label is predicted using the trained classifiers. Experimental evaluations performed on large standard annotated datasets of biomedical questions, provided by the BioASQ challenge, demonstrated that our method exhibits significantly improved performance when compared to four baseline systems. The proposed method achieves a roughly 10-point increase over the best baseline in terms of accuracy. Moreover, the obtained results show that using the handcrafted lexico-syntactic patterns as the feature provider for a support vector machine (SVM) leads to the highest accuracy of 89.40%. The proposed method can automatically classify BioASQ questions into one of the four categories: yes/no, factoid, list, and summary. Furthermore, the results demonstrated that our method produced the best classification performance compared to the four baseline systems.
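A minimal scikit-learn sketch of the classification stage described above; the toy questions and the bag-of-words TF-IDF features stand in for the paper's handcrafted lexico-syntactic patterns, which are not reproduced here, and the labels follow the four categories named in the abstract.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data (invented) labelled with the four BioASQ-style categories.
questions = [
    "Is gene X associated with disease Y?",    # yes/no
    "Which drug inhibits protein Z?",          # factoid
    "List the symptoms of syndrome W.",        # list
    "Describe the mechanism of pathway V.",    # summary
]
labels = ["yesno", "factoid", "list", "summary"]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
classifier.fit(questions, labels)
print(classifier.predict(["Is smoking a risk factor for condition Q?"]))
```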
Development of a multiplexed electrospray micro-thruster with post-acceleration and beam containment
NASA Astrophysics Data System (ADS)
Lenguito, G.; Gomez, A.
2013-10-01
We report the development of a compact thruster based on Multiplexed ElectroSprays (MES). It relied on a microfabricated Si array of emitters coupled with an extractor electrode and an accelerator electrode. The accelerator stage was introduced for two purposes: containing the beam opening and avoiding electrode erosion due to droplet impingement, as well as boosting specific impulse and thrust. Multiplexing is generally necessary as a thrust multiplier to eventually reach the level required by small satellites (O(10²) μN). To facilitate system optimization and debugging, we focused on a 7-nozzle MES device and compared its performance to that of a single emitter. To ensure uniformity of operation of all nozzles, their hydraulic impedance was augmented by packing them with micrometer-size beads. Two propellants were tested: a solution of 21.5% methyl ammonium formate in formamide and the better performing pure ionic liquid ethyl ammonium nitrate (EAN). The 7-MES device spraying EAN at ΔV = 5.93 kV covered a specific impulse range from 620 s to 1900 s and a thrust range from 0.6 μN to 5.4 μN, at 62% efficiency. Remarkably, less than 1% of the beam was demonstrated to impact on the accelerator electrode, which bodes well for long-term applications in space.
PC-CUBE: A Personal Computer Based Hypercube
NASA Technical Reports Server (NTRS)
Ho, Alex; Fox, Geoffrey; Walker, David; Snyder, Scott; Chang, Douglas; Chen, Stanley; Breaden, Matt; Cole, Terry
1988-01-01
PC-CUBE is an ensemble of IBM PCs or close compatibles connected in the hypercube topology with ordinary computer cables. Communication occurs at the rate of 115.2 kbaud via the RS-232 serial links. Available for PC-CUBE are the Crystalline Operating System III (CrOS III), the Mercury Operating System, and CUBIX and PLOTIX, which are parallel I/O and graphics libraries. A CrOS performance monitor was developed to facilitate the measurement of a program's communication and computation time and their effects on performance. Also available are CXLISP, a parallel version of the XLISP interpreter; GRAFIX, some graphics routines for the EGA and CGA; and a general execution profiler for determining execution time spent by program subroutines. PC-CUBE provides a programming environment similar to all hypercube systems running CrOS III, Mercury and CUBIX. In addition, every node (personal computer) has its own graphics display monitor and storage devices. These allow data to be displayed or stored at every processor, which has much instructional value and enables easier debugging of applications. Some application programs taken from the book Solving Problems on Concurrent Processors (Fox 88) were implemented with graphics enhancement on PC-CUBE. The applications range from solving the Mandelbrot set, the Laplace equation, the wave equation, and long range force interaction, to WaTor, an ecological simulation.
An executable specification for the message processor in a simple combining network
NASA Technical Reports Server (NTRS)
Middleton, David
1995-01-01
While the primary function of the network in a parallel computer is to communicate data between processors, it is often useful if the network can also perform rudimentary calculations. That is, some simple processing ability in the network itself, particularly for performing parallel prefix computations, can reduce both the volume of data being communicated and the computational load on the processors proper. Unfortunately, typical implementations of such networks require a large fraction of the hardware budget, and so combining networks are viewed as being impractical. The FFP Machine has such a combining network, and various characteristics of the machine allow a good deal of simplification in the network design. Despite being simple in construction, however, the network relies on many subtle details to work correctly. This paper describes an executable model of the network which will serve several purposes. It provides a complete and detailed description of the network which can substantiate its ability to support the necessary functions. It provides an environment in which algorithms to be run on the network can be designed and debugged more easily than they could on physical hardware. Finally, it provides the foundation for exploring the design of the message receiving facility which connects the network to the individual processors.
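As a small illustration of the parallel prefix computations mentioned above, the sketch below runs a Blelloch-style exclusive scan sequentially in Python; in an actual combining network these pairwise combinations would happen in hardware as messages travel toward and away from the root, so this is only a functional model of the arithmetic, not of the FFP Machine's network.

```python
def exclusive_scan(values, op=lambda a, b: a + b, identity=0):
    """Blelloch-style exclusive scan; len(values) must be a power of two here."""
    n = len(values)
    tree = list(values)

    # Up-sweep: combine pairs, as a network would combine messages toward the root.
    step = 1
    while step < n:
        for i in range(2 * step - 1, n, 2 * step):
            tree[i] = op(tree[i - step], tree[i])
        step *= 2

    # Down-sweep: distribute partial results back toward the leaves.
    tree[n - 1] = identity
    step = n // 2
    while step >= 1:
        for i in range(2 * step - 1, n, 2 * step):
            left = tree[i - step]
            tree[i - step] = tree[i]
            tree[i] = op(left, tree[i])
        step //= 2
    return tree

print(exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))  # [0, 3, 4, 11, 11, 15, 16, 22]
```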
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Tongning, E-mail: TongningHu@hust.edu.cn, E-mail: yjpei@ustc.edu.cn; Qin, Bin; Tan, Ping
A novel thermionic electron gun adopted for use in a high power THz free electron laser (FEL) is proposed in this paper. By optimization of the structural and radiofrequency (RF) parameters, the physical design of the gun is performed using dynamic calculations. Velocity bunching is used to minimize the bunch's energy spread, and the dynamic calculation results indicate that high quality beams can be provided. The transverse properties of the beams generated by the gun are also analyzed. The novel RF focusing effects of the resonance cavity are investigated precisely and are used to establish emittance compensation, which enables the injector length to be reduced. In addition, the causes of the extrema of the beam radius and the normalized transverse emittance are analyzed and interpreted, respectively, and slice simulations are performed to illustrate how the RF focusing varies along the bunch length and to determine the effects of that variation on the emittance compensation. Finally, by observation of the variations of the beam properties in the drift tube behind the electron gun, prospective assembly scenarios for the complete THz-FEL injector are discussed, and a joint-debugging process for the injector is implemented.
Van De Gucht, Tim; Saeys, Wouter; Van Meensel, Jef; Van Nuffel, Annelies; Vangeyte, Jurgen; Lauwers, Ludwig
2018-01-01
Although prototypes of automatic lameness detection systems for dairy cattle exist, information about their economic value is lacking. In this paper, a conceptual and operational framework for simulating the farm-specific economic value of automatic lameness detection systems was developed and tested on 4 system types: walkover pressure plates, walkover pressure mats, camera systems, and accelerometers. The conceptual framework maps essential factors that determine economic value (e.g., lameness prevalence, incidence and duration, lameness costs, detection performance, and their relationships). The operational simulation model links treatment costs and avoided losses with detection results and farm-specific information, such as herd size and lameness status. Results show that detection performance, herd size, discount rate, and system lifespan have a large influence on economic value. In addition, lameness prevalence influences the economic value, stressing the importance of an adequate prior estimation of the on-farm prevalence. The simulations provide first estimates of the upper limits for purchase prices of automatic detection systems. The framework allowed for identification of knowledge gaps obstructing more accurate economic value estimation; these include insights into cost reductions due to early detection and treatment, and into the links between specific lameness causes and their related losses. Because this model provides insight into the trade-offs between automatic detection systems' performance and investment price, it is a valuable tool to guide future research and developments. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
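The paper's operational model is not reproduced here; the sketch below is a deliberately simplified, hypothetical net-present-value calculation showing how detection performance, herd size, discount rate, and system lifespan might combine into an economic value estimate (and hence an upper bound on the justifiable purchase price). All parameter values are invented.

```python
def detection_system_value(herd_size, prevalence, sensitivity,
                           avoided_loss_per_case, treatment_cost_per_case,
                           annual_running_cost, lifespan_years, discount_rate):
    """Discounted net benefit of an automatic lameness detection system (illustrative)."""
    cases_per_year = herd_size * prevalence
    detected_cases = cases_per_year * sensitivity
    annual_benefit = detected_cases * (avoided_loss_per_case - treatment_cost_per_case)
    annual_net = annual_benefit - annual_running_cost
    # Net present value over the system lifespan; an upper bound on the purchase price.
    return sum(annual_net / (1 + discount_rate) ** year
               for year in range(1, lifespan_years + 1))

# Hypothetical parameter values, not taken from the paper.
print(round(detection_system_value(herd_size=120, prevalence=0.25, sensitivity=0.8,
                                   avoided_loss_per_case=150.0,
                                   treatment_cost_per_case=40.0,
                                   annual_running_cost=500.0,
                                   lifespan_years=8, discount_rate=0.05), 2))
```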
Automatic SAR/optical cross-matching for GCP monograph generation
NASA Astrophysics Data System (ADS)
Nutricato, Raffaele; Morea, Alberto; Nitti, Davide Oscar; La Mantia, Claudio; Agrimano, Luigi; Samarelli, Sergio; Chiaradia, Maria Teresa
2016-10-01
Ground Control Points (GCPs), automatically extracted from Synthetic Aperture Radar (SAR) images through 3D stereo analysis, can be effectively exploited for automatic orthorectification of optical imagery if they can be robustly located in the basic optical images. The present study outlines a SAR/optical cross-matching procedure that allows a robust alignment of radar and optical images and consequently derives automatically the corresponding sub-pixel position of the GCPs in the input optical image, expressed as fractional pixel/line image coordinates. The cross-matching is performed in two successive steps, in order to gradually achieve better precision. The first step is based on Mutual Information (MI) maximization between optical and SAR chips, while the second uses Normalized Cross-Correlation as the similarity metric. This work outlines the designed algorithmic solution and discusses the results derived over the urban area of Pisa (Italy), where more than ten COSMO-SkyMed Enhanced Spotlight stereo images with different beams and passes are available. The experimental analysis involves different satellite images in order to evaluate the performance of the algorithm with respect to the optical spatial resolution. An assessment of the performance of the algorithm has been carried out, with errors computed by measuring the distance between the GCP pixel/line position in the optical image, automatically estimated by the tool, and the "true" position of the GCP, visually identified by an expert user in the optical images.
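A sketch of the second, normalized cross-correlation step of the matching described above, written with plain NumPy; the chip size, the search window, and the assumption that a coarse MI-based position is already available are illustrative choices, not the tool's actual implementation.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized image chips."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def refine_match(optical, sar_chip, coarse_row, coarse_col, search=5):
    """Search a small window around a coarse (e.g. MI-based) position for the best NCC."""
    h, w = sar_chip.shape
    best = (coarse_row, coarse_col, -1.0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = coarse_row + dr, coarse_col + dc
            window = optical[r:r + h, c:c + w]
            if window.shape == sar_chip.shape:
                score = ncc(window, sar_chip)
                if score > best[2]:
                    best = (r, c, score)
    return best  # (row, col, correlation score)
```

In practice, sub-pixel precision would come from interpolating the correlation surface around the integer peak; the sketch stops at pixel-level refinement for brevity.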
NASA Astrophysics Data System (ADS)
Kushida, N.; Kebede, F.; Feitio, P.; Le Bras, R.
2016-12-01
The Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has been developing and testing NET-VISA (Arora et al., 2013), a Bayesian automatic event detection and localization program, and evaluating its performance in a realistic operational mode. In our preliminary testing at the CTBTO, NET-VISA shows better performance than the currently operating automatic localization program. However, given CTBTO's role and its international context, a new technology should be introduced cautiously when it replaces a key piece of the automatic processing. We integrated the results of NET-VISA into the Analyst Review Station, extensively used by the analysts, so that they can check the accuracy and robustness of the Bayesian approach. We expect the workload of the analysts to be reduced because of the better performance of NET-VISA in finding missed events and obtaining a more complete set of stations than the current system, which has been operating for nearly twenty years. The results of a series of tests indicate that the expectations arising from the automatic tests (which show an overall overlap improvement of 11%, meaning that the missed-event rate is cut by 42%) hold for the integrated interactive module as well. Analysts find new events that qualify for the CTBTO Reviewed Event Bulletin, beyond the ones analyzed through the standard procedures. Arora, N., Russell, S., and Sudderth, E., NET-VISA: Network Processing Vertically Integrated Seismic Analysis, 2013, Bull. Seismol. Soc. Am., 103, 709-729.
Design and development of 24 times high-power laser beam expander
NASA Astrophysics Data System (ADS)
Lin, Zhao-heng; Gong, Xiu-ming; Wu, Shi-bin; Tan, Yi; Jing, Hong-wei; Wei, Zhong-wei
2013-09-01
Laser calibration, laser radar, laser ranging and related fields have raised the demand for high-magnification laser beam expanders. This article introduces the research and design of a high-energy laser beam expander whose main features are a large diameter, wide band, high magnification and small obscuration ratio. Using a Cassegrain reflective optical system, the beam expander achieves 24x beam expansion, with an effective output limiting aperture of Φ600 mm, a band from 0.45 μm to 5 μm, a single-pulse laser damage threshold greater than 1 J/cm2, a continuous-wave laser damage threshold greater than 200 W/cm2 and an obscuration ratio of 1:10. The primary mirror is supported from below at 9 floating points; lateral support relies mainly on a mercury belt, assisted by a mandrel ball-head positioning support. A finite element analysis in ANSYS examined the primary mirror deformation in debug and operating modes for four angle settings (170°, 180°, 210° and 240°), with mercury-belt load-bearing levels of 65%, 75%, 85% and 100% at each angle, giving 16 working conditions in total; from these results the best way to support the primary mirror was finalized. The secondary mirror is designed to allow five-axis precision fine adjustment. After assembly and debugging of the laser beam expander, Zygo interferometer measurements showed an image quality (RMS) of 0.043λ (λ = 632.8 nm), a stability (RMS) of 0.007λ (λ = 632.8 nm) and an effective transmission of 94%, fully meeting the requirements of practical application.
Investigation of possible causes for human-performance degradation during microgravity flight
NASA Technical Reports Server (NTRS)
Schroeder, James E.; Tuttle, Megan L.
1992-01-01
The results of the first year of a three-year study of the effects of microgravity on human performance are given. Test results show support for the hypothesis that the effects of microgravity can be studied indirectly on Earth by measuring performance in an altered gravitational field. The hypothesis was that an altered gravitational field could disrupt performance on previously automated behaviors if gravity was a critical part of the stimulus complex controlling those behaviors. In addition, it was proposed that performance on secondary cognitive tasks would also degrade, especially if the subject was provided feedback about degradation on the previously automated task. In the initial experimental test of these hypotheses, there was little statistical support. However, when subjects were categorized as high or low in automatized behavior, results for the former group supported the hypotheses. The predicted interaction between body orientation and level of workload in their joint effect on performance in the secondary cognitive task was significant for the group high in automatized behavior and receiving feedback, but no such interaction was found for the group high in automatized behavior but not receiving feedback, or for the group low in automatized behavior.
WOLF; automatic typing program
Evenden, G.I.
1982-01-01
A FORTRAN IV program for the Hewlett-Packard 1000 series computer provides for automatic typing operations and can, when employed with manufacturer's text editor, provide a system to greatly facilitate preparation of reports, letters and other text. The input text and imbedded control data can perform nearly all of the functions of a typist. A few of the features available are centering, titles, footnotes, indentation, page numbering (including Roman numerals), automatic paragraphing, and two forms of tab operations. This documentation contains both user and technical description of the program.
Automatic identification of artifacts in electrodermal activity data.
Taylor, Sara; Jaques, Natasha; Chen, Weixuan; Fedor, Szymon; Sano, Akane; Picard, Rosalind
2015-01-01
Recently, wearable devices have allowed for long term, ambulatory measurement of electrodermal activity (EDA). Despite the fact that ambulatory recording can be noisy, and recording artifacts can easily be mistaken for a physiological response during analysis, to date there is no automatic method for detecting artifacts. This paper describes the development of a machine learning algorithm for automatically detecting EDA artifacts, and provides an empirical evaluation of classification performance. We have encoded our results into a freely available web-based tool for artifact and peak detection.
[Modeling and implementation method for the automatic biochemistry analyzer control system].
Wang, Dong; Ge, Wan-cheng; Song, Chun-lin; Wang, Yun-guang
2009-03-01
The automatic biochemistry analyzer is a necessary instrument for clinical diagnostics. In this paper, the system structure is first analyzed. The description of the system problems and the fundamental principles for dispatch are then brought forward, with emphasis placed on modeling the automatic biochemistry analyzer control system: both an object model and a communications model are put forward. Finally, the implementation method is designed. The results indicate that the system based on this model performs well.
DOT National Transportation Integrated Search
2014-09-09
Automatic Dependent Surveillance-Broadcast (ADS-B) In technology supports the display of traffic data on Cockpit Displays of Traffic Information (CDTIs). The data are used by flightcrews to perform defined self-separation procedures, such as the in-t...
Automatic orbital GTAW welding: Highest quality welds for tomorrow's high-performance systems
NASA Technical Reports Server (NTRS)
Henon, B. K.
1985-01-01
Automatic orbital gas tungsten arc welding (GTAW), or TIG welding, is certain to play an increasingly prominent role in tomorrow's technology. The welds are of the highest quality and the repeatability of automatic welding is vastly superior to that of manual welding. Since less heat is applied to the weld during automatic welding than manual welding, there is less change in the metallurgical properties of the parent material. The possibility of accurate control and the cleanliness of the automatic GTAW process make it highly suitable for the welding of the more exotic and expensive materials which are now widely used in the aerospace and hydrospace industries. Titanium, stainless steel, Inconel, and Incoloy, as well as aluminum, can all be welded to the highest quality specifications automatically. Automatic orbital GTAW equipment is available for the fusion butt welding of tube-to-tube joints as well as tube-to-autobuttweld fittings. The same equipment can also be used for the fusion butt welding of up to 6 inch pipe with a wall thickness of up to 0.154 inches.
Good practices in normal childbirth: reliability analysis of an instrument by Cronbach's Alpha.
Gottems, Leila Bernarda Donato; Carvalho, Elisabete Mesquita Peres De; Guilhem, Dirce; Pires, Maria Raquel Gomes Maia
2018-01-01
The objective was to analyze the internal consistency of an instrument evaluating professionals' adherence to good practices in childbirth and birth care, using Cronbach's alpha coefficient for each dimension and for the total instrument. This is a descriptive, cross-sectional study performed in the obstetric centers of eleven public hospitals in the Federal District, with a questionnaire applied to 261 professionals who worked in delivery care: 42.5% (111) nurses and 57.5% (150) physicians. The reliability evaluation of the instrument by Cronbach's alpha resulted in 0.53, 0.78 and 0.76 for dimensions 1, 2 and 3, after a debugging step that resulted in the exclusion of 11 items; the full instrument obtained a Cronbach's alpha of 0.80. There is a need for improvement in the items of dimension 1, which refer to attitudes, knowledge and practices of the organization of the network of care for gestation, childbirth and birth. Nevertheless, the instrument can be applied as it stands to evaluate practices based on scientific evidence in childbirth care.
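For reference, Cronbach's alpha can be computed directly from a respondents-by-items score matrix with the standard textbook formula; the toy responses below are invented and unrelated to the instrument studied above.

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array of shape (n_respondents, n_items)."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Invented Likert-style responses: 6 respondents, 4 items.
example = [[4, 5, 4, 5],
           [3, 4, 3, 4],
           [5, 5, 4, 5],
           [2, 3, 2, 3],
           [4, 4, 5, 4],
           [3, 3, 3, 4]]
print(round(cronbach_alpha(example), 2))
```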
Beam position reconstruction for the g2p experiment in Hall A at Jefferson Lab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Pengjia; Allada, Kalyan; Allison, Trent
2015-11-03
Beam-line equipment was upgraded for experiment E08-027 (g2p) in Hall A at Jefferson Lab. Two beam position monitors (BPMs) were needed to measure the beam position and angle at the target, and a new BPM receiver was designed and built to handle the low beam currents (50-100 nA) used for this experiment. Two new super-harps were installed for calibrating the BPMs, and a slow raster system was installed in addition to the existing fast raster system. Before and during the experiment, these new devices were tested and debugged, and their performance was evaluated. To achieve the required accuracy (1-2 mm in position and 1-2 mrad in angle at the target location), the BPM and harp data were carefully analyzed and the beam position and angle were reconstructed event by event at the target location. The calculated beam position will be used in the data analysis to accurately determine the kinematics for each event.
[A capillary blood flow velocity detection system based on linear array charge-coupled devices].
Zhou, Houming; Wang, Ruofeng; Dang, Qi; Yang, Li; Wang, Xiang
2017-12-01
In order to detect the flow characteristics of blood samples in a capillary, this paper introduces a blood flow velocity measurement system based on a field-programmable gate array (FPGA), a linear charge-coupled device (CCD) and personal computer (PC) software. Based on analysis of the TCD1703C and AD9826 device data sheets, the Verilog HDL hardware description language was used to design and simulate the driver. Image signal acquisition and extraction of the real-time edge information of the blood sample were carried out synchronously in the FPGA. A differential operation over a series of discrete scans was then used to measure the displacement of the blood sample between scans, from which the sample flow velocity was obtained. Finally, the feasibility of the blood flow velocity detection system was verified by simulation and debugging. After plotting the flow velocity curve and analyzing the velocity characteristics, the significance of measuring blood flow velocity is discussed. The results show that the system's measurement is less time-consuming and less complex than other flow-rate monitoring schemes.
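A simplified sketch of the velocity estimation described above: given the edge position of the sample in successive line scans, the flow velocity follows from the scan-to-scan displacement. The pixel pitch, line rate, and edge positions are invented placeholders, and the real computation runs in the FPGA rather than on a PC.

```python
import numpy as np

def flow_velocity(edge_positions_px, pixel_size_um, line_rate_hz):
    """Estimate mean flow velocity (um/s) from per-scan edge positions (pixels)."""
    displacements = np.diff(edge_positions_px)            # pixels moved per scan
    velocities = displacements * pixel_size_um * line_rate_hz
    return velocities.mean()

# Hypothetical edge positions extracted from successive CCD line scans.
edges = [120, 124, 129, 133, 138, 142]
print(flow_velocity(edges, pixel_size_um=7.0, line_rate_hz=1000.0), "um/s")
```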
García-Magariño, Iván; Lacuesta, Raquel; Lloret, Jaime
2018-03-27
Smart communication protocols are becoming a key mechanism for improving communication performance in networks such as wireless sensor networks. However, the literature lacks mechanisms for simulating smart communication protocols in precision agriculture for decreasing production costs. In this context, the current work presents an agent-based simulator of smart communication protocols for efficiently managing pesticides. The simulator considers the needs of electric power, crop health, percentage of alive bugs and pesticide consumption. The current approach is illustrated with three different communication protocols respectively called (a) broadcast, (b) neighbor and (c) low-cost neighbor. The low-cost neighbor protocol obtained a statistically-significant reduction in the need of electric power over the neighbor protocol, with a very large difference according to the common interpretations about the Cohen's d effect size. The presented simulator is called ABS-SmartComAgri and is freely distributed as open-source from a public research data repository. It ensures the reproducibility of experiments and allows other researchers to extend the current approach.
Design and analysis of DNA strand displacement devices using probabilistic model checking
Lakin, Matthew R.; Parker, David; Cardelli, Luca; Kwiatkowska, Marta; Phillips, Andrew
2012-01-01
Designing correct, robust DNA devices is difficult because of the many possibilities for unwanted interference between molecules in the system. DNA strand displacement has been proposed as a design paradigm for DNA devices, and the DNA strand displacement (DSD) programming language has been developed as a means of formally programming and analysing these devices to check for unwanted interference. We demonstrate, for the first time, the use of probabilistic verification techniques to analyse the correctness, reliability and performance of DNA devices during the design phase. We use the probabilistic model checker prism, in combination with the DSD language, to design and debug DNA strand displacement components and to investigate their kinetics. We show how our techniques can be used to identify design flaws and to evaluate the merits of contrasting design decisions, even on devices comprising relatively few inputs. We then demonstrate the use of these components to construct a DNA strand displacement device for approximate majority voting. Finally, we discuss some of the challenges and possible directions for applying these methods to more complex designs. PMID:22219398
Support for Diagnosis of Custom Computer Hardware
NASA Technical Reports Server (NTRS)
Molock, Dwaine S.
2008-01-01
The Coldfire SDN Diagnostics software is a flexible means of exercising, testing, and debugging custom computer hardware. The software is a set of routines that, collectively, serve as a common software interface through which one can gain access to various parts of the hardware under test and/or cause the hardware to perform various functions. The routines can be used to construct tests to exercise, and verify the operation of, various processors and hardware interfaces. More specifically, the software can be used to gain access to memory, to execute timer delays, to configure interrupts, and configure processor cache, floating-point, and direct-memory-access units. The software is designed to be used on diverse NASA projects, and can be customized for use with different processors and interfaces. The routines are supported, regardless of the architecture of a processor that one seeks to diagnose. The present version of the software is configured for Coldfire processors on the Subsystem Data Node processor boards of the Solar Dynamics Observatory. There is also support for the software with respect to Mongoose V, RAD750, and PPC405 processors or their equivalents.
NASA Technical Reports Server (NTRS)
Osder, S.; Keller, R.
1971-01-01
Guidance and control design studies that were performed for three specific space shuttle candidate vehicles are described. Three types of simulation were considered. The manual control investigations and pilot evaluations of the automatic system performance are presented. Recommendations for systems and equipment, both airborne and ground-based, necessary to flight test the guidance and control concepts for shuttlecraft terminal approach and landing are reported.
Studies to design and develop improved remote manipulator systems
NASA Technical Reports Server (NTRS)
Hill, J. W.; Sword, A. J.
1973-01-01
The remote manipulator control considered is based on several levels of automatic supervision which derive manipulator commands from an analysis of sensor states and task requirements. The principal sensors are manipulator joint position, tactile, and current sensors. The tactile sensor states can be displayed visually in perspective, replicated in the operator's control handle, or perceived by the automatic supervisor. Studies are reported on control organization, operator performance and system performance measures. Unusual hardware and software details are described.
Demonstration of subsidence monitoring system
NASA Astrophysics Data System (ADS)
Conroy, P. J.; Gyarmaty, J. H.; Pearson, M. L.
1981-06-01
Data on coal mine subsidence were studied as a basis for the development of subsidence control technology. Installation, monitoring, and evaluation of three subsidence monitoring instrument systems were examined: structure performance, performance of supported systems, and performance of caving systems. Objectives of the instrument program were: (1) to select, test, assemble, install, monitor, and maintain all instrumentation required for implementing the three subsidence monitoring systems; and (2) to evaluate performance of each instrument individually and as part of the appropriate monitoring system or systems. The use of an automatic level and a rod extensometer for measuring structure performance, and the automatic level, steel tape extensometer, FPBX, FPBI, USBM borehole deformation gauge, and vibrating wire stressmeters for measuring the performance of caving systems are recommended.
ERIC Educational Resources Information Center
Beale, Ivan L.
2005-01-01
Computer assisted learning (CAL) can involve a computerised intelligent learning environment, defined as an environment capable of automatically, dynamically and continuously adapting to the learning context. One aspect of this adaptive capability involves automatic adjustment of instructional procedures in response to each learner's performance,…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-09
... Envelope Protection: Performance Credit for Automatic Takeoff Thrust Control System (ATTCS) During Go... Automatic Takeoff Thrust Control System (ATTCS) during go-around. The applicable airworthiness regulations... FAA-2012-1199 using any of the following methods: Federal eRegulations Portal: Go to http://www...
A Theory of Term Importance in Automatic Text Analysis.
ERIC Educational Resources Information Center
Salton, G.; And Others
Most existing automatic content analysis and indexing techniques are based on word frequency characteristics applied largely in an ad hoc manner. Contradictory requirements arise in this connection, in that terms exhibiting high occurrence frequencies in individual documents are often useful for high recall performance (to retrieve many relevant…
Experiments in Multi-Lingual Information Retrieval.
ERIC Educational Resources Information Center
Salton, Gerard
A comparison was made of the performance in an automatic information retrieval environment of user queries and document abstracts available in natural language form in both English and French. The results obtained indicate that the automatic indexing and retrieval techniques actually used appear equally effective in handling the query and document…
40 CFR 60.543 - Performance test and compliance provisions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... organic solvent-based sprays are used, each Michelin-A operation, each Michelin-B operation, and each Michelin-C-automatic operation where the owner or operator seeks to comply with the uncontrolled monthly... sprays are used, each Michelin-A operation, each Michelin-B operation, and each Michelin-C-automatic...
40 CFR 60.543 - Performance test and compliance provisions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... organic solvent-based sprays are used, each Michelin-A operation, each Michelin-B operation, and each Michelin-C-automatic operation where the owner or operator seeks to comply with the uncontrolled monthly... sprays are used, each Michelin-A operation, each Michelin-B operation, and each Michelin-C-automatic...
40 CFR 60.543 - Performance test and compliance provisions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... organic solvent-based sprays are used, each Michelin-A operation, each Michelin-B operation, and each Michelin-C-automatic operation where the owner or operator seeks to comply with the uncontrolled monthly... sprays are used, each Michelin-A operation, each Michelin-B operation, and each Michelin-C-automatic...
Concept Recognition in an Automatic Text-Processing System for the Life Sciences.
ERIC Educational Resources Information Center
Vleduts-Stokolov, Natasha
1987-01-01
Describes a system developed for the automatic recognition of biological concepts in titles of scientific articles; reports results of several pilot experiments which tested the system's performance; analyzes typical ambiguity problems encountered by the system; describes a disambiguation technique that was developed; and discusses future plans…
Do Judgments of Learning Predict Automatic Influences of Memory?
ERIC Educational Resources Information Center
Undorf, Monika; Böhm, Simon; Cüpper, Lutz
2016-01-01
Current memory theories generally assume that memory performance reflects both recollection and automatic influences of memory. Research on people's predictions about the likelihood of remembering recently studied information on a memory test, that is, on judgments of learning (JOLs), suggests that both magnitude and resolution of JOLs are linked…
Electrophysiological Evidence of Automatic Early Semantic Processing
ERIC Educational Resources Information Center
Hinojosa, Jose A.; Martin-Loeches, Manuel; Munoz, Francisco; Casado, Pilar; Pozo, Miguel A.
2004-01-01
This study investigates the automatic-controlled nature of early semantic processing by means of the Recognition Potential (RP), an event-related potential response that reflects lexical selection processes. For this purpose tasks differing in their processing requirements were used. Half of the participants performed a physical task involving a…
Performance of a wireless sensor network for crop monitoring and irrigation control
USDA-ARS?s Scientific Manuscript database
Robust automatic irrigation scheduling has been demonstrated using wired sensors and sensor network systems with subsurface drip and moving irrigation systems. However, there are limited studies that report on crop yield and water use efficiency resulting from the use of wireless networks to automat...
Lacson, Ronilda C; Barzilay, Regina; Long, William J
2006-10-01
Spoken medical dialogue is a valuable source of information for patients and caregivers. This work presents a first step towards automatic analysis and summarization of spoken medical dialogue. We first abstract a dialogue into a sequence of semantic categories using linguistic and contextual features integrated in a supervised machine-learning framework. Our model has a classification accuracy of 73%, compared to 33% achieved by a majority baseline (p<0.01). We then describe and implement a summarizer that utilizes this automatically induced structure. Our evaluation results indicate that automatically generated summaries exhibit high resemblance to summaries written by humans. In addition, task-based evaluation shows that physicians can reasonably answer questions related to patient care by looking at the automatically generated summaries alone, in contrast to the physicians' performance when they were given summaries from a naïve summarizer (p<0.05). This work demonstrates the feasibility of automatically structuring and summarizing spoken medical dialogue.
NASA Technical Reports Server (NTRS)
Edwards, F. G.; Foster, J. D.
1973-01-01
Unpowered automatic approaches and landings with a CV990 aircraft were conducted to study navigation, guidance, and control problems associated with terminal area approach and landing for the space shuttle. The flight tests were designed to study, from 11,300 m to touchdown, the performance of a navigation and guidance concept which utilized blended radio/inertial navigation using VOR, DME, and ILS as the ground navigation aids. In excess of fifty automatic approaches and landings were conducted. Preliminary results indicate that this concept may provide sufficient accuracy to accomplish automatic landing of the shuttle orbiter without air-breathing engines on a conventional size runway.
Automatic tracking of labeled red blood cells in microchannels.
Pinho, Diana; Lima, Rui; Pereira, Ana I; Gayubo, Fernando
2013-09-01
The current study proposes an automatic method for the segmentation and tracking of red blood cells flowing through a 100-μm glass capillary. The original images were obtained by means of a confocal system and then processed in MATLAB using the Image Processing Toolbox. The measurements obtained with the proposed automatic method were compared with the results determined by a manual tracking method. The comparison was performed by using both linear regressions and Bland-Altman analysis. The results have shown a good agreement between the two methods. Therefore, the proposed automatic method is a powerful way to provide rapid and accurate measurements for in vitro blood experiments in microchannels. Copyright © 2012 John Wiley & Sons, Ltd.
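As a small illustration of the Bland-Altman comparison mentioned above, the sketch below computes the bias and 95% limits of agreement between two measurement methods; the velocity values are invented, not taken from the study.

```python
import numpy as np

def bland_altman(manual, automatic):
    """Return bias and 95% limits of agreement between two measurement methods."""
    manual, automatic = np.asarray(manual, float), np.asarray(automatic, float)
    diff = automatic - manual
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical cell velocities (um/s) from the two tracking methods.
manual_v = [310, 295, 330, 305, 288, 342]
auto_v = [315, 290, 333, 300, 292, 339]
bias, limits = bland_altman(manual_v, auto_v)
print(f"bias = {bias:.1f} um/s, limits of agreement = {limits}")
```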
An Expert System for the Development of Efficient Parallel Code
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Chun, Robert; Jin, Hao-Qiang; Labarta, Jesus; Gimenez, Judit
2004-01-01
We have built the prototype of an expert system to assist the user in the development of efficient parallel code. The system was integrated into the parallel programming environment that is currently being developed at NASA Ames. The expert system interfaces to tools for automatic parallelization and performance analysis. It uses static program structure information and performance data in order to automatically determine causes of poor performance and to make suggestions for improvements. In this paper we give an overview of our programming environment, describe the prototype implementation of our expert system, and demonstrate its usefulness with several case studies.
Lefebvre, Christine; Cousineau, Denis; Larochelle, Serge
2008-11-01
Schneider and Shiffrin (1977) proposed that training under consistent stimulus-response mapping (CM) leads to automatic target detection in search tasks. Other theories, such as Treisman and Gelade's (1980) feature integration theory, consider target-distractor discriminability as the main determinant of search performance. The first two experiments pit these two principles against each other. The results show that CM training is neither necessary nor sufficient to achieve optimal search performance. Two other experiments examine whether CM trained targets, presented as distractors in unattended display locations, attract attention away from current targets. The results are again found to vary with target-distractor similarity. Overall, the present study strongly suggests that CM training does not invariably lead to automatic attention attraction in search tasks.
A semi-automatic traffic sign detection, classification, and positioning system
NASA Astrophysics Data System (ADS)
Creusen, I. M.; Hazelhoff, L.; de With, P. H. N.
2012-01-01
The availability of large-scale databases containing street-level panoramic images offers the possibility to perform semi-automatic surveying of real-world objects such as traffic signs. These inventories can be performed significantly more efficiently than using conventional methods. Governmental agencies are interested in these inventories for maintenance and safety reasons. This paper introduces a complete semi-automatic traffic sign inventory system. The system consists of several components. First, a detection algorithm locates the 2D position of the traffic signs in the panoramic images. Second, a classification algorithm is used to identify the traffic sign. Third, the 3D position of the traffic sign is calculated using the GPS position of the photographs. Finally, the results are listed in a table for quick inspection and are also visualized in a web browser.
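The paper does not detail its 3D positioning step; one common approach, sketched below under that assumption, is to intersect the viewing rays of the sign from two camera positions (known from GPS) by finding the point closest to both rays. The coordinates and directions in the example are invented.

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Closest point to two viewing rays (camera position p, direction d)."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for ray parameters t1, t2 minimizing |(p1 + t1*d1) - (p2 + t2*d2)|.
    a = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2.0

# Two hypothetical camera positions (local metric coordinates) and viewing directions.
print(triangulate(np.array([0.0, 0.0, 2.5]), np.array([1.0, 0.2, 0.0]),
                  np.array([5.0, 0.0, 2.5]), np.array([-0.8, 0.3, 0.0])))
```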
Molinari, Filippo; Meiburger, Kristen M; Suri, Jasjit
2011-01-01
The evaluation of the carotid artery wall is fundamental for the assessment of cardiovascular risk. This paper presents the general architecture of an automatic strategy, which segments the lumen-intima and media-adventitia borders, classified under a class of Patented AtheroEdge™ systems (Global Biomedical Technologies, Inc, CA, USA). Guidelines to produce accurate and repeatable measurements of the intima-media thickness are provided and the problem of the different distance metrics one can adopt is confronted. We compared the results of a completely automatic algorithm that we developed with those of a semi-automatic algorithm, and showed final segmentation results for both techniques. The overall rationale is to provide user-independent high-performance techniques suitable for screening and remote monitoring.
Langhanns, Christine; Müller, Hermann
2018-01-01
Motor-cognitive dual tasks have been intensely studied and it has been demonstrated that even well practiced movements like walking show signs of interference when performed concurrently with a challenging cognitive task. Typically, walking speed is reduced, at least in elderly persons. In contrast to these findings, some authors report an increased movement frequency under dual-task conditions, which they call hastening. A tentative explanation has been proposed, assuming that the respective movements are governed by an automatic control regime. Under single-task conditions, though, these automatic processes are supervised by "higher-order" cognitive control processes; when a concurrent cognitive task binds all cognitive resources, the automatic process is freed from the detrimental effect of cognitive surveillance, allowing higher movement frequencies. Fast rhythmic movements (>1 Hz) should more likely be governed by such an automatic process than low frequency discrete repetitive movements. Fifteen subjects performed two repetitive movements under single- and dual-task conditions, the latter in combination with a mental calculation task. According to the expectations derived from the explanatory concept, we found an increased movement frequency under dual-task conditions only for the fast rhythmic movement (paddleball task) but not for the slower discrete repetitive task (pegboard task). fNIRS measurements of prefrontal cortical load confirmed the idea of automatic processing in the paddleball task, whereas the pegboard task seems to be more controlled by processes interfering with the calculation-related processing.
NASA Technical Reports Server (NTRS)
White, W. F.; Clark, L. V.
1980-01-01
The NASA terminal configured vehicle B-737 was flown in support of the world wide FAA demonstration of the time reference scanning beam microwave landing system. A summary of the flight performance of the TCV airplane during demonstration automatic approaches and landings while utilizing TRSB/MLS guidance is presented. The TRSB/MLS provided the terminal area guidance necessary for automatically flying curved, noise abatement type approaches and landings with short finals.
NASA Astrophysics Data System (ADS)
Morais, Pedro; Queirós, Sandro; Heyde, Brecht; Engvall, Jan; D'hooge, Jan; Vilaça, João L.
2017-09-01
Cardiovascular diseases are among the leading causes of death and frequently result in local myocardial dysfunction. Among the numerous imaging modalities available to detect these dysfunctional regions, cardiac deformation imaging through tagged magnetic resonance imaging (t-MRI) has been an attractive approach. Nevertheless, fully automatic analysis of these data sets is still challenging. In this work, we present a fully automatic framework to estimate left ventricular myocardial deformation from t-MRI. This strategy performs automatic myocardial segmentation based on B-spline explicit active surfaces, which are initialized using an annular model. A non-rigid image-registration technique is then used to assess myocardial deformation. Three experiments were set up to validate the proposed framework using a clinical database of 75 patients. First, automatic segmentation accuracy was evaluated by comparing against manual delineations at one specific cardiac phase. The proposed solution showed an average perpendicular distance error of 2.35 ± 1.21 mm and 2.27 ± 1.02 mm for the endo- and epicardium, respectively. Second, starting from either manual or automatic segmentation, myocardial tracking was performed and the resulting strain curves were compared. It is shown that the automatic segmentation adds negligible differences during the strain-estimation stage, corroborating its accuracy. Finally, segmental strain was compared with scar tissue extent determined by delay-enhanced MRI. The results proved that both strain components were able to distinguish between normal and infarct regions. Overall, the proposed framework was shown to be accurate, robust, and attractive for clinical practice, as it overcomes several limitations of a manual analysis.
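The abstract does not state which strain formulation is used; as a rough illustration of how strain can be derived from a registration-based displacement field, the sketch below computes the Green-Lagrange strain tensor on a regular 2D grid. The function name and the grid setup are assumptions and stand in for the paper's actual non-rigid registration pipeline.

import numpy as np

def green_lagrange_strain(ux, uy, spacing=1.0):
    # Green-Lagrange strain tensor field from a 2D displacement field
    # (ux, uy sampled on a regular grid with the given spacing).
    dux_dy, dux_dx = np.gradient(ux, spacing)
    duy_dy, duy_dx = np.gradient(uy, spacing)
    # Deformation gradient F = I + grad(u), stored per pixel.
    F = np.empty(ux.shape + (2, 2))
    F[..., 0, 0] = 1.0 + dux_dx
    F[..., 0, 1] = dux_dy
    F[..., 1, 0] = duy_dx
    F[..., 1, 1] = 1.0 + duy_dy
    C = np.einsum('...ki,...kj->...ij', F, F)   # right Cauchy-Green tensor
    return 0.5 * (C - np.eye(2))                # E = (C - I) / 2

# A uniform 10% stretch along x yields Exx = (1.1**2 - 1) / 2, about 0.105.
y, x = np.mgrid[0:32, 0:32].astype(float)
E = green_lagrange_strain(0.1 * x, np.zeros_like(x))
print(E[16, 16])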
Luo, Jiaying; Xiao, Sichang; Qiu, Zhihui; Song, Ning; Luo, Yuanming
2013-04-01
Whether the therapeutic nasal continuous positive airway pressure (CPAP) derived from manual titration is the same as that derived from automatic titration is controversial. The purpose of this study was to compare the therapeutic pressure derived from manual titration with that derived from automatic titration. Fifty-one patients with obstructive sleep apnoea (OSA) (mean apnoea/hypopnoea index (AHI) = 50.6 ± 18.6 events/h) who were newly diagnosed after an overnight full polysomnography and who were willing to accept CPAP as a long-term treatment were recruited for the study. Manual titration during full polysomnography monitoring and unattended automatic titration with an automatic CPAP device (REMstar Auto) were performed. A separate cohort study of one hundred patients with OSA (AHI = 54.3 ± 18.9 events/h) was also performed to observe the efficacy of CPAP at the pressure derived from manual titration. The treatment pressure derived from automatic titration (9.8 ± 2.2 cmH₂O) was significantly higher than that derived from manual titration (7.3 ± 1.5 cmH₂O; P < 0.001) in the 51 patients. The cohort study of 100 patients showed that the AHI was satisfactorily decreased after CPAP treatment at the pressure derived from manual titration (54.3 ± 18.9 events/h before treatment and 3.3 ± 1.7 events/h after treatment; P < 0.001). The results suggest that the titration pressure derived automatically from the REMstar Auto is usually higher than the pressure derived from manual titration. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
Ozhinsky, Eugene; Vigneron, Daniel B; Nelson, Sarah J
2011-04-01
To develop a technique for optimizing the coverage of brain 3D ¹H magnetic resonance spectroscopic imaging (MRSI) through automatic placement of outer-volume suppression (OVS) saturation bands (sat bands), and to compare the performance of point-resolved spectroscopy (PRESS) MRSI protocols with manual and automatic sat band placement. The automated OVS procedure comprises acquiring anatomic images of the head, obtaining brain and lipid tissue maps, calculating the optimal sat band placement, and then using those optimized parameters during the MRSI acquisition. The data were analyzed to quantify brain coverage volume and data quality. 3D PRESS MRSI data were acquired from three healthy volunteers and 29 patients using protocols with either manual or automatic sat band placement. On average, automatic sat band placement allowed the acquisition of PRESS MRSI data from brain volumes 2.7 times larger than with the conventional method while maintaining data quality. The technique helps solve two of the most significant problems with brain PRESS MRSI acquisitions: limited brain coverage and difficulty of prescription. This new method will facilitate routine clinical brain 3D MRSI exams and will be important for serial evaluation of response to therapy in patients with brain tumors and other neurological diseases. Copyright © 2011 Wiley-Liss, Inc.
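The optimization itself is not detailed in the abstract; the following sketch shows one plausible, simplified way to choose sat band placements from brain and lipid tissue maps: candidate slabs are scored by how much lipid they cover versus how much brain they saturate, and bands are selected greedily. The slab parameterization, the scoring weights, and the greedy search are all assumptions rather than the published method.

import numpy as np

def band_mask(shape, normal, offset, thickness):
    # Voxels inside a slab of the given thickness, defined by a unit normal
    # and a signed offset from the image origin.
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    d = xx * normal[0] + yy * normal[1] - offset
    return (d >= 0) & (d < thickness)

def score_band(mask, lipid, brain, brain_penalty=5.0):
    # Reward suppressed lipid voxels, penalize saturated brain voxels.
    return lipid[mask].sum() - brain_penalty * brain[mask].sum()

def place_sat_bands(lipid, brain, n_bands=8, thickness=20):
    # Greedily pick slab orientations/offsets covering lipid while sparing brain.
    shape = lipid.shape
    angles = np.deg2rad(np.arange(0, 360, 15))
    remaining = lipid.astype(float).copy()
    chosen = []
    for _ in range(n_bands):
        a, o = max(((a, o) for a in angles
                    for o in range(-max(shape), max(shape), 5)),
                   key=lambda p: score_band(
                       band_mask(shape, (np.cos(p[0]), np.sin(p[0])), p[1], thickness),
                       remaining, brain))
        m = band_mask(shape, (np.cos(a), np.sin(a)), o, thickness)
        remaining[m] = 0.0   # lipid under this band is treated as suppressed
        chosen.append((float(a), int(o)))
    return chosen

In a real acquisition, lipid and brain would be binary tissue maps derived from the anatomic images, and the chosen slab parameters would be prescribed to the scanner as OVS bands.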
NASA Technical Reports Server (NTRS)
Brown, S. C.; Hardy, G. H.; Hindson, W. S.
1983-01-01
As part of a comprehensive flight-test program of STOL operating systems for the terminal area, an automatic landing system was developed and evaluated for a light-wing-loading turboprop aircraft. The aircraft utilized an onboard advanced digital avionics system. Flight tests were conducted at a facility that included a STOL runway site with a microwave landing system. Longitudinal flight-test results are presented and compared with available (basically CTOL) criteria. These comparisons are augmented by results from a comprehensive simulation of the controlled aircraft, which included representations of the navigation errors encountered in flight as well as atmospheric disturbances. Acceptable performance on final approach and at touchdown was achieved by the autoland (automatic landing) system for the moderate wind and turbulence conditions encountered in flight. However, some touchdown performance goals were only marginally achieved, and simulation results suggested that difficulties could be encountered in more extreme atmospheric conditions. Suggestions are made for improving performance under those more extreme conditions.
NASA Astrophysics Data System (ADS)
Wang, Jen-Cheng; Liao, Min-Sheng; Lee, Yeun-Chung; Liu, Cheng-Yue; Kuo, Kun-Chang; Chou, Cheng-Ying; Huang, Chen-Kang; Jiang, Joe-Air
2018-02-01
The performance of photovoltaic (PV) modules under outdoor operation is greatly affected by their location and environmental conditions. The temperature of a PV module gradually increases as it is exposed to solar irradiation, degrading its electrical characteristics and power generation efficiency. This study adopts wireless sensor network (WSN) technology to develop an automatic water-cooling system for PV modules in order to improve their power generation efficiency. A temperature estimation method is developed to quickly and accurately estimate PV module temperatures from weather data provided by the WSN monitoring system. An estimation method is also proposed for evaluating the electrical characteristics and output power of the PV modules, which is performed remotely via a control platform. The automatic WSN-based water-cooling mechanism is designed to prevent the PV module temperature from reaching saturation. With each PV module equipped with the WSN-based cooling system, ambient conditions are monitored automatically and the module temperature is controlled by sprinkling water on the panel surface. Field-test results show an increase of approximately 17.75% in the energy harvested by the PV modules when the proposed WSN-based cooling system is used.
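Neither the temperature-estimation model nor the control thresholds are given in the abstract; the sketch below is a minimal stand-in that combines the widely used NOCT-based module temperature approximation with a hysteresis rule for switching the sprinkler. The NOCT value, the thresholds, and the function names are assumptions.

NOCT = 45.0                 # nominal operating cell temperature (deg C), assumed
T_ON, T_OFF = 55.0, 45.0    # hypothetical hysteresis thresholds (deg C)

def estimate_module_temp(t_ambient_c, irradiance_w_m2):
    # Common approximation: T_module = T_ambient + (NOCT - 20) / 800 * G.
    return t_ambient_c + (NOCT - 20.0) / 800.0 * irradiance_w_m2

def cooling_command(t_module_c, pump_is_on):
    # Start sprinkling above T_ON, stop below T_OFF, otherwise keep state.
    if t_module_c >= T_ON:
        return True
    if t_module_c <= T_OFF:
        return False
    return pump_is_on

# Example WSN reading: 33 deg C ambient and 950 W/m^2 irradiance.
t_mod = estimate_module_temp(33.0, 950.0)                 # about 62.7 deg C
print(t_mod, cooling_command(t_mod, pump_is_on=False))    # pump turns on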