Sample records for exploratory software testing

  1. Development and integration of a LabVIEW-based modular architecture for automated execution of electrochemical catalyst testing.

    PubMed

    Topalov, Angel A; Katsounaros, Ioannis; Meier, Josef C; Klemm, Sebastian O; Mayrhofer, Karl J J

    2011-11-01

    This paper describes a system for performing electrochemical catalyst testing where all hardware components are controlled simultaneously using a single LabVIEW-based software application. The software that we developed can be operated in both manual mode for exploratory investigations and automatic mode for routine measurements, by using predefined execution procedures. The latter enables the execution of high-throughput or combinatorial investigations, which substantially decrease the time and cost of catalyst testing. The software was constructed using a modular architecture which simplifies the modification or extension of the system, depending on future needs. The system was tested by performing stability tests of commercial fuel cell electrocatalysts, and the advantages of the developed system are discussed. © 2011 American Institute of Physics.
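
    The manual/automatic split described above maps naturally onto a small dispatcher pattern. The sketch below is only an illustration in Python (the authors' system is LabVIEW); the instrument name, commands, and procedure format are invented for the example.

```python
# Minimal sketch (in Python, not the authors' LabVIEW code) of the manual /
# automatic execution pattern the abstract describes; the instrument name,
# commands, and procedure format below are invented for illustration.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Step:
    instrument: str   # which hardware module to address
    command: str      # what to do
    params: dict      # command parameters


class Rig:
    """Aggregates hardware modules behind one uniform call interface."""

    def __init__(self) -> None:
        self.modules: Dict[str, Callable[[str, dict], None]] = {}

    def register(self, name: str, handler: Callable[[str, dict], None]) -> None:
        # new modules plug in here without touching the core (the "modular architecture")
        self.modules[name] = handler

    def run_manual(self, step: Step) -> None:
        # exploratory mode: execute a single operator-issued step
        self.modules[step.instrument](step.command, step.params)

    def run_automatic(self, procedure: List[Step]) -> None:
        # routine mode: replay a predefined procedure end to end
        for step in procedure:
            self.run_manual(step)


rig = Rig()
rig.register("potentiostat", lambda cmd, p: print(f"potentiostat: {cmd} {p}"))
rig.run_automatic([Step("potentiostat", "cycle", {"v_min": 0.05, "v_max": 1.2, "rate_V_s": 0.1})])
```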

  2. Preliminary Radiation Testing of a State-of-the-Art Commercial 14nm CMOS Processor - System-on-a-Chip

    NASA Technical Reports Server (NTRS)

    Szabo, Carl M., Jr.; Duncan, Adam; LaBel, Kenneth A.; Kay, Matt; Bruner, Pat; Krzesniak, Mike; Dong, Lei

    2015-01-01

    Hardness assurance test results of Intel's state-of-the-art 14nm Broadwell U-series processor System-on-a-Chip (SoC) for total dose are presented, along with first-look exploratory results from trials at a medical proton facility. The test method builds upon previous efforts by utilizing commercial laptop motherboards and software stress applications as opposed to more traditional automated test equipment (ATE).

  3. Preliminary Radiation Testing of a State-of-the-Art Commercial 14nm CMOS Processor/System-on-a-Chip

    NASA Technical Reports Server (NTRS)

    Szabo, Carl M., Jr.; Duncan, Adam; LaBel, Kenneth A.; Kay, Matt; Bruner, Pat; Krzesniak, Mike; Dong, Lei

    2015-01-01

    Hardness assurance test results of Intel's state-of-the-art 14nm “Broadwell” U-series processor/System-on-a-Chip (SoC) for total ionizing dose (TID) are presented, along with exploratory results from trials at a medical proton facility. The test method builds upon previous efforts [1] by utilizing commercial laptop motherboards and software stress applications as opposed to more traditional automated test equipment (ATE).

  4. Secure software practices among Malaysian software practitioners: An exploratory study

    NASA Astrophysics Data System (ADS)

    Mohamed, Shafinah Farvin Packeer; Baharom, Fauziah; Deraman, Aziz; Yahya, Jamaiah; Mohd, Haslina

    2016-08-01

    Secure software practice is gaining importance among software practitioners and researchers due to the rise of computer crimes in the software industry, and it has become one of the determining factors for producing high-quality software. Even though its importance is recognized, its adoption in the software industry is still limited, particularly in Malaysia. Thus, an exploratory study was conducted among software practitioners in Malaysia to examine their experiences and practices in real-world projects. This paper discusses the findings from the study, which involved 93 software practitioners. A structured questionnaire was used for data collection, whilst statistical methods such as frequency, mean, and cross tabulation were used for data analysis. Outcomes from this study reveal that software practitioners are becoming increasingly aware of the importance of secure software practices; however, they lack appropriate implementation, which could affect the quality of the produced software.

  5. Statistical analysis of Turbine Engine Diagnostic (TED) field test data

    NASA Astrophysics Data System (ADS)

    Taylor, Malcolm S.; Monyak, John T.

    1994-11-01

    During the summer of 1993, a field test of turbine engine diagnostic (TED) software, developed jointly by the U.S. Army Research Laboratory and the U.S. Army Ordnance Center and School, was conducted at Fort Stewart, GA. The data were collected in conformance with a cross-over design, some of whose considerations are detailed. The initial analysis of the field test data was exploratory, followed by a more formal investigation. Technical aspects of the data analysis and the insights that were elicited are reported.

  6. A Review of CEFA Software: Comprehensive Exploratory Factor Analysis Program

    ERIC Educational Resources Information Center

    Lee, Soon-Mook

    2010-01-01

    CEFA 3.02 (Browne, Cudeck, Tateneni, & Mels, 2008) is a factor analysis computer program designed to perform exploratory factor analysis. It provides the main properties that are needed for exploratory factor analysis, namely a variety of factoring methods employing eight different discrepancy functions to be minimized to yield initial…
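
    CEFA itself is a standalone program, so as a loose analogue the sketch below extracts a rotated two-factor exploratory solution with scikit-learn; the simulated data and factor count are arbitrary choices for illustration.

```python
# Loose analogue only: CEFA is a standalone program, so this sketch uses
# scikit-learn's FactorAnalysis to pull a rotated two-factor exploratory
# solution out of simulated data with an induced common-factor structure.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))        # 200 observations, 6 observed variables
X[:, 3:] += X[:, :3]                 # variables 4-6 share variance with variables 1-3

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
loadings = fa.components_.T          # variables x factors loading matrix
print(np.round(loadings, 2))
```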

  7. Verifying Diagnostic Software

    NASA Technical Reports Server (NTRS)

    Lindsey, Tony; Pecheur, Charles

    2004-01-01

    Livingstone PathFinder (LPF) is a simulation-based computer program for verifying autonomous diagnostic software. LPF is designed especially to be applied to NASA's Livingstone computer program, which implements a qualitative-model-based algorithm that diagnoses faults in a complex automated system (e.g., an exploratory robot, spacecraft, or aircraft). LPF forms a software test bed containing a Livingstone diagnosis engine, embedded in a simulated operating environment consisting of a simulator of the system to be diagnosed by Livingstone and a driver program that issues commands and faults according to a nondeterministic scenario provided by the user. LPF runs the test bed through all executions allowed by the scenario, checking for various selectable error conditions after each step. All components of the test bed are instrumented, so that execution can be single-stepped both backward and forward. The architecture of LPF is modular and includes generic interfaces to facilitate substitution of alternative versions of its different parts. Altogether, LPF provides a flexible, extensible framework for simulation-based analysis of diagnostic software; these characteristics also render it amenable to application to diagnostic programs other than Livingstone.
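
    The core loop LPF automates - enumerate every execution a nondeterministic scenario allows and run checks after each step - can be sketched generically. The code below is not LPF; the scenario format, simulator, and checker are placeholders.

```python
# Generic sketch of the scenario-exploration loop described above (not the LPF
# code): enumerate every execution a nondeterministic scenario allows and run a
# checker after each step. Scenario format, simulator, and checker are placeholders.
from itertools import product


def explore(scenario, simulate, check):
    """scenario: list of choice points, each a list of possible events."""
    failures = []
    for execution in product(*scenario):          # every execution the scenario allows
        state = {}
        for step, event in enumerate(execution):
            state = simulate(state, event)        # drive simulated system + diagnoser one step
            error = check(state)                  # selectable error conditions
            if error:
                failures.append((execution[:step + 1], error))
                break
    return failures


# Toy usage: two choice points; report executions where the same valve faults twice
scenario = [["nominal", "valve_stuck"], ["nominal", "valve_stuck"]]
simulate = lambda state, event: {**state, event: state.get(event, 0) + 1}
check = lambda state: "double fault" if state.get("valve_stuck", 0) > 1 else None
print(explore(scenario, simulate, check))
```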

  8. Spectral Knowledge (SK-UTALCA): Software for Exploratory Analysis of High-Resolution Spectral Reflectance Data on Plant Breeding

    PubMed Central

    Lobos, Gustavo A.; Poblete-Echeverría, Carlos

    2017-01-01

    This article describes public, free software that provides efficient exploratory analysis of high-resolution spectral reflectance data. Spectral reflectance data can suffer from problems such as poor signal-to-noise ratios in various wavebands or invalid measurements due to changes in incoming solar radiation or operator fatigue leading to poor orientation of sensors. Thus, exploratory data analysis is essential to identify appropriate data for further analyses. This software overcomes the problem that analysis tools such as Excel are cumbersome to use for the high number of wavelengths and samples typically acquired in these studies. The software, Spectral Knowledge (SK-UTALCA), was initially developed for plant breeding, but it is also suitable for other studies such as precision agriculture, crop protection, ecophysiology, plant nutrition, and soil fertility. Various spectral reflectance indices (SRIs) are often used to relate crop characteristics to spectral data and the software is loaded with 255 SRIs which can be applied quickly to the data. This article describes the architecture and functions of SK-UTALCA and the features of the data that led to the development of each of its modules. PMID:28119705
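
    As a concrete illustration of applying an SRI to reflectance data (SK-UTALCA bundles 255 of them), the sketch below computes NDVI from a samples-by-wavelength matrix; the wavelengths and data are placeholders, not the package's own defaults.

```python
# Applying one spectral reflectance index (NDVI, a widely used SRI) to a
# samples-by-wavelength reflectance matrix; wavelengths and data are placeholders.
import numpy as np

wavelengths = np.arange(350, 2501)                        # nm, a typical field-spectrometer range
reflectance = np.random.rand(10, wavelengths.size)        # 10 samples of fake reflectance

def band(wl_nm):
    """Reflectance column closest to the requested wavelength."""
    return reflectance[:, np.argmin(np.abs(wavelengths - wl_nm))]

nir, red = band(800), band(670)
ndvi = (nir - red) / (nir + red)                          # one of many SRIs the package applies
print(np.round(ndvi, 3))
```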

  9. Spectral Knowledge (SK-UTALCA): Software for Exploratory Analysis of High-Resolution Spectral Reflectance Data on Plant Breeding.

    PubMed

    Lobos, Gustavo A; Poblete-Echeverría, Carlos

    2016-01-01

    This article describes public, free software that provides efficient exploratory analysis of high-resolution spectral reflectance data. Spectral reflectance data can suffer from problems such as poor signal-to-noise ratios in various wavebands or invalid measurements due to changes in incoming solar radiation or operator fatigue leading to poor orientation of sensors. Thus, exploratory data analysis is essential to identify appropriate data for further analyses. This software overcomes the problem that analysis tools such as Excel are cumbersome to use for the high number of wavelengths and samples typically acquired in these studies. The software, Spectral Knowledge (SK-UTALCA), was initially developed for plant breeding, but it is also suitable for other studies such as precision agriculture, crop protection, ecophysiology, plant nutrition, and soil fertility. Various spectral reflectance indices (SRIs) are often used to relate crop characteristics to spectral data and the software is loaded with 255 SRIs which can be applied quickly to the data. This article describes the architecture and functions of SK-UTALCA and the features of the data that led to the development of each of its modules.

  10. Enabling Advanced Wind-Tunnel Research Methods Using the NASA Langley 12-Foot Low Speed Tunnel

    NASA Technical Reports Server (NTRS)

    Busan, Ronald C.; Rothhaar, Paul M.; Croom, Mark A.; Murphy, Patrick C.; Grafton, Sue B.; O-Neal, Anthony W.

    2014-01-01

    Design of Experiment (DOE) testing methods were used to gather wind tunnel data characterizing the aerodynamic and propulsion forces and moments acting on a complex vehicle configuration with 10 motor-driven propellers, 9 control surfaces, a tilt wing, and a tilt tail. This paper describes the potential benefits and practical implications of using DOE methods for wind tunnel testing - with an emphasis on describing how it can affect model hardware, facility hardware, and software for control and data acquisition. With up to 23 independent variables (19 model and 2 tunnel) for some vehicle configurations, this recent test also provides an excellent example of using DOE methods to assess critical coupling effects in a reasonable timeframe for complex vehicle configurations. Results for an exploratory test using conventional angle-of-attack sweeps to assess aerodynamic hysteresis are summarized, and DOE results are presented for an exploratory test used to set the data sampling time for the overall test. DOE results are also shown for one production test characterizing normal force in the Cruise mode for the vehicle.

  11. Quality Assurance in Software Development: An Exploratory Investigation in Software Project Failures and Business Performance

    ERIC Educational Resources Information Center

    Ichu, Emmanuel A.

    2010-01-01

    Software quality is perhaps one of the most sought-after attributes in product development; however, this goal is unattained. Problem factors in software development and how these have affected the maintainability of the delivered software systems require a thorough investigation. It was, therefore, very important to understand software…

  12. PAF: A software tool to estimate free-geometry extended bodies of anomalous pressure from surface deformation data

    NASA Astrophysics Data System (ADS)

    Camacho, A. G.; Fernández, J.; Cannavò, F.

    2018-02-01

    We present a software package to carry out inversions of surface deformation data (any combination of InSAR, GPS, and terrestrial data, e.g., EDM, levelling) as produced by 3D free-geometry extended bodies with anomalous pressure changes. The anomalous structures are described as an aggregation of elementary cells (whose effects are estimated as coming from point sources) in an elastic half space. The linear inverse problem (considering some simple regularization conditions) is solved by means of an exploratory approach. This software represents the open implementation of a previously published methodology (Camacho et al., 2011). It can be freely used with large data sets (e.g. InSAR data sets) or with data coming from small control networks (e.g. GPS monitoring data), mainly in volcanic areas, to estimate the expected pressure bodies representing magmatic intrusions. Here, the software is applied to some real test cases.
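
    A hedged sketch of the "aggregation of elementary cells" idea follows: given precomputed point-source responses (Green's functions), cells are added greedily while they reduce the misfit to the observed deformation. This is only a conceptual analogue on synthetic numbers, not the PAF algorithm or its regularization.

```python
# Conceptual analogue (not the PAF code) of growing a pressure body from
# elementary cells: add the cell whose point-source response best reduces the
# remaining misfit to the observed surface deformation.
import numpy as np

def grow_model(G, d, n_cells, strength):
    """G: (n_obs, n_candidate_cells) responses per unit pressure increment,
    d: observed deformation, strength: fixed pressure increment per cell."""
    selected, residual = [], d.copy()
    for _ in range(n_cells):
        misfit = np.linalg.norm(residual[:, None] - strength * G, axis=0)
        best = int(np.argmin(misfit))             # cell explaining most of what is left
        selected.append(best)
        residual = residual - strength * G[:, best]
    return selected, residual

rng = np.random.default_rng(1)
G = rng.random((200, 60))                         # placeholder Green's functions
d = 0.5 * (G[:, 10] + G[:, 11])                   # synthetic data generated by two cells
print(grow_model(G, d, n_cells=2, strength=0.5)[0])
```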

  13. Man-machine Integration Design and Analysis System (MIDAS) Task Loading Model (TLM) experimental and software detailed design report

    NASA Technical Reports Server (NTRS)

    Staveland, Lowell

    1994-01-01

    This is the experimental and software detailed design report for the prototype task loading model (TLM) developed as part of the man-machine integration design and analysis system (MIDAS), as implemented and tested in phase 6 of the Army-NASA Aircrew/Aircraft Integration (A3I) Program. The A3I program is an exploratory development effort to advance the capabilities and use of computational representations of human performance and behavior in the design, synthesis, and analysis of manned systems. The MIDAS TLM computationally models the demands designs impose on operators to aid engineers in the conceptual design of aircraft crewstations. This report describes TLM and the results of a series of experiments which were run during this phase to test its capabilities as a predictive task demand modeling tool. Specifically, it includes discussions of: the inputs and outputs of TLM, the theories underlying it, the results of the test experiments, the use of the TLM as both a stand-alone tool and as part of a complete human operator simulation, and a brief introduction to the TLM software design.

  14. An Overview of Software for Conducting Dimensionality Assessment in Multidimensional Models

    ERIC Educational Resources Information Center

    Svetina, Dubravka; Levy, Roy

    2012-01-01

    An overview of popular software packages for conducting dimensionality assessment in multidimensional models is presented. Specifically, five popular software packages are described in terms of their capabilities to conduct dimensionality assessment with respect to the nature of analysis (exploratory or confirmatory), types of data (dichotomous,…

  15. An Exploratory Study of Software Cost Estimating at the Electronic Systems Division.

    DTIC Science & Technology

    1976-07-01

    action’. to improve the software cost Sestimating proces., While thin research was limited to the M.nD onvironment, the same types of problema may exist...Methods in Social Science. Now York: Random House, 1969. 57. Smith, Ronald L. Structured Programming Series (Vol. XI) - Estimating Software Project

  16. Information Leaks and Limitations of Role-Based Access Control Mechanisms: A Qualitative Exploratory Single Case Study

    ERIC Educational Resources Information Center

    Antony, Laljith

    2016-01-01

    Failing to prevent leaks of confidential and proprietary information to unauthorized users from software applications is a major challenge that companies face. Access control policies defined in software applications with access control mechanisms are unable to prevent information leaks from software applications to unauthorized users. Role-based…

  17. Application Development Methodology Appropriateness: An Exploratory Case Study Bridging the Gap between Framework Characteristics and Selection

    ERIC Educational Resources Information Center

    Williams, Lawrence H., Jr.

    2013-01-01

    This qualitative study analyzed experiences of twenty software developers. The research showed that all software development methodologies are distinct from each other. While some, such as waterfall, focus on traditional, plan-driven approaches that allow software requirements and design to evolve, others facilitate ambiguity and uncertainty by…

  18. Identification of genes in anonymous DNA sequences. Annual performance report, February 1, 1991--January 31, 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fields, C.A.

    1996-06-01

    The objective of this project is the development of practical software to automate the identification of genes in anonymous DNA sequences from the human, and other higher eukaryotic genomes. A software system for automated sequence analysis, gm (gene modeler), has been designed, implemented, tested, and distributed to several dozen laboratories worldwide. A significantly faster, more robust, and more flexible version of this software, gm 2.0, has now been completed, and is being tested by operational use to analyze human cosmid sequence data. A range of efforts to further understand the features of eukaryotic gene sequences are also underway. This progress report also contains papers coming out of the project, including the following: gm: a Tool for Exploratory Analysis of DNA Sequence Data; The Human THE-LTR(O) and MstII Interspersed Repeats are subfamilies of a single widely distributed highly variable repeat family; Information contents and dinucleotide compositions of plant intron sequences vary with evolutionary origin; Splicing signals in Drosophila: intron size, information content, and consensus sequences; Integration of automated sequence analysis into mapping and sequencing projects; Software for the C. elegans genome project.

  19. Software Reviews.

    ERIC Educational Resources Information Center

    Teles, Elizabeth, Ed.; And Others

    1990-01-01

    Reviewed are two computer software packages for Macintosh microcomputers including "Phase Portraits," an exploratory graphics tool for studying first-order planar systems; and "MacMath," a set of programs for exploring differential equations, linear algebra, and other mathematical topics. Features, ease of use, cost, availability, and hardware…

  20. Something Drawn, Something Touched, Something Scrolled: An Exploratory Comparison of Perimeter and Area Interventions Including Kidspiration

    ERIC Educational Resources Information Center

    Sossi, Dino; Jamalian, Azadeh; Richardson, Shenetta

    2011-01-01

    This exploratory study compared a computer-based mathematics education intervention with two more traditional approaches with the purpose of improving instruction in perimeter and area. Kidspiration software, tile/stick manipulatives and pencil/paper-based copying/drawing of shapes were implemented in a 3rd Grade New York City public school…

  1. Robust Software Architecture for Robots

    NASA Technical Reports Server (NTRS)

    Aghazanian, Hrand; Baumgartner, Eric; Garrett, Michael

    2009-01-01

    Robust Real-Time Reconfigurable Robotics Software Architecture (R4SA) is the name of both a software architecture and software that embodies the architecture. The architecture was conceived in the spirit of current practice in designing modular, hard real-time aerospace systems. The architecture facilitates the integration of new sensory, motor, and control software modules into the software of a given robotic system. R4SA was developed for initial application aboard exploratory mobile robots on Mars, but is adaptable to terrestrial robotic systems, real-time embedded computing systems in general, and robotic toys.

  2. Deindividuation and Internet software piracy.

    PubMed

    Hinduja, Sameer

    2008-08-01

    Computer crime has increased exponentially in recent years as hardware, software, and network resources become more affordable and available to individuals from all walks of life. Software piracy is one prevalent type of cybercrime and has detrimentally affected the economic health of the software industry. Moreover, piracy arguably represents a rent in the moral fabric associated with the respect of intellectual property and reduces the financial incentive of product creation and innovation. Deindividuation theory, originating from the field of social psychology, argues that individuals are extricated from responsibility for their actions simply because they no longer have an acute awareness of the identity of self and of others. That is, external and internal constraints that would typically regulate questionable behavior are rendered less effective via certain anonymizing and disinhibiting conditions of the social and environmental context. This exploratory piece seeks to establish the role of deindividuation in liberating individuals to commit software piracy by testing the hypothesis that persons who prefer the anonymity and pseudonymity associated with interaction on the Internet are more likely to pirate software. Through this research, it is hoped that the empirical identification of such a social psychological determinant will help further illuminate the phenomenon.

  3. Open Crowdsourcing: Leveraging Community Software Developers for IT Projects

    ERIC Educational Resources Information Center

    Phair, Derek

    2012-01-01

    This qualitative exploratory single-case study was designed to examine and understand the use of volunteer community participants as software developers and other project related roles, such as testers, in completing a web-based application project by a non-profit organization. This study analyzed the strategic decision to engage crowd…

  4. Development of a new multimedia instrument to measure cancer-specific quality of life in Portuguese-speaking patients with varying literacy skills.

    PubMed

    Paiva, Carlos Eduardo; Siquelli, Felipe Augusto Ferreira; Zaia, Gabriela Rossi; de Andrade, Diocésio Alves Pinto; Borges, Marcos Aristoteles; Jácome, Alexandre A; Giroldo, Gisele Augusta Sousa Nascimento; Santos, Henrique Amorim; Hahn, Elizabeth A; Uemura, Gilberto; Paiva, Bianca Sakamoto Ribeiro

    2016-01-01

    To develop and validate a new multimedia instrument to measure health-related quality of life (HRQOL) in Portuguese-speaking patients with cancer. A mixed-methods study conducted in a large Brazilian Cancer Hospital. The instrument was developed along the following sequential phases: identification of HRQOL issues through qualitative content analysis of individual interviews, evaluation of the most important items according to the patients, review of the literature, evaluation by an expert committee, and pretesting. In sequence, an exploratory factor analysis was conducted (pilot testing, n = 149) to reduce the number of items and to define domains and scores. The psychometric properties of the IQualiV-OG-21 were measured in a large multicentre Brazilian study (n = 323). A software application containing multimedia resources was developed to facilitate self-administration of the IQualiV-OG-21; its feasibility and patients' preferences ("paper and pencil" vs. software) were further tested (n = 54). An exploratory factor analysis reduced the 30-item instrument to 21 items. The IQualiV-OG-21 was divided into 6 domains: emotional, physical, existential, interpersonal relationships, functional and financial. The multicentre study confirmed that it was valid and reliable. The electronic multimedia instrument was easy to complete and acceptable to patients. Regarding preferences, 61.1% of them preferred the electronic format in comparison with the paper and pencil format. The IQualiV-OG-21 is a new valid and reliable multimedia HRQOL instrument that is well-understood, even by patients with low literacy skills, and can be answered quickly. It is a useful new tool that can be translated and tested in other cultures and languages.

  5. Known and Unknown Weaknesses in Software Animated Demonstrations (Screencasts): A Study in Self-Paced Learning Settings

    ERIC Educational Resources Information Center

    Palaigeorgiou, George; Despotakis, Theofanis

    2010-01-01

    Learning about computers continues to be regarded as a rather informal and complex landscape dominated by individual exploratory and opportunistic approaches, even for students and instructors in Computer Science Departments. During the last two decades, software animated demonstrations (SADs), also known as screencasts, have attracted particular…

  6. Effectiveness of Software Training Using Simulations: An Exploratory Study

    ERIC Educational Resources Information Center

    McElroy, Arnold D., Jr.; Pan, Cheng-Chang

    2009-01-01

    This study was designed to explore the effectiveness, in terms of student performance and confidence, of limited and full device simulators. The 30 employees from an information technology company who participated in this study were assigned to one of three groups. Each group received practice for learning a complex software procedure using traditional…

  7. Perceptions of Open Source versus Commercial Software: Is Higher Education Still on the Fence?

    ERIC Educational Resources Information Center

    van Rooij, Shahron Williams

    2007-01-01

    This exploratory study investigated the perceptions of technology and academic decision-makers about open source benefits and risks versus commercial software applications. The study also explored reactions to a concept for outsourcing campus-wide deployment and maintenance of open source. Data collected from telephone interviews were analyzed,…

  8. Identification challenges for large space structures

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.

    1990-01-01

    The paper examines the on-orbit modal identification of large space structures, stressing the importance of planning and experience, in preparation for the Space Station Structural Characterization Experiment (SSSCE) for the Space Station Freedom. The necessary information to foresee and overcome practical difficulties is considered in connection with seven key factors, including test objectives, dynamic complexity of the structure, data quality, extent of exploratory studies, availability and understanding of software tools, experience with similar problems, and pretest analytical conditions. These factors affect identification success in ground tests. Comparisons with similar ground tests of assembled systems are discussed, showing that the constraints of space tests make these factors more significant. The absence of data and experiences relating to on-orbit modal identification testing is shown to make identification a uniquely mathematical problem, although all spacecraft are constructed and verified by proven engineering methods.

  9. Data Visualization: An Exploratory Study into the Software Tools Used by Businesses

    ERIC Educational Resources Information Center

    Diamond, Michael; Mattia, Angela

    2017-01-01

    Data visualization is a key component to business and data analytics, allowing analysts in businesses to create tools such as dashboards for business executives. Various software packages allow businesses to create these tools in order to manipulate data for making informed business decisions. The focus is to examine what skills employers are…

  10. Digital Modeling in Design Foundation Coursework: An Exploratory Study of the Effectiveness of Conceptual Design Software

    ERIC Educational Resources Information Center

    Guidera, Stan; MacPherson, D. Scot

    2008-01-01

    This paper presents the results of a study that was conducted to identify and document student perceptions of the effectiveness of computer modeling software introduced in a design foundations course that had previously utilized only conventional manually-produced representation techniques. Rather than attempt to utilize a production-oriented CAD…

  11. Why Don't All Maths Teachers Use Dynamic Geometry Software in Their Classrooms?

    ERIC Educational Resources Information Center

    Stols, Gerrit; Kriek, Jeanne

    2011-01-01

    In this exploratory study, we sought to examine the influence of mathematics teachers' beliefs on their intended and actual usage of dynamic mathematics software in their classrooms. The theory of planned behaviour (TPB), the technology acceptance model (TAM) and the innovation diffusion theory (IDT) were used to examine the influence of teachers'…

  12. Data Visualization: An Exploratory Study into the Software Tools Used by Businesses

    ERIC Educational Resources Information Center

    Diamond, Michael; Mattia, Angela

    2015-01-01

    Data visualization is a key component to business and data analytics, allowing analysts in businesses to create tools such as dashboards for business executives. Various software packages allow businesses to create these tools in order to manipulate data for making informed business decisions. The focus is to examine what skills employers are…

  13. The Generalizability of Private Sector Research on Software Project Management in Two USAF Organizations: An Exploratory Study

    DTIC Science & Technology

    2003-03-01

    private sector. Researchers have also identified software acquisitions as one of the major differences between the private sector and public sector MIS. This indicates that the elements for a successful software project in the public sector may be different from the private sector. Private sector project success depends on many elements. Three of them are user interaction with the project’s development, critical success factors, and how the project manager prioritizes the traditional success criteria.

  14. Adoption of Requirements Engineering Practices in Malaysian Software Development Companies

    NASA Astrophysics Data System (ADS)

    Solemon, Badariah; Sahibuddin, Shamsul; Ghani, Abdul Azim Abd

    This paper presents exploratory survey results on Requirements Engineering (RE) practices of some software development companies in Malaysia. The survey attempted to identify patterns of RE practices the companies are implementing. Information required for the survey was obtained through mailed, self-administered questionnaires distributed to project managers and software developers working at software development companies operating across the country. The results showed that the overall adoption of the RE practices in these companies is strong. However, the results also indicated that fewer companies in the survey use appropriate CASE tools or software to support their RE process and practices, define traceability policies, and maintain traceability manuals in their projects.

  15. The use and misuse of statistical methodologies in pharmacology research.

    PubMed

    Marino, Michael J

    2014-01-01

    Descriptive, exploratory, and inferential statistics are necessary components of hypothesis-driven biomedical research. Despite the ubiquitous need for these tools, the emphasis on statistical methods in pharmacology has become dominated by inferential methods often chosen more by the availability of user-friendly software than by any understanding of the data set or the critical assumptions of the statistical tests. Such frank misuse of statistical methodology and the quest to reach the mystical α<0.05 criterion has hampered research via the publication of incorrect analyses driven by rudimentary statistical training. Perhaps more critically, a poor understanding of statistical tools limits the conclusions that may be drawn from a study by divorcing the investigator from their own data. The net result is a decrease in quality and confidence in research findings, fueling recent controversies over the reproducibility of high profile findings and effects that appear to diminish over time. The recent development of "omics" approaches leading to the production of massive higher dimensional data sets has amplified these issues, making it clear that new approaches are needed to appropriately and effectively mine this type of data. Unfortunately, statistical education in the field has not kept pace. This commentary provides a foundation for an intuitive understanding of statistics that fosters an exploratory approach and an appreciation for the assumptions of various statistical tests that hopefully will increase the correct use of statistics, the application of exploratory data analysis, and the use of statistical study design, with the goal of increasing reproducibility and confidence in the literature. Copyright © 2013. Published by Elsevier Inc.
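
    The commentary's main prescription - explore the data and check test assumptions before reaching for a p-value - is easy to demonstrate; the sketch below uses simulated, skewed group data and standard SciPy routines.

```python
# Simulated, skewed group data; look at the groups and check normality before
# choosing a test, instead of defaulting to whatever the software offers first.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.lognormal(mean=0.0, sigma=0.6, size=20)
treated = rng.lognormal(mean=0.4, sigma=0.6, size=20)

for name, x in [("control", control), ("treated", treated)]:
    _, p_norm = stats.shapiro(x)                           # exploratory assumption check
    print(f"{name}: median={np.median(x):.2f}, IQR={stats.iqr(x):.2f}, Shapiro p={p_norm:.3f}")

# With clearly skewed data, a rank-based test is a defensible choice here
print("Mann-Whitney U p =", stats.mannwhitneyu(control, treated).pvalue)
```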

  16. [Effects of nootropic drugs on behavior of BALB/c and C57BL/6 mice in the exploratory cross-maze test].

    PubMed

    Vasil'eva, E V; Salimov, R M; Kovalev, G I

    2012-01-01

    Exploratory behavior, locomotor activity, and anxiety in inbred mice of C57BL/6 and BALB/c strains subchronically treated with placebo or various types of nootropic (cognition enhancing) drugs (piracetam, phenotropil, noopept, semax, pantogam, nooglutil) have been evaluated using the exploratory cross-maze test. It was found that BALB/c mice in comparison to C57BL/6 mice are characterized by greater anxiety and lower efficiency of exploratory behavior in the previously unfamiliar environment. All tested drugs clearly improved the exploratory behavior in BALB/c mice only. In BALB/c mice, piracetam, phenotropil, noopept, and semax also reduced anxiety, while phenotropil additionally increased locomotor activity. Thus, the nootropic drugs displayed clear positive modulation of spontaneous orientation in the mice strain with initially low exploratory efficiency (BALB/c) in the cross-maze test. Some drugs (pantogam, nooglutil) exhibited only nootropic properties, while the other drugs exhibited both nootropic effects on the exploratory activity and produced modulation of the anxiety level (piracetam, phenotropil, noopept, semax) and locomotor activity (phenotropil).

  17. Computational Analysis of the Transonic Dynamics Tunnel Using FUN3D

    NASA Technical Reports Server (NTRS)

    Chwalowski, Pawel; Quon, Eliot; Brynildsen, Scott E.

    2016-01-01

    This paper presents results from an exploratory two-year effort of applying Computational Fluid Dynamics (CFD) to analyze the empty-tunnel flow in the NASA Langley Research Center Transonic Dynamics Tunnel (TDT). The TDT is a continuous-flow, closed circuit, 16- x 16-foot slotted-test-section wind tunnel, with capabilities to use air or heavy gas as a working fluid. In this study, experimental data acquired in the empty tunnel using the R-134a test medium was used to calibrate the computational data. The experimental calibration data includes wall pressures, boundary-layer profiles, and the tunnel centerline Mach number profiles. Subsonic and supersonic flow regimes were considered, focusing on Mach 0.5, 0.7 and Mach 1.1 in the TDT test section. This study discusses the computational domain, boundary conditions, and initial conditions selected and the resulting steady-state analyses using NASA's FUN3D CFD software.

  18. A bootstrap method for estimating uncertainty of water quality trends

    USGS Publications Warehouse

    Hirsch, Robert M.; Archfield, Stacey A.; DeCicco, Laura

    2015-01-01

    Estimation of the direction and magnitude of trends in surface water quality remains a problem of great scientific and practical interest. The Weighted Regressions on Time, Discharge, and Season (WRTDS) method was recently introduced as an exploratory data analysis tool to provide flexible and robust estimates of water quality trends. This paper enhances the WRTDS method through the introduction of the WRTDS Bootstrap Test (WBT), an extension of WRTDS that quantifies the uncertainty in WRTDS estimates of water quality trends and offers various ways to visualize and communicate these uncertainties. Monte Carlo experiments are applied to estimate the Type I error probabilities for this method. WBT is compared to other water-quality trend-testing methods appropriate for data sets of one to three decades in length with sampling frequencies of 6–24 observations per year. The software to conduct the test is in the EGRETci R-package.
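
    WBT itself operates on WRTDS output and is distributed in the EGRETci R package; as a generic illustration of bootstrap uncertainty for a trend estimate, the sketch below resamples synthetic annual values and reports a confidence interval for a simple linear slope.

```python
# Generic bootstrap of a trend estimate's uncertainty on synthetic annual data;
# the actual WBT operates on WRTDS output via the EGRETci R package.
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1990, 2015)
conc = 2.0 - 0.02 * (years - years[0]) + rng.normal(0.0, 0.15, years.size)

def slope(x, y):
    return np.polyfit(x, y, 1)[0]                # simple linear trend, stand-in for WRTDS

boot = np.array([slope(years[idx], conc[idx])
                 for idx in (rng.integers(0, years.size, years.size) for _ in range(2000))])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"trend = {slope(years, conc):+.4f} per year, 95% CI [{lo:+.4f}, {hi:+.4f}]")
```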

  19. VarDetect: a nucleotide sequence variation exploratory tool

    PubMed Central

    Ngamphiw, Chumpol; Kulawonganunchai, Supasak; Assawamakin, Anunchai; Jenwitheesuk, Ekachai; Tongsima, Sissades

    2008-01-01

    Background: Single nucleotide polymorphisms (SNPs) are the most commonly studied units of genetic variation. The discovery of such variation may help to identify causative gene mutations in monogenic diseases and SNPs associated with predisposing genes in complex diseases. Accurate detection of SNPs requires software that can correctly interpret chromatogram signals to nucleotides. Results: We present VarDetect, a stand-alone nucleotide variation exploratory tool that automatically detects nucleotide variation from fluorescence based chromatogram traces. Accurate SNP base-calling is achieved using pre-calculated peak content ratios, and is enhanced by rules which account for common sequence reading artifacts. The proposed software tool is benchmarked against four other well-known SNP discovery software tools (PolyPhred, novoSNP, Genalys and Mutation Surveyor) using fluorescence based chromatograms from 15 human genes. These chromatograms were obtained from sequencing 16 two-pooled DNA samples; a total of 32 individual DNA samples. In this comparison of automatic SNP detection tools, VarDetect achieved the highest detection efficiency. Availability: VarDetect is compatible with most major operating systems such as Microsoft Windows, Linux, and Mac OSX. The current version of VarDetect is freely available at . PMID:19091032
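
    The abstract does not give VarDetect's actual base-calling rules, but the peak-content-ratio idea can be illustrated with a toy: flag a position as a candidate heterozygote when the secondary trace peak is a sufficiently large fraction of the primary one. The threshold and data below are invented.

```python
# Toy peak-content-ratio caller (thresholds and data invented): flag a position
# as a candidate heterozygous SNP when the second-strongest trace peak is a
# large enough fraction of the strongest one.
def call_positions(traces, ratio_threshold=0.35):
    """traces: one dict per position mapping base -> peak amplitude."""
    calls = []
    for i, peaks in enumerate(traces):
        (b1, a1), (b2, a2) = sorted(peaks.items(), key=lambda kv: kv[1], reverse=True)[:2]
        if a1 > 0 and a2 / a1 >= ratio_threshold:
            calls.append((i, b1 + b2))            # e.g. "AG" for an A/G heterozygote
    return calls

example = [{"A": 900, "C": 40, "G": 60, "T": 30},
           {"A": 500, "C": 20, "G": 430, "T": 25}]   # second position looks heterozygous
print(call_positions(example))                        # -> [(1, 'AG')]
```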

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prescott, Ryan; Marger, Bernard L.; Chiu, Ailsa

    During the second iteration of the US NDC Modernization Elaboration phase (E2), the SNL US NDC Modernization project team completed follow-on COTS surveys & exploratory prototyping related to the Object Storage & Distribution (OSD) mechanism, and the processing control software infrastructure. This report summarizes the E2 prototyping work.

  1. Software for Planning Scientific Activities on Mars

    NASA Technical Reports Server (NTRS)

    Ai-Chang, Mitchell; Bresina, John; Jonsson, Ari; Hsu, Jennifer; Kanefsky, Bob; Morris, Paul; Rajan, Kanna; Yglesias, Jeffrey; Charest, Len; Maldague, Pierre

    2003-01-01

    Mixed-Initiative Activity Plan Generator (MAPGEN) is a ground-based computer program for planning and scheduling the scientific activities of instrumented exploratory robotic vehicles, within the limitations of available resources onboard the vehicle. MAPGEN is a combination of two prior software systems: (1) an activity-planning program, APGEN, developed at NASA's Jet Propulsion Laboratory and (2) the Europa planner/scheduler from NASA Ames Research Center. MAPGEN performs all of the following functions: Automatic generation of plans and schedules for scientific and engineering activities; Testing of hypotheses (or what-if analyses of various scenarios); Editing of plans; Computation and analysis of resources; and Enforcement and maintenance of constraints, including resolution of temporal and resource conflicts among planned activities. MAPGEN can be used in either of two modes: one in which the planner/scheduler is turned off and only the basic APGEN functionality is utilized, or one in which both component programs are used to obtain the full planning, scheduling, and constraint-maintenance functionality.

  2. Novel test of motor and other dysfunctions in mouse neurological disease models.

    PubMed

    Barth, Albert M I; Mody, Istvan

    2014-01-15

    Just like human neurological disorders, corresponding mouse models present multiple deficiencies. Estimating disease progression or potential treatment effectiveness in such models necessitates the use of time consuming and multiple tests usually requiring a large number of scarcely available genetically modified animals. Here we present a novel and simple single camera arrangement and analysis software for detailed motor function evaluation in mice walking on a wire mesh that provides complex 3D information (instantaneous position, speed, distance traveled, foot fault depth, duration, location, relationship to speed of movement, etc.). We investigated 3 groups of mice with various neurological deficits: (1) unilateral motor cortical stroke; (2) effects of moderate ethanol doses; and (3) aging (96-99 weeks old). We show that post stroke recovery can be divided into separate stages based on strikingly different characteristics of motor function deficits, some resembling the human motor neglect syndrome. Mice treated with moderate dose of alcohol and aged mice showed specific motor and exploratory deficits. Other tests rely either partially or entirely on manual video analysis introducing a significant subjective component into the analysis, and analyze a single aspect of motor function. Our novel experimental approach provides qualitatively new, complex information about motor impairments and locomotor/exploratory activity. It should be useful for the detailed characterization of a broad range of human neurological disease models in mice, and for the more accurate assessment of disease progression or treatment effectiveness. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. NCAR global model topography generation software for unstructured grids

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Bacmeister, J. T.; Callaghan, P. F.; Taylor, M. A.

    2015-06-01

    It is the purpose of this paper to document the NCAR global model topography generation software for unstructured grids. Given a model grid, the software computes the fraction of the grid box covered by land, the gridbox mean elevation, and associated sub-grid scale variances commonly used for gravity wave and turbulent mountain stress parameterizations. The software supports regular latitude-longitude grids as well as unstructured grids; e.g. icosahedral, Voronoi, cubed-sphere and variable resolution grids. As an example application and in the spirit of documenting model development, exploratory simulations illustrating the impacts of topographic smoothing with the NCAR-DOE CESM (Community Earth System Model) CAM5.2-SE (Community Atmosphere Model version 5.2 - Spectral Elements dynamical core) are shown.
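
    The per-grid-box quantities named above (land fraction, mean elevation, sub-grid variance) reduce to a block aggregation of high-resolution source data. The sketch below shows this for a regular latitude-longitude target grid only; handling unstructured grids, as the NCAR software does, requires a more general cell-overlap computation.

```python
# Block aggregation of a high-resolution elevation field and land mask onto a
# coarser regular grid: land fraction, land-mean elevation, sub-grid variance.
import numpy as np

def aggregate(elev, land, factor):
    """Per-grid-box quantities over factor x factor blocks of source cells."""
    ny, nx = elev.shape
    e = elev.reshape(ny // factor, factor, nx // factor, factor)
    m = land.reshape(ny // factor, factor, nx // factor, factor).astype(float)
    n_land = np.maximum(m.sum(axis=(1, 3)), 1.0)           # avoid divide-by-zero in ocean boxes
    land_frac = m.mean(axis=(1, 3))
    mean_elev = (e * m).sum(axis=(1, 3)) / n_land
    var_elev = (((e - mean_elev[:, None, :, None]) ** 2) * m).sum(axis=(1, 3)) / n_land
    return land_frac, mean_elev, var_elev

# Placeholder high-resolution source data (the real input would be a global DEM + land mask)
rng = np.random.default_rng(3)
elev = rng.normal(300.0, 200.0, (1800, 3600))
land = elev > 0.0

lf, me, var = aggregate(elev, land, factor=20)             # 20x20 source cells per model grid box
print(lf.shape, float(me.mean()), float(var.mean()))
```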

  4. NASA Tech Briefs, February 2004

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Topics include: Simulation Testing of Embedded Flight Software; Improved Indentation Test for Measuring Nonlinear Elasticity; Ultraviolet-Absorption Spectroscopic Biofilm Monitor; Electronic Tongue for Quantitation of Contaminants in Water; Radar for Measuring Soil Moisture Under Vegetation; Modular Wireless Data-Acquisition and Control System; Microwave System for Detecting Ice on Aircraft; Routing Algorithm Exploits Spatial Relations; Two-Finger EKG Method of Detecting Evasive Responses; Updated System-Availability and Resource-Allocation Program; Routines for Computing Pressure Drops in Venturis; Software for Fault-Tolerant Matrix Multiplication; Reproducible Growth of High-Quality Cubic-SiC Layers; Nonlinear Thermoelastic Model for SMAs and SMA Hybrid Composites; Liquid-Crystal Thermosets, a New Generation of High-Performance Liquid-Crystal Polymers; Formulations for Stronger Solid Oxide Fuel-Cell Electrolytes; Simulation of Hazards and Poses for a Rocker-Bogie Rover; Autonomous Formation Flight; Expandable Purge Chambers Would Protect Cryogenic Fittings; Wavy-Planform Helicopter Blades Make Less Noise; Miniature Robotic Spacecraft for Inspecting Other Spacecraft; Miniature Ring-Shaped Peristaltic Pump; Compact Plasma Accelerator; Improved Electrohydraulic Linear Actuators; A Software Architecture for Semiautonomous Robot Control; Fabrication of Channels for Nanobiotechnological Devices; Improved Thin, Flexible Heat Pipes; Miniature Radioisotope Thermoelectric Power Cubes; Permanent Sequestration of Emitted Gases in the Form of Clathrate Hydrates; Electrochemical, H2O2-Boosted Catalytic Oxidation System; Electrokinetic In Situ Treatment of Metal-Contaminated Soil; Pumping Liquid Oxygen by Use of Pulsed Magnetic Fields; Magnetocaloric Pumping of Liquid Oxygen; Tailoring Ion-Thruster Grid Apertures for Greater Efficiency; and Lidar for Guidance of a Spacecraft or Exploratory Robot.

  5. BeeSpace Navigator: exploratory analysis of gene function using semantic indexing of biological literature.

    PubMed

    Sen Sarma, Moushumi; Arcoleo, David; Khetani, Radhika S; Chee, Brant; Ling, Xu; He, Xin; Jiang, Jing; Mei, Qiaozhu; Zhai, ChengXiang; Schatz, Bruce

    2011-07-01

    With the rapid decrease in cost of genome sequencing, the classification of gene function is becoming a primary problem. Such classification has been performed by human curators who read biological literature to extract evidence. BeeSpace Navigator is a prototype software for exploratory analysis of gene function using biological literature. The software supports an automatic analogue of the curator process to extract functions, with a simple interface intended for all biologists. Since extraction is done on selected collections that are semantically indexed into conceptual spaces, the curation can be task specific. Biological literature containing references to gene lists from expression experiments can be analyzed to extract concepts that are computational equivalents of a classification such as Gene Ontology, yielding discriminating concepts that differentiate gene mentions from other mentions. The functions of individual genes can be summarized from sentences in biological literature, to produce results resembling a model organism database entry that is automatically computed. Statistical frequency analysis based on literature phrase extraction generates offline semantic indexes to support these gene function services. The website with BeeSpace Navigator is free and open to all; there is no login requirement at www.beespace.illinois.edu for version 4. Materials from the 2010 BeeSpace Software Training Workshop are available at www.beespace.illinois.edu/bstwmaterials.php.
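
    One way to picture "discriminating concepts that differentiate gene mentions from other mentions" is a frequency-based contrast between a target collection and a background collection. The sketch below uses a simple smoothed log-odds score on whitespace tokens; it is a stand-in for, not a description of, BeeSpace's semantic indexing, and the example text is invented.

```python
# Smoothed log-odds contrast between a target collection (documents mentioning
# the gene list) and a background collection; invented example text.
import math
from collections import Counter

def discriminating_terms(target_docs, background_docs, top_n=5):
    t, b = Counter(), Counter()
    for doc in target_docs:
        t.update(doc.lower().split())
    for doc in background_docs:
        b.update(doc.lower().split())
    t_total, b_total = sum(t.values()), sum(b.values())
    score = {w: math.log((t[w] + 0.5) / t_total) - math.log((b[w] + 0.5) / b_total) for w in t}
    return sorted(score, key=score.get, reverse=True)[:top_n]

target = ["foraging behavior regulated by the foraging gene", "foraging and division of labor"]
background = ["gene expression in the brain", "protein structure analysis", "division of cells"]
print(discriminating_terms(target, background))
```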

  6. "Library Quarterly," 1956-2004: An Exploratory Bibliometric Analysis

    ERIC Educational Resources Information Center

    Young, Arthur P.

    2006-01-01

    "Library Quarterly's" seventy-fifth anniversary invites an analysis of the journal's bibliometric dimension, including contributor attributes, various author rankings, and citation impact. Eugene Garfield's HistCite software, linked to Thomson Scientific's Web of Science, as made available by Garfield, for the period 1956-2004, was used as the…

  7. Billiards in the Classroom: Learning Physics with Microworlds.

    ERIC Educational Resources Information Center

    Bertz, Michael D.

    1997-01-01

    Trickshot! is an exploratory environment that allows learners to experiment with various physical properties to develop an intuitive understanding of the behavior of objects in physical systems. The software is geared to secondary students with little exposure to pool or physics. When used in conjunction with meaningful class activities, such an…

  8. Information Loss: Exploring the Information Systems Management's Neglect Affecting Softcopy Reproduction of Heritage-Data

    ERIC Educational Resources Information Center

    Oskooie, Kamran Rezai

    2012-01-01

    This exploratory mixed methods study quantified and explored leadership interest in legacy-data conversion and information processing. Questionnaires were administered electronically to 92 individuals in design, manufacturing, and other professions from the manufacturing, processing, Internet, computing, software and technology divisions. Research…

  9. Teaching Visual Texts with the Multimodal Analysis Software

    ERIC Educational Resources Information Center

    Lim Fei, Victor; O'Halloran, Kay L.; Tan, Sabine; E., Marissa K. L.

    2015-01-01

    This exploratory study introduces the systemic approach and the explicit teaching of a meta-language to provide conceptual tools for students for the analysis and interpretation of multimodal texts. Equipping students with a set of specialised vocabulary with conventionalised meanings associated with specific choices in multimodal texts empowers…

  10. Computers on Wheels.

    ERIC Educational Resources Information Center

    Rosemead Elementary School District, CA.

    How does a school provide the computer learning experiences for students given the paucity of available funding for hardware, software, and staffing? Here is what one school, Emma W. Shuey in Rosemead, did after exploratory research on computers by a committee of teachers and administrators. The…

  11. Networking Labs in the Online Environment: Indicators for Success

    ERIC Educational Resources Information Center

    Lahoud, Hilmi A.; Krichen, Jack P.

    2010-01-01

    Several techniques have been used to provide hands-on educational experiences to online learners, including remote labs, simulation software, and virtual labs, which offer a more structured environment, including simulations and scheduled asynchronous access to physical resources. This exploratory study investigated how these methods can be used…

  12. Exploratory research for the development of a computer aided software design environment with the software technology program

    NASA Technical Reports Server (NTRS)

    Hardwick, Charles

    1991-01-01

    Field studies were conducted by MCC to determine areas of research of mutual interest to MCC and JSC. NASA personnel from the Information Systems Directorate and research faculty from UHCL/RICIS visited MCC in Austin, Texas to examine tools and applications under development in the MCC Software Technology Program. MCC personnel presented workshops in hypermedia, design knowledge capture, and design recovery on site at JSC for ISD personnel. The following programs were installed on workstations in the Software Technology Lab, NASA/JSC: (1) GERM (Graphic Entity Relations Modeler); (2) gIBIS (Graphic Issues Based Information System); and (3) DESIRE (Design Recovery tool). These applications were made available to NASA for inspection and evaluation. Programs developed in the MCC Software Technology Program run on the SUN workstation. The programs do not require special configuration, but they will require larger than usual amounts of disk space and RAM to operate properly.

  13. Impairment of exploratory behavior and spatial memory in adolescent rats in lithium-pilocarpine model of temporal lobe epilepsy.

    PubMed

    Kalemenev, S V; Zubareva, O E; Frolova, E V; Sizov, V V; Lavrentyeva, V V; Lukomskaya, N Ya; Kim, K Kh; Zaitsev, A V; Magazanik, L G

    2015-01-01

    Cognitive impairment in six-week-old rats has been studied in the lithium-pilocarpine model of adolescent temporal lobe epilepsy in humans. The pilocarpine-treated rats (n = 21) exhibited (a) a decreased exploratory activity in comparison with control rats (n = 20) in the open field (OP) test and (b) a slower extinction of exploratory behavior in repeated OP tests. The Morris Water Maze (MWM) test showed that the effect of training was less pronounced in the pilocarpine-treated rats, which demonstrated disruption of predominantly short-term memory. Therefore, our study has shown that lithium-pilocarpine seizures induce substantial changes in exploratory behavior and spatial memory in adolescent rats. OP and MWM tests can be used in the search for drugs reducing cognitive impairments associated with temporal lobe epilepsy.

  14. Exploratory modeling of forest disturbance scenarios in central Oregon using computational experiments in GIS

    Treesearch

    Deana D. Pennington

    2007-01-01

    Exploratory modeling is an approach used when process and/or parameter uncertainties are such that modeling attempts at realistic prediction are not appropriate. Exploratory modeling makes use of computational experimentation to test how varying model scenarios drive model outcome. The goal of exploratory modeling is to better understand the system of interest through...

  15. Software for Displaying Data from Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Powell, Mark; Backers, Paul; Norris, Jeffrey; Vona, Marsette; Steinke, Robert

    2003-01-01

    Science Activity Planner (SAP) DownlinkBrowser is a computer program that assists in the visualization of processed telemetric data [principally images, image cubes (that is, multispectral images), and spectra] that have been transmitted to Earth from exploratory robotic vehicles (rovers) on remote planets. It is undergoing adaptation to (1) the Field Integrated Design and Operations (FIDO) rover (a prototype Mars-exploration rover operated on Earth as a test bed) and (2) the Mars Exploration Rover (MER) mission. This program has evolved from its predecessor - the Web Interface for Telescience (WITS) software - and surpasses WITS in the processing, organization, and plotting of data. SAP DownlinkBrowser creates Extensible Markup Language (XML) files that organize data files, on the basis of content, into a sortable, searchable product database, without the overhead of a relational database. The data-display components of SAP DownlinkBrowser (descriptively named ImageView, 3DView, OrbitalView, PanoramaView, ImageCubeView, and SpectrumView) are designed to run in a memory footprint of at least 256MB on computers that utilize the Windows, Linux, and Solaris operating systems.
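
    The XML product index mentioned above can be pictured with a few lines of Python; the element names, attributes, and instrument labels here are invented, not the SAP schema.

```python
# Writing and querying a small XML product index without a relational database;
# element names, attributes, and instrument labels are invented, not the SAP schema.
import xml.etree.ElementTree as ET

products = [
    {"id": "img_0001", "type": "image", "sol": "12", "instrument": "camera_a"},
    {"id": "cube_0003", "type": "image_cube", "sol": "12", "instrument": "camera_a"},
    {"id": "spec_0042", "type": "spectrum", "sol": "13", "instrument": "spectrometer_b"},
]

root = ET.Element("products")
for p in products:
    ET.SubElement(root, "product", attrib=p)      # one element per downlink product

ET.ElementTree(root).write("product_index.xml", xml_declaration=True, encoding="utf-8")

# Sorting and searching then happen on the parsed tree rather than in a database
sol12 = [e.get("id") for e in root.iter("product") if e.get("sol") == "12"]
print(sol12)
```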

  16. Online Software Applications for Learning: Observations from an Elementary School

    ERIC Educational Resources Information Center

    Tay, Lee Yong; Lim, Cher Ping; Nair, Shanthi Suraj; Lim, Siew Khiaw

    2014-01-01

    This exploratory case study research describes the integration of Information Communication Technology (ICT) into the teaching and learning of English, mathematics and science in an elementary school in Singapore. The school in this case study research is one of the first primary-level future schools that was set up under the…

  17. Computational Modelling and Children's Expressions of Signal and Noise

    ERIC Educational Resources Information Center

    Ainley, Janet; Pratt, Dave

    2017-01-01

    Previous research has demonstrated how young children can identify the signal in data. In this exploratory study we considered how they might also express meanings for noise when creating computational models using recent developments in software tools. We conducted extended clinical interviews with four groups of 11-year-olds and analysed the…

  18. Children as Educational Computer Game Designers: An Exploratory Study

    ERIC Educational Resources Information Center

    Baytak, Ahmet; Land, Susan M.; Smith, Brian K.

    2011-01-01

    This study investigated how children designed computer games as artifacts that reflected their understanding of nutrition. Ten 5th grade students were asked to design computer games with the software "Game Maker" for the purpose of teaching 1st graders about nutrition. The results from the case study show that students were able to…

  19. Surface-Roughness-Based Virtual Textiles: Evaluation Using a Multi-Contactor Display.

    PubMed

    Philpott, Matthew; Summers, Ian R

    2015-01-01

    Virtual textiles, generated in response to exploratory movements, are presented to the fingertip via a 24-contactor vibrotactile array. Software models are based on surface-roughness profiles from real textiles. Results suggest that distinguishable "textile-like" surfaces are produced, but these lack the necessary accuracy for reliable matching to real textiles.

  20. Arc_Mat: a Matlab-based spatial data analysis toolbox

    NASA Astrophysics Data System (ADS)

    Liu, Xingjian; Lesage, James

    2010-03-01

    This article presents an overview of Arc_Mat, a Matlab-based spatial data analysis software package whose source code has been placed in the public domain. An earlier version of the Arc_Mat toolbox was developed to extract map polygon and database information from ESRI shapefiles and provide high quality mapping in the Matlab software environment. We discuss revisions to the toolbox that: utilize enhanced computing and graphing capabilities of more recent versions of Matlab, restructure the toolbox with object-oriented programming features, and provide more comprehensive functions for spatial data analysis. The Arc_Mat toolbox functionality includes basic choropleth mapping; exploratory spatial data analysis that provides exploratory views of spatial data through various graphs, for example, histogram, Moran scatterplot, three-dimensional scatterplot, density distribution plot, and parallel coordinate plots; and more formal spatial data modeling that draws on the extensive Spatial Econometrics Toolbox functions. A brief review of the design aspects of the revised Arc_Mat is described, and we provide some illustrative examples that highlight representative uses of the toolbox. Finally, we discuss programming with and customizing the Arc_Mat toolbox functionalities.
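
    Arc_Mat is Matlab code, but one of the exploratory views it offers - the Moran scatterplot - is easy to sketch in Python for comparison; the neighbour matrix and data below are synthetic.

```python
# Moran scatterplot on synthetic data: each region's standardised value against
# the weighted mean of its neighbours; the slope corresponds to Moran's I for a
# row-standardised weight matrix.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
n = 50
W = np.triu(rng.random((n, n)) < 0.1, 1)             # random symmetric neighbour structure
W = (W | W.T).astype(float)
W = W / np.maximum(W.sum(axis=1, keepdims=True), 1)  # row-standardise

x = rng.normal(size=n)
z = (x - x.mean()) / x.std()
lag = W @ z                                          # spatial lag of standardised values
moran_i = (z @ lag) / (z @ z)

plt.scatter(z, lag)
plt.axhline(0.0); plt.axvline(0.0)
plt.xlabel("standardised value"); plt.ylabel("spatial lag")
plt.title(f"Moran scatterplot (I = {moran_i:.2f})")
plt.show()
```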

  1. [Social representations of elders' quality of life].

    PubMed

    Silva, Luípa Michele; Silva, Antonia Oliveira; Tura, Luiz Fernando Rangel; Moreira, Maria Adelaide Silva Paredes; Rodrigues, Rosalina Aparecida Partezani; Marques, Maria do Céu

    2012-03-01

This study aimed to identify elders' social representations of quality of life. This is an exploratory study with a sample of 240 elders of both sexes. Data were collected with a Free Association Test with Words, using the inductive stimulus 'quality of life', together with sociodemographic variables. The interviews were analyzed with the software Alceste. Of the 240 elders studied, 167 were women; the dominant age range was 60 to 69 years, income was between two and three minimum wages, most were married, and Catholicism was the predominant religion. The results from Alceste pointed towards seven hierarchical classes: accessibility, work, activity, support, affection, care and interactions. Elders' social representations of quality of life can support professionals in understanding adherence to preventive practices for the elderly and in strengthening policies directed to this population.

  2. Astronomy Data Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2015-08-01

We present innovative methods and techniques for using Blender, a 3D software package, in the visualization of astronomical data. N-body simulations, data cubes, galaxy and stellar catalogs, and planetary surface maps can be rendered in high-quality videos for exploratory data analysis. Blender's API is Python-based, making it advantageous for use in astronomy with flexible libraries like astroPy. Examples will be exhibited that showcase the features of the software in astronomical visualization paradigms. 2D and 3D voxel texture applications, animations, camera movement, and composite renders are introduced to the astronomer's toolkit, along with how they mesh with different forms of data.
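
    A hedged sketch of the kind of Blender Python (bpy) scripting described above: placing a small hypothetical catalog into a scene as spheres and rendering a still. The catalog values and output path are invented, and operator keyword names can differ slightly between Blender releases, so treat this as illustrative rather than exact.

    ```python
    # Minimal bpy sketch (run inside Blender): one sphere per catalog source.
    import bpy

    catalog = [(-2.0, 1.0, 0.5), (0.0, 0.0, 0.0), (1.5, -1.0, 2.0)]  # hypothetical positions

    for x, y, z in catalog:
        # keyword is 'radius' in recent Blender releases; older versions used 'size'
        bpy.ops.mesh.primitive_uv_sphere_add(location=(x, y, z), radius=0.1)

    # Render the current scene to a still image (output path is a placeholder).
    bpy.context.scene.render.filepath = "/tmp/catalog_render.png"
    bpy.ops.render.render(write_still=True)
    ```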

  3. Design Considerations of Help Options in Computer-Based L2 Listening Materials Informed by Participatory Design

    ERIC Educational Resources Information Center

    Cárdenas-Claros, Mónica Stella

    2015-01-01

    This paper reports on the findings of two qualitative exploratory studies that sought to investigate design features of help options in computer-based L2 listening materials. Informed by principles of participatory design, language learners, software designers, language teachers, and a computer programmer worked collaboratively in a series of…

  4. Alcohol, Tobacco and Other Drugs: College Student Satisfaction with an Interactive Educational Software Program

    ERIC Educational Resources Information Center

    Rotunda, Rob J.; West, Laura; Epstein, Joel

    2003-01-01

    Alcohol and drug use education and prevention continue to be core educational issues. In seeking to inform students at all levels about drug use, the present exploratory study highlights the potential educational use of interactive computer programs for this purpose. Seventy-three college students from two substance abuse classes interacted for at…

  5. Turnitin[R]: The Student Perspective on Using Plagiarism Detection Software

    ERIC Educational Resources Information Center

    Dahl, Stephan

    2007-01-01

    Recently there has been an increasing interest in plagiarism detection systems, such as the web-based Turnitin system. However, no study has so far tried to look at how students react towards those systems being used. This exploratory study examines the attitudes of students on a postgraduate module after using Turnitin as their standard way of…

  6. The Influence of Reflection on Employee Psychological Empowerment: Report of an Exploratory Workplace Field Study

    ERIC Educational Resources Information Center

    Cyboran, Vincent L.

    2005-01-01

    The study examined the influences of reflection on the self-perception of empowerment in the workplace. The convenience sample consisted of non-management knowledge workers at a software company headquartered in the United States. A pretest, posttest control group design was used. The experimental group kept guided journals of their learning…

  7. US corn and soybeans exploratory experiment

    NASA Technical Reports Server (NTRS)

    Carnes, J. G. (Principal Investigator)

    1981-01-01

    The results from the U.S. corn/soybeans exploratory experiment which was completed during FY 1980 are summarized. The experiment consisted of two parts: the classification procedures verification test and the simulated aggregation test. Evaluations of labeling, proportion estimation, and aggregation procedures are presented.

  8. Corticosterone level and central dopaminergic activity involved in agile and exploratory behaviours in formosan wood mice (Apodemus semotus).

    PubMed

    Shieh, Kun-Ruey; Yang, Shu-Chuan

    2018-03-27

The native Formosan wood mouse (Apodemus semotus) is the dominant rodent in Taiwan. In their natural environment, Formosan wood mice exhibit high locomotor activity, including searching and exploratory behaviours, and similar activity is observed in the laboratory. How Formosan wood mice behave in the elevated plus maze and marble burying tests remains unclear, as does how corticosterone levels and central dopaminergic activities relate to behaviour in these tests. This study compared the behaviours of Formosan wood mice with those of C57BL/6J mice using the elevated plus maze and marble burying tests, and measured corticosterone levels and central dopaminergic activities. Formosan wood mice showed greater locomotor and exploratory activity than the C57BL/6J mice. Similarly, marble burying and rearing numbers were higher for Formosan wood mice. High locomotor and exploratory behaviours were strongly correlated with corticosterone levels after acute mild restraint stress in Formosan wood mice. The anxiolytic diazepam reduced the high exploratory activity, corticosterone levels and central dopaminergic activities. The high locomotor and exploratory behaviours of Formosan wood mice are thus related to corticosterone levels and central dopaminergic activities. These data may help explain the dominance of Formosan wood mice at intermediate altitudes in Taiwan.

  9. Testing of technology readiness index model based on exploratory factor analysis approach

    NASA Astrophysics Data System (ADS)

    Ariani, AF; Napitupulu, D.; Jati, RK; Kadar, JA; Syafrullah, M.

    2018-04-01

SMEs' readiness to use ICT will determine their adoption of ICT in the future. This study aims to evaluate a model of technology readiness in order to apply the technology to SMEs. The model is tested to find whether the TRI model is relevant for measuring ICT adoption, especially for SMEs in Indonesia. The research method used in this paper is a survey of a group of SMEs in South Tangerang. The survey measures readiness to adopt ICT based on four variables: Optimism, Innovativeness, Discomfort, and Insecurity. Each variable contains several indicators to make sure the variable is measured thoroughly. The data collected through the survey were analysed using the exploratory factor analysis method with the help of SPSS software. The results of this study show that the TRI model yields additional descendants (sub-components) for some indicators and variables. This may be because SME owners' knowledge is not homogeneous, either about the technology they use or about the type of their business.

  10. Evaluation Study of the Exploratory Visit: An Innovative Outreach Activity of the ILGWU's Friendly Visiting Program

    ERIC Educational Resources Information Center

    Wright, Holly; And Others

    1977-01-01

    The exploratory visit to recent retirees, an outreach component of the International Ladies Garment Workers Union Friendly Visiting Program, was evaluated. A post-test only control group effect study revealed exploratory visits were effective in establishing a link between the program and the retiree. (Author)

  11. Dubbing Projects for the Language Learner: A Framework for Integrating Audiovisual Translation into Task-Based Instruction

    ERIC Educational Resources Information Center

    Danan, Martine

    2010-01-01

    This article describes a series of exploratory L1 to L2 dubbing projects for which students translated and used editing software to dub short American film and TV clips into their target language. Translating and dubbing into the target language involve students in multifaceted, high-level language production tasks that lead to enhanced vocabulary…

  12. Using Lexical Profiling Tools to Investigate Children's Written Vocabulary in Grade 3: An Exploratory Study

    ERIC Educational Resources Information Center

    Roessingh, Hetty; Elgie, Susan; Kover, Pat

    2015-01-01

    Research in the study of students' writing concludes that vocabulary use is a key variable in determining the holistic quality of the writing. In the present study, 77 writing samples from a mixed group of Grade 3 children were analyzed for features of linguistic diversity using public domain vocabulary-profiling software. The writing was also…

  13. The Use of a Metacognitive Tool in an Online Social Supportive Learning Environment: An Activity Theory Analysis

    ERIC Educational Resources Information Center

    Martinez, Ray Earl

    2010-01-01

    This investigation is an exploratory study of the use of a metacognitive software tool in a social supportive learning environment. The tool combined metacognitive knowledge and regulation functionality embedded within the content of an eight week online graduate education course. Twenty-three learners, who were practicing teachers, used the tool.…

  14. Virtual Classroom versus Physical Classroom: An Exploratory Study of Class Discussion Patterns and Student Learning in an Asynchronous Internet-Based MBA Course.

    ERIC Educational Resources Information Center

    Arbaugh, J. B.

    2000-01-01

    Class discussions and student interaction were compared in a conventional class (n=33) and an Internet-based class using LearningSpace(R) software (n=29). No significant differences in learning or interaction quality were found. There was significantly more participation in the Internet course, particularly by women. (SK)

  15. Driving simulation in the clinic: testing visual exploratory behavior in daily life activities in patients with visual field defects.

    PubMed

    Hamel, Johanna; Kraft, Antje; Ohl, Sven; De Beukelaer, Sophie; Audebert, Heinrich J; Brandt, Stephan A

    2012-09-18

Patients suffering from homonymous hemianopia after infarction of the posterior cerebral artery (PCA) report different degrees of constraint in daily life, despite similar visual deficits. We assume this could be due to variable development of compensatory strategies such as altered visual scanning behavior. Scanning compensatory therapy (SCT) is studied as part of visual training after infarction, alongside vision restoration therapy. SCT consists of learning to make larger eye movements into the blind field, enlarging the visual field of search, which has been proven to be the most useful strategy(1), not only in natural search tasks but also in mastering daily life activities(2). Nevertheless, in clinical routine it is difficult to identify individual levels and training effects of compensatory behavior, since it requires measurement of eye movements in a head-unrestrained condition. Studies demonstrated that unrestrained head movements alter visual exploratory behavior compared to a head-restrained laboratory condition(3). Martin et al.(4) and Hayhoe et al.(5) showed that behavior demonstrated in a laboratory setting cannot be assigned easily to a natural condition. Hence, our goal was to develop a study set-up which uncovers different compensatory oculomotor strategies quickly in a realistic testing situation: patients are tested in the clinical environment in a driving simulator. SILAB software (Wuerzburg Institute for Traffic Sciences GmbH (WIVW)) was used to program driving scenarios of varying complexity and record the driver's performance. The software was combined with a head-mounted infrared video pupil tracker recording head- and eye-movements (EyeSeeCam, University of Munich Hospital, Clinical Neurosciences). The positioning of the patient in the driving simulator and the positioning, adjustment and calibration of the camera are demonstrated. Typical performances of a patient with and without a compensatory strategy and of a healthy control are illustrated in this pilot study. Different oculomotor behaviors (frequency and amplitude of eye- and head-movements) are evaluated very quickly during the drive itself by dynamic overlay pictures indicating where the subject's gaze is located on the screen, and by analyzing the data. Compensatory gaze behavior in a patient leads to a driving performance comparable to that of a healthy control, while the performance of a patient without compensatory behavior is significantly worse. The data on eye- and head-movement behavior as well as driving performance are discussed with respect to different oculomotor strategies and, in a broader context, with respect to possible training effects throughout the testing session and implications for rehabilitation potential.

  16. Comprehensive Adult Student Assessment Systems Braille Reading Assessment: An Exploratory Study

    ERIC Educational Resources Information Center

    Posey, Virginia K.; Henderson, Barbara W.

    2012-01-01

    Introduction: This exploratory study determined whether transcribing selected test items on an adult life and work skills reading test into braille could maintain the same approximate scale-score range and maintain fitness within the item response theory model as used by the Comprehensive Adult Student Assessment Systems (CASAS) for developing…

  17. An Exploratory Study of Listening Practice Relative to Memory Testing and Lecture in Business Administration Courses

    ERIC Educational Resources Information Center

    Peterson, Robin T.

    2007-01-01

    This study investigates the combined impact of a memory test and subsequent listening practice in enhancing student listening abilities in collegiate business administration courses. The article reviews relevant literature and describes an exploratory study that was undertaken to compare the effectiveness of this technique with traditional…

  18. Academic and Personal Development through Group Work: An Exploratory Study

    ERIC Educational Resources Information Center

    Steen, Sam

    2011-01-01

    This exploratory study linked academic and personal development within a group counseling intervention. A pre-test post-test research design compared social skills, learning behaviors, and achievement with a convenience sample and control group of students from three elementary schools. For the treatment group, grade point average in Language Arts…

  19. Learning, memory and exploratory similarities in genetically identical cloned dogs.

    PubMed

    Shin, Chi Won; Kim, Geon A; Park, Won Jun; Park, Kwan Yong; Jeon, Jeong Min; Oh, Hyun Ju; Kim, Min Jung; Lee, Byeong Chun

    2016-12-30

Somatic cell nuclear transfer allows generation of genetically identical animals using donor cells derived from animals with particular traits. To date, few studies have investigated whether or not cloned dogs show identical behavior patterns. To address this question, learning, memory and exploratory patterns were examined using six cloned dogs with identical nuclear genomes. The variance of the total number of incorrect choices in the Y-maze test among cloned dogs was significantly lower than that of the control dogs. There was also a significant decrease in variance in the level of exploratory activity in the open field test compared to age-matched control dogs. These results indicate that cloned dogs show similar cognitive and exploratory patterns, suggesting that these behavioral phenotypes are related to the genotypes of the individuals.

  20. AgRISTARS: Foreign commodity production forecasting. The 1980 US corn and soybeans exploratory experiment

    NASA Technical Reports Server (NTRS)

    Malin, J. T.; Carnes, J. G. (Principal Investigator)

    1981-01-01

    The U.S. corn and soybeans exploratory experiment is described which consisted of evaluations of two technology components of a production forecasting system: classification procedures (crop labeling and proportion estimation at the level of a sampling unit) and sampling and aggregation procedures. The results from the labeling evaluations indicate that the corn and soybeans labeling procedure works very well in the U.S. corn belt with full season (after tasseling) LANDSAT data. The procedure should be readily adaptable to corn and soybeans labeling required for subsequent exploratory experiments or pilot tests. The machine classification procedures evaluated in this experiment were not effective in improving the proportion estimates. The corn proportions produced by the machine procedures had a large bias when the bias correction was not performed. This bias was caused by the manner in which the machine procedures handled spectrally impure pixels. The simulation test indicated that the weighted aggregation procedure performed quite well. Although further work can be done to improve both the simulation tests and the aggregation procedure, the results of this test show that the procedure should serve as a useful baseline procedure in future exploratory experiments and pilot tests.

  1. Involvement of the cholinergic system of CA1 on harmane-induced amnesia in the step-down passive avoidance test.

    PubMed

    Nasehi, Mohammad; Sharifi, Shahrbano; Zarrindast, Mohammad Reza

    2012-08-01

    β-carboline alkaloids such as harmane (HA) are naturally present in the human food chain. They are derived from the plant Peganum harmala and have many cognitive effects. In the present study, effects of the nicotinic system of the dorsal hippocampus (CA1) on HA-induced amnesia and exploratory behaviors were examined. One-trial step-down and hole-board paradigms were used to assess memory retention and exploratory behaviors in adult male mice. Pre-training (15 mg/kg) but not pre-testing intraperitoneal (i.p.) administration of HA decreased memory formation but did not alter exploratory behaviors. Moreover, pre-testing administration of nicotine (0.5 µg/mouse, intra-CA1) decreased memory retrieval, but induced anxiogenic-like behaviors. On the other hand, pre-test intra-CA1 injection of ineffective doses of nicotine (0.1 and 0.25 µg/mouse) fully reversed HA-induced impairment of memory after pre-training injection of HA (15 mg/kg, i.p.) which did not alter exploratory behaviors. Furthermore, pre-testing administration of mecamylamine (0.5, 1 and 2 µg/mouse, intra-CA1) did not alter memory retrieval but fully reversed HA-induced impairment of memory after pre-training injection of HA (15 mg/kg, i.p.) which had no effect on exploratory behaviors. In conclusion, the present findings suggest the involvement of the nicotinic cholinergic system in the HA-induced impairment of memory formation.

  2. Sociotechnical Human Factors Involved in Remote Online Usability Testing of Two eHealth Interventions.

    PubMed

    Wozney, Lori M; Baxter, Pamela; Fast, Hilary; Cleghorn, Laura; Hundert, Amos S; Newton, Amanda S

    2016-02-03

Research in the fields of human performance technology and human-computer interaction is challenging the traditional macro focus of usability testing, arguing for methods that help test moderators assess "use in context" (ie, cognitive skills, usability understood over time) and in authentic "real world" settings. Human factors in these complex test scenarios may impact the quality of usability results being derived, yet there is a lack of research detailing moderator experiences in these test environments. Most comparative research has focused on the impact of the physical environment on results, and rarely on how the sociotechnical elements of the test environment affect moderator and test user performance. Improving our understanding of moderator roles and experiences with conducting "real world" usability testing can lead to improved techniques and strategies. The objective of this study was to understand moderator experiences of using Web-conferencing software to conduct remote usability testing of 2 eHealth interventions. An exploratory case study approach was used to study 4 moderators' experiences using Blackboard Collaborate for remote testing sessions of 2 different eHealth interventions. Data collection involved audio-recording iterative cycles of test sessions, collecting summary notes taken by moderators, and conducting two 90-minute focus groups via teleconference. A direct content analysis with an inductive coding approach was used to explore personal accounts, assess the credibility of data interpretation, and generate consensus on the thematic structure of the results. Following the convergence of data from the various sources, 3 major themes were identified: (1) moderators experienced and adapted to unpredictable changes in cognitive load during testing; (2) moderators experienced challenges in creating and sustaining social presence and untangling dialogue; and (3) moderators experienced diverse technical demands, but were able to collaboratively troubleshoot with test users. Results highlight important human-computer interactions and human factor qualities that impact usability testing processes. Moderators need an advanced skill and knowledge set to address the social interaction aspects of Web-based usability testing and technical aspects of conferencing software during test sessions. Findings from moderator-focused studies can inform the design of remote testing platforms and real-time usability evaluation processes that place less cognitive burden on moderators and test users.

  3. Software For Integer Programming

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1992-01-01

    Improved Exploratory Search Technique for Pure Integer Linear Programming Problems (IESIP) program optimizes objective function of variables subject to confining functions or constraints, using discrete optimization or integer programming. Enables rapid solution of problems up to 10 variables in size. Integer programming required for accuracy in modeling systems containing small number of components, distribution of goods, scheduling operations on machine tools, and scheduling production in general. Written in Borland's TURBO Pascal.
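
    IESIP itself is written in Turbo Pascal and its code is not reproduced here; the toy below only illustrates, in Python, the kind of small pure-integer linear program the abstract refers to, solved by exhaustive search over a modest integer grid. The objective and constraints are invented for illustration.

    ```python
    # Toy pure-integer linear program: maximize 3*x1 + 2*x2
    # subject to 2*x1 + x2 <= 10 and x1 + 3*x2 <= 15, with x1, x2 >= 0 integer.
    from itertools import product

    best_val, best_sol = float("-inf"), None
    for x1, x2 in product(range(0, 11), repeat=2):      # brute force over the integer grid
        if 2 * x1 + x2 <= 10 and x1 + 3 * x2 <= 15:     # feasibility check
            val = 3 * x1 + 2 * x2
            if val > best_val:
                best_val, best_sol = val, (x1, x2)

    print(best_sol, best_val)   # (3, 4) with objective 17
    ```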

  4. Attitudes Expressed in Online Comments about Environmental Factors in the Tourism Sector: An Exploratory Study.

    PubMed

    Saura, Jose Ramon; Palos-Sanchez, Pedro; Rios Martin, Miguel Angel

    2018-03-19

The object of this exploratory study is to identify the positive, neutral and negative environment factors that affect users who visit Spanish hotels in order to help the hotel managers decide how to improve the quality of the services provided. To carry out the research, a Sentiment Analysis was initially performed, grouping the sample of tweets (n = 14459) according to the feelings shown, and then a textual analysis was used to identify the key environment factors in these feelings using the qualitative analysis software Nvivo (QSR International, Melbourne, Australia). The results of the exploratory study present the key environment factors that affect users' experience when visiting hotels in Spain, such as actions that support local traditions and products, the maintenance of rural areas respecting the local environment and nature, or respecting air quality in the areas where hotels have facilities and offer services. The conclusions of the research can help hotels improve their services and the impact on the environment, as well as improving visitors' experience based on the positive, neutral and negative environment factors which the visitors themselves identified.

  5. Attitudes Expressed in Online Comments about Environmental Factors in the Tourism Sector: An Exploratory Study

    PubMed Central

    2018-01-01

The object of this exploratory study is to identify the positive, neutral and negative environment factors that affect users who visit Spanish hotels in order to help the hotel managers decide how to improve the quality of the services provided. To carry out the research, a Sentiment Analysis was initially performed, grouping the sample of tweets (n = 14459) according to the feelings shown, and then a textual analysis was used to identify the key environment factors in these feelings using the qualitative analysis software Nvivo (QSR International, Melbourne, Australia). The results of the exploratory study present the key environment factors that affect users' experience when visiting hotels in Spain, such as actions that support local traditions and products, the maintenance of rural areas respecting the local environment and nature, or respecting air quality in the areas where hotels have facilities and offer services. The conclusions of the research can help hotels improve their services and the impact on the environment, as well as improving visitors' experience based on the positive, neutral and negative environment factors which the visitors themselves identified. PMID:29562724

  6. SEURAT: visual analytics for the integrated analysis of microarray data.

    PubMed

    Gribov, Alexander; Sill, Martin; Lück, Sonja; Rücker, Frank; Döhner, Konstanze; Bullinger, Lars; Benner, Axel; Unwin, Antony

    2010-06-03

    In translational cancer research, gene expression data is collected together with clinical data and genomic data arising from other chip based high throughput technologies. Software tools for the joint analysis of such high dimensional data sets together with clinical data are required. We have developed an open source software tool which provides interactive visualization capability for the integrated analysis of high-dimensional gene expression data together with associated clinical data, array CGH data and SNP array data. The different data types are organized by a comprehensive data manager. Interactive tools are provided for all graphics: heatmaps, dendrograms, barcharts, histograms, eventcharts and a chromosome browser, which displays genetic variations along the genome. All graphics are dynamic and fully linked so that any object selected in a graphic will be highlighted in all other graphics. For exploratory data analysis the software provides unsupervised data analytics like clustering, seriation algorithms and biclustering algorithms. The SEURAT software meets the growing needs of researchers to perform joint analysis of gene expression, genomical and clinical data.
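
    A rough Python sketch (not SEURAT itself) of the unsupervised step mentioned above: hierarchically clustering a synthetic expression matrix and reordering its rows and columns the way a linked dendrogram/heatmap view would. The data are random and stand in for a real gene expression matrix.

    ```python
    # Cluster rows (genes) and columns (samples), then reorder the matrix for display.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, leaves_list

    rng = np.random.default_rng(0)
    expr = rng.normal(size=(30, 8))                               # 30 genes x 8 samples, synthetic

    gene_order = leaves_list(linkage(expr, method="average"))     # dendrogram order for rows
    sample_order = leaves_list(linkage(expr.T, method="average")) # dendrogram order for columns

    heatmap = expr[np.ix_(gene_order, sample_order)]              # matrix reordered by both trees
    print(heatmap.shape, gene_order[:5])
    ```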

  7. PAPST, a User Friendly and Powerful Java Platform for ChIP-Seq Peak Co-Localization Analysis and Beyond.

    PubMed

    Bible, Paul W; Kanno, Yuka; Wei, Lai; Brooks, Stephen R; O'Shea, John J; Morasso, Maria I; Loganantharaj, Rasiah; Sun, Hong-Wei

    2015-01-01

    Comparative co-localization analysis of transcription factors (TFs) and epigenetic marks (EMs) in specific biological contexts is one of the most critical areas of ChIP-Seq data analysis beyond peak calling. Yet there is a significant lack of user-friendly and powerful tools geared towards co-localization analysis based exploratory research. Most tools currently used for co-localization analysis are command line only and require extensive installation procedures and Linux expertise. Online tools partially address the usability issues of command line tools, but slow response times and few customization features make them unsuitable for rapid data-driven interactive exploratory research. We have developed PAPST: Peak Assignment and Profile Search Tool, a user-friendly yet powerful platform with a unique design, which integrates both gene-centric and peak-centric co-localization analysis into a single package. Most of PAPST's functions can be completed in less than five seconds, allowing quick cycles of data-driven hypothesis generation and testing. With PAPST, a researcher with or without computational expertise can perform sophisticated co-localization pattern analysis of multiple TFs and EMs, either against all known genes or a set of genomic regions obtained from public repositories or prior analysis. PAPST is a versatile, efficient, and customizable tool for genome-wide data-driven exploratory research. Creatively used, PAPST can be quickly applied to any genomic data analysis that involves a comparison of two or more sets of genomic coordinate intervals, making it a powerful tool for a wide range of exploratory genomic research. We first present PAPST's general purpose features then apply it to several public ChIP-Seq data sets to demonstrate its rapid execution and potential for cutting-edge research with a case study in enhancer analysis. To our knowledge, PAPST is the first software of its kind to provide efficient and sophisticated post peak-calling ChIP-Seq data analysis as an easy-to-use interactive application. PAPST is available at https://github.com/paulbible/papst and is a public domain work.
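
    PAPST is a Java GUI application; the snippet below is only a minimal Python illustration of the core comparison it automates, counting overlaps between two sets of genomic coordinate intervals. The peak coordinates are invented and assume a single chromosome.

    ```python
    # Count how many TF peaks overlap any interval in a second (e.g., epigenetic mark) set.
    def overlaps(a, b):
        """True if half-open intervals a=(start, end) and b overlap."""
        return a[0] < b[1] and b[0] < a[1]

    tf_peaks   = [(100, 150), (400, 480), (900, 960)]       # hypothetical TF ChIP-Seq peaks
    mark_peaks = [(120, 200), (700, 760), (950, 1000)]      # hypothetical epigenetic mark peaks

    co_localized = [p for p in tf_peaks if any(overlaps(p, q) for q in mark_peaks)]
    print(f"{len(co_localized)} of {len(tf_peaks)} TF peaks overlap the mark")  # 2 of 3
    ```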

  8. PAPST, a User Friendly and Powerful Java Platform for ChIP-Seq Peak Co-Localization Analysis and Beyond

    PubMed Central

    Bible, Paul W.; Kanno, Yuka; Wei, Lai; Brooks, Stephen R.; O’Shea, John J.; Morasso, Maria I.; Loganantharaj, Rasiah; Sun, Hong-Wei

    2015-01-01

    Comparative co-localization analysis of transcription factors (TFs) and epigenetic marks (EMs) in specific biological contexts is one of the most critical areas of ChIP-Seq data analysis beyond peak calling. Yet there is a significant lack of user-friendly and powerful tools geared towards co-localization analysis based exploratory research. Most tools currently used for co-localization analysis are command line only and require extensive installation procedures and Linux expertise. Online tools partially address the usability issues of command line tools, but slow response times and few customization features make them unsuitable for rapid data-driven interactive exploratory research. We have developed PAPST: Peak Assignment and Profile Search Tool, a user-friendly yet powerful platform with a unique design, which integrates both gene-centric and peak-centric co-localization analysis into a single package. Most of PAPST’s functions can be completed in less than five seconds, allowing quick cycles of data-driven hypothesis generation and testing. With PAPST, a researcher with or without computational expertise can perform sophisticated co-localization pattern analysis of multiple TFs and EMs, either against all known genes or a set of genomic regions obtained from public repositories or prior analysis. PAPST is a versatile, efficient, and customizable tool for genome-wide data-driven exploratory research. Creatively used, PAPST can be quickly applied to any genomic data analysis that involves a comparison of two or more sets of genomic coordinate intervals, making it a powerful tool for a wide range of exploratory genomic research. We first present PAPST’s general purpose features then apply it to several public ChIP-Seq data sets to demonstrate its rapid execution and potential for cutting-edge research with a case study in enhancer analysis. To our knowledge, PAPST is the first software of its kind to provide efficient and sophisticated post peak-calling ChIP-Seq data analysis as an easy-to-use interactive application. PAPST is available at https://github.com/paulbible/papst and is a public domain work. PMID:25970601

  9. On the Likelihood Ratio Test for the Number of Factors in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Hayashi, Kentaro; Bentler, Peter M.; Yuan, Ke-Hai

    2007-01-01

    In the exploratory factor analysis, when the number of factors exceeds the true number of factors, the likelihood ratio test statistic no longer follows the chi-square distribution due to a problem of rank deficiency and nonidentifiability of model parameters. As a result, decisions regarding the number of factors may be incorrect. Several…

  10. Exploratory Usability Testing of User Interface Options in LibGuides 2

    ERIC Educational Resources Information Center

    Thorngate, Sarah; Hoden, Allison

    2017-01-01

    Online research guides offer librarians a way to provide digital researchers with point-of-need support. If these guides are to support student learning well, it is critical that they provide an effective user experience. This article details the results of an exploratory comparison study that tested three key user interface options in LibGuides…

  11. "Exploratory experimentation" as a probe into the relation between historiography and philosophy of science.

    PubMed

    Schickore, Jutta

    2016-02-01

    This essay utilizes the concept "exploratory experimentation" as a probe into the relation between historiography and philosophy of science. The essay traces the emergence of the historiographical concept "exploratory experimentation" in the late 1990s. The reconstruction of the early discussions about exploratory experimentation shows that the introduction of the concept had unintended consequences: Initially designed to debunk philosophical ideas about theory testing, the concept "exploratory experimentation" quickly exposed the poverty of our conceptual tools for the analysis of experimental practice. Looking back at a number of detailed analyses of experimental research, we can now appreciate that the concept of exploratory experimentation is too vague and too elusive to fill the desideratum whose existence it revealed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Optimel: Software for selecting the optimal method

    NASA Astrophysics Data System (ADS)

    Popova, Olga; Popov, Boris; Romanov, Dmitry; Evseeva, Marina

Optimel, software for selecting the optimal method, automates the process of selecting a solution method from the domain of optimization methods. Optimel offers practical novelty: it saves time and money in exploratory studies whose objective is to select the most appropriate method for solving an optimization problem. It also offers theoretical novelty, because a new method of knowledge structuring was used to build the domain. The Optimel domain covers an extended set of methods and their properties, which makes it possible to identify the level of scientific studies, enhance the user's expertise, expand the prospects the user faces and open up new research objectives. Optimel can be used both in scientific research institutes and in educational institutions.

  13. Gray scale enhances display readability of bitmapped documents

    NASA Astrophysics Data System (ADS)

    Ostberg, Olov; Disfors, Dennis; Feng, Yingduo

    1994-05-01

Bitmapped images of high resolution, say 300 dpi rastered documents, stored in the memory of a PC are at best only borderline readable on the PC's display screen (say a 72 dpi VGA monitor). Results from a series of exploratory psycho-physical experiments, using the Adobe Photoshop software, show that the readability can be significantly enhanced by making use of the monitor's capability to display shades of gray. It is suggested that such a gray-scale adaptation module should be bundled with all software products for electronic document management. In fact, fax modems are already available in which this principle is employed, thereby making it possible to read incoming fax documents directly on the screen.
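
    A small sketch of the adaptation the experiments describe: downsampling a high-resolution bilevel page to screen resolution with grey-level anti-aliasing rather than plain thresholding. It uses Pillow rather than Photoshop, and the file names are placeholders.

    ```python
    # Compare grey-averaged downsampling with harsh bilevel downsampling.
    from PIL import Image

    page = Image.open("scanned_page_300dpi.png").convert("L")   # assumed input scan, 8-bit grey

    scale = 72 / 300                                            # 300-dpi source to a 72-dpi display
    target = (int(page.width * scale), int(page.height * scale))

    smooth = page.resize(target, Image.LANCZOS)                 # averaging produces grey shades
    harsh  = page.resize(target, Image.NEAREST).point(lambda v: 0 if v < 128 else 255)

    smooth.save("page_grayscale_72dpi.png")   # noticeably more readable on screen
    harsh.save("page_bilevel_72dpi.png")
    ```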

  14. What can be learned from the effects of benzodiazepines on exploratory behavior?

    PubMed

    File, S E

    1985-01-01

The purpose of this review is to assess the value of using tests of exploratory behavior to study the actions of benzodiazepines. The methods of measuring exploration and the factors influencing it are briefly described. The effects of benzodiazepines on exploratory behavior of rats and mice are reviewed, and the dangers of interpreting the results of such tests in terms of any of the clinical effects of the benzodiazepines are stressed. Finally, the interactions between benzodiazepines and other drugs acting at the GABA-benzodiazepine receptor complex are described. The results of these experiments caution against global classification of compounds as benzodiazepine "antagonists."

  15. iMindMap as an Innovative Tool in Teaching and Learning Accounting: An Exploratory Study

    ERIC Educational Resources Information Center

    Wan Jusoh, Wan Noor Hazlina; Ahmad, Suraya

    2016-01-01

Purpose: The purpose of this study is to explore the use of iMindMap software as an interactive tool in teaching and learning and to consider iMindMap as an alternative instrument for achieving the ultimate learning outcome. Design/Methodology/Approach: Out of 268 students of management accounting at the University of…

  16. Collaborative Interactive Visualization Exploratory Concept

    DTIC Science & Technology

    2015-06-01

the FIAC concepts. It consists of various intelligence analysis web services built on top of big data technologies exploited...sits on the UDS where validated common knowledge is stored. Based on the Lumify software, this important component exploits big data technologies such...interfaces. Above this database resides the Big Data Manager, responsible for transparent data transmission between the UDS and the rest of the S3

  17. Viewpoints: A High-Performance High-Dimensional Exploratory Data Analysis Tool

    NASA Astrophysics Data System (ADS)

    Gazis, P. R.; Levit, C.; Way, M. J.

    2010-12-01

    Scientific data sets continue to increase in both size and complexity. In the past, dedicated graphics systems at supercomputing centers were required to visualize large data sets, but as the price of commodity graphics hardware has dropped and its capability has increased, it is now possible, in principle, to view large complex data sets on a single workstation. To do this in practice, an investigator will need software that is written to take advantage of the relevant graphics hardware. The Viewpoints visualization package described herein is an example of such software. Viewpoints is an interactive tool for exploratory visual analysis of large high-dimensional (multivariate) data. It leverages the capabilities of modern graphics boards (GPUs) to run on a single workstation or laptop. Viewpoints is minimalist: it attempts to do a small set of useful things very well (or at least very quickly) in comparison with similar packages today. Its basic feature set includes linked scatter plots with brushing, dynamic histograms, normalization, and outlier detection/removal. Viewpoints was originally designed for astrophysicists, but it has since been used in a variety of fields that range from astronomy, quantum chemistry, fluid dynamics, machine learning, bioinformatics, and finance to information technology server log mining. In this article, we describe the Viewpoints package and show examples of its usage.
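
    A compact sketch of two of the basic operations listed above, per-column normalization and simple outlier removal, applied to a synthetic table with NumPy; Viewpoints itself performs these on the GPU with linked, brushable views, which a short example cannot reproduce.

    ```python
    # Normalize each column, then drop rows containing any extreme value.
    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(size=(10_000, 5))
    data[:20] += 8                                       # plant a few gross outliers

    z = (data - data.mean(axis=0)) / data.std(axis=0)    # per-column normalization
    keep = (np.abs(z) < 4).all(axis=1)                   # reject rows with any |z| >= 4

    clean = data[keep]
    print(f"kept {keep.sum()} of {len(data)} rows")
    ```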

  18. Feature extraction from multiple data sources using genetic programming

    NASA Astrophysics Data System (ADS)

    Szymanski, John J.; Brumby, Steven P.; Pope, Paul A.; Eads, Damian R.; Esch-Mosher, Diana M.; Galassi, Mark C.; Harvey, Neal R.; McCulloch, Hersey D.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Bloch, Jeffrey J.; David, Nancy A.

    2002-08-01

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. We use the GENetic Imagery Exploitation (GENIE) software for this purpose, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land cover features including towns, wildfire burnscars, and forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.
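
    GENIE is a substantial genetic-programming system and its internals are not reproduced here; the toy below only gestures at the idea: candidate "pipelines" are short chains of simple image operators, scored by pixel agreement between their thresholded output and a painted training mask, and improved by random mutation in a simple (1+1) evolutionary loop. The image, mask and operator set are all synthetic.

    ```python
    # Evolve a short image-processing pipeline that recovers a painted training mask.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    image = rng.random((64, 64))
    image[20:40, 20:40] += 1.0                       # bright "feature" to recover
    mask = np.zeros_like(image, dtype=bool)
    mask[20:40, 20:40] = True                        # the painted training mask

    OPS = {
        "smooth": lambda a: ndimage.gaussian_filter(a, 2),
        "edge":   lambda a: ndimage.sobel(a),
        "open":   lambda a: ndimage.grey_opening(a, size=3),
        "ident":  lambda a: a,
    }

    def run(pipeline, a):
        for name in pipeline:
            a = OPS[name](a)
        return a > a.mean()                          # threshold into a candidate feature map

    def fitness(pipeline):
        return (run(pipeline, image) == mask).mean() # pixel agreement with the mask

    def mutate(pipeline):
        p = list(pipeline)
        p[rng.integers(len(p))] = rng.choice(list(OPS))
        return p

    best = [rng.choice(list(OPS)) for _ in range(3)]
    for _ in range(200):                             # (1+1) evolutionary loop
        child = mutate(best)
        if fitness(child) >= fitness(best):
            best = child

    print(best, round(fitness(best), 3))
    ```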

  19. Interactive graphics for the Macintosh: software review of FlexiGraphs.

    PubMed

    Antonak, R F

    1990-01-01

    While this product is clearly unique, its usefulness to individuals outside small business environments is somewhat limited. FlexiGraphs is, however, a reasonable first attempt to design a microcomputer software package that controls data through interactive editing within a graph. Although the graphics capabilities of mainframe programs such as MINITAB (Ryan, Joiner, & Ryan, 1981) and the graphic manipulations available through exploratory data analysis (e.g., Velleman & Hoaglin, 1981) will not be surpassed anytime soon by this program, a researcher may want to add this program to a software library containing other Macintosh statistics, drawing, and graphics programs if only to obtain the easy-to-obtain curve fitting and line smoothing options. I welcome the opportunity to review the enhanced "scientific" version of FlexiGraphs that the author of the program indicates is currently under development. An MS-DOS version of the program should be available within the year.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szymanski, J. J.; Brumby, Steven P.; Pope, P. A.

Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land-cover features including towns, grasslands, wild fire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.

  1. Neuroanatomical correlates of intelligence in healthy young adults: the role of basal ganglia volume.

    PubMed

    Rhein, Cosima; Mühle, Christiane; Richter-Schmidinger, Tanja; Alexopoulos, Panagiotis; Doerfler, Arnd; Kornhuber, Johannes

    2014-01-01

    In neuropsychiatric diseases with basal ganglia involvement, higher cognitive functions are often impaired. In this exploratory study, we examined healthy young adults to gain detailed insight into the relationship between basal ganglia volume and cognitive abilities under non-pathological conditions. We investigated 137 healthy adults that were between the ages of 21 and 35 years with similar educational backgrounds. Magnetic resonance imaging (MRI) was performed, and volumes of basal ganglia nuclei in both hemispheres were calculated using FreeSurfer software. The cognitive assessment consisted of verbal, numeric and figural aspects of intelligence for either the fluid or the crystallised intelligence factor using the intelligence test Intelligenz-Struktur-Test (I-S-T 2000 R). Our data revealed significant correlations of the caudate nucleus and pallidum volumes with figural and numeric aspects of intelligence, but not with verbal intelligence. Interestingly, figural intelligence associations were dependent on sex and intelligence factor; in females, the pallidum volumes were correlated with crystallised figural intelligence (r = 0.372, p = 0.01), whereas in males, the caudate volumes were correlated with fluid figural intelligence (r = 0.507, p = 0.01). Numeric intelligence was correlated with right-lateralised caudate nucleus volumes for both females and males, but only for crystallised intelligence (r = 0.306, p = 0.04 and r = 0.459, p = 0.04, respectively). The associations were not mediated by prefrontal cortical subfield volumes when controlling with partial correlation analyses. The findings of our exploratory analysis indicate that figural and numeric intelligence aspects, but not verbal aspects, are strongly associated with basal ganglia volumes. Unlike numeric intelligence, the type of figural intelligence appears to be related to distinct basal ganglia nuclei in a sex-specific manner. Subcortical brain structures thus may contribute substantially to cognitive performance.
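
    A bare-bones sketch of the control analysis mentioned above: a partial correlation between a subcortical volume and a test score, controlling for a covariate, computed by correlating regression residuals. All numbers are simulated and stand in for, not reproduce, the study's data.

    ```python
    # Partial correlation via residualization against a single covariate.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 137
    covariate = rng.normal(size=n)                       # e.g., a prefrontal subfield volume (simulated)
    volume = 0.5 * covariate + rng.normal(size=n)        # basal ganglia volume (simulated)
    score = 0.4 * volume + rng.normal(size=n)            # intelligence subscore (simulated)

    def residualize(y, x):
        slope, intercept = np.polyfit(x, y, 1)           # simple linear regression of y on x
        return y - (slope * x + intercept)

    r_partial = np.corrcoef(residualize(volume, covariate),
                            residualize(score, covariate))[0, 1]
    print(round(r_partial, 3))
    ```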

  2. Factor analysis for instruments of science learning motivation and its implementation for the chemistry and biology teacher candidates

    NASA Astrophysics Data System (ADS)

    Prasetya, A. T.; Ridlo, S.

    2018-03-01

The purpose of this study is to test an instrument for science learning motivation and to compare the science learning motivation of chemistry and biology teacher candidates. The Kuesioner Motivasi Sains (KMS) is an Indonesian adaptation of the Science Motivation Questionnaire II (SMQ II), consisting of 25 items with a 5-point Likert scale. The number of respondents for the Exploratory Factor Analysis (EFA) test was 312. The Kaiser-Meyer-Olkin (KMO), determinant, Bartlett's Sphericity, and Measures of Sampling Adequacy (MSA) tests applied to the KMS, run with SPSS 20.0 and Lisrel 8.51 software, indicated that the instrument was eligible. However, the test of communalities showed that 4 items did not qualify, so these items were discarded. In the second test, all eligibility parameters were met, and the Root Mean Square Error of Approximation (RMSEA), P-Value for the Test of Close Fit (RMSEA < 0.05), and Goodness of Fit Index (GFI) were good. The new KMS, with 21 valid items and a composite reliability of 0.9329, can be used to test the level of science learning motivation, which includes Intrinsic Motivation, Self-Efficacy, Self-Determination, Grade Motivation and Career Motivation, for students who have mastered the Indonesian language. KMS trials with chemistry and biology teacher candidates found no significant difference in learning motivation between the two groups.
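
    A hedged sketch of the adequacy checks and extraction reported above, using the Python factor_analyzer package instead of SPSS/Lisrel; the file name "responses.csv" and the five-factor request are placeholders standing in for the 21-item KMS data.

    ```python
    # KMO, Bartlett's sphericity, and varimax-rotated factor extraction on Likert data.
    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

    items = pd.read_csv("responses.csv")              # assumed: one Likert item per column

    chi2, p = calculate_bartlett_sphericity(items)    # Bartlett's test of sphericity
    kmo_per_item, kmo_total = calculate_kmo(items)    # KMO / MSA sampling adequacy
    print(f"Bartlett chi2={chi2:.1f} (p={p:.4f}), overall KMO={kmo_total:.2f}")

    fa = FactorAnalyzer(n_factors=5, rotation="varimax")   # five motivation factors assumed
    fa.fit(items)
    loadings = pd.DataFrame(fa.loadings_, index=items.columns)
    print(loadings.round(2))
    ```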

  3. Evaluation of Early Ground Control Station Configurations for Interacting with a UAS Traffic Management (UTM) System

    NASA Technical Reports Server (NTRS)

    Dao, Arik-Quang V.; Martin, Lynne; Mohlenbrink, Christoph; Bienert, Nancy; Wolte, Cynthia; Gomez, Ashley; Claudatos, Lauren; Mercer, Joey

    2017-01-01

    The purpose of this paper is to report on a human factors evaluation of ground control station design concepts for interacting with an unmanned traffic management system. The data collected for this paper comes from recent field tests for NASA's Unmanned Traffic Management (UTM) project, and covers the following topics; workload, situation awareness, as well as flight crew communication, coordination, and procedures. The goal of this evaluation was to determine if the various software implementations for interacting with the UTM system can be described and classified into design concepts to provide guidance for the development of future UTM interfaces. We begin with a brief description of NASA's UTM project, followed by a description of the test range configuration related to a second development phase. We identified (post hoc) two classes in which the ground control stations could be grouped. This grouping was based on level of display integration. The analysis was exploratory and informal. It was conducted to compare ground stations across those two classes and against the aforementioned topics. Herein, we discuss the results.

  4. Agile methods in biomedical software development: a multi-site experience report.

    PubMed

    Kane, David W; Hohman, Moses M; Cerami, Ethan G; McCormick, Michael W; Kuhlmman, Karl F; Byrd, Jeff A

    2006-05-30

    Agile is an iterative approach to software development that relies on strong collaboration and automation to keep pace with dynamic environments. We have successfully used agile development approaches to create and maintain biomedical software, including software for bioinformatics. This paper reports on a qualitative study of our experiences using these methods. We have found that agile methods are well suited to the exploratory and iterative nature of scientific inquiry. They provide a robust framework for reproducing scientific results and for developing clinical support systems. The agile development approach also provides a model for collaboration between software engineers and researchers. We present our experience using agile methodologies in projects at six different biomedical software development organizations. The organizations include academic, commercial and government development teams, and included both bioinformatics and clinical support applications. We found that agile practices were a match for the needs of our biomedical projects and contributed to the success of our organizations. We found that the agile development approach was a good fit for our organizations, and that these practices should be applicable and valuable to other biomedical software development efforts. Although we found differences in how agile methods were used, we were also able to identify a set of core practices that were common to all of the groups, and that could be a focus for others seeking to adopt these methods.

  5. Agile methods in biomedical software development: a multi-site experience report

    PubMed Central

    Kane, David W; Hohman, Moses M; Cerami, Ethan G; McCormick, Michael W; Kuhlmman, Karl F; Byrd, Jeff A

    2006-01-01

    Background Agile is an iterative approach to software development that relies on strong collaboration and automation to keep pace with dynamic environments. We have successfully used agile development approaches to create and maintain biomedical software, including software for bioinformatics. This paper reports on a qualitative study of our experiences using these methods. Results We have found that agile methods are well suited to the exploratory and iterative nature of scientific inquiry. They provide a robust framework for reproducing scientific results and for developing clinical support systems. The agile development approach also provides a model for collaboration between software engineers and researchers. We present our experience using agile methodologies in projects at six different biomedical software development organizations. The organizations include academic, commercial and government development teams, and included both bioinformatics and clinical support applications. We found that agile practices were a match for the needs of our biomedical projects and contributed to the success of our organizations. Conclusion We found that the agile development approach was a good fit for our organizations, and that these practices should be applicable and valuable to other biomedical software development efforts. Although we found differences in how agile methods were used, we were also able to identify a set of core practices that were common to all of the groups, and that could be a focus for others seeking to adopt these methods. PMID:16734914

  6. SEURAT: Visual analytics for the integrated analysis of microarray data

    PubMed Central

    2010-01-01

    Background In translational cancer research, gene expression data is collected together with clinical data and genomic data arising from other chip based high throughput technologies. Software tools for the joint analysis of such high dimensional data sets together with clinical data are required. Results We have developed an open source software tool which provides interactive visualization capability for the integrated analysis of high-dimensional gene expression data together with associated clinical data, array CGH data and SNP array data. The different data types are organized by a comprehensive data manager. Interactive tools are provided for all graphics: heatmaps, dendrograms, barcharts, histograms, eventcharts and a chromosome browser, which displays genetic variations along the genome. All graphics are dynamic and fully linked so that any object selected in a graphic will be highlighted in all other graphics. For exploratory data analysis the software provides unsupervised data analytics like clustering, seriation algorithms and biclustering algorithms. Conclusions The SEURAT software meets the growing needs of researchers to perform joint analysis of gene expression, genomical and clinical data. PMID:20525257

  7. An interactive, multi-touch videowall for scientific data exploration

    NASA Astrophysics Data System (ADS)

    Blower, Jon; Griffiths, Guy; van Meersbergen, Maarten; Lusher, Scott; Styles, Jon

    2014-05-01

    The use of videowalls for scientific data exploration is rising as hardware becomes cheaper and the availability of software and multimedia content grows. Most videowalls are used primarily for outreach and communication purposes, but there is increasing interest in using large display screens to support exploratory visualization as an integral part of scientific research. In this PICO presentation we will present a brief overview of a new videowall system at the University of Reading, which is designed specifically to support interactive, exploratory visualization activities in climate science and Earth Observation. The videowall consists of eight 42-inch full-HD screens (in 4x2 formation), giving a total resolution of about 16 megapixels. The display is managed by a videowall controller, which can direct video to the screen from up to four external laptops, a purpose-built graphics workstation, or any combination thereof. A multi-touch overlay provides the capability for the user to interact directly with the data. There are many ways to use the videowall, and a key technical challenge is to make the most of the touch capabilities - touch has the potential to greatly reduce the learning curve in interactive data exploration, but most software is not yet designed for this purpose. In the PICO we will present an overview of some ways in which the wall can be employed in science, seeking feedback and discussion from the community. The system was inspired by an existing and highly-successful system (known as the "Collaboratorium") at the Netherlands e-Science Center (NLeSC). We will demonstrate how we have adapted NLeSC's visualization software to our system for touch-enabled multi-screen climate data exploration.

  8. Comparisons of Exploratory and Confirmatory Factor Analysis.

    ERIC Educational Resources Information Center

    Daniel, Larry G.

    Historically, most researchers conducting factor analysis have used exploratory methods. However, more recently, confirmatory factor analytic methods have been developed that can directly test theory either during factor rotation using "best fit" rotation methods or during factor extraction, as with the LISREL computer programs developed…

  9. Sociotechnical Human Factors Involved in Remote Online Usability Testing of Two eHealth Interventions

    PubMed Central

    2016-01-01

    Background Research in the fields of human performance technology and human computer interaction are challenging the traditional macro focus of usability testing arguing for methods that help test moderators assess “use in context” (ie, cognitive skills, usability understood over time) and in authentic “real world” settings. Human factors in these complex test scenarios may impact on the quality of usability results being derived yet there is a lack of research detailing moderator experiences in these test environments. Most comparative research has focused on the impact of the physical environment on results, and rarely on how the sociotechnical elements of the test environment affect moderator and test user performance. Improving our understanding of moderator roles and experiences with conducting “real world” usability testing can lead to improved techniques and strategies Objective To understand moderator experiences of using Web-conferencing software to conduct remote usability testing of 2 eHealth interventions. Methods An exploratory case study approach was used to study 4 moderators’ experiences using Blackboard Collaborate for remote testing sessions of 2 different eHealth interventions. Data collection involved audio-recording iterative cycles of test sessions, collecting summary notes taken by moderators, and conducting 2 90-minute focus groups via teleconference. A direct content analysis with an inductive coding approach was used to explore personal accounts, assess the credibility of data interpretation, and generate consensus on the thematic structure of the results. Results Following the convergence of data from the various sources, 3 major themes were identified: (1) moderators experienced and adapted to unpredictable changes in cognitive load during testing; (2) moderators experienced challenges in creating and sustaining social presence and untangling dialogue; and (3) moderators experienced diverse technical demands, but were able to collaboratively troubleshoot with test users. Conclusions Results highlight important human-computer interactions and human factor qualities that impact usability testing processes. Moderators need an advanced skill and knowledge set to address the social interaction aspects of Web-based usability testing and technical aspects of conferencing software during test sessions. Findings from moderator-focused studies can inform the design of remote testing platforms and real-time usability evaluation processes that place less cognitive burden on moderators and test users. PMID:27026291

  10. Drilling, construction, and aquifer-test data from wells 3-3307-20 and -21, Thompson Corner exploratory wells I and II, Oahu, Hawaii

    USGS Publications Warehouse

    Presley, T.K.; Oki, D.S.

    1996-01-01

    The Thompson Corner exploratory wells I and II (State well numbers 3-3307-20 and -21) were drilled near Thompson Corner, about 2.2 miles south-southwest of the town of Haleiwa. The wells are located on agricultural land in the Waialua ground-water area. The wells are about 50 feet apart and penetrate about 90 feet into the ground water. Aquifer tests were conducted using well 3-3307-20 as a pumping well and well 3-3307-21 as an observation well. Well-construction data, logs of drilling notes, geologic descriptions for the samples, and aquifer-test data are presented for the wells. The wells are two of twelve exploratory wells drilled in the north-central Oahu area between July 1993 and May 1994 in cooperation with the Honolulu Board of Water Supply.

  11. Parallel-Processing Software for Correlating Stereo Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric

    2007-01-01

    A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.
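
    The segmentation-and-distribution scheme described above can be sketched in a few lines. The sketch below is an illustration only, not the JPL program: it splits a stereo pair into horizontal strips and scores each strip on a separate worker process; the function names and the toy correlation measure are assumptions.

    ```python
    # Minimal sketch (not the JPL software): correlate stereo subimages in parallel.
    import numpy as np
    from multiprocessing import Pool

    def correlate_strip(pair):
        """Normalized correlation score for one pair of subimage strips."""
        left, right = pair
        l = (left - left.mean()) / (left.std() + 1e-9)
        r = (right - right.mean()) / (right.std() + 1e-9)
        return float((l * r).mean())

    def parallel_correlation(left_img, right_img, n_strips=8, n_workers=4):
        """Segment the scene into strips and assign each strip to its own CPU."""
        strips = list(zip(np.array_split(left_img, n_strips, axis=0),
                          np.array_split(right_img, n_strips, axis=0)))
        with Pool(n_workers) as pool:
            return pool.map(correlate_strip, strips)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        left = rng.random((480, 640))
        right = left + 0.05 * rng.random((480, 640))   # nearly identical second view
        print(parallel_correlation(left, right))
    ```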

  12. Large-Scale Exploratory Analysis, Cleaning, and Modeling for Event Detection in Real-World Power Systems Data

    DTIC Science & Technology

    2013-11-01

    big data with R is relatively new. RHadoop is a mature product from Revolution Analytics that uses R with Hadoop Streaming [15] and provides ... agnostic all-data summaries or computations, in which case we use MapReduce directly. 2.3 D&R Software Environment In this work, we use the Hadoop ... job scheduling and tracking, data distribution, system architecture, heterogeneity, and fault-tolerance. Hadoop also provides a distributed key-value
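
    The record above is fragmentary, but the Divide and Recombine (D&R) idea it refers to, partitioning a large dataset, applying the same analytic to each subset, and recombining the per-subset results, can be illustrated without Hadoop. The plain-Python sketch below is a stand-in for the RHadoop/MapReduce workflow; the field names and recombination rule are assumptions.

    ```python
    # Illustrative Divide-and-Recombine sketch in plain Python (no Hadoop/RHadoop).
    import numpy as np

    def divide(records, key):
        """Partition records into subsets keyed by, e.g., a sensor id (assumed field)."""
        subsets = {}
        for rec in records:
            subsets.setdefault(rec[key], []).append(rec)
        return subsets

    def analytic(subset):
        """Per-subset analytic: fit a linear trend to the frequency signal."""
        t = np.array([rec["t"] for rec in subset], dtype=float)
        f = np.array([rec["freq"] for rec in subset])
        slope, _intercept = np.polyfit(t, f, 1)
        return slope

    def recombine(per_subset):
        """Recombine by averaging the per-subset estimates."""
        return float(np.mean(list(per_subset.values())))

    rng = np.random.default_rng(0)
    records = [{"sensor": i % 3, "t": i, "freq": 60 + 0.001 * i + 0.01 * rng.normal()}
               for i in range(300)]
    subsets = divide(records, "sensor")
    print(recombine({k: analytic(v) for k, v in subsets.items()}))
    ```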

  13. P-MartCancer–Interactive Online Software to Enable Analysis of Shotgun Cancer Proteomic Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webb-Robertson, Bobbie-Jo M.; Bramer, Lisa M.; Jensen, Jeffrey L.

    P-MartCancer is a new interactive web-based software environment that enables biomedical and biological scientists to perform in-depth analyses of global proteomics data without requiring direct interaction with the data or with statistical software. P-MartCancer offers a series of statistical modules associated with quality assessment, peptide and protein statistics, protein quantification and exploratory data analyses driven by the user via customized workflows and interactive visualization. Currently, P-MartCancer offers access to multiple cancer proteomic datasets generated through the Clinical Proteomics Tumor Analysis Consortium (CPTAC) at the peptide, gene and protein levels. P-MartCancer is deployed using Azure technologies (http://pmart.labworks.org/cptac.html), the web-service is alternatively available via Docker Hub (https://hub.docker.com/r/pnnl/pmart-web/) and many statistical functions can be utilized directly from an R package available on GitHub (https://github.com/pmartR).

  14. An Exploratory Investigation of the Effects of a Thin Plastic Film Cover on the Profile Drag of an Aircraft Wing Panel

    NASA Technical Reports Server (NTRS)

    Beasley, W. D.; Mcghee, R. J.

    1977-01-01

    Exploratory wind tunnel tests were conducted on a large chord aircraft wing panel to evaluate the potential for drag reduction resulting from the application of a thin plastic film cover. The tests were conducted at a Mach number of 0.15 over a Reynolds number range from about 7 x 10^6 to 63 x 10^6.

  15. Changes in the pattern of exploratory behavior are associated with the emergence of social dominance relationships in male rats.

    PubMed

    Arakawa, Hiroyuki

    2006-01-01

    This study examined the effect of the establishment of dominance relationships and subordination on exploratory behavior for both postpubertal and adult male rats. Prior to an open field test, subjects were housed either in isolation (IS) or in littermate pairs (PS) with mild dominance relationships without overt victory or defeat, or in pairs with clear hierarchical relationships as dominants (DOM) or subordinates (SUB). Stretch-attend postures and entries into the center area of the open-field were measured as an index of passive and active exploratory behavior, respectively, and crossings in the peripheral area were counted as activity. SUB rats, both postpubertal and adult, displayed less activity and lower levels of active exploratory behavior, whereas adult IS rats showed higher levels of active exploratory behavior compared to the other groups. Furthermore, both DOM and PS rats exhibited a more passive pattern of exploratory behavior in adulthood than in postpuberty. Thus, the results show that an increase in the active exploratory pattern is inhibited by the establishment of social relationships among adult rats, while a decrease in activity is primarily an effect of subordination. The capacity to change exploratory patterns following subordination is found even in the postpubertal stage when adultlike social relationships have not yet appeared. Copyright 2005 Wiley Periodicals, Inc.

  16. Comparisons of Means Using Exploratory and Confirmatory Approaches

    ERIC Educational Resources Information Center

    Kuiper, Rebecca M.; Hoijtink, Herbert

    2010-01-01

    This article discusses comparisons of means using exploratory and confirmatory approaches. Three methods are discussed: hypothesis testing, model selection based on information criteria, and Bayesian model selection. Throughout the article, an example is used to illustrate and evaluate the two approaches and the three methods. We demonstrate that…

  17. Docking-based classification models for exploratory toxicology studies on high-quality estrogenic experimental data

    EPA Science Inventory

    Background: Exploratory toxicology is a new emerging research area whose ultimate mission is that of protecting human health and environment from risks posed by chemicals. In this regard, the ethical and practical limitation of animal testing has encouraged the promotion of compu...

  18. Multi-walled carbon nanotubes increase anxiety levels in rats and reduce exploratory activity in the open field test.

    PubMed

    Sayapina, N V; Batalova, T A; Chaika, V V; Kuznetsov, V L; Sergievich, A A; Kolosov, V P; Perel'man, Yu M; Golokhvast, K S

    2015-01-01

    The results of the first study on the effects of multi-walled carbon nanotubes (MWNTs) on the exploratory activity and the emotional state in laboratory rats assessed by the open field test are reported. For three or ten days, rats received 8-10 nm MWNTs added to their food at a dose of 500 mg/kg. It was demonstrated that, in the group of rats which were fed with MWNTs, the integrated anxiety level index began to increase as early as the third day of the experiment; by the tenth day, it had approximately doubled. It was also demonstrated that MWNTs decreased the integrated exploratory activity index nearly twofold on the third day and nearly fourfold on the tenth day.

  19. What Websites Are Patients Using: Results of a Tracking Study Exploring Patients Use of Websites at a Multi-Media Patient Education Center

    PubMed Central

    Ravitch, Stephanie; Fleisher, Linda; Torres, Stephen

    2005-01-01

    An exploratory study utilizing computer tracking software at a hospital-based patient education center was conducted to assess use of the Web. Over six months, 625 hits were tracked; about one third were to www.cancer.gov, one of the recommended websites, while over half of the visited sites were not on the recommended list. Here we report the challenges and results of this tracking study. PMID:16779379

  20. Can Automated Facial Expression Analysis Show Differences Between Autism and Typical Functioning?

    PubMed

    Borsos, Zsófia; Gyori, Miklos

    2017-01-01

    Exploratory analyses of emotional expressions using commercially available facial expression recognition software are reported, in the context of a serious game for screening purposes. Our results are based on a comparative analysis of two matched groups of kindergarten-age children (high-functioning children with autism spectrum condition: n=13; typically developing children: n=13). Results indicate that this technology has the potential to identify autism-specific emotion expression features, and may play a role in affective diagnostic and assistive technologies.

  1. Beowawe Geothermal Area evaluation program. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iovenitti, J. L

    Several exploration programs were conducted at the Beowawe Geothermal Prospect, Lander and Eureka County, Nevada. Part I, consisting of a shallow temperature hole program, a mercury soil sampling survey, and a self-potential survey, was conducted in order to select the optimum site for an exploratory well. Part II consisted of drilling a 5927-foot exploratory well, running geophysical logs, conducting a drill stem test (2937-3208 feet), and a short-term (3-day) flow test (1655-2188 feet). All basic data collected are summarized.

  2. Effects of subchronic inhalation of vaporized plastic cement on exploratory behavior and Purkinje cell differentiation in the rat.

    PubMed

    Pascual, R; Salgado, C; Viancos, L; Figueroa, H R

    1996-12-06

    In the present study, the effects of preweaning cement vapor inhalation on exploratory behavior and cerebellar Purkinje cell differentiation were assessed. Sprague-Dawley albino rats were exposed daily to glue vapors between postnatal d 2 and 21. At postnatal d 22, all animals were submitted to the open-field test in order to evaluate their exploratory behavior. They were then sacrificed, their brains dissected out, and their cerebella stained according to the Golgi-Cox-Sholl procedure. Purkinje cells sampled from parasagittal sections of the cerebellar vermis were drawn under camera lucida and their dendritic domain was determined. The collected data indicate that glue solvent inhalation impairs both Purkinje cell differentiation and locomotor exploratory behavior.

  3. Equity in access to health care provision under the medicare security for small scale entrepreneurs in Dar es Salaam.

    PubMed

    Urassa, J A E

    2012-03-01

    The main objective of this study was to assess equity in access to health care provision under the Medicare Security for Small Scale Entrepreneurs (SSE). Methodological triangulation was used in an exploratory, randomized cross-sectional study in order to supplement information on the topic under investigation. Questionnaires were administered to 281 respondents, and 6 Focus Group Discussions (FGDs) were held with males and females. Documentary review was also used. For the quantitative aspect of the study, significant associations were assessed using 95% confidence interval (CI) testing. Qualitative data were analyzed with the assistance of Open Code software. The results show that inequalities in access to health care services were found with respect to affordability of medical care costs, distance from home to health facilities, and availability of drugs as well as medical equipment and supplies. As a result of the existing inequalities, some clients were not satisfied with the health services provided. The study concludes by drawing policy and research implications of the findings.

  4. Query optimization for graph analytics on linked data using SPARQL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Seokyong; Lee, Sangkeun; Lim, Seung -Hwan

    2015-07-01

    Triplestores that support query languages such as SPARQL are emerging as the preferred and scalable solution to represent data and meta-data as massive heterogeneous graphs using Semantic Web standards. With increasing adoption, the desire to conduct graph-theoretic mining and exploratory analysis has also increased. Addressing that desire, this paper presents a solution that is the marriage of Graph Theory and the Semantic Web. We present software that can analyze Linked Data using graph operations such as counting triangles, finding eccentricity, testing connectedness, and computing PageRank directly on triple stores via the SPARQL interface. We describe the process of optimizing performance of the SPARQL-based implementation of such popular graph algorithms by reducing the space-overhead, simplifying iterative complexity and removing redundant computations by understanding query plans. Our optimized approach shows significant performance gains on triplestores hosted on stand-alone workstations as well as hardware-optimized scalable supercomputers such as the Cray XMT.
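
    To make this kind of graph operation concrete, the snippet below counts triangles with a single query issued from Python against a SPARQL endpoint. It is a generic sketch rather than the authors' optimized implementation: the endpoint URL is a placeholder, and the query counts directed, predicate-labelled triangles without the de-duplication and query-plan optimizations the paper describes.

    ```python
    # Hedged sketch: triangle counting over a triplestore via its SPARQL endpoint.
    # Requires the SPARQLWrapper package; the endpoint URL is a placeholder.
    from SPARQLWrapper import SPARQLWrapper, JSON

    ENDPOINT = "http://localhost:3030/dataset/sparql"   # assumed endpoint

    QUERY = """
    SELECT (COUNT(*) AS ?triangles) WHERE {
      ?a ?p1 ?b .
      ?b ?p2 ?c .
      ?c ?p3 ?a .
      FILTER (?a != ?b && ?b != ?c && ?a != ?c)
    }
    """

    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(QUERY)
    sparql.setReturnFormat(JSON)
    result = sparql.query().convert()
    print(result["results"]["bindings"][0]["triangles"]["value"])
    ```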

  5. An exploratory mixed-methods crossover study comparing DVD- vs. Web-based patient decision support in three conditions: The importance of patient perspectives.

    PubMed

    Halley, Meghan C; Rendle, Katharine A S; Gillespie, Katherine A; Stanley, Katherine M; Frosch, Dominick L

    2015-12-01

    The last 15 years have witnessed considerable progress in the development of decision support interventions (DESIs). However, fundamental questions about design and format of delivery remain. An exploratory, randomized mixed-method crossover study was conducted to compare a DVD- and Web-based DESI. Randomized participants used either the Web or the DVD first, followed by the alternative format. Participants completed a questionnaire to assess decision-specific knowledge at baseline and a questionnaire and structured qualitative interview after viewing each format. Tracking software was used to capture Web utilization. Transcripts were analyzed using integrated inductive and deductive approaches. Quantitative data were analyzed using exploratory bivariate and multivariate analyses. Exploratory knowledge analyses suggest that both formats increased knowledge, with limited evidence that the DVD increased knowledge more than the Web. Format preference varied across participants: 44% preferred the Web, 32% preferred the DVD and 24% preferred 'both'. Patient discussions of preferences for DESI information structure and the importance of a patient's stage of a given decision suggest these characteristics may be important factors underlying variation in utilization, format preferences and knowledge outcomes. Our results suggest that both DESI formats effectively increase knowledge. Patients' perceptions of these two formats further suggest that there may be no single 'best' format for all patients. These results have important implications for understanding why different DESI formats might be preferable to and more effective for different patients. Further research is needed to explore the relationship between these factors and DESI utilization outcomes across diverse patient populations. © 2014 John Wiley & Sons Ltd.

  6. Application of Exploratory Structural Equation Modeling to Evaluate the Academic Motivation Scale

    ERIC Educational Resources Information Center

    Guay, Frédéric; Morin, Alexandre J. S.; Litalien, David; Valois, Pierre; Vallerand, Robert J.

    2015-01-01

    In this research, the authors examined the construct validity of scores of the Academic Motivation Scale using exploratory structural equation modeling. Study 1 and Study 2 involved 1,416 college students and 4,498 high school students, respectively. First, results of both studies indicated that the factor structure tested with exploratory…

  7. An Exploratory Study of Intrinsic & Extrinsic Motivators and Student Performance in an Auditing Course

    ERIC Educational Resources Information Center

    Mo, Songtao

    2011-01-01

    The objective of this study is to investigate the association of intrinsic and extrinsic motivators and student performance. This study performs an exploratory analysis and presents evidence to demonstrate that intrinsic motivators affect the connection between external motivators and student performance. The empirical tests follow the framework…

  8. An Exploratory Multiple Case Study about Using Game-Based Learning in STEM Classrooms

    ERIC Educational Resources Information Center

    Vu, Phu; Feinstein, Sheryl

    2017-01-01

    This exploratory multiple case study attempted to examine whether game-based learning activities had any impacts on students' academic performances and behaviors, and what perceptions the teachers had toward implementing games into their classrooms. Data used in this study included 101 students' pre and post-test scores, and four structured…

  9. 50 CFR 680.40 - Crab Quota Share (QS), Processor QS (PQS), Individual Fishing Quota (IFQ), and Individual...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... exclude any deadloss, test fishing, fishing conducted under an experimental, exploratory, or scientific..., education, exploratory, or experimental permit, or under the Western Alaska CDQ Program. (iv) Documentation... information is true, correct, and complete to the best of his/her knowledge and belief. If the application is...

  10. 50 CFR 680.40 - Crab Quota Share (QS), Processor QS (PQS), Individual Fishing Quota (IFQ), and Individual...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... exclude any deadloss, test fishing, fishing conducted under an experimental, exploratory, or scientific..., education, exploratory, or experimental permit, or under the Western Alaska CDQ Program. (iv) Documentation... information is true, correct, and complete to the best of his/her knowledge and belief. If the application is...

  11. 50 CFR 680.40 - Crab Quota Share (QS), Processor QS (PQS), Individual Fishing Quota (IFQ), and Individual...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... exclude any deadloss, test fishing, fishing conducted under an experimental, exploratory, or scientific..., education, exploratory, or experimental permit, or under the Western Alaska CDQ Program. (iv) Documentation... information is true, correct, and complete to the best of his/her knowledge and belief. If the application is...

  12. 50 CFR 680.40 - Crab Quota Share (QS), Processor QS (PQS), Individual Fishing Quota (IFQ), and Individual...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... exclude any deadloss, test fishing, fishing conducted under an experimental, exploratory, or scientific..., education, exploratory, or experimental permit, or under the Western Alaska CDQ Program. (iv) Documentation... information is true, correct, and complete to the best of his/her knowledge and belief. If the application is...

  13. Ignition potential of muzzle-loading firearms: An exploratory investigation

    Treesearch

    David V. Haston; Mark A. Finney; Andy Horcher; Philip A. Yates; Kahlil Detrich

    2009-01-01

    The National Technology and Development Program of the Forest Service, U.S. Department of Agriculture, was asked to conduct an exploratory study on the ignition potential of muzzle-loading firearms. The five independent variables investigated include projectile type, powder type, powder load, patch thickness, and patch lubricant treatment. Indoor testing was performed...

  14. An Exploratory Investigation of the Factor Structure of the Reynolds Intellectual Assessment Scales (RIAS)

    ERIC Educational Resources Information Center

    Dombrowski, Stefan C.; Watkins, Marley W.; Brogan, Michael J.

    2009-01-01

    This study investigated the factor structure of the Reynolds Intellectual Assessment Scales (RIAS) using rigorous exploratory factor analytic and factor extraction procedures. The results of this study indicate that the RIAS is a single factor test. Despite these results, higher order factor analysis using the Schmid-Leiman procedure indicates…

  15. Testing Measurement Invariance in the Target Rotated Multigroup Exploratory Factor Model

    ERIC Educational Resources Information Center

    Dolan, Conor V.; Oort, Frans J.; Stoel, Reinoud D.; Wicherts, Jelte M.

    2009-01-01

    We propose a method to investigate measurement invariance in the multigroup exploratory factor model, subject to target rotation. We consider both oblique and orthogonal target rotation. This method has clear advantages over other approaches, such as the use of congruence measures. We demonstrate that the model can be implemented readily in the…

  16. Analysis of Performance of Stereoscopic-Vision Software

    NASA Technical Reports Server (NTRS)

    Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert

    2007-01-01

    A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
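
    The correlation-window and maximum-disparity effects analyzed above can be reproduced in miniature with an off-the-shelf block-matching correlator. The sketch below uses OpenCV's StereoBM, which is not the JPL correlator but exposes the same two parameters (window size and disparity search range); the image file names and camera model are placeholders.

    ```python
    # Illustrative block-matching stereo correlation with OpenCV (not the JPL software).
    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # placeholder file names
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # numDisparities plays the role of the maximum disparity; blockSize is the
    # correlation-window size (OpenCV requires a multiple of 16 and an odd value).
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point output

    # With a calibrated rig, down-range distance is roughly focal * baseline / disparity.
    focal_px, baseline_m = 1000.0, 0.3          # assumed camera model
    valid = disparity > 0
    range_m = np.zeros_like(disparity)
    range_m[valid] = focal_px * baseline_m / disparity[valid]
    print("median range (m):", float(np.median(range_m[valid])))
    ```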

  17. Representation of Serendipitous Scientific Data

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    A computer program defines and implements an innovative kind of data structure that can be used for representing information derived from serendipitous discoveries made via collection of scientific data on long exploratory spacecraft missions. Data structures capable of collecting any kind of data can easily be implemented in advance, but the task of designing a fixed and efficient data structure suitable for processing raw data into useful information and taking advantage of serendipitous scientific discovery is becoming increasingly difficult as missions go deeper into space. The present software eases the task by enabling definition of arbitrarily complex data structures that can adapt at run time as raw data are transformed into other types of information. This software runs on a variety of computers, and can be distributed in either source code or binary code form. It must be run in conjunction with any one of a number of Lisp compilers that are available commercially or as shareware. It has no specific memory requirements and depends upon the other software with which it is used. This program is implemented as a library that is called by, and becomes folded into, the other software with which it is used.

  18. Testing Scientific Software: A Systematic Literature Review.

    PubMed

    Kanewala, Upulee; Bieman, James M

    2014-10-01

    Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques.
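
    One widely used response to the oracle problem mentioned above is metamorphic testing: rather than checking an output against a known correct value, the test checks that outputs of related inputs satisfy a relation that must hold. The sketch below applies that idea to a hypothetical numerical routine; it illustrates the general technique and is not code from any of the reviewed studies.

    ```python
    # Metamorphic-style tests for a numerical routine that lacks an exact oracle.
    import numpy as np

    def simulate_mean_response(forcing):
        """Hypothetical scientific routine: mean saturated response to a forcing series."""
        return float(np.mean(np.tanh(forcing)))

    def test_permutation_invariance():
        # Relation 1: reordering the input series must not change the mean response.
        forcing = np.random.default_rng(1).normal(size=1000)
        assert np.isclose(simulate_mean_response(forcing),
                          simulate_mean_response(np.flip(forcing)))

    def test_monotonicity():
        # Relation 2: uniformly increasing the forcing must not decrease the
        # response, because tanh is monotone.
        forcing = np.random.default_rng(2).normal(size=1000)
        assert simulate_mean_response(forcing + 0.5) >= simulate_mean_response(forcing)

    test_permutation_invariance()
    test_monotonicity()
    print("metamorphic relations hold")
    ```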

  19. Lower risk taking and exploratory behavior in alcohol-preferring sP rats than in alcohol non-preferring sNP rats in the multivariate concentric square field (MCSF) test.

    PubMed

    Roman, Erika; Colombo, Giancarlo

    2009-12-14

    The present investigation continues previous behavioral profiling studies of selectively bred alcohol-drinking and alcohol non-drinking rats. In this study, alcohol-naïve adult Sardinian alcohol-preferring (sP) and non-preferring (sNP) rats were tested in the multivariate concentric square field (MCSF) test. The MCSF test has an ethoexperimental approach and measures general activity, exploration, risk assessment, risk taking, and shelter seeking in laboratory rodents. The multivariate design enables behavioral profiling in one and the same test situation. Age-matched male Wistar rats were included as a control group. Five weeks after the first MCSF trial, a repeated testing was done to explore differences in acquired experience. The results revealed distinct differences in exploratory strategies and behavioral profiles between sP and sNP rats. The sP rats were characterized by lower activity, lower exploratory drive, higher risk assessment, and lower risk taking behavior than in sNP rats. In the repeated trial, risk-taking behavior was almost abolished in sP rats. When comparing the performance of sP and sNP rats with that of Wistar rats, the principal component analysis revealed that the sP rats were the most divergent group. The vigilant behavior observed in sP rats with low exploratory drive and low risk-taking behavior is interpreted here as high innate anxiety-related behaviors and may be related to their propensity for high voluntary alcohol intake and preference. We suggest that the different lines of alcohol-preferring rats with different behavioral characteristics constitute valuable animal models that mimic the heterogeneity in human alcohol dependence.

  20. Age-dependent change in exploratory behavior of male rats following exposure to threat stimulus: effect of juvenile experience.

    PubMed

    Arakawa, Hiroyuki

    2007-07-01

    The ontogeny of exploratory behavior depending on the intensity of threat in a modified open-field was investigated in male rats aged 40, 65, and 130 days, by comparing with less threatening condition with no shock and more threatening condition where they were exposed to mild electric shock. The number of crossings in a dim peripheral alley was counted as the level of activity. The total duration of stay in the central area was measured as the level of exploration. The number of entries and stretch-attend postures into a bright center square were measured as active exploratory behavior and the risk assessment behavior, respectively. When exposed to mild shock prior to the test, 40-day-old rats decreased these exploratory behaviors, while 65- and 130-day-old rats increased active exploratory behavior (Experiment 1). A lower level of exploratory behavior following a mild shock was found in 65 and 130-day-old rats isolated during the juvenile stage, but not in rats isolated after puberty (Experiment 2). These findings suggest that the direction of changes in exploratory behavior of male rats following an increase in potential danger showed ontogenetic transition, which is mediated by social experiences as juveniles, but not as adults. This transition may be associated with the emergence of active exploratory behavior during the juvenile stage, which is activated by social interaction.

  1. Long-term Behavioral Tracking of Freely Swimming Weakly Electric Fish

    PubMed Central

    Jun, James J.; Longtin, André; Maler, Leonard

    2014-01-01

    Long-term behavioral tracking can capture and quantify natural animal behaviors, including those occurring infrequently. Behaviors such as exploration and social interactions can be best studied by observing unrestrained, freely behaving animals. Weakly electric fish (WEF) display readily observable exploratory and social behaviors by emitting electric organ discharge (EOD). Here, we describe three effective techniques to synchronously measure the EOD, body position, and posture of a free-swimming WEF for an extended period of time. First, we describe the construction of an experimental tank inside of an isolation chamber designed to block external sources of sensory stimuli such as light, sound, and vibration. The aquarium was partitioned to accommodate four test specimens, and automated gates remotely control the animals' access to the central arena. Second, we describe a precise and reliable real-time EOD timing measurement method from freely swimming WEF. Signal distortions caused by the animal's body movements are corrected by spatial averaging and temporal processing stages. Third, we describe an underwater near-infrared imaging setup to observe unperturbed nocturnal animal behaviors. Infrared light pulses were used to synchronize the timing between the video and the physiological signal over a long recording duration. Our automated tracking software measures the animal's body position and posture reliably in an aquatic scene. In combination, these techniques enable long term observation of spontaneous behavior of freely swimming weakly electric fish in a reliable and precise manner. We believe our method can be similarly applied to the study of other aquatic animals by relating their physiological signals with exploratory or social behaviors. PMID:24637642
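
    The EOD timing step described above amounts to detecting discharge pulses in a recorded voltage trace and time-stamping them. The sketch below is a much-simplified stand-in for the authors' real-time pipeline (no multi-electrode spatial averaging or distortion correction); the sampling rate, threshold and synthetic signal are assumptions.

    ```python
    # Simplified EOD pulse-time extraction from a single-channel recording.
    import numpy as np
    from scipy.signal import find_peaks

    fs = 40_000                      # assumed sampling rate (Hz)
    t = np.arange(0, 2.0, 1 / fs)    # two seconds of synthetic signal

    # Synthetic trace: a 400 Hz pulse train plus noise, standing in for real data.
    signal = np.zeros_like(t)
    signal[::100] = 1.0              # one pulse every 100 samples
    signal += 0.05 * np.random.default_rng(0).normal(size=t.size)

    # Threshold-based peak detection with a 1 ms refractory period between pulses.
    peaks, _ = find_peaks(signal, height=0.5, distance=int(0.001 * fs))
    eod_times = t[peaks]
    print(f"detected {len(eod_times)} pulses, mean rate ~ {len(eod_times) / t[-1]:.0f} Hz")
    ```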

  2. Technical Communications in Aeronautics: Results of an Exploratory Study. An Analysis of Managers' and Nonmanagers' Responses. NASA Technical Memorandum 101625.

    ERIC Educational Resources Information Center

    Pinelli, Thomas E.; And Others

    Data collected from an exploratory study concerned with the technical communications practices of aerospace engineers and scientists were analyzed to test the primary assumption that aerospace managers and nonmanagers have different technical communications practices. Five secondary assumptions were established for the analysis: (1) that the…

  3. Gender-Related Quality of Parent-Child Interactions and Early Adolescent Problem Behaviors: Exploratory Study with Midwestern Samples

    ERIC Educational Resources Information Center

    Spoth, Richard; Neppl, Tricia; Goldberg-Lillehoj, Catherine; Jung, Tony; Ramisetty-Mikler, Suhasini

    2006-01-01

    This article reports two exploratory studies testing a model guided by a social interactional perspective, positing an inverse relation between the quality of parent-child interactions and adolescent problem behaviors. It addresses mixed findings in the literature related to gender differences. Study 1 uses cross-sectional survey data from…

  4. An Exploratory Factor Analysis and Reliability Analysis of the Student Online Learning Readiness (SOLR) Instrument

    ERIC Educational Resources Information Center

    Yu, Taeho; Richardson, Jennifer C.

    2015-01-01

    The purpose of this study was to develop an effective instrument to measure student readiness in online learning with reliable predictors of online learning success factors such as learning outcomes and learner satisfaction. The validity and reliability of the Student Online Learning Readiness (SOLR) instrument were tested using exploratory factor…

  5. The 1980 US/Canada wheat and barley exploratory experiment. Volume 2: Addenda

    NASA Technical Reports Server (NTRS)

    Bizzell, R. M.; Prior, H. L.; Payne, R. W.; Disler, J. M.

    1983-01-01

    Three study areas supporting the U.S./Canada Wheat and Barley Exploratory Experiment are discussed including an evaluation of the experiment shakedown test analyst labeling results, an evaluation of the crop proportion estimate procedure 1A component, and the evaluation of spring wheat and barley crop calendar models for the 1979 crop year.

  6. Testing Scientific Software: A Systematic Literature Review

    PubMed Central

    Kanewala, Upulee; Bieman, James M.

    2014-01-01

    Context Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. Objective This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. Method We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. Results We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Conclusions Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques. PMID:25125798

  7. Exploring physics concepts among novice teachers through CMAP tools

    NASA Astrophysics Data System (ADS)

    Suprapto, N.; Suliyanah; Prahani, B. K.; Jauhariyah, M. N. R.; Admoko, S.

    2018-03-01

    Concept maps are graphical tools for organising, elaborating and representing knowledge. Through Cmap tools software, the understanding and hierarchical structuring of physics concepts among novice teachers can be explored. The software helps physics teachers indicate a physics context, focus questions, parking lots, cross-links, branching, hierarchy, and propositions. In an exploratory quantitative study, a total of 13 concept maps on different physics topics created by novice physics teachers were analysed. The main differences between lecturer scoring and peer-teacher scoring were also illustrated. The study offers some implications, especially for physics educators, on determining the hierarchical structure of physics concepts, constructing a physics focus question, and seeing how a concept in one domain of knowledge represented on the map is related to a concept in another domain shown on the map.

  8. Exploratory evaluation of ceramics for automobile thermal reactors

    NASA Technical Reports Server (NTRS)

    Stone, P. L.; Blankenship, C. P.

    1972-01-01

    An exploratory evaluation of ceramics for automobile thermal reactors was conducted. Potential ceramic materials were evaluated in several reactor designs using both engine dynamometer and vehicle road tests. Silicon carbide contained in a corrugated metal support structure exhibited the best performance, lasting over 800 hours in engine dynamometer tests and over 15,000 miles (24,200 km) of vehicle road tests. Reactors containing glass-ceramic components did not perform as well as silicon carbide, but glass-ceramics still offer good potential for reactor use. The results of this study are considered to be a reasonable demonstration of the potential use of ceramics in thermal reactors.

  9. Utilization of Solar Dynamics Observatory space weather digital image data for comparative analysis with application to Baryon Oscillation Spectroscopic Survey

    NASA Astrophysics Data System (ADS)

    Shekoyan, V.; Dehipawala, S.; Liu, Ernest; Tulsee, Vivek; Armendariz, R.; Tremberger, G.; Holden, T.; Marchese, P.; Cheung, T.

    2012-10-01

    Digital solar image data is available to users with access to standard, mass-market software. Many scientific projects utilize the Flexible Image Transport System (FITS) format, which requires specialized software typically used in astrophysical research. Data in the FITS format includes photometric and spatial calibration information, which may not be useful to researchers working with self-calibrated, comparative approaches. This project examines the advantages of using mass-market software with readily downloadable image data from the Solar Dynamics Observatory for comparative analysis, as compared with the use of specialized software capable of reading data in the FITS format. Comparative analyses of brightness statistics that describe the solar disk in the study of magnetic energy using algorithms included in mass-market software have been shown to give results similar to analyses using FITS data. The entanglement of magnetic energy associated with solar eruptions, as well as the development of such eruptions, has been characterized successfully using mass-market software. The proposed algorithm would help to establish a publicly accessible computing network that could assist in exploratory studies of all FITS data. The advances in computer, cell phone and tablet technology could incorporate such an approach readily for the enhancement of high school and first-year college space weather education on a global scale. Application to ground-based data such as that contained in the Baryon Oscillation Spectroscopic Survey is discussed.
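
    For reference, the "specialized software" route the study compares against looks like the following few lines with Astropy: read the FITS image and compute whole-disk brightness statistics. The file name is a placeholder for an SDO image, and the statistics are generic examples of the self-calibrated, comparative measures discussed, not the study's exact algorithm.

    ```python
    # Whole-disk brightness statistics from a FITS image with Astropy;
    # the file name is a placeholder for a downloaded SDO image.
    import numpy as np
    from astropy.io import fits

    with fits.open("sdo_aia_193.fits") as hdul:
        data = hdul[-1].data.astype(float)        # image array from the last HDU

    finite = data[np.isfinite(data)]
    stats = {
        "mean": float(np.mean(finite)),
        "std": float(np.std(finite)),
        "skewness": float(np.mean((finite - finite.mean()) ** 3)
                          / (np.std(finite) ** 3 + 1e-12)),
    }
    print(stats)
    ```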

  10. Development of the quality assessment model of EHR software in family medicine practices: research based on user satisfaction.

    PubMed

    Kralj, Damir; Kern, Josipa; Tonkovic, Stanko; Koncar, Miroslav

    2015-09-09

    Family medicine practices (FMPs) form the basis of the Croatian health care system. Use of electronic health record (EHR) software is mandatory and it plays an important role in running these practices, but important functional features still remain uneven and largely left to the will of the software developers. The objective of this study was to develop a novel and comprehensive model for functional evaluation of the EHR software in FMPs, based on current world standards, models and projects, as well as on actual user satisfaction and requirements. Based on previous theoretical and experimental research in this area, we made the initial framework model consisting of six basic categories as the basis for an online survey questionnaire. Family doctors assessed perceived software quality by using a five-point Likert-type scale. Using exploratory factor analysis and appropriate statistical methods over the collected data, the final optimal structure of the novel model was formed. Special attention was focused on the validity and quality of the novel model. The online survey collected a total of 384 cases. The obtained results indicate both the quality of the assessed software and the quality in use of the novel model. The intense ergonomic orientation of the novel measurement model was particularly emphasised. The resulting novel model is multiply validated, comprehensive and universal. It could be used to assess the user-perceived quality of almost all forms of ambulatory EHR software and is therefore useful to all stakeholders in this area of health care informatisation.
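
    To make the factor-analytic step concrete, the sketch below runs an exploratory factor analysis over a matrix of simulated five-point Likert responses and inspects the loadings that would be used to group questionnaire items into categories. It is a generic illustration, not the study's data or exact procedure; the two-factor structure is an assumption.

    ```python
    # Generic exploratory factor analysis of simulated 5-point Likert survey data.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    n_respondents, n_items = 384, 18             # sample size mirrors the study's 384 cases

    # Simulate two latent traits driving 18 items, then discretize to 1..5.
    latent = rng.normal(size=(n_respondents, 2))
    true_loadings = rng.uniform(0.4, 0.9, size=(2, n_items))
    continuous = latent @ true_loadings + 0.5 * rng.normal(size=(n_respondents, n_items))
    likert = np.clip(np.round(3 + continuous), 1, 5)

    fa = FactorAnalysis(n_components=2, random_state=0)
    fa.fit(likert)
    loadings = fa.components_.T                   # items x factors

    for item, row in enumerate(loadings, start=1):
        print(f"item {item:2d}: loadings {np.round(row, 2)}")
    ```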

  11. A Framework of the Use of Information in Software Testing

    ERIC Educational Resources Information Center

    Kaveh, Payman

    2010-01-01

    With the increasing role that software systems play in our daily lives, software quality has become extremely important. Software quality is impacted by the efficiency of the software testing process. There are a growing number of software testing methodologies, models, and initiatives to satisfy the need to improve software quality. The main…

  12. Neuroanatomical Correlates of Intelligence in Healthy Young Adults: The Role of Basal Ganglia Volume

    PubMed Central

    Rhein, Cosima; Mühle, Christiane; Richter-Schmidinger, Tanja; Alexopoulos, Panagiotis; Doerfler, Arnd; Kornhuber, Johannes

    2014-01-01

    Background In neuropsychiatric diseases with basal ganglia involvement, higher cognitive functions are often impaired. In this exploratory study, we examined healthy young adults to gain detailed insight into the relationship between basal ganglia volume and cognitive abilities under non-pathological conditions. Methodology/Principal Findings We investigated 137 healthy adults that were between the ages of 21 and 35 years with similar educational backgrounds. Magnetic resonance imaging (MRI) was performed, and volumes of basal ganglia nuclei in both hemispheres were calculated using FreeSurfer software. The cognitive assessment consisted of verbal, numeric and figural aspects of intelligence for either the fluid or the crystallised intelligence factor using the intelligence test Intelligenz-Struktur-Test (I-S-T 2000 R). Our data revealed significant correlations of the caudate nucleus and pallidum volumes with figural and numeric aspects of intelligence, but not with verbal intelligence. Interestingly, figural intelligence associations were dependent on sex and intelligence factor; in females, the pallidum volumes were correlated with crystallised figural intelligence (r = 0.372, p = 0.01), whereas in males, the caudate volumes were correlated with fluid figural intelligence (r = 0.507, p = 0.01). Numeric intelligence was correlated with right-lateralised caudate nucleus volumes for both females and males, but only for crystallised intelligence (r = 0.306, p = 0.04 and r = 0.459, p = 0.04, respectively). The associations were not mediated by prefrontal cortical subfield volumes when controlling with partial correlation analyses. Conclusions/Significance The findings of our exploratory analysis indicate that figural and numeric intelligence aspects, but not verbal aspects, are strongly associated with basal ganglia volumes. Unlike numeric intelligence, the type of figural intelligence appears to be related to distinct basal ganglia nuclei in a sex-specific manner. Subcortical brain structures thus may contribute substantially to cognitive performance. PMID:24699871
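
    The mediation check reported above (controlling for prefrontal subfield volumes) is a partial correlation; a minimal version is shown below with the pingouin package. The variable names and data are simulated placeholders, not the study's dataset.

    ```python
    # Partial correlation of an intelligence score and caudate volume, controlling
    # for a prefrontal-volume covariate (pingouin); all data here are simulated.
    import numpy as np
    import pandas as pd
    import pingouin as pg

    rng = np.random.default_rng(0)
    n = 137                                            # sample size as in the study
    pfc = rng.normal(10, 1, n)                         # hypothetical covariate
    caudate = 3 + 0.2 * pfc + rng.normal(0, 0.5, n)
    iq = 100 + 4 * caudate + 2 * pfc + rng.normal(0, 5, n)

    df = pd.DataFrame({"caudate": caudate, "iq": iq, "pfc": pfc})
    print(pg.partial_corr(data=df, x="caudate", y="iq", covar="pfc")[["n", "r", "p-val"]])
    ```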

  13. Forebrain-Specific Loss of BMPRII in Mice Reduces Anxiety and Increases Object Exploration.

    PubMed

    McBrayer, Zofeyah L; Dimova, Jiva; Pisansky, Marc T; Sun, Mu; Beppu, Hideyuki; Gewirtz, Jonathan C; O'Connor, Michael B

    2015-01-01

    To investigate the role of Bone Morphogenic Protein Receptor Type II (BMPRII) in learning, memory, and exploratory behavior in mice, a tissue-specific knockout of BMPRII in the post-natal hippocampus and forebrain was generated. We found that BMPRII mutant mice had normal spatial learning and memory in the Morris water maze, but showed significantly reduced swimming speeds with increased floating behavior. Further analysis using the Porsolt Swim Test to investigate behavioral despair did not reveal any differences in immobility between mutants and controls. In the Elevated Plus Maze, BMPRII mutants and Smad4 mutants showed reduced anxiety, while in exploratory tests, BMPRII mutants showed more interest in object exploration. These results suggest that loss of BMPRII in the mouse hippocampus and forebrain does not disrupt spatial learning and memory encoding, but instead impacts exploratory and anxiety-related behaviors.

  14. Forebrain-Specific Loss of BMPRII in Mice Reduces Anxiety and Increases Object Exploration

    PubMed Central

    McBrayer, Zofeyah L.; Dimova, Jiva; Pisansky, Marc T.; Sun, Mu; Beppu, Hideyuki; Gewirtz, Jonathan C.; O’Connor, Michael B.

    2015-01-01

    To investigate the role of Bone Morphogenic Protein Receptor Type II (BMPRII) in learning, memory, and exploratory behavior in mice, a tissue-specific knockout of BMPRII in the post-natal hippocampus and forebrain was generated. We found that BMPRII mutant mice had normal spatial learning and memory in the Morris water maze, but showed significantly reduced swimming speeds with increased floating behavior. Further analysis using the Porsolt Swim Test to investigate behavioral despair did not reveal any differences in immobility between mutants and controls. In the Elevated Plus Maze, BMPRII mutants and Smad4 mutants showed reduced anxiety, while in exploratory tests, BMPRII mutants showed more interest in object exploration. These results suggest that loss of BMPRII in the mouse hippocampus and forebrain does not disrupt spatial learning and memory encoding, but instead impacts exploratory and anxiety-related behaviors. PMID:26444546

  15. Exploratory behaviour in the open field test adapted for larval zebrafish: impact of environmental complexity.

    PubMed

    Ahmad, Farooq; Richardson, Michael K

    2013-01-01

    This study aimed to develop and characterize a novel (standard) open field test adapted for larval zebrafish. We also developed and characterized a variant of the same assay consisting of a colour-enriched open field; this was used to assess the impact of environmental complexity on patterns of exploratory behaviours as well as to determine natural colour preference/avoidance. We report the following main findings: (1) zebrafish larvae display characteristic patterns of exploratory behaviours in the standard open field, such as thigmotaxis/centre avoidance; (2) environmental complexity (i.e. presence of colours) differentially affects patterns of exploratory behaviours and greatly attenuates natural zone preference; (3) larvae displayed the ability to discriminate colours. As reported previously in adult zebrafish, larvae showed avoidance towards blue and black; however, in contrast to the reported adult behaviour, larvae displayed avoidance towards red. Avoidance towards yellow and preference for green and orange are shown for the first time; (4) compared to standard open field tests, exposure to the colour-enriched open field resulted in an enhanced expression of anxiety-like behaviours. To conclude, we not only developed and adapted a traditional rodent behavioural assay that serves as a gold standard in preclinical drug screening, but we also provide a version of the same test that affords the possibility to investigate the impact of environmental stress on behaviour in larval zebrafish while representing the first test for assessment of natural colour preference/avoidance in larval zebrafish. In the future, these assays will improve preclinical drug screening methodologies towards the goal of uncovering novel drugs. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Hypothesis testing in hydrology: Theory and practice

    NASA Astrophysics Data System (ADS)

    Kirchner, James; Pfister, Laurent

    2017-04-01

    Well-posed hypothesis tests have spurred major advances in hydrological theory. However, a random sample of recent research papers suggests that in hydrology, as in other fields, hypothesis formulation and testing rarely correspond to the idealized model of the scientific method. Practices such as "p-hacking" or "HARKing" (Hypothesizing After the Results are Known) are major obstacles to more rigorous hypothesis testing in hydrology, along with the well-known problem of confirmation bias - the tendency to value and trust confirmations more than refutations - among both researchers and reviewers. Hypothesis testing is not the only recipe for scientific progress, however: exploratory research, driven by innovations in measurement and observation, has also underlain many key advances. Further improvements in observation and measurement will be vital to both exploratory research and hypothesis testing, and thus to advancing the science of hydrology.

  17. Working at the Nexus of Generic and Content-Specific Teaching Practices: An Exploratory Study Based on TIMSS Secondary Analyses

    ERIC Educational Resources Information Center

    Charalambous, Charalambos Y.; Kyriakides, Ermis

    2017-01-01

    For years scholars have attended to either generic or content-specific teaching practices attempting to understand instructional quality and its effects on student learning. Drawing on the TIMSS 2007 and 2011 databases, this exploratory study empirically tests the hypothesis that attending to both types of practices can help better explain student…

  18. Transactional Distance and Dialogue: An Exploratory Study to Refine the Theoretical Construct of Dialogue in Online Learning

    ERIC Educational Resources Information Center

    Shearer, Rick L.

    2009-01-01

    Theory building is complex and ongoing. Theories need to be constantly tested and the underlying constructs explored, as knowledge of a field evolves. This study, which is in support of Moore's (1980, 1993) theory of transactional distance, is exploratory and descriptive, and focuses on one of the key variables in the theory dialogue. As…

  19. Technical Communications in Aeronautics: Results of an Exploratory Study. An Analysis of Profit Managers' and Nonprofit Managers' Responses. NASA Technical Memorandum 101626.

    ERIC Educational Resources Information Center

    Pinelli, Thomas E.; And Others

    Data collected from an exploratory study concerned with the technical communications practices of aerospace engineers and scientists were analyzed to test the primary assumption that profit and nonprofit managers in the aerospace community have different technical communications practices. Profit and nonprofit managers were compared in five…

  20. Rotation Criteria and Hypothesis Testing for Exploratory Factor Analysis: Implications for Factor Pattern Loadings and Interfactor Correlations

    ERIC Educational Resources Information Center

    Schmitt, Thomas A.; Sass, Daniel A.

    2011-01-01

    Exploratory factor analysis (EFA) has long been used in the social sciences to depict the relationships between variables/items and latent traits. Researchers face many choices when using EFA, including the choice of rotation criterion, which can be difficult given that few research articles have discussed and/or demonstrated their differences.…
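
    The practical consequence of the rotation choice this abstract discusses (orthogonal rotations force interfactor correlations to zero, oblique rotations estimate them) can be seen directly with the factor_analyzer package. The data below are simulated and purely illustrative.

    ```python
    # Orthogonal (varimax) vs. oblique (oblimin) rotation on simulated data,
    # using the factor_analyzer package; illustrative only.
    import numpy as np
    from factor_analyzer import FactorAnalyzer

    rng = np.random.default_rng(0)
    latent = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=500)
    true_loadings = rng.uniform(0.4, 0.9, size=(2, 12))
    items = latent @ true_loadings + 0.5 * rng.normal(size=(500, 12))

    for rotation in ("varimax", "oblimin"):
        fa = FactorAnalyzer(n_factors=2, rotation=rotation)
        fa.fit(items)
        print(rotation, "loadings (first 3 items):")
        print(np.round(fa.loadings_[:3], 2))
        if rotation == "oblimin":
            # phi_ holds the estimated interfactor correlation matrix for oblique rotations.
            print("interfactor correlations:")
            print(np.round(fa.phi_, 2))
    ```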

  1. Exploratory Development Research Effectiveness: A Second Evaluation,

    DTIC Science & Technology

    1978-09-01

    ... contention that Work Unit Cost influenced the degree of transition. The last postulate that was tested concerned work unit classification by dollar amount. It

  2. Scale Development and Initial Tests of the Multidimensional Complex Adaptive Leadership Scale for School Principals: An Exploratory Mixed Method Study

    ERIC Educational Resources Information Center

    Özen, Hamit; Turan, Selahattin

    2017-01-01

    This study was designed to develop the scale of the Complex Adaptive Leadership for School Principals (CAL-SP) and examine its psychometric properties. This was an exploratory mixed method research design (ES-MMD). Both qualitative and quantitative methods were used to develop and assess psychometric properties of the questionnaire. This study…

  3. Effects of beta-adrenergic antagonist, propranolol on spatial memory and exploratory behavior in mice.

    PubMed

    Sun, Huaying; Mao, Yu; Wang, Jianhong; Ma, Yuanye

    2011-07-08

    The beta-adrenergic system has been suggested to be involved in novelty detection and memory modulation. The present study aimed to investigate the role of beta-adrenergic receptors on novelty-based spatial recognition memory and exploratory behavior in mice using the Y-maze test and the open field, respectively. Mice were injected with three doses of the beta-adrenergic receptor antagonist propranolol (2, 10 and 20 mg/kg) or saline at three different time points (15 min prior to training, immediately after training and 15 min before test). The results showed that the higher doses of propranolol (10 and 20 mg/kg) given before the training trial impaired spatial recognition memory, while those injected at the other two time points did not. A detailed analysis of exploratory behavior in the open field showed that the lower dose (2 mg/kg) of propranolol reduced exploratory behavior of mice. Our findings indicate that a higher dose of propranolol can impair acquisition of spatial information in the Y-maze without altering locomotion, suggesting that the beta-adrenergic system may be involved in modulating memory processes at the time of learning. Copyright © 2011. Published by Elsevier Ireland Ltd.

  4. Exploratory visualization software for reporting environmental survey results.

    PubMed

    Fisher, P; Arnot, C; Bastin, L; Dykes, J

    2001-08-01

    Environmental surveys yield three principal products: maps, a set of data tables, and a textual report. The relationships between these three elements, however, are often cumbersome to present, making full use of all the information in an integrated and systematic sense difficult. The published paper report is only a partial solution. Modern developments in computing, particularly in cartography, GIS, and hypertext, mean that it is increasingly possible to conceive of an easier and more interactive approach to the presentation of such survey results. Here, we present such an approach which links map and tabular datasets arising from a vegetation survey, allowing users ready access to a complex dataset using dynamic mapping techniques. Multimedia datasets equipped with software like this provide an exciting means of quick and easy visual data exploration and comparison. These techniques are gaining popularity across the sciences as scientists and decision-makers are presented with increasing amounts of diverse digital data. We believe that the software environment actively encourages users to make complex interrogations of the survey information, providing a new vehicle for the reader of an environmental survey report.
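
    A minimal modern equivalent of the map-table linkage described above can be put together with GeoPandas: join the survey's attribute table to the survey polygons and render one attribute as a map. The file and column names below are placeholders, and this is a generic sketch rather than the authors' software.

    ```python
    # Generic sketch: link a survey attribute table to survey polygons and map one attribute.
    # File and column names are placeholders.
    import geopandas as gpd
    import pandas as pd
    import matplotlib.pyplot as plt

    plots = gpd.read_file("vegetation_plots.shp")             # survey polygons
    attributes = pd.read_csv("species_richness.csv")          # tabular survey results

    # Join on a shared plot identifier, then map an attribute of interest.
    joined = plots.merge(attributes, on="plot_id")
    ax = joined.plot(column="species_richness", legend=True, cmap="viridis")
    ax.set_title("Species richness by survey plot")
    plt.show()
    ```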

  5. Investigating skin-to-skin care patterns with extremely preterm infants in the NICU and their effect on early cognitive and communication performance: a retrospective cohort study.

    PubMed

    Gonya, Jenn; Ray, William C; Rumpf, R Wolfgang; Brock, Guy

    2017-03-20

    The primary objective of the study was to investigate how patterns of skin-to-skin care might impact infant early cognitive and communication performance. This was a retrospective cohort study. This study took place in a level-IV all-referral neonatal intensive care unit in the Midwest USA specialising in the care of extremely preterm infants. Data were collected from the electronic medical records of all extremely preterm infants (gestational age <27 weeks) admitted to the unit during 2010-2011 and who completed 6-month and 12-month developmental assessments in the follow-up clinic (n=97). Outcome measures included the cognitive and communication subscales of the Bayley Scales of Infant Development, Third Edition (Bayley-III); and skin-to-skin patterns including: total hours of maternal and paternal participation throughout hospitalisation, total duration in weeks and frequency (hours per week). Extracted data were analysed through a multistep process of logistic regressions, t-tests, χ² tests and Fisher's exact tests, followed by exploratory network analysis using novel visual analytic software. Infants who received skin-to-skin care above the sample median in total hours, weekly frequency, and total hours from mothers and fathers were more likely to score ≥80 on the cognitive and communication scales of the Bayley-III. However, the results were not statistically significant (p>0.05). Mothers provided the majority of skin-to-skin care with a sharp decline at 30 weeks corrected age, regardless of when extremely preterm infants were admitted. Additional exploratory network analysis suggests that medical and skin-to-skin factors play a parallel, non-synergistic role in contributing to early cognitive and communication performance as assessed through the Bayley-III. This study suggests an association between early and frequent skin-to-skin care with extremely preterm infants and early cognitive and communication performance. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  6. ROC curves in clinical chemistry: uses, misuses, and possible solutions.

    PubMed

    Obuchowski, Nancy A; Lieber, Michael L; Wians, Frank H

    2004-07-01

    ROC curves have become the standard for describing and comparing the accuracy of diagnostic tests. Not surprisingly, ROC curves are used often by clinical chemists. Our aims were to observe how the accuracy of clinical laboratory diagnostic tests is assessed, compared, and reported in the literature; to identify common problems with the use of ROC curves; and to offer some possible solutions. We reviewed every original work using ROC curves and published in Clinical Chemistry in 2001 or 2002. For each article we recorded phase of the research, prospective or retrospective design, sample size, presence/absence of confidence intervals (CIs), nature of the statistical analysis, and major analysis problems. Of 58 articles, 31% were phase I (exploratory), 50% were phase II (challenge), and 19% were phase III (advanced) studies. The studies increased in sample size from phase I to III and showed a progression in the use of prospective designs. Most phase I studies were powered to assess diagnostic tests with ROC areas ≥0.70. Thirty-eight percent of studies failed to include CIs for diagnostic test accuracy or the CIs were constructed inappropriately. Thirty-three percent of studies provided insufficient analysis for comparing diagnostic tests. Other problems included dichotomization of the gold standard scale and inappropriate analysis of the equivalence of two diagnostic tests. We identify available software and make some suggestions for sample size determination, testing for equivalence in diagnostic accuracy, and alternatives to a dichotomous classification of a continuous-scale gold standard. More methodologic research is needed in areas specific to clinical chemistry.
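
    The review above centers on reporting ROC areas with proper confidence intervals. As a minimal, hedged sketch of that computation (not code from the paper), the snippet below estimates the ROC area with the Mann-Whitney statistic and a percentile-bootstrap CI; the analyte values and group sizes are hypothetical.

```python
# Minimal sketch: ROC area (AUC) with a percentile-bootstrap confidence interval.
# The data below are hypothetical; this is not the reviewed papers' software.
import random

def auc(neg, pos):
    """Mann-Whitney estimate of the ROC area: P(diseased value > non-diseased value),
    with ties counted as 0.5."""
    pairs = [(x, y) for x in neg for y in pos]
    return sum(1.0 if y > x else 0.5 if y == x else 0.0 for x, y in pairs) / len(pairs)

def bootstrap_ci(neg, pos, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI obtained by resampling each group with replacement."""
    random.seed(seed)
    stats = sorted(
        auc([random.choice(neg) for _ in neg], [random.choice(pos) for _ in pos])
        for _ in range(n_boot)
    )
    return stats[int((alpha / 2) * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

non_diseased = [3.1, 4.0, 4.2, 5.0, 5.5, 6.1]   # hypothetical analyte values
diseased     = [4.8, 5.9, 6.3, 7.0, 7.4, 8.2]
print("AUC =", round(auc(non_diseased, diseased), 3),
      " 95% CI ≈", bootstrap_ci(non_diseased, diseased))
```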

  7. An engineering approach to automatic programming

    NASA Technical Reports Server (NTRS)

    Rubin, Stuart H.

    1990-01-01

    An exploratory study of the automatic generation and optimization of symbolic programs using DECOM - a prototypical requirement specification model implemented in pure LISP was undertaken. It was concluded, on the basis of this study, that symbolic processing languages such as LISP can support a style of programming based upon formal transformation and dependent upon the expression of constraints in an object-oriented environment. Such languages can represent all aspects of the software generation process (including heuristic algorithms for effecting parallel search) as dynamic processes since data and program are represented in a uniform format.

  8. Exploratory Study of 4D Versus 3D Robust Optimization in Intensity-Modulated Proton Therapy for Lung Cancer

    PubMed Central

    Liu, Wei; Schild, Steven E.; Chang, Joe Y.; Liao, Zhongxing; Chang, Yu-Hui; Wen, Zhifei; Shen, Jiajian; Stoker, Joshua B.; Ding, Xiaoning; Hu, Yanle; Sahoo, Narayan; Herman, Michael G.; Vargas, Carlos; Keole, Sameer; Wong, William; Bues, Martin

    2015-01-01

    Background To compare the impact of uncertainties and interplay effect on 3D and 4D robustly optimized intensity-modulated proton therapy (IMPT) plans for lung cancer in an exploratory methodology study. Methods IMPT plans were created for 11 non-randomly selected non-small-cell lung cancer (NSCLC) cases: 3D robustly optimized plans on average CTs with internal gross tumor volume density overridden to irradiate internal target volume, and 4D robustly optimized plans on 4D CTs to irradiate clinical target volume (CTV). Regular fractionation (66 Gy[RBE] in 33 fractions) was considered. In 4D optimization, the CTV of individual phases received non-uniform doses to achieve a uniform cumulative dose. The root-mean-square-dose volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under the RVH curve (AUCs) were used to evaluate plan robustness. Dose evaluation software modeled time-dependent spot delivery to incorporate interplay effect with randomized starting phases of each field per fraction. Dose-volume histogram indices comparing CTV coverage, homogeneity, and normal tissue sparing were evaluated using the Wilcoxon signed-rank test. Results 4D robust optimization plans led to smaller AUC for CTV (14.26 vs. 18.61; p=0.001), better CTV coverage (Gy[RBE]) [D95% CTV: 60.6 vs 55.2 (p=0.001)], and better CTV homogeneity [D5%–D95% CTV: 10.3 vs 17.7 (p=0.002)] in the face of uncertainties. With interplay effect considered, 4D robust optimization produced plans with better target coverage [D95% CTV: 64.5 vs 63.8 (p=0.0068)], comparable target homogeneity, and comparable normal tissue protection. The benefits from 4D robust optimization were most obvious for the 2 typical stage III lung cancer patients. Conclusions Our exploratory methodology study showed that, compared to 3D robust optimization, 4D robust optimization produced significantly more robust and interplay-effect-resistant plans for targets with comparable dose distributions for normal tissues. A further study with a larger and more realistic patient population is warranted to generalize the conclusions. PMID:26725727

  9. Executable assertions and flight software

    NASA Technical Reports Server (NTRS)

    Mahmood, A.; Andrews, D. M.; Mccluskey, E. J.

    1984-01-01

    Executable assertions are used to test flight control software. The techniques used for testing flight software, however, are different from the techniques used to test other kinds of software. This is because of the redundant nature of flight software. An experimental setup for testing flight software using executable assertions is described. Techniques for writing and using executable assertions to test flight software are presented. The error detection capability of assertions is studied and many examples of assertions are given. The issues of placement and complexity of assertions and the language features to support efficient use of assertions are discussed.
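
    The abstract describes executable assertions as run-time checks written into the software itself. The sketch below is purely illustrative (flight control code would typically not be Python, and the variable names, ranges, and gain are invented): it shows assertions that record out-of-range intermediate values rather than letting them be silently masked.

```python
# Illustrative sketch only: an "executable assertion" is a predicate compiled into
# the software that checks intermediate results at run time. Names are hypothetical.
def assert_in_range(name, value, lo, hi, log):
    """Executable assertion: record, rather than silently mask, out-of-range values."""
    if not (lo <= value <= hi):
        log.append(f"ASSERTION FAILED: {name}={value} outside [{lo}, {hi}]")

def pitch_command(sensor_angle_deg, gain, log):
    # Check the input before it is used ...
    assert_in_range("sensor_angle_deg", sensor_angle_deg, -30.0, 30.0, log)
    cmd = gain * sensor_angle_deg
    # ... and the output before it would reach the actuator.
    assert_in_range("pitch_cmd", cmd, -10.0, 10.0, log)
    return cmd

errors = []
pitch_command(45.0, 0.5, errors)   # the out-of-range input is detected, not masked
print(errors)
```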

  10. 77 FR 50722 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems of Nuclear Power Plants.'' The DG-1208 is proposed...

  11. 78 FR 47011 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-02

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software... revised regulatory guide (RG), revision 1 of RG 1.171, ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This RG endorses American National Standards...

  12. The point of entry contributes to the organization of exploratory behavior of rats on an open field: an example of spontaneous episodic memory.

    PubMed

    Nemati, Farshad; Whishaw, Ian Q

    2007-08-22

    The exploratory behavior of rats on an open field is organized in that animals spend disproportionate amounts of time at certain locations, termed home bases, which serve as centers for excursions. Although home bases are preferentially formed near distinctive cues, including visual cues, animals also visit and pause and move slowly, or linger, at many other locations in a test environment. In order to further examine the organization of exploratory behavior, the present study examined the influence of the point of entry on animals placed on an open field table that was illuminated either by room light or infrared light (a wavelength in which they cannot see) and near which, or on which, distinctive cues were placed. The main findings were that in both room light and infrared light tests, rats visited and lingered at the point of entry significantly more often than comparative control locations. Although the rats also visited and lingered in the vicinity of salient visual cues, the point of entry still remained a focus of visits. Finally, the preference for the point of entry increased as a function of salience of the cues marking that location. That the point of entry influences the organization of exploratory behavior is discussed in relation to the idea that the exploratory behavior of the rat is directed toward optimizing security as well as forming a spatial representation of the environment.

  13. Dtest Testing Software

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Cameron, Jonathan M.; Myint, Steven

    2013-01-01

    This software runs a suite of arbitrary software tests spanning various software languages and types of tests (unit level, system level, or file comparison tests). The dtest utility can be set to automate periodic testing of large suites of software, as well as running individual tests. It supports distributing multiple tests over multiple CPU cores, if available. The dtest tool is a utility program (written in Python) that scans through a directory (and its subdirectories) and finds all directories that match a certain pattern and then executes any tests in that directory as described in simple configuration files.
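
    As a rough sketch of the behavior described above (this is not the actual dtest source), the snippet below walks a directory tree, finds directories matching a pattern that contain a simple configuration file, and runs the command named there; the configuration file name, section, and key are assumptions made for illustration.

```python
# Hedged sketch in the spirit of the description above, not the dtest utility itself.
# The config file name ("DTESTDEFS"), its section, and its key are hypothetical.
import configparser, os, subprocess

def run_tests(root, dir_pattern="test_", config_name="DTESTDEFS"):
    """Scan root recursively, run the command named in each matching directory's
    config file, and return a mapping of directory -> pass/fail."""
    results = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        if os.path.basename(dirpath).startswith(dir_pattern) and config_name in filenames:
            cfg = configparser.ConfigParser()
            cfg.read(os.path.join(dirpath, config_name))
            cmd = cfg.get("test", "cmd", fallback=None)   # e.g. "python run_case.py"
            if cmd:
                proc = subprocess.run(cmd, shell=True, cwd=dirpath)
                results[dirpath] = (proc.returncode == 0)
    return results

if __name__ == "__main__":
    for path, passed in run_tests(".").items():
        print("PASS" if passed else "FAIL", path)
```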

  14. From proteomics to systems biology: MAPA, MASS WESTERN, PROMEX, and COVAIN as a user-oriented platform.

    PubMed

    Weckwerth, Wolfram; Wienkoop, Stefanie; Hoehenwarter, Wolfgang; Egelhofer, Volker; Sun, Xiaoliang

    2014-01-01

    Genome sequencing and systems biology are revolutionizing life sciences. Proteomics emerged as a fundamental technique of this novel research area as it is the basis for gene function analysis and modeling of dynamic protein networks. Here a complete proteomics platform suited for functional genomics and systems biology is presented. The strategy includes MAPA (mass accuracy precursor alignment; http://www.univie.ac.at/mosys/software.html ) as a rapid exploratory analysis step; MASS WESTERN for targeted proteomics; COVAIN ( http://www.univie.ac.at/mosys/software.html ) for multivariate statistical analysis, data integration, and data mining; and PROMEX ( http://www.univie.ac.at/mosys/databases.html ) as a database module for proteogenomics and proteotypic peptides for targeted analysis. Moreover, the presented platform can also be utilized to integrate metabolomics and transcriptomics data for the analysis of metabolite-protein-transcript correlations and time course analysis using COVAIN. Examples for the integration of MAPA and MASS WESTERN data, proteogenomic and metabolic modeling approaches for functional genomics, phosphoproteomics by integration of MOAC (metal-oxide affinity chromatography) with MAPA, and the integration of metabolomics, transcriptomics, proteomics, and physiological data using this platform are presented. All software and step-by-step tutorials for data processing and data mining can be downloaded from http://www.univie.ac.at/mosys/software.html.

  15. Integrating voice evaluation: correlation between acoustic and audio-perceptual measures.

    PubMed

    Vaz Freitas, Susana; Melo Pestana, Pedro; Almeida, Vítor; Ferreira, Aníbal

    2015-05-01

    This article aims to establish correlations between acoustic and audio-perceptual measures using the GRBAS scale with respect to four different voice analysis software programs. Exploratory, cross-sectional study. A total of 90 voice records were collected and analyzed with the Dr. Speech (Tiger Electronics, Seattle, WA), Multidimensional Voice Program (Kay Elemetrics, NJ, USA), PRAAT (University of Amsterdam, The Netherlands), and Voice Studio (Seegnal, Oporto, Portugal) software programs. The acoustic measures were correlated to the audio-perceptual parameters of the GRBAS and rated by 10 experts. The predictive value of the acoustic measurements related to the audio-perceptual parameters exhibited magnitudes ranging from weak (adjusted R² = 0.17) to moderate (adjusted R² = 0.71). The parameter exhibiting the highest correlation magnitude is B (Breathiness), whereas the weaker correlation magnitudes were found to be for A (Asthenia) and S (Strain). The acoustic measures with stronger predictive values were local Shimmer, harmonics-to-noise ratio, APQ5 shimmer, and PPQ5 jitter, with different magnitudes for each one of the studied software programs. Some acoustic measures emerge as significant predictors of GRBAS parameters, but they differ among software programs. B (Breathiness) was the parameter exhibiting the highest correlation magnitude. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  16. 15 CFR 995.27 - Format validation software testing.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 15 Commerce and Foreign Trade 3 2013-01-01 2013-01-01 false Format validation software testing... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying, as far as reasonable and practicable, that CEVAD's data testing software performs the checks, as...

  17. 15 CFR 995.27 - Format validation software testing.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 15 Commerce and Foreign Trade 3 2014-01-01 2014-01-01 false Format validation software testing... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying, as far as reasonable and practicable, that CEVAD's data testing software performs the checks, as...

  18. 15 CFR 995.27 - Format validation software testing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 15 Commerce and Foreign Trade 3 2012-01-01 2012-01-01 false Format validation software testing... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying, as far as reasonable and practicable, that CEVAD's data testing software performs the checks, as...

  19. 15 CFR 995.27 - Format validation software testing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 Commerce and Foreign Trade 3 2011-01-01 2011-01-01 false Format validation software testing... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying, as far as reasonable and practicable, that CEVAD's data testing software performs the checks, as...

  20. P-MartCancer-Interactive Online Software to Enable Analysis of Shotgun Cancer Proteomic Datasets.

    PubMed

    Webb-Robertson, Bobbie-Jo M; Bramer, Lisa M; Jensen, Jeffrey L; Kobold, Markus A; Stratton, Kelly G; White, Amanda M; Rodland, Karin D

    2017-11-01

    P-MartCancer is an interactive web-based software environment that enables statistical analyses of peptide or protein data, quantitated from mass spectrometry-based global proteomics experiments, without requiring in-depth knowledge of statistical programming. P-MartCancer offers a series of statistical modules associated with quality assessment, peptide and protein statistics, protein quantification, and exploratory data analyses driven by the user via customized workflows and interactive visualization. Currently, P-MartCancer offers access and the capability to analyze multiple cancer proteomic datasets generated through the Clinical Proteomics Tumor Analysis Consortium at the peptide, gene, and protein levels. P-MartCancer is deployed as a web service (https://pmart.labworks.org/cptac.html), alternatively available via Docker Hub (https://hub.docker.com/r/pnnl/pmart-web/). Cancer Res; 77(21); e47-50. ©2017 American Association for Cancer Research (AACR).

  1. Adolescent chronic variable social stress influences exploratory behavior and nicotine responses in male, but not female, BALB/cJ mice.

    PubMed

    Caruso, M J; Reiss, D E; Caulfield, J I; Thomas, J L; Baker, A N; Cavigelli, S A; Kamens, H M

    2018-04-01

    Anxiety disorders and nicotine use are significant contributors to global morbidity and mortality as independent and comorbid diseases. Early-life stress, potentially via stress-induced hypothalamic-pituitary-adrenal axis (HPA) dysregulation, can exacerbate both. However, little is known about the factors that predispose individuals to the development of both anxiety disorders and nicotine use. Here, we examined the relationship between anxiety-like behaviors and nicotine responses following adolescent stress. Adolescent male and female BALB/cJ mice were exposed to either chronic variable social stress (CVSS) or control conditions. CVSS consisted of repeated cycles of social isolation and social reorganization. In adulthood, anxiety-like behavior and social avoidance were measured using the elevated plus-maze (EPM) and social approach-avoidance test, respectively. Nicotine responses were assessed with acute effects on body temperature, corticosterone production, locomotor activity, and voluntary oral nicotine consumption. Adolescent stress had sex-dependent effects on nicotine responses and exploratory behavior, but did not affect anxiety-like behavior or social avoidance in males or females. Adult CVSS males exhibited less exploratory behavior, as indicated by reduced exploratory locomotion in the EPM and social approach-avoidance test, compared to controls. Adolescent stress did not affect nicotine-induced hypothermia in either sex, but CVSS males exhibited augmented nicotine-induced locomotion during late adolescence and voluntarily consumed less nicotine during adulthood. Stress effects on male nicotine-induced locomotion were associated with individual differences in exploratory locomotion in the EPM and social approach-avoidance test. Relative to controls, adult CVSS males and females also exhibited reduced corticosterone levels at baseline and adult male CVSS mice exhibited increased corticosterone levels following an acute nicotine injection. Results suggest that the altered nicotine responses observed in CVSS males may be associated with HPA dysregulation. Taken together, adolescent social stress influences later-life nicotine responses and exploratory behavior. However, there is little evidence of an association between nicotine responses and prototypical anxiety-like behavior or social avoidance in BALB/cJ mice. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Evaluation of Visual Analytics Environments: The Road to the Visual Analytics Science and Technology Challenge Evaluation Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean; Plaisant, Catherine; Whiting, Mark A.

    The evaluation of visual analytics environments was a topic in Illuminating the Path [Thomas 2005] as a critical aspect of moving research into practice. For a thorough understanding of the utility of the systems available, evaluation not only involves assessing the visualizations, interactions or data processing algorithms themselves, but also the complex processes that a tool is meant to support (such as exploratory data analysis and reasoning, communication through visualization, or collaborative data analysis [Lam 2012; Carpendale 2007]). Researchers and practitioners in the field have long identified many of the challenges faced when planning, conducting, and executing an evaluation of a visualization tool or system [Plaisant 2004]. Evaluation is needed to verify that algorithms and software systems work correctly and that they represent improvements over the current infrastructure. Additionally, to effectively transfer new software into a working environment, it is necessary to ensure that the software has utility for the end-users and that the software can be incorporated into the end-user's infrastructure and work practices. Evaluation test beds require datasets, tasks, metrics and evaluation methodologies. As noted in [Thomas 2005], it is difficult and expensive for any one researcher to set up an evaluation test bed, so in many cases evaluation is set up for communities of researchers or for various research projects or programs. Examples of successful community evaluations can be found in [Chinchor 1993; Voorhees 2007; FRGC 2012]. As visual analytics environments are intended to facilitate the work of human analysts, one aspect of evaluation needs to focus on the utility of the software to the end-user. This requires representative users, representative tasks, and metrics that measure the utility to the end-user. This is even more difficult as now one aspect of the test methodology is access to representative end-users to participate in the evaluation. In many cases the sensitive nature of data and tasks and difficult access to busy analysts puts even more of a burden on researchers to complete this type of evaluation. User-centered design goes beyond evaluation and starts with the user [Beyer 1997, Shneiderman 2009]. Having some knowledge of the type of data, tasks, and work practices helps researchers and developers know the correct paths to pursue in their work. When access to the end-users is problematic at best and impossible at worst, user-centered design becomes difficult. Researchers are unlikely to go to work on the type of problems faced by inaccessible users. Commercial vendors have difficulties evaluating and improving their products when they cannot observe real users working with their products. In well-established fields such as web site design or office software design, user-interface guidelines have been developed based on the results of empirical studies or the experience of experts. Guidelines can speed up the design process and replace some of the need for observation of actual users [heuristics review references]. In 2006 when the visual analytics community was initially getting organized, no such guidelines existed. 
Therefore, we were faced with the problem of developing an evaluation framework for the field of visual analytics that would provide representative situations and datasets, representative tasks and utility metrics, and finally a test methodology which would include a surrogate for representative users, increase interest in conducting research in the field, and provide sufficient feedback to the researchers so that they could improve their systems.

  3. Simulation Testing of Embedded Flight Software

    NASA Technical Reports Server (NTRS)

    Shahabuddin, Mohammad; Reinholtz, William

    2004-01-01

    Virtual Real Time (VRT) is a computer program for testing embedded flight software by computational simulation in a workstation, in contradistinction to testing it in its target central processing unit (CPU). The disadvantages of testing in the target CPU include the need for an expensive test bed, the necessity for testers and programmers to take turns using the test bed, and the lack of software tools for debugging in a real-time environment. By virtue of its architecture, most of the flight software of the type in question is amenable to development and testing on workstations, for which there is an abundance of commercially available debugging and analysis software tools. Unfortunately, the timing of a workstation differs from that of a target CPU in a test bed. VRT, in conjunction with closed-loop simulation software, provides a capability for executing embedded flight software on a workstation in a close-to-real-time environment. A scale factor is used to convert between execution time in VRT on a workstation and execution on a target CPU. VRT includes high-resolution operating-system timers that enable the synchronization of flight software with simulation software and ground software, all running on different workstations.
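
    The scale-factor idea mentioned above can be illustrated in a few lines. This is a hedged sketch rather than the VRT implementation, and the factor of 8 is an arbitrary assumption.

```python
# Hedged sketch of the scale-factor idea: derive an estimated target-CPU execution
# time from a measured workstation execution time. The factor is hypothetical.
import time

SCALE_FACTOR = 8.0   # assumed ratio: target-CPU execution time / workstation time

def timed_on_workstation(func, *args):
    """Run func on the workstation and report both measured and scaled times."""
    start = time.perf_counter()
    result = func(*args)
    workstation_s = time.perf_counter() - start
    estimated_target_s = workstation_s * SCALE_FACTOR
    return result, workstation_s, estimated_target_s

_, ws, tgt = timed_on_workstation(sum, range(1_000_000))
print(f"workstation: {ws:.4f}s  -> estimated target CPU: {tgt:.4f}s")
```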

  4. A microcontroller-based three degree-of-freedom manipulator testbed. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Brown, Robert Michael, Jr.

    1995-01-01

    A wheeled exploratory vehicle is under construction at the Mars Mission Research Center at North Carolina State University. In order to serve as more than an inspection tool, this vehicle requires the ability to interact with its surroundings. A crane-type manipulator, as well as the necessary control hardware and software, has been developed for use as a sample gathering tool on this vehicle. The system is controlled by a network of four Motorola M68HC11 microcontrollers. Control hardware and software were developed in a modular fashion so that the system can be used to test future control algorithms and hardware. Actuators include three stepper motors and one solenoid. Sensors include three optical encoders and one cable tensiometer. The vehicle supervisor computer provides the manipulator system with the approximate coordinates of the target object. This system maps the workspace surrounding the given location by lowering the claw, along a set of evenly spaced vertical lines, until contact occurs. Based on this measured height information and prior knowledge of the target object size, the system determines if the object exists in the searched area. The system can find and retrieve a 1.25 in. diameter by 1.25 in. tall cylinder placed within the 47.5 sq in search area in less than 12 minutes. This manipulator hardware may be used for future control algorithm verification and serves as a prototype for other manipulator hardware.
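
    The search strategy described above (probe evenly spaced vertical lines until contact, then look for heights consistent with the known object size) can be sketched roughly as follows; the probe callback, grid step, tolerance, and the fake probe are invented stand-ins for the physical hardware.

```python
# Rough sketch of the workspace-mapping idea, with a software stand-in for the claw.
OBJECT_HEIGHT = 1.25      # inches, known target size
TABLE_HEIGHT  = 0.0
GRID_STEP     = 0.5       # inches between probe lines (hypothetical)

def map_workspace(probe_height, x_range, y_range):
    """probe_height(x, y) returns the height at which contact occurs."""
    heights = {}
    x = x_range[0]
    while x <= x_range[1]:
        y = y_range[0]
        while y <= y_range[1]:
            heights[(x, y)] = probe_height(x, y)
            y += GRID_STEP
        x += GRID_STEP
    return heights

def find_object(heights, tol=0.25):
    """Flag probe points whose contact height matches the known object height."""
    return [pt for pt, h in heights.items()
            if abs(h - (TABLE_HEIGHT + OBJECT_HEIGHT)) <= tol]

# Toy stand-in for the physical probe: a 1.25 in cylinder centred at (2.0, 2.0).
def fake_probe(x, y):
    return OBJECT_HEIGHT if (x - 2.0) ** 2 + (y - 2.0) ** 2 <= 0.625 ** 2 else TABLE_HEIGHT

hits = find_object(map_workspace(fake_probe, (0.0, 4.0), (0.0, 4.0)))
print("object detected at grid points:", hits)
```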

  5. Planning bioinformatics workflows using an expert system.

    PubMed

    Chen, Xiaoling; Chang, Jeffrey T

    2017-04-15

    Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprised of a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. https://github.com/jefftc/changlab. jeffrey.t.chang@uth.tmc.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
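
    To illustrate the backwards-chaining idea described above (a toy sketch, not BETSY's data model or rule base), the snippet below works backwards from a requested result to the inputs the user already has, emitting the tool invocations needed along the way; the rule and tool names are hypothetical.

```python
# Toy backwards-chaining planner: each rule maps a goal to (required inputs, tool).
RULES = {
    "aligned_reads"   : (["fastq_file", "reference_genome"], "run_aligner"),
    "expression_table": (["aligned_reads", "gene_annotation"], "count_reads"),
    "diffexp_report"  : (["expression_table", "sample_groups"], "run_diffexp"),
}

def plan(goal, available, steps=None):
    """Work backwards from the requested result to the data the user already has."""
    steps = [] if steps is None else steps
    if goal in available:
        return steps
    if goal not in RULES:
        raise ValueError(f"no rule or input produces {goal!r}")
    inputs, tool = RULES[goal]
    for needed in inputs:
        plan(needed, available, steps)      # recursively satisfy prerequisites
    if tool not in steps:
        steps.append(tool)
    return steps

print(plan("diffexp_report",
           available={"fastq_file", "reference_genome", "gene_annotation", "sample_groups"}))
# -> ['run_aligner', 'count_reads', 'run_diffexp']
```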

  6. Planning bioinformatics workflows using an expert system

    PubMed Central

    Chen, Xiaoling; Chang, Jeffrey T.

    2017-01-01

    Abstract Motivation: Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. Results: To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprised of a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. Availability and Implementation: https://github.com/jefftc/changlab Contact: jeffrey.t.chang@uth.tmc.edu PMID:28052928

  7. Modular Rocket Engine Control Software (MRECS)

    NASA Technical Reports Server (NTRS)

    Tarrant, C.; Crook, J.

    1998-01-01

    The Modular Rocket Engine Control Software (MRECS) Program is a technology demonstration effort designed to advance the state-of-the-art in launch vehicle propulsion systems. Its emphasis is on developing and demonstrating a modular software architecture for advanced engine control systems that will result in lower software maintenance (operations) costs. It effectively accommodates software requirement changes that occur due to hardware technology upgrades and engine development testing. Ground rules directed by MSFC were to optimize modularity and implement the software in the Ada programming language. MRECS system software and the software development environment utilize Commercial-Off-the-Shelf (COTS) products. This paper presents the objectives, benefits, and status of the program. The software architecture, design, and development environment are described. MRECS tasks are defined and timing relationships given. Major accomplishments are listed. MRECS offers benefits to a wide variety of advanced technology programs in the areas of modular software architecture, software reuse, and reduced software reverification time related to software changes. MRECS was recently modified to support a Space Shuttle Main Engine (SSME) hot-fire test. Cold Flow and Flight Readiness Testing were completed before the test was cancelled. Currently, the program is focused on supporting NASA MSFC in accomplishing development testing of the Fastrac Engine, part of NASA's Low Cost Technologies (LCT) Program. MRECS will be used for all engine development testing.

  8. Correcting for multiple-testing in multi-arm trials: is it necessary and is it done?

    PubMed

    Wason, James M S; Stecher, Lynne; Mander, Adrian P

    2014-09-17

    Multi-arm trials enable the evaluation of multiple treatments within a single trial. They provide a way of substantially increasing the efficiency of the clinical development process. However, since multi-arm trials test multiple hypotheses, some regulators require that a statistical correction be made to control the chance of making a type-1 error (false-positive). Several conflicting viewpoints are expressed in the literature regarding the circumstances in which a multiple-testing correction should be used. In this article we discuss these conflicting viewpoints and review the frequency with which correction methods are currently used in practice. We identified all multi-arm clinical trials published in 2012 by four major medical journals. Summary data on several aspects of the trial design were extracted, including whether the trial was exploratory or confirmatory, whether a multiple-testing correction was applied and, if one was used, what type it was. We found that almost half (49%) of published multi-arm trials report using a multiple-testing correction. The percentage that corrected was higher for trials in which the experimental arms included multiple doses or regimens of the same treatments (67%). The percentage that corrected was higher in exploratory than confirmatory trials, although this is explained by a greater proportion of exploratory trials testing multiple doses and regimens of the same treatment. A sizeable proportion of published multi-arm trials do not correct for multiple-testing. Clearer guidance about whether multiple-testing correction is needed for multi-arm trials that test separate treatments against a common control group is required.
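
    For readers unfamiliar with the corrections discussed above, the sketch below shows two common ones, Bonferroni and Holm, applied to hypothetical p-values from a three-arm comparison; it is illustrative only and is not taken from the reviewed trials.

```python
# Hedged sketch of two common multiplicity corrections. The p-values are hypothetical.
def bonferroni(p_values, alpha=0.05):
    """Reject hypothesis i if p_i <= alpha / m."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

def holm(p_values, alpha=0.05):
    """Step-down Holm procedure: compare ordered p-values to alpha / (m - rank)."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break           # once one ordered hypothesis is retained, stop
    return reject

p = [0.012, 0.020, 0.160]   # e.g. three experimental arms vs. a shared control
print("Bonferroni:", bonferroni(p))   # [True, False, False]
print("Holm      :", holm(p))         # [True, True, False]
```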

  9. Modeling Student Software Testing Processes: Attitudes, Behaviors, Interventions, and Their Effects

    ERIC Educational Resources Information Center

    Buffardi, Kevin John

    2014-01-01

    Effective software testing identifies potential bugs and helps correct them, producing more reliable and maintainable software. As software development processes have evolved, incremental testing techniques have grown in popularity, particularly with introduction of test-driven development (TDD). However, many programmers struggle to adopt TDD's…

  10. Educational Software Acquisition for Microcomputers.

    ERIC Educational Resources Information Center

    Erikson, Warren; Turban, Efraim

    1985-01-01

    Examination of issues involved in acquiring appropriate microcomputer software for higher education focuses on the following points: developing your own software; finding commercially available software; using published evaluations; pre-purchase testing; customizing and adapting commercial software; post-purchase testing; and software use. A…

  11. Testing the Efficacy of Two New Variants of Recasts with Standard Recasts in Communicative Conversational Settings: An Exploratory Longitudinal Study

    ERIC Educational Resources Information Center

    Wacha, Richard Charles; Liu, Yeu-Ting

    2017-01-01

    The purpose of this exploratory longitudinal study was to evaluate the efficacy of two new forms of recasts (i.e., elaborated and paraphrased recasts), each of which was designed to be more in accordance with contested views of input processing. The effectiveness of the two new forms of recasts was compared to that of conventional standard…

  12. Use of the Transcendental Meditation Technique to Reduce Symptoms of Attention Deficit Hyperactivity Disorder (ADHD) by Reducing Stress and Anxiety: An Exploratory Study

    ERIC Educational Resources Information Center

    Grosswald, Sarina J.; Stixrud, William R.; Travis, Fred; Bateh, Mark A.

    2008-01-01

    This exploratory study tested the feasibility of using the Transcendental Meditation[R] technique to reduce stress and anxiety as a means of reducing symptoms of ADHD. Students ages 11-14 were taught the technique, and practiced it twice daily in school. Common ADHD inventories and performance measures of executive function were administered at…

  13. Impacts of Social-Emotional Curricula on Three-Year-Olds: Exploratory Findings from the Head Start CARES Demonstration. Research Snapshot. OPRE Report 2014-78

    ERIC Educational Resources Information Center

    Hsueh, JoAnn; Lowenstein, Amy E.; Morris, Pamela; Mattera, Shira K.; Bangser, Michael

    2014-01-01

    This report presents exploratory impact findings for 3-year-olds from the Head Start CARES demonstration, a large-scale randomized controlled trial implemented in Head Start centers for one academic year across the country. The study was designed primarily to test the effects of the enhancements on 4-year-olds, but it also provides an opportunity…

  14. Determination of diffusivities in the Rustler Formation from exploratory-shaft construction at the Waste Isolation Pilot Plant in southeastern New Mexico

    USGS Publications Warehouse

    Stevens, Ken; Beyeler, Walt

    1985-01-01

    The construction of an exploratory shaft 12 feet in diameter into the Salado Formation (repository horizon for transuranic waste material) at the Waste Isolation Pilot Plant site in southeastern New Mexico affected water levels in water-bearing zones above the repository horizon. From the construction history of the exploratory shaft, an approximation of construction-generated hydraulic stresses at the shaft was made. The magnitude of the construction-generated stresses was calibrated using the hydrographs from one hydrologic test pad. Because flow rates from the Magenta Dolomite and Culebra Dolomite Members in the Rustler Formation into the exploratory shaft were unknown, the ratio of transmissivity to storage (diffusivity) was determined by mathematically simulating the aquifers and the hydrologic stresses with a flood-wave-response digital model. These results indicate that the Magenta Dolomite and Culebra Dolomite Members of the Rustler Formation can be modeled as homogeneous, isotropic, and confined water-bearing zones. One simple and consistent explanation, but by no means the only explanation, of the lack of a single diffusivity value in the Culebra aquifer is that the open-hole observation wells at the hydrologic test pads dampen the amplitude of water-level changes. (USGS)

  15. An exploratory study of the relationship between socioeconomic status and motor vehicle safety features.

    PubMed

    Girasek, Deborah C; Taylor, Brett

    2010-04-01

    The purpose of this study was to assess the association between motor vehicle owners' socioeconomic status (SES) and the safety of their motor vehicles. Truncated vehicle identification numbers (VINs) were obtained from the Maryland Motor Vehicle Administration office. ZIP code-level income and educational data were assigned to each VIN. Software was used to identify safety-related vehicle characteristics including crash test rating, availability of electronic stability control and side impact air bags, age, and weight. Correlations and analyses of variance were performed to assess whether a ZIP code's median household income and educational level were associated with its proportion of registered vehicles with safety features. For 13 of the 16 correlations performed, SES was significantly associated with the availability of vehicle safety features in a direction that favored upper-income individuals. Vehicle weight was not associated with income or education. When ZIP codes were divided into median household income quintiles, their mean proportions of safety features also differed significantly, in the same direction, for availability of electronic stability control, side impact air bags, vehicle age, and crash test ratings. Safer motor vehicles appear to be distributed along socioeconomic lines, with lower income groups experiencing more risk. This previously unidentified mechanism of disparity merits further study and the attention of policy makers.

  16. Anomalies in social behaviors and exploratory activities in an APPswe/PS1 mouse model of Alzheimer's disease.

    PubMed

    Filali, Mohammed; Lalonde, Robert; Rivest, Serge

    2011-10-24

    Alzheimer's disease is characterized by deficits in social communication, associated with generalized apathy or agitation, as well as social memory. To assess social behaviors in 6-month-old male APPswe/PS1 bigenics relative to non-transgenic controls, the 3-chamber test was used, together with open-field and elevated plus-maze tests of exploration. APPswe/PS1 mice were less willing to engage in social interaction than wild-type, avoiding an unfamiliar stimulus mouse, probably not due to generalized apathy because in both tests of exploratory activity the mutants were hyperactive. This study reveals reduced "sociability" combined with hyperactivity in an APPswe/PS1 mouse model of Alzheimer dementia. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. Effects of 3,4-methylenedioxy-methamphetamine (MDMA) on anxiety in mice tested in the light-dark box.

    PubMed

    Maldonado, E; Navarro, J F

    2000-04-01

    1. The effects of acute administration of 3,4-methylenedioxymethamphetamine (MDMA; "ecstasy") on anxiety tested in the light/dark box were examined in albino male mice of the OF.1 strain. 2. Animals were evaluated in the light/dark test 30 min after injection of MDMA (1, 8, and 15 mg/kg, i.p.) or saline. The following parameters were recorded (for 5 min): (a) number of exploratory rearings in the light and dark sections; (b) number of transitions between the lit and dark areas; (c) time spent in the light and dark areas; (d) latency of the initial movement from the light to the dark area; and (e) locomotor activity in the light area. 3. MDMA (8 and 15 mg/kg) produced a significant reduction in exploratory activity (rearings and transitions), without decreasing motility, in comparison with saline-treated mice. However, time spent in lit/dark compartments was not significantly affected by the drug, which could be a consequence of the anti-exploratory properties of MDMA. 4. Overall, the behavioral profile found in the light/dark test indicates an anxiogenic-like activity of MDMA in mice. It is suggested, however, that animal models of anxiety which emphasize a social interaction could be more sensitive to the effects of this substance.

  18. On-Line Computer Testing: Implementation and Endorsement.

    ERIC Educational Resources Information Center

    Gwinn, John F.; Beal, Loretta F.

    1988-01-01

    Describes an interactive computer-testing and record-keeping system that was implemented for a self-paced anatomy and physiology course. Results of exploratory research are reported that focus on student preference for online testing, test anxiety, attitude, and achievement; and suggestions are given for integrating a computer-testing program into…

  19. Factors That Affect Software Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey M.

    1991-01-01

    Software faults that infrequently affect software's output are dangerous. When a software fault causes frequent software failures, testing is likely to reveal the fault before the software is released; when the fault remains undetected during testing, it can cause disaster after the software is installed. A technique for predicting whether a particular piece of software is likely to reveal faults within itself during testing is found in [Voas91b]. A piece of software that is likely to reveal faults within itself during testing is said to have high testability. A piece of software that is not likely to reveal faults within itself during testing is said to have low testability. It is preferable to design software with higher testabilities from the outset, i.e., create software with as high a degree of testability as possible to avoid the problems of having undetected faults that are associated with low testability. Information loss is a phenomenon that occurs during program execution that increases the likelihood that a fault will remain undetected. In this paper, I identify two broad classes of information loss, define them, and suggest ways of predicting the potential for information loss to occur. We do this in order to decrease the likelihood that faults will remain undetected during testing.
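
    A small, hypothetical illustration of the information-loss phenomenon described above (not the paper's example): a fault injected upstream of a coarse quantization step reaches the observable output for only a small fraction of inputs, which is exactly the low-testability situation the paper warns about.

```python
# Hypothetical illustration of information loss hiding a fault.
def faulty_reading(raw):
    counts = raw + 1          # FAULT: off-by-one injected upstream (should be raw)
    return counts // 100      # coarse bucketing to a display digit discards detail

def correct_reading(raw):
    return raw // 100

# Only inputs whose last two digits are 99 propagate the fault to the output,
# so random testing is unlikely to expose the bug: low testability for this fault.
inputs = range(10_000)
revealing = sum(faulty_reading(x) != correct_reading(x) for x in inputs)
print(f"{revealing} of {len(inputs)} inputs reveal the fault")   # 100 of 10000 (1%)
```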

  20. A methodology for testing fault-tolerant software

    NASA Technical Reports Server (NTRS)

    Andrews, D. M.; Mahmood, A.; Mccluskey, E. J.

    1985-01-01

    A methodology for testing fault tolerant software is presented. There are problems associated with testing fault tolerant software because many errors are masked or corrected by voters, limiters, or automatic channel synchronization. This methodology illustrates how the same strategies used for testing fault tolerant hardware can be applied to testing fault tolerant software. For example, one strategy used in testing fault tolerant hardware is to disable the redundancy during testing. A similar testing strategy is proposed for software, namely, to move the major emphasis on testing earlier in the development cycle (before the redundancy is in place), thus reducing the possibility that undetected errors will be masked when limiters and voters are added.
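
    A toy sketch of why voters mask errors, in the spirit of the strategy described above (the channels and the seeded fault are invented): testing through the majority voter never reveals the fault, while exercising each channel directly, i.e. with the redundancy disabled, does.

```python
# Invented three-channel example: a majority voter hides a fault in one channel.
def channel_a(x): return 2 * x
def channel_b(x): return 2 * x
def channel_c(x): return 2 * x + (1 if x > 40 else 0)   # seeded fault

def voted(x):
    outputs = [channel_a(x), channel_b(x), channel_c(x)]
    return max(set(outputs), key=outputs.count)          # majority vote

# Testing through the voter: the fault never reaches the observable output.
assert all(voted(x) == 2 * x for x in range(100))

# Testing with the redundancy "disabled" (each channel exercised directly),
# as the methodology suggests doing earlier in development, exposes it.
failures = [x for x in range(100) if channel_c(x) != 2 * x]
print("channel_c fails for", len(failures), "of 100 inputs")   # 59 of 100
```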

  1. Differences in Spatio-Temporal Behavior of Zebrafish in the Open Tank Paradigm after a Short-Period Confinement into Dark and Bright Environments

    PubMed Central

    Rosemberg, Denis B.; Rico, Eduardo P.; Mussulini, Ben Hur M.; Piato, Ângelo L.; Calcagnotto, Maria E.; Bonan, Carla D.; Dias, Renato D.; Blaser, Rachel E.; Souza, Diogo O.; de Oliveira, Diogo L.

    2011-01-01

    The open tank paradigm, also known as the novel tank diving test, is a protocol used to evaluate zebrafish behavior. Several characteristics have been described for this species, including scototaxis, which is the natural preference for dark environments to the detriment of bright ones. However, there is no evidence regarding the influence of “natural stimuli” in zebrafish subjected to novelty-based paradigms. In this report, we evaluated the spatio-temporal exploratory activity of the short-fin zebrafish phenotype in the open tank after a short-period confinement into dark/bright environments. A total of 44 animals were individually confined during a 10-min single session into one of three environments: black-painted, white-painted, and transparent cylinders (dark, bright, and transparent groups). Fish were further subjected to the novel tank test and their exploratory profile was recorded during a 15-min trial. The results demonstrated that zebrafish increased their vertical exploratory activity during the first 6 min, when the bright group spent more time and travelled a greater distance in the top area. Interestingly, all behavioral parameters measured for the dark group were similar to the transparent one. These data were confirmed by automated analysis of track and occupancy plots and also demonstrated that zebrafish display a classical homebase formation in the bottom area of the tank. A detailed spatio-temporal study of zebrafish exploratory behavior and the construction of representative ethograms showed that the experimental groups presented significant differences in the first 3 min vs. the last 3 min of the test. Although the main factors involved in these behavioral responses still remain ambiguous and require further investigation, the current report describes an alternative methodological approach for assessing the zebrafish behavior after a forced exposure to different environments. Additionally, the analysis of ethologically-relevant patterns across time could be a potential phenotyping tool to evaluate the zebrafish exploratory profile in the open tank task. PMID:21559304

  2. A high order approach to flight software development and testing

    NASA Technical Reports Server (NTRS)

    Steinbacher, J.

    1981-01-01

    The use of a software development facility is discussed as a means of producing a reliable and maintainable ECS software system, and as a means of providing efficient use of the ECS hardware test facility. Principles applied to software design are given, including modularity, abstraction, hiding, and uniformity. The general objectives of each phase of the software life cycle are also given, including testing, maintenance, code development, and requirement specifications. Software development facility tools are summarized, and tool deficiencies recognized in the code development and testing phases are considered. Due to limited lab resources, the functional simulation capabilities may be indispensable in the testing phase.

  3. A Framework for Testing Scientific Software: A Case Study of Testing Amsterdam Discrete Dipole Approximation Software

    NASA Astrophysics Data System (ADS)

    Shao, Hongbing

    Software testing with scientific software systems often suffers from the test oracle problem, i.e., a lack of test oracles. The Amsterdam discrete dipole approximation code (ADDA) is a scientific software system that can be used to simulate light scattering of scatterers of various types. Testing of ADDA suffers from the test oracle problem. In this thesis work, I established a testing framework to test scientific software systems and evaluated this framework using ADDA as a case study. To test ADDA, I first used the CMMIE code as a pseudo oracle to test ADDA in simulating light scattering of a homogeneous sphere scatterer. Comparable results were obtained between ADDA and the CMMIE code. This validated ADDA for use with homogeneous sphere scatterers. Then I used an experimental result obtained for light scattering of a homogeneous sphere to validate the use of ADDA with sphere scatterers. ADDA produced a light scattering simulation comparable to the experimentally measured result. This further validated the use of ADDA for simulating light scattering of sphere scatterers. Then I used metamorphic testing to generate test cases covering scatterers of various geometries, orientations, and homogeneity or non-homogeneity. ADDA was tested under each of these test cases and all tests passed. The use of statistical analysis together with metamorphic testing is discussed as a future direction. In short, using ADDA as a case study, I established a testing framework, including the use of pseudo oracles, experimental results, and metamorphic testing techniques, to test scientific software systems that suffer from test oracle problems. Each of these techniques is necessary and contributes to the testing of the software under test.
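
    The metamorphic-testing step can be sketched generically (the relations below are invented for a stand-in function and are not ADDA's actual metamorphic relations): instead of comparing against a true oracle, the test checks relations that must hold between the outputs of related runs.

```python
# Generic metamorphic-testing sketch with a stand-in for the program under test.
import math

def program_under_test(angle_deg):
    """Stand-in for a simulation run parameterized by scatterer orientation."""
    a = math.radians(angle_deg)
    return 1.0 + 0.5 * math.cos(a) ** 2      # hypothetical scattered intensity

def metamorphic_tests(runs=360):
    for k in range(runs):
        base = program_under_test(k)
        # Relation 1: a full 360-degree rotation must not change the result.
        assert math.isclose(base, program_under_test(k + 360), rel_tol=1e-9)
        # Relation 2: the model is symmetric under reflection of the angle.
        assert math.isclose(base, program_under_test(-k), rel_tol=1e-9)
    print(f"{2 * runs} metamorphic checks passed")

metamorphic_tests()
```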

  4. [NASA/DOD Aerospace Knowledge Diffusion Research Project. Report 3:] Technical communications in aeronautics: Results of an exploratory study. An analysis of profit managers' and nonprofit managers' responses

    NASA Technical Reports Server (NTRS)

    Pinelli, Thomas E.; Glassman, Myron; Barclay, Rebecca O.; Oliu, Walter E.

    1989-01-01

    Data collected from an exploratory study concerned with the technical communications practices of aerospace engineers and scientists were analyzed to test the primary assumption that profit and nonprofit managers in the aerospace community have different technical communications practices. Five assumptions were established for the analysis. Profit and nonprofit managers in the aerospace community were found to have different technical communications practices for one of the five assumptions tested. It was, therefore, concluded that profit and nonprofit managers in the aerospace community do not have different technical communications practices.

  5. Infant titi monkey behavior in the open field test and the effect of early adversity

    PubMed Central

    Larke, Rebecca H.; Toubiana, Alice; Lindsay, Katrina A.; Mendoza, Sally P.; Bales, Karen L.

    2017-01-01

    The open field test is commonly used to measure anxiety-related behavior and exploration in rodents. Here, we used it as a standardized novel environment in which to evaluate the behavioral response of infant titi monkeys (Callicebus cupreus), to determine the effect of presence of individual family members, and to assess how adverse early experience alters infant behavior. Infants were tested in the open field for 5 days at ages 4 and 6 months in four successive 5 min trials on each day. A transport cage, which was situated on one side of the open field, was either empty (non-social control) or contained the father, mother, or sibling. Infant locomotor, vocalization, and exploratory behavior were quantified. Results indicated that age, sex, social condition, and early experience all had significant effects on infant behavior. Specifically, infants were generally more exploratory at 6 months and male infants were more exploratory than females. Infants distinguished between social and non-social conditions but made few behavioral distinctions between the attachment figure and other individuals. Infants which had adverse early life experience demonstrated greater emotional and physical independence, suggesting that early adversity led to resiliency in the novel environment. PMID:28605039

  6. Development and psychometric testing of the Nurse Practitioner Primary Care Organizational Climate Questionnaire.

    PubMed

    Poghosyan, Lusine; Nannini, Angela; Finkelstein, Stacey R; Mason, Emanuel; Shaffer, Jonathan A

    2013-01-01

    Policy makers and healthcare organizations are calling for expansion of the nurse practitioner (NP) workforce in primary care settings to assure timely access and high-quality care for the American public. However, many barriers, including those at the organizational level, exist that may undermine NP workforce expansion and their optimal utilization in primary care. This study developed a new NP-specific survey instrument, the Nurse Practitioner Primary Care Organizational Climate Questionnaire (NP-PCOCQ), to measure organizational climate in primary care settings and conducted its psychometric testing. Using an instrument development design, the organizational climate domain pertinent for primary care NPs was identified. Items were generated from the evidence and qualitative data. Face and content validity were established through two expert meetings. The content validity index was computed. The 86-item pool was reduced to 55 items, which was pilot tested with 81 NPs using mailed surveys and then field-tested with 278 NPs in New York State. SPSS 18 and Mplus software were used for item analysis, reliability testing, and maximum likelihood exploratory factor analysis. The Nurse Practitioner Primary Care Organizational Climate Questionnaire had face and content validity. The content validity index was .90. Twenty-nine items loaded on four subscale factors: professional visibility, NP-administration relations, NP-physician relations, and independent practice and support. The subscales had high internal consistency reliability. Cronbach's alphas ranged from .87 to .95. Having a strong instrument is important to promote future research. Also, administrators can use it to assess organizational climate in their clinics and propose interventions to improve it, thus promoting NP practice and the expansion of the NP workforce.
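
    As a side note on the reliability figures reported above, the snippet below computes Cronbach's alpha for a small, hypothetical set of item responses; it is a generic sketch, not the study's SPSS/Mplus analysis.

```python
# Generic sketch of Cronbach's alpha; the item responses are hypothetical.
def cronbach_alpha(items):
    """items: list of per-item score lists, all for the same respondents."""
    k = len(items)
    n = len(items[0])
    def var(xs):                       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))

# four hypothetical items answered by five respondents on a 1-5 scale
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 5, 2, 4, 3],
         [4, 5, 3, 4, 2]]
print(round(cronbach_alpha(items), 3))
```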

  7. Strategic Planning towards a World-Class University

    NASA Astrophysics Data System (ADS)

    Usoh, E. J.; Ratu, D.; Manongko, A.; Taroreh, J.; Preston, G.

    2018-02-01

    Strategic planning with a focus on world-class university status is an option that universities today cannot avoid if they are to survive and succeed in competition as providers of higher education. The objective of this research is to obtain exploratory findings on the strategic plans that universities prepare in pursuit of world-class university status. This research used an exploratory qualitative method, and data were collected through in-depth interviews. Interview transcripts were analyzed using thematic content analysis, supported by NVivo software and manual coding. The main finding from the interviews is that most interviewees agreed that UNIMA has been engaged in strategic planning. Contributions from faculties and schools are acknowledged and inform the planning process. However, a new model of strategic planning should be adopted by UNIMA because of the shift towards a “corporate university”. The findings from the documents, the literature review, and the interviews point to adding world-class university characteristics and features to UNIMA's current strategic planning and to upgrading the plan by taking these characteristics and features into account on the path towards world-class university status.

  8. MATTS- A Step Towards Model Based Testing

    NASA Astrophysics Data System (ADS)

    Herpel, H.-J.; Willich, G.; Li, J.; Xie, J.; Johansen, B.; Kvinnesland, K.; Krueger, S.; Barrios, P.

    2016-08-01

    In this paper we describe a model-based approach to testing on-board software and compare it with the traditional validation strategy currently applied to satellite software. The major problems that software engineering will face over at least the next two decades are increasing application complexity, driven by the need for autonomy, and serious application robustness. In other words, how do we actually get to declare success when trying to build applications one or two orders of magnitude more complex than today's applications? To solve these problems, the software engineering process has to be improved in at least two respects: 1) software design and 2) software testing. The software design process has to evolve towards model-based approaches with extensive use of code generators. Today, testing is an essential but time- and resource-consuming activity in the software development process. Generating a short but effective test suite usually requires a lot of manual work and expert knowledge. In a model-based process, among other subtasks, test construction and test execution can also be partially automated. The basic idea behind the presented study was to start from a formal model (e.g. state machines) and generate abstract test cases, which are then converted to concrete executable test cases (input and expected-output pairs). The generated concrete test cases were applied to on-board software. Results were collected and evaluated with respect to applicability, cost-efficiency, effectiveness at fault finding, and scalability.
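
    A minimal sketch of the test-generation idea described above (the state machine, commands, and states are hypothetical, and this is not the MATTS tool): each transition of a small model yields one abstract test case, a command sequence plus the expected final state, which could then be mapped to concrete input/expected-output pairs.

```python
# Toy model-based test generation: cover every transition of a small state machine.
from collections import deque

TRANSITIONS = {
    # (source state, command)   : destination state
    ("STANDBY",   "cmd_arm")    : "ARMED",
    ("ARMED",     "cmd_fire")   : "OPERATING",
    ("ARMED",     "cmd_disarm") : "STANDBY",
    ("OPERATING", "cmd_stop")   : "STANDBY",
}

def abstract_test_cases(initial="STANDBY"):
    """One abstract test case per transition: a shortest command sequence from the
    initial state to the transition's source, followed by the transition itself."""
    cases = []
    for (src, cmd), dst in TRANSITIONS.items():
        queue, seen, prefix = deque([(initial, [])]), {initial}, None
        while queue:
            state, path = queue.popleft()
            if state == src:
                prefix = path
                break
            for (s, c), d in TRANSITIONS.items():
                if s == state and d not in seen:
                    seen.add(d)
                    queue.append((d, path + [c]))
        if prefix is not None:
            cases.append((prefix + [cmd], dst))   # (input sequence, expected state)
    return cases

for inputs, expected in abstract_test_cases():
    print(" -> ".join(inputs), "| expect final state:", expected)
```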

  9. Using video-annotation software to identify interactions in group therapies for schizophrenia: assessing reliability and associations with outcomes.

    PubMed

    Orfanos, Stavros; Akther, Syeda Ferhana; Abdul-Basit, Muhammad; McCabe, Rosemarie; Priebe, Stefan

    2017-02-10

    Research has shown that interactions in group therapies for people with schizophrenia are associated with a reduction in negative symptoms. However, it is unclear which specific interactions in groups are linked with these improvements. The aims of this exploratory study were to i) develop and test the reliability of using video-annotation software to measure interactions in group therapies in schizophrenia and ii) explore the relationship between interactions in group therapies for schizophrenia with clinically relevant changes in negative symptoms. Video-annotation software was used to annotate interactions from participants selected across nine video-recorded out-patient therapy groups (N = 81). Using the Individual Group Member Interpersonal Process Scale, interactions were coded from participants who demonstrated either a clinically significant improvement (N = 9) or no change (N = 8) in negative symptoms at the end of therapy. Interactions were measured from the first and last sessions of attendance (>25 h of therapy). Inter-rater reliability between two independent raters was measured. Binary logistic regression analysis was used to explore the association between the frequency of interactive behaviors and changes in negative symptoms, assessed using the Positive and Negative Syndrome Scale. Of the 1275 statements that were annotated using ELAN, 1191 (93%) had sufficient audio and visual quality to be coded using the Individual Group Member Interpersonal Process Scale. Rater-agreement was high across all interaction categories (>95% average agreement). A higher frequency of self-initiated statements measured in the first session was associated with improvements in negative symptoms. The frequency of questions and giving advice measured in the first session of attendance was associated with improvements in negative symptoms; although this was only a trend. Video-annotation software can be used to reliably identify interactive behaviors in groups for schizophrenia. The results suggest that proactive communicative gestures, as assessed by the video-analysis, predict outcomes. Future research should use this novel method in larger and clinically different samples to explore which aspects of therapy facilitate such proactive communication early on in therapy.

  10. Using Automation to Improve the Flight Software Testing Process

    NASA Technical Reports Server (NTRS)

    ODonnell, James R., Jr.; Andrews, Stephen F.; Morgenstern, Wendy M.; Bartholomew, Maureen O.; McComas, David C.; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    One of the critical phases in the development of a spacecraft attitude control system (ACS) is the testing of its flight software. The testing (and test verification) of ACS flight software requires a mix of skills involving software, attitude control, data manipulation, and analysis. The process of analyzing and verifying flight software test results often creates a bottleneck which dictates the speed at which flight software verification can be conducted. In the development of the Microwave Anisotropy Probe (MAP) spacecraft ACS subsystem, an integrated design environment was used that included a MAP high fidelity (HiFi) simulation, a central database of spacecraft parameters, a script language for numeric and string processing, and plotting capability. In this integrated environment, it was possible to automate many of the steps involved in flight software testing, making the entire process more efficient and thorough than on previous missions. In this paper, we will compare the testing process used on MAP to that used on previous missions. The software tools that were developed to automate testing and test verification will be discussed, including the ability to import and process test data, synchronize test data and automatically generate HiFi script files used for test verification, and an automated capability for generating comparison plots. A summary of the perceived benefits of applying these test methods on MAP will be given. Finally, the paper will conclude with a discussion of re-use of the tools and techniques presented, and the ongoing effort to apply them to flight software testing of the Triana spacecraft ACS subsystem.
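
    The comparison step described above can be sketched generically: align flight-software telemetry with simulation output, flag samples outside a tolerance, and save a comparison plot. The channel name, tolerance, and synthetic data below are assumptions for illustration only; they are not the MAP project's actual tools or formats.

```python
# Generic sketch of automated test verification: compare flight-software test
# telemetry against simulation output and flag samples outside a tolerance.
import numpy as np
import matplotlib.pyplot as plt

def compare_channels(t, fsw, sim, tol, name="channel"):
    """Return indices where |fsw - sim| exceeds tol and save a comparison plot."""
    bad = np.where(np.abs(fsw - sim) > tol)[0]
    fig, ax = plt.subplots()
    ax.plot(t, fsw, label="flight software")
    ax.plot(t, sim, "--", label="simulation")
    ax.set_xlabel("time [s]")
    ax.set_ylabel(name)
    ax.legend()
    fig.savefig(f"compare_{name}.png")
    plt.close(fig)
    return bad

if __name__ == "__main__":
    # Synthetic stand-in data; real use would load logged telemetry instead.
    t = np.linspace(0.0, 10.0, 500)
    sim = np.sin(t)
    fsw = sim + np.random.normal(0.0, 1e-3, t.size)
    failures = compare_channels(t, fsw, sim, tol=5e-3, name="wheel_speed")
    print("PASS" if failures.size == 0 else f"FAIL at {failures.size} samples")
```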

  11. Using Automation to Improve the Flight Software Testing Process

    NASA Technical Reports Server (NTRS)

    ODonnell, James R., Jr.; Morgenstern, Wendy M.; Bartholomew, Maureen O.

    2001-01-01

    One of the critical phases in the development of a spacecraft attitude control system (ACS) is the testing of its flight software. The testing (and test verification) of ACS flight software requires a mix of skills involving software, knowledge of attitude control, and attitude control hardware, data manipulation, and analysis. The process of analyzing and verifying flight software test results often creates a bottleneck which dictates the speed at which flight software verification can be conducted. In the development of the Microwave Anisotropy Probe (MAP) spacecraft ACS subsystem, an integrated design environment was used that included a MAP high fidelity (HiFi) simulation, a central database of spacecraft parameters, a script language for numeric and string processing, and plotting capability. In this integrated environment, it was possible to automate many of the steps involved in flight software testing, making the entire process more efficient and thorough than on previous missions. In this paper, we will compare the testing process used on MAP to that used on other missions. The software tools that were developed to automate testing and test verification will be discussed, including the ability to import and process test data, synchronize test data and automatically generate HiFi script files used for test verification, and an automated capability for generating comparison plots. A summary of the benefits of applying these test methods on MAP will be given. Finally, the paper will conclude with a discussion of re-use of the tools and techniques presented, and the ongoing effort to apply them to flight software testing of the Triana spacecraft ACS subsystem.

  12. Space shuttle orbiter avionics software: Post review report for the entry FACI (First Article Configuration Inspection). [including orbital flight tests integrated system

    NASA Technical Reports Server (NTRS)

    Markos, H.

    1978-01-01

    Status of the computer programs dealing with space shuttle orbiter avionics is reported. Specific topics covered include: delivery status; SSW software; SM software; DL software; GNC software; level 3/4 testing; level 5 testing; performance analysis, SDL readiness for entry first article configuration inspection; and verification assessment.

  13. Exploratory analysis of textual data from the Mother and Child Handbook using a text mining method (II): Monthly changes in the words recorded by mothers.

    PubMed

    Tagawa, Miki; Matsuda, Yoshio; Manaka, Tomoko; Kobayashi, Makiko; Ohwada, Michitaka; Matsubara, Shigeki

    2017-01-01

    The aim of the study was to examine the possibility of converting subjective textual data written in the free column space of the Mother and Child Handbook (MCH) into objective information using text mining and to compare any monthly changes in the words written by the mothers. Pregnant women without complications (n = 60) were divided into two groups according to State-Trait Anxiety Inventory grade: low trait anxiety (group I, n = 39) and high trait anxiety (group II, n = 21). Exploratory analysis of the textual data from the MCH was conducted by text mining using the Word Miner software program. Using 1203 structural elements extracted after processing, a comparison of monthly changes in the words used in the mothers' comments was made between the two groups. The data was mainly analyzed by a correspondence analysis. The structural elements in groups I and II were divided into seven and six clusters, respectively, by cluster analysis. Correspondence analysis revealed clear monthly changes in the words used in the mothers' comments as the pregnancy progressed in group I, whereas the association was not clear in group II. The text mining method was useful for exploratory analysis of the textual data obtained from pregnant women, and the monthly change in the words used in the mothers' comments as pregnancy progressed differed according to their degree of unease. © 2016 Japan Society of Obstetrics and Gynecology.

  14. Exploratory analysis of textual data from the Mother and Child Handbook using the text-mining method: Relationships with maternal traits and post-partum depression.

    PubMed

    Matsuda, Yoshio; Manaka, Tomoko; Kobayashi, Makiko; Sato, Shuhei; Ohwada, Michitaka

    2016-06-01

    The aim of the present study was to examine the possibility of screening apprehensive pregnant women and mothers at risk for post-partum depression from an analysis of the textual data in the Mother and Child Handbook by using the text-mining method. Uncomplicated pregnant women (n = 58) were divided into two groups according to State-Trait Anxiety Inventory grade (high trait [group I, n = 21] and low trait [group II, n = 37]) or Edinburgh Postnatal Depression Scale score (high score [group III, n = 15] and low score [group IV, n = 43]). An exploratory analysis of the textual data from the Maternal and Child Handbook was conducted using the text-mining method with the Word Miner software program. A comparison of the 'structure elements' was made between the two groups. The number of structure elements extracted by separated words from text data was 20 004 and the number of structure elements with a threshold of 2 or more as an initial value was 1168. Fifteen key words related to maternal anxiety and six key words related to post-partum depression were extracted. The text-mining method is useful for the exploratory analysis of textual data obtained from pregnant women, and this screening method has been suggested to be useful for apprehensive pregnant women and mothers at risk for post-partum depression. © 2016 Japan Society of Obstetrics and Gynecology.

  15. Developmental subchronic exposure to diphenylarsinic acid induced increased exploratory behavior, impaired learning behavior, and decreased cerebellar glutathione concentration in rats.

    PubMed

    Negishi, Takayuki; Matsunaga, Yuki; Kobayashi, Yayoi; Hirano, Seishiro; Tashiro, Tomoko

    2013-12-01

    In Japan, people using water from the well contaminated with high-level arsenic developed neurological, mostly cerebellar, symptoms, where diphenylarsinic acid (DPAA) was a major compound. Here, we investigated the adverse effects of developmental exposure to 20mg/l DPAA in drinking water (early period [0-6 weeks of age] and/or late period [7-12]) on behavior and cerebellar development in male rats. In the open field test at 6 weeks of age, early exposure to DPAA significantly increased exploratory behaviors. At 12 weeks of age, late exposure to DPAA similarly increased exploratory behavior independent of the early exposure although a 6-week recovery from DPAA could reverse that change. In the passive avoidance test at 6 weeks of age, early exposure to DPAA significantly decreased the avoidance performance. Even at 12 weeks of age, early exposure to DPAA significantly decreased the test performance, which was independent of the late exposure to DPAA. These results suggest that the DPAA-induced increase in exploratory behavior is transient, whereas the DPAA-induced impairment of passive avoidance is long lasting. At 6 weeks of age, early exposure to DPAA significantly reduced the concentration of cerebellar total glutathione. At 12 weeks of age, late, but not early, exposure to DPAA also significantly reduced the concentration of cerebellar glutathione, which might be a primary cause of oxidative stress. Early exposure to DPAA induced late-onset suppressed expression of NMDAR1 and PSD95 protein at 12 weeks of age, indicating impaired glutamatergic system in the cerebellum of rats developmentally exposed to DPAA.

  16. Evaluating Heuristics for Planning Effective and Efficient Inspections

    NASA Technical Reports Server (NTRS)

    Shull, Forrest J.; Seaman, Carolyn B.; Diep, Madeline M.; Feldmann, Raimund L.; Godfrey, Sara H.; Regardie, Myrna

    2010-01-01

    A significant body of knowledge concerning software inspection practice indicates that the value of inspections varies widely both within and across organizations. Inspection effectiveness and efficiency can be measured in numerous ways, and may be affected by a variety of factors such as inspection planning, the type of software, the developing organization, and many others. In the early 1990s, NASA formulated heuristics for inspection planning based on best practices and early NASA inspection data. Over the intervening years, the body of data from NASA inspections has grown. This paper describes a multi-faceted exploratory analysis performed on this data to elicit lessons learned in general about conducting inspections and to recommend improvements to the existing heuristics. The contributions of our results include support for modifying some of the original inspection heuristics (e.g. increasing the recommended page rate), evidence that inspection planners must choose between efficiency and effectiveness, as a good tradeoff between them may not exist, and identification of small subsets of inspections for which new inspection heuristics are needed. Most importantly, this work illustrates the value of collecting rich data on software inspections, and using it to gain insight into, and improve, inspection practice.

  17. Women and Educational Testing: A Selective Review of the Research Literature and Testing Practices.

    ERIC Educational Resources Information Center

    Tittle, Carol Kehr; And Others

    This report provides an exploratory survey of several aspects of educational testing, with a view toward identifying discrimination against women. Two major ways in which discrimination can occur are examined in educational testing: reinforcement of sex-role stereotypes and restriction of individual choice. Major educational achievement tests are…

  18. Common Data Acquisition Systems (DAS) Software Development for Rocket Propulsion Test (RPT) Test Facilities

    NASA Technical Reports Server (NTRS)

    Hebert, Phillip W., Sr.; Davis, Dawn M.; Turowski, Mark P.; Holladay, Wendy T.; Hughes, Mark S.

    2012-01-01

    The advent of the commercial space launch industry and NASA's more recent resumption of operation of Stennis Space Center's large test facilities after thirty years of contractor control resulted in a need for non-proprietary data acquisition system (DAS) software to support government and commercial testing. The software is designed for modularity and adaptability to minimize the software development effort for current and future data systems. An additional benefit of the software's architecture is its ability to easily migrate to other testing facilities, thus providing future commonality across Stennis. Adapting the software to other Rocket Propulsion Test (RPT) Centers such as MSFC, White Sands, and Plumbrook Station would provide additional commonality and help reduce testing costs for NASA. Ultimately, the software provides the government with unlimited rights and guarantees privacy of data to commercial entities. The project engaged all RPT Centers and NASA's Independent Verification & Validation facility to enhance product quality. The design consists of a translation layer, which keeps the software application layers transparent to the underlying hardware regardless of test facility location, and a flexible, easily accessible database. This presentation addresses system technical design, issues encountered, and the status of Stennis' development and deployment.
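
    A minimal sketch of the translation-layer idea follows: application code is written against an abstract device interface, and facility-specific drivers are plugged in underneath. The class and method names are illustrative assumptions, not the Stennis DAS API.

```python
# Hedged sketch of a hardware "translation layer": application code talks to an
# abstract interface while site-specific drivers adapt it to local hardware.
from abc import ABC, abstractmethod

class DAQDevice(ABC):
    """Interface the application layers program against."""
    @abstractmethod
    def configure(self, channel: str, sample_rate_hz: float) -> None: ...
    @abstractmethod
    def read(self, channel: str, n_samples: int) -> list[float]: ...

class SimulatedDAQ(DAQDevice):
    """Stand-in driver; a real deployment would wrap the facility's hardware."""
    def configure(self, channel, sample_rate_hz):
        self.rate = sample_rate_hz
    def read(self, channel, n_samples):
        return [0.0] * n_samples   # placeholder samples

def record_run(device: DAQDevice, channel: str) -> list[float]:
    # The application layer never sees which concrete driver is in use.
    device.configure(channel, sample_rate_hz=1000.0)
    return device.read(channel, n_samples=10)

if __name__ == "__main__":
    print(record_run(SimulatedDAQ(), "chamber_pressure"))
```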

  19. SLS Flight Software Testing: Using a Modified Agile Software Testing Approach

    NASA Technical Reports Server (NTRS)

    Bolton, Albanie T.

    2016-01-01

    NASA's Space Launch System (SLS) is an advanced launch vehicle for a new era of exploration beyond Earth's orbit (BEO). The world's most powerful rocket, SLS, will launch crews of up to four astronauts in the agency's Orion spacecraft on missions to explore multiple deep-space destinations. Boeing is developing the SLS core stage, including the avionics that will control the vehicle during flight. The core stage will be built at NASA's Michoud Assembly Facility (MAF) in New Orleans, LA, using state-of-the-art manufacturing equipment. At the same time, the rocket's avionics computer software is being developed at Marshall Space Flight Center in Huntsville, AL. At Marshall, the Flight and Ground Software division provides comprehensive engineering expertise for development of flight and ground software. Within that division, the Software Systems Engineering Branch's test and verification (T&V) team uses an agile test approach in testing and verification of software. The agile software test method opens the door for regular short sprint release cycles. The basic premise behind agile software development and testing is that it is iterative and incremental. Agile testing follows an iterative development methodology in which requirements and solutions evolve through collaboration between cross-functional teams. With testing and development done incrementally, releases gain additional features and enhanced value. This value can be seen throughout the T&V team processes that are documented in various work instructions within the branch. The T&V team produces procedural test results at a higher rate, resolves issues found in software with designers at an earlier stage rather than at a later release, and team members gain increased knowledge of the system architecture by interfacing with designers. SLS Flight Software teams want to continue uncovering better ways of developing software in an efficient and project-beneficial manner. Through agile testing, there has been increased value through individuals and interactions over processes and tools, improved customer collaboration, and improved responsiveness to changes through controlled planning. The presentation will describe the agile testing methodology as practiced by the SLS FSW Test and Verification team at Marshall Space Flight Center.

  20. Overview of software development at the parabolic dish test site

    NASA Technical Reports Server (NTRS)

    Miyazono, C. K.

    1985-01-01

    The development history of the data acquisition and data analysis software is discussed. The software development occurred between 1978 and 1984 in support of solar energy module testing at the Jet Propulsion Laboratory's Parabolic Dish Test Site, located within Edwards Test Station. The development went through incremental stages, starting with a simple single-user BASIC set of programs, and progressing to the relatively complex multi-user FORTRAN system that was used until the termination of the project. Additional software in support of testing is discussed, including software for a meteorological subsystem and the Test Bed Concentrator Control Console interface. Conclusions and recommendations for further development are discussed.

  1. Designing Test Suites for Software Interactions Testing

    DTIC Science & Technology

    2004-01-01

    The annual cost of insufficient software testing methods and tools in the United States is estimated at between 22.2 and 59.5 billion US dollars [13, 14].

  2. Exploring faculty perceptions towards electronic health records for nursing education.

    PubMed

    Kowitlawakul, Y; Chan, S W C; Wang, L; Wang, W

    2014-12-01

    The use of electronic health records in nursing education is rapidly increasing worldwide. The successful implementation of an electronic health records for nursing education software program relies on students as well as nursing faculty members. This study aimed to explore the experiences and perceptions of nursing faculty members using an electronic health records for nursing education software program, and to identify the influential factors for successful implementation of this technology. This exploratory qualitative study was conducted using in-depth individual interviews at a university in Singapore. Seven faculty members participated in the study. The data were gathered and analysed at the end of the semester in the 2012/2013 academic year. The participants' perceptions of the software program were organized into three main categories: innovation, transition and integration. The participants perceived this technology as innovative, with both values and challenges for the users. In addition, using the new software program was perceived as a transitional process. The integration of this technology required time from faculty members and students, as well as support from administrators. The software program had only been implemented for 2-3 months at the time of the interviews. Consequently, the participants might have lacked the necessary skill, competence, and confidence to implement it successfully. In addition, the unequal exposure to the software program might have had an impact on participants' perceptions. The findings show that the integration of electronic health records into nursing education curricula is dependent on the faculty members' experiences with the new technology, as well as their perceptions of it. Hence, cultivating a positive attitude towards the use of new technologies is important. Electronic health records are significant applications of health information technology. Health informatics competency should be included as a required competency component in faculty professional development policy and programmes. © 2014 International Council of Nurses.

  3. ETICS: the international software engineering service for the grid

    NASA Astrophysics Data System (ADS)

    Meglio, A. D.; Bégin, M.-E.; Couvares, P.; Ronchieri, E.; Takacs, E.

    2008-07-01

    The ETICS system is a distributed software configuration, build and test system designed to fulfil the needs of improving the quality, reliability and interoperability of distributed software in general and grid software in particular. The ETICS project is a consortium of five partners (CERN, INFN, Engineering Ingegneria Informatica, 4D Soft and the University of Wisconsin-Madison). The ETICS service consists of a build and test job execution system based on the Metronome software and an integrated set of web services and software engineering tools to design, maintain and control build and test scenarios. The ETICS system allows taking into account complex dependencies among applications and middleware components and provides a rich environment to perform static and dynamic analysis of the software and execute deployment, system and interoperability tests. This paper gives an overview of the system architecture and functionality set and then describes how the EC-funded EGEE, DILIGENT and OMII-Europe projects are using the software engineering services to build, validate and distribute their software. Finally a number of significant use and test cases will be described to show how ETICS can be used in particular to perform interoperability tests of grid middleware using the grid itself.

  4. Chronic exposure to low doses bisphenol A interferes with pair-bonding and exploration in female Mongolian gerbils.

    PubMed

    Razzoli, M; Valsecchi, P; Palanza, P

    2005-04-15

    Estrogenic endocrine disruptors, synthetic or naturally occurring substances found in the environment, can interfere with the vertebrate endocrine system and, mimicking estrogens, interact with the neuroendocrine substrates of behavior. Since species vary in their sensitivity to steroids, it is of great interest to widen the range of species included in the researches on neurobehavioral effects of estrogenic endocrine disruptors. We examined socio-sexual and exploratory behavior of Mongolian gerbil females (Meriones unguiculatus), a monogamous rodent, in response to chronic exposure to the estrogenic endocrine disruptor bisphenol A. Paired females were daily administered with one of the following treatments: bisphenol A (2 or 20 microg/kg body weight/day); 17alpha-ethynil estradiol (0.04 microg/kg body weight/day 17alphaE); oil (vehicle). Females were treated for 3 weeks after pairing. Starting on day of pairing, social interactions within pairs were daily recorded. Three weeks after pairing, females were individually tested in a free exploratory paradigm. Bisphenol A and 17alphaE affected male-female social interactions by increasing social investigation. Bisphenol A reduced several exploratory parameters, indicating a decreased exploratory propensity of females. These results highlight the sensitivity of adult female gerbils to bisphenol A during the hormonally sensitive period of pair formation, also considering that the bisphenol A doses tested are well below the suggested human tolerable daily intake.

  5. "Test" is a Four Letter Word

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G M

    2005-05-03

    For a number of years I had the pleasure of teaching Testing Seminars all over the world and meeting and learning from others in our field. Over a twelve year period, I always asked the following questions to Software Developers, Test Engineers, and Managers who took my two or three day seminar on Software Testing: 'When was the first time you heard the word test'? 'Where were you when you first heard the word test'? 'Who said the word test'? 'How did the word test make you feel'? Most of the thousands of responses were similar to 'It was my third grade teacher at school, and I felt nervous and afraid'. Now there were a few exceptions like 'It was my third grade teacher, and I was happy and excited to show how smart I was'. But by and large, my informal survey found that 'testing' is a word to which most people attach negative meanings, based on its historical context. So why is this important to those of us in the software development business? Because I have found that a preponderance of software developers do not get real excited about hearing that the software they just wrote is going to be 'tested' by the Test Group. Typical reactions I have heard over the years run from: 'I'm sure there is nothing wrong with the software, so go ahead and test it, better you find defects than our customers' to these extremes: 'There is no need to test my software because there is nothing wrong with it'. 'You are not qualified to test my software because you don't know as much as I do about it'. 'If any Test Engineers come into our office again to test our software we will throw them through the third floor window'. So why is there such a strong negative reaction to testing? It is primitive. It goes back to grade school for many of us. It is a negative word that conjures up negative emotions. In other words, 'test' is a four letter word. How many of us associate 'Joy' with 'Test'? Not many. It is hard for most of us to reprogram associations learned at an early age. So what can we do about it (short of hypnotic therapy for software developers)? Well one concept I have used (and still use) is to not call testing 'testing'. Call it something else. Ever wonder why most of the Independent Software Testing groups are called Software Quality Assurance groups? Now you know. Software Quality Assurance is not such a negatively charged phrase, even though Software Quality Assurance is much more than simply testing. It was a real blessing when the concept of Validation and Verification came about for software. Now I define Validation to mean assuring that the product produced does the right thing (usually what the customer wants it to do), and verification means that the product was built the right way (in accordance with some good design principles and practices). So I have deliberately called the System Test Group the Verification and Validation Group, or V&V Group, as a way of avoiding the negative image problem. I remember once having a conversation with a developer colleague who said, in the heat of battle, that it was fine to V&V his code, just don't test it! Once again V&V includes many things besides testing, but it just doesn't sound like an onerous thing to do to software. In my current job, working at a highly regarded national laboratory with world renowned physicists, I have again encountered the negativity about testing software. Except here they don't take kindly to Software Quality Assurance or Software Verification and Validation either.
After all, software is just a trivial tool to automate algorithms that implement physics models. Testing, SQA, and V&V take time and get in the way of completing groundbreaking science experiments. So I have again had to change the name of software testing to something less negative in the physics world. I found (the hard way) that if I requested more time to do software experimentation, the physicists' resistance melted. And so the conversation continues, 'We have time to run more software experiments. Just don't waste any time testing the software'! In case the concept of not calling testing 'testing' appeals to you, and there may be an opportunity for you to take the sting out of the name at your place of employment, I have compiled a table of things that testing could be called besides 'testing'. Of course we can embellish this by adding some good sounding prefixes and suffixes also. To come up with alternate names for testing, pick a word from columns A, B, and C in the table below. For instance Unified Acceptance Trials (A2,B7,C3) or Tailored Observational Demonstration (A6,B5,C5) or Agile Criteria Scoring (A3,B8,C8) or Rapid Requirement Proof (A1,B9,C7) or Satisfaction Assurance (B10,C1). You can probably think of some additional combinations appropriate for your industry.

  6. Integrating Testing into Software Engineering Courses Supported by a Collaborative Learning Environment

    ERIC Educational Resources Information Center

    Clarke, Peter J.; Davis, Debra; King, Tariq M.; Pava, Jairo; Jones, Edward L.

    2014-01-01

    As software becomes more ubiquitous and complex, the cost of software bugs continues to grow at a staggering rate. To remedy this situation, there needs to be major improvement in the knowledge and application of software validation techniques. Although there are several software validation techniques, software testing continues to be one of the…

  7. Walk a Mile in My Shoes: Stakeholder Accounts of Testing Experience with a Computer-Administered Test

    ERIC Educational Resources Information Center

    Fox, Janna; Cheng, Liying

    2015-01-01

    In keeping with the trend to elicit multiple stakeholder responses to operational tests as part of test validation, this exploratory mixed methods study examines test-taker accounts of an Internet-based (i.e., computer-administered) test in the high-stakes context of proficiency testing for university admission. In 2013, as language testing…

  8. Enduring effects of post-weaning rearing condition on depressive- and anxiety-like behaviors and motor activity in male rats.

    PubMed

    Mosaferi, Belal; Babri, Shirin; Ebrahimi, Hadi; Mohaddes, Gisou

    2015-04-01

    Environmental manipulation at early critical periods could have long-lasting effects. In spite of the great interest in the biological effects of the environmental condition so far, its long-lasting effects are less documented. This study looks at the enduring effects of rearing condition on tasks that measure affective responses and exploratory behavior in male Wistar rats. The animals were reared from weaning to adulthood in an enriched environment, standard laboratory condition, or isolated condition. Then, all rats were housed in standard laboratory cages to provide a common environment, and successively exposed to different tests between 0 and 11 weeks post-manipulation. The open field test indicated a more efficient exploratory behavior in the enriched group, and an enhanced spontaneous motor activity in both standard and isolated groups. In addition, rats reared in standard condition showed heightened motor activity in forced swimming test and elevated plus maze. Forced swimming test showed an antidepressive-like effect in the enriched environment group by increased climbing behavior. In respect to the anxiety behavior, environmental enrichment improved threat detection ability. It is concluded that rearing condition from weaning to adulthood has important and long-lasting effects on depressive- and anxiety-like and exploratory behaviors as well as motor activity. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Path generation algorithm for UML graphic modeling of aerospace test software

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao

    2018-03-01

    Traditionally, aerospace software test engineers rely on their own experience and on communication with the software developers to describe the software under test and to write test cases by hand, which is time-consuming, inefficient, and prone to gaps. Using the high-reliability model-based testing (MBT) tool developed by our company, a single modeling pass can automatically generate test case documents efficiently and accurately. Accurately describing a process with a UML model depends on the paths that can be reached through it, yet existing path generation algorithms are either too simple, unable to combine branch paths and loops into composite paths, or so elaborate that they generate meaningless path arrangements that are superfluous for aerospace software testing. Drawing on our experience across ten aerospace payload projects, we developed a tailored path generation algorithm for UML graphical descriptions of aerospace software.
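
    A bounded path enumeration over a simple activity graph gives a feel for the problem the paper addresses: loops must be limited so the set of generated paths stays finite and useful. The graph and the visit bound below are illustrative assumptions; this is not the paper's proprietary algorithm.

```python
# Hedged sketch of bounded path enumeration over an activity/state graph:
# enumerate start-to-end paths while allowing each edge to be taken at most
# `max_edge_visits` times, so loops yield a finite set of paths.
from collections import defaultdict

EDGES = [("start", "A"), ("A", "B"), ("B", "A"), ("B", "end"), ("A", "end")]

def enumerate_paths(edges, start="start", end="end", max_edge_visits=2):
    graph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
    paths, visits = [], defaultdict(int)

    def dfs(node, path):
        if node == end:
            paths.append(list(path))
            return
        for nxt in graph[node]:
            edge = (node, nxt)
            if visits[edge] < max_edge_visits:   # bound loop traversals
                visits[edge] += 1
                dfs(nxt, path + [nxt])
                visits[edge] -= 1

    dfs(start, [start])
    return paths

if __name__ == "__main__":
    for p in enumerate_paths(EDGES):
        print(" -> ".join(p))
```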

  10. Wall adjustment strategy software for use with the NASA Langley 0.3-meter transonic cryogenic tunnel adaptive wall test section

    NASA Technical Reports Server (NTRS)

    Wolf, Stephen W. D.

    1988-01-01

    The Wall Adjustment Strategy (WAS) software provides successful on-line control of the 2-D flexible walled test section of the Langley 0.3-m Transonic Cryogenic Tunnel. This software package allows the level of operator intervention to be regulated as necessary for research and production type 2-D testing using an Adaptive Wall Test Section (AWTS). The software is designed to accept modification for future requirements, such as 3-D testing, with a minimum of complexity. The WAS software described is an attempt to provide a user-friendly package which could be used to control any flexible walled AWTS. Control system constraints influence the details of data transfer, not the data type, so this entire software package could be used in different control systems if suitable interface software is available. A complete overview of the software highlights the data flow paths, the modular architecture of the software and the various operating and analysis modes available. A detailed description of the software modules includes listings of the code. A user's manual is provided to explain task generation, operating environment, user options and what to expect at execution.

  11. [NASA/DOD Aerospace Knowledge Diffusion Research Project. Report 2:] Technical communications in aeronautics: Results of an exploratory study. An analysis of managers' and nonmanagers' responses

    NASA Technical Reports Server (NTRS)

    Pinelli, Thomas E.; Glassman, Myron; Barclay, Rebecca O.; Oliu, Walter E.

    1989-01-01

    Data collected from an exploratory study concerned with the technical communications practices of aerospace engineers and scientists were analyzed to test the primary assumption that aerospace managers and nonmanagers have different technical communications practices. Five assumptions were established for the analysis. Aerospace managers and nonmanagers were found to have different technical communications practices for three of the five assumptions tested. Although aerospace managers and nonmanagers were found to have different technical communications practices, the evidence was neither conclusive nor compelling that the presumption of difference in practices could be attributed to the duties performed by aerospace managers and nonmanagers.

  12. Test Item Linguistic Complexity and Assessments for Deaf Students

    ERIC Educational Resources Information Center

    Cawthon, Stephanie

    2011-01-01

    Linguistic complexity of test items is one test format element that has been studied in the context of struggling readers and their participation in paper-and-pencil tests. The present article presents findings from an exploratory study on the potential relationship between linguistic complexity and test performance for deaf readers. A total of 64…

  13. Component Prioritization Schema for Achieving Maximum Time and Cost Benefits from Software Testing

    NASA Astrophysics Data System (ADS)

    Srivastava, Praveen Ranjan; Pareek, Deepak

    Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Defining the end of software testing is a crucial feature of any software development project. A premature release will involve risks like undetected bugs, the cost of fixing faults later, and discontented customers. Any software organization would want to achieve maximum possible benefits from software testing with minimum resources. Testing time and cost need to be optimized for achieving a competitive edge in the market. In this paper, we propose a schema, called the Component Prioritization Schema (CPS), to achieve an effective and uniform prioritization of the software components. This schema serves as an extension to the Non-Homogeneous Poisson Process-based Cumulative Priority Model. We also introduce an approach for handling time-intensive versus cost-intensive projects.
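
    Component prioritization in general can be sketched as a weighted scoring of components followed by a proportional allocation of the test budget. The factor names and weights below are assumptions for illustration and do not reproduce the paper's CPS or the NHPP-based Cumulative Priority Model.

```python
# Illustrative sketch of component prioritization: combine weighted factors
# into a score per component and allocate a test-time budget proportionally.
WEIGHTS = {"criticality": 0.5, "complexity": 0.3, "change_frequency": 0.2}

COMPONENTS = {
    "telemetry": {"criticality": 9, "complexity": 6, "change_frequency": 4},
    "guidance":  {"criticality": 10, "complexity": 8, "change_frequency": 7},
    "logging":   {"criticality": 3, "complexity": 2, "change_frequency": 5},
}

def priority_scores(components):
    """Weighted sum of the rated factors for each component."""
    return {name: sum(WEIGHTS[f] * v for f, v in factors.items())
            for name, factors in components.items()}

def allocate_budget(scores, total_hours):
    """Split the testing budget in proportion to the priority scores."""
    total = sum(scores.values())
    return {name: round(total_hours * s / total, 1) for name, s in scores.items()}

if __name__ == "__main__":
    scores = priority_scores(COMPONENTS)
    print(allocate_budget(scores, total_hours=100))
```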

  14. Space station software reliability analysis based on failures observed during testing at the multisystem integration facility

    NASA Technical Reports Server (NTRS)

    Tamayo, Tak Chai

    1987-01-01

    The quality of software is vital not only to the successful operation of the space station; it is also an important factor in establishing testing requirements, the time needed for software verification and integration, and launch schedules for the space station. Defense of management decisions can be greatly strengthened by combining engineering judgments with statistical analysis. Unlike hardware, software has the characteristics of no wearout and costly redundancies, thus making traditional statistical analysis not suitable in evaluating reliability of software. A statistical model was developed to provide a representation of the number as well as types of failures that occur during software testing and verification. From this model, quantitative measures of software reliability based on failure history during testing are derived. Criteria to terminate testing based on reliability objectives and methods to estimate the expected number of fixes required are also presented.
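
    One common way to quantify reliability from testing failure history is to fit a reliability growth model to cumulative failure counts; the sketch below fits a Goel-Okumoto NHPP mean-value function to synthetic data. Both the data and the choice of model are assumptions for illustration; the report's own statistical model is not reproduced here.

```python
# Generic sketch of estimating software reliability from testing failure data
# by fitting the Goel-Okumoto mean-value function m(t) = a * (1 - exp(-b*t)).
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    return a * (1.0 - np.exp(-b * t))

# Weeks of testing vs. cumulative failures observed (synthetic example data).
weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
cum_failures = np.array([5, 9, 13, 15, 17, 18, 19, 19], dtype=float)

(a_hat, b_hat), _ = curve_fit(mean_value, weeks, cum_failures, p0=[20.0, 0.3])
remaining = a_hat - cum_failures[-1]
print(f"estimated total faults a = {a_hat:.1f}, rate b = {b_hat:.2f}")
print(f"expected residual faults after week {int(weeks[-1])}: {remaining:.1f}")
```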

  15. Statistics of software vulnerability detection in certification testing

    NASA Astrophysics Data System (ADS)

    Barabanov, A. V.; Markov, A. S.; Tsirlov, V. L.

    2018-05-01

    The paper discusses practical aspects of introduction of the methods to detect software vulnerability in the day-to-day activities of the accredited testing laboratory. It presents the approval results of the vulnerability detection methods as part of the study of the open source software and the software that is a test object of the certification tests under information security requirements, including software for communication networks. Results of the study showing the allocation of identified vulnerabilities by types of attacks, country of origin, programming languages used in the development, methods for detecting vulnerability, etc. are given. The experience of foreign information security certification systems related to the detection of certified software vulnerabilities is analyzed. The main conclusion based on the study is the need to implement practices for developing secure software in the development life cycle processes. The conclusions and recommendations for the testing laboratories on the implementation of the vulnerability analysis methods are laid down.

  16. Integrated testing and verification system for research flight software

    NASA Technical Reports Server (NTRS)

    Taylor, R. N.

    1979-01-01

    The MUST (Multipurpose User-oriented Software Technology) program is being developed to cut the cost of producing research flight software through a system of software support tools. An integrated verification and testing capability was designed as part of MUST. Documentation, verification and test options are provided with special attention on real-time, multiprocessing issues. The needs of the entire software production cycle were considered, with effective management and reduced lifecycle costs as foremost goals.

  17. Common Data Acquisition Systems (DAS) Software Development for Rocket Propulsion Test (RPT) Test Facilities - A General Overview

    NASA Technical Reports Server (NTRS)

    Hebert, Phillip W., Sr.; Hughes, Mark S.; Davis, Dawn M.; Turowski, Mark P.; Holladay, Wendy T.; Marshall, PeggL.; Duncan, Michael E.; Morris, Jon A.; Franzl, Richard W.

    2012-01-01

    The advent of the commercial space launch industry and NASA's more recent resumption of operation of Stennis Space Center's large test facilities after thirty years of contractor control resulted in a need for a non-proprietary data acquisition system (DAS) software to support government and commercial testing. The software is designed for modularity and adaptability to minimize the software development effort for current and future data systems. An additional benefit of the software's architecture is its ability to easily migrate to other testing facilities thus providing future commonality across Stennis. Adapting the software to other Rocket Propulsion Test (RPT) Centers such as MSFC, White Sands, and Plumbrook Station would provide additional commonality and help reduce testing costs for NASA. Ultimately, the software provides the government with unlimited rights and guarantees privacy of data to commercial entities. The project engaged all RPT Centers and NASA's Independent Verification & Validation facility to enhance product quality. The design consists of a translation layer which provides the transparency of the software application layers to underlying hardware regardless of test facility location and a flexible and easily accessible database. This presentation addresses system technical design, issues encountered, and the status of Stennis' development and deployment.

  18. PubMed Central

    Gazzani, D.; Pilati, S.; Paiano, J.; Sannino, A.; Ferrari, S.; Checchin, E.

    2017-01-01

    Summary Introduction. The non-medical use of prescription stimulants (NMUPS) has become the subject of great interest for its diffusion among university students, who abuse these substances to cope with the increasing load of academic stress. NMUPS has been widely investigated in the U.S. due to its increasing trend; this behavior, however, has also been reported in Europe. The aim of this cross-sectional study was to examine stimulant misuse in a Northern Italian geographic area, identifying possible developments of the phenomenon in Italy. Methods. To evaluate academic and extra-academic NMUPS (Methylphenidate and Amphetamines), an anonymous multiple-choice questionnaire was administered to a sample of Bachelor's and Master's degree students attending a university in North-East Italy. Data elaboration and 95% confidence intervals were computed with Excel 2013 software. Fisher's exact tests were performed using GraphPad InStat software. Results. Data from 899 correctly completed questionnaires were analyzed in this study. 11.3% of students reported NMUPS, with apparently greater use by students aged 18-22 years (73.5%) and without any statistically significant gender predominance. Fifty-seven point eight percent of students used stimulants at most five times in six months, and the most frequent academic and extra-academic reasons to use them were respectively to improve concentration while studying (51.0%) and sports performance (25.5%). NMUPS was higher among working students than nonworking ones (p < 0.05), suggesting a use of stimulants to cope with stress by the former. Conclusions. These exploratory and preliminary data suggest that NMUPS is quite relevant in Northern Italy, suggesting a need for preventive and monitoring measures, as well as future analysis via a longitudinal multicenter study. PMID:28900353

  19. ExoData: A Python package to handle large exoplanet catalogue data

    NASA Astrophysics Data System (ADS)

    Varley, Ryan

    2016-10-01

    Exoplanet science often involves using the system parameters of real exoplanets for tasks such as simulations, fitting routines, and target selection for proposals. Several exoplanet catalogues are already well established but often lack a version history and code friendly interfaces. Software that bridges the barrier between the catalogues and code enables users to improve the specific repeatability of results by facilitating the retrieval of exact system parameters used in articles results along with unifying the equations and software used. As exoplanet science moves towards large data, gone are the days where researchers can recall the current population from memory. An interface able to query the population now becomes invaluable for target selection and population analysis. ExoData is a Python interface and exploratory analysis tool for the Open Exoplanet Catalogue. It allows the loading of exoplanet systems into Python as objects (Planet, Star, Binary, etc.) from which common orbital and system equations can be calculated and measured parameters retrieved. This allows researchers to use tested code of the common equations they require (with units) and provides a large science input catalogue of planets for easy plotting and use in research. Advanced querying of targets is possible using the database and Python programming language. ExoData is also able to parse spectral types and fill in missing parameters according to programmable specifications and equations. Examples of use cases are integration of equations into data reduction pipelines, selecting planets for observing proposals and as an input catalogue to large scale simulation and analysis of planets. ExoData is a Python package available freely on GitHub.
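
    A brief usage sketch is given below. The call names (load_db_from_url, OECDatabase, searchPlanet, and the planets list) are recalled from the package's documentation and should be treated as assumptions rather than verified API; consult the GitHub README for the authoritative interface.

```python
# Hedged usage sketch for ExoData; names below are assumptions, not verified API.
import exodata

# Fetch the Open Exoplanet Catalogue over the network ...
catalogue = exodata.load_db_from_url()
# ... or point at a local checkout of the catalogue instead (assumed path):
# catalogue = exodata.OECDatabase('/path/to/open_exoplanet_catalogue/systems/')

planet = catalogue.searchPlanet('Kepler-68 b')   # single-target lookup
print(planet.R, planet.M)                        # radius and mass with units

print(len(catalogue.planets), 'planets loaded')  # population-level access
```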

  20. COLORcation: A new application to phenotype exploratory behavior models of anxiety in mice.

    PubMed

    Dagan, Shachar Y; Tsoory, Michael M; Fainzilber, Mike; Panayotis, Nicolas

    2016-09-01

    Behavioral analyses in rodents have successfully delineated the function of many genes and signaling pathways in the brain. Behavioral testing uses highly defined experimental conditions to identify abnormalities in a given mouse strain or genotype. The open field (OF) is widely used to assess both locomotion and anxiety in rodents. In this test, the more a mouse explores and spends time in the center of the arena, the less anxious it is considered to be. However, the simplistic distinction between center and border substantially reduces the information content of the analysis and may fail to detect biologically meaningful differences. Here we describe COLORcation, a new application for improved analyses of mouse behavior in the OF. The application analyses animal exploration patterns in detailed spatial resolution (e.g. 10×10 bins) to provide a color-encoded heat map of mouse activity. In addition, COLORcation provides new parameters to track activity and locomotion of the test animals. We demonstrate the use of COLORcation in different experimental paradigms, including pharmacological and restraint-based induction of stress and anxiety. COLORcation is compatible with multiple acquisition systems, giving users the option to make the most of their raw data, with organized text files containing the time and coordinates of animal locations as input. These analyses validate the utility of the software and establish its reliability and potential as a new tool to analyze OF data. Copyright © 2016 Elsevier B.V. All rights reserved.
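
    The core of such an analysis, binning tracked positions into a coarse occupancy grid, can be sketched as follows. The arena size, bin count, and synthetic track are assumptions for illustration; this is not COLORcation's own code.

```python
# Generic sketch of an open-field occupancy analysis: bin tracked (x, y)
# positions into a 10x10 grid and derive a simple centre-occupancy measure.
import numpy as np

def occupancy_map(x, y, arena_size=40.0, bins=10):
    """Fraction of samples spent in each spatial bin (rows = y, cols = x)."""
    grid, _, _ = np.histogram2d(y, x, bins=bins, range=[[0, arena_size]] * 2)
    return grid / grid.sum()

def centre_fraction(grid, border_bins=2):
    """Share of time spent away from the outer `border_bins` rings of bins."""
    inner = grid[border_bins:-border_bins, border_bins:-border_bins]
    return inner.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 40, 5000)      # synthetic stand-in for tracker output
    y = rng.uniform(0, 40, 5000)
    grid = occupancy_map(x, y)
    print(f"centre occupancy: {centre_fraction(grid):.2f}")
```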

  1. 15 CFR 995.27 - Format validation software testing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Format validation software testing... CERTIFICATION REQUIREMENTS FOR NOAA HYDROGRAPHIC PRODUCTS AND SERVICES CERTIFICATION REQUIREMENTS FOR... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying...

  2. The Design of Software for Three-Phase Induction Motor Test System

    NASA Astrophysics Data System (ADS)

    Haixiang, Xu; Fengqi, Wu; Jiai, Xue

    2017-11-01

    The design and development of the control system software is important for three-phase induction motor test equipment and requires thorough familiarity with the test process and the control procedure of the test equipment. In this paper, the software is developed in the VB language in accordance with the national standard for three-phase induction motor test methods (GB/T 1032-2005). The control system, the data analysis software, and the implementation of the motor test system are described individually; the resulting system offers a high degree of automation and high accuracy.

  3. CATS, continuous automated testing of seismological, hydroacoustic, and infrasound (SHI) processing software.

    NASA Astrophysics Data System (ADS)

    Brouwer, Albert; Brown, David; Tomuta, Elena

    2017-04-01

    To detect nuclear explosions, waveform data from over 240 SHI stations world-wide flows into the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), located in Vienna, Austria. A complex pipeline of software applications processes this data in numerous ways to form event hypotheses. The software codebase comprises over 2 million lines of code, reflects decades of development, and is subject to frequent enhancement and revision. Since processing must run continuously and reliably, software changes are subjected to thorough testing before being put into production. To overcome the limitations and cost of manual testing, the Continuous Automated Testing System (CATS) has been created. CATS provides an isolated replica of the IDC processing environment, and is able to build and test different versions of the pipeline software directly from code repositories that are placed under strict configuration control. Test jobs are scheduled automatically when code repository commits are made. Regressions are reported. We present the CATS design choices and test methods. Particular attention is paid to how the system accommodates the individual testing of strongly interacting software components that lack test instrumentation.
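
    The scheduling idea, watching a configuration-controlled repository and rebuilding and re-testing on every new commit, can be sketched as a simple polling loop. The repository path and build/test commands are illustrative assumptions, not the IDC's actual CATS implementation.

```python
# Hedged sketch of a poll-build-test loop for commit-triggered testing.
import subprocess
import time

REPO = "/srv/pipeline-src"              # assumed local clone under config control
BUILD = ["make", "-C", REPO, "all"]
TESTS = ["make", "-C", REPO, "test"]

def head_commit(repo):
    out = subprocess.run(["git", "-C", repo, "rev-parse", "HEAD"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def run_cycle():
    """Build then test; report the first failing step as a regression."""
    for step, cmd in (("build", BUILD), ("test", TESTS)):
        if subprocess.run(cmd).returncode != 0:
            return f"REGRESSION during {step}"
    return "PASS"

if __name__ == "__main__":
    last = None
    while True:
        subprocess.run(["git", "-C", REPO, "pull", "--ff-only"], check=True)
        current = head_commit(REPO)
        if current != last:              # new commit arrived: rebuild and re-test
            print(current[:8], run_cycle())
            last = current
        time.sleep(300)                  # poll every five minutes
```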

  4. Analysis of key technologies for virtual instruments metrology

    NASA Astrophysics Data System (ADS)

    Liu, Guixiong; Xu, Qingui; Gao, Furong; Guan, Qiuju; Fang, Qiang

    2008-12-01

    Virtual instruments (VIs) require metrological verification when applied as measuring instruments. Owing to the software-centered architecture, metrological evaluation of VIs includes two aspects: measurement functions and software characteristics. The complexity of the software imposes difficulties on metrological testing of VIs. Key approaches and technologies for metrology evaluation of virtual instruments are investigated and analyzed in this paper. The principal issue is evaluation of measurement uncertainty. The nature and regularity of measurement uncertainty caused by software and algorithms can be evaluated by modeling, simulation, analysis, testing and statistics, supported by the powerful computing capability of the PC. Another concern is evaluation of software features like correctness, reliability, stability, security and real-time behavior of VIs. Technologies from the software engineering, software testing and computer security domains can be used for these purposes. For example, a variety of black-box testing, white-box testing and modeling approaches can be used to evaluate the reliability of modules, components, applications and the whole VI software. The security of a VI can be assessed by methods like vulnerability scanning and penetration analysis. To enable metrology institutions to perform metrological verification of VIs efficiently, an automatic metrological tool for the above validation is essential. Based on technologies of numerical simulation, software testing and system benchmarking, a framework for the automatic tool is proposed in this paper. Investigation of the implementation of existing automatic tools that perform calculation of measurement uncertainty, software testing and security assessment demonstrates the feasibility of the proposed automatic framework.
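
    One of the approaches mentioned above, evaluating algorithm-induced measurement uncertainty by simulation, can be sketched as a Monte Carlo run over a stand-in measurement routine. The RMS routine, signal, and noise model are assumptions for illustration, not a specific instrument's software.

```python
# Minimal sketch of Monte Carlo evaluation of the uncertainty contributed by a
# VI's processing algorithm (here, a stand-in RMS routine over a noisy signal).
import numpy as np

def vi_algorithm(samples):
    """Stand-in for the VI's measurement routine: RMS of the sampled signal."""
    return np.sqrt(np.mean(samples ** 2))

def monte_carlo_uncertainty(true_rms=1.0, noise_sd=0.01, n_samples=1000, trials=10_000):
    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, n_samples, endpoint=False)
    signal = true_rms * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)   # 50 Hz sine
    results = np.array([vi_algorithm(signal + rng.normal(0, noise_sd, n_samples))
                        for _ in range(trials)])
    return results.mean(), results.std(ddof=1)

if __name__ == "__main__":
    mean, std = monte_carlo_uncertainty()
    print(f"estimated RMS = {mean:.4f}, standard uncertainty = {std:.5f}")
```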

  5. Cognition, personality, and stress in budgerigars, Melopsittacus undulatus.

    PubMed

    Medina-García, Angela; Jawor, Jodie M; Wright, Timothy F

    2017-01-01

    To study the fitness effects of individual variation in cognitive traits, it is paramount to understand whether traits such as personality and physiological stress influence cognitive performance. We first tested whether budgerigars showed both consistent personalities and cognitive performance across time and tasks. We tested object and food neophobia, and exploratory behavior. We measured cognitive performance in habituation, ability to solve foraging problems, spatial memory, and seed discrimination tasks. Budgerigars showed consistency in their neophobic tendencies and these tendencies were associated with their exploratory behavior. Birds were also consistent in how they performed in most of the cognitive tasks (temporal consistency), but were not consistent in their performance across tasks (context consistency). Neither corticosterone levels (baseline and stress-induced) showed a significant relationship with either cognitive or personality measures. Neophobic and exploratory tendencies determined the willingness of birds to engage only in the seed discrimination task. Such tendencies also had a significant effect on problem-solving ability. Our results suggest that consistent individual differences in cognitive performance along with consistent differences in personality could determine response to environmental change and therefore have important fitness consequences.

  6. MapFactory - Towards a mapping design pattern for big geospatial data

    NASA Astrophysics Data System (ADS)

    Rautenbach, Victoria; Coetzee, Serena

    2018-05-01

    With big geospatial data emerging, cartographers and geographic information scientists have to find new ways of dealing with the volume, variety, velocity, and veracity (4Vs) of the data. This requires the development of tools that allow processing, filtering, analysing, and visualising of big data through multidisciplinary collaboration. In this paper, we present the MapFactory design pattern that will be used for the creation of different maps according to the (input) design specification for big geospatial data. The design specification is based on elements from ISO19115-1:2014 Geographic information - Metadata - Part 1: Fundamentals that would guide the design and development of the map or set of maps to be produced. The results of the exploratory research suggest that the MapFactory design pattern will help with software reuse and communication. The MapFactory design pattern will aid software developers to build the tools that are required to automate map making with big geospatial data. The resulting maps would assist cartographers and others to make sense of big geospatial data.
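
    The factory idea itself can be sketched briefly: a design specification selects which concrete map product is built, so new map types can be added without touching client code. The class names and specification fields below are illustrative assumptions, not the MapFactory implementation.

```python
# Hedged sketch of a factory pattern for map creation driven by a design spec.
from abc import ABC, abstractmethod

class Map(ABC):
    @abstractmethod
    def render(self, data) -> str: ...

class ChoroplethMap(Map):
    def render(self, data):
        return f"choropleth over {len(data)} regions"

class HeatMap(Map):
    def render(self, data):
        return f"heat map from {len(data)} points"

class MapFactory:
    # Registry maps a spec's map_type field to the concrete product class.
    _registry = {"choropleth": ChoroplethMap, "heatmap": HeatMap}

    @classmethod
    def create(cls, spec: dict) -> Map:
        try:
            return cls._registry[spec["map_type"]]()
        except KeyError as exc:
            raise ValueError(f"unsupported map type: {spec.get('map_type')}") from exc

if __name__ == "__main__":
    spec = {"map_type": "heatmap", "title": "Tweet density"}   # toy design spec
    print(MapFactory.create(spec).render(data=range(1_000)))
```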

  7. Infant titi monkey behavior in the open field test and the effect of early adversity.

    PubMed

    Larke, Rebecca H; Toubiana, Alice; Lindsay, Katrina A; Mendoza, Sally P; Bales, Karen L

    2017-09-01

    The open field test is commonly used to measure anxiety-related behavior and exploration in rodents. Here, we used it as a standardized novel environment in which to evaluate the behavioral response of infant titi monkeys (Callicebus cupreus), to determine the effect of presence of individual family members, and to assess how adverse early experience alters infant behavior. Infants were tested in the open field for 5 days at ages 4 and 6 months in four successive 5 min trials on each day. A transport cage, which was situated on one side of the open field, was either empty (non-social control) or contained the father, mother, or sibling. Infant locomotor, vocalization, and exploratory behavior were quantified. Results indicated that age, sex, social condition, and early experience all had significant effects on infant behavior. Specifically, infants were generally more exploratory at 6 months and male infants were more exploratory than females. Infants distinguished between social and non-social conditions but made few behavioral distinctions between the attachment figure and other individuals. Infants which had adverse early life experience demonstrated greater emotional and physical independence, suggesting that early adversity led to resiliency in the novel environment. © 2017 Wiley Periodicals, Inc.

  8. Developmental Subchronic Exposure to Diphenylarsinic Acid Induced Increased Exploratory Behavior, Impaired Learning Behavior, and Decreased Cerebellar Glutathione Concentration in Rats

    PubMed Central

    Negishi, Takayuki; Matsunaga, Yuki

    2013-01-01

    In Japan, people using water from the well contaminated with high-level arsenic developed neurological, mostly cerebellar, symptoms, where diphenylarsinic acid (DPAA) was a major compound. Here, we investigated the adverse effects of developmental exposure to 20mg/l DPAA in drinking water (early period [0–6 weeks of age] and/or late period [7–12]) on behavior and cerebellar development in male rats. In the open field test at 6 weeks of age, early exposure to DPAA significantly increased exploratory behaviors. At 12 weeks of age, late exposure to DPAA similarly increased exploratory behavior independent of the early exposure although a 6-week recovery from DPAA could reverse that change. In the passive avoidance test at 6 weeks of age, early exposure to DPAA significantly decreased the avoidance performance. Even at 12 weeks of age, early exposure to DPAA significantly decreased the test performance, which was independent of the late exposure to DPAA. These results suggest that the DPAA-induced increase in exploratory behavior is transient, whereas the DPAA-induced impairment of passive avoidance is long lasting. At 6 weeks of age, early exposure to DPAA significantly reduced the concentration of cerebellar total glutathione. At 12 weeks of age, late, but not early, exposure to DPAA also significantly reduced the concentration of cerebellar glutathione, which might be a primary cause of oxidative stress. Early exposure to DPAA induced late-onset suppressed expression of NMDAR1 and PSD95 protein at 12 weeks of age, indicating impaired glutamatergic system in the cerebellum of rats developmentally exposed to DPAA. PMID:24008832

  9. Lipopolysaccharide affects exploratory behaviors toward novel objects by impairing cognition and/or motivation in mice: Possible role of activation of the central amygdala.

    PubMed

    Haba, Ryota; Shintani, Norihito; Onaka, Yusuke; Wang, Hyper; Takenaga, Risa; Hayata, Atsuko; Baba, Akemichi; Hashimoto, Hitoshi

    2012-03-17

    Lipopolysaccharide (LPS) produces a series of systemic and psychiatric changes called sickness behavior. In the present study, we characterized the LPS-induced decrease in novel object exploratory behaviors in BALB/c mice. As already reported, LPS (0.3-5 μg/mouse) induced dose- and time-dependent decreases in locomotor activity, food intake, social interaction, and exploration for novel objects, and an increase in immobility in the forced-swim test. Although the decrease in locomotor activity was ameliorated by 10h postinjection, novel object exploratory behaviors remained decreased at 24h and were observed even with the lowest dose of LPS. In an object exploration test, LPS shortened object exploration time but did not affect moving time or the frequency of object exploration. Although pre-exposure to the same object markedly decreased the duration of exploration and LPS did not change this reduction, LPS significantly impaired the exploration of a novel object that replaced the familiar one. LPS did not affect anxiety-like behaviors in open-field and elevated plus-maze tests. An LPS-induced increase in the number of c-Fos-immunoreactive cells was observed in several brain regions within 6h of LPS administration, but the number of cells quickly returned to control levels, except in the central amygdala where the increase continued for 24h. These results suggest that LPS most prominently affects object exploratory behaviors by impairing cognition and/or motivation including continuous attention and curiosity toward objects, and that this may be associated with activation of brain nuclei such as the central amygdala. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. Early chronic lead exposure reduces exploratory activity in young C57BL/6J mice.

    PubMed

    Flores-Montoya, Mayra Gisel; Sobin, Christina

    2015-07-01

    Research has suggested that chronic low-level lead exposure diminishes neurocognitive function in children. Tests that are sensitive to behavioral effects at lowest levels of lead exposure are needed for the development of animal models. In this study we investigated the effects of chronic low-level lead exposure on exploratory activity (unbaited nose poke task), exploratory ambulation (open field task) and motor coordination (Rotarod task) in pre-adolescent mice. C57BL/6J pups were exposed to 0 ppm (controls), 30 ppm (low-dose) or 230 ppm (high-dose) lead acetate via dams' drinking water administered from birth to postnatal day 28, to achieve a range of blood lead levels (BLLs) from not detectable to 14.84 µg dl(-1). At postnatal day 28, mice completed behavioral testing and were killed (n = 61). BLLs were determined by inductively coupled plasma mass spectrometry. The effects of lead exposure on behavior were tested using generalized linear mixed model analyses with BLL, sex and the interaction as fixed effects, and litter as the random effect. BLL predicted decreased exploratory activity and no threshold of effect was apparent. As BLL increased, nose pokes decreased. The C57BL/6J mouse is a useful model for examining effects of early chronic low-level lead exposure on behavior. In the C57BL/6J mouse, the unbaited nose poke task is sensitive to the effects of early chronic low-level lead exposure. This is the first animal study to show behavioral effects in pre-adolescent lead-exposed mice with BLL below 5 µg dl(-1). Copyright © 2014 John Wiley & Sons, Ltd.

  11. Early chronic lead exposure reduces exploratory activity in young C57BL/6J mice

    PubMed Central

    Flores-Montoya, Mayra Gisel; Sobin, Christina

    2014-01-01

    Research has suggested that chronic low-level lead exposure diminishes neurocognitive function in children. Tests that are sensitive to behavioral effects at lowest levels of lead exposure are needed for the development of animal models. In this study we investigated the effects of chronic low-level lead exposure on exploratory activity (unbaited nose poke task), exploratory ambulation (open field task) and motor coordination (Rotarod task) in pre-adolescent mice. C57BL/6J pups were exposed to 0 ppm (controls), 30 ppm (low-dose) or 230 ppm (high-dose) lead acetate via dams’ drinking water administered from birth to postnatal day 28, to achieve a range of blood lead levels (BLLs) from not detectable to 14.84 μg dl−1. At postnatal day 28, mice completed behavioral testing and were killed (n = 61). BLLs were determined by inductively coupled plasma mass spectrometry. The effects of lead exposure on behavior were tested using generalized linear mixed model analyses with BLL, sex and the interaction as fixed effects, and litter as the random effect. BLL predicted decreased exploratory activity and no threshold of effect was apparent. As BLL increased, nose pokes decreased. The C57BL/6J mouse is a useful model for examining effects of early chronic low-level lead exposure on behavior. In the C57BL/6J mouse, the unbaited nose poke task is sensitive to the effects of early chronic low-level lead exposure. This is the first animal study to show behavioral effects in pre-adolescent lead-exposed mice with BLL below 5 μg dl−1. PMID:25219894
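
    The analysis described above (BLL, sex and their interaction as fixed effects, litter as a random effect) can be sketched in Python. The snippet below uses a linear mixed-effects model from statsmodels as a stand-in for the generalized linear mixed model the authors report; the file name and column names are hypothetical.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical layout: one row per mouse with blood lead level (bll), sex,
      # litter identifier, and nose-poke count from the unbaited task.
      df = pd.read_csv("lead_exposure_behavior.csv")

      # Fixed effects: BLL, sex and their interaction; random effect: litter.
      model = smf.mixedlm("nose_pokes ~ bll * sex", data=df, groups=df["litter"])
      result = model.fit()
      print(result.summary())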

  12. Exploratory Studies of Bias in Achievement Tests.

    ERIC Educational Resources Information Center

    Green, Donald Ross; Draper, John F.

    This paper considers the question of bias in group administered academic achievement tests, bias which is inherent in the instruments themselves. A body of data on the test of performance of three disadvantaged minority groups--northern, urban black; southern, rural black; and, southwestern, Mexican-Americans--as tryout samples in contrast to…

  13. Do Test Scores Buy Happiness?

    ERIC Educational Resources Information Center

    McCluskey, Neal

    2017-01-01

    Since at least the enactment of No Child Left Behind in 2002, standardized test scores have served as the primary measures of public school effectiveness. Yet, such scores fail to measure the ultimate goal of education: maximizing happiness. This exploratory analysis assesses nation level associations between test scores and happiness, controlling…

  14. An exploratory study on emotion recognition in patients with a clinically isolated syndrome and multiple sclerosis.

    PubMed

    Jehna, Margit; Neuper, Christa; Petrovic, Katja; Wallner-Blazek, Mirja; Schmidt, Reinhold; Fuchs, Siegrid; Fazekas, Franz; Enzinger, Christian

    2010-07-01

    Multiple sclerosis (MS) is a chronic multifocal CNS disorder which can affect higher order cognitive processes. Whereas cognitive disturbances in MS are increasingly better characterised, emotional facial expression (EFE) has rarely been tested, despite its importance for adequate social behaviour. We tested 20 patients with a clinically isolated syndrome suggestive of MS (CIS) or MS and 23 healthy controls (HC) for their ability to differentiate between emotional facial stimuli, controlling for the influence of depressive mood (ADS-L). We screened for cognitive dysfunction using the Faces Symbol Test (FST). The patients demonstrated significantly decreased reaction times on the emotion recognition tests compared to HC. However, the results also suggested worse cognitive abilities in the patients. Emotional and cognitive test results were correlated. This exploratory pilot study suggests that emotion recognition deficits might be prevalent in MS. However, future studies will be needed to overcome the limitations of this study. Copyright 2010 Elsevier B.V. All rights reserved.

  15. An Exploratory Analysis of Factors Affecting Participation in Air Force Knowledge Now Communities of Practice

    DTIC Science & Technology

    2004-03-01

    reliability coefficients are presented in chapter four in the factor analysis section. Along with Cronbach’s Alpha coefficients, the Kaiser-Meyer-Olkin ...the pattern of correlation coefficients > 0.300 in the correlation matrix • Kaiser-Meyer-Olkin Measure of Sampling Adequacy (MSA) > 0.700 • Bartlett’s...exploratory factor analysis. The Kaiser-Meyer-Olkin measure of sampling adequacy yielded a value of .790, and Bartlett’s test of sphericity yielded a
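
    The excerpt above cites the usual adequacy checks that precede exploratory factor analysis (KMO above roughly 0.70 and a significant Bartlett's test). A minimal sketch of those two checks is shown below using the Python factor_analyzer package; the package choice and the input file are our assumptions, not the report's.

      import pandas as pd
      from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                                   calculate_kmo)

      # Hypothetical survey responses, one column per questionnaire item.
      items = pd.read_csv("survey_items.csv")

      # Bartlett's test of sphericity: the correlation matrix should differ
      # significantly from an identity matrix before factoring is attempted.
      chi_square, p_value = calculate_bartlett_sphericity(items)

      # Kaiser-Meyer-Olkin measure of sampling adequacy: values above ~0.70
      # are conventionally taken to justify exploratory factor analysis.
      kmo_per_item, kmo_overall = calculate_kmo(items)

      print(f"Bartlett chi-square = {chi_square:.1f}, p = {p_value:.4f}")
      print(f"overall KMO = {kmo_overall:.3f}")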

  16. Exploratory Study of 4D versus 3D Robust Optimization in Intensity Modulated Proton Therapy for Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wei, E-mail: Liu.Wei@mayo.edu; Schild, Steven E.; Chang, Joe Y.

    Purpose: The purpose of this study was to compare the impact of uncertainties and interplay on 3-dimensional (3D) and 4D robustly optimized intensity modulated proton therapy (IMPT) plans for lung cancer in an exploratory methodology study. Methods and Materials: IMPT plans were created for 11 nonrandomly selected non-small cell lung cancer (NSCLC) cases: 3D robustly optimized plans on average CTs with internal gross tumor volume density overridden to irradiate internal target volume, and 4D robustly optimized plans on 4D computed tomography (CT) to irradiate clinical target volume (CTV). Regular fractionation (66 Gy [relative biological effectiveness; RBE] in 33 fractions) was considered. In 4D optimization, the CTV of individual phases received nonuniform doses to achieve a uniform cumulative dose. The root-mean-square dose-volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under the RVH curve (AUCs) were used to evaluate plan robustness. Dose evaluation software modeled time-dependent spot delivery to incorporate interplay effect with randomized starting phases of each field per fraction. Dose-volume histogram (DVH) indices comparing CTV coverage, homogeneity, and normal tissue sparing were evaluated using Wilcoxon signed rank test. Results: 4D robust optimization plans led to smaller AUC for CTV (14.26 vs 18.61, respectively; P=.001), better CTV coverage (Gy [RBE]) (D95% CTV: 60.6 vs 55.2, respectively; P=.001), and better CTV homogeneity (D5%-D95% CTV: 10.3 vs 17.7, respectively; P=.002) in the face of uncertainties. With interplay effect considered, 4D robust optimization produced plans with better target coverage (D95% CTV: 64.5 vs 63.8, respectively; P=.0068), comparable target homogeneity, and comparable normal tissue protection. The benefits from 4D robust optimization were most obvious for the 2 typical stage III lung cancer patients. Conclusions: Our exploratory methodology study showed that, compared to 3D robust optimization, 4D robust optimization produced significantly more robust and interplay-effect-resistant plans for targets with comparable dose distributions for normal tissues. A further study with a larger and more realistic patient population is warranted to generalize the conclusions.
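
    The plan comparison above relies on a paired, non-parametric Wilcoxon signed rank test over per-patient DVH indices. A minimal sketch of that comparison is given below with scipy; the eleven paired values are placeholders invented for illustration, not the study's data.

      import numpy as np
      from scipy.stats import wilcoxon

      # Placeholder paired DVH indices (e.g., CTV D95%) for 11 patients under
      # the 3D and 4D robustly optimized plans -- illustrative values only.
      d95_3d = np.array([55.1, 54.8, 56.0, 53.9, 55.6, 54.2, 55.9, 54.7, 55.3, 56.2, 54.5])
      d95_4d = np.array([60.4, 60.9, 61.2, 59.8, 60.7, 60.1, 61.0, 60.3, 60.8, 61.5, 60.2])

      statistic, p_value = wilcoxon(d95_3d, d95_4d)
      print(f"Wilcoxon statistic = {statistic}, p = {p_value:.4f}")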

  17. Sustaining Software-Intensive Systems

    DTIC Science & Technology

    2006-05-01

    2.2 Multi-Service Operational Test and Evaluation 2.3 Stable Software Baseline...or equivalent document • completed Multi-Service Operational Test and Evaluation (MOT&E) for the potential production software package (or OT&E if...not multi-service) • stable software production baseline • complete and current software documentation • Authority to Operate (ATO) for an

  18. Artificial intelligence and expert systems in-flight software testing

    NASA Technical Reports Server (NTRS)

    Demasie, M. P.; Muratore, J. F.

    1991-01-01

    The authors discuss the introduction of advanced information systems technologies such as artificial intelligence, expert systems, and advanced human-computer interfaces directly into Space Shuttle software engineering. The reconfiguration automation project (RAP) was initiated to coordinate this move towards 1990s software technology. The idea behind RAP is to automate several phases of the flight software testing procedure and to introduce AI and ES into space shuttle flight software testing. In the first phase of RAP, conventional tools to automate regression testing have already been developed or acquired. There are currently three tools in use.

  19. T-Pattern Analysis and Cognitive Load Manipulation to Detect Low-Stake Lies: An Exploratory Study.

    PubMed

    Diana, Barbara; Zurloni, Valentino; Elia, Massimiliano; Cavalera, Cesare; Realdon, Olivia; Jonsson, Gudberg K; Anguera, M Teresa

    2018-01-01

    Deception has evolved to become a fundamental aspect of human interaction. Despite the prolonged efforts in many disciplines, there has been no definite finding of a univocally "deceptive" signal. This work proposes an approach to deception detection combining cognitive load manipulation and T-pattern methodology with the objective of: (a) testing the efficacy of dual task-procedure in enhancing differences between truth tellers and liars in a low-stakes situation; (b) exploring the efficacy of T-pattern methodology in discriminating truthful reports from deceitful ones in a low-stakes situation; (c) setting the experimental design and procedure for following research. We manipulated cognitive load to enhance differences between truth tellers and liars, because of the low-stakes lies involved in our experiment. We conducted an experimental study with a convenience sample of 40 students. We carried out a first analysis on the behaviors' frequencies coded through the observation software, using SPSS (22). The aim was to describe shape and characteristics of behavior's distributions and explore differences between groups. Datasets were then analyzed with Theme 6.0 software which detects repeated patterns (T-patterns) of coded events (non-verbal behaviors) that regularly or irregularly occur within a period of observation. A descriptive analysis on T-pattern frequencies was carried out to explore differences between groups. An in-depth analysis on more complex patterns was performed to get qualitative information on the behavior structure expressed by the participants. Results show that the dual-task procedure enhances differences observed between liars and truth tellers with T-pattern methodology; moreover, T-pattern detection reveals a higher variety and complexity of behavior in truth tellers than in liars. These findings support the combination of cognitive load manipulation and T-pattern methodology for deception detection in low-stakes situations, suggesting the testing of directional hypothesis on a larger probabilistic sample of population.

  20. T-Pattern Analysis and Cognitive Load Manipulation to Detect Low-Stake Lies: An Exploratory Study

    PubMed Central

    Diana, Barbara; Zurloni, Valentino; Elia, Massimiliano; Cavalera, Cesare; Realdon, Olivia; Jonsson, Gudberg K.; Anguera, M. Teresa

    2018-01-01

    Deception has evolved to become a fundamental aspect of human interaction. Despite the prolonged efforts in many disciplines, there has been no definite finding of a univocally “deceptive” signal. This work proposes an approach to deception detection combining cognitive load manipulation and T-pattern methodology with the objective of: (a) testing the efficacy of dual task-procedure in enhancing differences between truth tellers and liars in a low-stakes situation; (b) exploring the efficacy of T-pattern methodology in discriminating truthful reports from deceitful ones in a low-stakes situation; (c) setting the experimental design and procedure for following research. We manipulated cognitive load to enhance differences between truth tellers and liars, because of the low-stakes lies involved in our experiment. We conducted an experimental study with a convenience sample of 40 students. We carried out a first analysis on the behaviors’ frequencies coded through the observation software, using SPSS (22). The aim was to describe shape and characteristics of behavior’s distributions and explore differences between groups. Datasets were then analyzed with Theme 6.0 software which detects repeated patterns (T-patterns) of coded events (non-verbal behaviors) that regularly or irregularly occur within a period of observation. A descriptive analysis on T-pattern frequencies was carried out to explore differences between groups. An in-depth analysis on more complex patterns was performed to get qualitative information on the behavior structure expressed by the participants. Results show that the dual-task procedure enhances differences observed between liars and truth tellers with T-pattern methodology; moreover, T-pattern detection reveals a higher variety and complexity of behavior in truth tellers than in liars. These findings support the combination of cognitive load manipulation and T-pattern methodology for deception detection in low-stakes situations, suggesting the testing of directional hypothesis on a larger probabilistic sample of population. PMID:29551986

  1. 77 FR 50720 - Test Documentation for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Test Documentation for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1207, ``Test Documentation for Digital... practices for test documentation for software and computer systems as described in the Institute of...

  2. Validation and Verification of LADEE Models and Software

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen

    2013-01-01

    The Lunar Atmosphere Dust Environment Explorer (LADEE) mission will orbit the moon in order to measure the density, composition and time variability of the lunar dust environment. The ground-side and onboard flight software for the mission is being developed using a Model-Based Software methodology. In this technique, models of the spacecraft and flight software are developed in a graphical dynamics modeling package. Flight Software requirements are prototyped and refined using the simulated models. After the model is shown to work as desired in this simulation framework, C-code software is automatically generated from the models. The generated software is then tested in real time Processor-in-the-Loop and Hardware-in-the-Loop test beds. Travelling Road Show test beds were used for early integration tests with payloads and other subsystems. Traditional techniques for verifying computational sciences models are used to characterize the spacecraft simulation. A lightweight set of formal methods analysis, static analysis, formal inspection and code coverage analyses are utilized to further reduce defects in the onboard flight software artifacts. These techniques are applied early and often in the development process, iteratively increasing the capabilities of the software and the fidelity of the vehicle models and test beds.

  3. Testing of Hand-Held Mine Detection Systems

    DTIC Science & Technology

    2015-01-08

    ITOP 04-2-5208 for guidance on software testing. Testing software is necessary to ensure that safety is designed into the software algorithm, and that...sensor verification areas or target lanes. F.2. TESTING OBJECTIVES. a. Testing objectives will impact on the test design. Some examples of...overall safety, performance, and reliability of the system. It describes activities necessary to ensure safety is designed into the system under test

  4. Variance of foot biomechanical parameters across age groups for the elderly people in Romania

    NASA Astrophysics Data System (ADS)

    Deselnicu, D. C.; Vasilescu, A. M.; Militaru, G.

    2017-10-01

    The paper presents the results of a fieldwork study conducted in order to analyze major causal factors that influence the foot deformities and pathologies of elderly women in Romania. The study has an exploratory and descriptive nature and uses quantitative methodology. The sample consisted of 100 elderly women from Romania, ranging from 55 to over 75 years of age. The collected data were analyzed on multiple dimensions using statistical analysis software. The analysis of variance demonstrated significant differences across age groups in terms of several biomechanical parameters such as travel speed, toe-off phase and support phase in the case of elderly women.

  5. The effect of neonatal N-methyl-D-aspartate receptor blockade on exploratory and anxiety-like behaviors in adult BALB/c and C57BL/6 mice.

    PubMed

    Akillioglu, Kubra; Binokay, Secil; Kocahan, Sayad

    2012-07-15

    N-methyl-D-aspartate (NMDA) receptors play an important role in brain maturation and developmental processes. In our study, we evaluated the effects of neonatal NMDA receptor blockade on exploratory locomotion and anxiety-like behaviors of adult BALB/c and C57BL/6 mice. In this study, NMDA receptor hypofunction was induced 7-10 days after birth using MK-801 in BALB/c and C57BL/6 mice (0.25mg/kg twice a day for 4 days via intraperitoneal injection). The open-field (OF) and elevated plus maze (EPM) tests were used to evaluate exploratory locomotion and anxiety-like behaviors. In the OF, BALB/c mice spent less time in the center of the field (p<0.05) and had less vertical locomotor activity (p<0.01) compared to C57BL/6 mice. In BALB/c mice, MK-801 caused a decrease in vertical and horizontal locomotor activity in the OF test, compared to the control group (p<0.05). In C57BL/6 mice, MK-801 treatment increased horizontal locomotor activity and decreased time spent in the center in the OF test (p<0.05). In the EPM, the number of open-arm entries, the percentage of open-arm time (p<0.01) and total arm entries (p<0.05) were lower in BALB/c mice compared to C57BL/6 mice. In BALB/c mice, MK-801 caused an increase in the percentage of open-arm time compared to the control group (p<0.05). In C57BL/6 mice, MK-801 caused a decrease in the percentage of open-arm time compared to the control group (p<0.05). MK-801 decreased exploratory and anxiety-like behaviors in BALB/c mice. In contrast, MK-801 increased exploratory and anxiety-like behaviors in C57BL/6 mice. In conclusion, hereditary factors may play an important role in neonatal NMDA receptor blockade-induced responses. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. SSE software test management STM capability: Using STM in the Ground Systems Development Environment (GSDE)

    NASA Technical Reports Server (NTRS)

    Church, Victor E.; Long, D.; Hartenstein, Ray; Perez-Davila, Alfredo

    1992-01-01

    This report is one of a series discussing configuration management (CM) topics for Space Station ground systems software development. It provides a description of the Software Support Environment (SSE)-developed Software Test Management (STM) capability, and discusses the possible use of this capability for management of developed software during testing performed on target platforms. It is intended to supplement the formal documentation of STM provided by the SSE Project. How STM can be used to integrate contractor CM and formal CM for software before delivery to operations is described. STM provides a level of control that is flexible enough to support integration and debugging, but sufficiently rigorous to ensure the integrity of the testing process.

  7. Software testing

    NASA Astrophysics Data System (ADS)

    Price-Whelan, Adrian M.

    2016-01-01

    Now more than ever, scientific results are dependent on sophisticated software and analysis. Why should we trust code written by others? How do you ensure your own code produces sensible results? How do you make sure it continues to do so as you update, modify, and add functionality? Software testing is an integral part of code validation and writing tests should be a requirement for any software project. I will talk about Python-based tools that make managing and running tests much easier and explore some statistics for projects hosted on GitHub that contain tests.
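
    As a concrete illustration of the kind of Python-based testing the abstract refers to, the sketch below is a minimal pytest module; the function under test is defined inline purely for illustration and is not taken from the talk.

      # test_magnitudes.py -- run with `pytest`.
      import math
      import pytest

      def flux_to_magnitude(flux):
          """Toy function under test: convert a positive flux to a magnitude."""
          if flux <= 0:
              raise ValueError("flux must be positive")
          return -2.5 * math.log10(flux)

      def test_flux_ratio_of_100_is_5_magnitudes():
          assert math.isclose(flux_to_magnitude(1.0) - flux_to_magnitude(100.0), 5.0)

      def test_nonpositive_flux_is_rejected():
          with pytest.raises(ValueError):
              flux_to_magnitude(0.0)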

  8. Academic Testing and Grading with Spreadsheet Software.

    ERIC Educational Resources Information Center

    Ho, James K.

    1987-01-01

    Explains how spreadsheet software can be used in the design and grading of academic tests and in assigning grades. Macro programs and menu-driven software are highlighted and an example using IBM PCs and Lotus 1-2-3 software is given. (Author/LRW)

  9. Assessment Environment for Complex Systems Software Guide

    NASA Technical Reports Server (NTRS)

    2013-01-01

    This Software Guide (SG) describes the software developed to test the Assessment Environment for Complex Systems (AECS) by the West Virginia High Technology Consortium (WVHTC) Foundation's Mission Systems Group (MSG) for the National Aeronautics and Space Administration (NASA) Aeronautics Research Mission Directorate (ARMD). This software is referred to as the AECS Test Project throughout the remainder of this document. AECS provides a framework for developing, simulating, testing, and analyzing modern avionics systems within an Integrated Modular Avionics (IMA) architecture. The purpose of the AECS Test Project is twofold. First, it provides a means to test the AECS hardware and system developed by MSG. Second, it provides an example project upon which future AECS research may be based. This Software Guide fully describes building, installing, and executing the AECS Test Project as well as its architecture and design. The design of the AECS hardware is described in the AECS Hardware Guide. Instructions on how to configure, build and use the AECS are described in the User's Guide. Sample AECS software, developed by the WVHTC Foundation, is presented in the AECS Software Guide. The AECS Hardware Guide, AECS User's Guide, and AECS Software Guide are authored by MSG. The requirements set forth for AECS are presented in the Statement of Work for the Assessment Environment for Complex Systems authored by NASA Dryden Flight Research Center (DFRC). The intended audience for this document includes software engineers, hardware engineers, project managers, and quality assurance personnel from WVHTC Foundation (the suppliers of the software), NASA (the customer), and future researchers (users of the software). Readers are assumed to have general knowledge in the field of real-time, embedded computer software development.

  10. Software interface verifier

    NASA Technical Reports Server (NTRS)

    Soderstrom, Tomas J.; Krall, Laura A.; Hope, Sharon A.; Zupke, Brian S.

    1994-01-01

    A Telos study of 40 recent subsystem deliveries into the DSN at JPL found software interface testing to be the single most expensive and error-prone activity, and the study team suggested creating an automated software interface test tool. The resulting Software Interface Verifier (SIV), which was funded by NASA/JPL and created by Telos, employed 92 percent software reuse to quickly create an initial version which incorporated early user feedback. SIV is now successfully used by developers for interface prototyping and unit testing, by test engineers for formal testing, and by end users for non-intrusive data flow tests in the operational environment. Metrics, including cost, are included. Lessons learned include the need for early user training. SIV is ported to many platforms and can be successfully used or tailored by other NASA groups.

  11. Proactive Security Testing and Fuzzing

    NASA Astrophysics Data System (ADS)

    Takanen, Ari

    Software is bound to have security critical flaws, and no testing or code auditing can ensure that software is flaw-less. But software security testing requirements have improved radically during the past years, largely due to criticism from security conscious consumers and Enterprise customers. Whereas in the past, security flaws were taken for granted (and patches were quietly and humbly installed), they now are probably one of the most common reasons why people switch vendors or software providers. The maintenance costs from security updates often add to become one of the biggest cost items to large Enterprise users. Fortunately test automation techniques have also improved. Techniques like model-based testing (MBT) enable efficient generation of security tests that reach good confidence levels in discovering zero-day mistakes in software. This technique is called fuzzing.
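
    To make the idea concrete, the sketch below is a toy random fuzzer: it feeds generated inputs to a small, invented parser and records any failure other than the error the parser is documented to raise. Real fuzzers, including the model-based generation described above, are far more sophisticated; everything here is an illustrative assumption.

      import random
      import string

      def parse_config(text):
          """Toy parser under test: expects 'key=value' lines."""
          result = {}
          for line in text.splitlines():
              if not line.strip():
                  continue
              key, _, value = line.partition("=")
              if not key:
                  raise ValueError("missing key")
              result[key.strip()] = value.strip()
          return result

      def fuzz(iterations=10_000, seed=1):
          """Throw random inputs at the parser and collect unexpected crashes."""
          rng = random.Random(seed)
          crashes = []
          for _ in range(iterations):
              text = "".join(rng.choice(string.printable)
                             for _ in range(rng.randint(0, 200)))
              try:
                  parse_config(text)
              except ValueError:
                  pass                        # documented, expected failure mode
              except Exception as exc:        # unexpected behaviour worth triaging
                  crashes.append((text, repr(exc)))
          return crashes

      if __name__ == "__main__":
          print(f"unexpected failures: {len(fuzz())}")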

  12. Agile deployment and code coverage testing metrics of the boot software on-board Solar Orbiter's Energetic Particle Detector

    NASA Astrophysics Data System (ADS)

    Parra, Pablo; da Silva, Antonio; Polo, Óscar R.; Sánchez, Sebastián

    2018-02-01

    In this day and age, successful embedded critical software needs agile and continuous development and testing procedures. This paper presents the overall testing and code coverage metrics obtained during the unit testing procedure carried out to verify the correctness of the boot software that will run in the Instrument Control Unit (ICU) of the Energetic Particle Detector (EPD) on-board Solar Orbiter. The ICU boot software is a critical part of the project so its verification should be addressed at an early development stage, so any test case missed in this process may affect the quality of the overall on-board software. According to the European Cooperation for Space Standardization ESA standards, testing this kind of critical software must cover 100% of the source code statement and decision paths. This leads to the complete testing of fault tolerance and recovery mechanisms that have to resolve every possible memory corruption or communication error brought about by the space environment. The introduced procedure enables fault injection from the beginning of the development process and enables to fulfill the exigent code coverage demands on the boot software.
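
    The statement- and decision-coverage requirement described above can be illustrated with a small unit-test sketch: the fault-recovery routine below is invented for illustration, and each test exercises a different branch so that a coverage tool (for example pytest with a coverage plug-in) can confirm that every decision outcome was executed.

      # test_boot_image_selection.py -- illustrative only; not the EPD boot code.
      import pytest

      def select_boot_image(primary_crc_ok, backup_crc_ok):
          """Toy fault-recovery logic: fall back to the backup image."""
          if primary_crc_ok:
              return "primary"
          if backup_crc_ok:
              return "backup"
          raise RuntimeError("no valid boot image")

      def test_primary_image_used_when_valid():
          assert select_boot_image(True, True) == "primary"

      def test_fallback_to_backup_on_injected_corruption():
          # Injected fault: the primary image fails its CRC check.
          assert select_boot_image(False, True) == "backup"

      def test_double_fault_is_reported():
          with pytest.raises(RuntimeError):
              select_boot_image(False, False)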

  13. Florida alternative NTCIP testing software (ANTS) for actuated signal controllers.

    DOT National Transportation Integrated Search

    2009-01-01

    The scope of this research project included the development of a software tool to test devices for NTCIP compliance. The Florida Alternative NTCIP Testing Software (ANTS) was developed by the research team due to limitations found w...

  14. Individual consistency in exploratory behaviour and mating tactics in male guppies

    NASA Astrophysics Data System (ADS)

    Kelley, Jennifer L.; Phillips, Samuel C.; Evans, Jonathan P.

    2013-10-01

    While behavioural plasticity is considered an adaptation to fluctuating social and environmental conditions, many animals also display a high level of individual consistency in their behaviour over time or across contexts (generally termed ‘personality’). However, studies of animal personalities that include sexual behaviour, or functionally distinct but correlated traits, are relatively scarce. In this study, we tested for individual behavioural consistency in courtship and exploratory behaviour in male guppies ( Poecilia reticulata) in two light environments (high vs. low light intensity). Based on previous work on guppies, we predicted that males would modify their behaviour from sneak mating tactics to courtship displays under low light conditions, but also that the rank orders of courtship effort would remain unchanged (i.e. highly sexually active individuals would display relatively high levels of courtship under both light regimes). We also tested for correlations between courtship and exploratory behaviour, predicting that males that had high display rates would also be more likely to approach a novel object. Although males showed significant consistency in their exploratory and mating behaviour over time (1 week), we found no evidence that these traits constituted a behavioural syndrome. Furthermore, in contrast to previous work, we found no overall effect of the light environment on any of the behaviours measured, although males responded to the treatment on an individual-level basis, as reflected by a significant individual-by-environment interaction. The future challenge is to investigate how individual consistency across different environmental contexts relates to male reproductive success.

  15. Taking advantage of ground data systems attributes to achieve quality results in testing software

    NASA Technical Reports Server (NTRS)

    Sigman, Clayton B.; Koslosky, John T.; Hageman, Barbara H.

    1994-01-01

    During the software development life cycle process, basic testing starts with the development team. At the end of the development process, an acceptance test is performed for the user to ensure that the deliverable is acceptable. Ideally, the delivery is an operational product with zero defects. However, the goal of zero defects is normally not achieved but is successful to various degrees. With the emphasis on building low cost ground support systems while maintaining a quality product, a key element in the test process is simulator capability. This paper reviews the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) test tool that is used in the acceptance test process for unmanned satellite operations control centers. The TASS is designed to support the development, test and operational environments of the Goddard Space Flight Center (GSFC) operations control centers. The TASS uses the same basic architecture as the operations control center. This architecture is characterized by its use of distributed processing, industry standards, commercial off-the-shelf (COTS) hardware and software components, and reusable software. The TASS uses much of the same TPOCC architecture and reusable software that the operations control center developer uses. The TASS also makes use of reusable simulator software in the mission specific versions of the TASS. Very little new software needs to be developed, mainly mission specific telemetry communication and command processing software. By taking advantage of the ground data system attributes, successful software reuse for operational systems provides the opportunity to extend the reuse concept into the test area. Consistency in test approach is a major step in achieving quality results.

  16. Framing matters: Effects of framing on older adults’ exploratory decision-making

    PubMed Central

    Cooper, Jessica A.; Blanco, Nathaniel; Maddox, W. Todd

    2016-01-01

    We examined framing effects on exploratory decision-making. In Experiment 1 we tested older and younger adults in two decision-making tasks separated by one week, finding that older adults’ decision-making performance was preserved when maximizing gains, but declined when minimizing losses. Computational modeling indicates that younger adults in both conditions, and older adults in gains-maximization, utilized a decreasing threshold strategy (which is optimal), but older adults in losses were better fit by a fixed-probability model of exploration. In Experiment 2 we examined within-subjects behavior in older and younger adults in the same exploratory decision-making task, but without a time separation between tasks. We replicated the older adult disadvantage in loss-minimization from Experiment 1, and found that the older adult deficit was significantly reduced when the loss-minimization task immediately followed the gains-maximization task. We conclude that older adults’ performance in exploratory decision-making is hindered when framed as loss-minimization, but that this deficit is attenuated when older adults can first develop a strategy in a gains-framed task. PMID:27977218

  17. Framing matters: Effects of framing on older adults' exploratory decision-making.

    PubMed

    Cooper, Jessica A; Blanco, Nathaniel J; Maddox, W Todd

    2017-02-01

    We examined framing effects on exploratory decision-making. In Experiment 1 we tested older and younger adults in two decision-making tasks separated by one week, finding that older adults' decision-making performance was preserved when maximizing gains, but it declined when minimizing losses. Computational modeling indicates that younger adults in both conditions, and older adults in gains maximization, utilized a decreasing threshold strategy (which is optimal), but older adults in losses were better fit by a fixed-probability model of exploration. In Experiment 2 we examined within-subject behavior in older and younger adults in the same exploratory decision-making task, but without a time separation between tasks. We replicated the older adult disadvantage in loss minimization from Experiment 1 and found that the older adult deficit was significantly reduced when the loss-minimization task immediately followed the gains-maximization task. We conclude that older adults' performance in exploratory decision-making is hindered when framed as loss minimization, but that this deficit is attenuated when older adults can first develop a strategy in a gains-framed task. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. Correlation of Selected Cognitive Abilities and Cognitive Processing Parameters: An Exploratory Study.

    ERIC Educational Resources Information Center

    Snow, Richard E.; And Others

    This pilot study investigated some relationships between tested ability variables and processing parameters obtained from memory search and visual search tasks. The 25 undergraduates who participated had also participated in a previous investigation by Chiang and Atkinson. A battery of traditional ability tests and several film tests were…

  19. Modernized build and test infrastructure for control software at ESO: highly flexible building, testing, and automatic quality practices for telescope control software

    NASA Astrophysics Data System (ADS)

    Pellegrin, F.; Jeram, B.; Haucke, J.; Feyrin, S.

    2016-07-01

    The paper describes the introduction of a new automated build and test infrastructure, based on the open-source software Jenkins, into the ESO Very Large Telescope control software to replace the preexisting in-house solution. A brief introduction to software quality practices is given, along with a description of the previous solution, its limitations, and new upcoming requirements. The modifications required to adapt the new system are described, how they were applied to the current software, and the results obtained. An overview of how the new system may be used in future projects is also presented.

  20. Field Test of Route Planning Software for Lunar Polar Missions

    NASA Astrophysics Data System (ADS)

    Horchler, A. D.; Cunningham, C.; Jones, H. L.; Arnett, D.; Fang, E.; Amoroso, E.; Otten, N.; Kitchell, F.; Holst, I.; Rock, G.; Whittaker, W.

    2017-10-01

    A novel field test paradigm has been developed to demonstrate and validate route planning software in the stark low-angled light and sweeping shadows a rover would experience at the poles of the Moon. Software, ConOps, and test results are presented.

  1. Parallel-Processing Software for Creating Mosaic Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric

    2008-01-01

    A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
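
    The slice-per-CPU decomposition described above can be sketched with Python's multiprocessing module; the warping step is reduced to a placeholder, since the real program's camera models and pixel correlation are not reproduced here.

      from multiprocessing import Pool

      import numpy as np

      def warp_slice(task):
          """Placeholder for warping one horizontal slice of the mosaic."""
          row_start, row_end, width = task
          return row_start, np.zeros((row_end - row_start, width), dtype=np.float32)

      def build_mosaic(height=2048, width=4096, n_workers=4):
          # Divide the output mosaic into row slices, one task per worker.
          bounds = np.linspace(0, height, n_workers + 1, dtype=int)
          tasks = [(bounds[i], bounds[i + 1], width) for i in range(n_workers)]
          mosaic = np.empty((height, width), dtype=np.float32)
          with Pool(processes=n_workers) as pool:
              # Gather the finished slices and place them into the final mosaic.
              for row_start, block in pool.map(warp_slice, tasks):
                  mosaic[row_start:row_start + block.shape[0]] = block
          return mosaic

      if __name__ == "__main__":
          print(build_mosaic().shape)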

  2. Proceedings of the Annual Ada Software Engineering Education and Training Symposium (3rd) Held in Denver, Colorado on June 14-16, 1988

    DTIC Science & Technology

    1988-06-01

    Based Software Engineering Project Course ... Software Engineering, Software Engineering Concepts: The Importance of Object-Based...quality assurance, and independent system testing. The Chief Programmer is responsible for all software development activities, including prototyping...during the Requirements Analysis phase, the Preliminary Design, the Detailed Design, Coding and Unit Testing, CSC Integration and Testing, and informal

  3. Software OT&E Guidelines. Volume 1. Software Test Manager’s Handbook

    DTIC Science & Technology

    1981-02-01

    The Software OT&E Guidelines is a set of handbooks prepared by the Computer/Support Systems...is one of a set of handbooks prepared by the Computer/Support Systems Division of the Test and Evaluation Directorate, Air Force Test and Evaluation...E. Software Maintainability, F. Standard Questionnaires, 1. Operator-Computer Interface Evaluation

  4. A socio-technical model to explore urban water systems scenarios.

    PubMed

    de Haan, Fjalar J; Ferguson, Briony C; Deletic, Ana; Brown, Rebekah R

    2013-01-01

    This article reports on the ongoing work and research involved in the development of a socio-technical model of urban water systems. Socio-technical means the model is not so much concerned with the technical or biophysical aspects of urban water systems, but rather with the social and institutional implications of the urban water infrastructure and vice versa. A socio-technical model, in the view purported in this article, produces scenarios of different urban water servicing solutions gaining or losing influence in meeting water-related societal needs, like potable water, drainage, environmental health and amenity. The urban water system is parameterised with vectors of the relative influence of each servicing solution. The model is a software implementation of the Multi-Pattern Approach, a theory on societal systems, like urban water systems, and how these develop and go through transitions under various internal and external conditions. Acknowledging that social dynamics comes with severe and non-reducible uncertainties, the model is set up to be exploratory, meaning that for any initial condition several possible future scenarios are produced. This article gives a concise overview of the necessary theoretical background, the model architecture and some initial test results using a drainage example.

  5. Exploratory investigation of the HIPPO gas-jet target fluid dynamic properties

    NASA Astrophysics Data System (ADS)

    Meisel, Zach; Shi, Ke; Jemcov, Aleksandar; Couder, Manoel

    2016-08-01

    In order to optimize the performance of gas-jet targets for future nuclear reaction measurements, a detailed understanding of the dependence of the gas-jet properties on experiment design parameters is required. Common methods of gas-jet characterization rely on measuring the effective thickness using nuclear elastic scattering and energy loss techniques; however, these tests are time intensive and limit the range of design modifications which can be explored to improve the properties of the jet as a nuclear reaction target. Thus, a more rapid jet-characterization method is desired. We performed the first steps towards characterizing the gas-jet density distribution of the HIPPO gas-jet target at the University of Notre Dame's Nuclear Science Laboratory by reproducing results from 20Ne(α,α)20Ne elastic scattering measurements with computational fluid dynamics (CFD) simulations performed with the state-of-the-art CFD software ANSYS Fluent. We find a strong sensitivity to experimental design parameters of the gas-jet target, such as the jet nozzle geometry and ambient pressure of the target chamber. We argue that improved predictive power will require moving to three-dimensional simulations and additional benchmarking with experimental data.

  6. Cost-Sensitive Radial Basis Function Neural Network Classifier for Software Defect Prediction

    PubMed Central

    Venkatesan, R.

    2016-01-01

    Effective prediction of software modules, those that are prone to defects, will enable software developers to achieve efficient allocation of resources and to concentrate on quality assurance activities. The process of software development life cycle basically includes design, analysis, implementation, testing, and release phases. Generally, software testing is a critical task in the software development process wherein it is to save time and budget by detecting defects at the earliest and deliver a product without defects to the customers. This testing phase should be carefully operated in an effective manner to release a defect-free (bug-free) software product to the customers. In order to improve the software testing process, fault prediction methods identify the software parts that are more noted to be defect-prone. This paper proposes a prediction approach based on conventional radial basis function neural network (RBFNN) and the novel adaptive dimensional biogeography based optimization (ADBBO) model. The developed ADBBO based RBFNN model is tested with five publicly available datasets from the NASA data program repository. The computed results prove the effectiveness of the proposed ADBBO-RBFNN classifier approach with respect to the considered metrics in comparison with that of the early predictors available in the literature for the same datasets. PMID:27738649

  7. Cost-Sensitive Radial Basis Function Neural Network Classifier for Software Defect Prediction.

    PubMed

    Kumudha, P; Venkatesan, R

    Effective prediction of software modules, those that are prone to defects, will enable software developers to achieve efficient allocation of resources and to concentrate on quality assurance activities. The process of software development life cycle basically includes design, analysis, implementation, testing, and release phases. Generally, software testing is a critical task in the software development process wherein it is to save time and budget by detecting defects at the earliest and deliver a product without defects to the customers. This testing phase should be carefully operated in an effective manner to release a defect-free (bug-free) software product to the customers. In order to improve the software testing process, fault prediction methods identify the software parts that are more noted to be defect-prone. This paper proposes a prediction approach based on conventional radial basis function neural network (RBFNN) and the novel adaptive dimensional biogeography based optimization (ADBBO) model. The developed ADBBO based RBFNN model is tested with five publicly available datasets from the NASA data program repository. The computed results prove the effectiveness of the proposed ADBBO-RBFNN classifier approach with respect to the considered metrics in comparison with that of the early predictors available in the literature for the same datasets.
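
    The ADBBO optimization that is central to the paper is not reproduced here; purely as a point of reference, the sketch below is a plain radial basis function network (k-means centres, Gaussian activations, a logistic read-out) built with scikit-learn and exercised on synthetic, imbalanced data.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      class SimpleRBFNN:
          """Plain RBF network: k-means centres, Gaussian hidden layer, linear read-out."""

          def __init__(self, n_centers=30, gamma=0.1):
              self.n_centers = n_centers
              self.gamma = gamma

          def _activations(self, X):
              d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(axis=2)
              return np.exp(-self.gamma * d2)

          def fit(self, X, y):
              self.centers_ = KMeans(n_clusters=self.n_centers, n_init=10,
                                     random_state=0).fit(X).cluster_centers_
              self.readout_ = LogisticRegression(max_iter=1000).fit(self._activations(X), y)
              return self

          def predict(self, X):
              return self.readout_.predict(self._activations(X))

      if __name__ == "__main__":
          # Synthetic stand-in for a defect dataset (defects are the minority class).
          X, y = make_classification(n_samples=500, n_features=20,
                                     weights=[0.85, 0.15], random_state=0)
          X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
          model = SimpleRBFNN().fit(X_tr, y_tr)
          print("accuracy:", (model.predict(X_te) == y_te).mean())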

  8. Coordination and organization of security software process for power information application environment

    NASA Astrophysics Data System (ADS)

    Wang, Qiang

    2017-09-01

    As an important part of software engineering, the software process decides the success or failure of a software product. The design and development features of a security software process are discussed, as are the necessity and present significance of using such a process. The process for security software and its testing, coordinated with the functional software, is discussed in depth. The process includes requirements analysis, design, coding, debugging and testing, submission, and maintenance. For each phase, the paper proposes subprocesses to support software security. As an example, the paper applies this process to the power information platform.

  9. The Rapid Integration and Test Environment: A Process for Achieving Software Test Acceptance

    DTIC Science & Technology

    2010-05-01

    The Rapid Integration and Test Environment: A Process for Achieving Software Test Acceptance. Patrick V...was awarded the Bronze Star. Introduction: The Rapid Integration and Test Environment (RITE) initiative, implemented by the Program Executive Office

  10. IMCS reflight certification requirements and design specifications

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The requirements for reflight certification are established. Software requirements encompass the software programs that are resident in the PCC, DEP, PDSS, EC, or any related GSE. A design approach for the reflight software packages is recommended. These designs will be of sufficient detail to permit the implementation of reflight software. The PDSS/IMC Reflight Certification system provides the tools and mechanisms for the user to perform the reflight certification test procedures, test data capture, test data display, and test data analysis. The system as defined will be structured to permit maximum automation of reflight certification procedures and test data analysis.

  11. French validation of the internet addiction test.

    PubMed

    Khazaal, Yasser; Billieux, Joël; Thorens, Gabriel; Khan, Riaz; Louati, Youssr; Scarlatti, Elisa; Theintz, Florence; Lederrey, Jerome; Van Der Linden, Martial; Zullino, Daniele

    2008-12-01

    The main goal of the present study is to investigate the psychometric properties of a French version of the Internet Addiction Test (IAT) and to assess its relationship with both time spent on Internet and online gaming. The French version of the Young's Internet Addiction Test (IAT) was administered to a sample of 246 adults. Exploratory and confirmatory analyses were carried out. We discovered that a one-factor model of the IAT has good psychometric properties and fits the data well, which is not the case of a six-factor model as found in previous studies using exploratory methods. Correlation analysis revealed positive significant relationships between IAT scores and both the daily duration of Internet use and the fact of being an online player. In addition, younger people scored higher on the IAT. The one-factor model found in this study has to be replicated in other IAT language versions.

  12. Exploring the Factor Structure of Neurocognitive Measures in Older Individuals

    PubMed Central

    Santos, Nadine Correia; Costa, Patrício Soares; Amorim, Liliana; Moreira, Pedro Silva; Cunha, Pedro; Cotter, Jorge; Sousa, Nuno

    2015-01-01

    Here we focus on factor analysis from a best practices point of view, by investigating the factor structure of neuropsychological tests and using the results obtained to illustrate how to choose a reasonable solution. The sample (n=1051 individuals) was randomly divided into two groups: one for exploratory factor analysis (EFA) and principal component analysis (PCA), to investigate the number of factors underlying the neurocognitive variables; the second to test the “best fit” model via confirmatory factor analysis (CFA). For the exploratory step, three extraction (maximum likelihood, principal axis factoring and principal components) and two rotation (orthogonal and oblique) methods were used. The analysis methodology made it possible to explore how different cognitive/psychological tests correlated with and discriminated between dimensions, indicating that to capture latent structures in similar sample sizes and measures, with approximately normal data distribution, reflective models with oblimin rotation might prove the most adequate. PMID:25880732
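
    The exploratory step described above (several extraction methods, an oblique rotation allowing correlated factors) can be sketched with the Python factor_analyzer package; the package choice, the input file, the column layout and the three-factor setting are all assumptions made for illustration.

      import pandas as pd
      from factor_analyzer import FactorAnalyzer

      # Hypothetical table of neurocognitive test scores, one column per measure.
      scores = pd.read_csv("neurocognitive_scores.csv")

      # Compare two extraction methods with an oblique 'oblimin' rotation.
      for method in ("principal", "ml"):
          efa = FactorAnalyzer(n_factors=3, method=method, rotation="oblimin")
          efa.fit(scores)
          loadings = pd.DataFrame(efa.loadings_, index=scores.columns)
          print(f"--- {method} extraction ---")
          print(loadings.round(2))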

  13. Cassini's Test Methodology for Flight Software Verification and Operations

    NASA Technical Reports Server (NTRS)

    Wang, Eric; Brown, Jay

    2007-01-01

    The Cassini spacecraft was launched on 15 October 1997 on a Titan IV-B launch vehicle. The spacecraft is comprised of various subsystems, including the Attitude and Articulation Control Subsystem (AACS). The AACS Flight Software (FSW) and its development has been an ongoing effort, from the design, development and finally operations. As planned, major modifications to certain FSW functions were designed, tested, verified and uploaded during the cruise phase of the mission. Each flight software upload involved extensive verification testing. A standardized FSW testing methodology was used to verify the integrity of the flight software. This paper summarizes the flight software testing methodology used for verifying FSW from pre-launch through the prime mission, with an emphasis on flight experience testing during the first 2.5 years of the prime mission (July 2004 through January 2007).

  14. Mars Science Laboratory Flight Software Boot Robustness Testing Project Report

    NASA Technical Reports Server (NTRS)

    Roth, Brian

    2011-01-01

    On the surface of Mars, the Mars Science Laboratory will boot up its flight computers every morning, having charged the batteries through the night. This boot process is complicated, critical, and affected by numerous hardware states that can be difficult to test. The hardware test beds do not facilitate long runs of back-to-back unattended automated tests, and although the software simulation has provided the necessary functionality and fidelity for this boot testing, it has not supported the full flexibility necessary for this task. Therefore, to perform this testing, a framework has been built around the software simulation that supports running automated tests while loading a variety of starting configurations for software and hardware states. This implementation has been tested against the nominal cases to validate the methodology, and support for configuring off-nominal cases is ongoing. The implication of this testing is that the introduction of input configurations that have so far proved difficult to test may reveal boot scenarios worth higher-fidelity investigation, and in other cases increase confidence in the robustness of the flight software boot process.
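
    A minimal sketch of the kind of harness described, iterating over combinations of starting hardware and software states, booting the software simulation for each, and recording the outcome, is shown below. Every name here is a hypothetical placeholder; it is not the MSL tooling or its simulation interface.

      # Hypothetical boot-robustness harness: one simulated boot per starting
      # configuration, with exceptions recorded as failures rather than crashes.
      import itertools
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class BootConfig:
          battery_state: str     # e.g. "charged" or "degraded"
          prime_computer: str    # which flight computer is prime
          nvram_flag: bool       # a persisted state flag carried across boots

      def run_campaign(boot_simulation):
          """Run one simulated boot per configuration; boot_simulation is a
          caller-supplied callable wrapping the project's simulator."""
          results = {}
          for battery, prime, nvram in itertools.product(
                  ["charged", "degraded"], ["A", "B"], [True, False]):
              cfg = BootConfig(battery, prime, nvram)
              try:
                  results[cfg] = bool(boot_simulation(cfg))
              except Exception as exc:   # an exception is a finding, not a harness crash
                  print(f"{cfg}: boot raised {exc!r}")
                  results[cfg] = False
          return results

      # Stand-in simulator that only fails one corner case, for demonstration.
      demo = run_campaign(lambda cfg: not (cfg.battery_state == "degraded" and cfg.nvram_flag))
      print(sum(demo.values()), "of", len(demo), "configurations booted nominally")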

  15. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Tai, K. C.; Vouk, Mladen A.

    1987-01-01

    The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued and formulation of new models was initiated.
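
    For context, the voting being evaluated in such fault-tolerant configurations can be illustrated with a simple majority voter over the outputs of independently developed versions. This is a generic sketch, not the RSDIMU test bed from the study.

      # Generic majority voter over N software versions' outputs. With small
      # output spaces, identical wrong answers (correlated faults) can out-vote
      # the correct one, which is why voting reliability itself must be evaluated.
      from collections import Counter

      def majority_vote(outputs):
          """Return the value produced by a strict majority of versions, else None."""
          value, count = Counter(outputs).most_common(1)[0]
          return value if count > len(outputs) / 2 else None

      print(majority_vote([42, 42, 41]))   # -> 42 (one faulty version out-voted)
      print(majority_vote([42, 41, 40]))   # -> None (no majority: flag for recovery)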

  16. Integration of Personalized Healthcare Pathways in an ICT Platform for Diabetes Managements: A Small-Scale Exploratory Study.

    PubMed

    Fico, Giuseppe; Fioravanti, Alessio; Arredondo, Maria Teresa; Gorman, Joe; Diazzi, Chiara; Arcuri, Giovanni; Conti, Claudio; Pirini, Giampiero

    2016-01-01

    The availability of new tools able to support patient monitoring and personalized care may substantially improve the quality of chronic disease management. A personalized healthcare pathway (PHP) has been developed for diabetes disease management and integrated into an information and communication technology system to accomplish a shift from organization-centered care to patient-centered care. A small-scale exploratory study was conducted to test the platform. Preliminary results are presented that shed light on how the PHP influences system usage and performance outcomes.

  17. Multi-version software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1989-01-01

    A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between coverage provided by some higher metrics and the elimination of faults in the code. Back-to-back testing was continued as an efficient mechanism for the removal of uncorrelated faults and common-cause faults of variable span. Work on software reliability estimation methods based on non-random sampling was also continued, as was work on the relationship between software reliability and code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were finished, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance testing scheme.
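
    Back-to-back testing, mentioned above as a mechanism for removing uncorrelated faults, amounts to driving two or more independently developed versions with the same inputs and flagging any disagreement for inspection. A small illustrative sketch (the two "versions" below are stand-ins, not the study's software):

      # Back-to-back testing sketch: same inputs into two versions; disagreements
      # above a tolerance are collected for fault diagnosis.
      import math
      import random

      def version_a(x: float) -> float:
          return math.sin(x)

      def version_b(x: float) -> float:
          # A hypothetical independently developed implementation (series expansion).
          return x - x**3 / 6 + x**5 / 120 - x**7 / 5040

      def back_to_back(n_cases: int = 1000, tol: float = 1e-6):
          disagreements = []
          for _ in range(n_cases):
              x = random.uniform(-1.0, 1.0)
              a, b = version_a(x), version_b(x)
              if abs(a - b) > tol:
                  disagreements.append((x, a, b))
          return disagreements

      print(f"{len(back_to_back())} disagreeing test cases to inspect")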

  18. IHE cross-enterprise document sharing for imaging: interoperability testing software

    PubMed Central

    2010-01-01

    Background With the deployment of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. Results In this paper we describe software that is used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the chosen design solutions. Conclusions EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for an easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specification ambiguities, or to resolve implementation difficulties. PMID:20858241

  19. IHE cross-enterprise document sharing for imaging: interoperability testing software.

    PubMed

    Noumeir, Rita; Renaud, Bérubé

    2010-09-21

    With the deployment of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. In this paper we describe software that is used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the chosen design solutions. EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for an easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specification ambiguities, or to resolve implementation difficulties.
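
    The core idea, test software that plays the role of a communication partner and supplies known test data, can be illustrated generically. The sketch below is an assumption-laden simplification: the endpoint URL and JSON payload are hypothetical, and the real XDS-I transactions use ebXML/SOAP messaging rather than a plain HTTP POST.

      # Generic interoperability smoke test acting as a simulated partner:
      # submit a known test document to the system under test and check the
      # acknowledgement. Endpoint and payload are hypothetical placeholders.
      import unittest
      import requests

      ENDPOINT = "http://sut.example.org/submit"   # hypothetical system under test

      class DocumentSharingInteropTest(unittest.TestCase):
          def test_submission_is_acknowledged(self):
              payload = {"patient_id": "TEST^PATIENT", "document": "imaging-manifest"}
              response = requests.post(ENDPOINT, json=payload, timeout=10)
              self.assertEqual(response.status_code, 200)
              self.assertIn("Success", response.text)

      if __name__ == "__main__":
          unittest.main()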

  20. Exploring the Use of Individualized, Reflective Guidance In an Educational Multi-User Virtual Environment

    NASA Astrophysics Data System (ADS)

    Nelson, Brian C.

    2007-02-01

    This study examines the patterns of use and potential impact of individualized, reflective guidance in an educational Multi-User Virtual Environment (MUVE). A guidance system embedded within a MUVE-based scientific inquiry curriculum was implemented with a sample of middle school students in an exploratory study investigating (a) whether access to the guidance system was associated with improved learning, (b) whether students viewing more guidance messages saw greater improvement on content tests than those viewing less, and (c) whether there were any differences in guidance use among boys and girls. Initial experimental findings showed that basic access to individualized guidance used with a MUVE had no measurable impact on learning. However, post-hoc exploratory analyses indicated that increased use of the system among those with access to it was positively associated with content test score gains. In addition, differences were found in overall learning outcomes by gender and in patterns of guidance use by boys and girls, with girls outperforming boys across a spectrum of guidance system use. Based on these exploratory findings, the paper suggests design guidelines for the development of guidance systems embedded in MUVEs and outlines directions for further research.

  1. Exploratory behavior is associated with plasma carotenoid accumulation in two congeneric species of waterfowl.

    PubMed

    Rowe, Melissah; Pierson, Kasey L; McGraw, Kevin J

    2015-06-01

    Recently, carotenoid pigments have received considerable attention as modulators of animal health and performance. While studies show that elevated carotenoid intake and accumulation can influence activities like parental care and escape-flight performance, little is known of how carotenoid status influences the expression of animal personality traits, which can be energy-demanding and entail survival costs but also rewarding in the context of foraging and mating. We experimentally investigated the effects of carotenoid availability on exploratory behavior and activity level, using adult males and females of two species of waterfowl: mallard (Anas platyrhynchos) and northern pintail (Anas acuta). We assessed behavior using a novel-environment test designed to measure an individual's response to novel objects and a potential predator threat (fox urine scent). We found that carotenoid availability was positively associated with some aspects of exploratory behavior: birds with higher concentrations of circulating carotenoids entered the test arena sooner and approached and entered predator-scented bedding material more frequently than birds with low carotenoid concentrations. These results suggest that the availability of carotenoid resources can influence personality traits in waterfowl, and we discuss putative physiological mechanisms underlying this effect. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. 15 CFR 740.9 - Temporary imports, exports, and reexports (TMP).

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... the end of the beta test period as defined by the software producer or, if the software producer does... States; and exports and reexports of beta test software. (a) Temporary exports and reexports—(1) Scope. You may export and reexport commodities and software for temporary use abroad (including use in...

  3. 15 CFR 740.9 - Temporary imports, exports, and reexports (TMP).

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... the end of the beta test period as defined by the software producer or, if the software producer does... States; and exports and reexports of beta test software. (a) Temporary exports and reexports—(1) Scope. You may export and reexport commodities and software for temporary use abroad (including use in...

  4. Mars Science Laboratory Boot Robustness Testing

    NASA Technical Reports Server (NTRS)

    Banazadeh, Payam; Lam, Danny

    2011-01-01

    Mars Science Laboratory (MSL) is one of the most complex spacecraft in the history of mankind. Due to the nature of its complexity, a large number of flight software (FSW) requirements have been written for implementation. In practice, these requirements necessitate very complex and very precise flight software with no room for error. One of flight software's responsibilities is to be able to boot up and check the state of all devices on the spacecraft after the wake-up process. This boot-up and initialization is crucial to mission success, since any misbehavior of the different devices needs to be handled through the flight software. I have created a test toolkit that allows the FSW team to exhaustively test the flight software under a variety of unexpected scenarios and validate that the flight software can handle any situation after booting up. The test includes initializing different devices on the spacecraft to different configurations and validating, at the end of the flight software boot-up, that the flight software has initialized those devices to what they are supposed to be in that particular scenario.

  5. Supporting the Use of CERT (registered trademark) Secure Coding Standards in DoD Acquisitions

    DTIC Science & Technology

    2012-07-01

    Capability Maturity Model IntegrationSM (CMMI®) [Davis 2009]. SM Team Software Process, TSP, and Capability Maturity Model Integration are service...STP Software Test Plan TEP Test and Evaluation Plan TSP Team Software Process V & V verification and validation CMU/SEI-2012-TN-016 | 47...Supporting the Use of CERT® Secure Coding Standards in DoD Acquisitions Tim Morrow ( Software Engineering Institute) Robert Seacord ( Software

  6. Tools for Embedded Computing Systems Software

    NASA Technical Reports Server (NTRS)

    1978-01-01

    A workshop was held to assess the state of tools for embedded systems software and to determine directions for tool development. A synopsis of the talk and the key figures of each workshop presentation, together with chairmen summaries, are presented. The presentations covered four major areas: (1) tools and the software environment (development and testing); (2) tools and software requirements, design, and specification; (3) tools and language processors; and (4) tools and verification and validation (analysis and testing). The utility and contribution of existing tools and research results for the development and testing of embedded computing systems software are described and assessed.

  7. Firing Room Remote Application Software Development

    NASA Technical Reports Server (NTRS)

    Liu, Kan

    2015-01-01

    The Engineering and Technology Directorate (NE) at National Aeronautics and Space Administration (NASA) Kennedy Space Center (KSC) is designing a new command and control system for the checkout and launch of Space Launch System (SLS) and future rockets. The purposes of the semester long internship as a remote application software developer include the design, development, integration, and verification of the software and hardware in the firing rooms, in particular with the Mobile Launcher (ML) Launch Accessories (LACC) subsystem. In addition, a software test verification procedure document was created to verify and checkout LACC software for Launch Equipment Test Facility (LETF) testing.

  8. Perspectives on Teacher Quality: Bilingual Education and ESL Teacher Certification, Test-Taking Experiences, and Instructional Practices

    ERIC Educational Resources Information Center

    Lemberger, Nancy; Reyes-Carrasquillo, Angela

    2011-01-01

    This descriptive exploratory study looked at the certification process, test-taking experiences, and instructional practices of a group of graduate bilingual education (BE) and English-as-a-Second-Language (ESL) teachers to understand why some had problems passing teacher certification tests after completing their degrees. The study surveyed 63 BE…

  9. Comparison of Male and Female Performance on the ATP Physics Test.

    ERIC Educational Resources Information Center

    Wheeler, Patricia; Harris, Abigail

    This exploratory study on the College Board's Admissions Testing Program (ATP) Physics Test can be divided into two main parts, each designed to address a specific set of questions: Part I, Are there any systematic differences in male/female performance on individual items or subgroups of items that can help in interpreting the differences between…

  10. Test Anxiety and GCSE Performance: The Effect of Gender and Socio-Economic Background

    ERIC Educational Resources Information Center

    Putwain, David William

    2008-01-01

    Despite a well established body of international literature describing the effect of test anxiety on student performance in a range of assessments, there has been little work conducted on samples of students from the UK. The purpose of this exploratory study is two-fold. First, to establish the relationship between test anxiety and assessment…

  11. Phonological Awareness and Speech Comprehensibility: An Exploratory Study

    ERIC Educational Resources Information Center

    Venkatagiri, H. S.; Levis, John M.

    2007-01-01

    This study examined whether differences in phonological awareness were related to differences in speech comprehensibility. Seventeen adults who learned English as a foreign language (EFL) in academic settings completed 14 tests of phonological awareness that measured their explicit knowledge of English phonological structures, and three tests of…

  12. Software Reliability, Measurement, and Testing. Volume 2. Guidebook for Software Reliability Measurement and Testing

    DTIC Science & Technology

    1992-04-01

    contractor’s existing data collection, analysis and corrective action system shall be utilized, with modification only as necessary to meet the...either from test or from analysis of field data . The procedures of MIL-STD-756B assume that the reliability of a 18 DEFINE IDENTIFY SOFTWARE LIFE CYCLE...to generate sufficient data to report a statistically valid reliability figure for a class of software. Casual data gathering accumulates data more

  13. Debates—Hypothesis testing in hydrology: Theory and practice

    NASA Astrophysics Data System (ADS)

    Pfister, Laurent; Kirchner, James W.

    2017-03-01

    The basic structure of the scientific method—at least in its idealized form—is widely championed as a recipe for scientific progress, but the day-to-day practice may be different. Here, we explore the spectrum of current practice in hypothesis formulation and testing in hydrology, based on a random sample of recent research papers. This analysis suggests that in hydrology, as in other fields, hypothesis formulation and testing rarely correspond to the idealized model of the scientific method. Practices such as "p-hacking" or "HARKing" (Hypothesizing After the Results are Known) are major obstacles to more rigorous hypothesis testing in hydrology, along with the well-known problem of confirmation bias—the tendency to value and trust confirmations more than refutations—among both researchers and reviewers. Nonetheless, as several examples illustrate, hypothesis tests have played an essential role in spurring major advances in hydrological theory. Hypothesis testing is not the only recipe for scientific progress, however. Exploratory research, driven by innovations in measurement and observation, has also underlain many key advances. Further improvements in observation and measurement will be vital to both exploratory research and hypothesis testing, and thus to advancing the science of hydrology.

  14. Adaptation of the ORTHO-15 test to Polish women and men.

    PubMed

    Brytek-Matera, Anna; Krupa, Magdalena; Poggiogalle, Eleonora; Donini, Lorenzo Maria

    2014-03-01

    There is a lack of Polish tools to measure behaviour related to orthorexia nervosa. The purpose of the present study was to validate the Polish version of the ORTHO-15 test. 341 women and 59 men (N = 400) were recruited, whose age ranged from 18 to 35 years. Mean age was 23.09 years (SD = 3.14) in women and 24.02 years (SD = 3.87) in men. The ORTHO-15 test and the EAT-26 test were used in the present study. Factor analysis (exploratory and confirmatory analysis) was used in the present study. Exploratory factor analysis performed on the initial 15 items from a random split half of the study group suggested a nine-item two-factor structure. Confirmatory factor analysis performed on the second randomly selected half of the study group supported this two-factor structure of the ORTHO-15 test. The Polish version of the ORTHO-15 test demonstrated an internal consistency (Cronbach's alpha) equal to 0.644. The Polish version of the ORTHO-15 test is a reliable and valuable instrument to assess obsessive attitudes related to healthy and proper nutrition in Polish female and male population.

  15. System Testing of Ground Cooling System Components

    NASA Technical Reports Server (NTRS)

    Ensey, Tyler Steven

    2014-01-01

    This internship focused primarily upon software unit testing of Ground Cooling System (GCS) components, one of the three types of tests (unit, integrated, and COTS/regression) utilized in software verification. Unit tests are used to test the software of the necessary components before it is implemented into the hardware. A unit test exercises the control data, usage procedures, and operating procedures of a particular component to determine whether the program is fit for use. Three different files are used to build and complete an efficient unit test: the Model Test file (.mdl), the Simulink SystemTest (.test), and the autotest (.m). The Model Test file includes the component that is being tested with the appropriate Discrete Physical Interface (DPI) for testing. The Simulink SystemTest is a program used to test all of the requirements of the component. The autotest verifies that the component passes Model Advisor and System Testing, and puts the results into the proper files. Once unit testing is completed on the GCS components, they can be implemented into the GCS Schematic and the software of the GCS model as a whole can be tested using integrated testing. Unit testing is a critical part of software verification; it allows more basic components to be tested before a higher-fidelity model is tested, making the testing process flow in an orderly manner.

  16. Free and open-source software application for the evaluation of coronary computed tomography angiography images.

    PubMed

    Hadlich, Marcelo Souza; Oliveira, Gláucia Maria Moraes; Feijóo, Raúl A; Azevedo, Clerio F; Tura, Bernardo Rangel; Ziemer, Paulo Gustavo Portela; Blanco, Pablo Javier; Pina, Gustavo; Meira, Márcio; Souza e Silva, Nelson Albuquerque de

    2012-10-01

    The standardization of medical images in 1993 was performed using the DICOM (Digital Imaging and Communications in Medicine) standard. Several tests use this standard and it is increasingly necessary to design software applications capable of handling this type of image; however, these software applications are not usually free and open-source, which hinders their adaptation to the most diverse interests. To develop and validate a free and open-source software application capable of handling DICOM coronary computed tomography angiography images. We developed and tested the ImageLab software in the evaluation of 100 tests randomly selected from a database. We carried out 600 tests divided between two observers using ImageLab and another software application supplied with Philips Brilliance computed tomography scanners, evaluating coronary lesions and plaques around the left main coronary artery (LMCA) and the anterior descending artery (ADA). To evaluate intraobserver, interobserver and intersoftware agreement, we used simple agreement and kappa statistics. The agreement observed between the software applications was generally classified as substantial or almost perfect in most comparisons. The ImageLab software agreed with the Philips software in the evaluation of coronary computed tomography angiography tests, especially in patients without lesions, with lesions < 50% in the LMCA and < 70% in the ADA. The agreement for lesions > 70% in the ADA was lower, but this is also observed when the anatomical reference standard is used.
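
    The agreement statistics used in this validation (simple agreement and kappa, between observers and between software applications) are straightforward to compute with standard tooling; a sketch assuming scikit-learn and hypothetical paired lesion gradings:

      # Inter-software / inter-observer agreement: simple agreement and Cohen's
      # kappa over paired categorical readings (the data below are hypothetical).
      from sklearn.metrics import cohen_kappa_score

      imagelab  = ["no_lesion", "<50%", ">70%", "no_lesion", "<50%"]
      reference = ["no_lesion", "<50%", ">70%", "<50%", "<50%"]

      simple = sum(a == b for a, b in zip(imagelab, reference)) / len(imagelab)
      kappa = cohen_kappa_score(imagelab, reference)
      print(f"simple agreement = {simple:.2f}, kappa = {kappa:.2f}")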

  17. Writing executable assertions to test flight software

    NASA Technical Reports Server (NTRS)

    Mahmood, A.; Andrews, D. M.; Mccluskey, E. J.

    1984-01-01

    An executable assertion is a logical statement about the variables or a block of code. If there is no error during execution, the assertion statement results in a true value. Executable assertions can be used for dynamic testing of software. They can be employed for validation during the design phase, and for exception handling and error detection during the operation phase. The present investigation is concerned with the problem of writing executable assertions, taking into account the use of assertions for testing flight software. The digital flight control system and the flight control software are discussed. The considered system provides autopilot and flight director modes of operation for automatic and manual control of the aircraft during all phases of flight. Attention is given to techniques for writing and using assertions to test flight software, an experimental setup to test flight software, and language features to support efficient use of assertions.
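
    An executable assertion of the kind described is simply a checked logical statement embedded in the code and evaluated every time the code runs. A minimal illustration follows; the routine, variable names, and limits are hypothetical, not taken from the cited flight software.

      # Executable assertions embedded in a hypothetical control-law routine:
      # a violated condition is detected immediately instead of propagating.
      def pitch_command(airspeed_kts: float, pitch_deg: float) -> float:
          # Input-range assertions (useful for validation during design and test,
          # and for error detection during operation).
          assert 0.0 <= airspeed_kts <= 600.0, f"airspeed out of range: {airspeed_kts}"
          assert -90.0 <= pitch_deg <= 90.0, f"pitch out of range: {pitch_deg}"

          command = 0.005 * airspeed_kts - 0.05 * pitch_deg   # placeholder control law

          # Post-condition assertion on the computed output.
          assert -10.0 <= command <= 10.0, f"command limit exceeded: {command}"
          return command

      print(pitch_command(250.0, 5.0))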

  18. HPC Software Stack Testing Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garvey, Cormac

    The HPC Software stack testing framework (hpcswtest) is used in the INL Scientific Computing Department to test the basic sanity and integrity of the HPC Software stack (Compilers, MPI, Numerical libraries and Applications) and to quickly discover hard failures, and as a by-product it will indirectly check the HPC infrastructure (network, PBS and licensing servers).
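
    A sanity check of this kind can be approximated with small smoke tests that invoke each component of the stack and report hard failures early. The sketch below is generic and hedged: the command list is illustrative, not the actual hpcswtest configuration.

      # Minimal smoke-test sketch for an HPC software stack.
      import shutil
      import subprocess

      CHECKS = {
          "compiler": ["gcc", "--version"],
          "mpi":      ["mpirun", "--version"],
          "numerics": ["python3", "-c", "import numpy; print(numpy.__version__)"],
      }

      def run_checks() -> int:
          failures = 0
          for name, cmd in CHECKS.items():
              if shutil.which(cmd[0]) is None:
                  print(f"[FAIL] {name}: {cmd[0]} not found")
                  failures += 1
                  continue
              result = subprocess.run(cmd, capture_output=True, text=True)
              print(f"[{'OK' if result.returncode == 0 else 'FAIL'}] {name}")
              failures += result.returncode != 0
          return failures

      if __name__ == "__main__":
          raise SystemExit(run_checks())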

  19. Results of the exploratory drill hole Ue5n, Frenchman Flat, Nevada Test Site. [Geologic and geophysical parameters of selected locations with anomalous seismic signals]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramspott, L.D.; McArthur, R.D.

    1977-02-18

    Exploratory hole Ue5n was drilled to a depth of 514 m in central Frenchman Flat, Nevada Test Site, as part of a program sponsored by the Nuclear Monitoring Office (NMO) of the Advanced Research Projects Agency (ARPA) to determine the geologic and geophysical parameters of selected locations with anomalous seismic signals. The specific goal of drilling Ue5n was to provide the site characteristics for emplacement sites U5b and U5e. We present here data on samples, geophysical logs, lithology and stratigraphy, and depth to the water table. From an analysis of the measurements of the physical properties, a set of recommended values is given.

  20. dictyExpress: a web-based platform for sequence data management and analytics in Dictyostelium and beyond.

    PubMed

    Stajdohar, Miha; Rosengarten, Rafael D; Kokosar, Janez; Jeran, Luka; Blenkus, Domen; Shaulsky, Gad; Zupan, Blaz

    2017-06-02

    Dictyostelium discoideum, a soil-dwelling social amoeba, is a model for the study of numerous biological processes. Research in the field has benefited mightily from the adoption of next-generation sequencing for genomics and transcriptomics. Dictyostelium biologists now face the widespread challenges of analyzing and exploring high dimensional data sets to generate hypotheses and discovering novel insights. We present dictyExpress (2.0), a web application designed for exploratory analysis of gene expression data, as well as data from related experiments such as Chromatin Immunoprecipitation sequencing (ChIP-Seq). The application features visualization modules that include time course expression profiles, clustering, gene ontology enrichment analysis, differential expression analysis and comparison of experiments. All visualizations are interactive and interconnected, such that the selection of genes in one module propagates instantly to visualizations in other modules. dictyExpress currently stores the data from over 800 Dictyostelium experiments and is embedded within a general-purpose software framework for management of next-generation sequencing data. dictyExpress allows users to explore their data in a broader context by reciprocal linking with dictyBase-a repository of Dictyostelium genomic data. In addition, we introduce a companion application called GenBoard, an intuitive graphic user interface for data management and bioinformatics analysis. dictyExpress and GenBoard enable broad adoption of next generation sequencing based inquiries by the Dictyostelium research community. Labs without the means to undertake deep sequencing projects can mine the data available to the public. The entire information flow, from raw sequence data to hypothesis testing, can be accomplished in an efficient workspace. The software framework is generalizable and represents a useful approach for any research community. To encourage more wide usage, the backend is open-source, available for extension and further development by bioinformaticians and data scientists.

  1. Testing of Safety-Critical Software Embedded in an Artificial Heart

    NASA Astrophysics Data System (ADS)

    Cha, Sungdeok; Jeong, Sehun; Yoo, Junbeom; Kim, Young-Gab

    Software is being used more frequently to control medical devices such as artificial hearts or robotic surgery systems. While many of the software safety issues in such systems are similar to those of other safety-critical systems (e.g., nuclear power plants), domain-specific properties may warrant the development of customized techniques to demonstrate the fitness of the system on patients. In this paper, we report the results of a preliminary analysis done on software controlling a Hybrid Ventricular Assist Device (H-VAD) developed by the Korea Artificial Organ Centre (KAOC). It is a state-of-the-art artificial heart which has completed the animal testing phase. We performed software testing in in-vitro experiments and animal experiments. An abnormal behaviour, never detected during extensive in-vitro analysis and animal testing, was found.

  2. A program downloader and other utility software for the DATAC bus monitor unit

    NASA Technical Reports Server (NTRS)

    Novacki, Stanley M., III

    1987-01-01

    A set of programs designed to facilitate software testing on the DATAC Bus Monitor is described. By providing a means to simplify program loading, firmware generation, and subsequent testing of programs, the overhead involved in software evaluation is reduced and that time is used more productively in performance analysis and improvement of current software.

  3. Prenatal stress exposure alters postnatal behavioral expression under conditions of novelty challenge in rhesus monkey infants.

    PubMed

    Schneider, M L

    1992-11-01

    This prospective study investigated whether mild maternal stress during pregnancy could alter the behavioral and affective responses in rhesus monkey infants in a complex, novel environment. Twenty-four rhesus monkey infants were tested on three occasions at 6 months of age in a novel environment. Twelve infants were derived from mothers exposed to a daily 10-min mild stressor from Day 90 to Day 145 postconception, while 12 were derived from mothers undisturbed during pregnancy. Prenatally stressed infants demonstrated more disturbance behavior, and lower levels of gross motor/exploratory behavior. Moreover, half of the prenatally stressed infants showed an abnormal response, falling asleep, while none of the control infants displayed this behavior. Males exhibited more clinging to surrogates, while females spent more time in gross motor/exploratory behaviors, with prenatally stressed males tending to spend the least time in gross motor/exploratory activity.

  4. 50 CFR 660.60 - Specifications and management measures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., for the following species: big skate, California skate, California scorpionfish, leopard shark... regulations at 50 CFR part § 600.745 for limited testing, public display, data collection, exploratory, health...

  5. 50 CFR 660.60 - Specifications and management measures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., for the following species: big skate, California skate, California scorpionfish, leopard shark... regulations at 50 CFR part § 600.745 for limited testing, public display, data collection, exploratory, health...

  6. Risk-Based Object Oriented Testing

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda H.; Stapko, Ruth; Gallo, Albert

    2000-01-01

    Software testing is a well-defined phase of the software development life cycle. Functional ("black box") testing and structural ("white box") testing are two methods of test case design commonly used by software developers. A lesser known testing method is risk-based testing, which takes into account the probability of failure of a portion of code as determined by its complexity. For object oriented programs, a methodology is proposed for identification of risk-prone classes. Risk-based testing is a highly effective testing technique that can be used to find and fix the most important problems as quickly as possible.
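
    The risk-identification step, ranking classes by complexity so the most failure-prone code is tested first, can be approximated with a small static scan. The sketch below uses only the Python standard library and a crude decision-point count as a stand-in for whatever complexity metric a project actually adopts.

      # Rank classes by a simple complexity proxy (decision-point count) so that
      # testing effort can be prioritized toward the riskiest classes first.
      import ast
      import sys

      DECISION_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp, ast.ExceptHandler)

      def class_complexities(source):
          scores = {}
          for node in ast.walk(ast.parse(source)):
              if isinstance(node, ast.ClassDef):
                  scores[node.name] = sum(
                      isinstance(child, DECISION_NODES) for child in ast.walk(node))
          return scores

      if __name__ == "__main__":
          ranked = sorted(class_complexities(open(sys.argv[1]).read()).items(),
                          key=lambda kv: -kv[1])
          for name, score in ranked:
              print(f"{score:4d}  {name}")   # highest-risk classes first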

  7. Proceedings of the Joint Logistics Commanders Joint Policy Coordinating Group on Computer Resource Management; Computer Software Management Software Workshop, 2-5 April 1979.

    DTIC Science & Technology

    1979-08-21

    Appendix s - Outline and Draft Material for Proposed Triservice Interim Guideline on Application of Software Acceptance Criteria....... 269 Appendix 9...AND DRAFT MATERIAL FOR PROPOSED TRISERVICE INTERIM GUIDELINE ON APPLICATION OF SOFTWARE ACCEPTANCE CRITERIA I I INTRODUCTION The purpose of this guide...contract item (CPCI) (code) 5. CPCI test plan 6. CPCI test procedures 7. CPCI test report 8. Handbooks and manuals. Al though additional material does

  8. The MIMIC Model as a Tool for Differential Bundle Functioning Detection

    ERIC Educational Resources Information Center

    Finch, W. Holmes

    2012-01-01

    Increasingly, researchers interested in identifying potentially biased test items are encouraged to use a confirmatory, rather than exploratory, approach. One such method for confirmatory testing is rooted in differential bundle functioning (DBF), where hypotheses regarding potential differential item functioning (DIF) for sets of items (bundles)…

  9. 77 FR 69658 - Agency Information Collection Activities; Proposed Collection; Comment Request: Generic Clearance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-20

    ... Activities; Proposed Collection; Comment Request: Generic Clearance for Cognitive, Pilot and Field Studies...) Title of the Form/Collection: BJS Generic Clearance for Cognitive, Pilot, and Field Test Studies. (3... respondents will be involved in exploratory, field test, pilot, cognitive, and focus group work conducted...

  10. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics

    PubMed Central

    2016-01-01

    Background We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. Objective To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. Methods The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Results Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. Conclusions IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise. PMID:27729304

  11. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics.

    PubMed

    Hoyt, Robert Eugene; Snider, Dallas; Thompson, Carla; Mantravadi, Sarita

    2016-10-11

    We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise.
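
    The metrics noted above as absent from the IBMWA predictive output (confusion matrix, sensitivity, specificity) are easy to compute alongside any model; a short sketch with scikit-learn and hypothetical binary labels:

      # Confusion matrix, sensitivity, and specificity for binary predictions.
      from sklearn.metrics import confusion_matrix

      y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # hypothetical observed outcomes
      y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # hypothetical model predictions

      tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
      sensitivity = tp / (tp + fn)   # true positive rate
      specificity = tn / (tn + fp)   # true negative rate
      print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")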

  12. Multi-modal virtual environment research at Armstrong Laboratory

    NASA Technical Reports Server (NTRS)

    Eggleston, Robert G.

    1995-01-01

    One mission of the Paul M. Fitts Human Engineering Division of Armstrong Laboratory is to improve the user interface for complex systems through user-centered exploratory development and research activities. In support of this goal, many current projects attempt to advance and exploit user-interface concepts made possible by virtual reality (VR) technologies. Virtual environments may be used as a general purpose interface medium, an alternative display/control method, a data visualization and analysis tool, or a graphically based performance assessment tool. An overview is given of research projects within the division on prototype interface hardware/software development, integrated interface concept development, interface design and evaluation tool development, and user and mission performance evaluation tool development.

  13. Information Power Grid Posters

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi

    2003-01-01

    This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provide seamless and uniform access to the geographically dispersed computational, data storage, networking, instrument, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of large-scale computing nodes into a distributed environment, remote access to high data rate instruments, and an exploratory grid environment.

  14. A taxonomy and discussion of software attack technologies

    NASA Astrophysics Data System (ADS)

    Banks, Sheila B.; Stytz, Martin R.

    2005-03-01

    Software is a complex thing. It is not an engineering artifact that springs forth from a design by simply following software coding rules; creativity and the human element are at the heart of the process. Software development is part science, part art, and part craft. Design, architecture, and coding are equally important activities and in each of these activities, errors may be introduced that lead to security vulnerabilities. Therefore, inevitably, errors enter into the code. Some of these errors are discovered during testing; however, some are not. The best way to find security errors, whether they are introduced as part of the architecture development effort or coding effort, is to automate the security testing process to the maximum extent possible and add this class of tools to the tools available, which aids in the compilation process, testing, test analysis, and software distribution. Recent technological advances, improvements in computer-generated forces (CGFs), and results in research in information assurance and software protection indicate that we can build a semi-intelligent software security testing tool. However, before we can undertake the security testing automation effort, we must understand the scope of the required testing, the security failures that need to be uncovered during testing, and the characteristics of the failures. Therefore, we undertook the research reported in the paper, which is the development of a taxonomy and a discussion of software attacks generated from the point of view of the security tester with the goal of using the taxonomy to guide the development of the knowledge base for the automated security testing tool. The representation for attacks and threat cases yielded by this research captures the strategies, tactics, and other considerations that come into play during the planning and execution of attacks upon application software. The paper is organized as follows. Section one contains an introduction to our research and a discussion of the motivation for our work. Section two presents our taxonomy of software attacks and a discussion of the strategies employed and general weaknesses exploited for each attack. Section three contains a summary and suggestions for further research.

  15. Impaired hippocampus-dependent and -independent learning in IL-6 deficient mice.

    PubMed

    Baier, Paul Christian; May, Ulrike; Scheller, Jürgen; Rose-John, Stefan; Schiffelholz, Thomas

    2009-06-08

    Interleukin-6 (IL-6) is a cytokine that, in addition to its essential role in the function of the immune system, is present in the central nervous system (CNS). In particular, pathologically increased CNS IL-6 has been linked to impairments in memory performance. Thus, the aim of our present study was to investigate hippocampus-dependent and -independent memory, in combination with exploratory and anxiety related behaviour in IL-6 knock-out (IL-6KO) mice. The experiments were performed with 9 male IL-6KO and 9 age matched male wild-type (CTRL) mice. Hippocampus-dependent learning was assessed with the Morris water maze (MWM), hippocampus-independent learning with the novel object recognition memory test (NORM). The test-battery for additional behavioural assessments included open field (OF), elevated plus maze (EPM) and forced swim test (FST). IL-6KO mice showed impaired memory processes in the NORM as well in the MWM test. This could not be explained by reduced general activity or increased baseline anxiety. But, there was evidence for a higher susceptibility for stress and reduced exploratory behaviour in IL-6KO mice. In conclusion, absent CNS IL-6 does not lead to an improvement in memory function, but instead to an impairment. As "too little and too much spoils everything", our findings do not contradict the hypothesis of an involvement of IL-6 in memory processes. However, it remains unclear if impairments of memory are a specific result of disturbed IL-6 signalling, or rather an epiphenomenon associated with reduced exploratory behaviour and stress resistance.

  16. Development of Evidence-Based Health Policy Documents in Developing Countries: A Case of Iran

    PubMed Central

    Imani-Nasab, Mohammad Hasan; Seyedin, Hesam; Majdzadeh, Reza; Yazdizadeh, Bahareh; Salehi, Masoud

    2014-01-01

    Background: Evidence-based policy documents that are well developed by senior civil servants and are timely available can reduce the barriers to evidence utilization by health policy makers. This study examined the barriers and facilitators in developing evidence-based health policy documents from the perspective of their producers in a developing country. Methods: In a qualitative study with a framework analysis approach, we conducted semi-structured interviews using purposive and snowball sampling. A qualitative analysis software (MAXQDA-10) was used to apply the codes and manage the data. This study was theory-based and the results were compared to exploratory studies about the factors influencing evidence-based health policymaking. Results: 18 codes and three main themes of behavioral, normative, and control beliefs were identified. Factors that influence the development of evidence-based policy documents were identified by the participants: behavioral beliefs included quality of policy documents, use of resources, knowledge and innovation, being time-consuming and contextualization; normative beliefs included policy authorities, policymakers, policy administrators, and co-workers; and control beliefs included recruitment policy, performance management, empowerment, management stability, physical environment, access to evidence, policy making process, and effect of other factors. Conclusion: Most of the cited barriers to the development of evidence-based policy were related to control beliefs, i.e. barriers at the organizational and health system levels. This study identified the factors that influence the development of evidence-based policy documents based on the components of the theory of planned behavior. But in exploratory studies on evidence utilization by health policymakers, the identified factors were only related to control behaviors. This suggests that the theoretical approach may be preferable to the exploratory approach in identifying the barriers and facilitators of a behavior. PMID:24762343

  17. Development of evidence-based health policy documents in developing countries: a case of Iran.

    PubMed

    Imani-Nasab, Mohammad Hasan; Seyedin, Hesam; Majdzadeh, Reza; Yazdizadeh, Bahareh; Salehi, Masoud

    2014-02-07

    Evidence-based policy documents that are well developed by senior civil servants and are timely available can reduce the barriers to evidence utilization by health policy makers. This study examined the barriers and facilitators in developing evidence-based health policy documents from the perspective of their producers in a developing country. In a qualitative study with a framework analysis approach, we conducted semi-structured interviews using purposive and snowball sampling. A qualitative analysis software (MAXQDA-10) was used to apply the codes and manage the data. This study was theory-based and the results were compared to exploratory studies about the factors influencing evidence-based health policy-making. 18 codes and three main themes of behavioral, normative, and control beliefs were identified. Factors that influence the development of evidence-based policy documents were identified by the participants: behavioral beliefs included quality of policy documents, use of resources, knowledge and innovation, being time-consuming and contextualization; normative beliefs included policy authorities, policymakers, policy administrators, and co-workers; and control beliefs included recruitment policy, performance management, empowerment, management stability, physical environment, access to evidence, policy making process, and effect of other factors. Most of the cited barriers to the development of evidence-based policy were related to control beliefs, i.e. barriers at the organizational and health system levels. This study identified the factors that influence the development of evidence-based policy documents based on the components of the theory of planned behavior. But in exploratory studies on evidence utilization by health policymakers, the identified factors were only related to control behaviors. This suggests that the theoretical approach may be preferable to the exploratory approach in identifying the barriers and facilitators of a behavior.

  18. PDSS/IMC qualification test software acceptance procedures

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Tests to be performed for qualifying the payload development support system image motion compensator (IMC) are identified. The performance of these tests will verify the IMC interfaces and thereby verify the qualification test software.

  19. DSN system performance test software

    NASA Technical Reports Server (NTRS)

    Martin, M.

    1978-01-01

    The system performance test software is currently being modified to include additional capabilities and enhancements. Additional software programs are currently being developed for the Command Store and Forward System and the Automatic Total Recall System. The test executive is the main program. It controls the input and output of the individual test programs by routing data blocks and operator directives to those programs. It also processes data block dump requests from the operator.

  20. Test Driven Development of Scientific Models

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.

    2012-01-01

    Test-Driven Development (TDD) is a software development process that promises many advantages for developer productivity and has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By testimony, many developers find TDD to be addicting after only a few days of exposure, and find it unthinkable to return to previous practices. Of course, scientific/technical software differs from other software categories in a number of important respects, but I nonetheless believe that TDD is quite applicable to the development of such software and has the potential to significantly improve programmer productivity and code quality within the scientific community. After a detailed introduction to TDD, I will present the experience within the Software Systems Support Office (SSSO) in applying the technique to various scientific applications. This discussion will emphasize the various direct and indirect benefits as well as some of the difficulties and limitations of the methodology. I will conclude with a brief description of pFUnit, a unit testing framework I co-developed to support test-driven development of parallel Fortran applications.
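
    The alternating rhythm described, write a short failing test and then just enough code to make it pass, looks roughly like this in practice. The sketch uses Python's unittest with a hypothetical scientific routine; pFUnit plays the equivalent role for parallel Fortran.

      # Test-driven development sketch: the tests below are written first and
      # fail until saturation_vapor_pressure() is implemented to satisfy them.
      import math
      import unittest

      def saturation_vapor_pressure(temp_c: float) -> float:
          """Magnus-type approximation (hPa), written only after the tests existed."""
          return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

      class SaturationVaporPressureTest(unittest.TestCase):
          def test_reference_value_at_20C(self):
              self.assertAlmostEqual(saturation_vapor_pressure(20.0), 23.4, delta=0.2)

          def test_monotonically_increasing_with_temperature(self):
              self.assertGreater(saturation_vapor_pressure(25.0),
                                 saturation_vapor_pressure(20.0))

      if __name__ == "__main__":
          unittest.main()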

  1. Applications of Logic Coverage Criteria and Logic Mutation to Software Testing

    ERIC Educational Resources Information Center

    Kaminski, Garrett K.

    2011-01-01

    Logic is an important component of software. Thus, software logic testing has enjoyed significant research over a period of decades, with renewed interest in the last several years. One approach to detecting logic faults is to create and execute tests that satisfy logic coverage criteria. Another approach to detecting faults is to perform mutation…

  2. Test-driven programming

    NASA Astrophysics Data System (ADS)

    Georgiev, Bozhidar; Georgieva, Adriana

    2013-12-01

    In this paper, some possibilities concerning the implementation of test-driven development as a programming method are presented. It offers a different point of view on creating advanced programming techniques: tests are built before the program source, together with all of the necessary software tools and modules. This nontraditional approach, which eases the programmer's work by building the tests first, is argued to be a preferable way of developing software. The approach allows comparatively simple programming and can be applied with different object-oriented programming languages (for example JAVA, XML, PYTHON, etc.). It is a predictable way to develop software tools and helps in creating better software that is also easier to maintain. Test-driven programming is able to replace more complicated, customary paradigms used by many programmers.

  3. 29 CFR 4.130 - Types of covered service contracts illustrated.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... services. (8) Chemical testing and analysis. (9) Clothing alteration and repair. (10) Computer services... maintenance and operation and engineering support services. (16) Exploratory drilling (other than part of...

  4. 29 CFR 4.130 - Types of covered service contracts illustrated.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... services. (8) Chemical testing and analysis. (9) Clothing alteration and repair. (10) Computer services... maintenance and operation and engineering support services. (16) Exploratory drilling (other than part of...

  5. Assessment of a prototype for the Systemization of Nursing Care on a mobile device.

    PubMed

    Rezende, Laura Cristhiane Mendonça; Santos, Sérgio Ribeiro Dos; Medeiros, Ana Lúcia

    2016-01-01

    To assess a prototype for use on mobile devices that permits registering data for the Systematization of Nursing Care at a Neonatal Intensive Care Unit. An exploratory and descriptive study, characterized as applied methodological research, was undertaken at a teaching hospital. The nurses' assessment of the mobile technology used at the Neonatal Intensive Care Unit was positive, although some reported difficulties in managing it, while others with experience in using mobile devices had no problems using it. The application has the functions needed for the Systematization of Nursing Care at the unit, but changes were suggested to the screen interfaces and to some of the data collection terms and parameters the application offers. The main contributions of the software were: agility in the development and documentation of the systematization, freedom of movement, standardization of infant assessment, less time spent on bureaucratic activities, the possibility of retrieving information, and a reduction in the physical space occupied by the records. Prototype software for the Systematization of Nursing Care with mobile technology gives nurses flexibility in registering their activities, as the data can be collected at the bedside.

  6. Traffic accident in Cuiabá-MT: an analysis through the data mining technology.

    PubMed

    Galvão, Noemi Dreyer; de Fátima Marin, Heimar

    2010-01-01

    Road traffic accidents (ATT) are non-intentional events of considerable magnitude worldwide, mainly in urban centers. This article aims to analyze data on ATT victims recorded by the Justice Secretariat and Public Security (SEJUSP) and in hospital morbidity and mortality records in the city of Cuiabá-MT during 2006, using data mining technology. An observational, retrospective and exploratory study of the secondary databases was carried out. The three selected databases were linked using the probabilistic method, through the free software RecLink. One hundred and thirty-nine (139) real pairs of ATT victims were obtained. Data mining technology was then applied to this linked database with the software WEKA using the Apriori algorithm. The result generated 10 best rules, six of which were considered, according to the established parameters, to provide useful and comprehensible knowledge for characterizing the accident victims in Cuiabá. Finally, the association rules revealed peculiarities of the road traffic accident victims in Cuiabá and highlight the need for prevention measures for collision accidents involving males.
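
    The record linkage and rule mining described above were done with RecLink and WEKA; a rough equivalent of the Apriori step can be sketched in Python with the mlxtend library, where the one-hot accident attributes below are invented placeholders rather than the study's variables.

      # Sketch of Apriori association-rule mining on one-hot encoded accident records.
      # Requires pandas and mlxtend; attribute names are illustrative only.
      import pandas as pd
      from mlxtend.frequent_patterns import apriori, association_rules

      records = pd.DataFrame([
          {"male": True,  "collision": True,  "motorcycle": True,  "fatal": False},
          {"male": True,  "collision": True,  "motorcycle": False, "fatal": False},
          {"male": False, "collision": False, "motorcycle": False, "fatal": False},
          {"male": True,  "collision": True,  "motorcycle": True,  "fatal": True},
      ])

      frequent = apriori(records, min_support=0.5, use_colnames=True)
      rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
      print(rules[["antecedents", "consequents", "support", "confidence"]])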

  7. Statistical tools for analysis and modeling of cosmic populations and astronomical time series: CUDAHM and TSE

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas; Budavari, Tamas; Scargle, Jeffrey D.

    2018-01-01

    This presentation provides an overview of open-source software packages addressing two challenging classes of astrostatistics problems. (1) CUDAHM is a C++ framework for hierarchical Bayesian modeling of cosmic populations, leveraging graphics processing units (GPUs) to enable applying this computationally challenging paradigm to large datasets. CUDAHM is motivated by measurement error problems in astronomy, where density estimation and linear and nonlinear regression must be addressed for populations of thousands to millions of objects whose features are measured with possibly complex uncertainties, potentially including selection effects. An example calculation demonstrates accurate GPU-accelerated luminosity function estimation for simulated populations of 10^6 objects in about two hours using a single NVIDIA Tesla K40c GPU. (2) Time Series Explorer (TSE) is a collection of software in Python and MATLAB for exploratory analysis and statistical modeling of astronomical time series. It comprises a library of stand-alone functions and classes, as well as an application environment for interactive exploration of time series data. The presentation will summarize key capabilities of this emerging project, including new algorithms for analysis of irregularly sampled time series.
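
    Since TSE is an emerging package, the sketch below illustrates one task it targets, analysis of irregularly sampled series, using the Lomb-Scargle periodogram from astropy on synthetic data; it is a generic example, not TSE's own API.

      # Period search on an irregularly sampled time series with the Lomb-Scargle
      # periodogram (astropy); the synthetic series stands in for real observations.
      import numpy as np
      from astropy.timeseries import LombScargle

      rng = np.random.default_rng(0)
      t = np.sort(rng.uniform(0.0, 100.0, 300))        # irregular observation times (days)
      y = 1.5 * np.sin(2 * np.pi * t / 7.3) + rng.normal(0.0, 0.5, t.size)

      frequency, power = LombScargle(t, y).autopower()
      best_period = 1.0 / frequency[np.argmax(power)]
      print(f"best-fit period: {best_period:.2f} days")  # expect roughly 7.3 days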

  8. A Case Study to Explore Rigorous Teaching and Testing Practices to Narrow the Achievement Gap

    ERIC Educational Resources Information Center

    Isler, Tesha

    2012-01-01

    The problem examined in this study: Does the majority of teachers use rigorous teaching and testing practices? The purpose of this qualitative exploratory case study was to explore the classroom techniques of six effective teachers who use rigorous teaching and testing practices. The hypothesis for this study is that the examination of the…

  9. Wrong Answers on Multiple-Choice Achievement Tests: Blind Guesses or Systematic Choices?.

    ERIC Educational Resources Information Center

    Powell, J. C.

    A multi-faceted model for the selection of answers for multiple-choice tests was developed from the findings of a series of exploratory studies. This model implies that answer selection should be curvilinear. A series of models were tested for fit using the chi square procedure. Data were collected from 359 elementary school students ages 9-12.…

  10. Identifying the HIV Testing Beliefs of Healthcare Provider Staff at a University Student Health Center: An Exploratory Study

    ERIC Educational Resources Information Center

    Harris, Cornelia A.

    2012-01-01

    This research project examined the views and perceptions of healthcare provider staff regarding HIV testing and the implementation of HIV testing as a routine part of medical practice in a university student health center at a Historically Black College or University (HBCU). This study further explored whether healthcare provider staff promoted…

  11. English Language Proficiency and Test Performance: An Evaluation of Bilingual Students with the Woodcock-Johnson III Tests of Cognitive Abilities

    ERIC Educational Resources Information Center

    Sotelo-Dynega, Marlene; Ortiz, Samuel O.; Flanagan, Dawn P.; Chaplin, William F.

    2013-01-01

    In this article, we report the findings of an exploratory empirical study that investigated the relationship between English Language Proficiency (ELP) on performance on the Woodcock-Johnson Tests of Cognitive Abilities-Third Edition (WJ III) when administered in English to bilingual students of varying levels of ELP. Sixty-one second-grade…

  12. Environmental Assessment of the Hawaii Geothermal Project Well Flow Test Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1976-11-01

    The Hawaii Geothermal Project, a coordinated research effort of the University of Hawaii, funded by the County and State of Hawaii, and ERDA, was initiated in 1973 in an effort to identify, generate, and use geothermal energy on the Big Island of Hawaii. A number of stages are involved in developing geothermal power resources: exploration, test drilling, production testing, field development, power plant and powerline construction, and full-scale production. Phase I of the Project, which began in the summer of 1973, involved conducting exploratory surveys, developing analytical models for interpretation of geophysical results, conducting studies on energy recovery from hot brine, and examining the legal and economic implications of developing geothermal resources in the state. Phase II of the Project, initiated in the summer of 1975, centers on drilling an exploratory research well on the Island of Hawaii, but also continues operational support for the geophysical, engineering, and socioeconomic activities delineated above. The project to date is between the test drilling and production testing phases. The purpose of this assessment is to describe the activities and potential impacts associated with extensive well flow testing to be completed during Phase II.

  13. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1993-01-01

    Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.
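
    As a rough illustration of condition-based test selection in the spirit of the BRO work (not the BGG tool or the exact BRO metric), the sketch below searches, for each condition in a compound predicate, for a pair of inputs that differ only in that condition yet flip the overall outcome.

      # Sketch: for each condition of a compound Boolean predicate, find an input pair
      # showing that the condition independently affects the outcome (MC/DC-like idea).
      from itertools import product

      def predicate(a: bool, b: bool, c: bool) -> bool:
          return (a and b) or c

      def independence_pair(cond_index: int):
          """Return two inputs differing only in one condition but giving different outcomes."""
          for values in product([False, True], repeat=3):
              flipped = list(values)
              flipped[cond_index] = not flipped[cond_index]
              if predicate(*values) != predicate(*flipped):
                  return values, tuple(flipped)
          return None

      for index, name in enumerate("abc"):
          pair = independence_pair(index)
          print(f"condition {name}: shown independent by inputs {pair[0]} and {pair[1]}")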

  14. Statistical modeling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1992-01-01

    This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.
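
    Failure data of the kind produced by such simulations are often summarized with a reliability growth model; the sketch below fits a Goel-Okumoto style NHPP model by maximum likelihood with SciPy, using made-up failure times rather than data from the GCS experiment.

      # Fit a Goel-Okumoto software-reliability growth model (an NHPP with mean value
      # function m(t) = a*(1 - exp(-b*t))) to synthetic failure times by maximum likelihood.
      import numpy as np
      from scipy.optimize import minimize

      failure_times = np.array([4, 9, 15, 24, 31, 45, 62, 80, 104, 140], dtype=float)
      T = 150.0                                           # end of the observation window

      def neg_log_likelihood(params):
          a, b = params
          if a <= 0 or b <= 0:
              return np.inf
          intensity = a * b * np.exp(-b * failure_times)  # lambda(t_i)
          expected_total = a * (1.0 - np.exp(-b * T))     # m(T)
          return -(np.sum(np.log(intensity)) - expected_total)

      fit = minimize(neg_log_likelihood, x0=[20.0, 0.01], method="Nelder-Mead")
      a_hat, b_hat = fit.x
      print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.4f}")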

  15. Exploratory benchtop study evaluating the use of surgical design and simulation in fibula free flap mandibular reconstruction

    PubMed Central

    2013-01-01

    Background: Surgical design and simulation (SDS) is a useful tool to help surgeons visualize the anatomy of the patient and perform operative maneuvers on the computer before implementation in the operating room. While these technologies have many advantages, further evidence of their potential to improve outcomes is required. The present benchtop study was intended to identify if there is a difference in surgical outcome between free-hand surgery completed without virtual surgical planning (VSP) software and preoperatively planned surgery completed with the use of VSP software. Methods: Five surgeons participated in the study. In Session A, participants were asked to do a free-hand reconstruction of a 3D printed mandible with a defect using a 3D printed fibula. Four weeks later, in Session B, the participants were asked to do the same reconstruction, but in this case using a preoperatively digitally designed surgical plan. Digital registration computer software, hard tissue measures and duration of the task were used to compare the outcome of the benchtop reconstructions. Results: The study revealed that: (1) superimposed images produced in a computer aided design (CAD) software were effective in comparing pre- and post-surgical outcomes, (2) there was a difference, based on hard tissue measures, in surgical outcome between the two scenarios and (3) there was no difference in the time it took to complete the sessions. Conclusion: The study revealed that the participants were more consistent in the preoperatively digitally planned surgery than they were in the free-hand surgery. PMID:23800209

  16. NASA Data Acquisition System Software Development for Rocket Propulsion Test Facilities

    NASA Technical Reports Server (NTRS)

    Herbert, Phillip W., Sr.; Elliot, Alex C.; Graves, Andrew R.

    2015-01-01

    Current NASA propulsion test facilities include Stennis Space Center in Mississippi, Marshall Space Flight Center in Alabama, Plum Brook Station in Ohio, and White Sands Test Facility in New Mexico. Within and across these centers, a diverse set of data acquisition systems exist with different hardware and software platforms. The NASA Data Acquisition System (NDAS) is a software suite designed to operate and control many critical aspects of rocket engine testing. The software suite combines real-time data visualization, data recording to a variety of formats, short-term and long-term acquisition system calibration capabilities, test stand configuration control, and a variety of data post-processing capabilities. Additionally, data stream conversion functions exist to translate test facility data streams to and from downstream systems, including engine customer systems. The primary design goals for NDAS are flexibility, extensibility, and modularity. Providing a common user interface for a variety of hardware platforms helps drive consistency and error reduction during testing. In addition, with an understanding that test facilities have different requirements and setups, the software is designed to be modular. One engine program may require real-time displays and data recording; others may require more complex data stream conversion, measurement filtering, or test stand configuration management. The NDAS suite allows test facilities to choose which components to use based on their specific needs. The NDAS code is primarily written in LabVIEW, a graphical, data-flow driven language. Although LabVIEW is a general-purpose programming language, large-scale software development in it is relatively rare compared to more commonly used languages. The NDAS software suite also makes extensive use of a new, advanced development framework called the Actor Framework. The Actor Framework provides a level of code reuse and extensibility that has previously been difficult to achieve using LabVIEW. The

  17. Rules of thumb to increase the software quality through testing

    NASA Astrophysics Data System (ADS)

    Buttu, M.; Bartolini, M.; Migoni, C.; Orlati, A.; Poppi, S.; Righini, S.

    2016-07-01

    Software maintenance typically accounts for 40-80% of overall project costs, and this considerable variability mostly depends on the software's internal quality: the more the software is designed and implemented to constantly welcome new changes, the lower the maintenance costs will be. The internal quality is typically enforced through testing, which in turn also affects the development and maintenance costs. This is the reason why testing methodologies have become a major concern for any company that builds - or is involved in building - software. Although there is no testing approach that suits all contexts, we infer some general guidelines learned during the Development of the Italian Single-dish COntrol System (DISCOS), which is a project aimed at producing the control software for the three INAF radio telescopes (the Medicina and Noto dishes, and the newly-built SRT). These guidelines concern both the development and the maintenance phases, and their ultimate goal is to maximize the DISCOS software quality through a Behavior-Driven Development (BDD) workflow alongside a continuous delivery pipeline. We consider different topics and patterns; they involve the proper apportionment of tests (from end-to-end to low-level tests), the choice between hardware simulators and mockers, why and how to apply TDD and dependency injection to increase test coverage, the emerging technologies available for test isolation, bug fixing, how to protect the system from changes in external resources (firmware updates, hardware substitution, etc.) and, finally, how to accomplish BDD starting from functional tests and going through integration and unit tests. We discuss the pros and cons of each solution and point out the motivations of our choices, either as general rules or narrowed to the context of the DISCOS project.
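
    One of the guidelines above, using dependency injection so that hardware can be replaced by mocks in unit tests, can be illustrated with a small Python sketch; the antenna-control names are invented for the example and do not come from DISCOS.

      # Sketch: dependency injection lets a unit test swap the hardware driver for a mock.
      from unittest.mock import Mock

      class AntennaController:
          def __init__(self, drive):                 # the hardware drive is injected
              self.drive = drive

          def slew_to(self, azimuth, elevation):
              if not 0.0 <= elevation <= 90.0:
                  raise ValueError("elevation out of range")
              self.drive.move(azimuth, elevation)

      def test_slew_commands_the_drive():
          fake_drive = Mock()
          controller = AntennaController(fake_drive)
          controller.slew_to(120.0, 45.0)
          fake_drive.move.assert_called_once_with(120.0, 45.0)   # no real hardware touched

      test_slew_commands_the_drive()
      print("unit test passed without real hardware")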

  18. The Alignment of Software Testing Skills of IS Students with Industry Practices--A South African Perspective

    ERIC Educational Resources Information Center

    Scott, Elsje; Zadirov, Alexander; Feinberg, Sean; Jayakody, Ruwanga

    2004-01-01

    Software testing is a crucial component in the development of good quality systems in industry. For this reason it was considered important to investigate the extent to which the Information Systems (IS) syllabus at the University of Cape Town (UCT) was aligned with accepted software testing practices in South Africa. For students to be effective…

  19. Acquisition Handbook - Update. Comprehensive Approach to Reusable Defensive Software (CARDS)

    DTIC Science & Technology

    1994-03-25

    designs, and implementation components (source code, test plans, procedures and results, and system/software documentation). This handbook provides a...activities where software components are acquired, evaluated, tested and sometimes modified. In addition to serving as a facility for the acquisition and...systems from such components [1]. Implementation components are at the lowest level and consist of: specifications; detailed designs; code, test

  20. Improving Building Energy Simulation Programs Through Diagnostic Testing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2012-02-01

    New test procedure evaluates quality and accuracy of energy analysis tools for the residential building retrofit market. Reducing the energy use of existing homes in the United States offers significant energy-saving opportunities, which can be identified through building simulation software tools that calculate optimal packages of efficiency measures. To improve the accuracy of energy analysis for residential buildings, the National Renewable Energy Laboratory's (NREL) Buildings Research team developed the Building Energy Simulation Test for Existing Homes (BESTEST-EX), a method for diagnosing and correcting errors in building energy audit software and calibration procedures. BESTEST-EX consists of building physics and utility bill calibration test cases, which software developers can use to compare their tools' simulation findings to reference results generated with state-of-the-art simulation tools. Overall, the BESTEST-EX methodology: (1) Tests software predictions of retrofit energy savings in existing homes; (2) Ensures building physics calculations and utility bill calibration procedures perform to a minimum standard; and (3) Quantifies impacts of uncertainties in input audit data and occupant behavior. BESTEST-EX is helping software developers identify and correct bugs in their software, as well as develop and test utility bill calibration procedures.
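
    A test case of this kind amounts to checking whether a tool's predicted retrofit savings fall within a reference range produced by state-of-the-art simulations; the sketch below shows that comparison with invented numbers, since the actual BESTEST-EX reference values are published separately.

      # Sketch: compare a tool's predicted energy savings against reference result ranges,
      # in the spirit of a BESTEST-EX style acceptance check (all values are illustrative).
      reference_ranges = {
          "air_sealing":      (850.0, 1150.0),    # kWh/yr savings, reference min and max
          "attic_insulation": (1400.0, 1900.0),
      }
      predictions = {"air_sealing": 990.0, "attic_insulation": 2050.0}

      for measure, (low, high) in reference_ranges.items():
          value = predictions[measure]
          status = "PASS" if low <= value <= high else "FAIL"
          print(f"{measure}: predicted {value:.0f} kWh/yr, reference {low:.0f}-{high:.0f} -> {status}")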

  1. Grasping objects autonomously in simulated KC-135 zero-g

    NASA Technical Reports Server (NTRS)

    Norsworthy, Robert S.

    1994-01-01

    The KC-135 aircraft was chosen for simulated zero gravity testing of the Extravehicular Activity Helper/Retriever (EVAHR). A software simulation of the EVAHR hardware, KC-135 flight dynamics, collision detection, and grasp impact dynamics has been developed to integrate and test the EVAHR software prior to flight testing on the KC-135. The EVAHR software will perform target pose estimation, tracking, and motion estimation for rigid, freely rotating, polyhedral objects. Manipulator grasp planning and trajectory control software has also been developed to grasp targets while avoiding collisions.

  2. Software Quality Metrics: A Software Management Monitoring Method for Air Force Logistics Command in Its Software Quality Assurance Program for the Quantitative Assessment of the System Development Life Cycle under Configuration Management.

    DTIC Science & Technology

    1982-03-01

    pilot systems. Magnitude of the mutant error is classified as: o Program does not compute. o Program computes but does not run test data. o Program... Test and Integration... The Mapping of SQM to the SDLC... ADS Development... and funds. While the test phase concludes the normal development cycle, one should realize that with software the development continues in the

  3. Antisense oligonucleotide therapy rescues disruptions in organization of exploratory movements associated with Usher syndrome type 1C in mice.

    PubMed

    Donaldson, Tia N; Jennings, Kelsey T; Cherep, Lucia A; McNeela, Adam M; Depreux, Frederic F; Jodelka, Francine M; Hastings, Michelle L; Wallace, Douglas G

    2018-02-15

    Usher syndrome, Type 1C (USH1C) is an autosomal recessive inherited disorder in which a mutation in the gene encoding harmonin is associated with multi-sensory deficits (i.e., auditory, vestibular, and visual). USH1C (Usher) mice, engineered with a human USH1C mutation, exhibit these multi-sensory deficits by circling behavior and lack of response to sound. Administration of an antisense oligonucleotide (ASO) therapeutic that corrects expression of the mutated USH1C gene has been shown to increase harmonin levels, reduce circling behavior, and improve vestibular and auditory function. The current study evaluates the organization of exploratory movements to assess spatial organization in Usher mice and determine the efficacy of ASO therapy in attenuating any such deficits. Usher and heterozygous mice received the therapeutic ASO, ASO-29, or a control, non-specific ASO treatment at postnatal day five. Organization of exploratory movements was assessed under dark and light conditions at two and six months of age. Disruptions in exploratory movement organization observed in control-treated Usher mice were consistent with impaired use of self-movement and environmental cues. In general, ASO-29 treatment rescued organization of exploratory movements at the two- and six-month testing points. These observations are consistent with ASO-29 rescuing processing of multiple sources of information and demonstrate the potential of ASO therapies to ameliorate topographical disorientation associated with other genetic disorders. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Adaptive Integration of Nonsmooth Dynamical Systems

    DTIC Science & Technology

    2017-10-11

    controlled time stepping method to interactively design running robots. [1] John Shepherd, Samuel Zapolsky, and Evan M. Drumwright, “Fast multi-body...software like this to test software running on my robots. Started working in simulation after attempting to use software like this to test software... running on my robots. The libraries that produce these beautiful results have failed at simulating robotic manipulation. Postulate: It is easier to

  5. Guidance and Navigation Software Architecture Design for the Autonomous Multi-Agent Physically Interacting Spacecraft (AMPHIS) Test Bed

    DTIC Science & Technology

    2006-12-01

    Guidance and Navigation Software Architecture Design for the Autonomous Multi-Agent Physically Interacting Spacecraft (AMPHIS) Test Bed, by Blake D. Eikenberry... Approved for public release; distribution is unlimited.

  6. Model-based software process improvement

    NASA Technical Reports Server (NTRS)

    Zettervall, Brenda T.

    1994-01-01

    The activities of a field test site for the Software Engineering Institute's software process definition project are discussed. Products tested included the improvement model itself, descriptive modeling techniques, the CMM level 2 framework document, and the use of process definition guidelines and templates. The software process improvement model represents a five stage cyclic approach for organizational process improvement. The cycles consist of the initiating, diagnosing, establishing, acting, and leveraging phases.

  7. Software for Automated Testing of Mission-Control Displays

    NASA Technical Reports Server (NTRS)

    OHagan, Brian

    2004-01-01

    MCC Display Cert Tool is a set of software tools for automated testing of computer-terminal displays in spacecraft mission-control centers, including those of the space shuttle and the International Space Station. This software makes it possible to perform tests that are more thorough, take less time, and are less likely to lead to erroneous results, relative to tests performed manually. This software enables comparison of two sets of displays to report command and telemetry differences, generates test scripts for verifying telemetry and commands, and generates a documentary record containing display information, including version and corrective-maintenance data. At the time of reporting the information for this article, work was continuing to add a capability for validation of display parameters against a reconfiguration file.
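
    The core comparison the tool performs, reporting command and telemetry differences between two display sets, can be sketched in a few lines of Python; the display and parameter names below are placeholders, not actual mission-control displays.

      # Sketch: report telemetry parameters added or removed between two display versions.
      old_displays = {"POWER": {"BATTERY_V", "SOLAR_AMP"}, "THERMAL": {"LOOP_A_TEMP"}}
      new_displays = {"POWER": {"BATTERY_V", "SOLAR_AMP", "BUS_LOAD"}, "GNC": {"RATE_X"}}

      for name in sorted(set(old_displays) | set(new_displays)):
          old_params = old_displays.get(name, set())
          new_params = new_displays.get(name, set())
          added, removed = new_params - old_params, old_params - new_params
          if added or removed:
              print(f"{name}: added {sorted(added)}, removed {sorted(removed)}")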

  8. Using articulation and inscription as catalysts for reflection: Design principles for reflective inquiry

    NASA Astrophysics Data System (ADS)

    Loh, Ben Tun-Bin

    2003-07-01

    The demand for students to engage in complex student-driven and information-rich inquiry investigations poses challenges to existing learning environments. Students are not familiar with this style of work, and lack the skills, tools, and expectations it demands, often forging blindly forward in the investigation. If students are to be successful, they need to learn to be reflective inquirers, periodically stepping back from an investigation to evaluate their work. The fundamental goal of my dissertation is to understand how to design learning environments to promote and support reflective inquiry. I have three basic research questions: how to define this mode of work, how to help students learn it, and understanding how it facilitates reflection when enacted in a classroom. I take an exploratory approach in which, through iterative cycles of design, development, and reflection, I develop principles of design for reflective inquiry, instantiate those principles in the design of a software environment, and test that software in the context of classroom work. My work contributes to the understanding of reflective inquiry in three ways: First, I define a task model that describes the kinds of operations (cognitive tasks) that students should engage in as reflective inquirers. These operations are defined in terms of two basic tasks: articulation and inscription, which serve as catalysts for externalizing student thinking as objects of and triggers for reflection. Second, I instantiate the task model in the design of software tools (the Progress Portfolio). And, through proof of concept pilot studies, I examine how the task model and tools helped students with their investigative classroom work. Finally, I take a step back from these implementations and articulate general design principles for reflective inquiry with the goal of informing the design of other reflective inquiry learning environments. There are three design principles: (1) Provide a designated work space for reflection activities to focus student attention on reflection. (2) Help students create and use artifacts that represent their work and their thinking as a means to create referents for reflection. (3) Support and take advantage of social processes that help students reflect on their own work.

  9. NASA Data Acquisitions System (NDAS) Software Architecture

    NASA Technical Reports Server (NTRS)

    Davis, Dawn; Duncan, Michael; Franzl, Richard; Holladay, Wendy; Marshall, Peggi; Morris, Jon; Turowski, Mark

    2012-01-01

    The NDAS Software Project is for the development of common low speed data acquisition system software to support NASA's rocket propulsion testing facilities at John C. Stennis Space Center (SSC), White Sands Test Facility (WSTF), Plum Brook Station (PBS), and Marshall Space Flight Center (MSFC).

  10. Predictors of Assessment Accommodations Use for Students Who Are Deaf or Hard of Hearing

    ERIC Educational Resources Information Center

    Cawthon, Stephanie W.; Wurtz, Keith A.

    2010-01-01

    Current accountability reform requires annual assessment for all students, including students with disabilities. Testing accommodations are one way to increase access to assessments while maintaining the validity of test scores. This paper provides findings from an exploratory logistic regression analysis of predictors of four accommodations used…

  11. Substance Abuse Counselors and Moral Reasoning: Hypothetical and Authentic Dilemmas

    ERIC Educational Resources Information Center

    Sias, Shari M.

    2009-01-01

    This exploratory study examined the assumption that the level of moral reasoning (Defining Issues Test; J. R. Rest, 1986) used in solving hypothetical and authentic dilemmas is similar for substance abuse counselors (N = 188). The statistical analyses used were paired-sample t tests, Pearson product-moment correlation, and simultaneous multiple…

  12. The Development of the Motivation for Critical Reasoning in Online Discussions Inventory (MCRODI)

    ERIC Educational Resources Information Center

    Zhang, Tianyi; Koehler, Matthew J.; Spatariu, Alexandru

    2009-01-01

    This study was conducted to develop an inventory that measures students' motivation to engage in critical reasoning in online discussions. Inventory items were developed based on theoretical frameworks and then tested on 168 participants. Using exploratory factor analysis, test-retest reliability, and internal consistency, twenty-two items were…

  13. Validating the Posttraumatic Stress Disorder Symptom Scale with Persons Who Have Severe Mental Illnesses

    ERIC Educational Resources Information Center

    O'Hare, Thomas; Shen, Ce; Sherrer, Margaret

    2007-01-01

    Objective: Interview data collected from 275 clients with severe mental illnesses are used to test the construct and criterion validity of the Posttraumatic Stress Disorder Symptom Scale (PSS). Method: First, exploratory and confirmatory factor analyses are used to test whether the scale reflects the posttraumatic stress disorder (PTSD) symptom…

  14. Using decision trees to understand structure in missing data

    PubMed Central

    Tierney, Nicholas J; Harden, Fiona A; Harden, Maurice J; Mengersen, Kerrie L

    2015-01-01

    Objectives: To demonstrate the application of decision trees, namely classification and regression trees (CARTs) and their cousins, boosted regression trees (BRTs), to understand structure in missing data. Setting: Data taken from employees at 3 different industrial sites in Australia. Participants: 7915 observations were included. Materials and methods: The approach was evaluated using an occupational health data set comprising results of questionnaires, medical tests and environmental monitoring. Statistical methods included standard statistical tests and the ‘rpart’ and ‘gbm’ packages for CART and BRT analyses, respectively, from the statistical software ‘R’. A simulation study was conducted to explore the capability of decision tree models in describing data with missingness artificially introduced. Results: CART and BRT models were effective in highlighting a missingness structure in the data, related to the type of data (medical or environmental), the site in which it was collected, the number of visits, and the presence of extreme values. The simulation study revealed that CART models were able to identify variables and values responsible for inducing missingness. There was greater variation in variable importance for unstructured as compared to structured missingness. Discussion: Both CART and BRT models were effective in describing structural missingness in data. CART models may be preferred over BRT models for exploratory analysis of missing data, and for selecting variables important for predicting missingness. BRT models can show how values of other variables influence missingness, which may prove useful for researchers. Conclusions: Researchers are encouraged to use CART and BRT models to explore and understand missing data. PMID:26124509
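
    The analyses above used the R packages ‘rpart’ and ‘gbm’; the same idea, fitting a classification tree to a missingness indicator, can be sketched in Python with scikit-learn, where the variables below are synthetic rather than the occupational health data.

      # Sketch: use a decision tree to expose structure in which records are missing a value.
      # Missingness of `cholesterol` is made to depend on `site`, i.e. structured missingness.
      import numpy as np
      import pandas as pd
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(1)
      n = 2000
      df = pd.DataFrame({
          "site": rng.integers(1, 4, n),               # industrial site 1-3
          "age": rng.normal(40.0, 10.0, n),
          "visits": rng.poisson(3, n),
      })
      df["cholesterol"] = rng.normal(5.0, 1.0, n)
      df.loc[(df["site"] == 3) & (rng.random(n) < 0.7), "cholesterol"] = np.nan

      is_missing = df["cholesterol"].isna()
      tree = DecisionTreeClassifier(max_depth=3).fit(df[["site", "age", "visits"]], is_missing)
      print(dict(zip(["site", "age", "visits"], tree.feature_importances_.round(2))))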

  15. The Nursing Performance Instrument: Exploratory and Confirmatory Factor Analyses in Registered Nurses.

    PubMed

    Sagherian, Knar; Steege, Linsey M; Geiger-Brown, Jeanne; Harrington, Donna

    2018-04-01

    The optimal performance of nurses in healthcare settings plays a critical role in care quality and patient safety. Despite this importance, few measures are provided in the literature that evaluate nursing performance as an independent construct from competencies. The nine-item Nursing Performance Instrument (NPI) was developed to fill this gap. The aim of this study was to examine and confirm the underlying factor structure of the NPI in registered nurses. The design was cross-sectional, using secondary data collected between February 2008 and April 2009 for the "Fatigue in Nursing Survey" (N = 797). The sample was predominantly dayshift female nurses working in acute care settings. Using Mplus software, exploratory and confirmatory factor analyses were applied to the NPI data, which were divided into two equal subsamples. Multiple fit indices were used to evaluate the fit of the alternative models. The three-factor model was determined to fit the data adequately. The factors that were labeled as "physical/mental decrements," "consistent practice," and "behavioral change" were moderately to strongly intercorrelated, indicating good convergent validity. The reliability coefficients for the subscales were acceptable. The NPI consists of three latent constructs. This instrument has the potential to be used as a self-monitoring instrument that addresses nurses' perceptions of performance while providing patient care.
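
    The factor analyses here were run in Mplus; a rough exploratory counterpart can be sketched in Python with scikit-learn's FactorAnalysis on simulated nine-item responses, which are placeholder data rather than the NPI survey itself.

      # Sketch: exploratory factor analysis of a nine-item instrument with three latent factors.
      import numpy as np
      from sklearn.decomposition import FactorAnalysis

      rng = np.random.default_rng(7)
      n = 400
      latent = rng.normal(size=(n, 3))                 # three latent constructs
      loadings = np.zeros((3, 9))
      loadings[0, 0:3] = 0.8                           # items 1-3 load on factor 1
      loadings[1, 3:6] = 0.8                           # items 4-6 load on factor 2
      loadings[2, 6:9] = 0.8                           # items 7-9 load on factor 3
      items = latent @ loadings + rng.normal(scale=0.5, size=(n, 9))

      fa = FactorAnalysis(n_components=3).fit(items)
      print(np.round(fa.components_, 2))               # recovered loading pattern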

  16. An experience of qualified preventive screening: shiraz smart screening software.

    PubMed

    Islami Parkoohi, Parisa; Zare, Hashem; Abdollahifard, Gholamreza

    2015-01-01

    Computerized preventive screening software is a cost-effective intervention tool to address non-communicable chronic diseases. Shiraz Smart Screening Software (SSSS) was developed as an innovative tool for qualified screening. It allows simultaneous smart screening of several high-burden chronic diseases and supports reminder notification functionality. The extent to which SSSS affects screening quality is also described. Following software development, preventive screening and annual health examinations of 261 school staff (Medical School of Shiraz, Iran) were carried out in a software-assisted manner. To evaluate the quality of the software-assisted screening, we used a quasi-experimental study design and determined coverage, irregular attendance and inappropriateness proportions for the manual and software-assisted screening, as well as the corresponding number of requested tests. With the manual screening method, 27% of employees were covered (with 94% irregular attendance), while with software-assisted screening the coverage proportion was 79% (attendance status will become clear after the specified period). The frequency of inappropriate screening test requests, before the software implementation, was 41.37% for fasting plasma glucose, 41.37% for lipid profile, 0.84% for occult blood, 0.19% for flexible sigmoidoscopy/colonoscopy, 35.29% for Pap smear, 19.20% for mammography and 11.2% for prostate specific antigen. All of the above were corrected by the software application. In total, 366 manual screening and 334 software-assisted screening tests were requested. SSSS is an innovative tool to improve the quality of preventive screening plans in terms of increased screening coverage and reductions in inappropriateness and in the total number of requested tests.

  17. LV software support for supersonic flow analysis

    NASA Technical Reports Server (NTRS)

    Bell, W. A.; Lepicovsky, J.

    1992-01-01

    The software for configuring an LV counter processor system has been developed using structured design. The LV system includes up to three counter processors and a rotary encoder. The software for configuring and testing the LV system has been developed, tested, and included in an overall software package for data acquisition, analysis, and reduction. Error handling routines respond to both operator and instrument errors which often arise in the course of measuring complex, high-speed flows. The use of networking capabilities greatly facilitates the software development process by allowing software development and testing from a remote site. In addition, high-speed transfers allow graphics files or commands to provide viewing of the data from a remote site. Further advances in data analysis require corresponding advances in procedures for statistical and time series analysis of nonuniformly sampled data.

  18. LV software support for supersonic flow analysis

    NASA Technical Reports Server (NTRS)

    Bell, William A.

    1992-01-01

    The software for configuring a Laser Velocimeter (LV) counter processor system was developed using structured design. The LV system includes up to three counter processors and a rotary encoder. The software for configuring and testing the LV system was developed, tested, and included in an overall software package for data acquisition, analysis, and reduction. Error handling routines respond to both operator and instrument errors which often arise in the course of measuring complex, high-speed flows. The use of networking capabilities greatly facilitates the software development process by allowing software development and testing from a remote site. In addition, high-speed transfers allow graphics files or commands to provide viewing of the data from a remote site. Further advances in data analysis require corresponding advances in procedures for statistical and time series analysis of nonuniformly sampled data.

  19. An Exploratory Analysis of Economic Factors in the Navy Total Force Strength Model (NTFSM)

    DTIC Science & Technology

    2015-12-01

    NTFSM is still in the testing phase and its overall behavior is largely unknown. In particular, the analysts that NTFSM was designed to help are... B. NTFSM VERIFICATION AND TESTING...

  20. Ffuzz: Towards full system high coverage fuzz testing on binary executables.

    PubMed

    Zhang, Bin; Ye, Jiaxi; Bi, Xing; Feng, Chao; Tang, Chaojing

    2018-01-01

    Bugs and vulnerabilities in binary executables threaten cyber security. Current discovery methods, like fuzz testing, symbolic execution and manual analysis, each have advantages and disadvantages when exercising the deeper code areas in binary executables to find more bugs. In this paper, we designed and implemented a hybrid automatic bug-finding tool, Ffuzz, on top of fuzz testing and selective symbolic execution. It targets full system software stack testing, including both the user space and kernel space. Combining these two mainstream techniques enables us to achieve higher coverage and to avoid getting stuck in either fuzz testing or symbolic execution. We also proposed two key optimizations to improve the efficiency of full system testing. We evaluated the efficiency and effectiveness of our method on real-world binary software and on 844 memory corruption vulnerable programs in the Juliet test suite. The results show that Ffuzz can discover software bugs in the full system software stack effectively and efficiently.
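
    A minimal mutation-based fuzzing loop, the baseline technique that Ffuzz combines with selective symbolic execution, can be sketched as follows; the target binary path is a placeholder and the harness is an illustration, not Ffuzz itself.

      # Sketch: mutation-based fuzzing of a target program; inputs that make the target
      # die on a signal (e.g. SIGSEGV) are saved for triage. The target path is hypothetical.
      import random
      import subprocess

      SEED = b"GIF89a\x01\x00\x01\x00"       # example seed input
      TARGET = "./target_binary"             # placeholder program under test

      def mutate(data: bytes) -> bytes:
          buf = bytearray(data)
          for _ in range(random.randint(1, 4)):
              buf[random.randrange(len(buf))] = random.randrange(256)
          return bytes(buf)

      for i in range(1000):
          case = mutate(SEED)
          proc = subprocess.run([TARGET], input=case, capture_output=True)
          if proc.returncode < 0:            # killed by a signal, likely a crash
              with open(f"crash_{i:04d}.bin", "wb") as fh:
                  fh.write(case)
              print(f"crash reproduced by case {i} (signal {-proc.returncode})")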

  1. Simulation test beds for the space station electrical power system

    NASA Technical Reports Server (NTRS)

    Sadler, Gerald G.

    1988-01-01

    NASA Lewis Research Center and its prime contractor are responsible for developing the electrical power system on the space station. The power system will be controlled by a network of distributed processors. Control software will be verified, validated, and tested in hardware and software test beds. Current plans for the software test bed involve using real time and nonreal time simulations of the power system. This paper will discuss the general simulation objectives and configurations, control architecture, interfaces between simulator and controls, types of tests, and facility configurations.

  2. System integration test plan for HANDI 2000 business management system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, D.

    This document presents the system integration test plan for the Commercial-Off-The-Shelf (COTS) PassPort and PeopleSoft software and for the custom software created to work with the COTS products. The PassPort (PP) software is an integrated application for AP, Contract Management, Inventory Management, Purchasing and Material Safety Data Sheets. The PeopleSoft (PS) software is an integrated application for Project Costing, General Ledger, Human Resources/Training, Payroll, and Base Benefits.

  3. Emotionality in growing pigs: is the open field a valid test?

    PubMed

    Donald, Ramona D; Healy, Susan D; Lawrence, Alistair B; Rutherford, Kenneth M D

    2011-10-24

    The ability to assess emotionality is important within animal welfare research. Yet, for farm animals, few tests of emotionality have been well validated. Here we investigated the construct validity of behavioural measures of pig emotionality in an open-field test by manipulating the experiences of pigs in three ways. In Experiment One (pharmacological manipulation), pigs pre-treated with Azaperone, a drug used to reduce stress in commercial pigs, were more active, spent more time exploring and vocalised less than control pigs. In Experiment Two (social manipulation), pigs that experienced the open-field arena with a familiar companion were also more exploratory, spent less time behaviourally idle, and were less vocal than controls although to a lesser degree than in Experiment One. In Experiment Three (novelty manipulation), pigs experiencing the open field for a second time were less active, explored less and vocalised less than they had done in the first exposure to the arena. A principal component analysis was conducted on data from all three trials. The first two components could be interpreted as relating to the form (cautious to exploratory) and magnitude (low to high arousal) of the emotional response to open-field testing. Based on these dimensions, in Experiment One, Azaperone pigs appeared to be less fearful than saline-treated controls. However, in Experiment Two, exposure to the arena with a conspecific did not affect the first two dimensions but did affect a third behavioural dimension, relating to oro-nasal exploration of the arena floor. In Experiment Three, repeat exposure altered the form but not the magnitude of emotional response: pigs were less exploratory in the second test. In conclusion, behavioural measures taken from pigs in an open-field test are sensitive to manipulations of their prior experience in a manner that suggests they reflect underlying emotionality. Behavioural measures taken during open-field exposure can be useful for making assessments of both pig emotionality and of their welfare. Copyright © 2011 Elsevier Inc. All rights reserved.

  4. Instrument control software development process for the multi-star AO system ARGOS

    NASA Astrophysics Data System (ADS)

    Kulas, M.; Barl, L.; Borelli, J. L.; Gässler, W.; Rabien, S.

    2012-09-01

    The ARGOS project (Advanced Rayleigh guided Ground layer adaptive Optics System) will upgrade the Large Binocular Telescope (LBT) with an AO System consisting of six Rayleigh laser guide stars. This adaptive optics system integrates several control loops and many different components like lasers, calibration swing arms and slope computers that are dispersed throughout the telescope. The purpose of the instrument control software (ICS) is running this AO system and providing convenient client interfaces to the instruments and the control loops. The challenges for the ARGOS ICS are the development of a distributed and safety-critical software system with no defects in a short time, the creation of huge and complex software programs with a maintainable code base, the delivery of software components with the desired functionality and the support of geographically distributed project partners. To tackle these difficult tasks, the ARGOS software engineers reuse existing software like the novel middleware from LINC-NIRVANA, an instrument for the LBT, provide many tests at different functional levels like unit tests and regression tests, agree about code and architecture style and deliver software incrementally while closely collaborating with the project partners. Many ARGOS ICS components are already successfully in use in the laboratories for testing ARGOS control loops.

  5. Exploring the Use of a Test Automation Framework

    NASA Technical Reports Server (NTRS)

    Cervantes, Alex

    2009-01-01

    It is known that software testers, more often than not, lack the time needed to fully test the delivered software product within the time period allotted to them. When problems occur in the implementation phase of a development project, they normally cause the software delivery date to slide. As a result, testers either need to work longer hours, or supplementary resources need to be added to the test team in order to meet aggressive test deadlines. One solution to this problem is to provide testers with a test automation framework to facilitate the development of automated test solutions.

  6. Behavior driven testing in ALMA telescope calibration software

    NASA Astrophysics Data System (ADS)

    Gil, Juan P.; Garces, Mario; Broguiere, Dominique; Shen, Tzu-Chiang

    2016-07-01

    The ALMA software development cycle includes well-defined testing stages that involve developers, testers and scientists. We adapted Behavior Driven Development (BDD) to the testing activities applied to the Telescope Calibration (TELCAL) software. BDD is an agile technique that encourages communication between roles by defining test cases in natural language to specify features and scenarios, which allows participants to share a common language and provides a high-level set of automated tests. This work describes how we implemented and maintain BDD testing for TELCAL, the infrastructure needed to support it and proposals to expand this technique to other subsystems.
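
    The natural-language scenarios mentioned above typically follow the Gherkin style; a generic sketch with behave-style Python step definitions might look like the following, where the feature text and the calibration step names are invented, not the actual TELCAL feature files.

      # Sketch of BDD with behave: Gherkin feature text plus matching Python step definitions.
      # The scenario and step names are illustrative, not TELCAL's real interfaces.
      FEATURE = """
      Feature: Phase calibration
        Scenario: A calibration result is produced for a valid scan
          Given a completed calibration scan with 4 antennas
          When the phase calibration is computed
          Then a calibration result is available for every antenna
      """

      from behave import given, when, then

      @given("a completed calibration scan with {n:d} antennas")
      def step_scan(context, n):
          context.scan = {"antennas": list(range(n))}

      @when("the phase calibration is computed")
      def step_compute(context):
          context.result = {ant: 0.0 for ant in context.scan["antennas"]}   # stand-in computation

      @then("a calibration result is available for every antenna")
      def step_check(context):
          assert set(context.result) == set(context.scan["antennas"])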

  7. Alcohol hangover: type and time-extension of motor function impairments.

    PubMed

    Karadayian, Analía G; Cutrera, Rodolfo A

    2013-06-15

    Alcohol hangover is defined as the unpleasant next-day state following an evening of excessive alcohol consumption. Hangover begins when ethanol is absent in plasma and is characterized by physical and psychological symptoms. During hangover cognitive functions and subjective capacities are affected along with inefficiency, reduced productivity, absenteeism, driving impairments, poor academic achievement and reductions in motor coordination. The aim of this work was to study the type and length of motor and exploratory functions from the beginning to the end of the alcohol hangover. Male Swiss mice were injected i.p. either with saline (control group) or with ethanol (3.8 g/kg BW) (hangover group). Motor performance, walking deficiency, motor strength, locomotion and exploratory activity were evaluated at a basal point (ZT0) and every 2 h up to 20 h after blood alcohol levels were close to zero (hangover onset). Motor performance was 80% decreased at the onset of hangover (p<0.001). Hangover mice exhibited a reduced motor performance during the next 16 h (p<0.01). Motor function was recovered 20 h after hangover onset. Hangover mice displayed walking deficiencies from the beginning to 16 h after hangover onset (p<0.05). Moreover, mice suffering from a hangover, exhibited a significant decrease in neuromuscular strength during 16 h (p<0.001). Averaged speed and total distance traveled in the open field test and the exploratory activity on T-maze and hole board tests were reduced during 16 h after hangover onset (p<0.05). Our findings demonstrate a time-extension between 16 to 20 h for hangover motor and exploratory impairments. As a whole, this study shows the long lasting effects of alcohol hangover. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Real-Time Extended Interface Automata for Software Testing Cases Generation

    PubMed Central

    Yang, Shunkun; Xu, Jiaqi; Man, Tianlong; Liu, Bin

    2014-01-01

    Testing and verification of the interfaces between software components are particularly important due to the large number of complex interactions, which requires traditional modeling languages to overcome their shortcomings in describing temporal information and in controlling software test inputs. This paper presents the real-time extended interface automata (RTEIA), which add a clearer and more detailed description of temporal information through the use of time words. We also establish an input interface automaton for every input in order to handle input control and interface coverage flexibly when applied to software testing. Detailed definitions of the RTEIA and of the test case generation algorithm are provided in this paper. The feasibility and efficiency of this method have been verified in the testing of a real aircraft braking system. PMID:24892080
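
    The flavour of a timed interface automaton, states, input actions and clock guards on transitions, can be conveyed with a small Python sketch; this illustrates the general idea of timing constraints on interface behaviour, not the RTEIA formalism itself, and the braking-system names are invented.

      # Sketch: a tiny timed automaton whose transition table maps (state, action) to a
      # clock guard, the next state, and whether the clock is reset on the transition.
      class TimedAutomaton:
          def __init__(self):
              self.state, self.clock = "idle", 0.0
              self.table = {
                  ("idle", "brake_cmd"):     (lambda t: True,     "braking", True),
                  ("braking", "wheel_stop"): (lambda t: t <= 2.0, "stopped", False),  # 2 s deadline
              }

          def step(self, action, elapsed):
              self.clock += elapsed
              entry = self.table.get((self.state, action))
              if entry is None or not entry[0](self.clock):
                  raise AssertionError(f"illegal or late action {action!r} in state {self.state}")
              _, self.state, reset = entry
              if reset:
                  self.clock = 0.0

      automaton = TimedAutomaton()
      automaton.step("brake_cmd", 0.5)
      automaton.step("wheel_stop", 1.4)      # within the 2 s deadline, so accepted
      print("trace accepted, final state:", automaton.state)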

  9. Ovarian cancer

    MedlinePlus

    ... test (serum HCG) CT or MRI of the pelvis or abdomen Ultrasound of the pelvis Surgery, such as a pelvic laparoscopy or exploratory ... uterus, or other structures in the belly or pelvis. Chemotherapy is used after surgery to treat any ...

  10. All Reading Tests Are Not Created Equal: A Comparison of the State of Texas Assessment of Academic Readiness (STAAR) and the Gray Oral Reading Test-4 (GORT-4)

    ERIC Educational Resources Information Center

    Johnson, Kary A.; Wilson, Celia M.; Williams-Rossi, Dara

    2013-01-01

    This exploratory study investigated how reading comprehension was conceptualized on the new high-stakes test, the 2011-2012 State of Texas Assessment of Academic Readiness (STAAR). Specifically, comprehension, rate, and accuracy scores on the Gray Oral Reading Test 4 (GORT-4) from a group of struggling, low-SES, Hispanic middle school students (n…

  11. Loran-C flight test software

    NASA Technical Reports Server (NTRS)

    Nickum, J. D.

    1978-01-01

    The software package developed for the KIM-1 Micro-System and the Mini-L PLL receiver to simplify taking flight test data is described, along with the address and data bus buffers used in the KIM-1 Micro-System. The interface hardware and timing are also presented to describe the software programs completely.

  12. Design ATE systems for complex assemblies

    NASA Astrophysics Data System (ADS)

    Napier, R. S.; Flammer, G. H.; Moser, S. A.

    1983-06-01

    The use of ATE systems in radio specification testing can reduce the test time by approximately 90 to 95 percent. What is more, the test station does not require a highly trained operator. Since the system controller has full power over all the measurements, human errors are not introduced into the readings. The controller is immune to any need to increase output by allowing marginal units to pass through the system. In addition, the software compensates for predictable, repeatable system errors, for example, cabling losses, which are an inherent part of the test setup. With no variation in test procedures from unit to unit, there is a constant repeatability factor. Preparing the software, however, usually entails considerable expense. It is pointed out that many of the problems associated with ATE system software can be avoided with the use of a software-intensive, or computer-intensive, system organization. Its goal is to minimize the user's need for software development, thereby saving time and money.

  13. Software platform virtualization in chemistry research and university teaching

    PubMed Central

    2009-01-01

    Background: Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Results: Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Conclusion: Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide. PMID:20150997

  14. Software platform virtualization in chemistry research and university teaching.

    PubMed

    Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver

    2009-11-16

    Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide.

  15. Cognitive and locomotor/exploratory behavior after chronic exercise in the olfactory bulbectomy animal model of depression.

    PubMed

    Van Hoomissen, Jacqueline; Kunrath, Julie; Dentlinger, Renee; Lafrenz, Andrew; Krause, Mark; Azar, Afaf

    2011-09-12

    Despite the evidence that exercise improves cognitive behavior in animal models, little is known about these beneficial effects in animal models of pathology. We examined the effects of activity wheel (AW) running on contextual fear conditioning (CFC) and locomotor/exploratory behavior in the olfactory bulbectomy (OBX) model of depression, which is characterized by hyperactivity and changes in cognitive function. Twenty-four hours after the conditioning session of the CFC protocol, the animals were tested for the conditioned response in a conditioned and a novel context to test for the effects of both AW and OBX on CFC, but also the context specificity of the effect. OBX reduced overall AW running behavior throughout the experiment, but increased locomotor/exploratory behavior during CFC, thus demonstrating a context-dependent effect. OBX animals, however, displayed normal CFC behavior that was context-specific, indicating that aversively conditioned memory is preserved in this model. AW running increased freezing behavior during the testing session of the CFC protocol in the control animals but only in the conditioned context, supporting the hypothesis that AW running improves cognitive function in a context-specific manner that does not generalize to an animal model of pathology. Blood corticosterone levels were increased in all animals at the conclusion of the testing sessions, but levels were higher in AW compared to sedentary groups indicating an effect of exercise on neuroendocrine function. Given the differential results of AW running on behavior and neuroendocrine function after OBX, further exploration of the beneficial effects of exercise in animal models of neuropathology is warranted. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Simulation-based Testing of Control Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozmen, Ozgur; Nutaro, James J.; Sanyal, Jibonananda

    It is impossible to adequately test complex software by examining its operation in a physical prototype of the system monitored. Adequate test coverage can require millions of test cases, and the cost of equipment prototypes combined with the real-time constraints of testing with them makes it infeasible to sample more than a small number of these tests. Model-based testing seeks to avoid this problem by allowing for large numbers of relatively inexpensive virtual prototypes that operate in simulation time at a speed limited only by the available computing resources. In this report, we describe how a computer system emulator can be used as part of a model-based testing environment; specifically, we show that a complete software stack - including operating system and application software - can be deployed within a simulated environment, and that these simulations can proceed as fast as possible. To illustrate this approach to model-based testing, we describe how it is being used to test several building control systems that act to coordinate air conditioning loads for the purpose of reducing peak demand. These tests involve the use of ADEVS (A Discrete Event System Simulator) and QEMU (Quick Emulator) to host the operational software within the simulation, and a building model developed in the Modelica programming language using the Buildings Library and packaged as an FMU (Functional Mock-up Unit) that serves as the virtual test environment.
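
    As an illustration of the FMU side of such a setup, the sketch below drives an FMU-packaged building model from a Python test harness using the FMPy library and asserts a simple closed-loop property. The file name and variable names ("building.fmu", "T_setpoint", "T_zone") are hypothetical placeholders; the report's actual coupling through ADEVS and QEMU is not reproduced here.

      # Minimal sketch: exercising an FMU-packaged building model as a virtual
      # test environment. Assumes the FMPy library is installed; the FMU and
      # its variable names are hypothetical placeholders.
      from fmpy import simulate_fmu

      result = simulate_fmu(
          "building.fmu",
          start_time=0.0,
          stop_time=3600.0,                      # one simulated hour
          start_values={"T_setpoint": 296.15},   # 23 degC setpoint
          output=["T_zone"],
      )

      # 'result' is a structured array; a test could assert that the controller
      # keeps the zone temperature within a tolerance band around the setpoint.
      assert abs(result["T_zone"][-1] - 296.15) < 2.0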

  17. Waveform Generator Signal Processing Software

    DOT National Transportation Integrated Search

    1988-09-01

    This report describes the software that was developed to process test waveforms that were recorded by crash test data acquisition systems. The test waveforms are generated by an electronic waveform generator developed by MGA Research Corporation unde...

  18. Software error data collection and categorization

    NASA Technical Reports Server (NTRS)

    Ostrand, T. J.; Weyuker, E. J.

    1982-01-01

    Software errors detected during development of an interactive special purpose editor system were studied. This product was followed during nine months of coding, unit testing, function testing, and system testing. A new error categorization scheme was developed.

  19. Method and computer product to increase accuracy of time-based software verification for sensor networks

    DOEpatents

    Foo Kune, Denis [Saint Paul, MN; Mahadevan, Karthikeyan [Mountain View, CA

    2011-01-25

    A recursive verification protocol that reduces the time variance due to delays in the network by putting the subject node at most one hop from the verifier node provides an efficient manner of testing wireless sensor nodes. Since the software signatures are time based, recursive testing gives a much cleaner signal for positive verification of the software running on any one node in the sensor network. In this protocol, the main verifier checks its neighbor, which in turn checks its neighbor, and this process continues until all nodes have been verified. This ensures minimum time delays for the software verification. Should a node fail the test, the software verification downstream is halted until an alternative path (one not including the failed node) is found. Utilizing techniques well known in the art, having a node tested twice, or not at all, can be avoided.
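
    The hop-by-hop recursion can be sketched as follows. The topology and the one-hop check are stand-ins for the patent's time-based signature verification; the code simply illustrates the control flow (verify a neighbor, recurse from it, halt a branch on failure, and never test a node twice).

      # Illustrative sketch of hop-by-hop recursive verification. 'neighbors' maps
      # a node to its one-hop peers; 'check' is a placeholder for the time-based
      # software signature test performed over a single hop.
      def verify_network(start, neighbors, check):
          verified, failed = set(), set()

          def verify_from(node):
              for peer in neighbors(node):
                  if peer in verified or peer in failed:
                      continue                  # never test a node twice
                  if check(node, peer):         # one-hop, time-based check
                      verified.add(peer)
                      verify_from(peer)         # the verified peer becomes a verifier
                  else:
                      failed.add(peer)          # halt this branch; nodes behind the
                                                # failure may be reached via other paths

          verified.add(start)
          verify_from(start)
          return verified, failed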

  20. A Model Independent S/W Framework for Search-Based Software Testing

    PubMed Central

    Baik, Jongmoon

    2014-01-01

    In the Model-Based Testing (MBT) area, Search-Based Software Testing (SBST) has been employed to generate test cases from the model of a system under test. However, many types of models have been used in MBT. If the type of model changes from one to another, all functions of a search technique must be reimplemented, because the model types differ even when the same search technique is applied. It requires too much time and effort to implement the same algorithm over and over again. We propose a model-independent software framework for SBST, which can reduce this redundant work. The framework provides a reusable common software platform to reduce time and effort. The software framework not only presents design patterns to find test cases for a target model but also reduces development time by using common functions provided in the framework. We show the effectiveness and efficiency of the proposed framework with two case studies. The framework improves productivity by about 50% when changing the type of a model. PMID:25302314
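
    A model-independent search loop of this kind can be illustrated with a small hill-climbing sketch in which the model-specific parts (test-case encoding, neighbourhood, fitness) are plugged in as functions. The names and the toy branch-distance fitness are illustrative and are not the framework described in the paper.

      # The search algorithm is written once against abstract operations; each
      # model type supplies its own encoding, neighbourhood, and fitness.
      import random

      def hill_climb(initial, neighbours, fitness, iterations=1000):
          best, best_fit = initial, fitness(initial)
          for _ in range(iterations):
              candidate = random.choice(neighbours(best))
              cand_fit = fitness(candidate)
              if cand_fit > best_fit:          # maximise a coverage-style fitness
                  best, best_fit = candidate, cand_fit
          return best, best_fit

      # Example plug-in for one model type: test cases are integer input pairs and
      # fitness rewards inputs approaching a hypothetical branch condition x == y.
      initial = [0, 100]
      neighbours = lambda tc: [[tc[0] + d0, tc[1] + d1]
                               for d0 in (-1, 0, 1) for d1 in (-1, 0, 1)]
      fitness = lambda tc: -abs(tc[0] - tc[1])   # negated branch distance
      print(hill_climb(initial, neighbours, fitness))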

  1. Developing software to "track and catch" missed follow-up of abnormal test results in a complex sociotechnical environment.

    PubMed

    Smith, M; Murphy, D; Laxmisan, A; Sittig, D; Reis, B; Esquivel, A; Singh, H

    2013-01-01

    Abnormal test results do not always receive timely follow-up, even when providers are notified through electronic health record (EHR)-based alerts. High workload, alert fatigue, and other demands on attention disrupt a provider's prospective memory for tasks required to initiate follow-up. Thus, EHR-based tracking and reminding functionalities are needed to improve follow-up. The purpose of this study was to develop a decision-support software prototype enabling individual and system-wide tracking of abnormal test result alerts lacking follow-up, and to conduct formative evaluations, including usability testing. We developed a working prototype software system, the Alert Watch And Response Engine (AWARE), to detect abnormal test result alerts lacking documented follow-up, and to present context-specific reminders to providers. Development and testing took place within the VA's EHR and focused on four cancer-related abnormal test results. Design concepts emphasized mitigating the effects of high workload and alert fatigue while being minimally intrusive. We conducted a multifaceted formative evaluation of the software, addressing fit within the larger socio-technical system. Evaluations included usability testing with the prototype and interview questions about organizational and workflow factors. Participants included 23 physicians, 9 clinical information technology specialists, and 8 quality/safety managers. Evaluation results indicated that our software prototype fit within the technical environment and clinical workflow, and physicians were able to use it successfully. Quality/safety managers reported that the tool would be useful in future quality assurance activities to detect patients who lack documented follow-up. Additionally, we successfully installed the software on the local facility's "test" EHR system, thus demonstrating technical compatibility. To address the factors involved in missed test results, we developed a software prototype to account for technical, usability, organizational, and workflow needs. Our evaluation has shown the feasibility of the prototype as a means of facilitating better follow-up for cancer-related abnormal test results.

  2. Test Driven Development: Lessons from a Simple Scientific Model

    NASA Astrophysics Data System (ADS)

    Clune, T. L.; Kuo, K.

    2010-12-01

    In the commercial software industry, unit testing frameworks have emerged as a disruptive technology that has permanently altered the process by which software is developed. Unit testing frameworks significantly reduce traditional barriers, both practical and psychological, to creating and executing tests that verify software implementations. A new development paradigm, known as test driven development (TDD), has emerged from unit testing practices, in which low-level tests (i.e. unit tests) are created by developers prior to implementing new pieces of code. Although somewhat counter-intuitive, this approach actually improves developer productivity. In addition to reducing the average time for detecting software defects (bugs), the requirement to provide procedure interfaces that enable testing frequently leads to superior design decisions. Although TDD is widely accepted in many software domains, its applicability to scientific modeling still warrants reasonable skepticism. While the technique is clearly relevant for infrastructure layers of scientific models such as the Earth System Modeling Framework (ESMF), numerical and scientific components pose a number of challenges to TDD that are not often encountered in commercial software. Nonetheless, our experience leads us to believe that the technique has great potential not only for developer productivity, but also as a tool for understanding and documenting the basic scientific assumptions upon which our models are implemented. We will provide a brief introduction to test driven development and then discuss our experience in using TDD to implement a relatively simple numerical model that simulates the growth of snowflakes. Many of the lessons learned are directly applicable to larger scientific models.
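
    A minimal test-first illustration in this spirit: unit tests for a hypothetical snowflake mass-growth routine are written to pin down its behaviour before (or alongside) the implementation. The function and its growth law are placeholders, not the model discussed in the talk.

      # Tests written first, implementation added afterwards to make them pass.
      import pytest

      def grow_mass(mass, rate, dt):
          """Placeholder growth law implemented after the tests below."""
          if mass < 0 or dt < 0:
              raise ValueError("mass and dt must be non-negative")
          return mass * (1.0 + rate * dt)

      def test_growth_is_proportional_to_rate_and_time():
          assert grow_mass(1.0, 0.1, 1.0) == pytest.approx(1.1)

      def test_zero_timestep_leaves_mass_unchanged():
          assert grow_mass(2.5, 0.1, 0.0) == pytest.approx(2.5)

      def test_negative_inputs_are_rejected():
          with pytest.raises(ValueError):
              grow_mass(-1.0, 0.1, 1.0)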

  3. The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Finelli, George B.

    1991-01-01

    This paper affirms that the quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The classical methods of estimating reliability are shown to lead to exorbitant amounts of testing when applied to life-critical software. Reliability growth models are examined and also shown to be incapable of overcoming the need for excessive amounts of testing. The key assumption of software fault tolerance - that separately programmed versions fail independently - is shown to be problematic. This assumption cannot be justified by experimentation in the ultrareliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multiversion software experiments support this affirmation.
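
    The scale of this testing burden can be illustrated with the standard zero-failure testing argument (a textbook calculation, not a formula quoted from the paper). Assuming independent tests with per-demand failure probability p, demonstrating p at or below 10^-9 with confidence C = 0.99 requires n failure-free tests satisfying (1-p)^n <= 1-C:

      \[
        n \;\ge\; \frac{\ln(1 - C)}{\ln(1 - p)}
          \;\approx\; \frac{-\ln(1 - C)}{p}
          \;=\; \frac{4.6}{10^{-9}}
          \;\approx\; 4.6 \times 10^{9}
        \qquad \text{for } C = 0.99,\; p = 10^{-9},
      \]

    a number of tests far beyond what real-time testing of life-critical systems can achieve.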

  4. Software verification plan for GCS. [guidance and control software

    NASA Technical Reports Server (NTRS)

    Dent, Leslie A.; Shagnea, Anita M.; Hayhurst, Kelly J.

    1990-01-01

    This verification plan is written as part of an experiment designed to study the fundamental characteristics of the software failure process. The experiment will be conducted using several implementations of software that were produced according to industry-standard guidelines, namely the Radio Technical Commission for Aeronautics RTCA/DO-178A guidelines, Software Considerations in Airborne Systems and Equipment Certification, for the development of flight software. This plan fulfills the DO-178A requirements for providing instructions on the testing of each implementation of software. The plan details the verification activities to be performed at each phase in the development process, contains a step-by-step description of the testing procedures, and discusses all of the tools used throughout the verification process.

  5. Software development predictors, error analysis, reliability models and software metric analysis

    NASA Technical Reports Server (NTRS)

    Basili, Victor

    1983-01-01

    The use of dynamic characteristics as predictors for software development was studied. It was found that there are some significant factors that could be useful as predictors. From a study on software errors and complexity, it was shown that meaningful results can be obtained which allow insight into software traits and the environment in which it is developed. Reliability models were studied. The research included the field of program testing because the validity of some reliability models depends on the answers to some unanswered questions about testing. In studying software metrics, data collected from seven software engineering laboratory (FORTRAN) projects were examined and three effort reporting accuracy checks were applied to demonstrate the need to validate a data base. Results are discussed.

  6. CrossTalk. The Journal of Defense Software Engineering. Volume 13, Number 6, June 2000

    DTIC Science & Technology

    2000-06-01

    Techniques for Efficiently Generating and Testing Software This paper presents a proven process that uses advanced tools to design, develop and test... optimal software. by Keith R. Wegner Large Software Systems—Back to Basics Development methods that work on small problems seem to not scale well to...Ability Requirements for Teamwork: Implications for Human Resource Management, Journal of Management, Vol. 20, No. 2, 1994. 11. Ferguson, Pat, Watts S

  7. Open source IPSEC software in manned and unmanned space missions

    NASA Astrophysics Data System (ADS)

    Edwards, Jacob

    Network security is a major topic of research because cyber attackers pose a threat to national security. Securing ground-space communications for NASA missions is important because attackers could endanger mission success and human lives. This thesis describes how an open source IPsec software package was used to create a secure and reliable channel for ground-space communications. A cost-efficient, reproducible hardware testbed was also created to simulate ground-space communications. The testbed enables simulation of low-bandwidth and high-latency communications links, to examine how the open source IPsec software reacts to these network constraints. Test cases were built that allowed for validation of the testbed and the open source IPsec software. The test cases also simulate using an IPsec connection from mission control ground routers to points of interest in outer space. The tested open source IPsec software did not meet all the requirements. Software changes were suggested to meet the requirements.

  8. The use of emulator-based simulators for on-board software maintenance

    NASA Astrophysics Data System (ADS)

    Irvine, M. M.; Dartnell, A.

    2002-07-01

    Traditionally, onboard software maintenance activities within the space sector are performed using hardware-based facilities. These facilities are developed around the use of hardware emulation or breadboards containing target processors. Some sort of environment is provided around the hardware to support the maintenance activities. However, these environments are not easy to use to set up the required test scenarios, particularly when the onboard software executes in a dynamic I/O environment, e.g. attitude control software or data handling software. In addition, the hardware and/or environment may not support the test set-up required during investigations into software anomalies, e.g. raising a spurious interrupt, failing memory, etc., and the overall "visibility" of the executing software may be limited. The Software Maintenance Simulator (SOMSIM) is a tool that can support the traditional maintenance facilities. Some of the main benefits that SOMSIM can provide are: a low-cost, flexible extension to an existing product - an operational simulator containing a software processor emulator; a system-level, high-fidelity test-bed in which the software "executes"; a high degree of control and configuration over the entire "system", including contingency conditions perhaps not possible with real hardware; and high visibility of and control over execution of the emulated software. This paper describes the SOMSIM concept in more detail, and also describes the SOMSIM study being carried out for ESA/ESOC by VEGA IT GmbH.

  9. Modular, Autonomous Command and Data Handling Software with Built-In Simulation and Test

    NASA Technical Reports Server (NTRS)

    Cuseo, John

    2012-01-01

    The spacecraft system that plays the greatest role throughout the program lifecycle is the Command and Data Handling System (C&DH), along with the associated algorithms and software. The C&DH takes on this role as cost driver because it is the brains of the spacecraft and is the element of the system that is primarily responsible for the integration and interoperability of all spacecraft subsystems. During design and development, many activities associated with mission design, system engineering, and subsystem development result in products that are directly supported by the C&DH, such as interfaces, algorithms, flight software (FSW), and parameter sets. A modular system architecture has been developed that provides a means for rapid spacecraft assembly, test, and integration. This modular C&DH software architecture, which can be targeted and adapted to a wide variety of spacecraft architectures, payloads, and mission requirements, eliminates the current practice of rewriting the spacecraft software and test environment for every mission. This software allows mission-specific software and algorithms to be rapidly integrated and tested, significantly decreasing the time involved in the software development cycle. Additionally, the FSW includes an Onboard Dynamic Simulation System (ODySSy) that allows the C&DH software to support rapid integration and test. With this solution, the C&DH software capabilities will encompass all phases of the spacecraft lifecycle. ODySSy is an on-board simulation capability built directly into the FSW that provides dynamic built-in test capabilities as soon as the FSW image is loaded onto the processor. It includes a six-degrees-of-freedom, high-fidelity simulation that allows complete closed-loop and hardware-in-the-loop testing of a spacecraft in a ground processing environment without any additional external stimuli. ODySSy can intercept and modify sensor inputs using mathematical sensor models, and can intercept and respond to actuator commands. ODySSy integration is unique in that it allows testing of actual mission sequences on the flight vehicle while the spacecraft is in various stages of assembly, test, and launch operations, all without any external support equipment or simulators. The ODySSy component of the FSW significantly decreases the time required for integration and test by providing an automated, standardized, and modular approach to integrated avionics and component interface and functional verification. ODySSy further provides the capability for on-orbit support in the form of autonomous mission planning and fault protection.

  10. Software engineering and automatic continuous verification of scientific software

    NASA Astrophysics Data System (ADS)

    Piggott, M. D.; Hill, J.; Farrell, P. E.; Kramer, S. C.; Wilson, C. R.; Ham, D.; Gorman, G. J.; Bond, T.

    2011-12-01

    Software engineering of scientific code is challenging for a number of reasons, including pressure to publish and a lack of awareness of the pitfalls of software engineering among scientists. The Applied Modelling and Computation Group at Imperial College is a diverse group of researchers that employ best-practice software engineering methods whilst developing open source scientific software. Our main code is Fluidity - a multi-purpose computational fluid dynamics (CFD) code that can be used for a wide range of scientific applications from earth-scale mantle convection, through basin-scale ocean dynamics, to laboratory-scale classic CFD problems, and is coupled to a number of other codes including nuclear radiation and solid modelling. Our software development infrastructure consists of a number of free tools that could be employed by any group that develops scientific code and has been developed over a number of years with many lessons learnt. A single code base is developed by over 30 people, for which we use bazaar for revision control, making good use of its strong branching and merging capabilities. Using features of Canonical's Launchpad platform, such as code review, blueprints for designing features, and bug reporting, gives the group, partners and other Fluidity users an easy-to-use platform to collaborate, and allows the induction of new members of the group into an environment where software development forms a central part of their work. The code repository is coupled to an automated test and verification system which performs over 20,000 tests, including unit tests, short regression tests, code verification and large parallel tests. Included in these tests are build tests on HPC systems, including local and UK National HPC services. Testing code in this manner leads to a continuous verification process, not a discrete event performed once development has ceased. Much of the code verification is done via the "gold standard" of comparison to analytical solutions, using the method of manufactured solutions. By developing and verifying code in tandem we avoid a number of pitfalls in scientific software development and advocate similar procedures for other scientific code applications.
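
    A tiny example of the kind of automated verification test described above, comparing a numerical approximation with an analytical solution and checking its order of convergence; this is a generic sketch, not part of the Fluidity test suite.

      # Verification test: a central difference should converge to the analytical
      # derivative at second order; the test asserts the observed order is ~2.
      import math

      def central_difference(f, x, h):
          return (f(x + h) - f(x - h)) / (2.0 * h)

      def observed_order(f, dfdx, x=1.0):
          errors = []
          for h in (1e-2, 5e-3):               # halve the step size once
              errors.append(abs(central_difference(f, x, h) - dfdx(x)))
          return math.log(errors[0] / errors[1]) / math.log(2.0)

      def test_second_order_convergence():
          order = observed_order(math.sin, math.cos)
          assert abs(order - 2.0) < 0.1        # central differences are O(h^2)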

  11. Pyteomics--a Python framework for exploratory data analysis and rapid software prototyping in proteomics.

    PubMed

    Goloborodko, Anton A; Levitsky, Lev I; Ivanov, Mark V; Gorshkov, Mikhail V

    2013-02-01

    Pyteomics is a cross-platform, open-source Python library providing a rich set of tools for MS-based proteomics. It provides modules for reading LC-MS/MS data, search engine output, protein sequence databases, theoretical prediction of retention times, electrochemical properties of polypeptides, mass and m/z calculations, and sequence parsing. Pyteomics is available under Apache license; release versions are available at the Python Package Index http://pypi.python.org/pyteomics, the source code repository at http://hg.theorchromo.ru/pyteomics, documentation at http://packages.python.org/pyteomics. Pyteomics.biolccc documentation is available at http://packages.python.org/pyteomics.biolccc/. Questions on installation and usage can be addressed to pyteomics mailing list: pyteomics@googlegroups.com.
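
    A brief usage sketch, assuming a standard Pyteomics installation (module layout and function names as in the project documentation; the example peptide and protein fragment are arbitrary):

      from pyteomics import mass, parser

      # Monoisotopic mass of a peptide sequence.
      print(mass.calculate_mass(sequence='PEPTIDE'))

      # In-silico tryptic digestion of a small protein fragment.
      peptides = parser.cleave('MKWVTFISLLLLFSSAYSRGV', parser.expasy_rules['trypsin'])
      print(sorted(peptides))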

  12. The numerical simulation of heat transfer during a hybrid laser-MIG welding using equivalent heat source approach

    NASA Astrophysics Data System (ADS)

    Bendaoud, Issam; Matteï, Simone; Cicala, Eugen; Tomashchuk, Iryna; Andrzejewski, Henri; Sallamand, Pierre; Mathieu, Alexandre; Bouchaud, Fréderic

    2014-03-01

    The present study is dedicated to the numerical simulation of an industrial case of hybrid laser-MIG welding of high-thickness duplex steel UR2507Cu with a Y-shaped chamfer geometry. It consists of the simulation of heat transfer phenomena using an equivalent heat source approach, implemented in the finite element software COMSOL Multiphysics. A numerical exploratory design method is used to identify the heat source parameters that minimize the difference between the numerical results and the experimental data, namely the shape of the welded zone and the temperature evolution at different locations. The obtained results were found to be in good agreement with experiment, both for the melted zone shape and the thermal history.

  13. Multivariate statistical analysis software technologies for astrophysical research involving large data bases

    NASA Technical Reports Server (NTRS)

    Djorgovski, George

    1993-01-01

    The existing and forthcoming data bases from NASA missions contain an abundance of information whose complexity cannot be efficiently tapped with simple statistical techniques. Powerful multivariate statistical methods already exist which can be used to harness much of the richness of these data. Automatic classification techniques have been developed to solve the problem of identifying known types of objects in multiparameter data sets, in addition to leading to the discovery of new physical phenomena and classes of objects. We propose an exploratory study and integration of promising techniques in the development of a general and modular classification/analysis system for very large data bases, which would enhance and optimize data management and the use of human research resources.

  14. Multivariate statistical analysis software technologies for astrophysical research involving large data bases

    NASA Technical Reports Server (NTRS)

    Djorgovski, Stanislav

    1992-01-01

    The existing and forthcoming data bases from NASA missions contain an abundance of information whose complexity cannot be efficiently tapped with simple statistical techniques. Powerful multivariate statistical methods already exist which can be used to harness much of the richness of these data. Automatic classification techniques have been developed to solve the problem of identifying known types of objects in multiparameter data sets, in addition to leading to the discovery of new physical phenomena and classes of objects. We propose an exploratory study and integration of promising techniques in the development of a general and modular classification/analysis system for very large data bases, which would enhance and optimize data management and the use of human research resources.

  15. Effect of ketamine on exploratory behaviour in BALB/C and C57BL/6 mice.

    PubMed

    Akillioglu, Kubra; Melik, Emine Babar; Melik, Enver; Boga, Ayper

    2012-01-01

    In this study, we evaluated the effect of ketamine on exploratory locomotion behaviours in the Balb/c and C57BL/6 strains of mice, which differ in their locomotion behaviours. Intraperitoneal administration of ketamine at three different doses (1, 5 or 10 mg/kg, 0.1 ml/10 g body weight) was performed on adult male Balb/c and C57BL/6 mice. The same volume of saline was applied to the control group. The open-field and elevated plus maze apparatus were used to evaluate exploratory locomotion. In the open-field test, Balb/c mice spent less time in the centre of the field and showed decreased locomotor activity compared to C57BL/6 mice (p<0.01). Ketamine treatment of Balb/c mice at the 10 mg/kg dose caused an increase in locomotor activity and an increase in the amount of time spent in the centre in the open-field test, compared to the control group (p<0.05). In C57BL/6 mice, ketamine treatment (1 and 10 mg/kg) decreased locomotor activity (p<0.05). In C57BL/6 mice, each of the three ketamine doses caused a decrease in the frequency of centre crossing (p<0.001) and in the time spent in the centre (p<0.05). In the elevated plus maze, the number of open-arm entries, the percentage of open-arm time and total arm entries were decreased in Balb/c mice compared to C57BL/6 mice (p<0.001). Ketamine treatment of Balb/c mice at the 10 mg/kg dose caused an increase in open-arm activity (p<0.001). Ketamine application (10 mg/kg) decreased open-arm activity in C57BL/6 mice (p<0.05). A subanaesthetic dose of ketamine increased exploratory locomotion in Balb/c mice. In contrast, a subanaesthetic dose of ketamine decreased exploratory locomotion in C57BL/6 mice. In conclusion, hereditary factors may play an important role in ketamine-induced responses. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Detection and avoidance of errors in computer software

    NASA Technical Reports Server (NTRS)

    Kinsler, Les

    1989-01-01

    The acceptance test errors of a computer software project were analyzed to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project is approximately 250,000 lines of code, of which approximately 43,000 lines are reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors. Of these 936 errors, 374 were found during acceptance testing. These acceptance test errors were first categorized by method of avoidance, including: more clearly written requirements; detailed review; code reading; structural unit testing; and functional system integration testing. The errors were later broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The results suggest that the number of programming errors present at the beginning of acceptance testing can be significantly reduced. The existing development methodology is examined for ways of improvement. A basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness in avoiding and detecting errors.

  17. On the Factor Structure of a Reading Comprehension Test

    ERIC Educational Resources Information Center

    Salehi, Mohammad

    2011-01-01

    To investigate the construct validity of a section of a high-stakes test, an exploratory factor analysis using principal components analysis was employed. The rotation used was varimax with a suppression level of 0.30. Eleven factors were extracted out of 35 reading comprehension items. The fact that these factors emerged speaks to the construct…

  18. An Exploratory Study Investigating How Adults with Intellectual Disabilities Perform on the Visual Association Test (VAT)

    ERIC Educational Resources Information Center

    McPaul, Ann; Walker, Brigid; Law, Jim; McKenzie, Karen

    2017-01-01

    Background: Neuropsychological tests of memory are believed to offer the greatest sensitivity at identifying people at the risk of developing dementia. There is a paucity of standardized and appropriate neuropsychological assessments of memory for adults with an intellectual disability. This study examines how adults with an intellectual…

  19. Psychometric Properties and Factorial Structure of the Chinese Version of the Gratitude Resentment and Appreciation Test

    ERIC Educational Resources Information Center

    Lin, Shu-Hui; Huang, Yun-Chen

    2016-01-01

    The purpose of this study was to validate a Chinese version of the Gratitude Resentment and Appreciation Test (GRAT) with Taiwanese students. In Study 1, a total of 2511 Taiwanese students participated and completed the translated GRAT. Exploratory factor analysis, confirmatory factor analysis and reliability analysis were undertaken to assess the…

  20. An Exploratory Study of User Searching of the World Wide Web: A Holistic Approach.

    ERIC Educational Resources Information Center

    Wang, Peiling; Tenopir, Carol; Laymman, Elizabeth; Penniman, David; Collins, Shawn

    1998-01-01

    Examines Web users' behaviors and needs and tests a methodology for studying users' interaction with the Web. A process-tracing technique, together with tests of cognitive style, anxiety levels, and self-report computer experience, provided data on how users interact with the Web in the process of finding factual information. (Author/AEF)

  1. Motor Control Test Responses to Balance Perturbations in Adults with an Intellectual Disability

    ERIC Educational Resources Information Center

    Hale, Leigh; Miller, Rebekah; Barach, Alice; Skinner, Margot; Gray, Andrew

    2009-01-01

    Background: The aims of this small exploratory study were to determine (1) whether adults with intellectual disability who had a recent history of falling had slower motor responses to postural perturbations than a sample of adults without disability when measured with the Motor Control Test (MCT) and (2) to identify any learning effects…

  2. The Woodcock Johnson III Tests of Achievement in Foreign Language Course Substitution Decisions for University Students with Learning Disabilities: An Exploratory Analysis

    ERIC Educational Resources Information Center

    Madaus, Joseph W.

    2005-01-01

    Selected subtests from the Woodcock Johnson III Tests of Achievement (Woodcock, McGrew, & Mather, 2001) were administered to three groups of university students. The groups included students with learning disabilities who received course substitutions for the institution's foreign language requirement, students with learning disabilities who…

  3. Homeschooling Parent/Teachers' Perceptions on Educating Struggling High School Students and their College Readiness

    ERIC Educational Resources Information Center

    McCullough, Brenda Tracy

    2013-01-01

    A general problem is that testing a homeschooled child for learning disabilities (LD) is not required in the state of Texas and therefore dependent on the homeschooling parent's recognition and desire to test. A qualitative exploratory method was used to determine the perceptions of parent/teachers on their struggling high school students'…

  4. Development of the Spatial Ability Test for Middle School Students

    ERIC Educational Resources Information Center

    Yildiz, Sevda Göktepe; Özdemir, Ahmet Sükrü

    2017-01-01

    The purpose of this study was to develop a test to determine spatial ability of middle school students. The participants were 704 middle school students (6th, 7th and 8th grade) who were studying at different schools from Istanbul. Item analysis, exploratory and confirmatory factor analysis, reliability analysis were used to analyse the data.…

  5. Perceptions of Examiner Behavior Modulate Power Relations in Oral Performance Testing

    ERIC Educational Resources Information Center

    Plough, India C.; Bogart, Pamela S. H.

    2008-01-01

    To what extent are the discourse behaviors of examiners salient to participants of an oral performance test? This exploratory study employs a grounded ethnographic approach to investigate the perceptions of the verbal, paralinguistic and nonverbal discourse behaviors of an examiner in a one-on-one role-play task that is one of four tasks in an…

  6. Test Driven Development of Scientific Models

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.

    2014-01-01

    Test-Driven Development (TDD), a software development process that promises many advantages for developer productivity and software reliability, has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By testimony, many developers find TDD to be addictive after only a few days of exposure, and find it unthinkable to return to previous practices. After a brief overview of the TDD process and my experience in applying the methodology for development activities at Goddard, I will delve more deeply into some of the challenges that are posed by numerical and scientific software, as well as tools and implementation approaches that should address those challenges.

  7. [Application of Stata software to test heterogeneity in meta-analysis method].

    PubMed

    Wang, Dan; Mou, Zhen-yun; Zhai, Jun-xia; Zong, Hong-xia; Zhao, Xiao-dong

    2008-07-01

    To introduce the application of Stata software to heterogeneity testing in meta-analysis. A data set was set up according to the example in the study, and the corresponding commands of the methods in Stata 9 software were applied to the example. The methods used were the Q-test and I2 statistic attached to the fixed-effect model forest plot, the H statistic, and the Galbraith plot. The existence of heterogeneity among studies could be detected by the Q-test and H statistic, and the degree of heterogeneity could be quantified by the I2 statistic. The outliers which were the sources of the heterogeneity could be spotted in the Galbraith plot. Heterogeneity testing in meta-analysis can be completed by the four methods in Stata software simply and quickly. Among the four methods, the H and I2 statistics are more robust, and the outliers responsible for the heterogeneity can be clearly seen in the Galbraith plot.
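
    For readers without Stata, the quantities involved can be illustrated in a few lines of Python; the effect sizes and standard errors below are made-up example data, and the Galbraith plot is not reproduced.

      # Cochran's Q and the I^2 statistic for a fixed-effect meta-analysis.
      effects = [0.30, 0.45, 0.10, 0.52]          # example study effect sizes
      ses     = [0.10, 0.12, 0.15, 0.11]          # example standard errors

      weights = [1.0 / se**2 for se in ses]       # inverse-variance weights
      pooled  = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
      Q       = sum(w * (y - pooled)**2 for w, y in zip(weights, effects))
      df      = len(effects) - 1
      I2      = max(0.0, (Q - df) / Q) * 100.0    # percent of variation due to heterogeneity

      print(f"Q = {Q:.2f} on {df} df, I^2 = {I2:.1f}%")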

  8. Model-Based Development of Automotive Electronic Climate Control Software

    NASA Astrophysics Data System (ADS)

    Kakade, Rupesh; Murugesan, Mohan; Perugu, Bhupal; Nair, Mohanan

    With the increasing complexity of software in today's products, writing and maintaining thousands of lines of code is a tedious task. Instead, an alternative methodology must be employed. Model-based development is one candidate that offers several benefits and allows engineers to focus on the domain of their expertise rather than writing huge amounts of code. In this paper, we discuss the application of model-based development to the electronic climate control software of vehicles. A back-to-back testing approach is presented that ensures a flawless and smooth transition from legacy designs to model-based development. The use of the Simulink report generator to create design documents from the models is presented, along with its use to run the simulation model and capture the results in the test report. Test automation using a model-based development tool that supports the use of a single set of test cases across several testing levels, with a test procedure that is independent of the software and hardware platform, is also presented.

  9. Using simple agent-based modeling to inform and enhance neighborhood walkability.

    PubMed

    Badland, Hannah; White, Marcus; Macaulay, Gus; Eagleson, Serryn; Mavoa, Suzanne; Pettit, Christopher; Giles-Corti, Billie

    2013-12-11

    Pedestrian-friendly neighborhoods with proximal destinations and services encourage walking and decrease car dependence, thereby contributing to more active and healthier communities. Proximity to key destinations and services is an important aspect of the urban design decision making process, particularly in areas adopting a transit-oriented development (TOD) approach to urban planning, whereby densification occurs within walking distance of transit nodes. Modeling destination access within neighborhoods has been limited to circular catchment buffers or more sophisticated network-buffers generated using geoprocessing routines within geographical information systems (GIS). Both circular and network-buffer catchment methods are problematic. Circular catchment models do not account for street networks, thus do not allow exploratory 'what-if' scenario modeling; and network-buffering functionality typically exists within proprietary GIS software, which can be costly and requires a high level of expertise to operate. This study sought to overcome these limitations by developing an open-source simple agent-based walkable catchment tool that can be used by researchers, urban designers, planners, and policy makers to test scenarios for improving neighborhood walkable catchments. A simplified version of an agent-based model was ported to a vector-based open source GIS web tool using data derived from the Australian Urban Research Infrastructure Network (AURIN). The tool was developed and tested with end-user stakeholder working group input. The resulting model has proven to be effective and flexible, allowing stakeholders to assess and optimize the walkability of neighborhood catchments around actual or potential nodes of interest (e.g., schools, public transport stops). Users can derive a range of metrics to compare different scenarios modeled. These include: catchment area versus circular buffer ratios; mean number of streets crossed; and modeling of different walking speeds and wait time at intersections. The tool has the capacity to influence planning and public health advocacy and practice, and by using open-access source software, it is available for use locally and internationally. There is also scope to extend this version of the tool from a simple to a complex model, which includes agents (i.e., simulated pedestrians) 'learning' and incorporating other environmental attributes that enhance walkability (e.g., residential density, mixed land use, traffic volume).
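
    The core catchment idea can be sketched independently of the tool itself: nodes reachable within a walking budget along a street network, found with a shortest-path search, versus an equivalent circular buffer. The toy graph, the 400 m budget, and the use of the networkx library are illustrative assumptions, not the AURIN implementation.

      # Walkable catchment as a cutoff-limited shortest-path search over a street
      # network; a circular buffer ignores the network entirely.
      import math
      import networkx as nx

      G = nx.Graph()
      # Edges are street segments with lengths in metres (toy example).
      G.add_weighted_edges_from([
          ("stop", "a", 150), ("a", "b", 200), ("b", "c", 180),
          ("stop", "d", 120), ("d", "e", 300), ("e", "f", 250),
      ], weight="length")

      budget_m = 400
      reach = nx.single_source_dijkstra_path_length(
          G, "stop", cutoff=budget_m, weight="length")
      print("nodes within the walking budget:", sorted(reach))

      # A catchment/buffer ratio could then compare the area served by these
      # reachable segments with the circular buffer area pi * budget_m**2.
      print("circular buffer area (m^2):", math.pi * budget_m ** 2)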

  10. Some Improved Diagnostics for Failure of The Rasch Model.

    ERIC Educational Resources Information Center

    Molenaar, Ivo W.

    1983-01-01

    Goodness of fit tests for the Rasch model are typically large-sample, global measures. This paper offers suggestions for small-sample exploratory techniques for examining the fit of item data to the Rasch model. (Author/JKS)
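
    For context, the model whose fit these diagnostics examine is the standard Rasch formulation (a textbook statement, not quoted from the abstract), in which the probability of a correct response depends only on the difference between person ability and item difficulty:

      \[
        P(X_{vi} = 1 \mid \theta_v, \beta_i)
          \;=\; \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)},
      \]

    where theta_v is the ability of person v and beta_i the difficulty of item i; fit diagnostics compare observed response patterns with those the model predicts.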

  11. AERIS : eco-driving application development and testing.

    DOT National Transportation Integrated Search

    2012-06-01

    This exploratory study investigates the potential of developing an Eco-Driving application that utilizes an eco-cruise control (ECC) system within state-of-the-art car-following models. The research focuses on integrating predictive cruise control an...

  12. Housing conditions influence motor functions and exploratory behavior following focal damage of the rat brain.

    PubMed

    Gornicka-Pawlak, Elzbieta; Jabłońska, Anna; Chyliński, Andrzej; Domańska-Janik, Krystyna

    2009-01-01

    The present study investigated the influence of housing conditions on motor function recovery and exploratory behavior following ouabain-induced focal brain lesion in the rat. During the 30-day post-surgery period, rats were housed individually in standard cages (IS) or in groups in an enriched environment (EE) and were behaviorally tested. The EE lesioned rats showed enhanced recovery from motor impairments in the walking beam task, compared with the IS animals. Contrarily, in the open field the IS rats (both lesioned and control) traveled a longer distance, showed less habituation and spent less time resting at the home base than the EE animals. Unlike the EE lesioned animals, the lesioned IS rats presented a tendency toward hyperactivity in the post-injury period. Turning tendency was significantly affected by unilateral brain lesion only in the EE rats. We can conclude that housing conditions distinctly affected the rats' behavior in classical laboratory tests.

  13. Improving the Context Supporting Quality Improvement in a Neonatal Intensive Care Unit Quality Collaborative: An Exploratory Field Study.

    PubMed

    Grooms, Heather R; Froehle, Craig M; Provost, Lloyd P; Handyside, James; Kaplan, Heather C

    Successful quality improvement (QI) requires a supportive context. The goal was to determine whether a structured curriculum could help QI teams improve the context supporting their QI work. An exploratory field study was conducted of 43 teams participating in a neonatal intensive care unit QI collaborative. Using a curriculum based on the Model for Understanding Success in Quality, teams identified gaps in their context and tested interventions to modify context. Surveys and self-reflective journals were analyzed to understand how teams developed changes to modify context. More than half (55%) targeted contextual improvements within the microsystem, focusing on motivation and culture. "Information sharing" interventions to communicate information about the project as a strategy to engage more staff were the most common interventions tested. Further study is needed to determine if efforts to modify context consistently lead to greater outcome improvements.

  14. Blended learning in K-12 mathematics and science instruction -- An exploratory study

    NASA Astrophysics Data System (ADS)

    Schmidt, Jason

    Blended learning has developed into a hot topic in education over the past several years. Flipped classrooms, online learning environments, and the use of technology to deliver educational content using rich media continue to garner national attention. While generally well accepted and researched in post-secondary education, not much research has focused on blended learning in elementary, middle, and high schools. This thesis is an exploratory study to begin to determine if students and teachers like blended learning and whether or not it affects the amount of time they spend in math and science. Standardized achievement test data were also analyzed to determine if blended learning had any effect on test scores. Based on student and teacher surveys, this population seems to like blended learning and to work more efficiently in this environment. There is no evidence from this study to support any effect on student achievement.

  15. Orbit attitude processor. STS-1 bench program verification test plan

    NASA Technical Reports Server (NTRS)

    Mcclain, C. R.

    1980-01-01

    A plan for the static verification of the STS-1 ATT PROC ORBIT software requirements is presented. The orbit version of the SAPIENS bench program is used to generate the verification data. A brief discussion of the simulation software and flight software modules is presented along with a description of the test cases.

  16. Pettit performs the EPIC Card Testing and X2R10 Software Transition

    NASA Image and Video Library

    2011-12-28

    ISS030-E-022574 (28 Dec. 2011) -- NASA astronaut Don Pettit (foreground), Expedition 30 flight engineer, performs the Enhanced Processor and Integrated Communications (EPIC) card testing and X2R10 software transition. The software transition work will include EPIC card testing and card installations, and monitoring of the upgraded Multiplexer/Demultiplexer (MDM) computers. Dan Burbank, Expedition 30 commander, is setting up a camcorder in the background.

  17. Pettit performs the EPIC Card Testing and X2R10 Software Transition

    NASA Image and Video Library

    2011-12-28

    ISS030-E-022575 (28 Dec. 2011) -- NASA astronaut Don Pettit (foreground), Expedition 30 flight engineer, performs the Enhanced Processor and Integrated Communications (EPIC) card testing and X2R10 software transition. The software transition work will include EPIC card testing and card installations, and monitoring of the upgraded Multiplexer/Demultiplexer (MDM) computers. Dan Burbank, Expedition 30 commander, is setting up a camcorder in the background.

  18. Tank Monitor and Control System (TMACS) Rev 11.0 Acceptance Test Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HOLM, M.J.

    The purpose of this document is to describe tests performed to validate Revision 11 of the TMACS Monitor and Control System (TMACS) and verify that the software functions as intended by design. This document is intended to test the software portion of TMACS. The tests will be performed on the development system. The software to be tested is the TMACS knowledge bases (KB) and the I/O driver/services. The development system will not be talking to field equipment; instead, the field equipment is simulated using emulators or multiplexers in the lab.

  19. Computational Simulations and the Scientific Method

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  20. Developing Software to “Track and Catch” Missed Follow-up of Abnormal Test Results in a Complex Sociotechnical Environment

    PubMed Central

    Smith, M.; Murphy, D.; Laxmisan, A.; Sittig, D.; Reis, B.; Esquivel, A.; Singh, H.

    2013-01-01

    Summary Background Abnormal test results do not always receive timely follow-up, even when providers are notified through electronic health record (EHR)-based alerts. High workload, alert fatigue, and other demands on attention disrupt a provider’s prospective memory for tasks required to initiate follow-up. Thus, EHR-based tracking and reminding functionalities are needed to improve follow-up. Objectives The purpose of this study was to develop a decision-support software prototype enabling individual and system-wide tracking of abnormal test result alerts lacking follow-up, and to conduct formative evaluations, including usability testing. Methods We developed a working prototype software system, the Alert Watch And Response Engine (AWARE), to detect abnormal test result alerts lacking documented follow-up, and to present context-specific reminders to providers. Development and testing took place within the VA’s EHR and focused on four cancer-related abnormal test results. Design concepts emphasized mitigating the effects of high workload and alert fatigue while being minimally intrusive. We conducted a multifaceted formative evaluation of the software, addressing fit within the larger socio-technical system. Evaluations included usability testing with the prototype and interview questions about organizational and workflow factors. Participants included 23 physicians, 9 clinical information technology specialists, and 8 quality/safety managers. Results Evaluation results indicated that our software prototype fit within the technical environment and clinical workflow, and physicians were able to use it successfully. Quality/safety managers reported that the tool would be useful in future quality assurance activities to detect patients who lack documented follow-up. Additionally, we successfully installed the software on the local facility’s “test” EHR system, thus demonstrating technical compatibility. Conclusion To address the factors involved in missed test results, we developed a software prototype to account for technical, usability, organizational, and workflow needs. Our evaluation has shown the feasibility of the prototype as a means of facilitating better follow-up for cancer-related abnormal test results. PMID:24155789

  1. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.

    1987-01-01

    The Software Automation, Generation and Administration (SAGA) project is investigating the design and construction of practical software engineering environments for developing and maintaining aerospace systems and applications software. The research includes the practical organization of the software lifecycle, configuration management, software requirements specifications, executable specifications, design methodologies, programming, verification, validation and testing, version control, maintenance, the reuse of software, software libraries, documentation, and automated management.

  2. Self-concept in institutionalized children with disturbed attachment: The mediating role of exploratory behaviours.

    PubMed

    Vacaru, V S; Sterkenburg, P S; Schuengel, C

    2018-05-01

    Self-concept is seen as both an outcome of sociocognitive and emotional development, and a factor in social and mental health outcomes. Although the contribution of attachment experiences to self-concept has been limited to quality of primary attachment relationships, little is known of the effects of disturbed attachment on self-concept in institutionalized children. Thus, the current study examined associations between disturbed attachment behaviours in institutionalized children and self-concept, testing limited exploration as an explanatory factor. Thirty-three institutionalized children, aged 4-12, participated in a multimethod and multi-informant assessment of disturbed attachment behaviours (i.e., Disturbances of Attachment Interview and Behavioral Signs of Disturbed Attachment in Young Children), self-concept (i.e., Pictorial Scale of Perceived Competence and Social Acceptance for Young Children), and exploratory behaviours (i.e., Student Exploratory Behaviours Observation Scale). Analyses were conducted using bootstrapping techniques. Global self-concept converged with teacher-rated children's self-concept, except for physical competence domain. Disturbed attachment behaviours were identified in 62.5% of the children, and this was associated with lower levels of exploration and lower scores on self-concept, compared with children without disturbed attachment behaviours. Furthermore, exploratory behaviours mediated the effects of disturbed attachment behaviours on self-concept. Institution-reared children with disturbed attachment behaviours were likely to have a negative perception of self and one's own competences. Limited exploratory behaviours explained this linkage. Targeting disordered attachment in children reared in institutions and their caregivers should become a high priority as a means for preventing socioemotional development issues. © 2017 John Wiley & Sons Ltd.

  3. Designing Control System Application Software for Change

    NASA Technical Reports Server (NTRS)

    Boulanger, Richard

    2001-01-01

    The Unified Modeling Language (UML) was used to design the Environmental Systems Test Stand (ESTS) control system software. The UML was chosen for its ability to facilitate a clear dialog between software designer and customer, from which requirements are discovered and documented in a manner that transposes directly to program objects. Applying the UML to control system software design has resulted in a baseline set of documents from which change, and the effort of that change, can be accurately measured. As the Environmental Systems Test Stand evolves, accurate estimates of the time and effort required to change the control system software will be made. Accurate quantification of the cost of software change can be made before implementation, improving schedule and budget accuracy.

  4. Workstation-Based Avionics Simulator to Support Mars Science Laboratory Flight Software Development

    NASA Technical Reports Server (NTRS)

    Henriquez, David; Canham, Timothy; Chang, Johnny T.; McMahon, Elihu

    2008-01-01

    The Mars Science Laboratory developed the WorkStation TestSet (WSTS) to support flight software development. The WSTS is the non-real-time flight avionics simulator that is designed to be completely software-based and run on a workstation class Linux PC. This provides flight software developers with their own virtual avionics testbed and allows device-level and functional software testing when hardware testbeds are either not yet available or have limited availability. The WSTS has successfully off-loaded many flight software development activities from the project testbeds. At the writing of this paper, the WSTS has averaged an order of magnitude more usage than the project's hardware testbeds.

  5. Integrated testing and verification system for research flight software design document

    NASA Technical Reports Server (NTRS)

    Taylor, R. N.; Merilatt, R. L.; Osterweil, L. J.

    1979-01-01

    The NASA Langley Research Center is developing the MUST (Multipurpose User-oriented Software Technology) program to cut the cost of producing research flight software through a system of software support tools. The HAL/S language is the primary subject of the design. Boeing Computer Services Company (BCS) has designed an integrated verification and testing capability as part of MUST. Documentation, verification and test options are provided with special attention to real-time, multiprocessing issues. The needs of the entire software production cycle have been considered, with effective management and reduced lifecycle costs as foremost goals. Capabilities have been included in the design for static detection of data flow anomalies involving communicating concurrent processes. Some types of ill-formed process synchronization and deadlock are also detected statically.

  6. 78 FR 1162 - Cardiovascular Devices; Reclassification of External Cardiac Compressor

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-08

    ... safety and electromagnetic compatibility; For devices containing software, software verification... electromagnetic compatibility; For devices containing software, software verification, validation, and hazard... electrical components, appropriate analysis and testing must validate electrical safety and electromagnetic...

  7. The Development and Psychometric Properties of the Immigration Law Concerns Scale (ILCS) for HIV Testing.

    PubMed

    Lechuga, Julia; Galletly, Carol L; Broaddus, Michelle R; Dickson-Gomez, Julia B; Glasman, Laura R; McAuliffe, Timothy L; Vega, Miriam Y; LeGrand, Sarah; Mena, Carla A; Barlow, Morgan L; Valera, Erik; Montenegro, Judith I

    2017-11-08

    To develop, pilot test, and conduct psychometric analyses of an innovative scale measuring the influence of perceived immigration laws on Latino migrants' HIV-testing behavior. The Immigration Law Concerns Scale (ILCS) was developed in three phases: Phase 1 involved a review of law and literature, generation of scale items, consultation with project advisors, and subsequent revision of the scale. Phase 2 involved systematic translation-back-translation and consensus-based editorial processes conducted by members of a bilingual and multi-national study team. In Phase 3, 339 sexually active, HIV-negative Spanish-speaking, non-citizen Latino migrant adults (both documented and undocumented) completed the scale via audio computer-assisted self-interview. The psychometric properties of the scale were tested with exploratory factor analysis and estimates of reliability coefficients were generated. Bivariate correlations were conducted to test the discriminant and predictive validity of identified factors. Exploratory factor analysis revealed a three-factor, 17-item scale. Subscale reliability ranged from 0.72 to 0.79. There were significant associations between the ILCS and the HIV-testing behaviors of participants. Results of the pilot test and psychometric analysis of the ILCS are promising. The scale is reliable and significantly associated with the HIV-testing behaviors of participants. Subscales related to unwanted government attention and concerns about meeting moral character requirements should be refined.
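
    As a rough illustration of the exploratory factor analysis and reliability steps described above, the following Python sketch runs a three-factor EFA and computes Cronbach's alpha on synthetic item responses; the data, item names and subscale grouping are invented and do not come from the ILCS study.

        import numpy as np
        import pandas as pd
        from factor_analyzer import FactorAnalyzer  # third-party EFA package

        # Synthetic stand-in for 339 respondents answering 17 Likert-type items.
        rng = np.random.default_rng(1)
        items = pd.DataFrame(rng.integers(1, 6, size=(339, 17)),
                             columns=[f"item_{i}" for i in range(1, 18)])

        # Three-factor solution with an oblique rotation (common when factors correlate).
        fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
        fa.fit(items)
        print(pd.DataFrame(fa.loadings_, index=items.columns).round(2))

        def cronbach_alpha(df: pd.DataFrame) -> float:
            """Classic internal-consistency estimate for one subscale."""
            k = df.shape[1]
            return k / (k - 1) * (1 - df.var(ddof=1).sum() / df.sum(axis=1).var(ddof=1))

        print("alpha (hypothetical 6-item subscale):",
              round(cronbach_alpha(items.iloc[:, :6]), 2))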

  8. Practical Issues in Implementing Software Reliability Measurement

    NASA Technical Reports Server (NTRS)

    Nikora, Allen P.; Schneidewind, Norman F.; Everett, William W.; Munson, John C.; Vouk, Mladen A.; Musa, John D.

    1999-01-01

    Many ways of estimating software systems' reliability, or reliability-related quantities, have been developed over the past several years. Of particular interest are methods that can be used to estimate a software system's fault content prior to test, or to discriminate between components that are fault-prone and those that are not. The results of these methods can be used to: 1) More accurately focus scarce fault identification resources on those portions of a software system most in need of it. 2) Estimate and forecast the risk of exposure to residual faults in a software system during operation, and develop risk and safety criteria to guide the release of a software system to fielded use. 3) Estimate the efficiency of test suites in detecting residual faults. 4) Estimate the stability of the software maintenance process.
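
    As a minimal illustration of the second use above, discriminating fault-prone components from those that are not, the sketch below fits a logistic-regression classifier to synthetic per-module metrics; the metrics, data and model choice are assumptions for illustration, not the methods surveyed in the paper.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 200
        # Synthetic per-module metrics: lines of code, cyclomatic complexity, change churn.
        X = np.column_stack([
            rng.integers(50, 2000, n),
            rng.integers(1, 60, n),
            rng.integers(0, 40, n),
        ]).astype(float)
        # Synthetic ground truth: complexity and churn drive fault-proneness.
        p = 1.0 / (1.0 + np.exp(-(0.03 * X[:, 1] + 0.05 * X[:, 2] - 2.0)))
        y = rng.binomial(1, p)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        print("hold-out accuracy:", round(clf.score(X_test, y_test), 2))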

  9. Hardware and Software Integration to Support Real-Time Space Link Emulation

    NASA Technical Reports Server (NTRS)

    Murawski, Robert; Bhasin, Kul; Bittner, David; Sweet, Aaron; Coulter, Rachel; Schwab, Devin

    2012-01-01

    Prior to operational use, communications hardware and software must be thoroughly tested and verified. In space-link communications, field testing equipment can be prohibitively expensive and cannot test to non-ideal situations. In this paper, we show how software and hardware emulation tools can be used to accurately model the characteristics of a satellite communication channel in a lab environment. We describe some of the challenges associated with developing an emulation lab and present results to demonstrate the channel modeling. We then show how network emulation software can be used to extend a hardware emulation model without requiring additional network and channel simulation hardware.
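
    One inexpensive way to reproduce the kind of lab-based channel modeling described above is the Linux netem queueing discipline; the short Python sketch below wraps the standard tc command to impose delay, jitter and loss on a test interface. The interface name and channel numbers are placeholders, and this is not the authors' emulation tool.

        import subprocess

        IFACE = "eth0"  # hypothetical lab test interface; requires root privileges

        def set_channel(delay_ms=250, jitter_ms=10, loss_pct=0.5):
            """Apply a netem qdisc approximating a noisy geostationary space link."""
            subprocess.run(
                ["tc", "qdisc", "replace", "dev", IFACE, "root", "netem",
                 "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
                 "loss", f"{loss_pct}%"],
                check=True)

        def clear_channel():
            """Remove the emulated impairments and restore the interface."""
            subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)

        if __name__ == "__main__":
            set_channel()      # emulate the link
            # ... run end-to-end protocol tests across IFACE here ...
            clear_channel()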

  10. Hardware and Software Integration to Support Real-Time Space-Link Emulation

    NASA Technical Reports Server (NTRS)

    Murawski, Robert; Bhasin, Kul; Bittner, David

    2012-01-01

    Prior to operational use, communications hardware and software must be thoroughly tested and verified. In space-link communications, field testing equipment can be prohibitively expensive and cannot test to non-ideal situations. In this paper, we show how software and hardware emulation tools can be used to accurately model the characteristics of a satellite communication channel in a lab environment. We describe some of the challenges associated with developing an emulation lab and present results to demonstrate the channel modeling. We then show how network emulation software can be used to extend a hardware emulation model without requiring additional network and channel simulation hardware.

  11. The use of applied software for the professional training of students studying humanities

    NASA Astrophysics Data System (ADS)

    Sadchikova, A. S.; Rodin, M. M.

    2017-01-01

    Research practice is an integral part of humanities students' training. The training process should therefore include modern information technologies for students studying humanities. This paper examines the most popular applied software products used for data processing in social science. For testing purposes we selected the most commonly preferred professional packages: MS Excel, IBM SPSS Statistics, STATISTICA, and STADIA. Moreover, the article contains test results for the specialized software Prikladnoy Sotsiolog, which is applicable to the preparation stage of the research. The specialized software was tested during one term in groups of students studying humanities.

  12. STGT program: Ada coding and architecture lessons learned

    NASA Technical Reports Server (NTRS)

    Usavage, Paul; Nagurney, Don

    1992-01-01

    STGT (Second TDRSS Ground Terminal) is currently halfway through the System Integration Test phase (Level 4 Testing). To date, many software architecture and Ada language issues have been encountered and solved. This paper, which is the transcript of a presentation at the 3 Dec. meeting, attempts to define these lessons plus others learned regarding software project management and risk management issues, training, performance, reuse, and reliability. Observations are included regarding the use of particular Ada coding constructs, software architecture trade-offs during the prototyping, development and testing stages of the project, and dangers inherent in parallel or concurrent systems, software, hardware, and operations engineering.

  13. Software Reliability Analysis of NASA Space Flight Software: A Practical Experience

    PubMed Central

    Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S.; Mcginnis, Issac

    2017-01-01

    In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning across all software life-cycle phases. We also find that the software releases were developed and tested for four different hardware platforms, spanning from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the causes for the same. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Also such releases seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. Such releases exhibit poor reliability growth, and hence exhibit high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to the management to improve the software development process. As NASA has moved towards a product line engineering for its flight software development, software for future space missions will be developed in a similar manner and hence the analysis results for this mission can be considered as a baseline for future flight software missions. PMID:29278255
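
    For readers unfamiliar with SRGM fitting, the sketch below fits the delayed S-shaped NHPP mean-value function m(t) = a(1 - (1 + bt)e^(-bt)) to cumulative defect counts with least squares; the defect data are invented, since the mission data analysed in the paper are not reproduced here.

        import numpy as np
        from scipy.optimize import curve_fit

        def m_s_shaped(t, a, b):
            """Delayed S-shaped SRGM: expected cumulative faults found by time t."""
            return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

        t = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)          # test weeks
        defects = np.array([2, 7, 15, 26, 35, 42, 47, 50, 52, 53], dtype=float)

        (a_hat, b_hat), _ = curve_fit(m_s_shaped, t, defects, p0=[60.0, 0.5])
        print(f"estimated total faults a = {a_hat:.1f}, growth rate b = {b_hat:.2f}")
        print(f"predicted residual faults: {a_hat - defects[-1]:.1f}")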

  14. Advanced communications technology satellite high burst rate link evaluation terminal power control and rain fade software test plan, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1993-01-01

    The Power Control and Rain Fade Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenters terminal to communicate with the ACTS for various experiments by government, university, and industry agencies. The Power Control and Rain Fade Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. The Power Control and Rain Fade Software automatically controls the LET uplink power to compensate for signal fades. Besides power augmentation, the C&PM Software system is also responsible for instrument control during HBR-LET experiments, control of the Intermediate Frequency Switch Matrix on board the ACTS to yield a desired path through the spacecraft payload, and data display. The Power Control and Rain Fade Software User's Guide, Version 1.0 outlines the commands and procedures to install and operate the Power Control and Rain Fade Software. The Power Control and Rain Fade Software Maintenance Manual, Version 1.0 is a programmer's guide to the Power Control and Rain Fade Software. This manual details the current implementation of the software from a technical perspective. Included is an overview of the Power Control and Rain Fade Software, computer algorithms, format representations, and computer hardware configuration. The Power Control and Rain Fade Test Plan provides a step-by-step procedure to verify the operation of the software using a predetermined signal fade event. The Test Plan also provides a means to demonstrate the capability of the software.

  15. Software Reliability Analysis of NASA Space Flight Software: A Practical Experience.

    PubMed

    Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S; Mcginnis, Issac

    2016-01-01

    In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning across all software life-cycle phases. We also find that the software releases were developed and tested for four different hardware platforms, spanning from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the causes for the same. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Also such releases seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. Such releases exhibit poor reliability growth, and hence exhibit high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to the management to improve the software development process. As NASA has moved towards a product line engineering for its flight software development, software for future space missions will be developed in a similar manner and hence the analysis results for this mission can be considered as a baseline for future flight software missions.

  16. SamuROI, a Python-Based Software Tool for Visualization and Analysis of Dynamic Time Series Imaging at Multiple Spatial Scales.

    PubMed

    Rueckl, Martin; Lenzi, Stephen C; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W

    2017-01-01

    The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales.

  17. ObsPy: A Python toolbox for seismology - Current state, applications, and ecosystem around it

    NASA Astrophysics Data System (ADS)

    Lecocq, Thomas; Megies, Tobias; Krischer, Lion; Sales de Andrade, Elliott; Barsch, Robert; Beyreuther, Moritz

    2016-04-01

    ObsPy (http://www.obspy.org) is a community-driven, open-source project offering a bridge for seismology into the scientific Python ecosystem. It provides read and write support for essentially all commonly used waveform, station, and event metadata formats with a unified interface; a comprehensive signal processing toolbox tuned to the needs of seismologists; integrated access to all large data centers, web services and databases; and convenient wrappers to third party codes like libmseed and evalresp. Python, in contrast to many other languages and tools, is simple enough to enable an exploratory and interactive coding style desired by many scientists. At the same time it is a full-fledged programming language usable by software engineers to build complex and large programs. This combination makes it very suitable for use in seismology where research code often has to be translated to stable and production ready environments. It furthermore offers many freely available high quality scientific modules covering most needs in developing scientific software. ObsPy has been in constant development for more than 5 years and nowadays enjoys a large rate of adoption in the community with thousands of users. Successful applications include time-dependent and rotational seismology, big data processing, event relocations, and synthetic studies about attenuation kernels and full-waveform inversions to name a few examples. Additionally it sparked the development of several more specialized packages slowly building a modern seismological ecosystem around it. This contribution will give a short introduction and overview of ObsPy and highlight a number of use cases and software built around it. We will furthermore discuss the issue of sustainability of scientific software.
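
    A minimal usage sketch of the published ObsPy interface follows; the network, station and time window are arbitrary examples chosen for illustration, and fetching the data requires network access to the FDSN service.

        from obspy import UTCDateTime
        from obspy.clients.fdsn import Client

        # Fetch one hour of broadband data from a public FDSN data center.
        client = Client("IRIS")
        t0 = UTCDateTime("2015-01-01T00:00:00")
        st = client.get_waveforms(network="IU", station="ANMO", location="00",
                                  channel="BHZ", starttime=t0, endtime=t0 + 3600)

        # Basic processing with the built-in signal toolbox.
        st.detrend("linear")
        st.filter("bandpass", freqmin=0.5, freqmax=5.0)

        print(st)   # summary of the Stream object
        st.plot()   # quick interactive look, in the exploratory style described above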

  18. ObsPy: A Python toolbox for seismology - Current state, applications, and ecosystem around it

    NASA Astrophysics Data System (ADS)

    Krischer, L.; Megies, T.; Sales de Andrade, E.; Barsch, R.; Beyreuther, M.

    2015-12-01

    ObsPy (http://www.obspy.org) is a community-driven, open-source project offering a bridge for seismology into the scientific Python ecosystem. It provides read and write support for essentially all commonly used waveform, station, and event metadata formats with a unified interface, a comprehensive signal processing toolbox tuned to the needs of seismologists, integrated access to all large data centers, web services and databases, and convenient wrappers to third party codes like libmseed and evalresp. Python, in contrast to many other languages and tools, is simple enough to enable an exploratory and interactive coding style desired by many scientists. At the same time it is a full-fledged programming language usable by software engineers to build complex and large programs. This combination makes it very suitable for use in seismology where research code often has to be translated to stable and production ready environments. It furthermore offers many freely available high quality scientific modules covering most needs in developing scientific software. ObsPy has been in constant development for more than 5 years and nowadays enjoys a large rate of adoption in the community with thousands of users. Successful applications include time-dependent and rotational seismology, big data processing, event relocations, and synthetic studies about attenuation kernels and full-waveform inversions to name a few examples. Additionally it sparked the development of several more specialized packages slowly building a modern seismological ecosystem around it. This contribution will give a short introduction and overview of ObsPy and highlight a number of use cases and software built around it. We will furthermore discuss the issue of sustainability of scientific software.

  19. SamuROI, a Python-Based Software Tool for Visualization and Analysis of Dynamic Time Series Imaging at Multiple Spatial Scales

    PubMed Central

    Rueckl, Martin; Lenzi, Stephen C.; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W.

    2017-01-01

    The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale, (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales. PMID:28706482

  20. Experimental heart failure causes depression-like behavior together with differential regulation of inflammatory and structural genes in the brain.

    PubMed

    Frey, Anna; Popp, Sandy; Post, Antonia; Langer, Simon; Lehmann, Marc; Hofmann, Ulrich; Sirén, Anna-Leena; Hommers, Leif; Schmitt, Angelika; Strekalova, Tatyana; Ertl, Georg; Lesch, Klaus-Peter; Frantz, Stefan

    2014-01-01

    Depression and anxiety are common and independent outcome predictors in patients with chronic heart failure (CHF). However, it is unclear whether CHF causes depression. Thus, we investigated whether mice develop anxiety- and depression-like behavior after induction of ischemic CHF by myocardial infarction (MI). In order to assess depression-like behavior, anhedonia was investigated by repeatedly testing sucrose preference for 8 weeks after coronary artery ligation or sham operation. Mice with large MI and increased left ventricular dimensions on echocardiography (termed CHF mice) showed reduced preference for sucrose, indicating depression-like behavior. 6 weeks after MI, mice were tested for exploratory activity, anxiety-like behavior and cognitive function using the elevated plus maze (EPM), light-dark box (LDB), open field (OF), and object recognition (OR) tests. In the EPM and OF, CHF mice exhibited diminished exploratory behavior and motivation despite similar movement capability. In the OR, CHF mice had reduced preference for novelty and impaired short-term memory. On histology, CHF mice had unaltered overall cerebral morphology. However, analysis of gene expression by RNA-sequencing in prefrontal cortical, hippocampal, and left ventricular tissue revealed changes in genes related to inflammation and cofactors of neuronal signal transduction in CHF mice, with Nr4a1 being dysregulated both in prefrontal cortex and myocardium after MI. After induction of ischemic CHF, mice exhibited anhedonic behavior, decreased exploratory activity and interest in novelty, and cognitive impairment. Thus, ischemic CHF leads to distinct behavioral changes in mice analogous to symptoms observed in humans with CHF and comorbid depression.

  1. Activation of adenosine A(1) receptors alters behavioral and biochemical parameters in hyperthyroid rats.

    PubMed

    Bruno, Alessandra Nejar; Fontella, Fernanda Urruth; Bonan, Carla Denise; Barreto-Chaves, Maria Luiza M; Dalmaz, Carla; Sarkis, João José Freitas

    2006-02-28

    Adenosine acting on A(1) receptors has been related with neuroprotective and neuromodulatory actions, protection against oxidative stress and decrease of anxiety and nociceptive signaling. Previous studies demonstrated an inhibition of the enzymes that hydrolyze ATP to adenosine in the rat central nervous system after hyperthyroidism induction. Manifestations of hyperthyroidism include increased anxiety, nervousness, high O(2) consumption and physical hyperactivity. Here, we investigated the effects of administration of a specific agonist of adenosine A(1) receptor (N(6)-cyclopentyladenosine; CPA) on nociception, anxiety, exploratory response, locomotion and brain oxidative stress of hyperthyroid rats. Hyperthyroidism was induced by daily intraperitoneal injections of l-thyroxine (T4) for 14 days. Nociception was assessed with a tail-flick apparatus and exploratory behavior, locomotion and anxiety were analyzed by open-field and plus-maze tests. We verified the total antioxidant reactivity (TAR), lipid peroxide levels by the thiobarbituric acid reactive species (TBARS) reaction and the free radicals content by the DCF test. Our results demonstrated that CPA reverted the hyperalgesia induced by hyperthyroidism and decreased the exploratory behavior, locomotion and anxiety in hyperthyroid rats. Furthermore, CPA decreased lipid peroxidation in hippocampus and cerebral cortex of control rats and in cerebral cortex of hyperthyroid rats. CPA also increased the total antioxidant reactivity in hippocampus and cerebral cortex of control and hyperthyroid rats, but the production of free radicals verified by the DCF test was changed only in cerebral cortex. These results suggest that some of the hyperthyroidism effects are subjected to regulation by adenosine A(1) receptor, demonstrating the involvement of the adenosinergic system in this pathology.

  2. Modular Rocket Engine Control Software (MRECS)

    NASA Technical Reports Server (NTRS)

    Tarrant, Charlie; Crook, Jerry

    1997-01-01

    The Modular Rocket Engine Control Software (MRECS) Program is a technology demonstration effort designed to advance the state-of-the-art in launch vehicle propulsion systems. Its emphasis is on developing and demonstrating a modular software architecture for a generic, advanced engine control system that will result in lower software maintenance (operations) costs. It effectively accommodates software requirements changes that occur due to hardware technology upgrades and engine development testing. Ground rules directed by MSFC were to optimize modularity and implement the software in the Ada programming language. MRECS system software and the software development environment utilize Commercial-Off-the-Shelf (COTS) products. This paper presents the objectives and benefits of the program. The software architecture, design, and development environment are described. MRECS tasks are defined and timing relationships given. Major accomplishments are listed. MRECS offers benefits to a wide variety of advanced technology programs in the areas of modular software architecture, software reuse, and reduced software reverification time related to software changes. Currently, the program is focused on supporting MSFC in accomplishing a Space Shuttle Main Engine (SSME) hot-fire test at Stennis Space Center and the Low Cost Boost Technology (LCBT) Program.

  3. Ffuzz: Towards full system high coverage fuzz testing on binary executables

    PubMed Central

    2018-01-01

    Bugs and vulnerabilities in binary executables threaten cyber security. Current discovery methods, like fuzz testing, symbolic execution and manual analysis, each have advantages and disadvantages when exercising the deeper code areas in binary executables to find more bugs. In this paper, we designed and implemented a hybrid automatic bug finding tool—Ffuzz—on top of fuzz testing and selective symbolic execution. It targets full system software stack testing including both the user space and kernel space. Combining these two mainstream techniques enables us to achieve higher coverage and avoid getting stuck both in fuzz testing and symbolic execution. We also proposed two key optimizations to improve the efficiency of full system testing. We evaluated the efficiency and effectiveness of our method on real-world binary software and 844 memory corruption vulnerable programs in the Juliet test suite. The results show that Ffuzz can discover software bugs in the full system software stack effectively and efficiently. PMID:29791469
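
    To make the fuzz-testing half of the hybrid approach concrete, here is a deliberately tiny mutational fuzzing loop; the seed input and target binary are placeholders, and the selective symbolic execution that Ffuzz adds on top is not shown.

        import random
        import subprocess
        import tempfile

        SEED = b"GIF89a......"          # hypothetical seed input
        TARGET = "./target_binary"      # hypothetical binary under test

        def mutate(data: bytes) -> bytes:
            """Flip a handful of random bytes in the seed."""
            buf = bytearray(data)
            for _ in range(random.randint(1, 8)):
                buf[random.randrange(len(buf))] = random.randrange(256)
            return bytes(buf)

        for i in range(1000):
            case = mutate(SEED)
            with tempfile.NamedTemporaryFile() as f:
                f.write(case)
                f.flush()
                proc = subprocess.run([TARGET, f.name], capture_output=True)
            if proc.returncode < 0:     # process killed by a signal, e.g. SIGSEGV
                print(f"crash on iteration {i}, signal {-proc.returncode}")
                break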

  4. AN INITIAL EVALUATION OF THE BTRACKS BALANCE PLATE AND SPORTS BALANCE SOFTWARE FOR CONCUSSION DIAGNOSIS

    PubMed Central

    Manyak, Kristin A.; Abdenour, Thomas E.; Rauh, Mitchell J.; Baweja, Harsimran S.

    2016-01-01

    Background As recently dictated by the American Medical Society, balance testing is an important component in the clinical evaluation of concussion. Despite this, previous research on the efficacy of balance testing for concussion diagnosis suggests low sensitivity (∼30%), based primarily on the popular Balance Error Scoring System (BESS). The Balance Tracking System (BTrackS, Balance Tracking Systems Inc., San Diego, CA, USA) consists of a force plate (BTrackS Balance Plate) and software (BTrackS Sport Balance) which can quickly (<2 min) perform concussion balance testing with gold standard accuracy. Purpose The present study aimed to determine the sensitivity of the BTrackS Balance Plate and Sports Balance Software for concussion diagnosis. Study Design Cross-Sectional Study Methods Preseason baseline balance testing of 519 healthy Division I college athletes playing sports with a relatively high risk for concussions was performed with the BTrackS Balance Test. Testing was administered by certified athletic training staff using the BTrackS Balance Plate and Sport Balance software. Of the baselined athletes, 25 later experienced a concussion during the ensuing sport season. Post-injury balance testing was performed on these concussed athletes within 48 hours of injury, and the sensitivity of the BTrackS Balance Plate and Sport Balance software was estimated based on the number of athletes showing a balance decline according to the criteria specified in the Sport Balance software. This criterion is based on the minimal detectable change statistic with a 90% confidence level (i.e. 90% specificity). Results Of the 25 athletes who experienced concussions, 16 had balance declines relative to baseline testing results according to the BTrackS Sport Balance software criteria. This corresponds to an estimated concussion sensitivity of 64%, which is twice as great as that reported previously for the BESS. Conclusions The BTrackS Balance Plate and Sport Balance software has the greatest concussion sensitivity of any balance testing instrument reported to date. Level of Evidence Level 2 (Individual cross sectional diagnostic study) PMID:27104048
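
    The decline criterion described above rests on the minimal detectable change (MDC) statistic; the sketch below applies the standard MDC formula at 90% confidence to invented balance numbers, purely to illustrate the decision rule rather than the BTrackS software's actual implementation.

        import math

        def mdc(sd_baseline: float, test_retest_reliability: float, z: float = 1.645) -> float:
            """MDC at 90% confidence: z * SEM * sqrt(2), with SEM = SD * sqrt(1 - reliability)."""
            sem = sd_baseline * math.sqrt(1.0 - test_retest_reliability)
            return z * sem * math.sqrt(2.0)

        baseline_sway_cm = 22.0      # hypothetical preseason sway path length
        post_injury_sway_cm = 31.0   # hypothetical post-concussion sway path length
        threshold = mdc(sd_baseline=4.0, test_retest_reliability=0.83)

        if post_injury_sway_cm - baseline_sway_cm > threshold:
            print(f"balance decline exceeds MDC ({threshold:.1f} cm): flag for follow-up")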

  5. The NOvA software testing framework

    NASA Astrophysics Data System (ADS)

    Tamsett, M.; C Group

    2015-12-01

    The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. NOvA has already produced more than one million Monte Carlo and detector generated files amounting to more than 1 PB in size. This data is divided between a number of parallel streams such as far and near detector beam spills, cosmic ray backgrounds, a number of data-driven triggers and over 20 different Monte Carlo configurations. Each of these data streams must be processed through the appropriate steps of the rapidly evolving, multi-tiered, interdependent NOvA software framework. In total there are greater than 12 individual software tiers, each of which performs a different function and can be configured differently depending on the input stream. In order to regularly test and validate that all of these software stages are working correctly NOvA has designed a powerful, modular testing framework that enables detailed validation and benchmarking to be performed in a fast, efficient and accessible way with minimal expert knowledge. The core of this system is a novel series of python modules which wrap, monitor and handle the underlying C++ software framework and then report the results to a slick front-end web-based interface. This interface utilises modern, cross-platform, visualisation libraries to render the test results in a meaningful way. They are fast and flexible, allowing for the easy addition of new tests and datasets. In total upwards of 14 individual streams are regularly tested amounting to over 70 individual software processes, producing over 25 GB of output files. The rigour enforced through this flexible testing framework enables NOvA to rapidly verify configurations, results and software and thus ensure that data is available for physics analysis in a timely and robust manner.
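
    The wrap-monitor-report pattern described above can be pictured with the following hedged sketch; the tier command, module layout and dashboard endpoint are invented placeholders rather than the NOvA framework's real interfaces.

        import json
        import subprocess
        import time
        import urllib.request

        def run_tier(name: str, command: list) -> dict:
            """Run one software tier, capturing status, wall time and a log tail."""
            start = time.time()
            proc = subprocess.run(command, capture_output=True, text=True)
            return {
                "tier": name,
                "ok": proc.returncode == 0,
                "seconds": round(time.time() - start, 1),
                "log_tail": proc.stdout[-2000:],
            }

        # Placeholder tier command; the real framework wraps its own C++ executables.
        results = [run_tier("calibration", ["./run_tier.sh", "calibration", "input.root"])]

        req = urllib.request.Request(
            "http://validation.example.org/api/results",   # placeholder dashboard endpoint
            data=json.dumps(results).encode(),
            headers={"Content-Type": "application/json"})
        # urllib.request.urlopen(req)  # uncomment to post results to the web front end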

  6. Using software agents to preserve individual health data confidentiality in micro-scale geographical analyses.

    PubMed

    Kamel Boulos, Maged N; Cai, Qiang; Padget, Julian A; Rushton, Gerard

    2006-04-01

    Confidentiality constraints often preclude the release of disaggregate data about individuals, which limits the types and accuracy of the results of geographical health analyses that could be done. Access to individually geocoded (disaggregate) data often involves lengthy and cumbersome procedures through review boards and committees for approval (and sometimes is not possible). Moreover, current data confidentiality-preserving solutions compatible with fine-level spatial analyses either lack flexibility or yield less than optimal results (because of confidentiality-preserving changes they introduce to disaggregate data), or both. In this paper, we present a simulation case study to illustrate how some analyses cannot be (or will suffer if) done on aggregate data. We then quickly review some existing data confidentiality-preserving techniques, and move on to explore a solution based on software agents with the potential of providing flexible, controlled (software-only) access to unmodified confidential disaggregate data and returning only results that do not expose any person-identifiable details. The solution is thus appropriate for micro-scale geographical analyses where no person-identifiable details are required in the final results (i.e., only aggregate results are needed). Our proposed software agent technique also enables post-coordinated analyses to be designed and carried out on the confidential database(s), as needed, compared to a more conventional solution based on the Web Services model that would only support a rigid, pre-coordinated (pre-determined) and rather limited set of analyses. The paper also provides an exploratory discussion of mobility, security, and trust issues associated with software agents, as well as possible directions/solutions to address these issues, including the use of virtual organizations. Successful partnerships between stakeholder organizations, proper collaboration agreements, clear policies, and unambiguous interpretations of laws and regulations are also much needed to support and ensure the success of any technological solution.

  7. Modeling strategic use of human computer interfaces with novel hidden Markov models

    PubMed Central

    Mariano, Laura J.; Poore, Joshua C.; Krum, David M.; Schwartz, Jana L.; Coskren, William D.; Jones, Eric M.

    2015-01-01

    Immersive software tools are virtual environments designed to give their users an augmented view of real-world data and ways of manipulating that data. As virtual environments, every action users make while interacting with these tools can be carefully logged, as can the state of the software and the information it presents to the user, giving these actions context. This data provides a high-resolution lens through which dynamic cognitive and behavioral processes can be viewed. In this report, we describe new methods for the analysis and interpretation of such data, utilizing a novel implementation of the Beta Process Hidden Markov Model (BP-HMM) for analysis of software activity logs. We further report the results of a preliminary study designed to establish the validity of our modeling approach. A group of 20 participants were asked to play a simple computer game, instrumented to log every interaction with the interface. Participants had no previous experience with the game's functionality or rules, so the activity logs collected during their naïve interactions capture patterns of exploratory behavior and skill acquisition as they attempted to learn the rules of the game. Pre- and post-task questionnaires probed for self-reported styles of problem solving, as well as task engagement, difficulty, and workload. We jointly modeled the activity log sequences collected from all participants using the BP-HMM approach, identifying a global library of activity patterns representative of the collective behavior of all the participants. Analyses show systematic relationships between both pre- and post-task questionnaires, self-reported approaches to analytic problem solving, and metrics extracted from the BP-HMM decomposition. Overall, we find that this novel approach to decomposing unstructured behavioral data within software environments provides a sensible means for understanding how users learn to integrate software functionality for strategic task pursuit. PMID:26191026

  8. RELAP-7 Software Verification and Validation Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Curtis L.; Choi, Yong-Joon; Zou, Ling

    This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on the INL's modern scientific software development framework – MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's capability and extends the analysis capability for all reactor system simulation scenarios.

  9. NDAS Hardware Translation Layer Development

    NASA Technical Reports Server (NTRS)

    Nazaretian, Ryan N.; Holladay, Wendy T.

    2011-01-01

    The NASA Data Acquisition System (NDAS) project aims to replace all DAS software for NASA's Rocket Testing Facilities. There must be a software-hardware translation layer so the software can properly talk to the hardware. Since the hardware from each test stand varies, drivers for each stand have to be made. These drivers will act more like plugins for the software. If the software is being used in E3, then the software should point to the E3 driver package. If the software is being used at B2, then the software should point to the B2 driver package. The driver packages should also be filled with hardware drivers that are universal to the DAS system. For example, since A1, A2, and B2 all use the Preston 8300AU signal conditioners, the driver for those three stands should be the same and updated collectively.
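
    The per-stand driver-package idea can be sketched as a simple plugin registry; the class and method names below are invented for illustration, while the stand names and the Preston 8300AU example come from the abstract.

        from abc import ABC, abstractmethod

        class SignalConditionerDriver(ABC):
            """Interface every stand-specific driver plugin must implement."""
            @abstractmethod
            def read_channel(self, channel: int) -> float: ...

        class Preston8300AUDriver(SignalConditionerDriver):
            """Shared by stands that use the same hardware (e.g. A1, A2, B2)."""
            def read_channel(self, channel: int) -> float:
                return 0.0  # placeholder for the real hardware read

        # Driver packages keyed by test stand; E3 would swap in its own drivers.
        DRIVER_PACKAGES = {
            "B2": {"signal_conditioner": Preston8300AUDriver},
            "E3": {"signal_conditioner": Preston8300AUDriver},
        }

        def load_drivers(stand: str) -> dict:
            """Point the DAS software at the driver package for the current stand."""
            return {name: cls() for name, cls in DRIVER_PACKAGES[stand].items()}

        drivers = load_drivers("B2")
        print(drivers["signal_conditioner"].read_channel(1))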

  10. Validation of software for calculating the likelihood ratio for parentage and kinship.

    PubMed

    Drábek, J

    2009-03-01

    Although the likelihood ratio is a well-known statistical technique, commercial off-the-shelf (COTS) software products for its calculation are not sufficiently validated to suit general requirements for the competence of testing and calibration laboratories (EN/ISO/IEC 17025:2005 norm) per se. The software in question can be considered critical as it directly weighs the forensic evidence allowing judges to decide on guilt or innocence or to identify person or kin (i.e.: in mass fatalities). For these reasons, accredited laboratories shall validate likelihood ratio software in accordance with the above norm. To validate software for calculating the likelihood ratio in parentage/kinship scenarios I assessed available vendors, chose two programs (Paternity Index and familias) for testing, and finally validated them using tests derived from elaboration of the available guidelines for the field of forensics, biomedicine, and software engineering. MS Excel calculation using known likelihood ratio formulas or peer-reviewed results of difficult paternity cases were used as a reference. Using seven testing cases, it was found that both programs satisfied the requirements for basic paternity cases. However, only a combination of two software programs fulfills the criteria needed for our purpose in the whole spectrum of functions under validation with the exceptions of providing algebraic formulas in cases of mutation and/or silent allele.
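
    As a sanity check of the kind such a validation exercises, the sketch below evaluates the classic single-locus paternity index formulas for a no-mutation trio and multiplies them across loci; genotypes and allele frequencies are invented, and mutation and silent alleles, which the abstract flags as the hard cases, are deliberately ignored.

        def paternity_index(af_homozygous: bool, allele_freq: float) -> float:
            """Classic no-mutation trio formulas when the obligate paternal allele is
            unambiguous: PI = 1/p if the alleged father is homozygous for it,
            PI = 1/(2p) if he is heterozygous."""
            if af_homozygous:
                return 1.0 / allele_freq
            return 1.0 / (2.0 * allele_freq)

        # (alleged father homozygous for the obligate allele?, allele frequency)
        loci = [(False, 0.12), (True, 0.08), (False, 0.21)]

        combined = 1.0
        for hom, p in loci:
            combined *= paternity_index(hom, p)
        print(f"combined paternity index: {combined:.1f}")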

  11. SELECTED ORGANIC POLLUTANT EMISSIONS FROM UNVENTED KEROSENE HEATERS

    EPA Science Inventory

    An exploratory study was performed to assess the semivolatile and nonvolatile organic pollutant emission rates from unvented kerosene space heaters. A well-tuned radiant heater and maltuned convective heater were tested for semivolatile and nonvolatile organic pollutant emiss...

  12. 10 CFR 60.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    .... Site characterization includes borings, surface excavations, excavation of exploratory shafts, limited subsurface lateral excavations and borings, and in situ testing at depth needed to determine the suitability of the site for a geologic repository, but does not include preliminary borings and geophysical...

  13. 10 CFR 60.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    .... Site characterization includes borings, surface excavations, excavation of exploratory shafts, limited subsurface lateral excavations and borings, and in situ testing at depth needed to determine the suitability of the site for a geologic repository, but does not include preliminary borings and geophysical...

  14. 10 CFR 60.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    .... Site characterization includes borings, surface excavations, excavation of exploratory shafts, limited subsurface lateral excavations and borings, and in situ testing at depth needed to determine the suitability of the site for a geologic repository, but does not include preliminary borings and geophysical...

  15. RELAP-7 Software Verification and Validation Plan: Requirements Traceability Matrix (RTM) Part 1 – Physics and numerical methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Yong Joon; Yoo, Jun Soo; Smith, Curtis Lee

    2015-09-01

    This INL plan comprehensively describes the Requirements Traceability Matrix (RTM) for the main physics and numerical methods of RELAP-7. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7.

  16. Expert system verification and validation guidelines/workshop task. Deliverable no. 1: ES V/V guidelines

    NASA Technical Reports Server (NTRS)

    French, Scott W.

    1991-01-01

    The goals are to show that verifying and validating a software system is a required part of software development and has a direct impact on the software's design and structure. Workshop tasks are given in the areas of statistics, integration/system test, unit and architectural testing, and a traffic controller problem.

  17. Proceedings of the Eighth Annual Software Engineering Workshop

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The four major topics of discussion included: the NASA Software Engineering Laboratory, software testing, human factors in software engineering and software quality assessment. As in the past years, there were 12 position papers presented (3 for each topic) followed by questions and very heavy participation by the general audience.

  18. Guidelines for testing and release procedures

    NASA Technical Reports Server (NTRS)

    Molari, R.; Conway, M.

    1984-01-01

    Guidelines and procedures are recommended for the testing and release of the types of computer software efforts commonly performed at NASA/Ames Research Center. All recommendations are based on the premise that testing and release activities must be specifically selected for the environment, size, and purpose of each individual software project. Guidelines are presented for building a Test Plan and using formal Test Plan and Test Case Inspections on it. Frequent references are made to NASA/Ames Guidelines for Software Inspections. Guidelines are presented for selecting an Overall Test Approach and for each of the four main phases of testing: (1) Unit Testing of Components, (2) Integration Testing of Components, (3) System Integration Testing, and (4) Acceptance Testing. Tools used for testing are listed, including those available from operating systems used at Ames, specialized tools which can be developed, unit test drivers, stub module generators, and the use of formal test reporting schemes.
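
    A small example of a unit test driver with a stubbed dependency, in the spirit of the "unit test drivers" and "stub module generators" listed above, might look as follows; the component under test and its calibration source are invented.

        import unittest
        from unittest import mock

        def scale_reading(raw: float, gain: float) -> float:
            """Component under test: convert a raw sensor count to engineering units."""
            return raw * gain

        class ScaleReadingTest(unittest.TestCase):
            def test_nominal_gain(self):
                self.assertAlmostEqual(scale_reading(100.0, 0.5), 50.0)

            def test_with_stubbed_calibration_source(self):
                calib = mock.Mock(return_value=0.25)   # stub stands in for the real module
                self.assertAlmostEqual(scale_reading(8.0, calib()), 2.0)

        if __name__ == "__main__":
            unittest.main()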

  19. Support for Diagnosis of Custom Computer Hardware

    NASA Technical Reports Server (NTRS)

    Molock, Dwaine S.

    2008-01-01

    The Coldfire SDN Diagnostics software is a flexible means of exercising, testing, and debugging custom computer hardware. The software is a set of routines that, collectively, serve as a common software interface through which one can gain access to various parts of the hardware under test and/or cause the hardware to perform various functions. The routines can be used to construct tests to exercise, and verify the operation of, various processors and hardware interfaces. More specifically, the software can be used to gain access to memory, to execute timer delays, to configure interrupts, and configure processor cache, floating-point, and direct-memory-access units. The software is designed to be used on diverse NASA projects, and can be customized for use with different processors and interfaces. The routines are supported, regardless of the architecture of a processor that one seeks to diagnose. The present version of the software is configured for Coldfire processors on the Subsystem Data Node processor boards of the Solar Dynamics Observatory. There is also support for the software with respect to Mongoose V, RAD750, and PPC405 processors or their equivalents.

  20. The Mars Science Laboratory Entry, Descent, and Landing Flight Software

    NASA Technical Reports Server (NTRS)

    Gostelow, Kim P.

    2013-01-01

    This paper describes the design, development, and testing of the EDL program from the perspective of the software engineer. We briefly cover the overall MSL flight software organization, and then the organization of EDL itself. We discuss the timeline, the structure of the GNC code (but not the algorithms as they are covered elsewhere in this conference) and the command and telemetry interfaces. Finally, we cover testing and the influence that testability had on the EDL flight software design.

  1. Top Down Implementation Plan for system performance test software

    NASA Technical Reports Server (NTRS)

    Jacobson, G. N.; Spinak, A.

    1982-01-01

    The top down implementation plan used for the development of system performance test software during the Mark IV-A era is described. The plan is based upon the identification of the hierarchical relationship of the individual elements of the software design, the development of a sequence of functionally oriented demonstrable steps, the allocation of subroutines to the specific step where they are first required, and objective status reporting. The results are: determination of milestones, improved managerial visibility, better project control, and a successful software development.

  2. Enhanced Master Controller Unit Tester

    NASA Technical Reports Server (NTRS)

    Benson, Patricia; Johnson, Yvette; Johnson, Brian; Williams, Philip; Burton, Geoffrey; McCoy, Anthony

    2007-01-01

    The Enhanced Master Controller Unit Tester (EMUT) software is a tool for development and testing of software for a master controller (MC) flight computer. The primary function of the EMUT software is to simulate interfaces between the MC computer and external analog and digital circuitry (including other computers) in a rack of equipment to be used in scientific experiments. The simulations span the range of nominal, off-nominal, and erroneous operational conditions, enabling the testing of MC software before all the equipment becomes available.

  3. Estimation and enhancement of real-time software reliability through mutation analysis

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Offutt, A. J.; Harris, Frederick C., Jr.

    1992-01-01

    A simulation-based technique for obtaining numerical estimates of the reliability of N-version, real-time software is presented. An extended stochastic Petri net is employed to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. Test results utilizing specifications for NASA's planetary lander control software indicate that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions.
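
    Mutation-based testing itself can be shown with a toy example, separate from the paper's Petri-net simulation: inject a single operator mutation and check whether the test suite kills it.

        import types

        SOURCE = "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))\n"
        MUTANT = SOURCE.replace("min", "max", 1)   # one deliberate operator mutation

        def load(src: str) -> types.ModuleType:
            """Compile a source string into a throwaway module object."""
            mod = types.ModuleType("candidate")
            exec(src, mod.__dict__)
            return mod

        def run_tests(module) -> bool:
            """Tiny stand-in test suite; a mutant 'survives' if all tests still pass."""
            try:
                assert module.clamp(5, 0, 10) == 5
                assert module.clamp(15, 0, 10) == 10
                assert module.clamp(-3, 0, 10) == 0
                return True
            except AssertionError:
                return False

        print("original passes:", run_tests(load(SOURCE)))     # expected: True
        print("mutant killed:  ", not run_tests(load(MUTANT)))  # killed => tests are sensitive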

  4. [Confirming the Utility of RAISUS Antifungal Susceptibility Testing by New-Software].

    PubMed

    Ono, Tomoko; Suematsu, Hiroyuki; Sawamura, Haruki; Yamagishi, Yuka; Mikamo, Hiroshige

    2017-08-15

    Clinical and Laboratory Standards Institute (CLSI) methods for susceptibility testing of yeasts are used in Japan. However, these methods have some disadvantages: 1) readings at 24 and 48 h, 2) the use of an unclear scale (approximately 50% inhibition) to determine MICs, and 3) the need to account for trailing growth and paradoxical effects. These make it difficult to test the susceptibility of yeasts. The old RAISUS software (Ver. 6.0 series) resolved problems 1) and 2) but did not resolve problem 3). Recently, the new RAISUS software (Ver. 7.0 series) resolved problem 3). We assessed whether the new software settles all of these issues. Eighty-four Candida isolates from Aichi Medical University were used in this study. We compared the MICs obtained by using the RAISUS antifungal susceptibility testing panel for yeasts, RSMY1, with those obtained by using ASTY. The concordance rates (±four-fold of MICs) between the MICs obtained by using ASTY and RSMY1 with the new software were more than 90%, except for miconazole (MCZ). The rate for MCZ was low, but MICs obtained by using CLSI methods and the Yeast-like Fungus DP 'EIKEN' method (E-DP) were equivalent to those of RSMY1 with the new software. The frequency of skip effects on RSMY1 using the new software markedly decreased relative to RSMY1 using the old software. In cases showing trailing growth, the new RAISUS software made it possible to choose the correct MICs and to flag trailing growth on the result screen. The new RAISUS software enhances its usability and the accuracy of MICs. Using an automatic instrument to determine MICs is useful for obtaining objective results easily.

  5. Power, Avionics and Software - Phase 1.0: Subsystem Integration Test Report

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Sands, Obed S.; Bakula, Casey J.; Oldham, Daniel R.; Wright, Ted; Bradish, Martin A.; Klebau, Joseph M.

    2014-01-01

    This report describes Power, Avionics and Software (PAS) 1.0 subsystem integration testing and test results that occurred in August and September of 2013. This report covers the capabilities of each PAS assembly to meet integration test objectives for non-safety critical, non-flight, non-human-rated hardware and software development. This test report is the outcome of the first integration of the PAS subsystem and is meant to provide data for subsequent designs, development and testing of the future PAS subsystems. The two main objectives were to assess the ability of the PAS assemblies to exchange messages and to perform audio testing of both inbound and outbound channels. This report describes each test performed, defines the test, the data, and provides conclusions and recommendations.

  6. Absorbing Software Testing into the Scrum Method

    NASA Astrophysics Data System (ADS)

    Tuomikoski, Janne; Tervonen, Ilkka

    In this paper we study how to absorb software testing into the Scrum method. We conducted the research as an action research during the years 2007-2008 with three iterations. The results showed that testing can and even should be absorbed into the Scrum method. The testing team was merged into the Scrum teams. The teams can now deliver better working software in a shorter time, because testing keeps track of the progress of the development. Also the team spirit is higher, because the Scrum team members are committed to the same goal. The biggest change from the test manager's point of view was the organized Product Owner Team. The test manager no longer has a separate testing team, and in the future all testing tasks have to be assigned through the Product Backlog.

  7. A test matrix sequencer for research test facility automation

    NASA Technical Reports Server (NTRS)

    Mccartney, Timothy P.; Emery, Edward F.

    1990-01-01

    The hardware and software configuration of a Test Matrix Sequencer, a general purpose test matrix profiler that was developed for research test facility automation at the NASA Lewis Research Center, is described. The system provides set points to controllers and contact closures to data systems during the course of a test. The Test Matrix Sequencer consists of a microprocessor controlled system which is operated from a personal computer. The software program, which is the main element of the overall system, is interactive and menu driven with pop-up windows and help screens. Analog and digital input/output channels can be controlled from a personal computer using the software program. The Test Matrix Sequencer provides more efficient use of aeronautics test facilities by automating repetitive tasks that were once done manually.
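
    The set-point-and-contact-closure profiling described above can be pictured with a small sketch; the timings, values and output functions below are invented placeholders, not the Sequencer's real interfaces.

        import time

        # Each row of the test matrix: (duration in seconds, controller set point,
        # whether the contact closure to the data system is closed).
        TEST_MATRIX = [
            (5.0, 0.0, False),
            (10.0, 25.0, True),
            (10.0, 50.0, True),
            (5.0, 0.0, False),
        ]

        def write_setpoint(value: float):      # placeholder for the analog output channel
            print(f"setpoint -> {value}")

        def set_contact(closed: bool):         # placeholder for the digital output channel
            print(f"contact  -> {'closed' if closed else 'open'}")

        for duration, setpoint, contact in TEST_MATRIX:
            write_setpoint(setpoint)
            set_contact(contact)
            time.sleep(duration)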

  8. CoRoTlog

    NASA Astrophysics Data System (ADS)

    Plasson, Ph.

    2006-11-01

    LESIA, in close cooperation with CNES, DLR and IWF, is responsible for the tests and validation of the CoRoT instrument digital process unit which is made up of the BEX and DPU assembly. The main part of the work consisted of validating the DPU software and testing the BEX/DPU coupling. This work took more than two years due to the central role of the software tested and its technical complexity. The first task, in the validation process, was to carry out the acceptance tests of the DPU software. These tests consisted of checking each of the 325 requirements identified in the URD (User Requirements Document) and were played in a configuration using the DPU coupled to a BEX simulator. During the acceptance tests, all the transversal functionalities of the DPU software, like the TC/TM management, the state machine management, the BEX driving, the system monitoring or the maintenance functionalities were checked in depth. The functionalities associated with the seismology and exoplanetology processing, like the loading of window and mask descriptors or the configuration of the service execution parameters, were also exhaustively tested. After having validated the DPU software against the user requirements using a BEX simulator, the next step consisted of coupling the DPU and the BEX in order to check that the formed unit worked correctly and met the performance requirements. These tests were conducted in two phases: the first one was devoted to the functional aspects and the tests of interface, the second one to the performance aspects. The performance tests were based on the use of the DPU software scientific services and on the use of full images representative of a realistic sky as inputs. These tests were also based on the use of a reference set of windows and parameters, which was provided by the scientific team and was representative, in terms of load and complexity, of the one that could be used during the observation mode of the CoRoT instrument. They were played in a configuration using either a BCC simulator or a real BCC coupled to a video simulator, to feed the BEX/DPU unit. The validation of the scientific algorithms was conducted in parallel to the phase of the BEX/DPU coupling tests. The objective of this phase was to check that the algorithms implemented in the scientific services of the DPU software were in conformity with those specified in the URD and that the obtained numerical precision corresponded to that expected. Forty test cases were defined covering the fine and rough angular error measurement processing, the rejection of bright pixels, the subtraction of the offset and the sky background, the photometry algorithms, the SAA handling and reference image management. For each test case, the LESIA scientific team produced, by simulation, using the model instrument, the dynamic data files and parameter sets used to feed, on the one hand, the DPU and, on the other hand, a model of the onboard software. These data files correspond to FITS images (black windows, star windows, offset windows) containing more or less disturbances and making it possible to test the DPU software in dynamic mode over durations of up to 48 hours. To perform the test and validation activities of the CoRoT instrument digital process unit, a set of software testing tools was developed by LESIA (Software Ground Support Equipment, hereafter "SGSE").
Thanks to their versatility and modularity, these software testing tools were used during all the integration, test and validation activities of the instrument and its subsystems CoRoTCase and CoRoTCam. The CoRoT SGSE were specified, designed and developed by LESIA. The objective was to provide a software system allowing the users (the onboard software validation team, the instrument integration team, etc.) to remotely control and monitor the whole instrument or a single subsystem, such as the DPU coupled to a BEX simulator or the BEX/DPU unit coupled to a BCC simulator. The idea was to be able to interact in real time with the system under test by driving the various EGSE, but also to run test procedures implemented as scripts organized into libraries, to record telemetry and housekeeping data in a database, and to carry out post-mortem analyses.
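
    As a rough illustration of the scripted test procedures and telemetry archiving that the SGSE provide (not the actual CoRoT tooling or packet formats), a procedure runner might look like the sketch below; the egse object, its send_telecommand/read_telemetry methods, and the SQLite schema are assumptions made for the example.

        # Illustrative sketch only: run a scripted test procedure against a unit
        # under test through an EGSE interface and archive the returned telemetry
        # for post-mortem analysis.
        import sqlite3
        import time

        def run_procedure(egse, commands, db_path="tm_archive.db"):
            db = sqlite3.connect(db_path)
            db.execute("CREATE TABLE IF NOT EXISTS tm (t REAL, tc TEXT, tm BLOB)")
            for tc in commands:                 # commands come from a script library
                egse.send_telecommand(tc)       # drive the EGSE in real time
                packet = egse.read_telemetry()  # collect the response packet (bytes)
                db.execute("INSERT INTO tm VALUES (?, ?, ?)",
                           (time.time(), tc, packet))
                db.commit()
            db.close()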

  9. Cleanroom certification model

    NASA Technical Reports Server (NTRS)

    Currit, P. A.

    1983-01-01

    The Cleanroom software development methodology is designed to take the gamble out of product releases for both suppliers and receivers of the software. The ingredients of this procedure are a life cycle of executable product increments, representative statistical testing, and a standard estimate of the MTTF (Mean Time To Failure) of the product at the time of its release. A statistical approach to software product testing using randomly selected samples of test cases is considered. A statistical model is defined for the certification process which uses the timing data recorded during test. A reasonableness argument for this model is provided that uses previously published data on software product execution. Also included is a derivation of the certification model estimators and a comparison of the proposed least squares technique with the more commonly used maximum likelihood estimators.
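
    To make the certification idea concrete, the sketch below fits the commonly cited Cleanroom reliability-growth form MTTF_k = A * B**k to recorded inter-failure times by least squares on a log scale, and contrasts it with the plain exponential maximum-likelihood estimate (the sample mean). The functional form and the sample data are assumptions for illustration, not the paper's exact derivation.

        # Hedged sketch: estimate MTTF from inter-failure times recorded during
        # statistical testing of randomly selected test cases.
        import numpy as np

        def certify(interfail_times):
            t = np.asarray(interfail_times, dtype=float)
            k = np.arange(1, len(t) + 1)
            # Least squares on log(t_k) = log(A) + k * log(B)
            logB, logA = np.polyfit(k, np.log(t), 1)
            A, B = np.exp(logA), np.exp(logB)
            mttf_projected = A * B ** (len(t) + 1)   # MTTF projected after the next fix
            mttf_mle = t.mean()                      # naive exponential MLE, for contrast
            return mttf_projected, mttf_mle

        # Hypothetical hours between observed failures during statistical testing.
        print(certify([2.0, 5.0, 9.0, 20.0, 41.0]))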

  10. Enrichment with Wood Blocks Does Not Affect Toxicity Assessment in an Exploratory Toxicology Model Using Sprague–Dawley Rats

    PubMed Central

    Ditewig, Amy C; Bratcher, Natalie A; Davila, Donna R; Dayton, Brian D; Ebert, Paige; Lesuisse, Philippe; Liguori, Michael J; Wetter, Jill M; Yang, Hyuna; Buck, Wayne R

    2014-01-01

    Environmental enrichment in rodents may improve animal well-being but can affect neurologic development, immune system function, and aging. We tested the hypothesis that wood block enrichment affects the interpretation of traditional and transcriptomic endpoints in an exploratory toxicology testing model using a well-characterized reference compound, cyclophosphamide. ANOVA was performed to distinguish effects of wood block enrichment separate from effects of 40 mg/kg cyclophosphamide treatment. Biologically relevant and statistically significant effects of wood block enrichment occurred only for body weight gain. ANOVA demonstrated the expected effects of cyclophosphamide on food consumption, spleen weight, and hematology. According to transcriptomic endpoints, cyclophosphamide induced fewer changes in gene expression in liver than in spleen. Splenic transcriptomic pathways affected by cyclophosphamide included: iron homeostasis; vascular tissue angiotensin system; hepatic stellate cell activation and fibrosis; complement activation; TGFβ-induced hypertrophy and fibrosis; monocytes, macrophages, and atherosclerosis; and platelet activation. Changes in these pathways due to cyclophosphamide treatment were consistent with bone marrow toxicity regardless of enrichment. In a second study, neither enrichment nor type of cage flooring altered body weight or food consumption over a 28-d period after the first week. In conclusion, wood block enrichment did not interfere with a typical exploratory toxicology study; the effects of ingested wood on drug level kinetics may require further consideration. PMID:24827566

  11. Development of a flight software testing methodology

    NASA Technical Reports Server (NTRS)

    Mccluskey, E. J.; Andrews, D. M.

    1985-01-01

    The research to develop a testing methodology for flight software is described. An experiment was conducted in using assertions to dynamically test digital flight control software. The experiment showed that 87% of typical errors introduced into the program would be detected by assertions. Detailed analysis of the test data showed that the number of assertions needed to detect those errors could be reduced to a minimal set. The analysis also revealed that the most effective assertions tested program parameters that provided greater indirect (collateral) testing of other parameters. In addition, a prototype watchdog task system was built to evaluate the effectiveness of executing assertions in parallel by using the multitasking features of Ada.
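
    A minimal sketch of the executable-assertion idea follows; the parameter names, limits, and the control-law fragment are hypothetical stand-ins for the flight control software and watchdog task described above.

        # Sketch: executable assertions wrapped around a control-law update.
        def assert_in_range(name, value, lo, hi, log):
            # Record a violation instead of aborting, as a watchdog task might.
            if not (lo <= value <= hi):
                log.append(f"ASSERTION FAILED: {name}={value} outside [{lo}, {hi}]")

        def control_step(pitch_cmd, airspeed, elevator_gain, log):
            # Assertions on inputs give indirect (collateral) coverage of the
            # upstream code that produced them.
            assert_in_range("pitch_cmd", pitch_cmd, -15.0, 15.0, log)
            assert_in_range("airspeed", airspeed, 60.0, 400.0, log)
            elevator = elevator_gain * pitch_cmd
            # An assertion on the output checks the computation itself.
            assert_in_range("elevator", elevator, -25.0, 25.0, log)
            return elevator

        log = []
        control_step(pitch_cmd=30.0, airspeed=120.0, elevator_gain=1.2, log=log)
        print(log)   # two violations captured instead of silently propagating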

  12. Effectiveness of back-to-back testing

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.; Eckhardt, David E.; Caglayan, Alper; Kelly, John P. J.

    1987-01-01

    Three models of back-to-back testing processes are described. Two models treat the case where there is no intercomponent failure dependence. The third model describes the more realistic case where the failure probabilities of the functionally equivalent components are correlated. The theory indicates that back-to-back testing can, under the right conditions, provide a considerable gain in software reliability. The models are used to analyze the data obtained in a fault-tolerant software experiment. It is shown that the expected gain is indeed achieved, and exceeded, provided the intercomponent failure dependence is sufficiently small. However, even with relatively high correlation, the use of several functionally equivalent components coupled with back-to-back testing may provide a considerable reliability gain. The implication of this finding is that multiversion software development is a feasible and cost-effective approach to providing highly reliable software components intended for fault-tolerant software systems, on condition that special attention is directed at early detection and elimination of correlated faults.
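
    The sketch below shows the basic back-to-back mechanism, assuming two toy functionally equivalent implementations and a simple tolerance for comparing their outputs; it illustrates only the comparison step, not the reliability models analyzed in the paper.

        # Sketch of back-to-back testing: run functionally equivalent versions on
        # the same randomly generated inputs and flag any disagreement.
        import random

        def version_a(x):
            return x * x

        def version_b(x):
            return x ** 2          # independently written, equivalent by design

        def back_to_back(versions, n_cases=1000, seed=0):
            rng = random.Random(seed)
            disagreements = []
            for _ in range(n_cases):
                x = rng.uniform(-1e3, 1e3)
                outputs = [v(x) for v in versions]
                tol = 1e-9 * max(1.0, abs(outputs[0]))
                if max(outputs) - min(outputs) > tol:
                    disagreements.append((x, outputs))
            return disagreements

        print(len(back_to_back([version_a, version_b])))   # 0 when the versions agree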

  13. Virtual rough samples to test 3D nanometer-scale scanning electron microscopy stereo photogrammetry.

    PubMed

    Villarrubia, J S; Tondare, V N; Vladár, A E

    2016-01-01

    The combination of scanning electron microscopy for high spatial resolution, images from multiple angles to provide 3D information, and commercially available stereo photogrammetry software for 3D reconstruction offers promise for nanometer-scale dimensional metrology in 3D. A method is described to test 3D photogrammetry software by the use of virtual samples: mathematical samples from which simulated images are made for use as inputs to the software under test. The virtual sample is constructed by wrapping a rough skin with any desired power spectral density around a smooth near-trapezoidal line with rounded top corners. Reconstruction is performed with images simulated from different angular viewpoints. The software's reconstructed 3D model is then compared to the known geometry of the virtual sample. Three commercial photogrammetry software packages were tested. Two of them produced results for line height and width that were within approximately 1 nm of the correct values. All of the packages exhibited some difficulty in reconstructing details of the surface roughness.
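
    A hedged sketch of the virtual-sample construction is given below: it synthesizes a zero-mean rough skin whose roughness follows a prescribed power spectral density by assigning random phases in the Fourier domain, and adds it to a smooth trapezoid-like line profile. The PSD form, dimensions, and sampling are illustrative assumptions rather than the authors' parameters.

        # Sketch: 1-D rough profile with a prescribed power spectral density,
        # added to a smooth trapezoidal line (heights in nm, positions in nm).
        import numpy as np

        def rough_skin(n, dx, psd, seed=0):
            rng = np.random.default_rng(seed)
            freqs = np.fft.rfftfreq(n, d=dx)
            amplitude = np.sqrt(psd(freqs))            # amplitude from the target PSD
            phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
            spectrum = amplitude * np.exp(1j * phases)
            spectrum[0] = 0.0                          # zero-mean roughness
            return np.fft.irfft(spectrum, n=n)

        n, dx = 1024, 1.0
        x = np.arange(n) * dx
        # Smooth line: 40 nm tall, sloped sidewalls, flat top centered near x = 512 nm.
        line = np.clip((200.0 - np.abs(x - 512.0)) / 4.0, 0.0, 40.0)
        skin = rough_skin(n, dx, psd=lambda f: 1.0 / (1.0 + (f / 0.05) ** 2))
        virtual_profile = line + skin                  # input for simulated SEM images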

  14. Studies of extraction, storage, and testing of pine pollen

    Treesearch

    J. W. Duffield

    1953-01-01

    This report assembles the results of a number of small exploratory studies on the extraction, storage, and viability testing of pollen of several species of pines. These studies indicate clearly the need for more knowledge of the physiology of pollen — particularly of the relation between atmospheric humidity at the time of pollen shedding and the subsequent reactions...

  15. First-Grade Spelling Scores within the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) Screening: An Exploratory Study

    ERIC Educational Resources Information Center

    Munger, Kristen A.; Murray, Maria S.

    2017-01-01

    The purpose of this study was to examine the validity evidence of first-grade spelling scores from a standardized test of nonsense word spellings and their potential value within universal literacy screening. Spelling scores from the Test of Phonological Awareness: Second Edition PLUS for 47 first-grade children were scored using a standardized…

  16. Empirical Testing of a Theoretical Extension of the Technology Acceptance Model: An Exploratory Study of Educational Wikis

    ERIC Educational Resources Information Center

    Liu, Xun

    2010-01-01

    This study extended the technology acceptance model and empirically tested the new model with wikis, a new type of educational technology. Based on social cognitive theory and the theory of planned behavior, three new variables, wiki self-efficacy, online posting anxiety, and perceived behavioral control, were added to the original technology…

  17. PRECONSTRUCTION IMAGE OF THE MTR SITE. ABANDONED IRRIGATION CANAL (FROM ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    PRE-CONSTRUCTION IMAGE OF THE MTR SITE. ABANDONED IRRIGATION CANAL (FROM EARLY 1900s) ILLUSTRATES FLATNESS OF MTR/TRA TERRAIN. FEATURE ON HORIZON IN LEFT OF VIEW IS EXPLORATORY WATER DRILLING EQUIPMENT. CAMERA LOOKS SOUTHEAST. INL NEGATIVE NO. 136. Unknown Photographer, 12/5/1949 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  18. Exploratory Analyses To Improve Model Fit: Errors Due to Misspecification and a Strategy To Reduce Their Occurrence.

    ERIC Educational Resources Information Center

    Green, Samuel B.; Thompson, Marilyn S.; Poirier, Jennifer

    1999-01-01

    The use of Lagrange multiplier (LM) tests in specification searches and the errors that result from the addition of extraneous parameters to models are discussed. Presented are a rationale and strategy for conducting specification searches in two stages that involve adding parameters suggested by LM tests to maximize fit and then deleting parameters not needed…

  19. Psychomotor and Perceptual Speed Abilities and Skilled Performance.

    DTIC Science & Technology

    1999-02-01

    of the perceptual speed and touch-panel psychomotor tests used in the current project were administered to School of Dentistry students. Although...progress). Touch-panel monitor based psychomotor tests for predicting skilled performance: An exploratory study with School of Dentistry students...Paper to be submitted for presentation at the 1999 Human Factors and Ergonomics Society annual meeting. Ackerman, P. L., & Kanfer, R. (1993

  20. Linking Course-Embedded Assessment Measures and Performance on the Educational Testing Service Major Field Test in Business

    ERIC Educational Resources Information Center

    Barboza, Gustavo A.; Pesek, James

    2012-01-01

    Assessment of the business curriculum and its learning goals and objectives has become a major field of interest for business schools. The exploratory results of the authors' model using a sample of 173 students show robust support for the hypothesis that high marks in course-embedded assessment on business-specific analytical skills positively…
